\section{Introduction and summary \label{Intro}}
Gauge theory on the fuzzy sphere has been of interest for many years
as the simplest example of a noncommutative gauge theory with finitely
many degrees of freedom which retains all of the classical symmetries
of the corresponding undeformed field theory
(see for
instance~\cite{Madore:1991bw,Grosse:1995ar,Klimcik:1997mg,Carow-Watamura:1998jn,Baez:1998he,Grosse:2001qt,Grosse:2001ss,Presnajder:2003ak,matrixsphere,
Castro-Villarreal:2004vh,Ydri:2006xw,Aschieri:2006uw}
and references therein). It can be formulated as an $N\times N$ matrix
model, which provides a natural regularization preserving all
symmetries of quantum gauge theory on the classical sphere which is
recovered in the large $N$ limit. At the classical level one finds
non-trivial gauge field configurations such as monopoles which can be
naturally described in terms of the noncommutative topology of
projective modules.
Besides Yang-Mills gauge theory which is the focus of this paper,
certain other gauge theories on the fuzzy sphere naturally emerge
in string theory upon
quantizing the worldvolume dynamics of spherical
D2-branes~\cite{Alekseev:2000fd}, obtained for instance as expansions
about vacua of matrix models with a Chern-Simons
term~\cite{Iso:2001mg,Azuma:2004zq} describing superstrings in pp-wave
backgrounds~\cite{Berenstein:2002jq}. These models contain additional
scalar degrees of freedom and are not considered here.
The formulation of Yang-Mills theory as an $N\times N$ matrix
model allows a nonperturbative quantization in terms of a
finite-dimensional path integral~\cite{matrixsphere}. This can then be
evaluated in terms of an $N$-dimensional integral, and
the classical result as a sum over two-dimensional
instantons~\cite{witten1,Minahan:1993tp,Gross:1994mr}
is recovered in the commutative limit $N \to \infty$.
A different approach to evaluate the path integral was
given in~\cite{Ydri:2006xw}, which is also restricted to the large
$N$ limit. These exact results indicate in particular that
the model is free of the usual perturbative ambiguities which plague
noncommutative gauge theories in higher dimensions, such as UV/IR mixing
(see~\cite{Douglas:2001ba,Szabo:2001kg} for reviews).
In this paper we will formulate a new model for quantum Yang-Mills
theory on the fuzzy sphere, and solve it exactly.
The model reduces to pure
Yang-Mills theory on the classical sphere when $N\to\infty$ without
any spurious auxiliary scalar fields. The classical theory admits
topologically non-trivial solutions as in previous matrix model
formulations~\cite{matrixsphere},
including some purely noncommutative ones.
Its main virtue is that the finite-dimensional
configuration space of gauge fields can be described as a compact
coadjoint orbit, which is naturally a symplectic manifold with a
hamiltonian action of a nonabelian Lie symmetry group. The Yang-Mills
action is the square of the corresponding moment map, and therefore
our model can be solved exactly using nonabelian localization
techniques~\cite{witten1,JK1,Paradan1,JKKW1,Woodward:2004xz,witten} to
cast the partition function as a sum over local
contributions from the classical solutions of the gauge
theory. It can also be solved by abelian localization
techniques which exploit the usual Duistermaat-Heckman theorem
(see~\cite{Blau:1995rs,szaboloc} for extensive treatments) and which
provide an interesting alternative to the semiclassical expansion.
Although the model described in this paper is fundamentally
different from the fuzzy gauge theories that naturally emerge in
string theory, which contain a Chern-Simons term in their action,
nonabelian localization bears certain remarkable similarities to the
nonabelian localization of Chern-Simons theory on Seifert homology
spheres~\cite{witten}.
There are two main motivations behind the present work. Firstly, in
the commutative case, two-dimensional gauge theories are exactly
solvable, either at strong coupling by exploiting
the Migdal formula~\cite{Migdal:1975zg,Rusakov:1990rs}, which expresses
the partition function as a sum over irreducible representations of the gauge
group, or at weak coupling by using Poisson resummation techniques to
cast it as a sum over two-dimensional
instantons~\cite{witten1,Minahan:1993tp,Gross:1994mr}.
One would therefore
like to have a similar picture in the noncommutative case.
The instanton
expansion can be readily generalized to provide the exact solution for
gauge theory on a two-dimensional noncommutative
torus~\cite{szabo,szaborev1}. However, in previous formulations of
gauge theory on the fuzzy sphere this is not possible,
either because
extra scalar degrees of freedom not normally present in commutative
Yang-Mills theory destroy the topological nature of the gauge theory
and hence its exact solvability, or else because the exact solution
does not decompose neatly into isolated contributions from classical
solutions.
Our model fills this gap, providing a gauge theory on the
fuzzy sphere whose exact solution is on a unified footing with
that of gauge theory on the noncommutative torus, in the same way that all
two-dimensional gauge theories admit universal solutions. This is even
apparent from the strong coupling expansions of the two noncommutative
gauge theories~\cite{szaborev1,Paniak:2003gn}, which exhibit the same
degrees of complexity.
However, the precise implementation of the
nonabelian localization principle is rather different in the two
cases. In the case of the torus, one starts from a rational
noncommutative gauge theory and exploits Morita equivalence with
commutative gauge theory to extract the exact instanton expansion, and
then uses continuity arguments to extend the expansion to generic
values of the noncommutativity parameter. On the fuzzy sphere, Morita
equivalence is not available in this manner, and we will have to
evaluate the quantum fluctuation integrals required in the
semiclassical expansion explicitly. This entails a significantly
larger amount of analysis and work than in the case of the torus.
Secondly, our formulation of gauge theory on the fuzzy sphere provides
a new finite-dimensional model which can be solved explicitly by nonabelian
localization techniques. In particular, we draw heavily on techniques
developed recently in~\cite{witten} to analyse higher critical points
in ordinary two-dimensional Yang-Mills theory. In our case, the
analysis is intrinsically finite-dimensional and in accord with rigorous
results established in~\cite{Paradan1,Woodward:2004xz}. The techniques
we exploit in this paper involve a beautiful mix of methods from
random matrix theory and (both abelian and nonabelian)
localization. In particular, we will throughout compare with some
analogous results obtained directly from random matrix theory
in~\cite{matrixsphere}. Our approach thereby extends the toolkit of
methods which can be generally used to treat gauge theories on fuzzy
spaces.
The outline of this paper is as follows. In Section~\ref{SymplModel}
we introduce our new symplectic model for gauge theory on the fuzzy
sphere, showing that it reduces to pure Yang-Mills theory on the
classical sphere in the large $N$ limit. We also describe in detail
the standard construction of the symplectic structure on the coadjoint
orbit space of gauge fields. In Section~\ref{ClassSols} we classify all
classical solutions of the gauge theory, finding fuzzy versions of the
usual instantons and monopoles as well as hosts of purely noncommutative
solutions such as fluxons~\cite{Gross:2000ss}. We then give a detailed
description of the local geometry of the configuration space near each
Yang-Mills critical point. In Section~\ref{NonabLoc} we review some general
aspects of nonabelian localization, and apply it to compute precisely
the contributions to the path integral from the vacuum and also higher
unstable critical points, showing in each case that the standard
instanton contributions on the sphere are recovered at
$N\to\infty$. In Sections~\ref{Abelianization}, \ref{IZ-loc}, and
\ref{Abel-loc} we give an alternative
description of the exact path integral in terms of abelian
localization, which exploits the fact that the configuration space is
a hermitian symmetric space to express the gauge field degrees of
freedom in a suitable system of coordinates~\cite{helgason1}. These
coordinates have been previously used to evaluate integrals arising in
random matrix theory in~\cite{casmag1,szabo1}. Finally, in
Section~\ref{AbLoc} we compare the abelian and nonabelian localization
approaches, indicating how to map between the Yang-Mills critical
points and those of the abelianized localization. This is similar to
the abelianized localization at higher critical points of ordinary
Yang-Mills theory studied in~\cite{Blau:1995rs}, although in the fuzzy
case the mapping is not one-to-one and is thus far more intricate.
\bigskip
\section{Symplectic model for Yang-Mills theory on the fuzzy
sphere\label{SymplModel}}
In this section we will introduce our new symplectic model for gauge
theory on the fuzzy sphere. A similar formulation was given for gauge
theory on fuzzy ${\mathbb{C}} P^2$ in~\cite{CP2paper}. This formulation will be
particularly suitable for the approach that we take later on to
computing the path integral using localization techniques.
\subsection{The fuzzy sphere\label{FuzzySphere}}
Let $N\in{\mathbb{N}}$, and let $\xi_i$, $i=1,2,3$ be the $N\times N$ hermitian
coordinate generators of the fuzzy sphere $S^2_N\cong{\rm Mat}_N$
which satisfy the relations
\begin{equation} \label{FS-l}
\epsilon^{ij}{}_{k}\, \xi_i \,\xi_j = {\,{\rm i}\,} \xi_k \qquad
\mbox{and}\qquad \xi_i\, \xi^i = \mbox{$\frac14$}\,\left(N^2-1
\right)~\mbox{1 \kern-.59em {\rm l}}_N
\end{equation}
where throughout repeated upper and lower indices are implicitly
summed over. The deformation parameter is $\frac1N$ and $S_N^2$
becomes the algebra of functions on the classical unit sphere $S^2$
in the limit $N\to\infty$.
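As a purely illustrative aside (not part of the formal development), the relations \eq{FS-l} can be checked numerically by realizing the $\xi_i$ as the standard angular momentum matrices of spin $s=(N-1)/2$; the following Python sketch does so for $N=4$:

```python
import numpy as np

def fuzzy_sphere_coords(N):
    """xi_i: su(2) generators in the N-dimensional irrep, i.e. spin s = (N-1)/2."""
    s = (N - 1) / 2.0
    m = np.arange(s, -s - 1, -1)                       # weights s, s-1, ..., -s
    # <s,m+1| J_+ |s,m> = sqrt(s(s+1) - m(m+1)) on the superdiagonal
    Jp = np.diag(np.sqrt(s * (s + 1) - m[1:] * (m[1:] + 1)), 1).astype(complex)
    Jx = (Jp + Jp.conj().T) / 2
    Jy = (Jp - Jp.conj().T) / 2j
    Jz = np.diag(m).astype(complex)
    return [Jx, Jy, Jz]

N = 4
xi = fuzzy_sphere_coords(N)

eps = np.zeros((3, 3, 3))                              # Levi-Civita symbol
eps[0, 1, 2] = eps[1, 2, 0] = eps[2, 0, 1] = 1
eps[0, 2, 1] = eps[2, 1, 0] = eps[1, 0, 2] = -1

# epsilon^{ij}_k xi_i xi_j = i xi_k
for k in range(3):
    lhs = sum(eps[i, j, k] * xi[i] @ xi[j] for i in range(3) for j in range(3))
    assert np.allclose(lhs, 1j * xi[k])

# xi_i xi^i = (N^2 - 1)/4 times the identity
assert np.allclose(sum(x @ x for x in xi), (N**2 - 1) / 4 * np.eye(N))
```

Both assertions pass for any $N\geq 2$, since $\epsilon^{ij}{}_k\,\xi_i\,\xi_j=\frac12\,\epsilon^{ij}{}_k\,[\xi_i,\xi_j]$ and the quadratic Casimir of the spin-$s$ representation is $s(s+1)=\frac{N^2-1}4$.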
The quantum space $S_N^2$ preserves the
classical invariance under global rotations as follows.
The $\xi_i$ generate an $N$-dimensional representation of
the global $SU(2)$ isometry group. Under the adjoint action of
$SU(2)$, this representation decomposes covariantly into
$p$-dimensional irreducible representations $(p)$ of $SU(2)$ as
\begin{equation}
{\rm Mat}_N\cong(1)\oplus(3)\oplus\cdots\oplus(2N-1) \ ,
\label{MatNSU2decomp}\end{equation}
which are interpreted as fuzzy spherical harmonics.
This decomposition defines a natural
map from $S^2_N$ to the space of functions
on the commutative sphere.
The integral of a function
$f\in S_N^2$ over the fuzzy sphere is given by the trace of $f$,
which coincides with the usual integral on $S^2$
\begin{equation}
\Tr(f) = \frac N{4\pi}\,\int_{S^2}\,{\rm d}\Omega~f \
\label{fuzzyint}\end{equation}
where the above map is understood.
Rotational invariance of the integral
then corresponds to invariance of the matrix trace under the
adjoint action of $SU(2)$.
Following~\cite{matrixsphere}, let us combine the generators $\xi_i$
into a larger hermitian $\mathcal{N}\times\mathcal{N}$ matrix
\begin{equation}
\Xi = \mbox{$\frac 12$}\, \mbox{1 \kern-.59em {\rm l}}_N\otimes\sigma^0 + \xi_i \otimes\sigma^i
\label{Xi-collective}
\end{equation}
where $\mathcal{N} = 2N$, $\sigma^0 = \mbox{1 \kern-.59em {\rm l}}_2$, while
\begin{equation}
\sigma^1=\begin{pmatrix}0&1\\1&0\end{pmatrix} \ , \quad
\sigma^2=\begin{pmatrix}0&-{\,{\rm i}\,}\\{\,{\rm i}\,}&0\end{pmatrix} \quad \mbox{and}
\quad \sigma^3=\begin{pmatrix}1&0\\0&-1\end{pmatrix}
\label{Pauli}\end{equation}
are the Pauli spin matrices obeying
\begin{equation}
\Tr\big(\sigma^i\big)=0 \qquad \mbox{and} \qquad
\sigma^i\,\sigma^j=\delta^{ij}~\mbox{1 \kern-.59em {\rm l}}_2+{\,{\rm i}\,}\epsilon^{ij}{}_k\,\sigma^k \ .
\label{Pauliids}\end{equation}
One easily finds from (\ref{FS-l}) and (\ref{Pauliids}) the identities
\begin{equation}
\Xi^2 = \mbox{$\frac{N^2}4$}~\mbox{1 \kern-.59em {\rm l}}_{\mathcal{N}} \qquad \mbox{and} \qquad
\Tr(\Xi) = N \ .
\end{equation}
Since $\xi_i\otimes\sigma^i$ is an intertwiner of the Clebsch-Gordan
decomposition $(N)\otimes(2)=(N-1)\oplus(N+1)$, this implies that
$\Xi$ has eigenvalues $\pm \,\frac N2$ with respective multiplicities
$N_\pm=N\pm1$.
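The spectral data of $\Xi$ are easy to confirm numerically. The Python sketch below (illustrative only; it uses the spin-$(N-1)/2$ realization of the $\xi_i$ and the standard Pauli matrices) verifies $\Xi^2=\frac{N^2}4~\mbox{1 \kern-.59em {\rm l}}_\mathcal{N}$, $\Tr(\Xi)=N$ and the multiplicities $N_\pm=N\pm1$ for $N=5$:

```python
import numpy as np

def su2_irrep(N):
    """su(2) generators in the N-dimensional irrep (spin s = (N-1)/2)."""
    s = (N - 1) / 2.0
    m = np.arange(s, -s - 1, -1)
    Jp = np.diag(np.sqrt(s * (s + 1) - m[1:] * (m[1:] + 1)), 1).astype(complex)
    return [(Jp + Jp.conj().T) / 2, (Jp - Jp.conj().T) / 2j,
            np.diag(m).astype(complex)]

sigma = [np.array([[0, 1], [1, 0]], complex),     # standard Pauli matrices
         np.array([[0, -1j], [1j, 0]], complex),
         np.array([[1, 0], [0, -1]], complex)]

N = 5
xi = su2_irrep(N)
# Xi = (1/2) 1_N x sigma^0 + xi_i x sigma^i  as a 2N x 2N matrix
Xi = 0.5 * np.eye(2 * N) + sum(np.kron(xi[i], sigma[i]) for i in range(3))

assert np.allclose(Xi @ Xi, (N**2 / 4) * np.eye(2 * N))    # Xi^2 = N^2/4
assert np.isclose(np.trace(Xi).real, N)                    # Tr(Xi) = N
ev = np.linalg.eigvalsh(Xi)
assert np.sum(np.isclose(ev, N / 2)) == N + 1              # multiplicity N_+
assert np.sum(np.isclose(ev, -N / 2)) == N - 1             # multiplicity N_-
```

The eigenvalue $\frac N2$ sector is the $(N+1)$-dimensional summand of $(N)\otimes(2)$, as stated above.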
\subsection{Configuration space of gauge fields\label{ConfSpace}}
We will now describe the gauge field degrees of freedom in our
formulation. To elucidate the construction in as transparent a way as
possible, we begin with the abelian case of $U(1)$ gauge theory. To
introduce $\mathfrak{u}(N)$ gauge fields $A_i$ on $S_N^2$, consider the
covariant coordinates~\cite{Madore:2000en}
\begin{equation}
C_i = \xi_i + A_i \qquad \mbox{and} \qquad C_0 =
\mbox{$\frac 12$}~\mbox{1 \kern-.59em {\rm l}}_N + A_0
\label{covcoordsdef}\end{equation}
which transform under the gauge group $G = U(N)$ as $C_\mu \mapsto
U^{-1}\, C_\mu\, U$ for $\mu=0,1,2,3$ and $U \in U(N)$. We can again
assemble them into a larger $\mathcal{N}\times\mathcal{N}$ matrix
\begin{equation}
C = C_\mu \otimes\sigma^\mu \ .
\label{C-collective}
\end{equation}
Generically, these would consist of four independent fields, and we
have to somehow reduce them to two tangential fields on $S_N^2$. There
are several ways to do this. For example, one can impose the
constraints $A_0 =0$ and $C_i\, C^i = \frac{N^2-1}4~\mbox{1 \kern-.59em {\rm l}}_\mathcal{N}$ as
in~\cite{matrixsphere}, leading to a constrained hermitian multi-matrix
model describing quantum gauge theory on the fuzzy sphere which
recovers Yang-Mills theory on the classical sphere in the large $N$
limit.
Here we will use a different approach and impose the constraints
\begin{equation}
C^2 = \mbox{$\frac{N^2}4$}~\mbox{1 \kern-.59em {\rm l}}_{\mathcal{N}} \qquad \mbox{and} \qquad \Tr(C)
=N
\label{constraint}
\end{equation}
which is equivalent to requiring that $C$ has eigenvalues $\pm\,\frac
N2$ with multiplicities $N_\pm=N\pm1$. In terms of the components of
(\ref{C-collective}), this amounts to the constraints
\begin{equation}
C_i \,C^i + C_0^2=\mbox{$\frac{N^2}{4}$}~\mbox{1 \kern-.59em {\rm l}}_{\mathcal{N}} \qquad \mbox{and}
\qquad {\,{\rm i}\,}\epsilon_{i}{}^{jk}\,C_j \,C_k+ \{C_0,C_i\}= 0 \ .
\label{C2}
\end{equation}
We checked in Section~\ref{FuzzySphere} above that this is satisfied
for $A_\mu=0$, wherein $C=\Xi$. We can then consider the action of the
unitary group $U(2N)$ given by
\begin{equation}
C ~\longmapsto~ U^{-1}\, C \, U
\end{equation}
which generates a coadjoint orbit of $U(2N)$ and preserves
the constraint \eq{constraint}. The gauge fields $A_\mu$ are in this
way interpreted as fluctuations about the coordinates of the quantum
space $S_N^2$. The constraint (\ref{constraint}) ensures that the
covariant coordinates (\ref{C-collective}) describe a dynamical fuzzy
sphere. The gauge group $G=U(N)$ and the global isometry group $SU(2)$
of the sphere are subgroups of the larger symmetry group $U(2N)$.
In particular, the generators of the gauge group are given by
elements of
the form $\phi =\phi_0 \otimes\sigma^0 \, \in\, \mathfrak{g} := \mathfrak{u}(N)\subset\mathfrak{u}(\mathcal{N}\,)$, which defines the gauge
algebra $\mathfrak{g}$.
We thus claim that a possible {configuration space of gauge
fields} is given by the {\em single} coadjoint orbit
\begin{equation}
\cO := \cO(\Xi) = \big\{ C = U^{-1}\, \Xi\, U~\big|~ U \in U(\mathcal{N}\,)
\big\}
\label{orbit-2}
\end{equation}
where $\Xi\in \mathfrak{u}(2N)$ is given by (\ref{Xi-collective}). Explicitly,
dividing by the stabilizer of $\Xi$ gives a representation of the
orbit (\ref{orbit-2}) as the symmetric space $\cO\cong
U(2N)/U(N+1)\times U(N-1)$ of dimension $\dim(\cO)=2(N^2-1)$. A
similar construction was given in~\cite{CP2paper} for the case of ${\mathbb{C}} P^2$, and
applied to $S^2_N$ in a different way in~\cite{Ydri:2006xw}. To justify
this claim, we must check that the orbit $\cO$ captures the correct
number of degrees of freedom at least in the commutative limit
$N\to\infty$, i.e. that the gauge fields $A_i$ are essentially
tangent vector fields on $S_N^2$.
The tangent space to $\cO(\Xi)$ at a point $C$ is
isomorphic to $T_C\cO\cong\mathfrak{u}(\mathcal{N}\,)/\mathfrak{r}$, where
$\mathfrak{r}=\mathfrak{u}(N_+)\times\mathfrak{u}(N_-)$ is the stabilizer subalgebra of
$\Xi$. This identification is equivariant with respect to the natural
adjoint action of the Lie group $U(\mathcal{N}\,)$. Explicitly, tangent
vectors to $\cO(\Xi)$ at $C$ have the form\footnote{To streamline
notation, we will not write explicitly the local dependences of
fields and operators defined at points $C\in\cO$.}
\begin{equation}
V_\phi = {\,{\rm i}\,}[C,\phi]
\label{tangentvectors}
\end{equation}
for any hermitian element $\phi \in \mathfrak{u}(\mathcal{N}\,)/\mathfrak{r}$,\footnote{With
our conventions, the vector fields \eq{tangentvectors} are real.}
which are just the generators of the unitary group $U(\mathcal{N}\,)$ acting
on $\cO(\Xi)$ by the adjoint action. These actually describe vector
fields on the entire orbit space $\cO(\Xi)$. Here and in the following
we use the symbol $C$ to denote both elements of $\cO(\Xi)$, as well
as the matrix of overcomplete coordinate functions on $\cO(\Xi)$
defined using the embeddings
$\cO(\Xi)\hookrightarrow\mathfrak{u}(\mathcal{N}\,)\hookrightarrow{\mathbb{C}}^{\mathcal{N}^2}$.
\subsubsection*{\it The map ${\cal J}$}
Following~\cite{CP2paper}, we can make the description of the tangent
space to $\cO$, spanned by the vectors $V_\phi$, more
explicit as follows. Consider for $C \in\cO$ the map
\begin{equation}
{\cal J}\,:\, \mathfrak{u}(\mathcal{N}\,)~ \longrightarrow~ \mathfrak{s}\mathfrak{u}(\mathcal{N}\,)
\end{equation}
defined by
\begin{equation}
{\cal J}(\phi) = \mbox{$\frac1N$}\,V_\phi=
\mbox{$\frac{{\,{\rm i}\,}}N$}\, \big[C\,,\,\phi\big] \ .
\label{calJmapdef}\end{equation}
Using \eq{constraint} one finds that it satisfies
\begin{equation}
{\cal J}^3 = -{\cal J}
\label{cJprojprop}\end{equation}
and hence $-{\cal J}\,^2$ is a projector (onto the image of ${\cal J}\,$).
Moreover, the map ${\cal J}$ is
an antihermitian operator with respect to the invariant Cartan-Killing
inner product $\Tr(\phi \, \psi)$ on $\mathfrak{u}(\mathcal{N}\,)$, since
\begin{equation}
\Tr\big(\phi\, {\cal J}( \psi)\big) = \mbox{$\frac{{\,{\rm i}\,}}{N}$}\,
\Tr\big(\phi \,[C,\psi]\big)
= - \mbox{$\frac{{\,{\rm i}\,}}{N}$}\,\Tr\big([C,\phi]\,\psi\big) = -
\Tr\big({\cal J}(\phi)\, \psi\big) \ .
\label{cJantiherm}\end{equation}
The map ${\cal J}$ will play an instrumental role in this paper and its
geometrical properties will be studied in more detail in the next
section.
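The projector property \eq{cJprojprop} and the dimension count $\dim(\cO)=2\,(N^2-1)$ can be spot-checked numerically at the point $C=\Xi$ by representing ${\cal J}$ as a matrix acting on row-major-vectorized $\mathcal{N}\times\mathcal{N}$ matrices (the vectorization convention is a choice made purely for this illustrative check):

```python
import numpy as np

def su2_irrep(N):
    s = (N - 1) / 2.0
    m = np.arange(s, -s - 1, -1)
    Jp = np.diag(np.sqrt(s * (s + 1) - m[1:] * (m[1:] + 1)), 1).astype(complex)
    return [(Jp + Jp.conj().T) / 2, (Jp - Jp.conj().T) / 2j,
            np.diag(m).astype(complex)]

sigma = [np.array([[0, 1], [1, 0]], complex),
         np.array([[0, -1j], [1j, 0]], complex),
         np.array([[1, 0], [0, -1]], complex)]

N = 3
xi = su2_irrep(N)
C = 0.5 * np.eye(2 * N) + sum(np.kron(xi[i], sigma[i]) for i in range(3))  # C = Xi

NN = 2 * N
I = np.eye(NN)
# superoperator of phi -> (i/N)[C, phi] on vec(phi), row-major convention:
# vec(A X B) = (A kron B^T) vec(X)
J = (1j / N) * (np.kron(C, I) - np.kron(I, C.T))

assert np.allclose(J @ J @ J, -J)                    # J^3 = -J
assert np.linalg.matrix_rank(J) == 2 * (N**2 - 1)    # rank(J) = dim O(Xi)
```

The rank statement follows because $\ker{\cal J}$ is exactly the stabilizer subalgebra $\mathfrak{r}=\mathfrak{u}(N_+)\times\mathfrak{u}(N_-)$, so the image of ${\cal J}$ has the dimension of the orbit.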
Here we simply note the meaning of ${\cal J}$ in the commutative limit
$N\to\infty$. In component form with $\phi = \phi_\mu
\otimes\sigma^\mu$, it acts as\footnote{Throughout,
the notation $\approx$ will always mean an equality which is valid
in the large $N$ commutative limit.}
\begin{eqnarray}
{\cal J}(\phi)&\approx& -\mbox{$\frac{{\,{\rm i}\,}}N$}\,
\big[\phi_\mu \otimes\sigma^\mu\,,\,C_j\otimes\sigma^j\big]
\nonumber\\[4pt] &\approx&
-\mbox{$\frac{{\,{\rm i}\,}}N$}\, \big[\phi_\mu \,,\,C_j\big]
\otimes\sigma^\mu\,
\sigma^j + \mbox{$\frac{{\,{\rm i}\,}}N$}\,\phi_\mu\,
C_j\otimes\big[\sigma^\mu\,,\,\sigma^j\big]
\label{calJcomponent}\end{eqnarray}
where we have set $C_0 \approx \frac 12~\mbox{1 \kern-.59em {\rm l}}_N$ in the large $N$ limit
as will be justified below. Thus at large $N$ this reduces to
\begin{equation}
{\cal J}(\phi)\approx O\big(\mbox{$\frac1N$}\big) -
\epsilon^{ij}{}_{k}\,\phi_i\,x_j \otimes \sigma^k
\end{equation}
for ``almost'' commutative functions describing the gauge field
fluctuations $A_\mu$. Here $\xi_i\approx\frac N2\,x_i$ define homogeneous
coordinates $x_i$ on the sphere. This result means that if we interpret
$\phi_i$ as a three-component vector field on the fuzzy sphere,
including radial components, then the operator ${\cal J}$ vanishes
on the normal component and essentially coincides with the complex
structure for tangential fields on the K\"ahler manifold $S^2$. In
particular, the image of ${\cal J}$, i.e. the space of tangent vectors
\eq{tangentvectors} to $\cO(\Xi)$ or
small variations of the gauge field, indeed admits two independent
field degrees of freedom. This implies that the orbit \eq{orbit-2}
describes two tangent vector fields on $S^2_N$. Hence the tangent
space to $\cO$ can be interpreted precisely as the space of tangent
vector fields on the fuzzy sphere.
This nicely reflects the affine nature of the space of gauge fields.
\subsubsection*{\it Nonabelian gauge theory}
The generalization to nonabelian $U(n)$ gauge theory is very
simple. One now takes
\begin{equation}
\mathcal{N} = 2 n \,N
\label{cN2nN}\end{equation}
and enlarges the matrix \eq{Xi-collective} to $\Xi\otimes\mbox{1 \kern-.59em {\rm l}}_n$
(which we continue to denote as $\Xi$ for ease of notation). The
configuration space is given by the $U(\mathcal{N}\,)$ orbit (\ref{orbit-2})
with $C^2 = \frac{N^2}4~\mbox{1 \kern-.59em {\rm l}}_\mathcal{N}$ and
\begin{equation}
\Tr(C) = n\,N \ .
\label{UnTrconstr}\end{equation}
Then $C$ has eigenvalues $\pm \,\frac N2$ of respective multiplicities
$n\,(N\pm1)$. The configuration space
\begin{equation}
\cO = U(2n\,N)/U(n\,N_+)\times U(n\,N_-)
\label{nonaborbit}\end{equation}
describes $\mathfrak{u}(n)$-valued gauge fields on $S^2_N$. Its dimension is
given by
\begin{equation}
\dim(\cO) = 2 n^2\,\left(N^2-1\right) \ .
\label{dim-nonabelian}
\end{equation}
The gauge group
is now given by $G = U(nN)$, and acts on the covariant coordinates
$C_i = \xi_i\otimes \mbox{1 \kern-.59em {\rm l}}_n + A_i,
\,\, C_0 = \frac 12\, \mbox{1 \kern-.59em {\rm l}}_{nN} + A_0$
as $C_\mu \to U^{-1} C_\mu U$. This leads to the expected
transformation law for the $\mathfrak{u}(n)$-valued gauge fields $A_i$.
The corresponding gauge algebra is now
$\mathfrak{g} := \mathfrak{u}(n N)\subset\mathfrak{u}(\mathcal{N}\,)$,
consisting of elements of
the form $\phi =\phi_0 \otimes\sigma^0 \, \in\, \mathfrak{g}$.
\subsection{The Yang-Mills action\label{YMAction}}
Consider the action
\begin{equation}
S=S(C) := \mbox{$\frac{N}g$}\,\Tr\big(C_0-\mbox{$\frac
12$}~\mbox{1 \kern-.59em {\rm l}}_{n\,N}\big)^2
\label{YM-action}
\end{equation}
for $C \in \cO$, which is invariant under the group of gauge
transformations $G$ as well as global $SU(2)$ rotations.
We claim that it reduces in the
commutative limit $N \to \infty$ to the
usual Yang-Mills action on the sphere $S^2$, and
can therefore be taken
as a definition of the Yang-Mills action on the fuzzy sphere
$S^2_N$. We establish this explicitly below in the abelian case
$n=1$, the extension to general $n$ being obvious.
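Before doing so, a simple numerical sanity check (again purely illustrative) is possible: generate a random point $C=U^{-1}\,\Xi\,U$ on the orbit, and verify that the constraint \eq{constraint} is preserved, that the vacuum $C=\Xi$ has vanishing action, and that \eq{YM-action} is invariant under gauge transformations $V=u\otimes\sigma^0$ with $u\in U(N)$:

```python
import numpy as np

def su2_irrep(N):
    s = (N - 1) / 2.0
    m = np.arange(s, -s - 1, -1)
    Jp = np.diag(np.sqrt(s * (s + 1) - m[1:] * (m[1:] + 1)), 1).astype(complex)
    return [(Jp + Jp.conj().T) / 2, (Jp - Jp.conj().T) / 2j,
            np.diag(m).astype(complex)]

sigma = [np.array([[0, 1], [1, 0]], complex),
         np.array([[0, -1j], [1j, 0]], complex),
         np.array([[1, 0], [0, -1]], complex)]

def haar_unitary(n, rng):
    """Haar-random unitary via QR of a complex Gaussian matrix."""
    z = rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n))
    q, r = np.linalg.qr(z)
    return q * (np.diag(r) / np.abs(np.diag(r)))

rng = np.random.default_rng(1)
N, g = 4, 1.0
xi = su2_irrep(N)
Xi = 0.5 * np.eye(2 * N) + sum(np.kron(xi[i], sigma[i]) for i in range(3))

def C0_of(C):
    # C_0 = (1/2) tr_2(C): partial trace over the 2-dim sigma factor
    return 0.5 * C.reshape(N, 2, N, 2).trace(axis1=1, axis2=3)

def action(C):
    M = C0_of(C) - 0.5 * np.eye(N)
    return (N / g) * np.trace(M @ M).real

U = haar_unitary(2 * N, rng)
C = U.conj().T @ Xi @ U                                     # generic orbit point
assert np.allclose(C @ C, (N**2 / 4) * np.eye(2 * N))       # constraint preserved
assert np.isclose(action(Xi), 0)                            # vacuum: S = 0

V = np.kron(haar_unitary(N, rng), np.eye(2))                # gauge transformation
assert np.isclose(action(V.conj().T @ C @ V), action(C))    # gauge invariance
```

Gauge invariance holds because conjugation by $u\otimes\sigma^0$ commutes with the partial trace that extracts $C_0$.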
Consider the three-component field strength~\cite{matrixsphere}
\begin{eqnarray}
F_i &:=& {\,{\rm i}\,}\epsilon_{i}{}^{jk}\,C_j\, C_k + C_i \nonumber\\[4pt]
&=& {\,{\rm i}\,}\epsilon_{i}{}^{jk}\,[\xi_j, A_k] +
{\,{\rm i}\,}\epsilon_{i}{}^{jk}A_j\, A_k + A_i \
\label{fieldstrength-3}
\end{eqnarray}
where $C_i = \xi_i + A_i$ as in \eq{covcoordsdef}.
To understand its significance, consider the ``north pole'' of $S_N^2$
where $\xi_3 \approx \frac N2 \,x_3 = \frac N2~\mbox{1 \kern-.59em {\rm l}}_N$ (with unit radius),
and one can replace the operators
\begin{equation}
{\,{\rm i}\,}\,{\rm ad}_{\xi_i} \;\;\longrightarrow \;\;
-\varepsilon_{i}{}^{j}\,\partial_j:=
-\varepsilon_{ij}\,\mbox{$\frac{\partial }{\partial x_{j}}$}
\end{equation}
in the commutative limit for $i,j = 1,2$. Hence upon identifying the
commutative gauge fields $A^{\rm cl}_i$ through
\begin{equation}
A^{\rm cl}_i = -\varepsilon_{i}{}^{j}\, A_j \ ,
\end{equation}
the ``radial'' component $F_3$ of the field strength
\eq{fieldstrength-3} reduces in the commutative limit to the standard
expression
\begin{equation}
F_3 \approx \partial_1 A_{2}^{\rm cl}-\partial_2 A_{1}^{\rm cl}
+ {\,{\rm i}\,} \big[A_1^{\rm cl}\,,\,A_2^{\rm cl}\big] \ .
\end{equation}
The constraint \eq{C2} now implies
\begin{eqnarray}
F_i + \big\{C_0-\mbox{$\frac 12$}~\mbox{1 \kern-.59em {\rm l}}_N\,,\,C_i\big\} =
F_i + \big\{A_0\,,\,C_i\big\} &=& 0 \ , \nonumber\\
\{\xi_i, A^i\} + A_0 + A_i\, A^i + A_0\, A_0 &=& 0 \ .
\label{constraint-F}
\end{eqnarray}
Since only configurations with $A_0 = O(\frac 1N)$
have finite action \eq{YM-action} and $\xi_3$ is of order $N$, this
implies that $A_3$, $F_1$ and $F_2$ are of order $\frac 1N$ at the north
pole, while $A_{1}$ and $A_2$ can be finite of order $1$. In
particular, only the radial component $F_3$ survives the $N \to
\infty$ limit, with
\begin{equation}
F_3 = -\{A_0,C_3\} \approx -N\, A_0 \ .
\label{rho-F}
\end{equation}
This analysis can be made global by considering the ``radial'' field
strength $F_r = x^i\, F_i$, which reduces to the usual field
strength scalar on $S^2$. The action \eq{YM-action} thus indeed
reduces to the usual Yang-Mills action in the commutative limit
with dimensionless gauge coupling $g$, giving
\begin{equation}
S \approx \frac 1{N\,g} \,\Tr (F_r)^2 \approx \frac 1{4\pi\, g}
\,\int_{S^2}\,{\rm d}\Omega~ (F_r)^2 \ .
\end{equation}
\subsection{Symplectic geometry of the configuration
space\label{SymplStruct}}
The standard Kirillov-Kostant construction makes the orbit space
(\ref{orbit-2}) into a symplectic manifold~\cite{BGVbook}. Given two
tangent vector fields
$V_\phi, V_\psi$ as above with
$\phi,\psi\in\mathfrak{u}(\mathcal{N}\,)$, the symplectic two-form
$\omega\in\Omega^2(\cO)$ is defined locally through its pairing with
the bivector $V_\phi\wedge V_\psi$ as
\begin{equation}
\langle\omega,V_\phi\wedge V_\psi\rangle = {\,{\rm i}\,}\Tr\big(C\,[\phi,\psi]
\big) \ .
\label{symplectic-form}
\end{equation}
Using trace manipulations it is easy to see that the kernel of this
pairing coincides with the stabilizer algebra $\mathfrak{r}$, and hence it is
nondegenerate on $\cO(\Xi)$.
We will derive below
an explicit form of $\omega$ \eq{omegaexpl}, which allows one
to verify directly the well-known fact that $\omega$ is closed,
\begin{equation}
{\rm d}\omega =0 \ .
\label{omegaclosed}\end{equation}
Thus $\omega$ indeed defines an invariant symplectic structure on
$\cO(\Xi)$.
The tangent vectors $V_\phi$ are hamiltonian vector fields, and we
claim that their generator is given by
\begin{equation}
H_\phi = \Tr(\phi\, C)
\label{Hphigen}\end{equation}
for $\phi\in\mathfrak{u}(\mathcal{N}\,)$. Indeed, then ${\rm d} H_\phi = \Tr(\phi~{\rm d} C)$,
and by using the dual evaluation
\begin{equation}
\langle {\rm d} C,V_\phi\rangle ={\,{\rm i}\,}[C,\phi]
\label{dualeval}\end{equation}
one has
\begin{eqnarray}
\langle {\rm d} H_\phi,V_\psi\rangle &=& {\,{\rm i}\,}\Tr\big(\phi\,
[C,\psi]\big)\nonumber\\[4pt]
&=& -{\,{\rm i}\,}\Tr\big(C\,[\phi,\psi]\big)\nonumber\\[4pt] &=&
-\langle\omega,V_\phi\wedge
V_\psi\rangle ~=~ -\langle \iota_{V_\phi} \omega,V_\psi\rangle
\end{eqnarray}
where $\iota_{V_\phi}$ denotes contraction with the vector field
$V_\phi$. Thus
\begin{equation}
{\rm d} H_\phi = -\iota_{V_\phi} \omega
\label{hamiltonian}
\end{equation}
as claimed. This means that the hamiltonian function (\ref{Hphigen})
defines a periodic flow generated by the action of a one-parameter
subgroup $C\mapsto {\rm e}^{{\,{\rm i}\,} t\,\phi}\,C~{\rm e}^{-{\,{\rm i}\,} t\,\phi}$,
$t\in{\mathbb{R}}$. The corresponding equivariant moment map
$\mu:\cO(\Xi)\to\mathfrak{u}(\mathcal{N}\,)^\vee$ is the inclusion map which has the
pairings
\begin{equation}
\big\langle\mu(C)\,,\,\phi\big\rangle = H_\phi \ ,
\label{momentmap}
\end{equation}
and it defines a representation of the Lie algebra $\mathfrak{u}(\mathcal{N}\,)$ through
the Poisson algebra corresponding to $\omega$.
For gauge transformations $\phi = \phi_0 \otimes\sigma^0$, the
moment map $\mu$ reduces to
\begin{equation}
\big\langle\mu(C)\,,\,\phi\big\rangle=2\Tr\big(\phi_0\,C_0\big)=
\Tr\big(\phi_0 \,(\mbox{1 \kern-.59em {\rm l}}_{n\,N} +2A_0)\big) \ .
\label{momentmapred}\end{equation}
In the commutative limit and for abelian gauge fields $n=1$, this
becomes
\begin{equation}
\big\langle\mu(C)\,,\,\phi\big\rangle\approx\Tr(\phi_0)-
\frac2N\,\Tr(\phi_0\, F_r) \approx -\frac1{2\pi}\,
\int_{S^2}\,{\rm d}\Omega~ \phi_0 \,F_r \
\label{momentmap-3}
\end{equation}
up to an irrelevant shift,
which is just the anticipated moment map for Yang-Mills theory on the
classical sphere~\cite{witten1}. Given the appropriate symplectic
structure and moment map on the gauge field configuration space $\cO$,
the nonabelian localization principle for two-dimensional Yang-Mills
theory can be applied for the action constructed as the square of the
moment map. This is precisely the Yang-Mills action on $S^2_N$ given
in~\eq{YM-action}. The constant term $\frac 12~\mbox{1 \kern-.59em {\rm l}}_{n\,N}$ is just the
first Chern number of a background gauge field configuration and is
of no significance for this discussion. This procedure
will be worked out in detail in Section~\ref{NonabLoc}.
\subsubsection*{\it More about the symplectic form}
For later use, we will now derive some properties of the symplectic
form introduced in (\ref{symplectic-form}). Consider the
${\,{\rm i}\,}\mathfrak{u}(\mathcal{N}\,)$-valued one-form on $\cO(\Xi)$ given by
\begin{equation}
\theta := C^{-1} ~{\rm d} C \ .
\label{thetaCMdef}\end{equation}
Given the constraints (\ref{constraint}) and using ${\rm d} C^2 =0$, this
can be rewritten as
\begin{equation}
\theta = \mbox{$\frac 4{N^2}$}\, C ~{\rm d} C = \mbox{$\frac 2{N^2}$}\,
[C,{\rm d} C] \ .
\label{thetaCrel1}\end{equation}
It obeys the constraints
\begin{equation}
{\rm d}\theta + \theta^2 =0 \qquad \mbox{and} \qquad \Tr(\theta) =0 \ .
\label{thetaconstrs}\end{equation}
Thus $\theta\in\Omega^1(\cO,{\,{\rm i}\,}\mathfrak{u}(\mathcal{N}\,))$ is essentially the
canonical invariant Maurer-Cartan one-form, with the additional
property
\begin{equation}
[C,\theta] = -2 {\cal J}^2 ({\rm d} C) = 2~{\rm d} C
\label{MCformprop1}\end{equation}
where we have used the fact that ${\rm d} C$ is tangent to the orbit
space and applied the projection property (\ref{cJprojprop}). In
particular, along with the fact that $C^2$ is constant, this implies
that
\begin{equation}
C\, \theta + \theta\, C =0 \ .
\label{MCformprop2}\end{equation}
Using again the constraint (\ref{constraint}), the symplectic two-form
\eq{symplectic-form} can be written as
\begin{equation}
\omega = -\mbox{$\frac {\,{\rm i}\,}{2 N^2}$}\,\Tr \big(C\,[{\rm d} C, {\rm d} C] \big)
= \mbox{$\frac \ii4$}\,\Tr \big(C\,\theta^2\big) \ .
\label{omegaexpl}
\end{equation}
To see this, we substitute this expression
using (\ref{cJantiherm}) and (\ref{cJprojprop}) into
\begin{eqnarray}
\langle\omega,V_\phi\wedge V_\psi\rangle &=&
- \mbox{$\frac{\,{\rm i}\,}{N^2}$} \,\Tr\big(C\,[\,[C,\phi]\,,\,[C,\psi]\,]
\big)\nonumber\\[4pt]
&=& {\,{\rm i}\,} \Tr\big(C\,[{\cal J}(\phi),{\cal J}(\psi)]\big) \nonumber\\[4pt]
&=& {\,{\rm i}\,} \Tr\big([C,{\cal J}(\phi)]\,{\cal J}(\psi)\big) \nonumber\\[4pt]
& =& -N\,\Tr\big({\cal J}^3(\phi)\, \psi\big) \nonumber\\[4pt]
&=& N\, \Tr\big({\cal J}(\phi)\, \psi
\big)\nonumber\\[4pt] & =&{\,{\rm i}\,} \Tr\big([C,\phi]\, \psi\big)
~=~ {\,{\rm i}\,}\Tr\big(C\,[\phi, \psi]\big)
\label{omegaexpl-2}
\end{eqnarray}
for any $\phi,\psi \in \mathfrak{u}(\mathcal{N}\,)$,
which coincides with the definition (\ref{symplectic-form}). Using
\eq{MCformprop1} and \eq{MCformprop2}, this
identity gives a simple proof of the closure property
(\ref{omegaclosed}) as
\begin{equation}
{\rm d}\omega = \mbox{$\frac \ii4$}\,\Tr \big({\rm d} C~\theta^2\big)
= -\mbox{$\frac {\,{\rm i}\,}{8}$}\,\Tr\big( [\theta,C]\,\theta^2\big)
= 0 \ .
\end{equation}
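As an aside, the chain of identities in \eq{omegaexpl-2} relies only on the constraint $C^2=\frac{N^2}4~\mbox{1 \kern-.59em {\rm l}}_\mathcal{N}$ together with cyclicity of the trace. The following numerical sketch (illustrative only, not part of the derivation; the $\pm\frac N2$ eigenvalue multiplicities $N\pm1$ below mirror the fuzzy sphere case but any split would do) verifies the resulting identity $-\frac1{N^2}\Tr(C[[C,\phi],[C,\psi]])=\Tr(C[\phi,\psi])$ for a random $C$ obeying the constraint:

```python
import numpy as np

rng = np.random.default_rng(0)
N = 6
dim = 2 * N

# Random C with C^2 = (N^2/4) identity: conjugate a +/- N/2 sign matrix
H = rng.normal(size=(dim, dim)) + 1j * rng.normal(size=(dim, dim))
U, _ = np.linalg.qr(H)                                  # random unitary
C = U @ np.diag([N / 2] * (N + 1) + [-N / 2] * (N - 1)) @ U.conj().T
assert np.allclose(C @ C, (N**2 / 4) * np.eye(dim))     # the constraint

def comm(a, b):
    return a @ b - b @ a

phi = rng.normal(size=(dim, dim)) + 1j * rng.normal(size=(dim, dim))
psi = rng.normal(size=(dim, dim)) + 1j * rng.normal(size=(dim, dim))

# -Tr(C [[C,phi],[C,psi]]) / N^2  ==  Tr(C [phi,psi])
lhs = -np.trace(C @ comm(comm(C, phi), comm(C, psi))) / N**2
rhs = np.trace(C @ comm(phi, psi))
assert np.allclose(lhs, rhs)
```

The check succeeds for arbitrary (not necessarily anti-hermitian) $\phi,\psi$, since only ${\rm ad}_C^3=N^2\,{\rm ad}_C$ enters.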
\bigskip
\section{The classical configuration space\label{ClassSols}}
In this section we will investigate in detail the space of classical
solutions of $U(n)$ gauge theory on the fuzzy sphere $S_N^2$ defined
by the action (\ref{YM-action}). Understanding this space will be
crucial for the exact solution of the quantum gauge theory, which as
we will see in the next section is given exactly by its semiclassical
expansion. We will first classify the solutions to the classical
equations of motion, over which the partition function will be
summed. Among these solutions we will find a variety of fluxons and,
as in the case of gauge theory on the noncommutative torus, only a
very small subset of all two-dimensional noncommutative instantons on
$S_N^2$ maps into the usual instantons of Yang-Mills theory on $S^2$ in
the commutative limit $N\to\infty$. We will then thoroughly describe
the local symplectic geometry of the configuration space $\cO$ near
each critical point of the Yang-Mills action, as symplectic integrals
over these neighbourhoods will produce the required quantum
fluctuation determinants in the semiclassical expansion.
\subsection{Classical solutions\label{CritPoints}}
The critical points of the Yang-Mills action \eq{YM-action} are easy
to find. Since the most general variation of a gauge field $C \in
\cO$ is given by $\delta C = [C,\phi]$, by varying \eq{YM-action} one
finds that the critical points satisfy
\begin{equation}
0 = \Tr\big(\delta C_0~ (C_0-\mbox{$\frac 12$}~\mbox{1 \kern-.59em {\rm l}}_{n\,N})\big) =
\Tr\big([C,\phi] \,C_0\big) = \Tr\big(\phi \,[C_0, C]\big)
\end{equation}
for arbitrary $\phi\in\mathfrak{u}(\mathcal{N}\,)/\mathfrak{r}$. They are therefore given
by solutions of the equation $[C_0,C]=0$, which agrees
with the known saddle-points in the matrix model formulation
of~\cite{matrixsphere}. This equation is equivalent to
\begin{equation}
[C_0,C_i]=0
\label{eom}\end{equation}
which together with \eq{C2} implies that
\begin{eqnarray}
[C_i, C_j] &=& {\,{\rm i}\,} \epsilon_{ij}{}^{k} \,(2 C_0)\; C_k \ , \nonumber\\[4pt]
C_0^2 &=& \mbox{$\frac{N^2}{4}$}~\mbox{1 \kern-.59em {\rm l}}_{n\,N} - C_i\,C^i \ .
\label{critical-YM}
\end{eqnarray}
For solutions with $C_0\neq 0$, we can use (\ref{eom}) to define
\begin{equation}
L_i=\frac1{2C_0}\,C_i
\label{Lidef}\end{equation}
and rewrite (\ref{critical-YM}) as
\begin{eqnarray}
[L_i, L_j] &=& {\,{\rm i}\,} \epsilon_{ij}{}^{k} \, L_k \ , \nonumber\\[4pt]
L_i\,L^i&=& \big(\mbox{$\frac{N^2}{4C_0^2} - \frac
14$}\big)~\mbox{1 \kern-.59em {\rm l}}_{n\,N} \ .
\label{di-su2}
\end{eqnarray}
These equations mean that the critical points of the Yang-Mills action
correspond to (isomorphism classes of) $(n\,N)\times(n\,N)$ unitary
representations of the isometry group $SU(2)$, i.e. homomorphisms
$\pi_{n\,N}:SU(2)\to U(n\,N)$. Up to isomorphism, for each integer $p\geq1$
there is a unique irreducible $SU(2)$ representation $(p)$ of dimension
$p$. Therefore, there is a one-to-one correspondence between classical
solutions and ordered partitions $(n_1,\dots,n_k)$ of the integer
$n\,N=n_1+\dots+n_k$, with $n_i$ the dimension of the $i$-th
irreducible subrepresentation in the representation $\pi_{n\,N}$
characterizing the given critical point. Each such classical solution
breaks the $U(n\,N)$ gauge symmetry locally to the centralizer
$\prod_i\,U(k_i)$ of the homomorphism $\pi_{n\,N}$,
where $k_i$ denotes the multiplicity of the blocks. They can be
seen~\cite{matrixsphere} to give precisely the usual two-dimensional
instantons for $U(n)$ Yang-Mills theory on $S^2$. These solutions also
agree with those that can be interpreted as configurations of
D0-branes inside D2-branes~\cite{Alekseev:2000fd}, although the ones
which survive in the large $N$ limit are different.
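As a consistency check of this representation-theoretic characterization, one can verify numerically that block-diagonal configurations built from $SU(2)$ irreducible representations satisfy \eq{eom} and \eq{critical-YM}. The following sketch (illustrative only; the partition $(5,3)$ for $n=2$, $N=4$ is an arbitrary choice) does so:

```python
import numpy as np

def su2_irrep(p):
    """Spin-j generators, j = (p - 1)/2, of the p-dimensional irrep."""
    j = (p - 1) / 2
    m = j - np.arange(p)
    Lp = np.diag(np.sqrt(j * (j + 1) - m[1:] * (m[1:] + 1)), k=1)
    return [(Lp + Lp.conj().T) / 2, (Lp - Lp.conj().T) / (2 * 1j), np.diag(m + 0j)]

def blockdiag(blocks):
    dim = sum(b.shape[0] for b in blocks)
    out = np.zeros((dim, dim), dtype=complex)
    i = 0
    for b in blocks:
        d = b.shape[0]
        out[i:i + d, i:i + d] = b
        i += d
    return out

N, n = 4, 2
ns, ss = [5, 3], [1, 1]       # sample partition of n*N = 8 with sum of signs = n

C0 = blockdiag([s * N / (2 * p) * np.eye(p) for p, s in zip(ns, ss)])
Ci = [blockdiag([s * (N / p) * su2_irrep(p)[i] for p, s in zip(ns, ss)])
      for i in range(3)]      # C_i = 2 C_0 L_i blockwise

def comm(a, b):
    return a @ b - b @ a

# equations of motion: [C_0, C_i] = 0
assert all(np.allclose(comm(C0, X), 0) for X in Ci)

# [C_i, C_j] = i eps_{ijk} (2 C_0) C_k   and   C_0^2 = (N^2/4) 1l - C_i C^i
for (i, j, k) in [(0, 1, 2), (1, 2, 0), (2, 0, 1)]:
    assert np.allclose(comm(Ci[i], Ci[j]), 1j * 2 * C0 @ Ci[k])
assert np.allclose(C0 @ C0, (N**2 / 4) * np.eye(n * N) - sum(X @ X for X in Ci))
```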
Therefore, each critical point is labelled (up to gauge
equivalence) by the set of dimensions $n_i$ of the irreducible
representations, supplemented with a sign $s_i$ which is defined by $s_i =
{\rm sgn}(C_0(n_i)) = \pm\, 1$ (in that representation) when $C_0(n_i)
\neq 0$ and $s_i =0$ if $C_0(n_i)=0$. We can thereby label the
{\it critical surfaces}, i.e. the connected components of the moduli
space of classical solutions in $\cO$, as
\begin{equation}
\cC_{(n_1,s_1),\dots,(n_k,s_k)} \qquad \mbox{with} \qquad
n_i \in {\mathbb{N}} \quad \mbox{and} \quad s_i \in \{\pm \,1,0\}
\end{equation}
with the constraints
\begin{equation}
1 \leq n_1 \leq n_2 \leq \cdots\leq n_k \ , \quad
\sum_{i=1}^k\, n_i = n\,N \quad \mbox{and} \quad
\sum_{i=1}^k\, s_i =n \ ,
\label{partitionconstrs}\end{equation}
and $s_i =0$ only if $n_i=1$. Any non-trivial irreducible
representation with $n_i>1$ and $C_0\neq 0$ gives a contribution $\pm N$
to the trace $\Tr(C)$, which must be balanced in order to satisfy the
eigenvalue multiplicity constraint (\ref{UnTrconstr}). This is the
role of the condition $\sum_i\, s_i =n$ in
(\ref{partitionconstrs}). Note that one can change the sign of any
individual irreducible representation.
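For small values of $n$ and $N$ the labels can be enumerated by brute force. The following sketch (illustrative only) lists all critical surfaces obeying \eq{partitionconstrs} together with the rule that $s_i=0$ only if $n_i=1$:

```python
from itertools import product

def partitions(total, smallest=1):
    """Non-decreasing partitions of a positive integer."""
    if total == 0:
        yield ()
        return
    for first in range(smallest, total + 1):
        for rest in partitions(total - first, first):
            yield (first,) + rest

def critical_surfaces(n, N):
    """All labels ((n_1,s_1),...,(n_k,s_k)) obeying the constraints."""
    found = set()
    for part in partitions(n * N):
        for signs in product((-1, 0, 1), repeat=len(part)):
            if sum(signs) != n:
                continue
            if any(s == 0 and p != 1 for p, s in zip(part, signs)):
                continue
            found.add(tuple(sorted(zip(part, signs))))
    return sorted(found)

surfaces = critical_surfaces(1, 3)
print(surfaces)
# the vacuum surface C_{(N,1)} is always among them
assert ((3, 1),) in surfaces
```

For $n=1$, $N=3$ this yields four surfaces: the vacuum $(3,1)$, one fluxon plus a trivial block, and the two sign multisets on three trivial blocks.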
The meaning of the blocks $(n_i,s_i)$ can be described as follows:
\begin{itemize}
\item \underline{$s_a = \pm\, 1~$:} ~ In this case $C_0 \neq 0$, and
hence $\|C_0\| > \frac 12$ due to \eq{di-su2}. These solutions come
with two signs. Note that any irreducible representation with small
dimension will be highly suppressed in the large $N$ limit.
The most extreme case is a sum of trivial representations, with $n_a
=1$, for which
\begin{equation}
C_i =0 \qquad \mbox{and} \qquad C_0(n_a=1)= s_a\,\mbox{$ \frac N2$} \ .
\end{equation}
\item \underline{$s_a=0~$:} ~ In this case $C_0=0$ and $n_a=1$, which
implies that $C_i = c_i$ with $c_i\in{\mathbb{R}}$ and
$\frac{N^2}{4} = c_i\,c^i$. These solutions are also suppressed at
large $N$ but less so than those with $C_i=0$ above.
They correspond to {\em fluxons}~\cite{Gross:2000ss} whose positions
on $S^2$ are determined by the vector $c_i$.
\end{itemize}
Note that each such saddle-point (or more generally any gauge field
configuration $C$) defines a projective module over the fuzzy sphere
algebra $S^2_N$, obtained by writing $C$ in $2n\times 2n$ block-matrix
form. The module then corresponds to a projector
$\Pi_{(n_1,s_1),\dots,(n_k,s_k)} \in{\rm Mat}_{2n}(S^2_N)$. Let us
describe some of these critical points explicitly.
\subsubsection*{\it Ground state}
The vacuum solution has $k=n$ and is given by the critical surface
$\cC_{(N,1),\dots,(N,1)}$, which implies that $C_0=\frac
12~\mbox{1 \kern-.59em {\rm l}}_{n\,N}$. It follows that $C_i\,C^i=\frac{N^2-1}{4}~\mbox{1 \kern-.59em {\rm l}}_{n\,N}$,
which is the quadratic Casimir invariant of the $N$-dimensional
irreducible representation of $SU(2)$. Using a suitable $U(n\,N)$
gauge transformation, it can be written as
\begin{equation}
C_i = \xi_i\otimes \mbox{1 \kern-.59em {\rm l}}_n
\label{vacsolnonab}\end{equation}
and we recover the original coordinates of the fuzzy sphere
$S_N^2$. This is equivalent to the vanishing curvature condition $F
=0$. In the abelian case $n=1$, an application of Schur's lemma shows
that the only matrices which commute with $C$ are scalar multiples of the
identity, and so the gauge group $U(N)$ acts freely on the moduli space of
vacuum solutions, corresponding simply to a change of basis in this
case. For $n>1$ the solution is a direct sum of $n$ identical
representations. This commutes with the action of $\mathfrak{u}(n)$, and so
now the gauge group $U(n\,N)$ contains a non-trivial stabilizer. The
moduli space of flat connections is therefore isomorphic to the smooth
manifold $U(n\,N)/U(n)$ in the nonabelian case. Note that any
configuration near the vacuum, with small but finite action, is given
by a small deformation of an irreducible $SU(2)$ representation
describing $S_N^2$, and in particular the gauge field fluctuations
$A_\mu$ are ``small''. It is in this sense that the quantum gauge
theory will describe a fluctuating theory of noncommutative fuzzy
sphere geometries.
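The vacuum relations can also be checked directly in components. The sketch below (illustrative only, for $n=1$) constructs the $N$-dimensional irreducible generators $\xi_i$ and verifies the Casimir value $\frac{N^2-1}4$ as well as the constraint $C^2=\frac{N^2}4~\mbox{1 \kern-.59em {\rm l}}$ for the vacuum configuration:

```python
import numpy as np

def su2_irrep(p):
    """Spin-j generators, j = (p - 1)/2, of the p-dimensional irrep."""
    j = (p - 1) / 2
    m = j - np.arange(p)
    Lp = np.diag(np.sqrt(j * (j + 1) - m[1:] * (m[1:] + 1)), k=1)
    return [(Lp + Lp.conj().T) / 2, (Lp - Lp.conj().T) / (2 * 1j), np.diag(m + 0j)]

N = 7
xi = su2_irrep(N)

# su(2) algebra and quadratic Casimir (N^2 - 1)/4
assert np.allclose(xi[0] @ xi[1] - xi[1] @ xi[0], 1j * xi[2])
casimir = sum(L @ L for L in xi)
assert np.allclose(casimir, (N**2 - 1) / 4 * np.eye(N))

# vacuum covariant coordinates C = xi_i (x) sigma^i + 1/2 obey C^2 = (N^2/4) 1l
sigma = [np.array([[0, 1], [1, 0]]), np.array([[0, -1j], [1j, 0]]),
         np.array([[1, 0], [0, -1]])]
C = sum(np.kron(L, s) for L, s in zip(xi, sigma)) + 0.5 * np.eye(2 * N)
assert np.allclose(C @ C, (N**2 / 4) * np.eye(2 * N))
```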
\subsubsection*{\it Fluxons}
At the other extreme, if $C_0$ has several zero eigenvalues,
i.e. several fluxons, the situation is much more complicated. For
example, when $C_0=0$ and $n=1$ we obtain a fuzzy version of the
moduli space of constant curvature connections in genus~$0$ provided
by the critical surface
\begin{equation}
\mu^{-1}(C_0=0)=\big\{C_i\in \mathfrak{u}(N)~\big|~C_i\,C^i=
\mbox{$\frac{N^2}4$}~\mbox{1 \kern-.59em {\rm l}}_N~,~[C_i,C_j]=0\big\}
\label{mu0}\end{equation}
along with the condition (\ref{UnTrconstr}) on the multiplicities of
the eigenvalues of $C_i\otimes\sigma^i$. The action of the $U(N)$
gauge group on (\ref{mu0}) can be used to simultaneously diagonalize
the three matrices $C_i$. The Marsden-Weinstein symplectic reduction
of the orbit space $\cO(\Xi)$ is then essentially a symmetric product
orbifold of the classical sphere $S^2$ given by
\begin{equation}
{\cal M}_0:=\mu^{-1}(C_0=0){/\!/}\,U(N)\cong{\rm Sym}^N\big(S^2\big) \ ,
\label{calM0}\end{equation}
where ${\rm Sym}^N(S^2):=(S^2)^N/\mathfrak{S}_N$ and the quotient by the Weyl
group $\mathfrak{S}_N\subset U(N)$ is the residual gauge symmetry acting by
permutations of the real eigenvalues of the hermitian matrices $C_i$
representing the positions of the fluxons on $S^2$, which are
indistinguishable. The fluxon moduli space ${\cal M}_0$ contains
orbifold singularities arising from the fixed points of the
$\mathfrak{S}_N$-action on $(S^2)^N$, which occur whenever two or more fluxon
locations coincide. This is analogous to the vacuum solution of
two-dimensional $U(N)$ gauge theory on a noncommutative torus wherein
the moduli space of constant curvature connections is the symmetric
product orbifold ${\rm Sym}^N(T^2)$~\cite{szabo}, and there is a natural
correspondence between two-dimensional noncommutative instantons and
fluxons~\cite{Griguolo:2004jp}. In the present case the $U(N)$ action
on the fluxon configuration space (\ref{mu0}) also has additional
fixed points. Note that the restriction of the symplectic two-form
(\ref{omegaexpl}) to the moduli space ${\cal M}_0$ is given by
\begin{equation}
\omega\big|_{{\cal M}_0}=-\frac{4{\,{\rm i}\,}}{N^2}\,\sum_{a=1}^N\,\epsilon^{ijk}
\,c_i^a~{\rm d} c_j^a\wedge{\rm d} c_k^a
\label{omegaM0}\end{equation}
where $c_i^a\in{\mathbb{R}}$ are the eigenvalues of $C_i$ with
$\sum_i\,(c_i^a)^2=\frac{N^2}4$ for each $a=1,\dots,N$. With the
usual embedding of the two-sphere $S^2\hookrightarrow{\mathbb{R}}^3$, this is
just the standard round symplectic two-form on the K\"ahler
manifold~$(S^2)^N$.
Each fluxon contributes a suppression factor ${\,{\rm e}\,}^{-\frac{N}{4g}}$ due
to \eq{YM-action}.
\subsubsection*{\it Instantons on $S^2$}
The configurations which will dominate the path integral in the large
$N$ classical limit are the low-energy solutions with small
actions. These are the solutions whose partitions consist of $n$
blocks, with critical surfaces
$\cC_{(n_1,1), \dots, (n_n,1)}$ and $n_i \approx N$. They correspond
to the usual instantons of $U(n)$ gauge theory on $S^2$
with vanishing $U(1)$ flux, as shown in~\cite{matrixsphere}.
These solutions may also contain additional
fluxons, which behave like localized flux tubes ensuring that the
total $U(1)$ flux vanishes. Their
contributions are suppressed by factors of at least
${\,{\rm e}\,}^{-\frac{N}{4g}}$; however, they do contribute in the
double scaling, quantum plane limit wherein $S_N^2$ becomes
noncommutative ${\mathbb{R}}^2$~\cite{Chu:2001xi,Behr:2005wp}.
\subsubsection*{\it Monopoles}
As shown in~\cite{matrixsphere,Karabali:2001te}, an
irreducible representation with $n_i = N-m_i$ corresponds to the gauge
field of a monopole with magnetic charge $m_i\in{\mathbb{Z}}$.
Configurations with non-trivial $U(1)$ monopole number can
therefore be obtained by relaxing the constraint
(\ref{UnTrconstr}) and replacing it by
\begin{equation}
\Tr(C) = n\,N - {\sf c}_1
\label{tracemodify}\end{equation}
where ${\sf c}_1=\sum_i\,m_i \in {\mathbb{Z}}$ is the first Chern
number.
In order to maintain the constraint $C^2 = \frac{N^2} 4~\mbox{1 \kern-.59em {\rm l}}_\mathcal{N}$,
the matrix dimension (\ref{cN2nN}) must then be replaced
with $\mathcal{N}=2(n\,N-{\sf c}_1)$.
Some of these nontrivial $U(1)$ bundles are realized within the
original configuration space \eq{nonaborbit},
in the presence of trivial blocks with $n_a=1, s_a = \pm\, 1$.
For example, in the abelian case $n=1$ the solutions in
$\cC_{(N-2,1),(1,1),(1,-1)}$ are naturally interpreted as monopoles with
charge $m=2$. The blocks $(1,\pm\, 1)$ have vanishing field
strength $F_i=0$, and are naturally interpreted as Dirac strings.
They are suppressed by factors of at least
${\,{\rm e}\,}^{-N^3/g}$.
Replacing the trivial blocks with fluxons leads to vanishing
global $U(1)$ flux as discussed above.
\subsection{The classical action\label{ClassAction}}
The values of the Yang-Mills action (\ref{YM-action}) on the classical
solutions obtained in Section~\ref{CritPoints} above will determine
the classical contributions to the path integral in the next
section. The action at these critical points can be
evaluated as follows. Note that for each $p$-dimensional irreducible
representation $L_i$ of the isometry group $SU(2)$, one has
$L_i\,L^i=\frac{p^2-1}4~\mbox{1 \kern-.59em {\rm l}}_p$ and hence from \eq{di-su2} it follows
that
\begin{equation}
\mbox{$\frac{N^2}{p^2}$}~\mbox{1 \kern-.59em {\rm l}}_p = 4C_0(p)^2
\end{equation}
on that representation, so that $C_0(p) = \pm \,\frac{N}{2p}~\mbox{1 \kern-.59em {\rm l}}_p$.
Consider the reduced Yang-Mills action
\begin{equation}
S':=\mbox{$\frac Ng$} \,\Tr\big(C_0^2\big) = S + \mbox{$\frac Ng$}\,
\Tr(C_0) - \mbox{$\frac N{4g}$} \Tr(\mbox{1 \kern-.59em {\rm l}}_{n\,N})=S + \mbox{$
\frac {n\,N^2}{4g}$}
\label{YM-action-prime}
\end{equation}
which is somewhat easier to manipulate than $S$. For a dominant
solution with critical surface $\cC_{(n_1,1),\dots, (n_n,1)}$ and
$n_i >1$, the action $S'$ is given by
\begin{equation}
S'\big((n_1,1)\,,\,\dots\,,\, (n_n,1)\big) = \frac N{g}\,
\sum_{i=1}^n\, n_i\, \frac{N^2}{4 n_i^2}
= \frac{N^3}{4g}\, \sum_{i=1}^n\, \frac{1} {n_i} \ .
\label{action-eval}
\end{equation}
While possible fluxon blocks with $n_i=1$
do not contribute at all to $S'$, they do contribute
$\frac{N}{4g}$ to the original action $S$ (\ref{YM-action}).
Their total contribution to $S$ is proportional
to the fluxon charge, i.e. the total number of blocks with $n_i=1$,
and agrees with the usual fluxon action~\cite{Gross:2000ss} in the
quantum plane limit of $S_N^2$~\cite{Chu:2001xi}.
The dominant configurations in the classical limit are therefore those
with
\begin{equation}
n_i = N - m_i\qquad \mbox{and} \qquad \sum_{i=1}^n\, m_i=0
\label{nidomclass}\end{equation}
with small $m_i\in{\mathbb{Z}}$, for which
\begin{equation}
C_0(n_i) = \mbox{$ \frac{N}{2(N-m_i)}~\mbox{1 \kern-.59em {\rm l}}_{n_i} \approx \frac 12
\,\big(1+\frac{m_i}N\big)~\mbox{1 \kern-.59em {\rm l}}_{n_i}$} \ .
\end{equation}
Note that then
\begin{equation}
\Tr(C_0) = \sum_{i=1}^n\, (N-m_i)\, \frac{N}{2(N-m_i)} = \frac{n\,N}2
\label{TrC0dom}\end{equation}
as required. It follows that
\begin{equation}
S\big((n_1,1)\,,\,\dots\,,\, (n_n,1)\big)
\approx\frac Ng\, \sum_{i=1}^n\, (N-m_i)
\,\left(\frac{m_i}{2N}\right)^2 +O\big(\mbox{$\frac1N$}\big)
\approx\frac 1{4g}\, \sum_{i=1}^n\, m_i^2 \ ,
\label{action-eval-2}
\end{equation}
which is the usual expression~\cite{Minahan:1993tp,Gross:1994mr} for
the classical action of $U(n)$ Yang-Mills theory on the sphere $S^2$
with trivial gauge bundle evaluated on the two-dimensional instanton
on $S^2$ corresponding to a configuration of $n$ Dirac monopoles of
magnetic charges $m_i\in{\mathbb{Z}}$. Non-trivial gauge bundles over $S^2$ of
first Chern class ${\sf c}_1\in{\mathbb{Z}}$ are obtained by modifying the trace
constraint as in \eq{tracemodify}.
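The approach of the exact action to the two-dimensional instanton action can be seen numerically. The sketch below (illustrative only) evaluates $S=S'-\frac{n\,N^2}{4g}$, which follows from \eq{YM-action-prime} and \eq{TrC0dom}, on the configurations \eq{nidomclass} and compares with $\frac1{4g}\,\sum_i\,m_i^2$:

```python
def instanton_action(ms, N, g=1.0):
    """Exact S on C_{(N-m_1,1),...,(N-m_n,1)}, via S = S' - n N^2/(4g)."""
    n = len(ms)
    ns = [N - m for m in ms]
    S_prime = N**3 / (4 * g) * sum(1.0 / p for p in ns)   # eq. (action-eval)
    return S_prime - n * N**2 / (4 * g)

ms = (2, 1, -3)                         # monopole charges with vanishing sum
limit = sum(m**2 for m in ms) / 4.0     # (1/4g) sum_i m_i^2 with g = 1

for N in (10**2, 10**3, 10**4):
    print(N, instanton_action(ms, N))

# the exact action converges to the commutative instanton action as N -> oo
assert abs(instanton_action(ms, 10**4) - limit) < 1e-2
```

The deviation falls off like $1/N$, consistent with the $O(1/N)$ corrections in \eq{action-eval-2}.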
\subsection{Local symplectic geometry of the configuration
space\label{LocalGeomO}}
We will now develop the local symplectic geometry of the
configuration space of gauge fields near each Yang-Mills critical
point. This is done by analysing in more detail the map
(\ref{calJmapdef}), satisfying (\ref{cJprojprop}). We want to find
a useful description of the tangent space $T_C \cO \cong{\rm
im}({\cal J})$, i.e. of the local geometry of the orbit space
$\cO$. Since ${\cal J}$ is an anti-hermitian operator with respect to the
Cartan-Killing form on $\mathfrak{u}(\mathcal{N}\,)$ (see (\ref{cJantiherm})), it
follows that the space $\mathfrak{u}(\mathcal{N}\,)$ splits into two orthogonal
subspaces as
\begin{equation}
\mathfrak{u}(\mathcal{N}\,) = \ker({\cal J}) \oplus\ker\big({\cal J}^2+\mbox{1 \kern-.59em {\rm l}}_\mathcal{N}\big)
\label{ucN-decomp}
\end{equation}
where $\ker({\cal J}) = \mathfrak{r}=\mathfrak{u}(n\,N_+)\oplus\mathfrak{u}(n\,N_-)$ is the
stabilizer subalgebra, while $\ker({\cal J}^2+\mbox{1 \kern-.59em {\rm l}}_\mathcal{N}) \cong T_C \cO$ is
the tangent space to the configuration space at $C\in\cO$. In
particular, ${\cal J}$ defines a complex structure on $T_C \cO$,
and \eq{ucN-decomp} is just the Cartan decomposition
of $\mathfrak{u}(\mathcal{N}\,)$ corresponding to
the
symmetric space $\cO$. This follows immediately by noticing that the
involutive automorphism
\begin{equation}
{\sf j}\,:\, \mathfrak{u}(\mathcal{N}\,) ~\longrightarrow~ \mathfrak{u}(\mathcal{N}\,) \ , \qquad
\phi~\longmapsto~ C \,\phi\, C^{-1}
\end{equation}
is $\mbox{1 \kern-.59em {\rm l}}_\mathcal{N}$ on $\ker({\cal J})$ and $-\mbox{1 \kern-.59em {\rm l}}_\mathcal{N}$ on $\ker({\cal J}^2+\mbox{1 \kern-.59em {\rm l}}_\mathcal{N})$
upon using the constraints (\ref{constraint}). Moreover, for any
$V_\phi,V_\psi\in T_C\cO$, from (\ref{omegaexpl}) one has
\begin{equation}
\langle\omega,V_\phi\wedge V_\psi\rangle
= \mbox{$\frac{\,{\rm i}\,}{N^2}$}\, \Tr\big([C,V_\phi]\, V_\psi\big)
= \mbox{$\frac1{N}$}\, \Tr\big({\cal J}(V_\phi)\, V_\psi\big)
\label{symplectic-eval}
\end{equation}
and
\begin{equation}
\big\langle\omega\,,\,V_\phi\wedge {\cal J}(V_\psi)\big
\rangle = \mbox{$\frac1N$}\, \Tr(V_\phi\, V_\psi) \ ,
\label{Kahlerderiv}\end{equation}
expressing the fact that the symplectic two-form $\omega$ makes the
configuration space $\cO$ into a K\"ahler manifold with respect to the
complex structure (\ref{calJmapdef}). All of these properties are just
standard features of hermitian symmetric spaces~\cite{helgason1}, as
will be exploited at length in this paper.
Consider the restriction of the map ${\cal J}$ to the gauge algebra
$\mathfrak{g} = \mathfrak{u}(n\,N)\subset\mathfrak{u}(\mathcal{N}\,)$ containing elements of the form
$g=\phi\otimes\sigma^0$. Since ${\cal J}(\phi)$ is the infinitesimal gauge
transformation of the gauge field $C$ generated by $\phi$, it
describes, within $T_C\cO$, the orbits of the gauge group $G = U(n\,N)$
acting on the configuration space $\cO$. Generically this action is
free (apart from the trivial $\mathfrak{u}(1)$),
but not for certain critical points. For example, for the vacuum
solution (\ref{vacsolnonab}) the subalgebra $\mbox{1 \kern-.59em {\rm l}}_N\otimes \mathfrak{u}(n)$
commutes with $C$. The higher critical points in the nonabelian case
generically have a smaller $\mathfrak{u}(1)^{n}$ centralizer algebra.
More precisely, consider the kernel of ${\cal J}$ at $C$ restricted to the
gauge algebra $\mathfrak{g}$,
\begin{equation}
\mathfrak{s}:=\ker({\cal J}) \cap {\mathfrak{g}} \ ,
\label{kerJmgdef}\end{equation}
which is the Lie algebra of the subgroup of the gauge group that stabilizes
$C$. The elements $\phi \in \mathfrak{s}$ are orthogonal to $T_C\cO$ due to
\eq{ucN-decomp}. Hence $\mathfrak{g}$ decomposes into orthogonal subspaces
\begin{equation}
\mathfrak{g} = \mathfrak{s} \oplus \mathfrak{g}'
\end{equation}
where $\mathfrak{g}'=\mathfrak{s}^\perp=:\mathfrak{g}\ominus\mathfrak{s}$ contains the ``proper'' gauge
transformations, acting freely near $C$. If $(n_1,\dots,n_n)$ is a
partition of the integer $n\,N$ which does not contain trivial
representations of $SU(2)$ (no fluxons), then $\mathfrak{g}'$ is the tangent
space to the corresponding critical surface
$\cC_{(n_1,1),\dots,(n_n,1)}\subset \cO$,
\begin{equation}
\cC_{(n_1,1),\dots,(n_n,1)} \cong U(n\,N) / S \ ,
\label{globalcritsurface}\end{equation}
where $S = \exp(\mathfrak{s})$.
We claim that {\em the subspaces ${\cal J}(\mathfrak{g})$ and $\mathfrak{g}$ are linearly
independent.} For this, assume to the contrary that ${\cal J}(\mathfrak{g})$ and
$\mathfrak{g}$ are linearly dependent, i.e. ${\cal J}(g) \in \mathfrak{g}$ is non-zero for
some $g \in \mathfrak{g}$. This implies that $[C_i,g] =0$, and therefore
$[C_0^2,g]=0$ due to \eq{C2}. Restricting attention to
critical points $C$ for which the spectrum of $C_0$ is non-negative
(the others being strongly suppressed at large $N$), this implies that
$g$ commutes with the spectral projectors of $C_0$, and
hence also with $C_0$ itself. Together with $[C_i,g] =0$ it follows
that ${\cal J}(g)=0$, a contradiction. However, ${\cal J}(\mathfrak{g})$ and $\mathfrak{g}$
need not be orthogonal subspaces.
Generically one then has
\begin{equation}
{\cal J}^2(\mathfrak{g}) + {\cal J}(\mathfrak{g}) \subset T_C\cO \ .
\end{equation}
The two subspaces are not orthogonal in general, since for $g_1,g_2
\in \mathfrak{g}$ one can compute the inner product
\begin{eqnarray}
\Tr\big({\cal J}^2(g_1)\, {\cal J}(g_2)\big) &=& \Tr\big(g_1\, {\cal J}(g_2)\big)
\nonumber\\[4pt] &=&-\mbox{$\frac{{\,{\rm i}\,}}N$}\,\Tr\big(C\,
[g_1,g_2]\big)\nonumber\\[4pt] &=&-\mbox{$\frac1N$}\,\langle
\omega,V_{g_1}\wedge V_{g_2}\rangle~=~\mbox{$\frac{{\,{\rm i}\,}}N$}\,
\Tr\big(g_1\, [C_0,g_2]\big)
\label{orthogornot}
\end{eqnarray}
which is non-vanishing in general. For the vacuum solution with
$C_0=\frac12~\mbox{1 \kern-.59em {\rm l}}_{n\,N}$, it follows from this expression that the
subspaces are indeed orthogonal, and hence ${\cal J}^2(\mathfrak{g}) \oplus {\cal J}(\mathfrak{g})
\subset T_C\cO$. In fact, one has
\begin{equation}
{\cal J}^2 (\mathfrak{g}) \oplus {\cal J}(\mathfrak{g}) = T_C\cO \qquad\mbox{if} \quad
C_0 =\mbox{$\frac12$}~\mbox{1 \kern-.59em {\rm l}}_{n\,N}
\label{tangent-decomposition}
\end{equation}
which provides a useful description of the local geometry near the
global minimum. To see (\ref{tangent-decomposition}), note
first that in the abelian case $n=1$ one has $\mathfrak{s}=\mathfrak{u}(1)$, and
\eq{tangent-decomposition} then follows since $\dim (\cO) =
2(N^2-1) = 2 \dim (\mathfrak{g}')$. In the nonabelian case, for the vacuum state
the gauge stabilizer $\mathfrak{s} \cong\mathfrak{u}(n)$ has dimension $n^2$ and
hence $\dim({\cal J}^2 (\mathfrak{g}') \oplus {\cal J}(\mathfrak{g}')) = 2n^2
\,N^2-2n^2 = \dim(\cO)$.
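The dimension count can be tabulated for arbitrary $n$ and $N$. The following arithmetic sketch (illustrative only; it assumes $N_\pm=N\pm1$ at the vacuum, in accordance with \eq{C-decomp-1}) confirms $\dim(\cO)=\dim\mathfrak{u}(2n\,N)-\dim\mathfrak{u}(n(N+1))-\dim\mathfrak{u}(n(N-1))=2n^2(N^2-1)=2\dim(\mathfrak{g}')$:

```python
def dim_u(k):
    """Real dimension of the Lie algebra u(k)."""
    return k * k

for n in range(1, 5):
    for N in range(2, 8):
        # tangent space: u(2nN) minus the stabilizer u(n(N+1)) + u(n(N-1))
        dim_O = dim_u(2 * n * N) - dim_u(n * (N + 1)) - dim_u(n * (N - 1))
        assert dim_O == 2 * n**2 * (N**2 - 1)
        # proper gauge transformations at the vacuum: u(nN) minus u(n)
        dim_g_prime = dim_u(n * N) - dim_u(n)
        assert dim_O == 2 * dim_g_prime
```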
In general, the subspaces ${\cal J}(\mathfrak{g}) = {\cal J}(\mathfrak{g}\ominus\mathfrak{s})$ and
${\cal J}^2(\mathfrak{g})$ are not linearly independent, and we can define
\begin{equation}
E_0 := {\cal J}(\mathfrak{g}) \cap {\cal J}^2(\mathfrak{g})
\end{equation}
which is generically a non-trivial subspace. Define also the subspaces
$\mathfrak{h}, \tilde \mathfrak{h} \subset \mathfrak{g}\ominus\mathfrak{s}$ with the properties that
\begin{equation}
{\cal J}(\mathfrak{h}) = E_0 = {\cal J}^2(\tilde \mathfrak{h}) \ .
\end{equation}
Since ${\cal J}^2(\mathfrak{h}) = -{\cal J}(\tilde \mathfrak{h})$ implies that $\mathfrak{h} \subset
\tilde\mathfrak{h} \subset \mathfrak{h}$, we have
\begin{equation}
\mathfrak{h} = \tilde \mathfrak{h} \qquad \mbox{and} \qquad {\cal J}(E_0) = E_0 \ .
\end{equation}
We can accordingly decompose the gauge algebra $\mathfrak{g}$ into orthogonal
subspaces as
\begin{equation}
\mathfrak{g} = \mathfrak{g}_1 \oplus \mathfrak{h} \oplus \mathfrak{s} \ ,
\end{equation}
where $\mathfrak{g}_1:=\mathfrak{g}\ominus(\mathfrak{h}\oplus\mathfrak{s})$.
Since ${\cal J}\,:\, \mathfrak{h} \to E_0$ is a bijection, there is a unique map
\begin{equation}
j\,:\, \mathfrak{h}~\longrightarrow~ \mathfrak{h} \qquad \mbox{with} \qquad
{\cal J}^2(h) ={\cal J}\big(j(h)\big)
\label{juniquemap}\end{equation}
for all $h\in\mathfrak{h}$, which satisfies $j^2 =-\mbox{1 \kern-.59em {\rm l}}_{\mathfrak{h}}$. Similarly, in
order to span the entire tangent space at $C\in\cO$ we generally have
to introduce another subspace $E_1$, with $ {\cal J}(E_1) =E_1$,
which gives the general decomposition
\begin{equation}
{\cal J}(\mathfrak{g}\ominus\mathfrak{h}) \oplus {\cal J}^2(\mathfrak{g}\ominus\mathfrak{h})
\oplus E_0 \oplus E_1 ~=~ T_C\cO \ .
\label{TCOdecomp}\end{equation}
\subsection{Explicit decomposition at Yang-Mills critical
surfaces\label{ExplYMDecomp}}
We will now provide an explicit description of the various subspaces
appearing in the decomposition of the tangent space
(\ref{TCOdecomp}). Consider the Yang-Mills critical surfaces
$\cC_{(n_1,1),\dots,(n_n,1)}$ and suppose first that the integers
$n_1,\dots,n_n$ are all distinct, corresponding to a
completely nondegenerate solution. The elements $\phi$ of the
subspace (\ref{kerJmgdef}) satisfy $[C,\phi] =0$. This implies that
$\phi$ respects the block decomposition described by the given partition
$(n_1,\dots, n_n)$, and is therefore proportional to $\mbox{1 \kern-.59em {\rm l}}_{n_i}$ on
each block. These are thus $\mathfrak{u}(1)^n$ degrees of freedom. If
some $n_i$ are degenerate, this space is enhanced to
\begin{equation}
\mathfrak{s} = \mathfrak{u}(k_1) \times \cdots \times \mathfrak{u}(k_l)
\end{equation}
for a critical surface with $C = \bigoplus_i\, C(n_i)\otimes
\mbox{1 \kern-.59em {\rm l}}_{k_i}$ and $n_i$ all distinct. For the vacuum this is $\mathfrak{u}(n)$,
corresponding to the maximally degenerate solution, as in
Section~\ref{LocalGeomO} above.
We wish to work out the map ${\cal J}$ explicitly. For this, we decompose
\begin{equation}
\phi = \left(\begin{array}{ccc} \phi_{11} & \phi_{12} & \dots \\
\phi_{21} & \phi_{22} & \dots \\
\cdots &\cdots &\ddots
\end{array}\right)
\end{equation}
where $\phi_{ij} \in (n_i) \otimes (n_j)$ and as before $(p)$ denotes
the $p$-dimensional irreducible representation of $SU(2)$. In the
degenerate case, there is another factor corresponding to $\mathfrak{u}(k_j)$.
The non-orthogonality of ${\cal J}(\mathfrak{g})$ and ${\cal J}^2(\mathfrak{g})$ in
\eq{orthogornot} is now easily understood as being simply due to the
different $\mathfrak{u}(1)$ charges between the $SU(2)$ sectors of
$\mathfrak{s}$. Since $[C,C_0]=0$ at the Yang-Mills critical
surfaces, one has ${\cal J}([C_0,\phi]) =
[C_0,{\cal J}(\phi)]$. Thus the hermitian operator
\begin{equation}
({\rm ad}_{{\,{\rm i}\,} C_0})_{ij} = {\,{\rm i}\,} C_0(n_i) - {\,{\rm i}\,} C_0(n_j) =
{\,{\rm i}\,} \frac N2 \,\frac{n_j-n_i}{n_i\,n_j} =: {\,{\rm i}\,} c_{ij}
\label{ad-C0-explicit}
\end{equation}
acting on $\phi_{ij} \in (n_i) \otimes (n_j)$ commutes with
${\cal J}$. This implies that we can decompose the subspaces
in \eq{TCOdecomp} such as
${\cal J}(\mathfrak{h}) = {\cal J}^2(\mathfrak{h})=E_0$ into irreducible representations
of the operator ${\rm ad}_{{\,{\rm i}\,} C_0}$, i.e. into the various $\mathfrak{u}(1)$
blocks. Restricted to the diagonal blocks, $C_0(n_i)$ is proportional
to the unit matrix $\mbox{1 \kern-.59em {\rm l}}_{n_i}$, so that $\Tr({\cal J}(g_1)\, {\cal J}^2(g_2))
=0$ there, as for the vacuum.
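For a sample partition the $\mathfrak{u}(1)$ charges \eq{ad-C0-explicit} are easily tabulated. The sketch below (illustrative only; the block dimensions are an arbitrary choice) checks antisymmetry of ${\rm ad}_{{\,{\rm i}\,} C_0}$ and the vanishing of $c_{ii}$ on the diagonal blocks:

```python
N = 5
ns = (4, 5, 6)           # sample distinct block dimensions n_i (illustrative)

def C0_eigenvalue(p, N):
    """C_0(p) = N / (2p) on a p-dimensional irreducible block."""
    return N / (2 * p)

# c_ij = (N/2) (n_j - n_i) / (n_i n_j), as in eq. (ad-C0-explicit)
c = [[N / 2 * (q - p) / (p * q) for q in ns] for p in ns]

for a, p in enumerate(ns):
    assert c[a][a] == 0                      # no u(1) charge on diagonal blocks
    for b, q in enumerate(ns):
        assert c[a][b] == -c[b][a]           # antisymmetry
        assert abs(c[a][b] - (C0_eigenvalue(p, N) - C0_eigenvalue(q, N))) < 1e-12
```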
\subsubsection*{\it Global $SU(2)$ symmetry}
To proceed further, we need to exploit an additional symmetry that we
have neglected so far, the global rotation group $SU(2)$. Recall from
Section~\ref{CritPoints} above that each saddle-point defines a
representation of $SU(2)$ acting on the representation space $V \cong
\Gamma^{n\,N}$ as (\ref{Lidef}), and trivially on potential fluxon
components. In the abelian case $n=1$, this induces via the adjoint
action the rotations of functions $f\mapsto J_if=[L_i,f]$ in $S^2_N
\cong V \otimes \overline{V}$, but it is a somewhat different symmetry for
the nonabelian instantons. Let us decompose $V$ into irreducible
representations as
\begin{equation}
V = \bigoplus_{i=1}^n\, (n_i) \ .
\end{equation}
This representation can be extended to the module $V \otimes \Gamma^2$ for
the action of the operators
\begin{equation}
J_i = L_i + \mbox{$\frac 12$}\, \sigma^i
\end{equation}
which by construction commute with $C$,
\begin{equation}
[J_i,C] =0 \ ,
\end{equation}
on the critical surfaces. This follows from the fact that
$C_i\otimes\sigma^i$ is an intertwiner for the action of $J_i$ on
\begin{equation}
V \otimes \Gamma^2 = \Big(\,\bigoplus_{i=1}^n\, (n_i+1)\Big)~
\oplus~ \Big(\,\bigoplus_{i=1}^n\,(n_i-1)\Big) =: V^+ \oplus V^-
\label{C-decomp-1}
\end{equation}
and $C$ has eigenvalues $\pm \,\frac N2$ on the component subspaces
$V^\pm$. This enables one to decompose $C$ further using the projectors
$\Pi_i^{\pm}$ onto the irreducible representations $(n_i\pm1)$ with
\begin{equation}
\big[C\,,\,\Pi_i^\pm\big] =0 \ ,
\end{equation}
and the constrained covariant coordinates take the simple form
\begin{equation}
C = \frac N2\,
\left(\begin{array}{cc}\mbox{$\bigoplus\limits_{i=1}^n$}\, \Pi_i^+ & 0
\\ 0 & -\,\mbox{$\bigoplus\limits_{i=1}^n$}\,
\Pi_i^-\end{array}\right) \ .
\label{C-explicit}
\end{equation}
In particular, since $C_0\otimes\sigma^0$ is two-fold degenerate it
follows that
\begin{equation}
C_0\otimes\sigma^0 =
\left(\begin{array}{cc}\mbox{$\bigoplus\limits_{i=1}^n$}\,
C_0(n_i)\, \Pi_i^+ & 0
\\ 0 & \,\mbox{$\bigoplus\limits_{i=1}^n$}\,C_0(n_i)\,
\Pi_i^-\end{array}\right)
\label{C0-explicit}
\end{equation}
separates the explicit blocks according to \eq{ad-C0-explicit}.
The complex structure map ${\cal J}$ respects this $SU(2)$ symmetry,
\begin{equation}
[J_i,{\cal J}] =0 \ ,
\end{equation}
which enables one to decompose the tangent space $T_C\cO$
into irreducible representations of the $SU(2)$ isometry group. With
respect to the block decomposition (\ref{C-decomp-1}), the subspace
$\ker({\cal J})\subset\mathfrak{u}(\mathcal{N}\,)$ consists of block diagonal operators while
$T_C\cO$ consists of block off-diagonal operators, and the action of
${\cal J}$ on tangent vectors is given explicitly by
\begin{equation}
{\cal J} \left(\begin{array}{cc}0 & X \\
X^\dagger & 0\end{array}\right)
= \left(\begin{array}{cc}0 & {\,{\rm i}\,} X \\
-{\,{\rm i}\,} X^\dagger & 0\end{array}\right) \ .
\label{J-tangent-explicit}
\end{equation}
This is the obvious complex structure on $T_C \cO$ compatible with
the action of the isometry group.
The decomposition of the tangent space $T_C\cO$ into irreducible
representations of $SU(2)$ is now provided by
\begin{equation}
T_C^-\cO \cong \Big(\,\bigoplus_{i=1}^n\, (n_i+1)
\Big) \otimes \Big(\,\bigoplus_{j=1}^n\, (n_j-1)\Big)
= \bigoplus_{i,j=1}^n\, (n_i+1)\otimes (n_j-1) \ ,
\label{X-decomp-1}
\end{equation}
where $T_C^\pm\cO:=T_C\cO\big|_{V^\pm}$
corresponds to the upper-right respectively lower-left blocks in
\eq{J-tangent-explicit}, and the different sectors
$(i,j)$ are separated by the eigenvalues of the operator ${\rm ad}_{{\,{\rm i}\,}
C_0}$ in the irreducible case. Note in particular that the lowest
spin component in the Clebsch-Gordan decomposition of
$(n_i+1)\otimes (n_i-1)$ is a spin one field as appropriate for
gauge fields.
This implies ${\cal J}(\mathfrak{g}_0) =0$, where $\mathfrak{g}_0$
is the subspace of $SU(2)$ singlet components of $\mathfrak{g}$,
and in fact $\mathfrak{g}_0 = \mathfrak{s}$ by Schur's lemma.
\subsubsection*{\it Global minimum}
Consider first the vacuum surface $\cC_{(N,1),\dots,(N,1)}$. Compare
the $SU(2)$-invariant decomposition of the gauge algebra $\mathfrak{g}$, given
by
\begin{eqnarray}
\mathfrak{g} &\cong& (N) \otimes (N) \otimes \mathfrak{u}(n)\nonumber\\[4pt]
&=& \big((1)\oplus (3) \oplus \cdots \oplus (2N-1 )\big)
\otimes \mathfrak{u}(n)~=~ \big((1) ~\oplus~ (N+1) \otimes (N-1)\big)
\otimes \mathfrak{u}(n) \ ,
\label{mg-decomp-abel-vac}
\end{eqnarray}
with \eq{X-decomp-1} in the degenerate case $C_0=\frac
12~\mbox{1 \kern-.59em {\rm l}}_{n\,N}$. It follows that the image of ${\cal J}(\mathfrak{g})$ indeed covers
all modes of $T_C\cO$, and the complexification is achieved by adding
${\cal J}^2(\mathfrak{g})$. This gives another proof of the decomposition
(\ref{tangent-decomposition}). The singlet subspace of
(\ref{mg-decomp-abel-vac}) is
$\mathfrak{g}_0=(1)\otimes\mathfrak{u}(n)\cong\mathfrak{u}(n)=\mathfrak{s}$.
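The dimension bookkeeping in \eq{mg-decomp-abel-vac} can be checked mechanically; the following short Python sketch (ours, with an illustrative helper returning the $SU(2)$ Clebsch-Gordan dimensions) verifies that $(N)\otimes(N)$ and $(1)\oplus(N+1)\otimes(N-1)$ contain the same irreducible summands:

```python
def clebsch_gordan(d1, d2):
    """Irrep dimensions in the SU(2) decomposition (d1) x (d2)."""
    return list(range(abs(d1 - d2) + 1, d1 + d2, 2))

N = 7                                        # any sample value of N
lhs = clebsch_gordan(N, N)                   # (N) x (N)
rhs = [1] + clebsch_gordan(N + 1, N - 1)     # (1) + (N+1) x (N-1)
assert lhs == list(range(1, 2 * N, 2))       # (1) + (3) + ... + (2N-1)
assert sorted(rhs) == lhs                    # the two decompositions agree
assert sum(lhs) == N * N                     # total dimension N^2
```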
\subsubsection*{\it Maximally irreducible saddle points}
Now consider a generic, completely non-degenerate critical surface
$\cC_{(n_1,1),\dots,(n_n,1)}$, and the corresponding decomposition of
$T_C\cO =T_C^-\cO \oplus T_C^+\cO$ given by \eq{X-decomp-1}. The
different sectors $(i,j)$ are distinguished by the eigenvalues of the
operator ${\rm ad}_{{\,{\rm i}\,} C_0}$. Hence we can pick some fixed pair $n_i
>n_j$, and decompose
\begin{equation}
(n_i+1)\otimes (n_j-1) \cong
\big(|n_i-n_j|+3\big)_{{\,{\rm i}\,} c_{ij}}\oplus\big(|n_i-n_j|+5
\big)_{{\,{\rm i}\,} c_{ij}} \oplus \cdots \oplus \big(n_i+n_j-1
\big)_{{\,{\rm i}\,} c_{ij}}\subset T_C \cO
\label{X-1}
\end{equation}
which has eigenvalue given by \eq{ad-C0-explicit} as indicated by the
subscripts. Similarly, one has
\begin{equation}
(n_j+1)\otimes (n_i-1) \cong
\big(|n_i-n_j|-1\big)_{{\,{\rm i}\,} c_{ji}} \oplus\big(|n_i-n_j|+1
\big)_{{\,{\rm i}\,} c_{ji}} \oplus \cdots \oplus \big(n_i+n_j-1
\big)_{{\,{\rm i}\,} c_{ji}} \subset T_C \cO
\label{X-2}
\end{equation}
(where $(0)$ is omitted) with ${\rm ad}_{{\,{\rm i}\,} C_0}$ eigenvalue ${\,{\rm i}\,}
c_{ji}= -{\,{\rm i}\,} c_{ij}$. The corresponding conjugate matrix
decompositions $(n_j-1)\otimes(n_i+1)$ and $(n_i-1)\otimes(n_j+1)$ are
determined by hermiticity. They are given respectively by \eq{X-1}
with eigenvalue ${\,{\rm i}\,} c_{ji}=-{\,{\rm i}\,} c_{ij}$ and by \eq{X-2} with
eigenvalue ${\,{\rm i}\,} c_{ij}$.
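The series \eq{X-1} and \eq{X-2} can be verified for a sample pair with the same kind of bookkeeping; a small Python check (ours, with an illustrative helper returning the $SU(2)$ Clebsch-Gordan dimensions):

```python
def clebsch_gordan(d1, d2):
    """Irrep dimensions in the SU(2) decomposition (d1) x (d2)."""
    return list(range(abs(d1 - d2) + 1, d1 + d2, 2))

ni, nj = 9, 4                                        # sample pair with n_i - n_j >= 2
X1 = clebsch_gordan(ni + 1, nj - 1)
X2 = clebsch_gordan(nj + 1, ni - 1)
assert X1 == list(range(ni - nj + 3, ni + nj, 2))    # (|ni-nj|+3), ..., (ni+nj-1)
assert X2 == list(range(ni - nj - 1, ni + nj, 2))    # (|ni-nj|-1), ..., (ni+nj-1)
assert sum(X1) == (ni + 1) * (nj - 1)                # total dimensions match
assert sum(X2) == (nj + 1) * (ni - 1)
```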
We denote the tangent space decomposition (\ref{X-decomp-1})
determined by \eq{X-1} and \eq{X-2} as
\begin{equation}
T_C \cO := \bigoplus_{i,j=1}^n \, {\mathbb{C}}|n;n_i+1,n_j- 1; {\,{\rm i}\,} c_{ij},l
\rangle_{T_C\cO} \ ,
\label{TO-basis}
\end{equation}
where $n$ denotes the dimension of $(n)$, and
we will drop its magnetic quantum number $l$ from now on.
This defines a natural basis for $T_C \cO$, in which the action of
${\cal J}$ is given by {\em block-wise} multiplication with
\begin{equation}
{\cal J}=\sigma^2=\left(\begin{array}{cc}0 & {\,{\rm i}\,}\\ -{\,{\rm i}\,} & 0\end{array}\right)
\label{cJblockwise}\end{equation}
as in \eq{J-tangent-explicit},
and the action of ${\rm ad}_{{\,{\rm i}\,} C_0}$ by
\begin{equation}
\big({\rm ad}_{{\,{\rm i}\,} C_0}\big)_{ij}= |c_{ij}|\,\left(\begin{array}{cc} 0 &
\sigma^2 \\ \sigma^2 & 0\end{array}\right) \
\label{adiC0blockwise}\end{equation}
since its sign depends on $n_i \gtrless n_j$.
In particular, by virtue of \eq{symplectic-eval} the tangent space
$T_C \cO$ is naturally a symplectic vector space with symplectic form
of type~$(1,1)$ with respect to the complex structure ${\cal J}$. This
construction thereby defines a local symplectic model for the
neighbourhood of the Yang-Mills critical point $C$ in the K\"ahler
manifold $\cO$. In the next section this model space will be used to
evaluate fluctuation integrals over tubular neighbourhoods of the
critical surfaces. In particular, all pertinent one-forms can be
explicitly evaluated on $T_C \cO$ by using the explicit expressions
for $C$ and $C_0$ in \eq{C-explicit} and \eq{C0-explicit}.
Let us now look at the $SU(2)$-invariant decomposition of the gauge
algebra $\mathfrak{g}$ given by
\begin{eqnarray}\label{mg-decomp-abel}
\mathfrak{g} &\cong& \bigoplus_{i,j=1}^n\, (n_i) \otimes (n_j) \\[4pt]
&=& \bigoplus _{i,j=1}^n\, \big( (|n_i-n_j|+1) \oplus(|n_i-n_j|+3)
\oplus \cdots\oplus(n_i+n_j-1)\big)
~=:~ \bigoplus_{i,j=1}^n\, {\mathbb{C}}|n;n_i,n_j;{\,{\rm i}\,} c_{ij}\rangle_\mathfrak{g} \ . \nonumber
\end{eqnarray}
This can be compared with the $SU(2)$-invariant decomposition of the
tangent space $T_C\cO$ in \eq{TO-basis} above, whose higher modes
match perfectly with those of $\mathfrak{g}$ except for a doubling due to the
complex structure ${\cal J}$. There is, however, some mismatch in the low
lying modes. In particular, $T_C\cO$ contains the extra subspace
\begin{equation}
E_1 := \bigoplus_{i>j}\, {\mathbb{C}}|n_i-n_j-1;n_j +1,n_i-1;-{\,{\rm i}\,}
c_{ij}\rangle_{T_C\cO}
\label{E_1}
\end{equation}
which is not contained in ${\cal J}(\mathfrak{g})$. On the other hand, the modes in
the subspace
\begin{equation}
E_0 := \bigoplus_{i>j}\, {\mathbb{C}}|n_i-n_j+1;n_j +1,n_i-1;-{\,{\rm i}\,}
c_{ij}\rangle_{T_C\cO}
\label{E_0}
\end{equation}
occur only {\em once} in $T_C\cO$, which means that they are already
spanned by the image ${\cal J}(\mathfrak{g})$ since ${\cal J} \neq 0$ on the non-trivial
modes. This implies that $E_0 = {\cal J}(E_0) = {\cal J}(\mathfrak{h})$ where
\begin{equation}
\mathfrak{h} = \bigoplus_{i\neq j}\,{\mathbb{C}} \big||n_i-n_j|+1\,;\,n_i,n_j\,;\,{\,{\rm i}\,}
c_{ij}\big\rangle_\mathfrak{g} \ .
\label{mh-explicit}
\end{equation}
The linear independence of the subspaces ${\cal J}(\mathfrak{g} \ominus \mathfrak{h})$ and
${\cal J}^2(\mathfrak{g} \ominus \mathfrak{h})$ follows from the explicit embedding
$T_C\cO\hookrightarrow\mathfrak{u}(\mathcal{N}\,)$ given below. Therefore
${\cal J}(\mathfrak{g}\ominus \mathfrak{h}) \oplus {\cal J}^2(\mathfrak{g}\ominus \mathfrak{h}) $
spans the entire tangent space $T_C \cO$ except for the subspace
$E_1$, which gives the decomposition \eq{TCOdecomp} with
the various subspaces now explicitly identified. We have ${\cal J}(E_0) =
E_0$ and ${\cal J}(E_1) = E_1$, with the action of ${\cal J}$ given by diagonal
eigenvalues $\pm {\,{\rm i}\,}$ on the two components in \eq{E_0} and
\eq{E_1}. On the remaining space $T_C \cO \ominus E_0 \ominus E_1$ the
action of ${\cal J}$ is obtained by exchanging the two components in
\eq{TCOdecomp}.
To complete this analysis, we need to explicitly embed $T_C \cO$
into the space $\mathfrak{u}(\mathcal{N}\,)$, which
admits the $SU(2)$-invariant decomposition
\begin{eqnarray}
\mathfrak{u}(\mathcal{N}\,) ~\cong~ \mathfrak{g} \otimes \big((2)\otimes (2)\big) &=&
\bigoplus_{i,j=1}^n\, \big((n_i+1)\otimes (n_j+1)~\oplus~
(n_i-1)\otimes (n_j-1) \nonumber\\ && \quad\qquad
\oplus~(n_i+1)\otimes (n_j-1) ~\oplus~(n_i-1)\otimes (n_j+1) \big)
\label{ucN-decomp-2}
\end{eqnarray}
corresponding to \eq{C-decomp-1}.
Since we know the action of ${\cal J}$ on the rhs, we can
determine the map
\begin{equation}
{\cal J}\,:\, \mathfrak{g} ~\longrightarrow~ T_C \cO ~\hookrightarrow~ \mathfrak{g} \otimes
\big((2)\otimes (2)\big)
\end{equation}
using
\begin{equation}
{\cal J}\big(|n;n_i,n_j;{\,{\rm i}\,} c_{ij}\rangle_\mathfrak{g}\big) = \sum_{k,l=1}^n\,
{\cal J}\big(|n;n_k+ 1,n_l- 1;{\,{\rm i}\,} c_{kl}\rangle_{T_C\cO}\big)~
{}_{T_C\cO}\langle n;n_k+ 1,n_l- 1;{\,{\rm i}\,} c_{kl} |n;n_i,n_j;
{\,{\rm i}\,} c_{ij}\rangle_\mathfrak{g} \, + h.c.
\label{J-phi-action}
\end{equation}
The non-vanishing inner products in this expression can be
written in terms of Wigner $6j$-symbols for the group $SU(2)$, which
are known explicitly. This also enables one to compute the projection
\begin{eqnarray}
\Pi_0\,:\,T_C \cO ~\longrightarrow~ \mathfrak{g} \ , \qquad
V_0 \otimes\sigma^0+ V_i \otimes\sigma^i ~\longmapsto~ V_0
\label{p0-def}\end{eqnarray}
as
\begin{equation}
\Pi_0 |n;n_i+ 1,n_j- 1;{\,{\rm i}\,} c_{ij}\rangle_{T_C\cO} =\sum_{k,l=1}^n\,
|n;n_k,n_l;{\,{\rm i}\,} c_{kl}\rangle_\mathfrak{g} ~
{}_\mathfrak{g}\langle n;n_k,n_l;{\,{\rm i}\,} c_{kl}|n;n_i+ 1,n_j- 1;{\,{\rm i}\,} c_{ij}
\rangle_{T_C\cO} \ .
\end{equation}
In the basis \eq{TCOdecomp}, one has the useful explicit formula
\begin{equation}
\Pi_0\,{\cal J}(\mathfrak{g})={\rm ad}_{{\,{\rm i}\,} C_0}(\mathfrak{g})
\label{pi0-explicit}
\end{equation}
which is of order $\frac1N$ and can also be used for $E_0$,
while $\Pi_0\,{\cal J}^2(\mathfrak{g})$ is of order
$\frac1{N^2}$ and $\Pi_0(E_1)= \{0\}$.
\subsubsection*{\it General solutions}
The case where some of the irreducible representations $(n_i)$ have
multiplicity $k_i>1$ is a combination of the structures above for the
vacuum state and for the nondegenerate case. Now the
basis \eq{TO-basis} acquires additional labelling reflecting the
$\mathfrak{u}(k_i)$ degrees of freedom, and it takes the symbolic form
\begin{equation}
T_C \cO =\bigoplus_{i,j=1}^l \, {\mathbb{C}}\big|n\,;\,(n_i+ 1,a_i)\,,\,
(n_j- 1,a_j)\,;\,{\,{\rm i}\,} c_{ij}\big\rangle_{T_C\cO} \ .
\label{TO-basis-general}
\end{equation}
In particular, one can now easily compute the symplectic form on
$T_C\cO$ using \eq{symplectic-eval}. It is essentially given by the
complex structure ${\cal J}$.
\subsection{Fluctuations around the critical surfaces\label{Fluct}}
We conclude this section with a summary of the salient features of the
decompositions in Sections~\ref{LocalGeomO} and~\ref{ExplYMDecomp}
above, as pertaining to how they will be exploited in the next section
to evaluate fluctuation integrals over the local neighbourhoods of
Yang-Mills critical points. Recall that globally the critical surface
(with no fluxons) through some critical point $C$ is given by the
space of gauge transformations acting on $C$, as in
\eq{globalcritsurface}. Its tangent space is embedded locally as
\begin{equation}
T_C\cC_{(n_1,s_1),\dots,(n_k,s_k)} = {\cal J}(\mathfrak{g}\ominus\mathfrak{s})
\subset T_C \cO \ ,
\label{critical-embedding}
\end{equation}
which can be determined explicitly using \eq{J-phi-action}.
Recall also that the gauge stabilizer $\mathfrak{s}$ of $C$ consists of the
$SU(2)$ singlets in $\mathfrak{g}$. It is given by $\mathfrak{s} \cong\mathfrak{u}(n)$ for
the vacuum, and $\mathfrak{s}\cong\mathfrak{u}(1)^{n}$ for completely irreducible
saddle-points. In particular, $\mathfrak{s}$ is never trivial,
quite unlike the situation in ordinary
two-dimensional Yang-Mills theory~\cite{witten}. The global symmetry
cannot be disentangled in the noncommutative case, and the nonabelian
localization even at the global minimum is akin to that at higher
critical points of two-dimensional Yang-Mills theory or more precisely
at the flat connections of Chern-Simons gauge theory on a Seifert
fibration~\cite{witten}. The non-trivial part of the localization at
higher critical points will therefore be given by fluctuation
integrals over the spaces $E_0$, $E_1$ and $\mathfrak{s}$. The only effect of
the remaining part ${\cal J}(\mathfrak{g} \ominus \mathfrak{h})\oplus {\cal J}^2(\mathfrak{g} \ominus \mathfrak{h})
$ will be to induce normalization terms as for the vacuum critical
point. In particular, the subspaces ${\cal J}(\mathfrak{g}\ominus\mathfrak{s})$ and
${\cal J}^2(\mathfrak{g}\ominus\mathfrak{s})$ locally model the tangent space $T_C\cO$ near
the vacuum.
To understand the physical meaning of the subspace $E_1$, note that
the gauge field strength remains constant for variations along
$\phi=X\in E_1$, since
$\delta C_0\big|_{E_1}= {\,{\rm i}\,} [C,\phi]_0\in \Pi_0(E_1) =\{0\}$. Let us
compute the second order variation of the Yang-Mills action, given by
\begin{eqnarray}
\Tr\big(C_0~\delta^2C_0\big) &=& -\Tr\big( C_0\,[\,[C,\phi]\,,\,\phi]
\big)\nonumber\\[4pt]
&=& \Tr \big([C_0,\phi]\,[C,\phi]\big) ~=~ -N\,
\Tr \big({\rm ad}_{{\,{\rm i}\,} C_0}(\phi)\, {\cal J}(\phi)\big) \ .
\end{eqnarray}
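The cyclic-trace manipulation in the first equality, $-\Tr\big(C_0\,[\,[C,\phi]\,,\,\phi]\big)=\Tr\big([C_0,\phi]\,[C,\phi]\big)$, can be confirmed on random hermitian matrices; a minimal numpy sketch (our own sanity check, not part of the argument):

```python
import numpy as np

rng = np.random.default_rng(0)

def random_hermitian(d):
    A = rng.normal(size=(d, d)) + 1j * rng.normal(size=(d, d))
    return (A + A.conj().T) / 2

def comm(A, B):
    return A @ B - B @ A

d = 8
C, C0, phi = (random_hermitian(d) for _ in range(3))
lhs = -np.trace(C0 @ comm(comm(C, phi), phi))   # -Tr(C0 [[C, phi], phi])
rhs = np.trace(comm(C0, phi) @ comm(C, phi))    #  Tr([C0, phi] [C, phi])
assert np.isclose(lhs, rhs)
```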
Restricting to fluctuations $\phi=X\in E_1$ with respect to the
decomposition (\ref{E_1}) one has
\begin{equation}
\Bigl.\Tr\big(C_0~\delta^2C_0\big)\Bigr|_{E_1}=
-N\, \sum_{i>j}\,\Tr\big({\rm ad}_{{\,{\rm i}\,} C_0}
(X_{ji}^\dagger)\, {\cal J}(X_{ji})\big)= -2N\,\sum_{i>j}\,
|c_{ij}| \, \Tr \big(X_{ji}^\dagger\, X_{ji}\big)
\end{equation}
by using the actions (\ref{cJblockwise}) and
(\ref{adiC0blockwise}), cf. \eq{J-ad-id}. For the maximally nondegenerate
saddle-points, this fluctuation is thus negative, demonstrating that
the two-dimensional instantons on the fuzzy sphere $S_N^2$ are
generically {\it unstable}. On the other hand, since the subspace
$E_0= {\cal J}(\mathfrak{h})$ is obtained through gauge transformations, it produces
flat directions for the Yang-Mills action.
\bigskip
\section{Nonabelian localization\label{NonabLoc}}
This section is the crux of the present paper, wherein we shall derive
the semiclassical expansion of the partition function for Yang-Mills
theory on the fuzzy sphere $S_N^2$ and show that it agrees with the
known instanton expansion of quantum gauge theory on $S^2$ in the classical
limit $N\to \infty$. We will begin by describing the nonabelian
localization principle, adapted to our specific gauge theory.
We will then explicitly evaluate the contributions from
two extreme classes of Yang-Mills critical points, the vacuum and the
maximally irreducible solutions, and show that they give the
expected contributions to the path integral at large $N$. The
intermediate contributions from degenerate solutions, which we do not
treat in detail here, are somewhat more involved but can in principle
be evaluated using our techniques. The contribution from the vacuum to
the partition function could be expressed in terms of the abstract
cohomological formula of~\cite{witten1} given by intersection pairings
on the vacuum moduli space, or by using the more explicit residue
formula of~\cite{JK1}. The contributions from some higher unstable
critical points to the nonabelian localization formula are formally
described in~\cite{Paradan1,Woodward:2004xz,witten}, but the general
cases that we need (including reducible saddle points) are not
explicitly treated in full generality. Here we will directly evaluate,
following~\cite{witten}, the explicit quantum fluctuation integrals
near the critical points using the local symplectic geometry of the
previous section.
\subsection{Equivariant cohomology and the localization
principle\label{LocPrinc}}
The goal of this section is to compute the partition function of
quantum Yang-Mills theory on the fuzzy sphere defined by the action
(\ref{YM-action}) on the configuration space (\ref{orbit-2}) of gauge
fields. After an irrelevant shift of the covariant coordinates
\eq{covcoordsdef} which is equivalent to working with the reduced
Yang-Mills action \eq{YM-action-prime}, it is defined by
\begin{eqnarray}
Z&:=&\frac1{{\rm vol}(G)}\,\left(\frac{g}{4\pi\,N}\right)^{\dim(G)/2}\,
\int_{\cO}\, {\rm d} C~ \exp\Big(-\mbox{$\frac{N}{g}$}\,
\Tr\big(C_0^2\big)\Big) \nonumber\\[4pt] &=&
\frac1{{\rm vol}(G)}\,\left(\frac{g'}{2\pi}\right)^{\dim(G)/2}\,
\int_{\cO}\, \exp\Big(\omega -\mbox{$\frac{1}{2g'}$}\,
\Tr \big(C_0^2\big)\Big)
\label{Z-1}
\end{eqnarray}
where we have used the fact that the symplectic volume form
$\omega^d/d!$, with $d:=\dim_{{\mathbb{C}}}(\cO)$, defines the natural gauge
invariant measure on $\cO$ provided by the Cartan-Killing riemannian
volume form (up to some irrelevant normalization). This
follows from the fact that the natural invariant metric on $\cO$ is
K\"ahler. We have divided by the volume of the gauge group
$G=U(n\,N)$ with respect to the invariant Cartan-Killing form and by
another normalization factor for later convenience, and also
introduced the rescaled gauge coupling
\begin{equation}
g' = \frac g{2N} \ .
\label{gprime}\end{equation}
We will now describe, following~\cite{witten1,witten}, how the
technique of nonabelian localization can be applied to evaluate the
symplectic integral (\ref{Z-1}) exactly.
We begin by using a gaussian integration to rewrite \eq{Z-1} as
\begin{equation}
Z = \frac1{{\rm vol}(G)}\,\int_{\mathfrak{g}\times \cO}\, \Big[\,\frac{{\rm d}\phi}{2\pi}\,
\Big]~\exp\Big(\omega -{\,{\rm i}\,} \Tr(C_0\, \phi) -\mbox{$
\frac{g'}{2}$}\, \Tr \big(\phi^2\big)\Big) \ ,
\label{Z-2}\end{equation}
where the euclidean measure for integration over the gauge algebra
$\phi\in\mathfrak{g}=\mathfrak{u}(n\,N)$ is determined by the invariant
Cartan-Killing form. Since the moment map for the $G$-action on $\cO$
is given by \eq{momentmapred}, by \eq{hamiltonian} we have
\begin{equation}
{\rm d} \Tr(C_0\, \phi) = -\iota_{V_\phi} \omega \ .
\label{moment-property}
\end{equation}
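Mode by mode, the gaussian completion relating \eq{Z-1} and \eq{Z-2} is the elementary identity $\int_{-\infty}^{\infty}\,\frac{{\rm d}x}{2\pi}~{\rm e}^{-{\rm i}\,b\,x-\frac{g'}2\,x^2}=\frac1{\sqrt{2\pi\,g'}}\,{\rm e}^{-b^2/2g'}$, up to the overall normalization conventions absorbed into the measure. A quick numerical quadrature check (a sketch only, with sample values of $g'$ and $b$):

```python
import numpy as np

gp, b = 0.7, 1.3                                  # sample values of g' and b
x = np.linspace(-40.0, 40.0, 400001)
dx = x[1] - x[0]
# the odd imaginary part of exp(-i b x) integrates to zero against the gaussian
integrand = np.exp(-0.5 * gp * x**2) * np.cos(b * x)
numeric = integrand.sum() * dx / (2 * np.pi)
exact = np.exp(-b**2 / (2 * gp)) / np.sqrt(2 * np.pi * gp)
assert np.isclose(numeric, exact, rtol=1e-6)
```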
Introduce the BRST operator
\begin{equation}
Q= {\rm d} - {\,{\rm i}\,} \iota_{V_\phi} \ ,
\end{equation}
where ${\rm d}$ is the exterior derivative on $\Omega(\cO)$ and the
contraction $\iota_{V_\phi}$ acts trivially on $\phi$. It preserves
the gradation if one assigns charge~$+2$ to the elements $\phi$ of
$\mathfrak{g}$, and it satisfies
\begin{equation}
Q^2 = -{\,{\rm i}\,}\{{\rm d},\iota_{V_\phi}\} = -{\,{\rm i}\,} {\cal L}_{V_\phi}
\end{equation}
where ${\cal L}_{V_\phi}$ is the Lie derivative along the vector field
$V_\phi$. Thus $Q^2=0$ exactly on the space
\begin{equation}
\Omega_G(\cO):=\big({\mathbb{C}}[[\mathfrak{g}]]\otimes\Omega(\cO)\big)^G
\end{equation}
consisting of gauge invariant differential forms on $\cO$ which take
values in the ring of symmetric functions on the Lie algebra $\mathfrak{g}$.
By construction one has
\begin{equation}
Q\big(\omega -{\,{\rm i}\,} \Tr(C_0 \,\phi)\big) =0
\end{equation}
using \eq{omegaclosed} and \eq{moment-property}, and
\begin{equation}
Q\Tr \big(\phi^2\big) =0 \ .
\end{equation}
Therefore, the integrand of the partition function (\ref{Z-2}) defines
a $G$-equivariant cohomology class in $H_G(\cO)$, and the value of $Z$
depends only on this class. The integral of any $Q$-exact equivariant
differential form in $\Omega_G(\cO)$ over $\mathfrak{g}\times\cO$ is
clearly~$0$, as is the integral of any $\iota_{V_\phi}$-exact form
even if its argument is not gauge invariant. Thus $Z$ is unchanged by
adding any $Q$-exact form to the action, which will fix a gauge for
the localization. Hence we can replace it by
\begin{equation}
Z = \frac1{{\rm vol}(G)}\,\int_{\mathfrak{g}\times\cO}\,
\Big[\,\frac{{\rm d}\phi}{2\pi}\,\Big]~
\exp\Big(\omega -{\,{\rm i}\,} \Tr(C_0\, \phi) -\mbox{$\frac{g'}{2}$}\, \Tr
\big(\phi^2\big)+ t~ Q\alpha\Big) \ ,
\label{Z-3}
\end{equation}
which is independent of $t\in{\mathbb{R}}$ for any $G$-invariant one-form
$\alpha$ on $\cO$, where
\begin{equation}
Q\alpha = {\rm d}\alpha - {\,{\rm i}\,} \langle\alpha,V_\phi\rangle \ .
\label{Qalpha}\end{equation}
The independence of \eq{Z-3} on the particular representative
$\alpha\in\Omega(\cO)^G$ of its equivariant cohomology class will play
a crucial role in our evaluation of the partition function.
Expanding the integrand of (\ref{Z-3}) by writing $\exp(t~{\rm d}\alpha)$
as a polynomial in $t$ and using the fact that the configuration space
$\cO$ is compact, it follows
that for $t \to\infty$ the integral localizes at the stationary points
of $\langle\alpha,V_\phi\rangle$ in $\mathfrak{g}\times\cO$. By writing $V_\phi
= V_a ~\phi^a$, where $\phi^a$ is an orthonormal basis of $\mathfrak{g}^\vee$,
we have $\langle\alpha,V_\phi\rangle = \langle\alpha,V_a\rangle~\phi^a$
and the critical points are thus determined by the equations
\begin{eqnarray}
\langle\alpha,V_a\rangle &=& 0 \ , \label{crit-1}\\[4pt]
\phi^a ~{\rm d}\langle\alpha,V_a\rangle &=& 0 \ . \label{crit-2}
\end{eqnarray}
Since \eq{crit-2} is invariant under rescaling of $\phi$ and the Lie
algebra $\mathfrak{g}$ is contractible, the homotopy
type of the space of solutions in $\mathfrak{g}\times\cO$ is unchanged by
restricting to $\phi=0$ and the saddle-points reduce to the zeroes
of $\langle\alpha,V_a\rangle$ in $\cO$.
Given the reduced Yang-Mills function \eq{YM-action-prime}, let
us consider explicitly the invariant one-form $\alpha$ given
by~\cite{witten,szabo}
\begin{equation}
\alpha=-{\,{\rm i}\,}\Tr\big(C_0 \,[C,{\rm d} C]_0\big)= g'\,{\cal J}\big({\rm d} S'\,\big) \ .
\label{loc1form}\end{equation}
We claim that the vanishing locus of $\langle\alpha,V_a\rangle$ in
this case coincides with the critical surfaces of the original
Yang-Mills action (\ref{YM-action}) as found in
Section~\ref{CritPoints}. To see this, we note that the condition
\begin{equation}
0 = \langle\alpha,V_a\rangle =\Tr\big(C_0\,[C\,,\,[C,\phi^a]\,]_0
\big)=-\Tr\big([C,C_0]\,[C,\phi^a]\big)
\end{equation}
certainly holds whenever $[C,C_0]=0$. Conversely,
setting $\phi = C_0$ gives
\begin{equation}
0=\langle\alpha,V_\phi\rangle = -\Tr\big([C,C_0]^2\big)
\end{equation}
which by nondegeneracy of the inner product implies that $[C,C_0] =
0$. Therefore the action in \eq{Z-3} has indeed the same critical
points as the Yang-Mills action \eq{YM-action}.
Let us now explicitly establish, following~\cite{szabo}, the
localization of the partition function onto the classical solutions of
the gauge theory. Plugging (\ref{loc1form}) and (\ref{Qalpha}) into
\eq{Z-3} and carrying out the integration over $\phi\in\mathfrak{g}$ gives
\begin{eqnarray}
Z &=& \frac1{{\rm vol}(G)}\,\int_{\mathfrak{g}\times \cO} \,\Big[\,\frac{{\rm d}\phi}
{2\pi}\,\Big] ~\exp\Big(t~{\rm d}\alpha+\omega\Big)\nonumber\\ &&
\qquad\qquad\qquad \times~
\exp\Big(-{\,{\rm i}\,} \Tr(C_0\, \phi) -\mbox{$\frac{g'}{2}$} \Tr
\big(\phi^2\big)- {\,{\rm i}\,} t \,\Tr\big([C\,,\,[C,C_0]\,]\,\phi\big)
\Big)\nonumber\\[4pt] \label{Z-4}
&=& \frac1{{\rm vol}(G)}\,\left(\frac{g'}{2\pi}\right)^{\dim(G)/2}\,\int_{\cO}\,
\exp\Big(t~{\rm d}\alpha + \omega \Big)\\ && \qquad\qquad\qquad \times~
\exp\Big( -\mbox{$\frac{1}{2g'}$}\, \Tr\big(C_0^2\big)
+ \mbox{$\frac t{g'}$}\, \Tr\big(C_0\,[C\,,\,[C,C_0]\,]\big)
-\mbox{$\frac{t^2}{2g'}$}\, \Tr\big([C\,,\,[C,C_0]\,]\big)^2\Big)\nonumber
\end{eqnarray}
where we have used $\Tr(C\,[C,-]) =0$. The only configurations which
contribute to (\ref{Z-4}) in the large $t$ limit are therefore
solutions of the equation
\begin{equation}
[C\,,\,[C,C_0]\,]=0
\end{equation}
which implies as in~\cite{szabo} that
\begin{equation}
0 = \Tr\big(C_0 \,[C\,,\,[C,C_0]\,]\big) = - \Tr\big([C,C_0]^2
\big) \ ,
\end{equation}
giving $[C,C_0]=0$ as desired. Therefore the integral \eq{Z-4}
receives contributions only from the solutions of the Yang-Mills
equations \eq{eom}, which establishes the claimed localization.
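The nondegeneracy step, $\Tr\big([C,C_0]^2\big)=0\Rightarrow[C,C_0]=0$, holds because $[C,C_0]$ is anti-hermitian for hermitian $C$ and $C_0$, so $-\Tr\big([C,C_0]^2\big)$ is its squared Frobenius norm. A small numpy check with random matrices (our own sketch):

```python
import numpy as np

rng = np.random.default_rng(1)

def random_hermitian(d):
    A = rng.normal(size=(d, d)) + 1j * rng.normal(size=(d, d))
    return (A + A.conj().T) / 2

def comm(A, B):
    return A @ B - B @ A

d = 6
C, C0 = random_hermitian(d), random_hermitian(d)
K = comm(C, C0)
assert np.allclose(K.conj().T, -K)                             # [C, C0] is anti-hermitian
val = -np.trace(K @ K)                                         # -Tr([C, C0]^2)
assert np.isclose(np.trace(C0 @ comm(C, comm(C, C0))), val)    # Tr(C0 [C,[C,C0]]) = -Tr([C,C0]^2)
assert np.isclose(val, np.linalg.norm(K)**2)                   # = ||[C, C0]||_F^2 >= 0
assert val.real > 0                                            # nonzero for generic C, C0
```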
The local geometry in $\mathfrak{g}\times\cO$ about each critical point, as
analysed in detail in the last section, determines the partition
function as a sum of local contributions involving the values of the
Yang-Mills action evaluated on the classical solutions as in
Section~\ref{ClassAction}. Consider an equivariant tubular
neighbourhood $\mathcal{N}_{(n_1,s_1),\dots,(n_k,s_k)}$ of a critical surface
$\cC_{(n_1,s_1),\dots,(n_k,s_k)}$ in $\mathfrak{g}\times\cO$. Since the
partition function \eq{Z-3} is independent of $t$, we can consider its
large $t$ limit as above, and this limit will always be implicitly
assumed from now on. Let $\cW$ be a compact subset of $\cO$ with
$\cW\cap\cC=\emptyset$, where
$\cC:=\bigcup_{(n_i,s_i)}\,\cC_{(n_1,s_1),\dots,(n_k,s_k)}$. Then the
integral over $\cW$ in \eq{Z-4} has a gaussian decay as
$t\to\infty$. This means that in expanding $\exp(t~{\rm d}\alpha + \omega)$
into a finite sum of terms of the form $\omega^p\wedge (t~{\rm d} \alpha)^{ m}$,
we can disregard all terms which contain $\omega$ since they will
be suppressed by factors of $\frac 1t$ and vanish in the large $t$
limit. The only terms which survive the $t\to\infty$ limit are those
with $p=0,m=d$, and the integral therefore vanishes unless $\omega$ is
replaced by ${\rm d}\alpha$, except at the saddle point where
${\rm d}\alpha=0$. Then one has
\begin{equation}
Z = \frac1{{\rm vol}(G)}\,
\int_{\mathfrak{g}\times\cO}\, \Big[\,\frac{{\rm d}\phi}{2\pi}\,\Big]~
\exp\Big(t\, \big({\rm d}\alpha - {\,{\rm i}\,} \langle\alpha,V_\phi\rangle\big)\Big)~
\exp\Big(-{\,{\rm i}\,} \Tr(C_0\, \phi) -\mbox{$\frac{g'}{2}$}\, \Tr \big(\phi^2
\big)\Big)
\label{Z-5}
\end{equation}
in the vicinity of any critical point in which ${\rm d}\alpha$ is
nondegenerate.
The integral $Z_{(n_1,s_1),\dots,(n_k,s_k)}$ in (\ref{Z-5}) over the
neighbourhood $\mathcal{N}_{(n_1,s_1),\dots,(n_k,s_k)}$ is determined by the
local behaviour of $\alpha$ and the $G$-action near
$\cC_{(n_1,s_1),\dots,(n_k,s_k)}$. Then
\begin{equation}
Z=\sum_{\stackrel{\scriptstyle(n_1,s_1),\dots,(n_k,s_k)}
{\scriptstyle\sum_i\,n_i=n\,N~,~\sum_i\,s_i=n}}\,
Z_{(n_1,s_1),\dots,(n_k,s_k)} \ .
\label{Zlocalsum}\end{equation}
As expected~\cite{Paradan1}, the sum over critical surfaces in
\eq{Zlocalsum} contains the sum over weights $1\leq n_1\leq
n_2\leq\cdots\leq n_k$ of the gauge group $G=U(n\,N)$. Our explicit
computations will confirm the local behaviour of the partition
function given by~\cite{Paradan1}
\begin{equation}
Z_{(n_1,s_1),\dots,(n_k,s_k)}=\big(g'\,\big)^{-\dim(G)}~
{\,{\rm e}\,}^{-\frac1{2g'}\,\sum_i\,n_i^2}~H_{(n_1,s_1),\dots,(n_k,s_k)}
\big(\,\sqrt{g'}~\big) \ .
\label{ZParadan}\end{equation}
The smooth function $H_{(n_1,s_1),\dots,(n_k,s_k)}:{\mathbb{R}}\to{\mathbb{C}}$, which is
bounded by a polynomial at infinity, is determined by the equivariant
Euler class of the fixed point locus corresponding to the weight
$(n_1,\dots,n_k)$ after reducing the integral over $\mathfrak{g}$ to its Cartan
subalgebra, as we do explicitly in the next section.
\subsection{Explicit evaluation of the localization
forms\label{LocFormProps}}
The explicit computation of the local contributions
$Z_{(n_1,s_1),\dots,(n_k,s_k)}$ to the Yang-Mills partition function
on $S_N^2$ will rely on the local behaviour of the invariant one-form
$\alpha$ introduced in \eq{loc1form} near the Yang-Mills critical
points. We will now pause to derive explicit expressions for the BRST
transformations (\ref{Qalpha}) on the subspaces appearing in the
tangent space decomposition \eq{TCOdecomp}. Given the invariant
Maurer-Cartan one-form \eq{thetaCMdef} and the projector \eq{p0-def},
consider the $\mathfrak{u}(n\,N)$-valued one-form
\begin{equation}
\theta_0 := \Pi_0(\theta) = \mbox{$\frac 12$}\, \tr_\sigma (\theta)
\end{equation}
where $\tr_\sigma$ denotes the partial trace over the spin matrices
$\sigma^\mu$. It is given explicitly by
\begin{equation}
\theta_0=\mbox{$\frac4{N^2}$}\,\big(C~{\rm d} C\big)_0
= \mbox{$\frac4{N^2}$}\,\big(C_i~{\rm d} C^i + C_0~ {\rm d} C_0\big)
\end{equation}
and satisfies
\begin{equation}
{\rm d}\theta_0 = - \mbox{$\frac 12$}\,\tr_\sigma\big(\theta^2\big)
= - \Pi_0\big(\theta^2\big) \ .
\end{equation}
One has
\begin{equation}
\langle \theta,V_\phi\rangle=\mbox{$\frac{2}{N^2}$}\, [C,V_\phi]
= -\mbox{$\frac{2{\,{\rm i}\,}}{N}$}\, {\cal J} (V_\phi) \qquad \mbox{and} \qquad
\langle \theta_0,V_\phi\rangle = -\mbox{$\frac {2{\,{\rm i}\,}}N$}\,
\Pi_0\,{\cal J} (V_\phi)
\label{etatilde-eval}
\end{equation}
for any tangent vector $V_\phi = {\,{\rm i}\,}[C,\phi]$.
Using the identity $C~{\rm d} C=-{\rm d} C~C$, the localization one-form
(\ref{loc1form}) can now be written as
\begin{equation}
\alpha = -\mbox{$\frac {{\,{\rm i}\,} N^2}2$}\; \Tr(C_0\, \theta) =
-\mbox{$\frac {{\,{\rm i}\,} N^2}2$}\; \Tr(C\, \theta_0) \ .
\end{equation}
Hence the pairing in (\ref{Qalpha}) is given by
\begin{eqnarray}
\langle \alpha,V_\phi\rangle &=& - N\, \Tr\big(C_0\, {\cal J} (V_\phi)\big)
\nonumber\\[4pt] &=& N\, \Tr\big({\cal J} (C_0)\, V_\phi\big)
~=~ - N^2\, \Tr\big({\cal J}^2(C_0)\, \phi\big) \ .
\end{eqnarray}
This vanishes on the critical surfaces, where
${\cal J}(C_0)=0$. Furthermore, for any $g\in\mathfrak{g}$ one has
\begin{eqnarray}
\big\langle \alpha\,,\,{\cal J}^2(g)\big\rangle &=& - N \,
\Tr\big(C_0\, {\cal J}^3(g)\big)\nonumber\\[4pt]
&=& N \,\Tr\big(C_0\, {\cal J}(g)\big)
= {\,{\rm i}\,}\Tr\big(C_0\, [C_0,g]\big) ~=~0 \label{alpha-J2}
\end{eqnarray}
while for $e_0\in E_0$ one has
\begin{equation}
\big\langle \alpha\,,\,e_0\big\rangle = \big\langle \alpha\,,\,{\cal J}(e_0)
\big\rangle = \big\langle \alpha\,,\,{\cal J}^2(h)\big\rangle =0
\label{alpha-E0}\end{equation}
for some $h\in\mathfrak{h}$. Both identities \eq{alpha-J2} and \eq{alpha-E0}
hold even off-shell. We also note the on-shell relations
\begin{equation}
\big\langle \alpha\,,\,{\cal J}(g)\big\rangle = - N \,\Tr\big(C_0\,
{\cal J}^2(g)\big) =0 \qquad \mbox{and} \qquad
\big\langle \alpha\,,\,e_1\big\rangle =-N \,\Tr\big(C_0\, {\cal J}
(e_1)\big) =0 \label{alpha-E1}
\end{equation}
for $e_1\in E_1$.
To evaluate the integral \eq{Z-5} using the stationary phase method,
we must understand how it behaves near the Yang-Mills critical
points. For this, we will study the local behaviour of the BRST
variation (\ref{Qalpha}), beginning with the pairing
$\langle\alpha,V_\phi\rangle$. Let us write a generic gauge field of
$\cO$ as $C = \overline{C} + \varepsilon {\,{\rm i}\,}[\,\overline C,\Psi] + \frac 12\,
\varepsilon^2{\,{\rm i}\,}[\,\overline C\,,\,{\,{\rm i}\,}[\,\overline
C,\Psi]\,]+O(\varepsilon^3)$, where $\overline{C}$ is the given critical
point, $\Psi \in\mathfrak{s}\mathfrak{u}(\mathcal{N}\,)$ are the fluctuations around $\overline C$
and $\varepsilon$ is a small real parameter. Then
\begin{eqnarray}
{\cal J}^2(C_0) &=& 0 + \varepsilon\, \Big(
{\cal J}^2\big({\,{\rm i}\,}[\,\overline C,\Psi]_0\big) + \mbox{$\frac{\,{\rm i}\,} N$}\,\big[{\,{\rm i}\,}
[\,\overline C,\Psi]\,,\,{\cal J}(\,\overline C_0)\big]
+ \mbox{$\frac{\,{\rm i}\,} N$}\,{\cal J}\big([{\,{\rm i}\,}[\,\overline C,\Psi]\,,\,\overline
C_0]\big)\Big) + O\left(\varepsilon^2\right) \nonumber\\[4pt]
&=& \varepsilon\, \Big({\cal J}^2\big((V_\Psi)_0\big)
+ \mbox{$ \frac{\,{\rm i}\,} N$}\,{\cal J}\big([V_\Psi,\overline C_0]\big) \Big) + O
\left(\varepsilon^2\right) \ ,
\label{J2C0explO1}\end{eqnarray}
which for $\phi \in \mathfrak{g}$ gives
\begin{eqnarray}
\langle \alpha,V_\phi\rangle
&=& -\varepsilon\, N^2 \,\Tr\big({\cal J}^2((V_\Psi)_0)\,\phi
+\mbox{$\frac{\,{\rm i}\,} N$}\,\big[{\cal J}(V_\Psi),\,\overline C_0\,\big] ,\phi
\big) +O\left(\varepsilon^2\right)\nonumber\\[4pt]
&=& -\varepsilon\, N^2 \,\Tr\big((V_\Psi)_0\,{\cal J}^2(\phi)
+ {\cal J}(V_\Psi) \big[\,\mbox{$\frac{\,{\rm i}\,} N$}\,\overline C_0\,,\phi\,\big]
\big) +O\left(\varepsilon^2\right)\nonumber\\[4pt]
&=& -\varepsilon\, N^2 \,\Tr\Big(V_\Psi\,\big({\cal J}^2(\phi)_0
- {\cal J}({\cal J}(\phi)_0) \big)\Big) +O\left(\varepsilon^2\right) \ ,
\label{alphaV-expand1}
\end{eqnarray}
using \eq{pi0-explicit}. This is non-degenerate for $\phi\in\mathfrak{g}
\ominus \mathfrak{s} \ominus \mathfrak{h}$, i.e. non-vanishing
for some $V_\Psi \in T_C \cO$. To see this, it is
sufficient to show that ${\cal J}({\cal J}^2(\phi)_0 - {\cal J}({\cal J}(\phi)_0)) \neq 0$.
Indeed, assuming the contrary,
${\cal J}({\cal J}^2(\phi)_0) = {\cal J}^2({\cal J}(\phi)_0)$ would imply that either
$\phi \in \mathfrak{s}$, or ${\cal J}(\phi)_0 \in \mathfrak{h}$ which amounts to $\phi \in
\mathfrak{h}\oplus \mathfrak{s}$. On the other hand, this pairing is indeed degenerate
for any $V_\Psi \in E_1$.
For $\phi \in \mathfrak{s}$, the second-order contribution to the
form (\ref{alphaV-expand1}) can be obtained from
\begin{equation}
V_\phi = {\,{\rm i}\,}[C,\phi] = {\,{\rm i}\,}\varepsilon[V_\Psi,\phi] + O
\left(\varepsilon^2\right)
\end{equation}
and
\begin{equation}
{\cal J}(C_0) =\mbox{$\frac{\,{\rm i}\,} N$}\,[C,C_0] = \mbox{$\frac{\,{\rm i}\,} N$} \,
\varepsilon \, \big([V_\Psi,C_0] +[C,(V_\Psi)_0]\big)+O
\left(\varepsilon^2\right) \ .
\end{equation}
It follows that
\begin{equation}
\langle \alpha,V_\phi\rangle =
-\varepsilon^2 \,\Tr\Big({\rm ad}_\phi(V_\Psi)\, \big({\rm ad}_{C_0}(V_\Psi)+
{\,{\rm i}\,} N\, {\cal J}((V_\Psi)_0)\big)\Big)+O\left(\varepsilon^3\right) \ .
\label{alpha-eval-second}
\end{equation}
In particular, for $V_\Psi \in E_1$ this pairing simplifies to
\begin{equation}
\langle \alpha,V_\phi\rangle =
-\varepsilon^2 \,\Tr\big({\rm ad}_\phi(V_\Psi)~{\rm ad}_{C_0}(V_\Psi)\big)+O
\left(\varepsilon^3\right) \ .
\label{alpha-expand}
\end{equation}
We now turn to the exact part ${\rm d}\alpha$ of (\ref{Qalpha}). Using
\eq{thetaconstrs}--\eq{MCformprop2}, one finds
\begin{equation}
{\rm d}\alpha=-{\,{\rm i}\,}\mbox{$\frac{N^2}2$} \,\Tr\big({\rm d} C~ \theta_0 -
C_0 \,\theta^2\big)
=-{\,{\rm i}\,}\mbox{$\frac{N^2}2$} \,\Tr\big(C\, \theta\,
\theta_0+C_0~ {\rm d}\theta\big) \ . \label{da-2}
\end{equation}
For flat connections with $F=0$, the second term in the first equality
of \eq{da-2} vanishes and one has
\begin{equation}
{\rm d}\alpha= -{\,{\rm i}\,}\mbox{$\frac{N^2}2$} \,\Tr({\rm d} C ~\theta_0)
=-{\,{\rm i}\,}\mbox{$\frac{N^2}2$} \,\Tr(C\, \theta\, \theta_0) \qquad
\mbox{if}\quad C_0
=\mbox{$\frac12$}~\mbox{1 \kern-.59em {\rm l}}_{n\,N} \ .
\end{equation}
{}From \eq{thetaCrel1} and \eq{thetaconstrs} one generally has
$\theta^2= -\frac{4}{N^2}~({\rm d} C)^2$, and hence
\begin{eqnarray}
\big\langle\Tr(C_0\, \theta^2)\,,\,V_\phi\wedge V_\psi\big\rangle &=&
\mbox{$\frac{4}{N^2}$} \,\Tr\big(C_0 \,[\,[C,\phi]\,,\,[C,\psi]\,]
\big) \nonumber\\[4pt]
&=& \mbox{$\frac{4}{N^2}$} \,\Tr\big([C_0\,,\, [C,\phi]\,]\,[C,\psi]
\big) ~=~-\mbox{$\frac{4}{N^2}$}\,\Tr\big({\rm ad}_{C_0}(V_\phi) \,V_\psi
\big)
\label{da1-eval}
\end{eqnarray}
for any pair of tangent vectors $V_\phi = {\,{\rm i}\,}[C,\phi]$ and $V_\psi =
{\,{\rm i}\,}[C,\psi]$. Similarly, one has
\begin{equation}
\big\langle \Tr(C\, \theta\,\theta_0) \,,\,V_\phi\wedge V_\psi\big
\rangle = \big\langle \Tr({\rm d} C~ \theta_0) \,,\,V_\phi\wedge V_\psi
\big\rangle= - \mbox{$\frac{2{\,{\rm i}\,}}{N}$} \,\Tr\big(V_\phi \,
{\cal J} (V_\psi)_0 - V_\psi\, {\cal J} (V_\phi)_0\big)
\label{TrCtheta0pair}\end{equation}
which vanishes if any of the arguments belongs to the subspace $E_1$.
If $V_\psi = {\cal J}(h) \in E_0$ for some $h\in\mathfrak{h}$, then by using the map
(\ref{juniquemap}) along with (\ref{TrCtheta0pair}) one computes
the on-shell pairing
\begin{eqnarray}
\big\langle\Tr(C\, \theta\,\theta_0) \,,\,V_\phi\wedge V_\psi
\big\rangle &=& - \mbox{$\frac{2{\,{\rm i}\,}}{N}$}\,\Tr\big(V_\phi\,{\cal J}(j(h))_0
-{\cal J}(h)_0 \,{\cal J}( V_\phi)\big) \nonumber\\[4pt]
&=& - \mbox{$\frac{2}{N^2}$}\,\Tr\big({\rm ad}_{C_0}(V_\phi)\,j(h) +
{\rm ad}_{C_0}(h)\, {\cal J}(V_\phi)\big) \nonumber\\[4pt]
&=& - \mbox{$\frac{2}{N^2}$}\,\Tr\big( N~{\rm ad}_{C_0}({\cal J}(\phi))\,
j(h)+ {\rm ad}_{C_0}(h)\, {\cal J}(V_\phi)\big) \nonumber\\[4pt]
&=& - \mbox{$\frac{2}{N^2}$}\,\Tr\big(-N~{\rm ad}_{C_0}(\phi)\, {\cal J}^2(h)
+ {\rm ad}_{C_0}(h)\, {\cal J}(V_\phi)\big) \nonumber\\[4pt]
&=& - \mbox{$\frac{2}{N^2}$}\, \Tr\big(N~{\rm ad}_{C_0}({\cal J}(\phi))\,
{\cal J}(h) - {\rm ad}_{C_0}({\cal J}(h))\, V_\phi\big)\nonumber\\[4pt]
&=&-\mbox{$\frac{2}{N^2}$}\,
\Tr\big({\rm ad}_{C_0}(V_\phi)\, V_\psi - {\rm ad}_{C_0}(V_\psi)\, V_\phi\big) \
.
\end{eqnarray}
This coincides with \eq{da1-eval}, and in particular it vanishes
unless the vector field $V_\phi$ also belongs to the subspace
$E_0$. In summary, we have the on-shell evaluations
\begin{equation}
\langle {\rm d}\alpha,V_\phi\wedge V_\psi\rangle =
2{\,{\rm i}\,}\Tr\big(V_\phi~{\rm ad}_{C_0}(V_\psi)\big) \qquad \mbox{if} \quad
V_\psi \in E_1 \label{da-E1}
\end{equation}
and
\begin{equation}
\langle {\rm d}\alpha,V_\phi\wedge V_\psi\rangle = 0 \qquad \mbox{if} \quad
V_\psi \in E_0 \ .
\label{da-E0}
\end{equation}
\subsection{Localization at the vacuum moduli
space\label{LocVacSurface}}
We will now compute the localized partition function
$Z_0:=Z_{(N,1),\dots,(N,1)}$ at the vacuum critical surface. We denote
this gauge orbit as
\begin{equation}
\cO_0 := \cC_{(N,1),\dots,(N,1)}
= \big\{g\, C \,g^{-1}~\big|~ g \in U(n\,N)\big\}
\cong U(n\,N)/U(n) \ .
\end{equation}
In this case the subspaces $E_0$ and $E_1$ in (\ref{TCOdecomp}) are
trivial. Localization implies that we can restrict ourselves to a
$G$-equivariant tubular neighbourhood $\mathcal{N}_0=\mathcal{N}_{(N,1),\dots,(N,1)}$
of the critical surface, under the action of the gauge group $G =
U(n\,N)$. The neighbourhood $\mathcal{N}_0$ has an equivariant
retraction~\cite[Chap.~27]{GSbook} by a local equivariant
symplectomorphism onto the {\it local symplectic model} ${\cal F}_0$,
defined to be an equivariant symplectic vector bundle over $\cO_0$ with fibre
${\cal J}^2(\mathfrak{g}\ominus\mathfrak{s})$ which is a sub-bundle of the tangent bundle
$T\cO$ restricted to $\cO_0$. This means that the tangent space to
${\cal F}_0$ at the vacuum critical point $C$ in \eq{vacsolnonab} is
given by $T_C \cO_0 \oplus{\cal J}^2(\mathfrak{g}\ominus\mathfrak{s})
\cong{\cal J}(\mathfrak{g}\ominus\mathfrak{s})\oplus{\cal J}^2(\mathfrak{g}\ominus\mathfrak{s}) = T_C \cO$, the
symplectic two-form on ${\cal F}_0$ is simply $\omega$, and the
hamiltonian $G$-action on ${\cal F}_0$ descends from the moment map
$\mu$. In physical terms, the gauge fields are decomposed along the
vacuum moduli space $\cO_0$ plus infinitesimal non-gauge variations in
the subspace ${\cal J}^2(\mathfrak{g}\ominus\mathfrak{s})$.
Due to the presence of the localization form $\alpha$ in the path
integral, we can restrict ourselves to this model ${\cal F}_0$ and use
it to replace the open neighbourhood $\mathcal{N}_0$~\cite{witten}. Indeed,
because ${\cal F}_0$ is an equivariant retraction from $\mathcal{N}_0$, the
$G$-equivariant cohomology of $\mathcal{N}_0$ is the same as that of ${\cal
F}_0$. Furthermore, since the fibres of the bundle ${\cal F}_0$ are
contractible, its $G$-equivariant cohomology is identified under
pullback with the $S$-equivariant cohomology of its base space
$\cO_0$, so that $H_G(\mathcal{N}_0)\cong H_S(\cO_0)$. Since $S$ acts
trivially on $\cO_0$, one has $H_S(\cO_0)\cong\Gamma[[\mathfrak{s}]]^S\otimes
H(\cO_0)$ and the $S$-equivariant cohomology classes of $\cO_0$
coincide with ordinary cohomology classes of $\cO_0$ valued in the
ring of invariant functions on the stabilizer $\mathfrak{s}$. Putting
everything together gives an isomorphism
$H_G(\mathcal{N}_0)\cong\Gamma[[\mathfrak{s}]]^S\otimes H(\cO_0)$ which reduces the
equivariant integral over $\mathfrak{g}\times\mathcal{N}_0$ in \eq{Z-5} to an ordinary
integral over $\mathfrak{s}\times\cO_0$. This is precisely the nonabelian
localization that is formally carried out in~\cite{Woodward:2004xz},
and will turn out to be very much like the localization at the trivial
connection of Chern-Simons theory on a Seifert homology
sphere~\cite{witten}. In the present case, the integral over
$\phi\in\mathfrak{s}$ will then give the interesting non-trivial quantum
fluctuation determinants about the classical solution. We will now
carry out this reduction explicitly.
Let $g'_i$ be an orthonormal basis of $\mathfrak{g}' = \mathfrak{g}\ominus\mathfrak{s}$, and consider the
corresponding basis
\begin{equation}
J_i = {\cal J}(g'_i) \qquad \mbox{and} \qquad \tilde J_j = {\cal J}^2(g'_j)
\label{J-i}
\end{equation}
of $T_C\cO={\cal J}(\mathfrak{g}\ominus\mathfrak{s}) \oplus {\cal J}^2(\mathfrak{g}\ominus\mathfrak{s})$, with the dual
basis $\lambda^i, \tilde \lambda^j$ defined by
\begin{equation}
\big\langle\lambda^i\,,\, J_j\big\rangle = \delta^i{}_j \ ,
\quad \big\langle\,\tilde\lambda^i\,,\, \tilde J_j\big\rangle =
\delta^i{}_j \qquad \mbox{and} \qquad \big\langle\lambda^i\,,\,\tilde
J_j\big\rangle =\big\langle\,\tilde\lambda^i\,,\, J_j\big\rangle = 0 \ .
\end{equation}
Introduce the functions
\begin{equation}
f_i = \langle\alpha,J_i\rangle
\end{equation}
which vanish on-shell but have non-degenerate
derivatives ${\rm d} f_i$ due to \eq{alphaV-expand1}.
Then by expanding $\phi = \phi^i~ g'_i + \phi^a~ s_a$ into components
$\phi^i$ along $\mathfrak{g}\ominus\mathfrak{s}$ and $\phi^a$ along $\mathfrak{s}$, we have
\begin{equation}
\langle\alpha, V_\phi\rangle =
N\,\big\langle\alpha\,,\, {\cal J}(\phi)\big\rangle = N \,f_i \,\phi^i \ .
\label{alpha-Vphi}
\end{equation}
It follows that the localization one-form can be expanded as
\begin{equation}
\alpha = f_i~ \lambda^i
\end{equation}
with
\begin{equation}
{\rm d}\alpha = {\rm d} f_i\wedge\lambda^i + f_i ~{\rm d}\lambda^i \ .
\end{equation}
In particular, one has
\begin{equation}
\frac{({\rm d}\alpha)^d}{d!} =
\bigwedge_{i=1}^d\, \left({\rm d} f_i \wedge \lambda^i\right) + f_j~
\Upsilon^j
\end{equation}
where $d=\dim_\Gamma(\cO_0)=n^2\,(N^2-1)$ is the (real) dimension of the
vacuum orbit $\cO_0$. The forms $f_j~\Upsilon^j$ vanish on-shell, and are
killed by localization in the large $t$ limit. For example, inner
products of the form $\langle\alpha, {\cal J}(s)\rangle$ with $s\in\mathfrak{s}$ are
non-vanishing off-shell at second order due to \eq{alpha-eval-second},
but such higher-order terms are likewise suppressed by the
localization at large $t$, as can be seen explicitly by rescaling
$f_i = \sqrt{t}\,f_i'$.
The corresponding local contribution to the partition function
(\ref{Z-5}) for $t\to\infty$ is then given by
\begin{eqnarray}
Z_0 &=&\frac1{{\rm vol}(G)}\,
\int_{\mathfrak{g}\times {\cal F}_0}\,\Big[\,\frac{{\rm d}\phi}{2\pi}\,\Big]~
\frac{t^d}{d!}\,({\rm d}\alpha)^{d}~
{\,{\rm e}\,}^{-{\,{\rm i}\,} t\,\langle\alpha,V_\phi\rangle-{\,{\rm i}\,} \Tr(C_0\, \phi) -\frac{g'}{2}\,
\Tr (\phi^2)}\nonumber\\[4pt]
&=& \frac1{{\rm vol}(G)}\,
\int_{\mathfrak{g}\times {\cal F}_0}\,\Big[\,\frac{{\rm d}\phi}{2\pi}\,\Big]~
t^d~\bigwedge_{i=1}^d\, \left({\rm d} f_i \wedge \lambda^i\right)~
{\,{\rm e}\,}^{-{\,{\rm i}\,} N\,t\, f_i \,\phi^i-{\,{\rm i}\,} \Tr(C_0\, \phi) -\frac{g'}{2} \,
\Tr (\phi^2)} \nonumber\\[4pt]
&=& \frac1{{\rm vol}(G)}\,
\int_{\mathfrak{s}}\,\Big[\,\frac{{\rm d}\phi}{2\pi}\,\Big]~
{\,{\rm e}\,}^{-{\,{\rm i}\,} \Tr(C_0\, \phi) -\frac{g'}{2}\,\Tr (\phi^2)}~
\frac 1{N^{d}}\,\int_{\cO_0}\, \bigwedge_{i=1}^{d}\,\lambda^i \ .
\label{Z0-1}\end{eqnarray}
Here the $f_i$ integrals over the fibre ${\cal J}^2(\mathfrak{g}\ominus\mathfrak{s})$
have produced delta-functions setting $\phi^i=0$ in
$\mathfrak{g}\ominus\mathfrak{s}$. We can carry out the integral over the moduli space
$\cO_0$ in \eq{Z0-1} by observing that
\begin{equation}
\frac 1{N^{d}}\,\int_{\cO_0}\,\bigwedge_{i=1}^{d}\,\lambda^i=\int_{G/S}\,
\bigwedge_{i=1}^{d}\,\eta^i= \frac{{\rm vol}(G)}{{\rm vol}(S)} \ ,
\label{intcO0}\end{equation}
where the pullbacks ${\cal J}^*(\lambda^i) = \eta^i$ define left-invariant
one-forms on the gauge group $G$ dual to $g'_i$,
with the map $N\,{\cal J}$ regarded as the
derivative of the diffeomorphism
\begin{equation}
G/S ~\longrightarrow~ \cO_0 \ , \qquad g ~\longmapsto~ g\, C \,g^{-1}
\ .
\end{equation}
To evaluate the remaining integral over the gauge stabilizer algebra
$\mathfrak{s}\cong\mathfrak{u}(n)$ in \eq{Z0-1}, we note that, for the vacuum critical
point with $C_0=\frac12\,\mbox{1 \kern-.59em {\rm l}}_{n\,N}$, the integrand defines a gauge
invariant function $f:\mathfrak{u}(n)\to{\mathbb{R}}$. We may thus apply to it the Weyl
integration formula which reduces its integral over $\mathfrak{u}(n)$ to an
integral over the Lie algebra $\mathfrak{u}(1)^n$ of the maximal torus
$U(1)^n$ of $U(n)$. It is given by
\begin{equation}
\int_{\mathfrak{u}(n)}\,[{\rm d}\phi]~f(\phi)=\frac{{\rm vol}\big(U(n)\big)}
{n!\,(2\pi)^n}\,\int_{{\mathbb{R}}^n}\,[{\rm d} s]~\Delta(s)^2~f(s) \ ,
\label{Weylint}\end{equation}
where we have identified $\mathfrak{u}(1)^n\cong{\mathbb{R}}^n$ in a basis where the
Cartan subalgebra of $U(n)$ is represented by diagonal $n\times n$
matrices $s={\rm diag}(s_1,\dots,s_n)$ by mapping them onto $n$-vectors
$s=(s_1,\dots,s_n)\in{\mathbb{R}}^n$. Here
\begin{equation}
\Delta(s)=\prod_{i<j}\,(s_i-s_j)=\det_{1\leq i,j\leq n}\,
\big(s_i^{\,n-j}\big)
\end{equation}
is the Vandermonde determinant, which is the Weyl determinant for
$U(n)$ arising as the jacobian for the diagonalization of hermitian
matrices on the left-hand side of (\ref{Weylint}). The factor $n!$ is
the order of the Weyl group $\mathfrak{S}_n$ of $U(n)$ acting by permutations
of the components $s_i$ of $s\in{\mathbb{R}}^n$, while $(2\pi)^n$ is the
volume of the maximal torus $U(1)^n$ with respect to the chosen
invariant Haar measure.
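As an elementary illustration of \eq{Weylint}, consider the gaussian
class function $f(\phi)={\,{\rm e}\,}^{-\frac{g}4\,\Tr(\phi^2)}$ for $n=2$. The
cross term of $\Delta(s)^2=s_1^2-2\,s_1\,s_2+s_2^2$ integrates to
zero, and the remaining gaussian moments are elementary, giving
\begin{equation}
\int_{\mathfrak{u}(2)}\,[{\rm d}\phi]~{\,{\rm e}\,}^{-\frac{g}4\,\Tr(\phi^2)}=
\frac{{\rm vol}\big(U(2)\big)}{2\,(2\pi)^2}\,\int_{{\mathbb{R}}^2}\,
{\rm d} s_1~{\rm d} s_2~(s_1-s_2)^2~{\,{\rm e}\,}^{-\frac{g}4\,(s_1^2+s_2^2)}
= \frac{2\,{\rm vol}\big(U(2)\big)}{\pi\, g^2} \ .
\end{equation}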
\subsubsection*{\it An integral identity}
We will make use here and in Section~\ref{LocMaxNon-Deg} below of the
integral identity
\begin{eqnarray}
&& \int_{{\mathbb{R}}^n}\, [{\rm d} s]~\Delta(s)^2~
{\,{\rm e}\,}^{-{\,{\rm i}\,} \frac N2\, \sum_i\, s_i + \frac\ii4\, \sum_i\, m_i\, s_i
-\frac{g}{4}\, \sum_i\, s_i^2} \nonumber\\
&& \qquad\qquad\qquad~=~ {\,{\rm e}\,}^{-\frac{n\,N^2-m\,N}{4g}}\,
\int_{{\mathbb{R}}^n}\,[ {\rm d} s]~\Delta(s)^2~
{\,{\rm e}\,}^{\frac\ii4\, \sum_i\, m_i\,s_i -\frac{g}{4}\, \sum_i\, s_i^2}
\label{Lemma}\end{eqnarray}
where $m = \sum_i\, m_i$. To derive \eq{Lemma}, we set $s = \sum_i\,
s_i$ and $t_i = s_i - \frac 1n\, s$ so that $\sum_i\, t_i =0$. Then
\begin{eqnarray}
&& \int_{{\mathbb{R}}^n}\,[ {\rm d} s]~\Delta(s)^2~
{\,{\rm e}\,}^{-{\,{\rm i}\,} \frac N2\, \sum_i\, s_i + \frac\ii4\, \sum_i\, m_i\, s_i
-\frac{g}{4}\, \sum_i\, s_i^2} \nonumber\\
&&\qquad\qquad\qquad~=~ \int_{{\mathbb{R}}}\,{\rm d} s~
{\,{\rm e}\,}^{-{\,{\rm i}\,} \frac N2\, s + {\,{\rm i}\,}\frac{m}{4n}\, s}~
\int_{{\mathbb{R}}^n}\, [{\rm d} t]~\Delta(t)^2~
{\,{\rm e}\,}^{\frac\ii4\, \sum_i\, m_i\, t_i-\frac{g}{4}\, \sum_i\,
(t_i + \frac 1n\, s)^2} \nonumber\\[4pt]
&&\qquad\qquad\qquad~=~ \int_{{\mathbb{R}}}\,{\rm d} s~
{\,{\rm e}\,}^{-{\,{\rm i}\,} \frac N2\, s + {\,{\rm i}\,}\frac{m}{4n}\, s- \frac{g}{4n}\, s^2}~
\int_{{\mathbb{R}}^n}\, [{\rm d} t]~\Delta(t)^2~
{\,{\rm e}\,}^{\frac\ii4\, \sum_i\, m_i\, t_i-\frac{g}{4}\, \sum_i\, t_i^2}
\nonumber\\[4pt]
&&\qquad\qquad\qquad~=~ 2\,\sqrt{\mbox{$\frac{ \pi\,n}{g}$}}~
{\,{\rm e}\,}^{-\frac{(2n\,N -m)^2}{16n\, g}}\,
\int_{{\mathbb{R}}^n}\, [{\rm d} t]~\Delta(t)^2~
{\,{\rm e}\,}^{\frac\ii4\, \sum_i\, m_i\, t_i-\frac{g}{4}\, \sum_i\, t_i^2} \ .
\end{eqnarray}
On the other hand
\begin{eqnarray}
\int_{{\mathbb{R}}^n}\,[ {\rm d} s]~\Delta(s)^2~
{\,{\rm e}\,}^{\frac\ii4\, \sum_i\, m_i\, s_i -\frac{g}{4}\, \sum_i\, s_i^2}&=&
\int_{\mathbb{R}}\,{\rm d} s~{\,{\rm e}\,}^{{\,{\rm i}\,}\frac{m}{4n}\, s}~
\int_{{\mathbb{R}}^n}\, [{\rm d} t]~ \Delta(t)^2~
{\,{\rm e}\,}^{\frac\ii4\, \sum_i\, m_i\, t_i -\frac{g}{4}\, \sum_i\,
(t_i + \frac 1n\, s)^2} \nonumber\\[4pt]
&=&\int_{{\mathbb{R}}}\, {\rm d} s~{\,{\rm e}\,}^{{\,{\rm i}\,}\frac{m}{4n}\, s -
\frac{g}{4n}\, s^2}~\int_{{\mathbb{R}}^n}\,[ {\rm d} t]~\Delta(t)^2~
{\,{\rm e}\,}^{ \frac\ii4\, \sum_i\, m_i\, t_i-\frac{g}{4}\, \sum_i\, t_i^2}
\nonumber\\[4pt]
&=&2\,\sqrt{\mbox{$\frac{ \pi\,n}{g}$}}~
{\,{\rm e}\,}^{-\frac{m^2}{16n\, g}}~
\int_{{\mathbb{R}}^n}\,[ {\rm d} t]~\Delta(t)^2~
{\,{\rm e}\,}^{\frac\ii4\, \sum_i\, m_i\, t_i-\frac{g}{4}\, \sum_i\, t_i^2} \ .
\nonumber\\
\end{eqnarray}
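Both computations above rely only on the elementary one-dimensional
gaussian integral
\begin{equation}
\int_{{\mathbb{R}}}\,{\rm d} s~{\,{\rm e}\,}^{{\,{\rm i}\,} b\, s - a\, s^2} =
\sqrt{\mbox{$\frac{\pi}a$}}~\,{\,{\rm e}\,}^{-\frac{b^2}{4a}}
\end{equation}
with $a=\frac{g}{4n}$, and respectively $b=\frac{m}{4n}-\frac N2$ and
$b=\frac{m}{4n}$. The ratio of the two resulting prefactors is
${\,{\rm e}\,}^{-\frac{(2n\,N-m)^2-m^2}{16n\, g}} =
{\,{\rm e}\,}^{-\frac{n\,N^2-m\,N}{4g}}$, which establishes \eq{Lemma}.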
\subsubsection*{\it Final reduction}
{}From \eq{Z0-1}, \eq{intcO0} and \eq{Weylint} we obtain
\begin{eqnarray}
Z_0 &=& \frac1{{\rm vol}(S)}\, \int_{\mathfrak{s}}\,
\Big[\,\frac{{\rm d}\phi}{2\pi}\,\Big]~
{\,{\rm e}\,}^{-{\,{\rm i}\,} \Tr(C_0\, \phi) -\frac{g'}{2}\,\Tr (\phi^2)} \nonumber\\[4pt]
&=&\frac{1}{n!}\, \frac 1{(2\pi)^{n^2}}\,
\int_{{\mathbb{R}}^n}\, \Big[\,\frac{{\rm d} s}{2\pi}\,\Big]~
\Delta(s)^2~{\,{\rm e}\,}^{-{\,{\rm i}\,} \frac N2\, \sum_i\, s_i -\frac{g}{4}\,
\sum_i\, s_i^2}
\label{Z0-2}\end{eqnarray}
where we have substituted \eq{gprime} and used ${\rm vol}(S) =
{N}^{n^2/2}~{\rm vol}(U(n))$ with respect to the Cartan-Killing metric on
$\mathfrak{s}$, since $S = U(n) \otimes \mbox{1 \kern-.59em {\rm l}}_N$. Applying the integral identity
\eq{Lemma} therefore allows us to finally write the partition function
as
\begin{equation}
Z_0 = \frac{1}{n!}\,\frac 1{(2\pi)^{n^2+n}}~
{\,{\rm e}\,}^{-\frac{ n\,N^2}{4 g}}\,
\int_{{\mathbb{R}}^n}\, [{\rm d} s]~\Delta(s)^2~
{\,{\rm e}\,}^{ -\frac{g}{4}\, \sum_i\, s_i^2} \ .
\label{Z0final}\end{equation}
The exponential prefactor in the above expression
is the Boltzmann weight of
the action \eq{YM-action-prime} evaluated on the vacuum solution. The
remaining quantum fluctuation integral is the standard
expression~\cite{Minahan:1993tp} for the contribution from the global
minimum of the Yang-Mills action on $S^2$ to the $U(n)$ sphere
partition function. It arises from the trivial instanton configuration
with vanishing monopole charges $m_i=0$ in \eq{nidomclass}.
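For instance, in the abelian case $n=1$ the expression \eq{Z0final}
reduces to a single gaussian integral and gives
\begin{equation}
Z_0 = \frac1{(2\pi)^2}~{\,{\rm e}\,}^{-\frac{N^2}{4g}}\,\int_{{\mathbb{R}}}\,{\rm d} s~
{\,{\rm e}\,}^{-\frac{g}4\, s^2} = \frac1{2\pi^{3/2}\,\sqrt{g}}~
{\,{\rm e}\,}^{-\frac{N^2}{4g}} \ ,
\end{equation}
exhibiting the Boltzmann weight of the vacuum solution of the $U(1)$
gauge theory.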
\subsection{Localization at maximally irreducible saddle
points\label{LocMaxNon-Deg}}
We now turn to the opposite extreme and look at the local contribution
to the partition function \eq{Z-5} from a generic maximally
non-degenerate critical surface. We denote this gauge orbit by
\begin{equation}
\cO_{\rm max}:= \cC_{(n_1,1),\dots,(n_n,1)}
= \big\{g\, C \,g^{-1}~\big|~ g \in U(n\,N-{\sf c}_1)\big\}\cong
U(n\,N-{\sf c}_1)/U(1)^n
\end{equation}
and assume that the integers $n_1 > n_2 > \cdots > n_n$ are explicitly
specified. Here we also allow ${\sf c}_1 \neq 0$, which describes sectors
with non-vanishing $U(1)$ monopole number \eq{tracemodify}.
We want to compute the integral $Z_{\rm max}$ in \eq{Z-5}
over a local neighbourhood $\mathcal{N}_{\rm max}$ of $\cO_{\rm max}$, which is
independent of $t$ in the large $t$ limit.
We first need to find a suitable basis for the tangent space $ T_{C}
\cO$ at the irreducible critical point $C$. The definition of the
basis $J_i,\tilde J_i$ introduced in \eq{J-i} naturally extends
to include the non-trivial subspaces $E_0, E_1$ in this case with
\begin{equation}
J_i = {\cal J}(g'_i) \ , \qquad \tilde J_j = {\cal J}^2(g'_j) \ , \qquad
H_i = {\cal J}(h'_i) \,\in\, {\cal J}(\mathfrak{h}) = E_0 \qquad \mbox{and} \qquad K_i \in E_1 \ ,
\end{equation}
for $g'_i$ and $h'_i$ an orthonormal basis of
$\mathfrak{g}\ominus\mathfrak{h}\ominus\mathfrak{s}$ and of $\mathfrak{h} \ominus \mathfrak{s}$, respectively.
The elements $K_i$ are assumed to form an orthonormal basis of $E_1$,
orthogonal to ${\cal J}(\mathfrak{g}) \oplus {\cal J}^2(\mathfrak{g})$.
Recall from Section~\ref{ExplYMDecomp} that $E_0$ and $E_1$ are
naturally complex vector spaces, whose generators are embedded into
the tangent space decomposition \eq{TCOdecomp} as
\begin{equation}
K_{i} = \left(\begin{array}{ccccc}0&0 &\vline & 0 & 0 \\
0&0 &\vline& X_{i} & 0 \\
\hline
0 & X_{i}^\dagger&\vline & 0&0 \\
0 & 0&\vline & 0&0
\end{array}\right)
\label{E_1-explicit-2}
\end{equation}
and similarly for $H_i$. The complex structure is given by the map ${\cal J}$,
which amounts to multiplying $X_i$ by ${\,{\rm i}\,}$. We accordingly take the
real basis $K_i$ to be ordered as $\{K_i\} = \{(\tilde K_i, {\cal J}(\tilde
K_i))\}$, and similarly for $H_i$. As matrices, all of the generators
$H_i, K_j$ are hermitian.
The corresponding dual one-forms
$\beta^i,\gamma^i$ are defined as usual by
\begin{equation}
\big\langle \beta^i\,,\, H_j\big\rangle = \delta^i{}_j \qquad \mbox{and}
\qquad \big\langle \gamma^i\,,\, K_j\big\rangle = \delta^i{}_j
\end{equation}
with all other pairings equal to~$0$.
We need to evaluate the pairing $\langle \alpha,V_\phi\rangle$. It
vanishes on-shell, and identically on ${\cal J}^2(\mathfrak{g})$. Its evaluation on
${\cal J}(\mathfrak{g}\ominus\mathfrak{h}\ominus\mathfrak{s})$ has the form $\langle\alpha,
{\cal J}(g'_i\,)\rangle =f_i$, and as before this implies
\eq{alpha-Vphi}. Together with \eq{alpha-E0} and \eq{alpha-E1}, it
follows that the localization one-form $\alpha$ admits an expansion
\begin{equation}
\alpha = f_i\, \lambda^i + g_i \,\beta^i + k_i\, \gamma^i
\end{equation}
where $f_i, g_i, k_i$ vanish on-shell. We can evaluate
\begin{equation}
{\rm d}\alpha = {\rm d} f_i\wedge \lambda^i + f_i ~{\rm d}\lambda^i + {\rm d}
g_i\wedge \beta^i + g_i~ {\rm d}\beta^i + {\rm d} k_i\wedge \gamma^i+ k_i ~{\rm d}\gamma^i
\end{equation}
using \eq{da-E0} and \eq{da-E1} to get
\begin{equation}
\langle {\rm d}\alpha,H_i\wedge H_j\rangle =0 \qquad \mbox{and} \qquad
\langle {\rm d}\alpha,K_i \wedge K_j\rangle = A_{ij} \ ,
\end{equation}
where
\begin{equation}
A_{ij} = 2{\,{\rm i}\,} \Tr\big(K_i~{\rm ad}_{C_0}(K_j)\big)
\label{Aijdef}\end{equation}
is an antisymmetric matrix. Furthermore, ${\rm d}\alpha$ vanishes when
evaluated on mixed terms of the form $K_i\wedge {\cal J}(g), K_i\wedge
{\cal J}^2(g)$, $H_i\wedge {\cal J}(g'\,)$ and $H_i\wedge {\cal J}^2(g'\,)$ with
$g\in\mathfrak{g},\, g'\in\mathfrak{g}\ominus\mathfrak{h}\ominus\mathfrak{s}$. Therefore
\begin{equation}
{\rm d}\alpha = {\rm d} f_i\wedge \lambda^i
+ \mbox{$\frac 12$} \, A_{ij}~ \gamma^i \wedge \gamma^j + O_f
\end{equation}
where $O_f$ denotes contributions which vanish on-shell such as $f_i
~{\rm d}\lambda^i$. One then has
\begin{equation}
\frac{({\rm d}\alpha)^{d-d_0}}{(d-d_0)!} =
\mbox{pfaff}( A)~\Big(\,\bigwedge_{i=1}^{2d_1}\,\gamma^i\Big)~ \wedge~
\Big(\,\bigwedge_{j=1}^{d-d_0-d_1}\,{\rm d} f_j \wedge \lambda^j\Big)
+ O_f
\end{equation}
where $d_0$ (resp. $d_1$) is the complex dimension of the vector space
$E_0$ (resp. $E_1$), and
\begin{equation}
\mbox{pfaff}(A)= \epsilon^{i_1\cdots i_{2d_1}}\,
A_{i_1 i_2}\cdots A_{i_{2d_1-1} i_{2d_1}}
\end{equation}
is the pfaffian of the antisymmetric matrix $A=(A_{ij})$.
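With this convention, which differs from the customary definition of
the pfaffian by the normalization factor $2^{d_1}\,d_1!$, the simplest
case $d_1=1$ of a single unstable complex mode gives, for example,
\begin{equation}
\mbox{pfaff}(A)=\epsilon^{ij}\, A_{ij} = A_{12}-A_{21}= 2\, A_{12} \ .
\end{equation}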
Let us now recall the local geometry and define its symplectic model.
The $G$-equivariant tubular neighbourhood $\mathcal{N}_{\rm max}$ of $\cO_{\rm max}$
has an equivariant
retraction~\cite{GSbook} by a local equivariant symplectomorphism onto
the local symplectic model ${\cal F}_{\rm max}$, defined to be an
equivariant symplectic vector bundle over $\cO_{\rm max}$ with fibre
${\cal J}^2(\mathfrak{g}\ominus\mathfrak{h}\ominus\mathfrak{s})\oplus E_1$ which is a sub-bundle of
the tangent bundle $T \cO$ restricted to $\cO_{\rm max}$. This means
that the tangent space to ${\cal F}_{\rm max}$ is given by
\begin{equation}
T_C \cO_{\rm max}
\oplus {\cal J}^2(\mathfrak{g}\ominus\mathfrak{h}\ominus\mathfrak{s})\oplus E_1 ~\cong~ E_0 \oplus
{\cal J}(\mathfrak{g}\ominus\mathfrak{h}\ominus\mathfrak{s}) \oplus{\cal J}^2(\mathfrak{g}\ominus\mathfrak{h}\ominus\mathfrak{s})\oplus
E_1~ = ~T_C \cO \ ,
\label{TF-max}
\end{equation}
the symplectic form on ${\cal F}_{\rm max}$ is simply $\omega$, and the
hamiltonian $G$-action on ${\cal F}_{\rm max}$ descends from the
moment map $\mu$. In physical terms, the gauge fields are split along
the moduli space $\cO_{\rm max}$, plus infinitesimal non-gauge variations
belonging to ${\cal J}^2(\mathfrak{g}\ominus\mathfrak{h}\ominus\mathfrak{s})$ and unstable modes in
the subspace $E_1$. Due to the presence of the localization form $\alpha$
in the action, we can restrict ourselves to this model ${\cal F}_{\rm max}$
replacing $\mathcal{N}_{\rm max}$. Exactly as in
Section~\ref{LocVacSurface} above, the canonical symplectic integral
over $\mathfrak{g}\times\mathcal{N}_{\rm max}$ will in this way reduce to an integral
over $\mathfrak{s}\times\cO_{\rm max}$ and the localization now resembles that
at an irreducible flat connection of Chern-Simons
theory~\cite{witten}.
We may now proceed to calculate
\begin{eqnarray}
Z_{\rm max} &=& \frac1{{\rm vol}(G)}\,\int_{\mathfrak{g}\times \mathcal{N}_{\rm max}}\,
\Big[\,\frac{{\rm d}\phi}{2\pi}\,\Big]~
\exp\Big(\omega + t \,\big({\rm d}\alpha-{\,{\rm i}\,}\langle\alpha,V_\phi\rangle
\big)-{\,{\rm i}\,} \Tr(C_0\, \phi) -\mbox{$\frac{g'}{2}$}\, \Tr \big(\phi^2
\big)\Big) \nonumber\\[4pt]
&=&\frac1{{\rm vol}(G)}\,
\int_{\mathfrak{g}\times \cO_{\rm max} \times {\cal J}^2(\mathfrak{g}\ominus\mathfrak{h}\ominus\mathfrak{s})
\times E_1} \,
\Big[\,\frac{{\rm d}\phi}{2\pi}\,\Big]~ \frac{(t~{\rm d}\alpha)^{d-d_0}}
{(d-d_0)!}\wedge\frac{\omega^{d_0}}{d_0!}~
{\,{\rm e}\,}^{-{\,{\rm i}\,} t\,\langle\alpha,V_\phi\rangle-{\,{\rm i}\,} \Tr(C_0 \,\phi) -
\mbox{$\frac{g'}{2}$}\, \Tr(\phi^2)} \nonumber\\[4pt]
&=& \frac1{{\rm vol}(G)}\,
\int_{(\mathfrak{g}\ominus\mathfrak{h}\ominus\mathfrak{s}) \oplus \mathfrak{h} \oplus \mathfrak{s}}\,
\Big[\,\frac{{\rm d}\phi}{2\pi}\,\Big]~\mbox{pfaff} (A)~
\nonumber\\ && \qquad\qquad\qquad
\times~\int_{\cO_{\rm max} \times {\cal J}^2(\mathfrak{g}\ominus\mathfrak{h}\ominus\mathfrak{s})
\times E_1}\,t^{d-d_0}~\Big(\,\bigwedge_{i=1}^{2d_1}\,\gamma^i
\Big)~ \wedge~\Big(\,\bigwedge_{j=1}^{d-d_0-d_1}\,{\rm d} f_j
\wedge \lambda^j\Big)~\wedge~\frac{\omega^{d_0}}{d_0!}
\nonumber\\ && \qquad\qquad\qquad
\times~{\,{\rm e}\,}^{-{\,{\rm i}\,} t\, (N \,f_i\,\phi^i+ \langle \alpha,V_{\phi'}\rangle)
-{\,{\rm i}\,} \Tr(C_0\, \phi) -\frac{g'}{2}\, \Tr (\phi^2)}
\label{Zmaxproceed}\end{eqnarray}
with $\phi'\in\mathfrak{h}\oplus\mathfrak{s}$. In the second line we have used the fact
that ${\rm d}\alpha$ vanishes when evaluated on the subspace $E_0$, and
therefore we need $d_0$ powers of $\omega$ to yield a non-trivial
volume form. Then $(t~{\rm d}\alpha)^{d-d_0}\wedge\omega^{d_0}$ is the only
term which survives in the large $t$ limit. We will modify this below
by adding a second localization form $\alpha'$ in order to write the
localization integral in the generic form (\ref{Z-5}) without the
symplectic two-form $\omega$.
We can now evaluate the integrals in \eq{Zmaxproceed} over $f_i$ in
the fibre ${\cal J}^2(\mathfrak{g}\ominus\mathfrak{h}\ominus\mathfrak{s})$ and
$\phi^i\in\mathfrak{g}\ominus\mathfrak{h}\ominus\mathfrak{s}$ as in Section~\ref{LocVacSurface}
above, which localizes for $t\to\infty$ to an integral over the
subspace $E_1$ and the gauge orbit $\cO_{\rm max}$ given by
\begin{eqnarray}
Z_{\rm max} &=&\frac1{{\rm vol}(G)}\,\int_{\mathfrak{h} \oplus \mathfrak{s}}\,
\Big[\,\frac{{\rm d}\phi}{2\pi}\,\Big]~
\frac{\mbox{pfaff} (A)}{N^{d-d_0-d_1}}~\int_{\cO_{\rm max} \times E_1}\,
t^{d_1}~\Big(\,\bigwedge_{i=1}^{2d_1}\,\gamma^i\Big)~ \wedge~
\Big(\,\bigwedge_{j=1}^{d-d_0-d_1}\,\lambda^j\Big)~\wedge~
\frac{\omega^{d_0}}{d_0!}\nonumber\\ &&
\qquad\qquad\qquad\qquad\qquad\qquad\qquad
\times~{\,{\rm e}\,}^{-{\,{\rm i}\,} t\,\langle \alpha,V_{\phi}\rangle
-{\,{\rm i}\,} \Tr(C_0\, \phi) -\frac{g'}{2}\, \Tr (\phi^2)} \ .
\label{Zdeg-4}
\end{eqnarray}
The gauge invariant volume form for the integration domain whose
tangent space is $E_0$ is given by the symplectic volume form
$\omega^{d_0}/d_0!$, since ${\rm d}\alpha$ vanishes on $E_0$, but this will
be modified below. It remains to compute the integral over $E_1$. Upon
evaluating $\langle \alpha,V_\phi\rangle$ at second order on $E_1$,
i.e. away from the critical surface, we will find below that this
pairing becomes a quadratic form which leads to a localization through
a gaussian integral. However, to evaluate it explicitly it is easier
to first localize the integral over $E_0$, which presently is a
complicated non-gaussian integral which does not admit a gaussian
approximation as $t\to\infty$ and is difficult to evaluate in a closed
analytic form. This can be achieved by adapting a trick taken
from~\cite{witten}, which amounts to adding a further suitable
localization one-form $\alpha'$, or equivalently a cohomologically trivial
form $Q\alpha'$, to the action in \eq{Z-5}. Indeed, we may compute $Z_{\rm
max}$ using any other invariant form $\alpha'$ which is homotopic to
$\alpha$ on the open neighbourhood $\mathcal{N}_{\rm max}$. The one-form
$\alpha'$ need only be non-vanishing on $E_0\subset\mathcal{N}_{\rm max}$, as the
other integrals can be directly carried out.
\subsubsection*{\it The localization form $\alpha'$}
In order to evaluate the integrals over $E_0$ and $\mathfrak{h}$,
following~\cite{witten} we introduce an additional localization term
$\exp(t~Q\alpha'\,)$ in the partition function with
\begin{equation}
\alpha' := -{\,{\rm i}\,}\Tr(\theta\, \phi) \Big|_{E_0} = -\mbox{$\frac{2}N$}\,
{\cal J}~{\rm d}\Tr(C\, \phi)\Big|_{E_0} \ .
\label{alphaprime}
\end{equation}
The projection onto $E_0$ is equivalent to projecting $\phi\in\mathfrak{g}$
onto $\mathfrak{h}$. This one-form is equivariant on-shell, and it can be
extended to the $G$-equivariant tubular neighbourhood $\mathcal{N}_{\rm max}$ of
the critical surface $\cO_{\rm max}$ as follows.
On the tangent space ${\cal J}(\mathfrak{g}\ominus\mathfrak{h}\ominus\mathfrak{s}) \oplus E_0$ of
$T\cO_{\rm max}$ \eq{TF-max} there is an equivariant projection onto the subspace
$E_0$. In this way $\alpha'$ is properly defined on the
local model, and can hence be extended to $\mathcal{N}_{\rm max}$. One could
also define $\alpha'=-{\,{\rm i}\,}\chi\,\Tr(\theta\, \phi)\big|_{E_0}$ using a smooth
$G$-invariant cutoff function $\chi$ with support near the given
saddle-point and $\chi=1$ in the tubular neighbourhood, which is
globally well-defined over $\mathcal{N}_{\rm max}$ as an equivariant
differential form. Note that $t_1\,\alpha+t_2\,\alpha'$ vanishes only on the
original critical points for any $t_1,t_2\in{\mathbb{R}}$ with $t_1\neq0$, and
no new ones are introduced. Then our previous computation \eq{Z-4}
would essentially go through, since $\alpha'$ vanishes on
${\cal J}(\mathfrak{g}\ominus\mathfrak{h}\ominus\mathfrak{s})$ and there are no critical points where
${\rm d}\chi\neq0$. It is therefore just as good a localization form to
use as $\alpha$ is. It follows that the modification of the canonical
symplectic integral over $\mathcal{N}_{\rm max}$ given by
\begin{equation}
Z_{\rm max} = \frac1{{\rm vol}(G)}\,\int_{\mathfrak{g}\times \mathcal{N}_{\rm max}}\,\Big[\,
\frac{{\rm d}\phi}{2\pi}\Big]~\exp\Big(\omega + t_1~Q\alpha + t_2~Q\alpha'-
{\,{\rm i}\,} \Tr(C_0\, \phi) -\mbox{$\frac{g'}{2}$}\,\Tr\big(\phi^2\big)\Big)
\label{Zmaxt1t2}\end{equation}
is independent of both $t_1,t_2\in{\mathbb{R}}$.
Then $\alpha'$ will localize the integral over
$\mathfrak{h}\subset\mathfrak{g}$ as well as the integral over the unstable modes in
$E_1$, without the need to expand $\langle\alpha,V_\phi\rangle$ to higher
order.
\subsubsection*{\it Integration over $\mathfrak{h}$}
The new localization form $\alpha'$ satisfies
\begin{equation}
{\rm d}\alpha' = {\,{\rm i}\,}\Tr\big(\theta^2\,\phi\big)\Big|_{E_0} =
-\mbox{$\frac\ii2$}\, \Tr\big(\theta\, [\phi,\theta]\big)\Big|_{E_0}
\end{equation}
and
\begin{equation}
\langle \alpha',V_{h_i}\rangle = -\mbox{$\frac{2}N$}\,\Tr\big(
{\cal J}(V_{h_i})\, \phi\big) = \mbox{$\frac{2}N$}\,\Tr\big(V_{h_i}\,
{\cal J}(\phi)\big) = 2\Tr\big(H_i\, {\cal J}(\phi)\big) \ ,
\end{equation}
where $H_{i} = {\cal J}(h_{i})$ with $h_{i}$ a basis of $\mathfrak{h}$.
This produces a gaussian integral which eliminates the integration over
$\mathfrak{h}$, localizing $\phi$ onto the gauge stabilizer algebra
$\mathfrak{s}\cong\mathfrak{u}(1)^n$. To evaluate it, we will need
the matrix
\begin{equation}
M_{ij} := \Tr(H_{i}\, H_{j})
\label{inner-H}
\end{equation}
which is real and symmetric, since we take $H_{i}$ and $h_{i}$ to be hermitian.
Similarly, one has
\begin{eqnarray}
\langle {\rm d}\alpha',H_i\wedge H_j\rangle &=& \mbox{$ \frac{4{\,{\rm i}\,}}{N^2}$}\,
\Tr\big({\cal J}(H_i)\,[s,{\cal J}(H_j)]\big) \nonumber\\[4pt]
&=&-\mbox{$\frac{4{\,{\rm i}\,}}{N^2}$}\,\Tr\big(H_i\,[s,{\cal J}^2(H_j)]\big)
\nonumber\\[4pt] &=&\mbox{$\frac{4{\,{\rm i}\,}}{N^2}$}\, \Tr\big(H_i\,[s,H_j]\big)~=:~
\mbox{$\frac{4{\,{\rm i}\,}}{N^2}$}\,\tilde A_{ij}
\label{tildeAdef}\end{eqnarray}
where we have restricted to $\phi=s \in \mathfrak{s}$ using the
localization. This implies that
\begin{equation}
{\rm d}\alpha' =\mbox{$\frac{2{\,{\rm i}\,}}{N^2}$}\,\tilde A_{ij}~\beta^i\wedge \beta^j
\qquad \mbox{and} \qquad \frac{({\rm d}\alpha'\,)^{d_0}}{d_0!}
= \left(\mbox{$\frac{4{\,{\rm i}\,}}{N^2}$}\right)^{d_0}~\mbox{pfaff}\big(\tilde A\,\big)~
\bigwedge_{i=1}^{2d_0}\, \beta^i \ .
\label{dalphaprime}\end{equation}
To evaluate the matrices $M=(M_{ij})$ and $\tilde A=(\tilde A_{ij})$
above explicitly, we recall that the basis $H_i:=H_{kl;i}$ (where
$k,l$ are block indices) of $E_0$ takes the block form
\begin{equation}
H_{kl;i} = \left(\begin{array}{ccccc}0&0 &\vline & 0 & 0 \\
0&0 &\vline& Y_{lk;i} & 0 \\
\hline
0 & Y_{lk;i}^\dagger&\vline & 0&0 \\
0 & 0&\vline & 0&0
\end{array}\right) = {\cal J}(h_{kl;i})
\label{E_0-explicit-2}
\end{equation}
where $h_{kl;i}\in \mathfrak{h}$ is a hermitian block matrix with a similar
block decomposition. They are orthogonal for different $k,l$, and we will
often omit the indices $k,l$. Note that the complex structure on $E_0$
defined by the map ${\cal J}$ is compatible with the natural complex structure
on $\mathfrak{h}$. This basis is particularly useful for evaluating the
pfaffian which appears in \eq{dalphaprime}, because ${\rm ad}_s(H_{kl;i})$
for $s\in\mathfrak{s}$ acts as multiplication by $(s_k - s_l)$ in the
upper-right blocks of \eq{E_0-explicit-2}. It follows that
\begin{equation}
{\,{\rm i}\,}\,{\rm ad}_{C_0}(H_{kl;i}) = c_{lk}\, {\cal J}(H_{kl;i}) \qquad \mbox{and}
\qquad {\,{\rm i}\,}\,{\rm ad}_{s}( H_{kl;i}) = (s_l-s_k)\, {\cal J}(H_{kl;i})
\label{J-ad-id}
\end{equation}
where the eigenvalues $c_{lk} >0$ are defined in
\eq{ad-C0-explicit}. These formulas hold only for $k>l$, and analogous
statements are true for the subspace $E_1$.
We can choose an orthogonal basis $Y_i$ such that $G_{ij} = 2\,
\Tr\big(Y_i\,Y_j^\dagger\big)$ is diagonal, as $G_{ij}$ is a hermitian
matrix. Then
\begin{eqnarray}
\Tr\big(H_i\, H_j\big) &=& \Tr\big(Y_i\,Y_j^\dagger + Y_i^\dagger\,
Y_j\big) ~=~ G_{ij} \ , \nonumber\\[4pt]
\Tr\big(H_i\, {\cal J}(H_j)\big) &=& \Tr\big({\,{\rm i}\,} Y_i\,Y_j^\dagger -{\,{\rm i}\,}
Y_i^\dagger\, Y_j\big) ~=~0 \ .
\end{eqnarray}
This means that the symmetric matrix $M=(M_{ij})$ in (\ref{inner-H})
has the block decomposition
\begin{equation}
M = \left(\begin{array}{ccc} G & 0 \\ 0 & G
\end{array}\right)
\label{Mblock}\end{equation}
in the basis $(H_i,\, {\cal J}(H_i))$, and similarly the matrix
$\tilde A$ in (\ref{tildeAdef}) is given by
\begin{eqnarray}
\tilde A_{ij} &=& \Tr\big(H_i~{\rm ad}_{s}(H_j)\big)\nonumber\\[4pt]
&=& -{\,{\rm i}\,} (s_l-s_k)\,\Tr\big(H_i\,{\cal J}(H_{j})\big) ~=~
-{\,{\rm i}\,} (s_k-s_l)\, \left(\begin{array}{ccc} 0 & G \\ -G & 0
\end{array}\right)_{ij} \ .
\label{tildeAblock}\end{eqnarray}
We can read off the pfaffian from this expression and use
(\ref{Mblock}) to write it as
\begin{equation}
\mbox{pfaff}\big(\tilde A\,\big) =
(-{\,{\rm i}\,})^{d_0}\sqrt{\det(M)}\; \prod_{k > l} \,(s_k-s_l)^{|n_k-n_l|+1} \ .
\label{PfaffA-eval}
\end{equation}
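The block pfaffian evaluation above can be checked independently of the derivation. The following minimal numerical sketch (assuming numpy; the diagonal block $G$ and the scalar $c$ standing in for the weight factors $(s_k-s_l)$ are purely illustrative) verifies that $\det\tilde A=c^{2d}\,\det M$ for $\tilde A=c\,\big(\begin{smallmatrix}0&G\\-G&0\end{smallmatrix}\big)$ and $M=\mbox{diag}(G,G)$, consistent with $|\mbox{pfaff}\,\tilde A\,|=c^{d}\,\sqrt{\det M}$:

```python
import numpy as np

rng = np.random.default_rng(0)
d = 4
G = np.diag(rng.uniform(0.5, 2.0, size=d))   # illustrative diagonal positive block
c = 1.7                                      # stands in for a weight factor (s_k - s_l)

# antisymmetric block matrix  A~ = c * [[0, G], [-G, 0]]
Atilde = c * np.block([[np.zeros((d, d)), G], [-G, np.zeros((d, d))]])
M = np.block([[G, np.zeros((d, d))], [np.zeros((d, d)), G]])   # M = diag(G, G)

# pfaff(A~)^2 = det(A~), and the block form gives det(A~) = c^{2d} det(G)^2 = c^{2d} det(M)
assert np.isclose(np.linalg.det(Atilde), c**(2*d) * np.linalg.det(G)**2)
assert np.isclose(np.linalg.det(Atilde), c**(2*d) * np.linalg.det(M))

pfaff_abs = np.sqrt(np.linalg.det(Atilde))   # |pfaff(A~)| = c^d sqrt(det M)
assert np.isclose(pfaff_abs, c**d * np.sqrt(np.linalg.det(M)))
```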
We can now evaluate the localization integral
\begin{equation}
\int_{\mathfrak{h}}\,\Big[\,\frac{{\rm d}\phi}{2\pi}\,\Big]~t_2^{d_0}\,
\frac{({\rm d} \alpha'\,)^{d_0}}{d_0!}~\epsilon^{-{\,{\rm i}\,} t_2 \,\langle \alpha',V_\phi\rangle }
=\left(\mbox{$\frac{4{\,{\rm i}\,}}{N^2}$}\right)^{d_0}\,
\int_{\mathfrak{h}}\,\Big[\,\frac{{\rm d}\phi}{2\pi}\,\Big]~t_2^{d_0}~
\mbox{pfaff}\big(\tilde A\,
\big)~\epsilon^{-2{\,{\rm i}\,} t_2\, \phi^{i}\, M_{ij}\, \phi^{j}} ~
\bigwedge_{i=1}^{2d_0}\, \beta^i
\end{equation}
where $\phi = \phi^{i}\, h_{i}= \phi^{{kl;i}}\, h_{{kl;i}}$. The
oscillatory gaussian integral is defined by analytic continuation $t_2
\to t_2-{\,{\rm i}\,} \varepsilon$ for a small positive parameter $\varepsilon$,
which we are free to do as the partition function is formally independent of
$t_2$. With this continuation understood and a suitable orientation of
the vector space $\mathfrak{h}$, we readily compute
\begin{eqnarray}
\int_{\mathfrak{h}}\,\Big[\,\frac{{\rm d}\phi}{2\pi}\,\Big]~t_2^{d_0}\,
\frac{({\rm d} \alpha'\,)^{d_0}}{d_0!}~\epsilon^{-{\,{\rm i}\,} t_2 \,\langle \alpha',V_\phi\rangle }
&=& \left(\mbox{$\frac{4{\,{\rm i}\,}}{N^2}$}\right)^{d_0}\,
\left(\mbox{$\frac 1{2\pi}$}\right)^{2d_0}\,
\left(-\mbox{$\frac{\pi}{2{\,{\rm i}\,}}$}\right)^{d_0}~
\frac{\mbox{pfaff}\big(\tilde A\,\big)}{\sqrt{\det(M)}}~
\bigwedge_{i=1}^{2d_0}\, \beta^i \nonumber\\[4pt]
&=& \frac{{\,{\rm i}\,}^{d_0}}{( 2\pi\, N^2)^{d_0}}~
\prod_{k > l}\, (s_k-s_l)^{|n_k-n_l|+1}~
\bigwedge_{i=1}^{2d_0}\, \beta^i \ .
\label{locmhms}\end{eqnarray}
This integral thus produces a measure on $\mathfrak{s}$ which we will use below
to perform the remaining integral over the stabilizer.
\subsubsection*{\it Integration over $E_1$}
Now that the $\phi$-integration in \eq{Zdeg-4} is localized onto
$\mathfrak{s}$, we can proceed to evaluate the integral over $E_1$. This space
has a basis $K_i$ with block decomposition $K_{kl;i}$ similar to
\eq{E_0-explicit-2} for $n\geq k>l\geq1$ (for $k<l$ the $K_{kl;i}$ do
not exist), which are non-vanishing if $n_k>n_l+1$. We need to
evaluate $\langle \alpha,V_s\rangle$ for $s\in\mathfrak{s}$ up to second order in
the fluctuations about the critical point in $E_1$, which is
non-tangential to the gauge orbit $\cO_{\rm max}$. For this, we
introduce real linear coordinates $x^i, y^i$, $i=1,\dots,d_1$ on $E_1$
such that a generic vector $V_\Psi \in E_1$ is parametrized as
\begin{equation}
V_\Psi = x^i\, K_i + y^i\, {\cal J}(K_i)
=\left(\begin{array}{ccccc}0&0 &\vline & 0 & 0 \\
0&0 &\vline& z^i\, X_{i} & 0 \\
\hline
0 & \overline z^{\,i}\, X_{i}^\dagger&\vline &
0&0 \\ 0 & 0&\vline & 0&0
\end{array}\right)
\end{equation}
where we have introduced complex coordinates $z^i = x^i + {\,{\rm i}\,}
y^i$. Then $\gamma^i = {\rm d} x^i$ and $\gamma^{i+d_1}={\rm d} y^i$ for
$i=1,\dots,d_1$.
As above, we can choose coordinates such that $G_{ij} =
2\,\Tr\big(X_i\, X_j^\dagger\big)$ is diagonal. Then \eq{alpha-expand}
gives
\begin{equation}
\langle \alpha,V_s\rangle = -\Tr\big({\rm ad}_s(V_\Psi)~{\rm ad}_{C_0}(V_\Psi)
\big)= \big(x^i\,,\,y^i\big) ~\tilde M_{ij}(s) ~
\left(\begin{array}{c} x^j \\ y^j\end{array}\right)
\end{equation}
to second order, where
\begin{eqnarray}
\tilde M_{ij}(s) &=& \Tr\big(K_i~{\rm ad}_s~{\rm ad}_{C_0}(K_j)\big)\nonumber\\[4pt]
&=& (s_k-s_l)\, c_{kl} \; \Tr(K_i\, K_j)~=~
(s_k-s_l)\, c_{kl}\, \left(\begin{array}{ccc} G & 0 \\ 0 & G
\end{array}\right)_{ij}
\label{tildeMinblock}\end{eqnarray}
is a symmetric matrix and we have used the obvious analog of
\eq{J-ad-id} for the basis $K_i$. Similarly, the antisymmetric matrix
$A$ in \eq{Aijdef} can be expressed as
\begin{equation}
A_{ij} = 2{\,{\rm i}\,} \Tr\big(K_{kl;i}~{\rm ad}_{C_0}(K_{kl;j})\big)
= 2c_{lk}\,\Tr\big(K_{kl;i}\, {\cal J}(K_{kl;j})\big)
= 2c_{kl}\, \left(\begin{array}{ccc} 0 & G \\ -G & 0
\end{array}\right)_{ij} \ ,
\end{equation}
and using \eq{tildeMinblock} its pfaffian is therefore given by
\begin{equation}
\mbox{pfaff} (A)
= 2^{d_1}~\sqrt{\det\big(\tilde M(s)\big)}~
\prod_{k > l}\,(s_k-s_l)^{1-|n_k-n_l|} \ .
\label{pfaffAtildeM}\end{equation}
The pfaffians $\mbox{pfaff} (\tilde A\,)$ and $\mbox{pfaff} (A)$ represent the
$S$-equivariant Euler classes in $H_S(\cO_{\rm max})$ of equivariant
bundles over $\cO_{\rm max}$ with fibres $E_0$ and $E_1$,
respectively, in terms of the weights $s_k$ for the (trivial)
$S$-action on $\cO_{\rm max}$. They are the typical representatives of
fluctuations in equivariant localization~\cite{szaboloc,BGVbook}, and
they also appear in the nonabelian localization formulas of~\cite{witten} and
of~\cite{Woodward:2004xz}. Using the analytic continuation $t_1 \to
t_1-{\,{\rm i}\,}\varepsilon$ and a suitable orientation of $E_1$ as before, we
can now evaluate the oscillatory gaussian integral
\begin{equation}
\int_{E_1}\,\prod_{i=1}^{d_1}\,{\rm d} x^i~{\rm d} y^i ~ t_1^{d_1} ~
\epsilon^{-{\,{\rm i}\,} t_1\,\langle \alpha,V_s\rangle}
= \left(\frac{\pi}{{\,{\rm i}\,}}\right)^{d_1}\,\frac 1{\sqrt{\det
\big( \tilde M(s)\big)}} \ .
\label{gaussintE1}\end{equation}
\subsubsection*{\it Symplectic integral over ${\cal F}_{\rm max}$}
Putting the results \eq{Zdeg-4}, \eq{locmhms}, \eq{pfaffAtildeM} and
\eq{gaussintE1} together, we may evaluate the large $t_1,t_2$ limit of
the symplectic integral \eq{Zmaxt1t2} to obtain
\begin{eqnarray}
Z_{\rm max} &=& \frac1{{\rm vol}(G)}\,\int_{\mathfrak{g}\times {\cal F}_{\rm max}}\,
\Big[\,\frac{{\rm d}\phi}{2\pi}\,\Big]~
\exp\Big({\rm d}(t_1\, \alpha+t_2\,\alpha'\,) - {\,{\rm i}\,}\langle t_1\,\alpha+t_2\,
\alpha',V_\phi\rangle\Big)\nonumber\\ && \qquad\qquad\qquad \qquad \times~
\epsilon^{-{\,{\rm i}\,} \Tr(C_0\, \phi) -\frac{g'}{2}\,\Tr(\phi^2)} \nonumber\\[4pt]
&=& \frac1{{\rm vol}(G)}\, \left(\frac{\pi}{{\,{\rm i}\,}}\right)^{d_1}\,
\frac{{\,{\rm i}\,}^{d_0}}{(2\pi\, N^2)^{d_0}}\,
\int_{\mathfrak{s}}\,\Big[\,\frac{{\rm d} s}{2\pi}\,\Big]~ \prod_{k > l}\,
(s_k-s_l)^{|n_k-n_l|+1}~
\frac{\mbox{pfaff}(A)}{\sqrt{\det\big(\tilde M(s)\big)}} \nonumber\\
&& \qquad\qquad\qquad \qquad\times~
\frac 1{N^{d-d_0-d_1}}\, \int_{\cO_{\rm max}} \,\Big(\,
\bigwedge_{j=1}^{d-d_0-d_1}\,\lambda^j\Big)~ \wedge ~\Big(\,
\bigwedge_{i=1}^{2d_0}\,\beta^i \Big)~
\epsilon^{-{\,{\rm i}\,} \Tr(C_0\,s) -\frac{g'}{2}\, \Tr (s^2)}\nonumber\\[4pt]
&=&\frac1{{\rm vol}(G)}\,\frac{{\,{\rm i}\,}^{d_0-d_1}}{(2\pi)^{d_0-d_1}}\,
\prod_{k=1}^n\, \sqrt{n_k}\, \int_{{\mathbb{R}}^n}\,\Big[\,\frac{{\rm d} s}{2\pi}\,
\Big]~\Delta(s)^2~
\epsilon^{-{\,{\rm i}\,} \Tr(C_0\, s) -\frac{g'}{2}\, \Tr (s^2)}\nonumber\\
&& \qquad\qquad\qquad \qquad\times~
\frac 1{N^{d+d_0-d_1}}\, \int_{\cO_{\rm max}} \,\Big(\,
\bigwedge_{j=1}^{d-d_0-d_1}\,\lambda^j\Big)~ \wedge ~\Big(\,
\bigwedge_{i=1}^{2d_0}\,\beta^i \Big)
\label{Zmax-s1}\end{eqnarray}
where we have transformed the integration over $\phi =
s=\mbox{diag}(s_1~\mbox{1 \kern-.59em {\rm l}}_{n_1},\dots,s_n~\mbox{1 \kern-.59em {\rm l}}_{n_n}) \in \mathfrak{s}$ to an integral
over $s=(s_1,\dots,s_n)\in{\mathbb{R}}^n$. We can carry out the integral over
the moduli space $\cO_{\rm max}$ by observing once more that
\begin{equation}
\frac 1{N^{d+d_0-d_1}}\, \int_{\cO_{\rm max}}\,\Big(\,
\bigwedge_{j=1}^{d-d_0-d_1}\,\lambda^j\Big)~ \wedge ~\Big(\,
\bigwedge_{i=1}^{2d_0}\,\beta^i \Big)
= \int_{G/S}\,\bigwedge_{j=1}^{d+d_0-d_1}\,\eta^j
= \frac{{\rm vol}(G)}{{\rm vol}(S)} \ ,
\label{Omaxint}\end{equation}
where ${\cal J}^*(\lambda^i) = \eta^i$ are left-invariant one-forms on the
gauge group $G$. Note that \eq{Omaxint} includes the integral over $E_0$,
and $\dim_{\mathbb{R}} (\mathfrak{g} \ominus \mathfrak{s}) = d + d_0 - d_1$.
We also have ${\rm vol}(S) = \prod_k\, 2\pi\, \sqrt{n_k}$ in our
metric on $\mathfrak{s}$, since $S=\prod_k\,U(1)\otimes\mbox{1 \kern-.59em {\rm l}}_{n_k}$,
and $C_0(n_i) = \frac {N}{2n_i}~\mbox{1 \kern-.59em {\rm l}}_{n_i}$.
Using furthermore $d_0- d_1 = n^2-n$ which is an even integer,
we may then bring (\ref{Zmax-s1}) into the form
\begin{eqnarray}
Z_{\rm max} &=& \frac{{\,{\rm i}\,}^{n^2-n}}{(2\pi)^{n^2+n}}\,
\int_{{\mathbb{R}}^n}\, [{\rm d} s]~ \Delta(s)^{2}~
\epsilon^{-{\,{\rm i}\,} \Tr(C_0\,s) -\frac{g'}{2} \,\Tr (s^2)} \nonumber\\[4pt]
&=& \frac{{\,{\rm i}\,}^{n^2-n}}{(2\pi)^{n^2+n}}\,
\int_{{\mathbb{R}}^n}\, [{\rm d} s]~ \Delta(s)^{2}~
\epsilon^{-\frac\ii2\, N\,\sum_i\, s_i -\frac{g}{4}\, \sum_i
\frac{n_i}{N}\, s_i^2 } \label{Zmax-s2} \\[4pt]
&=& \frac{{\,{\rm i}\,}^{n^2-n}}{(2\pi)^{n^2+n}}\, \frac{N^{n/2}}
{\prod\limits_{k=1}^n\,\sqrt{n_k}}\,
\int_{{\mathbb{R}}^n}\, [{\rm d}\tilde s\,]~ \prod_{k > l} \,
\Big(\,\sqrt{\mbox{$\frac{N}{n_k}$}}\, \tilde s_k-\sqrt{
\mbox{$\frac{N}{n_l}$}}\,\tilde s_l\Big)^{2}~
\epsilon^{-\frac\ii2\, \sum_i\, \sqrt{\frac{N^3}{n_i}}\,\tilde s_i
-\frac{g}{4}\, \sum_i\, \tilde s_i^{\,2} } \nonumber
\end{eqnarray}
where $\tilde s_i := \sqrt{n_i/N} \,s_i$. Completing the square of the
gaussian function of $\tilde s_i$ in \eq{Zmax-s2} identifies the
Boltzmann weight of the action \eq{YM-action-prime} on the
non-degenerate solution in \eq{action-eval}. In the large $N$ limit,
we substitute \eq{nidomclass} with $\tilde s_i
\approx\big(1+\frac{m_i}{2N}\big)\, s_i$. Neglecting terms of order
$\frac 1N$ then reduces \eq{Zmax-s2} to
\begin{equation}
Z_{\rm max}\approx \pm\, \frac 1{(2\pi)^{n^2+n}}\,
\int_{{\mathbb{R}}^n}\, [{\rm d} s]~ \Delta(s)^{2}~
\epsilon^{- \frac\ii2\,N \,\sum_i\, s_i}~
\epsilon^{\frac\ii4\, \sum_i\, m_i\, s_i -\frac{g}{4}\, \sum_i\, s_i^2 } \
,
\label{Zmax-s3}\end{equation}
and an application of the integral identity (\ref{Lemma}) leads to our
final result
\begin{equation}
Z_{\rm max}\approx \pm\, \frac 1{(2\pi)^{n^2+n}}~
\epsilon^{-\frac{n\,N^2- m\,N}{4 g}}\,
\int_{{\mathbb{R}}^n}\, [{\rm d} s]~\Delta(s)^{2}~
\epsilon^{\frac\ii4\, \sum_i\, m_i\, s_i -\frac{g}{4}\, \sum_i\, s_i^2 } \
.
\label{Zmaxfinal}\end{equation}
The exponential prefactor in this formula exhibits the shift of the
vacuum action, corresponding to the modification of the trace
constraint \eq{UnTrconstr} to \eq{tracemodify}, by the Chern class
${\sf c}_1=m=\sum_i\,m_i$. The remaining contributions coincide with
the classical result~\cite{Minahan:1993tp} for the contribution to the
$U(n)$ sphere partition function from the Yang-Mills instanton on
$S^2$ specified by the configuration of magnetic monopole charges
$m_1,\dots,m_n\in{\mathbb{Z}}$. In particular, using the standard manipulation
of~\cite{Minahan:1993tp} one can change integration variables in
\eq{Zmaxfinal} to identify the anticipated Boltzmann weight of the
action \eq{action-eval-2}.
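The complete-the-square step which produces the exponential prefactor in \eq{Zmaxfinal} is, in each variable, the elementary gaussian identity $\int_{\mathbb{R}}{\rm d} s~{\,\rm e}\,^{-{\,{\rm i}\,} a\,s-b\,s^2}=\sqrt{\pi/b}~{\,\rm e}\,^{-a^2/4b}$ for $b>0$; with $a=\frac N2$ and $b=\frac g4$ this yields the vacuum weight ${\,\rm e}\,^{-N^2/4g}$. A quick numerical confirmation (assuming scipy, with illustrative values of $a$ and $b$):

```python
import numpy as np
from scipy.integrate import quad

a, b = 2.0, 1.0
# shifted gaussian: exp(-i a s - b s^2); completing the square gives
# -b (s + i a / (2 b))^2 - a^2 / (4 b)
f = lambda s: np.exp(-1j*a*s - b*s**2)
re, _ = quad(lambda s: f(s).real, -10, 10, limit=200)
im, _ = quad(lambda s: f(s).imag, -10, 10, limit=200)
numeric = re + 1j*im

exact = np.sqrt(np.pi / b) * np.exp(-a**2 / (4*b))
assert abs(numeric - exact) < 1e-8
```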
\bigskip
\section{Abelianization\label{Abelianization}}
In the following sections we will describe an alternative technique of
evaluating
the partition function of $U(n)$ Yang-Mills theory on the fuzzy sphere
$S_N^2$, within the framework of our symplectic model. This method can
be regarded as a finite-dimensional version of the technique of
abelianization for ordinary Yang-Mills theory in two
dimensions~\cite{Blau:1995rs}, which can be used to derive the
strong-coupling expansion of the gauge theory and agrees with the
nonabelian localization. The advantage of this formalism is that it
captures {\it all} classical contributions to the partition function
in a single go and for any $N$, in contrast to nonabelian localization
which requires analysis of each type of critical point individually
and only yields tractable expressions in the large $N$ classical
limit. Its drawback is that it leads to somewhat cumbersome
expressions for the partition function which arise from a rather
different sort of localization. This is analogous to the case of gauge
theory on the two-dimensional noncommutative torus whose
strong-coupling expansion involves the addition of infinitely many
higher Casimir operators to the usual Migdal
formula~\cite{Paniak:2003gn}, or its matrix model regularization which
is given by a complicated combinatorial formula~\cite{szaborev1}. This
complexity makes it difficult to explicitly extract the contributions
from fuzzy sphere instantons, and we will examine this problem more
thoroughly in the next section. Here we shall derive in detail our
alternative abelianized formula for the partition function
(\ref{Z-1}), representing yet another new solution for quantum gauge
theory on the fuzzy sphere.
Let us start from the partition function in the form \eq{Z-2}. The
crucial observation is that the function $f:\mathfrak{g}\to{\mathbb{R}}$ defined by the
symplectic integral
\begin{equation}
f(\phi):=\frac1{{\rm vol}(G)}\,\int_{\cO(\Xi)}\,
\exp\left(\omega-{\,{\rm i}\,}\Tr(C_0\,\phi)-\mbox{$\frac{g'}2$}\,
\Tr\big(\phi^2\big)\right)
\label{fphidef}\end{equation}
is gauge invariant. Analogously to what we did in
Section~\ref{LocVacSurface}, we may therefore apply the Weyl
integration formula \eq{Weylint} which reduces its integral over the
gauge algebra $\mathfrak{g}=\mathfrak{u}(n\,N)$ to an integral over the Lie algebra
$\mathfrak{u}(1)^{n\,N}$ of the maximal torus $T=U(1)^{n\,N}$ of
$G=U(n\,N)$. This rewriting of the $\phi$-integral in (\ref{Z-2}) is
called {\it diagonalization} or {\it abelianization}, and it can be
thought of as the eigenvalue representation of the gauge theory
regarded as a matrix model. In this way we may bring the partition
function into the form
\begin{equation}
Z=\frac1{(n\,N)!}\,\int_{{\mathbb{R}}^{n\,N}}\,
\Big[\,\frac{{\rm d} p}{2\pi}\,\Big]~
{\,\rm e}\,^{-\frac{g'}4\,\Tr(p^2)}~\Delta(p)^2~Z_\cO(p) \ ,
\label{partfndiag}\end{equation}
where
\begin{equation}
Z_\cO(p)=\int_{\cO(\Xi)}\,\exp\Big(\omega-\mbox{$\frac\ii2$}\,
\Tr(p\,C)\Big)
\label{calZdef}\end{equation}
is the Fourier transform of the orbit $\cO(\Xi)$ and we have
identified $(n\,N)$-vectors with diagonal matrices $p={\rm
diag}(p_1,\dots,p_{n\,N})\otimes\sigma^0$.
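The diagonalization step can be illustrated in the simplest nontrivial case $n\,N=2$ by comparing an invariant moment computed directly in the hermitian matrix ensemble against the eigenvalue integral weighted by the Vandermonde factor $\Delta(p)^2$. The following numerical sketch (assuming numpy/scipy; the coupling value is illustrative, corresponding to the weight ${\,\rm e}\,^{-\Tr(H^2)/2}$) checks $\langle\Tr H^2\rangle$ both ways:

```python
import numpy as np
from scipy.integrate import dblquad

# matrix side: 2x2 hermitian ensemble with weight exp(-Tr(H^2)/2), Monte Carlo
rng = np.random.default_rng(1)
n_samp = 200_000
h11 = rng.normal(0, 1, n_samp)                      # diagonal entries ~ N(0, 1)
h22 = rng.normal(0, 1, n_samp)
h12 = rng.normal(0, np.sqrt(0.5), n_samp) + 1j*rng.normal(0, np.sqrt(0.5), n_samp)
tr_H2_mc = np.mean(h11**2 + h22**2 + 2*np.abs(h12)**2)

# eigenvalue side: same moment from the abelianized measure Delta(p)^2 exp(-Tr(p^2)/2)
w = lambda p1, p2: (p1 - p2)**2 * np.exp(-(p1**2 + p2**2)/2)
num, _ = dblquad(lambda p1, p2: w(p1, p2)*(p1**2 + p2**2), -8, 8, -8, 8)
den, _ = dblquad(w, -8, 8, -8, 8)
tr_H2_eig = num / den

assert abs(tr_H2_eig - 4.0) < 1e-5     # exact value for this 2x2 ensemble
assert abs(tr_H2_mc - tr_H2_eig) < 0.05
```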
Localization can then be applied to the symplectic integral \eq{calZdef}
in three different ways, by:
\begin{enumerate}
\item Considering $p \in \mathfrak{u}(\mathcal{N}\,)$ and observing that $Z_\cO(p)$ is
invariant under $p \to U^{-1}\, p\, U$ for $U \in U(\mathcal{N}\,)$. One can then
evaluate the integral over the orbit space $\cO(\Xi)$ directly
using the Itzykson-Zuber formula \eq{IZformula} for the unitary group
$U(\mathcal{N}\,)$. This is essentially the calculation that was carried out
in~\cite{matrixsphere}, which is adapted to the present formulation
in Section~\ref{IZ-loc}.
It amounts to an abelian localization of
the original orbit integral via the Duistermaat-Heckman theorem.
\item Considering $p \in \mathfrak{u}(n\,N) \otimes \sigma^0$ and applying abelian
localization to the maximal torus $T$ of the gauge group $G = U(n\,N)$.
This will be elaborated in detail in Section~\ref{Abel-loc},
taking advantage of a suitable polar decomposition of the orbit space.
This in turn will involve a localization onto the radial $U(N_+)\times
U(N_-)$-foliation, accompanied by a fluctuation
integral over the moduli space of symplectic leaves.
\item Adding a localization form $Q\alpha$ as in Section~\ref{NonabLoc},
and applying nonabelian localization techniques to write the
partition function as a sum over local contributions from Yang-Mills
critical points.
\end{enumerate}
Technique~3 here was of course dealt with at length in
Section~\ref{NonabLoc}, and will be compared
in some detail to the other two approaches below.
A comparison with
technique~1 is first of all interesting in its own right, as it relates
the matrix model approach of~\cite{matrixsphere} to gauge
theory on $S_N^2$ to the results of the present paper. It is also a
useful warm-up to the abelianization approach of technique~2 which
shares some of its qualitative features. We will find that the
abelianization technique through the polar decomposition of the
configuration space exploits the radial coordinates in a rather
explicit way to describe the local geometry of Yang-Mills critical
surfaces, and it may also find useful applications in related
considerations.
\bigskip
\section{Itzykson-Zuber localization on the configuration space}
\label{IZ-loc}
The integral \eq{calZdef} can be evaluated immediately using the Itzykson-Zuber
formula~\cite{Itzykson:1979fi}, which we briefly recall.
If $X,Y$ are $m\times
m$ hermitian matrices with nondegenerate eigenvalues $x_i,y_i\in{\mathbb{R}}$,
$i=1,\dots,m$, then one has
\begin{equation}
\int_{U(m)}\,[{\rm d} U]~\exp\Big(\mbox{$\frac{{\,{\rm i}\,} N}s$}\,\Tr\big(
X\,U\,Y\,U^\dag\big)\Big)=c_N(m,s)~\frac{\det\limits_{1\leq i,j
\leq m}\,\big(\epsilon^{\frac{{\,{\rm i}\,} N}s\,x_i\,y_j}\big)}
{\Delta(x)~\Delta(y)}
\label{IZformula}\end{equation}
where for $m\in{\mathbb{N}}$ and $s\in\Gamma$ we have defined
\begin{equation}
c_N(m,s):={\rm vol}\big(U(m)\big)\,
({\,{\rm i}\,} N/s)^{-m\,(m-1)/2}~\prod_{k=1}^{m-1}\,k! \ .
\label{cmsdef}\end{equation}
Applied to the present situation for $U(\mathcal{N}\,)$, this yields
\begin{eqnarray}
Z_\cO(p) &=&\frac1{{\rm vol}\big(U(N_+)\big)~{\rm vol}\big(U(N_-)\big)}~
\int_{U(\mathcal{N}\,)}\,[{\rm d} U]~ \exp\Big(-\mbox{$\frac\ii2$}\,
\Tr\big(U^{-1}\, \Xi\, U\,\Phi\big)\Big) \nonumber\\[4pt]
&=&c_1'(\mathcal{N},2)~\frac{\det\limits_{1\leq i,j\leq\mathcal{N}}\,\big(
\epsilon^{-\frac\ii2 \,\Xi_i\,\Phi_j}\big)}{\D(\Xi) ~\D(\Phi)}
\label{U-int}
\end{eqnarray}
where $\Phi = \mbox{diag}(p_1,\dots,p_{n\,N}) \otimes \sigma^0$ and
$c_1'(\mathcal{N},2):=c_1(\mathcal{N},2)\,\big/\,{\rm vol}\big(U(N_+)\big)\,
{\rm vol}\big(U(N_-)\big)$. This
formula can be understood as an abelian localization with respect
to the action of the maximal torus group $U(1)^{\mathcal{N}}$ on the flag
manifold $U(\mathcal{N}\,)/U(1)^{\mathcal{N}}$~\cite{szaboloc}. The corresponding
fixed points are the solutions of the equation
\begin{equation}
[C,\Phi] =0 \ ,
\label{IZsaddle}\end{equation}
which are the saddle-points of the Itzykson-Zuber integral, and the
expansion of the determinant in \eq{U-int} into a sum over
permutations $\pi\in\mathfrak{S}_\mathcal{N}$ gives the sum over critical points in the
localization formula. This is completely analogous to the abelianized
localization of Section~\ref{Abel-loc}. However, the expression
\eq{U-int} is formal as it stands because both sets of eigenvalues $\Xi_i$ and
$\Phi_i$ are degenerate, and correspondingly the critical surfaces are
in fact nontrivial spaces.
Therefore \eq{U-int} has to be defined
using an appropriate limiting procedure which removes the degeneracy.
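For nondegenerate eigenvalues the Itzykson-Zuber formula \eq{IZformula} itself is straightforward to verify numerically. The following Monte Carlo sketch (assuming scipy's Haar sampler; the eigenvalues and the real coupling $t$, which stands in for ${\,{\rm i}\,} N/s$, are illustrative) checks the $m=2$ case against the closed form $\big({\,\rm e}\,^{t(x_1y_1+x_2y_2)}-{\,\rm e}\,^{t(x_1y_2+x_2y_1)}\big)\big/\big(t\,(x_1-x_2)(y_1-y_2)\big)$ for the normalized Haar measure:

```python
import numpy as np
from scipy.stats import unitary_group

x = np.array([1.3, -0.4])
y = np.array([0.7, -1.1])
t = 0.5                       # real coupling standing in for i N / s

# Monte Carlo over Haar-random U(2); for diagonal X, Y one has
# Tr(X U Y U+) = sum_ij x_i |U_ij|^2 y_j
Us = unitary_group.rvs(2, size=100_000, random_state=42)
P = np.abs(Us)**2
mc = np.mean(np.exp(t * np.einsum('i,nij,j->n', x, P, y)))

# closed form for m = 2 with normalized Haar measure
det2 = np.exp(t*(x[0]*y[0] + x[1]*y[1])) - np.exp(t*(x[0]*y[1] + x[1]*y[0]))
exact = det2 / (t * (x[0] - x[1]) * (y[0] - y[1]))

assert abs(mc - exact) / abs(exact) < 0.01
```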
The partition function \eq{Z-2} is then given by
\begin{equation}
Z=\frac{~ c_1'(\mathcal{N},2)}{(n\,N)!}\,\int_{{\mathbb{R}}^{n\,N}}\,
\Big[\,\frac{{\rm d} p}{2\pi}\,\Big]~
{\,\rm e}\,^{-\frac{g'}4\,\Tr(p^2)}~\Delta(p)^2
~\frac{\det\limits_{1\leq i,j\leq\mathcal{N}}\,\big(
\epsilon^{-\frac\ii2 \,\Xi_i\,\Phi_j}\big)}{\D(\Xi) ~\D(\Phi)}\ ,
\label{Z-full-IZ}
\end{equation}
where the
set of eigenvalues $\Phi_i$ of $\Phi$ consists of two copies of
$(p_1,\dots,p_{nN})$ and is therefore highly degenerate.
While this explicit formula in terms of an $nN$-dimensional integral
is very appealing, the
ratio of degenerate determinants in \eq{Z-full-IZ} makes it
difficult to evaluate explicitly~\cite{matrixsphere}, and its
combinatorial expansion is even more intricate than that of
Section~\ref{UnAbelian}. Thus far only an asymptotic
analysis (of a slightly modified integral) has been made possible
in~\cite{matrixsphere}. The reason for this complexity is the fact
that, without the addition of a suitable localization form $Q\alpha$ to
the path integral \eq{Z-2}, the localization is onto the solutions of
the equation \eq{IZsaddle} in $\cO$ which are not related to the
critical surfaces of the Yang-Mills action in any simple way.
This will be explored in more detail below.
\bigskip
\section{Abelian localization and radial coordinates\label{Abel-loc}}
We now return to the symplectic orbit integral \eq{calZdef}, and
observe that it
fulfills the conditions of the Duistermaat-Heckman theorem, or
equivalently the abelian version of the localization theorem of
Section~\ref{LocPrinc}. Therefore, we have mapped the original
nonabelian localization problem to the simpler problem of {\it
abelian} localization. Indeed, $\langle\mu_T(C),p\rangle=\Tr(p\,C)$
is just the restriction of the moment map
$\mu:\cO(\Xi)\to\mathfrak{u}(\mathcal{N}\,)^\vee$ to the maximal torus $T$ of the
gauge group $G$. The torus action on the orbit space $\cO(\Xi)$ is the
restriction of the adjoint $G$-action given by
\begin{equation}
C~\longmapsto~P\,C\,P^{-1}
\end{equation}
for $C=C_\mu\otimes\sigma^\mu=U\,\Xi\,U^{-1}\in\cO(\Xi)$, $U\in
U(\mathcal{N}\,)$ and $P\in T$. To compute the corresponding localization
formula we need the fixed points of this $T$-action.
They are given by those $C\in\cO(\Xi)$ which commute with the
$T$-action generated by the element $p\in\mathfrak{u}(1)^{n\,N}$, so that
\begin{equation}
[C,p]=0 \ .
\label{Cp0}\end{equation}
This equation will be studied in detail in Section~\ref{AbLoc}.
It is solved by those $U\in U(\mathcal{N}\,)$ for which
$U^{-1}\,P\,U$ lies in the stabilizer subgroup $U(n\,N_+)\times
U(n\,N_-)\subset U(\mathcal{N}\,)$ of the element $\Xi$ (with $N_\pm:=N\pm1$
as before). The saddle points
$U$ are generically also labelled by permutation matrices $\Sigma\in
U(\mathcal{N}\,)$ representing elements $\pi\in\mathfrak{S}_{n\,N}$. On the
configuration space $\cO$, the saddle point equation \eq{Cp0} means
that $C$ commutes with the characteristic projectors of $p$, i.e. $C$
has the same block decomposition as $p$.
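The block structure enforced by the fixed-point equation \eq{Cp0} is easily checked numerically: for $p$ with two distinct eigenvalue blocks of sizes $n_1$ and $n_2$, the solution space of $[C,p]=0$ is the block-diagonal commutant, of complex dimension $n_1^2+n_2^2$. A small sketch (assuming numpy; the block sizes and eigenvalues are illustrative) computes this dimension as the kernel of the vectorized commutator map:

```python
import numpy as np

n1, n2 = 2, 3
N = n1 + n2
p = np.diag([1.0]*n1 + [2.5]*n2)     # two distinct eigenvalue blocks

# linear map C -> C p - p C on vectorized matrices: (p^T kron I - I kron p)
L = np.kron(p.T, np.eye(N)) - np.kron(np.eye(N), p)
kernel_dim = N*N - np.linalg.matrix_rank(L)

# the commutant consists of block-diagonal matrices, of dimension n1^2 + n2^2
assert kernel_dim == n1**2 + n2**2
```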
The Fourier transform (\ref{calZdef}) will thus generically localize
onto a subspace of $U(n\,N_+)\times U(n\,N_-)$ in $\cO$. It may be
evaluated with the help of the degenerate version of the
Duistermaat-Heckman theorem~\cite{szaboloc}, which expresses it in
terms of an integral over the critical submanifold $U(n\,N_+)\times
U(n\,N_-)$ with the quantum fluctuation determinants determined by the
$T$-equivariant Euler class of the normal bundle to the
stabilizer~\cite{BGVbook}. While this can be worked out in principle,
it is rather cumbersome to do in practice. Instead we will proceed in
a more direct fashion by exploiting some further geometrical properties of the
configuration space $\cO$, which in the next section will be related
to the local symplectic geometry near each Yang-Mills critical point
as analysed at length in Section~\ref{ClassSols}. This explicit
calculation will justify the abelianized localization {\it a priori},
with the quantum fluctuation determinants given by integrals over
symplectic leaves of a foliation of the configuration space
parametrized by abelian subspaces of the tangent spaces to $\cO$. The
symplectic integral (\ref{calZdef}) could also be analysed using
Fourier transform techniques along with the Guillemin-Lerman-Sternberg
theorem~\cite{GLS1}, as in~\cite{JK1,Paradan1,JKKW1}, but this leads
to much more complicated combinatorial expressions than the ones we
derive.
\subsection{Polar decomposition of the configuration
space\label{RadialCoords}}
The key step in the evaluation of (\ref{calZdef}) is the introduction
of {\it radial coordinates} on the orbit space
(see~\cite{helgason1,casmag1,szabo1} for details). Let us go back to
the Cartan decomposition \eq{ucN-decomp} at a given point
$C\in\cO$. Let $\mathfrak{t}$ be a maximal abelian subalgebra in the tangent
space $T_C\cO\cong\ker({\cal J}^2+\mbox{1 \kern-.59em {\rm l}}_\mathcal{N})$. Then the radial coordinates
on the orbit space $\cO$ are given by
\begin{equation}
U=V\,R\,V^{-1}=V\,R~{\sf j}\big(V^{-1}\big)
\end{equation}
where $V\in U(n\,N_+)\times U(n\,N_-)$, modulo elements of the
centralizer of $\mathfrak{t}$, and $R\in\exp(\mathfrak{t})$ up to the adjoint action of
the Weyl group of the {\it restricted} root system of the irreducible
symmetric space~$\cO$. By definition, they satisfy the respective
commutation and anticommutation relations
\begin{equation}
V\,\Xi = \Xi\, V \qquad \mbox{and} \qquad R \,\Xi = \Xi\, R^{-1} \ .
\label{commanticommrels}\end{equation}
The corresponding covariant coordinate $C\in\cO(\Xi)$ is then given by
\begin{eqnarray}
C &=& U\,\Xi\,U^{-1} \nonumber\\[4pt] &=& V\, R \, \Xi\, R^{-1}\, V^{-1}
\nonumber\\[4pt]
&=&\mbox{$\frac12$}\,V\,\left(R^2\,\Xi + \Xi\, R^{-2}\right)\,V^{-1}
~=~\mbox{$\frac12$}\,\left(V\, R^2\, V^{-1}\,\Xi + \Xi\, V\, R^{-2}\,
V^{-1}\right) \ .
\label{orbit-coordinates}
\end{eqnarray}
The Jacobian for the change of invariant integration measure on $\cO$
can be computed by standard techniques with the result
\begin{equation}
{\rm d} C=r(n,N)~[{\rm d} V]~\prod_{i=1}^{\dim(\mathfrak{t})}\,{\rm d} r_i~
\prod_{\alpha>0}\,\bigl|\sin(\alpha,\log R)\bigr|^{m_\alpha}
\label{changemeas}\end{equation}
where\footnote{The normalization constant $r(n,N)$ is determined by
the requirement $\int_\cO\,{\rm d} C={\rm vol}(\cO)$.}
\begin{equation}
r(n,N)=\frac{{\rm vol}\big(U(\mathcal{N}\,)\big)}{{\rm vol}\big(U(n\,N_+)\big)^2~
{\rm vol}\big(U(n\,N_-)\big)^2}\,\frac{2^{n^2\,(N^2-1)/2}}
{2^{n\,(N-2n\,N-3)/2}} \ .
\label{rnNdef}\end{equation}
The radial coordinates $r_i\in[0,\frac\pi2]$ are the eigenvalues of
$U$, while $V$ are the angular coordinates with $[{\rm d} V]$ denoting the
standard invariant Haar measure. The second product runs over positive
roots of the restricted root lattice on $\cO$, and $m_\alpha$ is the
multiplicity of the root $\alpha$ in the Cartan decomposition
\eq{ucN-decomp}. The pairing is defined by choosing an orthonormal
basis $\vec e_i$ in weight space and identifying a root vector
$\alpha$ with the dual element $\alpha^\vee=\sum_i\alpha_i\,\vec
e_i$. Then $(\alpha,\log R)=\sum_i\alpha_i\,r_i$. This polar
decomposition defines a foliation of the configuration space $\cO$ by
conjugacy classes under the adjoint action of the stabilizer
subgroup. The radial symplectic leaves ${\cal L}(R)$ of this foliation
are parametrized by the abelian Lie group $\exp(\mathfrak{t})$.
Let us make this decomposition more explicit using the known data for
the symmetric space \eq{nonaborbit}~\cite{casmag1}. The
restricted root lattice is given by the root system
$BC_{n\,N_-}=B_{n\,N_-}\cup C_{n\,N_-}$ which has positive weights $\vec e_i\pm
\vec e_j$, $2\vec e_i$ and $\vec e_i$ with $i,j=1,\dots,n\,N_-$,
$i<j$. The corresponding multiplicities are $m_{\vec e_i\pm\vec
e_j}=2$, $m_{2\vec e_i}=1$ and $m_{\vec e_i}=4n$. The gauge
invariant volume form on $\cO$ thereby becomes
\begin{equation}
{\rm d} C=r(n,N)~[{\rm d} V]~\prod_{i=1}^{n\,N_-}\,{\rm d} r_i~\sin2r_i\,\sin^{4n}r_i~
\prod_{i<j}\,\sin^2(r_i-r_j)\,\sin^2(r_i+r_j) \ .
\end{equation}
Using the trigonometric identities
\begin{equation}
\sin(r_i-r_j)\,\sin(r_i+r_j)=\mbox{$\frac12$}\,(
\cos2r_j-\cos2r_i) \qquad \mbox{and}
\qquad \sin^2r_i=\mbox{$\frac12$}\,(1-\cos2r_i) \ ,
\end{equation}
and defining $\lambda_i:=\cos2r_i\in[-1,1]$, we may bring the measure to
the form
\begin{equation}
{\rm d} C=\frac{r(n,N)}{2^{n^2\,(N^2-1)}}~[{\rm d} V]~\Delta(\lambda)^2~
\prod_{i=1}^{n\,N_-}\,{\rm d}\lambda_i~(1-\lambda_i)^{2n} \ .
\end{equation}
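As a quick numerical sanity check of this change of variables (an illustrative Python sketch external to the derivation; all variable names are ours), one can verify for $n=1$ and $n\,N_-=3$ that the two densities agree pointwise, including the Jacobian factor $\prod_i\,2\sin 2r_i$ from ${\rm d}\lambda_i=-2\sin 2r_i~{\rm d} r_i$ and an overall constant $2^{-15}=2^{-N_-}\,4^{-N_-}\,4^{-N_-(N_--1)/2}$ for $N_-=3$:

```python
import numpy as np

# n = 1, n N_- = 3: compare the r-density with the lambda-density
rng = np.random.default_rng(0)
r = rng.uniform(0.1, np.pi / 2 - 0.1, size=3)
lam = np.cos(2 * r)                                   # lambda_i = cos 2 r_i

# density in the radial variables r_i
lhs = np.prod(np.sin(2 * r) * np.sin(r) ** 4)
for i in range(3):
    for j in range(i + 1, 3):
        lhs *= np.sin(r[i] - r[j]) ** 2 * np.sin(r[i] + r[j]) ** 2

# density in the lambda variables, times the Jacobian prod_i |d lambda_i / d r_i|
vdm2 = np.prod([(lam[i] - lam[j]) ** 2
                for i in range(3) for j in range(i + 1, 3)])
rhs = 2.0 ** (-15) * np.prod(2 * np.sin(2 * r)) * vdm2 * np.prod((1 - lam) ** 2)

assert np.isclose(lhs, rhs)
```

The agreement is exact up to machine precision, since both sides follow from the two trigonometric identities quoted above.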
A convenient choice for the radial coordinates is provided by setting
\begin{equation}
\rho:={\rm diag}(r_1,\dots,r_{n\,N_-})
\label{rhodefr}\end{equation}
and defining
\begin{equation}
R={\rm diag}\bigl(\sigma^0\otimes\mbox{1 \kern-.59em {\rm l}}_n\,,\,
\exp({\,{\rm i}\,}\sigma^1\otimes\rho)\bigr)={\rm diag}\big(
\sigma^0\otimes\mbox{1 \kern-.59em {\rm l}}_n\,,\,\sigma^0\otimes\cos(\rho)+{\,{\rm i}\,}
\sigma^1\otimes\sin(\rho)\big) \ .
\label{Rconv}\end{equation}
We also choose a basis in which
\begin{equation}
\Xi=\mbox{$\frac N2$}~{\rm diag}\big(\mbox{1 \kern-.59em {\rm l}}_{n\,N_+}\,,\,-\mbox{1 \kern-.59em {\rm l}}_{n\,N_-}
\big)=\mbox{$\frac N2$}~{\rm diag}\big(\sigma^0\otimes\mbox{1 \kern-.59em {\rm l}}_n\,,\,
\sigma^3\otimes\mbox{1 \kern-.59em {\rm l}}_{n\,N_-}\big)
\label{Xichoice}\end{equation}
and $V\in U(n\,N_+)\times U(n\,N_-)$ is given by
\begin{equation}
V={\rm diag}(V_+,V_-) \ ,
\label{Vconv}\end{equation}
with $V_\pm\in U(n\,N_\pm)$ and $[{\rm d} V]=[{\rm d} V_+]~[{\rm d} V_-]$. The
relations \eq{commanticommrels} are then automatically satisfied.
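The relations \eq{commanticommrels} and the identity \eq{orbit-coordinates} can be checked numerically with these explicit conventions (an illustrative sketch for $n=1$, $N=3$, so that $N_+=4$ and $N_-=2$; the variable names are ours):

```python
import numpy as np

rng = np.random.default_rng(1)
N, Nm, Np_ = 3, 2, 4            # n = 1: blocks of sizes n N_+ = 4 and n N_- = 2
s0 = np.eye(2)
s1 = np.array([[0.0, 1.0], [1.0, 0.0]])

rvec = rng.uniform(0, np.pi / 2, Nm)
cr, sr = np.diag(np.cos(rvec)), np.diag(np.sin(rvec))
R = np.block([
    [np.eye(2), np.zeros((2, 2 * Nm))],
    [np.zeros((2 * Nm, 2)), np.kron(s0, cr) + 1j * np.kron(s1, sr)],
])                               # convention (Rconv)
Xi = (N / 2) * np.diag([1, 1] + [1] * Nm + [-1] * Nm)   # basis (Xichoice)

def haar(k):                     # random unitary from a QR decomposition
    q, _ = np.linalg.qr(rng.normal(size=(k, k)) + 1j * rng.normal(size=(k, k)))
    return q

V = np.zeros((2 * N, 2 * N), dtype=complex)
V[:Np_, :Np_], V[Np_:, Np_:] = haar(Np_), haar(Nm)      # V = diag(V_+, V_-)

Ri = np.linalg.inv(R)
assert np.allclose(V @ Xi, Xi @ V)        # V Xi = Xi V
assert np.allclose(R @ Xi, Xi @ Ri)       # R Xi = Xi R^{-1}

U = V @ R @ np.linalg.inv(V)
C = U @ Xi @ np.linalg.inv(U)
C2 = 0.5 * (V @ R @ R @ np.linalg.inv(V) @ Xi
            + Xi @ V @ Ri @ Ri @ np.linalg.inv(V))
assert np.allclose(C, C2)                 # identity (orbit-coordinates)
```

The last assertion encodes precisely the step $R\,\Xi\,R^{-1}=\frac12(R^2\,\Xi+\Xi\,R^{-2})$, which follows from the anticommutation relation.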
\subsection{Evaluation of the abelianized partition function: $U(1)$
gauge theory\label{U1Abelian}}
We will now explicitly evaluate the Fourier transform
\eq{calZdef}, beginning with the abelian case $n=1$. Using
\eq{orbit-coordinates} and \eq{rhodefr}--\eq{Vconv}, it is
straightforward to work out the abelian moment map in (\ref{calZdef})
with the result
\begin{eqnarray}
\big\langle\mu_T(C)\,,\,p\big\rangle
&=&\Tr\big(p\,U\,\Xi\,U^{-1}\big)\nonumber\\[4pt]
&=&\mbox{$\frac12$}\,\Tr\bigl(p\,\Xi\,V\,(R^2+R^{-2})\,V^{-1}
\bigr) \nonumber\\[4pt]
&=&\Tr\Bigl(p\,\Xi\,V~{\rm diag}\bigl(
\sigma^0\,,\,\cos(2\sigma^1\otimes\rho)\bigr)\,V^{-1}\Bigr)
\nonumber\\[4pt]
&=&\mbox{$\frac N2$}\,\Tr\bigl({\rm diag}(p_1\,\sigma^0,p_2,\dots,
p_{N})\,V_+~{\rm diag}(\sigma^0,\lambda_1,\dots,\lambda_{N_-})\,
V_+^{-1}\bigr)
\nonumber\\ && -\,\mbox{$\frac N2$}\,\Tr\bigl({\rm diag}(p_2,
\dots,p_{N})\,V_-~{\rm diag}(\lambda_1,\dots,\lambda_{N_-})
\,V_-^{-1}\bigr)
\label{calZactionexpl}
\end{eqnarray}
where we have used an inconsequential redefinition of the unitary
matrix $V_+$ by multiplication with an appropriate permutation
matrix. Upon substitution into (\ref{calZdef}), we see that the two
angular integrals decouple from each other.
The integral over $V_-\in U(N_-)$ is now easily evaluated with the
help of \eq{IZformula} with the result
\begin{equation}
\frac{c_N(N_-,4)}{\Delta(p_2,\dots,p_{N})\,\Delta(\lambda)}~
\sum_{\pi_-\in\mathfrak{S}_{N_-}}\,{\rm sgn}(\pi_-)~
\prod_{i=1}^{N_-}\,{\,\rm e}\,^{\frac{{\,{\rm i}\,} N}4\,
p_{i+1}\,\lambda_{\pi_-(i)}} \ .
\label{VminusIZ}\end{equation}
The integral over $V_+\in U(N_+)$ is more delicate since the
Itzykson-Zuber formula will involve a ratio of degenerate
determinants. Since both numerator and denominator of \eq{IZformula}
are completely antisymmetric functions of the eigenvalues $x_i$ and
$y_i$ independently, the limit where some eigenvalues coalesce gives a
well-defined analytic function in $(x_i,y_i)$ because all poles are
cancelled by zeroes in the determinant. We will regularize the
$V_+$-integral by replacing the first entry $p_1$ in the last line of
(\ref{calZactionexpl}) with an auxiliary momentum variable
$p_0\in{\mathbb{R}}$ and the second entry $1$ with an auxiliary radial
variable $\lambda_0\in[-1,1]$, and afterwards take the limits
$p_0\to p_1$ and $\lambda_0\to1$. Defining $\lambda_N:=1$, the Itzykson-Zuber
formula \eq{IZformula} applied to the regularized $V_+$-integral
yields
\begin{equation}
\frac{c_N(N_+,-4)}{\Delta(p_0,p_1,\dots,p_N)\,\Delta(\lambda_0,
\lambda_1,\dots,\lambda_N)}~\sum_{\pi_+\in\mathfrak{S}_{N_+}}\,
{\rm sgn}(\pi_+)~{\,\rm e}\,^{-\frac{{\,{\rm i}\,} N}4\,p_0\,\lambda_{\pi_+(N)}}~
\prod_{i=0}^{N_-}\,{\,\rm e}\,^{-\frac{{\,{\rm i}\,} N}4\,p_{i+1}\,\lambda_{\pi_+(i)}}
\ .
\label{VplusIZ}\end{equation}
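The finiteness of the Itzykson-Zuber ratio under coalescing eigenvalues, with the limit computed by l'H\^opital's rule as above, can be illustrated in a $3\times3$ toy example (our own sketch, independent of the normalization constants in \eq{IZformula}):

```python
import numpy as np

x = np.array([0.3, 1.1, 2.0])
y1, y3 = 0.5, 1.7

def ratio(y2):
    """det(e^{x_i y_j}) / (Delta(x) Delta(y)) with Delta(z) = prod_{i<j}(z_i - z_j)."""
    y = np.array([y1, y2, y3])
    dx = np.prod([x[i] - x[j] for i in range(3) for j in range(i + 1, 3)])
    dy = np.prod([y[i] - y[j] for i in range(3) for j in range(i + 1, 3)])
    return np.linalg.det(np.exp(np.outer(x, y))) / (dx * dy)

# l'Hopital limit as y2 -> y1: differentiate the second column of the
# determinant and divide by d/dy2 Delta(y) |_{y2=y1} = -(y1 - y3)^2
M = np.column_stack([np.exp(x * y1), x * np.exp(x * y1), np.exp(x * y3)])
dx = np.prod([x[i] - x[j] for i in range(3) for j in range(i + 1, 3)])
limit = np.linalg.det(M) / (dx * (-(y1 - y3) ** 2))

assert np.isclose(ratio(y1 + 1e-6), limit, rtol=1e-4)
```

The pole of $1/\Delta(y)$ is cancelled by a zero of the determinant, exactly as claimed in the text.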
Taking the limit $p_0\to p_1$ first using
l'H\^opital's rule gives
\begin{eqnarray}
&&\frac{\frac{{\,{\rm i}\,} N}4\,c_N(N_+,-4)}{p_1~\prod\limits_{i=2}^N\,(p_1-p_i)~
\Delta(p)~\Delta(\lambda_0,\lambda_1,\dots,\lambda_N)}\nonumber\\ &&
\qquad\qquad\times\,
\sum_{\pi_+\in\mathfrak{S}_{N_+}}\,{\rm sgn}(\pi_+)\,\lambda_{\pi_+(N)}~
{\,\rm e}\,^{-\frac{{\,{\rm i}\,} N}4\,p_1\,\lambda_{\pi_+(N)}}~
\prod_{i=0}^{N_-}\,{\,\rm e}\,^{-\frac{{\,{\rm i}\,} N}4\,p_{i+1}\,\lambda_{\pi_+(i)}}
\ .
\label{VplusIZp}\end{eqnarray}
Finally, taking the limit $\lambda_0\to1$ again
using l'H\^opital's rule yields
\begin{eqnarray}
&&\frac{-\frac{{\,{\rm i}\,} N}4\,c_N(N_+,-4)}{p_1~\prod\limits_{i=2}^N\,(p_1-p_i)~
\Delta(p)~\prod\limits_{i=1}^{N_-}\,(1-\lambda_i)^2~\Delta(\lambda)}
\\ && \qquad\qquad\times\,
\left|\begin{matrix}\left(1-\frac{{\,{\rm i}\,} N}4\,p_1\right)~{\,\rm e}\,^{-\frac{{\,{\rm i}\,} N}4\,p_1}&
\lambda_1~{\,\rm e}\,^{-\frac{{\,{\rm i}\,} N}4\,p_1\,\lambda_1}&\dots&
\lambda_{N_-}~{\,\rm e}\,^{-\frac{{\,{\rm i}\,} N}4\,p_1\,\lambda_{N_-}}&
{\,\rm e}\,^{-\frac{{\,{\rm i}\,} N}4\,p_1}\cr-\frac{{\,{\rm i}\,} N}4\,p_1~{\,\rm e}\,^{-\frac{{\,{\rm i}\,}
N}4\,p_1}&{\,\rm e}\,^{-\frac{{\,{\rm i}\,} N}4\,p_1\,\lambda_1}&\dots&
{\,\rm e}\,^{-\frac{{\,{\rm i}\,} N}4\,p_1\,\lambda_{N_-}}&{\,\rm e}\,^{-\frac{{\,{\rm i}\,} N}4\,p_1}\cr
\vdots&\vdots& &\vdots&\vdots\cr-\frac{{\,{\rm i}\,} N}4\,p_N~{\,\rm e}\,^{-\frac{{\,{\rm i}\,}
N}4\,p_N}&{\,\rm e}\,^{-\frac{{\,{\rm i}\,} N}4\,p_N\,\lambda_1}&\dots&
{\,\rm e}\,^{-\frac{{\,{\rm i}\,} N}4\,p_N\,\lambda_{N_-}}&{\,\rm e}\,^{-\frac{{\,{\rm i}\,} N}4\,p_N}
\end{matrix}\right| \ . \nonumber
\end{eqnarray}
Substituting the above into (\ref{calZdef}) gives us the expression
\begin{eqnarray}
Z_\cO(p)&=&-~\frac{4~{\rm vol}(\cO)}{p_1~\Delta(p)^2}\,
\frac{N!\,(N-1)!\,\prod\limits_{k=1}^{N-2}\,(k!)^2}
{\left(\,\sqrt8\,N\right)^{N^2-N}}~\prod_{l=1}^{N_-}\,
\int_{-1}^1\,{\rm d}\lambda_l~\sum_{\pi_-\in\mathfrak{S}_{N_-}}\,{\rm sgn}(\pi_-)~
\prod_{i=1}^{N_-}\,{\,\rm e}\,^{\frac{{\,{\rm i}\,} N}4\,
p_{i+1}\,\lambda_{\pi_-(i)}}\nonumber\\ &&
\times~\left|\begin{matrix}\left(1-\frac{{\,{\rm i}\,} N}4\,p_1\right)~
{\,\rm e}\,^{-\frac{{\,{\rm i}\,} N}4\,p_1}&
\lambda_1~{\,\rm e}\,^{-\frac{{\,{\rm i}\,} N}4\,p_1\,\lambda_1}&\dots&
\lambda_{N_-}~{\,\rm e}\,^{-\frac{{\,{\rm i}\,} N}4\,p_1\,\lambda_{N_-}}&
{\,\rm e}\,^{-\frac{{\,{\rm i}\,} N}4\,p_1}\cr-\frac{{\,{\rm i}\,} N}4\,p_1~{\,\rm e}\,^{-\frac{{\,{\rm i}\,}
N}4\,p_1}&{\,\rm e}\,^{-\frac{{\,{\rm i}\,} N}4\,p_1\,\lambda_1}&\dots&
{\,\rm e}\,^{-\frac{{\,{\rm i}\,} N}4\,p_1\,\lambda_{N_-}}&{\,\rm e}\,^{-\frac{{\,{\rm i}\,} N}4\,p_1}\cr
\vdots&\vdots& &\vdots&\vdots\cr-\frac{{\,{\rm i}\,} N}4\,p_N~{\,\rm e}\,^{-\frac{{\,{\rm i}\,}
N}4\,p_N}&{\,\rm e}\,^{-\frac{{\,{\rm i}\,} N}4\,p_N\,\lambda_1}&\dots&
{\,\rm e}\,^{-\frac{{\,{\rm i}\,} N}4\,p_N\,\lambda_{N_-}}&{\,\rm e}\,^{-\frac{{\,{\rm i}\,} N}4\,p_N}
\end{matrix}\right| \ .
\label{calZalmost}\end{eqnarray}
We will now write the product of determinants in (\ref{calZalmost}) as
a single sum over the Weyl group $\mathfrak{S}_N$ of the original gauge
symmetry group $U(N)$. For this, we embed $\mathfrak{S}_{N_-}$ in the Weyl
group $\mathfrak{S}_N$ as the subgroup of permutations $\pi_-$ of
$\{1,\dots,N_-,N\}$ with $\pi_-(N)=N$. We perform a Laplace expansion
of the second determinant in (\ref{calZalmost}) into minors along the
first row to write
\begin{eqnarray}
&&\left|\begin{matrix}\left(1-\frac{{\,{\rm i}\,} N}4\,p_1\right)~
{\,\rm e}\,^{-\frac{{\,{\rm i}\,} N}4\,p_1}&
\lambda_1~{\,\rm e}\,^{-\frac{{\,{\rm i}\,} N}4\,p_1\,\lambda_1}&\dots&
\lambda_{N_-}~{\,\rm e}\,^{-\frac{{\,{\rm i}\,} N}4\,p_1\,\lambda_{N_-}}&
{\,\rm e}\,^{-\frac{{\,{\rm i}\,} N}4\,p_1}\cr-\frac{{\,{\rm i}\,} N}4\,p_1~{\,\rm e}\,^{-\frac{{\,{\rm i}\,}
N}4\,p_1}&{\,\rm e}\,^{-\frac{{\,{\rm i}\,} N}4\,p_1\,\lambda_1}&\dots&
{\,\rm e}\,^{-\frac{{\,{\rm i}\,} N}4\,p_1\,\lambda_{N_-}}&{\,\rm e}\,^{-\frac{{\,{\rm i}\,} N}4\,p_1}\cr
\vdots&\vdots& &\vdots&\vdots\cr-\frac{{\,{\rm i}\,} N}4\,p_N~{\,\rm e}\,^{-\frac{{\,{\rm i}\,}
N}4\,p_N}&{\,\rm e}\,^{-\frac{{\,{\rm i}\,} N}4\,p_N\,\lambda_1}&\dots&
{\,\rm e}\,^{-\frac{{\,{\rm i}\,} N}4\,p_N\,\lambda_{N_-}}&{\,\rm e}\,^{-\frac{{\,{\rm i}\,} N}4\,p_N}
\end{matrix}\right|\nonumber\\ && \qquad\qquad~=~
\sum_{\pi_+\in\mathfrak{S}_N}\,{\rm sgn}(\pi_+)\,
\Biggl[\left(1-\mbox{$\frac{{\,{\rm i}\,} N}4$}\,p_1\right)~
{\,\rm e}\,^{-\frac{{\,{\rm i}\,} N}4\,p_1}~\prod_{i=1}^N\,{\,\rm e}\,^{-\frac{{\,{\rm i}\,} N}4\,
\lambda_i\,p_{\pi_+(i)}}\Biggr.\nonumber\\ && \qquad \qquad \qquad
\Biggl.-\,\frac{{\,{\rm i}\,} N}4\,\sum_{i=1}^N\,\lambda_i~
{\,\rm e}\,^{-\frac{{\,{\rm i}\,} N}4\,\lambda_i\,p_1}\,p_{\pi_+(i)}~
{\,\rm e}\,^{-\frac{{\,{\rm i}\,} N}4\,p_{\pi_+(i)}}~
\prod_{\stackrel{\scriptstyle k=1}{\scriptstyle k\neq i}}^N\,
{\,\rm e}\,^{-\frac{{\,{\rm i}\,} N}4\,\lambda_k\,p_{\pi_+(k)}}\Biggr] \ .
\label{2nddetexpand}\end{eqnarray}
When inserted into the expression (\ref{calZalmost}), we can use the
invariance of the radial integration measure and domain under
permutations of the $\lambda_i$'s to reduce the double sum over the
Weyl groups to a {\it single} sum over the relative permutation
$\pi:=\pi_+\,\pi_-^{-1}\in\mathfrak{S}_N$ with $\pi(N)=\pi_+(N)$. The sum over
$\pi_+$ can be replaced by a sum over $\pi$, while the remaining sum
over $\pi_-$ simply produces the order $N!$ of the Weyl group of
$U(N)$.
In this way we may bring the Fourier transform of the orbit into the
form
\begin{eqnarray}
Z_\cO(p)&=&-~\frac{4~{\rm vol}(\cO)}{p_1~\Delta(p)^2}~
\frac{\prod\limits_{k=1}^{N}\,(k!)^2}
{(N-1)!\,\left(\,\sqrt8\,N\right)^{N^2-N}}\nonumber\\ && \times~\sum_{\pi
\in\mathfrak{S}_N}\,{\rm sgn}(\pi)\,\left[\left(1-\mbox{$\frac{{\,{\rm i}\,} N}4$}\,
p_1\right)~{\,\rm e}\,^{-\frac{{\,{\rm i}\,} N}4\,(p_1+p_{\pi(N)})}~
\prod_{i=1}^{N_-}\,\int_{-1}^1\,{\rm d}\lambda_i~
{\,\rm e}\,^{-\frac{{\,{\rm i}\,} N}4\,\lambda_i\,(p_{\pi(i)}-p_{i+1})}\right.
\nonumber\\ && \qquad\qquad
-\,\frac{{\,{\rm i}\,} N}4~\sum_{j=1}^{N_-}\,p_{\pi(j)}~
{\,\rm e}\,^{-\frac{{\,{\rm i}\,} N}4\,(p_{\pi(j)}+p_{\pi(N)})}
\nonumber\\ && \qquad\qquad\qquad\qquad\quad
\times~\int_{-1}^1\,{\rm d}\lambda_j~\lambda_j~{\,\rm e}\,^{-\frac{{\,{\rm i}\,} N}4\,
\lambda_j\,(p_1-p_{j+1})}~\prod_{\stackrel{\scriptstyle i=1}
{\scriptstyle i\neq j}}^{N_-}\,\int_{-1}^1\,{\rm d}\lambda_i~
{\,\rm e}\,^{-\frac{{\,{\rm i}\,} N}4\,\lambda_i\,(p_{\pi(i)}-p_{i+1})}
\nonumber\\ && \qquad\qquad
\left.-\,\mbox{$\frac{{\,{\rm i}\,} N}4$}\,p_{\pi(N)}~
{\,\rm e}\,^{-\frac{{\,{\rm i}\,} N}4\,(p_{\pi(N)}+p_1)}~
\prod_{i=1}^{N_-}\,\int_{-1}^1\,{\rm d}\lambda_i~{\,\rm e}\,^{-
\frac{{\,{\rm i}\,} N}4\,\lambda_i\,(p_{\pi(i)}-p_{i+1})}\right] \ .
\label{calZb4int}\end{eqnarray}
Finally, the radial integrations can be expressed in terms of the
spectral sine-kernel of the unitary ensemble of random matrix theory
and its derivative given by
\begin{equation}
{\sf K}(x):=\frac{\sin x}x=\frac12\,\int_{-1}^1\,{\rm d}\lambda~
{\,\rm e}\,^{-{\,{\rm i}\,}\lambda\,x} \quad \mbox{and} \quad {\sf K}'(x)=\frac1x\,
\left(\cos x-\frac{\sin x}x\right)=-\frac\ii2\,\int_{-1}^1\,
{\rm d}\lambda~\lambda~{\,\rm e}\,^{-{\,{\rm i}\,}\lambda\,x} \ .
\label{sinekernel}\end{equation}
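Both integral representations in \eq{sinekernel} are elementary to confirm numerically (an illustrative check only):

```python
import numpy as np

def itrapz(f, t):                     # simple trapezoidal rule
    return np.sum((f[1:] + f[:-1]) / 2 * np.diff(t))

lam = np.linspace(-1.0, 1.0, 20001)
for x in (0.7, 2.3, 5.9):
    K = np.sin(x) / x
    Kp = (np.cos(x) - np.sin(x) / x) / x
    # K(x)  = (1/2) int_{-1}^{1} d lam  e^{-i lam x}
    assert np.isclose(0.5 * itrapz(np.exp(-1j * lam * x), lam), K, atol=1e-6)
    # K'(x) = -(i/2) int_{-1}^{1} d lam  lam e^{-i lam x}
    assert np.isclose(-0.5j * itrapz(lam * np.exp(-1j * lam * x), lam), Kp, atol=1e-6)
```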
Then the abelianized partition function \eq{partfndiag} is
written as an exact expansion in gaussian momentum transforms given by
\begin{eqnarray}
Z&=&-~\frac{8~{\rm vol}(\cO)\,N!\,(N-1)!\,\prod\limits_{k=1}^{N-2}\,
(k!)^2}{2^{N^2}\,(2\pi)^N\,\left(\,\sqrt2\,N\right)^{N^2-N}}~
\sum_{\pi\in\mathfrak{S}_N}\,{\rm sgn}(\pi)~\int_{{\mathbb{R}}^N}\,
[{\rm d} p]~\frac{{\,\rm e}\,^{-\frac g{4N}\,\sum_i\,p_i^2}}{p_1}\nonumber\\ && \times~
\left[\left(1-\mbox{$\frac{{\,{\rm i}\,} N}4$}\,p_1\right)~
{\,\rm e}\,^{-\frac{{\,{\rm i}\,} N}4\,(p_1+p_{\pi(N)})}~\prod_{i=1}^{N_-}\,
{\sf K}\left(\mbox{$\frac N4$}\,(p_{\pi(i)}-p_{i+1})\right)\right.
\nonumber \\ && \qquad\qquad +\,\frac N4~\sum_{j=1}^{N_-}\,p_{\pi(j)}~
{\,\rm e}\,^{-\frac{{\,{\rm i}\,} N}4\,(p_{\pi(j)}+p_{\pi(N)})}~
{\sf K}'\left(\mbox{$\frac N4$}\,(p_1-p_{j+1})\right)~
\prod_{\stackrel{\scriptstyle i=1}{\scriptstyle i\neq j}}^{N_-}\,
{\sf K}\left(\mbox{$\frac N4$}\,(p_{\pi(i)}-p_{i+1})\right)\nonumber\\ &&
\qquad\qquad \left.-\,\mbox{$\frac{{\,{\rm i}\,} N}4$}\,p_{\pi(N)}~
{\,\rm e}\,^{-\frac{{\,{\rm i}\,} N}4\,(p_1+p_{\pi(N)})}~\prod_{i=1}^{N_-}\,
{\sf K}\left(\mbox{$\frac N4$}\,(p_{\pi(i)}-p_{i+1})\right)\right] \ .
\label{calZfinalp}\end{eqnarray}
For low values of $N$, the momentum integrals in this formula can be
computed in terms of transcendental error functions, which are the
typical contributions in nonabelian localization~\cite{witten1} and
reflect the occurence of non-gaussian quantum fluctuation integrals.
Note that there is a single momentum $p_1$ singled out in the formula
\eq{calZfinalp}. In the $U(n)$ case of Section~\ref{UnAbelian} below
there will be $n$ momenta singled out, from which the sum over sets of
$n$ integers required by the nonabelian localization formula will
arise in the large $N$ limit. At $N\to\infty$, the spectral kernels
${\sf K}\big(\frac{N}4\, (p_{\pi(i)} -
p_{i+1})\big)\approx\frac{4\pi}{N}\,\delta(p_{\pi(i)} -
p_{i+1})$ provide the necessary groupings of variables into
partitions of $N$ arising from the sum over the residual gauge
symmetry group $\mathfrak{S}_N$. The conjugacy class of a given permutation
$\pi\in\mathfrak{S}_N$ is characterized entirely by its cycle decomposition,
which contains $n_k\geq0$ cycles of length $k$ for $k=1,\dots,N$ with
$N=\sum_k\,k\,n_k$ and ${\rm
sgn}(\pi)=(-1)^{\sum_k\,(k-1)\,n_k}$. However, the saddle-point
partitions here do {\it not} correspond to the cycles themselves,
but rather to the {\it numbers} $N_{n_1,\dots,n_N}$ of cycles
$(n_1,\dots,n_N)$. For instance, the vacuum state now corresponds to
the instanton configuration with $N$ fluxons, i.e. only trivial
representations due to the abelianization, with moduli space
\eq{calM0} as described in Section~\ref{CritPoints}. The higher
critical points consist of an even number of irreducible
representations which are suppressed roughly as
${\,\rm e}\,^{-N^3/2g\,n_i}$. This indicates that the radial coordinates on
the configuration space $\cO$ are not so nicely adapted to the local
symplectic geometry of the Yang-Mills critical surfaces. We will
return to these issues in the next section.
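The sign formula ${\rm sgn}(\pi)=(-1)^{\sum_k\,(k-1)\,n_k}$ used above can be verified exhaustively for small $N$ (a standalone illustration; the helper functions are ours):

```python
from itertools import permutations

def cycle_lengths(perm):              # cycle decomposition of a 0-based permutation
    seen, lengths = set(), []
    for i in range(len(perm)):
        if i in seen:
            continue
        j, L = i, 0
        while j not in seen:
            seen.add(j)
            j = perm[j]
            L += 1
        lengths.append(L)
    return lengths

def sign_by_inversions(perm):         # sgn(pi) from the inversion count
    n = len(perm)
    inv = sum(1 for i in range(n) for j in range(i + 1, n) if perm[i] > perm[j])
    return (-1) ** inv

for perm in permutations(range(5)):   # all 120 elements of S_5
    ks = cycle_lengths(perm)
    assert sum(ks) == 5                                   # sum_k k n_k = N
    assert (-1) ** sum(k - 1 for k in ks) == sign_by_inversions(perm)
```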
\subsection{Evaluation of the abelianized partition function: $U(n)$
gauge theory\label{UnAbelian}}
The nonabelian case $n>1$ becomes very complicated due to the
increasing complexity of the combinatorics involved in regulating the
Itzykson-Zuber integral (\ref{IZformula}) over $V_+\in U(n\,N_+)$. We
will therefore only briefly sketch the essential features, deferring
the explicit evaluation in favour of a more formal, regulated
combinatorial expansion. Consider the radial coordinates $\lambda_i$,
$i = 1,\dots,n\,N_-$ on $\cO$ and add $2n$ new real variables
$1+\varepsilon_i$. We assemble them into the ordered set defined by
\begin{equation}
\big(\,\overline\lambda_1,\dots,\overline\lambda_{n\,N_+}\big):=
\big(1+ \varepsilon_1,\dots,1+
\varepsilon_{2n},\lambda_{1},\dots, \lambda_{n\,N_-}\big) \ .
\label{lambdabar}\end{equation}
Similarly, we double the first $n$ entries of the momentum vector
$p=(p_1,\dots,p_{n\,N})$ and gather them into the ordered set defined
by
\begin{equation}
\big(\,\overline p_1,\dots,\overline p_{n\,N_+}\big):=
\big(p_1+\kappa,\dots, p_{n}+\kappa,p_1,
\dots,p_{n},p_{n+1},\dots, p_{n\,N}\big) \ .
\label{pbar}
\end{equation}
At the end we will take the limits $\varepsilon_i, \kappa \to 0$.
The evaluation of the Fourier transform \eq{calZdef} now proceeds
exactly as in Section~\ref{U1Abelian} above. To organize the
combinatorics, we use the identity
\begin{eqnarray}
&& \lim_{\varepsilon_i \to 0} \,
\frac{\det\limits_{1\leq i,j\leq n\,N_+}\,
\big({\,\rm e}\,^{-\frac{{\,{\rm i}\,} N}4\,\overline p_{i}\,\overline \lambda_j}\big)}
{\Delta(\varepsilon)}\label{expansion-lambdabar}
\\ && \qquad\qquad ~=~\frac{{\rm vol}\big(U(2n)
\big)}{c_N(2n,-4)}~\sum_{\cQ \subset \{\,\overline p_i\}} \,
{\rm sgn}\big(\cQ\hookrightarrow\{\,\overline p_i\}\big)~
{\,\rm e}\,^{-\frac{{\,{\rm i}\,} N}4\,\sum_{i}\, q_i}~ \Delta(q)~
\det_{1\leq i,j\leq n\,N_-}\,
\big({\,\rm e}\,^{-\frac{{\,{\rm i}\,} N}4\,\hat p_{i}\,\lambda_j}\big) \nonumber
\end{eqnarray}
where $\{\hat p_1,\dots,\hat p_{n\,N_-}\} = \{\,\overline p_1,\dots,\overline
p_{n\,N_+}\} \setminus\cQ $ with $\cQ =\{q_1,\dots,q_{2n}\}$ a subset
of $\{\,\overline p_1,\dots,\overline p_{n\,N_+}\}$ which is ordered according
to \eq{pbar}, and the sign is determined by the parity of the
embedding. The identity \eq{expansion-lambdabar} can be derived by
performing a Laplace expansion of the determinant on the left-hand
side into the $2n$ rows containing the variables $1 + \varepsilon_i$,
and using the limit formula
\begin{equation}
\lim_{\varepsilon_i \to 0} \,
\frac{\det\limits_{1\leq i,j\leq 2n}\,
\big({\,\rm e}\,^{-\frac{{\,{\rm i}\,} N}4\,q_i\,\varepsilon_j}\big)}
{\Delta(\varepsilon)}
= \frac{{\rm vol}\big(U(2n)\big)}{c_N(2n,-4)}~\Delta(q)
\end{equation}
which follows from the Itzykson-Zuber formula \eq{IZformula}. The
Vandermonde determinants can also be factorized as
\begin{equation}
\Delta(\,\overline \lambda\,) = \Delta(\lambda)~ \Delta(\varepsilon)~
\prod_{i=1}^{n\,N_-}\, (1-\lambda_i)^{2n}
\end{equation}
up to higher order terms in $\varepsilon_i\to0$, along with
\begin{equation}
\Delta(\,\overline p\,)~\Delta( p_{n+1},\dots, p_{n\,N})
= \kappa^n~ \Delta(p)^2~ \Delta(p_1 ,\dots, p_n)^2
\end{equation}
in the limit $\kappa\to0$.
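Both factorization formulas can be tested numerically at small regulator values (an illustrative sketch for $n=2$, $N=3$; the second identity is checked in absolute value, since an overall sign depends on the ordering convention chosen for $\Delta$):

```python
import numpy as np

def vdm(z):                          # Delta(z) = prod_{i<j} (z_i - z_j)
    return np.prod([z[i] - z[j]
                    for i in range(len(z)) for j in range(i + 1, len(z))])

n, N = 2, 3                          # so n N_- = 4, n N_+ = 8, n N = 6
lam = np.array([-0.7, -0.2, 0.3, 0.8])
eps = 1e-7 * np.arange(1, 2 * n + 1)             # regulators epsilon_i
lam_bar = np.concatenate([1 + eps, lam])         # ordered set (lambdabar)
ratio1 = vdm(lam_bar) / (vdm(lam) * vdm(eps) * np.prod((1 - lam) ** (2 * n)))
assert np.isclose(ratio1, 1.0, rtol=1e-3)

p = np.array([-0.8, -0.3, 0.1, 0.45, 0.7, 0.95])
kappa = 1e-6
p_bar = np.concatenate([p[:n] + kappa, p[:n], p[n:]])   # ordered set (pbar)
ratio2 = abs(vdm(p_bar) * vdm(p[n:])) \
    / abs(kappa ** n * vdm(p) ** 2 * vdm(p[:n]) ** 2)
assert np.isclose(ratio2, 1.0, rtol=1e-3)
```

The factor $\kappa^n$ arises from the $n$ coinciding pairs $(p_i+\kappa,p_i)$ in $\overline p$, while the factor $\prod_i(1-\lambda_i)^{2n}$ collects the cross terms between the regulators and the radial coordinates.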
In this way the partition function \eq{partfndiag} can be expanded as
\begin{eqnarray}
Z &=& \zeta_{n,N}~\lim_{\kappa\to0}\,\frac1{\kappa^n}~
\sum_{\cQ \subset \{\,\overline p_i\}} \,
{\rm sgn}\big(\cQ\hookrightarrow\{\,\overline p_i\}\big)~
\int_{{\mathbb{R}}^{n\,N}}\,[{\rm d} p]~\frac{{\,\rm e}\,^{-\frac{g}{4N}\,\sum_i\,p_i^2}}
{\Delta(p_1 ,\dots, p_n)^2 }~
{\,\rm e}\,^{-\frac{{\,{\rm i}\,} N}4\,\sum_i\, q_i}~\Delta(q)\nonumber\\ &&\times~
\prod_{l=1}^{n\,N_-}\,
\int_{-1}^1\, {\rm d}\lambda_l~\det_{1\leq i,j\leq n\,N_-}\,
\big({\,\rm e}\,^{\frac{{\,{\rm i}\,} N}4\, p_{i+n}\,\lambda_j}\big)~
\det_{1\leq i,j\leq n\,N_-}\,
\big({\,\rm e}\,^{-\frac{{\,{\rm i}\,} N}4\,\hat p_{i}\,\lambda_j}\big)
\label{Znonab-1}\end{eqnarray}
where
\begin{equation}
\zeta_{n,N}:=\frac{{\rm vol}(\cO)}{(n\,N)!\,(2\pi)^{n\,N}}~
\frac{({\,{\rm i}\,} N)^{2n^2+n\,N\,(1-n\,N_+)}}{2^{n\,N_-\,(2-n\,N_+)}}~
\prod_{k=1}^{n\,N_--1}\,(k!)^2~\prod_{m=1}^{2n}\,
\frac{(m+n\,N_--1)!}{m!} \ .
\label{zetanNdef}\end{equation}
We now expand the two determinants in \eq{Znonab-1} into a double sum
over the Weyl group $\mathfrak{S}_{n\,N_-}$, and use permutation symmetry of the
radial integration to rewrite it as a sum over a single relative
permutation exactly as in Section~\ref{U1Abelian} above. Using
\eq{sinekernel} we arrive finally at the exact combinatorial expansion
\begin{eqnarray}
Z &=& 2^{n\,N_-}\,(n\,N_-)!~\zeta_{n,N}~\lim_{\kappa\to0}\,
\frac1{\kappa^n}~ \sum_{\cQ \subset \{\,\overline p_i\}} \,
{\rm sgn} \big(\cQ\hookrightarrow\{\,\overline p_i\}\big)~
\sum_{\pi\in\mathfrak{S}_{n\,N_-}}\,{\rm sgn}(\pi)\nonumber\\ &&\times~
\int_{{\mathbb{R}}^{n\,N}}\,[{\rm d} p]~\frac{{\,\rm e}\,^{-\frac{g}{4N}\,\sum_i\,p_i^2}}
{\Delta(p_1 ,\dots, p_n)^2 }~
{\,\rm e}\,^{-\frac{{\,{\rm i}\,} N}4\,\sum_i\, q_i}~ \Delta(q)~\prod_{i =1}^{n\,N_-}\,
{\sf K}\big(\mbox{$\frac{N}4$}\, (\hat p_{\pi(i)} - p_{i+n})\big) \ .
\label{Znonabfinal}\end{eqnarray}
The combinatorics of the large $N$ limit of the partition function
\eq{Znonabfinal} can be described as follows. The sine-kernels
${\sf K}\big(\frac{N}4\, (\hat p_{\pi(i)} - p_{i+n})\big) \approx\frac{4\pi}{N}\,
\delta(\hat p_{\pi(i)} - p_{i+n})$ define a link from $\hat p_{\pi(i)}$
to $p_{i+n}$. Following these, we obtain a set of open or closed links
determined by $\pi\in\mathfrak{S}_{n\,N_-}$. The open links must start at
$\{p_1+\kappa,\dots,p_n+\kappa, p_1,\dots,p_n\}$ (since those are not
contained in the $p_{i+n}$) and end at $\{q_1,\dots,q_{2n}\}$ (since
those are not contained in the $\hat p_i$). The closed links
correspond to cycles in the conjugacy class of the permutation
$\pi$. In particular, there are no factors ${\,\rm e}\,^{-\frac{{\,{\rm i}\,}
N}4\,p_i}$, $i=1,\dots,n$ or $\Delta(p_1 ,\dots, p_n)^2$ for the
internal variables, and hence we can explicitly evaluate the internal
integrals. The difficulty lies in evaluating the sum over all possible
distinct cycles for the internal variables in a closed form.
\subsubsection*{\it Comparison with the constrained matrix model}
In~\cite{matrixsphere}, quantum gauge theory on the fuzzy sphere
$S_N^2$ was formulated as a multi-matrix model with action
\begin{equation}
S_{\rm mm} =\mbox{$\frac 1{N\,g}$}\, \Tr\big(C^2 -
\mbox{$\frac{N^2}4$}~\mbox{1 \kern-.59em {\rm l}}_\mathcal{N}\big)^2
\end{equation}
and the constraint $C_0 = \frac 12~\mbox{1 \kern-.59em {\rm l}}_N$. It was shown that
this matrix model also reproduces Yang-Mills theory on $S^2$ in the
large $N$ limit. This differs from the formulation of the present
paper essentially by replacing the pair (action, constraint) given by
$\big((C^2 - \frac{N^2}4~\mbox{1 \kern-.59em {\rm l}}_\mathcal{N})^2\,,\, (C_0-\frac 12~\mbox{1 \kern-.59em {\rm l}}_N)\big)$
with the permuted pair $\big((C_0-\frac 12~\mbox{1 \kern-.59em {\rm l}}_N)^2\,,\, (C^2 -
\frac{N^2}4~\mbox{1 \kern-.59em {\rm l}}_\mathcal{N})\big)$. This can be understood by imposing the
respective constraints using gaussian terms in the actions, as then
the tangential degrees of freedom are essentially the same in both
cases. The symplectic formulation of the present paper has not only
the advantage of applying the equivariant localization principle to
systematically construct the instanton expansion of gauge theory on
the fuzzy sphere, but it also somewhat simplifies the evaluation of
the matrix integral. It also enables one in principle to keep control
of the $\frac1N$ corrections to Yang-Mills theory on $S^2$, and the
approximate delta-functions at $N\to\infty$ responsible for the
groupings of variables are more transparent along the lines explained
in Sections~\ref{U1Abelian} and~\ref{UnAbelian}.
\bigskip
\section{Yang-Mills critical surfaces in abelianized
localization\label{AbLoc}}
In this final section we will elucidate the relationship between the
nonabelian and abelianized localization approaches to the exact
instanton expansion of Yang-Mills theory on the fuzzy sphere $S_N^2$.
As discussed above, the critical surfaces for
abelian localization are determined by
the saddle-point equation \eq{IZsaddle}, \eq{Cp0}
\begin{equation}
[C,\Phi] =0 \
\end{equation}
for $\Phi = \phi\otimes\sigma^0$ with $\phi\in\mathfrak{u}(N)$, which can be
assumed to be diagonal by using a gauge transformation.
Its distinct eigenvalues $\Phi_\nu$ are arranged
into degenerate blocks as
\begin{equation}
\Phi = \bigoplus_{\nu=1}^k \, \Phi_\nu~ \mbox{1 \kern-.59em {\rm l}}_{n_\nu}\otimes \sigma^0
\label{phi-block}
\end{equation}
with $\sum_\nu\,n_\nu=N$. Then $[C,\Phi] =0$ implies that the
covariant coordinate
\begin{equation}
C = U^{-1}\, \Xi\, U = \bigoplus_{\nu=1}^k\, C_\nu
\label{C-block-decomp}\end{equation}
has the same block decomposition as $\Phi$. Thus it can be
diagonalized as
\begin{equation}
C_\nu = V^{-1}_\nu \,\Xi_\nu \,V_\nu
\label{C-alpha-blocks}
\end{equation}
where $V_\nu$ is a $2n_\nu\times2n_\nu$ unitary matrix on the block
defined by $\mbox{1 \kern-.59em {\rm l}}_{n_\nu}\otimes \sigma^0$ in \eq{phi-block}, and
$\Xi_\nu$ has eigenvalues $\pm\, \frac N2$. Then comparing
\eq{C-block-decomp} and \eq{C-alpha-blocks} implies
\begin{equation}
\Big(\,\bigoplus_{\nu=1}^k\, V_\nu\Big)\, U^{-1} \,\Xi\, U \,
\Big(\,\bigoplus_{\nu=1}^k\, V^{-1}_\nu\Big)
= \bigoplus_{\nu=1}^k \,\Xi_\nu
= \Sigma^{-1}\,\Xi\,\Sigma
\end{equation}
for some permutation matrix $\Sigma\in U(\mathcal{N}\,)$ representing an
element $\pi\in \mathfrak{S}_\mathcal{N}/\mathfrak{S}_{N_+}\times\mathfrak{S}_{N_-}$, since both $\Xi$ and
$\bigoplus_\nu\, \Xi_\nu$ are diagonal $\mathcal{N}\times\mathcal{N}$ matrices with
the same set of degenerate eigenvalues. It follows that
\begin{equation}
U\,\Big(\,\bigoplus_{\nu=1}^k\, V^{-1}_\nu\Big)\, \Sigma^{-1} \in
U(N_+)\times U(N_-) \ ,
\end{equation}
and therefore $U\in U(\mathcal{N}\,)$ is equal to
$\Sigma\,\big(\bigoplus_{\nu}\,V_\nu\big)$ times an element of the
stabilizer subgroup $U(N_+) \times U(N_-)\subset U(\mathcal{N}\,)$ of the
element $\Xi$.
We conclude that the gauge equivalence classes of solutions of the
saddle point equation $[C,\Phi] =0$ in the configuration space $\cO$
are described by the following data:
\begin{itemize}
\item A quotient permutation $\pi
\in\mathfrak{S}_\mathcal{N}/\mathfrak{S}_{N_+}\times\mathfrak{S}_{N_-}$;
\item A unitary matrix in the stabilizer group $U(N_+) \times U(N_-)$;
and
\item A unitary block transformation $\bigoplus_\nu\,V_\nu$ adapted to
the block decomposition \eq{phi-block} of $\Phi$.
\end{itemize}
It is evident that these critical surfaces are much larger than the
critical surfaces of the original Yang-Mills action, and they are not
even in any one-to-one correspondence with the Yang-Mills saddle
points. Any such block configuration is degenerate for the action in
\eq{Z-2}, and contains some Yang-Mills blocks of
Section~\ref{ExplYMDecomp} (with the irreducible low-energy critical
surface $\cC_{(N,1)}$ and possibly fluxons or other purely
noncommutative solutions). The reason is the absence of any
localization form $Q\alpha$, without which there is no way to separate
the desired Yang-Mills blocks from these abelianized critical
surfaces.
\subsection{Itzykson-Zuber localization on the symplectic leaves}
We now consider the foliation of the orbit $\cO(\Xi) \cong
U(2N)/R$ by conjugacy classes under the adjoint
action of the stabilizer group $R = U(N_+) \times U(N_-)$. The
corresponding symplectic leaves $\cL(\lambda)$ are parametrized by
the radial coordinates $\lambda_i\in[-1,1]$, $i=1,\dots,N_-$. For a
given leaf $\cL(\lambda)$, the integral $\int_R\, [{\rm d}
V]~{\,\rm e}\,^{-\frac\ii2\,\langle\mu_T(C),p\rangle}$ is obtained by using the
Itzykson-Zuber formula for the unitary groups $U(N_+)$ and $U(N_-)$,
as we did in Sections~\ref{U1Abelian} and~\ref{UnAbelian}. As in
Section~\ref{IZ-loc} above, the Itzykson-Zuber formula
can itself be regarded as a consequence of abelian localization, and
the expansion of the resulting determinants in
Section~\ref{U1Abelian} is precisely the sum over the saddle-points on
each leaf $\cL(\lambda)$.
Let us identify these saddle-points explicitly. Choosing $\Xi$ as in
\eq{Xichoice}, the critical points of the moment map
\eq{calZactionexpl} with respect to arbitrary variations of
$(V_+,V_-)\in R$ are given by the solutions of the equations
\begin{eqnarray}
\big[{\rm diag}(p_2,\dots, p_N)\,,\,V_-~{\rm
diag}(\lambda_1,\dots,\lambda_{N_-})\,V_-^{-1}\big] &=& 0 \ ,
\nonumber\\[4pt]
\big[{\rm diag}(p_1\,\sigma^0,p_2,\dots, p_N)\,,\,V_+
~{\rm diag}(\sigma^0,\lambda_1,\dots,\lambda_{N_-})\,V_+^{-1}
\big] &=& 0 \ .
\label{saddle-Vpm}
\end{eqnarray}
As in Section~\ref{UnAbelian}, we consider for convenience the
extended sets of radial coordinates \eq{lambdabar} and momentum
variables \eq{pbar} for $n=1$. Then the first equation in
\eq{saddle-Vpm} means that the matrix $V_-~{\rm
diag}(\lambda_1,\dots,\lambda_{N_-})\,V_-^{-1}$ commutes with the
spectral projectors of $(p_2,\dots,p_N)$, i.e. it has the same
block decomposition, and similarly the second equation in
\eq{saddle-Vpm} implies that the matrix $V_+~{\rm diag}(\,\overline
\lambda_1,\dots,\overline\lambda_{N_+})\,V_+^{-1}$ commutes with the
spectral projectors of $\overline p$.
Using unitary transformations on each of these blocks, the matrix
$V_-~{\rm diag}(\lambda_1,\dots,\lambda_{N_-})\,V_-^{-1}$ can then be
diagonalized with the same eigenvalues $\lambda_i$. It follows that
\begin{eqnarray}
&& \Big(\,\bigoplus_{\nu=1}^k\, U_\nu\Big)\, V_- ~
{\rm diag}(\lambda_1,\dots,\lambda_{N_-})\, V_-^{-1}\,
\Big(\,\bigoplus_{\nu=1}^k\,
U_\nu^{-1}\Big) \nonumber\\ && \qquad\qquad\qquad
~=~ \mbox{diag} (\lambda_{\pi_-(1)},\dots,\lambda_{\pi_-(N_-)})~=~
\Sigma_-~\mbox{diag} (\lambda_1,\dots,\lambda_{N_-})\, \Sigma_-^{-1}
\end{eqnarray}
for some $U_\nu\in SU(n_\nu)$, where $n_\nu$ labels the degenerate
blocks of $(p_2,\dots,p_N)$ with $\sum_\nu\,n_\nu=N_-$ and
$\Sigma_- \in SU(N_-)$ is a permutation matrix corresponding to an
element $\pi_-\in\mathfrak{S}_{N_-}$. If $\lambda_i$ are {\em nondegenerate},
this implies that $\big(\bigoplus_\nu\, U_{\nu}\big)\, V_- =
\Sigma_-$ and hence
\begin{equation}
V_- = \Big(\,\bigoplus_{\nu=1}^k\,U_\nu^{-1}\Big) \,\Sigma_- \ .
\end{equation}
If some $\lambda_i$ are degenerate, it only follows that
$\Sigma_-^{-1}\,\big(\bigoplus_\nu\,U_\nu\big)\, V_-$ commutes with
the spectral projectors of $\lambda$, so that
$\Sigma_-^{-1}\,\big(\bigoplus_\nu\,U_\nu\big)\,
V_-=\bigoplus_\nu\,\tilde U_\nu$ for some $\tilde U_\nu\in
SU(n_\nu)$. It follows that the angular saddle-point $V_-\in U(N_-)$
is given by
\begin{equation}
V_- = \Big(\,\bigoplus_{\nu=1}^k\,U_\nu^{-1}\Big)\,\Sigma_-\,
\Big(\,\bigoplus_{\nu=1}^k\, \tilde U_\nu\Big) \ .
\label{V-minus}
\end{equation}
Similar statements hold for the angular saddle-point $V_+\in U(N_+)$,
with the additional feature that the first two entries of $\overline p$
and $\overline \lambda$ are degenerate by definition.
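The structure \eq{V-minus} can be checked numerically in a small
example. The following Python/NumPy sketch (the sizes, momenta and
radial values are illustrative choices, not data from the text; we take
nondegenerate $\lambda_i$, so that $\tilde U_\nu=\mbox{1 \kern-.59em {\rm l}}$) verifies that
$V_-=\big(\bigoplus_\nu\,U_\nu^{-1}\big)\,\Sigma_-$ makes
$V_-\,{\rm diag}(\lambda)\,V_-^{-1}$ commute with a degenerate diagonal
momentum matrix, i.e. solves the first saddle-point equation in
\eq{saddle-Vpm}, while a generic unitary does not:

```python
import numpy as np

rng = np.random.default_rng(0)

def haar_unitary(n):
    # random unitary from the QR decomposition of a complex Gaussian matrix
    z = rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n))
    q, r = np.linalg.qr(z)
    return q * (np.diagonal(r) / np.abs(np.diagonal(r)))

# illustrative choices: momenta with one two-fold degeneracy (block sizes 2,1,1)
# and nondegenerate radial coordinates lambda_i
p   = np.diag([2.0, 2.0, 5.0, 7.0])
lam = np.diag([0.3, -0.1, 0.8, 0.5])

# V_- = (U_1^{-1} (+) 1 (+) 1) Sigma_-, with Sigma_- a permutation matrix
U = np.eye(4, dtype=complex)
U[:2, :2] = haar_unitary(2)          # only the degenerate block is nontrivial
perm = [2, 0, 3, 1]
Sigma = np.eye(4)[perm]
V = U.conj().T @ Sigma

M = V @ lam @ V.conj().T             # V_- diag(lambda) V_-^{-1}
assert np.linalg.norm(p @ M - M @ p) < 1e-10   # saddle-point equation holds

# a generic unitary V_- does not solve it
Vg = haar_unitary(4)
Mg = Vg @ lam @ Vg.conj().T
assert np.linalg.norm(p @ Mg - Mg @ p) > 1e-3
```

The check only uses the fact that ${\rm diag}(p)$ is proportional to the
identity within each degenerate block, so conjugation by the
block-diagonal part $\bigoplus_\nu\,U_\nu^{-1}$ drops out of the
commutator.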
In each case, the value of the action \eq{calZactionexpl} is given by
\begin{equation}
\big\langle\mu_T(C)\,,\,p\big\rangle
=\frac N4\,\sum_{i=1}^{N_+}\,\overline p_{i}\,\overline
\lambda_{\pi_+(i)} - \frac N4\,\sum_{i=1}^{N_-}\,
p_{i+1}\,\lambda_{\pi_-(i)} \ .
\label{action-saddle}
\end{equation}
Therefore, each saddle-point is characterized by two permutation
matrices $\Sigma_\pm$ corresponding to $\pi_\pm\in\mathfrak{S}_{N_\pm}$, which
may or may not generate non-trivial fibers on the homogeneous spaces
of the group $\prod_\nu\, U(n_\nu)$ depending on the degeneracies of
$p$ and $\lambda$. The integral over these $V_\pm$ orbits can then be
evaluated using the Itzykson-Zuber formula leading to \eq{VminusIZ}
and \eq{VplusIZ}, which gives precisely the sum over the saddle
points. The regularization required in \eq{VplusIZ} reflects the fact
that the critical surfaces are no longer isolated points, due to the
degeneracies of $\overline \lambda_i$ and $\overline p_i$.
The main point of this analysis is that these critical surfaces are
again not in any one-to-one correspondence with those of the original
Yang-Mills action. In fact, the abelian critical surfaces above
contain as subspaces those of the Itzykson-Zuber localization on
$\cO(\Xi)$ discussed in Section~\ref{IZ-loc} above,
which are not only stationary on the symplectic leaves $\cL(\lambda)$ but
also with respect to variations of the radial coordinates
$\lambda_i$. However, even the critical surfaces for the Itzykson-Zuber
localization on the configuration space $\cO(\Xi)$ are not simply
related to those of the Yang-Mills action. In particular, the
variational problem for the action (\ref{action-saddle}) does not
determine the $\lambda_i$. A given radial saddle-point $\pi_\pm$ can thus
correspond to various types of Yang-Mills solutions by appropriately
choosing some $\lambda_i$, as we show explicitly in Section~\ref{YMRadial}
below. This arbitrariness in the radial coordinates $\lambda_i$ is lifted
by the addition of the localization one-form $\alpha$ of
Section~\ref{NonabLoc}, which serves to single out the Yang-Mills
saddle points from the new critical points. Nevertheless, it is
instructive to work out the radial coordinates of some Yang-Mills
saddle-points to illustrate the powerful workings of the polar
decomposition.
\subsection{Radial coordinates for Yang-Mills critical
surfaces\label{YMRadial}}
We will now work out the radial coordinates for the solutions of the
Yang-Mills equations $[C_0, C_i]=0$, which will identify precisely the
appropriate localization values of $\lambda_i$ for each critical surface of
Section~\ref{CritPoints}. Given (\ref{Xichoice}) we now consider the
fuzzy sphere coordinates $\Sigma_0\,\Xi\,\Sigma_0^{-1}$ and
correspondingly modify the radial coordinates \eq{Rconv} to
\begin{equation}
R=\Sigma_0 \,\left(\begin{array}{ccc} \sigma^0 & 0 \\
0 & \exp({\,{\rm i}\,}\sigma^1\otimes\rho)
\end{array}\right) \,\Sigma_0^{-1} \ ,
\label{radialcoords-modified}
\end{equation}
where $\Sigma_0\in U(\mathcal{N}\,)$ is a permutation matrix representing the
cyclic permutation
\begin{equation}
\pi_{(N_+)}=(1\,2\,\cdots\,N_+) \ .
\label{piNplusperm}\end{equation}
As we will see, the modification by $\Sigma_0$, although irrelevant
from the point of view of the path integral, will greatly simplify the
explicit parametrization.
Using this parametrization and \eq{Vconv}, we can
write the covariant coordinates \eq{orbit-coordinates} in the explicit
form
\begin{eqnarray}
C &=& \frac N2\,V\,\Sigma_0\left(\begin{array}{ccc}
\sigma^0 & 0 \\ 0 & \sigma^3\otimes\cos(2\rho)+
\sigma^2\otimes\sin(2\rho) \end{array}\right)\,
\Sigma_0^{-1}\,V^{-1} \nonumber\\[4pt]
&=& \frac N2\, \left(\begin{array}{ccc}
V_+ \,\left(\begin{array}{ccc} 1 & & \\
& \cos(2\rho) & \\
& & 1 \end{array}\right) \, V_+^{-1} &
-{\,{\rm i}\,} V_+\, \left(\begin{array}{c} 0 \\
\sin(2\rho) \\ 0 \end{array}\right)\, V_-^{-1} \\
{\,{\rm i}\,} V_- \,\left(\begin{array}{ccc} 0~, & \sin(2\rho)~, &0
\end{array}\right)\,V_+^{-1} &
- V_- \,\cos(2\rho)\, V_-^{-1}\end{array}\right)
\label{C-radial-explicit}
\end{eqnarray}
where we have applied the commutation relation ${\,{\rm i}\,}[\sigma^1,\sigma^3]
= 2 \sigma^2$. The role of the cyclic permutation matrix $\Sigma_0$ is
to move the unit entries of $\sigma^0$ symmetrically around the matrix
$\cos(2\rho)$. We note for later use that if the
unitary matrices $V_\pm\in U(N_\pm)$ are block-diagonal, then so is
$C$. We will now use this parametrization to illustrate the use of the
radial coordinates by working out \eq{C-radial-explicit} explicitly
for various classical gauge field configurations of
Section~\ref{CritPoints}.
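As a quick numerical check of this step (a Python sketch; only the
Pauli algebra quoted in the text is used), one can verify both the
commutation relation ${\,{\rm i}\,}[\sigma^1,\sigma^3]=2\sigma^2$ and the
conjugation identity
$e^{{\,{\rm i}\,}\rho\,\sigma^1}\,\sigma^3\,e^{-{\,{\rm i}\,}\rho\,\sigma^1}
=\sigma^3\,\cos(2\rho)+\sigma^2\,\sin(2\rho)$, which produces the block
form \eq{C-radial-explicit} from the radial matrix
\eq{radialcoords-modified}:

```python
import numpy as np

s0 = np.eye(2)
s1 = np.array([[0, 1], [1, 0]], dtype=complex)
s2 = np.array([[0, -1j], [1j, 0]])
s3 = np.array([[1, 0], [0, -1]], dtype=complex)

# i [sigma^1, sigma^3] = 2 sigma^2, as used in the text
assert np.allclose(1j * (s1 @ s3 - s3 @ s1), 2 * s2)

for rho in np.linspace(0.0, 3.0, 7):
    # exp(i rho sigma^1) = cos(rho) 1 + i sin(rho) sigma^1, since (sigma^1)^2 = 1
    E = np.cos(rho) * s0 + 1j * np.sin(rho) * s1
    lhs = E @ s3 @ E.conj().T
    rhs = np.cos(2 * rho) * s3 + np.sin(2 * rho) * s2
    assert np.allclose(lhs, rhs)
```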
\subsubsection*{\it The vacuum solution}
The generators of the irreducible $N$-dimensional representation of
the $\mathfrak{s}\mathfrak{u}(2)$ Lie algebra \eq{FS-l} are given explicitly by
\begin{equation}
(\xi_3)_{ij}=-\delta_{ij}~\mbox{$\frac{N+1-2i}2$} \qquad \mbox{and}
\qquad (\xi_+)_{ij}=\delta_{i+1,j}~\sqrt{(N-i)\,i}
\label{Ndimrep}\end{equation}
where $i,j=1,\dots,N$ and $\xi_{\pm}=\xi_1\pm{\,{\rm i}\,}\xi_2$ with
$\xi_-=\xi_+^\dag$. The vacuum solution \eq{Xi-collective} in the
abelian case $n=1$ thus has the explicit form
\begin{eqnarray}
C &=& \left(\begin{array}{cc} \frac 12~\mbox{1 \kern-.59em {\rm l}}_N + \xi_3 & \xi_+ \nonumber\\
\xi_- & \frac 12~\mbox{1 \kern-.59em {\rm l}}_N - \xi_3 \end{array}\right)\\[4pt]
&=& \left(\begin{array}{cc}\frac 12~\mbox{diag}(-N+2,\dots,N-2,N) & \xi_+ \\
\xi_- & \frac 12~\mbox{diag}(N,\dots,-N+4,-N+2) \end{array}\right)
\label{C-vac-block}
\end{eqnarray}
using the splitting into equal blocks of size $N$. This should be
identified with \eq{C-radial-explicit}, which splits into blocks of
sizes $N_\pm$. Noting the explicit form of $\xi_\pm$ in \eq{Ndimrep} as
raising and lowering operators, it follows that one can consistently
take both $V_+~{\rm diag}(\lambda_1,\dots,\lambda_{N_-},1,1)\,V_+^{-1} $ and
$V_-~{\rm diag}(\lambda_1,\dots,\lambda_{N_-})\,V_-^{-1} $ to be diagonal
matrices.
We can then consistently match the eigenvalues as
\begin{equation}
N \,(\lambda_1,\dots,\lambda_{N_-},1,1)=(-N+2,\dots, N-2,N,N) \ ,
\label{eigenvacmatch}\end{equation}
which gives
\begin{equation}
\lambda_i=-\mbox{$\frac{N-2i}N$}\qquad \mbox{for} \quad i=1,\dots,N_-
\label{la-vacuum}
\end{equation}
and provides the eigenvalues of the radial matrix $R$ for the vacuum
critical surface $\cC_{(N,1)}$. Note that the eigenvalue $\frac N2$
from the second diagonal block $\frac 12~\mbox{1 \kern-.59em {\rm l}}_N-\xi_3$ of $C$ in
\eq{C-vac-block} is contained in the matrix $\frac N2\,V_+~{\rm
diag}(\lambda_1,\dots,\lambda_{N_-},1,1)\,V_+^{-1}$. It follows that $V_-=
\Sigma_-$ is a permutation matrix in $U(N_-)$, while
$V_+ = \Sigma_+\,U_2$ is a permutation matrix up to a possible
conjugation with a unitary matrix $U_2\in SU(2)\subset U(N_+)$ acting
on the two marked indices labelling the unit entries. We can absorb
$\Sigma_-$ by a redefinition of the $\lambda_i$, and hence take
\begin{equation}
V_- = \mbox{1 \kern-.59em {\rm l}}_{N_-}
\label{Vminuswithout}\end{equation}
without loss of generality. It is also enough to consider the case
$U_2=\mbox{1 \kern-.59em {\rm l}}_{N_+}$. Comparing \eq{C-radial-explicit} with
\eq{C-vac-block}, it follows that
\begin{equation}
V_+ = \Sigma_+
\label{cycle-vacuum}
\end{equation}
is a permutation matrix representing the irreducible cycle
\eq{piNplusperm} of length $N_+$. Furthermore, one has
\begin{equation}
\sin(2\rho_i) = \sqrt{1-\lambda_i^2}
= \sqrt{\mbox{$\frac {4i}N - \frac{4i^2}{N^2}$}}
= \mbox{$\frac 2N$}\, \sqrt{i\, (N - i)} = \mbox{$\frac 2N$}\,
(\xi_+)_{i,i+1}
\label{sin2rho}
\end{equation}
for $i = 1,\dots, N_-$, which is indeed the correct representation of
$\xi_\pm$ in \eq{Ndimrep}, embedded in the correct off-diagonal way
in \eq{C-radial-explicit} due to the block decomposition into sizes
$N_\pm$.
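The matching \eq{eigenvacmatch}--\eq{sin2rho} can be verified directly
from the matrices \eq{Ndimrep}. A short Python sketch (the value of $N$
is an arbitrary illustrative choice):

```python
import numpy as np

N = 8  # illustrative size

# generators of the N-dimensional irreducible su(2) representation, eq. (Ndimrep)
i = np.arange(1, N + 1)
xi3 = -np.diag((N + 1 - 2 * i) / 2.0)
xip = np.diag(np.sqrt((N - i[:-1]) * i[:-1]), k=1)  # (xi_+)_{i,i+1} = sqrt((N-i) i)

# diagonal blocks of the vacuum covariant coordinate, eq. (C-vac-block)
upper = 0.5 * np.eye(N) + xi3
lower = 0.5 * np.eye(N) - xi3
assert np.allclose(np.diag(upper), 0.5 * np.arange(-N + 2, N + 1, 2))
assert np.allclose(np.diag(lower), 0.5 * np.arange(N, -N + 1, -2))

# radial coordinates of the vacuum, eq. (la-vacuum), and the identity (sin2rho)
lam = np.array([-(N - 2 * k) / N for k in range(1, N)])  # k = 1, ..., N_- = N-1
sin2rho = np.sqrt(1.0 - lam ** 2)
assert np.allclose(sin2rho, (2.0 / N) * np.diag(xip, k=1))
```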
Let us point out one interesting feature of the covariant coordinate
\eq{C-vac-block}. The two diagonal entries of $\frac N2$ in the center
of the matrix constitute a trivial $2\times 2$ unit matrix $\sigma^0$
which completely decouples from the rest of $C$. This block can be
traced to the $\sigma^0$ in the upper-left corner of the first line in
\eq{C-radial-explicit}, whose position is determined by the
permutation matrix $\Sigma_0$, or equivalently to the auxiliary
radial coordinates $\overline\lambda_i =1+\varepsilon_i$, $i=1,2$. In fact,
any explicit entry of $\pm\, \frac N2$ in $C$ necessarily decouples
from the rest of $C$, for otherwise $C$ would have eigenvalues of
modulus larger than $\frac N2$. This means, in particular, that we can
permute these two entries using a suitable permutation matrix
$V_+=\Sigma_+$ without any effect on $C$ (but it will have an effect
on the momenta $p_i$ if they are included). This observation will be
useful below. This construction clearly generalizes to give the blocks
$C(n_a)$ of size $2n_a$ of the critical surfaces
$\cC_{(n_1,s_1),\dots,(n_k,s_k)}$ corresponding to irreducible $SU(2)$
representations of dimensions $n_a< N$. The most extreme case $n_a=1$
consists of the one-dimensional representation with $C_0(n_a=1)=\frac
N2$ and $C_i(n_a=1)=0$, whereby
\begin{equation}
C(n_a=1)=\mbox{$\frac N2$}\,\sigma^0
\label{Csigma0expl}\end{equation}
and hence only the explicit $\sigma^0$ block survives.
\subsubsection*{\it Nonabelian generalization}
For $n \geq 2$, the vacuum critical surface $\cC_{(N,1),\dots,(N,1)}$
is associated with the solution \eq{vacsolnonab} which is a direct sum
of $n$ irreducible $SU(2)$ representations of dimension $N$. This can
clearly be obtained by repeating the above construction $n$ times. In
particular, $V_+ = (\Sigma_+)^{\oplus n}$ is a direct sum of $n$ ``marked
cycles'' as above. Notice, however, that the {\em same} saddle point is
obtained if one acts with an additional permutation of the $2n$ auxiliary
radial coordinates $\overline \lambda_i = 1$, $i=1,\dots,2n$ (recall that the
explicit entries $\pm\, \frac N2$ of $C$ are always isolated). In
doing this, the decomposition of $V_+$ into irreducible cycles gets
modified. It can nonetheless be made into one irreducible cycle with
$2n$ marked points which come in groups of two at equal distance, for
example. This demonstrates that the mapping between the Yang-Mills
saddle-points and those of the abelianization approach in
Section~\ref{Abel-loc} is complicated. In particular, it is not
injective. Again, this construction generalizes to blocks of the
critical surfaces $\cC_{(n_1,s_1),\dots, (n_k,s_k)}$ corresponding to irreducible
$SU(2)$ representations of various dimensionalities.
\subsubsection*{\it Fluxons}
Fix an integer $1\leq n\leq N$ and consider the block gauge
field configuration of size $2n$ given by
\begin{eqnarray}
C &=& \frac N2\,V\,\bigl(\sigma^3\otimes\cos(2\rho)
+ \sigma^2\otimes\sin(2\rho)\bigr)\,V^{-1} \nonumber\\[4pt]
&=&\frac N2\, \left(\begin{array}{ccc}
V_+ \, \cos(2\rho) \, V_+^{-1} &
-{\,{\rm i}\,} V_+ \, \sin(2\rho)\, V_-^{-1} \\
{\,{\rm i}\,} V_-\, \sin(2\rho)\, V_+^{-1} &
- V_- \,\cos(2\rho)\, V_-^{-1}\end{array}\right) \ ,
\label{C-cycle}
\end{eqnarray}
which is almost the same as (\ref{C-radial-explicit}) above but
without the $\sigma^0$ block. We choose
\begin{equation}
\lambda_i=-\mbox{$\frac{n-2i}n$} \qquad \mbox{for} \quad i=1,\dots,n-1 \ ,
\label{lainchoose}\end{equation}
along with
\begin{equation}
V_+=\Sigma_{(n)} \qquad \mbox{and} \qquad V_- = \mbox{1 \kern-.59em {\rm l}}_{n-1}
\label{cycle-coords}
\end{equation}
where $\Sigma_{(n)}\in U(n+1)$ is a cyclic permutation matrix
representing $\pi_{(n)}:=(1\,2\,\cdots\,n)$. Then we get explicitly
\begin{equation}
C=\frac N{2n}\,\left(\begin{array}{cc} \,\mbox{diag}(-n+2,\dots,
n-2) & \tilde\xi_+ \\
\tilde\xi_- & \mbox{diag}(n-2,\dots, -n+2) \end{array}\right)
\label{C-cycle-explicit}
\end{equation}
where $\tilde \xi_\pm$ are cyclic operators (rather than
raising/lowering operators as before).
In this case $C_0 = 0$, and hence this solution is part of the
orbifold singularities for $n$ coincident fluxons in the moduli space
\eq{calM0} of Section~\ref{CritPoints}, rather than an irreducible
representation of the isometry group $SU(2)$. This construction is
further used below. In particular, the special case $n=1$ gives a
single fluxon $C(n=1) = \frac N2\,\sigma^3$. Then there exists a
unitary transformation $U\in SU(2)$ such that
\begin{equation}
U\,C(n=1)\,U^{-1}=\mbox{$\frac N2$}\,U\,\sigma^3\,U^{-1}=c_i\,\sigma^i
\ ,
\end{equation}
which gives the position $c_i$ of the fluxon on the sphere $S^2$.
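In a Python sketch (the choice of $N$ and of the $SU(2)$ element are
illustrative), one can check that conjugating $C(n=1)=\frac N2\,\sigma^3$
by any $U\in SU(2)$ yields a traceless Hermitian matrix $c_i\,\sigma^i$
with $|c|=\frac N2$, i.e. a point on a sphere of radius $\frac N2$:

```python
import numpy as np

rng = np.random.default_rng(1)
s = [np.array([[0, 1], [1, 0]], dtype=complex),
     np.array([[0, -1j], [1j, 0]]),
     np.array([[1, 0], [0, -1]], dtype=complex)]

N = 6
C1 = (N / 2.0) * s[2]                  # single fluxon C(n=1) = (N/2) sigma^3

# a random SU(2) element U = exp(i a.sigma); traceless generator gives det U = 1
a = rng.normal(size=3)
H = sum(ak * sk for ak, sk in zip(a, s))
w, V = np.linalg.eigh(H)
U = V @ np.diag(np.exp(1j * w)) @ V.conj().T

M = U @ C1 @ U.conj().T
# M is traceless Hermitian, so M = c_i sigma^i with c_i = tr(M sigma^i)/2
c = np.array([np.trace(M @ sk).real / 2.0 for sk in s])
assert abs(np.trace(M)) < 1e-10
assert np.allclose(M, sum(ci * si for ci, si in zip(c, s)))
assert np.isclose(np.linalg.norm(c), N / 2.0)  # fluxon on a sphere of radius N/2
```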
\subsubsection*{\it Multi-block solutions}
Let us modify the previous radial solution by setting
$\lambda_1=\pm\,1$ and taking $\lambda_{i+1}$ to be given by \eq{lainchoose},
while keeping the angular variables \eq{cycle-coords} in $U(n+2)$ and
$U(n)$ the same. Then the block covariant coordinates \eq{C-cycle} of
size $2(n+1)$ are given explicitly as
\begin{equation}
C=\frac N{2n}\,\left(\begin{array}{cc} \,\mbox{diag}(-n+2, \dots,
n-2,\pm\, n) & \xi_+ \\
\xi_- & \mbox{diag}(\mp\, n,n-2,\dots, -n+2) \end{array}\right) \ ,
\label{C-cycle-1-explicit}
\end{equation}
which is almost the same as the vacuum configuration \eq{C-vac-block}
for an $n$-dimensional irreducible representation except that there
are two explicit diagonal entries $\frac N2,-\frac N2$ instead of
$\frac N2,\frac N2$. In particular, $C_0$ is no longer constant and
hence the gauge fields \eq{C-cycle-1-explicit} are not solutions of
the Yang-Mills equations of motion. This can be cured by the addition
of extra irreducible representations as follows.
One can now construct solutions of the Yang-Mills equations with
several blocks and arbitrary parameters, i.e. the generic critical
surfaces $\cC_{(n_1,s_1),\dots,(n_k,s_k)}$, by joining an even number
of copies of \eq{C-cycle-1-explicit} in a suitable way. Fix another
integer $m\geq1$ such that $n+m\leq N$, and consider again the block
covariant coordinate \eq{C-cycle} of size $2(n+m)$ with
\begin{equation}
\lambda_1=1 \ , \quad \lambda_{i}=
-\mbox{$\frac{n-2(i-1)}n$}\quad\mbox{for}\quad i=2,\dots,n
\quad\mbox{and}\quad \lambda_{j+n-1}=-\mbox{$\frac{m-2(j-1)}m$}\quad
\mbox{for}\quad j=1,\dots,m \ .
\label{lamultichoice}\end{equation}
The angular degrees of freedom are given by
\begin{equation}
V_+=\Sigma_{(n+m)} \qquad \mbox{and} \qquad V_-=\mbox{1 \kern-.59em {\rm l}}_{n+m-1}
\label{2irreps-coords}\end{equation}
in $U(n+m\pm1)$, corresponding to the cyclic permutation $\pi_{(n+m)}$
decomposed as
\begin{equation}
\pi_{(n+m)}=(\pi_{(n)})_{1,\dots,n}\circ(\pi_{(m)})_{n+1,\dots,n+m}
\circ(1~n{+}1)
\label{pinmdecomp}
\end{equation}
where the subscripts indicate the indices that the permutations act
on. The role of the transposition $(1~n{+}1)$ is to first interchange
the explicit $1$ and $-1$ in \eq{lamultichoice} for the upper block in
\eq{C-cycle}, which then takes the form of two copies of the matrix
\eq{C-cycle-1-explicit} but with the correct explicit diagonal entries
$\pm\, \frac N2$. Since $V_+ =\Sigma_{(n+m)}$ corresponds to an irreducible
cycle, $C$ is a direct sum of two irreducible representations with
opposite sign and hence lives on the critical surface block
$\cC_{(n,1),(m,-1)}$ with vanishing overall trace. This construction
clearly generalizes to an arbitrary number of irreducible
representations of the $SU(2)$ isometry group.
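The decomposition \eq{pinmdecomp} can be verified on explicit
permutations. In the Python sketch below we adopt the convention, one
consistent reading of \eq{pinmdecomp}, that $\pi_{(n)}$ acts first and
the transposition last; with this convention the composition reproduces
the full cycle $\pi_{(n+m)}$:

```python
def cycle(k, total, offset=0):
    # the cycle (offset+1, offset+2, ..., offset+k) as a map on {0, ..., total-1}
    p = list(range(total))
    for j in range(k):
        p[offset + j] = offset + (j + 1) % k
    return p

def compose(f, g):
    # apply g first, then f
    return [f[g[j]] for j in range(len(f))]

n, m = 4, 3
total = n + m
pi_nm = cycle(n + m, total)
pi_n  = cycle(n, total)
pi_m  = cycle(m, total, offset=n)
t = list(range(total)); t[0], t[n] = t[n], t[0]   # the transposition (1, n+1)

# pi_(n+m) = t . pi_(m) . pi_(n): apply pi_(n), then pi_(m), then the transposition
assert compose(t, compose(pi_m, pi_n)) == pi_nm
```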
\subsection{Action of the gauge group\label{sec:gaugegroup-embed}}
Finally, let us describe how the gauge symmetry acts on the radially
foliated solutions. Recall that the gauge group $G \cong SU(n\,N)$ is
embedded in the symmetry group of the orbit space $\cO$ as $\phi =
\phi_0 \otimes\sigma^0$ in the Lie algebra of $G \subset
SU(2n\,N)$. This embedding is well adapted to the modification of the
radial coordinates in \eq{radialcoords-modified} by the permutation
matrix $\Sigma_0$. Indeed, there is an embedding of the ``diagonal''
subgroup $U(n\,N_-) \subset U(n\,N_+) \times U(n\,N_-)$ into $G$ given
by taking $V_-$ into $\mbox{diag}(\mbox{1 \kern-.59em {\rm l}}_n,V_-) \otimes \sigma^0$ as
\begin{equation}
V_- ~\longmapsto ~
\left(\begin{array}{cc}\left(\begin{array}{ccc}\mbox{1 \kern-.59em {\rm l}}_n &&\\
& V_-&\\
&& \mbox{1 \kern-.59em {\rm l}}_n \end{array}\right) & 0\\
0 & V_- \end{array}\right) \ .
\end{equation}
This shows explicitly that a large part of the gauge group
is part of the stabilizer group $R=U(n\,N_+) \times U(n\,N_-)$ which
defines the foliation of the radial coordinates.
Furthermore, there is an additional symmetry $SU(n)\subset U(n\,N_+)$
embedded into $G$ by taking $U$ into $\mbox{diag}( U,\mbox{1 \kern-.59em {\rm l}}_{n\,N_-}) \otimes
\sigma^0$ as
\begin{equation}
U ~\longmapsto~
\left(\begin{array}{cc}\left(\begin{array}{ccc}U &&\\
& \mbox{1 \kern-.59em {\rm l}}_{n\,N_-}&\\
&& U \end{array}\right) & 0\\
0 & \mbox{1 \kern-.59em {\rm l}}_{n\,N_-} \end{array}\right) \
.
\label{extra-sun}
\end{equation}
This extra $SU(n)$ symmetry acts on the marked momenta
$p_{1},\dots, p_n$ of Section~\ref{UnAbelian}, and together with the
degenerate Itzykson-Zuber localization it is thus responsible for the
emergence of the nonabelian gauge symmetry in the commutative
limit. The remainder $SU(n\,N) / SU(n\,N_-) \times SU(n)$ of the gauge
group mixes the symplectic leaves, so that the radial foliation is not
$G$-equivariant.
\bigskip
\section*{Acknowledgments}
\noindent
We thank C.-S.~Chu, B.~Dolan, H.~Grosse, X.~Martin and D.~O'Connor for helpful
discussions. The work of H.S. was supported in part by the FWF Project
P16779-N02 and in part by the FWF Project P18657.
The work of R.J.S. was supported in part by the EU-RTN Network
Grant MRTN-CT-2004-005104.
\bigskip
\section{Introduction}
Suppose $X=(X_t,\mathbb{P}_t)$ is a Hunt process on ${\mathbb R}^d$ with a
L\'{e}vy system $(N,H)$ given by $H_{t}=t$ and
$$ N(x,dy)=2C(x,y){|x-y|}^{-(d+\alpha)}m(dy),$$
where $m$ is a measure on ${\mathbb R}^d$ given by $m(dx)=M(x)dx$ with
$M(x)$ bounded between two positive numbers. That is, for any
nonnegative function $f$ on ${\mathbb R}^d\times{\mathbb R}^d$ vanishing on the
diagonal,
$$\mathbb{E}_x\left(\sum_{s \le
t}f(X_{s-},X_{s})\right)=\mathbb{E}_x\int_{0}^{t}\int_{{\mathbb R}^d}\frac{2C(X_s,y)f(X_s,y)}{{|X_s-y|}^{d+\alpha}}m(dy)\,ds,$$
for every $x \in {\mathbb R}^d$ and $t>0$.
We introduce $\alpha$-stable-like processes as follows.
\begin{defn}
We say that $X$ is an $\alpha$-stable-like process if $C(x,y)$ is
bounded.
\end{defn}
In this paper, we assume that $X$ admits a transition density
$p(t,x,y)$ with respect to $m$, and that $p(t,x,y)$ is jointly
continuous on $(0,\infty)\times{\mathbb R}^d\times{\mathbb R}^d$ and satisfies the
condition
\begin{equation}
M_{1}t^{-\frac{d}{\alpha}}\left(1 \wedge
\frac{t^{\frac{1}{\alpha}}}{|x-y|}\right)^{d+\alpha}\leq
p(t,x,y)\leq M_{2}t^{-\frac{d}{\alpha}}\left(1 \wedge
\frac{t^{\frac{1}{\alpha}}}{|x-y|}\right)^{d+\alpha},\,\,\forall
(t,x,y)\in (0,\infty)\times{\mathbb R}^d\times{\mathbb R}^d,
\end{equation}
where $M_{1}$ and $M_{2}$ are positive constants.
Here we do not assume that $X$ is symmetric. When $X$ is
symmetric, it is called a symmetric $\alpha$-stable-like process,
which was introduced in \cite{CT}, where a symmetric Hunt process
is associated with a regular Dirichlet form and thus the Dirichlet
form method can be applied. It was also shown in \cite{CT} that
the transition densities of symmetric $\alpha$-stable-like
processes satisfy (1.1).
We list some examples which are $\alpha$-stable-like processes and
satisfy (1.1). For one dimensional strictly $\alpha$-stable
processes with L\'{e}vy measure $\nu$ concentrated neither on
$(0,\infty)$ nor on $(-\infty,0)$, the L\'{e}vy measure $\nu(dx)
=c_1x^{-1-\alpha}dx$ on $(0,\infty)$ and $\nu(dx)
=c_2x^{-1-\alpha}dx$ on $(-\infty,0)$ with $c_1>0$ and $c_2>0$,
which implies $C(x,y)$ in the L\'{e}vy system as above is bounded
between two positive numbers. We set $c=c_1+c_2$ and
$\beta=(c_1-c_2)/(c_1+c_2)$. Let $\rho=(1+\beta)/2$ when $\alpha<1$, and
$\rho=\big(1-\beta\,\frac{2-\alpha}{2}\big)/2$ when $\alpha>1$.
Without loss of generality, we can fix the parameter $c$ and
assume that $c$ equals $\cos(\frac{\pi\beta\alpha}{2})$,
$\frac{\pi}{2}$ or $\cos(\pi\beta\frac{2-\alpha}{2})$ for
$\alpha<1$, $=1$, or $>1$, respectively. \cite{SK} gave the
following estimates for the continuous transition density
$p(t,0,x)$, which equals $t^{-1/\alpha}p(1,0,t^{-1/\alpha}x)$:
\begin{enumerate}
\item When $x\rightarrow \infty$,
\begin{eqnarray*}
p(1,0,x)&\sim&\frac{1}{\pi}\Gamma(\alpha+1)(\sin(\pi
\rho \alpha))x^{-\alpha-1},\, \,\,\textrm{if } \alpha \neq 1,\\
p(1,0,x)&\sim& \frac{1+\beta}{2}x^{-2}, \,\,\,\textrm{if } \alpha =1,
\end{eqnarray*}
\item When $x\rightarrow 0$,
\begin{eqnarray*}
p(1,0,x)&\rightarrow&
\frac{1}{\pi}\Gamma(1/\alpha+1)(\sin\pi\alpha), \,\,\,\textrm{if
}\alpha \neq 1,\\
p(1,0,x)&\rightarrow& \frac{1}{\pi}b_1, \,\,\,\textrm{if }\alpha=1,
\,\beta>0,
\end{eqnarray*}
where $b_1$ is a positive constant.
See (14.37), (14.30), (14.33)
and (14.32) in \cite{SK} for details. It is clear that the dual
process of the one dimensional strictly $\alpha$-stable process
has the transition density $p(t,0,-x)$. Thus applying the above
estimates to $p(t,0,-x)$, we get
\item When $x\rightarrow -\infty$,
\begin{eqnarray*}
p(1,0,x)&\sim&\frac{1}{\pi}\Gamma(\alpha+1)(\sin(\pi
\rho\alpha))|x|^{-\alpha-1},\,\,\,\textrm{if } \alpha \neq 1,\\
p(1,0,x)&\sim& \frac{1+\beta}{2}|x|^{-2}, \,\,\,\textrm{if }
\alpha=1.
\end{eqnarray*}
\item When $x\rightarrow 0$,
$$p(1,0,x)\rightarrow \frac{1}{\pi}\tilde{b}_1,\,\,\,\textrm{if }
\alpha=1,\, \beta<0,$$ where $\tilde{b}_1$ is a positive constant.
\end{enumerate}
One dimensional strictly $\alpha$-stable process with $\alpha=1$
and $\beta=0$ is a Cauchy process with drift $0$. It is easy to
see that when $x\rightarrow 0$, $p(1,0,x)\rightarrow$ a positive
constant.
\cite{SK} also pointed out that $p(t,0,x)$ is positive when the
L\'{e}vy measure $\nu$ is concentrated neither on $(0,\infty)$ nor
on $(-\infty,0)$.
Therefore when the L\'{e}vy measure $\nu$ is concentrated neither
on $(0,\infty)$ nor on $(-\infty,0)$, the transition density $p$
satisfies (1.1).
For higher dimensions, \cite{VZ}
considered a large class of nonsymmetric strictly $\alpha$-stable
processes with $0<\alpha<2$, which has L\'{e}vy measure $\nu$
satisfying
$$
\nu(B)=\int_S\lambda(d\xi)\,\int^{\infty}_{0}1_{B}(r\xi)r^{-(1+\alpha)}dr$$
for every Borel set $B$ in ${\mathbb R}^d$, where $\lambda$ is a finite
measure on the unit sphere $S=\{x\in {\mathbb R}^d:\,|x|=1\}$ and is called
the spherical part of the L\'{e}vy measure $\nu$. $\lambda$ is
assumed to have a density $\phi:S\rightarrow(0,\infty)$ such that
$$\phi=\frac{d\lambda}{d\sigma} \textrm{ and } \kappa \leq
\phi(\xi)\leq \kappa^{-1},\,\,\,\forall \xi\in S,$$ where $\sigma$
is the surface measure on the unit sphere and $\kappa>0$ is a
positive constant. The assumption on $\phi$ implies the transition
density $p(t,0,x)>0$ for all $t>0$ and all $x\in {\mathbb R}^d$. It is
known that $p(1,0,x)$ is uniformly bounded in $x\in {\mathbb R}^d$.
\cite{VZ} pointed out that the L\'{e}vy measure $\nu$ has a
density $f(x)=\phi(x/|x|)|x|^{-(d+\alpha)}$ with respect to the
$d$-dimensional Lebesgue measure, and
$$\kappa|x|^{-(d+\alpha)}\leq f(x)\leq\kappa^{-1}|x|^{-(d+\alpha)}$$
for every $x\in {\mathbb R}^d \setminus \{0\}$. Then the transition density
$p(t,0,x)$ of the processes satisfy
$$
p(t,0,x)\leq \tilde{C}t^{-\frac{d}{\alpha}},\,
\,\,x\in{\mathbb R}^d,\,\,t>0,
$$
and
$$
p(t,0,x)\leq
\tilde{C}t|x|^{-(\alpha+d)},\,\,\,x\in{\mathbb R}^d\setminus \{0\},\,\,t>0,
$$
where $\tilde{C}$ is a positive constant. See (2.6) and (2.7) in
\cite{VZ} for these two inequalities. Thus we have
\begin{equation}
p(t,0,x)\leq \tilde{C}t^{-\frac{d}{\alpha}}\left(1 \wedge
\frac{t^{\frac{1}{\alpha}}}{|x|}\right)^{d+\alpha} ,\,\,\,
x\in{\mathbb R}^d,\,\,t>0.
\end{equation}
(2.12) in \cite{VZ} gave the following estimate,
$$\tilde{c}|x|^{-(\alpha+d)}\leq p(1,0,x)\leq
\tilde{C}|x|^{-(\alpha+d)},\,\,\,\textrm{for large }x,$$ where
$\tilde{c}$ is a positive constant and $\tilde{C}$ is the same
constant as above. This implies that
$$\tilde{c}|t^{-\frac{1}{\alpha}}x|^{-(\alpha+d)}\leq p(1,0,t^{-\frac{1}{\alpha}}x)\leq
\tilde{C}|t^{-\frac{1}{\alpha}}x|^{-(\alpha+d)},\,\,\,\textrm{for
large }t^{-\frac{1}{\alpha}}x.$$ Thus
\begin{equation}
\tilde{c}t^{-\frac{d}{\alpha}}|t^{-\frac{1}{\alpha}}x|^{-(\alpha+d)}\leq
t^{-\frac{d}{\alpha}}p(1,0,t^{-\frac{1}{\alpha}}x)=p(t,0,x),\,\,\,\textrm{for
large }t^{-\frac{1}{\alpha}}x.
\end{equation}
For small $t^{-\frac{1}{\alpha}}x$, since $p(1,0,x)$ is positive
and continuous in $x\in {\mathbb R}^d$, and hence bounded below by a positive
constant on compact sets, there exists a positive constant
$\tilde{c}_0$ such that
$$\tilde{c}_0\leq p(1,0,t^{-\frac{1}{\alpha}}x),$$
which implies
\begin{equation}
\tilde{c}_0t^{-\frac{d}{\alpha}}\leq
t^{-\frac{d}{\alpha}}p(1,0,t^{-\frac{1}{\alpha}}x)=p(t,0,x),\,\,\,\textrm{for
small }t^{-\frac{1}{\alpha}}x.
\end{equation}
Combining (1.2), (1.3) and (1.4), we can see that the transition
density $p$ satisfies (1.1). It is also clear that in this case
$C(x,y)=\phi\big(\frac{x-y}{|x-y|}\big)\big/\big(2M(y)\big)$, which is
bounded between two positive numbers.
We say that a function $V$ on ${\mathbb R}^d$ belongs to the
Kato class $\bf{K_{d,\alpha}}$ if $$\lim_{t\downarrow 0}\sup_{x\in
{\mathbb R}^d}\int_{0}^{t}\int_{{\mathbb R}^d}p(s,x,y)|V(y)|dyds=0,$$ and we say
that a signed measure $\mu$ on ${\mathbb R}^d$ belongs to the Kato class
$\bf{K_{d,\alpha}}$ if $$\lim_{t\downarrow 0}\sup_{x\in
{\mathbb R}^d}\int_{0}^{t}\int_{{\mathbb R}^d}p(s,x,y)|\mu|(dy)ds=0.$$
Suppose $F$ is a function on ${\mathbb R}^d \times {\mathbb R}^d$.
\begin{defn} We
say $F$ belongs to $\bf{J_{d,\alpha}}$ if $F$ is bounded,
vanishing on the diagonal, and the function
$$
x\mapsto \int_{{\mathbb R}^d}\frac{|F(x,y)|}{|x-y|^{d+\alpha}}\,dy
$$
belongs to $\bf{K_{d,\alpha}}$.
\end{defn}
For any $F \in \bf{J_{d,\alpha}}$, we set
$$ A_{t}^{F}=\sum_{s \le t}F(X_{s-},X_{s}),\,\, t> 0.$$
We can define a non-local Feynman-Kac semigroup as follows
$$ S_{t}^{F}f(x)=\mathbb{E}_{x}(e^{-A_{t}^{F}}f(X_{t})),$$
where $f$ is a measurable function on ${\mathbb R}^d$. This semigroup was
studied in \cite{S1} and \cite{CS}.
Let $\tilde{F}(x,y)=F(y,x)$, for any $(x,y)\in {\mathbb R}^d \times {\mathbb R}^d$.
In this paper, we always assume both $F$ and $\tilde{F}$ $\in
\bf{J_{d,\alpha}}$.
Recently, sharp two-sided estimates of the density of the
semigroup $\{ S_{t}^{F},t\ge 0\}$ were established in \cite{S2}.
Under the assumption that $X$ is a symmetric $\alpha$-stable-like
process, using a martingale argument and results from \cite{CS},
the following result was established in \cite{S2}: Suppose that
$F\in \bf{J_{d,\alpha}}$ is a symmetric function. Then the semigroup
$\{ S_{t}^{F},t\ge 0\}$ admits a density $q(t,x,y)$ with respect
to $m$, and $q$ is jointly continuous on
$(0,\infty)\times{\mathbb R}^d\times{\mathbb R}^d$. Furthermore, there exist
positive constants $C_1,C_2,C_3$ and $C_4$ such that
$$C_{1}e^{-C_{2}t}t^{-\frac{d}{\alpha}}\left(1 \wedge \frac{t^{\frac{1}{\alpha}}}{|x-y|}\right)^{d+\alpha} \leq q(t,x,y) \leq C_{3}e^{C_{4}t}t^{-\frac{d}{\alpha}}\left(1 \wedge \frac{t^{\frac{1}{\alpha}}}{|x-y|}\right)^{d+\alpha}$$
for all $(t,x,y)\in (0,\infty) \times {\mathbb R}^d \times {\mathbb R}^d$.
The question that we are going to address in this paper is the
following: can we establish the same two-sided estimates for the
density of the Feynman-Kac semigroup of a nonsymmetric
$\alpha$-stable-like process $X$ when $F\in \bf{J_{d,\alpha}}$ is
nonsymmetric? The proof of the above result in \cite{S2} cannot
be adapted to the case where neither $F$ nor $X$ is symmetric. It
seems that, to answer the question, one has to use some new ideas.
In this paper, we are going to tackle the question above by
combining the generalization of an idea of \cite{BM}, which was
used to deal with the estimates of the density of continuous
functionals of Brownian motion, with some results on discontinuous
additive functionals.
The content of this paper is organized as follows. In Section 2,
we present some preliminary results on discontinuous additive
functionals. In Section 3, we establish the two-sided estimates on
the density of the Feynman-Kac semigroup under certain assumptions
on $F(x,y)$.
\section{Preliminary Results on Discontinuous Additive Functionals}
For convenience, we use $A_t$ to denote $\sum_{s \le
t}F(X_{s-},X_{s})$ instead of $A^F_t$. We have the following
formulae for $A^2_{t}$.
$$
A^2_{t}=2\int_{0}^{t}
A_{s}\,dA_{s}-\int_{0}^{t}F(X_{s-},X_{s})\,dA_{s},
$$
and
$$
A^2_{t}=2\int_{0}^{t}(A_{t}- A_{s})\,dA_{s}+\int_{0}^{t}F(X_{s-},X_{s})\,dA_{s}.
$$
The proof is straightforward.
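Both identities can be sanity-checked on a discrete analogue in which
the jumps are a finite list of values $f_k$, $A$ is their partial-sum
sequence (with $A_s$ evaluated just after each jump, as in the
Stieltjes integrals above), and the integrals become sums over the
jumps. A Python sketch:

```python
import numpy as np

rng = np.random.default_rng(2)
f = rng.normal(size=10)          # jump values F(X_{s-}, X_s) of the functional
A = np.cumsum(f)                 # A_s just after each jump; A_t = A[-1]
At = A[-1]

# first identity:  A_t^2 = 2 int A_s dA_s - int F dA_s
assert np.isclose(At ** 2, 2 * np.sum(A * f) - np.sum(f * f))

# second identity: A_t^2 = 2 int (A_t - A_s) dA_s + int F dA_s
assert np.isclose(At ** 2, 2 * np.sum((At - A) * f) + np.sum(f * f))
```

The first sum telescopes, since $2A_kf_k-f_k^2=A_k^2-A_{k-1}^2$ for the
partial sums $A_k$.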
In general, the formulae for $A^n_t$ are given by the following
theorem.
\begin{thm}
For every integer $n\geq 1$,
\begin{eqnarray*}
A^n_{t}&=&C^1_{n}\int_{0}^{t}A_{s}^{n-1}\,dA_{s}-C^2_{n}\int_{0}^{t}A_{s}^{n-2}F(X_{s-},X_{s})\,dA_{s}+C^3_{n}\int_{0}^{t}A_{s}^{n-3}F^2(X_{s-},X_{s})\,dA_{s}+\cdots\\
&&+(-1)^{i-1}C^{i}_n \int_{0}^{t}A_{s}^{n-i}F^{i-1}(X_{s-},X_{s})\,dA_{s}+\cdots+(-1)^{n-1}C^n_n\int_{0}^{t}F^{n-1}(X_{s-},X_{s})\,dA_{s},\\
A^n_{t}&=&C^1_{n}\int_{0}^{t}(A_{t}-A_{s})^{n-1}\,dA_{s}+C^2_{n}\int_{0}^{t}(A_{t}-A_{s})^{n-2}F(X_{s-},X_{s})\,dA_{s}\\
&&+C^3_{n}\int_{0}^{t}(A_{t}-A_{s})^{n-3}F^2(X_{s-},X_{s})\,dA_{s}+\cdots+C^{i}_n \int_{0}^{t}(A_{t}-A_{s})^{n-i}F^{i-1}(X_{s-},X_{s})\,dA_{s}\\
&&+\cdots+C^n_n\int_{0}^{t}F^{n-1}(X_{s-},X_{s})\,dA_{s},
\end{eqnarray*}
where $C^i_n=\frac{n!}{i!(n-i)!}$.
\end{thm}
\noindent{\bf Proof.} We use induction to show that these two formulae for $A^n_t$ hold
for all $n>1$. It is clear that they are true for $n=2$. Suppose
that they hold for all $n \leq m-1$; we show that they hold for $n=m$.
By the integration by parts formula,
$$A^m_t=\int_{0}^{t}A_{s-}\,dA^{m-1}_{s}+\int_{0}^{t}A_{s}^{m-1}\,dA_{s}$$
where
\begin{eqnarray*}
&&\int_{0}^{t}A_{s-}\,dA^{m-1}_{s}\\
&=&\int_{0}^{t}(A_s-F(X_{s-},X_{s}))\,dA^{m-1}_{s}\\
&=&\int_{0}^{t}A_s\,dA^{m-1}_{s}-\int_{0}^{t}F(X_{s-},X_{s})\,dA^{m-1}_{s}\\
&=&\int_{0}^{t}A_s(\sum_{i=1}^{m-1}(-1)^{i-1}C_{m-1}^{i}A^{m-1-i}_{s}F^{i-1}(X_{s-},X_{s}))\,dA_{s}\\
&&-\int_{0}^{t}F(X_{s-},X_{s})(\sum_{j=1}^{m-1}(-1)^{j-1}C_{m-1}^{j}A^{m-1-j}_{s}F^{j-1}(X_{s-},X_{s}))\,dA_{s}\\
&& (\textrm{ by the first formula for } A^n_t \textrm{ when } n=m-1 \textrm{ } ) \\
&=&\sum_{i=1}^{m-1}(-1)^{i-1}C_{m-1}^{i}\int_{0}^{t}A^{m-i}_{s}F^{i-1}(X_{s-},X_{s})\,dA_{s}\\
&&-\sum_{j=1}^{m-1}(-1)^{j-1}C_{m-1}^{j}\int_{0}^{t}A^{m-1-j}_{s}F^{j}(X_{s-},X_{s})\,dA_{s}\\
&=&\sum_{i=1}^{m-1}(-1)^{i-1}C_{m-1}^{i}\int_{0}^{t}A^{m-i}_{s}F^{i-1}(X_{s-},X_{s})\,dA_{s}\\
&&-\sum_{i=2}^{m}(-1)^{i-2}C_{m-1}^{i-1}\int_{0}^{t}A^{m-i}_{s}F^{i-1}(X_{s-},X_{s})\,dA_{s}\\
&&\textrm{ } (\textrm{ let }j=i-1 )\\
&=&\sum_{i=2}^{m-1}(-1)^{i-1}(C_{m-1}^{i}+C_{m-1}^{i-1})\int_{0}^{t}A^{m-i}_{s}F^{i-1}(X_{s-},X_{s})\,dA_{s}\\
&&+C_{m-1}^{1}\int_{0}^{t}A_{s}^{m-1}\,dA_{s}-(-1)^{m-2}\int_{0}^{t}F^{m-1}(X_{s-},X_{s})\,dA_{s}\\
&=&\sum_{i=2}^{m-1}(-1)^{i-1}C_{m}^{i}\int_{0}^{t}A^{m-i}_{s}F^{i-1}(X_{s-},X_{s})\,dA_{s}\\
&&+C_{m-1}^{1}\int_{0}^{t}A_{s}^{m-1}\,dA_{s}-(-1)^{m}\int_{0}^{t}F^{m-1}(X_{s-},X_{s})\,dA_{s}\\
&&\textrm{ } (\textrm{ by }C_{m-1}^{i}+C_{m-1}^{i-1}=C_{m}^{i}
),
\end{eqnarray*}
thus
$$
A^m_t=\int_{0}^{t}A_{s-}\,dA^{m-1}_{s}+\int_{0}^{t}A_{s}^{m-1}\,dA_{s}=\sum_{i=1}^{m}(-1)^{i-1}C_{m}^{i}\int_{0}^{t}A^{m-i}_{s}F^{i-1}(X_{s-},X_{s})\,dA_{s},
$$
i.e., the first formula for $A^n_t$ holds for $n=m$.
Now we turn to the second formula for $A^n_t$ with $n=m$.
\begin{eqnarray*}
&&C^1_{m}\int_{0}^{t}(A_{t}-A_{s})^{m-1}\,dA_{s}\\
&=&C^1_{m}\int_{0}^{t}\sum_{i=0}^{m-1}C_{m-1}^{i}A^{i}_{t}(-1)^{m-1-i}A^{m-1-i}_{s}\,dA_{s}\\
&=&\sum_{i=0}^{m-1}(-1)^{m-1-i}C^1_{m}C_{m-1}^{i}A^{i}_{t}\int_{0}^{t}A^{m-1-i}_{s}\,dA_{s}\\
&=&\sum_{i=0}^{m-1}(-1)^{m-1-i}C_{m}^{i}(m-i)A^{i}_{t}\int_{0}^{t}A^{m-1-i}_{s}\,dA_{s}\\
&&\textrm{ } (\textrm{ by }C_{m}^{1}C_{m-1}^{i}=C_{m}^{i}(m-i)\textrm{ } )\\
&=&\sum_{i=0}^{m-1}(-1)^{m-1-i}C_{m}^{i}A^{i}_{t}((m-i)\int_{0}^{t}A^{m-1-i}_{s}\,dA_{s})\\
&=&\sum_{i=0}^{m-1}(-1)^{m-1-i}C_{m}^{i}A^{i}_{t}(A^{m-i}_{t}+\int_{0}^{t}\sum_{k=2}^{m-i}(-1)^{k}C_{m-i}^{k}A^{m-i-k}_{s}F^{k-1}(X_{s-},X_{s})\,dA_{s})\\
&&\textrm { }(\textrm{ by the first formula for }A^n_t \textrm{ when } n=m-i \textrm{ } )\\
&=&\sum_{i=0}^{m-1}(-1)^{m-1-i}C_{m}^{i}A^{i}_{t}A^{m-i}_{t}\\
&&+\int_{0}^{t}\sum_{i=0}^{m-1}(-1)^{m-1-i}\sum_{k=2}^{m-i}(-1)^{k}C_{m}^{i}C_{m-i}^{k}A^{i}_{t}A^{m-i-k}_{s}F^{k-1}(X_{s-},X_{s})\,dA_{s},\\
\end{eqnarray*}
where
$$\sum_{i=0}^{m-1}(-1)^{m-1-i}C_{m}^{i}A^{i}_{t}A^{m-i}_{t}=\Big(\sum_{i=0}^{m-1}(-1)^{m-1-i}C_{m}^{i}\Big)A^{m}_{t}=A^{m}_{t}$$
(the coefficient equals $1$ since $\sum_{i=0}^{m}(-1)^{m-i}C_{m}^{i}=0$),
and
\begin{eqnarray*}
&&\int_{0}^{t}\sum_{i=0}^{m-1}(-1)^{m-1-i}\sum_{k=2}^{m-i}(-1)^{k}C_{m}^{i}C_{m-i}^{k}A^{i}_{t}A^{m-i-k}_{s}F^{k-1}(X_{s-},X_{s})\,dA_{s}\\
&=&\int_{0}^{t}\sum_{k=2}^{m}\sum_{i=0}^{m-k}(-1)^{m-k-i-1}C_{m}^{k}C_{m-k}^{i}A^{i}_{t}A^{m-k-i}_{s}F^{k-1}(X_{s-},X_{s})\,dA_{s}\\
&&\textrm{ }(\textrm{ by }C_{m}^{i}C_{m-i}^{k}=C_{m}^{k}C_{m-k}^{i} \textrm{ and } (-1)^{m-1-i+k}=(-1)^{m-k-i-1}\textrm{ } )\\
&=&\sum_{k=2}^{m}C_{m}^{k}(-1)^{-1}\int_{0}^{t}(A_{t}-A_{s})^{m-k}F^{k-1}(X_{s-},X_{s})\,dA_{s}\\
&=&-\sum_{k=2}^{m}C_{m}^{k}\int_{0}^{t}(A_{t}-A_{s})^{m-k}F^{k-1}(X_{s-},X_{s})\,dA_{s},\\
\end{eqnarray*}
therefore
$$C^1_{m}\int_{0}^{t}(A_{t}-A_{s})^{m-1}\,dA_{s}=A^{m}_{t}-\sum_{k=2}^{m}C_{m}^{k}\int_{0}^{t}(A_{t}-A_{s})^{m-k}F^{k-1}(X_{s-},X_{s})\,dA_{s},$$
i.e., the second formula for $A^n_t$ holds for $n=m$. {\hfill $\Box$ \bigskip}
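The two formulae can be sanity-checked numerically on a purely discontinuous path: if $A$ jumps by $f_j=F(X_{t_j-},X_{t_j})$ at its $j$-th jump and $a_j=f_1+\cdots+f_j$, then $\int_0^t H_s\,dA_s=\sum_j H_{t_j}f_j$, with the integrand $A_s^{n-i}$ evaluated at the post-jump value $a_j$. A minimal illustrative sketch (the jump sizes below are arbitrary):

```python
import math
import random

random.seed(0)

# Illustrative jump sizes f_j = F(X_{t_j-}, X_{t_j}); a_j are the running sums,
# so A_t = a_N.  For a purely discontinuous A, int_0^t H_s dA_s = sum_j H_{t_j} f_j.
f = [random.uniform(-1.0, 1.0) for _ in range(25)]
a = [0.0]
for fj in f:
    a.append(a[-1] + fj)
A_t = a[-1]
N = len(f)

for n in range(2, 7):
    # First formula: A_t^n = sum_{i=1}^n (-1)^{i-1} C_n^i int_0^t A_s^{n-i} F^{i-1} dA_s.
    first = sum((-1) ** (i - 1) * math.comb(n, i)
                * sum(a[j] ** (n - i) * f[j - 1] ** i for j in range(1, N + 1))
                for i in range(1, n + 1))
    # Second formula: A_t^n = sum_{i=1}^n C_n^i int_0^t (A_t - A_s)^{n-i} F^{i-1} dA_s.
    second = sum(math.comb(n, i)
                 * sum((A_t - a[j]) ** (n - i) * f[j - 1] ** i for j in range(1, N + 1))
                 for i in range(1, n + 1))
    assert abs(first - A_t ** n) < 1e-7
    assert abs(second - A_t ** n) < 1e-7
```

Both checks reduce to telescoping sums: $\sum_i(-1)^{i-1}C_n^i a_j^{n-i}f_j^i=a_j^n-a_{j-1}^n$ and $\sum_i C_n^i(A_t-a_j)^{n-i}f_j^i=(A_t-a_{j-1})^n-(A_t-a_j)^n$, which is exactly the mechanism behind the two identities.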
\section{Density of Feynman-Kac Semigroups Given by Discontinuous Additive Functionals}
From now on we define $q_{0}(t,x,y)=p(t,x,y)$, where $p(t,x,y)$ is
the transition density of the $\alpha$-stable-like process $X$ and
satisfies (1.1). By the second formula for $A^n_t$, we have, for
any bounded measurable function $g$,
\begin{eqnarray*}
\mathbb{E}_x[A^n_{t}g(X_t)]&=&\sum_{i=1}^nC^{i}_n\mathbb{E}_x[\int_{0}^{t}(A_{t}-A_{s})^{n-i}F^{i-1}(X_{s-},X_{s})g(X_{t})\,dA_{s}]\\
&=&\sum_{i=1}^nC^{i}_n\mathbb{E}_x[\int_{0}^{t}\mathbb{E}_{X_s}\left(A_{t-s}^{n-i}g(X_{t-s})\right)\,d(\sum_{r \le s}F^{i}(X_{r-},X_{r}))]\\
&=&\sum_{i=1}^nC^{i}_n\mathbb{E}_x[\int_{0}^{t}\int_{{\mathbb R}^d}\frac{2C(X_s,y)F^{i}(X_s,y)}{|X_s-y|^{d+\alpha}}\mathbb{E}_{y}\left(A_{t-s}^{n-i}g(X_{t-s})\right)\,m(dy)ds].\\
\end{eqnarray*}
We define $q_n(t,x,z)$ as follows,
$$q_n(t,x,z)=\sum_{i=1}^nC^{i}_n\int_{0}^{t}\int_{{\mathbb R}^d}p(s,x,w)m(dw)\int_{{\mathbb R}^d}\frac{2C(w,y)F^{i}(w,y)}{|w-y|^{d+\alpha}}q_{n-i}(t-s,y,z)\,m(dy)ds.$$
Then by induction, we can show that for any $n>0$
$$\int_{{\mathbb R}^d}q_n(t,x,z)g(z)\,m(dz)=\mathbb{E}_x[A^n_{t}g(X_t)] $$
and
\begin{eqnarray*}
\mathbb{E}_x[A^n_{t}g(X_t)]&=&\sum_{i=1}^nC^{i}_n\mathbb{E}_x[\int_{0}^{t}\int_{{\mathbb R}^d}\frac{2C(X_s,y)F^{i}(X_s,y)}{|X_s-y|^{d+\alpha}}\int_{{\mathbb R}^d}q_{n-i}(t-s,y,z)g(z)\,m(dz)\,m(dy)ds]\\
&=&\sum_{i=1}^nC^{i}_n\int_{0}^{t}\int_{{\mathbb R}^d}p(s,x,w)m(dw)\int_{{\mathbb R}^d}\frac{2C(w,y)F^{i}(w,y)}{|w-y|^{d+\alpha}}\int_{{\mathbb R}^d}q_{n-i}(t-s,y,z)g(z)\\
&&\cdot m(dz)\,m(dy)ds.\\
\end{eqnarray*}
We assume that there exist positive constants $\overline{C}$, $L$,
$M_0$ and $\overline{M}$ such that $|2C(x,y)|\leq \overline{C}$,
$|F(x,y)|\leq \frac{L}{2}$ and $0 <M_0\leq M(y)\leq \overline{M}$
where $m(dy)=M(y)dy$. Define
$\overline{F}(w,y)=|F(w,y)|+|F(y,w)|$, which is symmetric and
satisfies $\overline{F}(w,y) \le L$. Define
$\overline{p}(t,x,y)=p(t,x,y)+p(t,y,x)$. Then
$\overline{p}(t,x,y)$ is symmetric and satisfies
$$2M_{1}t^{-\frac{d}{\alpha}}\left(1 \wedge \frac{t^{\frac{1}{\alpha}}}{|x-y|}\right)^{d+\alpha}\leq \overline{p}(t,x,y)\leq 2 M_{2}t^{-\frac{d}{\alpha}}\left(1 \wedge \frac{t^{\frac{1}{\alpha}}}{|x-y|}\right)^{d+\alpha},\,\,\forall (t,x,y)\in (0,\infty)\times{\mathbb R}^d\times{\mathbb R}^d. $$
Denote $
(\int_{{\mathbb R}^d}\frac{|\overline{F}(w,y)|}{|w-y|^{d+\alpha}}\,dy)dw$
by $\mu(dw)$ and let
$C_t=\sup_{x\in{\mathbb R}^d}\int_{0}^{t}\int_{{\mathbb R}^d}\overline{p}(s,x,w)\,\mu(dw)ds$.
Then $C_t \downarrow 0$ as $t \downarrow 0$. It is clear that
there exist two positive constants $D_{1}$ and $D_{2}$ such that
$D_{1}\le \int_{{\mathbb R}^d}\overline{p}(t,x,y)\,m(dy)\le D_{2}$, as
$\overline{p}(t,x,y)$ is comparable to $p(t,x,y)$. Let
$\overline{q}_{0}(t,x,z)=\overline{p}(t,x,z)$ and define
$\overline{q}_{n}(t,x,z)$ by
$$
\overline{q}_n(t,x,z)=\sum_{i=1}^nC^{i}_n\int_{0}^{t}\int_{{\mathbb R}^d}\overline{p}(s,x,w)m(dw)\int_{{\mathbb R}^d}\frac{\overline{C}\overline{F}^{i}(w,y)}{|w-y|^{d+\alpha}}\overline{q}_{n-i}(t-s,y,z)\,m(dy)ds.
$$
We can see that $|q_{n}(t,x,z)| \leq \overline{q}_{n}(t,x,z)$.
Before we move on to the main results, two lemmas are needed.
\begin{lemma}
For any two positive constants $K<1$ and $L$, there exists a
constant $C_{0}(K,L)$ which depends on $K$ and $L$, such that
\begin{equation}
K^{n-1}+K^{n-2}\frac{L}{2!}+K^{n-3}\frac{L^2}{3!}+\cdots+K^{n-i}\frac{L^{i-1}}{i!}+\cdots+\frac{L^{n-1}}{n!}\leq C_{0}(K,L)K^n\,,\quad\textrm{for all }
n>0.
\end{equation}
\end{lemma}
\noindent{\bf Proof.} Use the fact that there exists $i_{0}\ge0$ such that
$$
\frac{L^{l-1}}{l!}\leq \left(\frac{K}{2}\right)^{l},
\,\,\,\textrm{for } l\geq i_{0}.
$$
The terms with index $i\geq i_{0}$ are then bounded by $K^{n}2^{-i}$ and sum to at most $K^{n}$, while each of the finitely many terms with $i<i_{0}$ equals $\frac{L^{i-1}}{i!K^{i}}\,K^{n}$, a constant multiple of $K^{n}$.
{\hfill $\Box$ \bigskip}
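An explicit admissible constant is $C_0(K,L)=\frac{e^{L/K}-1}{L}$: dividing the left-hand side of the displayed inequality by $K^n$ gives $\sum_{i=1}^{n}\frac{L^{i-1}}{i!\,K^{i}}$, a partial sum of $\frac{1}{L}\sum_{i\ge1}\frac{(L/K)^i}{i!}=\frac{e^{L/K}-1}{L}$. A quick numerical check (the values of $K$ and $L$ below are illustrative, not from the paper):

```python
import math

K, L = 0.5, 3.0                      # illustrative values with K < 1
C0 = (math.exp(L / K) - 1.0) / L     # candidate constant C_0(K, L)

for n in range(1, 31):
    lhs = sum(K ** (n - i) * L ** (i - 1) / math.factorial(i)
              for i in range(1, n + 1))
    # The inequality is strict in exact arithmetic; the small slack
    # only absorbs floating-point rounding.
    assert lhs <= C0 * K ** n * (1 + 1e-9)
```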
\begin{remark}
For any given $K$ and $L$ and any given constant $M_1$, we can choose $t_{0}$ small enough that $C_tM_1C_{0}(K,L)\le 1$ for $0\leq t \leq t_{0}$.
Thus
\begin{eqnarray*}
&&C_tM_1\left( K^{n-1}+K^{n-2}\frac{L}{2!}+K^{n-3}\frac{L^2}{3!}+\cdots+\frac{L^{n-1}}{n!}\right)\le C_tM_1C_{0}(K,L)K^n \le K^n.\\
\end{eqnarray*}
\end{remark}
\begin{lemma}
$\overline{q}_n(t,x,y)$ is symmetric in $x$ and $y$.
\end{lemma}
\noindent{\bf Proof.} We know that
\begin{eqnarray*}
\overline{q}_n(t,z,x)&=&\sum_{i_1=1}^nC^{i_1}_n\int_{0}^{t}\int_{{\mathbb R}^d}\overline{p}(s_1,z,w_1)m(dw_1)\int_{{\mathbb R}^d}\frac{\overline{C}\overline{F}^{i_1}(w_1,y_1)}{|w_1-y_1|^{d+\alpha}}\overline{q}_{n-i_1}(t-s_1,y_1,x)\,m(dy_1)ds_1\\
&=&\sum_{i_1=1}^nC^{i_1}_n\sum_{i_2=1}^{n-i_1}C^{i_2}_{n-i_1}
[\int_{0}^{t}\int_{{\mathbb R}^d}
\overline{p}(s_1,z,w_1)m(dw_1)\int_{{\mathbb R}^d}\frac{\overline{C}\overline{F}^{i_1}(w_1,y_1)}{|w_1-y_1|^{d+\alpha}}\,m(dy_1)ds_1
\int_{0}^{t-s_1}\\
&&\cdot\int_{{\mathbb R}^d}\overline{p}(s_2,y_1,w_2)m(dw_2)\int_{{\mathbb R}^d}\frac{\overline{C}\overline{F}^{i_2}(w_2,y_2)}{|w_2-y_2|^{d+\alpha}}\overline{q}_{n-i_1-i_2}(t-s_1-s_2,y_2,x)\,m(dy_2)ds_2]\\
&\vdots&\\
&=&\sum_{i_1+i_2+\cdots+i_k=n}C^{i_1}_nC^{i_2}_{n-i_1}\cdot\cdots \cdot C^{i_k}_{n-i_1-i_2-\cdots-i_{k-1}}
[\int_{0}^{t}\int_{0}^{t-s_1}\cdots\int_{0}^{t-s_1-\cdots-s_{k-1}}\int_{{\mathbb R}^d}\cdots\int_{{\mathbb R}^d}\\
&&\overline{p}(s_1,z,w_1)\frac{\overline{C}\overline{F}^{i_1}(w_1,y_1)}{|w_1-y_1|^{d+\alpha}}
\overline{p}(s_2,y_1,w_2)\frac{\overline{C}\overline{F}^{i_2}(w_2,y_2)}{|w_2-y_2|^{d+\alpha}}
\cdot\cdots \cdot
\overline{p}(s_k,y_{k-1},w_k)\\
&&\cdot\frac{\overline{C}\overline{F}^{i_k}(w_k,y_k)}{|w_k-y_k|^{d+\alpha}}\overline{p}(t-s_1-\cdots-s_k,y_k,x)\,ds_1\cdots
ds_k m(dw_1)\cdots m(dw_k)\\
&&\cdot m(dy_1)\cdots m(dy_k)]\\
&=&\sum_{i_1+i_2+\cdots+i_k=n}C^{i_1,\ldots,i_k}_n
[\int_{0}^{t}\int_{0}^{t-s_1}\cdots\int_{0}^{t-s_1-\cdots-s_{k-1}}\int_{{\mathbb R}^d}\cdots\int_{{\mathbb R}^d}
\overline{p}(s_1,z,w_1)
\frac{\overline{C}\overline{F}^{i_1}(w_1,y_1)}{|w_1-y_1|^{d+\alpha}}\\
&& \cdot \overline{p}(s_2,y_1,w_2)\frac{\overline{C}\overline{F}^{i_2}(w_2,y_2)}{|w_2-y_2|^{d+\alpha}}\cdot\cdots \cdot \overline{p}(s_k,y_{k-1},w_k)
\frac{\overline{C}\overline{F}^{i_k}(w_k,y_k)}{|w_k-y_k|^{d+\alpha}}\\
&& \cdot \overline{p}(t-s_1-\cdots - s_k,y_k,x)\,ds_1\cdots
ds_k m(dw_1)\cdots m(dw_k) m(dy_1)\cdots m(dy_k)].\\
\end{eqnarray*}
Put $t-s_1-\cdots-s_k=\tilde{s}_1, s_k=\tilde{s}_2,\ldots,
s_l=\tilde{s}_{k+2-l},\ldots, s_2=\tilde{s}_k$. It is easy to see
the absolute value of the Jacobian of this transformation is $1$.
Let $
y_k=\tilde{w}_1,\ldots,y_{l}=\tilde{w}_{k-l+1},\ldots,y_{1}=\tilde{w}_k,\,
w_k=\tilde{y}_1,\ldots,w_{l}=\tilde{y}_{k-l+1},\ldots,w_{1}=\tilde{y}_k$
and $ j_k=i_1,\ldots,j_{l}=i_{k-l+1},\ldots,j_1=i_k $.\\
Thus the
above equality becomes
\begin{eqnarray*}
\overline{q}_n(t,z,x)&=&\sum_{j_1+j_2+\cdots+j_k=n}C^{j_1,\ldots,j_k}_n
[\int_{0}^{t-\tilde{s}_1-\cdots-\tilde{s}_{k-1}}\int_{0}^{t-\tilde{s}_1-\cdots-\tilde{s}_{k-2}}\cdots\int_{0}^{t}\int_{{\mathbb R}^d}\cdots\int_{{\mathbb R}^d}\\
&&\cdot \overline{p}(t-\tilde{s}_1-\cdots-\tilde{s}_{k},z,\tilde{y}_k)\frac{\overline{C}\overline{F}^{j_k}(\tilde{y}_k,\tilde{w}_k)}{|\tilde{y}_k-\tilde{w}_k|^{d+\alpha}}
\overline{p}(\tilde{s}_k,\tilde{w}_k,\tilde{y}_{k-1})\\
&&\cdot\frac{\overline{C}\overline{F}^{j_{k-1}}(\tilde{y}_{k-1},\tilde{w}_{k-1})}{|\tilde{y}_{k-1}-\tilde{w}_{k-1}|^{d+\alpha}}
\cdot\cdots\cdot
\overline{p}(\tilde{s}_2,\tilde{w}_2,\tilde{y}_1)
\frac{\overline{C}\overline{F}^{j_1}(\tilde{y}_1,\tilde{w}_1)}{|\tilde{y}_1-\tilde{w}_1|^{d+\alpha}}\\
&&\cdot \overline{p}(\tilde{s}_1,\tilde{w}_1,x)\,d\tilde{s}_k\cdots
d\tilde{s}_1m(d\tilde{y}_k)\cdots m(d\tilde{y}_1)m(d\tilde{w}_k)\cdots
m(d\tilde{w}_1)]. \end{eqnarray*}
Rearranging the components of the integrand and using the fact
that $\overline{F}(x,y)$ and $\overline{p}(t,x,y)$ are symmetric
in $x$ and $y$, it is easy to see that the above expression for
$\overline{q}_n(t,z,x)$ is equal to $\overline{q}_n(t,x,z)$. {\hfill $\Box$ \bigskip}
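The time-reversal argument can be illustrated on a discrete toy model: replace $\overline{p}(s,\cdot,\cdot)$ by symmetric matrices $P_s$, the symmetric kernels $\overline{C}\,\overline{F}^{i}(w,y)/|w-y|^{d+\alpha}$ by symmetric matrices $G_i$, and the time integral by a sum over integer times. The analogous recursion then produces symmetric matrices, mirroring the lemma. This is only a finite sketch; the matrix sizes and entries are arbitrary:

```python
import numpy as np
from math import comb

rng = np.random.default_rng(0)
d, T = 4, 6  # toy state-space size and time horizon (illustrative)

def sym(m):
    return (m + m.T) / 2

# Symmetric stand-ins for p-bar(s,.,.), s = 1..T, and for the kernels
# built from F-bar^i (all entries positive, like the continuum objects).
P = [None] + [sym(rng.random((d, d))) for _ in range(T)]
G = [None] + [sym(rng.random((d, d))) for _ in range(T)]

def Q(n, t):
    """Discrete analogue of the recursion defining q-bar_n."""
    if n == 0:
        return P[t]
    out = np.zeros((d, d))
    for i in range(1, n + 1):
        for s in range(1, t):
            out = out + comb(n, i) * P[s] @ G[i] @ Q(n - i, t - s)
    return out

for n in range(4):
    q = Q(n, T)
    assert np.allclose(q, q.T)  # symmetry survives the recursion
```

Transposing a term reverses the product $P\,G_{i_1}P\,G_{i_2}\cdots P$; because the multinomial coefficient $C^{i_1,\ldots,i_k}_n$ is invariant under reversing $(i_1,\ldots,i_k)$, the transposed term reappears in the sum, just as in the change of variables above.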
In the proof of the following theorem, we use an idea similar to
that used in \cite{BM} for Brownian motions.
\begin{thm}
There exist two positive constants $ K<1$ and $ M$, and there
exists $ t_{0}> 0$, such that for $0<t \leq t_{0}$,
\begin{equation}
\overline{q}_n(t,x,z)\leq n!MK^nt^{-\frac{d}{\alpha}}\,, \textrm{ for all } n,
\end{equation}
and
\begin{equation}
\int_{0}^{t}\int_{{\mathbb R}^d}\overline{q}_n(s,x,z)\,\mu(dz)ds\leq C_tn!K^n\,,\textrm{ for all } n.
\end{equation}
\end{thm}
\noindent{\bf Proof.} It is clear that when $n=0$, (3.2) and (3.3) hold.
We assume they hold for $n\leq m-1$ and consider the case $n=m$. We write $\overline{q}_m(t,x,z)$ as a sum of two terms:
\begin{eqnarray*}
\overline{q}_m(t,x,z)&=&\sum_{i=1}^mC^{i}_m\int_{0}^{\frac{t}{2}}\int_{{\mathbb R}^d}\overline{p}(s,x,w)m(dw)\int_{{\mathbb R}^d}\frac{\overline{C}\overline{F}^{i}(w,y)}{|w-y|^{d+\alpha}}\overline{q}_{m-i}(t-s,y,z)\,m(dy)ds\\
&&+\sum_{i=1}^mC^{i}_m\int_{\frac{t}{2}}^{t}\int_{{\mathbb R}^d}\overline{p}(s,x,w)m(dw)\int_{{\mathbb R}^d}\frac{\overline{C}\overline{F}^{i}(w,y)}{|w-y|^{d+{\alpha}}}\overline{q}_{m-i}(t-s,y,z)\,m(dy)ds.
\end{eqnarray*}
Since (3.2) and (3.3) hold for $n\leq m-1$, we have
\begin{eqnarray*}
&&\sum_{i=1}^mC^{i}_m\int_{0}^{\frac{t}{2}}\int_{{\mathbb R}^d}\overline{p}(s,x,w)m(dw)\int_{{\mathbb R}^d}\frac{\overline{C}\overline{F}^{i}(w,y)}{|w-y|^{d+\alpha}}\overline{q}_{m-i}(t-s,y,z)\,m(dy)ds\\
&\leq&\sum_{i=1}^mC^{i}_mM{\overline{M}}^2\overline{C}L^{i-1}(m-i)!K^{m-i}\left(\frac{t}{2}\right)^{-\frac{d}{\alpha}}\int_{0}^{\frac{t}{2}}\int_{{\mathbb R}^d}\overline{p}(s,x,w)\,\mu(dw)ds\\
&\leq&\sum_{i=1}^mC_tC^{i}_mM{\overline{M}}^2\overline{C}L^{i-1}(m-i)!K^{m-i}\left(\frac{t}{2}\right)^{-\frac{d}{\alpha}}.
\end{eqnarray*}
Similarly,
\begin{eqnarray*}
&&\sum_{i=1}^mC^{i}_m\int_{\frac{t}{2}}^{t}\int_{{\mathbb R}^d}\overline{p}(s,x,w)m(dw)\int_{{\mathbb R}^d}\frac{\overline{C}\overline{F}^{i}(w,y)}{|w-y|^{d+\alpha}}\overline{q}_{m-i}(t-s,y,z)\,m(dy)ds\\
&\leq&\sum_{i=1}^mC^{i}_mM\left(\frac{t}{2}\right)^{-\frac{d}{\alpha}}\int_{\frac{t}{2}}^{t}\int_{{\mathbb R}^d}\int_{{\mathbb R}^d}\frac{\overline{C}\overline{F}^{i}(w,y)}{|w-y|^{d+\alpha}}\,m(dw)\overline{q}_{m-i}(t-s,y,z)\,m(dy)ds\\
&\leq&\sum_{i=1}^mC^{i}_mM\left(\frac{t}{2}\right)^{-\frac{d}{\alpha}}\overline{C}L^{i-1}{\overline{M}}^2\int_{\frac{t}{2}}^{t}\int_{{\mathbb R}^d}\overline{q}_{m-i}(t-s,y,z)\,\mu(dy)ds\\
\end{eqnarray*}
\begin{eqnarray*}
&\leq&\sum_{i=1}^mC^{i}_mM\left(\frac{t}{2}\right)^{-\frac{d}{\alpha}}\overline{C}L^{i-1}{\overline{M}}^2C_t(m-i)!K^{m-i}\\
&&\textrm{ } (\textrm{ by symmetry, } \overline{q}_{m-i}(t-s,y,z)=\overline{q}_{m-i}(t-s,z,y) \textrm{ and }(3.3) \textrm{ }) \\
&=&\sum_{i=1}^mC_tC^{i}_mM{\overline{M}}^2\overline{C}L^{i-1}(m-i)!K^{m-i}\left(\frac{t}{2}\right)^{-\frac{d}{\alpha}}.
\end{eqnarray*}
Therefore,
\begin{eqnarray*}
\overline{q}_m(t,x,z)&\leq& \sum_{i=1}^mC_tC^{i}_m2^{1+{\frac{d}{\alpha}}}M{\overline{M}}^2\overline{C}L^{i-1}(m-i)!K^{m-i}t^{-\frac{d}{\alpha}}\\
&=&m!2^{1+{\frac{d}{\alpha}}}M{\overline{M}}^2\overline{C}t^{-\frac{d}{\alpha}}C_t\left(\sum_{i=1}^mK^{m-i}\frac{L^{i-1}}{i!}\right).
\end{eqnarray*}
Let $M_1=2^{1+\frac{d}{\alpha}}{\overline{M}}^2\overline{C}$. Then
by Remark 3.2, we can choose a small $t_{0}$ such that for $0<
t\leq t_{0}$,
$$
C_tM_1\left(\sum_{i=1}^mK^{m-i}\frac{L^{i-1}}{i!}\right)\leq K^m\,, \textrm{ for any }
m\,.
$$
Thus
$$
\overline{q}_m(t,x,z)\leq
m!MK^mt^{-\frac{d}{\alpha}},
$$
i.e. (3.2) holds for $n=m$.
Now we show (3.3) holds for $n=m$.
\begin{eqnarray*}
&&\int_{0}^{t}\int_{{\mathbb R}^d}\overline{q}_m(s,x,z)\,\mu(dz)ds\\
&=&\int_{0}^{t}\int_{{\mathbb R}^d}\left(\sum_{i=1}^mC^{i}_m\int_{0}^s\int_{{\mathbb R}^d}\overline{p}(u,x,w)m(dw)\int_{{\mathbb R}^d}\frac{\overline{C}\overline{F}^{i}(w,y)}{|w-y|^{d+\alpha}}\overline{q}_{m-i}(s-u,y,z)m(dy)du\right)\mu(dz)ds.
\end{eqnarray*}
Letting $s-u=v$, we get
\begin{eqnarray*}
&&\int_{0}^{t}\int_{{\mathbb R}^d}\overline{q}_m(s,x,z)\,\mu(dz)ds\\
&=& \int_{0}^{t}\int_{{\mathbb R}^d}\left(\sum_{i=1}^mC^{i}_m\int_{0}^{t-u}\int_{{\mathbb R}^d}\overline{p}(u,x,w)m(dw)\int_{{\mathbb R}^d}\frac{\overline{C}\overline{F}^{i}(w,y)}{|w-y|^{d+\alpha}}\overline{q}_{m-i}(v,y,z)\,\mu(dz)dv\right)\,m(dy)du\\
&=& \sum_{i=1}^mC^{i}_m\int_{0}^{t}\int_{{\mathbb R}^d}\overline{p}(u,x,w)\left(\int_{0}^{t-u}\int_{{\mathbb R}^d}\overline{q}_{m-i}(v,y,z)\mu(dz)dv\right)\int_{{\mathbb R}^d}\frac{\overline{C}\overline{F}^{i}(w,y)}{|w-y|^{d+\alpha}}\,m(dy)m(dw)du\\
&\leq& \sum_{i=1}^mC^{i}_m\int_{0}^{t}\int_{{\mathbb R}^d}\overline{p}(u,x,w)C_t(m-i)!K^{m-i}\int_{{\mathbb R}^d}\frac{\overline{C}\overline{F}^{i}(w,y)}{|w-y|^{d+\alpha}}\,m(dy)m(dw)du\\
\end{eqnarray*}
\begin{eqnarray*}
&=& \sum_{i=1}^mC^{i}_mC_t(m-i)!K^{m-i}\int_{0}^{t}\int_{{\mathbb R}^d}\overline{p}(u,x,w)\int_{{\mathbb R}^d}\frac{\overline{C}\overline{F}^{i}(w,y)}{|w-y|^{d+\alpha}}\,m(dy)m(dw)du\\
&\leq& \sum_{i=1}^mC^{i}_mC_t(m-i)!K^{m-i}\overline{C}L^{i-1}{\overline{M}}^2\int_{0}^{t}\int_{{\mathbb R}^d}\overline{p}(u,x,w)\,\mu(dw)du\\
&\leq& C_tC_t{\overline{M}}^2\overline{C}m!\left(\sum_{i=1}^mK^{m-i}\frac{L^{i-1}}{i!}\right).
\end{eqnarray*}
It is clear that for the previous $t_{0}$, when $0<t\leq t_{0}$,
$$
C_t{\overline{M}}^2\overline{C}\left(\sum_{i=1}^mK^{m-i}\frac{L^{i-1}}{i!}\right)\leq
K^m.
$$
Thus
$$
C_tC_t{\overline{M}}^2\overline{C}m!\left(\sum_{i=1}^mK^{m-i}\frac{L^{i-1}}{i!}\right)\leq
C_tm!K^m,
$$
i.e.
$$
\int_{0}^{t}\int_{{\mathbb R}^d}\overline{q}_m(s,x,z)\,\mu(dz)ds\leq
C_tm!K^m.
$$ Therefore (3.3) holds for $n=m$.
{\hfill $\Box$ \bigskip}
By the above theorem, we have for $0<t\leq t_{0}$,
$$ \sum_{n=0}^{\infty}\frac{\overline{q}_n(t,x,z)}{n!}\leq \sum_{n=0}^{\infty}MK^nt^{-\frac{d}{\alpha}}=M\frac{1}{1-K}t^{-\frac{d}{\alpha}}. $$
Since $|q_n(t,x,z)|\le \overline{q}_n(t,x,z)$,
the series $\sum_{n=0}^{\infty}\frac{{q}_n(t,x,z)}{n!}$ converges absolutely and uniformly
on $[\epsilon,t_{0}]\times{\mathbb R}^d\times{\mathbb R}^d$, for any
$\epsilon > 0$. Let
$q(t,x,z)=\sum_{n=0}^{\infty}(-1)^n\frac{q_n(t,x,z)}{n!}$. Then
$q(t,x,z)$ is well defined on $(0,t_{0}]\times{\mathbb R}^d\times{\mathbb R}^d$.
Thus we have the following properties of $q(t,x,z)$.
\begin{prop}
(i) $\int_{{\mathbb R}^d}q(t,x,z)g(z)\,m(dz)=\mathbb{E}_x[e^{-A_{t}}g(X_{t})]$
for any bounded measurable $g$ and any $t>0$;
\noindent (ii)
$\int_{{\mathbb R}^d}q(t,x,y)q(s,y,z)\,m(dy)=q(t+s,x,z)$ for any $t,s > 0$.
\end{prop}
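In the special case $F\equiv 0$ we have $q=p$, and property (ii) reduces to the Chapman--Kolmogorov equation for $p$. As an illustrative numerical sanity check (not part of the argument), one can verify this for the Cauchy kernel $p(t,x,y)=\frac{1}{\pi}\frac{t}{t^{2}+(x-y)^{2}}$, i.e. the case $d=1$, $\alpha=1$, $m(dy)=dy$, which satisfies two-sided bounds of the form (1.1):

```python
import numpy as np

def cauchy(t, x, y):
    """Transition density of the 1d Cauchy process (alpha = 1)."""
    return t / (np.pi * (t * t + (x - y) ** 2))

t, s, x, z = 1.0, 1.5, 0.0, 2.0
# Chapman-Kolmogorov: int p(t,x,y) p(s,y,z) dy = p(t+s,x,z).
y = np.linspace(-2000.0, 2000.0, 400001)   # wide grid: the kernel has heavy tails
h = y[1] - y[0]
lhs = np.sum(cauchy(t, x, y) * cauchy(s, y, z)) * h
rhs = cauchy(t + s, x, z)
assert abs(lhs - rhs) < 1e-4 * rhs
```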
In the following, we estimate $q(t,x,z)$ from above and from
below.
It is clear that for any $(t,x,y) \in (0,\infty) \times {\mathbb R}^d
\times {\mathbb R}^d$, there exists a positive constant $D_2$ such that
$\int_{{\mathbb R}^d}\overline{p}(t,x,y)\,m(dy)\leq D_2$. For this $D_2$, the
positive constants $M_1$, $L$ and $K<1$ given in (1.1), Remark
3.2 and Theorem 3.4, and $\overline{C}$, the upper bound
of $|2C(x,y)|$, there exists a large enough positive integer $k$
such that $\frac{L}{(1-\frac{1}{10}-
2^{-\frac{1}{2}})^{d+\alpha}}\frac{1}{2k}\overline{C}D^2_{2}\le
\frac{1}{8}M_{1}.$
Now instead of considering
$$ \overline{q}_1(t,x,z)=\int_{0}^{t}\int_{{\mathbb R}^d}\overline{p}(s,x,w)m(dw)\int_{{\mathbb R}^d}\frac{\overline{C}\overline{F}(w,y)}{|w-y|^{d+\alpha}}\overline{p}(t-s,y,z)\,m(dy)ds,$$
we consider
\begin{eqnarray*}
\overline{q}_{1,k}(t,x,z)&=&\frac{\overline{q}_1(t,x,z)}{k}\\
&=&\int_{0}^{t}\int_{{\mathbb R}^d}\overline{p}(s,x,w)m(dw)\int_{{\mathbb R}^d}\frac{\overline{C}\frac{\overline{F}(w,y)}{k}}{|w-y|^{d+\alpha}}\overline{p}(t-s,y,z)\,m(dy)ds.
\end{eqnarray*}
We have the following theorem.
\begin{thm}
There exists a small $t_{1}$ such that when $0 <t \le t_{1},$
\begin{equation}
\overline{q}_{1,k}(t,x,z) \le \frac{1}{2}{p}(t,x,z)
,\,\,\forall (x,z)\in {\mathbb R}^d \times {\mathbb R}^d.
\end{equation}
\end{thm}
\noindent{\bf Proof.} We write $\overline{q}_{1,k}(t,x,z)$ as a sum of two terms:
\begin{eqnarray*}
\overline{q}_{1,k}(t,x,z)&=&\int_{0}^{\frac{t}{2}}\int_{{\mathbb R}^d}\overline{p}(s,x,w)m(dw)\int_{{\mathbb R}^d}\frac{\overline{C}\frac{\overline{F}(w,y)}{k}}{|w-y|^{d+\alpha}}\overline{p}(t-s,y,z)\,m(dy)ds\\
&&+\int_{\frac{t}{2}}^{t}\int_{{\mathbb R}^d}\overline{p}(s,x,w)m(dw)\int_{{\mathbb R}^d}\frac{\overline{C}\frac{\overline{F}(w,y)}{k}}{|w-y|^{d+\alpha}}\overline{p}(t-s,y,z)\,m(dy)ds.
\end{eqnarray*}
First we look at the first term. There are two cases.
Case 1. When $|x-z| \le t^{\frac{1}{\alpha}}$,
\begin{eqnarray*}
&&\int_{0}^{\frac{t}{2}}\int_{{\mathbb R}^d}\overline{p}(s,x,w)m(dw)\int_{{\mathbb R}^d}\frac{\overline{C}\frac{\overline{F}(w,y)}{k}}{|w-y|^{d+\alpha}}\overline{p}(t-s,y,z)\,m(dy)ds\\
&\le&\int_{0}^{\frac{t}{2}}\int_{{\mathbb R}^d}\overline{p}(s,x,w)m(dw)\int_{{\mathbb R}^d}\frac{\overline{C}\frac{\overline{F}(w,y)}{k}}{|w-y|^{d+\alpha}}2M_{2}(t-s)^{-\frac{d}{\alpha}}\,m(dy)ds\\
&\le& 2M_{2}{\overline{M}}^2\overline{C}\frac{1}{k}\int_{0}^{\frac{t}{2}}\int_{{\mathbb R}^d}\overline{p}(s,x,w)(t-s)^{-\frac{d}{\alpha}}\,\mu(dw)ds\\
&\le& C_{t}M_{2}{\overline{M}}^2\overline{C}\frac{1}{k}2^{1+{\frac{d}{\alpha}}}t^{-\frac{d}{\alpha}}.\\
\end{eqnarray*}
Case 2. When $|x-z| \ge t^{\frac{1}{\alpha}}$. Let
$B_1=\{y\in{\mathbb R}^d|\,|y-z| \ge \frac{1}{10}|x-z|\},
B_2=\{w\in{\mathbb R}^d|\,|w-x| \ge 2^{-\frac{1}{2}}|x-z|\}$ and
$B_3=\{(w,y)\in {\mathbb R}^d \times {\mathbb R}^d|\,|y-z| <
\frac{1}{10}|x-z|,\,|w-x| < 2^{-\frac{1}{2}}|x-z|\} $. On $B_3$,
we have $|w-y| \ge (1-\frac{1}{10}- 2^{-\frac{1}{2}} )|x-z|$.
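The lower bound for $|w-y|$ on $B_3$ is just the triangle inequality: $|w-y|\ge|x-z|-|w-x|-|y-z|>(1-\frac{1}{10}-2^{-\frac{1}{2}})|x-z|$. A randomized illustration (the dimension and sampling ranges are arbitrary):

```python
import math
import random

random.seed(1)

def norm(u, v):
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(u, v)))

def rand_dir(d):
    v = [random.gauss(0.0, 1.0) for _ in range(d)]
    n = math.sqrt(sum(c * c for c in v))
    return [c / n for c in v]

d = 3
c = 1 - 1 / 10 - 2 ** -0.5  # the constant appearing on B_3
for _ in range(10000):
    x = [random.uniform(-5, 5) for _ in range(d)]
    z = [random.uniform(-5, 5) for _ in range(d)]
    r = norm(x, z)  # |x - z|
    # sample (w, y) in B_3: |w - x| < r / sqrt(2) and |y - z| < r / 10
    w = [a + random.uniform(0, r / 2 ** 0.5) * u for a, u in zip(x, rand_dir(d))]
    yy = [a + random.uniform(0, r / 10) * u for a, u in zip(z, rand_dir(d))]
    assert norm(w, yy) >= c * r - 1e-9  # slack absorbs rounding only
```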
\begin{eqnarray*}
&&\int_{0}^{\frac{t}{2}}\int_{{\mathbb R}^d}\overline{p}(s,x,w)m(dw)\int_{{\mathbb R}^d}\frac{\overline{C}\frac{\overline{F}(w,y)}{k}}{|w-y|^{d+\alpha}}\overline{p}(t-s,y,z)\,m(dy)ds\\
&\leq&\int_{0}^{\frac{t}{2}}\int_{{\mathbb R}^d}\overline{p}(s,x,w)m(dw)\int_{{\mathbb R}^d}\frac{\overline{C}\frac{\overline{F}(w,y)}{k}}{|w-y|^{d+\alpha}}\overline{p}(t-s,y,z)1_{B_1}(y)\,m(dy)ds\\
&&+\int_{0}^{\frac{t}{2}}\int_{{\mathbb R}^d}\overline{p}(s,x,w)m(dw)\int_{{\mathbb R}^d}\frac{\overline{C}\frac{\overline{F}(w,y)}{k}}{|w-y|^{d+\alpha}}\overline{p}(t-s,y,z)\,1_{B_2}(w)\,m(dy)ds\\
&&+\int_{0}^{\frac{t}{2}}\int_{{\mathbb R}^d}\overline{p}(s,x,w)m(dw)\int_{{\mathbb R}^d}\frac{\overline{C}\frac{\overline{F}(w,y)}{k}}{|w-y|^{d+\alpha}}\overline{p}(t-s,y,z)\,1_{B_3}(w,y)\,m(dy)ds\\
&\leq&\int_{0}^{\frac{t}{2}}\int_{{\mathbb R}^d}\overline{p}(s,x,w)m(dw)\int_{{\mathbb R}^d}\frac{\overline{C}\frac{\overline{F}(w,y)}{k}}{|w-y|^{d+\alpha}}2M_2\overline{M}10^{d+\alpha}\frac{(t-s)}{|x-z|^{d+\alpha}}1_{B_1}(y)\,dyds\\
&&+\int_{0}^{\frac{t}{2}}\int_{{\mathbb R}^d}2M_2{\overline{M}}^22^{\frac{1}{2}(d+\alpha)}\frac{s}{{|x-z|}^{d+\alpha}}dw\int_{{\mathbb R}^d}\frac{\overline{C}\frac{\overline{F}(w,y)}{k}}{|w-y|^{d+\alpha}}\overline{p}(t-s,y,z)\,1_{B_2}(w)\,dyds\\
&&+\frac{L}{(1-\frac{1}{10}- 2^{-\frac{1}{2}})^{d+\alpha}}\frac{1}{k}\overline{C}\int_{0}^{\frac{t}{2}}\int_{{\mathbb R}^d}\overline{p}(s,x,w)m(dw)\int_{{\mathbb R}^d}\frac{1}{{|x-z|}^{d+\alpha}}\overline{p}(t-s,y,z)\,1_{B_3}(w,y)\,\\
&&\cdot m(dy)ds\\
&\leq&C_t2M_{2}{\overline{M}}^2\overline{C}\frac{1}{k}10^{d+\alpha}\frac{t}{|x-z|^{d+\alpha}}\\
&&+C_t2M_{2}{\overline{M}}^2\overline{C}\frac{1}{k}2^{\frac{1}{2}(d+\alpha)}\frac{\frac{t}{2}}{|x-z|^{d+\alpha}}\\
&&+\frac{L}{(1-\frac{1}{10}- 2^{-\frac{1}{2}})^{d+\alpha}}\frac{1}{k}\overline{C}D^{2}_{2}\frac{\frac{t}{2}}{|x-z|^{d+\alpha}}\\
&&( \textrm{ by } \int_{{\mathbb R}^d}\overline{p}(s,x,w)m(dw)\le D_{2} \textrm{ and }\int_{{\mathbb R}^d}\overline{p}(t-s,y,z)m(dy)\le D_{2} \textrm{ } )\\
&\leq&C_tM_{2}{\overline{M}}^2\overline{C}\frac{1}{k}(2\cdot 10^{d+\alpha}+2^{\frac{1}{2}(d+\alpha)})\frac{t}{|x-z|^{d+\alpha}}+\frac{1}{8}p(t,x,z).\\
\end{eqnarray*}
Since $C_t \downarrow 0$ as $t \downarrow 0$, in both Case 1 and Case 2 we can find a small $t_{11}$ such that when $0 < t \leq t_{11}$,
$$\int_{0}^{\frac{t}{2}}\int_{{\mathbb R}^d}\overline{p}(s,x,w)m(dw)\int_{{\mathbb R}^d}\frac{\overline{C}\frac{\overline{F}(w,y)}{k}}{|w-y|^{d+\alpha}}\overline{p}(t-s,y,z)\,m(dy)ds \leq \frac{1}{4}p(t,x,z).$$
For the second term of $\overline{q}_{1,k}(t,x,z)$:
$$\int_{\frac{t}{2}}^{t}\int_{{\mathbb R}^d}\overline{p}(s,x,w)m(dw)\int_{{\mathbb R}^d}\frac{\overline{C}\frac{\overline{F}(w,y)}{k}}{|w-y|^{d+\alpha}}\overline{p}(t-s,y,z)\,m(dy)ds.$$
Letting $t-s=\tilde{s}$, the second term becomes
$$\int_{0}^{\frac{t}{2}}\int_{{\mathbb R}^d}\overline{p}(t-\tilde{s},x,w)m(dw)\int_{{\mathbb R}^d}\frac{\overline{C}\frac{\overline{F}(w,y)}{k}}{|w-y|^{d+\alpha}}\overline{p}(\tilde{s},y,z)\,m(dy)d\tilde{s}.$$
There are two cases.
Case a. When $|x-z| \le t^{\frac{1}{\alpha}}$,
\begin{eqnarray*}
&&\int_{0}^{\frac{t}{2}}\int_{{\mathbb R}^d}\overline{p}(t-\tilde{s},x,w)m(dw)\int_{{\mathbb R}^d}\frac{\overline{C}\frac{\overline{F}(w,y)}{k}}{|w-y|^{d+\alpha}}\overline{p}(\tilde{s},y,z)\,m(dy)d\tilde{s}\\
&\le&\int_{0}^{\frac{t}{2}}\int_{{\mathbb R}^d}2M_{2}(t-\tilde{s})^{-\frac{d}{\alpha}}m(dw)\int_{{\mathbb R}^d}\frac{\overline{C}\frac{\overline{F}(w,y)}{k}}{|w-y|^{d+\alpha}}\overline{p}(\tilde{s},y,z)\,m(dy)d\tilde{s}\\
&\le&2M_{2}{\overline{M}}^2\overline{C}\frac{1}{k}\int_{0}^{\frac{t}{2}}\int_{{\mathbb R}^d}\overline{p}(\tilde{s},y,z)(t-\tilde{s})^{-\frac{d}{\alpha}}\,\mu(dy)d\tilde{s}\\
&\le& C_{t}2M_{2}{\overline{M}}^2\overline{C}\frac{1}{k}2^{\frac{d}{\alpha}}t^{-\frac{d}{\alpha}}\\
&& (\textrm{ by symmetry of } \overline{p}(\tilde{s},y,z) ).
\end{eqnarray*}
Case b. When $|x-z| \ge t^{\frac{1}{\alpha}}$. Let
$\tilde{B_1}=\{y\in{\mathbb R}^d|\,|y-z| \ge \frac{1}{10}|x-z|\},
\tilde{B_2}=\{w\in{\mathbb R}^d|\,|w-x| \ge 2^{-\frac{1}{2}}|x-z|\}$ and
$\tilde{B_3}=\{(w,y)\in {\mathbb R}^d \times {\mathbb R}^d|\,|y-z| <
\frac{1}{10}|x-z|,\,|w-x| < 2^{-\frac{1}{2}}|x-z|\} $. On
$\tilde{B_3}$, we have $|w-y| \ge (1-\frac{1}{10}-
2^{-\frac{1}{2}} )|x-z|$.
\begin{eqnarray*}
&&\int_{0}^{\frac{t}{2}}\int_{{\mathbb R}^d}\overline{p}(t-\tilde{s},x,w)m(dw)\int_{{\mathbb R}^d}\frac{\overline{C}\frac{\overline{F}(w,y)}{k}}{|w-y|^{d+\alpha}}\overline{p}(\tilde{s},y,z)\,m(dy)d\tilde{s}\\
&\leq&\int_{0}^{\frac{t}{2}}\int_{{\mathbb R}^d}\overline{p}(t-\tilde{s},x,w)m(dw)\int_{{\mathbb R}^d}\frac{\overline{C}\frac{\overline{F}(w,y)}{k}}{|w-y|^{d+\alpha}}\overline{p}(\tilde{s},y,z)1_{\tilde{B_1}}(y)\,m(dy)d\tilde{s}\\
&&+\int_{0}^{\frac{t}{2}}\int_{{\mathbb R}^d}\overline{p}(t-\tilde{s},x,w)m(dw)\int_{{\mathbb R}^d}\frac{\overline{C}\frac{\overline{F}(w,y)}{k}}{|w-y|^{d+\alpha}}\overline{p}(\tilde{s},y,z)\,1_{\tilde{B_2}}(w)\,m(dy)d\tilde{s}\\
&&+\int_{0}^{\frac{t}{2}}\int_{{\mathbb R}^d}\overline{p}(t-\tilde{s},x,w)m(dw)\int_{{\mathbb R}^d}\frac{\overline{C}\frac{\overline{F}(w,y)}{k}}{|w-y|^{d+\alpha}}\overline{p}(\tilde{s},y,z)\,1_{\tilde{B_3}}(w,y)\,m(dy)d\tilde{s}\\
&\leq&\int_{0}^{\frac{t}{2}}\int_{{\mathbb R}^d}\overline{p}(t-\tilde{s},x,w)m(dw)\int_{{\mathbb R}^d}\frac{\overline{C}\frac{\overline{F}(w,y)}{k}}{|w-y|^{d+\alpha}}2M_2\overline{M}10^{d+\alpha}\frac{\tilde{s}}{|x-z|^{d+\alpha}}1_{\tilde{B_1}}(y)\,dyd\tilde{s}\\
&&+\int_{0}^{\frac{t}{2}}\int_{{\mathbb R}^d}2M_2{\overline{M}}^22^{\frac{1}{2}(d+\alpha)}\frac{(t-\tilde{s})}{{|x-z|}^{d+\alpha}}dw\int_{{\mathbb R}^d}\frac{\overline{C}\frac{\overline{F}(w,y)}{k}}{|w-y|^{d+\alpha}}\overline{p}(\tilde{s},y,z)\,1_{\tilde{B_2}}(w)\,dyd\tilde{s}\\
&&+\frac{L}{(1-\frac{1}{10}- 2^{-\frac{1}{2}})^{d+\alpha}}\frac{1}{k}\overline{C}\int_{0}^{\frac{t}{2}}\int_{{\mathbb R}^d}\overline{p}(t-\tilde{s},x,w)m(dw)\int_{{\mathbb R}^d}\frac{1}{{|x-z|}^{d+\alpha}}\overline{p}(\tilde{s},y,z)\,1_{\tilde{B_3}}(w,y)\,\\
&&\cdot m(dy)d\tilde{s}\\
\end{eqnarray*}
\begin{eqnarray*}
&\leq&C_t2M_{2}{\overline{M}}^2\overline{C}\frac{1}{k}10^{d+\alpha}\frac{t}{|x-z|^{d+\alpha}}\\
&&+C_t2M_{2}{\overline{M}}^2\overline{C}\frac{1}{k}2^{\frac{1}{2}(d+\alpha)}\frac{t}{|x-z|^{d+\alpha}}\\
&&+\frac{L}{(1-\frac{1}{10}- 2^{-\frac{1}{2}})^{d+\alpha}}\frac{1}{k}\overline{C}D^2_{2}\frac{\frac{t}{2}}{|x-z|^{d+\alpha}}\\
&&( \textrm{ by } \int_{{\mathbb R}^d}\overline{p}(t-\tilde{s},x,w)m(dw)\le D_{2} \textrm{ and }\int_{{\mathbb R}^d}\overline{p}(\tilde{s},y,z)m(dy)\le D_{2} \textrm{ } )\\
&\leq&C_t2M_{2}{\overline{M}}^2\overline{C}\frac{1}{k}(10^{d+\alpha}+2^{\frac{1}{2}(d+\alpha)})\frac{t}{|x-z|^{d+\alpha}}+\frac{1}{8}p(t,x,z).
\end{eqnarray*}
Since $C_t \downarrow 0$ as $t \downarrow 0$, in both Case a
and Case b we can find a small $t_{12}$ such that when $0 < t
\leq t_{12}$,
$$\int_{0}^{\frac{t}{2}}\int_{{\mathbb R}^d}\overline{p}(t-\tilde{s},x,w)m(dw)\int_{{\mathbb R}^d}\frac{\overline{C}\frac{\overline{F}(w,y)}{k}}{|w-y|^{d+\alpha}}\overline{p}(\tilde{s},y,z)\,m(dy)d\tilde{s}\leq \frac{1}{4}p(t,x,z),$$
i.e.
$$\int_{\frac{t}{2}}^{t}\int_{{\mathbb R}^d}\overline{p}(s,x,w)m(dw)\int_{{\mathbb R}^d}\frac{\overline{C}\frac{\overline{F}(w,y)}{k}}{|w-y|^{d+\alpha}}\overline{p}(t-s,y,z)\,m(dy)ds \leq \frac{1}{4}p(t,x,z).$$
Let $t_{1}=\textrm{min}(t_{11},t_{12})$. Then when $0 < t \leq
t_{1},$
$$ \overline{q}_{1,k}(t,x,z) \le \frac{1}{2}p(t,x,z),\,\, \forall (x,z)\in {\mathbb R}^d \times {\mathbb R}^d. $$
{\hfill $\Box$ \bigskip}
Define $q_{1,k}(t,x,z)=\frac{q_{1}(t,x,z)}{k}$. It is clear that
$|q_{1,k}(t,x,z)|\leq \overline{q}_{1,k}(t,x,z)$. Theorem 3.6
implies that when $0 < t \leq t_{1},$
$$ p(t,x,z)- q_{1,k}(t,x,z) \ge p(t,x,z)- \overline{q}_{1,k}(t,x,z) \ge \frac{1}{2}p(t,x,z).$$
We know
$\int_{{\mathbb R}^d}q_{1,k}(t,x,z)g(z)\,m(dz)=\mathbb{E}_x[\frac{A_{t}}{k}g(X_t)]$
for any bounded measurable $g$. Since $1-\frac{A_{t}}{k} \leq
e^{-\frac{A_{t}}{k}}$, we have
$$
\frac{1}{|B_{r}|}\mathbb{E}_x[(1-\frac{A_{t}}{k})1_{B_{r}}(X_{t})] \leq \frac{1}{|B_{r}|}\mathbb{E}_x[e^{-\frac{A_{t}}{k}}1_{B_{r}}(X_{t})].
$$
Thus
\begin{eqnarray*}
\frac{1}{2}\frac{1}{|B_{r}|}\mathbb{E}_x[1_{B_{r}}(X_{t})] &\leq& \frac{1}{|B_{r}|}\mathbb{E}_x[e^{-\frac{A_{t}}{k}}1_{B_{r}}(X_{t})]\\
&\leq&(\frac{1}{|B_{r}|}\mathbb{E}_x[e^{-A_{t}}1_{B_{r}}(X_{t})])^{\frac{1}{k}}(\frac{1}{|B_{r}|}\mathbb{E}_x[1_{B_{r}}(X_{t})])^{1-\frac{1}{k}}\\
&&(\textrm{ by H\"older's inequality }).\\
\end{eqnarray*}
Therefore
\begin{eqnarray*}
\frac{
\frac{1}{2}\frac{1}{|B_{r}|}\mathbb{E}_x[1_{B_{r}}(X_{t})]}{(\frac{1}{|B_{r}|}\mathbb{E}_x[1_{B_{r}}(X_{t})])^{1-\frac{1}{k}}}
&\leq& (\frac{1}{|B_{r}|}\mathbb{E}_x[e^{-A_{t}}1_{B_{r}}(X_{t})])^{\frac{1}{k}},\\
\textrm{ i.e. }\\
\frac{1}{2}(\frac{1}{|B_{r}|}\mathbb{E}_x[1_{B_{r}}(X_{t})])^{\frac{1}{k}} &\leq& (\frac{1}{|B_{r}|}\mathbb{E}_x[e^{-A_{t}}1_{B_{r}}(X_{t})])^{\frac{1}{k}},\\
\textrm{ i.e. }\\
\frac{1}{2^k}\frac{1}{|B_{r}|}\mathbb{E}_x[1_{B_{r}}(X_{t})] &\leq&
\frac{1}{|B_{r}|}\mathbb{E}_x[e^{-A_{t}}1_{B_{r}}(X_{t})].
\end{eqnarray*}
Letting $r\downarrow 0$, we have
$$ \frac{1}{2^k}p(t,x,z) \leq q(t,x,z). $$
Therefore when $0 < t \leq t_{0},$
$$ \frac{1}{2^k}M_{1}t^{-\frac{d}{\alpha}}\left(1 \wedge \frac{t^{\frac{1}{\alpha}}}{|x-z|}\right)^{d+\alpha} \leq q(t,x,z). $$
Applying (ii) of Proposition 3.5, we have
$$q(t,x,y)\ge {C}_{3}e^{-{C}_{4}t}t^{-\frac{d}{\alpha}}\left(1 \wedge \frac{t^{\frac{1}{\alpha}}}{|x-y|}\right)^{d+\alpha},\,\,\forall (t,x,y)\in (0,\infty) \times {\mathbb R}^d \times {\mathbb R}^d,$$
where $C_{3}$ and $C_{4}$ are positive constants.
Next we establish the upper bound.
It is clear that for the positive constants $L$ and $K<1$ given
in Remark 3.2 and Theorem 3.4, and
$\overline{M}$,$\,\overline{C}$, which are the upper bounds for
$M(y)$ and $|2C(x,y)|$ respectively, there exists a constant $
\tilde{C} \ge 1$ such that
$$ L^{n-1}{\overline{M}}^2\overline{C} \le
\tilde{C}\frac{1}{2}n!K^{n}, \,\, \forall n \ge 1.$$
Suppose that $g \ge 0$ is a measurable function with $g \le
C_{g}\min(\frac{1}{D_{2}},1)$, where $C_{g} \ge 1$ is a constant.
Then we have the following.
\begin{prop}
There exists $t_{2}> 0$ such that when $0< t \le t_{2}$,
\begin{equation}
\int_{{\mathbb R}^d}\overline{q}_n(t,x,z)g(z)\,m(dz)
\le\tilde{C}C_{g}C_{t}n!K^{n},\,\, \forall n \ge 1.
\end{equation}
\end{prop}
\noindent{\bf Proof.} When $n=1$,
\begin{eqnarray*}
&&\int_{{\mathbb R}^d}\overline{q}_1(t,x,z)g(z)\,m(dz)\\
&=&\int_{{\mathbb R}^d}\int_{0}^{t}\int_{{\mathbb R}^d}\overline{p}(s,x,w)m(dw)\int_{{\mathbb R}^d}\frac{\overline{C}\overline{F}(w,y)}{|w-y|^{d+\alpha}}\overline{p}(t-s,y,z)\,m(dy)ds g(z)\,m(dz)\\
\end{eqnarray*}
\begin{eqnarray*}
&\le&\int_{0}^{t}\int_{{\mathbb R}^d}\overline{p}(s,x,w)m(dw)\int_{{\mathbb R}^d}\frac{\overline{C}\overline{F}(w,y)}{|w-y|^{d+\alpha}}C_{g}\overline{M}\,dy\,ds\\
&&(\textrm{ by }\int_{{\mathbb R}^d}\overline{p}(t-s,y,z)g(z)\,m(dz) \le C_{g})\\
&\le& {\overline{M}}^2\overline{C}C_{g}C_{t}\le \tilde{C}\frac{1}{2}KC_{g}C_{t}\le \tilde{C}C_{g}C_{t}K.\\
\end{eqnarray*}
Thus (3.5) holds for $n=1$.
Suppose it holds for $n \le m-1$, we show that it holds for
$n=m$.
\begin{eqnarray*}
&&\int_{{\mathbb R}^d}\overline{q}_m(t,x,z)g(z)\,m(dz)\\
&=&\int_{{\mathbb R}^d}\sum_{i=1}^mC^{i}_m\int_{0}^{t}\int_{{\mathbb R}^d}\overline{p}(s,x,w)m(dw)\int_{{\mathbb R}^d}\frac{\overline{C}\overline{F}^{i}(w,y)}{|w-y|^{d+\alpha}}\overline{q}_{m-i}(t-s,y,z)\,m(dy)ds g(z)\,m(dz)\\
&=&\sum_{i=1}^mC^{i}_m\int_{0}^{t}\int_{{\mathbb R}^d}\overline{p}(s,x,w)m(dw)\int_{{\mathbb R}^d}\frac{\overline{C}\overline{F}^{i}(w,y)}{|w-y|^{d+\alpha}}\int_{{\mathbb R}^d}\overline{q}_{m-i}(t-s,y,z)g(z)\,m(dz)\,m(dy)ds\\
&\leq&\sum_{i=1}^{m-1}C^{i}_m\int_{0}^{t}\int_{{\mathbb R}^d}\overline{p}(s,x,w)m(dw)\int_{{\mathbb R}^d}\frac{\overline{C}\overline{F}^{i}(w,y)}{|w-y|^{d+\alpha}}\,m(dy)ds\tilde{C}C_{g}C_{t}(m-i)!K^{m-i}\\
&&+\int_{0}^{t}\int_{{\mathbb R}^d}\overline{p}(s,x,w)m(dw)\int_{{\mathbb R}^d}\frac{\overline{C}\overline{F}^{m}(w,y)}{|w-y|^{d+\alpha}}\,m(dy)dsC_{g}\\
&=&\sum_{i=1}^{m-1}C^{i}_mC_{t}L^{i-1} {\overline{M}}^2\overline{C}\tilde{C}C_{g}C_{t}(m-i)!K^{m-i}+L^{m-1}{\overline{M}}^2\overline{C}C_{g}C_{t}\\
&=&m!\sum_{i=1}^{m-1}(\frac{L^{i-1}K^{m-i}}{i!})C_{t}{\overline{M}}^2\overline{C}\tilde{C}C_{g}C_{t}+L^{m-1}{\overline{M}}^2\overline{C}C_{g}C_{t}.\\
\end{eqnarray*}
Since $C_{t} \downarrow 0 $ as $ t \downarrow 0$, there exists $t_{2}
> 0 $ such that when $0< t \le t_{2}$,
$$\sum_{i=1}^{n-1}(\frac{L^{i-1}K^{n-i}}{i!})C_{t}{\overline{M}}^2\overline{C}\le \frac{1}{2}K^{n},\,\,\forall n \ge 2,$$
and by the choice of $\tilde{C}$,
$$L^{n-1}{\overline{M}}^2\overline{C} \le
\tilde{C}\frac{1}{2}n!K^{n},\,\,\forall n \ge 1.$$
Thus
\begin{eqnarray*}
\int_{{\mathbb R}^d}\overline{q}_m(t,x,z)g(z)\,m(dz)
&\le&\frac{1}{2}m!K^{m}\tilde{C}C_{g}C_{t}+\frac{1}{2}m!K^{m}\tilde{C}C_{g}C_{t}\\
&=&\tilde{C}C_{g}C_{t}m!K^{m},
\end{eqnarray*}
i.e. the statement holds for $n=m$. {\hfill $\Box$ \bigskip}
For the $L$, $K$, $\overline{C}$ and $D_2$ given above, it is
clear that there exists $\tilde{C_{2}} \ge 1$ such that
$$\frac{L^n}{(1-\frac{1}{10}-2^{-\frac{1}{2}})^{d+\alpha}}\frac{\overline{C}D^2_{2}}{2} \le
\frac{1}{8}\tilde{C_{2}}n!K^{n},\,\,\forall n \ge 0. $$
We claim that
\begin{thm}
There exist $ t_{3} >0 \textrm{ and }\tilde{C_{1}} \ge 1$ such
that when $0 < t \le t_{3}$,
\begin{equation}
\overline{q}_{n}(t,x,z) \le
\tilde{C_{1}}n!K^{n}t^{-\frac{d}{\alpha}}\left(1 \wedge
\frac{t^{\frac{1}{\alpha}}}{|x-z|}\right)^{d+\alpha},\,\,\forall n
\ge 0.
\end{equation}
\end{thm}
\noindent{\bf Proof.} Since $\overline{q}_{0}(t,x,z)=\overline{p}(t,x,z)$, there
exist $ t_{13}>0 \textrm{ and } \tilde{C_{1}} \ge \tilde{C_{2}}$
such that when $0 < t \le t_{13}$
$$\overline{q}_{0}(t,x,z) \le \tilde{C_{1}}t^{-\frac{d}{\alpha}}\left(1 \wedge \frac{t^{\frac{1}{\alpha}}}{|x-z|}\right)^{d+\alpha},$$
i.e. the statement holds for $n=0$.
Suppose it is true for $n \le m-1$. We show that it holds for
$n=m$.
We split $\overline{q}_{m}(t,x,z)$ into two terms:
\begin{eqnarray*}
\overline{q}_m(t,x,z)&=&\sum_{i=1}^mC^{i}_m\int_{0}^{\frac{t}{2}}\int_{{\mathbb R}^d}\overline{p}(s,x,w)m(dw)\int_{{\mathbb R}^d}\frac{\overline{C}\overline{F}^{i}(w,y)}{|w-y|^{d+\alpha}}\overline{q}_{m-i}(t-s,y,z)\,m(dy)ds\\
&&+\sum_{i=1}^mC^{i}_m\int_{\frac{t}{2}}^{t}\int_{{\mathbb R}^d}\overline{p}(s,x,w)m(dw)\int_{{\mathbb R}^d}\frac{\overline{C}\overline{F}^{i}(w,y)}{|w-y|^{d+\alpha}}\overline{q}_{m-i}(t-s,y,z)\,m(dy)ds.\\
\end{eqnarray*}
First we look at the first term. There are two cases:
Case 1. When $|x-z| \le t^{\frac{1}{\alpha}}$,
\begin{eqnarray*}
&&\sum_{i=1}^mC^{i}_m\int_{0}^{\frac{t}{2}}\int_{{\mathbb R}^d}\overline{p}(s,x,w)m(dw)\int_{{\mathbb R}^d}\frac{\overline{C}\overline{F}^{i}(w,y)}{|w-y|^{d+\alpha}}\overline{q}_{m-i}(t-s,y,z)\,m(dy)ds\\
&\le& \sum_{i=1}^mC^{i}_m\int_{0}^{\frac{t}{2}}\int_{{\mathbb R}^d}\overline{p}(s,x,w)m(dw)\int_{{\mathbb R}^d}\frac{\overline{C}\overline{F}(w,y)}{|w-y|^{d+\alpha}}\,dydsL^{i-1}\overline{M}\tilde{C_{1}}(m-i)!K^{m-i}(\frac{t}{2})^{-\frac{d}{\alpha}}\\
&\le&
\sum_{i=1}^mC^{i}_mC_{t}{\overline{M}}^2\overline{C}L^{i-1}\tilde{C_{1}}(m-i)!K^{m-i}(\frac{t}{2})^{-\frac{d}{\alpha}}\\
&=&m!\sum_{i=1}^m(\frac{L^{i-1}K^{m-i}}{i!}){\overline{M}}^2\overline{C}(\frac{1}{2})^{-\frac{d}{\alpha}}C_{t}\tilde{C_{1}}t^{-\frac{d}{\alpha}}.
\end{eqnarray*}
There exists $ t_{23} >0 \textrm{ with } t_{23} \le t_{13} $ such that when $0 < t \le
t_{23}$,
$$\sum_{i=1}^n(\frac{L^{i-1}K^{n-i}}{i!}){\overline{M}}^2\overline{C}(\frac{1}{2})^{-\frac{d}{\alpha}}C_{t}\le
\frac{1}{2}K^{n},\,\,\forall n \ge 1.$$
Thus in case 1, when $0 < t \le t_{23}$,
\begin{eqnarray*}
\sum_{i=1}^mC^{i}_m\int_{0}^{\frac{t}{2}}\int_{{\mathbb R}^d}\overline{p}(s,x,w)m(dw)\int_{{\mathbb R}^d}\frac{\overline{C}\overline{F}^{i}(w,y)}{|w-y|^{d+\alpha}}\overline{q}_{m-i}(t-s,y,z)\,m(dy)ds
\le \frac{1}{2}\tilde{C_{1}}m!K^{m}t^{-\frac{d}{\alpha}}.
\end{eqnarray*}
Case 2. When $|x-z|\ge t^{\frac{1}{\alpha}}$. Let
$B_1=\{y\in{\mathbb R}^d|\,|y-z| \ge \frac{1}{10}|x-z|\},
B_2=\{w\in{\mathbb R}^d|\,|w-x| \ge 2^{-\frac{1}{2}}|x-z|\}$ and
$B_3=\{(w,y)\in {\mathbb R}^d \times {\mathbb R}^d|\,|y-z| < \frac{1}{10}|x-z|,\,
|w-x| < 2^{-\frac{1}{2}}|x-z|\} $. On $B_3$, we have $|w-y| \ge
(1-\frac{1}{10}- 2^{-\frac{1}{2}}
)|x-z|$.
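For completeness, the bound on $|w-y|$ over $B_3$ is the triangle inequality written out: since $|x-z| \le |x-w|+|w-y|+|y-z|$,
```latex
\begin{eqnarray*}
|w-y| &\ge& |x-z|-|w-x|-|y-z|\\
&\ge& \Big(1-\frac{1}{10}-2^{-\frac{1}{2}}\Big)|x-z| \qquad \textrm{on } B_3.
\end{eqnarray*}
```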
\begin{eqnarray*}
&&\sum_{i=1}^mC^{i}_m\int_{0}^{\frac{t}{2}}\int_{{\mathbb R}^d}\overline{p}(s,x,w)m(dw)\int_{{\mathbb R}^d}\frac{\overline{C}\overline{F}^{i}(w,y)}{|w-y|^{d+\alpha}}\overline{q}_{m-i}(t-s,y,z)\,m(dy)ds\\
&\le&\sum_{i=1}^mC^{i}_m\int_{0}^{\frac{t}{2}}\int_{{\mathbb R}^d}\overline{p}(s,x,w)m(dw)\int_{{\mathbb R}^d}\frac{\overline{C}\overline{F}^{i}(w,y)}{|w-y|^{d+\alpha}}\overline{q}_{m-i}(t-s,y,z)1_{B_{1}}(y)\,m(dy)ds\\
&&+\sum_{i=1}^mC^{i}_m\int_{0}^{\frac{t}{2}}\int_{{\mathbb R}^d}\overline{p}(s,x,w)m(dw)\int_{{\mathbb R}^d}\frac{\overline{C}\overline{F}^{i}(w,y)}{|w-y|^{d+\alpha}}\overline{q}_{m-i}(t-s,y,z)1_{B_{2}}(w) \,m(dy)ds\\
&&+\sum_{i=1}^mC^{i}_m\int_{0}^{\frac{t}{2}}\int_{{\mathbb R}^d}\overline{p}(s,x,w)m(dw)\int_{{\mathbb R}^d}\frac{\overline{C}\overline{F}^{i}(w,y)}{|w-y|^{d+\alpha}}\overline{q}_{m-i}(t-s,y,z)1_{B_{3}}(w,y) \,m(dy)ds\\
&\le& \sum_{i=1}^mC^{i}_m\int_{0}^{\frac{t}{2}}\int_{{\mathbb R}^d}\overline{p}(s,x,w)m(dw)\int_{{\mathbb R}^d}\frac{\overline{C}\overline{F}(w,y)}{|w-y|^{d+\alpha}} \overline{M}L^{i-1}\tilde{C_{1}}(m-i)!K^{m-i}10^{d+\alpha}\frac{(t-s)}{|x-z|^{d+\alpha}}\,dyds\\
&&+\sum_{i=1}^mC^{i}_m\int_{0}^{\frac{t}{2}}\int_{{\mathbb R}^d}\,m(dw)\tilde{C_{1}}{\overline{M}}^22^{\frac{1}{2}(d+\alpha)}\frac{s}{{|x-z|}^{d+\alpha}}\int_{{\mathbb R}^d}\frac{\overline{C}\overline{F}(w,y)}{|w-y|^{d+\alpha}}\overline{q}_{m-i}(t-s,y,z)\,dydsL^{i-1}\\
&&+\sum_{i=1}^{m-1}C^{i}_m\frac{L^i}{(1-\frac{1}{10}- 2^{-\frac{1}{2}})^{d+\alpha}}\overline{C}\int_{0}^{\frac{t}{2}}\int_{{\mathbb R}^d}\overline{p}(s,x,w)m(dw)\int_{{\mathbb R}^d}\frac{1}{|x-z|^{d+\alpha}}\overline{q}_{m-i}(t-s,y,z)\,\\
&&\cdot m(dy)ds+\frac{L^m}{(1-\frac{1}{10}-2^{-\frac{1}{2}})^{d+\alpha}}\overline{C}D^2_{2}\frac{\frac{t}{2}}{|x-z|^{d+\alpha}}\\
&\le& \sum_{i=1}^mC^{i}_mC_{t}
{\overline{M}}^2\overline{C}L^{i-1}\tilde{C_{1}}(m-i)!K^{m-i}10^{d+\alpha}\frac{t}{|x-z|^{d+\alpha}}\\
&&+\sum_{i=1}^mC^{i}_m\tilde{C_{1}}{\overline{M}}^2\overline{C}2^{\frac{1}{2}(d+\alpha)}\frac{t}{{|x-z|}^{d+\alpha}}C_{t}(m-i)!K^{m-i}L^{i-1}\\
&&( \textrm{ by symmetry of } \overline{q}_{m-i}(t-s,y,z)\textrm{ and Theorem 3.4 } )\\
&&+\sum_{i=1}^{m-1}C^{i}_m\frac{L^i}{(1-\frac{1}{10}- 2^{-\frac{1}{2}})^{d+\alpha}}\overline{C}D_{2}\frac{t}{|x-z|^{d+\alpha}}\tilde{C}C_{g}C_{t}(m-i)!K^{m-i}\\
&&( \textrm{ by symmetry of } \overline{q}_{m-i}(t-s,y,z), \textrm{ Proposition
3.7 and } \int_{{\mathbb R}^d}\overline{p}(s,x,w)\,m(dw)\le D_{2} )\\
&&+\frac{L^m}{(1-\frac{1}{10}-
2^{-\frac{1}{2}})^{d+\alpha}}\overline{C}D^2_{2}\frac{\frac{t}{2}}{|x-z|^{d+\alpha}}.\\
\end{eqnarray*}
It is easy to see that there exists $t_{33}>0$ with $t_{33} \leq
\min(t_0,t_2)$ such that when $ 0< t \le t_{33}$, the sum of the first three
terms in the above inequality is $\le
\frac{1}{4}\tilde{C_{1}}m!K^m\frac{t}{|x-z|^{d+\alpha}}$ for
all $m>0$. The fourth term in the above
inequality is also $\le
\frac{1}{8}\tilde{C_{1}}m!K^m\frac{t}{|x-z|^{d+\alpha}}$
for all $m>0$, by the choice of $\tilde{C_2}$ and
$\tilde{C_{1}} \ge \tilde{C_{2}}$. Thus in case 2, when $ 0< t \le
t_{33},$
\begin{eqnarray*}
&&\sum_{i=1}^mC^{i}_m\int_{0}^{\frac{t}{2}}\int_{{\mathbb R}^d}\overline{p}(s,x,w)m(dw)\int_{{\mathbb R}^d}\frac{\overline{C}\overline{F}^{i}(w,y)}{|w-y|^{d+\alpha}}\overline{q}_{m-i}(t-s,y,z)\,m(dy)ds\\
&\le&
\frac{1}{2}\tilde{C_{1}}m!K^{m}\frac{t}{|x-z|^{d+\alpha}}.
\end{eqnarray*}
Combining case 1 and case 2, when $ 0< t \le \min(t_{23},t_{33}),$
\begin{eqnarray*}
&&\sum_{i=1}^mC^{i}_m\int_{0}^{\frac{t}{2}}\int_{{\mathbb R}^d}\overline{p}(s,x,w)m(dw)\int_{{\mathbb R}^d}\frac{\overline{C}\overline{F}^{i}(w,y)}{|w-y|^{d+\alpha}}\overline{q}_{m-i}(t-s,y,z)\,m(dy)ds\\
&\le& \frac{1}{2}\tilde{C_{1}}m!K^{m}t^{-\frac{d}{\alpha}}\left(1 \wedge
\frac{t^{\frac{1}{\alpha}}}{|x-z|}\right)^{d+\alpha}.
\end{eqnarray*}
For the second term in the expression of
$\overline{q}_{m}(t,x,z)$:
$$\sum_{i=1}^mC^{i}_m\int_{\frac{t}{2}}^{t}\int_{{\mathbb R}^d}\overline{p}(s,x,w)m(dw)\int_{{\mathbb R}^d}\frac{\overline{C}\overline{F}^{i}(w,y)}{|w-y|^{d+\alpha}}\overline{q}_{m-i}(t-s,y,z)\,m(dy)ds.$$
Letting $t-s=\tilde{s}$, the second term becomes
$$\sum_{i=1}^mC^{i}_m\int_{0}^{\frac{t}{2}}\int_{{\mathbb R}^d}\overline{p}(t-\tilde{s},x,w)m(dw)\int_{{\mathbb R}^d}\frac{\overline{C}\overline{F}^{i}(w,y)}{|w-y|^{d+\alpha}}\overline{q}_{m-i}(\tilde{s},y,z)\,m(dy)d\tilde{s}.$$
There are two cases.
Case a. When $|x-z| \le t^{\frac{1}{\alpha}}$,
\begin{eqnarray*}
&&\sum_{i=1}^mC^{i}_m\int_{0}^{\frac{t}{2}}\int_{{\mathbb R}^d}\overline{p}(t-\tilde{s},x,w)m(dw)\int_{{\mathbb R}^d}\frac{\overline{C}\overline{F}^{i}(w,y)}{|w-y|^{d+\alpha}}\overline{q}_{m-i}(\tilde{s},y,z)\,m(dy)d\tilde{s}\\
&\le& \sum_{i=1}^mC^{i}_m\int_{0}^{\frac{t}{2}}\int_{{\mathbb R}^d}\tilde{C_{1}}(\frac{t}{2})^{-\frac{d}{\alpha}}m(dw)\int_{{\mathbb R}^d}\frac{\overline{C}\overline{F}^{i}(w,y)}{|w-y|^{d+\alpha}}\overline{q}_{m-i}(\tilde{s},y,z)\,m(dy)d\tilde{s}\\
&\le&
\sum_{i=1}^mC^{i}_m\tilde{C_{1}}(\frac{t}{2})^{-\frac{d}{\alpha}}L^{i-1}{\overline{M}}^2\overline{C}\int_{0}^{\frac{t}{2}}\int_{{\mathbb R}^d}\overline{q}_{m-i}(\tilde{s},y,z)\,m(dy)d\tilde{s}\\
&\le&
\sum_{i=1}^mC^{i}_m\tilde{C_{1}}(\frac{t}{2})^{-\frac{d}{\alpha}}L^{i-1}{\overline{M}}^2\overline{C}C_{t}(m-i)!K^{m-i}\\
&& (\textrm{ by symmetry of } \overline{q}_{m-i}(\tilde{s},y,z)
\textrm{ and Theorem 3.4} )\\
&=&m!\sum_{i=1}^m(\frac{L^{i-1}K^{m-i}}{i!}){\overline{M}}^2\overline{C}(\frac{1}{2})^{-\frac{d}{\alpha}}C_{t}\tilde{C_{1}}t^{-\frac{d}{\alpha}}.
\end{eqnarray*}
There exists $ \tilde{t}_{23} >0 \textrm{ with } \tilde{t}_{23} \le \min(t_0,t_{13}) $ such that when $0 < t \le
\tilde{t}_{23}$,
$$\sum_{i=1}^n(\frac{L^{i-1}K^{n-i}}{i!}){\overline{M}}^2\overline{C}(\frac{1}{2})^{-\frac{d}{\alpha}}C_{t}\le
\frac{1}{2}K^{n},\,\,\forall n \ge 1.$$
Thus in case a, when $0 < t \le \tilde{t}_{23}$,
\begin{eqnarray*}
&&\sum_{i=1}^mC^{i}_m\int_{0}^{\frac{t}{2}}\int_{{\mathbb R}^d}\overline{p}(t-\tilde{s},x,w)m(dw)\int_{{\mathbb R}^d}\frac{\overline{C}\overline{F}^{i}(w,y)}{|w-y|^{d+\alpha}}\overline{q}_{m-i}(\tilde{s},y,z)\,m(dy)d\tilde{s}\\
&\le&
\frac{1}{2}\tilde{C_{1}}m!K^{m}t^{-\frac{d}{\alpha}},
\end{eqnarray*}
i.e.
\begin{eqnarray*}
&&\sum_{i=1}^mC^{i}_m\int_{\frac{t}{2}}^{t}\int_{{\mathbb R}^d}\overline{p}(s,x,w)m(dw)\int_{{\mathbb R}^d}\frac{\overline{C}\overline{F}^{i}(w,y)}{|w-y|^{d+\alpha}}\overline{q}_{m-i}(t-s,y,z)\,m(dy)ds\\
&\le&
\frac{1}{2}\tilde{C_{1}}m!K^{m}t^{-\frac{d}{\alpha}}.
\end{eqnarray*}
Case b. When $|x-z|\ge t^{\frac{1}{\alpha}}$. Let
$\tilde{B_1}=\{y\in{\mathbb R}^d|\,|y-z| \ge \frac{1}{10}|x-z|\},
\tilde{B_2}=\{w\in{\mathbb R}^d|\,|w-x| \ge 2^{-\frac{1}{2}}|x-z|\}$ and
$\tilde{B_3}=\{(w,y)\in {\mathbb R}^d \times {\mathbb R}^d|\,|y-z| <
\frac{1}{10}|x-z|,\, |w-x| < 2^{-\frac{1}{2}}|x-z|\} $. On
$\tilde{B_3}$, we have $|w-y| \ge (1-\frac{1}{10}-
2^{-\frac{1}{2}} )|x-z|$.
\begin{eqnarray*}
&&\sum_{i=1}^mC^{i}_m\int_{0}^{\frac{t}{2}}\int_{{\mathbb R}^d}\overline{p}(t-\tilde{s},x,w)m(dw)\int_{{\mathbb R}^d}\frac{\overline{C}\overline{F}^{i}(w,y)}{|w-y|^{d+\alpha}}\overline{q}_{m-i}(\tilde{s},y,z)\,m(dy)d\tilde{s}\\
&\le&\sum_{i=1}^mC^{i}_m\int_{0}^{\frac{t}{2}}\int_{{\mathbb R}^d}\overline{p}(t-\tilde{s},x,w)m(dw)\int_{{\mathbb R}^d}\frac{\overline{C}\overline{F}^{i}(w,y)}{|w-y|^{d+\alpha}}\overline{q}_{m-i}(\tilde{s},y,z)1_{\tilde{B_{1}}}(y)\,m(dy)d\tilde{s}\\
&&+\sum_{i=1}^mC^{i}_m\int_{0}^{\frac{t}{2}}\int_{{\mathbb R}^d}\overline{p}(t-\tilde{s},x,w)m(dw)\int_{{\mathbb R}^d}\frac{\overline{C}\overline{F}^{i}(w,y)}{|w-y|^{d+\alpha}}\overline{q}_{m-i}(\tilde{s},y,z)1_{\tilde{B_{2}}}(w) \,m(dy)d\tilde{s}\\
&&+\sum_{i=1}^mC^{i}_m\int_{0}^{\frac{t}{2}}\int_{{\mathbb R}^d}\overline{p}(t-\tilde{s},x,w)m(dw)\int_{{\mathbb R}^d}\frac{\overline{C}\overline{F}^{i}(w,y)}{|w-y|^{d+\alpha}}\overline{q}_{m-i}(\tilde{s},y,z)1_{\tilde{B_{3}}}(w,y) \,m(dy)d\tilde{s}\\
&\le& \sum_{i=1}^mC^{i}_m\int_{0}^{\frac{t}{2}}\int_{{\mathbb R}^d}\overline{p}(t-\tilde{s},x,w)m(dw)\int_{{\mathbb R}^d}\frac{\overline{C}\overline{F}(w,y)}{|w-y|^{d+\alpha}} \overline{M}L^{i-1}\tilde{C_{1}}(m-i)!K^{m-i}\frac{10^{d+\alpha}\tilde{s}}{|x-z|^{d+\alpha}}\,dyd\tilde{s}\\
&&+\sum_{i=1}^mC^{i}_m\int_{0}^{\frac{t}{2}}\int_{{\mathbb R}^d}\,m(dw)\tilde{C_{1}}\overline{M}2^{\frac{1}{2}(d+\alpha)}\frac{(t-\tilde{s})}{{|x-z|}^{d+\alpha}}\int_{{\mathbb R}^d}\frac{\overline{C}\overline{F}(w,y)}{|w-y|^{d+\alpha}}\overline{q}_{m-i}(\tilde{s},y,z)\,dyd\tilde{s}L^{i-1}\\
&&+\sum_{i=1}^{m-1}C^{i}_m\frac{L^i}{(1-\frac{1}{10}- 2^{-\frac{1}{2}})^{d+\alpha}}\overline{C}\int_{0}^{\frac{t}{2}}\int_{{\mathbb R}^d}\overline{p}(t-\tilde{s},x,w)m(dw)\int_{{\mathbb R}^d}\frac{1}{|x-z|^{d+\alpha}}\overline{q}_{m-i}(\tilde{s},y,z)\,\\
&&\cdot m(dy)d\tilde{s}+\frac{L^m}{(1-\frac{1}{10}-2^{-\frac{1}{2}})^{d+\alpha}}\overline{C}D^2_{2}\frac{\frac{t}{2}}{|x-z|^{d+\alpha}}\\
\end{eqnarray*}
\begin{eqnarray*}
&\le& \sum_{i=1}^mC^{i}_mC_{t}
{\overline{M}}^2\overline{C}L^{i-1}\tilde{C_{1}}(m-i)!K^{m-i}10^{d+\alpha}\frac{t}{|x-z|^{d+\alpha}}\\
&&+\sum_{i=1}^mC^{i}_m\tilde{C_{1}}{\overline{M}}^2\overline{C}2^{\frac{1}{2}(d+\alpha)}\frac{t}{{|x-z|}^{d+\alpha}}C_{t}(m-i)!K^{m-i}L^{i-1}\\
&&( \textrm{ by symmetry of } \overline{q}_{m-i}(\tilde{s},y,z)\textrm{ and Theorem 3.4 } )\\
&&+\sum_{i=1}^{m-1}C^{i}_m\frac{L^i}{(1-\frac{1}{10}- 2^{-\frac{1}{2}})^{d+\alpha}}\overline{C}D_{2}\frac{t}{|x-z|^{d+\alpha}}\tilde{C}C_{g}C_{t}(m-i)!K^{m-i}\\
&&( \textrm{ by symmetry of } \overline{q}_{m-i}(\tilde{s},y,z), \textrm{ Proposition
3.7 and } \int_{{\mathbb R}^d}\overline{p}(t-\tilde{s},x,w)\,m(dw)\le D_2)\\
&&+\frac{L^m}{(1-\frac{1}{10}-
2^{-\frac{1}{2}})^{d+\alpha}}\overline{C}D^2_{2}\frac{\frac{t}{2}}{|x-z|^{d+\alpha}}.\\
\end{eqnarray*}
It is easy to see that there exists $\tilde{t}_{33}>0$ with
$\tilde{t}_{33}\leq\min(t_0,t_2)$ such that when $ 0< t \le
\tilde{t}_{33}$, the sum of the first three terms in the above inequality is $\le
\frac{1}{4}\tilde{C_{1}}m!K^m\frac{t}{|x-z|^{d+\alpha}}$ for
all $m>0$. The fourth term in the above
inequality is also $\le
\frac{1}{8}\tilde{C_{1}}m!K^m\frac{t}{|x-z|^{d+\alpha}}$
for all $m>0$, by the choice of $\tilde{C_2}$ and $\tilde{C_{1}}
\ge \tilde{C_{2}}$. Thus in case b, when $ 0< t \le
\tilde{t}_{33},$
\begin{eqnarray*}
&&\sum_{i=1}^mC^{i}_m\int_{0}^{\frac{t}{2}}\int_{{\mathbb R}^d}\overline{p}(t-\tilde{s},x,w)m(dw)\int_{{\mathbb R}^d}\frac{\overline{C}\overline{F}^{i}(w,y)}{|w-y|^{d+\alpha}}\overline{q}_{m-i}(\tilde{s},y,z)\,m(dy)d\tilde{s}\\
&\le&
\frac{1}{2}\tilde{C_{1}}m!K^{m}\frac{t}{|x-z|^{d+\alpha}},
\end{eqnarray*}
i.e.
\begin{eqnarray*}
&&\sum_{i=1}^mC^{i}_m\int_{\frac{t}{2}}^{t}\int_{{\mathbb R}^d}\overline{p}(s,x,w)m(dw)\int_{{\mathbb R}^d}\frac{\overline{C}\overline{F}^{i}(w,y)}{|w-y|^{d+\alpha}}\overline{q}_{m-i}(t-s,y,z)\,m(dy)ds\\
&\le&
\frac{1}{2}\tilde{C_{1}}m!K^{m}\frac{t}{|x-z|^{d+\alpha}}.
\end{eqnarray*}
Combining case a and case b, when $ 0< t \le
\min(\tilde{t}_{23},\tilde{t}_{33}),$
\begin{eqnarray*}
&&\sum_{i=1}^mC^{i}_m\int_{\frac{t}{2}}^{t}\int_{{\mathbb R}^d}\overline{p}(s,x,w)m(dw)\int_{{\mathbb R}^d}\frac{\overline{C}\overline{F}^{i}(w,y)}{|w-y|^{d+\alpha}}\overline{q}_{m-i}(t-s,y,z)\,m(dy)ds\\
&\le& \frac{1}{2}\tilde{C_{1}}m!K^{m}t^{-\frac{d}{\alpha}}\left(1 \wedge
\frac{t^{\frac{1}{\alpha}}}{|x-z|}\right)^{d+\alpha}.
\end{eqnarray*}
Therefore when $0 < t \le
t_{3}=\min(t_{23},t_{33},\tilde{t}_{23},\tilde{t}_{33})$,
$$ \overline{q}_{m}(t,x,z) \le \tilde{C_{1}}m!K^{m}t^{-\frac{d}{\alpha}}\left(1 \wedge
\frac{t^{\frac{1}{\alpha}}}{|x-z|}\right)^{d+\alpha},$$ i.e. the
statement holds for $n=m$. {\hfill $\Box$ \bigskip}
By Theorem 3.8, we have for $0<t\leq t_{3}$,
$$ \sum_{n=0}^{\infty}\frac{\overline{q}_n(t,x,z)}{n!}\leq \sum_{n=0}^{\infty}\tilde{C_{1}}K^nt^{-\frac{d}{\alpha}}\left(1 \wedge \frac{t^{\frac{1}{\alpha}}}{|x-z|}\right)^{d+\alpha} =\tilde{C_{1}}\frac{1}{1-K}t^{-\frac{d}{\alpha}}\left(1 \wedge \frac{t^{\frac{1}{\alpha}}}{|x-z|}\right)^{d+\alpha}. $$
Since $|q_n(t,x,z)|\le \overline{q}_n(t,x,z)$ and
$q(t,x,z)=\sum_{n=0}^{\infty}(-1)^n\frac{{q}_n(t,x,z)}{n!}$, we have for $0<t\leq t_{3}$,
$$q(t,x,z)\le
\tilde{C_{1}}\frac{1}{1-K}t^{-\frac{d}{\alpha}}\left(1 \wedge
\frac{t^{\frac{1}{\alpha}}}{|x-z|}\right)^{d+\alpha}.$$
Applying (ii) of Proposition 3.5, we have
$$q(t,x,z)\le {C}_{5}e^{{C}_{6}t}t^{-\frac{d}{\alpha}}\left(1 \wedge \frac{t^{\frac{1}{\alpha}}}{|x-z|}\right)^{d+\alpha},\,\,\forall (t,x,z)\in (0,\infty) \times {\mathbb R}^d \times {\mathbb R}^d,$$
where $C_{5}$ and $C_{6}$ are positive constants. Thus we have
established the lower and upper estimates of $q(t,x,z)$ as
follows.
\begin{thm}
There exist positive constants $C_{3}$, $C_{4}$, $C_{5}$ and $C_{6}$
such that
\begin{equation}
C_{3}e^{-C_{4}t}t^{-\frac{d}{\alpha}}\left(1 \wedge
\frac{t^{\frac{1}{\alpha}}}{|x-z|}\right)^{d+\alpha} \leq q(t,x,z)
\leq C_{5}e^{C_{6}t}t^{-\frac{d}{\alpha}}\left(1 \wedge
\frac{t^{\frac{1}{\alpha}}}{|x-z|}\right)^{d+\alpha}
\end{equation} for all $(t,x,z)\in (0,\infty) \times {\mathbb R}^d \times
{\mathbb R}^d $.
\end{thm}
From Theorem 3.9 and (ii) of Proposition 3.5, it is easy to obtain
the following property of $q(t,x,z)$:
\begin{prop}
$q(t,x,z)$ is jointly continuous on
$(0,\infty)\times{\mathbb R}^d\times{\mathbb R}^d.$
\end{prop}
\bigskip
\bigskip
{\bf Acknowledgement}: I am very grateful to my advisor Professor
Renming Song for his encouragement and many suggestions.
\vspace{.5in}
\begin{singlespace}
\small
\section{Introduction}
Naive Bayes classifiers have proven to be useful in many prediction problems with complete training data. Here we consider the situation where a naive Bayes classifier is trained with data where the response is right censored. Such prediction problems are for instance encountered in profiling systems used at National Employment Agencies. A profiling system provides predictions of unemployment duration based on individual characteristics. To train such a system, register data on individuals is used where unemployment durations as well as demographic and socio-economic information are recorded. Unemployment duration is then typically censored by the end of the observation period as well as by exit from unemployment for reasons other than employment, typically entrance into educational programs \cite{NP08}. Naive Bayes classifiers have proven useful in many prediction problems with complete data \cite{WMN06}, \cite{WBW05} and references therein. In this paper, for this censored-response case, we propose the maximum compound collective conditional likelihood estimator and show that it is strongly consistent under the usual identifiability condition \cite{WM07}, whose notation is used in our proof.
Formally, consider a class variable $X_0$ (say, unemployment duration) and an $n$-attribute random variable
vector $X_{[n]}=(X_1, \ldots, X_n)$ (individual features), where all the variables are discrete
and finite. Note that continuous variables may be discretized, making this framework more general. We thus assume that the state space of $X_i$ is
$\mathcal{X}_i = \{1, \ldots, r_i\}$. We further assume that
$(X_0,X_{[n]})$ forms a naive Bayesian network so that their
joint density and conditional density of $X_{0}$ given $X_{[n]}=x_{[n]}$ are as follows:
\begin{align*}
p(x_0,x_1,\ldots,x_n) &= p(x_0) \prod_{i=1}^n p(x_i \mid x_0), \\
p(x_0\mid x_{[n]}) &= \frac{p(x_0) \prod_{i=1}^n p(x_i\mid x_0)}{\sum_{x'_0} p(x'_0) \prod_{i=1}^n p(x_i\mid x'_0)}.
\end{align*}
Let the parameter space be $\mathit{\Theta} = \mathit{\Delta}_{r_0}\times \mathit{\Delta}_{r_1}^{r_0}\times \cdots \times \mathit{\Delta}_{r_n}^{r_0}$,
where $\mathit{\Delta}_t = \{ (p_1,\ldots,p_{t-1}): 0\leq p_i, \sum_{i=1}^{t-1} p_i \leq 1 \}$ and $\mathit{\Delta}_a^b$ denotes the $b$-fold Cartesian power of $\mathit{\Delta}_a$.
The interior of $\mathit{\Theta}$ is denoted by $\mathit{\Theta}^o$. In the following, we always
assume that the true parameter is an element of $\mathit{\Theta}^o$. Note that
if this is not the case, then the naive Bayesian network is degenerate
in the sense that some variables (if binary) may vanish, as their state spaces
are reduced to singletons, or some may appear with reduced state
spaces. Then, the parameter of the naive Bayesian model is $\theta =
(\theta_0, \theta_1, \ldots, \theta_n)\in
\mathit{\Theta}$, where $\theta_0=(\theta_{x_0=1}, \ldots,
\theta_{x_0=r_0-1})$ and $\theta_i = (\theta_{i \mid x_0=1}, \ldots,
\theta_{i \mid x_0=r_0})$ such that $\theta_{i \mid
x_0}=(\theta_{x_i=1 \mid x_0},\dots, \theta_{x_i=r_i-1 \mid x_0})$ for $i=1,
\dots, n$. Since we are working in the non-Bayesian case, we have
\begin{align*}
p_{\theta}(x_0) &= \theta_{x_0} \qquad \text{ if $x_0=1,\ldots,r_0-1$}, \\
p_{\theta}(r_0) &= 1-\sum_{x_0=1}^{r_0-1} \theta_{x_0}, \\
p_{\theta}(x_i\mid x_0) &= \theta_{x_i\mid x_0} \qquad \text{ if $x_i=1,\ldots,r_i-1$}, \\
p_{\theta}(r_i\mid x_0) &= 1-\sum_{x_i=1}^{r_i-1} \theta_{x_i\mid x_0}.
\end{align*}
Note that the above marginal and conditional densities are
over-parameterized, i.e., when we write, for example, the density of
$X_0$, $p_\theta(x_0)$ the parameter $\theta$ also contains irrelevant
components in addition to relevant ones
to determine the probabilities of $X_0$.
Suppose we have a random sample of $N=N_1+N_2$ data cases on the random variable vector $(X_0,X_1,\ldots,X_n)$, and denote
$$D=
\big\{(x_0^{(1)},x_1^{(1)},\ldots,x_n^{(1)}),\ldots,(x_0^{(N)},x_1^{(N)},\ldots,
x_n^{(N)}) \big\}.$$
Let the first $N_1$ cases be fully observed, and the remaining $N_2$ cases be right censored in $X_0$. In the example of unemployment duration, where $X_0$ denotes the duration of an unemployment spell for an individual and $(X_1,\ldots,X_n)$ a suitable vector of features/covariates, (right) censoring of $X_0$ may be due to the end of the study or entrance into educational programs for the unemployed. In the sequel, ``censored'' means right censored in the response.
The compound collective conditional likelihood (CCCL) of $\theta$ given the data $D$ is defined as
\begin{eqnarray*}
CCCL_N(\theta)&=& \prod_{j=1}^{N_1} p_{\theta}(x_0^{(j)} \mid x_{[n]}^{(j)}) \prod_{k=N_1+1}^{N}P_{\theta}(X_0 > x_0^{(k)} \mid x_{[n]}^{(k)}) \\
&=& \prod_{j=1}^{N_1} p_{\theta}(x_0^{(j)} \mid x_{[n]}^{(j)}) \prod_{k=N_1+1}^{N}\sum_{x_0=x_0^{(k)}+1}^{r_0}p_{\theta}(x_0 \mid x_{[n]}^{(k)}) \\
&=& \prod_{j=1}^{N_1} \frac{p_{\theta}(x_0^{(j)}) \prod_{i=1}^n p_{\theta}(x_i^{(j)} \mid x_0^{(j)})}
{\sum_{x'_0} p_{\theta}(x'_0) \prod_{i=1}^n p_{\theta}(x_i^{(j)} \mid x'_0)} \\
& & \times \prod_{k=N_1+1}^{N} \sum_{x_0=x_0^{(k)}+1}^{r_0} \frac{p_{\theta}(x_0) \prod_{i=1}^n p_{\theta}(x_i^{(k)} \mid x_0)}
{\sum_{x'_0} p_{\theta}(x'_0) \prod_{i=1}^n p_{\theta}(x_i^{(k)} \mid x'_0)} \\
&= & \prod_{j=1}^{N_1} \frac{p_{\theta}(x_0^{(j)}) \prod_{i=1}^n p_{\theta}(x_i^{(j)} \mid x_0^{(j)})}
{\sum_{x'_0} p_{\theta}(x'_0) \prod_{i=1}^n p_{\theta}(x_i^{(j)} \mid x'_0)} \\
& & \times \prod_{k=N_1+1}^N \Bigg\{ \frac{P_{\theta}(X_0=x_0^{(k)}+1) \prod_{i=1}^n p_{\theta}(x_i^{(k)} \mid X_0=x_0^{(k)}+1)}
{\sum_{x'_0} p_{\theta}(x'_0) \prod_{i=1}^n p_{\theta}(x_i^{(k)} \mid x'_0)} \\
& & + ... + \frac{P_{\theta}(X_0=r_0) \prod_{i=1}^n p_{\theta}(x_i^{(k)} \mid X_0=r_0)}
{\sum_{x'_0} p_{\theta}(x'_0) \prod_{i=1}^n p_{\theta}(x_i^{(k)} \mid x'_0)} \Bigg\} \\
&=& CCL_{N}^1(\theta)+... +CCL_{N}^M(\theta),
\end{eqnarray*}
where $M=\prod_{j=N_1+1}^N (r_0-x_0^{(j)})$. Thus, the CCCL is a sum of $M$ collective conditional likelihoods.
The maximum compound collective conditional likelihood estimator (MCCCLE) is then
\begin{gather*}
\hat{\theta}_N = \argmax_{\theta \in \mathit{\Theta}} \Big\{ CCL_{N}^1(\theta)+ ...+ CCL_{N}^M(\theta) \Big\}.
\end{gather*}
The MCCCLE has no closed-form expression in general and needs to be solved numerically. Below we will need the following MCCLEs:
\begin{eqnarray*}
\hat{\theta}_N^i = \argmax_{\theta \in \mathit{\Theta}} CCL_{N}^i(\theta) \ \ \ \textrm{ for } i=1,...,M.
\end{eqnarray*}
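As an illustration only (the setup and all names below are ours, not the paper's), the following sketch computes and maximizes the log CCCL numerically in the smallest non-trivial setting: a binary class $X_0\in\{1,2\}$ and a single binary attribute, with the maximizer found by a crude grid search over the open parameter cube.

```python
import itertools
import math

# Hypothetical sketch: parameters t0 = P(X0=1), a = P(X1=1|X0=1),
# b = P(X1=1|X0=2) for a naive Bayes model with binary X0 and X1.

def cond_class_probs(t0, a, b, x1):
    """Posterior [p(X0=1|x1), p(X0=2|x1)] under the naive Bayes factorization."""
    pa = a if x1 == 1 else 1.0 - a
    pb = b if x1 == 1 else 1.0 - b
    joint = (t0 * pa, (1.0 - t0) * pb)
    z = joint[0] + joint[1]
    return [joint[0] / z, joint[1] / z]

def log_cccl(theta, complete, censored):
    """Log compound collective conditional likelihood: complete cases
    contribute log p(x0|x1); cases right censored at c contribute
    log P(X0 > c | x1)."""
    t0, a, b = theta
    ll = 0.0
    for x0, x1 in complete:
        ll += math.log(cond_class_probs(t0, a, b, x1)[x0 - 1])
    for c, x1 in censored:
        ll += math.log(sum(cond_class_probs(t0, a, b, x1)[c:]))
    return ll

def mcccle_grid(complete, censored, step=0.05):
    """Grid-search maximizer; the MCCCLE has no closed form in general."""
    grid = [step * k for k in range(1, round(1 / step))]
    return max(itertools.product(grid, repeat=3),
               key=lambda th: log_cccl(th, complete, censored))
```

In practice one would replace the grid by a general-purpose numerical optimizer; the grid merely keeps the sketch dependency-free.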
\section{Strong consistency of MCCCLE}
In this section, we give a proof of the strong consistency of the MCCCLE. First, we need the following identifiability assumption, as is usual in maximum likelihood theory.
\begin{Assumption} \label{identifiability} \emph{(Identifiability Condition)}
If $p_{\theta}(x_0\mid x_{[n]})=p_{\theta'}(x_0\mid x_{[n]})$
for all $x_0$ and $x_{[n]}$ then $\theta=\theta'$.
\end{Assumption}
This condition requires that $\theta$ be uniquely
determined by the corresponding density $p_\theta(\cdot\mid \cdot)$. As shown earlier \cite{WM07} for the MCCLE, this does not always hold.
The collective conditional likelihood function for the naive Bayes network model with complete data is concave down in the parameters \cite{RGMT05}. As noted above, the compound collective conditional likelihood function with a censored response is a sum of collective conditional likelihood functions.
\begin{Lemma}
The compound collective conditional likelihood function $CCCL_N(\theta)$ defined above is concave down in $\theta$.
\end{Lemma}
\noindent
\emph{\bf Proof:} First note that for any $\theta \in \mathit{\Theta}$, $p_\theta(x_0 \mid x_{[n]})>0$, so the collective conditional likelihood functions composing $CCCL_N$ are well defined and, as recalled above, each is concave down. Furthermore, the sum of two concave functions on the same domain is concave, and so is the sum of any finite number of concave functions on the same domain, thereby yielding the result.
\begin{flushright} $\square $ \end{flushright}
\begin{Lemma}
If $f$ and $g$ are two convex functions on the same domain with their global minima at $x_1$ and $x_2$ respectively, then $f+g$ attains its global minimum at $tx_1+(1-t)x_2$ for some $t \in [0,1]$.
\end{Lemma}
\noindent
\emph{\bf Proof}:
If $x_1=x_2$ then the result holds with $t=0$ or $t=1$. Consider the case where $x_1 \neq x_2$. Then $\dot{f}(x_1)+\dot{g}(x_1) < 0 $ and $\dot{f}(x_2)+\dot{g}(x_2) > 0 $, or vice versa. Since $f+g$ is a convex function (the sum of two convex functions on the same domain), there must be a point $x_3$ such that $\dot{f}(x_3)+\dot{g}(x_3) = 0$, where $x_3=tx_1+(1-t)x_2$ for some $t \in [0,1] $.
\begin{flushright} $\square $ \end{flushright}
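A quick numerical sanity check of this lemma (ours, with two arbitrarily chosen convex quadratics): the minimizer of $f+g$ lies on the segment between the individual minimizers.

```python
# Illustrative check: f has its minimum at x1 = 1, g at x2 = -2
# (both functions are our own choices, purely for demonstration).
def argmin_1d(h, lo=-5.0, hi=5.0, n=20001):
    """Brute-force minimizer of h over a fine grid on [lo, hi]."""
    xs = [lo + (hi - lo) * k / (n - 1) for k in range(n)]
    return min(xs, key=h)

f = lambda x: (x - 1.0) ** 2
g = lambda x: 3.0 * (x + 2.0) ** 2

x1 = argmin_1d(f)                      # ~ 1
x2 = argmin_1d(g)                      # ~ -2
x3 = argmin_1d(lambda x: f(x) + g(x))  # exact minimizer is -1.25
t = (x3 - x2) / (x1 - x2)              # x3 = t*x1 + (1-t)*x2, here t = 0.25
assert min(x1, x2) <= x3 <= max(x1, x2) and 0.0 <= t <= 1.0
```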
If $M=2$ then, since $CCL_N^1(\theta)$ and $CCL_N^2(\theta)$ are concave down functions having their maxima at $\hat{\theta}_N^1$ and $\hat{\theta}_N^2$ respectively, $CCL_N^1(\theta)+ CCL_N^2(\theta)$ which is also concave down has its maximum at $\hat{\theta}_N:=t_D \hat{\theta}_N^1 + (1-t_D) \hat{\theta}_N^2 $ where $t_D $ is a vector of the same length as $\theta^*$, whose components are in $ [0,1] $, and which is dependent on the data $D$. Note that both $\hat{\theta}_N^1$ and $\hat{\theta}_N^2$ are consistent estimates for $\theta^*$ under Assumption \ref{identifiability} (since they are MCCLEs, see \cite{WM07}).
\begin{eqnarray}
P_{\theta^*} \Big\{ \lim_{N_1 \rightarrow \infty } \hat{\theta}_N^1 &=& \theta^* \Big\} =1 \label{theta1.eq}\\
P_{\theta^*} \Big\{ \lim_{N_1 \rightarrow \infty } \hat{\theta}_N^2 &=& \theta^* \Big\} =1 \label{theta2.eq}
\end{eqnarray}
In the sequel, we write $\theta=0$ for $\theta \in \mathit{\Theta}^o$ to mean that all the components of $\theta$ are zeros, and similarly for any inequality on $\theta$. Now we can prove the strong consistency of the MCCCLE.
\begin{Theorem} \label{strong.consistency}
Under Assumption \ref{identifiability}, the MCCCLE $\hat{\theta}_N$ is strongly consistent as follows:
\begin{eqnarray}
P_{\theta^*} \Big\{ \lim_{N_1 \rightarrow \infty } \hat{\theta}_{M+N_1} &=& \theta^* \Big\} =1, \forall M.
\end{eqnarray}
\end{Theorem}
\noindent
\emph{\bf Proof:} We prove the result by induction. Let $M=2$,
then $\hat{\theta}_N = t_D \hat{\theta}_N^1+ (1-t_D) \hat{\theta}_N^2 = \hat{\theta}_N^2 + t_D ( \hat{\theta}_N^1 - \hat{\theta}_N^2 )$.
By (\ref{theta1.eq}) and (\ref{theta2.eq}) we have
\begin{eqnarray}
P_{\theta^*} \Big\{ \lim_{N_1 \rightarrow \infty } \hat{\theta}_N^1 = \theta^* \cap \lim_{N_1 \rightarrow \infty } \hat{\theta}_N^2 = \theta^* \Big\} =1.
\end{eqnarray}
Since $ \hat{\theta}_N^1, \hat{\theta}_N^2$ and $\theta^*$ are finite and $0 \leq t_D \leq 1$ (therefore $0 \leq \limsup_{N_1 \rightarrow \infty} t_D \leq 1$), we can write
\begin{eqnarray}
P_{\theta^*} \Big\{ \limsup_{N_1 \rightarrow \infty } t_D(\hat{\theta}_N^1 - \hat{\theta}_N^2) = 0 \Big\} &=&1 \\
P_{\theta^*} \Big\{ \limsup_{N_1 \rightarrow \infty } \hat{\theta}_N^2 + t_D(\hat{\theta}_N^1 - \hat{\theta}_N^2) = \theta^* \Big\} &=& 1 \\
P_{\theta^*} \Big\{ \limsup_{N_1 \rightarrow \infty } \hat{\theta}_N = \theta^* \Big\} &=& 1
\end{eqnarray}
Similarly we can write
\begin{eqnarray}
P_{\theta^*} \Big\{ \liminf_{N_1 \rightarrow \infty } \hat{\theta}_N = \theta^* \Big\} &=& 1.
\end{eqnarray}
Hence
\begin{eqnarray}
P_{\theta^*} \Big\{ \lim_{N_1 \rightarrow \infty } \hat{\theta}_N = \theta^* \Big\} &=& 1
\end{eqnarray}
Now assume that for $M>2$, $\hat\theta_N := w_D^1 \hat{\theta}_N^1+ ...+w_D^M \hat{\theta}_N^M$ where $w_D^1 +...+w_D^M=1$, the maximizer of $CCCL_N(\theta)$, is a consistent estimator of $\theta^*$. Then, for the case of $M+1$, assume for simplicity that the additional new censored observation is $x_0^{(N+1)}=r_0-2$. Then,
\begin{eqnarray*}
CCCL_{N+1}(\theta)&=& \Big\{ CCL_{N}^1(\theta)+... +CCL_{N}^M(\theta) \Big\} \\
& & \times \Bigg\{ \frac{P_{\theta}(X_0=r_0-1) \prod_{i=1}^n p_{\theta}(x_i^{(N+1)} \mid X_0=r_0-1)}
{\sum_{x'_0} p_{\theta}(x'_0) \prod_{i=1}^n p_{\theta}(x_i^{(N+1)} \mid x'_0)} \\
& & + \frac{P_{\theta}(X_0=r_0) \prod_{i=1}^n p_{\theta}(x_i^{(N+1)} \mid X_0=r_0)}
{\sum_{x'_0} p_{\theta}(x'_0) \prod_{i=1}^n p_{\theta}(x_i^{(N+1)} \mid x'_0)} \Bigg\}.
\end{eqnarray*}
Let us rewrite this (with obvious new notation)
\begin{eqnarray*}
CCCL_{N+1}(\theta)&=& \Big\{ CCL_{N+1}^{a,1}(\theta)+... +CCL_{N+1}^{a,M}(\theta) \Big\} \\
& & + \Big\{ CCL_{N+1}^{b,1}(\theta)+... +CCL_{N+1}^{b,M}(\theta) \Big\}
\end{eqnarray*}
Now denote $\hat{\theta}_{N+1}^a := w_D^{a,1} \hat{\theta}_{N+1}^{a,1}+ ...+w_D^{a,M} \hat{\theta}_{N+1}^{a,M}$ and $\hat{\theta}_{N+1}^b := w_D^{b,1} \hat{\theta}_{N+1}^{b,1}+ ...+w_D^{b,M} \hat{\theta}_{N+1}^{b,M}$
where $\sum_{j=1}^M w_D^{i,j}=1$ for $i=a,b$, the maximizers of the first and second sums of CCLs respectively. By the induction assumption they are consistent estimators of $\theta^*$. Now, similarly to the case $M=2$, we can write $\hat{\theta}_{N+1} := u_D\hat{\theta}_{N+1}^a+(1-u_D)\hat{\theta}_{N+1}^b$, where $0 \leq u_D \leq 1$, for the maximizer of $CCCL_{N+1}(\theta)$, and show that it is strongly consistent for $\theta^*$.
\begin{flushright} $\square $ \end{flushright}
\begin{Corollary} \label{cor.strong.consistency}
$p_{\hat{\theta}_N}(x_0\mid x_{[n]})$ is a strongly consistent estimator of $p_{\theta^*}(x_0\mid x_{[n]})$
for each $x_{[n]}$.
\end{Corollary}
\noindent
\emph{\bf Proof:}
Immediate from the theorem since the densities $p_{\theta}(x_0\mid
x_{[n]})$ for all $x_{[n]}$ are rational functions of the parameter
which have no poles in $\mathit{\Theta}^o$.
\begin{flushright} $\square $ \end{flushright}
{\bf ACKNOWLEDGEMENTS.} The authors gratefully acknowledge the financial support of the Swedish Research Council (through the Swedish Initiative for Microdata Research in the Medical and Social Sciences (SIMSAM) and Ageing and Living Condition Program) and Swedish Research Council for Health, Working Life and Welfare (FORTE).
\section{Introduction}
The rapid development of multifiber spectroscopy in recent years has
made possible the simultaneous acquisition of large numbers of galaxy
spectra. The obtention of extensive and complete redshift data bases
for clusters of galaxies has hastened the investigation of the physical
properties of their visual component which, in turn, is allowing for a
better understanding of the characteristics of the dark matter
distribution on Mpc scales. Here, we report a total of 104 redshift
measurements for 99 galaxies in the field of A3733 and use these data,
in combination with a previously published sample of 39 redshifts, to
perform a kinematic and spatial analysis of the central regions of this
cluster.
A3733 is a southern galaxy cluster listed in the ACO catalog (Abell,
Corwin, \& Olowin 1989)\cite{ACO89} as being of intermediate Abell
morphological type and richness class $R=1$. This cluster hosts a
central cD galaxy, included in the Wall \& Peacock (1985)\cite{WP85}
all-sky catalog of brightest extragalactic radio sources at 2.7 GHz,
which has led to its classification as of Bautz-Morgan type I--II
(Bautz \& Morgan 1970)\cite{BM70}. A3733 is also one of the 107 nearby
rich ACO clusters ($R\ge 1$, $z\le 0.1$) included by Katgert et
al. (1996)\cite{Ka96} in the ESO Nearby Cluster Survey (ENACS), as well
as one of the X-ray-brightest Abell clusters detected in the ROSAT
All-Sky Survey by Ebeling et al. (1996)\cite{Eb96}.
The only major kinematical study of A3733 done so far is that of Stein
(1997)\cite{St97}. From a sample of 27 cluster members located within
$r\la 16\arcmin$ from the cluster center, this author has found no
evidence of significant substructure in the cluster core. This study of
A3733, which is part of a more general investigation of the frequency
of substructure in the cluster cores from an optical spectroscopic
survey conducted on a sample of 15 nearby ($0.01\la z\la 0.05$) galaxy
clusters (Stein 1996)\cite{St96}, is based on a dataset that has many
characteristics in common with the ENACS data gathered for the same
field. Indeed, the two datasets have been obtained with the OPTOPUS
multifiber spectrograph at the ESO 3.6-m telescope and cover
essentially the same area on the sky. Besides, they have also a very
similar number of galaxies: 39 and 44, respectively (28 of which are
shared).
The MEFOS redshift dataset for A3733 reported in this paper contains
two and a half times the number of galaxy radial velocities reported by
Stein (1996)\cite{St96}, including 26 reobservations, while it covers a
circular region around the center of A3733 four times
larger. Furthermore, its high degree of completeness offers the
possibility of extracting a complete magnitude-limited subset with a
number of galaxies large enough for its use on statistical
analysis. The plan of the paper is as follows. In Sect.~2 we discuss
the MEFOS spectroscopic observations and data reduction, and present a
final sample with 112 entries built by the combination of the MEFOS and
Stein's (1996)\cite{St96} data. Section~3 begins with a brief
description of the tools which will be used for the analysis of the
data. Next, we identify the galaxies in our sample that belong to
A3733, and use this dataset and a nearly complete magnitude-limited
subset of it to examine the kinematical properties and structure of the
central regions of the cluster. Section~4 summarizes the results of our
study.
\section{MEFOS observations and data reduction}
A total of 104 redshift measurements for 99 galaxies within a circular
region of $30\arcmin$ around the radio position of the cluster cD,
$\mbox{RA} = 20^{\mathrm h} 58^{\mathrm m} 39\fs 0$ and $\mbox{Dec} =
-28\degr 15\arcmin 22\arcsec$ (Tadhunter et al. 1993)\cite{Ta93}, were
obtained using the MEFOS multifiber spectrograph at the 3.6-m ESO
telescope at La Silla (Chile). The observations were carried out on May
23--26, 1995 during an observing run whose main target was the dwarf
galaxy population of the Centaurus cluster (see Stein et
al. 1997)\cite{SJF97}. The MEFOS instrument has a circular field of
view of $1\degr$ and 29 fiber arms which carry two spectral fibers of
$2\farcs 5$ aperture for simultaneous object and sky acquisition, and
one image fiber of $36\arcsec\times 36\arcsec$ for the interactive
repositioning of the spectral fibers. A grating with 300
lines~mm$^{-1}$ was used to produce spectra in the range between
3800 and 6100~\AA\ with a typical resolution of ca. 10~\AA. The detector
was a TI $512\times 512$ CCD chip.
The raw CCD spectra were reduced using the MIDAS package through
several steps which include cleaning from defects, cosmic ray removal,
flat-fielding, one-dimensional extraction, and wavelength calibration
using a He-Ne lamp before and after each exposure. The sky subtraction
was performed by subtracting from each object spectrum the mean
of the output of all the fibers placed on blank sky positions from
the corresponding exposure. Prior to sky subtraction the signal of each
spectrum (including the sky spectra) was scaled with respect to the
intrinsic transmission efficiency of the corresponding fiber, which had
been determined using the average over the observed fields of the signal
under the O\,{\sc i} emission line at 5577.4 \AA.
After the final one-dimensional spectra had been extracted, velocities
were computed either from emission lines or from absorption lines, or
from both. Emission-line redshifts were obtained from galaxies with at
least two clearly visible emission lines (mostly O\,{\sc ii},
H{$\beta$}, and O\,{\sc iii}). Their redshifted positions were
determined from fits with a Gaussian superposed onto a quadratic
polynomial approximating the local continuum. The final redshift of a
galaxy was then computed as the unweighted mean over the $n$
emission lines present in its spectrum. Since the errors in the
redshift measurement of each single line are essentially dominated by
uncertainties in the wavelength calibration (Stein
1996)\cite{St96}, individual measurement errors were taken equal to 100
\kms, independently of line strength. Accordingly, an uncertainty of
$100/\sqrt{n}$ \kms\ was assigned to emission-line
redshifts. Absorption-line redshifts were obtained using the
standard cross-correlation algorithm described by Tonry \& Davis
(1979)\cite{TD79}. This technique requires the previous removal of both
galaxy emission lines and strong night-sky lines, and the
transformation of the spectral continuum to a constant level of
zero. Special care was taken that the continuum subtraction did not
create spurious features of low spatial frequency which could be
confused with broad, superposed absorption lines. For the determination
of the zero-point shift one single template was constructed by merging
20 galaxy spectra with high S/N and well known redshifts. Only
normalized cross-correlation peaks of height 0.25 or larger were
considered as significant. Both emission-line and cross-correlation
redshifts were then corrected to heliocentric values.
The 26 galaxies observed also by Stein (1996)\cite{St96} in the field
of A3733 with the OPTOPUS spectrograph were used to determine the
scaling factor of the internal errors estimated in the
cross-correlation procedure, resulting in external errors of typically
40--50 \kms. These same galaxies gave a mean velocity difference of
$-23$ \kms\ (the MEFOS redshifts being typically smaller), consistent
with zero to within the reported measurement errors. Only for 5 of our
galaxies could we measure both emission-line and cross-correlation
redshifts. Again, an excellent consistency was found between the two
kinds of measurements.
The cross-correlation and emission-line radial velocities
for the 99 galaxies observed with the MEFOS spectrograph in the field
of A3733 are listed in Cols. (4) and (5) of Table 1, together with
their estimated external errors. Columns (6) and (7) give the same
information for the 39 galaxies observed with the OPTOPUS instrument by
Stein (1996)\cite{St96}. Column (8) lists the final radial velocities
and their estimated uncertainties which result from a weighted average
of the data in Cols.~(4)--(7). The combination of these two samples
gives a total of 112 entries, which will be used in the following
section to examine the kinematical properties and structure of
A3733. This is about three times the number of galaxies used in the
previous study by Stein (1997)\cite{St97}. The first three columns of
Table 1 contain the celestial coordinates for the epoch B1950.0 and the
\bj\ magnitudes from the COSMOS catalog kindly provided by
H. MacGillivray. The completeness in apparent magnitude of the final
dataset is high, with percentages of 100, 92, 75, 63, and 50\% of all
known (COSMOS) galaxies in the same region of the sky covered by our
observations at the \bj\ magnitude limits of 17.0, 17.5, 18.0, 18.5,
and 19.0, respectively.
\section{Kinematic and spatial analysis}
Following the recommendations of Beers, Flynn, \& Gebhardt
(1990)\cite{BFG90}, we will characterize the velocity distribution of
our cluster sample by means of the biweight estimators of central
location (i.e., systemic velocity), $\vmean$, and scale (i.e., velocity
dispersion), $\sigma$. We will assign errors to these estimates equal
to the 68\% bias-corrected bootstrap confidence intervals inferred from
$10\,000$ resamplings. The program ROSTAT, kindly provided by T. Beers,
will be used for all these calculations.
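For concreteness, the biweight estimators can be sketched in a few lines of Python. The tuning constants $c=6.0$ (location) and $c=9.0$ (scale) and the one-step form follow Beers et al. (1990)\cite{BFG90}, but the mock velocity sample and the naive percentile bootstrap below are illustrative assumptions, not the ROSTAT implementation itself.

```python
import numpy as np

def biweight_location(v, c=6.0, eps=1e-12):
    """One-step Tukey biweight estimate of central location."""
    m = np.median(v)
    mad = np.median(np.abs(v - m))            # median absolute deviation
    u = (v - m) / (c * mad + eps)
    w = np.abs(u) < 1.0                       # weights vanish outside |u| < 1
    return m + np.sum((v[w] - m) * (1.0 - u[w]**2)**2) / np.sum((1.0 - u[w]**2)**2)

def biweight_scale(v, c=9.0, eps=1e-12):
    """Tukey biweight estimate of scale (velocity dispersion)."""
    n = len(v)
    m = np.median(v)
    mad = np.median(np.abs(v - m))
    u = (v - m) / (c * mad + eps)
    w = np.abs(u) < 1.0
    num = np.sqrt(np.sum((v[w] - m)**2 * (1.0 - u[w]**2)**4))
    den = np.abs(np.sum((1.0 - u[w]**2) * (1.0 - 5.0 * u[w]**2)))
    return np.sqrt(n) * num / den

# Mock sample of member velocities (km/s); percentile bootstrap for sigma.
rng = np.random.default_rng(0)
v = rng.normal(11650.0, 600.0, size=74)
boot = np.array([biweight_scale(rng.choice(v, size=74)) for _ in range(1000)])
lo, hi = np.percentile(boot, [16, 84])
print(biweight_location(v), biweight_scale(v), (lo, hi))
```

In practice ROSTAT iterates these estimates and applies bias corrections to the bootstrap intervals, refinements omitted here.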
The ROSTAT program includes also a wide variety of statistical tests,
which can be used to assess the consistency of the empirical
line-of-sight velocity distribution of the A3733 members (see next
subsection) with draws from a single Gaussian parent population. A fair
representation of the overall results of the ROSTAT tests will be given
by quoting the value of the statistic and associated probability for
the canonical $B_1$ and $B_2$ tests, which measure, respectively, the
skewness (asymmetry) and kurtosis (elongation) of the velocity
distribution, and for the Anderson-Darling $A^2$ omnibus
test. Definitions of these tests can be found in Yahil \& Vidal
(1977)\cite{YV77} and D'Agostino (1986)\cite{Da86}. The Gaussianity
tests will be complemented by the Dip test of Hartigan \& Hartigan
(1985)\cite{HH85}, which tests the hypothesis that a sample is drawn
from a unimodal (though not necessarily Gaussian) parent distribution,
and by the search of individual weighted gaps, $g_\ast$, in the
velocity distribution of size 2.75 or larger (for a definition of
weighted gap see, for instance, Beers et
al. 1990)\cite{BFG90}. Individual weighted gaps this large are highly
significant since they arise less than 1\% of the time in random draws
from a Gaussian distribution, independently of sample size. We refer
the reader to the listed sources and references therein for a detailed
explanation of these statistical techniques.
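The gap search admits an equally short sketch. The convention below (weight $w_i = i(n-i)$ for the $i$-th gap, weighted gap $\sqrt{w_i\,g_i}$, normalization by the midmean of the weighted gaps) follows our reading of Beers et al. (1990)\cite{BFG90}; the exact normalization used by ROSTAT may differ in detail, and the bimodal test sample is purely illustrative.

```python
import numpy as np

def normalized_weighted_gaps(v):
    """Normalized weighted gaps of a sample (after Beers et al. 1990).

    For the ordered sample, the gap g_i = x_(i+1) - x_(i) receives weight
    w_i = i*(n - i); the weighted gap sqrt(w_i * g_i) is normalized by the
    midmean (mean of the middle 50%) of the weighted-gap distribution, so
    normalized gaps >= 2.75 arise < 1% of the time in Gaussian draws.
    """
    x = np.sort(np.asarray(v, dtype=float))
    n = len(x)
    i = np.arange(1, n)
    wg = np.sqrt(i * (n - i) * np.diff(x))
    s = np.sort(wg)
    m = len(s)
    midmean = s[m // 4:(3 * m) // 4].mean()
    return wg / midmean

# A bimodal sample shows one conspicuously large normalized gap:
rng = np.random.default_rng(1)
v = np.concatenate([rng.normal(0.0, 1.0, 40), rng.normal(8.0, 1.0, 40)])
print(normalized_weighted_gaps(v).max())   # well above the 2.75 threshold
```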
We will investigate also the presence of substructure in the spatial
distribution of galaxies by means of two powerful tests. First, we will
apply a 2D test developed by Salvador-Sol\'e, Sanrom\'a, \&
Gonz\'alez-Casado (1993)\cite{SSG93}, hereafter referred to as the SSG
test, which relies exclusively on the projected positions of galaxies
on the sky (though velocity information is required to define strict
cluster membership). This test produces two different estimates of the
projected number density profile of the cluster, $\ndec(r)$ and
$\ndir(r)$, which are, respectively, sensitive and insensitive to the
existence of correlation in the galaxy positions relative to the
cluster background density. The subscript ``dec'' identifies the
density profile obtained via the \emph{deconvolution} of the histogram
of intergalaxy separations, while the subscript ``dir'' applies to the
density profile arising \emph{directly} from the integral of the
histogram of clustercentric distances of the cluster galaxies (eqs.~[4]
and [6], respectively, in Salvador-Sol\'e et al. 1993\cite{SSG93}). The
two profiles are convolved with a window of smoothing size
$\lambda_{\mathrm min}$ corresponding to the minimum resolution-length
imposed by the calculation of $\ndec(r)$. The significance of
substructure is estimated from the null hypothesis that $\ndec(r)$
arises from a Poissonian realization of some (unknown) theoretical
density profile which has led to the observed radial distribution of
galaxies. The probability of this being the case is calculated by means
of the statistic:
\begin{equation}\label{chi2}
\chi^2 = {\left(\ndec(0)-\ndir(0)\right)^2 \over{2S^2(0)}}\;,
\end{equation}
for one degree of freedom. In eq.~(\ref{chi2}), $\ndec(0)$ and
$\ndir(0)$ are the central values of the respective density profiles of
the cluster, while $S^2(0)$ is the central value of the radial run of
the variance of $\ndir(r)$ calculated from a set of simulated clusters
convolved to the $\lambda_{\mathrm min}$ imposed by $\ndec(r)$. The
simulated clusters are generated by the azimuthal scrambling of the
observed galaxy positions around the center of the cluster, i.e., by
randomly shuffling between 0 and $2\pi$ the azimuthal angle of each
galaxy, while maintaining its clustercentric distance unchanged. It
must be stressed, however, that the sensitivity of the SSG test is not
affected by deviations of the spatial distribution of the galaxy sample
under scrutiny from circular symmetry (see Salvador-Sol\'e et
al. 1993\cite{SSG93}). It is also worth noting that this test does not
require a priori assumptions on the form of the true projected number
density profile of the cluster, nor on the number and size of the
subgroups that might be present in the data.
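The Monte-Carlo ingredient of the SSG test, the azimuthal scrambling, is simple to reproduce. The sketch below generates only the scrambled clusters; the deconvolution yielding $\ndec(r)$ and $\ndir(r)$, and hence the statistic of eq.~(\ref{chi2}), requires considerably more machinery and is omitted. The mock positions are illustrative.

```python
import numpy as np

def azimuthal_scramble(x, y, n_sims=1000, rng=None):
    """Simulated clusters for the SSG test: each galaxy keeps its
    clustercentric radius but receives a fresh azimuth, drawn uniformly
    in [0, 2 pi).  Coordinates are relative to the adopted cluster centre."""
    gen = np.random.default_rng(rng)
    r = np.hypot(x, y)
    phi = gen.uniform(0.0, 2.0 * np.pi, size=(n_sims, len(r)))
    return r * np.cos(phi), r * np.sin(phi)

gen = np.random.default_rng(2)
x, y = gen.normal(size=37), gen.normal(size=37)
xs, ys = azimuthal_scramble(x, y, n_sims=1000, rng=3)
print(bool(np.allclose(np.hypot(xs, ys), np.hypot(x, y))))   # True: radii preserved
```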
\begin{figure}
\resizebox{\hsize}{!}{\includegraphics{ds1509f1.eps}}
\caption{{\bf a} Stripe density plot and velocity histogram of the
galaxies with $V_\hel< 14\,000$ \kms\,in the A3733 sample. The arrow
marks the location of a highly significant weighted gap ($p=0.001$)
in the velocity distribution. {\bf b} Corresponding spatial
distribution. The 7 members of a suspect foreground group are
identified by open squares, while the asterisk marks the galaxy with
the lowest $V_\hel$. Filled circles identify our choice of strict
cluster members.}
\label{Fig. 1}
\end{figure}
The second spatial substructure test that will be applied to our data
is the 3D Dressler \& Shectman (1988b; DS)\cite{DS88b} test, which is
sensitive to local kinematic deviations in the projected galaxy spatial
distribution. The DS test assigns a local estimate of the velocity
mean, $\vmean_\loc$, and dispersion, $\sigma_\loc$, to each galaxy with
a measured radial velocity. These values are then compared with the
values of the kinematical parameters for the entire sample. The
statistic used to quantify the presence of substructure is the sum of
the local kinematic deviations for each galaxy, $\di$, over the $N$
cluster members, which we will calculate through the expression:
\begin{eqnarray} \label{delta}
\Delta & = & \sum_{i=1}^N\di \nonumber \\
& = & \sum_{i=1}^N\left[{N_{\mathrm
kern}+1\over{\sigma^2}}\left(\left(\vmean_{\loc,i}-\vmean\right)^2+\left(\sigma_{\loc,i}-\sigma\right)^2\right)\right]^{1/2}\;,
\end{eqnarray}
where $N_{\mathrm kern}={\mathrm nint}(\sqrt N)$, with ${\mathrm nint}(x)$
denoting the integer nearest to $x$, a choice that maximizes the sensitivity
of the DS test to significant substructure (Bird 1994).\cite{Bi94} To
avoid the formulation of any hypothesis on the form of the velocity
distribution of the parent population, the DS test calibrates the
$\Delta$ statistic by means of Monte-Carlo simulations that randomly
shuffle the velocities of the galaxies while keeping their observed
positions fixed. In this way any existing correlation between
velocities and positions is destroyed. The probability of the null
hypothesis that there are no local correlations between the position
and velocity of the cluster members is given in terms of the fraction
of simulated clusters for which their cumulative deviation, $\dsim$, is
smaller than the observed value, $\dobs$. Again, we refer the reader to
the quoted references for further details on the two spatial
substructure tests used in the present analysis.
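A compact implementation of eq.~(\ref{delta}) and its Monte-Carlo calibration might read as follows. The mock sample, the number of shuffles, and the choice of the local group as each galaxy plus its $N_{\mathrm kern}$ projected nearest neighbours are illustrative assumptions made for this sketch.

```python
import numpy as np

def ds_statistic(x, y, v):
    """Dressler-Shectman statistic of eq. (2): local vs. global kinematics."""
    n = len(v)
    n_kern = int(round(np.sqrt(n)))            # nint(sqrt(N))
    vbar, sig = v.mean(), v.std(ddof=1)
    delta = np.empty(n)
    for i in range(n):
        d2 = (x - x[i]) ** 2 + (y - y[i]) ** 2
        idx = np.argsort(d2)[:n_kern + 1]      # galaxy i plus its N_kern neighbours
        vloc, sloc = v[idx].mean(), v[idx].std(ddof=1)
        delta[i] = np.sqrt((n_kern + 1) / sig ** 2
                           * ((vloc - vbar) ** 2 + (sloc - sig) ** 2))
    return delta.sum(), delta

def ds_pvalue(x, y, v, n_shuffle=1000, rng=None):
    """Fraction of velocity-shuffled clusters with Delta_sim >= Delta_obs."""
    gen = np.random.default_rng(rng)
    d_obs, _ = ds_statistic(x, y, v)
    hits = sum(ds_statistic(x, y, gen.permutation(v))[0] >= d_obs
               for _ in range(n_shuffle))
    return hits / n_shuffle

# Mock sample: positions uncorrelated with velocities (no substructure).
gen = np.random.default_rng(4)
x, y = gen.uniform(-1, 1, 37), gen.uniform(-1, 1, 37)
v = gen.normal(11650.0, 600.0, 37)
print(ds_pvalue(x, y, v, n_shuffle=200, rng=5))
```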
\subsection{The sample of cluster members}
Before we can investigate the presence of substructure in A3733 we need
to assign cluster membership to the galaxies in our sample. Examination
of the radial velocities of the 112 galaxies listed in Table~1 allows
the exclusion of 30 obvious interlopers (all background galaxies and
groups), which are separated by more than 6500 \kms\ from the main
velocity group. Subsequent membership assignment for the remaining 82
galaxies is based on their velocity distribution and projected
positions, displayed in Figs.~1a and 1b, respectively. These figures
show the existence of 8 objects with velocities smaller than $10\,500$
\kms\ separated from the other galaxies by a gap in heliocentric
velocity of $\sim 450$ \kms. Seven of these galaxies appear also to be
concentrated on a small area of the sky. The cluster diagnostics
described at the beginning of this section reveal that the above gap in
velocity corresponds to an individually large normalized gap of size
3.39 in the 82 ordered velocities. The ``per-gap'' probability for a
weighted gap this size is only 0.001. This and the fact that the
suspected foreground group of 7 galaxies has a velocity dispersion of
only 73 \kms\ suggest that it might constitute a separate dynamical
entity. Accordingly, we chose to consider {\it bona fide\/} A3733
members the 74 galaxies in our sample with heliocentric velocities
between $10\,500$ and $13\,000$ \kms. Note that we are excluding also
from cluster membership the remaining foreground object with the lowest
measured radial velocity. From the set of cluster members, we obtain
$\vmean_\hel=11\,653^{+74}_{-76}$ \kms\ and $\sigma =614^{+42}_{-30}$
\kms\ after applying relativistic and measurement error corrections
(Danese, De Zotti, \& di Tullio 1980)\cite{DDD80}. These values are
compatible, within the adopted uncertainties, with the values
$\vmean_\hel=11\,716\pm 103$ \kms\ and $\sigma =522\pm 84$ \kms\
obtained in the previous analysis of this cluster by Stein
(1997)\cite{St97} from a sample containing 27 of the current cluster
members. The mean heliocentric velocity calculated for A3733 results in
a mean cosmological redshift of $\,\overline{z}_{\mathrm CMB}=0.0380$
after correction to the CMB rest frame (Kogut et
al. 1993)\cite{Ko93}. At the cosmological distance of A3733, one Abell
radius, $r_{\mathrm A}$ ($\equiv 1.5\; h^{-1}$ Mpc), is equal to 0.805
degrees. The subset of 82 galaxies with $V_\hel<13\,000$ \kms\ has
$\vmean_\hel=11\,532^{+94}_{-89}$ \kms, $\sigma =754^{+65}_{-48}$ \kms,
$\overline{z}_{\mathrm CMB}=0.0385$, and $r_{\mathrm A}=0.812$
degrees.
The values of the kinematical parameters of the cluster have been
calculated without taking into account its dynamical state. Indeed, the
visual inspection of Fig.~1a yields a suggestive indication of deviations
of the velocity distribution from Gaussianity in the form of lighter
tails and a hint of multimodality. The verdict of the $B_2$ statistic,
which measures the amount of elongation in a sample relative to the
Gaussian, confirms the platykurtic behavior (i.e., $B_2<3$) of the
velocity histogram giving only a $0.001$ probability that it could have
arisen by chance from a parent Gaussian population. Nevertheless, the
results of the $B_1$ and $A^2$ tests do not indicate significant
departures from normality. As for the possible multimodality, the Dip
test cannot reject the unimodal hypothesis, nor do we detect the presence
of highly significant large gaps in the ordered velocities.
Comparable results are obtained if we remove from the sample of cluster
members those galaxies with strong emission lines in their
spectrum. Indeed, the spatial distribution and kinematic properties of
these latter galaxies are similar to those of the galaxies for which
only cross-correlation redshifts are available. Specifically, for the
12 cluster members with emission-line redshifts we find
$\vmean_\hel=11\,416^{+256}_{-218}$ \kms\ and $\sigma
=694^{+164}_{-101}$ \kms, while the remaining 62 galaxies have
$\vmean_\hel=11\,694^{+80}_{-83}$ \kms\ and $\sigma =594^{+44}_{-37}$
\kms.
\subsection{The magnitude-limited sample}
In order to mitigate the effects of incomplete sampling which may
contaminate the results of the statistical tests, especially of those
relying on local spatial information, we concentrate our subsequent
analysis on the subset of 37 members of A3733 with \bj\ $\leq 18$, for
which our original redshift sample contains 75\% of the COSMOS
galaxies. This magnitude limit is chosen as a compromise between
defining a sample (nearly) free of sampling biases and simultaneously
having a large enough number of objects for the detection of
substructure not to be affected by Poissonian errors.
For this sample, the Gaussianity tests confirm essentially the results
obtained for the whole set of cluster members: the $B_2$ test rejects
the Gaussian hypothesis at the 6\% significance level, while the $B_1$
and $A^2$ tests are consistent with a parent normal
population. Remarkably, the results of the other two 1D tests are now
substantially different: the Dip test rejects the hypothesis of
unimodality at the 4\% significance level, while a large gap of size
roughly 230 \kms\ ($g_\ast=3.14$, $p=0.002$) appears near the middle of
the distribution ($V_\hel\sim 11\,500$ \kms) of velocities.
The kinematical complexity of the inner regions of A3733 suggested by
these latter results is not reflected, however, on the spatial
distribution of the galaxies. The SSG test gives, for 1000 realizations
of the cluster generated by the azimuthal scrambling of the galaxy
positions around the location of the cD (see Sect. 2), a 56\% probability
that there is no substructure, which is nonsignificant. The resulting
$\lambda_{\mathrm min}$ of $16\farcm 7$ ($\equiv 0.52\,h^{-1}$~Mpc)
puts an upper limit to the half-coherence length of any possible clump
that may remain undetected in the central regions of A3733. This value
is above the typical scale-length of $\sim 0.3\,h^{-1}$~Mpc of the
clumps detected by Salvador-Sol\'e, Gonz\'alez-Casado, \& Solanes
(1993)\cite{SGS93} in the Dressler \& Shectman's (1988a)\cite{DS88a}
clusters. This suggests that the presence of significant substructure
in the magnitude-limited sample might be hidden by the large smoothing
scale imposed by the calculation of $\ndec(r)$. We have investigated
this possibility by applying also the SSG test to the sample containing
all the 74 cluster members, for which the minimum resolution-length
reduces to only $3\farcm 69$ ($\equiv 0.12\,h^{-1}$~Mpc). In spite of
the fact that this latter sample is biased towards the most populated
regions of A3733, therefore emphasizing any possible clumpiness of the
galaxy distribution on the plane of the sky, we still obtain a 14\%
probability for the null hypothesis.
\begin{figure*}
\resizebox{\hsize}{!}{\includegraphics{ds1509f2.eps}}
\caption{{\bf a} Spatial distribution of the 37 galaxies belonging
to the magnitude-limited sample (\bj\,$\le 18$) of A3733
members. Galaxies with $V_\hel\leq 11\,500$ \kms\ are identified by
empty circles, while solid circles mark the location of the galaxies
with $V_\hel > 11\,500$ \kms. Curves are equally spaced contours of
the adaptive kernel density contour map for this sample. The contours
range from $2.18\times 10^{-4}$ to $1.88\times 10^{-3}
\mbox{\,galaxies\,arcmin}^{-2}$. The initial smoothing scale is set
to $12\farcm 6$. {\bf b} Local deviations from the global kinematics
as measured by the DS test. Open circles drawn at the position of the
individual galaxies scale with the deviation of the local kinematics
from the global kinematics, $\di$, from which the test statistic
$\dobs=\sum\di$ is calculated (see text). The adaptive kernel
contour map is superposed (dashed lines). {\bf c and d} Monte-Carlo
models of the magnitude-limited sample obtained after 1000 random
shufflings of the observed velocities: {\bf c} model with the
cumulative deviation $\dsim$ closest to the median of the
simulations; {\bf d} model whose $\dsim$ is closest to the value of
the upper quartile. Spatial coordinates are relative to the cluster
center (see text).}
\label{Fig. 2}
\end{figure*}
The DS test also points to the lack of significant substructure in the
magnitude-limited sample: more than 15\% of the values of the statistic
$\dsim$ obtained in 1000 Monte-Carlo simulations of this sample are
larger than $\dobs$. A visual judgment of the statistical significance
of the local kinematical deviations can be done by comparing the plots
in Figs.~2a--d. Figure~2a shows the spatial distribution of the
galaxies superposed on their adaptive kernel density contour map (see
Beers 1992\cite{Be92} and references therein for a description of the
adaptive kernel technique). The primary clump in this map is centered
at the position of the cD galaxy and is elongated along the north-south
axis; a mild density enhancement can be seen at the plot coordinates
(17,-3). In this figure galaxies with $V_\hel\leq 11\,500$ \kms\ are
represented by empty circles, while solid circles mark the location of
those with $V_\hel > 11\,500$ \kms. Although there is no strong spatial
segregation among the galaxies belonging to each of these two velocity
subgroups, the galaxies included in the second one dominate the central
density enhancement. In Fig.~2b each galaxy is identified with a circle
whose radius is proportional to $\exp(\di)$, where $\di$ is
given by eq.~(\ref{delta}). Hence, the larger the circle, the larger
the deviation from the global values (but beware of the insensitivity
of the $\di$'s to the sign of the deviations from the mean cluster
velocity). The superposition of the projected density contours shows
that most of the galaxies to the north of the density peak, and to a
lesser extent those closest to the center of the eastern small density
enhancement, have apparently large local deviations from the global
kinematics. The remaining figures show two of the 1000 Monte-Carlo
models performed: Fig.~2c corresponds to the model whose $\dsim$ is
closest to the median of the simulations, while Fig.~2d corresponds to
the model with a $\dsim$ closest to the value of the upper
quartile. The comparison of Fig.~2b with these last two figures shows
that the observed local kinematical deviations are indeed not
significant.
As noted in the Introduction, Stein (1997)\cite{St97} did not find
any evidence of significant clumpiness in his A3733 OPTOPUS data either
(see his Table~3). Nevertheless, we caution that this previous study is
restricted to the innermost ($r\le 16\arcmin$) regions of the cluster
and that it uses, due to the small size of the sample, all the redshifts
available without regard to their completeness.
The results of all the statistical tests applied to our
magnitude-limited sample are summarized in Table~2, together with the
results obtained from the whole sample of cluster members, for
comparison. In Col.~(1) we list the name of the sample and in Col.~(2)
the number of galaxies in it. Columns (3)--(14) give the values of the
test statistic and associated significance levels for the $B_1$, $B_2$,
$A^2$, Dip, SSG, and DS tests, respectively. The significance levels
refer to the probability that the empirical value of a given statistic
could have arisen by chance from the null hypothesis. Thus, the smaller
the quoted probability the more significant is the departure from it.
\setcounter{table}{1}
\begin{table*}
\caption[]{Results of the statistical tests}
\begin{flushleft}
\begin{tabular}{ccccclccccccll}
\hline
\noalign{\smallskip}
\multicolumn{1}{c}{Sample} & \multicolumn{1}{c}{$N_{\mathrm gal}$}
& \multicolumn{1}{c}{$B_1$}
& \multicolumn{1}{c}{$p(B_1)$} & \multicolumn{1}{c}{$B_2$}
& \multicolumn{1}{c}{$p(B_2)$} & \multicolumn{1}{c}{$A^2$}
& \multicolumn{1}{c}{$p(A^2)$} & \multicolumn{1}{c}{Dip}
& \multicolumn{1}{c}{$p({\mathrm Dip})$} & \multicolumn{1}{c}{$\chi^2$}
& \multicolumn{1}{c}{$p(\chi^2)$}
& \multicolumn{1}{c}{$\dobs$}
& \multicolumn{1}{c}{$p(\dobs)$}\\
\multicolumn{1}{c}{(1)} &\multicolumn{1}{c}{(2)} &\multicolumn{1}{c}{(3)}
&\multicolumn{1}{c}{(4)} &\multicolumn{1}{c}{(5)} &\multicolumn{1}{c}{(6)}
&\multicolumn{1}{c}{(7)} &\multicolumn{1}{c}{(8)} &\multicolumn{1}{c}{(9)}
&\multicolumn{1}{c}{(10)} &\multicolumn{1}{c}{(11)}
&\multicolumn{1}{c}{(12)} &\multicolumn{1}{c}{(13)}
&\multicolumn{1}{c}{(14)}\\
\noalign{\smallskip}
\hline
\noalign{\smallskip}
Mag.-limited &37&0.20&0.28&2.08&0.06&0.52&0.18
&0.08&0.04&0.35&0.56&40.8&0.15\\
All members &74&0.00&0.50&1.99&0.001&0.51&0.20
&0.03&0.99&2.19&0.14&107.&0.003\\
\noalign{\smallskip}
\end{tabular}
\end{flushleft}
\label{Table2}
\end{table*}
\section{Summary}
We have reported 104 radial velocity measurements performed with the
MEFOS multifiber spectrograph at the 3.6-m ESO telescope for 99
galaxies in a region of $30\arcmin$ around the center of the cluster
A3733. To augment these data, we have combined the MEFOS measurements
with 39 redshifts measured by Stein (1996)\cite{St96} with the OPTOPUS
instrument at the same telescope. This has given a final dataset with a
total of 112 entries in the field of A3733. Radial velocities have been
then supplemented by COSMOS \bj\ magnitudes and accurate sky positions
in order to investigate the kinematics and structure of the central
regions of the cluster.
From a sample containing 74 strict cluster members, we have derived a
heliocentric systemic velocity for A3733 of $11\,653^{+74}_{-76}$ \kms,
resulting in a $\overline{z}_{\mathrm CMB}$ of 0.0380, and a velocity
dispersion of $614^{+42}_{-30}$ \kms, in good agreement with the
estimates by Stein (1997)\cite{St97} from the OPTOPUS data
alone. Statistical tests relying exclusively on the distribution of
observed velocities have yielded a suggestive indication of the possible
kinematical complexity of A3733, especially when applied to a nearly
complete magnitude-limited (\bj\ $\leq 18$) sample of cluster
members. Despite this result, two powerful substructure tests that
incorporate spatial information have failed to detect in this latter
sample any statistically significant evidence of clumpiness in the
galaxy component, in agreement with the findings of a previous study
based on a spatially less extended and less complete dataset. Given
that the sensitivity of the spatial substructure tests we have used is
reduced when the subunits are seen with small projected separations,
the results of the present study cannot exclude, however, the
possibility that the signs of kinematical complexity detected in the
velocity histogram of A3733 might be due to the existence of
galaxy subcondensations superposed along the line-of-sight.
\begin{acknowledgements}
This work has been supported by the Direcci\'on General de
Investigaci\'on Cient\'{\i}fica y T\'ecnica, under contract
PB96--0173. P.S. acknowledges partial support from a research network
grant by the Commission of the European Communities.
\end{acknowledgements}
\section{Introduction}
A central step in stochastic control problems concerns estimating \emph{expected costs-to-go} that are used to approximate the optimal feedback control. In simulation approaches to this question, costs-to-go are sampled by generating trajectories of the stochastic system and then regressed against the current system state. The resulting Q-values are finally ranked to find the action that minimizes expected costs.
\red{
When simulation is expensive, computational efficiency and experimental design become important. Sequential strategies rephrase learning the costs-to-go as another dynamic program, with actions corresponding to the sampling decisions. In this article, we explore a Bayesian formulation of this sequential design problem. The ranking objective imposes a novel loss function which mixes classification and regression criteria. Moreover, the presence of multiple stochastic samplers (one for each possible action) and a continuous input space necessitates the development of targeted response surface methodologies. In particular, a major innovation is modeling in parallel the spatial correlation within each Q-value, while utilizing a multi-armed bandit perspective for picking which sampler to call next.}
\red{To obtain a tractable approximation of the Q-values, we advocate the use of Gaussian process metamodels, viewing the latent response surfaces as realizations of a Gaussian random field. Consequently, the ranking criterion is formulated in terms of the posterior uncertainty about each Q-value. Thus, we connect metamodel uncertainty to the sampling decisions, akin to the discrete-state frameworks of ranking-and-selection and multi-armed bandits. Our work brings forth a new link between emulation of stochastic simulators and stochastic control, offering a new class of approximate dynamic programming algorithms.}
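To fix ideas, the sketch below fits a Gaussian process metamodel to noisy samples of a single toy Q-value surface on $\mathcal{X}=[0,1]$. The squared-exponential kernel, its hyperparameters, the noise level, and the sine response are all illustrative assumptions rather than the specification adopted later in the paper.

```python
import numpy as np

def sq_exp_kernel(a, b, ell=0.2, s2=1.0):
    """Squared-exponential covariance k(a,b) = s2 * exp(-(a-b)^2 / (2 ell^2))."""
    d = a[:, None] - b[None, :]
    return s2 * np.exp(-0.5 * (d / ell) ** 2)

def gp_posterior(x_tr, y_tr, x_te, noise_var=0.04):
    """Posterior mean and pointwise variance of a GP fit to noisy samples."""
    K = sq_exp_kernel(x_tr, x_tr) + noise_var * np.eye(len(x_tr))
    Ks = sq_exp_kernel(x_te, x_tr)
    mean = Ks @ np.linalg.solve(K, y_tr)
    cov = sq_exp_kernel(x_te, x_te) - Ks @ np.linalg.solve(K, Ks.T)
    return mean, np.diag(cov)

# Noisy samples of a single toy Q-value surface mu(x) = sin(2 pi x):
rng = np.random.default_rng(0)
x_tr = rng.uniform(0.0, 1.0, 30)
y_tr = np.sin(2.0 * np.pi * x_tr) + 0.2 * rng.normal(size=30)
x_te = np.linspace(0.0, 1.0, 101)
mean, var = gp_posterior(x_tr, y_tr, x_te)
print(float(np.max(np.abs(mean - np.sin(2.0 * np.pi * x_te)))))
```

The posterior variance returned alongside the mean is precisely the metamodel uncertainty that drives the sampling decisions discussed above.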
\subsection{Abstract Ranking Problem}
Let $\mu_\ell : \mathcal{X} \to \mathbb{R}$, $\ell \in \mk{L} \equiv \{1,2, \ldots, L\}$ be $L$ smooth functions over a subset $\mathcal{X}$ of $\mathbb{R}^d$. We are interested in the problem of learning the resulting \emph{ranking} of $\mu_\ell$ over the input space $\mathcal{X}$, namely finding the classifier
\begin{equation}\label{def_cal}
\mathcal{C}(x) := \arg\min_\ell \left\{\mu_\ell(x)\right\} \in \mk{L}.
\end{equation}
The functions $\mu_\ell$ are a priori unknown but can be noisily sampled. That is, for any $x \in \mathcal{X}, \ell \in \mk{L}$ we have access to a simulator $Y_\ell(x)$ which generates estimates of $\mu_\ell(x)$:
\begin{equation}\label{def_Y}
Y_\ell(x) = \mu_\ell(x) + \epsilon_\ell(x), \ \ell \in \mk{L}
\end{equation}
where $\epsilon_\ell$ are independent, mean zero random variables with variance $\sigma_\ell^2(x)$.
Intuitively speaking, we have $L$ smooth hyper-surfaces on $\mathcal{X}$ that can be sampled via Monte Carlo. In the dynamic programming context, $x$ is the system state, $\ell$ indexes the various actions available to the controller, $\mu_\ell(\cdot)$ represents the expected costs-to-go and $\epsilon_\ell(\cdot)$ captures the simulation noise arising from pathwise simulation of the underlying stochastic system and corresponding costs.
Our goal is to identify the minimal surface \emph{globally} over the entire input space. More precisely, we seek to assign at each $x\in \mathcal{X}$ a label $\hat{\mathcal{C}}(x)$, while optimizing the loss metric
\begin{equation}\label{eq:loss}
\mathcal{L}(\hat{\mathcal{C}}, \mathcal{C}) := \int_\mathcal{X} \left\{ \mu_{\hat{\mathcal{C}}(x)}(x) -\mu_{\mathcal{C}(x)}(x) \right\} \; F(\,\mathrm{d} x),
\end{equation}
\red{where $F(\cdot)$ is a specified weight function on $\mathcal{X}$ determining the relative importance of ranking different regions. Thus, the loss is zero if the ranking is correct $\hat{\mathcal{C}}(x) = \mathcal{C}(x)$, and otherwise is proportional to the (positive) difference between the selected response and the true minimum $\mu_{\hat{\mathcal{C}}}-\mu_{\mathcal{C}}$. The above criterion aims to identify the optimal action $\ell^*(x) \equiv \mathcal{C}(x)$ to take in state $x$; if the wrong action $\hat{\mathcal{C}}(x)$ is chosen instead, then \eqref{eq:loss} captures the resulting integrated loss to the controller assuming a probability distribution $F(\cdot)$ of potential states $x$.}
The loss function in \eqref{eq:loss} blends regression and classification objectives. In regression, one seeks to estimate the response marginally with the loss function tied to a single surface $\mu_\ell(\cdot)$. Instead, \eqref{eq:loss} is only about correctly identifying the index of the minimal response. As a result, small estimation errors are tolerated as long as the minimal response does not change, leading to a thresholding behavior in the loss function. In classification the loss function is discrete (typically with fixed mis-classification penalties), whereas \eqref{eq:loss} takes losses \emph{proportional} to the mis-classification distance $\mu_{\hat{\mathcal{C}}(x)}(x)-\mu_{{\mathcal{C}}(x)}(x)$.
A further key distinction is that in classification the sampling space is just $\mathcal{X}$ (returning a noisy label $C(x) \in \mk{L}$), whereas in our context a sampling query consists of the \emph{location-index} pair $(x,\ell) \in \mathcal{X} \times \mk{L}$, sampling one response at a time. The question of which surface to sample requires separate analysis over $\mk{L}$.
\red{We analyze the design problem of constructing efficient sampling strategies that can well-estimate $\mathcal{C}(\cdot)$ while optimizing the number of Monte Carlo samples needed. Because $\mu_\ell(\cdot)$'s are unknown, we frame \eqref{eq:loss} as a Bayesian sequential learning problem of adaptively growing a design $\mathcal{Z}$ that quickly learns $\mathcal{C}(x)$. Classical static, i.e.~response-independent, designs are inadequate for ranking since the whole essence of optimizing computational efforts is predicated on \emph{learning} the structure of the unknown $\mu_\ell$'s. Intuitively, learning manifests itself in focusing the sampling through discriminating both in the input space $\mathcal{X}$ (focus on regions where identifying $\mathcal{C}(x)$ is difficult) and in the sampling indices $\mk{L}$ (focus on the surfaces where $\mu_\ell$ is likely to be the smallest response).}
Due to the joint design space $\mathcal{X} \times \mk{L}$, our problem allows for a dual interpretation. Fixing $\ell$, \eqref{def_cal} is about reconstructing an unknown response surface $x \mapsto \mu_\ell(x)$ through noisy samples. Aggregating the different response surfaces, sequential design over $\mathcal{X}$ reduces to identifying the partition $\mathcal{X} = \cup_{i=1}^L \mathcal{C}_i$ into the sets
\begin{align}
\mathcal{C}_i := \{ x : \mathcal{C}(x) = i \} = \{ x : \mu_{\mathcal{C}(x)}(x) = \min_\ell \mu_\ell(x) = \mu_i(x) \}, \quad i =1,\ldots,L.
\end{align}
Because in interiors of the partitions $\mathcal{C}_i$ the ranking $\mathcal{C}(x)$ is easier to identify, the main problem is to find the partition boundaries $\partial \mathcal{C}_i$. \red{ As a result, \eqref{def_cal} is related to contour-finding, for which sequential design was studied in \cite{GL13,Picheny10,RanjanBingham08}. Standard contour-finding attempts to identify the level set $\{ \mu_1(x) = a \}$ of the response surface, which corresponds to $L=2$ with known $\mu_2(x) = a$ in \eqref{def_cal}. Hence, the analysis herein can be viewed as a multi-variate extension of contour finding. In turn, contour-finding generalizes the classical objective of minimizing a noisy response, connecting to the expected improvement/information gain trade-off in simulation optimization. In particular, we re-purpose the active learning rules of \cite{cohn:1996,Mackay92}.}
\red{
Conversely, fixing $x$, the aim of determining the smallest response $\arg\min_\ell \mu_\ell(x)$ corresponds to the setting of multi-armed bandits (MAB). The bandit has $L$ arms and corresponding payoffs $\mu_\ell(x), \ell \in \mk{L}$, with the decision-theoretic objective \eqref{def_cal} known as the pure exploration problem \cite{BubeckMunos11,BubeckMunos11X}. Decision policies for which arm to pull are usually expressed in terms of posterior mean and confidence about the respective payoff; this point of view motivates our use of Gap-Upper Confidence Bound (UCB) design strategies \cite{Auer02,SRKRKASE:12}. Compared to this literature, \eqref{eq:loss} contains two key differences. First, the loss function is a weighted pure-regret criterion which to our knowledge has not previously been used in the MAB context. Second, instead of a single bandit with independent arms, we treat the substantially more general setting of a continuum of bandits indexed by $x \in \mathcal{X}$. Recently, \cite{GrunewalderAudibert10,GabillonBubeck11} considered multiple bandits which can be viewed as \eqref{def_cal} with a discrete, non-metrized $\mathcal{X}$. We generalize their setting to a continuous $\mathcal{X}$, with a spatial correlation structure of the arms. }
\subsection{Summary of Approach}
To handle continuous state spaces $x \in \mathcal{X}$ that appear in stochastic control, we adopt the framework of kriging or Gaussian process (GP) regression for modeling the Q-values. In both contexts of Design of Experiments (DoE) and continuous MAB's, kriging models have emerged as perhaps the most popular framework \cite{WilliamsRasmussenBook}. In particular, kriging has been used extensively for sequential regression designs
as it allows an intuitive approach to borrow information spatially across samples to build global estimates of the entire response surface $\mu_\ell$. Two further advantages are the analytic structure of Gaussian processes, which permits closed-form evaluation of many Expected Improvement criteria, and the ability to naturally transition between modeling of deterministic (noise-free) experiments, where data needs to be interpolated, and stochastic simulators, where data smoothing is additionally required.
More generally, we suggest a Bayesian perspective to global ranking, viewing the response surfaces as realizations of a random variable taking values in a given function space. This offers a tractable quantification of posterior metamodel uncertainty and related sequential metrics for determining the minimum surface. Thus, we stress that kriging is not essential to implementation of our algorithms; for example competitive alternatives are available among tree-based models, such as dynamic trees \cite{GTP-trees11} and Bayesian trees \cite{chip:geor:mccu:2010}. Moreover, while classical kriging may not be flexible enough for some challenging problems, there are now several well-developed generalizations, including treed GPs \cite{tgpPackage}, local GPs \cite{gramacy:apley:2013}, and particle-based GPs \cite{GramacyPolson11}, all offering off-the-shelf use through public \texttt{R} packages.
Following the Efficient Global Optimization approach \cite{JonesSchonlauWelch98}, we define expected improvement scores that blend together the local complexity of the ranking problem and the posterior variance of our estimates. In particular, we rely on the expected \emph{reduction} in posterior variance and borrow from the Stepwise Uncertainty Reduction criteria based on GP regression from \cite{PichenyGinsbourger13,ChevalierPicheny13}. We also investigate UCB-type heuristics \cite{Auer02} to trade-off exploration and exploitation objectives. Based on the above ideas, we obtain a number of fully sequential procedures that specifically target efficient learning of $\mathcal{C}(\cdot)$ over the entire design space $\mathcal{X}$. Extensive numerical experiments are conducted to compare these proposals and identify the most promising solutions.
As explained, our algorithms are driven by the exploration-exploitation paradigm quantified in terms of (empirically estimated) local ranking complexity for $\mathcal{C}(x)$ and confidence in the estimated $\hat{\mathcal{C}}$. To quantify the local ranking complexity, we use the \emph{gaps} $\Delta(x)$ \cite{GabillonBubeck11,CarpentierLazaric11,HoffmanDeFreitas13}. For any $x \in \mathcal{X}$, denote by $\mu_{(1)}(x) < \mu_{(2)}(x) < \ldots < \mu_{(L)}(x)$ the ranked responses at $x$ and by $$\Delta(x) :=\mu_{(2)}(x) - \mu_{(1)}(x)$$ the gap between the best (smallest) and second-best response. $\Delta(x)$ measures the difficulty in ascertaining $\mathcal{C}(x)$: for locations where $\mu_{(2)} - \mu_{(1)}$ is \emph{big}, we do not need high fidelity, since the respective minimal response surface is easy to identify; conversely for locations where $\mu_{(2)} - \mu_{(1)}$ is small we need more precision. Accordingly, we wish to preferentially sample where $\Delta(x)$ is small. This is operationalized by basing the experimental design decisions on the estimated gaps $\widehat{\Delta}(x)$.
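As a minimal illustration of the gap statistic (the numbers below are generic test values, not tied to any particular model), sorting the responses at a fixed $x$ yields the positive gap between the two smallest responses directly:

```python
def gap(responses):
    # Positive gap between the smallest and second-smallest response at a fixed x
    srt = sorted(responses)
    return srt[1] - srt[0]

print(gap([3.0, 1.0, 2.5]))   # 1.5: the surface with value 1.0 is clearly minimal
print(gap([1.1, 1.0, 2.5]))   # small gap: ranking here is hard, sample more
```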
\red{In terms of design over $\mk{L}$, exploration suggests to spend the budget on learning the responses offering the biggest information gain. Namely, substantial benefits are available by discriminating over the sampling indices $\ell$ through locally concentrating on the (two) most promising surfaces $\mu_{(1)}, \mu_{(2)}$. This strategy is much more efficient than
the naive equal sampling of each $Y_\ell$}. In addition, since the noise level in $Y_\ell$ may vary with $\ell$ this must also be taken into account. Summarizing, our expected improvement metrics blend empirical gaps $\widehat{\Delta}$ and empirical posterior uncertainty based on kriging variance $\delta_\ell(x)$, jointly discriminating across $\mathcal{X} \times \mk{L}$.
Our contributions can be traced along three directions. First, we introduce and analyze a novel sequential design problem targeting the loss function \eqref{eq:loss}. This setting is motivated by dynamic programming algorithms where statistical response models have been widely applied since the late 1990s \cite{Egloff05,Longstaff}. Here we contribute to this literature by proposing a Bayesian sequential design framework that generates substantial computational savings. This aspect becomes especially crucial in complex models where simulation is expensive and forms the main computational bottleneck. Second, we generalize the existing literature on Bayesian optimization and contour-finding to the multi-surface setting, which necessitates constructing new EI measures that address joint design in space and index dimensions. We demonstrate that this allows for a double efficiency gain: both in $\mathcal{X}$ and in $\mk{L}$. Third, we extend the multiple bandits problem of \cite{GabillonBubeck11}
to the case of a continuum of bandits, which requires building a full meta-model for the respective arm payoffs. Our construction offers an alternative to the recent work \cite{BubeckMunos11X} on $\mathcal{X}$-armed bandits and opens new vistas regarding links between MAB and DoE.
Our approach also generalizes Gramacy and Ludkovski \cite{GL13}. The latter work proposed sequential design for the contour-finding case where the design is solely over the input space $\mathcal{X}$. In that context \cite{GL13} introduced several EI heuristics and suggested the use of dynamic trees for the response modeling. The framework herein, however, requires a rather different approach; in particular we emphasize the bandit-inspired tools (such as UCB) that arise with simultaneous modeling of multiple response surfaces.
The rest of the paper is organized as follows. Section \ref{sec:model} describes the kriging response surface methodology that we employ, as well as some analytic formulas helpful in the context of ranking. Section \ref{sec_estEI} then develops the expected improvement heuristics for \eqref{def_cal}. Sections \ref{sec:toy} and \ref{sec:epi} illustrate the designed algorithms using synthetic data (where ground truth is known), and a case-study from epidemic management, respectively. Finally, Section \ref{sec:conclude} concludes.
\subsection{Connection to Stochastic Control}\label{sec:control}
\red{Consider the objective of minimizing total costs associated with a controlled state process $X$,}
\begin{align}
c(0; u_{0:T}) = \sum_{t=0}^T g(t, X_t, u_t)
\end{align}
on the horizon $\{0,1,\ldots,T\}$. Above $g(t,x,u)$ encodes the stagewise running costs, $u_{0:T}$ is the control strategy taking values in the finite action
space $u_t \in \mk{L}$, and $X_t \equiv X^{u}_t$ is a stochastic discrete-time
Markov state process with state space $\mathcal{X} \subseteq
\mathbb{R}^d$. The dynamics of $X^u$ are of the form
$$X^{u}_{t+1} =
F(X_t,u_t, \xi_{t+1})$$ for some map $F: \mathcal{X} \times \mk{L} \times \mathbb{R} \to \mathcal{X}$, where $\xi_{t+1}$ is an independent, centered random noise source.
The performance criterion minimizes \emph{expected} costs, captured in the value function $V(0,x)$,
$$
V(t,x) := \inf_{u_{t:T} \in \mathcal{U}} \mathbb{E}[ c(t; u_{t:T}) | X_t = x], \qquad t \in \{0,1,\ldots,T\}, x \in \mathcal{X},
$$
over all admissible closed-loop Markov strategies $u_{t:T} \in \mathcal{U}$.
Thus, at time $t$, the action $u_t \equiv u(t,X_t)$ is specified in feedback form as a function of current state $X_t$. The policy map $(t,x) \mapsto u^*(t,x)$ translates system states into actions and is related to the value function via the dynamic
programming equation (DPE):
\begin{align}\label{eq:dpe}
V(t,x) &= \min_{u \in \mk{L}} \left\{ g(t,x,u) + \mathbb{E}_t \left[ V(t+1, X^u_{t+1}) \right](x) \right\} = \mu_{u^*}(x;t), \\
\text{with} & \quad \qquad \mu_u(x; t) := g(t,x,u) + \mathbb{E}_t[ V(t+1, X^u_{t+1})](x). \label{eq:cost-to-go}
\end{align}
The notation $\mathbb{E}_t[\cdot ](x) \equiv \mathbb{E}[ \cdot | X_t=x]$ is meant to emphasize averaging of the
stochastic future at $t+1$ based on time-$t$ information summarized by the system state $X_t=x$. The term $\mu_u(x; t)$ is the Q-value, providing the expected cost-to-go if one applies action $u \in \mk{L}$ at $X_t=x$.
\red{Solving the DPE is equivalent to computing the Q-values since by \eqref{eq:dpe}, $V(t,x) = \min_{\ell \in \mk{L}} \{ \mu_\ell(x; t) \}$. The ranking problem in \eqref{def_cal} then corresponds to the policy map $x \mapsto u^*(t,x)$ that partitions the state space $\mathcal{X}$ into
$L$ {action sets} $\mathcal{C}_i(t)$.
Given $u^*(s,\cdot)$ for all $s=t+1,\ldots, T$ and all $x \in \mathcal{X}$ (initialized via $V(T,x) = g(T,x)$), we observe that
\begin{align}\label{eq:ls}
\mu_u(x; t) = g(t,x,u)+\mathbb{E}_t \left[ \sum_{n={t+1}}^T g(n,X^{\widetilde{u}}_{n},\widetilde{u}_{n} ) \right](x),
\end{align}
where $(\widetilde{u}_t)$ is a strategy that uses action $u$ at $t$ and $u^*(s,X_s)$ thereafter, $s>t$. Indeed, the sum in \eqref{eq:ls} is precisely the random variable for the pathwise costs-to-go $c(t,u_{t:T})$.
The loss \eqref{eq:loss} is then the difference between acting optimally as $u^*(t,X_t)$ at $t$, vis-a-vis taking action $\ell$ (and then acting optimally for the rest of the future, $\{t+1,\ldots,T\}$), weighted by the distribution $F(\,\mathrm{d} x)$ of $X_t$.}
The formulation \eqref{eq:ls} allows pursuit of \emph{policy search} methods by tying the accuracy in \eqref{eq:cost-to-go} not to the immediate fidelity of (estimated) Q-values $\mu_u(\cdot; t)$, but to the quality of the policy map ${u}^*(t,x)$. Namely, one iteratively computes approximate policy maps $\hat{u}(s,\cdot)$ for $s=T-1, T-2, \ldots$, using \eqref{eq:ls} to construct $\hat{u}(t,\cdot)$ based on $\{ \hat{u}(s,\cdot) : s > t\}$. Note that the original objective of finding $V(0,x)$ requires solving $T$ ranking problems of the form \eqref{def_cal}.
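The pathwise representation \eqref{eq:ls} suggests a simple Monte Carlo sampler for Q-values: fix the action at time $t$ and follow a given policy thereafter. Below is a hedged sketch; the cost function, transition map, and policy are user-supplied placeholders rather than components of any specific model.

```python
import random

def simulate_Q(t, x, u, u_star, T, g, step, n_paths=2000, seed=0):
    """Monte Carlo estimate of the Q-value mu_u(x; t) via (eq:ls):
    apply action u at time t, then follow the policy u_star on {t+1,...,T}."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n_paths):
        state = x
        cost = g(t, state, u)            # stage cost of the fixed action u
        state = step(state, u, rng)      # X_{t+1} = F(X_t, u, xi_{t+1})
        for s in range(t + 1, T + 1):
            a = u_star(s, state)         # act per the given policy after t
            cost += g(s, state, a)
            state = step(state, a, rng)
        total += cost
    return total / n_paths
```

Ranking such estimates over $u \in \mk{L}$ at many states $x$ is precisely an instance of \eqref{def_cal}.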
This approach to dynamic programming is especially attractive when the action space $\mk{L}$ is very small. A canonical example is the optimal stopping problem, where $\mk{L} = \{\mathrm{stop}, \mathrm{continue}\}$, i.e.~$L=2$. For a single stopping decision, the immediate reward $\mu_2(x;t)$ is typically given, leading to the case of estimating a single Q-value $\mu_1(x;t)$, see \cite{GL13}. Multiple stopping problems where both $\mu_1$ and $\mu_2$ need to be estimated arise in the pricing of swing options
\cite{MeinshausenHambly04}, valuation of real options \cite{AidLangrene12}, and
optimization of entry-exit trading strategies \cite{ZervosJohnson12}. The case $L>2$ was considered for the valuation of
energy assets, especially gas storage \cite{Secomandi11}, which leads
to optimal switching problems. For example, storage decisions are usually modeled in terms of the triple alternative $L=3$ of $\{\mathrm{inject}, \mathrm{do\text{-}nothing}, \mathrm{withdraw}\}$. Small action spaces also arise in many
engineering settings, such as target tracking \cite{AndersonMilutinovic11,HLQ15}, and sensor management \cite{VeeravalliFuemmeler08}.
\section{Statistical Model}\label{sec:model}
\subsection{Sequential Design}\label{sec:seq-design}
Fix a configuration $\{\mu_\ell, \ell=1,\ldots,L\}$ and corresponding classifier $\mathcal{C}(\cdot)$.
A design of size $K$ is a collection $\mathcal{Z}^{(K)} := (x,\ell)^{1:K}$, $x\in \mathcal{X}, \ell \in \mk{L}$, with superscripts denoting vectors. Fixing $\mathcal{Z}^{(K)}$, and conditioning on the corresponding samples $Y^{1:K} \equiv (Y_{\ell^{k}}(x^{k}))_{k=1}^K$,
let $\hat{\mathcal{C}}^{(K)} \equiv \hat{\mathcal{C}}(Y^{1:K}, \mathcal{Z}^{(K)})$ be an estimate of $\mathcal{C}$. We aim to minimize the expected loss $\mathcal{L}( \hat{\mathcal{C}}(\cdot, \mathcal{Z}^{(K)}), \mathcal{C})$ over all designs of size $K$, i.e.~
\begin{align}\label{global-design}
\inf_{\mathcal{Z}: | \mathcal{Z}| = K} \mathbb{E} \left[ \mathcal{L}(\hat{\mathcal{C}}(Y^{1:K}, \mathcal{Z}), \mathcal{C}) \right],
\end{align}
where the expectation is over the sampled responses $Y^{1:K}$. To tackle \eqref{global-design} we utilize sequential algorithms that iteratively augment the designs $\mathcal{Z}$ as $Y$-samples are collected.
The interim designs $\mathcal{Z}^{(k)}$ are accordingly indexed by their size $k$, where $k=K_0, K_0+1, \ldots, K$. At each step, a new location $(x^{k+1}, \ell^{k+1})$ is added and the estimate $\hat{\mathcal{C}}^{(k+1)}$ is recomputed based on the newly obtained information. The overall procedure is summarized by the following pseudo-code:
\begin{enumerate}
\item Initialize $\mathcal{Z}^{(K_0)}$ and $\hat{\mathcal{C}}^{(K_0)}$
\item LOOP for $k=K_0,\ldots$
\begin{enumerate}
\item Select a new location $(x^{k+1},\ell^{k+1})$ and sample corresponding $y^{k+1} := Y_{\ell^{k+1}}(x^{k+1})$
\item Augment the design $\mathcal{Z}^{(k+1)} = \mathcal{Z}^{(k)} \cup \{ (x^{k+1},\ell^{k+1})\}$
\item Update the classifier $\hat{\mathcal{C}}^{(k+1)}= \hat{\mathcal{C}}( Y^{1:(k+1)}, \mathcal{Z}^{(k+1)})$ by assimilating the new observation
\end{enumerate}
\item END Loop
\end{enumerate}
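A generic implementation of this loop can be sketched as follows; the \texttt{sampler}, \texttt{acquire}, and \texttt{fit} callables are placeholders for the simulator $Y_\ell$, an acquisition rule, and the metamodel/classifier update, respectively.

```python
def sequential_design(sampler, acquire, fit, K, init_design):
    """Skeleton of the sequential design loop above.
    sampler(x, ell) -> one noisy draw of Y_ell(x)
    acquire(model)  -> next location-index pair (x, ell)
    fit(design, ys) -> refitted metamodel / classifier estimate
    """
    design = list(init_design)                # Z^{(K_0)}
    ys = [sampler(x, l) for (x, l) in design]
    model = fit(design, ys)                   # initial estimate C-hat^{(K_0)}
    while len(design) < K:
        x, l = acquire(model)                 # choose (x^{k+1}, ell^{k+1})
        design.append((x, l))                 # augment the design
        ys.append(sampler(x, l))              # sample Y_{ell^{k+1}}(x^{k+1})
        model = fit(design, ys)               # assimilate the new observation
    return model, design
```

The greedy criterion \eqref{seq-design} below corresponds to one particular choice of \texttt{acquire}.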
The basic greedy sampling algorithm adds locations with the aim of minimizing the myopic expected estimation error. More precisely, at step $k$, given design $\mathcal{Z}^{(k)}$ (and corresponding $Y^{1:k}$), the next pair $\left(x^{k+1},\ell^{k+1}\right)$ is chosen by
\begin{align}\label{seq-design}
\arginf_{(x^{k+1},\ell^{k+1}) \in \mathcal{X} \times \mk{L}} \mathbb{E} \left[ \mathcal{L}( \hat{\mathcal{C}}( Y^{1:(k+1)}, \mathcal{Z}^{(k+1)}), \mathcal{C}) \right],
\end{align}
where the expectation is over the next sample $Y_{\ell^{k+1}}(x^{k+1})$. This leads to a simpler one-step-ahead optimization compared to the $K$-dimensional (and typically we are looking at $K \gg 100$) formulation in \eqref{global-design}. Unfortunately, the optimization in
\eqref{seq-design} is still generally intractable because it requires
\begin{itemize}
\item re-computing the full loss function $\mathcal{L}(\cdot, \mathcal{C})$ at each step;
\item finding the expected change in $\hat{\mathcal{C}}$ given $Y_{\ell^{k+1}}(x^{k+1})$;
\item integrating over the (usually unknown) distribution of $Y_{\ell^{k+1}}(x^{k+1})$;
\item optimizing over the full $(d+1)$-dimensional design space $\mathcal{X} \times \mk{L}$.
\end{itemize}
We accordingly propose efficient numerical approximations to \eqref{seq-design}, relying on the twin ideas of (i) sequential statistical modeling (i.e.~computing and updating $\hat{\mathcal{C}}$ as $\mathcal{Z}$ grows), and (ii) stochastic optimization (i.e.~identifying promising new design sites $(x,\ell)$).
\subsection{Response Surface Modeling}\label{sec_reg}
A key aspect of sequential design is adaptive assessment of approximation quality in order to maximize information gain from new samples. Consequently, measuring predictive uncertainty is central to picking $(x^{k+1}, \ell^{k+1})$. For that purpose, we use a Bayesian paradigm, treating $\mu_\ell$ as random objects. Hence, we work with a function space $\mathcal{M}$ and assume that $\mu_\ell \in \mathcal{M}$ with some prior distribution $\mathcal{F}_0$.
Thus, for each $x$, $\mu_\ell(x)$ is a random variable whose posterior distribution is updated based on the collected information from samples $(x, \ell, y_\ell(x))$. Given the information generated by the $k^{th}$-step design $\mathcal{Z}^{(k)}$, $\mathcal{F}_k = \sigma\left\{Y_\ell(x): (x,\ell) \in \mathcal{Z}^{(k)}\right\}$, we define the posterior $M^{(k)}_\ell(x) \sim \mu_\ell(x) | \mathcal{F}_k$. The random variable $M^{(k)}_\ell(x)$ is the belief about $\mu_\ell(x)$ conditional on $\mathcal{F}_k$; its first two moments are referred to as the kriging mean and variance respectively,
\begin{align}
\widehat{\mu}^{(k)}_\ell(x) & := \mathbb{E}[ \mu_\ell(x) | \mathcal{F}_k],\\
\delta^{(k)}_\ell(x)^2 & :=\mathbb{E}[ (\mu_\ell(x)-\widehat{\mu}^{(k)}_\ell(x))^2 | \mathcal{F}_k].
\end{align}
We will use $\widehat{\mu}_\ell(x)$ as a point estimate of $\mu_\ell(x)$, and $\delta_\ell(x)$ as a basic measure of respective uncertainty. The overall global map $x \mapsto M^{(k)}_\ell(x)$ is called the $\ell^{th}$ kriging surface. Note that while there is a spatial correlation structure over $\mathcal{X}$, we assume that observations are independent across $\mk{L}$ (so sample noise $\epsilon_\ell \perp\!\!\perp \mu_\ell$), so that the posteriors $M^{(k)}_\ell(x)$, $\ell=1,2,\ldots$ are independent.
The order statistics $\widehat{\mu}_{(1)}(x) \le \widehat{\mu}_{(2)}(x) \le \ldots$ describe the sorted posterior means at a fixed $x$. A natural definition is to announce the minimum estimated surface
\begin{equation}\label{def_estcal}
\hat{\mathcal{C}}(x) := \arg\min_\ell\left\{\widehat{\mu}_\ell(x)\right\},
\end{equation}
i.e.~the estimated classifier $\hat{\mathcal{C}}$ corresponds to the smallest posterior mean, so that $\widehat{\mu}_{\hat{\mathcal{C}}(x)}(x) = \widehat{\mu}_{(1)}(x)$. On the other hand, the uncertainty about $\mathcal{C}(x)$ can be summarized through the expected minimum of the posteriors $M_1, M_2, \ldots, M_L$,
\begin{align}\label{eq:min-gaussian}
m^{(k)}(x):=\mathbb{E}[M^{(k)}_{(1)}(x)]=\mathbb{E}[ \min(\mu_1(x),\ldots,\mu_L(x)) | \mathcal{F}_k].
\end{align}
Observe that $\mathbb{E}[ \min_\ell \mu_\ell(x) | \mathcal{F}_k] = m^{(k)}(x) \le \widehat{\mu}^{(k)}_{(1)}(x) = \min_\ell \mathbb{E}[ \mu_\ell(x) |\mathcal{F}_k]$, and we accordingly define the M-gap (``M'' for minimum)
\begin{align}
\label{M-Gap} \mathcal{M}(x) := \widehat\mu_{(1)}(x) -m(x) \ge 0.
\end{align}
The M-gap measures the difference between expectation of the minimum and the minimum expected response, which precisely corresponds to the Bayesian expected loss at $x$ in \eqref{eq:loss}. This fact offers an empirical analogue $\mc{EL} (\hat{\mathcal{C}})$ of the original loss function $\mathcal{L}(\hat{\mathcal{C}}, \mathcal{C})$ in \eqref{eq:loss},
\begin{align}\label{eq:emploss}
\mc{EL}(\hat\mathcal{C}) := \int_{\mathcal{X}} \mathcal{M}(x) F(\,\mathrm{d} x).
\end{align}
The above formula translates the local accuracy of the kriging surface into a global measure of fidelity of the resulting classifier $\hat{\mathcal{C}}$ and will be the main performance measure for our algorithms.
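Since $m(x)$ in \eqref{eq:min-gaussian} has no simple closed form for general $L$, the M-gap can always be estimated by simulating the independent Gaussian posteriors; the means and standard deviations below are illustrative inputs, not fitted values.

```python
import random

def m_gap(mu_hat, delta, n_draws=20000, seed=1):
    """Monte Carlo estimate of the M-gap (M-Gap): smallest posterior mean minus
    the expected minimum of independent Gaussians N(mu_hat[l], delta[l]^2)."""
    rng = random.Random(seed)
    exp_min = 0.0
    for _ in range(n_draws):
        exp_min += min(rng.gauss(m, s) for m, s in zip(mu_hat, delta))
    exp_min /= n_draws
    return min(mu_hat) - exp_min

print(m_gap([0.0, 5.0], [1e-9, 1e-9]))   # well-separated and certain: gap near 0
print(m_gap([0.0, 0.0], [1.0, 1.0]))     # overlapping posteriors: sizable gap
```

Integrating such pointwise estimates over a grid with weights $F$ then approximates \eqref{eq:emploss}.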
\subsection{Kriging}
The response surfaces are assumed to be smooth in $\mathcal{X}$. As a result, information about $\mu_\ell(x')$ is also revealing about $\mu_\ell(x)$ for $x\neq x'$, coupling observations at different sites. To enforce such conditions without a parametric representation,
we view each $\mu_\ell$ as a sample from a Gaussian process (GP).
A GP is specified by its trend or mean function $t_\ell(x) = \mathbb{E}[ \mu_\ell(x)]$ and a covariance structure $\mathcal{K}_\ell : \mathcal{X}^2 \to \mathbb{R}$, with $\mathcal{K}_\ell(x,x') = \mathbb{E}[ (\mu_\ell(x)-t_\ell(x))(\mu_\ell(x')-t_\ell(x'))]$.
By specifying the correlation behavior, the kernel $\mathcal{K}$ encodes the smoothness of the response surface.
Fix the response surface index $\ell$ and let $\vec{y} = (y(x^{1}), \ldots, y(x^{n}))^T$ denote the observed samples at locations $\vec{x} = x^{1:n}$. These realizations are modeled as in \eqref{def_Y} with the response represented as
$$\mu_\ell(x) = t_\ell(x) + Z_\ell(x), $$
where $t_\ell(\cdot)$ is a fixed trend term and $Z_\ell(\cdot)$ is a realization of a Gaussian process. Given the samples $(x,y)^{1:n}$, the posterior of $\mu_\ell$ again forms a GP; in other words any collection $M^{(n)}_\ell(x'_1), \ldots, M^{(n)}_\ell(x'_k)$ is multivariate Gaussian with mean $\widehat{\mu}^{(n)}_\ell(x'_i)$, covariance $v^{(n)}_\ell(x'_i,x'_j)$, and variance $\delta^{(n)}_\ell(x'_i)^2$, specified by \cite[Sec.~2.7]{WilliamsRasmussenBook} (see also \cite{NelsonStaum10}):
\begin{align}\label{eq:krig-mean}
\widehat{\mu}^{(n)}_\ell(x'_i) &= t_\ell(x'_i) + \vec{k}^{(n)}_\ell(x'_i)^T (\mathbf{K}_\ell + \mathbf{\Sigma}_\ell^{(n)})^{-1} (\vec{y} - \vec{t}_\ell^{(n)} ) \\ \label{eq:krig-cov}
v^{(n)}_\ell(x'_i,x'_j) & = {\mathcal{K}}_\ell(x'_i,x'_j) - \vec{k}_\ell^{(n)}(x'_i)^T (\mathbf{K}_\ell + \mathbf{\Sigma}_\ell^{(n)})^{-1} \vec{k}_\ell^{(n)}(x'_j)
\end{align}
with \begin{align*} \delta^{(n)}_\ell(x'_i)^2 & = v^{(n)}_\ell(x'_i,x'_i) \qquad \vec{t}^{(n)}_\ell = (t_\ell(x^{1}), \ldots, t_\ell(x^{n}))^T, \quad\text{and}\\
\vec{k}_\ell^{(n)}(x'_i) & = (\mathcal{K}_\ell(x^{1},x'_i), \ldots, \mathcal{K}_\ell(x^{n},x'_i) )^T,
\qquad \mathbf{\Sigma}_\ell^{(n)} := \diag( \sigma^2_\ell(x^{1}), \ldots, \sigma^2_\ell(x^{n})),
\end{align*}
and $\mathbf{K}_\ell$ is the $n \times n$ positive definite matrix $(\mathbf{K}_\ell)_{i,j} := \mathcal{K}_\ell(x^{i}, x^{j})$, $1 \le i,j \le n$.
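For concreteness, \eqref{eq:krig-mean}--\eqref{eq:krig-cov} with a zero trend can be sketched in a few lines. This is a bare-bones illustration with a user-supplied kernel, not a substitute for a production kriging implementation.

```python
import numpy as np

def krige(x_train, y_train, noise_var, x_test, kernel):
    """Posterior mean and variance per (eq:krig-mean)-(eq:krig-cov), zero trend."""
    K = np.array([[kernel(a, b) for b in x_train] for a in x_train])
    Kinv = np.linalg.inv(K + np.diag(noise_var))        # (K + Sigma)^{-1}
    y = np.array(y_train, dtype=float)
    means, variances = [], []
    for x in x_test:
        k = np.array([kernel(a, x) for a in x_train])   # vector k^{(n)}(x)
        means.append(k @ Kinv @ y)
        variances.append(kernel(x, x) - k @ Kinv @ k)
    return np.array(means), np.array(variances)
```

With near-zero noise the predictor interpolates the data and the kriging variance vanishes at the design sites, recovering the deterministic-experiment limit.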
By independence across $\ell$, the vector of posteriors $\bd{M(x)}$ at a fixed $x$ satisfies
$$\bd{M(x)} \sim \mc{N}(\bd{\widehat{\mu}}(x), \bd{\Delta}(x)) \quad\text{with} \;\; \bd{\widehat{\mu}}(x) = \left[\widehat\mu_1(x), \ldots, \widehat\mu_L(x)\right]^T\!, \;\; \bd{\Delta}(x) = \text{diag}\left(\delta_1^2(x), \ldots, \delta_L^2(x)\right).$$
\red{A common choice is the Matern-5/2 kernel
\begin{align}\label{eq:matern}
\mathcal{K}(x,x'; s, \theta) = s^2 \bigl( 1+ \sqrt{5} \| x-x' \|_\theta + \tfrac{5}{3} \| x-x' \|^2_\theta \bigr) \cdot e^{ - \sqrt{5} \| x - x'\|_\theta}, \qquad \|x\|_\theta = \sqrt{ x \diag \vec\theta x^T}.
\end{align}
The length-scale parameter vector $\vec\theta$ controls the smoothness of members of $\mathcal{M}_\mathcal{K}$: the smaller the length-scales, the rougher the surface. The scalar variance parameter $s^2$ determines the amplitude of fluctuations in the response.}
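As an illustrative implementation of the standard Matérn-5/2 form (using the convention of dividing coordinates by their length-scales, so that larger length-scales give smoother draws):

```python
import math

def matern52(x, xp, s2=1.0, theta=(1.0,)):
    # Standard Matern-5/2: s^2 (1 + sqrt(5) h + (5/3) h^2) exp(-sqrt(5) h),
    # where h is the length-scale-weighted distance between x and xp.
    h = math.sqrt(sum(((a - b) / t) ** 2 for a, b, t in zip(x, xp, theta)))
    r = math.sqrt(5.0) * h
    return s2 * (1.0 + r + r * r / 3.0) * math.exp(-r)

print(matern52((0.0,), (0.0,)))   # k(x, x) = s^2 = 1.0
print(matern52((0.0,), (1.0,)))   # decays smoothly with distance
```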
A major advantage of kriging for sequential design is the availability of \emph{updating formulas} that allow new data points to be efficiently assimilated into an existing fit. Namely, if a new sample $(x,y)^{k+1}$ is added to an existing design $x^{1:k}$, the mean and kriging variance at location $x$ are updated via
\begin{align}\label{eq:updated-mean}
\widehat{\mu}^{(k+1)}(x) &= \widehat{\mu}^{(k)}(x) + \lambda(x, x^{k+1}; x^{1:k}) (y^{k+1} - \widehat{\mu}^{(k)}(x^{k+1})); \\ \label{eq:updated-var}
\delta^{(k+1)}(x)^2 & = \delta^{(k)}(x)^2 - \lambda(x, x^{k+1}; x^{1:k})^2 [\sigma^2(x^{k+1}) + \delta^{(k)}(x^{k+1})^2 ],
\end{align}
where $\lambda(x,x^{k+1}; x^{1:k})$ is a weight function specifying the influence of the new sample at $x^{k+1}$ on $x$ (conditioned on existing design locations $x^{1:k}$).
In particular, the local reduction in posterior standard deviation at $x^{k+1}$ is proportional to the current $\delta^{(k)}(x^{k+1})$ \cite{GinsbourgerEmery14}:
\begin{align}\label{eq:krig-update}
\frac{\delta^{(k+1)}(x^{k+1})}{\delta^{(k)}(x^{k+1})} = \frac{ \sigma(x^{k+1}) }{ \sqrt{ \sigma^2(x^{k+1})+ \delta^{(k)}(x^{k+1})^2} }.
\end{align}
Note that the updated posterior variance $\delta^{(k+1)}(x)^2$ is a deterministic function of $x^{k+1}$ which is independent of $y^{k+1}$.
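The ratio \eqref{eq:krig-update} can be verified directly in the simplest case of a single observation at $x^{k+1}$ (here $k=0$, so the prior variance is $\mathcal{K}(x,x)$); the numbers are arbitrary test values.

```python
import math

def posterior_var_after_one_sample(prior_var, noise_var):
    # delta^{(1)}(x)^2 from (eq:krig-cov) with a single noisy observation at x
    return prior_var - prior_var ** 2 / (prior_var + noise_var)

k0, sigma2 = 2.0, 0.5     # arbitrary prior variance K(x,x) and noise level
lhs = math.sqrt(posterior_var_after_one_sample(k0, sigma2) / k0)
rhs = math.sqrt(sigma2) / math.sqrt(sigma2 + k0)
print(lhs, rhs)           # the two sides of (eq:krig-update) agree
```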
For our examples below, we have used the \textbf{DiceKriging} \texttt{R} package \cite{kmPackage-R} to compute \eqref{eq:krig-mean}. The software takes as input the location-index pairs $(x,\ell)^{1:n}$, the corresponding samples $y_\ell(x)^{1:n}$ and the noise levels $\sigma^2_{\ell^n}(x^n)$, as well as the kernel family (Matern-5/2 \eqref{eq:matern} by default) and trend basis functions $t^i_\ell(x)$, and runs a maximum likelihood estimation routine to estimate the hyper-parameters $s, \theta$ describing the kriging kernel $\mathcal{K}_\ell$.
\subsection{Summary Statistics for Ranking}
Given a fitted kriging surface $M_\ell(\cdot)$ (for notational convenience in this section we omit the indexing by the design size $k$), the respective classifier $\hat\mathcal{C}$ is obtained as in \eqref{def_estcal}. Note that $\hat{\mathcal{C}}(x)$ is not necessarily the MAP (maximum a posteriori probability) estimator, since the ordering of the posterior probabilities and posterior means need not match for $L > 2$. Two further quantities are of importance for studying the accuracy of $\hat{\mathcal{C}}$: gaps and posterior probabilities. First, the gaps quantify the differences between the posterior means, namely
\begin{align}
\widehat{\Delta}_\ell(x) &:= |\widehat{\mu}_\ell(x) - \min_{j \neq \ell} \widehat{\mu}_j(x)|,\\ \label{eq:Delta}
\widehat{\Delta}(x) &:= |\widehat{\mu}_{(1)}(x) - \widehat{\mu}_{(2)}(x)|,
\end{align}
where $\widehat{\mu}_{(1)} \le \widehat{\mu}_{(2)} \le \ldots \le \widehat{\mu}_{(L)} $ are the ordered posterior means. Note that for $L=2$,
we have $\widehat{\Delta}_1(\cdot) \equiv \widehat{\Delta}_2(\cdot) = \widehat{\Delta}(\cdot)$ due to symmetry.
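As an aside, these gap statistics are immediate to compute from the vector of posterior means; an illustrative sketch (Python, with a plain list standing in for the fitted kriging model):

```python
def gaps(mu_hat):
    """Per-surface gaps Delta_hat_ell and the overall gap Delta_hat
    between the two lowest posterior means."""
    Delta_ell = [abs(m - min(mj for j, mj in enumerate(mu_hat) if j != l))
                 for l, m in enumerate(mu_hat)]
    srt = sorted(mu_hat)
    return Delta_ell, abs(srt[0] - srt[1])
```

For $L=2$ the two per-surface gaps coincide with the overall gap, matching the symmetry noted above.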
Second, define the posterior probabilities for the minimal rank
\begin{align}
p_\ell(x) & := \mathbb{P}\left(\mu_\ell(x) = \mu_{(1)}(x) | \mathcal{F}_k\right) = \mathbb{P}( M_\ell(x) = \min_j M_j(x)) .\label{eq:probOfwrong}
\end{align}
We denote by $p_{(1)}(x) \ge p_{(2)}(x)\ge \ldots \ge p_{(L)}(x)$ the values of the vector $\vec{p}(x) := \{p_\ell(x)\}_{\ell=1}^L$ sorted in decreasing order, so that the index achieving $p_{(1)}(x)$ is the MAP estimate of the minimal response surface.
The following proposition provides a semi-analytic recursive formula to evaluate $\vec{p}(x)$ in terms of the kriging means and variances $(\widehat{\mu}_\ell(x), \delta^2_\ell(x))$.
\begin{pro}[Azimi \emph{et al.} \cite{AzFeFe:11}]\label{prop1}
If $ \bd{M}(x) \sim \mc{N}(\bd{\widehat{\mu}}(x), \bd{\Delta}(x))$, then for any $\ell \in \mk{L}$,
\begin{equation}\label{eq:prop}
p_\ell(x) = \mathbb{P} \left(M_\ell(x) = \min_j M_j(x) \right) = \prod\limits_{j=1}^{L-1} \Phi \left(-r_j^{(\ell)} \right),
\end{equation}
where $\Phi(\cdot)$ is the standard normal cdf, and $\bd{r}^{(\ell)} = \left[r^{(\ell)}_1, r^{(\ell)}_2, \ldots, r^{(\ell)}_{L-1}\right]^T = (A(\ell)\bd{\Delta}(x)A(\ell)^T)^{-1/2}A(\ell)\bd{\widehat\mu}(x)$, with $A(\ell)$ an $(L-1)\times L$ matrix defined via
\begin{equation*}
A(\ell)_{i,j} = \left\{ \begin{aligned}
1 & \quad \text{if } j = \ell, \\
-1 & \quad \text{if } 1 \leq i = j < \ell \text{ or } \ell < i+1 = j \leq L, \\
0 & \quad \text{otherwise.}
\end{aligned} \right.
\end{equation*}
\end{pro}
\begin{cor}
For $L =2$, we have $p_1(x) = \mathbb{P}(M_1(x) \leq M_2(x)) = \Phi\left(\frac{\widehat{\mu}_2(x) - \widehat{\mu}_1(x)}{\sqrt{\delta_1^2(x) + \delta_2^2(x)}}\right)$, and $p_2(x) = 1-p_1(x)$.
\end{cor}
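The $L=2$ corollary is immediate to implement using only the standard normal cdf; a sketch (Python; the helper names are ours):

```python
import math

def norm_cdf(z):
    """Standard normal cdf via the error function."""
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

def p_min_two(mu1, mu2, delta1, delta2):
    """P(M_1(x) <= M_2(x)) and its complement for two independent
    Gaussian response models with means mu_i and sds delta_i."""
    p1 = norm_cdf((mu2 - mu1) / math.sqrt(delta1**2 + delta2**2))
    return p1, 1.0 - p1
```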
The next proposition provides another semi-analytic formula to evaluate $m(x)$ defined in \eqref{eq:min-gaussian}.
\begin{pro}\label{prop2}
Suppose that $L=2$ and
let $M_\ell(x) \sim \mc{N}(\widehat\mu_\ell(x), \delta^2_\ell(x))$, $\ell=1,2$ be two independent Gaussians. Define
\begin{align*}
d_{12} := \sqrt{\delta_1^2(x) + \delta_2^2(x)},\qquad\text{and} \quad
a_{12} := (\widehat\mu_1(x) - \widehat\mu_2(x))/d_{12}.
\end{align*}
Then the first two moments of $M_{(1)}(x) = \min(M_1(x),M_2 (x))$ are given by:
\begin{align} \label{eq:min-mean}
m(x) & \equiv \mathbb{E}[M_{(1)}(x)] = \widehat\mu_1(x)\Phi(-a_{12})+\widehat\mu_2(x)\Phi(a_{12}) - d_{12}\phi(a_{12}),\\ \label{eq:min-var}
\mathbb{E} \left[M_{(1)}(x)^2 \right] &= (\widehat\mu_1^2(x)+\delta_1^2(x))\Phi(-a_{12}) + (\widehat\mu_2^2(x) + \delta_2^2(x))\Phi(a_{12}) \\
\notag & \qquad - (\widehat\mu_1(x)+\widehat\mu_2(x))d_{12}\phi(a_{12}).
\end{align}
\end{pro}
Equation \eqref{eq:min-mean} provides a closed-form expression to evaluate $m(x) = \mathbb{E}[ M_{(1)}(x)]$ for $L=2$. In the case $L>2$, one may evaluate $m(x)$ recursively using a Gaussian approximation. For instance, for $L = 3$, approximate $\widetilde{Y} := M_1(x) \wedge M_2(x)$ by a Gaussian random variable with mean/variance specified by \eqref{eq:min-mean}-\eqref{eq:min-var} respectively (i.e.~using $a_{12}$ and $d_{12}$) and then apply Proposition \ref{prop2} once more to $M_{(1)}(x) = \widetilde{Y} \wedge M_3(x)$.
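A sketch of Proposition \ref{prop2} and of the recursive Gaussian moment-matching approximation for $L>2$ described above (Python; function names are ours):

```python
import math

def _phi(z):
    return math.exp(-0.5 * z * z) / math.sqrt(2.0 * math.pi)

def _Phi(z):
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

def min_moments(mu1, v1, mu2, v2):
    """First two moments of min(M1, M2) for independent Gaussians with
    means mu_i and variances v_i (eqs. min-mean / min-var)."""
    d12 = math.sqrt(v1 + v2)
    a12 = (mu1 - mu2) / d12
    m1 = mu1 * _Phi(-a12) + mu2 * _Phi(a12) - d12 * _phi(a12)
    m2 = ((mu1**2 + v1) * _Phi(-a12) + (mu2**2 + v2) * _Phi(a12)
          - (mu1 + mu2) * d12 * _phi(a12))
    return m1, m2

def m_of_x(mus, variances):
    """Approximate m(x) = E[min_l M_l(x)] by repeatedly replacing the
    running minimum with a moment-matched Gaussian."""
    mu, var = mus[0], variances[0]
    for mu_next, var_next in zip(mus[1:], variances[1:]):
        m1, m2 = min_moments(mu, var, mu_next, var_next)
        mu, var = m1, m2 - m1**2
    return mu
```

For two standard normals the exact value $\mathbb{E}[\min(Z_1,Z_2)] = -1/\sqrt{\pi}$ is recovered.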
\section{Expected Improvement}\label{sec_estEI}
The Bayesian approach to sequential design is based on greedily optimizing an acquisition function. We express the acquisition function through Expected Improvement (EI) scores that identify the pairs $(x,\ell)$ that are most promising in terms of lowering the global empirical loss function $\mc{EL}$ according to \eqref{seq-design}. In our context the EI scores are based on the posterior distributions $M_\ell^{(k)}$, which summarize the information learned so far about $\mu_\ell(x)$.
Our two main heuristics are dubbed Gap-UCB and Gap-SUR:
\begin{align}
E^{Gap-UCB}_k(x, \ell) & := -\widehat\Delta_\ell(x) + \gamma_k\delta_\ell(x); \label{eq:EGapSd} \\
E^{Gap-SUR}_k(x,\ell) & := \mathbb{E}[ \mathcal{M}^{(k)}(x) - \mathcal{M}^{(k+1)}(x) | x^{k+1} =x, \ell^{k+1} = \ell, \mathcal{F}_k]. \label{eq:MGap}
\end{align}
The Gap-UCB score is motivated by the exploration-exploitation trade-off in MABs and favors locations with small gaps in posterior means and high kriging variance. Indeed, the local empirical gap measure \cite{GabillonBubeck11} $\widehat{\Delta}_\ell(x)$ identifies the most promising arm, while the kriging variance $\delta_\ell^2(x)$ promotes exploration to reduce uncertainty about arm payoffs. The two are connected via the UCB (upper confidence bound \cite{SRKRKASE:12}) tuning parameter $\gamma_k$ that balances exploration (regions with high $\delta_\ell(x)$) and exploitation (regions with small gap).
Another interpretation of Gap-UCB is to mimic a complexity-sampling scheme that selects design sites based on the complexity of the underlying ranking problem. Indeed, the gap $\Delta_\ell(x) := {\mu_{\ell}(x) - \min_{j \neq \ell}\mu_j(x)}$ measures the hardness of testing whether $\mu_\ell(x) = \min_i \mu_i(x)$; the smaller $\Delta_\ell(x)$ the tougher. At the same time, the kriging variance $\delta^2(x)$ can be related to information gain from sampling at $x$ (being akin to the standard error of a point estimator).
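As a hedged sketch, the Gap-UCB score \eqref{eq:EGapSd} at one pair $(x,\ell)$, with a schedule of the form $\gamma_k = c\sqrt{\log k}$ like the one used in our benchmarks below (Python; the function name and default $c$ are ours):

```python
import math

def gap_ucb_score(mu_hat, delta, ell, k, c=1.0):
    """Gap-UCB acquisition at one location: negative empirical gap of
    surface ell plus an exploration bonus proportional to its kriging sd."""
    gap_ell = abs(mu_hat[ell] - min(m for j, m in enumerate(mu_hat) if j != ell))
    gamma_k = c * math.sqrt(math.log(k)) if k > 1 else 0.0
    return -gap_ell + gamma_k * delta[ell]
```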
The Gap-SUR strategy comes from the perspective of simulation optimization.
Recall that we strive to lower the empirical loss $\mc{EL}$ in \eqref{eq:emploss}, which is related to the M-gap in \eqref{eq:MGap} via $\mc{EL} = \int \mathcal{M}(x) F(\,\mathrm{d} x)$. Accordingly, the Gap-SUR criterion uses $\mathcal{M}(x)$ to guide the adaptive design, aiming to maximize its \emph{expected local reduction} if we add $(x,\ell)$ to the design. Such stepwise uncertainty reduction (SUR) strategies were introduced in \cite{Picheny12,ChevalierPicheny13}.
The evaluation of \eqref{eq:MGap} requires computing the expected mean and variance of $M_{(1)}(x)$ and $M_\ell(x)$. The updating formula \eqref{eq:updated-mean} implies that (keeping $\mathcal{K}$ fixed) $\mathbb{E}[\widehat{\mu}^{k+1}_\ell(x) |x^{k+1} = x, \ell^{k+1}=\ell, \mathcal{F}_k ] = \widehat{\mu}^{k}_\ell(x)$, while \eqref{eq:krig-update} yields
$\delta_\ell^{(k+1)}(x)$. The rest of the computation becomes straightforward in view of Proposition \ref{prop2}.
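For $L=2$, one way to assemble the Gap-SUR score from the variance update \eqref{eq:krig-update} and Proposition \ref{prop2} is sketched below (Python; helper names are ours, and the posterior means are held fixed since they are martingales):

```python
import math

def _phi(z):
    return math.exp(-0.5 * z * z) / math.sqrt(2.0 * math.pi)

def _Phi(z):
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

def _m_two(mu1, v1, mu2, v2):
    # E[min(M1, M2)] for independent Gaussians (Proposition 2)
    d = math.sqrt(v1 + v2)
    a = (mu1 - mu2) / d
    return mu1 * _Phi(-a) + mu2 * _Phi(a) - d * _phi(a)

def gap_sur_score(mu, delta, sigma, ell):
    """Expected reduction of M(x) = min_l mu_hat_l(x) - m(x) from one more
    sample of surface ell at x; only the variance of surface ell changes."""
    v = [delta[0]**2, delta[1]**2]
    m_now = _m_two(mu[0], v[0], mu[1], v[1])
    d_new = delta[ell] * sigma[ell] / math.sqrt(sigma[ell]**2 + delta[ell]**2)
    v_new = list(v)
    v_new[ell] = d_new**2
    m_next = _m_two(mu[0], v_new[0], mu[1], v_new[1])
    return m_next - m_now  # min(mu_hat) cancels, so the score is >= 0
```

Noisier sampling ($\sigma_\ell$ large) shrinks the variance less and hence yields a smaller score.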
\red{\begin{remark}
Gap-SUR is also connected to the Active Learning Cohn (ALC) \cite{cohn:1996} approach to DoE. In ALC, minimization of posterior variance is achieved by greedily maximizing reduction in $\delta^2(x)$. In Gap-SUR, minimization of $\mc{EL}$ is achieved by maximizing reduction in $\mathcal{M}(x)$. The ALC paradigm suggests an alternative to \eqref{eq:EGapSd}, namely $E^{Gap-ALC}_k(x, \ell) = -\widehat\Delta_\ell(x) + \gamma_k [\delta^{(k)}_\ell(x) - \delta^{(k+1)}_\ell(x)]$, that blends expected decline in kriging variance with the estimated gap.
\end{remark}}
\red{\textbf{Asymptotic Behavior.}
The Gap-SUR method aims to drive the M-gaps to zero, which is equivalent to learning all the responses: $\mathcal{M}(x) = 0 \Leftrightarrow \delta_\ell(x)= 0 \;\forall \ell$, see \eqref{eq:MGap}. For GP models, vanishing posterior variance at $x$ corresponds to the design being dense in the neighborhood of $x$. Thus, asymptotically, the Gap-SUR heuristic will generate designs that are dense across $\mathcal{X} \times \mk{L}$. Finally, previous results about consistency of GP models (see for example \cite{ChoiSchervish07}) can be invoked to establish that $\hat{\mathcal{C}} \to \mathcal{C}$.}
\red{On the other hand, proper selection of the UCB schedule $(\gamma_k)$ is crucial for the performance of Gap-UCB. If $\gamma_k \equiv 0$ then convergence is not guaranteed. Indeed, consider $x_1,x_2$ such that $\Delta(x_2) > \Delta(x_1)$,
but the estimated gaps based on the interim $\mathcal{Z}^{(k)}$ satisfy $\widehat{\Delta}(x_1)> \widehat{\Delta}(x_2) > \Delta(x_2)$ due to estimation error at $x_1$. Then at stage $k$ the algorithm will prefer site $x_2$ over $x_1$ (since it has the smaller gap $\widehat{\Delta}$) and may then get trapped indefinitely, never realizing that the estimated ordering between $\Delta(x_1)$ and $\Delta(x_2)$ is wrong. Hence, without UCB the algorithm is prone to getting trapped at local minima of $\widehat{\Delta}$. At the same time, any increasing unbounded schedule $\gamma_k \to +\infty$ guarantees that $\sup_x \delta_\ell^{(k)}(x) \to 0 \; \forall \ell$. Along these lines, Srinivas et al.~\cite{SRKRKASE:12} proved that in a cumulative-regret setting a schedule growing like $\gamma_k = O( \sqrt{\log k})$ in the sample size $k$ suffices. Further rules on how to choose $\gamma_k$ (for the case of a finite state space $\mathcal{X}$) can be found in \cite{GabillonBubeck11}. Another alternative is a randomized version. For example, in $\epsilon$-greedy sampling, with probability $\epsilon$ at any step the pair $(x,\ell)^{k+1}$ is selected uniformly in $\mathcal{X} \times \mk{L}$ instead of via an EI metric. This ensures that the designs $\mathcal{Z}^{(k)}$ are dense in $\mathcal{X}$ as $k \to \infty$ and is a feature that we resort to in our experiments. Still, fine-tuning the schedule $k \mapsto \gamma_k$ is highly non-trivial in black-box settings. For this reason, usage of the Gap-UCB approach is sensitive to implementation choices and further guidance on selecting $(\gamma_k)$ is left for future research.
}
\subsection{Selecting the Next Sample Location}\label{sec_loc}
To grow the designs $\mathcal{Z}^{(k)}$ over $k=K_0,K_0+1,\ldots$ we use the EI scores via the greedy sampling strategy
\begin{align}\label{eq:best-1}
(x,\ell)^{k+1} = \argsup_{(x, \ell) \in \mathcal{X} \times \mk{L}} E_k(x,\ell).
\end{align}
Because \eqref{eq:best-1} introduces a whole new optimization sub-problem, in cases where this is computationally undesirable we instead replace $\arg\sup_{ x \in \mathcal{X} }$ with $\arg\max_{x \in \mathcal{T} }$, where $\mathcal{T}$ is a finite \emph{candidate set}. Optimization over $\mathcal{T}$ is then done by direct inspection. The justification for this procedure is that (i) we expect $E_k(x, \ell)$ to be smooth in $x$ and moreover relatively flat around its maximizer; (ii) $E_k(x, \ell)$ is already an approximation, so it is not required to optimize it precisely; (iii) the performance of an optimal design should be insensitive to small perturbations of the sampling locations. To construct such candidate sets $\mathcal{T}$ in $\mathcal{X}$, we employ Latin hypercube sampling (LHS) \cite{McBeCo:79}.
LHS candidates ensure that new locations are representative, and well spaced out over $\mathcal{X}$. See \cite[Sec 3.4]{gra:lee:2009} for some discussion on how $\mathcal{T}$ should be designed. In addition, we refresh our \emph{candidate set} $\mathcal{T}$ at each iteration, to enable ``jittering''. Algorithm \ref{algorithm} below presents the resulting method in pseudo-code.
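A minimal sketch of generating an LHS candidate set $\mathcal{T}$ on $[0,1]^d$ using only the standard library (Python; a production implementation would use a dedicated design-of-experiments package):

```python
import random

def lhs_candidates(D, dim=1, rng=random):
    """Latin hypercube sample of D points in [0,1]^dim: each coordinate
    places exactly one point per stratum [j/D, (j+1)/D), with strata
    shuffled independently across coordinates."""
    cols = []
    for _ in range(dim):
        perm = list(range(D))
        rng.shuffle(perm)
        cols.append([(j + rng.random()) / D for j in perm])
    return [tuple(col[i] for col in cols) for i in range(D)]
```

Refreshing the seed (or simply re-calling the function) at each iteration provides the ``jittering'' mentioned above.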
%
\begin{algorithm}[ht]
\caption{Sequential Design for Global Ranking using Kriging \label{algorithm}}
\begin{algorithmic}[1]
\REQUIRE $K_0, K$
\STATE Generate initial design $\mathcal{Z}^{(K_0)} := (x, \ell)^{1:K_0}$ using LHS
\STATE Sample $y^{1:K_0}$, estimate the GP kernels $\mathcal{K}_\ell$'s and initialize the response surface models $M_\ell$
\STATE Construct the classifier $\mathcal{C}^{(K_0)}(\cdot)$ using \eqref{def_estcal}
\STATE $k \leftarrow K_0$
\WHILE{$k < K$}
\STATE Generate a new candidate set $\mathcal{T}^{(k)}$ of size $D$
\STATE Compute the expected improvement (EI) $E_{k}(x,\ell)$ for each $x \in \mathcal{T}$, $\ell \in \mk{L}$
\STATE Pick a new location $\displaystyle(x,\ell)^{k+1} = \argmax_{(x,\ell) \in \mathcal{T}^{(k)}\times \mk{L}} E_k(x,\ell)$ and sample the corresponding $y^{k+1}$
\STATE (Optional) Re-estimate the kriging kernel $\mathcal{K}_{\ell^{k+1}}$
\STATE Update the response surface $M_{\ell^{k+1}}$ using \eqref{eq:updated-mean}-\eqref{eq:updated-var}
\STATE Update the classifier $\mathcal{C}^{(k+1)}$ using \eqref{def_estcal}
\STATE Save the overall grid $\mathcal{Z}^{(k+1)} \leftarrow \mathcal{Z}^{(k)} \cup (x^{k+1}, \ell^{k+1})$
\STATE $k \leftarrow k+1$
\ENDWHILE
\RETURN Estimated classifier $\mathcal{C}^{(K)}(\cdot)$.
\end{algorithmic}
\end{algorithm}
\begin{remark}\label{sec:implement}
\red{In the context of a kriging model, the initial design $\mathcal{Z}^{(K_0)}$ is crucial to allow the algorithm to learn the covariance structures of the responses. One common pitfall is fitting overly flat $\mu_\ell$'s by missing their shorter-scale fluctuations \cite{Picheny10}. Thus, $K_0$ must be large enough to reasonably estimate $\mathcal{K}_\ell$; one recommendation is that $K_0$ should be about 20\% of the eventual design size $K$. In our implementation, the initialization is done via a space-filling LHS design (sampling equally across the $L$ surfaces).
Another issue is the re-estimation of the kriging kernel $\mathcal{K}_\ell$ in step 9 of Algorithm \ref{algorithm}. Re-training is computationally expensive and breaks the purely sequential nature of the GP updates.
Since we expect the algorithm to converge as $k \to \infty$, we adopt the practical rule of running the full estimation procedure for $\mathcal{K}$ according to the doubling method \cite{GanoRenaud06}, re-estimating $\mathcal{K}_\ell$ whenever $k$ is a power of two, $k=2,4,8,\ldots$, and keeping it frozen otherwise.}
\end{remark}
\subsubsection{Hierarchical and Concurrent Sampling}\label{sec:two-step}
Instead of sampling directly over the pairs $(x,\ell) \in \mathcal{X} \times \mk{L}$, one can consider two-step procedures that first pick $x$ and then $\ell$ (or vice-versa). This strategy matches standard sequential designs over $\mathcal{X}$. Indeed, one can then directly follow the active learning approach of \cite{Mackay92,cohn:1996} by first picking $x^{k+1}$ using the gap metrics, and then picking the index $\ell^{k+1}$ based on the kriging variance:
\begin{align}\label{eq:two-step} \left\{ \begin{aligned}
x^{k+1} &= \arg\min_{x\in\mathcal{X}} \widehat{\Delta}(x) | \mathcal{F}_k, \qquad \text{cf.}~\eqref{eq:Delta} \\
\ell^{k+1} &= \arg\max_{\ell \in \mk{L}} \delta^{(k)}_\ell(x^{k+1}).
\end{aligned} \right.
\end{align}
Conditional on picking $x^{k+1}$, the above choice selects surfaces with large kriging variance $\delta_\ell(x)$, attempting to equalize $\delta_\ell(x)$ across $\ell$. \red{Note that \eqref{eq:two-step} will focus on the most \emph{uncertain} response, not on the most promising one, which tends to hurt overall performance when $L \gg 2$.} Another choice is to pick $\ell^{k+1}$ to greedily maximize the information gain as in \eqref{eq:krig-update}. Such two-step EI heuristics avoid having to specify the schedule $\gamma_k$ of the UCB criterion \eqref{eq:EGapSd}.
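The two-step rule \eqref{eq:two-step} is straightforward over a finite candidate set; a sketch (Python, with callables standing in for the fitted gap and sd surfaces):

```python
def two_step_select(candidates, gap_hat, delta):
    """First pick the candidate location with the smallest estimated gap,
    then the surface index with the largest kriging sd at that location."""
    x_next = min(candidates, key=gap_hat)
    sds = delta(x_next)  # list of delta_l(x_next) over the L surfaces
    ell_next = max(range(len(sds)), key=sds.__getitem__)
    return x_next, ell_next
```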
\red{A further variant is concurrent marginal modeling of each $\mu_\ell(\cdot)$. This is achieved by concurrent sampling:} after choosing a location $x^{k+1} \equiv x$, one augments the design with the $L$ respective pairs $\left(x,1\right), \left(x,2\right), \ldots \left(x, L\right)$. This approach ``parallelizes'' the learning of all response surfaces while still building an adaptive design over $\mathcal{X}$.
The disadvantage of this strategy becomes clear in the extreme situation when the variance of $Y_1(x)$ is zero, $\sigma_1(x) \equiv 0$ while the noise of $Y_2(x)$ is large. In that case, after sampling a given location once for each response, $(x,1)$ and $(x,2)$, we would have $\delta_1(x) = 0$, $\delta_2(x) \gg 0$. Hence, another sample from $Y_1(x)$ would gain no information at all, while substantial information would still be gleaned from sampling $Y_2(x)$, making parallel sampling twice as costly as needed.
\section{Simulated Experiments}\label{sec:toy}
\subsection{Toy Example}\label{sec:1d}
In this section we consider a simple one-dimensional example with synthetic data which allows a fully controlled setting.
Let $L = 2, \mathcal{X} = [0, 1]$. The noisy responses $Y_1(x)$ and $Y_2(x)$ are specified by (cf.~the example in \cite[Sec~4.4]{kmPackage-R})
\begin{align*}
Y_1(x) &= \mu_1(x) + \epsilon_1(x) \equiv \frac{5}{8}\left(\frac{\sin(10x)}{1+x} + 2x^3\cos(5x)+0.841\right) + \sigma_1(x)Z_1, \\
Y_2(x) &= \mu_2(x) + \epsilon_2(x) \equiv 0.5 + \sigma_2(x)Z_2.
\end{align*}
Here $Z_\ell$ are independent standard Gaussians, and the noise strengths are fixed at $\sigma_1(x) \equiv 0.2$ and $\sigma_2(x) \equiv 0.1$, homoscedastic in $x$ but heterogeneous in $\ell=1,2$. The weights $F(\,\mathrm{d} x)=\,\mathrm{d} x$ in the loss function are uniform on $\mathcal{X}$.
The true ranking classifier $\mathcal{C}(x)$ is given by
\begin{equation}
\mathcal{C}(x) = \left\{ \begin{array}{ll}
2 & \quad \text{for } x \in [0,r_1] \cup [r_2,1], \\
1 & \quad \text{for } r_1 < x < r_2,
\end{array} \right.
\end{equation}
where $r_1 \approx 0.3193, r_2 \approx 0.9279.$
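The boundaries can be verified numerically by bisecting $\mu_1(x) - 0.5$ (a sketch in Python; the bisection helper is ours):

```python
import math

def mu1(x):
    """True first response surface of the toy example."""
    return 0.625 * (math.sin(10 * x) / (1 + x) + 2 * x**3 * math.cos(5 * x) + 0.841)

def crossing(f, lo, hi, tol=1e-10):
    """Bisection for a root of f on [lo, hi], assuming a sign change."""
    flo = f(lo)
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if (f(mid) > 0) == (flo > 0):
            lo, flo = mid, f(mid)
        else:
            hi = mid
    return 0.5 * (lo + hi)

# mu_2 = 0.5, so the classification boundaries solve mu1(x) = 0.5
r1 = crossing(lambda x: mu1(x) - 0.5, 0.1, 0.5)
r2 = crossing(lambda x: mu1(x) - 0.5, 0.8, 1.0)
```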
\begin{figure}[ht]
\centering
\begin{tabular}{cc}
\includegraphics[width=0.48\textwidth,height=2.25in]{posteriorSUR-100} &
\includegraphics[width=0.48\textwidth,height=2.25in]{posteriorSUR-400} \\
$K=100$ & $K=400$ \end{tabular}
\caption{Response surface modeling with the Gap-SUR EI criterion of \eqref{eq:MGap}. We plot the true surfaces $\mu_\ell(x)$ (black dashed lines), the posterior means $\widehat{\mu}_\ell(x)$ (blue/red solid lines), the 90\% posterior credibility intervals (light blue/red areas) of $M_1(x)$ and $M_2(x)$, and the sampling locations $x^{1:K}$ for $Y_1(x)$ (blue triangles) and $Y_2(x)$ (red circles). The middle panel shows the local loss $\mathcal{M}(x)$, cf.~\eqref{M-Gap}, while the bottom panel shows the Gap-SUR EI metric $E_K(x,\ell)$ (blue: $\ell=1$, red: $\ell=2$). \label{fig:conv-1d} }
\end{figure}
To focus on the performance of various acquisition functions, we fix the kriging kernels $\mathcal{K}_\ell$ to be of the Matern-5/2 type \eqref{eq:matern} with hyperparameters $s_1=0.1, \theta_1 = 0.18$ for $\mathcal{K}_1$ and $s_2=0.1, \theta_2=1$ for $\mathcal{K}_2$. These hyper-parameters are close to those obtained by training a kriging model for $Y_\ell(x)$ given a dense design on $\mathcal{X}$ and hence capture well the smoothness of the response surfaces above.
\red{We use a fixed trend $t_\ell(x) = 0.5$, and treat the given sampling noises $\sigma_\ell$ as known.}
To apply Algorithm \ref{algorithm} we then initialize with $K_0=10$ locations $(x,\ell)^{1:K_0}$ (five each from $Y_1(x)$ and $Y_2(x)$), drawn from a LHS design on $[0,1]$. Note that because the kriging kernels are assumed to be known, $K_0$ is taken to be very small. To grow the designs we employ the Gap-SUR EI criterion and optimize for the next $(x,\ell)^{k+1}$ using a fresh candidate set $\mathcal{T}^{(k)}$ based on a LHS design of size $D=100$.
%
Figure \ref{fig:conv-1d} illustrates the evolution of the posterior response surface models. The two panels show the estimated $M^{(K)}_\ell(x)$ at $K=100$ and $K=400$ (namely we plot the posterior means $\widehat{\mu}^{(K)}_\ell(x)$ and the corresponding 90\% CI $\widehat{\mu}^{(K)}_\ell(x) \pm 1.645\delta^{(K)}_\ell(x)$). We observe that most of the samples are heavily concentrated around the two classification boundaries $r_1, r_2$, as well as the ``false'' boundary at $x=0$. As a result, the kriging variance $\delta^2_\ell(x)$ is much lower in those neighborhoods, generating the distinctive ``sausage'' shape for the posterior credibility intervals of $M_\ell(x)$. In contrast, in regions where the gap $\Delta(x)$ is large (e.g., around $x=0.5$), ranking the responses is easy so that almost no samples are taken and the kriging variance remains large. Also, because $\sigma_1(x) > \sigma_2(x)$, the credibility intervals of $\mu_2$ are tighter, $\delta_1(x) > \delta_2(x)$, and more than 70\% of the samples are from the first response $Y_1$. Indeed, we find $D_1(k) \simeq 3 D_2(k)$ where
\begin{align*}
D_i(K) := | \{ 1 \le k \le K : \ell^k = i \}|
\end{align*}
is the number of samples in the design $\mathcal{Z}^{(K)}$ from the $i$-th surface. The above observations confirm the double efficiency from making the EI scores depend on both the $\mathcal{X}$ and $\mk{L}$ dimensions.
From a different angle, Figure \ref{fig:1d} shows the resulting design $\mathcal{Z}^{(400)}$ in this example and the location of sampled sites $x^k$ as a function of sampling order $k=1,\ldots, 400$. We observe that the algorithm first engages in exploration and then settles into a more targeted mode, alternating between sampling around $0$, $r_1$ and $r_2$.
\begin{figure}[ht]
\centering
\includegraphics[width=0.48\textwidth,height=2.25in]{histSUR-400}
\includegraphics[width=0.48\textwidth,height=2.25in]{round_loc_sur}
\caption{Left: the design $\mathcal{Z}^{(400)}$ based on the Gap-SUR EI criterion of \eqref{eq:MGap}. There were $D_1(400) = 294$ and $D_2(400)=106$ samples from $Y_1$ and $Y_2$ respectively. Right: sampled locations $x^k$ as a function of $k$ (blue for $\ell^k = 1$, red for $\ell^k = 2$). \label{fig:1d} }
\end{figure}
\subsection{Comparison and Discussion of EI Criteria}\label{sec:alterate-ei}
\red{As a first basis for comparison, we provide three non-adaptive designs.} The simplest alternative is the uniform sampling method that relies purely on the law of large numbers to learn $\mu_\ell(x)$. Thus, at each step $k$, we generate a new sampling location $(x,\ell)^k$ uniformly from $\mathcal{X} \times \mathcal{L}$. This generates a roughly equal number of samples $D_1(k)\simeq D_2(k)$ from each response and a kriging variance $\delta^2_\ell(x)$ that is approximately constant in $x$. Clearly, this approach yields an upper bound on the possible (empirical) loss. \red{ The second alternative is separate non-sequential modeling of each $\mu_\ell$ through a space-filling design (implemented via LHS); this improves on uniform sampling but does not attempt in any way to discriminate in the index dimension $\mk{L}$. For this example, we take $D_1 = 160 = 4 D_2$ to be proportional to the observation noise of each surface. (Note that this strategy is roughly equivalent to building a global sequential maximin design using the acquisition function $E_k(x, \ell) := \delta_\ell(x)$.)}
\red{The third alternative is to build a sampling scheme that relies on the true $\mu_\ell(\cdot)$. With this foresight, we generate a design that relies on the \emph{actual} complexity for resolving $\mathcal{C}(x)$ by plugging-in the true $\Delta_\ell(x)$ into the Gap-UCB metric in \eqref{eq:EGapSd}. Because sampling depends solely on $\Delta_\ell(x)$ and the kriging variances $\delta^{(k)}_\ell(x)^2$ are iteratively determined by the previous $x^{1:k}$, cf.~\eqref{eq:krig-cov}, the overall design $x^{1:K}$ is deterministic (hence non-adaptive, but still implemented sequentially). Note that the resulting $\widehat{\mu_\ell}(\cdot)$'s and hence outputted $\hat{\mathcal{C}}(\cdot)$ are still a function of $Y^{1:K}$.}
Several further alternatives for evaluating expected improvement can be designed based on classification frameworks. For classification, the main posterior statistic is the probabilities $p_\ell(x)$ of $\mu_\ell(x)$ being the smallest response. One can then use the vector $\vec{p}(x)$ to measure the complexity of the resulting local classification at $x$. Note that such measures intrinsically aggregate across $\ell$ and hence only depend on $x$. This suggests either using a two-step sampling procedure as in Section \ref{sec:two-step} or building a UCB-like criterion as in \eqref{eq:EGapSd}. \red{We employ the latter method, blending a criterion $\Gamma(x)$ that discriminates among $x$-locations (larger scores are preferred) with UCB, leading to EI scores of the form $E_k(x,\ell) = \Gamma^{(k)}(x) + \gamma_k \delta_\ell(x)$. }
Three different choices for $\Gamma(\cdot)$ are:
\begin{align}
\Gamma^{ENT}(x) &:= -\sum_\ell p_\ell(x)\log p_\ell(x);\label{eq:P-entropy} \\
\Gamma^{BvSB}(x) &:= -\left[p_{Best}(x) - p_{SB}(x)\right]; \label{eq:BvSB} \\
\Gamma^{Best}(x) & := -p_{Best}(x) \label{eq:best},
\end{align}
where $p_{Best}(x) := \mathbb{P}\left(\hat{\mathcal{C}}(x) = \mathcal{C}(x) | \mathcal{F}_k\right)=p_{\hat{\mathcal{C}}(x)}(x)$ is the posterior probability that the lowest posterior mean is indeed the smallest response, and $p_{SB}(x)$ is the probability that the second-lowest posterior mean is the smallest response.
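The three $\Gamma$ criteria are simple functionals of the probability vector $\vec p(x)$; an illustrative sketch (Python):

```python
import math

def gamma_metrics(p):
    """Entropy, Best-vs-Second-Best, and Best complexity scores from the
    posterior probability vector p(x); larger values flag harder locations."""
    entropy = -sum(q * math.log(q) for q in p if q > 0.0)
    srt = sorted(p, reverse=True)
    p_best, p_sb = srt[0], srt[1]
    return entropy, -(p_best - p_sb), -p_best
```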
\red{The $\Gamma^{ENT}$ metric is the posterior entropy which is a standard measure of classification complexity.
High entropy indicates more spread in $\vec{p}(x)$ and hence more uncertainty about which is the smallest component of $\vec{\mu}(x)$. However, a well-known drawback of entropy is that for larger $L$ (above 3), responses that are very unlikely to be the minimum (i.e.~with small $p_\ell(x)$) still strongly affect the overall $\Gamma^{ENT}(x)$, leading to non-intuitive shapes of the EI scores. The Best-versus-Second-Best (BvSB) approach $\Gamma^{BvSB}(x)$, originating in \cite{JoPoPa:09}, counteracts this effect by comparing just the two lowest posterior means. Small differences between $p_{Best}$ and $p_{SB}$ indicate large uncertainty in identifying the minimum response. The BvSB metric can break down, however, if the posterior variances $\delta_\ell(x)$ are highly unequal, in which case the ordering of the $\widehat{\mu}_\ell$'s need not match that of the $p_\ell$'s. Otherwise, $\Gamma^{BvSB}$ is quite similar to the gap measure $\widehat\Delta(x)$. Lastly, $\Gamma^{Best}$ focuses on the locations where $p_{Best}(x) \ll 1$, i.e.~those close to the classification boundaries of $\hat{\mathcal{C}}(x)$. When $L=2$, $\Gamma^{Best}$ and $\Gamma^{BvSB}=1-2p_{Best}(x)$ give the same preferences.}
\red{Note that because $\Gamma$ does not discriminate among the surfaces, it is sensible to take $\gamma_k = \gamma_k(\ell)$ to be response-specific. Alternatively, the $\Gamma$ metrics lend themselves to concurrent sampling which builds an adaptive sequential design in $\mathcal{X}$ but treats all surfaces equally:
\begin{align}\label{eq:conc-best}
E^{Conc-\Gamma}_k(x) = \Gamma^{(k)}(x) + \gamma_k [\sum_\ell \delta_\ell(x)].\end{align}}
Yet another alternative is a so-called pure M-Gap heuristic that uses \eqref{eq:MGap} via
\red{\begin{align}\label{eq:pure-Mgap} x^{k+1} =\arg \max_{x \in \mathcal{T}^{(k)}} \mathcal{M}(x), \qquad \ell^{k+1} = \arg\max_\ell \delta^2_\ell(x^{k+1}).
\end{align}} This hierarchical sampling strategy can be viewed as generalizing the Efficient Global Optimization (EGO) criterion of \cite{JonesSchonlauWelch98} to the ranking problem, cf.~the classification variant of EGO in \cite{tgpPackage}.
\subsection{Benchmarks}\label{sec:benchmark}
\begin{table}[htb]
\centering
\caption{True loss \emph{vs.}~empirical loss with $\mathcal{Z}^{(200)}$ for the 1-D example. For UCB heuristics the cooling schedule is of the form $\gamma_k = c \sqrt{\log{k}}$ with $c$ as listed below. The error probability $ErrProb$ measures the mean of $1-p^{(200)}_{Best}(x)$ over the test set. $D_1 = D_1(200)$ is the number of samples out of 200 total from $Y_1$.\label{tbl:benchmark}}
{\small \begin{tabular}{lccccccc}
\hline
{Method} & {Emp Loss} & (SE) & {True Loss} & {(SE)} & {ErrProb} & (SE) & $D_1$ \\ \hline\hline
{Uniform Sampling} & 2.89E-3 & (1.24E-4) & 2.64E-3 & (2.67E-4) & 6.87\% & (0.25\%) & 100 \\
{Non-adaptive LHS} & 2.16E-3 & (1.01E-4) & 1.91E-3 & (2.12E-4) & 6.05\% & (0.22\%) & 160 \\
{Known-Gap-UCB, $c=4$ } & 1.77E-3 & (8.35E-5) & 1.43E-3 & (1.91E-4) & 5.61\% & (0.23\%) & 174 \\
{Gap-SUR} & 0.96E-3 & (4.98E-5) & 1.19E-3 & (1.84E-4) & 3.82\% & (0.17\%) & 146 \\
{Pure M-Gap} & 1.20E-3 & (5.39E-5) & 1.81E-3 & (2.33E-4) & 4.28\% & (0.15\%) & 172 \\
{Concurrent M-Gap} & 1.36E-3 & (8.33E-5) & 1.52E-3 & (1.97E-4) & 4.78\% & (0.24\%) & 100 \\ \hline
{Gap-UCB}, $c = 0.1$& 2.62E-3 & (1.74E-4) & 2.23E-3 & (2.60E-4) & 5.46\% & (0.23\%) & 163 \\
{Gap-UCB}, $c = 0.25$& 2.05E-3 & (1.02E-4) & 1.63E-3 & (2.43E-4) & 5.16\% & (0.19\%) & 165 \\
{Gap-UCB}, $c = 1$ & 1.27E-3 &(5.61E-5) & 1.50E-3 & (1.98E-4) & 4.39\% & (0.16\%) & 167 \\
{Gap-UCB}, $c = 5$& 1.56E-3 & (7.29E-5) & 1.62E-3 & (2.14E-4) & 5.10\% & (0.20\%) & 176 \\
{Gap-UCB}, $c = 10$& 1.83E-3 & (7.89E-5) & 1.48E-3 & (1.89E-4) & 5.49\% & (0.20\%) & 172 \\
{$\Gamma^{Best}$-UCB}, $c = 5$& 1.29E-3 & (5.85E-5) & 1.35E-3 & (1.71E-4) & 4.53\% & (0.17\%) & 172 \\
{$\Gamma^{ENT}$-UCB}, $c = 5$& 1.14E-3 & (6.02E-5) & 1.33E-3 & (1.80E-4) & 4.22\% & (0.18\%) & 169 \\
\hline
{Gap-SUR w/training $\mathcal{K}_1$} & 1.20E-3 & (5.87E-5) & 1.69E-3 & (3.24E-4) & 4.34\% & (0.37\%) & 146 \\
\hline \hline
\end{tabular}}
\end{table}
\red{To judge the efficiency of different sequential designs, we proceed to benchmark the performance of different approaches. Table \ref{tbl:benchmark} and Figure \ref{fig:ave-emp-loss}
compare the performance of EI acquisition functions, including the three non-adaptive methods; Gap-SUR; Gap-UCB with different $\gamma_k$-schedules; methods based on posterior probabilities $\vec{p}(\cdot)$:
$\Gamma^{ENT}$-UCB entropy criterion based on \eqref{eq:P-entropy} and $\Gamma^{Best}$-UCB criterion based on \eqref{eq:best}; the pure M-gap heuristic \eqref{eq:pure-Mgap}; and concurrent sampling with M-Gap.}
To construct the summary statistics in Table \ref{tbl:benchmark} we initialized each algorithm with a random LHS design of size $K_0 =10$ and augmented it to $K=200$ sites. Throughout, we compute both the true loss in this synthetic example where $\mu_\ell(x)$ are known, as well as the approximated empirical loss $\mc{EL}$
\begin{align}\label{eq:emp-el}
\mc{EL}(\hat\mathcal{C},\mathcal{C}) &= \frac{1}{M}\sum_{j=1}^M \left\{ \widehat\mu_{(1)}(j\Delta x) - m(j\Delta x)\right\},
\end{align}
where we used $M=1000 = 1/\Delta x$ uniformly spaced gridpoints in $\mathcal{X}=[0,1]$. A further metric reported is the error probability $1-p_{Best}^{(K)}(x)$, which measures the posterior probability that the identified minimum response is incorrect. \red{Each method was run 100 times to compute the resulting mean and standard error of the loss function $\mathcal{L}$ and the empirical loss $\mc{EL}$.} To isolate the effect of the EI criterion, we continue with a fixed GP covariance structure $\mathcal{K}_\ell$ for the $\mu_\ell$'s \red{and pre-specified $\sigma_\ell$'s} (see hyperparameter values in Sec.~\ref{sec:1d}).
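The Riemann-sum approximation \eqref{eq:emp-el} is a one-liner; a sketch (Python, with callables standing in for $\widehat\mu_{(1)}$ and $m$):

```python
def empirical_loss(mu_min_hat, m, M=1000):
    """Approximate EL = int_0^1 (mu_hat_(1)(x) - m(x)) dx
    on a uniform grid of M points."""
    dx = 1.0 / M
    return sum(mu_min_hat(j * dx) - m(j * dx) for j in range(1, M + 1)) * dx
```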
The Gap-SUR algorithm appears to be the most efficient, in particular being much more efficient than a naive uniform sampler (or the non-adaptive LHS sampler). It also performs better than Gap-UCB or the pure M-Gap methods and moreover also has the smallest fluctuations across algorithm runs, indicating more stable behavior. Nevertheless, the UCB methods are nearly as good, in particular the entropy-based $\Gamma^{ENT}$-UCB approach is competitive. \red{However, as discussed these methods are sensitive to the choice of the $\gamma_k$-schedule; the table shows that a poorly chosen $\gamma_k$ can materially worsen performance. In this example, with $\gamma = c\sqrt{ \log k}$, the scaling $c=1$ works well, but if $c$ is too small then the method is overly aggressive, and if $c$ is too big the sampling is essentially space-filling.} At the same time, a limitation of Gap-SUR is that it requires knowing the noise variances $\sigma^2_\ell(\cdot)$ when optimizing the EI acquisition function. \red{ Perhaps surprisingly, the Known-Gap-UCB strategy loses out to the adaptive methods. This happens because the empirical loss of the non-adaptive method is in fact rather sensitive to the observed samples $Y^{1:K}$ which can generate erroneous estimates of $\mu_\ell(x)$ and mis-classified $\mathcal{C}(x)$. Consequently the Known-Gap-UCB design, while properly placing $(x,\ell)^{1:K}$ \emph{on average}, does not allow self-correction so that erroneous beliefs about $\mu_\ell$ can persist for a long time, increasing $\mc{EL}$. In contrast, adaptive algorithms add samples to any regions where observations suggest that $\Delta(x)$ is small, sharpening accuracy there and lowering both true and empirical loss functions.}
The left panel of Figure \ref{fig:ave-emp-loss} visualizes algorithm behavior as a function of design size $k$, by plotting the approximated empirical loss $\mc{EL}(\hat\mathcal{C}^{(k)},\mathcal{C})$ from \eqref{eq:emp-el} for
four representative strategies. All methods appear to enjoy a power-law decay (linear behavior on the log-log plot) of $\mc{EL}$ as a function of $k$, with the slopes of the adaptive methods strictly steeper than those of the non-adaptive ones.
\begin{figure}[htb]
\centering
\includegraphics[height=2.6in,width=0.49\textwidth]{bench_rounds-eps-converted-to}
\includegraphics[height=2.6in,width=0.49\textwidth]{loss_boxplot-eps-converted-to}
\caption{Left: Averaged empirical loss $\mc{EL}(\hat{\mathcal{C}}^{(k)})$ as a function of design size $k$ (in log-log scale). We compare our adaptive Gap-SUR \eqref{eq:MGap} and Gap-UCB methods \eqref{eq:EGapSd} (with $\gamma_k = 1\cdot \sqrt{\log k}$) against a uniform sampler and a Known-Gap-UCB based on the true gap $\Delta(\cdot)$. Right: boxplot of $\mathcal{L}(\hat\mathcal{C}^{(K)},\mathcal{C})$ at $K=400$ computed via \eqref{eq:emp-el}, across six different EI approaches.
\label{fig:ave-emp-loss}}
\end{figure}
Table \ref{tbl:benchmark} also highlights the gain from discriminating among the response surfaces, as the Concurrent M-Gap algorithm is notably worse (with losses about 30\% higher) relative to Gap-SUR. The only difference between these methods is that Gap-SUR sampled $Y_1$ 146 times out of 200, while the concurrent method was constrained to sample each response exactly 100 times. All approaches that optimize over the full $\mathcal{X} \times \mk{L}$ focus on fitting the noisier $Y_1$, sampling it 70--85\% of the rounds (see the $D_1$ column).
As a final comparison, the last row of Table \ref{tbl:benchmark} reports the performance of the Gap-SUR method in the practical context where one must also \emph{train} the GP kernels $\mathcal{K}_\ell$'s by learning $\theta_i, s^2, \sigma$. \red{All the parameters, including the observation noise $\sigma$ which is viewed as the nugget of the GP covariance structure, are estimated via MLE.
Since training introduces additional noise into the fitted response surfaces, algorithm performance is necessarily degraded, especially in terms of variation across algorithm runs. This could indicate that the stationary GP model is not ideal here.}
Table \ref{tbl:benchmark} also shows that the empirical $\mc{EL}(\hat\mathcal{C}^{(K)})$ and actual loss $\mathcal{L}(\hat\mathcal{C}^{(K)},\mathcal{C})$ metrics are consistent, so that the former can be used as an internal online assessment tool to monitor accuracy of the estimated classifier. Mismatch between the two measures is driven by model mis-specification, as incorrectly inferred covariance structure of $\mu_1(x)$ leads to over-optimism: $\mc{EL} < \mathcal{L}$. This issue is largely independent of the sampling scheme and pertains more to the modeling framework than to EI acquisition functions.
\subsection{Many Surfaces}
\red{Our next example treats a more complex setting with $L=5$ surfaces and a 2-dimensional input space $\mathcal{X}=[-2,2]^2$:
$$\begin{array}{lrr}
\hline
& \text{Response} & \text{GP Parameters } (\theta_1, \theta_2, \eta^2, t_\ell) \\ \hline\hline
\mu_1(x_1,x_2) & 2-x_1^2 - 0.5 x_2^2 & (4,6.5,23,-10)\\
\mu_2(x_1,x_2) & 2(x_1-1)^2 + 2x_2^2 -2 & (7.5,7.5,475,60)\\
\mu_3(x_1,x_2) & 2 \sin(2x_1)+2 & (1,8,2,1.9) \\
\mu_4(x_1,x_2) & 8(x_1-1)^2 + 8x_2^2 -3 & (8,8,8000,300) \\
\mu_5(x_1,x_2) & 0.5(x_1+3)^2 +16x_2^2 -6 & (8,4,2500,150) \\ \hline
\end{array}$$
We assume constant homoskedastic observation noise $\epsilon_\ell(x_1,x_2) \sim \mathcal{N}(0, \sigma_\ell^2)$, $\sigma_\ell=0.5 \; \forall \ell$. The GP models have separable anisotropic Mat\'ern-5/2 covariance functions with the specified hyperparameters, and fixed trend $t_\ell$. Figure \ref{fig:2d} shows the corresponding classifier $\mathcal{C}$.}
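As a sanity check, the noiseless responses and the induced classifier $\mathcal{C}(x) = \arg\min_\ell \mu_\ell(x)$ can be transcribed directly from the table above (this sketch covers only the ground truth, not the GP emulation or the noisy sampling):

```python
import math

# The five test responses from the table above, transcribed directly.
surfaces = [
    lambda x1, x2: 2 - x1**2 - 0.5 * x2**2,
    lambda x1, x2: 2 * (x1 - 1)**2 + 2 * x2**2 - 2,
    lambda x1, x2: 2 * math.sin(2 * x1) + 2,
    lambda x1, x2: 8 * (x1 - 1)**2 + 8 * x2**2 - 3,
    lambda x1, x2: 0.5 * (x1 + 3)**2 + 16 * x2**2 - 6,
]

def classifier(x1, x2):
    """True classifier C(x): 1-based index of the minimal response at x."""
    vals = [mu(x1, x2) for mu in surfaces]
    return 1 + vals.index(min(vals))

print(classifier(0.0, 0.0))  # -> 5 (mu_5(0,0) = -1.5 is the smallest)
print(classifier(1.0, 0.0))  # -> 4 (mu_4(1,0) = -3 is the smallest)
```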
The sequential designs were initialized at $K_0 =50$ by generating 10 LHS samples from each $Y_\ell(x_1, x_2)$;
at each step the sampling locations were selected from a LHS candidate set $\mathcal{T}$ of size $D=100$ using the randomized $\epsilon$-greedy method with $\epsilon=0.1$.
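The randomized $\epsilon$-greedy selection can be sketched as follows (illustrative only; \texttt{ei\_score} stands for whichever acquisition function $\Gamma$ is being maximized, and the candidate values are made up):

```python
import random

def epsilon_greedy_pick(candidates, ei_score, epsilon=0.1, rng=random):
    """Randomized acquisition step: with probability epsilon pick a uniform
    candidate from the LHS set, otherwise maximize the EI score."""
    if rng.random() < epsilon:
        return rng.choice(candidates)
    return max(candidates, key=ei_score)

cands = [0.1, 0.4, 0.7]
score = {0.1: 0.2, 0.4: 0.9, 0.7: 0.5}.get
print(epsilon_greedy_pick(cands, score, epsilon=0.0))  # -> 0.4 (pure argmax)
```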
\begin{figure}[ht]
\centering
\begin{tabular}{ccc} \hspace*{-0.25in}
\includegraphics[width=0.32\textwidth,height=2in,trim=0.1in 0.1in 0.1in 0.1in]{estcontourplot-eps-converted-to} & \includegraphics[width=0.32\textwidth,height=2in,trim=0.1in 0.1in 0.1in 0.1in]{egap1-eps-converted-to} &
\includegraphics[width=0.32\textwidth,height=2in,trim=0.1in 0.1in 0.1in 0.1in]{egap2-eps-converted-to} \\
Overall $\hat{\mathcal{C}}$ & $\ell=1$ & $\ell=2$ \\
\includegraphics[width=0.32\textwidth,height=2in,trim=0.1in 0.1in 0.1in 0.1in]{egap3-eps-converted-to} &
\includegraphics[width=0.32\textwidth,height=2in,trim=0.1in 0.1in 0.1in 0.1in]{egap4-eps-converted-to} &
\includegraphics[width=0.32\textwidth,height=2in,trim=0.1in 0.1in 0.1in 0.1in]{egap5-eps-converted-to} \\
$\ell=3$ & $\ell=4$ & $\ell=5$ \end{tabular}
\caption{\red{2-D Ranking on $\mathcal{X} = [-2,2] \times [-2,2]$ using the Gap-SUR heuristic. \emph{Top-left} panel: The solid black lines show the true $\mathcal{C}(x_1,x_2)$, the dashed red lines show the estimated classifier $\hat\mathcal{C}^{(K)}(x_1,x_2)$ for $K=500$. The other panels show the marginal designs $(x_1,x_2)^{1:D_\ell(K)}$ for each of the 5 response surfaces. Shading indicates the estimated empirical gaps $\widehat\Delta_\ell(x_1,x_2)$, $\ell = 1, \ldots, 5$. We observe that most samples gravitate towards regions where $\widehat\Delta_\ell \simeq 0$. Solid curves indicate boundaries of the true classifier $\mathcal{C}(x_1,x_2)$.} \label{fig:2d}}
\end{figure}
\red{The top-left panel of Figure \ref{fig:2d} shows the estimated classifier $\hat\mathcal{C}^{(K)}$ after $K=500$ samples in total using the Gap-SUR acquisition function, and the other panels display the locations of the sampled $x$'s as allocated for each $\ell=1,\ldots,5$. As can be seen, the algorithm is highly discriminating in sampling jointly on $\mathcal{X} \times \mk{L}$. At any given classification boundary, the algorithm effectively only sampled two out of the five responses, endogenously recovering the concept of Best-versus-Second Best testing. Thus, samples from $Y_\ell$ are mostly located around the boundaries between surface $\mu_\ell$ and the other surfaces. These contours, where $\Delta_\ell = \mu_\ell - \min_{j \neq \ell} \mu_j =0$, are precisely the regions targeted by the Gap EI metrics. Because $\mathcal{C}_1$ and $\mathcal{C}_5$ have the longest boundaries, relatively more samples were chosen there ($D_1=126, D_5=109$); conversely, the smallest set is $\mathcal{C}_4$, which received only $D_4 = 70$ samples.}
\red{Table \ref{tbl:2d} presents the relative performance of different acquisition functions. Specifically, we compare (i) uniform sampling; (ii) space-filling LHS sampling; (iii) concurrent $\Gamma^{Best}$ strategy \eqref{eq:conc-best} which is analogous to entropy-based sampling; (iv) Gap-UCB, and (v) Gap-SUR. We note that with many surfaces, the key is not necessarily the budget allocation among the surfaces (here, with identical $\sigma_\ell$, optimal $D_\ell$'s are roughly equal), but efficient placement of sample locations that are most appropriate for each surface. This effect can be observed by comparing a non-adaptive strategy (that is space-filling in both $x$ and $\ell$), to a concurrent $\Gamma^{Best}$ strategy \eqref{eq:conc-best} (that targets classification boundaries but is uniform in $\ell$), to a Gap-SUR/Gap-UCB strategy (that targets different parts of classification boundaries for different indices $\ell$). Each step in the above sequence generates substantial performance gains; it is expected to be even more pronounced when the observation noise is index- (or state-) dependent.}
\begin{table}[htb]
\centering
\red{\caption{True loss \emph{vs.}~empirical loss with $\mathcal{Z}^{(500)}$ for the 2-D example. For UCB heuristics the cooling schedule is of the form $\gamma_k = c \sqrt{\log{k}}$. The error probability is $ErrProb= Ave( 1-p^{(500)}_{Best}(x))$ over the test set. The vector $D_\ell(500)$ lists the number of samples out of 500 total from $Y_\ell$, $\ell=1,\ldots,5$.\label{tbl:2d}}
{\small \begin{tabular}{lccccccc}
\hline
{Method} & {Emp Loss} & (SE) & {True Loss} & {(SE)} & {ErrProb} & Index Allocations $D_\ell$\\ \hline\hline
{Uniform Sampling}& 6.43E-3 & (4.64E-5) & 5.47E-3 & (2.39E-4) & 4.10\% &
(100,100,100,100,100)\\
{Non-Adaptive LHS} & 5.97E-3 & (2.31E-5) & 4.72E-3 & (1.97E-4) & 3.92\% &
(100,100,100,100,100)\\
{Conc $\Gamma^{Best}$, $c = 0.5$} & 5.11E-3 & (1.93E-5) & 4.04E-3 & (1.50E-4) & 3.66\% &
(100,100,100,100,100)\\
{Gap-SUR} & 3.46E-3 & (1.32E-5) & 3.17E-3 & (1.29E-4) & 3.06\% &
(126, 101, 94, 70, 109) \\
{Gap-UCB}, $c = 0.5$ & 3.41E-3 & (1.45E-5) & 2.97E-3& (1.14E-4) & 3.05\% &
(129, 103, 104, 72, 92) \\
\hline \hline
\end{tabular}}}
\end{table}
\section{Case Study in Epidemics Management}\label{sec:epi}
Our last example is based on control problems in the context of infectious epidemics \cite{LudkovskiLin14,LN10,LN11wsc,MerlGramacy09}. Consider the stochastic SIR model which is a compartmental state-space model that partitions a population pool into the three classes of Susceptible counts $S_t$, Infecteds $I_t$ and Recovereds $R_t$. We assume a fixed population size $M = S_t + I_t + R_t$ so that the state space is the two-dimensional simplex $\mathcal{X} = \{(s,i) \in \mathbb{Z}^2_+ : s+ i \le M \}$. In a typical setting, $M \in [10^3, 10^5]$, so that $\mathcal{X}$ is discrete but too large to be explicitly enumerated (on the order of $| \mathcal{X} | \simeq 10^6$).
The dynamics of $(S_t, I_t)$ are time-stationary and will be specified below in \eqref{eq:sir}.
The goal of the controller is to mitigate epidemic impact through timely intervention, such as
social distancing measures that lower the infectivity rate by reducing individuals' contact rates; mathematically this corresponds to modifying the dynamics of $(S_t, I_t)$. To conduct cost-benefit optimization, we introduce on the one hand epidemic costs, here taken to be proportional to the number of cumulative infecteds, and on the other hand intervention costs, which are proportional to the current number of remaining susceptibles, $C^I S_t$. The intervention protocol can then be (myopically) optimized by comparing the expected cost of no-action $\mu_0(s,i)$ (conditional on the present state $(s,i)$) against the expected cost of immediate action, $\mu_A(s,i)$. More precisely, let
\begin{align}\label{eq:epi-cost-1}
\mu_{0}(s,i) &:= \mathbb{E}^{0}[ S_0 - S_T | I_0 = i, S_0 = s] \quad \text{and}\\ \label{eq:epi-cost-2}
\mu_{A}(s,i) &:= \mathbb{E}^{A}[ S_0 -S_T | I_0 = i, S_0 = s] + C^I s.
\end{align}
Above, $T = \inf\{ t: I_t = 0\}$ is the random end date of the outbreak; due to the fixed population and posited immunity from disease after being infected, the epidemic is guaranteed to have a finite lifetime. The difference $S_0-S_T$ thus precisely measures the total number of original susceptibles who got infected at some point during the outbreak.
The overall goal is then to \emph{rank} $\mu_{0}$ and $\mu_A$, with the intervention region corresponding to $\{ (s,i) : \mu_A(s,i) < \mu_{0}(s,i) \}$. Because no analytic formulas are available for the $\mu_\ell$'s, a sensible procedure (also preferred due to the ease of handling numerous extensions of SIR models) is a Monte Carlo sampler that, given an initial condition $S_0=s,I_0=i$ and regime $\ell \in \{0, A\}$, generates a trajectory $(S_t, I_t)(\omega)$ and uses it to evaluate the pathwise $S_T(\omega)$, connecting to the framework of \eqref{def_cal}.
From the policy perspective, the trade-off in \eqref{eq:epi-cost-1}-\eqref{eq:epi-cost-2} is between doing nothing and letting the outbreak run its course, which carries a unit cost for each individual who is eventually infected, and implementing preventive social distancing measures, which cost $C^I$ for each \emph{susceptible} but lower the expected number of future infecteds. Typical countermeasures might be public ad campaigns, school closures, or distribution of prophylactic agents. In general, intervention is needed as soon as there is a threat of a big enough outbreak. However, if $I_t$ is low, the cost of intervention is too high relative to its benefit because the epidemic might end on its own. Similarly, if $S_t$ is low, the susceptible pool is naturally exhausted, again making intervention irrelevant (due to being ``too late''). Quantifying these scenarios requires a precise probabilistic model.
The dynamics of $(S_t, I_t)$ under the respective laws $\mathbb{P}^{0}$ and $\mathbb{P}^{A}$ follow continuous-time Markov chains with the following two transition channels:
\begin{align}\label{eq:sir}
\left\{ \begin{aligned}
\text{Infection}: & S+I \to 2I & &\text{with rate}\;\; \beta^{j} S_t I_t/M, \quad j=0,A; \\
\text{Recovery}: & I \to R & &\text{with rate}\;\; \gamma I_t. \\ \end{aligned}\right\}
\end{align}
Above, $\beta^{A} < \beta^0$ is interpreted as lowered contact rate among Infecteds and Susceptibles in the intervention regime, which thereby reduces outbreak growth and impact. The Markov chain $(S_t, I_t)$ described in \eqref{eq:sir} is readily simulatable using the Gillespie time-stepping algorithm \cite{Gill:exac:1977}, utilizing the fact that the sojourn times between state transitions have (state-dependent) Exponential distributions, and are independent of the next transition type. These simulations are however rather time-consuming, requiring $\mathcal{O}(M)$ Uniform draws. Consequently, efficient ranking of expected costs is important in applications.
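A minimal Gillespie sampler for \eqref{eq:sir}, following the exponential-clock construction just described, could look like the sketch below (our illustration; the paper's implementation and bookkeeping may differ):

```python
import math
import random

def gillespie_sir(s0, i0, beta, gamma, m, rng):
    """Simulate one SIR path of (S_t, I_t) via Gillespie's algorithm and
    return the terminal S_T at the random end date T = inf{t : I_t = 0}."""
    s, i = s0, i0
    while i > 0:
        rate_inf = beta * s * i / m   # infection channel: S + I -> 2I
        rate_rec = gamma * i          # recovery channel:  I -> R
        total = rate_inf + rate_rec
        # sojourn time ~ Exp(total); not needed for S_T, kept for clarity
        _ = -math.log(1.0 - rng.random()) / total
        if rng.random() < rate_inf / total:
            s, i = s - 1, i + 1
        else:
            i -= 1
    return s

rng = random.Random(42)
s_T = gillespie_sir(s0=1800, i0=10, beta=0.75, gamma=0.5, m=2000, rng=rng)
print(1800 - s_T)  # one pathwise cost sample S_0 - S_T
```

Averaging $S_0 - S_T$ over many such trajectories yields Monte Carlo estimates of \eqref{eq:epi-cost-1}-\eqref{eq:epi-cost-2}.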
\begin{remark}
Since \eqref{eq:sir} implies that each individual's infectious period has an independent $Exp(\gamma)$ distribution, it follows that
$
\mathbb{E}[ S_0 - S_T ] = \gamma \mathbb{E} \bigl[ \int_0^T I_t \,dt \bigr],
$
so that \eqref{eq:epi-cost-1} can also be interpreted as proportional to total expected infected-days.
\end{remark}
We note that in this example the input space $\mathcal{X}$ is discrete, which however requires minimal changes to our implementation of Algorithm \ref{algorithm}. The biggest adjustment is the fact that the noise variances $\sigma^2_\ell(x)$ in \eqref{def_Y} are unknown. Knowledge of $\sigma^2_\ell(x)$'s is crucial for training the GP covariance kernel $\mathcal{K}_\ell$, see e.g.~\eqref{eq:krig-mean}. Indeed, while it is possible to simultaneously train $\mathcal{K}_\ell$ and a constant observation noise $\sigma$ (\red{the latter is known as the ``nugget'' in GP literature, and can be inferred via maximum likelihood}), with state-dependent noise $\mathcal{K}$ is not identifiable.
We resolve this issue through a batching procedure (compare to \cite[Sec 3.1]{NelsonStaum10}) to estimate $\sigma^2_\ell(x)$ on-the-go.
Namely, we re-use the same site $x \equiv (s,i)$ $r$ times,
to obtain independent samples $y^{(1)}_\ell(x), \ldots, y^{(r)}_\ell(x)$ from the corresponding $Y_\ell(x)$. This allows us to estimate the conditional variance
$$
\widetilde{\sigma}^2_\ell(x) := \frac{1}{r-1} \sum_{i=1}^r (y^{(i)}_\ell(x) -\bar{y}_\ell(x))^2, \quad\text{ where } \quad \bar{y}_\ell(x) = \frac{1}{r} \sum_{i=1}^r y^{(i)}_\ell(x)
$$
is the sample mean. Moreover, as shown in \cite[Sec 4.4.2]{GinsbourgerPicheny13} we can treat the $r$ samples at $x$ as the single design entry $(x,\bar{y}_\ell(x))$ with noise variance $\widetilde{\sigma}^2_\ell(x)/r$.
The resulting reduction in post-averaged design size by a factor of $r$ offers substantial computational speed-up in fitting and updating the kriging model. Formally, the EI step in Algorithm \ref{algorithm} is replaced with using $(x^{k+1}, \ell^{k+1})= (x^{k+2}, \ell^{k+2}) = \ldots =(x^{k+r}, \ell^{k+r}) $ and re-computing the EI score once every $r$ ground-level iterations.
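The batching step can be sketched in plain Python as follows (a direct illustration of the sample-mean and sample-variance formulas above, with the $1/r$ variance reduction for the averaged design entry):

```python
def batch_summary(samples):
    """Collapse r replicates y^(1..r) at a site x into one design entry:
    the sample mean ybar and the effective noise variance sigma_tilde^2 / r."""
    r = len(samples)
    ybar = sum(samples) / r
    var = sum((y - ybar) ** 2 for y in samples) / (r - 1)  # sigma_tilde^2
    return ybar, var / r                                   # noise var of ybar

ybar, eff_var = batch_summary([1.0, 2.0, 3.0, 4.0])
print(ybar, eff_var)  # -> 2.5 and (5/3)/4
```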
\begin{figure}[ht]
\centering
\includegraphics[height=2.7in,trim=0.1in 0.1in 0.1in 0.1in]{estcontoursurM_200-eps-converted-to}
\caption{Fitted response boundary $\partial \mathcal{C}$ for the epidemic response example using the Gap-SUR expected improvement metric. The scatterplot indicates the design $\mathcal{Z}^{(K)}$ for $K=200$; triangles indicate the initial design $\mathcal{Z}^{(K_0)}$, and circles the adaptively placed $(s,i)^{K_0:K}$ (green: $Y_0$; yellow: $Y_A$). \label{fig:epi-map}}
\end{figure}
For our study we set $M=2000$, $\beta^0 = 0.75, \beta^A=0.5$, $\gamma=0.5$ with intervention cost of $C^I=0.25$ per susceptible. Figure \ref{fig:epi-map} shows the resulting decision boundary $\partial \mathcal{C}$. In the dark region the relative cost of intervention is lower, and hence action is preferred. For example, starting at $I_0 = 10, S_0 = 1800$, without any action the outbreak would affect more than 40\% of the susceptible population (expected cost of about 800), while under social distancing the impact would be about 60 infecteds (leading to a much lower total expected cost of $60+C^I S_0 \simeq 510$). In the light region, the wait-and-see approach has lower expected costs. For example, at $I_0=50, S_0=1400$, the expected number of new infecteds without any action is $385$, while the cost of countermeasures is bigger at $0.25 \times 1400 + 102 = 452$. Overall, Figure \ref{fig:epi-map} shows that the optimal decision is very sensitive to the current number of susceptibles $S_0$. This feature is due to the fact that outbreaks are created when the infection rate dominates the recovery rate (reproductive ratio $\mathcal{R}_0 := (\beta^0/\gamma) (S_0/ M)$ above 1). Hence, for a pool with more than 85\% susceptibles ($S_0 > 1700$), the initial growth rate satisfies $\beta^0 S_0/ M > \gamma$ and is likely to trigger an outbreak. However, as $S$ is lowered, the region where $\beta^0 S_0/M \simeq \gamma$ is approached, which makes social distancing unnecessary, as outbreak likelihood and severity diminish. In particular, Figure \ref{fig:epi-map} shows that no action is undertaken for $S_0 < 1350$. In the intermediate region, there is a nontrivial classifier boundary for determining $\mathcal{C}(s,i)$.
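Plugging the quoted numbers into the myopic comparison of \eqref{eq:epi-cost-1}--\eqref{eq:epi-cost-2} reproduces both decisions (the expected-infected inputs below are the Monte Carlo estimates cited in the text, hard-coded purely for illustration):

```python
def prefer_action(exp_infected_no_action, exp_infected_action, s0, c_i=0.25):
    """Myopic rule: intervene iff mu_A = E^A[S_0 - S_T] + C^I * s0
    is below mu_0 = E^0[S_0 - S_T]."""
    mu0 = exp_infected_no_action
    mu_a = exp_infected_action + c_i * s0
    return mu_a < mu0

# The two states discussed above:
print(prefer_action(800, 60, s0=1800))   # (s,i) = (1800,10): intervene -> True
print(prefer_action(385, 102, s0=1400))  # (s,i) = (1400,50): wait -> False
```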
Figure \ref{fig:epi-map} was generated by building an adaptive design using the Gap-SUR acquisition function and a total of $K=200$ design sites, with $r=100$ batched samples at each site. The input space was restricted to $\mathcal{X} = \{ s \in \{1200, \ldots, 1800\}, i \in \{0, \ldots, 200\} \}$. The initial design $\mathcal{Z}^{(K_0)}$ included $50 = 25 \times 2$ sites on the same rectangular $5 \times 5$ lattice for each of $Y_0, Y_A$.
In this example, the noise levels $\sigma^2_\ell(s,i)$ are highly state-dependent, see Figure \ref{fig:varsurf}. The $\mu_0$ surface has much higher noise, with largest $\sigma^2_0(s,i)$ for $(s,i) \simeq (1800,5)$, whereas $\mu_A$ has largest noise in the top right corner. As a result, $\mathcal{Z}^{(K)}$ contains mostly samples from $Y_0$ and is denser towards the bottom of the Figure.
\begin{figure}[ht]
\centering
\begin{tabular}{cc}
\includegraphics[width=0.44\textwidth]{varsurf1_surM_200-eps-converted-to} &
\includegraphics[width=0.44\textwidth]{varsurf2_surM_200-eps-converted-to}\\
$\widetilde\sigma_{0}(s,i)$ & $\widetilde\sigma_{A}(s,i)$ \end{tabular}
\caption{Estimated noise standard deviations $\widetilde{\sigma}_\ell(s,i)$ for the epidemic response example in the no-countermeasures (left panel, $\ell=0$) and action (right panel, $\ell=A$) regimes. Note the different color scales of the two panels, with $\sigma_0(\cdot) \gg \sigma_A(\cdot)$ for all $(s,i)$. \label{fig:varsurf}}
\end{figure}
\section{Conclusion}\label{sec:conclude}
In this article we have constructed several efficient sequential design strategies for the problem of determining the minimum among $L \ge 2$ response surfaces. Our Gap-SUR heuristic connects \eqref{def_cal} to contour-finding and Bayesian optimization, providing a new application of the stepwise uncertainty reduction framework \cite{ChevalierPicheny13}. Our Gap-UCB heuristic mimics multi-armed bandits by treating all possible sampling pairs in $\mathcal{X}\times \mk{L}$ as arms, and trying to balance arm exploration and exploitation.
Our approach is based on the kriging framework, but this is primarily for convenience and is not crucial. To this end, instead of a Bayesian formulation, one could use a maximum-likelihood method to fit $\widehat{\mu}_\ell(\cdot)$, replacing the posterior $M_\ell(x)$ with the point estimator and its standard error. Hence, many other regression frameworks could be selected. However, computational efficiency and the sequential framework place several restrictions on possible ways of modeling $\mu_\ell(\cdot)$. On the one hand, we need strong consistency, i.e.~the convergence of the respective classifier $\hat{\mathcal{C}}^{(K)} \to \mathcal{C}$ as $K\to \infty$. In particular, the regression method must be nonparametric and localized. On the other hand, we wish for a sequential procedure that allows for efficient updating rules in moving from $\hat\mathcal{C}^{(k)}$ to $\hat\mathcal{C}^{(k+1)}$. Lastly, in practical settings further challenges such as heteroscedasticity, non-Gaussian samplers $Y_\ell$, and heterogeneous structure of the response surface are important.
One suitable alternative to GP's is local regression or Loess \cite{Loess}, which is a nonparametric regression framework that fits pointwise linear regression models for $\mu_\ell(x)$. Loess is efficient and well-suited for heteroscedastic contexts with unknown noise distributions as in Section \ref{sec:epi}. It also automatically generates the posterior mean and variance of the fit (allowing one to use the derived formulas based on $\widehat\mu_\ell(x)$ and $\delta_\ell(x)$). However, Loess is not updatable, creating computational bottlenecks if many design augmentation iterations are to be used. At the same time fitting is extremely fast, so depending on the implementation it might still be competitive with more sophisticated methods. In this spirit, piecewise linear regression (which first partitions $\mathcal{X}$ into several cells and then carries out least-squares regression in each cell) is updatable via the Sherman-Morrison-Woodbury formulas and could be employed if there is a clear partitioning strategy available.
\red{We further note that GP kriging is just a convenient interim {surrogate} for building the experimental design. Consequently, once $\mathcal{Z}$ is generated, one could switch to a different response surface model to build a final estimate of the $\mu_\ell$'s and hence $\hat{\mathcal{C}}$. For example, the treed GP approach \cite{tgpPackage} allows for a higher-fidelity fit for the response surfaces when the underlying smoothness (specified by the covariance kernel) strongly varies across $\mathcal{X}$. Because treed GP models are expensive to fit, one could compromise by using vanilla GP during DoE and treed GP for the final estimate of $\hat{\mathcal{C}}$.}
Another fruitful extension would be to investigate ranking algorithms in the fixed confidence setting. As presented, the sequential ranking algorithm is in the fixed budget setting, augmenting the design until a pre-specified size $K$. Practically, it is often desirable to prescribe adaptive, data-driven termination by targeting a pre-set confidence level. A good termination criterion should take both accuracy and efficiency into account, ensuring the accuracy of $\widehat{\mu}_\ell(x)$ and also anticipating low information gain from further sampling steps. One proposed termination criterion is to keep track of the evolution of the empirical loss
$\mc{EL}( \hat{\mathcal{C}}^{(k)})$,
and terminate once $\mc{EL}( \hat{\mathcal{C}}^{(k)})-\mc{EL}( \hat{\mathcal{C}}^{(k+1)})$ is small enough. This is equivalent to minimizing $L_k := \mc{EL}( \hat{\mathcal{C}}^{(k)}) + \underline{\epsilon} k$, where $\underline{\epsilon} > 0$ is a parameter for the cost of simulations; the more we care about efficiency, the larger $\underline{\epsilon}$ should be.
When the design size $k$ is small, the first term will dominate, so $L_k$ is expected to first decrease in $k$. As $k \to \infty$, the rate of improvement in the loss function shrinks so that eventually $L_k$ will be increasing.
However, we find that $\mc{EL}( \hat{\mathcal{C}}^{(k)})$ is quite noisy, especially if the kriging models are re-trained across stages. In that sense, the termination criterion needs to be robust enough to generate sufficiently strong (ad hoc) guarantees that a certain tolerance threshold has truly been achieved.
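One way to operationalize this stopping rule is to scan the trace of empirical losses for the first stage at which $L_k$ stops decreasing (a sketch only, with a synthetic, already-smoothed $\mc{EL}$ trace; in practice the noisy trace would need smoothing first, as just discussed):

```python
def stop_round(emp_losses, eps):
    """Return the first stage k at which L_k = EL^(k) + eps*k stops
    improving, i.e. L_{k+1} >= L_k (indices are 0-based into emp_losses)."""
    objective = [el + eps * k for k, el in enumerate(emp_losses)]
    for k in range(len(objective) - 1):
        if objective[k + 1] >= objective[k]:
            return k
    return len(objective) - 1

# Synthetic EL trace: fast early gains, then a plateau.
trace = [1.0, 0.5, 0.3, 0.25, 0.24, 0.239]
print(stop_round(trace, eps=0.02))  # -> 3: gains no longer cover the 0.02/step cost
```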
\printbibliography
\end{document}
\section{Introduction}
Given an oriented $3$-manifold $M$ with $\partial M=\mathbb{T}^2,$ a pair of slopes $s \neq s'$ on the boundary is said to form a \textit{cosmetic surgery pair} if the Dehn fillings $M(s)$ and $M(s')$ are homeomorphic, and a \textit{purely cosmetic surgery pair} if $M(s)$ and $M(s')$ are homeomorphic as oriented $3$-manifolds. We will write $M(s) \simeq M(s')$ when the two manifolds $M(s)$ and $M(s')$ are homeomorphic as oriented $3$-manifolds.
\\
\\ In the case where $M$ is the complement $E_K$ of a non-trivial knot $K$ in $S^3,$ the cosmetic surgery conjecture, first formulated by Gordon in \cite{Gor91}, asserts that:
\begin{conjecture}[Cosmetic Surgery Conjecture]\label{conj:cosm_surg} If $K$ is a non-trivial knot in $S^3$ and $s \neq s'$ are two slopes, then $E_K(s) \not\simeq E_K(s').$
\end{conjecture}
In other words, no non-trivial knot in $S^3$ has a purely cosmetic surgery pair. In the more general setting of a $3$-manifold with torus boundary $M,$ the cosmetic surgery conjecture says that $M(s)\simeq M(s')$ if and only if there is a positive self-homeomorphism of $M$ which sends the slope $s$ to $s'.$
\\
\\ Cosmetic surgery pairs which are not purely cosmetic are called \textit{chirally cosmetic}. Chirally cosmetic pairs do exist: indeed, for any amphichiral knot, any pair of opposite slopes will form a chirally cosmetic pair. Moreover, some chirally cosmetic pairs on the right-handed trefoil knot, and more generally on $(2,n)$-torus knots, were found by Mathieu \cite{Mat92}.
\\
\\ The first result on Conjecture \ref{conj:cosm_surg} was given by Boyer and Lines \cite{BL90}. Using surgery formulas for the Casson--Walker and Casson--Gordon invariants, they proved the conjecture for any knot $K$ such that $\Delta_K''(1)\neq 0,$ where $\Delta_K$ is the Alexander polynomial of $K.$ The next finite type invariant of knots also gives an obstruction: Ichihara and Wu showed \cite{IW16} that if $K$ admits purely cosmetic surgeries then $J_K'''(1)=0$ where $J_K$ is the Jones polynomial.
\\ Later on, much of the progress on Conjecture \ref{conj:cosm_surg} has been brought by studying Heegaard Floer homology of knots. First, it was proved that genus one knots do not have purely cosmetic surgeries \cite{Wan06}. Some conditions on the set of possible slopes in a purely cosmetic pair were established: it was first shown that the slopes in a purely cosmetic pair for a knot $K\subset S^3$ have opposite signs \cite{OS11} \cite{Wu11}, then Ni and Wu showed that the slopes must actually be opposite and furthermore that the Ozsv\'ath--Szab\'o--Rasmussen $\tau$ invariant of $K$ must vanish \cite{NW15}. Also, the Heegaard Floer homology $\widehat{HFK}(K)$ must satisfy some other constraints \cite{NW15}\cite{Gai17}.
\\ Then, the work of Futer, Purcell and Schleimer \cite{FPS19} showed that for any hyperbolic $3$-manifold with torus boundary, it can be algorithmically checked whether it admits a purely cosmetic pair. Their result came out of a different direction, using hyperbolic geometry to bound the length of slopes in a purely cosmetic pair. With their method they proved the conjecture for knots with less than $15$ crossings.
\\ Using a new method to express the Heegaard Floer homology of surgeries, Hanselman put further restrictions on the surgery slopes \cite{Han19}. Similarly to Futer, Purcell and Schleimer's result, those conditions restrict the set of possible slopes to a finite set. Hanselman's bounds seem to be more powerful in practice, with the downside that so far they apply only to the case of knots in $S^3.$
\\ Indeed, Hanselman was able to use his results to show that no knot whose prime summands have fewer than $16$ crossings has a purely cosmetic surgery pair.
\\ Finally, let us mention that, analyzing JSJ decompositions, Tao has proved that composite knots \cite{Tao19} and cable knots \cite{Tao18} do not have purely cosmetic surgeries. It reduces the conjecture to the case of prime knots.
\\ We will now describe Hanselman's main theorem from \cite{Han19} in more detail below, as our results will build upon his work.
\\
\\ For $K$ a knot in $S^3,$ let $g(K)$ be its Seifert genus. Let $\widehat{HFK}(K)$ be its Heegaard Floer knot homology, which is bigraded with Alexander grading $A$ and Maslov grading $\mu.$ We define the $\delta$-grading by $\delta=A-\mu,$ and let the Heegaard Floer thickness of $K$ be
$$th(K)=\mathrm{max}\lbrace\delta(x) \ | x\neq 0 \in \widehat{HFK}(K)\rbrace -\mathrm{min}\lbrace\delta(x) \ | x\neq 0 \in \widehat{HFK}(K)\rbrace$$
Said differently, $th(K)+1$ is the number of diagonals on which the Heegaard Floer homology of $K$ is supported.
\begin{theorem}\label{thm:hanselman}\cite{Han19} Let $K$ be a non-trivial knot in $S^3$ and let $s\neq s'$ be slopes such that $E_K(s)\simeq E_K(s').$ Then:
\begin{itemize}
\item[-]The pair of slopes $\lbrace s,s' \rbrace$ is either $\lbrace \pm 2 \rbrace$ or of the form $\lbrace \pm \frac{1}{k} \rbrace$ for some non-negative integer $k.$
\item[-]If $\lbrace s,s' \rbrace=\lbrace \pm 2 \rbrace$ then $g(K)=2.$
\item[-]If $\lbrace s,s' \rbrace=\lbrace \pm \frac{1}{k} \rbrace$ then
$$k \leqslant \frac{th(K)+2g(K)}{2g(K)(g(K)-1)}$$
\end{itemize}
\end{theorem}
\begin{remark}Actually, Hanselman's proof also shows that there is an integer $n_s$ which can be computed from $\widehat{HFK}(K)$ such that if $E_K(\tfrac{1}{k})\simeq E_K(-\tfrac{1}{k})$ then $k=n_s.$
\end{remark}
Despite the success of Heegaard Floer homology, it is generally believed that other methods will be needed to study the cosmetic surgery conjecture. Indeed, Hanselman found $337$ knots $K \subset S^3$ for which Heegaard Floer homology does not distinguish between $E_K(s)$ and $E_K(-s)$ for $s=1$ or $2.$
\\
\\ Invariants that have nice surgery expressions are natural candidates for applications to the cosmetic surgery conjecture. Therefore, the author investigated what information the Witten--Reshetikhin--Turaev (WRT) invariants have to offer about cosmetic surgeries. As WRT invariants are part of TQFTs, they admit natural surgery expressions. The case of the so-called $\mathrm{SO}_3$ WRT invariants at level $5$ seems to give the most straightforward condition. We find:
\begin{theorem}\label{thm:main_thm} Let $K\subset S^3$ be a knot, let $J_K$ be its Jones polynomial, and assume that $K$ has a purely cosmetic surgery pair $\lbrace \pm s \rbrace.$
\\ Then $s$ is of the form $\pm \frac{1}{5k}$ for some $k \in {\mathbb{Z}},$ unless $J_K(e^{\frac{2i\pi}{5}})=1.$
\end{theorem}
At this point, the role of the $5$-th root of unity, and multiples of $5$ in the denominator of cosmetic surgery slopes may seem mysterious. This is mostly a matter of exposition: we will also get similar conditions out of $\mathrm{SO}_3$ WRT TQFTs at different levels than $5$; with the drawback that they involve several colored Jones polynomials of $K.$ Here, let us write $J_{K,n}$ for the $n$-th normalized colored Jones polynomial of $K,$ with the convention that $J_{K,1}=1$ and $J_{K,2}$ is the Jones polynomial.
\begin{theorem}\label{thm:main_thm2}
Let $r \geqslant 5$ be an odd prime, $\zeta_r$ be a primitive $r$-th root of unity and $K$ be a non-trivial knot. Let also $[n]=\frac{\zeta_r^n-\zeta_r^{-n}}{\zeta_r-\zeta_r^{-1}}.$ There exists a finite set $F_r$ of nonzero vectors in ${\mathbb{C}}^{\frac{r-1}{2}},$ with cardinality $|F_r|\leqslant \frac{r+1}{2}$ such that if $K$ has a purely cosmetic surgery pair, then either the slopes are of the form $\lbrace \pm \frac{1}{rk}\rbrace$ with $k\in {\mathbb{Z}},$ or the vector
$$v_K=\begin{pmatrix}
1 \\ -[2]J_{K,2}(\zeta_r^2) \\ [3]J_{K,3}(\zeta_r^2) \\ \vdots \\ (-1)^{\frac{r-3}{2}}[\frac{r-1}{2}]J_{K,\frac{r-1}{2}}(\zeta_r^2)
\end{pmatrix}$$
is orthogonal to an element of $F_r.$
\end{theorem}
\begin{remark}The vectors in $F_r$ can actually be naturally associated with pairs $\lbrace \pm 2 \rbrace$ or $\lbrace \pm \frac{1}{\underline{k}}\rbrace$ where $\underline{k}$ is a non-zero element of ${\mathbb{Z}}/r{\mathbb{Z}}.$ As the proof of Theorem \ref{thm:main_thm} will show, if $E_K(2)\simeq E_K(-2)$ then $v_K$ is orthogonal to $f_{\lbrace \pm 2 \rbrace}$ and if $E_K(\tfrac{1}{k})\simeq E_K(-\tfrac{1}{k})$ then $v_K$ is orthogonal to $f_{\lbrace \pm \frac{1}{\underline{k}}\rbrace}$ where $\underline{k}$ is the class of $k$ mod $r.$
In particular, if one were able to compute the invariant $n_s$ in Hanselman's theorem for a knot $K,$ then it would suffice to test orthogonality of $v_K$ with $f_{\lbrace \pm 2 \rbrace}$ and with $f_{\lbrace \pm \frac{1}{\underline{n_s}}\rbrace}$ to rule out a purely cosmetic surgery pair for $K.$
\end{remark}
\begin{remark}\label{rk:level3} While the $\mathrm{SO}_3$ WRT invariant at level $r=3$ exists, it is entirely determined by homological data. It turns out that the level $r=5$ is the first one where the $\mathrm{SO}_3$ WRT invariants give non-trivial information about cosmetic surgeries.
\end{remark}
Let us now restrict to the simpler condition we get from the case $r=5$ and focus on the corollaries of Theorem \ref{thm:main_thm}.
Theorem \ref{thm:main_thm} combined with Hanselman's results implies that if a knot $K$ has a purely cosmetic surgery pair and $J_K(e^{\frac{2i\pi}{5}})\neq 1,$ then $K$ must have a rather large crossing number:
\begin{corollary} If $K$ is a non-trivial knot with at most $31$ crossings and such that $J_K(e^{\frac{2i\pi}{5}})\neq 1,$ then $K$ has no purely cosmetic surgery pair.
\end{corollary}
\begin{proof}
By Theorem \ref{thm:main_thm}, if $K$ has a purely cosmetic surgery pair then the slopes must be of the form $\pm \frac{1}{5k},$ so the denominator is at least $5.$
\\ By Hanselman's inequality and the fact that $g(K)\geqslant 2$ by \cite{Wan06}, this implies that $th(K)\geqslant 16.$
\\ However, Lowrance proved that $th(K)\leqslant g_T(K),$ where $g_T(K)$ is the Turaev genus of $K$ \cite{Low08}.
\\ But for any knot $K,$ the Turaev genus $g_T(K)$ is bounded above by $c(K)/2$ where $c(K)$ is the crossing number of $K.$ (see for example \cite[Proposition 2.4]{Abe09}).
\\ Thus a knot with $c(K)\leqslant 31$ crossings has thickness at most $15,$ and cannot have a purely cosmetic surgery pair if $J_K(e^{\frac{2i\pi}{5}})\neq 1.$
\end{proof}
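The arithmetic above is simple enough to check mechanically. The following sketch (illustrative only; the function name \texttt{max\_denominator} is ours) encodes Hanselman's bound on the denominator $k$:

```python
from math import floor

def max_denominator(th, g):
    # Largest k allowed by Hanselman's inequality k <= (th + 2g) / (2g(g-1)),
    # for thickness th and Seifert genus g >= 2.
    return floor((th + 2 * g) / (2 * g * (g - 1)))
```

For $g=2$ one finds that thickness $15$ only allows $k\leqslant 4,$ so a slope denominator of at least $5$ indeed forces $th(K)\geqslant 16;$ larger genus only strengthens the bound.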
Let us note that according to Hanselman's computations \cite{Han19} a prime knot with at most $16$ crossings has thickness at most $2,$ so the inequality $th(K) \leqslant \frac{c(K)}{2}$ we used seems to be far from optimal.
The literature has been particularly interested in the special case of alternating knots. In that case, hypothetical alternating counterexamples of the conjecture have been shown to have a very special form. Indeed, Hanselman showed in \cite[Theorem 3]{Han19} that an alternating knot $K$ (or, more generally, a \textit{thin knot}, which has $th(K)=0$) with a purely cosmetic surgery must have signature $0$ and Alexander polynomial $\Delta_K(t)=nt^2-4nt+(6n+1)-4nt^{-1}+nt^{-2}$ for some $n\in {\mathbb{Z}}.$ As a corollary of our main result, we can put an extra condition on the Jones polynomial of $K$:
\begin{corollary}\label{cor:alternating} If $K$ is an alternating (or thin) knot with a purely cosmetic surgery, then $J_K(e^{\frac{2i\pi}{5}})=1.$
\end{corollary}
\begin{proof}
It is also part of \cite[Theorem 3]{Han19} that the only possible purely cosmetic surgery pairs of thin knots are the pairs $\lbrace \pm 1\rbrace$ or $\lbrace \pm 2\rbrace.$ By Theorem \ref{thm:main_thm}, $K$ must satisfy $J_K(e^{\frac{2i\pi}{5}})=1.$
\end{proof}
Finally, let us discuss some numerical estimates of the strength of this obstruction. Knots with $J_K(e^{\frac{2i\pi}{5}})=1$ seem to become increasingly rare when the number of crossings increases. Using a census of all $9,755,328$ prime knots with at most $17$ crossings and their Jones polynomials generated with the Regina software \cite{Reg}, we found that there are only $97$ that have $J_K(e^{\frac{2i\pi}{5}})=1.$ Following Regina's notation, those are the knots:
\begin{small}
\begin{eqnarray*}8nt_{1}, 9at_{1}, 11ah_{001}, 11at_{1}, 12ah_{0001}, 12ah_{0002}, 12nh_{003},
13ah_{0002}, 13ah_{0004}, 13ns_{2},
\\ 14ah_{00003}, 14ah_{00005}, 14ah_{00010}, 14ah_{00012}, 14ah_{00013}, 14ah_{00017}, 14ah_{00025}, 14nh_{00007}, 14nh_{00042},
\\ 15ah_{00005}, 15ah_{00007}, 15ah_{00044}, 15ah_{00049}, 15ah_{00050}, 15ah_{00070}, 15ah_{00072}, 15nh_{000019}, 15nt_{1},
\\16ah_{000006}, 16ah_{000009}, 16ah_{000010}, 16ah_{000019}, 16ah_{000060}, 16ah_{000062}, 16ah_{000069},
\\ 16ah_{000100}, 16ah_{000104}, 16ah_{000105}, 16ah_{000116}, 16ah_{000137}, 16ah_{000139}, 16ah_{000141},
\\ 16ah_{000153}, 16ah_{000209}, 16ah_{000253}, 16ah_{000447}, 16ah_{000505}, 16nh_{0000029}, 16nh_{0000034},
\\ 16nh_{0000092}, 16nh_{0000150}, 16nh_{0000415}, 17ah_{0000008}, 17ah_{0000010}, 17ah_{0000032}, 17ah_{0000081},
\\ 17ah_{0000112}, 17ah_{0000137}, 17ah_{0000138}, 17ah_{0000161}, 17ah_{0000170}, 17ah_{0000243},
17ah_{0000248},
\\ 17ah_{0000249}, 17ah_{0000341}, 17ah_{0000366}, 17ah_{0000368}, 17ah_{0000374}, 17ah_{0000376}, 17ah_{0000384},
\\ 17ah_{0000495}, 17ah_{0000500}, 17ah_{0000544}, 17ah_{0000545}, 17ah_{0000593}, 17ah_{0000634}, 17ah_{0000685},
\\ 17ah_{0000687}, 17ah_{0000786}, 17ah_{0000979}, 17ah_{0001175}, 17ah_{0001352}, 17ah_{0001734}, 17ah_{0001883},
\\ 17ah_{0002693}, 17ah_{0003282}, 17nh_{0000002}, 17nh_{0000034}, 17nh_{0000035}, 17nh_{0000196}, 17nh_{0000257},
\\ 17nh_{0000276}, 17nh_{0000327}, 17nh_{0000473}, 17nh_{0000765}, 17nh_{0005618}, 17nh_{0005619}
\end{eqnarray*}
\end{small}
Only two of those knots satisfy $\Delta_K''(1)=0$ and $J_K'''(1)=0,$ namely the knots $14nh_{00042}$ and $16ah_{000209}.$ We get the following corollary:
\begin{corollary}\label{cor:17crossings} No non-trivial knot with at most $17$ crossings admits purely cosmetic surgeries.
\end{corollary}
\begin{proof}
The two exceptions $14nh_{00042}$ and $16ah_{000209}$ that we found were covered by Hanselman's \cite{Han19} treatment of knots with at most $16$ crossings; for those two knots a purely cosmetic surgery can be excluded from the computation of $\widehat{HFK}.$ Moreover by \cite{Tao19}, a knot with a purely cosmetic surgery pair must be prime.
\end{proof}
\begin{remark}Since $J_K''(1)=-3\Delta_K''(1)$ (see \cite[Lemma 2.1]{NW15}), computing only the Jones polynomial one can exclude all but two knots with at most $17$ crossings from having a purely cosmetic surgery. While we simply used Hanselman's previous results to exclude those two knots, we could have tried out the criterion in Theorem \ref{thm:main_thm2} for $r=7$ to exclude those last two knots instead.
\end{remark}
We note that Sikora and Tuzun \cite{ST18} have done extensive computations of Jones polynomials of knots with at most $22$ crossings, in order to verify the Jones unknot detection conjecture up to that crossing number. A numerical strategy to verify the conjecture up to some crossing number would be the following: test knots for the criteria $\Delta_K''(1)=0$ and $J_K'''(1)=0,$ which are fast to check; if both vanish, compute whether $J_K(\zeta_5)=1;$ and for the remaining exceptions use the criterion of Theorem \ref{thm:main_thm2} for primes $r\geq 7.$ In forthcoming work, we plan to implement the criterion in Theorem \ref{thm:main_thm2} for primes $r\geq 7$ to be able to verify the conjecture for the whole census of knots with at most $19$ crossings on Regina.
The paper is organized as follows: in Section \ref{sec:prelim}, we review the basics of the $\mathrm{SO}(3)$ WRT TQFTs that we will use. In Section \ref{sec:surgery_formula}, we derive the formula of the $\mathrm{SO}(3)$ WRT TQFTs of Dehn-surgeries of a knot from TQFT axioms. In Section \ref{sec:proofs}, we prove Theorem \ref{thm:main_thm2} using the surgery formula and deduce Theorem \ref{thm:main_thm}.
\textbf{Acknowledgements:} We would like to thank François Costantino, Andr\'as Stipsicz and Effie Kalfagianni for their interest and helpful conversations. We also thank Cl\'ement Maria for helping us generate the census of Jones polynomials with Regina.
\section{Preliminaries}
\label{sec:prelim}
\subsection{$\mathrm{SO}_3$ WRT TQFTs and extended cobordisms}
\label{sec:extended}
In this section, we review some well-known properties of WRT TQFTs needed in this paper. Although they were initially defined by Reshetikhin and Turaev in \cite{RT91} to realize Witten's \cite{W} interpretation of the Jones polynomial as a quantum field theory based on the Chern-Simons invariant, it is really the so-called $\mathrm{SO}_3$ TQFTs in the framework of Blanchet, Habegger, Masbaum and Vogel \cite{BHMV} that we will present here. We note that the $3$-manifold invariant associated to the $\mathrm{SO}_3$ TQFT is up to normalization identical to the $\tau_r'$ invariants of Kirby and Melvin \cite{KM91}.
The $\mathrm{SO}_3$ TQFTs in \cite{BHMV} have a so-called \textit{anomaly}, meaning that the associated representations of mapping class groups are projective. Lifting this anomaly requires working with a category of $2+1$ cobordisms with an additional structure, either a $p_1$-structure as in \cite{BHMV}, or a so-called extended structure via an approach initiated by Walker \cite{W} and further developed by Turaev \cite{Tur}. We will use the latter approach here.
Let us define extended $3$-manifolds as pairs $(M,n)$ where $M$ is a compact oriented $3$-manifold and $n\in {\mathbb{Z}}.$ The integer $n$ is called the weight of the extended $3$-manifold, and intuitively encodes a choice of signature of a $4$-manifold bounding $M.$ A connected extended surface $(\Sigma,L)$ will consist of a compact oriented surface $\Sigma$ with a choice of Lagrangian $L\subset H_1(\Sigma,{\mathbb{Q}}).$ Manifolds with boundary will be equipped with a weight and a Lagrangian in each boundary component. For any two extended $3$-manifolds $(M_1,n)$ and $(M_2,m)$ such that $(\Sigma,L)$ is the common boundary of $(M_1,n)$ and $(M_2,m),$ the gluing is the extended closed $3$-manifold $(M_1\underset{\Sigma}{\coprod} \overline{M_2},n+m-\mu(\lambda_{M_1}(\Sigma),L,\lambda_{M_2}(\Sigma))),$ where $\lambda_{M_i}(\Sigma)=\mathrm{Ker}(H_1(\Sigma,{\mathbb{Q}})\rightarrow H_1(M_i,{\mathbb{Q}}))$ and $\mu$ is the Maslov index, which can be computed from the following definition:
\begin{definition}\label{def:Maslov}Let $(V,\omega)$ be a finite dimensional symplectic $\mathbb{Q}$-vector space, let $L_1,L_2,L_3$ be Lagrangians in $V$ and let $W=\lbrace (x,y) \in L_1\times L_2 \ | \ x+y\in L_3 \rbrace.$ Then the Maslov index $\mu(L_1,L_2,L_3)$ is the signature of the symmetric bilinear form $B$ on $W$ defined by $B((x_1,x_2),(y_1,y_2))=\omega(x_2,y_1).$
\end{definition}
We recall that the Maslov index changes sign under any odd permutation of $L_1,L_2,L_3.$ In particular, it vanishes whenever two Lagrangians are equal.
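Definition \ref{def:Maslov} is directly computable. The sketch below is our own illustration, specialized to $1$-dimensional Lagrangians in $H_1(\mathbb{T}^2,{\mathbb{Q}})\simeq{\mathbb{Q}}^2;$ since absolute sign conventions for the Maslov index vary in the literature, we only verify the symmetry properties just stated:

```python
import numpy as np

def omega(u, v):
    # Standard symplectic form on Q^2 in the basis (a, b).
    return u[0] * v[1] - u[1] * v[0]

def maslov(x, y, z):
    # mu(Span(x), Span(y), Span(z)): W = {(u, v) in L1 x L2 : u + v in L3},
    # B((x1, x2), (y1, y2)) = omega(x2, y1); return the signature of B on W.
    # (s, t) parametrizes (s*x, t*y); s*x + t*y lies in Span(z) iff
    # omega(z, s*x + t*y) = 0, since Span(z) is Lagrangian.
    M = np.array([[omega(z, x), omega(z, y)]], dtype=float)
    _, sv, vt = np.linalg.svd(M)
    ker = [vt[i] for i in range(2) if i >= len(sv) or sv[i] < 1e-12]
    # Gram matrix of B on W and its signature:
    G = np.array([[w[1] * w2[0] * omega(y, x) for w2 in ker] for w in ker])
    eig = np.linalg.eigvalsh((G + G.T) / 2)
    return int((eig > 1e-12).sum() - (eig < -1e-12).sum())
```

One checks for instance that `maslov` vanishes whenever two of the three lines coincide, and changes sign under transposing the first two arguments.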
Similar rules apply for the gluing of cobordisms, where the weight of the gluing has to be computed using some Maslov indices. That construction applied to mapping cylinders gives rise to the so-called \textit{extended mapping class group} of surfaces, which we define below:
\begin{definition}\label{def:extendedMCG} For $\Sigma$ a compact oriented surface, and $L\subset H_1(\Sigma,{\mathbb{Q}})$ a Lagrangian, the extended mapping class group $\widetilde{\mathrm{Mod}}(\Sigma)$ is the $\mathbb{Z}$-central extension of $\mathrm{Mod}(\Sigma)$ defined by the rule
$$(f,n)\circ (g,m)=(f\circ g, n+m+\mu(L,f(L),(f\circ g)(L)))$$
for any $f,g\in \mathrm{Mod}(\Sigma)$ and any $n,m\in {\mathbb{Z}}.$
\end{definition}
\subsection{The $\mathrm{SO}_3$ invariants from skein theory}
\label{sec:invariants}
Here we review the TQFT constructions defined in \cite{BHMV}. We will first give the definition of the $\mathrm{SO}_3$ invariants of an extended closed $3$-manifold $(M,n)$, which are computed from the evaluation at roots of unity of the Kauffman bracket of some cablings of a surgery presentation of $M.$
Let us fix an odd integer $r\geqslant 3,$ and let $A_r$ be a primitive $2r$-th root of unity. For any integer $n\in {\mathbb{Z}}$ the quantum integer $[n]$ is given by
$$[n]=\frac{A_r^{2n}-A_r^{-2n}}{A_r^2-A_r^{-2}}$$
\begin{figure}
\input{Kauffman_rel.pdf_tex}
\caption{The second Kauffman relation}
\label{fig:Kauffman}
\end{figure}
Let us first recall the definition of the Kauffman bracket $\langle L \rangle$ of a framed link in $S^3.$ It is an invariant of framed links completely determined by the normalization $\langle \emptyset \rangle=1,$ and the two Kauffman relations: $\langle L \cup U \rangle=(-A^2-A^{-2})\langle L \rangle,$ where $L \cup U$ denotes the disjoint union of a framed link and a $0$-framed unknot, and the second Kauffman relation relating $3$ links that differ only in a ball is shown in Figure \ref{fig:Kauffman}.
\\ Furthermore, the \textit{colored} Kauffman bracket $\langle L,c\rangle$ of a link $L$ whose components are colored by elements of ${\mathbb{C}}[z]$ is defined in the following way: if all components $L_i$ are colored by monomials $z^{d_i},$ then replace each component $L_i$ by $d_i$ parallel copies and compute the Kauffman bracket. Otherwise, expand by multilinearity.
\\ Now, suppose that $M$ has a surgery presentation $M=S^3(L),$ where $L$ is a framed link in $S^3.$ Let us write $r=2m+1,$ and let $\omega=\underset{i=1}{\overset{m}{\sum}}(-1)^{i-1}[i]e_i,$ where $e_i(z)$ is the $i$-th Chebyshev polynomial, defined by $e_1(z)=1, e_2(z)=z$ and $e_{i+1}(z)=ze_i(z)-e_{i-1}(z).$ We recall that $[i]=\frac{A_r^{2i}-A_r^{-2i}}{A_r^2-A_r^{-2}}$ is the $i$-th quantum integer.
\\ Finally, we introduce
$$\eta_r=\frac{(A_r^2-A_r^{-2})}{\sqrt{-r}} \ \textrm{and} \ \kappa_r=\eta_r\langle U_-,\omega\rangle=\eta_r \underset{i=1}{\overset{\frac{r-1}{2}}{\sum}} (-A_r)^{-(i^2-1)}[i]^2,$$
where $U_-$ is the unknot with framing $-1.$ We note that $\kappa_r$ is actually a $4r$-th root of unity, whose exact order depends on $r \ \mathrm{mod} \ 4$ and the choice of $A_r.$
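These quantities are easy to evaluate numerically. The short sketch below (our own illustration, for $r=5$ and one admissible choice of $A_r,$ with the branch $\sqrt{-r}=i\sqrt{r}$ assumed) checks that $\kappa_r$ is a $4r$-th root of unity, and, as a consistency check, that the Chebyshev polynomials reproduce the quantum integers via the classical unknot evaluation $e_i(-A_r^2-A_r^{-2})=(-1)^{i-1}[i]$ (an identity we take for granted here):

```python
import cmath

r = 5                                  # level: any odd r >= 3 works here
A = cmath.exp(1j * cmath.pi / r)       # one admissible primitive 2r-th root of unity

def qint(n):
    # Quantum integer [n] = (A^{2n} - A^{-2n}) / (A^2 - A^{-2}).
    return (A**(2 * n) - A**(-2 * n)) / (A**2 - A**(-2))

def cheb(i, z):
    # Chebyshev polynomials e_1 = 1, e_2 = z, e_{i+1} = z e_i - e_{i-1}.
    if i == 1:
        return 1
    if i == 2:
        return z
    return z * cheb(i - 1, z) - cheb(i - 2, z)

eta = (A**2 - A**(-2)) / cmath.sqrt(-r)      # assumed branch: sqrt(-r) = i*sqrt(r)
kappa = eta * sum((-A)**(-(i**2 - 1)) * qint(i)**2 for i in range(1, (r - 1) // 2 + 1))
```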
\begin{theorem}\label{thm:RT_invariants}\cite[Theorem 2.2.2]{Tur}
If $M$ is obtained by surgery on a framed link $L\subset S^3,$ with $n(L)$ components and signature $\sigma(L)$ then
$$Z_r(M,\sigma(L))=\eta_r^{1+n(L)}\langle L, \omega,\omega,\ldots,\omega \rangle$$
is a topological invariant of the extended $3$-manifold $(M,\sigma(L)).$
Furthermore, if $M$ contains a colored link $(K,c)$ then
$$Z_r(M,\sigma(L),K,c)=\eta_r^{1+n(L)}\langle L\cup K,\omega,\ldots,\omega,c \rangle$$
is a topological invariant of the extended $3$-manifold $(M,\sigma(L),K,c).$
In both cases, the invariant satisfies $Z_r(M,n)=\kappa_r^{-n} Z_r(M,0).$
\end{theorem}
Although we defined the invariant for extended $3$-manifolds, for any $3$-manifold the quantity $Z_r(M)=Z_r(M,0)$ is a topological invariant of $M.$ The last property allows one to compute $Z_r(M)$ from any surgery presentation of $M,$ no matter the signature, by renormalizing by the appropriate power of $\kappa_r.$
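As a sanity check of Theorem \ref{thm:RT_invariants}, one can evaluate the surgery formula on the two simplest presentations: the empty link gives $S^3,$ and the $0$-framed unknot gives $S^1\times S^2.$ The sketch below (for $r=5,$ with the same conventions for $A_r$ and $\sqrt{-r}$ as above, and using the standard evaluation $\langle U,e_i\rangle=(-1)^{i-1}[i]$ of the $0$-framed unknot, which we take for granted) recovers $Z_r(S^3)=\eta_r$ and $Z_r(S^1\times S^2)=1$:

```python
import cmath

r = 5
m = (r - 1) // 2
A = cmath.exp(1j * cmath.pi / r)

def qint(n):
    return (A**(2 * n) - A**(-2 * n)) / (A**2 - A**(-2))

eta = (A**2 - A**(-2)) / cmath.sqrt(-r)

# Empty surgery presentation (n(L) = 0): Z_r(S^3) = eta * <empty> = eta.
Z_S3 = eta

# 0-framed unknot U (n(L) = 1, sigma(L) = 0), with <U, e_i> = (-1)^{i-1}[i]:
# <U, omega> = sum_i (-1)^{i-1}[i] * (-1)^{i-1}[i] = sum_i [i]^2.
Z_S1xS2 = eta**2 * sum(qint(i)**2 for i in range(1, m + 1))
```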
\subsection{The $\mathrm{SO}_3$ WRT TQFTs}
\label{sec:TQFT}
The $\mathrm{SO}_3$ WRT invariants $Z_r$ are part of a TQFT defined on the category of extended cobordisms in dimension $2+1.$ Objects of this category are extended surfaces and morphisms are (homeomorphism classes fixing the boundary of) extended cobordisms of dimension $3,$ as introduced in Section \ref{sec:extended}. The disjoint union gives the category a monoidal structure, and the invariant $Z_r$ can be extended to a monoidal functor from the category of extended $2+1$-cobordisms to the category of complex vector spaces. Concretely speaking, we have the following:
\begin{theorem}\cite{BHMV}\label{thm:TQFT} For any odd integer $r\geqslant 3,$ we have:
\begin{itemize}
\item[-]For any closed compact oriented $3$-manifold $M,$ $Z_r(M)=Z_r(M,0)$ is a ${\mathbb{C}}$-valued topological invariant.
\item[-]For every closed compact oriented extended surface $(\Sigma,L),$ $Z_r(\Sigma,L)$ is a finite dimensional vector space with a natural Hermitian form $\langle\langle \cdot , \cdot \rangle\rangle.$
\item[-]For any compact oriented extended $3$-manifold $M=(M,0),$ containing a link $K\subset M,$ and with a fixed homeomorphism $\partial M \simeq \Sigma,$ and a choice of Lagrangian $L\subset H_1(\Sigma,{\mathbb{Q}}),$ $Z_r(M,K)$ is a vector in $Z_r(\Sigma,L),$ and $Z_r(\Sigma,L)$ is spanned by such vectors.
\item[-]The extended mapping class group $\widetilde{\mathrm{Mod}}(\Sigma),$ acting on extended $3$-manifolds with boundary $(\Sigma,L)$ gives rise to a representation
$$\rho_r:\widetilde{\mathrm{Mod}}(\Sigma) \longrightarrow \mathrm{Aut}(Z_r(\Sigma,L))$$
called the $\mathrm{SO}_3$ quantum representation.
\item[-]For any two extended $3$-manifolds (possibly with links) $(M_1,0),(M_2,0)$ with $\partial M_1\simeq \partial M_2 \simeq (\Sigma,L)$ the underlying closed $3$-manifold $M=M_1 \underset{\Sigma}{\coprod} \overline{M_2}$ has invariant
$$Z_r(M)=\kappa_r^{\mu(\lambda_{M_1}(\Sigma),L,\lambda_{M_2}(\Sigma))}\langle\langle Z_r(M_1) , Z_r(M_2) \rangle\rangle.$$
\end{itemize}
\end{theorem}
In this paper, we will only compute WRT invariants of manifolds that are Dehn surgeries on knots rather than links, and will use the TQFT properties of $Z_r$ to compute them. That is, we will use the last point of Theorem \ref{thm:TQFT} in the special case where $\Sigma$ is a torus, $M_1$ is a knot complement in $S^3$ and $M_2$ is a solid torus. Therefore we will now focus on the TQFT space and quantum representation of the torus.
\subsection{Quantum representations of the torus}
\label{sec:quantum_rep}
We first describe a basis of the $Z_r$ TQFT-space of the torus $\mathbb{T}^2.$ We will fix the choice of Lagrangian in $H_1(\mathbb{T}^2,{\mathbb{Q}})$ to be the subspace generated by the class of the meridian $S^1\times \lbrace 0 \rbrace,$ and therefore no longer make reference to the choice of Lagrangian in this section.
If $1\leqslant i \leqslant \frac{r-1}{2},$ let $f_i$ be the vector in $Z_r(\mathbb{T}^2)$ corresponding to the solid torus $D^2\times S^1$ with boundary $T^2$ and meridian $S^1 \times \lbrace 0 \rbrace,$ containing the framed knot $\lbrace [0,\varepsilon] \rbrace \times S^1$ colored by $e_i(z).$ In particular, $f_1$ corresponds to the solid torus with the empty link inside, since $e_1(z)=1.$
\begin{proposition}\cite[Corollary 4.10]{BHMV} Let $r=2m+1 \geqslant 3$ be an odd integer. The vector space $Z_r(\mathbb{T}^2)$ admits $f_1,\ldots,f_{m}$ as an orthonormal basis.
\end{proposition}
With this in mind, we will now describe the quantum representations of the torus, in the above basis.
\\ Let us recall that the mapping class group of the torus is isomorphic to $\mathrm{SL}_2({\mathbb{Z}}),$ and generated by the two matrices $T=\begin{pmatrix}
1 & 1 \\ 0 & 1
\end{pmatrix}$ and $S=\begin{pmatrix}
0 & -1 \\ 1 & 0
\end{pmatrix}.$ As mapping classes, $T$ corresponds to the Dehn-twist along the meridian $S^1 \times \lbrace 0 \rbrace$ of $\mathbb{T}^2=S^1 \times S^1,$ and $S$ to the map $S(u,v)=(-v,u)$ of order $4.$ A presentation of $\mathrm{SL}_2({\mathbb{Z}})$ is then given by $$\mathrm{SL}_2({\mathbb{Z}})=\langle S,T | S^4=1,S^2=(ST)^3 \rangle.$$
In the extended mapping class group, the relations turn into
$$(S,0)^4=(Id,0), \ (S,0)^2=(S^2,0) \ \textrm{and} \ ((S,0)(T,0))^3=(S^2,-1)$$
Indeed, let $a,b$ be the basis of $H_1(\mathbb{T}^2,{\mathbb{Q}})$ given by the class of the meridian $S^1\times \lbrace 0 \rbrace$ and longitude $\lbrace 0 \rbrace \times S^1.$ We have the following quick rule to compute Maslov indices in the case of the torus:
\begin{lemma}\label{lemma:Maslov}Let $L_1=\mathrm{Span}(x),$ $L_2=\mathrm{Span}(y)$ and $L_3=\mathrm{Span}(z)$ be Lagrangians of $H_1(\mathbb{T}^2,{\mathbb{Q}}).$ Then if any two of $x,y,z$ are linearly dependent then $\mu(L_1,L_2,L_3)=0$ and else if $z=\alpha x+\beta y$ then
$$\mu(L_1,L_2,L_3)=\mathrm{sign}(\alpha\beta \omega(x,y)).$$
\end{lemma}
\begin{proof}
Follows from Definition \ref{def:Maslov}.
\end{proof}
To simplify notations, we will write $\mu(x,y,z)$ for $\mu(\mathrm{Span}(x),\mathrm{Span}(y),\mathrm{Span}(z)).$
We can compute that $(S,0)^2=(S^2,\mu(a,b,a))=(S^2,0)$ and $(S,0)^4=(S^4,\mu(a,a,a))=(Id,0).$ Moreover, $(S,0)(T,0)=(ST,\mu(a,b,a))=(ST,0),$ $(ST,0)^2=((ST)^2,\mu(a,b,-a+b))=((ST)^2,-1),$ and $(ST,0)^3=((ST)^3,-1+\mu(a,-a+b,-a))=(S^2,-1).$
\begin{proposition}\cite[Section 2]{Gil99}\label{prop:quantum_rep} We have, in the basis $f_i,$ that
$$\rho_r((T,0))f_i=(-A_r)^{i^2-1}f_i, \ \textrm{and} \ \rho_r((S,0))f_i=\eta_r \underset{1\leqslant j \leqslant \frac{r-1}{2}}{\sum}(-1)^{i+j}[ij] f_j$$
and moreover $\rho_r((Id,1))=\kappa_r Id.$
\end{proposition}
\begin{remark} \label{rk:eigenvalues} Note that the matrix $\rho_r(T)$ is diagonal with distinct eigenvalues, since $(-A_r)$ is a primitive $r$-th root of $1.$
\end{remark}
To simplify notations, we will sometimes abusively write $\rho_r(S^{n_1}T^{n_2}\ldots S^{n_k})$ for
\noindent $\rho_r((S,0)^{n_1}(T,0)^{n_2}\ldots (S,0)^{n_k}),$ even though $\rho_r$ is defined on $\widetilde{\mathrm{Mod}}(\mathbb{T}^2)$ instead of $\mathrm{SL}_2({\mathbb{Z}}).$ If the word in the generators $S$ and $T$ is fixed then there is no ambiguity.
The following lemma shows that the above definition yields a projective representation of $\mathrm{SL}_2({\mathbb{Z}}),$ or a linear representation of the extended mapping class group $\widetilde{\mathrm{Mod}}(\mathbb{T}^2):$
\begin{lemma}\label{lemma:quantum_rep} The matrices $\rho_r(S)$ and $\rho_r(T)$ are unitary for $\langle\langle,\rangle\rangle,$ and we have $\rho_r(T^r)=Id,$ $\rho_r(S^2)=Id$ and $\rho_r((ST)^3)=\kappa_r^{-1} Id.$
\end{lemma}
\begin{proof}
We refer to \cite[Section 3.9]{Tur} for a proof. Note that our constants $\eta_r$ and $\kappa_r$ correspond to the constants $\mathcal{D}^{-1}$ and $\Delta \mathcal{D}^{-1},$ and our matrices $\rho_r((S,0))$ and $\rho_r((T,0))$ are the same as the matrices $\mathcal{D}^{-1}S$ and $T$ in \cite{Tur}. Finally $J=\mathrm{Id}$ in our context. Then in \cite{Tur} the equivalent relation $\rho_r(STS)=\kappa_r^{-1}\rho_r(T^{-1}ST^{-1})$ is proven.
\end{proof}
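The relations in Lemma \ref{lemma:quantum_rep} can be verified numerically from the explicit matrices of Proposition \ref{prop:quantum_rep}. The following sketch does so for $r=5$ (with the same illustrative choice $A_r=e^{i\pi/r}$ and branch $\sqrt{-r}=i\sqrt{r}$ as before; any odd prime $r\geqslant 5$ works):

```python
import cmath
import numpy as np

r = 5
m = (r - 1) // 2
A = cmath.exp(1j * cmath.pi / r)

def qint(n):
    # Quantum integer [n].
    return (A**(2 * n) - A**(-2 * n)) / (A**2 - A**(-2))

eta = (A**2 - A**(-2)) / cmath.sqrt(-r)
kappa = eta * sum((-A)**(-(i**2 - 1)) * qint(i)**2 for i in range(1, m + 1))

# rho_r(T) and rho_r(S) in the orthonormal basis f_1, ..., f_m:
T = np.diag([(-A)**(i**2 - 1) for i in range(1, m + 1)])
S = eta * np.array([[(-1)**(i + j) * qint(i * j) for j in range(1, m + 1)]
                    for i in range(1, m + 1)])
```

One can then check unitarity together with the relations $\rho_r(T^r)=\rho_r(S^2)=Id$ and $\rho_r((ST)^3)=\kappa_r^{-1} Id.$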
\section{Surgery formulas for $Z_r$}
\label{sec:surgery_formula}
In this section, we will present formulas that express the $Z_r$ invariant of a Dehn-surgery on a knot in terms of its colored Jones polynomials. We will first describe the surgery formulas for $Z_r$ for arbitrary slopes, then we will specialize to slopes of the form $\frac{1}{k}, k\in {\mathbb{Z}}$ or $\pm 2.$ We have the following:
\begin{proposition}\label{prop:surg_formula1} Let $K$ be a framed knot in $S^3,$ and let $r=2m+1\geqslant 3$ be an odd integer. Then, in the basis $f_1,\ldots,f_m,$ we have
$$Z_r(E_K)=Z_r(E_K,0)=\eta_r\begin{pmatrix}
1 \\ \langle K,2 \rangle
\\ \langle K,3 \rangle
\\ \vdots
\\ \langle K,m\rangle
\end{pmatrix}$$
Moreover, if $\phi$ is any element of $\mathrm{Mod}(\partial E_K)$ which sends the meridian of $K$ to the curve of slope $s,$ then
$$Z_r(E_K(s),0)=\kappa_r^{-\mathrm{sign}(s)}\langle\langle Z_r(E_K),\rho_r(\phi,0)f_1 \rangle\rangle,$$
where by definition $\mathrm{sign}(s)=0$ for $s\in \lbrace 0,\infty\rbrace.$
\end{proposition}
\begin{proof}
The first identity expresses the fact that pairing $E_K$ with the basis vectors $f_i,$ we simply get the manifold $S^3$ containing the knot $K$ colored by $e_i.$ By Theorem \ref{thm:RT_invariants}, $\eta_r$ is the $Z_r$ invariant of $S^3,$ and adding a knot $K$ colored by $e_i$ multiplies the invariant by $\langle K,e_i \rangle.$
\\ As for the second identity, recall that $f_1$ is the vector corresponding to the solid torus (with meridian the meridian of $K$). Thus the second identity follows from the last part of Theorem \ref{thm:TQFT} and the definition of the quantum representation, once we show that the Maslov index is $-\mathrm{sign}(s).$
Since we equip the torus with the span $\mathrm{Span}(a)$ of the meridian as Lagrangian, and the longitude $b$ bounds a surface in $E_K$ while the curve of slope $s=p/q$ bounds a disk in the solid torus, this Maslov index is
$$\mu(b,a,pa+qb)=-\mathrm{sign}(p/q)=-\mathrm{sign}(s).$$
In the cases $s=0$ or $s=\infty$ the Maslov index vanishes.
\end{proof}
\begin{remark}One may want to compare the vector $Z_r(E_K)$ with the vector $v_K$ in Theorem \ref{thm:main_thm2}. To do this, we should say that while the colored Jones polynomials are invariants of knots, the colored Kauffman brackets are invariants of framed knots only. However when talking about the cosmetic surgery problem for a knot $K$, a framing on $K$ is somewhat implicit, if we want to be able to talk about slopes on the knot.
\\ Taking the convention that the framing of $K$ is given by the longitude with zero winding number, we have
$$\langle K,e_i\rangle(A_r)=(-1)^{i-1}[i]J_{K,i}(A_r^4)=(-1)^{i-1}[i]J_{K,i}(\zeta_r^2)$$
where $[i]=\frac{A_r^{2i}-A_r^{-2i}}{A_r^2-A_r^{-2}}=\frac{\zeta_r^i-\zeta_r^{-i}}{\zeta_r-\zeta_r^{-1}}.$
\end{remark}
\begin{remark}The map $\phi\in \mathrm{Mod}(\mathbb{T}^2)$ which sends the meridian to the curve of slope $s$ is not unique. However, any two such maps differ by multiplication on the right by $T^k,$ with $k\in {\mathbb{Z}}.$ As $\rho_r(T)f_1=f_1,$ the pairing $\langle Z_r(E_K),\rho_r(\phi)f_1 \rangle$ does not depend on the choice of $\phi.$
\end{remark}
\begin{proposition}\label{prop:surgery_formula2} Let $K$ be a knot in $S^3.$ Then we have
$$Z_r(E_K(\tfrac{1}{k}),0)=\langle\langle Z_r(E_K),\rho_r(ST^{-k}S)f_1 \rangle\rangle$$
and
$$Z_r(E_K(2),0)=\kappa_r^{-1}\langle\langle Z_r(E_K),\rho_r(T^2S)f_1 \rangle\rangle$$
$$Z_r(E_K(-2),0)=\kappa_r \langle\langle Z_r(E_K),\rho_r(T^{-2}S)f_1 \rangle\rangle$$
\end{proposition}
We note that writing the continued fraction expansion of $s=p/q,$ one can find a word in the generators $S$ and $T$ that maps the meridian to the curve of slope $s.$ Giving similar surgery formulas then amounts to computing the contribution $\kappa_r^{m_s}$ coming from Maslov indices. Explicit formulas for $m_s$ can be given in terms of Dedekind sums or Rademacher phi functions; for example, closely related computations can be found in \cite{Jef92}, where the $Z_r$ invariant of lens spaces is computed.
Moreover, closely related surgery formulas for the $\tau_r'$ invariant for the integer and $1/k$-surgeries were computed in \cite{KM91}.
\begin{proof}First, let us show that $(S,0)(T,0)^{-k}(S,0)=(ST^{-k}S,\mathrm{sign}(k))$ and $(T,0)^{\pm 2}(S,0)=(T^{\pm 2}S,0).$ We compute that $(T,0)^{k}=(T^k,0)$ for any $k\in {\mathbb{Z}}$ by induction as $\mu(a,a,a)=0.$ Moreover,
$$(S,0)(T^{-k},0)=(ST^{-k},\mu(a,b,b))=(ST^{-k},0),$$
$$(ST^{-k},0)(S,0)=(ST^{-k}S,\mu(a,b,-a-kb))=(ST^{-k}S,\mathrm{sign}(k)),$$
and
$$(T^{\pm 2},0)(S,0)=(T^{\pm 2}S,\mu(a,a,\mp 2a-b))=(T^{\pm 2}S,0).$$
Now, by Proposition \ref{prop:surg_formula1}, we have
\begin{multline*}Z_r(E_K(\tfrac{1}{k}))=\kappa_r^{-\mathrm{sign}(k)}\langle\langle Z_r(E_K),\rho_r(ST^{-k}S,0)f_1 \rangle\rangle
\\=\kappa_r^{-\mathrm{sign}(k)}\langle\langle Z_r(E_K),\kappa_r^{-\mathrm{sign}(k)}\rho_r(ST^{-k}S)f_1 \rangle\rangle=\langle\langle Z_r(E_K),\rho_r(ST^{-k}S)f_1 \rangle\rangle
\end{multline*}
where the last equality uses that $\langle\langle,\rangle\rangle$ is anti-linear on the right.
Similarly, we have
$$Z_r(E_K(2))=\kappa_r^{-1}\langle\langle Z_r(E_K),\rho_r(T^{2}S,0)f_1 \rangle\rangle=\kappa_r^{-1}\langle\langle Z_r(E_K),\rho_r(T^{2}S)f_1 \rangle\rangle$$
and
$$Z_r(E_K(-2))=\kappa_r\langle\langle Z_r(E_K),\rho_r(T^{-2}S,0)f_1 \rangle\rangle=\kappa_r\langle\langle Z_r(E_K),\rho_r(T^{-2}S)f_1 \rangle\rangle.$$
\end{proof}
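As an illustration of Proposition \ref{prop:surgery_formula2}, one can test the $\tfrac{1}{k}$-surgery formula on the unknot $U$: every $\tfrac{1}{k}$-surgery on $U$ gives back $S^3,$ so the pairing must return $Z_r(S^3)=\eta_r$ for every $k.$ The sketch below checks this for $r=5$ (same illustrative conventions for $A_r$ and $\sqrt{-r}$ as before, and using $\langle U,e_i\rangle=(-1)^{i-1}[i]$ for the $0$-framed unknot):

```python
import cmath
import numpy as np

r = 5
m = (r - 1) // 2
A = cmath.exp(1j * cmath.pi / r)

def qint(n):
    return (A**(2 * n) - A**(-2 * n)) / (A**2 - A**(-2))

eta = (A**2 - A**(-2)) / cmath.sqrt(-r)
T = np.diag([(-A)**(i**2 - 1) for i in range(1, m + 1)])
S = eta * np.array([[(-1)**(i + j) * qint(i * j) for j in range(1, m + 1)]
                    for i in range(1, m + 1)])
f1 = np.eye(m)[:, 0].astype(complex)

# Z_r(E_U) for the 0-framed unknot, whose colored brackets are <U, e_i> = (-1)^{i-1}[i]:
Z_EU = eta * np.array([(-1)**(i - 1) * qint(i) for i in range(1, m + 1)])

def Z_one_over_k_surgery(k):
    # Z_r(E_U(1/k)) = << Z_r(E_U), rho_r(S T^{-k} S) f_1 >>, anti-linear on the right.
    w = S @ np.linalg.matrix_power(np.linalg.inv(T), k) @ S @ f1
    return np.dot(Z_EU, w.conj())
```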
\section{Proof of the main theorems}
\label{sec:proofs}
Before we move to the proof of Theorem \ref{thm:main_thm2}, we will make explicit the finite set $F_r$ of vectors in $Z_r(\mathbb{T}^2).$
\begin{definition}\label{def:finite_set} Let $r\geqslant 5$ be a prime. We define
$$F_r=\lbrace \rho_r(ST^{-k}S)f_1-\rho_r(ST^kS)f_1, 1\leq k \leq \tfrac{r-1}{2} \rbrace \cup \lbrace \rho_r(T^2S)f_1-\kappa_r^{-2}\rho_r(T^{-2}S)f_1 \rbrace$$ where $\rho_r(T),\rho_r(S)$ are the matrices defined in Proposition \ref{prop:quantum_rep} and $\kappa_r$ is the constant defined in Section \ref{sec:invariants}.
\end{definition}
It is clear from the definition that $|F_r|\leq \frac{r+1}{2}.$ The following lemma shows that $F_r$ contains only non-zero vectors:
\begin{lemma}\label{lemma:finite_set}Let $r\geq 5$ be an odd prime. Then $\rho_r(ST^{-k}S)f_1-\rho_r(ST^kS)f_1=0$ if and only if $k=0 \ \mathrm{mod} \ r.$
Moreover, $\rho_r(T^2S)f_1-\kappa_r^{-2}\rho_r(T^{-2}S)f_1\neq 0.$
\end{lemma}
\begin{proof}
Note that as $\rho_r(T^r)=Id,$ if $k=0 \ \mathrm{mod} \ r,$ then $\rho_r(ST^{\pm k}S)=\rho_r(S^2)=Id$ and thus $\rho_r(ST^{-k}S)f_1-\rho_r(ST^kS)f_1=0.$
Now, assume by contradiction that $\rho_r(ST^{-k}S)f_1=\rho_r(ST^kS)f_1$ but $k\neq 0 \ \mathrm{mod}\ r.$ Applying $\rho_r(T^kS)$ on the left, we get $\rho_r(T^{2k}S)f_1=\rho_r(S)f_1,$ so $\rho_r(S)f_1$ must be an eigenvector of $\rho_r(T^{2k})$ with eigenvalue $1.$ As $r,$ the order of $\rho_r(T),$ is coprime to $2k,$ we deduce that $\rho_r(S)f_1$ must be an eigenvector of $\rho_r(T)$ with eigenvalue $1,$ and hence collinear to $f_1$ by Remark \ref{rk:eigenvalues}. This is not the case: as all quantum integers $[i]=\frac{A_r^{2i}-A_r^{-2i}}{A_r^2-A_r^{-2}}$ with $1\leq i \leq \tfrac{r-1}{2}$ are non-zero, $\rho_r(S)f_1$ has a non-zero coefficient along each $f_i.$ Note that we use here that $\mathrm{dim}(Z_r(\mathbb{T}^2))\geq 2$ for $r\geq 5.$
Similarly, assume by contradiction that $\rho_r(T^2S)f_1=\kappa_r^{-2}\rho_r(T^{-2}S)f_1.$ Then $\rho_r(S)f_1$ must be an eigenvector of $\rho_r(T^4),$ and thus an eigenvector of $\rho_r(T),$ as $4$ is coprime to $r,$ the order of $\rho_r(T).$ The diagonal coefficients $(-A_r)^{i^2-1},$ where $i=1,\ldots ,\frac{r-1}{2},$ of $\rho_r(T)$ are pairwise distinct (see Remark \ref{rk:eigenvalues}), so $\rho_r(S)f_1$ would have to be collinear to one of the $f_i$'s, which is not the case.
\end{proof}
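The two non-vanishing facts used in this proof become elementary trigonometric statements once a concrete root of unity is fixed. The following sketch checks them numerically under the assumption $A_r=e^{i\pi/r}$ (a primitive $2r$-th root of unity); this convention is our own illustrative choice, not taken from the cited propositions.

```python
import cmath

def quantum_integer(i, r):
    # [i] = (A^{2i} - A^{-2i}) / (A^2 - A^{-2}), here with A = exp(i*pi/r)
    A = cmath.exp(1j * cmath.pi / r)
    return (A ** (2 * i) - A ** (-2 * i)) / (A ** 2 - A ** (-2))

def t_eigenvalues(r):
    # diagonal coefficients (-A)^(i^2 - 1) of rho_r(T), i = 1, ..., (r-1)/2
    A = cmath.exp(1j * cmath.pi / r)
    return [(-A) ** (i * i - 1) for i in range(1, (r - 1) // 2 + 1)]

for r in (5, 7, 11):
    n = (r - 1) // 2
    # all quantum integers [1], ..., [(r-1)/2] are non-zero ...
    assert all(abs(quantum_integer(i, r)) > 1e-9 for i in range(1, n + 1))
    # ... and the diagonal entries of rho_r(T) are pairwise distinct
    ev = t_eigenvalues(r)
    assert all(abs(ev[i] - ev[j]) > 1e-9
               for i in range(n) for j in range(i + 1, n))
```

With this convention $[i]=\sin(2\pi i/r)/\sin(2\pi/r)$, which is positive for $1\leq i\leq\tfrac{r-1}{2}$, and distinctness of the diagonal entries reduces to distinctness of $i^2$ mod $r$ on that range.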
\begin{remark}\label{rk:order3root} The proof above uses the fact that $r>3.$ In the case $r=3,$ we would find that $\mathrm{dim}(Z_3(\mathbb{T}^2))=1,$ $\kappa_3=1,$ $\rho_3(S)=\eta_3=1$ and $\rho_3(T)=1.$ This implies that when $r=3,$ all of the vectors in Definition \ref{def:finite_set} are zero.
\end{remark}
We are now ready to prove Theorem \ref{thm:main_thm2}:
\begin{proof}[Proof of Theorem \ref{thm:main_thm2}]Fix an odd prime $r\geq 5.$
Let $K$ be a knot in $S^3$ with a purely cosmetic surgery pair $s>s'.$ By Theorem \ref{thm:hanselman}, the slopes $s,s'$ must be opposite and $s=\frac{1}{k}$ or $s=2.$
Let us consider the first case and assume that $r$ does not divide $k.$ We have $E_K(\tfrac{1}{k})\simeq E_K(-\tfrac{1}{k}),$ thus the $Z_r$ invariants coincide: $Z_r(E_K(\tfrac{1}{k}))=Z_r(E_K(-\tfrac{1}{k})).$ By Proposition \ref{prop:surgery_formula2} we have
$$\langle\langle Z_r(E_K),\rho_r(ST^{-k}S)f_1-\rho_r(ST^kS)f_1 \rangle\rangle=Z_r(E_K(\tfrac{1}{k}))-Z_r(E_K(-\tfrac{1}{k}))=0.$$
Note that $\rho_r(ST^{\pm k}S)f_1$ depends only on $k$ mod $r$ as $\rho_r(T^r)=Id.$ Let $l \in \lbrace \pm 1, \pm 2, \ldots , \pm \frac{r-1}{2}\rbrace$ such that $k=l \ \mathrm{mod} \ r.$ Then $Z_r(E_K)$ must be orthogonal to the vector $\rho_r(ST^{-l}S)f_1-\rho_r(ST^lS)f_1.$ Either this vector or its opposite is in the set $F_r,$ so the claim is proved in that case.
Similarly, in the case where $E_K(2)\simeq E_K(-2),$ applying Proposition \ref{prop:surgery_formula2}, we get that
$$\langle\langle Z_r(E_K),\rho_r(T^2S)f_1-\kappa_r^{-2}\rho_r(T^{-2}S)f_1 \rangle\rangle=\kappa_r\left(Z_r(E_K(2))-Z_r(E_K(-2))\right)=0,$$
and thus $Z_r(E_K)$ is orthogonal to $\rho_r(T^2S)f_1-\kappa_r^{-2}\rho_r(T^{-2}S)f_1,$ which is in $F_r$ by definition.
\end{proof}
\begin{remark}\label{rk:oneSlope} The proof of Theorem \ref{thm:main_thm2} makes clear that if $k=\pm l \ \mathrm{mod} \ r$ with $1\leq l \leq \tfrac{r-1}{2}$ and $\lbrace \pm \frac{1}{k}\rbrace$ is a purely cosmetic surgery pair for a knot $K,$ then $Z_r(E_K)$ is orthogonal to a specific vector in $F_r:$ namely the vector $\rho_r(ST^{-l}S)f_1-\rho_r(ST^lS)f_1.$ Similarly, if $\lbrace \pm 2 \rbrace$ is a purely cosmetic surgery pair then $Z_r(E_K)$ is orthogonal to $\rho_r(T^2S)f_1-\kappa_r^{-2}\rho_r(T^{-2}S)f_1.$
\end{remark}
\begin{remark}\label{rk:unknot}Note that the unknot admits $\pm \frac{1}{k}$ as a purely cosmetic surgery pair for any $k\in {\mathbb{Z}},$ and that $\pm 2$ is also a purely cosmetic surgery pair for the unknot. Therefore the vector $Z_r(E_U)$ is orthogonal to all of the vectors in $F_r.$ As $Z_r(E_U)=\rho_r(S)f_1$ is non-zero, all of the vectors in $F_r$ actually lie in a codimension $1$ subspace of $Z_r(\mathbb{T}^2).$
\end{remark}
The proof of Theorem \ref{thm:main_thm} is a simple corollary of Theorem \ref{thm:main_thm2}, Remark \ref{rk:unknot} and the fact that if $r=5,$ then the dimension of $Z_r(\mathbb{T}^2)$ is $2:$
\begin{proof}[Proof of Theorem \ref{thm:main_thm}]
By Theorem \ref{thm:main_thm2}, if $K$ has a purely cosmetic surgery pair $\pm s,$ then either the slope $s$ is of the form $\frac{1}{5k}$ or $Z_5(E_K)$ must be orthogonal to one of the (non-zero) vectors in $F_5.$ Note that the vector of the unknot $Z_5(E_U)=\rho_5(S)f_1\neq 0$ is orthogonal to all of the vectors in $F_5$ by Remark \ref{rk:unknot}. As $\mathrm{dim}(Z_5(\mathbb{T}^2))=\frac{5-1}{2}=2,$ the vectors in $F_5$ are all collinear, and any vector orthogonal to a vector in $F_5$ must be collinear to $Z_5(E_U).$
So if the slope $s$ is not of the form $\frac{1}{5k},$ the vector $Z_5(E_K)=\eta_5\begin{pmatrix}
1 \\ -[2]J_K(\zeta_5^2)
\end{pmatrix}$ must be collinear to $Z_5(E_U)=\eta_5\begin{pmatrix}
1 \\ -[2]
\end{pmatrix}.$
As $\eta_5$ and $[2]$ are non-zero, this implies that $J_K(\zeta_5^2)=1.$ By the Galois action, we also have $J_K(\zeta_5)=1.$
\end{proof}
\section{Introduction}
\label{s:Intro}
\noindent
For a prime number \(p\) and an algebraic number field \(F\),
let \(F_p^{(\infty)}\) be the \(p\)-\textit{class tower} of \(F\),
more precisely the unramified Hilbert \(p\)-class field tower,
that is, the maximal unramified pro-\(p\) extension of \(F\).
The individual stages \(F_p^{(n)}\) of the tower
\[F=F_p^{(0)}\le F_p^{(1)}\le F_p^{(2)}\le\ldots\le F_p^{(n)}\le\ldots\le F_p^{(\infty)}\]
are described by the derived quotients
\(\mathfrak{G}/\mathfrak{G}^{(n)}\simeq\mathrm{G}_p^n{F}:=\mathrm{Gal}(F_p^{(n)}/F)\) with \(n\ge 1\),
of the \(p\)-\textit{class tower group} \(\mathfrak{G}:=\mathrm{G}_p^\infty{F}:=\mathrm{Gal}(F_p^{(\infty)}/F)\).
The purpose of this paper is to report on the
most up-to-date theoretical view of \(p\)-class towers
and the state of the art of actual numerical investigations.
After a summary of algebraic and arithmetic foundations in \S\
\ref{s:Foundations},
four crucial concepts will illuminate
recent innovation and progress in a very ostensive way:
\begin{itemize}
\item
the \textit{Artin limit pattern} \((\tau^{(\infty)}{F},\varkappa^{(\infty)}{F})\) of the \(p\)-class tower \(F_p^{(\infty)}\) in \S\
\ref{s:LimitPattern},
\item
\textit{successive approximation} and the current status of computational perspectives in \S\
\ref{s:Approximation},
\item
\textit{maximal subgroups} of \(3\)-class tower groups with coclass one in \S\
\ref{s:MaximalSubgroups},
and
\item
the realization of new \(3\)-class tower groups over \textit{dihedral fields} in \S\
\ref{s:General}.
\end{itemize}
\section{Algebraic and arithmetic foundations}
\label{s:Foundations}
\subsection{Abelian type invariants}
\label{ss:ATI}
\noindent
First, we recall the concepts of
abelian type invariants and abelian quotient invariants
in the context of finite \(p\)-groups and infinite pro-\(p\) groups,
and we specify our conventions in their notation.
Let \(p\ge 2\) be a prime number.
It is well known that
a finite abelian group \(A\)
with order \(\lvert A\rvert\) a power of \(p\)
possesses a \textit{unique} representation
\begin{equation}
\label{eqn:ATI}
A\simeq\bigoplus_{i=1}^s(\mathbb{Z}/p^{e_i}\mathbb{Z})^{r_i}
\end{equation}
as a direct sum
with integers \(s\ge 0\),
\(r_i\ge 1\) for \(1\le i\le s\),
and \textit{strictly decreasing} \(e_1>\ldots >e_s\ge 1\).
\begin{definition}
\label{dfn:ATI}
The \textit{abelian type invariants} of \(A\)
are given either in \textit{power form},
\begin{equation}
\label{eqn:PowATI}
\mathrm{ATI}(A):=
\lbrack\overbrace{p^{e_1},\ldots,p^{e_1}}^{r_1 \text{ times}},\ldots,
\overbrace{p^{e_s},\ldots,p^{e_s}}^{r_s \text{ times}}\rbrack,
\end{equation}
or in \textit{logarithmic form} with formal exponents indicating iteration,
\begin{equation}
\label{eqn:LogATI}
\mathrm{ATI}(A):=\lbrack e_1^{r_1},\ldots,e_s^{r_s}\rbrack.
\end{equation}
\end{definition}
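As a concrete illustration of Definition \ref{dfn:ATI}, the short sketch below converts a list of cyclic orders into the power form and the logarithmic form; the encoding of the logarithmic form as pairs \((e_i,r_i)\) is our own convention for this example.

```python
from math import log
from itertools import groupby

def ati_power_form(orders, p):
    # power form: the cyclic orders p^{e_i}, sorted weakly decreasing
    assert all(q > 1 and p ** round(log(q, p)) == q for q in orders)
    return sorted(orders, reverse=True)

def ati_log_form(orders, p):
    # logarithmic form [e_1^{r_1}, ..., e_s^{r_s}] with e_1 > ... > e_s,
    # encoded here as a list of pairs (e_i, r_i)
    exps = sorted((round(log(q, p)) for q in orders), reverse=True)
    return [(e, len(list(g))) for e, g in groupby(exps)]

# A = Z/9 + Z/9 + Z/3  (p = 3): power form [9, 9, 3], log form [2^2, 1]
assert ati_power_form([3, 9, 9], 3) == [9, 9, 3]
assert ati_log_form([3, 9, 9], 3) == [(2, 2), (1, 1)]
```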
Let \(G\) be a pro-\(p\) group
with commutator subgroup \(G^\prime\)
and \textit{finite} abelianization \(G^{ab}:=G/G^\prime\).
\begin{definition}
\label{dfn:AQI}
The \textit{abelian quotient invariants} of \(G\)
are the abelian type invariants of the maximal abelian quotient of \(G\)
\begin{equation}
\label{eqn:AQI}
\mathrm{AQI}(G):=\mathrm{ATI}(G^{ab}).
\end{equation}
\end{definition}
\subsubsection{Higher abelian quotient invariants of a pro-\(p\) group}
\label{sss:TauGrp}
\noindent
Within the frame of group theory,
abelian quotient invariants of \textit{higher order}
are defined recursively in the following manner.
\begin{definition}
\label{dfn:TauGrp}
The set of all maximal subgroups of \(G\)
which contain the commutator subgroup,
\begin{equation}
\label{eqn:LyrGrp}
\mathrm{Lyr}_1{G}:=\lbrace S\triangleleft G\mid G^\prime\le S,\ (G:S)=p\rbrace,
\end{equation}
is called the \textit{first layer} of subgroups of \(G\).
For any positive integer \(n\ge 1\),
\textit{abelian quotient invariants of \(n\)th order} of \(G\)
are defined recursively by
\begin{equation}
\label{eqn:TauGrp}
\tau^{(1)}{G}:=\mathrm{AQI}(G), \text{ and }
\tau^{(n)}{G}:=(\tau^{(1)}{G};(\tau^{(n-1)}{S})_{S\in\mathrm{Lyr}_1{G}}) \text{ for } n\ge 2.
\end{equation}
\end{definition}
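The recursion \eqref{eqn:TauGrp} can be sketched on a toy subgroup lattice. The dict-based node format below is a hypothetical encoding chosen only for illustration; an actual computation would use a group-theory system such as MAGMA.

```python
def tau(n, G):
    # tau^{(1)} G = AQI(G);
    # tau^{(n)} G = (tau^{(1)} G; (tau^{(n-1)} S) for S in Lyr_1 G), n >= 2.
    # G is a toy node: {'aqi': <abelian invariants>, 'layer1': [subgroups]}
    if n == 1:
        return G['aqi']
    return (G['aqi'], tuple(tau(n - 1, S) for S in G['layer1']))

# toy two-level lattice modelled on a 3-group with G/G' = C3 x C3:
# four maximal subgroups, each carrying its own AQI and an empty layer below
S = [{'aqi': (1, 1), 'layer1': []} for _ in range(4)]
G = {'aqi': (1, 1), 'layer1': S}

assert tau(1, G) == (1, 1)
assert tau(2, G) == ((1, 1), ((1, 1),) * 4)
```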
\subsubsection{Higher abelian type invariants of a number field}
\label{sss:TauFld}
\noindent
Within the frame of algebraic number theory,
abelian type invariants of \textit{higher order}
are defined recursively in the following way.
Let \(F\) be an algebraic number field,
denote by \(\mathrm{Cl}_p{F}\) the \(p\)-class group of \(F\),
and by \(F_p^{(1)}\) the first Hilbert \(p\)-class field of \(F\),
that is,
the maximal abelian unramified \(p\)-extension of \(F\).
\begin{definition}
\label{dfn:TauFld}
The set of all unramified cyclic extensions \(E/F\) of degree \(p\)
which are contained in the \(p\)-class field,
\begin{equation}
\label{eqn:LyrFld}
\mathrm{Lyr}_1{F}:=\lbrace E>F\mid E\le F_p^{(1)},\ \lbrack E:F\rbrack=p\rbrace
\end{equation}
is called the \textit{first layer} of extension fields of \(F\).
For any positive integer \(n\ge 1\),
\textit{abelian type invariants of \(n\)th order} of \(F\)
are defined recursively by
\begin{equation}
\label{eqn:TauFld}
\tau^{(1)}{F}:=\mathrm{ATI}(\mathrm{Cl}_p{F}), \text{ and }
\tau^{(n)}{F}:=(\tau^{(1)}{F};(\tau^{(n-1)}{E})_{E\in\mathrm{Lyr}_1{F}}) \text{ for } n\ge 2.
\end{equation}
\end{definition}
\subsection{Transfer kernel type}
\label{ss:TKT}
\noindent
Next, we explain the concept of transfer kernel type
of finite \(p\)-groups and infinite pro-\(p\) groups.
\subsubsection{Transfer kernel type of a pro-\(p\) group}
\label{sss:TKTGrp}
\noindent
Denote by \(p\ge 2\) a prime number.
Let \(G\) be a pro-\(p\) group
with commutator subgroup \(G^\prime\)
and \textit{finite} abelianization \(G^{ab}=G/G^\prime\).
\begin{definition}
\label{dfn:TKTGrp}
By the \textit{transfer kernel type} of \(G\),
we understand the finite family
\begin{equation}
\label{eqn:TKTGrp}
\varkappa(G):=(\ker(T_{G,S}))_{S\in\mathrm{Lyr}_1{G}},
\end{equation}
where \(T_{G,S}:\,G/G^\prime\to S/S^\prime\) denotes
the transfer homomorphism from \(G\) to the normal subgroup \(S\) of finite index \((G:S)=p\).
\end{definition}
\noindent
More specifically,
suppose that \(G^{ab}\simeq C_p\times C_p\) is elementary abelian of rank two.
Then \(\mathrm{Lyr}_1{G}\) has \(p+1\) elements \(S_1,\ldots,S_{p+1}\),
the transfer kernel type of \(G\) is described briefly
by a family of non-negative integers
\(\varkappa(G)=(\varkappa_i)_{1\le i\le p+1}\in\lbrack 0,p+1\rbrack^{p+1}\)
such that
\begin{equation}
\label{eqn:TKT11Grp}
\varkappa_i:=
\begin{cases}
0 & \text{ if } \ker(T_{G,S_i})=G/G^\prime, \\
j & \text{ if } \ker(T_{G,S_i})=S_j/G^\prime \text{ for some } 1\le j\le p+1,
\end{cases}
\end{equation}
and the symmetric group \(\mathrm{S}_{p+1}\) of degree \(p+1\) acts on \(\lbrack 0,p+1\rbrack^{p+1}\)
via \(\varkappa\mapsto\varkappa^\pi:=\pi_0^{-1}\circ\varkappa\circ\pi\), for each \(\pi\in\mathrm{S}_{p+1}\),
where the extension \(\pi_0\) of \(\pi\) to \(\lbrack 0,p+1\rbrack\) fixes the zero.
\begin{definition}
\label{dfn:TKT11Grp}
The orbit \(\varkappa(G)^{\mathrm{S}_{p+1}}\) is called the \textit{invariant type} of \(G\);
in practice, it is given by one of its orbit representatives \((\varkappa_i)_{1\le i\le p+1}\).
Any two distinct orbit representatives \(\lambda_1,\lambda_2\in\varkappa(G)^{\mathrm{S}_{p+1}}\)
are called \textit{equivalent}, denoted by the symbol \(\lambda_1\sim\lambda_2\).
\end{definition}
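A minimal sketch of the action \(\varkappa\mapsto\varkappa^\pi=\pi_0^{-1}\circ\varkappa\circ\pi\) for \(p=3\); the tuple encoding (0-based storage of the index set \(\{1,\ldots,p+1\}\)) is our own.

```python
from itertools import permutations

def act(kappa, pi):
    # kappa^pi = pi0^{-1} o kappa o pi on [0, p+1]^{p+1};
    # kappa is indexed by 1..p+1 (stored 0-based), pi permutes {1,...,p+1},
    # and the extension pi0 fixes 0
    n = len(kappa)
    pi_inv0 = {0: 0}
    pi_inv0.update({pi[i]: i + 1 for i in range(n)})  # pi0^{-1} on 1..p+1
    return tuple(pi_inv0[kappa[pi[i] - 1]] for i in range(n))

def orbit(kappa):
    n = len(kappa)
    return {act(kappa, pi) for pi in permutations(range(1, n + 1))}

# the total-kernel type (0,0,0,0) is alone in its orbit ...
assert orbit((0, 0, 0, 0)) == {(0, 0, 0, 0)}
# ... and acting with pi, then with pi^{-1}, recovers kappa
kappa, pi = (1, 2, 2, 0), (2, 3, 4, 1)
pi_inv = tuple(pi.index(v) + 1 for v in range(1, 5))
assert act(act(kappa, pi), pi_inv) == kappa
```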
\subsubsection{Transfer kernel type of a number field}
\label{sss:TKTFld}
\noindent
Let \(F\) be an algebraic number field,
and denote by \(\mathrm{Cl}_p{F}\) the \(p\)-class group of \(F\).
\begin{definition}
\label{dfn:TKTFld}
By the \textit{transfer kernel type} of \(F\),
we understand the finite family
\begin{equation}
\label{eqn:TKTFld}
\varkappa(F):=(\ker(T_{F,E}))_{E\in\mathrm{Lyr}_1{F}},
\end{equation}
where \(T_{F,E}:\,\mathrm{Cl}_p{F}\to\mathrm{Cl}_p{E}\) denotes
the transfer of \(p\)-classes from \(F\) to the unramified cyclic extension \(E\) of degree \(\lbrack E:F\rbrack=p\),
which is also known as the \(p\)-class extension homomorphism.
\end{definition}
\noindent
More specifically,
suppose that \(\mathrm{Cl}_p{F}\simeq C_p\times C_p\) is elementary abelian of rank two.
Then \(\mathrm{Lyr}_1{F}\) has \(p+1\) elements \(E_1,\ldots,E_{p+1}\),
the transfer kernel type of \(F\) is described briefly
by a family of non-negative integers
\(\varkappa(F)=(\varkappa_i)_{1\le i\le p+1}\in\lbrack 0,p+1\rbrack^{p+1}\)
such that
\begin{equation}
\label{eqn:TKT11Fld}
\varkappa_i:=
\begin{cases}
0 & \text{ if } \ker(T_{F,E_i})=\mathrm{Cl}_p{F}, \\
j & \text{ if } \ker(T_{F,E_i})=\mathrm{Norm}_{E_j/F}(\mathrm{Cl}_p{E_j}) \text{ for some } 1\le j\le p+1,
\end{cases}
\end{equation}
and the symmetric group \(\mathrm{S}_{p+1}\) of degree \(p+1\) acts on \(\lbrack 0,p+1\rbrack^{p+1}\)
via \(\varkappa\mapsto\varkappa^\pi:=\pi_0^{-1}\circ\varkappa\circ\pi\), for each \(\pi\in\mathrm{S}_{p+1}\),
where the extension \(\pi_0\) of \(\pi\) to \(\lbrack 0,p+1\rbrack\) fixes the zero.
\begin{definition}
\label{dfn:TKT11Fld}
The orbit \(\varkappa(F)^{\mathrm{S}_{p+1}}\) is called the \textit{invariant type} of \(F\);
in practice, it is given by one of its orbit representatives \((\varkappa_i)_{1\le i\le p+1}\).
Any two distinct orbit representatives \(\lambda_1,\lambda_2\in\varkappa(F)^{\mathrm{S}_{p+1}}\)
are called \textit{equivalent}, denoted by the symbol \(\lambda_1\sim\lambda_2\).
\end{definition}
\section{The Artin limit pattern}
\label{s:LimitPattern}
\noindent
Let \(p\) be a prime number.
For the recursive construction of the Artin limit pattern
of a pro-\(p\) group \(G\)
with commutator subgroup \(G^\prime\)
and \textit{finite} abelianization \(G^{ab}=G/G^\prime\),
we need the following considerations.
\subsection{Mappings of the Artin limit pattern}
\label{ss:LimitPatternMaps}
Due to our assumptions, the first layer \(\mathrm{Lyr}_1{G}\) of subgroups of \(G\) is a finite set
consisting of maximal normal subgroups \(S\) of \(G\)
with abelian quotients \(G/S\).
Consequently, the \textit{Artin transfer homomorphism}
from \(G\) to \(S\in\mathrm{Lyr}_1{G}\)
is distinguished by a very simple mapping law:
\begin{equation}
\label{eqn:Transfer}
T_{G,S}:\,G/G^\prime\to S/S^\prime,\ g\cdot G^\prime\mapsto
\begin{cases}
g^p\cdot S^\prime & \text{ if } g\in G/G^\prime\setminus S/G^\prime, \\
g^{1+h+h^2+\ldots+h^{p-1}}\cdot S^\prime & \text{ if } g\in S/G^\prime,
\end{cases}
\end{equation}
where \(h\) denotes an arbitrary element in \(G\setminus S\)
\cite[\S\ 4.1, p. 76]{Ma9}.
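The mapping law \eqref{eqn:Transfer} can be made completely explicit for a small example. The sketch below brute-forces the transfer kernels for the extraspecial \(3\)-group of order \(27\) and exponent \(3\) (the group \(\langle 27,3\rangle\)), encoded as Heisenberg triples \((a,b,c)\); the encoding and the brute-force strategy are our own illustrative choices. The result is the total transfer kernel type \(\varkappa=(0,0,0,0)\).

```python
p = 3

def mul(x, y):
    # (a,b,c)*(a',b',c') in the Heisenberg group over F_p
    a, b, c = x; A, B, C = y
    return ((a + A) % p, (b + B) % p, (c + C + a * B) % p)

def inv(x):
    a, b, c = x
    return ((-a) % p, (-b) % p, (a * b - c) % p)

def conj(g, h):  # g^h = h^{-1} g h
    return mul(mul(inv(h), g), h)

e = (0, 0, 0)
G = [(a, b, c) for a in range(p) for b in range(p) for c in range(p)]
# the p+1 maximal subgroups (all contain G' = {(0,0,c)}), one per
# direction (alpha, beta) in P^1(F_p)
directions = [(1, 0), (0, 1), (1, 1), (1, 2)]
layer1 = [[g for g in G if (beta * g[0] - alpha * g[1]) % p == 0]
          for alpha, beta in directions]

def transfer(g, S):
    # mapping law: g^p if g lies outside S;
    # g * g^h * ... * g^{h^{p-1}} (h any element outside S) if g lies in S
    if g not in S:
        t = e
        for _ in range(p):
            t = mul(t, g)
        return t
    h = next(x for x in G if x not in S)
    t, hk = e, e
    for _ in range(p):
        t = mul(t, conj(g, hk))
        hk = mul(hk, h)
    return t

# each S here is abelian (S' = 1), so T_{G,S}(g) is trivial iff it equals e;
# all p+1 transfer kernels are the whole of G/G': kappa(G) = (0,0,0,0)
kernels_full = [all(transfer(g, S) == e for g in G) for S in layer1]
assert kernels_full == [True] * (p + 1)
```

In an exponent-\(3\) group one has \(g\cdot g^h\cdot g^{h^2}=g^3\lbrack g,h\rbrack^3=1\), which is why every transfer in this example is the trivial map.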
The Artin limit pattern encapsulates particular group theoretic information
(connected with Artin transfers)
about the lattice of subgroups of \(G\),
where each element \(U\) has at least one predecessor,
except the root \(G\) itself.
We select a unique predecessor in the following way:
for \(U\in\mathrm{Lyr}_1{S}\) we put \(\pi(U):=S\),
and we add the formal definition \(\pi(G):=G\).
This enables a recursive construction, as follows:
\begin{definition}
\label{dfn:Collection}
The \textit{collection of Artin transfers up to order} \(n\) of \(G\) is defined recursively by
\begin{equation}
\label{eqn:MapRecursion}
\alpha^{(1)}{G}:=T_{\pi(G),G}, \text{ and }
\alpha^{(n)}{G}:=\lbrack\alpha^{(1)}{G};(\alpha^{(n-1)}{S})_{S\in\mathrm{Lyr}_1{G}}\rbrack \text{ for } n\ge 2.
\end{equation}
The limit of this infinite recursive nesting process
is denoted by
\begin{equation}
\label{eqn:MapLimit}
\alpha^{(\infty)}{G}:=\lim_{n\to\infty}\,\alpha^{(n)}{G}
\end{equation}
and is called the \textit{Artin transfer collection} of \(G\).
\end{definition}
\begin{remark}
\label{rmk:Collection}
By means of the collection of Artin transfers up to order three,
\[\alpha^{(3)}{G}=\lbrack T_{G,G};(\alpha^{(2)}{S})_{S\in\mathrm{Lyr}_1{G}}\rbrack
=\lbrack T_{G,G};(\lbrack T_{G,S};(T_{S,U})_{U\in\mathrm{Lyr}_1{S}}\rbrack)_{S\in\mathrm{Lyr}_1{G}}\rbrack,\]
it should be emphasized that our definition of stepwise relative mappings \(T_{G,S}\) and \(T_{S,U}\)
admits finer information than the corresponding absolute mappings \(T_{G,U}=T_{S,U}\circ T_{G,S}\)
\cite[Thm. 3.3, p. 72]{Ma9},
since in general the kernel of \(T_{S,U}\) cannot be reconstructed from \(T_{G,U}\) and \(T_{G,S}\).
\end{remark}
\subsection{Objects of the Artin limit pattern}
\label{ss:LimitPatternObjects}
The infinite collection of mappings \(\alpha^{(\infty)}{G}\)
is only the foundation for the objects \(\tau^{(\infty)}{G}\) and \(\varkappa^{(\infty)}{G}\)
we are really interested in.
\begin{definition}
\label{dfn:Limit}
The \textit{iterated abelian quotient invariants up to order} \(n\) of \(G\) are defined recursively by
\begin{equation}
\label{eqn:TargetRecursion}
\tau^{(1)}{G}:=\mathrm{AQI}(G), \text{ and }
\tau^{(n)}{G}:=\lbrack\tau^{(1)}{G};(\tau^{(n-1)}{S})_{S\in\mathrm{Lyr}_1{G}}\rbrack \text{ for } n\ge 2.
\end{equation}
Similarly, the \textit{iterated transfer kernels up to order} \(n\) of \(G\) are defined recursively by
\begin{equation}
\label{eqn:KernelRecursion}
\varkappa^{(1)}{G}:=\ker(T_{\pi(G),G}), \text{ and }
\varkappa^{(n)}{G}:=\lbrack\varkappa^{(1)}{G};(\varkappa^{(n-1)}{S})_{S\in\mathrm{Lyr}_1{G}}\rbrack \text{ for } n\ge 2.
\end{equation}
Both are collected in the \(n\)th \textit{order Artin pattern} \(\mathrm{AP}^{(n)}{G}:=(\tau^{(n)}{G},\varkappa^{(n)}{G})\) of \(G\).
The limits of these infinite recursive nesting processes
are called the \textit{abelian invariant collection} of \(G\),
\begin{equation}
\label{eqn:TargetLimit}
\tau^{(\infty)}{G}:=\lim_{n\to\infty}\,\tau^{(n)}{G},
\end{equation}
and the \textit{transfer kernel collection} of \(G\),
\begin{equation}
\label{eqn:KernelLimit}
\varkappa^{(\infty)}{G}:=\lim_{n\to\infty}\,\varkappa^{(n)}{G}.
\end{equation}
Finally, the pair
\(\mathrm{ALP}(G):=(\tau^{(\infty)}{G},\varkappa^{(\infty)}{G})\)
is called the \textit{Artin limit pattern} of \(G\).
\end{definition}
\begin{remark}
\label{rmk:Limit}
For a finite \(p\)-group \(G\),
the recursive nesting processes in the definition of the Artin limit pattern
are actually finite.
The abelian quotient invariants are a \textit{unary} concept,
since \(\tau^{(1)}{G}=\mathrm{AQI}(G)=\mathrm{ATI}(G/G^\prime)\) depends on \(G\) only.
The first order abelian quotient invariants \(\tau^{(1)}{G}\) already
contain non-trivial information on the abelianization of \(G\).
The transfer kernels are a \textit{binary} concept for \(S<G\),
since \(\varkappa^{(1)}{S}=\ker(T_{\pi(S),S})\) depends on \(\pi(S)\) and \(S\).
The first order transfer kernel of \(G\) is trivial:
\(\varkappa^{(1)}{G}=\ker(T_{\pi(G),G})=\ker(T_{G,G})=\ker(\mathrm{id}_{G/G^\prime})=1\),
and non-trivial information starts with the transfer kernels of second order
\(\varkappa^{(1)}{S}=\ker(T_{\pi(S),S})=\ker(T_{G,S})\) for \(S\in\mathrm{Lyr}_1{G}\)
which are members of \(\varkappa^{(2)}{G}\).
The analogous constructions for a \textit{number field} \(F\) instead of a pro-\(p\) group \(G\),
along the lines of \S\S\
\ref{sss:TauFld}
and
\ref{sss:TKTFld},
lead to the \textit{Artin limit pattern} \(\mathrm{ALP}(F):=(\tau^{(\infty)}{F},\varkappa^{(\infty)}{F})\) of \(F\).
\end{remark}
\subsection{Connection between pro-\(p\) groups and number fields}
\label{ss:GrpFld}
\noindent
Let \(F_p^{(\infty)}\) be the Hilbert \(p\)-class tower of the number field \(F\),
that is, the maximal unramified pro-\(p\) extension of \(F\),
and denote by \(\mathrm{G}_p^\infty{F}=\mathrm{Gal}(F_p^{(\infty)}/F)\)
its Galois group, which is briefly called the \textit{\(p\)-tower group} of \(F\).
Now we are going to employ the
abelian type invariant collection \(\tau^{(\infty)}{F}\) of \(F\),
and the abelian quotient invariant collection
\(\tau^{(\infty)}(\mathrm{G}_p^\infty{F})\) of \(\mathrm{G}_p^\infty{F}\),
i.e., the first component of the respective Artin limit pattern.
The transfer kernel collections \(\varkappa^{(\infty)}\) will be considered further in \S\
\ref{s:MaximalSubgroups}.
\begin{theorem}
\label{thm:TauGrpFld}
For each integer \(n\ge 1\),
the abelian quotient invariants of \(n\)th order of the \(p\)-tower group \(\mathrm{G}_p^\infty{F}\) of \(F\)
are equal to the abelian type invariants of \(n\)th order of the number field \(F\)
\begin{equation}
\label{eqn:TauGrpFld}
(\forall n\ge 1) \quad \tau^{(n)}(\mathrm{G}_p^\infty{F})=\tau^{(n)}{F}, \text{ and thus } \tau^{(\infty)}(\mathrm{G}_p^\infty{F})=\tau^{(\infty)}{F}.
\end{equation}
The invariant type of the \(p\)-tower group \(\mathrm{G}_p^\infty{F}\) of \(F\)
coincides with the invariant type of the number field \(F\)
\begin{equation}
\label{eqn:TKTGrpFld}
\varkappa(\mathrm{G}_p^\infty{F})^{\mathrm{S}_{p+1}}=\varkappa(F)^{\mathrm{S}_{p+1}}.
\end{equation}
Even the orbit representatives of the transfer kernel types of \(\mathrm{G}_p^\infty{F}\) and \(F\) coincide,
\begin{equation}
\label{eqn:TKT11GrpFld}
\varkappa(\mathrm{G}_p^\infty{F})=(\ker(T_{\mathrm{G}_p^\infty{F},U_i}))_{1\le i\le p+1}=
(\ker(T_{F,E_i}))_{1\le i\le p+1}=\varkappa(F),
\end{equation}
provided that the \(U_i\in\mathrm{Lyr}_1(\mathrm{G}_p^\infty{F})\) and the \(E_i\in\mathrm{Lyr}_1{F}\)
are connected by
\(U_i=\mathrm{Gal}(F_p^{(\infty)}/E_i)\), for each \(1\le i\le p+1\).
Otherwise, we only have equivalence
\(\varkappa(\mathrm{G}_p^\infty{F})\sim\varkappa(F)\).
\end{theorem}
\begin{proof}
The claims are well-known consequences of the Artin reciprocity law of class field theory
\cite{Ar1,Ar2}.
\end{proof}
In contrast to the full \(p\)-tower group \(\mathfrak{G}=\mathrm{G}_p^\infty{F}\),
the Galois groups \(\mathrm{G}_p^m{F}:=\mathrm{Gal}(F_p^{(m)}/F)\simeq\mathfrak{G}/\mathfrak{G}^{(m)}\)
of the finite stages \(F_p^{(m)}\) of the \(p\)-class tower of \(F\),
that is, of the higher Hilbert \(p\)-class fields of the number field \(F\),
in general fail to reveal the abelian type invariants of \(n\)th order of the number field \(F\).
More precisely, for a fixed integer \(m\ge 0\), there is a strict upper bound, namely \(n\le m\),
on the orders \(n\) for which the ATI of \(F\) coincide with the AQI of order \(n\)
of the \(m\)th \(p\)-class group \(\mathrm{G}_p^m{F}\) of \(F\).
\begin{theorem}
\label{thm:SuccessiveApproximation}
\textbf{(Successive Approximation Theorem.)}\\
Let \(F\) be a number field, \(p\) a prime, and \(m,n\) integers.
The abelian invariant collection \(\tau^{(\infty)}{F}\) of \(F\) is approximated successively
by the iterated AQI of sufficiently high \(p\)-class groups of \(F\):
\begin{equation}
\label{eqn:TauSuccessiveApproximation}
(\forall n\ge 1) \quad (\forall m\ge n) \quad \tau^{(n)}(\mathrm{G}_p^m{F})=\tau^{(n)}{F}.
\end{equation}
However, the transfer kernel type is a phenomenon of second order:
\begin{equation}
\label{eqn:TKTSuccessiveApproximation}
(\forall m\ge 2) \quad \varkappa(\mathrm{G}_p^m{F})\sim\varkappa(F),
\end{equation}
in particular, the metabelian second \(p\)-class group
\(\mathfrak{M}:=\mathrm{G}_p^2{F}\simeq\mathfrak{G}/\mathfrak{G}^{\prime\prime}\) of \(F\)
is sufficient for determining the transfer kernel type of \(F\).
\end{theorem}
\begin{proof}
This is one of the main results of
\cite[Thm. 1.19, p. 78]{Ma15}
and
\cite[p. 13]{Ma15b}.
\end{proof}
In general, the upper bound on the order \(n\) of the ATI of \(F\) in Theorem
\ref{thm:SuccessiveApproximation}
seems to be sharp, in the following sense,
where \(m=n-1\).
\begin{conjecture}
\label{cnj:StageSeparation}
\textbf{(Stage Separation Criterion.)}\\
Denote by \(\ell_p{F}\) the length of the \(p\)-class tower of \(F\),
that is, the derived length \(\mathrm{dl}(\mathrm{G}_p^\infty{F})\) of the \(p\)-tower group of \(F\).
It is determined in terms of iterated AQI of higher \(p\)-class groups of \(F\) by the following condition:
\begin{equation}
\label{eqn:TauStageSeparation}
(\forall n\ge 1) \quad \lbrack\ \ell_p{F}\ge n\ \Longleftrightarrow\ \tau^{(n)}(\mathrm{G}_p^{n-1}{F})<\tau^{(n)}{F}\ \rbrack.
\end{equation}
\end{conjecture}
\noindent
The sufficiency of the condition in Conjecture
\ref{cnj:StageSeparation}
is a proven theorem
\cite[p. 13]{Ma15b}.
\section{Successive approximation of the \(p\)-class tower}
\label{s:Approximation}
\subsection{Computational perspectives}
\label{ss:Computation}
\noindent
Our first attempt to find sound asymptotic tendencies
in the distribution of higher non-abelian \(p\)-class groups
\(\mathrm{G}_p^n{F}=\mathrm{Gal}(F_p^{(n)}/F)\), with \(n\ge 2\),
among the finite \(p\)-groups
was planned as early as \(1991\)
\cite[\S\ 3, Remark, p. 77]{Ma}.
However, the insurmountable obstacles in the required computations
limited the progress for twenty years.
In \(2012\), we finally achieved the significant breakthrough
of computing the second \(3\)-class groups
\(\mathfrak{M}=\mathrm{G}_3^2{F}\),
that is, the metabelianizations \(\mathfrak{G}/\mathfrak{G}^{(2)}\)
of the \(3\)-class tower groups \(\mathfrak{G}=\mathrm{Gal}(F_3^{(\infty)}/F)\)
of all \(4596\) quadratic fields \(F=\mathbb{Q}(\sqrt{d})\)
with fundamental discriminants in the remarkable range \(-10^6<d<10^7\)
and elementary bicyclic \(3\)-class group \(\mathrm{Cl}_3{F}\simeq C_3\times C_3\) of rank two
\cite[\S\ 6, pp. 495--499]{Ma1}.
The underlying computational techniques were based on the
\textit{principalization algorithm via class group structure}
which we had invented in \(2009\) and implemented by means of
the number theoretic computer algebra system PARI/GP
\cite{PARI}
in \(2010\), as described in
\cite[\S\S\ 5--6, pp. 446--455]{Ma3}.
Throughout this paper,
isomorphism classes of finite groups \(G\) are characterized uniquely
by their identifier in the SmallGroups Database
\cite{BEO1,BEO2},
which is denoted by a pair \(\langle o,i\rangle\)
consisting of the order \(o=\mathrm{ord}(G)\) and a positive integer \(i\),
delimited with angle brackets.
The counter \(1\le i\le N(o)\) is unique for a fixed value of the order \(o\).
In the computational algebra system MAGMA
\cite{BCP,BCFS,MAGMA},
the upper bound \(N(o)\) can be obtained as return value of the function
\texttt{NumberOfSmallGroups}\((o)\),
provided that
\texttt{IsInSmallGroupDatabase}\((o)\)
returns \texttt{true}.
The identifier of a given finite group \(G\)
can be retrieved as return value of the function
\texttt{IdentifyGroup}\((G)\),
provided that
\texttt{CanIdentifyGroup}\((o)\)
returns \texttt{true}.
\subsection{Trivial towers with \(\ell_p{F}=0\)}
\label{ss:Trivial}
\noindent
For the decision if the \(p\)-class tower of a number field \(F\) is trivial
with length \(\ell_p{F}=0\)
it suffices to compute the class number \(h(F)\) of the field.
\begin{theorem}
\label{thm:Trivial}
\textbf{(Trivial \(p\)-class tower.)}\\
The \(p\)-class tower of a number field \(F\) is trivial, \(F_p^{(\infty)}=F\),
with length \(\ell_p{F}=0\),
if and only if the class number \(h(F)=\#\mathrm{Cl}(F)\) is not divisible by \(p\),
i.e., the \(p\)-class number is \(h_p{F}=1\).
\end{theorem}
\begin{proof}
The proof consists of a sequence of equivalent statements:
The class number satisfies \(p\nmid h(F)\). \(\Longleftrightarrow\)
The \(p\)-valuation of \(h(F)\) is \(v_p(h(F))=0\). \(\Longleftrightarrow\)
The \(p\)-class number is \(\#\mathrm{Cl}_p{F}=h_p{F}=p^{v_p(h(F))}=1\). \(\Longleftrightarrow\)
The \(p\)-class group \(\mathrm{Cl}_p{F}=1\) is trivial. \(\Longleftrightarrow\)
The \(p\)-class rank \(\rho_p=\dim_{\mathbb{F}_p}(\mathrm{Cl}(F)/\mathrm{Cl}(F)^p)\) is equal to zero. \(\Longleftrightarrow\)
The number of unramified cyclic extensions \(E/F\) of degree \(p\) is \(\frac{p^{\rho_p}-1}{p-1}=\frac{p^0-1}{p-1}=\frac{1-1}{p-1}=0\). \(\Longleftrightarrow\)
The maximal unramified \(p\)-extension \(F_p^{(\infty)}\) of \(F\) coincides with \(F\). \(\Longleftrightarrow\)
The Galois group \(\mathrm{G}_p^\infty{F}=\mathrm{Gal}(F_p^{(\infty)}/F)=\mathrm{Gal}(F/F)=1\) is trivial. \(\Longleftrightarrow\)
The length of the \(p\)-class tower is \(\ell_p{F}=\mathrm{dl}(\mathrm{G}_p^\infty{F})=\mathrm{dl}(1)=0\).
\end{proof}
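Theorem \ref{thm:Trivial} reduces triviality of the tower to a divisibility test, and the proof also uses the count \((p^{\rho_p}-1)/(p-1)\) of first-layer fields. A minimal sketch, using the classical class numbers \(h(\mathbb{Q}(\sqrt{-23}))=3\) and \(h(\mathbb{Q}(\sqrt{-5}))=2\) as test data:

```python
def p_valuation(n, p):
    # exponent of the exact power of p dividing n
    v = 0
    while n % p == 0:
        n //= p
        v += 1
    return v

def p_class_tower_is_trivial(h, p):
    # Theorem: the p-class tower of F is trivial iff p does not divide h(F)
    return p_valuation(h, p) == 0

def num_first_layer_fields(rho, p):
    # number of unramified cyclic degree-p extensions: (p^rho - 1)/(p - 1)
    return (p ** rho - 1) // (p - 1)

# h(Q(sqrt(-23))) = 3: non-trivial 3-class tower;
# h(Q(sqrt(-5)))  = 2: trivial 3-class tower
assert not p_class_tower_is_trivial(3, 3)
assert p_class_tower_is_trivial(2, 3)
# p-class rank 0 gives no first-layer fields, rank 2 gives p + 1 of them
assert num_first_layer_fields(0, 3) == 0
assert num_first_layer_fields(2, 3) == 3 + 1
```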
\noindent
C. F. Gauss was already able to compute class numbers \(h(F)\) of quadratic fields \(F=\mathbb{Q}(\sqrt{d})\),
at a time when the concept of class field theory was not yet coined.
Nowadays, there exist extensive tables of quadratic class numbers
which even contain the structures of the associated class groups \(\mathrm{Cl}(F)\).
In \(1998\), Jacobson
\cite{Js}
covered all real quadratic fields
with positive discriminants in the range \(0<d<10^9\),
and in \(2016\), Mosunov and Jacobson
\cite{MsJs}
investigated all imaginary quadratic fields
with negative discriminants \(-10^{12}<d<0\).
Now we apply these results to class field theory.
\begin{corollary}
\label{cor:ImaginaryTrivial}
\textbf{(Statistics for \(p=3\).)}
The asymptotic proportion of imaginary quadratic fields \(F=\mathbb{Q}(\sqrt{d})\),
with negative discriminants \(d<0\),
whose class number \(h(F)\) is, respectively is not, divisible by \(p=3\)
is given as \(43.99\%\), respectively \(56.01\%\), by the heuristics of Cohen, Lenstra and Martinet.
In Table
\ref{tbl:ImaginaryTrivial},
the approximations of these theoretical limits
by relative frequencies in various ranges \(L<d<0\)
are shown.
\end{corollary}
\renewcommand{\arraystretch}{1.2}
\begin{table}[ht]
\caption{Imaginary quadratic fields \(F\) with non-trivial, resp. trivial, \(3\)-class tower}
\label{tbl:ImaginaryTrivial}
\begin{center}
\begin{tabular}{|c||r|c||r|c||c|}
\hline
\(L\) & \(\#(3\mid h(F))\) & rel. fr. & \(\#(3\nmid h(F))\) & rel. fr. & w. r. t. \(\#\)total \\
\hline
\(-10^6\) & \(121\,645\) & \(40.02\%\) & \(182\,323\) & \(59.98\%\) & \(303\,968\) \\
\(-10^{11}\) & \(13\,206\,088\,529\) & \(43.45\%\) & \(17\,190\,266\,523\) & \(56.55\%\) & \(30\,396\,355\,052\) \\
\(-10^{12}\) & \(132\,584\,350\,621\) & \(43.62\%\) & \(171\,379\,200\,091\) & \(56.38\%\) & \(303\,963\,550\,712\) \\
\hline
\end{tabular}
\end{center}
\end{table}
\begin{proof}
The heuristic asymptotic limits are given in
\cite[\S\ 2, (1.1.c), p. 126]{ChMt}.
Their approximation by discriminants \(L<d<0\) with \(L=-10^6\) in
\cite[Example, p. 843]{Ma0}
and
\cite[\S\ 2, Remark, and \S\ 3, Remark, p. 77]{Ma},
where \(118\,455+3\,190=121\,645\),
is still rather far away from the limits.
In contrast, the approximations associated with the bounds \(L=-10^{11}\) and \(L=-10^{12}\) in
\cite[p. 2001]{MsJs}
are very close already.
\end{proof}
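The relative frequencies quoted in Table \ref{tbl:ImaginaryTrivial}, and the count \(118\,455+3\,190=121\,645\) used in the proof, can be re-checked mechanically; a minimal sketch:

```python
# identity from the proof: 118455 rank-one fields plus 3190 others
assert 118_455 + 3_190 == 121_645

# rows of Table 1: (#(3 | h), #(3 ∤ h), total, rel. fr. %, rel. fr. %)
rows = [
    (121_645, 182_323, 303_968, 40.02, 59.98),
    (13_206_088_529, 17_190_266_523, 30_396_355_052, 43.45, 56.55),
    (132_584_350_621, 171_379_200_091, 303_963_550_712, 43.62, 56.38),
]
for div, ndiv, total, pct_div, pct_ndiv in rows:
    assert div + ndiv == total                    # counts are consistent
    assert round(100 * div / total, 2) == pct_div   # quoted percentages
    assert round(100 * ndiv / total, 2) == pct_ndiv
```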
\subsection{Abelian single-stage towers with \(\ell_p{F}=1\)}
\label{ss:SingleStage}
\noindent
The first stage of the \(p\)-class tower of a number field \(F\)
is determined by the structure of the \(p\)-class group \(\mathrm{Cl}_p{F}\) of \(F\)
as a finite abelian \(p\)-group.
This is exactly the first order Artin pattern
\begin{equation}
\label{eqn:AP1}
\mathrm{AP}^{(1)}{F}=(\tau^{(1)}{F},\varkappa^{(1)}{F})=(\mathrm{ATI}(\mathrm{Cl}_p{F}),\ker(T_{F,F})),
\end{equation}
since the trivial \(\ker(T_{F,F})=1\) does not contain information.
However, only in the case of \(p\)-class rank one,
\(\rho_p=\dim_{\mathbb{F}_p}(\mathrm{Cl}(F)/\mathrm{Cl}(F)^p)=1,\)
is it guaranteed that the exact length of the tower is \(\ell_p{F}=1\).
A statistical example
\cite[\S\ 2, Remark, p. 77]{Ma}
is shown in Table
\ref{tbl:ImaginaryCyclic}.
\begin{theorem}
\label{thm:Abelian}
A number field \(F\) with non-trivial cyclic \(p\)-class group \(\mathrm{Cl}_p{F}\)
has an abelian \(p\)-class tower of exact length \(\ell_p{F}=1\),
in fact, the Galois group \(\mathrm{G}_p^\infty{F}\simeq\mathrm{G}_p^1{F}\simeq\mathrm{Cl}_p{F}\) is cyclic.
\end{theorem}
\begin{proof}
Suppose that \(\mathrm{Cl}_p{F}>1\) is non-trivial and cyclic.
If the \(p\)-class tower had a length \(\ell_p{F}\ge 2\),
the second \(p\)-class group \(\mathfrak{M}=\mathrm{G}_p^2{F}\)
would be a non-abelian finite \(p\)-group
with cyclic abelianization \(\mathfrak{M}/\mathfrak{M}^\prime\simeq\mathrm{Cl}_p{F}\).
However, it is well known that a nilpotent group with cyclic abelianization is abelian:
since \(\mathfrak{M}^\prime\le\Phi(\mathfrak{M})\) lies in the Frattini subgroup,
the cyclic quotient \(\mathfrak{M}/\mathfrak{M}^\prime\) forces \(\mathfrak{M}/\Phi(\mathfrak{M})\) to be cyclic,
so \(\mathfrak{M}\) is generated by a single element, by the Burnside basis theorem, and thus abelian.
This contradicts the assumption of a length \(\ell_p{F}\ge 2\).
\end{proof}
\renewcommand{\arraystretch}{1.2}
\begin{table}[ht]
\caption{Imaginary quadratic fields \(F\) with cyclic \(3\)-class tower for \(-10^6<d<0\)}
\label{tbl:ImaginaryCyclic}
\begin{center}
\begin{tabular}{|c||r|c||c|}
\hline
\(\mathrm{Cl}_3{F}\simeq\) & abs. fr. & rel. fr. & w. r. t. \(\#(\rho_3=1)\) \\
\hline
\(C_3\) & \(80\,115\) & \(67.63\%\) & \(118\,455\) \\
\(C_9\) & \(26\,458\) & \(22.34\%\) & \(118\,455\) \\
\(C_{27}\) & \(8\,974\) & \(7.58\%\) & \(118\,455\) \\
\(C_{81}\) & \(2\,472\) & \(2.09\%\) & \(118\,455\) \\
\(C_{243}\) & \(393\) & \(0.33\%\) & \(118\,455\) \\
\(C_{729}\) & \(43\) & \(0.04\%\) & \(118\,455\) \\
\hline
\end{tabular}
\end{center}
\end{table}
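As a consistency check, the six counts in Table \ref{tbl:ImaginaryCyclic} sum to exactly the \(118\,455\) fields with \(\rho_3=1\) which also enter the proof of Corollary \ref{tbl:ImaginaryTrivial} above. A short Python sketch (counts transcribed from the table):

```python
# Consistency check for Table (tbl:ImaginaryCyclic): the six counts of
# cyclic 3-class groups must sum to the 118_455 imaginary quadratic
# fields with 3-class rank rho_3 = 1 in the range -10^6 < d < 0.
counts = {"C_3": 80_115, "C_9": 26_458, "C_27": 8_974,
          "C_81": 2_472, "C_243": 393, "C_729": 43}
total = 118_455
assert sum(counts.values()) == total   # the six classes exhaust rho_3 = 1

for name, cnt in counts.items():
    print(f"{name}: {100 * cnt / total:.2f} %")   # 67.63, 22.34, 7.58, ...
```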
\begin{remark}
\label{rmk:SingleStage}
We interpret
the computation of abelian type invariants \(\tau^{(1)}{F}\) of the Sylow \(3\)-subgroup \(\mathrm{Cl}_3{F}\)
of the ideal class group \(\mathrm{Cl}(F)\) of a quadratic field \(F=\mathbb{Q}(\sqrt{d})\)
as the determination of the single-stage approximation \(\mathfrak{G}/\mathfrak{G}^\prime\simeq\mathrm{G}_3^1{F}\simeq\mathrm{Cl}_3{F}\)
of the \(3\)-class tower group \(\mathfrak{G}=\mathrm{G}_3^\infty{F}\) of \(F\).
This step yields complete information about the lattice of all
unramified abelian \(3\)-extensions \(E/F\) within the Hilbert \(3\)-class field \(\mathrm{F}_3^1{F}\) of \(F\).
\end{remark}
\subsection{Metabelian two-stage towers with \(\ell_p{F}=2\)}
\label{ss:TwoStage}
\noindent
According to the Successive Approximation Theorem
\ref{thm:SuccessiveApproximation},
the second stage \(F_p^{(2)}\) of the \(p\)-class tower of a number field \(F\)
is determined by the second order Artin pattern
\begin{equation}
\label{eqn:AP2}
\mathrm{AP}^{(2)}{F}=(\tau^{(2)}{F},\varkappa^{(2)}{F})=
(\lbrack\mathrm{ATI}(\mathrm{Cl}_p{F});(\mathrm{ATI}(\mathrm{Cl}_p{E}))_{E\in\mathrm{Lyr}_1{F}}\rbrack,\lbrack\ker(T_{F,F});(\ker(T_{F,E}))_{E\in\mathrm{Lyr}_1{F}}\rbrack).
\end{equation}
The determination of \(\mathrm{AP}^{(2)}{F}\) for a quadratic field \(F\) with \(3\)-class rank \(\rho_3=2\)
requires the computation of four \(3\)-class groups \(\mathrm{Cl}_3{E_i}\) of unramified cyclic cubic extensions \(E_1,\ldots,E_4\)
and of four transfer kernels \(\ker(T_{F,E_i})\).
Whereas Mosunov and Jacobson
\cite{MsJs}
were able to determine the class groups \(\mathrm{Cl}(F)\) of more than \(300\) billion,
precisely \(303\,963\,550\,712\),
imaginary quadratic fields \(F\) with discriminants \(-10^{12}<d<0\)
by parallel processes on multiple cores of a supercomputer
in several years of total CPU time,
it is currently entirely out of reach to compute the class groups \(\mathrm{Cl}(E_i)\), \(1\le i\le 4\),
for the \(22\,757\,307\,168\) unramified cyclic cubic extensions \(E_i/F\), of absolute degree six,
of the \(5\,689\,326\,792\) imaginary quadratic fields \(F\) with discriminants \(-10^{12}<d<0\)
and \(3\)-class rank \(\rho_3=2\).
Therefore, it is all the more remarkable that Boston, Bush and Hajir
\cite{BBH}
succeeded in completing this task for the smaller range \(-10^8<d<0\)
with \(461\,925\) imaginary quadratic fields \(F\) having \(3\)-class rank \(\rho_3=2\),
and \(1\,847\,700\) associated \textit{totally complex dihedral fields} \(E_i\) of degree six
\cite[Prp. 4.1, p. 482]{Ma1}.
For this purpose the authors used the computational algebra system MAGMA
\cite{BCP,BCFS,MAGMA}
in a distributed process involving several processors with multiple cores.
\(276\,375\) of these quadratic fields \(F\)
have a \(3\)-class group \(\mathrm{Cl}_3{F}\simeq C_3\times C_3\).
Imaginary quadratic fields \(F=\mathbb{Q}(\sqrt{d})\) with negative discriminants \(d<0\)
are the simplest number fields with respect to their unit group \(U_F\),
which is a finite torsion group of Dirichlet unit rank zero.
This fact has considerable consequences for their \(p\)-class tower groups,
according to the Shafarevich theorem
\cite{Sh},
corrected in
\cite[Thm. 5.1, p. 28]{Ma10},
\cite{Ma10a}.
\begin{theorem}
\label{thm:ImaginaryTwoStage}
Among the finite \(3\)-groups \(G\)
with elementary bicyclic abelianization \(G/G^\prime\simeq C_3\times C_3\) of rank two,
there exist only two metabelian groups with GI-action and relation rank \(d_2{G}=2\)
(so-called Schur \(\sigma\)-groups
\cite{KoVe,BBH}),
namely \(\langle 243,5\rangle\) and \(\langle 243,7\rangle\).
\begin{enumerate}
\item
These are the groups of smallest order
which are admissible as \(3\)-class tower groups \(G\simeq\mathrm{G}_3^\infty{F}\)
of imaginary quadratic fields \(F\) with \(3\)-class group \(\mathrm{Cl}_3{F}\simeq C_3\times C_3\).
\item
Generally, for any number field \(F\), these groups are determined uniquely by the second order Artin pattern.
\begin{enumerate}
\item
If \(\mathrm{AP}^{(2)}{F}=(\lbrack 1^2;(21,21,1^3,21)\rbrack,\lbrack 1;(2241)\rbrack)\)
then \(\mathrm{G}_3^\infty{F}\simeq\langle 243,5\rangle\).
\item
If \(\mathrm{AP}^{(2)}{F}=(\lbrack 1^2;(1^3,21,1^3,21)\rbrack,\lbrack 1;(4224)\rbrack)\)
then \(\mathrm{G}_3^\infty{F}\simeq\langle 243,7\rangle\).
\end{enumerate}
\item
The actual distribution of these \(3\)-class tower groups \(G\)
among the \(276\,375\) imaginary quadratic fields \(F=\mathbb{Q}(\sqrt{d})\)
with \(3\)-class group \(\mathrm{Cl}_3{F}\simeq C_3\times C_3\) and
discriminants \(-10^8<d<0\) is presented in
Table
\ref{tbl:ImaginaryTwoStage}.
\end{enumerate}
\end{theorem}
\renewcommand{\arraystretch}{1.2}
\begin{table}[ht]
\caption{Frequencies of metabelian \(3\)-class tower groups \(G\) for \(-10^8<d<0\)}
\label{tbl:ImaginaryTwoStage}
\begin{center}
\begin{tabular}{|c|r||r|c||r|c||c|r|}
\hline
\(G\simeq\) & abs. fr. & rel. fr. & w. r. t. & rel. fr. & w. r. t. & measure \cite{BBH} & \(\lvert d\rvert_{\text{min}}\) \\
\hline
\(\langle 243,5\rangle\) & \(83\,353\) & \(30.16\%\) & \(276\,375\) & \(18.04\%\) & \(461\,925\) & \(128/729\approx 17.56\%\) & \(4\,027\) \\
\(\langle 243,7\rangle\) & \(41\,398\) & \(14.98\%\) & \(276\,375\) & \(8.96\%\) & \(461\,925\) & \(64/729\approx 8.78\%\) & \(12\,131\) \\
\hline
\end{tabular}
\end{center}
\end{table}
\begin{proof}
All finite \(3\)-groups \(G\) with abelianization \(G/G^\prime\simeq C_3\times C_3\)
are vertices of the descendant tree \(\mathcal{T}(R)\)
with abelian root \(R=\langle 9,2\rangle\simeq C_3\times C_3\).
A search for metabelian vertices with relation rank \(d_2{G}=2\)
in this tree yields three hits
\(\langle 27,4\rangle\), \(\langle 243,5\rangle\), and \(\langle 243,7\rangle\),
but only the latter two of them possess a GI-action.
The abelianization \(G/G^\prime\) of a finite \(3\)-group \(G\)
which is realized as the \(3\)-class tower group \(\mathrm{G}_p^\infty{F}\)
of an algebraic number field \(F\)
is isomorphic to the \(3\)-class group \(\mathrm{Cl}_3{F}\) of \(F\).
When \(F\) is imaginary quadratic, it possesses signature \((r_1,r_2)=(0,1)\)
and torsionfree Dirichlet unit rank \(r=r_1+r_2-1=0\).
If \(G/G^\prime\simeq\mathrm{Cl}_3{F}\simeq C_3\times C_3\),
then the generator rank of \(G\) is \(d_1{G}=2\)
and the Shafarevich theorem implies bounds for the relation rank
\(2=d_1{G}\le d_2{G}\le d_1{G}+r=2\).
The entries of Table
\ref{tbl:ImaginaryTwoStage}
have been taken from
\cite{BBH}.
\end{proof}
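The comparison between the Boston--Bush--Hajir measures and the observed frequencies in Table \ref{tbl:ImaginaryTwoStage}, together with the unit rank computation from the proof, can be sketched in a few lines of Python (an illustration only; counts transcribed from the table):

```python
from fractions import Fraction

# Imaginary quadratic fields have signature (0, 1), hence torsionfree
# Dirichlet unit rank r = 0 + 1 - 1 = 0, and the Shafarevich bound
# 2 = d_1 <= d_2 <= d_1 + r = 2 forces a Schur sigma-group.
r1, r2 = 0, 1
r = r1 + r2 - 1
d1 = 2
assert (d1, d1 + r) == (2, 2)

# Predicted measures versus observed frequencies (Table tbl:ImaginaryTwoStage),
# with respect to the 461_925 fields of 3-class rank 2 and -10^8 < d < 0:
data = {"<243,5>": (Fraction(128, 729), 83_353),
        "<243,7>": (Fraction(64, 729), 41_398)}
for g, (m, cnt) in data.items():
    print(f"{g}: predicted {100 * float(m):.2f} %, "
          f"observed {100 * cnt / 461_925:.2f} %")
```

The observed frequencies \(18.04\%\) and \(8.96\%\) lie close to, but slightly above, the predicted measures \(17.56\%\) and \(8.78\%\).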
More recently, Boston, Bush and Hajir
\cite{BBH2}
used MAGMA
\cite{MAGMA}
for computing the class groups of the \(481\,756\) real quadratic fields \(F\)
having \(3\)-class rank \(\rho_3=2\) and discriminants in the range \(0<d<10^9\),
and the class groups of the \(1\,927\,024\) associated \textit{totally real dihedral fields} \(E_i\) of degree six,
arising from unramified cyclic cubic extensions \(E_i/F\)
\cite[Prp. 4.1, p. 482]{Ma1}.
\(415\,698\) of these quadratic fields \(F\)
have a \(3\)-class group \(\mathrm{Cl}_3{F}\simeq C_3\times C_3\)
(\(415\,699\) according to
\cite[Tbl. 7]{Js}).
Real quadratic fields \(F=\mathbb{Q}(\sqrt{d})\) with positive discriminants \(d>0\)
are the second simplest number fields with respect to their unit group \(U_F\),
which is an infinite group of torsionfree Dirichlet unit rank one.
Again, there are remarkable consequences for their \(p\)-class tower groups,
by the Shafarevich theorem
\cite[Thm. 5.1, p. 28]{Ma10}.
\begin{theorem}
\label{thm:RealTwoStage}
Among the finite \(3\)-groups \(G\)
with elementary bicyclic abelianization \(G/G^\prime\simeq C_3\times C_3\) of rank two,
there exist infinitely many metabelian groups with GI-action and relation rank \(d_2{G}=3\)
(so-called Schur\(+1\) \(\sigma\)-groups
\cite{BBH2}),
but only three of minimal order \(3^4\),
namely \(\langle 81,7\rangle\), \(\langle 81,8\rangle\), and \(\langle 81,10\rangle\).
\begin{enumerate}
\item
These are the groups of smallest order
which are admissible as \(3\)-class tower groups \(G\simeq\mathrm{G}_3^\infty{F}\)
of real quadratic fields \(F\) with \(3\)-class group \(\mathrm{Cl}_3{F}\simeq C_3\times C_3\).
\item
Generally, for any number field \(F\), these groups are determined uniquely by the second order Artin pattern.
\begin{enumerate}
\item
If \(\mathrm{AP}^{(2)}{F}=(\lbrack 1^2;(1^3,1^2,1^2,1^2)\rbrack,\lbrack 1;(2000)\rbrack)\)
then \(\mathrm{G}_3^\infty{F}\simeq\langle 81,7\rangle\).
\item
If \(\mathrm{AP}^{(2)}{F}=(\lbrack 1^2;(21,1^2,1^2,1^2)\rbrack,\lbrack 1;(2000)\rbrack)\)
then \(\mathrm{G}_3^\infty{F}\simeq\langle 81,8\rangle\).
\item
If \(\mathrm{AP}^{(2)}{F}=(\lbrack 1^2;(21,1^2,1^2,1^2)\rbrack,\lbrack 1;(1000)\rbrack)\)
then \(\mathrm{G}_3^\infty{F}\simeq\langle 81,10\rangle\).
\end{enumerate}
\item
The actual distribution of these \(3\)-class tower groups \(G\)
among the \(415\,698\) real quadratic fields \(F=\mathbb{Q}(\sqrt{d})\)
with \(3\)-class group \(\mathrm{Cl}_3{F}\simeq C_3\times C_3\) and
discriminants \(0<d<10^9\) is presented in
Table
\ref{tbl:RealTwoStage9}.
Additionally, the frequencies of the groups \(\langle 243,5\rangle\) and \(\langle 243,7\rangle\) in Theorem
\ref{thm:ImaginaryTwoStage}
are given.
\end{enumerate}
\end{theorem}
\renewcommand{\arraystretch}{1.2}
\begin{table}[ht]
\caption{Frequencies of metabelian \(3\)-class tower groups \(G\) for \(0<d<10^9\)}
\label{tbl:RealTwoStage9}
\begin{center}
\begin{tabular}{|c|r||r|c||r|c||c|r|}
\hline
\(G\simeq\) & abs. fr. & rel. fr. & w. r. t. & rel. fr. & w. r. t. & measure \cite{BBH2} & \(d_{\text{min}}\) \\
\hline
\(\langle 81,7\rangle\) & \(122\,955\) & \(29.58\%\) & \(415\,698\) & \(25.52\%\) & \(481\,756\) & \(1664/6561\approx 25.36\%\) & \(142\,097\) \\
\hline
\(\langle 81,8\rangle\) or & \(208\,236\) & \(50.09\%\) & \(415\,698\) & \(43.22\%\) & \(481\,756\) & \(8320/19683\approx 42.27\%\) & \(32\,009\) \\
\(\langle 81,10\rangle\) & & & & & & & \\
\hline
\(\langle 243,5\rangle\) & \(13\,712\) & \(3.30\%\) & \(415\,698\) & \(2.85\%\) & \(481\,756\) & \(1664/59049\approx 2.82\%\) & \(422\,573\) \\
\hline
\(\langle 243,7\rangle\) & \(6\,691\) & \(1.61\%\) & \(415\,698\) & \(1.39\%\) & \(481\,756\) & \(832/59049\approx 1.41\%\) & \(631\,769\) \\
\hline
\end{tabular}
\end{center}
\end{table}
\begin{proof}
A search for metabelian vertices \(G\) of minimal order with relation rank \(d_2{G}=3\)
in the descendant tree \(\mathcal{T}(R)\)
with abelian root \(R=\langle 9,2\rangle\simeq C_3\times C_3\)
yields three hits
\(\langle 81,7\rangle\), \(\langle 81,8\rangle\), and \(\langle 81,10\rangle\).
All of them possess a GI-action.
The abelianization \(G/G^\prime\) of a finite \(3\)-group \(G\)
which is realized as the \(3\)-class tower group \(\mathrm{G}_p^\infty{F}\)
of an algebraic number field \(F\)
is isomorphic to the \(3\)-class group \(\mathrm{Cl}_3{F}\) of \(F\).
When \(F\) is real quadratic, it possesses signature \((r_1,r_2)=(2,0)\)
and torsionfree Dirichlet unit rank \(r=r_1+r_2-1=1\).
If \(G/G^\prime\simeq\mathrm{Cl}_3{F}\simeq C_3\times C_3\),
then the generator rank of \(G\) is \(d_1{G}=2\)
and the Shafarevich theorem implies bounds for the relation rank
\(2=d_1{G}\le d_2{G}\le d_1{G}+r=3\).
The entries of Table
\ref{tbl:RealTwoStage9}
have been taken from
\cite{BBH2}.
\end{proof}
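Analogously to the imaginary case, the measures of Theorem \ref{thm:RealTwoStage} can be checked against the observed frequencies in Table \ref{tbl:RealTwoStage9} by a small Python sketch (an illustration only; counts transcribed from the table):

```python
from fractions import Fraction

# Real quadratic fields have signature (2, 0), hence torsionfree Dirichlet
# unit rank r = 2 + 0 - 1 = 1; the Shafarevich theorem then only bounds
# the relation rank by 2 = d_1 <= d_2 <= d_1 + r = 3.
r1, r2 = 2, 0
r = r1 + r2 - 1
d1 = 2
assert (d1, d1 + r) == (2, 3)

# Predicted measures versus observed frequencies (Table tbl:RealTwoStage9),
# with respect to the 481_756 fields of 3-class rank 2 and 0 < d < 10^9:
data = {"<81,7>":            (Fraction(1664, 6561), 122_955),
        "<81,8> or <81,10>": (Fraction(8320, 19683), 208_236),
        "<243,5>":           (Fraction(1664, 59049), 13_712),
        "<243,7>":           (Fraction(832, 59049), 6_691)}
for g, (m, cnt) in data.items():
    print(f"{g}: predicted {100 * float(m):.2f} %, "
          f"observed {100 * cnt / 481_756:.2f} %")
```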
In
\cite{BBH2},
Boston, Bush and Hajir only computed the first component of the second order Artin pattern
\(\mathrm{AP}^{(2)}{F}=(\tau^{(2)}{F},\varkappa^{(2)}{F})\) in Formula
\eqref{eqn:AP2},
that is, the abelian type invariants \(\tau^{(2)}{F}\) of second order
of real quadratic fields \(F\) with discriminants \(0<d<10^9\).
Determining the second component \(\varkappa^{(2)}{F}\), the transfer kernel type of \(F\),
is computationally considerably more expensive.
Consequently, the most extensive numerical results on transfer kernels currently available
have been computed by ourselves for the smaller ranges \(0<d<10^8\) in
\cite{Ma14,Ma14b},
and, even computing third order Artin patterns, for \(0<d<10^7\) in
\cite{Ma17,Ma17b}.
With the aid of these results, we now illustrate that
the transfer kernels \(\ker(T_{F,E})\) of \(3\)-class extensions
\(T_{F,E}:\mathrm{Cl}_3{F}\to\mathrm{Cl}_3{E}\)
from real quadratic fields \(F\) to unramified cyclic cubic extensions \(E/F\)
are capable of narrowing down the number of candidates
for the \(3\)-tower group \(\mathrm{G}_3^\infty{F}\) significantly,
and thus of refining the statistics in
\cite{BBH2}.
\renewcommand{\arraystretch}{1.2}
\begin{table}[ht]
\caption{Frequencies of metabelian \(3\)-class tower groups \(G\) for \(0<d<10^8\) resp. \(10^7\)}
\label{tbl:RealTwoStage8}
\begin{center}
\begin{tabular}{|c|r||r|c||r|}
\hline
\(G\simeq\) & abs. fr. & rel. fr. & w. r. t. & \(d_{\text{min}}\) \\
\hline
\(\langle 81,7\rangle\) & \(10\,244\) & \(29.58\%\) & \(34\,631\) & \(142\,097\) \\
\(\langle 81,8\rangle\) & \(10\,514\) & \(30.36\%\) & \(34\,631\) & \(32\,009\) \\
\(\langle 81,10\rangle\) & \(7\,104\) & \(20.51\%\) & \(34\,631\) & \(72\,329\) \\
\hline
\(\langle 729,96\rangle\) & \(242\) & \(0.70\%\) & \(34\,631\) & \(790\,085\) \\
\hline
\(\langle 729,97\rangle\) or & \(713\) & \(2.06\%\) & \(34\,631\) & \(494\,236\) \\
\(\langle 729,98\rangle\) & & & & \\
\hline
\(\langle 729,99\rangle\) & \(66\) & \(2.56\%\) & \(2\,576\) & \(62\,501\) \\
\(\langle 729,100\rangle\) & \(42\) & \(1.63\%\) & \(2\,576\) & \(152\,949\) \\
\(\langle 729,101\rangle\) & \(42\) & \(1.63\%\) & \(2\,576\) & \(252\,977\) \\
\hline
\end{tabular}
\end{center}
\end{table}
\begin{corollary}
\label{cor:RealTwoStage}
\begin{enumerate}
\item
If \(\mathrm{AP}^{(2)}{F}=(\lbrack 1^2;(32,1^2,1^2,1^2)\rbrack,\lbrack 1;(1000)\rbrack)\)
then \(\mathrm{G}_3^\infty{F}\simeq\langle 729,96\rangle\).
\item
If \(\mathrm{AP}^{(2)}{F}=(\lbrack 1^2;(32,1^2,1^2,1^2)\rbrack,\lbrack 1;(2000)\rbrack)\)
then \(\mathrm{G}_3^\infty{F}\simeq\langle 729,i\rangle\) with \(i\in\lbrace 97,98\rbrace\).
\item
If \(\mathrm{AP}^{(2)}{F}=(\lbrack 1^2;(2^2,1^2,1^2,1^2)\rbrack,\lbrack 1;(0000)\rbrack)\)
then \(\mathrm{G}_3^\infty{F}\simeq\langle 729,i\rangle\) with \(i\in\lbrace 99,100,101\rbrace\).
\item
The actual distribution of these \(3\)-class tower groups \(G\)
among the \(34\,631\), respectively \(2\,576\), real quadratic fields \(F=\mathbb{Q}(\sqrt{d})\)
with \(3\)-class group \(\mathrm{Cl}_3{F}\simeq C_3\times C_3\) and
discriminants \(0<d<10^8\), respectively \(0<d<10^7\), is presented in
Table
\ref{tbl:RealTwoStage8}.
\end{enumerate}
\end{corollary}
\subsection{Non-metabelian three-stage towers with \(\ell_p{F}=3\)}
\label{ss:ThreeStage}
\noindent
According to the Successive Approximation Theorem,
the third stage \(F_p^{(3)}\) of the \(p\)-class tower of a number field \(F\)
is usually determined by the third order Artin pattern
\begin{equation}
\label{eqn:AP3}
\mathrm{AP}^{(3)}{F}=(\tau^{(3)}{F},\varkappa^{(3)}{F})=
(\lbrack\tau^{(1)}{F};(\tau^{(2)}{E})_{E\in\mathrm{Lyr}_1{F}}\rbrack,\lbrack\varkappa^{(1)}{F};(\varkappa^{(2)}{E})_{E\in\mathrm{Lyr}_1{F}}\rbrack).
\end{equation}
It is interesting, however, that there are extensive collections of quadratic fields \(F\)
with \(3\)-class towers of exact length \(\ell_3{F}=3\),
which can be characterized by the second order Artin pattern already.
We begin with imaginary quadratic fields \(F=\mathbb{Q}(\sqrt{d})\) with discriminants \(d<0\).
\begin{theorem}
\label{thm:ImaginaryThreeStage}
Among the finite \(3\)-groups \(G\)
with elementary bicyclic abelianization \(G/G^\prime\simeq C_3\times C_3\) of rank two,
there exist infinitely many non-metabelian groups with GI-action and relation rank \(d_2{G}=2\)
(so-called Schur \(\sigma\)-groups
\cite{KoVe,BBH}),
but only seven of minimal order \(3^8\),
namely \(\langle 6561,i\rangle\) with \(i\in\lbrace 606,616,617,618,620,622,624\rbrace\).
\begin{enumerate}
\item
These are the groups of smallest order
which are admissible as non-metabelian \(3\)-class tower groups \(G\simeq\mathrm{G}_3^\infty{F}\)
of imaginary quadratic fields \(F\) with \(3\)-class group \(\mathrm{Cl}_3{F}\simeq C_3\times C_3\).
\item
Exceptionally, for an imaginary quadratic field \(F\),
six of these groups, namely all except \(\langle 6561,606\rangle\), are determined by the second order Artin pattern already.
\begin{enumerate}
\item
If \(\mathrm{AP}^{(2)}{F}=(\lbrack 1^2;(32,21,1^3,21)\rbrack,\lbrack 1;(1313)\rbrack)\)
then \(\mathrm{G}_3^\infty{F}\simeq\langle 6561,616\rangle\).
\item
If \(\mathrm{AP}^{(2)}{F}=(\lbrack 1^2;(32,21,1^3,21)\rbrack,\lbrack 1;(2313)\rbrack)\)
then \(\mathrm{G}_3^\infty{F}\simeq\langle 6561,i\rangle\) with \(i\in\lbrace 617,618\rbrace\).
\item
If \(\mathrm{AP}^{(2)}{F}=(\lbrack 1^2;(32,21,21,21)\rbrack,\lbrack 1;(1231)\rbrack)\)
then \(\mathrm{G}_3^\infty{F}\simeq\langle 6561,622\rangle\).
\item
If \(\mathrm{AP}^{(2)}{F}=(\lbrack 1^2;(32,21,21,21)\rbrack,\lbrack 1;(2231)\rbrack)\)
then \(\mathrm{G}_3^\infty{F}\simeq\langle 6561,i\rangle\) with \(i\in\lbrace 620,624\rbrace\).
\end{enumerate}
\item
The actual distribution of these \(3\)-class tower groups \(G\)
among the \(24\,476\) imaginary quadratic fields \(F=\mathbb{Q}(\sqrt{d})\)
with \(3\)-class group \(\mathrm{Cl}_3{F}\simeq C_3\times C_3\) and
discriminants \(-10^7<d<0\) is presented in
Table
\ref{tbl:ImaginaryThreeStage}.
\end{enumerate}
\end{theorem}
\renewcommand{\arraystretch}{1.2}
\begin{table}[ht]
\caption{Frequencies of non-metabelian \(3\)-class tower groups \(G\) for \(-10^7<d<0\)}
\label{tbl:ImaginaryThreeStage}
\begin{center}
\begin{tabular}{|l|r||r|c||l|c||r|}
\hline
\(G\simeq\) & abs. fr. & rel. fr. & w. r. t. & type & \(\varkappa\) & \(\lvert d\rvert_{\text{min}}\) \\
\hline
\(\langle 6561,616\rangle\) & \(760\) & \(3.11\%\) & \(24\,476\) & \(\mathrm{E}.6\) & \((1313)\) & \(15\,544\) \\
\hline
\(\langle 6561,617\rangle\) or & \(1572\) & \(6.42\%\) & \(24\,476\) & \(\mathrm{E}.14\) & \((2313)\) & \(16\,627\) \\
\(\langle 6561,618\rangle\) & & & & & & \\
\hline
\(\langle 6561,622\rangle\) & \(798\) & \(3.26\%\) & \(24\,476\) & \(\mathrm{E}.8\) & \((1231)\) & \(34\,867\) \\
\hline
\(\langle 6561,620\rangle\) or & \(1583\) & \(6.47\%\) & \(24\,476\) & \(\mathrm{E}.9\) & \((2231)\) & \(9\,748\) \\
\(\langle 6561,624\rangle\) & & & & & & \\
\hline
\end{tabular}
\end{center}
\end{table}
\begin{proof}
By a similar but more extensive search than in the proof of Theorem
\ref{thm:ImaginaryTwoStage}.
Data for Table
\ref{tbl:ImaginaryThreeStage}
has been computed by ourselves in June \(2016\)
using MAGMA
\cite{MAGMA}.
\end{proof}
\begin{remark}
\label{rmk:ImaginaryThreeStage}
It should be pointed out that items (1) and (2) of Theorem
\ref{thm:ImaginaryThreeStage}
are \textit{not valid for real quadratic fields},
as documented in
\cite[Thm. 7.8, p. 162, and Thm. 7.12, p. 165]{Ma11b}.
The group \(\langle 6561,606\rangle\)
belongs to the infinite Shafarevich cover of the metabelian group \(\langle 729,45\rangle\)
with respect to imaginary quadratic fields
\cite[Cor. 6.2, p. 301]{Ma7},
\cite{Ma7b}.
It shares a common second order Artin pattern
with all other elements of the Shafarevich cover.
Third order Artin patterns must be used for its identification, as shown in
\cite[Thm. 7.14, p. 168]{Ma11b}.
\end{remark}
\noindent
Now we turn to real quadratic fields \(F=\mathbb{Q}(\sqrt{d})\) with discriminants \(d>0\).
\begin{theorem}
\label{thm:RealThreeStage}
Among the finite \(3\)-groups \(G\)
with elementary bicyclic abelianization \(G/G^\prime\simeq C_3\times C_3\) of rank two,
there exist infinitely many non-metabelian groups with GI-action and relation rank \(d_2{G}=3\)
(so-called Schur\(+1\) \(\sigma\)-groups
\cite{BBH2}),
but only nine of minimal order \(3^7\),
namely \(\langle 2187,i\rangle\) with \(i\in\lbrace 270,271,272,273,284,291,307,308,311\rbrace\).
\begin{enumerate}
\item
These are the groups of smallest order
which are admissible as non-metabelian \(3\)-class tower groups \(G\simeq\mathrm{G}_3^\infty{F}\)
of real quadratic fields \(F\) with \(3\)-class group \(\mathrm{Cl}_3{F}\simeq C_3\times C_3\).
\item
Exceptionally, for a real quadratic field \(F\),
four of these groups are determined by the second order Artin pattern already.
\begin{enumerate}
\item
If \(\mathrm{AP}^{(2)}{F}=(\lbrack 1^2;(2^2,21,1^3,21)\rbrack,\lbrack 1;(0313)\rbrack)\)
then \(\mathrm{G}_3^\infty{F}\simeq\langle 2187,i\rangle\) with \(i\in\lbrace 284,291\rbrace\).
\item
If \(\mathrm{AP}^{(2)}{F}=(\lbrack 1^2;(2^2,21,21,21)\rbrack,\lbrack 1;(0231)\rbrack)\)
then \(\mathrm{G}_3^\infty{F}\simeq\langle 2187,i\rangle\) with \(i\in\lbrace 307,308\rbrace\).
\end{enumerate}
\item
The actual distribution of these \(3\)-class tower groups \(G\)
among the \(415\,698\) real quadratic fields \(F=\mathbb{Q}(\sqrt{d})\)
with \(3\)-class group \(\mathrm{Cl}_3{F}\simeq C_3\times C_3\) and
discriminants \(1<d<10^9\) is presented in
Table
\ref{tbl:RealThreeStage}.
\end{enumerate}
\end{theorem}
\renewcommand{\arraystretch}{1.2}
\begin{table}[ht]
\caption{Frequencies of non-metabelian \(3\)-class tower groups \(G\) for \(0<d<10^9\)}
\label{tbl:RealThreeStage}
\begin{center}
\begin{tabular}{|l|r||r|c||l|c||r|}
\hline
\(G\simeq\) & abs. fr. & rel. fr. & w. r. t. & type & \(\varkappa\) & \(d_{\text{min}}\) \\
\hline
\(\langle 2187,284\rangle\) or & \(4318\) & \(1.04\%\) & \(415\,698\) & \(\mathrm{c}.18\) & \((0313)\) & \(534\,824\) \\
\(\langle 2187,291\rangle\) & & & & & & \\
\hline
\(\langle 2187,307\rangle\) or & \(4377\) & \(1.05\%\) & \(415\,698\) & \(\mathrm{c}.21\) & \((0231)\) & \(540\,365\) \\
\(\langle 2187,308\rangle\) & & & & & & \\
\hline
\end{tabular}
\end{center}
\end{table}
\begin{proof}
The claims for transfer kernel type \(\mathrm{c}.18\), \(\varkappa(F)\sim (0313)\),
are a consequence of
\cite[Prp. 7.1, p. 32, Thm. 7.1, p. 33, and Rmk. 7.1, p. 35]{Ma10},
those for type \(\mathrm{c}.21\), \(\varkappa(F)\sim (0231)\), have been proved in
\cite[Prp. 8.1, p. 42, Thm. 8.1, p. 44, and Rmk. 8.2, p. 45]{Ma10}.
A slightly stronger result is the Main Theorem
\cite[Thm. 2.1, p. 22]{Ma10}.
\end{proof}
\begin{remark}
\label{rmk:RealThreeStage}
The groups \(\langle 2187,i\rangle\) with \(i\in\lbrace 270,271,272,273\rbrace\)
are elements of the infinite Shafarevich cover of the metabelian group \(\langle 729,45\rangle\)
with respect to real quadratic fields.
The group \(\langle 2187,311\rangle\)
belongs to the infinite Shafarevich cover of the metabelian group \(\langle 729,57\rangle\)
with respect to real quadratic fields.
These five groups share a common second order Artin pattern
with all other elements of the relevant Shafarevich cover.
Third order Artin patterns must be employed for their identification, as shown in
\cite[Thm. 7.13, p. 167, and Thm. 7.15, p. 169]{Ma11b}.
\end{remark}
\section{Maximal subgroups of \(3\)-groups of coclass one}
\label{s:MaximalSubgroups}
\noindent
Let \((\gamma_i(G))_{i\ge 1}\) be the descending lower central series of the group \(G\),
defined recursively by \(\gamma_1(G):=G\) and
\(\gamma_i(G):=\lbrack\gamma_{i-1}(G),G\rbrack\) for \(i\ge 2\),
in particular, \(\gamma_2(G)=G^\prime\) is the commutator subgroup of \(G\).
A finite \(p\)-group \(G\) is nilpotent with
\(\gamma_1(G)>\gamma_2(G)>\ldots>\gamma_c(G)>\gamma_{c+1}(G)=1\)
for some integer \(c\ge 1\),
which is called the \textit{nilpotency class} \(\mathrm{cl}(G)=c\) of \(G\).
When \(G\) is of order \(p^n\), for some integer \(n\ge 1\),
the \textit{coclass} of \(G\) is defined by \(\mathrm{cc}(G):=n-c\)
and \(\mathrm{lo}(G):=n\) is called the \textit{logarithmic order} of \(G\).
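These invariants are easy to compute mechanically for small groups. The following self-contained Python sketch (an illustration; the group and its realization on \(9\) points are our own choice, not taken from the literature cited here) computes the lower central series of the wreath product \(C_3\wr C_3\), a \(3\)-group of order \(3^4=81\) and nilpotency class \(3\), hence of coclass \(1\):

```python
# Lower central series, nilpotency class and coclass of a small 3-group,
# computed by brute force on permutations of {0,...,8}.
n = 9

def compose(a, b):        # (a o b)(i) = a[b[i]], i.e. apply b first
    return tuple(a[b[i]] for i in range(n))

def inverse(a):
    r = [0] * n
    for i, ai in enumerate(a):
        r[ai] = i
    return tuple(r)

def closure(gens):        # subgroup generated by gens (breadth-first search)
    e = tuple(range(n))
    elems, frontier = {e}, [e]
    while frontier:
        new = []
        for g in frontier:
            for s in gens:
                h = compose(g, s)
                if h not in elems:
                    elems.add(h)
                    new.append(h)
        frontier = new
    return elems

def comm(a, b):           # [a, b] = a^(-1) b^(-1) a b
    return compose(compose(inverse(a), inverse(b)), compose(a, b))

x = (1, 2, 0, 3, 4, 5, 6, 7, 8)    # base 3-cycle (0 1 2)
t = (3, 4, 5, 6, 7, 8, 0, 1, 2)    # top 3-cycle permuting the three blocks
G = closure([x, t])                # C_3 wr C_3, order 81

series = [G]              # gamma_1 = G, gamma_{i+1} = <[gamma_i, G]>
while len(series[-1]) > 1:
    series.append(closure({comm(a, b) for a in series[-1] for b in G}))

orders = [len(H) for H in series]
c = len(orders) - 1                      # nilpotency class cl(G)
lo, m = 0, len(G)
while m % 3 == 0:
    m, lo = m // 3, lo + 1               # logarithmic order lo(G)
print("orders:", orders, "cl =", c, "cc =", lo - c)
```

The orders of the series terms come out as \(81>9>3>1\), so \(\mathrm{cl}(G)=3\), \(\mathrm{lo}(G)=4\) and \(\mathrm{cc}(G)=1\).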
Finite \(3\)-groups \(G\) with coclass \(\mathrm{cc}(G)=1\) were investigated by N. Blackburn
\cite{Bl2}
in \(1958\).
All of these CF-groups,
which exclusively have \textit{cyclic factors} \(\gamma_i(G)/\gamma_{i+1}(G)\)
of their descending central series for \(i\ge 2\),
are necessarily metabelian, with second derived subgroup \(G^{\prime\prime}=1\)
and abelian commutator subgroup \(G^\prime\),
and possess abelianization \(G/G^\prime\simeq C_3\times C_3\),
according to Blackburn
\cite{Bl1}.
For the statement of Theorem
\ref{thm:MaxSbgCc1},
we need a precise ordering of the four maximal subgroups \(H_1,\ldots,H_4\) of the group \(G=\langle x,y\rangle\),
which can be generated by two elements \(x,y\),
according to the Burnside basis theorem.
For this purpose, we select the generators \(x,y\) such that
\begin{equation}
\label{eqn:MaximalSubgroups}
H_1=\langle y,G^\prime\rangle,\quad
H_2=\langle x,G^\prime\rangle,\quad
H_3=\langle xy,G^\prime\rangle,\quad
H_4=\langle xy^2,G^\prime\rangle,
\end{equation}
and \(H_1=\chi_2(G)\),
provided that \(G\) is of nilpotency class \(\mathrm{cl}(G)\ge 3\).
Here we denote by
\begin{equation}
\label{eqn:TwoStepCentralizer}
\chi_2(G):=\lbrace g\in G\mid (\forall\ h\in\gamma_2(G))\ \lbrack g,h\rbrack\in\gamma_4(G)\rbrace
\end{equation}
the \textit{two-step centralizer} of \(G^\prime\) in \(G\).
\subsection{Parametrized presentations of metabelian \(3\)-groups}
\label{ss:Presentations}
\noindent
The identification of the groups will be achieved with the aid of
parametrized polycyclic power-commutator presentations, as given by
Blackburn
\cite{Bl2},
Miech
\cite{Mi},
and Nebelung
\cite{Ne}:
\begin{equation}
\label{eqn:Presentation}
\begin{aligned}
G_a^n(z,w) := \langle x,y,s_2,\ldots,s_{n-1}\mid s_2=\lbrack y,x\rbrack,\ (\forall_{i=3}^n)\ s_i=\lbrack s_{i-1},x\rbrack,\ s_n=1,\ \lbrack y,s_2\rbrack=s_{n-1}^a, \\
(\forall_{i=3}^{n-1})\ \lbrack y,s_i\rbrack=1,\ x^3=s_{n-1}^w,\ y^3s_2^3s_3=s_{n-1}^z,\ (\forall_{i=2}^{n-3})\ s_i^3s_{i+1}^3s_{i+2}=1,\ s_{n-2}^3=s_{n-1}^3=1\ \rangle,
\end{aligned}
\end{equation}
where \(a\in\lbrace 0,1\rbrace\) and \(w,z\in\lbrace -1,0,1\rbrace\) are bounded parameters,
and the \textit{index of nilpotency} \(n=\mathrm{cl}(G)+1=\mathrm{cl}(G)+\mathrm{cc}(G)=\log_3(\mathrm{ord}(G))=:\mathrm{lo}(G)\) is an unbounded parameter.
The following lemma generalizes relations for second and third powers of generators in
\cite[Lem. 3.1]{Ma17},
\cite{Ma17b}.
\begin{lemma}
\label{lem:PowerRelations}
Let \(G=\langle x,y\rangle\) be a finite \(3\)-group with two generators \(x,y\in G\).
Denote by \(s_2:=\lbrack y,x\rbrack\) the main commutator, and by
\(s_3:=\lbrack s_2,x\rbrack\) and \(t_3:=\lbrack s_2,y\rbrack\) the two iterated commutators.
Then the second and third power of the element \(xy\), respectively \(xy^2\), are given by
\begin{equation}
\label{eqn:PowerRelations}
\begin{aligned}
(xy)^2 &= x^2y^2s_2t_3 & \text{ and } (xy)^3 &= x^3y^3s_2^3s_3t_3^2, \text{ respectively} \\
(xy^2)^2 &= x^2y^4s_2^2t_3^2 & \text{ and } (xy^2)^3 &= x^3y^6s_2^6s_3^2t_3^2,
\end{aligned}
\end{equation}
provided that \(t_3\in\zeta(G)\) is central, \(t_3^3=1\), and \(\lbrack s_3,y\rbrack=1\).
\end{lemma}
\begin{proof}
We begin by preparing three commutator relations:
\begin{equation}
\label{eqn:CommutatorRelations}
yx=xy\lbrack y,x\rbrack=xys_2,\quad
s_2x=xs_2\lbrack s_2,x\rbrack=xs_2s_3,\quad
\text{ and } \quad
s_2y=ys_2\lbrack s_2,y\rbrack=ys_2t_3.
\end{equation}
Now we prove the power relations
by expanding the power expressions by iterated substitution of the commutator relations in Formula
\eqref{eqn:CommutatorRelations},
always observing that \(t_3\) belongs to the centre, that \(t_3^3=1\), and that \(s_3\) and \(y\) commute:
\begin{equation*}
(xy)^2=xyxy=xxys_2y=x^2yys_2t_3=x^2y^2s_2t_3, \text{ and thus}
\end{equation*}
\begin{equation*}
\begin{aligned}
(xy)^3 &= (xy)^2xy=x^2y^2s_2t_3xy=x^2y^2s_2xyt_3=x^2yyxs_2s_3yt_3=x^2yxys_2s_2ys_3t_3= \\
&= x^2xys_2ys_2ys_2t_3s_3t_3=x^3yys_2t_3ys_2t_3s_2s_3t_3^2=x^3y^2s_2ys_2s_2s_3t_3^4= \\
&= x^3y^2ys_2t_3s_2^2s_3t_3=x^3y^3s_2^3s_3t_3^2, \text{ respectively}
\end{aligned}
\end{equation*}
\begin{equation*}
\begin{aligned}
(xy^2)^2 &= xyyxyy=xyxys_2yy=xxys_2yys_2t_3y=x^2yys_2t_3ys_2yt_3=x^2y^2s_2yys_2t_3t_3^2= \\
&= x^2y^2ys_2t_3ys_2t_3^3=x^2y^3s_2ys_2t_3=x^2y^3ys_2t_3s_2t_3=x^2y^4s_2^2t_3^2, \text{ and thus}
\end{aligned}
\end{equation*}
\begin{equation*}
\begin{aligned}
(xy^2)^3 &= (xy^2)^2xy^2=x^2y^4s_2^2t_3^2xy^2=x^2y^4s_2s_2xyyt_3^2=x^2y^4s_2xs_2s_3yyt_3^2= \\
&= x^2yyy yx s_2s_3 s_2y ys_3t_3^2=x^2yy yx ys_2 s_2y s_2t_3ys_3^2t_3^2=x^2yyxys_2ys_2ys_2t_3s_2ys_3^2t_3^3= \\
&= x^2yxys_2yys_2t_3ys_2t_3s_2ys_2t_3s_3^2t_3^4=x^2xys_2yys_2t_3ys_2ys_2t_3^2ys_2t_3s_2s_3^2t_3^2= \\
&= x^3yys_2t_3ys_2yys_2t_3s_2t_3^3ys_2^2s_3^2t_3^3=x^3y^2s_2ys_2yys_2t_3^2s_2ys_2^2s_3^2= \\
&= x^3y^2ys_2t_3ys_2t_3ys_2t_3^2ys_2t_3s_2^2s_3^2=x^3y^3s_2ys_2t_3^2ys_2yt_3^3s_2^3s_3^2=x^3y^3ys_2t_3s_2t_3^2yys_2t_3s_2^3s_3^2= \\
&= x^3y^4s_2s_2yys_2t_3^4s_2^3s_3^2=x^3y^4s_2s_2yys_2^4s_3^2t_3=x^3y^4s_2ys_2t_3ys_2^4s_3^2t_3=x^3y^4s_2ys_2ys_2^4s_3^2t_3^2= \\
&= x^3y^4ys_2t_3ys_2t_3s_2^4s_3^2t_3^2=x^3y^5s_2ys_2t_3^2s_2^4s_3^2t_3^2=x^3y^5y s_2t_3s_2^5s_3^2t_3^4=x^3y^6s_2^6s_3^2t_3^2.
\end{aligned}
\end{equation*}
\end{proof}
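The identities of Lemma \ref{lem:PowerRelations} can also be verified mechanically in any concrete group satisfying the hypotheses. The following Python sketch (an illustration, not part of the proof) works in the unitriangular group \(\mathrm{UT}(4,\mathbb{F}_3)\) with \(x=I+E_{12}\) and \(y=I+E_{23}+E_{34}\); in this model \(s_3=1\) happens to be trivial, while \(t_3=I-E_{14}\) is a central element of order \(3\), so all hypotheses hold:

```python
# Verification of the power relations in UT(4, F_3).
p, n = 3, 4

def I():
    return [[1 if i == j else 0 for j in range(n)] for i in range(n)]

def E(i, j):  # matrix unit e_ij, 1-based indices
    M = [[0] * n for _ in range(n)]
    M[i - 1][j - 1] = 1
    return M

def add(A, B, c=1):  # A + c*B  (mod p)
    return [[(A[i][j] + c * B[i][j]) % p for j in range(n)] for i in range(n)]

def mul2(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(n)) % p for j in range(n)]
            for i in range(n)]

def mul(*Ms):
    R = I()
    for M in Ms:
        R = mul2(R, M)
    return R

def inv(M):  # Neumann series: (I + N)^(-1) = I - N + N^2 - N^3, N nilpotent
    N = add(M, I(), -1)
    return add(add(add(I(), N, -1), mul2(N, N)), mul2(mul2(N, N), N), -1)

def comm(a, b):  # [a, b] = a^(-1) b^(-1) a b, so that yx = xy [y, x]
    return mul(inv(a), inv(b), a, b)

x = add(I(), E(1, 2))
y = add(add(I(), E(2, 3)), E(3, 4))
s2 = comm(y, x)            # main commutator [y, x] = I - E13
s3 = comm(s2, x)           # iterated commutator [s2, x] (trivial here)
t3 = comm(s2, y)           # iterated commutator [s2, y] = I - E14

# hypotheses of the Lemma:
assert mul(t3, t3, t3) == I() and comm(s3, y) == I()
assert comm(t3, x) == I() and comm(t3, y) == I()

# the four power relations:
assert mul(x, y, x, y) == mul(x, x, y, y, s2, t3)
assert mul(x, y, x, y, x, y) == mul(x, x, x, y, y, y, s2, s2, s2, s3, t3, t3)
assert mul(x, y, y, x, y, y) == mul(x, x, y, y, y, y, s2, s2, t3, t3)
assert mul(x, y, y, x, y, y, x, y, y) == \
    mul(x, x, x, y, y, y, y, y, y, s2, s2, s2, s2, s2, s2, s3, s3, t3, t3)
print("all four power relations hold in UT(4, F_3)")
```

Of course such a single model only illustrates, and cannot replace, the collection proof above, which is valid for arbitrary groups satisfying the hypotheses.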
\begin{theorem}
\label{thm:MaxSbgCc1}
\noindent
Let \(G=\langle x,y\rangle\simeq G_a^n(z,w)\) be a finite \(3\)-group of coclass \(\mathrm{cc}(G)=1\)
and order \(\lvert G\rvert=3^n\) with generators \(x,y\)
such that \(y\in\chi_2(G)\) is contained in the two-step centralizer of \(G\),
whereas \(x\in G\setminus\chi_2(G)\),
given by a polycyclic power commutator presentation with parameters
\(a\in\lbrace 0,1\rbrace\), \(w,z\in\lbrace -1,0,1\rbrace\), and index of nilpotency \(n\ge 4\).
Then three of the four maximal subgroups, \(H_i=\langle xy^{i-2},G^\prime\rangle<G\), \(2\le i\le 4\),
are non-abelian \(3\)-groups of coclass \(\mathrm{cc}(H_i)=1\), as listed in Table
\ref{tbl:MaxSbgCc1}
in dependence on the parameters \(n,a,z,w\).
The supplementary Table
\ref{tbl:MaxSbgExtraSpecial}
shows the abelian maximal subgroups of the
two remaining extra special \(3\)-groups of coclass \(\mathrm{cc}(G)=1\)
and order \(\lvert G\rvert=3^3\).
\end{theorem}
\renewcommand{\arraystretch}{1.2}
\begin{table}[ht]
\caption{Non-abelian maximal subgroups \(H_i<G\) of \(3\)-groups \(G\) of coclass \(1\)}
\label{tbl:MaxSbgCc1}
\begin{center}
\begin{tabular}{|c|c|c|c|c||c|c|c|}
\hline
\(G\simeq\) & \(n\) & \(a\) & \(z\) & \(w\) & \(H_2=\langle x,G^\prime\rangle\) & \(H_3=\langle xy,G^\prime\rangle\) & \(H_4=\langle xy^2,G^\prime\rangle\) \\
\hline
\(G_0^n(0,0)\) & \(\ge 4\) & \(0\) & \(0\) & \(0\) & \(\simeq G_0^{n-1}(0,0)\) & \(\simeq G_0^{n-1}(0,0)\) & \(\simeq G_0^{n-1}(0,0)\) \\
\(G_0^n(0,1)\) & \(\ge 4\) & \(0\) & \(0\) & \(1\) & \(\simeq G_0^{n-1}(0,1)\) & \(\simeq G_0^{n-1}(0,1)\) & \(\simeq G_0^{n-1}(0,1)\) \\
\(G_0^n(1,0)\) & \(\ge 4\) & \(0\) & \(1\) & \(0\) & \(\simeq G_0^{n-1}(0,0)\) & \(\simeq G_0^{n-1}(0,1)\) & \(\simeq G_0^{n-1}(0,1)\) \\
\(G_0^n(-1,0)\) & \(\ge 4\) & \(0\) & \(-1\) & \(0\) & \(\simeq G_0^{n-1}(0,0)\) & \(\simeq G_0^{n-1}(0,1)\) & \(\simeq G_0^{n-1}(0,1)\) \\
\hline
\(G_1^n(0,-1)\) & \(\ge 5\) & \(1\) & \(0\) & \(-1\) & \(\simeq G_0^{n-1}(0,1)\) & \(\simeq G_0^{n-1}(0,0)\) & \(\simeq G_0^{n-1}(0,0)\) \\
\(G_1^n(0,0)\) & \(\ge 5\) & \(1\) & \(0\) & \(0\) & \(\simeq G_0^{n-1}(0,0)\) & \(\simeq G_0^{n-1}(0,1)\) & \(\simeq G_0^{n-1}(0,1)\) \\
\(G_1^n(0,1)\) & \(\ge 5\) & \(1\) & \(0\) & \(1\) & \(\simeq G_0^{n-1}(0,1)\) & \(\simeq G_0^{n-1}(0,1)\) & \(\simeq G_0^{n-1}(0,1)\) \\
\hline
\end{tabular}
\end{center}
\end{table}
\renewcommand{\arraystretch}{1.2}
\begin{table}[ht]
\caption{Abelian maximal subgroups \(H_i<G\) of extra special \(3\)-groups \(G\)}
\label{tbl:MaxSbgExtraSpecial}
\begin{center}
\begin{tabular}{|c|c|c|c|c||c|c|c|c|}
\hline
\(G\simeq\) & \(n\) & \(a\) & \(z\) & \(w\) & \(H_1=\langle y,G^\prime\rangle\) & \(H_2=\langle x,G^\prime\rangle\) & \(H_3=\langle xy,G^\prime\rangle\) & \(H_4=\langle xy^2,G^\prime\rangle\) \\
\hline
\(G_0^3(0,0)\) & \(3\) & \(0\) & \(0\) & \(0\) & \(\simeq C_3\times C_3\) & \(\simeq C_3\times C_3\) & \(\simeq C_3\times C_3\) & \(\simeq C_3\times C_3\) \\
\(G_0^3(0,1)\) & \(3\) & \(0\) & \(0\) & \(1\) & \(\simeq C_3\times C_3\) & \(\simeq C_9\) & \(\simeq C_9\) & \(\simeq C_9\) \\
\hline
\end{tabular}
\end{center}
\end{table}
\begin{proof}
For an index of nilpotency \(n\ge 4\),
the first maximal subgroup \(H_1=\langle y,G^\prime\rangle\) of \(G\)
coincides with the two-step centralizer \(\chi_2(G)\) of \(G\),
which is a \textit{nearly homocyclic abelian} \(3\)-group \(A(3,n-1)\) of order \(3^{n-1}\), when \(a=0\).
For \(a=1\), we have \(H_1/H_1^\prime\simeq A(3,n-1)\).
We transform all relations of the group \(G\simeq G_a^n(z,w)\)
into relations of the remaining three maximal subgroups \(H\simeq G_\alpha^{n-1}(\zeta,\omega)\) of \(G\).
The \textit{polycyclic commutator relations}
\(s_2=\lbrack y,x\rbrack\), \(s_i=\lbrack s_{i-1},x\rbrack\) for \(3\le i\le n\),
and the \textit{nilpotency relation} \(s_n=1\) for the group \(G=\langle x,y\rangle\),
with lower central series \(\gamma_i{G}=\langle s_i,\gamma_{i+1}{G}\rangle\) for \(i\ge 2\),
can be used immediately for the subgroup \(H_2=\langle x,G^\prime\rangle=\langle x,s_2\rangle\)
with lower central series \(\gamma_i{H_2}=\langle t_i,\gamma_{i+1}{H_2}\rangle\),
where \(t_i:=s_{i+1}\) for \(i\ge 2\), and \(t_{n-1}=1\).
For the lower central series of \(H_3=\langle xy,G^\prime\rangle\) and \(H_4=\langle xy^2,G^\prime\rangle\),
we must employ the \textit{main commutator relation} \(\lbrack y,s_2\rbrack=s_{n-1}^a\),
and \(\lbrack y,s_i\rbrack=1\) for \(i\ge 3\).
According to the \textit{right product rule} for commutators, we have
\(\lbrack s_{i-1},xy\rbrack=\lbrack s_{i-1},y\rbrack\cdot\lbrack s_{i-1},x\rbrack^y=1\cdot s_i^y=s_i\lbrack s_i,y\rbrack=s_i\cdot 1=s_i\),
for \(i\ge 4\), but
\(\lbrack s_2,xy\rbrack=\lbrack s_2,y\rbrack\cdot\lbrack s_2,x\rbrack^y=s_{n-1}^{-a}s_3^y=s_{n-1}^{-a}s_3\lbrack s_3,y\rbrack=s_{n-1}^{-a}s_3\),
and in a similar fashion
\(\lbrack s_{i-1},xy^2\rbrack=\lbrack s_{i-1},y\rbrack\cdot\lbrack s_{i-1},xy\rbrack^y=1\cdot s_i^y=s_i\lbrack s_i,y\rbrack=s_i\cdot 1=s_i\),
for \(i\ge 4\), but again exceptionally
\(\lbrack s_2,xy^2\rbrack=\lbrack s_2,y\rbrack\cdot\lbrack s_2,xy\rbrack^y=s_{n-1}^{-a}y^{-1}s_{n-1}^{-a}s_3y=s_{n-1}^{-2a}s_3=s_{n-1}^as_3\).
For \(a=1\), the \textit{left product rule} for commutators shows
\(\lbrack s_{n-1}^{\mp 1}s_3,xy^{\pm 1}\rbrack=\lbrack s_{n-1}^{\mp 1},xy^{\pm 1}\rbrack^{s_3}\cdot\lbrack s_3,xy^{\pm 1}\rbrack=s_4\),
that is, the slight anomaly for the main commutator disappears in the next step.
Thus, the lower central series is \(\gamma_i{H_j}=\langle t_i,\gamma_{i+1}{H_j}\rangle\) for \(i\ge 2\), \(3\le j\le 4\),
where generally \(t_i:=s_{i+1}\) for \(i\ge 3\), and \(t_2:=s_3\) for \(a=0\), \(t_2:=s_{n-1}^{2-j}s_3\) for \(a=1\).
In particular, \(H_3=\langle xy,s_2\rangle\) and \(H_4=\langle xy^2,s_2\rangle\).
The main commutator relation for all three subgroups \(H_2,H_3,H_4\) of any group \(G\simeq G_a^n(z,w)\) with \(n\ge 4\) is
\(\lbrack s_2,t_2\rbrack=1=t_{n-2}^\alpha\), that is \(\alpha=0\), generally,
and it remains to determine \(\zeta,\omega\).
For this purpose, we come to the \textit{power relations} of \(G\),
\(x^3=s_{n-1}^w\), \(y^3s_2^3s_3=s_{n-1}^z\), and \(s_i^3s_{i+1}^3s_{i+2}=1\) for \(i\ge 2\),
supplemented by
\((xy)^3=x^3y^3s_2^3s_3s_{n-1}^{-2a}=s_{n-1}^ws_{n-1}^zs_{n-1}^{-2a}\) and \((xy^2)^3=x^3(y^3s_2^3s_3)^2s_{n-1}^{-2a}=s_{n-1}^ws_{n-1}^{2z}s_{n-1}^{-2a}\),
and we use these relations to determine \(\zeta,\omega\) in dependence on \(w,z,a\).
Generally, we have \(s_2^3t_2^3t_3=s_2^3s_3^3s_4=1\) for \(a=0\),
\(s_2^3t_2^3t_3=s_2^3s_{n-1}^{3(2-j)}s_3^3s_4=s_2^3s_3^3s_4=1\) for \(a=1\),
and thus uniformly \(\zeta=0\).
For \(G_0^n(0,0)\), we uniformly have \(x^3=(xy)^3=(xy^2)^3=1\), and thus \(\omega=0\) for all three subgroups.
For \(G_0^n(0,1)\), we uniformly have \(x^3=(xy)^3=(xy^2)^3=s_{n-1}\), and thus \(\omega=1\) for all three subgroups.
For \(G_0^n(\pm 1,0)\), we have \(x^3=1\), but \((xy)^3=s_{n-1}^{\pm 1}\), \((xy^2)^3=s_{n-1}^{\pm 2}=s_{n-1}^{\mp 1}\),
and thus \(\omega=0\) for \(H_2\) but \(\omega=1\) for \(H_3,H_4\), since \(G_0^n(0,-1)\simeq G_0^n(0,1)\).
For \(G_1^n(0,-1)\), we have \(x^3=s_{n-1}^{-1}\), but \((xy)^3=(xy^2)^3=s_{n-1}^{-3}=1\),
and thus \(\omega=1\) for \(H_2\) but \(\omega=0\) for \(H_3,H_4\).
For \(G_1^n(0,0)\), we have \(x^3=1\), but \((xy)^3=(xy^2)^3=s_{n-1}^{-2}=s_{n-1}\),
and thus \(\omega=0\) for \(H_2\) but \(\omega=1\) for \(H_3,H_4\).
For \(G_1^n(0,1)\), we have \(x^3=s_{n-1}\), \((xy)^3=(xy^2)^3=s_{n-1}^{-1}\),
and thus \(\omega=1\) for all three subgroups, again observing that \(G_0^n(0,-1)\simeq G_0^n(0,1)\).
The only \(3\)-groups \(G\) of coclass \(\mathrm{cc}(G)=1\) and order \(\lvert G\rvert=3^3\)
are the two extra special groups \(G_0^3(0,0)\) and \(G_0^3(0,1)\).
Since \(t_2=s_3=1\), all their four maximal subgroups,
\(H_1=\langle y,s_2\rangle\), \(H_2=\langle x,s_2\rangle\), \(H_3=\langle xy,s_2\rangle\), \(H_4=\langle xy^2,s_2\rangle\),
are abelian.
For \(w=z=0\), \(s_2\) is independent of the other generators, and \(H_i\simeq C_3\times C_3\) for \(1\le i\le 4\).
However, for \(w=1\), \(z=0\), we have \(x^3=(xy)^3=(xy^2)^3=s_2\), \(s_2^3=1\), and thus \(H_2\simeq H_3\simeq H_4\simeq C_9\),
whereas \(H_1\simeq C_3\times C_3\).
\end{proof}
\section{A general theorem for arbitrary base fields}
\label{s:General}
\noindent
Suppose that \(p\) is a prime,
\(F\) is an algebraic number field
with non-trivial \(p\)-class group \(\mathrm{Cl}_p{F}>1\),
and \(E\) is one of the unramified abelian \(p\)-extensions of \(F\).
We show that, even in this general situation,
a finite \(p\)-class tower of \(F\)
exerts a very severe restriction on the \(p\)-class tower of \(E\).
\begin{theorem}
\label{thm:General}
Assume that \(F\) possesses a \(p\)-class tower \(F_p^{(\infty)}=F_p^{(n)}\)
of exact length \(\ell_p{F}=n\) for some integer \(n\ge 1\).
Then the Galois group \(\mathrm{Gal}(E_p^{(\infty)}/E)\) of the \(p\)-class tower of \(E\)
is a subgroup of index \(\lbrack E:F\rbrack\)
of the \(p\)-class tower group \(\mathrm{Gal}(F_p^{(\infty)}/F)\) of \(F\)
and the length of the \(p\)-class tower of \(E\) is bounded by \(\ell_p{E}\le n\).
\end{theorem}
\begin{proof}
According to the assumptions,
there exists a tower of field extensions,
\[F<E\le F_p^{(1)}\le E_p^{(1)}\le F_p^{(2)}\le E_p^{(2)}\le\ldots\le F_p^{(n)}\le E_p^{(n)}\le F_p^{(n+1)},\]
where \(\ell_p{F}=n\) enforces the coincidence \(F_p^{(n)}=E_p^{(n)}=F_p^{(n+1)}\) of the trailing three fields.
Since \(\mathrm{Gal}(F_p^{(n)}/F)/\mathrm{Gal}(F_p^{(n)}/E)\simeq\mathrm{Gal}(E/F)\),
the group index of \(\mathrm{Gal}(E_p^{(n)}/E)=\mathrm{Gal}(F_p^{(n)}/E)\) in \(\mathrm{Gal}(F_p^{(n)}/F)\)
is equal to the field degree \(\lbrack E:F\rbrack\)
and \(\mathrm{Gal}(E_p^{(\infty)}/E)=\mathrm{Gal}(E_p^{(n)}/E)\)
is a subgroup of index \(\lbrack E:F\rbrack\) of \(\mathrm{Gal}(F_p^{(n)}/F)=\mathrm{Gal}(F_p^{(\infty)}/F)\).
The equality \(E_p^{(n)}=E_p^{(n+1)}\) implies the bound \(\ell_p{E}\le n\).
\end{proof}
\noindent
We shall apply Theorem
\ref{thm:General}
to the situation where \(p=3\), \(n=2\),
and \(E\) is an unramified cyclic cubic extension of \(F\),
whence \(\mathrm{Gal}(E_3^{(\infty)}/E)\) is a maximal subgroup of \(\mathrm{Gal}(F_3^{(\infty)}/F)\).
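By way of illustration, if \(\mathrm{Gal}(F_3^{(\infty)}/F)\simeq\langle 243,5\rangle\) is of order \(3^5\), as in Proposition
\ref{prp:KernelTypeD10},
then
\[\bigl\lvert\mathrm{Gal}(E_3^{(\infty)}/E)\bigr\rvert=\frac{\bigl\lvert\mathrm{Gal}(F_3^{(\infty)}/F)\bigr\rvert}{\lbrack E:F\rbrack}=\frac{3^5}{3}=3^4,\]
so \(\mathrm{Gal}(E_3^{(\infty)}/E)\) must be one of the four maximal subgroups of \(\langle 243,5\rangle\), each of order \(3^4\).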
\subsection{Application to quadratic base fields}
\label{ss:Quadratic}
\begin{proposition}
\label{prp:KernelTypeD10}
\noindent
Let \(G\) be a finite \(3\)-group
with elementary bicyclic abelianization \(G/G^\prime\simeq C_3\times C_3\).
Then the following conditions are equivalent:
\begin{enumerate}
\item
The transfer kernel type of \(G\) is \(\mathrm{D}.10\), \(\varkappa(G)\sim (2241)\).
\item
The abelian quotient invariants of the four maximal subgroups \(H_1,\ldots,H_4\) of \(G\)
are \(\tau(G)\sim(21,21,1^3,21)\).
\item
The isomorphism types of the four maximal subgroups of \(G\) are
\(H_1\simeq H_2\simeq H_4\simeq\langle 3^4,3\rangle\) and \(H_3\simeq\langle 3^4,13\rangle\).
\item
The group \(G\) is isomorphic to the Schur \(\sigma\)-group \(\langle 3^5,5\rangle\) with relation rank \(d_2=2\).
\end{enumerate}
\end{proposition}
\begin{proof}
We put \(G:=\langle 243,5\rangle\) and use the presentation
\cite{MAGMA}
\[G=\langle x,y,s_2,s_3,t_3\mid s_2=\lbrack y,x\rbrack, s_3=\lbrack s_2,x\rbrack, t_3=\lbrack s_2,y\rbrack, x^3=s_3, y^3=s_3\rangle.\]
Then we obtain the maximal subgroups \\
\(H_1=\langle y,G^\prime\rangle=\langle y,s_2,s_3\rangle\), since \(t_3=\lbrack s_2,y\rbrack\), \\
\(H_2=\langle x,G^\prime\rangle=\langle x,s_2,t_3\rangle\), since \(s_3=\lbrack s_2,x\rbrack\), \\
\(H_3=\langle xy,G^\prime\rangle=\langle xy,s_2,s_3\rangle\), since \(\lbrack s_2,xy\rbrack=s_3t_3\), \\
\(H_4=\langle xy^2,G^\prime\rangle=\langle xy^2,s_2,s_3\rangle\), since \(\lbrack s_2,xy^2\rbrack=s_3t_3^2\). \\
Using Lemma
\ref{lem:PowerRelations},
and comparing to the abstract presentations
\cite{MAGMA} \\
\(\langle 81,3\rangle=\langle \xi,\upsilon,\sigma_2,\tau\mid\sigma_2=\lbrack \upsilon,\xi\rbrack, \tau=\xi^3\rangle\) and \\
\(\langle 81,13\rangle=\langle \xi,\upsilon,\zeta,\sigma_2\mid\sigma_2=\lbrack \upsilon,\xi\rbrack,\xi^3=\sigma_2,\upsilon^3=\zeta^3=1\rangle\), \\
we conclude \\
\(H_1=\langle y,s_2,s_3\rangle=\langle y,s_2\rangle\simeq\langle 81,3\rangle\), since \(y^3=s_3\ne\lbrack s_2,y\rbrack=t_3\), \\
\(H_2=\langle x,s_2,t_3\rangle\simeq\langle 81,13\rangle\), since \(x^3=s_3=\lbrack s_2,x\rbrack\), \\
\(H_3=\langle xy,s_2,s_3\rangle=\langle xy,s_2\rangle\simeq\langle 81,3\rangle\), since \((xy)^3=t_3^2\ne\lbrack s_2,xy\rbrack=s_3t_3\), \\
\(H_4=\langle xy^2,s_2,s_3\rangle=\langle xy^2,s_2\rangle\simeq\langle 81,3\rangle\), since \((xy^2)^3=s_3^2t_3^2\ne\lbrack s_2,xy^2\rbrack=s_3t_3^2\).
\end{proof}
\begin{theorem}
\label{thm:KernelTypeD10}
\noindent
Let \(F=\mathbb{Q}(\sqrt{d})\) be a quadratic field
with elementary bicyclic \(3\)-class group \(\mathrm{Cl}_3{F}\simeq C_3\times C_3\).
Then the following conditions are equivalent:
\begin{enumerate}
\item
The transfer kernel type of \(F\) is \(\mathrm{D}.10\), \(\varkappa(F)\sim (2241)\).
\item
The abelian type invariants of the \(3\)-class groups \(\mathrm{Cl}_3{E_i}\) of the four unramified cyclic cubic extensions \(E_i/F\)
are \(\tau(F)\sim(21,21,1^3,21)\).
\item
The second \(3\)-class group \(\mathrm{G}_3^2{F}\) of \(F\) has the maximal subgroups
\(H_1\simeq H_2\simeq H_4\simeq\langle 3^4,3\rangle\) and \(H_3\simeq\langle 3^4,13\rangle\).
\item
The \(3\)-class tower group \(\mathrm{G}_3^\infty{F}\) of \(F\) is the Schur \(\sigma\)-group \(\langle 3^5,5\rangle\) with relation rank \(d_2=2\).
\end{enumerate}
\end{theorem}
\begin{proof}
The claims follow from Proposition
\ref{prp:KernelTypeD10}
by applying the Successive Approximation Theorem
\ref{thm:SuccessiveApproximation}
of first order.
\end{proof}
\begin{corollary}
\label{cor:KernelTypeD10}
\noindent
Let \(F\) be a quadratic field which satisfies one of the equivalent conditions in Theorem
\ref{thm:KernelTypeD10}.
Then the length of the \(3\)-class tower of \(F\) is \(\ell_3{F}=2\).
The four unramified cyclic cubic extensions \(E_i/F\)
are absolutely dihedral of degree \(6\),
with torsionfree Dirichlet unit rank \(r\ge 2\),
and possess \(3\)-class towers of length \(\ell_3{E_i}=2\).
More precisely,
\(\mathrm{Cl}_3{E_3}\simeq C_3\times C_3\times C_3\)
and \(\mathrm{G}_3^\infty{E_3}\simeq\langle 3^4,13\rangle\) with relation rank \(d_2=5\),
but \(\mathrm{Cl}_3{E_i}\simeq C_9\times C_3\)
and \(\mathrm{G}_3^\infty{E_i}\simeq\langle 3^4,3\rangle\) with relation rank \(d_2=4\)
for \(i\in\lbrace 1,2,4\rbrace\).
\end{corollary}
\begin{proof}
This is a consequence of Theorems
\ref{thm:General}
and
\ref{thm:KernelTypeD10},
in accordance with the Shafarevich theorem.
\end{proof}
\begin{proposition}
\label{prp:KernelTypeD5}
\noindent
Let \(G\) be a finite \(3\)-group
with elementary bicyclic abelianization \(G/G^\prime\simeq C_3\times C_3\).
Then the following conditions are equivalent:
\begin{enumerate}
\item
The transfer kernel type of \(G\) is \(\mathrm{D}.5\), \(\varkappa(G)\sim (4224)\).
\item
The abelian quotient invariants of the four maximal subgroups \(H_1,\ldots,H_4\) of \(G\)
are \(\tau(G)\sim(1^3,21,1^3,21)\).
\item
The isomorphism types of the four maximal subgroups of \(G\) are
\(H_1\simeq H_3\simeq\langle 3^4,13\rangle\) and \(H_2\simeq H_4\simeq\langle 3^4,3\rangle\).
\item
The group \(G\) is isomorphic to the Schur \(\sigma\)-group \(\langle 3^5,7\rangle\) with relation rank \(d_2=2\).
\end{enumerate}
\end{proposition}
\begin{proof}
We put \(G:=\langle 243,7\rangle\) and use the presentation
\cite{MAGMA}
\[G=\langle x,y,s_2,s_3,t_3\mid s_2=\lbrack y,x\rbrack, s_3=\lbrack s_2,x\rbrack, t_3=\lbrack s_2,y\rbrack, x^3=s_3, y^3=s_3^2\rangle.\]
Similarly as in Proposition
\ref{prp:KernelTypeD10},
we obtain the maximal subgroups \\
\(H_1=\langle y,G^\prime\rangle=\langle y,s_2,s_3\rangle\),
\(H_2=\langle x,G^\prime\rangle=\langle x,s_2,t_3\rangle\), \\
\(H_3=\langle xy,G^\prime\rangle=\langle xy,s_2,s_3\rangle\), and
\(H_4=\langle xy^2,G^\prime\rangle=\langle xy^2,s_2,s_3\rangle\). \\
Using Lemma
\ref{lem:PowerRelations},
and comparing to the abstract presentations \\
\(\langle 81,3\rangle=\langle \xi,\upsilon,\sigma_2,\tau\mid\sigma_2=\lbrack \upsilon,\xi\rbrack, \tau=\xi^3\rangle\) and \\
\(\langle 81,13\rangle=\langle \xi,\upsilon,\zeta,\sigma_2\mid\sigma_2=\lbrack \upsilon,\xi\rbrack,\xi^3=\sigma_2,\upsilon^3=\zeta^3=1\rangle\), \\
we conclude \\
\(H_1=\langle y,s_2,s_3\rangle=\langle y,s_2\rangle\simeq\langle 81,3\rangle\), since \(y^3=s_3^2\ne\lbrack s_2,y\rbrack=t_3\), \\
\(H_2=\langle x,s_2,t_3\rangle\simeq\langle 81,13\rangle\), since \(x^3=s_3=\lbrack s_2,x\rbrack\), \\
\(H_3=\langle xy,s_2,s_3\rangle=\langle xy,s_2\rangle\simeq\langle 81,3\rangle\), since \((xy)^3=s_3t_3^2\ne\lbrack s_2,xy\rbrack=s_3t_3\), \\
\(H_4=\langle xy^2,s_2,s_3\rangle\simeq\langle 81,13\rangle\), since \((xy^2)^3=s_3t_3^2=\lbrack s_2,xy^2\rbrack\).
\end{proof}
\begin{theorem}
\label{thm:KernelTypeD5}
\noindent
Let \(F=\mathbb{Q}(\sqrt{d})\) be a quadratic field
with elementary bicyclic \(3\)-class group \(\mathrm{Cl}_3{F}\simeq C_3\times C_3\).
Then the following conditions are equivalent:
\begin{enumerate}
\item
The transfer kernel type of \(F\) is \(\mathrm{D}.5\), \(\varkappa(F)\sim (4224)\).
\item
The abelian type invariants of the \(3\)-class groups \(\mathrm{Cl}_3{E_i}\) of the four unramified cyclic cubic extensions \(E_i/F\)
are \(\tau(F)\sim(1^3,21,1^3,21)\).
\item
The second \(3\)-class group \(\mathrm{G}_3^2{F}\) of \(F\) has the maximal subgroups
\(H_1\simeq H_3\simeq\langle 3^4,13\rangle\) and \(H_2\simeq H_4\simeq\langle 3^4,3\rangle\).
\item
The \(3\)-class tower group \(\mathrm{G}_3^\infty{F}\) of \(F\) is the Schur \(\sigma\)-group \(\langle 3^5,7\rangle\) with relation rank \(d_2=2\).
\end{enumerate}
\end{theorem}
\begin{proof}
The claims follow from Proposition
\ref{prp:KernelTypeD5}
by applying the Successive Approximation Theorem
\ref{thm:SuccessiveApproximation}
of first order.
\end{proof}
\begin{corollary}
\label{cor:KernelTypeD5}
\noindent
Let \(F\) be a quadratic field which satisfies one of the equivalent conditions in Theorem
\ref{thm:KernelTypeD5}.
Then the length of the \(3\)-class tower of \(F\) is \(\ell_3{F}=2\).
The four unramified cyclic cubic extensions \(E_i/F\)
are absolutely dihedral of degree \(6\),
with torsionfree Dirichlet unit rank \(r\ge 2\),
and possess \(3\)-class towers of length \(\ell_3{E_i}=2\).
More precisely,
\(\mathrm{Cl}_3{E_i}\simeq C_3\times C_3\times C_3\)
and \(\mathrm{G}_3^\infty{E_i}\simeq\langle 3^4,13\rangle\) with relation rank \(d_2=5\)
for \(i\in\lbrace 1,3\rbrace\),
but \(\mathrm{Cl}_3{E_i}\simeq C_9\times C_3\)
and \(\mathrm{G}_3^\infty{E_i}\simeq\langle 3^4,3\rangle\) with relation rank \(d_2=4\)
for \(i\in\lbrace 2,4\rbrace\).
\end{corollary}
\begin{proof}
This is a consequence of Theorems
\ref{thm:General}
and
\ref{thm:KernelTypeD5},
in accordance with the Shafarevich theorem.
\end{proof}
\subsection{Application to dihedral fields}
\label{ss:Dihedral}
\noindent
We recall that a dihedral field \(E\) of degree \(6\)
is an absolute Galois extension \(E/\mathbb{Q}\)
with group \(\mathrm{Gal}(E/\mathbb{Q})=\langle\sigma,\tau\mid\sigma^3=\tau^2=1,\sigma\tau=\tau\sigma^{-1}\rangle\).
It is a cyclic cubic relative extension \(E/F\) of its unique quadratic subfield \(F=E^\sigma\),
and it contains three isomorphic, conjugate non-Galois cubic subfields \(L=E^\tau\), \(L^\sigma\), \(L^{\sigma^2}\).
The conductor \(c\) of \(E/F\) is a nearly squarefree positive integer with special prime factors,
and the discriminants satisfy the relations \(d_E=c^4d_F^3\) and \(d_L=c^2d_F\).
Here, we shall always be concerned with unramified extensions, characterized by the conductor \(c=1\),
and thus \(d_E=d_F^3\) is a perfect cube and \(d_L=d_F\).
\subsubsection{Totally complex dihedral fields}
\label{sss:TotallyComplexDihedral}
\noindent
The computational information on \(3\)-tower groups \(G:=\mathrm{G}_3^\infty{F}\)
of imaginary quadratic fields \(F\) in Table
\ref{tbl:ImaginaryTwoStage}
admits the purely theoretical deduction
of impressive statistics for \(3\)-tower groups \(S:=\mathrm{G}_3^\infty{E}\)
of totally complex dihedral fields \(E\) in Table
\ref{tbl:TotallyComplexDihedral}
by means of the Corollaries
\ref{cor:KernelTypeD10}
and
\ref{cor:KernelTypeD5}.
We use the crucial new insight that the groups \(S\triangleleft G\) are maximal subgroups of \(G\),
because the extensions \(E/F\) are unramified cyclic of degree \(3\).
\renewcommand{\arraystretch}{1.2}
\begin{table}[ht]
\caption{Frequencies of dihedral \(3\)-class tower groups \(S\) for \(-10^{24}<d_E<0\)}
\label{tbl:TotallyComplexDihedral}
\begin{center}
\begin{tabular}{|c|c|r||c|c|r||r|}
\hline
\(G\simeq\) & \(\tau^{(1)}{G}\) & abs. fr. & \(S\simeq\) & \(\tau^{(1)}{S}\) & abs. fr. & \(\lvert d_E\rvert_{\text{min}}\) \\
\hline
\(\langle 243,5\rangle\) & \(1^2\) & \(83\,353\) & \(\langle 81,3\rangle\) & \(21\) & \(250\,059\) & \(4\,027^3\) \\
\(\langle 243,5\rangle\) & \(1^2\) & \(83\,353\) & \(\langle 81,13\rangle\) & \(1^3\) & \(83\,353\) & \(4\,027^3\) \\
\(\langle 243,7\rangle\) & \(1^2\) & \(41\,398\) & \(\langle 81,3\rangle\) & \(21\) & \(82\,796\) & \(12\,131^3\) \\
\(\langle 243,7\rangle\) & \(1^2\) & \(41\,398\) & \(\langle 81,13\rangle\) & \(1^3\) & \(82\,796\) & \(12\,131^3\) \\
\hline
\end{tabular}
\end{center}
\end{table}
\subsubsection{Totally real dihedral fields}
\label{sss:TotallyRealDihedral}
\noindent
The computational information on \(3\)-tower groups \(G:=\mathrm{G}_3^\infty{F}\)
of real quadratic fields \(F\) in Table
\ref{tbl:RealTwoStage9}
admits the purely theoretical deduction
of impressive statistics for \(3\)-tower groups \(S:=\mathrm{G}_3^\infty{E}\)
of totally real dihedral fields \(E\) in Table
\ref{tbl:TotallyRealDihedral}
by means of Theorem
\ref{thm:MaxSbgCc1}
and Theorem
\ref{thm:General}.
Again, we use the innovative result that the groups \(S\triangleleft G\) are maximal subgroups of \(G\),
since the extensions \(E/F\) are unramified cyclic cubic.
\renewcommand{\arraystretch}{1.2}
\begin{table}[ht]
\caption{Frequencies of dihedral \(3\)-class tower groups \(S\) for \(0<d_E<10^{27}\)}
\label{tbl:TotallyRealDihedral}
\begin{center}
\begin{tabular}{|c|c|r||c|c|r||r|}
\hline
\(G\simeq\) & \(\tau^{(1)}{G}\) & abs. fr. & \(S\simeq\) & \(\tau^{(1)}{S}\) & abs. fr. & \((d_E)_{\text{min}}\) \\
\hline
\(\langle 81,7\rangle\) & \(1^2\) & \(122\,955\) & \(\langle 27,3\rangle\) & \(1^2\) & \(122\,955\) & \(142\,097^3\) \\
\(\langle 81,7\rangle\) & \(1^2\) & \(122\,955\) & \(\langle 27,4\rangle\) & \(1^2\) & \(245\,910\) & \(142\,097^3\) \\
\(\langle 81,7\rangle\) & \(1^2\) & \(122\,955\) & \(\langle 27,5\rangle\) & \(1^3\) & \(122\,955\) & \(142\,097^3\) \\
\hline
\end{tabular}
\end{center}
\end{table}
\noindent
The first row of Table
\ref{tbl:TotallyRealDihedral}
reveals extensive realizations of the extraspecial group \(S=\langle 27,3\rangle\)
as \(3\)-tower group of dihedral fields.
This is the first time that \(S=\langle 27,3\rangle\) occurs as a \(3\)-tower group.
It is forbidden for quadratic fields,
and, up to now, it has not occurred for cyclic cubic fields or bicyclic biquadratic fields.
\begin{theorem}
\label{thm:NewRealization}
\textbf{(A new realization as \(3\)-tower group.)}
The extraspecial \(3\)-group \(S=\langle 27,3\rangle\) of coclass \(1\) and exponent \(3\)
occurs as \(3\)-class tower group \(\mathrm{G}_3^\infty{E}\) of totally real dihedral fields \(E\) of degree \(6\).
\end{theorem}
\begin{proof}
The group \(S=\langle 27,3\rangle\) possesses the relation rank \(d_2{S}=4\).
According to the Shafarevich Theorem,
it is therefore excluded as \(3\)-tower group \(\mathrm{G}_3^\infty{F}\)
of both imaginary and real quadratic fields \(F\).
However, the combination of Theorem
\ref{thm:MaxSbgCc1}
and Theorem
\ref{thm:General}
proves its occurrence as \(3\)-class tower group \(\mathrm{G}_3^\infty{E}\)
of totally real dihedral fields \(E\) of degree \(6\),
as visualized in Table
\ref{tbl:TotallyRealDihedral}.
\end{proof}
\begin{theorem}
\label{thm:Dihedral}
\textbf{(\(3\)-class tower groups of totally real dihedral fields.)}
Let \(F=\mathbb{Q}(\sqrt{d})\) be a real quadratic field
with \(3\)-class group \(\mathrm{Cl}_3{F}\simeq C_3\times C_3\)
and fundamental discriminant \(d>1\).
Suppose the second order Artin pattern
\(\mathrm{AP}^{(2)}{F}=(\tau^{(2)}(F),\varkappa^{(2)}(F))\)
is given by
the abelian type invariants \(\tau^{(2)}(F)=\lbrack 1^2;(2^2,1^2,1^2,1^2)\rbrack\)
and the transfer kernel type \(\varkappa^{(2)}(F)=\lbrack 1;(0000)\rbrack\).
Let \(E_2,E_3,E_4\) be the three unramified cyclic cubic relative extensions of \(F\)
with \(3\)-class group \(\mathrm{Cl}_3{E_i}\simeq C_3\times C_3\).
Then \(E_i/\mathbb{Q}\) is a totally real dihedral extension of degree \(6\),
for each \(2\le i\le 4\),
and the connection between the component \(\#\varkappa^{(3)}(F)_i=\#\ker(T_{E_i,F_3^{(1)}})\)
of the third order transfer kernel type \(\varkappa^{(3)}(F)\)
and the \(3\)-class tower group \(S_i=\mathrm{G}_3^\infty{E_i}=\mathrm{Gal}((E_i)_3^{(\infty)}/E_i)\) of \(E_i\)
is given in the following way:
\begin{equation}
\label{eqn:Dihedral}
\begin{aligned}
& \#\varkappa^{(3)}(F)_i=3 & \Longleftrightarrow & \quad S_i\simeq\langle 243,27\rangle & \text{ with } \varkappa(S_i)=(1000), \\
& \#\varkappa^{(3)}(F)_i=9 & \Longleftrightarrow & \quad S_i\simeq\langle 243,26\rangle & \text{ with } \varkappa(S_i)=(0000).
\end{aligned}
\end{equation}
\end{theorem}
\begin{proof}
This theorem was expressed as a conjecture in
\cite{Ma17,Ma17b},
and is now an immediate consequence of Theorem
\ref{thm:MaxSbgCc1}.
\end{proof}
\begin{remark}
\label{rmk:Dihedral}
Recall that each unramified cyclic cubic relative extension \(E_i/F\), \(1\le i\le 4\),
gives rise to a dihedral absolute extension \(E_i/\mathbb{Q}\) of degree \(6\),
that is an \(S_3\)-extension
\cite[Prp. 4.1, p. 482]{Ma1}.
For the trailing three fields \(E_i\), \(2\le i\le 4\),
in the stable part of \(\tau^{(2)}(F)=\lbrack 1^2;(2^2,1^2,1^2,1^2)\rbrack\),
i.e. with \(\mathrm{Cl}_3{E_i}\simeq C_3\times C_3\),
we have constructed the unramified cyclic cubic extensions \(\tilde{E}_{i,j}/E_i\), \(1\le j\le 4\),
and determined the Artin pattern \(\mathrm{AP}^{(2)}{E_i}\) of \(E_i\),
in particular, the transfer kernel type of \(E_i\) in the fields \(\tilde{E}_{i,j}\)
of absolute degree \(18\).
The dihedral fields \(E_i\) of degree \(6\) share a common polarization
\(\tilde{E}_{i,1}=F_3^{(1)}\), the Hilbert \(3\)-class field of \(F\),
which is contained in the relative \(3\)-genus field \((E_i/F)^\ast\),
whereas the other extensions \(\tilde{E}_{i,j}\) with \(2\le j\le 4\)
are non-abelian over \(F\), for each \(2\le i\le 4\).
Our computational results underpin Theorem
\ref{thm:Dihedral}
concerning the infinite family of totally real dihedral fields
\(E_i\) for varying real quadratic fields \(F\).
\end{remark}
\section{Acknowledgements}
\label{s:Acknowledgements}
\noindent
The author gratefully acknowledges that his research was supported by the
Austrian Science Fund (FWF): P 26008-N25.
\section{Introduction}
\label{section:introduction}
The homograph attack was first described by E. Gabrilovic et al.~\cite{homograph_first2002} in 2002. To demonstrate the feasibility of the attack, the authors registered a homograph targeting the brand domain \url{microsoft.com} using the Russian letters 'c' (U+0421) and 'o' (U+041E). The homograph contains these two non-ASCII characters and has the ASCII-converted form \url{xn--mirsft-yqfbx.com}.\footnote{International Domain Names (IDNs) contain non-ASCII characters (e.g., Arabic, Chinese, Cyrillic alphabet). Therefore, they are encoded to ASCII strings using the Punycode transcription known as IDNA encoding and appear as ASCII strings starting with ``\url{xn--}". For example, the domain \url{xn--ggle-0qaa.com} is displayed as \url{g}$\tilde{\text{o}}\tilde{\text{o}}$\url{gle.com}.} However, the attack did not attract much attention at that time. It was not until 2017 that it raised broad attention, when the famous brand domain \url{apple.com} (Apple Inc.) was attacked by a homograph that appears under the Punycode form~\cite{homograph_apple2017} \url{xn--pple-43d.com}, which uses the Cyrillic `a' (U+0430) instead of the ASCII `a' (U+0061). Thereafter, many homograph attacks targeting other famous brand domains have been found, such as Adobe Inc.~\cite{adobe}, Lloyds Bank~\cite{Lloydsbank}, Google Analytics~\cite{GoogleAnalytics}, etc. A recent large-scale analysis~\cite{homograph_statistic_dsn2018} of International Domain Names (IDNs) in 2018 shows that, just for the first 1,000 brand domains in the top Alexa ranking, more than 1,516 homograph domains had already been registered. Furthermore, the attack has become ever more aggressive and sophisticated today.
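As a side note, the Punycode/IDNA conversion described in the footnote can be reproduced with Python's standard-library \texttt{idna} codec (IDNA 2003 semantics). The following sketch merely illustrates the round trip for the Apple homograph quoted above:

```python
# Round-trip the Apple homograph through Python's built-in IDNA (2003) codec.
# b"xn--pple-43d.com" is the Punycode/ASCII form quoted in the text.
ascii_form = b"xn--pple-43d.com"
unicode_form = ascii_form.decode("idna")

# The decoded label renders like "apple.com", but its first character is
# the Cyrillic letter a (U+0430), not the ASCII a (U+0061).
assert ord(unicode_form[0]) == 0x0430

# Encoding the look-alike Unicode form reproduces the ASCII form.
assert unicode_form.encode("idna") == ascii_form

print(unicode_form)
```

This also shows why the two strings are indistinguishable to the eye while being entirely different domain names at the DNS level.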
\subsubsection{Motivation}
Many defensive approaches have been proposed, such as applying machine learning to various features (e.g., visual similarity metrics, HTML content, and optical character recognition (OCR))~\cite{homograph_visual_screen_shot, homograph_ORC, typosquatting, homograph_IFIP}, using empirical analysis based on registered databases (e.g., Whois, DNS, blacklists, confusable Unicode)~\cite{homograph_statistic_dsn2018, confused_unicode}, or blocking International Domain Names (IDNs) (e.g., disabling the automatic IDN conversion in browsers)~\cite{blocking_1, blocking_2, blocking_3, blocking_4}. We ask the question: \emph{how can an approach be designed that focuses on pro-active defense, controlling the attack rather than merely responding to it after it has happened; and can such an approach be based on ergonomics rather than machine engineering?} In this paper, we therefore aim to propose a system that analyzes how human factors affect the ability to identify homograph domains. This, in turn, allows security training courses against the attack to be aimed at the appropriate participants.
\subsubsection{Contribution}
To the best of our knowledge, our work is the first to devise a system that predicts whether human demographics, brand familiarity, and security backgrounds influence the ability of homograph recognition. To do so, we designed a survey and applied it to 2067 participants who are Internet users in Japan. We subsequently built a regression model to study which factors affect this ability. As a result, we find that for different levels of visual similarity, the participants exhibit different abilities: 13.95\% of participants can recognize non-homographs, and 16.60\% of participants can recognize homographs whose visual similarity with the target brand domains is under 99.9\%; but when the similarity increases to 99.9\%, the share of participants who can recognize homographs drops sharply to only 0.19\%; and for homographs with 100\% visual similarity, there is no way for the participants to recognize them. We also find that while female participants tend to be able to recognize homographs, male participants tend to be able to recognize non-homographs. The results also show that security knowledge is a significant factor for both homographs and non-homographs. We hypothesized that people who have strong security knowledge can recognize both homographs and non-homographs; but surprisingly, this holds only for homographs, not for non-homographs. Another interesting result is that working or being educated in computer science or computer engineering does not appear as a factor affecting the ability of homograph recognition. However, right after the homograph attack is explained to them, people who work or are educated in computer science or computer engineering are the ones who grasp the situation most quickly (i.e., their background goes from not being an affecting factor to becoming an affecting factor the most quickly).
We believe that this opens avenues to help users reduce their overconfidence and to improve their knowledge of, and carefulness about, security threats.
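The regression analysis described above can be sketched as follows. Note that everything here is an assumption for illustration: the two binary features, the synthetic data, and the plain logistic model are stand-ins, not the paper's actual survey data or model specification.

```python
import math
import random

# Synthetic stand-in for survey rows: (is_female, has_security_knowledge,
# recognized_homograph).  The features and generating coefficients are
# illustrative assumptions, not the paper's dataset.
random.seed(0)
rows = []
for _ in range(500):
    female = 1.0 if random.random() < 0.5 else 0.0
    security = 1.0 if random.random() < 0.3 else 0.0
    logit = -1.0 + 0.4 * female + 1.5 * security
    p = 1.0 / (1.0 + math.exp(-logit))
    rows.append((female, security, 1 if random.random() < p else 0))

def fit_logistic(rows, lr=0.5, epochs=3000):
    """Full-batch gradient descent on the logistic log-likelihood."""
    w = [0.0, 0.0, 0.0]  # intercept, female, security
    n = len(rows)
    for _ in range(epochs):
        grad = [0.0, 0.0, 0.0]
        for x1, x2, y in rows:
            z = w[0] + w[1] * x1 + w[2] * x2
            err = 1.0 / (1.0 + math.exp(-z)) - y
            grad[0] += err
            grad[1] += err * x1
            grad[2] += err * x2
        for j in range(3):
            w[j] -= lr * grad[j] / n
    return w

w = fit_logistic(rows)
# A positive coefficient means the factor raises the odds of recognition.
print([round(v, 2) for v in w])
```

A fitted coefficient that is significantly different from zero is what identifies a demographic or background variable as an "affecting factor" in the sense used above.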
\subsubsection{Roadmap}
The rest of this paper is organized as follows. The related work is described in Section~\ref{section:related_work}. The procedure for preparing the survey is presented in Section~\ref{section:procedure}. The methodology is given in Section~\ref{section:methodology}. The experiment is analyzed in Section~\ref{section:experiment}. The discussion is mentioned in Section~\ref{section:discussion}. Finally, the conclusion is drawn in Section~\ref{section:conclusion}.
\section{Related Work}
\label{section:related_work}
In this section, we introduce related work about defending homograph approaches and related work about factor analysis of the brand familiarity, and security background in computer security-related issues.
\subsection{Disabling the Automatic IDN Conversion}
In this approach, the feature of automatic IDN conversion is disabled in the web browser. Instead of showing the converted form of the domain, such as \url{g}$\tilde{\text{o}}\tilde{\text{o}}$\url{gle.com}, the browser displays only the original IDN form, such as \url{xn--ggle-0qaa.com}, in the address bar. In reality, several popular web browsers have applied this approach, including Chrome and Firefox~\cite{blocking_1}, Safari~\cite{blocking_2}, Internet Explorer~\cite{blocking_3}, and Opera~\cite{blocking_4}. However, there is a big trade-off when browsers stop supporting the automatic IDN conversion, because a large number of Internet users use non-English languages with non-Latin alphabets through the more than 7.5 million IDNs registered all over the world (as of December 2017)~\cite{IDN_report2017}. Furthermore, the homograph attack exploits not only look-alike Punycode characters in IDNs, but also look-alike Latin characters in non-IDNs. For instance, the homograph \url{bl0gsp0t.com} targeted the brand domain \url{blogspot.com} by replacing the ‘o’ with the ‘0’; and the homograph \url{wlklpedia.org} targeted the brand domain \url{wikipedia.com} by replacing the ‘i’ with the ‘l’. Also, if the homographs can deceive users before ever appearing in the browser's address bar (e.g., when the homographs are delivered in an email or a document as hyperlinks), disabling the IDN conversion does nothing to prevent users from accessing them.
\subsection{Detecting Homographs}
Several methods have been proposed to detect homographs. K. Tian et al.~\cite{homograph_visual_screen_shot} scanned five types of squatting domains over DNS records and identified domains that are likely impersonating popular brands. They then built a machine learning classifier to detect homographs using page behaviors, visual analysis, and optical character recognition (OCR). L. Baojun et al.~\cite{homograph_statistic_dsn2018} performed a large-scale analysis of IDNs by correlating data from auxiliary sources such as Whois, passive DNS, and URL blacklists. They found that 1.4 million IDNs were actively registered, of which 6,000 IDNs were flagged as homographs by URL blacklists. They also identified 1,516 IDNs showing high visual similarity to reputable brand domains. S. Yuta et al.~\cite{homograph_ORC} applied machine learning to optical character recognition (OCR) features of 1.92 million actually registered IDNs and over 10,000 malicious IDNs. A. Pieter et al.~\cite{typosquatting} collected data about typosquatting homographs of the 500 most popular websites for seven months. They revealed that 95\% of the popular domains they investigated are actively targeted by typosquatters, and that only a few brand owners protect themselves against this practice by proactively registering their own typosquatting domains. The study also revealed that a large portion of typosquatting homographs can be traced back to a small group of typosquatting page hosters, and that certain top-level domains are much more prone to typosquatting than others. T. Thao et al.~\cite{homograph_IFIP} constructed a classification model for homographs and potential homographs registered by attackers, using machine learning on feasible and novel features, namely the visual similarity of each character and selected information from Whois.
Several tools~\cite{dnstwist,idn_homograph_attack,EvilURL,homographs_dutch,instant_domain_search,DN_Pedia,Homoglyph_Attack} generate permutations of homographs from a defined subset of look-alike characters in the Confusable Unicode table defined by Unicode Inc.~\cite{confused_unicode}, and then look up Whois and DNS to check whether the homographs are registered and active. Compared to the approach of disabling automatic IDN conversion, homograph detection is more attractive to the research community.
\subsection{Brand Familiarity and Security Backgrounds in Computer Security}
In this section, we present work related to brand familiarity and security backgrounds, including security warnings, security knowledge, security behavior, and security self-confidence, that affect human decisions on security threats. Since some previous papers analyze both brand familiarity and security backgrounds, we do not separate them into two different subsections.
T. Kelley et al.~\cite{nihgov} simulated several secure non-spoof and insecure spoof domains with different authentication levels such as extended validation, standard validation, or partial encryption. A logistic model was then applied to the participants' responses to compare how encryption level, web familiarity, security knowledge, and mouse tracking influence participant accuracy in identifying spoof and non-spoof websites. Their results show that user behavior derived from mouse tracking recordings leads to higher accuracy in identifying spoof and non-spoof websites than the other factors. Y. Sawaya et al.~\cite{chi2017} applied the Security Behavior Intentions Scale (SeBIS)~\cite{SeBIS} to participants from seven countries and built a regression model to study which factors affect participants' security behavior using a cross-cultural survey. The work concluded that self-confidence in computer security has a larger positive effect on security behavior than actual knowledge about computer security. I. Kirlappos et al.~\cite{phishing_sp2012} showed that users do not focus on security warnings (or do not understand them), but instead look for signs to confirm whether a site is trustworthy. The study revealed that advice given in some current user education about phishing is largely ignored. It therefore suggests that, rather than flooding users with information, user education needs to consider how users make decisions in both business and personal settings. M. Sharif et al.~\cite{urakawa_ccs2018} designed a survey of security warnings, user behavior, and knowledge and self-confidence about security to evaluate the utility of self-reported questionnaires for predicting exposure to malicious content. Their results confirm that self-reported data can help forecast exposure risk over long periods of time, but is not as crucial as behavioral measurements for accurately predicting exposure. S.
Das et al.~\cite{soups_social} found that social processes play a major role in security behavior. Furthermore, conversations about security are often driven by the desire to warn or protect others from immediate novel threats observed or experienced. C. Erika et al.~\cite{soups_mobile} studied user confidence in security and privacy on smartphones and found that participants are apprehensive about running privacy- and financially-sensitive tasks on their phones for four reasons: fear of theft and data loss, misconceptions about the security of their network communications, worries about accidentally touching or clicking, and mistrust of smartphone applications. I. Iulia et al.~\cite{soups_expert} compared security behaviors between experts and non-experts and found that while experts frequently report installing software updates, using two-factor authentication, and using a password manager, non-experts report using antivirus software, visiting only known websites, and changing passwords frequently. A. Felt et al.~\cite{android} examined whether security warnings from the Android permission system are effective for users. Their results show that only 17\% of participants paid attention to permissions during installation, and only 3\% of Internet survey respondents could correctly answer all permission comprehension questions. This indicates that current Android security warnings do not help most users make correct security decisions.
\section{Procedure}
\label{section:procedure}
In this section, we present how the survey is designed and distributed to the participants. The survey is created in Japanese and embedded in a webpage. The webpage is then distributed to 2,067 participants who are Internet users in Japan.\footnote{The Appendix in this paper describes the questions in English, but the survey is designed in Japanese and distributed to Japanese participants, so there is no translation problem affecting the survey's reliability and structural validity.} The participants cannot submit their responses if any question is unanswered. There are three question parts about human factors (demographics, brand familiarity, and security backgrounds), and a final part about the participants' ability to distinguish homographs. The following sections describe the design of each part.
\subsection{Demographics}
For the human demographics, the survey consists of the following seven questions:
\begin{enumerate}
\item Gender (male: 1 and female: 0)
\item Age (the inputs are integers)
\item Having a job (full-time job: 1, freelancer or part-time job: 0.5, and no job: 0).
\item Whether the participant has studied any of the following languages: English, Spanish, French, Russian, Portuguese, German, Vietnamese, Turkish, Italian, Greek, and Dutch. These are common languages that use Punycode (i.e., letters confusable with the English alphabet). For each language, there are two answer options (yes: 1 and no: 0). We then count the number of languages for which the participant answers `yes'.
\item Knowing only Japanese (yes: 1 and no: 0). Although there is already a variable for the number of languages the participant has studied, we hypothesized that knowing only Japanese is a potentially affecting factor because the survey is conducted in Japan. Knowing only Japanese is therefore chosen as a variable to be measured.
\item Whether the participant graduated or enrolled in computer science or computer engineering (yes: 1 and no: 0).
\item Whether the participant worked (or is working) in computer science or computer engineering (yes: 1 and no: 0).
\end{enumerate}
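As a concrete illustration, the demographic answers above can be mapped to the numeric features used later in the regression model. The following sketch assumes a hypothetical raw-answer layout; all field names are illustrative and not part of the survey instrument.

```python
# Hypothetical encoding of one participant's demographic answers into
# numeric features; the raw layout and field names are illustrative.
LANGUAGES = ["English", "Spanish", "French", "Russian", "Portuguese",
             "German", "Vietnamese", "Turkish", "Italian", "Greek", "Dutch"]

def encode_demographics(raw):
    return {
        "gender": 1 if raw["gender"] == "male" else 0,
        "age": int(raw["age"]),
        "job": {"full-time": 1, "freelance/part-time": 0.5, "none": 0}[raw["job"]],
        "num_languages": sum(int(raw["studied"].get(l, 0)) for l in LANGUAGES),
        "only_japanese": int(raw["only_japanese"]),
        "cs_education": int(raw["cs_education"]),
        "cs_work": int(raw["cs_work"]),
    }

participant = {"gender": "female", "age": "29", "job": "freelance/part-time",
               "studied": {"English": True, "German": True},
               "only_japanese": False, "cs_education": False, "cs_work": True}
features = encode_demographics(participant)
print(features["num_languages"])  # 2
```

The binary and Likert-style answers thus all become numeric columns, which is the form the multiple regression model expects.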
\subsection{Brand Familiarity}
For brand familiarity, nine famous brands are chosen: Amazon, Google, Coinbase, Wiki, Booking, Expedia, Paypal, Sex.com, and Facebook. For each brand, the participants indicate how familiar they are with it using 4-point Likert-scale answer options (do not know: 1, know but never use: 2, occasionally use: 3, and often use: 4). A brand may have multiple authentic domains (i.e., domains registered by the brand itself), so the logo and name of each brand are shown in the questions instead of listing all of its domains.
\subsection{Security Backgrounds}
For the security backgrounds, the survey consists of the following five questions:
\begin{enumerate}
\item Anti-virus software installation on PCs or mobile devices (yes: 1 and no: 0).
\item Security warning: when a browser or anti-virus software issues a warning while browsing a website, whether the participant continues browsing (yes: 1 and no: 0).
\item Security behavior: consists of sixteen sub-questions as described in Appendix~\ref{section:security_behavior}. For each sub-question, the participants choose among 5-point Likert-scale answer options (not at all: 1, rarely: 2, sometimes: 3, often: 4, and always: 5). The sum of all sixteen answers is then calculated and used as the variable in the model instead of each separate answer.
\item Security knowledge: consists of eighteen sub-questions as described in Appendix~\ref{section:security_knowledge}. For each sub-question, the participants have two answer options (true: 1 and false: 0). Then, based on the correct answers given at the end of the appendix, we count the number of correct answers of each participant.
\item Security self-confidence: consists of six sub-questions as described in Appendix~\ref{section:security_confidence}. The participants choose among 5-point Likert-scale answer options (not at all: 1, not applicable: 2, neither agree nor disagree: 3, applicable: 4, and very applicable: 5). As with security behavior, the sum of the six answers is calculated and used in the model.
\end{enumerate}
For the security behavior, security knowledge, and security self-confidence questions, we use the design from~\cite{chi2017}. That paper aims to analyze factors affecting security behavior and thus uses security behavior as the target function. Meanwhile, our work aims to analyze factors (including security behavior) that affect the ability of homograph recognition, so security behavior is just one of the features, not the target function.
\begin{figure}[!htb]
\center
\includegraphics[width=0.7\columnwidth]{pic/DomainSample.pdf}
\caption{Sample Domains Used for Testing the Ability in Distinguishing Homographs}
\label{fig:user_decision}
\end{figure}
\subsection{Homograph Recognition}
This part is used for calculating the values of the target function. Eighteen sample domains, a mix of homographs and non-homographs, are shown in Figure~\ref{fig:user_decision} and explained in Appendix~\ref{section:user_decision}. The domains target the nine brands mentioned in the brand familiarity part and are chosen for different purposes. For example, domain \#2 (\url{amazonaws.com}) is chosen because participants probably only know \url{amazon.com} and may think \url{amazonaws.com} is a homograph, but actually it is not. Another example is domain \#16 (\url{sex.com}), a pornographic domain, which participants may consider a homograph (unsafe) although it is not. For each of the eighteen domains, the participants answer whether it is safe. Based on the correct answers described in Appendix~\ref{section:user_decision}, we record whether each participant answers each domain correctly (true: 1 and false: 0). We choose eighteen domains rather than 30, 40, or more because participants tend to answer randomly instead of thoughtfully when a survey contains too many questions, and eighteen questions are a good limit for our design.
\section{Methodology}
\label{section:methodology}
This section describes the pre-processing of the raw data of the participants' responses, determines the target function, and defines the model.
\subsection{Domain Grouping}
\label{section:grouping}
The eighteen sample domains are grouped based on their visual similarity to the brand domains. In this paper, the Structural Similarity Index (SSIM)~\cite{ssim2004} is chosen as the visual similarity metric. SSIM is commonly used since it outperforms traditional methods such as Peak Signal-to-Noise Ratio (PSNR) and Mean Squared Error (MSE), which can estimate only absolute errors. First, the domains are rendered as images of the same size $N \times N$. The SSIM between two images $x$ and $y$ is then calculated as follows:
\begin{equation}
SSIM(x,y) = \frac{(2\mu_x\mu_y+c_1)(2\sigma_{xy}+c_2)}{(\mu^2_x+\mu^2_y+c_1)(\sigma^2_x+\sigma^2_y + c_2)}
\end{equation}
Here $\mu_x$ and $\mu_y$ represent the averages of $x$ and $y$, respectively, $\sigma^2_x$ and $\sigma^2_y$ their variances, and $\sigma_{xy}$ their covariance. $c_1 = (k_1L)^2$ and $c_2 = (k_2L)^2$ are variables that stabilize the division when the denominator is weak, where $L$ is the dynamic range of the pixel values, typically set to $L = 2^{\#bits\_per\_pixel}-1$, and $k_1 = 0.01, k_2 = 0.03$ by default. SSIM takes values in $[-1, 1]$, where 1 indicates perfect similarity.
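As a minimal sketch of the formula above, the following computes SSIM over a single global window. Note that library implementations such as skimage's structural similarity compute the statistic over local sliding windows and average, so values can differ from this simplified version.

```python
import numpy as np

def ssim_global(x, y, k1=0.01, k2=0.03, L=255):
    # Single-window SSIM per the equation above; library implementations
    # (e.g., skimage) use local sliding windows instead of one global window.
    x = x.astype(np.float64)
    y = y.astype(np.float64)
    mu_x, mu_y = x.mean(), y.mean()
    var_x, var_y = x.var(), y.var()
    cov_xy = ((x - mu_x) * (y - mu_y)).mean()
    c1, c2 = (k1 * L) ** 2, (k2 * L) ** 2
    return ((2 * mu_x * mu_y + c1) * (2 * cov_xy + c2)) / \
           ((mu_x ** 2 + mu_y ** 2 + c1) * (var_x + var_y + c2))

# Identical images give SSIM = 1.
img = np.random.randint(0, 256, (64, 64))
print(ssim_global(img, img))
```

Comparing a rendered homograph image against the rendered brand-domain image in this way yields the per-domain similarity scores used for grouping below.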
\begin{table}[!h]
\center
\caption{The SSIM of Eighteen Sample Domains}
\begin{tabular}{|>{
\hspace{0.2pc}}c<{\hspace{0.2pc}}
|>{\hspace{0.2pc}}c<{\hspace{0.2pc}}
|>{\hspace{0.2pc}}c<{\hspace{0.2pc}}
|>{\hspace{0.2pc}}l<{\hspace{0.2pc}}
|>{\hspace{0.2pc}}r<{\hspace{0.2pc}}
|}
\hline
\textbf{Group No} & \textbf{Group Name}& \textbf{Domain\#} & \textbf{Brand Domain} & \textbf{SSIM}\\
\hline
\hline
\multirow{ 4}{*}{Group 1} & \multirow{ 4}{*}{\shortstack{Homographs with \\SSIM $\geq 0.999$}} & {\#3\ } & \url{amazon.com} & 1.000\\
& & {\#4\ } & \url{google.com} & 1.000\\
&& \#10 & \url{booking.com} & 0.999\\
&& \#15 & \url{paypal.com} & 1.000\\
\hline
\multirow{ 7}{*}{Group 2} & \multirow{ 7}{*}{\shortstack{Homographs with \\SSIM $< 0.999$}} & {\#1\ } & \url{amazon.com} & 0.994\\
&& {\#6\ } & \url{google.com} & 0.838\\
&& {\#7\ } & \url{coinbase.com} & 0.996\\
&& {\#9\ } & \url{wikipedia.org} & 0.994\\
&& \#12 & \url{expedia.com} & 0.995\\
&& \#14 & \url{paypal.com} & 0.993\\
&& \#17 & \url{facebook.com} & 0.845\\
\hline
\multirow{ 7}{*}{Group 3}& \multirow{ 7}{*}{Non-homographs} & {\#2\ } & \url{amazon.com} & 0.865\\
&& {\#5\ } & \url{google.com} & 0.950\\
&& {\#8\ } & \url{wikipedia.org} & 0.853\\
&& \#11 & \url{booking.com} & 0.780\\
&& \#13 & \url{expedia.com} & 0.950\\
&& \#16 & \url{sex.com} & 1.000\\
&& \#18 & \url{facebook.com} & 0.667\\
\hline
\end{tabular}
\label{table:SSIM_domain}
\end{table}
Using the SSIM, the eighteen sample domains are categorized into three groups:
\begin{itemize}
\item \emph{Group 1: Homographs with SSIM $\geq$ 0.999.}
This group consists of four homographs: domains \#3, \#4, \#10, and \#15 in Figure~\ref{fig:user_decision}. The domains \#3, \#4, and \#15 have SSIM = 1, which means they look exactly the same as the brand domains. The domain \#10 has SSIM = 0.999 because its look-alike letter `g' is very difficult to recognize.
\item \emph{Group 2: Homographs with SSIM $<$ 0.999.}
This group consists of seven homographs: domains \#1, \#6, \#7, \#9, \#12, \#14, and \#17 in Figure~\ref{fig:user_decision}. This group contains homographs whose SSIM scores are lower than those in Group 1, but not by much, i.e., ranging from 0.838 to 0.996. Homographs with lower SSIM are not considered since they may be trivial for the participants to recognize.
\item \emph{Group 3: Non-homographs.}
This group consists of seven non-homographs: domains \#2, \#5, \#8, \#11, \#13, \#16, and \#18 in Figure~\ref{fig:user_decision}. The domains \#2, \#5, \#8, \#11, \#13, and \#18 are safe domains registered by the brands themselves for different services, but they are less well known than the main brand domains. For instance, domain \#2, \url{amazonaws.com} (Amazon Web Services (AWS)), is Amazon's cloud computing service; many people may confuse it with Amazon's main selling service, \url{amazon.com}. Domain \#16, \url{sex.com}, is chosen because we want to know how participants balance their decisions between a domain that is famous and actually safe and a domain that is notorious for its content category (e.g., pornographic, darknet, terrorism).
\end{itemize}
For each group, the domain numbers, the brand domains, and the corresponding SSIMs are summarized in Table~\ref{table:SSIM_domain}.
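The grouping rule reduces to a simple threshold check on the SSIM score, applied only to homographs. The sketch below uses a few (domain, label, SSIM) triples from Table~\ref{table:SSIM_domain} for illustration.

```python
# Threshold-based grouping of the sample domains:
# Group 1: homographs with SSIM >= 0.999; Group 2: other homographs;
# Group 3: non-homographs (regardless of their SSIM score).
def group_of(is_homograph, ssim):
    if not is_homograph:
        return 3
    return 1 if ssim >= 0.999 else 2

samples = [
    ("#3",  True,  1.000),   # homograph of amazon.com
    ("#10", True,  0.999),   # homograph of booking.com
    ("#6",  True,  0.838),   # homograph of google.com
    ("#16", False, 1.000),   # sex.com: safe despite SSIM = 1
]
for name, homo, s in samples:
    print(name, group_of(homo, s))
```

Note that domain \#16 illustrates why the homograph label takes precedence over the score: it matches its brand domain exactly yet belongs to the non-homograph group.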
\subsection{Lucky Answers and Neutral Answers}
\label{section:luckyneutral}
The survey is designed so that for each of the eighteen sample domains, the participants not only answer whether the domain is a homograph but also describe the reason for their decision. A lucky answer is an answer with a correct decision but an inappropriate reason. A neutral answer is an answer with a correct decision but an unclear reason. For instance, a participant who decides \url{goole.co.jp} is a homograph and gives a correct reason such as ``\emph{the letter g is missing}'' is not considered a lucky answer. A participant who decides \url{goole.co.jp} is a homograph but gives an incorrect reason such as ``\emph{Google only has .co.jp as a top-level domain, and thus google.com is unsafe}'' is considered a lucky answer. A participant who decides \url{goole.co.jp} is a homograph but gives an unclear reason such as ``\emph{I have a feeling}'' is considered a neutral answer.
\begin{table*}
\centering
\caption{Lucky Answers and Neutral Answers}
\begin{tabular}{
|>{\hspace{0.2pc}}c<{\hspace{0.2pc}}
|>{\hspace{0.2pc}}c<{\hspace{0.2pc}}
|>{\hspace{0.2pc}}r<{\hspace{0.2pc}}
|>{\hspace{0.2pc}}r<{\hspace{0.2pc}}
|>{\hspace{0.2pc}}r<{\hspace{0.2pc}}
|}
\hline
\multirow{ 2}{*}{\textbf{Group}} & \multirow{2}{*}{\textbf{Domain \#}} & \multirow{ 2}{*}{\textbf{Incorrect Answer}} & \multicolumn{2}{c|}{\textbf{Correct Answer}}\\
\cline{4-5}
& && \textbf{Appropriate Reason} & \textbf{Lucky and Neutral Answers}\\
\hline
\hline
\multirow{ 4}{*}{Group 1} & {\#3\ } & 1411 (68.26 \%) & 0 (\hphantom{0}0\hphantom{.00} \%) & 656 (31.74 \%)\\
& {\#4\ } & 1432 (69.28 \%) & 0 (\hphantom{0}0\hphantom{.00} \%)& 635 (30.72 \%)\\
& \#10 & 755 (36.53 \%) & 4 (\hphantom{0}0.19 \%)& 1308 (63.28 \%)\\
& \#15 & 756 (36.57 \%)& 0 (\hphantom{0}0\hphantom{.00} \%) & 1311 (63.43 \%)\\
\hline
\multirow{ 7}{*}{Group 2} & {\#1\ } & 495 (23.95 \%) & 470 (22.74 \%)& 1102 (53.31 \%) \\
& {\#6\ } & 649 (31.40 \%) & 167 (\hphantom{0}8.08 \%)& 1251 (60.52 \%) \\
& {\#7\ } & 173 (\hphantom{0}8.37 \%)& 302 (14.61 \%)& 1592 (77.01 \%) \\
& {\#9\ } & 354 (17.13 \%) & 296 (14.32 \%) & 1417 (68.55 \%) \\
& \#12 & 243 (11.76 \%)& 341 (16.50 \%) & 1483 (71.75 \%)\\
& \#14 & 171 (\hphantom{0}8.27 \%)& 354 (17.13 \%)& 1542 (74.60 \%) \\
& \#17 & 229 (11.08 \%) & 471 (22.79 \%)& 1367 (66.13 \%)\\
\hline
\multirow{ 7}{*}{Group 3} & {\#2\ } & 1796 (86.89 \%)& \multicolumn{2}{c|}{271 (13.11 \%)} \\
& {\#5\ } & 1823 (88.20 \%) & \multicolumn{2}{c|}{244 (11.80 \%)} \\
& {\#8\ } & 1827 (88.39 \%) & \multicolumn{2}{c|}{240 (11.61 \%)} \\
& \#11 & 1832 (88.63 \%)& \multicolumn{2}{c|}{235 (11.37 \%)} \\
& \#13 & 1397 (67.59 \%)& \multicolumn{2}{c|}{670 (32.41 \%)} \\
& \#16 & 1841 (89.07 \%)& \multicolumn{2}{c|}{226 (10.93 \%)} \\
& \#18 & 1935 (93.61 \%)& \multicolumn{2}{c|}{132 (\hphantom{0}6.39 \%)}\\
\hline
\end{tabular}
\label{table:lucky}
\end{table*}
The lucky answers are excluded from the dataset since they are clear outliers. For the neutral answers, we cannot simply flip the decision from \emph{true} to \emph{false}, because there is a well-known finding that, very often, human experts cannot explain why they make the choices they do, yet they are correct far more often than non-experts. Ericsson et al.~\cite{lucky1} first found this while studying chess experts, and the finding has since been replicated many times, e.g., by Gerd Gigerenzer et al.~\cite{lucky2} and Gary Klein~\cite{lucky3}. This means it is difficult to classify the neutral answers as either data bias or actually correct answers. Therefore, in this paper, we decide to simply exclude them from the dataset; this is safer than adjusting the participants' responses, such as flipping \emph{true} to \emph{false}. We manually check each of the $2,067\times18$ answers from the 2,067 participants for the eighteen sample domains to find the lucky and neutral answers, and summarize the results in Table~\ref{table:lucky}. In this table, the incorrect answers (column 3) and the correct answers with appropriate reasons (column 4) are used for the model. For Group 3 (non-homographs), we do not need to remove lucky and neutral answers: if the participants answer correctly (i.e., the domains are non-homographs), there is nothing to do; and if they answer incorrectly (i.e., that the domains are homographs), their decisions are wrong regardless of the reason.
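The filtering rule above keeps an answer only if it is incorrect, or correct with an appropriate reason. A minimal sketch, assuming a hypothetical per-answer record layout (the field and category names are illustrative, not the study's actual data format):

```python
# Hypothetical per-answer records; reasons are pre-labeled as
# "appropriate", "lucky", or "neutral" by the manual check.
answers = [
    {"participant": 1, "domain": 3, "correct": True,  "reason": "appropriate"},
    {"participant": 1, "domain": 4, "correct": True,  "reason": "lucky"},
    {"participant": 2, "domain": 3, "correct": False, "reason": None},
    {"participant": 2, "domain": 4, "correct": True,  "reason": "neutral"},
]

def usable(a):
    # Keep incorrect answers as-is; keep correct answers only when the
    # stated reason is appropriate (lucky and neutral answers are dropped).
    if not a["correct"]:
        return True
    return a["reason"] == "appropriate"

kept = [a for a in answers if usable(a)]
print(len(kept))  # 2
```

The surviving records correspond exactly to columns 3 and 4 of Table~\ref{table:lucky}.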
\subsection{Model}
Let $f$ denote the model for measuring the participants' ability in distinguishing homographs. $f$ is defined as follows:
\begin{equation}
f\sim \mathsf{Demographics} + \mathsf{WebFamiliarity} + \mathsf{SecBackgrounds}
\end{equation}
The explanatory variables related to $\mathsf{Demographics}$ consist of gender, age, having a job, whether the participant knows only Japanese, the number of specific languages the participant has studied, whether the participant is educated in computer science or computer engineering, and whether the participant works in computer science or computer engineering. The explanatory variable related to $\mathsf{WebFamiliarity}$ is the usage frequency of the brands. The explanatory variables related to $\mathsf{SecBackgrounds}$ are anti-virus installation, security warnings, security behavior, security knowledge, and security self-confidence.
\subsubsection{Target Functions}
The incorrect answers and the correct answers with appropriate reasons are extracted for the model. For each group, two experiment plans are performed using two different target functions.
\begin{itemize}
\item \emph{Integration}:
This plan integrates all the domains in the group using the target function:
\begin{equation}
\label{eq:target1}
f_1 = \sum_{d_i} \mathsf{SSIM}(d_i, b_i) \times \mathsf{difficulty}(d_i) \times \mathsf{decision}(d_i)
\end{equation}
where $\mathsf{decision}(d_i)$ denotes the decision of the participant in distinguishing whether the domain $d_i$ is a homograph, and $\mathsf{SSIM}(d_i, b_i)$ denotes the SSIM between the domain $d_i$ and its corresponding brand domain $b_i$. $\mathsf{difficulty}(d_i)$ denotes the difficulty of the domain $d_i$ and is defined as $1-\frac{c_i}{t}$, where $c_i$ is the number of participants who give correct decisions for $d_i$ and $t = 2,067$ is the total number of participants. For example, if there were 10 participants of whom 7 answered correctly, the difficulty of the question would be $1-\frac{7}{10} = 0.3$. In this plan, the multiple (linear) regression model is applied once for all the domains, and the affecting factors for the integration target function are then extracted.
\item \emph{Separation}:
This plan applies the multiple (linear) regression model for each domain in the group and finds the affecting factors for each domain. The common affecting factors are then extracted. The target function is defined as follows:
\begin{equation}
\label{eq:target2}
f_2 = \mathsf{decision}(d_i)
\end{equation}
Since each domain is considered separately, $\mathsf{SSIM}(d_i, b_i)$ and $\mathsf{difficulty}(d_i)$ are not needed in the target function. After the factors affecting the target function are determined for each domain, the common factors across all the domains are extracted.
\end{itemize}
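The integration target $f_1$ in Equation~(\ref{eq:target1}) can be sketched as follows for one participant; the SSIM scores and correct-answer counts below are illustrative values, not the study's data.

```python
# Sketch of the integration target f_1 for one participant.
T = 2067  # total number of participants t

def difficulty(num_correct, total=T):
    # difficulty(d_i) = 1 - c_i / t
    return 1 - num_correct / total

def f1(domains, decisions):
    # domains: per-domain dicts with SSIM score and correct-answer count c_i
    # decisions: this participant's decision per domain (1 correct, 0 not)
    return sum(d["ssim"] * difficulty(d["c"]) * dec
               for d, dec in zip(domains, decisions))

# Illustrative Group 1 values (SSIM and counts are examples only).
group1 = [{"ssim": 1.000, "c": 656}, {"ssim": 1.000, "c": 635},
          {"ssim": 0.999, "c": 1312}, {"ssim": 1.000, "c": 1311}]
print(f1(group1, [1, 0, 1, 1]))
```

Because the difficulty weight $1-\frac{c_i}{t}$ grows as fewer participants answer $d_i$ correctly, a participant earns more credit in $f_1$ for recognizing the harder domains.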
The SSIM and the difficulty are not used as feature variables but as elements of the target functions, because they describe the domains rather than the participants, and the goal of this paper is analyzing human factors. Furthermore, for each domain, the SSIM and the difficulty are the same for all 2,067 participants. If they were used as variables, the regression model would always report the SSIM and the difficulty as affecting factors with $p \leq 0.05$, which is not meaningful for finding human factors.
\subsubsection{Factor Determination}
Before showing how the factors affecting the target functions are determined, we briefly review the (Student's) $t$-test. A $t$-test~\cite{t-test1, t-test2} is commonly used to determine whether the mean of a population differs significantly from a specific value (the hypothesized mean) or from the mean of another population. In other words, the $t$-test can tell whether the differences could have happened by chance. First, the $t$-test takes the sample from each set and establishes the problem statement by assuming a null hypothesis that the two means are equal. Then, it calculates certain values and compares them with standard values to determine whether the null hypothesis is accepted or rejected. If the null hypothesis is rejected, it indicates that the data readings are strong and not due to chance. In the $t$-test, the $t$-value represents the ratio between the difference between the two groups and the difference within the groups. The larger the $t$-value, the more difference there is between the groups (and the more likely the results are repeatable). The smaller the $t$-value, the more similar the groups are. A negative $t$-value indicates a reversal in the directionality of the effect being studied, but has no impact on the significance of the difference between the groups. Every $t$-value has a corresponding $p$-value: the probability that the results from the sample data occurred by chance. $p$-values vary from 0 to 1; a low $p$-value is desirable, as it indicates the data did not occur by chance. In most cases, a $p$-value $\leq 0.05$ is accepted to mean the data is valid. In this paper, the affecting factors have the following $p$-values:
\begin{itemize}
\item $p \leq 0.001$: \emph{significant factors} that strongly affect the target function, marked as (***) in the experiment result.
\item $0.001 < p \leq 0.01$: \emph{semi-significant factors}, marked as (**) in the experiment result.
\item $0.01 < p \leq 0.05$: \emph{normal factor} affecting the target function, marked as (*) in the experiment result.
\end{itemize}
In the experiment result, we also show the 95\% confidence interval (CI), which is a range of likely values for the unknown population parameter. For the first plan (integration), the common samples, which contain only incorrect answers and correct answers with appropriate reasons, are input to the regression model. The factors are then determined based on the $t$-test result. For the second plan (separation), the factors affecting the target function in each domain are determined, and the common factors are then extracted. The final factors chosen for this plan are the common factors that affect $\geq \lceil \frac{N}{2} \rceil$ domains, where $N$ denotes the number of domains in the group and $\lceil \cdot \rceil$ denotes the ceiling function.
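For a regression coefficient, the $t$-value is the ratio of the estimated effect to its standard error, and its $p$-value is mapped to the significance marks above. A minimal sketch (the numbers are illustrative; in the actual experiment these values are reported by statsmodels):

```python
# Sketch: t-value of a regression coefficient and the significance
# marks used in the experiment tables. Numbers are illustrative.
def t_value(beta_hat, std_err):
    # Ratio of the estimated effect to its standard error.
    return beta_hat / std_err

def stars(p):
    if p <= 0.001:
        return "***"  # significant factor
    if p <= 0.01:
        return "**"   # semi-significant factor
    if p <= 0.05:
        return "*"    # normal factor
    return ""         # not an affecting factor

print(t_value(0.42, 0.10))  # a large |t| corresponds to a small p
print(stars(0.0005), stars(0.005), stars(0.03), stars(0.2))
```

The sign of the $t$-value only reflects the direction of the effect; only its magnitude (via the $p$-value) drives the star classification.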
\subsection{Consistency of Integration and Separation Plans}
\label{section:consistency}
The best case is when both the plans result in the same set of affecting factors. If the case does not happen, we determine the final affecting factor as follows. Let $I$ and $S$ denote the set of affecting factors found in the integration and separation plan, respectively. Let $R$ denote the set of the affecting factors that we are aiming to find.
\begin{itemize}
\item All the common affecting factors of both the plans $I \cap S$ are included in $R$.
\item If there exists a factor $x \in I$ such that $x \notin S$, $x$ is included in $R$ if $x$ is a \emph{significant factor} in the integration plan ($p \leq 0.001$).
\item If there exists a factor $x \in S$ such that $x \notin I$, $x$ is included in $R$ if the \emph{significant} $p$-values ($p\leq 0.001$) for $x$ are dominant in the separation plan (i.e., they occur in more than half of the domains in the group).
\end{itemize}
The consistency of both plans is the final result used for the conclusion; however, the factors found in each plan still give important information, so we report their details.
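The consolidation rules above can be expressed directly as set operations. The sketch below is a simplified model in which the separation-plan dominance check is precomputed as a Boolean per factor; the factor names and $p$-values are hypothetical.

```python
# Sketch of the consistency rule between the two plans.
def consolidate(I, S, p_I, p_S_dominant):
    # I, S: factor sets from the integration and separation plans.
    # p_I[x]: p-value of factor x in the integration plan.
    # p_S_dominant[x]: True if significant p-values (<= 0.001) for x
    #                  cover more than half of the group's domains.
    R = I & S  # common factors of both plans are always kept
    for x in I - S:
        if p_I[x] <= 0.001:          # significant in integration plan
            R.add(x)
    for x in S - I:
        if p_S_dominant.get(x, False):
            R.add(x)
    return R

I = {"age", "security_knowledge", "gender"}
S = {"age", "security_knowledge", "brand_familiarity"}
p_I = {"gender": 0.0004, "age": 0.01, "security_knowledge": 0.0001}
p_S_dominant = {"brand_familiarity": False}
print(consolidate(I, S, p_I, p_S_dominant))
```

In this example, \emph{gender} survives because it is significant in the integration plan, while \emph{brand\_familiarity} is dropped because its significant $p$-values are not dominant across domains.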
\section{Experiment}
\label{section:experiment}
The program is written in Python 3.7.4 and run on a MacBook Pro with a 2.8 GHz Intel Core i7 and 16 GB of RAM. The multiple (linear) regression model is executed using the \emph{scikit-learn} package version 0.21. The $t$-test is computed using the \emph{statsmodels} package version 0.10. The SSIM is computed using the \emph{skimage} package version 0.15.dev0.
\subsection{Participant Population}
Before fitting the model, we check whether the participant sampling is valid. First, we analyze whether the participant demographics of gender and age statistically match actual data (e.g., data from a government census). Second, we show that the distribution of age (continuous values) is a normal (Gaussian) distribution (for gender, the data is binary rather than continuous, so no normality test is needed).
\begin{table*}
\centering
\captionsetup[subtable]{position = below}
\captionsetup[table]{position=top}
\caption{Participant Sampling}
\label{table:population}
\begin{subtable}{0.3\linewidth}
\centering
\begin{tabular}{
|>{\hspace{0.2pc}}l<{\hspace{0.2pc}}
|>{\hspace{0.2pc}}r<{\hspace{0.2pc}}
|>{\hspace{0.2pc}}r<{\hspace{0.2pc}}
|}
\hline
\textbf{Age Range} & \textbf{Male} & \textbf{Female}\\
\hline
\hline
under 20 & 52 & 86\\
20-29 & 244 & 210\\
30-39 & 148 & 148\\
40-49 & 148 & 148 \\
50-59 & 148 & 148 \\
60-69 & 171 & 236\\
over 70 & 123 & 57\\
\hline
\end{tabular} \caption{Age Ranges and Gender}
\label{tab:dimFFT}
\end{subtable}%
\hspace*{3em}
\begin{subtable}{0.3\linewidth}
\centering
\begin{tabular}{|c|c|c|}
\hline
\multirow{3}{*}{\textbf{Gender}} & Male & 1034 (50.02 \%)\\
& Female & 1033 (49.98 \%) \\
& Actual Male \% & 50 \%~\cite{internet_users_japan}\\
\hline
\multirow{6}{*}{\textbf{Age}} & Average & 44.81 \\
& Median & 45 \\
& Min & 15 \\
& Max & 70 \\
& Actual Median & 35 to 44~\cite{internet_users_japan} \\
\hline
\end{tabular}
\caption{Matching Actual Statistics}
\label{tab:dimGMM}
\end{subtable}
\end{table*}
As mentioned in Section~\ref{section:procedure}, the 2,067 participants are Internet users in Japan. We match them against a population census report of Japanese Internet users~\cite{internet_users_japan}. Table~\ref{table:population} describes the age and gender of our samples. Table~\ref{tab:dimFFT} describes the distribution of gender across age ranges. Table~\ref{tab:dimGMM} shows the actual percentage of men within the population of Internet users and the range in which the actual median age of Internet users lies. The normal distribution test is given in Figure~\ref{fig:table:DistributionAge}. The bell curve and the skewness (0.005), which is very close to 0, show that the data follows a normal distribution.
\begin{figure}[!ht]
\centering
\includegraphics[width=0.55\columnwidth]{pic/dist_age.pdf}
\qquad
\begin{tabular}[b]{l |r}\hline
\textbf{Metric} & \textbf{Value}\\
\hline
\hline
Mean & 44.812 \\
Standard Error & 0.375\\
Median& 45.000\\
Standard Deviation & 17.072\\
Sample Variance & 291.439\\
Kurtosis & -1.361\\
Skewness & 0.005 \\
Range & 55.000\\
Confidence Level (95.0 \%) & 0.736\\
Count & 2067.000\\
\hline
\end{tabular}
\caption{Distribution Curve and Distribution Summary of Age}
\label{fig:table:DistributionAge}
\end{figure}
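The skewness check above can be reproduced in a few lines. The sketch below (illustrative, not the authors' analysis script) computes the simple population skewness estimator, which is near 0 for symmetric, approximately normal data:

```python
# Illustrative normality check via skewness (made-up values, not the
# actual survey ages). A skewness close to 0 is one indicator that the
# sample is compatible with a normal distribution.
from statistics import mean, pstdev

def skewness(values):
    """Simple population estimator of skewness: mean of cubed z-scores."""
    m = mean(values)
    s = pstdev(values)
    n = len(values)
    return sum(((x - m) / s) ** 3 for x in values) / n
```

Applied to the 2067 collected ages, this estimator yields the 0.005 reported in the figure.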
\subsection{Cronbach's Alpha ($\alpha$) Measurement}
We use Cronbach's $\alpha$~\cite{cronbach1, cronbach2} to measure the internal consistency (IC), i.e., the reliability, of the questions that have multiple Likert-scale sub-questions. Suppose that we measure a quantity that is a sum of $K$ components: $X = Y_{1} + Y_{2} + \cdots + Y_{K}$. Cronbach's $\alpha$ is defined as follows:
\begin{equation}
\alpha = \frac{K}{K-1} \left(1- \frac{\sum^{K}_{i=1} \sigma^{2}_{Y_{i}}}{\sigma^{2}_{X}}\right),
\end{equation}
where $\sigma^{2}_{X}$ denotes the variance of the observed total test scores, and $\sigma^{2}_{Y_{i}}$ denotes the variance of the component $i$ for the current sample of persons. We then use the rule of thumb for interpreting $\alpha$ as follows:
\begin{itemize}
\item $\alpha \geq 0.9$: \emph{Excellent} IC
\item $0.9 > \alpha \geq 0.8$: \emph{Good} IC
\item $0.8 > \alpha \geq 0.7$: \emph{Acceptable} IC
\item $0.7 > \alpha \geq 0.6$: \emph{Questionable} IC
\item $0.6 > \alpha \geq 0.5$: \emph{Poor} IC
\item $0.5 > \alpha$: \emph{Unacceptable} IC
\end{itemize}
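The formula and the rule of thumb above can be sketched as follows (an assumed implementation, not the authors' code; the item lists are illustrative):

```python
# Cronbach's alpha: K/(K-1) * (1 - sum of item variances / variance of totals).
def cronbach_alpha(items):
    """items: K lists, each holding one sub-question's scores over all
    respondents (all lists have the same length)."""
    K = len(items)
    n = len(items[0])

    def var(xs):
        m = sum(xs) / len(xs)
        return sum((x - m) ** 2 for x in xs) / len(xs)

    totals = [sum(item[j] for item in items) for j in range(n)]
    return K / (K - 1) * (1 - sum(var(it) for it in items) / var(totals))

def interpret(alpha):
    """Map alpha to the rule-of-thumb internal-consistency labels above."""
    for bound, label in [(0.9, "Excellent"), (0.8, "Good"),
                         (0.7, "Acceptable"), (0.6, "Questionable"),
                         (0.5, "Poor")]:
        if alpha >= bound:
            return label
    return "Unacceptable"
```

For instance, plugging the Brand Familiarity row of Table~\ref{table:cronbach} into the formula gives $\frac{9}{8}(1 - 4.446/12.167) \approx 0.714$, matching the reported 0.713 up to rounding of the inputs.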
In our survey, five questions consist of multiple sub-questions. Three of them are brand familiarity (4-point Likert scale), security behavior (5-point Likert scale), and security self-confidence (5-point Likert scale). The security knowledge question (which contains eighteen binary sub-questions) and the user decision on distinguishing the eighteen domains (which also contains eighteen binary sub-questions) are treated as 2-point Likert-scale questions. The result of Cronbach's $\alpha$ is shown in Table~\ref{table:cronbach}. The internal consistency of all the questions is better than or equal to \emph{acceptable}, which indicates that our survey is reliable.
\begin{table*}
\center
\caption{Cronbach's $\alpha$ Results for Likert-scale Questions}
\begin{tabular}{
|l
|r
|r
|r
|r
|c
|}
\hline
\multirow{ 2}{*}{\textbf{Question}} & \textbf{No. of Sub-} & \textbf{Sum of Item Vari-} & \textbf{Variance of Total} & \textbf{Cronbach's} & \multirow{ 2}{*}{\textbf{IC}} \\
& \textbf{questions} ($K$) & \textbf{ances} ($\sum^{K}_{i=1} \sigma^{2}_{Y_{i}}$) & \textbf{Scores} ($\sigma^{2}_{X}$) & $\alpha$ &\\
\hline
\hline
Brand Familiarity & {9\ } & {4.446\ } & {12.167\ } & {0.713\ } & Acceptable\\
\hline
Security Behavior & {16\ } & {27.203\ } & {163.219\ } & {0.889\ } & Good \\
\hline
Security Confidence & {6\ } & {5.699\ } & {25.920\ } & {0.936\ } & Excellent\\
\hline
Security Knowledge & {18\ } & {3.038\ } & {10.109\ } & {0.741\ } & Acceptable\\
\hline
Homograph Decision & {18\ } & {2.585\ } & {16.095\ } & {0.889\ } & Good\\
\hline
\end{tabular}
\label{table:cronbach}
\end{table*}
\subsection{Result for Group 1}
When distributing the survey to the participants, we hypothesized that nobody could distinguish the homographs because the visual similarity is almost 100 \%. However, the actual data surprisingly contain a large number of correct answers (over 30 \% for domains \#3 and \#4, and even over 60 \% for domains \#10 and \#15). Fortunately, the analysis of lucky and neutral answers given in Table~\ref{table:lucky} indicates that there is no correct answer with an appropriate reason for domains \#3, \#4 and \#15, which have 100 \% SSIM, and only 0.19 \% of correct answers with appropriate reasons for domain \#10, which has 99.9 \% SSIM. We can now confirm that there is no way for the participants to distinguish such extremely high-SSIM homographs. This raises the seriousness of homograph attacks. For this group, we therefore report only descriptive statistics and do not apply the regression model.
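The filtering above amounts to counting, per domain, the fraction of answers that are both correct and justified by an appropriate reason. A hypothetical sketch (the record format is assumed, not the actual dataset schema):

```python
# Per-domain rate of correct answers with appropriate reasons.
# Records are made-up tuples, not survey data.
from collections import defaultdict

def justified_correct_rate(records):
    """records: iterable of (domain_id, is_correct, has_appropriate_reason)."""
    total = defaultdict(int)
    justified = defaultdict(int)
    for domain, correct, reason in records:
        total[domain] += 1
        if correct and reason:
            justified[domain] += 1
    return {d: justified[d] / total[d] for d in total}
```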
\subsection{Result for Group 2}
In the first experiment plan (integration), each domain in this group has a different set of incorrect answers and of correct answers with appropriate reasons. In total, 146 common samples (out of 2067) are filtered. The regression model with the target function $f_1$ given in Equation~\ref{eq:target1} is applied, and the result is shown in Table~\ref{table:group2}. Recall that (*) represents $0.01 < p\leq0.05$, (**) represents $0.001 < p\leq0.01$, and (***) represents $p\leq 0.001$. Four affecting factors are found:
\begin{itemize}
\item Have a job: \emph{normal factor}; the positive coefficient ($0.1425$) indicates that people who have a job tend to have the ability of homograph recognition.
\item Know only Japanese: \emph{semi-significant factor}; the negative coefficient ($-0.2636$) indicates that people who know languages other than Japanese tend to have the ability of homograph recognition.
\item Frequently use the brands: \emph{semi-significant factor}; the positive coefficient ($0.0322$) indicates that people who are more familiar with the brands tend to have the ability of homograph recognition.
\item Have better security knowledge: \emph{significant factor}; the positive coefficient ($0.0624$) indicates that people who have better security knowledge tend to have the ability of homograph recognition.
\end{itemize}
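The coefficients and signs discussed above come from a multiple linear regression. As a minimal illustration of how a fitted coefficient and its sign are obtained (a toy single-predictor fit, not the authors' pipeline, which also reports $p$-values and 95\% confidence intervals):

```python
# Ordinary least squares for one predictor: slope has the sign of the
# covariance between predictor and outcome. Data values are made up.
def ols_fit(xs, ys):
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sxx = sum((x - mx) ** 2 for x in xs)
    slope = sxy / sxx
    intercept = my - slope * mx
    return intercept, slope
```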
For the second experiment plan (separation), the regression model with the target function $f_2$ given in Equation~\ref{eq:target2} is applied to the seven different sets of incorrect answers and correct answers with appropriate reasons of the seven domains in this group. The factors affecting $f_2$ are found for each domain, and the common factors are then extracted. The number of samples for each domain is respectively 965 (\#1), 816 (\#6), 475 (\#7), 650 (\#9), 584 (\#12), 525 (\#14), and 700 (\#17). The result is shown in the last seven columns of Table~\ref{table:group2}. In this table, only the $p$-values of the affecting factors are given so that the common factors can be easily observed; $(+)$ denotes a positive coefficient and $(-)$ a negative coefficient. The factors chosen for this plan are the common factors that affect at least $\lceil N/2 \rceil = 4$ domains:
\begin{itemize}
\item Sex (male): affects 6/7 domains; a \emph{significant factor} for 5 domains (\#1, \#9, \#12, \#14, \#17) and a \emph{normal factor} for \#7. All the coefficients are negative, which indicates that females tend to recognize homographs better than males.
\item Have a job: affects 5/7 domains; a \emph{significant factor} for \#9 and \#17 and a \emph{normal factor} for \#7, \#12 and \#14. All the coefficients are positive, which indicates that people who have a job tend to be able to recognize the homographs.
\item Still browse the website even if there is a warning from anti-virus software: affects 4/7 domains; a \emph{semi-significant factor} for \#1 and a \emph{normal factor} for \#12, \#14, and \#17. All the coefficients are negative, which indicates that people who do not browse the website when there is a warning tend to be able to distinguish the homographs.
\item Have more security knowledge: affects 7/7 domains; a \emph{significant factor} with positive coefficients for all the domains. This indicates that people who have better security knowledge tend to be able to distinguish the homographs.
\end{itemize}
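The at-least-$\lceil N/2 \rceil$ selection rule used above can be sketched as follows (a hypothetical helper, not the authors' code; the factor names are illustrative):

```python
# Keep factors that appear in at least ceil(N/2) of the N per-domain models.
import math

def common_factors(per_domain_factors):
    """per_domain_factors: list of sets, one set of affecting factors
    per domain model."""
    n_domains = len(per_domain_factors)
    threshold = math.ceil(n_domains / 2)
    counts = {}
    for factors in per_domain_factors:
        for f in factors:
            counts[f] = counts.get(f, 0) + 1
    return {f for f, c in counts.items() if c >= threshold}
```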
\begin{table*}
\center
\caption{Experiment Result of Group 2 (Homograph with SSIM $<0.999$)}
\begin{tabular}{
|c|l||
r|r|r||
c|c|c|c|c|c|c|}
\hline
\multirow{ 2}{*}{\textbf{No.}} & \multirow{ 2}{*}{\textbf{Factors}} & \multicolumn{3}{c||}{\textbf{Integration}} & \multicolumn{7}{c|}{\textbf{Separation}}\\
\cline{3-12}
& & \textbf{Coef.} & \textbf{$p$} & \textbf{95\%CI} & \textbf{\#1} & \textbf{\#6} & \textbf{\#7} & \textbf{\#9} & \textbf{\#12} & \textbf{\#14} & \textbf{\#17}\\
\hline
\hline
\multicolumn{2}{|c||}{Number of Samples} & \multicolumn{3}{|c||}{146}& 965 & 816& 475& 650& 584& 525& 700\\
\hline
\multicolumn{2}{|c||}{(Intercept)} & -0.6607 & 0.007 & [-1.134, -0.187] & & & & & && \\\hline
1& Sex (male) & -0.0705 & 0.261 & [-0.194, \hphantom{-}0.053] & \cellcolor{lightgray}{\textless 0.001} & \cellcolor{lightgray}{} & \cellcolor{lightgray}{0.032} & \cellcolor{lightgray}{0.001} & \cellcolor{lightgray}{\textless 0.001} & \cellcolor{lightgray}{\textless 0.001} & \cellcolor{lightgray}{0.001} \\
& & & & & \cellcolor{lightgray}{$(-)$***} & \cellcolor{lightgray}{} & \cellcolor{lightgray}{$(-)$*} & \cellcolor{lightgray}{$(-)$***} & \cellcolor{lightgray}{$(-)$***} & \cellcolor{lightgray}{$(-)$***} & \cellcolor{lightgray}{$(-)$***} \\
\hline
2& Age (older) & -0.0022 & 0.251 & [-0.006, \hphantom{-}0.002] & & & & & \textless 0.001& 0.001 & \textless 0.001\\
& & & & & & & & & $(-)$*** & $(-)$***& $(-)$***\\
\hline
3& Have a job& \cellcolor{lightgray}{0.1425} & \cellcolor{lightgray}{0.036} & \cellcolor{lightgray}{[\hphantom{-}0.009, \hphantom{-}0.276]} & \cellcolor{lightgray}{} & \cellcolor{lightgray}{}& \cellcolor{lightgray}{0.016} & \cellcolor{lightgray}{\textless 0.001}& \cellcolor{lightgray}{0.015} & \cellcolor{lightgray}{0.043}& \cellcolor{lightgray}{\textless 0.001}\\
& & \cellcolor{lightgray}{} & \cellcolor{lightgray}{*} & \cellcolor{lightgray}{} & \cellcolor{lightgray}{}& \cellcolor{lightgray}{}& \cellcolor{lightgray}{$(+)$*} & \cellcolor{lightgray}{$(+)$***} & \cellcolor{lightgray}{$(+)$*} & \cellcolor{lightgray}{$(+)$*} & \cellcolor{lightgray}{$(+)$***}\\
\hline
4& Know only Japanese & -0.2636 & 0.006 & [-0.451, -0.077] & & & & & 0.015 & \textless 0.001 & 0.007\\
& & & ** & & & && & $(-)$*& $(-)$***& $(-)$**\\
\hline
5& Number of languages & 0.0262 & 0.519 & [-0.054, \hphantom{-}0.106] & & & & & &&\\
\hline
6& Install anti-virus & -0.0920 & 0.189 & [-0.230, \hphantom{-}0.046] & & & & & && \\
\hline
7&Browse even warning & -0.0295 & 0.648 & [-0.157, \hphantom{-}0.098] & 0.010 & & & & 0.018 & 0.042 & 0.044\\
& & & & & $(-)$** & & & & $(-)$* & $(-)$*& $(-)$*\\
\hline
8& Frequently use brands & 0.0322 & 0.004 & [\hphantom{-}0.010, \hphantom{-}0.054] & 0.001 & & 0.010& & & & \\
& & & ** & & $(+)$***& & $(-)$** & & & & \\
\hline
9& Education in CS/CE & -0.0316 & 0.820 & [-0.306, \hphantom{-}0.243] & & & & & &&\\
\hline
10& Work in CS/CE & -0.1940 & 0.093 & [-0.421, \hphantom{-}0.033] & & & & & & & \\
\hline
11& More sec. behavior & 0.0009 & 0.753 & [-0.005, \hphantom{-}0.007] & 0.043& & & & &&\\
& & & & & $(+)$* & & & & &&\\
\hline
12 & More sec. knowledge & \cellcolor{lightgray}{0.0624} & \cellcolor{lightgray}{\textless 0.001} & \cellcolor{lightgray}{[\hphantom{-}0.041, \hphantom{-}0.084]} & \cellcolor{lightgray}{\textless 0.001} & \cellcolor{lightgray}{\textless 0.001} & \cellcolor{lightgray}{\textless 0.001} & \cellcolor{lightgray}{\textless 0.001} & \cellcolor{lightgray}{\textless 0.001} & \cellcolor{lightgray}{\textless 0.001} & \cellcolor{lightgray}{\textless 0.001}\\
& & \cellcolor{lightgray}{}& \cellcolor{lightgray}{***} & \cellcolor{lightgray}{}& \cellcolor{lightgray}{$(+)$***} & \cellcolor{lightgray}{$(+)$***} & \cellcolor{lightgray}{$(+)$***} & \cellcolor{lightgray}{$(+)$***} & \cellcolor{lightgray}{$(+)$***} & \cellcolor{lightgray}{$(+)$***} & \cellcolor{lightgray}{$(+)$***}\\
\hline
13& More sec. confidence & -0.0007 & 0.930 & [-0.015, \hphantom{-}0.014] & & & & & && \\
\hline
\end{tabular}
`*': $0.01 < p \leq 0.05$, `**': $0.001 < p \leq 0.01$, `***': $p \leq 0.001$ \\
$(+)$: coefficient $>0$, $(-)$: coefficient $<0$
\label{table:group2}
\end{table*}
\subsubsection{Consistency}
The results of the two plans are not the same, so we perform the result-consistency analysis as described in Section~\ref{section:consistency}. The final set we aim to find (shaded gray in Table~\ref{table:group2}) consists of the following affecting factors:
\begin{itemize}
\item Sex (female), since all the coefficients for males are negative in the separation plan.
\item Have a job: a common factor of both plans.
\item More security knowledge: also a common factor of both plans.
\end{itemize}
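One simple variant of this cross-check (a sketch under assumptions, not the exact procedure of Section~\ref{section:consistency}) is to keep the factors reported by both plans whose coefficient signs agree:

```python
# Intersect two plans' factor sets, requiring agreeing coefficient signs.
def consistent_factors(plan_a, plan_b):
    """plan_a, plan_b: dicts mapping factor name -> coefficient sign (+1/-1)."""
    return {f for f in plan_a if f in plan_b and plan_a[f] == plan_b[f]}
```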
\subsection{Result for Group 3}
In this group, as explained in Section~\ref{section:luckyneutral}, the lucky and neutral answers do not need to be excluded from the dataset. For both experiment plans (integration and separation), the regression model is applied to all 2067 samples using the target function $f_1$ (Equation~\ref{eq:target1}) for integration and $f_2$ (Equation~\ref{eq:target2}) for separation. The results are shown in Table~\ref{table:group3}.
For the first experiment plan (integration), seven affecting factors are found:
\begin{itemize}
\item Sex (male): \emph{normal factor}; the positive coefficient ($0.1354$) indicates that males tend to be able to distinguish the non-homographs.
\item Age (older): \emph{semi-significant factor}; the negative coefficient ($-0.0052$) indicates that younger people tend to be able to distinguish the non-homographs.
\item Have a job: \emph{significant factor}; the negative coefficient ($-0.2104$) indicates that people who do not have a job tend to be able to distinguish the non-homographs.
\item Browse the website even if there is a warning from anti-virus software: \emph{semi-significant factor}; the positive coefficient ($0.1484$) indicates that people who still browse the website despite a warning tend to be able to distinguish the non-homographs.
\item Education in CS/CE: \emph{normal factor}; the positive coefficient ($0.3072$) indicates that people educated in computer science or computer engineering tend to be able to distinguish the non-homographs.
\item Work in CS/CE: \emph{normal factor}; the positive coefficient ($0.2861$) indicates that people who work in computer science or computer engineering tend to be able to distinguish the non-homographs.
\item Have better security knowledge: \emph{significant factor}; the negative coefficient ($-0.0551$) indicates that people who have less security knowledge are better at distinguishing the non-homographs.
\end{itemize}
For the second experiment plan (separation), since this group also has seven domains like group 2, the factors chosen for this plan are the common factors that affect at least 4 domains:
\begin{itemize}
\item Sex (male): affects 4/7 domains; a \emph{significant factor} for \#2, a \emph{semi-significant factor} for \#8, and a \emph{normal factor} for \#5 and \#18. All the coefficients are positive, which indicates that males tend to be able to distinguish the non-homographs.
\item Age (older): affects 4/7 domains; a \emph{significant factor} for \#11 and \#13 and a \emph{semi-significant factor} for \#8 and \#16. All the coefficients are negative, which indicates that younger people tend to be able to distinguish the non-homographs.
\item Have a job: affects 6/7 domains; a \emph{semi-significant factor} for \#8, \#13, and \#16 and a \emph{normal factor} for \#2, \#11, and \#18. All the coefficients are negative, which indicates that people who do not have a job tend to be able to distinguish the non-homographs.
\item Have better security knowledge: affects 5/7 domains; a \emph{significant factor} with negative coefficients for all the domains. This indicates that people who have less security knowledge tend to be able to distinguish the non-homographs.
\end{itemize}
\begin{table*}[!ht]
\center
\caption{Experiment Result of Group 3 (Non-homograph)}
\begin{tabular}{|
c|l|
r|r|r||
c|c|c|c|c|c|c
|}
\hline
\multirow{ 2}{*}{\textbf{No.}} & \multirow{ 2}{*}{\textbf{Factors}} & \multicolumn{3}{c||}{\textbf{Integration}} & \multicolumn{7}{c|}{\textbf{Separation}}\\
\cline{3-12}
& & \textbf{Coef.} & \textbf{$p$} & \textbf{95\%CI} & \textbf{\#2} & \textbf{\#5} & \textbf{\#8} & \textbf{\#11} & \textbf{\#13} & \textbf{\#16} & \textbf{\#18}\\
\hline
\hline
\multicolumn{2}{|c|}{No. of Samples} & \multicolumn{10}{|c|}{2067}\\
\hline
\multicolumn{2}{|c|}{(Intercept)}& 1.3626 & \textless 0.001 & [\hphantom{-}0.934, \hphantom{-}1.791] & & & & & && \\
\hline
1& Sex (male) & \cellcolor{lightgray}{0.1354} & \cellcolor{lightgray}{0.012} & \cellcolor{lightgray}{[\hphantom{-}0.029, \hphantom{-}0.241]} & \cellcolor{lightgray}{\textless 0.001} & \cellcolor{lightgray}{0.019} & \cellcolor{lightgray}{0.004} & \cellcolor{lightgray}{} & \cellcolor{lightgray}{} & \cellcolor{lightgray}{} & \cellcolor{lightgray}{0.011}\\
& & \cellcolor{lightgray}{} & \cellcolor{lightgray}{*} & \cellcolor{lightgray}{} & \cellcolor{lightgray}{$(+)$***} & \cellcolor{lightgray}{$(+)$*} & \cellcolor{lightgray}{$(+)$**} & \cellcolor{lightgray}{} & \cellcolor{lightgray}{} & \cellcolor{lightgray}{} & \cellcolor{lightgray}{$(+)$*}\\
\hline
2& Age (older) & \cellcolor{lightgray}{-0.0052} & \cellcolor{lightgray}{0.002} & \cellcolor{lightgray}{[-0.008, -0.002]} & \cellcolor{lightgray}{} & \cellcolor{lightgray}{} & \cellcolor{lightgray}{0.002} & \cellcolor{lightgray}{\textless 0.001} & \cellcolor{lightgray}{\textless 0.001} & \cellcolor{lightgray}{0.003} & \cellcolor{lightgray}{}\\
& & \cellcolor{lightgray}{} & \cellcolor{lightgray}{**} & \cellcolor{lightgray}{} & \cellcolor{lightgray}{} & \cellcolor{lightgray}{} & \cellcolor{lightgray}{$(-)$**} & \cellcolor{lightgray}{$(-)$***} & \cellcolor{lightgray}{$(-)$***} & \cellcolor{lightgray}{$(-)$**} & \cellcolor{lightgray}{}\\
\hline
3& Have a job& \cellcolor{lightgray}{-0.2104} & \cellcolor{lightgray}{\textless 0.001} & \cellcolor{lightgray}{[-0.326, -0.095]} & \cellcolor{lightgray}{0.011} & \cellcolor{lightgray}{} & \cellcolor{lightgray}{0.007} & \cellcolor{lightgray}{0.020} & \cellcolor{lightgray}{0.009} & \cellcolor{lightgray}{0.007} & \cellcolor{lightgray}{0.027}\\
& & \cellcolor{lightgray}{} & \cellcolor{lightgray}{***} & \cellcolor{lightgray}{} & \cellcolor{lightgray}{$(-)$*} & \cellcolor{lightgray}{} & \cellcolor{lightgray}{$(-)$**} & \cellcolor{lightgray}{$(-)$*} & \cellcolor{lightgray}{$(-)$**} & \cellcolor{lightgray}{$(-)$**} & \cellcolor{lightgray}{$(-)$*}\\
\hline
4& Know only & 0.0931 & 0.258 & [-0.068, \hphantom{-}0.254] & & & & 0.014 & &&\\
& Japanese & & & & & && $(+)$*& &&\\
\hline
5& Number of & 0.0087 & 0.830 & [-0.071, \hphantom{-}0.088] & & & & & &&\\
& languages & & & & & && & &&\\
\hline
6& Install & -0.0796 & 0.211 & [-0.204, \hphantom{-}0.045] & & & & & && 0.043\\
& anti-virus & & & & & & & & && $(-)$*\\
\hline
7&Browse even & 0.1484 & 0.009 & [\hphantom{-}0.037, \hphantom{-}0.259] & & 0.001 & & & 0.040 &&\\
& warning & & **& & & $(+)$***& & & $(+)$*&&\\
\hline
8& Frequently & 0.0149 & 0.138 & [-0.005, \hphantom{-}0.035] & & & & & \textless 0.001 &&\\
& use brands & & & & & & & & $(+)$*** &&\\
\hline
9& Education & 0.3072 & 0.035 & [\hphantom{-}0.021, \hphantom{-}0.593] & 0.016 & & & & &&\\
& in CS/CE & & * & & $(+)$* & & & & &&\\
\hline
10& Work & 0.2861 & 0.018 & [\hphantom{-}0.049, \hphantom{-}0.523] & 0.002 & & & & && 0.010\\
& in CS/CE & & * & & $(+)$** & & & & && $(+)$**\\
\hline
11& More sec. & -0.0020 & 0.412 & [-0.007, \hphantom{-}0.003] & & & & & &&\\
& behavior & & & & & && & &&\\
\hline
12 & More sec. & \cellcolor{lightgray}{-0.0551} & \cellcolor{lightgray}{\textless 0.001} & \cellcolor{lightgray}{[-0.075, -0.035]} & \cellcolor{lightgray}{\textless 0.001} & \cellcolor{lightgray}{\textless 0.001} & \cellcolor{lightgray}{\textless 0.001} & \cellcolor{lightgray}{0.001} & \cellcolor{lightgray}{} & \cellcolor{lightgray}{} & \cellcolor{lightgray}{\textless 0.001}\\
& knowledge & \cellcolor{lightgray}{} & \cellcolor{lightgray}{***} & \cellcolor{lightgray}{} & \cellcolor{lightgray}{$(-)$***} & \cellcolor{lightgray}{$(-)$***} & \cellcolor{lightgray}{$(-)$***} & \cellcolor{lightgray}{$(-)$***} & \cellcolor{lightgray}{} & \cellcolor{lightgray}{} & \cellcolor{lightgray}{$(-)$***} \\
\hline
13& More sec. & 0.0060 & 0.322 & [-0.006, \hphantom{-}0.018] & & & & & && 0.049\\
& confidence & & & & & & & & && $(+)$*\\
\hline
\end{tabular}
`*': $0.01 < p \leq 0.05$, `**': $0.001 < p \leq 0.01$, `***': $p \leq 0.001$\\
$(+)$: coefficient $>0$, $(-)$: coefficient $<0$
\label{table:group3}
\end{table*}
\subsubsection{Consistency}
Similar to group 2, the results of the two plans in this group are not the same, so we perform the result-consistency analysis as described in Section~\ref{section:consistency}. The final set we aim to find (shaded gray in Table~\ref{table:group3}) consists of the following affecting factors. Fortunately, all of them are common factors of both plans.
\begin{itemize}
\item Sex (male), since all the coefficients are positive in both plans.
\item Younger people, since all the coefficients for age are negative.
\item People who do not have a job, since all the coefficients are negative.
\item Less security knowledge, since all the coefficients are negative.
\end{itemize}
\section{Discussion}
\label{section:discussion}
In this section, we discuss how the factors change when the participants are told what homographs are. We then discuss several ideas for improving the results, and their challenges, as future work.
\subsection{Before and after Homograph Explanation/Education}
The main result is described in Section~\ref{section:experiment}. In this section, we perform an extra analysis of how the factors change when the participants are told what a homograph attack is. In the survey, after the participants give their decisions on the eighteen domains, a description of homographs is displayed (Appendix~\ref{section:explain}). The participants then give their decisions again for the same eighteen domains. To avoid outliers in the participants' decisions (i.e., to ensure the independence of their decisions before and after the homograph explanation), the web interface of the survey is designed so that the participants cannot go back from the questions displayed after the homograph explanation to those displayed before it.
In this analysis, we consider the integration plan for all eighteen domains with all 2067 participants\footnote{Although there are lucky and neutral answers, they actually happened (these answers are actual samples in the dataset) and we want to know how the factors behave in this extra analysis.}. Table~\ref{table:beforeafter} shows the experiment result. $p_{BE}$ and $p_{AF}$ denote the $p$-values before and after the homograph explanation, respectively, and $\mid \bigtriangleup p\mid$ denotes the magnitude of the change in $p$. The fifth and sixth columns show the change of the coefficient signs and of the significance, respectively. N/A in the sixth column means that the factor is not an affecting factor (e.g., ** $\rightarrow$ N/A means the variable is a semi-significant factor before the homograph explanation but is no longer an affecting factor afterwards). The result shows that five factors are found across the before and after cases. Three factors (anti-virus installation, frequent use of the brands, and more security knowledge) are consistent in both cases. Sex (male) is no longer an affecting factor after the homograph explanation. Interestingly, working in computer science or computer engineering changes from not being an affecting factor to being one. Furthermore, its $\mid \bigtriangleup p\mid= 0.061$ is the highest among the factors that are affecting factors after the explanation. This indicates that people who work in computer science or computer engineering are able to grasp the situation quickly after being told about homographs.
\begin{table*}
\center
\caption{Factors Change before and after the Homograph Explanation}
\begin{tabular}{
|>{\hspace{0.2pc}}l<{\hspace{0.2pc}}
|>{\hspace{0.2pc}}r<{\hspace{0.2pc}}
|>{\hspace{0.2pc}}r<{\hspace{0.2pc}}
|>{\hspace{0.2pc}}r<{\hspace{0.2pc}}
|>{\hspace{0.2pc}}c<{\hspace{0.2pc}}
|>{\hspace{0.2pc}}c<{\hspace{0.2pc}}
|}
\hline
\textbf{Factors} & $p_{BE}$ & $p_{AF}$ & $\mid \bigtriangleup p \mid$ & \textbf{Coefficient Sign}& \textbf{Significance} \\
\hline
\hline
Sex (male) & 0.018 & 0.339 & 0.321 & $(+) \rightarrow (-)$ & ** $\rightarrow$ N/A \\
\hline
Install anti-virus & 0.001 & 0.045 & 0.044 & $(-) \rightarrow (-)$ & *** $\rightarrow$ *\\
\hline
Frequently use brands & \textless 0.001 & \textless 0.001 & \textless 0.001 & $(-) \rightarrow (-)$ & *** $\rightarrow$ ***\\
\hline
Work in CS/CE & 0.083 & 0.022 & 0.061 & $(+) \rightarrow (+)$ & N/A $\rightarrow$ * \\
\hline
More security knowledge & \textless 0.001 & \textless 0.001 & \textless 0.001 & $(-) \rightarrow (-)$ & *** $\rightarrow$ *** \\
\hline
\end{tabular}
\label{table:beforeafter}
\end{table*}
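The significance-transition column of Table~\ref{table:beforeafter} follows mechanically from the $p$-value thresholds. The sketch below (an assumed helper, not the authors' code) maps $p$-values to the star notation and reports $\mid \bigtriangleup p \mid$:

```python
# Star notation per the paper's convention; "N/A" means not an
# affecting factor (p > 0.05).
def significance(p):
    if p <= 0.001:
        return "***"
    if p <= 0.01:
        return "**"
    if p <= 0.05:
        return "*"
    return "N/A"

def transition(p_before, p_after):
    """Return (|delta p|, 'before -> after') for one factor."""
    return abs(p_before - p_after), f"{significance(p_before)} -> {significance(p_after)}"
```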
\subsection{Future Work and Challenges}
Related to the survey itself, there are three ideas that could improve the study. First, in the current work, the survey targets local participants (i.e., Japanese). If it were applied to global participants, the responses would be more objective. In that case, there is a challenge in translating the survey across the languages of different countries; the translation must be considered carefully while preserving the survey's reliability and structural validity. Second, additional features may affect the ability of homograph recognition, including the number of hours of Internet use per day and factors related to participant psychology such as emotional state, demands, and the environment when answering the questionnaire. Third, if the domains were presented to the participants in an actual simulation rather than in a self-report questionnaire, the bias could be reduced, and other participant-related information could be extracted, such as the time of accessing the domains, the scenario of accessing them, and mouse movements.
Related to the model, some promising elements can be included in the target functions. The first is the Alexa ranking. Some domains are very famous (e.g., \url{amazon.com} or \url{google.com}), and thus the participants are more familiar with them than with less popular domains (e.g., \url{coinbase.com}). The Alexa ranking can be considered in a global scope (if the survey is applied to different countries) or in a local scope (if the survey is applied to a single country, as in this work). The second is the order of the domains in the questionnaire. In fact, the participants tend to answer the first few domains carefully but gradually answer the later domains more randomly, which introduces bias. The domain order should therefore be added as a component of the target function. Furthermore, another bias can arise when participants answer that all domains are homographs, perhaps because they think the domains in such a security survey are likely to be homographs, or because they consider a false positive better than a false negative when unsure. Designing a survey that eliminates data bias is a challenge in most human-factor research.
\section{Conclusions}
\label{section:conclusion}
We designed and ran an online study to explore how user demographics, brand familiarity, and security backgrounds affect the ability to recognize homographs. We collected 2,067 responses to our survey from participants located in Japan and analyzed them using linear regression. Our results shed light on the differences in the ability of homograph recognition for different kinds of homographs. We find that 13.95\% of participants can recognize non-homographs, while 16.60\% of participants can recognize homographs whose visual similarity with the target brand domains is under 99.9\%. When the similarity increases to 99.9\%, the proportion of participants who can recognize homographs drops to only 0.19\%, and for homographs with 100\% visual similarity, no participant can recognize them. We also find that the participants exhibit different abilities at different levels of visual similarity. Female participants tend to recognize homographs, while male participants tend to be able to recognize non-homographs. Security knowledge is a significant factor for both homographs and non-homographs; surprisingly, people who have strong security knowledge tend to be able to recognize homographs but not non-homographs. Furthermore, an interesting result is that working or being educated in computer science or computer engineering is not, as hypothesized, an affecting factor for the ability to recognize homographs; however, right after being told about homograph attacks, these are the people who grasp the situation most quickly.
Regarding the implications, first, we want to raise awareness of the seriousness of the homograph attack. Second, we recommend looking into directions beyond user education to improve the ability of homograph recognition, especially for people who are male, who do not have a job, and who have less security knowledge. Third, we want to emphasize that not all domains with high visual similarity to brand domains are homographs. User education about non-homographs is also necessary and can be aimed especially at people who are female, older, have a job, and have good security knowledge.
\section{Introduction}
In recent years, the field of radio detection of air
showers has advanced quite rapidly~\cite{huege2016,Frank}. Estimating the depth
of the shower maximum, $X_{\rm max}$, with improved accuracy is
of great interest for the study of the primary particle composition \cite{nature,Apel}.
The development of the air shower induced by a cosmic ray is governed by the interactions and
decays of the secondary particles. The secondary electrons and
positrons in the air shower undergo charge separation as
they travel through the magnetic field of the Earth. This leads to a
time-varying transverse current, producing radio emission. There is
another small contribution to the radiation from the excess of negative charge accumulated
at the shower front, known as the \textquoteleft{Askaryan effect}\textquoteright~\cite{Askaryan}.
The emission reaches the ground as a short pulse on the order of 10 to 100 ns
with a specific lateral intensity distribution, or footprint, that depends
on $X_{\rm max}$; $X_{\rm max}$ is calculated in terms of total atmospheric matter traversed by the air shower
from the top of the atmosphere to the point where the particle number reaches the maximum.
\textcolor{black}{It is therefore important to know the altitude-dependent air density.}
Another atmospheric parameter that plays a crucial role
in the radio emission is the refractive index of air.
If for a given emission region along the shower axis an observer is
located at the corresponding Cherenkov angle, radiation
emitted from all along this region arrives simultaneously.
This results in a highly compressed signal in time, forming
a ring-like structure on the ground~\cite{anna,ANITA_che}.
The refractive index determines the propagation velocity of the radio signal
at different altitudes and influences the time compression \cite{zharies,krijn}.
For observers located on the Cherenkov ring, pulses are coherent up to GHz frequencies \cite{Smida}. The cosine of the angle at which Cherenkov emission is emitted is inversely proportional
to the refractive index. At higher frequencies, pulses are more sensitive to the refractive index. In general, at all frequencies, variations in the refractive index lead
to changes in the radio intensity footprint \cite{Arthurpaper}. \textcolor{black}{ Both the density and the refractive index of air are dependent on air
temperature, humidity and pressure. Thus, having a good understanding of these atmospheric variables is crucial. }
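As a rough numerical illustration (assumed values, not taken from the paper), the Cherenkov condition $\cos\theta_c = 1/(n\beta)$ links the refractive index $n$ directly to the emission angle; for $n - 1 \approx 3\times 10^{-4}$ near sea level and $\beta \approx 1$, one finds $\theta_c \approx 1.4^{\circ}$:

```python
# Cherenkov angle from the refractive index: cos(theta_c) = 1/(n*beta).
# For n close to 1 this reduces to theta_c ~ sqrt(2*(n - 1)).
import math

def cherenkov_angle(n, beta=1.0):
    """Cherenkov angle in radians for a particle moving at speed beta*c."""
    return math.acos(1.0 / (n * beta))
```

The angle thus grows with altitude-dependent $n$, which is why a realistic refractive-index profile matters for the time compression of the signal.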
\vspace{0.5 cm}
The radio detection technique can be used in combination with established
techniques such as fluorescence detection and surface detection with scintillators and water Cherenkov detectors. Dense antenna arrays like the core
of the LOFAR radio telescope \cite{lofar}\, provide the
opportunity to investigate the radio footprint, i.e.\ the
lateral intensity distribution, in close detail and enable the measurement of
$X_{\rm max}$ with a precision better than $20~\mathrm{g/cm^{2}}$.
The precision is sensitive to the choice of an atmospheric model included in the Monte Carlo
air shower simulation codes. Several parameterized atmospheric models, based on averaged
profiles, are incorporated in the CORSIKA air shower simulation code: the U.S. standard atmosphere parameterized according to J. Linsley \cite{corsikamanual},
parameterized atmospheres for the Pierre Auger Observatory near Malarg\"{u}e (Argentina)
by M. Will and B. Keilhauer \cite{bianca}, and South Pole atmospheres parameterized by P. Lipari and by D. Chirkin, among others.
So far, the US standard atmosphere has been used in LOFAR analyses, through CORSIKA simulations \cite{corsikamanual} and the CoREAS extension
\cite{corsikamanual} which is used to calculate the radio emission of the air showers.
A first order linear correction to the US standard atmosphere has been applied to account for the fact that the
US-standard atmosphere does not reflect the realistic atmospheric conditions at a given time. It is preferable to
integrate a realistic atmosphere directly into the simulations. In particular, the reconstruction of $X_{\rm max}$ depends
on the refractive index of air, and so a realistic refractive index profile needs to be included.
\textcolor{black}{The effects of the refractive index, n, on the reconstructed $X_{\rm max}$ have been previously reported in Ref.\cite{codalema} and Ref.\cite{Arthurpaper}, using
different simulation codes.
In Ref.\cite{Arthurpaper}, CoREAS was used to simulate two ensembles of showers, one with a globally
higher refractivity $N= \left(n-1\right)\, 10^6$, another with standard values. A Monte Carlo based approach was taken to study the systematic
shift in reconstructed $X_{\rm max}$ by comparing
the set of simulations with higher refractivity to the standard ones. The shift in the reconstructed $X_{\rm max}$ from the default value
was found to be proportional to the geometric distance to $X_{\rm max}$. The effect was stronger in the high frequency band of 120--250 MHz than in the 30--80 MHz band.
In Ref.\cite{codalema}, a more realistic profile of the refractivity was constructed
for one particular day using information from the Global Data Assimilation System, GDAS, a global weather database. The differences between this atmosphere and default atmospheres were studied
using the SELFAS radio emission simulation code \cite{selfas}. The results showed that correcting for the realistic density is the most important factor in
the accurate reconstruction
of $X_{\rm max}$, causing a bias of about 30 g/cm$^2$ in $X_{\rm max}$. The second most important correction was
the inclusion of the high-frequency refractivity formula, applicable at radio frequencies, contributing a bias of about 5 g/cm$^2$ in $X_{\rm max}$. The effects of the
increased refractivity
on the time traces and the lateral distribution function (LDF) were also
reported. In the 20--80 MHz frequency band, relatively small differences in the amplitude of the electric field and the LDF were found, whereas considerable differences were
found in the high frequency band of 120--250 MHz. These results are in agreement with Ref.\cite{Arthurpaper}.
While both works paved the way for the understanding
of atmospheric effects on radio simulations, a direct application to real data using simulations with realistic atmospheric conditions was not addressed.\\
In this work, for the first time, GDAS-based atmospheric profiles, automatically included in CoREAS simulations, are applied to LOFAR data. The effects of atmospheric
parameters like pressure and humidity on the reconstructed $X_{\rm max}$ are studied and compared to the results of
previously used linear corrections. A new GDAS-based correction is introduced and compared to previous methods. Furthermore,
a tool is developed to extract GDAS atmospheric parameters, which are then interfaced with CORSIKA. The utility of this tool is not limited
to LOFAR. This code, called \textquoteleft{gdastool}\textquoteright, has been available for public use since the release of CORSIKA version 7.6300.
It is flexible and ready to be adapted by the users to obtain parameterized
atmospheric profiles for user-specified time and location. Sections 2 and 3 describe the processing of GDAS data
to extract the atmospheric state variables and examples of atmospheric profiles at the LOFAR site,
respectively. Section 4 covers the details of the implementation of GDAS in CORSIKA. In sections 5 and 6, LOFAR cosmic ray data are
evaluated with the GDAS atmospheric profiles, the GDAS-correction factor is introduced and the explicit effects of humidity on
shower parameters are discussed. }
\section{Extracting atmospheric variables from GDAS data}
The Global Data Assimilation System (GDAS) developed at
NOAA's\footnote{National Oceanic and Atmospheric Administration.} National Centers for Environmental Prediction (NCEP) is a
tool used to describe the global atmosphere. It is run
four times a day (0, 6, 12, and 18 UTC) and provides a 3-, 6- and 9-hour forecast
based on the interpolation of meteorological measurements from all over the world including
weather stations on land, ships and airplanes as well as
radiosondes and weather satellites~\cite{gdas}. The three-hourly data are available at
23 constant pressure levels, from 1000 hPa (roughly sea level) to
20 hPa ($\approx 26 \, \mathrm{km}$) on a global $1^{\circ}$ spaced
latitude-longitude grid ($180^{\circ}$ by $360^{\circ}$). Each data
set is complemented by data at the surface level. The data are stored
in weekly files and made available online. In order to model a realistic atmosphere one
needs to obtain the suitable atmospheric
parameters from GDAS. Parameters like temperature (K), height (m),
relative humidity ($H$, in \%) and pressure (hPa) can be directly extracted
from the database. In the GDAS data, the altitude is in geopotential units with respect to a geoid (mean
sea level). This is an adjustment to geometric height or elevation above mean sea level using the
variation of gravity with latitude and elevation. To convert from geopotential height $h$ (m) to
standard geometric altitude $z$ (m) we use the formula
\begin{equation}
z\left(h \, , \Phi\right) = \left(1+ 0.002644 \cdot \cos(2\Phi)\right)
\cdot h + (1+0.0089\cdot \cos(2\Phi)) \left(\frac{h^{2}}{6245000}\right)
\end{equation}
where $\Phi$ is the geometric latitude \cite{gdasauger}.
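As an illustration, the conversion above can be sketched in a few lines of Python (a minimal helper with our own naming conventions, not part of \textquoteleft{gdastool}\textquoteright \, itself):

```python
import math

def geometric_altitude(h, lat_deg):
    """Geometric altitude z (m) from geopotential height h (m) at
    geometric latitude lat_deg (degrees), following the formula above."""
    cos2phi = math.cos(2.0 * math.radians(lat_deg))
    return ((1.0 + 0.002644 * cos2phi) * h
            + (1.0 + 0.0089 * cos2phi) * h**2 / 6245000.0)
```

At mid-latitudes the resulting correction is roughly at the permille level of the altitude itself.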
To calculate the air density, the relative humidity is to be converted
into water vapor pressure. The following approximation
of the empirical Magnus formula is used to calculate the water vapor pressure (hPa)
in terms of humidity and temperature \cite{gdasauger}:
\begin{equation}
e = \frac{H}{100\%} \times 6.1064 \times \exp{\left(\frac{21.88\hspace{0.1cm}t}{265.5
\hspace{0.1cm}+\hspace{0.1cm} t}\right)} \hspace{1cm} \mathrm{for} \hspace{0.2cm}t\leq 0^{\circ}C
\nonumber
\end{equation}
and
\begin{equation}
e = \frac{H}{100\%}\times 6.1070 \times \exp{\left(\frac{17.15 \hspace{0.1 cm}t}{234.9\hspace{0.1 cm} +
\hspace{0.1 cm}t}\right)}\hspace{1cm} \mathrm{for}\hspace{0.2cm} t\geq 0 ^{\circ}C \, .
\label{humeq}
\end{equation}
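The two branches of the Magnus approximation translate directly into code; a minimal sketch (function name and argument conventions are ours):

```python
import math

def water_vapor_pressure(H, t):
    """Water vapor pressure e (hPa) from relative humidity H (%) and
    temperature t (deg C), using the Magnus approximation above."""
    if t <= 0.0:
        return H / 100.0 * 6.1064 * math.exp(21.88 * t / (265.5 + t))
    return H / 100.0 * 6.1070 * math.exp(17.15 * t / (234.9 + t))
```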
The density can be calculated from the ideal gas law as
\begin{equation}
\rho = \frac{P \hspace{0.1 cm} M_{\mathrm{air}}}{R \hspace{0.1 cm}T}
\end{equation}
where $P$ is the atmospheric pressure in Pa, $T$ is the temperature in K, $R$ is the universal gas constant with a
value of 8.31451 J K$^{-1}$ mol$^{-1}$, and
$M_{\mathrm{air}}$ is the molar mass of air. Moist air can be decomposed
into three components to calculate its molar mass: dry air, water vapor and carbon dioxide. The
molar mass of humid air is the sum of the molar masses of the components, weighted with the
volume percentage $\phi_{i}$ of that component \cite{gdasauger},
\begin{equation}
M_{\mathrm{air}} = M_{\mathrm{dry}} \cdot \phi_{\mathrm{dry}} + M_{\mathrm{water}} \cdot \phi_{\mathrm{water}} + M_{\mathrm{CO_{2}}} \cdot \phi_{\mathrm{CO_{2}}} \, .
\end{equation}
The molar masses of dry air, water vapor and CO$_{2}$ are 0.02897, 0.01802 and 0.04401~kg\,mol$^{-1}$,
respectively. The volume percentage of CO$_{2}$ is taken as 385 ppmv; the percentage of water $\phi_{\mathrm{water}}$ is
the partial pressure of water vapor divided by the pressure $P$; dry air makes up the rest. \\
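Putting the pieces together, the moist-air density follows from the molar-mass average and the ideal gas law; a sketch with the constants quoted above (variable names are ours):

```python
R = 8.31451                  # universal gas constant, J K^-1 mol^-1
M_DRY, M_WATER, M_CO2 = 0.02897, 0.01802, 0.04401  # molar masses, kg mol^-1
PHI_CO2 = 385e-6             # CO2 volume fraction (385 ppmv)

def air_density(P, T, e):
    """Moist-air density (kg m^-3) from pressure P (Pa), temperature T (K)
    and water vapor pressure e (hPa)."""
    phi_water = e * 100.0 / P            # volume fraction of water vapor
    phi_dry = 1.0 - phi_water - PHI_CO2  # dry air makes up the rest
    M_air = M_DRY * phi_dry + M_WATER * phi_water + M_CO2 * PHI_CO2
    return P * M_air / (R * T)
```

Since the molar mass of water vapor is smaller than that of dry air, humid air is slightly less dense than dry air at the same pressure and temperature.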
The refractivity, defined as $N=\left(n-1\right) \, 10^6$, is a function of humidity, pressure and temperature and can be expressed as
\begin{equation}
N = \SI {77.6890} {\kelvin\per\hecto\pascal}\frac{p_{d}}{T} + \SI{71.2952}{\kelvin\per\hecto\pascal}\frac{p_{w}}{T} + \SI{375463}{\square\kelvin\per\hecto\pascal}\frac{p_w}{T^{2}}
\label{ri}
\end{equation}
with $p_{w}$, $p_{d}$ and $T$ being the partial water
vapor pressure $\left(p_w= e \times 100 \, \mathrm{Pa}\right)$, partial dry air pressure and temperature respectively \cite{Rueger}.
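A direct transcription of Eq-\ref{ri}, with all pressures in hPa and the temperature in K (names are ours):

```python
def refractivity(p, T, e):
    """Radio refractivity N = (n - 1) * 1e6 from total pressure p (hPa),
    temperature T (K) and water vapor pressure e (hPa)."""
    p_d = p - e  # partial dry-air pressure
    return 77.6890 * p_d / T + 71.2952 * e / T + 375463.0 * e / T**2
```

The quadratic water-vapor term dominates the humidity dependence: at sea-level conditions a few hPa of water vapor already raise $N$ by several units.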
The effect of humidity is important for our study as it tends to increase the refractivity in comparison to that of dry air
at radio frequencies. The refractivities obtained at radio frequencies differ from those
in the visible, near-infrared and UV ranges, as described in \cite{gdasauger}. Quantifying the uncertainties in GDAS data
would require in situ measurements with weather balloons. Since this is beyond the scope of this work,
we refer to \cite{gdasauger}, which provides a comparison between GDAS data and weather balloon measurements in Argentina.
Since global atmospheric models are typically more precise in the Northern hemisphere where more
weather data is available we assume that
the intrinsic uncertainty of GDAS at the LOFAR site is
similar to that in Argentina. The relevant uncertainties are
$\pm$0.5~$^\circ$C for temperature, $\pm$0.5~hPa for
pressure, $\pm$0.05~hPa for water vapor pressure, and less than 1~$\mathrm{g/cm^{2}}$ in atmospheric depth over the altitude range from 3 to 6 km. The uncertainty in water vapor
pressure translates to a $2-7\%$ uncertainty in humidity.
The resulting relative uncertainty in $N$ due to these parameters is around 0.5$\%$ over the same altitude range.
The GDAS data have a resolution of $1^{\circ}$ by $1^{\circ}$ in latitude and longitude, which roughly corresponds to a distance of 100 km
between two adjacent grid points. For highly inclined showers, the distance from the observation site to the region of shower development can be larger than the distance between two grid points:
for air showers coming from a zenith angle of 70$^{\circ}$ this distance is around 70 km, and for zenith angles $>$ 75$^{\circ}$ it is about 100 km. In these cases, the choice of
a single grid point becomes complicated. Moreover, for zenith angles $>$ 70$^{\circ}$ the correction due to the curvature of the atmosphere becomes important. Neither issue arises
for LOFAR, as the detected cosmic rays are limited to zenith angles $<$ 55$^{\circ}$ by the particle detectors used for triggering.
In this regime the GDAS model works well.
\section{GDAS atmospheric profiles at the LOFAR site}
In this section several GDAS atmospheric profiles extracted at the LOFAR site are discussed.
Fig-\ref{atm} (\textbf{left}) shows humidity as a function of altitude for 5 arbitrary atmospheric profiles for different
days in the year 2011, between June and November. A significant day-to-day fluctuation is seen. The red solid and blue dashed lines indicate two very different weather
conditions: the red solid line, with
saturating humidity between $5-8$ km, suggests high cloud coverage, while the blue dashed line, with low humidity in that range, indicates low cloud coverage.
Fig-\ref{atm} (\textbf{right}) shows the difference in atmospheric depth profile between the US standard atmosphere and the GDAS atmospheres at LOFAR
for 8 profiles over the years $2011-2016$. The GDAS atmospheres
differ significantly from the US standard atmosphere. Atmospheric profiles with similar
atmospheric depth at ground can evolve differently higher in the atmosphere. This is important for calculating
the correct distance to the shower maximum.
Fig-\ref{refrac} shows the mean profile of the relative difference in refractivity $\Delta{N}_{\mathrm{relative}}$ between GDAS and the US standard atmosphere as a function of altitude,
for 100 cosmic rays recorded at LOFAR over more than 3 years.
It is defined as $\Delta{N}_{\mathrm{relative}}= (N_{\mathrm{GDAS}} - N_{\mathrm{US}})/N_{\mathrm{US}}$, where $N_{\mathrm{GDAS}}$ is calculated from
Eq-\ref{ri} using GDAS atmospheres at LOFAR. \textcolor{black}{$N_{\mathrm{US}}$ is obtained from the linear relation $N_{\mathrm{US}}=\frac{\rho_{\mathrm{us}}}{\rho_{\mathrm{sealevel}}} N_{\mathrm{sealevel}}$, with $N_{\mathrm{sealevel}}=292$. This is the default option for calculating refractivity in CoREAS as well}.
The absolute value of the mean $\Delta{N}_{\mathrm{relative}}$
is around $10\%$ near ground and around $3-8 \%$ between 3 to 10 km of altitude, the region important for shower development.
Approximately 75\% of the atmospheric matter and 99\% of the total mass of water vapor and aerosols are contained within the troposphere, the lowest layer of Earth's atmosphere.
Within the troposphere the temperature drops with altitude, reaching a constant value in the tropopause, the boundary region between troposphere and stratosphere.
In the U.S. standard atmosphere the troposphere ends at 11 km and the tropopause extends to an altitude of 20 km. For the local GDAS atmospheres these boundaries are not sharply defined.
The flat part in the mean $\Delta{N}_{\mathrm{relative}}$ above 10 km in Fig-\ref{refrac} is the result of the constant temperature
in the tropopause. However, the contribution from this region to the radio emission is minimal.
To account for the effect of the refractive index on the propagation time of the radio signal, it is important to calculate the effective refractivity $N_{\mathrm{eff}}$ \cite{huege2016,zharies},
defined as
\begin{equation}
N_{\mathrm{eff}}=\frac{\int N(h)dh}{D}
\nonumber
\end{equation}
where $D$ is the distance between the line of emission and observer.
The values of the relative effective refractivity $\Delta{N}_{\mathrm{relative}}^{\mathrm{eff}}$ between the GDAS and US standard atmospheres
are around $7-10\%$ in the altitude range mentioned above, \textcolor{black}{for observers within 100 m of the shower axis.}
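For a tabulated refractivity profile, $N_{\mathrm{eff}}$ can be approximated by a trapezoidal integral; a sketch for a straight (here vertical) path between two altitudes (the profile values used below are illustrative, not GDAS data):

```python
def effective_refractivity(heights, N_values):
    """Effective refractivity along a straight path: trapezoidal integral
    of N(h) over the path, divided by the path length D."""
    D = heights[-1] - heights[0]
    integral = sum(0.5 * (N_values[i] + N_values[i + 1])
                   * (heights[i + 1] - heights[i])
                   for i in range(len(heights) - 1))
    return integral / D
```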
\begin{figure}[h!]
\includegraphics[width=0.5\linewidth]{hum_test.pdf}
\includegraphics[width=0.5\linewidth]{test_atmdep_new.pdf}
\caption{Atmospheric profiles at LOFAR. \textbf{Left}: Example of 5 humidity profiles between June to November during the year 2011. \textbf{Right}: 8 profiles for the difference in atmospheric depth between
US standard atmosphere and GDAS atmospheres as a function of altitude between the years $2011-2016$. }
\label{atm}
\includegraphics[width=0.55\linewidth]{rel_ri_mean.pdf}
\centering
\caption{Mean relative refractivity, defined as $\Delta{N}_{\mathrm{relative}}= \frac{N_{\mathrm{GDAS}}-N_{\mathrm{US}}}{N_{\mathrm{US}}}$; profiles
for 100 recorded cosmic rays at LOFAR spanning over the years 2011 to
2014. The black solid line denotes the mean profile and the blue dashed lines show the standard deviations. }
\label{refrac}
\end{figure}
\section{Implementation in CORSIKA/CoREAS}
To incorporate the atmospheric parameters extracted from
GDAS in CORSIKA and CoREAS we have developed a program named \textquoteleft{gdastool}\textquoteright
\, that downloads the required GDAS file given the time and location
of observation of the event and returns refractive indices
between ground and the highest GDAS level. It also
fits the density profile according to the standard 5 layer atmospheric
model used in CORSIKA \cite{corsikamanual}. In this model
the density $\rho(h)$ has an exponential dependence on the altitude, leading
to the following functional form for the mass overburden $T(h)$, i.e.\ the density
integrated over height $h$ (km):
\begin{equation}
T(h) = a_{i} + b_{i} e^{-10^{5}h/c_{i}} \hspace{1cm}i = 1, . . . , 4 \, .
\label{dep}
\end{equation}
Thus, the density is
\begin{equation}
\rho(h) = \frac{b_{i}}{c_{i}} \, e^{-10^{5}h/c_{i}} \hspace{1cm}i = 1, . . . , 4 \, .
\label{den}
\end{equation}
In the fifth layer the overburden is assumed to decrease linearly with height.
The parameters $a_{i}$, $b_{i}$ and $c_{i}$ are obtained such
that the function $T(h)$ is continuous at the layer boundaries
and can be differentiated continuously. The first three layers are built from the 24 density points obtained from GDAS data.
\textcolor{black}{The first layer consists of 10 points, the second layer of 7 points and the third layer of 7 points.}
Since GDAS provides data on constant pressure levels, not of constant heights, the layer boundaries vary slightly
between different atmospheric profiles. \textcolor{black}{The mean values of the boundaries for the conditions of 100 cosmic ray events
are 3.56$\pm$0.11 km, 9.09$\pm$0.23 km, 26.27$\pm$0.56 km from boundary 1 to 3, respectively.}
Next, we fit the
data to Eq- \ref{den} in the following way:\\
For layer 1 the density profile is fitted with two free parameters.
Then the density $\rho_{1}$ at boundary 1 is calculated using Eq- \ref{den} with the obtained parameters $b_{1}$, $c_{1}$.
The condition that the density has to be continuous at the boundaries reduces
the number of free parameters for the second layer to one: $b_{2}$ can be expressed
as a function of $\rho_{1}$ and $c_{2}$, with $c_{2}$ being the only free parameter.
The same fitting procedure is repeated for the third layer.
\textcolor{black}{The fourth layer ranges from the highest GDAS altitude to 100 km. At these altitudes there are no physical GDAS data.
The parameter $c_4$ is obtained by fitting the last GDAS point and the density at 100 km from US standard atmosphere.
At these altitudes the mass overburden is less than 0.1$\%$ of the value at ground. The important factor is to satisfy the boundary conditions throughout the atmosphere.
Along with density the continuity of mass overburden is also preserved.}
For that, once a smooth profile for the density is obtained,
the parameters $a_{i}$ in Eq- \ref{dep} are solved for analytically, using the boundary conditions for the mass overburden.
The parameterization for the fifth layer was adapted from the US standard atmosphere \cite{corsikamanual}.
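The continuity conditions described above can be made concrete in a short sketch: given fitted $b_i$, $c_i$ and the layer boundaries, the $a_i$ follow by matching $T(h)$ at each boundary (the parameter values used in testing are synthetic, not fitted GDAS values):

```python
import math

def solve_a(b, c, boundaries, a1):
    """Given b_i, c_i for four exponential layers, the layer boundaries (km)
    and a_1, return all a_i such that T(h) = a_i + b_i*exp(-1e5*h/c_i)
    (h in km, c_i in cm) is continuous at every boundary."""
    a = [a1]
    for i, h in enumerate(boundaries):
        t_below = a[i] + b[i] * math.exp(-1e5 * h / c[i])
        a.append(t_below - b[i + 1] * math.exp(-1e5 * h / c[i + 1]))
    return a

def overburden(h, a, b, c, boundaries):
    """Evaluate the layered mass overburden T(h) at altitude h (km)."""
    i = sum(h > hb for hb in boundaries)   # index of the layer containing h
    return a[i] + b[i] * math.exp(-1e5 * h / c[i])
```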
The \textquoteleft{gdastool}\textquoteright \, also returns a density profile plot with the best fit parameters as a function of altitude and
the rms of the relative density difference between data and fit. The relative density difference is defined
as $\frac{\rho_{\mathrm{fit}}-\rho_{\mathrm{data}}}{\rho_{\mathrm{fit}}}$, and its rms is used
as a goodness of fit. Fig-\ref{errden} (\textbf{left})
shows an example density profile, comparing the fitted model and GDAS.
The mean relative error in density
for 100 profiles as a function of altitude is presented in
Fig-\ref{errden} (\textbf{right}). At lower altitudes the model fits the data very well;
deviations $>$ 2\% start at altitudes above 15 km, which are less important
for shower development. A bump in the profile at 10 km is observed; this can
be explained by the change in the atmosphere at the troposphere boundary, as
discussed in the previous section.
The fitted model of Eq- \ref{dep} introduces an error on the atmospheric depth.
It is on the order of 2 $\mathrm{g/cm^2}$ on average over the altitude range mentioned above,
with a spread of $4-5$~$\mathrm{g/cm^2}$. \\
\begin{figure}[h!]
\includegraphics[width=0.5\linewidth]{demo_error.pdf}
\includegraphics[width=0.5\linewidth]{mean_density_error.pdf}
\caption{\textbf{Left}: Example of one density profile, GDAS and the fitted 5-layered atmospheric model. The bottom
panel shows the relative error defined as $\frac{\rho_{\mathrm{fit}}-\rho_{\mathrm{data}}}{\rho_{\mathrm{fit}}}$.
\textbf{Right}: Mean relative error in density for 100 different atmospheric profiles. The mean is calculated at each of the 24
GDAS points for all the profiles. The error bars indicate the standard deviation.}
\label{errden}
\end{figure}
The \textquoteleft{gdastool}\textquoteright \, can be executed as a stand-alone script within CORSIKA.
Given the coordinates and a UTC timestamp as input parameters, it downloads the required GDAS files
and extracts atmospheric data. It then returns an output file that contains
fitted mass overburden parameters and tabulated refractive indices interpolated to 1 m intervals.
This output file can be invoked through the CORSIKA steering file. When called, it replaces the
default atmospheric parameters in CORSIKA with the new ones and the on-the-fly refractive index calculation
in CoREAS with the look-up table.
\section{Effects on the reconstruction of the depth of the shower maximum}
The highest precision for the determination of $X_{\rm max}$ with the radio technique
is currently achieved with the LOFAR radio telescope. Situated in the north of the
Netherlands, the dense core of LOFAR consists of
288 low-band dipole antennas within a circle with a diameter of 320 meters, known
as the Superterp. The radio emission from air showers in the frequency range 30--80~MHz
is recorded by the LOFAR low-band antennas \cite{lofar,LOFAR545454}. An array of particle
detectors installed on the Superterp provides the trigger for the detection of the air showers \cite{Lora}. \\
The $X_{\rm max}$ reconstruction technique used at LOFAR is based on the production of dedicated
simulation sets for each detected air shower. The number of simulations needed to reconstruct the shower maximum
is optimized with CONEX \cite{stijnicrc}.
A set of full CORSIKA simulations with proton and iron primaries is produced for each detected cosmic ray.
The radio emission is simulated in a star-shaped pattern for antenna positions in the shower plane using
CoREAS. An antenna model is applied to the simulated electric fields and compared to the
measured signal in the dipole antennas \cite{katiepaper}. The time integrated pulse power is calculated in
a 55 ns window centered around the pulse maximum, summed over both polarizations. Finally, a
two-dimensional map of the time integrated power is created by interpolating the star-shaped pattern \cite{xmax}.
In the previous analysis a hybrid fitting technique was used, in which the radio and particle data were fitted simultaneously to the
two-dimensional radiation map and the one-dimensional particle lateral distribution function.
In this work, instead of the combined fit, we fit only the radio data to the radio simulation.
The advantage of switching to the radio-only fitting method is that it reduces the systematic uncertainties.
\begin{figure}[h!]
\includegraphics[width=0.5\textwidth, height=0.3\textheight]{reco_us.pdf}
\includegraphics[width=0.5\textwidth, height=0.3\textheight]{reco_gdas.pdf}
\caption{Quality of fit as a function of simulated $X_{\rm max}$ for a LOFAR event of
energy $1.4\times 10^{8}$ GeV, with a zenith angle of 38$^{\circ}$. \textbf{Left}: simulated with the default US standard atmosphere,
reconstructed $X_{\rm max}=$ 675.8
$\mathrm{g/cm^{2}}$. Applying the linear first order atmospheric correction, the resulting $X_{\rm max}=$ 658 $\mathrm{g/cm^{2}}$. \textbf{Right}:
simulated with the GDAS atmosphere, reconstructed $X_{\rm max}=$ 638.3~$\mathrm{g/cm^{2}}$.
The reconstructed $X_{\rm max}$ in both cases is indicated by solid black lines. }
\label{fitdemo}
\end{figure}
Fig-\ref{fitdemo} shows the fit quality for an air shower detected with LOFAR as a function of the simulated $X_{\rm max}$, for
two different atmospheres: the corresponding GDAS atmosphere and the US standard atmosphere. The reconstructed
value of $X_{\rm max}$ is found from the minimum of a parabola fitted around the best-fitting points.
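This vertex-of-parabola step can be sketched in a short, self-contained least-squares fit (the quality-of-fit values used for testing are synthetic, not LOFAR data):

```python
def parabola_minimum(x, y):
    """Least-squares fit of y = p2*u^2 + p1*u + p0 with u = x - mean(x)
    (centering for numerical stability); returns the x-position of the
    vertex, taken as the reconstructed X_max."""
    xm = sum(x) / len(x)
    u = [xi - xm for xi in x]
    S = [sum(ui**k for ui in u) for k in range(5)]
    T = [sum(yi * ui**k for ui, yi in zip(u, y)) for k in range(3)]
    # normal equations A * (p2, p1, p0) = rhs for the quadratic fit
    A = [[S[4], S[3], S[2]], [S[3], S[2], S[1]], [S[2], S[1], S[0]]]
    rhs = [T[2], T[1], T[0]]
    def det3(m):
        return (m[0][0] * (m[1][1]*m[2][2] - m[1][2]*m[2][1])
              - m[0][1] * (m[1][0]*m[2][2] - m[1][2]*m[2][0])
              + m[0][2] * (m[1][0]*m[2][1] - m[1][1]*m[2][0]))
    D = det3(A)
    def col_replaced(j):  # Cramer's rule helper
        return [[rhs[r] if cc == j else A[r][cc] for cc in range(3)]
                for r in range(3)]
    p2 = det3(col_replaced(0)) / D
    p1 = det3(col_replaced(1)) / D
    return xm - p1 / (2.0 * p2)
```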
We chose a LOFAR event for which the ground pressure was much lower, by 20 hPa, than in the US standard atmosphere. The atmospheric profile for this
particular event is represented by the blue line with circles in Fig-\ref{atm} (\textbf{right}).
The reconstructed $X_{\rm max}$ with the US atmosphere corresponds to a much higher mass overburden
than the reconstructed $X_{\rm max}$ using the much thinner GDAS atmosphere.
In this example this translates to a difference of around 37.5~$\mathrm{g/cm^{2}}$
in the reconstructed $X_{\rm max}$ between the two cases. This large deviation is attributed to the
extreme weather condition for the shower chosen in the example. In the previous LOFAR analysis a correction
factor to the US atmosphere was used to account for the real atmosphere \cite{nature,xmax}.
The simulations produced with the US standard atmosphere approximately yield the correct geometric altitude of the shower maximum. The
corrected $X_{\rm max}$ is then calculated by integrating the GDAS density profile obtained at LOFAR from the top of the atmosphere
to the geometric altitude of $X_{\rm max}$:
\begin{equation}
X(h)= \frac{1}{\cos\theta} \int_{h}^{\infty}\rho_{\mathrm{gdas}}(h') \, dh' \, .
\end{equation}
The corrected $X_{\rm max}$ for this particular example is 658 $\mathrm{g/cm^{2}}$ and the difference between the
corrected and new $X_{\rm max}$ is about 20 $\mathrm{g/cm^{2}}$.\\
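The integral above can be evaluated numerically on the tabulated GDAS density profile; a sketch using the flat-atmosphere $1/\cos\theta$ factor, adequate for the moderate zenith angles considered here (the exponential profile in the test is illustrative, not GDAS data):

```python
import math

def slant_depth(h0, theta_deg, heights, densities):
    """Slant depth X(h0) in g/cm^2: trapezoidal integral of a tabulated
    density profile (heights in m, ascending; densities in kg/m^3) from
    grid point h0 to the top, divided by cos(theta)."""
    vert = sum(0.5 * (densities[i] + densities[i + 1])
               * (heights[i + 1] - heights[i])
               for i in range(len(heights) - 1) if heights[i] >= h0)
    return 0.1 * vert / math.cos(math.radians(theta_deg))  # kg/m^2 -> g/cm^2
```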
\begin{figure}[h!]
\begin{center}
\includegraphics[width=0.7\textwidth]{XmaxvsP.png}
\caption{Difference in mean $X_{\rm max}$ as a function of ground pressure. The total sample contains
123 air showers recorded at LOFAR. The black line denotes the U.S standard atmospheric pressure. }
\label{Pplot}
\end{center}
\end{figure}
Using the same approach described above we have studied 123 air showers recorded with LOFAR with three simulation sets:
\begin{itemize}
\item \textbf{Set A}\textendash the showers were simulated with CORSIKA v-7.6300 and GDAS atmosphere.
\item \textbf{Set B}\textendash the showers were simulated with CORSIKA v-7.4385 and US standard atmosphere.
\item \textbf{Set C}\textendash this set is identical to \textbf{Set B} but with the additional atmospheric correction factor to it as described above.
\end{itemize}
The effect of using different CORSIKA versions on the reconstructed $X_{\rm max}$, irrespective of the atmospheric model,
was also probed. The difference in $X_{\rm max}$ between CORSIKA versions 7.6300 and 7.4385 was found to be very small, around 1.4 $\mathrm{g/cm^{2}}$.
This confirms that the differences between Set-A, Set-B and Set-C are due to the different atmospheric models, and not an artifact of different CORSIKA versions.
In Fig-\ref{Pplot} the difference in mean reconstructed $X_{\rm max}$ between the various simulation sets mentioned above is plotted against
ground pressure bins obtained from GDAS. Both the blue circles and red squares converge to zero where GDAS pressure approaches the US standard pressure
at 1013 hPa.
The red squares show large $\Delta X_{\rm max}$ in general. This is expected, as no atmospheric correction is involved in Set-B.
The blue circles, however, still show a sizable deviation at both low and high pressure values. This suggests that the linear first order correction added
to the US standard atmosphere in Set-C is not sufficient. As refractive index effects cannot be included in the linear first order correction,
full GDAS-based atmospheric profiles are needed for more extreme atmospheric
conditions.\\
\begin{figure}[h!]
\includegraphics[width=0.5\textwidth, height=0.3\textheight]{scatter_deltaX_5km.png}
\includegraphics[width=0.5\textwidth, height=0.3\textheight]{residual_X5km.png}
\caption{\textbf{Left}: scatter plot of $\Delta{X_{\rm max}}= X_{\rm max}^{\rm gdas} - {X_{\rm max}^{\rm us}}$ vs difference in slanted mass overburden
$\Delta{X_{\rm 5km}}= X_{\rm 5km}^{\rm gdas} - {X_{\rm 5km}^{\rm us}}$. The red line is a linear fit to the profile. \textbf{Right}: Histogram shows the
residual of fitted and actual $X_{\rm max}$; residual= $X_{\rm max}^{\rm corr} - X_{\rm max}^{\rm gdas}$.}
\label{globalcorr}
\end{figure}
\noindent \textcolor{black} {Here, we study the possibility of introducing a new global correction factor to the $X_{\rm max}$ reconstructed with the US standard atmosphere, to correct
for realistic atmospheres without having to run full GDAS-based CoREAS simulations. To achieve this, we studied the correlation of $X_{\rm max}$ with
the refractivity and with the slanted mass overburden, defined as the density integrated from the edge of the atmosphere to a given height
along the direction of the zenith angle, at different altitudes.
Both the correlation between $X_{\rm max}$ and refractivity and that between $X_{\rm max}$ and the slanted mass overburden
are poor at ground level and at lower altitudes. At higher altitudes, between 4 and 6 km,
$X_{\rm max}$ and the mass overburden show a higher correlation, which is not prominent in the $X_{\rm max}$ vs refractivity profiles at these altitudes. We found the
strongest correlation at an altitude of 5 km.
Fig-\ref{globalcorr} (left) shows the scatter plot of $\Delta{X_{\rm max}}$, defined as $X_{\rm max}^{\rm gdas} - {X_{\rm max}^{\rm us}}$, versus the difference in
the slanted mass overburden, $\Delta{X_{\rm 5km}}= X_{\rm 5km}^{\rm gdas} - {X_{\rm 5km}^{\rm us}}$. The strong correlation suggests that
the profile can be fitted with a straight line, which serves as a parameterization of the global correction factor:
\begin{equation}
X_{\rm max}^{\rm corr} - X_{\rm max}^{\rm us} = 0.9\left(X_{\rm 5km}^{\rm us}-X^{\rm gdas}_{\rm 5km}\right) +0.28 .
\end{equation}
The histogram in Fig-\ref{globalcorr} (right) shows the residual of $X_{\rm max}^{\rm corr}$
with respect to ${X_{\rm max}^{\rm gdas}}$. The distribution is symmetric with mean 0 g/cm$^2$ and standard deviation 11.56 g/cm$^2$. The fluctuations are within the typical
systematic uncertainty of the reconstructed $X_{\rm max}$ with LOFAR,
which is around 17 g/cm$^2$ \cite{xmax}. This correction factor can be used as a rule of thumb for the estimation of
the reconstructed ${X_{\rm max}}$, with the following caveats. It is specific to LOFAR, as the simulations were performed with the weather conditions, observation level,
and magnetic field particular to LOFAR. Corresponding correction equations for other experiments can be constructed in the same manner and
can yield different results depending on the atmospheric parameters. \\
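As an illustration of the parameterization (the numbers are chosen for the example only): for an event where the GDAS slanted mass overburden at 5 km exceeds the US standard value by 10 g/cm$^2$, i.e. $X_{\rm 5km}^{\rm us}-X^{\rm gdas}_{\rm 5km}=-10$ g/cm$^2$, the correction evaluates to
\begin{equation}
X_{\rm max}^{\rm corr} - X_{\rm max}^{\rm us} = 0.9\times(-10) + 0.28 \approx -8.7~\mathrm{g/cm^{2}},
\end{equation}
a shift comparable in size to the residual spread of the method quoted above.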
However, while this global correction factor is very useful when a fast reconstruction is needed, we will use the full Monte Carlo approach in a future composition analysis.
Simulations with event-specific GDAS atmospheres are always more
accurate than the correction factor. The correction factor might also introduce biases related to the mass of the primary particles.
Proton primaries on average generate showers that reach their maximum lower in the atmosphere than iron primaries; these kinds of effects are not taken into account.}
\section{Effects of humidity}
As described in section 2, in the radio frequency regime, humidity increases the
refractive index. For this study, two sets of simulations were produced. In one set the showers were simulated with the respective GDAS atmosphere and in the other
with a GDAS atmosphere with vanishing humidity. This was achieved
by hard-coding the partial water vapor pressure in Eq-\ref{humeq} to negligible values.
For the GDAS atmosphere an extremely humid weather condition at the LOFAR site was chosen.
The same atmospheric
parameters are used in both cases to ensure that the particles evolve in a similar way in the atmosphere and produce the same
shower maximum. In this way the inclusion of humidity only influences the simulated radio pulses.
The difference in the refractive index manifests itself as propagation effects on the pulse arrival time
and power. A pulse propagating through an atmosphere
with a higher refractive index will have a lower velocity compared to dry air. This results in a delayed arrival time of the signal, as seen in Fig-\ref{time}.
The difference in peak arrival time is less than 1 ns for an observer at 150 m. The effect is found to be less prominent
for observers further away from the axis. The lateral distribution of the energy fluence, the time-integrated power per unit area, at different observer
positions is also studied in different frequency bands for these two cases, as shown in Fig-\ref{cheren}. In the low frequency band of 30--80~MHz
relevant for LOFAR the difference in fluence between the two sets is small, \textcolor{black}{from around 4$\%$ close to the shower axis to 2$\%$ at a distance of 100~m from the axis.
In the high frequency band of 50--350~MHz the
values are larger, being around 8$\%$ at 100 m from the core. In the higher frequency band the Cherenkov-like effects
become stronger and the signal is compressed along the Cherenkov ring \cite{nelles2015}.
A rough estimate of the radius of the ring can be obtained from the projection of a cone with
an opening angle given by the Cherenkov angle, starting
from the shower maximum. The opening angle depends strongly on the index of refraction.
This explains the larger difference in power in Fig-\ref{cheren}. \textcolor{black}{
Similar effects in the high and low frequency bands were also reported in \cite{codalema} by studying the LDF of the electric field profiles.}
Inside the Cherenkov radius pulses
are stretched due to refractive index effects. For higher refractive
indices this leads to lower pulse power, which explains the negative
sign in the relative fluence for observer distances close to the core.
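The geometric estimate described above can be made explicit. For a refractive index $n$ close to unity, the Cherenkov angle and the corresponding ring radius are approximately
\begin{equation}
\cos\theta_{\rm C}=\frac{1}{n}\,, \qquad \theta_{\rm C}\simeq\sqrt{2(n-1)}\,, \qquad r_{\rm C}\simeq d_{X_{\rm max}}\tan\theta_{\rm C}\,,
\end{equation}
where $d_{X_{\rm max}}$ is the geometric distance from the shower maximum to the observer plane. This is a sketch assuming an effective constant $n$ along the line of sight; since $\theta_{\rm C}$ scales as $\sqrt{n-1}$, humidity-induced changes in the refractivity directly shift the ring radius, and hence the fluence pattern, most visibly at high frequencies.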
\begin{figure}[h!]
\begin{center}
\includegraphics[width=0.65\linewidth]{timepulse_150new.pdf}
\caption{Unfiltered electric field components of a CoREAS pulse
in time for two different refractive index profiles, for a
10$^{17}$ eV proton shower with a zenith angle of 45$^\circ$ coming from the east, for an observer at 150 m from the axis. The solid and dashed lines
represent the profiles with the lower and higher refractive indices, respectively. }
\label{time}
\end{center}
\end{figure}
\begin{figure}[h!]
\includegraphics[width=0.5\linewidth]{rel_power_lofar_newnew.pdf}
\includegraphics[width=0.5\linewidth]{rel_power_ska_newnew.pdf}
\caption{LDF profiles for a $10^{17}$ eV proton shower coming from a zenith angle of 45$^\circ$ with $X_{\rm max}=593~\mathrm{g/cm^{2}}$. Observers are located
to the west of the shower axis. \textbf{Left}: low frequency band between 30--80~MHz, \textbf{Right}: high frequency band between
50--350~MHz. The upper panels show the LDF of the total fluence for the humid and non-humid sets, the lower panels show the relative difference between the two. }
\label{cheren}
\end{figure}
\begin{figure}[h!]
\includegraphics[width=0.5\textwidth, height=0.3\textheight]{offset_allhum_lofar.pdf}
\includegraphics[width=0.5\textwidth, height=0.3\textheight]{offset_allhum_ska.pdf}
\caption{Histogram of $\mathrm{\Delta{X_{\max}}}=\mathrm{X_{reco}} -\mathrm{X_{real}}$, the difference between the reconstructed and true
value of $X_{\rm max}$, obtained from the Monte Carlo study between the humid and non-humid simulation sets. \textbf{Left}: for the low frequency band of 30--80~MHz.
\textbf{Right}: for the high frequency band of 50--350~MHz. The shift in $X_{\rm max}$ is significant at the 2$\sigma$ level. }
\label{histxmax}
\end{figure}
The radiation energy is the total energy contained in the radio signal. It scales quadratically with the cosmic ray energy and can thus
be used as a cosmic ray energy estimator \cite{Augerrad,radioprl}. The surface integral over the radio LDF mentioned above yields
the radiation energy. The relative difference in the integrated LDF between the humid and non-humid profiles for both the low and
high frequency regimes is smaller than 1$\%$. This indicates that humidity has almost no effect on the cosmic ray energy estimated
from the radiation energy, as was also concluded in \cite{Glaser}.
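Schematically (the normalization of the scaling relation is experiment- and geometry-specific and is omitted here as a sketch),
\begin{equation}
E_{\rm rad}=\int f(\vec{r})\, d^{2}r \,, \qquad E_{\rm rad}\propto E_{\rm CR}^{2}\,,
\end{equation}
where $f(\vec{r})$ is the energy fluence at observer position $\vec{r}$. Due to the quadratic scaling, a relative change of less than 1$\%$ in $E_{\rm rad}$ corresponds to a change of less than 0.5$\%$ in the estimated cosmic ray energy.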
\begin{comment}
\includegraphics[width=0.6\textwidth, height=0.26\textheight]{heightvstime.pdf}
\includegraphics[width=0.6\textwidth, height=0.26\textheight]{timepulse_e07_100.png}
\caption{\textbf{left}Emission height as a function of observer time. \textbf{right:} West component of an electric field
vs time in a 10$^7$ GeV proton shower}
\label{cheren}
\end{center}
\end{figure}
Considering a simple 1-D model the arrival time of the signal from
different emission height can be calculated as a function of effective refarctive index and observer distance. In \cite{cheren} such plot is
shown for two different refractive index profile mentioned above. The longitudinal development of air shower particles is also superimposed.
There is a clear shift in time between the red and blue lines. The red line belonging to the non-humid set has higher refractive index, it
thus translates to an early arrival of signal. This is evident from the right of \cite{cheren}.
Although overall shape of the pulse is not effected by time dependence. Narrower pulses could be
of higher humidity values and this might translate to higher frequency response.
\begin{figure}
\begin{figure}
\end{comment}
Next, to investigate the effect of humidity on $X_{\rm max}$ measurements we have performed a Monte Carlo comparison study between two sets of simulations that
treat the atmospheres in the same way as described at the beginning of this section.
\textcolor{black}{For each of these cases we have used a set of 40 simulated events with different energies, zenith and azimuth angles. Each of these sets consists of an ensemble of proton- and iron-initiated showers based on CONEX selection criteria.}
One shower from the set with higher humidity is taken as reference and all the simulated showers from the set with zero humidity are used to perform the reconstruction. This yields a reconstructed
$\mathrm{X_{reco}}$ that can be compared to the actual $\mathrm{X_{real}}$ of the reference shower.
The same method is repeated for all the showers in the set with higher humidity. \textcolor{black}{Showers with extreme values of $X_{\rm max}$ were not included in the fit. The range of the fit
was taken as $\pm$ 50 $\mathrm{g/cm^{2}}$ around the actual $X_{\rm max}$ of the test shower.}
The difference $\mathrm{X_{reco}} -\mathrm{X_{real}}$ estimates the effect of humidity on the reconstructed $X_{\rm max}$.
We do not observe any significant shift in $X_{\rm max}$ in this study.
This indicates that these effects are most likely smaller
than the overall resolution in reconstructed $X_{\rm max}$ in the LOFAR frequency band. We also performed the same study in a higher frequency band between 50 and 350 MHz, corresponding to the SKA-low band. There, an overall shift of 6.8 $\mathrm{g/cm^{2}}$ in the reconstructed $X_{\rm max}$ was observed. These results, shown in Fig-\ref{histxmax}, are in line with the LDF studies described
earlier in this section. \\
\textcolor{black}{In Ref.~\cite{Arthurpaper}, larger shifts of about 10 to 22 g/cm$^2$ in the reconstructed $X_{\rm max}$ were reported in the high frequency band of 120--250~MHz
for a 4\% higher refractivity, and of 3.5 to 11 g/cm$^2$ in the low frequency band of 30--80~MHz. A toy model was used to describe the effects, based
on the assumptions that the size of the radio footprint on the ground is proportional to the geometric distance to $X_{\rm max}$ and to the Cherenkov angle
at the altitude of $X_{\rm max}$. A constant higher refractivity then corresponds to a larger Cherenkov angle, resulting in an underestimation of $X_{\rm max}$,
and leads to a clear linear relation between the shift in $X_{\rm max}$ and the distance to $X_{\rm max}$.
Without prior knowledge of individual atmospheric conditions, an overall scaling of the refractivity profile had to suffice. However,
the realistic scenario is quite different: there is a strong interplay between humidity, pressure, and temperature, which is reflected in the refractivity.
The relative refractivity profile in Fig-\ref{refrac} shows that
the shift is not constant but altitude dependent; from near the ground to higher altitudes it switches from being higher than the US standard atmosphere value to being lower.
This makes a one-to-one comparison to Ref.~\cite{Arthurpaper} difficult. Nevertheless, qualitatively the same trend in the high and low frequency bands has been found in
both works.}
\textcolor{black}{The dependence of the shift in $X_{\rm max}$ on zenith angle, true $X_{\rm max}$, and energy was probed for both frequency bands. The simulation set was divided into two groups, corresponding to high and low values of each of these parameters. No significant effect was seen, as summarized in Table~\ref{mytable}. }
\begin{table}[h!]
\begin{center}
\begin{tabular}{ c c c }
\hline
Frequency band & Zenith & $\Delta\mathrm{X_{max}}$ ($\mathrm{g/cm^2}$) \\
\hline
50--350 MHz & low $< 30^\circ$ & -6.24$\pm$0.30 \\
50--350 MHz & high $>30^\circ$ & -6.19$\pm$0.37 \\
30--80 MHz & low $< 30^\circ$ & 0.10$\pm$0.50 \\
30--80 MHz & high $>30^\circ$ & -0.05$\pm$0.46 \\
\hline
\end{tabular}
\begin{tabular}{ c c c }
\hline
Frequency band & True $X_{\rm max}$ ($\mathrm{g/cm^2}$) & $\Delta\mathrm{X_{max}}$ ($\mathrm{g/cm^2}$) \\
\hline
50--350 MHz & low $<624$ & -6.78$\pm$0.41 \\
50--350 MHz & high $>624$ & -6.30$\pm$0.32 \\
30--80 MHz & low $<624$ & -0.61$\pm$0.51 \\
30--80 MHz & high $>624$ & 0.51$\pm$0.46 \\
\hline
\end{tabular}
\begin{tabular}{ c c c }
\hline
Frequency band & Energy (GeV) & $\Delta\mathrm{X_{max}}$ ($\mathrm{g/cm^2}$) \\
\hline
50--350 MHz & low $ <2.18\times 10^8$ & -6.86$\pm$0.35 \\
50--350 MHz & high $ >2.18\times 10^8$ & -6.92$\pm$0.38 \\
30--80 MHz & low $ <2.18\times 10^8$ & -0.48$\pm$0.48 \\
30--80 MHz & high $ >2.18 \times 10^8$ & 0.0$\pm$0.49 \\
\hline
\end{tabular}
\caption{Shift in $X_{\rm max}$ for different zenith, energy and $X_{\rm max}$
bins for different frequency bands.\label{mytable}}
\end{center}
\end{table}
\section{Conclusion and discussion}
\textcolor{black}{Simulating air showers with realistic atmospheres is important for the precise
reconstruction of $X_{\rm max}$ with the radio technique. The GDAS database is a useful platform to extract atmospheric parameters for a given time and location.
Atmospheric effects on radio simulations were
previously studied in Refs. \cite{Arthurpaper} and \cite{codalema}. The studies demonstrated the role of correct description of atmospheric density and refractive index
when included in the radio simulation codes. However, the application of simulations with realistic atmospheres to real data was not
addressed. \\}
\textcolor{black}{We report, for the first time, the application of GDAS-based atmospheric profiles, automated in CoREAS
simulations, to cosmic ray data. By systematically performing GDAS-based CoREAS simulations for the LOFAR dataset, we have compared the effect on $X_{\rm max}$ of
GDAS-based atmospheres and of a linear geometrical first-order correction to the US standard atmosphere. While the linear correction is sufficient for
the bulk of the events, it becomes indispensable to use full GDAS-based atmospheres for extreme values of the air pressure.
When the air pressure at ground level differs by less than 10 hPa from the US standard atmosphere value, the reconstructed $X_{\rm max}$ value
including the linear correction agrees with the full GDAS-based reconstruction value within 2 $\mathrm{g/cm^{2}}$. However, when the ground pressure differs by more than
10 hPa from the US standard atmosphere, this difference grows significantly, up to 15 $\mathrm{g/cm^{2}}$. }\\
\textcolor{black}{We have also introduced a GDAS-based correction factor for the $X_{\rm max}$ reconstructed with the US standard atmosphere, which does not require running full GDAS-based CoREAS simulations.
It is specific to LOFAR, but similar relations can be worked out for other experiments as well. The uncertainty on the predicted $X_{\rm max}$ using the correction
factor is about 12 g/cm$^2$; this is within the typical $X_{\rm max}$ reconstruction uncertainty with LOFAR of around 17 g/cm$^2$.\\}
We have probed the effects
of humidity on the lateral distribution of radio power by comparing two profiles
with high and low humidity, in different frequency bands.
In the LOFAR frequency band of 30--80~MHz the relative difference in power is small.
For the higher frequency band of 50--350~MHz the same effects are comparatively
larger, up to 10$\%$. We also estimated the radiation
energy from the LDF profiles to assess the effect of humidity on the reconstructed energy.
No significant difference was found in either frequency regime, which indicates that humidity
does not influence the estimated energy.
A Monte Carlo study on the reconstructed $X_{\rm max}$
was also done for these frequency bands. No significant effect of humidity
is found on the reconstructed $X_{\rm max}$ for the low frequency band relevant for LOFAR; for the higher frequency band a mean difference on the order of 7 $\mathrm{g/cm^{2}}$
is observed. This could be important for high-precision $X_{\rm max}$ measurements with the SKA experiment \cite{ska}.\\
\textcolor{black}{In the process of implementing GDAS-based parameterized density and refractive index profiles in CORSIKA/CoREAS, we have developed a tool,
called \textquoteleft{gdastool}\textquoteright, which has been available for public use since the release of CORSIKA
version 7.6300, and is already being used by other experiments in the community around the globe.} \\
\textcolor{black}{In the previous LOFAR analysis the effects of the refractive index were included within the systematic uncertainties on the reconstructed $X_{\rm max}$.
The improved atmospheric correction will lead to a reduced systematic uncertainty. An update of the mass composition results
is not within the scope of this study; it will be discussed in a future publication,
which involves, along with the atmospheric corrections, an improved calibration of the radio antennas, the energy scale, and new
$X_{\rm max}$ reconstruction techniques.}
\small
{\section{Acknowledgement}
The LOFAR cosmic ray key science project acknowledges funding from an Advanced Grant
of the European Research Council (FP/2007-2013)/ERC Grant Agreement no 227610. The project
has also received funding from the European Research Council (ERC) under the European Union's Horizon 2020 research and innovation program (grant agreement No 640130). We furthermore
acknowledge financial support from FOM (FOM-project 12PR304). AN is supported by the DFG (Emmy-Noether grant NE 2031/2-1). LOFAR, the Low
Frequency Array designed and constructed by ASTRON, has facilities in several countries, which are
owned by various parties (each with their own funding sources) and collectively operated
by the International LOFAR Telescope foundation under a joint scientific policy. We sincerely thank
the CORSIKA developers for their assistance regarding the implementation of our work in CORSIKA modules.}
\small
\begin{comment}
\section{Appendix}
\subsection{User manual of gdastool}
The \textit{gdastool} script included with CORSIKA (in \textit{src/utils/}) subdirectory) downloads a
section of the GDAS (Global Data Assimilation System) database and compiles a file containing
Corsika settings ATMLAY, ATMA, ATMB, ATMC, and an altitude profile of the refractive
index. \\
It takes as input:
\begin{itemize}
\item {an observatory name (currently, valid names are \textquoteleft{lofar}\textquoteright \, or \textquoteleft{aera}\textquoteright) or observatory latitude / longitude coordinates in degrees}
\item{a UTC timestamp (seconds since Jan 1, 1970) for which to read out the atmospheric profile}
\item{an output path and filename for the compiled profile}
\item{an output path \textquoteleft{gdaspath}\textquoteright for storing the downloaded section of GDAS. }
\end{itemize}
Given coordinates and time stamp are rounded to the nearest 1$\times$1 degree grid point, and to the
nearest 3-hour time in GDAS. To use the output file in CORSIKA, it should be included into the steering file with the ATM-
FILE keyword (see Sect. 4. 23 page 73).
The full usage info for the script is : \vspace{0.5 cm}
usage: gdastool [-h] [-t UTCTIMESTAMP] [-o OUTPUT]
(--observatory {lofar, aera} -c COORDINATES COORDINATES)
[-m MINHEIGHT] [-s INTERPOLATIONSTEPS] [-p GDASPATH] [-v] [-g]\\
It creates an atmosphere profile for CORSIKA/CoREAS from GDAS data.
Downloads GDAS model data for the defined location and time and fits
a 5-layer model of the atmosphere to the data. Based on the fit, a
table for the refractive index is created for usage in CoREAS.
optional arguments: \\
-h, --help show this help message and exit\\
-t UTCTIMESTAMP, --utctimestamp UTCTIMESTAMP\\
UTC time stamp of the event\\
-o OUTPUT, --output OUTPUT\\
Name of the outputfile. \\
--observatory {lofar, aera}\\
Preset of observatory coordinates. \\
-c COORDINATES COORDINATES, --coordinates COORDINATES COORDINATES\\
Coordinates of the observatory lat=-90.. 90 lon=0.. 360 in deg,
e. g. --coordinates 50.85 4. 25 for Brussels. \\
-m MINHEIGHT, --minheight MINHEIGHT \\
Mimimum hight for the interpolation. Default is -1. 0 km. \\
-s INTERPOLATIONSTEPS, --interpolationSteps INTERPOLATIONSTEPS \\
Step length for interpolation. Default is 1 m. \\
-p GDASPATH, --gdaspath GDASPATH \\
path to local gdas file directory. If required file is not there, it will be downloaded. \\
-v, --verbose Set log level, -vv for debug. \\
-g, --createplot plot density profile. \\
Please contact\\
\hspace{0.2 cm}
\begin{itemize}
\item Pragati Mitra pmitra@vub.be
\item Arthur Corstanje a.corstanje@astro.ru.nl
\item Tobias Winchen tobias.winchen@rwth-aachen.de
\end{itemize}
in case of questions or bugs.
\end{comment}
\bibliographystyle{unsrt}
\section{Introduction}
\label{sec:introduction}
It is an important problem to understand hadron properties based on the fundamental theory of the strong interaction, Quantum Chromodynamics (QCD).
Due to the non-trivial features of the QCD dynamics at low energies,
the hadron physics shows us many interesting and even unexpected non-trivial phenomena.
The fact that hadronic phenomena are so rich implies that various studies from many different views are useful and indispensable to reveal the nature of the hadron dynamics.
Not only isolated hadrons but also hadronic matter under extreme conditions of high temperature, of high baryon density, and of many different flavors provide important hints to understand the hadron dynamics.
One of the familiar forms of hadronic matter is the atomic nucleus, a composite system of protons and neutrons.
Nuclear physics has been developed so far based on various phenomenological approaches (shell models, collective models, and so on).
Recently, ab-initio calculations are being realized, in which many-body nuclear problems are solved starting from the bare nucleon-nucleon interaction determined phenomenologically with high precision~\cite{Bertsch:2007,Bogner:2009bt,Lacroix:2010qn}.
Moreover, a large step forward has been made: lattice QCD has derived the nucleon-nucleon interaction~\cite{Ishii:2006ec,Aoki:2009ji}.
Thus the so-far missing path from QCD to nuclei is now being exploited.
Nevertheless, if we look at the problem, for instance, of neutron stars, we are confronted with the difficulty of explaining the so-called two-solar-mass problem.
Because of the high density environment in the inside of the neutron star, the strangeness, the third flavor of quarks, appears as an explicit degree of freedom.
This occurs primarily in the form of hyperons forming hypernuclei, composite systems of hyperons, such as $\Lambda$ and $\Sigma$, together with protons and neutrons.
Hypernuclear physics is then an active field, where one of the current goals is to determine the two-body and even three-body forces for hyperons and nucleons to explain the stability of the massive neutron stars~\cite{Lacroix:2010qn,Hashimoto:2006aw,Botta:2012xi}.
The dynamics of strange hadrons can provide a new energy scale of several hundred MeV, which is much larger
than the typical nuclear physics scale of a few to at most ten MeV (single-particle motion, surface vibration, rotation of deformed nuclei, nucleon pairings, and so on).
As a famous and actively studied example,
we expect that (anti-)kaons appear in nuclear matter as active
degrees of freedom~\cite{Akaishi:2002bg}.
This is another possible form of dense strangeness-flavored matter.
Because of that large energy scale, we need to consider the properties of the hadrons explicitly.\footnote{This may be contrasted with the traditional view of nuclear physics, namely that the nucleon dynamics is not seen directly, but is ``renormalized" into the low energy degrees of freedom such as the collective modes (e.g. surface vibration, rotation, nucleon pairings)~\cite{ring2004nuclear}.}
Anti-kaons in nuclear matter have long been discussed in various respects.
They interact with nucleons attractively, in particular in S-wave.
Because their rather large mass suppresses the kinetic energy, even a small attraction is enough to trap them in a nucleus.
The simplest and best-known such system is $\Lambda(1405)$, a quasi-bound state of $\bar KN$ and the first negative-parity excited state of $\Lambda$~\cite{Hyodo:2011ur}.
Partly due to the difficulty of the quark model in explaining this state despite its general success, such an idea was proposed many years ago~\cite{Dalitz:1967fp}.
After some time, it has been revived due to the developments of chiral theories of QCD.
It has now become one of the active subjects to confirm its nature as a $\bar KN$ quasi-bound state.
Once this turns out to be the case, the impact on hadron and nuclear physics will be very large,
and we expect to see many rich and unexpected phenomena.
Strangeness does not only bring new phenomena but also plays the role of an impurity, allowing us to analyze aspects of the strong interaction that we cannot easily see without it.
First of all, obviously, the strange quark brings another energy scale, so that we may be able to see the QCD dynamics at various energy scales.
One example was already seen above: the attractive force between the anti-kaon and the nucleon, a consequence of the low energy theorems of spontaneously broken chiral symmetry, has a different energy scale from that between the pion and the nucleon.\footnote{We notice that, for the light $u$ and $d$ flavors, the first excited state of the nucleon, $N(1535)$ with spin-parity $J^{P}=1/2^{-}$, is well explained as an orbital excitation of valence quarks (P-wave excitation), but with an $s$ quark the corresponding state, $\Lambda(1405)$ with $J^{P}=1/2^{-}$, shows up with a very different structure governed by the $\bar{K}N$-$\pi \Sigma$ dynamics.}
Another example is the mass inversion of the $\Lambda(1820)$-$\Sigma(1775)$ pair with spin and parity
$J^P = 5/2^-$ in comparison with the ground states $\Lambda(1116)$-$\Sigma(1190)$.
These examples show that the properties of the system change when an impurity is introduced, which brings us useful information to understand the underlying mechanism of the hadron dynamics.
Turning to nuclear matter, the study of hadrons in the nuclear medium also provides a unique tool for investigating the vacuum structure of QCD.
In the QCD vacuum, it is known that several different types of quark and gluon condensates appear as a result of non-perturbative effects of QCD at low energy.
Those condensates are decisive for the hadron properties (masses, interactions, and so on).
One of the most important condensates is the chiral condensate, induced by the dynamical breaking of chiral symmetry.
In fact, the light mass of the pion can be explained by its nature as a Nambu-Goldstone boson, which appears as the lowest energy state in the symmetry-broken vacuum.
The small but finite mass of the pion is understood via the Gell-Mann--Oakes--Renner relation~\cite{GellMann:1968rz}, which relates the pion mass to the small explicit breaking of chiral symmetry by the $u$, $d$ quark masses and to the chiral condensate in the vacuum.
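For reference, at leading order in the quark masses the Gell-Mann--Oakes--Renner relation reads
\begin{equation}
m_{\pi}^{2}f_{\pi}^{2} = -(m_{u}+m_{d})\langle\bar{q}q\rangle + {\cal O}(m_{q}^{2})\,,
\end{equation}
where $f_{\pi}$ is the pion decay constant and $\langle\bar{q}q\rangle$ the chiral condensate; note that conventions for the normalization of $f_{\pi}$ vary in the literature.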
As a consequence of the spontaneous symmetry breaking, the interaction of pions with a matter field is constrained by chiral symmetry, such as by the Weinberg-Tomozawa interaction~\cite{Weinberg:1966kf,Tomozawa:1966jm},\footnote{This is the driving interaction to provide the strong $\bar{K}N$ attraction for $\Lambda(1405)$, when the $\bar{K}$ meson is regarded as the Nambu-Goldstone boson, as stated above.} as well as by the Goldberger-Treiman relation for the axial-vector coupling.
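As a sketch, the Weinberg-Tomozawa interaction between a Nambu-Goldstone boson and a target hadron takes the form
\begin{equation}
V_{ij}^{\rm WT} = -\frac{C_{ij}}{4f^{2}}\,(\omega_{i}+\omega_{j})\,,
\end{equation}
with $f$ the meson decay constant, $\omega_{i,j}$ the meson energies in channels $i$, $j$, and $C_{ij}$ group-theoretical coefficients fixed by chiral symmetry; in a common convention one finds the attractive value $C=3$ for the $I=0$ $\bar{K}N$ channel, which drives the formation of $\Lambda(1405)$.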
As another example, the gluon condensate is related to the hadron mass generation as a consequence of the scale anomaly (the trace anomaly).
However, it is still a non-trivial problem how those quark and gluon condensates affect the hadron properties.
This problem can be accessed by observing the modifications of hadrons when those condensates change in nuclear matter (see Refs.~\cite{Hayano:2008vn,Leupold:2009kz} for recent reviews).
For example, it is known both in experiments and in theories that the spectra of vector mesons change in atomic nuclei.
In this respect, the $\phi$ meson, containing $s$ quarks, in nuclear matter is also interesting, as it is related to the problem of the $s\bar{s}$ content of the nucleon~\cite{Hatsuda:1991ez,Gubler:2014pta,Gubler:2016itj}.
Therefore, nuclei can be used as a good stage to investigate hadron properties from QCD.
Having said this much, we now attempt to make a further extension of flavors into the {\it heavy flavor} region,
namely the {\it charm} and {\it bottom} flavors, in hadronic matter.
The introduction of heavy quarks brings into the nuclear system degrees of freedom that are free from the constraints of chiral symmetry.
The problem is very challenging because so far we have little experimental data.
Because the charm (bottom) quark is very heavy compared to the $u$, $d$ and $s$ quarks, it is natural that such quarks do not appear in ordinary hadronic matter, i.e. nuclei.
But they must show up under extreme conditions, about which we do not know much.
Thus we expect yet-unexperienced phenomena in the presence of the charm (bottom) quark.
Not only that, but it should also provide useful information on the QCD dynamics of hadrons.
In fact, in the past decade we have experienced
an exciting time of discoveries of many exotic charm (and also bottom) hadrons, the so-called $X$, $Y$, $Z$ states~\cite{Swanson:2006st,Brambilla:2010cs,Hosaka:2016pey}, as well as the $P_{c}(4380)$ and $P_{c}(4450)$ states~\cite{Aaij:2015tga} and the $X(5586)$ state~\cite{D0:2016mwd}, observed at accelerator facilities.
Their existence strongly suggests that our naive picture of hadrons, namely three quarks for a baryon and a quark-antiquark pair for a meson, is not sufficient, as already suggested by $\Lambda(1405)$.
The understanding of exotic hadrons
is currently a
hot topic in hadron physics~\cite{Swanson:2006st,Brambilla:2010cs,Hosaka:2016pey}.
There are novel features for charm and bottom quarks, which are qualitatively different from the light quarks~\cite{Neubert:1993mb,Manohar:2000dt}.
First, charm and bottom quarks have heavy masses: $m_{c}=1.275\pm0.025$ GeV and $m_{b}=4.66\pm0.03$ GeV~\cite{Agashe:2014kda}.
Those masses are much larger than the typical scale of low energy QCD,
$\Lambda_{\rm QCD}=214\pm 7$ MeV ($\overline{\rm MS}$ scheme with $N_{f}=5$~\cite{Agashe:2014kda}), which sets the energy scale of the hadron dynamics.
Therefore, naively speaking, we may expect that a charm (bottom) quark can play the role of an ``impurity particle".
This is a new degree of freedom in low energy QCD, which would not be much affected by a change of the vacuum.
This property shows up when we study the change of heavy hadrons in the nuclear medium; we can separate the change of the light degrees of freedom from the heavy quark (the Born-Oppenheimer approximation).
Second, the heavy quark interaction has a special property in its coupling to the gluon field:
the spin-flip process of the heavy quark is suppressed by a factor $1/m_{Q}$, with $m_{Q}$ the heavy quark mass.
In particular, it vanishes completely when the heavy quark limit ($m_{Q}\rightarrow \infty$) is adopted.
The suppression of the spin-flip interaction is helpful for separating the light degrees of freedom from the heavy quark.
In fact, this property leads to heavy quark spin symmetry as a novel symmetry in the heavy quark sector, which plays a significant role in the heavy hadron dynamics.
Those two properties of a heavy quark, namely the separation of the degrees of freedom and the suppression of the spin-flip process, are crucial to explain many properties (mass splittings, branching ratios of decays, and so on) of the charm/bottom hadrons.
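These statements can be made concrete in the heavy quark effective theory. As a sketch (conventions and matching coefficients vary in the literature), the Lagrangian for a heavy quark field $h_{v}$ with velocity $v$ is expanded as
\begin{equation}
{\cal L}_{\rm HQET} = \bar{h}_{v}\, iv\!\cdot\!D\, h_{v}
+ \frac{1}{2m_{Q}}\,\bar{h}_{v}(iD_{\perp})^{2}h_{v}
+ \frac{g}{4m_{Q}}\,\bar{h}_{v}\,\sigma_{\mu\nu}G^{\mu\nu}h_{v}
+ {\cal O}(1/m_{Q}^{2})\,,
\end{equation}
where the leading term is independent of the heavy quark spin and flavor, while the chromomagnetic term with $\sigma_{\mu\nu}G^{\mu\nu}$, which flips the heavy quark spin, is suppressed by $1/m_{Q}$ and vanishes in the heavy quark limit.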
Given those unique properties of the heavy quarks, we expect it to be quite interesting to study ``{\it heavy-flavor nuclei}'', i.e. nuclei
containing a charm quark or a bottom quark as an impurity particle,
thereby extending
the flavors from up, down, and strangeness to charm and bottom.
There are many open problems which should be addressed in the study of heavy-flavor nuclei: how the nuclear structure changes due to a heavy impurity hadron, how the hadron-hadron interactions as well as the hadron masses are affected by the change of the QCD vacuum, what kind of low energy modes heavy-flavor nuclei can have, and so on (Fig.~\ref{fig:160518}).
\begin{figure}[tbp]
\begin{center}
\includegraphics[width=11cm,bb=0 0 842 595]{figs/1/160518.pdf}
\caption{A schematic figure for topics covered by studies of heavy hadrons in nuclear medium.}
\label{fig:160518}
\end{center}
\end{figure}
Our purpose in this review is to survey the preceding and current studies of the charm/bottom hadrons in heavy-flavor nuclei.
The main issues are summarized in the following three items.
\begin{enumerate}
\item Charm/bottom hadron-nucleon interaction
\item Structure of charm/bottom nuclei
\item QCD vacuum and hadron properties in nuclear medium
\end{enumerate}
1.~{\it Charm/bottom hadron-nucleon interaction.---}
The charm/bottom hadron-nucleon interaction is one of the most basic
ingredients
for studying the charm/bottom nuclei.
However, the biggest problem is that there is only poor experimental information, especially on low-energy scattering.
Instead, there have been theoretical studies of various types of charm/bottom hadron-nucleon interactions in the literature.
As a naive extension of the SU(3) symmetry, which is valid for light flavors up to strangeness, we may consider the SU(4) flavor symmetry including the charm flavor.
We may also consider the SU(5) symmetry up to the bottom flavor.
Although the SU(4) and/or SU(5) symmetries would be useful for classifying the hadron states, we have to keep in mind that those symmetries are badly broken and cannot be applied directly to hadron spectroscopy.\footnote{See for example Ref.~\cite{georgi1999lie}.}
Because the mass of the charm/bottom quark $m_{Q}$ is much heavier than the typical low-energy scale $\Lambda_{\mathrm{QCD}}$, it is natural to consider the heavy quark limit ($m_{Q} \rightarrow \infty$, $\Lambda_{\mathrm{QCD}}/m_{Q} \rightarrow 0$) as the leading approximation.
Whatever symmetries are adopted for the heavy quarks,
it is an important question to ask what kinds of hadron interactions are at work.
At long distances, there are various types of meson exchange forces, such as vector meson exchanges providing strong repulsion or attraction (depending on baryon charges)~\cite{Sakurai:1960ju}, and the pion exchange leading to the tensor force, which is crucial for the binding of the deuteron~\cite{Tornqvist:1991ks,Manohar:1992nd}.
At short distances, it is also possible to have direct quark exchanges~\cite{Oka:1980ax,Oka:1981ri,Oka:1981rj}, multi-gluon (Pomeron) exchanges~\cite{Brodsky:1989jd}, and so on.
Recently, it has become possible to study hadron interactions from first principles thanks to the rapid development of lattice QCD computations~\cite{Ishii:2006ec,Yamazaki:2011nd,Beane:2011iw}.
We will overview the current status of the understanding of heavy-hadron interactions in various approaches.
\vspace{1em}
2.~{\it Structure of charm/bottom nuclei.---}
Based on the charm/bottom hadron-nucleon interaction, we investigate the properties of the charm/bottom nuclei as many-body systems in several theoretical approaches.
In our approach, we regard the nuclear matter as an almost free Fermi gas, and investigate the medium effects on the charm/bottom hadrons by considering the Pauli exclusion effect from the occupied Fermi sea of nucleons (Sect.~\ref{sec:nuclear_matter}).
By this method, we obtain the effective masses, the effective coupling constants, and the decay widths of the charm/bottom hadrons in nuclear medium (cf.~G-matrix formalism~\cite{Bethe:1971xm}).
We note that nuclear matter as a free Fermi gas is unstable against any small attraction between nucleons, which leads to another, more stable state (cf. the BCS instability in superconductivity~\cite{abrikosov1975methods}).\footnote{This is an example of the quantum fluctuations, which are important in states with finite baryon number density. Such fluctuation effects are suppressed at finite temperature.}
This is called the Fermi instability.
It is an important subject to investigate the effect of the Fermi instability when a heavy hadron exists as an impurity particle in nuclear matter.\footnote{The Kondo effect is a well-known phenomenon caused by the Fermi instability when a heavy impurity particle with a non-Abelian interaction exists (see~Sect.~\ref{sec:D_mesons}).}
As a feedback effect, the behavior of nucleons in nuclear matter is also affected by the existence of heavy hadrons, and this can change the nuclear matter itself.
Eventually, we have to analyze the dynamics of both heavy hadrons and nucleons in a self-consistent way.
\vspace{1em}
3.~{\it QCD vacuum and hadron properties in nuclear medium.---}
The quark condensates and gluon condensates are directly related to the properties of the QCD ground state.
When those quantities are modified
in nuclear matter,
the modification affects
the properties of hadrons in nuclear matter (Sect.~\ref{sec:QCDSR}).
Here we consider several basic topics of the QCD vacuum properties.
As is well known, chiral symmetry plays an
important role in the generation of hadron masses and interactions at low energy, as a result of its dynamical breaking in the vacuum.
In the light flavor sector, it has been argued that, as a precursory signal, the partial restoration of chiral symmetry inside nuclei can be observed through the change of hadron properties (e.g.~mass modification)~\cite{Hatsuda:1994pi}.
We may ask what kind of condensate is responsible for the properties of heavy hadrons in nuclear medium.
For example, the gluon condensate is an interesting quantity for heavy hadrons, because the gluon dynamics is expected to dominate over the light-quark dynamics in the heavy flavor sector~\cite{Luke:1992tm,Klingl:1998sr}.
The light quark condensate $\langle \bar{q}q \rangle$ is also important for the mass generation of the heavy-light mesons as well as of heavy-light-light and heavy-heavy-light baryons.
\vspace{1em}
From the above considerations, we will discuss how the charm/bottom hadrons behave inside nuclei.
However, we have to say that this field is still developing, and systematic knowledge
has not yet been established.
The purpose of this review is, therefore, to summarize the current results in the theoretical schemes studied so far, to point out the important views and unsolved problems, and to motivate the readers to pursue further studies in the future.
Throughout this review, it is worthwhile to keep in mind the following two points of view.
First, we emphasize the role of ``symmetry", such as chiral symmetry and heavy quark symmetry, in the heavy hadron systems.
Symmetries enable us to understand general features of the physical systems in a model-independent manner, although we have to rely on model-dependent calculations to obtain numerical results to be connected with experimental data in many cases.
Second, we emphasize the importance of the ``finite size" of the charm/bottom nuclei.
Of course, it is very useful in theoretical analysis to consider infinite nuclear matter, because the theoretical treatment is much easier than that in finite systems.
However, we should not ignore the properties of finite systems, such as surface effects and discrete energy levels, which are characteristic aspects of nuclei.
Few-body calculations are also important for understanding the charm/bottom nuclei and for comparing theoretical results with experimental data.
In the literature, some few-body calculations have been performed so far, but the systems studied are still limited.
As one of the techniques of few-body calculations, we will take up the Gaussian expansion method and explain some details, which will provide a useful tool to investigate the charm/bottom nuclei.
This review is organized as follows.
In Sect.~\ref{sec:theory}, we will overview the basics of the theoretical approaches.
We summarize the basic properties of QCD (Sect.~\ref{sec:Properties_of_QCD}),
explain chiral symmetry and heavy quark symmetry (Sect.~\ref{sec:symmetries}), and introduce hadronic effective theories (Sect.~\ref{sec:hadronic_effective_theories}).
We discuss some technical descriptions for nuclei, such as few-body calculations, propagators of nucleons in nuclear matter, and general properties of chiral symmetry in nuclear medium (Sect.~\ref{sec:finite_density}).
In the following sections, we survey the current status of theoretical studies of the charm/bottom nuclei.
Here we separate the discussions about the heavy hadrons according to the combinations of light quark $q$ and heavy quark $Q$:
(i) $\bar{Q}Q$ mesons (e.g.~$\eta_{c}$ and $J/\psi$) in Sect.~\ref{sec:charmonia},
(ii) $\bar{q}Q$ and $q\bar{Q}$ mesons (e.g.~$D$ and $D^{\ast}$, $\bar{D}$ and $\bar{D}^{\ast}$) in Sect.~\ref{sec:D_mesons}
and (iii) $Qqq$ baryons (e.g.~$\Lambda_{c}$, $\Sigma_{c}$ and $\Sigma_{c}^{\ast}$) in Sect.~\ref{sec:charm_baryons}.
For each type of heavy hadron, we focus on the three different properties: (a) heavy hadron-nucleon interaction, (b) few-body systems and (c) heavy hadrons in nuclear matter.
The last section is devoted to summary and future perspectives.
\section{Theoretical basics}
\label{sec:theory}
The basic theory of hadrons is, needless to say, given by quantum chromodynamics (QCD).
In practice, however, effective theories rather than QCD itself are often used in actual studies of heavy hadrons.
In this section, we review theoretical tools for heavy hadrons in nuclear systems, by considering how the heavy-hadron effective theories are connected to QCD through symmetries, such as chiral symmetry and heavy quark symmetry.
\subsection{Properties of QCD}
\label{sec:Properties_of_QCD}
Let us start from the QCD Lagrangian
\begin{align}
{\cal L}_{\mathrm{QCD}}
&= - \frac{1}{4} G_{\mu\nu}^{a} G^{a \mu\nu}
+ \bar{q}
(i\Slash{D}-m_{\mathrm{q}})q .
\label{eq:QCD}
\end{align}
The first term is the pure gluonic part made of the gluon field $A_{\mu}^{a}$ $(a=1,\dots, 8)$ which belongs to the adjoint representation of color SU(3) symmetry. The field strength tensor is defined by
\begin{align}
G_{\mu\nu}^{a}
&=
\partial_{\mu}A_{\nu}^{a}
-\partial_{\nu}A_{\mu}^{a}
-gf^{abc}A_{\mu}^{b}A_{\nu}^{c} ,
\end{align}
where $g$ is the gauge coupling and $f^{abc}$ are the structure constants defined by $[T^{a},T^{b}]=if^{abc}T^{c}$ with $T^{a}$ being the generator of the color SU(3) symmetry. The quark field $q$ belongs to the fundamental representation of color SU(3), and the coupling to the gauge field is given by the covariant derivative
\begin{align}
D_{\mu}
&=
\partial_{\mu}
+igA_{\mu}^{a}T^{a} .
\end{align}
The generators in the fundamental representation are expressed by the three-by-three Gell-Mann matrices as $T^{a}=\lambda^{a}/2$. The quark field has six flavor components
\begin{align}
q
&=\begin{pmatrix}
u & d & s & c & b & t
\end{pmatrix}^{t} .
\end{align}
Each component carries the flavor quantum number (isospin $I$, third component of the isospin $I_{3}$, strangeness $S$, charm $C$, bottomness $B$ and topness $T$) as follows:
\begin{align}
u &: I=1/2, I_{3}=+1/2, \\
d &: I=1/2, I_{3}=-1/2, \\
s &: S=-1, \\
c &: C=+1, \\
b &: B=-1, \\
t &: T=+1.
\end{align}
In Eq.~\eqref{eq:QCD}, $m_{\rm q}$ is the mass of the quark with flavor $\mathrm{q}$, generated by the Higgs mechanism.
QCD is a renormalizable quantum field theory, and the running coupling constant decreases at high energy due to the asymptotic freedom.
At low energy, the coupling constant blows up and the perturbative calculation breaks down. This occurs at the energy scale $\Lambda_{\rm QCD}$ which is specified below.
As a result, in contrast to the simplicity of the QCD Lagrangian, the physics induced from the Lagrangian is highly complicated and provides a rich structure of the vacuum.
According to the renormalization group, the coupling ``constant" $\alpha_{s}=g^{2}/4\pi$ is not actually a constant, but runs as a function of the energy scale parameter $\mu$ according to
\begin{align}
\mu^{2} \frac{\mathrm{d}\alpha_{s}}{\mathrm{d}\mu^{2}} = \beta(\alpha_{s}) \equiv - \left( b_{1} \alpha_{s}^{2}+b_{2}\alpha_{s}^{3}+ \cdots \right),
\label{eq:renormalization_QCD}
\end{align}
where $\beta(\alpha_{s})$ is the Gell-Mann--Low beta function.
The explicit form of the beta function is given in powers of $\alpha_{s}$ by perturbative calculations at high energy scales, where the coefficients $b_{1}$, $b_{2}$, $\cdots$, at each order can be calculated.
For example, at the one-loop level we obtain $b_{1}=(33-2N_{f})/12\pi$ with $N_{f}$ the number of flavors.
At this order, the solution of Eq.~(\ref{eq:renormalization_QCD}) is given by
\begin{align}
\alpha_{s}(\mu^{2}) = \cfrac{4\pi}{\left(11-\frac{2}{3}N_{f} \right) \ln \left( \frac{\mu^{2}}{\Lambda_{\mathrm{QCD}}^{2}} \right)},
\label{eq:renormalization_QCD_sol}
\end{align}
with the parameter $\Lambda_{\mathrm{QCD}}$ for $\Lambda_{\mathrm{QCD}} \ll \mu$.
The value of $\Lambda_{\mathrm{QCD}}$ can be estimated by fitting the coupling strength $\alpha_{s}(\mu_{0}^{2})$ to reproduce experimental observables at some high energy scale $\mu_{0}$. The explicit value depends on the number of flavors and on the renormalization scheme adopted in solving the renormalization group equation. From the recent analysis~\cite{Agashe:2014kda}, it is determined as $\Lambda_{\rm QCD}=214\pm 7$ MeV in the $\overline{\mathrm{MS}}$ scheme with $N_{f}=5$.
Note that the solution (\ref{eq:renormalization_QCD_sol}) is valid only at high energy scales above a few GeV, because the perturbative expansion in $\alpha_{s}$ on the right-hand side of Eq.~(\ref{eq:renormalization_QCD}) requires a small coupling.
Nevertheless, when we extrapolate $\mu$ down to $\mu \simeq \Lambda_{\mathrm{QCD}}$,
we find that $\alpha_{s}$ becomes large.
Such a strong coupling indicates the breakdown of the perturbative calculation.
Therefore, we need essentially non-perturbative approaches to analyze the low energy phenomena of QCD.
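As a rough numerical illustration of Eq.~(\ref{eq:renormalization_QCD_sol}), the one-loop running can be evaluated directly. Below is a minimal sketch in Python, assuming $N_{f}=5$ and $\Lambda_{\mathrm{QCD}}=0.214$ GeV (only indicative, since this value of $\Lambda_{\mathrm{QCD}}$ is extracted beyond one loop):

```python
from math import log, pi

def alpha_s(mu_gev, lam_gev=0.214, nf=5):
    """One-loop running coupling,
    alpha_s(mu^2) = 4*pi / ((11 - 2*nf/3) * ln(mu^2 / Lambda^2)),
    valid only for mu >> Lambda."""
    return 4.0 * pi / ((11.0 - 2.0 * nf / 3.0) * log(mu_gev**2 / lam_gev**2))

# Asymptotic freedom: the coupling decreases with increasing scale mu.
for mu in (2.0, 10.0, 91.2):
    print(f"alpha_s({mu:5.1f} GeV) = {alpha_s(mu):.3f}")
```

The output exhibits asymptotic freedom at large $\mu$, while the coupling blows up as $\mu$ approaches $\Lambda_{\mathrm{QCD}}$, signaling the breakdown of perturbation theory discussed above.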
It is very difficult to accomplish this by analytical calculations, and we need numerical calculations by lattice QCD simulations as a first-principles approach.
However, in some cases, we can use a {\it phenomenological} framework by focusing on the proper symmetries of QCD and assuming that the hadronic degrees of freedom are fundamental.
In the following, we will explain two symmetries, chiral symmetry for light quarks and spin symmetry for heavy quarks, and introduce the hadron effective theories based on those symmetries.
\subsection{Symmetries}
\label{sec:symmetries}
\subsubsection{Chiral symmetry}
\label{sec:chiral_symmetry}
\paragraph{Light quarks and chiral symmetry}
Chiral symmetry is a guiding principle to study the low-energy phenomena of the strong interaction, which is approximately realized in the light quark sector of QCD~\cite{Donoghue:1992dd,Hosaka:2001ux,Scherer:2012xha}. The light quarks satisfy
\begin{align}
m_{\rm q} \ll \Lambda_{\rm QCD}
\label{eq:lightquark} ,
\end{align}
namely, the quarks whose masses are much smaller than the QCD scale. For up and down quarks, Eq.~\eqref{eq:lightquark} is well satisfied. For these light quarks, it is reasonable to start from the massless Lagrangian
\begin{align}
{\cal L}_{\mathrm{massless}}
&= \bar{q}
i\Slash{D}q , \label{eq:massless}
\end{align}
with $q=(u,d)^{\mathrm{t}}$
and to treat the effect of the quark mass $m_{\rm q}$ perturbatively. For the discussion of chiral symmetry, we introduce the projection operators
\begin{align}
P_{L}
&= \frac{1-\gamma_{5}}{2} , \quad
P_{R}
= \frac{1+\gamma_{5}}{2} ,
\end{align}
and define the left- and right-handed quarks as
\begin{align}
q_{L}
&= P_{L}q,
\quad
q_{R}
= P_{R}q .
\end{align}
In the massless Lagrangian~\eqref{eq:massless}, the left-handed quarks are separated from the right-handed quarks by this decomposition:
\begin{align}
{\cal L}_{\mathrm{massless}}
&= \bar{q}_{L} i\Slash{D}q_{L}
+\bar{q}_{R} i\Slash{D}q_{R} .
\end{align}
This Lagrangian is invariant under global $\textrm{U}(2)_{R}\otimes \textrm{U}(2)_{L}$ transformation. The vector $\textrm{U}(1)$ part is realized as the conservation of the quark number, while the axial $\textrm{U}(1)_{A}$ symmetry is explicitly broken by quantum anomaly~\cite{tHooft:1986nc}. The invariance under the $\textrm{SU}(2)_{R}\otimes \textrm{SU}(2)_{L}$ transformation is called chiral symmetry:
\begin{align}
q_{R}
&\to Rq_{R},\quad
R=e^{i\theta_{R}^{i}T^{i}} \in \textrm{SU}(2)_{R} , \label{eq:right} \\
q_{L}
&\to Lq_{L},\quad
L=e^{i\theta_{L}^{i}T^{i}} \in \textrm{SU}(2)_{L},\label{eq:left}
\end{align}
with $i=1,2,3$ and $T^{i}$ being the generators of SU(2). The scalar quark bilinear mixes the left- and right-handed components:
\begin{align}
\bar{q}q
&=\bar{q}_{L}q_{R}+\bar{q}_{R}q_{L} .
\label{eq:qbarq}
\end{align}
The quark mass term in Eq.~\eqref{eq:QCD} therefore breaks chiral symmetry explicitly. In this way, chiral symmetry is realized in the massless limit of the QCD Lagrangian. This ideal $m_{\rm q}\to 0$ limit in QCD is called chiral limit.
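As a consistency check of this decomposition, the projector algebra can be verified numerically. The following is a minimal self-contained sketch in Python (exact rational arithmetic; the standard Dirac representation of $\gamma^{0}$ and $\gamma_{5}$ is assumed), verifying that $P_{L,R}$ are orthogonal, complete projectors and that $\gamma^{0}P_{L}=P_{R}\gamma^{0}$, the identity behind Eq.~\eqref{eq:qbarq}:

```python
from fractions import Fraction as Fr

def mul(A, B):
    """4x4 matrix product with exact rational entries."""
    return [[sum(A[i][k] * B[k][j] for k in range(4)) for j in range(4)]
            for i in range(4)]

def lin(a, A, b, B):
    """Linear combination a*A + b*B of 4x4 matrices."""
    return [[a * A[i][j] + b * B[i][j] for j in range(4)] for i in range(4)]

I4 = [[Fr(i == j) for j in range(4)] for i in range(4)]
# Standard (Dirac) representation: gamma^0 = diag(1,1,-1,-1),
# gamma_5 has off-diagonal 2x2 identity blocks.
g0 = [[Fr(x) for x in row] for row in
      [[1, 0, 0, 0], [0, 1, 0, 0], [0, 0, -1, 0], [0, 0, 0, -1]]]
g5 = [[Fr(x) for x in row] for row in
      [[0, 0, 1, 0], [0, 0, 0, 1], [1, 0, 0, 0], [0, 1, 0, 0]]]

PL = lin(Fr(1, 2), I4, Fr(-1, 2), g5)   # (1 - gamma_5)/2
PR = lin(Fr(1, 2), I4, Fr(1, 2), g5)    # (1 + gamma_5)/2

zero = [[Fr(0)] * 4 for _ in range(4)]
assert mul(PL, PL) == PL and mul(PR, PR) == PR   # idempotent
assert mul(PL, PR) == zero                       # orthogonal
assert lin(Fr(1), PL, Fr(1), PR) == I4           # complete
assert mul(g0, PL) == mul(PR, g0)                # gamma^0 P_L = P_R gamma^0
```

The last relation implies $\bar{q}_{L}q_{R}=\bar{q}P_{R}q$ and $\bar{q}_{R}q_{L}=\bar{q}P_{L}q$, so that their sum reproduces $\bar{q}q$.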
The transformation laws~\eqref{eq:right} and \eqref{eq:left} are generated by $(T^{i},0)$ and $(0,T^{i})$ of the corresponding Lie algebra. We define the vector and axial transformations as
\begin{align}
q_{R}
&\to e^{i\theta_{V}^{i}T^{i}} q_{R},\quad
q_{L}
\to e^{i\theta_{V}^{i}T^{i}} q_{L},
\label{eq:vector}\\
q_{R}
&\to e^{i\theta_{A}^{i}T^{i}} q_{R},\quad
q_{L}
\to e^{-i\theta_{A}^{i}T^{i}} q_{L}
\label{eq:axial},
\end{align}
which are generated by $(T^{i},T^{i})$ and $(T^{i},-T^{i})$, respectively. If the mass of the up quark equals the mass of the down quark, the quark mass term~\eqref{eq:qbarq} is invariant under the vector transformation $(e^{i\theta_{V}^{i}T^{i}},e^{i\theta_{V}^{i}T^{i}})\in \textrm{SU}(2)_{V}$ which is called isospin symmetry.
\paragraph{Spontaneous and explicit breaking of chiral symmetry}
In general, symmetries of the Lagrangian may be broken by the ground state of the theory (the vacuum $\ket{0}$). This phenomenon is called spontaneous symmetry breaking~\cite{Nambu:1961tp,Nambu:1961fr,Goldstone:1961eq}. In QCD, the vacuum expectation value of the $\bar{q}q$ operator is known to be nonzero:
\begin{align}
\bra{0}\bar{q}q\ket{0}
&\neq 0 .
\end{align}
This quark condensate breaks chiral symmetry spontaneously, as expected
from Eq.~\eqref{eq:qbarq}. In the chiral limit, the vectorial $\textrm{SU}(2)_{V}$ subgroup $(V,V)$ of the chiral $\textrm{SU}(2)_{R}\otimes \textrm{SU}(2)_{L}$ symmetry remains unbroken, thanks to the Vafa-Witten theorem~\cite{Vafa:1983tf}. This indicates that $\bra{0}\bar{u}u\ket{0}=\bra{0}\bar{d}d\ket{0}$ in the chiral limit. Thus, the spontaneous breaking pattern of chiral symmetry is
\begin{align}
\textrm{SU}(2)_{R}\otimes \textrm{SU}(2)_{L}
&\to \textrm{SU}(2)_{V} .
\label{eq:SSB}
\end{align}
As a consequence of the spontaneous symmetry breaking, a massless Nambu-Goldstone (NG) boson emerges for each broken generator in the Lorentz invariant system. In the case of two-flavor chiral symmetry~\eqref{eq:SSB}, there appear three massless pions, $\pi^{0}$ and $\pi^{\pm}$, corresponding to three broken generators $(T^{i},-T^{i})$.
The explicit symmetry breaking is caused by the small but nonzero masses of the up and down quarks. This effect can be introduced as an external current. The couplings to the scalar current $s$ and the pseudoscalar current $p$ are given by
\begin{equation}
\mathcal{L}_{\rm ext}
=\bar{q}(s-i\gamma_{5}p)q
=
\bar{q}_{L}\mathcal{M}^{\dag}q_{R}
+\bar{q}_{R}\mathcal{M}q_{L} ,
\end{equation}
where we denote
\begin{equation}
s+ip= \mathcal{M} .
\end{equation}
The Lagrangian is invariant if the external fields transform as
\begin{align}
\mathcal{M}^{\dag}
&\to L\mathcal{M}^{\dag}R^{\dag},\quad
\mathcal{M}
\to R\mathcal{M}L^{\dag} .
\label{eq:Mtrans}
\end{align}
In the isospin symmetric limit $m_{u}=m_{d}=\hat{m}$, the quark masses are introduced by identifying
\begin{equation}
\mathcal{M}
= \mathcal{M}^{\dag}
=
\begin{pmatrix}
\hat{m} & 0 \\
0 & \hat{m}
\end{pmatrix} ,
\label{eq:quarkmass}
\end{equation}
which is not invariant under Eq.~\eqref{eq:Mtrans} and breaks chiral symmetry. Equations~\eqref{eq:Mtrans} and \eqref{eq:quarkmass} will be used in Section~\ref{sec:chiral_effective_theory} to construct the effective Lagrangian.
The explicit breaking due to the quark masses induces perturbative corrections around the chiral limit. For instance, the pions become massive due to the explicit breaking. When the explicit breaking is small, the pion mass $m_{\pi}$ is given by the Gell-Mann--Oakes--Renner (GMOR) relation~\cite{GellMann:1968rz}
\begin{align}
F^{2} m_{\pi}^{2}
&
=-\hat{m}\bra{0}\bar{q}q\ket{0}
+\mathcal{O}(\hat{m}^{2}),
\label{eq:GMOR}
\end{align}
where $F \simeq 93$ MeV
is the pion decay constant.
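As an order-of-magnitude check, Eq.~\eqref{eq:GMOR} can be inverted to estimate the size of the quark condensate. Below is a minimal sketch in Python, assuming $m_{\pi}=138$ MeV, $F=93$ MeV and an averaged light quark mass $\hat{m}=3.4$ MeV (an assumed, scale-dependent input, so the result is only indicative):

```python
# Invert the GMOR relation F^2 m_pi^2 = -m_hat <qbar q>, where
# <qbar q> = <ubar u + dbar d> ~ 2 <ubar u> is the two-flavor sum,
# to get the per-flavor condensate.
m_pi = 138.0   # MeV, isospin-averaged pion mass
F = 93.0       # MeV, pion decay constant in this convention
m_hat = 3.4    # MeV, (m_u + m_d)/2 -- assumed, scale-dependent input

qbarq_per_flavor = -F**2 * m_pi**2 / (2.0 * m_hat)   # MeV^3
scale = (-qbarq_per_flavor) ** (1.0 / 3.0)
print(f"<ubar u> ~ -({scale:.0f} MeV)^3")
```

This yields $\langle\bar{u}u\rangle \sim -(290~\mathrm{MeV})^{3}$, in the same ballpark as commonly quoted values of about $-(250~\mathrm{MeV})^{3}$.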
For later convenience, here we briefly mention the trace anomaly in QCD in the chiral limit. Consider the global scale transformation of the space-time $x_{\mu}\to \sigma x_{\mu}$ with a parameter $\sigma$. The quark and gluon fields are transformed according to their canonical dimensions as
\begin{align}
q(x)
&
\to \sigma^{3/2}q(\sigma x),\quad
A_{\mu}^{a}(x)
\to \sigma A_{\mu}^{a}(\sigma x) .
\end{align}
The QCD action in the chiral limit ($m_{\rm q}=0$) is invariant under this transformation. The conserved current associated with it, $\Delta^{\mu}$, is called the dilatation current, and can be related to the energy-momentum tensor as $\Delta^{\mu}=x_{\nu}\Theta^{\nu\mu}$. The conservation of the dilatation current is, however, broken explicitly by the quantum effect, as shown in Refs.~\cite{Collins:1976yq,Nielsen:1977sy}:
\begin{align}
\partial_{\mu}\Delta^{\mu}
&
=\Theta^{\mu}_{\mu}
=\frac{\beta(\alpha_{s})}{2g}G_{\mu\nu}^{a}G^{a\mu\nu} ,
\end{align}
where $\beta(\alpha_{s})$ is the beta function in Eq.~\eqref{eq:renormalization_QCD}.
The scale invariance of the classical Lagrangian is thus broken with a nonzero gauge coupling constant $g$. Because this anomaly relates the trace of the energy-momentum tensor with the gluon fields, it is called the trace anomaly.
\subsubsection{Heavy quark symmetry}
\label{sec:heavy_quark_symmetry}
\paragraph{Heavy quark effective theory}
In contrast to the massless limit discussed above,
we now consider the limit of an infinitely heavy fermion.
Around this limit, we can construct the heavy quark effective theory as an expansion in $1/m_{Q}$ with the heavy quark mass $m_{Q}$.
This corresponds to regarding the charm and bottom quark masses as infinitely large.
Let us start by separating the QCD Lagrangian (\ref{eq:QCD}) as
\begin{align}
{\cal L}_{\mathrm{QCD}} = {\cal L}_{\mathrm{heavy}} + {\cal L}_{\mathrm{light}},
\end{align}
with the heavy quark part
\begin{align}
{\cal L}_{\mathrm{heavy}} = \sum_{Q} \bar{Q} (iD\hspace{-0.6em}/-m_{Q})Q,
\end{align}
and the light quark and gluon part
\begin{align}
{\cal L}_{\mathrm{light}} = - \frac{1}{4} G_{\mu\nu}^{a} G^{a \mu\nu} + \sum_{q} \bar{q} (iD\hspace{-0.6em}/-m_{q})q.
\label{eq:QCD_light}
\end{align}
In the former, $Q$ is the heavy quark field, and in the latter, $q$ is the light quark field.
Let us consider a large $m_{Q}$ for the heavy quark. Here we focus on a system with a single heavy flavor.
It is convenient to introduce the $v$-frame with four-velocity $v^{\mu}$, in which the heavy quark is at rest.
We regard most of the energy-momentum $p$ of the heavy quark as given by its on-mass-shell component.
Namely, we separate
\begin{align}
p^{\mu} = m_{Q} v^{\mu} + k^{\mu},
\end{align}
where we suppose that the off-mass-shell component $k^{\mu}$, the residual part, is much smaller than $m_{Q}$ ($k^{\mu} \ll m_{Q}$).
The four-velocity $v^{\mu}$ satisfies $v^{\mu}v_{\mu}=1$ and $v^{0}>0$ from the on-mass-shell condition and the propagation into the positive direction of time.\footnote{We should not confuse the spatial component of $v^{\mu}$, $\vec{v}$, with the three-dimensional velocity $\vec{u}$. They are related by $\vec{v}=\vec{u}/\!\sqrt{1-|\vec{u}\,|^{2}}$.}
Thus, the heavy quark momentum is separated into the on-mass-shell part and the off-mass-shell part.
Let us define the effective field for the positive energy state in the heavy quark limit:
\begin{align}
Q_{v}(x) = \frac{1+v\hspace{-0.5em}/}{2} e^{im_{Q} v\cdot x} Q(x),
\end{align}
with the original heavy quark field $Q(x)$.
We notice that $Q_{v}(x)$ satisfies the condition
\begin{align}
v\hspace{-0.5em}/ \, Q_{v}(x) = Q_{v}(x).
\label{eq:positive_projection}
\end{align}
In the rest frame $v^{\mu}=(1,\vec{0}\,)$, the projection operator $(1+v\hspace{-0.5em}/)/2$ picks up the upper two components of the four-component Dirac spinor in the standard representation.
The factor $e^{im_{Q} v\cdot x}$ leaves only the residual momentum $k^{\mu}$ as a dynamical variable in the effective field $Q_{v}$.
Hence $Q_{v}$ has no explicit $m_{Q}$ dependence.
Similarly, we define the effective field for the negative energy state as
\begin{align}
{\cal Q}_{v} = \frac{1-v\hspace{-0.5em}/}{2} e^{im_{Q} v\cdot x} Q(x),
\end{align}
which satisfies the condition $v\hspace{-0.5em}/ \, {\cal Q}_{v} = - {\cal Q}_{v}$.
Note the sign in the projection operator $(1-v\hspace{-0.5em}/)/2$: it picks up the lower two components of the Dirac spinor in the rest frame.
Using the two effective fields $Q_{v}(x)$ and ${\cal Q}_{v}(x)$,
we can rewrite ${\cal L}_{\mathrm{heavy}}$ as
\begin{align}
{\cal L}_{\mathrm{heavy}}
=
\bar{Q}_{v} v\!\cdot\!iDQ_{v} - \bar{\cal Q}_{v} (v\!\cdot\!iD+2m_{Q}) {\cal Q}_{v}
+\bar{\cal Q}_{v} i\hspace{0.2em}/\hspace{-0.6em}D_{\perp} Q_{v}
+\bar{Q}_{v} i\hspace{0.2em}/\hspace{-0.6em}D_{\perp} {\cal Q}_{v},
\end{align}
with $D_{\perp}^{\mu} = D^{\mu} - v^{\mu} v \cdot D$.
It is important to note that the mass of $Q_{v}$ is ``zero" while the mass of ${\cal Q}_{v}$ is $2m_{Q}$.
This is because we measure the energy relative to $m_{Q}$, as the residual momentum, for the positive energy state.
Hence we may eliminate the negative energy state by regarding $2m_{Q}$ as a large quantity.
Physically, this corresponds to neglecting the creation of $\bar{Q}Q$ pairs in the state.
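To see how this form arises, note that the original field is recovered as $Q(x)=e^{-im_{Q}v\cdot x}\left[ Q_{v}(x)+{\cal Q}_{v}(x) \right]$, since the two projection operators add up to unity. Substituting this into ${\cal L}_{\mathrm{heavy}}$, the phase factor shifts the derivative as $i\partial^{\mu} \rightarrow i\partial^{\mu}+m_{Q}v^{\mu}$, and we obtain
\begin{align}
{\cal L}_{\mathrm{heavy}}
=
\left( \bar{Q}_{v}+\bar{\cal Q}_{v} \right)
\left( iD\hspace{-0.6em}/ + m_{Q} v\hspace{-0.5em}/ - m_{Q} \right)
\left( Q_{v}+{\cal Q}_{v} \right) .
\end{align}
Decomposing $iD\hspace{-0.6em}/ = v\hspace{-0.5em}/ \, v\!\cdot\!iD + i\hspace{0.2em}/\hspace{-0.6em}D_{\perp}$ and using $v\hspace{-0.5em}/ \, Q_{v}=Q_{v}$ and $v\hspace{-0.5em}/ \, {\cal Q}_{v}=-{\cal Q}_{v}$, the mass term cancels for $Q_{v}$ and becomes $-2m_{Q}$ for ${\cal Q}_{v}$, while $i\hspace{0.2em}/\hspace{-0.6em}D_{\perp}$ survives only in the cross terms and $v\!\cdot\!iD$ only in the diagonal terms.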
By using the equation of motion for ${\cal Q}_{v}$,
\begin{align}
\left( v \!\cdot\! iD + 2m_{Q} \right) {\cal Q}_{v} = i\hspace{0.2em}/\hspace{-0.6em}D_{\perp} Q_{v},
\label{eq:equation_of_motion}
\end{align}
we eliminate ${\cal Q}_{v}$ in ${\cal L}_{\mathrm{heavy}}$, and obtain
\begin{align}
{\cal L}_{\mathrm{heavy}}
&=
\bar{Q}_{v}
\left(
v \!\cdot\! iD + i\hspace{0.2em}/\hspace{-0.6em}D_{\perp} \frac{1}{v \!\cdot\! iD + 2m_{Q}} i\hspace{0.2em}/\hspace{-0.6em}D_{\perp}
\right) Q_{v} \nonumber \\
&=
\bar{Q}_{v} v \!\cdot\! iD Q_{v}
+ \bar{Q}_{v} \frac{(iD_{\perp})^{2}}{2m_{Q}} Q_{v} - g_{s} \bar{Q}_{v} \frac{\sigma_{\mu\nu}G^{\mu\nu}}{4m_{Q}} Q_{v} + {\cal O}(1/m_{Q}^{2}),
\end{align}
up to and including ${\cal O}(1/m_{Q})$.
We note that this Lagrangian is valid at tree level, because we have used the equation of motion (\ref{eq:equation_of_motion}).
As we will see, the first and second terms are not affected by quantum fluctuations (loop corrections),
while the third term is.
To take this effect into account, we introduce the Wilson coefficient $c(\mu)$ at some energy scale $\mu$.
Finally, we obtain the result
\begin{align}
{\cal L}_{\mathrm{heavy}} =
\sum_{Q=1}^{N_{h}}
\left( \bar{Q}_{v} v \!\cdot\! iD Q_{v}
+ \bar{Q}_{v} \frac{(iD_{\perp})^{2}}{2m_{Q}} Q_{v} - c(\mu) g_{s} \bar{Q}_{v} \frac{\sigma_{\mu\nu}G^{\mu\nu}}{4m_{Q}} Q_{v} + {\cal O}(1/m_{Q}^{2}) \right),
\label{eq:HQET_Lagrangian}
\end{align}
where the sum over the heavy flavors is taken, with $N_{h}$ the number of heavy flavors.
This framework is called the heavy quark effective theory (HQET).
Equation (\ref{eq:HQET_Lagrangian}) is the effective Lagrangian in the HQET.
A similar procedure is applicable to higher orders.
At higher orders, not only the heavy-quark--gluon couplings, but also light-quark--heavy-quark couplings enter the effective Lagrangian.
The analysis up to ${\cal O}(1/m_{Q}^{n})$ with $n\ge 2$ can be found e.g. in Ref.~\cite{Bauer:1997gs} for ${\cal O}(1/m_{Q}^{2})$ and Ref.~\cite{Balzereit:1998jb} for ${\cal O}(1/m_{Q}^{3})$ (see also Ref.~\cite{Manohar:1997qy} and references therein).
\paragraph{Heavy quark symmetry}
In the heavy quark limit ($m_{Q} \rightarrow \infty$), the leading term $v\cdot iD$ in Eq.~(\ref{eq:HQET_Lagrangian}) realizes
the heavy spin-flavor $\mathrm{SU}(2N_{h})$ symmetry.
This symmetry implies simultaneously (i) the heavy-quark spin symmetry and (ii) the heavy-quark flavor symmetry.
As for (i), the operator changing the heavy quark spin is absent in $v\cdot iD$, and hence the heavy quark spin is a conserved quantity under any non-perturbative dynamics acting on the heavy quark.
As for (ii), there is no heavy flavor dependence in $v\cdot iD$, and hence the interchange of two heavy flavors, such as charm and bottom quarks, leaves the interaction invariant.
These symmetries are called the heavy-quark symmetry (HQS).
When we do not consider weak decay processes replacing a bottom quark by a charm quark,
we often focus only on the heavy quark spin symmetry in the heavy-hadron dynamics.
In the following, we simply use the term heavy quark symmetry to indicate the heavy quark spin symmetry.
It is important to note that the heavy quark symmetry holds at any energy scale, from low to high.
In this sense, the heavy quark symmetry may be contrasted with chiral symmetry, which is broken dynamically in the QCD vacuum.
We can observe the approximate realization of the heavy quark symmetry in the heavy hadron spectroscopy.
Let us consider the heavy hadrons composed of $\bar{q}Q$ with a light quark $q$ and a heavy quark $Q$.
We decompose the total spin of the heavy hadron $\vec{J}$ into the heavy quark spin $\vec{S}$ and the remaining part $\vec{j}$:
\begin{align}
\vec{J} = \vec{S} + \vec{j},
\label{eq:J_decomposition}
\end{align}
where $\vec{J}$ is conserved and $\vec{S}$ is conserved in the heavy quark limit.\footnote{We notice the conservation of $\vec{J}$ is valid only when the vacuum is rotationally invariant. If the rotational symmetry is broken, we cannot apply the following discussion. For example, such situation can happen when the external field (e.g. a magnetic field~\cite{Suzuki:2016kcs}) breaks the rotational symmetry.}
The conservation of each $\vec{J}$ and $\vec{S}$ leads to the conservation of $\vec{j}$ through Eq.~(\ref{eq:J_decomposition}).
Although we have denoted the light quark by $q$, it is, from the field theoretical point of view, an ensemble of multi-particle states, such as $q+\bar{q}qq+ \bar{q}qg+\cdots$ with a light antiquark $\bar{q}$ and a gluon $g$, namely all the light degrees of freedom other than the heavy quark.
Obtaining the explicit form of this wave function is a highly non-perturbative problem.
Nevertheless, $\vec{j}$ is a conserved quantity regardless of the complex structure of $q+\bar{q}qq+ \bar{q}qg+\cdots$.
This complex object carrying the conserved $\vec{j}$ is called the ``brown muck"~\cite{Neubert:1993mb}.
The conservation of the spin of the light degrees of freedom immediately leads to unique properties of heavy hadrons in the spectroscopy.
As we have seen, the heavy quark spin $\vec{S}$ and the brown muck spin $\vec{j}$ are independently conserved quantities in the heavy quark limit.
Therefore, we conclude that the heavy hadrons form pairs of degenerate states with spins
\begin{align}
J_{\pm} = j\pm1/2,
\label{eq:HQS_multiplet}
\end{align}
for $j \ge 1/2$.
Those two degenerate states are called the HQS doublet.
In the case of $j=0$, we have only one state, with spin $J=1/2$.
This state is called the HQS singlet.
As examples in the meson sector, we consider $D$ ($J^{P}=0^{-}$; 1870 MeV) and $D^{\ast}$ ($1^{-}$; 2010 MeV) mesons, whose mass difference is about 140 MeV, for charm, and $\bar{B}$ ($J^{P}=0^{-}$; 5280 MeV) and $\bar{B}^{\ast}$ ($1^{-}$; 5325 MeV) mesons, whose mass difference is about 45 MeV, for bottom.
By regarding those mass differences as small quantities, we can consider that the $D$ and $D^{\ast}$ ($\bar{B}$ and $\bar{B}^{\ast}$) mesons are approximately degenerate in mass.
These degenerate states correspond to heavy hadrons sharing a common brown muck with spin and parity $j^{{\cal P}}=1/2^{-}$, given by $\bar{q}$ in $\bar{q}Q$, in the heavy quark limit.
Therefore, the $D$ and $D^{\ast}$ mesons as well as the $\bar{B}$ and $\bar{B}^{\ast}$ mesons are classified as HQS doublets.
In the baryon sector, we consider $\Sigma_{c}$ ($J^{P}=1/2^{+}$; 2455 MeV) and $\Sigma_{c}^{\ast}$ ($3/2^{+}$; 2520 MeV), whose mass difference is 65 MeV, for charm, and $\Sigma_{b}$ ($J^{P}=1/2^{+}$; 5811 MeV) and $\Sigma_{b}^{\ast}$ ($3/2^{+}$; 5832 MeV), whose mass difference is 21 MeV.
Again, we can regard that $\Sigma_{c}$ and $\Sigma_{c}^{\ast}$ ($\Sigma_{b}$ and $\Sigma_{b}^{\ast}$) baryons are approximately degenerate states,
which correspond to the heavy hadrons with brown muck of $I=1$ and $j^{{\cal P}}=1^{+}$ for $qq$ in $qqQ$ in the heavy quark limit.
Therefore, $\Sigma_{c}$ and $\Sigma_{c}^{\ast}$ baryons as well as $\Sigma_{b}$ and $\Sigma_{b}^{\ast}$ baryons are classified as the HQS doublet.
We note that the ground state baryons $\Lambda_{c}$ ($J^{P}=1/2^{+}$; 2287 MeV) and $\Lambda_{b}$ ($1/2^{+}$; 5620 MeV) have no partner state in the nearby mass region, and hence they are classified as HQS singlets.
The structure of the brown muck in heavy hadrons is seen also in the branching ratios of the decay widths.
Let us suppose that the initial heavy hadron $\Psi_{J'}^{\prime(j')}$ with total spin $J'$ and brown muck spin $j'$ decays to the final heavy hadron $\Psi_{J}^{(j)}$ with total spin $J$ and brown muck spin $j$ by emitting a light hadron, e.g. a pion $\pi$ with relative angular momentum $L$.
Due to the independent conservations of the heavy quark spin $\vec{S}$ and the brown muck spin $\vec{j}$,
we see that the decay width is parametrized as
\begin{align}
\Gamma \left[ \Psi_{J'}^{\prime(j')} \rightarrow \Psi_{J}^{(j)}+\pi \right]
\propto
(2J+1)(2j'+1)
\left|
\left\{
\begin{array}{ccc}
L & j' & j \\
1/2 & J & J' \\
\end{array}
\right\}
\right|^{2}
+{\cal O}(1/M),
\label{eq:Isgur_Wise_decay}
\end{align}
where the ${\cal O}(1/M)$ term represents the corrections suppressed by the heavy hadron mass $M$~\cite{Isgur:1991wq}.
Here $J^{(\prime)}=j^{(\prime)}\pm1/2$.
In realistic application to experimental data, we need to include the phase space factor due to the different mass thresholds.
Including this factor, we can reproduce the branching ratios of the known decay patterns of $D_{0}^{\ast}$ ($J^{P}=0^{+}$), $D_{1}$ ($1^{+}$) mesons to $D$, $D^{\ast}$ mesons with a pion emission~\cite{Manohar:2000dt}.
We note that a similar result applies to the emission of light hadrons other than the pion.
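As a numerical cross-check of Eq.~(\ref{eq:Isgur_Wise_decay}) (our illustration, not part of the original analysis), the $6j$ symbol can be evaluated with the Racah formula in standard-library Python. Applied to the $d$-wave ($L=2$) decays of the $j^{{\cal P}}=3/2^{+}$ doublet ($D_{1}$, $D_{2}^{\ast}$) into the $j^{{\cal P}}=1/2^{-}$ doublet ($D$, $D^{\ast}$), the weight $(2J+1)(2j'+1)$ reproduces the heavy-quark-limit rule that both members of the initial doublet have the same total width, and gives $\Gamma[D_{2}^{\ast}\rightarrow D\pi]:\Gamma[D_{2}^{\ast}\rightarrow D^{\ast}\pi]=2:3$ before phase-space corrections:

```python
from fractions import Fraction as Fr
from math import factorial, sqrt

def tri(a, b, c):
    """Triangle coefficient Delta(a, b, c); returns 0.0 if the triangle rule fails."""
    args = (a + b - c, a - b + c, -a + b + c)
    if any(x < 0 or x.denominator != 1 for x in args):
        return 0.0
    return sqrt(factorial(int(args[0])) * factorial(int(args[1]))
                * factorial(int(args[2])) / factorial(int(a + b + c + 1)))

def six_j(a, b, c, d, e, f):
    """Wigner 6j symbol {a b c; d e f} by the Racah formula (Fraction arguments)."""
    tris = [tri(a, b, c), tri(a, e, f), tri(d, b, f), tri(d, e, c)]
    if 0.0 in tris:
        return 0.0
    t = max(a + b + c, a + e + f, d + b + f, d + e + c)
    t_max = min(a + b + d + e, b + c + e + f, c + a + d + f)
    s = 0.0
    while t <= t_max:
        s += (-1) ** int(t) * factorial(int(t) + 1) / (
            factorial(int(t - a - b - c)) * factorial(int(t - a - e - f))
            * factorial(int(t - d - b - f)) * factorial(int(t - d - e - c))
            * factorial(int(a + b + d + e - t)) * factorial(int(b + c + e + f - t))
            * factorial(int(c + a + d + f - t)))
        t += 1
    return tris[0] * tris[1] * tris[2] * tris[3] * s

def width_factor(L, jp, Jp, j, J):
    """Heavy-quark-limit weight for Psi'(j',J') -> Psi(j,J) + pion with angular momentum L."""
    return float((2 * J + 1) * (2 * jp + 1)) * six_j(L, jp, j, Fr(1, 2), J, Jp) ** 2

# j'^P = 3/2^+ doublet (D_1, D_2^*) decaying to the j = 1/2 (D, D^*) doublet, L = 2
L, jp, j = Fr(2), Fr(3, 2), Fr(1, 2)
d2_to_d  = width_factor(L, jp, Fr(2), j, Fr(0))  # D_2^* -> D  pi : 0.4
d2_to_ds = width_factor(L, jp, Fr(2), j, Fr(1))  # D_2^* -> D* pi : 0.6
d1_to_d  = width_factor(L, jp, Fr(1), j, Fr(0))  # D_1   -> D  pi : forbidden
d1_to_ds = width_factor(L, jp, Fr(1), j, Fr(1))  # D_1   -> D* pi : 1.0
```

The phase-space factor $\propto p_{\pi}^{2L+1}$, which differs between the two final states, must be supplied separately when comparing with the measured $D_{2}^{\ast}$ branching ratios.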
As long as the heavy quark symmetry holds, the brown muck structure in Eqs.~(\ref{eq:HQS_multiplet}) and (\ref{eq:Isgur_Wise_decay}) is quite general.
It should hold not only for normal hadrons (mesons and baryons), but also for exotic hadrons (multiquarks, hadronic molecules, quark-gluon hybrids, and so on) as well as for nuclei and nuclear matter with a heavy quark~\cite{Yasui:2013vca,Yamaguchi:2014era}.
For example, let us consider the case that a $\bar{D}$ meson or a $\bar{D}^{\ast}$ meson exists in nuclear matter.
If we consider the heavy quark limit, the masses of $\bar{D}$ and $\bar{D}^{\ast}$ mesons should be regarded as the same.
This is also the case in nuclear matter.
As far as the rotational symmetry is unbroken, the spin of the brown muck should be conserved in nuclear matter.
Hence the $\bar{D}$ and $\bar{D}^{\ast}$ mesons in nuclear matter should be degenerate as in vacuum, as long as the heavy quark symmetry is adopted.
Interestingly, the brown muck can exhibit a new structure when the heavy hadron combined with light hadrons forms an extended object.
As an example, let us consider a $\bar{D}^{(\ast)}N$ two-body system composed of a $\bar{D}^{(\ast)}$ meson and a nucleon.
In the heavy quark limit, the object carrying the conserved spin is given as the brown muck.
In the case of $\bar{D}^{(\ast)}N$, the brown muck should have the structure $qN$ where $q$ is the light quark inside the $\bar{D}^{(\ast)}$ meson.
The quark-hadron structure like $qN$, which includes both quarks and hadrons as effective degrees of freedom, can be called the ``light spin-complex (or spin-complex)", because the spin is the most important conserved quantity in this complex system~\cite{Yasui:2013vca}.
This is an interesting unit of matter which appears in the extended heavy hadron systems.
Furthermore, when $\bar{D}^{(\ast)}N$ exists in nuclear matter, it is surrounded by many pairs of a nucleon and a hole, $NN^{-1}$, as the low-energy degrees of freedom around the Fermi energy.
In the heavy quark limit, the spin-complex is given by $q+qNN^{-1}+q(NN^{-1})(NN^{-1})+\dots$, where $q$ is again the light quark inside the $\bar{D}^{(\ast)}$ meson.
More examples will be shown in Sect.~\ref{sec:D_mesons} and Sect.~\ref{sec:charm_baryons}.
We have to keep in mind that the mass splitting due to the HQS breaking, about 140 MeV between a $\bar{D}$ meson and a $\bar{D}^{\ast}$ meson, is still larger than the typical energy scales in nuclear dynamics, for example 40 MeV as the Fermi energy of a nucleon at normal nuclear matter density.
Hence, the approximate mass degeneracy as the HQS doublet may not be clearly identified within low energy scales.
To identify the HQS doublet structure precisely, we need to include the dynamics at large energy scales in nucleon-nucleon interaction, where the scattering energy reaches a few hundred MeV or more.
In the bottom sector, the HQS structure will be found at lower energy scales, because the mass splitting, about 45 MeV between a $B$ meson and a $B^{\ast}$ meson, is comparable with the Fermi energy of nuclear matter.
The HQS doublet in the baryon sector, $\Sigma_{c}$ and $\Sigma_{c}^{\ast}$ for charm and $\Sigma_{b}$ and $\Sigma_{b}^{\ast}$ for bottom, would provide a more appropriate situation to study the HQS structure in experiments, because their mass splittings, about 65 MeV and about 20 MeV, are smaller.
Regardless of the finite masses of heavy quarks, we emphasize that the HQS plays an essential role in the dynamics of heavy hadrons in nuclear matter for achieving a systematic understanding of their properties, for example, mass shifts, binding energies and decay widths in medium.
As we will see in examples in later sections, the binding energies of the two components of the HQS doublet exhibit an interesting behavior from the viewpoint of the HQS, though their mass thresholds are different due to the HQS breaking.
\paragraph{\mbox{\boldmath $1/m_{Q}$} corrections}
In the real world, the heavy quark mass is not infinitely large but finite.
It is, therefore, important to consider the corrections to the heavy quark limit when we compare theoretical results with experimental data at a quantitative level.
It is one of the biggest advantages of the heavy quark effective theory that the finite mass corrections are included systematically by a series expansion of $1/m_{Q}$.
For charm and bottom, we can regard the correction terms as small in comparison with the leading terms.
We may consider, for example, that the leading order correction is of the order of $\Lambda_{\mathrm{QCD}}/m_{Q}$.
We can estimate that $\Lambda_{\mathrm{QCD}}/m_{c}$ for charm and $\Lambda_{\mathrm{QCD}}/m_{b}$ for bottom are about 16\% and 4\%, respectively, with $m_{c}\simeq 1.3$ GeV, $m_{b}\simeq 4.7$ GeV, and $\Lambda_{\rm QCD}\simeq 0.21$ GeV ($\overline{MS}$ scheme with $N_{f}=5$)~\cite{Agashe:2014kda}.
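These estimates are simple ratios; as a quick numerical check (our snippet, using the central values quoted above):

```python
# Size of the leading 1/m_Q correction from the values quoted in the text (GeV)
Lambda_QCD, m_c, m_b = 0.21, 1.3, 4.7
r_charm, r_bottom = Lambda_QCD / m_c, Lambda_QCD / m_b
print(f"charm: {r_charm:.1%}, bottom: {r_bottom:.1%}")  # about 16% and 4-5%
```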
To investigate the correction terms, the following guides are important~\cite{Luke:1992cs}.
\begin{enumerate}
\item The leading order terms and the next-to-leading order terms are related by the velocity rearrangement (or reparametrization).
\item There exists a term at the next-to-leading order, which breaks the heavy quark symmetry.
\end{enumerate}
Let us explain points 1 and 2 in some detail.
As for 1, we remind ourselves that the velocity-frame with four-velocity $v^{\mu}$ was chosen in defining the heavy quark effective theory.
However, the choice of the velocity-frame should be arbitrary, and hence another velocity-frame with four velocity $w^{\mu} \neq v^{\mu}$ could be chosen.
The two velocity-frames with $v^{\mu}$ and $w^{\mu}$ are related by the Lorentz boost.
This is called the velocity rearrangement (or reparametrization).
Let us consider the invariance of the effective Lagrangian (\ref{eq:HQET_Lagrangian}) in the velocity rearrangement.
The Lorentz boost is given by
\begin{align}
v^{\mu} &\rightarrow v^{\mu} + \varepsilon^{\mu}/m_{Q}, \\
k^{\mu} &\rightarrow k^{\mu} - \varepsilon^{\mu},
\end{align}
where $\varepsilon^{\mu}$ is regarded as a quantity much smaller than $m_{Q}$.
To consider the $1/m_{Q}$ correction, we assume that this Lorentz boost is valid up to and including ${\cal O}(1/m_{Q})$.
For the condition $v_{\mu}v^{\mu}=1$ to hold after the Lorentz boost, we have to impose a constraint condition
\begin{align}
v \!\cdot\! \varepsilon = 0,
\end{align}
provided that ${\cal O}(1/m_{Q}^2)$ is neglected.
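Explicitly, the constraint follows by expanding the normalization condition,
\begin{align}
\left( v + \frac{\varepsilon}{m_{Q}} \right)^{2}
= v^{2} + \frac{2 \, v \!\cdot\! \varepsilon}{m_{Q}} + {\cal O}(1/m_{Q}^{2})
= 1 + \frac{2 \, v \!\cdot\! \varepsilon}{m_{Q}} + {\cal O}(1/m_{Q}^{2}),
\end{align}
so that the normalization is preserved at this order only for $v \!\cdot\! \varepsilon = 0$.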
We also impose that the projection condition $v\hspace{-0.5em}/ Q_{v}=Q_{v}$ for the effective field $Q_{v}$ is maintained under this Lorentz boost, which leads to the transformation
\begin{align}
Q_{v} \rightarrow e^{i \varepsilon \cdot x} \left( 1+ \frac{\varepsilon\hspace{-0.5em}/}{2m_{Q}} \right) Q_{v}.
\end{align}
According to this Lorentz transformation, the leading part ${\cal L}_{0}$ and the next-to-leading part ${\cal L}_{1}$ in Eq.~(\ref{eq:HQET_Lagrangian}),
\begin{align}
{\cal L}_{0} &= \bar{Q}_{v} v \!\cdot\! iD Q_{v}, \\
{\cal L}_{1} &= \bar{Q}_{v} \frac{(iD_{\perp})^{2}}{2m_{Q}} Q_{v} - c(\mu) g_{s} \bar{Q}_{v} \frac{\sigma_{\mu\nu}G^{\mu\nu}}{4m_{Q}} Q_{v},
\end{align}
are transformed as
\begin{align}
{\cal L}_{0} & \rightarrow {\cal L}_{0} + \frac{1}{m_{Q}} \bar{Q}_{v} i \varepsilon \!\cdot\! D Q_{v}, \\
{\cal L}_{1} & \rightarrow {\cal L}_{1} - \frac{1}{m_{Q}} \bar{Q}_{v} i \varepsilon \!\cdot\! D Q_{v},
\end{align}
respectively.
Although each of ${\cal L}_{0}$ and ${\cal L}_{1}$ is not invariant,
their sum ${\cal L}_{0}+{\cal L}_{1}$ is invariant under the Lorentz boost.
Therefore, we confirm that the effective Lagrangian in the HQET is invariant up to and including ${\cal O}(1/m_{Q})$.
As for Eq.~(\ref{eq:HQET_Lagrangian}), because the concrete expression of the effective Lagrangian at the next-to-leading order is already known, the invariance under the velocity rearrangement may seem to bring no new information.
However, this scheme is quite general, and it becomes important in the analysis of higher order terms.
For example, higher order terms ${\cal O}(1/m_{Q}^{n})$ with $n \ge 2$ can be investigated by imposing several constraints among several terms~\cite{Manohar:1997qy}.
This scheme is useful also to construct the higher order terms in the heavy hadron effective theory, as shown in Sect.~\ref{sec:heavy_hadron_effective_theory}.
As for 2, there is a spin-flip coupling for a heavy quark and a gluon, namely the magnetic coupling, at ${\cal O}(1/m_{Q})$.
In the rest frame $v^{\mu}=(1,\vec{0})$, for example, $\sigma^{\mu\nu}$ becomes
\begin{align}
\bar{Q}_{v} \sigma^{\mu\nu} Q_{v}
=
\bar{Q}_{v}
\epsilon^{ijk}
\left(
\begin{array}{cc}
\sigma^{k} & 0 \\
0 & 0
\end{array}
\right)
Q_{v}, \hspace{0.5em}\mathrm{for}\hspace{0.5em} \mu=i,\ \nu=j \hspace{0.5em}\mathrm{with}\hspace{0.5em} i,j,k=1,2,3,
\end{align}
and it flips the heavy quark spin in the positive energy component.
Therefore the heavy quark spin is not a conserved quantity at ${\cal O}(1/m_{Q})$.
This interaction is analogous to the fine structure of the electron states in atoms.
The properties 1 and 2 should hold, not only at the quark level, but also at the hadron level.
We will discuss how those two points play a role in the heavy hadron effective theory in Sect.~\ref{sec:heavy_hadron_effective_theory}.
According to the HQET, the physical quantities concerning a heavy quark can be expanded by $1/m_{Q}$.
The expansion should hold for any environment with finite temperature and baryon number density.
It should hold also in nuclei and nuclear matter with a heavy quark.
In the rest frame of the heavy hadron $v_{r}^{\mu}=(1,\vec{0}\,)$, the heavy hadron mass is parametrized as
\begin{align}
M_{H}(T,\rho) = m_{Q} + \bar{\Lambda}(T,\rho) - \frac{\lambda_{1}(T,\rho)}{2m_{Q}} + 4\vec{S} \!\cdot\! \vec{j} \, \frac{\lambda_{2}(T,\rho;m_{Q})}{2m_{Q}} + {\cal O}(1/m_{Q}^{2}),
\end{align}
with coefficients $\bar{\Lambda}(T,\rho)$, $\lambda_{1}(T,\rho)$ and $\lambda_{2}(T,\rho;m_{Q})$,
which are dependent on the temperature $T$ and the baryon number density $\rho$.
In order for this expansion to be valid, the temperature $T$ as well as the chemical potential $\mu$ corresponding to $\rho$ should be much smaller than the heavy quark mass $m_{Q}$.
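As a rough numerical illustration (our sketch, using the meson masses quoted earlier in this section), the splitting within the $j^{{\cal P}}=1/2^{-}$ meson doublets determines the size of $\lambda_{2}$ in vacuum:

```python
# Sketch: lambda_2 from the HQS-doublet splittings quoted in the text. For a j = 1/2
# doublet, 2 S.j = J(J+1) - S(S+1) - j(j+1) gives S.j = +1/4 (J=1) and -3/4 (J=0),
# so M(1^-) - M(0^-) = 2*lambda_2/m_Q, i.e. lambda_2 ~ (M*^2 - M^2)/4 when m_Q is
# approximated by the spin-averaged meson mass.
def s_dot_j(J, S=0.5, j=0.5):
    return 0.5 * (J * (J + 1) - S * (S + 1) - j * (j + 1))

lam2_charm = (2.010 ** 2 - 1.870 ** 2) / 4   # GeV^2, from D* and D
lam2_bottom = (5.325 ** 2 - 5.280 ** 2) / 4  # GeV^2, from B* and B
print(f"lambda_2: charm {lam2_charm:.3f} GeV^2, bottom {lam2_bottom:.3f} GeV^2")
```

The two values, about 0.14 GeV$^{2}$ for charm and 0.12 GeV$^{2}$ for bottom, are approximately equal, reflecting the $1/m_{Q}$ scaling of the splitting; the residual difference is consistent with the mild $m_{Q}$ dependence of $\lambda_{2}(T,\rho;m_{Q})$ noted below.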
We note that $\bar{\Lambda}(T,\rho)$, $\lambda_{1}(T,\rho)$ and $\lambda_{2}(T,\rho;m_{Q})$ are defined by
\begin{align}
\bar{\Lambda}(T,\rho) &= \frac{1}{2} \langle H_{v_{r}} | {\cal H}_{0} | H_{v_{r}} \rangle, \label{eq:expansion_1} \\
\lambda_{1}(T,\rho) &= \frac{1}{2} \langle H_{v_{r}} | \bar{Q}_{v_{r}} (iD_{\perp})^{2} Q_{v_{r}} | H_{v_{r}} \rangle, \label{eq:expansion_2} \\
8\vec{S} \!\cdot\! \vec{j} \, \lambda_{2}(T,\rho;m_{Q}) &= \frac{1}{2} c(\mu) \langle H_{v_{r}} | \bar{Q}_{v_{r}} g_{s} \sigma_{\alpha\beta}G^{\alpha\beta} Q_{v_{r}} | H_{v_{r}} \rangle . \label{eq:expansion_3}
\end{align}
$\bar{\Lambda}(T,\rho)$ is related to the energy momentum tensor in the heavy quark limit.
${\cal H}_{0}$ is the Hamiltonian corresponding to the Lagrangian Eq.~(\ref{eq:QCD_light}) for the light degrees of freedom.
Note that $\lambda_{2}(T,\rho;m_{Q})$ receives a logarithmic $m_{Q}$ dependence from quantum corrections, as indicated by the Wilson coefficient $c(\mu \simeq m_{Q})$ in Eq.~(\ref{eq:HQET_Lagrangian}).
Interestingly, Eqs.~(\ref{eq:expansion_2}) and (\ref{eq:expansion_3}) are rewritten as
\begin{align}
\lambda_{1}(T,\rho) &= - m_{Q} \langle H_{v_{r}} | \bar{Q}_{v_{r}} g_{s} \vec{x} \!\cdot\!\vec{E} Q_{v_{r}} | H_{v_{r}} \rangle, \label{eq:lambda_1} \\
8\vec{S} \!\cdot\! \vec{j} \, \lambda_{2}(T,\rho;m_{Q})
&=
\frac{1}{2} c(\mu) \langle H_{v_{r}} | \bar{Q}_{v_{r}} g_{s} \vec{\sigma} \!\cdot\! \vec{B} Q_{v_{r}} | H_{v_{r}} \rangle, \label{eq:lambda_2}
\end{align}
where $E^{i}=-G^{0i}$ and $B^{i}=\varepsilon^{ijk}G^{jk}$ are electric and magnetic components, respectively, of the gluon field in the rest frame.
The first equation was derived in Ref.~\cite{Neubert:1993zc} (see also Ref.~\cite{Bigi:1997fj}).
In Eq.~(\ref{eq:lambda_1}), $\vec{x}$ denotes the position of the center-of-mass of the system, which should coincide with the position of the heavy quark in the heavy-quark limit.
Therefore, $\lambda_{1}(T,\rho)$ and $\lambda_{2}(T,\rho;m_{Q})$ are related to the coupling strengths between the heavy quark and the electric and magnetic gluons, respectively.
As the environment changes, the values of $\lambda_{1}(T,\rho)$ and $\lambda_{2}(T,\rho;m_{Q})$ also change.
Because the heavy quark itself is hardly affected by the environment, the changes of $\lambda_{1}(T,\rho)$ and $\lambda_{2}(T,\rho;m_{Q})$ reflect the modification of the coupling of the heavy quark to the gluon fields in the environment (see Sect.~\ref{sec:D_mesons}).
\subsection{Hadronic effective theories}
\label{sec:hadronic_effective_theories}
In the low energy region, QCD is not directly solvable due to the non-perturbative dynamics.
Because the quarks and gluons are confined in the QCD vacuum,
it is essentially important to use hadrons as effective degrees of freedom instead of the quarks and gluons.
As effective theories for heavy hadrons, we introduce the chiral effective theory (Sect.~\ref{sec:chiral_effective_theory}) and the heavy hadron effective theory (Sect.~\ref{sec:heavy_hadron_effective_theory}).
Those effective theories are constructed according to chiral symmetry and heavy quark symmetry, as discussed in Sect.~\ref{sec:symmetries}.\footnote{We will use different conventions for various variables such as the pion decay constants and the field normalizations adopted in the chiral effective theory and in the heavy hadron effective theory.}
\subsubsection{Chiral effective theory}
\label{sec:chiral_effective_theory}
Because of the nonperturbative nature, QCD cannot be directly used to calculate the low-energy hadronic phenomena. Nevertheless, it is possible to establish an effective description, focusing on the symmetry principles and the low energy excitations. The effective field theory (EFT) is a useful technique to extract the relevant physics in the low energy domain. A traditional example is the effective Lagrangian for Quantum Electrodynamics (QED)~\cite{Heisenberg:1935qt}. In QED, the lowest energy excitation is the massless photon. When the energy of the system is much smaller than the electron mass, the dynamics of the photons can be described by the effective Lagrangian which is constrained by the symmetries of the underlying fundamental theory, QED. At present, EFT is vastly utilized in various fields of physics~\cite{Leutwyler:1993gf,Braaten:2007nq,Dubovsky:2011sj,Watanabe:2014fva}.
In the light quark sector of QCD, a systematic EFT approach is developed as chiral perturbation theory (ChPT)~\cite{Weinberg:1979kz,Gasser:1983yg,Gasser:1984gg,Scherer:2012xha}. As discussed in section~\ref{sec:chiral_symmetry}, the Nambu-Goldstone theorem ensures the appearance of almost massless pions as a consequence of the spontaneous breaking of chiral symmetry. The low-energy phenomena of the strong interaction are therefore determined by the effective Lagrangian of pions. A unique feature of ChPT is the spontaneous breakdown of chiral symmetry, which gives the constraints on the dynamics of the NG bosons (the low energy theorems). A general prescription to construct an effective Lagrangian with the nonlinear realization is given in Refs.~\cite{Coleman:1969sm,Callan:1969sn,Bando:1987br} for the case where the symmetry $G$ of the Lagrangian is spontaneously broken into a subgroup $H\subset G$ by the vacuum of the fundamental theory.
In the two-flavor QCD in the chiral limit, chiral symmetry $G=\textrm{SU}(2)_{R}\otimes \textrm{SU}(2)_{L}$ breaks down to the isospin symmetry $ H=\textrm{SU}(2)_{V}$. To construct the effective Lagrangian, we define the chiral field
\begin{equation}
u(\phi)
=\exp\left\{\frac{i\phi}{2F}\right\} ,
\end{equation}
where $F \simeq 93$ MeV is the pion decay constant in the two-flavor chiral limit and three NG boson fields $\pi^{0},\pi^{\pm}$ are collected in a $2\times 2$ matrix as
\begin{equation}
\phi=\begin{pmatrix}
\pi^{0} & \sqrt{2}\pi^{+} \\
\sqrt{2}\pi^{-} & -\pi^{0}
\end{pmatrix}.
\label{eq:phi_def}
\end{equation}
Because the generators of the broken symmetry are $(T^{i},-T^{i})$ [see Eq.~\eqref{eq:axial}], the representative of the coset space $G/H$ can be parametrized by the NG boson fields $\phi$ as
\begin{align}
\xi(\phi)
=(u(\phi),u^{\dag}(\phi)).
\end{align}
Using the unique decomposition of the group element $g\xi(\phi) =\xi(\phi^{\prime})h(\phi,g)$ with $h(\phi,g)\in H$, we define the nonlinear transformation law of the NG boson fields under $g=(R,L)$ through
\begin{align}
(u(\phi),u^{\dag}(\phi))
\stackrel{g}{\to} (R,L)(u(\phi),u^{\dag}(\phi)) (h^{\dag}(\phi,g),h^{\dag}(\phi,g)),
\end{align}
which can be simplified as the transformation law of the chiral field
\begin{align}
u(\phi)
\stackrel{g}{\to} Ru(\phi)h^{\dag}(\phi,g)= h(\phi,g)u(\phi)L^{\dag} .
\label{eq:utrans}
\end{align}
Because $h(\phi,g)$ depends on $\phi$, Eq.~\eqref{eq:utrans} defines the nonlinear transformation of the NG boson fields. Under the unbroken symmetry $g=h\in H$, this reduces to a linear transformation.
The fundamental quantity to construct the effective Lagrangian is the Maurer-Cartan 1-form $\alpha_{\mu}=i^{-1}(u(\phi),u^{\dag}(\phi))^{-1}\partial_{\mu}(u(\phi),u^{\dag}(\phi))$, which can be decomposed into
\begin{align}
\alpha_{\mu\parallel}
&=\frac{1}{2i}
[u^{\dag}(\phi)\partial_{\mu}u(\phi)
+u(\phi)\partial_{\mu}u^{\dag}(\phi)] , \\
\alpha_{\mu\perp}
&=\frac{1}{2i}
[u^{\dag}(\phi)\partial_{\mu}u(\phi)
-u(\phi)\partial_{\mu}u^{\dag}(\phi)] .
\end{align}
The transformation laws are given by
\begin{align}
\alpha_{\mu\parallel}
&\stackrel{g}{\to}
h(\phi,g)
\alpha_{\mu\parallel}
h^{\dag}(\phi,g)
+\frac{1}{i} h(\phi,g)\partial_{\mu} h^{\dag}(\phi,g) , \label{eq:aparalleltrans}\\
\alpha_{\mu\perp}
&\stackrel{g}{\to}
h(\phi,g)
\alpha_{\mu\perp}
h^{\dag}(\phi,g) \label{eq:aperptrans}.
\end{align}
It is instructive to note that these quantities correspond to the vector current and the axial current made of pions:
\begin{align}
\alpha_{\mu\parallel}
&
= -\frac{i}{8F^{2}}(\phi\partial_{\mu}\phi-\partial_{\mu}\phi\phi)+\mathcal{O}(\phi^{4}), \label{eq:aparallelexpand} \\
\alpha_{\mu\perp}
&
= \frac{\partial_{\mu}\phi}{2F}+\mathcal{O}(\phi^{3}).
\label{eq:aperpexpand}
\end{align}
Correspondence to the notation in Ref.~\cite{Scherer:2012xha} is understood as the chiral connection $\Gamma_{\mu}=i\alpha_{\mu\parallel}$ and the chiral vielbein $u_{\mu}=-2\alpha_{\mu\perp}$. The effective field theory is based on the most general Lagrangian with $\alpha_{\mu\parallel}$ and $\alpha_{\mu\perp}$ which is invariant under Eqs.~\eqref{eq:aparalleltrans} and \eqref{eq:aperptrans}. There are, however, infinitely many invariant terms. We thus need to introduce a power counting scheme in order to establish the hierarchy of different terms. For the description of the low-energy phenomena, derivative expansion of the effective Lagrangian is useful. This corresponds to the expansion in powers of the four-momentum of pions $p$ over an ultraviolet momentum scale $\Lambda$. We estimate $\Lambda\sim 1$ GeV, either by the chiral symmetry breaking scale, $\Lambda\sim 4\pi F$ or by the lowest energy of the non-NG mode in QCD, the mass of the rho meson, $\Lambda\sim m_{\rho}$.
Now our task is to construct the most general Lagrangian out of $\alpha_{\mu\parallel}$ and $\alpha_{\mu\perp}$, with the smallest number of derivatives. There is only one term
\begin{equation}
\mathcal{L}_{2}=-F^{2}\text{Tr}(\alpha_{\mu\perp}\alpha^{\mu}_{\perp}) ,
\label{eq:L2}
\end{equation}
where the coefficient $-F^{2}$ is chosen to obtain the correct normalization of the kinetic term of pions. In the purely pionic sector, it is convenient to define
\begin{equation}
U(\phi)
=u^{2}(\phi)=\exp\left\{\frac{i\phi}{F}\right\},
\end{equation}
because the transformation law does not contain $h$:
\begin{equation}
U(\phi)
\stackrel{g}{\to} RU(\phi)L^{\dag} .
\label{eq:Utrans}
\end{equation}
With the $U$ field, the Lagrangian~\eqref{eq:L2} can be expressed as
\begin{equation}
\mathcal{L}_{2}
=
\frac{F^{2}}{4}\text{Tr}(\partial_{\mu}U\partial^{\mu}U^{\dag}) .
\end{equation}
The invariance under Eq.~\eqref{eq:Utrans} is also clear. Because there are two derivatives, the lowest order Lagrangian is counted as $\mathcal{O}(p^{2})$. This Lagrangian contains even powers of $\phi$ fields, corresponding to the kinetic terms, the four-point vertices with two derivatives, and so on. The coefficient $F^{2}/4$ is chosen such that the kinetic terms are properly normalized. Applying the power counting scheme for the Green's functions, we find that the one-loop diagrams with the $\mathcal{O}(p^{2})$ interactions are counted as $\mathcal{O}(p^{4})$. Thus, the ultraviolet divergences in the one-loop diagrams can be renormalized by the counter terms in the higher-order Lagrangian at $\mathcal{O}(p^{4})$. In this way, systematic order-by-order renormalization is guaranteed.
Now we introduce the quark mass term which breaks chiral symmetry explicitly. We note that there are many ways to break chiral symmetry. In the construction of the chiral effective Lagrangian, it is important to break chiral symmetry in the same manner as it is broken in the fundamental theory of QCD. For this purpose, we first construct a chiral \textit{invariant} Lagrangian with the transformation law of $\mathcal{M}$ in Eq.~\eqref{eq:Mtrans}, and then substitute the quark mass matrix as in Eq.~\eqref{eq:quarkmass} to break the symmetry. Because we consider the perturbative expansion around the chiral limit, the quark masses are regarded as small. The invariant structure with a single $\mathcal{M}$ is given by
\begin{equation}
\text{Tr}(\mathcal{M} U^{\dag}+U\mathcal{M}^{\dag}) .
\end{equation}
Motivated by the GMOR relation~\eqref{eq:GMOR}, we assign the counting of $\mathcal{O}(p^{2})$ for the quark mass matrix $\mathcal{M}$. The lowest order $\mathcal{O}(p^{2})$ chiral Lagrangian with explicit symmetry breaking is now given by
\begin{equation}
\mathcal{L}_{2}
=
\frac{F^{2}}{4}\text{Tr}(\partial_{\mu}U\partial^{\mu}U^{\dag})
+\frac{F^{2}}{4}\text{Tr}(\chi U^{\dag}+U\chi^{\dag}),
\label{eq:L2ex}
\end{equation}
with
\begin{equation}
\chi
=2B\mathcal{M}
=2B
\begin{pmatrix}
\hat{m} & 0 \\
0 & \hat{m}
\end{pmatrix} ,
\end{equation}
where the low energy constant $B$ is determined by using the definition of the chiral condensate $\bra{0}\bar{q}q\ket{0}=\partial \bra{0}\mathcal{H}\ket{0}/\partial \hat{m}$ as
\begin{equation}
B
=-\frac{\bra{0}\bar{q}q\ket{0}}{2F^{2}} .
\end{equation}
By expanding Eq.~\eqref{eq:L2ex}, we can verify that the pion mass follows the GMOR relation~\eqref{eq:GMOR}.
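As a rough numerical check of the GMOR relation (our sketch; the input values $\hat{m}\simeq 3.4$ MeV and $\langle\bar{u}u\rangle=\langle\bar{d}d\rangle\simeq(-270~\mathrm{MeV})^{3}$ are illustrative and not taken from the text):

```python
# GMOR check: m_pi^2 = 2 B mhat with B = -<qbar q>/(2 F^2), where the condensate
# <qbar q> = <ubar u + dbar d> ~ 2 x (-270 MeV)^3 (illustrative input values)
F = 0.093                   # GeV, pion decay constant in the two-flavor chiral limit
mhat = 3.4e-3               # GeV, isospin-averaged light quark mass
qbarq = 2 * (-0.270) ** 3   # GeV^3, two-flavor chiral condensate
B = -qbarq / (2 * F ** 2)   # GeV
m_pi = (2 * B * mhat) ** 0.5
print(f"m_pi ~ {1000 * m_pi:.0f} MeV")  # ~ 124 MeV, close to the physical value
```

The deviation from the physical pion mass indicates the size of the higher-order corrections and the uncertainty of the input condensate.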
So far we have concentrated on the system with only pions. In QCD, the baryon number is conserved as a consequence of the vector $U(1)$ symmetry. Therefore, the system can be classified by the baryon number $B$. Let us first consider the baryon number $B=1$ sector. In the nonlinear realization~\cite{Coleman:1969sm,Callan:1969sn} the nucleons can be incorporated as matter fields, which belong to a representation of the unbroken symmetry $H$. In the two-flavor sector, the proton $p$ and the neutron $n$ belong to the fundamental representation of the isospin SU(2):
\begin{equation}
N
=
\begin{pmatrix}
p \\
n
\end{pmatrix} ,
\end{equation}
which transforms as
\begin{equation}
N
\stackrel{g}{\to} h(\phi,g) N,
\quad
\bar{N}
\stackrel{g}{\to} \bar{N}h^{\dag}(\phi,g)
\label{eq:nucleon} .
\end{equation}
Now we construct the most general invariant Lagrangian with a nucleon bilinear form. Because $h(\phi,g)$ contains the pion field $\phi$, the kinetic term of the nucleon is not invariant by itself. Instead, we can define the covariant derivative $D_{\mu}$ as
\begin{equation}
D_{\mu}
=\partial_{\mu}+i\alpha_{\mu\parallel} ,
\end{equation}
which gives the homogeneous transformation:
\begin{equation}
D_{\mu}N
\stackrel{g}{\to} h(\phi,g) D_{\mu}N .
\end{equation}
The most general Lagrangian with only single derivative is
\begin{equation}
\mathcal{L}_{\pi N}^{(1)}
=\bar{N}
\left(i\Slash{D}-m-g_{A}\gamma^{\mu}\gamma_{5}\alpha_{\mu\parallel}\right)
N ,
\end{equation}
where $m$ and $g_{A}$ are the mass and the axial charge of the nucleon in the chiral limit, respectively. This leading order term is counted as $\mathcal{O}(p^{1})$. We note that the mass term $\bar{N}N$ is allowed for the matter field. Because of the additional mass scale $m$ in the chiral limit, there arises the power counting violation problem in the loop calculation~\cite{Gasser:1987rb}. To recover the consistent power counting beyond the tree level, it is necessary either to employ the heavy baryon formalism~\cite{Jenkins:1990jv,Bernard:1992qa} or to employ the improved renormalization schemes, such as the infrared regularization~\cite{Becher:1999he} and the extended on mass shell scheme~\cite{Gegelia:1999gf,Fuchs:2003qc}.
The couplings to the pions are generated through $\alpha_{\mu\parallel}$ and $\alpha_{\mu\perp}$. From the expansions~\eqref{eq:aparallelexpand} and \eqref{eq:aperpexpand}, we obtain the two-pion coupling called the Weinberg-Tomozawa (WT) term and the one-pion coupling called the Yukawa term:
\begin{align}
\mathcal{L}_{\rm WT}
&=\frac{i}{8F^{2}}\bar{N}
\gamma^{\mu}(\phi\partial_{\mu}\phi-\partial_{\mu}\phi\phi)
N , \\
\mathcal{L}_{\rm Yukawa}
&=-\frac{g_{A}}{2F}\bar{N}
\gamma^{\mu}\gamma_{5}\partial_{\mu}\phi N .
\label{eq:Yukawa}
\end{align}
We note that the coupling strength of the Yukawa term is determined by the axial charge of the nucleon $g_{A}$, while the strength of the WT term is determined only by the pion decay constant $F$, in accordance with the low-energy theorem~\cite{Weinberg:1966kf,Tomozawa:1966jm}. In this way, the constraints on the pion-nucleon coupling from chiral symmetry are properly encoded in the effective Lagrangian. Following the same procedure, it is also possible to introduce other hadrons to determine their coupling with pions based on chiral symmetry.
ChPT can be generalized to three flavors with the inclusion of the kaons and the eta meson. In this case, however, there can be non-NG excitations which have a comparable mass with the threshold energies of the NG bosons, because of the substantially large strange quark mass. For instance, the scalar mesons $f_{0}(980)$ and $a_{0}(980)$ exist around the $\bar{K}K$ threshold, and the $\Lambda(1405)$ resonance lies below the $\bar{K}N$ threshold. While naive perturbation theory cannot be applied to these sectors, it is shown that the unitarization of the chiral interaction is useful~\cite{Hyodo:2011ur,Oller:1997ti,Oller:1997ng,Kaiser:1995eg,Oset:1997it,Oller:2000fj}. By performing the nonperturbative resummation of the interactions derived in ChPT, the s-wave near-threshold resonances can be dynamically generated. The driving force is the Weinberg-Tomozawa interaction; in the flavor symmetric limit, the low-energy $s$-wave scattering of the NG boson with a target hadron in the representation $T$ in channel $\alpha$ is given by
\begin{align}
V_{\rm WT}(\sqrt{s})
&=-\frac{(\sqrt{s}-M)}{4F^{2}}[C_{2}(T)+N_{F}-C_{2}(\alpha)] ,
\end{align}
where $\sqrt{s}$ is the total energy, $M$ is the mass of the target hadron, and $C_{2}(R)$ is the quadratic Casimir of SU($N_{f}$) for the representation $R$~\cite{Hyodo:2006yk,Hyodo:2006kg}. It is remarkable that the sign and the strength of the driving interaction is solely determined by the group theoretical structure.
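The group-theoretical factor $C_{2}(T)+N_{F}-C_{2}(\alpha)$ can be tabulated in a few lines of Python (our illustration) for an octet NG boson scattering off an octet target, where the channels follow from $8\otimes 8=1\oplus 8\oplus 8\oplus 10\oplus\overline{10}\oplus 27$ and the quadratic Casimir of the SU(3) irrep $(p,q)$ is $C_{2}=(p^{2}+q^{2}+pq+3p+3q)/3$:

```python
# Group-theory factor C = C2(T) + N_f - C2(alpha) of the WT interaction for an
# octet NG boson scattering off an octet target (e.g. KbarN), with N_f = 3
def casimir(p, q):  # quadratic Casimir of the SU(3) irrep (p, q)
    return (p * p + q * q + p * q + 3 * p + 3 * q) / 3

N_f = 3
C2_target = casimir(1, 1)  # octet target, C2(8) = 3
channels = {"1": (0, 0), "8": (1, 1), "10": (3, 0), "10bar": (0, 3), "27": (2, 2)}
coeffs = {name: C2_target + N_f - casimir(p, q) for name, (p, q) in channels.items()}
print(coeffs)  # strongest attraction in the singlet, repulsion in the 27
```

The strong attraction $C=6$ in the singlet channel is the driving force that dynamically generates the $\Lambda(1405)$ in $\bar{K}N$ scattering, while the $27$ channel is repulsive ($C=-2$).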
In the sector with baryon number $B=2$, we need to consider the nuclear forces~\cite{Weinberg:1990rz,Weinberg:1991um}. The nuclear force can be induced by the one-pion exchange from the Yukawa vertex~\eqref{eq:Yukawa}, while the power counting also allows the four-nucleon contact interaction, which is responsible for the short-range part of the nuclear force. The shallow bound state of deuteron and the large scattering length of the $^{1}S_{0}$ channel indicate the existence of an additional small momentum scale, which mandates the nonperturbative resummation of the interaction derived from ChPT Lagrangian. For more details, see the review articles~\cite{Epelbaum:2008ga,Machleidt:2011zz}. The applications of ChPT to nuclear matter and finite nuclei are discussed in Ref.~\cite{Holt:2013fwa}.
\subsubsection{Heavy hadron effective theory}
\label{sec:heavy_hadron_effective_theory}
We consider the heavy hadron effective theory as an effective theory of QCD at low energy.
As discussed in Sect.~\ref{sec:heavy_quark_symmetry},
one of the most important symmetries in heavy hadrons is the heavy quark (spin) symmetry in the heavy quark limit ($m_{Q} \rightarrow \infty$).
The corrections away from the heavy quark limit are incorporated systematically as a series in the $1/m_{Q}$ expansion.
In addition, due to the presence of light quarks in the QCD vacuum, we need to consider also chiral symmetry for investigating the properties of heavy hadrons.
Therefore, we have to consider both chiral symmetry and heavy quark symmetry (see Refs.~\cite{Manohar:2000dt,Casalbuoni:1996pg}).
\paragraph{Heavy meson effective theory}
We consider the heavy hadron effective theory for heavy-light mesons, $Q\bar{q}$ ($D$, $D^{\ast}$, $\bar{B}$, $\bar{B}^{\ast}$) mesons and $q\bar{Q}$ ($\bar{D}$, $\bar{D}^{\ast}$, $B$, $B^{\ast}$) mesons.
We mainly focus on the coupling to pions, because the possible terms in the effective Lagrangian are well controlled by chiral symmetry and the number of unknown coupling constants is limited.
Let us consider the effective field for $Q\bar{q}$ mesons.\footnote{The present formalism was given for small brown muck spin in Refs.~\cite{Georgi:1990cx,Mannel:1990vg} and was extended to general cases in Ref.~\cite{Falk:1991nq}. Here we follow the description in Ref.~\cite{Grinstein:1995uv}.}
We introduce the spinor $u_{Q\,\alpha}$ for the heavy quark $Q$ and the spinor $v_{q\,\beta}$ for the light antiquark $\bar{q}$
with the Dirac indices $\alpha$ and $\beta$, which are defined in the four-velocity $v$-frame.
We express the $Q\bar{q}$ meson field in the bispinor form
\begin{align}
u_{Q\,\alpha} \bar{v}_{q\,\beta},
\end{align}
with $v\hspace{-0.5em}/ u_{Q}=u_{Q}$ and $\bar{v}_{q} v\hspace{-0.5em}/=\bar{v}_{q}$.
We note that $\bar{q}$ stands for the brown muck whose quantum number is the same as the light antiquark, but contains non-perturbative components like $q\bar{q}\bar{q}$ and $g\bar{q}$, as discussed in Sect.~\ref{sec:heavy_quark_symmetry}.
The only important property of $\bar{q}$ is its transformation under chiral symmetry.
In the rest frame $v^{\mu}=(1,\vec{0}\,)$, we define the spin operator
\begin{align}
S^{i} = \frac{1}{2} \gamma_{5} \gamma^{0} \gamma^{i} =
\frac{1}{2}
\left(
\begin{array}{cc}
\sigma^{i} & 0 \\
0 & \sigma^{i}
\end{array}
\right),
\label{eq:spin_operator}
\end{align}
with $i=1,2,3$.
The matrix is expressed in the Dirac representation.
The spin up and down states of $u_{Q}$ are given by $u^{(1)}_{Q\alpha}=\delta_{1\alpha}$ and $u^{(2)}_{Q\alpha}=\delta_{2\alpha}$.
In contrast, the spin up and down states of $v_{q}$ are given by $v^{(2)}_{q\alpha}=-\delta_{4\alpha}$ and $v^{(1)}_{q\alpha}=-\delta_{3\alpha}$, respectively.
We note the relation $\vec{S}(u_{Q} \bar{v}_{q}) = (\vec{S}u_{Q}) \bar{v}_{q} + u_{Q} (\vec{S}\bar{v}_{q})$.
The spin 0 state is given by
\begin{align}
u_{Q}^{(1)} \bar{v}_{q}^{(1)} + u_{Q}^{(2)} \bar{v}_{q}^{(2)}
=
\frac{1+\gamma^{0}}{2} \gamma_{5},
\end{align}
and the spin 1 states are given by
\begin{align}
u_{Q}^{(1)}\bar{v}_{q}^{(2)} &= \frac{1+\gamma^{0}}{2} /\hspace{-0.5em}\epsilon^{(+)}, \\
\frac{1}{\sqrt{2}} \left( u_{Q}^{(1)} \bar{v}_{q}^{(1)} - u_{Q}^{(2)} \bar{v}_{q}^{(2)} \right) &= \frac{1+\gamma^{0}}{2} /\hspace{-0.5em}\epsilon^{(0)}, \\
u_{Q}^{(2)}\bar{v}_{q}^{(1)} &= \frac{1+\gamma^{0}}{2} /\hspace{-0.5em}\epsilon^{(-)},
\end{align}
with the polarization vector ${\epsilon^{(\lambda)}}^{\mu}$ ($\lambda=\pm,0$) for spin 1, whose explicit forms are ${\epsilon^{(\pm)}}^{\mu}=\frac{1}{\sqrt{2}}(0,1,\pm i, 0)$ and ${\epsilon^{(0)}}^{\mu}=(0, 0, 0, 1)$ (cf.~Ref.~\cite{Grinstein:1995uv}).
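As a cross-check, the rest-frame identities above can be verified with explicit Dirac matrices. The following is a minimal numerical sketch (using numpy, with the Dirac representation and the spinor assignments stated above); it checks the spin 0 combination and the idempotence of the projector:

```python
import numpy as np

# Dirac representation of gamma^0 and gamma_5
I2 = np.eye(2)
g0 = np.block([[I2, 0 * I2], [0 * I2, -I2]])
g5 = np.block([[0 * I2, I2], [I2, 0 * I2]])

# Heavy-quark spinors u_Q^(1,2) and light-antiquark spinors v_q^(1,2)
# in the rest frame, as defined in the text
e = np.eye(4)
u1, u2 = e[:, 0:1], e[:, 1:2]
v1, v2 = -e[:, 2:3], -e[:, 3:4]      # v^(1) = -e_3, v^(2) = -e_4

def bar(v):
    """Dirac conjugate row spinor: v-bar = v^dagger gamma^0."""
    return v.conj().T @ g0

# Spin 0 combination u^(1) vbar^(1) + u^(2) vbar^(2)
spin0 = u1 @ bar(v1) + u2 @ bar(v2)

# Compare with (1 + gamma^0)/2 gamma_5
P = (np.eye(4) + g0) / 2
assert np.allclose(spin0, P @ g5)

# The projector used throughout the text is idempotent
assert np.allclose(P @ P, P)
print("rest-frame spin 0 identity verified")
```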
After the Lorentz transformation from the rest frame to a general frame with four-velocity $v^{\mu}$, we obtain the spin 0 state
\begin{align}
u_{Q}^{(1)} \bar{v}_{q}^{(1)} + u_{Q}^{(2)} \bar{v}_{q}^{(2)}
\rightarrow
\frac{1+v\hspace{-0.5em}/}{2} \gamma_{5},
\end{align}
and the spin 1 states
\begin{align}
u_{Q}^{(1)}\bar{v}_{q}^{(2)} &\rightarrow \frac{1+v\hspace{-0.5em}/}{2} /\hspace{-0.5em}\epsilon^{(+)}, \\
\frac{1}{\sqrt{2}} \left( u_{Q}^{(1)} \bar{v}_{q}^{(1)} - u_{Q}^{(2)} \bar{v}_{q}^{(2)} \right) &\rightarrow \frac{1+v\hspace{-0.5em}/}{2} /\hspace{-0.5em}\epsilon^{(0)}, \\
u_{Q}^{(2)}\bar{v}_{q}^{(1)} &\rightarrow \frac{1+v\hspace{-0.5em}/}{2} /\hspace{-0.5em}\epsilon^{(-)},
\end{align}
noting that $(1+\gamma^{0})/2$ is changed into $(1+v\hspace{-0.5em}/)/2$ by the Lorentz transformation.
From this result, we find that the heavy-light meson fields, $P_{v}$ for spin 0 and $P_{v}^{\ast \mu}$ for spin 1, can be parametrized by the bispinor representations
\begin{align}
\frac{1+v\hspace{-0.5em}/}{2} \gamma_{5} P_{v}, \hspace{1em} \frac{1+v\hspace{-0.5em}/}{2} /\hspace{-0.6em}P_{v}^{\ast}.
\end{align}
Under the spin transformation $S_{Q}$ for the heavy quark $Q$ and the chiral transformation $U_{q}$ for the light antiquark $\bar{q}$,
\begin{align}
u_{Q}\bar{v}_{q} \rightarrow S_{Q} \left(u_{Q}\bar{v}_{q}\right) U^{\dag}_{q},
\end{align}
so that $P_{v}$ and $P_{v}^{\ast \mu}$ are transformed as
\begin{align}
\frac{1+v\hspace{-0.5em}/}{2} \gamma_{5} P_{v} &\rightarrow S_{Q} \frac{1+v\hspace{-0.5em}/}{2} \gamma_{5} P_{v} U^{\dag}_{q}, \label{eq:transformation_0} \\
\frac{1+v\hspace{-0.5em}/}{2} /\hspace{-0.6em}P_{v}^{\ast} &\rightarrow S_{Q} \frac{1+v\hspace{-0.5em}/}{2} /\hspace{-0.6em}P_{v}^{\ast} U^{\dag}_{q}. \label{eq:transformation_1}
\end{align}
Here we use the fact that the heavy quark spin transformation $S_{Q}$ commutes with $v\hspace{-0.5em}/$.
Because the spin 0 and 1 states are degenerate in the heavy quark limit,
and they have the common transformation properties for the heavy quark spin symmetry as well as for chiral symmetry,
we introduce
\begin{align}
H_{v}(x) = \frac{1+v\hspace{-0.5em}/}{2} \left( P^{\ast}_{v\,\mu}(x) \gamma^{\mu} + i P_{v}(x) \gamma_{5} \right),
\label{eq:H_field}
\end{align}
as a superposed field including the spin 0 and 1 states.
The transformation of $H_{v}$ is given by
\begin{align}
H_{v} \rightarrow S_{Q} H_{v} U^{\dag}_{q},
\end{align}
from Eqs.~(\ref{eq:transformation_0}) and (\ref{eq:transformation_1}).
Let us consider the interaction of the $Q\bar{q}$ meson with pions.
The pion dynamics is governed by chiral symmetry and by the momentum expansion of the pion, since pions are the Nambu-Goldstone modes of the QCD vacuum with broken chiral symmetry (Sect.~\ref{sec:chiral_effective_theory}).
As for the latter, we consider the lowest order contribution, ${\cal O}(q)$, with respect to the pion momentum $q$.
We consider the chiral transformation, $V^{\mu}(x) \rightarrow U_{q} V^{\mu}(x) U_{q}^{\dag} +i U_{q} \partial ^{\mu} U_{q}^{\dag}$ and $A^{\mu} \rightarrow U_{q} A^{\mu} U_{q}^{\dag}$, for the vector current $V^{\mu}(x)=\frac{i}{2}(\xi^{\dag}\partial^{\mu}\xi+\xi\partial^{\mu}\xi^{\dag})$ and the axial-vector current $A^{\mu}(x)=\frac{i}{2}(\xi^{\dag}\partial^{\mu}\xi-\xi\partial^{\mu}\xi^{\dag})$ with the pion field $\xi=\exp(i\phi/\sqrt{2}f_{\pi})$ for $\phi$ defined in Eq.~(\ref{eq:phi_def}).\footnote{Note that the notations are different from those in Sect.~\ref{sec:chiral_effective_theory} as $f_{\pi}=\sqrt{2}F$, $\xi=u$, $V^{\mu}=-a^{\mu}_{\parallel}$, $A^{\mu}=-a^{\mu}_{\perp}$ and $U_{q}=h$.}
Then, we find that the effective Lagrangian
\begin{align}
{\cal L}_{\mathrm{heavy-light}} = \mathrm{Tr}\bar{H}_{v} v \!\cdot\! iD H_{v} + g \mathrm{Tr} \bar{H}_{v} H_{v} \gamma_{\mu} \gamma_{5} A^{\mu} + {\cal O}(1/M),
\label{eq:L_heavy_hadron_pion}
\end{align}
is invariant under both heavy quark spin symmetry and chiral symmetry.
At leading order, corrections suppressed by the heavy hadron mass $M$ are neglected.
In the first term, $D^{\mu}H_{v}=\partial^{\mu}H_{v}-iV^{\mu}H_{v}$ is the covariant derivative.\footnote{We note that the covariant derivative for $\bar{q}Q$ meson is defined by $D^{\mu}H_{v}=\partial^{\mu}H_{v}+iH_{v}V^{\mu}$~\cite{Manohar:2000dt}.}
The trace $\mathrm{Tr}$ is taken over the Dirac spinor and the isospin.
In the second term, $g$ is the coupling constant in the axial-vector interaction.
Note that the axial-vector current is given as $A^{\mu}\simeq-\partial^{\mu} \phi/\sqrt{2}f_{\pi}$ at lowest order in the pion field; hence the single pion coupling to the $H_{v}$ field, which is relevant to the one-pion-exchange potential, is supplied by this term.
It is also important to note that the single pion coupling vertex has the structure of $q^{i} P_{v}^{\ast i \dag}P_{v}$ (or $q^{i} P_{v}^{\dag} P_{v}^{\ast i}$) and $q^{i} \varepsilon^{ijk} P_{v}^{\ast j \dag} P_{v}^{\ast k}$ ($i,j,k=1,2,3$) with $q^{i}$ being the three-dimensional momentum of the pion.
Hence the single pion coupling causes the spin-flip of the heavy-light mesons~\cite{Cohen:2005bx,Yasui:2009bz}.\footnote{This form is comparable with the non-relativistic form of the axial-vector coupling of $\pi NN$, $\vec{q}\!\cdot\!\vec{\sigma}$, with $\vec{\sigma}$ being the spin operator acting on the nucleon.}
More precisely, only the spin of the light component flips, while that of the heavy quark $Q$ does not, due to the conservation of the heavy quark spin (cf.~Sect.~\ref{sec:heavy_quark_symmetry}).
The explicit forms of the potential between a heavy-light meson ($\bar{q}Q$, $q\bar{Q}$) and a nucleon will be given in Sect.~\ref{sec:D_mesons}.
\paragraph{\mbox{\boldmath $1/M$} corrections}
We consider the $1/M$ corrections for the effective Lagrangian (\ref{eq:L_heavy_hadron_pion}) for $q\bar{Q}$ meson.
To include the $1/M$ correction in the effective Lagrangian, the velocity rearrangement and the heavy quark spin breaking are important~\cite{Luke:1992cs,Kitazawa:1993bk} (see also Ref.~\cite{Yasui:2013xr}), as introduced in Sect.~\ref{sec:heavy_quark_symmetry}.
To consider the velocity rearrangement, we introduce the ``original'' four-velocity defined by
\begin{align}
{\cal V}^{\mu} = \frac{v^{\mu}+iD^{\mu}/M}{|v^{\mu}+iD^{\mu}/M|},
\end{align}
containing the residual momentum.
This satisfies the normalization condition ${\cal V}_{\mu}{\cal V}^{\mu}=1$.
Up to and including ${\cal O}(1/M)$, ${\cal V}^{\mu}$ is expanded as
\begin{align}
{\cal V}^{\mu} = v^{\mu} + \frac{1}{M} \left( iD^{\mu} - v^{\mu} v \!\cdot\! iD \right) + {\cal O}(1/M^{2}),
\label{eq:original_four_velocity}
\end{align}
which satisfies ${\cal V}_{\mu}{\cal V}^{\mu}=1+{\cal O}(1/M^{2})$.
Let us remember that $H_{v}$ in Eq.~(\ref{eq:H_field}) was defined in the $v^{\mu}$-frame.
We consider the Lorentz boost from the $v^{\mu}$-frame to the ${\cal V}^{\mu}$-frame (${\cal V}^{\mu}$ is given in Eq.~(\ref{eq:original_four_velocity})) up to and including ${\cal O}(1/M)$.
Then, the Lorentz transformation of $H_{v}$ is given by
\begin{align}
{\cal H}_{v}(x) = H_{v}(x) + \frac{1}{2M} \left( i \vec{D} \hspace{-0.6em}/ \, H_{v}(x) - H_{v}(x) i\cev{D}\hspace{-0.6em}/ \, - 2v \!\cdot\! iD H_{v}(x) \right) + {\cal O}(1/M^2).
\end{align}
Note that ${\cal H}_{v}$ satisfies ${\cal V}\hspace{-0.6em}/\hspace{0.2em}{\cal H}_{v} = {\cal H}_{v}+{\cal O}(1/M^{2})$ and ${\cal H}_{v}{\cal V}\hspace{-0.6em}/ = - {\cal H}_{v}+{\cal O}(1/M^{2})$, hence $(1+{\cal V}\hspace{-0.6em}/ )/2$ operates like the projection operator $(1+v\hspace{-0.5em}/)/2$ for $H_{v}$ in Eq.~(\ref{eq:H_field}).
By the velocity rearrangement between $v^{\mu}$ and $w^{\mu}=v^{\mu}+q^{\mu}/M$ with $q^{\mu}$ much smaller than $m_{Q}$, ${\cal H}_{v}(x)$ is transformed as
\begin{align}
{\cal H}_{w}(x) = e^{iq \cdot x} {\cal H}_{v}(x) + {\cal O}(1/M^{2}),
\end{align}
in a covariant form.
As for the breaking of the heavy quark symmetry, we consider terms of the form ${\mathrm{Tr}} \left( \bar{H}_{v} \Gamma H_{v} \cdots \right)$ with Dirac gamma matrices $\Gamma=i\gamma_{5}, \gamma^{\mu}\gamma_{5}, \sigma^{\mu\nu}$.
In fact, those $\Gamma$'s do not commute with the spin operator (\ref{eq:spin_operator}).
We may choose $\Gamma=\gamma^{\mu}\gamma_{5}$ for the axial-vector coupling to the pion field.
Following both the velocity rearrangement and the heavy quark spin breaking,
we can construct the effective Lagrangian up to and including ${\cal O}(1/M)$.
The interaction Lagrangian in Eq.~(\ref{eq:L_heavy_hadron_pion}) is extended as~\cite{Kitazawa:1993bk,Yasui:2013xr}
\begin{align}
{\cal L}^{\mathrm{LO+NLO}}_{\pi H_{v} H_{v}}
&=
\left( g+\frac{g_{1}}{M} \right) {\mathrm Tr} \bar{H}_{v} H_{v} \gamma_{\mu} \gamma_{5} A^{\mu} \nonumber \\
&
+ \frac{g}{2M}
\left( {\mathrm{Tr}} v \!\cdot\! iD \bar{H}_{v} H_{v} \gamma_{\mu} \gamma_{5} A^{\mu} - {\mathrm{Tr}} \bar{H}_{v} v \!\cdot\! iD H_{v} \gamma_{\mu} \gamma_{5} A^{\mu} \right) \nonumber \\
&
+ \frac{g}{4M} \varepsilon_{\mu\nu\rho\sigma} \left( {\mathrm{Tr}} iD^{\nu} \bar{H}_{v} H_{v} \sigma^{\rho\sigma}A^{\mu} - {\mathrm{Tr}} \bar{H}_{v} iD^{\nu} H_{v} \sigma^{\rho\sigma} A^{\mu} \right) \nonumber \\
&
+ \frac{g_{2}}{M} {\mathrm{Tr}} \bar{H}_{v} \gamma_{\mu} \gamma_{5} H_{v} A^{\mu} + {\cal O}(1/M^{2}),
\label{eq:L_heavy_hadron_pion_2}
\end{align}
where $g_{1}$ and $g_{2}$ are new coupling constants different from $g$.
Note that the first, second and third terms on the right-hand side are related to each other, sharing the common coupling $g$, due to the invariance under the velocity rearrangement.
Those three terms satisfy the heavy quark symmetry.
Only the last term breaks the heavy quark symmetry.
We have demonstrated the construction of the effective Lagrangian up to and including ${\cal O}(1/M)$ for the axial-vector coupling.
Furthermore, we may consider the vector current term of the pion field, as shown in Ref.~\cite{Kitazawa:1993bk}.
This scheme can be applied also for other cases including higher order terms of the pion momentum.
\paragraph{Heavy baryon effective theory}
So far, we have considered the heavy-light ($q\bar{Q}$ and $Q\bar{q}$) mesons.
Because the construction scheme of the effective field is quite general,
we may consider other heavy hadrons with different brown muck spins~\cite{Falk:1991nq}.
Let us investigate the effective field of $qqQ$ ($\Lambda_{c}$, $\Sigma_{c}$, $\Sigma_{c}^{\ast}$, $\dots$, $\Lambda_{b}$, $\Sigma_{b}$, $\Sigma_{b}^{\ast}$, $\dots$) baryons.
For the brown muck spin $j=0$ ($\Lambda_{c}$ baryon and $\Lambda_{b}$ baryon), the effective field is trivially given by the spinor
\begin{align}
\psi_{v}=u_{h},
\end{align}
with $u_{h}$ being the heavy quark spinor satisfying $v\hspace{-0.5em}/\hspace{0.1em}u_{h}=u_{h}$ (cf.~Eq.~(\ref{eq:positive_projection})).
Here $\psi_{v}$ satisfies the condition $v\hspace{-0.4em}/\psi_{v} = \psi_{v}$.
In the literature, $\psi_{v}$ is sometimes denoted by $B_{\bar{\bf 3}}$ in the SU(3) ($u$, $d$, $s$) flavor anti-triplet representation for $qq$ in $qqQ$.
For the brown muck spin $j=1$ ($\Sigma_{c}$, $\Sigma_{c}^{\ast}$ baryons and $\Sigma_{b}$, $\Sigma_{b}^{\ast}$ baryons),
the effective field is given by the vector-spinor
\begin{align}
\psi_{v}^{\mu} = A^{\mu} u_{h},
\end{align}
with $A^{\mu}$ being the brown muck axial-vector with $j^{\,{\cal P}}=1^{+}$ and $u_{h}$ being the heavy quark spinor with the condition $v\hspace{-0.5em}/\hspace{0.1em}u_{h}=u_{h}$.
Notice that $v\hspace{-0.4em}/\psi_{v}^{\mu} = \psi_{v}^{\mu}$ as well as $v_{\mu}\psi_{v}^{\mu}=0$ are satisfied, where the latter follows from the condition $v_{\mu}A^{\mu}=0$ for the axial-vector field $A^{\mu}$.
$\psi^{\mu}_{v}$ is a superposed field of the spin $3/2$ baryon and the spin $1/2$ baryon.
Let us decompose the $\psi^{\mu}_{v}$ into physical components $\psi_{v\,3/2}^{\mu}$ and $\psi_{v\,1/2}$ with spin $3/2$ and $1/2$, respectively.
For this purpose, we recall the condition for the spin $3/2$ field $\Psi^{\mu}$ in the Rarita-Schwinger formalism~\cite{Rarita:1941mf} (see also Ref.~\cite{Scherer:2012xha}),
\begin{align}
\gamma_{\mu} \Psi^{\mu}=0
\hspace{0.5em}
\mathrm{and}
\hspace{0.5em}
\partial_{\mu} \Psi^{\mu}=0.
\end{align}
In the case of $\psi_{v\,3/2}^{\mu}$ in the $v$-frame, those two conditions are rewritten as
\begin{align}
\gamma_{\mu} \psi_{v\,3/2}^{\mu}=0
\hspace{0.5em}
\mathrm{and}
\hspace{0.5em}
v_{\mu} \psi_{v\,3/2}^{\mu}=0.
\label{eq:RS_condition_2}
\end{align}
Then, we find that the field
\begin{align}
\psi_{v\,3/2}^{\mu} = \left( g^{\mu}_{\nu} -\frac{1}{3} \left( \gamma^{\mu}+v^{\mu} \right) \gamma_{\nu}\right) \psi_{v}^{\nu},
\end{align}
satisfies Eq.~(\ref{eq:RS_condition_2}) as well as $v\hspace{-0.5em}/\hspace{0.1em} \psi_{v\,3/2}^{\mu}=\psi_{v\,3/2}^{\mu}$.
The spin 1/2 field $\psi_{v\,1/2}$ is given as the orthogonal component to $\psi_{v\,3/2}^{\mu}$;
\begin{align}
\psi_{v\,1/2} = \frac{1}{\sqrt{3}} \gamma_{5} \gamma_{\nu} \psi_{v}^{\nu},
\end{align}
which satisfies $v\hspace{-0.5em}/\hspace{0.1em} \psi_{v\,1/2}=\psi_{v\,1/2}$.
As a result, we can express $\psi_{v}^{\mu}$ by $\psi_{v\,3/2}^{\mu}$ and $\psi_{v\,1/2}$ as
\begin{align}
\psi_{v}^{\mu} = \psi_{v\,3/2}^{\mu} + \frac{1}{\sqrt{3}} \left( \gamma^{\mu} + v^{\mu} \right) \gamma_{5} \psi_{v\,1/2}.
\label{eq:superfield_baryon}
\end{align}
In the literature, $\psi_{v}^{\mu}$ is sometimes denoted by $S_{{\bf 6}}^{\mu}$ in the SU(3) ($u$, $d$, $s$) flavor sextet representation.
With those effective fields, we can construct the interaction Lagrangian with light mesons.
Examples can be found in Ref.~\cite{Liu:2011xc} and references therein.
The potential between a heavy baryon and a nucleon will be given in Sect.~\ref{sec:charm_baryons}.
\subsection{Theoretical approaches to finite density}
\label{sec:finite_density}
There are several theoretical approaches to investigate the properties of hadrons in nuclear medium.
First, we introduce the few-body calculation based on the Gaussian expansion method (Sect.~\ref{sec:fewbody_systems}).
In principle, the few-body calculation can be applied to large baryon numbers,
but it is not practical in many cases.
Instead, we introduce the treatment of the in-medium nucleon propagator used in many-body calculations in infinitely extended nuclear matter (Sect.~\ref{sec:nuclear_matter}).
An important aspect is that several quark and gluon condensates in the nuclear medium become different from those in vacuum.
At leading order in the baryon number density, we can obtain model-independent results for those changes (Sect.~\ref{sec:chiral_symmetry_at_finite_density}).
We introduce the QCD sum rule as a theoretical technique connecting the quark and gluon condensates with the hadron properties (Sect.~\ref{sec:QCDSR}).
\subsubsection{Few-body systems}
\label{sec:fewbody_systems}
In this section, we give a brief review of the
Gaussian expansion method which is one of the numerical techniques to solve accurately quantum few-body systems.
Few-body calculations appear in many important problems of quantum theory.
In order to solve few-body systems with high precision,
various methods have been developed so far~\cite{Kamada:2001tv}.
As one of the standard methods,
the variational method with appropriate basis functions is widely used
to analyze bound and scattering states of few-body systems.
In this method, the eigenvalues are obtained by diagonalizing the
Hamiltonian with the wave functions expanded in the basis functions.
The calculation is iterated while changing the variational parameters,
and continued until the obtained eigenvalues converge.
Even with a finite number of $L^2$-class basis functions,
a few-body system involves a large number of degrees of freedom.
Therefore, it is important to choose appropriate basis functions.
The Gaussian expansion method (GEM)
proposed in Refs.~\cite{Kamimura:1988zz,Hiyama:2003cu,Hiyama:2012sma}
is a well-established method
in which the wave functions are expanded in terms of Gaussian basis functions.
The GEM provides an efficient approach for few-body calculations, because
(i) it can describe the physical behavior of wave functions
such as short-range correlations and long-range
asymptotic behavior, and (ii) it is easy to perform the coordinate
transformations and the calculation of the matrix elements for
various interactions, including non-central
forces such as the spin-orbit and tensor forces.
The results of the GEM are equivalent to those of the
Faddeev method, which gives accurate solutions for three-body systems,
once good convergence of the eigenenergy is achieved~\cite{Hiyama:2003cu}.
The GEM has been applied to various few-body problems such as
molecular
physics~\cite{Kamimura:1988zz,Hiyama:2003cu,Kino:1993hi,Kino:1993hi195,PhysRevA.52.870,Hamahata:2001hi},
atomic physics~\cite{Kino:1999hi,Kino:2001hi138,PhysRevA.85.022502,Hiyama:2012cj},
nuclear
physics~\cite{Matsumoto:2003qw,Matsumoto:2004ck,Matsumoto:2005pd,Prog.Theor.Phys11620061Aoyama,Myo:2007vm,Funaki:2008gb,Yamanaka:2015qfa,Hiyama:2016nwn,Yamanaka:2016fjj},
astrophysics~\cite{Hamaguchi:2007mp,Kusakabe:2007fv,Kamimura:2008fx},
hypernuclear
physics~\cite{Hiyama:2003cu,KanadaEn'yo:2008wm,Jido:2008kp,Hiyama:2010zzb,Hiyama:2010zz,Yang:2011rp,Dote:2014ema},
and
other hadron few-body systems~\cite{Hiyama2006237,Segovia:2008zz,Ortega:2010qq,Yokota:2013sfa,Yamaguchi:2013hsa,Yoshida:2015tia,Maeda:2015hxa}.
\begin{figure}[t]
\begin{center}
\includegraphics[width=8cm,bb=0 0 783 232]{figs/2.4/jacobi_gn-eps-converted-to.pdf}
\caption{Jacobi coordinates of the three-body systems.}
\label{fig:jacobi}
\end{center}
\end{figure}
In order to describe the basis functions,
the Jacobi coordinates are employed.
In Fig.~\ref{fig:jacobi}, the Jacobi coordinates of the three-body
system are shown.
By introducing the three channels of the Jacobi coordinates,
the function space becomes larger than in the case with a single channel.
The basis functions with three channels take into account all two-particle
correlations.
If only one Jacobi channel is employed,
it is difficult
to describe the two-particle correlations corresponding to the other Jacobi
channels.
The full set of the Jacobi coordinates yields good convergence even if the
orbital angular momenta considered in the calculation are restricted.
In Fig.~\ref{fig:jacobi}, the coordinates $\vec{r}_c,\vec{R}_c$ ($c=1,2,3$) and the center of mass
$\vec{G}$
are given in terms of the
single-particle coordinates $\vec{x}_i$ ($i=1,2,3$) and the masses $m_i$.
The coordinates for $c=1$ are written as
\begin{align}
&\left(
\begin{array}{c}
\vec{r}_1 \\
\vec{R}_1 \\
\vec{G} \\
\end{array}
\right) =
\left(
\begin{array}{c}
\vec{x}_2-\vec{x}_1 \\
\displaystyle \vec{x}_3-\frac{m_1\vec{x}_1+m_2\vec{x}_2}{m_1+m_2} \\[2.5mm]
\displaystyle \frac{m_1\vec{x}_1+m_2\vec{x}_2+m_3\vec{x}_3}{m_1+m_2+m_3} \\
\end{array}
\right) =
\left(
\begin{array}{ccc}
-1&1&0 \\
\displaystyle -\frac{m_1}{m_{12}}&\displaystyle -\frac{m_2}{m_{12}}&1 \\[2.5mm]
\displaystyle \frac{m_1}{m_{123}}&\displaystyle
\frac{m_2}{m_{123}}&\displaystyle \frac{m_3}{m_{123}} \\
\end{array}
\right)
\left(
\begin{array}{c}
\vec{x}_1 \\
\vec{x}_2 \\
\vec{x}_3 \\
\end{array}
\right) \, ,
\end{align}
with $m_{12\cdots n}=m_1+m_2+\cdots+m_n$.
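The linear map above is straightforward to check numerically; a minimal sketch with numpy (the masses and coordinates are arbitrary illustration values):

```python
import numpy as np

# Arbitrary masses and single-particle coordinates (illustration values)
m = np.array([1.0, 2.0, 3.0])
x = np.array([[0.1, 0.0, 0.0],
              [0.0, 0.5, 0.0],
              [0.0, 0.0, 1.2]])    # rows: x_1, x_2, x_3

m12, m123 = m[0] + m[1], m.sum()

# Transformation matrix for the c = 1 Jacobi channel
T = np.array([[-1.0, 1.0, 0.0],
              [-m[0] / m12, -m[1] / m12, 1.0],
              [m[0] / m123, m[1] / m123, m[2] / m123]])

r1, R1, G = T @ x

# Compare with the direct definitions of r_1, R_1 and G
assert np.allclose(r1, x[1] - x[0])
assert np.allclose(R1, x[2] - (m[0] * x[0] + m[1] * x[1]) / m12)
assert np.allclose(G, (m * x.T).sum(axis=1) / m123)
print("Jacobi transformation verified")
```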
The coordinates for $c=2,3$ can be obtained in a similar way.
The wave function of the three-body system is given by a sum of the
rearrangement channel amplitudes ($c=1,2,3$) as
\begin{align}
\Psi_{JM}=&\sum_{c=1}^{3}\Phi^{c}_{JM}(\vec{r}_c,\vec{R}_c) \, .
\end{align}
The wave function of the channel $c$ is
\begin{align}
\Phi^{c}_{JM}(\vec{r}_c,\vec{R}_c)&=
\sum_{nl_1,Nl_2,L}\sum_{s_{12}S,I_{12}}
C^{(c)}_{nl_1,Nl_2,L,s_{12}S,I_{12}I}
\left\{
\left[\left[\phi^{(c)}_{nl_1m_1}(\vec{r}_c)\psi^{(c)}_{Nl_2m_2}(\vec{R}_c)\right]_{L}
\right.\right. \notag\\
&\times
\left.\left.
\left[\left[\chi_{s_1}\chi_{s_2}\right]_{s_{12}}\chi_{s_3}\right]_{S}
\right]_{JM}
\left[\left[\eta_{I_1}\eta_{I_2}\right]_{I_{12}}\eta_{I_3}\right]_{I}
\right\}
\, . \label{Eq:gaussian_expansion}
\end{align}
$l_1$ and $l_2$ stand for the relative orbital angular momenta
associated with the coordinates $\vec{r}_c$ and $\vec{R}_c$,
respectively.
$L$ is the total orbital angular momentum of the
three-body system.
$\chi_{s_i}$ ($\eta_{I_i}$) with $i=1,2,3$ is the spin (isospin)
function of
the particle with the spin $s_i$ (isospin $I_i$).
$s_{12}$ ($I_{12}$) is the spin (isospin) of two particles combined by
the relative coordinate $\vec{r}_c$, and $S$ ($I$) is the total spin
(isospin) of the
three-body system.
The (anti-)symmetrization is needed~\cite{Hiyama:2003cu}
when the system includes identical particles.
For the sum in Eq.~\eqref{Eq:gaussian_expansion},
all possible coupled channels are included to obtain
solutions with sufficiently good accuracy.
The functions $\phi_{nl_1m_1}(\vec{r}\,)$ and
$\psi_{Nl_2m_2}(\vec{R}\,)$ are
expressed in terms of the Gaussian functions~\cite{Hiyama:2003cu} as
\begin{align}
\phi_{nl_1m_1}(\vec{r}\,)&=\sqrt{\frac{2}{\Gamma(l_1+3/2)b^{2l_1+3}_n}}
\exp{\left(-\frac{r^2}{2b^2_n}\right)}{\cal Y}_{l_1m_1}(r)
\, , \\
\psi_{N{l_2}m_2}(\vec{R}\,)&=\sqrt{\frac{2}{\Gamma({l_2}+3/2)B^{2l_2+3}_N}}
\exp{\left(-\frac{R^2}{2B^2_N}\right)}{\cal Y}_{l_2m_2}(R)
\, ,
\end{align}
where ${\cal Y}_{lm}(r)$ is a solid spherical harmonic ${\cal Y}_{lm}(r)=r^lY_{lm}(\hat{r})$.
The Gaussian ranges $b_n$ and $B_N$ are given by the form of geometric
series as
\begin{align}
b_n&=b_1 a^{n-1}\, (n=1,\cdots,n_{max}), \\
B_N&=B_1 A^{N-1} \, (N=1,\cdots,N_{max}),
\end{align}
where $(b_1,b_{n_{max}})$ and $(B_1,B_{N_{max}})$
(or $(b_1,a)$ and $(B_1,A)$) are the variational parameters.
The eigenvalues and the coefficients $C^{(c)}_{nl_1,Nl_2,L,s_{12}S,I_{12}I}$ in
Eq.~\eqref{Eq:gaussian_expansion} are
obtained by solving the generalized eigenvalue problem
\begin{align}
\bm{H}\bm{C}=E\bm{N}\bm{C} \, ,
\label{eq:eigenvalueProb}
\end{align}
where the matrix elements of the Hamiltonian and norm are given by
\begin{align}
H_{nn^\prime}=\langle \Phi_{JM,n}|H|\Phi_{JM,n^\prime}\rangle \, , \\
N_{nn^\prime}=\langle \Phi_{JM,n}|1|\Phi_{JM,n^\prime}\rangle \, .
\end{align}
\subsubsection{Treatment of nuclear matter in hadron effective theories}
\label{sec:nuclear_matter}
In the framework of the hadron effective theory,
the fundamental degrees of freedom are hadrons.
The change of hadrons in nuclear matter is induced through the interaction with nucleons at finite baryon density.
Let us consider zero temperature.
For the Fermi momentum $k_{\mathrm{F}}$, the non-interacting nucleon propagator is given by
\begin{align}
iS(p_{0},\vec{p}\,;k_{\mathrm{F}}) = (p\hspace{-0.4em}/+m)
\left( \frac{i}{p^{2}-m^{2}+i\varepsilon} - 2\pi \theta(p_{0}) \delta(p^{2}-m^{2}) \theta(k_{\mathrm{F}}-|\vec{p}\,|) \right),
\label{eq:propagator_nnucleon_medium}
\end{align}
with $\varepsilon>0$.
The second term in the large parentheses subtracts the propagation of on-mass-shell, positive-energy nucleons inside the Fermi sphere, as required by the Pauli exclusion principle.
This is easily confirmed by performing the $p_{0}$-integration (on the complex $p_{0}$-plane) after multiplying by a regular function of $p_{0}$.
Note that Lorentz covariance is lost at finite density.
An alternative way to introduce the nucleon propagator at finite density is to use the propagator
\begin{align}
iS'(p_{0},\vec{p}\,;\mu) = \frac{i}{p\hspace{-0.4em}/-m + \mu \gamma^{0} + i\varepsilon'},
\label{eq:propagator_nnucleon_medium_2}
\end{align}
with chemical potential $\mu$,
where $\varepsilon'$ is defined as $\varepsilon' > 0$ for $p_{0}>0$ and $|\vec{p}\,|\ge k_{\mathrm{F}}$ and for $p_{0}<0$, while $\varepsilon' < 0$ for $p_{0}>0$ and $|\vec{p}\,| < k_{\mathrm{F}}$.
It means that the poles for $p_{0}>0$ and $|\vec{p}\,| < k_{\mathrm{F}}$, namely the holes inside the Fermi sphere, are treated analogously to the antiparticles in vacuum.
We note that
$\mu=\sqrt{k_{\mathrm{F}}^{2}+m^{2}}$ for free nucleons.
Eqs.~(\ref{eq:propagator_nnucleon_medium}) and (\ref{eq:propagator_nnucleon_medium_2}) give the same result.
In the latter formalism, we can easily consider the propagator at finite temperature by introducing the Matsubara sum for the $p_{0}$-integration~\cite{Bellac:2011kqa,Kapusta:2006pm}.
The hadrons in nuclear matter exhibit complicated dynamics through the interaction with nucleons.
In principle, the dynamics should be solved as an $N$-body problem, with $N$ the number of particles.
Of course, this is not practical in actual calculations.
In many cases, we can get important information by focusing on the two-body scattering of the hadron and the nucleon in nuclear matter.
In general, the two-body scattering is described by the coupled channel equation.
The $T$-matrix of the two-body scattering is given by the Lippmann-Schwinger equation (or the Bethe-Salpeter equation),
which is schematically written as
\begin{align}
T=V+VGT,
\end{align}
where $V$ is the interaction kernel and $G$ is the two-body propagator including the in-medium nucleon propagator (\ref{eq:propagator_nnucleon_medium}) or (\ref{eq:propagator_nnucleon_medium_2}).
Through the channel couplings, the hadrons acquire non-trivial behaviors in the nuclear medium, such as modifications of the spectral functions, masses and decay modes.
Examples for heavy-light hadrons will be discussed in Sect.~\ref{sec:D_mesons}.
In principle, we have to keep in mind that the many-body problem does not reduce to the two-body scattering in general, and the $N$-body scattering could play some role.
For the three-body scattering, we need to consider the Faddeev equation.
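In a discretized (matrix) form, the schematic equation $T=V+VGT$ has the formal solution $T=(1-VG)^{-1}V$. A toy numpy sketch (the matrices here are arbitrary and not a realistic kernel or propagator):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 4
V = rng.standard_normal((n, n)) * 0.1    # toy interaction kernel
G = rng.standard_normal((n, n)) * 0.1    # toy two-body propagator

# Formal solution T = (1 - V G)^{-1} V of T = V + V G T
T = np.linalg.solve(np.eye(n) - V @ G, V)

# Check that T indeed satisfies the Lippmann-Schwinger equation
assert np.allclose(T, V + V @ G @ T)
print("Lippmann-Schwinger equation satisfied")
```

In realistic applications the kernel and propagator carry channel and momentum indices, and the same linear-algebra structure applies channel by channel.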
\subsubsection{Chiral symmetry at finite density}
\label{sec:chiral_symmetry_at_finite_density}
\paragraph{Quark condensates in nuclear matter}
It is well known that chiral symmetry is dynamically broken in the QCD vacuum as discussed in Sect.~\ref{sec:chiral_symmetry}.
It is considered that this symmetry breaking is partially restored at finite baryon number density, such as in nuclear medium \cite{Cohen:1991nk,Drukarev:1991fs}, and the quark condensate deviates from the value in the QCD vacuum.
Let us investigate how the quark condensate changes at finite baryon number density.
This problem should, of course, be solved as a many-body problem for general baryon number density.
However, as far as we focus on the leading order in the baryon number density,
we can give a rigorous result which is model-independent~\cite{Cohen:1991nk,Drukarev:1991fs}.
To start the discussion, we note that the light quark current mass $m_{q}$ enters into the QCD Hamiltonian density as a parameter,
\begin{align}
{\cal H}_{\mathrm{mass}} = m_{q} \bar{\psi}\psi,
\end{align}
for two flavors ($\psi=(u,d)^{\mathrm{t}}$) and $m_{q}=m_{u}=m_{d}$ in the isospin limit.
Let us denote the nuclear matter state by $| {\Omega}(m_{q}) \rangle$, where the notation indicates the dependence on $m_{q}$.
The difference between the quark condensate in nuclear matter and that in vacuum is expressed as
\begin{align}
\frac{\mathrm{d}}{\mathrm{d}m_{q}} \epsilon
&=
\langle {\Omega}(m_{q}) | \bar{q}q | {\Omega}(m_{q}) \rangle
-\langle {0}(m_{q}) | \bar{q}q | {0}(m_{q}) \rangle,
\label{eq:HF_equation}
\end{align}
with
\begin{align}
\epsilon \equiv \langle {\Omega}(m_{q}) | {\cal H}_{\mathrm{QCD}} | {\Omega}(m_{q}) \rangle
-\langle {0}(m_{q}) | {\cal H}_{\mathrm{QCD}} | {0}(m_{q}) \rangle,
\end{align}
where ${\cal H}_{\mathrm{QCD}}$ is the full Hamiltonian of QCD.
Note that the vacuum state $| {0}(m_{q}) \rangle$ is dependent on the quark mass $m_{q}$, because $m_{q}$ is contained in the QCD Lagrangian as the parameter.
Here we used the Hellmann-Feynman theorem.\footnote{We consider the eigenstate $|\phi(\lambda)\rangle$ as an eigenstate of the Hamiltonian $H(\lambda)$ which is dependent on the parameter $\lambda$: $H(\lambda)|\phi(\lambda)\rangle=E(\lambda)|\phi(\lambda)\rangle$ ($E(\lambda)$ the eigenenergy) with the normalization condition $\langle\phi(\lambda)|\phi(\lambda)\rangle=1$. Then,
\begin{align}
\langle\phi(\lambda)|\frac{\mathrm{d}}{\mathrm{d}\lambda} H(\lambda)|\phi(\lambda)\rangle
= \frac{\mathrm{d}}{\mathrm{d}\lambda} \langle\phi(\lambda)|H(\lambda)|\phi(\lambda)\rangle.
\end{align}}
In Eq.~(\ref{eq:HF_equation}),
$\epsilon$
is the energy density of the nuclear matter, measured from the vacuum energy.
Let us represent it as
\begin{align}
\epsilon=m_{N}\rho+\delta \epsilon,
\end{align}
with $\rho$ being the baryon number density and $\delta \epsilon$ being the binding energy of the nuclear matter.
At low density, we may approximate $\epsilon$ as $\epsilon \simeq m_{N}\rho$ by neglecting $\delta \epsilon$.
Furthermore we introduce the quantity defined by
\begin{align}
\sigma_{N}
&=
\frac{1}{N_{f}}
\sum_{a}
\left(
\langle N(m_{q}) | \left[Q_{A}^{a} , \left[ Q_{A}^{a}, H_{\mathrm{QCD}} \right] \right] | N(m_{q}) \rangle
- \langle {0}(m_{q}) | \left[Q_{A}^{a} , \left[ Q_{A}^{a}, H_{\mathrm{QCD}} \right] \right] | {0}(m_{q}) \rangle
\right),
\end{align}
with $N_{f}=2$, where $Q_{A}^{a}$ is the axial charge, $H_{\mathrm{QCD}}=\int \mathrm{d}^{3}\vec{x} \,\, {\cal H}_{\mathrm{QCD}}$ is the QCD Hamiltonian, and $|N(m_{q}) \rangle$ is the one nucleon state.
This can be rewritten as
\begin{align}
\sigma_{N}
&= m_{q} \int \mathrm{d}^{3}\vec{x} \left( \langle N(m_{q}) |\bar{q}q| N(m_{q}) \rangle - \langle {0}(m_{q}) |\bar{q}q| {0}(m_{q}) \rangle \right) \nonumber \\
&= m_{q} \frac{\mathrm{d}m_{N}}{\mathrm{d}m_{q}},
\end{align}
with $m_{N} = \langle N(m_{q}) | H_{\mathrm{QCD}} | N(m_{q}) \rangle$, where the Hellmann-Feynman theorem is used again.
Then, we obtain
\begin{align}
\langle {\Omega}(m_{q}) | \bar{q}q | {\Omega}(m_{q}) \rangle
-\langle {0}(m_{q}) | \bar{q}q | {0}(m_{q}) \rangle
\simeq
\frac{\sigma_{N} \rho}{m_{q}}.
\end{align}
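The Hellmann-Feynman theorem invoked above can be illustrated numerically with a small matrix model; a minimal sketch (the Hamiltonian matrices are arbitrary illustration values):

```python
import numpy as np

# Numerical check of the Hellmann-Feynman theorem:
# for H(lambda) = H0 + lambda * V, dE/dlambda = <phi|V|phi>.
rng = np.random.default_rng(2)
A = rng.standard_normal((6, 6))
H0 = (A + A.T) / 2                  # random symmetric "Hamiltonian"
B = rng.standard_normal((6, 6))
V = (B + B.T) / 2                   # random symmetric perturbation

def ground_energy(lam):
    return np.linalg.eigvalsh(H0 + lam * V)[0]

lam, h = 0.3, 1e-5
# Finite-difference derivative of the ground-state energy
dE = (ground_energy(lam + h) - ground_energy(lam - h)) / (2 * h)

# Expectation value of dH/dlambda = V in the ground state
w, U = np.linalg.eigh(H0 + lam * V)
phi = U[:, 0]
assert abs(dE - phi @ V @ phi) < 1e-6
print("Hellmann-Feynman theorem verified")
```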
By using the GMOR relation~(\ref{eq:GMOR})\footnote{We note the difference of the notations: $\hat{m}=m_{q}$ and $|0\rangle=|{0}(m_{q})\rangle$.}, we obtain
\begin{align}
\frac{\langle {\Omega}(m_{q}) | \bar{q}q | {\Omega}(m_{q}) \rangle}
{\langle {0}(m_{q}) | \bar{q}q | {0}(m_{q}) \rangle}
\simeq
1- \frac{\sigma_{N}}{f_{\pi}^{2}m_{\pi}^{2}}\rho.
\end{align}
This relation gives the change of the quark condensate in nuclear matter at baryon number density $\rho$.
The value $\sigma_{N} = 40$-$45$ MeV is obtained from analyses in chiral perturbation theory~\cite{Buettiker:1999ap,Gasser:1990ce} (see also Ref.~\cite{Scherer:2012xha}).
When we use this value, we obtain
\begin{align}
\frac{\langle {\Omega}(m_{q}) | \bar{q}q | {\Omega}(m_{q}) \rangle}
{\langle {0}(m_{q}) | \bar{q}q | {0}(m_{q}) \rangle}
\simeq
1- 0.3 \frac{\rho}{\rho_{0}},
\label{eq:qqbarrho}
\end{align}
with the normal nuclear matter density $\rho_{0}=0.17$ fm$^{-3}$.
Hence, the quark condensate decreases by about 30 \% in normal nuclear matter.
Here we note that the above derivation is model-independent, and is valid in the limit of low baryon number density, where only the leading order in $\rho$ is dominant.
We have to keep in mind that no nucleon interaction is taken into account; its effect is completely neglected ($\delta \epsilon=0$).
Such effects enter at higher orders in $\rho$, which can be studied in nuclear many-body problems (see Refs.~\cite{Birse:1994cz,Cassing:1999es,Meissner:2001gz}).
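The leading-order estimate in Eq.~(\ref{eq:qqbarrho}) is easy to reproduce numerically. The following sketch (in Python) uses $\sigma_{N}=40$ MeV together with standard values of $f_{\pi}$, $m_{\pi}$ and $\rho_{0}$; the inputs are illustrative choices, not unique determinations.

```python
# Numerical check of the in-medium quark condensate ratio,
# <qbar q>_rho / <qbar q>_0 = 1 - sigma_N rho / (f_pi^2 m_pi^2),
# evaluated at rho = rho_0. Inputs in MeV; hbar*c converts fm^-3 to MeV^3.

hbarc   = 197.33            # MeV fm
sigma_N = 40.0              # MeV, lower end of the quoted 40-45 MeV
f_pi    = 92.4              # MeV, pion decay constant
m_pi    = 138.0             # MeV, pion mass
rho_0   = 0.17 * hbarc**3   # MeV^3  (0.17 fm^-3)

ratio = 1.0 - sigma_N * rho_0 / (f_pi**2 * m_pi**2)
print(f"condensate ratio at rho_0: {ratio:.2f}")  # ~0.68, i.e. a ~30 % reduction
```

Varying $\sigma_{N}$ within the quoted $40$-$45$ MeV range shifts the reduction between about 30 and 36 \%.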
\paragraph{Gluon condensates in nuclear matter}
Similarly, we can estimate the change of the gluon condensate in nuclear matter~\cite{Cohen:1991nk,Drukarev:1991fs}.
Here, without derivation, we quote the result~\cite{Cohen:1991nk}
\begin{align}
\langle {\Omega}(m_{q}) | \frac{\alpha_{s}}{\pi} G^{a}_{\mu\nu} G^{a\mu\nu} | {\Omega}(m_{q}) \rangle
- \langle {0}(m_{q}) | \frac{\alpha_{s}}{\pi} G^{a}_{\mu\nu} G^{a\mu\nu} | {0}(m_{q}) \rangle
\simeq
-\frac{8}{9} \left( m_{N} - \sigma_{N} - y \right) \rho,
\label{eq:gluonrho}
\end{align}
in the linear density approximation, with $y$ the strangeness content of a nucleon
\begin{align}
y
= m_{s} \int \mathrm{d}^{3}\vec{x} \left( \langle N(m_{q}) |\bar{s}s | N(m_{q}) \rangle - \langle {0}(m_{q}) |\bar{s}s | {0}(m_{q}) \rangle \right),
\end{align}
with the strange quark current mass $m_{s}$.
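Inserting typical numbers into Eq.~(\ref{eq:gluonrho}) shows that the gluon condensate changes much more mildly than the quark condensate. In the sketch below, the vacuum value $\langle \frac{\alpha_{s}}{\pi} G^{2} \rangle_{0} \simeq 0.012$ GeV$^{4}$ and the small strangeness content $y \simeq 40$ MeV are assumed inputs, used only for orientation.

```python
# Numerical estimate of the linear-density change of the gluon condensate,
# Delta<(alpha_s/pi) G^2> ~ -(8/9)(m_N - sigma_N - y) rho, compared with an
# assumed vacuum value <(alpha_s/pi) G^2>_0 ~ 0.012 GeV^4 (SVZ-type estimate).

hbarc   = 0.19733           # GeV fm
m_N     = 0.939             # GeV, nucleon mass
sigma_N = 0.045             # GeV, nucleon sigma term
y       = 0.040             # GeV, assumed small strangeness content
rho_0   = 0.17 * hbarc**3   # GeV^3, normal nuclear matter density
G2_vac  = 0.012             # GeV^4 (assumed vacuum gluon condensate)

delta = -(8.0 / 9.0) * (m_N - sigma_N - y) * rho_0   # GeV^4
print(f"relative change at rho_0: {delta / G2_vac:+.2f}")
```

The resulting reduction of roughly 8 \% at $\rho_{0}$ should be compared with the $\sim 30$ \% reduction of the quark condensate.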
\subsubsection{QCD sum rules}
\label{sec:QCDSR}
In the QCD sum rules, hadron properties such as masses and coupling constants are
studied through two-point correlation
functions~\cite{Shifman:1978bx,Shifman:1978by} (see Refs.~\cite{Reinders:1984sr,Colangelo:2000dp,Ioffe:2005ym} for reviews).
In momentum space at $p = (\omega, \bm p)$, the correlation function is defined as
\begin{align}
\Pi (p^2) = \int \mathrm{d}^4x\ e^{i \omega t - i \bm p \cdot \bm x} \langle \Omega| T(A(x) \bar B(0)) |\Omega\rangle,
\label{eq_def_correlationAB}
\end{align}
where $A$ and $B$ are appropriate hadronic current operators which create the physical states of interest.
Here we suppress $m_q$ in the nuclear matter state $|\Omega\rangle$.
The operators can be taken, for instance, as
$A, B = \bar q \gamma_\mu \vec \tau q$ for the isovector $\rho$ meson, where $\vec \tau$ is an isospin matrix, and as
$\bar s \gamma_\mu s$ for the $\phi$ meson.
The correlation function is computed in two different ways.
One is a phenomenological manner
by using a spectral function $\sigma$, the imaginary part of $\Pi$,
which contains hadronic parameters such as masses and coupling constants,
\begin{align}
\Pi (p^2)
=
\int^\infty_{0} \mathrm{d}s\ \frac{\sigma(s)}{s-p^2 - i \epsilon}\, .
\label{Pi_phenomenological}
\end{align}
For example, for the simple case of one bound or resonant state at mass $m_{\mathrm{res}}$ with residue $\lambda$ and a continuum
starting from a threshold $s_{th}$, the spectral function can be parametrized as
\begin{align}
\sigma(s) = \lambda \delta(s - m_{\mathrm{res}}^2) + f(s) \theta (s - s_{th}) .
\end{align}
The simplest choice of $f(s)$ is the phase space of physical multi-hadron states.
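The structure of the phenomenological side can be illustrated by integrating a pole-plus-continuum ansatz numerically in the Euclidean region $p^2 = -Q^2$. All parameters in this Python sketch are illustrative and not fitted to any hadron channel.

```python
# Numerical sketch of the phenomenological dispersion integral,
# Pi(-Q^2) = int_0^inf ds sigma(s) / (s + Q^2), in the Euclidean region,
# with sigma(s) = lam * delta(s - m2) + f0 * theta(s - s_th).
# All numbers are illustrative, not fitted to any hadron.

lam   = 1.0    # pole residue (illustrative)
m2    = 0.6    # GeV^2, squared resonance mass
s_th  = 1.5    # GeV^2, continuum threshold
f0    = 0.1    # constant continuum density (illustrative)
s_max = 50.0   # GeV^2, integration cutoff for the continuum

def Pi(Q2, n=20000):
    # the delta function is integrated exactly; the continuum by the midpoint rule
    pole = lam / (m2 + Q2)
    ds = (s_max - s_th) / n
    cont = sum(f0 / (s_th + (i + 0.5) * ds + Q2) * ds for i in range(n))
    return pole + cont

for Q2 in (2.0, 5.0, 10.0):
    print(Q2, Pi(Q2))   # falls off monotonically deep in the Euclidean region
```

As expected from the dispersion integral, the pole contribution falls off like $1/Q^{2}$, while the flat continuum contributes a slowly varying logarithm.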
Another method, which is essential in the QCD sum rules,
is the operator product expansion (OPE) of QCD.
The correlation function (\ref{eq_def_correlationAB})
is computed by QCD in the asymptotic (deep Euclidean) region,
where perturbation theory is applicable.
Wilson's operator product expansion is performed
at short distances~\cite{Wilson:1969zs},
\begin{align}
\lim_{x \to y} A(x) \bar B(y) \to \sum_i C_i (x-y) {\cal O}_i ((x+y)/2) ,
\label{eq_WilsonsOPE}
\end{align}
where the ${\cal O}_i$ are local operators and the $C_i (x-y)$
are the Wilson coefficients, which can be calculated perturbatively.
The sum over $i$ is ordered by the dimensions of the operators ${\cal O}_i$.
Examples of such operators for hadrons in the vacuum written in terms of the quark and gluon fields
are given in Ref.~\cite{Reinders:1984sr}.
In this decomposition, the singularities at short distances, $x \to y$, are isolated in the
Wilson coefficients $C_i (x-y)$.
After taking the matrix elements with $|\Omega\rangle$,
the operators are replaced by numerical values which characterize
the non-perturbative dynamics of hadrons.
If $|\Omega\rangle$ is the vacuum, only Lorentz invariant matrix elements survive,
while for a finite density matter, non-Lorentz invariant ones can also survive.
It is known that the asymptotic behaviors of
the $C_i$'s are determined, up to logarithmic factors, by the canonical
dimensions $d_A$, $d_B$ and $d_i$ of the operators,
\begin{align}
C_i(x-y)
\xrightarrow[|x-y| \ll 1/\Lambda_{\mathrm{QCD}}]{}
|x-y|^{d_i - d_A - d_B}
(1 + {\cal O}(|x-y|\Lambda_{\mathrm{QCD}})) .
\end{align}
For instance, for $A = \bar \psi \gamma_\mu \psi$, $d_A=3$.
Therefore, the terms of higher dimensional operators (with larger $d_i$)
are expected to be suppressed in the deep Euclidean region, $x \to y$ or $|p^2| \to \infty$,
by powers of $\Lambda_{\rm QCD}^2/|p^2|$.
Let us consider the application of the QCD sum rules to nuclear matter.
General features of nuclear matter were studied in Refs.~\cite{Drukarev:1991fs,Drukarev:1988kd}.
In what follows, we overview the original work of Hatsuda and Lee~\cite{Hatsuda:1991ez} on the study of
vector mesons in nuclear matter at finite density.
For definiteness and simplicity the longitudinal component of the
correlation function is considered for the charged $\rho$ meson, $\rho^+$,
\begin{align}
\lim_{\bm p \to 0} \Pi_L (\omega, \bm p)
=
\int \mathrm{d}^4x\ e^{i \omega t - i \bm p \cdot \bm x}
\langle \Omega| T(\bar d \gamma_\mu u (x) \bar u \gamma^\mu d(0)) |\Omega\rangle .
\end{align}
In this manner we do not have to consider possible Lorentz index dependence of the correlation function.
Because we are interested in the ground state of the nuclear matter with equal numbers of protons and neutrons,
the baryon number density takes a finite value
\begin{align}
\langle \Omega| \frac{1}{N_c} (\bar u \gamma_\mu u + \bar d \gamma_\mu d)
|\Omega\rangle
=
g_{\mu 0} \rho ,
\label{eq_VVatdensity}
\end{align}
where $\rho$ is the baryon number density whose value for the normal nuclear matter
is $\sim 0.17\; {\rm fm}^{-3}$.
The expression (\ref{eq_VVatdensity}) violates Lorentz invariance.
With the nuclear matter as a background,
it is convenient to study the spectral function as a function of $\omega$ at zero momentum,
$\bm p = 0$.
Now the remaining and most relevant task is to find the operators
${{\cal O}_i}$.
For the case of vector mesons, the operators with non-vanishing Wilson coefficients up to dimension six include
\begin{align}
{\rm Four \; quark \; operators}&:
\bar q \Gamma_\mu \lambda^a q
\bar q \Gamma_\nu \lambda^a q,
\nonumber \\
{\rm Quark \; bilinears} &:
\bar q \gamma_\mu D_\nu q,
\; \; \;
\bar q \gamma_\mu D_\nu D_\alpha D_\beta q .
\end{align}
The link to the physics of finite density is to take the matrix elements of these operators in the nuclear matter, and relate them to the quark condensate
$\langle \Omega | \bar q q | \Omega\rangle$
and the gluon condensate $\langle \Omega | G_{\mu \nu} G^{\mu \nu} | \Omega \rangle$
because they carry the most important density dependence.
Manipulations can be done by using equations of motion or Fierz rearrangements.
Moreover, for higher dimensional operators, vacuum saturation, or equivalently
mean field approximation is used.
For instance, the four quark condensate can be written as
\begin{align}
\langle \Omega | (\bar q q)^2 | \Omega\rangle
\sim
\langle \Omega | \bar q q | \Omega\rangle ^2 .
\end{align}
Now by using the $\rho$ dependence of various condensates as shown in
Eqs.~(\ref{eq:qqbarrho}) and (\ref{eq:gluonrho}), we can extract
the $\rho$ dependence of physical quantities.
In addition,
in nuclear matter the spectral density (\ref{Pi_phenomenological})
receives contributions from the Landau damping due to nucleon collisions.
This effect is relevant only in the low energy region ($\sim \delta(s)$) and is expected
to play only a minor role for the hadron properties.
Now the QCD sum rule is obtained by matching
the phenomenological spectral density and the one of the OPE with
appropriate density dependence in various condensates.
The phenomenological parameters are determined such that the two values
are optimally equated.
In principle, these quantities are asymptotically expanded in the deep Euclidean region
in powers of $1/(-p^2)$.
Practically, this can be done by the finite energy sum rule or the Borel improved sum rule.
In Ref.~\cite{Hatsuda:1991ez}, the finite energy sum rule is used and the following results are obtained
\begin{align}
\lambda^2
- s_{th}
\left( 1 - \frac{\alpha_S}{\pi} \right)
&=- \frac{2 \pi}{M_N} \rho ,
\\
\lambda^2 m^2
- \frac{s_{th}^2}{2}
\left( 1 - \frac{\alpha_S}{\pi} \right)
&=
- Q_4 - 2 \pi^2A_2^{u+d}M_N \rho ,
\\
\lambda^2 m^4
- \frac{s_{th}^3}{3}
\left( 1 - \frac{\alpha_S}{\pi} \right)
&=
- Q_6 - \frac{10}{3} \pi^2A_4^{u+d}M_N \rho ,
\end{align}
where
\begin{align}
Q_4 &= \frac{\pi^2}{3}
\langle \Omega | \frac{\alpha_S}{\pi} G^2 |\Omega \rangle ,
\\
Q_6 &= \frac{896}{81} \pi^3
\langle \Omega | \sqrt{\alpha_S} \bar qq |\Omega \rangle ^2 ,
\end{align}
and $A_n^{q}$ are given in terms of the quark distribution function
\begin{align}
A_n^{q} = 2 \int^1_0 \mathrm{d}x\ x^{n-1}
(q(x, \mu^2) + (-1)^n\bar q(x, \mu^2) ) ,
\end{align}
at a scale $\mu^2$.
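The definition of the moments can be made concrete with a toy distribution. In the sketch below, the valence-type form $q(x) \propto x^{-1/2}(1-x)^{3}$ with $\bar q = 0$ is an assumption for illustration only, not a realistic parton distribution.

```python
# The moments A_n^q defined above, evaluated for a toy valence-type
# distribution q(x) = x^{-1/2} (1-x)^3 with qbar(x) = 0 (an assumed
# functional form, not a realistic PDF fit).

def q(x):
    return x**-0.5 * (1.0 - x)**3

def moment(n, npts=20000):
    # A_n^q = 2 int_0^1 dx x^{n-1} (q(x) + (-1)^n qbar(x)), with qbar = 0 here
    dx = 1.0 / npts
    total = 0.0
    for i in range(npts):
        x = (i + 0.5) * dx           # midpoint rule
        total += x**(n - 1) * q(x) * dx
    return 2.0 * total

print(moment(2))  # the n = 2 moment, an x-weighted (momentum-fraction-like) integral
```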
The terms proportional to $\rho$, as well as the condensates at finite density,
determine the density dependence of the physical quantities.
These equations imply that the mass of the vector meson $m$ decreases
at finite density.
Detailed quantitative discussions are found in Ref.~\cite{Hatsuda:1991ez}.
Here we show their results in Fig.~\ref{fig_sumrule},
where we observe that the mass of $\rho$ decreases by about 120 MeV,
and that of $\phi$ by 30 MeV at the normal nuclear matter density.
Although the latter seems rather small, its experimental impact is important
because the mass shift is well larger than the decay width
($\Gamma_\rho \sim 150$ MeV, and $\Gamma_\phi \sim 4$ MeV~\cite{Agashe:2014kda}),
as we discuss shortly.
\begin{figure}[t]
\begin{center}
\includegraphics[width=0.3 \linewidth,bb=0 0 589 584]{figs/2.4/fig_sumrule.pdf}
\end{center}
\vspace{-5mm}
\caption{Masses of the $\rho$ and $\phi$ mesons at finite baryon densities $\rho$. The mass of $\phi$ is computed by using the strangeness content $y = 0.22$. This figure is made by using the results of Ref.~\cite{Hatsuda:1991ez}.
}
\label{fig_sumrule}
\end{figure}
The change in the mass of hadrons under a change in the environment,
such as density and temperature, is natural if they are composite systems.
In the present case, it is also related to the properties of the QCD vacuum.
Theoretically, the prediction was made in QCD by using the QCD sum rules.
The results given in this section are exact up to first order in $\rho$,
which is applicable in the low density region.
In this regard, the QCD sum rule is useful and important for investigating the
primary features of hadron properties.
The present method is applicable also at finite temperature, once we know
the temperature dependence of the various condensates, which has been calculated in lattice simulations.
Motivated by the general argument given above, experimental efforts have been made to observe
the mass shift of hadrons in a nucleus of finite baryon density~\cite{Hayano:2008vn,Leupold:2009kz}.
Experimentally, because of the rather wide decay width of the $\rho$ meson
pointed out above, attempts were made to observe the $\phi$ meson in proton-nucleus
collisions.
The result showed a mass spectrum with the strength spreading (broadening)
into the lower mass region (cf.~Refs.~\cite{Hayano:2008vn,Leupold:2009kz}).
However, the precise mechanism of this change has not been clarified yet.
The difficulty lies in realizing a situation as close as possible
to a stationary condition, in which the hadron stays inside a nucleus for a sufficiently long time.
This is a general problem that we have to solve whenever
we would like to study hadron properties in a varied environment.
Once this is done, however, we expect to see the dynamical change of the vacuum,
which is an indication of the non-trivial structure of the QCD vacuum.
\section{Quarkonia}
\label{sec:charmonia}
\subsection{Quarkonium-nucleon interaction: gluon-exchange dominance}
Since the 1980s, the interaction between a $J/\psi$ meson and a nucleon has been discussed as a unique inter-hadron interaction dominated by gluon exchange~\cite{Brodsky:1989jd}.
Because $J/\psi$ is composed of $c\bar{c}$, light quark-antiquark ($q\bar{q}$) exchange between $J/\psi$ and a nucleon is suppressed by the Okubo-Zweig-Iizuka (OZI) rule.
This property is quite different from other hadron-hadron interactions.
A charmonium can be considered as a small size system.
The interaction between $c$ and $\bar{c}$ is governed by the color Coulomb potential at short distance.
While the color Coulomb potential generates a pointlike bound state for infinitely heavy quarks, for actual charm quarks with a finite mass the balance between the Coulomb potential and the kinetic energy determines the small but finite size of the $c\bar{c}$ system. This provides the ``color dipole'' structure of the charmonia.
This dipole (multipole) structure generates the QCD van der Waals potential between $J/\psi$ and a nucleon as a perturbative interaction at short distance.
This is analogous to the van der Waals potential in atomic physics, where the interaction between electrically neutral atoms is induced by the instantaneous polarization.
At long distance, on the other hand, the $J/\psi N$ potential is dominated by the nonperturbative dynamics with multi-gluon exchanges, such as the pomeron exchange interaction.
In the early work by Brodsky, Schmidt and de T\'eramond~\cite{Brodsky:1989jd}, the $J/\psi N$ interaction has been investigated from the experimental data of the $\gamma N$ reaction.
By supposing pomeron exchange, it is concluded that the $J/\psi N$ interaction is attractive.
Based on this interaction and a simple variational calculation, $J/\psi$ is shown to be bound in nuclei with mass number $A\geq 3$.
However, the obtained binding energy is quite large (more than 100 MeV for $A=9$) because the saturation property of nuclear matter is not considered, although the overlap of nucleons should provide a strong repulsion at short distances.
In a more realistic discussion with the saturation effect~\cite{Wasson:1991fb}, the binding energy is shown to be about 30 MeV for $A=200$.
To study the $J/\psi N$ interaction at high energies, a perturbative treatment in the QCD coupling constant $\alpha_s$ is performed by considering the color multipole expansion~\cite{Peskin:1979va,Bhanot:1979vb}.
Regarding a pair of $Q$ and $\bar{Q}$ as a bound state ($\phi$) formed by the color Coulomb potential, the QCD dipole and quadrupole moments are considered.
The color octet contributions are also included.
In the forward scattering of $J/\psi p$, the color dipole in $J/\psi$ interacts with gluons contained as partons in the proton.
The scattering between $J/\psi$ and gluon is given by the diagrams of absorbing and emitting the gluons. The scattering matrix is given by
\begin{align}
{\cal M}_{J/\psi g} = -\frac{g^{2}}{2N_{c}} \langle \phi | \, \vec{r} \!\cdot\! \vec{E} \frac{1}{H_{a}+\epsilon+iD_{0}} \vec{r} \!\cdot\! \vec{E} \, | \phi \rangle,
\label{eq:multi_pole_Peskin}
\end{align}
where $g$ is the $QQg$ coupling constant, $N_{c}$ is the number of colors, $\phi$ represents the $\bar{Q}Q$ state, $\vec{r}$ is the distance between $\bar{Q}$ and $Q$, $\vec{E}$ is the electric gluon field, $H_{a}$ is the color octet Hamiltonian, $\epsilon$ is the Coulomb binding energy of the $\bar{Q}Q$ state, and $D_{0}$ is the zeroth component of the covariant derivative.
Based on the dipole picture, the cross section of the elastic scattering is estimated as $\sigma^{J/\psi p}=3.6$-$5.0$ mb for $J/\psi p$ and $\sigma^{\Upsilon p}=0.9$-$1.2$ mb for $\Upsilon p$ as asymptotic values at the high scattering energy.
More information about the QCD van der Waals potential can be found in a review article~\cite{Kharzeev:1995ij}.
The above scheme is applicable to a variety of reaction processes of quarkonia with hadrons.
We note, however, that the $J/\psi N$ interaction relevant for the binding in nuclear matter occurs at low energy, where the pomeron exchange may not be an appropriate picture.
In the same manner as Refs.~\cite{Peskin:1979va,Bhanot:1979vb}, the scattering length of the $\eta_{c} N$ interaction has been discussed in Ref.~\cite{Kaidalov:1992hd}.
From the multipole expansion~\cite{Peskin:1979va,Bhanot:1979vb}
[cf.~Eq.~(\ref{eq:multi_pole_Peskin})], the scattering amplitude for a quarkonium $\phi$ and a nucleon $N$ is given by
\begin{align}
T = \frac{4\pi}{3} M_{\phi} \langle \phi | r^{i} \frac{1}{H_{a}+\epsilon} r^{j} | \phi \rangle \langle N | \alpha_{s} E^{i} E^{j} | N \rangle.
\label{eq:T_matrix_Peskin}
\end{align}
To estimate the first matrix element, we use the result of Ref.~\cite{Bhanot:1979vb} for the $1S$ wave function of the $c\bar{c}$ state
\begin{align}
\langle \phi | r^{i} \frac{1}{H_{a}+\epsilon} r^{j} | \phi \rangle
=
\delta^{ij}
\frac{28}{27} \pi a^{3},
\end{align}
with $a$ being the size parameter of the $1S$ $c\bar{c}$ state.
The second matrix element can be evaluated by the energy-momentum tensor $\theta_{\mu\mu} = \frac{\beta(\alpha_{s})}{4\alpha_{s}} G^{a}_{\mu\nu} G^{a\mu\nu}$ as
\begin{align}
\alpha_{s} E^{i} E^{j} = \frac{2\pi}{b} \theta_{\mu\mu} + {\cal O}(\alpha_{s}),
\end{align}
with $b=9$ from the leading order term in the Gell-Mann--Low beta function $\beta({\alpha_{s}})$. The matrix element of the energy-momentum tensor is given by $\langle N | \theta_{\mu\nu} | N \rangle_{p_{1}=p_{2}=p}=2p_{\mu}p_{\nu}$ for the initial and final momenta $p_{1}$ and $p_{2}$.
Hence, the scattering amplitude for $\eta_{c}N$ at the threshold $p_{0}=m_{N}$ is given as
\begin{align}
T = \frac{64\pi^{3}}{3^{6}} 7 M_{\eta_{c}} m_{N}^{2} a^{3}.
\end{align}
As a result, we obtain the scattering length\footnote{
Throughout this paper, we define the scattering length as $a_{s}=\lim_{k\to 0}f(k)$ where $f(k)$ is the elastic scattering amplitude of the two-body system with the momentum $k$. In the absence of a shallow bound state, the positive (negative) $a_{s}$ corresponds to the attractive (repulsive) interaction at threshold. Our convention is commonly used in hadron physics (meson-meson/meson-baryon scattering), while the convention with an opposite sign is often used in the nuclear and atomic physics. See also the discussion in section~\ref{sec:DNinteraction}.}
\begin{align}
a_{s} = \frac{T}{8\pi(M_{\eta_{c}}+m_{N})} = 0.05 \hspace{0.5em} \mathrm{fm}.
\end{align}
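The numerical value can be reproduced once the size parameter $a$ is fixed; since $a$ is not quoted explicitly above, the sketch below assumes $a \simeq 0.16$ fm, chosen only to illustrate the order of magnitude.

```python
import math

# Numerical evaluation of the eta_c N scattering length from the amplitude
# T = (64 pi^3 / 3^6) * 7 * M_etac * m_N^2 * a^3 and
# a_s = T / (8 pi (M_etac + m_N)).
# The 1S ccbar size parameter a ~ 0.16 fm is our assumption.

hbarc  = 0.19733          # GeV fm
M_etac = 2.984            # GeV, eta_c mass
m_N    = 0.939            # GeV, nucleon mass
a      = 0.16 / hbarc     # GeV^-1 (assumed ccbar size parameter)

T   = (64 * math.pi**3 / 3**6) * 7 * M_etac * m_N**2 * a**3
a_s = T / (8 * math.pi * (M_etac + m_N))
print(f"a_s = {a_s * hbarc:.3f} fm")  # ~0.05 fm for this choice of a
```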
This attractive scattering length is used to estimate the binding energy of $\eta_{c}$ in nuclear matter~\cite{Kaidalov:1992hd}.
The result was about 3 MeV.
From the phenomenological viewpoint, the bound-state approach of the Skyrmion model has been used in Refs.~\cite{Gobbi:1992cf,Gobbi:1993wu}, where $\eta_{c}$ is bound to the Skyrmion representing a nucleon.
This model is applicable also to the $\eta$ bound state in the Skyrmion.
The Skyrmion is a topologically stable object composed of pions~\cite{Adkins:1983ya}.
At first sight, it may seem that strange and charm quarks are irrelevant to the Skyrmion, because the Skyrmion obeys the SU(2) flavor symmetry for $u$ and $d$ quarks.
Recalling that pions are the adjoint representation of SU(2) flavor symmetry, however, we can extend the pion field to the adjoint representation of the SU(4) flavor symmetry for $u$, $d$, $s$ and $c$ quarks, and hence
we can treat $\eta$ as well as $\eta_{c}$ as the same multiplet as pions.
Because the masses of $\eta$ and $\eta_{c}$ are much larger than that of the pions, the ``bound-state'' approach is adopted.
The obtained binding energy of the $\eta_{c}N$ state is about 1 GeV, which is much larger than the values expected from other theoretical studies, such as the ``color dipole'' picture~\cite{Kaidalov:1992hd}, the lattice QCD and the QCD sum rules discussed below.
The first study of the charmonium-nucleon interaction in lattice QCD simulations was performed in the quenched approximation in Ref.~\cite{Yokokawa:2006td}.
Subsequently, several groups have investigated the $J/\psi N$ interaction~\cite{Liu:2008rza,Kawanai:2010ev,Kawanai:2010ru,Beane:2014sda}.
The results in Refs.~\cite{Liu:2008rza,Kawanai:2010ev,Kawanai:2010ru} indicate that the $J/\psi N$ interaction and the $\eta_{c}N$ interaction are attractive with a moderate strength.
In Ref.~\cite{Beane:2014sda}, the $J/\psi(\eta_{c}) N$ system is shown to be bound in the unphysical quark mass region. The results for the scattering lengths from the lattice QCD simulations are summarized in Table~\ref{table:scattering_length_charmonium_nucleon}.
It is interesting that these numbers are comparable with the result $a_{J/\psi N}=0.10 \pm 0.02$ fm obtained by the QCD sum rules for the forward scattering amplitude~\cite{Hayashigaki:1998ey}. In the linear density approximation, this scattering length corresponds to a negative mass shift of 4-7 MeV in nuclear matter, as stated below.
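The quoted mass shift follows from the standard linear-density (``$T\rho$'') estimate, $\Delta m \simeq -(2\pi/\mu)\, a\, \rho$, with $\mu$ the $J/\psi N$ reduced mass and $a>0$ attractive in the convention of this review. A short numerical check:

```python
import math

# Linear-density ("T rho") estimate of the J/psi mass shift from the
# scattering length: Delta m ~ -(2 pi / mu) a rho, with mu the J/psi-N
# reduced mass; a > 0 is attractive in the convention of this review.

hbarc = 0.19733           # GeV fm
m_psi = 3.097             # GeV, J/psi mass
m_N   = 0.939             # GeV, nucleon mass
rho_0 = 0.17 * hbarc**3   # GeV^3, normal nuclear matter density
a     = 0.10 / hbarc      # GeV^-1  (a_{J/psi N} = 0.10 fm from the sum rule)

mu = m_psi * m_N / (m_psi + m_N)
dm = -2 * math.pi * a * rho_0 / mu
print(f"Delta m = {1000 * dm:.1f} MeV")  # ~ -6 MeV, within the quoted 4-7 MeV
```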
\begin{table}[tbp]
\caption{Scattering length $a$ [fm] from the lattice QCD simulations. SAV indicates the spin averaged value, $(a_{1/2}+2a_{3/2})/3$. PSF and LLE indicate the different methods, phase shift formula and leading large-$L$ expansion, respectively, used in Ref.~\cite{Yokokawa:2006td}. The number with ``*'' is calculated in this review. The scattering lengths of $J/\psi N$ with spin 1/2 and 3/2 are shown in the figure in Ref.~\cite{Kawanai:2010ru}, though the numbers were not given explicitly.}
\begin{center}
\begin{tabular}{|cc|cccc|}
\hline
& spin & \cite{Yokokawa:2006td} PSF & \cite{Yokokawa:2006td} LLE & \cite{Liu:2008rza} & \cite{Kawanai:2010ev,Kawanai:2010ru} \\
\hline
$\eta_c N$ & 0 & $0.70 \pm 0.66$ & $0.39 \pm 0.14$ & 0.18 (9) & 0.25 \\
\hline
$J/\psi N$ &1/2 & $0.57 \pm 0.42$ & $0.35 \pm 0.15$ & $-0.05$(77) & --- \\
& 3/2 & $0.88 \pm 0.63$ & $0.43 \pm 0.16$ & 0.24(35) & --- \\
& SAV & $0.71 \pm 0.48$ & $0.39 \pm 0.14$ & 0.16 (70)* & 0.35 \\
\hline
\end{tabular}
\end{center}
\label{table:scattering_length_charmonium_nucleon}
\end{table}%
\subsection{Few-body systems}
Due to the $J/\psi N$ attraction, it is argued that $J/\psi$-bound states can exist in atomic nuclei~\cite{Brodsky:1989jd,Wasson:1991fb}.
In Ref.~\cite{Belyaev:2006vn}, Faddeev calculations of $\eta_{c} d$ ($d$ is a deuteron) and $\eta_{c}\,^{3}\mathrm{He}$ are performed with a Yukawa-type potential for $\eta_{c}N$ and the Paris potential for $NN$, where the $\eta_{c}N$ potential is parametrized according to Ref.~\cite{Brodsky:1989jd}.
Recently, a
few-body calculation has been performed for the $J/\psi(\eta_{c})$-nuclei~\cite{Yokota:2013sfa,Yokota:2014lma} by using the Gaussian expansion method explained in section~\ref{sec:fewbody_systems}.
Though the QCD van der Waals potential exists at short distance, the confinement force would provide a long range potential. In Refs.~\cite{Yokota:2013sfa,Yokota:2014lma} the authors use a Gaussian-type potential for the $J/\psi(\eta_{c})$-$N$ system, whose model parameters (potential depth and range) are related to the two-body scattering length.
By varying the value of the $J/\psi(\eta_{c})$-$N$ scattering length, various few-body systems such as $J/\psi(\eta_{c})$-$NN(d)$, $J/\psi(\eta_{c})$-$^{4}\mathrm{He}$ and $J/\psi$-$^{8}\mathrm{Be}$, where the $\alpha\alpha$ cluster structure for $^{8}\mathrm{Be}$ is considered, are examined.
The Minnesota potential is used for $NN$ as a phenomenological nuclear potential.
The $J/\psi(\eta_{c})$-$N$ scattering lengths necessary for the formation of the bound states are found to be $a_{J/\psi N}>0.95$ fm for $J/\psi(\eta_{c})$-$NN$ and $a_{J/\psi N}>0.24$ fm for $J/\psi$-$^{4}\mathrm{He}$~\cite{Yokota:2013sfa,Yokota:2014lma},\footnote{The present convention of the sign of the scattering length is different from the original ones in Refs.~\cite{Yokota:2013sfa,Yokota:2014lma}.} which can be compared with the results in the lattice QCD simulations in Table~\ref{table:scattering_length_charmonium_nucleon}.
To form a bound state of $J/\psi$-$^{8}\mathrm{Be}$, $a_{J/\psi N}>0.16$ fm is required.
The results in Refs.~\cite{Yokota:2013sfa,Yokota:2014lma} are consistent with that in Ref.~\cite{Belyaev:2006vn}.
By adopting the scattering lengths in Refs.~\cite{Kawanai:2010ev,Kawanai:2010ru}, bound states are found for $A\geq 4$ with binding energies of 0.5 MeV for $J/\psi(\eta_{c})$-$^{4}\mathrm{He}$ and 2 MeV for $J/\psi(\eta_{c})$-$^{8}\mathrm{Be}$.
\subsection{Nuclear matter}
The binding energy of the quarkonium in nuclear matter has been estimated by considering the scale anomaly (trace anomaly)~\cite{Luke:1992tm}.
For the Coulomb bound state of $\bar{Q}Q$, the interaction Lagrangian with the dimension seven operators is given by
\begin{align}
{\cal L}_{\mathrm{int}} = \sum_{v} \frac{1}{\Lambda_{Q}^{3}} (P_{v}^{\dag} P_{v} - V_{v\mu}^{\dag} V_{v}^{\mu}) (c_{E} {\cal O}_{E} + c_{B} {\cal O}_{B}),
\end{align}
where
${\cal O}_{E} \equiv - G^{\mu\alpha a} G_{\alpha}^{\nu a} v_{\mu} v_{\nu}$ and
${\cal O}_{B} \equiv \frac{1}{2} G^{\alpha \beta a} G_{\alpha \beta}^{a} - G^{\mu\alpha a} G_{\alpha}^{\nu a} v_{\mu} v_{\nu}$
with the compositeness scale $\Lambda_{Q} \simeq \alpha_{s}(\Lambda_{Q})m_{Q}$ for a Coulomb bound state,
$P_{v}$ creates the pseudoscalar meson ($\eta_{c}$) and $V_{v\mu}$ creates the vector meson ($J/\psi$) with four velocity $v^{\mu}$.
This is the covariant form of the leading order of the multipole expansion.
In the rest frame $v^{\mu}=(1,\vec{0}\,)$, these operators reduce to ${\cal O}_{E}\to\vec{E}^{a} \!\cdot\! \vec{E}^{a}$ and ${\cal O}_{B}\to\vec{B}^{a} \!\cdot\! \vec{B}^{a}$, from which we see that $c_{E}$ and $c_{B}$ are also related to the Stark and Zeeman energies, respectively.
Because ${\cal O}_{E}$ and ${\cal O}_{B}$ are related to the twist-two gluon operator $G_{2\,g}^{\mu\nu}= \frac{1}{4} g^{\mu\nu} G^{\alpha \beta a}G_{\alpha \beta}^{a} - G^{\mu\alpha a}G_{\alpha}^{\nu a}$ and the trace of the energy-momentum tensor $T_{\alpha}^{\alpha}=\frac{\beta(g)}{2g}G^{\alpha\beta a}G_{\alpha \beta}^{a}$, the forward scattering amplitude can be expressed as
\begin{align}
{\cal M} = 2V_{2}(\Lambda_{Q}) \frac{c_{E}+c_{B}}{\Lambda_{Q}^{3}} M^{2} \left( \gamma^{2}-\frac{1}{4} \right)
+ 2M^{2} \frac{c_{E}-c_{B}}{\Lambda_{Q}^{3}} \frac{2\pi}{b_{Q} \alpha_{s}(\Lambda_{Q})},
\end{align}
with $\gamma=v^{0}$, where $\beta(g)=-b_{Q}\frac{g^{3}}{16\pi^{2}}$ is the Gell-Mann--Low beta function at the leading order.
The matrix elements of $G_{2\,g}^{\mu\nu}$ and $T_{\alpha}^{\alpha}$ are parametrized as
$\langle p | G_{2\,g}^{\mu\nu} | p \rangle = 2V_{2}(\mu) \left( p^{\mu}p^{\nu} - \frac{1}{4} g^{\mu\nu} p^{2} \right)$,
and $\langle p | T_{\alpha}^{\alpha} | p \rangle = 2M^{2}$ with the nucleon mass $M$.
The value of $V_{2}(\mu)$ indicates the gluon momentum fraction, which is estimated from the deep inelastic scattering at energy scale $\mu$.
Because the forward scattering amplitude is related to the energy shift of the $\bar{Q}Q$ state due to the interaction with gluons in nuclear matter, we can obtain the binding energy of a $\bar{Q}Q$ state with Bohr radius $r_{B}$,
\begin{align}
U_{\mathrm{binding}} = \frac{14}{27} \pi r_{B}^{3} \, \rho \left( V_{2}(\Lambda_{Q}) \left(\gamma^{2}-\frac{1}{4}\right) + \frac{2\pi}{b_{Q}\alpha_{s}(\Lambda_{Q})} \right),
\end{align}
at baryon number density $\rho$~\cite{Luke:1992tm}.
We quote the resulting binding energy in nuclear matter $U_{\mathrm{binding}}=8$-$11$ MeV for $J/\psi$ and $U_{\mathrm{binding}}=2$-$4$ MeV for $\Upsilon$.
In Ref.~\cite{Lee:2003jh}, a more explicit form of the mass shift of the quarkonium with the binding energy $\epsilon=2m_{c}-m_{J/\psi}$ is presented as
\begin{align}
\Delta m_{J/\psi}
=
-\frac{1}{9}
\int \mathrm{d}k^{2} \left| \frac{\partial \psi(k)}{\partial \bm{k}} \right|^{2} \frac{k}{k^{2}/m_{c}+\epsilon}
\left\langle \frac{\alpha_{s}}{\pi} E^{2} \right\rangle_{\Omega}
\frac{\rho}{2m_{N}},
\label{eq:m_jpsi_Lee:2003jh}
\end{align}
with the quarkonium wave function $\psi(k)$ in momentum space.
This formula can be applied not only to the ground states but also to the excited charmonia.
The mass shift in Eq.~(\ref{eq:m_jpsi_Lee:2003jh}), which reproduces the QCD second order Stark effect of Ref.~\cite{Luke:1992tm}, was derived by the operator product expansion (OPE).
The formula (\ref{eq:m_jpsi_Lee:2003jh}) is analogous to the result of the multipole expansion in Eq.~(\ref{eq:T_matrix_Peskin}), although the derivations are given in different styles of calculation.
In the linear density approximation, the electric gluon condensate is given by
\begin{align}
\left\langle \frac{\alpha_{s}}{\pi} E^{2} \right\rangle_{\Omega}
=
\left(
\frac{4}{9} m_{N} m_{N}^{0} + \frac{3}{2} m_{N}^{2} \frac{\alpha_{s}}{\pi} A_{2}^{g}
\right)
\frac{\rho}{2m_{N}},
\label{eq:electric_gluon}
\end{align}
with $m_{N}^{0}$ being the nucleon mass in the chiral limit and $A_{2}^{g}$ the second moment of the gluon distribution function in a nucleon (see also the discussion below).
Numerically, we have $\left\langle \frac{\alpha_{s}}{\pi} E^{2} \right\rangle_{\Omega} \simeq 0.5$ GeV$^{2}$,
and we find the mass shifts $\Delta m_{J/\psi}=-8$ MeV, $\Delta m_{\psi(3686)}=-100$ MeV and $\Delta m_{\psi(3770)}=-140$ MeV for $J/\psi$, $\psi(3686)$ and $\psi(3770)$, respectively.
The correction from the $D$ meson loop effect is analyzed with a model Lagrangian for the $D$ meson-nucleon interaction,
and it is found that the mass shifts are modified to $\Delta m_{J/\psi}=-5$ MeV, $\Delta m_{\psi(3686)}=-130$ MeV and $\Delta m_{\psi(3770)}=-125$ MeV.
The binding energy of $J/\psi$ is consistent with the binding energy of 3 MeV for $\eta_{c}$ estimated from the two-body scattering length in the multipole expansion of Ref.~\cite{Kaidalov:1992hd}, as already discussed.
So far, the studies based on the QCD van der Waals potential and the scale anomaly have provided estimates of the binding energy of $J/\psi$ in nuclear matter. The relations between the in-medium properties of hadrons and various condensates can be obtained in the QCD sum rules~\cite{Klingl:1998sr,Hayashigaki:1998ey}. As shown in Ref.~\cite{Klingl:1998sr}, the gluon condensate in nuclear matter ($\Omega$) at baryon number density $\rho$ is given in the linear density approximation as
\begin{align}
\left\langle \frac{\alpha_{s}}{\pi} G_{\mu\nu} G^{\mu\nu} \right\rangle_{\Omega}
=
\left\langle \frac{\alpha_{s}}{\pi} G_{\mu\nu} G^{\mu\nu} \right\rangle_{0}
-\frac{8}{9} m_{N}^{0} \rho,
\end{align}
in comparison with that in the vacuum (0),
with the nucleon mass in the chiral limit $m_{N}^{0} \simeq 750$ MeV.
Notice that, in the vacuum, only the scalar condensate $\left\langle \frac{\alpha_{s}}{\pi} G_{\mu\nu}G^{\mu\nu} \right\rangle$ contributes up to dimension four.
In nuclear matter, the twist-two gluon operator $\left\langle \frac{\alpha_{s}}{\pi} G_{\mu\rho}G^{\rho}_{\nu} \right\rangle$ additionally contributes.
The twist-two gluon operator is related to the gluon distribution function $G(x,\mu^{2})$ in a nucleon (cf.~Ref.~\cite{Luke:1992tm}):
\begin{align}
\langle N(p) | \frac{\alpha_{s}}{\pi} G^{\alpha \sigma} G_{\sigma}^{\beta} | N(p) \rangle
=
- \left( p^{\alpha} p^{\beta} - \frac{1}{4} g^{\alpha\beta} p^{2} \right) \frac{\alpha_{s}}{\pi} A_{2}^{g},
\end{align}
with the second moment of the gluon distribution function
\begin{align}
A_{2}^{g}(\mu^{2}) = 2 \int_{0}^{1} \mathrm{d}x\, x\,G(x,\mu^{2}),
\end{align}
at the energy scale $\mu$ corresponding to the charmonium mass.
Including the twist-two gluon operator contribution, the gluon condensate in nuclear matter is given by
[cf.~Eq.~(\ref{eq:electric_gluon})]
\begin{align}
\left\langle \frac{\alpha_{s}}{\pi} G^{2} \right\rangle_{\Omega}
&=
\left\langle \frac{\alpha_{s}}{\pi} G^{2} \right\rangle_{0}
-
\left(
\frac{8}{9} m_{N}^{0} + \frac{3}{2} m_{N} \frac{\alpha_{s}}{\pi} A_{2}^{g}
\right) \rho
\nonumber \\
&\simeq
\left\langle \frac{\alpha_{s}}{\pi} G^{2} \right\rangle_{0}
\left( 1-0.06\frac{\rho}{\rho_{0}} \right),
\end{align}
with the normal nuclear matter density $\rho_{0}$.
Quantitatively, the contribution from the twist-two gluon operator is found to be about 10 \%.
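The size of the in-medium reduction can be checked with a back-of-the-envelope evaluation of the formula above. The following sketch uses representative input values; the vacuum condensate, the physical nucleon mass, $\rho_{0}$, and in particular the product $(\alpha_{s}/\pi)A_{2}^{g}$ are assumptions chosen for illustration, not the fitted inputs of Ref.~\cite{Klingl:1998sr}:

```python
# Back-of-the-envelope check of the linear-density reduction of the
# gluon condensate. All inputs are representative values assumed for
# illustration (not the fitted inputs of the cited analysis).
hbarc = 0.1973                 # GeV fm
G2_vac = 0.35**4               # <(alpha_s/pi) G^2>_0 in GeV^4 (assumed)
mN0 = 0.75                     # nucleon mass in the chiral limit, GeV
mN = 0.94                      # physical nucleon mass, GeV
as_pi_A2g = 0.05               # (alpha_s/pi) A_2^g at the charmonium scale (assumed)
rho0 = 0.17 * hbarc**3         # normal nuclear matter density, GeV^3

scalar = (8.0 / 9.0) * mN0            # dimension-four scalar operator
twist2 = 1.5 * mN * as_pi_A2g         # twist-two operator
reduction = (scalar + twist2) * rho0 / G2_vac

print(f"relative reduction at rho0: {reduction:.2f}")          # ~0.06
print(f"twist-two share: {twist2 / (scalar + twist2):.2f}")    # ~0.10
```

With these inputs the reduction comes out at about 6\% at $\rho_{0}$, with the twist-two operator contributing about 10\% of it, in line with the numbers quoted above.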
In the QCD sum rules, the information on the hadron properties (the spectral function) is extracted from the OPE data (see Sect.~\ref{sec:QCDSR}).
As a result, the mass shifts $\Delta m_{J/\psi} \simeq -7$ MeV and $\Delta m_{\eta_{c}} \simeq -5$ MeV were obtained at normal nuclear matter density~\cite{Klingl:1998sr}.
Interestingly, those values are consistent with the previous studies based on the color dipole picture~\cite{Kaidalov:1992hd} and the scale anomaly~\cite{Luke:1992tm}, in sign as well as in order of magnitude.
A QCD sum rule analysis including operators up to dimension six was also performed, with the resulting mass shift $\Delta m_{J/\psi}=-4$ MeV~\cite{Kim:2000kj}.
The mass shifts of $J/\psi$ and $\eta_{c}$ in isospin asymmetric nuclear matter are investigated in the QCD sum rules by introducing the dilaton field for the scale anomaly~\cite{Kumar:2010hs}.
The mean-field approach has been discussed in Ref.~\cite{Tsushima:2011kh}.
There, the (virtual) intermediate states $D\bar{D}$, $D^{\ast}\bar{D}$, $D\bar{D}^{\ast}$, and $D^{\ast}\bar{D}^{\ast}$ in the $J/\psi$ interact with nucleons in nuclear matter (cf.~Sect.~\ref{sec:D_mesons}).
Through these virtual processes, the $J/\psi$ can form bound states with nuclei.
Several bound states including excited states are found for $^{4}\mathrm{He}$, $^{12}\mathrm{C}$, $^{16}\mathrm{O}$, $^{40}\mathrm{Ca}$, $^{90}\mathrm{Zr}$ and $^{208}\mathrm{Pb}$ nuclei.
As discussed above, the mass spectroscopy of charmonium in nuclear matter provides us with new tools to investigate the properties of the QCD vacuum, such as the modification of the gluon condensate.
For this purpose, we have focused on the $J/\psi N$ interaction around the threshold.
If we increase the energy by about 100 MeV, the $J/\psi N$ state can be converted to various channels such as $\bar{D}^{(\ast)}\Lambda_{c}$ and $\bar{D}^{(\ast)}\Sigma_{c} ^{(\ast)}$.
Recently, it has been reported by LHCb that there are two resonances $P_{c}(4380)$ and $P_{c}(4450)$, the hidden charm pentaquarks, around the mass thresholds of the $\bar{D}\Sigma_{c}$ and $\bar{D}^{\ast}\Sigma_{c}$ channels, respectively~\cite{Aaij:2015tga}.
The resonance signals are observed in the invariant mass spectrum of the $J/\psi p$ system in the final states.
This observation indicates that the $J/\psi N$ scattering has non-trivial structures due to the existence of the resonances and the coupled-channel effects.
Hence it is an interesting subject to study the spectral functions of nuclear matter with $J/\psi$ in a wide range of the scattering energies.
For example, it has been discussed that a $J/\psi$ with a finite momentum in nuclear matter leads to a large cross section of the $J/\psi$-nucleus reaction due to the existence of the $P_{c}$ resonances~\cite{Molina:2012mv}.
\section{Heavy-light mesons}
\label{sec:D_mesons}
To study the heavy-light mesons in nuclear medium, let us first recall their naming scheme, which is somewhat confusing. A heavy-light meson with charm $C=+1$ is called a $D$ meson, and one with bottomness $B=+1$ is denoted by $B$. Because the charm quark $c$ (bottom quark $b$) has $C=+1$ ($B=-1$), the quark content of the $D$ meson ($B$ meson) is $c\bar{q}$ ($\bar{b}q$) with a light quark $q$. Thus, we obtain the quark contents of $D$, $B$, and their antiparticles as summarized in Table~\ref{tbl:DB}. In vacuum, the basic properties of $D$ and $\bar{D}$ are symmetric. On the other hand, their in-medium properties are quite different, because the nuclear medium is composed only of the light quarks. The light degrees of freedom make the heavy-light mesons sensitive to the partial restoration of chiral symmetry in nuclear medium. Moreover, the modification of the properties of $D$ and $\bar{D}$ also affects the properties of excited charmonia near the $\bar{D}D$ threshold, such as $\psi(3770)$~\cite{Hayashigaki:2000es,Friman:2002fs}. In the following, we summarize these rich phenomena of heavy-light mesons in nuclear medium.
\begin{table}[tbp]
\caption{Quark contents of heavy-light mesons.}
\begin{center}
\begin{tabular}{|cc|cc|}
\hline
\multicolumn{2}{|c|}{$D(c\bar{q})$} & \multicolumn{2}{c|}{$\bar{D}(\bar{c}q)$} \\
$D^{+}(c\bar{d})$ & $D^{0}(c\bar{u})$
& $\bar{D}^{0}(\bar{c}u)$ & $D^{-}(\bar{c}d)$ \\
\hline
\multicolumn{2}{|c|}{$\bar{B}(b\bar{q})$} & \multicolumn{2}{c|}{$B(\bar{b}q)$} \\
$\bar{B}^{0}(b\bar{d})$ & $B^{-}(b\bar{u})$
& $B^{+}(\bar{b}u)$ & $B^{0}(\bar{b}d)$ \\
\hline
\end{tabular}
\end{center}
\label{tbl:DB}
\end{table}%
\subsection{Two-body interaction with nucleon}
\label{sec:DNinteraction}
The most basic information to study the heavy-light meson in nuclei is the two-body interaction with a nucleon. Because the isospin of the $D/\bar{D}$ meson is $I=1/2$, there are two isospin components in the $DN/\bar{D}N$ system, $I=0$ and $I=1$. At present, there is no direct experimental data to constrain these interactions. The $DN/\bar{D}N$ interactions are thus constructed by generalizing some established framework of the hadron interactions. Here we introduce several approaches proposed for the $DN/\bar{D}N$ interaction.
There are two important remarks. First, the quark-antiquark annihilation is allowed in the $D+N\sim \bar{q}c+qqq$ system, while it does not exist in the $\bar{D}+N\sim \bar{c}q+qqq$ system. As a consequence, the $DN$ system couples with the lower energy $\pi\Sigma_{c}$ and $\pi\Lambda_{c}$ channels, as well as the charmed baryon resonances, $\Lambda_{c}^{*}$ and $\Sigma_{c}^{*}$. To study the $DN$ system, we need to solve a complicated coupled-channel problem. In contrast, the $\bar{D}N$ system has no coupled channels at lower energies, so the problem is much simpler than for the $DN$ system. The $\bar{D}N$ system is in an exotic channel, whose quantum numbers ($C=-1$ and baryon number $B=1$) cannot be expressed by three valence quarks. Second, heavy quark symmetry relates the pseudoscalar $D$ meson with the vector $D^{*}$ meson. From the viewpoint of heavy quark symmetry, the $DN/\bar{D}N$ interaction should be considered together with the $D^{*}N/\bar{D}^{*}N$ channel. As we will see below, the inclusion of the vector meson channel plays an important role, especially in the $\bar{D}N$ system.
\paragraph{Contact interaction models with SU(4) symmetry}
As discussed in section~\ref{sec:chiral_effective_theory}, the scattering of the pseudoscalar mesons off the baryons in the flavor SU(3) sector can be well described by the resummation of the Weinberg-Tomozawa interaction~\cite{Hyodo:2011ur,Kaiser:1995eg,Oset:1997it,Oller:2000fj,Ikeda:2011pi,Ikeda:2012au}. The interaction kernel is given by
\begin{align}
V_{ij}(\sqrt{s})
=-\frac{ C_{ij}}{4f^{2}} (2\sqrt{s}-M_{i}-M_{j}),
\label{eq:WTinteraction}
\end{align}
where $C_{ij}$ is the group theoretical coefficient. The scattering amplitude $T_{ij}(\sqrt{s})$ is given by the solution of the matrix equation
\begin{align}
T(\sqrt{s}) = V(\sqrt{s}) + V(\sqrt{s}) G(\sqrt{s}) T(\sqrt{s}) .
\label{eq:scatteringequation}
\end{align}
The loop function $G_{i}(\sqrt{s})$ contains a cutoff parameter (subtraction constant) which should be determined either by fitting to the experimental data or by some theoretical arguments~\cite{Oller:2000fj,Lutz:2001yb,Hyodo:2008xr}. When there is sufficient attraction in the interaction $V$, a bound state or a resonance can be dynamically generated, which is expressed by a pole of the scattering amplitude $T$. The success of chiral SU(3) dynamics with the Weinberg-Tomozawa interaction indicates that the SU(3) symmetric interaction is a good starting point for these channels, even though there is a sizable SU(3) breaking effect in the observed hadron masses. While the Weinberg-Tomozawa interaction~\eqref{eq:WTinteraction} is a contact interaction, it can be regarded as the zero-range limit of the t-channel vector meson exchange, with the help of the universal vector meson coupling and the Kawarabayashi--Suzuki--Riazuddin--Fayyazuddin (KSRF) relation~\cite{Kawarabayashi:1966kd,Riazuddin:1966sw}. With these observations, there is a series of works which utilize the contact interaction model for the $DN/\bar{D}N$ systems, using the same form as Eq.~\eqref{eq:WTinteraction} under a larger flavor or spin-flavor symmetry.\footnote{This generalized framework is often referred to as the ``Weinberg-Tomozawa'' approach. However, the original Weinberg-Tomozawa theorem~\cite{Weinberg:1966kf,Tomozawa:1966jm} is derived from chiral symmetry. It is certainly not justified to start from the four-flavor chiral symmetry in QCD. When the spin symmetry is included, Eq.~\eqref{eq:WTinteraction} determines the interaction of the non-NG modes such as $\rho$ meson, whose dynamics cannot be constrained by chiral symmetry. It is therefore appropriate to regard this approach as a contact interaction model motivated by the vector meson exchange mechanism.}
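At fixed $\sqrt{s}$, Eq.~\eqref{eq:scatteringequation} with the on-shell factorized kernel is an algebraic matrix equation and can be solved by matrix inversion, $T=(1-VG)^{-1}V$. A minimal numerical sketch of this step is given below; the two-channel coefficients, masses, and loop-function values are hypothetical placeholders, not a fit to any of the models discussed here:

```python
import numpy as np

# Solve T = V + V G T  =>  T = (1 - V G)^{-1} V at fixed sqrt(s).
# All numerical inputs are hypothetical placeholders for illustration.
sqrt_s = 2.6                        # GeV
f = 0.093                           # meson decay constant, GeV
M = np.array([0.94, 2.45])          # baryon masses of the two channels, GeV
C = np.array([[3.0, 1.0],
              [1.0, 2.0]])          # group-theoretical coefficients (hypothetical)

# Weinberg-Tomozawa-type kernel, Eq. (WTinteraction)
V = -(C / (4 * f**2)) * (2 * sqrt_s - M[:, None] - M[None, :])

# Diagonal loop function at this energy (complex placeholder values)
G = np.diag([-0.01 + 0.002j, -0.008 + 0.0j])

# Matrix inversion solves the scattering equation exactly
T = np.linalg.solve(np.eye(2) - V @ G, V)
print(np.allclose(T, V + V @ G @ T))   # True: T satisfies the equation
```

A pole of $T$ (a dynamically generated state) corresponds to a zero of $\det(1-VG)$ in the complex energy plane.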
The first study in this direction is performed in Ref.~\cite{Hofmann:2005sw}, by extending the flavor symmetry to SU(4) with the charm quark. The ground state $1/2^{+}$ baryons are in the $\bm{20}\in \bm{4}\otimes\bm{4}\otimes \bm{4}$ representation of SU(4), and the $0^{-}$ mesons are in the $\bm{15}\in \bm{4}\otimes\bm{\bar{4}}$ representation. The $\bm{20}$ representation contains four SU(3) multiplets: $\bm{8}=\{N,\Lambda,\Sigma,\Xi\}$ of the light baryons, $\bm{\bar{3}}=\{\Lambda_{c},\Xi_{c}\}$ and $\bm{6}=\{\Sigma_{c},\Xi_{c}^{\prime},\Omega_{c}\}$ of the heavy-light-light baryons, and $\bm{3}=\{\Xi_{cc},\Omega_{cc}\}$ of the heavy-heavy-light baryons. The $\bm{15}$ multiplet can be decomposed into $\bm{8}=\{\pi,K,\eta_{8}\}$ of the light mesons, $\bm{\bar{3}}=\{D,D_{s}\}$ and $\bm{3}=\{\bar{D},\bar{D}_{s}\}$ of the heavy-light mesons, and $\bm{1}=\{\eta_{1}\}$. Physical $\eta$, $\eta^{\prime}$, and $\eta_{c}$ are expressed by the linear combinations of $\eta_{8}$ and $\eta_{1}$, together with the SU(4) singlet in $\bm{4}\otimes\bm{\bar{4}}$. We note that the vector $D^{*}$ mesons are not included in this model. From the SU(4) symmetry, the strengths of the diagonal $DN/\bar{D}N$ interactions [in the convention of Eq.~\eqref{eq:WTinteraction}] are found to be~\cite{Mizutani:2006vq}
\begin{align}
C_{DN DN}^{(I=0)}&=3 ,
\quad C_{DN DN}^{(I=1)}=1 ,
\label{eq:DNcontact} \\
C_{\bar{D}N \bar{D}N}^{(I=0)}&=0 ,
\quad C_{\bar{D}N \bar{D}N}^{(I=1)}=-2,
\label{eq:DbarNcontact}
\end{align}
where the positive (negative) sign corresponds to the attractive (repulsive) interaction. Namely, the $DN(I=0)$ channel is strongly attractive, the $DN(I=1)$ channel is moderately attractive, the interaction vanishes in the $\bar{D}N(I=0)$ channel, and the $\bar{D}N(I=1)$ channel is repulsive. Interestingly, the strengths of the interactions in Eqs.~\eqref{eq:DNcontact} and \eqref{eq:DbarNcontact} are identical to those of the Weinberg-Tomozawa term in the $\bar{K}N/KN$ sector, because of the replacement of $c\leftrightarrow s$. The non-attractive nature of the $\bar{D}N$ interactions can be understood by the group theoretical property of $C_{ij}$, which is in most cases repulsive in the exotic channels~\cite{Hyodo:2006yk,Hyodo:2006kg}. By solving the scattering equation~\eqref{eq:scatteringequation}, quasi-bound states are dynamically generated in the $DN$ sector, while no state is generated in the $\bar{D}N$ sector, reflecting the interaction strengths in Eqs.~\eqref{eq:DNcontact} and \eqref{eq:DbarNcontact}. The quasi-bound states have the quantum number $J^{P}=1/2^{-}$, because the contact term gives the $s$-wave interaction. The analysis of the speed plot\footnote{The speed plot (absolute value of the energy derivative of the scattering amplitude) shows a peak at the energy of a narrow resonance and is related to the time delay (see, e.g., Ref.~\cite{Kelkar:2003pt}).} shows the existence of a narrow state around 2590 MeV (2620 MeV) in the $DN(I=0)$ channel [$DN(I=1)$ channel]~\cite{Hofmann:2005sw}. The $I=0$ state is identified with the negative-parity $\Lambda_{c}(2595)$ state. The results are summarized in Tables~\ref{tbl:DNbound} and \ref{tbl:DbarNbound}. We also note that an exotic state is found around 2780 MeV in the $\bar{D}_{s}N$-$\bar{D}\Lambda$-$\bar{D}\Sigma$ coupled-channels. The quark content of this manifestly exotic state is $\bar{c}sqqq$, which corresponds to the pentaquark state discussed in the constituent quark model~\cite{Lipkin:1987sk}. 
In the $D_{s}N$ channel, a similar state with the $\bar{s}cqqq$ configuration is found at 2892 MeV. As we will see below, the existence of these states plays an important role for the study of $D_{s}/\bar{D}_{s}$ in nuclear matter.
\begin{table}[tbp]
\caption{Quasi-bound states with $J^{P}=1/2^{-}$ in the $DN$ channel in various models. All numbers are given in units of MeV, and rounded off at one MeV precision. Here we summarize the states found below the $DN$ threshold $\sim 2806$ MeV. Results of Ref.~\cite{Hofmann:2005sw} are those with the SU(4) breaking effect. Results of Ref.~\cite{Yamaguchi:2013ty} are those in the $\pi\rho\omega$ model, where the state below the $DN$ threshold is stable because the lower energy channels are not included. The pole position of the model in Ref.~\cite{Mizutani:2006vq} can be found in Ref.~\cite{GarciaRecio:2008dp}. }
\begin{center}
\begin{tabular}{|l|ll|}
\hline
Model & $DN(I=0)$ & $DN(I=1)$ \\
\hline
SU(4) contact~\cite{Hofmann:2005sw}
& $M_{R}=2593$, $\Gamma_{R}<1$
& $M_{R}=2620$, $\Gamma_{R}=1$ \\
\hline
SU(4) contact~\cite{Mizutani:2006vq}
& $M_{R}=2595$, $\Gamma_{R}=2$
& $M_{R}=2661$, $\Gamma_{R}=37$ \\
& $M_{R}=2625$, $\Gamma_{R}=103$
& $M_{R}=2695$, $\Gamma_{R}=153$ \\
\hline
SU(8) contact~\cite{GarciaRecio:2008dp}
& $M_{R}=2595$, $\Gamma_{R}=1$
& $M_{R}=2554$, $\Gamma_{R}=1$ \\
& $M_{R}=2610$, $\Gamma_{R}=71$
& $M_{R}=2612$, $\Gamma_{R}=179$ \\
&
& $M_{R}=2637$, $\Gamma_{R}=80$ \\
\hline
Meson exchange~\cite{Haidenbauer:2010ch}
& $M_{R}=2594$, $\Gamma_{R}=6$
& $M_{R}=2793$, $\Gamma_{R}=12$ \\
& $M_{R}=2603$, $\Gamma_{R}=126$
& \\
\hline
Pion exchange~\cite{Yamaguchi:2013ty}
& $M_{R}=2724$ & None \\
\hline
\end{tabular}
\end{center}
\label{tbl:DNbound}
\end{table}%
\begin{table}[tbp]
\caption{Bound states with $J^{P}=1/2^{-}$ in the $\bar{D}N$ channel in various models. All numbers are given in units of MeV, and rounded off at one MeV precision. Results of Ref.~\cite{Yamaguchi:2011xb} are those in the $\pi\rho\omega$ model. Here we summarize the states found below the $\bar{D}N$ threshold $\sim 2806$ MeV.}
\begin{center}
\begin{tabular}{|l|ll|}
\hline
Model & $\bar{D}N(I=0)$ & $\bar{D}N(I=1)$ \\
\hline
SU(4) contact~\cite{Hofmann:2005sw}
& None & None \\
SU(8) contact~\cite{Gamermann:2010zz}
& 2805 & None \\
Meson exchange~\cite{Haidenbauer:2007jq}
& None & None \\
Pion exchange~\cite{Yamaguchi:2011xb}
& 2804 & None \\
Chiral quark model~\cite{Carames:2012bd} & None & None \\
\hline
\end{tabular}
\end{center}
\label{tbl:DbarNbound}
\end{table}%
This framework for the $DN$ channel is further studied in Ref.~\cite{Mizutani:2006vq}. The interaction kernel is modified to be consistent with the zero-range limit, and a suppression factor $\kappa_{c}=(\bar{m}_{V}/\bar{m}^{c}_{V})^{2} \simeq 1/4$ is introduced in view of the underlying vector meson exchange mechanism, which partially breaks the SU(4) symmetry. From the analysis of the amplitude on the real axis, the quasi-bound states are found at positions qualitatively similar to those in Ref.~\cite{Hofmann:2005sw}, while the $I=1$ state becomes broader. The pole positions of this SU(4) model are calculated in Ref.~\cite{GarciaRecio:2008dp} as summarized in Table~\ref{tbl:DNbound}. Interestingly, there is an additional broad state in each sector. Note that such a broad state may not be extracted in the analysis of the speed plot in Ref.~\cite{Hofmann:2005sw}. The appearance of two poles between the $\pi\Sigma_{c}$ and $DN$ thresholds in the $I=0$ sector is analogous to the double-pole structure of $\Lambda(1405)$ in the SU(3) sector~\cite{Jido:2003cb,Hyodo:2007jq}.
The effect of the finite range interaction is examined in Ref.~\cite{JimenezTejero:2009vq}. The use of the nonlocal interaction kernel gives qualitatively similar results to Ref.~\cite{Mizutani:2006vq}, but the generated resonances generally have broader widths, because the suppression of the charm exchange process is milder than in the zero-range approximation.
\paragraph{Contact interaction models with SU(8) symmetry}
The SU(4) contact interaction approach does not include the $D^{*}N/\bar{D}^{*}N$ channel, in contrast to the requirement of the heavy quark symmetry. Moreover, because the SU(4) symmetry relates the charm quark with the light quarks, at first glance it appears to contradict the heavy quark symmetry which emerges in the heavy quark limit. However, we should recall that in the SU(4) approaches the flavor symmetry is mainly used to determine the unknown coupling constants of charm hadrons, and the symmetry is largely broken by the physical hadron masses in the loop functions. As long as the physical hadron masses are used, it is mandatory to couple the $D^{*}N/\bar{D}^{*}N$ channels with the $DN/\bar{D}N$ channels, from the viewpoint of heavy quark spin symmetry.
In addition, the coupling constants within the heavy quark spin multiplets should follow the heavy quark spin symmetry.
The incorporation of the heavy quark spin symmetry is discussed in Ref.~\cite{GarciaRecio:2008dp} for the $DN$ sector and in Ref.~\cite{Gamermann:2010zz} for the $\bar{D}N$ sector, within the framework of spin-flavor SU(8) symmetry. As a generalization of the SU(6) approach~\cite{GarciaRecio:2005hy}, the SU(8) model combines spin SU(2) with flavor SU(4).
The ground state baryon field $B$ is in the $\bm{120}\in \bm{8}\otimes\bm{8}\otimes \bm{8}$ representation of SU(8), and the mesons $M$ are in the $\bm{63}\in \bm{8}\otimes\bm{\bar{8}}$ representation. The $\bm{120}$ multiplet can be decomposed into the SU(4) representations as ${\bf 20_{2}} \oplus {\bf 20'_{4}}$ where the subscript represents the dimension of the SU(2) representation. The former corresponds to the $1/2^{+}$ baryons in the SU(4) approach (nucleon and its flavor partners), and the latter contains the $3/2^{+}$ baryons with SU(3) representations $\bm{10}\oplus\bm{6}\oplus\bm{3}\oplus\bm{1}$ ($\Delta$ and its flavor partners). The mesons in the $\bm{63}$ representation consist of ${\bf 15_{1}} \oplus {\bf 15_{3}} \oplus {\bf 1_{3}}$, where ${\bf 15_{1}}$ contains the pseudoscalar mesons ($\pi$ and its flavor partners) and ${\bf 15_{3}}$ and ${\bf 1_{3}}$ correspond to the vector mesons ($\rho$ and its flavor partners). Because the vector current is given by the adjoint representation ${\bf 63}$, the SU(8) symmetric contact interaction is constructed as
\begin{align}
{\cal L}^{\mathrm{SU}(8)} \sim ((M^{\dag} \otimes M)_{{\bf 63}_{a}} \otimes (B^{\dag} \otimes B)_{\bf 63})_{\bf 1} .
\label{eq:Lagrangian_SU8}
\end{align}
Since the ground state multiplet contains spin $3/2$ baryons and spin $1$ mesons, the two-body $s$-wave system can be not only in $1/2^{-}$ but also in $3/2^{-}$ and $5/2^{-}$. The coupling strengths in the normalization of Eq.~\eqref{eq:WTinteraction} in the spin $1/2$ channel are given by\footnote{As a convention, positive numbers in the diagonal components are attraction, while negative numbers are repulsion.}
\begin{align}
C_{ij}^{DN(I=0)}
&=
\begin{pmatrix}
3 & \sqrt{27} \\
\sqrt{27} & 9
\end{pmatrix} ,
\quad C_{ij}^{DN(I=1)}
= \begin{pmatrix}
1 & \sqrt{\frac{1}{3}} & -\sqrt{\frac{32}{3}}\\
\sqrt{\frac{1}{3}} & \frac{1}{3} & \sqrt{\frac{32}{9}} \\
-\sqrt{\frac{32}{3}} & \sqrt{\frac{32}{9}} & \frac{32}{3}
\end{pmatrix} ,
\label{eq:DNcontactSU8} \\
C_{ij}^{\bar{D}N(I=0)}
&=
\begin{pmatrix}
0 & \sqrt{12} \\
\sqrt{12} & -4
\end{pmatrix} ,
\quad
C_{ij}^{\bar{D}N(I=1)}
=\begin{pmatrix}
-2 & -\sqrt{\frac{16}{3}} & -\sqrt{\frac{32}{3}} \\
-\sqrt{\frac{16}{3}} & \frac{2}{3} & -\sqrt{\frac{32}{9}} \\
-\sqrt{\frac{32}{3}} & -\sqrt{\frac{32}{9}} & -\frac{2}{3}
\end{pmatrix}
,
\label{eq:DbarNcontactSU8}
\end{align}
where the basis states are given by
\begin{align}
\{ DN/\bar{D}N(^{2}\mathrm{S}_{1/2}), D^{*}N/\bar{D}^{\ast}N(^{2}\mathrm{S}_{1/2})\} ,
\end{align}
for $I=0$ and
\begin{align}
\{ DN/\bar{D}N(^{2}\mathrm{S}_{1/2}), D^{*}N/\bar{D}^{\ast}N(^{2}\mathrm{S}_{1/2}), D^{*}\Delta/\bar{D}^{*}\Delta(^{2}\mathrm{S}_{1/2})\} ,
\end{align}
for $I=1$ with the spectroscopic notation $^{2S+1}L_{J}$. The other coupled channels are not shown explicitly here. We note that the $(1,1)$ components are identical to those in Eqs.~\eqref{eq:DNcontact} and \eqref{eq:DbarNcontact}, because the SU(4) model is in the subspace of the SU(8) model. Note also that the contact interaction only gives the $s$-wave interaction. When the tensor force is included as in the pion-exchange model, the $d$-wave components also couple to the problem.
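Since the SU(4) model lives in a subspace of the SU(8) model, the $(1,1)$ components of the matrices above must reproduce the SU(4) strengths. This can be checked mechanically (a trivial bookkeeping sketch):

```python
import numpy as np

# SU(8) coupling matrices of Eqs. (DNcontactSU8) and (DbarNcontactSU8).
C_DN_I0 = np.array([[3, np.sqrt(27)],
                    [np.sqrt(27), 9]])
C_DN_I1 = np.array([[1, np.sqrt(1/3), -np.sqrt(32/3)],
                    [np.sqrt(1/3), 1/3, np.sqrt(32/9)],
                    [-np.sqrt(32/3), np.sqrt(32/9), 32/3]])
C_DbarN_I0 = np.array([[0, np.sqrt(12)],
                       [np.sqrt(12), -4]])
C_DbarN_I1 = np.array([[-2, -np.sqrt(16/3), -np.sqrt(32/3)],
                       [-np.sqrt(16/3), 2/3, -np.sqrt(32/9)],
                       [-np.sqrt(32/3), -np.sqrt(32/9), -2/3]])

# The (1,1) entries reproduce the SU(4) strengths of
# Eqs. (DNcontact) and (DbarNcontact): 3, 1, 0, -2.
su4 = [float(M[0, 0]) for M in (C_DN_I0, C_DN_I1, C_DbarN_I0, C_DbarN_I1)]
print(su4)   # [3.0, 1.0, 0.0, -2.0]
```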
By solving the scattering equation, several states are dynamically generated in the $DN$ sector. The pole structure in the $DN(I=0)$ channel is similar to the SU(4) model; one narrow pole and one broad pole appear around 2600 MeV. However, by analyzing the coupling strengths, it is shown that the narrow pole strongly couples to the $D^{*}N$ channel~\cite{GarciaRecio:2008dp}. In the $DN(I=1)$ sector, three states are found below the $DN$ threshold, because of the larger model space of the SU(8) approach than the SU(4) one. A further extension of the contact interaction model to the bottom sector is discussed in Ref.~\cite{GarciaRecio:2012db}, where the corresponding $1/2^{-}$ resonance $\Lambda_{b}(5912)$ is dynamically generated in the $\bar{B}N(I=0)$ sector. Note however that the binding energy of the $\Lambda_{c}(2595)$ [$\Lambda_{b}(5912)$] as the $DN$ ($\bar{B}N$) bound state is $\sim 200$ MeV ($\sim 300$ MeV), and the ``molecule'' picture may not be justified for such a tightly bound system.
In the $\bar{D}N(I=0)$ sector, a shallow bound state is found at 2805 MeV (1 MeV below the $\bar{D}N$ threshold)~\cite{Gamermann:2010zz} in the coupled-channel $\bar{D}N$-$\bar{D}^{*}N$ system. At first glance, the origin of the attraction may not be clear, because the diagonal interactions are zero and repulsive in Eq.~\eqref{eq:DbarNcontactSU8}. As pointed out in Ref.~\cite{Gamermann:2010zz}, however, the attractive interaction arises by taking the linear combination of $(\sqrt{3}\ket{\bar{D}N}+\ket{\bar{D}^{*}N})/2$. In fact, this combination coincides with the spin-complex basis which is introduced to extract the heavy quark spin symmetry in multi-hadron systems~\cite{Yasui:2013vca,Yamaguchi:2014era}. The coupling strength in Eq.~\eqref{eq:DbarNcontactSU8} in the spin-complex basis is given as
\begin{align}
C_{ij}^{\bar{D}N(I=0), {\rm SC}}
&=
\begin{pmatrix}
-6 & 0 \\
0 & 2
\end{pmatrix} ,
\end{align}
and the attraction $C_{22}^{\bar{D}N(I=0), {\rm SC}}=2$ is responsible for the bound state formation. The absence of the off-diagonal components in the spin-complex basis is a consequence of the heavy quark symmetry, because the contained spin-complexes are different. In this way, when the heavy quark symmetry is incorporated, a shallow bound state is supported in the $\bar{D}N(I=0)$ sector, which otherwise has no bound state. The SU(8) model also generates bound and resonance states in other spin-isospin sectors. For instance, a bound state of $N\bar{D}^{\ast}$ is found at 2922 MeV in the $I(J^{P})=0(3/2^-)$ channel. This state forms a HQS doublet with the $1/2^{-}$ state~\cite{Yamaguchi:2014era}. Although the $I(J^{P})=0(3/2^-)$ state is stable in this approach, it can decay into the $\bar{D}N$ channel in $d$ wave. Other states are found in the same way; a resonance with $M_{R}=2873$ MeV and $\Gamma_{R}=91$ MeV in the $I(J^{P})=1(1/2^-)$ channel, a resonance with $M_{R}=2979$ MeV and $\Gamma_{R}=32$ MeV in the $I(J^{P})=1(3/2^-)$ channel, a bound state at 3126 MeV in the $I(J^{P})=1(5/2^{-})$ channel,
a bound state at 3126 MeV in the $I(J^{P})=2(1/2^-)$ channel, and a bound state at 3061 MeV in the $I(J^{P})=2(3/2^{-})$ channel. Note that some states have a large binding energy; the $I(J^{P})=1(5/2^{-})$ state is about 100 MeV bound from the $\Delta \bar{D}^{*}$ threshold. When the decay width of $\Delta$ is taken into account, the bound states acquire a width of several tens of MeV, while the binding energies are not very much affected.
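The role of the spin-complex basis can be verified with a few lines of linear algebra: an orthogonal rotation of the coupling matrix in Eq.~\eqref{eq:DbarNcontactSU8} diagonalizes it into the strengths quoted above. A minimal check (numpy is used only for the matrix algebra):

```python
import numpy as np

# Coupling matrix C_ij for the DbarN(I=0) sector in the
# {DbarN(2S_1/2), Dbar*N(2S_1/2)} basis, Eq. (DbarNcontactSU8).
C = np.array([[0.0, np.sqrt(12.0)],
              [np.sqrt(12.0), -4.0]])

# Orthogonal rotation to the spin-complex basis; the second row is the
# attractive combination (sqrt(3)|DbarN> + |Dbar*N>)/2 of the text.
U = np.array([[-0.5, np.sqrt(3.0) / 2.0],
              [np.sqrt(3.0) / 2.0, 0.5]])

C_sc = U @ C @ U.T
print(np.round(C_sc, 10))   # diagonal matrix with entries -6 and 2
```

The off-diagonal elements vanish in this basis, and the eigenvalue $+2$ (attraction in this convention) belongs to the combination responsible for the bound state.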
\paragraph{Meson exchange models}
One of the traditional approaches to study the hadron-hadron interaction is the meson exchange potential~\cite{Machleidt:1987hj}. In the strangeness sector, the $\bar{K}N/KN$ interactions have been developed by the J\"ulich group~\cite{Buettgen:1990yw,MuellerGroeling:1990cw,Hoffmann:1995ie}. The J\"ulich meson exchange model is generalized to the charm sector in Refs.~\cite{Haidenbauer:2010ch,Haidenbauer:2007jq}.
The meson exchange model is first applied to the $\bar{D}N$ sector~\cite{Haidenbauer:2007jq}. As a generalization of the $KN$ potential~\cite{Buettgen:1990yw,Hoffmann:1995ie}, $\bar{D}$ and $N$ interact with each other through the exchange of the $\rho$, $\omega$, $\sigma$, and $a_{0}$ mesons. The box-diagrams with the intermediate $\bar{D}^{*}N$, $\bar{D}^{*}\Delta$, $\bar{D}\Delta$ states are included. The coupling constants are determined by the SU(4) symmetry. As a short range interaction, the quark interchange processes with one-gluon exchange are also considered. The scattering amplitude is obtained by solving the Lippmann-Schwinger equation with this potential. The existence of bound states is not reported. In the $\bar{D}N(I=0)$ sector, the long-range meson exchange is attractive, while the short-range quark-gluon exchange is repulsive. As a consequence, the phase shift changes from attractive around the threshold to repulsive at higher energies. The interaction in the $\bar{D}N(I=1)$ sector is found to be repulsive.
The $DN$ sector is studied with the same mechanisms. The results are partly presented in Ref.~\cite{Haidenbauer:2008ff} and the detailed description is given in Ref.~\cite{Haidenbauer:2010ch}. In the $DN$ sector, however, the quark-gluon exchange is absent, because the antiquark in the $D$ meson cannot be exchanged with the quarks in the nucleon. Then the dominant vector meson exchange plays a similar role to the contact interaction in the SU(4) model. In fact, the $DN(I=0)$ sector generates two poles near the $\pi\Sigma_{c}$ threshold, which can be identified with $\Lambda_{c}(2595)$. The quasi-bound state in the $DN(I=1)$ channel is obtained at higher energies than in the contact interaction model, and is identified as $\Sigma_{c}(2800)$, whose spin-parity is not yet determined experimentally. A more detailed comparison with the SU(4)~\cite{Mizutani:2006vq} and SU(8)~\cite{GarciaRecio:2008dp} models can be found in Ref.~\cite{GarciaRecio:2008dp}.
In the meson exchange model, the $D^{*}N/\bar{D}^{*}N$ channel is included in the intermediate state of the box diagrams. However, the diagonal $D^{*}N\to D^{*}N/\bar{D}^{*}N\to \bar{D}^{*}N$ interaction is not included, and the role of the heavy quark symmetry is not very clear.
\paragraph{Pion exchange models with heavy quark symmetry}
The pion-exchange potential, accompanied by the tensor force, is one of the most important ingredients in the $NN$ interaction.
As for the $D^{(\ast)}N$ and $\bar{D}^{(\ast)}N$ interactions,
the one-pion exchange becomes possible when the heavy quark symmetry is adopted:
it is induced by the mixing of the $DN/\bar{D}N$ and $D^{*}N/\bar{D}^{\ast}N$ channels.
It is important that the proximity of the $DN/\bar{D}N$ and $D^{\ast}N/\bar{D}^{\ast}N$ channels ($m_{D^{\ast}}-m_{D}\simeq 140$ MeV) enhances the strength of the pion exchange potential through the off-diagonal channel coupling, while there is no pion exchange in the diagonal $DN/\bar{D}N$ channel due to the parity conservation.\footnote{In contrast to the $\Lambda N$-$\Sigma N$ coupling in hypernuclei, which is the mixing of different isospin states, the $\bar{D}N$-$\bar{D}^{\ast}N$ coupling is the mixing of different spin states.}
The mixing effect is further enhanced in the $\bar{B}^{(\ast)}N/B^{(\ast)}N$ case.
In the light quark sector, in contrast, the pion exchange does not play an important role, because the mass splitting between the spin-zero and spin-one states is large: $m_{K^{\ast}}-m_{K}\simeq 400$ MeV and $m_{\rho}-m_{\pi}\simeq 640$ MeV.
The pion-exchange model is intensively applied to the exotic $\bar{D}N$ sector where the coupling to lower energy channels is absent.
In Ref.~\cite{Cohen:2005bx}, the pion-exchange model is considered to examine the properties of the stable heavy pentaquark bound state, which is shown to exist in the heavy quark limit combined with the large $N_{c}$ limit\footnote{There are early studies of Skyrmions where a $D^{(\ast)}/\bar{D}^{(\ast)}$ meson is bound in the bound-state approach~\cite{Riska:1992qd,Oh:1994np,Oh:1994yv}.}.
In this study, however, the coupling of different angular momentum states via the tensor force is not fully considered. The full calculation including the tensor force is performed in Ref.~\cite{Yasui:2009bz}. In analogy with the nucleon-nucleon interaction, the tensor force plays an important role in the pion exchange. In fact, the tensor force is known to significantly affect the nuclear structure, including the neutron-rich nuclei (see Refs.~\cite{Myo:2007vm,Otsuka:2005zz,Myo:2015rbv}). The tensor force works strongly through the mixing of the states with angular momentum $L$ and those with $L\pm2$. In the present case, the mixing of $\bar{D}N$ and $\bar{D}^{\ast}N$ is important, in addition to the angular momentum mixing.
All the relevant channels for the $J\le 3/2$ (5/2) systems with the negative (positive) parity are listed in Table~\ref{table:DbarN_all}.
\begin{table}[tbp]
\centering
\caption{Various coupled channels for a
given quantum number $J^P$ for negative/positive parity $P = \mp1$~\cite{Yasui:2009bz,Yamaguchi:2011xb,Yamaguchi:2011qw}. }
\begin{tabular}{ c | c c c c}
\hline
$J^P$ & \multicolumn{4}{c }{channels} \\
\hline
$1/2^-$ & $\bar{D}N(^2\mathrm{S}_{1/2})$ & $\bar{D}^*N(^2\mathrm{S}_{1/2})$ &
$\bar{D}^*N(^4\mathrm{D}_{1/2})$ & \\
$3/2^-$ & $\bar{D}N(^2\mathrm{D}_{3/2})$ & $\bar{D}^*N(^4\mathrm{S}_{3/2})$ &
$\bar{D}^*N(^4\mathrm{D}_{3/2})$ & $\bar{D}^*N(^2\mathrm{D}_{3/2})$ \\
\hline
$1/2^+$ &$\bar{D}N(^2\mathrm{P}_{1/2})$&$\bar{D}^\ast N(^2\mathrm{P}_{1/2})$&$\bar{D}^\ast N(^4\mathrm{P}_{1/2})$& \\
$3/2^+$ & $\bar{D}N(^2\mathrm{P}_{3/2})$&$\bar{D}^\ast N(^2\mathrm{P}_{3/2})$&$\bar{D}^\ast
N(^4\mathrm{P}_{3/2})$&$\bar{D}^\ast N(^4\mathrm{F}_{3/2})$ \\
$5/2^+$&$\bar{D}N(^2\mathrm{F}_{5/2})$&$\bar{D}^\ast N(^4\mathrm{P}_{5/2})$&$\bar{D}^\ast
N(^2\mathrm{F}_{5/2})$&$\bar{D}^\ast N(^4\mathrm{F}_{5/2})$ \\
\hline
\end{tabular}
\label{table:DbarN_all}
\end{table}
For instance, in the $J^{P}=1/2^{-}$ and $3/2^{-}$ sectors relevant coupled-channels are given by
\begin{align}
\{ \bar{D}N(^{2}\mathrm{S}_{1/2}), \bar{D}^{\ast}N(^{2}\mathrm{S}_{1/2}), \bar{D}^{\ast}N(^{4}\mathrm{D}_{1/2}) \} ,
\end{align}
and
\begin{align}
\{ \bar{D}N(^{2}\mathrm{D}_{3/2}), \bar{D}^{\ast}N(^{4}\mathrm{S}_{3/2}), \bar{D}^{\ast}N(^{4}\mathrm{D}_{3/2}), \bar{D}^{\ast}N(^{2}\mathrm{D}_{3/2}) \} ,
\end{align}
respectively. The coupled-channel Hamiltonian of the pion-exchange model is given by
\begin{align}
H_{1/2^-} &=
\left(
\begin{array}{ccc}
K_0 & \sqrt{3} \, {C} & -\sqrt{6} \, {T} \\
\sqrt{3} \, {C} & K_0-2 \, {C} + \Delta M & -\sqrt{2} \, {T} \\
-\sqrt{6} \, {T} & -\sqrt{2} \, {T} & K_2 + ({C} - 2\, {T}) + \Delta M
\end{array}
\right), \label{eq:OPEP_DbarN_1/2} \\
H_{3/2^-} &=
\left(
\begin{array}{cccc}
K_2 & \sqrt{3}\, {T} & -\sqrt{3} \, {T} & \sqrt{3}\,{C} \\
\sqrt{3}\,{T} &K_0 + {C} + \Delta M & 2\,{T} & {T} \\
-\sqrt{3}\,{T} & 2\,{T} & K_2 + {C} + \Delta M & -{T} \\
\sqrt{3}\,{C} & {T} & -{T} & K_2 -2\,{C} + \Delta M
\end{array}
\right), \label{eq:OPEP_DbarN_3/2}
\end{align}
where the kinetic term is $K_{\ell}=-( \partial^2 / \partial r^2 + (2/r)\partial / \partial r - \ell(\ell+1)/r^2)/(2\mu)$ with the reduced mass $\mu$, and the central and tensor potentials are ${C}=\kappa\, C(r;m_\pi)$ and ${T} = \kappa\, T(r;m_\pi)$, respectively, with $r$ the distance between $\bar{D}^{(\ast)}$ and $N$. The coupling constant is $\kappa = (g g_{\pi NN}/\sqrt{2} m_N f_{\pi})(\vec{\tau}_{P} \!\cdot\! \vec{\tau}_N/3)$, with $g$ the pion coupling of the $D$ meson in Eq.~\eqref{eq:L_heavy_hadron_pion} and $g_{\pi NN}$ the pion-nucleon coupling constant. The spatial dependences of the central and tensor potentials are given by
\begin{align}
C(r;m) &= \int \frac{\mathrm{d}^{3}\vec{q}}{(2\pi)^{3}} \frac{m^{2}}{|\vec{q}\,|^{2}+m^{2}} e^{i\vec{q}\cdot\vec{r}} F(\Lambda_{P},\vec{q}\,) F(\Lambda_{N},\vec{q}\,), \label{eq:central_C} \\
T(r;m) S_{12}(\hat{r}) &= \int \frac{\mathrm{d}^{3}\vec{q}}{(2\pi)^{3}} \frac{-|\vec{q}\,|^{2}}{|\vec{q}\,|^{2}+m^{2}} S_{12}(\hat{q}) e^{i\vec{q}\cdot\vec{r}} F(\Lambda_{P},\vec{q}\,) F(\Lambda_{N},\vec{q}\,), \label{eq:tensor_T}
\end{align}
where $S_{12}(\hat{x}) = 3 (\vec{\cal O}_{1} \cdot
\hat{x})(\vec{\sigma}_{2} \cdot \hat{x}) - \vec{\cal O}_{1} \cdot
\vec{\sigma}_{2}$
with $\hat{x}=\vec{x}/|\vec{x}\,|$. Here $\vec{\cal O}_{1}$ is the polarization vector $\vec{\varepsilon}^{\,(\lambda)}$ or $\vec{\varepsilon}^{\,(\lambda)\dag}$ with helicity $\lambda$ for the $\pi \bar{D}\bar{D}^{\ast}$ vertex, and the spin-one operator $\vec{T}$ for the $\pi \bar{D}^{\ast}\bar{D}^{\ast}$ vertex; the subscript 1 refers to $P^{(\ast)}$ and the subscript 2 to $N$~\cite{Yasui:2009bz}.
The hadronic form factor is $F(\Lambda,\vec{q}\,)=(\Lambda^{2}-m^{2})/(\Lambda^{2}+|\vec{q}\,|^2)$ with a cutoff $\Lambda$.
The energy is measured from the $\bar{D} N$ threshold, and $\Delta M=M_{\bar{D}^{\ast}}-M_{\bar{D}}$ is the mass difference between the $\bar{D}^{\ast}N$ and $\bar{D}N$ thresholds.
The cutoff $\Lambda_P$
is determined by the ratio of the matter radii of the heavy meson $P$ and
the nucleon $N$, namely $\Lambda_P/\Lambda_N=r_N/r_P$~\cite{Yasui:2009bz}.
We note that the contact term contribution has been neglected in the potentials in
Eqs.~(\ref{eq:central_C}) and (\ref{eq:tensor_T}). This is because the cutoff $\Lambda_{N}$
of the nucleon is fixed to reproduce the deuteron binding energy without including the
contact term in the nucleon-nucleon potential. For consistency, the contact term of the
heavy meson-nucleon potential is also neglected.
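To make the structure of Eq.~(\ref{eq:central_C}) concrete, the angular integration reduces it to a one-dimensional radial integral that can be evaluated numerically and, for equal cutoffs, compared with the closed form obtained by partial fractions. The following sketch uses hypothetical parameter values ($m\simeq m_{\pi}$, $\Lambda\simeq 1$ GeV, in units of fm$^{-1}$), not the parameter set of Ref.~\cite{Yasui:2009bz}:

```python
import numpy as np

# C(r; m): after the angular integration the 3D Fourier transform becomes
#   C(r) = 1/(2 pi^2 r) * Int_0^inf dq q sin(qr) m^2/(q^2+m^2) F(Lam,q)^2 .
# Hypothetical illustrative values, in fm^-1:
m   = 0.70   # ~ pion mass (138 MeV)
Lam = 5.0    # ~ 1 GeV cutoff, taken equal at both vertices

def C_numeric(r):
    q = np.linspace(1e-8, 200.0, 400001)
    F = (Lam**2 - m**2) / (Lam**2 + q**2)            # monopole form factor
    f = q * np.sin(q * r) * m**2 / (q**2 + m**2) * F**2
    dq = q[1] - q[0]
    integral = dq * (f.sum() - 0.5 * (f[0] + f[-1]))  # trapezoid rule
    return integral / (2.0 * np.pi**2 * r)

def C_analytic(r):
    # partial fractions for equal cutoffs: a Yukawa term minus
    # cutoff-scale corrections that vanish as Lam -> infinity
    return (m**2 * (np.exp(-m * r) - np.exp(-Lam * r)) / (4 * np.pi * r)
            - m**2 * (Lam**2 - m**2) * np.exp(-Lam * r) / (8 * np.pi * Lam))

print(C_numeric(1.0), C_analytic(1.0))
```

For $\Lambda\to\infty$ the expression reduces to the familiar Yukawa form $m^{2}e^{-mr}/(4\pi r)$; the form factors soften the potential at short distances.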
By calculating the eigenvalues of this Hamiltonian, a shallow bound state (with a binding energy of a few MeV) is obtained below the $\bar{D}N$ threshold in the $I(J^{P})=0(1/2^{-})$ channel, while no bound state is found in the other channels. Inclusion of the $\rho$ and $\omega$ exchanges does not change the results qualitatively. Interestingly, the properties of the bound state quantitatively agree with those of the SU(8) contact model (see Table~\ref{tbl:DbarNbound}), although the driving mechanism is different: in the pion-exchange model the tensor force is crucial for the binding. On the other hand, the comparison with the SU(4) contact model and the meson exchange model shows that the bound state below the $\bar{D}N$ threshold is obtained only when the heavy quark symmetry is incorporated in the framework, because the channel coupling of $\bar{D}N$ and $\bar{D}^{\ast}N$ provides an effective attraction.
Other $I(J^{P})$ sectors are further studied in Refs.~\cite{Yamaguchi:2011xb,Yamaguchi:2011qw}, where various resonances are found not only in the negative parity channels but also in the positive parity ones. Of particular importance is the resonance in the $I(J^{P})=0(3/2^{-})$ channel.
Interestingly, the main attraction for this resonance is provided by the $\bar{D}^{\ast}N(^{4}\mathrm{S}_{3/2})$ channel.
The decay to the lowest threshold $\bar{D}N$ is strongly suppressed, because the decay channel $\bar{D}N(^{2}\mathrm{D}_{3/2})$ is in $d$-wave.
Hence this state is regarded as a Feshbach resonance.
In contrast to the SU(8) contact interaction model, the decay of the $I(J^{P})=0(3/2^{-})$ state into the $\bar{D}N$ channel does occur in the pion-exchange model, thanks to the tensor force. The bottom sector can be analyzed analogously by replacing $\bar{D}$ with $B$. The bound/resonant states obtained are summarized in Fig.~\ref{fig:Yamaguchi:2011qw}.
\begin{figure}[tbp]
\begin{center}
\includegraphics[width=11cm,bb=0 0 548 289]{figs/4/dbarNpFig11-700.pdf}
\caption{Energy levels of $\bar{D}N$-$\bar{D}^{\ast}N$ bound/resonant states (left) and their counterpart in the bottom sector (right) in the pion-exchange model~\cite{Yamaguchi:2011qw}. }
\label{fig:Yamaguchi:2011qw}
\end{center}
\end{figure}
The $I(J^{P})=0(1/2^{-})$ and $I(J^{P})=0(3/2^{-})$ states become degenerate in mass in the heavy quark limit~\cite{Yasui:2013vca,Yamaguchi:2014era}.
To see this, let us rewrite the Hamiltonian by transforming from the particle basis to the spin-complex basis.
Via spin rearrangement, we rewrite $NP^{(\ast)}$ with $P=(q\bar{Q})_{\mathrm{spin}\,0}$ and $P^{\ast}=(q\bar{Q})_{\mathrm{spin}\,1}$ in the form $[Nq]\bar{Q}$ with the spin-complex $[Nq]$, and define the spin-complex wave functions as
\begin{align}
\left\{ | [Nq]^{(0,\mathrm S)}_{0^+} \, \bar{Q} \rangle_{1/2^-}, | [Nq]^{(1, {\mathrm S})}_{1^+} \bar{Q} \rangle_{1/2^-}, | [Nq]^{(1,{\mathrm D})}_{1^+} \bar{Q} \rangle_{1/2^-} \right\},
\end{align}
and
\begin{align}
\left\{ | [Nq]^{(1, {\mathrm S})}_{1^+} \bar{Q} \rangle_{3/2^-}, | [Nq]^{(1, {\mathrm D})}_{1^+} \bar{Q} \rangle_{3/2^-}, | [Nq]^{(0, {\mathrm D})}_{2^+} \bar{Q} \rangle_{3/2^-}, | [Nq]^{(1,{\mathrm D})}_{2^+} \bar{Q} \rangle_{3/2^-} \right\},
\end{align}
respectively, for $1/2^{-}$ and $3/2^{-}$.
The notation $[Nq]^{(S,L)}_{j^{\cal P}}$ denotes the spin-complex composed of a nucleon and a quark (in $P^{(\ast)}$ meson) with total spin $S$, angular momentum $L$, and total angular momentum and parity $j^{\cal P}$.
The particle basis $\{ |P^{(\ast)}N(^{2S+1}L_{J})\rangle \}$ and the spin-complex basis $\{ | [Nq]^{(S,L)}_{j^{\cal P}} \bar{Q} \rangle_{J^{P}} \}$ are related by a unitary matrix, $U$ for $1/2^-$ and $U'$ for $3/2^-$:
\begin{align}
\left(
\begin{array}{c}
| PN(^{2}{\mathrm S}_{1/2}) \rangle \\
| P^{\ast}N(^{2}{\mathrm S}_{1/2}) \rangle \\
| P^{\ast}N(^{4}{\mathrm D}_{1/2}) \rangle
\end{array}
\right) = U
\left(
\begin{array}{c}
| [Nq]^{(0,\mathrm S)}_{0} \, \bar{Q} \rangle_{1/2^-} \\
| [Nq]^{(1, {\mathrm S})}_{1} \bar{Q} \rangle_{1/2^-} \\
| [Nq]^{(1,{\mathrm D})}_{1} \bar{Q} \rangle_{1/2^-}
\end{array}
\right),
\hspace{0.5em}
\mathrm{with}
\hspace{0.5em}
U=
\left(
\begin{array}{ccc}
-\frac{1}{2} & \frac{\sqrt{3}}{2} & 0 \\
\frac{\sqrt{3}}{2} & \frac{1}{2} & 0 \\
0 & 0 & -1
\end{array}
\right),
\end{align}
for $1/2^{-}$, and
\begin{align}
\left(
\begin{array}{c}
| PN(^{2}{\mathrm D}_{3/2}) \rangle \\
| P^{\ast}N(^{4}{\mathrm S}_{3/2}) \rangle \\
| P^{\ast}N(^{4}{\mathrm D}_{3/2}) \rangle \\
| P^{\ast}N(^{2}{\mathrm D}_{3/2}) \rangle
\end{array}
\right) = U'
\left(
\begin{array}{c}
| [Nq]^{(1, {\mathrm S})}_{1} \bar{Q} \rangle_{3/2^-} \\
| [Nq]^{(1, {\mathrm D})}_{1} \bar{Q} \rangle_{3/2^-} \\
| [Nq]^{(0, {\mathrm D})}_{2} \bar{Q} \rangle_{3/2^-} \\
| [Nq]^{(1, {\mathrm D})}_{2} \bar{Q} \rangle_{3/2^-}
\end{array}
\right),
\hspace{0.5em}
\mathrm{with}
\hspace{0.5em}
U'=
\left(
\begin{array}{cccc}
0 & \frac{\sqrt{6}}{4} & \frac{1}{2} & \frac{\sqrt{6}}{4} \\
1 & 0 & 0 & 0 \\
0 & \frac{1}{\sqrt{2}} & 0 & -\frac{1}{\sqrt{2}} \\
0 & \frac{1}{2\sqrt{2}} & - \frac{\sqrt{3}}{2} & \frac{1}{2\sqrt{2}}
\end{array}
\right),
\end{align}
for $3/2^{-}$.
We note that the $s$-wave channels, $PN(^{2}\mathrm{S}_{1/2})$, $P^{\ast}N(^{2}\mathrm{S}_{1/2})$ and $P^{\ast}N(^{4}\mathrm{S}_{3/2})$, are the ones discussed in the contact interaction model with SU(8) symmetry.
Let us rewrite the Hamiltonians $\tilde{H}_{1/2^-}$ and $\tilde{H}_{3/2^-}$, which are defined by Eqs.~(\ref{eq:OPEP_DbarN_1/2}) and (\ref{eq:OPEP_DbarN_3/2}) by setting $\Delta M=0$ in the heavy quark limit, in terms of the spin-complex basis.
They are transformed as
\begin{align}
U^{-1} \tilde{H}_{1/2^-} U
&=
\left(
\begin{array}{c|cc}
K_{0} \!-\! 3\,{C} & 0 & 0 \\
\hline
0 & K_{0} \!+\! {C} & -2\sqrt{2} \,{T} \\
0 & -2\sqrt{2} \,{T} & K_{2} \!+\! ({C} \!-\! 2\,{T})
\end{array}
\right) \nonumber \\
&=
{\mathrm{diag}}( \tilde{H}_{1/2^-}^{(0^+)},\tilde{H}_{1/2^-}^{(1^+)} ), \\
U'^{-1} \tilde{H}_{3/2^-} U'
&=
\left(
\begin{array}{cc|cc}
K_{0} \!+\! {C} & 2\sqrt{2}\,{T} & 0 & 0 \\
2\sqrt{2}\,{T} & K_{2} \!+\! ({C} \!-\! 2\,{T}) & 0 & 0 \\
\hline
0 & 0 & K_{2} \!-\! 3\,{C} & 0 \\
0 & 0 & 0 & K_{2} \!+\! ({C} \!+\! 2\,{T})
\end{array}
\right) \nonumber \\
&=
{\mathrm{diag}}( \tilde{H}_{3/2^-}^{(1^+)},\tilde{H}_{3/2^-}^{(2^+)} ),
\end{align}
with the block-diagonal forms.\footnote{
We notice that the off-diagonal terms in $\tilde{H}^{(2^{+})}_{3/2^{-}}$ vanish because the one-pion-exchange potential is used.
When other interactions are employed,
off-diagonal terms in $H_{J^P}^{(j^{\cal P})}$ may exist in general.
In our model space, we consider only $Nq$ for the spin-complex.}
Even when other components such as $\Delta q$ and $N \pi q$ are considered,
the Hamiltonians are block-diagonalized in the spin-complex basis,
as long as the heavy quark symmetry is respected.
From the transformed Hamiltonians in the spin-complex basis,
we immediately find that $\tilde{H}_{1/2^-}^{(1^+)}$ and $\tilde{H}_{3/2^-}^{(1^+)}$ have exactly the same eigenvalues.
This is natural because the same spin-complex with $1^{+}$ is contained in both the $1/2^-$ and $3/2^-$ states.
Hence we confirm that the $1/2^{-}$ and $3/2^{-}$ states should be degenerate in the heavy quark limit, consistent with the small mass difference between the $1/2^{-}$ and $3/2^{-}$ states shown in Fig.~\ref{fig:Yamaguchi:2011qw}.
The small mass differences in the other channels can be explained analogously.
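The degeneracy argument can be checked numerically: treating the radial operators $K_{0}$, $K_{2}$, ${C}$ and ${T}$ as placeholder numbers (the channel-space algebra does not depend on their values), one verifies that $U$ and $U'$ are orthogonal, that they block-diagonalize the $\Delta M=0$ Hamiltonians of Eqs.~(\ref{eq:OPEP_DbarN_1/2}) and (\ref{eq:OPEP_DbarN_3/2}), and that the two $1^{+}$ blocks share the same eigenvalues (their off-diagonal entries differ only by an overall sign, a phase convention). A minimal sketch:

```python
import numpy as np

s2, s3, s6 = np.sqrt(2.0), np.sqrt(3.0), np.sqrt(6.0)

# Placeholder numbers for the radial operators K_0, K_2, C, T
# (arbitrary; the channel-space algebra holds for any values).
K0, K2, C, T = 1.1, 2.3, 0.7, 0.31

# Delta M = 0 Hamiltonians in the particle basis
H12 = np.array([[K0,    s3*C,     -s6*T],
                [s3*C,  K0 - 2*C, -s2*T],
                [-s6*T, -s2*T,    K2 + C - 2*T]])
H32 = np.array([[K2,    s3*T,   -s3*T,  s3*C],
                [s3*T,  K0 + C,  2*T,   T],
                [-s3*T, 2*T,    K2 + C, -T],
                [s3*C,  T,      -T,     K2 - 2*C]])

# Transformation matrices to the spin-complex basis
U = np.array([[-0.5, s3/2, 0.0],
              [s3/2, 0.5,  0.0],
              [0.0,  0.0, -1.0]])
Up = np.array([[0.0, s6/4,      0.5,   s6/4],
               [1.0, 0.0,       0.0,   0.0],
               [0.0, 1/s2,      0.0,  -1/s2],
               [0.0, 1/(2*s2), -s3/2,  1/(2*s2)]])

B12 = U.T @ H12 @ U     # U^{-1} = U^T (orthogonal)
B32 = Up.T @ H32 @ Up

print(np.round(B12, 10))   # diag(K0 - 3C) plus the 1^+ 2x2 block
print(np.round(B32, 10))   # the 1^+ 2x2 block plus diag(K2 - 3C, K2 + C + 2T)
```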
The non-exotic $DN$ sector is studied in Ref.~\cite{Yamaguchi:2013ty}. To concentrate on the molecular states around the $DN$ threshold, the lower-energy $\pi\Sigma_{c}$ channels are not included. In the $I(J^{P})=0(1/2^{-})$ sector, the pion exchange gives a bound state about 14 MeV below the $DN$ threshold. The binding energy further increases to $\sim 80$ MeV when the $\rho$ and $\omega$ exchanges are included. No bound state is found in the $I=1$ sector. In contrast to the $\bar{D}N$ sector, the correspondence with the results of other models is not clear (see Table~\ref{tbl:DNbound}), partly because of differences in the model space.
\paragraph{Chiral quark models}
\paragraph{Chiral quark models}
In the chiral constituent quark models, the short-range part of the hadron interactions can be studied. This approach has been successfully applied to the nuclear forces~\cite{Valcarce:2005em} and, in particular, to the $\Delta \Delta$ channel~\cite{Garcilazo:1997mf,Mota:1999qp,Valcarce:2001in}, which is related to the resonance structure in the $pn\rightarrow d\pi^{0}\pi^{0}$ reaction~\cite{Bashkanov:2008ih} known as the Abashian-Booth-Crowe (ABC) effect~\cite{Abashian:1960zz,Booth:1961zz}.
In Refs.~\cite{Carames:2012bd,Fontoura:2012mz,Carames:2014cna}, the $\bar{D}N$ scattering is studied in the chiral quark models. The $\bar{D}N$ interaction is constructed from the short-range quark exchange mechanism together with the long-range meson exchanges. In the $(I,J^{P})=(0,1/2^{-})$ sector, the quark exchange works repulsively. As a consequence, no bound state is found in the $(I,J^{P})=(0,1/2^{-})$ channel.
If the quark exchange is turned off, a bound state is found as in the pion-exchange model~\cite{Yasui:2009bz,Yamaguchi:2011qw}.
The repulsion from the quark exchange thus partly cancels the attraction of the pion exchange, although the total interaction remains attractive in the full model. On the other hand, the quark exchange is attractive in the $(I,J^{P})=(1,5/2^{-})$ sector. This attraction supports a $\Delta \bar{D}^{\ast}$ bound state with a binding energy of 3.87 MeV. The state can be observed as a $d$-wave resonance in the $N\bar{D}$ system. The binding mechanism of this high-spin state is similar to that of the spin-3 $\Delta\Delta$ resonance~\cite{Carames:2014cna}. The appearance of the shallow bound state in the $(I,J^{P})=(1,5/2^{-})$ sector is in clear contrast with the SU(8) contact interaction model, where a bound state is found about 100 MeV below the threshold.
\paragraph{Comparison of different models around threshold}
\paragraph{Comparison of different models around threshold}
We have introduced several different approaches to the $DN/\bar{D}N$ interaction. Before closing this section, we compare the predictions of the different models in the low-energy limit, using the scattering lengths of the $DN/\bar{D}N$ system. The scattering length is defined by the two-body amplitude at the threshold energy. It is an observable quantity, accessible in principle in lattice QCD.
Moreover, the leading contribution to the mass of the $D/\bar{D}$ meson in the nuclear medium can be estimated from the scattering length.
To avoid confusion, let us recall the properties of the scattering length. The convention used in this paper is:
\begin{align}
a_{DN/\bar{D}N}
&=
\lim_{k\to 0}f_{DN/\bar{D}N}(k) ,
\label{eq:scatteringlength}
\end{align}
where $f_{DN/\bar{D}N}(k)$ is the elastic scattering amplitude of the $DN/\bar{D}N$ channel with momentum $k$.\footnote{In the $k\to 0$ limit, all partial waves other than $l=0$ vanish, and the scattering amplitude is independent of the scattering angle. Our convention is commonly used in hadron physics (meson-meson/meson-baryon scattering), while the convention with the opposite sign is often used in nuclear and atomic physics.} Because the total cross section at threshold is given by $4\pi a_{DN/\bar{D}N}^{2}$, the scattering length determines the strength of the two-body scattering in the zero-momentum limit. In this convention, a positive scattering length represents attractive scattering at threshold (increasing phase shift), and a negative scattering length corresponds to repulsive scattering (decreasing phase shift). We note that the attractive/repulsive nature of the scattering length does not always reflect the property of the potential: if a shallow bound state is generated by an attractive potential, the scattering length is large and negative. In other words, an attractive (positive) scattering length indicates the absence of a shallow bound state near the threshold. While the $\bar{D}N$ scattering length is real, the $DN$ scattering length has an imaginary part, which reflects inelastic transitions to the open $\pi\Sigma_{c}$ and $\pi\Lambda_{c}$ channels. For reference, a recent determination of the $\pi N$ scattering lengths gives $a_{\pi N}^{I=1/2}=169.8\times 10^{-3}M_{\pi}^{-1}=0.243$ fm and $a_{\pi N}^{I=3/2}=-86.3\times 10^{-3}M_{\pi}^{-1}=-0.123$ fm~\cite{Hoferichter:2015hva}. As an example of a channel with a near-threshold quasi-bound state, the $\bar{K}N$ scattering length is found to be $a_{\bar{K}N}^{I=0}=-1.39+i\ 0.85$ fm~\cite{Ikeda:2011pi,Ikeda:2012au,Kamiya:2015aea}, where the $\Lambda(1405)$ resonance lies below the $\bar{K}N$ threshold.
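These sign conventions can be illustrated with an attractive square well, for which the threshold amplitude is known in closed form. The sketch below (hypothetical units with $\hbar=2\mu=1$, lengths in fm) shows that a weak attraction gives a small positive $a$, while an attraction just strong enough to bind gives a large negative $a$:

```python
import numpy as np

# Threshold amplitude of an attractive square well (depth V0, range R)
# in the convention a = lim_{k->0} tan(delta_0)/k used in the text.
# Hypothetical illustrative units: hbar = 2*mu = 1, lengths in fm.
def scattering_length(V0, R=1.0):
    K = np.sqrt(V0)                  # interior wave number at threshold
    return np.tan(K * R) / K - R

a_weak  = scattering_length(1.0)                 # K*R < pi/2: no bound state
a_bound = scattering_length((0.55 * np.pi)**2)   # K*R just above pi/2: shallow bound state
print(a_weak, a_bound)   # small positive vs. large negative
```

In the weak-attraction limit the formula reduces to $a\simeq V_{0}R^{3}/3>0$, while just beyond the binding threshold $KR=\pi/2$ the scattering length flips to a large negative value, exactly the behavior described above.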
We summarize the $DN$ scattering lengths in the SU(4) contact interaction models~\cite{Hofmann:2005sw,Mizutani:2006vq}, the SU(8) contact interaction model~\cite{GarciaRecio:2008dp}, and the meson exchange model~\cite{Haidenbauer:2010ch,Haidenbauer:2008ff} in Table~\ref{tbl:DNslength}. In the $I=0$ sector, all models give a moderately repulsive scattering length, except for the SU(8) contact interaction model which predicts a very weakly attractive value. It is remarkable that the imaginary part is very small in all cases, indicating that the transition to the $\pi\Sigma_{c}$ channel is suppressed. In the $I=1$ sector, the results show a larger spread. The imaginary part is very small in Refs.~\cite{Hofmann:2005sw,GarciaRecio:2008dp}, while it is sizable in Refs.~\cite{Mizutani:2006vq,Haidenbauer:2010ch}. The large negative $DN(I=1)$ scattering length of the meson exchange model~\cite{Haidenbauer:2010ch} is a consequence of the shallow quasi-bound state which corresponds to $\Sigma_{c}(2800)$.
\begin{table}[tbp]
\caption{Scattering lengths in the $DN$ channel in various models. The isospin averaged scattering length $a_{D}$ is defined in Eq.~\eqref{eq:averagedslength}. All numbers are given in units of fm. The negative (positive) scattering length corresponds to the repulsive (attractive) scattering at threshold. When a shallow bound state exists, the scattering length becomes negative with a large magnitude. The results of Ref.~\cite{Hofmann:2005sw} are given in Ref.~\cite{Lutz:2005vx} where the imaginary parts are found to be negligible. The results of Refs.~\cite{Mizutani:2006vq,GarciaRecio:2008dp} are shown in Ref.~\cite{Haidenbauer:2010ch}. }
\begin{center}
\begin{tabular}{|l|lll|}
\hline
Model & $a_{DN}^{I=0}$ & $a_{DN}^{I=1}$ & $a_{D}$\\
\hline
SU(4) contact~\cite{Hofmann:2005sw}
& $-0.43$
& $-0.41$
& $-0.42$ \\
SU(4) contact~\cite{Mizutani:2006vq}
& $-0.57+i\ 0.001$
& $-1.47+i\ 0.65$
& $-1.25+i\ 0.49$ \\
SU(8) contact~\cite{GarciaRecio:2008dp}
& $0.004+i\ 0.002$
& $0.33+i\ 0.05$
& $0.29+i\ 0.038$ \\
Meson exchange~\cite{Haidenbauer:2010ch}
& $-0.41+i\ 0.04$
& $-2.07+i\ 0.57$
& $-1.66+i\ 0.44$ \\
\hline
\end{tabular}
\end{center}
\label{tbl:DNslength}
\end{table}%
As shown in Table~\ref{tbl:DbarNslength}, the $\bar{D}N$ scattering lengths are calculated in the SU(4) contact interaction model~\cite{Hofmann:2005sw}, the meson exchange model~\cite{Haidenbauer:2007jq}, the pion-exchange model~\cite{Yamaguchi:2011xb}, and the chiral quark model~\cite{Fontoura:2012mz}. The scattering lengths are in general not very large and are comparable with those in the $\pi N$ sector, except for the pion-exchange model, where the $I=0$ scattering length is enhanced by the near-threshold bound state (see Table~\ref{tbl:DbarNbound}). We comment that the pion-exchange model without the $\rho$ and $\omega$ exchanges gives an attractive scattering length ($a^{I=1}_{\bar{D}N}=0.22$ fm). However, the $\rho$ and $\omega$ exchanges in the diagonal component lead to the repulsive scattering length shown in Table~\ref{tbl:DbarNslength}.
\begin{table}[tbp]
\caption{Scattering lengths in the $\bar{D}N$ channel in various models.
The isospin averaged scattering length $a_{\bar{D}}$ is defined in Eq.~\eqref{eq:averagedslength}. All numbers are given in units of fm. The negative (positive) scattering length corresponds to the repulsive (attractive) scattering at threshold. When a shallow bound state exists, the scattering length becomes negative with a large magnitude. Results of Ref.~\cite{Yamaguchi:2011xb} are those in the $\pi\rho\omega$ model. The results of Ref.~\cite{Hofmann:2005sw} are given in Ref.~\cite{Lutz:2005vx}.}
\begin{center}
\begin{tabular}{|l|lll|}
\hline
Model & $a_{\bar{D}N}^{I=0}$ & $a_{\bar{D}N}^{I=1}$ & $a_{\bar{D}}$ \\
\hline
SU(4) contact~\cite{Hofmann:2005sw}
& $-0.16$
& $-0.26$
& $-0.24$ \\
Meson exchange~\cite{Haidenbauer:2007jq}
& $0.07$
& $-0.45$
& $-0.32$ \\
Pion exchange~\cite{Yamaguchi:2011xb}
& $-4.38$ & $-0.07$ & $-1.15$ \\
Chiral quark model~\cite{Fontoura:2012mz} & $0.03$--$0.16$ & $0.20$--$0.25$
& $0.16$--$0.23$ \\
\hline
\end{tabular}
\end{center}
\label{tbl:DbarNslength}
\end{table}%
The $DN/\bar{D}N$ scattering length can be used to estimate the mass shift of the $D/\bar{D}$ meson in the nuclear medium. Under the linear density approximation~\cite{Dover:1971hr}, the mass shift of the $D/\bar{D}$ meson in symmetric nuclear matter is given by
\begin{align}
\Delta m_{D/\bar{D}} = -2\pi \frac{M_{N}+m_{D}}{M_{N}m_{D}} \rho_{N} a_{D/\bar{D}},
\label{eq:scattering_massshift}
\end{align}
with the nucleon mass $M_{N}$, the $D$ meson mass $m_{D}$, and the normal nuclear matter density $\rho_{N}$. The isospin averaged scattering length is defined as
\begin{align}
a_{D/\bar{D}} =\frac{a^{I=0}_{DN/\bar{D}N}+3a^{I=1}_{DN/\bar{D}N}}{4}
\label{eq:averagedslength} .
\end{align}
We see that an attractive scattering length $a_{D/\bar{D}}>0$ (a repulsive scattering length $a_{D/\bar{D}}<0$) induces a decrease (increase) of the $D/\bar{D}$ mass in nuclear matter. In Tables~\ref{tbl:DNslength} and \ref{tbl:DbarNslength}, we show the results for the averaged scattering lengths~\eqref{eq:averagedslength}. We note that the scattering length in the $I=1$ channel is particularly important for the in-medium property of the $D/\bar{D}$ meson, because of its larger weight in Eq.~\eqref{eq:averagedslength}.
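As a numerical illustration of Eqs.~\eqref{eq:scattering_massshift} and \eqref{eq:averagedslength}, the sketch below reproduces the isospin averaging for the SU(4) contact model entry of Table~\ref{tbl:DNslength} and converts it to a linear-density mass shift (representative masses and $\rho_{N}=\rho_{0}$ are assumed):

```python
import math

hbarc = 197.327          # MeV fm
M_N, m_D = 938.9, 1867.2 # representative nucleon and D meson masses (MeV)
rho0 = 0.17              # normal nuclear matter density (fm^-3)

def averaged(a0, a1):
    """Isospin-averaged scattering length (a^{I=0} + 3 a^{I=1}) / 4."""
    return (a0 + 3.0 * a1) / 4.0

def mass_shift(a, rho=rho0):
    """Linear-density mass shift in MeV; hbarc^2 converts fm^-2 to MeV."""
    return -2.0 * math.pi * hbarc**2 * (M_N + m_D) / (M_N * m_D) * rho * a

# SU(4) contact model entry of the table: a0 = -0.43 fm, a1 = -0.41 fm
a_D = averaged(-0.43, -0.41)
print(a_D, mass_shift(a_D))   # ~ -0.42 fm; repulsive -> mass increase
```

The repulsive averaged scattering length thus translates into a mass increase of a few tens of MeV at normal nuclear matter density, while an attractive one (e.g. the SU(8) value) would give a mass decrease.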
We emphasize, however, that Eq.~\eqref{eq:scattering_massshift} is a simple estimate; a more detailed analysis of the mass shift will be discussed in section~\ref{sec:Dinmatter}. In Ref.~\cite{Hayashigaki:2000es}, the two-body scattering length is evaluated by the QCD sum rules in order to study the in-medium modification of the $D$ meson mass. The averaged scattering length of $D$ and $\bar{D}$ is estimated as $(a_{D}+a_{\bar{D}})/2 = 0.72 \pm 0.12 \hspace{0.5em} \mathrm{fm}$. This suggests a decrease of the averaged mass of $D$ and $\bar{D}$ by about $48 \pm 8$ MeV, while later studies indicate an increase of the $D$ meson masses. A thorough discussion of the mass shift will be given in section~\ref{sec:Dinmatter}.
\subsection{Few-body systems}
We have seen several studies with an attractive $DN/\bar{D}N$ interaction, some of which predict a (quasi-)bound state below the threshold. These observations suggest the possible formation of a bound state of $D/\bar{D}$ with a few nucleons. If such a state exists, detailed investigations of the few-body systems will provide another clue to understanding the two-body interaction, as in the case of hypernuclei and the $\Lambda N$ interaction.
In Ref.~\cite{Bayar:2012dd}, the $DNN$ three-body system is studied with the $DN$ interaction of Ref.~\cite{Mizutani:2006vq}, in which the $DN$ system has a quasi-bound state, $\Lambda_{c}(2595)$, in the $I=0$ channel. The three-body system is solved by two techniques: a variational calculation based on the Gaussian expansion method as explained in Sect.~\ref{sec:fewbody_systems}, and the fixed center approximation (FCA) to the Faddeev equation as developed in Refs.~\cite{Bayar:2011qj,Bayar:2012rk}. In both methods, a narrow quasi-bound state of $DNN$ is found around 3500 MeV in the $I(J^{P})=1/2(0^{-})$ channel. The $I(J^{P})=1/2(1^{-})$ channel is unbound with respect to the $\Lambda_{c}(2595)N$ threshold. The width from the two-body absorption process $DNN\to \Lambda_{c}N$ is evaluated in the FCA calculation and found to be several tens of MeV. By analyzing the wave function obtained in the variational calculation, it is found that the $DN(I=0)$ pair in the quasi-bound $DNN$ system has a structure similar to the $\Lambda_{c}(2595)$ in free space. This is a characteristic feature observed in the $\bar{K}NN$ quasi-bound state~\cite{Dote:2008in,Dote:2008hw}. The FCA approach is also applied to other three-body systems with the $D$ meson, such as the $NDK$, $\bar{K}DN$ and $ND\bar{D}$ systems~\cite{Xiao:2011rc}, showing the existence of several quasi-bound states.
The $\bar{D}NN$-$\bar{D}^{\ast}NN$ system is studied in Ref.~\cite{Yamaguchi:2013hsa}, which indicates the existence of bound and resonant states in the three-body system. The $\bar{D}N$-$\bar{D}^{\ast}N$ interaction is given by the pion-exchange potential~\cite{Yasui:2009bz,Yamaguchi:2011xb,Yamaguchi:2011qw}, and the Argonne $v_{8}'$ (AV8$'$) interaction~\cite{Pudliner:1997ck} is adopted for the $NN$ interaction.
The Argonne $v_{8}'$ interaction includes the tensor force explicitly, reflecting the essential role of the pion exchange in nuclei. As a result, in the $I(J^{P})=1/2(0^{-})$ channel, a bound state is found 5.2 MeV below the $\bar{D}NN$ threshold. In the $I(J^{P})=1/2(1^{-})$ channel, a resonance is found 111.2 MeV above the $\bar{D}NN$ threshold, with an 18.6 MeV decay width. As in the $\bar{D}N$-$\bar{D}^{\ast}N$ system, the tensor force plays an important role in the $\bar{D}NN$-$\bar{D}^{\ast}NN$ system: the binding energy mostly comes from the tensor force in the $\bar{D}N$-$\bar{D}^{\ast}N$ pair, while the central force, rather than the tensor force, is dominant in the $NN$ pair. The energy levels of the three-body $\bar{D}NN$-$\bar{D}^{\ast}NN$ system are summarized in Fig.~\ref{fig:threebody}, together with those of the $BNN$-$B^{*}NN$ system and the $PNN$-$P^{*}NN$ system which represents the $m_{Q}\to \infty$ limit. The $I(J^{P})=1/2(1^{-})$ resonance in the charm sector becomes degenerate with the bound $I(J^{P})=1/2(0^{-})$ state in the heavy quark limit. Thus, these states form a heavy quark spin doublet as a consequence of the heavy quark symmetry in the formulation~\cite{Yasui:2013vca,Yamaguchi:2014era}.
\begin{figure}[tbp]
\begin{center}
\includegraphics[width=9cm,bb=0 0 1459 1277]{figs/4/Energy-LevelPNNav8p_paper2.pdf}
\caption{Energy levels of $\bar{D}^{(\ast)}NN$, $B^{(\ast)}NN$ and $P^{(\ast)}NN$ with $I=1/2$ and $J^{P}=0^{-}$ and $1^{-}$ (solid lines)~\cite{Yamaguchi:2013hsa}. The complex energies for resonances are given as $E_{re}-i\Gamma/2$, where $E_{re}$ is a resonance energy and $\Gamma/2$ is a half decay width. Thresholds (subthresholds) are denoted by dashed (dash-dotted) lines.}
\label{fig:threebody}
\end{center}
\end{figure}
It is interesting to note that the lowest energy state has total spin $J=0$ in both the $DNN$ and $\bar{D}NN$ systems. In the dominant $s$-wave $DNN/\bar{D}NN$ component of the $J=0$ state, the two nucleons are combined into the ${}^{1}S_{0}$ state. On the other hand, in the $NN$ system without $D/\bar{D}$, the lowest energy state is the bound deuteron in the ${}^{3}S_{1}$ channel, not the unbound ${}^{1}S_{0}$ channel. This means that, by adding $D/\bar{D}$, the lowest energy configuration of the two-nucleon system changes from ${}^{3}S_{1}$ to ${}^{1}S_{0}$. The reason is attributed to the $DN/\bar{D}N$ attraction being stronger in the $I=0$ channel than in the $I=1$ channel. From the isospin decomposition, it is found that the $I(J^{P})=1/2(0^{-})$ channel has a larger fraction of the $I=0$ $DN/\bar{D}N$ pair than the $I(J^{P})=1/2(1^{-})$ channel~\cite{Bayar:2012dd}. This is analogous to the $\bar{K}NN$ system~\cite{Dote:2008in,Dote:2008hw}, which also favors the $I(J^{P})=1/2(0^{-})$ state as the ground state. In this way, the injection of $D/\bar{D}$ causes a \textit{structure transition} of the two-body correlation of the nucleons. A thorough investigation of the few-body systems will elucidate the properties of the hadronic interactions inside the system.
\subsection{Nuclear matter}\label{sec:Dinmatter}
In general, the nuclear medium can shift the mass of the heavy-light meson and broaden the width of its spectrum. In some cases, an additional mode arises from a particle-hole type excitation with the same quantum numbers as the heavy-light meson. All these effects are encoded in the in-medium spectral function $S_{D/\bar{D}}(q_{0},\vec{q};\rho)$, which is related to the in-medium propagator $D_{D/\bar{D}}(q_{0},\vec{q};\rho)$ and the in-medium self-energy $\Pi_{D/\bar{D}}(q_{0},\vec{q};\rho)$ as
\begin{align}
S_{D/\bar{D}}(q_{0},\vec{q};\rho) = -\frac{1}{\pi} \text{Im } D_{D/\bar{D}}(q_{0},\vec{q};\rho),
\label{se:spectralfunction}
\end{align}
and
\begin{align}
D_{D/\bar{D}}(q_{0},\vec{q};\rho)
=
\frac{1}{q_{0}^{2}-\vec{q}^{\,2}-m^{2}-\Pi_{D/\bar{D}}(q_{0},\vec{q};\rho)} ,
\label{eq:propagator}
\end{align}
where $q$ is the four-momentum of the hadron, $m$ is the mass of the meson in vacuum, and $\rho$ is the nuclear matter density. A useful quantity for applications to finite nuclei is the optical potential, which is also related to the self-energy as
\begin{align}
V_{\mathrm{opt}\ D/\bar{D}}(r,q^{0})
= \frac{1}{2q^{0}} \Pi_{D/\bar{D}}(q^{0},\vec{q}=\vec{0},\rho(r)),
\label{eq:opticalpotential}
\end{align}
where $\rho(r)$ is the density of the nucleus under the local density approximation. Thus, the in-medium self-energy $\Pi_{D/\bar{D}}(q_{0},\vec{q};\rho)$ is the essential quantity, and it is evaluated in various approaches. For instance, $\Pi_{D/\bar{D}}(q_{0},\vec{q};\rho)$ can be evaluated from the scalar and vector mean fields in nuclear matter. Hadronic interaction models and QCD sum rules evaluate $\Pi_{D/\bar{D}}(q_{0},\vec{q};\rho)$ through the forward scattering amplitude of the $DN/\bar{D}N$ system, which is related to the self-energy at low densities [see Eq.~\eqref{eq:scattering_massshift}]. In the following, we overview the results of various approaches for the in-medium properties of the heavy-light mesons. Hereafter, the normal nuclear matter density is denoted as $\rho_{0}\sim 0.17 \text{ fm}^{-3}$.
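For orientation, the sketch below evaluates Eqs.~\eqref{se:spectralfunction} and \eqref{eq:propagator} for a schematic, energy-independent self-energy $\Pi = 2m\,\Delta m - i\,m\Gamma$ (hypothetical numbers): the spectral peak then sits near $\sqrt{m^{2}+\mathrm{Re}\,\Pi}\simeq m+\Delta m$ with width $\sim\Gamma$:

```python
import numpy as np

# Spectral function for a schematic, energy-independent self-energy
# Pi = 2 m dm - i m Gamma (hypothetical numbers; a realistic Pi
# depends on q0, q and rho).
m, dm, Gamma = 1867.0, 30.0, 40.0          # MeV
Pi = 2.0 * m * dm - 1j * m * Gamma

def spectral(q0, q=0.0):
    D = 1.0 / (q0**2 - q**2 - m**2 - Pi)   # in-medium propagator
    return -np.imag(D) / np.pi             # S = -(1/pi) Im D

q0 = np.linspace(m - 200.0, m + 200.0, 40001)
peak = q0[np.argmax(spectral(q0))]
print(peak)   # near sqrt(m^2 + Re Pi), i.e. roughly m + dm
```

A positive (negative) $\mathrm{Re}\,\Pi$ pushes the peak up (down) in energy, which is the mass shift, while $\mathrm{Im}\,\Pi$ controls the broadening.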
\paragraph{Quark-meson coupling (QMC) model}
The first study of the $\bar{D}/D$ meson in nuclei was carried out in Ref.~\cite{Tsushima:1998ru} based on the quark-meson coupling (QMC) model~\cite{Guichon:1987jp}, which can also describe the pion emission decays of the charmed baryons~\cite{Ivanov:1998qe}. Here the QMC model is used to study the modification of charm hadrons in nuclear matter. It is noted that the scalar meson ($\sigma$) exchange plays an important role in the relation between the modification of the hadron masses and the change of the chiral condensate in the nuclear medium. The wave functions of the light quarks ($u$, $d$, $\bar{u}$ and $\bar{d}$) and the charm quarks ($c$ and $\bar{c}$) inside the bag ($|\vec{x}\,|\le R$ with the bag radius $R$) follow the Dirac equations
\begin{align}
\left( i\gamma \!\cdot\! \partial - \left( m_{q}-V_{\sigma}^{q}(\vec{x}\,) \right) - \gamma^{0} \left( V_{\omega}^{q}(\vec{x}\,) \pm
\frac{1}{2} V_{\rho}^{q}(\vec{x}\,) \right) \right)
\psi_{u,d}(x)&=0, \\
\left( i\gamma \!\cdot\! \partial - \left( m_{q}-V_{\sigma}^{q}(\vec{x}\,) \right) + \gamma^{0} \left( V_{\omega}^{q}(\vec{x}\,) \pm \frac{1}{2} V_{\rho}^{q}(\vec{x}\,) \right) \right)
\psi_{\bar{u},\bar{d}}(x)&=0, \\
\left( i\gamma \!\cdot\! \partial - m_{c} \right) \psi_{c,\bar{c}}(x)
&=0,
\end{align}
where
\begin{align}
V_{\sigma}^{q}(\vec{x}\,) = g_{\sigma}^{q} \sigma(\vec{x}\,), \quad
V_{\omega}^{q}(\vec{x}\,) = g_{\omega}^{q} \omega(\vec{x}\,), \quad
V_{\rho}^{q}(\vec{x}\,) = g_{\rho}^{q} \rho(\vec{x}\,),
\end{align}
and $g_{M}^{q}$ is the coupling constant of the meson $M$ and the light quark $q$. The mean field of each meson, $M(\vec{r}\,)$, is obtained by self-consistently solving the set of nonlinear equations for the mesons and the nucleon (the bag including quarks), as in the Serot-Walecka model~\cite{Serot:1984ey,Serot:1997xg} (see Eqs.~(23)-(30) in Ref.~\cite{Saito:1996sf}). The Coulomb interaction is included in this procedure. While the coupling constants are basically determined by the nuclear matter properties, it is found that the $\omega$ meson potential should be phenomenologically modified as $\tilde{V}_{\omega}^{q}=(1.4)^{2}V_{\omega}^{q}$ for the $K^{+}$ meson~\cite{Tsushima:1997df}. In Ref.~\cite{Tsushima:1998ru}, both $\tilde{V}_{\omega}^{q}$ and $V_{\omega}^{q}$ are used to examine the $D$ meson bound states in nuclei.
In the left panel of Fig.~\ref{fig:Tsushima:1998ru}, the mean-field potential with $V_{\omega}^{q}$ ($\tilde{V}_{\omega}^{q}$) for the $D^{-}$ meson in $^{208}\mathrm{Pb}$ is shown by the dotted (dashed) line. The attractive nature of the potential with $V_{\omega}^{q}$ is evident. The modification of the $\omega$ potential, $V_{\omega}^{q}\to \tilde{V}_{\omega}^{q}$, largely cancels the attraction at small $r$, but an attractive pocket still remains around $r\sim 8$ fm. By solving the Schr\"odinger-like equation for the $D^{-}$ meson, a bound state is found with 35 MeV (10 MeV) binding for the $V_{\omega}^{q}$ ($\tilde{V}_{\omega}^{q}$) case. The corresponding $D^{-}$ wave functions are shown in the right panel of Fig.~\ref{fig:Tsushima:1998ru}. It should be noted that the binding is caused mainly by the Coulomb interaction. In fact, the neutral $\bar{D}^{0}$ is unbound for the potential with $\tilde{V}_{\omega}^{q}$, which indicates that the $\bar{D}$ meson feels repulsion in the nuclear medium. In other words, the $D^{-}$ meson can be bound in a heavy nucleus by the Coulomb force, even though the strong interaction is repulsive as a whole. In contrast to the $\bar{D}$ meson, the $\omega$ potential acts with the opposite sign on the $D$ meson, leading to a binding energy of the $D^{0}$ meson of about 100 MeV. Note, however, that the channel couplings to the various decay modes, such as $DN\rightarrow Y_{c}\pi$, should be considered in the $D^{0}$ meson case.
\begin{figure}[tbp]
\begin{minipage}{0.5\hsize}
\centering
\includegraphics[scale=0.4,bb=0 0 488 374]{figs/4/dmespot.pdf}
\end{minipage}
\begin{minipage}{0.5\hsize}
\centering
\includegraphics[scale=0.37,bb=0 0 564 459]{figs/4/dmeswf.pdf}
\end{minipage}
\caption{The mean-field potentials (left) and the wave functions (right) of the bound $D^{-}$ ($\bar{D}$) meson in $^{208}\mathrm{Pb}$ in the QMC model \cite{Tsushima:1998ru}. In the left panel, the solid line represents the nuclear density distribution, the dotted line stands for the sum of the scalar, vector and Coulomb potentials for the $D^{-}$ meson, and the dashed line shows the total potential with the modified vector potential $\tilde{V}_{\omega}^{q}$. In the right panel, the solid line represents the nuclear density distribution, the long-dashed (short-dashed) line represents the wave function of the $D^{-}$ meson in the 1s (1p) level, and the dashed-dotted (dotted) line shows the wave function of the $D^{-}$ meson in the 1s (1p) level with the modified vector potential $\tilde{V}_{\omega}^{q}$.}
\label{fig:Tsushima:1998ru}
\end{figure}
The in-medium mass of the $D/\bar{D}$ meson is calculated in Ref.~\cite{Sibirtsev:1999js}. The modified $\tilde{V}_{\omega}^{q}$ is used for the $\omega$ potential. As shown in Fig.~\ref{fig:Sibirtsev:1999js}, the $D$ meson ($\bar{D}$ meson) feels an attractive (repulsive) interaction in the nuclear medium. These behaviors are similar to those of the $\bar{K}/K$ mesons studied in Ref.~\cite{Tsushima:1997df}. We note that the mixing of $D$ and $D^{*}$ is not included.
\begin{figure}[tbp]
\begin{center}
\includegraphics[scale=0.3,bb=0 0 529 621]{figs/4/comic7.pdf}
\caption{Masses of $D^{-}$ ($\bar{D}$) meson and $D^{+}$ ($D$) meson in the QMC model~\cite{Sibirtsev:1999js}.}
\label{fig:Sibirtsev:1999js}
\end{center}
\end{figure}
\paragraph{Nuclear mean-field approach}
In Refs.~\cite{Mishra:2003se,Mishra:2008cd,Kumar:2010gb,Kumar:2011ff}, the in-medium $D/\bar{D}$ meson is studied by the SU(4) generalization of the SU(3) mean-field approach~\cite{Mishra:2008kg,Mishra:2008dj}. The study is successively generalized to include the isospin asymmetry~\cite{Mishra:2008cd}, finite temperature~\cite{Kumar:2010gb}, and the strange hadronic matter environment~\cite{Kumar:2011ff}. The interaction Lagrangian of the $D/\bar{D}$ meson consists of the direct coupling to the nucleon (the Weinberg-Tomozawa vector-type term and the scalar-type term) and the coupling to the scalar-isoscalar mesons [$\sigma=f_{0}(600)$, $\zeta=f_{0}(980)$] and the scalar-isovector meson [$\delta=a_{0}(980)$].\footnote{While the coupling to the vector $\omega$ meson is considered in Ref.~\cite{Mishra:2003se}, it is omitted in the later studies~\cite{Mishra:2008cd,Kumar:2010gb,Kumar:2011ff} to avoid the double counting with the Weinberg-Tomozawa term.} The $D/\bar{D}$ meson self-energy $\Pi(\omega,|\vec{k}\,|)$ is evaluated by the mean-field approximation,\footnote{This self-energy $\Pi(\omega,|\vec{k}\,|)$ has an opposite sign from $\Pi_{D/\bar{D}}(q_{0},\vec{q};\rho)$ which is used in the other part of this subsection.} i.e., the nucleon and scalar meson fields are replaced by their expectation values in nuclear matter. The effect of the gluon condensate is considered through the trace anomaly with the ``dilaton'' field $\chi$~\cite{Cohen:1991nk,Kumar:2010hs}:
\begin{align}
\left\langle \frac{\beta_{\mathrm{QCD}}}{2g} G^{a}_{\mu\nu} G^{\mu\nu a} \right\rangle
+ \sum_{i} m_{i} \bar{q}_{i}q_{i}
=-(1-d) \chi^{4},
\end{align}
where the ``dilaton'' field couples to the scalar mesons. In Refs.~\cite{Mishra:2008cd,Kumar:2010gb,Kumar:2011ff}, the in-medium dispersion relation is obtained by solving the pole condition
\begin{align}
-\omega^{2} + \vec{k}^{\,2} + m_{D}^{2} - \Pi(\omega,|\vec{k}\,|) = 0 .
\end{align}
The in-medium mass is defined as $\omega(\vec{k}=\vec{0})$. Note that the mass shift is real for both $D$ and $\bar{D}$ in the mean-field approximation, because the self-energy does not have an imaginary part. In symmetric nuclear matter, a negative mass shift is observed for both $D$ and $\bar{D}$. The $D$ meson mass drops faster than that of the $\bar{D}$ meson. This is because the vector-type Weinberg-Tomozawa interaction gives attraction for the $D$ meson, while it is repulsive for the $\bar{D}$ meson. In the isospin asymmetric nuclear matter with $\eta=(\rho_{n}-\rho_{p})/2\rho_{B}\neq0$, the in-medium properties of the $D^{-}(\bar{c}d)$ and $\bar{D}^{0}(\bar{c}u)$ [$D^{0}(c\bar{u})$ and $D^{+}(c\bar{d})$] are different, reflecting the third component of the isospin. The difference is mainly caused by the Weinberg-Tomozawa contact interaction, while the contribution from the scalar-isovector exchange is small. In neutron-rich nuclear matter ($\eta > 0$; $n_{d}>n_{u}$), the masses of both $\bar{D}$ mesons ($D^{-}$ and $\bar{D}^{0}$) increase. On the other hand, for the $D$ mesons, the mass of $D^{0}$ increases but the mass of $D^{+}$ decreases (see also Ref.~\cite{Yasui:2012rw} for the masses of the $\bar{D}$ and $B$ mesons in the asymmetric nuclear matter in the pion-exchange model). It is shown that the $D$ meson is more sensitive to the isospin asymmetry than the $\bar{D}$ meson.
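In practice, the pole condition can be solved by a simple fixed-point iteration in $\omega$. The sketch below uses a made-up, mildly energy-dependent self-energy purely to illustrate the procedure; the form and magnitude of $\Pi$ are assumptions, not taken from the cited models:

```python
# Solve the pole condition  -omega^2 + m_D^2 - Pi(omega) = 0  at k = 0
# by fixed-point iteration: omega <- sqrt(m_D^2 - Pi(omega)).
m_D = 1867.0  # MeV, vacuum D-meson mass

def Pi(omega):
    # Toy self-energy in MeV^2; illustrative only.
    return 1.0e5 * (omega / m_D)

omega = m_D  # start the iteration from the vacuum mass
for _ in range(100):
    omega_new = (m_D ** 2 - Pi(omega)) ** 0.5
    if abs(omega_new - omega) < 1e-9:
        omega = omega_new
        break
    omega = omega_new

print(f"in-medium mass {omega:.1f} MeV, shift {omega - m_D:.1f} MeV")
```

With this (assumed) positive $\Pi$, the converged $\omega(\vec{k}=\vec{0})$ lies below the vacuum mass, i.e., a negative mass shift.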
\paragraph{Contact interaction models with SU(4) symmetry}
There are several works studying the in-medium properties of the $D/\bar{D}$ meson using the contact interaction models introduced in section~\ref{sec:DNinteraction}. The common strategy is to include the Pauli blocking effect for the nucleon propagator and to treat the $D/\bar{D}$ meson self-energy in the ``self-consistent'' framework~\cite{Lutz:1997wt,Ramos:1999ku}. Let us briefly demonstrate this procedure for the $\bar{D}$ meson as an example. Ignoring the nucleonic correlation effects in nuclear matter, the in-medium self-energy of the $\bar{D}$ meson is given by the in-medium T-matrix as
\begin{align}
\Pi_{\bar{D}}(q_{0},\vec{q};\rho)
= \int \frac{{\mathrm d}^{3}p}{(2\pi)^{3}}
n(\vec{p},\rho) \left( T_{\bar{D}N}^{(I=0)}(P_{0},\vec{P};\rho)
+ 3T_{\bar{D}N}^{(I=1)}(P_{0},\vec{P};\rho) \right),
\label{eq:Dbarselfenergy}
\end{align}
where $n(\vec{p},\rho)$ is the nucleon occupation (1 for the nucleons below the Fermi momentum and 0 for those above the Fermi momentum), $P_{0}=q_{0}+E_{N}(\vec{p})$ and $\vec{P}=\vec{q}+\vec{p}$. The in-medium T-matrix $T_{\bar{D}N}(P_{0},\vec{P};\rho) $ is obtained by solving the scattering equation in medium
\begin{align}
T_{\bar{D}N}(P_{0},\vec{P};\rho)
=V+V\tilde{G}_{\bar{D}N}(P_{0},\vec{P};\rho) T_{\bar{D}N}(P_{0},\vec{P};\rho),
\label{eq:inmediumamplitude}
\end{align}
where $V$ is the interaction kernel used to calculate the vacuum T-matrix in Eq.~\eqref{eq:scatteringequation}. The medium effect is included in the loop function
\begin{align}
\tilde{G}_{\bar{D}N}(P_{0},\vec{P};\rho)
&= i\int \frac{{\mathrm d}q_{0}{\mathrm d}^{3}q}{(2\pi)^{3}}
D_{\bar{D}}(q_{0},\vec{q};\rho)
S_{N}(P_{0}-q_{0},\vec{P}-\vec{q};\rho),
\end{align}
where $S_{N}(p_{0},\vec{p};\rho)$ is the in-medium nucleon propagator
with the Pauli blocking effect [cf. Eq.~\eqref{eq:propagator_nnucleon_medium}], and $D_{\bar{D}}(q_{0},\vec{q};\rho)$ is the $\bar{D}$ propagator, which is related to the self-energy through Eq.~\eqref{eq:propagator}. In this way, the right-hand side of Eq.~\eqref{eq:Dbarselfenergy} also depends on the $\bar{D}$ meson self-energy $\Pi_{\bar{D}}(q_{0},\vec{q},\rho)$. Thus, by solving the set of equations self-consistently, $\Pi_{\bar{D}}(q_{0},\vec{q},\rho)$ is determined. In the case of the $D$ meson, we should note that the $DN$ channel couples to the $\pi Y_{c}$ channels. Medium modifications can also be applied to the other hadrons in the coupled channels, such as pions.
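The logic of the self-consistency loop can be illustrated with a drastically simplified, single-channel scalar model: the self-energy enters the loop function, which feeds back into the T-matrix, which in turn determines the self-energy. All quantities below are dimensionless toy parameters, not taken from the cited models:

```python
# Schematic self-consistent loop:
#   G(Pi)  = G0 - beta * Pi        loop function dressed by the self-energy
#   T(Pi)  = V / (1 - V * G(Pi))   in-medium T-matrix
#   Pi     = rho * T               closure relating Pi to the T-matrix
V, G0, beta, rho = -2.0, 0.3, 0.05, 0.4  # toy values

pi = 0.0                 # start from the undressed meson
for n in range(1000):
    t = V / (1.0 - V * (G0 - beta * pi))
    pi_new = rho * t
    if abs(pi_new - pi) < 1e-12:
        break
    pi = pi_new

print(f"converged Pi = {pi:.6f} after {n} iterations")
```

The iteration converges rapidly here because the feedback of $\Pi$ on the loop function is weak; in realistic coupled-channel calculations the same scheme is applied to energy- and momentum-dependent complex functions.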
In Ref.~\cite{Lutz:2005vx}, the medium effect for the $D/\bar{D}$ meson is studied within the self-consistent framework developed in Ref.~\cite{Lutz:2001dq}, together with the SU(4) contact interaction model of Ref.~\cite{Hofmann:2005sw}. As for the $\bar{D}$ meson, a simple estimation using the low-density theorem~\eqref{eq:scattering_massshift}, together with the scattering lengths shown in Table~\ref{tbl:DbarNslength}, leads to the mass shift of the $\bar{D}$ meson at nuclear density $\rho$ as $\Delta m_{\bar{D}} \simeq 17 \rho/\rho_{0}$ MeV, where $\rho_{0}$ is the normal nuclear matter density. The spectral function of the full self-consistent treatment is shown in Fig.~\ref{fig:Lutz:2005vx}. The corresponding value of the mass shift at $\rho=\rho_{0}$ is found to be 18 MeV, in good agreement with the above estimation. The spectral function of the $D$ meson is also shown in Fig.~\ref{fig:Lutz:2005vx}. The $D$ meson mode is pushed up by about 32 MeV from the free-space mass ($\sim 1867$ MeV), in accordance with the repulsive scattering length $a_{D}<0$ in Table~\ref{tbl:DNslength}. Another branch appears at lower energies, corresponding to the resonance-hole modes associated with the $I=0$ and $I=1$ resonances in the model of Ref.~\cite{Hofmann:2005sw} (see Table~\ref{tbl:DNbound}). The properties of the charm-strange mesons are also discussed. As mentioned in the previous subsection,
there exists a $D_{s}N$ ($\bar{D}_{s}N$) quasi-bound state at 2892 (2780) MeV. As a consequence, the spectral functions of both the $D_{s}$ and $\bar{D}_{s}$ mesons (see Fig.~\ref{fig:Lutz:2005vx}) show a two-peak structure which stems from the combination of the original $D_{s}/\bar{D}_{s}$ mode and the resonance-hole mode.
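The quoted $\simeq 17\,\rho/\rho_{0}$ MeV follows from the low-density theorem relating the mass shift to the isospin-averaged scattering length. A minimal numerical version is sketched below; the value of $a$ is an assumed input chosen only for illustration, with the convention $a<0$ for repulsion as used in the text:

```python
import math

# Low-density-theorem estimate of the Dbar-meson mass shift:
#   Delta m = -(2*pi/mu) * a * rho,  mu = m_N m_D / (m_N + m_D),
# with the convention that a < 0 means repulsion (positive shift).
HBARC = 197.327                 # MeV fm
m_N, m_D = 939.0, 1867.0        # MeV
mu = m_N * m_D / (m_N + m_D)    # reduced mass in MeV

a = -0.26                       # fm, assumed isospin-averaged value
rho0 = 0.17 * HBARC ** 3        # normal nuclear density in MeV^3

dm = -2.0 * math.pi / mu * (a / HBARC) * rho0
print(f"Delta m(rho0) ~ {dm:.1f} MeV")
```

With this input one recovers a repulsive shift of order 17 MeV at $\rho_{0}$, the size quoted above.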
\begin{figure}[tbp]
\begin{center}
\includegraphics[scale=0.9,bb=0 0 335 247]{figs/4/spectral-functions.pdf}
\caption{Spectral functions of the $D$, $\bar{D}$, $D_{s}$ and $\bar{D}_{s}$ mesons at $\rho=0.17$ fm$^{-3}$ (left) and $0.34$ fm$^{-3}$ (right) in the SU(4) contact interaction model~\cite{Lutz:2005vx}.}
\label{fig:Lutz:2005vx}
\end{center}
\end{figure}
In Ref.~\cite{Mizutani:2006vq}, the SU(4) contact interaction model constructed there is also applied to study the medium effect on the $D$ meson. In this study, the medium modification of the pion propagator in the $\pi Y_{c}$ loops is also considered. In addition to the vector-type contact interaction, the scalar-isoscalar interaction is considered to examine the model dependence. By fitting the $\Lambda_{c}(2595)$ resonance in the $I=0$ sector, two models are constructed in which the scalar interaction is switched on (model A) or switched off (model B). The imaginary part of the in-medium $DN$ scattering amplitude $T_{DN}(P_{0},\vec{P}=\vec{0};\rho)$ at $\rho=\rho_{0}$ is shown in Fig.~\ref{fig:Mizutani:2006vq} with various medium effects applied. In both models, it is found that the Pauli blocking shifts the $\Lambda_{c}(2595)$ peak to the higher energy region, and the $D$ meson dressing pushes it down, as in the case of the $\Lambda(1405)$ in the $\bar{K}N$ sector~\cite{Lutz:1997wt,Ramos:1999ku}. The $D$ meson spectral function shows a two-peak structure, as observed in Ref.~\cite{Lutz:2005vx}.
\begin{figure}[tb]
\begin{center}
\includegraphics[scale=1.1]{figs/4/Mizutani2006vq2.pdf}
\caption{Imaginary part of the in-medium $DN$ scattering amplitude $T_{DN}(P_{0},\vec{P}=\vec{0};\rho)$ for $I=0$ (left) and $I=1$ (right) in model A (top) and in model B (bottom) at $\rho=\rho_{0}$, based on the SU(4) contact interaction model~\cite{Mizutani:2006vq}.}
\label{fig:Mizutani:2006vq}
\end{center}
\end{figure}
In Ref.~\cite{Tolos:2007vh}, the same SU(4) contact interaction model is used to study the medium modification of both $D$ and $\bar{D}$. The mean-field binding for the baryons is taken into account with the Walecka-type $\sigma$-$\omega$ model~\cite{Kapusta:2006pm}, and the finite temperature effect is also included. The results for the $D$ meson spectral function are similar to those of Ref.~\cite{Mizutani:2006vq} at zero temperature. On the other hand, at $T=100$ MeV, the $\Lambda_{c}(2595)$ peak in the in-medium T-matrix is smeared out and the resonance-hole mode in the $D$ meson spectral function disappears. In Fig.~\ref{fig:Tolos:2007vh}, the mass shift of the $\bar{D}$ meson is shown with (model A) and without (model B) the scalar-isoscalar interaction, estimated by the optical potential~\eqref{eq:opticalpotential}. The mass shift of $\bar{D}$ is repulsive, irrespective of the choice of the parameter set. It is shown that the low-density approximation (denoted as $T\rho$) quantitatively deviates from the self-consistent calculation.
\begin{figure}[tbp]
\begin{center}
\includegraphics[scale=0.3,bb=0 0 679 492]{figs/4/Coloronline_tolos_fig5.pdf}
\caption{The mass modification of the $\bar{D}$ meson in nuclear matter in the SU(4) contact interaction model~\cite{Tolos:2007vh}.}
\label{fig:Tolos:2007vh}
\end{center}
\end{figure}
The medium effect for $D/\bar{D}$ and $D_{s}/\bar{D}_{s}$ is studied in Ref.~\cite{JimenezTejero:2011fc} with the nonlocal interaction model~\cite{JimenezTejero:2009vq}. In this paper, the interplay between $D/\bar{D}$ and $D_{s}/\bar{D}_{s}$ is investigated. Because the $DN$ channel couples to the $D_{s}Y$ channel, the medium modification of $D_{s}$ has an influence on the in-medium spectrum of $D$ through the coupled-channel effect. In fact, it is explicitly demonstrated in the self-consistent approach that the $D_{s}$ meson dressing has a nonnegligible effect on the $D$ meson self-energy in the nuclear medium. The same holds for $\bar{D}$ and $\bar{D}_{s}$; the $\bar{D}$ dressing affects the $\bar{D}_{s}$ spectrum because the $\bar{D}_{s}N$ channel couples to the $\bar{D}\Lambda$ channel. As shown in the right panel of Fig.~\ref{fig:JimenezTejero:2011fc_Fig4}, the $\bar{D}$ dressing modifies the spectral function of $\bar{D}_{s}$. The mass of $\bar{D}$ increases by about 35 MeV. The spectrum of $\bar{D}_{s}$ shows only one distinct peak, which is different from that in Ref.~\cite{Lutz:2005vx} where two peaks appear around the vacuum $\bar{D}_{s}$ mass.
\begin{figure}[tb]
\begin{center}
\includegraphics[height=5cm, scale=0.7,bb=3 49 391 229]{figs/4/JimenezTejero2011fc.pdf}
\vspace*{1.5cm}
\caption{The self-energy and the spectral function of $\bar{D}$ (left) and $\bar{D}_{s}$ (right) mesons at $\rho=\rho_{0}$ in the SU(4) contact interaction model \cite{JimenezTejero:2011fc}.}
\label{fig:JimenezTejero:2011fc_Fig4}
\end{center}
\end{figure}
\paragraph{Contact interaction models with SU(8) symmetry}
The contact interaction model with SU(8) symmetry~\cite{GarciaRecio:2008dp,Gamermann:2010zz} is utilized in Refs.~\cite{Tolos:2009nn,GarciaRecio:2010vt} for the $D$ meson and in Ref.~\cite{GarciaRecio:2011xt} for the $\bar{D}$ meson. Because of the heavy quark symmetry, the medium modification of the $D^{*}/\bar{D}^{*}$ meson is considered simultaneously. The in-medium self-energy of the $\bar{D}^{*}$ meson is given by the $\bar{D}N$ and $\bar{D}^{*}N$ T-matrices as
\begin{align}
\Pi_{\bar{D}^{\ast}}(q^{0},\vec{q}; \rho)
&=
\int \frac{\mathrm{d}^{3}p}{(2\pi)^{3}}n(\vec{p},\rho)
\left(
\frac{1}{3}T_{\bar{D}^{\ast}N}^{(I=0,J=1/2)}(P^{0},\vec{P};\rho)
+ T_{\bar{D}^{\ast}N}^{(I=1,J=1/2)}(P^{0},\vec{P};\rho) \right. \nonumber\\*
&\quad \left. +
\frac{2}{3}T_{\bar{D}^{\ast}N}^{(I=0,J=3/2)}(P^{0},\vec{P};\rho)
+ 2T_{\bar{D}^{\ast}N}^{(I=1,J=3/2)}(P^{0},\vec{P};\rho)
\right).
\end{align}
We note that the decay $\bar{D}^{\ast} \rightarrow \bar{D}\pi$ and the in-medium decay $\bar{D}^{\ast} \rightarrow \bar{D}NN^{-1}$ with a nucleon $N$ and a hole $N^{-1}$ are not considered.
The latter is investigated in the pion-exchange model~\cite{Yasui:2012rw}.
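The four coefficients in the self-energy above are statistical spin-isospin weights, $(2J+1)(2I+1)$ divided by the six spin states of the $\bar{D}^{\ast}$ (spin 1)-$N$ (spin 1/2) pair. A quick check with exact rational arithmetic (the check itself is only an illustration):

```python
from fractions import Fraction

# Coefficients multiplying T^{(I,J)} in the Dbar* self-energy above.
half, three_half = Fraction(1, 2), Fraction(3, 2)
weights = {(0, half): Fraction(1, 3),
           (1, half): Fraction(1),
           (0, three_half): Fraction(2, 3),
           (1, three_half): Fraction(2)}

# Each weight equals (2J+1)(2I+1) / [(2*1+1)(2*1/2+1)] = (2J+1)(2I+1)/6.
for (I, J), w in weights.items():
    assert w == (2 * J + 1) * (2 * I + 1) / 6

print("weights match (2J+1)(2I+1)/6; sum =", sum(weights.values()))
```

The weights sum to 4, i.e., the isospin multiplicities $1+3$ averaged over the $\bar{D}^{\ast}N$ spin states.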
As shown in Ref.~\cite{Tolos:2009nn}, the in-medium spectral function of the $D$ meson is qualitatively similar to that in the SU(4) model, but
the richer spectrum of resonances in the SU(8) model (see Table~\ref{tbl:DNbound}) induces various resonance-hole excitations. On the other hand, there is a qualitative difference from the SU(4) models in the mass shift. The mass shift of the $D$ meson is studied by the optical potential. In contrast to the SU(4) models, the mass of the $D$ meson decreases at finite density, reflecting the attractive scattering length $a_{DN}$ as shown in Table~\ref{tbl:DNslength}. Moreover, the imaginary part is shown to be small, which is also indicated by the small imaginary part of $a_{DN}$. On the other hand, the mass of the $D^{*}$ meson is shown to increase in the nuclear medium.
This behavior of the $D$ meson in nuclear matter in the SU(8) model (attractive with a small imaginary part) suggests the formation of bound states in finite nuclei. This possibility is pursued in Ref.~\cite{GarciaRecio:2010vt}. To calculate the $D$ bound states in finite nuclei, the energy-dependent optical potential for the $D$ meson is constructed in the local density approximation as in Eq.~\eqref{eq:opticalpotential} with the in-medium self-energy obtained in Ref.~\cite{Tolos:2009nn}. Solving the Schr\"odinger equation, neutral $D^{0}$ bound nuclei are found. The states with widths smaller than the binding energies ($\Gamma/2<|B|$) are shown in Fig.~\ref{fig:GarciaRecio:2010vt}; such states are advantageous for experimental identification. We note that the $DNN$ quasi-bound state found in Ref.~\cite{Bayar:2012dd} is qualitatively different from the $D$ nucleus in Ref.~\cite{GarciaRecio:2010vt}. The former is driven by the $DN$ attraction far from the threshold which forms $\Lambda_{c}(2595)$, while the latter is induced by the attraction of the $DN$ system near the threshold in the SU(8) model. For the positively charged $D^{+}$ meson, the Coulomb repulsion reduces the binding energies, and no state with $\Gamma/2<|B|$ is found.
\begin{figure}[tbp]
\begin{center}
\includegraphics[scale=0.3,bb=0 0 792 612]{figs/4/D0-energy-dep.pdf}
\caption{Binding energies and widths for different $D^{0}$-nucleus states obtained using the strongly energy-dependent $D$ potential with the SU(8) contact interaction model~\cite{GarciaRecio:2010vt}. The binding energy is defined as $E=-B<0$ for the bound state.}
\label{fig:GarciaRecio:2010vt}
\end{center}
\end{figure}
The result of the SU(8) model for the $\bar{D}$ meson is essentially different from that of the SU(4) models. As explained in section~\ref{sec:DNinteraction}, the $\bar{D}N$-$\bar{D}^{\ast}N$ system supports a bound state below the threshold, as a consequence of the coupled-channel effect~\cite{Gamermann:2010zz}. In Ref.~\cite{GarciaRecio:2011xt}, the bound state with $I(J^{P})=0(1/2^{-})$ is called $X(2805)$. The in-medium self-energy of the $\bar{D}$ meson is shown in Fig.~\ref{fig:GarciaRecio:2011xt_Pi} ($\alpha$ specifies the subtraction point). The energy of the in-medium mode is read off from the crossing point of the real part of the self-energy $\mathrm{Re}\,\Pi/(2m_{\bar{D}})$ with the oblique line representing $[(q^{0})^{2}-m_{\bar{D}}^{2}]/(2m_{\bar{D}})$. For instance, at $\rho=\rho_{0}$, the mode is found at about $20$-$27$ MeV below the free $\bar{D}$ mass, depending on the choice of the subtraction constant. This mode can be regarded as a mixture of the in-medium $\bar{D}$ meson and the $X(2805)$-hole mode.
The imaginary part of the self-energy comes from the nucleon-hole pair creation. We note that the real part of the self-energy of the SU(4) model is always positive in the figure, so that the $\bar{D}$ meson is not bound, in accordance with Ref.~\cite{JimenezTejero:2011fc}. In addition, the strong energy dependence of the near-threshold amplitude indicates that the low-density ($T\rho$) approximation breaks down already at very small densities.
\begin{figure}[tbp]
\begin{center}
\includegraphics[scale=0.8,bb=0 0 360 252]{figs/4/Pi.pdf}
\caption{Left panels show the real and imaginary parts of the $\bar{D}$ self-energy over $2m_{\bar{D}}$, at $\bm{q}=0$, as functions of the meson energy $q^{0}-m_{\bar{D}}$ for different densities and $\alpha=1$. Right panels show the same for $\alpha=1.2$. The oblique line is the function $[(q^{0})^{2}-m_{\bar{D}}^{2}]/(2m_{\bar{D}})$. The figure is taken from SU(8) contact interaction model of Ref.~\cite{GarciaRecio:2011xt}.}
\label{fig:GarciaRecio:2011xt_Pi}
\end{center}
\end{figure}
The $\bar{D}$ bound states in finite nuclei are studied with the optical potential obtained from the self-energy $\Pi_{\bar{D}}$ shown in Fig.~\ref{fig:GarciaRecio:2011xt_Pi}. The Schr\"odinger equation for the $\bar{D}$ meson wave function $\Psi$ is written as
\begin{align}
\left( -\frac{\vec{\nabla}^{2}}{2m_{\mathrm{red}}} +V_{\mathrm{Coul}}(r) + V_{\mathrm{opt}}(r,q^{0}) \right) \Psi
=(-B-i\Gamma/2) \Psi,
\end{align}
where $m_{\mathrm{red}}$ is the reduced mass of the $\bar{D}$-nucleus system and $V_{\mathrm{Coul}}(r)$ is the Coulomb potential which acts only for $D^{-}$. Several bound states are obtained as shown by the filled symbols in Fig.~\ref{fig:GarciaRecio:2011xt_tolos}. In addition, for the $D^{-}$ case, there are atomic bound states indicated by crosses. Compared with the pure Coulombic ones (open circles), the energy levels of the $D^{-}$ atoms are shifted by the repulsive strong interaction effect due to the existence of the nuclear bound states. This is in accordance with the negative scattering length in Table~\ref{tbl:DbarNslength}.
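A minimal sketch of such a bound-state calculation: the microscopic $V_{\mathrm{opt}}$ is replaced by an attractive square well, whose s-wave bound state follows from the matching condition $k\cot(kR)=-\kappa$. The depth, radius, and reduced mass are illustrative assumptions, and the width generated by $\mathrm{Im}\,V_{\mathrm{opt}}$ is omitted:

```python
import math

# s-wave bound state in a square well V(r) = -V0 for r < R, 0 outside,
# standing in for the meson-nucleus optical potential (toy parameters).
HBARC = 197.327        # MeV fm
mu = 1600.0            # MeV, assumed meson-nucleus reduced mass
V0, R = 10.0, 3.0      # MeV, fm

def match(B):
    # Matching condition k*cot(kR) + kappa = 0 for binding energy B > 0
    k = math.sqrt(2.0 * mu * (V0 - B)) / HBARC
    kappa = math.sqrt(2.0 * mu * B) / HBARC
    return k / math.tan(k * R) + kappa

lo, hi = 1e-6, V0 - 1e-6     # match(lo) < 0 < match(hi) for these values
for _ in range(60):          # bisection
    mid = 0.5 * (lo + hi)
    if match(mid) < 0.0:
        lo = mid
    else:
        hi = mid

B = 0.5 * (lo + hi)
print(f"binding energy B = {B:.2f} MeV")
```

For an imaginary part $-iW_{0}$ of the potential, a perturbative width estimate would then be $\Gamma/2 \simeq W_{0}$ times the probability of finding the meson inside the well.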
\begin{figure}[tbp]
\begin{minipage}{0.5\hsize}
\centering
\includegraphics[scale=0.55,bb=0 0 360 252]{figs/4/tolos2.pdf}
\end{minipage}
\begin{minipage}{0.5\hsize}
\centering
\includegraphics[scale=0.55,bb=0 0 360 252]{figs/4/tolosD0.pdf}
\end{minipage}
\caption{The mass spectrum of $D^{-}$ (left) and $\bar{D}^{0}$ (right) mesons in atomic nuclei in the SU(8) contact interaction model~\cite{GarciaRecio:2011xt}. The binding energy is defined as $E=-B<0$ for the bound state.}
\label{fig:GarciaRecio:2011xt_tolos}
\end{figure}
\paragraph{Pion exchange models with heavy quark symmetry}
The in-medium self-energies of the $\bar{D}^{(\ast)}$ ($B^{(\ast)}$) mesons are considered with the pion exchange as the longest-range interaction~\cite{Yasui:2012rw}. At the lowest order in the coupling constant of the $\bar{D}^{(\ast)}\bar{D}^{(\ast)}\pi$ vertex, the relevant diagrams are those shown in Fig.~\ref{fig:Yasui:2012rw_diagram}. The nucleon-hole excitation is included in the pion loop through the $NN\pi$ pseudovector coupling of Eq.~\eqref{eq:Yukawa}. The self-energy of the $\bar{D}$ meson is given by
\begin{align}
\Sigma_{\bar{\mathrm{D}}}(k_{\mathrm{F}}) =
\Sigma_{\bar{\mathrm{D}}}^{(\bar{\mathrm{D}}^{\ast})}(k_{\mathrm{F}})
+ \Sigma_{\bar{\mathrm{D}}\,\mathrm{Pauli}}^{(\bar{\mathrm{D}}^{\ast})}(k_{\mathrm{F}}),
\end{align}
and that for the $\bar{D}^{\ast}$ meson is given by
\begin{align}
\Sigma_{\bar{\mathrm{D}}^{\ast}}(k_{\mathrm{F}}) =
\Sigma_{\bar{\mathrm{D}}^{\ast}}^{(\bar{\mathrm{D}}^{\ast})}(k_{\mathrm{F}})
+ \Sigma_{\bar{\mathrm{D}}^{\ast}}^{(\bar{\mathrm{D}})}(k_{\mathrm{F}})
+ \Sigma_{\bar{\mathrm{D}}^{\ast}\,\mathrm{Pauli}}^{(\bar{\mathrm{D}}^{\ast})}(k_{\mathrm{F}})
+ \Sigma_{\bar{\mathrm{D}}^{\ast}\,\mathrm{Pauli}}^{(\bar{\mathrm{D}})}(k_{\mathrm{F}}),
\end{align}
which correspond to (1) and (2), respectively, in Fig.~\ref{fig:Yasui:2012rw_diagram}.
The superscripts in parentheses indicate the corresponding intermediate states, and the terms labeled ``Pauli'' correspond to the second diagrams in (1) and (2).
The explicit expressions are found in Ref.~\cite{Yasui:2012rw}.
It is important to note that the intermediate $\bar{D}^{\ast}$ states appear in the self-energy of the $\bar{D}$ meson, because they allow the pion-exchange contribution to the $\bar{D}$ meson self-energy (Fig.~\ref{fig:Yasui:2012rw_diagram}~(1)).
These intermediate $\bar{D}^{\ast}$ states are important thanks to the small mass splitting between $\bar{D}$ and $\bar{D}^{\ast}$, which is proportional to the inverse of the heavy meson mass.
The same expressions apply to the $B$ and $B^{\ast}$ mesons after replacing the masses.
We note that the nonperturbative effect, such as the formation of the $\bar{D}N$ bound state in Ref.~\cite{Yasui:2009bz}, is not included in this scheme.
\begin{figure}[tbp]
\begin{center}
\includegraphics[scale=0.25,bb=0 0 842 595]{figs/4/Fig1_120528.pdf}
\caption{Diagrams for the self-energies for $\bar{D}$ (top) and $\bar{D}^{*}$ (bottom) used in Ref.~\cite{Yasui:2012rw}. The line with $\parallel$ indicates the medium part, i.e. the second term in Eq.~(\ref{eq:propagator_nnucleon_medium}).}
\label{fig:Yasui:2012rw_diagram}
\end{center}
\end{figure}
In this calculation, the first term in the self-energy $\Sigma_{\bar{\mathrm{D}}}^{(\bar{\mathrm{D}}^{\ast})}(k_{\mathrm{F}})$ of the $\bar{D}$ ($B$) meson has no imaginary part, unless the Fermi momentum exceeds the critical value $k_{F}^{\mathrm{cr}} = \sqrt{2\left(1+m_{N}/M\right)m_{N}\Delta}$ with $M=(M_{P}+3M_{P^{\ast}})/4$ and $\Delta = M_{P^{\ast}}-M_{P}$ ($P^{(\ast)}=\bar{D}^{(\ast)}$, $B^{(\ast)}$).
In the case of the $\bar{D}$ and $B$ mesons, the critical density $n_{\mathrm{cr}}=2k_{F}^{\mathrm{cr}\,3}/3\pi^{2}$ is larger than the normal nuclear density.
In any case, this imaginary part is always canceled by the second term $\Sigma_{\bar{\mathrm{D}}\,\mathrm{Pauli}}^{(\bar{\mathrm{D}}^{\ast})}(k_{\mathrm{F}})$, as it should be.
In contrast to the $\bar{D}$ ($B$) meson, the self-energy of the $\bar{D}^{\ast}$ ($B^{\ast}$) meson always has an imaginary part, as it decays to the $\bar{D}$ ($B$) meson and nucleon-hole pairs, $NN^{-1}$, even at small baryon number density. It is interesting to compare the decay property of the $B^{\ast}$ meson in medium with that in vacuum. In the latter, $B^{\ast}$ cannot decay to $B\pi$ because the $B\pi$ threshold lies above the $B^{\ast}$ mass. In the former, however, $B^{\ast}$ decays to the $B$ meson and nucleon-hole pairs, because nucleon-hole pairs can be excited with an infinitesimally small energy cost at the Fermi surface.
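The Fermi momentum and the density are related by the standard expression $n = 2k_{F}^{3}/(3\pi^{2})$ for symmetric nuclear matter (spin-isospin degeneracy 4); a quick numerical check of the scales involved:

```python
import math

# Symmetric nuclear matter: n = 2 k_F^3 / (3 pi^2), degeneracy 4.
HBARC = 197.327  # MeV fm

def density(kF_MeV):
    kF = kF_MeV / HBARC                          # convert to fm^-1
    return 2.0 * kF ** 3 / (3.0 * math.pi ** 2)  # fm^-3

# kF ~ 270 MeV reproduces the normal nuclear density ~0.17 fm^-3,
# so any k_F^cr above this value corresponds to n_cr > rho_0.
print(f"n(kF = 270 MeV) = {density(270.0):.3f} fm^-3")
```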
By calculating the self-energies, it is found that the masses of both the $\bar{D}$ and $\bar{D}^{\ast}$ mesons decrease at finite baryon number density, as shown in Fig.~\ref{fig:Yasui:2012rw}. At the same time, the imaginary part of the self-energy causes the width broadening of the $\bar{D}^{\ast}$ meson.
Let us consider the above result in the heavy quark limit~\cite{Yasui:2013vca}.
In this limit, the self-energies of $\bar{D}$ and $\bar{D}^{\ast}$ mesons in Fig.~\ref{fig:Yasui:2012rw} can be compactly expressed as
\begin{align}
-i \Sigma_{P} =
-\left( \frac{2g}{\sqrt{2}f_{\pi}} \right)^{2}
\int \frac{\mathrm{d}^{4}k}{(2\pi)^{4}} \frac{k^{2}-(v\!\cdot\!k)^{2}}{2v\!\cdot\!(-k)+i\varepsilon}
\left( \frac{1}{k^{2}-m_{\pi}^{2}+i\varepsilon} \right)^{2}
\sum_{a,b} \tau^{a} \Pi_{\pi}^{ab}(k)\tau^{b},
\end{align}
for $P$ ($=\bar{D}$) meson and
\begin{align}
-i \Sigma_{P^{\ast}} = -i\Sigma_{P^{\ast}}^{(P)}-i\Sigma_{P^{\ast}}^{(P^{\ast})},
\end{align}
with $\Sigma_{P^{\ast}}^{(P)}=(1/3)\Sigma_{P}$ and $\Sigma_{P^{\ast}}^{(P^{\ast})}=(2/3)\Sigma_{P}$ for $P^{\ast}$ ($=\bar{D}^{\ast}$) meson,
where the superscripts indicate the contained intermediate states.
Both $P$ and $P^{\ast}$ are defined in the heavy quark limit.
In this limit, the $P^{\ast}$ self-energy has no imaginary part because of the mass degeneracy between the $P$ and $P^{\ast}$ mesons, $M_{P}=M_{P^{\ast}}$.
The coupling constant $g$ was defined in Eq.~(\ref{eq:L_heavy_hadron_pion}), and $\Pi_{\pi}^{ab}(k)$ with isospin indices $a,b=1,2,3$ is the pion self-energy including nucleon-hole pairs in nuclear matter.
Here, the factors $1/3$ and $2/3$ included in $\Sigma_{P^{\ast}}^{(P)}$ and $\Sigma_{P^{\ast}}^{(P^{\ast})}$ for $\Sigma_{P^{\ast}}$ are important.
In fact, those factors can be understood in an intuitive way.
In the heavy quark limit, all the physical properties of $P$ and $P^{\ast}$ mesons are the same except for the spin degrees of freedom.
As for $\Sigma_{P}$, the intermediate $P^{\ast}$ states have three degrees of freedom due to spin one (cf.~Fig.~\ref{fig:Yasui:2012rw_diagram}).
As for $\Sigma_{P^{\ast}}$, on the other hand, the intermediate $P$ states have one degree of freedom due to spin zero, and the intermediate $P^{\ast}$ states have two degrees of freedom due to spin one.
Notice that the latter number is reduced from three to two, because the $\pi P^{\ast}P^{\ast}$ vertex in Eq.~(\ref{eq:L_heavy_hadron_pion}) contains an antisymmetric tensor (helicity-flipping) term for the spin.
As a result, we obtain the same self-energies for $P$ and $P^{\ast}$ in nuclear matter in the heavy quark limit.
This result is consistent with the heavy quark symmetry.
The light spin-complex, which is defined in Sect.~\ref{sec:heavy_quark_symmetry}, is composed of $qNN^{-1}$, with $q$ being the light quark in $P^{(\ast)}(=q\bar{Q})$ and $NN^{-1}$ the nucleon-hole pairs.
Though the present result is given at the lowest order in the interaction, it can be straightforwardly extended to higher orders.
The isospin asymmetric nuclear matter is also investigated, with the asymmetry parameter $\delta=(n_{n}-n_{p})/(n_{n}+n_{p})$~\cite{Yasui:2012rw}. For $\delta > 0$, namely $n_{d}>n_{u}$ for the $u$, $d$ quark number densities, the $\bar{D}^{0}(\bar{c}u)$ mass becomes smaller, while the $D^{-}(\bar{c}d)$ mass becomes larger.
This is in contrast to the result in the mean-field approach~\cite{Mishra:2008cd,Kumar:2010gb}. For a more precise discussion, however, it will be necessary to also include the Weinberg-Tomozawa interaction for $NN \pi \pi$.
\begin{figure}[tbp]
\begin{minipage}{0.5\hsize}
\centering
\includegraphics[scale=0.8,bb=-50 0 310 252]{figs/4/D_density.pdf}
\end{minipage}
\begin{minipage}{0.5\hsize}
\centering
\includegraphics[scale=0.8,bb=-50 0 310 252]{figs/4/Dstar_density.pdf}
\end{minipage}
\caption{The in-medium self-energies of $\bar{D}$ (left) and $\bar{D}^{\ast}$ (right) mesons in nuclear matter with baryon number density $n$, calculated by pion-exchange interaction~\cite{Yasui:2012rw}.}
\label{fig:Yasui:2012rw}
\end{figure}
As discussed in Sect.~\ref{sec:heavy_quark_symmetry}, the $\lambda_{1}$ and $\lambda_{2}$ coefficients in the $1/m_{Q}$ expansion of the masses of the charm and bottom hadrons reflect the effects of the electric and magnetic gluons inside the charmed hadrons, respectively. By calculating the in-medium hadron masses, it is possible to study the change of the gluonic components through the modifications of the $\lambda_{1}$ and $\lambda_{2}$ in the nuclear matter. For this purpose, the $1/m_{Q}$ corrections should be introduced in the effective theory, as described in Sect.~\ref{sec:heavy_hadron_effective_theory}.
In Ref.~\cite{Yasui:2013iga}, the values of $\lambda_{1}(\rho)$ and $\lambda_{2}(\rho;m_{Q})$ at the baryon number density $\rho$ are calculated at the pion one-loop level.
It is important to include systematically the $1/M$ corrections with $M$ being the averaged mass of $P$ and $P^{\ast}$.
This is realized by using the $1/M$ corrections to the $P^{(\ast)}P^{(\ast)}\pi$ vertices as presented in Eq.~(\ref{eq:L_heavy_hadron_pion_2}).
When compared with the vacuum values, the coefficients at $\rho=\rho_{0}$ are obtained as
\begin{align}
\frac{\lambda_{1}(\rho_{0})}{\lambda_{1}(0)}
&=1.28\textrm{-}1.20 ,\quad
\frac{\lambda_{2}(\rho_{0};m_{c})}{\lambda_{2}(0;m_{c})}
=0.74\textrm{-}0.89,\quad
\frac{\lambda_{2}(\rho_{0};m_{b})}{\lambda_{2}(0;m_{b})}
=0.35\textrm{-}0.73 .
\end{align}
The increase of $\lambda_{1}$ indicates that the coupling of the electric gluon to the heavy quark in the heavy-light meson is enhanced in the nuclear medium, while the decrease of $\lambda_{2}$ shows the suppression of that of the magnetic gluon. For reference, the strength of the electric/magnetic gluon condensate has been studied at finite temperature, using the QCD sum rule analysis of the lattice QCD data~\cite{Morita:2007hv,Lee:2008xp}. It is discussed that the increase of the electric condensate near the critical temperature leads to the decrease of the mass of $J/\psi$ through the Stark effect, while the magnetic condensate hardly changes around the critical temperature $T_{\mathrm c}$.
\paragraph{QCD sum rules}
The QCD sum rules have been used to study the in-medium properties of the light mesons such as $\rho$, $\omega$ and $\phi$~\cite{Hatsuda:1991ez}. The application to the heavy-light mesons was first made in Refs.~\cite{Hayashigaki:2000es,Morath:1999cv}. The in-medium two-point correlation function of the $D/\bar{D}$ meson is given by
\begin{align}
\Pi_{\mathrm{PS}}^{\mathrm{NM}}(q)
=
i \int \mathrm{d}^{4}x e^{iq\cdot x}
\langle T J_{5}(x) J_{5}^{\dag}(0) \rangle_{\mathrm{NM}(\rho_{N})},
\label{eq:Trhosumrule}
\end{align}
where $\rho_{N}$ is the nuclear matter density and $q^{\mu}=(q^{0},\vec{q})$ is the momentum carried by the interpolating current $J_{5}(x)=J_{5}^{\dag}(x)=\left( \bar{c} i\gamma_{5} q(x) + \bar{q} i\gamma_{5} c(x) \right)/2$.
Here,
the mass of the in-medium $D/\bar{D}$ meson is evaluated by introducing the density dependence in the condensates.
In Ref.~\cite{Hayashigaki:2000es}, Eq.~(\ref{eq:Trhosumrule}) is approximated, in the linear density approximation, by
\begin{align}
\Pi_{\mathrm{PS}}^{\mathrm{NM}}(q)
\simeq
\Pi_{\mathrm{PS}}^{0}(q) + \frac{\rho_{N}}{2M_{N}} T_{\mathrm{PS}}(q),
\label{eq:Hayashigaki}
\end{align}
where $\Pi_{\mathrm{PS}}^{0}(q)$ is the vacuum correlation function and $T_{\mathrm{PS}}(q)$ is the forward scattering amplitude of a $D/\bar{D}$ meson and a nucleon in vacuum, defined as~\cite{Koike:1996ga}
\begin{align}
T_{\mathrm{PS}}(q) = i \int \mathrm{d}^{4} x e^{iq\cdot x}
\langle N(p) | \mathrm{T} J_{5}(x) J^{\dag}_{5}(0) | N(p) \rangle.
\end{align}
By applying the Borel transformation to $T_{\mathrm{PS}}(q)$ and comparing the OPE side with the phenomenological side, the modification of the $D/\bar{D}$ meson in medium is calculated.
Up to the dimension four OPE, the relevant condensates are $\langle \bar{q}q \rangle$,
$\langle \frac{\alpha_{s}}{\pi}G^{2} \rangle$,
$\langle q^{\dag} i \vec{D}_{0} q \rangle$ and
$\langle \frac{\alpha_{s}}{\pi} \left( \frac{(vG)^{2}}{v^{2}}-\frac{G^{2}}{4} \right) \rangle$. In this case, the difference between the $D$ and $\bar{D}$ does not appear on the OPE side. As a result, the negative mass shift $\Delta m = -48 \pm 8$ MeV is obtained for both the $D$ and $\bar{D}$ mesons at normal nuclear matter density; namely, the mass of the $D/\bar{D}$ meson decreases by about 50 MeV in the nuclear medium. This attraction is comparable with that from the QMC model in Refs.~\cite{Sibirtsev:1999js,Tsushima:1998ru}. Negative mass shifts are also obtained by improved analyses. In Ref.~\cite{Azizi:2014bba}, the mass shift is obtained as $\Delta m_{D}=-46 \pm 7$ MeV for the $D$ meson, and $\Delta m_{B}=-242 \pm 62$ MeV for the $B$ meson. Larger mass shifts are suggested in Ref.~\cite{Wang:2015uya}; $\Delta m_{D}=-72$ MeV, $\Delta m_{B}=-478$ MeV, $\Delta m_{D^{\ast}}=-102$ MeV, and $\Delta m_{B^{\ast}}=-687$ MeV. In all cases, the medium modification acts as an attractive effect. We note that Refs.~\cite{Azizi:2014bba,Wang:2015uya} use the low density approximation of Eq.~\eqref{eq:Trhosumrule} to study the medium effect.
On the other hand, a repulsive mass shift is predicted in Refs.~\cite{Hilger:2008jg,Hilger:2010zb}, where the dimension-five operator $\langle \bar{q} g \sigma G q \rangle$ and the $q_{0}$-odd operators
$\langle q^{\dag}q \rangle$, $\langle q^{\dag} \vec{D}^{2}_{0} q \rangle$ and $\langle q^{\dag} g \sigma G q\rangle$ are included in addition to the operators used in Ref.~\cite{Hayashigaki:2000es}.\footnote{Notice that $q_{0}$-odd terms do not appear for $\bar{q}q$ mesons with $q=u, d$, such as the $\rho$ and $\omega$ mesons, nor for quarkonia, such as $\phi$ and $J/\psi$, in isospin-symmetric nuclear matter.}
In contrast to the assumption in Eq.~(\ref{eq:Hayashigaki}), the original form of the correlation function
\begin{align}
\Pi_{\mathrm{PS}}^{\mathrm{NM}}(q) = \Pi_{\mathrm{PS}}^{\mathrm{NM},\mathrm{even}}(q_{0}^{2}) + q_{0} \Pi_{\mathrm{PS}}^{\mathrm{NM},\mathrm{odd}}(q_{0}^{2}),
\end{align}
where the superscripts indicate the $q_{0}$-even/odd parts, is used.
In this case, the $q_{0}$-odd terms cause the splitting of $D$ and $\bar{D}$.
As a result, the averaged mass of $D$ and $\bar{D}$ mesons $m=(m_{D}+m_{\bar{D}})/2$ increases by $+45$ MeV at the normal nuclear matter density (see Fig.~\ref{fig:Hilger:2010zb_D}).
By introducing the contributions of both $D^{+}$ and $D^{-}$ on the phenomenological side, the mass difference is extracted as $m_{D}-m_{\bar{D}}=-60$ MeV, indicating $m_{D}<m_{\bar{D}}$. The same tendency is observed in the bottom sector, where
$m=(m_{B}+m_{\bar{B}})/2$ increases by about $+60$ MeV and $m_{\bar{B}}-m_{B}=-130$ MeV. In the case of the charm-strange mesons, the averaged mass $m=(m_{D_{s}}+m_{\bar{D}_{s}})/2$ increases by about $+30$ MeV, while $m_{D_{s}}-m_{\bar{D}_{s}}=+25$ MeV, showing $m_{D_{s}}>m_{\bar{D}_{s}}$, as opposed to the $D/\bar{D}$ case. For a detailed discussion of the sum rules with the mixed condensates and the four-quark condensates, see Refs.~\cite{Zschocke:2011aa,Buchheim:2014rpa}.
\begin{figure}[tbp]
\begin{minipage}{0.5\hsize}
\centering
\includegraphics[scale=0.23,bb=0 0 792 612]{figs/4/m_cms_D_sn.pdf}
\end{minipage}
\begin{minipage}{0.5\hsize}
\centering
\includegraphics[scale=0.23,bb=0 0 792 612]{figs/4/m_delta_D_sn.pdf}
\end{minipage}
\caption{The averaged mass $m=(m_{D}+m_{\bar{D}})/2$ and the mass splitting $\Delta m = (m_{D}-m_{\bar{D}})/2$ in nuclear matter with density $n$ in the QCD sum rule approach~\cite{Hilger:2008jg}.}
\label{fig:Hilger:2010zb_D}
\end{figure}
In Ref.~\cite{Suzuki:2015est}, the in-medium spectral function of the $D/\bar{D}$ meson is extracted with the maximum entropy method~\cite{Gubler:2010cf}. In this method, the spectral function can be obtained without any assumption on its explicit form. The charge-conjugation projection is performed at the OPE level. The results are qualitatively similar to those in Ref.~\cite{Hilger:2008jg}, i.e., the $D/\bar{D}$ mass increases in the nuclear medium, as shown in Fig.~\ref{fig:Suzuki:2015est}. Quantitatively, the amount of the mass modification and the magnitude of the mass splitting are milder than those in Ref.~\cite{Hilger:2008jg}. A possible origin of the difference of the qualitative conclusions (attraction or repulsion) among the QCD sum rule approaches may be related to the choice of the Borel window used in the analysis (see e.g. Sect.~IV in Ref.~\cite{Suzuki:2015est} for more details).
In Refs.~\cite{Hilger:2008jg,Hilger:2010zb,Suzuki:2015est} the mass of the $D/\bar{D}$ meson is predicted to increase. This is in contrast to the light mesons, whose masses in general decrease. An explanation of this behavior is given in Ref.~\cite{Park:2016xrw}, based on the constituent quark model with a linear confining potential. When chiral symmetry is partially restored in the nuclear medium, the constituent mass of the light quarks should decrease. It is shown that the light-light meson mass decreases along with the reduction of the constituent quark mass, while the heavy-light meson mass increases.
\begin{figure}[tbp]
\begin{center}
\includegraphics[scale=0.6,bb=0 0 360 252]{figs/4/D-mass_shift.pdf}
\caption{Masses of $D^{-}$ ($\bar{D}$) and $D^{+}$ ($D$) meson at finite baryon density~\cite{Suzuki:2015est}. The unit of the horizontal axis is given by the saturation nuclear matter density $\rho_{0}$.}
\label{fig:Suzuki:2015est}
\end{center}
\end{figure}
\paragraph{NJL model}
The Nambu--Jona-Lasinio (NJL) model~\cite{Nambu:1961tp,Nambu:1961fr} describes the dynamical breaking of chiral symmetry, and has been applied to various phenomena in the light quark sector at finite temperature and density (see the review articles~\cite{Klimt:1989pm,Vogl:1989ea,Klevansky:1992qe,Hatsuda:1994pi}). The generalization to the charm quark sector was made in Refs.~\cite{Ebert:1994tv,Ebert:1996vx} by utilizing the SU(4) symmetry.
The in-medium properties of the heavy-light mesons are studied in Ref.~\cite{Blaschke:2011yv} where the Polyakov loop potential is also incorporated. The Lagrangian is given by
\begin{align}
{\cal L}_{\mathrm{PNJL}}
=
\bar{q} (i \gamma^{\mu} D_{\mu} + \hat{m}) q
+ G_{S} \sum_{a=0}^{15} \left( (\bar{q}\lambda^{a}q)^{2} + (\bar{q}i\gamma_{5}\lambda^{a}q)^{2} \right)
- {\cal U}(\Phi[A],\bar{\Phi}[A];T),
\end{align}
where $\Phi$ is the Polyakov loop and $q=(u,d,s,c)^{t}$ is the quark field with four flavors.
The light quark masses are dynamically generated by the gap equation.
The mass of the pseudoscalar meson $M_{P}$ is obtained by solving the pole condition
\begin{align}
1-2G_{S} \Pi^{ij}(P_{0}=M_{P},\vec{P}=0)=0,
\end{align}
with the polarization operator
\begin{align}
\Pi^{ij}(P) =
iN_{c}
\int \frac{\mathrm{d}^{4}p}{(2\pi)^{4}}
\mathrm{tr}_{\mathrm{D}}
\left[
S_{i}(p) i\gamma_{5} S_{j}(p+P) i\gamma_{5}
\right].
\end{align}
The masses of the $D^{-}$ ($\bar{D}$) and $D^{+}$ ($D$) mesons at finite density are shown in Fig.~\ref{fig:Blaschke:2011yv} for finite temperature $T=\mu/3$. At low density, the mass of the $D^{-}$ ($\bar{D}$) meson increases, while that of the $D^{+}$ ($D$) meson decreases. This can be understood as an effective repulsion due to the Pauli principle; the light quark inside $\bar{D}$ feels the Pauli blocking effect at finite density, whereas neither the charm quark nor the light antiquark is affected. At sufficiently large baryon density, decay processes such as $\bar{D} \rightarrow q + \bar{c}$ or $D \rightarrow \bar{q}+c$ can occur and the decay width $\Gamma$ appears, as shown in Fig.~\ref{fig:Blaschke:2011yv}. We should note the possible cutoff dependence of the threshold effect, as in the study of the vector mesons in vacuum~\cite{Takizawa:1991mx}.
\begin{figure}[tbp]
\begin{center}
\includegraphics[scale=0.3,bb=0 0 613 552]{figs/4/D-T3mu.pdf}
\caption{Masses of $D^{-}$ ($\bar{D}$) and $D^{+}$ ($D$) meson at finite baryon density and temperature in the PNJL model \cite{Blaschke:2011yv}. The unit of the horizontal axis is given by the normal nuclear matter density $n_{0}=0.16$ fm$^{-3}$.}
\label{fig:Blaschke:2011yv}
\end{center}
\end{figure}
\subsection{New aspects of heavy-light mesons in nuclear matter}
\paragraph{Partial restoration of broken chiral symmetry}
As discussed in Sect.~\ref{sec:chiral_symmetry}, the chiral condensate is one of the most important quantities to determine the properties of hadrons.
In the linear representation of chiral symmetry, the mass difference between chiral partners of heavy-light meson, $\Delta M$, is proportional to the pion decay constant, and hence it is proportional to the chiral condensate $\langle \bar{q}q \rangle$~\cite{Bardeen:2003kt}:
\begin{align}
\Delta M \propto f_{\pi} \propto \langle \bar{q}q \rangle.
\end{align}
Thus, we can probe the chiral condensate in the nuclear medium by measuring $\Delta M$ for heavy hadrons.
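To get a feeling for the expected magnitude, one can combine the proportionality $\Delta M \propto \langle \bar{q}q \rangle$ with the standard linear-density estimate of the in-medium condensate, $\langle \bar{q}q \rangle(\rho)/\langle \bar{q}q \rangle(0) \simeq 1-\sigma_{\pi N}\rho/(f_{\pi}^{2}m_{\pi}^{2})$. The following sketch is ours, with illustrative input values ($\sigma_{\pi N}\simeq 45$ MeV, $f_{\pi}=92.4$ MeV, $m_{\pi}=138$ MeV) that are assumptions, not quoted from the text:

```python
# Linear-density estimate of the in-medium chiral condensate.
# All numerical inputs below are illustrative assumptions.
hbarc = 197.327                 # MeV fm
sigma_piN = 45.0                # pion-nucleon sigma term [MeV]
f_pi, m_pi = 92.4, 138.0        # pion decay constant and mass [MeV]
rho0 = 0.16 * hbarc**3          # normal nuclear density, converted to MeV^3

ratio = 1.0 - sigma_piN * rho0 / (f_pi**2 * m_pi**2)
print(round(ratio, 2))          # 0.66
```

With these inputs the condensate, and hence $\Delta M$ in this picture, is reduced by roughly 30-35\% at $\rho_{0}$.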
In the following, we summarize the studies of the mass modifications of the chiral partners of the heavy-light pseudoscalar and scalar mesons, such as $D(0^{-})$ and $D_{0}^{\ast}(0^{+})$ as well as $D_{s}(0^{-})$ and $D_{s0}^{\ast}(0^{+})$.
There are several works based on the hadron dynamics.
In Ref.~\cite{Tolos:2009ck}, the in-medium properties (mass and decay width) of the scalar mesons $D_{0}^{\ast}(2400)$ and $D_{s0}^{\ast}(2317)$ were studied by investigating the T-matrix from the Lippmann-Schwinger equation based on the SU(8) flavor-spin symmetry.
It turned out that the in-medium decay width of $D_{s0}^{\ast}(2317)$ becomes about 100 MeV, which is much larger than the value ($<3.8$ MeV) in vacuum~\cite{Agashe:2014kda}.
The in-medium decay width of the $D_{0}^{\ast}(2400)$ meson is also enhanced.
The large in-medium decay widths would be due to the strong absorption processes via the $DN$ and $DNN$ loops.
On the other hand, the mass modifications were small.
However, we have to be careful about whether the present result can be used for the partial restoration of the broken chiral symmetry in the nuclear medium, because the non-linear representation is used (cf.~Sect.~\ref{sec:chiral_effective_theory}).
In contrast, the mass modifications of $D$, $D^{\ast}$, $D_{s}$, $D_{s}^{\ast}$ mesons were investigated at finite temperature in the linear-sigma model~\cite{Sasaki:2014asa,Sasaki:2014wma}.
We would like to introduce this study, because the discussion is based essentially on the change of chiral condensate in medium, and can be applied straightforwardly to nuclear medium.
The author focused especially on the parity doublets, $(0^{-},1^{-})$ and $(0^{+},1^{+})$.
Masses of $(0^{-},1^{-})$ states and $(0^{+},1^{+})$ states should coincide with each other when chiral symmetry recovers completely.\footnote{The $0^{-}$ and $1^{-}$ ($0^{+}$ and $1^{+}$) states should be closer to each other in mass, because of the approximate heavy quark symmetry (cf.~Sect.~\ref{sec:heavy_quark_symmetry}).}
In general, however, it is difficult to conclude whether the masses of both positive- and negative-parity states become smaller, whether both of them become larger, or whether the positive-parity mass becomes larger while the negative-parity mass becomes smaller.
In Ref.~\cite{Sasaki:2014asa}, an effective Lagrangian with SU(3) ($u$, $d$, $s$) flavor symmetry, the U(1)$_{\mathrm{A}}$ breaking and the heavy quark symmetry was considered~\cite{Nowak:1992um,Bardeen:1993ae} (see also Ref.~\cite{Bardeen:2003kt}).
We consider the $D=(D_{q},D_{q},D_{s})$ meson in the linear representation
\begin{align}
{\cal H}_{L,R}(x) = \frac{1}{\sqrt{2}} \Bigl( G_{v}(x) \pm i H_{v}(x) \gamma_{5} \Bigr),
\label{eq:H_LR}
\end{align}
with $H_{v}$ defined in Eq.~(\ref{eq:H_field}) for $(0^{-},1^{-})$ mesons and $G_{v}(x)$ defined by
\begin{align}
G_{v}(x) = \frac{1+v\hspace{-0.5em}/}{2} \left( -iD^{\ast}_{v\,\mu}(x) \gamma^{\mu} \gamma_{5} + D_{v}(x) \right),
\label{eq:G_field}
\end{align}
for $(0^{+},1^{+})$ mesons.
We consider also the light meson field $\Sigma = \left( \sigma^{a}+i\pi^{a} \right)T^{a}$ ($a=0,\dots,8$).\footnote{U(1)$_{\mathrm{V}}$ symmetry is included.}
Then, the heavy-light meson field ${\cal H}_{L,R}(x)$ and the light meson field $\Sigma$ are transformed as
\begin{align}
{\cal H}_{L,R}(x) \rightarrow S {\cal H}_{L,R}(x) g_{L,R}^{\dag},
\end{align}
and
\begin{align}
\Sigma \rightarrow g_{L}^{\dag} \Sigma g_{R},
\end{align}
according to the heavy quark spin transformation $S$ and chiral transformation $g_{L,R}$.
Then, we can construct the effective Lagrangian for the heavy-light meson ${\cal H}_{L,R}$ and the light meson $\Sigma$.
The effective Lagrangian of the light meson $\Sigma$ is given by the linear sigma model.
The effective Lagrangian concerning the heavy-light meson is given by the coupling to the light meson
\begin{align}
{\cal L}_{\mathrm{HL}}
&=
\frac{1}{2} \mathrm{Tr}
\left( \bar{\cal H}_{L} v\!\cdot\!i\partial {\cal H}_{L} + \bar{\cal H}_{R} v\!\cdot\!i\partial {\cal H}_{R} \right)
+ \frac{m_{0}}{2} \mathrm{Tr} \left( \bar{\cal H}_{L} {\cal H}_{L} + \bar{\cal H}_{R} {\cal H}_{R} \right)
\nonumber \\
&
+
\frac{g_{\pi}}{4} \mathrm{Tr} \left( \Sigma^{\dag} \bar{\cal H}_{L} {\cal H}_{R} + \Sigma \bar{\cal H}_{R} {\cal H}_{L} \right)
-i\frac{g_{A}}{2f_{\pi}}
\mathrm{Tr} \left( \gamma_{5}\left( \partial\hspace{-0.5em}/\hspace{0.1em}\Sigma^{\dag} \right) \bar{\cal H}_{L} {\cal H}_{R} - \gamma_{5} \left( \partial\hspace{-0.5em}/\hspace{0.1em}\Sigma \right) \bar{\cal H}_{R} {\cal H}_{L} \right),
\label{eq:linear_HL}
\end{align}
and by a four-point interaction of the heavy-light meson (see Ref.~\cite{Sasaki:2014asa} for more details).
The four-point interaction is introduced to realize the condensate of $D_{s}$ mesons at finite chemical potentials of strangeness and charm.
We consider the SU(3) flavor ``$\sigma$" meson ($J^{P}=0^{+}$), the singlet $\sigma_{0}$ and the octet $\sigma_{8}$, in the linear representation.
They are transformed into a non-strange scalar meson $\sigma_{q}$ and a strange scalar meson $\sigma_{s}$,
\begin{align}
\left(
\begin{array}{c}
\sigma_{q} \\
\sigma_{s}
\end{array}
\right)
=
\frac{1}{\sqrt{3}}
\left(
\begin{array}{cc}
\sqrt{2} & 1 \\
1 & -\sqrt{2}
\end{array}
\right)
\left(
\begin{array}{c}
\sigma_{0} \\
\sigma_{8}
\end{array}
\right),
\end{align}
as a mixing of $\sigma_{0}$ and $\sigma_{8}$.
Their condensates in vacuum are given by
\begin{align}
\langle \sigma_{q} \rangle &= f_{\pi}, \\
\langle \sigma_{s} \rangle &= \frac{1}{\sqrt{2}} \left( 2f_{K} - f_{\pi} \right),
\end{align}
with the pion and kaon decay constants, $f_{\pi}$ and $f_{K}$.
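As a quick consistency check (our sketch, not taken from Ref.~\cite{Sasaki:2014asa}), the mixing matrix above is orthogonal, and the vacuum condensates follow directly from the decay constants; the inputs $f_{\pi}=92.4$ MeV and $f_{K}=110$ MeV are illustrative assumptions:

```python
import numpy as np

# Mixing matrix between (sigma_0, sigma_8) and (sigma_q, sigma_s).
U = (1.0 / np.sqrt(3.0)) * np.array([[np.sqrt(2.0), 1.0],
                                     [1.0, -np.sqrt(2.0)]])

# The transformation is orthogonal (U U^T = 1), i.e. a pure rotation
# up to a reflection between the two bases.
print(np.allclose(U @ U.T, np.eye(2)))   # True

# Vacuum condensates from the decay constants (illustrative inputs in MeV).
f_pi, f_K = 92.4, 110.0
sigma_q = f_pi
sigma_s = (2.0 * f_K - f_pi) / np.sqrt(2.0)
print(round(sigma_q, 1), round(sigma_s, 1))   # 92.4 90.2
```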
The ground state of the heavy-light meson system is given by the stationary conditions for the thermodynamic potential $\Omega$,
\begin{align}
\frac{\partial \Omega}{\partial \sigma_{q}} = \frac{\partial \Omega}{\partial \sigma_{s}}
= \frac{\partial \Omega}{\partial D_{q}} = \frac{\partial \Omega}{\partial D_{s}} = 0.
\end{align}
As a result, it turned out that the masses of the $J^{P}=0^{-}$ and $0^{+}$ states of the $D$ mesons are almost constant at low temperature, and only the $0^{+}$ state becomes lighter around the critical temperature (Fig.~\ref{fig:Sasaki:2014asa}).
In addition, the mass difference between $J^{P}=0^{-}$ and $0^{+}$ states for $D$ mesons, $\delta M_{D}$, is almost the same as that between $J^{P}=0^{-}$ and $0^{+}$ states for $D_{s}$ mesons, $\delta M_{D_{s}}$.
This result suggests that the restoration of the broken chiral symmetry is almost the same both for $u$, $d$ quarks and $s$ quark.
This is different from the result with SU(4) flavor symmetry, namely that $\delta M_{D}$ is much smaller than $\delta M_{D_{s}}$~\cite{Roder:2003uz}.
In contrast to the linear representation, we may consider the non-linear representation at low temperature, because the heavy-light meson with positive parity can be eliminated due to the large mass~\cite{Sasaki:2014asa}.
In this setting, the mass difference between $J^{P}=0^{-}$ and $0^{+}$ states for $D$ mesons is investigated, where the pion one-loop effect is considered based on the chiral perturbation theory (e.g. Ref.~\cite{Harada:2003kt}).
Then, it is shown that the masses of both of $J^{P}=0^{-}$ and $0^{+}$ states become small (Fig.~\ref{fig:Sasaki:2014asa}).
The restoration of the broken chiral symmetry for the heavy-light mesons is also studied by the chiral susceptibility~\cite{Sasaki:2014wma}.
\begin{figure}[tbp]
\begin{center}
\includegraphics[scale=1.0,bb=0 0 174 121]{figs/4/mf_chpt_corr.pdf}
\caption{Masses of non-strange charm meson in the mean-field model (solid lines) and the corresponding result with one-loop in the chiral-perturbation theory \cite{Sasaki:2014asa}.}
\label{fig:Sasaki:2014asa}
\end{center}
\end{figure}
The mass degeneracy of the $0^{-}$ and $0^{+}$ states is also discussed with the Skyrmion crystal~\cite{Suenaga:2014sga}.
Because the $H_{v}(x)$ field as well as the $G_{v}(x)$ field couple to the pion field composing the Skyrmion,
their mass shifts can be analyzed by regarding the Skyrmion crystal as the background field (fcc type) with the inter-distance $L$ between two Skyrmions.
It turns out that the mass difference between $0^{-}$ and $0^{+}$ is still finite for large $L$ (dilute Skyrmion crystal), while it becomes small and the degeneracy is realized for small $L$ (half-Skyrmion).
This seems to indicate that the restoration of the broken chiral symmetry would be realized in the latter case.
However, we have to note that the chiral condensate is zero as an averaged value over the Skyrmion crystal, where the non-linear representation of chiral symmetry is assumed, and hence it should be distinguished from the case of the linear sigma model in Refs.~\cite{Sasaki:2014asa,Sasaki:2014wma}.
As a different situation, the masses of $\bar{D}(0^{-})$, $\bar{D}^{\ast}(1^{-})$, $\bar{D}_{0}^{\ast}(0^{+})$ and $\bar{D}_{1}(1^{+})$ mesons are investigated in the dual chiral density wave (DCDW) or the chiral density wave (CDW)~\cite{Suenaga:2015daa}.\footnote{See Ref.~\cite{Buballa:2014tba} for more details about DCDW and CDW.}
The DCDW and the CDW are considered to be realized in the high-density region of nuclear matter.
In the linear representation, we introduce the light meson field $M=\sigma+i\tau^{a}\pi^{a}$ and the heavy-light meson fields ${\cal H}_{L,R}$ for $(\bar{D}, \bar{D}^{\ast})$ and $(\bar{D}_{0}^{\ast},\bar{D}_{1})$,
the latter of which is defined in Eq.~(\ref{eq:G_field}).
We consider the effective Lagrangian invariant under the chiral transformation and the heavy quark spin transformation,
\begin{align}
{\cal L} &= \mathrm{tr} \left( {\cal H}_{L} i v \!\cdot\! \partial \bar{\cal H}_{L} \right) + \mathrm{tr} \left( {\cal H}_{R} i v \!\cdot\! \partial \bar{\cal H}_{R} \right)
+ \frac{\Delta_{m}}{2f_{\pi}} \mathrm{tr} \left({\cal H}_{L} M \bar{\cal H}_{R} + {\cal H}_{R} M^{\dag} \bar{\cal H}_{L} \right) \nonumber \\
& + i \frac{g_{1}}{2f_{\pi}} \mathrm{tr} \left( {\cal H}_{R}\gamma_{5}\gamma^{\mu}\partial_{\mu}M^{\dag} \bar{\cal H}_{L} - {\cal H}_{L}\gamma_{5}\gamma^{\mu}\partial_{\mu}M \bar{\cal H}_{R} \right)
+ i \frac{g_{2}}{2f_{\pi}} \mathrm{tr} \left( {\cal H}_{L}\gamma_{5}/\hspace{-0.5em}\partial M M^{\dag} \bar{\cal H}_{L} - {\cal H}_{R}\gamma_{5}/\hspace{-0.5em}\partial M^{\dag} M \bar{\cal H}_{R} \right) \nonumber \\
& +\frac{g_{3}}{2f_{\pi}} \mathrm{tr} \left( {\cal H}_{R}\gamma^{\mu}\partial_{\mu}M^{\dag} \bar{\cal H}_{L} + {\cal H}_{L}\gamma^{\mu}\partial_{\mu}M \bar{\cal H}_{R} \right)
+i\frac{g_{4}}{2f_{\pi}} \mathrm{tr} \left( {\cal H}_{L}/\hspace{-0.5em}\partial M M^{\dag} \bar{\cal H}_{L} + {\cal H}_{R}/\hspace{-0.5em}\partial M^{\dag} M \bar{\cal H}_{R} \right) \nonumber \\
& + {\cal O}(\partial^{2}M),
\label{eq:linear_HL_2}
\end{align}
where $\Delta_{m}$ is the mass difference between the multiplets $H$ and $G$, and $g_{1}$, $g_{2}$, $g_{3}$ and $g_{4}$ are the coupling constants.
We notice that this is slightly different from Eq.~(\ref{eq:linear_HL}).
We consider the DCDW configuration
\begin{align}
M = \phi \cos (2f x) + i\tau^{3} \phi \sin(2fx),
\end{align}
where $\tau^{3}$ is the third component of the Pauli matrices for isospin, $x$ is the distance along the DCDW, $f$ is the wave number, and $\phi$ is the amplitude of the wave.
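As a side check (a one-line identity of ours, not spelled out in Ref.~\cite{Suenaga:2015daa}), the DCDW configuration is a local chiral rotation of a constant condensate,
\begin{align}
M = \phi \left( \cos (2fx) + i\tau^{3} \sin(2fx) \right) = \phi\, e^{2ifx\tau^{3}},
\qquad
M^{\dag} M = \phi^{2}\,\mathbf{1},
\end{align}
so the magnitude of the condensate is uniform in space, while only its chiral orientation winds along $x$.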
The energy-momentum dispersion relation is obtained for $\bar{D}(0^{-})$, $\bar{D}^{\ast}(1^{-})$, $\bar{D}_{0}^{\ast}(0^{+})$ and $\bar{D}_{1}(1^{+})$ mesons in the DCDW.
The dispersion relation becomes different from that in vacuum due to the $x$-dependence of the $\sigma$ and $\pi$ fields,
and has a minimum at the finite momentum corresponding to the wave number of the DCDW.
When the amplitude $\phi$ is changed, the restoration of chiral symmetry leads to the degeneracy among the dispersion relations.
As long as $\phi$ keeps its vacuum value, however, we notice that the absolute value of the chiral condensate is not changed by this $x$-dependence; hence the change of the dispersion relations in the DCDW would not be relevant to the partial restoration of the broken chiral symmetry.
The in-medium masses of the $D_{0}^{\ast}(\bar{q}c)$ and $\bar{D}_{0}^{\ast}(q\bar{c})$ mesons are analyzed in the QCD sum rules~\cite{Hilger:2010zf}.
It was found that the averaged mass $(m_{D_{0}^{\ast}}+m_{\bar{D}_{0}^{\ast}})/2$ decreases in nuclear matter, while the mass difference $m_{D_{0}^{\ast}}-m_{\bar{D}_{0}^{\ast}}$ becomes negative.
To reach a conclusion about chiral symmetry, we need to investigate the mass difference between the $D$ and $D_{0}^{\ast}$ mesons as well as that between the $D^{\ast}$ and $D_{1}$ mesons.
As numerical values, the mass shifts $\Delta m_{D_{0}^{\ast}}=69$ MeV and $\Delta m_{B_{0}^{\ast}}=217$ MeV are obtained~\cite{Wang:2011mj}.
As for the vector and axial-vector mesons, $\Delta m_{D^{\ast}}=-71$ MeV and $\Delta m_{D_{1}}=72$ MeV for charm and $\Delta m_{B^{\ast}}=-380$ MeV and $\Delta m_{B_{1}}=264$ MeV for bottom are obtained~\cite{Wang:2011fv}.
We also refer to Ref.~\cite{Wang:2015uya} for updated values including the higher-order terms of $\alpha_{s}$ in the quark condensates.
These results show that the scalar heavy-light mesons become heavier.
We have to note that the mass difference between the positive-parity and negative-parity states becomes larger in nuclear matter.
This is in contrast to the argument about the partial restoration of chiral symmetry, namely that the mass difference should become smaller (cf.~Ref.~\cite{Bardeen:2003kt}).\footnote{Notice that the technique used in Refs.~\cite{Wang:2011mj,Wang:2011fv} is the same as that used in Ref.~\cite{Hayashigaki:2000es}. This approach is different from the Weinberg sum rules in Ref.~\cite{Hilger:2011cq}.}
\paragraph{Kondo effect}
One of the most important properties of charm hadrons is simply their heavy mass, which is much larger than those of the light hadrons ($\pi$ mesons, $\rho$ mesons, nucleons and so on).
The heavy mass induces interesting impurity phenomena.
Here we consider the Kondo effect.
The Kondo effect is the phenomenon that the electric resistance in metals is enhanced when impurity atoms with finite spin are contained as impurity particles~\cite{Kondo:1964}.\footnote{See for example Refs.~\cite{Hewson,Yosida,Yamada} as textbooks.}
The electric resistance is related to the scattering amplitude between the conducting electron and the impurity particle.
When the interaction between the conducting electron and the impurity particle has the spin-dependence (attraction in spin-singlet channel),
the scattering amplitude suffers from the infrared instability of the Fermi surface,\footnote{The Fermi surface is unstable against attraction (e.g. the Cooper instability in superconductivity), see Ref.~\cite{abrikosov1975methods}.} and it becomes logarithmically divergent as $\sim \ln T/T_{\mathrm{K}}$ at low temperature $T < T_{\mathrm{K}}$.
The temperature $T_{\mathrm{K}}$ characterizing the energy scale of the Kondo effect is called the Kondo temperature.
In the perturbative approach, there are four conditions under which the Kondo effect occurs:
(i) heavy impurity particle, (ii) Fermi surface (degenerate state), (iii) quantum fluctuation (loop effect) and (iv) non-Abelian (e.g. spin-dependent) interaction.
As for the non-Abelian interaction, the sign of the coupling is important as discussed below.
As long as those conditions are satisfied, the Kondo effect can occur not only in electron systems but also in nuclear matter~\cite{Yasui:2013xr,Yasui:2016ngy} as well as in quark matter~\cite{Yasui:2013xr,Hattori:2015hka,Ozaki:2015sya,Yasui:2016svc}.
Let us pick up two subjects discussed in Refs.~\cite{Yasui:2013xr,Yasui:2016ngy}.
First of all, we consider a simple example for understanding the Kondo effect.
We assume the interaction Hamiltonian
\begin{align}
H_{\mathrm{int}} = G \sum_{c=1}^{n^{2}-1} \sum_{kl,ij=1}^{n} \psi_{k}^{\dag} \left( \lambda^{c} \right)_{kl} \psi_{l} \Psi_{i}^{\dag} \left( \lambda^{c} \right)_{ij} \Psi_{j},
\label{eq:Kondo_simple}
\end{align}
with $G>0$ the coupling constant, $\psi_{k}$ the fermion field composing the Fermi surface, $\Psi_{i}$ the heavy impurity field, and $\lambda^{c}$ the Gell-Mann matrices ($c=1,\dots,n^{2}-1$) of the SU($n$) group.
At the tree level, the scattering amplitude is simply given by
\begin{align}
M_{kl,ij}^{(0)} = G \sum_{c=1}^{n^{2}-1} \left( \lambda^{c} \right)_{kl} \left( \lambda^{c} \right)_{ij}.
\label{eq:M_0}
\end{align}
At the one-loop level, the scattering amplitude is given by
\begin{align}
M_{kl,ij}^{(1)} = G^{2} \rho_{0} \frac{n}{2} \sum_{c=1}^{n^{2}-1} \left( \lambda^{c} \right)_{kl} \left( \lambda^{c} \right)_{ij} \int_{0} \frac{\mathrm{d}E}{E-i\varepsilon},
\label{eq:M_1}
\end{align}
with $\rho_{0}$ the state number density at the Fermi surface and $E$ the intermediate energy in the particle and hole loops, which is measured from the Fermi surface, as shown in the diagrams in Fig.~\ref{fig:Kondo_pert}.
Notice that the non-Abelian property of $\lambda^{c}$ leaves the $(\lambda^{c})_{kl}(\lambda^{c})_{ij}$ type-dependence in Eq.~(\ref{eq:M_1}).
First, we find that $M_{kl,ij}^{(1)}$ has a logarithmic divergence in the infrared energy region ($E \simeq 0$).\footnote{It should be noted that the infrared divergence is generated dynamically. This is qualitatively different from the intrinsic ultraviolet divergence due to the four-point interaction in Eq.~(\ref{eq:Kondo_simple}).}
Second, by comparing Eqs.~(\ref{eq:M_0}) and (\ref{eq:M_1}), we also find that $M_{kl,ij}^{(1)}$ can become larger than $M_{kl,ij}^{(0)}$ for arbitrarily small coupling $G$ due to the logarithmic divergence.
The enhancement of the loop contributions indicates that the system is inevitably governed by non-perturbative dynamics for any small coupling.
This is called the Kondo effect.\footnote{The typical scale of the infrared energy relevant to the Kondo effect is called the Kondo scale, which is regarded to be the same as the Kondo temperature $T_{\mathrm{K}}$. This quantity is evaluated by the renormalization group analysis~\cite{Hattori:2015hka}.}
In the above analysis, we can confirm that the four conditions (i)-(iv) play the essential role (see e.g. Ref.~\cite{Hattori:2015hka}).
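The role of conditions (ii)-(iv) can be made concrete for $n=2$ with a small numerical sketch (ours, not taken from the references; for SU(2) the Gell-Mann matrices reduce to the Pauli matrices). The eigenvalues of the tree-level factor $\sum_{c}\lambda^{c}\otimes\lambda^{c}$ show that for $G>0$ only the singlet channel is attractive, and the one-loop energy integral grows logarithmically as the infrared cutoff is lowered:

```python
import numpy as np

# Pauli matrices play the role of the SU(n) generators lambda^c for n = 2.
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)

# Tree-level factor sum_c (lambda^c)_{kl} (lambda^c)_{ij} acting on the
# (fermion x impurity) isospin space, cf. the amplitude M^(0).
F = sum(np.kron(s, s) for s in (sx, sy, sz))
print(np.sort(np.linalg.eigvalsh(F)))   # [-3.  1.  1.  1.]
# With G > 0 the amplitude is negative (attractive) only in the singlet channel.

# The one-loop integral int dE/E over the intermediate energy E, measured from
# the Fermi surface, grows without bound as the infrared cutoff delta -> 0.
Lambda = 1.0
for delta in (1e-1, 1e-3, 1e-6):
    print(f"delta = {delta:g}:  log enhancement = {np.log(Lambda / delta):.2f}")
```

The eigenvalue $-3$ in the singlet channel (against $+1$ in the triplet) is what singles out the isospin-singlet attraction for $G>0$.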
\begin{figure}[t]
\begin{center}
\includegraphics[scale=0.25,bb=0 0 681 163]{figs/4/160525_Kondo.pdf}
\caption{Diagrams for the Kondo effect at one-loop level. The particle loop (left) and the hole loop (right) are shown. The nucleon propagation is represented by the single lines, and the heavy impurity propagation is represented by the double lines, where $i$, $j$, $k$ and $l$ are the indices for fundamental representation of SU($n$) symmetry ($n=2$ for isospin).}
\label{fig:Kondo_pert}
\end{center}
\end{figure}
In Ref.~\cite{Yasui:2013xr}, the $\bar{D}$ ($B$) meson is regarded as the heavy impurity particle in nuclear matter, and the scattering amplitude between the impurity ($\Phi$) and the nucleon ($\psi$) is investigated at one-loop level.
As for the condition (iv), the non-Abelian property of the interaction is provided not by spin exchange but by isospin exchange, because the $\bar{D}$ ($B$) meson has spin zero and isospin one half.
The interaction Lagrangian is given by the vector-current type
\begin{align}
{\cal L}_{\mathrm{int}} = - \frac{G_{B}}{2}
\sum_{a=1}^{n^{2}-1}
(\bar{\psi}\gamma_{\mu}\lambda^{a}\psi)
\left( -(i\partial^{\mu} \Phi^{\dag})\lambda^{a}\Phi + \Phi^{\dag} \lambda^{a} i\partial^{\mu} \Phi \right),
\end{align}
with $\lambda^{a}$ ($a=1,\dots,n^{2}-1$) being the Gell-Mann matrices for SU($n$) symmetry in general ($n=2$ for isospin).
Here we assume that the coupling constant $G_{B}$ is positive, so that the isospin-singlet channel is attractive.
This assumption would be reasonable when we consider the $\pi$, $\rho$ meson-exchange interaction between a $\bar{D}$ meson and a nucleon.\footnote{See Eq.~(\ref{eq:DbarNcontact}) or Eq.~(\ref{eq:DbarNcontactSU8}) for the contact interaction with SU(4) or SU(8) symmetry and Eq.~(\ref{eq:OPEP_DbarN_1/2}) for the OPEP with heavy quark symmetry.}
Then, the scattering amplitude of the $\bar{D}$ meson and the nucleon was calculated at the one-loop level, and it was found that there is a logarithmic enhancement at the infrared momentum scale from the loop ($\bar{D}$-nucleon and $\bar{D}$-hole) effect.
This is called the isospin Kondo effect.
The enhanced scattering amplitude affects transportation properties of a nucleon.
The result that the one-loop contribution diverges indicates that the second-order perturbation becomes larger than the first order, and hence the perturbative expansion breaks down at the infrared momentum scale.
Hence, to obtain the ground state, we need a non-perturbative approach beyond the perturbative one.
There are several theoretical methods to analyze such non-perturbative dynamics~\cite{Hewson,Yosida,Yamada}.
Among them, the mean-field approach is useful for an intuitive understanding.
This method is applied to analyze a $\bar{D}$ ($B$) meson bound in discrete energy levels in an atomic nucleus, which exhibits the isospin Kondo effect~\cite{Yasui:2016ngy}.
Here, to simplify the model as far as possible, it is supposed that there is only a single valence orbital with energy $\epsilon$ for nucleons, whose degenerate states are labeled by $k=1,\cdots,N$, and the Hamiltonian
\begin{align}
H
=
\sum_{k=1}^{N}\sum_{\sigma=\uparrow,\downarrow} \epsilon {c_{k \sigma}}^{\!\dag} c_{k \sigma} +
g \sum_{k,k'=1}^{N} \left( {c_{k' \downarrow}}^{\!\dag} c_{k \uparrow} \, T_{+} + {c_{k' \uparrow}}^{\!\dag} c_{k \downarrow} \, T_{-} + ({c_{k' \uparrow}}^{\!\dag} c_{k \uparrow} \!-\! {c_{k' \downarrow}}^{\!\dag} c_{k \downarrow}) \, T_{3} \right),
\label{eq:HK}
\end{align}
is introduced.
Here $c_{k\sigma}^{(\dag)}$ is the annihilation (creation) operator for a nucleon with the orbital state $k$ and the isospin $\sigma=\uparrow,\downarrow$,
and
$T_{+}$, $T_{-}$ and $T_{3}$ are the raising and lowering operators and the third component of the isospin operator of the bound $\bar{D}$ ($B$) meson.
The sign of the coupling constant is supposed to be positive, $g>0$, to give an attraction in the isospin-singlet channel.
Though this is a very simple model, it satisfies the conditions (i)-(iv) essential for the Kondo effect.
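As a cross-check of the attraction in the isospin-singlet channel, Eq.~(\ref{eq:HK}) can be diagonalized exactly for $N=1$. The short script below is an illustration added here, not part of the original analysis; the parameter values $\epsilon=0$, $g=1$ are arbitrary. It builds the 8-dimensional space of one nucleon orbital times the impurity isospin and verifies that the ground state is the unique isospin singlet at energy $\epsilon - 3g/2$, below the triplet at $\epsilon + g/2$.

```python
import numpy as np

# Single-mode fermion annihilation (a|1> = |0>) and Jordan-Wigner string
a  = np.array([[0., 1.], [0., 0.]])
Z  = np.diag([1., -1.])
I2 = np.eye(2)

def kron(*ops):
    out = np.array([[1.]])
    for op in ops:
        out = np.kron(out, op)
    return out

# Ordering: nucleon isospin-up mode, nucleon isospin-down mode, impurity isospin
c_up = kron(a, I2, I2)
c_dn = kron(Z, a, I2)
Tp   = kron(I2, I2, np.array([[0., 1.], [0., 0.]]))   # T_+
Tm   = Tp.T                                           # T_-
T3   = kron(I2, I2, np.diag([0.5, -0.5]))

eps, g = 0.0, 1.0                     # illustrative parameter values
n_up, n_dn = c_up.T @ c_up, c_dn.T @ c_dn
H = eps * (n_up + n_dn) + g * (c_dn.T @ c_up @ Tp + c_up.T @ c_dn @ Tm
                               + (n_up - n_dn) @ T3)

evals = np.linalg.eigvalsh(H)   # ascending eigenvalues
# unique isospin-singlet ground state at eps - 3g/2; triplet sits at eps + g/2
print(round(evals[0], 6), round(evals[-1], 6))   # -1.5 0.5
```

Interpreting the interaction as $2g\,\vec{s}_{N}\!\cdot\!\vec{T}$, the singlet and triplet energies $\epsilon-3g/2$ and $\epsilon+g/2$ follow from $\vec{s}\cdot\vec{T}=\frac{1}{2}\left[S(S+1)-\frac{3}{2}\right]$, which the numerics reproduce.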
As for Eq.~(\ref{eq:HK}), we notice that a naive mean-field approximation would not be useful, because the mean field may be smoothed out by the isospin fluctuations induced by the isospin-exchange interaction.
As a device, we extend the Fock space of the impurity isospin.
We introduce the quasi-fermion fields $f_{\uparrow,\downarrow}$ corresponding to the isospin up and down states of the $\bar{D}$ ($B$) meson, and rewrite the isospin operator of the impurity particle as
\begin{align}
T_{+}={f_{\uparrow}}^{\dag} f_{\downarrow}, \hspace{0.5em}
T_{-}={f_{\downarrow}}^{\dag} f_{\uparrow}, \hspace{0.5em}
T_{3}=\frac{1}{2} \left( {f_{\uparrow}}^{\dag} f_{\uparrow} - {f_{\downarrow}}^{\dag} f_{\downarrow} \right),
\end{align}
provided that the constraint condition,
\begin{align}
\sum_{\sigma = \uparrow, \downarrow} {f_{\sigma}}^{\dag} f_{\sigma} =1,
\end{align}
is imposed because the number of impurity particles should be one on average.
The constraint condition can be considered in the modified Hamiltonian
\begin{align}
\tilde{H} = H + \lambda \left( \sum_{\sigma = \uparrow, \downarrow} {f_{\sigma}}^{\dag} f_{\sigma} -1 \right),
\end{align}
with the Lagrange multiplier $\lambda$ for Eq.~(\ref{eq:HK}).
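As a minimal consistency check (not in the original), one can verify numerically that the pseudo-fermion bilinears reproduce the su(2) algebra, and that the Casimir operator takes the spin-1/2 value $3/4$ only on the physical $n_{f}=1$ subspace selected by the constraint:

```python
import numpy as np

a  = np.array([[0., 1.], [0., 0.]])   # annihilation on one mode, a|1> = |0>
Z  = np.diag([1., -1.])               # Jordan-Wigner sign
I2 = np.eye(2)

# two pseudo-fermion modes (isospin up / down): 4-dimensional Fock space
f_up = np.kron(a, I2)
f_dn = np.kron(Z, a)

Tp = f_up.T @ f_dn
Tm = f_dn.T @ f_up
T3 = 0.5 * (f_up.T @ f_up - f_dn.T @ f_dn)

# the su(2) commutation relations hold on the whole Fock space
assert np.allclose(Tp @ Tm - Tm @ Tp, 2 * T3)
assert np.allclose(T3 @ Tp - Tp @ T3, Tp)

# Casimir T^2 = 3/4 (spin 1/2) only on the physical n_f = 1 states;
# the unphysical n_f = 0, 2 states carry T = 0 and drop out under the constraint
n_f = f_up.T @ f_up + f_dn.T @ f_dn
casimir = T3 @ T3 + 0.5 * (Tp @ Tm + Tm @ Tp)
print(np.diag(casimir).tolist(), np.diag(n_f).tolist())
```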
With the help of the new operators $f_{\uparrow,\downarrow}$, the three-point interaction in Eq.~(\ref{eq:HK}) is changed to a four-point interaction, and it becomes possible to apply the mean-field approximation by picking up pairs of operators (e.g. ${f_{\sigma}}^{\dag} c_{k\sigma}$) from the four operators.
We define the mean-field (energy gap),
\begin{align}
\Delta = -g\sum_{k,\sigma} \langle {f_{\sigma}}^{\dag} c_{k\sigma} \rangle,
\end{align}
which represents the mixing between the nucleon and the impurity particle.
The price for this simplification is that the Fock space is extended by the introduction of $f_{\uparrow,\downarrow}$.
As a result, it turns out that the energy of the system obtained by the mean-field approximation is comparable with the exact solution.
The approximate solution becomes much closer to the exact one when the quantum fluctuation around the mean field is also taken into account by the random-phase approximation.
The mean-field approximation as well as the random-phase approximation would be applicable to realistic model settings for a $\bar{D}$ ($B$) meson in an atomic nucleus.\footnote{Recently, the mean-field approximation in a field-theoretic form was applied to the QCD Kondo effect in light-flavor quark matter with heavy quarks distributed as impurity particles, where the non-Abelian interaction is given by the color exchange~\cite{Yasui:2016svc}.}
\paragraph{Spin-isospin correlated nuclear matter}
Concerning the $\bar{D}^{(\ast)}$ meson, we have discussed the attraction induced by the $\bar{D}N$-$\bar{D}^{\ast}N$ mixing~\cite{Yasui:2009bz,GarciaRecio:2008dp,Yamaguchi:2011xb,Yamaguchi:2011qw}.
This is a mixing effect in the two-body scattering supported by the accompanying nucleon~\cite{Yasui:2012rw,GarciaRecio:2011xt}.
We note that there is no $\bar{D}$-$\bar{D}^{\ast}$ mixing in vacuum.
However, when a finite spin-isospin correlation exists in nuclear matter, the $\bar{D}$-$\bar{D}^{\ast}$ mixing can happen as a single-body scattering in nuclear matter~\cite{Suenaga:2014dia}.
It is known that such spin-isospin correlated nuclear matter is realized in the pion condensate~\cite{Kunihiro:PTEP112_123,Kunihiro:PTEP112_197}.
Let us summarize the results about the $\bar{D}^{(\ast)}$ meson mass spectrum in the spin-isospin correlated nuclear matter in Ref.~\cite{Suenaga:2014dia}.
The state with the spin-isospin correlation can be expressed by
\begin{align}
\langle A^{i a} \rangle = \alpha \, \delta^{ia} \hspace{1em} (\mathrm{Pattern}\hspace{0.2em}\mathrm{I}),
\hspace{1em} \mathrm{or} \hspace{1em}
\langle A^{i a} \rangle = \alpha \, \delta^{i3} \delta^{a3} \hspace{1em} (\mathrm{Pattern}\hspace{0.2em}\mathrm{II}),
\end{align}
where $A^{i a}$ is the spatial component of the pion axial-vector current, with $i=x,y,z$ the space direction and $a=1,2,3$ the isospin direction.
$\alpha$ measures the magnitude of the pion condensate.
Inserting those configurations into Eq.~(\ref{eq:L_heavy_hadron_pion}),
we obtain the mass spectrum of $\bar{D}^{(\ast)}$ meson in the spin-isospin correlated nuclear matter.
In Pattern I, the SU(2)$_{l}$ spin symmetry and the SU(2)$_{I}$ isospin symmetry are coupled to each other in the spin-isospin correlated nuclear matter.
As a result, the symmetry is broken to the diagonal SU(2)$_{\mathrm{diag}}$ symmetry.
In Pattern II, the spin-isospin correlated nuclear matter is invariant under $\mathrm{U}(1)_{l} \times \mathrm{U}(1)_{I} \times Z_{2}$ symmetry, where $\mathrm{U}(1)_{l}$ and $\mathrm{U}(1)_{I}$ are spin and isospin symmetries,
and $Z_{2}$ is the simultaneous transformation by $\mathrm{U}(1)_{l}$ and $\mathrm{U}(1)_{I}$.
First of all, we notice that the $\bar{D}$ and $\bar{D}^{\ast}$ mesons in normal nuclear matter have eight degrees of freedom in total, due to the spin 1/2 and isospin 1/2 of the light quark component ($q$) and the spin 1/2 of the heavy antiquark component ($\bar{Q}$).
Those eight states are degenerate in the heavy quark limit.
Because the heavy quark spin is irrelevant to the dynamics in this limit,
the number of relevant degrees of freedom is four, carried by the $q$.
However, the mass degeneracy of those four states is resolved in the spin-isospin correlated nuclear matter.
In Pattern I, the light component $q$ becomes a triplet representation (${\bf 3}_{q}$) or a singlet representation (${\bf 1}_{q}$) of SU(2)$_{\mathrm{diag}}$.
When the heavy antiquark is added, the former becomes ${\bf 3}_{q} \times {\bf 2}_{\bar{Q}} = {\bf 2}_{q\bar{Q}}+{\bf 4}_{q\bar{Q}}$, and the latter becomes ${\bf 1}_{q} \times {\bf 2}_{\bar{Q}} = {\bf 2}_{q\bar{Q}}^{\prime}$.
Thus, in the heavy quark limit, there are six degenerate states from ${\bf 2}_{q\bar{Q}}$ and ${\bf 4}_{q\bar{Q}}$, and there are two degenerate states from ${\bf 2}_{q\bar{Q}}^{\prime}$.
In Pattern II, there are two different charges, $(+,+)$ and $(+,-)$, in correspondence to $\mathrm{U}(1)_{l} \times \mathrm{U}(1)_{I}$ symmetry.
Notice that $(-,-)$ is equivalent to $(+,+)$ and that $(-,+)$ is equivalent to $(+,-)$ because there is $Z_{2}$ symmetry.
Therefore, as for the light quark component, there is a doublet state (${\bf 2}_{q}$) as well as another doublet state (${\bf 2}'_{q}$).
When the heavy antiquark is added, the former becomes ${\bf 2}_{q} \times {\bf 2}_{\bar{Q}} = {\bf 1}_{q\bar{Q}}+{\bf 3}_{q\bar{Q}}$ and the latter becomes ${\bf 2}_{q}^{\prime} \times {\bf 2}_{\bar{Q}} = {\bf 1}_{q\bar{Q}}^{\prime} + {\bf 3}_{q\bar{Q}}^{\prime}$.
Thus, in the heavy quark limit, there are four degenerate states from ${\bf 1}_{q\bar{Q}}$ and ${\bf 3}_{q\bar{Q}}$, and there are another four degenerate states from ${\bf 1}_{q\bar{Q}}^{\prime}$ and ${\bf 3}_{q\bar{Q}}^{\prime}$.
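The multiplet counting above is ordinary SU(2) angular momentum coupling and can be checked mechanically. The snippet below is a consistency check added here, not taken from Ref.~\cite{Suenaga:2014dia}; it decomposes $j_{1} \otimes j_{2}$ into multiplet dimensions:

```python
from fractions import Fraction as F

def couple(j1, j2):
    """SU(2) decomposition j1 (x) j2 -> sorted list of multiplet dimensions."""
    j, jmax = abs(j1 - j2), j1 + j2
    dims = []
    while j <= jmax:
        dims.append(int(2 * j + 1))
        j += 1
    return sorted(dims)

# Pattern I: light triplet / singlet of SU(2)_diag coupled to heavy-quark spin 1/2
assert couple(F(1), F(1, 2)) == [2, 4]      # 3 x 2 = 2 + 4
assert couple(F(0), F(1, 2)) == [2]         # 1 x 2 = 2'
# Pattern II: light doublets coupled to heavy-quark spin 1/2
assert couple(F(1, 2), F(1, 2)) == [1, 3]   # 2 x 2 = 1 + 3
# total number of q-Qbar states is preserved: 4 (light) x 2 (heavy) = 8
assert 2 + 4 + 2 == sum([1, 3]) * 2 == 8
```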
\begin{figure}[tbp]
\begin{minipage}{0.5\hsize}
\centering
\includegraphics[scale=0.7,bb=0 0 260 164]{figs/4/Suenaga_Fig3.pdf}
\end{minipage}
\begin{minipage}{0.5\hsize}
\centering
\includegraphics[scale=0.7,bb=0 0 260 170]{figs/4/Suenaga_Fig4.pdf}
\end{minipage}
\caption{The mass spectrum of $\bar{D}^{(\ast)}$ meson in the pion condensate with Pattern I (left) and Pattern II (right) \cite{Suenaga:2014dia}. In the left panel, the solid curve indicates the ${\bf 4}_{q\bar{Q}}$ state, and the two dashed curves indicate the mixed states of ${\bf 2}_{q\bar{Q}}$ and ${\bf 2}_{q\bar{Q}}^{\prime}$. In the right panel, two of the four curves indicate the mixed states of ${\bf 1}_{q\bar{Q}}$ and ${\bf 1}_{q\bar{Q}}^{\prime}$, and the other two the mixed states of ${\bf 3}_{q\bar{Q}}$ and ${\bf 3}_{q\bar{Q}}^{\prime}$.}
\label{fig:Suenaga:2014dia}
\end{figure}
The obtained mass spectrum of the $\bar{D}^{(\ast)}$ in the spin-isospin correlated nuclear matter is shown in Fig.~\ref{fig:Suenaga:2014dia}.
In reality, we have to consider the breaking of the heavy quark symmetry at finite charm quark mass, namely the mass difference between a $\bar{D}$ meson and a $\bar{D}^{\ast}$ meson.
In Pattern I, accordingly, the degeneracy of ${\bf 2}_{q\bar{Q}}$ and ${\bf 4}_{q\bar{Q}}$ is resolved, and the masses of ${\bf 2}_{q\bar{Q}}$ and ${\bf 4}_{q\bar{Q}}$ become different.
Among them, ${\bf 2}_{q\bar{Q}}$ and ${\bf 2}_{q\bar{Q}}^{\prime}$ are mixed with each other.
In Pattern II, the degeneracy of ${\bf 1}_{q\bar{Q}}$ and ${\bf 3}_{q\bar{Q}}$ is resolved, and the degeneracy of ${\bf 1}_{q\bar{Q}}^{\prime}$ and ${\bf 3}_{q\bar{Q}}^{\prime}$ is also resolved.
Then, ${\bf 1}_{q\bar{Q}}$ and ${\bf 1}_{q\bar{Q}}^{\prime}$ are mixed, and ${\bf 3}_{q\bar{Q}}$ and ${\bf 3}_{q\bar{Q}}^{\prime}$ are also mixed.
In this case, due to the $\mathrm{U}(1)_{l} \times \mathrm{U}(1)_{I}$ symmetry and the $Z_{2}$ symmetry,
the $\Bigl\{(0,+), (0,-)\Bigr\}$ states exist as degenerate ones in correspondence to the $\bar{D}$ meson,
and the $\Bigl\{ (0,+), (0,-)\Bigr\}$, $\Bigl\{(+,+), (-,-)\Bigr\}$ and $\Bigl\{(+,-), (-,+)\Bigr\}$ states exist as degenerate ones in correspondence to the $\bar{D}^{\ast}$ meson.
Among them, the two $\Bigl\{ (0,+), (0,-)\Bigr\}$ states are mixed with each other.
As a more general form, we may consider the following spin-isospin correlation
\begin{align}
\langle A^{i a} \rangle = \sum_{j=1,2,3} \alpha_{j} \delta^{ij} \delta^{aj} \hspace{1em} (\mathrm{Pattern}\hspace{0.2em}\mathrm{III}).
\end{align}
In this case, there are four doublets whose masses are all different.
As for other exotic forms of nuclear matter, the $\bar{D}^{(\ast)}$ meson mass spectrum in Skyrmion matter and in chiral density waves was also studied~\cite{Suenaga:2014sga,Suenaga:2015daa}.
\section{Heavy baryons}
\label{sec:charm_baryons}
\subsection{Early studies on $\Lambda_c N$ interaction and $\Lambda_{c}$ nuclei}
Historically, the $\Lambda_c$ nucleus was the first nucleus with charm flavor to be studied, in the 1970s.
At that time, SU(4) flavor symmetry was applied as a straightforward extension of the SU(3) flavor symmetry used for hypernuclei with strangeness, and many bound states, including excited states, in $\Lambda_{c}$ nuclei were discussed.
An early idea can be found in Refs.~\cite{Tyapkin1975,Tyapkin1976,Iwao:1976yi}.
In Ref.~\cite{Iwao:1976yi}, it was already pointed out that the pion exchange potential accompanying the $\Lambda_{c}N$-$\Sigma_{c}N$ mixing gives an attraction for a $\Lambda_{c}$ in a nucleus.
This is an analogue of the $\Lambda N$-$\Sigma N$ mixing in hypernuclei~\cite{Afnan:1989wb,Afnan:1990vs}.
The binding energy of the most stable state of a $\Lambda_{c}$ in nuclear matter was estimated to be about 28 MeV, assuming the mean-field potential of $\Lambda$ hypernuclear matter in the free Fermi gas approximation~\cite{Gatto:1978ka}.
For a small system, the binding energy of a $\Lambda_{c}$ in a He nucleus was obtained as about 8 MeV, supposing a mean field taken from $\Lambda$ hypernuclear studies.
A $\Lambda_{c}\bar{\Lambda}_{c}$ bound state with binding energy 4.1 MeV was also discussed~\cite{Dover:1977hs}.
Based on SU(4) flavor symmetry,
$\Lambda_c$ and $\Sigma_c$ bound states were discussed for He, C, O, Ni and Pb nuclei~\cite{Dover:1977jw}.
The interaction between a $\Lambda_{c}$ and a nucleon was given by a sum of meson exchange potentials with scalar and vector mesons at long and intermediate distances ($r>r_{c}$) and a hard core potential at short distance ($r<r_{c}$), with the core radius $r_{c}=0.552$ fm.
This interaction is the same as that in Ref.~\cite{Dover:1977hs}.
The shell model calculation was performed by using a Hartree-type potential without the spin-orbit potential, and the bound states of $\Lambda_{c}$, $\Sigma_{c}$, $\Xi_{c}$, $\Xi_{c}^{\ast}$ baryons were investigated.
It turned out that the binding energy in a He nucleus is about 15 MeV for the S-wave and 1 MeV for the P-wave, and the binding energy in a Pb nucleus is about 60 MeV for the S-wave.
More detailed structures were analyzed based on the modern nuclear potentials in Refs.~\cite{Bando:1981ti,Bando:1983yt,Bando:1985up}.
In Ref.~\cite{Bando:1981ti}, based on SU(4) flavor symmetry, the authors used a $\Lambda_{c}N$ potential of the Gaussian type used for the $\Lambda N$ potential.
They found a binding energy of about 3.1 MeV for the bound $\Lambda_{c}N$ system.
Cluster structures of $\Lambda_{c}$ nuclei were also investigated by regarding the $^{8}\mathrm{Be}$ nucleus as an $\alpha \alpha$ cluster.
They used the folding potential, and obtained binding energies from 1.47 MeV to 12.28 MeV.
They also showed that a rotational band appears in the energy spectrum.
Furthermore, in Refs.~\cite{Bando:1983yt,Bando:1985up},
the authors made an extension of the Nijmegen one-boson-exchange potential from SU(3) flavor symmetry to SU(4) flavor symmetry for charm, as well as to SU(5) flavor symmetry for bottom, along the line in Ref.~\cite{Dover:1977jw}.
It has the competition between the attraction by $\sigma$ meson exchange and the repulsion by $\omega$ meson exchange.
In comparison with the $\Lambda N$ potential, the $\Lambda_{c}$/$\Lambda_{b}$ potential has the property that the P-wave attraction is enhanced relative to the S-wave attraction, because the Majorana-type force from $K$, $K^{\ast}$ meson exchange is absent in the $\Lambda_{c}$/$\Lambda_{b}$ case.\footnote{It is known that the $K$ ($K^{\ast}$) meson exchange in the $\Lambda N$ interaction gives a sizable attraction (repulsion) in even (odd) angular momentum due to its Majorana-type character.}
The $D$, $D^{\ast}$ meson exchanges were considered to be unimportant because of their large masses.
This property leads to a non-locality (momentum dependence) in the single-particle potential of the $\Lambda_{c}$/$\Lambda_{b}$ baryon in a nucleus.
The Nijmegen potential extended in this way was applied to calculate the binding energy of a $\Lambda_{c}$ in an $\alpha$ particle, and it was found that the $\Lambda_{c}$ is barely bound, while there is a bound state with a binding energy of $3.1$ MeV for $\Lambda_{b}$.
The authors also investigated the binding energy of a $\Lambda_{c}$ in nuclear matter ($\simeq 20$ MeV) by the Brueckner (G-matrix) approach.
Some comments are in order for the results in Refs.~\cite{Bando:1981ti,Bando:1983yt,Bando:1985up}.
Although the attraction of $\Lambda_{c}$ ($\Lambda_{b}$) is weaker than that of $\Lambda$,
a $\Lambda_{c}$ ($\Lambda_{b}$) bound state can appear by means of the suppressed kinetic energy due to the heavy mass.
As for the nuclear matter calculation, though the depth of the effective $\Lambda_{c}$ ($\Lambda_{b}$) potential in the G-matrix calculation is about 2/3 of that of the effective $\Lambda$ potential, the number of bound states is larger than that in $\Lambda$ hypernuclei.
Among the partial wave components, the P-wave components of $\Lambda_{c}$ ($\Lambda_{b}$) in nuclear matter are enhanced compared with those of $\Lambda$ in nuclear matter, due to the P-wave attraction.
For example, the most attractive channel in $\Lambda$ hypernuclei is given by $^{3}\mathrm{S}_{1}$+$^{3}\mathrm{D}_{1}$.
This is also the case for $\Lambda_{c}$ nuclei.
In addition to this channel, however, $^{3}\mathrm{P}_{2}$+$^{3}\mathrm{F}_{2}$ also shows a comparable magnitude in $\Lambda_{c}$ nuclei.
This property is interesting for nuclear many-body dynamics, because it may enable us to probe higher partial wave contributions by injecting a $\Lambda_{c}$ ($\Lambda_{b}$) as an impurity.
The application of the mean field to finite nuclei was considered in Ref.~\cite{Bando:1985up}.
For example, the authors obtained a binding energy of a $\Lambda_{c}$ ($\Lambda_{b}$) of about 23 MeV (32 MeV) in a Pb nucleus.
A few-body calculation was performed for a $\Lambda_{c}$ in $^{3}$He and $^{4}$He nuclei as well as in a $^{5}$Li nucleus by assuming a separable-type potential~\cite{Gibson:1983zw}.
A binding energy of about 10 MeV was obtained at most.
Afterwards, the study of $\Lambda_c$ in nuclear matter was developed by considering the effect of the quark degrees of freedom~\cite{Froemel:2004ea,Huang:2013zva}.
With reference to the phenomenological nuclear potentials (Nijmegen, AV18)~\cite{Hayashigaki:1998ey,Wiringa:1994wb},
the $\Lambda_{c}N$ potential was classified by the spin and isospin structures, $1$, $\vec{\sigma}_{i} \!\cdot\! \vec{\sigma}_{j}$, $\vec{\tau}_{i} \!\cdot\! \vec{\tau}_{j}$ and $(\vec{\sigma}_{i} \!\cdot\! \vec{\sigma}_{j}) (\vec{\tau}_{i} \!\cdot\! \vec{\tau}_{j})$.
The coupling constants in those terms were changed according to the scaling by a quark wave function in a quark model, and the $\Lambda_{c}N$ potential was obtained.
Intuitively, the vertex strength becomes smaller, because the number of light quarks is smaller than in light baryons.
The charm dibaryon $\Xi_{c}'N$, $\Xi_{cc}N$ states were investigated, and it was found that the binding energy of $\Xi_{c}'N$ is about 10 MeV at most, and the binding energy of $\Xi_{cc}N$ ranges from a few to a few hundred MeV.
We note, however, that the values of the binding energies differ among individual model settings.
In fact, no bound state was found in some cases.
The authors also investigated $\Sigma_{c}N$ ($I=3/2,1/2$).
Furthermore, multi-charm dibaryons, $\Xi \Xi_{cc}$, $\Xi_{c}'\Xi_{c}'$, $\Xi_{c}'\Xi_{cc}$ and $\Xi_{cc}\Xi_{cc}$, were investigated, and bound states with binding energies of a few hundred MeV were obtained for some potential models.
One of the reasons for those deep bound states is the heavy masses of the constituent baryons, and another is the weakness of the core repulsion.
\subsection{Recent works on $\Lambda_{c}N$ interaction}
So far, $\Lambda_{c} N$ potentials have been constructed starting from the phenomenological nucleon-nucleon or hyperon-nucleon potentials.
However, because the nucleon as well as the hyperons are regarded as ``light'' baryons,
a naive extension of their potentials to heavy baryons does not reflect the heavy quark symmetry, which is crucially important (Sect.~\ref{sec:heavy_quark_symmetry}).
In Refs.~\cite{Liu:2011xc,Oka:2013iua}, the $\Lambda_{c} N$ potential is developed based on the effective Lagrangian respecting the heavy quark symmetry.
Remember that, in the hyperon-nucleon potential, the mixing between the $\Lambda N$ and $\Sigma N$ channels provides a strong attraction~\cite{Afnan:1989wb,Afnan:1990vs}.
In analogy, we may expect that the mixing between the $\Lambda_{c} N$ and $\Sigma_{c} N$ channels gives a strong attraction as well.
However, the situation is more complicated in the charm sector.
According to the heavy quark symmetry, the heavy-quark spin partner of $\Sigma_{c}$ is $\Sigma_{c}^{\ast}$ with spin-parity $J^{P}=3/2^{+}$ and isospin $I=1$, and the mass splitting between $\Sigma_{c}$ and $\Sigma_{c}^{\ast}$ is as small as 65 MeV.
Hence, in addition to the $\Lambda_{c}N$-$\Sigma_{c}N$ mixing, the mixing between the $\Sigma_{c}N$ and $\Sigma_{c}^{\ast}N$ channels provides a strongly attractive contribution.\footnote{This is an analogous situation to the $\bar{D}^{(\ast)}N$ potential, for which the mixing between $\bar{D}N$ and $\bar{D}^{\ast}N$ plays the important role~\cite{Yasui:2009bz,Gamermann:2010zz} (cf.~Sect.~\ref{sec:D_mesons}).}
Therefore, it becomes important to investigate the $\Lambda_{c}N$ interaction through the three-channel coupling of $\Lambda_{c}N$-$\Sigma_{c}N$-$\Sigma_{c}^{\ast}N$.
Let us remember that, based on the approximate degeneracy of $\Sigma_{c}$ and $\Sigma_{c}^{\ast}$, we introduced the effective fields $B_{{\bm 6}}$ and $B^{\ast}_{{\bm 6}\mu}$ corresponding to
$B_{{\bm 6}} = (Q\{qq\}_{I=1,j^{P}=1^{+}})_{I(J^{P})=1(1/2^{+})}$ and
$B^{\ast}_{{\bm 6}\mu} = (Q\{qq\}_{I=1,j^{P}=1^{+}})_{I(J^{P})=1(3/2^{+})\mu}$,
with the polarization vector $\mu$, and consider their linear combination as a super-field (\ref{eq:superfield_baryon}) in the heavy quark limit (cf.~Sect.~\ref{sec:heavy_hadron_effective_theory}).
While $\Sigma_{c}$ and $\Sigma_{c}^{\ast}$ form the HQS doublet, becoming degenerate in the heavy quark limit,
$\Lambda_{c}$ is an HQS singlet with no state to be paired with.
Hence we have to define the effective field $B_{\bar{\bf 3}}$, which is independent of $B_{{\bm 6}}$ and $B^{\ast}_{{\bm 6}\mu}$.
The components in the $\Lambda_{c}N$-$\Sigma_{c}N$-$\Sigma_{c}^{\ast}N$ system are classified according to the internal spin and the angular momentum.
Examples for $J^{P}=0^{+}$ and $1^{+}$ are summarized in Table~\ref{table:LambdacN_channel}~\cite{Liu:2011xc,Oka:2013iua}.
In Refs.~\cite{Liu:2011xc,Oka:2013iua}, the potential between a charm baryon and a nucleon with channel coupling $\Lambda_{c}N$-$\Sigma_{c}N$-$\Sigma_{c}^{\ast}N$ is given by $\sigma$, $\pi$, $\omega$ and $\rho$ exchanges.
It is shown that there are bound states with $J^{P}=0^{+}, 1^{+}$, whose binding energies range from a few MeV to a hundred MeV, depending on model parameters such as coupling constants and momentum cutoffs.
Interestingly, the difference between the masses of the $J^{P}=0^{+}$ and $1^{+}$ states is about 10 MeV, which is much smaller than the other scales.
This property is understood from the heavy quark symmetry.
When the $\Lambda_{c}N$ is regarded as a $Qqq$-$N$ system, the brown muck $qq$-$N$ should have spin 1/2 because the spin of $qq$ in $\Lambda_{c}$ must be zero.
When the $qq$-$N$ is combined with the heavy quark $Q$ with spin 1/2, the compound states having spin either 0 or 1 become degenerate, because the spin direction of the heavy quark is irrelevant to the total energy.
It is also important to note that the three-channel coupling, $\Lambda_{c}N$-$\Sigma_{c}N$-$\Sigma_{c}^{\ast}N$, plays a significant role in providing a strong attraction.
For example, when we consider only the $\Lambda_{c}N$ channel, we find that the binding energy becomes smaller or the bound state vanishes.
In fact, the strong attraction is provided by the D-wave mixing given by the pion exchange potential accompanying a tensor force (e.g. $\Sigma_{Q}^{\ast}N(^5\mathrm{D}_{0})$ for $J^{P}=0^{+}$ in Table~\ref{table:LambdacN_channel}), even though its fraction in the wave function is only a few percent.\footnote{It is shown also that, in $\bar{D}N$-$\bar{D}^{\ast}N$ systems, the pion exchange potential accompanying a tensor force plays a significant role for the attraction~\cite{Yasui:2009bz} (cf.~Sec.~\ref{sec:D_mesons}).}
As another approach, the $\Lambda_{c}N$ interaction has been calculated based on the quark model~\cite{Gal:2014jza,Huang:2013zva}.
In Ref.~\cite{Gal:2014jza}, the possibility of $\Lambda_{c}N$ bound states is discussed with the chiral constituent quark model,\footnote{See Ref.~\cite{Valcarce:2005em} for a recent review of the baryon-baryon interaction based on the quark model.} and
it is found, however, that there is no bound state for either $^{3}S_{1}$ or $^{1}S_{0}$.
In Ref.~\cite{Huang:2013zva}, the $\Lambda_{c}N$ potential as well as the $\Sigma_{c}N$ potential are investigated.
It turned out again that there is no bound state in $\Lambda_{c}N$, though the potential is attractive.
On the other hand, there is a resonance, not a bound state, in $\Sigma_{c} N(^{3}S_{1})$.
\begin{table}[tbp]
\centering
\caption{\small Various coupled channels for a
given quantum number $J^P$~\cite{Liu:2011xc,Oka:2013iua}.}
\vspace*{0.5cm}
{\small
\begin{tabular}{ c | c c c c c c c}
\hline
$J^P$ & \multicolumn{7}{c}{channels} \\
\hline
$0^+$ & $\Lambda_{Q}N(^1\mathrm{S}_{0})$ & $\Sigma_{Q}N(^1\mathrm{S}_{0})$ &
$\Sigma_{Q}^{\ast}N(^5\mathrm{D}_{0})$ & & & & \\
$1^+$ & $\Lambda_{Q}N(^3\mathrm{S}_{1})$ & $\Sigma_{Q}N(^3\mathrm{S}_{1})$ &
$\Sigma_{Q}^{\ast}N(^3\mathrm{S}_{1})$ & $\Lambda_{Q}N(^3\mathrm{D}_{1})$ & $\Sigma_{Q}N(^3\mathrm{D}_{1})$ & $\Sigma_{Q}^{\ast}N(^3\mathrm{D}_{1})$ & $\Sigma_{Q}^{\ast}N(^5\mathrm{D}_{1})$ \\
\hline
\end{tabular}
}
\label{table:LambdacN_channel}
\end{table}
Recently, the $\Lambda_{c}N$ potential with $^{1}S_{0}$ channel is investigated by lattice QCD simulation by HAL QCD collaboration~\cite{Miyamoto:2016hqo}.\footnote{See Refs.~\cite{Ishii:2006ec,Aoki:2009ji} for more information about the potential calculation used in HAL QCD collaboration.}
It is found that there is an attraction at low scattering energy and a repulsion at high scattering energy (Fig.~\ref{fig:Miyamoto:2016hqo}).
As a tendency, the $\Lambda_c N$ attraction is slightly weaker than the $\Lambda N$ attraction.
From the behavior of the phase shift, it turned out that the $\Lambda_{c}N$ attraction is not strong enough to form a two-body bound state, though a $\Lambda_{c}$ can be bound in nuclear matter due to the (positive) scattering length.\footnote{Note the difference of the definition of the sign of the scattering length in Eq.~(\ref{eq:scattering_massshift}).}
Numerically, the scattering length of $\Lambda_{c}N$ is $a_{\Lambda_c N}=0.43(13)$ fm ($m_{\pi}=700$ MeV) and $a_{\Lambda_c N}=0.29(11)$ fm ($m_{\pi}=570$ MeV).
In those cases, using Eq.~(\ref{eq:scattering_massshift}) with the $D$ meson mass $m_{D}$ replaced by the mass $m_{\Lambda_{c}}$ of the $\Lambda_{c}$ baryon, we obtain the $\Lambda_{c}$ mass shifts
$\Delta m_{\Lambda_{c}}=-26$ MeV ($m_{\pi}=700$ MeV) and $\Delta m_{\Lambda_{c}}=-18$ MeV ($m_{\pi}=570$ MeV) in normal nuclear matter.
Those numbers are comparable with the binding energy of the $\Lambda$ hyperon in nuclear matter.
The $\Lambda N$ scattering length is $a_{\Lambda N}=0.83(27)$ fm ($m_{\pi}=700$ MeV) and $a_{\Lambda N}=0.39(17)$ fm ($m_{\pi}=570$ MeV).
Correspondingly, in a similar way to $\Lambda_{c}$, we obtain the mass shifts $\Delta m_{\Lambda}=-67$ MeV ($m_{\pi}=700$ MeV) and $\Delta m_{\Lambda}=-32$ MeV ($m_{\pi}=570$ MeV) in normal nuclear matter.\footnote{In the literature, it is considered that the binding energy of a $\Lambda$ in nuclear matter is about 28 MeV, which is about 2/3 of that of a nucleon~\cite{Millener:1988hp}.}
At present, it is of course difficult to regard those numbers as realistic values comparable with experimental ones, due to the large pion masses used in the simulations.
For example, the scattering length of $\Lambda N$ is smaller than the empirical values~\cite{Haidenbauer:2013oca}.
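The quoted mass shifts can be roughly reproduced with a short numerical estimate. This is only a sketch: it assumes the standard linear-density ($T\rho$) form $\Delta m \simeq -(2\pi/\mu)\, a\, \rho_{0}$ for Eq.~(\ref{eq:scattering_massshift}), with reduced mass $\mu$ and normal nuclear matter density $\rho_{0}$, and it uses vacuum hadron masses instead of the lattice ones, so the numbers are indicative only.

```python
import math

# Linear-density estimate of the in-medium mass shift,
# Delta m = -(2*pi/mu) * a * rho0  (assumed form of Eq. (scattering_massshift)).
# Vacuum hadron masses are used instead of the lattice ones.
hbarc = 197.327        # MeV fm
rho0  = 0.17           # fm^-3, normal nuclear matter density
m_N   = 939.0          # MeV

def mass_shift(m_h, a_fm):
    mu = m_h * m_N / (m_h + m_N)                         # reduced mass in MeV
    return -2.0 * math.pi / mu * a_fm * rho0 * hbarc**2  # result in MeV

# Lambda_c (m ~ 2286 MeV) with a = 0.43 / 0.29 fm, Lambda (m ~ 1116 MeV)
# with a = 0.83 / 0.39 fm; compare with the quoted -26/-18 and -67/-32 MeV
for m_h, a_list, name in [(2286.0, (0.43, 0.29), "Lambda_c"),
                          (1116.0, (0.83, 0.39), "Lambda")]:
    for a in a_list:
        print(f"{name}: a = {a:4.2f} fm -> {mass_shift(m_h, a):6.1f} MeV")
```

The estimate lands within about 1 MeV of the quoted shifts, which is adequate given the rough treatment of the masses.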
\begin{figure}[tbp]
\begin{minipage}{0.5\hsize}
\centering
\includegraphics[scale=0.3,bb=0 0 604 442]{figs/5/phase_shift_ens1.pdf}
\end{minipage}
\begin{minipage}{0.5\hsize}
\centering
\includegraphics[scale=0.3,bb=0 0 608 442]{figs/5/phase_shift_ens2.pdf}
\end{minipage}
\caption{The phase shifts of the scattering of $\Lambda N$ and $\Lambda_c N$ for $m_{\pi}=700$ MeV (left) and $m_{\pi}=570$ MeV (right) from HAL QCD collaboration~\cite{Miyamoto:2016hqo}.}
\label{fig:Miyamoto:2016hqo}
\end{figure}
\subsection{Few-body systems}
Few-body systems with charm baryons are analogues of few-body hypernuclear systems.
In analogy to the $\Lambda NN$-$\Sigma NN$ systems, the possibility of bound states of the $\Lambda_{c} NN$-$\Sigma_{c} NN$ systems is studied based on the baryon-baryon interaction from the chiral quark model~\cite{Garcilazo:2015qha}.
There, the $\Lambda_{c}$-$\Sigma_{c}$ conversions are considered, like the $\Lambda$-$\Sigma$ conversions in hypernuclei.
Moreover, in Ref.~\cite{Maeda:2015hxa}, the $\Lambda_{c}N$ potential is constructed with the quark-exchange potential instead of the vector meson exchange at short distance, considering the heavy quark symmetry in an analogous way to Ref.~\cite{Liu:2011xc}.
Then, the three-body $\Lambda_{c}NN$ system is investigated by a few-body calculation.
In this work, the $\Sigma_{c}$ and $\Sigma_{c}^{\ast}$ degrees of freedom are integrated out, and the effective $\Lambda_{c}N$ potential is constructed.
The results of the few-body calculation are shown in Fig.~\ref{fig:Maeda:2015hxa}.
The total isospin $I$ is carried by the $NN$ pair because $\Lambda_{c}$ is an isospin singlet.
Here the authors assumed that all partial waves are S-wave.
For the total isospin $I=0$, the $NN$ pair is isospin-singlet and hence spin-triplet due to the Pauli exclusion principle.
As a result, the heavy quark spin structure of $\Lambda_{c}NN$ is assigned to be the HQS doublet.
For the total isospin $I=1$, the $NN$ is isospin-triplet and spin-singlet.
The heavy quark spin structure of $\Lambda_{c}NN$ is assigned to be the HQS singlet.
Such HQS multiplet structures are clearly seen: $J^{P}=1/2^{+}$ and $3/2^{+}$ (HQS doublet) for $\Lambda_{c}NN(I=0)$, and $J^{P}=1/2^{+}$ (HQS singlet) for $\Lambda_{c}NN(I=1)$, as shown in Fig.~\ref{fig:Maeda:2015hxa}.
\begin{figure}[tbp]
\begin{center}
\includegraphics[scale=0.4,bb=0 0 720 540]{figs/5/3b-be_re-d3.pdf}
\caption{Energy levels of $\Lambda_{c}NN$ bound states~\cite{Maeda:2015hxa}.}
\label{fig:Maeda:2015hxa}
\end{center}
\end{figure}
\subsection{Nuclear matter}
The properties of a $\Lambda_{c}$ in nuclear matter are discussed by the relativistic mean-field theory~\cite{Tan:2004mt,Tsushima:2002cc,Tsushima:2002ua,Tsushima:2002sm,Tsushima:2003dd}.
As in the conventional mean-field theory~\cite{Serot:1984ey,Serot:1997xg},
we consider that the $\sigma$, $\omega$ and $\rho$ mesons make a mean field, and a nucleon as well as a $\Lambda_{c}$ baryon are coupled to the mean field.
The couplings are denoted by $g_{\sigma N}$, $g_{\omega N}$, $g_{\rho N}$ for a nucleon and $g_{\sigma \Lambda_c}$, $g_{\omega \Lambda_c}$ for a $\Lambda_{c}$ baryon.
Notice that the $\Lambda_c$ does not couple to the $\rho$ meson because it is an isospin singlet.
In the mean field theory, the baryon mass in nuclear matter is given by
\begin{align}
M_{N}^{\ast} &= M_{N} - g_{\sigma N} \langle \sigma \rangle, \\
M_{\Lambda_{c}}^{\ast} &= M_{\Lambda_{c}} - g_{\sigma \Lambda_{c}} \langle \sigma \rangle,
\end{align}
with the expectation value of the $\sigma$ field $\langle \sigma \rangle$, hence the partial restoration of the broken chiral symmetry is included in the dynamics~\cite{Tsushima:2002cc,Tsushima:2002ua,Tsushima:2002sm,Tsushima:2003dd}.
The coupling strengths between the baryons and the mesons are estimated by the quark-meson coupling model.
In this model, the coupling strength is determined by (i) the quark-meson coupling strength ($g_{M}^{q}$) and (ii) the quark wave function inside the baryon from the quark model (the bag model).
The coupling between a $\Lambda_{c}$ baryon and the $\omega$ meson is given as $g_{\omega \Lambda_{c}}=2g_{\omega N}/3$ in terms of the coupling constant $g_{\omega N}$ for a nucleon, by assuming $g_{\omega N}=3g_{\omega}^{q}$; the factor 2/3 reflects the two light quarks in the $\Lambda_{c}$.
The coupling $g_{\sigma \Lambda_{c}}$ is obtained in the same way.
Because the $\sigma$ field has a finite expectation value $\langle \sigma \rangle$ in the ground state,
the mass of $\Lambda_{c}$ is sensitive to the partial restoration of the broken chiral symmetry in nuclear matter.
Under this model setting, the authors obtained a binding energy of 5.2 MeV for a $\Lambda_{c}$ in a Pb nucleus by performing a self-consistent calculation.
The smallness of this number comes from the Coulomb repulsion.
In fact, for $\Lambda_{b}$ (electric charge zero), they obtained a binding energy of 27 MeV, which is of the same order as the binding energy of a $\Lambda$ hyperon.\footnote{Although the binding energies of the most stable states for $\Lambda_{b}$ and $\Lambda$ in a nucleus are almost the same, the number of excited states of the $\Lambda_{b}$ nucleus is larger than that of the $\Lambda$ nucleus.}
Similarly, in Ref.~\cite{Tan:2004mt}, they considered the mean field $U=g_{\sigma \Lambda_{c}} \langle \sigma \rangle + g_{\omega \Lambda_{c}} \langle \omega_{0} \rangle$ with $g_{\omega \Lambda_{c}}=2g_{\omega N}/3$ for a $\Lambda_{c}$ nucleus, and investigated the energy spectrum of the $\Lambda_{c}$ nucleus by changing the $U$ parameter ($U=-10$, $-20$, $-30$, $-40$ MeV).\footnote{Those numbers are comparable with the potential depth of a $\Lambda$ hyperon, namely about 30 MeV.}
As a result, they obtained a $\Lambda_{c}$ bound state in a Pb nucleus for $|U|>20$ MeV.
The QCD sum rule is also a useful theoretical tool to investigate the $\Lambda_{c}$ bound state in nuclear matter (cf.~Sect.~\ref{sec:QCDSR}).
In Ref.~\cite{Wang:2011hta}, considering the leading order for the change of the chiral condensate and the gluon condensate in nuclear matter,
the authors found that the in-medium masses are $M^{\ast}_{\Lambda_c}=2.335$ GeV and $M^{\ast}_{\Lambda_b}=5.678$ GeV, namely the mass shifts are $\Delta M_{\Lambda_c} = M^{\ast}_{\Lambda_c}-M_{\Lambda_c}=51$ MeV, $\Delta M_{\Lambda_b} = M^{\ast}_{\Lambda_b}-M_{\Lambda_b}=60$ MeV.
On the other hand, the mass shifts of $\Sigma_{c}$ and $\Sigma_{b}$ baryons in nuclear matter are $\Delta M_{\Sigma_{c}}=-123$ MeV and $\Delta M_{\Sigma_{b}}=-375$ MeV, respectively~\cite{Wang:2011yj}.
Moreover, in Ref.~\cite{Wang:2012xk}, the authors obtained very large mass shifts of $\Xi_{cc}$, $\Omega_{cc}$, $\Xi_{bb}$ and $\Omega_{bb}$ baryons as $\Delta M_{\Xi_{cc}}=-1.11$ GeV, $\Delta M_{\Omega_{cc}}=-0.33$ GeV, $\Delta M_{\Xi_{bb}}=-3.37$ GeV and $\Delta M_{\Omega_{bb}}=-1.05$ GeV, respectively.
Such a large value, but with negative sign, for $\Delta M_{\Lambda_{c}}$ is also obtained in Ref.~\cite{Azizi:2016dmr}.
We comment on the possible isospin Kondo effect for charm/bottom baryons in nucleus.
As discussed in Sect.~\ref{sec:D_mesons}, the isospin of a heavy hadron as a heavy impurity particle causes the Kondo effect, namely the logarithmic enhancement of the effective interaction in low-energy scattering, which is induced essentially by the non-Abelian (isospin-exchange) type interaction.
Thus it can affect various properties of the heavy hadrons in nuclear matter.
The Kondo effect can exist, not only for a $\bar{D}$ meson, but also for a $\Sigma_{c}$ baryon as well as for a $\Sigma_{c}^{\ast}$ baryon, because those baryons have the isospin-exchange interaction with a nucleon~\cite{Yasui:2016ngy}.
It can affect the energy spectrum of $\Sigma_{c}$ as well as $\Sigma_{c}^{\ast}$ in a nucleus.
Further studies along this line are an interesting subject.
\section{Summary and future prospects}
\label{sec:summary}
The investigation of
hadronic many-body
systems containing
different flavors opens a new gate for studying
various aspects of QCD such as
hadron-hadron interactions,
modifications of
the QCD vacuum in the medium
and so forth.
The frontier of nuclear and hadron physics reaches ``heavy-flavor
nuclei'' containing heavy quarks, namely charm and bottom quarks.
A characteristic
feature
of heavy quarks is that
their masses are much
larger than
$\Lambda_{\rm QCD}$.
The heavy masses induce the suppression of the kinetic
energy and the emergence of
the heavy quark symmetry.
The heavy quark spin symmetry
due to the suppression of the
spin-flip process of the heavy quark
separates
the heavy quark spin and the light degrees of freedom.
These features of heavy quarks produce a new type of nuclear systems,
e.g. states with the approximate spin degeneracy due to the heavy quark symmetry and the characteristic patterns of decay branching ratios.
However, the different mass thresholds due to the $1/m_Q$ correction may make it difficult to identify the partner in the doublet of the heavy quark spin symmetry.
Nevertheless, such a spin structure is one of the new subjects, not only in view of nuclear and hadron physics but also in view of QCD, which has not
been observed in the light flavor sectors.
In this review, we have emphasized the importance of ``symmetries'' and
the ``finite size'' of the heavy-flavor nuclei.
The important symmetries are chiral symmetry and heavy quark symmetry.
Both symmetries are simultaneously manifested in the heavy-flavor nuclei;
the chiral symmetry is realized in the light flavor
sectors of QCD,
while
the heavy quark symmetry
emerges
in the heavy flavor sectors.
The
interplay
of
two symmetries is a unique feature of systems with
the heavy and light flavors.
It plays
a crucial role in the hadron spectra
and the hadron-hadron interactions of the heavy-flavor nuclei.
To make a connection between theories and experiments,
the investigation of ``finite'' systems is important, although
the infinite system is generally easier to treat.
For light nuclei, we have the Gaussian expansion method to solve few-body systems rigorously.
For
heavy nuclei, we have an approach based on the optical potential, which can be constructed from the self-energy of the heavy hadron in nuclear matter. In addition to the strong interaction, the Coulomb interaction is also important in the finite systems.
These studies
should be helpful to
understand the hadron many-body
systems theoretically and experimentally.
In this review,
we have focused on
the properties of
heavy hadrons;
quarkonia ($\bar{Q}Q$),
heavy-light mesons ($\bar{q}Q$ and $q\bar{Q}$) and baryons
($qqQ$), in the nuclear medium.
For each heavy hadron, we have discussed the two-body interaction with a nucleon, the few-body systems, and the properties in nuclear matter.
Heavy hadron-nucleon interactions have been investigated by various
approaches.
In particular, the quarkonium-nucleon interaction has a unique nature:
it is governed by gluon exchange, because the $q\bar{q}$ exchange is suppressed due
to the OZI rule.
The quarkonium-nucleon interaction is described by
the QCD van der Waals potential as a perturbative interaction at short
distance
and the multi-gluon exchange as a nonperturbative interaction at long
distance.
The prediction of the interaction in this picture is quantitatively confirmed by first-principles lattice QCD simulations.
The interaction turns out to be weakly attractive,
but it is not strong enough to produce
a two-body quarkonium-nucleon bound state.
On the other hand,
the interactions of $DN$, $\bar{D}N$, $\Lambda_{\rm c}N$ and $\Sigma_{\rm c}N$
have been constructed
by considering chiral and heavy quark symmetries.
In the light quark sector, the low-energy pion-nucleon interaction is constrained by chiral symmetry. The Yukawa and Weinberg-Tomozawa interactions are the basic ingredients to develop the interaction of heavy hadrons with a suitable generalization.
The heavy quark symmetry requires coupled channels in the heavy hadronic
systems.
The $DN$, $\bar{D}N$ and
$\Sigma_{\rm c}N$ interactions are considered together with the
$D^\ast N$, $\bar{D}^\ast N$ and $\Sigma^\ast_{\rm c}N$ interactions,
respectively, because $D^\ast$, $\bar{D}^\ast$ and $\Sigma^\ast_{\rm c}$ are the heavy-quark spin partners of $D$, $\bar{D}$ and $\Sigma_{\rm c}$ in the heavy quark limit.
The channel mixing is enhanced
in the bottom sector where the mass
splitting of the spin partners is
further reduced in comparison with
the charm sector.
For the interactions of heavy-light mesons, the difference between $D$
and $\bar{D}$
mesons in the nuclear medium
has been emphasized.
The $DN$
system can couple to the channels of
mesons with heavy baryons and of excited heavy baryons, while the
$\bar{D}N$
system cannot couple to other channels at lower
energies.
Hence the $\bar{D}N$ bound state is stable against strong decays.
The magnitude of the interactions is also different.
For the $DN$ potential, many models predict a strong attraction and
find quasi-bound states of the $DN$ two-body system with
a large binding energy,
such as $\Lambda_{\rm c}(2595)$.
On the other hand, the $\bar{D}N$ interaction is found to be weakly
attractive or repulsive.
Bound states of the $\bar{D}N$ two-body system
are produced in models
including the $\bar{D}N$-$\bar{D}^\ast N$ coupled
channels, respecting the heavy quark spin symmetry.
For heavy baryon interactions,
the $\Lambda_{\rm c}N-\Sigma_{\rm c}N$ coupling is also non-negligible.
It is analogous to the $\Lambda N-\Sigma N$ coupling in the
hyperon-nucleon interaction.
The coupled channels of
$\Lambda_{\rm c}N-\Sigma_{\rm c}N-\Sigma^\ast_{\rm c}N$
give the attractive force which produces the two-body bound states.
The attractive interactions help to
bind the heavy hadrons both in few-body nuclear systems and in nuclear matter.
For quarkonia,
the weak attraction between a quarkonium and a nucleon
makes it possible for the quarkonium to be bound by a nucleus.
The properties of the quarkonium in nuclear matter are studied by
various approaches.
An interesting feature is that it gives us an opportunity to investigate
the modification of the gluon condensate, which is
linked to the QCD vacuum.
In the few-body nuclear systems with
heavy hadrons,
(quasi-)bound states are
obtained
when the attractive two-body interactions are utilized.
It is found
that the
properties
of the few-body systems with heavy hadrons
are similar to those of the strangeness nuclear systems, namely $\bar{K}$ nuclei
and hypernuclei.
In nuclear matter,
various phenomena have been discussed as medium effects on the heavy
mesons and baryons.
The medium effect modifies the mass (or, in general, the spectral function) of heavy hadrons as a consequence of the many-body effects.
In addition, the coexistence of the heavy and light flavors
provides interesting aspects:
the restoration of chiral symmetry induced by the light
degrees of freedom,
and the Kondo effect caused by the large mass of a heavy quark.
Heavy quarks in nuclei bring us ideas related to
the various phenomena and
new approaches to understand QCD.
Based on the above summary of this review, we now remark on future prospects, from the viewpoints of theoretical approaches, and the connections to other systems and experiments.
\vspace{1em}
1.~{\it Theoretical approaches.---}
Theoretical ideas to analyze nuclear and hadronic systems are
being developed day by day.
Here we list several new methods and ideas which will help us to understand the properties of the
heavy-flavor nuclei in the future.
\begin{itemize}
\item Lattice QCD \\
Lattice QCD simulation is a powerful method to obtain hadron spectra and
hadron-hadron interactions from first principles, i.e. QCD.
It is useful
for approaching interactions
which are difficult to determine experimentally.
This is of particular importance for providing the basic inputs for the heavy-flavor nuclei.
There are several well-established methods to investigate the interactions:
L\"uscher's finite-volume
method~\cite{Luscher:1990ux,Briceno:2014pka,Prelovsek:2014zga,Yamazaki:2015nka} and
the HAL QCD
method~\cite{Ishii:2006ec,Aoki:2009ji,HALQCD:2012aa,Aoki:2012tk,Iritani:2015dhu}.
The interaction is derived from the
energy shift of the two-body system in L\"uscher's
method, and from
the Nambu-Bethe-Salpeter wave functions in the HAL QCD method.
Recently, lattice calculations have been applied
not only to two-body interactions but also
to few-body interactions, which are important in many-body systems
such as the heavy-flavor nuclei~\cite{Aoki:2012tk,Doi:2011gq}.
However, many lattice computations have been performed with
unphysically large quark masses.
Lattice calculations at the physical point are
one of the important challenges in
progress~\cite{Aoki:2012oma,Doi:2015oha}.
\item Gauge/gravity correspondence \\
The gauge/gravity correspondence (holography) provides a
method to approach strongly coupled gauge theories at large-$N_c$~\cite{Maldacena:1997re}.
This method is based on the idea of the duality of the strongly
coupled gauge theory and the weakly coupled gravity theory.
The original idea of the holography, AdS/CFT correspondence~\cite{Maldacena:1997re},
is limited to a system possessing the supersymmetry (SUSY).
However, treatments to break SUSY have been developed, such as the
compactification which introduces the breaking
scale~\cite{Witten:1998zw,Sakai:2004cn,Sakai:2005yt}.
The holographic approaches have been applied to the investigation of
the hadron physics
such as
hadron spectra~\cite{Sakai:2004cn,Sakai:2005yt,Karch:2002sh,Kruczenski:2003be,Kruczenski:2003uq,Erdmenger:2007cm},
hadron-hadron interactions~\cite{Hashimoto:2009ys}, atomic nuclei~\cite{Hashimoto:2008jq,Hashimoto:2011nm}
and quark-gluon plasma~\cite{CasalderreySolana:2011us,Natsuume:2014sfa}.
There have also been attempts to investigate heavy flavor
physics~\cite{Erdmenger:2007cm,Jo:2011xq,Hayata:2012rw}.
In the top-down approaches, however, the presence of the heavy quark symmetry in the
holographic model is not clear~\cite{Hashimoto:2014jua},
because the pseudoscalar and vector mesons always belong to the
same multiplet
in gauge theories possessing SUSY.
In fact the D3-D7 model predicts the degeneracy of these mesons
regardless of flavors.
Even when the supersymmetry is broken by the
compactification,
this symmetry is recovered in the UV limit, which corresponds to the heavy
quark mass limit~\cite{Kruczenski:2003uq}.
Phenomenological models of QCD inspired by
the gauge/gravity correspondence have also been developed.
These bottom-up models are called
AdS/QCD~\cite{Erdmenger:2007cm,Erlich:2005qh,DaRold:2005mxj,Kim:2007rt}.
This approach is applied to the description of the hadron spectroscopy
including heavy hadrons~\cite{Branz:2010ub,Gutsche:2011vb}.
However, understanding the connections between QCD and AdS/QCD, and between string
theory and AdS/QCD, remains an open issue.
The description of QCD including the heavy hadron
dynamics is a challenging subject in the holographic approach.
\item Methods of many-body systems with finite baryon number\\
Computing methods for solving many-body systems with finite
baryon number have been developed in various fields.
Those methods are useful to study nuclei composed of
nucleons.
In addition, the many-body calculations are also
utilized to analyze the strangeness nuclear systems
containing impurities, which are linked to the heavy-flavor
nuclei.
Here we summarize some of the methods applied to the strangeness nuclear
systems.
The nuclear shell model and the Hartree-Fock method are well-known
approaches based on a mean field, in which the many-body system is
reduced to single-particle motion in a field generated by
the other particles.
The nuclear shell model is inspired by the shell structure of
electrons in atoms.
The single-particle motion in the mean-field
potential with the spin-orbit force
achieves success in explaining the magic number of atomic nuclei.
The shell model calculation for hypernuclei has been discussed in
Refs.~\cite{Gal:1971gb,Gal:1972gd,Gal:1978jt,Millener:2008zz,Gal:2011zr}.
Another widely used method
to solve a many-body quantum system
is the Hartree-Fock method.
The mean-field potential and the single-particle wave functions
are obtained by solving the Hartree-Fock equation
self-consistently~\cite{Zhou:2007zze,Win:2008vw,Schulze:2010zzb,Mei:2015pca}.
Another important aspect of nuclear structure is the clustering phenomenon~\cite{PTPS52.89}.
Antisymmetrized molecular dynamics (AMD) is a model able
to describe both cluster-structure wave functions
and independent-particle motion in a mean
field~\cite{Ono:1991uz,KanadaEn'yo:2012bj}.
Recently, AMD has been applied to hypernuclei~\cite{Isaka:2011kz,Isaka:2015iip}.
\item Compositeness\\
Compositeness characterizes the structure of stable bound states
close to the thresholds.
As originally introduced by
Weinberg~\cite{Weinberg:1962hj,Weinberg:1965zz},
the composite/elementary nature of hadrons is
estimated
from the field renormalization constant~\cite{Baru:2003qq,Baru:2010ww,Hyodo:2011qc,Aceti:2012dd,Hyodo:2013nka,Sekihara:2014kya}.
Recently, the idea of the compositeness has been extended to
quasi-bound states~\cite{Kamiya:2015aea,Guo:2015daa}.
Since many exotic states have been found as quasi-bound
states near thresholds in the heavy flavor regions,
the compositeness is helpful to discuss their structure.
For the heavy-flavor nuclei, a comparison of the compositeness in
vacuum and in the nuclear medium would give us information on how the
properties of the heavy hadrons are modified by environmental
changes.
\end{itemize}
2.~{\it Connections to other systems.---}
The physics of heavy-flavor nuclei is
broadly connected to other nuclear and hadronic systems.
These connections encourage us to understand the relations between high-energy quark-gluon dynamics and low-energy hadron and nuclear dynamics.
We summarize the current status of the neighboring fields which are helpful for studying the
heavy-flavor nuclei.
\begin{itemize}
\item Quarkonia and $XYZ$ resonances\\
As summarized in this review,
the quarkonium-hadron interaction is dominated by the gluonic
degrees of freedom.
While different approaches agree that the quarkonium-nucleon interaction is weakly attractive,
there are still nontrivial issues to be clarified in the future.
For instance, a recent lattice QCD calculation of $Z_{\rm c}(3900)$
in Ref.~\cite{Ikeda:2016zwx} indicates
the importance of the couplings of $\pi J/\psi-\bar{D}D^\ast$
and $\rho\eta_{\rm c}-\bar{D}D^\ast$. These couplings are considered to be suppressed in phenomenological models,
because they are accompanied by the heavy meson exchange interaction.
The couplings of a quarkonium and a pair of heavy-light mesons
are also important in the mass shift of
quarkonia~\cite{Heikkila:1983wd,Eichten:2005ga,Pennington:2007xr}.
The excited charmonia are also interesting objects to be studied.
The hadronic loops of meson-antimeson pairs affect
the properties of quarkonia, in particular those
close to thresholds.
Moreover, there are many candidates for exotic states, called $XYZ$ resonances,
which are considered to have non-conventional
structures such as the compact
multi-quark states, hadronic molecules and hybrid states with
gluonic degrees of freedom~\cite{Brambilla:2010cs}.
It is an interesting open problem to understand the properties of
the quarkonia close to thresholds and
the $XYZ$ resonances having an exotic structure,
both in vacuum and in the nuclear medium.
\item Nuclear systems with impurities \\
There have been discussions of nuclei with impurity
particles.
The study of such composite systems is a challenging topic for
analyzing both nuclear structures and the modification of the properties of the
impurities in the nuclear medium.
The impurity particles are not affected by
Pauli blocking
from
the nucleons.
Therefore the impurity can enter deep inside
the nucleus,
and is expected to serve as a probe of regions which cannot be accessed by nucleons.
In addition, the impurities would induce a shrinkage
effect and change the deformation structure of
nuclei~\cite{Hashimoto:2006aw,Botta:2012xi,Schulze:2010zzb}.
On the other hand,
properties of the impurity itself would also be changed in the nuclear
medium~\cite{Hayano:2008vn}.
As an example of the mutual change of both the nuclear medium and the impurities,
we have discussed the heavy quark symmetry leading to
the recombination of the spin correlations inside the nuclear matter.
\item From strangeness to heavy flavor \\
The quark mass is one of the fundamental parameters in QCD.
Comparing the charm/bottom nuclear physics with the strangeness nuclear physics
gives us a chance to study the role of flavors in
nuclei in terms of QCD.
The strangeness nuclear systems have been investigated as
nuclear systems with strange impurity particles, e.g. kaons and
hyperons~\cite{Hashimoto:2006aw}.
The heavy hadron nuclear physics can be situated as an
extension from strangeness to heavy flavors.
The properties of the heavy-flavor nuclei should
be different from
the strangeness nuclei due to the properties of the heavy quarks,
i.e. heavy mass and heavy quark symmetry, as
discussed in this review.
The internal excitation patterns and interactions of heavy hadrons
are
different from those of light hadrons.
For example, the excitation pattern of charm/bottom baryons may be comparable with that of the strange baryons,
as the reduction of the spin-spin interaction is already known
in the strangeness sector~\cite{Yoshida:2015tia}.
\item Quark-gluon plasma (QGP) \\
The investigation of QGP is one of the challenging subjects
in hot and dense QCD.
The QGP is a phase distinct from the hadronic matter,
and is considered to have been realized in the early universe
and to be created in relativistic heavy ion collisions (HIC).
The heavy quarks and quarkonia produced in HIC are used as hard probes to
understand the microscopic dynamics of
QGP~\cite{Matsui:1986dk,Rapp:2009my,Beraudo:2014iva}.
QGP gives us an opportunity to investigate the heavy quark
(hadron) dynamics in systems with high
temperature or high density.
Recently the Langevin dynamics, a classical description of the
Brownian motion, of a heavy quark in QGP
has been investigated in
Refs.~\cite{Akamatsu:2008ge,Akamatsu:2015kaa}.
\end{itemize}
3.~{\it Experiments.---}
The interesting physics of heavy-flavor nuclei should eventually be examined in actual experiments. For this purpose, it is necessary to consider how to produce the heavy particles.
We summarize
discussions of the production of the heavy hadrons and heavy-flavor nuclei in the experiments, together with possible facilities.
\begin{itemize}
\item Hadron beams \\
The charm production by a pion beam on a proton target,
$\pi^- p \rightarrow D^{\ast -}\Lambda^+_{\rm c}$, is studied
in Refs.~\cite{Kim:2014qha,Kim:2015ita}.
The cross section of the process is obtained by using two
different theoretical frameworks, the effective Lagrangian
method and the Regge method.
The coupling constants of the vertices of the charmed meson and
baryon are assumed to be the same as those for the strangeness
sector.
The obtained total cross section of the charm production is
about $10^{3}$--$10^{6}$ times smaller than that of the strangeness
production.
The hidden-charm baryon production near threshold via the pion
beam is
studied in Ref.~\cite{Garzon:2015zva}.
The enhancement of the total cross section of the
$\pi^-p\rightarrow D^-\Sigma^+_{\rm c}$ by the $N^\ast_{\rm cc}$
resonance is discussed.
The charm production via the pion beam could be performed at
J-PARC.
In Ref.~\cite{Yamagata-Sekihara:2015ebw}, the formation of the
charmed mesic nuclei, $D^-$--$^{11}$B
and $D^0$--$^{11}$B, by an antiproton beam on a $^{12}$C target is
investigated theoretically.
The optical potential for the charmed meson is given by the
self-energy where
the extended Weinberg-Tomozawa interaction is employed as
the charmed meson-nucleon interaction in the matter.
The results suggest the possible observations of the charmed
mesic nuclei at $\bar{\rm P}$ANDA in FAIR~\cite{Rapp:2011zz} and J-PARC.
\item Relativistic heavy ion collisions \\
Heavy ion collisions at Relativistic Heavy Ion
Collider (RHIC) and Large Hadron Collider (LHC) provide a new stage
to study reactions with high multiplicities. Due to the high
temperature and large volume, the yields of the
produced particles are much larger than those in $e^{+}e^{-}$
and $pp$ collisions and in fixed-target experiments. In addition, the collision source, as a baryon-rich
environment, makes it possible to produce
not only single hadrons but also composite nuclei. For example, there are
reports of the observation of anti-hypernuclei at RHIC, whose
production is difficult to realize in other
reactions~\cite{58}. Recently, productions of exotic hadrons,
containing strangeness, charm and bottom quarks, from
relativistic heavy ion collisions have been studied in theories
based on the statistical model and the coalescence
model~\cite{Cho:2010db,Cho:2011ew}. Importantly, the yields of
the exotic hadrons depend on the internal configurations of the
composite particles, namely compact multi-quark states or extended
hadron molecules. Those approaches can be applied to the production of
charm/bottom nuclei with small baryon numbers.
\item Photoproduction \\
The heavy hadron productions in photon-induced processes have been discussed in the
literature.
The productions of hidden-charm baryon resonances,
$N^\ast_{\rm cc}$ and $\Lambda^\ast_{\rm cc}$, including
the pentaquarks $P^+_{\rm c}(4380)$ and $P^+_{\rm c}(4450)$ have been
investigated in
Refs.~\cite{Huang:2013mua,Wang:2015jsa,Huang:2016tcr}.
The effective Lagrangian approach
is used in these analyses.
The photoproductions for the hadronic states with heavy flavor
are expected to be performed at Jefferson Lab.
\item Neutrino-nucleus reaction \\
The baryon production via the neutrino-nucleus reaction has been
studied for light flavor
sectors~\cite{Kamano:2012id,Nakamura:2015rta}.
The weak interaction also makes it possible to produce the heavy
hadrons in
the neutrino-nucleon reactions such as
$\nu N\rightarrow l X$ where $l$ and $X$ stand for a lepton and a
produced heavy baryon resonance, respectively.
The heavy hadron production via the neutrino-nucleus
reaction could be
utilized to produce the heavy-flavor nuclei.
\end{itemize}
\section*{Acknowledgments}
The authors thank M.~Oka, M.~Harada, A.~Yokota, K.~Suzuki, K.~Ohtani and S.~Maeda for fruitful discussions and useful comments.
This work is supported by JSPS KAKENHI (the Grant-in-Aid for Scientific
Research from Japan Society for the Promotion of Science (JSPS)) with
Grant Nos.~JP24740152 (T.~H.), JP16K17694 (T.~H.), JP25247036 (S.~Y.), JP15K17641(S.~Y.) and JP26400273 (A.~H.),
by the Yukawa International Program for Quark-Hadron Sciences (YIPQS)
and by the INFN Fellowship Programme.
\section{Introduction} \label{sec:intro}
Several earth-based interferometers designed to detect
gravitational waves have been recently constructed. Detectors
such as LIGO, VIRGO, GEO and TAMA
are expected to begin operating
within a few years (see {\it e.g.} \cite{gw-review}).
In order to extract gravitational waveforms from noisy data
and to determine physical parameters, it is essential to predict
waveforms in advance by both analytical and numerical approaches.
Binary neutron star systems are one of the most plausible sources
of gravitational waves.
They emit energy through gravitational radiation,
shrink their inspiral orbits gradually, and finally merge with
strong emission of gravitational waves.
The system is described by the post-Newtonian (PN) approximation
(see {\it e.g.} \cite{will94})
in the last several minutes before they merge, while
in the last phase of coalescence of stars
we need to solve the Einstein equations which are available
only through numerical integration.
After the pioneering numerical works by Nakamura and Oohara in the
Newtonian gravity with radiation reaction correction \cite{kyoto-newt},
several groups started developing their numerical codes to solve this
problem in a more realistic way.
Such hydrodynamical simulations are categorized into the
Newtonian scheme (with/without radiation reaction terms)
\cite{kyoto-newt2,cornell-newt,drexel-newt,piran-newt,max-newt,new-tohline,swesty,uryu};
the PN approximation
\cite{kyoto-pn};
and the fully general relativistic level
\cite{wilson,kyoto-gr,cactus2}.
However, we do not have a method to construct
physically satisfactory initial data for inspiral
binaries in general relativity.
Most of the numerical tests
start their simulations under
assumptions of certain quasi-equilibrium and
conformal flatness of spacetime, with a particular choice of
vorticity of fluid ({\it e.g,} \cite{BGM97} and references therein).
One way to prepare initial data might be patching the
PN scheme to the general relativistic one \cite{mg8}.
In this report,
we construct a simple model and examine to what extent this effort is justified.
We solve the Tolman-Oppenheimer-Volkoff (TOV) equation of hydrostatic
equilibrium of a single neutron star, truncated at various
PN levels.
We compare the mass and radius of the star
as functions of the central density
using the polytropic equation of state.
We also solve the Hamiltonian constraint equation of the Einstein
equations by substituting these density profiles as trial functions, and
discuss the differences in the metric.
This study extends the earlier works
\cite{wagoner,ciufolini,castagnino,lombardi}
at the first PN approximation.
We intend to make a bridge between the Newtonian and general
relativistic solutions
of a neutron star model, both of which are first shown numerically by
Tooper\cite{tooper}.
In the actual calculations, we used the geometrical units of
$c=G=M_\odot=1$, where $c, G, M_\odot$ are the speed of light,
Newton's gravitational constant and the solar mass, respectively. However,
$c$ and $G$ will appear in the text where they help understanding.
\section{Truncated TOV neutron stars}\label{sec:tovpn}
In general relativity (GR), we have the TOV equation for solving
a hydrostatic equilibrium star in the spherically symmetric spacetime.
We start from the metric
\begin{equation}
ds^2 = - e^{2\Phi(r)} dt^2 + e^{2\Lambda(r)} dr^2 + r^2\left( d\theta^2
+ \sin^2 \theta d\varphi^2 \right), \label{metric_tov1}
\end{equation}
where
$e^{2\Lambda(r)} = (1-{2Gm(r) \over c^2 r})^{-1}$.
Then the TOV equations are written as
\begin{eqnarray}
{dm \over dr}&=& 4 \pi r^2 \rho_t, \\
{dp \over dr}&=& -{Gm \rho_t \over r^2} (1 + {p \over \rho_t c^2} )
( 1+{4 \pi p r^3 \over m c^2})(1-{2Gm \over r c^2})^{-1}, \label{TOV2}\\
{d\Phi \over dr}&=& -{1 \over \rho_t}
{dp \over dr}(1+{p \over \rho_t c^2})^{-1},
\label{TOV3}
\end{eqnarray}
together with the specified equation of state, for which we use the
polytropic equation of state
\begin{equation} p=K \rho^\Gamma = K \rho^{1+1/n}, \end{equation}
where $p$, $\rho$ are the pressure and energy density, respectively, and
$\rho_t$ is the total mass density,
\begin{equation}
\rho_t=
\rho + {p \over (\Gamma -1) c^2}.
\end{equation}
Obviously, this set of equations recovers the Newtonian limit for
$c^2 \rightarrow \infty$.
The idea of this report is to
expand the product of the
parentheses in (\ref{TOV2}) and (\ref{TOV3}) and truncate them
at the order of $1/c^{2i}$. The $i$-th truncation, then, gives the
so-called $i$-th PN approximation.
(The case of $i=1$ is briefly mentioned in \cite{KWbook}.)
That is, we write
(\ref{TOV2}) and (\ref{TOV3}) schematically
\begin{eqnarray}
{dp \over dr}&=&-{Gm \rho_t \over r^2} (1+A)(1+B)(1-C)^{-1} \nonumber \\
&=&-{Gm \rho_t \over r^2} (1+ A+B+C \nonumber \\
&& ~~~~~ +AB+AC+BC+C^2+\cdots) \label{pNexp1} \\
{d\Phi \over dr}&=& -{1 \over \rho_t}{dp \over dr}(1+A)^{-1} \nonumber \\
&=& -{1 \over \rho_t}{dp \over dr}(1-A+A^2-A^3+\cdots).\label{pNexp2}
\end{eqnarray}
If we use
these equations keeping terms on the RHS with up to two products of $A, B, C$
(such as $AB$ or $A^2$), then we say the system is in the second
PN approximation.
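As a quick numerical sanity check (ours, not part of the original analysis), one can verify that the second-PN truncation of the product in Eq.~(\ref{pNexp1}) leaves a residual of third order, taking $A$, $B$, $C$ each proportional to a bookkeeping parameter $\varepsilon\sim 1/c^{2}$:

```python
# A, B and C denote the three bracketed corrections in the TOV pressure
# equation, each of order 1/c^2; eps is a bookkeeping expansion parameter.

def exact(A, B, C):
    """Full relativistic factor (1+A)(1+B)(1-C)^(-1) of Eq. (pNexp1)."""
    return (1.0 + A) * (1.0 + B) / (1.0 - C)

def pn2(A, B, C):
    """Second-PN truncation: keep terms with up to two products of A, B, C."""
    return 1.0 + A + B + C + A*B + A*C + B*C + C*C

def truncation_error(eps, a=1.0, b=1.0, c=1.0):
    """Residual of the 2PN truncation for A = a*eps, B = b*eps, C = c*eps."""
    return abs(exact(a*eps, b*eps, c*eps) - pn2(a*eps, b*eps, c*eps))

# Shrinking eps by a factor of 10 should shrink the residual by ~10^3,
# confirming that the neglected terms are of third order in eps.
err_ratio = truncation_error(1.0e-2) / truncation_error(1.0e-3)
```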
\begin{figure}[tbp]
\setlength{\unitlength}{1in}
\begin{picture}(3.75,6.6)
\put(-0.2,0.0){\epsfxsize=4.25in \epsfysize=6.36in
\epsffile{fig1.eps} }
\end{picture}
\caption[fig1]{
Total mass as a function of the central density
for the truncated neutron star models.
Figures (a), (b) and (c) are for different equations of state
with $\Gamma=5/3$, 2 and 3, respectively.
Mass is in units of the solar mass and the central density is in
[g/cm$^3$].
The gray solid line shows the Newtonian solutions and the solid line
the general relativistic solutions.
The
dotted, dashed and three-dot lines
show the first, second and third post-Newtonian
approximated solutions, respectively.}
\label{fig1}
\end{figure}
We apply $\Gamma=5/3, 2$
and 3 for the equation of state ($n=1.5, 1$ and 0.5 in the polytropic
index, respectively) and compare the solutions of Newtonian, GR
and up to third PN approximation.
The radius of the star, $R$, is measured at the point, $r_\star$,
where the
density $\rho_t$ drops low enough [$O(10^{-10})$ in the
geometrical units], and is given by the proper
length,
\begin{equation}
R=\int_0^{r_\star} \left(1 - {2 G m(r) \over c^2 r } \right)^{-1/2} dr,
\end{equation}
with appropriate truncation in the integrand. We express
the mass of the star, $M$, by $M=m(r_\star)$.
We use a fifth-order Runge-Kutta (Fehlberg) method to integrate
the equations. As a check of this approach, we also solved
the TOV equations in the harmonic gauge and confirmed
that the resulting physical quantities are identical.
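As an illustrative stand-in for the integration step, the sketch below solves the Newtonian (zeroth-order) limit in dimensionless Lane-Emden form, using a classic fourth-order Runge-Kutta step rather than the paper's fifth-order Fehlberg scheme. For $\Gamma=2$ ($n=1$) the analytic solution $\theta=\sin\xi/\xi$ places the stellar surface at $\xi_1=\pi$, which the integration reproduces.

```python
import math

# Lane-Emden equation for a polytrope of index n:
#   theta'' + (2/xi) theta' + theta^n = 0,  theta(0)=1, theta'(0)=0,
# the dimensionless form of Newtonian hydrostatic equilibrium.

def lane_emden_rhs(xi, y, n):
    theta, dtheta = y
    if xi == 0.0:
        # Series expansion theta ~ 1 - xi^2/6 regularizes the center.
        return (dtheta, -1.0 / 3.0)
    return (dtheta, -theta**n - 2.0 * dtheta / xi)

def rk4_step(f, xi, y, h, n):
    k1 = f(xi, y, n)
    k2 = f(xi + h/2, tuple(yi + h/2*ki for yi, ki in zip(y, k1)), n)
    k3 = f(xi + h/2, tuple(yi + h/2*ki for yi, ki in zip(y, k2)), n)
    k4 = f(xi + h,   tuple(yi + h*ki  for yi, ki in zip(y, k3)), n)
    return tuple(yi + h/6*(a + 2*b + 2*c + d)
                 for yi, a, b, c, d in zip(y, k1, k2, k3, k4))

def first_zero(n, h=1e-4):
    """Surface radius xi_1 where theta first crosses zero."""
    xi, y = 0.0, (1.0, 0.0)
    while y[0] > 0.0:
        xi_prev, y_prev = xi, y
        y = rk4_step(lane_emden_rhs, xi, y, h, n)
        xi += h
    # Linear interpolation across the sign change.
    return xi_prev + h * y_prev[0] / (y_prev[0] - y[0])

xi1 = first_zero(n=1)      # Gamma = 2 polytrope; analytic surface at pi
print(xi1, math.pi)
```

This is only a sketch of the numerical setup, not the relativistic TOV system itself.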
In Fig.~\ref{fig1}, we show
the total mass $M$ as a function of the
central density $\rho_c$ for the
different values of $\Gamma$ and PN orders.
Mass is in units of $M_\odot$ and central density is in
[g/cm$^3$]; both can be rescaled with the constant $K$
in the equation
of state. In the calculations we use
$K_{5/3}=4.35$ (for $\Gamma=5/3$),
$K_{2}=10^2$ (for $\Gamma=2$), and
$K_{3}=10^5$ (for $\Gamma=3$) in geometrical units,
where $K_{5/3}$ is the value for the pure neutron
equation of state \cite{STbook}.
We clearly see the convergence of the PN approximation
for all values of $\Gamma$.
However, if the equation of state is stiff, the high-density
configurations differ from those of GR even at the
higher PN orders.
A maximum mass already appears at the first PN approximation.
The central density at which this maximum occurs becomes larger in the
weaker-gravity approximations.
\begin{figure}[tbp]
\setlength{\unitlength}{1in}
\begin{picture}(3.75,6.6)
\put(-0.2,0.0){\epsfxsize=4.25in \epsfysize=6.36in
\epsffile{fig2.eps} }
\end{picture}
\caption[fig2]{
Mass-radius relations for the truncated neutron star models.
Mass is in units of solar mass and radius is in
[km].
The line styles are the same as in Fig.~\ref{fig1}.}
\label{fig2}
\end{figure}
In Fig.~\ref{fig2}, we show the
mass-radius relations.
In the Newtonian limit, the asymptotic behavior of $M$
near $M=0$ is
$M \propto R^{-3}$ (for $\Gamma=5/3$),
$M \propto R^{0}$ (for $\Gamma=2$) and
$M \propto R^{5}$ (for $\Gamma=3$). These exponents reflect the
softness (for $\Gamma=5/3$) and stiffness (for $\Gamma=3$)
of the equation of state. All the lines in Fig.~\ref{fig2}
approach this Newtonian limit at low mass.
The figure also shows that the first PN solution already shares the
qualitative features of the GR solution.
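These exponents follow from the standard Lane-Emden scaling $M \propto R^{(3-n)/(1-n)}$ for a Newtonian polytrope of index $n=1/(\Gamma-1)$; the $n=1$ ($\Gamma=2$) case is degenerate, with the radius independent of the mass. A one-line check of this textbook relation:

```python
# Newtonian mass-radius exponent for a polytrope of index n,
#   M ∝ R^((3-n)/(1-n));
# n = 1 (Gamma = 2) is degenerate: R is independent of M (M ∝ R^0 family).

def mr_exponent(n):
    if n == 1:
        raise ValueError("n = 1: radius independent of mass")
    return (3.0 - n) / (1.0 - n)

print(mr_exponent(1.5))   # Gamma = 5/3  ->  -3
print(mr_exponent(0.5))   # Gamma = 3    ->   5
```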
We also checked the causality constraint $dp/d\rho \leq 1$
(see {\it e.g.} \cite{geroch_lindblom}) for all of the models,
and confirmed that the constraint is always satisfied.
\section{Metric Output via Hamiltonian constraint} \label{sec:metric}
We next solve the Hamiltonian constraint equation of GR
with the trial density profiles obtained above.
Our aim is to compare the resulting output metrics
and to examine a scheme for matching PN data to general
relativistic data.
We use O'Murchadha-York's conformal approach \cite{york} to solve the
Hamiltonian constraint.
Defining the conformal factor $\psi$ and setting
$\gamma_{ij} = \psi^4 \hat{\gamma}_{ij}$,
the constraint becomes
\begin{equation}
8\,^{(3)\!}\hat{\Delta}
\psi = {}^{(3)\!}\hat{R}\psi
- 16 \pi G \hat{\rho}\psi^{-3}
\label{inithamilt}
\end{equation}
where $^{(3)\!}\hat{\Delta}$ and $~^{(3)\!}\hat{R}$
are the 3-dimensional Laplacian and Ricci scalar curvature, respectively,
defined by $\hat{\gamma}_{ij}$.
Here we have assumed $K_{ij}=\hat{K}_{ij}=0$.
We choose our trial metric
$\hat{\gamma}_{ij}$ to be conformally flat, and solve (\ref{inithamilt})
with the trial density configuration
$\hat{\rho} = \rho_t$.
We use the incomplete Cholesky conjugate gradient (ICCG) method \cite{ICCG}
with the Robin boundary condition
$\psi= 1 + {C / r} $, where $C$ is a constant,
to solve (\ref{inithamilt}).
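In the weak-field limit ($\psi \approx 1$, with $^{(3)}\hat{R}=0$ for the conformally flat trial metric), Eq. (\ref{inithamilt}) linearizes to $\hat{\Delta}\psi = -2\pi G \hat{\rho}$, which in spherical symmetry can be solved by direct quadrature; the exterior solution then takes exactly the Robin form $\psi = 1 + C/r$ with $C = GM/2$. The sketch below illustrates this with a uniform-density trial profile (it stands in for, and does not reproduce, the 3D ICCG solve; $\rho_0$, $a$ are illustrative parameters, $G=c=1$):

```python
import math

# Linearized Hamiltonian constraint in spherical symmetry,
#   Laplacian(psi) = -2*pi*rho_hat,  psi -> 1 at infinity,
# solved by the Green's-function quadrature
#   psi(r) = 1 + (2 pi/r) int_0^r rho s^2 ds + 2 pi int_r^inf rho s ds.

a, rho0 = 1.0, 1.0e-3                     # uniform-density trial star
M = 4.0 * math.pi / 3.0 * rho0 * a**3     # total mass of the profile

def rho(s):
    return rho0 if s < a else 0.0

def psi(r, ns=4000):
    """Midpoint-rule quadrature of the Green's-function solution."""
    h = a / ns                            # rho vanishes beyond s = a
    inner = outer = 0.0
    for i in range(ns):
        s = (i + 0.5) * h
        if s < r:
            inner += rho(s) * s**2 * h
        else:
            outer += rho(s) * s * h
    if r == 0.0:
        return 1.0 + 2.0 * math.pi * outer   # (1/r) int_0^r -> 0 limit
    return 1.0 + 2.0 * math.pi * (inner / r + outer)

psi0 = psi(0.0)          # analytic: 1 + pi*rho0*a^2
psi_ext = psi(3.0 * a)   # analytic: 1 + M/(2r), i.e. the Robin form
print(psi0, psi_ext)
```

The exterior value matching $1 + M/(2r)$ is the weak-field content of the Robin boundary condition used above.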
In Fig.~\ref{fig3}, we show the conformal factor $\psi$ at the origin
as a function of the central density of the trial configuration.
The 3-metric at the center is given by
$\gamma_{ij} = \psi^4 {\delta}_{ij}$.
Using the Newtonian configuration as input gives solutions
quite different from those expected in GR,
while all PN trial profiles give solutions close to those of GR.
Independent of $\Gamma$, the second PN approximation
provides output metric values closer to those of GR.
\begin{figure}[tbp]
\setlength{\unitlength}{1in}
\begin{picture}(3.75,6.6)
\put(-0.3,0.0){\epsfxsize=4.25in \epsfysize=6.36in
\epsffile{fig3.eps} }
\end{picture}
\caption[fig3]{
The conformal factor $\psi$ at the origin
is displayed as a function of the
central density of the trial configuration
used to solve the Hamiltonian constraint
equation.
The central density is in units of [g/cm$^3$].
Each line indicates the trial profile used as input, with the same
notation as Fig.~\ref{fig1}.}
\label{fig3}
\end{figure}
\section{Discussion} \label{sec:disc}
In order to justify the recent post-Newtonian (PN) approaches to the
binary
neutron star problem, we constructed a simple model. By solving the
hydrostatic equilibrium equation of a star at the $i$-th PN
order, we demonstrated the convergence of this approach, obtained the
mass-radius relations, and computed the resulting metric output via the
Hamiltonian constraint equation.
We conclude that
the second PN approximation
provides density profiles quite similar
to those of GR,
independent of the equation of state.
If we use the second PN density configurations as trial functions,
we obtain metric
solutions closer to those of GR through the Hamiltonian constraint.
Although this study is restricted to a single hydrostatic star model,
we believe that the figures shown here provide convenient templates for
further numerical studies.
As shown in \cite{mg8}, the discontinuous matching surface between PN
and GR in the vacuum region will be smoothed out in a fully relativistic
evolution under a
particular slicing condition. We therefore expect that
higher-order PN initial data will evolve smoothly in fully
relativistic simulations, although many factors remain unknown as to
whether such initial data are numerically satisfactory.
We are now applying this approach to construct a binary model,
including velocity corrections, together with fully general
relativistic
hydrodynamical evolution.
This effort will be reported elsewhere.
\noindent
{\bf Acknowledgments} ~~
The author thanks Stephen B. Selipsky, Wai-Mo Suen and Cliff M. Will
for discussions. He also thanks Ed Seidel and Doug Swesty for their
comments on the causality constraint.
He appreciates the anonymous referee's suggestions on Sec.~IV.
This work was partially supported by NSF PHYS 96-00049, 96-00507,
and NASA NCCS 5-153.
\section{Introduction}
Landau Fermi liquid theory\cite{Landau1957} is a remarkably successful framework that explains how a metal can remain a stable phase of matter over a wide range of energy scales, despite having infinitely many gapless excitations. From the modern perspective of effective field theory, a Fermi liquid is governed by a renormalization group (RG) fixed point in which most interactions are irrelevant, due to the kinematic constraints imposed by a Fermi surface\cite{Shankar1991,Polchinski1992, Shankar1994}.
Indeed, the only way in which a disorder-free Fermi liquid can be destabilized is by effective attractive interactions, which lead to superconductivity\cite{Shankar1991,Polchinski1992, Shankar1994}.
A significant fraction of highly correlated materials, however, are not well described by the Fermi liquid paradigm\cite{Lohneysen2007,Stewart2001}. A key challenge remains to construct controlled effective field theories of these ``non-Fermi liquid" metals that encapsulate their universal properties and describe their stability. It is believed that an essential ingredient for non-Fermi liquid behavior is the presence of additional gapless degrees of freedom (bosons tuned to criticality, or unscreened gauge fields are two examples) that act as a source of dissipation for the otherwise weakly interacting fermions of the metal. Many have postulated that the resulting strongly coupled system can capture much of the phenomenology of highly correlated electron materials albeit in a vastly simplified context\cite{Holstein, Varma, Altshuler1994, Nayak1994, Polchinski1994, Chakravarty1995, Oganesyan2001,Varma2002, FradkinExtra, Senthil2009, Lee2009,Metlitski2010,Mross2010,Lee2013}.
Our focus here will be on quantum critical metals, which are described by a Lagrangian containing, in addition to the fermions of the metal, bosonic order parameter fields whose mass is tuned to zero at a quantum critical point.
The standard paradigm for understanding quantum critical metals\cite{Hertz1976, Millis1993} involves integrating out all fermionic excitations, including those modes that lie on the Fermi surface. In a metal, this procedure is dangerous: integrating out gapless modes on the Fermi surface will give rise to
non-analytic, and even singular effective interactions among the bosons\cite{Abanov2004, Belitz2005}. A more systematic treatment of such phenomena would invoke a Wilsonian coarse-graining procedure in which only high energy modes are integrated out.
A Wilsonian effective field theory can never generate singular or non-analytic corrections to the action and can in principle be analyzed in a controlled fashion.
To date, all descriptions of non-Fermi liquids involve effective theories based on non-analytic actions of one form or another\cite{Holstein, Varma, Altshuler1994, Nayak1994, Polchinski1994, Chakravarty1995, Oganesyan2001,Varma2002, Lee2009, Metlitski2010, Mross2010}; they can only be obtained by integrating out gapless modes. By contrast, we are motivated here by asking whether non-Fermi liquid fixed points can arise in Wilsonian effective field theories. By explicit construction, we show that this is indeed the case, which therefore places the notion of a non-Fermi liquid fixed point on firmer ground. In the vicinity of the upper-critical dimension, which as we discuss below is $d=3$ spatial dimensions for the class of transitions studied here, we find new fixed points in which the bosons are described by a Wilson-Fisher fixed point and are coupled to a non-Fermi liquid.
Non-Fermi liquid fixed points of Fermi surfaces coupled to Landau damped U$(1)$ gauge bosons were first studied in an expansion about the upper-critical dimension in [\onlinecite{Chakravarty1995}]. We follow a similar approach, but start instead with a UV fixed point corresponding to a Fermi liquid coupled to undamped critical order parameter fields. In a large $N$ limit to be discussed in detail, the scaling trajectories away from the UV fixed point lead unambiguously to the non-Fermi liquid fixed point obtained here; the properties associated with this non-Fermi liquid fixed point differ from the predictions of the standard approach to the problem\cite{Sachdev}. In this limit, the metal also remains stable against the presence of infinitesimal attractive interactions at the non-Fermi liquid fixed point. For small $N$, however, there are IR singularities associated with Landau damping, as well as interactions in the BCS channel, which may cause the scaling trajectories to flow away from the fixed point. Thus, for small $N$, the fixed point described below governs the intermediate asymptotic behavior, above energy scales that can be parametrically suppressed in the expansions considered here (see Fig. \ref{fig:ShamitFigure}).
In this paper, we will restrict our analysis to Pomeranchuk instabilities, a classic and well-studied set of phase transitions in condensed matter physics, in which rotational symmetry is broken whereas translation symmetry remains preserved. In the case of continuous Pomeranchuk transitions, the bosons condense at zero momentum and therefore couple to fermions at every point of the Fermi surface. There is growing experimental evidence that such transitions have been observed in several families of highly correlated materials including the cuprate superconductors as well as in heavy fermion compounds\cite{Fradkin2010}. A similar treatment can be applied to the case of quantum critical phenomena associated with the density wave orders. We will consider these transitions in a separate publication.
The paper is organized as follows. In \S2, we construct a scaling theory that treats low energy bosons and fermions on an equal footing, and that captures the correct behavior of both the fermion and boson degrees of freedom when they are decoupled from one another. In \S3, we describe our renormalization group strategy and construct a non-Fermi liquid fixed point that governs the theory in the absence of four-Fermi interactions. We describe the correlation functions of both the boson and fermion degrees of freedom at the non-Fermi liquid fixed point; they differ from the results obtained in alternative treatments. In \S4, we re-introduce the four-Fermi interactions and describe subtleties associated with log-squared divergences that arise in their presence.
In \S5, we discuss controlled large $N$ theories where the subtleties of \S4 do not arise, and we find fixed points which generalize
those of \S3 to include four-Fermi interactions. We show that these fixed points have no superconducting instabilities. We close with a discussion of open issues in \S6.
Explicit calculations which we refer to in the main body are presented in several appendices.
\begin{figure}
\begin{center}
\includegraphics[width=0.35\textwidth]{ShamitFigure}
\end{center}
\caption{ This figure depicts the regime of energy scales over which our description is controlled. The physics below the parametrically low scale of Landau damping remains to be understood.
}
\label{fig:ShamitFigure}
\end{figure}
\section{Effective action and scaling analysis}
In the standard description of quantum critical points in metals\cite{Hertz1976}, one starts with a theory involving fermion fields $\psi_{\sigma}$ with spin $\sigma = \uparrow, \downarrow$ interacting at short distances with strong repulsive forces.
These interactions are decoupled by an auxiliary boson field $\phi$ representing a fermion bilinear, and the partition function is obtained by averaging over all possible values of both the fermion and boson fields.
Initially, the auxiliary field has no dynamics and is massive. However, as high energy modes of the material of interest are integrated out, radiative corrections induce dynamics for the bosons.
In a Wilsonian theory, the dynamics are encapsulated only in {\it local, analytic corrections to the bare action}. This mode elimination is continued until eventually, the UV cutoff $\Lambda \ll E_F$ represents the scale up to which the quasiparticle kinetic energy $\epsilon(\bm k)$ can be linearized about the Fermi level. At these low energies, and in the vicinity of the quantum critical point where the field $\phi$ condenses, it is legitimate to view $\phi$ as an independent, emergent fluctuating field.
The resulting effective low energy Euclidean action consists of a purely fermionic term, a purely bosonic term and a Yukawa coupling between bosons and fermions:
\begin{eqnarray}
\label{action}
\mathcal S &=& \int d \tau \int d^d x \ \mathcal L = S_{\psi} + S_{\phi} + S_{\psi-\phi} \nonumber \\
\mathcal L_{\psi} &=& \bar \psi_{\sigma} \left[ \partial_{\tau} + \mu -\epsilon(i \nabla) \right] \psi_{ \sigma} + \lambda_{\psi} \bar \psi_{\sigma} \bar \psi_{\sigma'} \psi_{\sigma'} \psi_{\sigma} \nonumber \\
\mathcal L_{\phi} &=& m_{\phi}^2 \phi^2 + \left(\partial_{\tau} \phi \right)^2 + c^2 \left( \vec \nabla \phi \right)^2 + \frac{ \lambda_{\phi}}{4 !} \phi^4 \nonumber \\
S_{\psi,\phi} &=& \int \frac{d^{d+1} k d^{d+1} q}{\left(2 \pi \right)^{2(d+1)}}g( k, q) \bar \psi( k) \psi( k+q) \phi(q),
\end{eqnarray}
where repeated spin indices are summed. The first term, $\mathcal L_{\psi}$, represents a Landau Fermi liquid, with weak residual self-interactions incorporated in forward and BCS scattering amplitudes. The second term represents an interacting scalar boson field with speed $c$ and mass $m_{\phi}$ (which corresponds to the inverse correlation length that vanishes as the system is tuned to the quantum critical point). The third term is the Yukawa coupling between the fermion and boson fields and is more naturally described in momentum space. The quantity $g(k,q)$ is a generic {\it coupling function} that depends both on the fermion momentum $\bm k$, as well as the momentum transfer $\bm q$ (we have suppressed spin indices for clarity). For a spherically symmetric Fermi system, the angular dependence of $g(k,q)$ for $\vert \bm k \vert = k_F$ can be decomposed into distinct angular momentum channels, each of which marks a different broken symmetry. Familiar examples include ferromagnetism (angular momentum zero) and nematic order (angular momentum $2$). More generally, the coupling can be labelled by the irreducible representation of the crystal point group and it respects symmetry transformations under which $\phi$ and $\bar \psi \psi$ both change sign.
The effective action in Eq. \ref{action} will be the point of departure of our analysis below.
\begin{figure}
\begin{center}
\includegraphics[width=0.48\textwidth]{scaling}
\end{center}
\caption{Summary of tree-level scaling. High energy modes (blue) are integrated out at tree level and remaining low energy modes (red) are rescaled so as to preserve the boson and fermion kinetic terms. The boson modes (a) have the low energy locus at a point whereas the fermion modes (b) have their low energy locus on the Fermi surface. The most relevant Yukawa coupling (c) connects particle-hole states separated by small momenta near the Fermi surface; all other couplings are irrelevant under the scaling.
}
\label{fig:scaling}
\end{figure}
We first describe a consistent scaling procedure for the action in Eq. \ref{action}. The key challenge stems from the fact that the boson and fermion fields have vastly different kinematics.
Our bosons have dispersion relation $k_0^2 = c^2 \bm k^2 + m_{\phi}^2$, so that low energies correspond as usual to low momentum, and their scaling is that of a standard relativistic field theory where all components of momentum scale the same way as $k_0$. By contrast, the fermion dispersion relation is $k_0 = \epsilon(\bm k)-\mu$, so their low energy states occur close to the Fermi surface (Fig. \ref{fig:scaling}). Moreover, the Yukawa coupling between the two sets of fields must conserve energy and momentum in a coarse-graining procedure. These complications are easily circumvented by requiring tree-level scaling to reproduce the behavior of a Landau Fermi liquid and a nearly-free boson decoupled from one another when $g=0$. Furthermore, when $m_{\phi}$ is finite, we must recover Landau Fermi liquid theory: this simple notion leads to a unique scaling procedure. As the fields are coarse grained, only the most relevant components of the Yukawa coupling function are retained. The four fermion interaction $\lambda_\psi$ is generally also a coupling function depending on the relative orientation of the fermion momenta, with different scalings for different configurations\cite{Polchinski1992, Shankar1994}.
To be more explicit, we consider a rotationally invariant Fermi surface, and following Polchinski\cite{Polchinski1992}, we define a fermion momentum $\bm k = \bm k_F + \bm \ell$,
where $\bm k_F$ is a point on the Fermi surface that is closest to $\bm k$; thus, $\bm \ell$ is a perpendicular displacement from the Fermi surface to $\bm k$. As the cutoff is lowered, energies and momenta must be rescaled, and in the Fermi liquid theory, only $\bm \ell$ are rescaled while $\bm k_F$ remain unaffected. For the boson fields, by contrast, all momenta components and energy must be rescaled as the cutoff is lowered. We integrate out modes at tree-level with energy $\Lambda e^{-t} < E < \Lambda$, and rescale frequencies (denoted $k_0$) and momenta so that the dispersion relations remain invariant. To simplify the discussion of scaling, we will focus on a spherically symmetric Fermi surface $\epsilon(\bm k) = \frac{1}{2 m} k^2$, so that our decomposition of the fermion momentum is equivalent to parameterizing momenta by a direction $\hat{\Omega}$ and a perpendicular magnitude $\ell$:
\begin{eqnarray}
\bm k = \hat{\Omega}(k_F + \ell).
\label{eq:fermionmomentumparm}
\end{eqnarray}
The dispersion relation for
$\ell \ll k_F$ is then simply $k_0 \approx v_F \ell$, $v_F= k_F /m$. The natural fermion scaling is therefore to scale $\ell$ the same as $k_0$, but not to scale any other components of momentum. In this parameterization (\ref{eq:fermionmomentumparm}), the components of momentum parallel to the Fermi surface are more properly thought of as angles rather than momenta.
We therefore find it natural to think of the Fermi surface as a continuous collection of effectively $(1+1)$-dimensional fermions coupled by forward scattering and BCS interactions, as is true in an ordinary Landau Fermi liquid.
We thus obtain the following scalings
\begin{equation} \label{eq:fermionMomentumScaling}
k_0' = e^{t} k_0, \ \bm k_F' = \bm k_F, \ \ell' = e^{t} \ell
\end{equation}
for the fermion states, whereas
\begin{equation} \label{eq:BosonMomentumScaling}
k_0' = e^{t} k_0, \ \bm k' = e^{t} \bm k
\end{equation}
is the scaling that we adopt for the boson fields. This particular scaling reflects the fact that our boson has dynamical critical exponent $z = 1$ at tree-level since we have not integrated out gapless fermions to generate a Landau-damped boson. The fields are rescaled so that the boson and fermion kinetic energies remain invariant, which leads to the following scaling relations:
\begin{equation} \label{eq:FieldScaling}
\psi' = e^{-3t/2} \psi , \
\phi' = e^{-\frac{(d+3)}{2}t} \phi
\end{equation}
From this it follows that a generic fermion interaction is irrelevant, whereas forward scattering and BCS interactions always remain marginal at tree-level: $ \lambda_{\psi}' = \lambda_{\psi}$ for all $d > 0$. It also follows from these considerations that the boson interactions must be rescaled as
\begin{equation}
\lambda_{\phi}' = e^{(3-d)t} \lambda_{\phi}
\end{equation}
which sets
$d=3$ as the upper-critical dimension for the boson fields: thus, when $g=0$ the quantum critical point has the properties of a classical critical point in one higher dimension, as is required when $z=1$.
At first sight, scaling the momenta of the fermions differently from those of the bosons may alarm the reader. It implies, among other things, that scale transformations in position space are non-local. However, this feature is present even in ordinary Landau Fermi liquid theory: the scaling procedure
couples fermions at different points in space. To see this explicitly, one can simply Fourier transform the momentum space scaling
\begin{eqnarray}
\psi(\hat{\Omega}, \ell) \rightarrow \psi'(\hat{\Omega}, \ell) = e^{-3 t/2} \psi(\hat{\Omega}, e^{-t } \ell),
\end{eqnarray}
back to position space. At linear order in $t$, one finds that $\psi'(x)$ depends on an integral over all $\psi(x)$.
We therefore are led to study the scaling of all couplings in momentum space. Finally, we note that away from criticality, when $m_{\phi} \ne 0$, the boson can formally be integrated out and we must recover Landau Fermi liquid theory: simplicity demands that the field scaling should not depend on $m_{\phi}$, which further compels us to adopt this scaling procedure.
Next, we consider the fate of a non-zero Yukawa coupling under this scaling procedure. In the low energy limit, the Yukawa coupling function $g(k,q)$ has a Taylor expansion of the form
\begin{equation}
g(k,q) = g(\bm k_F,0) +a_1 \ell + a_2 q + \cdots
\end{equation}
and it follows that under the scaling procedure above, only $g(\bm k_F,0)$ is marginal whereas all other terms are irrelevant.
Thus, only small momentum transfers imparted by the boson remain marginal in $d=3$ as we scale $k$ towards the Fermi surface (see Fig. \ref{fig:scaling}):
\begin{equation}
\lim_{ \bm k \rightarrow \bm k_F} \lim_{q \rightarrow 0} g'(\bm k', \bm q') = e^{\frac{3-d}{2} t} g(\bm k, \bm q)
\end{equation}
This simple relation is derived explicitly in the Appendix.
We shall refer to the coupling $g(\bm k_F, 0)$ simply as $g$ for the remainder of the paper. We see that $d=3$ is the upper-critical dimension for the Euclidean action; below it, both $\lambda_{\phi}$ and $g$ are relevant. Thus, we can naturally expect to find new fixed points in an $\epsilon-$expansion, which we show in the next section.
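The tree-level power counting above can be tabulated mechanically. Under (\ref{eq:fermionMomentumScaling})--(\ref{eq:FieldScaling}), a momentum-space coupling rescales as $e^{\Delta t}$, with $\Delta$ assembled from one factor per field and one per independent integration measure (one per field, minus one for overall momentum conservation): each fermion measure scales as $e^{-2t}$ (only $k_0$ and $\ell$ flow), each boson measure as $e^{-(d+1)t}$. A small bookkeeping sketch under these conventions:

```python
# Tree-level scaling dimension Delta of a momentum-space coupling,
#   coupling' = exp(Delta * t) * coupling,
# from the rescalings psi -> e^{3t/2} psi, phi -> e^{(d+3)t/2} phi,
# fermion measure ~ e^{-2t}, boson measure ~ e^{-(d+1)t}.

def tree_level_dim(n_psi, n_phi, fermi_measures, bose_measures, d):
    fields = 1.5 * n_psi + 0.5 * (d + 3.0) * n_phi
    measures = 2.0 * fermi_measures + (d + 1.0) * bose_measures
    return fields - measures

d = 3.0 - 0.1                                 # epsilon = 0.1
yukawa  = tree_level_dim(2, 1, 1, 1, d)       # (3-d)/2 = eps/2
quartic = tree_level_dim(0, 4, 0, 3, d)       # 3-d     = eps
bcs     = tree_level_dim(4, 0, 3, 0, d)       # 0: marginal in any d
print(yukawa, quartic, bcs)
```

The three outputs reproduce the statements in the text: $g$ and $\lambda_\phi$ are relevant below $d=3$ with dimensions $\epsilon/2$ and $\epsilon$, while the forward-scattering and BCS couplings stay marginal at tree level.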
Some readers may be familiar with other scaling schemes, such as the `patch' picture, where all components of the fermion momenta are scaled towards a single point on the Fermi surface. In this scheme, the fermion dispersion relation takes the form $k_0 = v_F k_\perp + \bm k_\parallel^2/2m$, and so the fermions scale with both $k_\perp$ and $\bm k_\parallel$. However, this scheme cannot be applied to the entire smooth Fermi surface without breaking it up into patches in an arbitrary way, with an increasing number of patches needed as we evolve to lower energies.
Most importantly, the ``patch'' scaling approach has the unappealing feature that forward scattering and BCS interactions are irrelevant (see appendix). This leads to the apparent contradiction that, when the system is tuned away from the critical point, Fermi liquid behavior is not recovered. This can only be fixed by resorting to a more complicated procedure\cite{Lee2008}.
In Hertz's approach, the boson kinetic term contains a non-analytic self-energy correction that one obtains upon integrating out gapless fermions on the Fermi surface:
\begin{equation}
\label{landaudamping}
S_{\phi}^{\rm Hertz} = S_{\phi} +g^2 m_{\psi}^2 \int \frac{d^{d+1}q}{\left( 2 \pi \right)^d} \frac{\vert q_0 \vert}{\vert q \vert } \theta(\vert q \vert - \vert q_0 \vert) \phi_{\bm q} \phi_{-\bm q}
\end{equation}
The inclusion of this term in the bare action reduces the upper-critical dimension of the boson fields by two, from $d=3$ to $d=1$: thus for $d> 1$, the bosons are imagined to be described by their gaussian fixed point. In this case, the scaling of time and space is different for bosons and fermions\cite{Sachdev}.
By contrast, in our theory, the self-energy correction in Eq. \ref{landaudamping} {\it does not } occur in our starting action, since we have integrated out only the high energy modes.
It is important to stress that although the physics of Landau damping is not incorporated directly into our bare action, it is always present in the theory: it is clear from Eq. \ref{action} that we would reproduce Landau damping when we integrate out fermions. In other words, the Landau damping effect is a property of the low-energy correlators of our theory at weak coupling. However, it does not alter the scaling of the theory at energies that are large compared to $g m_\psi$, which will be a parametrically small scale throughout our analysis. By choosing to keep the low energy fermions, we show that we obtain an entirely new description of a non-Fermi liquid metal.
\section{Fixed point structure at one-loop}
In this section, we discuss the fixed-point structure of the theory (\ref{action}), setting
the four-Fermi interaction $\lambda_{\psi}=0$ for now. This is also common in treatments of
Fermi liquid theory, where one first finds the Fermi liquid fixed point, and then assesses its
stability to fermion self-interactions. We discuss the non-trivial effects of four-Fermi interactions
in \S4 and \S5.
Since both $g$ and $\lambda_{\phi}$ are relevant below $d=3$, non-trivial fixed points can be obtained in a systematic expansion in $\epsilon = 3-d$. In this section we describe the renormalization group flow to one-loop order that is obtained when the diagrams in Fig. \ref{fig:alldiagrams} are taken into account. In all of these diagrams, internal propagators have energies in an infinitesimal shell $ \Lambda e^{-t} < E < \Lambda$, whereas external legs have energy $E < \Lambda e^{-t}$. The leading contribution from these diagrams is obtained by performing the loop integrals in $d=3$. After mode elimination, we rescale energy, momenta and fields in order to preserve the boson and fermion kinetic terms. We first summarize the effect of each of the one-loop diagrams in Fig. \ref{fig:alldiagrams}; explicit derivations can be found in the appendix.
\begin{figure}[h!]
\begin{center}
\includegraphics[width=0.48\textwidth]{alldiagrams2}
\end{center}
\caption{One-loop diagrams. The boson self-energy (a), boson self-interactions (b,c), fermion self-energy (d), vertex correction (e) and particle-hole scattering (f). Diagrams (a) and (b) do not contribute to the renormalization group flow, while (c) produces the ordinary Wilson-Fisher fixed point for bosons. Diagram (d) gives rise to fermion wave-function renormalization and (e) yields logarithmic Yukawa coupling constant renormalization. The usual marginal BCS interaction of Fermi liquid theory (f) is altered by fermion wave-function renormalization as well as by diagram (g), both of which make the BCS interaction irrelevant.
}
\label{fig:alldiagrams}
\end{figure}
The first two diagrams (Fig. \ref{fig:alldiagrams}(a) and (b)) represent the fermion contribution to the boson self-energy and self-interactions, respectively. Curiously, neither of them contributes to the RG flow. The boson self-energy obtained from eliminating fermion modes in the shell is proportional to
\begin{equation}
\Pi_{d \Lambda}(p) \propto \int_{d \Lambda} d q_0 \left[ {\rm sgn}(q_0) - {\rm sgn}(q_0 + p_0) \right] = 0,
\end{equation}
since the external frequencies are by definition smaller than those in the shell being eliminated. A similar result is obtained for the diagram in Fig. \ref{fig:alldiagrams}(b). Therefore, the fermions do not affect the running of $\lambda_{\phi}$ to one-loop order, and there is no boson wave-function renormalization to one-loop order.
The diagram in Fig.\ref{fig:alldiagrams}(c) yields the standard $\mathcal O(\lambda_{\phi}^2)$ contribution:
\begin{equation}
\label{wilson_fisher_flow}
\frac{d \lambda_{\phi}}{d t} = \epsilon \lambda_{\phi} - a_{\lambda_{\phi}} \lambda_{\phi}^2.
\end{equation}
Here, $a_{\lambda_{\phi}} $ is a positive constant given in the appendix and $t=-\log \left[ \Lambda/ \Lambda_0 \right]$ is now the RG flow parameter.
Next, consider the fermion self-energy in Fig \ref{fig:alldiagrams}(d). This diagram determines the fermion wavefunction renormalization, and therefore affects the running of all fermion couplings. It produces a contribution of the form
\begin{equation}
\Sigma_{d \Lambda}(k) = - \left( i k_0 g^2 a_g \right) d \log{\Lambda} ,
\end{equation}
where $a_g$ is a constant and is derived in the appendix.
The first contribution to the flow of the Yukawa coupling comes from fermion wave-function renormalization.
The second contribution to the flow of $g$ comes from the vertex correction diagram in Fig. \ref{fig:alldiagrams}(e) with zero external boson momentum. This quantity is related to the fermion self-energy via the simple relation
\begin{equation}
\frac{\delta g_{d \Lambda}(k, 0)}{g} = C_3 \frac{ \partial \Sigma_{d \Lambda}(k) }{\partial \left( i k_0 \right) } = -g^2 C_3 a_g d \log{\Lambda}
\end{equation}
where $C_3$ is a constant which is equal to 1 in the theory with a single scalar field $\phi$, and which takes more general values in large N theories we discuss later in the paper.
After the mode elimination and field rescaling, it can immediately be seen that the Yukawa coupling becomes
\begin{equation}
g'(t) = \left( g - C_3 a_g g^3 t \right) e^{-g^2 a_g t}e^{\epsilon t/2},
\end{equation}
where the first factor incorporates the vertex correction, the second takes into account wave-function renormalization, and the exponential factor is obtained from scaling at tree level. The resulting flow equation for the Yukawa coupling is readily obtained:
\begin{equation}
\label{yukawa_flow}
\frac{ d g }{d t} = \frac{\epsilon}{2} g - (1+C_3) g^3 a_g + \mathcal O(g^3 \epsilon)
\end{equation}
Note the existence of a fixed point at $g^2 = \frac{\epsilon}{2(1+C_3) a_g}$.
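A quick numerical integration of the flows (\ref{wilson_fisher_flow}) and (\ref{yukawa_flow}) confirms that generic weak-coupling initial conditions run to $(\lambda_\phi^*, g^*)$. The constants below are illustrative ($a_{\lambda_\phi} = a_g = 1$, $C_3 = 1$, $\epsilon = 0.1$); the actual values of $a_{\lambda_\phi}$ and $a_g$ are given in the appendix.

```python
# Forward-Euler integration of the one-loop flow equations
#   d(lam)/dt = eps*lam - a_lam*lam**2
#   d(g)/dt   = (eps/2)*g - (1 + C3)*a_g*g**3
# with illustrative constants (the true a_lam, a_g are in the appendix).

eps, a_lam, a_g, C3 = 0.1, 1.0, 1.0, 1.0
lam, g = 1.0e-3, 1.0e-3          # weak-coupling initial conditions
dt = 0.05
for _ in range(20000):           # total flow time t = 1000, deep IR
    dlam = eps * lam - a_lam * lam**2
    dg = 0.5 * eps * g - (1.0 + C3) * a_g * g**3
    lam += dt * dlam
    g += dt * dg

lam_star = eps / a_lam                           # Wilson-Fisher value
g_star = (eps / (2.0 * (1.0 + C3) * a_g))**0.5   # non-Fermi-liquid value
print(lam, lam_star, g, g_star)
```

Both couplings saturate at their fixed-point values, with the approach rate set by the linearized flow near the fixed point.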
Equations \ref{wilson_fisher_flow} and \ref{yukawa_flow} are the key results of this section. To one-loop order in the $\epsilon$-expansion, there is a non-trivial fixed point which has
2 main features. Firstly, the boson flows to the usual Wilson-Fisher fixed point with $\lambda_{\phi}^* = \mathcal O(\epsilon)$, and is surprisingly unaffected by the Fermi surface to this order. Secondly, there is a non-trivial fixed point at finite $g^* = \mathcal O(\sqrt{ \epsilon})$ which corresponds to a non-Fermi liquid in which the fermion propagator has an anomalous dimension.
When $\epsilon \ll 1$, the fixed point values $\lambda^*_{\phi}$ and $g^*$ are small. Therefore, the properties of the system can be computed via perturbation theory. Correlation functions of the boson conform with the predictions of the Wilson-Fisher fixed point to one-loop order, as do the critical exponents, whereas the fermion propagator develops branch cuts signifying the loss of a well-defined quasiparticle. To be more explicit, we compute the anomalous dimension $\gamma_{\psi}$ of the fermion propagator directly from the expression for the $\beta$ function and the RG equation:
\begin{equation}
\beta_{g^*} = \left[ \frac{3-d}{2} - 4 \gamma_{\psi} \right] g^* = 0
\end{equation}
which includes the contribution $C_3 = 1$ from the vertex correction, from which we see that $\gamma_{\psi} = \epsilon/8$. The fermion propagator at the fixed point has the scale-invariant form\cite{Phillips2013}
\begin{equation}
G(\omega, \ell) \sim \frac{1}{(i \omega - v_F \ell)^{1-2 \gamma_{\psi}} } f(\omega/\ell),
\end{equation}
where $f(\omega/\ell)$ is a scaling function that is undetermined by the RG equation.
Note that this scale invariant form, which follows from the existence of a fixed point, is obtained despite the fact that the fermions are at finite chemical potential.
Therefore, at the fixed point,
the imaginary part of the fermion self-energy varies as
\begin{equation}
\label{imsigma}
{\rm Im} \Sigma(k) \sim (g^*)^2 \omega^{1 - \frac{\epsilon}{4}}.
\end{equation}
By contrast, recent predictions based on the standard approach to the problem\cite{Sachdev} suggest that
\begin{equation}
{\rm Im} \Sigma(k) \sim \omega^{d/3} = \omega^{1 - \frac{\epsilon}{3}}.
\end{equation}
where the second equality makes the comparison directly with the expression in Eq. \ref{imsigma} obtained at the fixed point. We note that even to leading order in $\epsilon$, there are discrepancies between the two sets of theories.
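The arithmetic behind these exponents can be verified with exact rational arithmetic; the value of $\epsilon$ below is illustrative, since the identities hold for any $\epsilon$:

```python
from fractions import Fraction

eps = Fraction(1, 10)          # illustrative epsilon; identities hold for any value
d = 3 - eps                    # working in d = 3 - epsilon spatial dimensions

# Fixed-point condition [(3 - d)/2 - 4 gamma_psi] g* = 0  =>  gamma_psi = eps/8
gamma_psi = (3 - d) / 2 / 4
assert gamma_psi == eps / 8

# The propagator exponent 1 - 2 gamma_psi matches the Im Sigma power 1 - eps/4
assert 1 - 2 * gamma_psi == 1 - eps / 4

# The standard prediction omega^{d/3} instead corresponds to the power 1 - eps/3
assert d / 3 == 1 - eps / 3
```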
\section{Fermion Self-Couplings}
\label{sec:fermionselfcoupling}
\begin{figure}
\begin{center}
\includegraphics[width=0.48\textwidth]{VDiagram}
\end{center}
\caption{ This figure depicts the sum of two diagrams that define the tree-level fermion scattering function $V(\theta)$. This function then appears directly in the generalized BCS loop diagrams depicted in Fig.~\ref{fig:alldiagrams}(f, g, i).
}
\label{fig:VDiagram}
\end{figure}
Next, we describe the fate of four fermion interactions at the non-trivial fixed point. In addition to the Fermi liquid contribution in Fig. \ref{fig:alldiagrams}(f), there are several diagrams which affect the flow of $\lambda_{\psi}$, and some of their contributions are rather subtle.
Aside from the fermion wave-function renormalization of diagram Fig. \ref{fig:alldiagrams}(d), there are four diagrams that produce vertex corrections.
The simplest vertex correction to consider is depicted in Fig. \ref{fig:alldiagrams}(h).
As explained in the appendix, due to the kinematic constraints of the Fermi surface, it has no divergent piece.
The other vertex corrections shown in Fig. \ref{fig:alldiagrams}(g) and Fig. \ref{fig:alldiagrams}(i)
are a bit more subtle. For example, if one directly evaluates any of these diagrams individually, one seems to find a dependence of the form $\log^2( \Lambda )$, which would naively lead to explicit $\Lambda$ dependence in the RG flow equations.
One may obtain more insight into the origin of these puzzling divergences by slightly re-organizing the calculation. For this purpose, let us define the general vertex function
\begin{eqnarray}
V(\theta, \omega) \equiv \lambda_\psi(\theta) + \frac{g^2}{\omega^2 + c^2 ( \ell^2 + 4k_F^2 \sin^2 \frac{\theta}{2})}
\end{eqnarray}
where $\theta$ is the angle between the initial and final antipodal fermions, and we have suppressed the explicit $\ell$ dependence of $V$ as well as the scale dependence of the couplings.
Inserting this vertex function into one-loop Feynman diagrams, one can see that in the integral over momenta, soft scattering at small $\theta$ is responsible for an enhanced log divergence on top of the one which follows from naive scaling.
One simple way of seeing the enhanced log divergences is as follows. We can write all of the diagrams Fig. \ref{fig:alldiagrams}(f, g, i) in the compact form
\begin{eqnarray} \label{eq:GeneralizedBCSAngular}
I(\Omega) = \int \frac{\delta \Lambda d \ell \, d^{d-2} \Omega' \, k_F^{d-2}}{(2 \pi)^d (\Lambda^2 + v_F^2 \ell^2)} V(\Omega', \Lambda) V(\Omega - \Omega', \Lambda) \ \ \
\end{eqnarray}
Now we see that the integral over $\Omega'$ is merely a convolution, so we can diagonalize it by writing $V(\theta, \omega)$ in terms of spherical harmonics on the Fermi surface, giving
\begin{eqnarray} \label{eq:GeneralizedBCSLoop}
I(L) = a_{V} V(L, \Lambda)^2 d \log (\Lambda)
\end{eqnarray}
for a constant $a_V$. The $\log^2(\Lambda)$ effect alluded to above is now contained in the behavior of $V_L$ as a function of Fermi surface angular momentum $L$.
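The diagonalization of the angular convolution underlying Eq. (\ref{eq:GeneralizedBCSLoop}) can be illustrated numerically. On a one-dimensional Fermi surface (a circle) the harmonics reduce to Fourier modes, and the circular convolution of $V$ with itself becomes the square of its Fourier coefficients. A minimal numpy sketch, with an illustrative angular profile standing in for the physical vertex:

```python
import numpy as np

n = 256
theta = 2 * np.pi * np.arange(n) / n
# Illustrative angle-dependent vertex: a smooth peak at small theta,
# mimicking the boson-exchange enhancement of soft scattering
V = 1.0 / (0.1 + np.sin(theta / 2) ** 2)

# Direct circular convolution: conv[k] = sum_j V[j] V[(k - j) mod n]
conv = np.array([np.sum(V * np.roll(V[::-1], k + 1)) for k in range(n)])

# Harmonic route: the convolution is diagonal, (V*V)_L = (V_L)^2
conv_fft = np.real(np.fft.ifft(np.fft.fft(V) ** 2))

assert np.allclose(conv, conv_fft)
```

The squaring of the harmonic coefficients is exactly the structure $V(L,\Lambda)^2$ appearing in the one-loop result.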
Similar $\log^2$ divergences have been encountered in the earlier literature
\cite{Nayak1994a,Son1999,Schafer1999,Schafer2001,Wang2013}. An interesting suggestion for an improved RG to surmount this issue appears in the work of Son\cite{Son1999}. We will see here that in controlled large $N$ limits, the log$^2$ diagrams do not contribute, and the RG analysis is conventional. We intend to pursue a systematic investigation of the proper treatment of such divergences at small $N$ in further work.
\section{One-Loop Structure at Large $N$ and Fixed Points Including $\lambda_{\psi}$}
In \S3, we discovered new one-loop fixed points where a Wilson-Fisher boson dressed a Fermi liquid into a non-Fermi liquid. However, incorporation of four-Fermi interactions in \S4\ leads to new issues in the renormalization group, whose systematic investigation we leave for the future.
For now, we can gain significant theoretical control over the model including four-Fermi interactions by introducing a large $N$ version of it.
To do this, the fermions $\psi$ are promoted to $N$-vectors $\psi_i$ while the scalar is promoted to an $N\times N$ complex matrix $\phi^i_j$:
\begin{eqnarray}
\mathcal L_{\psi} &=& \bar \psi^i \left[ \partial_{\tau} + \mu -\epsilon(i \nabla) \right] \psi_{i}
+ \frac{\lambda_{\psi} }{N}\bar \psi^i \psi_{i} \bar \psi^j \psi_{j}
\nonumber \\
\mathcal L_{\phi} &=& {\rm tr}\left( m_{\phi}^2 \phi^2 + \left(\partial_{\tau} \phi \right)^2 + c^2 \left( \vec \nabla \phi \right)^2\right) \nonumber \\
&&+ \frac{\lambda_\phi^{(1)}}{8 N} \textrm{tr} (\phi^4) + \frac{\lambda_\phi^{(2)}}{8 N^2} (\textrm{tr} (\phi^2))^2 \nonumber \\
\mathcal L_{\psi,\phi}&=& \frac{g}{\sqrt{N}} \bar \psi^i \psi_{j} \phi^j_i
\label{largeNaction}
\end{eqnarray}
where we consider spinless fermions for simplicity. In order to avoid having more than one independent scalar mass that must be tuned to zero at the fixed point, we take $\phi^i_j$ to be in an irreducible representation. For concreteness, we take this to be the adjoint of $SU(N)$, and the fermions to transform in the fundamental representation, though the leading large $N$ effects will not be especially dependent on this choice.
Although a priori there are two distinct quartic boson couplings with different trace structures, one can in fact set
$\lambda_{\phi}^{(1)} = 0$ in a natural way. The model enjoys an enhanced $SO(N^2)$ symmetry in that
limit (softly broken by the relevant parameter $g$), and so it is radiatively stable to do so. We proceed with $\lambda_{\phi}^{(1)} = 0$.
In appendix \ref{app:matrix}, we evaluate the generalization of the renormalization coefficients $a_g, a_{\lambda_\psi}, a_V$, and $C_3$ to large $N$, but here we will just give a qualitative overview. First of all, the diagram
(Fig. \ref{fig:alldiagrams}(a)) vanishes in this limit, including the finite piece. This implies that Landau damping is completely absent at $N \rightarrow \infty$.
This large $N$ limit in $d=3$ was exploited previously in \onlinecite{Mahajan2013}.
This is interesting, but it does mean that the large $N$ limit and the $\omega \to 0$ limit (where Landau damping is dominant in other treatments of this problem) exhibit subtle interplay.
In fact, this is part of a more general point: at infinite $N$, the fermions do not renormalize the scalar at all. The fixed point properties of the scalars can therefore be studied independently of the fermions.
On the other hand, the scalars do have an effect on the renormalization of the fermions, even though the fermions do not ``back react'' on the scalars. In particular, the wavefunction renormalization, which affects the running of all fermion interactions, survives at leading order in large $N$:
\begin{eqnarray}
a_g = {\cal O}(N^0).
\end{eqnarray}
In contrast, the direct cubic vertex renormalization diagram Fig. \ref{fig:alldiagrams}(e) vanishes at large $N$:
\begin{eqnarray}
C_3 &=& {\cal O}(N^{-2}).
\end{eqnarray}
The factor of $1/N$ in the four-fermion interaction $\lambda_\psi$ has been chosen so that the resulting one-loop renormalization of the fermion propagator is finite at $N\rightarrow \infty$. The interaction itself naively has two different possible index structures, $\bar{\psi}^i(p)\psi_i(p')\bar{\psi}^j(-p)\psi_j(-p')$ and $\bar{\psi}^i(p)\psi_j(p')\bar{\psi}^j(-p)\psi_i(-p')$ in the action above. However, by anti-commuting the fermions and relabeling the momenta, these can be seen to be just a single interaction (with a coefficient $\lambda_\psi$ that can be a function of the angle $\cos \theta = \hat{p} \cdot \hat{p}'$).
The diagrams in Fig. \ref{fig:alldiagrams}(g), (h), and (i) do not contribute to $\lambda_\psi$ at leading order in large $N$.
Consequently, the double-logs mentioned in section \ref{sec:fermionselfcoupling} are also absent at infinite $N$. \footnote{A different class of diagrams contributing to the overlap region between forward and antipodal scattering is not suppressed at large $N$, and was considered in [\onlinecite{SonShuster}]. The interesting effects of these diagrams set in at a scale that is exponentially small at small $\epsilon$, and we will not discuss them further.}
The structure of the resulting RG fixed points is as follows. The boson still behaves as if it is at a Wilson-Fisher fixed point at leading order in large $N$. The fermion is dressed into a non-Fermi liquid, and we can now assess the stability of this non-Fermi liquid fixed point to superconducting instabilities. The leading RG equation for $\lambda_{\psi}$ is therefore of the form
\begin{equation}
\label{nosuper}
{d\over dt} \lambda_{\psi} = -2a_g g^2 \lambda_{\psi}.
\end{equation}
An additional term on the right-hand side of the form $\beta \lambda_\psi^2$ (i.e. the conventional BCS result, that would drive attractive interactions to grow at low energies) is absent at infinite $N$; more precisely, $\beta = {\cal O}(N^{-1})$. We therefore see that there is a \emph{stable} fixed point at $\lambda_{\psi}=0$. The large $N$ fixed point is stable against superconductivity. Similar conclusions were obtained in a recent study\cite{Chung2013} of superconductivity of fermions at finite density coupled to U$(1)$ gauge fields in $d=3+1$.
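Equation (\ref{nosuper}) can be integrated directly; the sketch below (with illustrative values for $a_g$ and $g^2$, which are not fixed at this point in the text) confirms the exponential decay of $\lambda_\psi$ toward the stable fixed point:

```python
import math

# Illustrative fixed-point values (not from the text): a_g and g*^2 = O(epsilon)
a_g, g2 = 1.0, 0.05
rate = 2.0 * a_g * g2

def flow(lam0, t, steps=100000):
    # Forward-Euler integration of d(lambda_psi)/dt = -2 a_g g^2 lambda_psi
    dt = t / steps
    lam = lam0
    for _ in range(steps):
        lam += dt * (-rate * lam)
    return lam

# The coupling decays exponentially toward the stable fixed point lambda_psi = 0
lam_t = flow(1.0, 10.0)
exact = math.exp(-rate * 10.0)
assert abs(lam_t - exact) < 1e-4

# Attractive (negative) couplings also shrink: no BCS runaway at infinite N
assert abs(flow(-1.0, 10.0)) < 1.0
```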
\section{Discussion}
The key result of this paper is the existence of a non-Fermi liquid fixed point for a metal near a quantum critical point below $d=3$. We have obtained the non-Fermi liquid fixed point in a theory which formally integrates out only high energy modes, never incorporates the effect of Landau damping in the tree-level action, and treats the low energy boson and fermion modes on an equal footing. Furthermore, in the large-N limit of \S5, the non-Fermi liquid is always stable against superconductivity, since BCS interactions become irrelevant at the fixed point due to the first term of Eq. (\ref{nosuper}), which is $\mathcal O(\epsilon)$.
However, there is a peculiar aspect of the fixed points obtained in the present analysis. We have emphasized that while the physics of Landau damping can always be recovered in our theory at any stage by integrating out fermions, there is no sense in which Landau damping smoothly grows under the RG. The reason for this effect is that the boson self-energy to one-loop order is not logarithmically divergent. Stated differently, the damping only arises when fermion modes below the boson energy are integrated out; in our Wilsonian RG treatment, we never integrate out these modes. It is a curiosity that in our approach, the boson dynamical critical exponent remains at unity at all finite steps of the RG. It seems very likely that as one goes to higher orders in $\epsilon$, more complicated diagrams generate
a non-trivial dynamical critical exponent for the scalar, and its behavior no longer coincides with that
of a scalar at the Wilson-Fisher fixed point. At $\epsilon=1$, such corrections are likely to be
quite important.
This raises the natural question of whether the fixed points we found here can be continuously interpolated to $\epsilon=1$, and govern the IR behavior of a quantum critical metal in $d=2+1$. While there is good reason to believe that the fixed point structure survives to $\epsilon = 1$, there are several issues which present a challenge in constructing the theory. Firstly, at energy scales below $\omega_{LD} \sim g m_{\psi}^*$, the boson becomes substantially Landau damped due to the presence of the dissipative fermion bath\cite{Mahajan2013}. Secondly, the effect of $\lambda_{\phi}$, which is $\mathcal O(1)$ relevant in $d=2+1$, should be taken into account beyond one-loop order. A promising route to describing the non-Fermi liquid in $d=2+1$ involves looking for self-consistent solutions of both the boson and fermion self-energies by computing a Dyson expansion for both quantities. It is well-known that starting with a Fermi liquid propagator, self-consistency cannot be achieved, since $g$ is $\mathcal O(1)$ relevant. Stated differently, the boson Landau damping, which assumes an underlying Fermi liquid, leads to a non-Fermi liquid description in $d=2+1$. However, it is conceivable that by starting with a non-Fermi liquid ansatz, a self-consistent solution to the problem at hand may be achieved. In suitable large $N$ limits, for instance, one can find closed-form integral equations for the boson and fermion self-energies, and search for self-consistent solutions. Results in this direction, with a comparison to the present work, will be described in a future publication.
\acknowledgements{We acknowledge important conversations with S. Chakravarty, A. Chubukov, E. Fradkin, S. Hartnoll, S. Kivelson, S.-S. Lee, R. Mahajan, M. Metlitski, M. Mulligan, and S. Sachdev. This work was supported in part by the National Science Foundation grant PHY-0756174 (SK), DOE Office of Basic Energy Sciences, contract DE-AC02-76SF00515 (SK and SR), the John Templeton Foundation (SK and SR), and the Alfred P. Sloan Foundation (SR). This material is based upon work supported in part by the National Science Foundation Grant No. 1066293. ALF and JK were partially supported by ERC grant BSMOXFORD no. 228169. JK acknowledges support from the US DOE under contract no. DE-AC02-76SF00515.
}
\newpage
\section{Introduction}
The role of the geomagnetic field in life processes remains unclear. Even the fact that some animals can navigate using the geomagnetic field is not yet explained \citep{Johnsen.ea.2008,Mouritsen.2012}. The nature of biological effects caused by such weak magnetic fields is a physical problem \cite{Binhi.ea.2003.PU}.
There are both epidemiological \citep{Schuz.ea.2009} and laboratory \citep{Ghione.ea.2004,Cook.ea.2006} studies showing some association between the level of AC electromagnetic fields and human health. However, relatively little is known about the effects of weak static magnetic fields, on the order of the geomagnetic field (GMF), in humans.
There have been only a few laboratory studies focused on the cognitive effects of weak static magnetic fields, particularly the hypomagnetic field (HMF).
In \cite{Beischer.1971}, 24 subjects, two people at a time, were continuously exposed to a 50~nT hypomagnetic field for up to two weeks. A range of psychological tests were performed before and after the magnetic exposure: the space perception test, visual spatial memory, hand-eye coordination, the reproduction of time intervals, and the subject's equilibrium. In all these tests no significant difference was found between the data collected in the geomagnetic and in the hypomagnetic environments. However, in \cite{Thoss.ea.2007}, averaged over 55 subjects, the sensitivity of the human eye to a visual light stimulus in the hypomagnetic field was less than that in the GMF by (6$\div $7){\%}. Thus, the data available on the effect of the HMF on human cognitive processes were insufficient and inconsistent.
In our earlier work \cite{Sarimov.ea.2008e}, we reported that deprivation of the GMF to a level lower than 400~nT affected human cognitive processes. Forty people, who all gave their informed consent, were tested in a series of four cognitive tests. Under the HMF, both the number of errors and the task processing times increased by about (1.5$\div$2.5){\%}, on average. These results were obtained by using several multivariate statistical methods: MANOVA, the discriminant, the factor, and the cluster analyses. The total magnetic effect, calculated as the average over about 120000 trials, was (1.7$\pm $0.2){\%}. This value was rather steady: when the array of data was limited to the measurements of the task processing times only, the average effect was 1.64{\%} \cite{Binhi.2012e}; if the results of six subjects who showed maximal effects were removed from the array, the average effect, then 1.49{\%}, retained its statistical significance at $p< 0.004$ \cite{Binhi.ea.2009.EMBM}. So, within the limits of this study, the global mean magnetic effect in humans was formed by the bulk of the measured data and by all the subjects. The observed magnetic effect was a consequence neither of the particular efficiency of any test used nor of the presence of particularly sensitive subjects. Temperature and atmospheric pressure were examined among possible essential factors, but they did not affect the results.
It should be noted that all eight measured characteristics were subjective psychological reactions. It was also interesting to understand whether the hypomagnetic field can influence human reactions that are mostly independent of the will of a subject. The pupil size is a characteristic clearly involved in the execution of the aforementioned psychological tests. Although psychologically induced pupil constriction/dilatation is known, the physiological reaction to light, the pupillary light reflex, is well expressed. For this reason, the pupil size was chosen for tracking simultaneously with the above testing of the subjects under HMF/GMF exposure.
The aim of conducting the present study was to investigate whether the hypomagnetic field can cause the eye pupil to change in size.
\section{Method}
\subsection{Subjects}
No special selection of subjects was made, except for ensuring equal numbers of men and women and of people aged under and over 40 years. There were 20 subjects in each gender-specific or age-specific group, and each subject was tested both in GMF and HMF conditions.
\subsection{Exposure system}
GMF deprivation was achieved by compensating the GMF in a special wooden box of size $1 \times 1 \times 1.5$~m$^3$. The box included a wire mesh that shielded a test subject from the outer randomly variable electrostatic field. The magnetic field (MF) inside the box was measured by fluxgate sensors fixed near the head of the subject, approximately at the center of the box. A digital feedback system compensated (along the main axis) the outer magnetic field and its variations caused by the city electric vehicles and industrial pulses.
Four circular coils 1~m in diameter were spaced 0.5~m apart, with 40 windings in the side coils and 26.5 in the middle ones. The total active electrical resistance was 1.23~Ohm. The MF inhomogeneity inside the workspace of the system did not exceed 2{\%}. The main axis of the system was oriented parallel to the GMF (44~$\mu $T) vector with a precision of 0.5 degree. The bandwidth of the feedback system was about 10~Hz, at an MF sampling rate of 1000~Hz. The residual MF inside the box during experiments did not exceed 0.4~$\mu $T along the main axis and 0.6~$\mu $T in the perpendicular directions.
\subsection{Test procedures}
Each subject was tested twice; the second session was usually conducted 30$\div $50 days after the first one. In one of these two sessions, the HMF was used, and in the other, for comparison, the conditions were the same but without GMF deprivation. To exclude a possible contribution from the order of the HMF and GMF sessions, the order for half of the subjects was opposite to that for the other half. The measured quantities were the task processing times and the numbers of errors in the following tests: (i) the rate of a simple motor reflex, (ii) recognition of colored words, (iii) short-term color memory, and (iv) recognition of rotated letters. Two of these were modifications of the well-known J.R. Stroop and R.N. Shepard tests.
A total of eight parameters were measured in this study. The protocol of this experiment is described in detail in \cite{Sarimov.ea.2008e}. What is essential is the following: each of the 80 experiments consisted of three time periods: 10 min of accommodation to the environment under GMF conditions and preparation for testing; 10+10 min of testing to collect reference data, also under GMF conditions; and 10+10+10+10 min of testing under GMF conditions (in 40 ``sham'' experiments) or under HMF conditions (in the other 40 ``real'' experiments). One-minute relaxation intervals were placed between all these 10-min periods, so that the total duration of an experiment was 76 min. Test subjects were not aware of which magnetic field, GMF or HMF, they were subjected to during the 40 min of ``exposure.''
A special device was made for recording eye movements. A plastic frame fixed on the subject's head carried an analog video camera ACE-S560H (0.05 lux, 600 lines). A filter mounted in front of the camera inside the camera cylinder cut off light with wavelengths below 810~nm. The sensitivity of the camera was sufficient to work in the IR range. IR LEDs placed around the camera aperture illuminated the right eye area, which made it possible to significantly increase the contrast of the eye pupil.
The pupil movements were recorded in MPEG-4 digital format, converted to 8-bit gray by means of a video capture device. The rate was 25 fps; the duration of each of the 80 records was 76 min. It follows that a total of about 9 million frames were collected in this study, half for GMF- and half for HMF-type experiments.
An original computer program was developed to treat the footage frame by frame. After the 80 records were made, it turned out that one record had failed due to a technical fault. The results of the corresponding subject were therefore removed from the data set, and the program processed only 78 video files of 39 subjects. The preliminary treatment was as follows.
First, the program cut out the fragments of the footage that corresponded to the accommodation/relaxation intervals and to the short intervals of eye blinking. What remained were a little less than 20 min of the reference interval (control GMF conditions) and 40 min of the sham (GMF) or real (HMF) exposure. So we had 39 one-hour movies of GMF/GMF type, i.e., those of ``sham'' or ``simulation'' experiments, and 39 movies of GMF/HMF type, i.e., movies with ``real'' exposure to the HMF.
For each frame (680$\times $572~pixel$^2$) of the movies, the program found the image of the eye pupil, approximated the pupil by an ellipse, and determined its parameters: short and long axes, rotation angle, and horizontal and vertical positions of the pupil, Fig.~\ref{Eye}. These values were saved in a file also containing a timestamp, frame sequence number, and the mean luminance of the frame (the mean density of the gray within the 8-bit range 0--255).
The mean luminance was calculated for the entire frame area except the area of the eye pupil.
The results of each of the 78 experiments were presented as a table/file, each line of which corresponded to a single frame and contained the data of its treatment. Each column of the file represented an array: the sizes of the ellipse axes, the frame luminance, etc.
\begin{figure}[htbp]
\centering
\includegraphics[scale=0.5]{Eye.pdf}
\caption{Parameters of the equivalent ellipse of the eye pupil as determined by a computer program.
}\label{Eye}
\end{figure}
The second step of the treatment consisted of examining the saved arrays for outliers. Due to the contribution of many uncontrollable factors, the arrays contained not only regular changes but also noise. Some values in the arrays deviate so far from the means that their artifact origin is very probable. Such data are often removed from samples. The program removed a frame's entire line from the file if one of the values in the line was separated from the corresponding sample mean by more than three standard deviations. The reduced arrays were used for further calculations.
While a subject is tested, his or her eye rotates in different directions, so the eye pupil is seen from the camera aperture at different angles, as an ellipse. The actual size of the pupil is closer to the ellipse's major axis, because the minor axis varies as the cosine of the angle of view. We used the major axis as the main observable, determined for each frame.
In what follows, the arrays of measured pupil sizes, corresponding to the control, or reference, 20-min interval and to ``exposure'' 40-min intervals, are denoted as $\bf c$ and $\bf x$ for ``real'' experiments, and $\bf s$ and $\bf y$ for ``sham'' experiments, respectively, or, in a different order, $\bf c$ and $\bf s$ stand for controls, and $\bf x$ and $\bf y$ stand for exposure intervals. Let $c$, $x$, $s$, and $y$ be sample means of those arrays, $\sigma_c$, $\sigma_x$, $\sigma_s$ and $\sigma_y$ be their standard deviations, and $i$ stand for the index that numbers the subjects.
Mathematical operations like ${\bf x}/c$ imply that multiplication by $1/c$ is applied to each element of the array $\bf x$. We could then determine the result of a subject's exposure in a ``real'' experiment as the mean of the array ${\bf x}/c$, or a normalized effect $x' \equiv x/c$ that is centered on unity.
However, that would not be a magnetic effect, because the change from $c$ to $x$ could be due to natural physiological rhythms, to learning in the course of testing, etc. A correct determination of the magnetic effect of the real HMF exposure lies only in its comparison with the result of the ``sham'' experiment, where the mean of the array ${\bf y}/s$, or $y' \equiv y/s$, is calculated. Thus, we determine the mean magnetic effect as $m = (x'- y') / y'$. With this definition, the mean magnetic effect can be considered as the mean of the array ${\bf m} = ({\bf x}/c - y')/y'$. This is the array of ``elementary magnetic effects'' defined for each separate frame of the $\bf x$ array. It is convenient because it makes it possible to build different distribution functions and to compare their statistics.
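This bookkeeping can be sketched in a few lines of numpy; all arrays below are synthetic stand-ins for the measured pupil sizes, with an artificial effect built in:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-ins for the pupil-size arrays (illustrative values only):
c_arr = rng.normal(5.00, 0.10, 20000)   # reference interval, "real" session
x_arr = rng.normal(5.08, 0.10, 40000)   # HMF exposure interval
s_arr = rng.normal(5.00, 0.10, 20000)   # reference interval, "sham" session
y_arr = rng.normal(5.01, 0.10, 40000)   # sham "exposure" interval

c = c_arr.mean()
s = s_arr.mean()
y_prime = (y_arr / s).mean()            # y' = y/s, the drift seen in sham sessions

# Array of "elementary magnetic effects", one value per frame of x
m = (x_arr / c - y_prime) / y_prime
print(f"mean magnetic effect: {m.mean() * 100:.2f} %")
```

Dividing out $y'$ removes the drift common to both session types, so only the HMF-specific change survives in the mean of $\bf m$.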
\section{Results}
First, the magnetic effect was calculated as an average over all the subjects. All the normalized arrays of each subject were combined into single arrays, $\bf X$ and $\bf Y$, separately for ``real'' and ``sham'' experiments:
\[ {\bf X} \equiv \bigcup_i {\bf x}_i /c_i, ~~ {\bf Y} \equiv \bigcup_i {\bf y}_i /s_i . \]
The distributions of the $\bf X$ and $\bf Y$ elements, i.e., pupil sizes normalized to their means in controls, are shown in Fig.~\ref{HMFvsGMF}. These are the distributions of the relative pupil-size values, built as histograms, or relative frequencies of the corresponding values. The distributions are normalized to unit area under the curves. The arrays' lengths were 1692192 for $\bf X$ and 1671263 for $\bf Y$.
\begin{figure}[htbp]
\centering
\includegraphics[scale=0.65]{HMFvsGMF.pdf}
\caption{Distributions of the normalized pupil sizes at ``sham'' (GMF) and ``real'' (HMF) exposures, shown as histograms of 1000 bins. The means (dash lines) and the standard deviations of the samples were: $X=0.9935$, $\sigma_X = 0.0908$ and $Y = 0.9785$, $\sigma_Y = 0.0897$ for HMF and GMF, respectively.
}\label{HMFvsGMF}
\end{figure}
As can be seen, the distributions differ in their means and are close to normal. A two-sample $t$-test shows that the difference is statistically significant with an infinitesimal probability of error (the $t$-statistic equals 152). As to the magnitude of the magnetic effect, the ordinary definition gives $M \equiv (X-Y)/Y \approx 1.53${\%}.
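The quoted $t$-statistic can be reproduced from the summary statistics given in the caption of Fig.~\ref{HMFvsGMF} alone, using the standard unequal-variance two-sample formula:

```python
import math

# Summary statistics quoted in the text and the figure caption:
X, sX, nX = 0.9935, 0.0908, 1692192   # HMF ("real") sample
Y, sY, nY = 0.9785, 0.0897, 1671263   # GMF ("sham") sample

# Two-sample t-statistic (unequal-variance form) from the summary data alone
t = (X - Y) / math.sqrt(sX**2 / nX + sY**2 / nY)
assert abs(t - 152) < 2               # reproduces the quoted value

# The magnetic effect as defined in the text, M = (X - Y)/Y, in percent
M = (X - Y) / Y * 100
assert abs(M - 1.53) < 0.01
```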
The illumination of the eye in our experiments could vary for many reasons: natural and artificial room light variations, light from the moving objects on the LCD monitor in front of a test subject, and the subject's individual position in the magnetic exposure box. Although the effective spectrum of the measuring radiation was shifted to the IR range, optical radiation variations could contribute to the outcome because of the pupillary light reflex. Therefore, we paid particular attention to this factor. The mean illuminance of the area around the eye pupil was calculated along with the size of the eye pupil for each frame. It turned out that there is a direct correlation between these two values, and not an inverse one as might be expected, Fig.~\ref{correlationLumSize}.
\begin{figure}[htbp]
\centering
\includegraphics[scale=0.65]{correlationLumSize.pdf}
\caption{A correlation diagram between the pupil size and the illuminance of the area around the pupil as measured by the frame luminance. Only every hundredth point of more than three million is plotted; the regression line is calculated for the entire set of points.
}\label{correlationLumSize}
\end{figure}
The reason for this is that the position of the camera was not fixed relative to the face of a subject. A subject could adjust his or her position in the course of a session, so the distance between the camera and the eye often varied. The smaller the distance, the greater the illuminance due to the IR LEDs and the greater the apparent size of the pupil; this is a geometric constraint. At the same time, the average luminance over all the frames in HMF experiments happened to be greater than in GMF experiments. For this reason, we had to assume that the observed increase in pupil size under the HMF was at least partly caused by this geometric effect. Therefore, a correction was necessary to account for the correlation and exclude the geometric effect of luminance.
The correction procedure was to determine the coefficients of a simple linear regression and to correct the pupil sizes by subtracting the corresponding contributions of the regression. The slope of the regression line in Fig.~\ref{correlationLumSize} is $b=0.3230$, so the corrected values of the sizes were calculated as $a_{\rm corr} = a - b (E - E_{\rm mean})$, where $a$ is an element of the arrays ${\bf x}_i$, ${\bf c}_i$, etc., $E$ is the corresponding luminance, and $E_{\rm mean}$ is the mean luminance averaged over the entire data set. Of course, no correlation was found between the values of the frame luminance and the corrected pupil sizes. Nonetheless, the magnetic effect remained 100{\%} significant (the $t$-statistic equals 77, a figure that differs from 100{\%} by less than $10^{-1280}$), though at a reduced value.
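The correction amounts to ordinary least-squares detrending; the following sketch (with synthetic luminance and size data, not the measurements) illustrates it:

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic frames (illustrative numbers, not the data): luminance E and a pupil
# size a containing a geometric luminance contribution with slope ~0.3230
n = 50000
E = rng.normal(120.0, 10.0, n)
a = 5.0 + 0.3230 * (E - E.mean()) + rng.normal(0.0, 0.5, n)

# Slope of the simple linear regression of size on luminance
b = np.polyfit(E, a, 1)[0]

# Correction used in the text: a_corr = a - b (E - E_mean)
a_corr = a - b * (E - E.mean())

# The corrected sizes are uncorrelated with luminance (up to round-off)
r = np.corrcoef(E, a_corr)[0, 1]
assert abs(b - 0.3230) < 0.01
assert abs(r) < 1e-6
```

The vanishing residual correlation is an exact property of least squares, which is why the corrected data show no luminance dependence.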
The mean magnitude of the effect of magnetic exposure can only be computed rather than directly measured, so it depends on the definition. The definition $M \equiv (X-Y)/Y$ gives $M \approx 0.79${\%} for the corrected set of data. The pupil size distributions corresponding to the ``real'' and ``sham'' experiments, i.e., those built on the arrays $\bf X$ and $\bf Y$, are practically the same as in Fig.~\ref{HMFvsGMF}, with a smaller gap (not shown). It is essential that if the \textit{area} rather than the size of the pupil were used to calculate the magnetic effect, it would be twice as large, $1.58${\%}, with twice the standard deviation of the distributions as well. With any definition, the magnetic effect is 100{\%} statistically significant. The statistical significance between ``GMF'' and ``HMF'' persists at $p< 10^{-4}$ even if the four people showing maximal positive magnetic effects ($\sim $ 13, 12, 10, and 7{\%}) are removed from the sample.
\begin{figure}[htbp]
\centering
\includegraphics[scale=0.6]{PupSize-CognTests-Distribs.pdf}
\caption{Normalized distributions of the individual magnetic effects calculated for the same subjects: in the present work from the pupil size measurements (solid line, $\sigma = 0.048$) and in the work \cite{Sarimov.ea.2008e} from the parameters of cognitive tests (dashed line, $\sigma = 0.061$); mean values are shown. Points in the inset plot: a correlation diagram for the individual magnetic effects.
}\label{PupSize-CognTests-Distribs}
\end{figure}
Individual magnetic effects were studied after the regression correction was applied to the pupil size in each frame. The individual effects reflect the individual sensitivities of subjects to the 40-min HMF exposure. The individual magnetic effects, in their ordinary definition, were calculated for each test subject. For this, the arrays ${\bf m}_i = ({\bf x}_i/c_i - y'_i)/y'_i$ were formed, and for each one the mean value $m_i$ was calculated. The number of available quantities $m_i$, 39, was sufficient to compose an array ${\bf u} \equiv \bigcup _i m_i$. The distribution of its elements is shown in Fig.~\ref{PupSize-CognTests-Distribs}, together with the same distribution calculated from the parameters of the cognitive tests (see below). The distributions are given as density estimation functions with a Gaussian kernel of width equal to 0.25 standard deviations, which corresponds to a histogram of about eight bins over its main interval from $-\sigma_u$ to $\sigma_u$.
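The kernel density estimation used for these distributions can be reproduced with a direct Gaussian-kernel sum; a sketch in Python, assuming a fixed bandwidth of 0.25 sample standard deviations (the function name and evaluation grid are illustrative):

```python
import numpy as np

def gaussian_kde(samples, grid, bandwidth_factor=0.25):
    """Kernel density estimate with a Gaussian kernel whose width is a
    fixed fraction (0.25) of the sample standard deviation."""
    x = np.asarray(samples, dtype=float)
    h = bandwidth_factor * x.std()
    # sum of normalized Gaussians centred on each sample point
    diffs = (np.asarray(grid, dtype=float)[:, None] - x[None, :]) / h
    dens = np.exp(-0.5 * diffs**2).sum(axis=1)
    return dens / (len(x) * h * np.sqrt(2.0 * np.pi))
```

With this normalisation the estimated density integrates to unity over a sufficiently wide grid.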
Importantly, the individual mean magnetic effects, taken separately, were statistically significant. All subjects except two, who showed the smallest means, 0.13{\%} and 0.04{\%}, had their mean magnetic effects significant at the level $p< 10^{-3}$ at least. The ``real'' and ``sham'' distributions for each of the test subjects are similar to those in Fig.~\ref{HMFvsGMF}, however with greater gaps and noise.
\section{Discussion}
The present study demonstrates that a 40-min exposure to HMF has a statistically significant effect on the subjects: their eye pupils experience a weak dilatation. Although the total mean effect is small, the distributions of the measured values say something essential about the nature of magnetic effects in humans.
A distribution built on the joint array $ \bigcup_i {\bf m}_i$ mixes two different distributions of magnetic effects that can be isolated. Of interest are the shapes of these distributions. The first is the \textit{general shape} of the individual distributions of the ``elementary magnetic effects,'' i.e., what is common to the individual distributions apart from their mean values. The individual distributions differ by their means, but share a common shape, which can be seen after the means are subtracted from the arrays. In other words, it is the shape of the distribution of the joined array ${\bf M} = \bigcup_i ({\bf m}_i -m_i )$, Fig.~\ref{Distrib-ElementaryVSindividual}-a. The second is the shape of the distribution of the 39 individual magnetic effects $m_i$ in the array $\bf u$, Fig.~\ref{Distrib-ElementaryVSindividual}-b. The distributions are distinct because the nature of their variances is different. The variance of the first distribution is conditioned by many random factors of brain functioning and the physical environment, while the variability of the individual magnetic sensitivity, taken as a biological characteristic, is determined mostly by phenotypic variation.
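The two arrays underlying these distributions, ${\bf M}$ and ${\bf u}$, can be built from the per-subject arrays ${\bf m}_i$ as follows; a schematic Python helper (the function name is ours):

```python
import numpy as np

def split_distributions(effect_arrays):
    """Given per-subject arrays m_i of elementary magnetic effects, return
    (M, u): the pooled mean-subtracted array M = U_i (m_i - mean(m_i))
    and the array u of individual mean effects."""
    u = np.array([np.mean(m) for m in effect_arrays])
    M = np.concatenate([np.asarray(m, dtype=float) - np.mean(m)
                        for m in effect_arrays])
    return M, u
```

By construction, ${\bf M}$ has zero mean and carries only the common shape of the individual distributions, while ${\bf u}$ carries the between-subject variability.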
\begin{figure}[htbp]
\centering
\includegraphics[scale=0.65]{Distrib-ElementaryVSindividual.pdf}
\caption{The shapes of the individual distributions (a) and of the distribution of individual means (b) have essentially different standard deviations, 0.1 and 0.048, respectively.
}\label{Distrib-ElementaryVSindividual}
\end{figure}
The distributions in Fig.~\ref{Distrib-ElementaryVSindividual} show that the mean magnetic effect is not due to the presence of a small hypersensitive group of subjects. Practically all the persons demonstrated sensitivity to HMF. However, nearly equal parts of the subjects gave opposite responses to HMF, which resulted in a small average effect. At the same time, individual magnetic effects varied significantly within the range $\pm (10$--$12)${\%}. Thus, the standard deviation is substantially greater than the mean of the individual means. For this reason, the total mean is of little value. It resembles a mean fingerprint pattern, which is actually no pattern at all.
As one can see, the random error in these experiments is very small due to the huge volume of the data set; standard errors of the means are about $7\times 10^{-5}$, which certifies the second significant decimal digit in the mean magnetic effect magnitudes. A possible systematic \textit{a posteriori} bias was related only to the enhanced level of eye luminance in the ``real'' set of experiments. However, it turned out to be inessential, because the pupillary reflex is appreciable only for visible light and not for IR radiation. Apart from the geometric effect, no other possible influences on pupil size were found. Neither the LED highlights, nor the artificial indoor lighting, nor the outdoor daylight variations influenced the pupil size, as was established in a special testing session.
As was said above, there were $N=8$ measurands in the cognitive tests. For each of them, the array of the individual mean magnetic effects was separated, and all the arrays were sorted according to the subject's sequence number. Let ${\bf u}^{(n)}$ denote these ordered arrays, where the index $n=1,2,\ldots,N$ is the sequence number of a ``psychological'' measurand, with $n=0$ standing for the measurand ``eye pupil size''. One can then estimate the correlation between these arrays. A large correlation would signify that one and the same subject possesses higher or lower magnetic sensitivity in different tests, i.e., independently of the measurand used to determine his or her sensitivity. It has turned out that these arrays \textit{do not correlate}; the mean level of the matrix of correlation coefficients was
\begin{equation} \label{correlation} \frac {1} {N(N-1)} \sum_{n' \neq n} {\rm corr} ({\bf u}^{(n')},{\bf u}^{(n)}) \approx 0.09 .\end{equation}
This unambiguously confirms that there were no particularly sensitive subjects among the 39 tested, although in every separate test there were people showing a rather clear response to the hypomagnetic exposure.
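The quantity in Eq.~(\ref{correlation}) is simply the mean off-diagonal element of the correlation matrix of the ordered arrays; a minimal Python sketch, assuming the arrays are stacked row-wise (one row per measurand):

```python
import numpy as np

def mean_cross_correlation(U):
    """Mean off-diagonal correlation between ordered arrays u^(n):
    U is an (N, n_subjects) array, one row per measurand."""
    C = np.corrcoef(U)               # N x N matrix of corr(u^(n'), u^(n))
    N = C.shape[0]
    off = C[~np.eye(N, dtype=bool)]  # exclude the diagonal n' = n
    return off.mean()
```

A value near zero, as found here ($\approx 0.09$), indicates that a subject's sensitivity in one test does not predict their sensitivity in another.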
Constrictions and dilatations of the eye pupil occur independently of human will. It is an objective physiological reaction rather than a reaction based on subjective volition. It is interesting, therefore, that there is a similarity between the distributions built on the two types of reactions, Fig.\,\ref{PupSize-CognTests-Distribs}. Essential conclusions follow from the fact that this similarity exists together with the absence of correlation between the individual means of different measurands.
(1) ``Wings'' are seen in the shape of the distributions, at greater absolute values of the magnetic effect magnitudes. The wings, a few percent in area, are not as clear as the main peaks; however, they can be seen in the shape of the individual distributions both for pupil sizes and for the parameters of psychological reactions. This makes it possible to question the claim that there exists in the human population a group of people who are particularly sensitive to electromagnetic fields, the so-called ``electromagnetic hypersensitivity syndrome'' repeatedly reported in the literature \cite{Schuz.ea.2006}. According to this claim, a few percent of people can markedly react even to relatively weak electromagnetic fields that are incapable of appreciable tissue heating.
At first sight, the wing-shaped distribution of individual means does not contradict the hypothesis of hypersensitivity. However, the fact that there is no correlation between the magnetic effects as measured by eye pupil tracking and by psychological reactions, Fig.~\ref{PupSize-CognTests-Distribs} inset, indicates that the people demonstrating a very clear magnetic effect can be different individuals. According to our results, some people tested for a particular biological parameter will clearly react to EMF exposure. However, if a different parameter were chosen to be measured, another minor group would react to the same EMF. We suggest that EMF hypersensitivity exists only as a casual reaction.
(2) As was said above, the individual magnetic effects $m_i$ were determined for the same subjects but from different characteristics: from the eye pupil size in the present study, on the one hand, and from the number of errors and the test processing time in \cite{Sarimov.ea.2008e}, on the other hand. These magnetic effects turned out to be uncorrelated. At the same time, the distributions of these effects are rather similar: both have two major peaks and two minor peaks, or wings, in Fig.\,\ref{PupSize-CognTests-Distribs}. This fact indicates that the reaction of a human to MF exposure is not a systemic reaction.
An external factor, like acoustic noise or light, can cause only a systemic reaction, one conditioned by human perception and by the functioning of the central nervous system. In this case, the organism's different reactions to the external factor should be correlated. Apparently, the same is valid with regard to an internal, but \textit{ab initio} already systemic, factor like a biological rhythm. Unlike such factors of systemic action, an MF is an agent that bypasses human signaling systems, acts directly on tissues, and consequently acts without system, at random. It is exactly this that is observed as the absence of correlation, see (\ref{correlation}), between different biological measurands when a subject is exposed to a HMF. A subject during testing can be magnetically sensitive as measured by one parameter and simultaneously insensitive as measured by another.
(3) The data of \cite{Thoss.ea.2007} accord with our results \cite{Sarimov.ea.2008e} that changes between GMF and HMF cause a measurable biological reaction in humans. The authors of the former study concluded that their data agree with the so-called ``radical-pair mechanism'', see for example \cite{Gegear.ea.2010}. According to this concept, some animal species have a magnetic sense because the GMF affects spin-correlated pairs in cryptochrome photoreceptors in the eye retina. The findings of the present study are at variance with this hypothesis. The absence of correlation between different measurands in (\ref{correlation}) shows that the human reaction to a magnetic field is not a systemic reaction. Consequently, it is not a reaction caused by the visual analyzer and, in particular, by changes in its retinal cryptochromes.
Our data are in better agreement with the idea that the targets of the MF are more or less evenly spread over the human organism. These might be the magnetic nanoparticles found in human brain tissues \cite{Kirschvink.ea.1992}. Magnetic nanoparticles are small magnets that behave like a compass needle; they can rotate in an external MF. Magnetic nanoparticles produce their own relatively large mT-level MF. In turn, this MF can affect magnetosensitive radical-pair biochemical reactions \cite{Binhi.2008.IJRB}, so that external MFs as weak as 200~nT can cause biological effects \cite{Binhi.2006.BEM}.
\section{Conclusions}
The hypomagnetic field of about 400~nT widens the area of the human eye pupil by about $1.6${\%} on total average, with high statistical confidence. This result is based on human eye video recording during cognitive testing of 39 people in the usual geomagnetic environment and under exposure to the hypomagnetic field.
There are two types of distributions of magnetic effects: (i) the averaged individual distribution of elementary magnetic effects and (ii) the distribution of the individual mean magnetic effects.
The standard deviation of the distributions is much greater than the mean effects, which makes the total mean magnetic effect uninformative about the unknown nature of magnetic effects in humans.
The distribution of individual mean magnetic effects has a multi-peak shape nearly similar for all tested measurands. For each measurand, the peaks of the distribution are formed by different people.
The hypomagnetic effect observed in 39 test subjects, as measured by eye pupil size and by eight cognitive parameters, is likely a general magnetic effect in the human population. Because the magnetic reactions observed simultaneously with respect to different measurands do not correlate, these reactions to magnetic fields are mostly casual reactions. It takes a large volume of observations to register a very weak total magnetic effect.
\section{Introduction}
Self-gravitating astrophysical objects are usually set into rotation by diverse formation and evolution scenarios. Gravitational attraction is compensated by pressure but also by centrifugal forces. For fast rotating astrophysical objects, the centrifugal force substitutes for a substantial fraction of the internal pressure to sustain the system in a quasi-stationary equilibrium. If, moreover, the gas is magnetized, it radiates an electromagnetic wave, removing energy and angular momentum from the system. Single stars or clouds are typical configurations where such phenomena occur.
This problem is particularly relevant in the realm of high energy astrophysics. Indeed, fast rotating neutron stars, such as millisecond pulsars, are privileged places where the system deviates significantly from a spherical shape. Magnetars are also expected to show strong magnetic constrictions leading to oblate or even prolate surfaces not directly connected to a geometry imposed by their rotation. A detailed analysis of such systems evolving in vacuum was recently performed by \cite{petri_spheroidal_2021}. If vacuum is replaced by an ideal plasma, like an electron/positron pair plasma, a spheroidal force-free neutron star magnetosphere can be constructed, generalizing the extensive literature about the force-free magnetosphere of spherical stars \citep{spitkovsky_time-dependent_2006, komissarov_simulations_2006, mckinney_relativistic_2006, petri_pulsar_2012, cao_spectral_2016}.
From an observational point of view, thermal X-ray emission from neutron star hot spots was used to constrain the compactness and radius of these compact objects \citep{riley_nicer_2019, bogdanov_constraining_2019}. These estimates rely on fitting the X-ray pulsed profile taking into account general relativity and oblate surfaces. However, this modelling requires an accurate knowledge of the hot spot shape, its temperature profile, the stellar oblateness and the general-relativistic environment. All these ingredients are still uncertain, especially for millisecond pulsars, which deviate significantly from a perfect sphere. Nevertheless, these observational hints tend to reliably constrain the hot spot location on the star and its area. In the present work, we start from a theoretical perspective and construct polar cap areas from the knowledge of the electromagnetic field in the force-free regime of a prolate or an oblate star.
Computing light curves with general-relativistic effects has been done by \cite{cadeau_light_2007} and \cite{morsink_oblate_2007}, who used an oblate Schwarzschild approximation to the exterior gravitational field. \cite{algendy_universality_2014} studied the impact of fast rotation on the gravitational field at the surface of a neutron star. They also gave analytical fits to the equatorial radius. Recently, \cite{silva_surface_2021} constructed equilibrium configurations using realistic neutron star equations of state. Such investigations give a good estimate of the expected stellar deformation due to centrifugal forces and help to constrain the oblateness knowing the neutron star rotation period.
It is the purpose of this paper to compute force-free neutron star magnetospheres for a spheroidal star surrounded by an ideal pair plasma. In section~\ref{sec:simulations} we present the simulation set up, solving the time-dependent Maxwell equations using our fully pseudo-spectral code. In section~\ref{sec:results}, we plot magnetic field lines, spin down luminosities, polar cap shapes and current densities. Some discussion about fast rotating neutron stars is proposed in section~\ref{sec:discussion}. Conclusions and perspectives are drawn in section~\ref{sec:conclusion}.
\section{Time-dependent simulation setup}
\label{sec:simulations}
The present study relies on our previous work \citep{petri_spheroidal_2021}, adding an electric charge and current density in the force-free approximation. We succinctly recall the model before diving into the salient features of a force-free spheroidal magnetosphere.
The stellar interior is treated as a perfect conductor of spheroidal shape, being oblate or prolate. The time-dependent Maxwell equations are solved in a spheroidal coordinate system with coordinates~$(\rho,\psi,\phi)$ and oblateness or prolateness parameter~$a$. The coordinate system belongs to one of the eleven separable coordinate systems and forms an orthogonal basis. The electric and magnetic fields $\vec{E}$ and $\vec{B}$ are evolved in time according to
\begin{subequations}
\begin{align}
\label{eq:Maxwell1}
\mathbf{\nabla}\cdot \vec{B} & = 0 \\
\label{eq:Maxwell2}
\mathbf{\nabla} \times \vec{E} & = - \frac{\partial \vec{B}}{\partial t} \\
\label{eq:Maxwell3}
\mathbf{\nabla}\cdot \vec{E} & = \frac{\rho_{\rm e}}{\varepsilon_0} \\
\label{eq:Maxwell4}
\mathbf{\nabla} \times \vec{B} & = \mu_0 \, \mathbf j + \frac{1}{c^2} \, \frac{\partial \vec{E}}{\partial t} .
\end{align}
\end{subequations}
$\varepsilon_0$ and $\mu_0$ are the electric permittivity and magnetic permeability respectively, $\rho_{\rm e}$ is the charge density and $\vec{j}$ the current density.
For an extensive and detailed discussion of the spheroidal coordinate systems, we refer to \cite{petri_spheroidal_2021} in order to avoid reproducing lengthy formulae here.
As usual in the relativistic force-free regime, the electric current density is deduced from the force-free prescription and reads
\begin{equation}
\label{eq:J_ideal}
\vec{j} = \rho_{\rm e} \, \frac{\vec{E}\wedge \vec{B}}{B^2} + \frac{\vec{B} \cdot \mathbf{\nabla} \times \vec{B} / \mu_0 - \varepsilon_0 \, \vec{E} \cdot \mathbf{\nabla} \times \vec{E}}{B^2} \, \vec{B} .
\end{equation}
The system of equations is therefore fully determined.
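For illustration, the current prescription of Eq.~\eqref{eq:J_ideal} evaluated at a single point can be sketched in Python (a pointwise helper, not the pseudo-spectral implementation actually used; the curls are assumed to be supplied by the field solver):

```python
import numpy as np

MU0 = 4.0e-7 * np.pi         # vacuum permeability [H/m]
EPS0 = 8.8541878128e-12      # vacuum permittivity [F/m]

def force_free_current(rho_e, E, B, curl_E, curl_B):
    """Force-free current density of Eq. (J_ideal): the drift term
    rho_e E x B / B^2 plus a field-aligned term; all arguments except
    the charge density rho_e are 3-vectors at one grid point."""
    B2 = np.dot(B, B)
    drift = rho_e * np.cross(E, B) / B2
    parallel = (np.dot(B, curl_B) / MU0
                - EPS0 * np.dot(E, curl_E)) / B2 * B
    return drift + parallel
```

By construction, the second term is aligned with $\vec{B}$, so the difference between the full current and the drift current is always parallel to the magnetic field.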
The numerical setup for performing the simulations is similar to the one used in \cite{petri_spheroidal_2021}. The gravitating fluid outer boundary is denoted by~$\rho_{\rm in} = R$ and should not be confused with the spherical radius because we use spheroidal coordinates. The light-cylinder radius is $r_{\rm L}$ and the spheroidal inner surface of the computational domain is set to $\rho_{\rm in} / r_{\rm L}=0.3$. The outer boundary of the computational domain is located at $\rho_{\rm out}/r_{\rm L} = 7$. We impose outgoing wave boundary conditions at the outer edge of the simulation domain and enforce continuity of the tangential electric field component as well as of the normal magnetic field component on the stellar surface. A numerical convergence analysis showed that a grid resolution of $N_\rho \times N_\psi \times N_\varphi = 129\times32\times64$ was sufficient to obtain accurate results.
\section{Results}
\label{sec:results}
As physically relevant outputs of our simulations, we draw attention to the magnetic field structure, the spin down luminosity, the polar cap geometry and the polar cap current density. These quantities are the baselines underlying a more detailed investigation of observational consequences like thermal X-ray emission and radio and gamma-ray pulse profiles. However, the computation of such emission processes is postponed to future works.
\subsection{Magnetic field lines}
In the force-free regime, the magnetic field lines are attached to the particles. Because outside the light-cylinder these particles can no longer corotate with the star, the field lines open up, the poloidal component tending asymptotically to radial lines wound up by rotation. This winding up is a general feature of any force-free magnetosphere reaching ultra-relativistic corotating speeds close to the light cylinder and trying to enforce corotation beyond it, irrespective of the exact inner boundary on the stellar surface. We verify this assertion by plotting some of these field lines for spherical, oblate and prolate stars.
The asymptotic radial structure of poloidal field lines and the transition between closed and open field lines for an aligned rotator are shown quantitatively in Fig.~\ref{fig:lignes_champ_B_xz_a0} for an oblateness parameter $a/R=\{0,0.5,1\}$ in blue, green and red respectively. The blue quarter disk on the bottom left represents the spherical star. The light-cylinder is depicted as a vertical dashed black line. An oblate star inflates in the equatorial direction whereas a prolate star inflates along the rotation axis. Field lines of spheroidal stars therefore do not start right at the blue disk but at a larger distance.
Field lines in the equatorial plane for orthogonal rotators are shown in Fig.~\ref{fig:lignes_champ_B_xy_a90} with the same spheroidal parameter $a/R=\{0,0.5,1\}$ in blue, green and red respectively. The blue disk in the centre represents the spherical star. The two-armed spiral shape, typical of a rotating dipole, develops from the light-cylinder up to large distances. Its location in space is identical for all simulations.
\begin{figure*}
\centering
\includegraphics[width=\linewidth]{lignes_champ_B_xz_a0_ri0.3_ro7_n257_nt64_np1_cfl0.5_ba7_alp0.1_o4_j1.png}
\caption{Magnetic field lines in the meridional plane $xOz$ for an aligned spheroidal rotator with oblateness (first two columns) or prolateness (last two columns) parameter $a/R=\{0,0.5,1\}$ respectively in blue, green and red. The blue quarter disk in the bottom left depicts the spherical star. The vertical dashed black line represents the light-cylinder.}
\label{fig:lignes_champ_B_xz_a0}
\end{figure*}
\begin{figure*}
\centering
\includegraphics[width=\linewidth]{lignes_champ_B_xy_a90_ri0.3_ro7_n129_nt32_np64_cfl0.5_ba7_alp0.1_o4_j1.png}
\caption{Magnetic field lines in the equatorial plane $xOy$ for a perpendicular spheroidal rotator with oblateness (first two columns) or prolateness (last two columns) parameter $a/R=\{0,0.5,1\}$ respectively in blue, green and red. The blue disk in the centre depicts the spherical star. The dashed black circle represents the light-cylinder.}
\label{fig:lignes_champ_B_xy_a90}
\end{figure*}
\subsection{Spin down}
The spin down depends on the surface shape but also on the normalisation convention used to compute the luminosity. This important issue is discussed in depth by \cite{petri_spheroidal_2021}. In order to get rid of effects not directly related to the change in shape, we used a normalisation where the magnetic dipole tends asymptotically to the same magnetic moment value at infinity, irrespective of the oblateness or prolateness parameter~$a/R$.
As in the vacuum case, the spin down decreases with increasing parameter~$a/R$, as seen in Fig.~\ref{fig:luminosite_ellipticite_ffe_112} for a single dipole field imposed on the surface and in Fig.~\ref{fig:luminosite_ellipticite_ffe_134} for a spherical dipole imposed on the surface.
\begin{figure}
\centering
\includegraphics[width=\linewidth]{luminosite_ellipticite_m112_r0.3_ro7_n129_nt32_np64_cfl0.5_ba7_alp0.1_o4_j1.png}
\caption{Spin-down luminosity for oblate and prolate stars, respectively in solid and dashed lines, with single dipole stellar boundary conditions.}
\label{fig:luminosite_ellipticite_ffe_112}
\end{figure}
\begin{figure}
\centering
\includegraphics[width=\linewidth]{luminosite_ellipticite_m134_r0.3_ro7_n129_nt32_np64_cfl0.5_ba7_alp0.1_o4_j1.png}
\caption{Spin-down luminosity for oblate and prolate stars, respectively in solid and dashed lines, with spherical dipole stellar boundary conditions.}
\label{fig:luminosite_ellipticite_ffe_134}
\end{figure}
All simulation runs are summarized by a spin down luminosity formally fitted with
\begin{equation}\label{eq:Luminosite_fit}
L_{\rm FFE}^{\rm spheroid} \approx L_\perp \, (\ell_1 + \ell_2 \, \sin^2 \rchi) ,
\end{equation}
where the vacuum luminosity for a perpendicular rotator is
\begin{equation}\label{eq:Lperp}
L_\perp = \frac{8\,\pi}{3\,\mu_0\,c^3}\,\Omega^4\,B^2\,R^6 ,
\end{equation}
where $B$ is the magnetic field strength at the equator of a spherical star, $\Omega$ its rotation rate and $R$ its radius.
The coefficients~$\ell_i$ for $i=\{1,2\}$ depend on the ellipticity according to
\begin{equation}\label{eq:Fit}
\ell_i = \alpha_i - \beta_i \, \left( \frac{a}{R} \right)^2 .
\end{equation}
The fitted values are reported in table~\ref{tab:fit}.
\begin{table}
\centering
\begin{tabular}{|c|cccc|}
\hline
Model & $\alpha_1$ & $\beta_1$ & $\alpha_2$ & $\beta_2$ \\
\hline
oblate & 1.17 & 0.0212 & 1.71 & 0.116 \\
prolate & 1.14 & 0.120 & 1.71 & 0.0975 \\
oblate spherical & 1.17 & 0.123 & 1.71 & 0.0830 \\
prolate spherical & 1.18 & 0.0172 & 1.71 & 0.0932 \\
\hline
\end{tabular}
\caption{Fitted coefficients $\alpha_i$ and $\beta_i$ as given by eq.~\eqref{eq:Fit}.\label{tab:fit}}
\end{table}
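The fit of Eqs.~\eqref{eq:Luminosite_fit} and \eqref{eq:Fit}, together with the coefficients of table~\ref{tab:fit}, can be evaluated as follows; a sketch in Python returning the ratio $L_{\rm FFE}^{\rm spheroid}/L_\perp$ (the dictionary keys and function name are ours):

```python
import numpy as np

# Fitted coefficients (alpha_1, beta_1, alpha_2, beta_2) from the table.
FIT = {
    "oblate":            (1.17, 0.0212, 1.71, 0.116),
    "prolate":           (1.14, 0.120,  1.71, 0.0975),
    "oblate spherical":  (1.17, 0.123,  1.71, 0.0830),
    "prolate spherical": (1.18, 0.0172, 1.71, 0.0932),
}

def spindown_ratio(model, a_over_R, chi):
    """L_FFE^spheroid / L_perp = l1 + l2 sin^2(chi),
    with l_i = alpha_i - beta_i (a/R)^2."""
    a1, b1, a2, b2 = FIT[model]
    l1 = a1 - b1 * a_over_R**2
    l2 = a2 - b2 * a_over_R**2
    return l1 + l2 * np.sin(chi)**2
```

In the spherical limit $a/R=0$ this recovers the usual force-free values, and for all models the ratio decreases with increasing $a/R$, as stated in the text.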
The spheroidal shape always decreases the spin down luminosity with respect to the spherical star. This effect is most pronounced for prolate stars.
\subsection{Polar cap shape}
Polar caps are important observables. Indeed, the shape of their rims can be indirectly probed by their thermal X-ray pulsation as detected by for instance the Neutron Star Interior Composition Explorer (NICER) \citep{riley_nicer_2019,bogdanov_constraining_2019}. As a starting point for computing thermal X-ray emission, we present in this paper realistic polar cap surfaces relying on our force-free simulations of spheroidal stars. Fig.~\ref{fig:forme_calotte} compiles the geometry of the polar cap edges for various neutron star surface deformations, being either oblate or prolate, different inclination angles~$\rchi=\{0\degr, 30\degr, 60\degr, 90\degr\}$ and several normalized spheroidal parameter~$a/R=\{0,0.5,1\}$. For reference, the vacuum Deutsch solution is also shown in dashed orange lines.
Compared to polar caps deduced from vacuum, in force-free magnetospheres the cap rims always lie outside the equivalent Deutsch case. For the aligned rotator, the size and area of the cap, which is perfectly circular due to the axisymmetry, increase with oblateness or prolateness, except in the third row, as seen in the left column of Fig.~\ref{fig:forme_calotte}. For large obliquities, the polar cap surface area also increases with the spheroidal parameter~$a$. Nevertheless, for prolate stars, the deformation of the polar cap remains weak because along the equator the stellar shape remains almost spherical, tending to the spherical force-free results. For oblate stars, the situation is opposite because the maximum deformation arises around the equator, and the polar cap area increases significantly with oblateness.
\begin{figure}
\centering
\includegraphics[width=\linewidth]{forme_calotte_ri0.3_ro7_n129_nt32_np64_cfl0.5_ba7_alp0.1_o4_j1.png}
\caption{Polar cap shape for oblate and prolate stars with oblateness parameter $a/R=\{0,0.5,1\}$ respectively in blue, green and red. The orange dashed line shows the reference solution for the Deutsch field as a check. The obliquity from the left column to the right column is $\rchi=\{0\degr, 30\degr, 60\degr, 90\degr\}$. First row for an oblate star with one multipole of order $\ell=1$, second row for a spherical dipole magnetic field at the surface, third row for a prolate star with one multipole of order $\ell=1$ and fourth row for a spherical dipole magnetic field at the surface.}
\label{fig:forme_calotte}
\end{figure}
Contrary to the spin down comparison performed in the previous section, where some fiducial spherical model had to be chosen, the polar cap rims rely solely on geometrical effects of magnetic field lines. Therefore, the polar surfaces discussed in this section are insensitive to the particular normalisation used for the magnetic dipole. The results are robust and can be applied to any magnetic field intensity.
\subsection{Polar cap current}
Just like the previous quantities, the current produced within the magnetosphere is significantly altered by the spheroidal shape of the stellar surface. The local current density expelled from and returning to the polar caps offers an important view of the electrodynamics impacted by the spheroidal star. This current density is best reckoned in the star's corotating frame, removing the convective current produced by the electric drift motion of the charge-separated plasma component. This convective current is not directly related to the conduction current, which is the primordial current to plot in force-free electrodynamics. \cite{endean_lorentz_1974} presented an interesting derivation of the relativistic force-free equation involving only the magnetic field, thereby generalizing the usual non-relativistic equation
\begin{equation}\label{rq:FFEnonrel}
( \vec{\nabla} \wedge \vec{B} ) \wedge \vec{B} / \mu_0 = \vec{j} \wedge \vec{B} = \vec{0}
\end{equation}
to a relativistic version given by
\begin{equation}\label{eq:FFErel}
( \vec{\nabla} \wedge \vec{B}^* ) \wedge \vec{B} / \mu_0 = \vec{0}
\end{equation}
where he introduced a new vector $\vec{B}^*$ defined by
\begin{equation}\label{eq:FFE_Bcor}
\vec{B}^* = \left( \left(1 - \frac{x^2+y^2}{r_{\rm L}^2}\right) \, B^\rho, \left(1 - \frac{x^2+y^2}{r_{\rm L}^2}\right) \, B^\psi, B^\phi \right).
\end{equation}
A similar expression was given by \cite{mestel_force-free_1973} at the same time, and can also be explained by a simple Lorentz transformation. Therefore, according to Eq.~\eqref{eq:FFErel}, the term
\begin{equation}\label{eq:jconduction}
\mu_0 \, \vec{j}_c = \vec{\nabla} \wedge \vec{B}^*
\end{equation}
is directed along $\vec{B}$ and can be interpreted as the conduction current density~$\vec{j}_c$ in the corotating frame, see also \cite{bai_modeling_2010} for another derivation of this interpretation. For numerical purposes, we normalize the current density to a fiducial value related to the corotating current density at the pole of an aligned rotator and given by
\begin{equation}\label{eq:jpol}
j_{\rm pol} = 2 \, \varepsilon_0 \, \Omega \, B \, c .
\end{equation}
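The fiducial current density above is easily evaluated numerically; a sketch assuming purely illustrative pulsar values ($P = 1$~s, $B = 10^8$~T are our assumptions, not values from the simulations):

```python
import math

EPS0 = 8.8541878128e-12  # vacuum permittivity [F/m]
C = 299792458.0          # speed of light [m/s]

def fiducial_current_density(period, B):
    """j_pol = 2 eps0 Omega B c, the corotating current density at the
    pole of an aligned rotator, with Omega = 2 pi / period."""
    omega = 2.0 * math.pi / period
    return 2.0 * EPS0 * omega * B * C

# Illustrative values only: P = 1 s and B = 1e8 T are assumptions.
j_pol = fiducial_current_density(1.0, 1.0e8)
```

The normalisation scales linearly with both the rotation rate and the surface field strength.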
Following the definition given by Eq.~\eqref{eq:jconduction}, Fig.~\ref{fig:jconduction_l11} shows the conduction current density on the surface of an aligned oblate star with oblateness parameter $a/R=\{0,0.5,1\}$ for one pole with $\theta \in [0\degr, 90\degr]$. The other pole is not shown because of the north-south antisymmetry. The conduction current $\vec{j}_c$ is actually symmetric with respect to the equator located at colatitude $\theta=90\degr$. The oblateness decreases the current density by up to a factor of two for $a/R=1$. Nevertheless, the polar cap area also increases with oblateness, partially compensating for the current density decrease. The current flowing in the vicinity of the polar cap therefore depends crucially on the oblateness. Combining this with the variation in area, we expect the associated thermal X-ray emission to be drastically influenced by the stellar surface geometry, in addition to the polar cap area and shape.
\begin{figure}
\centering
\includegraphics[width=0.9\linewidth]{jconduction_l11_a0_r0.3_ro7_n257_nt64_np1_cfl1.0_ba7_alp0.1_o4_j1.png}
\caption{Conduction current density on the surface of an aligned oblate star with oblateness parameter $a/R=\{0,0.5,1\}$.}
\label{fig:jconduction_l11}
\end{figure}
Fig.~\ref{fig:jconduction_l12} shows the same conduction current density on the surface of an aligned prolate star with prolateness parameter $a/R=\{0,0.5,1\}$. Here also, the current density decreases, but to a lesser extent compared to the oblate case. The impact on the thermal X-ray emission is consequently weaker than in the previous oblate case.
\begin{figure}
\centering
\includegraphics[width=0.9\linewidth]{jconduction_l12_a0_r0.3_ro7_n257_nt64_np1_cfl1.0_ba7_alp0.1_o4_j1.png}
\caption{Conduction current density on the surface of an aligned prolate star with oblateness parameter $a/R=\{0,0.5,1\}$.}
\label{fig:jconduction_l12}
\end{figure}
For completeness, we also computed the current density for orthogonal rotators in oblate and prolate geometries. Fig.~\ref{fig:jconduction_l11_a90} shows a cross section, taken around the centre of the polar cap, of the conduction current in oblate stars with oblateness parameter $a/R=\{0,0.5,1\}$. In this case the conduction current $\vec{j}_c$ is skew-symmetric with respect to the equator located at colatitude $\theta=90\degr$.
\begin{figure}
\centering
\includegraphics[width=0.9\linewidth]{jconduction_l11_a90_r0.3_ro7_n129_nt64_np128_cfl0.5_ba7_alp0.1_o4_j1.png}
\caption{Conduction current density on the surface of an orthogonal oblate star with oblateness parameter $a/R=\{0,0.5,1\}$.}
\label{fig:jconduction_l11_a90}
\end{figure}
Finally, Fig.~\ref{fig:jconduction_l12_a90} shows the same cross section for a prolate star with prolateness parameter $a/R=\{0,0.5,1\}$. We always observe a monotonic decrease in the peak current density with increasing spheroidal parameter~$a$.
\begin{figure}
\centering
\includegraphics[width=0.9\linewidth]{jconduction_l12_a90_r0.3_ro7_n129_nt64_np128_cfl0.5_ba7_alp0.1_o4_j1.png}
\caption{Conduction current density on the surface of an orthogonal prolate star with prolateness parameter $a/R=\{0,0.5,1\}$.}
\label{fig:jconduction_l12_a90}
\end{figure}
\section{Discussion}
\label{sec:discussion}
Nowadays, thermal X-ray pulsations emanating from the surface of neutron stars are used to probe the location and shape of the underlying hot spots. Recent observations by NICER clearly demonstrated the strength of such approaches to constrain the mass, the equatorial radius \citep{riley_nicer_2019} and the equation of state \citep{miller_psr_2019} of these compact objects. However, these investigations rely on observational fits that are not always quantitatively and accurately linked to neutron star magnetosphere models. For instance, we are not aware of any oblate force-free magnetosphere computation showing the impact of oblateness on the spin-down efficiency and on the polar cap perturbations induced by the deviation from a spherical star.
Our results unambiguously predict the importance of carefully estimating the oblateness in order to fit hot spot X-ray light curves. Irrespective of the normalization employed for the magnetic field, we always found an increase in the hot spot surface area compared to a spherical star. The current flowing back to the star, hitting and heating the surface, is therefore also modified by the oblateness or prolateness. As a consequence, the surface brightness temperature and the luminosity of these hot spots are influenced by the stellar boundary, not only because of geometrical effects but also because of electrodynamic effects related to the change in the current and charge densities existing inside the magnetosphere.
Another important ingredient not included in the present study is general relativity. Because of the large compactness of neutron stars, curved spacetime also significantly modifies the electrodynamics of the magnetosphere, especially in the vicinity of the stellar surface. Unfortunately this requires solutions to Einstein's equations for an oblate gravitating fluid, and simple analytical formulas for the exterior spacetime are difficult to find, with the exception of \cite{erez_gravitational_1959}, corrected by \cite{young_exact_1969} and \cite{doroshkevich_gravitational_1965}. A very general solution for the exterior of the star has been proposed by \cite{quevedo_general_1989}. The multipole components are however left as free parameters, not straightforwardly connected to the stellar oblateness. Nevertheless, a simple and good approximation is obtained by the so-called oblate+Schwarzschild model, where the oblate stellar surface is embedded in the Schwarzschild metric. It is simple but gives a good impression of these additional effects, even though it does not include frame-dragging.
Last but not least, it was shown that magnetic multipole components \citep{bilous_nicer_2019} can play a central role in understanding millisecond pulsar light-curves \citep{kalapotharakos_multipolar_2021}.
The oblateness parameter $a$ can be deduced from numerical models of neutron star interiors. According to table~I of \cite{silva_surface_2021}, which uses realistic equations of state to construct stars in stationary equilibrium, fast rotating neutron stars with frequencies of about 600~Hz have a ratio between polar radius~$R_p$ and equatorial radius~$R_e$ of about $r = R_p/R_e \approx 0.9$, implying an oblateness parameter of $a = \sqrt{r^{-2}-1} \approx 0.48$. The deformation of the stellar surface is therefore substantial, leading to a slight increase in the polar cap surface area.
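The conversion from axis ratio to spheroidal parameter quoted above can be checked directly; this is just the relation $a = \sqrt{r^{-2}-1}$ from the text.

```python
import math

# Spheroidal parameter a from the axis ratio r = R_p / R_e.
def oblateness(r):
    return math.sqrt(r**-2 - 1.0)

print(f"r = 0.9  ->  a = {oblateness(0.9):.2f}")   # about 0.48, as quoted
```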
The impact on the spin down luminosity depends on the internal structure of the magnetic field compared to the equivalent spherical star. This issue is however not yet settled. For some neutron stars, a birth period around 1~ms is expected, even for strongly magnetized ones, coined millisecond magnetars and suspected to be the central engine of hypernovae or superluminous supernovae. For such short periods, the aspect ratio is about $r \approx 0.7$, implying an oblateness of $a \approx 1$. If we assume ideal plasma compression due to the slowing down of the star, the magnetic flux is approximately conserved and the magnetic field strength at the equator increases by a factor $1/r \approx 1.43$. The associated spin down increases by a factor $1/r^2 \approx 2.0$. Therefore at least two mechanisms contribute to a modification of the neutron star luminosity: first the change in the stellar surface, and second the variation in the surface magnetic field strength. We believe that these effects must be taken into account in the early evolution of a neutron star relaxing to a spherical shape as it slows down.
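The flux-conservation scaling invoked above can be made explicit; the sketch below simply applies $B \propto 1/r$ and spin-down $\propto B^2 \propto 1/r^2$ to the millisecond-magnetar aspect ratio $r \approx 0.7$.

```python
# Ideal flux conservation on a spheroid relaxing back to a sphere:
# the equatorial field scales as 1/r, the spin-down luminosity as 1/r**2.
r = 0.7                  # aspect ratio R_p / R_e at birth
B_boost = 1.0 / r        # field amplification factor (~1.43)
L_boost = 1.0 / r**2     # spin-down amplification factor (~2.04)
print(f"B x {B_boost:.2f}, spin-down x {L_boost:.2f}")
```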
\section{Conclusions}
\label{sec:conclusion}
Although neutron stars are subject to enormous gravitational binding energy, millisecond rotation periods can lead to significant centrifugal forces deforming their surface into an oblate shape, or even into a prolate shape due to magnetic compression. We computed force-free magnetospheres for spheroidal neutron stars and showed the repercussions on the magnetic topology, spin-down efficiency, polar cap geometry and current density. We observed significant changes in the polar cap rims for reasonable spheroidal parameters. Compared to a spherical star, the impact on the spin-down luminosity remains small with our normalisation convention. However, the effect of the magnetic field geometry on the polar cap shape is significant, irrespective of the magnetic field strength. Such shapes can be indirectly probed via thermal X-ray emission from hot spots, as recently demonstrated by the NICER collaboration.
From our study, we can compute such X-ray light curves, knowing the size and shape of the hot spot. Nevertheless, in order to better represent realistic neutron stars, we need to take into account the compactness of the star, and are therefore obliged to include some general-relativistic effects using, for instance, an oblate Schwarzschild metric or more accurate models of spheroidal neutron star gravitational fields. These investigations are the natural next steps that we will pursue in forthcoming studies.
\begin{acknowledgements}
I am grateful to the referee for helpful comments and suggestions.
This work has been supported by the CEFIPRA grant IFC/F5904-B/2018 and ANR-20-CE31-0010.
\end{acknowledgements}
\bibliographystyle{/home/petri/publications/bibtex/aa}
\section*{Appendix}
\subsection*{Preliminaries of power networks}
\begin{figure}[htbp]
\centering
\includegraphics[width=0.235\textwidth]{figs/intro-fig-scheme.png}
\includegraphics[width=0.235\textwidth]{figs/intro-fig.png}
\caption{Schematic of power distribution network with few residential EV adopters.}
\label{fig:intro}
\end{figure}
Fig.~\ref{fig:intro} shows a typical power distribution network with few EV adopters. The network is a tree rooted at the substation node and connects residences through local pole top transformers.
A branch originating from the substation is also termed a `feeder'.
The power flowing along each edge results in voltage drop between the nodes.
Therefore, the node at the extreme end of the feeder experiences the minimum voltage.
\subsection*{Convergence of ADMM methodology}
In the paper, we have discussed the need for a distributed methodology to respect the privacy of consumer information. However, this alters the original MILP problem into one QP and multiple MIQPs. Here, we discuss the deviation of the optimal solution obtained by the distributed framework from that obtained by a centralized approach. Note that the centralized approach involves solving the MILP in (\ref{eq:total-opt}).
\begin{figure}[htbp]
\centering
\includegraphics[width=0.48\textwidth]{figs/cent-dist-dev.png}
\caption{Percentage deviation of the optimal solution obtained using the proposed framework from the solution obtained by centrally solving (\ref{eq:total-opt})}
\label{fig:dev}
\end{figure}
Fig.~\ref{fig:dev} shows the percentage deviation in the optimal cost incurred by the EV adopting residences. This is done for an adoption level of $50\%$ in `Com-A' of the network. The solution obtained by the proposed distributed approach converges to the solution obtained by the centralized approach for the majority of residences. For the remaining residences, the deviation in the optimal cost lies mostly below $5\%$, with one residence showing a large deviation of $20\%$. Therefore, we can conclude that the optimal solution converges for most residences. The deviation is the price we pay for respecting the privacy of consumers.
\subsection*{Additional Results}
We provide the plots of line loading levels and node voltages for multiple adoption levels in the two communities in Fig.~\ref{fig:extra-1} and Fig.~\ref{fig:extra-2}. In all cases, we note that the edges in the network are not loaded to their full capacities. However, the voltages of multiple residences fall outside the acceptable limits of $0.95$--$1.05$~p.u.
\begin{figure}[htbp]
\centering
\includegraphics[width=0.48\textwidth]{figs/121144-com-2-adopt-30-rate-4800-loading.png}
\includegraphics[width=0.48\textwidth]{figs/121144-com-2-adopt-60-rate-4800-loading.png}
\caption{Comparison of line loading level (edge flows) for residential EV adoption of $30\%$ and $60\%$ in `Com-A' of network. The EV adoption does not significantly affect the line loading level for either of the optimization methods.}
\label{fig:extra-1}
\end{figure}
\begin{figure}[htbp]
\centering
\includegraphics[width=0.48\textwidth]{figs/121144-com-2-adopt-30-rate-4800-loading.png}
\includegraphics[width=0.48\textwidth]{figs/121144-com-2-adopt-60-rate-4800-loading.png}
\caption{Comparison of residence node voltages for residential EV adoption of $30\%$ and $60\%$ in `Com-A' of network. The EV adoption adversely affects the node voltages during cheap electricity hours for the individual optimization scenario.}
\label{fig:extra-2}
\end{figure}
\begin{figure*}[htbp]
\centering
\includegraphics[width=0.4\textwidth]{figs/121144-com-1-homes.png}
\includegraphics[width=0.58\textwidth]{figs/121144-com-1-rate-4800-voltlimit.png}
\includegraphics[width=0.4\textwidth]{figs/121144-com-3-homes.png}
\includegraphics[width=0.58\textwidth]{figs/121144-com-3-rate-4800-voltlimit.png}
\includegraphics[width=0.4\textwidth]{figs/121144-com-4-homes.png}
\includegraphics[width=0.58\textwidth]{figs/121144-com-4-rate-4800-voltlimit.png}
\caption{Impact of residential EV charging is analyzed for three more residential communities within the same network: `Com-C'(top), `Com-D'(middle) and `Com-E'(bottom). The orange nodes shown in the network denote the residences in the two communities. The individual optimization leads to undervoltage (less than 0.95 p.u.) at a significant number of residences. This can be avoided by using the proposed distributed optimization method even for higher levels of EV adoption.}
\label{fig:extra-3}
\end{figure*}
We further compare the distributed and individual optimization frameworks for three more residential communities in the network, shown in Fig.~\ref{fig:extra-3}. We note that the impact on the reliability of the network, in terms of the number of residences experiencing undervoltage, differs for each community. This demonstrates the spatial dependency of network reliability on the location of EV adoption.
\section{Conclusion}
In this paper we propose an ADMM-based distributed framework for scheduling EV charging for consumers at residential premises while maintaining the reliability of the network.
We observe that individual consumer loads, spatial layout of the network, and electricity pricing all affect network reliability.
We provide a set of results on a small distribution network in Montgomery county of Virginia, USA for EV adoption at different levels in multiple residential communities.
We inspect the network reliability with two measures: node voltages and edge flows (line loading relative to the line rating).
Results show that the proposed distributed approach for scheduling residential EV charging enables the network operator to maintain system reliability by respecting line limits, with fewer undervoltage households and almost no voltage violations even at higher levels of EV adoption, as opposed to an individual approach.
\section{Introduction}\label{sec:intro}
Studies have shown that home-charging units are pivotal infrastructure for promoting EV adoption~\cite{Wei2021}.
As EV adoption increases over the next few years, the power drawn from the grid will increase and may cause disturbances in the distribution power network.
It is quite possible that the network operation may undergo modifications in order to accommodate these unconventional loads in the future without affecting the reliability of the grid.
From the standpoint of power engineering, a \emph{reliable} power grid is one which has adequate generation to support the consumer load demand and can be operated without violating standard power engineering constraints~\cite{Billinton1994}.
In this paper, we consider an operational problem and therefore are not concerned about the aspect of adequate generation.
Thus, we use the term \emph{reliable distribution network} to mean a network that can satisfy the loads without violating the node voltages and line flow (edge flow) capacities.
While flows above the line rating cause overheating of conductors and subsequent physical damage, node voltages represent the quality of power delivered at the node. Undervoltage and overvoltage at residence nodes lead to eventual failure of household appliances. Relevant basic concepts of power distribution networks are described in the Appendix.
Traditionally, the distribution network is designed to sustain the peak load demand of consumers~\cite{dist_plan}. The predictable growth in consumer peak demand and energy consumption enables the network operator to plan and operate the network reliably. However, the adoption of EVs in residential communities leads to a significant deviation in the predictability of consumer loads~\cite{ev-problem1}.
Residential EV charging constitutes a significant percentage of net household demand leading to large power consumed from the grid.
The problem is more dominant when residential consumers opt to charge EVs according to their personal convenience~\cite{ev-problem2}.
The excess load consumed by the EV charging units adversely affects the distribution grid by causing transformer overloading or high voltage drops at feeder ends~\cite{ev-problem3}.
As a result, it is desirable to develop a framework which aids residential consumers with their goal of scheduling EV charging based on their individual preferences, and simultaneously taking into account the grid reliability requirements of the network operator.
\noindent \textbf{Contributions.}~The contributions of the paper are summarized here: (i) A novel `reliability-aware distributed EV charging scheduling framework' is proposed. It uses information such as the hourly electricity rate and household energy demand profiles \& preferences as inputs from consumers, and the power engineering constraints of the distribution network as inputs from the operator, to aid residential EV adopters in scheduling their EV charging units optimally without affecting the reliability of the power grid. (ii) The distributed framework uses an ADMM-based iterative methodology which guarantees an optimal solution for our problem. Each iteration involves solving a mixed integer quadratic program (MIQP) for each residential consumer and a quadratic program (QP) for the operator. The optimal solutions are exchanged and used in succeeding iterations until a consensus is reached. This minimal exchange of information between the consumers/residences/households and the network operator can be executed using the present smart grid infrastructure and avoids sharing private and proprietary data. (iii) We use digital duplicates of residential consumer load demand profiles and power distribution networks resembling their physical counterparts for our case studies. This facilitates conducting real-world test scenarios to explore the impact of the proposed framework under multiple levels of EV adoption. Our experimental results demonstrate that the proposed distributed framework helps maintain network reliability compared to the case where EV adopters charge their vehicles based on their personal (individualized) preferences.
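The ADMM iteration pattern described in contribution (ii), namely local minimizations, a coordination step, and dual updates repeated until consensus, can be illustrated with a minimal scalar toy problem. The sketch below uses simple quadratic local costs, not the paper's MIQP/QP subproblems.

```python
# Minimal consensus-ADMM sketch with scalar local objectives
# f_i(x) = a_i * (x - b_i)**2; the consensus minimizer is the weighted mean.
def admm_consensus(costs, rho=1.0, iters=200):
    n = len(costs)
    x = [0.0] * n   # local copies (one per "consumer" subproblem)
    u = [0.0] * n   # scaled dual variables
    z = 0.0         # shared variable (the "operator" coordination step)
    for _ in range(iters):
        # local step: argmin_x a*(x-b)^2 + (rho/2)*(x - z + u)^2, closed form
        x = [(2*a*b + rho*(z - ui)) / (2*a + rho)
             for (a, b), ui in zip(costs, u)]
        # coordination step: average of local estimates plus duals
        z = sum(xi + ui for xi, ui in zip(x, u)) / n
        # dual step: accumulate the consensus residual
        u = [ui + xi - z for xi, ui in zip(x, u)]
    return z

print(round(admm_consensus([(1.0, 0.0), (1.0, 4.0)]), 3))  # -> 2.0
```

Each agent only ever communicates its local estimate and dual variable, mirroring the limited information exchange between consumers and operator described above.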
\section{Related Work}\label{sec:relate}
Several works have been presented in the literature for scheduling EV charging. In general, optimization techniques are a popular choice for solving the problem of scheduling EV charging at household level~\cite{Cao2012,Zhao2018,Lee2020,Blonsky2021,Young-Min2013,Wanrong2016,goncalves2018,Khonji2018,Nyns2010,sairaj2014}.
Recently, ML techniques such as neural networks~\cite{Shuvo2021} and reinforcement learning frameworks~\cite{Yongsheng2022} have been proposed to study the problem of scheduling EV charging loads in smart homes.
Most of these works implement a centralized approach to schedule EV charging.
A centralized optimization algorithm/framework evaluates the optimal power consumption patterns which are beneficial to only one of the entities -- consumers or network operators~\cite{Liu2015}.
This approach may not be realistic since details of the individual consumer loads are usually not accessible to the network operator.
At the same time, the network topology and parameters are unknown to the consumers.
Under such circumstances, a de-centralized/distributed framework is useful since it can help network operators and consumers communicate essential information that can respect both -- network reliability and consumer preferences.
The current smart grid infrastructure supports the development of such a framework due to the availability of two-way communication.
Dall’Anese \emph{et al.}~\shortcite{sairaj2014} use an alternating direction method of multipliers (ADMM) based approach to evaluate inverter set points at different locations in a network while maintaining network reliability. The results show that this method provides superior convergence guarantees in comparison with other methods while dealing with mixed integer linear programs (MILP). In this work, we propose a distributed optimization framework based on the ADMM method for scheduling EV charging in a power distribution network. Our framework satisfies two goals -- maintaining grid reliability for the network operator while respecting consumer preferences.
\section{Problem Formulation}\label{sec:problem}
In this paper, we are interested in power consumption trajectory over a finite horizon time window of $T$ intervals from time instant $k=0$ to $k=T$. We define an interval $t$ as the duration between time instants $k=t-1$ and $k=t$. Table~\ref{tab:set-index} summarizes the index variables and sets used in the paper.
\begin{table}[htbp]
\centering
\caption{Summary of index variables and sets used}
\label{tab:set-index}
\begin{tabular}{ll}
\hline
\textbf{Symbol} & \multicolumn{1}{c}{\textbf{Description}} \\ \hline
$T$ & Number of intervals in time window \\
$N$ & Number of non-substation nodes in network \\
$i$ & Index of node in network \\
$t$ & Index of time interval \\
$k$ & Index of time instant \\
$\alpha,\beta$ & Limits of squared voltage magnitude \\
$\mathscr{V}$ & Set of all nodes in network \\
$\mathscr{H}$ & Set of all residence nodes in network \\
$\mathscr{N}$ & Set of all non-substation nodes in network \\\hline
\end{tabular}
\end{table}
\subsection{Distribution Network Model}
The power distribution network is a \emph{tree} comprising $N+1$ nodes collected in the set $\mathscr{V}:=\mathscr{N}\cup\{0\},\mathscr{N}:=\{1,2,\cdots,N\}$. The tree is rooted at the substation node $\{0\}$ and consists of primary and secondary distribution lines collected in the edge set. The set $\mathscr{N}$ includes residences, local transformers and auxiliary nodes required to connect the transformers and residences~\cite{rounak2020,kersting_book,Bolognani2015}. Here, we are interested in the variables associated with the set of residence nodes $\mathscr{H}\subset\mathscr{N}$. The power consumption and squared voltage magnitude at node $i$ and time $t$ are denoted by $p_{i}^{t}$ and $v_{i}^{t}$, respectively. We stack these variables for $i\in\mathscr{N}$ into vectors $\mathbf{p}^t$ and $\mathbf{v}^t$ for every time interval $t$. The non-linear relation between power injections and squared voltages in the network can be simplified to a linear expression using the Linearized Distribution Flow (LDF) model~\cite{Bolognani2015}.
\begin{subequations}
\begin{align}
& \mathbf{v}^t = -2\mathbf{Rp}^t + \mathbf{1} \label{seq:volt-power}\\
& \alpha \mathbf{1} \leq \mathbf{v}^t \leq \beta \mathbf{1} \label{seq:volt-limit}
\end{align}
\label{eq:volt-constraint}
\end{subequations}
Here, $\mathbf{1}$ is a vector of all $1$'s. The matrix $\mathbf{R}$ is a function of the network topology and edge parameters, which is accessible only to the network operator. The primary objective of the operator is to maintain network reliability, keeping the node voltages within the acceptable ANSI C84.1 Range~A limits~\cite{ansi}. In most practical distribution networks, these limits are 0.95~pu and 1.05~pu. The operator ensures that the power consumption at different nodes in the network satisfies (\ref{eq:volt-constraint}), where $\alpha,\beta$ denote the squared voltage limits.
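A toy numerical illustration of the LDF relation may help. The radial feeder, branch resistances, and loads below are made-up per-unit values; the path-resistance construction of $\mathbf{R}$ is the standard one for radial networks, keeping only the real part in line with the simplified model above.

```python
import numpy as np

# Radial feeder 0-1-2-3 (node 0 is the substation); R[i][j] is the summed
# resistance of the path shared by nodes i and j back to the root.
r_branch = [0.01, 0.01, 0.02]      # per-unit resistance of edges into nodes 1..3
n = len(r_branch)
R = np.zeros((n, n))
for i in range(n):
    for j in range(n):
        R[i, j] = sum(r_branch[:min(i, j) + 1])

p = np.array([0.5, 0.3, 0.4])      # per-unit nodal consumptions
v = 1.0 - 2.0 * R @ p              # LDF: v = -2 R p + 1 (squared magnitudes)

# Operator-side reliability check: alpha <= v <= beta.
alpha, beta = 0.95**2, 1.05**2
print(np.round(v, 4), bool(np.all((v >= alpha) & (v <= beta))))
```

Deeper nodes see lower voltages, consistent with the feeder-end behaviour noted in the Appendix.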
\subsection{Residence Load Models}
This section describes the load demand of a residence node $i\in\mathscr{H}$. The aggregate power consumption over the time interval $t$ is given by $p_{i}^t\in\mathbb{R}$. This load comprises an uncontrollable base load demand, denoted by $p_{i,0}^{t}$, and a controllable counterpart. In this paper, we use the synthetically generated residential load demand data described in~\cite{swapna2018} for the uncontrollable base load. We consider residence-owned EV charging stations as the only controllable load, denoted by $p_{i,\textrm{EV}}^{t}$. The power consumption at node $i$ over time interval $t$ can then be expressed as
\begin{equation}
p_{i}^{t} = p_{i,0}^{t} + p_{i,\textrm{EV}}^{t} \quad \forall t=1,2,\cdots,T \label{eq:load-bal}
\end{equation}
\noindent\textbf{EV Charging Model.}~We assume that the EV charging unit charges only a single EV owned by the customer. Let the charge capacity of the EV be $Q_{i,\textrm{EV}}$ and the power rating of the EV charging unit be $P_{i,\textrm{EV}}$. The state of charge (SOC) evolves over the time interval $t$ from $s_{i,\textrm{EV}}^{t-1}$ to $s_{i,\textrm{EV}}^{t}$ following (\ref{seq:ev-evol}). Further, the constraint (\ref{seq:ev-lim}) limits the SOC to suitable lower and upper bounds. The scheduling problem aims to find the optimal time intervals during which the EV is charged while maintaining a secure power grid. Let $z_{i,\textrm{EV}}^t\in\{0,1\}$ be a binary variable that takes the value $1$ if the EV is charged during time interval $t$ and $0$ otherwise.
Further, the EV is available for charging only during the closed interval $\mathscr{T}_{i,\textrm{EV}}:=\left[t_{\textrm{start}},t_{\textrm{end}}\right]$. Let the SOC at time $t=t_{\textrm{start}}$ be $s_{i,\textrm{init}}$; by the end of the interval, the SOC of the battery needs to be at least $s_{i,\textrm{final}}$. Note that we consider the simple case where the EV is available for charging within a single continuous interval $\mathscr{T}_{i,\textrm{EV}}$ of the time window of $T$ intervals. We also assume that the EV is not used during this interval. In realistic scenarios, the charging intervals are discontinuous, and usage of the EV would result in different SOC values at different time intervals.
\begin{subequations}
\begin{align}
& p_{i,\textrm{EV}}^{t} = z_{i,\textrm{EV}}^tP_{i,\textrm{EV}} & \forall t=1,2,\cdots,T \label{seq:ev-power}\\
& s_{i,\textrm{EV}}^{t} = s_{i,\textrm{EV}}^{t-1} + \frac{p_{i,\textrm{EV}}^{t}}{Q_{i,\textrm{EV}}} & \forall t=1,2,\cdots,T \label{seq:ev-evol}\\
& 0 \leq s_{i,\textrm{EV}}^{k} \leq 1 & \forall k=0,1,\cdots,T \label{seq:ev-lim}\\
& z_{i,\textrm{EV}}^{t} = 0 & \forall t \notin \mathscr{T}_{i,\textrm{EV}} \label{seq:ev-nocharge}\\
& s_{i,\textrm{EV}}^{t_\textrm{start}} = s_{i,\textrm{init}},\quad s_{i,\textrm{EV}}^{t_\textrm{end}} \geq s_{i,\textrm{final}} & \label{seq:ev-init}
\end{align}
\label{eq:res-ev}
\end{subequations}
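To make the SOC dynamics concrete, the following sketch steps (\ref{seq:ev-power})--(\ref{seq:ev-lim}) forward for a given binary schedule. The pack size, charger rating, and schedule are illustrative values only, and one-hour intervals are assumed so that power in kW equals energy in kWh.

```python
def simulate_soc(z, s_init, P_ev, Q_ev):
    """Evolve the SOC under a binary charging schedule z, one-hour intervals."""
    soc = [s_init]
    for z_t in z:
        p_ev = z_t * P_ev             # power drawn when the charging unit is on
        s_t = soc[-1] + p_ev / Q_ev   # SOC evolution over one interval
        assert 0.0 <= s_t <= 1.0, "schedule violates the SOC bounds"
        soc.append(s_t)
    return soc

# Illustrative numbers: 20 kWh pack, 4.8 kW charger, 20% initial SOC,
# charging for the first three of four intervals.
soc = simulate_soc([1, 1, 1, 0], s_init=0.2, P_ev=4.8, Q_ev=20.0)
print(round(soc[-1], 2))  # 0.92: three on-hours add 3 * 4.8/20 = 0.72
```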
\subsection{Optimization problem}
Each residence aims to compute the optimal power usage trajectory of its EV charging unit over a finite-horizon time window of length $T$, denoted by $\{p_{i,\textrm{EV}}^{t}\}_{t=1}^{T}$. Given the hourly electricity rate $c^{t}$ for each interval in the time window, the optimization problem for each residence minimizes the total cost of consumption subject to the EV constraints (\ref{eq:res-ev}). This results in the MILP (\ref{eq:ind-opt}).
\begin{subequations}
\begin{align}
\min \quad & \sum_{t=1}^{T} c^t p_{i}^{t} \\
\textrm{over} \quad & p_{i,\textrm{EV}}^{t} \quad\quad\quad\quad~~~ \forall t\\
\textrm{s.t.} \quad & (\ref{eq:load-bal}),(\ref{eq:res-ev}) ~~~\quad\quad\quad \forall t
\end{align}
\label{eq:ind-opt}
\end{subequations}
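In the absence of the network coupling, the MILP (\ref{eq:ind-opt}) with a single continuous availability window admits a simple exact solution: charge during the cheapest available hours. The sketch below (hypothetical prices and parameters, one-hour intervals) computes the minimum number of on-intervals from the SOC requirement and greedily selects the cheapest hours in the window.

```python
import math

def cheapest_schedule(c, window, s_init, s_final, P_ev, Q_ev):
    """Single-residence schedule without network constraints.

    With one availability window and no voltage coupling, the MILP reduces
    to picking the minimum number of on-intervals needed to reach s_final
    in the cheapest hours of the window.
    """
    t_start, t_end = window
    n_on = math.ceil((s_final - s_init) * Q_ev / P_ev)   # on-intervals needed
    hours = sorted(range(t_start, t_end + 1), key=lambda t: c[t])[:n_on]
    return [1 if t in hours else 0 for t in range(len(c))]

# Hypothetical 8-interval horizon with one expensive mid-horizon block.
c = [0.08, 0.08, 0.21, 0.21, 0.10, 0.10, 0.08, 0.08]
z = cheapest_schedule(c, window=(2, 7), s_init=0.2, s_final=0.9,
                      P_ev=4.8, Q_ev=20.0)
print(z)  # on in the three cheapest hours of the window
```

Raising $s_{i,\textrm{final}}$ or shrinking the window forces charging into more expensive hours, which is what ultimately concentrates load in the cheap-rate periods across residences.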
At the same time, the network operator needs to ensure that node voltages are within acceptable limits to maintain a reliable system. Additionally, the operator might aim to optimize other aspects, such as minimizing losses or reducing voltage deviation~\cite{sairaj2014}. This can be expressed by $C\left(\mathbf{p}^t\right)$, a function of the power usage of all residences at interval $t$. In this paper, we do not consider any particular objective of the network operator and set $C\left(\mathbf{p}^t\right)=0$. We define the \emph{Reliability-aware EV Charge Scheduling} ({\sc REVS}) problem, which satisfies consumer preferences as well as ensures network reliability, by (\ref{eq:total-opt}).
\begin{subequations}
\begin{align}
\min \quad & \sum_{t=1}^{T}C\left(\mathbf{p}^t\right) + \sum_{i\in\mathscr{H}}\sum_{t=1}^{T} c^t p_{i}^{t} \\
\textrm{over} \quad & p_{i,\textrm{EV}}^{t} \quad\quad\quad\quad~~~ \forall t~ \forall i\in\mathscr{H}\\
\textrm{s.t.} \quad & (\ref{eq:volt-constraint}) \quad\quad\quad\quad\quad~~ \forall t\\
& (\ref{eq:load-bal}),(\ref{eq:res-ev}) ~~~\quad\quad\quad \forall t~~ \forall i\in\mathscr{H}\\
& p_{i}^{t} = 0 \quad\quad\quad\quad \forall t~~ \forall i\notin\mathscr{H}
\end{align}
\label{eq:total-opt}
\end{subequations}
\section{Proposed Methodology}
The {\sc REVS} problem in (\ref{eq:total-opt}) is an MILP, with binary variables arising from the on/off status of the EV charging units. This problem can be solved from a central location (such as the operator) if the load information of the residences is known. However, this is not always the case, due to the privacy concerns associated with sharing personal data of consumers. Similarly, the network topology and parameters are considered proprietary information and cannot be shared with the consumers.
However, limited information, such as total power consumption, can be exchanged without violating privacy concerns using the current smart grid infrastructure. In this section, we propose an iterative method based on the ADMM technique to reach the optimal solution of the {\sc REVS} problem.
To this end, we separate the problem for the network operator and individual residences. Each residence $i$ aims to compute the optimal power usage trajectory $\left\{p_{i}^t\right\}_{t=1}^T$ over the time window given the EV charging constraints. The network operator computes consumption trajectories $\left\{\tilde{p}_i^t\right\}$ for all nodes (in the vector form $\tilde{\mathbf{p}}^t$) such that the network reliability constraints are satisfied. Additionally, we add constraint (\ref{seq:admm-equal}) to force these trajectories to match each other. Therefore, we get (\ref{eq:total-opt-alternate}) as the alternate version of the {\sc REVS} problem.
\begin{subequations}
\begin{align}
\min \quad & \sum_{i\in\mathscr{H}}\sum_{t=1}^{T} c^t p_{i}^{t} \\
\textrm{over} \quad & p_{i}^{t},\tilde{p}_{i}^{t} ~~\quad\quad\quad\quad~ \forall t~~ \forall i\\
\textrm{s.t.} \quad & (\ref{eq:volt-constraint}) ~\quad\quad\quad\quad\quad~~ \forall t\\
& (\ref{eq:load-bal}),(\ref{eq:res-ev}) \quad\quad\quad\quad \forall t~~ \forall i\in\mathscr{H}\\
& p_{i}^{t} = 0 = \tilde{p}_i^t ~\quad\quad \forall t~~ \forall i\notin\mathscr{H}\\
& \tilde{p}_i^t = p_{i}^t ~\quad\quad\quad~~~ \forall t ~~ \forall i\in\mathscr{H}\label{seq:admm-equal}
\end{align}
\label{eq:total-opt-alternate}
\end{subequations}
\begin{table}[htbp]
\centering
\caption{Summary of variables in optimization problem}
\label{tab:opt-var}
\begin{tabular}{ll}
\hline
\textbf{Symbol} & {\textbf{Description}} \\ \hline
$v_i^t$ & Voltage at node $i$ for interval $t$ \\
$p_i^t$ & Power consumed at node $i$ for interval $t$\\
$\tilde{p}_i^t$ & Power consumption computed by operator \\
$p_{i,0}^t$ & Power consumed by fixed load \\
$p_{i,\textrm{EV}}^t$ & Power consumed by EV charging unit \\
$z_{i,\textrm{EV}}^t$ & On/Off status of EV charging unit \\
$s_{i,\textrm{EV}}^k$ & SOC of EV at time instant $k$ \\
$\gamma_i^t$ & Dual variable corresponding to (\ref{seq:admm-equal})\\
$\mathbf{v}^t$ & Vector of $v_i^t$ for all nodes $i\in\mathscr{N}$ \\
$\mathbf{p}^t$ & Vector of $p_i^t$ for all nodes $i\in\mathscr{N}$ \\
$\tilde{\mathbf{p}}^t$ & Vector of $\tilde{p}_i^t$ for all nodes $i\in\mathscr{N}$\\
\hline
\end{tabular}
\end{table}
Now, we use the conventional ADMM steps to iteratively update the optimization variables of the operator and residences. Let $\tilde{\mathcal{P}}[l]:=\left\{\tilde{\mathbf{p}}^t[l]\right\}_{t=1}^{T}$ denote the optimal trajectories for all nodes computed by the operator at iteration $l$. Similarly, let $\mathcal{P}_i[l] := \left\{{p}_i^t[l]\right\}_{t=1}^{T}$ denote the optimal power usage trajectory computed by residence $i$. We slightly abuse notation and write $\left\{\mathcal{P}_i[l]\right\}$ for the optimal trajectories $\left\{{p}_i^t[l]\right\}_{t=1}^{T}$ computed by all residences $i\in\mathscr{H}$ individually. The two steps of each iteration are listed below. Note that the first step is carried out simultaneously by all residences and the network operator. Fig.~\ref{fig:message-xchange} illustrates the proposed message-passing based distributed framework.
\begin{enumerate}[font=\bfseries]
\item[S1a.] At the operator side, we update the operator-estimated power consumption $\tilde{p}_i^t$ for all residences using (\ref{eq:opt-operator}).
\begin{subequations}
\begin{align}
\tilde{\mathcal{P}}[l+1] := &\textrm{arg}~\textrm{min} \quad F(\tilde{\mathcal{P}}[l],\{\mathcal{P}_i[l]\}) \label{seq:obj-operator}\\
&\textrm{s.t.} \quad \alpha \leq 1-2\sum_{j=1}^NR_{ij}\tilde{p}_{j}^{t} \leq \beta \quad \forall t~~\forall i\label{seq:volt-operator}
\end{align}
\label{eq:opt-operator}
\end{subequations}
where the function $F(\tilde{\mathcal{P}}[l],\{\mathcal{P}_i[l]\})$ is defined as
\begin{equation}
\begin{aligned}
F(\tilde{\mathcal{P}}[l],&\{\mathcal{P}_i[l]\}) := \sum_{i\in\mathscr{H}}\sum_{t=1}^{T}\frac{\kappa}{2}\left(\tilde{p}_{i}^t\right)^2 \\
+ & \sum_{i\in\mathscr{H}}\sum_{t=1}^{T}\tilde{p}_{i}^t \left(\gamma_{i}^{t}[l] - \frac{\kappa}{2}\tilde{p}_{i}^t[l]-\frac{\kappa}{2}p_{i}^{t}[l]\right)
\end{aligned}
\end{equation}
\item[S1b.] For residence $i$, we update the power usage trajectory using (\ref{eq:opt-residence}).
\begin{subequations}
\begin{align}
\mathcal{P}_{i}[l+1] := \textrm{arg}~\textrm{min} \quad & \sum_{t=1}^{T} c^t p_{i}^{t} + F_{i}(\tilde{p}_{i}^{t}[l],p_{i}^{t}[l])& \label{seq:obj-residence}\\
\textrm{s.t.} \quad & (\ref{eq:load-bal})-(\ref{eq:res-ev}) &
\end{align}
\label{eq:opt-residence}
\end{subequations}
where the function $F_i\left(\tilde{p}_{i}^{t}[l],p_{i}^{t}[l]\right)$ is defined as
\begin{equation}
\begin{aligned}
F_{i}\left(\tilde{p}_{i}^{t}[l],p_{i}^{t}[l]\right) = & \sum_{t=1}^{T}\frac{\kappa}{2}\left({p}_{i}^t\right)^2 \\ - & \sum_{t=1}^{T}{p}_{i}^t \left(\gamma_{i}^{t}[l] + \frac{\kappa}{2}\tilde{p}_{i}^t[l]+\frac{\kappa}{2}p_{i}^{t}[l]\right)
\end{aligned}
\label{eq:f_res}
\end{equation}
\item[S2.] At both the operator and residence sides, the dual variable is updated.
\begin{equation}
\gamma_{i}^{t}[l+1] = \gamma_{i}^{t}[l] + \frac{\kappa}{2}\left(\tilde{p}_{i}^{t}[l+1]-p_{i}^{t}[l+1]\right)
\label{eq:gamma}
\end{equation}
\end{enumerate}
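The steps above can be sketched end-to-end for a single residence and a single time interval. In this illustration the binary variables are relaxed to a box, the voltage constraint (\ref{seq:volt-operator}) is replaced by a simple cap on the operator copy, and the quadratic subproblems (\ref{eq:opt-operator}) and (\ref{eq:opt-residence}) are solved in closed form rather than by a QP/MIQP solver; all numbers are hypothetical, and a negative rate $c$ mimics a charging incentive so that the cap binds.

```python
def admm_demo(c=-1.0, p_cap=5.0, p_rating=8.0, kappa=1.0, iters=200):
    """Scalar sketch of steps S1a, S1b and S2 for one residence, one interval."""
    clip = lambda x, lo, hi: max(lo, min(hi, x))
    p_op, p_res, gamma = 0.0, 0.0, 0.0  # operator copy, residence copy, dual
    for _ in range(iters):
        # S1a: argmin of F over the operator copy, projected onto [0, p_cap]
        p_op_new = clip((p_op + p_res) / 2.0 - gamma / kappa, 0.0, p_cap)
        # S1b: argmin of c*p + F_i over the residence copy in [0, p_rating];
        # uses the iteration-l values, i.e. both steps run in parallel
        p_res_new = clip((gamma + kappa * (p_op + p_res) / 2.0 - c) / kappa,
                         0.0, p_rating)
        # S2: dual update drives the two copies into consensus
        gamma += (kappa / 2.0) * (p_op_new - p_res_new)
        p_op, p_res = p_op_new, p_res_new
    return p_op, p_res

p_op, p_res = admm_demo()
print(round(p_op, 2), round(p_res, 2))  # 5.0 5.0: consensus at the cap
```

Both copies settle at the operator's cap: the dual variable accumulates the price of the reliability constraint and steers the residence toward a network-feasible schedule, which is the role $\gamma_i^t$ plays in (\ref{eq:gamma}).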
The resulting decentralized procedure involves a two-way message exchange of the iterates $\left\{\tilde{\mathbf{p}}^t[l]\right\}_{t=1}^{T}$ and $\left\{\mathbf{p}^t[l]\right\}_{t=1}^{T}$ between the network operator and the residential consumers. At iteration $l>0$, the network operator updates the power trajectories based on (\ref{eq:opt-operator}), whose objective includes the regularization term $F(\tilde{\mathcal{P}}[l],\{\mathcal{P}_i[l]\})$. This term enforces consensus with the power usage trajectories computed at the residences. The constraints ensure the reliability aspects of the network. Note that (\ref{eq:opt-operator}) is a QP because of the quadratic regularization term. The operator relays to each residential consumer $i$ a copy of the iterate value $\left\{\tilde{p}_i^t[l+1]\right\}_{t=1}^{T}$.
At the same time, the consumer optimal trajectories are updated using (\ref{eq:opt-residence}), and a copy of the iterate value $\left\{{p}_i^t[l+1]\right\}_{t=1}^{T}$ is sent to the operator. We note that (\ref{eq:opt-residence}) is an MIQP, because of the quadratic regularization term ensuring consensus with the operator objective and the binary constraints on the EV charging unit. Thus the {\sc REVS} problem, originally an MILP, is converted into a QP for the operator and MIQPs for the individual residences by the proposed ADMM-based framework.
Once the updated local iterates are exchanged, the operator and residences update the local dual variables using (\ref{eq:gamma}).
The centralized approach to solving the MILP guarantees convergence to the globally optimal solution. However, the concern of sharing private consumer information with the network operator hinders this approach. The proposed ADMM-based distributed framework avoids sharing private and proprietary information and only exchanges power consumption data. The approach converts the problem into a QP for the operator and an MIQP for each residence; the size of each problem is significantly smaller than the original MILP. The convergence of the algorithm to the optimal solution of (\ref{eq:total-opt}) is formally stated next.
\begin{figure}[!ht]
\centering
\includegraphics[width=0.48\textwidth]{figs/scheme-message-transfer.png}
\caption{Message exchange aided proposed distributed framework to schedule residential EV charging.}
\label{fig:message-xchange}
\end{figure}
\begin{proposition}
The iterates $\left\{\tilde{\mathbf{p}}^t[l]\right\}_{t=1}^{T}$ and $\left\{\mathbf{p}^t[l]\right\}_{t=1}^{T}$ produced by $[\mathbf{S1}]-[\mathbf{S2}]$ are convergent, for any $\kappa>0$. Further, $$\lim_{l\rightarrow \infty}\left\{\tilde{\mathbf{p}}^t[l]\right\}_{t=1}^{T} = \lim_{l\rightarrow \infty}\left\{\mathbf{p}^t[l]\right\}_{t=1}^{T} = \left\{\mathbf{p}^t_{\textrm{opt}}\right\}_{t=1}^{T}$$
where $\mathbf{p}^t_{\textrm{opt}}$ denotes the optimal power usage trajectory.
\end{proposition}
ADMM has been proved to converge to the optimal solution for convex problems~\cite{boyd}. However, the problem considered here involves binary constraints for the EV charging units, which renders it non-convex. Interestingly, ADMM can converge exactly to the optimum for non-convex problems with binary variables. Therefore, we can guarantee that the proposed framework converges to the optimal solution.
\section{Experiments}
The experiments are conducted to study the effects of EV adoption at different levels (30\%, 60\%, 90\%).
We compare the effects of two optimization scenarios (individual vs.\ distributed) on EV scheduling behavior in different communities.
Under the \emph{individual optimization} scenario, customers charge their EVs based on individual preferences without considering the impact on the network. The optimal schedule is obtained by solving (\ref{eq:ind-opt}) for each EV adopter.
With \emph{distributed optimization}, the customers agree to an optimal EV charging schedule in which they benefit and network reliability is maintained. This allows the network operator to maintain acceptable node voltage levels throughout the network. The optimal schedule is obtained by iteratively solving (\ref{eq:opt-operator}), (\ref{eq:opt-residence}), and (\ref{eq:gamma}) until convergence.
Particularly, we aim to compare the reliability of the network when these two methods are used to schedule residential EV charging at varied levels of EV adoption. Note that network reliability is the ability to operate with edge power flows within the line capacities and node voltages within the bandwidth ($0.95-1.05$ p.u.)~\cite{ansi}. Hence, these two measures -- node voltage and edge power flow -- are used to quantify the impact on network reliability at different levels (30\%, 60\%, 90\%) of EV adoption in multiple communities of the distribution network.
\begin{table}[htbp]
\centering
\caption{Hourly electricity rate for the experiments}
\label{tab:price-summer}
\begin{tabular}{ccccc}
\hline
\textbf{\begin{tabular}[c]{@{}c@{}}Time \\ interval \\ (HH:MM)\end{tabular}} & \begin{tabular}[c]{@{}c@{}}00:00-\\ -05:00\end{tabular} & \begin{tabular}[c]{@{}c@{}}05:00-\\ -15:00\end{tabular} & \begin{tabular}[c]{@{}c@{}}15:00-\\ -18:00\end{tabular} & \begin{tabular}[c]{@{}c@{}}18:00-\\ -00:00\end{tabular} \\ \hline
\textbf{\begin{tabular}[c]{@{}c@{}}Cost \\ (\$/kWhr)\end{tabular}} & 0.07866 & 0.09511 & 0.21436 & 0.09511 \\ \hline
\end{tabular}
\end{table}
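The tabulated rates can be encoded as a simple lookup to price a candidate schedule. The example below prices a hypothetical overnight charging run at the $4.8$\,kW charger rating used in our experiments, assuming one-hour intervals.

```python
def tou_rate(hour):
    """$/kWh rate by hour of day, from the off-peak TOU plan tabulated above."""
    if 0 <= hour < 5:
        return 0.07866
    if 5 <= hour < 15:
        return 0.09511
    if 15 <= hour < 18:
        return 0.21436
    return 0.09511  # 18:00-00:00

# Energy cost of a 4.8 kW charger running over the hours 23:00-04:00.
hours_on = [23, 0, 1, 2, 3]
cost = sum(tou_rate(h) * 4.8 for h in hours_on)
print(round(cost, 3))  # 1.967
```

The gap between the evening peak rate and the overnight rate is what pulls self-interested charging into the same few cheap hours.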
\begin{figure*}[!ht]
\centering
\includegraphics[width=0.4\textwidth]{figs/121144-com-2-homes.png}
\includegraphics[width=0.58\textwidth]{figs/121144-com-2-rate-4800-voltlimit.png}
\includegraphics[width=0.4\textwidth]{figs/121144-com-5-homes.png}
\includegraphics[width=0.58\textwidth]{figs/121144-com-5-rate-4800-voltlimit.png}
\caption{Impact of residential EV charging is analyzed for two different residential communities within the same network: `Com-A'(top) and `Com-B'(bottom). The orange nodes shown in the network denote the residences in the two communities. The individual optimization leads to undervoltage (less than 0.95 p.u.) at a significant number of residences. This can be avoided by using the proposed distributed optimization method even for higher levels of EV adoption.}
\label{fig:adoption-out-limit}
\end{figure*}
A small area of Montgomery county in Virginia is considered as the region of interest for our study.
Household level synthetic hourly electricity consumption profiles are used. These timeseries are created using several population surveys, statistical models, and physics based models of household devices and validated using real data~\cite{swapna2018}. The data also has household level demographics and spatial attributes.
Synthetically generated distribution networks created using electrical engineering concepts and resembling actual networks are used for the purpose of our analysis~\cite{rounak2020}.
Hourly electricity rate (in \$/kWhr) is known to the residential customers (Table~\ref{tab:price-summer}).
Time of use (TOU) hourly electricity rates provided by the off-peak plan of a utility company serving the particular geographical region~\cite{dominion_tou} are used.
\emph{Assumptions.}~All EVs have a uniform charge capacity of $20$kWhr and are available to be charged between 4:00p.m. and 5:00a.m. The initial state of charge is assumed to be $20\%$ and the EVs are required to be charged to at least $90\%$.
A uniform rating of $4.8$\,kW is assumed for the residential EV charger.
Households are randomly selected as EV adopters in the network.
All adopters have necessary provisions to charge their EVs at their residential premises.
\section{Results}
Figure~\ref{fig:flow-vs-volt} describes edge power flow as a percentage of line capacities and node voltages in `Com-A' community in the network when EV adoption has reached 90\%.
Figure~\ref{fig:flow-vs-volt} (top) shows that the edge flow levels (relative to the line rating capacities) in the network are well within limits even when 90\% of residences have adopted EVs.
However, the same cannot be observed for node voltages in the network.
Figure~\ref{fig:flow-vs-volt} (bottom) shows that the node voltages at several residences are outside the acceptable limits.
We notice that the maximum number of node voltage violations occurs during the periods when the hourly electricity rate (in \$/kWhr) is minimum.
\begin{figure}[!ht]
\centering
\includegraphics[width=0.48\textwidth]{figs/121144-com-2-adopt-90-rate-4800-loading.png}
\includegraphics[width=0.48\textwidth]{figs/121144-com-2-adopt-90-rate-4800-voltage.png}
\caption{Comparison of line loading level (edge flows) and node voltages for residential EV adoption of $90\%$ in `Com-A' of network. The high EV adoption does not significantly affect the line loading level. However, node voltages at multiple residences in the network are outside the acceptable voltage limits of $0.95-1.05$ p.u.}
\label{fig:flow-vs-volt}
\end{figure}
We further explore effects of node voltage violation, at different adoption levels in two communities (`Com-A', `Com-B') of residences in the network.
Figure~\ref{fig:adoption-out-limit} shows the selected communities in the network and node voltage violation at different adoption levels.
The results are obtained after performing optimizations under two scenarios : individual and distributed.
We focus our attention only on the time intervals where the hourly electricity rate (in \$/kWhr) is minimum (i.e. time windows of maximum node violations).
The node voltage violations are divided into 3 ranges: less than $0.92$ p.u., between $0.92-0.95$ p.u., and between $0.95-0.98$ p.u.
Though the last voltage range is not considered a voltage violation by the ANSI standard in practice~\cite{ansi}, it can be regarded as a sign of reduced network reliability.
The clustered bar chart shows significantly higher number of residences with voltage violation under individual optimization scenario as compared to the distributed optimization scenario.
Under distributed optimization, the number of residences with voltage less than $0.95$ p.u. is close to zero at all considered time intervals.
The observations are similar for both communities in the network.
This shows that, if the proposed distributed framework is used to schedule EV charging units, the network operator is able to dispatch the power without compromising on system reliability.
Figure~\ref{fig:adoption-out-limit} shows that under the individual optimization approach the number of residences with undervoltage during the cheap electricity hours increases with the level of adoption. However, this trend is not consistent for the distributed optimization approach. This is because the latter approach also ensures system reliability along with consuming electricity during cheap hourly rates.
The distributed framework does this by allocating a small amount of EV charging to time intervals where the hourly electricity rates are relatively higher.
We also notice that the number of residences experiencing undervoltage issues for the same level of adoption differs significantly when we consider different communities for EV adoption. These differences can be attributed to location and energy usage of adopter households in the network and the resulting voltages at different nodes.
The error bars on the bar chart (Figure~\ref{fig:adoption-out-limit}) show the variation in the number of residences violating the node voltage limits in each category across multiple runs at each adoption level.
\label{sect:intro}
Magnetic fields play an important role in astrophysical phenomena of the universe on various scales.
For galaxies, dynamo models associated with various MHD instabilities occurring in the interstellar medium (ISM)
are used to explain the formation of the galactic structure (e.g., Gomez \& Cox \cite{GomezCox2004};
Bonanno \& Urpin \cite{BonannoUrpin2008}).
Magnetic fields play a role in the evolution of interstellar molecular clouds and
the star formation process, where
the cloud collapse is probably taking place along the magnetic field lines (e.g.\ Alves et al.\ \cite{Alves2008}).
They are also present at all stages of stellar evolution,
from young T\,Tauri stars and Ap/Bp stars
to the end products: white dwarfs, neutron stars, and magnetars.
On the other hand, the role of magnetic fields
in massive O-type stars and Wolf-Rayet (WR) stars remains unknown.
No definitive magnetic field has ever been detected in
WR stars and presently only about a dozen O stars have published magnetic fields, while
for early B stars magnetic field detections are more numerous.
In our study of the open cluster NGC\,3766, we found that seven out of the 14 observed early B-type stars
showed magnetic fields (Hubrig et al.\ \cite{Hubrig2009}).
In addition, theories about the origin of magnetic fields in O- and early B-type stars
remain poorly developed, mostly because the distribution of magnetic field strengths in massive stars
from the ZAMS to more evolved stages has not yet been studied.
In our recent studies, we focused on magnetic fields of massive stars observed in different environments:
in open clusters at different ages and in the field (Hubrig et al.\ \cite{Hubrig2009,Hubrig2011c,Hubrig2011e}).
\section{Magnetic field models for $\beta$\,Cephei and SPB stars}
\label{sect:earlyB}
\begin{figure*}
\centering
\includegraphics[angle=270,totalheight=0.32\textwidth]{HD46328.fs.ps}
\includegraphics[angle=270,totalheight=0.32\textwidth]{15cma.fs.ps}
\includegraphics[width=0.45\textwidth]{xi1cma.eps}
\includegraphics[width=0.45\textwidth]{15cma.eps}
\caption{
Periodograms (top) and phase diagrams (bottom) with the best sinusoidal fit for
the longitudinal magnetic field measurements of $\xi^1$\,CMa (left) and 15\,CMa (right).
The residuals (Observed $-$ Calculated) are shown in the lower panels.
}
\label{fig:betcep_fits}
\end{figure*}
Using FORS\,1/2 and SOFIN longitudinal magnetic field measurements collected in
our recent studies (Hubrig et al.\ \cite{Hubrig2011a}), we were able to determine the rotation period and constrain
the field geometry of two $\beta$\,Cephei stars, one candidate $\beta$\,Cephei star, and one SPB star.
The dipole model provides a satisfactory fit to the data, and among the very few
presently known magnetic $\beta$\,Cephei stars, $\xi^1$\,CMa and $\alpha$\,Pyx
possess the largest dipole strengths, of several kG.
In Fig.~\ref{fig:betcep_fits} we show periodograms and phase diagrams
with the best sinusoidal fit for the longitudinal magnetic field measurements of
$\xi^1$\,CMa (left) and 15\,CMa (right).
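For reference, in the standard oblique-rotator (centered dipole) model underlying such sinusoidal fits, the longitudinal field varies with rotational phase $\phi$ as
\begin{equation}
\left<B_{\rm z}\right>(\phi)=B_{\rm d}\,\frac{15+u}{20\,(3-u)}\,
\left(\cos\beta\cos i+\sin\beta\sin i\cos 2\pi\phi\right),
\end{equation}
where $B_{\rm d}$ is the polar field strength, $u$ the limb-darkening coefficient, $i$ the inclination of the rotation axis, and $\beta$ the obliquity of the magnetic axis.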
\begin{figure}
\centering
\includegraphics[width=0.45\textwidth]{pasteeps2.eps}
\caption{
Periodogram for the magnetic field measurements of V1449\,Aql.
The residuals (Observed $-$ Calculated) are shown in the lower panel.
}
\label{fig:v1449aqla}
\end{figure}
\begin{figure}
\centering
\includegraphics[width=0.45\textwidth]{momall2.eps}
\caption{
Equivalent width (m0), radial velocity (m1), line width (m2), asymmetry (m3), and kurtosis (m4)
from our SOFIN data of V1449\,Aql.
}
\label{fig:v1449aqlb}
\end{figure}
Among the $\beta$\,Cephei stars in our studies,
V1449\,Aql possesses the strongest longitudinal magnetic field so far, of up to 700\,G.
The resulting periodogram from our SOFIN and FORS\,1 measurements displays three
dominant peaks with the highest peak at $f=0.0720$\,d$^{-1}$ corresponding to a period $P=13.893$\,d
(Hubrig et al.\ \cite{Hubrig2011b}).
This period was recently confirmed with seismic modeling based on CoRoT data
and ground-based time-resolved spectroscopy by Aerts et al.\ (\cite{Aerts2011}),
who also mention a somewhat high initial internal metallicity in their models,
probably related to the presence of a magnetic field.
The magnetic field geometry can likely be described by a centered dipole with a polar magnetic field strength
$B_{\rm d}$ around 3\,kG.
In Fig.~\ref{fig:v1449aqla} we show the periodogram for V1449\,Aql and the residuals for our measurements.
The line profiles undergo very significant pulsational variability and any
rotational modulation of the observed profiles is completely masked by the pulsational modulation.
The moment variations for five lines belonging to different elements, namely
He~{\sc i} 4713.1, C~{\sc ii} 5133.1, N~{\sc ii} 5667.6, O~{\sc ii} 4592.0, and Si~{\sc iii} 4553.6
exhibit slight differences between the elements.
In Fig.~\ref{fig:v1449aqlb} we show the equivalent width (m0),
radial velocity (m1), line width (m2), asymmetry (m3), and kurtosis (m4) from our SOFIN data of V1449\,Aql.
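For clarity, these quantities can be defined, in one common convention (an assumption, since the exact normalization is not stated here), as moments of the residual line profile $F(v)$ in velocity space:
\begin{equation}
m_0=\int F(v)\,{\rm d}v,\qquad
m_1=\frac{1}{m_0}\int v\,F(v)\,{\rm d}v,\qquad
m_n=\frac{1}{m_0}\int (v-m_1)^n\,F(v)\,{\rm d}v \quad (n\ge 2),
\end{equation}
so that $m_0$ is the equivalent width, $m_1$ the radial velocity, and $m_2$, $m_3$, and $m_4$ measure the line width, asymmetry, and kurtosis, respectively.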
\section{O-type stars}
\label{sect:o}
\begin{figure}
\centering
\includegraphics[width=0.45\textwidth]{zoph-irac.3.eps}
\caption{
Combined IR Spitzer IRAC image of the bow shock around $\zeta$\,Oph.
}
\label{fig:zoph_spitzer}
\end{figure}
To investigate statistically whether magnetic fields in O-type stars are
ubiquitous or appear only in stars with a specific spectral classification,
certain age, or in a special environment, we acquired new spectropolarimetric observations with FORS\,1/2.
A field at a significance level of 3$\sigma$ was detected in eleven stars (Hubrig et al.\ \cite{Hubrig2011c,Hubrig2011d}).
The strongest longitudinal magnetic fields were measured in two Of?p stars:
$\left<B_{\rm z}\right>=-381\pm122$\,G for CPD--28\,2561 and $\left<B_{\rm z}\right>=-297\pm62$\,G for HD\,148937.
Both magnetic fields were first detected by us, the latter already in an earlier study
(Hubrig et al.\ \cite{Hubrig2008}).
The star $\zeta$\,Ophiuchi of spectral type O9.5\,V is a
well-known rapidly rotating runaway star with extremely interesting
characteristics. It undergoes episodic mass loss seen as emission in
H$\alpha$, and it is possible that it rotates with almost break-up
velocity with $v$\,sin\,$i=400$\,km\,s$^{-1}$ (Kambe et al.\ \cite{Kambe1993}).
Our spectropolarimetric observations
of $\zeta$\,Oph with FORS\,1 in 2008 revealed the presence of a mean longitudinal magnetic field
$\left< B_z\right>_{\rm all}= 141\pm45$\,G. Nine additional
spectropolarimetric observations were obtained with FORS\,2 over the rotation period in 2011.
The longitudinal
magnetic field shows a change of polarity and its variation over the rotation cycle
can be represented by a sinusoidal fit with a semi-amplitude of $\sim$160\,G.
Fig.~\ref{fig:zoph_spitzer} shows a combined IR {\em Spitzer} IRAC image of the bow shock around $\zeta$\,Oph.
\begin{figure}
\centering
\includegraphics[width=0.45\textwidth]{logthisto.eps}
\caption{
Age distribution of probable cluster members from our study.
Stars with magnetic fields are denoted by horizontal lines.
}
\label{fig:histo}
\end{figure}
The available observations (Hubrig et al.\ \cite{Hubrig2011c,Hubrig2011e}) seem to indicate that a magnetic field is more
frequently detected in field stars than in stars belonging to clusters or associations.
It is striking that most previously detected magnetic O-type stars are candidate runaway stars.
Clearly, these findings generate a strong motivation to carry out a kinematic
study of all stars previously surveyed for magnetic fields to search for a
correlation between the kinematic status and the presence of a magnetic field.
In Fig.~\ref{fig:histo} we show the age distribution of probable cluster members from our study.
The three stars with magnetic fields, HD\,155806, HD\,156154, and HD\,164794, are denoted by horizontal lines.
\begin{figure}
\centering
\includegraphics[width=0.45\textwidth]{HD148937.eps}
\caption{
Longitudinal magnetic field variation of HD\,148937 over the 7.032\,d period
determined by Naz\'e et al.\ (\cite{Naze2010}).
Red (large) symbols correspond to ESPaDOnS observations (Wade et al.\ \cite{Wade2011}),
while green (small) symbols are our FORS\,1 and FORS\,2 measurements
(Hubrig et al.\ \cite{Hubrig2008,Hubrig2011c}; Hubrig et al.\ in preparation).
}
\label{fig:hd148}
\end{figure}
To demonstrate the excellent potential of FORS\,2 for the
detection and investigation of magnetic fields in massive stars, we present in Fig.~\ref{fig:hd148}
our FORS\,2 observations collected between 2008 and May 2011 of the
Of?p star HD\,148937, together with the ESPaDOnS observations obtained at the CFHT
(Wade et al.\ \cite{Wade2011}).
The measurement errors for both ESPaDOnS and FORS\,1/2 observations
are of similar order.
One more measurement (not shown in this Figure) was
obtained in 2010 on May 22, but ordinary and extraordinary beams in the FORS\,2
setup were overlapping in this exposure due to slitlet problems.
\section{Discussion}
\label{sect:disc}
Early B-type pulsating stars had been a puzzle for a long time with
respect to the presence of magnetic fields in their atmospheres.
Only very few stars of this type with very weak magnetic fields were detected
before we started our surveys of magnetic fields in B-type stars in 2004.
We were the first to determine magnetic field strengths for a large sample of O-type stars, with an accuracy of a few tens of Gauss.
Very few magnetic fields stronger than 300\,G were detected in the studied sample,
suggesting that large-scale, dipole-like magnetic fields with polar magnetic
field strengths higher than 1\,kG are not widespread among O-type stars.
Our studies of massive stars revealed that the presence of a magnetic field can
be expected in stars of different classification categories and at different evolutionary stages.
We note that no physical properties are known that define these particular classes of stars as non-magnetic.
The inability to detect magnetic fields in massive stars in previous studies
could be related to the weakness of these fields, which can, in some stars, be as weak as a few tens of Gauss.
The results of our kinematic analysis of known magnetic O-type stars using the best available
astrometric, spectroscopic, and photometric data indicate that a magnetic field
is more frequently detected in candidate runaway stars than in stars belonging to
clusters or associations (Hubrig et al.\ \cite{Hubrig2011e}).
As the sample of stars with magnetic field detections is still very small,
a study of a larger sample is urgently needed to confirm the detected trend by performing dedicated
magnetic field surveys of O stars in clusters/associations and in the field.
The results obtained so far allow us to constrain preliminarily the conditions
conducive to the presence of magnetic fields and derive the first trends for their occurrence rate
and field strength distribution. This information is critical for answering the principal question of
the possible origin of magnetic fields in massive stars.
\section{AD-AE Appendix}
\subsection{Paper Artifact Description / Article Evaluation (AD/AE) Appendix}
Are there computational artifacts such as datasets, software, or hardware associated with this paper? Yes.
\subsection{AD/AE Details}
Experiments were run on a Stanford University HPC cluster equipped with dual-sockets and 16 cores \lstinline[breaklines=true]{Intel(R) Xeon(R) CPU E5-2670 0 @ 2.60GHz} with 32GB of RAM per node.
Intel Compiler (icpc (ICC) \lstinline[breaklines=true]{19.1.0.166 20191121}) and Intel MPI are used with Intel MKL (version \lstinline[breaklines=true]{2020.0.166}) for BLAS, LAPACK and ScaLAPACK. We use StarPU version \lstinline[breaklines=true]{1.3.2}.
\subsubsection{Artifacts Available (AA)}
\begin{itemize}
\item Software Artifact Availability: All author-created software artifacts are maintained in a public repository under an OSI-approved license.
\item Hardware Artifact Availability: There are no author-created hardware artifacts.
\item Data Artifact Availability: There are no author-created data artifacts.
\item Proprietary Artifacts: There are associated proprietary artifacts that are not created by the authors. Some author-created artifacts are proprietary.
\end{itemize}
\subsubsection{Author artifacts}
\begin{itemize}
\item Artifact 1: TaskTorrent repository, \lstinline[breaklines=true]{github.com/leopoldcambier/tasktorrent}
\item Artifact 2: TaskTorrent paper Scalapack and StarPU benchmarks repository, \lstinline[breaklines=true]{github.com/leopoldcambier/tasktorrent_paper_benchmarks}
\end{itemize}
\subsubsection{Experimental setup}
\begin{itemize}
\item Relevant hardware: Dual socket \lstinline[breaklines=true]{Intel(R) Xeon(R) CPU E5-2670 0 @ 2.60GHz} with 16 cores and 32GB of RAM per node.
\item Operating systems and versions: Linux kernel \lstinline{3.10.0-693.el7.x86_64}
\item Compilers and versions: Intel Compiler (icpc (ICC) \lstinline[breaklines=true]{19.1.0.166 20191121})
\item Applications and versions: N/A
\item Libraries and versions: Intel MPI version \lstinline{2018.2.199}, Intel MKL version \lstinline{19.1.0.166}, StarPU version \lstinline{1.3.2}
\item Key algorithms: N/A
\item Input datasets and versions: N/A
\item Optional link (URL) to output from commands that gather execution environment information: \lstinline[breaklines=true]{stanford.edu/~lcambier/tasktorrent_paper/AD_AE.txt}
\end{itemize}
\subsection{Artifact Evaluation}
Are you completing an Artifact Evaluation (AE) Appendix? No.
\subsection{API Description} \label{sec:api}
\texttt{TTor}{}'s API can be divided into two parts, a shared memory component (expressing the PTG) and
a distributed component (used for AMs).
The combination of those two features is what distinguishes \texttt{TTor}{} from other solutions
and is one of the factors that makes \texttt{TTor}{} lightweight.
\subsubsection{Shared memory components}
\paragraph{Threadpool}
A \lstinline{Threadpool} is a fixed set of threads that receive and process tasks.
A threadpool with \lstinline{n_threads} threads can be created by \lstinline[breaklines=true,breakatwhitespace=true]{Threadpool tp(n_threads, &comm)}.
(\lstinline{comm} is a \lstinline{Communicator}; see \Cref{sec:api_distributed}).
Tasks can be inserted directly in the threadpool, but typically this is done using a \lstinline{Taskflow}.
The threadpool joins when calling \lstinline{tp.join()}.
This returns when all the threads are idle and all communications have completed.
\Cref{sec:distributed_completion} explains the distributed completion mechanism in detail.
\paragraph{Taskflow}
A \lstinline{Taskflow<K>} \lstinline{tf} (for some index space \lstinline{K}, typically an
integer or a tuple of integers) represents a Parametrized Task Graph. It is created using \lstinline{Taskflow<K> tf(&tp)} where \lstinline{tp} is a \lstinline{Threadpool}.
It is responsible for managing task dependencies and
automatically inserting tasks in \lstinline{tp} when ready.
At least three functions have to be provided:
\begin{itemize}
\item \lstinline{(int)indegree(K k)} returns the number of dependencies for task \lstinline{k}.
\item \lstinline{(void)task(K k)} indicates what task \lstinline{k} should be doing when running.
Typically this is some computational routine followed by the trigger of other tasks.
For instance task \lstinline{k1} can fulfill one dependency of task \lstinline{k2} by
\lstinline{tf.fulfill_promise(k2)}.
\item \lstinline{(int)mapping(K k)} indicates what thread task \lstinline{k} should
initially be mapped to.
\end{itemize}
\added{In general, tasks can be stolen between threads to avoid starvation. This is done using a work stealing algorithm. \lstinline{tf.set_binding(binding)} can be used to make some tasks bound to their thread. Optional priorities can also be provided through \lstinline{tf.set_priority(priority)}. Finally, \lstinline{tf.fulfill_promise(k)} is used to fulfill one of the dependencies of task \lstinline{k} on Taskflow \lstinline{tf}. See \Cref{fig:taskflow}.}
\begin{figure}
\centering
\begin{tikzpicture}[node distance = 0.8cm]
\tikzstyle{every node}=[font=\footnotesize,text centered];
\tikzset{zigzag/.style={decorate, decoration=zigzag}};
\node [circle,draw] (task_k) at (0,0) {\lstinline{k}};
\node [circle,draw,above left=0.5cm and 0.5cm of task_k] (k0) {\lstinline{i0}};
\node [above=0.5cm of task_k] {\lstinline{...}};
\node [circle,draw,above right=0.5cm and 0.5cm of task_k] (kn) {\lstinline{im}};
\node [circle,draw,below left=0.5cm and 0.5cm of task_k] (out_k0) {\lstinline{o0}};
\node [below=0.5cm of task_k] {\lstinline{...}};
\node [circle,draw,below right=0.5cm and 0.5cm of task_k] (out_kn) {\lstinline{on}};
\path [->,>=stealth] (k0) edge (task_k);
\path [->,>=stealth] (kn) edge (task_k);
\path [->,>=stealth] (task_k) edge (out_k0);
\path [->,>=stealth] (task_k) edge (out_kn);
\node [left=1cm of k0] (thread) {Thread $t$};
\draw [snake=zigzag] (thread) -- (-3,-1);
\path [->,>=latex,dashed] (task_k) edge[out=west,in=east] node [midway, below] (mid) {} (thread);
\node [below left=0.25cm and -0.25cm of out_k0] (mapping) {\lstinline{(int)mapping(K k)}};
\path [->,>=latex,dashed] (mapping) edge (mid);
\draw [decorate,decoration={brace,amplitude=4}] (1.5,1.3) -- (1.5,0.6) node [] (brace0) {};
\node [above right=-0.05cm and 0cm of brace0] {\lstinline{(int)indegree(K k)}};
\draw [decorate,decoration={brace,amplitude=5}] (1.5,0.2) -- (1.5,-1.2) node [] (brace1) {};
\node [above right=0.35cm and 0cm of brace1] {\lstinline{(void)run(K k)}};
\end{tikzpicture}
\caption{The \lstinline{Taskflow<K>} API.
\lstinline{(int)indegree(K k)} returns the number of incoming dependencies of task \lstinline{k}.
\lstinline{(void)run(K k)} indicates what function to run.
\lstinline{(int)mapping(K k)} returns what thread the task should be mapped \added{(but not bound)} to.
}
\label{fig:taskflow}
\end{figure}
\subsubsection{Distributed memory components} \label{sec:api_distributed}
Active Messages (AMs) are used to allow tasks on rank $a$ to trigger tasks on rank $b \neq a$
without rank $b$ explicitly waiting for messages.
An AM is a pair \lstinline{(function, payload)}.
When an AM is sent from rank $a$ to rank $b$, the payload is sent through the network,
and upon arrival, the function (with the associated payload passed as argument) is run on
the receiver rank.
This allows for instance to store the payload at some location in local memory and then trigger tasks.
\paragraph{Active message}
An \lstinline{ActiveMsg<Ps...>} \lstinline{am} pairs a function \lstinline{(void)fun(Ps... ps)} and a payload \lstinline{ps}.
\added{Note that \lstinline{Ps...} is a variadic template: different types can be used as arguments. A \lstinline{view<T>} can be used to identify a memory buffer (i.e., a pointer and a length) and is built as
\lstinline{view<T> v(pointer, num_elements)}.}
The AM can be sent to rank \lstinline{dest} over the network using \lstinline{am->send(dest, ps...)}.
When sent, the payload is serialized on the sender, sent over the network,
deserialized on the receiver and the function is run as \lstinline{fun(ps...)}.
The payloads are always serialized in a temporary buffer by the library.
As such, the user-provided arguments can be immediately reused or modified
as soon as \lstinline{send} returns.
\lstinline{am->send} is thread-safe and can be called by any thread.
\texttt{TTor}{} also provides \emph{large} active messages.
A large AM can be used to avoid temporarily copying large buffers.
A large AM payload is made of one \lstinline{view<T>} and a series of arguments \lstinline{Ps...}.
The view will be sent and received directly without any extra copy.
\added{It is associated with three functions:
(1) a function to be run on the receiver rank that returns a pointer to a user-allocated buffer, where the data will be stored;
(2) a function to be run on the receiver rank to process the data upon arrival;
(3) a function to be run on the sender rank when the buffer on the sender side can be reused.
This is an important feature to avoid costly copies and/or when memory use is constrained.}
\paragraph{Communicator}
A \lstinline{Communicator} \lstinline{comm} is a C++ factory to create AMs and is responsible
for sending, receiving and running AMs.
\lstinline{Communicator comm(mpi_comm)} creates a communicator using the \lstinline{mpi_comm} MPI communicator.
An AM can then be created by \lstinline[breaklines=true,breakatwhitespace=true]{am = comm.make_active_msg(f)} where \lstinline{f} is a \lstinline{(void)f(Ps...)} function.
AMs always have to be created in the same order on all ranks
because we need a consistent global indexing of all the AMs that may be run.
\subsubsection{Example}
The following shows how the different components can be used together. This assumes \lstinline{compute(k)} does the computation related to task \lstinline{k}. In addition, \lstinline{mapping(k)} returns a thread for task \lstinline{k} (typically \lstinline{k} modulo the number of threads).
\lstinputlisting[basicstyle=\footnotesize\ttfamily]{api_example.cpp}
\section{Benchmarks}
\label{sec:benchmarks}
In this section, we present benchmarks comparing \texttt{TTor}{} to OpenMP, StarPU, and ScaLAPACK.
We start with micro-benchmarks to validate the low overhead of the shared memory component. This is only used to verify that the task-management overhead is comparable to, and sometimes lower than, that of other runtime systems.
\added{We then apply \texttt{TTor}{} (with its distributed component) to two classical linear algebra problems. In those sections, the goal is to compare a sequential enumeration of the DAG (STF) as implemented in StarPU versus the PTG approach as implemented in \texttt{TTor}{}. We note in particular that it is possible to modify the StarPU code such that the DAG is parallelized in a manner close to \texttt{TTor}{}. Similarly several optimizations in \texttt{TTor}{} are possible but were not explored for this paper (memory management, task insertion, communication). Therefore, these benchmarks cannot be interpreted as measuring the peak performance of either runtime.}
\added{In all cases, experiments are run on a cluster equipped with dual-sockets and 16 cores Intel(R) Xeon(R) CPU E5-2670 0 @ 2.60GHz with 32GB of RAM per node. Intel Compiler (\lstinline{icpc (ICC) 19.1.0.166 20191121}) and Intel MPI are used with Intel MKL (version \lstinline{2020.0.166}) for BLAS, LAPACK and ScaLAPACK. We use StarPU version \lstinline{1.3.2}. We assign one MPI rank per node. \texttt{TTor}{}'s code, including benchmarks, is available at {\small \lstinline[breaklines=true]{github.com/leopoldcambier/tasktorrent}}. StarPU and ScaLAPACK's benchmarks are available at {\small \lstinline[breaklines=true]{github.com/leopoldcambier/tasktorrent_paper_benchmarks}}.}
\subsubsection{Distributed completion algorithm}
\label{sec:distributed_completion}
We now discuss the distributed algorithm to determine completion. We present the algorithm along with a proof of correctness. The difficulty in detecting completion lies in the fact that even if all taskflows are idle, the program may not be finished since active messages (AM) may still be in-flight. An example of a flawed strategy is to request that all ranks send an \lstinline{IDLE} signal to one rank when they have no tasks running. This strategy will lead to early termination of the program in many cases. Hence, detecting completion is non-trivial in a distributed setting.
\paragraph{Completion}
In the following, we will consider a series of events such as queuing and processing messages, checking certain conditions, etc.
Within a thread we assume a total ordering between events which lets us associate each of them with a unique real number which we informally call ``time''.
We consider a program with two threads per rank: a main (MPI) thread responsible for MPI communication (asynchronous sends and receives) and AMs, and a worker thread responsible for executing all the user-defined tasks (in practice, the worker thread may be in fact a thread pool, but this is not relevant).
We say that an AM is \textbf{queued} on a sending rank when it is issued either by the worker or the main thread. When issued by a worker, we assume that queueing always finishes before the completion of the enclosing task. An AM is \textbf{processed} on the receiving rank by the main thread. We assume that if an AM results in a task being inserted in the task queue of the worker thread, this insertion must complete before the end of the enclosing AM.
To define our ordering between ranks, we assume that if a message is queued at time $t$ and processed at time $t'$ then $t' > t$.
We assume that messages that are queued are eventually processed if the network and all ranks are idle except for handling these messages (progress guarantee), and that all communications are non blocking (no deadlocks are possible). \texttt{TTor}{} satisfies those assumptions by construction.
\begin{definition}[Completion]
We say that $\{t_a\}_a$ is completion time sequence if:
\begin{itemize}
\item Rank $a$ is idle at time $t_a$ for all $a$;
\item For any pair of ranks $(a,b)$ and all AMs from $a$ to $b$, all AMs queued before $t_a$ have been processed on $b$ before $t_b$.
\end{itemize}
\end{definition}
One can prove (omitted here) that this definition implies the intuitive definition of completion,
which is that, after $t_a$, if we keep the program running, rank $a$ remains idle forever.
\paragraph{Completion algorithm}
The algorithm is based on making sure, after all ranks are idle, that the number of messages sent is equal to the number of messages received.
For this verification to work, we need to proceed in two steps, leading to the following definition:
\begin{definition}[Synchronization time]
Assume that for all ranks $a$, we have defined a pair of times $(t^-_a,t^+_a)$ with $t^-_a < t^+_a$. We say that $\bar t$ is a synchronization time for $(t^-_a,t^+_a)$ if
$$ t^-_a < \bar t < t^+_a, \quad \text{for all $a$}$$
\end{definition}
Before giving the exact algorithm, we prove a sufficient condition to establish completion.
\begin{lemma} \label{lemma:t_minus_t_plus}
\added{Let $p_a(t)$ (resp.\ $q_a(t)$) be the number of processed (resp.\ queued) AMs on rank $a$ at time $t$. Assume that there exists a synchronization time $\bar t$ for $(t^-_a,t^+_a)$ and that for all $a$}
\begin{itemize}
\item the worker thread on rank $a$ is idle at $t_a^-$;
\item $\bar p_a = p_a(t_a^-) = p_a(t_a^+)$ (no new processed AM between $t_a^-$ and $t_a^+$);
\item $\bar q_a = q_a(t_a^-) = q_a(t_a^+)$ (no new queued AM between $t_a^-$ and $t_a^+$);
\item $\sum_a \bar q_a = \sum_a \bar p_a$.
\end{itemize}
Then the sequence $\{t_a^-\}_a$ is a completion time sequence for the execution.
\end{lemma}
\begin{IEEEproof}
Let us first prove that rank $a$ is idle during the entire period $[t_a^-,t_a^+]$. Rank $a$ is idle at $t_a^-$. Since $p_a(t_a^-) = p_a(t_a^+)$, no AM was processed at any time $t \in [t_a^-,t_a^+]$. So no tasks may have been inserted in the worker task queue by the main thread. Hence, rank $a$ is idle during $[t_a^-,t_a^+]$.
Second, because $p_a(t_a^-) = p_a(t_a^+)$ and $q_a(t_a^-) = q_a(t_a^+)$, we necessarily have that
$$ p_a(\bar t) = p_a(t_a^-) = p_a(t_a^+), \quad q_a(\bar t) = q_a(t_a^-) = q_a(t_a^+). $$
This is because $p_a$ and $q_a$ are increasing functions of time and $t_a^- < \bar t < t_a^+$. Therefore $\sum_a q_a(\bar t) = \sum_a p_a(\bar t)$. The key point is that this equality holds at the synchronization time $\bar t$.
Consider now a message $m$ contributing to $\sum_a q_a(\bar t)$ or $\sum_a p_a(\bar t)$. It is not possible that $m$ contributes $+1$ to $\sum_a p_a(\bar t)$ (i.e., it has been counted as processed) while contributing $0$ to $\sum_a q_a(\bar t)$ (i.e., it has not been counted as queued), because the processing time is always strictly greater than the queuing time and both sums are evaluated at the synchronization time $\bar t$.
Assume now that some $m$ contributes $+1$ to $\sum_a q_a(\bar t)$ (queued) but $0$ to $\sum_a p_a(\bar t)$ (not yet processed). Then we must have
$$ \sum_a q_a(\bar t) > \sum_a p_a(\bar t), $$
since no other message $m'$ can ``restore'' the equality. This inequality contradicts the equality established above.
Therefore all messages queued have been processed. With the results above, $\{t_a^-\}$ is a completion time sequence.
\end{IEEEproof}
We now describe the algorithm.
Rank 0 will be responsible to detect completion by synchronizing ($\bar t$) with other ranks $r > 0$.
\textbf{When a rank is idle}, the main thread on all ranks does the following.
\begin{enumerate}
\item All ranks $r$ continuously monitor $q_r(t)$ and $p_r(t)$ (which only contain the user's AM count and not the messages used in the completion algorithm). If at a time $t^-_r$ those values differ from the latest observed ones, rank $r$ sends a message \lstinline{COUNT} $= (r, q_r(t^-_r), p_r(t^-_r))$ to rank $0$ with those updated counts.
\item Rank $0$ continuously observes the latest received counts. Since $q_r(\cdot)$ and $p_r(\cdot)$ are non-decreasing it is enough to consider the greatest received counts and discard the others. If at time $\tilde t$ (implemented as an always increasing integer counter), $\sum_r q_r(t^-_r) = \sum_r p_r(t^-_r)$ and that sum is different from the latest observed sum, rank $0$ sends a \lstinline{REQUEST} $= (q_r(t^-_r),p_r(t^-_r), \tilde t)$ message back to all ranks $r > 0$.
\item All ranks $r$ continuously monitor the \lstinline{REQUEST} messages from rank $0$. They process the one with the largest $\tilde t$, and discard the others. At time $t^+_r$, if $q_r(t^-_r) = q_r(t^+_r)$ and $p_r(t^-_r) = p_r(t^+_r)$, they send a \lstinline{CONFIRMATION} = $(\tilde t)$ back to rank $0$.
\item Rank $0$ continuously observes the received \lstinline{CONFIRMATION}. If all ranks replied with the latest $\tilde t$, the program has completed. Rank $0$ then sends a \lstinline{SHUTDOWN} message to all ranks.
\item All ranks $r$ continuously listen to the \lstinline{SHUTDOWN} message. When received, the program has completed and rank $r$ terminates.
\end{enumerate}
Note that although we write the algorithm as a sequence from 1 to 5, the word ``continuously'' indicates that this is implemented as a loop which keeps attempting to perform each step until \lstinline{SHUTDOWN} is received.
We proved the following two theorems (proof is omitted but is based on the results described above, with the assumptions provided at the beginning).
\begin{theorem}[Correctness]
The \lstinline{SHUTDOWN} message is sent if and only if completion has been reached.
\end{theorem}
The second property guarantees that \lstinline{CONFIRMATION} is sent in finite time. For example, if the number of messages were potentially unbounded, messages from some ranks could always be prioritized, preventing any progress on other ranks, and the algorithm might never terminate.
\begin{theorem}[Finiteness]
The completion protocol is guaranteed to send \lstinline{CONFIRMATION} in finite time.
\end{theorem}
\section{Conclusion}
We presented TaskTorrent (\texttt{TTor}{}), a lightweight distributed task-based runtime system in C++. It has a friendly API and relies on readily available tools (C++14 and MPI). It enables shared-memory task-based parallelism coupled with one-sided active messages. These two concepts naturally work together to create a distributed task-based parallel computing framework. We showed that \texttt{TTor}{} is competitive with both StarPU (a state-of-the-art runtime) and ScaLAPACK on large problems. Its lightweight nature allows it to be more forgiving when task granularity is not optimal, which is key to integrating this approach in legacy codes.
\subsection{Contributions} \label{sec:contributions}
In this paper, we present TaskTorrent (\texttt{TTor}{}).
\texttt{TTor}{} is a lightweight, distributed task based runtime that uses a PTG approach.
Our main contributions are:
\begin{itemize}
\item We show how to combine a PTG approach with one-sided active messages.
\item \added{A mathematical proof is provided for the correctness of our implementation.}
\item \added{We benchmark \texttt{TTor}{} and show that it matches or exceeds the performance of StarPU on sample problems.}
\end{itemize}
\texttt{TTor}{} has several notable features compared to existing solutions:
\begin{itemize}
\item It is a C++14 library with no dependencies other than MPI.
\item \texttt{TTor}{}'s implementation has a small overhead and handles small task granularities well (about $10\,\mu$s and up). This means that \texttt{TTor}{} can be used on any existing code, without needing to fuse or redefine tasks, or change existing algorithms.
\item \added{Default options in \texttt{TTor}{} are designed to provide good performance ``out-of-the-box'' without requiring the user to tune or optimize internal parameters or functionalities of the library.}
\item Users can keep their own data structures; data does not have to be wrapped in opaque runtime-managed objects.
\item \added{It is perfectly scalable in the following sense. Consider a provably scalable numerical algorithm (e.g., there exists an iso-efficiency curve). Assume that (1) the parallel computer is composed of nodes with a bounded number of cores, but with an unbounded number of nodes, and (2) that each node in the DAG has a bounded number of dependencies. Then if the algorithm is executed using \texttt{TTor}{} it will remain scalable. Said more simply, \texttt{TTor}{} does not introduce any parallel bottleneck.}
\end{itemize}
We emphasize that \texttt{TTor}{} is a general purpose runtime system.
The applications in this paper are mostly in dense linear algebra, but there are no
features or optimizations that are specific to linear algebra in this version of \texttt{TTor}{}.
\subsection{Distributed dense Cholesky factorization}
We now consider an implementation of the Cholesky algorithm, i.e., given a symmetric positive definite matrix $A \in \mathbb{R}^{N \times N}$, compute $L$ such that
$A = L L^\top$.
In its sequential and blocked form, the algorithm is described in \Cref{alg:cholesky_sequential}.
\begin{algorithm}
\begin{algorithmic}[1]
\Procedure{Cholesky}{$A$, $n$} \Comment{$A \succ 0$, $n \times n$ blocks}
\For{$1 \leq k \leq n$}
\State $L_{kk} L_{kk}^\top = A_{kk}$ \Comment{potrf($k$)}
\For{$k+1 \leq i \leq n$}
\State $L_{ik} = A_{ik} L_{kk}^{-\top}$ \Comment{trsm($i,k$)}
\For{$k+1 \leq j \leq i$}
\State $A_{ij} \leftarrow A_{ij} - L_{ik} L_{jk}^\top$ \Comment{gemm($k,i,j$)}
\EndFor
\EndFor
\EndFor
\EndProcedure
\end{algorithmic}
\caption{Blocked Cholesky factorization (sequential).}
\label{alg:cholesky_sequential}
\end{algorithm}
The algorithm is made of three main computational routines: potrf($k$), trsm($i,k$) and gemm($k,i,j$) (in practice syrk when $i = j$). We show a PTG formulation of \Cref{alg:cholesky_sequential} in \Cref{fig:cholesky_ptg}. Large active messages are used.
\begin{figure}
\centering
\subfloat[][\lstinline{potrf(j)}]{
\begin{tikzpicture}
\tikzstyle{every node}=[rectangle,draw,font=\tiny,text centered];
\def\L{0.2cm};
\node (potrf_j) {\lstinline{potrf(j)}};
\node [above = \L of potrf_j] (a) {\lstinline{gemm(j-1,j,j)}};
\node [below left = \L and -0.1cm of potrf_j] (b) {\lstinline{trsm(j+1,j)}};
\node [below right = \L and -0.1cm of potrf_j] (c) {\lstinline{trsm(N,j)}};
\path [->,>=stealth] (a) edge (potrf_j);
\path [->,>=stealth] (potrf_j) edge[out=west,in=north] (b);
\path [->,>=stealth] (potrf_j) edge[out=east,in=north] (c);
\tikzstyle{every node}=[font=\tiny,text centered];
\node [below = \L of potrf_j] {$\dots$};
\end{tikzpicture}
}
\subfloat[][\lstinline{trsm(i,j)}]{
\begin{tikzpicture}
\tikzstyle{every node}=[rectangle,draw,font=\tiny,text centered];
\def\L{0.2cm};
\node (trsm_ij) {\lstinline{trsm(i,j)}};
\node [above left = \L and -0.2cm of trsm_ij] (a) {\lstinline{potrf(j)}};
\node [above right = \L and -0.2cm of trsm_ij] (b) {\lstinline{if(j > 0) gemm(j-1,i,j)}};
\node [below left = \L and -0.1cm of trsm_ij] (c) {\lstinline{gemm(j,i,j+1)}};
\node [below right = \L and -0.1cm of trsm_ij] (d) {\lstinline{gemm(j,N,i)}};
\path [->,>=stealth] (a) edge[out=east,in=110] (trsm_ij);
\path [->,>=stealth] (b) edge[out=west,in=70] (trsm_ij);
\path [->,>=stealth] (trsm_ij) edge[out=west,in=north] (c);
\path [->,>=stealth] (trsm_ij) edge[out=east,in=north] (d);
\tikzstyle{every node}=[font=\tiny,text centered];
\node [below = \L of trsm_ij] {$\dots$};
\end{tikzpicture}
} \\
\subfloat[][\lstinline{gemm(k,i,j)}]{
\begin{tikzpicture}
\tikzstyle{every node}=[rectangle,draw,font=\tiny,text centered];
\def\L{0.2cm};
\node (gemm_kij) {\lstinline{gemm(k,i,j)}};
\node [above = \L of gemm_kij,align=left] (a)
{\lstinline{if(i==j) potrf(j)}\\
\lstinline{else trsm(i,k), trsm(j,k)}};
\node [right = \L of gemm_kij] (b) {\lstinline{if(k > 0) gemm(k-1,i,j)}};
\node [below = \L of gemm_kij,align=left] (c)
{\lstinline{if(k<j-1) gemm(k+1,i,j)}\\
\lstinline{else if(i==j) potrf(j)}\\
\lstinline{else trsm(i,j)}};
\path [->,>=stealth] (a) edge[out=south,in=north] (gemm_kij);
\path [->,>=stealth] (b) edge[out=west,in=east] (gemm_kij);
\path [->,>=stealth] (gemm_kij) edge (c);
\end{tikzpicture}
}
\caption{PTG description of \Cref{alg:cholesky_sequential}.
In \texttt{TTor}{}, when out-dependencies are remote, an AM is sent to the remote rank, carrying the associated block and triggering remote tasks.}
\label{fig:cholesky_ptg}
\end{figure}
We compare \texttt{TTor}{}, StarPU (with STF semantics) and ScaLAPACK. A 2D block cyclic data distribution is used with a block size of 256. Task priorities in \texttt{TTor}{} are computed using~\cite{beaumont2020makespan}. As before, in ScaLAPACK the block size is related to the data distribution but there are no tasks per se.
Weak and strong scalings are performed by multiplying the number of rows and columns by 2 and/or the number of cores by 8. The largest test case is a matrix of size $N = 131\,072$. \Cref{fig:cholesky_scalings} shows the results.
We see that on large problems, \texttt{TTor}{} and StarPU reach very similar performance, both far outperforming ScaLAPACK: for $N = 131\,072$ on 1024 cores, ScaLAPACK takes more than $125$ secs (not shown). On the $N = 131\,072$ test case, \texttt{TTor}{} and StarPU differ by less than 10\%. \added{StarPU shows better strong scaling for small problems on many nodes. We conjecture that this may be due to a better task scheduler, memory management (thread-memory affinity), and mapping of the computation across nodes.}
\Cref{sfig:cholesky_scalings_large} shows the runtime as a function of the block size for a test case of size $65\,536 \times 65\,536$ on 64 nodes (1024 CPUs). We see that a block size of 256 gives the best results for both \texttt{TTor}{} and StarPU. Furthermore, we observe that for small task sizes, \texttt{TTor}{} degrades less quickly than StarPU: small block sizes lead to many tasks, and unrolling the full DAG on every node (as StarPU's STF semantics require) becomes prohibitive, even for reasonably large tasks (block size of 128). For a block size of 64, \texttt{TTor}{} is about 10x faster. Thanks to its lightweight runtime and distributed DAG exploration, \texttt{TTor}{} suffers less from small task sizes. For large task sizes, both degrade similarly; the poor performance at large sizes is caused by a lack of concurrency.
\Cref{sfig:cholesky_random_block} shows a load balancing test using random block sizes with a fixed number of blocks. $\rho$ is the ratio of the largest block size to the average block size. For $\rho = 1.5$, the ratio of flops between the largest and the smallest task is $(1.5/0.5)^3 = 27$. We see that \texttt{TTor}{} handles tasks of varying granularity very well, with less than 25\% degradation from $\rho = 1$ to $\rho = 2$ for an average block size of 256.
\begin{figure}
\centering
\subfloat[TaskTorrent.]{
\begin{tikzpicture}
\begin{loglogaxis}[
xlabel={Cores (16 per node)},xtick={2,16,128,1024},xticklabels={2,16,128,1024},
ymin=0.5,ymax=60,ylabel={Time [sec.]},width=4.3cm,font=\footnotesize,
ylabel near ticks,xlabel near ticks,
]
\addplot[black,only marks] table[x=total_ncores,y=total_time] {dense_cholesky_leo/ttor.dat};
\addplot[black,dotted] table[x=total_ncores,y=total_time,discard if not={flops_per_core}{536870912}] {dense_cholesky_leo/ttor.dat};
\addplot[black,dotted] table[x=total_ncores,y=total_time,discard if not={flops_per_core}{4294967296}] {dense_cholesky_leo/ttor.dat};
\addplot[black,dotted] table[x=total_ncores,y=total_time,discard if not={flops_per_core}{34359738368}] {dense_cholesky_leo/ttor.dat};
\addplot[black,dotted] table[x=total_ncores,y=total_time,discard if not={flops_per_core}{274877906944}] {dense_cholesky_leo/ttor.dat};
\addplot[black,dotted] table[x=total_ncores,y=total_time,discard if not={flops_per_core}{2199023255552}] {dense_cholesky_leo/ttor.dat};
\addplot[red,dashed] table[x=total_ncores,y=total_time,discard if not={matrix_size}{8192}] {dense_cholesky_leo/ttor.dat};
\addplot[blue,dashed] table[x=total_ncores,y=total_time,discard if not={matrix_size}{16384}] {dense_cholesky_leo/ttor.dat};
\addplot[purple,dashed] table[x=total_ncores,y=total_time,discard if not={matrix_size}{32768}] {dense_cholesky_leo/ttor.dat};
\addplot[orange,dashed] table[x=total_ncores,y=total_time,discard if not={matrix_size}{65536}] {dense_cholesky_leo/ttor.dat};
\addplot[black,dashed] table[x=total_ncores,y=total_time,discard if not={matrix_size}{131072}] {dense_cholesky_leo/ttor.dat};
\node (text) at (axis cs:2,2.2) {8k};
\node (text) at (axis cs:2,18) {16k};
\node (text) at (axis cs:14,27) {32k};
\node (text) at (axis cs:120,27) {64k};
\node (text) at (axis cs:800,29) {128k};
\end{loglogaxis}
\end{tikzpicture}
}
\subfloat[StarPU]{
\begin{tikzpicture}
\begin{loglogaxis}[
xlabel={Cores (16 per node)},xtick={2,16,128,1024},xticklabels={2,16,128,1024},
ymin=0.5,ymax=60,yticklabels=\empty,width=4.3cm,font=\footnotesize,
ylabel near ticks,xlabel near ticks,
]
\addplot[black,only marks] table[x=total_ncores,y=total_time] {dense_cholesky_leo/starpu.dat};
\addplot[black,dotted] table[x=total_ncores,y=total_time,discard if not={flops_per_core}{536870912}] {dense_cholesky_leo/starpu.dat};
\addplot[black,dotted] table[x=total_ncores,y=total_time,discard if not={flops_per_core}{4294967296}] {dense_cholesky_leo/starpu.dat};
\addplot[black,dotted] table[x=total_ncores,y=total_time,discard if not={flops_per_core}{34359738368}] {dense_cholesky_leo/starpu.dat};
\addplot[black,dotted] table[x=total_ncores,y=total_time,discard if not={flops_per_core}{274877906944}] {dense_cholesky_leo/starpu.dat};
\addplot[black,dotted] table[x=total_ncores,y=total_time,discard if not={flops_per_core}{2199023255552}] {dense_cholesky_leo/starpu.dat};
\addplot[red,dashed] table[x=total_ncores,y=total_time,discard if not={matrix_size}{8192}] {dense_cholesky_leo/starpu.dat};
\addplot[blue,dashed] table[x=total_ncores,y=total_time,discard if not={matrix_size}{16384}] {dense_cholesky_leo/starpu.dat};
\addplot[purple,dashed] table[x=total_ncores,y=total_time,discard if not={matrix_size}{32768}] {dense_cholesky_leo/starpu.dat};
\addplot[orange,dashed] table[x=total_ncores,y=total_time,discard if not={matrix_size}{65536}] {dense_cholesky_leo/starpu.dat};
\addplot[black,dashed] table[x=total_ncores,y=total_time,discard if not={matrix_size}{131072}] {dense_cholesky_leo/starpu.dat};
\node (text) at (axis cs:2,2.2) {8k};
\node (text) at (axis cs:2,18) {16k};
\node (text) at (axis cs:14,27) {32k};
\node (text) at (axis cs:120,27) {64k};
\node (text) at (axis cs:800,29) {128k};
\end{loglogaxis}
\end{tikzpicture}
} \\
\subfloat[ScaLAPACK]{
\begin{tikzpicture}
\begin{loglogaxis}[
xlabel={Cores (16 per node)},xtick={2,16,128,1024},xticklabels={2,16,128,1024},
ymin=0.5,ymax=60,width=4.3cm,font=\footnotesize,ylabel={Time [sec.]},
ylabel near ticks,xlabel near ticks,
]
\addplot[black,only marks] table[x=total_ncores,y=total_time] {dense_cholesky_leo/scalapack.dat};
\addplot[black,dotted] table[x=total_ncores,y=total_time,discard if not={flops_per_core}{536870912}] {dense_cholesky_leo/scalapack.dat};
\addplot[black,dotted] table[x=total_ncores,y=total_time,discard if not={flops_per_core}{4294967296}] {dense_cholesky_leo/scalapack.dat};
\addplot[black,dotted] table[x=total_ncores,y=total_time,discard if not={flops_per_core}{34359738368}] {dense_cholesky_leo/scalapack.dat};
\addplot[black,dotted] table[x=total_ncores,y=total_time,discard if not={flops_per_core}{274877906944}] {dense_cholesky_leo/scalapack.dat};
\addplot[black,dotted] table[x=total_ncores,y=total_time,discard if not={flops_per_core}{2199023255552}] {dense_cholesky_leo/scalapack.dat};
\addplot[red,dashed] table[x=total_ncores,y=total_time,discard if not={matrix_size}{8192}] {dense_cholesky_leo/scalapack.dat};
\addplot[blue,dashed] table[x=total_ncores,y=total_time,discard if not={matrix_size}{16384}] {dense_cholesky_leo/scalapack.dat};
\addplot[purple,dashed] table[x=total_ncores,y=total_time,discard if not={matrix_size}{32768}] {dense_cholesky_leo/scalapack.dat};
\addplot[orange,dashed] table[x=total_ncores,y=total_time,discard if not={matrix_size}{65536}] {dense_cholesky_leo/scalapack.dat};
\addplot[black,dashed] table[x=total_ncores,y=total_time,discard if not={matrix_size}{131072}] {dense_cholesky_leo/scalapack.dat};
\node (text) at (axis cs:2,3) {8k};
\node (text) at (axis cs:2,25) {16k};
\node (text) at (axis cs:100,21) {32k};
\node (text) at (axis cs:800,32) {64k};
\end{loglogaxis}
\end{tikzpicture}
}
\subfloat[Block size impact, $N = 65\,536$ with 1024 CPUs.]{
\label{sfig:cholesky_scalings_large}
\begin{tikzpicture}
\begin{loglogaxis}[
font=\footnotesize,xlabel={Block size},xtick={32,128,512,2048},
xticklabels={32,128,512,2048},width=4.3cm,
ylabel near ticks, xlabel near ticks,
legend style={at={(1.05,1.1)}}
]
\addplot[mark=*]
table[x=block_size,y=total_time] {dense_cholesky_leo/ttor_blocksize.dat};
\addplot[dashed,mark=triangle*,mark options={solid}]
table[x=block_size,y=total_time] {dense_cholesky_leo/starpu_blocksize_nopruning.dat};
\legend{\texttt{TTor}{}, StarPU};
\end{loglogaxis}
\end{tikzpicture}
} \\
\subfloat[Load balancing test with random block sizes. $N = 65\,536$ with 1024 CPUs.
Block sizes are random uniform on $((2-\rho)b,\rho b)$ with $b$ the average block size.
Numbers indicate the average block size.]{
\label{sfig:cholesky_random_block}
\begin{tikzpicture}
\begin{semilogyaxis}[
font=\footnotesize,xtick={1,1.5,2.0},
xticklabels={1.0,1.5,2.0},width=4.3cm,
ylabel near ticks,xlabel near ticks,ylabel={Time [sec.]},
ymin=5,xlabel={$\rho = $ \lstinline{max_block_size/average_block_size}},
]
\addplot[red,mark=*] table[x=ratio,y=total_time,discard if not={block_size}{64}] {dense_cholesky_leo/ttor_random.dat};
\addplot[blue,mark=*] table[x=ratio,y=total_time,discard if not={block_size}{128}] {dense_cholesky_leo/ttor_random.dat};
\addplot[mark=*] table[x=ratio,y=total_time,discard if not={block_size}{256}] {dense_cholesky_leo/ttor_random.dat};
\node at (axis cs:1.5,7) [align=left] {\textcolor{black}{256}};
\node at (axis cs:1.5,11.5) [align=left] {\textcolor{blue}{128}};
\node at (axis cs:1.5,17) [align=left] {\textcolor{red}{64}};
\end{semilogyaxis}
\end{tikzpicture}
}
\caption{Cholesky scalings.
(a-c): weak (dotted) and strong (dashed) scalings. Numbers indicate the matrix size $N$. Largest (top right) test case is $N = 131\,072$.
(d): optimal block size (i.e., task granularity) for the $N=65\,536$ test case.
(e): load balancing test using random block sizes for the $N=65\,536$ test case.
}
\label{fig:cholesky_scalings}
\end{figure}
\subsection{Distributed matrix-matrix product}
We now consider a distributed matrix-matrix multiplication problem (GEMM), i.e., given $A, B \in \mathbb{R}^{N \times N}$ compute $C = AB$.
We compare:
\begin{itemize}
\item \texttt{TTor}{} with an algorithm using a 2D block cyclic mapping of blocks of size 256 to ranks,
using the default (``small'') and large AMs;
\item \texttt{TTor}{} with an algorithm using a 3D mapping of blocks to ranks, tiled
(every GEMM is single threaded, with a block size of 256)
or not (every GEMM is a single large multithreaded BLAS).
We use the DNS algorithm (see for instance \cite{grama2003introduction}) to map blocks to ranks.
\item StarPU (with STF semantics, i.e., all ranks explore the full DAG) using a 2D block cyclic mapping of blocks of size 256 to ranks. Various scheduling strategies were tried without significant variation in runtime; the default local work stealing scheduler (\lstinline{lws}) is therefore used.
\item \added{ScaLAPACK using a 2D block cyclic mapping (with a block size of 256) with multithreaded BLAS. We note that ScaLAPACK is not a runtime and is not actively managing a task graph.}
\end{itemize}
With the 2D block cyclic data distribution, the contributions $A_{ik} B_{kj}$ are ordered as a function of $k$, i.e., $A_{ik} B_{kj}$ is applied before $A_{i(k+1)} B_{(k+1)j}$. Furthermore, because of the 2D data distribution, the products $A_{ik} B_{kj}$ are mapped to a rank that depends on $(i,j)$ only and, as such, all contributions to a given block happen on the same node. The mapping of tasks to threads may be any deterministic function of \lstinline{ikj}; in practice, something as simple as \lstinline{ikj[0]} modulo the number of threads works well.
\Cref{fig:gemm_scaling} presents strong and weak scalings results.
Scalings are done by multiplying the number of rows and columns by 2 and/or the number of nodes by 8,
and the largest test case uses matrices of size $65\,536$. We make the following observations:
\begin{itemize}
\item \texttt{TTor}{} benefits from the large messages (\Cref{sfig:gemm_2d}) over small ones, decreasing the total time by up to 30\%.
\item \texttt{TTor}{} with large messages and StarPU using the 2D mapping have similar performance (\Cref{sfig:gemm_2d} vs \Cref{sfig:gemm_starpu}). \texttt{TTor}{} performs better than StarPU with small blocks (\Cref{sfig:gemm_blocksize}).
\item \texttt{TTor}{} with the 3D mapping and the tiled algorithm has better performance than without (see \Cref{sfig:gemm_3d} as well as \Cref{sfig:gemm_3d_tiled} vs \Cref{sfig:gemm_3d_nontiled} for results on 8 nodes). This shows the importance of having a small task granularity, to increase overlap between communication and computation. It has however similar performance to the 2D mapping.
\item Runtime-based implementations outperform ScaLAPACK (\Cref{sfig:gemm_scalapack}), showing the benefits of a task-based runtime system.
\end{itemize}
\Cref{sfig:gemm_blocksize} shows the impact of the block size on the runtime.
We see that \texttt{TTor}{} is about 2.5x faster than StarPU at small sizes.
This highlights the advantages of a distributed DAG exploration. \added{We note that in this case small blocks are not optimal. However, GEMM is in some sense an ``easy'' benchmark since it offers a large amount of concurrency. Therefore, to stress the runtimes and observe measurable differences we need to deviate from the optimal GEMM settings. Although we could not investigate other algorithms for this paper, more complex applications would probably reveal additional differences between \texttt{TTor}{} and StarPU.\@}
\added{Finally, \Cref{sfig:gemm_concurrency} shows the efficiency of \texttt{TTor}{} (2D GEMM) as a function of the concurrency. Since the GEMMs are sequential as a function of $k$, \lstinline{num_blocks^2/n_cores} indicates how much parallelism is available per core. This represents the number of blocks that are processed on each core between communication steps. We see that efficiency decreases sharply at around 16 blocks per core.}
\begin{figure}
\centering
\subfloat[\label{sfig:gemm_3d_tiled}Tiled 3D DNS GEMM trace for 8 nodes. Every red line is 1 \texttt{TTor}{} thread using single-threaded BLAS.]{
\begin{tikzpicture}
\begin{axis}[
font=\footnotesize,xmin=0,xmax=5,ymin=0,ymax=1,width=0.24\textwidth,
ytick=\empty,yticklabels=\empty,xlabel={Time [sec.]},
]
\addplot[] graphics[xmin=0,xmax=5,ymin=0,ymax=1] {gemm_data/tiled};
\end{axis}
\end{tikzpicture}
}\quad\quad
\subfloat[\label{sfig:gemm_3d_nontiled}Non-tiled 3D DNS GEMM trace for 8 nodes. Every red line is 1 \texttt{TTor}{} thread using multithreaded BLAS.]{
\begin{tikzpicture}
\begin{axis}[
font=\footnotesize,xmin=0,xmax=7,ymin=0,ymax=1,width=0.24\textwidth,
ytick=\empty,yticklabels=\empty,xlabel={Time [sec.]}
]
\addplot[] graphics[xmin=0,xmax=7,ymin=0,ymax=1] {gemm_data/nontiled};
\end{axis}
\end{tikzpicture}
}\\
\subfloat[\label{sfig:gemm_2d}\texttt{TTor}{} 2D GEMM. Red = small AMs, blue = large AMs.]{
\begin{tikzpicture}
\pgfplotsset{
xlabel={Cores (16 per node)},xtick={1,8,64},xticklabels={16,128,1024},
ylabel={Time [sec.]},ymin=0.2,ymax=75,width=4.3cm,
ylabel near ticks,
}
\begin{loglogaxis}[font=\footnotesize]
\addplot[red, only marks] table[x=nranks,y=total_time] {gemm_data/ttor_2d_nonlarge.dat};
\addplot[blue,only marks] table[x=nranks,y=total_time] {gemm_data/ttor_2d_large.dat};
\addplot[red,dashed] table[x=nranks,y=total_time,discard if not={matrix_size}{8192}] {gemm_data/ttor_2d_nonlarge.dat};
\addplot[red,dashed] table[x=nranks,y=total_time,discard if not={matrix_size}{16384}] {gemm_data/ttor_2d_nonlarge.dat};
\addplot[red,dashed] table[x=nranks,y=total_time,discard if not={matrix_size}{32768}] {gemm_data/ttor_2d_nonlarge.dat};
\addplot[red,dashed] table[x=nranks,y=total_time,discard if not={matrix_size}{65536}] {gemm_data/ttor_2d_nonlarge.dat};
\addplot[black,dotted] table[x=nranks,y=total_time,discard if not={flops_per_rank}{68719476736}] {gemm_data/ttor_2d_nonlarge.dat};
\addplot[black,dotted] table[x=nranks,y=total_time,discard if not={flops_per_rank}{549755813888}] {gemm_data/ttor_2d_nonlarge.dat};
\addplot[black,dotted] table[x=nranks,y=total_time,discard if not={flops_per_rank}{4398046511104}] {gemm_data/ttor_2d_nonlarge.dat};
\addplot[blue,dashed] table[x=nranks,y=total_time,discard if not={matrix_size}{8192}] {gemm_data/ttor_2d_large.dat};
\addplot[blue,dashed] table[x=nranks,y=total_time,discard if not={matrix_size}{16384}] {gemm_data/ttor_2d_large.dat};
\addplot[blue,dashed] table[x=nranks,y=total_time,discard if not={matrix_size}{32768}] {gemm_data/ttor_2d_large.dat};
\addplot[blue,dashed] table[x=nranks,y=total_time,discard if not={matrix_size}{65536}] {gemm_data/ttor_2d_large.dat};
\addplot[black,dotted] table[x=nranks,y=total_time,discard if not={flops_per_rank}{68719476736}] {gemm_data/ttor_2d_large.dat};
\addplot[black,dotted] table[x=nranks,y=total_time,discard if not={flops_per_rank}{549755813888}] {gemm_data/ttor_2d_large.dat};
\addplot[black,dotted] table[x=nranks,y=total_time,discard if not={flops_per_rank}{4398046511104}] {gemm_data/ttor_2d_large.dat};
\node (text) at (axis cs:1,2) {8k};
\node (text) at (axis cs:1,14) {16k};
\node (text) at (axis cs:8,18) {32k};
\node (text) at (axis cs:60,21) {64k};
\end{loglogaxis}
\end{tikzpicture}
}\,
\subfloat[\label{sfig:gemm_3d}\texttt{TTor}{} 3D GEMM. Red = non-tiled, blue = tiled.]{
\begin{tikzpicture}
\pgfplotsset{
xlabel={Cores (16 per node)},xtick={1,8,64},xticklabels={16,128,1024},
ymin=0.2,ymax=75,width=4.3cm,yticklabels=\empty,
}
\begin{loglogaxis}[font=\footnotesize]
\addplot[red, only marks] table[x=nranks,y=total_time] {gemm_data/ttor_3d_nontiled.dat};
\addplot[blue,only marks] table[x=nranks,y=total_time] {gemm_data/ttor_3d_tiled.dat};
\addplot[red,dashed] table[x=nranks,y=total_time,discard if not={matrix_size}{8192}] {gemm_data/ttor_3d_nontiled.dat};
\addplot[red,dashed] table[x=nranks,y=total_time,discard if not={matrix_size}{16384}] {gemm_data/ttor_3d_nontiled.dat};
\addplot[red,dashed] table[x=nranks,y=total_time,discard if not={matrix_size}{32768}] {gemm_data/ttor_3d_nontiled.dat};
\addplot[red,dashed] table[x=nranks,y=total_time,discard if not={matrix_size}{65536}] {gemm_data/ttor_3d_nontiled.dat};
\addplot[black,dotted] table[x=nranks,y=total_time,discard if not={flops_per_rank}{68719476736}] {gemm_data/ttor_3d_nontiled.dat};
\addplot[black,dotted] table[x=nranks,y=total_time,discard if not={flops_per_rank}{549755813888}] {gemm_data/ttor_3d_nontiled.dat};
\addplot[black,dotted] table[x=nranks,y=total_time,discard if not={flops_per_rank}{4398046511104}] {gemm_data/ttor_3d_nontiled.dat};
\addplot[blue,dashed] table[x=nranks,y=total_time,discard if not={matrix_size}{8192}] {gemm_data/ttor_3d_tiled.dat};
\addplot[blue,dashed] table[x=nranks,y=total_time,discard if not={matrix_size}{16384}] {gemm_data/ttor_3d_tiled.dat};
\addplot[blue,dashed] table[x=nranks,y=total_time,discard if not={matrix_size}{32768}] {gemm_data/ttor_3d_tiled.dat};
\addplot[blue,dashed] table[x=nranks,y=total_time,discard if not={matrix_size}{65536}] {gemm_data/ttor_3d_tiled.dat};
\addplot[black,dotted] table[x=nranks,y=total_time,discard if not={flops_per_rank}{68719476736}] {gemm_data/ttor_3d_tiled.dat};
\addplot[black,dotted] table[x=nranks,y=total_time,discard if not={flops_per_rank}{549755813888}] {gemm_data/ttor_3d_tiled.dat};
\addplot[black,dotted] table[x=nranks,y=total_time,discard if not={flops_per_rank}{4398046511104}] {gemm_data/ttor_3d_tiled.dat};
\node (text) at (axis cs:1,2) {8k};
\node (text) at (axis cs:1,14) {16k};
\node (text) at (axis cs:8,18) {32k};
\node (text) at (axis cs:60,21) {64k};
\end{loglogaxis}
\end{tikzpicture}
} \\
\subfloat[\label{sfig:gemm_starpu}StarPU 2D GEMM.]{
\begin{tikzpicture}
\pgfplotsset{
xlabel={Cores (16 per node)},xtick={1,8,64},xticklabels={16,128,1024},
ylabel={Time [sec.]},ymin=0.2,ymax=75,width=4.3cm,
ylabel near ticks,
}
\begin{loglogaxis}[font=\footnotesize]
\addplot[black,only marks] table[x=nranks,y=total_time] {gemm_data/starpu_2d.dat};
\addplot[black,dashed] table[x=nranks,y=total_time,discard if not={matrix_size}{8192}] {gemm_data/starpu_2d.dat};
\addplot[black,dashed] table[x=nranks,y=total_time,discard if not={matrix_size}{16384}] {gemm_data/starpu_2d.dat};
\addplot[black,dashed] table[x=nranks,y=total_time,discard if not={matrix_size}{32768}] {gemm_data/starpu_2d.dat};
\addplot[black,dotted] table[x=nranks,y=total_time,discard if not={flops_per_rank}{68719476736}] {gemm_data/starpu_2d.dat};
\addplot[black,dotted] table[x=nranks,y=total_time,discard if not={flops_per_rank}{549755813888}] {gemm_data/starpu_2d.dat};
\addplot[black,dotted] table[x=nranks,y=total_time,discard if not={flops_per_rank}{4398046511104}] {gemm_data/starpu_2d.dat};
\node (text) at (axis cs:1,2) {8k};
\node (text) at (axis cs:1,14) {16k};
\node (text) at (axis cs:8,18) {32k};
\node (text) at (axis cs:60,20) {64k};
\end{loglogaxis}
\end{tikzpicture}
} \,
\subfloat[\label{sfig:gemm_scalapack}ScaLAPACK GEMM.]{
\begin{tikzpicture}
\pgfplotsset{
xlabel={Cores (16 per node)},xtick={1,8,64},xticklabels={16,128,1024},
ymin=0.2,ymax=75,width=4.3cm,ylabel near ticks,yticklabels=\empty,
}
\begin{loglogaxis}[font=\footnotesize]
\addplot[black,only marks] table[x=nranks,y=total_time] {gemm_data/scalapack.dat};
\addplot[black,dashed] table[x=nranks,y=total_time,discard if not={matrix_size}{8192}] {gemm_data/scalapack.dat};
\addplot[black,dashed] table[x=nranks,y=total_time,discard if not={matrix_size}{16384}] {gemm_data/scalapack.dat};
\addplot[black,dashed] table[x=nranks,y=total_time,discard if not={matrix_size}{32768}] {gemm_data/scalapack.dat};
\addplot[black,dotted] table[x=nranks,y=total_time,discard if not={flops_per_rank}{68719476736}] {gemm_data/scalapack.dat};
\addplot[black,dotted] table[x=nranks,y=total_time,discard if not={flops_per_rank}{549755813888}] {gemm_data/scalapack.dat};
\addplot[black,dotted] table[x=nranks,y=total_time,discard if not={flops_per_rank}{4398046511104}] {gemm_data/scalapack.dat};
\node (text) at (axis cs:1,2) {8k};
\node (text) at (axis cs:1,18) {16k};
\node (text) at (axis cs:8,30) {32k};
\node (text) at (axis cs:60,27) {64k};
\end{loglogaxis}
\end{tikzpicture}
} \\
\subfloat[\label{sfig:gemm_blocksize}Block size impact, $N = 32\,768$ with 1024 CPUs.]{
\begin{tikzpicture}
\pgfplotsset{
xlabel={Block size},xtick={64,256,1024},xticklabels={64,256,1024},
width=4.3cm,ylabel near ticks,ylabel={Time [sec.]},
}
\begin{loglogaxis}[font=\footnotesize, legend style={nodes={scale=0.7, transform shape}}]
\addplot[black,mark=*] table[x=block_size,y=total_time] {gemm_data/ttor_2d_large_blocksize.dat};
\addplot[black,dashed,mark=triangle*,mark options={solid}] table[x=block_size,y=total_time] {gemm_data/starpu_2d_nonpruned_blocksize.dat};
\addplot[black,mark=square*] table[x=block_size,y=total_time,discard if not={block_size}{64}] {gemm_data/ttor_2d_nonlarge_blocksize.dat};
\legend{\texttt{TTor}{} (2D), StarPU};
\end{loglogaxis}
\node (text) at (0.3,0.42) [fill=white,inner sep=1pt,anchor=north,align=left,font=\scriptsize] {Small AMs};
\draw [->,>=stealth] (text)--(0.24,0.9);
\end{tikzpicture}
} \,
\subfloat[\label{sfig:gemm_concurrency} \texttt{TTor}{} 2D GEMM. Concurrency = \lstinline{num_blocks^2/n_cores}, $N = 16\,384$.]{
\begin{tikzpicture}
\pgfplotsset{
xlabel={Concurrency}, log basis x = {2}, xtick={1,16,256, 4096}, xticklabels={1,16,256, 4096},
ymin=0,ymax=1,yticklabel={\pgfmathparse{\tick*100}\pgfmathprintnumber{\pgfmathresult}\%}, ylabel={Efficiency}, ylabel near ticks,
width=4.3cm
}
\begin{axis}[font=\footnotesize, xmode = log, legend style={draw=none, fill=none, at={(1.0,0.79)}, nodes={scale=0.6, transform shape}}]
\addlegendimage{empty legend}
\addlegendentry{\# nodes}
\addplot[red, mark=x, thick,only marks] table[x=conc,y=eff, discard if not={n}{16384}, discard if not={nodes}{1}] {gemm_data/ttor_eff_conc.dat};
\addlegendentry{1};
\addplot[orange, mark=o, thick,only marks] table[x=conc,y=eff, discard if not={n}{16384}, discard if not={nodes}{2}] {gemm_data/ttor_eff_conc.dat};
\addlegendentry{2};
\addplot[teal, mark=square, thick,only marks] table[x=conc,y=eff, discard if not={n}{16384}, discard if not={nodes}{4}] {gemm_data/ttor_eff_conc.dat};
\addlegendentry{4};
\addplot[blue, mark=+, thick,only marks] table[x=conc,y=eff, discard if not={n}{16384}, discard if not={nodes}{8}] {gemm_data/ttor_eff_conc.dat};
\addlegendentry{8};
\addplot[purple, mark=triangle, thick,only marks] table[x=conc,y=eff, discard if not={n}{16384}, discard if not={nodes}{16}] {gemm_data/ttor_eff_conc.dat};
\addlegendentry{16};
\addplot[green!40!gray, mark=o, thick,only marks] table[x=conc,y=eff, discard if not={n}{16384}, discard if not={nodes}{32}] {gemm_data/ttor_eff_conc.dat};
\addlegendentry{32};
\end{axis}
\end{tikzpicture}
}
\caption{GEMM scalings.
(a-b): impact of task granularity on 3D GEMM. Smaller tasks give higher overlap of computation and communication.
(c-f): weak (dotted) and strong (dashed) scalings. Numbers indicate the matrix size $N$. Largest test case is $N = 65\,536$.
(g): optimal block size (i.e., task granularity) for the $N=32\,768$ test case.
The extra data point shows the improvement when using small AMs instead of large AMs on small block sizes. The decrease in the number of messages sent improves the runtime by 3x.
(h): efficiency as a function of concurrency for $N=16\,384$. Reference timing is with 1 core.}
\label{fig:gemm_scaling}
\end{figure}
\subsection{Implementation Details}
\subsubsection{Taskflow and threadpool}
The threadpool is implemented with two
\lstinline{std::priority_queue<Task*>} per thread, storing the ready-to-run tasks.
Since some tasks can be stolen and others not, each thread has two queues.
The priority queues are protected using \lstinline{std::mutex} so that tasks can be inserted into a thread queue by any other thread.
One of the main goals of the \lstinline{Taskflow<K>} implementation is to support arbitrary task flows with keys belonging to any domain. Hence, we store dependencies in a \lstinline{std::unordered_map<K,int>}. Furthermore, to avoid having one central map storing all dependencies (whose access would need to be serialized), the map is distributed across threads. Dependency counts are split among the threads using the mapping function: the dependency count of task \lstinline{k} is stored in the map associated with thread \lstinline{mapping(k)}. Each distributed map is always accessed by the same thread, preventing data races.
\subsubsection{Active messages and communication thread}
Active messages (AM) are implemented by registering functions on every rank in the same order.
Each AM then has a unique ID shared across ranks.
This ID is later used to retrieve the function on the receiver side.
Communication is performed using MPI non-blocking sends and receives.
The \lstinline{Communicator} maintains three queues:
\begin{enumerate}
\item a queue of serialized and ready-to-send messages;
\item a queue of in-flight send messages, to be freed when the associated send completes;
\item a queue of in-flight receive messages, whose AMs are run and buffers freed when the associated receive completes.
\end{enumerate}
On the sender side, when an active message is sent with \lstinline{am->send(dest, ps...)} (a thread-safe call), the arguments \lstinline{ps...} are first serialized into a buffer, along with the AM ID. The buffer is placed in a queue in the communicator. When \lstinline{progress()} is called, that buffer will eventually be sent using \lstinline{MPI_Isend} and later freed when the send has completed.
On the receiver side, calling \lstinline{progress()} performs the following:
\begin{enumerate}
\item As long as it succeeds, it calls \lstinline{MPI_Iprobe} to probe for incoming messages and
(1) retrieves the message size using \lstinline{MPI_Get_count},
(2) allocates a buffer and
(3) receives the message using \lstinline{MPI_Irecv}.
\item It goes through all received messages and tests for completion with \lstinline{MPI_Test}.
If it succeeds
(1) it retrieves the AM using the ID from the buffer and
(2) deserializes the buffer, passes the arguments to the user function and runs the user function.
\end{enumerate}
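The wire format can be sketched as the AM ID followed by the packed arguments. Below is a simplified, MPI-free illustration restricted to trivially copyable argument types (the function names are ours; the real serializer is more general), assuming C++17 fold expressions:

```cpp
#include <cstring>
#include <vector>

// Sketch of the wire format used for an active message: the AM ID followed
// by the serialized arguments, packed back-to-back into one byte buffer.
template <typename... Ps>
std::vector<char> am_serialize(int am_id, const Ps&... ps) {
    std::vector<char> buf(sizeof(int) + (sizeof(Ps) + ... + 0));
    char* p = buf.data();
    std::memcpy(p, &am_id, sizeof(int));
    p += sizeof(int);
    ((std::memcpy(p, &ps, sizeof(Ps)), p += sizeof(Ps)), ...);
    return buf;
}

// On the receiver: read the ID back (used to retrieve the AM), then unpack
// the arguments in the same order before running the user function.
template <typename... Ps>
int am_deserialize(const std::vector<char>& buf, Ps&... ps) {
    const char* p = buf.data();
    int am_id;
    std::memcpy(&am_id, p, sizeof(int));
    p += sizeof(int);
    ((std::memcpy(&ps, p, sizeof(Ps)), p += sizeof(Ps)), ...);
    return am_id;
}
```

The ID travels in-band at the front of the buffer, so a single \lstinline{MPI_Irecv} delivers everything the receiver needs to dispatch the message.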
MPI tags are used to distinguish
(1) messages of size smaller or larger than $2^{31}$ bytes, and
(2) regular and large AMs.
\section{Introduction}
\subsection{Parallel runtime systems}
Classical parallel computing has traditionally followed a fork-join (as in OpenMP) or a bulk-synchronous (as in MPI) approach.
(\Cref{fig:mpi} shows the skeleton of a typical MPI program).
This has many advantages, including ease of programming and predictable performance.
It has however a key downside: \added{it introduces many synchronization points during execution,} even when they are not necessary.
Runtime systems take a different approach.
The key concept is to express computations as a graph of tasks with dependencies between them (\Cref{fig:dag}).
This graph is directed and acyclic, and we will later refer to it as the task DAG.
Given the DAG, the runtime system is able to extract parallelism by identifying which tasks can run in parallel.
Tasks are then assigned to processors (individual cores, nodes, accelerators, etc.).
The advantage of this method is that it removes \added{all} unnecessary synchronization points.
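For instance, the initially runnable tasks of a DAG can be found by counting in-dependencies; a minimal Kahn-style sketch:

```cpp
#include <map>
#include <vector>

// Minimal illustration of how a runtime extracts parallelism from a DAG:
// tasks whose in-degree is zero are ready to run immediately; when a task
// completes, the in-degree of its successors is decremented, possibly
// making them ready in turn.
using Dag = std::map<char, std::vector<char>>;  // task -> successors

std::vector<char> initially_ready(const Dag& dag) {
    std::map<char, int> indeg;
    for (const auto& [t, succs] : dag) {
        indeg.emplace(t, 0);  // ensure every task has an entry
        for (char s : succs) indeg[s]++;
    }
    std::vector<char> ready;
    for (const auto& [t, d] : indeg)
        if (d == 0) ready.push_back(t);
    return ready;
}
```

On the DAG of \Cref{fig:dag}, tasks $a$ and $b$ have no predecessors and can start at once, in parallel.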
\begin{figure}[b]
\centering
\subfloat[\label{fig:mpi}A typical MPI program]{
\begin{minipage}{0.22\textwidth}
\lstinputlisting[language=C++,basicstyle=\footnotesize\ttfamily]{mpi.cpp}
\end{minipage}
}
\subfloat[\label{fig:dag}Example of DAG of tasks.]{
\begin{minipage}{0.22\textwidth}
\centering
\begin{tikzpicture}[tips=proper]
\node (a) at (0,0) {$a$};
\node [below=0.5cm of a] (b) {$b$};
\node [above right=0.1cm and 0.5cm of a] (c) {$c$};
\node [below=0.5cm of c] (d) {$d$};
\node [below right=0.1cm and 0.5cm of b] (e) {$e$};
\node [right=0.5cm of c] (f) {$f$};
\node [above right=0.1cm and 0.5cm of e] (g) {$g$};
\draw [->,>=stealth] (a) edge (c);
\draw [->,>=stealth] (a) edge (d);
\draw [->,>=stealth] (b) edge (d);
\draw [->,>=stealth] (b) edge (e);
\draw [->,>=stealth] (c) edge (f);
\draw [->,>=stealth] (d) edge (g);
\draw [->,>=stealth] (e) edge (g);
\end{tikzpicture}
\end{minipage}
}
\caption{More parallelism can be extracted using a task DAG:
task $d$ needs to wait for tasks $a$ and $b$. However, task $f$ can run as soon as task $c$ has finished.}
\label{fig:fork_join_spmd_dag}
\end{figure}
\subsection{Existing approaches to describe the DAG} \label{subsec:stf_ptg}
A key design choice in runtime systems is how to express the DAG.\@
At a high-level, two approaches have been primarily used.
\subsubsection{Sequential Task Flow (STF)}
In this approach, the graph is discovered by the runtime using a sequential semantics, \added{that is, typically, on each node a single thread is responsible for building the DAG.}
\added{Different mechanisms to compute task dependencies can be used.}
Often, this takes the form of inferring dependencies based on \added{specifying data sharing rules}
(e.g., READ, WRITE, READWRITE).
This is the approach taken by Legion/Regent \cite{bauer2014legion}
\footnote{Legion is the name of the lower level C++ API, while Regent is
the name of the higher-level language based on Lua.} and StarPU \cite{augonnet2011starpu}.
In both, the user first defines data regions and tasks operating on those regions (as inputs or outputs).
Regent maintains a global view of the data, and \added{data regions correspond to a partitioning of the data.}
\added{The user is also able to write mappers to indicate how to map and schedule tasks to the available hardware.}
StarPU uses data handles referring to distributed memory buffers.
The program is then written in a sequential style (with for loops, if/else statements, etc.), creating tasks on previously registered data regions.
The runtime system then discovers task dependencies, builds the DAG and executes tasks \added{in parallel.}
The key in the STF approach is that the DAG has to be discovered \added{through sequential enumeration.}
\added{This restriction may have performance implications but is attractive to the programmer,} since the program is easy to write and understand.
\subsubsection{Parametrized Task Graph (PTG)}
The PTG approach is another method to express the DAG.
Using some index space (\lstinline{K}) to index all tasks, functions of \lstinline{K} are used
to express tasks and their dependencies.
As an example, \added{the DAG could be defined by specifying three functions of \lstinline{K} (other choices are possible):} one for the in-dependencies, one for the computational task itself and one for the out-dependencies. \added{By running these functions as needed, the runtime discovers the DAG dynamically.}
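To make this concrete, the following toy PTG (the names are ours, not PaRSEC's JDF) describes a chain of \lstinline{n} tasks over the index space \lstinline{K} $= \{0,\dots,n-1\}$, together with a sequential ``runtime'' that discovers and executes it on the fly:

```cpp
#include <vector>

// Toy PTG over the index space K = {0, ..., n-1}, describing the chain
// 0 -> 1 -> ... -> n-1. The DAG is never stored: the runtime queries the
// three functions as needed and discovers it piece by piece.
struct ChainPtg {
    int n;
    // In-dependencies: task 0 is a root, every other task has one.
    int in_deps(int k) const { return k == 0 ? 0 : 1; }
    // Out-dependencies: task k fulfills task k+1, if it exists.
    std::vector<int> out_deps(int k) const {
        return k + 1 < n ? std::vector<int>{k + 1} : std::vector<int>{};
    }
    // The computational task itself (here: record its execution).
    void task(int k, std::vector<int>& log) const { log.push_back(k); }
};

// Sequential "runtime": seed the roots, then follow out-dependencies.
// (A real runtime would count fulfillments; here each non-root task has
// exactly one in-dependency, so a successor is ready immediately.)
std::vector<int> run(const ChainPtg& ptg) {
    std::vector<int> log, ready;
    for (int k = 0; k < ptg.n; k++)
        if (ptg.in_deps(k) == 0) ready.push_back(k);
    while (!ready.empty()) {
        int k = ready.back();
        ready.pop_back();
        ptg.task(k, log);
        for (int s : ptg.out_deps(k)) ready.push_back(s);
    }
    return log;
}
```

Note that only the ready set is ever materialized; the full chain is discovered one task at a time.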
PaRSEC \cite{6654146} takes that approach, using a custom language (JDF) to express the PTG.\@
\added{In PaRSEC, in and out-dependencies specifications contain both tasks and data.}
The PTG format has multiple advantages.
Since task in/out-dependencies can be queried independently at any time, task management is simplified, leading to minimal overhead during execution.
\added{It also naturally scales by parallelizing both the DAG creation and DAG execution.}
In contrast, an STF code uses, in its purest form, a single thread to discover the DAG.\@
\added{It also removes the need to store in memory large portions of the DAG of tasks.}
Instead, the runtime can query the relevant functions only as needed and discover the DAG piece by piece.
The main drawback of the PTG approach is that the program no longer has a sequential semantics, which
makes it harder to understand the program's behavior at first sight.
\Cref{fig:stf_vs_ptg} illustrates at a higher level the differences between the STF and the PTG approach.
\begin{figure}
\centering
\subfloat[][\label{fig:stf}STF based program.
Dependencies are inferred through data sharing rules.
]{
\lstinputlisting[basicstyle=\footnotesize\ttfamily]{stf.cpp}
} \;
\subfloat[][\label{fig:ptg}PTG based program.
Task dependencies are defined using functions over \lstinline{K}.
Computation is triggered by seeding the initial tasks.]{
\lstinputlisting[basicstyle=\footnotesize\ttfamily]{ptg.cpp}
}
\caption{\added{Schematic of STF and PTG programs.}}
\label{fig:stf_vs_ptg}
\end{figure}
\subsection{Micro-benchmarks}
We first perform a series of micro-benchmarks to validate the low overhead of the shared memory component of the runtime. In the following, we average timings across 25 runs. In every case, the standard deviation was recorded as well to estimate the variability of the measurement; in most cases it was negligible and we do not report it. In all cases, we pick a number of tasks so that the total runtime is about 1 second.
\subsubsection{No-dependencies overhead}
We begin with an estimate of the ``serial'' overhead of \texttt{TTor}{}'s shared memory runtime.
We start \lstinline{ntasks} tasks, without any dependencies, and assign them in a round-robin fashion to the \lstinline{nthreads} threads.
Each task is only spinning for \lstinline{spin_time} seconds.
As such, the total ideal time is \lstinline{spin_time} $\times$ \lstinline{ntasks} $/$ \lstinline{nthreads}.
\Cref{fig:benchmark_no_deps} shows the efficiency as a function of \lstinline{nthreads} and \lstinline{spin_time}.
Given a total wall clock time of \lstinline{run_time}, efficiency is defined as (\lstinline{spin_time} $\times$ \lstinline{ntasks}) $/$ (\lstinline{run_time} $\times$ \lstinline{nthreads}).
\lstinline{ntasks} is chosen so that \lstinline{run_time} is around 2 seconds.
\Cref{sfig:benchmark_no_deps_a} shows results for \texttt{TTor}{} only,
where we do \textbf{not} measure task insertion, i.e., we evaluate
\begin{lstlisting}[basicstyle=\footnotesize\ttfamily]
for(int k = 0; k < n_tasks; k++) {
tf.fulfill_promise(k);
}
tp.start(); // Start measuring time
tp.join(); // Stop measuring time
\end{lstlisting}
We see that the runtime has a negligible impact for tasks of $\approx$ 100 $\mu$s,
while for tasks of around 1 $\mu$s the overhead dominates.
We then compare it to OpenMP and StarPU in \Cref{sfig:benchmark_no_deps_b} where, to make the comparison fair, insertion time \textbf{is} measured
(which reduces the maximum possible efficiency, as the insertion is sequential).
\begin{lstlisting}[basicstyle=\footnotesize\ttfamily]
tp.start(); // Start measuring time
for(int k = 0; k < n_tasks; k++) {
tf.fulfill_promise(k);
}
tp.join(); // Stop measuring time
\end{lstlisting}
We note that this is an artifact of creating tasks with no dependencies: in practice, the insertion is done by other tasks, themselves executing in parallel. We evaluate StarPU both using ``direct'' task insertion (``Task'') and using the STF approach (``STF''). In the STF approach, each independent task is associated with an artificial independent read-write piece of data. We see that for very small tasks ($<$ 10 $\mu$s), the overhead is significant but comparable for all runtimes.
\begin{figure}
\centering
\subfloat[][\label{sfig:benchmark_no_deps_a}
\texttt{TTor}{}'s overhead (no dependencies) measurement.
Task insertion time is not included. Numbers indicate \lstinline{spin_time}.]{
\begin{tikzpicture}
\begin{axis}[
font=\footnotesize, ytick={0,0.5,1.0},
xlabel={Num.\ threads}, ylabel={Efficiency},
width=4.5cm, ylabel near ticks, ymax=1.2, ymin=-0.1,
]
\addplot+[mark=*,mark options={mark size=1pt,solid},red] table[x=n_threads,y=efficiency_mean,y error=efficiency_std,discard if not={sleep_time}{1.000000e-04}]{micro_benchmarks/no_deps.dat};
\addplot+[mark=*,mark options={mark size=1pt,solid},blue] table[x=n_threads,y=efficiency_mean,y error=efficiency_std,discard if not={sleep_time}{1.000000e-05}]{micro_benchmarks/no_deps.dat};
\addplot+[mark=*,mark options={mark size=1pt,solid},brown] table[x=n_threads,y=efficiency_mean,y error=efficiency_std,discard if not={sleep_time}{1.000000e-06}]{micro_benchmarks/no_deps.dat};
\node (text) at (axis cs:12.5,0.3) {\textcolor{brown}{1 $\mu$s}};
\node (text) at (axis cs:12.5,0.75) {\textcolor{blue}{10 $\mu$s}};
\node (text) at (axis cs:12.5,1.1) {\textcolor{red}{100 $\mu$s}};
\end{axis}
\end{tikzpicture}
}\;
\subfloat[][\label{sfig:benchmark_no_deps_b}
Overhead (no dependencies) comparisons.
Task insertion time is included.
Solid is \lstinline{spin_time} = 100 $\mu$s; dashed is 10 $\mu$s;
* is StarPU with direct task insertion (Task) and STF semantics (STF).]{
\begin{tikzpicture}
\begin{axis}[
font=\footnotesize,
xlabel={Num.\ threads}, ylabel={Efficiency},
width=4.5cm, ylabel near ticks, ymax=1.2, ymin=-0.1,
]
\addplot[mark=*,mark options={mark size=1pt,solid},red,dashed,forget plot] table[x=n_threads,y=efficiency_mean,y error=efficiency_std,discard if not={sleep_time}{1.000000e-05},discard if not={test}{ttor_wait_time_insertion1}]{micro_benchmarks/no_deps_comp.dat};
\addplot[mark=*,mark options={mark size=1pt,solid},red,] table[x=n_threads,y=efficiency_mean,y error=efficiency_std,discard if not={sleep_time}{1.000000e-04},discard if not={test}{ttor_wait_time_insertion1}]{micro_benchmarks/no_deps_comp.dat};
\addplot[mark=*,mark options={mark size=1pt,solid},blue,dashed,forget plot] table[x=n_threads,y=efficiency_mean,y error=efficiency_std,discard if not={sleep_time}{1.000000e-05},discard if not={test}{omp_wait}]{micro_benchmarks/no_deps_comp.dat};
\addplot[mark=*,mark options={mark size=1pt,solid},blue] table[x=n_threads,y=efficiency_mean,y error=efficiency_std,discard if not={sleep_time}{1.000000e-04},discard if not={test}{omp_wait}]{micro_benchmarks/no_deps_comp.dat};
\addplot[mark=*,mark options={mark size=1pt,solid},black,dashed,forget plot] table[x=n_threads,y=efficiency_mean,y error=efficiency_std,discard if not={sleep_time}{1.000000e-05},discard if not={test}{startpu_wait}]{micro_benchmarks/no_deps_comp.dat};
\addplot[mark=*,mark options={mark size=1pt,solid},black] table[x=n_threads,y=efficiency_mean,y error=efficiency_std,discard if not={sleep_time}{1.000000e-04},discard if not={test}{startpu_wait}]{micro_benchmarks/no_deps_comp.dat};
\addplot[mark=*,mark options={mark size=1pt,solid},brown,dashed,forget plot] table[x=n_threads,y=efficiency_mean,y error=efficiency_std,discard if not={sleep_time}{1.000000e-05},discard if not={test}{startpu_wait_stf}]{micro_benchmarks/no_deps_comp.dat};
\addplot[mark=*,mark options={mark size=1pt,solid},brown] table[x=n_threads,y=efficiency_mean,y error=efficiency_std,discard if not={sleep_time}{1.000000e-04},discard if not={test}{startpu_wait_stf}]{micro_benchmarks/no_deps_comp.dat};
\node (text) at (axis cs:9,0.1) {\textcolor{brown}{* (STF)}};
\node [fill=white,inner sep=1pt] (text) at (axis cs:8.3,0.68) {\textcolor{black}{* (Task)}};
\node (text) at (axis cs:15,0.58) {\textcolor{blue}{OMP}};
\node (text) at (axis cs:15,0.85) {\textcolor{red}{\texttt{TTor}{}}};
\end{axis}
\end{tikzpicture}
}
\caption{Shared memory serial overhead, as a function of the number of threads \lstinline{nthreads} (x-axis) and the task time \lstinline{spin_time} (various lines).
The plots show the mean across 25 runs.}
\label{fig:benchmark_no_deps}
\end{figure}
\subsubsection{Many dependencies overhead}
We then estimate the overhead when dependencies are involved.
Consider a 2D array of \lstinline{nrows} $\times$ \lstinline{ncols} tasks, with \lstinline{ndeps} dependencies between task $(i,j)$ and $((i+k)\text{\%\lstinline{nrows}},j+1)$ for $0 \leq k <$ \lstinline{ndeps}.
Again, tasks are spinning for \lstinline{spin_time} seconds and, in \texttt{TTor}{}, task $(i,j)$ is assigned to thread $i \%$ \lstinline{nthreads}.
\added{Since this is not easily implementable in OpenMP,} we only compare \texttt{TTor}{} with StarPU. In the ``Task'' version, tasks are directly inserted, and their dependencies are explicitly expressed. In the STF approach, we register data for every $(i,j)$ task and that data is used to create dependencies with the tasks in the next column. We note that StarPU STF has the constraint that the number of input data buffers for a given task should normally be known at compile time, making it ill-suited for this benchmark.
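As a sanity check on this pattern, every task in a column $j>0$ has exactly \lstinline{ndeps} in-dependencies (when \lstinline{ndeps} $\leq$ \lstinline{nrows}); a small hypothetical helper that counts them by enumerating the edges:

```cpp
// Count, by enumerating the benchmark's edges, how many predecessors
// task (i, j) has: an edge goes from (p, j-1) to ((p+k) % nrows, j)
// for every 0 <= p < nrows and 0 <= k < ndeps. Column 0 has no
// predecessors; every other column receives exactly ndeps edges per task.
int in_degree(int i, int j, int nrows, int ndeps) {
    if (j == 0) return 0;
    int count = 0;
    for (int p = 0; p < nrows; p++)
        for (int k = 0; k < ndeps; k++)
            if ((p + k) % nrows == i) count++;
    return count;
}
```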
\Cref{fig:benchmark_with_deps} shows the results with \lstinline{nrows} set to 32.
We see that \texttt{TTor}{} is between StarPU ``Task'' and StarPU ``STF'', with similar overhead.
This validates the implementation.
\begin{figure}
\centering
\begin{tikzpicture}
\begin{groupplot}[
group style={
group size=2 by 3,
}
]
\nextgroupplot[
title={\lstinline{spin_time} = 100$\mu$s},
ylabel={TTOR},
ylabel style={align=center,at={(-0.2,0.5)}},
width=4cm, font=\footnotesize, ymin=0.3, ymax=1.1,
]
\addplot+[mark=*,mark options={mark size=1pt,solid},blue ] table[x=n_threads,y=efficiency_mean,y error=efficiency_std,discard if not={n_edges}{1},discard if not={sleep_time}{1.000000e-04},discard if not={test}{ttor_deps}]{micro_benchmarks/deps_comp.dat};
\addplot+[mark=*,mark options={mark size=1pt,solid},red ] table[x=n_threads,y=efficiency_mean,y error=efficiency_std,discard if not={n_edges}{2},discard if not={sleep_time}{1.000000e-04},discard if not={test}{ttor_deps}]{micro_benchmarks/deps_comp.dat};
\addplot+[mark=*,mark options={mark size=1pt,solid},purple] table[x=n_threads,y=efficiency_mean,y error=efficiency_std,discard if not={n_edges}{4},discard if not={sleep_time}{1.000000e-04},discard if not={test}{ttor_deps}]{micro_benchmarks/deps_comp.dat};
\addplot+[mark=*,mark options={mark size=1pt,solid},black ] table[x=n_threads,y=efficiency_mean,y error=efficiency_std,discard if not={n_edges}{8},discard if not={sleep_time}{1.000000e-04},discard if not={test}{ttor_deps}]{micro_benchmarks/deps_comp.dat};
\addplot+[mark=*,mark options={mark size=1pt,solid},brown ] table[x=n_threads,y=efficiency_mean,y error=efficiency_std,discard if not={n_edges}{16},discard if not={sleep_time}{1.000000e-04},discard if not={test}{ttor_deps}]{micro_benchmarks/deps_comp.dat};
\nextgroupplot[
title={\lstinline{spin_time} = 1ms},
legend entries={1 dep, 2 deps, 4 deps, 8 deps, 16 deps, 32 deps},
legend pos=outer north east,
ytick={},
width=4cm, font=\footnotesize, ymin=0.3, ymax=1.1,
]
\addplot+[mark=*,mark options={mark size=1pt,solid},blue ] table[x=n_threads,y=efficiency_mean,y error=efficiency_std,discard if not={n_edges}{1},discard if not={sleep_time}{1.000000e-03},discard if not={test}{ttor_deps}]{micro_benchmarks/deps_comp.dat};
\addplot+[mark=*,mark options={mark size=1pt,solid},red ] table[x=n_threads,y=efficiency_mean,y error=efficiency_std,discard if not={n_edges}{2},discard if not={sleep_time}{1.000000e-03},discard if not={test}{ttor_deps}]{micro_benchmarks/deps_comp.dat};
\addplot+[mark=*,mark options={mark size=1pt,solid},purple] table[x=n_threads,y=efficiency_mean,y error=efficiency_std,discard if not={n_edges}{4},discard if not={sleep_time}{1.000000e-03},discard if not={test}{ttor_deps}]{micro_benchmarks/deps_comp.dat};
\addplot+[mark=*,mark options={mark size=1pt,solid},black ] table[x=n_threads,y=efficiency_mean,y error=efficiency_std,discard if not={n_edges}{8},discard if not={sleep_time}{1.000000e-03},discard if not={test}{ttor_deps}]{micro_benchmarks/deps_comp.dat};
\addplot+[mark=*,mark options={mark size=1pt,solid},brown ] table[x=n_threads,y=efficiency_mean,y error=efficiency_std,discard if not={n_edges}{16},discard if not={sleep_time}{1.000000e-03},discard if not={test}{ttor_deps}]{micro_benchmarks/deps_comp.dat};
\nextgroupplot[
yshift=0.5cm,
ylabel={*PU Task},
ylabel style={align=center,at={(-0.2,0.5)}},
width=4cm, font=\footnotesize, ymin=0.3, ymax=1.1,
]
\addplot+[mark=*,mark options={mark size=1pt,solid},blue ] table[x=n_threads,y=efficiency_mean,y error=efficiency_std,discard if not={n_edges}{1},discard if not={sleep_time}{1.000000e-04},discard if not={test}{starpu_deps}]{micro_benchmarks/deps_comp.dat};
\addplot+[mark=*,mark options={mark size=1pt,solid},red ] table[x=n_threads,y=efficiency_mean,y error=efficiency_std,discard if not={n_edges}{2},discard if not={sleep_time}{1.000000e-04},discard if not={test}{starpu_deps}]{micro_benchmarks/deps_comp.dat};
\addplot+[mark=*,mark options={mark size=1pt,solid},purple] table[x=n_threads,y=efficiency_mean,y error=efficiency_std,discard if not={n_edges}{4},discard if not={sleep_time}{1.000000e-04},discard if not={test}{starpu_deps}]{micro_benchmarks/deps_comp.dat};
\addplot+[mark=*,mark options={mark size=1pt,solid},black ] table[x=n_threads,y=efficiency_mean,y error=efficiency_std,discard if not={n_edges}{8},discard if not={sleep_time}{1.000000e-04},discard if not={test}{starpu_deps}]{micro_benchmarks/deps_comp.dat};
\addplot+[mark=*,mark options={mark size=1pt,solid},brown ] table[x=n_threads,y=efficiency_mean,y error=efficiency_std,discard if not={n_edges}{16},discard if not={sleep_time}{1.000000e-04},discard if not={test}{starpu_deps}]{micro_benchmarks/deps_comp.dat};
\nextgroupplot[
yshift=0.5cm,
ytick={},
width=4cm, font=\footnotesize, ymin=0.3, ymax=1.1,
]
\addplot+[mark=*,mark options={mark size=1pt,solid},blue ] table[x=n_threads,y=efficiency_mean,y error=efficiency_std,discard if not={n_edges}{1},discard if not={sleep_time}{1.000000e-03},discard if not={test}{starpu_deps}]{micro_benchmarks/deps_comp.dat};
\addplot+[mark=*,mark options={mark size=1pt,solid},red ] table[x=n_threads,y=efficiency_mean,y error=efficiency_std,discard if not={n_edges}{2},discard if not={sleep_time}{1.000000e-03},discard if not={test}{starpu_deps}]{micro_benchmarks/deps_comp.dat};
\addplot+[mark=*,mark options={mark size=1pt,solid},purple] table[x=n_threads,y=efficiency_mean,y error=efficiency_std,discard if not={n_edges}{4},discard if not={sleep_time}{1.000000e-03},discard if not={test}{starpu_deps}]{micro_benchmarks/deps_comp.dat};
\addplot+[mark=*,mark options={mark size=1pt,solid},black ] table[x=n_threads,y=efficiency_mean,y error=efficiency_std,discard if not={n_edges}{8},discard if not={sleep_time}{1.000000e-03},discard if not={test}{starpu_deps}]{micro_benchmarks/deps_comp.dat};
\addplot+[mark=*,mark options={mark size=1pt,solid},brown ] table[x=n_threads,y=efficiency_mean,y error=efficiency_std,discard if not={n_edges}{16},discard if not={sleep_time}{1.000000e-03},discard if not={test}{starpu_deps}]{micro_benchmarks/deps_comp.dat};
\nextgroupplot[
yshift=0.5cm,
xlabel={Num.\ threads},
ylabel={*PU STF},
ylabel style={align=center,at={(-0.2,0.5)}},
xlabel near ticks,
width=4cm, font=\footnotesize, ymin=0.3, ymax=1.1, ylabel style={align=center},
]
\addplot+[mark=*,mark options={mark size=1pt,solid},blue ] table[x=n_threads,y=efficiency_mean,y error=efficiency_std,discard if not={n_edges}{1},discard if not={sleep_time}{1.000000e-04},discard if not={test}{starpu_deps_stf}]{micro_benchmarks/deps_comp.dat};
\addplot+[mark=*,mark options={mark size=1pt,solid},red ] table[x=n_threads,y=efficiency_mean,y error=efficiency_std,discard if not={n_edges}{2},discard if not={sleep_time}{1.000000e-04},discard if not={test}{starpu_deps_stf}]{micro_benchmarks/deps_comp.dat};
\addplot+[mark=*,mark options={mark size=1pt,solid},purple] table[x=n_threads,y=efficiency_mean,y error=efficiency_std,discard if not={n_edges}{4},discard if not={sleep_time}{1.000000e-04},discard if not={test}{starpu_deps_stf}]{micro_benchmarks/deps_comp.dat};
\addplot+[mark=*,mark options={mark size=1pt,solid},black ] table[x=n_threads,y=efficiency_mean,y error=efficiency_std,discard if not={n_edges}{8},discard if not={sleep_time}{1.000000e-04},discard if not={test}{starpu_deps_stf}]{micro_benchmarks/deps_comp.dat};
\addplot+[mark=*,mark options={mark size=1pt,solid},brown ] table[x=n_threads,y=efficiency_mean,y error=efficiency_std,discard if not={n_edges}{16},discard if not={sleep_time}{1.000000e-04},discard if not={test}{starpu_deps_stf}]{micro_benchmarks/deps_comp.dat};
\nextgroupplot[
yshift=0.5cm,
xlabel={Num.\ threads}, xlabel near ticks,
ytick={},
width=4cm, font=\footnotesize, ymin=0.3, ymax=1.1,
]
\addplot+[mark=*,mark options={mark size=1pt,solid},blue ] table[x=n_threads,y=efficiency_mean,y error=efficiency_std,discard if not={n_edges}{1},discard if not={sleep_time}{1.000000e-03},discard if not={test}{starpu_deps_stf}]{micro_benchmarks/deps_comp.dat};
\addplot+[mark=*,mark options={mark size=1pt,solid},red ] table[x=n_threads,y=efficiency_mean,y error=efficiency_std,discard if not={n_edges}{2},discard if not={sleep_time}{1.000000e-03},discard if not={test}{starpu_deps_stf}]{micro_benchmarks/deps_comp.dat};
\addplot+[mark=*,mark options={mark size=1pt,solid},purple] table[x=n_threads,y=efficiency_mean,y error=efficiency_std,discard if not={n_edges}{4},discard if not={sleep_time}{1.000000e-03},discard if not={test}{starpu_deps_stf}]{micro_benchmarks/deps_comp.dat};
\addplot+[mark=*,mark options={mark size=1pt,solid},black ] table[x=n_threads,y=efficiency_mean,y error=efficiency_std,discard if not={n_edges}{8},discard if not={sleep_time}{1.000000e-03},discard if not={test}{starpu_deps_stf}]{micro_benchmarks/deps_comp.dat};
\addplot+[mark=*,mark options={mark size=1pt,solid},brown ] table[x=n_threads,y=efficiency_mean,y error=efficiency_std,discard if not={n_edges}{16},discard if not={sleep_time}{1.000000e-03},discard if not={test}{starpu_deps_stf}]{micro_benchmarks/deps_comp.dat};
\end{groupplot}
\end{tikzpicture}
\caption{Efficiency vs.\ number of threads. Shared memory runtime dependency management overhead. The plots show the mean across 25 runs.}
\label{fig:benchmark_with_deps}
\end{figure}
\added{The conclusion of this section is that the overhead of \texttt{TTor}{} is comparable to (and sometimes lower than) that of OpenMP and StarPU.}
\subsection{Organization of the paper}
This paper is organized as follows.
\Cref{sec:features} describes \texttt{TTor}{}'s API and implementation.
\Cref{sec:benchmarks} compares \texttt{TTor}{} to StarPU and ScaLAPACK, first validating its
shared memory component and then comparing it on large linear algebra problems.
We finally survey previous work in \Cref{sec:previous_work} before concluding.
\section{TaskTorrent}
\label{sec:features}
\texttt{TTor}{} uses a PTG.
The DAG is expressed by providing at least three functions:
(1) one returning the number of in-dependencies of every task;
(2) one that runs the computational task and fulfills dependencies on other tasks;
(3) one returning the thread each task should be mapped to \added{(an option is provided to bind the task to that thread or leave it stealable).}
When their dependencies are satisfied, tasks are inserted into a thread pool, where a work-stealing algorithm
keeps the load balanced between the threads.
Tasks then run and fulfill other tasks' dependencies, locally (on the same rank) or remotely on a different rank.
In the case of remote dependencies, since all computations are asynchronous, the receiver rank cannot explicitly wait for data to arrive.
Hence, one-sided active messages are used. An active message (AM) is a pair (function, data).
Once the AM arrives on the receiver, the function is run with the data passed as argument.
This is typically used to store the data and fulfill dependencies, eventually triggering more tasks.
This approach means \texttt{TTor}{} never needs to store the full DAG.
Task dependencies are queried only when needed, and the DAG is discovered piece by piece.
In particular, \texttt{TTor}{} becomes aware of the existence of a specific task \textbf{only} when a task fulfills its first dependency. This makes \texttt{TTor}{} scalable and lightweight. The full DAG is never stored or even explored by any specific thread or rank, and the task management overhead is minimal. \Cref{fig:ttor_distributed_dag} illustrates this local DAG + AM model.
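This triggering mechanism can be mimicked by a sequential toy with a \lstinline{Taskflow}-like interface (method names are modeled on identifiers appearing in this paper, such as \lstinline{fulfill_promise}; this is not the actual \texttt{TTor}{} API):

```cpp
#include <functional>
#include <unordered_map>
#include <vector>

// Sequential toy mimicking TTor's triggering model (illustration only).
// fulfill_promise(k) decrements task k's dependency count, creating the
// count on first contact, and runs the task when it reaches zero; the
// real runtime does this concurrently across threads and ranks.
struct ToyTaskflow {
    std::function<int(int)> indegree;   // number of in-dependencies of k
    std::function<void(int)> task;      // the computational task
    std::unordered_map<int, int> deps;  // counts of tasks seen so far

    void fulfill_promise(int k) {
        auto it = deps.find(k);
        if (it == deps.end())  // the runtime becomes aware of k only now
            it = deps.emplace(k, indegree(k)).first;
        if (--it->second == 0) {
            deps.erase(it);
            task(k);  // run; the task may in turn fulfill other promises
        }
    }
};
```

A task only ever appears in \lstinline{deps} between its first fulfillment and its execution, which is why the full DAG never needs to be stored.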
\begin{figure}
\centering
\begin{tikzpicture}
\tikzstyle{every path}=[->,>=stealth];
\def\L{0.5cm};
\tikzstyle{every node}=[draw,circle,fill=black!20];
\node (a) {};
\node [above left = \L and \L of a] (b) {};
\node [above right = \L and \L of a] (c) {};
\node [left = \L of a] (d) {};
\node [right = \L of a] (e) {};
\node [below = \L of a] (f) {};
\path (a) edge (b);
\path (b) edge (d);
\path (c) edge (e);
\path (c) edge (a);
\path (a) edge (f);
\tikzstyle{every node}=[draw,circle,fill=black!70];
\node [right = 2cm of a] (a2) {};
\node [below left = \L and \L of a2] (b2) {};
\node [below right = \L and \L of a2] (c2) {};
\node [above = \L of a2] (d2) {};
\node [right = \L of a2] (e2) {};
\node [above right = \L and \L of a2] (f2) {};
\path (c2) edge (e2);
\path (e2) edge (a2);
\path (a2) edge (d2);
\path (a2) edge (b2);
\path (d2) edge (f2);
\path (a2) edge[dashed] (f);
\path (c) edge[dashed] (d2);
\tikzstyle{every node}=[];
\node [above = 2*\L of a] {Rank $a$};
\node [above = 2*\L of a2] {Rank $b$};
\node [left = \L of d,align=left] {DAG =\\functions\\over some\\index space};
\node [right = \L of e2,align=left] {Between\\ranks =\\one sided\\AMs};
\end{tikzpicture}
\caption{The model of \texttt{TTor}{}: a distributed graph of tasks expressed using a parametrized task graph (solid arrows), with explicit active messages (dashed arrows) between ranks to asynchronously insert/trigger tasks.}
\label{fig:ttor_distributed_dag}
\end{figure}
\section{Previous work}
\label{sec:previous_work}
\paragraph{Runtime systems}
As mentioned in \Cref{subsec:stf_ptg}, other task-based runtime systems exist. We highlight some of their characteristics.
PaRSEC \cite{6654146} is a runtime system centered around dense linear algebra. It takes the PTG approach but uses a custom programming language, the JDF.
This can make adoption harder for new users.
Legion \cite{bauer2014legion} is a general purpose STF runtime.
It has many features and can be used from C++ but requires the user to express everything using Legion's data structures.
It is also intended to be used primarily with GASNet \cite{bonachea2017gasnet} and not MPI.
Regent \cite{slaughter2015regent} proposes a higher level language on top of Legion, making programming more productive.
Unfortunately, obtaining high performance requires the user to program the mapper directly,
which is time-consuming and requires a detailed understanding of the inner workings of Legion.
Finally, StarPU \cite{augonnet2011starpu} uses C++ and is STF-based.
The data is initially distributed by the user like a classical MPI code,
and various scheduling strategies can be used to further improve performance.
However, user data still has to be wrapped using StarPU's data structures.
In designing \texttt{TTor}{} we chose to focus on the following features.
The message passing paradigm requires the programmer to distribute data
but simplifies the design of the library with the goal of minimizing global synchronization and communication.
The use of MPI and C++ makes integration into other codes easier.
Active messages are necessary because of the asynchronous nature of computations.
Finally the PTG approach leads to a minimal runtime overhead.
Note however that the choice of PTG has drawbacks: it can be difficult for the programmer to reason about task dependencies.
This can be easier in some applications (like linear algebra) than others.
\texttt{TTor}{} also does not consider concepts like memory affinity or accelerators at the moment. This is reserved for future work.
\paragraph{Task-based parallelism}
Task-based parallelism is now a common feature of many parallel programming systems.
Cilk \cite{joerg1996cilk, frigo1998implementation} added a multi-threading component to C in 1996, and Cilk-5 introduced \lstinline{spawn} and asynchronous computations.
Many other efforts followed, including OpenMP \cite{dagum1998openmp} (with tasking introduced in version 3.0),
Intel TBB \cite{reinders2007intel} (where task DAGs can be expressed), Cilk Plus \cite{robison2013composable}, XKaapi \cite{gautier2013xkaapi},
OmpSs \cite{duran2011ompss}, Superglue \cite{tillenius2015superglue}, and the SMPSs programming model \cite{perez2008dependency, perez2010handling}.
The Plasma \cite{agullo2009numerical, agullo2009plasma} (for CPU) and Magma \cite{tomov2009magma} (for CPU and GPU) libraries
are replacements for multithreaded LAPACK, where parallelism is obtained through tiled algorithms using a dynamic runtime, Quark \cite{yarkhan2011quark}.
Notice that all the previously mentioned work is typically only usable in a shared-memory context.
In particular, there is no support to let one rank trigger (or fulfill the dependency of) a task on another rank.
\paragraph{Distributed programming}
An explicit goal of \texttt{TTor}{} is to provide support for distributed computing.
The most common distributed programming paradigm is using explicit message passing like in MPI.
In MPI, ranks are completely independent and only communicate with each other through explicit message passing.
Charm++ \cite{kale1993charm++} takes an object-oriented approach. It exposes \emph{chares} which are concurrent objects communicating through messages.
We also mention \lstinline{DARMA/vt} \cite{lifflander2020darma}, a tasking and active message library in C++,
with other features such as load balancing and asynchronous collectives.
Finally, in the PGAS (partitioned global address space) model (like GASNet \cite{bonachea2017gasnet}), each rank can access a global address space through read (get) and write (put) operations.
Chapel \cite{callahan2004cascade}, Fortran Co-arrays \cite{numrich1998co}, UPC \cite{el2006upc} and UPC++ \cite{zheng2014upcxx} are examples of PGAS-based parallel programming languages.
\paragraph{Active messages}
One-sided active messages are another important feature of TaskTorrent.
Von Eicken et al. \cite{von1992active} argued in 1992 that active messages are a powerful mechanism to hide latency and improve performance.
Active messages are also a central part of UPC++ where they resemble the ones in \texttt{TTor}{}.
In UPC++, however, remote data is referred to using global data structures,
while \texttt{TTor}{} tends to use the C++ variable capture mechanism in lambda functions.
\subsection{Sparse Cholesky}
\begin{figure}
\centering
\subfloat[TaskTorrent strong scalings]{
\begin{tikzpicture}
\begin{loglogaxis}[
xlabel={Cores},xtick={1,4,16,64,256,1024},xticklabels={1,4,16,64,256,1024},
ylabel={Time [sec.]},width=6cm,legend entries={1,2,4,8,16,32},legend pos=outer north east,ymin=10,
]
\addplot+[] table[x=n_threads_total,y=total_time,discard if not={n_ranks}{1}] {sparse_cholesky/strong_scalings.dat};
\addplot+[] table[x=n_threads_total,y=total_time,discard if not={n_ranks}{2}] {sparse_cholesky/strong_scalings.dat};
\addplot+[] table[x=n_threads_total,y=total_time,discard if not={n_ranks}{4}] {sparse_cholesky/strong_scalings.dat};
\addplot+[] table[x=n_threads_total,y=total_time,discard if not={n_ranks}{8}] {sparse_cholesky/strong_scalings.dat};
\addplot+[] table[x=n_threads_total,y=total_time,discard if not={n_ranks}{16}] {sparse_cholesky/strong_scalings.dat};
\addplot+[] table[x=n_threads_total,y=total_time,discard if not={n_ranks}{32}] {sparse_cholesky/strong_scalings.dat};
\addplot+[dotted,no marks,domain=1:1024] (x,1000/x);
\end{loglogaxis}
\end{tikzpicture}
}
\subfloat[TaskTorrent weak scalings]{
\begin{tikzpicture}
\begin{loglogaxis}[
xlabel={Cores},xtick={1,4,16,64,256,1024},xticklabels={1,4,16,64,256,1024},
ylabel={Time [sec.]},width=6cm,ymin=0.5,
]
\addplot+[dashed] table[x=n_threads_total,y=total_time,discard if not={nrows}{262144}] {sparse_cholesky/scalings.dat};
\addplot+[dashed] table[x=n_threads_total,y=total_time,discard if not={nrows}{512000}] {sparse_cholesky/scalings.dat};
\addplot+[dashed] table[x=n_threads_total,y=total_time,discard if not={nrows}{1030301}] {sparse_cholesky/scalings.dat};
\addplot+[dashed] table[x=n_threads_total,y=total_time,discard if not={nrows}{2097152}] {sparse_cholesky/scalings.dat};
\addplot+[dashed] table[x=n_threads_total,y=total_time,discard if not={nrows}{4173281}] {sparse_cholesky/scalings.dat};
\addplot+[dotted,no marks,domain=1:32] (x,100/x);
\end{loglogaxis}
\end{tikzpicture}
}
\caption{Sparse Cholesky scalings}
\label{fig:sparse_cholesky_scalings}
\end{figure}
See \autoref{fig:sparse_cholesky_scalings}.
\section{Introduction}
\label{sec:intro}
\IEEEPARstart{O}{ver} the past two decades, a great evolution of optical communication systems, in terms of the spectral efficiency$\times$distance product, has been enabled by the advances in digital coherent detection. So far, most of the efforts on reaching the capacity of the nonlinear fiber--optic channel have focused on the C--band only~\cite{agrell2016roadmap}. However, squeezing the information inside this transmission window will soon reach its theoretical limit~\cite{Dar14}. To cope with the constant demand for higher throughput, novel solutions must be explored.
Optical communication systems operating across multiple transmission bands are an attractive solution for providing future capacity scaling. They can provide up to 10$\times$ higher capacity, compared to the C--band~\cite{Ferrari20}, on the already deployed SMF infrastructure. To make multi--band systems commercially deployable in the near future, large research efforts in terms of components, system and network design are needed~\cite{ciena, infinera, Doerr2016, Timurdogan19, Messner20, Tummidi20, Sarwar20, Wang20}.
One of the main challenges in realizing multi--band systems is the development of optical amplifiers that are able to provide sufficiently high gains over such a wide bandwidth. Additionally, a novel feature that may become essential is the ability to provide arbitrary gain profiles in a controlled and ultra--fast way. This is because different signal channels in a multi--band system are unevenly impacted by the interaction between the Kerr nonlinearity, amplified spontaneous emission (ASE) noise and stimulated Raman scattering (SRS)~\cite{Ferrari20}. Consequently, for the maximization of the achievable information rate (AIR) $\times$ distance product, non--flat signal channel power profiles are needed. Depending on the system configuration, signal channel power profiles will be a result of a complex optimization and may assume arbitrary shapes. Moreover, to address the future requirements on high capacity optical networks, ultra--fast gain profile re--configurability is needed~\cite{Napoli18}.
The current, and by far the most dominant, approach for programmable signal channel power profile shaping leverages wavelength selective switches (WSSs), whose primary function is to route the signals throughout the optical network. However, this approach is highly power inefficient, since it adjusts the channel powers by means of attenuation.
A novel approach for realizing signal channel power shaping is to employ optical amplifiers with programmable (arbitrary) gain profiles. By programmable we mean that the targeted gain profiles can be obtained in a single step by applying the appropriate pump laser driving voltages. In other words, a programmable optical amplifier is an amplifier that can provide arbitrary gain profiles, in a controlled way, with a single set of instructions. This is somewhat analogous to field-programmable gate arrays (FPGAs) in electronics.
Programmable gain amplifiers could be a potential game changer, as they would be able to simultaneously amplify the optical data signal and perform gain shaping. This has many impactful applications, such as compensation of wavelength--dependent loss in devices such as modulators and frequency combs, gain shaping in fixed--gain--profile amplifiers, and channel power profile adjustments to optimize the AIR in multi--band systems. In particular, if integrated combs are targeted as multi--channel sources, an efficient approach to gain shaping would be desirable, since the frequency components of integrated combs exhibit a large variation in power. Finally, optical amplifiers providing arbitrary gain profiles can be used in hybrid approaches to complement the gain, and overcome the limitations, of other optical amplifier technologies~\cite{Fukuchi01,Gordienko16,Ionescu19,Galdino19,Arnould20,Ye20}.
There are several approaches and technologies for realizing optical amplifiers covering multiple bands.
To date, works on multi--band optical amplifiers have focused on: rare--earth--doped fiber amplifiers (xDFAs) covering 17.56~THz over the O+E--band~\cite{Wang20} and 10.7~THz over the S+C--band~\cite{Sakamoto06}, semiconductor optical amplifiers (SOAs) covering 12.7~THz in the S+C+L--band~\cite{Renaudier18}, optical parametric amplifiers (OPAs) with 10~THz of bandwidth in the S+C+L--band~\cite{Kobayashi20}, Raman amplifiers (RAs) in combination with EDFAs, SOAs and OPAs achieving bandwidths ranging from 10.7 to 14~THz in the C+L and S+C+L--band~\cite{Fukuchi01,Ye20,Ionescu19,Galdino19,Arnould20,Gordienko16}, and pure RAs with bandwidths of up to 19.1~THz in the S+C+L--band~\cite{Rottwitt99,Zhou06,Chen18,Emori01,Iqbal20}. So far, the majority of the works in~\cite{Fukuchi01,Sakamoto06,Wang20,Renaudier18,Kobayashi20,Gordienko16,Ionescu19,Galdino19,Arnould20,Rottwitt99,Zhou06,Chen18,Emori01,Iqbal20} have focused on realizing flat gain profiles in the C+L and S+C+L--band. Recently, an amplifier relying on a hybrid SOA/Raman configuration has been demonstrated to achieve arbitrary loss/gain profile generation in the S+C+L--band over 12.3~THz of bandwidth~\cite{Ye20}.
Among the different solutions, RAs are the most suitable for realizing arbitrary gain profiles in a controlled way. This is because RAs allow for a flexible gain profile design by adjusting the pump powers and wavelengths, and provide gain across a broad range of wavelengths when operated in multi--pump configurations.
The challenge in Raman amplifier design is the selection of pump powers and wavelengths that result in a targeted gain profile. Several solutions to this optimization problem have been reported in the literature, but they have mainly focused on realizing flat gain profiles~\cite{Ferreira11,Zhou01,Chen18,Perlin02,Iqbal20,Mowla08,Jiang10,Emori01, Ania07}. Recently, a machine learning framework for the ultra--fast configuration of the pump powers and wavelengths has been theoretically proposed and experimentally demonstrated, as a proof of principle, in the C--band only~\cite{Zibar20,deMoura20}. The proposed approach can be used for the design of Raman amplifiers for which an arbitrary gain profile is achievable in a controlled way. However, moving from the C--band to multi--band operation and realizing wider gain profiles is significantly more challenging. This is partly due to the increased number of pumps that need to be controlled and the increased nonlinearity resulting from the higher overall powers in the optical fiber.
In this paper, we use the proposed machine learning framework for the experimental realization of multi--band RAs that can provide arbitrary gains, in a controlled way, in the C+L and S+C+L--band. Up to 8 pumps are employed to provide more than 5000 arbitrary gain profiles over up to 17.6~THz of bandwidth. We achieve a highly--accurate programmable set of gain profiles with a very low average maximum error per bandwidth (defined between the target and realized gain profiles), $E_{MAX}/BW$, of $1.6 \cdot 10^{-2}$~dB/THz.
This is the first experimental demonstration of an S+C+L--band optical amplifier that can realize arbitrary gain profiles, in a controlled way, using Raman effects only. We have achieved an important breakthrough by demonstrating an extremely low root mean square error per bandwidth (RMSE/BW) of 0.0045~dB/THz over an ultra--wide bandwidth of 17.6~THz (140.7~nm). More specifically, in terms of maximum error per bandwidth, our results are a record low. The presented approach and the obtained results therefore have great potential to become a relevant reference point for future research on this upcoming topic.
The previous experimental results that we published in~\cite{Zibar20,deMoura20} were limited to the C--band only. Increasing the bandwidth from C to S+C+L--band (a factor of 4.4 for the considered case) is highly challenging. We demonstrate that the proposed machine learning framework plays a key role in addressing those challenges.
Machine learning for broadband gain optimisation is a topic of growing interest, which is reflected in the recent work~\cite{Ye20} reporting a root mean squared error per bandwidth, $RMSE/BW$, of 0.033~dB/THz in a 12.3~THz bandwidth SOA/distributed Raman link scenario. This is an order of magnitude higher $RMSE/BW$ when compared to our current result of 0.0045 dB/THz over a larger bandwidth of 17.6~THz.
\label{sec:exp_setups}
\begin{figure*}[t]
\centering
\includegraphics[width=1\textwidth]{figures_final/uiara1}
\caption{(a) Experimental setup for the multi--band RA: path 1 refers to the C+L--band RA and path 2 is for the S+C+L--band dual--stage discrete RA. (b) Input optical signal spectrum. (c) Pump lasers spectrum and their expected contribution to the overall Raman gain.}
\label{fig:setup}
\end{figure*}
The structure of the paper is as follows: Section~\ref{sec:exp_setups} describes the experimental setup for realizing Raman amplifiers operating in the C+L and S+C+L--band. We also give a brief overview of the machine learning framework used to obtain programmable arbitrary gain profiles. Section~\ref{sec:results} presents, discusses and evaluates the experimental results. In Section~\ref{sec:conc}, conclusions and future work are presented.
\section{Experimental setup}
The experimental setup for realizing the multi--band RA is shown in Fig.~\ref{fig:setup}(a). By selecting path 1 or 2, operation in either the C+L (1) or the S+C+L--band (2) is enabled. To achieve gains in the C+L and S+C+L--band, 5 and 8 pump lasers are employed, respectively. Fig.~\ref{fig:setup}(c) illustrates the spectral pump allocation and the individual pump contributions to the overall Raman gain. We only consider counter--propagating pumps, whose wavelengths are fixed and listed in Table~\ref{tab:pumps}.
\begin{table}[!b]
\centering
\caption{\bf Pump lasers wavelengths and frequencies}
\begin{tabular}{ccccc}
\hline
& $P_1$ & $P_2$ & $P_3$ & $P_4$ \\
\hline
Wavelength [nm] & 1508 & 1485 & 1465 & 1445 \\
Frequency [THz] & 198.8 & 201.9 & 204.6 & 207.5 \\
\hline
& $P_5$ & $P_6$ & $P_7$ & $P_8$ \\
\hline
Wavelength [nm] & 1425 & 1405 & 1385 & 1365 \\
Frequency [THz] & 210.4 & 213.4 & 216.5 & 219.6 \\
\hline
\end{tabular}
\vspace{1ex}
\label{tab:pumps}
\end{table}
The gain profile control is performed by only adjusting the pump powers. Pump lasers $P_1...P_7$ are semiconductor laser diodes. Their output power is controlled by adjusting the driving currents. The corresponding power going into the RA is in the range from $\sim$16~dBm to $\sim$27~dBm. Pump laser $P_8$ is a Raman--based fiber laser and is controlled by adjusting its voltage. It provides power to the RA ranging from $\sim$20~dBm to $\sim$27~dBm.
The reason why we only optimize the pump laser powers is that there are no tunable, high--power pump lasers available within the considered frequency ranges. However, the selected pump laser frequencies fall within ranges that provide Raman gain profiles within the desired frequency bands.
\subsection{C+L--band Raman amplifier}
The C+L--band RA can be operated either as a discrete (7.5~km of inverse dispersion fiber (IDF)) or a distributed (75~km span of standard SMF) amplifier. An input optical signal covering the C+L--band, used for testing the performance of the RA, is generated by using two ASE sources for the C and L bands, channelized through a WSS to generate 90 lines placed on a 100~GHz ITU-T grid covering a 9.4~THz (77~nm) bandwidth. The total input signal power to the amplifier is adjusted by means of a variable attenuator to 0 and 10~dBm for the discrete and distributed C+L--band Raman amplifier, respectively. The corresponding optical spectrum is shown in Fig.~\ref{fig:setup}(b) (inside bracket 1) and is measured with a resolution of $\Delta \lambda = 0.1$~nm. The gaps between the C and L signal bands are due to the different ASE sources for these two bands.
An isolator is placed at the input to the IDF to prevent pump powers entering the C+L--band signal source and to minimize the double Rayleigh backscattering induced multipath interference~\cite{Iqbal19JLT}. Finally, an optical spectrum analyser (OSA) is used to capture the optical spectrum.
\subsection{S+C+L--band Raman amplifier}
The S+C+L--band RA is implemented as a two--stage sequential discrete RA. The first stage is responsible for providing the gain in the S--band and consists of 7.5~km of IDF and three pump lasers, $P_6...P_8$, used to control the gain profiles. The second stage is the same as the one used for the C+L--band RA. Note that distributing the pumps into two sequential stages reduces the strong depletion of the shorter wavelength pumps~\cite{Krummrich01}. The multi--band input optical signal (17.6~THz/140.7~nm) is generated by combining the optical signal from the C+L--band with a supercontinuum S--band source~\cite{El-Taher:s} and a single frequency laser operating at 185~THz. The resulting signal has a total of 148 frequency lines on a 100~GHz ITU-T grid. A variable attenuator is used to adjust the input signal power to 7~dBm. The corresponding optical spectrum is shown in Fig.~\ref{fig:setup}(b) (inside bracket 2). Due to the amplifier configuration, two pumps from the second stage ($P_{1-2}$) fall within the S--band signal. This means that some channels from the S--band need to be removed to avoid overlapping with the Rayleigh backscattered components of the pumps, leaving the gaps shown in Fig.~\ref{fig:setup}(b)~\cite{Iqbal:20}.
\subsection{Pump power control}
\begin{figure*}[t]
\centering
\includegraphics[width=0.32\textwidth]{figures_final/uiara2}
\includegraphics[width=0.32\textwidth]{figures_final/uiara3}
\includegraphics[width=0.32\textwidth]{figures_final/uiara4}
\caption{Measured on--off gain profiles for various pump laser currents and voltage configurations. C+L--band RA (a) distributed, (b) discrete and (c) dual stage discrete RA S+C+L--band.}
\label{fig:datasets}
\end{figure*}
The objective is to determine pump power settings that result in user--defined target gain profiles, such as tilted, flat or arbitrary gains. These settings are obtained off--line using the machine learning framework presented in~\cite{Zibar20} and then applied on--line to control the pump laser currents and voltage. As the framework in~\cite{Zibar20} is based on supervised learning, a data--set is required. It is acquired by varying the currents and the voltage of the pump lasers and measuring the corresponding gain profiles. The gain profiles are measured on a 100~GHz grid as the difference in power between the output optical spectra with the pump lasers turned on and off, also known as the on--off gain. As the currents and the voltage, $I_1,...I_7,V_8$, are drawn from a uniform distribution whose bounds are shown in Table~\ref{tab:pump_range}, we refer to the corresponding gain profiles as arbitrary.
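The data--set acquisition described above reduces to two simple operations: subtracting the pumps-off output spectrum from the pumps-on spectrum, and drawing pump currents uniformly within per-pump bounds. A minimal sketch, assuming spectra are given as per-channel power arrays in dBm (the helper names and the toy bounds, taken from the C+L distributed column of Table~\ref{tab:pump_range}, are illustrative, not the authors' code):

```python
import numpy as np

def on_off_gain(p_out_on_dbm, p_out_off_dbm):
    """Per-channel on-off gain in dB: output spectrum with pumps on
    minus output spectrum with pumps off (both in dBm)."""
    return np.asarray(p_out_on_dbm) - np.asarray(p_out_off_dbm)

def sample_currents(bounds, rng):
    """Draw one pump-current configuration uniformly within per-pump bounds."""
    lo = np.array([b[0] for b in bounds])
    hi = np.array([b[1] for b in bounds])
    return lo + (hi - lo) * rng.random(len(bounds))

# Illustrative bounds [A] for the 5 pumps of the C+L distributed amplifier.
bounds = [(0.20, 1.00), (0.20, 1.00), (0.20, 1.20), (0.20, 1.50), (0.20, 1.50)]
rng = np.random.default_rng(0)
currents = sample_currents(bounds, rng)  # one random configuration
```

Repeating this for thousands of random configurations and recording the measured on-off gains yields the supervised data--set used below.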
In Fig.~\ref{fig:datasets}, the measured on--off gain profiles, $G$, obtained for the C+L and the S+C+L--band are shown. For the discrete Raman amplifiers, increasing the pump laser output powers, beyond certain levels, leads to gain instabilities. The maximum allowable driving currents, for the pump lasers, are shown in Table~\ref{tab:pump_range} as the maximum values of the uniform distribution interval. As a consequence of the limited pump lasers output powers on the discrete C+L--band amplifier, we could not obtain as large gains and gain profile variations as compared to the distributed C+L--band amplifier. More specifically, the decreased driving currents for the lower frequency pumps ($P_1$ and $P_2$) is the reason why the gains in lower frequency region in Fig.~\ref{fig:datasets}(b), (186 THz--188 THz), are not as high as for the distributed amplifier (Fig.~\ref{fig:datasets}(a)). Additionally, the reduced current on the low frequency pump leads to a lower depletion experienced by the high frequency pumps.
We measure $M=5600$ and $M=4025$ gain profiles, each with $K=90$ and $K=148$ data points per gain profile, for the C+L and S+C+L--band, respectively. We denote the respective data--sets as $\mathcal{D}^{M\times (K+5)}_{C+L}=\{(G_1^i,...,G_K^i,I^i_1,...,I^i_5)\,|\,i=1,...,M\}$ and $\mathcal{D}^{M\times (K+8)}_{S+C+L}=\{(G_1^i,...,G_K^i,I^i_1,...,I^i_7,V^i_8)\,|\,i=1,...,M\}$.
\begin{table}[!h]
\centering
\caption{\bf {Current and voltage ranges}}
\begin{tabular}{cccc}
\hline
& C+L dist. & C+L disc. & S+C+L disc. \\
\hline
$I_1$ [A] & $[0.20:1.00]$ & $[0.20:0.90]$ & $[0.20:1.00]$ \\
$I_2$ [A] & $[0.20:1.00]$ & $[0.20:0.80]$ & $[0.20:0.80]$ \\
$I_3$ [A] & $[0.20:1.20]$ & $[0.20:1.20]$ & $[0.20:1.00]$ \\
$I_4$ [A] & $[0.20:1.50]$ & $[0.20:1.40]$ & $[0.20:1.40]$ \\
$I_5$ [A] & $[0.20:1.50]$ & $[0.20:1.50]$ & $[0.20:1.40]$ \\
$I_6$ [A] & - & - & $[0.20:1.20]$ \\
$I_7$ [A] & - & - & $[0.60:1.30]$ \\
$V_8$ [V] & - & - & $[1.80:2.40]$ \\
\hline
\end{tabular}
\vspace{1ex}
\label{tab:pump_range}
\end{table}
To find the machine learning model with the lowest prediction error, we allocate 3400 and 3000 data points for the C+L and S+C+L--band, respectively. We employ 10--fold cross--validation, which means that we use 90\% of the data for training (including hyperparameter optimization) and 10\% for testing, as described in~\cite{Zibar20,bishop2006}. For a more detailed explanation of the training of the employed machine learning model, see the Appendix Section. The remaining data points are later used for the final validation of the machine learning model when predicting pump laser currents for arbitrary gains.
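The 10-fold split above can be sketched as follows; this is a generic k-fold index generator under the stated 90\%/10\% convention, not the authors' implementation:

```python
import numpy as np

def kfold_indices(n, k=10, seed=0):
    """Yield (train, test) index arrays for k-fold cross-validation:
    each fold is held out once (10% test), the rest is used for training."""
    idx = np.random.default_rng(seed).permutation(n)
    folds = np.array_split(idx, k)
    for i in range(k):
        test = folds[i]
        train = np.concatenate([folds[j] for j in range(k) if j != i])
        yield train, test
```

For example, `kfold_indices(3400)` produces ten disjoint 340-point test folds covering the C+L training allocation.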
The procedure for obtaining a pump current configuration is then as follows: 1) a single--layer neural network, $NN_{inv}$, is employed to learn the mapping between the target gain profiles and the currents and voltage -- inverse system learning; 2) once the neural network has learned the inverse mapping, the pump currents and voltage corresponding to a set of target gain profiles are predicted; 3) the predicted currents and voltage are then applied to a second, multi--layer neural network, $NN_{fwd}$, which has learned the forward mapping between pump currents/voltage and gain profiles. The $NN_{fwd}$ thereby predicts the gain profile given the pump currents and voltage. If the error between the predicted and targeted gain profiles is not satisfactory, the pump currents and voltage are adjusted accordingly, i.e.~fine--optimization. The fine--optimization uses iterative gradient descent, backpropagating the error through $NN_{fwd}$ to adjust the currents and voltage as described in~\cite{Zibar20}; 4) the obtained currents and voltage are applied to the pump lasers in the experimental set--up, and new sets of measurements are performed; and 5) finally, to assess the accuracy of the predicted pump currents and voltage, we calculate the maximum absolute error between the target and the newly measured gain profiles (i.e.~$E_{MAX}$) and normalize it by the bandwidth ($BW$). The optimized topologies of the employed neural networks, $NN_{fwd}$ and $NN_{inv}$, as well as their performance evaluation, can be found in the Appendix Section.
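Steps 1-3 of this procedure can be illustrated numerically. In the sketch below, small linear maps stand in for the trained networks (the real $NN_{inv}$ and $NN_{fwd}$ are neural networks; the matrices `A`, `b`, `W` and the toy sizes are illustrative assumptions). The fine-optimization is the same idea as in the paper: gradient descent on the squared error of the forward model with respect to the pump currents.

```python
import numpy as np

rng = np.random.default_rng(1)
K, P = 12, 5                          # channels and pumps (toy sizes)
A = rng.normal(size=(K, P))           # "NN_fwd": gain = A @ I + b
b = rng.normal(size=K)
W = np.linalg.pinv(A)                 # "NN_inv": least-squares inverse mapping

def nn_fwd(I):
    """Forward model: pump currents -> gain profile."""
    return A @ I + b

def nn_inv(G_target):
    """Inverse model: target gain profile -> pump currents."""
    return W @ (G_target - b)

def fine_optimize(G_target, I0, steps=200):
    """Gradient descent on ||nn_fwd(I) - G_target||^2 with respect to I,
    mimicking the backpropagation-based fine-optimization."""
    lr = 0.5 / np.linalg.svd(A, compute_uv=False)[0] ** 2  # stable step size
    I = I0.copy()
    for _ in range(steps):
        grad = 2.0 * A.T @ (nn_fwd(I) - G_target)          # analytic gradient
        I -= lr * grad
    return I

G_target = nn_fwd(rng.uniform(0.2, 1.5, size=P))  # a realizable target profile
I_pred = nn_inv(G_target)                         # steps 1-2: inverse prediction
I_fine = fine_optimize(G_target, I_pred)          # step 3: fine-optimization
```

In the experiment, the fine-optimized currents and voltage are then applied to the lasers (step 4) and the resulting gain is re-measured (step 5).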
\section{Results and discussion}
\label{sec:results}
\subsection{Arbitrary gain profiles}
Fig.~\ref{fig:results_arb}(a)--(c) show the probability density function (PDF) and the cumulative density function (CDF) of the $E_{MAX}/BW$ for the C+L--band (distributed and discrete) and S+C+L--band (discrete) Raman amplifiers. The error is defined between the targeted arbitrary gain profiles, taken directly from the data--set (and not used for training the machine learning framework), and the gain profiles measured using the pump currents and voltage allocation provided by the machine learning framework. We use 2100, 2600 and 1025 target arbitrary gain profiles for the validation of the distributed C+L--band, discrete C+L--band and discrete S+C+L--band amplifiers, respectively. We compare the accuracy of the pump currents and voltage allocation when using only the inverse mapping neural network, $(NN_{inv})$, and when using both the inverse and forward mapping neural networks, $(NN_{inv}+NN_{fwd})$, which allows for fine--optimization of the pump currents and voltage.
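The two figures of merit evaluated throughout this section are straightforward to compute. A minimal sketch (the function names are ours; gains are in dB and the bandwidth in THz, matching the dB/THz units used in the paper):

```python
import numpy as np

def e_max_per_bw(g_target_db, g_measured_db, bw_thz):
    """Maximum absolute error between target and measured gain profiles,
    normalized by the amplifier bandwidth (dB/THz)."""
    d = np.abs(np.asarray(g_target_db) - np.asarray(g_measured_db))
    return d.max() / bw_thz

def rmse_per_bw(g_target_db, g_measured_db, bw_thz):
    """Root-mean-square error between target and measured gain profiles,
    normalized by the amplifier bandwidth (dB/THz)."""
    d = np.asarray(g_target_db) - np.asarray(g_measured_db)
    return np.sqrt(np.mean(d ** 2)) / bw_thz
```

Applying these per validation profile and histogramming the results gives the PDFs and CDFs shown in the figures.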
\begin{figure*}[!t]
\centering
\includegraphics[width=0.32\textwidth]{figures_final/uiara5}
\includegraphics[width=0.32\textwidth]{figures_final/uiara6}
\includegraphics[width=0.32\textwidth]{figures_final/uiara7}
\caption{Probability density function (PDF, top) and cumulative density function (CDF, bottom) of the $E_{MAX}/BW$, with indication of mean, $\mu$ and standard deviation, $\sigma$: (a) C+L--band distributed RA, (b) C+L--band discrete RA and (c) S+C+L--band discrete RA.}
\label{fig:results_arb}
\end{figure*}
\begin{figure*}[!t]
\centering
\includegraphics[width=0.32\textwidth]{figures_final/uiara8}
\includegraphics[width=0.32\textwidth]{figures_final/uiara9}
\includegraphics[width=0.32\textwidth]{figures_final/uiara10}
\caption{Probability density function (PDF, top) and cumulative density function (CDF, bottom) of the $RMSE/BW$, with indication of mean, $\mu$, standard deviation, $\sigma$: (a) C+L--band distributed RA, (b) C+L--band discrete RA and (c) S+C+L--band discrete RA.}
\label{fig:results_arb_rmse}
\end{figure*}
The PDFs shown in Fig.~\ref{fig:results_arb}(b)--(c) illustrate that, for the discrete RAs, highly--accurate pump current predictions, resulting in a low mean and standard deviation, can be obtained using only $NN_{inv}$. Thus, the currents and voltage prediction is obtained in an ultra--fast way, as $NN_{inv}$ only involves matrix computations. We notice that the mean and standard deviation decrease by a factor of $\sim$2 when going from the C+L to the S+C+L--band. This is mainly because the two schemes have the same performance in terms of $E_{MAX}$, while the S+C+L--band has an almost two times wider bandwidth. However, qualitatively, the results for the C+L and S+C+L--band are comparable.
If $NN_{inv}+NN_{fwd}$ is used, a slight increase in the mean and the standard deviation is observed. This is because $NN_{inv}$ has already found a pump current configuration that minimizes the mean square error. Applying the fine--optimization introduces small random deviations around this minimum and slightly worsens the performance.
For both discrete RA schemes, the CDF shows that most of the cases already present an $E_{MAX}/BW$ lower than $6\cdot10^{-2}$~dB/THz, before the fine--optimization, i.e.~97\% of the cases for the C+L--band and $\sim$100\% for the S+C+L--band.
Compared to the discrete RA, the resulting PDF for the distributed RA (Fig.~\ref{fig:results_arb}(a)) has a higher mean and standard deviation when considering only $NN_{inv}$. On the other hand, a significant reduction can be obtained after applying fine--optimization $NN_{inv}+NN_{fwd}$, as also illustrated by the CDF. Indeed, the fine--optimization significantly increases the number of cases with $E_{MAX}/BW$ lower than $6\cdot10^{-2}$~dB/THz, i.e.~from 18.7\% to 95.4\%.
To understand why only the distributed amplifier benefits from the fine--optimization, we need to consider the mean and the standard deviation of the predicted RMSE for the arbitrary gain profiles when applying $NN_{inv}$ only. This information is obtained from Fig.~\ref{fig:results_arb_rmse}(a)--(c) by de-normalizing it with the amplifier bandwidth. The corresponding mean and standard deviations, ($\mu \pm \sigma$), for the distributed C+L--band, discrete C+L--band and discrete S+C+L--band amplifier are: $0.46 \pm 0.10$ dB, $0.21 \pm 0.06$ dB and $0.08 \pm 0.05$ dB. As the RMSE values for the discrete C+L-band and, especially, discrete S+C+L--band amplifier are already low, there are no observable improvements when applying the fine--optimization.
Finally, in Fig.~\ref{fig:results_arb_rmse}(a)-(c), the resulting PDF and CDF of the RMSE per bandwidth are plotted for the distributed and discrete amplifiers. The figure shows that very low mean and standard deviation values are achievable.
\subsection{Flat and tilted gain profiles}
\begin{figure*}[!t]
\centering
\includegraphics[width=0.32\textwidth]{figures_final/uiara11}
\includegraphics[width=0.32\textwidth]{figures_final/uiara12}
\includegraphics[width=0.32\textwidth]{figures_final/uiara13}
\caption{(a)--(f): the predicted and the target flat and tilted (on--off) gain profiles as a function of wavelength. (g)--(i) $E_{MAX}/BW$ and (j)--(l) $RMSE/BW$ as a function of gain for the flat and the tilted gains.}
\label{fig:results_flat_tilted}
\end{figure*}
Next, we investigate the ability of the machine learning framework to predict accurate pump current and voltage allocations for the design of flat and tilted gain profiles using the discrete and distributed RAs, in C+L and S+C+L--band. Flat gains ranging from 6 to 16~dB (C+L--band distributed RA), 7 to 15~dB (C+L--band discrete RA), and 14 to 20~dB (S+C+L--band discrete RA) are evaluated in steps of 1~dB. For the tilted profiles, slopes of approximately 0.24~dB/THz (C+L--band RAs) and 0.20~dB/THz (S+C+L--band RA) are considered. These values were chosen to provide an overall tilt of around 1~dB on each band.
Fig.~\ref{fig:results_flat_tilted} shows the predicted and target flat ((a)-(c)) and tilted ((d)-(f)) gain profiles, as a function of frequency, for the distributed and discrete RAs operating in the C+L and S+C+L--band. Only a subset of gains (in 2~dB steps) is shown for better visualization. The corresponding $E_{MAX}/BW$ for all gains under consideration is shown in Fig.~\ref{fig:results_flat_tilted}(g)-(i). We only show results obtained after using $NN_{inv}+NN_{fwd}$, as the fine--optimization significantly reduced the error for all the amplifier schemes and their evaluated gains.
The reason why $NN_{inv}$ alone is not able to provide accurate solutions for the flat and tilted gain profiles is that, in general, multi--layer neural networks are good at interpolating but not at extrapolating. More precisely, neural networks provide highly accurate predictions for examples that are close to the examples in the training data--set. The numbers of ``close to flat gain'' profiles ($\max([G_1,...,G_K])-\min([G_1,...,G_K])\leq 1.2$~dB) in the training data--sets, out of the total data--set sizes, are: 13/3464, 61/3000 and 0/3000 for the distributed C+L, discrete C+L and discrete S+C+L--band amplifiers, respectively. The low number of ``close to flat gain'' profiles in the training data--sets indicates that $NN_{inv}$ is extrapolating when predicting the pump configurations for these flat gains. The same analysis applies to the tilted gain profiles.
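The flatness criterion used in this count is a simple peak-to-peak test per profile. A short sketch (the function name is ours; the 1.2~dB threshold is the one quoted above):

```python
import numpy as np

def count_near_flat(gains_db, threshold_db=1.2):
    """Count profiles whose peak-to-peak gain excursion is at most
    threshold_db. gains_db: (M, K) array of M profiles on K channels."""
    g = np.asarray(gains_db, dtype=float)
    excursion = g.max(axis=1) - g.min(axis=1)
    return int((excursion <= threshold_db).sum())
```

Running this over each training data--set yields counts such as 13/3464 for the distributed C+L--band amplifier.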
Additionally, as the input dimension of a neural network increases, an increasing number of training data points is needed to cover all the combinations. This is explained in detail in~\cite{Zibar20,bishop2006}. The input dimensions of $NN_{inv}$ for the S+C+L and C+L--band amplifiers are 148 and 90, respectively (Table~\ref{tab:NNinv} in the Appendix Section). The $NN_{fwd}$s used in the fine--optimization routine, on the other hand, have significantly smaller input dimensions than the $NN_{inv}$s, i.e.~5 for the distributed and discrete C+L and 8 for the discrete S+C+L--band amplifier (Table~\ref{tab:NNdir} in the Appendix Section). This implies that they are easier to train and can provide accurate predictions when trained on a smaller data--set. That is why the fine--optimization is able to outperform $NN_{inv}$, even though $NN_{fwd}$ is trained on the same data--set.
Furthermore, we would like to stress that, by employing the fine--optimization, large data--sets for training $NN_{inv}$ are not necessary, as we are still able to obtain highly--accurate gain designs.
A general trend observed in Fig.~\ref{fig:results_flat_tilted}(a)--(f) is that the predicted gain oscillates around the target gain profile. The magnitude of the oscillations tends to increase with gain. Moreover, for the S+C+L--band RA, the oscillation amplitude increases with frequency, reaching up to 2~dB of maximum error with respect to the target.
To understand this behavior, it is worth mentioning that we observed some power instabilities in the supercontinuum S--band source and in the Raman--based fiber laser $P_8$. Additionally, recall that the broadband and nonuniform Raman gain spectrum of a single pump, with a peak located near 12.5~THz below the pump frequency for the IDF, partially overlaps with that of the other pumps in the multiple--pump configurations considered in this work, as illustrated in Fig.~\ref{fig:setup}(c). In the S--band, besides pumps $P_{6-8}$, there are also contributions from pumps $P_{1-5}$, because the S--band lies within the Raman gain spectrum bandwidth of all these pumps. This makes the design more complex in this region. Thus, although the machine learning framework is expected to deal with these broadband effects when adjusting the pumps (since the two stages of the S+C+L--band discrete RA are jointly trained), a higher error is also expected in the S--band.
It is observed in Fig.~\ref{fig:results_flat_tilted}(h)-(i) that the $E_{MAX}/BW$ for the discrete RA in C+L and S+C+L--band is similar for the flat and the tilted gain profiles. The $E_{MAX}/BW$ is kept below $1.1\cdot10^{-1}$ and $0.9\cdot10^{-1}$~dB/THz for the design of flat and tilted gain profiles, respectively. On the other hand, the $E_{MAX}/BW$ for the distributed RA shown in Fig.~\ref{fig:results_flat_tilted}(g) is higher for the design of the flat gains, but it is still kept below $1.4\cdot10^{-1}$~dB/THz. The reason may be related to the pump distribution, i.e.~the number of pumps and their wavelengths being more suitable to provide a tilted gain profile. This can be observed in the experimental data--set gain profiles shown in Fig.~\ref{fig:datasets}. The same analysis does not apply to the S+C+L--band, since there is no clear flat/tilted profile trend in its data--set gain curves. Therefore, we also need to take into account that there is a limitation on the theoretically achievable gain tilt and flatness given an experimental set--up with fixed pump laser wavelengths. Fig.~\ref{fig:results_flat_tilted}(j)-(l) shows $RMSE/BW$, and it is observed that the trends are very similar to those for $E_{MAX}/BW$.
\vspace{0.15cm}
Finally, we have demonstrated that, by only changing the pump powers, we are able to achieve low design errors for arbitrary, flat and tilted gain profiles. In conclusion, adjusting the pump powers only may be sufficient to obtain low errors for various gain profiles, provided that a sufficient number of pump frequencies is evenly distributed over the band. We may expect even lower errors if we are also able to control the pump laser frequencies. However, there are no tunable pump lasers available within the considered frequency ranges.
To put the presented work in perspective, in Fig.~\ref{fig:record}, the $E_{MAX}/BW$ is plotted for various experimental demonstrations of multi--band amplifiers. It is observed that the presented work achieves both a low error and a broad bandwidth by means of machine learning.
\begin{figure}[!t]
\centering
\includegraphics[width=0.47\textwidth]{figures_final/uiara14}
\caption{$E_{MAX}/BW$ as a function of amplifier bandwidth.}
\label{fig:record}
\end{figure}
\section{Conclusion}
\label{sec:conc}
A multi--band programmable gain Raman amplifier operating in C+L and S+C+L--band is experimentally demonstrated.
The key enabling technique is the machine learning framework that allows for ultra--fast and highly--accurate prediction of the pump currents and voltage for providing the targeted gain profiles. The ability to generate arbitrary gain profiles in a controlled and fast way may provide novel approaches for the intelligent utilization of the ultra--wideband spectrum and become a key feature for future optical communication systems. Moreover, the programmable gain optical amplifier may advance other areas of fundamental science requiring spectral shaping, such as optical frequency combs.
\section*{Appendix}
The machine learning framework used in this paper to achieve highly accurate Raman amplifier (RA) programmable gains is based on two artificial neural networks. The first neural network $NN_{inv}$ models the RA inverse mapping, i.e., the mapping between gain profiles and pump lasers' currents/voltage. The forward mapping, i.e., the mapping between the pump lasers' currents/voltage and gain profiles, is instead learned by a second neural network $NN_{fwd}$. In the following, Section A describes how these two NNs are trained for the different RA schemes considered in this paper, and Section B shows their prediction accuracy. Training and validation are performed on disjoint experimental data--sets, whose total numbers of elements are shown in Table~\ref{tab:datasets}. Section C presents the pump configurations obtained after using $NN_{inv}+NN_{fwd}$ for flat and tilted gain profiles.
\begin{table}[h]
\centering
\caption{\bf Experimental data--set distribution}
\begin{tabular}{cccc}
\hline
RA scheme & C+L dist. & C+L disc. & S+C+L disc. \\
\hline
Training & 3464 & 3000 & 3000 \\
Validation & 2100 & 2600 & 1025 \\
\hline
\end{tabular}
\label{tab:datasets}
\end{table}
\subsection{Neural networks training}
\label{sec:training}
$NN_{inv}$ is trained using random projection (RP). This training algorithm, also known as extreme learning machine (ELM)~\cite{Huang2011}, initializes the weights of the hidden layers randomly, according to a normal distribution with mean zero and a certain standard deviation $\sigma_{NN_{init}}$, the NN initialization parameter. This random weight assignment is independent of the training data--set and requires a high number of hidden nodes, as these weights are kept untrained. The training data--set is used to optimize only the last-layer weights by regularized least squares, with a regularization parameter $\lambda$. Since this is performed in a single step, the training time is drastically reduced compared to standard approaches that update all the weights in an iterative numerical routine. $NN_{inv}$ models for each RA scheme are shown in Table~\ref{tab:NNinv}, where $f_{act}$ is the nonlinear activation function for all nodes (except the ones on the last layer, which use linear functions), $numHL$ is the number of hidden layers, $numHN$ is the number of hidden nodes, and $D_{input/output}$ is the input/output dimension. To reduce the impact of the randomly initialized weights on the RP method, 20 parallel and independent $NN_{inv}$ are trained and the pump configuration prediction is the average of the 20 $NN_{inv}$ outputs~\cite{Zibar20}. In Table~\ref{tab:NNinv}, $f_{act}$, $numHN$, $\sigma_{NN_{init}}$ and $\lambda$ were obtained after a hyperparameter optimization routine using k-fold cross validation~\cite{bishop2006}.
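As a concrete sketch of this training scheme, the snippet below builds a single-hidden-layer network by random projection: the hidden weights are drawn once from a zero-mean normal distribution and frozen, and only the output weights are obtained in closed form by regularized least squares. All names and default values are illustrative, not the tuned hyperparameters reported in Table~\ref{tab:NNinv}:

```python
import numpy as np

def train_elm(X, Y, num_hidden=500, sigma_init=1e-2, lam=1e4, act=np.sin, seed=0):
    """Single-hidden-layer NN trained by random projection (ELM).

    Hidden weights/biases are drawn once from N(0, sigma_init^2) and kept
    frozen; only the output layer is fit in closed form by regularized
    least squares. Defaults are illustrative, not the paper's values.
    """
    rng = np.random.default_rng(seed)
    W = rng.normal(0.0, sigma_init, size=(X.shape[1], num_hidden))  # frozen input weights
    b = rng.normal(0.0, sigma_init, size=num_hidden)                # frozen biases
    H = act(X @ W + b)                                              # hidden-layer activations
    # Regularized least squares for the output weights:
    # beta = (H^T H + lam I)^{-1} H^T Y  -- a single linear solve, no iterations.
    beta = np.linalg.solve(H.T @ H + lam * np.eye(num_hidden), H.T @ Y)
    return lambda X_new: act(X_new @ W + b) @ beta
```

Averaging the outputs of 20 such models trained with different random seeds would then give the ensemble prediction used for $NN_{inv}$.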
\begin{table}[b]
\centering
\caption{\bf Neural network models for $NN_{inv}$}
\begin{tabular}{cccc}
\hline
RA scheme & C+L dist. & C+L disc. & S+C+L disc. \\
\hline
Training alg. & RP & RP & RP \\
$f_{act}$ & logsig & sine & sine \\
$numHL$ & 1 & 1 & 1 \\
$numHN$ & 760 & 500 & 500 \\
$D_{input}$ & 90 & 90 & 148 \\
$D_{output}$ & 5 & 5 & 8 \\
$\sigma_{NN_{init}}$ & $6.0 \cdot 10^{-3}$ & $2.6 \cdot 10^{-2}$ & $1.0 \cdot 10^{-2}$ \\
$\lambda$ & $1.0 \cdot 10^{9}$ & $1.0 \cdot 10^{3}$ & $1.0 \cdot 10^{4}$ \\
\hline
\end{tabular}
\label{tab:NNinv}
\end{table}
$NN_{fwd}$ is trained differently for each RA scheme. For the C+L--band RA (discrete and distributed), $NN_{fwd}$ is trained in the standard way, iteratively updating all the weights of the NN using the Levenberg-Marquardt (LM) method. However, the high input and output dimensions of the S+C+L--band RA scheme make the use of LM optimization challenging due to the long convergence time. Thus, RP is applied again only for this scheme. Table~\ref{tab:NNdir} summarizes the $NN_{fwd}$ parameters for each RA scheme, where only the RP parameters $f_{act}$, $numHN$, $\sigma_{NN_{init}}$ and $\lambda$ were obtained after a hyperparameter optimization routine. Table~\ref{tab:NNdir} also shows that the faster RP training comes at the cost of a larger network, with 500 hidden nodes instead of 20 when using LM.
\begin{table}[t]
\centering
\caption{\bf Neural network models for $NN_{fwd}$}
\begin{tabular}{cccc}
\hline
RA scheme & C+L dist. & C+L disc. & S+C+L disc. \\
\hline
Training alg. & LM & LM & RP \\
$f_{act}$ & tanh & tanh & tanh \\
$numHL$ & 2 & 2 & 1 \\
$numHN$ & 10 & 10 & 500 \\
$D_{input}$ & 5 & 5 & 8 \\
$D_{output}$ & 90 & 90 & 148 \\
$\sigma_{NN_{init}}$ & * & * & $1.0 \cdot 10^{-3}$ \\
$\lambda$ & ** & ** & $1.0 \cdot 10^{8}$ \\
\hline
\end{tabular}
\vspace{1ex}
{\\ \raggedright (*) Nguyen-Widrow initialization algorithm~\cite{initnw};
(**) Dynamically modified during training according to~\cite{Hagan94}. \par}
\label{tab:NNdir}
\end{table}
\subsection{Neural networks validation}
$NN_{inv}$'s performance in predicting pump currents/voltage is presented in Fig.~\ref{fig:NNinv}. The metric used is the absolute error relative to the maximum current/voltage excursion for each pump laser. Fig.~\ref{fig:NNinv} shows the probability density functions (PDF) and the cumulative distribution functions (CDF) over all the cases on the validation data--set and all pump lasers. Notice that the errors are kept below 2\% for 95\% of the cases for all the RA schemes.
The prediction performance for the $NN_{fwd}$ is evaluated in terms of root mean squared error ($RMSE$) and maximum absolute error ($E_{MAX}$) between predicted $G_P$ and target $G_T$ gain profiles, extracted from the $K$ WDM points (spectrum), given by
\begin{equation}
RMSE = \sqrt{\frac{1}{K}\sum_{i=1}^{K}(G_{P,i}-G_{T,i})^2},
\label{eq:mse}
\end{equation}
\begin{equation}
E_{MAX} = \max_{i \in \{1,\dots,K\}} |G_{P,i}-G_{T,i}|,
\end{equation}
\noindent where $K=90$ and $K=148$ for C+L and S+C+L-band RAs, respectively. Fig.~\ref{fig:NNdir} shows the PDF for $RMSE$ and $E_{MAX}$ over all the cases on the validation data--set.
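For reference, both metrics can be computed directly from the predicted and target gain vectors sampled on the $K$ WDM points; the helper below is an illustration, not part of the experimental code:

```python
import numpy as np

def gain_errors(G_P, G_T):
    """Return (RMSE, E_MAX) between predicted and target gain profiles in dB."""
    diff = np.asarray(G_P, float) - np.asarray(G_T, float)
    rmse = np.sqrt(np.mean(diff ** 2))   # root mean squared error over the K points
    e_max = np.max(np.abs(diff))         # worst-case deviation over the spectrum
    return rmse, e_max
```

Dividing both values by the amplifier bandwidth then gives the $RMSE/BW$ and $E_{MAX}/BW$ figures of merit used throughout the paper.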
In Fig.~\ref{fig:NNdir}, the overall $NN_{fwd}$ performances for both C+L--band RAs are consistent with the ones obtained in~\cite{earlyAccessBrusinJLT2020}, which also considers a C+L--band RA (distributed scheme only) with the same NN model and training algorithms. On the other hand, the worse performance obtained here by the S+C+L--band RA scheme in terms of $E_{MAX}$ can be explained by its more complex mapping, relating more pumps to the gain over a wider bandwidth. The S+C+L--band RA scheme was also the only model that used RP, but the study presented in~\cite{earlyAccessBrusinJLT2020} showed that, for the Raman amplifier case, LM only outperforms RP for a higher number of hidden nodes, which requires even more training time.
\begin{figure}[t]
\centering
\includegraphics[width=0.4\textwidth]{figures_final/uiara15}
\caption{Probability density function (PDF) and cumulative distribution function (CDF) of the $NN_{inv}$ pump current/voltage prediction error, with indication of the mean $\mu$ and standard deviation $\sigma$.}
\label{fig:NNinv}
\end{figure}
\begin{figure}[t]
\centering
\includegraphics[width=0.4\textwidth]{figures_final/uiara16}
\caption{Probability density function (PDF) of the $NN_{fwd}$ gain prediction error: (a) $RMSE$ and (b) $E_{MAX}$ with indication of mean, $\mu$, standard deviation, $\sigma$, and maximum (max) values.}
\label{fig:NNdir}
\end{figure}
The errors $RMSE$ and $E_{MAX}$ are non-convex and unknown functions of the pump configuration that might not share the same local minima, i.e., the pump configuration that minimizes $RMSE$ might not minimize $E_{MAX}$. However, since the fine--optimization is a gradient-based procedure, it needs a cost function that is differentiable with respect to the pump parameters, which makes the $MSE$ the only candidate. When the PDF curves in Fig.~\ref{fig:NNdir}(a) and (b) present similar shapes, as for the C+L--band RAs, it might be an indication that the minima of these two errors occur for similar pump configurations and, consequently, that minimizing the $MSE$ (which scales with the $RMSE$) may also minimize $E_{MAX}$. For the S+C+L--band RA, on the other hand, where the $E_{MAX}$ and $RMSE$ PDF curves have completely different shapes, it is more likely that minimizing the $MSE$ is not the same as minimizing $E_{MAX}$.
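The fine--optimization can be sketched as gradient descent on the $MSE$ cost evaluated through the forward model. In this illustrative version the gradient is estimated by finite differences over a generic callable `nn_fwd`, whereas an actual implementation could backpropagate through the trained network; the learning rate and step count are arbitrary:

```python
import numpy as np

def fine_optimize(p0, nn_fwd, G_T, lr=0.5, steps=500, eps=1e-4):
    """Refine a pump configuration by gradient descent on the MSE between
    the forward-model gain prediction and the target profile G_T.
    nn_fwd is any callable mapping a pump vector to a gain vector (a
    stand-in for the trained forward network)."""
    p = np.asarray(p0, dtype=float).copy()
    for _ in range(steps):
        base = np.mean((nn_fwd(p) - G_T) ** 2)   # current MSE cost
        grad = np.zeros_like(p)
        for i in range(p.size):                  # forward-difference gradient
            dp = np.zeros_like(p)
            dp[i] = eps
            grad[i] = (np.mean((nn_fwd(p + dp) - G_T) ** 2) - base) / eps
        p -= lr * grad                           # gradient-descent update
    return p
```

Starting from the $NN_{inv}$ prediction as `p0`, a loop of this form plays the role of the fine--optimization stage.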
\subsection{Pump configuration for flat and tilted gain profiles}
The pump configurations to achieve flat and tilted gain profiles in Fig.~\ref{fig:results_flat_tilted} are shown in Tables~\ref{tab:pump_flat_CL_dist} to \ref{tab:pump_tilted_SCL}. Current/voltage values are presented from the minimum (first line) to the maximum (last line) gain values, i.e., for the distributed C+L RA case presented in Table~\ref{tab:pump_flat_CL_dist}, the first line corresponds to the bottom gain curve in Fig.~\ref{fig:results_flat_tilted}(a) (minimum gain), and the last line corresponds to the upper gain curve. Recall that these values were obtained after fine optimization.
\begin{table}[!h]
\centering
\caption{\bf {C+L dist. - pump configuration for flat gain profiles}}
\begin{tabular}{ccccc}
\hline
$I_1$ & $I_2$ & $I_3$ & $I_4$ & $I_5$ \\
{[A]} & [A] & [A] & [A] & [A] \\
\hline
0.43 & 0.20 & 0.23 & 0.42 & 0.22 \\
0.49 & 0.20 & 0.29 & 0.54 & 0.30 \\
0.52 & 0.21 & 0.35 & 0.65 & 0.39 \\
0.50 & 0.22 & 0.40 & 0.74 & 0.53 \\
0.45 & 0.24 & 0.43 & 0.82 & 0.70 \\
0.39 & 0.28 & 0.44 & 0.87 & 0.92 \\
\hline
\end{tabular}
\vspace{1ex}
\label{tab:pump_flat_CL_dist}
\end{table}
\begin{table}[!h]
\centering
\caption{\bf {C+L dist. - pump configuration for tilted gain profiles}}
\begin{tabular}{ccccc}
\hline
$I_1$ & $I_2$ & $I_3$ & $I_4$ & $I_5$ \\
{[A]} & [A] & [A] & [A] & [A] \\
\hline
0.49 & 0.20 & 0.21 & 0.22 & 0.20 \\
0.60 & 0.20 & 0.27 & 0.39 & 0.21 \\
0.66 & 0.20 & 0.34 & 0.53 & 0.26 \\
0.67 & 0.21 & 0.41 & 0.66 & 0.35 \\
0.63 & 0.23 & 0.46 & 0.77 & 0.47 \\
0.57 & 0.26 & 0.49 & 0.87 & 0.63 \\
\hline
\end{tabular}
\vspace{1ex}
\label{tab:pump_tilted_CL_dist}
\end{table}
\begin{table}[!h]
\centering
\caption{\bf {C+L disc. - pump configuration for flat gain profiles}}
\begin{tabular}{ccccc}
\hline
$I_1$ & $I_2$ & $I_3$ & $I_4$ & $I_5$ \\
{[A]} & [A] & [A] & [A] & [A] \\
\hline
0.51 & 0.26 & 0.27 & 0.45 & 0.29 \\
0.55 & 0.29 & 0.34 & 0.59 & 0.38 \\
0.56 & 0.31 & 0.40 & 0.71 & 0.51 \\
0.52 & 0.35 & 0.45 & 0.82 & 0.68 \\
0.46 & 0.41 & 0.48 & 0.92 & 0.89 \\
\hline
\end{tabular}
\vspace{1ex}
\label{tab:pump_flat_CL_disc}
\end{table}
\begin{table}[!h]
\centering
\caption{\bf {C+L disc. - pump configuration for tilted gain profiles}}
\begin{tabular}{ccccc}
\hline
$I_1$ & $I_2$ & $I_3$ & $I_4$ & $I_5$ \\
{[A]} & [A] & [A] & [A] & [A] \\
\hline
0.65 & 0.24 & 0.24 & 0.29 & 0.20 \\
0.71 & 0.30 & 0.32 & 0.45 & 0.23 \\
0.72 & 0.34 & 0.39 & 0.60 & 0.32 \\
0.69 & 0.39 & 0.46 & 0.73 & 0.45 \\
0.63 & 0.44 & 0.50 & 0.85 & 0.63 \\
\hline
\end{tabular}
\vspace{1ex}
\label{tab:pump_tilted_CL_disc}
\end{table}
\begin{table}[!h]
\centering
\caption{\bf {S+C+L disc. - current and voltage configuration for flat gain profiles}}
\begin{tabular}{cccccccc}
\hline
$I_1$ & $I_2$ & $I_3$ & $I_4$ & $I_5$ & $I_6$ & $I_7$ & $V_8$ \\
{[A]} & [A] & [A] & [A] & [A] & [A] & [A] & [V] \\
\hline
0.72 & 0.47 & 0.49 & 0.72 & 0.61 & 0.37 & 0.60 & 2.01 \\
0.70 & 0.49 & 0.56 & 0.82 & 0.73 & 0.37 & 0.60 & 2.23 \\
0.67 & 0.50 & 0.62 & 0.92 & 0.89 & 0.38 & 0.64 & 2.40 \\
0.65 & 0.49 & 0.65 & 1.07 & 1.09 & 0.41 & 0.85 & 2.40 \\
\hline
\end{tabular}
\vspace{1ex}
\label{tab:pump_flat_SCL}
\end{table}
\begin{table}[!h]
\centering
\caption{\bf {S+C+L disc. - current and voltage configuration for tilted gain profiles}}
\begin{tabular}{cccccccc}
\hline
$I_1$ & $I_2$ & $I_3$ & $I_4$ & $I_5$ & $I_6$ & $I_7$ & $V_8$ \\
{[A]} & [A] & [A] & [A] & [A] & [A] & [A] & [V]\\
\hline
0.94 & 0.51 & 0.47 & 0.63 & 0.42 & 0.38 & 0.60 & 1.80 \\
0.88 & 0.55 & 0.56 & 0.72 & 0.56 & 0.39 & 0.60 & 1.98 \\
0.85 & 0.57 & 0.65 & 0.82 & 0.69 & 0.39 & 0.60 & 2.19 \\
0.82 & 0.58 & 0.72 & 0.93 & 0.83 & 0.39 & 0.62 & 2.40 \\
\hline
\end{tabular}
\vspace{1ex}
\label{tab:pump_tilted_SCL}
\end{table}
\section*{Acknowledgment}
This work was supported by the European Union's H2020 program (Marie Sk\l{}odowska-Curie grant 754462 and MSCA-ITN WON grant 814276), the European Research Council (ERC CoG FRECOM grant 771878), the Villum Foundations (VYI OPTIC-AI grant no. 29344), and the UK EPSRC grants EP/M009092/1 and EP/R035342/1.
\ifCLASSOPTIONcaptionsoff
\newpage
\fi
\bibliographystyle{IEEEtran}
\section{Introduction}\label{sec:introduction}
In the context of IoT, many applications cannot afford the presence of a battery because of size, weight and cost issues.
The recent advancement in the Non-Volatile Memory (NVM) technologies is paving the way for Non-Volatile Computing Systems.
These systems are able to sustain computations under unstable power, by quickly saving the state of the full system in a non-volatile fashion.
Thus, Non-Volatile Processors (NVPs) may allow battery-less designs without suffering from the frequent power losses inherent in energy harvesting scenarios.
In related work, both software- and hardware-level solutions were proposed to cope with the backup and restore problem.
Software-based approaches are implemented on platforms that include both some SRAM and an addressable NVM used to store the backup, as the one presented in \cite{zwerg_82_2011}.
Checkpoints are placed at compile time~\cite{ransford_mementos:_2011}.
Then, at run-time the supply voltage is checked and, if an imminent power failure is identified ($V_{dd} < V_{th}$), a backup of the stack and the registers is executed.
In some works, backups are only executed when a power failure interrupt is triggered and the full volatile state (SRAM and registers) is copied to the NVM~\cite{balsamo_hibernus:_2015, balsamo_hibernus_2016}.
Other approaches do not take advantage of the volatile SRAM and exploit the NVM as the only system memory, backing-up only the registers in the event of a power outage~\cite{jayakumar_quickrecall:_2015, choi_achieving_2019}.
Software-level solutions can be implemented on available hardware, but they normally come with a big overhead in terms of both backup time and energy.
Hardware solutions, on the other hand, usually implement fully Non-Volatile Processors (NVPs).
NVPs mostly make use of emerging NVM technologies to implement complex hybrid memory elements (nvFFs, i.e., non-volatile flip-flops used as registers, and nvSRAM, i.e., non-volatile SRAM used as memory) that allow for very fast parallel backup and restore operations \cite{yu_non-volatile_2011, wang_3us_2012, liu_4.7_2016, wang_a_130nm_feram_2017, sakimura_10.5_2014, senni_non-volatile_2016}.
However, introducing these hybrid memory elements is intrusive. Moreover, it usually comes with a significant area overhead and often results in increased delay and active power.
An additional limitation on the amount of data that can be saved and restored in parallel is imposed by the peak current consumption required to drive all the NVM bit cells at the same time.
To mitigate these problems, distributed small non-volatile arrays, where groups of flip-flops are backed-up in sequence, are proposed in \cite{bartling_8mhz_2013}.
An adaptive restore controller for configuring the parallelism of the nvSRAM restore operation, trading off peak current with restore speed is instead presented in~\cite{liu_4.7_2016}.
The use of NVM enables persistence across power failures but it also introduces the problem of consistency for the data stored in the NVM \cite{ransford_nonvolatile_2014}.
To address the consistency issue and improve reliability of the system, a software framework that performs a copy-on-write of modified pages of the NVM in a shadow memory area is developed in~\cite{choi_achieving_2019}.
The consistency problem can be also addressed via static analysis or with hardware techniques \cite{liu_lightweight_2016}.
In particular, hybrid nvFFs can be used in a hardware scheme where an enhanced store buffer is used to treat the execution of stores to the NVM as speculative, until a checkpoint is reached~\cite{liu_lightweight_2016}.
Two counters are also used to periodically trigger checkpoints based on the number of executed stores or on the number of executed instructions.
Previous work has also focused on the problem of optimal checkpoint placement, as in \cite{ghodsi_optimal_2017}, where online decisions on checkpoints are taken based on a table filled offline using Q-learning.
In this paper, we propose Freezer, a hardware backup and restore controller that is able to reduce the amount of data that needs to be backed-up.
Our approach avoids the high cost of hardware fully NVP architectures since it can be implemented with plain CMOS technology.
Furthermore, contrary to other hardware based approaches such as non-volatile processors \cite{liu_4.7_2016, sakimura_10.5_2014, ma_architecture_2015}, our proposed controller is a component that can be integrated in existing SoCs, without requiring modification of the processor architecture.
Moreover, Freezer achieves better performance than pure software approaches.
Our contributions can be summarized as follows:
\begin{itemize}
\item We propose an analysis of different backup strategies based on the use of memory access traces.
\item We introduce an oracle-based backup strategy that provides the optimal lower bound for the backup size.
\item We present a hardware backup controller, Freezer, that dynamically keeps track of the changes in the program state and commits these changes in the NVM before the power failure.
The controller spies on the address signal of the SRAM and uses dirty bits to track modified addresses with a block granularity.
\item We conduct an analysis of the trade-offs and a design space exploration for our proposed strategy. Results on a set of benchmarks show an average $8\times$ reduction in backup size. Thanks to Freezer, the backup time is further reduced by more than $100\times$, with a very low area and power overhead.
\item We compare the memory access energy of three different system architectures: SRAM+NVM, NVM-only and cache+NVM, showing that NVM-only systems take on average $3.74\times$ to $3.35\times$ more energy than SRAM+NVM with full-memory backup and $6.19\times$ to $4.22\times$ more when compared to Freezer. Our strategy shows a clear advantage also when compared to the cache+NVM architecture, requiring on average $7.8\times$ and $5.9\times$ less energy, with RRAM and STTRAM as main memory, respectively.
\end{itemize}
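The block-granularity dirty-bit tracking at the core of this approach can be illustrated with the following toy model (not the actual hardware); memory and block sizes below are arbitrary. Writes observed on the SRAM address bus mark the corresponding block as dirty, and only dirty blocks need to be copied to the NVM when a power failure is detected:

```python
class DirtyBitTracker:
    """Toy model of block-granularity dirty-bit tracking for selective backup."""

    def __init__(self, mem_words, block_words):
        self.block_words = block_words
        self.dirty = [False] * ((mem_words + block_words - 1) // block_words)

    def on_write(self, addr):
        """Called for every SRAM write address observed on the bus."""
        self.dirty[addr // self.block_words] = True

    def blocks_to_backup(self):
        """Indices of the blocks that must be copied to NVM on power failure."""
        return [i for i, d in enumerate(self.dirty) if d]

    def on_backup_done(self):
        """Clear all dirty bits once the backup has been committed."""
        self.dirty = [False] * len(self.dirty)
```

The backup size is then the number of dirty blocks times the block size, rather than the full SRAM, which is the quantity the trade-off analysis in the following sections explores.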
The rest of the paper is organized as follows.
In Section \ref{sec:context}, we present some background information and related works.
In Section \ref{sec:sys-modeling} we describe the main system models and architectures for a transiently powered device, and we present a model for evaluating the memory access energy of different system architectures.
In Section \ref{sec:backup-model}, we introduce and discuss the model for the backup strategies.
Section \ref{sec:traces} explains how the memory access traces of the benchmarks are processed and analysed.
Section \ref{sec:freezer} presents the Freezer backup controller, its algorithm, and some area and power synthesis results.
We report several comparison results of our study in Section \ref{sec:results}.
Finally, we briefly discuss our approach and draw the conclusions in Sections \ref{sec:discussion} and \ref{sec:conclusion}.
\section{Background and Related Work}\label{sec:context}
In this section, we briefly present the context around non-volatile processors and the problem of state retention in energy harvesting applications.
We then present the motivation from which this paper is derived.
In related work, both software- and hardware-level solutions were proposed to guarantee forward progress across unpredictable power failures.
There are two main approaches to cope with the backup and restore problem:
periodic check-pointing \cite{ransford_mementos:_2011, choi_achieving_2019}, and
on-demand backup \cite{balsamo_hibernus:_2015, balsamo_hibernus_2016, jayakumar_quickrecall:_2015}.
Periodic check-pointing systems try to guarantee forward progress by repeatedly executing some check-pointing tasks, interleaved with the computation.
These check-points are usually placed by the compiler, according to some heuristic.
At run-time, when a check-point is reached, the system decides if a backup should be executed.
In \cite{ransford_mementos:_2011}, for example, the supply voltage level is checked to determine whether there is enough energy or if a snapshot should be taken.
After a power outage, the state will be rolled back to the last saved state and the execution will resume from the last check-point that was reached.
This approach has the advantage that backup size can be optimised, as the location of each check-point is known in advance.
In \cite{choi_achieving_2019}, checkpoints are instead taken based on the expiration of a timer, but only the registers are saved as the system uses only NVM as its main memory. To avoid consistency issues with NVM updates happening between a checkpoint and a power failure, the modified NVM pages are saved with a copy-on-write mechanism on a shadow memory area.
These periodic check-pointing techniques also introduce overhead due to the execution of unnecessary checkpoints and backups; moreover, they may lead to the re-execution of part of the code after the rollback.
On-demand backup tries to avoid the run-time overheads introduced with periodic check-pointing by waiting until a power failure is detected before executing the backup.
The typical behavior of an on-demand backup system is depicted in Fig. \ref{fig:interval},
\begin{figure*}[htbp]
\centering
\includegraphics[width=.85\linewidth]{images/intervals_v3}
\caption{Division of execution time in intervals and system state during an interval.}
\label{fig:interval}
\end{figure*}
which shows how the system responds to a power failure, signaled by a decrease in the supply voltage (Vdd), by interrupting the computation and by entering in the \textit{Backup} phase.
When the backup is completed, the system goes in the \textit{OFF} state, where it will wait until the power resumes.
When the power is newly available, the platform can leave the \textit{OFF} state and start the recovery.
The new interval begins when the system enters the \textit{Restore} phase, to recover the state saved in the previous backup.
When the restore is completed the system can resume the computation.
Some hardware-based solutions can also be considered as implementations of on-demand backup.
As an example, in \cite{su_a_ferroelectric_2017}, the non-volatile processor is paired with a dedicated voltage detector used to trigger the backup mechanism.
The main disadvantage with these techniques is that they often require a full backup of the system memory, as it is difficult to know in advance when a power failure will happen and thus saving only the required memory is complicated.
To mitigate this problem, some offline static analysis techniques have been proposed \cite{zhao_software_2015, zhao_stack-size_2017}.
In particular, in \cite{zhao_stack-size_2017}, an offline analysis of the code is used to find the backup positions that reduce the stack size.
These positions are marked in the code with the insertion of special label instructions.
At run-time a dedicated hardware module will wait for the power failure signal. After this signal, the execution continues until the
program reaches the label instruction. Then, this dedicated hardware module executes the backup.
These techniques require a compile-time analysis, with a detailed energy model of the platform.
Moreover, they tend to introduce overhead, as they need to modify the program code \cite{zhao_software_2015} and the internal architecture of the processor \cite{zhao_stack-size_2017}.
Non-volatile processors can also be considered implementations of on-demand backup, as they focus on having a very fast backup (and restore) in response to power failures.
In \cite{ma_architecture_2015}, architectures and techniques for implementing non-pipelined, pipelined, and out-of-order (OoO) non-volatile processors are proposed.
The proposed techniques try to optimise the backup size of the internal state of the processor, using techniques such as dirty bits for a selective backup of the register file.
Contrary to our approach, these architectures rely on NVM or hybrid memories for the persistence of the main memory.
Moreover, these techniques are in general very intrusive, as they require an in-depth modification of the internal architecture of the processor.
To address the problem of full memory backup in an on-demand scheme, we propose a hardware backup controller, Freezer, that is able to optimise the size of the backup based on the information collected at run-time.
Our proposed controller is an independent component that can be integrated in existing SoCs, without requiring changes to the internal architecture of the processor core.
In this work, we focus on how to optimise the backup of the main memory and we do not consider the problem of saving the internal state of the processor.
However, the state of the CPU could be managed via software, by having the processor copy its internal registers into the main memory before starting the backup.
Other techniques are proposed in the literature to save the internal registers.
Common hardware-based solutions use nvFFs based on different technologies, such as STTRAM~\cite{sakimura_10.5_2014}, MRAM-based nvFFs~\cite{senni_non-volatile_2016}, FeRAM~\cite{su_a_ferroelectric_2017}, ReRAM~\cite{liu_4.7_2016}, and the use of FeRAM distributed mini arrays~\cite{bartling_8mhz_2013} or the use of nvFFs and NVM blocks for the backup of internal registers~\cite{ma_architecture_2015}.
\section{System Modelling}
\label{sec:sys-modeling}
\subsection{Considered System Model}\label{sec:sys-model}
Energy harvesting is seen as a promising source to power future battery-less IoT systems.
However, due to the unpredictable nature of the energy source, these systems will be subject to sudden power outages.
This could cause the execution of a program to be unexpectedly interrupted.
Thus, in these intermittently (or transiently) powered systems, the execution is divided in multiple power cycles, i.e., intervals, as shown in Fig. \ref{fig:interval}.
The timing break-down of one of these intervals is depicted in Fig. \ref{fig:energy-cycle}.
$t_{cyc}$ is the duration of this on-off cycle and is defined as $t_{cyc}=t_r+t_a+t_s+t_{off}$, where $t_a$ is the time in the active state where the system is executing some software tasks, $t_{off}$ the time in the power-off state, and $t_r$ and $t_s$ the times to restore and save (backup) the data from and to the NVM, respectively.
\begin{figure}[ht]
\centering
\includegraphics[width=.9\linewidth]{images/transient_computing}
\caption{Detail of an execution cycle between two consecutive power outages.}
\label{fig:energy-cycle}
\end{figure}
The energy consumed by the system during $t_{cyc}$ can be modelled as (adapted from \cite{hager_a_scan-chain:2017})
\begin{equation}
E_{c} = E_{s} N_{s} + E_{r} N_{r} + P_{on} t_{a} + P_{off} t_{off},
\label{eq:energy}
\end{equation}
where $E_{s}$ and $E_{r}$ are the energies required for saving and restoring one word, respectively, and $N_s$ and $N_r$ the total numbers of words to save and restore.
$P_{on}$ and $P_{off}$ are the power consumed during the active state and off state, respectively.
In this type of intermittently-powered system, $P_{off}$ is usually zero, as the state is retained in a non-volatile manner and thus the whole system, including the processor core, can be fully shut down.
Moreover, $N_s$ and $N_r$ are usually equal and often coincide with the full size of the volatile system state~\cite{balsamo_hibernus:_2015}.
Considering an \textit{on-demand backup} system that only performs a backup before a power failure, the total execution time $t_{exec}$ of a program can be modeled as (adapted from \cite{balsamo_hibernus:_2015})
\begin{equation}
t_{exec} = t_{prog} + n_i \times (t_s + t_r + \overline{t_{off}}),
\label{eq:time}
\end{equation}
where $t_{prog}$ is the time needed for running the whole program without interruptions, $n_i$ the number of interruptions, $t_s$ and $t_r$ the save and restore time, respectively, and $\overline{t_{off}}$ the average off time.
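Equations~(\ref{eq:energy}) and~(\ref{eq:time}) can be evaluated directly; the following helpers simply restate them in code, and every value in the usage example (nJ-per-word costs, active power, off time) is arbitrary, chosen only to show the mechanics:

```python
def cycle_energy(E_s, N_s, E_r, N_r, P_on, t_a, P_off, t_off):
    """Energy consumed in one power cycle, Eq. (1):
    backup + restore + active + off contributions."""
    return E_s * N_s + E_r * N_r + P_on * t_a + P_off * t_off

def total_exec_time(t_prog, n_i, t_s, t_r, t_off_avg):
    """Total execution time under on-demand backup, Eq. (2):
    uninterrupted run time plus per-interruption save/restore/off overhead."""
    return t_prog + n_i * (t_s + t_r + t_off_avg)

# Arbitrary example: 1000-word backup/restore at a few nJ/word,
# 1 mW active power for 10 ms, fully off (P_off = 0) for 1 s.
E_c = cycle_energy(E_s=2e-9, N_s=1000, E_r=1e-9, N_r=1000,
                   P_on=1e-3, t_a=0.01, P_off=0.0, t_off=1.0)
```

Reducing $N_s$ (and hence $t_s$) shrinks the first term of Eq.~(\ref{eq:energy}) and the per-interruption overhead of Eq.~(\ref{eq:time}), which is precisely the lever the proposed approach acts on.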
Our approach, Freezer, aims at reducing the size of the backup ($N_{s}$), thus also reducing $t_s$ and the total execution time and backup energy.
Moreover, the hardware implementation of our approach guarantees an additional decrease in the backup and restore time and energy, by eliminating the overhead due to software operations.
In this paper, we assume that the system has a reliable way to detect a power failure and we also assume that the system has enough power to complete the backup.
Therefore, we do not investigate the problem of how to deal with incomplete backups.
To obtain a stronger guarantee on the consistency of the system state after recovery, a double-buffering scheme can be applied, such that a new backup does not overwrite the previous one in the NVM.
Moreover, we do not deal with the issue of how to detect a power failure; solutions to this problem have been proposed in the literature, such as dedicated voltage detectors \cite{su_a_ferroelectric_2017}.
\subsection{System Architecture} \label{sec:sys-arch}
In the field of non-volatile processors for energy harvesting applications, there are several possible architectural choices for achieving state retention.
The most common approaches are the following:
\begin{itemize}
\item A CPU with an SRAM and an addressable NVM.
The NVM might serve as a backup of the full memory space of the SRAM but might also be addressable by the processor.
\item A CPU with an SRAM and a backup-only NVM or a CPU with a hybrid nvSRAM as in \cite{liu_4.7_2016}.
\item A CPU with an NVM as main system memory as in \cite{sakimura_10.5_2014, wang_a_130nm_feram_2017, jayakumar_quickrecall:_2015, senni_non-volatile_2016, choi_achieving_2019}.
\item A CPU with an SRAM-based cache and an NVM as the only system memory \cite{ghodsi_optimal_2017, ma_architecture_2015}.
\end{itemize}
\begin{figure}
\centering
\includegraphics[width=.85\linewidth]{images/SRAM+NVM_Cache+NVM_NVM-only_spaced_large-font}
\caption{Architectural models for non-volatile state retention.}
\label{fig:arch_models}
\end{figure}
These approaches can be grouped into the three basic architectures depicted in Fig. \ref{fig:arch_models}.
The first two approaches have in common the SRAM+NVM architecture, which, as shown in Fig. \ref{fig:arch_models}, exploits SRAM for execution and NVM for enabling backup and restore operations.
The NVM-only approach relies solely on NVM as its main memory.
Cache+NVM uses NVM as the main memory with the addition of a volatile cache.
A common choice for implementing intermittently-powered systems is to use commercially available SoCs with an embedded addressable NVM.
As this NVM is addressable, this type of system is the common choice for implementing software-based retention schemes~\cite{balsamo_hibernus:_2015, ransford_mementos:_2011}.
Another option explored in related work is that of using hybrid nvSRAM \cite{yu_non-volatile_2011, liu_4.7_2016}.
This choice makes it possible to exploit the main advantages of SRAM (fast read/write and low access power), while also obtaining fast parallel backup through the paired non-volatile memory elements.
The non-volatile elements are not directly accessible by the programmer; instead, the non-volatility is made transparent by the hardware.
A conceptually simple solution to guarantee state retention is to exploit only an NVM as the main memory.
This solution is proposed in \cite{sakimura_10.5_2014}, where the system is fully based on STTRAM.
Another example is given by the software approach of QuickRecall \cite{jayakumar_quickrecall:_2015}, where the available SRAM is not used and the system runs only on the FeRAM.
As with hybrid nvSRAM, the non-volatility is transparent to the programmer.
Also, in this case, there is no need to copy the data in the event of a power failure.
In \cite{ma_architecture_2015}, methods for the backup and recovery of the internal state are proposed and compared for non-pipelined, pipelined, and out-of-order (OoO) processor architectures.
These solutions can also be considered NVM-only type of systems, as they use NVM as their main memory, with the addition of hybrid or NVM caches in the case of the OoO processor.
Unfortunately, some of these new NVM technologies are still immature and often they do not provide the same level of performance in terms of access time and access energy as the SRAM \cite{yu_emerging:2016}.
Moreover, NVM-only designs must also face the issue of wear and the reduced endurance that characterises many of the emerging NVM technologies.
To mitigate this problem, a possible solution could be to use register-based or SRAM-based store buffers.
As an example, enhanced store buffers are proposed in \cite{liu_lightweight_2016} to postpone the execution of NVM writes, treating store operations as speculative.
However, the limited size of the store buffer still results in very frequent checkpoints and a large number of NVM writes.
Another possible answer to the speed and endurance problems of NVM writes could be to use an SRAM-based cache to buffer the accesses to the main NVM.
Although this type of architecture could be of some interest for higher performance systems, it is not very common in small IoT edge nodes.
This is because adding a cache would significantly increase both the dynamic and static power consumption during the active period.
In this work, we consider an architecture that comprises a micro-controller with an SRAM as main memory and an NVM that is used by our proposed backup controller, Freezer, to save (and restore) the state of the system before (and after) a power failure.
The general overview of such architecture is depicted in Fig. \ref{fig:system-arch}.
The micro-controller we consider implements the RISC-V Instruction Set Architecture (ISA).
\begin{figure}[ht]
\centering
\includegraphics[width=.6\linewidth]{images/freezer_harvester}
\caption{General overview of a system implementing Freezer.}
\label{fig:system-arch}
\end{figure}
\subsection{Modelling Memory Access Energy}
\label{sec:res:energy}
The energy required for a backup operation is dominated by the data transfers between the SRAM and the NVM, and will be proportional to the backup size.
This energy will mostly be determined by the write energy of the NVM, which can be as much as $100\times$ that of the SRAM \cite{yu_emerging:2016}.
Our approach reduces the backup energy by decreasing the number of data transfers and by improving the speed of the process compared with a software-based backup strategy.
In this section, we provide a simplified model to evaluate and compare the energy cost of some of the different system architectures introduced in Section \ref{sec:sys-arch}. Results provided in Section \ref{sec:results} are based on this model.
In particular, we derive the energy cost in terms of memory accesses for the following types of memory models:
\begin{itemize}
\item SRAM + NVM for backup,
\item NVM only,
\item cache + NVM as main memory.
\end{itemize}
For the \textit{SRAM+NVM} architecture, we consider both a system which performs a full memory backup and a system with Freezer.
For this system, the energy cost associated with memory accesses can be expressed as
\begin{equation}
E_{SRAM+NVM} = E_{prog} + E_{backup} + E_{restore}
\label{eq:energy-sram+NVM}
\end{equation}
where $E_{prog}$ is the energy of the memory accesses needed for running the program.
\begin{equation}
E_{prog} = E_{sram/r} N_{load} + E_{sram/w} N_{store}
\label{eq:energy-prog}
\end{equation}
where $E_{sram/r}$ and $E_{sram/w}$ are the read and write energy of the SRAM, and $N_{load}$ and $N_{store}$ are the total number of load and store operations, respectively.
The additional costs required by a platform with both SRAM and NVM are expressed in Eq. \ref{eq:energy-sram+NVM} by the energy for the backup $E_{backup}$ and the energy for the restore $E_{restore}$, defined respectively as
\begin{equation}
E_{backup} = N_{s} (E_{sram/r} + E_{nvm/w}),
\label{eq:energy-backup}
\end{equation}
\begin{equation}
E_{restore} = N_r (E_{nvm/r} + E_{sram/w}).
\label{eq:energy-restore}
\end{equation}
The energy for the backup depends on the total size of the backup $N_s$ and on the energy required for reading from SRAM $E_{sram/r}$ and writing to NVM $E_{nvm/w}$.
$N_s$ is the total number of saved words throughout the full execution.
Similarly $E_{restore}$ can be expressed as the energy for a single transfer (read from NVM and write to SRAM) multiplied by the total number of restored words $N_r$.
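As a sketch, the SRAM+NVM model of Eqs. \ref{eq:energy-sram+NVM}--\ref{eq:energy-restore} can be written as follows; the per-access energies passed in are placeholders, not characterized values.

```python
def e_prog(e_sram_r, e_sram_w, n_load, n_store):
    # energy of the program's own SRAM accesses
    return e_sram_r * n_load + e_sram_w * n_store

def e_backup(n_s, e_sram_r, e_nvm_w):
    # each saved word costs one SRAM read plus one NVM write
    return n_s * (e_sram_r + e_nvm_w)

def e_restore(n_r, e_nvm_r, e_sram_w):
    # each restored word costs one NVM read plus one SRAM write
    return n_r * (e_nvm_r + e_sram_w)

def e_sram_nvm(e_sram_r, e_sram_w, e_nvm_r, e_nvm_w,
               n_load, n_store, n_s, n_r):
    return (e_prog(e_sram_r, e_sram_w, n_load, n_store)
            + e_backup(n_s, e_sram_r, e_nvm_w)
            + e_restore(n_r, e_nvm_r, e_sram_w))
```

In this model, reducing $N_s$ (and hence $N_r$) only shrinks the backup and restore terms; $E_{prog}$ is unaffected.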
For the \textit{NVM-only} architecture there is no need to perform backup and restore operations, as everything is already saved in the NVM.
In this case, the memory access energy is given only by the load and store operations performed for running the program.
The energy cost for a purely non-volatile system that uses an NVM as its main memory is estimated by
\begin{equation}
E_{NVM} = E_{prog,NVM} = E_{nvm/r} N_{load} + E_{nvm/w} N_{store}.
\label{eq:energy-NVM}
\end{equation}
The \textit{cache+NVM} architecture comprises both an NVM as its main memory, and an SRAM-based cache to reduce the number of accesses to the NVM.
This system uses a write-back cache controller that performs a flush of the dirty lines on NVM in case of a power failure.
On a cache-based system, for every operation the TAG memory is first read to verify whether the required address is in the cache; in case of a miss, a read from the NVM is then executed.
Moreover, simultaneous TAG and DATA memory reads are performed inside the cache to sustain high throughput.
Finally, on an N-way set-associative cache, multiple data words may be accessed in parallel even though only one word is useful.
Therefore, the energy per read/write operation of this system is much higher than that of a system with a tightly coupled memory (SRAM+NVM).
The energy cost for a cache+NVM system is therefore estimated by
\begin{equation}
E_{cache} = E_{hits} + E_{misses} + E_{flushes}
\label{eq:energy-cache}
\end{equation}
where $E_{hits}$ is the energy due to cache hits, $E_{misses}$ the energy penalty due to misses, and $E_{flushes}$ the energy consumed by flushes.
The first part of the energy cost $E_{hits}$ is
\begin{equation}
E_{hits} = N_{hit/r}E_{hit} + N_{hit/w}(E_{hit} + E_{cache/w})
\label{eq:energy-hits}
\end{equation}
where $N_{hit/r}$ and $N_{hit/w}$ are respectively the number of read and write hits, $E_{hit}$ the energy for a single cache access and $E_{cache/w}$ the energy for a write operation inside the cache.
$E_{hits}$ therefore includes the energy due to read hits $N_{hit/r}E_{hit}$ and the energy due to write hits $N_{hit/w}(E_{hit} + E_{cache/w})$.
$E_{misses}$, the energy due to the misses, is expressed as
\begin{equation}
\begin{aligned}
E_{misses} &= N_{miss}(E_{miss} + (E_{nvm/r} + E_{cache/w}) \times 8) \\
&+ N_{evict} E_{nvm/w}
\end{aligned}
\label{eq:energy-misses}
\end{equation}
where $N_{miss}$ is the total number of misses, $E_{miss}$ the energy for a missing access, $N_{evict}$ the total number of evicted words, $E_{nvm/r}$ and $E_{nvm/w}$ are the energy for reading and writing a word in the NVM.
Eq. \eqref{eq:energy-misses} shows that each miss causes the reading of a full block (8 words in our case) from the NVM.
Moreover a missing access may also cause the eviction of a block from the cache resulting in writes to the NVM.
$E_{flushes}$ is caused by the backup of the dirty lines before a power failure happens. This operation, repeated before every power failure, requires scanning all the cache lines and writing back the dirty ones:
\begin{equation}
E_{flushes} = N_i N_{lines}E_{hit} + N_{flush} E_{nvm/w} \times 8
\label{eq:energy-flushes}
\end{equation}
where $N_{lines} E_{hit}$ represents the energy for reading all the lines of the cache once and $N_i$ the number of interruptions.
The energy due to the writes to NVM is expressed by $N_{flush}$, the total number of flushed blocks throughout all power failures, multiplied by the energy for writing $8$ words to NVM.
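The three contributions of the cache model can be sketched as follows, with the 8-word block size used in the text; the energy parameters are placeholders for illustration.

```python
BLOCK_WORDS = 8  # cache block size assumed in the text

def e_hits(n_hit_r, n_hit_w, e_hit, e_cache_w):
    # read hits cost one cache access; write hits add a cache write
    return n_hit_r * e_hit + n_hit_w * (e_hit + e_cache_w)

def e_misses(n_miss, e_miss, n_evict, e_nvm_r, e_nvm_w, e_cache_w):
    # each miss refills a full block from NVM; evictions write back to NVM
    return (n_miss * (e_miss + (e_nvm_r + e_cache_w) * BLOCK_WORDS)
            + n_evict * e_nvm_w)

def e_flushes(n_i, n_lines, e_hit, n_flush, e_nvm_w):
    # every power failure scans all lines, then writes back the dirty blocks
    return n_i * n_lines * e_hit + n_flush * e_nvm_w * BLOCK_WORDS

def e_cache(hits, misses, flushes):
    return hits + misses + flushes
```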
\section{Modeling of the Backup Strategies}\label{sec:backup-model}
By analyzing the memory access sequences, we can identify different backup strategies.
The \emph{Full Memory Backup} strategy corresponds to the state of the art.
In this paper, we propose three backup strategies: \emph{Used Address (UA)}, \emph{Modified Address (MA)}, and \emph{Modified Block (MB)}, a block-based evolution of the two previous strategies.
The last strategy presented is an \emph{Oracle}, which cannot be implemented in a real system as it requires knowledge of the future.
This oracle is however very useful for comparison, as it gives the optimal lower bound for the backup size.
In the rest of the paper, a \emph{word} is defined as a 32-bit data.
\subsection{Full Memory Backup}
The first and simplest solution is to back up the full content of the memory at the end of each interval, as proposed in~\cite{balsamo_hibernus:_2015}.
For our study and fair comparison, we considered a slightly improved version of this strategy that saves only the data section of the program in pages of $512$ bytes (128 words), thus not saving the full memory every time.
As an example, if a program needs a $1000$-byte data space, $1024$ bytes ($2$ pages) will be saved in the NVM.
With this approach, the backup size is a constant for all the intervals, equivalent to the number of pages to be saved.
\subsection{Used Address Backup}
The first strategy that we propose is the \textit{Used Address (UA)} strategy.
UA consists of keeping track of all the different addresses that are accessed (read or written) during an interval.
When a power failure is detected, every address that was accessed during that interval is saved to the NVM.
In the UA case, only the memory locations that were used during the interval are going to be backed-up.
\subsection{Modified Address Backup}
If the initial snapshot of the program is stored in the NVM, the UA scheme can be improved by implementing the \textit{Modified Address (MA)} backup strategy.
MA only keeps track of the memory locations that are modified (written) during a power cycle.
Then, before a power outage, only the words that were modified (write operation) are saved to the NVM.
In practice this means saving only the addresses accessed by a store operation at least once during the interval.
This number of addresses gives the size of the backup at the end of the interval.
It may happen that the data written during execution do not modify the content of the memory. However, to keep the technique simple, we do not track the content of the memory but only the addresses where a write operation happens.
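A minimal sketch of the UA and MA bookkeeping over one interval, assuming each trace entry is an (operation, address) pair:

```python
def backup_sets(interval):
    """interval: iterable of (op, addr) with op in {"LD", "ST"}."""
    used, modified = set(), set()
    for op, addr in interval:
        used.add(addr)          # UA tracks every accessed address
        if op == "ST":
            modified.add(addr)  # MA tracks only written addresses
    return used, modified

interval = [("ST", 0x10), ("LD", 0x0C), ("LD", 0x10), ("ST", 0x14)]
ua, ma = backup_sets(interval)
# UA would back up 3 addresses, MA only the 2 that were written.
```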
\subsection{Oracle}
The \textit{Oracle} is defined as the strategy that saves only the words that are \textit{alive}.
An address is considered alive when it is going to be read at least once in any future interval.
In other words, written data are considered alive if they are read in a future interval before being overwritten by another write at the same address.
A word that will be overwritten before being read is not considered alive and thus is not backed-up by the oracle.
\begin{figure}[htb]
\centering
\includegraphics[width=.9\linewidth]{images/alive_oracle_new}
\caption{Example of the aliveness of two addresses with Load (L) and Store (S) instructions. The continuous green line indicates that the address is alive. The black dotted line is used when the address is not alive.
The store on address \textit{0x10} at cycle 12 (S*) does not make the address alive because it is followed by another store at cycle 15, which overwrites the value written by S*.
}
\label{fig:alive-ex}
\end{figure}
Fig. \ref{fig:alive-ex} shows an example of two addresses changing between the \textit{alive} and the \textit{not alive} state as the execution progresses.
In the example, address \textit{0x0C} stops being alive after it is used by the load in cycle 5 and stays \textit{not alive} for the period between the sixth and the ninth clock cycles.
This happens because the Oracle knows that the value will be overwritten by the store executed at clock cycle 10.
Therefore, between clock cycles 6 and 9, it does not consider \textit{0x0C} as an alive address.
For the same reason, address \textit{0x10} stops being alive after the load in cycle 7 and is \textit{not alive} in the time between cycle 8 and cycle 14.
The store operation happening at cycle 12 does not change the state of the address because it is going to be followed by another store instruction that will discard this temporary update.
The \textit{Oracle}, before the power failure, only saves the words that are going to be read during any further interval.
Extending this oracle, we moreover define the \textit{Oracle Modified (OM)} strategy that only saves the alive words that were modified in the current interval.
As for the MA scheme, we assume that a complete snapshot of the system memory was stored in the NVM at the beginning of execution and updated during the previous intervals.
With the OM strategy, the data that will be read in the future are only saved if they were modified.
If a datum was saved in a previous interval and remained unchanged, it is not added to the snapshot of the memory to be saved before the next power failure.
From now on, we will use Oracle to refer to the Oracle Modified when comparing with the other strategies.
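The OM backup decision for one interval can be sketched as follows: a written word is saved only if its first access in the remaining trace is a read. The addresses and operations in the example are hypothetical.

```python
def om_backup(interval, future):
    """interval, future: lists of (op, addr); returns the addresses OM saves."""
    modified = {addr for op, addr in interval if op == "ST"}
    alive, seen = set(), set()
    for op, addr in future:       # only the first future access matters
        if addr in seen:
            continue
        seen.add(addr)
        if op == "LD":
            alive.add(addr)       # read before any overwrite: alive
    return modified & alive

interval = [("ST", 0x0C), ("ST", 0x10), ("LD", 0x0C)]
future = [("LD", 0x0C), ("ST", 0x10), ("LD", 0x10)]
# 0x0C is written and then read next: saved; 0x10 is overwritten first: skipped.
```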
\subsection{Block-Based Strategies}
Both the \textit{Used Address} and \textit{Modified Address} strategies can be implemented with different degrees of granularity.
Tracking each individual word may require a very large memory to store the modified addresses; a block-based strategy trades off hardware cost against backup size.
Instead of considering single word addresses, the addresses can be grouped in blocks of $N$ words and the scheme can be adapted to keep track of these blocks.
Therefore the \textit{Modified Block (MB)} strategy keeps track of the blocks that are modified during the interval.
The backup size is given, for each interval, by the number of blocks that are accessed with one or more store operations.
In Freezer, the modified blocks are tracked using corresponding \textit{dirty bits}, which allows for the size of the associated tracking memory to be reduced by a factor equivalent to the block size.
MB with blocks of $N=1$ word corresponds to the MA strategy.
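The MB bookkeeping reduces to setting one dirty bit per block; a sketch, assuming word-aligned byte addresses and $N=8$ words per block:

```python
BLOCK_WORDS = 8   # N words per block (illustrative choice)
WORD_BYTES = 4

def dirty_blocks(store_addrs):
    """Map each stored byte address to its block index and mark it dirty."""
    dirty = set()
    for addr in store_addrs:
        dirty.add((addr // WORD_BYTES) // BLOCK_WORDS)
    return dirty

stores = [0x100, 0x104, 0x120, 0x200]
blocks = dirty_blocks(stores)
backup_words = len(blocks) * BLOCK_WORDS  # whole blocks are saved
```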
\section{Trace Analysis and Improvement in Backup Size}\label{sec:traces}
In order to validate our approach, we analyzed the memory access traces of several benchmarks from a subset of MiBench (see Table~\ref{tab:backup-size-vs-block-size} for a list of the benchmarks).
The benchmarks were run on a cycle accurate, bit accurate RISC-V model~\cite{rokicki:hal-02303453}, thus only two types of memory access are possible: load and store operations.
The traces report the information about each memory access during the program execution.
In particular, each trace records a timestamp (cycle count), the type of operation (ST or LD for store or load) and the address for every memory access.
Table \ref{tab:trace} shows an example of a memory access trace.
\begin{table}[htb]
\caption{Example of memory access trace.}
\centering
\begin{tabular}{rlll}
\hline
interval & cycle & op & addr \\
\hline
$i$ & ... & ... & ... \\
$i$ & 90 & ST & 0x38aaad4 \\
$i$ & 97 & LD & 0x2ba50 \\
$i$ & 99 & LD & 0x2b06c \\
\hline
$i+1$ & 104 & LD & 0x2b954 \\
$i+1$ & 109 & LD & 0x38aaad4 \\
$i+1$ & ... & ... & ... \\
\vdots & \vdots & \vdots & \vdots \\
$n$ & ... & ... & ... \\
\hline
\end{tabular}
\label{tab:trace}
\end{table}
The occurrences of power failures are simulated by dividing an access trace in $n$ time intervals.
Each interval $i$ is composed of a given number of clock cycles $N_{prog_i}$, equal to the active time $t_a$ of the interval $i$ divided by the processor clock period.
The cycle count reported in the trace is used to divide the execution of a benchmark in these $n$ intervals.
In the rest of the paper, for simplicity and without loss of generality, we divide $t_{prog}$, the time needed for running the whole program without interruptions, into $n$ equal intervals of $N_{prog}$ cycles.
In the example reported in Table \ref{tab:trace}, the interruption is placed after cycle \textit{99}.
This means that the load happening in cycle \textit{104} is considered as being executed in the next interval ($i+1$).
This is a simple way to simulate a frequency of power failures every $N_{prog}$ cycles ($N_{prog}=100$ in this example).
In practice, for our simulations we considered longer intervals, ranging from $10^5$ to $10^7$ cycles.
As an example, considering a device running at 10 MHz, intervals of $10^6$ clock cycles would correspond to a frequency of interruptions due to power failures of 10 Hz.
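Assuming trace entries of the form (cycle, op, addr) as in Table \ref{tab:trace}, the split into intervals can be sketched as:

```python
def split_trace(trace, n_prog):
    """Group (cycle, op, addr) entries into intervals of n_prog cycles."""
    intervals = {}
    for cycle, op, addr in trace:
        intervals.setdefault(cycle // n_prog, []).append((op, addr))
    return intervals

trace = [(90, "ST", 0x38AAAD4), (97, "LD", 0x2BA50),
         (99, "LD", 0x2B06C), (104, "LD", 0x2B954)]
parts = split_trace(trace, n_prog=100)
# The load at cycle 104 falls in interval 1, the other accesses in interval 0.
```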
In Section \ref{sec:impact-of-interval}, we present an analysis of the impact of the interval length and of the variability of intervals duration on the reduction of backup size.
From these traces, the number of load and store operations per interval, as well as other memory access features, can be extracted.
As an example, Fig. \ref{fig:op} shows the number of LD and ST in each interval during the execution of the FFT benchmark, with intervals of $N_{prog}=10^7$ clock cycles.
\begin{figure}[htb]
\centering
\includegraphics[width=.9\linewidth]{images/op_number_per_interval_scale}
\caption{Number of LD and ST operations per interval during the FFT benchmark execution with $N_{prog}=10^7$ clock cycles.}
\label{fig:op}
\end{figure}
Considering the duration of the full execution of the FFT benchmark on the target processor, $n=7$ intervals can be simulated, ranging from interval $0$ to $6$ in the figure.
These traces provide relevant information about the memory access behavior of a given program. They will be used to compare the different backup strategies in Sections \ref{sec:Impact-of-block-size} and \ref{sec:results}.
\begin{figure}[ht]
\centering
\includegraphics[width=.9\linewidth]{images/alive_vs_total_fft_percent_scale}
\caption{Percentage w.r.t. full memory space of ``alive'' and ``alive \& modified'' addresses per interval during the execution of the FFT benchmark with $N_{prog}=10^7$ clock cycles.}
\label{fig:alive-fraction}
\end{figure}
Fig. \ref{fig:alive-fraction} shows the fraction of \textit{alive} and \textit{alive \& modified} addresses with respect to the total number of words addressed, for every interval of $10^7$ clock cycles for the FFT benchmark.
In the last interval no address is considered alive as the oracle knows that the program is going to terminate before the next power failure.
The figure also shows that, even with a relatively small benchmark, the number of words that really need to be saved is less than a quarter of the total.
This motivates our work on the definition of new backup strategies to reduce the volume of data to be backed-up before a power failure.
However, as already mentioned, the OM cannot be implemented in a real system as it requires knowledge of the future.
It is however very useful for comparison, as it gives the optimal lower bound for the backup size.
\begin{figure}[htb]
\centering
\includegraphics[width=\linewidth]{images/full_mem_vs_ua_vs_ma_vs_oracle_abrev_scale}
\caption{Average number of words saved per interval by the different backup strategies -- full-memory, used-address (UA), modified-address (MA), and oracle-modified (OM) -- during the execution of different benchmarks, with $N_{prog}=10^6$ cycles.}
\label{fig:ma-is-better}
\end{figure}
Fig. \ref{fig:ma-is-better} compares the average number of words saved per interval by the full-memory, UA, MA, and OM strategies for different benchmarks and with $N_{prog}=10^6$ clock cycles.
The figure shows the great potential of the proposed strategies w.r.t. state-of-the-art approaches.
Fig. \ref{fig:ma-is-better} also demonstrates that the MA strategy always outperforms the UA strategy in terms of number of saved words and it is the only technique that comes close to the performance of the oracle modified.
Therefore, only the MA strategy will be considered in the rest of the paper, as well as its extension to a block-based strategy presented in the following section.
\section{Freezer}\label{sec:freezer}
In this section, we present Freezer, a backup controller that implements the \textit{Modified Block} backup strategy, and study the impacts of the block size in the MB strategy.
\subsection{Freezer Architecture}
Fig. \ref{fig:system-arch} shows the system-level view of the \textit{Freezer} architecture.
The system is composed of four major components: the CPU, the SRAM used as a main memory, the NVM used for the backup, and the backup controller (Freezer).
Freezer is itself composed of two main blocks: a controller implemented as a finite-state machine (FSM) for sequencing the operations and a small memory containing the dirty bits used to keep track of the blocks that need to be saved, as shown in Fig.~\ref{fig:freezer-arch-int}.
\begin{figure}[ht]
\centering
\includegraphics[width=.9\linewidth]{images/freezer_arch_int}
\caption{Freezer internal architecture.}
\label{fig:freezer-arch-int}
\end{figure}
The Freezer controller is a stand-alone component that does not need to be tightly coupled with the memories or with the core.
It uses two handshake interfaces for the SRAM and NVM requests, allowing it to tolerate variable access latencies.
Freezer can be directly connected to the control, address and data signals of both SRAM and NVM, using these handshake interfaces.
Alternatively the SRAM and NVM interfaces can be arbitrated and share a single master port on the system bus.
Moreover, Freezer is also connected to the request signals from the CPU to the SRAM; this allows Freezer to (i) snoop the addresses of the SRAM accesses performed by the processor and (ii) manage the backup-to-NVM and restore-from-NVM phases in place of the processor.
The SRAM and the NVM do not need two ports: CPU and Freezer accesses can easily be arbitrated, as the two never access the memory at the same time.
At run-time, Freezer checks the address of the store operations in the SRAM to dynamically keep track of the blocks that are modified.
When a power failure arises, the CPU is halted and the controller starts transferring the modified blocks into the non-volatile memory.
The words within a block are then stored sequentially in the NVM.
The controller uses the information collected during the active time to determine which blocks to save.
When performing this task, Freezer has access to both the SRAM and the NVM memory.
Algorithm \ref{algo:backup-controller} describes the behavior of the backup controller during the execution, backup, and restore phases.
During execution, Freezer implements the \textit{Modified Block} backup strategy.
\begin{algorithm}[tb]
\SetKwIF{If}{ElseIf}{Else}{if}{:}{elif}{else:}{}%
\SetKwFor{For}{for}{\string:}{}%
\SetStartEndCondition{ }{}{}%
\AlgoDontDisplayBlockMarkers\SetAlgoNoEnd\SetAlgoNoLine
\caption{Freezer backup controller algorithm}
\label{algo:backup-controller}
\KwIn{cpu\_addr address generated by the CPU}
\KwIn{is\_store = 1 if the operation is a store}
\KwIn{op\_valid = 1 if the operation is valid}
\KwIn{pwr\_fail = 1 if power failure is detected}
\KwIn{restore = 1 if resume after a power failure}
\BlankLine
\KwData{\textit{to\_backup} flag memory of 1-bit per block}
\BlankLine
\eIf{restore}{
\For{i $\gets 0$ \textbf{to} $SRAM\_SIZE-1$}{
sram[i] $\gets nvm[i]$\;
}
}
{
\eIf{\textbf{not} pwr\_fail}{
\If{is\_store \textbf{and} op\_valid}{
block $\gets cpu\_addr \gg log2(BLOCK\_SIZE)$\;
to\_backup[block] $\gets 1$\;
}
}{
\For{$b \gets 0$ \textbf{to} $BLOCK\_NUM-1$}{
\If{to\_backup[b]}{
\For{$a \gets 0$ \textbf{to} BLOCK\_SIZE-1}{
addr $\gets (b \ll log2(BLOCK\_SIZE)) \| a$\;
nvm[addr] $\gets$ sram[addr]\;
}
}
}
}
}
\end{algorithm}
During execution, when there is no power failure (not \textit{pwr\_fail}) and there is a valid store operation, the controller records the blocks that are modified in a table (\textit{to\_backup}) implemented in a small memory, or in a register bank.
When the \textit{pwr\_fail} condition is true, it enters in the backup phase and in a loop where, for each block, the \textit{to\_backup} memory is checked.
If the block has to be saved, then a loop for every address of the block is executed, where a word is read from the SRAM and written in the NVM.
This last loop can easily be pipelined, such that the NVM write to one address is executed in the same cycle as the SRAM read of the next address.
The same holds true also for the restore phase, that simply moves back the data from the NVM to the SRAM.
In this way, the backup controller is able to back up and restore one word every clock cycle.
This should also lead to an additional speed-up, when compared with software-based backup loops executed on low-end micro-controllers, as in the case of \cite{balsamo_hibernus:_2015}.
In the hardware implementation, the process of checking the dirty bits can also be optimised.
As an example, the scan of the \textit{to\_backup} memory to find the next dirty block can happen in parallel to the backup of the current block, which is a relatively long operation.
Moreover, the \textit{to\_backup} memory can be organised as a matrix of dirty bits and the controller can check an entire row of dirty bits in parallel.
This means that the \textit{to\_backup} memory can be scanned row by row.
The sparsity of the dirty bits can also be exploited by skipping rows that contain only clear bits (all zeros).
With these and other optimisations, the throughput of the backup operation can be sustained with little to no dead cycles.
However, these low-level optimisations are outside the scope of this work and will not be investigated further.
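As a rough software model of this row-wise scan (the 32-bit row width is an assumption for illustration, not a design parameter of Freezer):

```python
ROW_BITS = 32  # assumed width of one row of the dirty-bit matrix

def scan_dirty(rows):
    """rows: list of ROW_BITS-wide integers; yields dirty block indices."""
    for r, row in enumerate(rows):
        if row == 0:
            continue              # all-zero row skipped in a single check
        for b in range(ROW_BITS):
            if row & (1 << b):
                yield r * ROW_BITS + b

rows = [0x0, 0x5, 0x0, 0x80000000]
# Rows 0 and 2 are skipped; blocks 32, 34 and 127 are reported dirty.
```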
\subsection{Area and Power Results}
\label{sec:areapowefreezer}
As our algorithm is relatively simple, the controller itself introduces small area and power overheads.
The major contribution to the area and power overheads comes from the \textit{to\_backup} dirty-bit memory, used to keep track of the blocks that have to be saved.
Table \ref{tab:flag-mem} shows the number of bits and an estimation of the area of the \textit{to\_backup} memory for different block sizes, considering a 32KB SRAM.
\begin{table*}[ht]
\centering
\caption{Number of bits and area estimation of the \textit{to\_backup} memory, implemented with standard cells in 28 nm FDSOI.}
\begin{tabular}{c|c|c|c|c|c|c}
\hline
block size (32bit words) & 2 & 4 & 8 & 16 & 32 & 64 \\\hline
\# bits & 4096 & 2048 & 1024 & 512 & 256 & 128 \\
area $[\mu m^2]$ & 10838.11 & 5452.02 & 2748.94 & 1436.81 & 730.64 & 386.95 \\
\hline
\end{tabular}
\label{tab:flag-mem}
\end{table*}
For these results, the \textit{to\_backup} memory is synthesized with standard cells in a 28nm FDSOI technology using Synopsys Design Compiler (DC).
Even when considering a fine granularity for the block size, the dirty-bit memory is small compared to the total size of the SRAM memory.
As an example, for a block size of $8$ words, the required 1024-bit memory is $256\times$ smaller than the main SRAM memory.
Moreover, by tuning the block size towards larger blocks, the \textit{to\_backup} memory can be stored in a register file at the cost of a small increase in the backup size.
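The sizing of the \textit{to\_backup} memory is simple arithmetic: for a 32KB SRAM, one dirty bit is needed per block of $N$ 32-bit words.

```python
SRAM_BYTES = 32 * 1024
WORDS = SRAM_BYTES // 4           # 8192 32-bit words

def to_backup_bits(block_words):
    return WORDS // block_words   # one dirty bit per block

sizes = {n: to_backup_bits(n) for n in (2, 4, 8, 16, 32, 64)}
# -> {2: 4096, 4: 2048, 8: 1024, 16: 512, 32: 256, 64: 128}
```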
A non-optimized version of the controller was synthesized from a C++ specification using Mentor Graphics CatapultHLS and Synopsys DC with the same 28nm FDSOI technology at $0.7V$.
In this configuration, Freezer's controller achieves a dynamic power of $P_{active} = 6.8 \mu W$ and a leakage power of around $P_{leak} = 40 nW$ at 25\textdegree C.
With the same technology, we estimate a leakage of roughly $600 nW$ for a register-based \textit{to\_backup} memory of $1024$ bits.
These synthesis results will be exploited in Section \ref{sec:area} to estimate the energy of a system implementing Freezer.
\subsection{Impact of Block Size} \label{sec:Impact-of-block-size}
In this section, we study the impact of the block size on the size of the backup provided by the MB strategy.
The size of the \textit{to\_backup} memory depends on two parameters: the number of 32-bit words in each block, which determines the granularity of the backup strategy, and the total size of the SRAM.
Therefore, it is possible to trade off an increase in backup size with a smaller area overhead of the \textit{to\_backup} memory.
In Table \ref{tab:backup-size-vs-block-size}, the backup size across a set of benchmarks is reported for different configurations of block granularity.
The backup size is averaged on all intervals and normalized with respect to a block of one word (MA strategy). The interval is set to $N_{prog}~=~10^6$ clock cycles.
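The relation between block granularity and the size of the dirty-bit memory can be sketched as follows; the example of a 1024-bit memory for blocks of 8 words corresponds to an 8192-word (32 KB) SRAM, and the Python below is an illustration, not the synthesized design:

```python
# One dirty bit per block; sram_words is the SRAM size in 32-bit words.
def to_backup_bits(sram_words: int, block_words: int) -> int:
    # round up in case the last block is partial
    return -(-sram_words // block_words)

# Bit counts for an 8192-word (32 KB) SRAM with block sizes 2..64 words
sizes = [to_backup_bits(8192, n) for n in (2, 4, 8, 16, 32, 64)]

# blocks of 8 words -> 1024 dirty bits, i.e., 256x smaller than the
# 8192 * 32 = 262144-bit main SRAM
ratio = 8192 * 32 // to_backup_bits(8192, 8)
```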
\begin{table}[htb]
\centering
\caption{Backup size relative to blocks of one 32-bit word (MA approach) for different benchmarks. $N_{prog}=10^6$ cycles. The table also reports the average on all benchmarks.}
\begin{tabular}{l|rrrrrr}
\hline
block size $N$ & 2 & 4 & 8 & 16 & 32 & 64 \\
\hline
susan\_smooth\_small (sss) & 1.01 & 1.04 & 1.09 & 1.17 & 1.32 & 1.62 \\
susan\_edge\_small (ses) & 1.03 & 1.10 & 1.22 & 1.35 & 1.54 & 1.68 \\
matmul16\_float (mm16f) & 1.11 & 1.24 & 1.47 & 1.77 & 2.23 & 2.54 \\
qsort (qsort) & 1.01 & 1.03 & 1.07 & 1.28 & 1.58 & 1.70 \\
fft (fft) & 1.10 & 1.22 & 1.44 & 1.79 & 2.45 & 3.22 \\
matmul32\_int (mm32i) & 1.03 & 1.08 & 1.17 & 1.28 & 1.47 & 1.75 \\
str\_search (str) & 1.02 & 1.06 & 1.12 & 1.17 & 1.28 & 1.55 \\
cjpeg (cjpeg) & 1.01 & 1.03 & 1.06 & 1.10 & 1.18 & 1.32 \\
dijkstra (dijk) & 1.06 & 1.18 & 1.31 & 1.36 & 1.45 & 1.55 \\
matmul16\_int (mm16i) & 1.07 & 1.19 & 1.38 & 1.69 & 2.16 & 2.76 \\
susan\_edge\_large (sel) & 1.08 & 1.17 & 1.30 & 1.46 & 1.59 & 1.72 \\
\hline
average (avg) & 1.05 & 1.12 & 1.24 & 1.40 & 1.66 & 1.95 \\
\hline
\end{tabular}
\label{tab:backup-size-vs-block-size}
\end{table}
Increasing the block size obviously has an impact on the performance of the MB strategy, i.e., on the average size of the backup required at the end of each interval.
However, MB with a relatively small block size (up to $N=8$) only increases the backup size by 24\% on average, while this block-based strategy decreases the size of the \textit{to\_backup} memory by a factor of $8\times$.
Further comparisons and the impact of the interval size are reported in the next section.
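The block-based bookkeeping behind this trade-off can be sketched as follows; the store trace and block sizes are invented for illustration and do not come from the benchmarks:

```python
# Sketch of the MB bookkeeping (illustrative, not the controller's RTL):
# every store marks the block containing the written word as dirty, and
# the backup then saves each dirty block in full.
def backup_words(store_addrs, block_words):
    dirty_blocks = {addr // block_words for addr in store_addrs}
    return len(dirty_blocks) * block_words

trace = [0, 1, 2, 3, 100, 101, 200]  # word addresses written in an interval
ma_size = backup_words(trace, 1)     # MA: per-word tracking -> 7 words
mb8_size = backup_words(trace, 8)    # blocks of 8 -> 3 blocks -> 24 words
```

A spatially local trace like this one pays only a modest backup-size penalty for an 8x smaller dirty-bit memory, which is the effect measured in Table \ref{tab:backup-size-vs-block-size}.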
\section{Results}\label{sec:results}
In this section, the details of the experimental setup are explained and the results regarding the backup size (Sec.~\ref{sec:res:backupsize}) and backup time {(Sec.~\ref{sec:res:backuptime})} are reported.
The impact of the interval size is discussed in Section \ref{sec:impact-of-interval}. For all the other results provided in this section, the interval is set to $N_{prog}=10^6$ clock cycles.
Moreover, a discussion about power, energy, and area of our approach is presented in Sections \ref{sec:mem-energy-cmp} and \ref{sec:area},
while considerations on the impact of leakage are presented in Section \ref{sec:leakage}.
\subsection{Backup Size}\label{sec:res:backupsize}
For every interval, the backup size is computed considering the different approaches described in Section \ref{sec:backup-model}.
Fig. \ref{fig:backup-size-plt} shows the backup size reported for every benchmark and for blocks of 1, 8, and 64 32-bit words.
A block size of one word corresponds to the MA strategy.
The \textit{OM} strategy is also reported to provide the optimal, though unreachable, value.
The backup size is averaged on all intervals and normalized against the improved Hibernus~\cite{balsamo_hibernus:_2015} approach, which saves the full memory used by the program in pages of 512 bytes.
As can be seen from Fig. \ref{fig:backup-size-plt}, our approach greatly reduces the average backup size per interval, reaching an $87.7\%$ (more than $8\times$) reduction on average, with only a $7.5\%$ distance from the \emph{oracle modified}, when configured with a granularity of $8$ words per block.
This reduction in backup size translates directly into an energy saving in terms of the number of writes to the NVM during the backup phase.
\begin{figure}[ht]
\centering
\includegraphics[width=.95\linewidth]{images/backup_size_comp2_scale}
\caption{Backup size normalized w.r.t. the program memory size (improved Hibernus strategy) of Freezer implementing MB strategy with blocks of 1, 8, and 64 words. Lower bound in backup size of the oracle-modified is also reported.}
\label{fig:backup-size-plt}
\end{figure}
\subsection{Impact of Interval Size}\label{sec:impact-of-interval}
On a system powered with intermittent ambient energy, the time length of the intervals is mostly determined by the energy source and by the energy budget of the platform.
If the energy source is relatively stable, the length of the power cycle increases, and so does the amount of computation that the processor manages to complete during one interval.
This means that more memory accesses will be performed, thus we can expect the average size of a backup to increase.
However, this also depends on the spatial locality of the application, and considering wider blocks could be beneficial for less intermittent sources.
When the length of the power cycles decreases, the processor is interrupted more frequently, and the number of memory accesses is reduced.
Therefore, the average backup size is further decreased.
Fig. \ref{fig:backup-iv-size} shows the average reduction in backup size, across all benchmarks, for different lengths of the power cycles (interval size $N_{prog}$ expressed in number of clock cycles), considering blocks of 8 words.
As can be seen, the backup size reduction is greater than $70\%$ for all interval lengths.
Moreover, with shorter intervals, the reduction becomes greater than $90\%$.
Note also that, when the length of the interval is increased above 20 million clock cycles, the majority of the programs run to completion before the first power failure occurs.
\begin{figure}[htb]
\centering\includegraphics[width=.95\linewidth]{images/backup_reduction_vs_interval_size_new}
\caption{Average backup size reduction with different interval size $N_{prog}$ expressed in number of clock cycles.}
\label{fig:backup-iv-size}
\end{figure}
Due to the unpredictability of the energy source, an intermittently-powered system might also experience a wide variation between the time length of successive intervals.
To better capture this behaviour, we model the occurrence of a power failure as a random variable distributed according to a binomial law.
Power failure events in this model are considered independent of one another.
At each clock cycle, there is a fixed probability of incurring a power failure.
For this experiment, we considered two failure rates: one power failure every $10^6$ clock cycles and one every $10^7$ clock cycles.
Figures \ref{fig:random-iv-1e6} and \ref{fig:random-iv-1e7} show, for each benchmark, the average savings with relative standard deviation computed for 100 executions, considering blocks of 8 words.
Our proposed method is robust to variability in the size of the intervals and it is able to achieve more than $83\%$ and $88\%$ savings on average when the failure rates are respectively $10^{-7}$ and $10^{-6}$.
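A minimal sketch of this failure model: a per-cycle Bernoulli trial makes interval lengths geometrically distributed with mean $1/p$. The probability below is larger than the experimental $10^{-6}$/$10^{-7}$ only to keep the demo fast:

```python
import random

# Sketch of the failure model used in this experiment: at every clock
# cycle a power failure occurs independently with probability p, so
# interval lengths follow a geometric distribution with mean 1/p.
def interval_lengths(p: float, n_intervals: int, seed: int = 0):
    rng = random.Random(seed)
    lengths = []
    for _ in range(n_intervals):
        cycles = 1
        while rng.random() >= p:  # survive this cycle with prob. 1 - p
            cycles += 1
        lengths.append(cycles)
    return lengths

lengths = interval_lengths(1e-3, 200)   # p chosen large for a fast demo
mean_len = sum(lengths) / len(lengths)  # expected to be close to 1/p = 1000
```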
\begin{figure}[ht]
\centering
\begin{subfigure}{.48\textwidth}
\centering
\includegraphics[width=\linewidth]{images/freezer_binomial_intervals_1e-6_crop_scaled}
\caption{Failure rate $10^{-6}$.}
\label{fig:random-iv-1e6}
\end{subfigure}
\begin{subfigure}{.48\textwidth}
\centering
\includegraphics[width=\linewidth]{images/freezer_binomial_intervals_1e-7_crop_scaled}
\caption{Failure rate $10^{-7}$.}
\label{fig:random-iv-1e7}
\end{subfigure}
\caption{Average savings and std. deviation for 100 executions with power failures distributed following the binomial law with failure rates of $10^{-6}$ (a) and $10^{-7}$ (b).}
\label{fig:random-iv}
\end{figure}
\subsection{Backup Time}\label{sec:res:backuptime}
The reduction in the backup size comes with a corresponding reduction in the save time.
On top of that, thanks to the hardware accelerated backup process, our solution provides an additional improvement in terms of backup time.
In particular, the backup process is managed directly by Freezer and can be further pipelined, so that each word can be saved in one clock cycle.
Of course, the speed of this process is limited by the cycle-time of the slowest NVM memory.
As our approach does not rely on any specific NVM technology, we considered the numbers reported in \cite{balsamo_hibernus:_2015} for our comparison.
In particular, we considered a clock frequency of 24 MHz for the normal operation using SRAM, and a clock cycle period of 125 ns (8 MHz) for the FeRAM.
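Under these assumptions, the pipelined backup writes one 32-bit word per FeRAM cycle, so the backup time scales linearly with the number of words saved; a small sketch:

```python
# Sketch of the backup-time estimate: one 32-bit word is written per NVM
# clock cycle; 125 ns is the FeRAM cycle time used in the comparison (8 MHz).
FERAM_CYCLE_NS = 125

def backup_time_us(n_words: int) -> float:
    return n_words * FERAM_CYCLE_NS / 1000.0

t_partial_us = backup_time_us(1024)  # 1024 dirty words -> 128 us
t_full_us = backup_time_us(8192)     # a full 32 KB memory -> 1024 us
```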
\begin{table}[htb]
\centering
\caption{Percentage reduction of backup time w.r.t. improved Hibernus (higher is better). Columns b\_$N$ provide results for our strategy using blocks of $N$ words. Oracle modified and NVP \cite{liu_4.7_2016} are also provided for comparison.}
\begin{tabular}{lccccc}
\hline
{} & b\_1 & b\_8 & b\_64 & oracle & NVP \cite{liu_4.7_2016}\\
\hline
susan\_smooth\_small & 99.80 & 99.78 & 99.68 & 99.87 & 39.25 \\
susan\_edge\_small & 98.93 & 98.69 & 98.20 & 99.20 & 42.58 \\
matmul16\_float & 99.32 & 99.00 & 98.27 & 99.79 & 39.08 \\
qsort & 99.38 & 99.34 & 98.95 & 99.58 & 45.96 \\
fft & 99.58 & 99.39 & 98.64 & 99.85 & 39.64 \\
matmul32\_int & 99.35 & 99.23 & 98.86 & 99.69 & 40.44 \\
str\_search & 99.49 & 99.43 & 99.21 & 99.81 & 43.03 \\
cjpeg & 98.10 & 97.98 & 97.48 & 99.32 & 38.97 \\
dijkstra & 99.34 & 99.13 & 98.98 & 99.63 & 43.32 \\
matmul16\_int & 99.23 & 98.94 & 97.88 & 99.80 & 29.94 \\
susan\_edge\_large & 99.93 & 99.91 & 99.89 & 99.95 & 45.78 \\
\hline
average & 99.31 & 99.17 & 98.73 & 99.68 & 40.73 \\
\hline
\end{tabular}
\label{tab:backup-time}
\end{table}
Table \ref{tab:backup-time} reports the improvement in backup time compared with a modified implementation of Hibernus that only saves the memory used by the program (in pages of 512 bytes).
Columns b\_$N$ provide results for our strategy using blocks of $N$ 32-bit words.
For the column related to non-volatile processor (NVP), we considered the backup time reported in~\cite{liu_4.7_2016} of 1.02 ms for 4KB, and scaled it for the memory size of our benchmarks, grouping the addresses in pages of 1 KB.
With this configuration, our approach gives a two orders of magnitude improvement in backup time when compared to the software-based approach that saves the whole program memory.
Moreover, Freezer also provides a significant advantage when compared with a fully non-volatile processor such as \cite{liu_4.7_2016}, which only achieves a 40\% improvement over the software-based approach.
This improvement in the backup time also positively affects the total execution time, as expressed in Eq. \ref{eq:time}.
We considered a 24~MHz frequency for the volatile operations and an 8~MHz frequency for the FeRAM accesses.
The active time is set to $N_{prog}~=~10^7$ clock cycles at 24~MHz.
We assumed an average off time equal to the active time.
As reported in Table~\ref{tab:exec-time}, our strategy achieves a 32\% average decrease of the total execution time when compared with improved Hibernus.
We also compared Freezer against approaches like QuickRecall \cite{jayakumar_quickrecall:_2015} that runs the programs only using the NVM.
In this case, the save and restore times are roughly zero (only the registers need to be saved), but the frequency of the core is limited to that of the FeRAM.
As a consequence, in most cases, the QuickRecall approach leads to a longer execution time than Hibernus, whereas our solution always performs better and is very close to the Oracle.
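Eq. \ref{eq:time} is defined earlier in the paper; the sketch below assumes a model of the same general shape, i.e., per power cycle the active time at 24~MHz plus backup, restore, and off time, and is an illustration rather than the paper's exact formula:

```python
F_SRAM = 24e6  # Hz, normal operation from SRAM
F_NVM = 8e6    # Hz, FeRAM accesses (one word per cycle)

# Illustrative total-time model: k power cycles, each with active time,
# a backup of backup_words, a restore of restore_words, and off time.
def total_time_s(k, active_cycles, backup_words, restore_words, t_off_s):
    t_active = active_cycles / F_SRAM
    t_backup = backup_words / F_NVM
    t_restore = restore_words / F_NVM
    return k * (t_active + t_backup + t_restore + t_off_s)

# one 10^7-cycle interval with an equal off time and no memory traffic
t_base = total_time_s(1, 10**7, 0, 0, 10**7 / F_SRAM)
```

Reducing `backup_words` from the full memory to only the dirty blocks is what produces the execution-time gains in Table \ref{tab:exec-time}.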
\begin{table}[htbp]
\centering
\caption{Percentage reduction of execution time w.r.t. improved Hibernus (higher is better). Columns b\_$N$ provide results for our strategy using blocks of $N$ words. Oracle and NVM-only solution of \cite{jayakumar_quickrecall:_2015} are also provided for comparison.}
\begin{tabular}{lcccc}
\hline
{} & b\_1 & b\_8 & oracle & NVM only \cite{jayakumar_quickrecall:_2015}\\
\hline
susan\_smooth\_small & 20.75 & 20.75 & 20.76 & -17.59 \\
susan\_edge\_small & 33.35 & 33.31 & 33.41 & 2.35 \\
matmul16\_float & 9.90 & 9.88 & 9.93 & -34.50 \\
qsort & 88.79 & 88.77 & 88.90 & 89.00 \\
fft & 10.68 & 10.67 & 10.70 & -33.30 \\
matmul32\_int & 15.32 & 15.31 & 15.35 & -26.01 \\
str\_search & 24.90 & 24.89 & 24.95 & -11.05 \\
cjpeg & 27.93 & 27.91 & 28.13 & -5.94 \\
dijkstra & 35.21 & 35.17 & 35.27 & 5.14 \\
matmul16\_int & 8.72 & 8.71 & 8.75 & -36.34 \\
susan\_edge\_large & 83.49 & 83.48 & 83.50 & 80.27 \\
\hline
average & 32.64 & 32.62 & 32.69 & 1.09 \\
\hline
\end{tabular}
\label{tab:exec-time}
\end{table}
\subsection{Energy Comparison with other Memory Models}
\label{sec:mem-energy-cmp}
\begin{table*}[htb]
\centering
\caption{Energy and leakage power parameters used for memory access cost simulation. Read/write energy is reported in pJ per 32-bit word access.}
\begin{tabular}{l|llll|llll|llll}
\hline
& \multicolumn{4}{l|}{SRAM} & \multicolumn{4}{l|}{STT} & \multicolumn{4}{l}{RRAM} \\ \hline
Size {[}KB{]} & 4 & 16 & 32 & 64 & 4 & 16 & 32 & 64 & 4 & 16 & 32 & 64 \\ \hline
Read {[}pJ{]} & 0.219 & 0.703 & 1.664 & 2.50 & 7.754 & 7.889 & 8.426 & 8.692 & 5.101 & 5.477 & 6.004 & 6.667 \\ \hline
Write {[}pJ{]} & 0.111 & 0.215 & 1.175 & 1.388 & 20.244 & 20.614 & 20.873 & 21.416 & 21.349 & 27.449 & 24.176 & 28.575 \\ \hline
Leakage {[}$\mu$W{]} & 0.78 & 2.16 & 3.58 & 7.16 & \multicolumn{8}{l}{~} \\ \cline{1-5}
\end{tabular}
\label{tab:mem-energy-values}
\end{table*}
\begin{table}[htb]
\centering
\caption{Cache Miss and Hit dynamic energy in pJ per 32-bit word access}
\begin{tabular}{c|c|c}
\hline
Size [KB] & Hit Energy [pJ] & Write Energy [pJ] \\
& $E_{cache/r}$ & $E_{cache/w}$ \\
\hline
2 & 5.43 & 4.5 \\
4 & 6.15 & 4.96 \\
8 & 10.13 & 9.42 \\
16 & 13.45 & 12.74 \\
\hline
\end{tabular}
\label{tab:cache-dynamic-energy}
\end{table}
We use Eqs. (\ref{eq:energy-sram+NVM}), (\ref{eq:energy-NVM}), and (\ref{eq:energy-cache}) to compare the dynamic memory access energy of the different system configurations.
Figures~\ref{fig:rram} and~\ref{fig:stt} show these dynamic energies normalised w.r.t. the system using Freezer, with RRAM and STT, respectively.
For the cache+NVM architecture, four different cache sizes of 2KB, 4KB, 8KB and 16KB are reported.
The four caches are all 4-way set associative with lines of 8 words (256 bits), which is representative of this type of device.
We considered blocks of 8 words also for the system using Freezer.
The read and write dynamic energies per 32-bit word for the memories used in this comparison are reported in Table \ref{tab:mem-energy-values}, and were obtained using NVSim \cite{dong_nvsim:2012}.
Table \ref{tab:cache-dynamic-energy} reports the Hit and Write dynamic energy for the different cache sizes, obtained with NVSim.
Miss energies were in all cases equal to Hit energies.
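The following sketch shows, in the spirit of Eqs. (\ref{eq:energy-sram+NVM}) and (\ref{eq:energy-NVM}), how these per-word energies enter the comparison; the constants are the 32 KB SRAM and STT columns of Table \ref{tab:mem-energy-values}, while the access counts are illustrative:

```python
# Per-32-bit-word dynamic energies in pJ (32 KB columns of the table)
E_SRAM_R, E_SRAM_W = 1.664, 1.175
E_STT_R, E_STT_W = 8.426, 20.873

def e_sram_nvm_pj(loads, stores, backup_words, restore_words):
    # program runs from SRAM; the NVM is touched only at backup/restore
    return (loads * E_SRAM_R + stores * E_SRAM_W
            + backup_words * (E_SRAM_R + E_STT_W)    # SRAM read + NVM write
            + restore_words * (E_STT_R + E_SRAM_W))  # NVM read + SRAM write

def e_nvm_only_pj(loads, stores):
    # every program access goes directly to the NVM
    return loads * E_STT_R + stores * E_STT_W

e_hybrid = e_sram_nvm_pj(1000, 500, 0, 0)  # program accesses only
e_nvm = e_nvm_only_pj(1000, 500)
```

Even before any backup traffic is counted, the NVM-only configuration pays the much higher per-access energies on every load and store, which is why it loses the comparison in Fig.~\ref{fig:energy-cost}.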
\begin{figure}[htbp]
\centering
\begin{subfigure}{.49\textwidth}
\includegraphics[width=.95\linewidth]{images/energy_cost_nvsim_rram_new_scaled}
\caption{RRAM}
\label{fig:rram}
\end{subfigure}
\begin{subfigure}{.49\textwidth}
\includegraphics[width=.95\linewidth]{images/energy_cost_nvsim_stt_new_scaled}
\caption{STT}
\label{fig:stt}
\end{subfigure}
\caption{Relative dynamic energy of memory accesses, normalized w.r.t. Freezer, using RRAM (a) and STT (b) as NVMs for backup.}
\label{fig:energy-cost}
\end{figure}
As can be seen from Fig.~\ref{fig:energy-cost}, our proposed approach provides a significant reduction in the energy due to memory accesses when compared with all the other methods.
The memory access energy for a full-memory backup strategy is on average $1.26\times$ that required by Freezer when using STT, and $1.65\times$ when considering RRAM.
Being based on the same SRAM+NVM architecture, Freezer and full-memory backup strategies require the same absolute amount of energy for the execution of the program, i.e., the energy required for executing load and store operations is the same for the same benchmark.
Moreover, as the two strategies rely on a full memory restore after a power failure, they spend the same amount of energy on the restore memory accesses for the same benchmark.
Tables \ref{tab:energy-percent-stt-freezer} and \ref{tab:energy-percet-stt-full-mem} show the energy decomposition, across all benchmarks, for Freezer and full-memory strategies when using STT NVM.
The two tables show the clear advantage that Freezer brings in terms of backup energy, reducing its weight from an average $23.25\%$ to an average $3.44\%$ of the total memory access energy.
\begin{table}[htb]
\caption{Memory access energy percentage decomposition for Freezer using STT}
\centering
\begin{tabular}{lrrrr}
\hline
Trace & backup & restore & prog. loads & prog. stores \\
\hline
sss & 0.74 & 22.60 & 74.86 & 1.80 \\
ses & 5.97 & 29.30 & 59.23 & 5.49 \\
mm16f & 5.53 & 21.89 & 49.88 & 22.70 \\
fft & 4.72 & 28.28 & 46.12 & 20.88 \\
cjpeg & 7.53 & 16.30 & 57.64 & 18.53 \\
str & 2.62 & 23.86 & 43.65 & 29.87 \\
mm16i & 15.13 & 33.12 & 40.55 & 11.21 \\
dijk & 3.45 & 23.56 & 61.34 & 11.64 \\
mm32i & 4.46 & 26.86 & 60.42 & 8.26 \\
\hline
avg & 3.44 & 23.52 & 61.18 & 11.86 \\
\hline
\end{tabular}
\label{tab:energy-percent-stt-freezer}
\end{table}
\begin{table}[htb]
\caption{Memory access energy percentage decomposition for full-memory backup using STT}
\centering
\begin{tabular}{lrrrr}
\hline
Trace & backup & restore & prog. loads & prog. stores \\
\hline
sss & 21.39 & 17.90 & 59.29 & 1.43 \\
ses & 28.29 & 22.35 & 45.18 & 4.19 \\
mm16f & 21.66 & 18.15 & 41.36 & 18.82 \\
fft & 26.15 & 21.92 & 35.75 & 16.18 \\
cjpeg & 17.40 & 14.56 & 51.49 & 16.55 \\
str & 22.65 & 18.95 & 34.67 & 23.72 \\
mm16i & 31.33 & 26.80 & 32.81 & 9.07 \\
dijk & 23.60 & 18.65 & 48.54 & 9.21 \\
mm32i & 25.77 & 20.87 & 46.94 & 6.42 \\
\hline
avg & 23.25 & 18.69 & 48.63 & 9.42 \\
\hline
\end{tabular}
\label{tab:energy-percet-stt-full-mem}
\end{table}
Figures \ref{fig:rram} and \ref{fig:stt} also show that, due to the higher read and write dynamic energies, using the NVM as the main memory is often detrimental even when compared with full-memory backup systems.
Moreover, when compared to Freezer, NVM-only systems require on average $6.19\times$ and $4.22\times$ more energy for RRAM and STT, respectively.
As described in Section \ref{sec:res:energy}, the cache+NVM system uses the write-back policy and flushes the dirty lines to the NVM when a power failure occurs.
Thus the cache+NVM system shows a behaviour that is similar to the one of Freezer during power failures, but with higher energy per operation.
There are however some major differences between a system that implements Freezer and a system with a write-back cache and a NVM main memory.
First of all, Freezer is meant to be simple to reduce the energy overhead of tracking the modified blocks.
Moreover, Freezer is able to track the full main memory and only needs to write on the NVM before a power failure happens.
A write-back cache on the other hand might perform additional writes on the NVM at run-time.
In fact, if an access causes a conflict, the cache will evict the conflicting line, thus causing additional NVM writes.
These additional writes may reduce the lifetime of the NVM due to the limited endurance of these types of memories.
When it comes to cache+NVM based systems, the size that on average provides the smallest energy is 4KB, with 2KB and 8KB caches performing better on some benchmarks.
Accesses to smaller caches require less energy, as shown in Table \ref{tab:cache-dynamic-energy}, but they might incur the high cost of additional NVM reads and writes due to a larger number of misses and evictions.
The increased number of writes to the NVM could also cause endurance problems due to wear-out, which might prevent this solution from being applied for long-lasting operations.
A larger cache can reduce the number of accesses to the NVM, up to the point where the cache is so large that it is able to buffer the full application.
In this case, it is possible to obtain a number of writes to the NVM that is close to what Freezer achieves.
However, this comes at the cost of having a large cache that is complex and energy hungry.
Moreover, it is unusual to see a cache used in small low-power edge devices, where the system memory is embedded on chip and seldom exceeds 64KB.
To summarize, for our set of benchmarks, the energy required by a 4KB cache + STT system is $5.9\times$ that of Freezer, whereas the larger 16KB cache requires on average $9.3\times$ more energy than Freezer.
\subsection{Impact of Leakage Power}
\label{sec:leakage}
For a fair comparison, it is also important to study the impact of leakage power of the {SRAM+NVM} memory model, especially when compared to NVM-only architectures.
Eq. \ref{eq:energy-sram+NVM} is therefore enhanced by considering the leakage power of low-power SRAMs of the appropriate size, as reported in Table~\ref{tab:mem-energy-values}.
The leakage power of STT and RRAM is considered to be zero, which is obviously not the case for real designs.
Table \ref{tab:bench-energy-values} reports for each benchmark the absolute dynamic energy of memory accesses for Freezer with both RRAM and STT as NVMs, equivalent to the Freezer blue bar in Figures \ref{fig:rram} and \ref{fig:stt}, respectively.
The table also reports an estimation of the leakage energy due to the main SRAM memory obtained considering a $20MHz$ clock, and the total memory size of the benchmark.
Table \ref{tab:bench-energy-values} shows that the leakage energy represents around half of the dynamic energy of memory accesses when using Freezer.
Even accounting for the leakage of SRAM, the approaches based on SRAM+NVMs are still better than running an NVM-only system.
Compared to full-memory backup which would consume roughly the same leakage energy, Freezer still benefits from the backup size reduction.
Moreover, even accounting for the leakage of the NVM memories would not change the outcome of the analysis.
In fact, when considering NVMs of the same size running for similar periods of time, the leakage due to the NVMs would be roughly the same for both SRAM+NVM and NVM-only architectures.
Furthermore, an SRAM+NVM system could even activate the NVM only during the backup and restore phases, further reducing the impact of NVM leakage.
In both cases, the SRAM+NVM architecture would still show an advantage.
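The leakage estimate reduces to power integrated over run time; a sketch using the SRAM leakage values of Table \ref{tab:mem-energy-values} (the cycle count below is illustrative, not a benchmark value):

```python
# Sketch of the SRAM leakage-energy estimate: leakage power from the
# table, integrated over the run time at the 20 MHz clock assumed above.
LEAK_UW = {4: 0.78, 16: 2.16, 32: 3.58, 64: 7.16}  # SRAM size [KB] -> uW

def leakage_energy_uj(size_kb: int, cycles: int, f_hz: float = 20e6) -> float:
    # uW * s gives energy directly in uJ, so no unit conversion is needed
    return LEAK_UW[size_kb] * cycles / f_hz

e_leak = leakage_energy_uj(32, 2 * 10**7)  # 1 s of run time on a 32 KB SRAM
```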
\begin{table}
\centering
\caption{{Backup energy using Freezer, leakage and memory size for different benchmarks, energy in $[\mu J]$, memory size in words of 32 bits.}}
\begin{tabular}{lrrrr}
\hline
Trace & mem\_size & E\_freezer & E\_freezer & E\_leakage \\
~ & [32-bit word] & RRAM & STT & SRAM \\
\hline
sss & 8192 & 11.0 & 11.0 & 6.1 \\
ses & 16384 & 3.3 & 3.2 & 1.9 \\
mm16f & 2048 & 2.5 & 2.4 & 2.0 \\
fft & 2048 & 3.4 & 3.4 & 3.7 \\
cjpeg & 8192 & 3.9 & 3.6 & 1.3 \\
sl & 8192 & 3.6 & 3.7 & 1.9 \\
mm16i & 1024 & 0.051 & 0.045 & 0.028 \\
dijk & 16384 & 51.0 & 50.0 & 27.0 \\
mm32i & 4096 & 0.58 & 0.56 & 0.41 \\
\hline
\end{tabular}
\label{tab:bench-energy-values}
\end{table}
\subsection{Energy and Area Overhead Considerations}
\label{sec:area}
In this section, we provide insights about the overhead in energy due to our backup controller.
The use of the Freezer hardware backup strategy in an energy harvesting platform will introduce a small overhead at run-time, but will also decrease the energy required for the backup and restore operations.
We can account for the overhead and the reduction in the backup size by modifying Eq. \ref{eq:energy} which becomes
\begin{equation}
E_{c} = E_{s} N'_{s} + E_{r} N_{r} + (P_{on}+P_{ovh})\times t_{on} + P_{off} t_{off},
\end{equation}
where $N'_s$ is the reduced backup size and $P_{ovh}$ represents the overhead introduced at run-time.
The energy required for moving the data ($E_{s}$ for save and $E_{r}$ for restore) is heavily dependent on the memory technology.
However, software-based approaches introduce additional overhead.
In our case, as a backup operation may require hundreds or even thousands of transfers, we can approximate the energy required for saving one word as
\begin{equation}
E_{s} = E_{sram/r} + E_{nvm/w}
\label{eq:es}
\end{equation}
where $E_{sram/r}$ is the energy for reading a word from the SRAM and $E_{nvm/w}$ the energy required for a write in the NVM.
The power overhead introduced by our strategy can be estimated as $P_{ovh} = \alpha \times P_{active} + P_{leak}$, where $P_{leak}$ is the leakage power, which will be mostly determined by the \textit{to\_backup} memory, and $P_{active}$ the active power.
$P_{leak}$ and $P_{active}$ were provided in Section \ref{sec:areapowefreezer}.
$P_{active}$ will be consumed whenever the processor performs a store operation and $\alpha = N_{store}/N_{prog}$ is the fraction of clock cycles spent performing store operations w.r.t. the execution of the program in the whole interval.
This overhead can be compared with the advantage gained in terms of save and restore energy.
If we compare against a system that saves everything but introduces no overhead, we can estimate the maximum active time $t_{on}$ after which the energy consumed by the controller during the active time becomes greater than the energy saved at backup time.
$t_{on}$ is constrained by the following inequality:
\begin{equation}
t_{on} \le \frac{\delta E_{s} N_{tot}}{P_{ovh}}
\label{eq:ton}
\end{equation}
where $N_{tot}$ is the number of words to be backed-up without Freezer (full memory), $E_{s}$ the energy required to back-up one word ($E_{sram/r} + E_{nvm/w}$), and $\delta E_{s} N_{tot}$ the energy saved during the backup operation.
With Freezer, considering $\delta = 87.7\%$, $E_{sram/r} = 0.45\,pJ/bit$, and $E_{nvm/w} = 100\times E_{sram/r}$, we obtained, for the two extreme configurations depending on the considered benchmark:
$t_{on}~<~16.42s$ and $P_{ovh}= 1.18 \mu W$ for \textit{susan\_smooth}, and
$t_{on} < 2.4 s$ and a similar $P_{ovh}$ for the \textit{FFT} benchmark.\\
Both these $t_{on}$ values allow the programs to execute completely and are well above the typical active time of intermittently-powered systems.
Moreover, Eq. \ref{eq:ton} is obtained by comparing our solution to a system that introduces no overhead at run-time and no overhead during the backup process, which would not be the case in real systems. \\
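Eq. \ref{eq:ton} can be evaluated directly with the parameter values quoted above; the memory size $N_{tot}$ in this sketch is an assumed 8192 words, so the resulting bound is an illustration rather than one of the per-benchmark values:

```python
# Parameters quoted in the text; N_tot is an assumption for illustration.
E_SRAM_R_J = 0.45e-12 * 32     # 0.45 pJ/bit -> J per 32-bit word read
E_NVM_W_J = 100 * E_SRAM_R_J   # NVM write assumed 100x the SRAM read
DELTA = 0.877                  # average backup-size reduction (87.7%)
P_OVH_W = 1.18e-6              # W, run-time overhead quoted for susan_smooth

def t_on_max_s(n_tot_words: int) -> float:
    e_s = E_SRAM_R_J + E_NVM_W_J  # energy to back up one word, as in Eq. (es)
    return DELTA * e_s * n_tot_words / P_OVH_W

t_bound = t_on_max_s(8192)  # seconds, for an assumed 32 KB (8192-word) memory
```

With these assumed inputs the bound comes out on the order of seconds, consistent with the text's conclusion that typical harvested-energy active times sit well below it.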
To give an idea of how Freezer would fit in a low-end IoT node, we can compare it with an ultra-low-power, size-optimised SoC implemented in the same 28nm FDSOI technology node, such as the one presented in \cite{bol_a_40_to_80mhz_2019}.
In terms of area, the SoC is 0.7$mm^2$, while its power consumption is 3$\mu$W/MHz, giving a power consumption of 144$\mu$W at 48MHz.
From these numbers we can see that Freezer, even with our non-optimised implementation, would lead to a small overhead.
In particular, assuming blocks of 8 words, the area overhead of $2748\,\mu m^2$ represents $\approx 0.4\%$ of the SoC area.
The power overhead during active time, considering the $\alpha$ of the FFT benchmark, could be as low as 0.82\%.
\section{Discussion About The Approach}\label{sec:discussion}
Several studies have approached the problem of computing under intermittent power supply, providing a wide variety of different solutions.
While software-based approaches try to solve the problem at the application level, hardware-based solutions try to provide platforms that implement the non-volatility in a way that is transparent to the programmer.
The majority of the hardware solutions usually rely heavily on the underlying memory technology to accomplish the state retention.
Even in \cite{hager_a_scan-chain:2017}, where no NVM is used, the technique relies on an ultra-low-power retention SRAM.
Our approach moves away from this type of scheme and tries to solve the problem from a different standpoint, by providing hardware acceleration for the backup and restore procedures, and by exploiting run-time information to optimize the backup sequence.
Moreover, this approach is agnostic with respect to the NVM technology, and opens a series of possibilities.
Technologies such as hybrid nvSRAM, as the one used in \cite{liu_4.7_2016}, with circuit-level configurable memory, parallel block-wise backup and adaptive restore, may be exploited and enhanced by Freezer, thus achieving a faster and more energy efficient backup sequence thanks to the backup size reduction.
Furthermore, our approach could be extended to implement a programmable backup hardware accelerator, or to implement a dedicated ISA extension.
This would provide programs with some levels of control on the save and restore procedures and allow for the hardware to exploit some of the information available to the program.
As an example, a program may signal that a certain buffer or memory region is no longer used, allowing the controller to exclude it from the backup process.
This would also make it possible to integrate static analysis techniques such as the ones presented in \cite{zhao_software_2015} and \cite{zhao_stack-size_2017} on top of Freezer.
\section{Conclusion}\label{sec:conclusion}
Applications that run under ambient harvested energy suffer from frequent and unpredictable power losses.
To guarantee progress of computation in these circumstances, these applications must rely on mechanisms to retain their state.
In this paper, we propose Freezer, a backup and restore controller that is able to reduce the backup size by monitoring the memory accesses, and that provides hardware acceleration for the backup and restore procedures.
The controller only requires a small memory to keep track of the store operations.
Moreover, it can be implemented with plain CMOS technology and does not rely on complex and expensive hybrid non-volatile memory elements.
Furthermore, Freezer is a drop-in component that can be integrated in existing SoCs without requiring modifications to the internal architecture of the processor.
Our proposed solution achieves an 87.7\% average reduction in backup size on a set of benchmarks, and a two orders of magnitude reduction in the backup time when compared with software-based state-of-the-art approaches.
The code and traces used in this paper are available for reproducibility at \url{https://gitlab.inria.fr/dpala/freezer-resources}.
\section*{Acknowledgment}
This work was supported by Inria Project Lab “Zero-Power Systems” (ZEP).
The authors would like to thank the anonymous reviewers for their comments and feedback.
\bibliographystyle{IEEEtran}
\section{Introduction}\label{sec:introduction}
In the context of IoT, many applications cannot afford the presence of a battery because of size, weight and cost issues.
Recent advancements in Non-Volatile Memory (NVM) technologies are paving the way for Non-Volatile Computing Systems.
These systems are able to sustain computations under unstable power, by quickly saving the state of the full system in a non-volatile fashion.
Thus, Non-Volatile Processors (NVPs) may enable battery-less designs that do not suffer from the frequent power losses inherent in energy-harvesting scenarios.
In related work, both software- and hardware-level solutions were proposed to cope with the backup and restore problem.
Software-based approaches are implemented on platforms that include both some SRAM and an addressable NVM used to store the backup, as the one presented in \cite{zwerg_82_2011}.
Checkpoints are placed at compile time~\cite{ransford_mementos:_2011}.
Then, at run-time the supply voltage is checked and, if an imminent power failure is identified ($V_{dd} < V_{th}$), a backup of the stack and the registers is executed.
In some works, backups are only executed when a power failure interrupt is triggered and the full volatile state (SRAM and registers) is copied to the NVM~\cite{balsamo_hibernus:_2015, balsamo_hibernus_2016}.
Other approaches do not take advantage of the volatile SRAM and exploit the NVM as the only system memory, backing-up only the registers in the event of a power outage~\cite{jayakumar_quickrecall:_2015, choi_achieving_2019}.
Software-level solutions can be implemented on available hardware, but they normally come with a big overhead in terms of both backup time and energy.
Hardware solutions, on the other hand, usually implement fully Non-Volatile Processors (NVPs).
NVPs mostly make use of emerging NVM technologies to implement complex hybrid memory elements (nvFF and nvSRAM, non-volatile registers, and SRAM memory, respectively) that allow for very fast parallel backup and restore operations \cite{yu_non-volatile_2011, wang_3us_2012, liu_4.7_2016, wang_a_130nm_feram_2017, sakimura_10.5_2014, senni_non-volatile_2016}.
However, introducing these hybrid memory elements is intrusive. Moreover, it usually comes with a significant area overhead and often results in increased delay and active power.
Additional limitations on the amount of data that can be saved and restored in parallel are imposed by the peak current consumption required to drive all the NVM bit cells at the same time.
To mitigate these problems, distributed small non-volatile arrays, where groups of flip-flops are backed-up in sequence, are proposed in \cite{bartling_8mhz_2013}.
An adaptive restore controller for configuring the parallelism of the nvSRAM restore operation, trading off peak current with restore speed is instead presented in~\cite{liu_4.7_2016}.
The use of NVM enables persistence across power failures but it also introduces the problem of consistency for the data stored in the NVM \cite{ransford_nonvolatile_2014}.
To address the consistency issue and improve reliability of the system, a software framework that performs a copy-on-write of modified pages of the NVM in a shadow memory area is developed in~\cite{choi_achieving_2019}.
The consistency problem can be also addressed via static analysis or with hardware techniques \cite{liu_lightweight_2016}.
In particular, hybrid nvFFs can be used in a hardware scheme where an enhanced store buffer is used to treat the execution of stores to the NVM as speculative, until a checkpoint is reached~\cite{liu_lightweight_2016}.
Two counters are also used to periodically trigger checkpoints based on the number of executed stores or on the number of executed instructions.
Previous work has also focused on the problem of optimal checkpoint placement, as in \cite{ghodsi_optimal_2017}, where online checkpoint decisions are taken based on a table filled offline using Q-learning.
In this paper, we propose Freezer, a hardware backup and restore controller that is able to reduce the amount of data that needs to be backed-up.
Our approach avoids the high cost of hardware fully NVP architectures since it can be implemented with plain CMOS technology.
Furthermore, contrary to other hardware based approaches such as non-volatile processors \cite{liu_4.7_2016, sakimura_10.5_2014, ma_architecture_2015}, our proposed controller is a component that can be integrated in existing SoCs, without requiring modification of the processor architecture.
Moreover, Freezer achieves better performance than pure software approaches.
Our contributions can be summarized as follows:
\begin{itemize}
\item We propose an analysis of different backup strategies based on the use of memory access traces.
\item We introduce an oracle based backup strategy that provides the optimal lower bound for the backup size.
\item We present a hardware backup controller, Freezer, that dynamically keeps track of the changes in the program state and commits these changes in the NVM before the power failure.
The controller snoops the address signals of the SRAM and uses dirty bits to track modified addresses at block granularity.
\item We conduct an analysis of the trade-offs and a design space exploration for our proposed strategy. Results on a set of benchmarks show an average $8\times$ reduction in backup size. Thanks to Freezer, the backup time is further reduced by more than $100\times$, with a very low area and power overhead.
\item We compare the memory access energy of three different system architectures: SRAM+NVM, NVM-only and cache+NVM, showing that NVM-only systems take on average $3.74\times$ to $3.35\times$ more energy than SRAM+NVM with full-memory backup and $6.19\times$ to $4.22\times$ more when compared to Freezer. Our strategy also shows a clear advantage when compared to the cache+NVM architecture, requiring on average $7.8\times$ and $5.9\times$ less energy, with RRAM and STTRAM as main memory, respectively.
\end{itemize}
The rest of the paper is organized as follows.
In Section \ref{sec:context}, we present some background information and related works.
In Section \ref{sec:sys-modeling} we describe the main system models and architectures for a transiently powered device, and we present a model for evaluating the memory access energy of different system architectures.
In Section \ref{sec:backup-model}, we introduce and discuss the model for the backup strategies.
Section \ref{sec:traces} explains how the memory access traces of the benchmarks are processed and analysed.
Section \ref{sec:freezer} presents Freezer backup controller, its algorithm, and some area and power synthesis results.
We report several comparison results of our study in Section \ref{sec:results}.
Finally, we briefly discuss our approach and draw the conclusions in Sections \ref{sec:discussion} and \ref{sec:conclusion}.
\section{Background and Related Work}\label{sec:context}
In this section, we briefly present the context around non-volatile processors and the problem of state retention in energy harvesting applications.
We then present the motivation from which this paper is derived.
In related work, both software- and hardware-level solutions were proposed to guarantee forward progress across unpredictable power failures.
There are two main approaches to cope with the backup and restore problem:
periodic check-pointing \cite{ransford_mementos:_2011, choi_achieving_2019}, and
on-demand backup \cite{balsamo_hibernus:_2015, balsamo_hibernus_2016, jayakumar_quickrecall:_2015}.
Periodic check-pointing systems try to guarantee forward progress by repeatedly executing some check-pointing tasks, interleaved with the computation.
These check-points are usually placed by the compiler, according to some heuristic.
At run-time, when a check-point is reached, the system decides if a backup should be executed.
In \cite{ransford_mementos:_2011}, for example, the supply voltage level is checked to determine whether there is enough energy or if a snapshot should be taken.
After a power outage, the state will be rolled back to the last saved state and the execution will resume from the last check-point that was reached.
This approach has the advantage that backup size can be optimised, as the location of each check-point is known in advance.
In \cite{choi_achieving_2019}, checkpoints are instead taken based on the expiration of a timer, but only the registers are saved as the system uses only NVM as its main memory. To avoid consistency issues with NVM updates happening between a checkpoint and a power failure, the modified NVM pages are saved with a copy-on-write mechanism on a shadow memory area.
These periodic check-pointing techniques also introduce overhead due to the execution of unnecessary checkpoints and backups; moreover, they may lead to the re-execution of part of the code after the rollback.
On-demand backup tries to avoid the run-time overheads introduced with periodic check-pointing by waiting until a power failure is detected before executing the backup.
The typical behavior of an on-demand backup system is depicted in Fig. \ref{fig:interval},
\begin{figure*}[htbp]
\centering
\includegraphics[width=.85\linewidth]{images/intervals_v3}
\caption{Division of execution time in intervals and system state during an interval.}
\label{fig:interval}
\end{figure*}
which shows how the system responds to a power failure, signaled by a decrease in the supply voltage ($V_{dd}$), by interrupting the computation and entering the \textit{Backup} phase.
When the backup is completed, the system goes into the \textit{OFF} state, where it waits until the power resumes.
When power becomes available again, the platform can leave the \textit{OFF} state and start the recovery.
The new interval begins when the system enters the \textit{Restore} phase, to recover the state saved in the previous backup.
When the restore is completed the system can resume the computation.
Some hardware-based solutions can also be considered implementations of on-demand backup.
As an example, in \cite{su_a_ferroelectric_2017}, the non-volatile processor is paired with a dedicated voltage detector used to trigger the backup mechanism.
The main disadvantage with these techniques is that they often require a full backup of the system memory, as it is difficult to know in advance when a power failure will happen and thus saving only the required memory is complicated.
To mitigate this problem, some offline static analysis techniques have been proposed \cite{zhao_software_2015, zhao_stack-size_2017}.
In particular, in \cite{zhao_stack-size_2017}, an offline analysis of the code is used to find the backup positions that reduce the stack size.
These positions are marked in the code with the insertion of special label instructions.
At run-time, a dedicated hardware module waits for the power failure signal. After this signal, execution continues until the program reaches the label instruction; the hardware module then executes the backup.
These techniques require a compile-time analysis, with a detailed energy model of the platform.
Moreover, they tend to introduce overhead, as they need to modify the program code \cite{zhao_software_2015} and the internal architecture of the processor \cite{zhao_stack-size_2017}.
Non-volatile processors can also be considered implementations of on-demand backup, as they focus on having very fast backup (and restore) in response to power failures.
In \cite{ma_architecture_2015}, architectures and techniques for implementing non-pipelined, pipelined, and out-of-order (OoO) non-volatile processors are proposed.
The proposed techniques try to optimise the backup size of the internal state of the processor, using techniques such as dirty bits for a selective backup of the register file.
Contrary to our approach, these architectures rely on NVM or hybrid memories for the persistence of the main memory.
Moreover, these techniques are in general very intrusive, as they require an in-depth modification of the internal architecture of the processor.
To address the problem of full memory backup in an on-demand scheme, we propose a hardware backup controller, Freezer, that is able to optimise the size of the backup based on the information collected at run-time.
Our proposed controller is an independent component that can be integrated in existing SoCs, without requiring changes to the internal architecture of the processor core.
In this work, we focus on how to optimise the backup of the main memory and we do not consider the problem of saving the internal state of the processor.
However, the state of the CPU could be managed in software by copying the internal registers into the main memory before starting the backup.
Other techniques are proposed in the literature to save the internal registers.
Common hardware-based solutions use nvFFs based on different technologies, such as STTRAM~\cite{sakimura_10.5_2014}, MRAM~\cite{senni_non-volatile_2016}, FeRAM~\cite{su_a_ferroelectric_2017}, and ReRAM~\cite{liu_4.7_2016}; other options are distributed FeRAM mini arrays~\cite{bartling_8mhz_2013} or a combination of nvFFs and NVM blocks for the backup of internal registers~\cite{ma_architecture_2015}.
\section{System Modelling}
\label{sec:sys-modeling}
\subsection{Considered System Model}\label{sec:sys-model}
Energy harvesting is seen as a promising source to power future battery-less IoT systems.
However, due to the unpredictable nature of the energy source, these systems will be subject to sudden power outages.
This could cause the execution of a program to be unexpectedly interrupted.
Thus, in these intermittently (or transiently) powered systems, the execution is divided in multiple power cycles, i.e., intervals, as shown in Fig. \ref{fig:interval}.
The timing break-down of one of these intervals is depicted in Fig. \ref{fig:energy-cycle}.
$t_{cyc}$ is the duration of this on-off cycle and is defined as $t_{cyc}=t_r+t_a+t_s+t_{off}$, where $t_a$ is the time in the active state, during which the system executes its software tasks, $t_{off}$ is the time in the power-off state, and $t_r$ and $t_s$ are the times to restore data from and save (backup) data to the NVM, respectively.
\begin{figure}[ht]
\centering
\includegraphics[width=.9\linewidth]{images/transient_computing}
\caption{Detail of an execution cycle between two consecutive power outages.}
\label{fig:energy-cycle}
\end{figure}
The energy consumed by the system during $t_{cyc}$ can be modelled as (adapted from \cite{hager_a_scan-chain:2017})
\begin{equation}
E_{c} = E_{s} N_{s} + E_{r} N_{r} + P_{on} t_{a} + P_{off} t_{off},
\label{eq:energy}
\end{equation}
where $E_{s}$ and $E_{r}$ are the energy required respectively for saving and restoring one word, $N_s$ and $N_r$ the total number of words to save and restore.
$P_{on}$ and $P_{off}$ are the power consumed during the active state and off state, respectively.
In this type of intermittently-powered system, $P_{off}$ is usually zero, as the state is retained in a non-volatile manner and thus the whole system, including the processor core, can be fully shut down.
Moreover, $N_s$ and $N_r$ are usually equal and often coincide with the full size of the volatile system state~\cite{balsamo_hibernus:_2015}.
Considering an \textit{on-demand backup} system that only performs a backup before a power failure, the total execution time $t_{exec}$ of a program can be modeled as (adapted from \cite{balsamo_hibernus:_2015})
\begin{equation}
t_{exec} = t_{prog} + n_i \times (t_s + t_r + \overline{t_{off}}),
\label{eq:time}
\end{equation}
where $t_{prog}$ is the time needed for running the whole program without interruptions, $n_i$ the number of interruptions, $t_s$ and $t_r$ the save and restore time, respectively, and $\overline{t_{off}}$ the average off time.
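As an illustration, the cycle-energy model of Eq. \eqref{eq:energy} and the execution-time model of Eq. \eqref{eq:time} can be evaluated with a short sketch; all numeric parameter values below are hypothetical placeholders, not measurements from our platform.

```python
# Sketch of the cycle-energy model E_c and the execution-time model
# t_exec. All numeric values below are hypothetical placeholders.

def cycle_energy(E_s, N_s, E_r, N_r, P_on, t_a, P_off, t_off):
    """E_c = E_s*N_s + E_r*N_r + P_on*t_a + P_off*t_off."""
    return E_s * N_s + E_r * N_r + P_on * t_a + P_off * t_off

def total_exec_time(t_prog, n_i, t_s, t_r, t_off_avg):
    """t_exec = t_prog + n_i * (t_s + t_r + average t_off)."""
    return t_prog + n_i * (t_s + t_r + t_off_avg)

# 1024-word backup and restore; P_off = 0 since the state is retained
# in a non-volatile manner and the system is fully shut down.
E_c = cycle_energy(E_s=2e-9, N_s=1024, E_r=1e-9, N_r=1024,
                   P_on=1e-3, t_a=10e-3, P_off=0.0, t_off=1.0)
t_exec = total_exec_time(t_prog=1.0, n_i=50, t_s=1e-3, t_r=1e-3,
                         t_off_avg=0.5)
print(E_c, t_exec)
```

The sketch makes the dependence explicit: reducing $N_s$ (and hence $t_s$), as Freezer does, shrinks both the per-cycle energy and the total execution time.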
Our approach, Freezer, aims at reducing the size of the backup ($N_{s}$), thus also reducing $t_s$ and the total execution time and backup energy.
Moreover, the hardware implementation of our approach guarantees an additional decrease to the backup and restore time and energy, by eliminating the overhead due to software operations.
In this paper, we assume that the system has a reliable way to detect a power failure and we also assume that the system has enough power to complete the backup.
Therefore, we do not investigate the problem of how to deal with incomplete backups.
To obtain a stronger guarantee on the consistency of the system state after recovery, a double buffering scheme can be applied, such that a new backup does not overwrite the previous one in the NVM.
Moreover, we do not deal with the issue of how to detect a power failure.
For this problem, solutions have also been proposed in the literature, such as dedicated voltage detectors \cite{su_a_ferroelectric_2017}.
\subsection{System Architecture} \label{sec:sys-arch}
In the field of non-volatile processors for energy harvesting applications, there are several possible architectural choices for achieving state retention.
The most common approaches are the following:
\begin{itemize}
\item A CPU with an SRAM and an addressable NVM.
The NVM might serve as a backup of the full memory space of the SRAM but might also be addressable by the processor.
\item A CPU with an SRAM and a backup-only NVM or a CPU with a hybrid nvSRAM as in \cite{liu_4.7_2016}.
\item A CPU with an NVM as main system memory as in \cite{sakimura_10.5_2014, wang_a_130nm_feram_2017, jayakumar_quickrecall:_2015, senni_non-volatile_2016, choi_achieving_2019}.
\item A CPU with an SRAM-based cache and an NVM as the only system memory \cite{ghodsi_optimal_2017, ma_architecture_2015}.
\end{itemize}
\begin{figure}
\centering
\includegraphics[width=.85\linewidth]{images/SRAM+NVM_Cache+NVM_NVM-only_spaced_large-font}
\caption{Architectural models for non-volatile state retention.}
\label{fig:arch_models}
\end{figure}
These approaches can be grouped into the three basic architectures depicted in Fig. \ref{fig:arch_models}.
The first two approaches have in common the SRAM+NVM architecture, which, as shown in Fig. \ref{fig:arch_models}, exploits SRAM for execution and NVM for enabling backup and restore operations.
The NVM-only approach relies solely on NVM as its main memory.
Cache+NVM uses NVM as the main memory with the addition of a volatile cache.
A common choice for implementing intermittently-powered systems is to use commercially available SoCs with an embedded addressable NVM.
As this NVM is addressable, this type of system is the common choice for implementing software-based retention schemes~\cite{balsamo_hibernus:_2015, ransford_mementos:_2011}.
Another option explored in related work is that of using hybrid nvSRAM \cite{yu_non-volatile_2011, liu_4.7_2016}.
This choice makes it possible to exploit the main advantages of SRAM (fast read/write and low access power), while also obtaining fast parallel backup through the paired non-volatile memory elements.
This means that the non-volatile elements are not directly accessible by the programmer; instead, the non-volatility is made transparent by the hardware.
A conceptually simple solution to guarantee state retention is to exploit only an NVM as the main memory.
This solution is proposed in \cite{sakimura_10.5_2014}, where the system is fully based on STTRAM.
Another example is given by the software approach of QuickRecall \cite{jayakumar_quickrecall:_2015}, where the available SRAM is not used and the system runs only on the FeRAM.
As with hybrid nvSRAM, the non-volatility is transparent to the programmer.
Also, in this case, there is no need to copy the data in the event of a power failure.
In \cite{ma_architecture_2015}, methods for the backup and recovery of the internal state are proposed and compared for non-pipelined, pipelined, and out-of-order (OoO) processor architectures.
These solutions can also be considered NVM-only type of systems, as they use NVM as their main memory, with the addition of hybrid or NVM caches in the case of the OoO processor.
Unfortunately, some of these new NVM technologies are still immature and often they do not provide the same level of performance in terms of access time and access energy as the SRAM \cite{yu_emerging:2016}.
Moreover, NVM-only designs must also face the issue of wear and the reduced endurance that characterises many of the emerging NVM technologies.
To mitigate this problem, a possible solution could be to use register-based or SRAM-based store buffers.
As an example, enhanced store buffers are proposed in \cite{liu_lightweight_2016} to postpone the execution of NVM writes, treating store operations as speculative.
However, the limited size of the store buffer still results in very frequent checkpoints and a large number of NVM writes.
Another possible way to mitigate the NVM write speed and endurance problems is to use an SRAM-based cache to buffer the accesses to the main NVM.
Although this type of architecture could be of some interest for higher performance systems, it is not very common in small IoT edge nodes.
This is because adding a cache would significantly increase both the dynamic and static power consumption during the active period.
In this work, we consider an architecture that comprises a micro-controller with an SRAM as main memory and an NVM that is used by our proposed backup controller, Freezer, to save (and restore) the state of the system before (and after) a power failure.
The general overview of such architecture is depicted in Fig. \ref{fig:system-arch}.
The micro-controller we consider implements the RISC-V Instruction Set Architecture (ISA).
\begin{figure}[ht]
\centering
\includegraphics[width=.6\linewidth]{images/freezer_harvester}
\caption{General overview of a system implementing Freezer.}
\label{fig:system-arch}
\end{figure}
\subsection{Modelling Memory Access Energy}
\label{sec:res:energy}
The energy required for a backup operation is dominated by the data transfers between the SRAM and the NVM, and will be proportional to the backup size.
This energy is mostly determined by the write energy of the NVM, which can be as much as $100\times$ that of the SRAM \cite{yu_emerging:2016}.
Our approach provides a reduction of the backup energy by decreasing the number of data transfers and by improving the speed of the process compared with a software based backup strategy.
In this section, we provide a simplified model to evaluate and compare the energy cost of some of the different system architectures introduced in Section \ref{sec:sys-arch}. Results provided in Section \ref{sec:results} are based on this model.
In particular, we derive the energy cost in terms of memory accesses for the following types of memory models:
\begin{itemize}
\item SRAM + NVM for backup,
\item NVM only,
\item cache + NVM as main memory.
\end{itemize}
For the \textit{SRAM+NVM} architecture, we consider both a system which performs a full memory backup and a system with Freezer.
For this system, the energy cost associated with memory accesses can be expressed as
\begin{equation}
E_{SRAM+NVM} = E_{prog} + E_{backup} + E_{restore}
\label{eq:energy-sram+NVM}
\end{equation}
where $E_{prog}$ is the energy of the memory accesses needed for running the program.
\begin{equation}
E_{prog} = E_{sram/r} N_{load} + E_{sram/w} N_{store}
\label{eq:energy-prog}
\end{equation}
where $E_{sram/r}$ and $E_{sram/w}$ are the read and write energy of the SRAM, and $N_{load}$ and $N_{store}$ are the total number of load and store operations, respectively.
The additional costs required by a platform with both SRAM and NVM are expressed in Eq. \ref{eq:energy-sram+NVM} by the backup energy $E_{backup}$ and the restore energy $E_{restore}$, defined respectively as
\begin{equation}
E_{backup} = N_{s} (E_{sram/r} + E_{nvm/w}),
\label{eq:energy-backup}
\end{equation}
\begin{equation}
E_{restore} = N_r (E_{nvm/r} + E_{sram/w}).
\label{eq:energy-restore}
\end{equation}
The energy for the backup depends on the total size of the backup $N_s$ and on the energy required for reading from SRAM $E_{sram/r}$ and writing to NVM $E_{nvm/w}$.
$N_s$ is the total number of saved words throughout the full execution.
Similarly $E_{restore}$ can be expressed as the energy for a single transfer (read from NVM and write to SRAM) multiplied by the total number of restored words $N_r$.
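Putting Eqs. \eqref{eq:energy-sram+NVM}--\eqref{eq:energy-restore} together, the SRAM+NVM model can be sketched as follows; the per-access energy values in the example are hypothetical placeholders, chosen only so that NVM writes are far more expensive than SRAM accesses.

```python
# Sketch of the SRAM+NVM memory-access energy model. The energy
# parameters used in the example call are hypothetical placeholders.

def e_sram_nvm(n_load, n_store, n_s, n_r,
               e_sram_r, e_sram_w, e_nvm_r, e_nvm_w):
    e_prog = e_sram_r * n_load + e_sram_w * n_store  # E_prog
    e_backup = n_s * (e_sram_r + e_nvm_w)            # E_backup
    e_restore = n_r * (e_nvm_r + e_sram_w)           # E_restore
    return e_prog + e_backup + e_restore             # E_SRAM+NVM

# Example with unit-free placeholder energies (NVM write = 100x SRAM):
print(e_sram_nvm(n_load=10, n_store=5, n_s=4, n_r=4,
                 e_sram_r=1, e_sram_w=1, e_nvm_r=2, e_nvm_w=100))
```

Since $E_{nvm/w}$ dominates, reducing the total number of saved words $N_s$ directly shrinks the dominant $E_{backup}$ term.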
For the \textit{NVM-only} architecture, there is no need to perform backup and restore operations, as everything is already saved in the NVM.
In this case, the memory access energy is given only by the load and store operations performed while running the program.
The energy cost for a purely non-volatile system that uses an NVM as its main memory is estimated by
\begin{equation}
E_{NVM} = E_{prog NVM} = E_{nvm/r} N_{load} + E_{nvm/w} N_{store}.
\label{eq:energy-NVM}
\end{equation}
The \textit{cache+NVM} architecture comprises both an NVM as its main memory, and an SRAM-based cache to reduce the number of accesses to the NVM.
This system uses a write-back cache controller that performs a flush of the dirty lines on NVM in case of a power failure.
In a cache system, for every operation, the TAG memory is first read to verify whether the required address is in the cache; in case of a miss, a read from the NVM is executed.
Moreover, simultaneous TAG and DATA memory reads are performed inside the cache to sustain high throughput.
Finally, multiple data words may be accessed in parallel in an N-way set-associative cache, even though only one word is useful.
Therefore, the energy per read/write operation of this system is much higher than that of a system with a tightly coupled memory (SRAM+NVM).
The energy cost for a cache+NVM system is therefore estimated by
\begin{equation}
E_{cache} = E_{hits} + E_{misses} + E_{flushes}
\label{eq:energy-cache}
\end{equation}
where $E_{hits}$ is the energy due to cache hits, $E_{misses}$ the energy penalty due to misses, and $E_{flushes}$ the energy consumed with flushes.
The first part of the energy cost $E_{hits}$ is
\begin{equation}
E_{hits} = N_{hit/r}E_{hit} + N_{hit/w}(E_{hit} + E_{cache/w})
\label{eq:energy-hits}
\end{equation}
where $N_{hit/r}$ and $N_{hit/w}$ are respectively the number of read and write hits, $E_{hit}$ the energy for a single cache access and $E_{cache/w}$ the energy for a write operation inside the cache.
$E_{hits}$ therefore includes the energy due to read hits $N_{hit/r}E_{hit}$ and the energy due to write hits $N_{hit/w}(E_{hit} + E_{cache/w})$.
$E_{misses}$, the energy due to the misses, is expressed as
\begin{equation}
\begin{aligned}
E_{misses} &= N_{miss}(E_{miss} + (E_{nvm/r} + E_{cache/w}) \times 8) \\
&+ N_{evict} E_{nvm/w}
\end{aligned}
\label{eq:energy-misses}
\end{equation}
where $N_{miss}$ is the total number of misses, $E_{miss}$ the energy for a missing access, $N_{evict}$ the total number of evicted words, $E_{nvm/r}$ and $E_{nvm/w}$ are the energy for reading and writing a word in the NVM.
Eq. \eqref{eq:energy-misses} shows that each miss causes the reading of a full block (8 words in our case) from the NVM.
Moreover, a missing access may also cause the eviction of a block from the cache, resulting in writes to the NVM.
$E_{flushes}$ is caused by the backup of the dirty lines before a power failure happens. This operation requires scanning all the cache lines and writing back the dirty ones, and is repeated before every power failure.
\begin{equation}
E_{flushes} = N_i N_{lines}E_{hit} + N_{flush} E_{nvm/w} \times 8
\label{eq:energy-flushes}
\end{equation}
where $N_{lines}$ is the number of cache lines, $N_i$ the number of interruptions, and $N_i N_{lines} E_{hit}$ represents the energy for scanning all the lines of the cache at every power failure.
The energy due to the writes to NVM is expressed by $N_{flush}$, the total number of flushed blocks throughout all power failures, multiplied by the energy for writing $8$ words to NVM.
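The cache+NVM model of Eqs. \eqref{eq:energy-cache}--\eqref{eq:energy-flushes} can be sketched as follows, with the 8-word block size used above; all the energy parameters in the example call are hypothetical placeholders.

```python
# Sketch of the cache+NVM memory-access energy model with 8-word
# blocks. All energy parameters below are hypothetical placeholders.

BLOCK_WORDS = 8

def e_cache_nvm(n_hit_r, n_hit_w, n_miss, n_evict, n_flush,
                n_i, n_lines,
                e_hit, e_cache_w, e_miss, e_nvm_r, e_nvm_w):
    # E_hits: read hits pay one cache access; write hits also pay the
    # internal cache write.
    e_hits = n_hit_r * e_hit + n_hit_w * (e_hit + e_cache_w)
    # E_misses: each miss fetches a full 8-word block from the NVM,
    # and evicted words are written back to the NVM.
    e_misses = (n_miss * (e_miss + (e_nvm_r + e_cache_w) * BLOCK_WORDS)
                + n_evict * e_nvm_w)
    # E_flushes: every power failure scans all lines and writes the
    # dirty blocks back to the NVM.
    e_flushes = n_i * n_lines * e_hit + n_flush * e_nvm_w * BLOCK_WORDS
    return e_hits + e_misses + e_flushes  # E_cache

# Example with unit-free placeholder energies:
print(e_cache_nvm(n_hit_r=10, n_hit_w=5, n_miss=2, n_evict=3,
                  n_flush=1, n_i=1, n_lines=4,
                  e_hit=1, e_cache_w=1, e_miss=1,
                  e_nvm_r=2, e_nvm_w=10))
```

The sketch makes visible why this architecture is costly for small edge nodes: misses and flushes multiply the expensive NVM accesses by the full block size.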
\section{Modeling of the Backup Strategies}\label{sec:backup-model}
By analyzing the memory access sequences, we can identify different backup strategies.
The \emph{Full Memory Backup} strategy corresponds to the state of the art.
In this paper, we propose four backup strategies: \emph{Used Address (UA)}, \emph{Modified Address (MA)}, \emph{Modified Block (MB)}, a block-based evolution of the two previous strategies, and an \emph{Oracle}.
The Oracle cannot be implemented in a real system, as it requires knowledge of the future.
This oracle is however very useful for comparison, as it gives the optimal lower bound for the backup size.
In the rest of the paper, a \emph{word} is defined as 32 bits of data.
\subsection{Full Memory Backup}
The first and simplest solution is to backup the full content of the memory at the end of each interval as it is proposed in~\cite{balsamo_hibernus:_2015}.
For our study, and for a fair comparison, we considered a slightly improved version of this strategy that saves only the data section of the program in pages of $512$ bytes (128 words), thus not saving the full memory every time.
As an example, if a program needs a $1000$-byte data space, $1024$ bytes ($2$ pages) will be saved in the NVM.
With this approach, the backup size is a constant for all the intervals, equivalent to the number of pages to be saved.
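The page rounding described above can be sketched as follows:

```python
import math

# Sketch of the page-granularity backup size of the improved
# full-memory-backup baseline: the data section is saved in
# 512-byte (128-word) pages.

PAGE_BYTES = 512

def full_backup_size(data_bytes):
    """Backup size in bytes, rounded up to whole 512-byte pages."""
    return math.ceil(data_bytes / PAGE_BYTES) * PAGE_BYTES

print(full_backup_size(1000))  # a 1000-byte data space -> 1024 bytes (2 pages)
```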
\subsection{Used Address Backup}
The first strategy that we propose is the \textit{Used Address (UA)} strategy.
UA consists of keeping track of all the different addresses that are accessed (read or written) during an interval.
When a power failure is detected, every address that was accessed during that interval is saved in the NVM.
In the UA case, only the memory locations that were used during the interval are backed-up.
\subsection{Modified Address Backup}
If the initial snapshot of the program is stored in the NVM, the UA scheme can be improved by implementing the \textit{Modified Address (MA)} backup strategy.
MA only keeps track of the memory locations that are modified (written) during a power cycle.
Then, before a power outage, only the words that were modified (write operation) are saved to the NVM.
In practice this means saving only the addresses accessed by a store operation at least once during the interval.
This number of addresses gives the size of the backup at the end of the interval.
It may happen that the data written during execution do not modify the content of the memory. However, to keep the technique simple, we do not track the content of the memory but only the addresses where a write operation happens.
\subsection{Oracle}
The \textit{Oracle} is defined as the strategy that saves only the words that are \textit{alive}.
An address is considered alive when it is going to be read at least once in any future interval.
In other words, written data are considered alive if they are read in any future interval, before being modified by another write at the same address.
A word that will be overwritten before being read is not considered alive and thus is not backed-up by the oracle.
\begin{figure}[htb]
\centering
\includegraphics[width=.9\linewidth]{images/alive_oracle_new}
\caption{Example of the aliveness of two addresses with Load (L) and Store (S) instructions. The continuous green line indicates that the address is alive. The black dotted line is used when the address is not alive.
The store on address \textit{0x10} at cycle 12 (S*) does not make the address alive because it is followed by another store at cycle 15, which overwrites the value written by S*.
}
\label{fig:alive-ex}
\end{figure}
Fig. \ref{fig:alive-ex} shows an example of two addresses changing between the \textit{alive} and the \textit{not alive} state as the execution progresses.
In the example, address \textit{0x0C} stops being alive after it is used by the load in cycle 5 and stays \textit{not alive} for the period between the sixth and the ninth clock cycles.
This happens because the Oracle knows that the value will be overwritten by the store executed at clock cycle 10.
Therefore, between clock cycles 6 and 9, it does not consider \textit{0x0C} as an alive address.
For the same reason, address \textit{0x10} stops being alive after the load in cycle 7 and is \textit{not alive} in the time between cycle 8 and cycle 14.
The store operation happening at cycle 12 does not change the state of the address because it is going to be followed by another store instruction that will discard this temporary update.
The \textit{Oracle}, before the power failure, only saves the words that are going to be read during any further interval.
Extending this oracle, we moreover define the \textit{Oracle Modified (OM)} strategy that only saves the alive words that were modified in the current interval.
As for the MA scheme, we can consider that a complete snapshot of the system memory is stored in the NVM at the beginning and during any previous interval.
With the OM strategy, the data that will be read in the future are only saved if they were modified.
If a data has been saved in the previous intervals and remained unchanged, it is not added in the snapshot of the memory to be saved before the next power failure.
From now on, we will use Oracle to refer to the Oracle Modified when comparing with the other strategies.
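As an illustrative sketch (not the implementation used in our trace analysis), the Oracle decision at a single backup point can be derived from the future portion of the trace: an address must be saved only if the first future access to it is a load. The trace tuples below are hypothetical and mirror the format of Table \ref{tab:trace}.

```python
# Sketch of the Oracle's aliveness test at one backup point: a word is
# alive, and must be saved, only if the first access to its address
# after the backup point is a load; if it is overwritten first, it can
# be skipped. The trace is assumed sorted by cycle.

def alive_at(trace, backup_cycle):
    """Return the set of addresses alive right after `backup_cycle`."""
    first_future_op = {}  # addr -> first op seen after the backup point
    for cycle, op, addr in trace:
        if cycle > backup_cycle and addr not in first_future_op:
            first_future_op[addr] = op
    return {a for a, op in first_future_op.items() if op == "LD"}

trace = [
    (90, "ST", 0x38AAAD4),
    (97, "LD", 0x2BA50),
    (99, "LD", 0x2B06C),
    (104, "LD", 0x2B954),
    (109, "LD", 0x38AAAD4),
    (120, "ST", 0x2BA50),  # 0x2BA50 is overwritten first: not alive
]
print(sorted(hex(a) for a in alive_at(trace, backup_cycle=99)))
```

The Oracle Modified (OM) strategy further intersects this alive set with the set of addresses written during the current interval.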
\subsection{Block-Based Strategies}
Both the \textit{Used Address} and \textit{Modified Address} strategies can be implemented with different degrees of granularity.
Tracking each individual word may require a very large memory to store the modified addresses; a block-based strategy trades off hardware cost against backup savings.
Instead of considering single word addresses, the addresses can be grouped in blocks of $N$ words and the scheme can be adapted to keep track of these blocks.
Therefore the \textit{Modified Block (MB)} strategy keeps track of the blocks that are modified during the interval.
The backup size is given, for each interval, by the number of blocks that are accessed with one or more store operations.
In Freezer, the modified blocks are tracked using corresponding \textit{dirty bits}, which allows for the size of the associated tracking memory to be reduced by a factor equivalent to the block size.
MB with blocks of $N=1$ word corresponds to the MA strategy.
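The bookkeeping of the MB strategy amounts to one dirty bit per block, set on every store. A minimal software sketch (sizes and names are illustrative; setting the block size to 1 word degenerates to MA):

```python
BLOCK_SIZE = 8                     # N words per block
SRAM_WORDS = 8192                  # 32 KB of 32-bit words

dirty = [False] * (SRAM_WORDS // BLOCK_SIZE)

def on_store(word_addr):
    """Run-time tracking: mark the block containing the stored word."""
    dirty[word_addr // BLOCK_SIZE] = True

def backup_size_words():
    """Words to copy to the NVM at the next power failure."""
    return sum(dirty) * BLOCK_SIZE

for a in (0, 1, 9, 9, 4000):       # hypothetical store addresses
    on_store(a)
print(backup_size_words())         # 3 dirty blocks (0, 1, 500) -> 24 words
```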
\section{Trace Analysis and Improvement in Backup Size}\label{sec:traces}
In order to validate our approach, we analyzed the memory access traces of several benchmarks from a subset of MiBench (see Table~\ref{tab:backup-size-vs-block-size} for a list of the benchmarks).
The benchmarks were run on a cycle-accurate, bit-accurate RISC-V model~\cite{rokicki:hal-02303453}; thus only two types of memory access are possible: load and store operations.
The traces report the information about each memory access during the program execution.
In particular, each trace records a timestamp (cycle count), the type of operation (ST or LD for store or load) and the address for every memory access.
Table \ref{tab:trace} shows an example of a memory access trace.
\begin{table}[htb]
\caption{Example of memory access trace.}
\centering
\begin{tabular}{rlll}
\hline
interval & cycle & op & addr \\
\hline
$i$ & ... & ... & ... \\
$i$ & 90 & ST & 0x38aaad4 \\
$i$ & 97 & LD & 0x2ba50 \\
$i$ & 99 & LD & 0x2b06c \\
\hline
$i+1$ & 104 & LD & 0x2b954 \\
$i+1$ & 109 & LD & 0x38aaad4 \\
$i+1$ & ... & ... & ... \\
\vdots & \vdots & \vdots & \vdots \\
$n$ & ... & ... & ... \\
\hline
\end{tabular}
\label{tab:trace}
\end{table}
The occurrences of power failures are simulated by dividing an access trace in $n$ time intervals.
Each interval $i$ is composed of a given number of clock cycles $N_{prog_i}$, equal to the active time $t_a$ of the interval $i$ divided by the processor clock period.
The cycle count reported in the trace is used to divide the execution of a benchmark in these $n$ intervals.
In the rest of the paper, for simplicity and without loss of generality, we divide $t_{prog}$, the time needed for running the whole program without interruptions, into $n$ equal intervals of $N_{prog}$ cycles.
In the example reported on Table \ref{tab:trace}, the interruption is placed after cycle \textit{99}.
This means that the load happening in cycle \textit{104} is considered as being executed in the next interval ($i+1$).
This is a simple way to simulate a frequency of power failures every $N_{prog}$ cycles ($N_{prog}=100$ in this example).
In practice, for our simulations we considered longer intervals, ranging from $10^5$ to $10^7$ cycles.
As an example, considering a device running at 10 MHz, intervals of $10^6$ clock cycles would correspond to a frequency of interruptions due to power failures of 10 Hz.
In Section \ref{sec:impact-of-interval}, we present an analysis of the impact of the interval length and of the variability of intervals duration on the reduction of backup size.
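The interval assignment described above reduces to an integer division of the cycle count. A sketch, using the hypothetical $N_{prog}=100$ of the example:

```python
def split_into_intervals(trace, n_prog):
    """Assign each trace record to an interval of n_prog cycles.

    trace: iterable of (cycle, op, addr); returns {interval: [records]}.
    """
    intervals = {}
    for cycle, op, addr in trace:
        intervals.setdefault(cycle // n_prog, []).append((cycle, op, addr))
    return intervals

# Cycle 99 is the last cycle of interval 0, and the load at cycle 104
# falls in interval 1, as in the trace example above:
trace = [(90, "ST", 0x38AAAD4), (97, "LD", 0x2BA50),
         (99, "LD", 0x2B06C), (104, "LD", 0x2B954)]
iv = split_into_intervals(trace, 100)
print(len(iv[0]), len(iv[1]))      # 3 1
```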
From these traces, the number of load and store operations per interval, as well as other memory access features, can be extracted.
As an example, Fig. \ref{fig:op} shows the number of LD and ST in each interval during the execution of the FFT benchmark, with intervals of $N_{prog}=10^7$ clock cycles.
\begin{figure}[htb]
\centering
\includegraphics[width=.9\linewidth]{images/op_number_per_interval_scale}
\caption{Number of LD and ST operations per interval during the FFT benchmark execution with $N_{prog}=10^7$ clock cycles.}
\label{fig:op}
\end{figure}
Considering the duration of the full execution of the FFT benchmark on the target processor, $n=7$ intervals can be simulated, ranging from interval $0$ to $6$ in the figure.
These traces provide relevant information about the memory access behavior of a given program. They will be used to compare the different backup strategies in Sections \ref{sec:Impact-of-block-size} and \ref{sec:results}.
\begin{figure}[ht]
\centering
\includegraphics[width=.9\linewidth]{images/alive_vs_total_fft_percent_scale}
\caption{Percentage w.r.t. full memory space of ``alive'' and ``alive \& modified'' addresses per interval during the execution of the FFT benchmark with $N_{prog}=10^7$ clock cycles.}
\label{fig:alive-fraction}
\end{figure}
Fig. \ref{fig:alive-fraction} shows the fraction of \textit{alive} and \textit{alive \& modified} addresses with respect to the total number of words addressed, for every interval of $10^7$ clock cycles for the FFT benchmark.
In the last interval no address is considered alive as the oracle knows that the program is going to terminate before the next power failure.
The figure also shows that, even with a relatively small benchmark, the number of words that really need to be saved is less than a quarter of the total.
This motivates our work on the definition of new backup strategies to reduce the volume of data to be backed-up before a power failure.
However, as already mentioned, the OM strategy cannot be implemented in a real system, as it requires knowledge of the future.
It is nevertheless very useful for comparison, as it gives the optimal lower bound on the backup size.
\begin{figure}[htb]
\centering
\includegraphics[width=\linewidth]{images/full_mem_vs_ua_vs_ma_vs_oracle_abrev_scale}
\caption{Average number of words saved per interval by the different backup strategies -- full-memory, used-address (UA), modified (MA), and oracle modified (OM) -- during the execution of different benchmarks, with $N_{prog}=10^6$ cycles.}
\label{fig:ma-is-better}
\end{figure}
Fig. \ref{fig:ma-is-better} compares the average number of words saved per interval by the full-memory, UA, MA, and OM strategies for different benchmarks and with $N_{prog}=10^6$ clock cycles.
The figure shows the great potential of the proposed strategies w.r.t. state-of-the-art approaches.
Fig. \ref{fig:ma-is-better} also demonstrates that the MA strategy always outperforms the UA strategy in terms of the number of saved words, and that it is the only technique that comes close to the performance of the oracle modified.
Therefore, only the MA strategy will be considered in the rest of the paper, as well as its extension to a block-based strategy presented in the following section.
\section{Freezer}\label{sec:freezer}
In this section, we present Freezer, a backup controller that implements the \textit{Modified Block} backup strategy, and study the impacts of the block size in the MB strategy.
\subsection{Freezer Architecture}
Fig. \ref{fig:system-arch} shows the system-level view of the \textit{Freezer} architecture.
The system is composed of four major components: the CPU, the SRAM used as a main memory, the NVM used for the backup, and the backup controller (Freezer).
Freezer is itself composed of two main blocks: a controller implemented as a finite-state machine (FSM) for sequencing the operations and a small memory containing the dirty bits used to keep track of the blocks that need to be saved, as shown in Fig.~\ref{fig:freezer-arch-int}.
\begin{figure}[ht]
\centering
\includegraphics[width=.9\linewidth]{images/freezer_arch_int}
\caption{Freezer internal architecture}
\label{fig:freezer-arch-int}
\end{figure}
The Freezer controller is a stand-alone component that does not need to be tightly coupled with the memories or with the core.
It uses two handshake interfaces for the SRAM and NVM requests, allowing it to tolerate variable access latencies.
Freezer can be directly connected to the control, address and data signals of both SRAM and NVM, using these handshake interfaces.
Alternatively the SRAM and NVM interfaces can be arbitrated and share a single master port on the system bus.
Moreover, Freezer is also connected to the request signals from the CPU to the SRAM, which allows it to (i) snoop the addresses of the SRAM accesses made by the processor and (ii) manage the backup-to-NVM and restore-from-NVM phases in place of the processor.
The SRAM and NVM do not need to have two ports: CPU and Freezer accesses can easily be arbitrated, as they never access the memory at the same time.
At run-time, Freezer checks the address of the store operations in the SRAM to dynamically keep track of the blocks that are modified.
When a power failure arises, the CPU is halted and the controller starts transferring the modified blocks into the non-volatile memory.
The words within a block are then stored sequentially in the NVM.
The controller uses the information collected during the active time to determine which blocks to save.
When performing this task, Freezer has access to both the SRAM and the NVM memory.
Algorithm \ref{algo:backup-controller} describes the behavior of the backup controller during the execution, backup, and restore phases.
During execution, Freezer implements the \textit{Modified Block} backup strategy.
\begin{algorithm}[tb]
\SetKwIF{If}{ElseIf}{Else}{if}{:}{elif}{else:}{}%
\SetKwFor{For}{for}{\string:}{}%
\SetStartEndCondition{ }{}{}%
\AlgoDontDisplayBlockMarkers\SetAlgoNoEnd\SetAlgoNoLine
\caption{Freezer backup controller algorithm}
\label{algo:backup-controller}
\KwIn{cpu\_addr address generated by the CPU}
\KwIn{is\_store = 1 if the operation is a store}
\KwIn{op\_valid = 1 if the operation is valid}
\KwIn{pwr\_fail = 1 if power failure is detected}
\KwIn{restore = 1 if resume after a power failure}
\BlankLine
\KwData{\textit{to\_backup} flag memory of 1-bit per block}
\BlankLine
\eIf{restore}{
\For{i $\gets 0$ \textbf{to} $SRAM\_SIZE-1$}{
sram[i] $\gets nvm[i]$\;
}
}
{
\eIf{\textbf{not} pwr\_fail}{
\If{is\_store \textbf{and} op\_valid}{
block $\gets cpu\_addr \gg log2(BLOCK\_SIZE)$\;
to\_backup[block] $\gets 1$\;
}
}{
\For{$b \gets 0$ \textbf{to} $BLOCK\_NUM-1$}{
\If{to\_backup[b]}{
\For{$a \gets 0$ \textbf{to} BLOCK\_SIZE-1}{
addr $\gets (b \ll log2(BLOCK\_SIZE)) \| a$\;
nvm[addr] $\gets$ sram[addr]\;
}
}
}
}
}
\end{algorithm}
During execution, when there is no power failure (not \textit{pwr\_fail}) and there is a valid store operation, the controller records the blocks that are modified in a table (\textit{to\_backup}) implemented in a small memory, or in a register bank.
When the \textit{pwr\_fail} condition is true, the controller enters the backup phase, looping over all blocks and checking the \textit{to\_backup} memory for each of them.
If a block has to be saved, an inner loop over every address of the block reads a word from the SRAM and writes it to the NVM.
This last loop can easily be pipelined, so that an NVM write to one address is executed in the same cycle as an SRAM read from the next address.
The same holds true also for the restore phase, that simply moves back the data from the NVM to the SRAM.
In this way, the backup controller is able to back up and restore one word every clock cycle.
This should also lead to an additional speed-up, when compared with software-based backup loops executed on low-end micro-controllers, as in the case of \cite{balsamo_hibernus:_2015}.
In the hardware implementation, the process of checking the dirty bits can also be optimized.
As an example, the scan of the \textit{to\_backup} memory to find the next dirty block can happen in parallel with the backup of the current block, which is a relatively long operation.
Moreover, the \textit{to\_backup} memory can be organized as a matrix of dirty bits, and the controller can check an entire row of dirty bits in parallel.
This means that the \textit{to\_backup} memory can be scanned row by row.
The sparsity of the dirty bits can also be exploited by skipping rows that contain only clear bits (all zeros).
With these and other optimizations, the throughput of the backup operation can be sustained with little to no dead cycles.
However, these low-level optimizations are outside the scope of this work and will not be investigated further.
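The row-wise scan described above can be modeled in software by packing the dirty bits into integers, one per row, and dismissing an all-zero row in a single comparison (row width and names are illustrative):

```python
ROW_BITS = 32                      # dirty bits checked in parallel per row

def dirty_blocks(rows):
    """Yield the block indices to back up, skipping all-zero rows.

    rows: list of ROW_BITS-wide integers, one per row of the
    to_backup matrix (a software stand-in for the hardware scan).
    """
    for r, row in enumerate(rows):
        if row == 0:               # whole row clean: skipped in one check
            continue
        for b in range(ROW_BITS):
            if row & (1 << b):
                yield r * ROW_BITS + b

# 32 rows = 1024 blocks; only rows 1 and 31 contain dirty bits:
rows = [0, 0b1001, 0] + [0] * 28 + [1 << 31]
print(list(dirty_blocks(rows)))    # [32, 35, 1023]
```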
\subsection{Area and Power Results}
\label{sec:areapowefreezer}
As our algorithm is relatively simple, the controller itself introduces small area and power overheads.
The major contribution in the area and power overheads is given by the \textit{to\_backup} dirty-bit memory, used to keep track of the blocks that have to be saved.
Table \ref{tab:flag-mem} shows the number of bits and an estimation of the area of the \textit{to\_backup} memory for different block sizes, considering a 32KB SRAM.
\begin{table*}[ht]
\centering
\caption{Number of bits and area estimation of the \textit{to\_backup} memory, implemented with standard cells in 28 nm FDSOI.}
\begin{tabular}{c|c|c|c|c|c|c}
\hline
block size (32bit words) & 2 & 4 & 8 & 16 & 32 & 64 \\\hline
\# bits & 4096 & 2048 & 1024 & 512 & 256 & 128 \\
area $[\mu m^2]$ & 10838.11 & 5452.02 & 2748.94 & 1436.81 & 730.64 & 386.95 \\
\hline
\end{tabular}
\label{tab:flag-mem}
\end{table*}
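Since the \textit{to\_backup} memory holds one dirty bit per block, its size for a 32 KB SRAM of 32-bit words is simply $8192/N$ bits, yielding 4096 bits down to 128 bits for block sizes from 2 to 64 words:

```python
SRAM_BYTES = 32 * 1024
WORD_BYTES = 4
words = SRAM_BYTES // WORD_BYTES          # 8192 32-bit words in 32 KB

for n in (2, 4, 8, 16, 32, 64):
    # one dirty bit per block of n words
    print(f"block size {n:2d}: {words // n} dirty bits")
```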
For these results, the \textit{to\_backup} memory is synthesized with standard cells in a 28nm FDSOI technology using Synopsys Design Compiler (DC).
Even when considering a fine granularity for the block size, the dirty-bit memory is small compared to the total size of the SRAM memory.
As an example, for a block size of $8$ words, the required 1024-bit memory is $256\times$ smaller than the main SRAM memory.
Moreover, by choosing larger blocks, the \textit{to\_backup} memory can be stored in a register file at the cost of a small increase in the backup size.
A non-optimized version of the controller was synthesized from a C++ specification using Mentor Graphics CatapultHLS and Synopsys DC with the same 28nm FDSOI technology at $0.7V$.
In this configuration, Freezer's controller achieves a dynamic power of $P_{active} = 6.8 \mu W$ and a leakage power of around $P_{leak} = 40 nW$ at 25\textdegree C.
With the same technology, we estimate a leakage of roughly $600 nW$ for a register-based \textit{to\_backup} memory of $1024$ bits.
These synthesis results will be exploited in Section \ref{sec:area} to estimate the energy of a system implementing Freezer.
\subsection{Impact of Block Size} \label{sec:Impact-of-block-size}
In this section, we study the impact of the block size on the size of the backup provided by the MB strategy.
The size of the \textit{to\_backup} memory depends on two parameters: the number of 32-bit words in each block, which determines the granularity of the backup strategy, and the total size of the SRAM.
Therefore, it is possible to trade off an increase in backup size with a smaller area overhead of the \textit{to\_backup} memory.
In Table \ref{tab:backup-size-vs-block-size}, the backup size across a set of benchmarks is reported for different configurations of block granularity.
The backup size is averaged on all intervals and normalized with respect to a block of one word (MA strategy). The interval is set to $N_{prog}~=~10^6$ clock cycles.
\begin{table}[htb]
\centering
\caption{Backup size relative to blocks of one 32-bit word (MA approach) for different benchmarks. $N_{prog}=10^6$ cycles. The table also reports the average on all benchmarks.}
\begin{tabular}{l|rrrrrr}
\hline
block size $N$ & 2 & 4 & 8 & 16 & 32 & 64 \\
\hline
susan\_smooth\_small (sss) & 1.01 & 1.04 & 1.09 & 1.17 & 1.32 & 1.62 \\
susan\_edge\_small (ses) & 1.03 & 1.10 & 1.22 & 1.35 & 1.54 & 1.68 \\
matmul16\_float (mm16f) & 1.11 & 1.24 & 1.47 & 1.77 & 2.23 & 2.54 \\
qsort (qsort) & 1.01 & 1.03 & 1.07 & 1.28 & 1.58 & 1.70 \\
fft (fft) & 1.10 & 1.22 & 1.44 & 1.79 & 2.45 & 3.22 \\
matmul32\_int (mm32i) & 1.03 & 1.08 & 1.17 & 1.28 & 1.47 & 1.75 \\
str\_search (str) & 1.02 & 1.06 & 1.12 & 1.17 & 1.28 & 1.55 \\
cjpeg (cjpeg) & 1.01 & 1.03 & 1.06 & 1.10 & 1.18 & 1.32 \\
dijkstra (dijk) & 1.06 & 1.18 & 1.31 & 1.36 & 1.45 & 1.55 \\
matmul16\_int (mm16i) & 1.07 & 1.19 & 1.38 & 1.69 & 2.16 & 2.76 \\
susan\_edge\_large (sel) & 1.08 & 1.17 & 1.30 & 1.46 & 1.59 & 1.72 \\
\hline
average (avg) & 1.05 & 1.12 & 1.24 & 1.40 & 1.66 & 1.95 \\
\hline
\end{tabular}
\label{tab:backup-size-vs-block-size}
\end{table}
Increasing the block size obviously has an impact on the performance of the MB strategy, i.e., on the average size of the backup required at the end of each interval.
However, MB with a relatively small block size (up to $N=8$) only increases the backup size by 24\% on average, while this block-based strategy decreases the size of the \textit{to\_backup} memory by a factor of $8\times$.
More comparisons and impact of the interval size are reported in the next section.
\section{Results}\label{sec:results}
In this section, the details of the experimental setup are explained and the results regarding the backup size (Sec.~\ref{sec:res:backupsize}) and backup time {(Sec.~\ref{sec:res:backuptime})} are reported.
The impact of the interval size is discussed in Section \ref{sec:impact-of-interval}. For all the other results provided in this section, the interval is set to $N_{prog}=10^6$ clock cycles.
Moreover, a discussion about power, energy, and area of our approach is presented in Sections \ref{sec:mem-energy-cmp} and \ref{sec:area},
while considerations on the impact of leakage are presented in Section \ref{sec:leakage}.
\subsection{Backup Size}\label{sec:res:backupsize}
For every interval, the backup size is computed considering the different approaches described in Section \ref{sec:backup-model}.
Fig. \ref{fig:backup-size-plt} shows the backup size reported for every benchmark and for blocks of 1, 8, and 64 32-bit words.
Blocks of one word correspond to the MA strategy.
The \textit{OM} strategy is also reported to provide the optimal (unreachable) lower bound.
The backup size is averaged on all intervals and normalized against the improved Hibernus~\cite{balsamo_hibernus:_2015} approach, which saves the full memory used by the program in pages of 512 bytes.
As can be seen from Fig. \ref{fig:backup-size-plt}, our approach greatly reduces the average backup size per interval, reaching an $87.7\%$ (more than $8\times$) reduction on average, with only a $7.5\%$ gap from the \emph{oracle modified}, when configured with a granularity of $8$ words per block.
This reduction in backup size directly translates into an energy saving, as it reduces the number of writes to the NVM during the backup phase.
\begin{figure}[ht]
\centering
\includegraphics[width=.95\linewidth]{images/backup_size_comp2_scale}
\caption{Backup size normalized w.r.t. the program memory size (improved Hibernus strategy) of Freezer implementing MB strategy with blocks of 1, 8, and 64 words. Lower bound in backup size of the oracle-modified is also reported.}
\label{fig:backup-size-plt}
\end{figure}
\subsection{Impact of Interval Size}\label{sec:impact-of-interval}
On a system powered with intermittent ambient energy, the time length of the intervals is mostly determined by the energy source and by the energy budget of the platform.
If the energy source is relatively stable, the length of the power cycle increases, and so does the amount of computation that the processor manages to complete during one interval.
This means that more memory accesses will be performed, thus we can expect the average size of a backup to increase.
However, this also depends on the spatial locality of the application, and considering wider blocks could be beneficial for less intermittent sources.
When the length of the power cycles decreases, the processor is interrupted more frequently, and the number of memory accesses is reduced.
Therefore, the average backup size is further decreased.
Fig. \ref{fig:backup-iv-size} shows the average reduction in backup size, across all benchmarks, for different lengths of the power cycles (interval size $N_{prog}$ expressed in number of clock cycles), considering blocks of 8 words.
As can be seen, the backup size reduction is greater than $70\%$ for all the interval lengths.
Moreover, with shorter intervals, the reduction becomes greater than $90\%$.
It must be noted also that, when the length of the interval is increased above 20 million clock cycles, the majority of the programs are able to run to completion before the first power failure occurs.
\begin{figure}[htb]
\centering\includegraphics[width=.95\linewidth]{images/backup_reduction_vs_interval_size_new}
\caption{Average backup size reduction with different interval size $N_{prog}$ expressed in number of clock cycles.}
\label{fig:backup-iv-size}
\end{figure}
Due to the unpredictability of the energy source, an intermittently-powered system might also experience a wide variation between the time length of successive intervals.
To better capture this behaviour, we model the occurrence of a power failure as a random variable distributed according to a binomial law.
Power failure events in this model are considered independent of one another.
At each clock cycle, there is a fixed probability of incurring a power failure.
For this experiment, we considered two failure rates: one power failure every $10^6$ cycles and one every $10^7$ clock cycles.
Figures \ref{fig:random-iv-1e6} and \ref{fig:random-iv-1e7} show, for each benchmark, the average savings with relative standard deviation computed for 100 executions, considering blocks of 8 words.
Our proposed method is robust to variability in the size of the intervals and it is able to achieve more than $83\%$ and $88\%$ savings on average when the failure rates are respectively $10^{-7}$ and $10^{-6}$.
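This failure model can be reproduced by drawing the gaps between failures: since each cycle fails independently with probability $p$, the gaps are geometrically distributed and can be sampled directly instead of simulating every cycle (a sketch; seed and helper names are ours):

```python
import math
import random

def interval_lengths(total_cycles, p_fail, seed=0):
    """Per-cycle Bernoulli failure trials, sampled as geometric gaps.

    Each cycle fails independently with probability p_fail, so the gap
    to the next failure is geometric with mean ~1/p_fail and can be
    drawn by inverse-transform sampling.
    """
    rng = random.Random(seed)
    lengths, t = [], 0
    while True:
        u = rng.random()
        gap = 1 + int(math.log(1.0 - u) / math.log(1.0 - p_fail))
        if t + gap > total_cycles:
            return lengths
        lengths.append(gap)
        t += gap

ls = interval_lengths(10**7, 1e-6, seed=42)
print(len(ls))                 # around 10 intervals of ~1e6 cycles each
```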
\begin{figure}[ht]
\centering
\begin{subfigure}{.48\textwidth}
\centering
\includegraphics[width=\linewidth]{images/freezer_binomial_intervals_1e-6_crop_scaled}
\caption{Failure rate $10^{-6}$.}
\label{fig:random-iv-1e6}
\end{subfigure}
\begin{subfigure}{.48\textwidth}
\centering
\includegraphics[width=\linewidth]{images/freezer_binomial_intervals_1e-7_crop_scaled}
\caption{Failure rate $10^{-7}$.}
\label{fig:random-iv-1e7}
\end{subfigure}
\caption{Average savings and std. deviation for 100 executions with power failures distributed following the binomial law with failure rates of $10^{-6}$ (a) and $10^{-7}$ (b).}
\label{fig:random-iv}
\end{figure}
\subsection{Backup Time}\label{sec:res:backuptime}
The reduction in the backup size comes with a corresponding reduction in the save time.
On top of that, thanks to the hardware accelerated backup process, our solution provides an additional improvement in terms of backup time.
In particular, the backup process is managed directly by Freezer and can be further pipelined, so that each word can be saved in one clock cycle.
Of course, the speed of this process is limited by the cycle-time of the slowest NVM memory.
As our approach does not rely on any specific NVM technology, we considered the numbers reported in \cite{balsamo_hibernus:_2015} for our comparison.
In particular, we considered a clock frequency of 24 MHz for the normal operation using SRAM, and a clock cycle period of 125 ns (8 MHz) for the FeRAM.
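With one word transferred per NVM cycle, the backup time is simply the word count times the 125 ns FeRAM period. A quick illustration, using a hypothetical count of dirty blocks:

```python
T_NVM = 125e-9                   # FeRAM cycle period: 125 ns at 8 MHz

def backup_time_s(words):
    """One 32-bit word transferred to the NVM per cycle."""
    return words * T_NVM

full = backup_time_s(8192)       # whole 32 KB memory: 1.024 ms
mb = backup_time_s(7 * 8)        # e.g. 7 dirty blocks of 8 words: 7 us
print(round(100 * (1 - mb / full), 2))  # -> 99.32 percent reduction
```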
\begin{table}[htb]
\centering
\caption{Percentage reduction of backup time w.r.t. improved Hibernus (higher is better). Columns b\_$N$ provide results with our strategy using blocks of $N$ words. Oracle modified and NVP \cite{liu_4.7_2016} are also provided for comparison.}
\begin{tabular}{lccccc}
\hline
{} & b\_1 & b\_8 & b\_64 & oracle & NVP \cite{liu_4.7_2016}\\
\hline
susan\_smooth\_small & 99.80 & 99.78 & 99.68 & 99.87 & 39.25 \\
susan\_edge\_small & 98.93 & 98.69 & 98.20 & 99.20 & 42.58 \\
matmul16\_float & 99.32 & 99.00 & 98.27 & 99.79 & 39.08 \\
qsort & 99.38 & 99.34 & 98.95 & 99.58 & 45.96 \\
fft & 99.58 & 99.39 & 98.64 & 99.85 & 39.64 \\
matmul32\_int & 99.35 & 99.23 & 98.86 & 99.69 & 40.44 \\
str\_search & 99.49 & 99.43 & 99.21 & 99.81 & 43.03 \\
cjpeg & 98.10 & 97.98 & 97.48 & 99.32 & 38.97 \\
dijkstra & 99.34 & 99.13 & 98.98 & 99.63 & 43.32 \\
matmul16\_int & 99.23 & 98.94 & 97.88 & 99.80 & 29.94 \\
susan\_edge\_large & 99.93 & 99.91 & 99.89 & 99.95 & 45.78 \\
\hline
average & 99.31 & 99.17 & 98.73 & 99.68 & 40.73 \\
\hline
\end{tabular}
\label{tab:backup-time}
\end{table}
Table \ref{tab:backup-time} reports the improvement in backup time compared with a modified implementation of Hibernus that only saves the memory used by the program (in pages of 512 bytes).
Columns b\_$N$ provide results with our strategy using blocks of $N$ 32-bit words.
For the column related to non-volatile processor (NVP), we considered the backup time reported in~\cite{liu_4.7_2016} of 1.02 ms for 4KB, and scaled it for the memory size of our benchmarks, grouping the addresses in pages of 1 KB.
With this configuration, our approach gives a two orders of magnitude improvement in backup time when compared to the software-based approach that saves the whole program memory.
Moreover, Freezer provides a significant advantage also when compared with a fully non-volatile processor as \cite{liu_4.7_2016}, which only provides an improvement of 40\% when compared to the software-based approach.
This improvement in the backup time will also positively affect the total execution time, as expressed in Eq. \ref{eq:time}.
We considered a 24~MHz frequency for the volatile operations and an 8~MHz frequency for the FeRAM accesses.
The active time is set to $N_{prog}~=~10^7$ clock cycles at 24~MHz.
We assumed an average off time equal to the active time.
As reported in Table~\ref{tab:exec-time}, our strategy achieves a 32\% average decrease of the total execution time when compared with improved Hibernus.
We also compared Freezer against approaches like QuickRecall \cite{jayakumar_quickrecall:_2015} that runs the programs only using the NVM.
In this case, the save and restore time are roughly zero (only the registers need to be saved), but the frequency of the core is limited to the frequency of the FeRAM.
As a consequence, in most cases, the QuickRecall approach leads to a longer execution time than Hibernus, whereas our solution always performs better and is very close to the Oracle.
\begin{table}[htbp]
\centering
\caption{Percentage reduction of execution time w.r.t. improved Hibernus (higher is better). Columns b\_$N$ provide results with our strategy using blocks of $N$ words. Oracle and NVM-only solution of \cite{jayakumar_quickrecall:_2015} are also provided for comparison.}
\begin{tabular}{lcccc}
\hline
{} & b\_1 & b\_8 & oracle & NVM only \cite{jayakumar_quickrecall:_2015}\\
\hline
susan\_smooth\_small & 20.75 & 20.75 & 20.76 & -17.59 \\
susan\_edge\_small & 33.35 & 33.31 & 33.41 & 2.35 \\
matmul16\_float & 9.90 & 9.88 & 9.93 & -34.50 \\
qsort & 88.79 & 88.77 & 88.90 & 89.00 \\
fft & 10.68 & 10.67 & 10.70 & -33.30 \\
matmul32\_int & 15.32 & 15.31 & 15.35 & -26.01 \\
str\_search & 24.90 & 24.89 & 24.95 & -11.05 \\
cjpeg & 27.93 & 27.91 & 28.13 & -5.94 \\
dijkstra & 35.21 & 35.17 & 35.27 & 5.14 \\
matmul16\_int & 8.72 & 8.71 & 8.75 & -36.34 \\
susan\_edge\_large & 83.49 & 83.48 & 83.50 & 80.27 \\
\hline
average & 32.64 & 32.62 & 32.69 & 1.09 \\
\hline
\end{tabular}
\label{tab:exec-time}
\end{table}
\subsection{Energy Comparison with other Memory Models}
\label{sec:mem-energy-cmp}
\begin{table*}[htb]
\centering
\caption{Energy and leakage power parameters used for memory access cost simulation. Read/write energy is reported in pJ per 32-bit word access.}
\begin{tabular}{l|llll|llll|llll}
\hline
& \multicolumn{4}{l|}{SRAM} & \multicolumn{4}{l|}{STT} & \multicolumn{4}{l}{RRAM} \\ \hline
Size {[}KB{]} & 4 & 16 & 32 & 64 & 4 & 16 & 32 & 64 & 4 & 16 & 32 & 64 \\ \hline
Read {[}pJ{]} & 0.219 & 0.703 & 1.664 & 2.50 & 7.754 & 7.889 & 8.426 & 8.692 & 5.101 & 5.477 & 6.004 & 6.667 \\ \hline
Write {[}pJ{]} & 0.111 & 0.215 & 1.175 & 1.388 & 20.244 & 20.614 & 20.873 & 21.416 & 21.349 & 27.449 & 24.176 & 28.575 \\ \hline
Leakage {[}$\mu$W{]} & 0.78 & 2.16 & 3.58 & 7.16 & \multicolumn{8}{l}{~} \\ \cline{1-5}
\end{tabular}
\label{tab:mem-energy-values}
\end{table*}
\begin{table}[htb]
\centering
\caption{Cache hit and write dynamic energy in pJ per 32-bit word access}
\begin{tabular}{c|c|c}
\hline
Size [KB] & Hit Energy [pJ] & Write Energy [pJ] \\
& $E_{cache/r}$ & $E_{cache/w}$ \\
\hline
2 & 5.43 & 4.5 \\
4 & 6.15 & 4.96 \\
8 & 10.13 & 9.42 \\
16 & 13.45 & 12.74 \\
\hline
\end{tabular}
\label{tab:cache-dynamic-energy}
\end{table}
We use Eqs. (\ref{eq:energy-sram+NVM}), (\ref{eq:energy-NVM}), and (\ref{eq:energy-cache}) to compare the dynamic memory access energy of the different system configurations.
Figures~\ref{fig:rram} and~\ref{fig:stt} show these dynamic energies normalised w.r.t. the system using Freezer, with RRAM and STT, respectively.
For the cache+NVM architecture, four different cache sizes of 2KB, 4KB, 8KB and 16KB are reported.
The caches are all 4-way set associative with lines of 8 words (256 bits), which is representative of this type of device.
We also considered blocks of 8 words for the system using Freezer.
The read and write dynamic energies per 32-bit word for the memories used in this comparison are reported in Table \ref{tab:mem-energy-values}, and were obtained using NVSim \cite{dong_nvsim:2012}.
Table \ref{tab:cache-dynamic-energy} reports the Hit and Write dynamic energy for the different cache sizes, obtained with NVSim.
Miss energies were in all cases equal to Hit energies.
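The comparison essentially multiplies access counts by the per-word energies of Table \ref{tab:mem-energy-values}. The sketch below is a simplified stand-in for the actual equations (it uses the 32 KB SRAM and STT entries; the access counts are hypothetical):

```python
# Per-word dynamic energies in pJ (32 KB SRAM and STT entries of the
# energy table above); access counts below are hypothetical.
E_SRAM_RD, E_SRAM_WR = 1.664, 1.175
E_STT_RD, E_STT_WR = 8.426, 20.873

def freezer_interval_energy(loads, stores, backup_words, restore_words):
    """Dynamic memory energy (pJ) of one interval for the SRAM+Freezer
    system: program accesses hit the SRAM, the backup reads the SRAM
    and writes the STT, and the restore reads the STT back into SRAM."""
    prog = loads * E_SRAM_RD + stores * E_SRAM_WR
    backup = backup_words * (E_SRAM_RD + E_STT_WR)
    restore = restore_words * (E_STT_RD + E_SRAM_WR)
    return prog + backup + restore

# 1e5 loads, 1e4 stores, a 2000-word backup, a full 8192-word restore:
print(round(freezer_interval_energy(10**5, 10**4, 2000, 8192), 1))
```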
\begin{figure}[htbp]
\centering
\begin{subfigure}{.49\textwidth}
\includegraphics[width=.95\linewidth]{images/energy_cost_nvsim_rram_new_scaled}
\caption{RRAM}
\label{fig:rram}
\end{subfigure}
\begin{subfigure}{.49\textwidth}
\includegraphics[width=.95\linewidth]{images/energy_cost_nvsim_stt_new_scaled}
\caption{STT}
\label{fig:stt}
\end{subfigure}
\caption{Relative dynamic energy of memory accesses, normalized w.r.t. Freezer, using RRAM (a) and STT (b) as NVMs for backup.}
\label{fig:energy-cost}
\end{figure}
As can be seen from Fig.~\ref{fig:energy-cost}, our proposed approach provides a significant reduction in the energy due to memory accesses when compared with all the other methods.
The memory access energy for a full-memory backup strategy is on average $1.26\times$ that required by Freezer when using STT and $1.65\times$ when considering RRAM.
Being based on the same SRAM+NVM architecture, Freezer and full-memory backup strategies require the same absolute amount of energy for the execution of the program, i.e., the energy required for executing load and store operations is the same for the same benchmark.
Moreover, as the two strategies rely on a full memory restore after a power failure, they spend the same amount of energy on restore memory accesses for the same benchmark.
Tables \ref{tab:energy-percent-stt-freezer} and \ref{tab:energy-percet-stt-full-mem} show the energy decomposition, across all benchmarks, for Freezer and full-memory strategies when using STT NVM.
The two tables show the clear advantage that Freezer brings in terms of backup energy, reducing its weight from an average $23.25\%$ to an average $3.44\%$ of the total memory access energy.
\begin{table}[htb]
\caption{Memory access energy percentage decomposition for Freezer using STT}
\centering
\begin{tabular}{lrrrr}
\hline
Trace & backup & restore & prog. loads & prog. stores \\
\hline
sss & 0.74 & 22.60 & 74.86 & 1.80 \\
ses & 5.97 & 29.30 & 59.23 & 5.49 \\
mm16f & 5.53 & 21.89 & 49.88 & 22.70 \\
fft & 4.72 & 28.28 & 46.12 & 20.88 \\
cjpeg & 7.53 & 16.30 & 57.64 & 18.53 \\
str & 2.62 & 23.86 & 43.65 & 29.87 \\
mm16i & 15.13 & 33.12 & 40.55 & 11.21 \\
dijk & 3.45 & 23.56 & 61.34 & 11.64 \\
mm32i & 4.46 & 26.86 & 60.42 & 8.26 \\
\hline
avg & 3.44 & 23.52 & 61.18 & 11.86 \\
\hline
\end{tabular}
\label{tab:energy-percent-stt-freezer}
\end{table}
\begin{table}[htb]
\caption{Memory access energy percentage decomposition for full-memory backup using STT}
\centering
\begin{tabular}{lrrrr}
\hline
Trace & backup & restore & prog. loads & prog. stores \\
\hline
sss & 21.39 & 17.90 & 59.29 & 1.43 \\
ses & 28.29 & 22.35 & 45.18 & 4.19 \\
mm16f & 21.66 & 18.15 & 41.36 & 18.82 \\
fft & 26.15 & 21.92 & 35.75 & 16.18 \\
cjpeg & 17.40 & 14.56 & 51.49 & 16.55 \\
str & 22.65 & 18.95 & 34.67 & 23.72 \\
mm16i & 31.33 & 26.80 & 32.81 & 9.07 \\
dijk & 23.60 & 18.65 & 48.54 & 9.21 \\
mm32i & 25.77 & 20.87 & 46.94 & 6.42 \\
\hline
avg & 23.25 & 18.69 & 48.63 & 9.42 \\
\hline
\end{tabular}
\label{tab:energy-percet-stt-full-mem}
\end{table}
Figures \ref{fig:rram} and \ref{fig:stt} also show that, due to the higher read and write dynamic energies, using the NVM as the main memory is often detrimental even when compared with full-memory backup systems.
Moreover, when compared to Freezer, NVM-only systems require on average $6.19\times$ and $4.22\times$ more energy for RRAM and STT, respectively.
As described in Section \ref{sec:res:energy}, the cache+NVM system uses the write-back policy and flushes the dirty lines to the NVM when a power failure arises.
Thus, during power failures the cache+NVM system behaves similarly to Freezer, but with a higher energy per operation.
There are however some major differences between a system that implements Freezer and a system with a write-back cache and a NVM main memory.
First of all, Freezer is designed to be simple, in order to reduce the energy overhead of tracking the modified blocks.
Moreover, Freezer is able to track the full main memory and only needs to write on the NVM before a power failure happens.
A write-back cache, on the other hand, might perform additional writes to the NVM at run-time.
In fact, if an access causes a conflict, the cache will evict the conflicting line, thus causing additional NVM writes.
These additional writes may reduce the lifetime of the NVM due to the limited endurance of this type of memory.
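To make the contrast concrete, the toy simulation below counts NVM word writes for a direct-mapped write-back cache (run-time evictions of dirty lines plus a flush at power failure) against a Freezer-style tracker that writes only the distinct dirty blocks at backup time. This is a minimal sketch with a hypothetical store trace and deliberately tiny cache parameters, not a model of any specific design:

```python
# Toy comparison: NVM word writes of a direct-mapped write-back cache
# versus a Freezer-style dirty-block tracker, on an illustrative trace.

LINE_WORDS = 8          # words per cache line / Freezer block
NUM_LINES = 4           # deliberately tiny cache to force conflicts

def cache_nvm_writes(store_addrs):
    tags = [None] * NUM_LINES
    dirty = [False] * NUM_LINES
    writes = 0
    for addr in store_addrs:
        block = addr // LINE_WORDS
        idx = block % NUM_LINES
        if tags[idx] != block:          # conflict miss
            if dirty[idx]:
                writes += LINE_WORDS    # run-time eviction of a dirty line
            tags[idx] = block
        dirty[idx] = True
    writes += sum(dirty) * LINE_WORDS   # flush dirty lines at power failure
    return writes

def freezer_nvm_writes(store_addrs):
    dirty_blocks = {addr // LINE_WORDS for addr in store_addrs}
    return len(dirty_blocks) * LINE_WORDS   # backup only at power failure

# Two blocks that conflict in the tiny cache, written alternately.
trace = [0, 32, 1, 33, 2, 34, 3, 35]
print(cache_nvm_writes(trace), freezer_nvm_writes(trace))  # 64 16
```

On a conflict-free trace the gap shrinks, which mirrors the cache-size trade-off: a cache large enough to avoid evictions approaches Freezer's write count, but at a higher hardware cost.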
When it comes to cache+NVM based systems, the size that on average provides the smallest energy is 4KB, with 2KB and 8KB caches performing better in some benchmarks.
Accesses to smaller caches require less energy, as shown in Table \ref{tab:cache-dynamic-energy}, but they might incur the high cost of additional NVM reads and writes due to a larger number of misses and evictions.
The increased number of writes to the NVM could also cause endurance problems because of wear-out, which might prevent this solution from being applied to long-lasting operations.
A larger cache can reduce the number of accesses to the NVM, up to the point where the cache is so large that it is able to buffer the full application.
In this case, it is possible to obtain a number of writes to the NVM that is close to what Freezer achieves.
However, this comes at the cost of having a large cache that is complex and energy hungry.
Moreover, it is unusual to see a cache used in small low-power edge devices, where the system memory is embedded on chip and seldom exceeds 64KB.
To summarize, for our set of benchmarks, a 4KB cache + STT system requires $5.9\times$ the energy of Freezer, whereas the larger 16KB cache requires on average $9.3\times$ more energy than Freezer.
\subsection{Impact of Leakage Power}
\label{sec:leakage}
For a fair comparison, it is also important to study the impact of leakage power of the {SRAM+NVM} memory model, especially when compared to NVM-only architectures.
Eq. \ref{eq:energy-sram+NVM} is therefore enhanced by considering the leakage power of low-power SRAMs of the appropriate size, as reported in Table~\ref{tab:mem-energy-values}.
The leakage power of STT and RRAM is considered to be zero, which is obviously not the case for real designs.
Table \ref{tab:bench-energy-values} reports for each benchmark the absolute dynamic energy of memory accesses for Freezer with both RRAM and STT as NVMs, equivalent to the Freezer blue bar in Figures \ref{fig:rram} and \ref{fig:stt}, respectively.
The table also reports an estimation of the leakage energy due to the main SRAM memory, obtained considering a 20~MHz clock, and the total memory size of the benchmark.
Table \ref{tab:bench-energy-values} shows that the leakage energy represents around half of the dynamic energy of memory accesses when using Freezer.
Even accounting for the leakage of SRAM, the approaches based on SRAM+NVMs are still better than running an NVM-only system.
Compared to full-memory backup which would consume roughly the same leakage energy, Freezer still benefits from the backup size reduction.
Moreover, even accounting for the leakage of the NVM memories would not change the outcome of the analysis.
In fact, when considering NVMs of the same size running for similar periods of time, the leakage due to the NVMs would be roughly the same for both SRAM+NVM and NVM-only architectures.
Furthermore, an SRAM+NVM system could even activate the NVM only during the backup and restore phases, further reducing the impact of NVM leakage.
In both cases, the SRAM+NVM architecture would still show an advantage.
\begin{table}
\centering
\caption{{Backup energy using Freezer, leakage and memory size for different benchmarks, energy in $[\mu J]$, memory size in words of 32 bits.}}
\begin{tabular}{lrrrr}
\hline
Trace & mem\_size & E\_freezer & E\_freezer & E\_leakage \\
~ & [32-bit word] & RRAM & STT & SRAM \\
\hline
sss & 8192 & 11.0 & 11.0 & 6.1 \\
ses & 16384 & 3.3 & 3.2 & 1.9 \\
mm16f & 2048 & 2.5 & 2.4 & 2.0 \\
fft & 2048 & 3.4 & 3.4 & 3.7 \\
cjpeg & 8192 & 3.9 & 3.6 & 1.3 \\
sl & 8192 & 3.6 & 3.7 & 1.9 \\
mm16i & 1024 & 0.051 & 0.045 & 0.028 \\
dijk & 16384 & 51.0 & 50.0 & 27.0 \\
mm32i & 4096 & 0.58 & 0.56 & 0.41 \\
\hline
\end{tabular}
\label{tab:bench-energy-values}
\end{table}
\subsection{Energy and Area Overhead Considerations}
\label{sec:area}
In this section, we provide insights about the overhead in energy due to our backup controller.
The use of the Freezer hardware backup strategy in an energy harvesting platform will introduce a small overhead at run-time, but will also decrease the energy required for the backup and restore operations.
We can account for the overhead and the reduction in the backup size by modifying Eq. \ref{eq:energy} which becomes
\begin{equation}
E_{c} = E_{s} N'_{s} + E_{r} N_{r} + (P_{on}+P_{ovh})\times t_{on} + P_{off} t_{off},
\end{equation}
where $N'_s$ is the reduced backup size and $P_{ovh}$ represents the overhead introduced at run-time.
The energy required for moving the data ($E_{s}$ for save and $E_{r}$ for restore) is heavily dependent on the memory technology.
However, software-based approaches introduce additional overhead.
In our case, as a backup operation may require hundreds or even thousands of transfers, we can approximate the energy required for saving one word as
\begin{equation}
E_{s} = E_{sram/r} + E_{nvm/w}
\label{eq:es}
\end{equation}
where $E_{sram/r}$ is the energy for reading a word from the SRAM and $E_{nvm/w}$ the energy required for writing a word to the NVM.
The power overhead introduced by our strategy can be estimated as $P_{ovh} = \alpha \times P_{active} + P_{leak}$, where $P_{leak}$ is the leakage power, which will be mostly determined by the \textit{to\_backup} memory, and $P_{active}$ the active power.
$P_{leak}$ and $P_{active}$ were provided in Section \ref{sec:areapowefreezer}.
$P_{active}$ will be consumed whenever the processor performs a store operation and $\alpha = N_{store}/N_{prog}$ is the fraction of clock cycles spent performing store operations w.r.t. the execution of the program in the whole interval.
This overhead can be compared with the advantage gained in terms of save and restore energy.
If we compare against a system that saves everything but does not introduce any overhead, we can estimate the maximum active time $t_{on}$ after which the energy consumed by the controller during the active time becomes greater than the energy reduction obtained at backup time.
$t_{on}$ is constrained by the following inequality:
\begin{equation}
t_{on} \le \frac{\delta E_{s} N_{tot}}{P_{ovh}}
\label{eq:ton}
\end{equation}
where $N_{tot}$ is the number of words to be backed up without Freezer (full memory), $E_{s}$ the energy required to back up one word ($E_{sram/r} + E_{nvm/w}$), and $\delta E_{s} N_{tot}$ the energy saved during the backup operation.
With Freezer, considering $\delta = 87.7\%$, $E_{sram/r} = 0.45$~pJ/bit, and $E_{nvm/w} = 100\times E_{sram/r}$, we obtained the following two extreme configurations, depending on the benchmark:
$t_{on}~<~16.42$~s and $P_{ovh}= 1.18~\mu$W for \textit{susan\_smooth}, and
$t_{on} < 2.4$~s and a similar $P_{ovh}$ for the \textit{FFT} benchmark.\\
Both these $t_{on}$ values allow for the programs to be executed completely and are well above the typical active time of intermittently-powered systems.
Moreover, Eq. \ref{eq:ton} is obtained by comparing our solution to a system that introduces no overhead at run-time and no overhead during the backup process, which would not be the case in real systems. \\
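As a numerical sanity check, Eq. \ref{eq:ton} can be evaluated directly. In the sketch below the per-bit energies follow the text, while the memory size and the overhead power are illustrative assumptions; the per-benchmark figures quoted above depend on parameters not repeated here, so the resulting bound differs from them:

```python
# Evaluates the t_on bound of Eq. (ton) with illustrative parameters.
# E_sram/r and the 100x NVM write factor follow the text; the memory
# size and the run-time overhead power are assumed values.

WORD_BITS = 32
E_SRAM_R = 0.45e-12 * WORD_BITS        # SRAM read energy per word [J]
E_NVM_W = 100 * 0.45e-12 * WORD_BITS   # NVM write energy per word [J]
E_S = E_SRAM_R + E_NVM_W               # energy to back up one word (Eq. es)

delta = 0.877          # average backup-size reduction achieved by Freezer
N_TOT = 8192           # full-memory backup size in words (assumed)
P_OVH = 1.18e-6        # run-time overhead power [W] (assumed)

# Maximum active time before the run-time overhead outweighs the
# energy saved at backup time (Eq. ton).
t_on_max = delta * E_S * N_TOT / P_OVH
print(f"t_on bound: {t_on_max:.2f} s")   # t_on bound: 8.86 s
```

Even under these conservative assumptions the bound is orders of magnitude above the typical active time of intermittently-powered systems.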
To give an idea of how Freezer would fit in a low-end IoT node, we can compare it with an ultra-low-power, size-optimised SoC implemented in the same 28nm FDSOI technology node, such as the one presented in \cite{bol_a_40_to_80mhz_2019}.
In terms of area, the SoC occupies 0.7$mm^2$, while its power consumption is 3$\mu$W/MHz, giving a power consumption of 144$\mu$W at 48MHz.
From these numbers we can see that Freezer, even with our non-optimised implementation, would lead to a small overhead.
In particular, assuming blocks of 8 words, the area overhead of $2,748\mu m^2$ represents $\approx 0.4\%$.
The power overhead during active time, considering the $\alpha$ of the FFT benchmark, could be as low as 0.82\%.
\section{Discussion About The Approach}\label{sec:discussion}
Several studies have approached the problem of computing under intermittent power supply, providing a wide variety of different solutions.
While software-based approaches try to solve the problem at the application level, hardware-based solutions try to provide platforms that implement the non-volatility in a way that is transparent to the programmer.
The majority of the hardware solutions usually rely heavily on the underlying memory technology to accomplish the state retention.
Even in \cite{hager_a_scan-chain:2017}, where no NVM is used, the proposed technique relies on an ultra-low-power retention SRAM.
Our approach moves away from this type of scheme and tries to solve the problem from a different standpoint, by providing hardware acceleration for the backup and restore procedures, and by exploiting run-time information to optimize the backup sequence.
Moreover, this approach is agnostic with respect to the NVM technology, and opens a series of possibilities.
Technologies such as hybrid nvSRAM, as the one used in \cite{liu_4.7_2016}, with circuit-level configurable memory, parallel block-wise backup and adaptive restore, may be exploited and enhanced by Freezer, thus achieving a faster and more energy efficient backup sequence thanks to the backup size reduction.
Furthermore, our approach could be extended to implement a programmable backup hardware accelerator, or to implement a dedicated ISA extension.
This would provide programs with some levels of control on the save and restore procedures and allow for the hardware to exploit some of the information available to the program.
As an example, a program may signal that a certain buffer or memory region is no longer used, allowing the controller to exclude it from the backup process.
This would also make it possible to integrate static analysis techniques such as the ones presented in \cite{zhao_software_2015} and \cite{zhao_stack-size_2017} on top of Freezer.
\section{Conclusion}\label{sec:conclusion}
Applications that run under ambient harvested energy suffer from frequent and unpredictable power losses.
To guarantee progress of computation under these circumstances, these applications have to rely on some mechanism to retain their state.
In this paper, we propose Freezer, a backup and restore controller that is able to reduce the backup size by monitoring the memory accesses, and that provides hardware acceleration for the backup and restore procedures.
The controller only requires a small memory to keep track of the store operations.
Moreover, it can be implemented with plain CMOS technology and does not rely on complex and expensive hybrid non-volatile memory elements.
Furthermore, Freezer is a drop-in component that can be integrated in existing SoCs without requiring modifications to the internal architecture of the processor.
Our proposed solution achieves an 87.7\% average reduction in backup size on a set of benchmarks, and a two-orders-of-magnitude reduction in backup time when compared with software-based state-of-the-art approaches.
The code and traces used in this paper are available for reproducibility at \url{https://gitlab.inria.fr/dpala/freezer-resources}.
\section*{Acknowledgment}
This work was supported by Inria Project Lab “Zero-Power Systems” (ZEP).
The authors would like to thank the anonymous reviewers for their comments and feedback.
\bibliographystyle{IEEEtran}
\section{Introduction}
It is generally accepted that the intergalactic medium (IGM) contains the majority of the baryons
in the universe \citep[e.g.][]{Shull03,Lehner07,Danforth08,Richter08,Shull12,Danforth16},
making it a key component in understanding cosmological structure formation. It is estimated
that about $30\,\%$ of the baryons at low-$z$ are in the form of photoionised hydrogen at a temperature of
$\lesssim 10^4$~K \citep{Penton00,Lehner07,Danforth08,Shull12,Tilton12,Danforth16},
while the collapsed, shock-heated Warm-Hot Intergalactic Medium (WHIM) at T $\sim 10^5 - 10^6$~K
contains at least $20\,\%$ \citep{Richter06,Richter06b,Lehner07}.
Cosmological simulations indicate that the WHIM may contribute up to $50\,\%$ of the baryons in the
low-$z$ universe \citep{Cen99,Dave99,Shull12,Martizzi19}.
This makes the IGM an important reservoir of baryons for the galaxies to fuel star formation.
Indeed, to explain current rates of star formation, such a baryon reservoir is
needed \citep[e.g.][]{Erb08,Prochaska09,Genzel10}. In this way, the evolution of galaxies is
tied to the properties and the spatial distribution of the IGM.
The relation between the IGM and galaxies is not one-way, however, as galaxies influence their
surroundings by ejecting hot gas into their circumgalactic medium (CGM) by AGN feedback
\citep[e.g.][]{Bower06,Davies19} and, particularly in the early universe, by supernova explosions
\citep{Madau01,Pallottini14}. Thus, there is a large-scale exchange of matter and energy
between the galaxies and the surrounding IGM and both environments influence the evolution of each
other. One observational method to study the gas circulation processes between galaxies and
the IGM is the analysis of intervening Lyman $\alpha$ (Ly$\alpha$) absorption in the spectra
of distant Active Galactic Nuclei (AGN), which are believed to trace the large-scale gaseous
environment of galaxies. Generally, an anticorrelation between Ly$\alpha$ absorber strength
and galaxy impact parameter is found for absorbers relatively close to
galaxies \citep[e.g.][]{Chen01,Bowen02,Wakker09,French17}.
The IGM is not only tied to the galaxies, but it is also expected to trace the dark matter
distribution and can, therefore, give insights into the large-scale structure of the universe.
This large-scale structure has been mapped by galaxy surveys like the 2-degree Field Galaxy Redshift
Survey and the Sloan Digital Sky Survey (SDSS) \citep{Colless01,York00}.
Studying the IGM absorber distribution at low redshift allows for a comparison with data from these
galaxy surveys. Already more than 25 years ago, \citet{Morris93} studied the spectrum
of the bright quasar 3C\,273 and mapped Ly$\alpha$ absorbers along this line of sight together
with galaxies in its vicinity. They found that Ly$\alpha$ absorbers cluster less strongly around
galaxies than the galaxies among themselves. This can be interpreted as most of the Ly$\alpha$
absorbers truly being intergalactic in nature, following the filamentary large-scale structure
rather than the position of individual galaxies.
More recently, \citet{Tejos16} studied Ly$\alpha$ and O\,{\sc vi} absorption in a single sightline in
regions between galaxy clusters. The detected overdensity of narrow and broad Ly$\alpha$ absorbers
hints at the presence of filamentary gas connecting the clusters. A different approach was taken
by \citet{Wakker15}. Instead of mapping gas along an isolated sightline, they used several sightlines
passing through a known galaxy filament. By comparing the relation of Ly$\alpha$ equivalent width
with both galaxy and filament impact parameters, \citet{Wakker15} conclude that Ly$\alpha$ absorbers
are best described in the context of large-scale structure, instead of tracing individual galaxy haloes.
While there is a relation between strong ($N$(H\,{\sc i}) $> 10^{15}$ cm$^{-2}$)
absorbers and the CGM of galaxies, weak Ly$\alpha$ absorbers are more likely to be associated with
filaments. This view is also supported by \citet{Penton02}, who find that weak absorbers
do not show a correlation between equivalent width and impact parameter to the nearest galaxy, while
stronger absorbers do. By comparing the position of their sample of Ly$\alpha$ absorbers relative to
galaxies in filaments, they conclude that the absorbers align with the filamentary structure. Evidence for
absorbers tracing an extensive, intra-group medium comes from other recent surveys of \citet{Stocke13}
and \citet{Keeney18}.
While the correlation between Ly$\alpha$ equivalent width and galaxy impact parameter seems to indicate
that these absorbers {\it somehow} are associated with galaxies (e.g., by the gravitational potential),
studies like \citet{Wakker15,Tejos16} show that at least some of the absorbers are associated with the
cosmological large-scale structure. Other studies \citep{Bowen02,Wakker09} conclude that their data
simply do not yield any definite conclusions on this aspect \citep[see also][]{Penton02, Prochaska11,Tejos14}.
Therefore, the question of how Ly$\alpha$ absorbers at $z=0$ are linked to galaxies and the large-scale
cosmological structure is not yet resolved. Clearly, additional absorption-line studies that
improve the currently limited statistics on the absorber/galaxy connection are desired.
In this paper, we systematically investigate the properties of $z=0$ Ly$\alpha$ absorbers and their
connection to the local galaxy environments and the surrounding large-scale structure. For this,
we follow an approach similar to that of \citet{Wakker15}. We combine the information on local galaxy
filaments mapped by \citet{Courtois13} with archival UV absorption line data from the
Cosmic Origins Spectrograph (COS) installed on the \textit{Hubble Space Telescope} (\textit{HST}).
Information on the galaxy sample used in this study is provided in Sect.~2. In Sect.~3, the HST/COS
data are described and information on the absorption line measurements are given. Details on the
galaxy filaments are presented in Sect.~4. In Sect.~5, we investigate the relation between absorbers and
galaxies, whereas in Sect.~6 we focus on the relation between absorbers and filaments. In Sect.~7, we
discuss our findings and compare them with previous studies. Finally, we summarise and conclude
our study in Sect.~8.
\begin{figure}[bp]
\centering
\includegraphics[width=\hsize]{histmMLall.pdf}
\caption{
Histogram of apparent and absolute $B$-band magnitudes and luminosities for all galaxies of
the V8k catalogue.
}
\label{histV8k}
\end{figure}
\section{Galaxy data}
\citet{Courtois13} used the V8k catalogue of galaxies to map galaxy filaments in the nearby universe.
This catalogue is available from the Extragalactic Distance Database\footnote{\url{http://edd.ifa.hawaii.edu/}} \citep[EDD;][]{Tully09}.
It is a compilation of different
surveys, including John Huchra's `ZCAT' and the IRAS Point Source Catalog redshift survey with its
extensions to the Galactic plane \citep{Saunders00a,Saunders00b}. In total, the catalogue consists
of $\sim$~30\,000 galaxies, all with velocities less than $8000$ km\,s$^{-1}$. It is complete up
to $M_B$~=~$-16$ for galaxies at 1000~km\,s$^{-1}$, while at 8000~km\,s$^{-1}$, it contains
one in 13 of the $M_B$~=~$-16$ galaxies. A radial velocity of 8000~km\,s$^{-1}$ corresponds
to a cosmological distance of $d\sim 114\,h^{-1}_{70}$~Mpc.
The distance to the Centaurus Cluster ($v \sim 3000$~km\,s$^{-1}$) is $\sim 40$~Mpc.
As described in Sect.\,3, the velocity range studied in this work extends up to
$v \sim 6700$~km\,s$^{-1}$, which corresponds to $\lambda \sim$ 1243 \AA.
Note that distance estimates to galaxies within 3000~km\,s$^{-1}$ in the V8k catalogue
are adjusted to match the Virgo-flow model by \citet{shaya95}.
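The pure Hubble-flow conversions quoted above can be reproduced directly. This minimal sketch assumes $H_0 = 70$~km\,s$^{-1}$\,Mpc$^{-1}$; for nearby galaxies the catalogue applies the flow-model corrections just noted, so the 3000~km\,s$^{-1}$ value only approximates the quoted Centaurus distance:

```python
# Hubble-flow distance d = v / H0, assuming H0 = 70 km/s/Mpc.
H0 = 70.0  # Hubble constant [km/s/Mpc] (assumed)

def hubble_distance(v_kms):
    """Cosmological distance in Mpc for a pure Hubble-flow velocity."""
    return v_kms / H0

print(f"{hubble_distance(8000):.1f} Mpc")  # survey limit: 114.3 Mpc
print(f"{hubble_distance(3000):.1f} Mpc")  # ~Centaurus velocity: 42.9 Mpc
```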
The relatively uniform sky coverage (except for the zone
of avoidance, ZOA) of the V8k survey combined with the broad range of galaxy types make it
suitable for qualitative work \citep{Courtois13}.
The distribution of apparent and absolute $B$-band magnitudes as well as
log($L/L^*$) for all galaxies of the V8k catalogue is presented in Fig.~\ref{histV8k}.
As can be seen from this distribution, the V8k catalogue is largely insensitive
to dwarf galaxies with luminosities log($L/L^*)\leq -0.5$.
This needs to be kept in mind for our later discussion of the absorber-galaxy relation in Sects. 5 and 6.
We decided to not add supplementary galaxy data from other surveys, because the sky coverage of
such a mixed galaxy sample would be quite inhomogeneous, which would introduce an additional
bias to the galaxy-absorber statistics.
In Fig.~\ref{skydistV8k}, upper panel, we show the sky distribution of the galaxies in the
various filaments, such as defined in \citet{Courtois13}.
The galaxies in these filaments have radial velocities in the range
$v=750 - 5900$~km\,s$^{-1}$. All filaments are feeding into the Centaurus Cluster located at
$l\sim 300\degree$ and $b\sim 20\degree$. The large concentration of galaxies in the green filament,
between $l\sim 260-300\degree$ and $b\sim 60-70\degree$ is due to the Virgo Cluster.
\begin{figure*}[htp]
\centering
\includegraphics[width=1.\textwidth]{skydistV8kCOS.pdf}
\caption{
{\it Upper panel:} Sky distribution of galaxies from V8k belonging to filaments as defined
in \citet{Courtois13}. The different colours indicate different galaxy filaments. Several
important clusters are noted. {\it Lower panel:} Sky distribution of HST/COS sightlines
passing close to a filament (black circles) and HST/COS sightlines not belonging to a filament
(grey circles) plotted together with the galaxies from the V8k catalogue belonging to
filaments (colour-coded according to velocity).
}
\label{skydistV8k}
\end{figure*}
\section{Absorption line data}
\subsection{HST/COS observations}
In this study we make use of ancillary HST/COS data, as retrieved from the HST Science Archive
at the Canadian Astronomy Data Centre (CADC). The total sample consists of 302 AGN sightlines,
all reduced following the procedures described in \citet{Richter17}.
Since the Ly$\alpha$ absorption ($\lambda_0=1215.67$ \AA) in the spectra studied here
falls in the wavelength range between 1215 and 1243 \AA, we make use of the data from the
COS G130M grating. This grating covers a wavelength range from 1150~$-$~1450~\AA~and has a
resolving power of $R=16,000-21,000$ \citep{Green2012,Dashtamirova19}.
The data quality in our COS sample is quite diverse,
with signal-to-noise (S/N) ratios per resolution element
varying substantially (between $3$ and $130$; see Fig.\,A.1 in the Appendix).
We also checked for metal absorption in the Ly$\alpha$ absorbers, considering the
transitions of Si\,{\sc iii} $\lambda 1206.50$, Si\,{\sc ii} doublet $\lambda 1190.42;
1193.29$, Si\,{\sc ii} $\lambda 1260.42$, Si\,{\sc ii} $\lambda 1526.71$, Si\,{\sc iv}
doublet $\lambda 1393.76; 1402.77$, C\,{\sc ii} $\lambda 1334.53$, C\,{\sc iv} doublet
$\lambda 1548.20; 1550.77$. For the lines at $\lambda > 1450$~\AA, data from the COS
G160M grating was used, which covers $\lambda$~=~1405~$-$~1775 \AA.
The QSO sightlines are plotted on top of the V8k galaxy filaments in the lower panel of
Fig.~\ref{skydistV8k}. The sky coverage of the sightlines is noticeably better in the
upper hemisphere. As can be seen, the majority of the sightlines do not pass through the
centres of the filaments, but rather are located at the filament edges.
A reason for sightlines not going directly through filaments could be because of extinction.
This holds true especially for dense regions like the Virgo Cluster, where the extinction is high.
On the other hand, the Virgo Cluster is a nearby cluster and might be better studied than
random regions on the sky. From the COS sightlines shown in the lower panel of Fig.~\ref{skydistV8k},
no clear bias can be seen, except for the northern versus southern hemisphere.
\subsection{Absorber sample and spectral analysis}
For all 302 COS spectra, the wavelength range between $1220-1243$ \AA~was inspected for
intervening absorption. This range corresponds to Ly$\alpha$ in the velocity range
$v \approx 1070-6700$~km\,s$^{-1}$. At velocities $<1070$~km\,s$^{-1}$, Ly$\alpha$
absorption typically is strongly blended with the damped Ly$\alpha$ absorption trough
from the foreground Galactic interstellar medium (ISM). To ensure consistency, we
do not further consider any absorption feature below 1220 \AA.
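The quoted velocity window follows from the non-relativistic Doppler relation $v = c\,(\lambda-\lambda_0)/\lambda_0$; a minimal check:

```python
# Converts the inspected Ly-alpha wavelength window to radial velocities
# using the non-relativistic Doppler relation v = c * (lam - lam0) / lam0.
C_KMS = 299792.458     # speed of light [km/s]
LAM0 = 1215.67         # Ly-alpha rest wavelength [Angstrom]

def lya_velocity(lam):
    return C_KMS * (lam - LAM0) / LAM0

print(f"{lya_velocity(1220.0):.0f} km/s")   # lower edge: ~1068 km/s
print(f"{lya_velocity(1243.0):.0f} km/s")   # upper edge: ~6740 km/s
```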
Each detected absorption feature at $1220-1243$ \AA\, was checked to be Ly$\alpha$
absorption by ruling out Galactic foreground ISM absorption and other, red-shifted
lines from intervening absorbers at higher redshift. As for the Galactic ISM
absorption, this wavelength range contains only the N\,{\sc v} doublet (1238, 1242~\AA)
and the weak Mg\,{\sc ii} doublet (1239, 1240 \AA) as potential contaminants and
the regions were flagged accordingly. Potential red-shifted contaminating lines
that were ruled out include: the H\,{\sc i} Lyman-series up to Ly$\delta$,
Si\,{\sc iii} (1206.50 \AA), and the two O\,{\sc vi} lines at $1037.62$ and
$1031.93$ \AA. Whenever possible, we also used the line-list of intergalactic
absorbers from \citet{Danforth16}, which covers a sub-sample of 82 COS spectra.
All in all, we identify 587 intervening Ly$\alpha$ absorbers along the 302
COS sightlines in the range $\lambda =1220-1243$ \AA.
For the continuum normalisation and the equivalent width measurements of the detected
features (via a direct pixel integration) we used the {\tt span} code \citep{Richter11}
in the ESO-MIDAS software package, which also provides velocities/redshifts for
the absorbers. To derive column densities of H\,{\sc i} (and the metal ions) for a
sub-sample of the identified Ly$\alpha$ absorbers we
used the component-modelling method, as described in \citet{Richter13}. In this
method, the various velocity sub-components in an absorber are consistently modelled
in all available ions (H\,{\sc i} and metals) to obtain column densities ($N$) and
Doppler-parameter ($b$-values) for each ion in each component. Throughout the paper, we give
column densities in units [cm$^{-2}$] and $b$-values
in units [km\,s$^{-1}$].
The modelling code, which is also implemented in ESO-MIDAS, takes into account the wavelength-dependent
line-spread function of the COS instrument. Wavelengths and oscillator strengths
of the analysed ion transitions were taken from the list of \citet{Morton03}.
The total sample of 302 COS sightlines was separated into two sub-samples, one with
sightlines passing close to a filament, and the other with sightlines that do not.
To account for the occasionally seen large projected widths of the filaments (see, e.g.,
part of the dark blue filament in Fig.~\ref{skydistV8k}) and to be able to map
also the outer parts of the filaments, a separation of 5 Mpc to the nearest galaxy
belonging to a filament was chosen as dividing distance in this selection process.
One sightline (towards 4C--01.61) was categorised as belonging to a filament -- although
its nearest galaxy distance is as large as 7.9 Mpc -- because it passes a filament
that is very poorly populated. In total, our selection processes lead to 91 sightlines
that are categorised as filament-related, while the remaining 211 sightlines
are categorised as sightlines that are unrelated to the filaments studied here.
The total redshift pathlength in our COS data set can be estimated as
$\Delta z = 0.0189\,N$, with $N$ being the number of sightlines.
This gives $\Delta z = 1.72$ and $3.99$ for the sightline sample belonging to filaments
and the one unrelated to filaments, respectively. This will be further discussed in
Sect.\,\ref{section:statistics}.
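The pathlength per sightline quoted above corresponds to the redshift width of the inspected window, $\Delta z = (1243-1220)\,\mathrm{\AA}/\lambda_0$; a minimal check:

```python
# Redshift pathlength of the inspected Ly-alpha window per sightline,
# and totals for the two sightline sub-samples.
LAM0 = 1215.67                    # Ly-alpha rest wavelength [Angstrom]
dz = (1243.0 - 1220.0) / LAM0     # per-sightline pathlength

print(f"dz per sightline: {dz:.4f}")      # 0.0189
print(f"filament sample:  {91 * dz:.2f}")   # 1.72
print(f"non-filament:     {211 * dz:.2f}")  # 3.99
```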
Within the for us relevant sub-sample of the filament-related sightlines, 12 spectra
were unsuited for measurements for absorption-line measurements due to various different
data issues, such as an indeterminable continuum, or heavy blending from various lines.
Of the remaining 79 spectra, 9 had no Ly$\alpha$ absorption features detected in the
studied wavelength range. This implies a Ly$\alpha$ detection rate of $\sim$90\,\%
(we will later further discuss the number density and cross section of Ly$\alpha$
absorbers in this sample). The signal-to-noise ratios for these 79 spectra
vary between 5 and 92 per resolution element. In this sub-sample of 79 filament-related
sightlines, we identify 215 Ly$\alpha$ absorption systems that are composed of
227 individual components. For these 215 (227) absorbers (components), we have
derived H\,{\sc i} column densities and $b$-values via the component-modelling method,
as described above.
In the other sightline sample, which we categorise as unrelated to the galaxy filaments,
25 spectra were unsuited for measurements for the same reasons as described above.
Of the remaining 186 spectra, only 24 show no Ly$\alpha$ absorption in the range
considered above, resulting in an 87\,\% detection rate for Ly$\alpha$ in this sample.
Metal ions (Si\,{\sc ii}, Si\,{\sc iii}, Si\,{\sc iv}, C\,{\sc ii} or C\,{\sc iv})
were detected for 26 of the 215 Ly$\alpha$ filament absorbers, giving a metal
detection fraction of $\sim$12\,\%. Two example HST/COS spectra are shown in
Fig.~\ref{specmodel} (black) together with the synthetic model spectra (red). These
example spectra give an indication of the characteristic differences in S/N in the
COS data used in this study.
\begin{figure}[tp]
\centering
\includegraphics[width=\hsize]{spec_model.pdf}
\caption{
HST/COS G130M spectra of the QSOs VV2006-J131545.2+152556 (upper panel) and
PKS2155-304 (lower panel). The COS data are given in black, while the absorber
model is plotted in red. Several Ly$\alpha$ absorbers are seen in these spectra.
For a better visualisation, both spectra are binned over two pixels.}
\label{specmodel}
\end{figure}
Figure~\ref{histEWLya}, upper panel, shows the distribution of H\,{\sc i} Ly$\alpha$
equivalent widths for the detected absorbers in the two sub-samples and in the
combined, total sample. The lower panel instead shows the distribution of H\,{\sc i}
column densities in the filament-related absorbers, as derived from the component
modelling. Both distributions mimic those seen in previous Ly$\alpha$ studies
at $z=0$ \citep{Lehner07}. The sample of \citet{Danforth16} with 2577 Ly$\alpha$
absorbers obtained with HST/COS shows a similar distribution with a peak in equivalent
width just below 100 m\AA. The H\,{\sc i} column-density distribution falls
off below log $N$(H\,{\sc i}$)=13.5$ due to the incompleteness in the data
to detect weaker H\,{\sc i} Ly$\alpha$ absorbers. Note that because of the
limited spectral resolution and S/N many of the broader Ly$\alpha$
lines most likely are composed of individual, unresolved sub-components.
The H\,{\sc i} column-density distribution function will be discussed in
Sect.\,7.
\begin{figure}[htp]
\centering
\includegraphics[width=\hsize]{histEWlogN.pdf}
\caption{
Histogram of equivalent widths of Ly$\alpha$ absorbers (upper panel) and log$N$(H{\sc i}) in
the filament-related absorbers, as derived from the component modelling (lower panel).
}
\label{histEWLya}
\end{figure}
Errors of the measured equivalent widths have been derived
with the {\tt span} code \citep{Richter11}, which takes into account
the S/N around each line, the uncertainty for the local continuum
placement, and possible blending effects with other lines/features.
Typical 1$\sigma$ errors for the equivalent widths lie around 20 m\AA.
The errors in the column densities were derived based on the component-modelling
method \citet{Richter11}. Here, the typical errors are on the order
of $\sim 0.1$ dex.
The latter value is similar to the errors found by \citet{Richter17} for the same method and a comparable COS data set. The Doppler
parameters have a relatively high uncertainty, especially for higher
values of $b$. With the majority of $b$-values falling between 10 and
30~km\,s$^{-1}$, the errors are typically $\sim 5$~km\,s$^{-1}$, with
lower errors for the low end of the range of $b$ and slightly higher
errors for larger $b$-values. Tabulated results from our
absorption-line measurements can be made available on request.
\section{Characterisation of galaxy filaments}
To study how the IGM is connected to its cosmological environment, it is
important to characterise the geometry of the filaments, their galaxy content,
and their connection to the overall large scale structure. In Fig.~\ref{skyRvir},
we show the position of the galaxies in the filaments together with their
projected extent out to 1.5 virial radii (1.5 R$_{\rm vir}$). Gas within this characteristic
`sphere of influence' can be considered as gravitationally bound to that galaxy.
This plot therefore gives a first indication of how much uncovered
sky there is {\it between} the galaxies and their spheres of influence,
indicative of the projected intergalactic space in the filaments (compared
to the projected circumgalactic space within 1.5 R$_{\rm vir}$). The Virgo Cluster
clearly stands out, as many galaxies are overlapping in their projected spheres
at 1.5 R$_{\rm vir}$, while in most other filaments, there are both regions with
strong overlap and regions without overlapping halos. In Sect.\,6 and in the
Appendix, we will also discuss other virial radii as selection criteria.
\begin{figure*}[htp]
\centering
\includegraphics[width=0.8\textwidth]{skyRvir.pdf}
\caption{
Galaxies belonging to all filaments considered in this study plotted together
with their projected 1.5 virial radii.
}
\label{skyRvir}
\end{figure*}
\begin{figure*}[htp]
\centering
\includegraphics[width=1.0\textwidth]{allfilsv.pdf}
\caption{Galaxies belonging to different filaments [(a): green; (b): purple;
(c): dark blue; (d): cyan; (e): magenta] with their velocities
colour-coded. Grey dots show galaxies belonging to one of the other filaments.
The filament axes are indicated by the black solid lines.
}
\label{filsv}
\end{figure*}
\subsection{Parametrisation of filament geometry}
To define an axis for each filament, a rectangular box was generated
per filament containing the galaxies therein.
The dark blue filament (see Fig.~\ref{skydistV8k})
was split into two individual boxes for geometrical reasons.
Widths and lengths of the boxes vary for the different filaments,
as they scale with the filament's projected dimensions.
After defining the boxes sampling the individual filaments, they were
each sub-divided into segments with the full width of the box and
a length corresponding to 20$\degree$ on the sky. Each segment overlaps
with the previous one by half its area (i.e., by 10$\degree$ in length).
The average longitude and latitude of the galaxies within each segment
was then determined and used as an anchor point to define the filament
axis. All these anchor points were connected in each filament to form
its axis.
In this way, the definition of the filament axis on the sky allowed us to calculate
impact parameters of the COS sightlines to the filaments. In addition,
we calculated velocity gradients in the filaments, by taking the average
velocity of all galaxies in each segment as velocity anchor point.
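For illustration, the segment-averaging procedure described above can be sketched as follows (a minimal Python sketch under simplifying assumptions: segments are stepped along Galactic longitude only, and the function name is ours):

```python
import numpy as np

def filament_anchors(l, b, v, seg_len=20.0, step=10.0):
    """Anchor points of a filament axis from overlapping sky segments.

    l, b : Galactic longitudes/latitudes of the filament's galaxies [deg]
    v    : radial velocities [km/s]
    Segments are seg_len degrees long and overlap by half (step = seg_len/2);
    each anchor is the mean (l, b, v) of the galaxies in one segment.
    """
    l, b, v = map(np.asarray, (l, b, v))
    anchors, start = [], l.min()
    while start < l.max():
        in_seg = (l >= start) & (l < start + seg_len)
        if in_seg.any():
            anchors.append((l[in_seg].mean(),
                            b[in_seg].mean(),
                            v[in_seg].mean()))
        start += step
    return anchors
```

Connecting consecutive anchor points then yields the filament axis, while the mean velocities serve as the velocity anchor points used for the gradients.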
The method of using overlapping segments to determine the filament axis is
similar to the approach used by \citet{Wakker15}. One difference from their
approach is that they first determined which galaxies were part of the
filament based on their velocities. We did not need to do this, as the filaments
were already defined by \citet{Courtois13}. The uncertainty on the placement
of the filament axes is no more than 1.5$\degree$ on the sky, and less for most filaments.
The characteristics of each filament will be discussed separately in the
following subsections. The orange or `4 clusters' filament from
\citet{Courtois13} is not discussed here, as there are no available COS
sightlines nearby.
\subsection{Green filament}
\begin{figure}[htp]
\centering
\includegraphics[width=\hsize]{galdens.pdf}
\caption{Galaxy density along the green filament. The galaxy density indicates
the number of galaxies within 5$\degree$ on the sky for each galaxy.
}
\label{galdensgreen}
\end{figure}
Perhaps the most notable of the filaments discussed here is the one
containing the Virgo Cluster, located at a distance of $\sim$16.5~Mpc \citep{Mei07}
and with up to 2000 member galaxies.
This filament is labelled in green in
Fig.~\ref{skydistV8k} and extends from the Centaurus Cluster to the
Virgo Cluster in the range $l \sim 260-300\degree$ and
$b\sim 60-70\degree$. The Virgo area has the highest
galaxy density of the regions studied here.
The axis of the green filament as well as the galaxy velocities are
indicated in Fig.~\ref{filsv}a. The velocities range from
$\sim 3400$~km\,s$^{-1}$ at the Centaurus Cluster to $\sim -400$~km\,s$^{-1}$ near the Virgo Cluster.
However, the velocities of the galaxies in the Virgo Cluster reach up to
$\sim 2500$~km\,s$^{-1}$, indicating a large spread in velocities,
just as expected for a massive galaxy cluster.
Figure~\ref{galdensgreen} shows that the density along the filament varies
greatly, with the Virgo Cluster being the densest region (sub-boxes 6$-$8).
In total, this filament has 427 galaxies and 36 COS sightlines passing through it.
\subsection{Purple filament}
As mentioned in \citet{Courtois13}, the purple filament is the longest
cosmological structure in three-dimensional space among those studied here. In projection,
however, it is one of the shorter filaments on the sky. This filament
was discussed in detail by \citet{Fairall98}, who named it the `Centaurus Wall'.
A striking lack of galaxies in the regions around $b\sim 0\degree$ is evident
in Fig.~\ref{filsv}b due to the ZOA caused by
the Milky Way disk in the foreground. Just below this scarcely populated
region is the Norma Cluster ($l\sim 325\degree$), followed by the Pavo~II
Cluster ($l\sim 335\degree$).
The purple filament contains the galaxies with the highest velocities in
our sample, with $v$ reaching up to $6500$~km\,s$^{-1}$ (see Fig.~\ref{filsv}b). These high
velocities correspond to distances of up to $\sim 85$~Mpc.
It is the only filament in which the galaxy velocities strongly increase when moving
away from Centaurus. As such, it extends beyond the velocity range considered in
\citet{Courtois13}. Here, we consider only the part of the filament indicated
by their work.
The purple filament is the densest of our defined filaments, which is not surprising as
it hosts two galaxy clusters and the projection effect makes it visually compact
on the sky. A total of 351 galaxies from the V8k catalogue belong to this filament,
but only 2 COS sightlines, which are both shared with the dark blue filament.
\subsection{Dark Blue filament}
The dark blue filament represents one branch of the Southern Supercluster filament,
defined in \citet{Courtois13}. Since it is clearly separated on the sky from the
other branch (the cyan filament), these two branches are treated as individual
filaments in this study. Starting from the Centaurus Cluster, the dark blue
filament is entangled with the purple filament, but it continues to stretch out
as a rather diffuse cosmological structure over the range $l \sim 0-180\degree$
in the southern hemisphere. Because of the low galaxy density, the filament
axis of the dark blue filament is less well defined and more irregular than in other
filaments, as can be seen in Fig.~\ref{filsv}c. The dashed portion of the axis
indicated in the figure is a result of the small number of galaxies found in
this region, so the exact filament geometry in this part of the filament
remains uncertain.
Figure~\ref{filsv} further indicates that average velocities in the dark blue
filament are much lower than in the purple filament, making the two filaments
easy to distinguish. The dark blue filament also exhibits two
distinct velocity branches: one with $v \sim 2500$~km\,s$^{-1}$ and
one with $v \sim 1300$~km\,s$^{-1}$ (see Fig.~\ref{filsv}), further
underlining the inhomogeneous morphology of this filament. This filament has
only 180 galaxies and 21 COS sightlines.
\subsection{Cyan filament}
The second branch of the Southern Supercluster filament is indicated by the
cyan colour in Fig.~\ref{skydistV8k}. Compared to the dark blue filament, this
branch is rather densely populated and the corresponding filament axis is
well defined (Fig.~\ref{filsv}d).
As with the green and dark blue filaments, the highest velocities in the cyan
filament are found near the Centaurus Cluster, with velocities decreasing as one
gets closer to the Fornax Cluster. However, Fig.~\ref{filsv} suggests that there
is a slight increase in velocity near the end of the filament at $l<240\degree$.
The cyan filament is made up of 289 V8k galaxies and there are 20 COS sightlines
passing through it.
\subsection{Magenta filament}
This filament (magenta coloured in Fig.~\ref{skydistV8k}) contains the
Antlia Cluster and also crosses the ZOA. While it is densely populated
for $b>0\degree$ (near Centaurus), it is underdense near the ZOA and also only
moderately populated at negative Galactic latitudes. This makes the transition
of the filament axis from positive to negative latitudes hard to define.
As can be seen in Fig.~\ref{filsv}e, the velocities in this filament range
from 3000 km\,s$^{-1}$ near the Centaurus Cluster to 1400 km\,s$^{-1}$
near its end at $l=210\degree$ and $b=-45\degree$.
It has 143 galaxies and 2 usable COS sightlines.
\begin{figure}[tp]
\centering
\includegraphics[width=\hsize]{EWrhog.pdf}
\caption{
Equivalent width of Ly$\alpha$ absorbers (blue dots) plotted against
the impact parameter to the nearest galaxy (upper panel) or against
the impact parameter in units of the galaxy's virial radius (lower panel).
The sample has been split into components that lie within 1000~km\,s$^{-1}$
of the nearest filament segment (red) and ones with a larger velocity
difference (blue). Black crosses indicate sightlines that exhibit
no significant Ly$\alpha$ absorption
in the analysed spectral region.
For these, we give the distance to the nearest galaxy in the velocity
range $v~=~1070-6700$~km\,s$^{-1}$.
}
\label{EWrhog}
\end{figure}
\section{Ly$\alpha$ absorption and its connection to galaxies}\label{Lyagals}
To learn about the relation between intervening Ly$\alpha$ absorption,
nearby galaxies, and the local large scale structure, in which the
absorbers and galaxies are embedded, we first look at the connection
between Ly$\alpha$ absorption in the COS data and individual galaxies.
In Fig.~\ref{EWrhog}, upper panel, we have plotted the equivalent widths
of all Ly$\alpha$ absorbers against the line-of-sight impact parameter to
the nearest galaxy, $\rho_{\rm gal}$, that has a radial velocity within
400~km\,s$^{-1}$ of the absorber. For this plot, {\it all} V8k galaxies
have been taken into account (not just the ones in filaments), as some of
the absorbers might be related to galaxies outside of the main cosmological
structures. We indicate absorbers that are within 1000~km\,s$^{-1}$ of
the nearest filament in red, and those that have larger deviation
velocities in blue.
Non-detections have been indicated by the black
crosses. The corresponding sightlines do not show any Ly$\alpha$ absorption
in the wavelength range 1220$-$1243 \AA.
There is an overdensity of absorbers within 1 Mpc of the nearest
galaxy, many of which have equivalent widths $W${\boldmath$_{\lambda}$}$\,<200$ m\AA.
This overabundance of weak absorbers close to galaxies might be a selection
effect. Prominent regions, such as the dense Virgo Cluster, receive more
attention by researchers and are sampled by more sightlines (and by spectral
data with better S/N) compared to underdense cosmological regions, which
typically are not as well-mapped. The highest equivalent widths
of the absorbers ($W_{\lambda} >500$ m\AA) typically are found closer to the galaxies,
in line with the often observed anti-correlation between Ly$\alpha$
equivalent width and impact parameter \citep[e.g.][]{Chen01,French17}.
There is, however, a large scatter in this distribution, as is also seen
in other studies \citep[e.g.][]{French17}. This scatter is most likely related
to filament regions that have a large galaxy density and overlapping (projected)
galaxy halos, such as indicated in Fig.~\ref{skyRvir}. Ly$\alpha$ absorption
that is detected along a line of sight passing through such a crowded region
cannot unambiguously be related to a {\it particular} galaxy (such as the
nearest galaxy, which is assumed here), but could be associated with the same
likelihood to any other (e.g., more distant) galaxy and its extended
gaseous halo that is sampled by the sightline.
The lower panel of Fig.~\ref{EWrhog} shows the Ly$\alpha$ equivalent width
plotted against $\rho_{\rm gal}$/R$_{\rm vir}$. Again, we see the same trend for
stronger absorbers to be closer to a galaxy. Out of the 208 Ly$\alpha$
absorption components, 29 are within $1.5$ virial radii from the nearest
galaxy. Following \citet{Shull14,Wakker15}, this is the characteristic radius
up to which the gas surrounding a galaxy is immediately associated with that
galaxy and its circumgalactic gas-circulation processes (infall, outflows,
mergers). It corresponds to $\sim 2 - 3$ times the gravitational radius as
defined in \citet{Shull14}.
Outside of this characteristic radius, the gas is more likely
associated with the superordinate cosmological environment (i.e., the group
or cluster environment and the large-scale filament; but see also Sect.\,6 and
Fig.\,B.1).
\citet{Wakker15} use both this distance criterion and the criterion of
absorption occurring within 400~km\,s$^{-1}$ of the galaxy's velocity
to associate each absorber with either the galaxy or the filament.
This velocity range (which we also adopt here; see above) is justified in view
of other dynamic processes that would cause a Doppler shift of the gas
in relation to the galaxy's mean radial velocity, such as galaxy rotation,
velocity dispersion of gas structures within the virialised dark-matter halo of
the host galaxy, as well as in- and outflows.
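The combined distance and velocity criterion for associating an absorber with a galaxy can be written as a minimal sketch (the function name and return labels are our own):

```python
def associate_absorber(rho_gal_kpc, r_vir_kpc, dv_kms,
                       f_rvir=1.5, dv_max=400.0):
    """Assign an absorber to a galaxy or to the ambient environment.

    An absorber counts as galaxy-associated if it lies within f_rvir
    virial radii of the galaxy AND within dv_max km/s of the galaxy's
    radial velocity (the two criteria adopted in the text); otherwise
    it is attributed to the large-scale environment.
    """
    if rho_gal_kpc <= f_rvir * r_vir_kpc and abs(dv_kms) <= dv_max:
        return "galaxy"
    return "environment"
```

Varying `f_rvir` between 1.0 and 2.5 corresponds to the alternative selection criteria explored in the Appendix.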
In Fig.~\ref{NHIb} we show how the H\,{\sc i} column density
(log$N$(H\,{\sc i})) and the Doppler parameter ($b$ value) vary with
$\rho_{\rm gal}$. Similarly to $W${\boldmath$_{\lambda}$}, the largest
values for log$N$(H\,{\sc i}) and $b$ are found at smaller impact parameters,
but (again) the scatter is large.
\citet{Wakker15} have also plotted the equivalent width versus impact parameter
to the nearest galaxy for their sample. Although there are some high equivalent
width absorbers at large $\rho_{\rm gal}$ (out to 2000 kpc),
the average equivalent width decreases with
increasing $\rho_{\rm gal}$. Similar to our sample, \citet{Wakker15} find the
majority of the absorbers within 1 Mpc of a galaxy. Our sample, however, has
a larger scatter and more strong absorbers at larger distances.
\citet{Prochaska11} also conclude that there is an anti-correlation between
equivalent width and galaxy impact parameter for their sample that
has a maximum $\rho_{\rm gal}$ of 1 Mpc. In addition to stronger absorbers
having lower impact parameters, their sample shows an increase of
the number of weak absorbers ($W_{\lambda} <$ 100 m\AA)
with increasing impact parameter.
\begin{figure}[tp]
\centering
\includegraphics[width=\hsize]{logNHIb.pdf}
\caption{
Logarithmic H\,{\sc i} column density and the Doppler parameter of
Ly$\alpha$ absorbers are plotted against $\rho_{\rm gal}$ for the
same two samples as shown in Fig.~\ref{EWrhog}.
}
\label{NHIb}
\end{figure}
\begin{figure*}[htp]
\centering
\includegraphics[width=1.0\textwidth]{allfilsCOS.pdf}
\caption{
Same as Fig.~\ref{filsv}, but here with Ly$\alpha$ absorbers (coloured
squares) overlaid that fall within 1000~km\,s$^{-1}$ of the filament's
velocity. Multiple absorbers along the same sightline have been given
spatial offset. COS sightlines that do not exhibit Ly$\alpha$ absorption
in this range, are indicated by black crosses.
}
\label{filsCOS}
\end{figure*}
\section{Ly$\alpha$ and its connection to filaments}
In Fig.~\ref{filsCOS}, the filaments are plotted together with the position
of the COS sightlines (filled squares) and the velocities of the detected
Ly$\alpha$ components colour-coded (in the same way as the galaxies).
Only those absorption components are considered that have velocities within
1000~km\,s$^{-1}$ of the nearest filament segment. These plots are useful
to visualise the large-scale kinematic trends of the absorption features
along each filament, while at the same time the spatial and kinematic
connection between Ly$\alpha$ components and individual galaxies can
be explored.
In the green filament (a), Ly$\alpha$ absorption is predominantly found
near 1500~km\,s$^{-1}$. This holds true for both the sightlines at the outskirts
of the filament and those going through the Virgo Cluster. For the latter, this
indicates that the gas has a higher velocity than the typical velocity
of galaxies in the Virgo Cluster (as mentioned earlier, the V8k catalogue
takes into account the Virgo-flow model by \citet{shaya95}).
Due to the extended Ly$\alpha$ trough of the Galactic foreground ISM
absorption, intervening Ly$\alpha$ absorption below $\sim 1100$~km\,s$^{-1}$
cannot be measured in our COS data set, so that our absorption statistics are
incomplete at the low end of the velocity distribution. Still,
the trend of decreasing galaxy velocities with increasing distance to the
Centaurus Cluster (see above) is not reflected in the kinematics of the
detected Ly$\alpha$ absorbers in this filament, which appears
to be independent of the large-scale galaxy kinematics.
The purple filament (b) and first section of the dark blue filament (c)
overlap on the sky and have two COS sightlines in common.
The different filament velocities allow us to assign the detected Ly$\alpha$
absorption in one of the sightlines to the purple filament, while the other
sightline has one absorption component that we associate with the dark blue
filament. With only 2 sightlines available for the purple filament, no
clear trends can be identified.
As the dark blue filament continues, the different `branches' noted earlier
in Sect.\,4.4 are also reflected in the velocities of the Ly$\alpha$
absorption components. This trend might be partly a result of our original
selection criterion for filament-related absorbers (absorption within 1000~km\,s$^{-1}$
of the closest filament-segment velocity; see above). However, because of the
large velocity range used, the selection criterion cannot account for the
entire branching effect. Evidently, the gas in this filament traces the
velocities of the galaxies. Since this is the most diffuse filament, the
chance of finding a Ly$\alpha$ absorber that is not directly associated
with a galaxy, but rather traces the large-scale flow of matter in the
filament, is higher.
The cyan filament (d), in contrast, is well populated with galaxies, while also
being relatively long and broad. It thus has a large cross-section, and there
are several sightlines that pass through this structure. Also in this case,
the Ly$\alpha$ absorption appears to follow the velocity trend of the
galaxies in the filament. Starting from Centaurus, the absorbers
first exhibit velocities around 1800~km\,s$^{-1}$, then the velocities
decrease by several hundred km\,s$^{-1}$, before rising again slightly at the
end of the filament, in line with the galaxies' velocity pattern.
Most of the sightlines that pass the magenta filament (e) are not suited
for a spectral analysis. The one sightline that has been analysed shows
no significant absorption in the relevant velocity range, implying that
no useful information is available for the magenta filament.
\begin{figure}[tp]
\centering
\includegraphics[width=\hsize]{EWrhof_Dvsel.pdf}
\caption{
Ly$\alpha$ equivalent width versus filament impact parameter for absorbers
with velocities within 1000~km\,s$^{-1}$ of the nearest filament segment.
Open circles indicate absorbers that are associated with a galaxy (passing
it within 1.5 R$_{\rm vir}$ and $\Delta$v < 400~km\,s$^{-1}$), closed circles are
not associated with a (known) galaxy. The different colours indicate the
individual filaments.
}
\label{EWrhof}
\end{figure}
In analogy to Fig.~\ref{EWrhog}, Fig.~\ref{EWrhof} shows the equivalent
width of the Ly$\alpha$ absorbers, but now plotted against the {\it filament}
impact parameter, $\rho_{\rm fil}$. To evaluate whether the absorbers are related
to a nearby galaxy, those absorbers that pass within 1.5 R$_{\rm vir}$ and
$\Delta$v~<~400~km\,s$^{-1}$ of a galaxy are shown as open circles, whereas
absorbers not associated with a (known) galaxy are indicated with closed
circles. In the Appendix we show in Fig.\,\ref{diffrhog} the effect of
varying the impact-parameter criterion for absorbers to be associated with a galaxy
between $1.0$ and $2.5$ R$_{\rm vir}$ (which leads to no further insights, however).
While some of the absorbers with the highest equivalent widths are associated
with a galaxy, this is not true for all strong absorbers. Neither sub-sample
shows a clear, systematic trend for the equivalent width scaling with
$\rho_{\rm fil}$, except that the maximum Ly$\alpha$ equivalent width
in a given $\rho_{\rm fil}$ bin decreases with increasing distance. However,
both sub-samples show a higher absorber density within $\rho_{\rm fil}<3$~Mpc
compared to more distant regions. Some of the absorbers indicated in green
extend up to $\rho_{\rm fil}=13$~Mpc, but these absorbers are unlikely to be
part of the green filament, as the typical width of a cosmological filament
is a few Mpc \citep{Bond10}.
But even if we limit our analysis to absorbers with $\rho_{\rm fil} < 5$~Mpc
\citep[as in][]{Wakker15}, the large scatter in the distribution
of Ly$\alpha$ equivalent widths versus filament impact parameter remains.
The velocity trends for galaxies and absorbers along four filaments (green,
purple, dark blue, cyan) are shown in Fig.~\ref{kinbins}.
The starting point for each filament is the Centaurus Cluster region.
Here, each sub-box (segment) is defined to have a length of 10$\degree$
on the sky. This is half the length of the sub-boxes (segments) used to
define the filament axes (see Sect.\,4.1), because here, sub-boxes (segments)
do not overlap. Only for the second part of the dark blue filament
(sub-boxes 12$-$18) was a length of 20$\degree$ chosen, to ensure a
sufficient number of galaxies for the determination of a
meaningful average velocity.
\begin{figure*}[ht!]
\centering
\includegraphics[width=0.8\textwidth]{allkin.pdf}
\caption{
Average galaxy velocities along four filaments (dots, (a): green;
(b): purple; (c): dark blue; (d): cyan) plotted together with the velocities of
Ly$\alpha$ absorbers (squares) for each sub-box (segment). The velocity
dispersion is indicated by the colour-shaded area. The grey bars indicate
the numbers of galaxies belonging to each sub-box (segment) in the filament.
}
\label{kinbins}
\end{figure*}
\begin{figure}[tp]
\centering
\includegraphics[width=\hsize]{CDDF.pdf}
\caption{
H\,{\sc i} column-density distribution function for the Ly$\alpha$
absorbers in sightlines close to filaments (blue) and the absorbers
falling within 1000~km\,s$^{-1}$ from the filament velocity (red).
Errors in log[f($N$)] are from Poisson statistics.
}
\label{CDDF}
\end{figure}
\begin{figure}[tp]
\centering
\includegraphics[width=\hsize]{SiIIILya.pdf}
\caption{
Relation between log$N$(Si\,{\sc iii}) and log$N$(H\,{\sc i}) for
13 systems in our absorber sample.}
\label{SiIIIHI}
\end{figure}
Both the green and cyan filaments (Fig.~\ref{kinbins}a and d) show a clear
decrease in velocity as they extend further away from the Centaurus Cluster.
For these two filaments, the velocities of the detected Ly$\alpha$ absorbers
all lie above the lower limit of the galaxy velocity-dispersion in each sub-box,
with only one exception (see sub-box 3 in the green filament, where one absorber
falls just below the shaded area). This could possibly mean that there is a void
of absorbers in the region between the filament and the Milky Way.
However, it is important to recall that absorbers with velocities less than
$\sim$1100~km\,s$^{-1}$ could not be measured due to the Galactic foreground
absorption. This limit could also explain why there is no absorption in the
lower velocity range of the Virgo Cluster (see sub-boxes 6 and 7 in
Fig.~\ref{kinbins}a).
Furthermore, most absorbers in sub-boxes 2 $-$ 4 and a couple in the other
sub-boxes have large $\rho_{\rm fil}$ ($>5$~Mpc). The velocity spread for those
absorbers in the first sub-boxes is larger, tending to slightly higher
velocities than that of the galaxies. Most, however, fall within the
standard deviation of the galaxy velocities.
The purple filament contains only three absorbers, all belonging to the same
sightline at the end of the filament, as can be seen in Fig.~\ref{kinbins}b.
The plot underlines that this filament is well-populated, with more than
50 galaxies in 4 out of 7 sub-boxes.
Just like the filament axis in Fig.~\ref{filsv}c, the velocity trend along
the dark blue filament is irregular. This trend is also reflected in the
absorber velocities. While there are only a few galaxies in each sub-box,
the second part of the filament contains 14 absorbers, which is comparable
to the total number of absorbers in the cyan filament.
\section{Absorber statistics}
\label{section:statistics}
In quasar-absorption spectroscopy, the observed relation between the number
of H\,{\sc i} absorption systems in the column-density interval $\Delta N$
($N_{\mathrm{H\,I}}$ to $N_{\mathrm{H\,I}} + \Delta N$) and the absorption-path
length interval $\Delta X$ ($X$ to $X + \Delta X$) is commonly characterised by
the differential {\it column density distribution function} (CDDF),
$f(N_{\mathrm{H\,I}})$. We use the formalism described in \citet[][and
references therein]{Lehner07} and adopt the following expression to
describe the differential CDDF of our Ly$\alpha$ absorbers:
\begin{equation}
f(N_{\mathrm{H\,I}})\,dN_{\mathrm{H\,I}}\,dX = C_{\mathrm{H\,I}}\,
N_{\mathrm{H\,I}}^{-\beta}\,dN_{\mathrm{H\,I}}\,dX.
\end{equation}
Following, e.g., \citet{Tytler87}, the absorption path $\Delta X$ and the
redshift path $\Delta z$ at $z\approx 0$ can be related via the approximation
\begin{equation*}
\Delta X = 0.5[(1+\Delta z)^2 - 1],
\end{equation*}
where we calculate the redshift pathlength $\Delta z$ for the various sightline
samples as described in Sect.\,3.2.
The slope of the CDDF is given by the exponent $\beta$, while the normalisation
constant, $C_{\mathrm{H\,I}}$, can be calculated via the relation
\begin{equation*}
C_{\mathrm{H\,I}} \equiv m_{\mathrm{tot}}\,(1-\beta)/\{N_{\mathrm{max}}^{1-\beta}
[1-(N_{\mathrm{min}}/N_{\mathrm{max}})^{1-\beta}]\}.
\end{equation*}
Here, $m_{\mathrm{tot}}$ is the total number of absorbers in the column-density
interval $N_{\mathrm{min}}$ to $N_{\mathrm{max}}$.
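For illustration, the two relations above translate directly into code (a minimal sketch; the function names are ours, and the default column-density limits of log\,$N = 13.2-15.5$ correspond to our fit range):

```python
def delta_x(delta_z):
    """Absorption-path length from the redshift path at z ~ 0 (Tytler 1987)."""
    return 0.5 * ((1.0 + delta_z) ** 2 - 1.0)

def cddf_normalisation(m_tot, beta, log_n_min=13.2, log_n_max=15.5):
    """Normalisation C_HI of the power-law CDDF f(N) = C_HI * N**(-beta),
    chosen such that the CDDF integrates to m_tot absorbers between
    N_min and N_max."""
    n_min, n_max = 10.0 ** log_n_min, 10.0 ** log_n_max
    return (m_tot * (1.0 - beta)
            / (n_max ** (1.0 - beta)
               * (1.0 - (n_min / n_max) ** (1.0 - beta))))
```

By construction, integrating $C_{\mathrm{H\,I}}\,N^{-\beta}$ over the chosen column-density interval recovers the total number of absorbers $m_{\mathrm{tot}}$.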
The column density distributions for our sample of Ly$\alpha$ absorbers
are shown in Fig.~\ref{CDDF}. The CDDFs were fitted for Ly$\alpha$ components
with log $N$(H\,{\sc i}) $\geq 13.2$ (maximum log\,$N$(H\,{\sc i}) = 15.5).
For the full filament sightline sample, we derive $\beta = 1.63 \pm 0.12$,
while for the sub-sample of absorbers within 1000~km\,s$^{-1}$ of the filament
velocity we obtain $\beta = 1.47 \pm 0.24$.
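The slopes above are quoted without the details of the fitting procedure; as a purely hypothetical cross-check (not the method used in this work), a simple maximum-likelihood estimator for a power-law CDDF reads:

```python
import numpy as np

def fit_beta_mle(log_n, log_n_min=13.2):
    """Maximum-likelihood slope of f(N) ~ N**(-beta) for N >= N_min.

    Hypothetical cross-check (the fit method is not specified in the text);
    it ignores the upper column-density cutoff, a good approximation for
    steep slopes: beta_hat = 1 + m / sum(ln(N_i / N_min)).
    """
    log_n = np.asarray(log_n, dtype=float)
    log_n = log_n[log_n >= log_n_min]
    ln_ratio = np.log(10.0) * (log_n - log_n_min)
    return 1.0 + log_n.size / ln_ratio.sum()
```

Applied to a sample drawn from a power law with $\beta = 1.65$, the estimator recovers the input slope to within its statistical uncertainty.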
Using high-resolution STIS data, \citet{Lehner07} derived
$\beta = 1.76 \pm 0.06$ for their sample of narrow
absorbers (b $\leq$ 40~km\,s$^{-1}$, 110 absorbers) and $\beta = 1.84 \pm 0.06$
for b $\leq$ 150~km\,s$^{-1}$ (140 absorbers), thus slightly steeper slopes
than the distributions found here. Note that we do not split our sample based on
$b$-values, as the fraction of absorbers with $b > 40$~km\,s$^{-1}$
is small ($\leq 5\,\%$).
A comparison with other recent studies can be made, for example with the \citet{Danforth16}
COS study of the low-redshift IGM, which yields $\beta = 1.65 \pm 0.02$ for a
redshift-limited sub-sample of 2256 absorbers. Other results are:
$\beta = 1.73 \pm 0.04$ from \citet{Danforth08} (650 absorbers from COS),
$\beta = 1.65 \pm 0.07$ by \citet{Penton00} (187 absorbers from GHRS
and STIS), and $\beta = 1.68 \pm 0.03$ by \citet{Tilton12} (746 absorbers
from STIS). Most of these values are consistent with $\beta = 1.60 - 1.70$,
whereas higher values ($\beta > 1.70$) may indicate a redshift
evolution of the slope between $z = 0.0 - 0.4$. Such an evolution was
discussed in \citet{Danforth16}, who find a steepening of the slope with
decreasing $z$ in this redshift-range.
For our study, with $z \leq 0.0223$, the slope should be close to the
value valid for the universe at $z = 0$.
Furthermore, the spectral resolution may play a role in the determination of $\beta$.
For instance, the spectral resolution of COS ($R\approx 20,000$) is
substantially lower than the resolution
of the STIS spectrograph ($R\approx 45,000$) used by \citet{Lehner07},
so that some of our Ly$\alpha$ absorption components might be composed
of several (unresolved) sub-components with lower column densities.
The limited S/N in the COS data additionally hampers the detection of
weak Ly$\alpha$ satellite components in the wings of stronger
absorbers \citep[see also][their Fig.\,1]{Richter06}.
Compared with this series of results, the shallower CDDF ($\beta = 1.47 \pm 0.24$)
for the sub-sample of velocity-selected absorbers within 1000~km\,s$^{-1}$
of the filaments stands out.
agreement within its error range with the canonical value of $\beta = 1.65$,
it may hint at a larger relative fraction of high-column density systems
in the filaments, reflecting the spatial concentration of galaxies and
the general matter overdensity in these structures. A larger sample
of Ly$\alpha$ absorbers associated with filaments would be required to
further investigate this interesting aspect on a statistically
secure basis.
Like previous studies, our sample offers an opportunity to study
the number of Ly$\alpha$ absorbers per unit redshift ($d\mathcal{N}/dz$).
Table~\ref{table:linedens} gives the Ly$\alpha$ line density for
the entire sample, as well as for several subsamples. Those subsamples
separate the sample into different column-density bins, allowing us to
directly compare the results to the high-resolution \citet{Lehner07} absorber
sample and other studies.
For the full absorber sample (including filament and non-filament related
sightlines), the Ly$\alpha$ line density is $116\pm5$ lines
per unit redshift, but only for the filament-related subsample do we have
column-density information available (see Sect.\,3.2).
Taking this subsample in the ranges $13.2\lesssim$
log$N$(H\,{\sc i}) $\lesssim 14.0$ and $13.2\lesssim$
log$N$(H\,{\sc i}) $\lesssim 16.5$, we derive Ly$\alpha$ line densities
of $88\pm 8$ and $98\pm 8$, respectively. These values are in
good agreement with those reported by \citet{Lehner07}, who derive
number densities of $80\pm6$ and $96\pm7$ for the same
column-density ranges.
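The line densities and uncertainties in Table~\ref{table:linedens} follow from the line counts and redshift paths; a minimal sketch, assuming Poisson errors (as adopted for the CDDF):

```python
import math

def line_density(n_lines, delta_z):
    """Lya line density dN/dz with a Poisson (sqrt-N) uncertainty."""
    return n_lines / delta_z, math.sqrt(n_lines) / delta_z
```

For instance, 579 lines over a redshift path of $\Delta z \approx 5.0$ give $\sim 116 \pm 5$, consistent with the full-sample entry in the table.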
Our results can also be compared with those obtained from the much larger
COS absorber sample presented in \citet{Danforth16}. They derive a
relation for $d\mathcal{N}/dz$ in the form
$d\mathcal{N} (>N)/dz \approx 25\, (N/10^{14}$~cm$^{-2})^{-0.65}$.
For absorbers with log$N$(H\,{\sc i})$\geq 13.2$, this leads to
$d\mathcal{N}/dz \sim 83$, slightly lower than the values derived
by us and \citet{Lehner07}, but still in fair agreement.
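The \citet{Danforth16} scaling quoted above can be evaluated directly (a sketch; the function name is ours):

```python
def dn_dz_danforth(log_n_hi):
    """Cumulative line density dN(>N)/dz from the Danforth et al. (2016)
    fit quoted in the text: dN(>N)/dz ~ 25 * (N / 1e14 cm^-2)**(-0.65)."""
    return 25.0 * 10.0 ** (-0.65 * (log_n_hi - 14.0))
```

For log\,$N$(H\,{\sc i}$) = 13.2$ this returns $\approx 83$, the value used in the comparison above.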
If we take the velocity-selected absorber sample, which potentially traces
the Ly$\alpha$ gas associated with the filaments, we obtain a significantly
higher line density of $d\mathcal{N}/dz=189\pm25$.
The redshift pathlength for the velocity-selected absorber sample was
calculated for each sightline by considering a velocity range of
$\pm 1000$ km\,s$^{-1}$ around the center-velocity
for the filament segment that was closest to that sightline.
The value of $189\pm25$ for the velocity-selected filament sample is $93$ percent
higher than the value derived for the total filament-absorber sample (along the
same lines of sight).
This line overdensity of the Ly$\alpha$ forest kinematically associated with filaments
obviously reflects the matter overdensity of baryons in the potential wells
of these large-scale cosmological structures.
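The quoted uncertainties are consistent with simple Poisson errors $\sqrt{\mathcal{N}}/\Delta z$; the pathlength and line-density arithmetic can be sketched as follows (the non-relativistic approximation $\Delta z = 2\Delta v/c$ per sightline is assumed here for illustration):

```python
C_KMS = 299_792.458  # speed of light in km/s

def redshift_path(delta_v_kms):
    """Redshift interval covered by a +/- delta_v velocity window."""
    return 2.0 * delta_v_kms / C_KMS

def line_density(n_lines, dz_total):
    """dN/dz and its Poisson uncertainty sqrt(N)/dz."""
    return n_lines / dz_total, n_lines ** 0.5 / dz_total

# Each sightline with a +/-1000 km/s window contributes dz ~ 0.0067
# to the total redshift pathlength of the velocity-selected sample.
```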
\begin{table}
\caption{Ly$\alpha$ line density for the full sample (filament and
non-filament related sightlines), for filament-related sightlines, and for
the velocity-selected absorber sample ($\Delta v < 1000$~km\,s$^{-1}$). }
\label{table:linedens}
\centering
\begin{tabular}{l r r }
\hline\hline
log$N$(H\,{\sc i}) & $\mathcal{N}$ & $d\mathcal{N}/dz$\\
\hline
Full sample & & \\
& 579 & $116\pm 5$ \\ \hline
Filament sample & & \\
$12.0 - 16.5$ & 215 & $144\pm 11$ \\
$13.2 - 14.0$ & 132 & $88\pm 8$ \\
$13.2 - 16.5$ & 147 & $98\pm 8$ \\ \hline
Velocity-selected filament sample & &\\
$12.0 - 16.5$ & 74 & $233\pm 27$ \\
$13.2 - 14.0$ & 46 & $145\pm 22$ \\
$13.2 - 16.5$ & 60 & $189\pm 25$ \\
\hline
\end{tabular}
\end{table}
For the sake of completeness, we also show in Fig.~\ref{SiIIIHI} the
relation between log$N$(Si\,{\sc iii}) and log$N$(H\,{\sc i}) for absorbers
in our sample for which both species are detected. Si\,{\sc iii} can be
measured for only a small fraction (8.4\,\%) of the Ly$\alpha$ components,
partly because the velocity-shifted Si\,{\sc iii} line falls in the
range of Galactic Ly$\alpha$ absorption. Generally,
log$N$(Si\,{\sc iii}) increases with log$N$(H\,{\sc i}), as expected
from other Si\,{\sc iii} surveys in the IGM and CGM
\citep[e.g.][]{Richter16}, but the scatter is substantial. The small
number of Si\,{\sc iii}/H\,{\sc i} absorbers in our sample does not
allow us to draw any meaningful conclusions on the metal content
of the absorbers in relation to their large-scale environment.
\section{Discussion on Ly$\alpha$ absorbers and their environment}
In their study, \citet{Prochaska11} correlated galaxies and Ly$\alpha$
absorbers {\rm at $z=0.06-0.57$} and found that for weak absorbers (13
$\leq$ log$N$(H\,{\sc i}) $\leq$ 14) less than 20\,\% of the systems were
associated with a known galaxy, while for strong absorbers
(log$N$(H\,{\sc i}) $\geq$ 15), this fraction was 80\,\%. The criteria
they used for associating a galaxy with an absorber were the following:
i) a velocity difference between absorber and galaxy of
$\leq$ 400~km\,s$^{-1}$, and ii) an impact parameter of $\rho
\leq$ 300 kpc. Using the same criteria, we derive for our sample that
10\,\% (40\,\%) of the low (high) column density absorbers are associated
with a galaxy in the V8k galaxy sample.
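These association criteria amount to a simple two-parameter cut, sketched here (thresholds are the \citet{Prochaska11} values quoted above):

```python
def is_associated(delta_v_kms, impact_kpc,
                  max_dv=400.0, max_rho=300.0):
    """Absorber-galaxy association criteria: velocity difference
    <= 400 km/s and impact parameter <= 300 kpc."""
    return abs(delta_v_kms) <= max_dv and impact_kpc <= max_rho
```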
Therefore, and in agreement with \citet{Prochaska11}, we find that
high column density Ly$\alpha$ absorbers are four times more often
associated with a galaxy than low column density absorbers, but the
overall fraction of absorbers for which an associated galaxy is found
is only half of that in the \citet{Prochaska11} sample. This can be
attributed to the fact that the V8k catalogue is incomplete
for $M_B<-16$ and $v>1000$ km\,s$^{-1}$, while the
\citet{Prochaska11} galaxy sample is complete up to at least
$z = 0.1$ for galaxies with $L < 0.1 L_{*}$.
By comparing their observed covering fractions with a filament model,
\citet{Prochaska11} conclude that {\it all} Ly$\alpha$ absorbers are
associated with either a galaxy or a filament. This view is debated
by \citet{Tejos12}, however, who argue that there is an additional
population of `random' Ly$\alpha$ absorbers that reside in the
underdense large-scale structure (voids).
The idea of Ly$\alpha$ absorbers belonging to different populations
(and thus different environments) was proposed more than 25 years ago
by \citet{Morris93}. By analysing Ly$\alpha$ absorbers in a single
sightline and comparing the location of the absorbers with locations
of galaxies, these authors found that the absorbers do not cluster
around galaxies as strongly as galaxies cluster among themselves.
On the other hand, they also found a trend for the absorbers to
cluster around galaxies. From this, they concluded that there could
be two populations of Ly$\alpha$ absorbers: one that is associated
with galaxies and one that is more or less randomly distributed.
To test whether the Ly$\alpha$ absorbers in our sample resemble a
`random population', we generated two artificial populations of
Ly$\alpha$ absorbers, both with random sky positions, random
absorption velocities within the assumed $v_{\rm fil} \pm 1000$~km\,s$^{-1}$
velocity range of a filament, and random H\,{\sc i} column densities
weighted by the H\,{\sc i} CDDF.
For one population, we restricted the sample
from \citet{Lehner07} (hereafter abbreviated with L07)
to the redshift range spanned by the filaments in our study
and used the slope of their CDDF (resulting in 39 absorbers,
$\beta=1.76$). For the other population, we used our own absorber sample
and slope (74 absorbers, $\beta=1.47$).
The normalisation constant and absorber-path length were calculated
using the relations given above. All absorbers are assumed to be
at a distance of $\leq 5$ Mpc from the nearest galaxy belonging to
a filament, which was also our original criterion to select absorbers
inside a filament. The fraction of the simulated absorbers in each
filament was adjusted to match the real fractions found in this study.
Because the dark blue and purple filaments have an overlap on the sky,
their randomised simulated counterparts were generated for both
filaments combined.
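A minimal sketch of how the column densities of such a randomised population can be drawn, via inverse-transform sampling of a power-law CDDF $f(N)\propto N^{-\beta}$ between assumed column-density limits (function name, limits, and seeding are illustrative; sky positions and velocities are drawn separately):

```python
import math
import random

def sample_log_column(beta, log_n_min=12.0, log_n_max=16.5,
                      rng=random.Random(1)):
    """Draw log10 N(HI) from f(N) dN ~ N^-beta dN by inverting the
    cumulative integral between the given limits (valid for beta != 1)."""
    a = 1.0 - beta
    n_min, n_max = 10.0 ** log_n_min, 10.0 ** log_n_max
    u = rng.random()
    n = (n_min ** a + u * (n_max ** a - n_min ** a)) ** (1.0 / a)
    return math.log10(n)
```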
Figure~\ref{randomrhog} shows a comparison of how column densities
for the three different Ly$\alpha$ absorber samples (observed sample,
random sample with own statistics, random sample with L07 statistics)
depend on $\rho_{\rm gal}$. As in Sect.~\ref{Lyagals}, $\rho_{\rm gal}$ was
calculated for the nearest galaxy on the sky within a velocity interval
of 400~km\,s$^{-1}$ from the absorber. Clearly, the measured absorbers
cluster more strongly around galaxies than both random samples.
This indicates that at least some of the absorbers are associated with
galaxies, as expected from previous studies
\citep[e.g.][]{Morris06,Prochaska11,Tejos14,French17}.
A very rough estimate of the fraction of absorbers associated with a
galaxy can be made by comparing the fraction of absorbers within 1.5 Mpc
of a galaxy in different samples. For the measured absorbers 82\,\% of
the absorbers have $\rho_{\rm gal} \leq 1.5$~Mpc, while the fraction for
the randomised sample drawn from our own distribution is 53\,\% and
for the L07 random sample it is 46\,\%. In conclusion, about a third
of our absorbers cannot be explained by a random population
and might be connected to a nearby galaxy.
\begin{figure}[tp]
\centering
\includegraphics[width=\hsize]{random_rhog.pdf}
\caption{
H\,{\sc i} column density versus $\rho_{\rm gal}$ for Ly$\alpha$ absorbers,
i) as measured in the COS data (upper panel), ii) for a randomised sample
following the CDDF in this work (middle panel), and iii) for a randomised
sample following the number statistics and CDDF from \citet{Lehner07}
(lower panel).
}
\label{randomrhog}
\end{figure}
\begin{figure}[tp]
\centering
\includegraphics[width=\hsize]{random_rhof.pdf}
\caption{
Same as Fig.~\ref{randomrhog}, but now for $\rho_{\rm fil}$.
}
\label{randomrhof}
\end{figure}
When comparing the distance of the Ly$\alpha$ absorbers to the nearest
filament axis, as shown in Fig.~\ref{randomrhof}, a similar, albeit less
pronounced trend can be seen. Measured absorbers are generally found
closer to the filament axis than a random distribution shows.
One may argue that this could be a selection effect, e.g., from targeting
particularly interesting areas such as the Virgo Cluster, which would
result in more sightlines near the Virgo filament.
However, Fig.~\ref{EWrhof} shows the breakdown of absorbers into
different filaments, demonstrating that the absorbers belonging to
the Virgo Cluster filament (green) are in fact more spread out than
absorbers in other regions, which argues against such a selection effect.
To further investigate the possible clustering signal of Ly$\alpha$
absorbers near the filament axis, we have plotted in Figure~\ref{cdistrf}
the cumulative distribution function for $\rho_{\rm fil}$ for the
three previously mentioned absorber samples
(observed sample, random sample with own statistics, random sample with
L07 statistics) as well as the galaxies that constitute the filament.
We have also added another absorber test sample (D16)
generated from the Ly$\alpha$ column density distribution of absorbers
reported in \citet{Danforth16}.
The cumulative distribution of the galaxies sets the reference point, as
these {\it define} the filament. As can be seen, the observed
distribution of absorbers clusters more strongly around the
filament axis than the three random absorber test distributions, but not as
strongly as the galaxies. Within the inner 2 Mpc, in particular, the
fraction of measured Ly$\alpha$ absorbers rises faster than that of the
synthetic absorbers in the randomised samples.
The cumulative distribution function as shown in Fig.~\ref{cdistrf}
can be compared with one for absorbers associated with galaxies.
Fig.~3 in \citet{Penton02} shows this function for 46 Ly$\alpha$
absorbers and subsamples thereof.
Their full sample follows a distribution similar to our absorbers, with $\sim$60\,\%
found within 2~Mpc of the nearest galaxy \citep{Penton02} or filament (this study).
Both studies show the galaxies more strongly clustered than the Ly$\alpha$ absorbers.
\citet{Stocke13} compared their absorber-galaxy cumulative distribution function
with a random distribution and concluded that absorbers are associated with
galaxies in a more general way, i.e., tracing the large-scale structure
instead of individual galaxies. \citet{Penton02,Stocke13,Keeney18} all
conclude that high-column density absorbers are more strongly correlated with
galaxies than those with lower column densities.
This is in agreement with what is found here: Ly$\alpha$ absorbers
do not trace the same distribution as galaxies, but they are not randomly
distributed around filaments either.
\begin{figure}[tp]
\centering
\includegraphics[width=\hsize]{cdistrf.pdf}
\caption{
Cumulative distribution function for $\rho_{\rm fil}$ for three different
absorber samples. The measured absorbers in the COS data are indicated
in blue, the random sample with own statistics is plotted in dashed
green, the random sample with the L07 statistics is indicated in
dotted black, and the random sample from \citet{Danforth16}
is added in dashed magenta (D16).
The distribution for the galaxies is shown in red.
}
\label{cdistrf}
\end{figure}
\begin{table}[bp]
\caption{Kolmogorov-Smirnov test for impact-parameter distributions}
\label{table:KStest}
\centering
\begin{tabular}{l l l }
\hline\hline
Sample compared & KS statistic & $p$-value \\
\hline
Random, this work & 0.28 & 0.003 \\
Random, L07 & 0.31 & 0.011 \\
Random, D16 & 0.32 & 0.013 \\
V8k galaxies & 0.26 & 0.032 \\
\hline
\end{tabular}
\end{table}
A Kolmogorov-Smirnov (KS) test confirms that the absorbers found in the COS
data are not drawn from a random distribution. Table~\ref{table:KStest} lists
the KS values and $p$-values for the different samples.
A KS test compares two samples to evaluate whether
they are drawn from the same parent distribution. The KS statistic
(maximum of 1) measures the largest difference between the two
cumulative distributions, so a high value indicates that the samples
are unlikely to stem from the same parent distribution. The $p$-value
indicates the significance with which the
null hypothesis can be rejected. In this case, the null hypothesis is
that $\rho_{\rm fil}$ for the absorbers measured in this study
follows the same distribution as that of a random sample, or of
the galaxies in the filament. A $p$-value of
$<0.05$ means the null hypothesis can be rejected with a probability of
$>95$\,\%. In all three comparisons between the measurements
and the random absorber samples (the randomised sample based on our own
statistics, and the randomised L07 and D16 samples), the KS test indicates
that they do not follow the same distribution.
The low $p$-value that we obtain from comparing the COS absorber sample
with the V8k galaxy sample in the filaments further indicates that
the Ly$\alpha$ absorbers are not drawn from the same distribution as
the galaxies.
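For reference, the two-sample KS statistic quoted in Table~\ref{table:KStest} is simply the maximum distance between the two empirical cumulative distribution functions; a self-contained sketch:

```python
def ks_statistic(sample1, sample2):
    """Two-sample Kolmogorov-Smirnov statistic: the maximum absolute
    difference between the two empirical CDFs."""
    s1, s2 = sorted(sample1), sorted(sample2)
    n1, n2 = len(s1), len(s2)
    i = j = 0
    d = 0.0
    while i < n1 and j < n2:
        x = min(s1[i], s2[j])
        # advance both ECDFs past the current data value x
        while i < n1 and s1[i] <= x:
            i += 1
        while j < n2 and s2[j] <= x:
            j += 1
        d = max(d, abs(i / n1 - j / n2))
    return d
```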
Evaluating the hypothesis that there are two populations of Ly$\alpha$
absorbers \citep[e.g.][]{Morris93, Tejos12}, we have removed from
the sample (as a test) those absorbers that we have associated with
galaxies. This has, however, no significant effect on the
cumulative distribution function for $\rho_{\rm fil}$.
In conclusion, the cumulative distribution functions for $\rho_{\rm fil}$
show that galaxies are more strongly clustered in the filaments than
the Ly$\alpha$ absorbers that belong to the same cosmological
structures. Ly$\alpha$ absorbers do not follow a random distribution
and neither do they follow the same distribution as the galaxies that
constitute large-scale filaments. There might be two (or more) separate
populations of Ly$\alpha$ absorbers in filaments, but (from our
study) there is no evidence that the Ly$\alpha$ absorbers that are
{\it not} directly associated with large galaxies are randomly
distributed in the field of the filament.
Deeper insights into these aspects (including other important cosmological
issues such as overdensity bias-factors and how they affect the absorber/galaxy/filament
statistics) are highly desirable, but will require a much larger observational
data set in combination with numerical cosmological simulations.
\section{Summary and concluding remarks}
In this study, we have combined galaxy data of more than 30,000 nearby
galaxies from the V8k catalogue \citep{Courtois13} with HST/COS UV
spectral data of 302 distant AGN to investigate the relation between
intervening H\,{\sc i} Ly$\alpha$ absorbers and five nearby
cosmological structures (galaxy filaments) at $z\approx0$
($v<6700$ km\,s$^{-1}$).\\
\\
(1) All in all, we identify 587 intervening Ly$\alpha$ absorbers
along the 302 COS sightlines in the wavelength range between
$1220$ and $1243$\,\AA. For the 91 sightlines that pass the
immediate environment of the examined galaxy filaments
we analysed in detail 215 (229) Ly$\alpha$ absorption
systems (components) and derived column densities and $b$-values
for H\,{\sc i} (and associated metals, if available). \\
\\
(2) For the individual galaxies in our sample, we have calculated the
virial radii from their luminosities and the galaxy impact parameters,
$\rho_{\rm gal}$, to the COS sightlines. We assume $29$ Ly$\alpha$
absorbers to be directly associated with galaxies, as they are
located within 1.5 virial radii of their host galaxies and within
400 km\,s$^{-1}$ of the galaxies' recession velocity.\\
\\
(3) We characterise the geometry of the galaxy filament by considering
the galaxy distribution in individual segments of the filaments.
In this way, we define for each filament a geometrical axis that
we use as reference for defining the filament impact parameters,
$\rho_{\rm fil}$, for those Ly$\alpha$ absorbers that are located
within 1000 km\,s$^{-1}$ of the filament.\\
\\
(4) We find that the absorption velocities of the Ly$\alpha$
absorbers reflect the large-scale velocity pattern of the four
galaxy filaments, for which sufficient absorption-line data are
available. 74 absorbers are aligned in position and velocity
with the galaxy filaments, indicating that these absorbers and the
galaxies trace the same large-scale structure.\\
\\
(5) If we relate the measured Ly$\alpha$ equivalent widths
(or H\,{\sc i} column densities) with the galaxy and filament
impact parameters, we find that the strongest absorbers
(equivalent widths $W_{\lambda} >500$ m\AA) are preferentially located
in the vicinity of individual galaxies (within 3 virial radii)
and/or in the vicinity of the filament axes (within 5 Mpc).
The observed relations between $W$ and $\rho_{\rm gal}$/$\rho_{\rm fil}$
exhibit substantial scatter, however, disfavouring a simple
equivalent width/impact parameter anti-correlation.\\
\\
(6) We find that the measured H\,{\sc i} components
follow a column-density distribution function with a slope
of $-\beta=-1.63\pm0.12$, a value that is typical for
the low-redshift Ly$\alpha$ forest. Only for the
sub-sample of absorbers within 1000~km\,s$^{-1}$ of the filament
velocity do we obtain a shallower CDDF with $\beta = 1.47 \pm 0.24$,
possibly indicating an excess of high column-density absorbers
in galaxy filaments when compared to the overall Ly$\alpha$ forest.\\
\\
(7) The Ly$\alpha$ absorbers that lie within 1000~km\,s$^{-1}$
of the nearest filament have a $\sim 90$ percent higher rate of
incidence ($d\mathcal{N}/dz=189\pm25$ for log $N$(H\,{\sc i}) $\geq 13.2$)
than the general Ly$\alpha$ absorber population in our sample
($d\mathcal{N}/dz=98\pm8$ for log $N$(H\,{\sc i}) $\geq 13.2$).
This higher number density of Ly$\alpha$ absorbers per unit redshift
most likely reflects the filaments' general matter overdensity.\\
\\
(8) We compare the filament impact-parameter distributions of
the galaxies, measured Ly$\alpha$ absorbers, and a (synthetic)
Ly$\alpha$ absorber sample with randomised locations on the sky
with each other. We find that
the galaxies are most strongly clustered around the filament
axes, while the spatial clustering of the observed Ly$\alpha$
absorbers around the filament axes is evident, but less
pronounced. Using a KS test, we confirm
that the Ly$\alpha$ absorbers neither follow the
impact-parameter distribution of the galaxies, nor do they follow
a random distribution, but represent an individual, spatially
confined sample of objects.\\
\\
Taken together, the results of our study underline that the
relation between intervening Ly$\alpha$ absorbers, large-scale
cosmological filaments, and individual galaxies (that constitute
the filaments) in the local universe is complex and manifold,
and difficult to reconstruct with existing data.
This complexity is not surprising, of course, if we recall what
Ly$\alpha$ absorbers actually are: they are objects that trace
local gas overdensities in an extremely extended, diffuse medium
that is gravitationally confined in hierarchically structured
potential wells, and stirred up by large-scale matter flows and
galaxy feedback. In this picture, the spatial distribution
of Ly$\alpha$ absorbers in cosmological filaments is governed
by both the distribution of individual sinks in the large-scale
gravitational potential energy distribution (i.e., galaxies,
galaxy groups etc.) and more (or less) stochastically distributed
density fluctuations at larger scales that reflect the internal
dynamics of the IGM.
For the future, we are planning to extend our study of the relation
between intervening absorbers and cosmological filaments in the
local universe by using a larger (and deeper) galaxy sample and
additional HST/COS spectra, in combination with constrained
magneto-hydrodynamic cosmological simulations of nearby
cosmological structures.
\begin{acknowledgements}
The authors would like to thank the referee for his valuable
comments and suggestions which helped to improve the manuscript.
\end{acknowledgements}
\bibliographystyle{aa}
\section{\label{sec:level1}Introduction}
Dissipative solitons are stable localized excitations of nonlinear systems that include drive and dissipation, generalizing the concept of solitary wave-like solutions (solitons) to the case of non-integrable systems \cite{akhmediev2008dissipative}. Transverse optical systems involving diffractive feedback or optical cavities provide suitable platforms for observing spatial dissipative solitons within the nonlinear optics domain \cite{ackemann2009fundamentals}. A feature that is well described by mean-field models is that optical dissipative solitons can be switched on and off by means of tightly focused addressing pulses \cite{purwins2010dissipative}. Therefore, multiple spatial peaks of the optical field can be excited locally in the system \cite{mcsloy2002computationally, vladimirov2002two}, exhibiting characteristic homoclinic snaking branches \cite{burke2007snakes}.
Schemes involving cold atomic gases in longitudinally pumped cavities or single-mirror systems have been recently investigated theoretically and experimentally \cite{ackemann2021self}, and shown to achieve light-atom self-structuring, where the relevant coupling involves optomechanical (dipole) forces \cite{Tesio2012, Labeyrie2014a, Tesio2014a, Robb2015}, electronic \cite{camara2015optical, Labeyrie2016} and magnetic transitions \cite{kresicCP, labeyrie2018magnetic, krevsic2019inversion, ackemann2021coupling}. In particular, the collective nature of optomechanical self-structuring has provided insight into several aspects of cold and ultracold atom physics such as crystallization \cite{Ritsch2013, Ostermann2016, gopalakrishnan2009emergent}, supersolidity with continuous symmetry breaking \cite{gopalakrishnan2010atom, mottl2012roton, leonard2017supersolid}, photon-mediated interactions with tunable range \cite{vaidya2018tunable}, and structural transitions between crystalline configurations \cite{li2020measuring, baio2021multiple}.
Transverse optical pattern forming dynamics involving orbital angular momentum (OAM) in the input beam was analyzed for a photorefractive medium \cite{caullet2011pattern, caullet2012vortex}, domain walls in optical parametric oscillators \cite{oppo2001characterization}, and dissipative solitons in semiconductor microcavities \cite{kheradmand2003rotating}. However, a systematic treatment of such phenomena was developed only recently in Ref.~\cite{yao2019control} for a Kerr cavity, including polarization structuring effects.
In this work, we focus on a cold atom optomechanical cavity scheme as introduced in Ref.~\cite{Baio2020}. There, it was shown that the rotational dynamics of self-organized light-atom ring-lattices is capable of sustaining robust atomic transport, induced by the input beam carrying OAM. Cavity solitons (CSs) of light and atom density were theoretically predicted with a purely optomechanical nonlinearity in Ref.~\cite{Tesio2013}. Here we show that a phase structured input can be used to engineer transport of self trapped atoms along complex trajectories, such as rotating or spiralling motion in the transverse plane, and to probe multi-soliton interactions resulting in stable soliton clusters. These considerations are also expected to apply to the dark and bright solitons in cold atoms predicted in a single-feedback-mirror (SFM) scheme \cite{baio2021multiple}.
The paper is organized as follows. In Sec.~\ref{sec:level2} we review the features of mutually self-focused light-density soliton in contrast to optomechanical CSs. In Sec.~\ref{sec:level3} we discuss the motional dynamics of CSs induced by the phase gradients and in Sec.~\ref{sec:level4} the existence of different soliton configurations depending on the cloud susceptibility. Finally, in Sec.~\ref{sec:level5}, we address the properties of CS interactions in rotating chains.
\section{\label{sec:level2}The model}
One of the mechanisms allowing the formation of spatial solitons in light-atom systems relies on a collective self-focusing effect due to modulations of the atom density. The concept of mutual self-focusing occurring in coupled light and atomic beams was introduced first by Klimontovich and Luzgin in Ref.~\cite{klimontovich1979possibility}. The total index of refraction for a two-level atomic medium is obtained from its nonlinear susceptibility as follows \cite{Labeyrie2014a}:
\begin{align}
\nu(\mathbf{r}, s) = 1 - \frac{3 \lambda^3}{8 \pi^2}\frac{\Delta}{(1+\Delta^2)}\frac{n(\mathbf{r}, s)}{1+s(\mathbf{r})},
\end{align}
where $n(\mathbf{r}, s)$ is the spatially modulated atom density, $s(\mathbf{r})$ is the atomic saturation parameter, $\Delta = 2\delta/\Gamma$ the light-atom detuning in units of half-linewidth and $\lambda$ the light wavelength. By assuming a Gibbs equilibrium state for $n(\mathbf{r}, s)$ on the integration domain $\Omega$ as follows:
\begin{align}
n_{\textrm{eq}}(\mathbf{r}) = \frac{\exp{\left[-\sigma s(\mathbf{r}) \right]}}{\int_{\Omega}\exp{\left[-\sigma s(\mathbf{r})
\right]}d\mathbf{r}},
\label{canonical}
\end{align}
where $\sigma= \hbar\delta/2k_{B}T = \hbar\Delta\Gamma/4k_{B}T$ represents an optomechanical coupling constant, one easily derives the Klimontovich-Luzgin condition for mutual self-focusing with red atomic detuning $\Delta <0$ only, namely \cite{klimontovich1979possibility}:
\begin{align}
\frac{\exp{[-\sigma s(\mathbf{r})]}}{1+s(\mathbf{r})} >1 \quad\textrm{for}\quad \sigma < 1.
\end{align}
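The Gibbs state of Eq.~(\ref{canonical}) entering this condition is simply a normalised Boltzmann weight of the saturation (dipole-potential) profile; a 1D numerical sketch (grid, spacing, and parameter values are arbitrary illustrations):

```python
import math

def gibbs_density(s_values, sigma, dx):
    """Discretised n_eq(r) = exp(-sigma*s(r)) / Int exp(-sigma*s) dr
    on a uniform 1D grid with spacing dx."""
    w = [math.exp(-sigma * s) for s in s_values]
    z = sum(w) * dx  # normalisation integral (rectangle rule)
    return [wi / z for wi in w]
```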
Wang and Saffman showed that the same condition allows for stable spatial soliton solutions in the paraxial propagation of an optical field through an atomic cloud \cite{Saffbook, wangthesis}. Assuming instead a ring cavity configuration, such as the one sketched in Fig.~\ref{cavitysol}, leads to the following nonlinear coupled model for the slowly varying envelope of the optical field $\mathcal{E}$ in the mean field and low saturation approximations \cite{lugiato1988stationary}:
\begin{align}
\partial_{t'} \mathcal{E} = -(1+i\theta)\mathcal{E} + \mathcal{A}_{\mathrm{in}}(\mathbf{r}) -2 i \mathcal{C} \Delta\, n\,\mathcal{E} + i\nabla_{\perp}^{2}\mathcal{E}, \label{ringcaveq}
\end{align}
where $t' = \kappa t$ is a dimensionless time variable (in units of the cavity lifetime $\kappa$), $\theta$ is the detuning between the pump and the closest cavity resonance, $\mathcal{A}_{\mathrm{in}}(\mathbf{r})$ a spatially dependent pump rate, and $\mathcal{C} = b_0/2\tau(1+\Delta^2)$ the cavity cooperativity parameter, describing the susceptibility strength at fixed $\Delta$. The atom density $n(\mathbf{r},t')$ obeys the Smoluchowski equation in the strong friction limit \cite{Tesio2012, Tesio2013}:
\begin{align}
\partial_{t'} n = \sigma D_{\mathbf{r}}\nabla_{\perp}\cdot\left[n\,\nabla_{\perp} |\mathcal{E}|^2\right]+D_{\mathbf{r}}\nabla_{\perp}^{2}n,
\label{smoluch2}
\end{align}
where $D_{\mathbf{r}}$ is the spatial diffusion constant \footnote{Note that this can be controlled externally by means of optical molasses beams via the momentum damping rate, assumed such that the overdamped condition is satisfied.}. Optomechanical transport is generated by the dipole potential, indicated by the gradient terms in Eq.~($\ref{smoluch2}$).
\begin{figure}
\hspace{-.45cm}\includegraphics[scale=1.3]{cavity.pdf}
\caption{Optomechanical single-mode ring cavity in a bow-tie configuration. The intra-cavity field $\mathcal{E}(\mathbf{r},t')$ is driven by a phase structured beam of amplitude $\mathcal{A}_{\mathrm{in}}(\mathbf{r})$, where $\mathbf{r} = (x,y)$ is the transverse coordinate. We assume one imperfect mirror with transmittivity $\tau$. An ensemble of overdamped laser-cooled two-level atoms of temperature $T$ and optical density at resonance $b_0$ is placed within the cavity, coupled to the optical field via a Smoluchowski equation for the atom density $n(\mathbf{r},t')$. CSs arise from the bi-stability of patterned and homogeneous states below threshold \cite{ackemann2009fundamentals, Tesio2013}. }
\label{cavitysol}
\end{figure}
Similarly to the case of mutual filamentation instabilities of light and matter beams in Ref.~\cite{saffman1998self}, the coupled system of Eqs.~(\ref{ringcaveq}) and (\ref{smoluch2}) is shown to have a modulation instability leading to optomechanical structure formation for a plane wave pump, namely $\mathcal{A}_{\mathrm{in}}(\mathbf{r}) = A_{\mathrm{in}} $, when the following minimum threshold condition is satisfied \cite{Tesio2012, Tesio2013}:
\begin{align}
I = |A_{\mathrm{in}}|^2 \geq I_0 = \frac{1}{2\mathcal{C}\Delta\sigma} .
\end{align}
The optomechanical instability typically results in the formation of a positive hexagonal phase $\mathbf{H}^{+}$ of the cavity field $\mathcal{E}(\mathbf{r},t)$ together with correlated (anticorrelated) density $n(\mathbf{r},t')$ in a $\mathbf{H}^{+(-)}$ phase for red (blue) detuning $\Delta < 0 $ ($\Delta > 0 $) \cite{Tesio2012, Baio2020}. Deviations from the effective-Kerr approximation and structural transitions are expected to arise in the strongly detuned regime (at fixed $b_0$), namely, when the cloud susceptibility $\mathcal{C}\Delta$ is lower than a critical value \cite{baio2021multiple}. Finally, as shown in Ref.~\cite{Tesio2013}, the subcritical bi-stability of hexagonal phases allows for coupled light-density dissipative solitons beyond the mutual self-focusing condition. In the rest of the paper, we focus on parameter regions where the atom density exhibits a $\mathbf{H}^{+}$ phase, since the localized structures correspond to peaks of self-trapped atoms. As shown later in Sec.~\ref{sec:level4}, this is also possible for blue-detuned atoms, where self-trapping and cooling conditions in a cavity QED system have been explored recently in Ref.~\cite{jungkind2019optomechanical}.
\section{\label{sec:level3} Soliton dynamics with structured phase profiles}
\begin{figure}
\hspace*{-.2cm}\includegraphics[scale=.21]{rot2.png}
\caption{Rotation of bright density optomechanical CSs for $\Delta < 0$, $\kappa t_{\mathrm{max}} = 300$, with OAM index $l=1$ at different radii (The rotation is counterclockwise for $l>0$.). The white dots track the peak position at different times.~(a) A localized peak of both the cavity field and atom density, initially defined at a position $\mathbf{r}_0 = (0,y_0)$ with $y_0 >y_d/4$, covers approximately a quarter of its orbit. (b) For an inner radial initial position $y_0<y_d/4$, the CS achieves a faster rotation speed (see Eq.~(\ref{driftvel})), completing a cycle within $\kappa t_{\mathrm{max}}$. Model parameters chosen as follows: $I/I_0 \approx 0.668$, $\theta = 5.1$, $\mathcal{C} \Delta = - 2.25$, $\sigma = 25$.}
\label{rot2}
\end{figure}
Rotational or spiralling motion of localized structures induced by an azimuthal phase twist was predicted for solitons supported by Bessel lattices in cubic media in Ref.~\cite{kartashov2004rotary, kartashov2005soliton}, and observed experimentally in Ref.~\cite{wang2006observation}. We start here by considering the simplest scalar phase structured input profile carrying OAM, namely:
\begin{align}
\mathcal{A}_{\mathrm{in}}(\mathbf{r}) = A_{\mathrm{in}}(r)\exp(il\phi),
\end{align}
where the amplitude $A_{\mathrm{in}}(r)$ is a radial function and $\mathbf{r} = (r,\phi)$ represents the transverse position expressed in polar coordinates. $A_{\mathrm{in}}(r)$ is assumed to be the following hyperbolic tangent ``tophat'':
\begin{equation}
A_{\textrm{in}}(r)= \frac{\sqrt{I}}{2}\left\{1-\textrm{tanh}[\xi(r-\rho_{0})]\right\},
\label{tophat}
\end{equation}
with controllable steepness $\xi$ and size $\rho_{0}$ \cite{eslami2014complex}. Note that no radial modulation is present, in contrast with Refs.~\cite{kartashov2004rotary, kartashov2005soliton}. As for the well known cases of Laguerre-Gaussian or Bessel beams, the purely azimuthal phase factor $\exp(il\phi)$, with $l\in\mathbb{Z}$, generates a nontrivial vortex structure with a phase singularity at $\mathbf{r} = 0$ \cite{yao2011orbital}. Finally, the CS is seeded as a localized pattern peak defined on top of Eq.~(\ref{tophat}).
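A numerical sketch of this structured pump, combining the tanh tophat amplitude with the azimuthal phase factor (parameter values in the test are illustrative, not those of the simulations):

```python
import cmath
import math

def input_beam(x, y, intensity, xi, rho0, l):
    """A_in(r,phi) = sqrt(I)/2 * (1 - tanh(xi*(r - rho0))) * exp(i*l*phi):
    a flat-top radial amplitude carrying an OAM phase winding of charge l."""
    r = math.hypot(x, y)
    phi = math.atan2(y, x)
    amp = 0.5 * math.sqrt(intensity) * (1.0 - math.tanh(xi * (r - rho0)))
    return amp * cmath.exp(1j * l * phi)
```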
Numerical simulations of the 2D optomechanical CS dynamics, in the model described by Eq.~(\ref{ringcaveq}), together with the Gibbs distribution for the atom density in Eq.~(\ref{canonical}), are performed by means of a split-step method, with a spatial domain size of 10 critical wavelengths \footnote{The critical wavelength $\Lambda_c = 2\pi/q_c$ for the optomechanical ring cavity model can be found in Ref.~\cite{Baio2020}.} in a $256^2$ grid and time-step $\delta t' = 10^{-3}$. The purely azimuthal rotation case is shown in Fig.~\ref{rot2} for red detuning $\Delta < 0$, where the bright self-trapped density peak is observed drifting along perfectly circular trajectories with different rotation speeds at different radii. As for the case of optical bullet holes in the purely absorptive model, the drift velocity is determined by the input phase gradient \cite{Firth1996a} and, thus, the observed rotation speed scales with the transverse radius $r$ as follows \cite{yao2019control, Baio2020}:
\begin{align}
\mathbf{v}_{\textrm{dr}}(\mathbf{r}) \propto 2l\nabla_{\perp}\phi = \frac{2l}{r} \hat{\phi},
\label{driftvel}
\end{align}
as visible from the tracked position of the maximum of the atom density peak $n_{\textrm{eq}}(\mathbf{r},t')$ in the transverse domain, measured at regular intervals of $\kappa t = 10$. Note that, in the cold-atom case, the exact proportionality factor is determined by the atomic diffusion timescales, which are shown to slow down the rotation when $D_{\mathbf{r}} \rightarrow 0$ \cite{Baio2020}.
\begin{figure}
\hspace*{-.4cm}\includegraphics[scale=.21]{spir2.png}
\caption{Spiralling trajectories of optomechanical CSs for $\alpha \neq 0$ and $\kappa t_{\mathrm{max}} = 750$. (a) Atom-density peak undergoing inward spiralling motion for $\alpha = 1.2$. (b) Total input phase $\alpha\psi(r) + \phi$ (OAM index $l=1$) together with its gradient field, which represents the soliton drift velocity. (c) Outward spiralling trajectory for $\alpha = -1.2$. (d) Corresponding phase and velocity field. Model parameters: $I/I_0 \approx 0.672$, $\theta = 5.1$, $\mathcal{C} \Delta = - 2.25$, $\sigma = 25$.}
\label{spir2}
\end{figure}
Simultaneous control of the angular and radial motion of the optomechanical CS can be achieved by means of an additional phase factor $\exp{\left[i\alpha\psi(r)\right]}$, where $\psi(r)$ is a concave function such that, e.g., for $\alpha > 0$, CSs are guided towards the center of the transverse domain, since the phase gradient field points towards $\mathbf{r}=0$. For convenience, $\psi(r)$ is chosen as in Eq.~(\ref{tophat}) with the same $\rho_0$ and a smaller steepness $\xi$. The drift velocity in this case reads:
\begin{align}
\mathbf{v}_{\textrm{dr}}(\mathbf{r}) = \alpha \partial_r \psi(r)\hat{r} + \frac{2l}{r} \hat{\phi}.
\label{driftspir}
\end{align}
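The two components of this drift velocity are easy to evaluate; the sketch below takes $\psi(r)$ as the unit-height tophat profile, as in the text. The proportionality constants and the parameter values ($\alpha$, $\xi$, $\rho_0$) are illustrative assumptions, not fitted quantities.

```python
import numpy as np

def drift_velocity(r, l=1, alpha=1.2, xi=2.0, rho0=3.0):
    """Components of Eq. (driftspir) for psi(r) = [1 - tanh(xi*(r - rho0))]/2.

    dpsi/dr = -(xi/2) * sech^2[xi*(r - rho0)], so alpha > 0 gives an
    inward (negative) radial drift and alpha < 0 an outward one."""
    dpsi_dr = -0.5 * xi / np.cosh(xi * (r - rho0)) ** 2
    v_r = alpha * dpsi_dr        # radial part
    v_phi = 2.0 * l / r          # azimuthal part, Eq. (driftvel)
    return v_r, v_phi

v_r, v_phi = drift_velocity(r=3.0)   # evaluated on the tophat edge r = rho0
```

With $\alpha>0$ the radial component is negative (inward spiralling); flipping the sign of $\alpha$ reverses it.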
The effect of the radial correction for $\alpha \neq 0$ is shown in Fig.~\ref{spir2}. The choice $\alpha > 0$ induces inward spiralling motion of the CS, until effective repulsive interactions close to the singular point $\mathbf{r} = 0$ take over, forcing the CS into a stable circular orbit, as shown by the tracked evolution of the density peak in Fig.~\ref{spir2}~(a).
The inward spiralling structure of the drift velocity $\mathbf{v}_{\textrm{dr}}(\mathbf{r})$ is represented graphically in Fig.~\ref{spir2}~(b), where the input phase and its gradient field are plotted together. The case $\alpha<0$ is shown in Fig.~\ref{spir2}~(c)-(d), where the outward spiralling soliton interacts this time with the outer boundary of the tophat, as shown by a slight change of direction in Fig.~\ref{spir2}~(c). Eventually, the soliton is trapped in a circular trajectory at the maximum radius $\rho_0$ allowed by the tophat input profile \footnote{Note that the curvature of the spiral can be controlled by the concavity of $\psi(r)$ via the parameter $\alpha$.}.
\section{\label{sec:level4} Dark light - bright atom density solitons}
As shown in Ref.~\cite{baio2021multiple} for a SFM configuration, the optomechanical instability displays structural transitions among patterned phases, and a recovery of the inversion symmetry, depending on the cloud susceptibility. The phenomenology of this mechanism is similar to cases where an external parameter is tuned, e.g., external fields \cite{krevsic2019inversion} or polarization balances \cite{Scroggie1996, aumann1997polarized}. For the present model, the linear susceptibility of the atomic cloud is encoded in the cavity cooperativity $\mathcal{C}$, introduced in Sec.~\ref{sec:level2}. Therefore, significant nonlinear behaviour beyond the effective-Kerr-medium case (for fixed values of $b_0$) is expected when $\mathcal{C}\Delta$ lies below a certain value. Treating $\mathcal{C}\Delta$ as a free parameter in our simulations, we scan the range $\mathcal{C}\Delta \in [0.25,2]$, corresponding to variations of $b_0 \in [10,100]$ for $\Delta \approx 100$ and $\tau = 0.2$ \footnote{We also assume a fixed $\sigma = 50$, providing $\Delta \approx 200$ for $T = 295\, \mu\textrm{K}$.}. Results are shown in Fig.~\ref{invsim} for different cavity detunings $\theta$, where the displacement of the steady-state atom density pattern $n_{\textrm{eq}}(\mathbf{r},\kappa t_{\textrm{max}})$ with respect to the homogeneous value $n_{\textrm{eq}} = 1$ is measured by the quantity:
\begin{align}
\langle\eta\rangle = \frac{1}{2}\left[\max_{\Omega} n_{\textrm{eq}}(\mathbf{r})+\min_{\Omega} n_{\textrm{eq}}(\mathbf{r})\right] - 1.
\label{fom}
\end{align}
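As a minimal illustration, Eq.~(\ref{fom}) can be evaluated directly on a density array. The test patterns below are synthetic stand-ins for $\mathbf{H}^{\pm}$ densities, not simulation output.

```python
import numpy as np

def eta(n_eq):
    """Figure of merit of Eq. (fom): > 0 for H+ phases, < 0 for H-."""
    return 0.5 * (n_eq.max() + n_eq.min()) - 1.0

# Synthetic patterns around the homogeneous value n_eq = 1
x = np.linspace(0.0, 2.0 * np.pi, 64)
peaks = np.maximum(np.cos(3 * x), 0.0)
n_plus = 1.0 + peaks[:, None] * peaks[None, :]   # bright peaks: H+-like
n_minus = 2.0 - n_plus                           # dark holes: H--like
```

By construction $\langle\eta\rangle$ vanishes for the homogeneous state and changes sign under inversion of the pattern about $n_{\textrm{eq}}=1$.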
As expected from the general notion of inversion symmetry, the figure of merit $\langle\eta\rangle$ used here is positive (negative) for an $\mathbf{H}^{+(-)}$ atom density phase. Starting from the $\mathbf{H}^-$ phase already known from Refs.~\cite{Tesio2012, Tesio2013}, we observe a clear change in the symmetry of the self-organized light-atom pattern, roughly around $\mathcal{C}\Delta = 0.5$. Moreover, stable $\mathbf{S}$ phases with possible defects are found in the vicinity of the point $\langle \eta \rangle = 0$. For $\mathcal{C}\Delta < 0.5$, one finds $\mathbf{H}^+$ states, extending the scenario from the SFM model of Ref.~\cite{baio2021multiple} to the ring cavity model. Interestingly, the critical $\mathcal{C}\Delta$ is independent of the cavity detuning $\theta$, further confirming the common origin of the transition as stemming from the transport character of the nonlinearity. A rigorous characterization of the supercritical stability diagram in the space $(\mathcal{C}\Delta,\theta)$ by means of a weakly nonlinear analysis will be considered in a future study.
\begin{figure}
\centering
\hspace*{-.7cm}\includegraphics[scale=.225]{transition.png}
\caption{Average displacement $\langle \eta \rangle$ across the inversion symmetry point $\langle \eta \rangle = 0$ of the atom density $n_{\textrm{eq}}(\mathbf{r})$ from the homogeneous value $n_{\textrm{eq}} = 1$ for $\theta = -1, -0.5, -0.5$ at fixed $\sigma = 50$. Data are measured from the steady-state atom density according to Eq.~(\ref{fom}) by spanning values of $\mathcal{C}\Delta$ for a chosen value of $\theta$. The error bars reflect local variations of the value $\langle\eta\rangle$ across the domain $\Omega$.}
\label{invsim}
\end{figure}
The stability of $\mathbf{H}^+$ atom densities for blue detuning implies the existence of CSs characterized by atomic bunches self-trapped in a dark region of the optical field. This is shown in Fig.~\ref{oscil}~(a)-(b), with the characteristic diffraction rings \cite{baio2021multiple}. Such solitons are similar to the optical and matter-wave counterparts obtained with radially symmetric potentials \cite{kartashov2005stable, baizakov2006matter, kartashov2019stable}. Interestingly, we observe here regimes of weakly damped temporal oscillations, dependent on the input pump, shown in Fig.~\ref{oscil}~(c) by tracking the time evolution of the atom density peak $\mathrm{max}_{\Omega}\, n(\mathbf{r},t')$. The presence of such an oscillatory mode, excited by perturbations to the stationary soliton profile, reveals that optomechanical CSs may also display a Hopf instability in the vicinity of the current parameter regime \cite{skryabin1999interaction, firth2002dynamical}.
\begin{figure}
\centering
\hspace*{-.25cm}\includegraphics[scale=.23]{dark1.png}
\caption{Dark light optomechanical CSs for blue light-atom detuning with plane-wave pumping. (a)-(b) Soliton spatial profiles with model parameters $I/I_0 = 0.968$, $\theta = -0.85$, $\mathcal{C}\Delta = 0.2$, $\sigma=100$. (c) Oscillating soliton behavior obtained by tracking the maximum density peak over time for pump values in the range $I/I_0 \in [0.964,0.968]$. Each curve is vertically shifted for clarity. The profiles in (a)-(b) are plotted in correspondence of an atom density peak maximum at $\kappa t\approx 40$ for the corresponding green curve.}
\label{oscil}
\end{figure}
In the rest of this section, we address the rotation of such solitons on a phase profile carrying OAM. First, a characteristic of the strong blue detuning regime considered here is that, with a finite size pump, the atom density becomes negligible in the interaction region \footnote{This means that atoms simply tend to be pushed away from the input beam.}. Thus, we introduce an additional radial confinement of the atom density, achievable by further external trapping beams. Due to the higher-order rings visible in Fig.~\ref{oscil}~(a)-(b), CSs in this regime interact strongly with the diffractive modulations of the homogeneous state below threshold. Such an effect is controlled by enlarging the domain size ($\approx35$ critical wavelengths) and smoothing the input field close to the singular point $\mathbf{r}=0$ \cite{yao2019control}. This is shown in Fig.~\ref{darksolrot2d}~(a)-(b), where the rotation velocity of the soliton is once again in agreement with the predicted value in Eq.~(\ref{driftvel}). In Sec.~\ref{sec:level5}, we focus on interacting CSs and dynamical phenomena deriving from such interactions.
\begin{figure}
\centering
\hspace*{-.3cm}\includegraphics[scale=.20]{darkrot.png}
\caption{Counterclockwise rotation of a dark light optomechanical CS and its trajectory measured within a period of $\kappa t_{\mathrm{max}} = 10^3$ and higher OAM index $l=5$. Parameters chosen as follows: $I/I_0 \approx 0.495$, $\theta = -0.43$, $\mathcal{C} \Delta = 0.16$, $\sigma = 100$. An additional radial
trap prevents atoms from accumulating in the dark regions of optical intensity.}
\label{darksolrot2d}
\end{figure}
\section{\label{sec:level5} Multi-soliton interactions}
A general treatment of interacting CSs was derived in Refs.~\cite{mcsloy2002computationally,vladimirov2002two} for the purely absorptive two-level case, based on neutral modes corresponding to the translational invariance of the localized state \cite{obukhov1990self, maggipinto2000cavity}. Such an approach yields a hierarchy of equations of motion in gradient form, including the effect of many-body interactions \cite{scroggie2002self}. Those works predict stable bound states of two CSs at a set of preferred distances, mediated by their oscillatory tails. These were experimentally observed in a sodium vapor in a SFM configuration \cite{schapers2000interaction}, where each pinning distance corresponds to an interaction potential minimum \cite{mcsloy2002computationally, vladimirov2002two}. Phase-structured input fields can be used to induce soliton-soliton collisions, resulting in the formation of localized pattern spots or bound states of the light-atom CSs. This depends strongly on the input pump, which can inhibit the self-replication induced by the overlapping rings \cite{liehr2003replication}. In the rest of this section we show that OAM-induced rotation can be used to probe multi-soliton interactions in a linear chain of CSs.
\begin{figure}
\centering
\hspace*{-.15cm}\includegraphics[scale=.21]{chain2.png}
\caption{3-CS chain rotational dynamics with OAM index $l=1$. Rotating CSs initially excited in a bound state evolve into a triangle. Snapshots of the atom density $n_{\mathrm{eq}}(\mathbf{r})$ at $\kappa t=10^3$~(a), $\kappa t=3\times 10^3$~(b), $\kappa t=3.75\times 10^3$~(c), $\kappa t=4.25\times 10^3$~(d) for $\mathcal{C}\Delta =-2.5$ and $\sigma = 25$.~(e) Stability of the rotating 3-soliton chain together with the corresponding single-CS profiles. This result suggests that the stability of CS chains increases with decreasing susceptibility $|\mathcal{C}\Delta|$. The error bars measure the observed duration of the chain collapse in units of $\kappa t$.}
\label{chain}
\end{figure}
As discussed in Sec.~\ref{sec:level3}, in the presence of a phase gradient, a CS moves away from its initial position with a drift velocity dictated by the gradient itself. Thus, rotating or spiralling chains of two or more peaks can be constructed by periodically exciting travelling CSs with a purely azimuthal or combined radial-plus-azimuthal phase such as in Eq.~(\ref{driftspir}). Chains of CSs orient themselves along the trajectories shown in Fig.~\ref{spir2}~(a)-(c). However, rotating chains are observed to be unstable in the long-term dynamics, due to the local radial variation of the drift velocity and the system's tendency to favor the $\mathbf{H}$-lattice \cite{vladimirov2002two}, as seen in Fig.~\ref{chain}~(a)-(d). The transient lifetime, during which linear CS chains are observed, is found to depend strongly on the susceptibility $\mathcal{C}\Delta$, meaning that soliton-soliton interactions also influence the stability of such states. A detailed investigation of this aspect for a 3-CS chain in the rotating case, for different values of $\mathcal{C}\Delta$, is shown in Fig.~\ref{chain}~(e). We choose the 3-CS chain because there the effects induced by the local phase gradient are enhanced. To evaluate the stability of the chain, we estimate the $\kappa t_{\textrm{max}}$ before it collapses into a triangle (see Fig.~\ref{chain}). Note that the triangular CS cluster continues to rotate at constant speed around the beam center.
The lifetime increases with $\mathcal{C}\Delta$ and is practically infinite for $\mathcal{C}\Delta > -0.75$, meaning that the chains are stable.
The origin of such behaviour can be traced back to the overlap between the rings of interacting CSs. As visible from the insets in Fig.~\ref{chain}~(e), for $\mathcal{C}\Delta > -1$, the peaks corresponding to higher-order rings of a single CS are smaller with respect to the central peak. Therefore, one expects CS interactions to lose relevance in the overall dynamics, partially explaining the results in Fig.~\ref{chain}. This can be investigated quantitatively by means of effective Hamiltonian approaches~\cite{parra2017interaction}, which could unveil the presence of configurational minima besides the triangular cell.
\section{\label{sec:level6} Concluding remarks}
We demonstrated controllable motional states of transverse optomechanical localized structures in a longitudinally pumped ring cavity, where the nonlinear medium is a cloud of laser-cooled atoms. This motion arises from phase-structured input fields carrying OAM, generating atomic transport via an optomechanical instability \cite{Baio2020}. In particular, by means of numerical studies, we addressed complex rotational and spiralling trajectories of optomechanical CSs by tuning the radial and azimuthal dependencies of the input profile. We also reported structural transitions among patterned phases with different symmetry, and rotational motion of the corresponding dark-light CSs. Finally, we explored CS interactions in rotating 3-CS linear chains, providing evidence that they play a crucial role in the stability of such bound states.
A direct extension of interest for the present work is the study of two or more optomechanical CS collisions beyond the overdamped regime, unveiling potential effects of the transient dynamics in the momentum distribution or connections to supersolid droplets in the ultracold limit \cite{PhysRevE.104.044201}. Similar transverse localized states are also found in the optomechanical coupling of discrete arrays of oscillating mirrors \cite{ruiz2020spatial, PhysRevA.93.033850}.
Finally, our results suggest the possibility of transporting self-trapped atoms by means of CS motion in arbitrary time-dependent phase profiles \cite{pedaci2006positioning, cleff2008gradient}. All such studies are of potential relevance for the realization of novel atomtronic devices \cite{amico2021atomtronic, amico2021roadmap}.
\begin{acknowledgments}
All authors acknowledge financial
support from the European Training Network ColOpt, which
is funded by the European Union (EU) Horizon 2020 program
under the Marie Skłodowska-Curie Action, Grant Agreement
No. 721465.
\end{acknowledgments}
More than 50 years ago John von Neumann traced
the first parallels between the architecture of the
computer and the brain \cite{neumann}.
Since then, computers have become an unavoidable element of modern
society, forming a computer network connected by the World Wide Web (WWW).
The WWW demonstrates continuous growth, approaching $10^{11}$
web pages spread all over the world (see e.g.
http://www.worldwidewebsize.com/). This number is becoming even
larger than the $10^{10}$ neurons in the brain. Each neuron can
be viewed as an independent processing unit connected with
about $10^4$ other neurons
by synaptic links (see e.g. \cite{izhikevich1,izhikevich2,sporns1}).
About 20\% of these links are unidirectional
\cite{felleman} and hence the brain can be viewed
as a directed network of neuron links.
At present, more and more experimental information
about neurons and their links becomes available, and
the investigation of the properties of neuronal networks
attracts active interest from many groups (see e.g.
\cite{laughlin,sporns2,sporns3,kaiser,sporns4,izhikevich3,chklov,monasson}).
The WWW is also a directed network, where
a site $j$ points to a site $i$ but not necessarily
vice versa. The classification of web sites and information retrieval
from such an enormous database as the WWW
become a formidable challenge for modern
society, where search engines like Google
are used by internet users in everyday life.
An efficient way to classify and extract the information
from WWW is based on the PageRank Algorithm (PRA),
proposed by Brin and Page in 1998 \cite{brin},
which forms the basis of the Google search engine.
The PRA is based on the construction of the Google matrix
which can be written as (see e.g. \cite{googlebook} for details):
\begin{equation}
{\bf G}=\alpha {\bf S}+(1-\alpha) {\bf E}/N \; .
\label{eq1}
\end{equation}
Here the matrix ${\bf S}$ is constructed from the adjacency matrix ${\bf A}$
of directed network links between $N$ nodes
so that $S_{ij}=A_{ij}/\sum_k A_{kj}$, and
columns containing only zero elements are replaced by $1/N$. The second term
in the r.h.s. of (\ref{eq1}) describes a finite probability $1-\alpha$
for a WWW surfer to jump at random to any node, so that the matrix elements
$E_{ij}=1$. This term, with the Google damping factor $\alpha$,
stabilizes the convergence of the PRA by
introducing a gap between the maximal eigenvalue $\lambda=1$
and the other eigenvalues $\lambda_i$. As a result
the first eigenvalue has $\lambda_1=1$ and the second one
has $|\lambda_2| \leq \alpha$.
Usually the Google search uses the value
$\alpha=0.85$ \cite{googlebook}. By construction
$\sum_i G_{ij}=1$, so that the asymmetric matrix ${\bf G}$
belongs to the class of Perron-Frobenius operators
which naturally appear in ergodic theory
\cite{sinai} and in dynamical systems
with Hamiltonian or dissipative dynamics \cite{mbrin}.
The right eigenvector at $\lambda=1$ is
the PageRank vector, with
positive elements $p_j$ and $\sum_j p_j=1$.
Ordering the nodes by decreasing
$p_j$ is used to rank the
importance of WWW nodes, as described in more detail in
\cite{googlebook}.
The PageRank can be efficiently obtained by
repeated multiplication of a random vector by ${\bf G}$,
which is of low cost since on average there are
only about ten nonzero elements in a typical row
of ${\bf G}$ for the WWW. This procedure converges
rapidly to the PageRank.
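The construction of Eq.~(\ref{eq1}) and the power-iteration PRA can be sketched as follows. This is a toy dense implementation on an invented three-node network; WWW-scale computations would instead use sparse matrix-vector products.

```python
import numpy as np

def google_matrix(A, alpha=0.85):
    """Eq. (1): G = alpha*S + (1 - alpha)/N, with A[i, j] = 1 for a link j -> i.

    Columns of A summing to zero (dangling nodes) are replaced by 1/N."""
    N = A.shape[0]
    cols = A.sum(axis=0)
    S = np.where(cols > 0, A / np.where(cols == 0, 1.0, cols), 1.0 / N)
    return alpha * S + (1.0 - alpha) / N

def pagerank(G, tol=1e-12):
    """Repeated multiplication by G converges to the lambda = 1 eigenvector."""
    p = np.full(G.shape[0], 1.0 / G.shape[0])
    while True:
        p_new = G @ p
        if np.abs(p_new - p).sum() < tol:
            return p_new
        p = p_new

# Toy directed network; node 2 is dangling (no outgoing links)
A = np.array([[0., 1., 0.],
              [1., 0., 0.],
              [1., 1., 0.]])
G = google_matrix(A)
p = pagerank(G)
```

By construction every column of ${\bf G}$ sums to unity, and the iteration converges geometrically since the damping factor bounds the subleading eigenvalues by $\alpha$.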
Fundamental investigations of the PageRank properties of the WWW
have been performed in the computer science community
(see e.g. \cite{donato,boldi,avrach1,avrach2,litvak,avrach3};
the involvement of physicists is visible, e.g. \cite{fortunato1},
but less pronounced).
It was established that the PageRank is satisfactorily characterized by
an algebraic decay $p_j \sim 1/j^\beta$, with
$j$ being the ordering index and
$\beta \approx 0.9$;
the number of nodes with PageRank $p$ scales as
$N_n \sim 1/p^\nu$, with the numerical value of the exponent
$\nu =1+1/\beta \approx 2.1$ \cite{googlebook,donato}.
It is known that such algebraic dependencies appear in various
types of scale-free networks \cite{dorogovtsev}.
The PageRank classification finds applications not only for
the WWW but also for the network of article citations
in Physical Review,
as described in \cite{redner,fortunato2}.
This shows that the approach based on the Google matrix can be
applied to very different types of networks.
In this work we construct the Google matrix ${\bf G}$ for a model
of the brain analyzed in \cite{izhikevich3}.
The properties of the spectrum and the eigenstates of ${\bf G}$
are described in the next Section II.
The results are discussed in Section III.
\section{II Numerical results}
\begin{figure}
\centerline{\epsfxsize=8.5cm\epsffile{fig1a.eps}}
\vglue -1.0cm
\centerline{\epsfxsize=8.5cm\epsffile{fig1b.eps}}
\vglue -0.3cm
\caption{Distribution of {\it ingoing} (left panels)
and {\it outgoing} (right panels) links $\kappa$:
$P_{in}$ and $P_{out}$ give the number of nodes with
$\kappa$ {\it ingoing} and {\it outgoing} links respectively.
Top panels: unweighted links; bottom panels: weighted links.
}
\label{fig1}
\end{figure}
To construct the Google matrix of the brain
we use a directed network of links between $N=10^4$
neurons \cite{izhikevich4}
generated from the brain model of \cite{izhikevich3}.
In total there are $N_l=1960108$ links in the network.
They form $N_{out}$
{\it outgoing} links and $N_{in}$ {\it ingoing} links
($N_l=N_{out}=N_{in}$),
so that there are about 200 outgoing (or ingoing) links per neuron.
These numbers include
multiple links between certain pairs of neurons;
certain neurons also have links to themselves
(there is one neuron linked only to itself).
The weighted fraction of symmetric links
is approximately $9.8$\%.
Due to existence of multiple links between the same neurons
we constructed two ${\bf G}$ matrices
based on unweighted and weighted counting of links.
In the first case all links from neuron
$j$ to neuron $i$ are counted as one link,
in the second case the weight of the link
is proportional to the number of links from $j$ to $i$.
In both cases the sum of elements in one column
is normalized to unity.
The distributions of ingoing ($P_{in}$) and outgoing
($P_{out}$) links
are shown in Fig.~\ref{fig1}.
The weighted distribution of ingoing links has
a pronounced peaked structure corresponding to the
different regions of the brain model
considered in \cite{izhikevich3}.
We note that the distribution of links is
not of scale-free type.
\begin{figure}
\centerline{
\epsfxsize=4.5cm\epsffile{fig2a.eps}
\epsfxsize=4.5cm\epsffile{fig2b.eps}
}
\vglue -0.2cm
\centerline{
\epsfxsize=4.5cm\epsffile{fig2c.eps}
\epsfxsize=4.5cm\epsffile{fig2d.eps}
}
\vglue -0.3cm
\caption{(Color online) PageRank $p_j$ for the Google matrix of the brain model
at $\alpha=0.6, 0.85, 0.9, 0.95$ and $0.99$ shown by red, magenta,
green, blue and black solid curves
(full curves from bottom to top at $\log_{10} j=0.3$).
The dotted black curve corresponds
to $\alpha=0.999$ and demonstrates the strong dependence
of the PageRank on $\alpha$ in the
vicinity of $\alpha=1$. Panels (a) and (b)
correspond to unweighted and weighted links.
For panels (a) and (b)
the values of the PAR are $\xi = 8223$ and $8314$, $6295$ and
$6040$, $5570$ and $5046$, $3283$ and $3367$, $28.4$ and
$90.0$, $1.09$ and $1.19$
for $\alpha=0.6, 0.85, 0.9, 0.95, 0.99, 0.999$
respectively. Panels (c) and (d) show the dependence of
the importance-PageRank $p^*(j)$ on $j$ for the same values
of $\alpha$ as in the top panels, respectively for
unweighted and weighted links (for $\alpha >0.6$ there is a
strong overlap of curves).
}
\label{fig2}
\end{figure}
The dependence of the PageRank on $\alpha$ is shown in Fig.~\ref{fig2}.
For $\alpha=0.999$ almost all probability $p_j$ is concentrated on
one neuron. This is the single neuron which is linked
only to itself. When $\alpha$ is decreased to $0.99$
the main part of the probability is concentrated on about 10 neurons,
which approximately corresponds to the number of peaks in
the distribution of weighted ingoing links in Fig.~\ref{fig1}
(bottom left panel). At the same time
the PageRank has a long tail at large $j$, where
the probability $p_j$ is practically homogeneous.
For $\alpha=0.6$ the peak of probability
at $1 \leq j \leq 10$ is washed out and the PageRank
becomes completely delocalized.
We note that a delocalization of the PageRank with $\alpha$
appears in the Ulam networks describing dynamical
systems with dissipation \cite{sz,ermann}.
At the same time the WWW networks
remain stable with respect to variations of $\alpha$,
as discussed in \cite{avrach3,ggs2010}.
Recently, for the studies of the procedure call network
of the Linux Kernel \cite{linux},
it was proposed to study the properties of the
importance-PageRank $p^*(j)$, which is
given by the eigenvector at $\lambda=1$
for the Google matrix constructed from the
inverted links of the original adjacency matrix.
It was argued that $p^*(j)$ can give additional
information about certain important nodes.
Our results for $p^*(j)$ are shown in panels (c,d)
of Fig.~\ref{fig2}. They show that $p^*(j)$
is practically delocalized and flat for all used values of $\alpha$.
This indicates that all nodes have practically equal importance.
The popularity-importance correlator introduced
in \cite{linux}, defined as $\kappa=N \sum_i p(i)p^*(i) -1$,
is rather small ($\kappa \approx -0.009, -0.017$
at $\alpha=0.6$ and $\kappa \approx -0.054, -0.065$ at $\alpha=0.85$
for unweighted and weighted links respectively).
This shows that there are no correlations between $p$ and $p^*$
in our neuronal network, similarly to the
Linux Kernel case.
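The correlator above reduces to a single dot product. A quick sketch with synthetic rank vectors (illustrative limiting cases, not the network data):

```python
import numpy as np

def correlator(p, p_star):
    """Popularity-importance correlator kappa = N * sum_i p(i) p*(i) - 1."""
    return len(p) * np.dot(p, p_star) - 1.0

N = 1000
flat = np.full(N, 1.0 / N)           # delocalized rank: no preferred node
delta = np.zeros(N)
delta[0] = 1.0                       # rank concentrated on a single node

kappa_flat = correlator(flat, flat)      # vanishes for flat, uncorrelated ranks
kappa_peak = correlator(delta, delta)    # N - 1 for perfectly aligned peaks
```

Small $|\kappa|$, as measured for the neuronal network, thus signals the absence of overlap between the nodes singled out by $p$ and by $p^*$.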
\begin{figure}
\centerline{
\epsfxsize=4.5cm\epsffile{fig3a.eps}
\epsfxsize=4.5cm\epsffile{fig3b.eps}
}
\vglue -1.2cm
\centerline{
\epsfxsize=4.5cm\epsffile{fig3c.eps}
\epsfxsize=4.5cm\epsffile{fig3d.eps}
}
\vglue -0.3cm
\caption{(Color online) Spectrum of eigenvalues of the Google matrix
${\bf G}$ of the
brain at $\alpha=0.99$ in the complex plane of $\lambda$ for
(a) unweighted and (b) weighted links in the neuronal network.
Panels (c) and (d) show zooms of the data of panels (a) and (b).
The color shows
the degree of localization of the eigenvectors of ${\bf G}$,
being proportional to the value of the PAR $\xi$
and changing from one (red/light gray) to the maximal value
(dark green/black).
}
\label{fig3}
\end{figure}
The spectrum $\lambda_i$ and the right eigenvectors
$\psi_i$ of the Google matrix of the
brain are defined by the equation
\begin{equation}
{\bf G} \psi_i = \lambda_i \psi_i \; .
\label{eq2}
\end{equation}
The spectrum of $\lambda$ is complex and is shown in Fig.~\ref{fig3}.
The color of the points is chosen to be proportional to the
PArticipation Ratio (PAR),
defined as $\xi = (\sum_j |\psi_i(j)|^2)^2/\sum_j|\psi_i(j)|^4$.
This quantity determines the effective number of sites
populated by an eigenstate $\psi_i$; it is often used
to characterize the localization-delocalization transition in
quantum solid-state systems with disorder (see e.g. \cite{mirlin}).
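The PAR is straightforward to compute from any eigenvector; the two limiting cases below illustrate its interpretation as an effective number of populated sites.

```python
import numpy as np

def participation_ratio(psi):
    """PAR xi = (sum_j |psi_j|^2)^2 / sum_j |psi_j|^4."""
    w = np.abs(psi) ** 2
    return w.sum() ** 2 / (w ** 2).sum()

# Limiting cases: a state on one site gives xi = 1, a uniform state gives xi = N
N = 100
xi_localized = participation_ratio(np.eye(N)[0])
xi_uniform = participation_ratio(np.full(N, 1.0 / np.sqrt(N)))
```

Note that, being a ratio, the PAR is insensitive to the normalization of $\psi_i$.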
The spectrum has eigenvalues with $|\lambda_i|$ close to unity,
so that there is no gap in the spectrum of $\lambda$ in the vicinity of
$\lambda=1$ (we remind that the second
term in the r.h.s. of (\ref{eq1}) transforms
$\lambda_i$ into $\alpha \lambda_i$, keeping only
the single $\lambda_1=1$ \cite{googlebook}).
This is different from the spectrum of
random scale-free networks, which is characterized by a large gap in the
spectrum of $\lambda$ \cite{ggs}.
\begin{figure}
\centerline{
\epsfxsize=7.5cm\epsffile{fig4.eps}
}
\vglue -0.3cm
\caption{Dependence of the density of
states $dW/d\gamma$ of ${\bf G}$ on the relaxation rate $\gamma$
for unweighted (pluses) and weighted (circles) links in the neuronal network.
}
\label{fig4}
\end{figure}
Compared to the spectra of the university WWW networks
studied in \cite{ggs2010}, the spectrum of ${\bf G}$ in Fig.~\ref{fig3}
is flatter, being significantly compressed towards the real axis.
In this respect our neuronal network has a certain similarity
with the spectra of the vocabulary networks
analyzed in \cite{ggs2010} (see Fig.~1 there).
At the same time the spectrum of the ${\bf G}$ matrix of the brain
has visible structures in the distribution of eigenvalues
in the complex plane of $\lambda$, while
the vocabulary networks are characterized by a
structureless spectrum.
The spectrum of Fig.~\ref{fig3}
has global properties similar to those of the Ulam
networks considered in \cite{sz}. It is interesting to note that
the spectra of the unweighted and weighted networks of the brain
have a similar structure. This supports the view of
structural stability of the spectrum of the ${\bf G}$ matrix.
It is useful to determine the relaxation rate
of eigenstates by the relation $\gamma= - 2 \ln |\lambda|$.
The dependence of the density of states $d W/d\gamma$ on $\gamma$
is shown in Fig.~\ref{fig4} (the density is normalized to
unity, so that $\int_0^{\infty} (d W/d \gamma)\, d \gamma =1$
corresponds to $N=10^4$ states). The distribution in $\gamma$
has a pronounced peak at $\gamma \approx 5$;
the density of states at small $\gamma<1$ is relatively small
(this is also seen in Fig.~\ref{fig3}).
The comparison of unweighted and weighted links
shows the stability of the density distribution
with respect to such a modification of the links.
\begin{figure}
\centerline{
\epsfxsize=4.5cm\epsffile{fig5a.eps}
\epsfxsize=4.5cm\epsffile{fig5b.eps}
}
\vglue -0.3cm
\caption{Dependence of
PAR $\xi$ on relaxation rate $\gamma$ at $\alpha=0.85$
for (a) unweighted and (b) weighted links in the neuronal network.
}
\label{fig5}
\end{figure}
The dependence of the PAR $\xi$ on $\gamma$ is shown in Fig.~\ref{fig5}
(we note that, except for the PageRank, $\xi$ is independent of $\alpha$
due to the unit rank of the matrix ${\bf E}$, see e.g. \cite{googlebook,sz}).
The PageRank value of $\xi$ at $\gamma=0$ is very large,
being more than half of the total number of neurons $N=10^4$.
It is clear that this corresponds to a delocalized state.
The eigenstates with $0<\gamma < 2$ have relatively small
$\xi \lesssim 10^3$, being close to a localized domain,
while eigenstates with $2 < \gamma < 10$ have $\xi > 10^3$,
being delocalized over the main part of the network;
the states with $\gamma > 10$ enter the localized domain.
For $\alpha > 0.99$ the PAR is close to $\xi \approx 1$.
Taking as a criterion that delocalization takes place when
$\xi > N/2$, we obtain that
the PageRank becomes delocalized at
$\alpha_c \approx 0.9$ (see data of Figs.~\ref{fig2},\ref{fig6}).
The global dependence of the PAR $\xi$ of the PageRank
on the parameter $\alpha$ is shown in Fig.~\ref{fig6},
with a sharp delocalization of $\xi$ for $\alpha < \alpha_c$.
Of course, the above analysis should be considered
as an approximate one,
since the localization properties should be
studied in dependence on the system size $N$,
while we consider only one size $N$.
\begin{figure}
\centerline{
\epsfxsize=4.5cm\epsffile{fig6a.eps}
\epsfxsize=4.5cm\epsffile{fig6b.eps}
}
\vglue -0.3cm
\caption{Dependence of
PAR $\xi$ of the PageRank on parameter $\alpha$
for (a) unweighted and (b) weighted links in the neuronal network.
}
\label{fig6}
\end{figure}
\begin{figure}
\centerline{
\epsfxsize=4.5cm\epsffile{fig7a.eps}
\epsfxsize=4.5cm\epsffile{fig7b.eps}
}
\vglue -0.3cm
\caption{Distribution of PageRank values $p$ and $p^*$
for all sites
for (a) unweighted and (b) weighted links in the neuronal network
at $\alpha=0.85$.
}
\label{fig7}
\end{figure}
Finally, following
the approach proposed in \cite{linux},
we show in Fig.~\ref{fig7} the distribution of the PageRank
values $p$ and $p^*$ for all sites. Such distributions
can be rather useful for determining
sites which have maximal values of $p$ and $p^*$
at the same time. However, a detailed analysis of the properties
of this distribution would require networks of a larger size $N$,
where statistical fluctuations are smaller.
\section{III Discussion}
In this work we studied the properties of the Google matrix of
a neuronal network of the brain model discussed in \cite{izhikevich3}.
For this network of $10^4$ neurons we found that the spectrum
of the Google matrix has a gapless spectrum at $\alpha=1$ demonstrating
certain similarities with the spectra of university WWW networks
and vocabulary networks studied in \cite{ggs2010}. At the same time
our neuronal network shows signs of a delocalization transition
of the PageRank
at the Google damping factor $\alpha_c \approx 0.9$
which was absent in the networks studied in \cite{avrach3,ggs2010}.
A similar transition in $\alpha$ was detected in the Ulam networks
generated by dissipative dynamical maps \cite{sz}.
We attribute the appearance of this delocalization transition
to the large number of links per neuron (200), which is
a factor of 10 larger than in the WWW networks (20).
Of course, our studies have certain limitations
since we considered only a fixed-size neuronal network
and since this network is taken from the model brain system
analysed in \cite{izhikevich3}.
Another weak point is that we do not consider
the dynamical properties of the network
which are probably more important for practical applications.
Nevertheless, the spectral properties of ${\bf G}$ matrix
can be rather useful. Indeed, the gapless spectrum of $\lambda$
shows that long-lived excitations can exist
in our neuronal network. Such relaxation modes with small
rates $\gamma$ can be the origin
of long-lived oscillations found in numerical
simulations \cite{izhikevich3}. It is quite possible that
the properties of spectra of ${\bf G}$
can help to better understand rapid relaxation processes
and those with long relaxation times.
We conjecture that the rapid relaxation modes
correspond to relaxation of local groups of neurons
while long-lived modes can represent relaxation of collective
modes representing dynamics of human thoughts.
The dynamics of such collective modes can contain
significant elements of chaotic dynamics
as it was discussed in the frame of the concept
of creating chaos in \cite{chirikov}.
It is possible that the brain effectively implements dynamics
described by the evolution equation
$d \psi /d t = {\bf G} \psi$ which
without perturbations converges to the steady-state
described by the PageRank (which may be linked with
a sleeping phase). External perturbations
give excitations of other eigenmodes of ${\bf G}$
discussed here. The evolution of these excitations will
be significantly affected by the spectrum of ${\bf G}$.
Further development of the Google matrix approach
to the brain looks to us to be rather promising.
For example, the detection of isolated communities
and personalized PageRank, represented by
other types of the matrix ${\bf E}$ in (\ref{eq1}),
are under active investigation in the computer science community
(see e.g. \cite{googlebook,avrach3}).
Such problems can find applications
in the detection of specific quasi-isolated
neuronal networks of the brain.
The usage of real neuronal networks, similar to those
studied in \cite{laughlin,sporns2,sporns3,kaiser,sporns4,monasson},
in combination with the Google matrix approach
may allow one to discover new properties of processes in the brain.
The development of parallels between the WWW and neuronal networks
will bring new progress to the ideas of John von Neumann.
We thank E.M.~Izhikevich for providing us with
the data set of links between neurons \cite{izhikevich4}
in the brain model \cite{izhikevich3}.
\section{Introduction}
Unsupervised generative modeling is at the forefront of deep learning research~\cite{Goodfellow2016}. Unlike the extremely successful discriminative tasks such as supervised classification and regression, the goal of generative modeling is to model the probability distribution of observed data and generate new samples accordingly. Generative modeling finds wide applications in computer vision~\cite{Zhu2017}, speech synthesis~\cite{Van2016}, as well as chemical design~\cite{Gomez2016}. It is also believed to be a crucial component on the path towards artificial general intelligence. However, generative modeling is more challenging than discriminative tasks since it requires one to efficiently represent, learn and sample from high-dimensional probability distributions~\cite{Goodfellow2016}.
In parallel to the rapid development of deep learning,
there is heated ongoing research to fabricate intermediate-scale quantum circuits~\cite{Preskill2018,IBM2017,*Google2018}. Quantum hardware may show greater potential than its classical counterparts in generative tasks. Multiple schemes have been proposed to boost the performance of classical generative models using quantum devices. For example, quantum Boltzmann machines~\cite{Amin2016,Khoshaman2018,Benedetti2017} generalize the energy function of classical Boltzmann machines~\cite{Ackley1985} to a quantum Hamiltonian for possibly stronger representational power and faster training.
A concern which may prevent the quantum Boltzmann machine from surpassing its classical counterpart is the limited connectivity on the actual quantum hardware~\cite{Dumoulin2014}.
\Ref{Gao2017} introduces a quantum generalization of the probabilistic graphical model~\cite{Koller2009},
which can be exponentially more powerful than its classical counterpart and has exponential speedup in training and inference at least for some instances under reasonable assumptions in computational complexity theory.
Another class of arguably simpler quantum generative models, named Born machines~\cite{Han2017, Cheng2017, Benedetti2018}, directly exploits the inherent probabilistic interpretation of quantum wavefunctions~\cite{Born1926}. Born machines represent a probability distribution using a quantum pure state instead of the thermal distribution of the Boltzmann machines~\cite{Ackley1985}. Therefore, Born machines can directly generate samples via projective measurements on the qubits, in contrast to the slow-mixing Gibbs sampling approach.
Moreover, computational complexity considerations on quantum sampling problems suggest that a quantum circuit can produce a probability distribution that is \#P-hard~\cite{Fortnow1999,Aaronson2011,Lund2017},
which is infeasible to simulate efficiently using classical algorithms. The same reasoning underlies the current efforts towards ``quantum supremacy'' experiments by sampling outcomes of random quantum circuits~\cite{Boixo2016}. \Ref{Han2017} performed a classical simulation of the Born machine using the matrix product state representation of the quantum state.
It will be even more promising to realize the Born machines using quantum circuits since one can at least efficiently prepare some of the tensor networks on a quantum computer~\cite{Huggins2018}. Recently, Ref.~\cite{Benedetti2018} demonstrated an experimental realization of the Born machine on a shallow four-qubit quantum circuit trained by gradient-free optimization of measurement histograms.
To further scale up the quantum circuit Born machine (QCBM) to a larger number of qubits and circuit depths, one needs to devise an appropriate objective function for the generative tasks without explicit reference to the model probability. Unlike the tensor network simulation~\cite{Han2017}, the QCBM belongs to the class of \emph{implicit} generative models since one does not have access to the wavefunction of an actual quantum circuit. Thus, the QCBM can be used as a simulator to generate samples without access to their likelihoods, which is similar to the notable generative adversarial networks (GAN)~\cite{Goodfellow2014,Goodfellow2016c}. Compared to generative models with explicit likelihoods such as the Boltzmann machines~\cite{Hinton2006}, normalizing flows~\cite{Oord2016,Dinh2016,Kingma2016,Rezende2015,Papamakarios2017}, and variational autoencoders~\cite{Kingma2013,Rezende2014}, the implicit generative models can be more expressive due to fewer restrictions on their network structures. On the other hand, having no direct access to the output probability also poses a challenge to the scalable training of quantum circuits.
Moreover, one also needs a better learning algorithm than the gradient-free optimization scheme~\cite{Benedetti2018}, especially given the noisy realization of current quantum circuits.
Similarly, scalability of the optimization scheme is also a crucial concern in deep learning, in which deep neural networks can even reach billions of parameters~\cite{Simonyan2014}. In the history of machine learning, gradient-free algorithms were employed to optimize small-scale neural networks~\cite{MinskyPerceptrons}. However, they failed to scale up to a larger number of parameters. It is the back-propagation algorithm~\cite{Rumelhart1986}, which efficiently computes the gradient of the neural network output with respect to the network parameters, that enables scalable training of deep neural nets. It is thus highly desirable to have scalable quantum algorithms for estimating gradients on actual quantum circuits.
Recently, gradient-based learning of quantum circuits has been devised for quantum control~\cite{Li2017} and discriminative tasks~\cite{Farhi2018,Mitarai2018}.
Although they are still less efficient compared to the back-propagation algorithm for neural networks,
these unbiased gradient algorithms can already greatly accelerate the quantum circuit learning. Unfortunately, direct application of these gradient algorithms~\cite{Li2017,Farhi2018,Mitarai2018} to QCBM training is still non-trivial since the outputs of the generative model are genuinely discrete bit strings which follow a high-dimensional probability distribution.
In fact, it is even an ongoing research topic in deep learning to perform differentiable learning of implicit generative models with discrete outputs~\cite{Goodfellow2016c, Li2017b}.
In this paper, we develop an efficient gradient-based learning algorithm to train the QCBM. In what follows, we first present a practical quantum-classical hybrid algorithm to train the quantum circuit as a generative model in \Sec{sec-model}, thus realizing a Born machine.
Then we apply the algorithm on $3\times 3$ Bars-and-Stripes and double Gaussian peaks datasets in \Sec{sec-app}.
We show that the training is robust to moderate sampling noise, and is scalable in circuit depth. Increasing the circuit depth significantly improves the representational power for generative tasks.
Finally, we conclude and discuss caveats and future research directions about the QCBM in \Sec{sec-discussion}.
\section{Model and Learning algorithm}\label{sec-model}
Given a dataset ${\mathcal{D}}=\{x\}$ containing independent and identically distributed (i.i.d.) samples from a target distribution $\pi(x)$, we set up a QCBM to generate samples close to the unknown target distribution. As shown in \Fig{fig-concept}, the QCBM takes the product state $|0\>$ as an input and evolves it to a final state $|{\psi}_{\boldsymbol{\theta}}\>$ by a sequence of unitary gates. Then we can measure this output state on the computational basis to obtain a sample of bits $x\sim p_{\boldsymbol{\theta}}(x)=|\< x|{\psi}_{\boldsymbol{\theta}}\>|^2$. The goal of the training is to let the model probability distribution $p_{\boldsymbol{\theta}}$ approach the target distribution $\mathbf{\pi}$.
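For small $n$ the projective measurement defining $p_{\boldsymbol{\theta}}(x)=|\langle x|\psi_{\boldsymbol{\theta}}\rangle|^2$ can be emulated classically; a minimal numpy sketch, where the state is a random placeholder rather than the output of a trained circuit:

```python
import numpy as np

rng = np.random.default_rng(0)

n = 3                                    # number of qubits
# placeholder wavefunction |psi>; a real QCBM prepares it by gates
psi = rng.normal(size=2**n) + 1j * rng.normal(size=2**n)
psi /= np.linalg.norm(psi)

p = np.abs(psi) ** 2                     # Born rule: p(x) = |<x|psi>|^2
samples = rng.choice(2**n, size=10000, p=p)

# bit-string form of one measured sample, e.g. integer 5 -> '101'
x = format(samples[0], f'0{n}b')
print(x, p.sum())                        # probabilities sum to 1
```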
We employ a classical-quantum hybrid feedback loop as the training strategy. The setup is similar to the Quantum Approximate Optimization Algorithm (QAOA)~\cite{Farhi2014,Farhi2017,Otterbach2017} and the Variational Quantum Eigensolver (VQE)~\cite{Peruzzo2014,OMalley2016, Kandala2017}.
By constructing the circuits and performing measurements repeatedly we collect a batch of samples from the QCBM. Then we introduce the two-sample test as a measure of the distance between the generated samples and the training set, which is used as our differentiable loss.
Using a classical optimizer which takes the gradient information of the loss function, we can push the generated sample distribution towards the target distribution.
\begin{figure}
\begin{center}
\includegraphics[width=\columnwidth,trim={0.5cm 1.cm 0.5cm 1cm},clip]{images/fig5.pdf}
\end{center}
\caption{Illustration of the differentiable QCBM training scheme.
Top left is the quantum circuit which produces bit string samples.
The dashed box on the right denotes two-sample test on the generated samples and training samples, with the loss function (\Eq{eq-mmd-loss}) and corresponding gradients (\Eq{eq-mmd-grad}) as outputs.
$\Delta{\boldsymbol{\theta}}$ is the update to be applied to the circuit parameters, computed by a classical optimizer. The outcome of the training is a quantum circuit which generates samples according to the learned probability distribution on the computational basis.}
\label{fig-concept}
\end{figure}
\subsection{Quantum Circuit Architecture Design}\label{sec:structurelearning}
The overall circuit layout is similar to the IBM variational quantum eigensolver \cite{Kandala2017}, where one interweaves single qubit rotation layers and entangler layers shown in \Fig{fig-concept}.
The rotation layers are parameterized by rotation angles ${\boldsymbol{\theta}}=\{{\theta^\alpha_l}\}$,
where the layer index $l$ runs from $0$ to $d$, with $d$ the maximum depth of the circuit.
$\alpha$ is a combination of qubit index $j$ and arbitrary rotation gate index, where the arbitrary rotation gate has the form $U(\theta_l^j)=R_z(\theta_l^{j,1})R_x(\theta_l^{j,2})R_z(\theta_l^{j,3})$ with $R_m(\theta)\equiv \exp\left(\frac{-i\theta\sigma_m}{2}\right)$.
The total number of parameters in this QCBM is $(3d+1)n$, with $n$ the number of qubits~\footnote{The number of parameters is not $3(d+1)n$ because the leading and trailing $R_z$ gates can be omitted without affecting the output probability distribution.}.
We employ CNOT gates with no learnable parameters for the entangle layers to induce correlations between qubits. In light of experimental constraints on the connectivity of the circuits, we make the connections of the entangle layers sparse by requiring their topology to be a tree (i.e. the simplest connected graph). From the classical probabilistic graphical model's perspective~\cite{Koller2009}, the tree graph that captures the information content of the dataset most efficiently is the Chow-Liu tree~\cite{Chow1968}. Since controlled unitary gates have a close relation with classical probabilistic graphical models~\cite{Low2014}, we employ the same Chow-Liu tree as the topology of the CNOT gates. To construct the Chow-Liu tree we first compute the mutual information between all pairs of bits for samples in the training set as weights, and then construct the maximum spanning tree
using, for example, Kruskal's algorithm.
The assignment of the control bit and the target bit on a bond is random, since the Chow-Liu algorithm treats directed and undirected graphs identically.
In the case where this connection structure is not directly supported by the hardware, a combination of SWAP gates and CNOT gates can be used to efficiently simulate the required structure~\cite{Zulehner2017}.
\material{
\begin{align}
\begin{split}
&T_{\rm Chow-Liu} = \argmax\limits_T\sum\limits_{(s,t)\in T}\mathbb{I}(s,t)\label{eq-log-likelihood},
\end{split}
\end{align}
where the summation over $(s,t)\in T$ iterates over all edges in a tree graph, and $\mathbb{I}(s,t)$ is the two site mutual information between two bits in dataset, which is calculated using empirical distributions.
}
The performance of entangle layers constructed in this procedure is better than that of most random connections with the same number of gates. This data-driven quantum circuit architecture design scheme respects the information content of the classical dataset and may alleviate issues of vanishing gradients for large-scale applications~\cite{Huggins2018}.
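The construction described above can be sketched in a few lines: estimate the pairwise mutual information from the training samples and run Kruskal's algorithm for the maximum spanning tree. This is a schematic stand-alone version (the three-bit dataset is a toy placeholder):

```python
import numpy as np
from itertools import combinations

def mutual_information(a, b):
    """Empirical mutual information between two binary columns."""
    mi = 0.0
    for va in (0, 1):
        for vb in (0, 1):
            pab = np.mean((a == va) & (b == vb))
            pa, pb = np.mean(a == va), np.mean(b == vb)
            if pab > 0:
                mi += pab * np.log(pab / (pa * pb))
    return mi

def chow_liu_edges(data):
    """Maximum spanning tree (Kruskal) on pairwise mutual information."""
    n = data.shape[1]
    weights = sorted(((mutual_information(data[:, i], data[:, j]), i, j)
                      for i, j in combinations(range(n), 2)), reverse=True)
    parent = list(range(n))
    def find(x):                        # union-find with path halving
        while parent[x] != x:
            parent[x] = parent[parent[x]]
            x = parent[x]
        return x
    edges = []
    for w, i, j in weights:
        ri, rj = find(i), find(j)
        if ri != rj:                    # adding the edge keeps it a tree
            parent[ri] = rj
            edges.append((i, j))
    return edges

# toy binary dataset: bit 1 copies bit 0, bit 2 is independent noise
rng = np.random.default_rng(1)
b0 = rng.integers(0, 2, size=2000)
data = np.stack([b0, b0, rng.integers(0, 2, size=2000)], axis=1)
edges = chow_liu_edges(data)
print(edges)  # the strongest edge links bits 0 and 1
```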
\subsection{Loss Function and Gradient-based Optimization}
Viewing the QCBM as an implicit generative model~\cite{Mohamed2016},
we train it by employing the kernel two-sample test~\cite{Anderson1994}.
The idea is to compare the distance in the kernel feature space between samples drawn from the target and the model distributions. We refer to the following loss function as the squared maximum mean discrepancy (MMD)~\cite{Gretton2007,Gretton2012}
\begin{align}
\begin{split}
\mathcal{L} =& \left\|\sum_{x} p_{\boldsymbol{\theta}}(x) \phi(x)- \sum_{x} \pi(x) \phi(x) \right\|^2 \\
=&\expect{K(x,y)}{{x\sim p_{\boldsymbol{\theta}}, y\sim p_{\boldsymbol{\theta}} } }-2\expect{K(x,y)}{x\sim p_{\boldsymbol{\theta}},y\sim \mathbf{\pi}}+\expect{K(x, y)}{x\sim \mathbf{\pi},y\sim \mathbf{\pi}}.
\end{split}\label{eq-mmd-loss}
\end{align}
The summation in the first line runs over the whole Hilbert space. The expectation values in the second line are for the corresponding probability distributions.
The function $\phi$ maps $x$ to a high-dimensional reproducing kernel Hilbert space~\cite{Hofmann2008}. However, as is common with the kernel trick, by defining a kernel function $K(x,y)=\phi(x)^T\phi(y)$ one can avoid working in the high-dimensional feature space. We employ a mixture of Gaussians kernel $K(x,y) = \frac{1}{c}\sum\limits_{i=1}^c\exp\left(-\frac{1}{2\sigma_i}|x-y|^2\right)$ to reveal differences between the two distributions under various scales. Here, $\sigma_i$ is the bandwidth parameter which controls the width of the Gaussian kernel.
The sample $x$ can either be a bit string (vector) or an integer (scalar) depending on the representation. When $x$ is a bit string, $|x|$ stands for the $\ell^2$-norm in the vector space.
The MMD loss with Gaussian kernels asymptotically approaches zero if and only if the output distribution matches the target distribution exactly~\cite{Gretton2007,Gretton2012}.
The same loss function was used to train the generative moment matching networks (GMMN)~\cite{Li2015,Dziugaite2015,Li2017e}.
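A direct estimate of \Eq{eq-mmd-loss} from two sets of samples can be sketched as follows; this uses the simple biased (V-statistic) form of the MMD with a mixture-of-Gaussians kernel, and the integer samples below are placeholders:

```python
import numpy as np

def mix_rbf_kernel(x, y, sigmas=(0.5, 1, 2, 4)):
    """Mixture-of-Gaussians kernel K(x, y) averaged over bandwidths;
    here x, y are 1D arrays of integer-encoded samples."""
    d2 = (x[:, None] - y[None, :]) ** 2
    return np.mean([np.exp(-d2 / (2 * s)) for s in sigmas], axis=0)

def mmd2(x, y, sigmas=(0.5, 1, 2, 4)):
    """Biased (V-statistic) estimate of the squared MMD."""
    kxx = mix_rbf_kernel(x, x, sigmas).mean()
    kyy = mix_rbf_kernel(y, y, sigmas).mean()
    kxy = mix_rbf_kernel(x, y, sigmas).mean()
    return kxx - 2 * kxy + kyy

rng = np.random.default_rng(0)
x1 = rng.integers(0, 8, size=500)   # samples from one distribution
x2 = rng.integers(0, 8, size=500)   # fresh samples, same distribution
y = rng.integers(4, 12, size=500)   # samples from a shifted distribution
print(mmd2(x1, x2), mmd2(x1, y))    # near zero vs clearly larger
```

The biased form is always non-negative; an unbiased variant would simply drop the diagonal terms of the within-set kernel averages.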
To learn the QCBM as a generative model, we compute gradient of the loss function \Eq{eq-mmd-loss} with respect to the circuit parameters
\begin{eqnarray}
\frac{\partial \mathcal{L}}{\partial {\theta^\alpha_l}} &=&\expect{K(x,y)}{x\sim p_{{\boldsymbol{\theta}}^+}, y\sim p_{\boldsymbol{\theta}}}-\expect{K(x,y)}{x\sim p_{{\boldsymbol{\theta}}^-},y\sim p_{\boldsymbol{\theta}}} \nonumber\\
&-&\expect{K(x,y)}{x\sim p_{{\boldsymbol{\theta}}^+},y\sim \mathbf{\pi}}+\expect{K(x,y)}{x\sim p_{{\boldsymbol{\theta}}^-},y\sim \mathbf{\pi}}.\label{eq-mmd-grad}
\end{eqnarray}
Here, $p_{{\boldsymbol{\theta}}^+}(x)$ and $p_{{\boldsymbol{\theta}}^-}(x)$ are output probabilities of QCBM under circuit parameters ${\boldsymbol{\theta}}^{\pm}={\boldsymbol{\theta}}\pm\frac{\pi}{2}{\mathbf{e}_l^\alpha}$, where ${\mathbf{e}_l^\alpha}$ is the $(l,\alpha)$-th unit vector in parameter space (i.e. ${\theta^\alpha_l}\leftarrow{\theta^\alpha_l}\pm\frac{\pi}{2}$, with other angles unchanged).
In contrast to finite difference methods like the simultaneous perturbation stochastic approximation (SPSA)~\cite{Spall1998}, \Eq{eq-mmd-grad} is an unbiased estimator of the exact gradient. Its detailed derivation is given in \App{sec-mmd}.
In order to estimate the gradient [\Eq{eq-mmd-grad}] on an actual quantum circuit, one can repeatedly send rotation and entangle pulses to the device according to the circuit parameters ${\boldsymbol{\theta}}^{(\pm)}$, and then perform projective measurements on the computational basis to collect binary samples $x\sim p_{{\boldsymbol{\theta}}^{(\pm)}}$. While for $x\sim\mathbf{\pi}$, one can simply take a batch of data from the training dataset~\footnote{For the small dataset employed in our numerical experiment, we can afford to average over the whole training dataset.}.
The sampling noise in the estimated gradient is controlled by the number of measurements $N$, denoted as the batch size. After one has obtained a sufficiently accurate gradient, one can use a classical optimizer to update the circuit parameters similar to the stochastic gradient descent training of deep neural nets.
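The gradient estimator rests on the parameter shift identity for rotation gates, $\partial_\theta \langle B\rangle_{\theta} = \frac{1}{2}\left(\langle B\rangle_{\theta+\pi/2}-\langle B\rangle_{\theta-\pi/2}\right)$; a single-qubit check of this identity on the toy expectation $\langle Z\rangle = \cos\theta$ (not the full MMD loss):

```python
import numpy as np

I2 = np.eye(2)
X = np.array([[0, 1], [1, 0]], dtype=complex)
Z = np.array([[1, 0], [0, -1]], dtype=complex)

def rx(theta):
    """R_x(theta) = exp(-i theta X / 2)."""
    return np.cos(theta / 2) * I2 - 1j * np.sin(theta / 2) * X

def expect_z(theta):
    """<0| R_x(theta)^dag Z R_x(theta) |0> = cos(theta)."""
    psi = rx(theta) @ np.array([1, 0], dtype=complex)
    return (psi.conj() @ Z @ psi).real

theta = 0.37
shift = 0.5 * (expect_z(theta + np.pi / 2) - expect_z(theta - np.pi / 2))
exact = -np.sin(theta)          # d cos(theta) / d theta
print(shift, exact)             # the two agree to machine precision
```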
Parameter learning of quantum circuits is adaptive in the sense that the implementation of quantum gates can even be non-ideal.
One can obtain gradients with high accuracy as long as the parametrized single qubits rotation gates are precise, which is relatively easier to achieve experimentally. The optimization scheme is independent of the detailed form of non-parametrized entangle gates.
Thus, the CNOT gate in the setup can be replaced by any gate which can generate desired quantum entanglements.
It is instructive to compare the training of the QCBM to that of classical implicit generative models such as GAN~\cite{Goodfellow2014,Goodfellow2016c} and GMMN~\cite{Li2015,Dziugaite2015,Li2017e}. Classically, one does not have access to the likelihood either. The gradient is thus obtained via the chain rule $\frac{\partial \mathcal{L}}{\partial {\boldsymbol{\theta}}}=\frac{\partial \mathcal{L}}{\partial x}\frac{\partial x}{\partial {\boldsymbol{\theta}}}$, which does not apply to discrete data.
On the other hand, the unbiased gradient estimator of the QCBM takes advantage of the known structure of the unitary evolution and the MMD loss (see \App{sec-mmd}), despite that the probability of the outcome is unknown.
In this sense, quantum circuits exhibit a clear quantum advantage over classical neural nets since they fill the gap of differentiable learning of implicit generative models of discrete data.
\section{Numerical Experiments}\label{sec-app}
We carry out numerical experiments by simulating the learning of the QCBM on a classical computer. These experiments reveal the advantages of gradient-based optimization over gradient-free optimization, and demonstrate the stronger expressibility of deep circuits over shallow ones.
The code can be found at the GitHub repository~\cite{github}.
\subsection{Bars-and-Stripes Dataset}
We first train a QCBM on the Bars-and-Stripes dataset~\cite{Mackay2003,Han2017,Benedetti2018}, which is a prototypical image dataset consisting of vertical bars and horizontal stripes. On a $3\times 3$ grid, the dataset contains 14 valid configurations. We model the pixels with a quantum circuit of $9$ qubits.
The Chow-Liu tree for this dataset is shown in \Fig{fig-chowliu-tree} (a).
Bonds are either row-wise or column-wise since correlations of pixels sharing the same row/column index are dominant in this dataset.
The bandwidths used in Gaussian kernels of MMD loss are $\sigma = 0.5,1,2,4$.
\begin{figure}
\begin{center}
\includegraphics[width=\columnwidth,trim={0, 2cm 0 1cm}]{images/fig3.pdf}
\end{center}
\caption{(a) Connectivity of the CNOT gates for the $3\times 3$ Bars-and-Stripes dataset generated via the Chow-Liu tree algorithm. The qubits are arranged on a $3\times 3$ grid, with some of them shifted a bit in order to visualize the edges clearly.
(b) Chow-Liu tree for the double Gaussian peak model; the numbers represent the position of a bit in the binary string, with $0$ for the big end and $9$ for the little end.
In this plot, the darkness of edges indicates the amount of mutual information between two sites.}\label{fig-chowliu-tree}
\end{figure}
For circuit depth $d=10$, our gradient-based training is able to reduce the MMD loss efficiently. The loss function for different iteration steps is shown in \Fig{fig-noiseloss}(a).
We first perform L-BFGS-B~\cite{Byrd1995} optimization (black dashed line) using the exact gradient computed via the wavefunction ($N=\infty$) to test the expressibility of the quantum circuit.
A loss of $2.4\times 10^{-7}$ can be achieved, showing that the circuit is quite expressive in terms of the two-sample test.
In practice, one has to perform projective measurements on the qubits to collect statistics of the gradient since the wavefunction is inaccessible. This situation is similar to the mini-batch estimate of the gradient in deep learning~\cite{Goodfellow2016}. As is well known in deep learning applications~\cite{Goodfellow2016}, the L-BFGS-B algorithm is not noise tolerant. Thus, it is unsuitable for quantum circuit learning in realistic situations.
One needs to employ an alternative optimizer which is robust to the sampling noise to train the quantum circuit with a noisy gradient estimator.
We employ the stochastic gradient optimizer Adam~\cite{Kingma2014} with the learning rate $0.1$.
The sampling noise in the gradients can be controlled by tuning the batch size $N=2000,20000,\infty$ of the measurements. The solid lines in \Fig{fig-noiseloss} (a) show that as the sample size increases, the final MMD loss decreases systematically.
The scatter plot in the inset confirms that the model probability of the learned quantum circuit aligns better with the target probability at lower MMD loss.
To visualize the quality of the samples, we generated a few samples from the QCBM trained under different measurement batch size $N$ in \Fig{fig-bs-gen}.
Here, we define a valid rate $\chi\equiv p(x \text{ is a bar or a stripe})$ as a measure of generation quality.
The valid rate increases as the batch size increases.
However, even with a moderate number of measurements $N=2000$ one can achieve a valid rate $\chi=88.6\%$.
Here, we should mention that the best valid rate of the $d=10$ layer circuit is achieved by the L-BFGS-B optimizer with $N=\infty$, reaching $\chi=99.9\%$.
To highlight the importance of using a gradient-based optimizer, we compare our approach to the covariance matrix adaptation evolution strategy (CMA-ES)~\cite{Hansen2006,Rios2013}, a state-of-the-art gradient-free stochastic optimizer.
The input of CMA-ES is the scalar loss function measured on the circuit instead of the vector gradient information. The CMA-ES optimizer is able to optimize non-smooth non-convex loss functions efficiently~\cite{Rios2013}, thus in general performs better than other gradient-free methods such as the SPSA~\cite{Spall1998} in training noisy quantum circuits. We have confirmed this in our simulation.
In the absence of sampling noise ($N=\infty$) in \Fig{fig-noiseloss} (b), we do observe that the CMA-ES optimizer is able to achieve similar performance to the Adam optimizer after $10^4$ steps of optimization with a population size of $50$. The total number of generated samples is $10^4\times50\times N$, which is comparable to the Adam training in \Fig{fig-noiseloss} (a)
\footnote{Notice that for this circuit of depth $d=10$, which has $279$ parameters, we need to make $279\times N\times2+1$ measurements in a gradient estimation.}.
However, the performance of CMA-ES deteriorates significantly once taking sampling noise into consideration, as shown for $N=2000$ and $N=20000$ in \Fig{fig-noiseloss} (b).
A possible explanation is that in each step of CMA-ES, its evolution strategy chooses the direction to go by inspecting the center of the top $20\%$ of instances.
This process can be understood as an effective finite difference gradient estimation based on the losses of its population. However, extracting gradient information from noisy losses is difficult, even if one has plenty of them.
\begin{figure}
\begin{center}
\includegraphics[width=\columnwidth]{images/fig9.pdf}
\end{center}
\caption{
The MMD loss (\Eq{eq-mmd-loss}) as a function of training steps under different sampling noise levels governed by the batch size $N$.
(a) Gradient based training, solid colored lines are for Adam, and the dashed black line is for L-BFGS-B with $N=\infty$.
Inset is a comparison of the probability distribution between the training set and the QCBM output; points on the dashed line indicate an exact match.
(b) Gradient free CMA-ES training counterpart, where each point in the graph represents a mean loss of its population.
}\label{fig-noiseloss}
\end{figure}
\begin{figure}
\begin{center}
\includegraphics[width=\columnwidth,trim={0cm 0.5cm 1.5cm 1.5cm},clip]{images/fig7.pdf}
\end{center}
\caption{$3\times3$ Bars-and-Stripes samples generated from QCBMs.
Circuit parameters used here are from the final stages of Adam training with different batch sizes $N$ in \Fig{fig-noiseloss} (a). $\chi$ is the rate of generating valid samples in the training dataset.
For illustrative purposes, we show only $12$ samples for each batch size $N$.}
\label{fig-bs-gen}
\end{figure}
Another advantage of gradient-based learning is its efficiency compared with gradient-free methods, which becomes particularly significant when circuits get deeper and the number of parameters increases.
In the following, we address the necessity of using deep circuits.
\begin{figure}
\begin{center}
\includegraphics[width=\columnwidth]{images/fig8.pdf}
\end{center}
\caption{Losses as a function of training step for circuit depth $d=1,\ldots,10$. (a) The MMD loss \Eq{eq-mmd-loss}, and (b) the corresponding KL divergence. Here, we use L-BFGS-B optimizer with exact gradient.
}\label{fig-mmd-kl}
\end{figure}
\Fig{fig-mmd-kl} (a) shows the MMD loss as a function of L-BFGS-B training steps for different circuit depths. One obtains a lower loss for deeper quantum circuits after $500$ optimization steps. \Fig{fig-mmd-kl} (b) shows the Kullback-Leibler (KL) divergence~\cite{Kullback1951} calculated using the circuit parameters in (a) at different training steps. Note that this quantity is inaccessible for large-scale problems since one has access to neither the target nor the output probability. We compute the KL divergence for the toy model to demonstrate that the MMD loss is a good surrogate for practical training.
The result indeed shows consistency between the MMD loss and the KL divergence, and it also supports the observation that deeper circuits have stronger representational power.
Similar to deep neural networks, deep circuits can achieve better performance also because one is less prone to be trapped in poor local minima when there is a larger number of parameters~\cite{Choromanska2015}.
Another advantage of the QCBM over traditional deep neural networks is that its training does not suffer from the gradient vanishing/exploding problem as the circuit goes deeper. Gradient vanishing/exploding is a common problem for traditional deep neural networks~\cite{He2016} which originates from multiplications of a long chain of matrices in the back-propagation algorithm. Training of deep quantum circuits naturally circumvents this problem due to the unitarity of the time evolution. A similar idea was exploited in constructing classical recurrent neural networks with unitary building blocks~\cite{Jing2016}. More numerical simulations and analytical explanations can be found in \App{sec-gradient}.
\subsection{Mixture of Gaussians}
Next, we train a QCBM to model a mixture of Gaussians distribution
\begin{equation}
\mathbf{\pi}(x)\propto e^{-\frac{1}{2}\left(\frac{x-\mu_1}{{\nu}}\right)^2}+e^{-\frac{1}{2}\left(\frac{x-\mu_2}{{\nu}}\right)^2}.
\end{equation}
Here, $x=1,\ldots,x_{\rm max}$ is an integer encoded by the qubits, with $x_{\rm max}=2^n$ and $n$ the number of qubits. This differs from the Bars-and-Stripes dataset, where a sample $x$ is represented as a bit string. We choose ${\nu}=\frac{1}{8}x_{\rm max}$,
the centers $\mu_1=\frac{2}{7}x_{\rm max}$ and $\mu_2=\frac{5}{7}x_{\rm max}$. The distribution is shown as the dashed line in \Fig{fig-gaussian2}(b).
In the following discussion, we use $n=10$ qubits and set circuit depth $d=10$.
Unlike the Bars-and-Stripes case, the Gaussian mixture distribution is smooth and non-zero for all basis states.
Here, we generate $10^5$ i.i.d. samples from the target distribution as the training set.
Its Chow-Liu tree is shown in \Fig{fig-chowliu-tree}(b).
In this graph, we see that the main contributions to the mutual information come from bits near the big end (most significant bits, labeled by small indices).
This is because a bit near the little end only determines a local translation of the probability on the data axis.
For a smooth probability distribution, the value of the little end is thus nearly independent of the values of the remaining bits.
For example, the value of the big-end $0$-th bit being $0$/$1$ corresponds to the global left/right peak in \Fig{fig-gaussian2} (b). While the probability for the little end being $0/1$ corresponds to $x$ being even/odd.
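For concreteness, the target distribution and a training set of the size quoted above can be generated directly; a sketch with the parameters of this section:

```python
import numpy as np

n = 10
x_max = 2 ** n
x = np.arange(1, x_max + 1)
nu = x_max / 8                              # width of each Gaussian peak
mu1, mu2 = 2 * x_max / 7, 5 * x_max / 7     # the two peak centers

# unnormalized double-peak target, then normalize to a distribution
pi = np.exp(-0.5 * ((x - mu1) / nu) ** 2) + np.exp(-0.5 * ((x - mu2) / nu) ** 2)
pi /= pi.sum()

rng = np.random.default_rng(0)
train = rng.choice(x, size=10**5, p=pi)     # i.i.d. training samples
print(train.min(), train.max())
```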
\begin{figure}
\begin{center}
\includegraphics[width=\columnwidth]{images/fig1.pdf}
\end{center}
\caption{(a) The MMD loss as a function of Adam training step.
(b) Histogram for samples generated by a trained QCBM with a bin width $20$ (green bars),
in comparison with the exact probability density function (black dashed line).}\label{fig-gaussian2}
\end{figure}
We use a mixture of three Gaussian kernels with bandwidths $\sigma = 0.25, 10, 1000$.
The smallest bandwidth $\sigma=0.25$ captures local differences in the distribution,
while $\sigma=1000$, which is of the same scale as $x_{\rm max}$, captures the overall difference between the probability distributions.
\Fig{fig-gaussian2} (a) shows the MMD loss as a function of the Adam optimization steps, with a sample size $N=20000$.
After $2000$ training steps, the MMD loss decreased from $9.6\times10^{-2}$ to $6.4\times10^{-4}$. To see whether this low MMD loss represents a good generative model, we then generate $20000$ samples from the QCBM and plot its binned histogram in \Fig{fig-gaussian2} (b).
We see an excellent match between the histogram (green bars) and the exact probability distribution (black dashed curve). Thus, we conclude that the MMD loss with the Adam optimizer can also learn a smooth probability distribution over the qubit-encoded integers of a QCBM. We acknowledge that the unbinned histogram appears more spiky, partly because the MMD loss does not capture the local variation of the probability distribution. Better circuit architecture designs for representing continuous distributions may help alleviate this problem.
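For reference, the squared MMD with a mixture of Gaussian kernels can be written compactly in its exact-expectation form over two probability mass functions (a sketch with our own function names; the training estimator replaces these expectations by empirical averages over samples):

```python
import numpy as np

def mix_rbf_kernel(x, y, sigmas=(0.25, 10.0, 1000.0)):
    """Pairwise sum of Gaussian kernels over the three bandwidths."""
    d2 = (np.asarray(x)[:, None] - np.asarray(y)[None, :]) ** 2
    return sum(np.exp(-d2 / (2.0 * s ** 2)) for s in sigmas)

def mmd2(p, q, support):
    """Squared MMD between pmfs p and q on a common support:
    MMD^2 = (p - q)^T K (p - q), with K the kernel Gram matrix."""
    K = mix_rbf_kernel(support, support)
    d = np.asarray(p) - np.asarray(q)
    return float(d @ K @ d)
```

Since the Gaussian kernel is characteristic, this loss vanishes exactly when the two distributions agree and is strictly positive otherwise.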
\section{Discussions}\label{sec-discussion}
We presented a practical gradient-based learning scheme to train the quantum circuit Born machine as a generative model. The key component of the learning algorithm is the unbiased and efficient estimation of the gradient of the
MMD two-sample test loss function \Eq{eq-mmd-grad} on a quantum computer.
Besides possessing stronger representational power, the QCBM does not suffer from the gradient vanishing/exploding problem as the circuit depth increases, in contrast to classical deep neural networks. Compared to other quantum generative models~\cite{Amin2016,Gao2017}, the quantum circuit Born machine places fewer restrictions on hardware and circuit design, while both training and sampling can be carried out efficiently.
A recent work~\cite{Mcclean2018} pointed out that quantum circuits also face a gradient vanishing problem as the number of qubits increases.
They found that the variance of gradient amplitudes decreases exponentially as the number of qubits increases in a random quantum circuit.
However, it is reasonable to believe that with better circuit structure design and parametrization strategies, such as those of \Sec{sec:structurelearning} and Ref.~\cite{Huggins2018}, or with shared weights as used in convolutional neural networks~\cite{Goodfellow2016}, the gradient vanishing problem can be alleviated.
How gradient vanishing really affects gradient-based training in a large-scale QCBM needs further systematic investigation.
Our simulation of the quantum circuit Born machine is limited to a small number of qubits; thus the training set contains all patterns and we are pushing the circuits towards the memorization limit. In future applications, one should focus on the generalization ability of the quantum circuits. In those cases, the structure and depth of a quantum circuit provide a means of regularization, since they can be designed to express the inductive bias of natural distributions. In terms of learning, the randomness in the stochastic gradient also favors generalization~\cite{Choromanska2015}.
Besides the two-sample test loss employed in this paper, one can explore alternative training schemes, e.g. adversarial
training~\cite{Goodfellow2014}. Alternatively, learning of the kernel function used in \Eq{eq-mmd-loss} may also improve the generation quality~\cite{Li2017e}.
Finally, differentiable learning of the QCBM may be used for solving combinatorial optimization problems
and structure learning tasks, where the outputs are encoded in discrete bit strings.
\section{Acknowledgment}
Simulations of quantum circuits were mainly performed using the ProjectQ library~\cite{Steiger2016,Haner2018}.
We thank Alejandro Perdomo-Ortiz, Miles Stoudenmire, Damian Steiger, Xun Gao and Pan Zhang for helpful discussions. The authors are supported by the National Natural Science Foundation of China under the Grant No. 11774398 and research program of the Chinese Academy of Sciences under Grant No. XDPB0803.
\section{Main results and set-up}
\subsection{Main results}
Consider $n$ classical particles moving in $d$-dimensional
Euclidean space under the influence of a potential which is the
sum of pair potentials.
If the pair potentials die off
appropriately at infinity then we expect that, within any widely
separated fast-moving configuration of particles, the individual
particles will move almost along straight lines. In this case it makes
sense to talk about ``scattering''. See for example \cite{DG,Sim,He,Hu1}, and \cite{Hu2}.
We will prove
new facts regarding the relation between initial conditions and
scattering data at infinity. The most surprising of these are the explicit criteria of Definition \ref{def:finally:free}
which guarantee escape to asymptotic freedom.
See Theorem \ref{thm:main1}. Other facts, summarized by Theorems \ref{thm:main2},
\ref{thm:mainMoller} extend and refine results
previously only known for short range
potentials to the case of long-range potentials (see Definition \ref{def:range}).
\subsection{Setup and notation for $n$-body dynamics and potential decay}
\medskip
\noindent
A configuration $q$ specifies the locations of all $n$
masses, so that $q = (q_1, \ldots, q_n) \in \bR^{dn}$ with
$q_a \in \bR^d$. Thus our configuration space is \begin{equation} M:=\bR^{dn}_q
\qmbox{, or} \widehat M:=\bR^{dn}_q\backslash\Delta,
\Leq{conf:space:hat} depending on whether or not our pair potentials
$V_{i,j} = V_{i,j} (q_i -q_j)$ have singularities at collision
$q_i = q_j$; here \begin{equation} \Delta := \{q = (q_1,\ldots,q_n)\in
\bR^{dn}_q\mid q_i=q_j\mbox{ for some }i\neq j\} \Leq{collision:set}
is the collision set, also known as the ``fat diagonal''. $\Delta$
will also play an important role in the velocity space.
Configurations evolve in time according to Newton's equations
\begin{equation}
m_i \ddot q_i = -\nabla_{q_i} V,\qquad i =1, \ldots , n, \qquad m_i >
0 \text{ the masses},
\Leq{N}
\noindent which we will formulate in the usual way in phase space,
using momenta $p_i = m_i \dot q_i$, so that $p = (p_1, \ldots, p_n)
\in \bR^{dn}_p$. Thus our phase space $P$ is
\begin{equation}
P := T^*M = \bR^{dn}_p \times \bR^{dn}_q, \qmbox{ or}
\widehat P := T^*\widehat M = \bR^{dn}_p \times (\bR^{dn}_q \, \backslash\, \Delta)
\Leq{phase_space}
endowed with its canonical symplectic form.
Identify $\bR^{dn}$ with $\bR^n \otimes \bR^d$, let
\[{\cal M}:={\rm diag}(m_1,\ldots,m_n)\otimes \idty_d\]
be the mass matrix, seen as an (invertible symmetric) operator on
$\bR^{dn}_p$. Newton's equations can be rewritten as Hamilton's
equations
\[\dot p= - \nabla_q V, \quad \dot q = {\cal M}^{-1} p,\]
with Hamiltonian $H:P\to\bR$ (or $\widehat P \to \bR$),
\begin{equation}
H(p,q) := K(p) + V(q),
\Leq{Ham}
where the potential energy is assumed to be of the form
\[V(q) := \sum_{1\le i<j\le n} V_{i,j}(q_i-q_j),\]
where the pair potentials $V_{i,j}$ satisfy $V_{j,i}=V_{i,j}$
and $V_{i,i}=0$ for all $i,j$, and where $K$ is the usual kinetic energy
$$K(p) := \sum_{i=1}^n \frac{\|p_i\|^2}{2m_i}= \eh \LA
p,p\RA_{\!{\cal M}^{-1}}, \quad \LA p,p'\RA_{\!{\cal M}^{-1}}:= \LA p,{\cal M}^{-1}
p'\RA.$$
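For readers who wish to experiment numerically with Newton's equations \eqref{N}, the following velocity-Verlet sketch (our own illustration, not part of the analysis; it assumes for simplicity that all pairs share a single pair potential, passed in through its gradient \texttt{grad\_V}) integrates the dynamics:

```python
import numpy as np

def pair_forces(q, grad_V, masses):
    """Accelerations -grad_{q_i} V / m_i for V(q) = sum_{i<j} V_ij(q_i - q_j);
    here all pairs share one potential with gradient grad_V (a simplification)."""
    a = np.zeros_like(q)
    n = len(q)
    for i in range(n):
        for j in range(i + 1, n):
            g = grad_V(q[i] - q[j])       # gradient of the pair potential at q_i - q_j
            a[i] -= g / masses[i]
            a[j] += g / masses[j]         # opposite pairwise forces
    return a

def verlet_step(q, v, dt, grad_V, masses):
    """One velocity-Verlet step for m_i q_i'' = -grad_{q_i} V."""
    a = pair_forces(q, grad_V, masses)
    q_new = q + dt * v + 0.5 * dt ** 2 * a
    v_new = v + 0.5 * dt * (a + pair_forces(q_new, grad_V, masses))
    return q_new, v_new
```

Because the pairwise forces cancel in opposite pairs, the total momentum $\sum_i m_i \dot q_i$ is conserved by the scheme up to rounding, which gives a cheap correctness check.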
\medskip From now on we will use multi-index notation for partial
derivatives.
\begin{definition}
\label{def:range}%
A pair potential $V_{i,j}\in C^2(\bR^d \backslash \{0\},\bR)$ is
\begin{enumerate}[$\bullet$]
\item
\emph{long range} if for some $\alpha>0$
\begin{equation}
\partial^\gamma V_{i,j}(q)=\cO \big(\|q\|^{-\alpha -
|\gamma|}\big) \qquad \big(\|q\| \to \infty,\,
\gamma\in\bN_0^d,\, |\gamma|\le 2 \big)
\Leq{V:decay}
(if needed, $V_{i,j}$ will then also be called an $\alpha$-potential),
\item
\emph{short range} if \eqref{V:decay} is valid for some $\alpha>1$,
\item \emph{finite range} if the $V_{i,j}$ have bounded support.
\end{enumerate}
The potential $V(q)=\sum V_{i,j}(q_i-q_j)$ is called \emph{long
range}, etc., if all its pair potentials $V_{i,j}$ have the
corresponding property.
\end{definition}
\begin{caveats}\quad\\
According to this established
terminology, the following implications hold:
$$\mbox{finite range } \Longrightarrow \mbox{ short range }
\Longrightarrow \mbox{ long range}.$$ We apologize for the
counterintuitive nature of the terminology. It is standard in
scattering literature. Also note that a finite range potential $V$
typically does not have bounded support within ${\bR}^{dn}$.
Rather, its support is contained in a neighborhood of the fat
diagonal $\Delta$. \hfill $\Diamond$
\end{caveats}
\begin{example}[Celestial mechanics and electrostatics]\quad\\
\label{ex:grav:coulomb}%
In celestial mechanics and electrostatics we have
$V_{i,j}(Q)=\frac{I_{i,j}}{\|Q\|} $ with respectively
$I_{i,j}=-m_i m_j$ and $I_{i,j}=Z_i Z_j$ for the charges
$Z_i\in\bR\!\setminus\! \{0\}$. These potentials are long range,
lying on the boundary of the space of short range potentials.
\hfill $\Diamond$
\end{example}
\begin{remark}[Strong forces near collisions]\quad\\
By definition, so-called strong force potentials satisfy
\begin{equation}
\partial^\gamma V_{i,j}(q)=\cO \big(\|q\|^{-\alpha - |\gamma|}\big)
\qquad \big(\|q\| \to 0,\, \gamma\in\bN_0^d,\, |\gamma|\le 2 \big),
\Leq{}
for some $\alpha \geq 2$ (cf.~\eqref{V:decay} as
$q \to \infty$). Variationally speaking, this condition is most
important in the opposite ``ultraviolet'' regime of short distances,
$\|q_i - q_j \| \ll 1$, rather than our current ``infrared regime'' of
long distances. Imposing the strong force condition on attractive
forces guarantees that any collision solution has infinite action
and so is a simple way to exclude collision solutions as candidate
minimizers when using the direct method of the calculus of
variations to achieve various types of solutions (e.g.\ periodic
ones).\hfill $\Diamond$
\end{remark}
\subsection{Asymptotic freedom}
Our first goal is to define the \emph{free region} of phase space,
leading to motions along which mutual distances eventually increase
linearly with time, as in the free flow, where bodies do not interact.
This definition relies on the prior concept of asymptotic velocity.
\begin{definition}
\label{def:asymptotic:velocity}%
The \emph{(forward, resp. backward) asymptotic shape} or
\emph{velocity} of a state $x \in P$ is the limit in $\bR^{dn}$, if
it exists,
\[v^\pm (x) := \lim_{t \to \pm \infty} \frac{q(t)}{t},\]
where $x(t)=(q(t),p(t))$ is the integral curve through $x$ at $t=0$.
\end{definition}
We are interested in motions for which $v^+ \notin \Delta$.
\begin{definition}
\label{def:free}\quad\\%
The state $x$ is \emph{forward free} if $v^+(x)$ exists and
$v^+(x) \in \bR^{dn} \, \backslash\, \Delta$. We call
\[F^+ := \{x \in P\mid v^+(x) \in \bR^{dn} \, \backslash\, \Delta\}\]
the subset of $P$ of forward free states.
Correspondingly, the subset $F^-$ is the set of states $x$ which are
\emph{backward free}, i.e. $v^-(x) \in \bR^{dn} \, \backslash\, \Delta$.
We will sometimes refer to trajectories passing through $F^+$
as \emph{escape orbits}.
\end{definition}
\begin{remark}[Clusters] \quad\\
Those motions $x(t)$ for which $v^+ (x)$ exists but for which $v^+ (x) \in \Delta$ break up into $k < n$ clusters, each cluster
composed of those particles whose indices $i$ share a common asymptotic velocity: $v^+_i = v^+ _j$.
The dynamics within a cluster of size $c$ can be as complicated as that of the general $c$-body problem. The
clusters interact with each other like a free
$k$-body system. (See \textsc{Marchal-Saari}~\cite{MS}, however not in the sense of asymptotic completeness, see \cite[section 5.10]{DG}.)\,$\Diamond$%
\end{remark}
\begin{example}[Celestial mechanics]\quad\\
\label{ex:celestial:mechanics}%
\textsc{Chazy} \cite{Cha} showed
that collision-free solutions for $n=3$ gravitating bodies fall into one of seven classes
regarding their final behavior in the future.
\begin{itemize}
\item
Bounded, parabolic, parabolic-elliptic and oscillating motions have zero asymptotic velocity.
\item
Hyperbolic-elliptic and hyperbolic-parabolic motions have asymptotic velocity belonging to
$\Delta\backslash\{0\}$.
\item Hyperbolic motions are free. So their asymptotic velocity is in $\bR^{3d}\,\backslash\,\Delta$.
\end{itemize}
So, here hyperbolicity equates to freedom. For more bodies, new types
of final motions occur, notably the ``non-collision singularities'', see {\sc Gerver} \cite{Ge}
and {\sc Xia} \cite{Xia}.
But it remains true that every
collision-free solution has asymptotic velocities $v^{\pm}$ in both time directions
provided we allow velocities to take values in the one point compactification
of $\bR^{dn}$. (For example, for initial conditions $x$ leading to non-collision singularities we have
$\lim_{t \to T^\pm(x)} \frac{\|q(t)\|}{t} =\infty$, where $T^+(x)\in(0,\infty]$ and $T^-(x)\in[-\infty,0)$ are
the escape times beyond which the solution fails to exist.)
\hfill$\Diamond$%
\end{example}
The precise structure of $F^+$ is not obvious. Yet, by flowing $F^+$
along integral curves, we will reach an open subset of $P$, which we can characterize explicitly. Let
\begin{align}
q_{i,j}& := \|q_i-q_j\|\ ,\quad q_{\min} := \min_{i<j}q_{i,j}
\ ,\quad q_{\max}:= \operatorname*{\max}_{i<j}
q_{i,j}\,,\nonumber\\
v_{i,j}&:=\|v_i-v_j\|\ ,\quad
v_{\min}:=\min_{i<j} v_{i,j}\ ,\quad
v_{\max}:= \operatorname*{\max}_{i<j} v_{i,j}
\end{align}
and let $\alpha$, $\delta$ and $C$ be three positive parameters.
\begin{definition}
\label{def:finally:free}%
The \emph{finally free region} (with parameters $\alpha$, $\delta$
and $C$) is
\begin{eqnarray} F^+_{\rm loc} :=
\Big\{ x=(p,q) \in P\hspace{-2mm} & \Big| \hspace{-2mm}&
v_{\min}^2> C \frac{q_{\max}}{q_{\min}^{\alpha+1}}\,
, \label{def:ff}\\
&& \LA v_i-v_j, q_i-q_j\RA > (1-\delta)v_{i,j}q_{i,j}\, ,\nonumber\\
&&\textstyle (1+2\delta) \frac{q_{k,l}}{v_{k,l}}>
\frac{q_{i,j}}{v_{i,j}} ,\quad( i \ne j,k \ne l )\Big\}. \nonumber
\end{eqnarray}
\end{definition}
Notice that $F^+_{\rm loc}$, like $F^+$, is invariant w.r.t.\ the symplectic lift of the diagonal action
of the Euclidean group on configuration space $\bR^{dn}$.
The following theorem justifies that our definition of $F^+$ matches
our initial goal, and also justifies the notation $F^+_{\rm loc}$.
\begin{theorem}
\label{thm:main1}%
For any long range potential $V$, there exist appropriate parameters
$\alpha$, $\delta$ and $C$ such that $F^+_{\rm loc}$ is forward invariant and such that a
state $x \in P$ is in $F^+$ if and only if its forward orbit
eventually enters $F^+_{\rm loc}$.
\end{theorem}
The theorem follows from Theorem~\ref{thm:final:free} below. The proof
will actually show that the boundary $\partial F^+_{\rm loc}$ is a ($C^0$)
surface of section of the flow restricted to $F^+$.
Notice that
$F^+ = \bigcup_{t\geq 0} \Phi_{-t}\left(F^+_{\rm loc}\right)$, where
$F^+_{\rm loc}$ is open and $\Phi_{-t}$ is smooth, whence the following.
\begin{corollary}
$F^+$ is a non-empty open subset of $P$.
\end{corollary}
\medskip The asymptotic velocity map of Definition \ref{def:asymptotic:velocity}
enjoys regularity on $F^+$.
\begin{theorem}
[Asymptotic velocity map on $F^+$]\quad \label{thm:main2}\\
Assume that $V$ is a long-range potential whose pair potentials are
$C^k$, $k \ge 2$. The map
$v^+ : F^+ \to \bR^{dn} \, \backslash\, \Delta$ is a $C^{k-1}$
complete set of commuting first integrals. Moreover, for fixed
$v_* \in \bR^{dn} \, \backslash\, \Delta$ the space of all forward
orbits $x(t)$ for which $v^+ (x(t)) = v_*$ has the structure of an
affine space modelled on the $(nd-1)$-dimensional vector
space~$v_* ^{\perp}$.
\end{theorem}
The regularity of $v^+$ follows from item 1 of Theorem
\ref{thm:both:moeller}. That components of $v^+$ Poisson commute is
clear. That $v^+$ is a surjective submersion, and the assertion on the
structure of its fibers follows from item 4 of Theorem
\ref{thm:Dollard:Moeller} below.
\begin{earlier}[Smoothness of scattering data]\quad\\
Smoothness of the scattering data and in particular of the
asymptotic velocity map $x \mapsto v^+ (x)$ has been achieved under
various assumptions:
\begin{enumerate}[$\bullet$]
\item
In \cite{Gu}, {\sc Gutkin} proved continuity of scattering data for
a class of $n$-particle systems on the line with repulsive interactions.
\item
Later, {\sc Fusco} and {\sc Oliva} proved in \cite{FO} a result that
implies smoothness of asymptotic momentum and even integrability for
repulsive Coulombic potentials.
\item
More recently, \textsc{Duignan} et al.\ \cite{Duignan} proved that the
map $x \mapsto v^+(x)$ is analytic on $F^+_{\rm loc}$ for the Newtonian
potential. \hfill $\Diamond$
\end{enumerate}
\end{earlier}
\subsection{Comparison with free flows}
\label{subsec:flows}
In order to study the asymptotic behavior of the dynamics on $F^+$,
one strategy would be to compactify the phase space, as
in~\cite{Duignan} for the $N$-body problem. Such a
compactification is hard to define in full generality. Another
strategy, chosen here, is to compare the dynamics to a model,
integrable, free dynamics.
Write
$$\Phi : \bR_t \times P \dashedrightarrow P $$
for the flow defined by our $n$-body system. {\it We have used the
broken arrow notation } for the map $\Phi$ to indicate that the
domain of the map need not be all of $ \bR_t\times P$, thus allowing
for incomplete flows such as those that occur for singular potentials
like Newton's or Coulomb's. The curve
$t \mapsto \Phi_t (x)$, where defined, is a solution to our Hamilton's
equations having initial condition $x \in P$.
The {\em free flow} $\Phi^{(0)}$, on the other hand, is the flow whose
projected curves are the lines $t\mapsto a t + c$\,:
\begin{equation} \Phi^{(0)}: \bR_t\times
P\to P\qmbox{,} \Phi^{(0)}_t(p, q)=(p,q+t{\cal M}^{-1}p) \Leq{free:solu}
and is generated by the free Hamiltonian $H_0 = K$. Let
\begin{equation} F_0 = F^+_0 := \big\{ (p,q)\in P \mid v= {\cal M}^{-1}p\notin \Delta
\, \big\}.
\Leq{P0free}
\begin{theorem}
\label{thm:mainMoller}\quad\\%
Let $V$ be a short-range $(\alpha, k)$-potential with $\alpha>1$ and
$k \ge 2$ (see definition~\ref{def:alpha-k}).\\
Then the dynamics $\Phi$ on $F^+$ is
conjugate to the free dynamics $\Phi^{(0)}$: there exists a $C^{k-1}$
symplectomorphism $\Omega : F_0 \to F^+$
such that
$$\Omega \circ \Phi^{(0)}_t = \Phi_t \circ \Omega \qquad (\forall t
\ge 0).$$
\end{theorem}
This is the qualitative content of Theorem \ref{thm:moeller} below.
\medskip
A theorem analogous to Theorem \ref{thm:mainMoller} holds for long-range
potentials. Instead of comparing the given flow with the free flow,
we must compare it with an integrable, time dependent ``Dollard
Hamiltonian'' $H_D (p,t ) = K(p) + V((\sqrt{1+t^2})p)$ (which does not
depend on $q$!). See Theorem \ref{thm:Dollard:Moeller} for precise
statements.
\begin{earlier}\quad\\
In 1927 {\sc Chazy} (\cite[Chapter 5]{Cha}) used the term
``hyperbolic'' in the classification of the long-time behaviour of
solutions in the long range case of celestial mechanics. He
established an analytic asymptotic expansion near infinity for his
hyperbolic solutions with initial terms
\begin{equation}
q(t) = at + b \log(t) + c +\cO\big(\log(t)/ t\big) ; \quad b = +\nabla V(a)\quad
\text{ as } t \to \infty
\Leq{hyperbolicChaz}
Later, {\sc Saari} \cite[section 8]{Sa}, and
{\sc Marchal} and {\sc Saari} \cite[section 10]{MS} extended
and clarified Chazy's results, focussing on how
cluster energies and angular momenta approach their limits. Here
``cluster'' refers to the situation where $v^+ \in \Delta$. The
``clusters'' are the subsets of mass indices $i$,
for which $v^+ _i = v^+ _ j$.
The $\log(t)$ term in Chazy's expansion
equation~\eqref{hyperbolicChaz} is an essential consequence of the
$1/r$-nature of the Newtonian (or Coulomb) potential. On the other
hand, hyperbolic solutions for short range potentials satisfy
\begin{equation}
q(t) = a t + c + o(1), \quad\text{ as }t \to + \infty.
\Leq{hyperShortRange}
\textsc{Simon} \cite{Sim} proved the validity of this expansion for the
two-body problem using the M\o ller transform (as used in
section~\ref{sect:moeller}), or, as he called it, the
{\em wave transformation}.
Recently {\sc Maderna} and {\sc Venturelli} \cite{MV} investigated
forward hyperbolic motions for the $n$-body problem using variational
and weak KAM methods. And \textsc{Duignan} et al.\ set up an approach
\cite{Duignan} to hyperbolic motions and scattering for
the $n$-body problem, which relies on a McGehee-style compactification
of phase space adding fixed points at infinity whose stable
manifolds correspond to forward hyperbolic solutions.\hfill $\Diamond$
\end{earlier}
\begin{subsection}{Summary: Main notations}
\label{subsec:notations}
\begin{tabular}[t]{ll}
$v^\pm(x)$ &asymptotic velocity of state $x$
(definition~\ref{def:asymptotic:velocity})\\
$P$ & phase space (equation~\eqref{phase_space}) \\
$\widehat P$ & phase space when collision singularities present (equation~\eqref{phase_space} ) \\
$F^\pm$ &forward and backward free regions
(definition~\ref{def:free})\\
$\widehat F^\pm$ & as above, but when collision singularities present
(definition~\eqref{P:hat:free:pm})\\
$F^+_{\rm loc}$ &forward finally free region
(definition~\ref{def:finally:free})\\
$F_0$ &free region of the free flow (equation~\eqref{P0free})\\
$\Phi^{(0)}_t$ & free flow (equation~\eqref{free:solu})\\
$\Phi_t$ & $n$-body flow (subsection~\ref{subsec:flows})\\
$\hat \Phi_t $ & $n$-body flow when collision singularities present \\
\end{tabular}
\end{subsection}
\section{Do we know when we are free?}
\label{sect:free}
For simplicity, we first consider long-range potentials $V$ which are
non-singular at the origin i.e. $C^2$ on $\bR^{dn}$. Many
properties which hold for these potentials also hold for
singular long range potentials (e.g.\ the
gravitational $n$-body potential). This will be proved in
section~\ref{sect:moeller2}.
We equip the real vector space of long range $\alpha$-potentials $V\in
C^2(\bR^{dn},\bR)$ (as introduced in definition~\ref{def:range},
with $\alpha>0$) with the seminorm
\begin{equation}
\|V\|^{(\alpha)} \; := \; \| {\cal M}^{-1}\|\
\max_{i<j\in N} \sup_{q\in\bR^d\backslash \{0\}} \|q\|^{\alpha+1}
\|\nabla V_{i,j}(q)\|.
\Leq{V:alpha}
Typically, pair potentials are $C^2$, smoother, often even
analytic. In order to describe a section of the flow in restriction to
$F^+$, we will need a $C^2$ seminorm estimate of the potential
and later, scattering estimates will be improved using
$C^k$-seminorms, with $k\in \bN$, $k \geq 2$. We now introduce such
seminorms.
\begin{definition}
\label{def:alpha-k}\quad\\%
An $(\alpha,k)$--\emph{potential} $V$ is a potential whose pair
potentials $V_{i,j}\in C^k(\bR^d,\bR)$ fulfill
\begin{equation}
\partial^\gamma V_{i,j}(q)=\cO\big(\|q\|^{-\alpha - |\gamma|}\big)
\qquad (\gamma\in\bN_0^d, |\gamma|\le k).
\Leq{V:k:decay}
On the space of $(\alpha,k)$-potentials we define
\begin{equation}
\|V\|^{(\alpha,k)} := \| {\cal M}^{-1}\|\,
\sum_{i<j\in N} \sum_{\gamma\in\bN_0^d, |\gamma| = k}\
\sup_{q\in\bR^d\backslash \{0\}} \|q\|^{\alpha+k} |\partial^\gamma
V_{i,j}(q)|
\Leq{V:k:norm}
(so that $\|V\|^{(\alpha)} = \|V\|^{(\alpha,1)}$).
\end{definition}
The inessential factor $\|{\cal M}^{-1}\|=m_{\min}^{-1}$ for
$m_{\min}:=\min(m_1,\ldots,m_n)$ simplifies formulae.
Recall the definition of the free region
\begin{equation}
F^+= \big\{x\in P \mid v^+ (x) \in \bR^{dn}\, \, \backslash\, \Delta
\big\}.
\Leq{P:free:pm:B}
It depends on the details of the (generally non-integrable)
flow, and hence implicitly on the precise form of the potential $V$.
So general properties of the free region are hard to
grasp. Surprisingly, there is an explicit surface of
section of the flow restricted to $F^+$. It bounds a
positive-invariant subset $F^+_{\rm loc} \subset F^+$ having the
property that every orbit in $F^+$ must eventually enter $F^+_{\rm loc}$.
We still assume that $V$ is a non-singular $\alpha$-potential. Let
$\delta$ and $C$ with
\[0<\delta \le \delta_0:= \min(\alpha/(4+\alpha),1/5), \quad C := 16
\,dn\,\|V\|^{(\alpha{,2})}/\delta.\]
Define $F^+_{\rm loc}$ by \eqref{def:ff}, with our chosen values of
$\alpha$, $\delta$ and $C$. The three inequalities assert
\begin{enumerate}[--]
\item the dominance of the interparticle kinetic energy over potential
energy
\item the near-parallelism of interparticle distances and velocities
\item that interparticle distances are nearly proportional to
interparticle velocities.
\end{enumerate}
This tells us what the motion of free particles eventually looks like.
For example, landing in $F^+_{\rm loc}$ yields the simple propagation estimates
\eqref{propagation:est}.
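For a given state the three inequalities can be checked mechanically. The following sketch (our own helper, purely illustrative; the parameters $\alpha$, $\delta$, $C$ are passed in directly rather than derived from $\|V\|^{(\alpha,2)}$ as in the proof) tests membership in $F^+_{\rm loc}$:

```python
import numpy as np
from itertools import combinations

def in_F_plus_loc(q, v, alpha, delta, C):
    """Check the three defining inequalities of F^+_loc for positions q
    and velocities v, given as (n, d) arrays (illustrative helper)."""
    pairs = list(combinations(range(len(q)), 2))
    qij = {p: np.linalg.norm(q[p[0]] - q[p[1]]) for p in pairs}
    vij = {p: np.linalg.norm(v[p[0]] - v[p[1]]) for p in pairs}
    q_min, q_max = min(qij.values()), max(qij.values())
    v_min = min(vij.values())

    # (1) dominance of interparticle kinetic energy over potential energy
    if not v_min ** 2 > C * q_max / q_min ** (alpha + 1):
        return False
    # (2) near-parallelism of relative positions and relative velocities
    for i, j in pairs:
        if not np.dot(v[i] - v[j], q[i] - q[j]) > (1 - delta) * vij[(i, j)] * qij[(i, j)]:
            return False
    # (3) interparticle time scales q_kl / v_kl are nearly equal
    times = [qij[p] / vij[p] for p in pairs]
    return bool((1 + 2 * delta) * min(times) > max(times))
```

For two widely separated bodies receding from each other the check passes, while reversing the velocities violates the near-parallelism condition.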
\begin{theorem}[Final free region]
\label{thm:final:free}%
\quad
\begin{enumerate}[1.]
\item $F^+_{\rm loc}$ is forward invariant : $\Phi_t (F^+_{\rm loc}) \subseteq
F^+_{\rm loc}$ for $t\ge0$.
\item $F^+_{\rm loc}$ is a subset of $F^+$.
\item For any $x_0\in F^+$ there is a time $t$ such that
$\Phi(t,x_0)\in F^+_{\rm loc}$.
\item
For $x_0\in F^+_{\rm loc}$ the distance between the particles
$i<j\in N$ increases linearly:
\begin{equation}
\eh v_{i,j}(0) \; t\le q_{i,j}(t,x_0) - q_{i,j}(0,x_0)
\le \textstyle\frac{3}{2} v_{i,j}(0)\; t
\qquad\big(t\in[0,\infty)\big).
\Leq{propagation:est}
\end{enumerate}
\end{theorem}
As already mentioned, the boundary of $F^+_{\rm loc}$ is thus a $C^0$
surface of section of the flow restricted to the free region.
\begin{example}[Two bodies]
We already remarked that $F^+_{\rm loc}$ is invariant under
Euclidean transformations. So in particular
$F^+_{\rm loc} = T^*\bR^d\times \widetilde{F}^+_{\rm loc} $, the cartesian product referring to
the separation of center of mass and internal motion.\\
In the case $n=2$, $\widetilde{F}^+_{\rm loc}$
is fibered over $\bR^{d}_q$, with fiber
diffeomorphic to the cylinder $[1,\infty)\times B_{d-1}$ in spherical coordinates,
see figure \ref{fig:F:plus}.
\hfill $\Diamond$
\begin{figure}[h]
\begin{center}
\includegraphics[width=8cm]{section.pdf}
\end{center}
\caption{Fiber over $(q,v_1)$ of $\partial F^+_{\rm loc}$ in the $n=2$
case}
\label{fig:F:plus}
\end{figure}
\end{example}
\begin{remark}[Topology of the final free region]\quad\\
Although $\Delta$ is contractible,
already the set $\bR^{dn}\, \backslash\, \Delta$, to which $F^+_{\rm loc}$ projects in velocity space,
is topologically rich:
\begin{enumerate}[1.]
\item
For $d=1$, there is a homeomorphism
$\bR^n \backslash\, \Delta \cong \bR^n\times {\rm Sym}(n)$.
\item
For $d=2$, the cohomology ring of $\bR^{2n}\, \backslash\, \Delta$ is
that of a product of bouquets of $k$ circles, $1\le k\le n-1$,
see {\sc Arnold} \cite{Ar}.
\hfill $\Diamond$
\end{enumerate}
\end{remark}
\medskip
In order to prove Theorem \ref{thm:final:free}, we will use the following
lemma, whose proof is routine, and where we denote by $\LA\, \cdot\, \RA$ a
smoothened version of the absolute value:
\[\LA q \RA =\sqrt{q^2+1}.\]
\begin{lemma}
\label{lem:integral:estimate}%
\quad\\[-6mm]
\begin{enumerate}[1.]
\item
For $\alpha>0$ and $q\ge 0$,
\begin{equation}
{\textstyle \frac{1}{\alpha}}\LA q \RA^{-\alpha}
\ \le \ \int_{q}^{\infty}\! \LA \tilde{q} \RA^{-\alpha-1}d\tilde{q}
\ \le \ (\textstyle \frac{1}{\alpha}+1)\LA q \RA^{-\alpha}.
\Leq{ineq:smooth:abs}
\item If $\,V$ is an $\alpha$-potential (see~\eqref{V:decay}),
\[\|V\|^{(\alpha)} \le \|V\|^{(\alpha,1)}\]
and, if $\,V$ is an $(\alpha,k+1)$--potential
(see~\eqref{V:k:decay}) with $k\geq 1$,
\[ \|V\|^{(\alpha,k)} \le d\, \|V\|^{(\alpha,k+1)}.\]
\end{enumerate}
\end{lemma}
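Item 1 can be sanity-checked numerically in the case $\alpha=2$, where the tail integral has the closed form $\int_q^\infty \LA \tilde{q}\RA^{-3}\,d\tilde{q} = 1 - q/\sqrt{1+q^2}$ (a quick check on a grid, not a proof):

```python
import numpy as np

alpha = 2.0
q = np.linspace(0.0, 50.0, 501)
bracket = np.sqrt(q ** 2 + 1.0)            # the smoothed absolute value <q>

# For alpha = 2 the tail integral is explicit: 1 - q / sqrt(1 + q^2).
tail = 1.0 - q / np.sqrt(1.0 + q ** 2)

lower = bracket ** (-alpha) / alpha        # (1/alpha) <q>^(-alpha)
upper = (1.0 / alpha + 1.0) * bracket ** (-alpha)
assert np.all(lower <= tail) and np.all(tail <= upper)
```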
\noindent
\textbf{Proof of Theorem \ref{thm:final:free}:} \\
We will repeatedly use the symbol $X_{i,j}$ for the relative
accelerations
\[X_{i,j}: \bR^{dn}\to \bR^d \qmbox{,}
X_{i,j}(q):=\sum_{k\in N\setminus\{i\}}\!\!\!\frac{\nabla V_{i,k}(q_i-q_k)}{m_i}-
\sum_{k\in N\setminus\{j\}} \!\!\! \frac{\nabla V_{j,k}(q_j-q_k)}{m_j}\]
between the $i$-th and $j$-th particle, and the estimate
\begin{equation}
\|X_{i,j}(q)\| \le 2(n-1)\frac{\|V\|^{(\alpha)}}{ q_{\min}^{\alpha+1}}\qquad(i<j\in N).
\Leq{X:est}
Throughout the proof we also use that, by Lemma
\ref{lem:integral:estimate}.2,
\begin{equation}
C\ge 8 n\|V\|^{(\alpha)}/\delta.
\Leq{C:ge}
\begin{enumerate}[1.]
\item $F^+_{\rm loc}$ is open, since it is defined by strict inequalities
among continuous functions on phase space. To prove that $F^+_{\rm loc} $
is forward invariant, it is sufficient to show that the Hamiltonian
vector field {\em points inwards along its boundary}
$\partial F^+_{\rm loc}$. Thus we will show that the difference of the
sides of each inequality has positive time derivative (Poisson
bracket $\{\bullet,H\}$) at instants at which that inequality
becomes an equality.
Note that on $\overline{F^+_{\rm loc}}$ the phase space functions
$q_{i,j}$ and $v_{i,j}$ have positive values and are thus smooth.
For $q_{\min}$, $q_{\max}$, $v_{\min}$ and $v_{\max}$, which are
only Lipschitz continuous, we consider the distributional
derivative.
\begin{enumerate}[(a)]
\item On $F^+_{\rm loc}$ the time derivative relating to the first
inequality in \eqref{def:ff} is positive since, defining $\{f, H\}$
as the distribution $\frac{d}{dt}f$ along orbits, $v_{\min}$
fulfills the inequality
\[\{v_{\min}^2, H\}\ge -2v_{\min}\, \max_{i<j\in
N}\|\dot{v}_{i}-\dot{v}_{j}\|,\]
see {\em e.g.}\ {\sc Lieb} and {\sc Loss} \cite[Cor. 6.18]{LL} for
the weak gradient of the minimum of functions. Thus
\begin{align}
\Big\{v_{\min}^2 -&\, C \frac{q_{\max}}{q_{\min}^{\alpha+1}} \,,\, H\Big\}\ge\nonumber\\
\ge&\; C(\alpha+1) \frac{q_{\max}}{q_{\min}^{\alpha+2}}(1-\delta)v_{\min}
-2v_{\min}\, \max_{i<j\in N}\|X_{i,j}(q)\| -
C \frac{v_{\max}}{q_{\min}^{\alpha+1}}\nonumber\\
\ge&\; C\Big((\alpha+1)(1-\delta)\frac{v_{\min}}{q_{\min}} -\delta/4
\frac{v_{\min}} {q_{\max}}- \frac{v_{\max}} {q_{\max}}\Big)
\frac{q_{\max}}{q_{\min}^{\alpha+1}}\nonumber\\
\ge&\; C\Big((\alpha+1)(1-\delta)-\delta/4 -(1+2\delta)\Big)
\frac{v_{\min}\,q_{\max}}{q_{\min}^{\alpha+2}}\nonumber\\
=&\; C\Big(\alpha(1-\delta) -\frac{13}{4}\delta\Big)
\frac{v_{\min}\,q_{\max}}{q_{\min}^{\alpha+2}}> 0.\nonumber
\end{align}
The factor $1-\delta$ in the first inequality follows from the second
line of \eqref{def:ff}.
The second inequality follows from \eqref{X:est} and \eqref{C:ge}.
The factor $1+2\delta$ in the second to last line follows from the
third line of \eqref{def:ff}.\\
In the final inequality we used that $\delta \in
\big(0,\min(\alpha/(4+\alpha),1/5)\big]$:
\begin{enumerate}[$\bullet$]
\item
For $\alpha\in (0,1]$ we obtain $\alpha(1-\delta) -\frac{13}{4}\delta \ge \frac{3\alpha}{4(4+\alpha)}>0$.
\item
For $\alpha\in (1,\infty]$ we get
$\alpha(1-\delta) -\frac{13}{4}\delta\ge\frac{4}{5}\alpha- \frac{13}{20}\ge\frac{3}{20} $.
\end{enumerate}
\item
The time derivative of the left hand side of the second inequality
\[ \LA v_i-v_j, q_i-q_j\RA - (1-\delta)v_{i,j}q_{i,j} > 0 \]
in \eqref{def:ff} is positive, too. This is trivial if
$\|V\|^{(\alpha)}=0$, that is, $V=0$. Otherwise
\begin{align}
\big\{\langle v_i-v_j,&\, q_i-q_j\rangle - (1-\delta)v_{i,j}q_{i,j} \;,\; H\big\}\ge \nonumber\\
\ge&\; v_{i,j}^2 - \LA X_{i,j}(q),q_i-q_j\RA - (1-\delta)\big(v_{i,j}^2+\|X_{i,j}(q)\|q_{i,j}\big)
\nonumber\\
\ge&\; \delta C \frac{q_{\max}}{q_{\min}^{\alpha+1}}-
2(n-1)\frac{\|V\|^{(\alpha)}q_{\max}}{ q_{\min}^{\alpha+1}}
(2-\delta)
\nonumber\\
\ge &\; \big(8n-2(n-1)(2-\delta)\big)\frac{\|V\|^{(\alpha)}q_{\max}}{ q_{\min}^{\alpha+1}}
> 4n\frac{\|V\|^{(\alpha)}q_{\max}}{ q_{\min}^{\alpha+1}}
>0 .\nonumber
\end{align}
For the third line we used the first inequality of \eqref{def:ff} and
\eqref{X:est}, and
\eqref{C:ge} for the last line.
\item
Concerning the third inequality in \eqref{def:ff},
as $\{ q_{k,l} , H \}\in [(1-\delta)v_{k,l}, v_{k,l}]$ (using the
second inequality in \eqref{def:ff}),
at value zero of
$(1+2\delta) \frac{q_{k,l}}{v_{k,l}}- \frac{q_{i,j}}{v_{i,j}}$ its
time derivative can be estimated as follows:
\begin{align}
&\Big\{ (1+2\delta) \, \frac{q_{k,l}}{v_{k,l}}- \frac{q_{i,j}}{v_{i,j}}, H \Big\}\ge\nonumber\\
\ge&\; (1+2\delta)(1-\delta) -1 -( (1+2\delta) q_{k,l}\|X_{k,l}(q)\| +q_{i,j}\|X_{i,j}(q)\|)/v_{\min}^2\nonumber\\
\ge&\; \delta-2\delta^2-
2 (n-1) (1+\delta) \frac{\|V\|^{(\alpha)}q_{\max}}{v_{\min}^2 \, q_{\min}^{\alpha+1}}
\nonumber\\
>&\textstyle \; \delta(1- 2\delta) - 4n(1+\delta) \frac{\|V\|^{(\alpha)}}{C }
\ge \delta(1- 2\delta) -\eh(1+\delta)\delta
\ge\delta(\frac{3}{5}\! -\! \frac{6}{10}) = 0. \nonumber
\end{align}
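The two final steps are explicit: as in the previous estimate, \eqref{C:ge} yields $\delta C\ge 8n\|V\|^{(\alpha)}$, so that
\[
4n(1+\delta)\,\frac{\|V\|^{(\alpha)}}{C}\ \le\ \eh(1+\delta)\,\delta\ \le\ \tfrac{6}{10}\,\delta
\qmbox{and} \delta(1-2\delta)\ge\tfrac{3}{5}\,\delta
\qquad\big(\delta\le\tfrac{1}{5}\big).
\]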
\end{enumerate}
\item[4.]
We now prove item 4, before items 2 and 3. Throughout the proof of
item 4 we will use that we already proved positive invariance of
$F^+_{\rm loc}$ (item 1). We adopt the notation
$\tilde{f}(t):=f\circ\Phi(t,x_0)$ for a phase space function $f$,
with $x_0\in F^+_{\rm loc}$.
For $F:=\eh \tilde{q}_{i,j}^{\,2}$ and $t\ge0$ we get from
\eqref{def:ff} that \begin{equation} F'(t)\ge
(1-\delta)\tilde{q}_{i,j}(t)\tilde{v}_{i,j}(t) \Leq{Fp} and \begin{equation}
F''(t)\ge (1-\delta)
\l[\tilde{v}_{i,j}^{\,2}(t)-\tilde{q}_{i,j}(t)\|
\tilde{X}_{i,j}(t)\|\ri]
\ge(1-\delta)(1-\delta/4)\tilde{v}_{i,j}^{\,2}(t) \ge \textstyle
\frac{19}{25} \tilde{v}_{i,j}^{\,2}(t). \Leq{Fpp} The second
inequality in \eqref{Fpp} is valid, since by \eqref{X:est},
\eqref{C:ge} and \eqref{def:ff} \begin{equation} \tilde{q}_{i,j}(t) \|
\tilde{X}_{i,j}(t) \| \ \le\ \tilde{q}_{i,j}(t)
2(n-1)\frac{\|V\|^{(\alpha)}}{ \tilde{q}_{\min}^{\alpha+1}(t)} \
\le\ \ev C\frac{\tilde{q}_{i,j}(t)}{\tilde{q}_{\min}^{\alpha+1}(t)}
\delta \ \le\ \frac{\delta}{4}\tilde{v}_{\min}^2(t). \Leq{tq:X} The
third inequality in \eqref{Fpp} follows from $\delta \le1/5$.
There exists a maximal $T\in (0,+\infty]$ so that
\begin{equation}
\big\|\big(\tilde{v}_i(t)-\tilde{v}_j(t)\big) - \big(\tilde{v}_i(0)-\tilde{v}_j(0)\big)\big\|^2
\le \textstyle\frac{1}{6} \tilde{v}_{i,j}^2(0)\qquad \big( t\in [0,T) \big).
\Leq{assum:velocity}
Thus $\big(1-\sqrt{1/6}\big)^2\,\tilde{v}_{i,j}^{\,2}(0)\le \tilde{v}_{i,j}^{\,2}(t)\le
\big(1+\sqrt{1/6}\big)^2\,\tilde{v}_{i,j}^{\,2}(0)$
within this time interval, and by \eqref{Fp}, \eqref{Fpp} this implies
\begin{eqnarray}
F(t)&=&\textstyle F(0)+\int_0^t \big(F'(0)+\int_0^sF''(\tau)d\tau\big)\, ds
\label{F:FFF}\\
&\ge& F(0) +(1-\delta)\tilde{q}_{i,j}(0)\tilde{v}_{i,j}(0) t
+\textstyle\frac{19}{25} \int_0^t\int_0^s \tilde{v}_{i,j}^{\,2}(\tau)
\,d\tau\,ds\nonumber\\
&\ge& \textstyle
\eh \tilde{q}_{i,j}^{\,2}(0)+\frac{4}{5}\tilde{q}_{i,j}(0)\tilde{v}_{i,j}(0)t
+\frac{19}{50} \big(1-\sqrt{1/6} \big)^2\,\tilde{v}_{i,j}(0)^2 t^2. \nonumber
\end{eqnarray}
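The two-sided bound on $\tilde{v}_{i,j}(t)$ used above is just the triangle inequality applied to \eqref{assum:velocity}:
\[
\big|\tilde{v}_{i,j}(t)-\tilde{v}_{i,j}(0)\big|
\le \big\|\big(\tilde{v}_i(t)-\tilde{v}_j(t)\big)-\big(\tilde{v}_i(0)-\tilde{v}_j(0)\big)\big\|
\le \sqrt{1/6}\;\tilde{v}_{i,j}(0)\qquad\big(t\in[0,T)\big).
\]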
Conversely by the first line in \eqref{F:FFF} and \eqref{tq:X} with
$\delta\le 1/5$,
\begin{eqnarray}
F(t)&\le& F(0) +\tilde{q}_{i,j}(0)\tilde{v}_{i,j}(0) t
+\textstyle\frac{21}{20}
\int_0^t\int_0^s \tilde{v}_{i,j}^{\,2}(\tau) \,d\tau\,ds \nonumber \\
&\le& F(0) +\tilde{q}_{i,j}(0)\tilde{v}_{i,j}(0) t +\textstyle
\frac{21}{40} \big(1+\sqrt{1/6}\big)^2\,\tilde{v}_{i,j}^{\,2}(0)t^2. \nonumber
\end{eqnarray}
These two estimates prove both inequalities in
\eqref{propagation:est} for time $t\in[0,T)$.
\item[2.] We start by showing that $T =+\infty$ in
\eqref{assum:velocity}. Define the rescaled time parameter
\[s(t) := \textstyle\frac{\tilde{v}_{i,j}(0)} {2\tilde{q}_{i,j}(0)}
t\,;\qmbox{then by \eqref{propagation:est}} \tilde{q}_{i,j}(t)\ge \tilde{q}_{i,j}(0)\LA s(t)\RA.\]
Note that by definition \eqref{def:ff} of $F^+_{\rm loc}$ the scaling
factors $\textstyle\frac{\tilde{v}_{i,j}(0)} {2\tilde{q}_{i,j}(0)}$
are, up to a factor $1+\delta$, independent of the index pair
$(i,j)$. So by applying \eqref{assum:velocity}, \eqref{X:est}, and
\eqref{def:ff} with \eqref{C:ge} in succession,
\begin{eqnarray*} \lefteqn{
\big\|\big(\tilde{v}_i(t)-\tilde{v}_j(t)\big) - \big(\tilde{v}_i(0)-\tilde{v}_j(0)\big)\big\|^2=}\\
&=&-2\int_0^t\LA \tilde{v}_{i}(\tau)
-\tilde{v}_{j}(\tau) - \big(\tilde{v}_i(0)-\tilde{v}_j(0)\big),\tilde{X}_{i,j}(\tau)\RA\,d\tau\\
&\le& \textstyle
\frac{2}{\sqrt{6}}\,\tilde{v}_{i,j}(0)\int_0^\infty \|\tilde{X}_{i,j}(\tau)\|\,d\tau\\
&\le& \textstyle \frac{4}{\sqrt{6}}\,(n-1)\,\tilde{v}_{i,j}(0)
\|V\|^{(\alpha)}
\int_0^\infty \tilde{q}_{\min}(\tau)^{-\alpha-1}\, d\tau\\
&\le& \textstyle \frac{8}{\sqrt{6}}\, (n-1)\,
\tilde{v}_{i,j}(0)\|V\|^{(\alpha)} \frac{(1+\delta)
\tilde{q}_{\max}(0)}
{\tilde{q}_{\min}^{\alpha+1}(0)\tilde{v}_{i,j}(0)}
\int_0^\infty \LA s\RA^{-\alpha-1}\, ds\\
&\le& \textstyle \frac{\sqrt{6}}{5}\,\tilde{v}_{i,j}^{\,2}(0) \,
\delta
\int_0^\infty \LA s\RA^{-\alpha-1}\, ds\\
&\le& \textstyle \frac{\sqrt{6}}{5} \tilde{v}_{i,j}^{\,2}(0)
\min\!\big(1/5,\frac{\alpha}{4+\alpha}\big)
\frac{\sqrt{\pi} \Gamma(\alpha/2)} {2\Gamma((1+\alpha)/2)}\\
&\le& \textstyle \frac{\sqrt{6}\pi}{50}\tilde{v}_{i,j}^{\,2}(0) <
\tilde{v}_{i,j}(0)^2/6\, , \end{eqnarray*} since
$\min\!\big(1/5,\frac{\alpha}{4+\alpha}\big) \frac{\sqrt{\pi}
\Gamma(\alpha/2)} {2\Gamma((1+\alpha)/2)}$
attains its maximal value $\pi/10$ for $\alpha=1$. This shows that
in \eqref{assum:velocity} $T=+\infty$. Thus by
\eqref{assum:velocity} the velocity differences stay bounded away
from zero ($\tilde{v}_{i,j}(t)\ge \eh \tilde{v}_{i,j}(0)>0$ for all
$t\ge0$) so that the initial condition $x_0\in F^+_{\rm loc}$ is in
$F^+$.
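As a check of the constant used above, at the maximizing value $\alpha=1$ one has
\[
\min\Big(\frac15,\frac{\alpha}{4+\alpha}\Big)\,
\frac{\sqrt{\pi}\,\Gamma(\alpha/2)}{2\,\Gamma((1+\alpha)/2)}\bigg|_{\alpha=1}
= \frac15\cdot\frac{\sqrt{\pi}\,\Gamma(1/2)}{2\,\Gamma(1)}
= \frac15\cdot\frac{\pi}{2}=\frac{\pi}{10}\,,
\]
and indeed $\frac{\sqrt{6}}{5}\cdot\frac{\pi}{10}=\frac{\sqrt{6}\,\pi}{50}\approx 0.154<\frac16$.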
\item[3.] Let $x_0\in F^+$. By definition, $v^+(x_0)$ exists and
$v^+ (x_0) \notin \Delta$. It follows that for any
$\delta\in(0,\delta_0]$ there exists a time $t_0$ such that \begin{equation}
\|\tilde{v}_k(t)-v^+_k(x_0)\| \le \ea\,\delta \,
\overline{v}^+_{\min}(x_0) \qquad (k\in N,\,t\ge t_0). \Leq{v:ofv:diff}
In particular
\[\tilde{v}_{\min}(t)\ge (1-\ev\delta)\, \overline{v}^+_{\min}(x_0) > 0 \qquad (t\ge t_0).\]
As
$ \big\|\big( \tilde{q}_{i}(t)-\tilde{q}_{j}(t)\big) -\int_{t_0}^t
(\tilde{v}_i(s)-\tilde{v}_j(s))\,ds\big\| = \tilde{q}_{i,j}(t_0)$,
\begin{equation} \big\| \big( \tilde{q}_{i}(t)-\tilde{q}_{j}(t)\big)\, -\,
(t-t_0)\big(\overline{v}_i(x_0)-\overline{v}_j(x_0)\big)\big\| \le \delta/4 \,
(t-t_0) \overline{v}^+_{\min}(x_0) + \tilde{q}_{i,j}(t_0).
\Leq{near:affine} So
$v_{\min}(x)^2> C \frac{q_{\max}(x)}{q_{\min}(x)^{\alpha+1}}$ for
$x:=\Phi(t,x_0)$, $t\ge t_0$ large. This is the first condition in
the definition \eqref{def:ff} of $F^+_{\rm loc}$.
Concerning the second condition, similarly by \eqref{near:affine},
for $t$ large
\[\LA \tilde{v}_{i}(t)-\tilde{v}_{j}(t),
\tilde{q}_{i}(t)-\tilde{q}_{j}(t)\RA\ge
(1-\delta)\tilde{v}_{i,j}(t)\tilde{q}_{i,j}(t)\]
and, for the third condition,
\[(1+2\delta) \frac{\tilde{q}_{k,l}(t)}{\tilde{v}_{k,l}(t)} >
\frac{\tilde{q}_{i,j}(t)}{\tilde{v}_{i,j}(t)} \qquad(\,i<j,k<l \in
N).\]
This shows that $\Phi_t (x_0)\in F^+_{\rm loc}$ for all $t$ sufficiently
large. \hfill $\Box$
\end{enumerate}
\section{Regularity of the asymptotic velocity}
We move on to the
regularity of the asymptotic velocity map $v^+ : x \mapsto v^+ (x)$.
\begin{theorem}[{{\cite[Theorem 5.4.1]{DG}}}]
Let the potential $V \in C^2(\bR^{dn},\bR)$ be long range. Then the
asymptotic velocity $v^+(x)$ exists for all $x \in P$.
\end{theorem}
The map $v^+ : P \to \bR^{dn}$ is Borel-measurable, but may be
discontinuous.
\begin{example}[Discontinuity of the asymptotic velocity]
\label{rem:discontinuous:as}%
\quad\\
Take $d=1$, $n=2$ and a non-negative pair potential
$V_{1,2}\in C^2_{\rm c}(\bR,\bR)$ with compact support and a
unique maximum $V_{1,2}(0)>0$. The velocity along any trajectory is
constant after some time. So, the map $v^+$ is defined on the whole
phase space $\bR^2$ and has a discontinuity at the hyperbolic
equilibrium $x=0$ and nowhere else.
\hfill $\Diamond$
\end{example}
We will see that in restriction to the free region $F^+$ the map
$v^+$ is continuous and even differentiable. We will use the notation
\[p^+(x_0) = {\cal M} v^+(x_0)= \lim_{t\rightarrow\infty}p(t,x_0).\]
\begin{theorem}
\label{thm:both:moeller}
\quad\\
Let $V$ be an $(\alpha,k)$--potential.
\begin{enumerate}[1.]
\item \label{Statement1} The map $x_0 \mapsto v^+ (x_0)$ is a
$C^{k-1}$ map $F^+ \to \bR^{dn}$.
\item \label{Statement2}
Quantitatively, if $x_0 = (p_0,q_0) \in F^+_{\rm loc}$, for
multi-indices $\delta:=(\beta,\gamma)\in\bN_0^{dn}\times \bN_0^{dn}$
with $|\delta|\equiv |\beta|+|\gamma| \le k - 1$ and
partial derivatives $\pa^\delta_{x_0}:=\pa^\beta_{p_0}\pa^\gamma_{q_0}$, we get
\begin{equation}
\pa^\delta_{x_0} (p^{+} (x_0) - p_0) =
\cO\l(\|V\|^{(\alpha,k)} v_{\min}(x_0)^{-1-|\beta|} \LA
q_{\min}(x_0)\RA^{-\alpha-|\gamma|} \ri).
\Leq{O:p}
\end{enumerate}
\end{theorem}
\begin{remark}[Variants]\quad\\
The constant in the order estimate \eqref{O:p} is independent of
$V$. Using the first condition in the definition \eqref{def:ff} of
$F^+_{\rm loc} $, we obtain the weaker estimate
\[\pa^\delta_{x_0} (p^{+} (x_0) - p_0) =
\cO\l( v_{\min}(x_0)^{+1-|\beta|} \LA q_{\min}(x_0)\RA^{-|\gamma|}
\ri).\]
Similarly, instead of \eqref{Moe:E}, we would have the weaker
estimate
\[\pa^\delta_{X_0} (Q_0-q_0) =
\cO\l( v_{\min}(X_0)^{-|\beta|} \LA q_{\min}(X_0)\RA^{1-|\gamma|}
\ri).\]
These estimates depend on the norm $\|V\|^{(\alpha,k)}$ of the
potential only indirectly, via the phase space region $F^+_{\rm loc} $
where they apply.\hfill $\Diamond$
\end{remark}
\textbf{Proof of Theorem \ref{thm:both:moeller}:}\\
We use the shorthands $q_{\min}:= q_{\min}(x_0)$, $v_{\min}:= v_{\min}(x_0)$
and continue to use the notation
$\tilde{f}(t):=f\circ\Phi(t,x_0)$ for a phase space function $f$.\\
$\bullet$
To prepare for the proof of Claim \ref{Statement1},
we first derive estimates for the initial value problem with long range potentials.
As $V$ is an $(\alpha,k)$--potential, the flow
\[\Phi\in C^{k-1}(\bR\times P,P).\]
For derivatives $\pa^\delta_{x_0}$ w.r.t.\ initial conditions $x_0$ with $1\le |\delta|\le k-1$,
like in \cite[section 6]{Kn} we use the integral representation of the trajectory
\[ q(t,x_0)= q_0+
{\cal M}^{-1}\l(\textstyle t p_0 - \int_0^t\!\! \int_0^s \nabla V\big(q(\tau,x_0)\big)\,d\tau \,ds \ri)
\qquad \big(t\in[0,\infty)\big).\]
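Differentiating twice in $t$, this integral representation is equivalent to Newton's equations with the given initial data:
\[
{\cal M}\,\ddot{q}(t,x_0) = -\nabla V\big(q(t,x_0)\big)\qmbox{,}
q(0,x_0)=q_0\,,\quad \dot{q}(0,x_0)={\cal M}^{-1}p_0\,.
\]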
By a standard dominated convergence argument (see, {\em e.g.},
{\sc Elstrodt} \cite[Thm.\ IV.5.7]{El}) its deviation from free motion is controlled by
\begin{eqnarray}
\lefteqn{\pa^\delta_{x_0}
\big( q(t,x_0) -(q_0+t {\cal M}^{-1}p_0)\big)= -\int_0^t \!\! \int_0^s \!
{\cal M}^{-1} \pa^\delta_{x_0}\nabla V\big( q(\tau,x_0)\big)\,d\tau \,ds
=} &&\label{implicit:equation}\\
&&\hspace*{-8mm} - \! \sum_{N=1}^{|\delta|} \label{eq:schlange}
{\cal M}^{-1}\hspace*{-7mm}
\sum_{\stackrel{\delta^{(1)}+\ldots+\delta^{(N)} = \delta} {|\delta^{(i)}|>0}}\!\!\!\!\!
\int_0^t\!\! \int_0^s \!
D^N\nabla V\big(q(\tau,x_0)\big)
\l(\pa^{\delta^{(1)}}_{x_0}q(\tau,x_0),\ldots,
\pa^{\delta^{(N)}}_{x_0}q(\tau,x_0)\ri) d\tau \,ds .\nonumber
\end{eqnarray}
Due to the $N=1$ term this is only an implicit equation for $\pa^\delta_{x_0}q(t,x_0)$.
To transform it into an explicit equation, we
thus consider for $\lambda>0$ the real Banach space
$\big(\widehat{\cC}, \|\cdot\|_{\lambda}\big)$,
\begin{equation}
\widehat{\cC} := \l\{ w\in C\l([0,\infty),\bR^{dn}\ri) \l| \,
\|w\|_{\lambda}:=\sup_{t\geq0} \| w(t)\|/\langle \lambda t\rangle <
\infty \ri.\ri\},
\Leq{hat:C}
noting that $\widehat{\cC}$ is independent of the choice of $\lambda$.
The linear operator ${\cal Q}\equiv {\cal Q}_{x_0}$,
\begin{equation}
{\cal Q}(w)(t):=
{\cal M}^{-1} \int_0^t \!\! \int_0^s D\nabla V\big(q(\tau,x_0)\big)w(\tau)\,d\tau \,ds
\qquad(t\geq 0),
\Leq{Q:op}
maps $\widehat{\cC}$ into itself,
and we want to prove that for all $x_0\in F^+_{\rm loc}$
the operator norm of ${\cal Q}_{x_0}$ is strictly smaller than one for a
suitable $\lambda$. Using \eqref{propagation:est} and \eqref{V:k:norm},
the operator norm is estimated by
\begin{eqnarray*}
\|{\cal Q}\|_{\lambda}\!\!
&:=& \sup_{w:\,\|w\|_{\lambda}=1} \! \! \|{\cal Q}(w)\|_{\lambda}
\le {\|V\|^{(\alpha,2)}}
\sup_{t\ge 0} \frac{ \int_0^t \!\! \int_0^s
( q_{\min}+\eh v_{\min}\tau )^{-2-\alpha}
\langle \lambda \tau\rangle\,d\tau \,ds}{\langle \lambda t\rangle}\\
&\le& \frac{\|V\|^{(\alpha,2)}}{\lambda^2 q_{\min}^{2+\alpha}}
\sup_{t\ge 0} \frac{ \int_0^t \!\! \int_0^s
\langle \tau\rangle^{-1-\alpha}\,d\tau \,ds}{\langle t\rangle}
\le 4 (1+1/\alpha) \frac{\|V\|^{(\alpha,2)}}{ v_{\min}^2 q_{\min}^{\alpha}}
\end{eqnarray*}
using Lemma \ref{lem:integral:estimate} in the last inequality and
setting
\[\lambda:= \eh v_{\min}/q_{\min} .\]
By Definition \eqref{def:ff} of $F^+_{\rm loc}$ the operator is a contraction:
\[\|{\cal Q}\|_{\lambda}\le \frac{4(1+1/\alpha)}{16\pi dn\,\max(1,1/\alpha)}\le
\frac{1}{2\pi dn} < 1.\]
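Here the middle inequality holds in both ranges of $\alpha$, since
\[
\frac{1+1/\alpha}{\max(1,1/\alpha)}=
\begin{cases}
1+\alpha\le 2 & (0<\alpha\le 1),\\
1+1/\alpha\le 2 & (\alpha\ge 1),
\end{cases}
\qmbox{so}
\frac{4(1+1/\alpha)}{16\pi dn\,\max(1,1/\alpha)}\le\frac{8}{16\pi dn}=\frac{1}{2\pi dn}\,.
\]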
Thus \eqref{implicit:equation} can be transformed into
\begin{eqnarray}
\lefteqn{(\idty+ {\cal Q})(\pa^\delta_{x_0}q)(t) = \pa^\delta_{x_0}(q_0+t {\cal M}^{-1}p_0)\ -
\ {\cal M}^{-1}\times}&&
\label{oneminq}\\
&&\hspace*{-6mm}\sum_{N=2}^{|\delta|}
\sum_{\stackrel{\delta^{(1)}+\ldots+\delta^{(N)}=\delta}{|\delta^{(i)}|>0}}\!\!\!\!\!
\int_0^t \!\! \int_0^s
D^{N} \nabla V\big( q(\tau,x_0)\big)
\l(\pa^{\delta^{(1)}}_{x_0} q(\tau,x_0),\ldots,
\pa^{\delta^{(N)}}_{x_0} q(\tau,x_0)\ri) d\tau \,ds\nonumber
\end{eqnarray}
with the invertible operator $\idty + {\cal Q}$ on $\widehat{\cC}$.
As on the r.h.s.\ of (\ref{oneminq}) only partial derivatives
of order $|\delta^{(i)}|< |\delta|$ appear, we can perform an
induction in $|\delta|$.\\
Assume that for all $\delta'=(\beta',\gamma')\in\bN_0^{dn}\times \bN_0^{dn}$
with $1 \le |\delta'|\le |\delta|-1$
\begin{equation}
\|\pa^{\delta'}_{x_0}q(\cdot,x_0) \|_\lambda =
\cO\l( v_{\min}(x_0)^{-|\beta'|} \, q_{\min}(x_0)^{1-|\gamma'|}\ri).
\Leq{ass:delta:bar}
This assumption is satisfied for $|\delta'|=1$, since then the
sum on the r.h.s.\ of \eqref{oneminq} equals zero.
Then by \eqref{propagation:est} and \eqref{ass:delta:bar} the terms
on the r.h.s.\ of (\ref{oneminq}) fulfill
\begin{align}
\MoveEqLeft
\l\|{\cal M}^{-1} \int_0^t\!\! \int_0^s \! D^N \nabla V\big(q(\tau,x_0)\big)
\l(\pa^{\delta^{(1)}}_{x_0}q(\tau,x_0),\ldots,
\pa^{\delta^{(N)}}_{x_0}q(\tau,x_0)\ri) d\tau \,ds\ri\| \le
\label{so:wie}\\
&\le
\|V\|^{(\alpha,N+1)}
\int_0^t\!\! \int_0^s \! q_{\min}\big(\Phi(\tau,x_0)\big)^{-\alpha-N-1}
\prod_{i=1}^N \|\pa^{\delta^{(i)}}_{x_0}q(\tau,x_0)\| \,\, d\tau \,ds \nonumber\\
&\le
\|V\|^{(\alpha,N+1)}\ \times \nonumber\\
&\ \int_0^t\!\! \int_0^s \! \big(q_{\min}(x_0)+\eh v_{\min}(x_0)\tau \big)^{-\alpha-N-1}
\prod_{i=1}^N \l(\|\pa^{\delta^{(i)}}_{x_0}q(\cdot,x_0)\|_\lambda \LA \lambda\tau \RA\ri)
\,\, d\tau \,ds \nonumber\\
&\le
C_0 \|V\|^{(\alpha,N+1)} \, v_{\min}(x_0)^{-|\beta|} \, q_{\min}^{-\alpha-|\gamma|-1}
\int_0^t\!\! \int_0^\infty \LA \lambda \tau\RA^{-\alpha-1} \, d\tau\,ds \ \nonumber\\
&\le
C_1 \|V\|^{(\alpha,N+1)} \,
v_{\min}^{-2-|\beta|} \, q_{\min}^{1-\alpha-|\gamma|}\LA \lambda t\RA. \nonumber
\end{align}
For $x_0\in F^+_{\rm loc}$ that term is bounded above by
(see \eqref{def:ff}) %
$C_\delta \, v_{\min}^{-|\beta|} q_{\min}^{1-|\gamma|}\LA \lambda t\RA$,\footnote{$\delta$
of $C_\delta$ does not refer to the multi-index $\delta\in \bN_0^{2dn}$, but to the constant in
Theorem \ref{thm:final:free}. It is chosen as
$\delta:=\min(\delta_0,\alpha-1)$ in the short range case ($\alpha>1$)
and $\delta:=\delta_0$ if $0<\alpha\le 1$.}
proving the induction step for \eqref{ass:delta:bar}.\\
$\bullet$
We first prove the momentum estimate in \eqref{O:p} in the case of no partial
derivatives w.r.t.\ initial conditions ($\delta=0$); this case holds for all long range potentials.
By the propagation estimate \eqref{propagation:est}
uniformly in $t\ge 0$
\begin{eqnarray}
\lefteqn{\| {\cal M}^{-1}(\tilde{p}(t)-\tilde{p}(0))\| } \nonumber\\
&\le& \int_0^t \|{\cal M}^{-1}\nabla V(\tilde{q}(s))\|\,ds
\le \|V\|^{(\alpha,1)}\int_0^t \big( q_{\min} + \eh v_{\min}\ s \big)^{-\alpha-1}\,ds \nonumber\\
&\le&
\frac{\|V\|^{(\alpha,1)}}{\eh v_{\min}} \int_0^\infty \big( q_{\min} + s \big)^{-\alpha-1}\,ds
= \frac{2\,\|V\|^{(\alpha,1)}}{\alpha \, v_{\min} \, q_{\min}^{\alpha}}.
\label{finer:asym:momentum:est}
\end{eqnarray}
Lemma \ref{lem:integral:estimate} was applied in the last step.
By the same estimate, which is locally uniform in $x_0$,
\[\overline{v}^+(x_0) = {\cal M}^{-1} \overline{p}^+(x_0) = {\cal M}^{-1} \lim_{t\rightarrow\infty}\tilde{p}(t)\]
exists and is continuous in $x_0\in F^+$.\\
$\bullet$
For a multi-index $\delta\in\bN_0^{2dn}$ with $1\le |\delta|\le k-1$, the momentum estimate
\begin{equation}
\| {\cal M}^{-1} \pa^{\delta}_{x_0}(\tilde{p}(t)-\tilde{p}(0))\| \le
C_2 \|V\|^{(\alpha,k)}
v_{\min}^{-1-|\beta|} q_{\min}^{-\alpha-|\gamma|}
\le C_{\delta} \, v_{\min}^{+1-|\beta|} q_{\min}^{-|\gamma|}
\Leq{finer:momentum:deriv}
is derived like the position estimate in and after \eqref{so:wie}.
We infer that at $x_0\in F^+_{\rm loc}$ the asymptotic velocity
$\overline{v}^+$ is $k-1$ times continuously differentiable.
This proves item~\ref{Statement2}.\\
As the flow $\Phi\in C^{k-1}(\bR\times P,P)$ and by Property 3.\ of Thm.\
\ref{thm:final:free}, the same statement is true for $x_0\in F^+$.
This proves item~\ref{Statement1}.
\hfill$\Box$\\[2mm]
In \cite[Lemma II.2]{He}, {\sc Herbst} noted for $n=2$ that for long
range potentials the limit
$\lim_{t\to \infty}\big(q_2(t,x)-q_1(t,x)\big)$ exists, if the
asymptotic velocities coincide. His -- perhaps astonishing -- result
immediately generalizes to the $n$--body case. To see this, we modify
\eqref{def:free}, setting
\begin{equation} \widehat{F}^\pm:= \big\{x\in \widehat{P}\mid v^{\pm} (x) \mbox {
exists, and } v^\pm(x) \notin \Delta \big\}.
\Leq{P:hat:free:pm}
\begin{lemma}\quad \label{lem:q:minus:q}\\
For a long range potential $V$, consider initial conditions
$x^{(i)}_0\equiv\big(p^{(i)}_0,q^{(i)}_0\big)\in\widehat{F}^\pm$ \
$(i=1,2)$, whose asymptotic momenta $\overline{p}^\pm\big(x^{(i)}_0\big)$
coincide.
Then
\begin{equation}
a^\pm:=\lim_{t\to\pm\infty} \big(q(t,x^{(2)}_0)-q(t,x^{(1)}_0)\big)
\Leq{a:pm}
exists. More precisely, although the estimate
$p\big(t,x_0\big) - \overline{p}^\pm\big(x_0\big) = \cO(|t|^{-\alpha})$
is in general optimal in the $t\to\pm\infty$ limit,
\begin{equation}
p\big(t,x^{(2)}_0\big) - p\big(t,x^{(1)}_0\big) =
\cO(|t|^{-1-\alpha})\mbox{ and }
q\big(t,x^{(2)}_0\big)-q\big(t,x^{(1)}_0\big)=a^\pm+\cO(|t|^{-\alpha}).
\Leq{pp:qq}
Finally, if $a^\pm=0$, then $x^{(1)}_0=x^{(2)}_0$.
\end{lemma}
\textbf{Proof:}\\
$\bullet$ To begin with, the estimate
$p(t,x_0) - \overline{p}^\pm(x_0) = \cO(|t|^{-\alpha})$
follows from \eqref{finer:asym:momentum:est} and the propagation
estimate \eqref{propagation:est}, and its optimality from
\[ \eh \big\langle \overline{p}^\pm(x_0), {\cal M}^{-1}\overline{p}^\pm(x_0)\big\rangle
= \eh \big\langle p(t,x_0), {\cal
M}^{-1}p(t,x_0)\big\rangle + V\big(q(t,x_0)\big). \]
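Indeed, along the orbit energy conservation gives
\[
\big\langle p(t,x_0),{\cal M}^{-1}p(t,x_0)\big\rangle
-\big\langle \overline{p}^\pm(x_0),{\cal M}^{-1}\overline{p}^\pm(x_0)\big\rangle
= -2\,V\big(q(t,x_0)\big) = \cO(|t|^{-\alpha}),
\]
since $q_{\min}\big(\Phi(t,x_0)\big)$ grows linearly in $|t|$; so for $V\neq 0$ the rate $\cO(|t|^{-\alpha})$ cannot in general be improved.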
$\bullet$
The second estimate in \eqref{pp:qq} and \eqref{a:pm} follow by integration
from the first estimate in \eqref{pp:qq}.\\
$\bullet$
To derive the first estimate in \eqref{pp:qq} and the last statement, we argue as in \cite[Lemma II.2]{He}.
\hfill $\Box$\\[2mm]
We can also apply Theorem \ref{thm:both:moeller}, which is formulated for
non-singular potentials, to the unregularized Hamiltonian flow with an
$(\alpha,k)$--potential $V:\widehat M \to \bR$.
The point is that some initial conditions $x$ end in a collision and thus
have no well-defined asymptotic velocity. As in definition~\ref{def:free},
the phase space regions $\widehat{F}^\pm\subseteq \widehat{P}$ are open.
The escape times $T^\pm (x_0)$ for initial conditions $x_0$ lying in
$\widehat{F}^\pm$ equal $\pm \infty$,
whereas $T^-$ ($T^+$) is still upper (respectively lower)
semicontinuous.
\begin{corollary}[Asymptotic velocities for singular
potentials]\quad\label{cor:hat:free}\\[-6mm]
\begin{enumerate}[1.]
\item
For $\alpha>0$ and $(\alpha,k)$--potentials $V\in C^k(\widehat M,\bR)$,
see \eqref{V:k:norm}, the restricted asymptotic velocity maps
$\overline{v}^\pm$ are $C^{k-1}$ over $\widehat{F}^\pm$.
\item
So for $(-\alpha)$--homogeneous potentials, that is, for
\begin{equation}
\textstyle
V(q) := \sum_{1\le i<j\le n} \frac{I_{i,j}}{\|q_i-q_j\|^\alpha},
\Leq{hom:pot}
the asymptotic velocities $\overline{v}^\pm$ are smooth on $\widehat{F}^\pm$.
\end{enumerate}
\end{corollary}
\textbf{Proof:}
\begin{enumerate}[1.]
\item
The flow
is $C^{k-1}$ on its domain, and if $x_0\in F^+$
then there is a time $t\ge 0$
so that $\Phi_t (x_0)\in F^+_{\rm loc}$. (The norm $\|V\|^{(\alpha,2)}$
appearing in Thm.\ \ref{thm:final:free} needs to be appropriately
re-defined to account for blow-up along $\Delta$.)
Then by Theorem \ref{thm:both:moeller}.1 the
restriction of the asymptotic velocity $v^+$
to $F^+$ is a $C^{k-1}$ map with values in $\bR^{dn}$.
\item
$V$ in \eqref{hom:pot} has finite $\|V\|^{(\alpha,k)}$ norm for any $k\in \bN$.
\hfill $\Box$
\end{enumerate}
\section{The M\o ller semi-conjugacy (short range)}
\label{sect:moeller}
We will now show that for a short range potential the flow and the
free flow are semi-conjugate, using the so-called M\o ller
transformation.
If the potential is short range ($\alpha>1$ in \eqref{V:decay}),
then we can establish the asymptotics
$$q(t) = at + b + \cO\big(t^{1-\alpha}\big) \qquad \mbox{as} \ t \to + \infty$$
for forward free solutions, see equation \eqref{Moe:E} below.
The vector $a$ is $v^+ (x_0)$ if $x_0 := ({\cal M}\dot{q}(0), q(0))$ is the initial
condition for $q(t)$. The vector $b$ is something like the ``impact
parameter'' found in standard treatments of classical scattering. We
would like to think of $a, b \in \bR^{dn}$ as initial conditions at
$t = + \infty$.
One way to formalize this idea is via the M\o ller transformation,
which compares the given flow to that of a free particle.
\begin{definition}
\label{def:moller}
The \emph{(forward) M\o{}ller transformation}, where the (pointwise) limit
exists, is the map
$\Omega= \Omega_+ := \lim_{t \to + \infty} \Phi_{-t}
\circ\Phi^0_{t}: P \dashedrightarrow P$.
Similarly the \emph{backward M\o ller transformation} is
$\Omega_- := \lim_{t \to - \infty} \Phi_{-t} \circ\Phi^0_{t}$, where
the limit exists.
\end{definition}
See figure \ref{fig:Moeller} for a depiction of the forward and
backward M\o{}ller transformations. We have continued to use the
broken arrow notation in the definition of the M\o{}ller
transformation to allow ourselves vagueness about its domain. We
repair this vagueness now. Moreover, these transformations provide us
with a semi-conjugacy in the short range case.
\begin{theorem}[M\o{}ller transformation]\label{thm:moeller}\quad\\%
If the $(\alpha,k)$--potential $\,V$ is short range
($\alpha>1$ in definition \ref{def:alpha-k}), then
\begin{enumerate}[1.]
\item \label{Statement3}
For $F^+_0$ and $F^+$ defined in \eqref{P0free} and in
\eqref{P:free:pm:B}, respectively, the M\o ller transformation
\begin{equation} \Omega = \lim_{t \to + \infty} \Phi_{-t}
\circ\Phi^0_{t}: F^+_0 \to F^+
\Leq{M:T}
exists and is a $C^{k-1}$ symplectomorphism intertwining $\Phi_t$
with $\Phi^{(0)}_t$:
\begin{equation}
\Omega \circ \Phi^{(0)}_t =\Phi_t\circ\Omega \qquad (t\in \bR).
\Leq{intertwining}
\item \label{Statement4}
If $|\delta| \le k-1$, $x_0 = (p_0, q_0) \in F^+_{\rm loc} $
and $\Omega (x_0) = X_0=(P_0,Q_0)$ then the inverse M\o ller transformation $\Omega^{-1}$
satisfies the regularity estimates:
\begin{align}
\pa^\delta_{X_0} (P_0-p_0) &=
\cO\l( \|V\|^{(\alpha,k)}
v_{\min}(X_0)^{-1-|\beta|} \LA
q_{\min}(X_0)\RA^{-\alpha-|\gamma|}
\ri) ,
\label{Moe:E:p} \\
\pa^\delta_{X_0} (Q_0-q_0) &=
\cO\l( \|V\|^{(\alpha,k)}
v_{\min}(X_0)^{-2-|\beta|} \LA
q_{\min}(X_0)\RA^{1-\alpha-|\gamma|}
\ri).
\label{Moe:E}
\end{align}
\end{enumerate}
\end{theorem}
\begin{figure}[h]
\begin{center}
\includegraphics[width=100mm]{moeller1.pdf}
\includegraphics[width=100mm]{moeller2.pdf}
\end{center}
\caption{Above: Potential $V$. Below: Level lines of the
Hamiltonian $H$ and of the kinetic energy $T$ (dashed); and the
corresponding M\o ller transformations.}
\label{fig:Moeller}
\end{figure}
We will deal with long range $(\alpha,k)$--potentials with $\alpha\in
(1/2,1]$ later; in Theorem \ref{thm:Dollard:Moeller} we will show the
existence of modified `Dollard' M\o ller transformations, indicating how
to generalize this to $\alpha\in (0,1]$.
\medskip
Let us pause to see how Theorems \ref{thm:final:free} and
\ref{thm:moeller} are related. Suppose that
$\Omega( A, B) = x_0 \in F^+$. We claim that $A = {\cal M} v^+ (x_0)$,
where $v^+ (x_0) \notin \Delta$ is the asymptotic velocity of $x_0$ from
definition~\ref{def:asymptotic:velocity}, described in
Theorem~\ref{thm:main1}. Inverting, we have
$\Omega^{-1} (x_0) = (A,B)$ and
$\Omega^{-1} = \lim_{t \to \infty} \Phi ^0 _{-t} \circ\Phi_{t}$.
Write $\Phi_t (x_0) = (p(t, x_0), q(t,x_0))$ and set
$p(\infty) ={\cal M} v^+ (x_0) = A$. Theorem \ref{thm:main1} tells us that
for $t$ large we have $p(t, x_0) = A + o(1)$. Now the momentum $p$ is
constant under the free flow so that for large $t$ we have
$\Phi ^0 _{-t} \circ\Phi_{t} (x_0) = ( A + o(1), Q(x_0; t))$. Letting
$t \to \infty$ kills the $o(1)$ term and yields the claim:
$\Omega ^{-1} (x_0) = (A, *)$.\\[2mm]
\textbf{Proof of Theorem \ref{thm:moeller}:}\\
$\bullet$
We now prove for $(\alpha,k)$--potentials $V$ of short range ($\alpha>1$)
pointwise existence and smoothness properties of the M\o ller transformation.\\
Thus let $X_0=(P_0,Q_0)\in F_0$ and
write $\tilde{Q}(t):= Q_0+{\cal M}^{-1}P_0t$ for the corresponding free
solution. (See \eqref{free:solu}.) Define the map $\cF_{X_0,T}\equiv$
\begin{equation}
\cF:\widehat{\cD} \to C\big([T,\infty), \bR^{dn}\big)\ \mbox{ , } \
(\cF r)(t) = -{\cal M}^{-1}\!\! \int_t^\infty \!\!\! \int _s^\infty \!\!\!\!
\nabla V\big((\tilde{Q}+r)(\tau)\big)\, d\tau\, ds
\Leq{contraction map}
on the complete metric space
\begin{equation}
\hspace*{-2mm}
\widehat{\cD} \equiv \widehat{\cD}_{X_0,T} :=
\Big\{ r\in C\big([T,\infty), \bR^{dn}\big)\ \Big|\
\|r\|:=\sup_{t\ge T}\|r(t)\| \le \eh q_{\min} (X_0)\Big\} .
\Leq{def:DC}
By the short range
assumption on $V$ the map $\cF$ is well-defined, and any function $u = \cF (r)$ in its
image satisfies $\lim_{t\to\infty} u (t) = 0$.\\
We search for solutions $r$ of the fixed point problem $r=\cF_{X_0}(r)$.
Out of such a fixed point $r$ we will build $\Omega (X_0)$.
First, observe that if $r$ is such a fixed point then
\begin{equation}
q:= \tilde{Q} + r
\label{perturbation}
\end{equation} satisfies Newton's equations
$\ddot{q}(t)= - {\cal M}^{-1}\nabla V\big(q(t)\big)$
and is asymptotic to $\tilde Q$.
For $X_0\in F^+_{\rm loc} \subseteq F_0$,
by \eqref{propagation:est} and \eqref{def:DC} the interparticle distances
satisfy $\tilde {q}_{i,j}(\tau)\ge \eh(q_{i,j}+ v_{i,j}\tau)$. Thus, using
\eqref{V:k:norm}
\[ \big\| {\cal M}^{-1}\nabla V\big((\tilde{Q}+r)(\tau)\big) \big\| \
\le \ \|V\|^{(\alpha)}\, \LA \eh(q_{\min} + v_{\min}\tau)\RA^{-1-\alpha}
\qquad (\tau\ge0).\]
So by Lemma \ref{lem:integral:estimate}
$\big\| {\cal M}^{-1}\int_s^\infty \nabla V\big((\tilde{Q}+r)(\tau)\big) \,d\tau \big\|
\le \frac{2 \|V\|^{(\alpha)}}{v_{\min}} \LA\eh(q_{\min} + v_{\min} s)\RA^{-\alpha}$
and
$\|(\cF r)(t)\| \le \frac{2\, \|V\|^{(\alpha)}}{(\alpha-1)v_{\min}^2}
\LA\eh q_{\min}\RA^{1-\alpha}
\le \frac{8d\|V\|^{(\alpha,1)}}{(\alpha-1)v_{\min}^2} (\eh q_{\min})^{1-\alpha}
\le \eh q_{\min}$,\\
as $X_0 \in F^+_{\rm loc}$ and $\delta\le \alpha-1$.
So $\cF$ maps $\widehat{\cD}$ into itself.
Next we show that $\cF$ is a contraction on $\widehat{\cD}$.
So let $r^{(0)}\neq r^{(1)}\in \widehat{\cD}$. Then
\[ \frac {\big\|\cF\big(r^{(0)}\big)-\cF \big(r^{(1)}\big)\big\|} { \|r^{(0)}-r^{(1)}\| }
\le \int_0^\infty \!\!\! \int _s^\infty \!\!\!
\int_0^1\big\| {\cal M}^{-1} D\nabla V\big((\tilde{Q}+r^{(\rho)})(\tau)\big) \big\|
\,d\rho \; d\tau\; ds\]
with $r^{(\rho)}:=(1-\rho)\,r^{(0)}+\rho\, r^{(1)}$.
The right hand side is majorized by
\[ \|V\|^{(\alpha,2)}\int_0^\infty \!\!\! \int _s^\infty
\LA\eh(q_{\min}+ v_{\min} \tau)\RA^{-2-\alpha} \,d\tau\; ds
\le \frac{2\|V\|^{(\alpha,2)}}{\alpha(1+\alpha) v_{\min}^2 q_{\min}^\alpha}
\le \frac{\delta}{16dn} < 1. \]
By Banach's theorem $\cF_{X_0}$ has a unique fixed point $r$. Evaluating
the corresponding solution $\tilde Q (t) + r(t)$ to Newton's equations appropriately
at $t = 0$ yields
the value of the M\o ller transformation on $X_0$.
Indeed we claim that
\[ \Omega (X_0) = \big(P_0+{\cal M} \dot{r}(0),\, Q_0+r(0)\big) . \]
To see this, we approach the problem of approximating $r(t)$ ``from the other end of time''
as follows. Write $\Phi_{-T}\circ\Phi^{(0)}_T(X_0) = \big(P_0 + {\cal M}\, \dot{r}^{(T)}(0),\, Q_0 + r^{(T)}(0) \big)$.\\
Then $r^{(T)} :[0,T] \to \bR^{dn}$, the deviation of the corresponding
solution of Newton's equations from the free solution $\tilde{Q}$,
is the unique fixed point of the map
\[\cF^{(T)}:\widehat{\cD}^{(T)} \to \widehat{\cD}^{(T)}
\mbox{ , }\,
(\cF^{(T)} r^{(T)})(t) = -{\cal M}^{-1}\! \int_t^T \!\!\! \int _s^T \!\!\!
\nabla V\big((\tilde{Q}+r^{(T)})(\tau)\big) \,d\tau\; ds\]
on
\[\widehat{\cD}^{(T)} := \l\{ r\in C\big([0,T], \bR^{dn}\big) \mid
\max_{t\in[0,T]}\|r(t)\| \le \eh q_{\min} (X_0), r(T)=\dot{r}(T)=0\ri\} \!,\]
and by uniqueness of the original fixed point $r$ we must have that
\[r(t)=\lim_{T\to+\infty} r^{(T)}(t)\qmbox{,}\dot{r}(t) =
\lim_{T\to+\infty} \dot{r}^{(T)}(t) \qquad (t\ge0).\]
To see that the M\o ller transformation is defined on all of $F^+_0$,
observe that for any $X_0\in F^+_0$ and all large
enough times $h$ we have $\Phi^{(0)}_h(X_0)\in F^+_{\rm loc}$, at which point
we have just seen that $\Omega\big(\Phi^{(0)}_h(X_0) \big)$ exists. Then
observe by inspecting the definition of the limits that
$\Omega (X_0) = \Phi_{-h} \circ \Omega \circ \Phi^{(0)}_h (X_0)$. \\
As a locally uniform limit, the M\o ller transformation is continuous on
$F^+_0$. The intertwining relation \eqref{intertwining}
follows, since the flows are $\bR$--actions, or alternatively by
re-arranging the just-proved relationship
$\Omega = \Phi_{-h} \circ \Omega \circ \Phi^{(0)}_h$, which, in a
neighborhood of any $X_0$, is valid for all sufficiently large $h$.\\[2mm]
$\bullet$
To investigate the degree of smoothness of $\Omega^+$,
instead of the operator \eqref{Q:op} related to the initial value problem,
we now use the operator ${\cal P}\equiv {\cal P}_{X_0}$, with
\begin{equation}
{\cal P}(w)(t) :=
-{\cal M}^{-1} \int_t^\infty \!\! \int_s^\infty D\nabla V\big(\tilde{Q}(\tau)\big)\, w(\tau)\,d\tau \,ds
\qquad(t\geq 0),
\Leq{P:op}
on the Banach space $C^b([0,\infty),\bR^{dn})$ of bounded curves.
Its operator norm is majorized by
\begin{align}
\|{\cal P}_{X_0} \|
&\le\|V\|^{(\alpha,2)}\int_0^\infty \!\! \int_s^\infty \!\! \LA
\tilde{Q}(\tau)\RA^{-2-\alpha} d\tau \,ds \nonumber\\
&\le \|V\|^{(\alpha,2)}\int_0^\infty \!\! \int_s^\infty \!\! \
\LA \eh(q_{\min} + v_{\min} s) \RA^{-2-\alpha}\! d\tau \,ds \nonumber\\
&\le \frac{2^{2+\alpha}}{\alpha} \|V\|^{(\alpha,2)} q_{\min}^{-\alpha} v_{\min}^{-2}
\le \frac{2^{2+\alpha}}{\alpha}\frac{\delta}{16dn} < 1 \nonumber
\end{align}
if $\alpha \le3$ (for larger $\alpha$ one uses the forward flow into
$F^+_{\rm loc}$, where the estimates become better). So we can invert
${\rm Id} - {\cal P}_{X_0}$ in order to solve for $|\delta| \le k-1$
\begin{eqnarray}
\lefteqn{\pa^\delta_{X_0} r(t,X_0) = -\int_t^\infty \!\! \int_s^\infty
{\cal M}^{-1} \pa^\delta_{X_0}\nabla V\big(q(\tau,X_0) \big)\,d\tau \,ds
= - \! \sum_{N=1}^{|\delta|} {\cal M}^{-1} \, \times} &&
\label{r:est}\\
&&
\hspace*{-6mm}
\times\hspace*{-4mm} \sum_{\stackrel{\delta^{(1)}+\ldots+\delta^{(N)}
= \delta} {|\delta^{(i)}|>0}}
\int_t^\infty \!\! \int_s^\infty\! D^N\nabla V\big(q(\tau,X_0)\big)
\l(\pa^{\delta^{(1)}}_{X_0}q(\tau,X_0),\ldots,
\pa^{\delta^{(N)}}_{X_0}q(\tau,X_0)\ri) d\tau \,ds \nonumber
\end{eqnarray}
with the shorthand $q=\tilde{Q}+r$ in a way similar to \eqref{oneminq}.
This shows \eqref{Moe:E} and finishes the proof of Claim \ref{Statement4}.\\
As a $C^1$--limit of the symplectomorphisms
$\Phi_{-t}\circ\Phi^{(0)}_t$, the M\o ller transformation $\Omega^+$ is a
symplectomorphism onto its image.
But this image coincides with $F^+$, by its mere definition \eqref{P:free:pm:B} and by reversing the
roles of the two flows.\\
So Claim \ref{Statement3} is also true.
\hfill $\Box$
\begin{remark}[M\o ller transform]\quad\\
The standard reference for the {\em M\o ller} transform is section 5
of \cite{DG} by {\sc Dere\-zi\'{n}ski} and {\sc G{\'e}rard}.
In the case of finite-range interactions, {\sc Hunziker}
\cite{Hu1,Hu2} proved that the M\o ller transform exists and used it
to establish {\em asymptotic completeness} of finite range
interactions. This asymptotic completeness includes the
decomposition of solutions into independent clusters where `cluster'
has the meaning alluded to above.
Hunziker viewed the M\o ller
transform as the classical version of the quantum M\o ller
transform, or wave map, defined as the limit of
$\exp(-it H) \exp(it H_0)$ as $t \to \infty$. Here $H = H_0 + V$ is
the quantum version of our Hamiltonian so that $H_0$ corresponds to
a multiple of the Laplacian on $\bR^{dn}$.\\
Soon afterwards, {\sc
Simon} \cite{Sim} used the method to establish asymptotic
completeness for the classical two-body problem with short range
interactions provided the second derivative of the potential decays
appropriately. In an appendix Simon exhibited the necessity of his
second derivative decay conditions by constructing a potential for
which his decay conditions failed and which admits two distinct
hyperbolic solutions asymptotic to the same free solution. Thus
$\Omega^{-1} (x_0) = \Omega^{-1} (y_0)$ for $x_0, y_0$ not lying on
the same orbit, so that whatever $\Omega$ is, it is at least
``two-valued'' and not a well-defined map.\\
{\sc Derezi\'{n}ski} and {\sc G{\'e}rard}, among many other results, established the existence
and invertibility of the M\o{}ller transformation for potentials of
superexponential decrease in \cite[sect.\ 5.10]{DG}. \hfill $\Diamond$
\end{remark}
\section{The Dollard-M\o ller semi-conjugacy (long range)}
\label{sect:moeller2}
The gravitational and Coulomb potentials are long range but not short
range so the M\o ller transformation fails to exist for them. Dollard \cite{Do} discovered that by modifying the comparison free dynamics
in a time-dependent way he could define a modified M\o
ller transformation which existed for long range potentials.
We will call his modified transformation the Dollard-M\o ller
transformation. It will yield the asymptotics
\begin{eqnarray}
p(t) &=& {\cal M} v + o(1)\nonumber\\
q(t) &=& v t + W(t, {\cal M} v) + b + o(1)
\label{eq:Chazylike}\\
&& \mbox{with } v= v^+ (x(0)) \mbox{ and } W(t, {\cal M} v) = o(t) \mbox{ as } \ t \to + \infty \nonumber
\end{eqnarray}
valid for all escape solutions $x(t) = (p(t), q(t))$ and all long-range potentials ($0 < \alpha \le 1$ in \eqref{V:decay}),
whether they have singularities or not.
See equation \eqref{def:W} for the relation between $W$ and the potential $V$.
This assertion on asymptotics follows from the existence of the inverse Dollard-M\o ller transformation $\Omega^{-1}$, part 1 of Theorem \ref{thm:Dollard:Moeller}.
See remark \ref{rmk:asymptotics} for a sketch of the derivation of \eqref{eq:Chazylike} from part 1.
The asymptotic velocity $v$ occurring in the asymptotics \eqref{eq:Chazylike} is given by $\Omega^{-1} (x(0)) = ({\cal M} v, \beta)$ for some $\beta$.
The ``impact parameter'' $b$, projected onto $v^{\perp}$, represents the affine orbital parameter
described in part 4 of Theorem \ref{thm:Dollard:Moeller} below.
\begin{definition}\label{def:dollard}\quad\\
The Dollard dynamics $\Phi ^D _{t, s}$ (see~\eqref{groupoid}) associated with a potential $V$ on $\bR^{dn}$
is the non-autonomous flow defined by the time dependent
\emph{Dollard Hamiltonian}
\[ H^D := K+\tilde{H}^D:\ \bR_t\times F_0
\longrightarrow \bR \]
given by
\begin{equation}
H^D (t, p, q) = \eh \langle p, {\cal M}^{-1}p \rangle + V \big( \langle
t\rangle{\cal M}^{-1}p \big) \text{, where } \LA t\RA=\sqrt{1+t^2} ,
\Leq{dollard:hamiltonian2}
\end{definition}
The first term $K$ of $H^D$ is the usual kinetic energy. Its second term
$\tilde{H}^D_t(p,q)$ is the potential turned into a function of momentum.
$H^D$ is independent of $q$ so the momentum $p$ is constant along the non-autonomous Dollard
flow $\Phi ^D _{t, s}$.
\begin{example}[Newtonian case] \quad\\
Take the case of the Newtonian $n$-body problem, where the potential
is homogeneous of degree $-1$. Using
$\langle t\rangle = t( 1 + \frac{1}{2} \frac{1}{t^2} + \ldots)$ for $t \gg 1$ we
see that
$H^D = \frac{1}{2} \langle p, {\cal M}^{-1} p\rangle + V( \langle t
\rangle {\cal M}^{-1} p) = \frac{1}{2} \langle p, {\cal M}^{-1} p\rangle +
\frac{1}{t} V( {\cal M}^{-1} p) + \cO(1/t^3)$
for large $t$, where the $\cO(1/t^3)$ term depends only on $p$. Then
the ODEs to solve to find the Dollard flow are
$$
\begin{cases}
\dot q = {\cal M}^{-1} p + \frac{1}{t}\, {\cal M}^{-1}\nabla V( {\cal M}^{-1} p) + \cO(1/t^3)\\
\dot p = 0
\end{cases}
$$
which integrate to yield precisely Chazy's asymptotics
~\eqref{hyperbolicChaz} above. Compare with
{\sc Chazy} \cite[page 46]{Cha},
1922. One could argue that the proper Dollard Hamiltonian
\eqref{dollard:hamiltonian:homogeneous} has Chazy's work as a precursor.
\hfill $\Diamond$
\end{example}
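Concretely, integrating these ODEs from some time $t_1>0$ gives (a sketch, with $v:={\cal M}^{-1}p$; the constant $q_1$ absorbs $q(t_1)$, the term $-\log(t_1)\,{\cal M}^{-1}\nabla V(v)$ and the convergent part of the remainder integral)
\[
p(t) = p \qmbox{,} q(t) = v\,t + \log(t)\, {\cal M}^{-1}\nabla V(v) + q_1 + \cO(1/t^{2})
\qquad (t\to+\infty),
\]
i.e.\ linear motion plus a logarithmic drift plus a convergent remainder, which is the form of Chazy's asymptotics \eqref{hyperbolicChaz}.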
Returning to a general $V$, we compute the time-dependent flow of $H^D$, for
initial time $s\in\bR$ and final time $t\in\bR$, to have the form:
\begin{equation}
\Phi^D_{t,s}(p,q) = \big(\,p\; ,\; q+(tv+ W(t;p))-(sv+ W(s;p))
\big) \qquad \big( (p,q)\in F_0,\ v:={\cal M}^{-1}p \big) .
\Leq{Dollard:dynamics}
where \begin{equation} W: \bR_t\times F_0\to
\bR^{dn}\qmbox{,} W(t;p) = \int_0^t \nabla_p V\big(\langle s\rangle
{\cal M}^{-1}p\big)\,ds\,.
\Leq{def:W}
If $V$ is an $(\alpha, k)$ potential then
$W\in C^{k-1}\big(\bR_t\times F_0,\bR^{dn}\big)$,
and
\[t\mapsto W(t;p) =\l\{
\begin{array}{ll}
\cO(|t|^{1-\alpha})&,\, \alpha\in (1/2,1)\\[1mm]
\cO(\log(|t|))&,\, \alpha=1,
\end{array}
\ri. \qquad(|t|\to\infty).\]
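For $(-\alpha)$--homogeneous potentials these rates are explicit (a sketch of the special homogeneous case; cf.\ Appendix \ref{ex:Hom:Dollard}): since $\nabla V$ is then homogeneous of degree $-\alpha-1$,
\[
W(t;p) = \int_0^t \langle s\rangle\, {\cal M}^{-1}\nabla V\big(\langle s\rangle {\cal M}^{-1}p\big)\,ds
= f_\alpha(t)\, {\cal M}^{-1}\nabla V\big({\cal M}^{-1}p\big)
\qmbox{with} f_\alpha(t) := \int_0^t \langle s\rangle^{-\alpha}ds\,,
\]
and $f_1(t)={\rm arcsinh}(t)=\log(|t|)+\cO(1)$, whereas $f_\alpha(t)=\cO\big(|t|^{1-\alpha}\big)$ for $\alpha\in(1/2,1)$.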
Although the correction term $W(t;p)$ to linear motion can go to infinity with $t$,
we have $W(t;p) = o(t)$, which is to say that $|W(t;p)| \ll |t|$ as $|t| \to \infty$.
It will be crucial below that for fixed $p$ and $t_0$,
\begin{equation} W(t + t_0; p) - W(t; p) \to 0 \text{ as } t \to \infty \, ,
\Leq{W:differences}
as the reader can easily verify.
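For completeness, here is the short check: since
$\nabla_p V\big(\langle s\rangle {\cal M}^{-1}p\big) = \langle s\rangle\, {\cal M}^{-1}\nabla V\big(\langle s\rangle {\cal M}^{-1}p\big) = \cO\big(\langle s\rangle\cdot\langle s\rangle^{-1-\alpha}\big)$
for ${\cal M}^{-1}p$ away from the collision set,
\[
W(t+t_0;p)-W(t;p) = \int_t^{t+t_0} \nabla_p V\big(\langle s\rangle {\cal M}^{-1}p\big)\,ds
= \cO\big(|t_0|\,\langle t\rangle^{-\alpha}\big) \qquad (t\to\infty).
\]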
The asymptotic velocity of any Dollard solution curve $\Phi^D_{t,s}(x_0)$ with
$x_0 = (p,q)$ is $v={\cal M}^{-1}p$. All Dollard solutions \eqref{Dollard:dynamics}
which share a fixed initial momentum $p$ are translates of one another:
\[\Phi^D_{t,s}\big(p,q^{(2)}\big)-\Phi^D_{t,s}\big(p,q^{(1)}\big) = \big( 0,q^{(2)}-q^{(1)} \big) \qquad
\big( s,t\in\bR,\, q^{(i)}\in\bR^{dn} \big).\]
For explicit computations of Dollard flows and comparison of the induced transformations
with M\o ller transformations see the appendices.
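As a quick numerical sanity check of these growth rates (an illustration only, not part of the formal development: it assumes the scalar toy case $d=n=1$, ${\cal M}=1$, $V(q)=-1/|q|$, i.e.\ $\alpha=1$; the function names are ours), a simple quadrature reproduces $W(t;p)={\rm arcsinh}(t)/p^2=\cO(\log t)$ and the decay of the differences \eqref{W:differences}:

```python
import math

def grad_p_V(s, p):
    # Toy model (assumption for illustration): d = n = 1, M = 1,
    # V(q) = -1/|q|, the Newtonian case alpha = 1.  For p > 0,
    # d/dp V(<s> p) = 1/(<s> p^2) with <s> = sqrt(1 + s^2).
    return 1.0 / (math.sqrt(1.0 + s * s) * p * p)

def W(t, p, steps=100_000):
    # W(t; p) = integral_0^t grad_p V(<s> M^{-1} p) ds, composite midpoint rule.
    h = t / steps
    return h * sum(grad_p_V((i + 0.5) * h, p) for i in range(steps))

p = 2.0
# Closed form in this toy case: W(t; p) = arcsinh(t)/p^2, so W grows only
# logarithmically, while W(t + t0; p) - W(t; p) -> 0 for fixed t0.
print(W(1000.0, p), math.asinh(1000.0) / p**2)
```

Here ${\rm arcsinh}(t)=\log(2t)+\cO(t^{-2})$, consistent with the $\cO(\log|t|)$ rate above.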
We will use the Dollard flow $ \Phi^{D}_{0,t}$ in place of the free flow $\Phi^{(0)}_t$ in
order to define a version of the M\o ller transformation. However, collisions in backward time prevent
us from defining a direct Dollard-M\o ller transform on $\widehat{F}^+$.
The backward $n$-body flow $\widehat \Phi_{-t}$,
$t > 0$, applied to some points of $\widehat{F}^+$
may not exist due to multi-body collisions in backward time.
To circumvent this problem we instead define the {\em inverse} Dollard-M\o ller transformation,
whose definition only uses the forward flow, so that its domain can be
taken to be $\widehat{F}^+$.
\begin{theorem}[Dollard-M\o ller transformations]\quad
\label{thm:Dollard:Moeller}\\
(The reader may wish to refer to subsection \ref{subsec:notations} for notations.)
For long range $(\alpha,k)$--potentials $V$ (see \eqref{V:k:norm})
with $\alpha\in (1/2,1]$ and collision singularities allowed, the following hold.
\begin{enumerate}[1.]
\item \label{long:range:1}
The backward and forward inverse Dollard-M\o ller transformations
\begin{equation}
\Omega^{-1, \pm} := \lim_{T\to\pm\infty}
\Phi^{D}_{0,T}\circ\widehat{\Phi}_{T}
\qmbox{,}
\Omega^{-1, \pm}: \widehat{F}^\pm\to F_0
\Leq{I:M:T:D}
exist in the sense of locally uniform convergence.
\item \label{long:range:2}
a) These transformations conjugate the $n$-body flow on $\widehat{F}^\pm$
with the free flow
\begin{equation}
\Omega^{-1, \pm}\circ \widehat{\Phi}_t = \Phi^{(0)}_t\circ
\Omega^{-1, \pm} \,.
\Leq{conjugate}
b) For $k\ge3$ the $\Omega^{-1, \pm}$ are
$C^{k-2}$--smooth symplectomorphisms onto their images.
\item \label{long:range:3} The analog of \eqref{Moe:E:p} holds for
$\Omega^{-1, \pm} - Id$.
\item \label{long:range:4}
For any $v\in \bR^{dn}\,\backslash\,\Delta$,
the space of orbits having asymptotic velocity $v$ forms an affine space with
underlying vector space the tangent space of the sphere $S^{dn-1}$ at
$v/\|v\|_{\cal M}$.
\end{enumerate}
\end{theorem}
\textbf{Proof:}\\
We will make use of the open subset $\widehat{F}_{\rm loc}^+$ of
$\widehat{P}$, defined by precisely the same conditions as
${F}_{\rm loc}^+$ in \eqref{def:ff} but with all points lying in $\widehat{P}$, the set of
phase space points with no collisions. Note that
for $\alpha$-homogeneous potentials, the conditions within
\eqref{def:ff} respect the homogeneity of kinetic and potential
energy. \\
$\bullet$
As both flows $\widehat{\Phi}$ and $\Phi^{D}_{\bullet,\bullet}$ are
$C^{k-1}$--smooth on their maximal domains $\widehat{D}$ and
$\bR_t\times\bR_s\times F_0$ respectively, by Theorem
\ref{thm:final:free}.3 and its Corollary \ref{cor:hat:free} we can
assume without loss of generality that $x_0=(p_0,q_0)\in
\widehat{F}_{\rm loc}^+$.
We consider the Dollard solution \eqref{Dollard:dynamics},
$t\mapsto \Phi^{D}_{t,0}(X)$
with initial value $X\in F_0$ and
denote by $X_T(x_0)$ the initial value with the property
\[\Phi^{D}_{T,0}\big(X_T(x_0)\big) = \widehat{\Phi}_{T}(x_0) \qquad
\big( x_0\in \widehat{F}_{\rm loc}^+,\ T\ge0 \big).\]
Since $\Phi^{D}_{\bullet,\bullet}$ is the solution of a time dependent
initial value problem, we have
\begin{equation}
\Phi^{D}_{t,t}={\rm Id}_{F_0}\qmbox{and}
\Phi^{D}_{t_2,t_1}\circ \Phi^{D}_{t_1,t_0} = \Phi^{D}_{t_2,t_0}\qquad
(t,t_i\in\bR),
\Leq{groupoid}
so that $\big(\Phi^{D}_{t_1,t_0})^{-1}= \Phi^{D}_{t_0,t_1}$. \\
\noindent{\large {\it Proof of part 1 of the theorem.}}\\
In \eqref{I:M:T:D} we claim pointwise existence and local uniformity of the limit $T \to \infty$ of
\begin{equation}
\Omega^{-1}_T := (\Phi^D_{T,0})^{-1}\circ\widehat{\Phi}_T =
\Phi^D_{0,T}\circ\widehat{\Phi}_T.
\Leq{Om:star:T}
We compute, with $v:={\cal M}^{-1}p$ denoting velocity, that
\begin{align}
\Omega^{-1}_T(x_0)
&= \textstyle
\big(p(T,x_0), q(T,x_0)-v(T,x_0)T-\int_0^T \nabla_p V\big(\langle s\rangle {\cal M}^{-1}p(T,x_0)\big)\,ds\big)\nonumber\\
&=
\big(p(T,x_0) , q_0+ r(T,x_0)\big),
\label{Omega:T}
\end{align}
where
\begin{align}
\label{r:for:Omega}
r(T,x_0) := &\,\textstyle
\int_0^T \big[v(s,x_0) \!-\!v(T,x_0)\! -\!\nabla_p V\big(\langle s\rangle {\cal M}^{-1}p(T,x_0)\big)\big] \,ds \\
= &\,\textstyle
{\cal M}^{-1}\!
\int_0^T \big[\int_s^T \nabla V\big(q(\tau,x_0)\big) d\tau -
\!\langle s\rangle \nabla V\big(\langle s\rangle {\cal M}^{-1}p(T,x_0)\big) \big]\,ds,\nonumber
\end{align}
see \eqref{Dollard:dynamics} and \eqref{def:W}.\\
$\bullet$
We begin the proof of \eqref{I:M:T:D} by showing that
\[r^+(x_0):=\lim_{T\to+\infty} r(T,x_0)=
r(0,x_0)+\lim_{T\to+\infty}{\textstyle \int_0^T}\dot{r}(t,x_0)\,dt \]
exists.
To this end, we first estimate its $T$--derivative.
\begin{align}
\label{eq:dot:r}
\hspace*{-4mm}
\dot{r}(T,x_0) = &
- T\,\dot{v} (T,x_0) - \langle T\rangle {\cal M}^{-1}\nabla
V\big(\langle T\rangle {\cal M}^{-1}p(T,x_0)\big) \\
& +{\cal M}^{-1} \textstyle \int_0^T
\langle s\rangle^2 D\nabla V\big(\langle s\rangle
{\cal M}^{-1}p(T,x_0)\big) \,ds \; {\cal M}^{-1}\nabla V\big(q(T,x_0)\big)\,.\nonumber
\end{align}
The propagation estimate \eqref{propagation:est} and \eqref{finer:asym:momentum:est} imply that
$\|p(T,x_0)-p^+(x_0)\| = \cO\big(\langle T\rangle^{-\alpha}\big)$ and thus
locally uniformly in $x_0\in \widehat{F}_{\rm loc}^+$
\[\|q(T,x_0)-(q_0+v^+(x_0)T)\| = \left\{\begin{array}{cl}
\cO\big(\langle T\rangle^{1-\alpha} \big) &,\, \alpha < 1 \\[1mm]
\cO\big(\log(T)\big) &,\, \alpha = 1
\end{array}
\right. .\]
\begin{enumerate}[1.]
\item
For $\alpha\in(1/2,1)$ the first line on the right hand side of \eqref{eq:dot:r} equals
\begin{align*}
&{\cal M}^{-1} \big[T\,\nabla V(q(T,x_0)) - \langle T\rangle \nabla V\big(\langle T\rangle {\cal M}^{-1}p(T,x_0)\big) \big]\\
=&\,
T {\cal M}^{-1} \big[\nabla V(q(T,x_0)) - \nabla V\big(\langle T\rangle\, v(T,x_0)\big) \big]
+\cO\big(\langle T\rangle^{-2-\alpha}\big)\\
=&\,
T {\cal M}^{-1} \big[\nabla V\big(\langle T\rangle\, v(T,x_0)\! +\! \cO\big(T^{1-\alpha}\big)\big) -
\nabla V\big(\langle T\rangle\, v(T,x_0)\big) \big] + \cO\big(\!\langle T\rangle^{-2-\alpha}\big)\\
=&\,\cO\big(\langle T\rangle^{-2\alpha}\big) + \cO\big(\langle T\rangle^{-2-\alpha}\big)
=\cO\big(\langle T\rangle^{-2\alpha}\big)\, ,
\end{align*}
since $\langle T\rangle-T=\cO\big(\langle T\rangle^{-1}\big)$.
\item
As $D\nabla V\big(\langle s\rangle {\cal M}^{-1}p(T,x_0)\big) =\cO\big(\langle s\rangle^{-2-\alpha}\big)$,
for $\alpha\in(1/2,1)$ the second
line of \eqref{eq:dot:r} has the order $\cO\big(\langle T\rangle^{1-\alpha}\langle T\rangle^{-1-\alpha}\big)
=\cO\big(\langle T\rangle^{-2\alpha}\big)$, too.
\item
For $\alpha=1$ the orders of both lines in \eqref{eq:dot:r} are
$\cO\big(\langle T\rangle^{-2}\log(\langle T\rangle)\big)$.
\end{enumerate}
We conclude that \eqref{eq:dot:r} is of order $\cO\big(T^{-2\alpha}\big)$ for $\alpha\in (1/2,1)$, respectively
$\cO\big(T^{-2}\log(T)\big)$ for $\alpha=1$.
By our assumption $2\alpha>1$ we finally obtain existence of $r^+(x_0)$, and thus of
the inverse Dollard-M\o ller transformation $\Omega^{-1} = \lim_{T\nearrow
+\infty}\Omega^{-1}_T$.\\
$\bullet$
As $r^+(x_0) = \lim_{T\to+\infty} r(T,x_0)$ exists, by the analogs of \eqref{r:for:Omega} and \eqref{eq:dot:r}
\begin{align}
r_i&(T,x_0) = r^+_i(x_0) - \int_T^\infty \dot{r}_i(\tau , x_0)\,d\tau
= r^+_i(x_0) +\frac{1}{m_i} \sum_{j\in N\setminus\{i\}} \label{r:int:eq}\\
&\!\!\!\! \Big[ \int_T^\infty \!\! \Big( \tau \nabla V_{i,j}\big(q_i(\tau,x_0)-q_j(\tau,x_0)\big)
- \langle \tau\rangle
\nabla V_{i,j} \big(\langle \tau \rangle (v_i(\tau,x_0)-v_j(\tau,x_0))\big) \Big)\,d\tau \nonumber\\
&\!\!\!\!- \!\!\int_T^\infty \!\int_\tau^\infty \!\! \langle s\rangle^2
D \nabla V_{i,j} \big(\langle {s} \rangle
({v}_i(\tau,x_0)- {v}_j(\tau,x_0))\big)
\big(\dot{v}_i(\tau,x_0)- \dot{v}_j(\tau,x_0)\big)\, ds\,d\tau \Big].\nonumber
\end{align}
When one substitutes the argument $q_i(\tau,x_0) - q_j(\tau,x_0)$ in the second line of \eqref{r:int:eq},
using $q(\tau,x_0) = q_0 + v(\tau,x_0)\tau + W\big(\tau;\Phi_\tau(x_0)\big) + r(\tau,x_0)$ from \eqref{Omega:T}, one obtains an
integral equation for $r$.
When we assume that $r$ belongs to the complete metric space
$\widehat{\cD}_{X_0,T}$ defined in \eqref{def:DC}, then the integrand is of order
$\cO\big(\tau^{-2\alpha})$ for $\alpha\in (1/2,1)$ and $\cO\big(\tau^{-2}\log(\tau))$ for $\alpha=1$.
So from \eqref{r:int:eq} we infer that $r(T,x_0) - r^+(x_0)$ is of order $\cO\big(T^{1-2\alpha})$, resp.\ $\cO\big(T^{-1}\log(T))$.
As a function of $r$, the right hand side of \eqref{r:int:eq} is a contraction for $T$ large,
justifying the assumption $r\in \widehat{\cD}_{X_0,T}$.
\\
$\bullet$
As convergence is locally uniform on $\widehat{F}^+$,
by the parametrized fixed point theorem the dependence of $r$ on $x_0$ is continuous.
So the map $r^+: \widehat {F}^+\to\bR^{dn}$ is continuous, too.
Estimates of the derivatives w.r.t.\
this initial condition proceed as in the proof for the short range case,
that is, Theorem \ref{thm:both:moeller}.3.
As stated in Corollary \ref{cor:hat:free}, for $(\alpha,k)$--potentials the asymptotic
velocity $\overline{v}^+\in C^{k-1}\big(\widehat{F}^+,\bR^{dn}\big)$.
So by \eqref{Omega:T},
$\Omega^{-1, +}$ is continuous, and as smooth as $r^+$.
Note, however, that in \eqref{r:int:eq} the second derivative of the long range potential $V$ appears.
This is different from the case \eqref{contraction map} of short range potentials, where only the
first derivative is needed.
Therefore, in comparison with Part \ref{Statement3} of Theorem \ref{thm:both:moeller},
we lose one derivative in Part \ref{long:range:2} of Theorem \ref{thm:Dollard:Moeller}.\\
$\bullet$
By Lemma \ref{lem:q:minus:q}, $\Omega^{-1, \pm}$ is one to one. So we can invert $\Omega^{-1, \pm}$
on its image, yielding the M\o ller transformation $\Omega^{\pm}$. We still have to prove that for any
$x_0 = (p_0,q_0)\in \widehat{F}_{\rm loc}^+$ and its image $X\equiv X(x_0)
:= \Omega^{-1, +}(x_0)$ the M\o ller transformation is of the form
\begin{equation}
\Omega^+(X) = \lim_{T\to +\infty}\Omega_T(X)\qmbox{ for}
\Omega_T := \widehat{\Phi}_{-T} \circ\Phi^{D}_{T,0}.
\Leq{Omega_T}
But this requires controlling $r$ as a function of $X$ instead of $x_0$.
So the analysis is similar, and we omit it.\\
{\it This completes the proof of item 1 of the theorem, i.e.\ of~\eqref{I:M:T:D}.
}
\vskip .2cm
\noindent{\large {\it Proof of part 2 of the theorem.}}\\
The intertwining property \eqref{conjugate} follows by first noting that for $\Omega^{-1}_T$ from \eqref{Om:star:T}
\[\Omega^{-1}_T \circ \widehat{\Phi}_t = (\Phi^D_{0,T}\circ \Phi^D_{T+t,0})\circ \Omega^{-1}_{T+t} \]
follows by applying the groupoid property \eqref{groupoid}, and by \eqref{Dollard:dynamics},
\[ \Phi^D_{0,T}\circ \Phi^D_{T+t,0}(p,q) = \big(p,q+tv +W(T+t;p)-W(T;p)\big).\]
Then $\lim_{T\to+\infty} \Phi^D_{0,T}\circ \Phi^D_{T+t,0} = \Phi^{(0)}_t$, since using \eqref{def:W}
\[\lim_{T\to+\infty}\big( W(T+t;p)-W(T;p) \big)
= \lim_{T\to+\infty}\int_{T}^{T+t} \nabla_p V\big(\langle s\rangle {\cal M}^{-1}p\big)\,ds =0.
\]
$\bullet$
As a locally uniform limit of symplectomorphisms $\Omega_T$ in $C^{1}$ norm,
for $k\ge3$ the Dollard-M\o ller transformation $\Omega^+$ is a
symplectomorphism onto its image. This is shown by suitably modifying
the proof of Theorem \ref{thm:both:moeller}.3. \\
{\it This completes the proof of item 2 of the theorem.}
\vskip .2cm
\noindent{\large {\it Proof of part 3 of the theorem.} }\\
The analog of \eqref{Moe:E:p} follows from \eqref{O:p}, as
the Dollard dynamics \eqref{Dollard:dynamics} conserves momentum.
{\it This completes the proof of item 3.}\\
\noindent{\large {\it Proof of part 4 of the theorem.} } \\
The proof relies on Lemma \ref{lem:orbits:with equal:as:ve} below,
the conjugacy relation \eqref{conjugate} from part 2, proved above,
and the relation \eqref{eq:meaningOmega}
proved below.
Let us write ${{\cal P}}_v$ for the space of all trajectories $x(t)$ having $v^{+} (x(t))= v$
where $v \notin \Delta$ is fixed. Let $\pi^{\perp}: \bR^{dn} \to v^{\perp}$ be the orthogonal projection
so that $\pi^{\perp} (w) = w - (v \langle w, v \rangle / |v|^2)$. Define a map
$${{\cal P}}_v \times {{\cal P}}_v \to v^{\perp}$$
\begin{equation}
(x, x^{(0)}) \mapsto \lim_{t \to \infty} \pi ^{\perp} \big(q(t) - q^{(0)} (t)\big) =: b(x,x^{(0)}) \in v ^{\perp}
\Leq{AFFINE}
where we write
$x(t) = (p(t),q(t))$ and $x^{(0)} (t) = (p^{(0)}(t), q^{(0)}(t))$ for the two trajectories, i.e.\ points in ${{\cal P}}_v$.
By Lemma \ref{lem:orbits:with equal:as:ve} this limit exists and is independent of
where we start on the orbits: shifting $x(t)$ to $x(t + t_1)$ and $x^{(0)}(t)$ to $x^{(0)} (t + t_0)$
yields $ \lim_{t \to \infty} \big(q(t+ t_1) - q^{(0)} (t+t_0)\big) = \lim_{t \to \infty} \big(q(t) - q^{(0)} (t)\big) + (t_1 - t_0) v $,
which changes the difference only by a multiple of $v$ and so leaves the map \eqref{AFFINE} unchanged.
Think of one of the orbits, $x^{(0)}$, as the ``origin'' of ${{\cal P}}_v$. Then {\em we must show that the map
\eqref{AFFINE}, viewed as a function of $x$ alone, is onto, and that its image uniquely determines $x$
up to a time translation}.
It will be important to understand that $x \in {{\cal P}}_v$ iff $\Omega^{-1}(x(0)) =({\cal M} v, \beta)$ for some $\beta$.
This is an immediate consequence of
\begin{equation}
v^+ (x(t)) = v_* \iff \Omega^{-1} (x(0)) = ({\cal M} v_* , \beta), \text{ some } \beta
\Leq{eq:meaningOmega}
valid for all escape orbits $x(t)$.
To establish the validity of \eqref{eq:meaningOmega} recall that the free flow
(or the Dollard flow) does not change the momentum component.
Write $\Omega^{-1} (x(0)) = ({\cal M} v, \beta)$, for some $v, \beta$.
Let ${\rm pr}_1$ denote the projection onto the momentum factor. Then we have
\[{\cal M} v = {\rm pr}_1 \Omega^{-1}(x(0)) = {\rm pr}_1 \Omega^{-1} (x(t)) =
\lim_{t \to \infty} {\rm pr}_1 \Omega^{-1} (x(t))\]
according to the conjugacy relation.
By \eqref{eq:Chazylike} the momentum component of $x(t)$ limits to ${\cal M} v^+ (x(0))$ as $t \to \infty$.
On the other hand, by part 3 of the theorem we are proving (the asymptotic near-identity part,
see \eqref{Moe:E:p}), the map $\Omega^{-1}$ tends to the identity along
escape orbits such as $x(t)$:
$$\Omega^{-1} (x(t)) = x(t) + o(1) , \text{ as } t \to \infty.$$
Indeed, the term $q_{\min}^{-\alpha}$ appearing in estimate \eqref{Moe:E:p} tends to zero like
$t^{-\alpha}$ as $t \to \infty$. It follows that
$\lim_{t \to \infty} {\rm pr}_1 \Omega^{-1} (x(t)) = {\cal M} v^+ (x(0))$,
which establishes \eqref{eq:meaningOmega}.
If $\Omega^{-1}$ mapped {\it onto} $F_0$, then the surjectivity of our map \eqref{AFFINE} would be immediate.
$\Omega^{-1}$ would map ${\cal P}_v$ onto the space of lines parallel to $v$ according to \eqref{eq:meaningOmega} and the conjugacy relation.
And $\Omega$, being the inverse of $\Omega^{-1}$,
would be well-defined with domain all of $F_0$ and would map straight lines onto asymptotically free trajectories lying in $F^+$.
We could take $x^{(0)}(t)$ to be $\Omega(\ell_0 (t))$ where $\ell_0 (t) = ({\cal M} v, vt)$ corresponds to $b =0$.
Any $x(t) \in {\cal P}_v$ can be written, up to translation, uniquely as $\Omega ({\cal M} v, vt + b)$ for some $b \in v^{\perp}$. Moreover, both $\Omega$ and $\Omega^{-1}$
tend to the identity along escape orbits so that the limit in \eqref{AFFINE} is the same as
the limit achieved using the free flow, and so would yield $b = b(x, x^{(0)})$, completing the proof.
$\Omega^{-1}$ is onto $F_0$ for non-singular potentials $V$.
To see this fact, observe that we can, in the case of a non-singular potential, form $\Phi_{-t} (x_0)$ for any $t$ and any $x_0$.
Incompleteness of the backward flow due to collisions was the only thing which prevented
the direct Dollard-M\o ller map $\Omega$, defined as the limit ${\Phi}_{-T} \circ \Phi^{D}_{T,0}$ as
$T \to \infty$, from existing and having domain all of $F_0$.
The analysis we used in part 1 of the current theorem to ensure the existence of $\Omega^{-1}$, defined as the limit of $\Phi^{D}_{0,T}\circ\widehat{\Phi}_{T}$ as $T\to \infty$,
carries through essentially verbatim to yield the existence of
$\Omega: F_0 \to F^+$ and that it is the inverse of our $\Omega^{-1}$.
We deal with the case of singular potentials by observing that $\Omega^{-1}$
does not actually have to be onto, but only onto {\it modulo the flow}, in order for the argument two paragraphs above to work.
For any $v_* \notin \Delta$ and $b \in v_* ^{\perp}$ form the phase point $({\cal M} v_*, b)$ and denote its forward Dollard orbit by
$\Phi^D _{t,0} ({\cal M} v_*, b):= y_D (t;b)$, $t_* \le t < \infty$.
We will show that, for $t$ large enough, these Dollard orbits lie in the image of $\Omega^{-1}$,
and that moreover $\Omega^{-1}$ is invertible there. Then the entire Dollard ray $y_D ([t_*, \infty); b)$ will
lie in the image of $\Omega^{-1}$, and this will be enough.
To this end, fix any relatively compact neighborhood $K$ of the origin in the full phase space $\bR^{dn} \times\bR^{dn}$.
Then there is a $t$ large enough so that $K_t: =y_D (t) + K \subset F^+_{\rm loc} \subset F^+$. To see this,
observe that as $t$ increases without bound the estimates
of \ref{def:finally:free} must eventually hold since the $q_i$ occurring in the estimate are equal to $t v_{*,i}$ to leading order
while the $v_i$ are $v_{*, i}$. It thus follows from Theorem \ref{thm:mainMoller} that $K_t \subset F^+$
for all sufficiently large $t$. Now, as we saw a few paragraphs above, part 3 (just proved; see also \eqref{Moe:E:p})
tells us that the map $\Omega^{-1}$ on $K_t$ is of the form $Id + h_t$ with $h_t =o(1)$ as $t \to \infty$.
As soon as $t$ is large enough so that the $C^{k-1}$-norm of $h_t$ on $K_t$ is less than $1$ we have that
$\Omega^{-1}$ is invertible and that $y_D (t) \in \Omega^{-1} (K_t) \cap K_t$. We can let $t$ increase since the estimates only get better
and in this way conclude that
the entire future Dollard ray $y_D ([t_*, \infty); b)$ lies in the image of $\Omega^{-1}$, for some $t_* = t_* (v_*, b)$. Also $\Omega$,
the inverse of $\Omega^{-1}$, exists along the Dollard ray.
This analysis applies to any $b$, including $b = 0$. Now take $v = v_*$.
Take for the `origin' of our trajectory space ${\cal P}_v$ the solution $x^{(0)}(t) = \Omega (y_D (t; 0))$.
Then
\[b(x, x^{(0)}) = \lim_{t \to \infty} \big(q(t) - q^{(0)} (t)\big) = \lim_{t \to \infty} {\rm pr}_2 \big(y_D (t;b) - y_D (t; 0)\big) = b,\] where ${\rm pr}_2 (p, q) = q$ is the projection onto configuration space.
We have proved that the map \eqref{AFFINE} is onto.
Finally, to see that the map $x \mapsto b(x, x^{(0)})$ of \eqref{AFFINE} determines
the trajectory $x$ up to time translation, use the fact that the same map determines the
trajectory up to time translation on the free side, and that the free and Newtonian limits are equal
since $\Omega^{-1}$ tends to the identity along escaping orbits.
\hfill $\Box$
\noindent
\begin{remark}
See also the remark in {\sc Derezi\'{n}ski} and {\sc G{\'e}rard} \cite[p.\ 24]{DG} regarding the affine structure of the tangent space and part 4.
\end{remark}
\begin{remark}[Derivation of asymptotics \eqref{eq:Chazylike}]
\label{rmk:asymptotics}
Set $\Phi^D _t = \Phi^D_{t,0}$ so that $\Omega^{-1} = \lim_{t \to \infty} (\Phi^D _t)^{-1} \circ \Phi_t$.
It follows that for $t$ large we have $\Phi^D _t \circ \Omega^{-1} = \Phi_t + o(1)$.
But $\Omega^{-1}$ tends to the identity along escaping orbits $x(t)$.
This yields
$\Phi^D _t (x(T)) = \Phi_t (x(T)) + o(1)$
for $T$ sufficiently large and $t \to \infty$, which is \eqref{eq:Chazylike}.
\hfill $\Diamond$
\end{remark}
\begin{remark}[Homogeneous potentials]\quad\\
If $V$ is a ($-\alpha$)--homogeneous potential, then
for every $k\in \bN$, $V$ is an $(\alpha,k)$--potential.
So in particular the Dollard-M\o ller transformation is smooth.
\hfill $\Diamond$
\end{remark}
\begin{earlier}\quad\\[-6mm]
\begin{enumerate}[1.]
\item
As we have indicated above, the assumption $\alpha\in (1/2,1]$ in Theorem \ref{thm:Dollard:Moeller}
can be relaxed, by generalizing the two-body technique from {\sc Herbst} \cite{He}.
The price to be paid is a Dollard dynamics that is more involved than
\eqref{Dollard:dynamics}.
\item
Theorem 1 of {\sc Saari} \cite{Sa} states for the gravitational $n$--body system that under a
non-oscillation assumption the centers of mass of clusters asymptotically either move like
$t\mapsto vt+D\log(t)+o(\log(t))$, or their mutual distances are of order $\cO(t^{2/3})$.
As this allows for non-trivial clusters, Saari's result is not contained in the statement of
Theorem \ref{thm:Dollard:Moeller}.
On the other hand, Theorem \ref{thm:Dollard:Moeller} concerns general long range potentials
and controls the asymptotics of the flow, not just of individual orbits.
\item
As Lemma \ref{lem:orbits:with equal:as:ve} below
shows, orbits with equal asymptotic momentum $\overline{p}^+$ synchronize their relative positions,
although their momenta $\tilde{p}$ approach $\overline{p}^+$ only slowly
($\,\tilde{p}(t)-\overline{p}^+=\cO(t^{-\alpha})\,$).
See also {\sc Herbst} \cite[Lemma II.2]{He} for the case of potential scattering.
\hfill $\Diamond$
\end{enumerate}
\end{earlier}
\begin{lemma}[Orbits with equal asymptotic velocity]\label{rem:equal:as:vel}\quad\\
\label{lem:orbits:with equal:as:ve}%
For a long range potential $V$, consider initial conditions
$x^{(i)}_0\equiv\big(p^{(i)}_0,q^{(i)}_0\big)\in\widehat{F}^\pm$ \ $(i=1,2)$,
whose asymptotic momenta $\overline{p}^+\big(x^{(i)}_0\big)$ respectively $\overline{p}^-\big(x^{(i)}_0\big)$ coincide.
Then
\begin{equation}
\Omega^{-1, \pm} \big( x^{(2)}_0 \big) - \Omega^{-1, \pm} \big( x^{(1)}_0 \big)
\ =\ \Big(0\ ,\ \lim_{t\to\pm\infty} \big(q(t,x^{(2)}_0)-q(t,x^{(1)}_0)\big) \Big).
\Leq{inverse:moeller:qminq}
In particular, the limit on the right in~\eqref{inverse:moeller:qminq} is finite when the $x^{(i)}_0$ yield solutions having the same asymptotic
velocities (or momenta).
\end{lemma}
\begin{remark}[Difference between long range and short range case]\quad\\
Note that the limits
$\lim_{t\to\pm\infty}\Big( \Phi_t \big(x^{(i)}_0 \big) - \Phi^D_{t,0}\circ \Omega^{-1, \pm} \big(x^{(i)}_0 \big)\Big)$
do {\em not} exist for $(-\alpha)$--homogeneous potentials and $\alpha\in (0,1)$, see
Appendix \ref{app:Dollard-Herbst}.
\hfill $\Diamond$
\end{remark}
\textbf{Proof of Lemma \ref{lem:orbits:with equal:as:ve}:}
With $a^\pm$ from \eqref{a:pm} and $\Omega^{-1}_t$ from \eqref{Om:star:T} we have
\begin{align}
(0,a^\pm)
&\stackrel{(1)}{=}
\lim_{t\to\pm\infty} \big( p(t,x^{(2)}_0) - p(t,x^{(1)}_0) \, ,\, q(t,x^{(2)}_0) - q(t,x^{(1)}_0)\big)\nonumber\\
&\stackrel{(2)}{=}
\lim_{t\to\pm\infty}
\big( \Phi^D_{t,0}\circ \Omega^{-1}_t(x^{(2)}_0)-\Phi^D_{t,0}\circ \Omega^{-1}_t(x^{(1)}_0) \big)\nonumber\\
&\stackrel{(3)}{=}
\lim_{t\to\pm\infty} \big(\Omega^{-1}_t(x^{(2)}_0) - \Omega^{-1}_t(x^{(1)}_0) \big)
\stackrel{\rm def.}{=}
\Omega^{-1, \pm}\big(x^{(2)}_0\big) - \Omega^{-1, \pm}\big(x^{(1)}_0\big)\, ,\nonumber
\end{align}
since\\
$\bullet$
By assumption $\overline{p}^\pm\big(x^{(i)}_0\big) = \lim_{t\to\pm\infty} p\big(t,x^{(i)}_0\big)$ coincide, and by
Lemma \ref{lem:q:minus:q} $a^\pm= \lim_{t\to\pm\infty} \big(q(t,x^{(2)}_0)-q(t,x^{(1)}_0)\big)$
exists, proving (1).\\
$\bullet$
Identity (2) follows from
$\Phi^D_{t,0}\circ \Omega^{-1}_t=\Phi^D_{t,0}\circ\Phi^D_{0,t}\circ\widehat \Phi_t=\widehat \Phi_t$, see
\eqref{groupoid}.\\
$\bullet$
The Dollard dynamics $\Phi^D$, see \eqref{Dollard:dynamics},
does not change momentum, which implies equality of the first components in (3).
Concerning the second components,
\[Q\big(-t,p\big(t,x^{(i)}_0\big)\big) = q\big(t,x^{(i)}_0\big)
- {\cal M}^{-1}p\big(t,x^{(i)}_0\big) t - \int_0^t \nabla_p V\big(\langle s\rangle {\cal M}^{-1}p\big(t,x^{(i)}_0\big)\big)\,ds\]
and $\big\| p\big(t,x^{(2)}_0\big) - p\big(t,x^{(1)}_0\big) \big\| = \cO(|t|^{-1-\alpha})$, see \eqref{pp:qq}. So
\[Q\big(\!-t,p\big(t,x^{(2)}_0\big)\big) - q\big(t,x^{(2)}_0\big) =
Q\big(\!-t,p\big(t,x^{(1)}_0\big)\big) - q\big(t,x^{(1)}_0\big) +\cO(|t|^{-\alpha}),\]
proving (3).
\hfill $\Box$\\[2mm]
Finally we prove a property special to ($-1$)--homogeneous potentials:
the existence of the Dollard-M\o ller transformation {\em and} of
asymptotes. This property does not extend to $(1,k)$--potentials or
to ($-\alpha$)--homogeneous potentials, $0 < \alpha < 1$, as
counterexamples on the half-line show.
\begin{prop}[Asymptotes for ($-1$)--homogeneous potentials]\quad\label{prop:m:one:homog}\\
Let $V$ be a $(-1)$--homogeneous potential and
$\Phi^D_{\bullet,\bullet}$ its
Dollard flow \eqref{Dollard:dynamics}.\\
Then for all initial conditions $x_0\in \widehat{F}^\pm$ there exist
unique $X_0^\pm\in F_0$ with
\[\lim_{t\to \pm\infty}\big(\Phi^D_{t,0}(X_0^\pm) -
\widehat{\Phi}_t(x_0)\big) = 0. \]
In fact, $X_0^\pm=\Omega^{-1, \pm}(x_0)$.
\label{-1_Dollard}
\end{prop}
\textbf{Proof:} \\
We show the result for the limit $t\to+\infty$ and omit the superscript $\pm$ of $X_0^\pm$.
To make clear that $\alpha=1$ is the unique power with the described property, we first allow
for $(-\alpha)$--homogeneous potentials with $\alpha\in (1/2,1]$.\\
We set $x(t)\equiv \big(p(t),q(t)\big) := \widehat{\Phi}_t(x_0)$ and
$X(t)\equiv \big(P(t),Q(t)\big) :=\Phi^D_{t,0}(X_0)$ for the Dollard flow
with initial conditions $X_0:= \Omega^{-1, +}(x_0)$ and show that the limit
$\lim_{t\to +\infty}\big(X(t) - x(t)\big)$ exists iff $\alpha=1$.\\
For all $t\in \bR$ we have $P(t) = \overline{p}^+(x_0) = \lim_{s\to +\infty} p(s)$.
So we must consider
\begin{align}
F(t) &:=
Q(t)-q(t) = Q_0 + \overline{v}^+(x_0)t
+ \int_0^t \! \nabla_{\overline{p}^+} V\big(\langle s \rangle {\cal M}^{-1}\overline{p}^+(x_0)\big)\,ds - q(t)\nonumber\\
&= Q_0 + \overline{v}^+(x_0)t + f_\alpha(t){\cal M}^{-1} \nabla V\big(\overline{v}^+(x_0)\big) - q(t) \,,\nonumber
\end{align}
(See Appendix \ref{ex:Hom:Dollard}.) Its time derivative equals for $t>0$
\begin{align}
\dot{F}(t) &=
\overline{v}^+(x_0) + \langle t \rangle^{-\alpha} {\cal M}^{-1} \nabla V\big(\overline{v}^+(x_0)\big) - \dot{q}(t)\nonumber\\
&={\cal M}^{-1} \Big[ -\int_t^\infty\!\! \nabla V\big(q(s)\big)\,ds
+ \langle t \rangle^{-\alpha} \nabla V\big(\overline{v}^+(x_0)\big)\Big] \nonumber\\
&={\cal M}^{-1} \Big[ -\int_t^\infty\!\! \nabla V\big(\overline{v}^+(x_0) s + \cO\big(s^{1-\alpha}\log(s)\big)\big)\,ds
+ \langle t \rangle^{-\alpha} \nabla V\big(\overline{v}^+(x_0)\big)\Big] \nonumber\\
&={\cal M}^{-1} \Big[ -\int_t^\infty\!\! \nabla V\big(\overline{v}^+(x_0) s \big)\,ds
+ \langle t \rangle^{-\alpha} \nabla V\big(\overline{v}^+(x_0)\big)\Big] + \cO\big( t^{-2\alpha} \log(t) \big)\nonumber\\
&=\big[ \langle t \rangle^{-\alpha} -\alpha^{-1} t^{-\alpha}\big]
{\cal M}^{-1} \nabla V\big(\overline{v}^+(x_0)\big) + \cO\big(t^{-2\alpha}\log(t)\big). \nonumber
\end{align}
We used the $(-\alpha)$--homogeneity of $V$ in the second-to-last equation, so that $\nabla V$ is
$(-\alpha-1)$--homogeneous and
$\int_t^\infty \nabla V\big(\overline{v}^+(x_0)s\big)\,ds = \alpha^{-1}t^{-\alpha}\nabla V\big(\overline{v}^+(x_0)\big)$.
To avoid a case distinction, we kept an $\cO(\log(t))$ term that is unnecessary for $\alpha\in (1/2,1)$.
So if $\alpha=1$, then the first term is of order $ \cO\big(t^{-3}\big)$, and only in this case
$\lim_{t\to +\infty}\big(Q(t) - q(t)\big)$ exists.
Subtracting this limit from $Q_0$, if non-zero, gives the unique initial conditions of the
Dollard flow that yield an asymptote.
However, for $\alpha=1$ we have $\lim_{t\to +\infty}\big(Q(t) - q(t)\big)=0$:
We just proved that the difference between the momentum $p(t)$ (which equals the momentum component of
$\Omega_t^{*,+}(x_0)$) and $P(t)$ is of order $\cO\big(t^{-2}\log(t)\big)$. So the difference between the
positions at time $t$ of the Dollard flows with initial conditions $(P_0,Q_0):=X_0=\Omega^{*,+}(x_0)$
and $(P_t,Q_t):=X_t:=\Omega_t^{*,+}(x_0)$ is
\begin{align}
Q(t)-\Big(Q_t+ t&{\cal M}^{-1} p(t)+f_1(t)\nabla_{p(t)}V\big({\cal M}^{-1} p(t)\big)\Big) \label{Diff:Q:DM:asymp}\\
&=\; \big[Q_0-Q_t\big]\,+\,\big[\overline{v}^+(x_0)-{\cal M}^{-1}p(t)\big] t \nonumber\\
&\quad \
+\sinh^{-1}(t)\Big[\nabla_{\overline{p}^+(x_0)}V\big({\cal M}^{-1} \overline{p}^+(x_0)\big) - \nabla_{p(t)}V\big({\cal M}^{-1} p(t)\big)\Big].
\nonumber
\end{align}
By definition of the Dollard-M\o ller transformation
$\lim_{t\to\infty}[Q_0-Q_t]=0$, whereas
$\big[\overline{v}^+(x_0)-{\cal M}^{-1}p(t)\big] t=\cO\big(t^{-1}\log(t)\big)$,
and
\[\sinh^{-1}(t)\Big[\nabla_{\overline{p}^+(x_0)}V\big({\cal M}^{-1}
\overline{p}^+(x_0)\big) - \nabla_{p(t)}V\big({\cal M}^{-1} p(t)\big)\Big]\!
=\cO\big(t^{-2}\log^2(t)\big).\]
So the difference \eqref{Diff:Q:DM:asymp} has limit zero. \hfill $\Box$
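The decisive estimate of the proof, namely that the coefficient $\langle t \rangle^{-\alpha} -\alpha^{-1} t^{-\alpha}$ is integrable at infinity precisely for $\alpha=1$, can be checked numerically. A minimal sketch (assuming the standard bracket $\langle t\rangle := \sqrt{1+t^2}$, which is compatible with $f_1(t)=\sinh^{-1}(t)$ appearing above):

```python
def bracket(t, alpha):
    """The coefficient [<t>^-alpha - alpha^-1 t^-alpha] from the proof,
    with the bracket <t> = sqrt(1 + t*t) (an assumption of this sketch)."""
    return (1.0 + t * t) ** (-alpha / 2.0) - t ** (-alpha) / alpha

# alpha = 1: <t>^-1 - t^-1 = -t^-3/2 + O(t^-5), integrable at infinity,
# so F(t) = Q(t) - q(t) converges and an asymptote exists.
for t in (10.0, 100.0, 1000.0):
    assert abs(bracket(t, 1.0) * t ** 3 + 0.5) < 0.01

# alpha in (1/2, 1): the coefficient decays only like (1 - 1/alpha) t^-alpha,
# which is not integrable, so Q(t) - q(t) cannot converge.
alpha = 0.75
for t in (10.0, 100.0, 1000.0):
    assert abs(bracket(t, alpha) * t ** alpha - (1.0 - 1.0 / alpha)) < 0.01
```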
\section{On the scattering relation and map}
When we replace limits $t \to \infty$ by $t \to - \infty$ we arrive at
the analogous objects for backward time, such as
$$v^{-} (x_0) = \lim_{t\to -\infty} q(t;x_0)/t,$$
in definition \ref{def:free}. In this way we arrive at the backward
time analogue of being ``free'', which is to be in the set
$$F^-:= \{x \in P : \text{the solution through } x \text{
is backward free} \} ,$$
and the backward M\o ller transform
$$\Omega_- := \lim_{t \to - \infty} \Phi_{-t} \circ\Phi^0_{t}: P
\dashrightarrow P.$$
If $x_0 \in F^- \cap F^+$ then both $v^+ (x_0)$ and
$v^{-} (x_0)$ are defined, which leads us to the {\em scattering
relation} $\sim_s$ on $\bR^{dn} \, \backslash\, \Delta$ under which
$v^- \sim_s v^+$ if and only if there exists an
$x_0 \in F^- \cap F^+$ such that $v^- (x_0) = v^-$ and
$v^+ (x_0) = v^+$. Borrowing from quantum mechanics, the
``S-matrix'' or scattering map is defined by
$$S\, :=\, \Omega_+ ^{-1} \circ \Omega_- .$$
$S$ takes an ``initial condition'' $(p_-, C_-)$ at time $t = -\infty$
to an $x_0 \in F^- \cap F^+$ and then takes this $x_0$ to the
$(p_+, C_+)$ at $t = + \infty$ to which its solution corresponds.
Observe that $p_{\pm} = {\cal M} v^{\pm} (x_0)$ so that the projection of
the {\em graph} of the scattering map onto its momentum components
$p_-, p_+$ yields the scattering relation (times the mass matrix
${\cal M}$). We will leave this work to future researchers or future
times.
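As a numerical illustration (not taken from the text): for the repulsive $(-1)$--homogeneous potential $V(q)=1/|q|$ in the plane with unit mass, one point of the graph of the scattering relation can be approximated by direct integration, since $v^{\pm}=\lim_{t\to\pm\infty}q(t)/t$ exist for such orbits. A self-contained sketch with a fixed-step Runge--Kutta integrator (the initial data, step size and integration time are arbitrary choices):

```python
import math

# Repulsive (-1)-homogeneous potential V(q) = 1/|q| in the plane, unit mass.
# Positions and velocities are complex numbers; qdotdot = -grad V(q) = q/|q|^3.
def accel(q):
    return q / abs(q) ** 3

def rk4(q, v, h, n):
    """Fixed-step fourth-order Runge-Kutta integration of (q, v)."""
    for _ in range(n):
        k1q, k1v = v, accel(q)
        k2q, k2v = v + 0.5 * h * k1v, accel(q + 0.5 * h * k1q)
        k3q, k3v = v + 0.5 * h * k2v, accel(q + 0.5 * h * k2q)
        k4q, k4v = v + h * k3v, accel(q + h * k3q)
        q += h * (k1q + 2 * k2q + 2 * k3q + k4q) / 6
        v += h * (k1v + 2 * k2v + 2 * k3v + k4v) / 6
    return q, v

# Come in from the far left, moving in +x with impact parameter 1.
q0, v0 = complex(-100.0, 1.0), complex(1.0, 0.0)
E = 0.5 * abs(v0) ** 2 + 1.0 / abs(q0)      # conserved total energy
q, v = rk4(q0, v0, 0.01, 40000)             # integrate up to t = 400

# Energy is conserved, and since V -> 0 at infinity the asymptotic
# speed |v^+| is fixed by the energy alone.
assert abs(abs(v) ** 2 / 2 + 1.0 / abs(q) - E) < 1e-6
assert abs(abs(v) - math.sqrt(2 * E)) < 0.01

# One point of the scattering relation: v^- along +x is paired with a
# v^+ rotated by a Rutherford-like deflection angle, here close to 90 degrees.
deflection = math.degrees(math.atan2(v.imag, v.real))
assert 80.0 < deflection < 95.0
```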
\begin{remark}[Manifold at infinity]\quad\\
For an alternate construction of the scattering map which is valid
for long range potentials and in particular for the Newtonian
potential, see \cite{Duignan}. In this version $S$ is defined by
adding a manifold at infinity and identifying the asymptotic
velocities $v_-$, $v_+$ with equilibrium points at infinity.
\hfill $\Diamond$
\end{remark}
\section{Introduction}
The Homunculus nebula around the massive luminous blue variable $\eta$~Carinae is the result of an eruptive mass-loss event that took place around 1843, throwing out more than 10~M$_{\odot}$ in a bipolar outflow \citep{morseetal01,gaviola50,ringuelet58,currieetal96,smithetal98a,smithetal98,smithetal03b,smith06}. \citet{hillieretal92} suggested that the lobes of the Homunculus are essentially hollow shells with most of the mass concentrated in two polar caps and thin side walls (see also \citealt{davidsonetal01,smith02,meaburnetal93}). In addition, a ragged equatorial disc also formed, which makes the Homunculus unique among bipolar nebulae \citep{duschletal95}. The Homunculus shows a complex mottled surface with lanes of dust condensation and holes in the lobes \citep{morseetal98}. Although the Homunculus is mainly a reflection nebula, it also has associated intrinsic emission lines due to shocks or photo-excitation \citep{smith05,smith04,smith06,smith02,davidsonetal01,allenetal91,allenetal93,hillieretal92}. Such studies showed that the observed velocity pattern of a spectral line along the Homunculus can be used to determine the origin of each component of the emission.
Surprisingly, another bipolar structure lying inside the Homunculus was discovered by \citet{ishibashietal03} using long-slit spectroscopy obtained with the Space Telescope Imaging Spectrograph (STIS) aboard the Hubble Space Telescope (\textit{HST}). The so-called \textit{Little Homunculus} was detected in more than 30 optical lines of [Fe\,{\sc ii}] and [Ni\,{\sc ii}] (such as [Fe\,{\sc ii}]~$\lambda$4891, $\lambda$4907, $\lambda$4975, and [Ni\,{\sc ii}]~$\lambda$7380) as well as in He\,{\sc i} and H\,{\sc i} lines \citep{ishibashietal03}, and is particularly bright in the near-IR [Fe\,{\sc ii}] lines \citep{smith02}. The proper-motion and radial-velocity analyses are consistent with the \textit{Little Homunculus} forming in a smaller mass-ejection event that occurred around 1890 \citep{ishibashietal03,smith05}. \citet{smith05} estimated an ejected mass and kinetic energy for the \textit{Little Homunculus} of 0.1--0.2~M$_{\odot}$ and roughly 10$^{47}$~erg, respectively -- values which are at least two orders of magnitude lower than those for the Great Eruption that created the larger Homunculus \citep{smithetal03b}. Although \citet{smith05} mapped the basic structure of the \textit{Little Homunculus} using five slits oriented parallel to the major axis and separated by $\approx$1 arcsec, a higher angular resolution map of its spatial distribution \textit{in the near-infrared} has not been made so far.
$\eta$~Car is also surrounded by a broken toroidal ring structure \citep{smithetal00,smithetal02,morrisetal99,ishibashietal03}, which absorbs part of the ultraviolet radiation, while allowing the rest to escape through holes and excite/ionise the surrounding gas at large distances from the central source \citep{smith06}. An excellent example of this effect is given by the blue-shifted component of the He\,{\sc i}~$\lambda$10830 line projected onto the NW lobe \citep{smith02}.
This work presents the results of the first integral-field spectroscopy mapping of the Homunculus nebula. The paper is organized as follows. The observations and data reduction are described in \S\ref{obs}. A discussion of the line-formation mechanisms throughout the nebula around $\eta$~Carinae is presented in \S\ref{mec}, while the results are shown in \S\ref{res}. In \S\ref{dis} we compare the radio continuum emission with our results and present our estimates of the properties of the hot companion star. Finally, our conclusions are summarized in \S\ref{sum}.
\section{Observations and data reduction}\label{obs}
The integral field observations of $\eta$~Car were recorded on 2003 March 14, 15 and 18 at the 8m Gemini South telescope using the visitor instrument CIRPASS\footnote{\href{http://www.ast.cam.ac.uk/~optics/cirpass/cirpass\_index.html}{www.ast.cam.ac.uk/$\sim$optics/cirpass/cirpass\_index.html}}, a spectrograph developed by the Cambridge instrumentation team \citep{parryetal04}. CIRPASS has 490 hexagonal lenses in its integral field unit, with a spatial sampling interval of 0.25 arcsec per lens. It provides a wavelength coverage of $\lambda\lambda$10620--12960, with a resolving power ($\frac{\lambda}{\Delta\lambda}$) of 3200.
The observations were originally planned for two epochs, one at the high-excitation state (2003 March) and the other at the low-excitation state (2003 July) of the 5.5 yr cycle \citep{damineli96, daminelietal00}. Unfortunately, the observing run during the low-excitation state was lost due to poor weather conditions.
The following strategy was adopted to map the whole Homunculus: 2 images were taken at a given position of the nebula, and then the IFU was shifted by 0.88 arcsec along the North--South axis to take the next 2 images. The final dataset comprised 2 images from 44 IFU positions, which corresponds to a final mosaic of 6299 spectra and covers the whole Homunculus.
The data were reduced using standard near-infrared techniques. First, any spurious features were removed by the {\sc iraf}\footnote{{\sc iraf} -- Image Reduction and Analysis Facility -- is written and supported by the {\sc iraf} programming group at the National Optical Astronomy Observatories (NOAO) in Tucson, Arizona. NOAO is operated by the Association of Universities for Research in Astronomy (AURA), Inc. under cooperative agreement with the National Science Foundation}/{\sc cosmicrays} task. After that, the spectra were flat-fielded, and an optimal spectral extraction was performed in order to account for the overlapping wings among neighboring spectra. The wavelength calibration was done using an argon lamp spectrum and a polynomial interpolation (RMS $\sim 0.1$~{\AA}). The telluric lines were removed by dividing the Homunculus spectra by the spectrum of a hot early-type star observed just after the science observation. The photospheric lines of the telluric standard were removed beforehand through an interpolation of the adjacent continuum. The FWHM of the point-spread function of our data is 0.4~arcsec, which was measured using the intensity profile of the standard star. Finally, a full data cube was constructed from the individual spectra using our own tasks written in {\sc idl}. The signal-to-noise ratio (S/N) is quite variable along the entire mosaic: within the region where the Homunculus nebula is located, the average S/N in the continuum is $\approx 25$, while throughout the outer ejecta it is roughly 10.
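The telluric-division step described above can be sketched as follows. This is a synthetic illustration, not the actual CIRPASS pipeline; the transmission troughs and line parameters are invented, and the standard star is assumed to have a featureless continuum after interpolation of its photospheric lines:

```python
import numpy as np

wav = np.linspace(10620.0, 12960.0, 2048)   # CIRPASS J-band coverage (Angstrom)

# Synthetic telluric transmission with a few absorption troughs (illustrative).
telluric = np.ones_like(wav)
for c, depth, w in ((11000.0, 0.4, 15.0), (11350.0, 0.6, 10.0), (12600.0, 0.3, 20.0)):
    telluric -= depth * np.exp(-0.5 * ((wav - c) / w) ** 2)

# Science spectrum: flat continuum plus an [Fe II] 12567 emission line,
# multiplied by the atmospheric transmission.
intrinsic = 1.0 + 0.5 * np.exp(-0.5 * ((wav - 12567.0) / 8.0) ** 2)
science = intrinsic * telluric

# Hot early-type standard: featureless continuum (photospheric lines already
# interpolated out), seen through the same transmission.
standard = telluric.copy()

# Telluric correction by division recovers the intrinsic spectrum here.
corrected = science / standard
assert np.allclose(corrected, intrinsic)
```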
\section{Mechanisms of excitation and/or ionization in the circumstellar ejecta of $\eta$~C\lowercase{ar}}\label{mec}
In this paper, we focus on two emission lines found throughout the Homunculus, namely, [Fe\,{\sc ii}]~$\lambda$12567 and He\,{\sc i}~$\lambda$10830. As usual in the spectra of the Homunculus, these lines show many components of intrinsic and reflected emission, which can be used to map the spatial distribution of structures lying inside or outside the Homunculus. Hence, it is very important to know about the mechanisms of line-formation in order to better understand the physical conditions of the emitting and reflecting regions.
In this section we discuss the process of line-formation inside and outside the Homunculus nebula. We must stress that, throughout this paper, we refer to `photo-excitation' as the process by which a photon of the radiation field is absorbed by an atom or ion of the gas with the promotion of one electron to a higher energy level without ionization (bound-bound transition). If the incident photon has an energy greater than the ionization potential of the atom or ion, then one electron will be stripped from it. This process is called `photo-ionization' and is responsible for bound-free transitions.
On the other hand, if the gas is hot enough to keep most of the atoms ionized, as usually found around massive stars, then the electron density will be high and the electrons will likely collide with the ions and excite their bound electrons to higher (more energetic) levels. This process is called `collisional-excitation' or `collisional-ionization', depending on whether or not the incident electron removes an electron from the target ion.
The return of the electron to the ground state of the ion -- known as recombination -- is followed by the emission of many photons of different energies, which gives rise to the emission-line spectrum of some stars.
\subsection{The Homunculus nebula}
Many studies have shown that in the lobes of the Homunculus nebula, photo-excitation is the predominant mechanism for populating the atomic levels of the ions \citep{gulletal05,verneretal05,smith06}. This is because the strong stellar radiation is absorbed by dust in the nebula which, in turn, is heated to a few hundreds degrees Kelvin and starts to emit a reprocessed radiation field responsible for keeping the observed ionization structure inside the walls \citep{smithetal07ii}. However, this is not ionizing radiation since the central source has a dense stellar wind which absorbs nearly all of the Lyman-continuum photons. The few photons that escape the stellar wind are absorbed either by the toroidal structure at the equator or by the \textit{Little Homunculus}, which causes strong variability in the radio continuum \citep{duncanetal97}.
On the other hand, the stellar wind shows a latitude-dependent profile, being denser and faster in the polar regions than in the equatorial region \citep{smithetal03}. The expansion velocity of the Homunculus is about 600~km\,s$^{-1}$ while the stellar wind reaches terminal velocities in the range of 600 to 1000~km\,s$^{-1}$. Thus, collisionally-excited emission lines are also expected inside the lobes. Indeed, the strengths of the IR [Fe\,\textsc{ii}] and H$_{2}$ lines in the NW lobe relative to the SE one suggest that there is a combination of slow shocks and photo-excitation \textit{inside the lobes}. The shocks are needed to explain the observed value $\ga$~35 for the ratio of [Fe\,\textsc{ii}]~$\lambda$16435 to Br$\gamma$ \citep{smithetal01}, though the shock velocity should not exceed the threshold either for dissociation of H$_{2}$ -- detected by \citealt{smithetal01} -- or for the emission of hard X-ray photons -- as noted by \citealt{weisetal01}. Therefore, the walls of the nebula must be composed of dense, small neutral/molecular knots or clumps \citep{morseetal98} so that the fast bi-polar wind could escape into the outer ejecta without strong interaction with the Homunculus (similar to the well-known Rayleigh-Taylor instabilities).
Therefore, in the lobes of the Homunculus nebula, there is a competition between photo-excitation and collisions as the main line-formation mechanism, the former being the most significant excitation/ionisation process.
\subsection{The equatorial region}
In the equatorial region, however, the stellar wind is slower than in the polar regions. It also has a lower density and, consequently, the emission due to collisions is weak. Therefore, the equatorial region is largely dominated by photo-excitation, because of both the proximity to the central source and the high ionization flux from the equatorial region of the central star \citep{smithetal03}. The presence of a torus around the system is revealed by narrow-band IR images and confirmed at radio wavelengths \citep{smithetal98,duncanetal97} as well as in emission lines. Nevertheless, this structure is not continuous but shows both dense clumps (where low-ionization ions are detected) and holes (through which radiation can escape). Examples of these structures are the so-called `Sr-filament' \citep{zethsonetal99,gulletal00,gulletal01,zethsonetal01,bautistaetal02,hartmanetal04,bautistaetal06} and the He\,\textsc{i}~$\lambda$10830 emission columns \citep{smithetal02}.
\subsection{The outer ejecta}
The picture drastically changes when considering the line-formation process outside the Homunculus nebula, namely, in the outer ejecta. This region is nitrogen-rich \citep{smithetal04a} and responsible for practically all of the observed X-ray flux up to 1.5~keV, which implies shock velocities in excess of 1500~km\,s$^{-1}$ \citep{weisetal04}. These shocks are sufficient to excite (and even ionize) ions to higher energy levels than those observed in the Homunculus. This is supported by the observation of N\,\textsc{vi}/\textsc{vii}, Si\,\textsc{xiii}/\textsc{xiv}, Mg\,\textsc{xii} in the X-ray part of a thermal-emission spectrum and strong N\,\textsc{ii}, [O\,\textsc{iii}], [S\,\textsc{iii}] and Si\,\textsc{ii} and many other lines of high-ionization energy ions in the optical range \citep{smithetal04,weisetal04,hamaguchietal07}.
Thus, throughout the Homunculus nebula the process of line-formation is mainly via photo-excitation with a little contribution from slow shocks inside the lobes at high stellar latitudes, while in the outer ejecta the predominant process is excitation/ionisation via collisions, with little contribution from photo-excitation.
\section{Results: $J$-band spatial maps of the nebula around $\eta$~C\lowercase{ar}}\label{res}
\subsection{Structures in the [Fe\,{\sc ii}]~$\lambda$12567 line}\label{collisions}
This section presents the spatial structure and kinematics of the photo-excited regions found in the Homunculus. For the first time, a complete spatial map of such photo-excited regions is presented in the near-infrared. Although velocity maps were already made using forbidden lines in the optical with higher spatial resolution using \textit{HST} \citep{ishibashietal03}, the use of the near-infrared region in this work allows the observer to peer through the circumstellar dust and probe the environment around $\eta$~Car.
Extensive work has been done in the near-infrared by N. Smith \citep{smith02,smith04,smith05,smith06} to establish the overall kinematics of the regions around $\eta$~Car, especially the \textit{Little Homunculus} \citep{smith05}. However, a complete spatial map has not been made so far in the near-infrared. The IFU observations presented here should therefore be interpreted as complementary information to what has been presented by previous works using long-slit spectroscopy.
Fig.~\ref{fig1} is an image of the circumstellar environment of $\eta$~Car obtained with the Planetary Camera of the Wide Field Planetary Camera 2 (WFPC2) on board the \textit{HST}. It is a negative of the image shown in fig. 3 of \citet{morseetal98}, reproduced by permission of the AAS. This image is referred to throughout this paper for the location and standard nomenclature of specific regions in the ejecta of $\eta$~Car.
\begin{figure}\centering \includegraphics[width=8.4cm]{fig/fig1.jpg} \caption{\label{fig1}Optical \textit{HST}/WFPC2 image of the ejecta around $\eta$~Car (bottom image of fig. 3 from \citealt{morseetal98}, reproduced by permission of the AAS). This image shows the adopted nomenclature of some of the most important regions around $\eta$~Car.} \end{figure}
\begin{figure}\centering \includegraphics[width=8.4cm]{fig/fig2a.jpg}
\includegraphics[width=8.4cm]{fig/fig2b.jpg} \caption{\label{fig2}The upper panel (background image from fig. 2 of \citealt{morseetal98}) shows the positions where we extracted the spectra shown in the lower panel. Although the boxes in the upper panel have approximately the dimension of one lens (0.25~arcsec), each spectrum in the lower panel is a median combination of 4 adjacent lenses in order to improve the signal to noise ratio. The abbreviations are described in \S\ref{idspec}.} \end{figure}
\subsubsection{Identifying line components in the Homunculus spectrum}\label{idspec}
We used the kinematic model of the Homunculus proposed by \citet{davidsonetal01} to identify reflected and intrinsic emission components. This model was built using long slit observations of forbidden spectral lines in the optical, mapping the emission coming from inside the Homunculus with spatial and spectral resolution of 0.1~arcsec and $\approx90$~km\,s$^{-1}$, respectively. Thus, as a first approximation, the \citet{davidsonetal01} model is accurate enough (for our objectives) to classify the observed components of a given line as intrinsic or reflected.
Typical spectra from the Homunculus are presented in Fig.~\ref{fig2}. The adopted abbreviations are as follows: $f$ ($b$) means intrinsic emission coming from the {\it front} ({\it back}) wall of the SE or NW lobe and $s$ means {\it scattered} emission reflected in the SE or NW lobe. As usual, we refer to the {\it front} ({\it back}) wall as the near (far) side of each lobe of the Homunculus. Emission associated with the equatorial ejecta is labelled as either EQ$s$ or EQ$r$, where $s$, in this case, means {\it slow}- and $r$, {\it rapid}-moving gas.
We identified all of the [Fe\,\textsc{ii}]~$\lambda$12567 components in order to map the emission structures of the Homunculus. The Doppler velocity of each component mentioned hereafter is heliocentric and was obtained by a multi-gaussian line fitting procedure. The components are as follows:
\begin{itemize}
\item $f$SE ($\approx-400$~km~s$^{-1}$) is an intrinsic emission due to Balmer excitation to an upper level followed by recombination to a lower level (bound-bound transition) or recombination of Fe$^{++}$ to Fe$^{+}$ (free-bound transition) inside the SE lobe;
\item $b$SE ($\approx-200$~km~s$^{-1}$) is intrinsic emission coming from the back wall of the SE lobe and is due to recombination of Fe$^{++}$ to Fe$^{+}$ as well;
\item $s$SE ($\approx+200$~km~s$^{-1}$) is emission from the stellar wind scattered in the front wall of the SE lobe;
\item EQ$s$ ($\approx-100$~km\,s$^{-1}$) is due to intrinsic emission from slow-moving material located in the equatorial plane, which intercepts our line-of-sight towards the NW lobe. This component was also detected in [Ni\,{\sc ii}]~$\lambda$7380 by \citet{davidsonetal01} and in many low-ionization emission lines such as [Mn\,\textsc{ii}], [Cr\,\textsc{ii}] and [Ti\,\textsc{ii}]. It is due to the `Sr-filament' discussed in \S~\ref{radio};
\item EQ$r$ ($\approx-250$~km\,s$^{-1}$) is due to equatorial ejecta illuminated directly by ionizing radiation from the central source. This component comes from a region known by its strong continuum radio emission and variability. We will discuss this feature in \S~\ref{radio};
\item $f$NW ($\approx+100$~km~s$^{-1}$) is an intrinsic emission associated with the near wall of the NW lobe. It is due to the same mechanism as $f$SE;
\item $b$NW ($\approx+300$~km~s$^{-1}$) is also intrinsic emission related to recombination at the polar region of the NW lobe;
\item LH is emission associated with the \textit{Little Homunculus}.
\end{itemize}
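The multi-Gaussian line fitting used to measure these heliocentric component velocities can be illustrated with a toy example (a synthetic blend of the $f$SE and $b$SE components; the coarse grid search with a linear least-squares solve for the amplitudes is a simple stand-in for the actual fitting procedure, whose details are not given here):

```python
import numpy as np

v = np.linspace(-600.0, 600.0, 601)          # velocity grid (km/s)

def gauss(v, center, sigma=50.0):
    return np.exp(-0.5 * ((v - center) / sigma) ** 2)

# Synthetic blend of the fSE (-400 km/s) and bSE (-200 km/s) components,
# plus a mild deterministic "noise" term.
profile = 1.0 * gauss(v, -400.0) + 0.6 * gauss(v, -200.0)
profile += 0.01 * np.sin(v / 37.0)

# Grid search over the two centers; amplitudes solved by linear least squares.
best = None
centers = np.arange(-550.0, 0.0, 10.0)
for c1 in centers:
    for c2 in centers:
        if c2 <= c1:
            continue
        A = np.column_stack([gauss(v, c1), gauss(v, c2)])
        amp = np.linalg.lstsq(A, profile, rcond=None)[0]
        sse = float(np.sum((A @ amp - profile) ** 2))
        if best is None or sse < best[0]:
            best = (sse, c1, c2)

_, c1, c2 = best
assert abs(c1 - (-400.0)) <= 10.0 and abs(c2 - (-200.0)) <= 10.0
```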
\subsubsection{The hole at the pole of lobes}\label{hole}
\begin{figure*}\centering \vbox {\vfil \includegraphics[width=16cm]{fig/fig3.jpg} \caption{\label{fig3}Velocity maps of the [Fe\,{\sc ii}]~$\lambda$12567 line showing a hole in both the SE and NW lobes \citep{smithetal98}. The coordinates of the NW hole are the same as those of the SE hole but rotated by 180\degr around the centre of the axis. Notice the change in the color scale to bring up weaker emission in each lobe. The emission from the surroundings of the hole is almost 50 per cent greater than that coming from the hole itself. The system velocity is 8.1 km\,s$^{-1}$ heliocentric \citep{smith06}. A fully animated version of this figure is available at \href{http://www.astro.iag.usp.br/~damineli/feii_holes.gif}{www.astro.iag.usp.br/$\sim$damineli/feii\_holes.gif} with the same color scale.}\vfil} \end{figure*}
We detected two regions where the emission due to recombining Fe$^{++}$ is nearly absent. These regions can be seen in Fig.~\ref{fig3}, which shows velocity maps of [Fe\,{\sc ii}]~$\lambda$12567 from $-525$~km~s$^{-1}$ to $+510$~km~s$^{-1}$. In both lobes, a small polar region with extremely reduced emission in this line can be identified (circular contour in each lobe in Fig.~\ref{fig3}). The flux from these holes is typically $\sim50$ per cent lower than that from their immediate surroundings. Moreover, such a deficit of [Fe\,{\sc ii}] and H$_2$ emission at these locations was also reported by \citet{smithetal98} and \citet{smith06}.
It is tempting to associate the hole seen in the [Fe\,{\sc ii}]~$\lambda$12567 velocity maps with the structure known as the \textit{SE hole} (see Fig.~\ref{fig1}), observed at optical and near-infrared wavelengths \citep{smithetal98}. To investigate this hypothesis, we compared the relative positions of the \textit{SE hole} and the SE region with low emission of [Fe\,{\sc ii}]~$\lambda$12567. In addition, we compared the positions of those regions with the location of the lobe's pole\footnote{By `pole' we mean the location on the lobe where the stellar latitude is $90\degr$. Obviously, this position is model-dependent.}. To do so, we assumed the kinematic model of the Homunculus of \citet{smith06}, which was obtained by tracing the H$_2$ emission along the nebula and gives a more accurate position for the location of the pole.
Our results give strong support to the idea that there is indeed a hole at the pole of each lobe. Two main facts led us to this conclusion:
\begin{itemize}
\item the location of the SE region with lower emission of [Fe\,{\sc ii}]~$\lambda$12567 matches neither the position of the \textit{SE hole} nor the pole of the lobe. Instead, they are shifted from one another by $\approx 0.7$~arcsec along the major axis of the Homunculus, i.e. the \textit{SE hole} is half-way between the pole of the lobe and the region with weak emission of [Fe\,\textsc{ii}]~$\lambda$12567. Hence, the lack of Fe$^{+}$ emission is not associated with either the \textit{SE hole} or the pole.
\item the absence of thermal infrared emission from dust reported by \citet{smithetal98} is also a strong indication that those polar holes are indeed lower-density regions, and not shadows.
\end{itemize}
[Fe\,\textsc{ii}]$\lambda$12567 is most likely to arise from a warm, low-density region \textit{inside} the lobe because when the stellar radiation field penetrates the wall of the lobes -- which has a hydrogen density of about $10^{7}$~cm$^{-3}$ -- it gets more attenuated and then Fe$^{+}$ recombines to Fe$^{0}$ and we see no more emission from Fe\,\textsc{ii} \citep{smithetal07ii}. Thus, the spatial distribution of this line represents the emission coming from the inner part of the lobes. Together with the fact that molecular hydrogen emission comes from a region of the lobe that is shielded from strong radiation -- i.e. just outside it --, we could get a rough estimate of the lobes' thickness.
A first approximation of the geometry of the hole is to consider it as a cylinder with linear diameter\footnote{To convert between apparent and linear sizes, we adopted a distance of 2.25~kpc to $\eta$~Car \citep{davidsonetal01}.} \textit{d} -- the diameter of the [Fe\,{\sc ii}] non-emitting region -- and height $\Delta$R$_0$ -- the distance between the SE pole of the Homunculus and the [Fe\,{\sc ii}] non-emitting region. Hence, considering an inclination angle of \textit{i}=41\degr from the line-of-sight \citep{davidsonetal01,smith06}, we obtained for the height and diameter of the cylinder, respectively, linear sizes of $6.5\pm0.4\times10^{16}$ and $6.0\pm0.3\times10^{16}$~cm. The errors quoted here are due only to our uncertainty in position and do not include the uncertainty in distance, which is in the range of 0.1--$2\times10^{16}$~cm for most studies \citep{davidsonetal01,smith06}.
The right panel of Fig.~\ref{fig4} shows the adopted model for the height and radius of the hole in the lobes of the Homunculus. The coordinates of the non-emitting region in the NW lobe were obtained by considering the position of the same region in the SE lobe, mirrored relatively to the central star by 180\degr.
We also estimated the thickness of the lobes at lower latitudes by measuring the `limb-darkening' profile seen in our velocity maps (cf. Fig.~\ref{fig3}). Indeed, there is a clear separation between the [Fe\,\textsc{ii}]~$\lambda$12567 emission and the optical limit of the Homunculus' lobes because of the high-density medium inside the wall of the lobes. The observed mean separation in both lobes is 1.3~arcsec, which corresponds to a linear thickness of $4.4\pm0.5\times10^{16}$~cm (here, the errors are due to the irregularity of the [Fe\,\textsc{ii}]~$\lambda$12567 emission region). This result suggests that the thickness of the lobes is also latitude-dependent, the lobes being almost 50 per cent thicker at the poles than at lower latitudes.
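The apparent-to-linear conversions used in this section follow from the small-angle relation at the adopted distance of 2.25~kpc; a quick numerical check (the physical constants are standard values, and no deprojection is applied here):

```python
D_PC = 2250.0                  # adopted distance to eta Car: 2.25 kpc
PC_CM = 3.0857e18              # centimetres per parsec
ARCSEC_PER_RAD = 206265.0      # arcseconds per radian

def arcsec_to_cm(theta_arcsec, d_pc=D_PC):
    """Small-angle linear size subtended by theta_arcsec at distance d_pc."""
    return theta_arcsec / ARCSEC_PER_RAD * d_pc * PC_CM

# 1 arcsec at 2.25 kpc is about 3.4e16 cm ...
assert abs(arcsec_to_cm(1.0) - 3.37e16) / 3.37e16 < 0.01
# ... so the 1.3 arcsec limb separation gives the quoted ~4.4e16 cm thickness.
assert abs(arcsec_to_cm(1.3) - 4.4e16) / 4.4e16 < 0.02
```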
Both holes -- SE and NW -- define an axis with position angle (P.A.) of $-50\degr$, which coincides with that found by \citet{smith02} based on symmetry arguments. We suggest that these holes must form a fundamental axis of the Homunculus, which could be created because of a low (or even inhibited) mass-loss rate within $\approx 5\degr$ of the poles. Since about 75 per cent of the mass of the Homunculus is located at high stellar latitudes \citep{smith06}, when the lobes expand they might appear as two rings in the future, similar to those seen around other blue supergiants, such as HD168625 or Sher~25 \citep{smith07}.
An alternative explanation is that the central star has had a major blowout in the polar region, creating the holes. In this scenario, this `blowout' would have been a greater manifestation of the same mechanism that produced the many fast-moving structures dubbed `strings' or `whiskers' or even `spikes' \citep{weisetal99,morseetal98,meaburnetal96}. These high-density ($n_{\rm{e}}\sim10^4$~cm$^{-3}$; \citealt{weis02}) filamentary structures lie outside the Homunculus and are moving at nearly 1000~km\,s$^{-1}$; even so, they do not emit hard X-rays, most likely because of their very small cross-sections. Interestingly, they are only seen at high stellar latitudes (in the polar directions). One of the explanations for the observed velocity profile (a Hubble law) of these structures is that they could have been formed in a presumed stellar explosion \citep{weisetal99}. Thus, if this scenario is correct, such an explosion could be responsible for the formation of the holes at the poles as well.
We also note that an explosion at the surface of the primary star was the physical mechanism used by \citet{smithetal07iii} in a simulation that creates, simultaneously, a bipolar nebula and an equatorial disc (as observed in the Homunculus). However, the physical mechanism that could start such stellar explosion remains unclear, and encourages further studies.
\begin{figure*}\centering \vbox {\vfil \includegraphics[width=16.8cm]{fig/fig4.jpg} \caption{\label{fig4}(a) Position of the features used to estimate the thickness of the lobes: $\diamond$ marks the location of the pole of the lobe (as in \citealt{smith06}), while $\times$ indicates the centre of the [Fe\,\textsc{ii}]~$\lambda$12567 weak-emission region (circular contour). The greyscale background image is from \citet{morseetal98}. (b) A cylinder in the SE lobe superimposed on the Homunculus model. From the $\times$ mark to the $\diamond$ symbol, the linear height of the cylinder is roughly $6.5\times10^{16}$~cm (log($\Delta\rm{R_0}$)=16.81), which we assumed as the thickness of the polar region of the lobes. The diameter of the cylinder is assumed to be the same as that observed for the [Fe\,\textsc{ii}]~$\lambda$12567 weak-emission region, which has a linear size of $6.0\times10^{16}$~cm. The adopted values for the inclination and position angle of the Homunculus model are also given (the same parameters are applied to the cylinder as well).}\vfil} \end{figure*}
\subsubsection{Spatial mapping of the \textit{Little Homunculus}}
Due to the long-slit spectroscopic technique employed by \citet{smith05}, the determination of the spatial extent and distribution of the \textit{Little Homunculus} (hereafter LH) was restricted to the interpolation between the points where the emission associated with the LH was detected in the slit. In the present work, we show the 3D kinematics in the form of slices in velocity space, rather than slices along the major axis as in \citet{smith05}, but the results of the two independent methods are in agreement. Our velocity channel images may provide a better way to evaluate images from simulations of the formation of the LH \citep{gonzalezetal04}, as we provide a complete, model-independent spatial map of the LH.
Our analysis of the velocity maps (Fig.~\ref{fig5}) showed that the emission of the SE lobe of the LH begins at~$\approx -250$~km~s$^{-1}$ and extends up to $+100$~km~s$^{-1}$. The emission associated with the LH is seen blue-shifted near the centre in the SE lobe, as indicated in Fig.~\ref{fig5}(a) and (b), in line with the results of \citet{smith05}. Starting from negative and moving toward positive velocities, it is possible to see the emission from the equatorial disc (EQ in Fig.~\ref{fig5}(a)) in the same line of sight as the NW lobe of the LH. However, based on geometric arguments, the components of the equatorial disc can only have negative velocities \citep{davidsonetal97b}. Thus, the component associated with the NW lobe of the LH was identified as the structure lying near the central region with velocities ranging from $\approx+20$ up to about $+235$~km\,s$^{-1}$ (see Fig.~\ref{fig5}(d)--(f)).
\begin{figure*}\centering \vbox {\vfil \includegraphics[width=16.8cm]{fig/fig5.jpg} \caption{\label{fig5}Velocity maps of the [Fe\,{\sc ii}] $\lambda$12567 line showing the observed structures and their identification. LH marks emission associated with the \textit{Little Homunculus}, while EQ is due to the equatorial disc. A fully animated version of this figure is available at \href{http://www.astro.iag.usp.br/~damineli/feii_lh.gif}{www.astro.iag.usp.br/$\sim$damineli/feii\_lh.gif} with the same color scale.}\vfil} \end{figure*}
\subsection{The H\lowercase{e\,{\sc i}} $\lambda$10830 emission column}\label{beam}
The He\,{\sc i}~$\lambda$10830 line has a very complex velocity structure in the spectrum of the Homunculus \citep{smith02}. It is a combination of absorption, emission and reflection from different regions inside and outside the nebula and is, presumably, formed near the central source.
We detected an intrinsic emission component which appears restricted to a narrow azimuthal region in the line-of-sight to the NW lobe \citep{smith02}. It is labelled as 1 in Fig.~\ref{fig6}, and is likely photo-excited by energetic photons (at least $16.2$~eV), since [Fe\,{\sc ii}]~$\lambda$12567 does not show any component at the same velocity. Component 2 is associated with emission from the slow-moving equatorial ejecta\footnote{Note, however, that component 2 may also be associated with diffuse emission due to the H\,\textsc{ii} region in which $\eta$~Carinae is immersed.}, and component 3 is reflected emission from the central source in the NW lobe (see also figs~12 and~13 of \citealt{smith02}).
This narrow intrinsic emission of He\,{\sc i}~$\lambda$10830 (hereafter the He emission column) is likely formed when the UV radiation from the central source passes through the holes in the torus \citep{smith02,smithetal02} and is free to excite He atoms at large radii from the central source. The apparent Doppler velocity of the He emission column changes from $\approx-250$~km~s$^{-1}$ at a projected distance of 2~arcsec from the central source to $\approx-500$~km~s$^{-1}$ at 10.5~arcsec (see Fig.~\ref{fig7}), suggesting that the emitting region has a Hubble-flow motion, i.e. $v\propto d$. Note that such a high projected velocity would suggest that the equatorial disc has ejecta moving as fast as the polar cap of the Homunculus. However, as will be discussed in the next section, given the errors associated with the determination of the inclination angle of the helium emission column (which indicates whether or not it is in the equatorial disc), it may be comfortably associated with the Great Eruption, which lasted about 20 yr with a peak between 1843 and 1851 \citep{currieetal96,currieetal99,morseetal01}.
\subsubsection{Characteristics of the helium emission column and the detection of its twin brother}
It is known that the \textit{Paddle} is a dust-free region located in the equatorial disc \citep{smithetal98}. Its shape is well defined and seems symmetric with respect to the major axis of the Homunculus\footnote{Although it is likely that this alignment occurs by chance.} (see Fig.~\ref{fig1}). Therefore, the \textit{Paddle} would be a suitable candidate for the escape of radiation. Indeed, free of any interaction, radiation flowing through that region would be expected to follow a linear path centered on P.A.=$-41\degr$ -- the position angle of the \textit{Paddle}. However, the observed P.A. of the He\,\textsc{i} emission column differs from that of the \textit{Paddle}: the He emission is at $-48\degr$ (see Fig.~\ref{fig8}). We must stress that, even with this discrepancy, the He\,\textsc{i}~$\lambda$10830 emission column can be weakly detected along P.A.=$-41\degr$ because it is roughly 3~arcsec wide, but the bulk of the emission indeed comes from P.A.=$-48\degr$.
In order to analyse the kinematic structure of the He\,\textsc{i}~$\lambda$10830 line, we adopted the same convention as in \citet{davidsonetal01} and calculated the inclination angle\footnote{The inclination angle is defined such that $i=0\degr$ means that the equatorial disc is seen face-on, while $i=90\degr$ corresponds to an edge-on view.} of the He emission column using the following equation
\[ \tan(i_{He})=\frac{t}{4.74 D}\frac{V}{\omega}, \]
where $t$ is the age, in years, of the equatorial gas, $D$ is the heliocentric distance to $\eta$~Car measured in pc, and $V$ is the apparent Doppler velocity (km~s$^{-1}$) measured at position $\omega$ (arcsec). Note that there is no consensus regarding the age of the equatorial disc. Based on kinematic studies, \citet{morseetal01} suggested that the equatorial disc is coeval with the Homunculus lobes, although some material appears to be even younger, presumably associated with later eruptions \citep{davidsonetal97,davidsonetal01,smithetal98a,dorlandetal04}. In the present work, we assumed an age of 160 yr for the equatorial disc. We also assumed that it is perpendicular to the major axis of the Homunculus \citep{davidsonetal01,smithetal07iii}.
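As a quick numerical check of this relation, the measured $|V/\omega|$ and the adopted age can be plugged in directly. The sketch below assumes a heliocentric distance of $D\approx2300$~pc for $\eta$~Car (a commonly adopted value, not stated in this section):

```python
import math

def inclination_deg(v_over_omega, age_yr, distance_pc):
    """Inclination of equatorial ejecta from the plane of the sky.

    tan(i) = [t / (4.74 D)] * (V / omega), with t in yr, D in pc,
    V in km/s and omega in arcsec; the factor 4.74 converts a proper
    motion (arcsec/yr) at distance D (pc) into km/s.
    """
    tan_i = age_yr * v_over_omega / (4.74 * distance_pc)
    return math.degrees(math.atan(tan_i))

# |V/omega| = 67.9 km/s/arcsec and t = 160 yr from the text;
# D = 2300 pc is an assumed distance.
i_he = inclination_deg(67.9, 160.0, 2300.0)
print(f"i_He ~ {i_he:.0f} deg")  # close to the quoted ~44 deg
```

The result is sensitive to the assumed age and distance, which is consistent with the quoted $\pm3\degr$ uncertainty.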
\begin{figure}\centering \includegraphics[width=8.4cm]{fig/fig6.jpg} \caption{\label{fig6}Comparison between He\,{\sc i}~$\lambda$10830 and [Fe\,{\sc ii}]~$\lambda$12567. The components labelled 1 and 2 are intrinsic emission from the equatorial region, while 3 is formed in the winds of the central source and then scattered by the dust in the background NW lobe. Both spectra were extracted from the point labelled 6 in Fig.~\ref{fig7}. The broad line at $\approx1200$~km~s$^{-1}$ is a blend of Fe\,{\sc ii} $\lambda$10863 and Fe\,{\sc ii} $\lambda$10872.} \end{figure}
\begin{figure}\centering \vbox {\vfil \includegraphics[width=8.4cm]{fig/fig7.jpg} \caption{\label{fig7}He\,{\sc i}~$\lambda$10830 line profile along position angle $-41\degr$. Notice the detection of the He\textsc{i}~$\lambda$10830 emission column along this position angle (vertical dashed line). In order to improve the signal-to-noise ratio, each spectrum was median-combined from 4 adjacent lenslets. The background image of the Homunculus (from \citealt{morseetal98}) was rotated by $41\degr$ counter-clockwise.
}\vfil} \end{figure}
From our data, the average value of $|V/\omega|$ was $67.9\pm6.3$~km\,s$^{-1}$~arcsec$^{-1}$, corresponding to $\tan(i_{He})=0.95\pm0.13$, which in turn results in an inclination angle of approximately $44\degr$ \textit{from the plane of the sky toward us}. Though the error in our result ($\pm3\degr$) is larger than that obtained with long-slit observations -- typically less than 1\degr\ --, the lower end is consistent with $i=41\degr$, which is the assumed inclination angle of the equatorial disc obtained using long-slit observations. Even so, our range of values for the inclination angle of the helium emission column is consistent with the Great Eruption, which took place between 1837 and 1860, with the peak occurring between 1843 and 1851 \citep{currieetal96,currieetal99,morseetal01}.
Hence, we concluded that if the He\,\textsc{i}~$\lambda$10830 emission column is indeed in the equatorial disc and is caused by UV radiation escaping through a hole in the torus, then we should detect this same effect in other places, since it is known that there are many holes in the equatorial torus.
In fact, we also detected intrinsic emission of He\,\textsc{i}~$\lambda$10830 at the end of the \textit{NN jet} (Fig.~\ref{fig9}). At a P.A. of +35\degr, the spectra extracted from 2 up to 6~arcsec from the central source are likely reflected by dust in the \textit{NN jet}, since they show a red-shifted velocity profile where only blue-shifted velocities would be expected if the emission were intrinsic. Furthermore, He\,\textsc{i}~$\lambda$10830 shows a velocity-variable P~Cyg profile in that region (shown by the dashed line from position 1 to 8 in Fig.~\ref{fig9}), which is also a strong indication of reflection. However, beyond 6~arcsec from the central source, the reflected profile disappears and a blue-shifted emission begins to rise at approximately $-570$~km\,s$^{-1}$, reaching $-650$~km\,s$^{-1}$ at 8~arcsec (see Fig.~\ref{fig9}). Therefore, our results suggest that there are at least two regions where radiation is escaping to excite/ionize gas lying in the equatorial disc.
In a binary context, these regions could be produced by the high-energy radiation coming direct from the hot source of the system, which spends most of its orbital period near apastron, in a highly elliptical orbit. Thus, if the plane of the orbit is the same as the equatorial disc, then one would expect the UV from the secondary to escape through the holes in the torus and excite/ionize helium atoms along its way. It would be very interesting to observe these regions along the period of 5.52 year to see their behavior near the minimum, when the hot companion gets into the dense wind of the primary. Hence, if our assumption is correct, the He\,\textsc{i}$\lambda$10830 emission column would fade and then return.
We also noted that in the SW direction -- toward the \textit{S-condensation} -- we only detected reflected components as well as intrinsic emission associated with the equatorial disc, but no sign of another He\,\textsc{i}~$\lambda$10830 emission column (see Fig.~\ref{fig10}).
\section{Discussion}\label{dis}
\subsection{The 3-cm radio emission}\label{radio}
\begin{figure*}\centering \vbox {\vfil \includegraphics[width=16.8cm]{fig/fig8.jpg} \caption{\label{fig8}Velocity maps for the He\,{\sc i}~$\lambda$10830 line showing the location of the He emission column (He~\textsc{i}~E.C. in the figure) and the \textit{Paddle}. The misalignment between the P.A. of the \textit{Paddle} and that of the helium emission column is clearly seen in (b). Also, note the strong emission (white region) near the centre in (d), which is associated with the \textit{Purple Haze}. A fully animated version of this figure is available at \href{http://www.astro.iag.usp.br/~damineli/hei.gif}{www.astro.iag.usp.br/$\sim$damineli/hei.gif} with the same color scale.}\vfil} \end{figure*}
\begin{figure*}\centering \vbox {\vfil \includegraphics[width=16.8cm]{fig/fig9.jpg} \caption{\label{fig9}Tracing of line profiles toward the \textit{NN jet}. Notice the changes in the P Cygni absorption component in the helium line (dashed line), which moves from approximately $-500$ to $-220$~km\,s$^{-1}$ from position 1 to 9, respectively. Beyond position 9, we detected another He\textsc{i}~$\lambda$10830 emission column (dotted line). At positions 2 and 3, Pa$\gamma$ shows an artifact on the red side of the line that is caused by a cosmetic defect in the detector. The background image is from \citet{morseetal98}.}\vfil} \end{figure*}
\begin{figure*}\centering \vbox {\vfil \includegraphics[width=16.8cm]{fig/fig10.jpg} \caption{\label{fig10}Same as Fig. \ref{fig9} but extracted towards the \textit{S-condensation}. At this position angle, we did not detect any component in the helium line that could be due to high-energy photons (far-UV from the secondary) escaping through holes in the equatorial torus. We only detected intrinsic emission due to the equatorial disc as well as reflected components. Notice that all of the lines show a component near 0~km\,s$^{-1}$, which retains practically the same apparent Doppler velocity at all of the positions. This component may be associated with the H\textsc{ii} region in which $\eta$~Car is immersed. For better visualization, the image of the Homunculus was rotated by $-100\degr$.}\vfil} \end{figure*}
\begin{figure}\centering \includegraphics[width=8.4cm]{fig/fig11.jpg} \caption{\label{fig11}The background image represents the integral of the [Fe\,{\sc ii}]~$\lambda$12567 line flux from $-1000$ to $+1000$~km\,s$^{-1}$ normalized by the adjacent continuum. The continuum radio-emission at 3~cm (from \citealt{duncanetal97}) is superimposed to show that the low levels match the spatial extent of the \textit{Little Homunculus}, as claimed by \citet{smith05}. Also, note the spatial coincidence between the NW radio peak and the emission from the equatorial region (see discussion in \S\ref{radio}).} \end{figure}
\begin{figure}\centering \vbox {\vfil \includegraphics[width=8.4cm]{fig/fig12a.jpg} \includegraphics[width=8.4cm]{fig/fig12b.jpg} \caption{\label{fig12}(a) 3-cm radio contours from \citet{duncanetal97} superimposed on an image of the ratio of F220W to F550M from \citet{smithetal04} (reproduced with permission). In the background image, dark areas indicate stronger UV emission. The radio contours were scaled to take into account an expansion due to the 7-year delay between the radio and optical observations. (b) The spectrum taken from the regions indicated in (a) shows the presence of several low-ionization lines at positions 2 and 3, which have stronger intensity as compared to 1, 4 and 5.}\vfil} \end{figure}
\begin{table}\centering
\caption{\label{t1}Emission lines observed in the region of the `Sr-filament'. The unidentified lines are listed with a `?' symbol. Note that [Fe\,\textsc{ii}]~$\lambda$12567 has three components (shown in bold font).}
\begin{tabular}{@{}ccc}
\hline
$\lambda_{\rm{obs}}$ (\AA) & Ion & Velocity (km\,s$^{-1}$) \\
\hline
11730.8 & [Ti\,{\sc ii}]~$\lambda$11736 & $-130$ \\
11817.7 & [Ti\,{\sc ii}]~$\lambda$11823 & $-142$ \\
11833.8 & [Ti\,{\sc ii}]~$\lambda$11838 & $-132$ \\
11853.0 & [V\,{\sc ii}]~$\lambda$11857 & $-110$ \\
11877.2 & [Fe\,{\sc ii}]~$\lambda$11882 & $-110$ \\
11883.9 & ? & \\
11890.8 & ? & \\
11925.2 & [Ti\,{\sc ii}]~$\lambda$11930 & $-130$ \\
11945.8 & [Cr\,{\sc ii}]~$\lambda$11950 & $-115$ \\
12028.1 & [Ti\,{\sc ii}]~$\lambda$12033 & $-115$ \\
12037.1 & [Ti\,{\sc ii}]~$\lambda$12042 & $-115$ \\
12292.8 & [Ti\,{\sc ii}]~$\lambda$12298 & $-115$ \\
12322.9 & ? & \\
12383.5 & [Fe\,{\sc ii}]~$\lambda$12388 & $-100$ \\
12416.6 & [Ti\,{\sc ii}]~$\lambda$12422 & $-120$ \\
12469.7 & [Cr\,{\sc ii}]~$\lambda$12476 & $-140$ \\
12482.8 & [Cr\,{\sc ii}]~$\lambda$12488 & $-135$ \\
12490.5 & [Ti\,{\sc ii}]~$\lambda$12496 & $-140$ \\
12521.8 & [Fe\,{\sc ii}]~$\lambda$12521 & $+10$ \\
\bf{12556.3} & \multirow{3}{*}{\bf{[Fe\,{\sc ii}]~$\lambda$12567}} & \bf{$-$250} \\
\bf{12563.2} & & \bf{$-$85} \\
\bf{12573.1} & & \bf{$+$150} \\
12633.0 & [Ti\,{\sc ii}]~$\lambda$12638 & $-120$ \\
12646.5 & [Ti\,{\sc ii}]~$\lambda$12651 & $-105$ \\
12685.5 & [Ti\,{\sc ii}]~$\lambda$12692 & $-145$ \\
12703.1 & ? & \\
12713.7 & [Ti\,{\sc ii}]/[Mn\,{\sc ii}]~$\lambda$12719 & $-115$ \\
12787.3 & [Fe\,{\sc ii}]~$\lambda$12787 & $+10$ \\ \hline
\multicolumn{2}{c}{mean velocity} & $-109\pm9$\\
\hline
\end{tabular}
\end{table}
The radio monitoring performed by \citet{duncanetal03} revealed that during the low-excitation phases (when the high-excitation lines weaken or vanish -- \citealt{gaviola53,rodgersetal67,thackeray67,zanellaetal84}) the free-free emission is concentrated in a small region of $\sim1.5$~arcsec in diameter. However, during the `normal' state, a more extended 3-cm radio emission region of about 4~arcsec is present, as shown by the contours in Fig.~\ref{fig11}.
It is often assumed that the structure seen in the 3-cm radio continuum is the equatorial torus. In this context, during the high-excitation state, the surrounding torus absorbs the UV radiation and is thus kept ionized during most of the orbital period. However, when the secondary star is at periastron, the ionizing flux is rapidly absorbed by the dense wind of the primary star; the previously-ionized 4~arcsec-wide region -- namely, the torus -- is then allowed to recombine, and the radio-continuum flux is therefore reduced to a point-like source, i.e. restricted to a small Str\"omgren sphere around the hot companion.
On the other hand, the ionizing flux could \textit{also} be absorbed in the lobes of the LH. \citet{duncanetal97} showed that the H91$\alpha$ flux is composed mainly of two components: one bright and narrow feature (FWHM$\approx250$~km\,s$^{-1}$) peaking at $-250$~km\,s$^{-1}$, and a broad (FWHM$\approx600$~km\,s$^{-1}$), fainter emission peaking at approximately $-115$~km\,s$^{-1}$. The brightest component is presumably equatorial emission due to a turbulent gas cloud located 1.6~arcsec NW of the central source. This cloud must be colder than the rest of the torus because of its high line-to-continuum ratio, which is an indicator of the temperature of the emitting region (\citealt{duncanetal97} and references therein).
Although the projected position of this cold cloud (which we refer to as the radio spot) is the same as that of the `Sr-filament' (see Fig.~\ref{fig12}), they obviously do not occupy the same spatial region, since the radio spot must be ionized to be seen at radio frequencies, while the `Sr-filament' shows many lines of low-ionization ions such as [Sr\,\textsc{ii}], [Ti\,\textsc{ii}], etc. (\citealt{zethsonetal99,hartmanetal01,hartmanetal04}). Lines of these same ions were also present in our data, as shown in Fig.~\ref{fig12} and listed in Table~\ref{t1}. In addition, the kinematics of the `Sr-filament' ($\approx-100$~km\,s$^{-1}$) \textit{does not match} the observed velocity of the peak of the bright H91$\alpha$ component ($\approx-250$~km\,s$^{-1}$). We noted, though, that the He\,\textsc{i}~$\lambda$10830 emission column \textit{does match} the velocity of the radio peak. Moreover, they are located at roughly the same projected position, suggesting that they may be related to each other. There is no doubt that this relationship deserves further study, since it could give important clues about the ionizing source.
Regarding the faint component of the H91$\alpha$ emission, it could arise from both the torus and the LH because of the spatial coincidence that exists between the low-level contours of the broad radio-emission and the extent of the LH (see Fig.~\ref{fig11}). Moreover, the FWHM of this component is similar to the range of velocities observed in the LH ($\approx\pm250$~km\,s$^{-1}$; \citealt{smith05}).
\subsection{On the nature of $\bmath{\eta}$ C\lowercase{ar} B}\label{secondary}
Independently of where the radio emission is formed and assuming that the free-free emission discussed in \S~\ref{radio} is caused mainly by the UV radiation field of $\eta$~Car~B, we estimated the number of Lyman continuum photons using the following relation \citep{mezgeretal74,carpenteretal90,filipovicetal03,morganetal04} \[ N_{\rmn{Ly}}=\phi\hspace{1mm}a(\nu,T_{\rmn{e}})^{-1} \left[\frac{\nu}{\rmn{GHz}}\right]^{0.1}\left[\frac{T_\rmn{e}}{\rmn{K}}\right]^{-0.45} \left[\frac{S_\nu}{\rmn{Jy}}\right]\left[\frac{D}{\rmn{kpc}}\right]^2 \] where $N_{\rmn{Ly}}$ is the number of photons per second in the Lyman continuum, $\phi$ is a numeric constant (=~4.76~$\times~10^{48}$), $a(\nu,T_{\rmn{e}})$ is a slowly varying function tabulated by \citet{mezgeretal67}, $\nu$ is the frequency at which the observation is made, $T_{\rmn{e}}$ is the electronic temperature, $S_\nu$ is the flux density, and $D$ is the distance to the source.
Using the observed radio continuum flux at 1.3~mm\footnote{At this wavelength the free-free emission is optically thin \citep{abrahametal05}.} when the system is in the high-excitation state (i.e., when $\eta$~Car~B is out of the dense wind of the primary star), $S_\nu\approx~39$~Jy (Abraham, Z., private communication), and a value of $a(\nu,T_\rmn{e})\sim1$ for $T_\rmn{e}\sim10^{4}$~K \citep{mezgeretal67}, the ionising flux is log($N_\rmn{Ly})\approx49.4$. This corresponds to a star with a minimum spectral type of O5.5\,III or O7\,I \citep{martinsetal05}. As we can only put a lower limit on the number of Lyman continuum photons and on the spectral type, we cannot rule out a late-type Wolf-Rayet companion, which also has log($N_\rmn{Ly})>49.4$ \citep{crowther07}.
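The quoted log($N_{\rmn{Ly}})\approx49.4$ can be reproduced directly from these numbers. The sketch below assumes $\nu\approx230.6$~GHz (the frequency corresponding to 1.3~mm) and a distance of 2.3~kpc, neither of which is stated explicitly in this section:

```python
import math

def lyman_photon_rate(s_nu_jy, freq_ghz, t_e_k, dist_kpc, a_factor=1.0):
    """Lyman-continuum photon rate from an optically thin free-free flux:

    N_Ly = 4.76e48 * a^-1 * (nu/GHz)^0.1 * (T_e/K)^-0.45 * (S_nu/Jy) * (D/kpc)^2
    """
    return (4.76e48 / a_factor * freq_ghz**0.1
            * t_e_k**-0.45 * s_nu_jy * dist_kpc**2)

# S_nu = 39 Jy at 1.3 mm, T_e ~ 1e4 K and a ~ 1 are from the text;
# nu = 230.6 GHz and D = 2.3 kpc are assumed values.
n_ly = lyman_photon_rate(39.0, 230.6, 1.0e4, 2.3)
print(f"log N_Ly = {math.log10(n_ly):.1f}")  # ~49.4, as quoted
```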
An O7\,I star presents stellar parameters compatible with other works. The effective temperature for such a star is about 35,000~K \citep{martinsetal05}, which is well within the range of 34,000--38,000~K determined by \citet{verneretal05} based on the observed ratio of [Ar\,{\sc iii}]~$\lambda7136$ to [Ne\,{\sc iii}]~$\lambda3869$ in the Weigelt blobs. \citet{ipingetal05} also indicated an effective temperature near 35,000~K using spectra obtained with the \textit{Far Ultraviolet Spectroscopic Explorer} (\textit{FUSE}). Although the flux of $\eta$~Car~B is expected to dominate the spectra at wavelengths shortward of 1200~\AA~\citep{hillieretal06}, the quantitative analysis is rather complex \citep{hillieretal06}. An O5.5\,III star also presents stellar parameters similar to those assumed for $\eta$~Car~B. Following \citet{martinsetal05}, an O5.5\,III star has an effective temperature of about 39,250~K, which is higher than the upper limit of \citet{verneretal05}. Nevertheless, this effective temperature is still acceptable, since there may be gas between the ionizing source and the Weigelt blobs absorbing the high-energy part of the spectrum, thus decreasing the estimated effective temperature.
It is worthwhile to note that \citet{prinjaetal90} showed that early-type stars in the range from O5.5 ($T_{\rmn{eff}}\approx~39,250$~K) to O7 ($T_{\rmn{eff}}\approx35,000$~K) present terminal velocities in the range from 1100 up to 3000~km~s$^{-1}$, which agrees with the values proposed for $\eta$~Car~B by \citet{davidson99} and \citet{pittardetal03} based on hydrodynamic calculations.
\section{Summary and conclusions}\label{sum}
Near-infrared integral field spectroscopy has revealed additional details of the circumstellar ejecta around $\eta$ Car. The main results and conclusions are summarized below:
\begin{enumerate}
\item We determined the dimensions and geometry of the hole present in both lobes of the Homunculus using our [Fe\,{\sc ii}]~$\lambda$12567 velocity maps and the model of \citet{smith06} for the Homunculus. The holes have a diameter of $\approx6.0\times10^{16}$~cm and are $\approx6.5\times10^{16}$~cm thick at the polar region. They are located within $5\degr$ of the pole, suggesting inhibited mass loss at stellar latitudes $\ga 85\degr$ during the Great Eruption. These holes are seen in the H$_2$ and [Fe\,{\sc ii}]~$\lambda$16435 lines as well \citep{smith06};
\item The feature known as the \textit{SE hole} in optical images is a region of locally minimal column density towards the SE lobe, caused by the fact that we are looking through the borders of the hole;
\item We confirmed the claim of \citet{smith05} and also suggested that the broad component of the 3-cm continuum radio-emission originates both in the torus and the LH because of the spatial coincidence between the low-level contours of the radio emission and the extent of the LH, though the bulk of emission is due to the torus. Moreover, the width of the broad radio-emission is also consistent with the kinematics of the LH.
\item The He\textsc{i}$\lambda$10830 emission column presents a Hubble flow from $-250$~km\,s$^{-1}$ (at 2~arcsec from the central source) to $\approx-500$~km\,s$^{-1}$ (at 10.5~arcsec). Its position angle is $-48\degr$ and based on symmetry and kinematic arguments, we suggested that the He\textsc{i}$\lambda$10830 emission column is not related to the \textit{Paddle}, which shows P.A.=$-41\degr$. Nonetheless, our results suggest that it is indeed in the equatorial disc (with inclination angle of $i_{He}=44\degr$) and is most likely related to the radio spot (the narrow component of the 3-cm continuum radio-emission reported by \citealt{duncanetal97}).
\item We also detected another He\,\textsc{i}~$\lambda$10830 emission column at P.A.=$+35\degr$, confirming the suggestion that such structures are indeed caused by high-energy photons (far-UV from $\eta$~Car~B) escaping through holes in the equatorial disc.
\item The radio spot and the `Sr-filament' are in the same line-of-sight but disconnected spatially. While the former is an ionized region with peak at $-250$~km\,s$^{-1}$, the latter is characterized by low-ionization lines with typically $-110$~km\,s$^{-1}$, presumably shielded from high-energy radiation by H$^{0}$ and a forest of Fe$^{+}$ \citep{bautistaetal02,bautistaetal06}.
\item From the observed 1.3-mm radio flux we estimated that the ionising flux, which comes from $\eta$~Car~B, is consistent with an O-type star of minimum spectral type O5.5\,III or O7\,I, though we cannot rule out a Wolf-Rayet nature for the companion at this point.
\end{enumerate}
\section{Acknowledgments}
M. Teodoro, A. Damineli, J. H. Groh and C. L. Barbosa are grateful to the Brazilian agencies CNPq and FAPESP for continuous financial support. We would like to thank Dr. Nathan Smith for his extensive comments and suggestions on the earlier stages of this manuscript. We are grateful to the referee Dr. Theodore Gull for his fruitful comments that have improved the content and presentation of our results. M. Teodoro also would like to thank Michelle Doherty for her efforts in obtaining all the calibration data as well as Dr. Jon Morse for kindly granting the permission to use the \textit{HST}/WFPC2 images of the Homunculus shown in this paper. M. Teodoro is supported by FAPESP through grant 05/00190-8.
\section{Introduction}
Material recognition is a crucial task in the application of scene understanding and has a wide range of usage scenarios. The prediction of material recognition can provide additional \emph{material} information (1) to help robotic grippers~\cite{yoon2021analysis} determine the correct force when grasping or holding fragile \emph{glass} objects, (2) to assist autonomous vehicles in switching to the appropriate driving mode on \emph{sandy} roads with a low resistance coefficient~\cite{ding2020definition}, (3) to warn people if they put \textit{plastic} objects in the microwave, and (4) to alert People with Visual Impairments~(PVI) to slippery surfaces~\cite{yang2018predicting_polarization,mao2021panoptic} when walking on \textit{snowy} sidewalks. To this end, the task of material recognition is of critical importance in real-world scenarios.
\begin{figure}[t]
\centering
\includegraphics[width=0.99\linewidth]{figures/fig1_3.pdf}
\begin{minipage}[t]{.33\columnwidth}
\vskip-3ex
\subcaption{Wearable robotics}\label{fig1_a}
\end{minipage}%
\begin{minipage}[t]{.66\columnwidth}
\vskip-3ex
\subcaption{\underline{Material} \& \textit{Object} segmentation}\label{fig1_b}
\end{minipage}%
\caption{\textsc{Mate}Robot, (a) wearable robotics, can assist (b) \underline{material} semantic segmentation (\emph{e.g.}, \textcolor[HTML]{BD33A4}{\underline{snow}}, \textcolor{gray}{\underline{ceramic}}) and general \textit{object} semantic segmentation (\emph{e.g.}, \textit{\textcolor[HTML]{f013ee}{sidewalk}, \textcolor{blue}{cup}})}
\label{fig:head}
\end{figure}
In addition to the above specific cases, material recognition in everyday life is often a challenging task for PVI, who typically recognize objects through touch~\cite{paterson2006seeing,klatzky2007object}. Therefore, it becomes vital to develop a system that can assist PVI in recognizing the materials of objects before touching them, \emph{i.e.}, a contact-free material recognition system. Thus, in this work, we mainly focus on designing a human-friendly wearable device for PVI to recognize materials.
In recent years, significant progress has been witnessed in the field of assistive technology that can support PVI in several activities, such as navigation~\cite{duh2020v}, object localization~\cite{agrawal2022novel}, indoor understanding~\cite{liu2021hida}, and path orientation~\cite{zhang2022trans4trans}. While panoptic predictions were provided in~\cite{mao2021panoptic} and transparent objects were recognized in~\cite{zhang2022trans4trans}, both systems can only deliver a limited set of material categories.
In this paper, we present a wearable robotic system for general and \underline{mate}rial recognition, \emph{i.e.}, \textbf{\textsc{Mate}Robot}. As shown in Fig.~\ref{fig1_a}, this human-friendly system consists of a pair of smart glasses with an RGB-D camera, a pair of bone-conducting earphones, and a portable image processor inside a small waist bag.
In Fig.~\ref{fig1_b}, {\textsc{Mate}Robot} can recognize not only materials (\emph{e.g.}, \textit{\textcolor[HTML]{BD33A4}{snow}}), but also general object categories (\emph{e.g.}, \textit{\textcolor[HTML]{f013ee}{sidewalk}}). These two predictions from different fields can be combined into feedback describing object categories together with their material properties (\emph{e.g.}, \textit{``\textcolor[HTML]{BD33A4}{snowy} \textcolor[HTML]{f013ee}{sidewalk}''}), yielding more comprehensive information to assist PVI in their daily life.
Due to the bottleneck of self-attention in dense prediction~\cite{dosovitskiy2020ViT}, it is, however, hard to deploy a resource-intensive ViT-based model on wearable robotic platforms. To address this challenge, we propose an efficient \underline{mate}rial segmentation \underline{ViT} model, namely \textbf{MateViT}, which includes a \textbf{Learnable Importance Sampling (LIS)} strategy that retains only the informative tokens for material segmentation, so as to reduce the computational cost. Based on our LIS method, a resource-friendly MateViT model is obtained, which makes it feasible to deploy plain vision transformers on wearable robotic devices with limited computational resources.
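The token-reduction idea behind LIS can be sketched as follows. This is an illustrative numpy toy with random scores; in MateViT the importance scores come from a learnable module trained with the network:

```python
import numpy as np

def prune_tokens(tokens, scores, keep_ratio=0.5):
    """Keep only the top-scoring fraction of tokens.

    tokens: (N, C) patch embeddings; scores: (N,) importance scores.
    Returns the kept tokens and their indices, so a decoder can later
    scatter predictions back to the full token grid.
    """
    k = max(1, int(round(keep_ratio * tokens.shape[0])))
    keep_idx = np.argsort(scores)[::-1][:k]   # indices of the top-k scores
    return tokens[keep_idx], keep_idx

rng = np.random.default_rng(0)
tokens = rng.standard_normal((196, 64))       # e.g. 14x14 patches, dim 64
scores = rng.random(196)                      # stand-in for learned scores
kept, idx = prune_tokens(tokens, scores, keep_ratio=0.25)
print(kept.shape)   # (49, 64): only 25% of the tokens enter attention
```

Since self-attention cost grows quadratically with the number of tokens, keeping 25\% of them reduces the attention cost to roughly 1/16 of the original.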
Apart from the model efficiency, to enlarge the model capacity, we introduce a \textbf{Multi-gate Mixture-of-Experts (MMoE)} method to combine the aforementioned general image semantic segmentation and material semantic segmentation in a single model. In contrast to the previous method~\cite{zhang2022trans4trans}, which uses a straightforward dual-head structure, our MMoE constructs an efficient multi-task learning architecture. The feature tokens from input images are forwarded to respective gates and experts to extract the top-$k$ informative features for the task-relevant decoder heads that generate the final semantic segmentation masks.
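A minimal numpy sketch of the multi-gate top-$k$ routing described above. The expert and gate weights here are illustrative random matrices; in MMoE they are trained jointly with the network:

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

def mmoe_forward(x, experts_w, gates_w, top_k=2):
    """Multi-gate mixture-of-experts on a feature vector x of shape (C,).

    experts_w: list of (C, C) expert weights shared by all tasks;
    gates_w: per-task (C, E) gating weights. Each task keeps only its
    top-k experts and mixes their outputs with renormalized gate
    weights, returning one feature vector per task.
    """
    expert_out = np.stack([x @ w for w in experts_w])   # (E, C)
    task_feats = []
    for gw in gates_w:                                  # one gate per task
        logits = x @ gw                                 # (E,) gate logits
        top = np.argsort(logits)[::-1][:top_k]          # top-k experts
        probs = softmax(logits[top])                    # renormalize
        task_feats.append(probs @ expert_out[top])      # mixed (C,) feature
    return task_feats

rng = np.random.default_rng(0)
C, E = 64, 4
x = rng.standard_normal(C)
experts = [rng.standard_normal((C, C)) / np.sqrt(C) for _ in range(E)]
gates = [rng.standard_normal((C, E)) for _ in range(2)]  # 2 tasks
feats = mmoe_forward(x, experts, gates, top_k=2)
print(len(feats), feats[0].shape)   # 2 tasks, each a (64,) feature
```

Each task-specific feature would then be fed to its own decoder head, e.g. one for general object segmentation and one for material segmentation.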
In order to endow \textsc{Mate}Robot with robust perception capability, including general and material semantic segmentation, we train the model on the COCOStuff-10K~\cite{caesar2018coco} and DMS~\cite{upchurch2022dms} datasets, both containing more than $10K$ training samples. Through extensive experiments, our small model obtains $40.2\%$ and $51.1\%$ in mIoU, surpassing the previous multi-task learning baseline~\cite{zhang2022trans4trans} by absolute ${+}5.7\%$ and ${+}7.0\%$ on COCOStuff-10K and DMS, respectively. For single-task learning on material segmentation, \emph{i.e.}, on DMS, our model reaches state-of-the-art performance, with ${+}8.1\%$ mIoU gains over the previous CNN counterpart~\cite{upchurch2022dms}. To verify the practicability of our \textsc{Mate}Robot for recognizing material categories in real-world scenarios, we conduct a task-oriented user study with six blindfolded participants. On the post-study questionnaires, our system obtains scores of $28$ and $77$ on the NASA-Task Load Index (NASA-TLX) and the System Usability Scale (SUS), respectively, which indicates the ease of use and usability of our \textsc{Mate}Robot in practical scenarios.
In summary, our main contributions are:
\begin{itemize}
\item We consider material recognition in assistive technology for the first time, and build a wearable robotic system, \emph{i.e.}, \textit{\textsc{Mate}Robot}, for people with visual impairments to realize contactless material recognition in daily life.
\item We propose an efficient \textit{MateViT} model to enable the deployment of resource-intensive ViT-based counterparts on resource-constrained mobile platforms by using a \textit{Learnable Importance Sampling (LIS)} strategy.
\item We present a \textit{Multi-gate Mixture-of-Experts (MMoE)} method for efficient multi-task learning and the combination of general and material semantic segmentation.
\item We conduct a task-oriented user study and a post-study questionnaire session with six blindfolded participants. Through quantitative and qualitative analyses, the usability and effectiveness of our \textsc{Mate}Robot are demonstrated.
\end{itemize}
\section{Related Work}
\subsection{Wearable Assistive System}
With the tremendous capability revealed by computer vision algorithms, vision-based wearable assistance systems~\cite{yang2018predicting_polarization,wang2017enabling,aladren2014navigation} are becoming increasingly applicable.
A vision-based navigation system~\cite{duh2020v}, calculating precise positions and orientations, is proposed to help People with Visual Impairments (PVI) stay on track while walking, and it can recognize unexpected dynamic obstacles to reduce danger during navigation.
Due to the COVID-19 pandemic, an object-finding algorithm is introduced in~\cite{agrawal2022novel} to build an end-to-end perception robotic cane system, which can enable socially-preferred autonomous goal selection and navigation in indoor spaces.
Specifically, an algorithm for social norm-aware chair selection is proposed, which is optimized for convenience, intimacy, and privacy, respectively.
In~\cite{liu2021hida}, a lightweight system with a solid-state LiDAR sensor is proposed for holistic indoor detection and avoidance, by using 3D point cloud instance segmentation.
In this system, obstacle avoidance and object finding are implemented together with voice guidance, so that new point clouds from a changing indoor environment can be scanned by the user.
In a previous work~\cite{zhang2022trans4trans}, to cover segmentation of the safety-critical transparent objects, a dual-head transformer model is proposed and deployed on a wearable device for helping PVI to recognize the glass-like objects in everyday life.
However, previous wearable assistance systems can only recognize a limited set of materials.
In order to help blind users obtain a more comprehensive and humanized material recognition experience, we design, for the first time, a wearable robotic system that can recognize not only conventional object categories, such as \textit{cups}, but also the material of the object, such as \textit{plastic cups}, delivering contactless object recognition.
\subsection{Material Semantic Segmentation}
In Fully Convolutional Networks (FCNs)~\cite{long2015FCN}, image semantic segmentation is addressed as a dense-pixel classification task in an end-to-end manner.
In order to achieve better performance, previous works~\cite{chen2017deeplabv2, zhao2017PSPNet} focus on extracting contextual information or using multi-scale features~\cite{chen2018deeplabv3plus}.
However, the limited receptive field of FCNs makes it hard to model correlations among long-range spatial locations.
Recently, Vision Transformers~\cite{dosovitskiy2020ViT,xie2021segformer} have been proposed, utilizing the self-attention operation in transformer layers to extract non-local features from a sequence of image patches and yielding an alternative backbone to convolutional counterparts~\cite{chen2018deeplabv3plus, zhao2017PSPNet, he2016resnet}.
In DMS~\cite{upchurch2022dms}, a model based on ResNet~\cite{he2016resnet} is used to address dense material segmentation.
In contrast to DMS~\cite{upchurch2022dms}, we adopt a Vision Transformer backbone, \emph{e.g.}, ViT~\cite{dosovitskiy2020ViT}, to perform material semantic segmentation, aiming to extract long-distance dependencies between image patches, since long-range contextual information is crucial for robust representation of diverse materials~\cite{zhang2022trans4trans}.
However, due to the high computational demands of the self-attention operation, there is still a bottleneck when deploying a plain vision transformer on resource-constrained mobile platforms, \emph{e.g.}, mobile robots and wearable devices. In this work, we propose a novel importance sampling method that reduces the number of tokens and keeps only the informative ones for material segmentation, so as to enable the deployment of plain vision transformers on wearable devices.
\section{\textsc{Mate}Robot: A Wearable Robotic System}
In this section, we introduce \textsc{Mate}Robot, \emph{i.e.}, the wearable robotic system for material recognition, including the hardware components in Sec.~\ref{method:robots}, the user interaction in Sec.~\ref{method:hci}, the overall model architecture in Sec.~\ref{method:model}, our efficient ViT with Learnable Importance Sampling (LIS) in Sec.~\ref{method:LIS}, and the Multi-gate Mixture-of-Experts (MMoE) in Sec.~\ref{method:MMoE}.
\subsection{Hardware Component}\label{method:robots}
As shown in Fig.~\ref{fig1_a}, there are three main hardware components in our \textsc{Mate}Robot, including a pair of KRVision smart vision glasses\footnote{Glasses: \url{www.krvision.cn}}, a portable processor, and a power bank inside a waist bag.
Inside the smart glasses, there are an RGB-Depth camera RealSense R200\footnote{Camera: \url{www.intelrealsense.com}} and a pair of bone-conduction headphones.
Following a human-friendly design concept, bone-conduction earphones offer three advantages: they are comfortable to wear, clean and hygienic, and allow the wearer to stay in touch with the outside world.
Maintaining awareness of ambient sounds is especially important for PVI.
Besides, in order to maintain high portability of the system, we choose the compact NVIDIA Jetson AGX Xavier\footnote{Processor: \url{www.nvidia.com}}, which brings suitable compute density, energy efficiency, and high inference capability. Furthermore, a power bank with high energy capacity is selected to provide the system with up to $6$ hours of battery life, which greatly reduces the battery-life anxiety of traditional wearable devices~\cite{liu2021hida}. Through the above hardware components, a more portable \textsc{Mate}Robot and a better user experience can be delivered to PVI when performing contactless material recognition in real-world scenarios.
\subsection{User Interaction}\label{method:hci}
Between the system and the user, we design an easy-to-use interface for PVI. First of all, in order to give users timely information feedback, we adopt an adjustable feedback frequency. For example, if users set a larger interval in a more familiar environment, \emph{e.g.}, at home, they will get concise information. If they explore an unknown space, setting a high frequency provides more information. Besides, to convert the pixel-wise segmentation results into auditory output, only the detected object and material located in the middle of the input image are selected. Their pre-defined texts are used to jointly generate speech in the form of ``\underline{material} \textit{objects}'', such as ``\underline{ceramic} \textit{cups}'' or ``\underline{metal} \textit{forks}''.
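The selection-and-verbalization step above can be sketched as follows. This is a minimal illustration, not the system's actual code; the function name and the class-name dictionaries are our own assumptions:

```python
import numpy as np

def compose_feedback(object_mask, material_mask, object_names, material_names):
    """Pick the object and material classes predicted at the image center
    and join them as '<material> <object>' (hypothetical helper)."""
    h, w = object_mask.shape
    obj_id = int(object_mask[h // 2, w // 2])
    mat_id = int(material_mask[h // 2, w // 2])
    return f"{material_names[mat_id]} {object_names[obj_id]}"

# Toy example: a 4x4 prediction whose center pixel is a ceramic cup.
obj = np.full((4, 4), 0)   # class 0 = "cup"
mat = np.full((4, 4), 1)   # class 1 = "ceramic"
print(compose_feedback(obj, mat, {0: "cup"}, {1: "ceramic"}))  # prints: ceramic cup
```

The resulting string would then be passed to a text-to-speech engine and played through the bone-conduction earphones at the user-chosen feedback frequency.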
\begin{figure*}[t]
\centering
\includegraphics[width=1\linewidth,trim=2 2 2 2,scale=1]{figures/architecture.pdf}
\caption{\textbf{Model architecture of MateViT} with a Learnable Importance Sampling (LIS) strategy to reduce the computational complexity, and with a Multi-gate Mixture-of-Experts (MMoE) layer to perform dual-task segmentation (\emph{i.e.}, \#1-Object and \#2-Material segmentation).
MHSA and FFN stand for Multi-Head Self-Attention and Feed-Forward Network, respectively.
The upsampling layer is a ViT-based decoder block~\cite{vaswani2017transformer}. Decoder heads are the segmentation head of Segmenter~\cite{strudel2021segmenter}.
}
\label{fig:architecture}
\end{figure*}
\subsection{MateViT Model}\label{method:model}
Apart from the hardware components, we further detail the overall model architecture deployed on \textsc{Mate}Robot. Vision Transformer (ViT)~\cite{dosovitskiy2020ViT} outperforms other counterparts, \emph{e.g.}, CNNs, in many computer vision tasks, and is therefore widely used as the backbone in various concurrent methods.
For the sake of achieving high performance and efficiency, we propose \textit{MateViT}, which has ViT~\cite{dosovitskiy2020ViT} with \emph{Learnable Importance Sampling (LIS)} as the backbone, followed by an upsampling layer, a \emph{Multi-gate Mixture-of-Experts (MMoE)} layer and two decoder heads. The overall architecture is shown in Fig.~\ref{fig:architecture}.
Note that our multi-task learning architecture can be easily extended to more tasks by adding decoder heads.
Sampled from the mixture of two different datasets, each input data sample is first encoded by the efficient ViT backbone and then fed into the upsampling layer.
One gate in the MMoE layer corresponds to one task, receiving only the data samples relevant to that task. Depending on the output of the gate, different selected experts are activated to learn meaningful latent representations, which are finally decoded by the task-relevant decoder head. Different from the training process, one data sample is fed into all gates synchronously during inference, and the latent representations from the selected experts are then decoded by the corresponding decoder heads, producing predictions for all tasks.
\subsection{Learnable Importance Sampling}\label{method:LIS}
Although ViT~\cite{dosovitskiy2020ViT} shows excellent potential in computer vision tasks, the computation cost explodes with high-resolution input images. Prior works expedite ViT~\cite{dosovitskiy2020ViT} by reducing the number of image tokens, \emph{e.g.}, EViT~\cite{liang2022evit}.
In order to make vanilla ViT \cite{dosovitskiy2020ViT} more lightweight and feasible in real-world applications, we propose a Learnable Importance Sampling strategy for material semantic segmentation, yielding an efficient MateViT.
Our approach does not require an additional class token in the forward pass compared to EViT~\cite{liang2022evit}; therefore, it further reduces the model complexity with high-resolution inputs. As illustrated in Fig.~\ref{fig:architecture}, all image patches are flattened and projected into tokens, forming Queries ($\textbf{Q}$), Keys ($\textbf{K}$), and Values ($\textbf{V}$). Multi-Head Self-Attention (MHSA) is then calculated as:
\begin{equation}
MHSA(\textbf{Q}, \textbf{K}, \textbf{V})=Softmax(\frac{\textbf{QK}^T}{\sqrt{C}})\textbf{V}
\end{equation}
where $\textbf{Q} \in \mathbb{R}^{N \times C}$, $\textbf{K} \in \mathbb{R}^{N \times C}$ and $\textbf{V} \in \mathbb{R}^{N \times C}$ are query, key and value matrices; $N$, $C$ are token number and dimension.
Since softmax is applied in the attention map calculation, the values in each row sum to $1$. However, the values in each column do not always sum to $1$; these column sums indicate the importance of each token in the image.
Based on this observation, we first calculate the summation in columns and then average the importance vectors among all heads.
Top $k$ values are selected and tokens are downsampled according to the indices of these $k$ values after \textit{Add \& Norm}, which stand for a residual link \cite{he2016resnet} and layer normalization \cite{ba2016layernorm}. Fig. \ref{fig:architecture} illustrates the whole LIS process in detail.
Following \cite{vaswani2017transformer}, the downsampled tokens are then sent to a Feed-Forward Network (FFN) followed by another \textit{Add \& Norm}. Since the token number is reduced after the importance sampling, an upsampling layer is then applied, which is a standard transformer decoder block~\cite{vaswani2017transformer}. Thanks to LIS, a vanilla ViT-based model is expedited and better qualified for real-world mobile applications.
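The token-scoring and selection step above can be sketched in NumPy for a single layer. This is an illustrative sketch under our own naming (`lis_select`, toy tensor shapes), not the paper's implementation:

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax.
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def lis_select(q, k, k_keep):
    """Score each token by the column sum of the softmax attention map,
    averaged over heads, and return the indices of the top-k_keep tokens.
    q, k: (heads, N, C) query/key matrices."""
    heads, n, c = q.shape
    attn = softmax(q @ k.transpose(0, 2, 1) / np.sqrt(c), axis=-1)  # rows sum to 1
    importance = attn.sum(axis=1).mean(axis=0)   # column sums, head-averaged: (N,)
    return np.sort(np.argsort(importance)[-k_keep:])  # kept token indices, in order

rng = np.random.default_rng(0)
q = rng.normal(size=(2, 8, 4))          # 2 heads, 8 tokens, dim 4
kept = lis_select(q, q, k_keep=4)       # keep half of the tokens
assert kept.shape == (4,)
```

After this selection, only the kept tokens would continue through the FFN, and the transformer-decoder upsampling layer would restore the full token count.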
\subsection{Multi-gate Mixture-of-Experts}\label{method:MMoE}
Since the performance of a wearable robotic system is related to its model capacity, Mixture-of-Experts (MoE)~\cite{shazeer2017moe} was proposed to enlarge model capacity while keeping model complexity invariant. However, the usage of MoE is rarely discussed for wearable systems~\cite{zhang2022trans4trans}. For the first time, we adopt the MoE method to perform complementary training of both general object and material segmentation, \emph{i.e.}, the former prevents the latter from overfitting and vice versa. Based on these observations, we adopt a Multi-gate Mixture-of-Experts (MMoE) layer in our model for high efficiency and performance. Fig.~\ref{fig:architecture} shows the detail of the MMoE layer for multi-task learning. Specifically, one gate is responsible for one task to select the experts. Similar to \cite{shazeer2017moe}, the resulting selection vector $G(\textbf{x})$ can be described as:
\begin{align}
G(\textbf{x}) &= Softmax(TopM(H(\textbf{x}), m)) \\
H(\textbf{x}) &= \textbf{x} \cdot \textbf{W}_g + N(\textbf{x})\\
N(\textbf{x}) &= Normal(Softplus(\textbf{x} \cdot \textbf{W}_{noise})) \\
TopM(\textbf{v}, m)_i &= \begin{cases}
v_i & \text{if $v_i$ ranks in the top $m$} \\
-\infty & \text{otherwise}
\end{cases}
\end{align}
where $\textbf{x}$, $\textbf{W}_g$, and $\textbf{W}_{noise}$ are the token, gate matrix, and noise matrix, respectively. The noise term $N(\textbf{x})$, computed via the CDF of the standard normal distribution $Normal(\cdot)$, balances the load across experts. During training, for every token in one image, only the single gate of the corresponding task is activated to produce the selection vector.
According to the indices of the top-$m$ values, $m$ experts are selected and the token is only fed into the $m$ experts. The output of the MMoE layer is a weighted sum of the top $m$ values in the selection vector and their corresponding outcomes from the $m$ experts.
A task-relevant decoder head~\cite{strudel2021segmenter} is then applied to transform all output tokens from MMoE into a prediction mask. Note that we also employ the load and importance balancing loss following~\cite{shazeer2017moe} besides the task loss. During the inference, every token from one image is fed into all gates synchronously. The resulting weighted sums from selected experts are then decoded by the corresponding decoder heads shown in Fig.~\ref{fig:architecture}. As the model is trained jointly on two tasks, the knowledge absorbed from object segmentation plays an important role in the high performance of material recognition, which provides PVI with accurate material information in their daily life.
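A minimal NumPy sketch of the noisy top-$m$ gating and expert combination defined above, in the spirit of Shazeer~et~al. The function name, toy weights, and toy expert outputs are our own illustrative assumptions:

```python
import numpy as np

def softplus(x):
    return np.log1p(np.exp(x))

def noisy_topm_gate(x, w_g, w_noise, m, rng=None):
    """Compute logits h = x @ w_g (+ noise during training), keep the top-m
    logits, set the rest to -inf, and softmax over the survivors."""
    h = x @ w_g
    if rng is not None:  # the noise term is only used during training
        h = h + rng.standard_normal(h.shape) * softplus(x @ w_noise)
    gated = np.full_like(h, -np.inf)
    top = np.argsort(h)[-m:]            # indices of the m largest logits
    gated[top] = h[top]
    e = np.exp(gated - gated.max())     # softmax; -inf entries become 0
    return e / e.sum(), top

x = np.ones(4)                                     # one token of dimension 4
w_g = np.arange(24, dtype=float).reshape(4, 6)     # gate matrix for 6 experts
g, experts = noisy_topm_gate(x, w_g, np.zeros((4, 6)), m=2)

# The token is routed only to the selected experts, and the MMoE output is the
# gate-weighted sum of their outcomes (toy experts: scale the token by i + 1).
y = sum(g[i] * (x * (int(i) + 1)) for i in experts)
assert np.isclose(g.sum(), 1.0) and len(experts) == 2
```

A task-relevant decoder head would then transform the combined token representations into the prediction mask, as described above.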
\section{Experiments}
\subsection{Settings and Datasets}
\noindent\textbf{Settings.} We implement the model with PyTorch $1.12.1$ and CUDA $11.6$.
The learning rate is initialized as $0.01$ and it is scheduled by a cosine annealing strategy~\cite{loshchilov2016cosannealing}.
SGD with momentum $0.9$ is adopted as the optimizer.
We initialize our efficient ViT backbone with a pre-trained vanilla ViT \cite{dosovitskiy2020ViT} and keep other layers of the model randomly initialized. Data augmentations like random horizontal flipping, random resize with a ratio $0.5\text{-}2$, and random cropping to $512{\times}512$ are used during training. Note that we \textbf{do not} use other tricks such as OHEM, auxiliary loss, and class-weighted loss for a fair comparison to other methods. We train our model with a batch size of $4$ for $200$ epochs on four 1080Ti GPUs.
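The single-cycle cosine annealing schedule used here can be written in a few lines of plain Python. This is a sketch of the schedule itself with the paper's stated base learning rate of $0.01$; the actual training presumably uses the PyTorch `CosineAnnealingLR` implementation:

```python
import math

def cosine_lr(step, total_steps, base_lr=0.01, min_lr=0.0):
    """Single-cycle cosine annealing: decay base_lr to min_lr over total_steps."""
    cos = 0.5 * (1 + math.cos(math.pi * step / total_steps))
    return min_lr + (base_lr - min_lr) * cos

assert cosine_lr(0, 200) == 0.01           # starts at the initial learning rate
assert abs(cosine_lr(200, 200)) < 1e-12    # decays to ~0 by the final epoch
```

With SGD (momentum $0.9$), this schedule would be evaluated once per epoch over the $200$ training epochs.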
\noindent\textbf{Datasets.} We adopt COCOStuff-10K~\cite{caesar2018coco} and DMS~\cite{upchurch2022dms} for general object and material segmentation, respectively. The COCOStuff-10K dataset~\cite{caesar2018coco} has $9000/1000$ images for training/testing. We conduct experiments following the implementation of mmsegmentation~\cite{mmseg2020} with $171$ categories. The DMS dataset~\cite{upchurch2022dms} has $21857/9057/9152$ images for training/validation/testing with $46$ categories.
\subsection{Results on Object Segmentation}
In this experiment, we focus on a single task, \emph{i.e.}, object segmentation on COCOStuff-10K~\cite{caesar2018coco}. Referring to the model architecture, we remove the MMoE layer from the ViT-based model in order to perform single-task learning. Table~\ref{tab:sota_cocostuff} summarizes the quantitative comparison with other state-of-the-art methods. Since our aim is to make vanilla ViT~\cite{dosovitskiy2020ViT} more efficient and feasible in real-world applications, for a fair comparison, we mainly compare against plain-ViT-based methods. As shown in Table~\ref{tab:sota_cocostuff}, thanks to the learnable importance sampling, our model using ViT-Tiny as the backbone has the smallest GFLOPs ($4.35$) compared to other CNN- and attention-based methods, while keeping competitive pixel accuracy and mean intersection over union (mIoU).
Moreover, regarding the ViT-Small backbone version, our model achieves the best pixel accuracy ($70.2\%$) and mIoU ($38.9\%$) among other counterparts.
These results verify the superiority and efficiency of our model with learnable importance sampling for general object segmentation.
\input{tables/sota_cocostuff}
\subsection{Results on Material Segmentation}
Similar to the experiment on object segmentation, we conduct another experiment on material segmentation using the DMS dataset~\cite{upchurch2022dms} with the same model architecture and settings. Results are reported in Table~\ref{tab:sota_dms}. It can be observed that our model using the ViT-Tiny backbone still has the lowest computation expense ($4.30$ GFLOPs) with high performance ($44.1\%$ in mIoU), and the ViT-Small variant outperforms the other methods listed in Table~\ref{tab:sota_dms} in both pixel accuracy and mIoU. Furthermore, Fig.~\ref{fig:dms_category_iou} presents the per-class IoU of all material categories. It is worth noting that our ViT-Small variant achieves performance gains in all $46$ categories, especially the ones that are relevant for assisting PVI, \emph{e.g.}, \textit{fire} (gaining ${+}20.2\%$), \textit{snow} (gaining ${+}21.3\%$), \textit{plastic} (gaining ${+}22.6\%$), and \textit{ceramic} (gaining ${+}25.5\%$). The aforementioned results prove that our model with learnable importance sampling achieves high performance and efficiency, not only on the object segmentation but also on the material segmentation task. The high accuracy scores ensure the effectiveness of our proposed system in assisting PVI to recognize materials.
\input{tables/sota_dms}
\begin{figure*}[tbh]
\centering
\includegraphics[width=1\linewidth,trim=2 2 2 2,scale=1]{figures/dms_category_iou.pdf}
\caption{\textbf{Per-class IoU (\%) of all material categories}. The blue bar shows the category IoU (\%) of the baseline DMS~\cite{upchurch2022dms}, while the orange shows the \textbf{performance gain} (\%) of our ViT-Small variant.}
\label{fig:dms_category_iou}
\vskip -2ex
\end{figure*}
\subsection{Results on Multi-task Learning}
To deliver richer dense information to PVI and to perform complementary training, we further conduct experiments on multi-task learning based on our MMoE-based ViT models, taking both object and material segmentation into account.
Table \ref{tab:sota_moe} illustrates the quantitative results.
Compared to Trans4Trans MiT-B0~\cite{zhang2022trans4trans}, our model using ViT-Tiny as the backbone requires less computation (${-}35.0\%$ GFLOPs) with a comparable number of parameters, while maintaining higher performance on both datasets. Additionally, compared to the Trans4Trans MiT-B2 variant~\cite{zhang2022trans4trans}, it becomes evident that our ViT-Small variant has a higher mIoU on both COCOStuff-10K (gaining ${+}5.7\%$) and DMS (gaining ${+}7.0\%$). In general, the ViT-based models do have a larger number of parameters than the baselines. However, compared with the number of parameters (MParams), which affects the model size, the computational complexity more critically determines the efficiency of the model on the mobile platform and therefore the fluency of the entire system.
According to the single-task results in Table~\ref{tab:sota_cocostuff} and Table~\ref{tab:sota_dms}, it is worth noting that there are sufficient performance improvements when adding the MMoE layer to our multi-task learning model. The mIoU of ViT-Tiny model is raised from $31.6\%$ (Table~\ref{tab:sota_cocostuff}) to $32.7\%$ (Table~\ref{tab:sota_moe}) on object segmentation and from $44.1\%$ (Table~\ref{tab:sota_dms}) to $45.1\%$ (Table~\ref{tab:sota_moe}) on material segmentation.
The performance gains are consistent in the ViT-Small variant, from $38.9\%$ to $40.2\%$ and from $50.1\%$ to $51.1\%$ on object- and material segmentation, respectively. The reason for the gains is two-fold: (1) the MMoE layer enlarges the model capacity; (2) when applying MMoE to both object segmentation and material segmentation, the former prevents the latter from overfitting and vice versa.
\input{tables/sota_moe}
\subsection{Ablation Study}
To fully understand the proposed methods, an ablation study is conducted on the different modules of our model. The quantitative results are shown in Table~\ref{tab:ablation}. Our baseline model is the same as Segmenter ViT-Tiny~\cite{strudel2021segmenter}. As seen in Table~\ref{tab:ablation}, replacing ViT-Tiny with ViT-Small as the backbone boosts the mIoU from $43.2\%$ to $49.2\%$ on the material segmentation task. After applying LIS, the mIoU further increases from $49.2\%$ to $50.1\%$. We then add the MoE layer to our model, leading to an even higher mIoU of $50.7\%$. These performance gains also occur on other metrics and in the other task. It can be observed that our MMoE model achieves the best performance on all metrics in both the object and material segmentation tasks compared to the baseline model, \emph{i.e.}, $6.5\%$ and $3.7\%$ boosts in pixel accuracy, and $8.9\%$ and $7.9\%$ boosts in mIoU. Through these extensive pre-study experiments, the effectiveness of the proposed methods is comprehensively verified.
\input{tables/ablation.tex}
\subsection{Qualitative Analysis}
Four groups of scenarios related to the daily life of PVI are shown in Fig.~\ref{fig:seg_results}, where the original RGB input image, the \textit{object} segmentation result, and the \underline{material} segmentation result are listed in order. The upper-left group describes a scenario where people with visual impairments are having a meal. The \textit{hot dog} colored orange is correctly recognized by our system in the second image, and it is tagged with a \underline{food} property according to the material segmentation result in the third image. The group in the lower-left corner shows a scenario where blind people walk in a park. According to the predictions in the second and third images, blind people using our system additionally learn that there is an \underline{asphaltic} \textit{road} ahead. With this additional material information, they can traverse the road more safely. In the bottom-right case, the entrance is not recognized if only object recognition is performed; however, the material segmentation result provides supplementary information to find the doors, which can further improve mobility accessibility. As a result, this dense information can help PVI better understand their surroundings and support them in making correct interactions with the environment.
\begin{figure*}[tbh]
\centering
\captionsetup[subfigure]{labelformat=empty}
\includegraphics[width=0.99\linewidth]{figures/seg_results.png}
\begin{minipage}[t]{.16\linewidth}
\vskip-3ex
\subcaption{Input}
\end{minipage}%
\begin{minipage}[t]{.16\linewidth}
\vskip-3ex
\subcaption{Object}
\end{minipage}%
\begin{minipage}[t]{.16\linewidth}
\vskip-3ex
\subcaption{Material}
\end{minipage}%
\begin{minipage}[t]{.16\linewidth}
\vskip-3ex
\subcaption{Input}
\end{minipage}%
\begin{minipage}[t]{.16\linewidth}
\vskip-3ex
\subcaption{Object}
\end{minipage}%
\begin{minipage}[t]{.16\linewidth}
\vskip-3ex
\subcaption{Material}
\end{minipage}%
\vskip-1ex
\caption{\textbf{Visualization} of both general object segmentation and material segmentation. From left to right in each group: RGB input, object segmentation, and material segmentation.}
\label{fig:seg_results}
\vskip-2ex
\end{figure*}
\section{User Study}
Following the detailed pre-study experiments based on datasets, a critical factor for a wearable robotic system is whether it can provide a positive user experience in real-world situations. To investigate this, we conduct a user study in a structured manner, including task-oriented testing and a questionnaire session.
\subsection{Organization}
To verify the usability of the system, we further organize a user study with six blindfolded participants on real-world testing scenarios.
For the user study, we select seven categories of materials that are common in daily life, \emph{i.e.}, \textit{fabric}, \textit{foliage}, \textit{glass}, \textit{metal}, \textit{paper}, \textit{plastic}, and \textit{wood}.
To compare material recognition without and with our system, there are two rounds, \emph{i.e.}, a \textbf{contact} and a \textbf{contactless} round. The contact round is recognition by touch, while the contactless round is recognition by using our system. Note that, to conduct a fair comparison, the order of the two rounds is randomized. Six blindfolded participants (three males and three females) are invited to conduct both testing rounds in the user study, as presented in Fig.~\ref{fig:participants}. In the contact round, the participants are required to name the material after touching an object. The reaction time from touching the object until naming the material is recorded by the organizer. In the contactless round, the participants are required to name the material after hearing the feedback from the bone-conduction earphones of the wearable system. The reaction time from starting our system until naming the material is also recorded for the sake of comparison. Apart from the recorded reaction time, answer correctness is another metric of this user study. After the two rounds of material recognition, the participants join an anonymous questionnaire session. Questionnaires on the NASA-Task Load Index (NASA-TLX) and the System Usability Scale (SUS) are filled out by all participants. Besides, there is space for participants to write down open comments.
\begin{figure}[!t]
\centering
\includegraphics[width=0.99\linewidth]{figures/participants.pdf}
\caption{Scenes of the participants during the user study.}
\label{fig:participants}
\vskip-3ex
\end{figure}
\subsection{Results}
\noindent\textbf{Performance.}
We utilize the average reaction time and answer correctness to evaluate our system performance. Table~\ref{tab:time_and_correctness} presents the results on these metrics among all participants. While the answer correctness is $100\%$ in both rounds, the average reaction time in the contactless round is clearly shorter than in the contact round (${-}0.86$s). Without any haptic perception, our wearable system performs faster recognition than contact-based perception, which indicates that our \textsc{Mate}Robot is feasible and reliable for providing fast assistance in recognizing material properties via visual cues, offering a promising step towards improving perception accessibility for PVI.
\input{tables/user_study.tex}
\noindent\textbf{Cognitive load.}
To measure the cognitive load of our wearable system, we adopt NASA-TLX, a simple and effective method for cognitive load measurement.
We first calculate the average score of every factor among all participants. There are six factors: \textit{Mental Demand}, \textit{Physical Demand}, \textit{Temporal Demand}, \textit{Performance}, \textit{Effort}, and \textit{Frustration}. As shown in Fig.~\ref{fig:tlx}, we then average the scores of all six factors, resulting in a final NASA-TLX value of $28$. According to \cite{grier2015tlx}, this value places the workload caused by our system in the 20th percentile of global workload scores (ranging from $6.21$ to $88.50$ among $1173$ observations), indicating that our system can assist users without much burden. From Fig.~\ref{fig:tlx}, we notice that the effort value is relatively smaller than the other five factors, suggesting that our system is user-friendly.
\noindent\textbf{Usability.}
Apart from NASA-TLX, we verify the usability of our system with SUS. Our system scores $77$ out of $100$, which is a relatively high score. According to Bangor~\textit{et al.}~\cite{bangor2008sus}, who analyzed $2324$ surveys from $206$ studies, ``\textit{the best quarter of studies range from $78.51$ to $93.93$}''. Therefore, we conclude that our system is useful for recognizing not only general objects but also their material properties.
\noindent\textbf{User comments.}
We analyze the open comments made by users during the testing and from the post-study questionnaire session. The insights are reported below:
\begin{itemize}
\item Besides the easy operation, $66.7\%$ of the participants were amazed by the fast response of our material recognition system, which is one of the reasons why they would like to use the system.
\item Regarding the material recognition, all participants enjoyed the experience. They found our system useful and helpful in the daily life of visually impaired people.
\item However, some participants complained about the voice feedback of the glasses, since it kept informing users of the object material without stopping. We also notice this problem and plan to improve the voice feedback in our future work.
\end{itemize}
\begin{figure}[tbh]
\centering
\includegraphics[width=0.99\linewidth,scale=1]{figures/TLX.pdf}
\caption{\textbf{Average NASA-TLX score} of every factor among all participants. Values range from $0$ to $100$, lower is better.}
\label{fig:tlx}
\end{figure}
\noindent\textbf{Limitations.}
Due to the challenges posed by the pandemic, it was difficult to recruit a sufficient number of participants for a user study of the developed system. As a result, the system was tested by a limited number of blindfolded participants.
While the preliminary field test provides some insights into the potential usefulness of the wearable robotics, the results cannot be considered representative of the experience of real blind users. Another limitation is the lack of diversity among the participants. Without a more diverse range of users, the results may be biased towards certain types of users, which could limit the usefulness of the system in real-world settings. Therefore, in our future work, further research and testing are planned to verify the usability and effectiveness of our wearable robotics with a wide variety of real blind users.
\section{Conclusions}
In this work, we look into semantic material understanding for helping visually impaired people via a wearable robotic system \textsc{Mate}Robot.
We put forward \textsc{Mate}ViT, which unifies general object and material segmentation via a multi-gate mixture-of-experts architecture, whose efficiency is enhanced via token importance sampling to make plain-vision-transformer models suitable for mobile applications.
The proposed model is ported to our established assistive \textsc{Mate}Robot system designed for supporting people with visual impairments.
Extensive experiments on DMS and COCOStuff-10K datasets and a user study demonstrate the effectiveness and usefulness of our recognition system.
\bibliographystyle{IEEEtran}
\section{Introduction}
\label{sec:introduction}
\IEEEPARstart{H}{umans} can readily and accurately estimate the 3D shape of an object from a set of 2D landmark points on a single image. Many computer vision approaches have been developed over the years in an attempt to replicate this outstanding ability. As an example, the approaches in \cite{Zhou_2015_CVPR,ramakrishna2012,lin2014jointly,categoryShapesKar15,HamsiciGM12} provide 3D shape estimates of a set of 2D landmark points on a single image. Other works, such as structure from motion (SfM) and shape from shading, require a sequence of images \cite{fayad2011automated,6126319,Jeni:15}, and although they have been extensively studied, the results are not as accurate as the ones reported in the present paper.
The above mentioned algorithms learn a set of 3D shape bases, with the assumption that the deformation in a test image can be represented as a linear combination of these bases. This linear assumption limits the applicability of these approaches to highly deformable or articulated shapes, a point that is only exacerbated when the number of 2D landmark points is small (i.e., non-dense fiducials). To resolve these problems, some previous works, e.g., \cite{ramakrishna2012,lin2014jointly}, are designed to recover the 3D shape of a specific object class by introducing domain-specific priors into the 3D estimate. However, the addition of these priors typically limits the applicability of the resulting algorithms, e.g., to improving generic object recognition \cite{lin2014jointly,Leotta2011} or estimating 3D pose in virtual reality \cite{Lee1985,ramakrishna2012}. Another solution is to identify the set of piecewise segments that belong to the same surface and then solve for each section separately \cite{Russell:11,Russell:14}. However, these algorithms require multiple images or video sequences to identify consistency and smoothness in the movement.
Estimating the 3D geometry of an object from a single view is an ill-posed problem. Nevertheless, with the available 3D ground-truth of a number of 2D sample images with corresponding 2D landmarks, we can learn the mapping from 2D to 3D landmark points, i.e., we can define a mapping function $f(.)$ that given a set of 2D landmark points $x$ outputs their corresponding 3D coordinates $y$, $y=f(x)$. There are three main challenges in this task: (1) how to define a general algorithm that aims at a wide range of rigid {\em and} non-rigid objects; (2) how to define an algorithm that yields good results whether we have a small or a large number of samples; (3) how to make this process run in real-time ($>60$ frames/s).
\begin{figure*}[htp]
\centering{
\includegraphics[width=7in]{framework.png}
\caption{Framework of the proposed algorithm to estimate the 3D shape of a set of 2D landmark points on a single image. The resulting 3D shape is estimated in real-time ($>1,000$ frames/s on an i7 desktop).}\label{fig:example}
}
\end{figure*}
In order to deal with the aforementioned challenges, we propose a deep-network framework to estimate the function $f(.)$. The proposed model is illustrated in Figure~\ref{fig:example}.
Unlike most existing methods, our work derives a deep neural network that uses a set of hierarchical nonlinear transformations to recover depth information of a number of known 2D landmark points on a single image. The 3D shape is estimated by combining the input and output of the neural network and is independent of any scaling factor. The algorithm can be efficiently trained regardless of the number of samples and is robust to noise and missing data. This derived deep neural network is efficient, outperforming previous algorithms by a significant margin.
The major contributions of the herein derived algorithm can be summarized as follows.
\begin{enumerate}[(1)]
\item Our algorithm is extremely general and can be applied to recover the 3D shape of basically any rigid or non-rigid object for which a small or large number of training samples (i.e., 2D and corresponding 3D landmark points) is available. As examples, we provide results on human faces, cars, human bodies and flags.
\item Our algorithm is not limited by the number of training samples. When the number of training data is very large, we employ mini-batch training to parallelize gradient-based optimization and reduce computational cost. When the training set is small, we use an innovative data augmentation approach that generates many novel samples of the 2D landmark points of a given set of 3D points using several camera models; this yields new 2D landmark points as viewed from multiple cameras, scales, points of view, etc.
\item Our proposed multilayer neural network can be trained very quickly. Additionally, in testing, our algorithm runs much faster than real-time ($> 1,000$ frames/s on an i7).
\end{enumerate}
\section{Related Work}
Estimating the 3D geometry of an object from a single view is an ill-posed problem. There has been substantial work on detecting 2D landmark points \cite{Rivera:12,DelaTorre:13,Yang:2011} from a single image, but recovering the corresponding 3D shape remains an open problem. Recently, a number of databases including accurate 3D and 2D landmark points of different objects have allowed the learning of mapping functions between 2D and 3D. Example databases are the Google 3D Warehouse \cite{WinNT}, the Carnegie Mellon Motion capture set \cite{WinNT}, the Fine-Grained 3D Car database (FG3DCar) \cite{lin2014jointly}, and the PASCAL3D database \cite{Aggapito_PAMI:16,xiang_wacv14}.
Directly related to our work are several approaches that reconstruct 3D shape from a single image \cite{Zhou_2015_CVPR,ramakrishna2012,lin2014jointly,vicente2014reconstructing,categoryShapesKar15}. Previous methods usually fit a shape-space model to recover the 3D configuration of the object from the 2D locations of its landmarks in a single image \cite{ramakrishna2012,Zhu2015ICCV,Zhou_2015_CVPR,ZhouZLD15}. In \cite{ramakrishna2012}, the authors address the human pose recovery problem by presenting an activity-independent method, given 2D locations of anatomical landmarks. Unfortunately, this method is limited to the reconstruction of human bodies. Lin \emph{et al.} \cite{lin2014jointly} derived an algorithm to recover 3D models of cars. In the work of \cite{Zhu2015ICCV}, an optimization method for location analysis is proposed to map the discriminant parts of the object into a 3D geometric representation. In \cite{Zhou_2015_CVPR}, a shape-based model is designed, which represents a 3D shape as a linear combination of shape bases. In \cite{ZhouZLD15}, the authors extend this previous work to deal with outliers in the 2D correspondences and rotation asynchronization. More broadly, but also related to our work, reconstructing 3D shape from a sequence of 2D images using structure from motion (SfM) has been extensively investigated \cite{Hartley:2003}. In particular, the work of \cite{HamsiciGM12} can recover the 3D shape of an object from 2D landmark points on a single image using SfM-trained models.
These works assume a low-dimensional embedding of the underlying shape space, and represent each 3D shape as a linear combination of shape bases. For example, Principal Component Analysis (PCA) and other linear methods are employed to obtain the low-dimensional shape bases \cite{Gotardo_PAMI,Akhter:11}. A major limitation of linear models is that when the true (but unknown) mapping function is nonlinear, the accuracy drops significantly \cite{Gotardp_ICCV}. Therefore, these methods cannot efficiently handle highly deformable or articulated shapes.
Additionally, several algorithms are limited to a certain type of object categories, e.g. \cite{ramakrishna2012} is designed for reconstructing 3D human pose from 2D image landmarks and \cite{lin2014jointly} focuses on 3D car modeling. These algorithms thus make prior assumptions or use special geometric properties of the object to constrain the solution space that do not typically generalize well to other object categories.
An additional limitation of the above cited papers is their inability to learn from either large or small datasets. Some existing algorithms require very small training sets, but are unable to yield improvements when larger datasets are available. On the other hand, some algorithms require very large training sets and are incapable of yielding reasonable results when the number of samples is small.
Our theoretical and experimental results reported below demonstrate how the herein derived algorithm resolves the limitations of the methods discussed in this section.
\section{Proposed Approach}
In this section, we describe how to recover the 3D shape of an object given its 2D landmarks on a single image.
\subsection{Preliminaries}
Let us denote the $n$ 2D landmark points on the $i^{th}$ image
\begin{equation}
{\bf W}_i= \left( {\begin{array}{*{20}{c}}
{\begin{array}{*{20}{c}}
{{u_{i1}}}&{{u_{i2}}}& \cdots &{{u_{in}}}
\end{array}}\\
{\begin{array}{*{20}{c}}
{{v_{i1}}}&{{v_{i2}}}& \cdots &{{v_{in}}}
\end{array}}
\end{array}} \right)\in\mathbb{R}^{2\times n},
\end{equation}
where $(u_{ij},v_{ij})^T$ are the 2D coordinates of the $j^{th}$ image landmark point. Our goal is to recover the 3D coordinates of these 2D landmark points,
\begin{equation}
{\bf S}_i=\left( {\begin{array}{*{20}{c}}
{{x_{i1}}}&{{x_{i2}}}& \cdots &{{x_{in}}}\\
{{y_{i1}}}&{{y_{i2}}}& \cdots &{{y_{in}}}\\
{{z_{i1}}}&{{z_{i2}}}& \cdots &{{z_{in}}}
\end{array}} \right)\in\mathbb{R}^{3\times n},
\end{equation}
where $(x_{ij},y_{ij},z_{ij})^T$ are the 3D coordinates of the $j^{th}$ landmark of the $i^{th}$ object.
Assuming a weak-perspective camera model, with calibrated camera matrix $\mathcal{M}=\begin{pmatrix}
\lambda &0 &0 \\
0 &\lambda &0
\end{pmatrix}$, the weak-perspective projection of the object 3D landmark points is given by
\begin{equation}
\label{eq:linear}
{\mathbf W}_i = \mathcal{M}{\bf S}_i.
\end{equation}
This result is of course defined up to scale, since $\boldsymbol{u}_i=\lambda \boldsymbol{x}_i$ and $\boldsymbol{v}_i=\lambda \boldsymbol{y}_i$, where ${\mathbf x}_i^T=(x_{i1},x_{i2},...,x_{in})$, ${\mathbf y}_i^T=(y_{i1},y_{i2},...,y_{in})$, ${\mathbf z}_i^T=(z_{i1},z_{i2},...,z_{in})$, ${\mathbf u}_i^T=(u_{i1},u_{i2},...,u_{in})$ and ${\mathbf v}_i^T=(v_{i1},v_{i2},...,v_{in})$. This will require that we standardize our variables when deriving our algorithm.
\subsection{Deep 3D Shape Reconstruction from 2D Landmarks}
\subsubsection{Proposed Neural Network.}\label{Sec: Proposed NN}
Given a training set with $m$ 3D landmark points $\{{\bf S}_i\}_{i=1}^{m}$, we aim to learn the function $f: {\mathbb{R}}^{2n} \to {\mathbb{R}}^{n}$, that is,
\begin{equation}
\widehat{{\bf z}}_{i}=f({\widehat{{\bf x}}_{i}}, {\widehat{{\bf y}}_{i}}),
\end{equation}
where $\widehat{{\bf x}}_{i}$, $\widehat{{\bf y}}_{i}$ and $\widehat{{\bf z}}_{i}$ are obtained by standardizing ${\bf x}_i$, ${\bf y}_i$ and ${\bf z}_i$ as follows,
\begin{equation}
\begin{aligned}
\widehat{x}_{ij}=\frac{x_{ij}-\overline{{\bf x}}_i}{(\sigma({\bf x}_i)+\sigma({\bf y}_i))/2},\\
\widehat{y}_{ij}=\frac{y_{ij}-\overline{{\bf y}}_i}{(\sigma({\bf x}_i)+\sigma({\bf y}_i))/2},\\
\widehat{z}_{ij}=\frac{z_{ij}-\overline{{\bf z}}_i}{(\sigma({\bf x}_i)+\sigma({\bf y}_i))/2},\\
\end{aligned}
\end{equation}
where $\overline{{\bf x}}_i$, $\overline{{\bf y}}_i$ and $\overline{{\bf z}}_i$ are the mean values, and $\sigma({\bf x}_i)$, $\sigma({\bf y}_i)$ and $\sigma({\bf z}_i)$ are the standard deviations of the elements in ${{\bf x}_i}$, ${{\bf y}_i}$ and ${{\bf z}_i}$, respectively.
We standardize ${\bf x}_i$, ${\bf y}_i$ and ${\bf z}_i$ to eliminate the effect of scaling and translation of the 3D object, as noted above. By estimating $f$ using a training set, we learn the geometric properties of the given object. Herein, we model the function $f(.)$ using a multilayer neural network.
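The standardization above can be sketched in a few lines. The only assumption is that coordinates arrive as plain arrays; the variable names are illustrative, not from the paper's code:

```python
import numpy as np

# Sketch of the standardization step: each coordinate vector is centered
# by its own mean, and all three are divided by the shared scale
# (sigma(x) + sigma(y)) / 2, which removes translation and uniform scale.

def standardize_shape(x, y, z):
    s = (x.std() + y.std()) / 2.0
    return (x - x.mean()) / s, (y - y.mean()) / s, (z - z.mean()) / s

x = np.array([0.0, 2.0, 4.0])
y = np.array([1.0, 1.0, 4.0])
z = np.array([3.0, 5.0, 7.0])
xh, yh, zh = standardize_shape(x, y, z)

# A uniformly scaled and translated copy of the same shape standardizes
# to the identical result, which is the invariance the paper relies on.
xh2, yh2, zh2 = standardize_shape(2 * x + 5, 2 * y - 1, 2 * z + 3)
```

Note that the depth coordinate is divided by the same image-plane scale, so the aspect ratio between depth and the $x$-$y$ plane is preserved.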
Figure~\ref{fig:example} depicts the overall architecture of our neural network. It contains $L$ layers. The $l^{th}$ layer is defined as,
\begin{equation}
\boldsymbol{a}^{(l+1)}=\tanh\left({\bf\Omega}^{(l)}\boldsymbol{a}^{(l)}+\boldsymbol{b}^{(l)}\right),
\end{equation}
where $\boldsymbol{a}^{(l)}\in\mathbb{R}^{d}$ is the input vector, $\boldsymbol{a}^{(l+1)}\in\mathbb{R}^{r}$ is the output vector, $d$ and $r$ specify the number of input and output nodes, respectively, and ${\bf\Omega}^{(l)}\in\mathbb{R}^{r\times d}$ and $\boldsymbol{b}^{(l)}\in\mathbb{R}^{r}$ are the network parameters, with the former a weight matrix and the latter a bias vector. Our neural network uses the hyperbolic tangent activation function, $\tanh(\cdot)$.
Our objective is to minimize the sum of Euclidean distances between the predicted depth location $\boldsymbol{a}_i^{(L)}$ and the ground-truth $\widehat{\boldsymbol{z}}_i$ of our $m$ 3D landmark points. Formally,
\begin{equation}
\displaystyle \min{\sum_{i=1}^m{\| \widehat{\boldsymbol{z}}_i-\boldsymbol{a}^{(L)}_i \|_2}},
\end{equation}
with $\| . \|_2$ the Euclidean metric.
We utilize the RMSProp \cite{Tieleman2012} algorithm to optimize our model parameters. In a multilayer neural network, the appropriate learning rates can vary widely during learning and between different parameters. RMSProp is a technique that updates parameters of a neural network to improve learning. It can adaptively adjust the learning rate of each parameter separately to improve convergence to a solution.
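As a rough illustration, the layer update $\tanh({\bf\Omega}^{(l)}\boldsymbol{a}^{(l)}+\boldsymbol{b}^{(l)})$ and the Euclidean loss above can be sketched as follows. The weights here are random placeholders rather than trained parameters, and $n=15$ is chosen only for concreteness (it matches the body-joint experiments later in the paper):

```python
import numpy as np

# Sketch of the forward pass and the training objective. Layer sizes
# follow the [2n, 2n, 2n, 2n, 2n, n] architecture stated in the
# Implementation Details; the weights are random, untrained placeholders.

rng = np.random.default_rng(0)
n = 15                                    # number of landmarks
sizes = [2 * n, 2 * n, 2 * n, 2 * n, 2 * n, n]
params = [(rng.standard_normal((r, d)) * 0.1, np.zeros(r))
          for d, r in zip(sizes[:-1], sizes[1:])]

def forward(a, params):
    # Each layer applies tanh(W a + b), as in the layer definition above.
    for W, b in params:
        a = np.tanh(W @ a + b)
    return a

def loss(z_true, z_pred):
    # Euclidean distance for one sample; training sums this over all i.
    return np.linalg.norm(z_true - z_pred)

a0 = rng.standard_normal(2 * n)           # standardized (x_hat, y_hat) input
z_hat = forward(a0, params)               # predicted standardized depths
```

In practice the parameters would be fitted with RMSProp as described above; this sketch only shows the shapes and the objective being minimized.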
\subsubsection{Testing.}
When testing on the $t^{th}$ object, we have ${\bf u}_t$ and ${\bf v}_t$, and want to estimate ${\bf x}_t$, ${\bf y}_t$ and ${\bf z}_t$. From Eq.~(\ref{eq:linear}) we have $\boldsymbol{u}_t=\lambda \boldsymbol{x}_t$ and $\boldsymbol{v}_t=\lambda \boldsymbol{y}_t$. Thus, we first standardize the data,
\begin{equation} \label{eq:std}
\begin{aligned}
\widehat{u}_{tj}=\frac{u_{tj}-\overline{\boldsymbol{u}}_t}{(\sigma(\boldsymbol{u}_t)+\sigma(\boldsymbol{v}_t))/2},\\
\widehat{v}_{tj}=\frac{v_{tj}-\overline{\boldsymbol{v}}_t}{(\sigma(\boldsymbol{u}_t)+\sigma(\boldsymbol{v}_t))/2}.
\end{aligned}
\end{equation}
This yields $\widehat{\boldsymbol{x}}_t=\widehat{\boldsymbol{u}}_t$ and $\widehat{\boldsymbol{y}}_t=\widehat{\boldsymbol{v}}_t$. This means we can directly feed $({\widehat{\boldsymbol{u}}_{t}}, {\widehat{\boldsymbol{v}}_{t}})$ into the trained neural network to obtain its depth $\widehat{\boldsymbol{z}}_{t}$. Then, the 3D shape of the object can be recovered as $(\widehat{\boldsymbol{u}}_{t}^T, \widehat{\boldsymbol{v}}_{t}^T, \widehat{\boldsymbol{z}}_{t}^T)^T$, a result that is defined up to scale.
\subsubsection{Missing Data.}
To solve the problem of missing data, we add a recurrent layer \cite{bengio1996recurrent} on top of the multi-layer neural network to jointly estimate both the 2D coordinates of missing 2D landmarks and their depth. This is illustrated in Figure~\ref{fig:rnn}. The module named ``A" corresponds to the recurrent layer that estimates the 2D entries of the missing data, while ``B" is the multi-layer neural network described before and summarized in Figure~\ref{fig:example}. The output of ``A" is thus the full set of 2D landmarks and the output of ``B" their corresponding depth values. The module ``C" merges the outputs of ``A" and ``B" to generate the final output, $\left( \widehat{\boldsymbol{u}}_{i}^T, \widehat{\boldsymbol{v}}_{i}^T, \widehat{\boldsymbol{z}}_{i}^T \right)^T$, and ${\ell _2}$ is the loss function used in this module.
In the recurrent network, we use the notation ${\widehat u}_{ij}^{(s)}$ and ${\widehat v}_{ij}^{(s)}$ to specify the estimated values of ${\widehat u}_{ij}$ and ${\widehat v}_{ij}$ at iteration $s$. Here $i$ represents the $i^{th}$ sample. The input to our above defined network can then be written as ${\mathbf d_i}^{(0)}=({\widehat u}_{i1}^{(0)}, {\widehat v}_{i1}^{(0)}, \cdots , {\widehat u}_{in}^{(0)}, {\widehat v}_{in}^{(0)})$, with $s=0$ specifying the initial input. If the values of $u_{ij}$ and $v_{ij}$ are missing (i.e., occluded in the image), then ${\widehat u}_{ij}^{(0)}$ and ${\widehat v}_{ij}^{(0)}$ are set to zero. Otherwise the values of $u_{ij}$ and $v_{ij}$ are standardized using Eq. \eqref{eq:std} to obtain ${\widehat u}_{ij}^{(0)}$ and ${\widehat v}_{ij}^{(0)}$.
In subsequent iterations, from $s-1$ to $s$, if the $j^{th}$ landmark is not missing, ${\widehat u}_{ij}^{(s)}={\widehat u}_{ij}^{(s-1)}$ and ${\widehat v}_{ij}^{(s)}={\widehat v}_{ij}^{(s-1)}$. If the $j^{th}$ landmark is missing, then ${\widehat u}_{ij}^{(s)}=g \left( \sum\limits_{k = 1}^{2n} {{w_{k(2j - 1)}}d_{ik}^{(s - 1)}} \right)$, ${\widehat v}_{ij}^{(s)}=g \left( \sum\limits_{k = 1}^{2n} {{w_{k(2j)}}d_{ik}^{(s - 1)}} \right)$, where $g( \cdot )$ can be the identity function or a nonlinear function (e.g. $\tanh(\cdot)$) and $w_{k(2j-1)}$, $w_{k(2j)}$, $k=1,...,2n$, $j=1,...,n$ are the parameters of the recurrent layer. In our experiments we find that $g( \cdot )$ being the identity function works best.
The number of iterations is set to $\tau$, which yields ${\mathbf d_i} = \sum\limits_{s = 1}^\tau {{\lambda _s}{{\mathbf d_i}^{(s)}}}$, where $0 < {\lambda _1} < \cdots < {\lambda _\tau }$ and $\sum\limits_{s = 1}^\tau {\lambda_s} = 1$, as the final output of the recurrent layer. The vector ${\boldsymbol \lambda}=(\lambda_1,...,\lambda_{\tau})^T$ is fixed by hand. By using the weighted sum of the output at each step rather than the output at the last step as final output of the recurrent layer, we can enforce intermediate supervision to make the recurrent layer gradually converge to the correct output.
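A minimal sketch of this recurrent imputation step is given below, assuming $g(\cdot)$ is the identity (the choice the paper reports works best). The weight matrix and $\boldsymbol{\lambda}$ values are illustrative placeholders, not trained or tuned values:

```python
import numpy as np

# Sketch of the recurrent layer ("A" in Fig. 2). Observed entries are
# kept at every step; each missing entry is re-estimated as a linear
# combination (a column of W_rec) of the previous step's full vector,
# and the final output is the lambda-weighted sum over all steps.

def impute(d0, observed_mask, W_rec, lambdas):
    d = d0.copy()
    out = np.zeros_like(d0)
    for lam in lambdas:
        est = d @ W_rec                      # entry j = sum_k w_kj * d_k
        d = np.where(observed_mask, d, est)  # update only missing entries
        out += lam * d
    return out

n = 4
rng = np.random.default_rng(1)
d0 = rng.standard_normal(2 * n)
mask = np.ones(2 * n, dtype=bool)
mask[[2, 3]] = False                         # landmark 2's (u, v) occluded
d0[~mask] = 0.0                              # missing inputs start at zero
lambdas = np.array([0.1, 0.2, 0.3, 0.4])     # increasing, summing to 1
W_rec = rng.standard_normal((2 * n, 2 * n)) * 0.1
d_final = impute(d0, mask, W_rec, lambdas)
```

Because the $\lambda_s$ sum to one, the observed entries pass through unchanged, while the missing entries converge over the $\tau$ steps under the intermediate supervision described above.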
\begin{figure}[htp]
\centering{
\includegraphics[width=2in]{rnn.eps}
\caption{Deep network for dealing with missing data. ${\mathbf d}^{(0)}$ is the input to the network, ``A" is the recurrent layer with $\tau$ steps for estimating missing inputs. ``B" is the multi-layer neural network defined in Section \ref{Sec: Proposed NN} and summarized in Figure~\ref{fig:example}. ``C" combines the results of ``A" and ``B" to yield the final output of the network.}\label{fig:rnn}
}
\end{figure}
\subsubsection{Data augmentation approach.}
In many applications the number of training samples (i.e., 2D and corresponding 3D landmark points) is small. However, any regressor designed to learn the mapping function $f(.)$ requires a large number of training samples with the 2D landmarks as seen from as many cameras, views (translation, rotation) and scales as possible. We resolve this with a simple, yet efficient data augmentation approach.
The key to our approach is to note that, for a given object, its 3D structure does not change. What changes are the 2D coordinates of the landmark points in the image. For example, scaling or rotating an object in 3D yields different 2D coordinates of the same object landmarks. Thus, our task is to generate as many of these sample views as possible. We do this with a camera model.
A camera model allows us to predict the 2D image coordinates of 3D landmark points. Here, we use an affine camera model to generate a very large number of images of the known 3D sample objects. We do this by varying the intrinsic parameters of the camera model (e.g., focal length) as well as the extrinsic parameters (e.g., 3D translation, rotation and scale). Specifically, we use the weak-perspective camera model defined above.
We also use this data augmentation step to model imprecisely localized 2D landmark points. All detection algorithms include a detection error (even when fiducial detections are done by humans) \cite{Ding:10}. We address this problem by modeling the detection error as Gaussian noise with zero mean and standard deviation $\sigma$. Specifically, we set $\sigma$ to about 3\% of the size of the object. This means that, in addition to the 2D landmark points given by the camera models used above, we incorporate 2D landmark points that have been altered by adding this random Gaussian noise. This allows our neural network to learn to accurately recover the 3D shape of an object from imprecisely localized 2D landmark points.
It is important to note that, when the original training set is small, we can still train an efficiently-performing neural network using this trick. In fact, we have found experimentally that we do not need a large number of training samples to obtain extremely low reconstruction errors using our derived approach and this data augmentation trick. When the number of samples is large, this approach can help reduce the 3D reconstruction error by incorporating intrinsic or extrinsic camera parameters and detection errors not well represented in the samples.
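The augmentation loop can be sketched as follows, assuming a single rotation axis for brevity (the full procedure also rotates about $x$ and $y$); the angle range, scale, and noise level here are placeholders rather than the paper's exact settings:

```python
import numpy as np

# Sketch of the augmentation step: each 3D training shape is randomly
# rotated, projected with a weak-perspective camera W = M S, and
# perturbed with small Gaussian noise to mimic landmark-detection error.

def rot_z(t):
    c, s = np.cos(t), np.sin(t)
    return np.array([[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]])

def augment(S, rng, scale=1.0, noise_frac=0.03):
    """S: 3 x n ground-truth shape -> one synthetic 2 x n landmark set."""
    R = rot_z(rng.uniform(-np.pi, np.pi))
    M = scale * np.array([[1.0, 0.0, 0.0],   # weak-perspective camera,
                          [0.0, 1.0, 0.0]])  # lambda = scale
    W = M @ (R @ S)
    size = W.max() - W.min()                 # object size in the image
    # Detection noise: std of about noise_frac of the object size.
    return W + rng.normal(0.0, noise_frac * size, W.shape)

rng = np.random.default_rng(2)
S = rng.standard_normal((3, 15))             # one ground-truth 3D shape
W = augment(S, rng, scale=2.5)               # one synthetic 2D view
```

Calling `augment` repeatedly on the same shape yields the many camera, view, and scale variations described above.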
\subsubsection{Implementation Details.}
Our feed-forward neural network contains six layers. The number of nodes in each layer is $[2n,2n,2n,2n,2n,n]$.
We divide our training data into a training and a validation set. In each of these two sets, we perform data augmentation.
We use the Keras library \cite{chollet2015} on top of Theano \cite{bastien2012theano} to implement our proposed multilayer neural network. Early stopping is enabled to prevent overfitting: we stop the training process if the validation error does not decrease after $10$ iterations. We set the initial learning rate to $.01$.
\section{Experimental Results}
We conduct experiments on a variety of databases to test the effectiveness of our algorithm. We used the following datasets: the CMU Motion Capture database \cite{CMUmotion}, the fine-grained 3D car (FG3DCar) database \cite{lin2014jointly}, the Binghamton University 3D Facial Expression (BU$-$3DFE) database \cite{zhang2014bp4d,zhang2013high} and the flag flapping in the wind database \cite{white2007capturing}. We also report our results on the sequestered dataset of the 3D Face Alignment in the Wild Challenge (3DFAW), done in conjunction with the 2016 European Conference on Computer Vision (ECCV), where the herein derived algorithm was a top performer.
Comparisons with the state-of-the-art demonstrate that our algorithm is significantly more accurate and efficient in recovering 3D shape from a single 2D image than previously defined methods. The 3D reconstruction error is evaluated using the Procrustes distance between the reconstructed shape and the ground-truth. Specifically, the ground-truth is given in millimeters (mm), which is normalized to a standard scale (given by the mean of all 3D landmark points) to make the error measure invariant to scale.
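For reference, a common way to compute such a Procrustes-aligned error is sketched below. The paper's exact normalization may differ, so this is the textbook orthogonal-Procrustes recipe rather than the authors' code:

```python
import numpy as np

# Procrustes-style reconstruction error: both shapes are centered and
# scaled to unit Frobenius norm, the reconstruction is optimally rotated
# onto the ground truth via SVD, and the residual norm is reported.

def procrustes_error(S_true, S_pred):
    """S_true, S_pred: 3 x n shapes."""
    A = S_true - S_true.mean(axis=1, keepdims=True)
    B = S_pred - S_pred.mean(axis=1, keepdims=True)
    A /= np.linalg.norm(A)                   # remove scale
    B /= np.linalg.norm(B)
    U, _, Vt = np.linalg.svd(A @ B.T)
    R = U @ Vt                               # best rotation aligning B to A
    return np.linalg.norm(A - R @ B)

rng = np.random.default_rng(3)
S = rng.standard_normal((3, 20))
Rz = np.array([[0.0, -1.0, 0.0], [1.0, 0.0, 0.0], [0.0, 0.0, 1.0]])
# A rotated, scaled, and translated copy has (near-)zero error.
err = procrustes_error(S, 3.0 * Rz @ S + 1.0)
```

Any similarity transform of a perfect reconstruction therefore scores (numerically) zero, which is exactly the invariance the error measure above is designed to have.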
Also, to demonstrate the robustness of our method, we perform sensitivity analysis on these databases. Results show that our method is tolerant to moderate Gaussian noise. We also show results of how our method handles missing data.
Our derived neural network runs much faster than real-time, $>1000$ frames/s, on a 3.40 GHz Intel Core i7 desktop computer.
\subsection{CMU Motion Capture Database}
\setlength{\tabcolsep}{4pt}
\begin{table}
\begin{center}
\caption{Comparison of the reconstruction error on the CMU Motion Capture Database. The proposed method yields a highly significant reduction in the reconstruction error compared with previously defined algorithms.}
\label{Table:headings}
\begin{tabular}{lccc}
\hline\noalign{\smallskip}
Methods & Subject 13 & Subject 14 & Subject 15\\
\noalign{\smallskip}
\hline
\noalign{\smallskip}
This paper & $\boldsymbol{.0229}$ & $\boldsymbol{.0201}$ & $\boldsymbol{.0099}$\\
Zhou \emph{et al.} \cite{Zhou_2015_CVPR} & $.0653$ & $.0643$ & $.0405$\\
Ramakrishna \emph{et al.} \cite{ramakrishna2012} & $.0983$ & $.0979$ & $ .0675$\\
\hline
\end{tabular}
\end{center}
\end{table}
\setlength{\tabcolsep}{1.4pt}
\begin{figure*}[htp]
\centering{
\includegraphics[width=7in]{mocap_example2.jpg}
\caption{Result of our algorithm on images of the humans in-the-wild dataset \cite{NIPS2006_2976}. The first and fifth columns are the single 2D images and 2D landmarks (in yellow) used by our algorithm. The reconstructed 3D shapes are shown in the second, third, fourth, sixth, seventh and eighth columns; these show the 3D reconstruction from multiple views to more clearly demonstrate the quality of the recovered 3D shape.}
\label{Figure:example}
}
\end{figure*}
The CMU motion capture database contains 3D human body joints locations of subjects performing various physical activities. Each body shape is defined by 15 3D landmark points. To provide a fair comparison, we follow the experimental setting of \cite{Zhou_2015_CVPR}, where sequences of subject 86 are used for training and sequences of subjects 13, 14 and 15 are used for testing. During the training of the neural network, we split data of subject 86 into five folds and use four folds for training and the other fold for validation. We train the neural network in 10,000 epochs. We train one batch for at most 300 iterations within one epoch. The batch is either the original training data (the first epoch) or a random rotation of the original training data in 3D space. To represent real human shape in a 2D image, we set the rotation angle to be uniformly distributed within the range of [-20\degree, 20\degree] about the $x$ axis, [-20\degree, 20\degree] about the $y$ axis and [-180\degree, 180\degree] about the $z$ axis.
Comparative results are in Table~\ref{Table:headings}. As shown in this table, our results are significantly more accurate than those given by previously defined algorithms. To further demonstrate the effectiveness and generality of our method, we randomly selected several 2D images from the humans in-the-wild dataset \cite{NIPS2006_2976} and used the herein derived algorithm to recover the 3D shape of the human bodies in these images. The results are in Figure~\ref{Figure:example}.
\subsection{3D Face Reconstruction}
\setlength{\tabcolsep}{4pt}
\begin{table}
\begin{center}
\caption{Comparison of our method and Zhou \emph{et al.} \cite{Zhou_2015_CVPR} on BU$-$3DFE, 3DFAW and FG3DCar.}
\label{Table:face}
\begin{tabular}{lcc}
\hline\noalign{\smallskip}
Database & This paper & Zhou \emph{et al.} \cite{Zhou_2015_CVPR}\\
\noalign{\smallskip}
\hline
\noalign{\smallskip}
BU$-$3DFE Face & $\boldsymbol{0.0040}$ & 0.0053\\
3DFAW & $\boldsymbol{0.0042}$ & 0.0055\\
FG3DCar & $\boldsymbol{0.0022}$ & 0.0042\\
\hline
\end{tabular}
\end{center}
\end{table}
\setlength{\tabcolsep}{1.4pt}
\begin{figure*}[htp]
\centering{
\includegraphics[width=7in]{face2.jpg}
\caption{Illustration of our 3D estimation results on the Helen database \cite{Le2012}. The model is trained on the BU$-$3DFE face database and tested on Helen. The first and fifth columns show the input 2D images with landmarks (in yellow); the other columns show the estimated 3D shapes.}
\label{Figure:face}
}
\end{figure*}
The BU$-$3DFE database contains images of $100$ subjects performing six facial expressions in front of a 3D scanner, with $25$ 3D samples per subject, for a total of $2,500$ 3D sample images. Every sample has 83 annotated 3D landmarks. We randomly select 60 subjects for training, 10 for validation and the remaining 30 for testing.
We trained our neural network in $50,000$ epochs and used the same training strategy as described in the preceding section, with the difference that we restricted rotations about the $z$ axis to the range [-60\degree, 60\degree] to more accurately reflect real head movement.
The mean testing error of our algorithm is ${\bf 0.004}$. To provide comparative results against the state-of-the-art, we train and test the algorithm of \cite{Zhou_2015_CVPR} using the same experimental setting described above. Comparative results are in Table~\ref{Table:face}. These results show that the proposed algorithm outperforms the state-of-the-art method of \cite{Zhou_2015_CVPR}.
We also validate the cross database generality of our trained model on the Helen database of \cite{Le2012}. Here, we randomly select some face images and manually label the face landmarks. The reconstructed 3D shapes are shown in Figure~\ref{Figure:face}.
To provide an additional unbiased result, we participated in the 3DFAW competition\footnote{https://competitions.codalab.org/competitions/10261}, which was conducted in conjunction with ECCV. Three of the four datasets in the challenge are subsets of the MultiPIE \cite{Gross2010}, BU-4DFE \cite{Yin2008} and BP4D-Spontaneous \cite{Zhang2014} databases, respectively. A fourth dataset, TimeSlice3D, contains annotated 2D images that are extracted from online videos. In total, there are $18,694$ images. Each image has 66 labeled 3D fiducial points and a face bounding box centered around the mean 2D projection of the landmarks. The 2D to 3D correspondence assumes a weak-perspective projection. The depth values have been normalized to have zero mean. Because this competition required that we also estimate the 2D landmark points in the image, we incorporated an additional deep network, trained to detect 2D face landmarks \cite{Zhao:16}, into the model summarized in Figure~\ref{fig:example}.
Reconstruction error was reported by the organizers of the competition on a sequestered dataset to which we did not have prior access. Our algorithm was a top performer, with a significant margin over other methods. Herein, we also provide comparative results against the algorithm of \cite{Zhou_2015_CVPR} in Table~\ref{Table:face}.
\subsection{FG3DCar Database}
\begin{figure*}[htp]
\centering{
\includegraphics[width=7in]{car_demo2.jpg}
\caption{Illustration of our 3D estimation results on the FG3DCar database. The first and fifth columns correspond to the input 2D images and landmark points (in yellow), with the other columns showing the recovered 3D shapes.}
\label{Figure:car}
}
\end{figure*}
This is a fine-grained 3D car database. It consists of 300 images of 30 different car models as seen from multiple views. Each car model has 10 images. Each image has 64 annotated 2D landmarks and a reconstructed 3D CAD (computer-aided design) model. We adopt the default setting for training and testing, i.e., half of the 3D shapes of each car model are used for training and the other half for testing. In order to train our neural network, we further split the $150$ 3D training sample shapes into training (120 shapes) and validation (30 shapes) sets. Testing is conducted on the remaining $150$ images.
The neural network is trained for 100,000 epochs. We follow the same procedure used in the preceding two sections and train our neural network on one batch for at most 300 iterations in each epoch. The batch is either the 120 training samples (in the first epoch) or a random 3D rotation of the 120 training samples. We augment the validation set $2,000$ times using the data augmentation approach described above, resulting in a total of $60,000$ validation sample images.
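The per-epoch batches above are random 3D rotations of the training shapes. A minimal sketch of such an augmentation, assuming shapes are stored as $N \times P \times 3$ arrays and drawing rotations via Rodrigues' formula (our choice; the text does not specify how rotations are sampled):

```python
import numpy as np

def random_rotation(rng):
    # Rodrigues' formula: random unit axis, random angle in [0, 2*pi).
    axis = rng.normal(size=3)
    axis /= np.linalg.norm(axis)
    theta = rng.uniform(0.0, 2.0 * np.pi)
    K = np.array([[0.0, -axis[2], axis[1]],
                  [axis[2], 0.0, -axis[0]],
                  [-axis[1], axis[0], 0.0]])
    return np.eye(3) + np.sin(theta) * K + (1.0 - np.cos(theta)) * (K @ K)

def augment_batch(shapes, rng):
    # Apply one independent random rotation per shape; shapes: (N, P, 3).
    return np.stack([s @ random_rotation(rng).T for s in shapes])

rng = np.random.default_rng(0)
batch = rng.normal(size=(4, 10, 3))   # 4 toy shapes, 10 points each
aug = augment_batch(batch, rng)
```

Because rotations are isometries, the augmented shapes keep the same point norms and inter-point distances, which is what makes them valid extra training samples.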
The mean 3D reconstruction error on the testing set is $\boldsymbol{0.0022}$. Comparative results with the method of \cite{Zhou_2015_CVPR} are given in Table~\ref{Table:face}. Qualitative examples of our results are shown in Figure~\ref{Figure:car}.
\subsection{Flag Flapping in the Wind Database}
\begin{figure*}[!htb]
\begin{center}
\includegraphics[trim={7cm 4cm 5cm 4cm},clip, scale=0.5]{10.eps}
\includegraphics[trim={7cm 4cm 5cm 4cm},clip, scale=0.5]{20.eps}
\includegraphics[trim={7cm 4cm 5cm 4cm},clip, scale=0.5]{40.eps}
\includegraphics[trim={7cm 4cm 5cm 4cm},clip, scale=0.5]{60.eps} \\
\includegraphics[trim={7cm 4cm 5cm 4cm},clip, scale=0.5]{70.eps}
\includegraphics[trim={7cm 4cm 5cm 4cm},clip, scale=0.5]{80.eps}
\includegraphics[trim={7cm 4cm 5cm 4cm},clip, scale=0.5]{90.eps}
\includegraphics[trim={7cm 4cm 5cm 4cm},clip, scale=0.5]{100.eps}
\end{center}
\caption{3D shape reconstruction results on the flag flapping in the wind sequence. Comparisons of the reconstructed 3D shape (in red) with its 3D ground-truth (in green).}
\label{Figure:flag}
\end{figure*}
The flag flapping in the wind database \cite{white2007capturing} is a motion capture sequence of a flag waving in the wind. There are 450 frames, each of which has 540 vertices. We use the first 300 frames for training and the rest for testing. The network is trained for 30,000 epochs using the same procedure described in the preceding sections.
The mean testing error is $\boldsymbol{0.0004}$. Figure \ref{Figure:flag} shows some of our reconstruction results compared with the ground truth. The reconstructed 3D shape is shown using filled red circles and the 3D ground truth with open green circles. As these results show, the reconstructed and true shapes are almost identical.
\subsection{Noise and missing data}
To determine how sensitive the proposed neural network is to inaccurate 2D landmark detections, we add independent random Gaussian noise with variance $\sigma$ to the databases used in the preceding sections. Figure~\ref{subfig-1:dummy} shows how little the performance degrades as $\sigma$ increases when noise is added to the CMU Motion Capture database. The average height of subjects in this dataset is $1,500$ mm, so the largest noise variance $\sigma$ corresponds to about $3\%$ of the subject height. Figure~\ref{subfig-1:dummy} thus illustrates the robustness of the proposed algorithm to inaccurate 2D landmark positions. Figure~\ref{subfig-2:dummy} shows the relative reconstruction error averaged across the testing subjects for each landmark, with and without noise.
The results on the BU$-$3DFE Face Database, FG3DCar Database and Flag Flapping in the Wind sequence when the data are distorted with Gaussian noise are shown in Figures~\ref{subfig-21:dummy}-\ref{subfig-42:dummy}. The average width of the faces in BU$-$3DFE is 140 mm, hence the noise variance corresponds to $5\%$ of the face width. The mean width of the car models in FG3DCar is $569$ pixels, corresponding to a variance of $2\%$. The mean width of the flags is $386$ mm, corresponding to a variance of $3\%$.
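A minimal sketch of the perturbation used in these robustness experiments, assuming the noise level is specified as a fraction of the object's width or height (helper names are ours; note the text calls $\sigma$ a variance, while this sketch treats the scale as a standard deviation):

```python
import numpy as np

def add_landmark_noise(landmarks2d, sigma_frac, object_size, rng):
    # Perturb 2D landmarks (P x 2) with i.i.d. Gaussian noise whose
    # scale is sigma_frac * object_size (e.g. 0.03 for a 3% level).
    noise = rng.normal(scale=sigma_frac * object_size,
                       size=landmarks2d.shape)
    return landmarks2d + noise

rng = np.random.default_rng(1)
pts = np.zeros((66, 2))
# ~3% of a 1,500 mm subject height, as in the motion-capture experiment.
noisy = add_landmark_noise(pts, 0.03, 1500.0, rng)
```

Expressing the noise level relative to object size is what makes the percentages comparable across bodies, faces, cars and flags.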
\begin{figure*}[htp]
\centering{
\subfloat[\label{subfig-1:dummy}]{%
\includegraphics[trim={0cm 0cm 1cm 2.5cm},width=2.3in]{noise_mocap.eps}
}
\hfill
\subfloat[\label{subfig-2:dummy}]{%
\includegraphics[trim={0cm 2cm -1cm 2.5cm},clip, width=4.5in]{joint_analysis_mocap.eps}
} \\
\subfloat[\label{subfig-21:dummy}]{%
\includegraphics[trim={0cm 0cm 1cm 2cm},width=2.3in]{noise_face.eps}
}
\hfill
\subfloat[\label{subfig-22:dummy}]{%
\includegraphics[trim={6cm 4cm 2cm 3.5cm},clip, width=4.5in]{joint_analysis_face.eps}
} \\
\subfloat[\label{subfig-31:dummy}]{%
\includegraphics[trim={0cm 0cm 1cm 2cm},width=2.3in]{noise_car.eps}
}
\hfill
\subfloat[\label{subfig-32:dummy}]{%
\includegraphics[trim={1.6in 0in 1.5in 0in},clip, width=1.05in]{joint_analysis_car_1.eps}
\includegraphics[trim={1.6in -0.5in 1.45in 0in},clip, width=1.05in]{joint_analysis_car_2.eps}
\includegraphics[trim={2in -0.1in 0.95in 0in},clip, width=1.1in]{joint_analysis_car_3.eps}
\includegraphics[trim={1.8in -0.7in 0.8in 1.5in},clip, width=1.2in]{joint_analysis_car_4.eps}
}\\
\subfloat[\label{subfig-41:dummy}]{%
\includegraphics[trim={0cm 0cm 1cm 1cm},width=2.3in]{noise_flag.eps}
}
\hfill
\subfloat[\label{subfig-42:dummy}]{%
\includegraphics[trim={8cm 5cm 3cm 4cm},clip, width=4.5in]{joint_analysis_flag.eps}
}
\caption{Additive random Gaussian noise is used to test the robustness of the proposed algorithm to inaccurate 2D detections on the CMU Motion Capture Database (a), BU$-$3DFE Database (c), FG3DCar Database (e) and Flag Flapping in the Wind Database (g). Reconstruction error is shown on the $y$ axis and $\sigma$ on the $x$ axis. Note the tiny reconstruction errors even for large values of $\sigma$. Sensitivity of reconstruction to each landmark as $\sigma$ increases on the CMU Motion Capture Database (b), BU$-$3DFE Database (d), FG3DCar Database (f) and Flag Flapping in the Wind Database (h). The radius of each circle indicates the relative reconstruction error for that landmark.}
\label{Figure:analysis_mocap}
}
\end{figure*}
\begin{figure*}[htp]
\centering{
\subfloat[\label{subfig-human:dummy}]{
\includegraphics[width=7.2in]{mocap_missing1_2.png}}\\
\subfloat[\label{subfig-face:dummy}]{
\includegraphics[width=7.2in]{face_missing2.png}}\\
\caption{Qualitative illustration of our algorithm applied to images of human bodies and faces with missing landmark points. (a) Missing landmarks are identified with a dotted line in the images. (b) Missing landmarks are marked in red in the images.}
\label{Figure:reconstruction_missing}
}
\end{figure*}
Finally, we tested the ability of the trained system to deal with missing data. Here, each training and validation sample had one randomly selected landmark point missing during training and testing. For the CMU Motion Capture database, the average 3D reconstruction errors for subjects 13, 14 and 15 are 0.0413, 0.0396 and 0.0307, respectively. Figure~\ref{subfig-human:dummy} shows qualitative results on three randomly selected images of humans in the wild. For the BU$-$3DFE Face Database, the mean reconstruction error is 0.006. Figure~\ref{subfig-face:dummy} shows qualitative results on three randomly selected face images in the wild.
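One simple way to emulate a missing landmark for a fixed-size network input is to zero out its coordinates per sample; the sketch below illustrates this (the zero-encoding of missing entries is our assumption, not stated in the text):

```python
import numpy as np

def drop_one_landmark(batch2d, rng):
    # Zero out one randomly selected landmark per sample, a common way
    # to encode "missing" inputs for a fixed-size network (batch: N x P x 2).
    out = batch2d.copy()
    idx = rng.integers(0, batch2d.shape[1], size=batch2d.shape[0])
    out[np.arange(batch2d.shape[0]), idx] = 0.0
    return out

rng = np.random.default_rng(2)
batch = np.ones((5, 31, 2))           # 5 toy samples, 31 landmarks each
masked = drop_one_landmark(batch, rng)
```

Applying the same corruption during training is what lets the network learn to reconstruct the full 3D shape despite an undetected landmark at test time.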
\section{Conclusions}
We have presented a very simple algorithm for the reconstruction of 3D shapes from 2D landmark points that yields extremely low reconstruction errors. Specifically, we proposed to use a feed-forward neural network to learn the mapping function between a set of 2D landmark points and an object's 3D shape. The exact same neural network is used to learn the mappings of rigid (e.g., cars), articulated (e.g., human bodies), non-rigid (e.g., faces), and highly-deformable objects (e.g., flags). The system performs extremely well in all cases and yields results as much as two-fold better than previous state-of-the-art algorithms. This neural network runs much faster than real-time, $>1000$ frames/s, and can be trained with small sample sets.
\section*{Acknowledgment}
This research was supported in part by the National Institutes of Health, grants R01-EY-020834 and R01-DC-014498, and by a Google Faculty Research Award to AMM.
\ifCLASSOPTIONcaptionsoff
\newpage
\fi
\bibliographystyle{IEEEtran}
\section{Introduction}
Category recognition is one of the fundamental tasks in computer vision, an area where neural networks have had great success. However, they usually require a relatively large amount of labeled training data for each class, which may limit their application in scenarios where training data are scarce. To address this issue, few-shot learning (FSL) has been considered, where the model has to generalize to novel classes with only a few instances. These novel classes are disjoint from the training classes, for which sufficient data are available during the training stage.
Few-shot learning methods usually mimic the few-shot task by utilizing sampled mini-batches called episodes during the training stage. In each episode, a set of $C$ classes is randomly selected from the training classes. For each of these classes, $K$ labeled instances are sampled to act as the support set, and a subset of the remainder serves as the query set \cite{Sung_2018_CVPR}. This setting is referred to as ``$C$-way $K$-shot''. By using episodic learning, FSL attempts to improve the model's generalization ability in tasks with few instances and to transfer the learned knowledge of the model to the few-shot learning problem for novel classes. This paradigm is referred to as meta-learning.
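The episode construction described above can be sketched as follows (a hypothetical helper, assuming integer class labels and disjoint support/query indices within each episode):

```python
import random
from collections import defaultdict

def sample_episode(labels, c_way, k_shot, n_query, rng):
    # Pick c_way classes; for each, k_shot support indices and
    # n_query query indices, disjoint within the episode.
    by_class = defaultdict(list)
    for idx, y in enumerate(labels):
        by_class[y].append(idx)
    classes = rng.sample(sorted(by_class), c_way)
    support, query = [], []
    for y in classes:
        picked = rng.sample(by_class[y], k_shot + n_query)
        support += [(i, y) for i in picked[:k_shot]]
        query += [(i, y) for i in picked[k_shot:]]
    return support, query

rng = random.Random(0)
labels = [i // 20 for i in range(200)]        # 10 toy classes, 20 samples each
support, query = sample_episode(labels, 5, 1, 16, rng)   # 5-way 1-shot
```

Repeating this sampling over many episodes is what exposes the model to a distribution of small classification tasks rather than a single large one.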
Recent meta-learning models can be roughly grouped into two categories. The first, known as optimization-based methods \cite{DBLP:journals/corr/abs-1807-05960,antoniou2018how}, aims to fine-tune the learned model on the target task. The second category focuses on learning a metric space shared between source tasks \cite{Sung_2018_CVPR,Proto}. This space is then used to solve the target task by nearest-neighbor search or by learning a simple linear classifier on top of the model \cite{cao2021concept}.
Although FSL models have achieved remarkable performance in terms of accuracy in recent years, they are black-box models. There has been growing concern about the use of black-box models in real-world applications, and FSL is no exception. In general, interpretability may be achieved by applying post-hoc analysis methods to black-box models that are already trained, or alternatively by creating models that are interpretable by design. In the area of FSL, the mainstream approaches have been post-hoc \cite{Kang_2021_ICCV,8794911} and, to the best of our knowledge, research on FSL methods that are inherently interpretable has rarely been conducted.
Inspired by \cite{Proto}, Cao et al. recently proposed COMET \cite{cao2021concept}, a method for FSL along human-interpretable concept dimensions. When humans try to learn new bird species, they are already equipped with structured, reusable concepts such as wing, beak, legs and feathers that help them efficiently adapt to the new task and also explain their decisions in terms of such concepts.
COMET learns an embedding space for each concept by attending only to the image regions related to that concept. It learns one metric space per concept, and the final decision is based on averaging the decisions of the different spaces.
\begin{figure*}[ht]
\centering
\includegraphics[width=0.9\textwidth]{diagram_design.pdf}
\caption{ Pipeline for performing FSL with InCoPoN.}
\end{figure*}
Although the aforementioned method provides interpretable decisions, the concept annotations must be given as prior knowledge not only for the instances of the base tasks, but also for the instances of the target tasks, including the test samples. However, such concept annotations may not be readily available at test time.
While assuming the availability of annotations and labels for training samples, including the few training samples of novel classes, is typical in supervised learning, we argue that extending such an assumption, even partially, to test samples may limit the applicability of the resulting model in many real-world scenarios.
Besides, COMET \cite{cao2021concept} does not properly handle commonalities and differences across concepts. Its authors examined two extreme cases: training individual networks for each concept, and training a single shared network. However, the former ignores the commonalities among concepts, and the latter ignores the differences between concepts by forcing the embeddings to be picked from the same feature map.
Another drawback of COMET \cite{cao2021concept} is that, at the aggregation step, all concepts have equal votes, even those that are weakly present or completely absent in the input.
In this paper, we present Interpretable Concept-based Prototypical Networks (InCoPoN) to perform FSL based on a set of human-interpretable concepts. InCoPoN learns a set of concept-specific metric spaces, extracts concept-specific embeddings for query samples, and aggregates the resulting concept-specific decisions to make a final decision.
The closest work to the proposed method is COMET \cite{cao2021concept}; the proposed method addresses the three drawbacks of COMET described above.
The contributions of this paper are as follows:
1) We propose an inherently interpretable method that, unlike COMET \cite{cao2021concept}, does not need concept annotations for the test samples; instead, it learns to infer them from the test samples themselves.
2) A multi-tasking approach that includes a shared backbone network for capturing the commonalities among concepts, followed by individual heads to capture the differences between concepts.
3) For aggregating concept embeddings, we propose an adaptive approach that, for each sample, emphasizes the concepts that are present in that sample.
Through experiments, we show that the proposed interpretable method performs on a par with several previously state-of-the-art black-box FSL methods on the fine-grained bird classification task using the CUB dataset \cite{WahCUB_200_2011}, a widely used yet challenging dataset due to the presence of highly similar classes. Moreover, through a detailed ablation study, we demonstrate the effectiveness of the second and third contributions over baseline methods.
\section{Proposed Method}
\label{proposed}
Given a labeled dataset $\mathcal{B} = {\{(x_i,y_i)\}}_{i=1}^{N_b}$ for base classes $\mathcal{Y}_b$, and a labeled support set $\mathcal{S}={\{(x_i,y_i)\}}_{i=1}^{N_s}$ for novel classes ${\mathcal{Y}_n}$ where ${\mathcal{Y}_b} \cap {\mathcal{Y}_n} = {\emptyset}$, FSL aims to predict labels of a query set $\mathcal{Q} = {\{(x_i,y_i)\}}_{i=1}^{N_q}$ which also belongs to ${\mathcal{Y}_n}$. Samples in $\mathcal{B}$ and $\mathcal{S}$ are annotated for $N$ common high-level concepts $\mathcal{C} = {\{c^{(k)}\}}_{k=1}^{N}$. Part-based annotations are associated with, for example, meaningful body parts of birds such as the beak, belly and wings. For each concept present in an image, only its location is required; a bounding box or pixel-based segmentation is not needed.
The proposed method consists of three main components: 1) concept learners that provide concept-specific feature maps; 2) concept detectors that predict the location of concepts in the corresponding feature maps, outputting for each concept a probability score for its presence along with the corresponding concept embedding vector; and 3) an aggregation module that makes the final decision. These components are described in the following.
\subsection{Concept learners with shared layers and concept-specific heads}
\label{concept_learners}
For each concept $c^{(j)}$, one embedding network $f_{\theta}^{(j)}: \mathcal{X}\rightarrow\mathcal{E}_{j}$, is learned where $\mathcal{X}$ is the image space and $\mathcal{E}_{j}$ is the embedding space for concept $j$.
Intuitively, each embedding space $\mathcal{E}_{j}$ is learned to cluster samples around the prototype of their corresponding class based only on the concept $c^{(j)}$. This can be achieved by masking out non-concept regions of the input samples to ensure the concept learner sees only the concept-related parts during training. Alternatively, the entire image can be used without masking to obtain an intermediate feature map $M \in \mathbb{R}^{h\times w \times c}$, from which a feature vector $e_{x_i}^{(j)}$ corresponding to the center location of the concept in the input image is picked. Here, $h$, $w$ and $c$ denote the height, width and number of channels of the feature map.
Note that this is possible because convolutional filters preserve spatial locality.
Following \cite{cao2021concept}, we use the second approach. $e_{x_i}^{(j)}$ is a vector of length $c$ and represents the concept embedding for sample $x_i$.
In \cite{cao2021concept}, two different designs were considered for the concept learners. The first learns a completely separate network $f^{(j)}$ for each concept, which ignores the commonalities among concepts. In the second design, a single shared network $f$ is trained for all concepts, ignoring the differences between concepts. To capture both commonalities and differences, we design the concept learners to share weights in the early layers while having their own concept-specific heads. Therefore, the concept learner $f^{(j)}$ is substituted with $g^{(j)}\circ h$, where $h$ is the network with shared parameters and $g^{(j)}$ is the network head for concept $j$.
The concept learners are trained on the images of the base classes $\mathcal{Y}_b$ using episodic learning to mimic the few-shot classification setting. Using each concept learner $j$, one concept-specific prototype $P_y^{(j)}$ is calculated for class $y$ by averaging the concept embeddings of the support set:
\begin{equation}
P_y^{(j)} = \frac{1}{|S_y|} \sum \limits_{x_i \in {S_y}} e_{x_i}^{(j)}
\end{equation}
where $e_{x_i}^{(j)}$ is the concept embedding feature vector picked from $g^{(j)}\circ h(x_i)$ and $|S_y|$ is the number of images in the support set of class $y$.
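Eq. (1) is a per-concept, per-class mean over the support embeddings. A small NumPy sketch with toy 2-D embeddings (the dict-based data layout is our illustration, not the authors' implementation):

```python
import numpy as np

def concept_prototypes(support_emb, support_y):
    # support_emb: dict concept -> array (N_s, c) of concept embeddings;
    # support_y: length-N_s array of class labels.
    # Returns concept -> class -> prototype, the per-class mean of Eq. (1).
    protos = {}
    for j, emb in support_emb.items():
        protos[j] = {y: emb[support_y == y].mean(axis=0)
                     for y in np.unique(support_y)}
    return protos

emb = {"beak": np.array([[0.0, 0.0], [2.0, 2.0], [4.0, 4.0]])}
y = np.array([0, 0, 1])
P = concept_prototypes(emb, y)
```

Each class is thus represented not by one prototype but by one prototype per concept, which is what later allows concept-wise distances and explanations.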
For a query image $x_q$ in an arbitrary training episode, the concept embeddings are extracted using the concept learners. Then, by calculating an aggregated distance to the concept-specific prototypes of different classes, the class of $x_q$ is determined. Specifically, to calculate the aggregated distance from the concept-specific prototypes of class $y$, the distance of each concept embedding $e_{x_q}^{(j)}$ from the concept prototype $P_y^{(j)}$ is computed. Finally, the distances across all concepts are summed to calculate the probability of assigning $x_q$ to class $y$ as:
\begin{equation}
p(y|x_q) =
\frac{exp(-\sum_{j \in \mathcal{C}} d(e_{x_q}^{(j)},P_y^{(j)}))}
{\sum_{y'}exp(-\sum_{j \in \mathcal{C}} d(e_{x_q}^{(j)},P_{y'}^{(j)}))}
\end{equation}
$h$ and the $g^{(j)}$ are trained using the negative log-likelihood $L = - \log p(y_{x_q}|x_q)$ of the true class in an episodic training setting, using the images and concept locations of the base classes.
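The classification rule of Eq. (2) can be sketched as follows, assuming squared Euclidean distances for $d(\cdot,\cdot)$ (the experiments state that Euclidean distance is employed; the sketch and its data layout are ours):

```python
import numpy as np

def class_probabilities(query_emb, protos, classes):
    # Summed per-concept squared Euclidean distances to each class's
    # concept prototypes, turned into probabilities by a softmax (Eq. 2).
    d = np.array([sum(np.sum((query_emb[j] - protos[j][y]) ** 2)
                      for j in query_emb)
                  for y in classes])
    e = np.exp(-d + d.min())          # shift logits for numerical stability
    return e / e.sum()

protos = {"beak": {0: np.array([0.0, 0.0]), 1: np.array([4.0, 4.0])}}
q = {"beak": np.array([0.1, 0.0])}
p = class_probabilities(q, protos, [0, 1])
```

During training, the negative log of the true class's probability gives the loss $L$ used to update $h$ and the concept heads $g^{(j)}$.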
\subsection{Predicting concept locations for query samples}
Similar to the simulated episodes used for training the concept learners, we can perform few-shot classification in the target space. However, since the concept locations are not available for query images, we present below an approach to predict them.
To detect the location of the concept feature vector in the last feature map of each concept learner, one binary classifier is trained on top of each learned concept embedding network using the concept annotations of the base classes. Specifically, on top of the embedding network of concept $j$, a binary classifier $c^{(j)}$ is trained with a binary cross-entropy loss to distinguish $e_{x_i}^{(j)}$ from the other feature vectors in the last feature map of the concept learner. To train the classifier, the feature vectors corresponding to the center of the concept in the input images are presented as positive instances, while the feature vectors at the other spatial locations of the final feature map are provided as negative instances.
To detect the feature vector of concept $j$ for an arbitrary image $x_a$, the image is fed to the network $g^{(j)}\circ h$; in the final feature map $M$, each feature vector along the channel dimension is passed to the binary classifier $c^{(j)}$, and the feature vector with the highest probability is selected as the concept embedding $e_{x_a}^{(j)}$.
Since the number of negative instances is considerably larger than that of the positive ones, the classes are weighted in the cross-entropy loss to alleviate the adverse effect of the imbalanced data.
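The detection step, scanning every spatial position of the final feature map with the binary classifier and keeping the highest-scoring vector, can be sketched as follows (the scoring function below is a toy stand-in for the trained classifier $c^{(j)}$):

```python
import numpy as np

def detect_concept(feature_map, score_fn):
    # feature_map: (h, w, c). score_fn maps a (c,) vector to the concept
    # probability. Returns the highest-scoring vector and its probability.
    h, w, c = feature_map.shape
    flat = feature_map.reshape(-1, c)
    probs = np.array([score_fn(v) for v in flat])
    best = probs.argmax()
    return flat[best], probs[best]

# Toy scorer: "concept present" where the first channel is large.
M = np.zeros((4, 4, 3))
M[2, 1] = [5.0, 1.0, 1.0]
vec, p = detect_concept(M, lambda v: 1.0 / (1.0 + np.exp(-v[0])))
```

Since one positive location competes against $h \times w - 1$ negatives, weighting the positive class in the loss (e.g., by the negative-to-positive ratio) is a standard remedy for the imbalance described above.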
\subsection{Aggregation module}
\label{sub_setting1}
To perform few-shot recognition in the target space, concept-specific prototypes for each class are computed using the procedure described in Section \ref{concept_learners}, and for each query image $x_q$, the concept embeddings $\{e_{x_q}^{(j)}\}_{j \in \mathcal{C}}$ are detected from the final feature maps of the concept learners using the trained concept-specific classifiers described in the previous subsection.
Finally, the class of the query image $x_q$ is determined using Eq.~(\ref{eq_classification}) by measuring the accumulated distance of its concept embeddings to the concept-specific prototypes of each class. This distance is denoted by ${D_y}({x_q})$ and formulated in Eq.~(\ref{eq_dis}).
\begin{equation}
\label{eq_classification}
\hat{y} = \argmax_{y \in \mathcal{Y}_n}
\frac{exp(-{{D_y}({x_q})})}
{\sum_{y'}exp(-{D_{y'}}({x_q}))}
\end{equation}
\begin{equation}
\label{eq_dis}
{D_y}({x_q}) = \frac{\sum_{j \in \mathcal{C}} {w_{x_q}^{(j)}}{ d(e_{x_q}^{(j)},P_y^{(j)})}}{\sum_{j \in \mathcal{C}} {w_{x_q}^{(j)}}}
\end{equation}
where $w_{x_q}^{(j)}$ is the probability score obtained from the binary classifier $c^{(j)}$ for the selected feature vector $e_{x_q}^{(j)}$. Therefore, concept embeddings with higher probabilities have a higher impact on the final classification decision, while concepts that are not present in the query sample have a lower impact.
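A sketch of the weighted aggregation in Eq. (4), using the detector's probability score as the weight so that well-detected concepts count more, consistent with the stated intent (data layout and names are ours):

```python
def aggregated_distance(dists, weights):
    # Eq. (4): weighted average of the per-concept distances for one class.
    # dists, weights: concept -> float; weights from the concept detectors.
    num = sum(weights[j] * dists[j] for j in dists)
    den = sum(weights[j] for j in dists)
    return num / den

# A weakly detected concept (weight 0.1) barely moves the aggregate,
# so an absent "tail" cannot dominate the decision.
D = aggregated_distance({"beak": 1.0, "tail": 9.0},
                        {"beak": 0.9, "tail": 0.1})
```

Normalizing by the sum of weights keeps $D_y$ on the same scale regardless of how many concepts are confidently detected in a given query.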
\section{Experiments and Evaluations}
\label{eval}
\subsection{Dataset and experimental settings}
We evaluate InCoPoN on the Caltech-UCSD Birds-200-2011 (CUB) dataset \cite{WahCUB_200_2011}. This is a fine-grained bird classification dataset consisting of 11,788 images from 200 categories, with 15 annotated part/concept locations.
We follow the protocol provided in \cite{chen2018a} for splitting the dataset. The models are evaluated in the widely used 5-way setting. Specifically, in each episode, 5 classes are sampled randomly, and $K$ samples of each class are provided as the support set to form the $K$-shot classification task. The query set contains 16 samples from the classes of the support set. The best model is chosen based on the accuracy on the validation set. For testing, 600 episodes are sampled randomly from the novel classes, and the mean accuracy and standard deviation over these 600 episodes are reported.
The Conv-4 backbone network \cite{10.5555/3045118.3045167}, widely used in FSL, with an input size of $84 \times 84$ is adopted for the concept learners. The first three blocks of this network are shared among the different concept learners, and the last block serves as the head specific to each concept. Moreover, each concept-specific binary classifier is a two-layer MLP with 64 neurons in the hidden layer. Finally, the Euclidean distance is employed to measure the distance between concept embeddings and prototypes.
Similar to \cite{cao2021concept}, standard data augmentation including random cropping, rotation, horizontal flipping and color jittering is performed. Finally, the concept learners are trained using the Adam optimizer with a learning rate of $10^{-3}$.
\subsection{Performance comparison}
We compare InCoPoN with six previously state-of-the-art FSL methods, as shown in Table \ref{tab_comparison}. It can be seen that in both the 5-way 5-shot and 5-way 1-shot settings, InCoPoN achieves results on a par with the black-box FSL models while providing interpretability through learning to learn along human-friendly concepts.
In the 5-way 5-shot setting, our method achieves an average accuracy of 78.6\%, which outperforms the previously state-of-the-art methods ProtoNet \cite{Proto}, MAML \cite{pmlr-v70-finn17a} and MatchingNet \cite{NIPS2016_90e13578}, and is only slightly behind MetaOptNet \cite{Lee_2019_CVPR} and Baseline++ \cite{chen2018a} (by 1\% and 1.6\%, respectively). In the 5-way 1-shot setting, our method records an average accuracy of 57.9\%. The top performer is MetaOptNet with an average accuracy of 62.2\%. The somewhat less competitive results in the 1-shot setting could be attributed to the fact that not all concepts are annotated in every image of the CUB dataset, and it is more likely that a specific concept has no representation for a class in the 1-shot setting, since each class is represented by just one support image. In that case, the global average pooling of the final feature map is used as the feature for the missing concept.
COMET \cite{cao2021concept} achieves 85.3\% and 67.9\% in the 5-shot and 1-shot settings, respectively, but it should be noted that, because it benefits from additional concept annotations for the test samples, a direct comparison may not be fair.
\begin{table}
\caption{Results of 5-way 5-shot and 5-way 1-shot on CUB dataset. Average accuracy and standard deviation over 600 randomly sampled episodes are reported.}
\label{tab_comparison}
\centering
\begin{tabular}{lll}
\hline
Method & 5-way 5-shot & 5-way 1-shot \\
\hline
Baseline++ \cite{chen2018a} & 80.2 $\pm$ 0.6 & 61.4 $\pm$ 1.0 \\
MatchingNet \cite{NIPS2016_90e13578} & 75.9 $\pm$ 0.6 & 61.0 $\pm$ 0.9 \\
MAML \cite{pmlr-v70-finn17a} & 74.4 $\pm$ 0.8 & 52.8 $\pm$ 1.0 \\
RelationNet \cite{Sung_2018_CVPR} & 78.6 $\pm$ 0.7 & 62.1 $\pm$ 1.0 \\
MetaOptNet \cite{Lee_2019_CVPR} & 79.6 $\pm$ 0.6 & 62.2 $\pm$ 1.0 \\
ProtoNet \cite{Proto} & 76.1 $\pm$ 0.7 & 57.1 $\pm$ 1.0 \\
\hline
InCoPoN & 78.6 $\pm$ 0.7 & 57.9 $\pm$ 0.9 \\
\hline
\end{tabular}
\end{table}
\subsection{Effect of using probability scores as weights}
In this section, we compare the proposed method with a baseline model that assigns equal weights to all concept embeddings. The results are shown in Table \ref{tab_effect}.
It can be seen that the proposed aggregation module improves the performance in both the 5-way 5-shot and 5-way 1-shot settings.
\begin{table}
\caption{Comparison between the proposed method and the baseline model with equal weights for all concepts.}
\label{tab_effect}
\centering
\begin{adjustbox}{width={0.45\textwidth},totalheight={\textheight},keepaspectratio}
\begin{tabular}{lll}
\hline
Method & 5-way 5-shot & 5-way 1-shot \\
\hline
InCoPoN with equal weights & 77.2 $\pm$ 0.7 & 57.6 $\pm$ 0.9 \\
InCoPoN with probability scores as weights & 78.6 $\pm$ 0.7 & 57.9 $\pm$ 0.9 \\
\hline
\end{tabular}
\end{adjustbox}
\end{table}
\subsection{The impact of backbone network design}
To evaluate our new design of the backbone network, a comparison is performed between COMET with its original designs and COMET with our new design, as shown in Table \ref{tab_design}.
While COMET with its original designs achieves the same performance of 85.3\% average accuracy in the 5-way 5-shot setting, the new design improves the performance by approximately 2\%. The improvement is even larger in the 5-way 1-shot setting, where it achieves a 3.97\% higher average accuracy.
\begin{table} [h]
\caption{Comparison between COMET with our design and the original designs of backbone network.}
\label{tab_design}
\centering
\begin{adjustbox}{width={0.45\textwidth},totalheight={\textheight},keepaspectratio}
\begin{tabular}{lll}
\hline
Method & 5-way 5-shot & 5-way 1-shot \\
\hline
COMET with shared network & 85.3 $\pm$ 0.5 & 67.9 $\pm$ 0.9 \\
COMET with distinct networks & 85.3 $\pm$ 0.5 & 67.9 $\pm$ 0.9 \\
\hline
COMET ours & 87.25 $\pm$ 0.46 & 71.87 $\pm$ 0.92 \\
\hline
\end{tabular}
\end{adjustbox}
\end{table}
\section{Conclusion}
\label{conc}
In this paper, an interpretable few-shot learning model was proposed. Although it decomposes the decision space into multiple metric spaces associated with human-interpretable concepts, it does not require concept annotations for test samples. The process of learning multiple metric spaces is efficiently modeled as a multi-tasking problem, and the results are aggregated by considering the degree to which each concept is present in the input. Finally, the proposed interpretable method achieved average accuracy competitive with six previously state-of-the-art black-box FSL methods.
\bibliographystyle{IEEEbib}
\section{Introduction}\label{sec:one}
One of the most important scientific features of planetary microlensing is that it is
sensitive to cold planets lying at around or beyond the snow line. Detecting these
planets is important because, according to the core-accretion theory of the planet
formation \citep{Laughlin2004}, there are abundant solid grains at around the snow line
to be accreted into planetesimals that will eventually evolve into gas giants by accreting
gas residing in the surrounding disk. It is, in general, difficult to detect these cold
planets using other major planet detection methods such as the radial-velocity and
transit methods because of the long orbital periods of the planets. Hence, the microlensing
method is essential for the demographic studies that encompass planets distributed throughout
a wide region of planetary systems \citep{Gaudi2012}.
For the complete demographics of planets, it is important to accurately estimate the
planet detection efficiency. The efficiency of microlensing planets is calculated as
the ratio of the number of detected planets to the total number of lensing events. If
a fraction of planets are missed despite their signals being above a detection threshold,
then their frequency would be underestimated, and this would lead to an erroneous
evaluation of planet properties.
Strong planetary signals appearing in lensing light curves covered with high cadences
and good photometric precision, for example, OGLE-2018-BLG-1269 \citep{Jung2020a} and
KMT-2019-BLG-0842 \citep{Jung2020b}, can be readily identified. However, identifying planetary
signals for a subset of lensing events is difficult for various reasons, such as very
short durations of signals, low photometric precision due to the faintness of the source or
the occurrence of planetary signals during low lensing magnifications, weakness of signals
due to their non-caustic-crossing nature, and relatively sparse coverage of signals for
events detected in low-cadence fields and/or in the wings of the observing season. In
some cases of events produced by high-mass planets lying in the vicinity of the Einstein
ring, the planet-induced caustics have a resonant form. In this case, the planetary signals
can significantly deviate from the typical form of a short-term anomaly \citep{GouldLoeb1992},
and this also makes it difficult to immediately identify the planetary signals. Therefore,
finding unidentified planetary signals in the data of lensing surveys is important for the
accurate estimation of planet frequency.
Searches for unidentified microlensing planets in the data collected by the Korea Microlensing
Telescope Network \citep[KMTNet:][]{Kim2016} survey have been carried out in two major channels.
The first channel is a systematic investigation of the residuals in lensing light curves from
the single-lens single-source (1L1S) fits. This approach has been applied to the KMTNet
prime-field data obtained in the 2018 and 2019 seasons, leading to the discoveries of dozens
of planets: 1 planet (OGLE-2019-BLG-1053Lb) published in \citet{Zang2021}, 6 planets
(OGLE-2018-BLG-0977Lb, OGLE-2018-BLG-0506Lb, OGLE-2018-BLG-0516Lb, OGLE-2019-BLG-1492Lb,
KMT-2019-BLG-0253Lb, KMT-2019-BLG-0953Lb) published in \citet{Hwang2022}, 1 planet
(OGLE-2018-BLG-0383Lb) published in \citet{Wang2022}, 3 planets (KMT-2019-BLG-1042Lb,
KMT-2019-BLG-1552Lb, KMT-2019-BLG-2974Lb) published in \citet{Zang2022}, and 8 planets
(OGLE-2018-BLG-1126Lb, KMT-2018-BLG-2004Lb, OGLE-2018-BLG-1647Lb, OGLE-2018-BLG-1367Lb,
OGLE-2018-BLG-1544Lb, OGLE-2018-BLG-0932Lb, OGLE-2018-BLG-1212Lb, and KMT-2018-BLG-2718Lb)
published in \citet{Gould2022}. This approach is being expanded to the data obtained from
the other fields and to those acquired in other seasons, and 6 planets (KMT-2018-BLG-0030Lb,
KMT-2018-BLG-0087Lb, KMT-2018-BLG-0247Lb, OGLE-2018-BLG-0298Lb, KMT-2018-BLG-2602Lb,
OGLE-2018-BLG-1119Lb) were recently reported from the investigation of the 2018
sub-prime-field data \citep{Jung2022}.
The second channel for exhuming missing planets is the visual inspection of planetary signals.
\citet{Han2020} found 4 microlensing planets (KMT-2016-BLG-2364Lb, KMT-2016-BLG-2397Lb,
OGLE-2017-BLG-0604Lb, and OGLE-2017-BLG-1375Lb) from visually inspecting faint-source events
found during the 2016 and 2017 seasons. \citet{Han2021a} inspected anomalies with no
caustic-crossing features and identified 3 planets (KMT-2018-BLG-1976Lb, KMT-2018-BLG-1996Lb,
and OGLE-2019-BLG-0954Lb). \citet{Han2021b} additionally identified 3 planets (KMT-2017-BLG-2509Lb,
OGLE-2017-BLG-1099Lb, and OGLE-2019-BLG-0299Lb) detected through the resonant-caustic channel.
\citet{Han2022} reexamined high-magnification microlensing events in the 2018 season data
and found 1 planet (KMT-2018-BLG-1988Lb) with a very short-duration planetary signal.
This independent visual search for planetary signals complements the AnomalyFinder search
and provides a verification sample for evaluating the effectiveness of the AnomalyFinder
algorithm. The visual search requires modeling a large number of anomalous events to isolate
the ones of interest. The output of the AnomalyFinder is then compared to this modeling to
verify that the anomalies have been correctly classified and to vet ambiguous events, such
as those that suffer from various types of degeneracy. In addition, planets discovered by-eye
but not recovered by the AnomalyFinder search provide important insight into false negatives
in that search.
In this work, we report the discovery of a planet (KMT-2017-BLG-0673Lb) and a planet candidate
(KMT-2019-BLG-0414Lb). These were both identified from the visual inspection of the KMTNet
data obtained in the peripheral fields, toward which the observational cadences are
substantially lower than those of the prime fields, and thus the planetary signals were only
sparsely covered. KMT-2019-BLG-0414Lb was also identified by the AnomalyFinder search. However,
while the anomaly in KMT-2017-BLG-0673 was found by the algorithm, it was rejected by the
operator because the peak data appeared to be ``noisy'', which, in fact, was due to the
planetary signal. This event demonstrates the importance and complementarity of such by-eye
searches.
To present the analyses of the planetary events, we organize the paper as follows. In
Section~\ref{sec:two}, we describe the observations of the planetary lensing events and
the procedure of data reduction. In Section~\ref{sec:three}, we describe the detailed
features of the observed anomalies, and present analyses of the lensing events conducted
to explain the anomalies. In particular, we show that there is an alternative,
non-planetary solution for KMT-2019-BLG-0414, in which the anomaly is explained by an
orbiting companion of the source (xallarap) and which is disfavored by only $\Delta\chi^2=4.2$.
We explain the procedure of modeling and present the lensing parameters constrained by the
modeling. In Section~\ref{sec:four}, we specify the source stars of the events and estimate
angular Einstein radii. In Section~\ref{sec:five}, we estimate the physical lens parameters
by conducting Bayesian analyses of the events. In Section~\ref{sec:six}, we discuss how
future adaptive optics (AO) observations on 30~m class telescopes can resolve the
planet-xallarap degeneracy for KMT-2019-BLG-0414. We summarize the results found from
the analyses in Section~\ref{sec:seven}.
\begin{figure}[t]
\includegraphics[width=\columnwidth]{f1.eps}
\caption{
KMTNet fields and cadences. The locations of the events KMT-2017-BLG-0673 and
KMT-2019-BLG-0414 are indicated by cross marks.
}
\label{fig:one}
\end{figure}
\section{Observations}\label{sec:two}
Observations of the KMTNet survey are carried out at different cadences depending on the
field. Six prime fields (KMT01, KMT02, KMT03, KMT41, KMT42, and KMT43) are visited with
a 0.5~hr cadence. Each pair of fields, KMT01-KMT41, KMT02-KMT42, and KMT03-KMT43, overlaps but
is shifted to cover the gaps between chips in one of the two fields (about 15\% of the total
region), and thus most of the prime-field region is covered with a 0.25~hr cadence.
For the other fields, the cadences are lower than those of the prime fields: 1.0~hr
for 7 fields (KMT04, KMT14, KMT15, KMT17, KMT18, KMT19, and KMT22), 2.5~hr for 11 fields
(KMT11, KMT16, KMT20, KMT21, KMT31, KMT32, KMT33, KMT34, KMT35, KMT37, and KMT38), and 5~hr
for 3 fields (KMT12, KMT13, and KMT36). Figure~\ref{fig:one} shows the fields covered
by the KMTNet survey and the observational cadences of the individual fields.
The two planetary events KMT-2017-BLG-0673 and KMT-2019-BLG-0414 were found in the KMT13
and KMT38 fields, toward which observations were conducted with 5~hr and 2.5~hr cadences,
respectively. The equatorial coordinates of the individual events are (RA, DEC)$=$(17:39:01.69,
-33:48:28.91), which correspond to the Galactic coordinates $(l, b)=(-4^\circ\hskip-2pt .882,
-1^\circ\hskip-2pt.373)$, for KMT-2017-BLG-0673, and (RA, DEC)$=$(17:55:24.70, -21:53:43.94),
which correspond to $(l, b)=(7^\circ\hskip-2pt .179, 1^\circ\hskip-2pt .709)$, for
KMT-2019-BLG-0414. In Figure~\ref{fig:one}, we mark the positions of the two lensing events.
Both events were found solely by the KMTNet survey, and there are no data from the other
currently working surveys of the Optical Gravitational Lensing Experiment
\citep[OGLE:][]{Udalski2015} and the Microlensing Observations in Astrophysics survey
\citep[MOA:][]{Bond2001}. The event KMT-2017-BLG-0673 occurred before the development
of the KMTNet AlertFinder algorithm \citep{Kim2018b}, which began operation in the 2018 season,
and thus it was identified from the post-season investigation of the data using the KMTNet
EventFinder algorithm \citep{Kim2018a}. On the other hand, the event KMT-2019-BLG-0414
was found using the AlertFinder system in the early stage of the event on 2019 April 16
(HJD$^\prime \sim 8589.5$), when the apparent source flux was brighter than the baseline
by $\sim 0.15$~mag.
Observations of both events were conducted with the three KMTNet telescopes, which are located
at three sites in the Southern Hemisphere: the Cerro Tololo Inter-American Observatory in Chile
(KMTC), the South African Astronomical Observatory in South Africa (KMTS), and the Siding Spring
Observatory in Australia (KMTA). Each telescope has a 1.6~m aperture and is equipped with a
camera yielding a 4~deg$^2$ field of view. Images from the survey
were obtained primarily in the $I$ band, and about 9\% of images were acquired in the $V$
band for the source color measurement.
Reductions of the images and photometry of the events were carried out using the automated
pipeline of the KMTNet survey developed by \citet{Albrow2009}. For a subset of the KMTC data
sets, additional photometry was conducted using the pyDIA code \citep{Albrow2017} to measure the
source colors of the events. The detailed procedure of the source color measurement is
discussed in Sect.~\ref{sec:four}. For the data used in the analysis, we readjusted the error
bars estimated from the photometry pipeline using the method of \citet{Yee2012}, so that the
error bars are consistent with the scatter of the data and the $\chi^2$ per degree of freedom
is unity for each data set.
\section{Light curve analyses}\label{sec:three}
By inspecting the data of the lensing events detected in the peripheral KMTNet fields covered
with observational cadences $\geq 1$~hr during the 2017--2019 seasons, we found that
KMT-2017-BLG-0673 and KMT-2019-BLG-0414 exhibit subtle deviations from the 1L1S models.
In Figure~\ref{fig:two}, we present the light curves of the two events. At a glance,
the light curves of both events appear to be well described by the 1L1S models, which are drawn
over the data points. However, a thorough inspection reveals that there exist weak short-term
anomalies. We mark the regions of the anomalies for the individual events with grey boxes
drawn over the light curves, and the enlarged views of the anomaly regions are shown in
Figure~\ref{fig:three} for KMT-2017-BLG-0673 and in Figure~\ref{fig:four} for KMT-2019-BLG-0414.
In the following subsections, we present analyses of both events conducted to explain the observed
anomalies in the lensing light curves. As will be discussed, the anomalies are well described by a
binary-lens (2L1S) model, in which the mass ratio between the lens components ($M_1$ and $M_2$) is
very small.
\begin{figure}[t]
\includegraphics[width=\columnwidth]{f2.eps}
\caption{
Microlensing light curves of KMT-2017-BLG-0673 and KMT-2019-BLG-0414. The curve drawn over
the data points of each light curve is a 1L1S model, and the box indicates the region of the
anomaly. The enlarged views of the anomaly regions are shown in Fig.~\ref{fig:three} for
KMT-2017-BLG-0673 and in Fig.~\ref{fig:four} for KMT-2019-BLG-0414. Colors of data points
are set to match those of the telescopes marked in the legend.
}
\label{fig:two}
\end{figure}
The 2L1S modeling procedure commonly applied to both events is as follows. In the modeling of
each event, we search for a lensing solution, that is, the set of lensing parameters that best
describes the observed lensing light curve. Under the approximation that the relative lens-source motion
is rectilinear, a 2L1S event light curve is described by 7 basic parameters. The first three of
these parameters $(t_0, u_0, t_{\rm E})$ characterize the encounter between the lens and source, and
the individual parameters denote the time of the closest lens-source approach, the separation
at that time (impact parameter) normalized to the angular Einstein radius $\theta_{\rm E}$, and the
event time scale, respectively. The event time scale is defined as the time for the source to
cross the angular Einstein radius, that is, $t_{\rm E} = \theta_{\rm E}/\mu$, where $\mu$
represents the relative lens-source proper motion. The next three parameters $(s, q, \alpha)$
define the binary lens, and the first two parameters represent the projected separation (scaled to
$\theta_{\rm E}$) and mass ratio between $M_1$ and $M_2$, respectively, and the third parameter indicates
the source trajectory angle defined as the angle between the relative source motion and
the binary axis of the lens. The last parameter $\rho$ (normalized source radius), which is
defined as the ratio of the angular source radius $\theta_*$ to $\theta_{\rm E}$, characterizes the
deformation of a lensing light curve by finite-source effects, which occur during the crossing of
a source over the caustic formed by a binary lens.
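For reference, the roles of the three single-lens parameters $(t_0, u_0, t_{\rm E})$ can be illustrated with the standard point-source point-lens (Paczy\'nski) magnification, $A(u)=(u^2+2)/[u(u^2+4)^{1/2}]$ with $u(t)=(u_0^2+\tau^2)^{1/2}$ and $\tau=(t-t_0)/t_{\rm E}$. The short Python sketch below is illustrative only and is not part of the analysis pipeline; the parameter values are loosely based on the 1L1S fit of KMT-2017-BLG-0673, not exact fit results.

```python
import math

def magnification_1l1s(t, t0, u0, tE):
    """Point-source point-lens (Paczynski) magnification A(u)."""
    tau = (t - t0) / tE            # time offset in units of tE
    u = math.hypot(u0, tau)        # lens-source separation in units of theta_E
    return (u * u + 2.0) / (u * math.sqrt(u * u + 4.0))

# Illustrative values (loosely based on KMT-2017-BLG-0673's 1L1S parameters)
t0, u0, tE = 7973.3, 0.12, 22.0
A_peak = magnification_1l1s(t0, t0, u0, tE)          # maximum, set by u0 alone
A_base = magnification_1l1s(t0 + 300.0, t0, u0, tE)  # tends to 1 far from peak
```

At closest approach the magnification depends only on $u_0$, while $t_{\rm E}$ sets how quickly the curve falls back to baseline.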
The searches for the best-fit lensing parameters were carried out in two steps. In the first
step, we divided the lensing parameters into two groups: the binary parameters $(s, q)$ in
the first group were searched for via a grid approach with multiple initial values of $\alpha$
evenly distributed over the range $[0, 2\pi]$, and the other parameters were found via a downhill
approach using a Markov chain Monte Carlo (MCMC) algorithm. We then constructed $\chi^2$ maps on
the $\log s$--$\log q$--$\alpha$ planes and identified local solutions on these maps. In
the second step, we refined each local solution by letting all parameters, including
$s$ and $q$, vary. When the degeneracies among different local solutions are severe, we present
multiple solutions; otherwise, we present a single global solution.
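The control flow of the grid stage can be sketched as follows. Here \texttt{chi2\_2l1s} is a purely hypothetical stand-in for the $\chi^2$ of a full 2L1S light-curve fit (a real implementation would compute binary-lens magnifications, e.g., by inverse ray shooting, and compare them with the photometry); only the structure of the search is meant to be illustrative.

```python
import itertools
import math

def chi2_2l1s(s, q, alpha):
    # Hypothetical stand-in for the chi^2 of a full 2L1S light-curve fit;
    # a real implementation would compute binary-lens magnifications and
    # compare them with the photometric data.
    return ((math.log10(s) - math.log10(0.81)) ** 2 / 0.01
            + (math.log10(q) + 2.25) ** 2 / 0.04
            + (alpha - 3.18) ** 2 / 0.1)

# Stage 1: coarse grid in (log s, log q) with multiple seed values of alpha,
# mirroring the two-step search described in the text.
log_s_grid = [i / 10 for i in range(-10, 11)]    # log s in [-1, 1]
log_q_grid = [-4 + i / 5 for i in range(16)]     # log q in [-4, -1]
alpha_seeds = [2 * math.pi * k / 8 for k in range(8)]

best = min(itertools.product(log_s_grid, log_q_grid, alpha_seeds),
           key=lambda p: chi2_2l1s(10 ** p[0], 10 ** p[1], p[2]))
# Local minima such as `best` seed the second (refinement) stage, in which
# all parameters, including s and q, are left free (e.g., via MCMC).
```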
\begin{figure}[t]
\includegraphics[width=\columnwidth]{f3.eps}
\caption{
Enlargement of the anomaly region in the lensing light curve of KMT-2017-BLG-0673. The
solid and dotted curves drawn over the data points are the 2L1S and 1L1S models, respectively.
The residuals from the individual models are shown in the lower two panels. The inset in the
top panel shows the lens system configuration, in which the source trajectory (line with an
arrow) with respect to the caustic (red curves) is shown. The curve drawn in the bottom
panel is the difference between the 2L1S and 1L1S models.
}
\label{fig:three}
\end{figure}
\subsection{KMT-2017-BLG-0673}\label{sec:three-one}
The anomaly feature of KMT-2017-BLG-0673, shown in Figure~\ref{fig:three}, is characterized by
a single strong anomaly point at HJD$^\prime\sim 7963.9$ ($t_{\rm anom}$) surrounded by weak
positive deviations around $t_{\rm anom}$ during the relatively short time range of $7960\lesssim
{\rm HJD}^\prime \lesssim 7967$. The light curve variation at $t_{\rm anom}$ is discontinuous,
and this suggests that the anomaly point was produced by the source crossing over a caustic,
although the detailed structure of the caustic-crossing feature could not be delineated due to
the sparse coverage of the anomaly caused by the low observational cadence of the field.
Considering that the anomaly appears in the peripheral part of the light curve, it is likely
that the anomaly was generated by a planetary caustic lying on the planet-host axis
at a separation of $u_{\rm anom}\sim s-1/s$ from the host \citep{Griest1998, Han2006}. In
this case, the planet-host separation can be heuristically estimated as
\begin{equation}
s = {1\over 2} \left[(u_{\rm anom}^2 +4)^{1/2} \pm u_{\rm anom}\right],
\label{eq1}
\end{equation}
where $u_{\rm anom}=(\tau_{\rm anom}^2+u_0^2)^{1/2}$, $\tau_{\rm anom}=(t_{\rm anom}-t_0)/t_{\rm E}$,
and $t_{\rm anom}\sim 7964$ represents the time of the anomaly. With the values of $t_0\sim
7973.3$, $u_0\sim 0.12$, and $t_{\rm E} \sim 22$~days obtained from the 1L1S modeling conducted by
excluding the data points around the anomaly, we find two values of $s_{\rm close}\sim 0.81$
and $s_{\rm wide}\sim 1.24$, which correspond to the separations of the close ($s<1.0$) and wide
($s>1.0$) solutions, respectively.
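As a numerical check, Equation~(\ref{eq1}) can be evaluated directly with the quoted 1L1S values; the Python sketch below is illustrative only.

```python
import math

# Heuristic planet-host separation from Eq. (1), using the 1L1S values
# quoted in the text: t0 ~ 7973.3, u0 ~ 0.12, tE ~ 22 d, t_anom ~ 7964.
t0, u0, tE, t_anom = 7973.3, 0.12, 22.0, 7964.0
tau = (t_anom - t0) / tE
u_anom = math.hypot(tau, u0)

s_wide = 0.5 * (math.sqrt(u_anom**2 + 4.0) + u_anom)
s_close = 0.5 * (math.sqrt(u_anom**2 + 4.0) - u_anom)
# consistent with s_close ~ 0.81 and s_wide ~ 1.24 quoted in the text;
# note that the two roots satisfy s_close * s_wide = 1 exactly.
```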
From the 2L1S modeling, we found a unique solution without any degeneracy. In Table~\ref{table:one},
we list the lensing parameters of the solution and the model curve corresponding to the solution
is drawn over the data points in Figure~\ref{fig:three}. It was found that the 2L1S model improved
the fit by $\Delta\chi^2=584.7$ with respect to the 1L1S model. As expected, the anomaly was
produced by a low-mass companion with the binary parameters of $(s, q)\sim (0.81, 5.6\times
10^{-3})$. We note that the planet-host separation agrees well with the value that was
heuristically estimated from the location of the anomaly in the lensing light curve.
\begin{table}[t]
\small
\caption{Lensing parameters of KMT-2017-BLG-0673\label{table:one}}
\begin{tabular*}{\columnwidth}{@{\extracolsep{\fill}}ll}
\hline\hline
\multicolumn{1}{c}{Parameter} &
\multicolumn{1}{c}{Value } \\
\hline
$\chi^2$ & $452.0 $ \\
$t_0$ (HJD$^\prime$) & $7973.264 \pm 0.049$ \\
$u_0$ & $0.121 \pm 0.006 $ \\
$t_{\rm E}$ (days) & $22.36 \pm 0.63 $ \\
$s$ & $0.813 \pm 0.005 $ \\
$q$ ($10^{-3}$) & $5.58 \pm 1.22 $ \\
$\alpha$ (rad) & $3.179 \pm 0.032 $ \\
$\rho$ ($10^{-3}$) & $7.56 \pm 2.32 $ \\
\hline
\end{tabular*}
\tablefoot{ ${\rm HJD}^\prime = {\rm HJD}- 2450000$. }
\end{table}
\begin{table*}[t]
\small
\caption{Lensing parameters of KMT-2019-BLG-0414\label{table:two}}
\begin{tabular}{llllll}
\hline\hline
\multicolumn{1}{c}{Parameter} &
\multicolumn{2}{c}{2L1S (Sol 1)} &
\multicolumn{2}{c}{2L1S (Sol 2)} &
\multicolumn{1}{c}{Xallarap} \\
\multicolumn{1}{c}{} &
\multicolumn{1}{c}{Close} &
\multicolumn{1}{c}{Wide} &
\multicolumn{1}{c}{Close} &
\multicolumn{1}{c}{Wide} &
\multicolumn{1}{c}{} \\
\hline
$\chi^2$ & $585.8 $ & $585.8 $ & $585.9 $ & $586.0 $ & $590.0 $ \\
$t_0$ (HJD$^\prime$) & $8611.332 \pm 0.005$ & $8611.330 \pm 0.005$ & $8611.333 \pm 0.005$ & $8611.335 \pm 0.005$ & $8611.362 \pm 0.015 $ \\
$u_0$ ($10^{-3}$) & $4.33 \pm 0.52 $ & $4.31 \pm 0.49 $ & $4.48 \pm 0.53 $ & $4.26 \pm 0.56 $ & $4.84 \pm 0.63 $ \\
$t_{\rm E}$ (days) & $71.32 \pm 7.80 $ & $71.31 \pm 7.70 $ & $70.77 \pm 7.50 $ & $74.65 \pm 8.17 $ & $72.28 \pm 4.48 $ \\
$s$ & $0.347 \pm 0.066 $ & $2.803 \pm 0.599 $ & $0.416 \pm 0.078 $ & $2.714 \pm 0.376 $ & -- \\
$\log q$ & $-2.23 \pm 0.26 $ & $-2.26 \pm 0.25 $ & $-2.60 \pm 0.28 $ & $-2.56 \pm 0.26 $ & -- \\
$\alpha$ (rad) & $2.986 \pm 0.064 $ & $3.003 \pm 0.056 $ & $0.783 \pm 0.067 $ & $0.784 \pm 0.068 $ & -- \\
$\rho$ ($10^{-3}$) & $< 5 $ & $< 5 $ & $< 5 $ & $< 5 $ & -- \\
$P$ (day) & & & & & $ 0.9 \pm 0.1 $ \\
$\xi_N$ ($10^{-3}$) & & & & & $ 0.84 \pm 0.16 $ \\
$\xi_E$ ($10^{-3}$) & & & & & $ 0.00 \pm 0.12 $ \\
$\phi$ (deg) & & & & & $ 326.72 \pm 15.39$ \\
$i$ (deg) & & & & & $ 11.83 \pm 12.24 $ \\
\hline
\end{tabular}
\end{table*}
The lens system configuration, showing the source motion with respect to the caustic, is
presented in the inset of the top panel of Figure~\ref{fig:three}. It shows that the anomaly
was generated by the passage of the source through one of the two planetary caustics induced
by a close planet with $s<1.0$. Notable among the parameters is that the normalized
source radius, $\rho =(7.56\pm 2.32)\times 10^{-3}$, was measured despite the fact that
only a single data point covered the caustic. This was possible because the data points lying
adjacent to the caustic point provide an extra constraint on the source radius.
A short-term anomaly can also be produced if the source is a binary \citep{Gaudi1998}. We
checked this possibility by additionally conducting a binary-source (1L2S) modeling of the
light curve. From this modeling, it was found that the 1L2S model yields a poorer fit to the
anomaly than the 2L1S model by $\Delta\chi^2 =30.7$, especially in the peripheral region of
the anomaly. We therefore exclude the 1L2S interpretation of the anomaly. We also checked
the feasibility of detecting higher-order effects, including the microlens-parallax effect
\citep{Gould1992} and lens-orbital effect \citep{Batista2011, Skowron2011} induced by the
orbital motion of Earth and the binary lens, respectively. We found that it was difficult
to securely constrain the lensing parameters defining these higher-order effects not only
because the event time scale was not long enough but also because the photometry was not
sufficiently precise.
\subsection{KMT-2019-BLG-0414}\label{sec:three-two}
The event KMT-2019-BLG-0414 reached a very high magnification at the peak, $A_{\rm peak} >
200$. The anomaly, whose features are shown in Figure~\ref{fig:four}, occurred near the peak,
and the deviation from the 1L1S model is subtle, without any strong signatures such as would
be generated by a caustic crossing. From the inspection of the residuals from the 1L1S model,
presented in the bottom panel, the anomaly is characterized by two KMTA points
at the peak (HJD$^\prime =8611.11$ and 8611.22) with positive deviations, together with points
around the peak exhibiting slight negative deviations that lasted for about 2.5 days.
\begin{figure}[t]
\includegraphics[width=\columnwidth]{f4.eps}
\caption{
Enlarged view around the anomaly region in the lensing light curve of KMT-2019-BLG-0414.
Drawn over the data points are the 1L1S model considering finite-source effects (FSPL),
the two 2L1S (solutions~1 and 2) models, and the binary-source model considering xallarap
effects (xallarap). The model curves of the two 2L1S models cannot be resolved
within the line width due to the severe degeneracy between the models. The four bottom panels
show the residuals from the individual models. The two left insets in the top panel show
the lens-system configurations of the two 2L1S solutions, and the right inset shows the
configuration of the xallarap model. The blue and red curves in the bottom panel represent
the differences of the 2L1S solutions 1 and 2 from the FSPL model, respectively.
}
\label{fig:four}
\end{figure}
\begin{table*}[t]
\small
\caption{Source properties\label{table:three}}
\begin{tabular}{lll}
\hline\hline
\multicolumn{1}{c}{Quantity} &
\multicolumn{1}{c}{KMT-2017-BLG-0673} &
\multicolumn{1}{c}{KMT-2019-BLG-0414} \\
\hline
$(V-I, I)_{\rm S}$ & $(3.728 \pm 0.063, 17.291\pm 0.006) $ & $(2.603 \pm 0.029, 21.476 \pm 0.003)$ \\
$(V-I, I)_{\rm RGC}$ & $(3.693, 17.517) $ & $(2.966, 17.435) $ \\
$(V-I, I)_{\rm RGC,0}$ & $(1.060, 14.377) $ & $(1.060, 14.239) $ \\
$(V-I, I)_{\rm S,0}$ & $(1.096 \pm 0.063, 14.150 \pm 0.006) $ & $(0.697 \pm 0.029, 18.280\pm 0.003) $ \\
$\theta_*$ ($\mu$as) & $7.29 \pm 0.70 $ & $0.68\pm 0.05 $ \\
\hline
\end{tabular}
\end{table*}
From the 2L1S modeling of the light curve, we found 2 sets of planetary solutions with source
trajectory angles of $\alpha \sim 3.00$ (in radians) and $\sim 0.78$. We refer to these solutions
as 2L1S ``solution~1'' and ``solution~2'', respectively. For each solution, we identified
a pair of solutions resulting from the close-wide degeneracy \citep{Griest1998, Dominik1999},
and thus there are 4 solutions in total. Regardless of the solution, the estimated mass ratios
between the lens components are in the range of [3.3 -- 4.7]$\times 10^{-3}$, indicating that
the companion to the lens is a planet. The fit improvement of the 2L1S models with respect to
the 1L1S model that includes finite-source effects (FSPL) is $\Delta\chi^2= 90.8$,
indicating that the planetary signal is securely detected.
In Table~\ref{table:two}, we list the lensing parameters of the 4 degenerate planetary 2L1S
solutions along with the $\chi^2$ values of the fits. The degeneracies among the solutions
are very severe with $\Delta\chi^2 <0.2$. In Figure~\ref{fig:four}, we present the model
curves and residuals of solutions~1 and 2. The lens system configurations of solutions~1
and 2 are shown in the two left insets of the top panel of Figure~\ref{fig:four},
where the presented configurations are for the close solutions. Despite the large
difference in the source trajectory angle, $\alpha\sim 3.00$ for solution~1 and $\sim 0.78$ for
solution~2, the planet parameters, $(s, q)\sim (0.35/2.8, 4.7\times 10^{-3})$ for solution~1 and
$(0.42/2.7, 3.3\times 10^{-3})$ for solution~2, are similar to each other, and thus the caustics
of the two solutions appear similar. For both solutions, the anomaly was produced
by the source approach close to the central caustic induced by a planetary companion. The two points
with positive deviations appeared when the source approached the cusp of the caustic. Although the
normalized source radius cannot be measured because the source did not cross the caustic, the
modeling yields an upper limit of $\rho_{\rm max}\sim 5 \times 10^{-3}$.
We additionally checked the binary-source origin of the anomaly. Under the static 1L2S
interpretation without the orbital motion of the source, we found that the anomaly could
not be explained by the model, because the light curve exhibits both positive and negative
deviations, while a 1L2S model can produce only positive deviations. However, a negative
deviation can be produced by a xallarap effect, in which a faint companion to the source
induces a variation of the primary source's motion through their orbital motion \citep{Griest1992, Han1997}.
Following the parameterization of \citet{Dong2009}, we thus carried out a xallarap modeling
by adding 5 extra parameters of ($\xi_{{\rm E},N}, \xi_{{\rm E},E}$, $P$, $\phi$, $i$) to
those of the 1L1S model. Here $(\xi_{{\rm E},N}, \xi_{{\rm E},E})$ are the north and east
components of the xallarap vector $\mbox{\boldmath $\xi$}_{{\rm E}}$, $P$ denotes the orbital period,
$\phi$ is the phase angle, and $i$ represents the inclination of the source orbit. From
this modeling, we found a xallarap solution that approximately explained the observed anomaly.
We list the lensing parameters of the best-fit xallarap solution in Table~\ref{table:two},
and present the model curve, its residual, and the lens system configuration in
Figure~\ref{fig:four}. According to the model, the source is accompanied by a close faint
companion with a very short orbital period of $\sim 1$~day. Although the xallarap fit
is not preferred over the 2L1S solutions, the $\chi^2$ difference is minor, $\Delta
\chi^2\sim 4.2$. We therefore consider the xallarap solution to be a viable interpretation of
the event. Had the Einstein radius of the lens system been measured, the minimum mass of the
source companion could have been constrained by the relation $M_{S_2,{\rm min}}=(\xi_{\rm E}
\hat{r}_{\rm E}/{\rm AU})^3/(P/{\rm yr})^2$ to judge the validity of the xallarap solution,
but this constraint could not be applied because the normalized source radius, and hence
$\theta_{\rm E}$, could not be measured. Here $\hat{r}_{\rm E}=D_{\rm S}\theta_{\rm E}$ denotes the physical
Einstein radius projected onto the source plane. We note that the model curves of the 2L1S
and xallarap solutions differ from each other in regions not covered by data points,
indicating that the degeneracy between the solutions could have been resolved if the event
had more continuous coverage. In particular, the two models differ by $\sim 0.1$~mag during
the KMTC observing window centered on HJD$^\prime \sim 8610.75$. Unfortunately, KMTC was
weathered out on that night.
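To illustrate how the minimum-mass constraint would operate had $\theta_{\rm E}$ been measured, the sketch below evaluates the relation with a purely hypothetical $\theta_{\rm E}=0.3$~mas and $D_{\rm S}=8$~kpc (neither is a measurement for this event), together with $\xi_{\rm E}\sim 8.4\times 10^{-4}$ and $P\sim 0.9$~d from the xallarap fit.

```python
def m_s2_min(xi_E, P_days, theta_E_mas, D_S_kpc):
    """Minimum source-companion mass (M_sun) from the xallarap relation
    M = (xi_E * r_hat_E / AU)^3 / (P / yr)^2, with r_hat_E = D_S * theta_E.
    """
    # distance in pc times angle in arcsec gives a projected length in AU
    r_hat_E_au = (D_S_kpc * 1e3) * (theta_E_mas * 1e-3)
    return (xi_E * r_hat_E_au) ** 3 / (P_days / 365.25) ** 2

# Hypothetical theta_E = 0.3 mas and D_S = 8 kpc (illustrative only);
# xi_E ~ 8.4e-4 and P ~ 0.9 d follow the xallarap fit in Table 2.
m_min = m_s2_min(8.4e-4, 0.9, 0.3, 8.0)   # ~1.3e-3 M_sun for these inputs
```

For these assumed values the minimum companion mass would be of order a Jupiter mass, showing why a measured $\theta_{\rm E}$ could have tested the xallarap interpretation.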
\section{Source stars and Einstein radii}\label{sec:four}
In this section, we characterize the source stars of the events. Characterizing the source
star is important not only to fully describe the event but also to measure the Einstein
radius, which is related to the lensing parameter of the normalized source radius by
\begin{equation}
\theta_{\rm E} = {\theta_* \over \rho},
\label{eq2}
\end{equation}
where the angular source radius can be deduced from the source type (see below for the detailed
procedure of the $\theta_*$ estimation). Measurement of the Einstein radius is important because it
is related to the mass $M$ and distance $D_{\rm L}$ of the lens as $\theta_{\rm E}=(\kappa M\pi_{\rm rel})^{1/2}$,
and thus $\theta_{\rm E}$ can provide an extra constraint on the lens mass and distance in addition
to the basic observable of the event time scale. Here $\kappa\equiv 4 G/(c^2\,{\rm AU})\simeq
8.14\,{\rm mas}/M_\odot$, and $\pi_{\rm rel}$ represents the relative lens-source parallax, that is,
$\pi_{\rm rel}={\rm AU}(D_{\rm L}^{-1} - D_{\rm S}^{-1})$, where $D_{\rm S}$ is the distance to the
source. We specified the source stars of the events by measuring their colors and brightnesses. In
order to estimate the extinction- and reddening-corrected (de-reddened) color and magnitude of the
source, $(V-I, I)_{\rm S,0}$, from the instrumental values, $(V-I, I)_{\rm S}$, we applied the
method of \citet{Yoo2004}. In this method, the centroid of the red giant clump (RGC) in the
color-magnitude diagram (CMD), with its known de-reddened color and magnitude \citep{Bensby2013,
Nataf2013}, is used as a reference for the calibration of the source color and brightness.
\begin{figure}[t]
\includegraphics[width=\columnwidth]{f5.eps}
\caption{
Source locations with respect to the centroids of the red giant clump (RGC) in the instrumental
color-magnitude diagrams of stars lying in the neighborhoods of the sources of KMT-2017-BLG-0673
and KMT-2019-BLG-0414, constructed using the pyDIA photometry of the KMTC data sets. Also marked
are the positions of the blends.
}
\label{fig:five}
\end{figure}
Figure~\ref{fig:five} shows the CMDs of stars lying in the neighborhood of the source stars
of the two events constructed with the photometry data processed using the pyDIA code
\citep{Albrow2017}. In each CMD, we mark the positions of the source and RGC centroid by a
blue dot and a red dot, respectively. The position of the blend is marked by a green dot;
its location indicates that the blended fluxes of both events are dominated by bright disk
stars. We will discuss the nature of the blend in the following section. To estimate the
source position in the CMD, we measured the $I$- and $V$-band magnitudes of the source by regressing
the data of the individual passbands, processed using the pyDIA photometry code, against the variation
of the lensing magnification. Measuring the offsets in color and magnitude between the source
and the RGC centroid in the CMD, $\Delta (V-I, I) = (V-I, I)_{\rm S} - (V-I, I)_{\rm RGC}$,
we then estimated the de-reddened color and magnitude of the source as
\begin{equation}
(V-I, I)_{\rm S,0} = (V-I, I)_{\rm RGC,0} + \Delta (V-I, I).
\label{eq3}
\end{equation}
In Table~\ref{table:three}, we list the values of $(V-I, I)_{\rm S}$, $(V-I, I)_{\rm RGC}$,
$(V-I, I)_{\rm RGC,0}$, and $(V-I, I)_{\rm S,0}$ for the two lensing events. According to the
measured colors and magnitudes, the source of KMT-2017-BLG-0673 is a K-type giant and the
source of KMT-2019-BLG-0414 is a G-type main-sequence star.
We estimated the angular source radii of the events by first converting $V-I$ into $V-K$ using the
color-color relation of \citet{Bessell1988}, and then deriving $\theta_*$ from the
$(V-K, V)$--$\theta_*$ relation of \citet{Kervella2004}. The estimated angular radii of the
source stars are
\begin{equation}
\theta_* =
\begin{cases}
(7.29 \pm 0.70)~\mu{\rm as}, & \textrm{for KMT-2017-BLG-0673}, \\
(0.68 \pm 0.05)~\mu{\rm as}, & \textrm{for KMT-2019-BLG-0414}.
\end{cases}
\label{eq4}
\end{equation}
From the relation in Equation~(\ref{eq2}), it is estimated that the angular Einstein radii of
the two events are
\begin{equation}
\theta_{\rm E} =
\begin{cases}
(0.96 \pm 0.31)~{\rm mas}, & \textrm{for KMT-2017-BLG-0673}, \\
> 0.14~{\rm mas}, & \textrm{for KMT-2019-BLG-0414}.
\end{cases}
\label{eq5}
\end{equation}
With the values of the event time scales, the relative lens-source proper motions,
$\mu=\theta_{\rm E}/t_{\rm E}$, are estimated as
\begin{equation}
\mu =
\begin{cases}
(15.74 \pm 5.05)~{\rm mas/yr}, & \textrm{for KMT-2017-BLG-0673}, \\
> 0.7~{\rm mas/yr}, & \textrm{for KMT-2019-BLG-0414}.
\end{cases}
\label{eq6}
\end{equation}
For KMT-2019-BLG-0414, we present the lower limits of $\theta_{\rm E}$ and $\mu$ because only an
upper limit on $\rho$ is obtained for this event.
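These values follow directly from Equation~(\ref{eq2}) and $\mu=\theta_{\rm E}/t_{\rm E}$; the short sketch below (illustrative only) reproduces the quoted numbers from the tabulated $\theta_*$, $\rho$, and $t_{\rm E}$.

```python
# Numeric check of Eq. (2) and mu = theta_E / tE for KMT-2017-BLG-0673,
# using the theta_*, rho, and tE values quoted in the text.
theta_star_mas = 7.29e-3          # theta_* = 7.29 uas, expressed in mas
rho = 7.56e-3
tE_days = 22.36

theta_E = theta_star_mas / rho                 # ~0.96 mas
mu = theta_E / tE_days * 365.25                # ~15.75 mas/yr

# KMT-2019-BLG-0414: only the upper limit rho < 5e-3 is available,
# so theta_E and mu carry lower limits instead of measurements.
theta_E_min = 0.68e-3 / 5e-3                   # > ~0.14 mas
mu_min = theta_E_min / 71.32 * 365.25          # > ~0.7 mas/yr
```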
\begin{table*}[t]
\small
\caption{Physical lens parameters\label{table:four}}
\begin{tabular}{lccccc}
\hline\hline
\multicolumn{1}{c}{Quantity} &
\multicolumn{1}{c}{KMT-2017-BLG-0673} &
\multicolumn{4}{c}{KMT-2019-BLG-0414} \\
\multicolumn{1}{c}{} &
\multicolumn{1}{c}{} &
\multicolumn{2}{c}{Solution 1} &
\multicolumn{2}{c}{Solution 2} \\
\multicolumn{1}{c}{} &
\multicolumn{1}{c}{} &
\multicolumn{1}{c}{Close} &
\multicolumn{1}{c}{Wide} &
\multicolumn{1}{c}{Close} &
\multicolumn{1}{c}{Wide} \\
\hline
$M_{\rm h}$ ($M_\odot$) & $0.63^{+0.37}_{-0.35}$ & $0.74^{+0.43}_{-0.38}$ & $\leftarrow $ & $\leftarrow $ & $\leftarrow $ \\ [0.7ex]
$M_{\rm p}$ ($M_{\rm J}$) & $3.67^{+2.17}_{-2.07}$ & $4.57^{+3.74}_{-2.06}$ & $4.26^{+3.32}_{-1.87}$ & $1.95^{+1.76}_{-0.93}$ & $2.14^{+1.75}_{-0.96} $ \\ [0.7ex]
$D_{\rm L}$ (kpc) & $5.08^{+1.22}_{-1.59}$ & $4.41^{+1.79}_{-1.93}$ & $\leftarrow $ & $\leftarrow $ & $\leftarrow $ \\ [0.7ex]
$a_\perp$ (AU) & $2.34^{+0.56}_{-0.74}$ & $1.16^{+0.47}_{-0.51}$ & $9.42^{+3.83}_{-4.12}$ & $1.40^{+0.57}_{-0.61}$ & $9.12^{+3.71}_{-3.99} $ \\ [0.7ex]
disk/bulge & $63\%/37\% $ & $73\%/27\% $ & $\leftarrow $ & $\leftarrow $ & $\leftarrow $ \\ [0.7ex]
\hline
\end{tabular}
\end{table*}
\begin{figure}[t]
\includegraphics[width=\columnwidth]{f6.eps}
\caption{
Bayesian posteriors of the host mass for the planetary systems KMT-2017-BLG-0673L (upper panel) and
KMT-2019-BLG-0414L (lower panel). The vertical line in each panel represents the median value and the
two dotted vertical lines indicate the 1$\sigma$ range of the distribution. The curves drawn in blue
and red represent the contributions of the disk and bulge lens populations, respectively.
}
\label{fig:six}
\end{figure}
\section{Physical lens parameters}\label{sec:five}
The physical parameters of the lens mass and distance are constrained by the lensing observables
$t_{\rm E}$, $\theta_{\rm E}$, and $\pi_{\rm E}$. The event time scale was securely measured for both
events, and the Einstein radius was measured for KMT-2017-BLG-0673, while only a lower limit was
obtained for KMT-2019-BLG-0414; the microlens parallax could not be measured for either event.
Although these partial measurements of the lensing observables make it difficult to uniquely
determine $M$ and $D_{\rm L}$ from the relations \citep{Gould2000}
\begin{equation}
M= {\theta_{\rm E} \over \kappa\pi_{\rm E}};\qquad
D_{\rm L} ={{\rm AU} \over \pi_{\rm E}\theta_{\rm E} + \pi_{\rm S}},
\label{eq7}
\end{equation}
it is still possible to constrain the physical parameters with the measured observables of the
individual events from a Bayesian analysis.
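As a numerical illustration of these relations, the following sketch (the function names and input values are ours, not from the analysis) evaluates $M$ and $D_{\rm L}$ for assumed observables, using $\kappa = 4G/(c^2{\rm AU}) \simeq 8.144~{\rm mas}~M_\odot^{-1}$ and $\pi_{\rm S}={\rm AU}/D_{\rm S}$:

```python
# Sketch of the mass-distance relations above; illustrative values only.
KAPPA = 8.144  # kappa = 4G/(c^2 AU) in mas per solar mass

def lens_mass(theta_E_mas, pi_E):
    """M = theta_E / (kappa * pi_E), in solar masses."""
    return theta_E_mas / (KAPPA * pi_E)

def lens_distance(theta_E_mas, pi_E, D_S_kpc=8.0):
    """D_L = AU / (pi_E * theta_E + pi_S); result in kpc for angles in mas."""
    pi_S = 1.0 / D_S_kpc  # source parallax in mas
    return 1.0 / (pi_E * theta_E_mas + pi_S)
```

For example, assumed observables $\theta_{\rm E}=0.5$~mas and $\pi_{\rm E}=0.1$ would give $M\simeq 0.61~M_\odot$ and $D_{\rm L}\simeq 5.7$~kpc.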
\begin{figure}[t]
\includegraphics[width=\columnwidth]{f7.eps}
\caption{
Bayesian posteriors of the distance to the planetary system. Notations are the same as those in
Fig.~\ref{fig:six}.
}
\label{fig:seven}
\end{figure}
In the Bayesian analysis, we first produced a large number ($10^7$) of lensing events by conducting
a Monte Carlo simulation based on a Galactic model. The Galactic model defines the matter density,
motion, and mass function of Galactic objects. From the physical parameters of the simulated events,
we computed the lensing observables $t_{{\rm E},i}=D_{\rm L} \theta_{\rm E}/v_\perp$ and $\theta_{{\rm E},i}
=(\kappa M \pi_{\rm rel})^{1/2}$, where $v_\perp$ represents the transverse speed between the lens
and source. We then constructed Bayesian posteriors of the lens mass and distance
by assigning each simulated event a weight $w_i=\exp(-\chi^2/2)$, where $\chi^2=
(t_{{\rm E},i}-t_{\rm E})^2/\sigma({t_{\rm E}})^2+(\theta_{{\rm E},i}-\theta_{\rm E})^2/\sigma(\theta_{\rm E})^2$, and $\sigma(t_{\rm E})$
and $\sigma(\theta_{\rm E})$ denote the uncertainties of $t_{\rm E}$ and $\theta_{\rm E}$ estimated from the modeling.
In the simulation of lensing events, we used the \citet{Jung2021} Galactic model.
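A minimal sketch of this weighting and posterior-summary scheme (hypothetical helper names; the Galactic-model simulation itself is not reproduced) could read:

```python
import numpy as np

def bayesian_weights(tE_i, thetaE_i, tE, sig_tE, thetaE, sig_thetaE):
    """w_i = exp(-chi^2/2) for simulated observables (scalars or arrays)."""
    chi2 = ((tE_i - tE) / sig_tE)**2 + ((thetaE_i - thetaE) / sig_thetaE)**2
    return np.exp(-0.5 * chi2)

def weighted_quantile(x, w, q):
    """Quantile q of values x under weights w (median: q=0.5; 1-sigma: 0.16, 0.84)."""
    order = np.argsort(x)
    cdf = np.cumsum(np.asarray(w)[order]) / np.sum(w)
    return np.interp(q, cdf, np.asarray(x)[order])
```

The posterior median and 1$\sigma$ range quoted in the text would then correspond to the weighted 50\%, 16\%, and 84\% quantiles of the simulated lens masses and distances.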
In Figures~\ref{fig:six} and \ref{fig:seven}, we present the posteriors of the host mass and distance
to the planetary systems, respectively. For each distribution, the solid vertical line denotes the
median value, and the two dotted vertical lines represent the 1$\sigma$ range estimated as 16\% and
84\% of the distribution. We separately mark posterior distributions of the disk and bulge lens
populations with blue and red curves, respectively. In Table~\ref{table:four}, we summarize the
estimated masses of the host ($M_{\rm h}$) and planet ($M_{\rm p}$), distances, and projected
planet-host separations ($a_\perp=sD_{\rm L}\theta_{\rm E}$) for the two planetary systems. For KMT-2019-BLG-0414,
we present four sets of parameters corresponding to the four degenerate solutions. We note that the
primary mass and distance of KMT-2019-BLG-0414L are alike
regardless of the solution, because the constraint given by the lower limit of $\theta_{\rm E}$ is very weak
and the event time scales of the degenerate solutions, which provide the other constraint, are nearly identical.
It is found that the two planetary systems are very similar to each other in many aspects. First,
both systems have planets heavier than Jupiter, with masses $M_{\rm p}\sim
3.7~M_{\rm J}$ and $\sim [2.0$--$4.5]~M_{\rm J}$ for KMT-2017-BLG-0673L and KMT-2019-BLG-0414L,
respectively. Second, the masses of the planet hosts, $M_{\rm h}\sim 0.6~M_\odot$ and $\sim
0.7~M_\odot$ for the individual lenses, are also similar to each other. Third, the distances to
the systems, $D_{\rm L}\sim 5.1$~kpc and $\sim 4.4$~kpc, indicate that both planetary systems are likely
to be in the disk. According to the Bayesian estimation, the probability for the planetary system
to be in the disk is $\sim 63\%$ for KMT-2017-BLG-0673L and $\sim 73\%$ for
KMT-2019-BLG-0414L.
Prompted by the facts that the lenses of both events are likely to be in the disk and that the blended
fluxes come from disk stars, we inspected the possibility that the lenses are the main sources of the
blended flux. For this inspection, we examined the reference images taken
before the lensing magnification and the difference image taken during the lensing magnification.
For KMT-2017-BLG-0673, there exists a star, even brighter than the source (a bulge giant), at a
separation of $\sim 2.2$~arcsec from the source, and this degrades the photometry
and astrometry of the event, together with the high extinction, $A_I\sim 3.1$~mag, toward the field.
Nevertheless, we identified a star lying at a separation of $\sim 0.65$~arcsec from the source with
a brightness corresponding to that of the blend. This indicates that the lens is not the main source of the
blended flux. For KMT-2019-BLG-0414, the astrometric offset between the centroid of baseline object
measured on the reference image, $\theta_{\rm ref}=[(f_s/f_b)\theta_s+\theta_b]/(f_s/f_b+1)$, and
the source position measured on the difference image, $\theta_s$, is $\Delta\theta=|\theta_{\rm ref}
-\theta_s|=(79.2\pm 59.2)$~mas. Here $f_s$ and $f_b$ represent the flux values of the source and
blend, respectively. In the case of KMT-2019-BLG-0414, $f_b\gg f_s$, and thus the centroid shift
can be approximated as $\Delta\theta\sim |\theta_b-\theta_s|$. Considering that the measured
astrometric offset is only 1.3 times greater than its uncertainty, it is difficult to firmly
exclude the possibility that the lens is the major source of the blended light on the basis of
the measurement uncertainty alone.
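The centroid test used above reduces to a flux-weighted average, since $\theta_{\rm ref}=[(f_s/f_b)\theta_s+\theta_b]/(f_s/f_b+1)$ is just $(f_s\theta_s+f_b\theta_b)/(f_s+f_b)$. A short sketch (helper name ours) makes the $f_b\gg f_s$ limit explicit:

```python
import numpy as np

def centroid_offset(theta_s, theta_b, f_s, f_b):
    """|theta_ref - theta_s|, with theta_ref the flux-weighted centroid."""
    theta_s = np.asarray(theta_s, dtype=float)
    theta_b = np.asarray(theta_b, dtype=float)
    theta_ref = (f_s * theta_s + f_b * theta_b) / (f_s + f_b)
    return np.linalg.norm(theta_ref - theta_s)
```

In the limit $f_b\gg f_s$ relevant to KMT-2019-BLG-0414, the offset tends to $|\theta_b-\theta_s|$, as stated in the text.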
\section{Resolution of KMT-2019-BLG-0414 Degeneracy}\label{sec:six}
In this section, we discuss the possibility of resolving the degeneracy between the 2L1S and
xallarap solutions of KMT-2019-BLG-0414 by means of AO observations on 30~m class telescopes,
which may see first light in about 2030. We give a brief synopsis of the decision tree governing
these observations.
In the 2L1S model, $\rho<0.005$ at $3\,\sigma$ confidence, which implies $\mu>0.7$~mas~yr$^{-1}$
(see Equation~(\ref{eq6})). If $\mu$ is really at the threshold of this limit, then several
decades would be required to separately resolve the lens and source. However, typical lens-source
relative proper motions are substantially higher than this, and therefore, most likely, 30~m-class
AO imaging in 2030 will resolve them. This will yield a heliocentric proper motion measurement.
One must be careful to correct from heliocentric to geocentric proper motion using
\begin{equation}
\mbox{\boldmath $\mu$}_{\rm geo} = \mbox{\boldmath $\mu$}_{\rm hel} - {{\bf v}_{\oplus,\perp}\over {\rm AU}}\pi_{\rm rel},
\label{eq8}
\end{equation}
where ${\bf v}_{\oplus,\perp}(N,E) = (-0.7,+21.6)~{\rm km~s}^{-1}$ is the projected velocity
of Earth at the peak of the event. However, we ignore this correction in the following
simplified treatment. Because $t_{\rm E} \simeq {\rm yr}/5$ is well measured in both models, such
a proper motion measurement will immediately yield a determination of the Einstein radius:
$\theta_{\rm E} \simeq 0.6~{\rm mas}~(\mu/3\,{\rm mas}~{\rm yr}^{-1})$.
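The two estimates in this paragraph, $\theta_{\rm E}=\mu t_{\rm E}$ and the waiting time for the lens and source to separate by a given angular resolution, can be sketched as follows (function names and the quoted resolution value are ours, purely illustrative):

```python
def einstein_radius_mas(mu_mas_yr, tE_yr=0.2):
    """theta_E = mu * t_E, with t_E ~ yr/5 for this event."""
    return mu_mas_yr * tE_yr

def years_to_separate(mu_mas_yr, resolution_mas):
    """Time until the lens-source separation reaches a given angular resolution."""
    return resolution_mas / mu_mas_yr
```

For $\mu=3~{\rm mas~yr}^{-1}$ this gives $\theta_{\rm E}\simeq 0.6$~mas, while a proper motion near the $0.7~{\rm mas~yr}^{-1}$ threshold would indeed require decades to reach a separation of a few tens of mas.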
Within the context of the xallarap model, this will yield the semi-major axis of the orbit of
the primary, i.e., $a_{\rm primary}=\xi\theta_{\rm E} D_{\rm S}=1.25~R_\odot~(\mu/3~{\rm mas~yr}^{-1})$,
where we have assumed a source distance of $D_{\rm S}=8$~kpc. Combined with the period measurement
$P=0.9$~day, a source-mass estimate of $M_{\rm S} \simeq 1\,M_\odot$ and Kepler's Third Law, this
will yield the mass of the source companion. For example, for $\mu=3~{\rm mas~yr}^{-1}$,
$M_{\rm companion}\sim 0.4\,M_\odot$ (which would account for its lack of photometric signature).
However, if $\mu$ were measured to be sufficiently large, this might imply photometric signatures
from a luminous orbiting companion and (or) eclipses, which might rule out the xallarap scenario.
Assuming that xallarap solutions survive this test, there remains a radial-velocity (RV) test
of the xallarap hypothesis. The amplitude of the RV signature would be $v\sin i = 2\pi (a_{\rm
primary}/P) \sin i = 70~{\rm km~s}^{-1} (\mu/3~{\rm mas~yr}^{-1})\sin i$. The source is
$I_{\rm S} \sim 21.5$ ($K_{\rm S} \sim 18.0$), so RV signatures of amplitude $\ll 1~{\rm km~s}^{-1}$
should be detectable with 30~m-class AO spectroscopy. If these signatures are not detected, then
the xallarap model would become highly implausible because it would require an almost perfectly
face-on orbit. For completeness, we note that {\it Gaia} DR3 reports a parallax and proper
motion for the baseline object of $\pi_{\rm base} = (1.1\pm 0.4)$~mas and $\mbox{\boldmath $\mu$}_{\rm base}(E,N)
= (-0.4\pm 0.4,-2.3\pm 0.3)~{\rm mas~yr}^{-1}$. As shown in Figure~\ref{fig:five}, the baseline
object is dominated by the blend. While the parallax error is large, the low proper motion is
also consistent with the blend being a nearby object. This object is most likely dominated by
the host or a companion to the host. By the time 30~m-class AO is available, the parallax error
will likely be reduced by a factor $\sim 1.7$, so Gaia astrometry may substantially aid the
interpretation of the AO images.
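The RV amplitude quoted in this section follows from circular-orbit kinematics, $v\sin i = 2\pi(a_{\rm primary}/P)\sin i$; a sketch of the conversion (helper name ours; $1~{\rm AU~yr^{-1}}\simeq 4.74~{\rm km~s^{-1}}$):

```python
import math

AU_PER_YR_IN_KMS = 4.74  # 1 AU/yr in km/s

def rv_amplitude_kms(a_primary_au, P_yr, sin_i=1.0):
    """v sin i = 2*pi*(a_primary/P)*sin i, converted to km/s."""
    return 2.0 * math.pi * (a_primary_au / P_yr) * AU_PER_YR_IN_KMS * sin_i
```

With $a_{\rm primary}=1.25~R_\odot\simeq 0.0058$~AU and $P=0.9$~day, this gives $\simeq 70~{\rm km~s}^{-1}$, matching the amplitude quoted above.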
\section{Summary}\label{sec:seven}
We presented analyses of the two microlensing events KMT-2017-BLG-0673 and KMT-2019-BLG-0414, for
which the presence of planets in the lenses was found from the inspection of the microlensing
data collected by the KMTNet survey during the 2017--2019 seasons in the peripheral Galactic
bulge fields. The planetary signal of KMT-2017-BLG-0673 had been previously missed because of
the relatively sparse coverage of the event, and the signal of KMT-2019-BLG-0414 had not been
identified due to the weakness of the signal caused by its non-caustic-crossing nature. The
detections of the planets indicate the need for thorough investigations of non-prime-field
lensing events for the complete census of microlensing planet samples.
For KMT-2017-BLG-0673, we identified a unique planetary solution with lensing parameters $(s, q)
\sim (0.81, 5.6\times 10^{-3})$. It was found that the planetary signal was generated by the
source passage through one of the two tiny planetary caustics induced by a planet lying inside
the Einstein ring of the planetary system.
For KMT-2019-BLG-0414, on the other hand, we identified two pairs of solutions, each pair consisting
of two solutions caused by the close-wide degeneracy. Despite the different incidence
angles of the two sets of solutions, the planet parameters of the projected
planet-host separation and mass ratio were similar to each other. Although slightly less favored,
the anomaly could also be explained by a model in which a single lens mass magnified a rapidly
orbiting binary source with a faint companion. In Section~\ref{sec:six}, we discussed how the
planet/xallarap degeneracy might one day be resolved by AO observations on 30~m class telescopes.
Pending such resolution, KMT-2019-BLG-0414Lb should not be included in catalogs of
``known planets.''
From the physical parameters estimated by conducting Bayesian analyses based on the observables of
the events, it was found that the two planetary systems are similar to each other in many aspects.
For both planetary systems, the hosts of the planets are stars less massive than the Sun, and
they host planets heavier than Jupiter. Furthermore, both planetary systems likely
reside in the disk of the Galaxy.
\begin{acknowledgements}
Work by C.H. was supported by the grants of National Research Foundation of Korea
(2020R1A4A2002885 and 2019R1A2C2085965).
This work was financially supported by the Research Year of Chungbuk National University in 2021.
This research has made use of the KMTNet system operated by the Korea Astronomy and Space
Science Institute (KASI) and the data were obtained at three host sites of CTIO in Chile,
SAAO in South Africa, and SSO in Australia.
J.C.Y. acknowledges support from NSF Grant No. AST-2108414.
W.Z. and H. Y. acknowledge the support by the National Science Foundation of China (Grant
No. 12133005).
\end{acknowledgements}
\section{Introduction}
Electroweak properties are widely used as an important source
of information on the structure of hadrons. In particular
within the framework of
light-front (LF) dynamics \cite{dirac,brodsky,karmanov,kp,LEV}, a large number
of papers have been devoted to the study of nuclei and hadrons
(see e.g.
\cite{chung,we,Jaus90,tob92,carpi,card,salme,sim,nua,Jaus99,choi01,JI01,pion99,ba01,pach02,
Hwang,ba02},
just to give a partial account of previous works with a finite number of constituents).
The LF dynamics allows one to exploit the intuitive language of the Fock space.
Indeed the Fock-space language is particularly meaningful within
LF dynamics, since: "The simplicity of the light-cone Fock
representation as compared to that in equal-time quantization is
directly linked to the fact that the physical vacuum state has a
much simpler structure on the light-cone because the Fock vacuum
is an exact eigenstate of the full Hamiltonian."\cite{brodsky}
Another basic motivation for choosing the LF dynamics is
represented by the striking feature that the Fock decomposition is
stable under LF boosts, since they are of kinematical
nature and therefore do not change the number of particles,
i.e., are diagonal in the Fock space.
Therefore the LF dynamics is a suitable framework for the investigation of the Fock
expansion for mesons and baryons, viz.
\begin{eqnarray} &&
| meson \rangle = |q\bar{q} \rangle + |q \bar{q} q \bar{q}\rangle +
|q \bar{q} ~g\rangle +
.....
\nonumber \\ &&
| baryon \rangle = |qqq \rangle +
|qqq~q \bar{q} \rangle +|qqq~g \rangle +
.....
\end{eqnarray}
In particular, within the LF dynamics the
electromagnetic form factor of the pion has
been the object of many papers
(see, e.g., Refs.
\cite{chung,tob92,carpi,card,salme,choi01,pion99,Hwang,pach02}).
Indeed the pion electromagnetic form factor yields a simple tool for
the investigation of pion and photon microscopic
structure in terms of hadronic constituents. In
what follows we will present an approach to investigate in a common
framework the
pion and photon vertex functions, with the perspective of an extension of our
approach to the nucleon. The intuitive
language of the Fock space will be widely exploited
to analyze the above mentioned vertex functions.
The aim of this work is to give a unified description of the electromagnetic form
factor of the pion, both in the space-like (SL) and in the time-like (TL) regions,
taking into account the complexities related to the pion and photon
vertex functions, both in the valence and in the nonvalence sectors,
as well as {\em{the fermionic nature of the constituents}}.
A first presentation of our approach was given in Ref. \cite{DFPS}.
The choice of the {\em reference frame} where the form factor analyses
are carried out has a fundamental role, as shown in previous
works in the space-like region \cite{pach98,pion99,ba01,pach02}
and in the time-like one \cite{choi01}.
For a unified description of TL and SL form factors,
a reference frame is needed where the plus component of the momentum transfer, $q^+ = q^0 + q^3$,
is different from zero (otherwise, $q^2=q^+ q^- - q^2_{\perp}$ cannot be positive).
As a matter of fact, a
reference frame where $q^+\neq 0$ allows one to analyze, in a
common framework \cite{DFPS}, the pair
production process (Z-diagram contribution) \cite{JI01}, i.e. the effect of multiquark propagation,
as well as the ultrarelativistic effect of the so-called instantaneous contributions and the hadronic
components of the photon wave function \cite{brodsky,ashery}.
In Ref. \cite{LPS} it was shown that, within the Hamiltonian LF
dynamics (HLFD), a Poincar\'e covariant and conserved current
operator can be obtained
from the matrix elements of the free current, evaluated in the Breit
reference frame,
where the initial and the final total momenta of the system are directed
along the
spin quantization axis, $z$. Following Ref. \cite{LPS}, we calculate
the pion form factor in a
reference frame where ${\bf q}_{\perp}=0$ and $q^+>0$.
Our starting point is the {\em Mandelstam formula} \cite{mandel}. To construct a bridge toward the
Hamiltonian language, the hadron vertex functions will be connected to the LF
wave function of the valence component of the hadron state. Furthermore, the concept of
hadronic valence, i.e. $q\bar q$, component of the photon wave
function will be introduced \cite{brodsky,ashery}.
The main difficulties to be dealt with are: i) how to construct the
photon-hadron coupling when a $q\bar q$ pair is produced by a photon
with $q^+>0$; and ii) how to describe the nonvalence content relevant for the
process under
consideration, both in
the pion and in the photon wave functions.
The first issue is addressed by using a {\em covariant generalization of
the vector meson dominance approach}
(see, e.g., \cite{Connell}) at the
level of the photon vertex function (see Ref. \cite{DFPS}).
As a matter of fact, it is necessary to construct the
Green's function of the interacting $q\bar q$ pair in the $1^-$ channel. For the
description of the vector meson vertex functions in the valence sector we use
the eigenfunctions of the square
mass operator proposed in Refs. \cite{pauli,tobpauli}. The simplified version
of the model that we are going to use \cite{FPZ02} includes
confinement through a harmonic oscillator potential.
The model showed a universal and
satisfactory description of the experimental values of the masses
of both singlet and triplet $S$-wave mesons and the
corresponding radial excitations \cite{FPZ02}, giving
a natural explanation of the almost linear relationship
between the mass squared of excited states and the radial quantum
number $n$~\cite{iach,ani}. Therefore such a relativistic QCD-inspired
model for pseudoscalar and vector mesons retains the main feature
of the spectra and at the same time allows one to perform simple numerical calculations.
The second issue, i.e. {\em the contribution of the nonvalence
($2q2\bar q$) components} of the pion and photon wave functions,
is addressed using a model where a quark in the valence
component radiates a pair by a contact interaction \cite{JI01}.
This interaction is described through a pseudoscalar coupling of quark and pion fields,
multiplied by a constant.
In a recent study of meson decay processes
within LF dynamics \cite{JI01},
this approximation was shown to give a good description of the
experimental data. Here, we just follow the above
suggestion to parameterize the radiative pion emission amplitude
from the quark.
Another important point to be treated carefully is the {\em contribution of
the instantaneous terms}, which is strictly related to the fermionic nature of the constituents.
We remind the reader that the Dirac propagator can be
decomposed using the light-front momentum components \cite{brodsky}, as
follows:
\begin{eqnarray}
\frac{\rlap\slash{k}+m}{k^2-m^2+\imath \epsilon} =
\frac{\rlap\slash{k}_{on}+m}{k^+(k^--k^-_{on}+\frac{\imath
\epsilon}{k^+})} +\frac{\gamma^+}{2k^+} \ ,
\label{inst}
\end{eqnarray}
where $\gamma^+ = \gamma^0 + \gamma^3$ and $k^-_{on}=(|{\bf k}_{\perp}|^{2}+m^2)/{k^+}$.
The second term on the right-hand side of Eq. (\ref{inst}) is
an instantaneous term in the light-front time, related to the so-called zero modes.
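This decomposition is an exact algebraic identity, since $k^2-m^2=k^+(k^--k^-_{on})$ and $\rlap\slash{k}-\rlap\slash{k}_{on}=(k^--k^-_{on})\gamma^+/2$. It can be checked numerically with explicit Dirac matrices (a sketch in the Dirac representation, metric $(+,-,-,-)$; the kinematical values are arbitrary off-shell choices):

```python
import numpy as np

# Pauli and Dirac matrices in the Dirac representation.
s1 = np.array([[0, 1], [1, 0]], dtype=complex)
s2 = np.array([[0, -1j], [1j, 0]])
s3 = np.array([[1, 0], [0, -1]], dtype=complex)
I2 = np.eye(2, dtype=complex)
Z2 = np.zeros((2, 2), dtype=complex)
g0 = np.block([[I2, Z2], [Z2, -I2]])
g1, g2, g3 = (np.block([[Z2, s], [-s, Z2]]) for s in (s1, s2, s3))
gplus = g0 + g3                      # gamma^+ = gamma^0 + gamma^3
I4 = np.eye(4, dtype=complex)

def kslash(kplus, kminus, kx, ky):
    """k-slash from light-front components: k0 = (k^+ + k^-)/2, k3 = (k^+ - k^-)/2."""
    k0, k3 = 0.5 * (kplus + kminus), 0.5 * (kplus - kminus)
    return k0 * g0 - kx * g1 - ky * g2 - k3 * g3

m, kplus, kminus, kx, ky = 0.25, 0.8, 1.7, 0.3, -0.2    # arbitrary off-shell point
kminus_on = (kx**2 + ky**2 + m**2) / kplus              # on-shell k^-
lhs = (kslash(kplus, kminus, kx, ky) + m * I4) / (kplus * kminus - kx**2 - ky**2 - m**2)
rhs = (kslash(kplus, kminus_on, kx, ky) + m * I4) / (kplus * (kminus - kminus_on)) \
      + gplus / (2.0 * kplus)
```

The on-shell and instantaneous pieces reproduce the full propagator for any off-shell $k^-$.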
As already known (see, e.g., \cite{pach02}), the instantaneous contributions play a
dominant role in the description
of the pion electromagnetic form factor in the space-like region,
in a reference frame where $q^+>0$. Therefore special
care is devoted in the present work
to the treatment of the {\it instantaneous} contributions in the
light-cone representation of the fermion propagators. In particular the contributions of
the zero modes are under control, thanks to the momentum behavior
of the hadron vertex functions. It should be pointed out
that the effect of the instantaneous terms is emphasized by the small mass of the pion.
Our description contains a small set of parameters: the
oscillator strength, the constituent quark mass,
and the width for the vector mesons. We use experimental
widths for the vector mesons, when available \cite{pdg}, while for the unknown widths
of the radial excitations we use a single width as a fitting parameter. The
constant involved in the description of the nonvalence component can be
fixed by the pion charge normalization in the limit of a vanishing pion mass.
The evaluation of the instantaneous vertex functions involves a further
parameter (see Sect. X).
Previously, the elastic time-like form factor was explored in
the light-front quantization in a boson model of $q\overline Q$ mesons
with point-like vertexes \cite{choi01}, which does not exploit the
rich structure of the meson excited states. Here, by studying in a
common framework the pion space- and time-like form factors, we
also access information from the radially excited vector meson
wave functions. Indeed, in our approach, in the time-like region
the virtual
photon couples directly to the vector meson resonances, which in turn
decay into $\pi^+\pi^-$. Therefore our microscopic model could represent a useful
tool to address the investigation of the vector meson Green function.
In the present paper, in order to simplify the numerical calculations,
{\em we use a massless pion},
i.e. we evaluate the pion form factor in the chiral limit.
In the time-like region the full result for the pion form factor
is always given by the pair-production process ("Z-diagram") alone,
independently of this approximation. In the
space-like region only the "Z-diagram" contribution
\cite{sawicki,pach98,pion99,ba01,pach02,ba02} survives for
a massless pion \cite{DFPS}. The importance of the "Z-diagram"
contribution to the electromagnetic current for $q^+>0$ was also
recently investigated in the context of the Bethe-Salpeter
equation within the light-front quantization in Ref. \cite{tiburzi}.
This work is organized as follows. In Sec. II, we present the
general form of the covariant electromagnetic form factor of the
pion in impulse approximation, which is our starting point, and
our vector-meson-dominance
approach for the dressed photon vertex.
In Sec. III, we first decompose the triangle diagram
in on-shell and instantaneous contributions. Then we
integrate for $q^+~>~0$ on the light-front energy in the momentum
loop of the triangle diagram, under analytical assumptions for the
vertex functions.
The valence components of the light-front meson and
photon wave functions are defined in Sec. IV.
In Sec. V, we discuss the contribution of the nonvalence component of the
photon to the time-like current, which appears through the vertex
for the radiative emission of pions by a virtual quark inside the
photon. In this section, we also discuss the contribution of the
nonvalence component of the pion wave function to the space-like
current.
In Sec. VI, we introduce the pion and vector
meson wave functions in the expression of the triangle diagram.
The pion time-like and
space-like form factors written in terms of the valence components
of the meson wave functions and of the emission/absorption vertexes are derived
in Sec. VII and Sec. VIII, respectively. In Sec. IX, we briefly revise
the light-front model for the pion and vector mesons and conclude
the derivation of our model for the pion form factor with a discussion
on our treatment of the vertex functions for the instantaneous terms. In Sec. X,
we present the numerical results for the pion form factor in the momentum transfer range
between $-10~({\rm GeV}/c)^2$ and $+10~({\rm GeV}/c)^2$. In Sec. XI, our conclusions are
presented.
\section{Covariant em form factor of the pion}
Our starting point is the covariant expression for the amplitude of the processes
$\pi~\gamma^* \rightarrow \pi'$, or
$\gamma^* \rightarrow \pi \pi'$, where the meson $\pi'$ is a pion
in the elastic case or an antipion in the production process,
evaluated in impulse approximation \cite{mandel}
(see the triangle diagram of Fig. 1). In the time-like region one has (see Fig. 2)
\begin{eqnarray}
j^{\mu} &=& \langle \bar{\pi} \pi | J^{\mu} (q) | 0 \rangle =
~-\imath ~ 2 ~ e ~ \frac{m^2}{f^2_\pi} N_c\int \frac{d^4k}{(2\pi)^4}
~ \Lambda_{\bar{\pi}}(k - P_{\pi},P_{\bar{\pi}}) ~
\overline \Lambda_{\pi}(k,P_{\pi}) ~\times \nonumber \\ &&
Tr[S(k - P_{\pi}) ~ \gamma^5 ~
S(k-q) ~ \Gamma^\mu(k,q) ~ S(k) ~ \gamma^5 ] \ ,
\label{jmu}
\end{eqnarray}
where
$N_c=3$ is the number of colors; $\displaystyle
S(p)=\frac{1}{\rlap\slash p-m + \imath \epsilon} \,$
is the quark propagator with $m$ the mass of the constituent quark;
$q^{\mu}$ is the virtual photon momentum; $P^{\mu}_{\pi}$ and
$P^{\mu}_{\bar{\pi}}$ are the pion momenta.
The factor 2 stems from isospin algebra, since
\begin{eqnarray}
Tr \left [ \frac{\tau _x - \imath \tau _y} {\sqrt 2} ~ \frac {1 + \tau _z} {2} ~
\frac{\tau _x + \imath \tau _y} {\sqrt 2} \right ] ~ = ~ 2 \ \ ,
\label{iso}
\end{eqnarray}
where $( 1 + \tau _z ) / 2$ is the isospin factor of the current
and the other isospin factors in Eq. (\ref{iso}) pertain to the pions.
The function $\overline \Lambda_{\pi}(k,P_{\pi})$ is the momentum component of
the $q\bar{q}$ vertex function for the outgoing
pion, which will be taken as a symmetric function of the $q$,
$\bar{q}$ momenta. In this vertex function, $P_{\pi}$ is the momentum of the
outgoing pion and $k$ is the momentum of the incoming quark (see Fig. 2).
The "bar" notation on the vertex function labels the adjoint Bethe-Salpeter amplitude,
i.e. the solution of a Bethe-Salpeter equation where the
two-body irreducible kernel is placed on the right of the
amplitude, while for the Bethe-Salpeter amplitude it is placed on the left
\cite{lurie,izuber}. This is a
well known property of time orderings implied by the Mandelstam
formula, for initial and final states \cite{mandel,lurie}.
The vertex function is defined by the following equation
\begin{eqnarray}
&& \imath \frac{\rlap\slash{k} +m}{k^{ 2}-m^2+\imath \epsilon} ~\gamma_5~
\Lambda_{\pi}(k,P_{\pi}) ~ \frac{\rlap \slash k^\prime+m}{k^{\prime 2}- m^2 + \imath \epsilon}
~ \delta^4( k^\prime + P_{\pi} - k)
=\frac{1}{(2\pi)^4}\int d^4x~d^4y
\times
\nonumber \\ &&
\exp{i(k^\prime\cdot y - k\cdot x )} ~
\langle 0 | \text{T}\left[ q (x)~\overline
q (y)\right]|P_{\pi}\rangle ~~~,
\label{tpf}
\end{eqnarray}
where $q(x)$ is the quark field.
To obtain the current matrix element for the
space-like region, $P^\mu_\pi$ should be replaced by $-P^\mu_\pi$
and $\bar{\pi}$ by $\pi'$.
Then the pion vertexes $\overline \Lambda_{\pi}(k,P_{\pi})$ and
$\Lambda_{\bar{\pi}}(k - P_{\pi},P_{\bar{\pi}})$ in Eq. (\ref{jmu})
are to be changed with $\Lambda_{\pi}(-k,P_{\pi})$ and
$\overline \Lambda_{\pi^{\prime}}(k + P_{\pi},P_{\pi^{\prime}})$, respectively (see Fig. 3).
The momentum dependence of the vertex functions
$\Lambda_{\bar{\pi}}(k - P_{\pi},P_{\bar{\pi}})$ and $
\overline \Lambda_{\pi}(k,P_{\pi})$ is
expected to regularize the integrals of Eq. (\ref{jmu}).
The dressed photon-vertex, $\Gamma^\mu(k,q)$, is related to the
photon Bethe-Salpeter amplitude, which is defined from the three-point
function in the standard form:
\begin{eqnarray}
&&\imath \frac{\rlap \slash k^\prime+m}{k^{\prime 2}-m^2+\imath \epsilon}
\Gamma^\mu (k,q) \frac{\rlap\slash{k} +m}{k^{ 2}-m^2+\imath
\epsilon} \delta^4( k^\prime+q-k)=\frac{1}{(2\pi)^4}\int
d^4x~d^4x^\prime~d^4x^{\prime\prime}
\times
\nonumber \\ &&
\exp{i(k^\prime\cdot x^\prime -k\cdot x + q\cdot
x^{\prime\prime})} R^\mu_3(x,x^\prime,x^{\prime\prime}) \ .
\label{wf1zg}
\end{eqnarray}
The three-point function is given by
\begin{eqnarray}
R^\mu_3(x,x^\prime,x^{\prime\prime})=
\langle 0 | \text{T}\left[ q (x)~\overline
q(x^{\prime\prime})\gamma^\mu q(x^{\prime \prime})\overline
q(x^\prime)\right]|0\rangle ~~~,
\label{tpf1}
\end{eqnarray}
which is the matrix element between the vacuum states of the time
ordered product of the four quark fields written above
\cite{izuber}.
The central assumption of our paper is the
microscopical description of the dressed photon vertex,
$\Gamma^\mu(k,q)$,
in the processes where a photon with $q^+>0$ decays in a quark-antiquark pair. In these processes
we use for the photon vertex,
dressed by the interaction between the $q\overline q$ pair,
the following covariant vector meson dominance approximation
(see Fig. 4)
\begin{eqnarray}
\Gamma^{\mu}(k,q) &=& \sqrt{2} \sum_{n}
\left [ -g^{\mu \nu} + {q^{\mu} q^{\nu} \over M_n^2} \right ]~ \widehat{V}_{n \nu}(k,k-q)
~ \Lambda_{n}(k,q) ~ { f_{Vn} \over \left [ q^2 -
M^2_n + \imath M_n \tilde{\Gamma}_n(q^2)\right ]} \ ,
\label{cur6}
\end{eqnarray}
where
\begin{eqnarray}
\left [ -g^{\mu \nu} + {q^{\mu} q^{\nu} \over M_n^2} \right ]
{1 \over \left [ q^2 -
M^2_n + \imath M_n \tilde{\Gamma}_n(q^2)\right ]} \ ,
\label{prop}
\end{eqnarray}
is the vector meson propagator \cite{Halzen}.
In Eq. (\ref{cur6}) $f_{Vn}$ is the decay constant of the $n{\rm th}$ vector
meson in a virtual photon, $M_n$ the corresponding
mass, $\Lambda_{n}(k,q)$ gives the momentum dependence
and $\widehat{V}_{n \nu}(k,k-q)$ the Dirac structure of the VM vertex function,
while $\tilde{\Gamma}_n(q^2)$ is the total decay width.
If we approximate Eq. (\ref{cur6}) considering on-shell quantities
for the VM in the numerator,
i.e. if we replace $q^-$ with $P^-_n=(|{\bf q}_{\perp}|^2+M^2_n)/q^+$,
we have
\begin{eqnarray}
\Gamma^{\mu}(k,q) &=& \sqrt{2} \sum_{n, \lambda}
\left [ \epsilon_{\lambda} (P_n)\cdot \widehat{V}_{n}(k,k-P_n) \right ]
\Lambda_{n}(k,P_n) ~ { [\epsilon ^{\mu}_{\lambda}(P_n)]^* f_{Vn} \over \left [ q^2 -
M^2_n + \imath M_n \tilde{\Gamma}_n(q^2)\right ]} \ ,
\label{cur7}
\end{eqnarray}
where the quantity
$\left [ \epsilon_{\lambda}(P_n) \cdot \widehat{V}_{n}(k,k-P_n) \right ] ~
\Lambda_{n}(k,P_n)$ is the VM vertex function and $\epsilon_{\lambda}(P_n)$
the VM polarization. Note that the total momentum for
an on-shell vector meson is
$P^{\mu}_n \equiv \{P^-_n=(|{\bf q}_{\perp}|^2+M^2_n)/q^+,
{\bf P}_{n \perp}={\bf q}_{\perp}, P^+_{n}=q^+ \}$,
while $q^{\mu}
\equiv \{q^-, {\bf q}_{\perp}, q^+ \}$
and that at the production vertex, see Fig. 4, the light-front
three-momentum is conserved.
In Eq. (\ref{cur6}) the sum runs over
all the possible vector mesons. The
vector meson decay constant, $f_{Vn}$, can be obtained from the definition \cite{Jaus99}
\begin{eqnarray} &&
\epsilon^{\mu}_{\lambda} \sqrt{2} f_{V,n} = \langle 0| \bar{q}(0) \gamma^{\mu}
q(0)|\phi _{n,\lambda}\rangle
\label{fVap}
\end{eqnarray}
with $|\phi _{n,\lambda}\rangle$ the vector meson state. A detailed expression for $f_{Vn}$
is given in Appendix A.
The total decay width in the denominator of Eqs. (\ref{prop}) and (\ref{cur7}), $\tilde{\Gamma}_n(q^2)$,
is vanishing in the SL
region. In the TL region it is assumed to be equal to
\begin{eqnarray} &&
\tilde{\Gamma}_n(q^2) = \Gamma_{n} ~ \left[ {p(q^2) \over p(M_{n}^2) } \right]^3
\left[ { M_{n}^2 \over q^2} \right]^{1/2}
\label{Gamma}
\end{eqnarray}
where $p(q^2) = [q^2 - 4 m_\pi^2]^{1/2}/2$ ~ ~ ~ \cite{Saku,Klingl,Benayoun}.
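As an illustration of this running width (function names ours; the $\rho$-like values $M_n=0.775$~GeV, $\Gamma_n=0.149$~GeV and $m_\pi=0.14$~GeV are used only as an example):

```python
def p_cm(q2, m_pi=0.14):
    """Pion momentum in the pair rest frame: p(q^2) = sqrt(q^2 - 4 m_pi^2)/2."""
    return (q2 - 4.0 * m_pi**2) ** 0.5 / 2.0

def running_width(q2, M_n, Gamma_n, m_pi=0.14):
    """Gamma_n(q^2) as defined above; zero below threshold and in the SL region."""
    if q2 <= 4.0 * m_pi**2:
        return 0.0
    return Gamma_n * (p_cm(q2, m_pi) / p_cm(M_n**2, m_pi))**3 * (M_n**2 / q2)**0.5
```

At the resonance pole, $q^2=M_n^2$, the width reduces to the constant $\Gamma_n$, as it should.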
In Ref. \cite{pach97} the
following expression was used for the Dirac structure, $\widehat{V}_{n}(k,k-P_n)$,
of the vector meson vertex :
\begin{eqnarray} &&
\widehat{V}^{\mu}_{n}(k,k') = \gamma^{\mu}-{M_n
\over 2}{k^{\mu}+k'^{\mu} \over P_n \cdot k +mM_n}
\label{gamV}
\end{eqnarray}
where $k' = k - P_n$.
Let us consider, instead of Eq. (\ref{gamV}),
a symmetric form for $\widehat{V}_{n}(k,k-P_n)$:
\begin{eqnarray} &&
\widehat{V}^{\mu}_{n}(k,k-P_n) =
\gamma^{\mu}- M_n ~ {k^{\mu} + k'^{\mu} \over P_n \cdot k - P_n \cdot k' + 2 mM_n} =
\gamma^{\mu} - {k^{\mu} + k'^{\mu} \over M_n + 2 m } \; \; .
\label{gams2}
\end{eqnarray}
If in Eq. (\ref{gams2}) both the CQ's are taken on their
mass shell (i.e., $k^- = k^-_{on} = (|{\bf k}_{\perp}|^2+m^2)/k^+ $)
and the VM mass, $M_n$, is replaced by the free mass,
$M_0$,
of the quarks in a system of total momentum $q^{\mu}$,
\begin{eqnarray}
M_0 = \left [ (k_{on} + (q-k)_{on}) \cdot (k_{on} +( q-k)_{on}) \right ]^{1/2}
\label{M0}
\end{eqnarray}
one obtains
\begin{eqnarray}
\left [ \widehat{V}^{\mu}_{n}(k,k-q)\right ]_{on}
&& = ~ \gamma^{\mu} -
{k^{\mu}_{on}-(q-k)_{on}^{\mu} \over M_0 + 2 m } \; \; .
\label{gams1}
\end{eqnarray}
This form coincides with the on-shell expression given in
Ref. \cite{Jaus90} for the $^3S_1$ vector meson vertex,
but then $\Gamma^{\mu}(k,q)$ of Eq. (\ref{cur7}) is no longer a four-vector.
Let us note that to obtain the pion form factor one actually needs
only one of the components of the current. In the following we will derive the
pion form factor from the {\em plus} component.
In Ref. \cite{DFPS}, to evaluate the pion form factor,
we considered the plus component of Eq. (\ref{cur7}),
where $\Lambda_{n}$ and the VM polarizations were
taken at the vector meson pole, and the
on-shell expression for $\widehat{V}^{\mu}_{n}$, as given by Eq. (\ref{gams1}), was used,
in order to obtain the structure of the VM vertex suggested by the Hamiltonian
LF dynamics.
In Appendix B, starting from Eq. (\ref{cur6}), we propose a current
which satisfies current conservation. In the reference frame, where
$q^+ = M_n > 0$ and ${\bf q }_\perp=0$, the $n{\rm th}$ term of this current has
exactly the same plus component as the $n{\rm th}$ term of the current defined in Eq. (\ref{cur7}).
One might wonder whether a bare $\gamma ^{\mu}$ coupling term should be added to the current
defined in
Eqs. (\ref{cur6}) or (\ref{cur7}). However, as shown in Appendix C, in the case of a massless pion
a bare coupling violates current conservation. Therefore we do not consider this term in
the present paper.
\section{Approximating the triangle diagram on the light-front }
Our aim is to retain the essential physics contained in the triangle
diagram (Fig. 1) and at the same time
to construct a bridge toward the Hamiltonian wave-function language,
while going beyond a simple valence description. To
accomplish these goals and to
eliminate the relative light-front time between the
quarks, we perform the $k^-$ integration in Eq. (\ref{jmu}) with
some assumptions on the analytical structure of
the $\Lambda$ and $\Gamma$ vertices for the pion and the photon.
To be more precise, Eq. (\ref{jmu}) is
evaluated with the assumptions that: (i) the momentum components,
$\Lambda(k,P)$,
of the vertex functions,
both for the pion and the vector mesons,
vanish in the complex $k^-$ plane for
$|k^-|\rightarrow\infty$; and (ii) the contributions of the possible
singularities of $\Lambda(k,P)$ can be neglected. Furthermore, the Dirac
structures of the vector meson vertex
function, $\widehat{V}^{\mu}_{n}(k,k-q)$,
are assumed to be regular functions
of the complex variable $k^-$.
The expressions for $\widehat{V}^{\mu}_{n}(k,k')$ given
by Eqs. (\ref{gams1}) or (\ref{gams2})
obviously fulfill
the requirement that no pole is present in the $k^-$ complex plane.
To clarify the discussion of the $k^-$ integration, it is helpful to first separate
instantaneous and non-instantaneous contributions, using the decomposition of the Dirac
propagator given in Eq. (\ref{inst}). Indeed, this decomposition gives
better control over
possible divergences in both the $k^-$ and the $k^+$ integrations. In particular,
as already mentioned, one should
point out the tight relation between the instantaneous terms and the so-called zero modes,
where $k^+ = 0$. We assume that the behavior of the functions $\Lambda(k,P)$ in $k^+$ is able
to regularize the divergences at the $k^+$ end points \cite{SPPF}.
Since three propagators are present in Eq. (\ref{jmu}), one has a total of
eight contributions. The
contribution with three instantaneous terms vanishes because of the property
$\gamma ^+ ~ \gamma ^+ = 0$, since the combination
$\gamma ^+ ~\gamma ^5 ~\gamma ^+$ appears.
Also the three contributions with two instantaneous terms vanish,
as a consequence of our assumptions on $\Lambda(k,P)$.
Indeed, only a single pole from the
propagators is present in these contributions. Then, since we assume that
the functions $\Lambda(k,P)$ go to zero for
$|k^-|\rightarrow\infty$ and disregard their singularities, we can
perform the integration in the complex $k^-$
plane by closing the contour in the semiplane where no singularity
of the propagators is present,
and we obtain a null result. Moreover, two of these contributions with two instantaneous terms
also vanish identically because of the
presence of the combination $\gamma ^+ ~\gamma ^5 ~\gamma ^+$.
Therefore we are left with four contributions: three contributions with only one
instantaneous term and one contribution with no instantaneous term.
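The vanishing of all terms containing the combination $\gamma^+\gamma^5\gamma^+$ follows from $\gamma^+\gamma^+ = 0$ together with the anticommutation of $\gamma^5$ and $\gamma^+$. As an explicit check, the sketch below builds the Dirac matrices in the standard Dirac representation; the light-front convention $\gamma^+ = \gamma^0 + \gamma^3$ is assumed here for illustration.

```python
import numpy as np

# Dirac representation of the gamma matrices.
I2 = np.eye(2, dtype=complex)
Z2 = np.zeros((2, 2), dtype=complex)
s1 = np.array([[0, 1], [1, 0]], dtype=complex)
s2 = np.array([[0, -1j], [1j, 0]], dtype=complex)
s3 = np.array([[1, 0], [0, -1]], dtype=complex)

gamma0 = np.block([[I2, Z2], [Z2, -I2]])
gamma1 = np.block([[Z2, s1], [-s1, Z2]])
gamma2 = np.block([[Z2, s2], [-s2, Z2]])
gamma3 = np.block([[Z2, s3], [-s3, Z2]])
gamma5 = 1j * gamma0 @ gamma1 @ gamma2 @ gamma3

# Light-front plus component (assumed convention: gamma^+ = gamma^0 + gamma^3).
gamma_plus = gamma0 + gamma3
```

One finds that $\gamma^+\gamma^+$ and $\gamma^+\gamma^5\gamma^+$ are identically zero, so the triple-instantaneous term and two of the double-instantaneous terms drop out, as stated above.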
To evaluate the triangle diagram we treat separately the time-like
case and the space-like case.
\subsection{Time-like case}
In the time-like case, one has $q^{\mu} = P^{\mu}_{\pi} +
P^{\mu}_{\bar{\pi}}$, and $q^+ > 0$ .
Equation (\ref{jmu}) written in light-front variables becomes
(the Jacobian for the transformation to the light-front variables is 1/2):
\begin{eqnarray}
&&j^{\mu} = ~-\imath \frac{e} {(2\pi)^4} \frac{m^2}{f^2_\pi} N_c~
\int \frac{dk^- dk^+ d{\bf k}_{\perp}}{(k^+ - P^+_{\pi}) k^+ (k^+
- q^+)} ~ Tr[{\cal O}^{\mu}] ~ \times \nonumber \\ &&
\frac{\Lambda_{\bar{\pi}}(k - P_{\pi},P_{\bar{\pi}})
\overline\Lambda_{\pi}(k,P_{\pi})}{(k^- - k^-_{on} + \frac{\imath
\epsilon}{k^+}) (k^- - q^- -(k - q)^-_{on} +
\frac{\imath\epsilon}{k^+ - q^+}) (k^- -P^-_{\pi} - (k -
P_{\pi})^-_{on} + \frac{\imath\epsilon}{k^+ - P^+_{\pi}})} \ .
\label{jmuA}
\end{eqnarray}
The on-mass-shell values of the
minus-components of the momenta in Eq. (\ref{jmuA}) are given by
\begin{eqnarray}
k^-_{on}=\frac{{\bf k}_{\perp}^{2}+m^2}{k^+} \ , \quad
(k-q)^-_{on}=\frac{({\bf k-q})_{\perp}^{2}+m^2}{k^+ - q^+} \ ,
\quad \
(k - P_{\pi})^-_{on} = \frac{({\bf k - P}_{\pi})_{\perp}^{2}+m^2}{k^+ - P^+_{\pi}} \ ,
\label{onek}
\end{eqnarray}
and the operator
${\cal O}^{\mu}$ is defined as follows
\begin{eqnarray}
{\cal O}^{\mu} ~ = ~ (\rlap\slash k - \rlap\slash P_{\pi} + m)
~\gamma^5 (\rlap\slash k - \rlap\slash q + m)~ \Gamma^\mu(k,q)
~(\rlap\slash k + m)~ \gamma^5 \ .
\label{O}
\end{eqnarray}
Let us decompose the propagators into instantaneous and on-shell
parts (see Eq. (\ref{inst})), as discussed at the beginning of this section.
Then Eq. (\ref{jmuA}) becomes
\begin{eqnarray}
&&j^{\mu} = {\cal J}^{\mu}_{on} + {\cal J}^{\mu}_{1} + {\cal J}^{\mu}_{2}
+ {\cal J}^{\mu}_{3}
\label{jmuAA}
\end{eqnarray}
where ${\cal J}^{\mu}_{on}$ represents the on-shell contribution
and ${\cal J}^{\mu}_{i} (i=1,2,3)$ represent the contributions with one instantaneous term.
Then we have
\begin{eqnarray}
&&{\cal J}^{\mu}_{on} = -\imath \frac{e} {(2\pi)^4} \frac{m^2}{f^2_\pi} N_c~
\int \frac{dk^- dk^+ d{\bf k}_{\perp}}{(k^+ - P^+_{\pi}) k^+ (k^+
- q^+)} ~ \Lambda_{\bar{\pi}}(k - P_{\pi},P_{\bar{\pi}}) ~
\overline\Lambda_{\pi}(k,P_{\pi}) ~ {\cal T}^{\mu}_{on}
\label{jmuAon}
\end{eqnarray}
and
\begin{eqnarray}
&&{\cal J}^{\mu}_{i} = -\imath \frac{e} {(2\pi)^4} \frac{m^2}{f^2_\pi} N_c~
\int \frac{dk^- dk^+ d{\bf k}_{\perp}}{(k^+ - P^+_{\pi}) k^+ (k^+
- q^+)} ~ \Lambda_{\bar{\pi}}(k - P_{\pi},P_{\bar{\pi}}) ~
\overline\Lambda_{\pi}(k,P_{\pi}) ~ {\cal T}^{\mu}_i
\label{jmuAi}
\end{eqnarray}
where
\begin{eqnarray}
{\cal T}^{\mu}_{on} =
\frac{Tr[[(\rlap\slash k - \rlap\slash P_{\pi})_{on} + m]
~\gamma^5 [(\rlap\slash k - \rlap\slash q)_{on} + m]~ \Gamma^\mu(k,q)
~(\rlap\slash k_{on} + m)~ \gamma^5 ]}
{(k^- - k^-_{on} + \frac{\imath \epsilon}{k^+})
(k^- - q^- -(k - q)^-_{on} + \frac{\imath\epsilon}{k^+ - q^+})
(k^- -P^-_{\pi} - (k -
P_{\pi})^-_{on} + \frac{\imath\epsilon}{k^+ - P^+_{\pi}})}
\label{TmuAon}
\end{eqnarray}
\begin{eqnarray}
&&{\cal T}^{\mu}_{1} =
\frac{Tr[ \gamma^+
~\gamma^5 ~ [(\rlap\slash k - \rlap\slash q)_{on} + m]~ \Gamma^\mu(k,q)
~(\rlap\slash k_{on} + m)~ \gamma^5 ]}{2 ~ (k^- - k^-_{on} + \frac{\imath
\epsilon}{k^+}) (k^- - q^- -(k - q)^-_{on} +
\frac{\imath\epsilon}{k^+ - q^+}) }
\label{jmuA1}
\end{eqnarray}
\begin{eqnarray}
&&{\cal T}^{\mu}_{2} =
\frac{Tr[[(\rlap\slash k - \rlap\slash P_{\pi})_{on} + m]
~\gamma^5 ~ \gamma^+ ~ \Gamma^\mu(k,q)
~(\rlap\slash k_{on} + m)~ \gamma^5 ]}{2 ~ (k^- - k^-_{on} + \frac{\imath
\epsilon}{k^+}) (k^- -P^-_{\pi} - (k -
P_{\pi})^-_{on} + \frac{\imath\epsilon}{k^+ - P^+_{\pi}})}
\label{jmuA2}
\end{eqnarray}
\begin{eqnarray}
&&{\cal T}^{\mu}_{3} =
\frac{Tr[[(\rlap\slash k - \rlap\slash P_{\pi})_{on} + m]
~\gamma^5 ~ [(\rlap\slash k - \rlap\slash q)_{on} + m]~ \Gamma^\mu(k,q)
~ \gamma^+ ~ \gamma^5 ]}{2 ~ (k^- - q^- -(k - q)^-_{on} +
\frac{\imath\epsilon}{k^+ - q^+}) (k^- -P^-_{\pi} - (k -
P_{\pi})^-_{on} + \frac{\imath\epsilon}{k^+ - P^+_{\pi}})} \ .
\label{jmuA3}
\end{eqnarray}
In Eqs. (\ref{jmuAon}, \ref{jmuAi}) the three propagators of the triangle diagram
generate three poles:
\begin{eqnarray}
k^-_{(1)} &=& k^-_{on}~ - ~ \frac{\imath \epsilon}{k^+} \ , \nonumber \\
k^-_{(2)} &=& q^- ~+ ~(k - q)^-_{on}~ - ~\frac{\imath\epsilon}{k^+
- q^+} \ ,
\nonumber \\
k^-_{(3)} &=& P^-_{\pi} ~+~ (k - P_{\pi})^-_{on} ~ - ~
\frac{\imath\epsilon}{k^+ - P^+_{\pi}} \ .
\label{po}
\end{eqnarray}
Within our assumptions on the vertex functions,
$\Lambda(k,P)$, if $k^+ < 0$ there are no
poles in the lower complex semi-plane of $k^-$ (cf. Eq. (\ref{po})).
Therefore, if the $k^-$ integration is performed
by closing the contour of integration in the lower
semi-plane, a vanishing result is obtained. Furthermore, if $k^+ >
q^+$, there are no poles in the upper complex semi-plane and a
vanishing result is obtained by closing the contour in the upper
semi-plane. Then, the integrals (\ref{jmuAon}, \ref{jmuAi})
have contributions only for $k^+$ in
the range $0 < k^+ < q^+ $. The integration range can be
decomposed into two intervals, $0 < k^+ < P^+_{\pi}$ and $P^+_{\pi}$
< k^+ < q^+$. In the first one, if the $k^-$ integration contour is
closed in the lower semi-plane, only the pole $k^-_{(1)}$ falls
within the integration contour, while in the second one, if the
integration contour is closed in the upper semi-plane, only the
pole $k^-_{(2)}$ falls within the integration contour. Then
in the range $0 < k^+ < P^+_{\pi}$ one has contributions from
${\cal J}^{\mu}_{on}$, ${\cal J}^{\mu}_{1}$ and
${\cal J}^{\mu}_{2}$, while in the range $P^+_{\pi} < k^+ < q^+$ one has contributions from
${\cal J}^{\mu}_{on}$, ${\cal J}^{\mu}_{1}$ and
${\cal J}^{\mu}_{3}$.
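The pole bookkeeping of Eq. (\ref{po}) can be summarized by a small illustrative function that records, for a given $k^+$, the half-plane in which each pole lies: each pole carries $-\imath\epsilon/d$, with $d$ the corresponding plus-momentum denominator, so $d > 0$ places the pole in the lower half-plane and $d < 0$ in the upper one. The function below is a sketch for this purpose only, not part of the model.

```python
def pole_halfplanes(k_plus, q_plus, P_pi_plus):
    """Half-plane of the poles k^-_(1), k^-_(2), k^-_(3) of Eq. (po)
    in the complex k^- plane, from the sign of each denominator in -i*eps/d:
    d > 0 -> lower half-plane, d < 0 -> upper half-plane."""
    denominators = (k_plus, k_plus - q_plus, k_plus - P_pi_plus)
    return tuple("lower" if d > 0 else "upper" for d in denominators)
```

For $k^+ < 0$ all poles lie in the upper half-plane (closing the contour below gives zero); for $k^+ > q^+$ all lie in the lower half-plane; for $0 < k^+ < P^+_\pi$ only $k^-_{(1)}$ lies below the real axis, and for $P^+_\pi < k^+ < q^+$ only $k^-_{(2)}$ lies above it, reproducing the discussion above.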
Let us introduce the free mass, $M_{0}(k^+, {\bf
k}_{\perp}; P^+, {\bf P}_{\perp} )$, of a $q\bar{q}$
pair in a system of mass $M$ and total momentum $P$, with kinematical
momenta $(k^+, {\bf k}_{\perp})$ and $(P^+-k^+, {\bf P}_\perp
-{\bf k}_{\perp})$ for the two quarks:
\begin{eqnarray}
M_{0}^2(k^+, {\bf k}_{\perp}; P^+, {\bf P}_{\perp} )
= \frac{{\bf k}^2_\perp+m^2}{x} + \frac{
({\bf P-k})^2_\perp + m^2}{1-x}- {\bf P}^2_\perp \
\label{M0q}
\end{eqnarray}
where $x = k^+/P^+$, with $0 \ \le \ x \ \le \ 1$. Using this
definition of the free mass, the following equations hold:
\begin{eqnarray}
& &{1 \over (P^{-} -(P-k)^-_{on}- k^-_{on})}={P^+ \over (M^2 + {\bf P}^2_{\perp} -
\frac{({\bf P-k})_{\perp}^{2}+m^2}{(1-x)}- \frac{{\bf k}_{\perp}^{2}+m^2}{x})}=
\nonumber
\\ & & ={P^+ \over (M^2 - M^2_0(k^+, {\bf k}_{\perp}; P^+, {\bf P}_{\perp}))}
\label{M01}
\end{eqnarray}
\begin{eqnarray}
& &{ 1 \over \left [ P^{\prime -} - (P^{\prime} - (k - P))^-_{on}
- (k - P)^-_{on} \right ] } =
{ P^{\prime +}
\over (M^2 + {\bf P^{\prime}}^{2}_{\perp} -
\frac{({\bf P - k})_{\perp}^{2} + m^2}{x^{\prime}} -
\frac{({\bf P^{\prime} - ({\bf k - P})})_{\perp}^{2} +
m^2}{(1-x^{\prime})}) } = \nonumber
\\ & & ={P^{\prime +} \over \left [ M^2 -
M^2_0((k - P)^+, ({\bf k - P})_{\perp}; P^{\prime +}, {\bf
P^{\prime}}_{\perp}) \right ]}
\label{M02}
\end{eqnarray}
where
$x^{\prime} = (k^+ - P^+)/P^{\prime +}$.
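The identity (\ref{M01}) can be checked numerically. The sketch below is purely illustrative: the sample momenta (GeV units) and the quark mass $m = 0.220$ GeV are arbitrary values, with transverse vectors encoded as $(x,y)$ tuples.

```python
def m0_squared(k_plus, k_perp, P_plus, P_perp, m=0.220):
    """Free mass squared of Eq. (M0q):
    M0^2 = (k_perp^2 + m^2)/x + ((P - k)_perp^2 + m^2)/(1 - x) - P_perp^2,
    with x = k^+/P^+ and transverse vectors given as (x, y) tuples."""
    x = k_plus / P_plus
    k2 = k_perp[0]**2 + k_perp[1]**2
    rel2 = (P_perp[0] - k_perp[0])**2 + (P_perp[1] - k_perp[1])**2
    P2 = P_perp[0]**2 + P_perp[1]**2
    return (k2 + m**2) / x + (rel2 + m**2) / (1.0 - x) - P2
```

Inserting $M_0^2$ in $P^+/(M^2 - M_0^2)$ reproduces the propagator denominator $P^- - (P-k)^-_{on} - k^-_{on}$ on the left-hand side of Eq. (\ref{M01}); for a configuration at $x = 1/2$ with vanishing transverse momenta the free mass reduces to $2m$.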
Then, performing the $k^-$ integration and using Eqs. (\ref{M01}, \ref{M02}),
from Eq. (\ref{jmuAA}) we obtain
\begin{eqnarray}
j^{\mu} &=&
\frac{ e } {(2\pi)^3} \frac{m^2}{f^2_\pi} N_c~\int_0^{q^+} \frac{ dk^+
d{\bf k}_{\perp}~ }{(k^+ - P^+_{\pi}) k^+ (q^+ -
k^+)}~
\left \{ \Theta (P^+_{\pi} -k^+) ~ I^{\mu}_1
+ \Theta (k^+ - P^+ _{\pi} ) ~ I^{\mu}_2 \right \} \ .
\label{jmuB}
\end{eqnarray}
The quantities $I^{\mu}_1$ and $I^{\mu}_2$ in Eq. (\ref{jmuB}) are defined as follows
\begin{eqnarray}
I^{\mu}_1 &=&
\left [ \overline\Lambda_{\pi}(k,P_{\pi})
\Lambda_{\bar{\pi}}(k - P_{\pi},P_{\bar{\pi}}) \right ] _{k^-=k^-_{on}}
\left [ T^{\mu}_{on, (1)} ~ + ~ T^{\mu}_{1, (1)} ~ + T^{\mu}_{2, (1)} \right ]
\label{jmu1}
\end{eqnarray}
\begin{eqnarray}
I^{\mu}_2 &=&
\left [ \overline\Lambda_{\pi}(k,P_{\pi})
\Lambda_{\bar{\pi}}(k - P_{\pi},P_{\bar{\pi}}) \right ] _{k^-=q^- + (k - q)^-_{on}}
\left [ T^{\mu}_{on, (2)} ~ + ~ T^{\mu}_{1, (2)} ~ + T^{\mu}_{3, (2)} \right ]
\label{jmu2}
\end{eqnarray}
where
\begin{eqnarray}
T^{\mu}_{on, (1)} = ~ q^+ ~
P^+_{\pi} ~ \frac{ Tr[[(\rlap\slash k - \rlap\slash P_{\pi})_{on} + m]
~\gamma^5 [(\rlap\slash k - \rlap\slash q)_{on} + m]~
\Gamma^{\mu}(1)
~(\rlap\slash k_{on} + m)~ \gamma^5 ] }
{ \left [ M^2_0(k^+, {\bf k}_{\perp}; q^+, {\bf q}_{\perp}) - q^2
-\imath \epsilon \right]~
\left [ ~ M^2_0(k^+, {\bf k}_{\perp}; P^+_{\pi}, {\bf P}_{\pi \perp}) ~ - m^2_\pi ~ \right ] }
\label{Ton1}
\end{eqnarray}
\begin{eqnarray}
T^{\mu}_{on, (2)} = \frac{ q^+ ~ P^+_{\bar{\pi}} ~ ~
Tr[[(\rlap\slash k - \rlap\slash P_{\pi})_{on} + m]
~\gamma^5 [(\rlap\slash k - \rlap\slash q)_{on} + m]~
\Gamma^{\mu}(2)
~(\rlap\slash k_{on} + m)~ \gamma^5 ] }
{\left [ M^2_0(k^+, {\bf k}_{\perp}; q^+, {\bf q}_{\perp}) - q^2
- \imath \epsilon \right]
\left [ ~ m^2_\pi - M^2_0((k^+ - P^+ _{\pi} ), ({\bf k - P_{\pi}})_{\perp};
P^+_{\bar{\pi}}, {\bf P}_{{\bar{\pi}} \perp}) ~ \right ] }
\label{Ton2}
\end{eqnarray}
\begin{eqnarray}
T^{\mu}_{1, (i)} = q^+ ~
\frac{ Tr[ \gamma^+
~\gamma^5 [(\rlap\slash k - \rlap\slash q)_{on} + m]~
\Gamma^{\mu}(i)
~(\rlap\slash k_{on} + m)~ \gamma^5 ] }{ 2 ~
\left [ M^2_0(k^+, {\bf k}_{\perp}; q^+, {\bf q}_{\perp}) - q^2
- \imath \epsilon \right]} \quad \quad ( i = 1, 2 )
\label{T11}
\end{eqnarray}
\begin{eqnarray}
T^{\mu}_{2, (1)} &=&
P^+_{\pi} ~ \frac{ Tr[[(\rlap\slash k - \rlap\slash P_{\pi})_{on} + m]
~\gamma^5 ~ \gamma^+ ~
\Gamma^{\mu}(1)
~(\rlap\slash k_{on} + m)~ \gamma^5 ] }
{2 ~ \left [ ~ M^2_0(k^+,{\bf k}_{\perp}; P^+_{\pi}, {\bf P}_{\pi \perp}) ~ - m^2_\pi ~ \right ]}
\label{T21}
\end{eqnarray}
\begin{eqnarray}
T^{\mu}_{3, (2)} &=&
P^+_{\bar{\pi}} ~ \frac{ Tr[[(\rlap\slash k - \rlap\slash P_{\pi})_{on} + m]
~\gamma^5 [(\rlap\slash k - \rlap\slash q)_{on} + m]~
\Gamma^{\mu}(2)
~ \gamma^+ ~ \gamma^5 ] }{ 2 ~
\left [ ~ M^2_0((k^+ - P^+ _{\pi} ), ({\bf k - P_{\pi}})_{\perp};
P^+_{\bar{\pi}}, {\bf P}_{{\bar{\pi}} \perp}) ~ - m^2_\pi ~ \right ]}
\label{T32}
\end{eqnarray}
with
\begin{eqnarray}
\Gamma^{\mu}(i) = \Gamma^\mu(k^+,{\bf k}_{\perp}, k^-=k^-_{(i)},q)
\quad \quad ( i = 1, 2 )
\label{Gi}
\end{eqnarray}
The first term of Eq. (\ref{jmuB}), with $k^+ - P^+_{\pi} \le 0$,
and the second term, with $k^+ - P^+_{\pi} \ge 0$, are represented in Fig. 2
by the diagrams (a), with the arrow of $k^+ - P^+_{\pi}$ pointing from left to right,
and (b), with the arrow of $k^+ - P^+_{\pi}$ pointing from right to left, respectively.
In the first term only the vertex function
$\overline\Lambda_{\pi}(k,P_{\pi})$ has the momentum fraction $k^+ / P^+ _{\pi}$
in the {\em valence-sector} range $[0, 1]$ and in the second term only the vertex function
$\Lambda_{\bar{\pi}}(k - P_{\pi},P_{\bar{\pi}})$ has the momentum fraction
$(k^+ - P^+ _{\pi}) / P^+_{\bar{\pi}}$
in the {\em valence-sector} range $[0, 1]$.
\subsection{Space-like case}
In the space-like case, one has $P^{\mu}_{\pi^{\prime}} =
P^{\mu}_{\pi} + q^{\mu}$. Then the expression for the triangle
diagram can be obtained from Eq. (\ref{jmu}) replacing
$-P^{\mu}_{\pi}$ with $P^{\mu}_{\pi}$,
$\bar{\pi}$ with $\pi'$ and
the pion vertices $\overline \Lambda_{\pi}(k,P_{\pi})$ and
$\Lambda_{\bar{\pi}}(k - P_{\pi},P_{\bar{\pi}})$
with $\Lambda_{\pi}(-k,P_{\pi})$ and
$\overline \Lambda_{\pi^{\prime}}(k + P_{\pi},P_{\pi^{\prime}})$, respectively:
\begin{eqnarray}
j^{\mu} &=&-\imath 2 e \frac{m^2}{f^2_\pi} N_c \int
\frac{d^4k}{(2\pi)^4} Tr[S(k+P_{\pi}) \gamma^5
S(k-q)~\Gamma^\mu(k,q)~S(k) \gamma^5 ]
\overline\Lambda_{\pi^{\prime}}(k + P_{\pi},P_{\pi^{\prime}})
\Lambda_{\pi}(-k, P_{\pi}) \
\nonumber \\
&=& -\imath \frac{ e} {(2\pi)^4} \frac{m^2}{f^2_\pi} N_c~ \int
\frac{dk^- dk^+ d{\bf k}_{\perp}}{(k^+ + P^+_{\pi}) k^+ (k^+ - q^+)} ~
Tr[{\cal O ^{\prime}}^{\mu}] ~
\times \nonumber \\ &&
\frac{\overline\Lambda_{\pi^{\prime}}(k +
P_{\pi},P_{\pi^{\prime}}) \Lambda_{\pi}(-k,P_{\pi}) }{(k^- -
k^-_{on} + \frac{\imath \epsilon}{k^+}) (k^- - q^- -(k - q)^-_{on}
+ \frac{\imath\epsilon}{k^+ - q^+}) (k^- + P^-_{\pi} - (k +
P_{\pi})^-_{on} + \frac{\imath\epsilon}{k^+ + P^+_{\pi}})} \ ,
\label{jmuE}
\end{eqnarray}
where
\begin{eqnarray}
{\cal O ^{\prime}}^{\mu} ~ = ~ (\rlap\slash k + \rlap\slash P_{\pi} + m)
~\gamma^5 (\rlap\slash k - \rlap\slash q + m)~ \Gamma^\mu(k,q)
~(\rlap\slash k + m)~ \gamma^5 \ .
\label{O1}
\end{eqnarray}
As in the time-like case, let us decompose the propagators in on-shell and instantaneous parts.
Then Eq. (\ref{jmuE}) becomes
\begin{eqnarray}
&&j^{\mu} = {\cal J'}^{\mu}_{on} + {\cal J'}^{\mu}_{1} + {\cal J'}^{\mu}_{2}
+ {\cal J'}^{\mu}_{3}
\label{jmuAS}
\end{eqnarray}
where ${\cal J'}^{\mu}_{on}$ represents the on-shell contribution
and ${\cal J'}^{\mu}_{i} (i=1,2,3)$ represent the contributions with one instantaneous term.
Then we have
\begin{eqnarray}
{\cal J'}^{\mu}_{on} = -\imath \frac{e} {(2\pi)^4} \frac{m^2}{f^2_\pi} N_c~
\int \frac{dk^- dk^+ d{\bf k}_{\perp}}{(k^+ + P^+_{\pi}) k^+ (k^+
- q^+)} ~ \overline\Lambda_{\pi^{\prime}}(k +
P_{\pi},P_{\pi^{\prime}}) \Lambda_{\pi}(-k,P_{\pi}) ~ {\cal T'}^{\mu}_{on}
\label{jmuSon}
\end{eqnarray}
and
\begin{eqnarray}
{\cal J'}^{\mu}_{i} = -\imath \frac{e} {(2\pi)^4} \frac{m^2}{f^2_\pi} N_c~
\int \frac{dk^- dk^+ d{\bf k}_{\perp}}{(k^+ + P^+_{\pi}) k^+ (k^+
- q^+)} ~ \overline\Lambda_{\pi^{\prime}}(k +
P_{\pi},P_{\pi^{\prime}}) \Lambda_{\pi}(-k,P_{\pi}) ~ {\cal T'}^{\mu}_i
\label{jmuSi}
\end{eqnarray}
where
\begin{eqnarray}
{\cal T'}^{\mu}_{on} =
\frac{Tr[[(\rlap\slash k + \rlap\slash P_{\pi})_{on} + m]
~\gamma^5 [(\rlap\slash k - \rlap\slash q)_{on} + m]~ \Gamma^\mu(k,q)
~(\rlap\slash k_{on} + m)~ \gamma^5 ]}
{(k^- - k^-_{on} + \frac{\imath
\epsilon}{k^+}) (k^- - q^- -(k - q)^-_{on} +
\frac{\imath\epsilon}{k^+ - q^+}) (k^- + P^-_{\pi} - (k +
P_{\pi})^-_{on} + \frac{\imath\epsilon}{k^+ + P^+_{\pi}})}
\label{tmuSon}
\end{eqnarray}
\begin{eqnarray}
&&{\cal T'}^{\mu}_{1} = {\cal T}^{\mu}_{1} =
\frac{Tr \left [ \gamma^+
~\gamma^5 ~ [(\rlap\slash k - \rlap\slash q)_{on} + m]~ \Gamma^\mu(k,q)
~(\rlap\slash k_{on} + m)~ \gamma^5 \right ]}{2 ~ (k^- - k^-_{on} + \frac{\imath
\epsilon}{k^+}) (k^- - q^- -(k - q)^-_{on} +
\frac{\imath\epsilon}{k^+ - q^+}) }
\label{jmuS1}
\end{eqnarray}
\begin{eqnarray}
&&{\cal T'}^{\mu}_{2} =
\frac{Tr \left [[(\rlap\slash k + \rlap\slash P_{\pi})_{on} + m]
~\gamma^5 ~ \gamma^+ ~ \Gamma^\mu(k,q)
~(\rlap\slash k_{on} + m)~ \gamma^5 \right ]}{2 ~ (k^- - k^-_{on} + \frac{\imath
\epsilon}{k^+}) (k^- + P^-_{\pi} - (k +
P_{\pi})^-_{on} + \frac{\imath\epsilon}{k^+ + P^+_{\pi}})}
\label{jmuS2}
\end{eqnarray}
\begin{eqnarray}
&&{\cal T'}^{\mu}_{3} =
\frac{Tr \left [[(\rlap\slash k + \rlap\slash P_{\pi})_{on} + m]
~\gamma^5 ~ [(\rlap\slash k - \rlap\slash q)_{on} + m]~ \Gamma^\mu(k,q)
~ \gamma^+ ~ \gamma^5 \right ]}{2 ~ (k^- - q^- -(k - q)^-_{on} +
\frac{\imath\epsilon}{k^+ - q^+}) (k^- + P^-_{\pi} - (k +
P_{\pi})^-_{on} + \frac{\imath\epsilon}{k^+ + P^+_{\pi}})} \quad \ .
\label{jmuS3}
\end{eqnarray}
In Eq. (\ref{jmuE}) the quark propagators generate three poles:
\begin{eqnarray}
k^-_{(1)} &=& k^-_{on}~ - ~ \frac{\imath \epsilon}{k^+} \ , \nonumber \\
k^-_{(2)} &=& q^- ~+ ~(k - q)^-_{on}~ - ~\frac{\imath\epsilon}{k^+
- q^+} \ ,
\nonumber \\
k^-_{(4)} &=& -P^-_{\pi} ~+~ (k + P_{\pi})^-_{on} ~ - ~
\frac{\imath\epsilon}{k^+ + P^+_{\pi}} \quad \ ,
\label{pos}
\end{eqnarray}
where
\begin{eqnarray}
(k + P_{\pi})^-_{on} = \frac{({\bf k + P}_{\pi})_{\perp}^{2}+m^2}{k^+ + P^+_{\pi}}
\label{posi}
\end{eqnarray}
Let us assume $q^+\ge 0$. Therefore, if $k^+ > q^+$ and within the
hypotheses stated at the beginning of this Section, there are no
poles in the upper complex semi-plane and a vanishing result is
obtained by closing the $k^-$ integration contour in this semi-plane.
Furthermore, if $k^+ < -P^+_{\pi}$, there are no poles in the lower
complex semi-plane of $k^-$. Therefore, one obtains a vanishing
result by closing the contour of integration in the lower
semi-plane. Then, the integral has contributions only for $k^+$ in
the range $-P^+_{\pi} < k^+ < q^+ $. The integration range can be
decomposed into two intervals, $-P^+_{\pi} \le k^+ \le 0 $ and
$0 < k^+ \le q^+$. In the first one, if the integration contour is closed
in the lower semi-plane, only the pole $k^-_{(4)}$ falls within
the integration contour, while in the second one, if the
integration contour is closed in the upper semi-plane, only the
pole $k^-_{(2)}$ falls within the integration contour.
Then
in the range $-P^+_{\pi} \le k^+ \le 0 $ one has contributions from
${\cal J'}^{\mu}_{on}$, ${\cal J'}^{\mu}_{2}$ and
${\cal J'}^{\mu}_{3}$, while in the range
$0 < k^+ \le q^+$ one has contributions from
${\cal J'}^{\mu}_{on}$, ${\cal J'}^{\mu}_{1}$ and
${\cal J'}^{\mu}_{3}$.
As a consequence, $j^{\mu}$ can be decomposed as follows
\begin{eqnarray}
j^{\mu} = j^{(I) \mu} + j^{(II) \mu} \ ,
\label{jIeII}
\end{eqnarray}
where $j^{(I) \mu}$ has the integration on $k^+$ constrained by
$-P^+_{\pi} \le k^+ \le 0$, while $j^{(II) \mu}$ has the
integration on $k^+$ in the interval $0 < k^+ < q^+$. As illustrated below,
the valence
component of the pion contributes to $j^{(I) \mu}$ only, while
$j^{(II) \mu}$ is the contribution
of the pair production mechanism from an incoming virtual photon
with $q^+ \ > \ 0$ \cite{sawicki,pach98,pion99,ba01,DFPS,pach02,ba02}.
Performing the $k^-$ integration, the two contributions to $j^{\mu}$
are given by the following expressions
\begin{eqnarray}
j^{(I) \mu} &=&
~ \frac{ e } {(2\pi)^3} \frac{m^2}{f^2_\pi} N_c~\int_{-P^+_{\pi}}^{0}
\frac{ dk^+ d{\bf k}_{\perp}}{(k^+ + P^+_{\pi}) ~ k^+ ~ (q^+ - k^+)} ~
\left [ T^{\prime \mu}_{on, (4)} ~ + ~ T^{\prime \mu}_{2, (4)} ~ + ~ T^{\prime \mu}_{3, (4)} \right ] ~
\times \nonumber \\ && \nonumber \\ && ~
\left [ \overline\Lambda_{\pi \prime}(k + P_{\pi}, P_{\pi \prime})\Lambda_{\pi}(-k, P_{\pi})
\right ] _{k^- = -P^-_{\pi} + (k + P_{\pi})^-_{on}}
\label{jmuF}
\end{eqnarray}
\begin{eqnarray}
j^{(II) \mu} &=&
- \frac{ e } {(2\pi)^3} \frac{m^2}{f^2_\pi} N_c~\int_{0}^{q^+}
\frac{ dk^+ d{\bf k}_{\perp}}{(k^+ + P^+_{\pi}) ~ k^+ ~ (q^+ - k^+)}~
\left [ T^{\prime \mu}_{on, (2)} ~ + ~ T^{\prime \mu}_{1, (2)} ~ + ~ T^{\prime \mu}_{3, (2)} \right ]
\times
\nonumber \\ && \nonumber \\ &&
\left [ \overline\Lambda_{\pi \prime}(k + P_{\pi}, P_{\pi \prime})
\Lambda_{\pi}(-k, P_{\pi})
\right ] _{k^- = q^- + (k - q)^-_{on}} \quad \ ,
\label{jmuG}
\end{eqnarray}
where
\begin{eqnarray}
T^{\prime \mu}_{on, (4)} =
\frac{ Tr \left [[(\rlap\slash k + \rlap\slash P_{\pi})_{on} + m]
~\gamma^5 [(\rlap\slash k - \rlap\slash q)_{on} + m]~
\Gamma^{\mu}(4)
~(\rlap\slash k_{on} + m)~ \gamma^5 \right ] }
{ \left [ P^-_{\pi} ~-~ (k + P_{\pi})^-_{on} + k^-_{on} \right] ~
\left [ P^-_{\pi \prime} ~-~ (k + P_{\pi})^-_{on} +
(k - q)^-_{on} \right] }
\label{TonI}
\end{eqnarray}
\begin{eqnarray}
T^{\prime \mu}_{on, (2)} =
\frac{ Tr \left [[(\rlap\slash k + \rlap\slash P_{\pi})_{on} + m]
~\gamma^5 [(\rlap\slash k - \rlap\slash q)_{on} + m]~
\Gamma^{\mu}(2)
~(\rlap\slash k_{on} + m)~ \gamma^5 \right ] }
{\left [ q^- + (k - q)^-_{on} - k^-_{on} + \imath \epsilon \right ] ~
\left [ P^-_{\pi \prime} ~-~ (k + P_{\pi})^-_{on} +
(k - q)^-_{on} \right ]}
\label{TonII}
\end{eqnarray}
\begin{eqnarray}
T^{\prime \mu}_{1, (2)} =
\frac{ Tr \left [ \gamma^+
~\gamma^5 [(\rlap\slash k - \rlap\slash q)_{on} + m]~
\Gamma^{\mu}(2)
~(\rlap\slash k_{on} + m)~ \gamma^5 \right ] }
{ 2 ~ \left [ q^- + (k - q)^-_{on} - k^-_{on} + \imath \epsilon \right]}
\label{T1II}
\end{eqnarray}
\begin{eqnarray}
T^{\prime \mu}_{2, (4)} &=&
- ~ \frac{ Tr \left [[(\rlap\slash k + \rlap\slash P_{\pi})_{on} + m]
~\gamma^5 ~ \gamma^+ ~
\Gamma^{\mu}(4)
~(\rlap\slash k_{on} + m)~ \gamma^5 \right ] }
{ 2 ~ \left [ ~ P^-_{\pi} ~-~ (k + P_{\pi})^-_{on} + k^-_{on} ~ \right ]}
\label{T2I}
\end{eqnarray}
\begin{eqnarray}
T^{\prime \mu}_{3, (4)} &=&
- ~ \frac{ Tr \left [[(\rlap\slash k + \rlap\slash P_{\pi})_{on} + m]
~\gamma^5 [(\rlap\slash k - \rlap\slash q)_{on} + m]~
\Gamma^{\mu}(4)
~ \gamma^+ ~ \gamma^5 \right ] }
{ 2 ~ \left [ ~ P^-_{\pi \prime} ~-~ (k + P_{\pi})^-_{on} +
(k - q)^-_{on} ~ \right ]}
\label{T3I}
\end{eqnarray}
\begin{eqnarray}
T^{\prime \mu}_{3, (2)} &=&
\frac{ Tr \left [[(\rlap\slash k + \rlap\slash P_{\pi})_{on} + m]
~ \gamma^5 [ (\rlap\slash k - \rlap\slash q)_{on} + m ] ~
\Gamma^{\mu}(2)
~ \gamma^+ ~ \gamma^5 \right ] }
{ 2 ~ \left [ ~ P^-_{\pi \prime} ~-~ (k + P_{\pi})^-_{on} +
(k - q)^-_{on} ~ \right ]}
\label{T3II}
\end{eqnarray}
with
\begin{eqnarray}
\Gamma^{\mu}(4) = \Gamma^{\mu}(k^+,{\bf k}_{\perp}, k^-=k^-_{(4)},q) \quad \ .
\label{G4}
\end{eqnarray}
The contributions $j^{(I) \mu}$ and $j^{(II) \mu}$ are represented
by diagrams (a) and (b) of Fig. 3, respectively.
\subsubsection{Valence region contribution}
Let us change integration variables in Eq. (\ref{jmuF}) for the
valence contribution, defining $k^{\prime +} = k^+ + P^+_{\pi}$
and ${\bf k}^{\prime}_{\perp} ={\bf k}_{\perp} + {\bf P}_{\pi
\perp}$, with $(k^{\prime +},{\bf k}^{\prime}_{\perp})$ the light-front momentum
of a quark in the valence range. Then $j^{(I) \mu}$ takes the following more familiar form
\begin{eqnarray}
j^{(I) \mu} &=&
~ \frac{ e } {(2\pi)^3} \frac{m^2}{f^2_\pi} N_c~\int_{0}^{P^+_{\pi}}
\frac{ dk^{\prime +} d{\bf k}^{\prime}_{\perp}} {(k^{\prime+}-
P^+_{\pi}) k^{\prime+}(P_{\pi \prime}^+ - k^{\prime+})}~
\left [ T^{\prime \mu}_{on, (4)} ~ + ~ T^{\prime \mu}_{2, (4)} ~ + ~ T^{\prime \mu}_{3, (4)} \right ]
\times \nonumber \\ && \nonumber \\ && ~
\left [ \overline \Lambda_{\pi
\prime}(k^{\prime}, P_{\pi \prime})
\Lambda_{\pi}(P_{\pi}-k^{\prime}, P_{\pi})
\right ] _{k^{\prime -} = k^{\prime -}_{on}} ~ ,
\label{jmuI}
\end{eqnarray}
where
\begin{eqnarray}
k^{\prime -}_{on} = \frac{|{\bf k}^{\prime}_{\perp}|^{2} + m^2}{k^{\prime +}}
\label{k'}
\end{eqnarray}
and we have defined $k^- + P^-_{\pi} ~=~ k^{\prime -}$.
The quantities $T^{\prime \mu}_{on, (4)}$, $T^{\prime \mu}_{2, (4)}$,
$T^{\prime \mu}_{3, (4)}$ can now be expressed as follows:
\begin{eqnarray}
T^{\prime \mu}_{on, (4)} =
[~P^+_{\pi \prime} ~ P^+_{\pi}~] \frac{ Tr \left [(\rlap\slash {k^{\prime}}_{on} + m)
~\gamma^5 [(\rlap\slash {k^{\prime}} - \rlap\slash {P_{\pi \prime}})_{on} + m]~
\Gamma^\mu(4)
~ [(\rlap\slash {k^{\prime}} -
\rlap\slash {P_{\pi}})_{on} + m]~ \gamma^5 \right ] }
{ \left [ m^2_{\pi \prime} - M^2_0(k^{\prime +},
{\bf k}^{\prime}_{\perp}; P^+_{\pi \prime}, {\bf P}_{\pi \prime \perp}) \right ] ~
\left [ m^2_\pi - M^2_0(k^{\prime +}, {\bf k}^{\prime}_{\perp};
P^+_{\pi}, {\bf P}_{\pi \perp}) \right ] }
\label{TonI'}
\end{eqnarray}
\begin{eqnarray}
T^{\prime \mu}_{2, (4)} &=&
- P^+_{\pi} ~ \frac{ Tr \left [(\rlap\slash {k^{\prime}}_{on} + m)
~\gamma^5 ~ \gamma^+ ~
\Gamma^\mu(4)
~[(\rlap\slash {k^{\prime}} -
\rlap\slash {P_{\pi}})_{on} + m]~ \gamma^5 \right ] }
{ 2 ~ \left [ m^2_\pi - M^2_0(k^{\prime +}, {\bf k}^{\prime}_{\perp};
P^+_{\pi}, {\bf P}_{\pi \perp}) \right ]}
\label{T2I'}
\end{eqnarray}
\begin{eqnarray}
T^{\prime \mu}_{3, (4)} &=&
- P^+_{\pi \prime} ~ \frac{ Tr \left [(\rlap\slash {k^{\prime}}_{on} + m)
~\gamma^5 [(\rlap\slash {k^{\prime}} - \rlap\slash {P_{\pi \prime}})_{on} + m]~
\Gamma^\mu(4)
~ \gamma^+ ~ \gamma^5 \right ] }{ 2 ~
\left [ m^2_{\pi \prime} - M^2_0(k^{\prime +},
{\bf k}^{\prime}_{\perp}; P^+_{\pi \prime}, {\bf P}_{\pi \prime \perp}) \right ]} \quad \ .
\label{T3I'}
\end{eqnarray}
In Eq. (\ref{jmuI}) both the vertex functions have the quark momentum fractions
$k^{\prime +} / P^+_{\pi \prime}$ and $(P_{\pi} - k^{\prime })^+/P_{\pi}^+$
in the valence sector $[0,1]$. Note that the on-shell momenta in Eq. (\ref{TonI'}) allow one
to retrieve the relativistic spin coupling factors with the spin 1/2 Melosh
rotations automatically included \cite{Jaus90,tob92,araujo99}.
\subsubsection{Pair-production contribution}
By making use of Eq. (\ref{M01}) the quantities
$T^{\prime \mu}_{on, (2)}$, $T^{\prime \mu}_{1, (2)}$, $T^{\prime \mu}_{3, (2)}$
in the pair-production contribution (Eq. (\ref{jmuG})) become
\begin{eqnarray}
T^{\prime \mu}_{on, (2)} =
P^+_{\pi \prime} ~ \frac{ Tr \left [[(\rlap\slash k + \rlap\slash P_{\pi})_{on} + m]
\gamma^5 [(\rlap\slash k - \rlap\slash q)_{on} + m] ~
\Gamma^\mu(2)
~(\rlap\slash k_{on} + m)~ \gamma^5 \right ] }
{\left [ q^- - q^-_0 + \imath \epsilon \right ] ~
\left [ m^2_{\pi \prime} - M^2_0(k^{\prime +},
{\bf k}^{\prime}_{\perp}; P^+_{\pi \prime}, {\bf P}_{\pi \prime \perp}) \right] }
\label{TonII'}
\end{eqnarray}
\begin{eqnarray}
T^{\prime \mu}_{1, (2)} =
\frac{ Tr \left [ \gamma^+
~\gamma^5 [(\rlap\slash k - \rlap\slash q)_{on} + m]~
\Gamma^\mu(2)
~(\rlap\slash k_{on} + m)~ \gamma^5 \right ] }
{ 2 ~ \left [ q^- - q^-_0 + \imath \epsilon \right]}
\label{T1II'}
\end{eqnarray}
\begin{eqnarray}
T^{\prime \mu}_{3, (2)} =
P^+_{\pi \prime} ~ \frac{ Tr \left [[(\rlap\slash k + \rlap\slash P_{\pi})_{on} + m]
~\gamma^5 [(\rlap\slash k - \rlap\slash q)_{on} + m]~
\Gamma^\mu(2)
~ \gamma^+ ~ \gamma^5 \right ] }{ 2 ~
\left [ m^2_{\pi \prime} - M^2_0(k^{\prime +},
{\bf k}^{\prime}_{\perp}; P^+_{\pi \prime}, {\bf P}_{\pi \prime \perp}) \right ]}
\label{T3II'}
\end{eqnarray}
where
$q^-_0 = k^-_{on} + (q-k)^-_{on}$ and
$k^{\prime +} = k^+ + P^+_{\pi}$,
${\bf k}^{\prime}_{\perp} ={\bf k}_{\perp} + {\bf P}_{\pi \perp}$.
In Eq. (\ref{jmuG}) only the vertex function $\overline\Lambda_{\pi \prime}$ has
the quark momentum fraction $(P_{\pi} + k )^+/P_{\pi \prime}^+$ in the range $[0,1]$.
\section{Valence component of the light-front wave function}
\subsection{Meson wave functions}
\subsubsection{Pion}
To fully interpret the terms that appear in Eq. (\ref{jmuB})
and in Eqs. (\ref{jmuF}), (\ref{jmuG}), we
have to discuss the valence and nonvalence components of the
light-front wave function. Let us begin with the valence
component, which can be derived from the Bethe-Salpeter
amplitude \cite{sales1}. In the present model the pion
Bethe-Salpeter amplitude is given by
\begin{eqnarray}
\Psi_\pi (k,P_\pi) =
\frac{m}{f_\pi}\frac{\rlap\slash{k}+m}{k^2-m^2+\imath \epsilon}
\gamma^5 ~ \Lambda_\pi (k,P_\pi)
\frac{\rlap\slash{k}-\rlap\slash{P_\pi} + m}{(k - P_\pi)^2 - m^2 + \imath
\epsilon} \ \ ,
\label{bsa}
\end{eqnarray}
where the pion vertex is $\frac{m}{f_\pi} ~ \gamma^5 \Lambda_\pi (k,P_\pi)$.
The valence component of the light-front wave function
can be obtained from the Bethe-Salpeter amplitude (\ref{bsa})
in the valence sector, $0 \leq k^+ \leq P_{\pi}^+$,
by disregarding the instantaneous terms in Eq. (\ref{bsa}),
multiplying $\Psi_\pi$ by the factor $[k^+ ~ ( k^+ - P^+_{\pi} )]/(2\pi \imath)$
and integrating over $k^-$:
\begin{eqnarray}
&&\phi_\pi(k^+,\vec k_\perp; P^+_{\pi},\vec P_{\pi \perp})=\nonumber \\ &&
=-\imath
\frac{m}{f_\pi}k^+(k^+-P^+_{\pi} ) \int \frac{dk^-}{2\pi}
\frac{\rlap\slash{k}_{on}+m}{k^2-m^2+\imath \epsilon} \gamma^5
\Lambda_\pi (k,P_\pi)
\frac{(\rlap\slash{k}-\rlap\slash{P_\pi})_{on}+m}{(k-P_\pi)^2-m^2+\imath
\epsilon} \ \, .
\label{wf1}
\end{eqnarray}
Two poles, $k^-_{(1)}$ and $k^-_{(3)}$, appear in Eq. (\ref{wf1}),
in the lower and in the upper $k^-$ semi-planes, respectively.
We perform the $k^-$ integration in the lower complex
semi-plane disregarding the contributions that arise from
the singularities of the vertex $\Lambda_\pi (k,P_\pi)$
(cf. the assumptions (i) and (ii) at the beginning of Sect. III).
Then the pion wave function becomes
\begin{eqnarray}
\phi_\pi(k^+, {\bf k}_{\perp}; P^+_{\pi}, {\bf
P}_{\pi \perp}) = ~ \frac{m}{f_\pi} ~(\rlap \slash k_{on} + m) ~ \gamma^5
\frac{P^+_{\pi} ~ [ \Lambda_{\pi}(k,P_{\pi}) ]_{[k^- = k^-_{on}]}}
{[m^2_\pi - M^2_0(k^+, {\bf k}_{\perp}; P^+_{\pi},
{\bf P}_{\pi \perp})]} ~
\left [(\rlap \slash k - \rlap \slash P_{\pi})_{on} + m \right ] ~~ ~~ .
\label{wf2}
\end{eqnarray}
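For completeness, we recall the standard light-front expression of the free mass appearing in the denominator of Eq. (\ref{wf2}):
\begin{eqnarray}
M^2_0(k^+, {\bf k}_{\perp}; P^+_{\pi}, {\bf P}_{\pi \perp}) =
P^+_{\pi} \left [ k^-_{on} + (P_{\pi} - k)^-_{on} \right ] - |{\bf P}_{\pi \perp}|^2 \ ,
\end{eqnarray}
with $k^-_{on} = (m^2 + |{\bf k}_{\perp}|^2)/k^+$ and
$(P_{\pi} - k)^-_{on} = (m^2 + |{\bf P}_{\pi \perp} - {\bf k}_{\perp}|^2)/(P^+_{\pi} - k^+)$.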
If the $k^-$
integration is done in the upper
semi-plane within the same assumptions, one has:
\begin{eqnarray} &&
\phi_{\pi}(k^+, {\bf k}_{\perp}; P^+_{\pi}, {\bf P}_{\pi \perp})
=\nonumber \\ &&
= ~ \frac{m}{f_\pi} ~ (\rlap \slash k_{on} + m) ~ \gamma^5
\frac{P^+_{\pi} ~ [ \Lambda_{\pi}(k,P_{\pi}) ] _{[k^- = P^-_\pi - (P_\pi-k)^-_{on}]}}
{[m^2_\pi - M^2_0(P^+_\pi-k^+, {\bf
P}_{\pi\perp}-{\bf k}_{\perp}; P^+_{\pi}, {\bf P}_{\pi
\perp})]} ~\left [(\rlap \slash k - \rlap \slash P_{\pi})_{on} + m \right ] ~.
\label{wf22}
\end{eqnarray}
In principle, the elimination of the relative light-front time
between the quark and the antiquark in the pion Bethe-Salpeter
amplitude by the $k^-$ integration in Eq. (\ref{wf1}) should give
a unique answer, which defines the valence component of the wave
function in the range $0 \leq k^+ \leq P^+_\pi$,
with both quarks on their mass shell.
Therefore, to ensure consistency within our model, we will assume
$[ \Lambda_{\pi}(k,P_{\pi}) ]_{[k^- = k^-_{on}]} $ and
$[ \Lambda_{\pi}(k,P_{\pi}) ]_{[k^- =P^-_\pi - (P_\pi-k)^-_{on}]}$
to be equal in that kinematical range
(note that $M^2_0(k^+, {\bf k}_{\perp}; P^+_{\pi},
{\bf P}_{\pi \perp})$ is equal to
$M^2_0(P^+_\pi-k^+, {\bf P}_{\pi\perp}-{\bf k}_{\perp}; P^+_{\pi}, {\bf P}_{\pi\perp})$).
This assumption produces a
momentum component of the
valence light-front wave function symmetric under the exchange of the quark momenta,
since the vertex function $\Lambda_{\pi}(k,P_{\pi})$ is assumed to be symmetric.
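As a simple numerical illustration (outside the formalism of the paper), the symmetry of the free mass under the exchange of the quark momenta can be checked with a short script; the free-mass expression written out below is the standard light-front one, and the momentum values and constituent mass are purely illustrative:

```python
# Illustrative check of the invariance of M_0^2 under the exchange
# of the quark momenta, (k^+, k_perp) <-> (P^+ - k^+, P_perp - k_perp).
# The free-mass formula below is the standard light-front expression
# (an assumption here); all momentum values are illustrative (GeV).

def m0_squared(kp, kx, ky, Pp, Px, Py, m=0.220):
    """Free mass squared of a quark-antiquark pair on the light front."""
    k_minus_on = (m**2 + kx**2 + ky**2) / kp                       # k^-_on
    s_minus_on = (m**2 + (Px - kx)**2 + (Py - ky)**2) / (Pp - kp)  # (P-k)^-_on
    return Pp * (k_minus_on + s_minus_on) - (Px**2 + Py**2)

kp, kx, ky = 0.3, 0.1, -0.2   # quark momentum
Pp, Px, Py = 1.0, 0.4, 0.1    # pion momentum
direct = m0_squared(kp, kx, ky, Pp, Px, Py)
exchanged = m0_squared(Pp - kp, Px - kx, Py - ky, Pp, Px, Py)
assert abs(direct - exchanged) < 1e-12
```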
Within a Bethe-Salpeter approach, the function $\phi_{\pi}$ fulfills a two-body Schroedinger-like
equation, with the proper Melosh structure represented by the matrix
$(\rlap \slash k_{on} + m) ~ \gamma^5
\left [(\rlap \slash k - \rlap \slash P_{\pi})_{on} + m \right ] $ \cite{Jaus90}.
Therefore, when the plus component
of the quark momentum is in
the interval $0\leq k^+ \leq P^+_\pi$, $\phi_{\pi}$ will be identified
in our approach with
the HLFD pion wave function, with momentum component
$\psi_{\pi}(k^+, {\bf k}_{\perp};
P^+_{\pi}, {\bf P}_{\pi \perp})$ (see Ref. \cite{Jaus90}):
\begin{eqnarray}
~\phi _{\pi}(k^+, {\bf k}_{\perp}; P^+_{\pi}, {\bf P}_{\pi
\perp}) = (\rlap \slash k_{on} + m) ~ \gamma^5
\left [(\rlap \slash k - \rlap \slash P_{\pi})_{on} + m \right ]
\psi_{\pi}(k^+, {\bf k}_{\perp};
P^+_{\pi}, {\bf P}_{\pi \perp}) ~~ ~~.
\label{wfp}
\end{eqnarray}
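Comparing Eq. (\ref{wf2}) with Eq. (\ref{wfp}), the momentum component of the valence wave function is given explicitly in terms of the vertex function by
\begin{eqnarray}
\psi_{\pi}(k^+, {\bf k}_{\perp}; P^+_{\pi}, {\bf P}_{\pi \perp}) = \frac{m}{f_\pi} ~
\frac{P^+_{\pi} ~ [ \Lambda_{\pi}(k,P_{\pi}) ]_{[k^- = k^-_{on}]}}
{[m^2_\pi - M^2_0(k^+, {\bf k}_{\perp}; P^+_{\pi}, {\bf P}_{\pi \perp})]} ~~ ~~ .
\end{eqnarray}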
\subsubsection{Vector meson}
In analogy with the pion case,
one can define the light-front VM wave function,
which describes the valence component of the meson state $|n \lambda \rangle$.
Indeed, starting from the Bethe-Salpeter amplitude for a vector meson
\begin{eqnarray}
\Psi_{n \lambda} (k,P_n) =
\frac{\rlap\slash{k} + m}{k^2 - m^2 + \imath \epsilon}
\left [ \epsilon_{\lambda}(P_n) \cdot \widehat{V}_{n}(k,k-P_n) \right ] ~
\Lambda_n(k,P_n) ~
\frac{\rlap\slash{k}-\rlap\slash{P_n} + m}{(k - P_n)^2 - m^2 + \imath
\epsilon} \ ,
\label{bsan}
\end{eqnarray}
the valence component of the light-front wave function can be defined
from $\Psi_{n \lambda} (k,P_n)$ integrating over $k^-$,
disregarding the instantaneous terms and multiplying by the factor
$[k^+ ~ ( k^+ - P^+_{n} )]/(2\pi \imath)$, as we already did for the pion.
Furthermore, in this case one has to take both the quark momenta
on their mass shell in the Dirac structure
of the VM vertex function, $\widehat{V}_{n}(k,k-P_n)$:
\begin{eqnarray}
&&\phi_{n \lambda} (k^+,\vec k_\perp; P^+_n,\vec P_{n \perp})=
-\imath k^+ (k^+-P^+_{n} ) \times
\nonumber \\ &&
\nonumber \\ &&
\int \frac{dk^-}{2\pi}
\frac{\rlap\slash{k}_{on}+m}{k^2 - m^2 + \imath \epsilon} ~
\Lambda_n (k,P_n)
~ \left [ \epsilon_{\lambda} (P_n) \cdot [ \widehat{V}_{n}(k,k-P_n) ]_{on} \right ] ~
\frac{(\rlap\slash{k}-\rlap\slash{P_n})_{on}+m}{(k-P_n)^2-m^2 + \imath
\epsilon} \ ,
\label{wf1n}
\end{eqnarray}
where $[ \widehat{V}_{n}(k,k-P_n) ]_{on}$ is defined by Eq. (\ref{gams1})
in order to retrieve the $^3S_1$ vector meson vertex of Ref. \cite{Jaus90}.
Assuming that $\Lambda_n(k,P_n)$
does not diverge in the complex $k^-$ plane for
$|k^-|\rightarrow\infty$, and
neglecting the contributions of its singularities
in the $k^-$ integration, the valence VM wave function is
\begin{eqnarray}
&&\phi _{n \lambda}(k^+, {\bf k}_{\perp}; P^+_{n}, {\bf P}_{n \perp}) =
\nonumber \\ &&
= ~ P^+_{n} ~ (\rlap \slash k_{on} + m) ~
\frac{\left [ \epsilon_{\lambda}(P_n) \cdot [ \widehat{V}_{n}(k,k-P_n) ]_{on} \right ] }
{[M^2_n - M^2_0(k^+, {\bf k}_{\perp}; P^+_{n}, {\bf P}_{n \perp})]} ~
[ \Lambda_{n}(k,P_{n}) ]_{[k^- = k^-_{on}]} ~
\left [(\rlap \slash k - \rlap \slash P_{n})_{on} + m \right ]~~ ~~ .
\label{wfn}
\end{eqnarray}
In analogy with the pion case, we assume
$[\Lambda_{n}(k,P_n)]_{[k^- = k^-_{on}]} =
[\Lambda_{n}(k,P_n)]_{[k^- = P^-_n - (P_n - k)^-_{on}]}$
in the valence sector, $0 \leq k^+ \leq P^+_n$.
As for the pion, the function $\phi _{n \lambda}$, with the plus component
of the quark momentum in
the interval $0\leq k^+ \leq P^+_n$, will be identified with
the HLFD vector meson wave function, with
momentum component $\psi_{n}(k^+, {\bf k}_{\perp}; P^+_{n}, {\bf P}_{n \perp})$,
\cite{Jaus90}:
\begin{eqnarray}
&&\phi _{n \lambda}(k^+, {\bf k}_{\perp}; P^+_{n}, {\bf P}_{n \perp}) =
\nonumber \\ &&
= ~ (\rlap \slash k_{on} + m) ~
\left [ \epsilon_{\lambda}(P_n) \cdot [ \widehat{V}_{n}(k,k - P_n) ]_{on} \right ]
~\left [(\rlap \slash k - \rlap \slash P_{n})_{on} + m \right ]
\psi_{n}(k^+, {\bf k}_{\perp}; P^+_{n}, {\bf P}_{n \perp}) ~~.
\label{wfpn}
\end{eqnarray}
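Explicitly, comparing Eq. (\ref{wfn}) with Eq. (\ref{wfpn}), one finds
\begin{eqnarray}
\psi_{n}(k^+, {\bf k}_{\perp}; P^+_{n}, {\bf P}_{n \perp}) =
\frac{P^+_{n} ~ [ \Lambda_{n}(k,P_{n}) ]_{[k^- = k^-_{on}]}}
{[M^2_n - M^2_0(k^+, {\bf k}_{\perp}; P^+_{n}, {\bf P}_{n \perp})]} ~~ ~~ .
\end{eqnarray}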
In conclusion, Eqs. (\ref{wf2}, \ref{wfp}) and (\ref{wfn}, \ref{wfpn}) establish a link
between the momentum part of the meson HLFD wave functions and
the momentum part of
the meson vertex functions.
The valence component of the VM wave function is
normalized to the probability of the valence component of the meson state $|n \lambda \rangle$
(see Appendix D).
This probability is estimated in a schematic model in Appendix E.
The corresponding normalization for the pion wave function is included in an overall
normalization constant for the pion form factor.
\subsection{Photon wave function}
One can define as well the valence component of the hadronic contribution
to the photon wave function, starting from
the Bethe-Salpeter amplitude of the photon, which can be written as:
\begin{eqnarray}
\Psi^\mu_\gamma (k,q) =
\frac{\rlap\slash{k}+m}{k^2 - m^2 + \imath \epsilon}
\Gamma^\mu (k,q) \frac{\rlap\slash{k} - \rlap\slash{q} + m}
{(k-q)^2 - m^2 + \imath \epsilon} \ ,
\label{bsag}
\end{eqnarray}
where $\Gamma^\mu (k,q)$ is the photon vertex amplitude (see Eq. (\ref{wf1zg})).
In analogy with Eq. (\ref{wf1n}), the valence component of the virtual photon light-front wave
function can be obtained from the Bethe-Salpeter amplitude
(\ref{bsag}) in the valence sector, $0\leq k^+\leq q^+$,
disregarding the instantaneous terms in Eq. (\ref{bsag}), integrating over $k^-$
and multiplying by the factor $[k^+ ~ ( k^+ - q^+ )]/(2\pi \imath)$.
Then,
using our explicit expression for $\Gamma^{\mu}(k,q)$ given by Eq. (\ref{cur6}), the
light-front wave function of the photon can be defined by
\begin{eqnarray} &&
\phi^\mu_\gamma(k^+,{\bf k}_\perp; q^2,q^+,{\bf q}_\perp)=
\nonumber \\ &&
= -\imath k^+(k^+-q^+) \int \frac{dk^-}{2\pi}
\frac{\rlap\slash{k}_{on} + m}{k^2 - m^2 + \imath \epsilon}
\left [ \Gamma^\mu (k, q) \right ]_{on}
\frac{(\rlap \slash k - \rlap \slash q)_{on} + m}{(k-q)^2 - m^2 + \imath \epsilon}
\ \ ,
\label{wf1g}
\end{eqnarray}
where the label ``$on$'' in $\left [ \Gamma^\mu (k, q) \right ]_{on}$ means that,
as in the VM case, the Dirac structures of the photon vertex
amplitude, $\Gamma^\mu (k,q)$, have to be taken with both the quark
momenta on their mass shell. Therefore, in analogy with Eq. (\ref{wf1n}),
a possible choice for
$\widehat{V}_{n}(k,k-q)$ in Eq. (\ref{cur6}) is
$\left [ \widehat{V}_{n}(k,k-q) \right ]_{on}$, as defined by Eq. (\ref{gams1}).
Then, performing the $k^-$ integration with the assumptions given at the beginning of Sect. III,
in the range $0\leq k^+\leq q^+$
Eq. (\ref{wf1g}) becomes
\begin{eqnarray}
~\phi^\mu _{\gamma}(k^+, {\bf k}_{\perp}; q^2,q^+, {\bf q}_{\perp}) =
(\rlap \slash k _{on} + m ) ~
\psi^\mu_{\gamma}(k^+, {\bf k}_{\perp};q^2, q^+, {\bf q}_{\perp}) ~
\left[(\rlap \slash k-\rlap \slash q)_{on} + m\right] ~~ ~~,
\label{wf3g}
\end{eqnarray}
where the function $\psi^\mu_{\gamma}(k^+, {\bf k}_{\perp};q^2, q^+, {\bf q}_{\perp})$,
which includes the
Dirac structures of the photon vertex, is defined by
\begin{eqnarray}
\psi^\mu_{\gamma}(k^+, {\bf k}_{\perp};q^2, q^+, {\bf q}_{\perp}) =
[ \Gamma^\mu(k,q) ]_{on} ~
\frac {q^+}{[ q^2 - M^2_0(k^+, {\bf k}_{\perp}; q^+, {\bf q}_{\perp}) + \imath \epsilon]} ~~ ~~ .
\label{wf2g}
\end{eqnarray}
As in the previous meson case, to have consistency in our
virtual photon wave function model, we will not distinguish between
$[ \Lambda_{n}(k,q) ]_{[k^- = k^-_{on}]} $
and $[ \Lambda_{n}(k,q) ]_{[k^- =q^- - (q-k)^-_{on}]}$
in the valence sector, $0 \leq k^+ \leq q^+$.
Therefore we obtain
for $\psi^\mu_{\gamma}(k^+, {\bf k}_{\perp};q^2, q^+, {\bf q}_{\perp})$
the same result
whether the $k^-$ integration is performed in the lower or in the upper $k^-$ complex semi-plane.
The valence wave function,
$ \phi^\mu_\gamma(k^+,{\bf k}_\perp; q^2,q^+,{\bf q}_\perp)$,
and the function
$\psi^\mu_{\gamma}(k^+, {\bf k}_{\perp};q^2, q^+, {\bf q}_{\perp})$
depend on the value of $q^2$ carried by the virtual photon.
Note that in the time-like case
a singularity appears in the photon valence wave function
(see Eq. (\ref{wf2g})).
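Indeed, using the standard free-mass expression in terms of the momentum fraction $x = k^+/q^+$,
\begin{eqnarray}
M^2_0(k^+, {\bf k}_{\perp}; q^+, {\bf q}_{\perp}) =
\frac{m^2 + |{\bf k}_{\perp} - x \, {\bf q}_{\perp}|^2}{x \, (1 - x)} \geq 4 m^2 \ ,
\end{eqnarray}
one sees that the denominator of Eq. (\ref{wf2g}) can vanish only for $q^2 \geq 4 m^2$, i.e. above the quark-antiquark production threshold, so that the $\imath \epsilon$ prescription becomes effective in the time-like region only.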
If, as in Eq. (\ref{cur7}), the photon vertex $[ \Gamma^\mu(k,q) ]_{on}$
is taken with on-shell quantities for
the vector mesons in the numerator, i.e. if we take
\begin{eqnarray}
\left [\Gamma^{\mu}(k,q) \right ]_{on} = \sqrt{2} \sum_{n, \lambda}
\epsilon_{\lambda} (P_n)\cdot \left [ \widehat{V}_{n}(k,k - P_n) \right ]_{on}
\left [ \Lambda_{n}(k,P_n) \right ]_{[k^- = k^-_{on}]}
{ [\epsilon ^{\mu}_{\lambda}(P_n)]^* f_{Vn} \over \left [ q^2 -
M^2_n + \imath M_n \tilde{\Gamma}_n(q^2)\right ]} \
\label{cur7b}
\end{eqnarray}
and identify Eqs. (\ref{wfn}) and (\ref{wfpn}), then
the function
$\psi^\mu_{\gamma}(k^+, {\bf k}_{\perp};q^2, q^+, {\bf q}_{\perp})$
can be written as follows:
\begin{eqnarray}
&& \psi^\mu_{\gamma}(k^+, {\bf k}_{\perp};q^2, q^+, {\bf q}_{\perp})
= \sqrt{2} ~ \sum_{n, \lambda}
\left [ \epsilon_{\lambda} (P_n)\cdot \left [ \widehat{V}_{n}(k,k-P_n) \right ]_{on} \right ]
\times \nonumber \\ &&
\frac {\left [ M^2_n - M^2_0(k^+, {\bf k}_{\perp}; P^+_{n}, {\bf P}_{n \perp})\right ]}
{\left [ q^2 - M^2_0(k^+, {\bf k}_{\perp}; q^+, {\bf q}_{\perp}) + \imath \epsilon \right ]} ~
\psi_{n}(k^+, {\bf k}_{\perp}; P^+_{n}, {\bf P}_{n \perp})
~ { [\epsilon ^{\mu}_{\lambda}(P_n)]^* f_{Vn} \over \left [ q^2 -
M^2_n + \imath M_n \tilde{\Gamma}_n(q^2)\right ]}
\label{wfni1}
\end{eqnarray}
in terms of the momentum part of the HLFD vector meson wave functions,
$\psi_{n}(k^+, {\bf k}_{\perp}; P^+_{n}, {\bf P}_{n \perp})$.
\section{Contribution of nonvalence components to the current }
\subsection{Time-like case: the photon nonvalence component}
The process of pion-antipion production is shown in Fig. 2, where
the dashed lines (both in (a) and in (b)) represent two different
light-front times. At the first time (the one on the right) the hadronic valence component
of the virtual photon is represented, while at the second one the
$2q2\overline q$ photon nonvalence component is depicted (see also Fig. 5).
The two parts of Fig. 2, i.e. (a) and (b), differ by
the emission vertex of an antipion or of a pion (see also Fig. 5 (b)), respectively.
The corresponding quark amplitudes for the
radiation of an antipion or a pion are given in
Eq. (\ref{jmuB}) by
the antipion vertex
$\Lambda_{\overline\pi}(k-P_\pi; P_{\overline\pi})$, evaluated at $k^-=k^-_{on}$
for $(k - P_\pi)^+ < 0$, and by
the pion vertex $\overline\Lambda _{\pi}(k; P_{\pi})$,
evaluated at $k^-=q^- + (k - q)^-_{on}$ for $k^+ > P^+_{\pi}$, respectively.
Once the interaction that couples the
valence to the $2q2\overline q$ component is known (see Fig. 2), the amplitude for the photon
decay in a $\pi{\overline\pi}$ pair can be constructed. To this end,
let us introduce a kernel operator ${\cal K}$ which realizes this coupling.
Then, we can write the following
equation to relate the valence component of the pion wave function,
$\psi_{\pi}(k^{\prime +},
{\bf k}^\prime_{\perp}; P^+_{\pi}, {\bf P}_{\pi \perp})$,
to the vertex function $ \overline\Lambda _{\pi}(k; P_{\pi})$ at ${k^-=q^- + (k -
q)^-_{on}}$, which is the amplitude for the pion emission (see Fig. 2 (b) and Fig. 6):
\begin{eqnarray} &&
\overline{{\cal{D}}}_{\pi} := \frac{m}{f_\pi} \overline\Lambda
_{\pi}(k; P_{\pi})_{[k^-=q^- + (k - q)^-_{on}]}=
\nonumber \\ &&=\frac14
\sum_{\alpha^\prime\beta^\prime\alpha\beta}\int_0^{P^+_\pi}
\frac{dk^{\prime +}d{\bf k}^\prime_\perp}{k^{\prime
+}(P^+_\pi-k^{\prime+})} ~
\psi^* _{\pi}(k^{\prime +},
{\bf k}^\prime_{\perp}; P^+_{\pi}, {\bf P}_{\pi \perp})\times
\nonumber \\ &&~
(\gamma^5)^{\beta\alpha}(\gamma^5)_{\beta^\prime\alpha^\prime} ~
{\cal K}^{\alpha^\prime\beta^\prime}_{\alpha\beta}\left(k^{\prime
+},{\bf k}^\prime_\perp~;~k^{ +},{\bf k}_\perp~;~q^-,q^+,{\bf
q}_\perp\right) \ .
\label{piemi}
\end{eqnarray}
For simplicity, the example of a
$\gamma^5$ structure was used in Eq. (\ref{piemi}), just to be
consistent with our assumption of a pseudoscalar pion model.
One can write an analogous expression for the emission of
$\overline \pi$:
\begin{eqnarray} &&
{\cal{D}}_{\overline{\pi}} :=\frac{m}{f_\pi} [\Lambda _{\overline\pi}(k-P_\pi;
P_{\overline\pi})]_{(k^-=k^-_{on})} =
\nonumber \\ &&=\frac14
\sum_{\alpha^\prime\beta^\prime\alpha\beta}\int_0^{P^+_{\overline\pi}}
\frac{dk^{\prime +}d{\bf k}^\prime_\perp}{k^{\prime
+}(P^+_{\overline\pi}-k^{\prime+})} ~
\psi^* _{\overline\pi}(k^{\prime +},
{\bf k}^\prime_{\perp}; P^+_{\overline\pi}, {\bf P}_{\overline\pi \perp})\times
\nonumber \\ &&~
(\gamma^5)^{\beta\alpha}(\gamma^5)_{\beta^\prime\alpha^\prime} ~
{\cal K}^{\alpha^\prime\beta^\prime}_{\alpha\beta}\left(k^{\prime
+},{\bf k}^\prime_\perp~;~k^{ +}-P^+_\pi,{\bf k}_\perp-{\bf
P}_{\pi\perp}~;~q^-,q^+,{\bf q}_\perp\right) \ .
\label{pibemi}
\end{eqnarray}
In our model calculation, both pion emission vertexes will be
replaced by a constant, following Ref. \cite{JI01}.
\subsection{Space-like case: the pion nonvalence component}
Apart from the direct photon coupling to the quark line,
in the space-like region the nonvalence component of the final pion wave
function appears for $q^+>0$ in both the
contributions to the current obtained after the $k^-$ integration,
given by Eqs. (\ref{jmuF}) and (\ref{jmuG}) (see Fig. 7).
On the one hand, the valence component of the final pion is coupled
to the nonvalence $2q2\overline q$ component (see Fig. 7 (b)), through
an interaction
kernel ${\cal H}$, which contributes
to the quark-photon absorption vertex of Eq. (\ref{jmuF}),
given by $\Gamma^\mu(4) = \Gamma^\mu(k^{\prime} - P_{\pi},q)$ with
$k^{\prime-}=k^{\prime-}_{on}$. On the other hand
the vertex $\Lambda _{\pi}(-k; P_\pi)$ in Eq. (\ref{jmuG}),
evaluated at $k^-=q^- + (k - q)^-_{on}$ for $- k^+ < 0$,
describes the quark-pion absorption through another
interaction kernel ${\cal K'}$ and generates
the nonvalence $2q2\overline q$ component of the final pion (see Fig. 7 (c)).
We identify the kernel ${\cal K'}$ with the kernel ${\cal {K}}$, already used in the previous
subsection A for the description of the pion emission (Fig. 6).
Equation (\ref{jmuF}) gives a contribution to the SL form factor
where the initial and the final pion valence components appear (diagram (a) of
Fig. 3).
The plus component of the quark-photon absorption vertex, given by
$\Gamma^+(k^{\prime} - P_{\pi},q)$ with $k^{\prime-}=k^{\prime-}_{on}$, which enters
Eq. (\ref{jmuF}), is represented by an empty circle in
diagram (a) of Fig. 3 and is approximated by the sum of i) the bare photon vertex
multiplied by a renormalization constant $a$ (diagram (a) of Fig. 7) and ii)
the
contribution due to the $2q2\overline q$ component of the final pion wave
function, which is represented by diagram (b) of Fig. 7.
Therefore, we can make
the following identification:
\begin{eqnarray} &&
\left[ [\Gamma^+(k^{\prime} - P_{\pi},q)]_{(k^{-} = k^{-}_{on})} \right]_{\alpha\beta} =
a ~ (\gamma^+)_{\alpha\beta} + \sum_{\alpha^\prime\beta^\prime}\int_0^{q^+}
\frac{dk^{\prime \prime +}d{\bf k}^{\prime \prime}_\perp}{k^{\prime \prime +}
(q^+-k^{\prime \prime +})} \times
\nonumber \\ &&
{\cal H}^{\alpha^\prime\beta^\prime}_{\alpha\beta}
\left( k^{\prime +} - P^+_\pi,{\bf k^\prime}_\perp-{\bf P}_{\pi\perp}~;
~k^{\prime \prime +},{\bf k}^{\prime \prime}_\perp~;
~P^-_{\pi\prime},P^+_{\pi\prime},{\bf P}_{\pi\prime\perp}\right)
\left[\psi^+_{\gamma}(k^{\prime \prime +}, {\bf k}^{\prime \prime}_{\perp};
q^-,q^+, {\bf q}_{\perp}) \right ]_{\alpha^\prime\beta^\prime} .
\label{phabs}
\end{eqnarray}
As already discussed in Sec. II, we do not consider the bare
photon vertex term in the present paper, since it violates current conservation
for a massless pion (see Appendix C).
Therefore, disregarding the bare photon vertex in
the right-hand side of Eq. (\ref{phabs}), we can formally write:
\begin{eqnarray} &&
[\Gamma^+(k^{\prime} - P_{\pi},q)]_{(k^{-}=k^{-}_{on})} \simeq {\cal H}
\psi^+_{\gamma} \ .
\label{phabs1}
\end{eqnarray}
One could try to interpret
Eq. (\ref{phabs1}) in terms of constituent quark form factors. However,
we have to point out that the absorption vertex of
Eq. (\ref{phabs}) does not depend only on $q^2$, as one could
naively think, but also on the virtuality of the quark, and
therefore on the hadron where this process occurs.
Let us note that, within our assumption of a vanishing pion mass, the
contribution of Eq. (\ref{jmuF}) also vanishes (see Sect. VIII)
and therefore there is no contribution from
$[\Gamma^+(k^{\prime} - P_{\pi},q)]_{(k^{-} = k^{-}_{on})}$.
Equation (\ref{jmuG}) represents the pair-production term
(Z-diagram) and is depicted in Fig. 7 (c). The quark-pion absorption vertex,
given by $\Lambda _{\pi}(-k; P_\pi)$ evaluated at $k^-=q^- + (k - q)^-_{on}$,
which appears in Eq. (\ref{jmuG}) can be written as
\begin{eqnarray} &&
{\cal{D}}_{\pi} := \frac{m}{f_\pi}[\Lambda _{\pi}(-k ;
P_{\pi})]_{(k^-=q^- + (k - q)^-_{on})} = \frac14
\sum_{\alpha^\prime\beta^\prime\alpha\beta} \int_0^{P^+_\pi}
\frac{dk^{\prime +} d{\bf k}^\prime_\perp}{k^{\prime+} (P^+_\pi-k^{\prime+})}
\times \nonumber \\ &&~
(\gamma^5)_{\beta\alpha}(\gamma^5)^{\beta^\prime\alpha^\prime}
{\cal K}_{\alpha^\prime\beta^\prime}^{\alpha\beta}
\left(k^{+},{\bf k}_\perp;~k^{\prime +},{\bf k}^\prime_\perp~;
~P^-_{\pi\prime},P^+_{\pi\prime},{\bf P}_{\pi\prime\perp}\right)
\psi_{\pi}(k^{\prime +}, {\bf k}^\prime_{\perp}; P^+_{\pi}, {\bf P}_{\pi \perp}) \
\label{pibabs}\ .
\end{eqnarray}
For our purposes this quark-pion absorption vertex will be taken to be
constant, as proposed in Ref. \cite{JI01} and as we do in the TL case
for the quark-pion emission vertex.
\section{Triangle diagram and pion LF wave function}
\subsection{Time-like case}
Let us insert into Eq. (\ref{jmuB}) the photon
vertex of Eq. (\ref{cur7}). Furthermore, whenever the full expression for
the light-front pion wave function
$\phi_{\pi}(k^+, {\bf k}_{\perp}; P^+_{\pi}, {\bf P}_{\pi \perp})$,
given by Eq. (\ref{wf2}), appears in Eq. (\ref{jmuB})
and the momentum fraction is in the valence-sector range $[0,1]$, let us replace it
with the expression of Eq. (\ref{wfp}), i.e. let us write the pion vertex in terms
of the momentum component of the HLFD pion wave function.
This means that we introduce in Eq. (\ref{jmuB}) the wave functions
$\psi_{\pi}(k^+, {\bf k}_{\perp}; P^+_{\pi}, {\bf P}_{\pi \perp})$ and
$\psi_{\bar{\pi}}((k^+ - P^+ _{\pi} ), ({\bf k -
P_{\pi}})_{\perp}; P^+_{\bar{\pi}}, {\bf P}_{{\bar{\pi}} \perp})$
when these wave functions have the correct support.
Then the triangle diagram can be expressed as follows:
\begin{eqnarray}
j^{\mu} &=&
\frac{ e } {(2\pi)^3} \frac{m}{f_\pi} N_c~\int_0^{q^+}
\frac{ dk^+ d{\bf k}_{\perp}}{(k^+ - P^+_{\pi}) k^+ (q^+ - k^+)}~
\sum_{n, \lambda}
~ {\sqrt{2} ~ [\epsilon ^{\mu}_{\lambda}(P_n)]^* f_{Vn} \over
\left [ q^2 - M^2_n + \imath M_n \tilde{\Gamma}_n(q^2) \right ]}
\times \nonumber \\ &&
\left \{ \Theta (P^+_{\pi} -k^+) ~ I_{1, n, \lambda} ~
+ ~ \Theta (k^+ - P^+ _{\pi} ) ~ I_{2, n, \lambda} \right \} \ .
\label{jmuD}
\end{eqnarray}
The quantities $I_{1, n, \lambda}$ and $I_{2, n, \lambda}$ in Eq. (\ref{jmuD})
are defined as follows:
\begin{eqnarray}
I_{1, n, \lambda} &=&
\left [
\Lambda_{\bar{\pi}}(k - P_{\pi},P_{\bar{\pi}}) ~ \Lambda_{n}(k,P_n) \right ]_{k^- = k^-_{on}}
~ ~ \times \nonumber \\
&& \left \{ \frac{ q^+ }
{ \left [ q^2 - M^2_0(k^+, {\bf k}_{\perp}; q^+, {\bf q}_{\perp}) + \imath \epsilon \right ]} ~
\left [ T_{on, (1, n, \lambda)} ~ + ~ T_{1, (1, n, \lambda)} \right ] ~
+ ~ T_{2, (1, n, \lambda)} \right \}
\label{jmu1n}
\end{eqnarray}
\begin{eqnarray}
I_{2, n, \lambda} &=&
\left [ \overline\Lambda_{\pi}(k,P_{\pi}) ~ \Lambda_{n}(k,P_n) \right ]_{k^- = q^- + (k - q)^-_{on}}
~ ~ \times \nonumber \\
&& \left \{ \frac{ q^+ }
{\left [ q^2 - M^2_0(k^+, {\bf k}_{\perp}; q^+, {\bf q}_{\perp}) + \imath \epsilon \right ] }
~ \left [ T_{on, (2, n, \lambda)} ~ + ~ T_{1, (2, n, \lambda)} \right ] ~
+ ~ T_{3, (2, n, \lambda)} \right \}
\label{jmu2n}
\end{eqnarray}
where
\begin{eqnarray}
&& T_{on, (1, n, \lambda)} = ~
\psi^* _{\pi}(k^+, {\bf k}_{\perp}; P^+_{\pi}, {\bf P}_{\pi \perp})
~ \times \quad \quad \quad \quad \quad \quad
\label{Ton1n} \\
\nonumber \\
&& Tr \left [[(\rlap\slash k - \rlap\slash P_{\pi})_{on} + m]
~\gamma^5 [(\rlap\slash k - \rlap\slash q)_{on} + m]~
\left [ \epsilon_{\lambda} (P_n) \cdot \widehat{V}_{n}(k,k-P_n) ~ \right ] _{k^- = k^-_{on}}
~(\rlap\slash k_{on} + m)~ \gamma^5 \right ] \quad \ ,
\nonumber
\end{eqnarray}
\begin{eqnarray}
&& T_{on, (2, n, \lambda)} = - ~ \psi^* _{\bar{\pi}}((k^+ - P^+ _{\pi} ), ({\bf k -
P_{\pi}})_{\perp}; P^+_{\bar{\pi}}, {\bf P}_{{\bar{\pi}} \perp})
~ ~
\times \quad \quad \quad \quad
\label{Ton2n} \\ \nonumber \\
&& Tr \left [ [(\rlap\slash k - \rlap\slash P_{\pi})_{on} + m]
~ \gamma^5 [(\rlap\slash k - \rlap\slash q)_{on} + m]~
\left [ \epsilon_{\lambda} (P_n) \cdot \widehat{V}_{n}(k,k-P_n) ~ \right ]
_{k^- = q^- + (k - q)^-_{on}}
(\rlap\slash k_{on} + m)~ \gamma^5 \right ] ~~ ,
\nonumber
\end{eqnarray}
\begin{eqnarray}
&& T_{1, (1, n, \lambda)} = - ~\frac{1} { 2 } ~ \frac{m}{f_\pi} ~
\left[\overline\Lambda _{\pi}(k; P_{\pi}) \right ]_{k^- = k^-_{on}} ~
\times \quad \quad
\nonumber \\
\nonumber \\
&& Tr \left[ \gamma^+
~\gamma^5 [(\rlap\slash k - \rlap\slash q)_{on} + m]~
\left [ \epsilon_{\lambda} (P_n) \cdot \widehat{V}_{n}(k,k-P_n) ~ \right ] _{k^- = k^-_{on}}
~(\rlap\slash k_{on} + m)~ \gamma^5 \right ] \quad ,
\label{T11n}
\end{eqnarray}
\begin{eqnarray}
&& T_{1, (2, n, \lambda)} = - ~\frac{1}{ 2 } ~ \frac{m}{f_\pi} ~
\left [ \Lambda_{\bar{\pi}}(k - P_{\pi},P_{\bar{\pi}})
\right ]_{k^- = q^- + (k - q)^-_{on}}
~ ~ \times \quad \quad \quad \quad
\nonumber \\
\nonumber \\
&& Tr \left [ \gamma^+
~\gamma^5 [(\rlap\slash k - \rlap\slash q)_{on} + m]~
\left [ \epsilon_{\lambda} (P_n) \cdot \widehat{V}_{n}(k,k-P_n) ~ \right ]
_{k^- = q^- + (k - q)^-_{on}}
~(\rlap\slash k_{on} + m)~ \gamma^5 \right ]
~~ ,
\label{T12n}
\end{eqnarray}
\begin{eqnarray}
&& T_{2, (1, n, \lambda)} =
- ~ \frac{1}{2} ~ \psi^* _{\pi}(k^+, {\bf k}_{\perp}; P^+_{\pi}, {\bf P}_{\pi \perp}) ~
~ \times \quad \quad
\nonumber \\
\nonumber \\
&& Tr \left [[(\rlap\slash k - \rlap\slash P_{\pi})_{on} + m]
~\gamma^5 ~ \gamma^+ ~
\left [ \epsilon_{\lambda} (P_n) \cdot \widehat{V}_{n}(k,k-P_n) ~ \right ] _{k^- = k^-_{on}}
~(\rlap\slash k_{on} + m)~ \gamma^5 \right ] ~~ ,
\label{T21n}
\end{eqnarray}
\begin{eqnarray}
&& T_{3, (2, n, \lambda)} = - ~ \frac{1}{2} ~ \psi^* _{\bar{\pi}}((k^+ - P^+ _{\pi} ), ({\bf k -
P_{\pi}})_{\perp}; P^+_{\bar{\pi}}, {\bf P}_{{\bar{\pi}} \perp})
~
~ \times \quad \quad
\label{T32n} \\
\nonumber \\
&& Tr \left [[(\rlap\slash k - \rlap\slash P_{\pi})_{on} + m]
~\gamma^5 [(\rlap\slash k - \rlap\slash q)_{on} + m]~
\left [ \epsilon_{\lambda} (P_n) \cdot \widehat{V}_{n}(k,k-P_n) ~ \right ]
_{k^- = q^- + (k - q)^-_{on}}
~ \gamma^+ ~ \gamma^5 \right ] \quad .
\nonumber
\end{eqnarray}
Let us notice that the momentum component of the LF pion wave function does not appear in
the instantaneous terms $T_{1, (1, n, \lambda)}$ and $T_{1, (2, n, \lambda)}$,
because in these terms the propagator
$[(\rlap\slash k - \rlap\slash P_{\pi})_{on} + m]/
(k^- -P^-_{\pi} - (k -
P_{\pi})^-_{on} + \frac{\imath\epsilon}{k^+ - P^+_{\pi}})$
is replaced by $\gamma^+/2$. Indeed the amplitude
\begin{eqnarray}
~\frac{1} { 2 } ~ \frac{m}{f_\pi} ~(\rlap\slash k_{on} + m)~ \gamma^5 ~
\left[\overline\Lambda _{\pi}(k; P_{\pi}) \right ]_{k^- = k^-_{on}} ~
\gamma^+
\label{ampist}
\end{eqnarray}
does not obey the same two-body Schroedinger-like equation as the
light-front pion wave function
$\phi_\pi(k^+,\vec k_\perp; P^+_{\pi},\vec P_{\pi \perp})$ does.
As already noted at the end of Sect. III A,
the first and the second term of Eq. (\ref{jmuD}) are represented in Fig. 2 by
the diagrams (a) and (b), respectively. Note that, due to the $\Theta$ functions,
the final pion or antipion wave functions enter into the
first or the second term of Eq. (\ref{jmuD}), respectively.
In Eq. (\ref{jmuD})
the pion vertexes $[\Lambda _{\overline\pi}(k-P_\pi; P_{\overline\pi})]$, evaluated at
$k^-=k^-_{on}$, and
$[\overline\Lambda _{\pi}(k; P_{\pi})]$,
evaluated at $k^-=q^- + (k - q)^-_{on}$, have the momentum fraction
outside the valence-sector range $[0,1]$ and can be related to the quark amplitudes for radiative
antipion or pion emission, respectively (see Figs. 2 (a), 2 (b)).
The presence of these
vertexes gives rise to the contribution of the nonvalence
component of the virtual-photon wave function, relevant for the process under
consideration. In
the space-like region the analogous processes can be interpreted for
$q^+>0$ as the contribution of the nonvalence component of the
pion wave function in the final state \cite{JI01}. These points have already been illustrated in
Section V.
If we choose to take the Dirac structures in the photon vertex with both
quarks on their
mass shell, i.e. $\Gamma^{\mu}(k,q) = \left [\Gamma^{\mu}(k,q) \right ]_{on}$
(see Eq. (\ref{cur7b})), then
whenever the full expression for the light-front vector meson wave function
$\phi_{n \lambda} (k^+,\vec k_\perp; P^+_n,\vec P_{n \perp})$,
given by Eq. (\ref{wfn}), appears in Eq. (\ref{jmuD}),
we can take advantage of our identification of Eqs. (\ref{wfn}) and (\ref{wfpn})
to express the quantities $I_{1, n, \lambda}$ and $I_{2, n, \lambda}$
through the momentum component of the HLFD VM wave function.
However,
in the instantaneous contributions to $I_{1, n, \lambda}$ and $I_{2, n, \lambda}$ which are
proportional to the
quantities $T_{2, (1, n, \lambda)}$ and $T_{3, (2, n, \lambda)}$
we do not express the VM vertex functions
$\left [ \Lambda_{n}(k,P_n) \right ]_{k^- = k^-_{on}}$ and
$\left [ \Lambda_{n}(k,P_n) \right ]_{k^- = q^- + (k - q)^-_{on}}$
through the momentum component of the HLFD VM wave function,
because the full expression for this function given by Eq. (\ref{wfpn})
does not appear in these instantaneous terms.
Then we obtain
\begin{eqnarray}
I_{1, n, \lambda} &=&
\left [ \Lambda_{\bar{\pi}}(k - P_{\pi},P_{\bar{\pi}})^{~}_{~} \right ]_{k^- = k^-_{on}}
~ ~ \times \nonumber \\
\nonumber \\
&& \left \{ \frac{ \psi_{n}(k^+, {\bf k}_{\perp}; P^+_{n}, {\bf P}_{n \perp}) ~
[M^2_n - M^2_0(k^+, {\bf k}_{\perp}; P^+_{n}, {\bf P}_{n \perp})] }
{ \left [ q^2 - M^2_0(k^+, {\bf k}_{\perp}; q^+, {\bf q}_{\perp}) + \imath \epsilon \right ]} ~
\left [ T_{on, (1, n, \lambda)} ~ + ~ T_{1, (1, n, \lambda)} \right ] ~ + \right .
\nonumber \\
\nonumber \\
&&\left . \left [ \Lambda_{n}(k,P_n)^{~}_{~} \right ]_{k^- = k^-_{on}}
~ T_{2, (1, n, \lambda)} \right \}
\label{j1n}
\end{eqnarray}
\begin{eqnarray}
I_{2, n, \lambda} &=&
\left [ \overline\Lambda_{\pi}(k,P_{\pi}) \right ]_{k^- = q^- + (k - q)^-_{on}}
~ ~ \times \nonumber \\
\nonumber \\
&& \left \{ \frac{ \psi_{n}(k^+, {\bf k}_{\perp}; P^+_{n}, {\bf P}_{n \perp}) ~
[M^2_n - M^2_0(k^+, {\bf k}_{\perp}; P^+_{n}, {\bf P}_{n \perp})] }
{\left [ q^2 - M^2_0(k^+, {\bf k}_{\perp}; q^+, {\bf q}_{\perp}) + \imath \epsilon \right ] }
~ \left [ T_{on, (2, n, \lambda)} ~ + ~ T_{1, (2, n, \lambda)} \right ] ~ + \right .
\nonumber \\
\nonumber \\
&&\left . \left [ \Lambda_{n}(k,P_n) \right ]_{k^- = q^- + (k - q)^-_{on}} ~
T_{3, (2, n, \lambda)} \right \}
\label{j2n}
\end{eqnarray}
The quantities
$T_{on, (1, n, \lambda)} , ~ T_{1, (1, n, \lambda)}, ~ T_{2, (1, n, \lambda)}$
and the quantities $ T_{on, (2, n, \lambda)} , ~ T_{1, (2, n, \lambda)}, ~
T_{3, (2, n, \lambda)} $ in Eq. (\ref{j1n}) and in Eq. (\ref{j2n})
have the same expressions as in Eqs. (\ref{Ton1n}), (\ref{T11n}), (\ref{T21n})
and in Eqs. (\ref{Ton2n}), (\ref{T12n}), and (\ref{T32n}), respectively, with
$\left [ \epsilon_{\lambda} (P_n) \cdot \widehat{V}_{n}(k,k-P_n) ~ \right ] _{k^- = k^-_{on}}$
and $\left [ \epsilon_{\lambda} (P_n) \cdot \widehat{V}_{n}(k,k-P_n) ~ \right ]
_{k^- = q^- + (k - q)^-_{on}}$
both replaced by
$\epsilon_{\lambda} (P_n) \cdot \left [ \widehat{V}_{n}(k,k-P_n) ~ \right ]_{on}$
(see Eq. (\ref{gams1}) for the definition of
$\left [ \widehat{V}_{n}(k,k-P_n) ~ \right ]_{on}$).
Note that
the region of integration over $k^+$ in Eq. (\ref{jmuD})
(a consequence of the nonvanishing
integration in the $k^-$ complex plane) agrees with the
support of the wave functions $\psi_{\pi}$ and $\psi_{\bar{\pi}}$ of Eqs.
(\ref{Ton1n}) and (\ref{Ton2n}), respectively. Furthermore,
in agreement with the above assumptions, the vertex associated with the virtual photon
and consequently the wave function $\psi_{n}(k^+, {\bf k}_{\perp}; P^+_{n}, {\bf P}_{n \perp}) $
in Eqs. (\ref{j1n}) and (\ref{j2n}) have the intrinsic fraction of the plus-component
of the quark momentum, $k^+/q^+ = k^+/P_n^+$, in the interval [0, 1].
To be able to evaluate the TL pion form factor
we still have to assign a value, in the instantaneous terms, to the VM vertex functions
$\left [ \Lambda_{n}(k,P_n)^{~}_{~} \right ]_{k^- = k^-_{on}}$ and
$\left [ \Lambda_{n}(k,P_n) \right ]_{k^- = q^- + (k - q)^-_{on}}$, as well as to the
pion vertex functions $\left[\overline\Lambda _{\pi}(k; P_{\pi}) \right ]_{k^- = k^-_{on}}$
and $ \left [ \Lambda_{\bar{\pi}}(k - P_{\pi},P_{\bar{\pi}})
\right ]_{k^- = q^- + (k - q)^-_{on}}$.
\subsection{Space-like case}
As in the time-like case, let us replace in Eqs. (\ref{jmuI}) and (\ref{jmuG})
the pion vertex function with its expression in terms of the momentum component of
the HLFD pion wave function, whenever the full expression for
the LF pion wave function
(Eq. (\ref{wf2})) appears,
taking advantage of our identification of Eqs. (\ref{wf2})
and (\ref{wfp}).
\subsubsection{Valence region contribution}
Substituting in Eq. (\ref{jmuI}) the initial and final pion wave
functions,
and noting that for the final pion
the ``bar'' vertex gives the complex conjugate wave function, while
in the initial state the vertex gives the initial pion wave
function, in the valence region one gets (see Fig. 3):
\begin{eqnarray}
j^{(I) \mu} &=&
~ \frac{ e } {(2\pi)^3} N_c~\int_{0}^{P^+_{\pi}}
\frac{ dk^{\prime +} d{\bf k}^{\prime}_{\perp}} {(k^{\prime+}-
P^+_{\pi}) k^{\prime+}(P_{\pi \prime}^+ - k^{\prime+})}~ \times
\nonumber \\ &&
\nonumber \\ &&
\left [ {\overline T}^{\mu}_{on, (4)} ~ ~ \psi^*_{\pi\prime}(k^{\prime +}, {\bf k}^\prime_{\perp};
P^+_{\pi\prime}, {\bf P}_{\pi \prime \perp})
~ \psi_{\pi}(P^+_\pi-k^{\prime +}, {\bf P}_{\pi\perp}-{\bf
k}^\prime_{\perp}; P^+_{\pi}, {\bf P}_{\pi \perp}) ~ ~ + \right .
\nonumber \\ &&
\nonumber \\ &&
\left . + ~ ~ {\overline T}^{\mu}_{2, (4)} ~ ~
\psi_{\pi}(P^+_\pi-k^{\prime +}, {\bf P}_{\pi\perp}-{\bf k}^\prime_{\perp};
P^+_{\pi}, {\bf P}_{\pi \perp}) ~
\left [ \overline \Lambda_{\pi \prime}(k^{\prime}, P_{\pi \prime})
\right ] _{k^{\prime -} = k^{\prime -}_{on} } ~ ~ + \right .
\nonumber \\ &&
\nonumber \\ &&
\left . + ~ ~ {\overline T}^{\mu}_{3, (4)} ~ ~
\psi^*_{\pi\prime}(k^{\prime +}, {\bf k}^\prime_{\perp};
P^+_{\pi\prime}, {\bf P}_{\pi \prime \perp}) ~
\left [ \Lambda_{\pi}(P_{\pi}-k^{\prime}, P_{\pi})
\right ] _{k^{\prime -} = k^{\prime -}_{on}} \right ] ~~ ,
\label{jmuIa}
\end{eqnarray}
where
\begin{eqnarray}
{\overline T}^{\mu}_{on, (4)} =
Tr \left [(\rlap\slash {k^{\prime}}_{on} + m)
~\gamma^5 [(\rlap\slash {k^{\prime}} - \rlap\slash {P_{\pi \prime}})_{on} + m]~
\Gamma^\mu(4)
~ [(\rlap\slash {k^{\prime}} -
\rlap\slash {P_{\pi}})_{on} + m]~ \gamma^5 \right ]
\label{TonIb}
\end{eqnarray}
\begin{eqnarray}
{\overline T}^{\mu}_{2, (4)} &=&
- ~ \frac{1}{2} ~ Tr \left [(\rlap\slash {k^{\prime}}_{on} + m)
~\gamma^5 ~ \gamma^+ ~
\Gamma^\mu(4)
~[(\rlap\slash {k^{\prime}} -
\rlap\slash {P_{\pi}})_{on} + m]~ \gamma^5 \right ]
\label{T2Ib}
\end{eqnarray}
\begin{eqnarray}
{\overline T}^{\mu}_{3, (4)} &=&
- ~ \frac{1}{2} ~ Tr \left [(\rlap\slash {k^{\prime}}_{on} + m)
~\gamma^5 [(\rlap\slash {k^{\prime}} - \rlap\slash {P_{\pi \prime}})_{on} + m]~
\Gamma^\mu(4)
~ \gamma^+ ~ \gamma^5 \right ] ~~ .
\label{T3Ib}
\end{eqnarray}
In the instantaneous terms proportional to ${\overline T}^{\mu}_{2, (4)} $ and
to ${\overline T}^{\mu}_{3, (4)}$ we do not express the pion vertex functions
$\left [ \overline \Lambda_{\pi \prime}(k^{\prime}, P_{\pi \prime})
\right ] _{k^{\prime -} = k^{\prime -}_{on} }$ and
$\left [ \Lambda_{\pi}(P_{\pi}-k^{\prime}, P_{\pi})
\right ] _{k^{\prime -} = k^{\prime -}_{on}}$
in terms of the momentum component of the HLFD pion wave function, because of the presence
of $\gamma^+$ instead of
$[(\rlap\slash {k^{\prime}} - \rlap\slash {P_{\pi \prime}})_{on} + m]$ or
$[(\rlap\slash {k^{\prime}} -
\rlap\slash {P_{\pi}})_{on} + m]$, respectively.
In Eq. (\ref{jmuIa}) the photon vertex, $\Gamma^\mu(4) = \Gamma^\mu(k^{\prime} -
P_{\pi},q)$, evaluated at $k^{\prime-}=k^{\prime-}_{on}$, is the
amplitude for the photon absorption by a quark.
As discussed in Sect. V B, the photon absorption operator can be decomposed into a bare vertex,
i.e. $\gamma^\mu$ (Fig. 7 (a)), plus other terms. From the expansion in the
light-front Fock-space, the next term relevant for the process we are analyzing
is due to a
contribution of the nonvalence $2q2\overline q$ component of the
final pion wave function, see diagram (b) of Fig. 7. This
contribution
can be thought of as an expectation value
of an operator between the valence component of the wave functions
for the initial and final pions. The operator can be constructed
by applying to the
virtual photon wave-function the kernel, $\cal H$, which produces the
nonvalence pion component from the valence one (see Eq. (\ref{phabs})).
\subsubsection{Pair-production contribution}
The pair-production contribution to the current can also be rewritten
in terms of the momentum component of the HLFD pion
wave function (see Eqs. (\ref{wf2}) and
(\ref{wfp})) when the light-front pion wave function appears. Then Eq. (\ref{jmuG}) becomes
(see Fig. 3 (b)):
\begin{eqnarray}
j^{(II) \mu} =
- \frac{ e N_c} {(2\pi)^3} \frac{m}{f_\pi} \int_0^{q^+}
\frac{ dk^+ d{\bf k}_{\perp} ~ \left [\Lambda _{\pi}(-k; P_\pi) \right] _{k^-=q^- + (k - q)^-_{on}} }
{(k^+ + P^+_{\pi}) ~ k^+ ~ (q^+ - k^+)} ~
\sum_{n, \lambda}
~ {\sqrt{2} ~ [\epsilon ^{\mu}_{\lambda}(P_n)]^* f_{Vn}
\over
\left [ q^2 - M^2_n + \imath M_n \tilde{\Gamma}_n(q^2) \right ]} ~
\times &&
\nonumber \\
\left [ \Lambda_{n}(k,P_n) \right] _{k^- = q^- + (k - q)^-_{on}}
\left \{ \frac{ q^+ }
{ \left [ q^2 - M^2_0(k^+, {\bf k}_{\perp}; q^+, {\bf q}_{\perp}) + \imath \epsilon \right ]}
\left [ T^{\prime}_{on, (2,n)} ~ + ~ T^{\prime}_{1, (2,n)} \right ] ~ + ~ T^{\prime}_{3, (2,n)} \right \}
\quad ~
\label{jmuO1}
\end{eqnarray}
where
\begin{eqnarray}
&&T^{\prime}_{on, (2,n)} = \psi^* _{\pi\prime}((k^+ + P^+ _{\pi} ), ({\bf k} +
{\bf P}_{\pi})_{\perp}; P^+_{ \pi\prime}, {\bf P}_{\pi\prime \perp}) ~
\times \quad \quad \quad
\label{TonIIn} \\
&& Tr \left [[(\rlap\slash k + \rlap\slash P_{\pi})_{on} + m]
\gamma^5 ~ [(\rlap\slash k - \rlap\slash q)_{on} + m]
\left [ \epsilon_{\lambda} (P_n) \cdot \widehat{V}_{n}(k,k-P_n) \right ]_{k^-=q^- + (k - q)^-_{on}}
~(\rlap\slash k_{on} + m)~ \gamma^5 \right ]
\nonumber
\end{eqnarray}
\begin{eqnarray}
&& T^{\prime}_{1, (2,n)} =
\frac{1} { 2 } ~ \frac{m}{f_\pi} ~
\left [ \overline\Lambda_{\pi \prime}(k + P_{\pi}, P_{\pi \prime})
\right ] _{k^- = q^- + (k - q)^-_{on}}
\times \quad \quad \quad
\label{T1IIn} \\
&& Tr \left [ \gamma^+ ~ \gamma^5 ~ [(\rlap\slash k - \rlap\slash q)_{on} + m]~
\left [ \epsilon_{\lambda} (P_n) \cdot \widehat{V}_{n}(k,k-P_n) \right ]_{k^-=q^- + (k - q)^-_{on}}
~(\rlap\slash k_{on} + m)~ \gamma^5 \right ]
\nonumber
\end{eqnarray}
\begin{eqnarray}
&& T^{\prime}_{3, (2,n)} = \frac{1} { 2 } ~ \psi^* _{\pi\prime}((k^+ + P^+ _{\pi} ), ({\bf k} +
{\bf P}_{\pi})_{\perp}; P^+_{ \pi\prime}, {\bf P}_{\pi\prime \perp}) ~
\times \quad \quad \quad
\label{T3IIn} \\
&& Tr \left [[(\rlap\slash k + \rlap\slash P_{\pi})_{on} + m]
~\gamma^5 ~ [(\rlap\slash k - \rlap\slash q)_{on} + m]~
\left [ \epsilon_{\lambda} (P_n) \cdot \widehat{V}_{n}(k,k-P_n) \right ]_{k^-=q^- + (k - q)^-_{on}}
~ \gamma^+ ~ \gamma^5 \right ]
\nonumber
\end{eqnarray}
As in the time-like case, we have used the expression of Eq. (\ref{cur7}) for the
photon vertex with the virtual photon going into a $q\bar{q}$ pair.
The ``bar'' vertex $ \overline\Lambda_{\pi \prime}$ implies that the final pion wave
function in the above expressions has to be complex conjugated.
If we take the Dirac structures in the photon vertex with both the quarks on their
mass shell, as in the time-like case
(see Eq. (\ref{cur7b})), then using Eqs. (\ref{wfn}) and (\ref{wfpn})
we can express $j^{(II) \mu}$ through the momentum component of the
HLFD vector meson wave functions,
when the LF VM wave function is present, i.e. in the terms given by Eqs.
(\ref{TonIIn}) and (\ref{T1IIn}):
\begin{eqnarray}
&&j^{(II) \mu} =
- \frac{ e N_c } {(2\pi)^3} \frac{m}{f_\pi} \int_0^{q^+}
\frac{ dk^+ d{\bf k}_{\perp} ~\left[\Lambda _{\pi}(-k; P_\pi) \right] _{k^-=q^- + (k - q)^-_{on}} }
{(k^+ + P^+_{\pi}) ~ k^+ ~ (q^+ - k^+)} ~
\sum_{n, \lambda}
~ {\sqrt{2} ~ [\epsilon ^{\mu}_{\lambda}(P_n)]^* ~ f_{Vn} \over
\left [ q^2 - M^2_n + \imath M_n \tilde{\Gamma}_n(q^2) \right ]} ~
\times \nonumber \\
&&\left \{ \frac{ \psi_{n}(k^+, {\bf k}_{\perp}; P^+_{n}, {\bf P}_{n \perp}) ~
[M^2_n - M^2_0(k^+, {\bf k}_{\perp}; P^+_{n}, {\bf P}_{n \perp})] }
{ \left [ q^2 - M^2_0(k^+, {\bf k}_{\perp}; q^+, {\bf q}_{\perp}) + \imath \epsilon \right ]} ~
\left [ T^{\prime}_{on, (2,n)} ~ + ~ T^{\prime}_{1, (2,n)} \right ] ~ \right .
+
\nonumber \\
\nonumber \\
&& \left . \left [ \Lambda_{n}(k,P_n) \right ] _{k^-=q^- + (k - q)^-_{on}} ~
T^{\prime}_{3, (2,n)} \right \}
\label{jmuO1b}
\end{eqnarray}
with
\begin{eqnarray}
T^{\prime}_{on,(2,n)} &=& \psi^* _{\pi\prime}((k^+ + P^+ _{\pi} ), ({\bf k} +
{\bf P}_{\pi})_{\perp}; P^+_{ \pi\prime}, {\bf P}_{\pi\prime \perp}) ~ \times
\label{TonIInb} \\
&& Tr \left [[(\rlap\slash k + \rlap\slash P_{\pi})_{on} + m]
\gamma^5 ~ [(\rlap\slash k - \rlap\slash q)_{on} + m] ~
\epsilon_{\lambda} (P_n) \cdot \left [\widehat{V}_{n}(k,k-P_n) \right ]_{on}
(\rlap\slash k_{on} + m)~ \gamma^5 \right ]
\nonumber
\end{eqnarray}
\begin{eqnarray}
T^{\prime}_{1, (2,n)} &=&
\frac{1} { 2 } ~ \frac{m}{f_\pi} ~
\left [ \overline\Lambda_{\pi \prime}(k + P_{\pi}, P_{\pi \prime})
\right ] _{k^- = q^- + (k - q)^-_{on}} ~
~
\times \quad \quad
\nonumber \\
&& Tr \left [ \gamma^+ ~ \gamma^5 ~ [(\rlap\slash k - \rlap\slash q)_{on} + m] ~
\epsilon_{\lambda} (P_n) \cdot \left [ \widehat{V}_{n}(k,k-P_n) \right ]_{on}
~(\rlap\slash k_{on} + m) ~ \gamma^5 \right ]
\label{T1IInb}
\end{eqnarray}
\begin{eqnarray}
T^{\prime}_{3, (2,n)} &=& \frac{1} { 2 } ~
\psi^* _{\pi\prime}((k^+ + P^+ _{\pi} ), ({\bf k} +
{\bf P}_{\pi})_{\perp}; P^+_{ \pi\prime}, {\bf P}_{\pi\prime \perp}) ~
\times \quad \quad \quad
\label{T3IInb} \\
&& Tr \left [[(\rlap\slash k + \rlap\slash P_{\pi})_{on} + m]
~\gamma^5 ~ [(\rlap\slash k - \rlap\slash q)_{on} + m]~
\epsilon_{\lambda} (P_n) \cdot \left [ \widehat{V}_{n}(k,k-P_n) \right ]_{on}
~ \gamma^+ ~ \gamma^5 \right ]
\nonumber
\end{eqnarray}
The vertex $\Lambda _{\pi}(-k; P_\pi)$ evaluated at
$k^-=q^- + (k - q)^-_{on}$ represents the pion absorption amplitude by an on-shell quark. The
presence of this process can also be interpreted as a $2q2\overline q$
component in the final pion wave function (see Figs. 3 (b) and 7 (c)), as
illustrated in Sect. V.
\section{ Time-like em form factor of the pion}
We have pointed out in the Introduction that,
for a unified description of TL and SL form factors, it is necessary to choose a reference frame
where the plus component of the momentum transfer, $q^+$, is different from zero (otherwise,
$q^2 = q^+ q^- - {\bf q}_{\perp}^2$ cannot be positive). Therefore,
as in Ref. \cite{LPS}, in order to calculate the pion form factor we adopt a
reference frame where ${\bf q}_{\perp}=0$ and $q^+>0$.
The amplitude for the decay of a time-like virtual photon into a $\pi \bar{\pi}$ pair
is written in terms of the time-like pion form factor as follows
\begin{eqnarray} &&
j^{\mu} =\langle \pi \bar{\pi}| \bar{q}(0) \gamma^{\mu}q(0)
|0\rangle = e ~ \left (P^{\mu}_{\pi} -P^{\mu}_{\bar{\pi}} \right )~F_{\pi}(q^2)
\label{dec1}
\end{eqnarray}
where $q^{\mu} =P^{\mu}_{\pi}+P^{\mu}_{\bar{\pi}}~$ is the four momentum of the
virtual photon.
In Fig. 2, the diagrammatic analysis of the virtual-photon decay
into a $\pi \bar{\pi}$ pair is shown.
The virtual-photon decay amplitude can be obtained from Eq.
(\ref{dec1}) by evaluating the plus-component of the matrix
element, $j^{\mu}$. To be able to evaluate
the matrix element $j^{+}$ from Eq. (\ref{jmuD}), we
substitute in Eq. (\ref{jmuD}) constant values for the vertices,
$ \bar{\cal D}_{\pi}$ and ${\cal D}_{\bar\pi}$, namely for pion or antipion
radiation by a quark, Eqs. (\ref{piemi}) and
(\ref{pibemi}), respectively (see also Eqs. (\ref{jmu1n}, \ref{jmu2n})). Then,
it remains to
specify the values of the instantaneous
vertex functions
$\left[\overline\Lambda _{\pi}(k; P_{\pi}) ~ \right ] _{k^- = k^-_{on}}$ in Eq. (\ref{T11n}),
$\left [ \Lambda_{\bar{\pi}}(k - P_{\pi},P_{\bar{\pi}})
\right ] _{k^- = q^- + (k - q)^-_{on}}$ in Eq. (\ref{T12n}),
and $\left [ \Lambda_{n}(k,P_n) \right ]$
in Eqs. (\ref{j1n}, \ref{j2n}). This will be thoroughly discussed in Sect. IX.
By using Eqs. (\ref{dec1})
and (\ref{jmuD}) one can obtain the pion form factor
$F_{\pi}(q^2)$ from the plus-component of the current:
\begin{eqnarray} &&
F_{\pi}(q^2) = \sum_n ~ {f_{Vn} \over
\left [ q^2 - M^2_n + \imath M_n \tilde{\Gamma}_n(q^2) \right ]} ~~ g^+_{Vn}(q^2)
\label{tlff}
\end{eqnarray}
where $g^+_{Vn}(q^2)$, for $q^2 > 0$, is the form factor for the VM decay into a pair of pions,
as expected from the VMD approximation. The characteristic feature of our approach
is that we aim at a microscopic description of both $f_{Vn} $ and $g^+_{Vn}(q^2)$.
Let us now evaluate the sum over the polarizations, $\sum_{\lambda}$, in Eq. (\ref{jmuD}).
The momentum of the vector meson is $P^{\mu}_n \equiv \{
P^-_n=(|{\bf q}_{\perp}|^2+M^2_n)/q^+,{\bf q}_{\perp},q^+ \}$ and
the momentum of the virtual photon is $q^{\mu} \equiv \{
q^-,{\bf q}_{\perp},q^+ \}$
(as already noted, see Fig. 4, at the production vertex the LF
three-momentum is conserved).
In a frame where ${\bf q}_{\perp}=0$,
one has $q^- = q^2/q^+$ for the photon, while for the vector meson $P^{-}_n
= M^2_n/q^+$. Using Cartesian components for the four vectors,
i.e. $a^{\mu} \equiv \left [ a^0,{\bf a} \right ]$, in this frame
the three polarization four-vectors are given by
\begin{eqnarray} &&
\epsilon^{\mu}_x \equiv [0,1,0,0 ], \quad \epsilon^{\mu}_y
\equiv [0,0,1,0 ], \quad \epsilon^{\mu}_z \equiv [P_{n z}/M_n,0,0,\sqrt{1+\eta} ]
\end{eqnarray}
where $\eta=P^2_{n z}/M^2_n$.
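As a consistency check, the longitudinal polarization four-vector satisfies the
standard normalization and transversality conditions,
\begin{eqnarray} &&
\epsilon_z \cdot \epsilon_z = {P^2_{n z} \over M^2_n} - (1+\eta) = -1 ,
\quad \quad
\epsilon_z \cdot P_n = {P_{n z} \over M_n} ~ P^0_n - \sqrt{1+\eta} ~ P_{n z} = 0 ,
\end{eqnarray}
since $P^0_n = \sqrt{M^2_n + P^2_{n z}} = M_n \sqrt{1+\eta}$.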
Let us recall that, in the frame we are adopting,
$P_{n z}=(q^+ - P^-_n)/2=({q^+}^2 - M^2_n )/ 2q^+$.
Therefore, in the reference frame defined by ${\bf q}_{\perp}=0$ and
$q^+>0$, the zero component of the polarization four-vector $\epsilon^{\mu}_z$
does not have a definite sign.
The plus-component of $\epsilon^{\mu}_z$ is
given by
\begin{eqnarray} &&
\epsilon^{+}_z =
{P_{n z} \over M_n} + \sqrt{1 + {P_{n z}^2 \over M^2_n}} =
{{q^+}^2 - M^2_n \over 2q^+ M_n} +
\sqrt{1+ \left ({{q^+}^2 - M^2_n \over 2q^+ M_n} \right )^2}
\label{eps1}
\end{eqnarray}
The plus-components of the other polarization
four-vectors vanish (i.e.,
$\epsilon^{+}_x=\epsilon^{+}_y=0$) and therefore we have
$\sum_{\lambda} \left [ \epsilon
^{+}_{\lambda}(P_n) \right ]^* \epsilon _{\lambda}(P_n) \cdot
\widehat{V}_{n} = \left [ \epsilon ^{+}_{z}(P_n) \right ]^*
\epsilon _{z}(P_n) \cdot \widehat{V}_{n} $.
Each term of the sum over $n$ in Eq. (\ref{tlff}) is invariant under LF boosts,
which are kinematical, and therefore,
to simplify the calculations, it can be evaluated
in the rest frame of the corresponding resonance. In the rest frame
of the $n$th resonance
one has $q^+=M_n$ and $q^- = q^2/M_n$ for the photon, while
$P^{+}_n= P^{-}_n=M_n$ for the vector meson.
This means that we choose a different frame for
each resonance, but all these frames are related to each other by kinematical LF boosts
along the $z$ axis, and also to the Breit frame where $q^+ = - q^- = \sqrt{-q^2}$,
adopted in previous analyses of the SL region
(${\bf q}_{\perp}=0$ always holds) \cite{pach02,LPS}.
Then, in the evaluation of the sum in Eq. (\ref{tlff})
one has $\epsilon^+_z=1$ and
\begin{eqnarray}
\sum_{\lambda} \left [ \epsilon
^{+}_{\lambda}(P_n) \right ]^* \epsilon _{\lambda}(P_n) \cdot
\widehat{V}_{n} = - \widehat{V}_{n z} \quad
\label{polz}
\end{eqnarray}
for the contribution of any resonance.
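Indeed, in the rest frame of the $n$th resonance one has $P_{n z} = 0$, so that
$\epsilon^{\mu}_z \equiv [0,0,0,1]$ and
\begin{eqnarray} &&
\epsilon^{+}_z = \epsilon^{0}_z + \epsilon^{3}_z = 1 ,
\quad \quad
\epsilon_z(P_n) \cdot \widehat{V}_{n} = - ~ \widehat{V}_{n z} ,
\end{eqnarray}
where the minus sign comes from the metric in the product of the spatial components.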
In conclusion we have
\begin{eqnarray}
g^+_{Vn}(q^2) &=& ~ {1 \over P^+_{\bar{\pi}}-P^+_{{\pi}} } ~
\frac{ N_c} {(2\pi)^3} ~ ~ \times
\nonumber \\
&&\int_0^{q^+} \frac{ \sqrt{2} ~ dk^+ d{\bf k}_{\perp}}{(k^+ - P^+_{\pi}) k^+ (q^+ - k^+)}
\left \{ \Theta (P^+_{\pi} -k^+) ~ \overline{I}_{1, n}
+ \Theta (k^+ - P^+ _{\pi} ) ~ \overline{I}_{2, n} \right \} \ \ , \quad
\label{jmuT}
\end{eqnarray}
where the quantities $\overline{I}_{1, n}$ and $\overline{I}_{2, n}$
can be obtained from Eqs. (\ref{j1n}), (\ref{j2n}) replacing
$m/f_{\pi} ~
\left [ \Lambda_{\bar{\pi}}(k - P_{\pi},P_{\bar{\pi}})^{~}_{~} \right ]_{k^- = k^-_{on}}$ and
$m/f_{\pi} ~ \left [ \overline\Lambda_{\pi}(k,P_{\pi}) \right ]_{k^- = q^- + (k - q)^-_{on}}$
with ${\cal{D}}_{\overline{\pi}}$ and $\overline{{\cal{D}}}_{\pi}$,
respectively (see Eqs. (\ref{pibemi}) and (\ref{piemi})):
\begin{eqnarray}
\overline{I}_{1, n} &=&
{\cal{D}}_{\overline\pi} ~
\left \{ \frac{ \psi_{n}(k^+, {\bf k}_{\perp}; P^+_{n}, {\bf 0}) ~
[M^2_n - M^2_0(k^+, {\bf k}_{\perp}; P^+_{n}, {\bf 0})] }
{ \left [ q^2 - M^2_0(k^+, {\bf k}_{\perp}; q^+, {\bf 0}) + \imath \epsilon \right ]} ~
\left [ {\cal{T}}_{on, (1, n)} ~ + ~ {\cal{T}}_{1, (1, n)} \right ] ~
+ \right .
\nonumber \\
&& \left . \left [ \Lambda_{n}(k,P_n) \right ]_{k^- = k^-_{on}} ~ {\cal{T}}_{2, (1, n)} \right \}
\label{jmu1nT}
\end{eqnarray}
\begin{eqnarray}
\overline{I}_{2, n} &=&
\overline{{\cal{D}}}_{\pi} ~
\left \{ \frac{
\psi_{n}(k^+, {\bf k}_{\perp}; P^+_{n}, {\bf 0}) ~
[M^2_n - M^2_0(k^+, {\bf k}_{\perp}; P^+_{n}, {\bf 0})]}
{\left [ q^2 - M^2_0(k^+, {\bf k}_{\perp}; q^+, {\bf 0}) + \imath \epsilon \right ]~
} ~ \left [ {\cal{T}}_{on, (2, n)} ~ + {\cal{T}}_{1, (2, n)} \right ] ~
+ \right .
\nonumber \\
&& \left . \left [ \Lambda_{n}(k,P_n) \right ]_{k^- = q^- + (k - q)^-_{on}} ~
{\cal{T}}_{3, (2, n)} \right \}
\label{jmu2nT}
\end{eqnarray}
with
\begin{eqnarray}
{\cal{T}}_{on, (1, n)} &=& - ~
\psi^* _{\pi}(k^+, {\bf k}_{\perp}; P^+_{\pi}, {\bf P}_{\pi \perp})
~
~ \times \quad \quad
\label{Ton1VT} \\
&& Tr \left [[(\rlap\slash k - \rlap\slash P_{\pi})_{on} + m]
~\gamma^5 [(\rlap\slash k - \rlap\slash q)_{on} + m]~
\left [ \widehat{V}_{nz}(k,k-P_n) ~ \right ] _{on}
~(\rlap\slash k_{on} + m)~ \gamma^5 \right ]
\nonumber
\end{eqnarray}
\begin{eqnarray}
{\cal{T}}_{on, (2, n)} &=& ~\psi^* _{\bar{\pi}}((k^+ - P^+ _{\pi} ), ({\bf k -
P_{\pi}})_{\perp}; P^+_{\bar{\pi}}, {\bf P}_{{\bar{\pi}} \perp}) ~ ~
\times
\label{Ton2VT} \\
&& Tr \left [ [(\rlap\slash k - \rlap\slash P_{\pi})_{on} + m]
~ \gamma^5 [(\rlap\slash k - \rlap\slash q)_{on} + m]~
\left [ \widehat{V}_{nz}(k,k-P_n) ~ \right ] _{on}
~(\rlap\slash k_{on} + m)~ \gamma^5 \right ]
\nonumber
\end{eqnarray}
\begin{eqnarray}
{\cal{T}}_{1, (1, n)} &=& \frac{1} { 2 } ~ \frac{m}{f_\pi} ~
\left[\overline\Lambda _{\pi}(k; P_{\pi}) ~
\right ] _{k^- = k^-_{on}} ~
\times
\nonumber \\
&& Tr \left[ \gamma^+
~\gamma^5 [(\rlap\slash k - \rlap\slash q)_{on} + m]~
\left [ \widehat{V}_{nz}(k,k-P_n) ~ \right ] _{on}
~(\rlap\slash k_{on} + m)~ \gamma^5 \right ]
\label{T11VT}
\end{eqnarray}
\begin{eqnarray}
{\cal{T}}_{1, (2, n)} &=& \frac{1} { 2 } ~ \frac{m}{f_\pi} ~
\left [ \Lambda_{\bar{\pi}}(k - P_{\pi},P_{\bar{\pi}})
\right ]
_{k^- = q^- + (k - q)^-_{on}} ~
~ ~ \times
\nonumber \\
&& Tr \left [ \gamma^+
~\gamma^5 [(\rlap\slash k - \rlap\slash q)_{on} + m]~
\left [ \widehat{V}_{nz}(k,k-P_n) ~ \right ] _{on}
~(\rlap\slash k_{on} + m)~ \gamma^5 \right ]
\label{T12VT}
\end{eqnarray}
\begin{eqnarray}
{\cal{T}}_{2, (1, n)} &=& ~ \frac{1}{2} ~
\psi^* _{\pi}(k^+, {\bf k}_{\perp}; P^+_{\pi}, {\bf P}_{\pi \perp}) ~
~ \times \quad \quad
\nonumber \\
&& Tr \left [[(\rlap\slash k - \rlap\slash P_{\pi})_{on} + m]
~\gamma^5 ~ \gamma^+ ~
\left [ \widehat{V}_{nz}(k,k-P_n) ~ \right ] _{on}
~(\rlap\slash k_{on} + m)~ \gamma^5 \right ]
\label{T21nT}
\end{eqnarray}
\begin{eqnarray}
{\cal{T}}_{3, (2, n)} &=& ~ \frac{1}{2} ~ \psi^* _{\bar{\pi}}((k^+ - P^+ _{\pi} ), ({\bf k -
P_{\pi}})_{\perp}; P^+_{\bar{\pi}}, {\bf P}_{{\bar{\pi}} \perp})
~
~ \times \quad \quad
\nonumber \\
&& Tr \left [[(\rlap\slash k - \rlap\slash P_{\pi})_{on} + m]
~\gamma^5 [(\rlap\slash k - \rlap\slash q)_{on} + m]~
\left [ \widehat{V}_{nz}(k,k-P_n) ~ \right ] _{on}
~ \gamma^+ ~ \gamma^5 \right ]
\label{T32nT}
\end{eqnarray}
The $^3S_1$ vector meson vertex $[\widehat{V}_{n}(k,k-P_n)]_{on}$,
given by Eq. (\ref{gams1}) and used in previous
calculations \cite{Jaus90}, is completely determined by the
kinematical momenta of the individual quark and antiquark.
In the $^3S_1$ vector meson intrinsic frame, where $q^+=M_n$ and
$\bf q_{\perp}=0$, one has
\begin{eqnarray} &&
\left [ \widehat{V}_{nz}(k,k - P_n) ~ \right ] _{on} =
\left (\gamma^3-
{ k^{3}_{on} - (P_n^{3} - k^{3})_{on} \over M_{0n}+ 2m} \right )
\label{tetap}
\end{eqnarray}
where $M_{0n}^2 = (|{\bf k}_{\perp}|^2+m^2)/[x (1-x)]$, with $x=k^+/q^+=k^+/M_n$.
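The longitudinal combination appearing in Eq. (\ref{tetap}) can be written in a
compact form. Using $k^{3}_{on} = [k^+ - (|{\bf k}_{\perp}|^2+m^2)/k^+]/2$ and the
analogous expression for $(P_n^{3} - k^{3})_{on}$, with $k^+ = x M_n$ and
$(P_n - k)^+ = (1-x) M_n$, one finds
\begin{eqnarray} &&
k^{3}_{on} - (P_n^{3} - k^{3})_{on} = (2x - 1) ~ {M^2_n + M^2_{0n} \over 2 M_n} ~ .
\end{eqnarray}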
The kinematics for the final two-pion state in the particular
frame where the photon has momentum ${\bf q}_{\perp}=0$, $q^+=M_n$
and $q^-=q^2/M_n$ can be derived from the energy-momentum
conservation, which yields
\begin{eqnarray} &&
q^+ = M_n = P^+_{\bar{\pi}} + P^+_{{\pi}},
\quad \quad {\bf P}_{\bar{\pi} \perp} = -{\bf
P}_{{\pi} \perp}
\label{kine}
\end{eqnarray}
and thus
\begin{eqnarray} && q^- = P^-_{\bar{\pi}} + P^-_{{\pi}} =
{|{\bf P}_{{\pi} \perp}|^2 + m^2_{\pi} \over
P^+_{\bar{\pi}}} +{|{\bf P}_{{\pi} \perp}|^2 + m^2_{\pi} \over
P^+_{{\pi}}} = {1 \over q^+} {|{\bf P}_{{\pi} \perp}|^2 + m^2_{\pi}
\over x_{{\pi}}~(1-x_{{\pi}})}
\label{kine1}
\end{eqnarray}
where
$x_{\pi}=P^+_{{\pi}}/q^+$ and $x_{\bar{\pi}} = P^+_{\bar{\pi}}/q^+ =1-x_{\pi}$. Eqs.
(\ref{kine}) and (\ref{kine1}) make explicit the relation
between the kinematical variables of the virtual photon and those of the
pion and the antipion. In the time-like region the value
of $q^2$ does not fully determine the four-momenta
of the pion and the antipion in the final $\pi \bar{\pi}$ state.
In order to reduce the
freedom we make the purely longitudinal choice, i.e.
${\bf P}_{\bar{\pi} \perp} = -{\bf P}_{{\pi} \perp}={\bf 0} $. Then,
from Eqs. (\ref{kine}) and (\ref{kine1}), one obtains
\begin{eqnarray} &&
x_\pi=\frac12\pm\sqrt{\frac14-{m^2_\pi\over q^2}}.
\label{xpi}
\end{eqnarray}
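Indeed, for the purely longitudinal choice one has $q^2 = q^+ q^-$ and, from
Eq. (\ref{kine1}),
\begin{eqnarray} &&
x_{\pi} ~ (1 - x_{\pi}) = {m^2_{\pi} \over q^2} ~ ,
\end{eqnarray}
whose two roots are the values given in Eq. (\ref{xpi}).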
Let us note that the minimum allowed value for $q^2$ is $4 m^2_\pi$.
At this threshold value one has $x_\pi = 1/2$. Therefore, since
\begin{eqnarray} &&
P^+_{\bar{\pi}} - P^+_{{\pi}} =
q^+ - 2 P^+_{{\pi}} = q^+ (1 - 2x_\pi) = M_n (1 - 2x_\pi) ~,
\label{piu}
\end{eqnarray}
one cannot evaluate Eq. (\ref{jmuT}) exactly at threshold, unless an exact cancellation
occurs between vanishing numerator and denominator of Eq. (\ref{jmuT}).
For finite
values of $m_\pi$, the values $x_\pi=1$ or $x_\pi=0$ are possible
only for an infinite value of the momentum transfer and imply an
infinite value of $P_{{\pi z}}$ or $P_{\bar{\pi} z}$,
respectively.
In the limit of {\em a vanishing pion mass} ($m_\pi=0$), Eq. (\ref{xpi}) gives $x_\pi=1$ or $ 0$,
which implies that one of the terms of Eq. (\ref{jmuT}) vanishes due to
the $\Theta$ function. To simplify our calculations, in the following we make the approximation
$m_\pi=0$ and adopt the choice $x_\pi=0$,
which implies $P^+_\pi = 0$, $P^-_\pi = q^- = q^2/M_n$,
$P^+_{\bar{\pi}} = q^+ = M_n$, and $P^-_{\bar{\pi}} = 0$. Then only the second term
of Eq. (\ref{jmuT}), containing the quantity $\overline{I}_{2, n}$, gives a
contribution to the TL pion form factor.
Furthermore, for $m_\pi=0$ one has ${\cal {T}}_{on, (2, n)} = 0 ~$. Indeed, in this limit
$~(\rlap\slash k - \rlap\slash P_{\pi})_{on} = \rlap\slash k_{on}$ and
\begin{eqnarray} &&
(\rlap\slash k_{on} + m) ~ \gamma^5 ~ (\rlap\slash k_{on} + m) =
(\rlap\slash k_{on} + m) ~ (- ~\rlap\slash k_{on} + m) ~ \gamma^5 =
(- ~ \rlap\slash k_{on} \rlap\slash k_{on} + m^2) ~ \gamma^5 = 0 ~ \quad
\label{dec3}
\end{eqnarray}
Therefore only the instantaneous contributions
${\cal {T}}_{1, (2, n)}$ and ${\cal {T}}_{3, (2, n)}$ survive in the limit of a vanishing pion mass
and can be written as follows:
\begin{eqnarray}
{\cal {T}}_{1, (2, n)} &=& - ~ \frac{1}{2} ~ \frac{m}{f_\pi} ~
\left [ \Lambda_{\bar{\pi}}(k - P_{\pi},P_{\bar{\pi}})
\right ] _{k^- = q^- + (k - q)^-_{on}} ~
~ ~ \times
\nonumber \\
&& Tr \left [ \gamma^+
~ [(\rlap\slash k - \rlap\slash q)_{on} + m]~
\left [ \widehat{V}_{nz}(k,k-P_n) ~ \right ] _{on}
~(\rlap\slash k_{on} + m) \right ]
\label{T12VTL}
\end{eqnarray}
\begin{eqnarray}
{\cal {T}}_{3, (2, n)} &=& ~ \frac{1}{2} ~
\psi^* _{\bar{\pi}}(k^+, {\bf k }_{\perp}; M_n, {\bf 0}_{ \perp})
~
~ \times \quad \quad
\nonumber \\
&& Tr \left [[ - \rlap\slash k_{on} + m]
~ [(\rlap\slash k - \rlap\slash q)_{on} + m]~
\left [ \widehat{V}_{nz}(k,k-P_n) ~ \right ] _{on}
~ \gamma^+ \right ] \quad \ .
\label{T32nTL}
\end{eqnarray}
To evaluate the time-like pion form factor we have still to
specify the values of the instantaneous
vertex functions
$\left [ \Lambda_{\bar{\pi}}(k - P_{\pi},P_{\bar{\pi}}) \right ] _{k^- = q^- + (k - q)^-_{on}}$
in Eq. (\ref{T12VTL})
and $\left [ \Lambda_{n}(k,P_n) \right ]_{k^- = q^- + (k - q)^-_{on}}$
in Eq. (\ref{jmu2nT}) that, as already explained, cannot be directly related to
$\psi_{\bar{\pi}}$ and $\psi_n$.
\section{ Space-like em form factor of the pion}
The space-like form factor of the pion can be obtained from the plus
component of the proper current matrix element
\begin{eqnarray} &&
j^{\mu} =\langle \pi | \bar{q}(0) \gamma^{\mu}q(0)
| \pi \prime \rangle = e ~ \left (P^{\mu}_{\pi} + P^{\mu}_{\pi \prime} \right ) ~ F_{\pi}(q^2)
\label{dec2}
\end{eqnarray}
where $q^{\mu} = P^{\mu}_{\pi \prime} - P^{\mu}_{\pi}~$.
In our reference frame, where ${\bf q}_{\perp}=0$ and $q^+>0$, the minus-component of the
four-momentum transfer is given by $q^- = q^2/q^+$, which is negative in the space-like
region. Let us note that
\begin{eqnarray} &&
q^- = {|{\bf P}_{{\pi} \prime
\perp}|^2 + m^2_{\pi} \over P^+_{\pi \prime}} - {|{\bf P}_{{\pi}
\perp}|^2 + m^2_{\pi} \over P^+_{{\pi}} } ~ ~.
\label{qmneg}
\end{eqnarray}
Hence, the constraint $q^- < 0$ is obviously fulfilled for any
value of ${\bf P}_{{\pi} \perp}$, since $|{\bf P}_{{\pi} \prime
\perp}| = |{\bf P}_{{\pi} \perp}|$ and $P^+_{\pi \prime} = q^+ +
P_{\pi}^+ > P_{\pi}^+$. From Eq. (\ref{qmneg}) one has
\begin{eqnarray} &&
q^2 = - (q^+)^2 ~
{|{\bf P}_{{\pi} \perp}|^2 + m^2_{\pi} \over P_{\pi}^+ ~(q^+ +
P_{\pi}^+)} =
- {|{\bf P}_{{\pi} \perp}|^2 + m^2_{\pi}
\over x_{{\pi}}~(1 + x_{{\pi}})} ~ ~,
\label{qSL}
\end{eqnarray}
where $x_{\pi}=P^+_{{\pi}}/q^+$.
Therefore, once a value for $|{\bf P}_{{\pi} \perp}|$ is chosen,
$P_{\pi}^+$ and $P^+_{\pi \prime}$ are fixed. For a purely longitudinal motion
of the pions, i.e. ${\bf P}_{\pi \perp}= {\bf P}_{\pi \prime \perp} = {\bf 0} $,
it is easy to obtain from Eq. (\ref{qSL}) that
\begin{eqnarray}
{P}^+_{\pi} = q^+ \left ( -\frac12 +
\sqrt{\frac14-{m^2_\pi\over q^2}}\right) \ \ \text{and} \ \
{P}^+_{\pi \prime} = q^+ \left(\frac12 +
\sqrt{\frac14-{m^2_\pi \over q^2}}\right) \ .
\label{kinsl}
\end{eqnarray}
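Indeed, for ${\bf P}_{\pi \perp} = {\bf 0}$ Eq. (\ref{qSL}) reduces to
\begin{eqnarray} &&
x_{\pi} ~ (1 + x_{\pi}) = - ~ {m^2_{\pi} \over q^2} ~ ,
\end{eqnarray}
and, since $q^2 < 0$, the right-hand side is positive; the positive root of this
quadratic equation yields Eq. (\ref{kinsl}).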
In the limit $m_\pi=0$, the longitudinal momenta of the pions
according to Eq. (\ref{kinsl}) are
\begin{eqnarray}
{P}^+_\pi=0 \ \ \text{and} \ \ {P}^+_{\pi \prime} = q^+
\label{kinsl0}
\end{eqnarray}
for any value of the momentum transfer.
In a frame where $q^+ \neq 0$, the electromagnetic current $j^+$ in
the space-like region, Eq. (\ref{jIeII}), receives contributions from the valence component
of the wave function, $j^{(I)+}$ given by Eq. (\ref{jmuIa}), as well as
from the nonvalence components, $j^{(II)+}$ of
Eq. (\ref{jmuO1b}), i.e. from the Z-diagram contribution (see Fig. 7).
The contribution of the pion valence wave
function to the current can be calculated from Eq. (\ref{jmuIa}) introducing
the plus component of the operator
$\Gamma^\mu(k - P_{\pi},q)$ for $k^{ -} = k^{ -}_{on}$, as
given in Eq. (\ref{phabs}) and discussed in Sect. V B,
once the values of the pion vertex functions
$\left [ \overline \Lambda_{\pi \prime}(k^{\prime}, P_{\pi \prime})
\right ] _{k^{\prime -} = k^{\prime -}_{on} }$ and
$\left [ \Lambda_{\pi}(P_{\pi} - k^{\prime}, P_{\pi})
\right ] _{k^{\prime -} = k^{\prime -}_{on}}$ in the instantaneous terms have been specified.
In the limit of zero pion mass,
according to Eq. (\ref{jmuIa}) the valence contribution to
the space-like pion form factor vanishes, since ${P}^+_\pi=0$. Then,
only the Z-diagram contribution survives in this limit,
as in the time-like region.
The contribution of the Z-diagram to the elastic pion form factor
can be obtained from Eq. (\ref{dec2}) by substituting
in Eq. (\ref{jmuO1b}) the pion absorption vertex of Eq.
(\ref{pibabs}). The result can be written as follows:
\begin{eqnarray} &&
F^{II}(q^2) = \sum_n~{f_{Vn} \over q^2 -M^2_n}~ f^{II}_n(q^2) \ .
\label{ffpi1} \end{eqnarray}
Since $f^{II}_n(q^2)$ is invariant under
kinematical LF boosts, we choose to evaluate the
contribution of each vector meson, $f^{II}_n(q^2)$, in the same
reference frame that we used in the time-like region, i.e., we adopt
the rest frame for each resonance ($q^+ = M_n$, ${\bf q}_{\perp} =
0$ ; $P^+_n = q^+ = M_n$, $P^-_n = M^2_n/q^+ = M_n$).
Then for a finite value of the pion mass we have:
\begin{eqnarray}
f^{II}_n(q^2) &=& \sqrt{2} ~ {N_c \over 8 \pi^3}
{\epsilon^{+}_{z} \over P^+_{\pi\prime} + P^+_{{\pi}} } \int_0^{q^+} {dk^+
\over k^+ ~ (q^+-k^+) ~ (P^+_{\pi} +k^+)} \int d{\bf k}_{\perp} ~ ~ {\cal{D}}_\pi ~ \times
\nonumber \\ && \nonumber \\ &&
\left \{ { \psi_{n}(k^+, {\bf k}_{\perp}; P^+_{n}, {\bf 0}_{ \perp}) ~ ~
[M^2_n - M^2_0(k^+, {\bf k}_{\perp}; P^+_{n}, {\bf 0}_{\perp})] \over
\left [ q^2-M^2_0(k^+, {\bf k}_{\perp}; q^+, {\bf
0}_{\perp}) + i\epsilon \right ]
} ~
\left [ {\cal T}^{\prime}_{on, (2,n)} ~ + ~ {\cal T}^{\prime}_{1, (2,n)} \right ] ~ + \right .
\nonumber \\ &&
~ \left . \left [ \Lambda_{n}(k,P_n) \right ] _{k^-=q^- + (k - q)^-_{on}} ~ {\cal T}^{\prime}_{3, (2,n)} \right \}
\quad \ ,
\label{ffpi2}
\end{eqnarray}
with
\begin{eqnarray}
{\cal T}^{\prime}_{on, (2,n)} &=&
\psi^* _{\pi\prime}((k^+ + P^+ _{\pi} ), ({\bf k} +
{\bf P}_{\pi})_{\perp}; P^+_{ \pi\prime}, {\bf P}_{\pi\prime \perp}) ~
~
\times
\label{TonIInbb} \\
&& Tr \left [[(\rlap\slash k + \rlap\slash P_{\pi})_{on} + m] ~
\gamma^5 ~ [(\rlap\slash k - \rlap\slash q)_{on} + m]
\left [ \widehat{V}_{nz}(k,k - P_n) ~ \right ] _{on}
(\rlap\slash k_{on} + m)~ \gamma^5 \right ]
\nonumber
\end{eqnarray}
\begin{eqnarray}
{\cal T}^{\prime}_{1, (2,n)} &=& ~
\frac{1} { 2 } ~ \frac{m}{f_\pi} ~
\left [ \overline\Lambda_{\pi \prime}(k + P_{\pi}, P_{\pi \prime})
\right ] _{k^- = q^- + (k - q)^-_{on}}
~ ~
\times
\nonumber \\
&& Tr \left [ \gamma^+ ~ \gamma^5 ~ [(\rlap\slash k - \rlap\slash q)_{on} + m] ~
\left [ \widehat{V}_{nz}(k,k - P_n) ~ \right ] _{on}
~(\rlap\slash k_{on} + m) ~ \gamma^5 \right ]
\label{T1IInbb}
\end{eqnarray}
\begin{eqnarray}
{\cal T}^{\prime}_{3, (2,n)} &=& \frac{1} { 2 } ~
\psi^* _{\pi\prime}((k^+ + P^+ _{\pi} ), ({\bf k} +
{\bf P}_{\pi})_{\perp}; P^+_{ \pi\prime}, {\bf P}_{\pi\prime \perp}) ~ ~
\times
\nonumber \\
&& Tr \left [[(\rlap\slash k + \rlap\slash P_{\pi})_{on} + m]
~\gamma^5 ~ [(\rlap\slash k - \rlap\slash q)_{on} + m]~
\left [ \widehat{V}_{nz}(k,k - P_n) ~ \right ] _{on}
~ \gamma^+ ~ \gamma^5 \right ]
\label{T3IInbb}
\end{eqnarray}
The Dirac structure, $\left [ \widehat{V}_{nz}(k,k - P_n) ~ \right ] _{on} $, for
the $^3S_1$ meson state is given by Eq. (\ref{tetap}). As already noted, in our
reference frame one has $\epsilon^+_z=1$.
Equations (\ref{T1IInbb}) and (\ref{T3IInbb}) represent the instantaneous contributions.
Analogously to the time-like case, for a vanishing pion mass one has
${\cal T}^{\prime}_{on, (2,n)} = 0$ (see Eq. (\ref{dec3})).
Let us now evaluate
$f^{II}_n(q^2)$ at $q^2 \rightarrow 0^-$ for a finite value of the mass of the pion.
To begin with, we consider: i) a constant value for ${\cal{D}}_\pi$, ii)
a simple form for the LF
pion wave function \cite{tobpauli}
\begin{eqnarray}
\psi_{\pi \prime}[(k^+ + P_{\pi}^+), ({\bf k}_{\perp} + {\bf P}_{\pi \perp});
P_{\pi \prime}^+, {\bf P}_{\pi \prime \perp}]
= {m \over f_{\pi}}
{ P^+_{\pi \prime} \over
m^2_{\pi} -
M^2_{0 \pi \prime}(k^+ + P_{\pi}^+, {\bf k}_{\perp} + {\bf P}_{\pi \perp};
P_{\pi \prime}^+, {\bf P}_{\pi \prime \perp}) } \ ,
\label{model3}
\end{eqnarray}
and iii) in the instantaneous term of Eq. (\ref{T1IInbb}) take
$\left [ \overline\Lambda_{\pi \prime}(k + P_{\pi}, P_{\pi \prime})
\right ] _{k^- = q^- + (k - q)^-_{on}}$
proportional to
$\psi^*_{\pi \prime}[(k^+ + P_{\pi}^+), ({\bf k}_{\perp} + {\bf P}_{\pi \perp}); ~
P_{\pi \prime}^+, {\bf P}_{\pi \prime \perp}]$
(see the next Section).
For a finite value of the mass of the pion, let us note that in the limit $q^2 \rightarrow 0^-$
from Eq. (\ref{kinsl}) one obtains ${P}^+_{\pi} \rightarrow \infty$
and ${P}^+_{\pi \prime} = \left (M_n + {P}^+_{\pi}\right )\rightarrow \infty$.
Then, since the squared free mass for the final pion is
\begin{eqnarray} &&
M^2_{0 \pi \prime}[(k^+ + P_{\pi}^+), ({\bf k}_{\perp} + {\bf P}_{\pi \perp});
P_{\pi \prime}^+, {\bf P}_{\pi \prime \perp}]
= {P}^+_{\pi \prime} \left ({|{\bf k}_{ \perp}|^2 + m^2 \over
P_{\pi}^+ + k^+} + {|{\bf k}_{ \perp}|^2 + m^2 \over {P}^+_{\pi
\prime} - P_{\pi}^+ - k^+} \right ) \ , \
\label{M0p}
\end{eqnarray}
it becomes
large for ${P}^+_{\pi} \rightarrow \infty$, i.e. $M^2_{0 \pi
\prime} \sim {P}^+_{{\pi \prime}}$.
Then $\psi^*_{\pi \prime}[(k^+ + P_{\pi}^+), ({\bf k}_{\perp} + {\bf P}_{\pi \perp});
P_{\pi \prime}^+, {\bf P}_{\pi \prime \perp}]$
becomes a constant for
${P}^+_{\pi \prime} \rightarrow \infty$. Furthermore, for
$P^+_{\pi} \rightarrow \infty$ the traces in Eqs. (\ref{TonIInbb}) and (\ref{T1IInbb})
are proportional to
$P^+_{\pi}$. Therefore, collecting together the factors $P^+_{\pi}$
in Eq. (\ref{ffpi2}), one concludes that for a finite value of the pion mass
$\lim_{~ q^2\rightarrow
0^-} ~ f^{II}_n(q^2) \sim \lim_{~ q^2\rightarrow 0^-} ~
1/P^+_{\pi} = 0$. The same result, $\lim_{~q^2\rightarrow 0^-} ~
f^{II}_n(q^2) = 0$, should also hold for pion wave functions which
are eigenfunctions of a Hamiltonian \cite{FPZ02}.
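This limiting behavior can be checked with a short numerical sketch (illustrative values, not the paper's actual calculation): the free mass of Eq. (\ref{M0p}) grows linearly with ${P}^+_{\pi \prime}$, so the asymptotic wave function of Eq. (\ref{model3}) indeed tends to a constant.

```python
# Numerical sketch (illustrative values, not the paper's actual calculation):
# check that the free mass of Eq. (M0p) grows like P+_pi', so that the
# asymptotic wave function psi ~ P+_pi' / (m_pi^2 - M0'^2) tends to a constant.
m, Mn = 0.265, 0.770        # GeV: constituent quark mass, a VM mass
kperp2, kplus = 0.1, 0.3    # GeV^2, GeV: fixed loop momenta
m_pi = 0.140                # GeV: a finite pion mass

def M0sq(P_pi):
    """Squared free mass of Eq. (M0p), with P+_pi' = M_n + P+_pi."""
    P_pip = Mn + P_pi
    return P_pip * ((kperp2 + m**2) / (P_pi + kplus)
                    + (kperp2 + m**2) / (P_pip - P_pi - kplus))

for P_pi in (1e2, 1e4, 1e6):
    P_pip = Mn + P_pi
    ratio = M0sq(P_pi) / P_pip               # -> (kperp2 + m^2)/(Mn - kplus)
    psi = P_pip / (m_pi**2 - M0sq(P_pi))     # asymptotic form of Eq. (model3)
    print(P_pi, ratio, psi)                  # psi approaches a constant
```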
On the contrary,
in the limit of $m_\pi=0$, the longitudinal momenta of the pions are
${P}^+_\pi=0$ and ${P}^+_{\pi \prime} = M_n$, respectively (see Eq. (\ref{kinsl})). Then,
according to Eq. (\ref{jmuIa}), the valence contribution to
the space-like pion form factor vanishes, while
the Z-diagram yields a nonzero contribution.
A comment is appropriate here. In the work of
Ref. \cite{pach02}, where $m_\pi \neq 0$, it was found that the wave function contribution
to the space-like pion form factor strongly decreases in the frame $q^+=\sqrt{-q^2}$
as the momentum transfer $-q^2$ increases. As a consequence, the Z-diagram contribution,
which is zero at $q^2 = 0$,
becomes the dominant one at high momentum transfer. As the pion
mass is artificially decreased in that model, we find that the momentum at which
the Z-diagram starts to dominate the form factor tends toward
zero,
in agreement with the previous
discussion.
Since in this paper we work at the chiral limit of a vanishing pion mass,
in our reference frame,
the full space-like pion form factor is given
by $F^{II}(q^2)$ alone.
It has to be noted that,
as occurs in the time-like region and for the same reasons, for $m_\pi=0$ only
the instantaneous terms ${\cal T}^{\prime}_{1, (2,n)}$ and
${\cal T}^{\prime}_{3, (2,n)}$
(cf. Eqs. (\ref{T1IInbb}) and (\ref{T3IInbb}))
contribute to the pion form factor. These terms can be written in the following form:
\begin{eqnarray}
{\cal T}^{\prime}_{1, (2,n)} &=& ~ - ~
\frac{1} { 2 } ~ \frac{m}{f_\pi} ~
\left [ \overline\Lambda_{\pi \prime}(k + P_{\pi}, P_{\pi \prime})
\right ] _{k^- = q^- + (k - q)^-_{on}}
~ ~
\times
\nonumber \\
&& Tr \left [ \gamma^+ ~ [(\rlap\slash k - \rlap\slash q)_{on} + m] ~
\left [ \widehat{V}_{nz}(k,k - P_n) ~ \right ] _{on}
~(\rlap\slash k_{on} + m) \right ]
\label{T1IInbL}
\end{eqnarray}
\begin{eqnarray}
{\cal T}^{\prime}_{3, (2,n)} &=& \frac{1} { 2 } ~
\psi^* _{\pi\prime}(k^+ , {\bf k} _{\perp}; M_n, {\bf 0}_{ \perp}) ~ ~
\times \quad \quad \quad
\nonumber \\
&& Tr \left [[- \rlap\slash k _{on} + m]
~ [(\rlap\slash k - \rlap\slash q)_{on} + m]~
\left [ \widehat{V}_{nz}(k,k - P_n) ~ \right ] _{on}
~ \gamma^+ \right ] \quad \ .
\label{T3IInbL}
\end{eqnarray}
\section{A Light-front model}
To evaluate the pion form factor we need:
\begin{itemize}
\item[i)] a model for the HLFD pion and vector meson
wave functions which appear in Eqs. (\ref{jmuT}) and (\ref{ffpi2});
\item[ii)] a value for the probability, $P_{q\bar q,n}$, of the VM valence component
(see Appendices D and E);
\item[iii)] an approximation for the pion vertex functions
which represent the pion emission or absorption by a quark;
\item[iv)] to assign a value to the pion and VM vertex functions
with an instantaneous quark leg.
\end{itemize}
The vector-meson resonances are described by an effective light-front model
inspired by QCD \cite{FPZ02}, that can also be applied to the pion.
The squared mass operator for the $S$-wave mesons contains a harmonic-oscillator
interaction, which provides the confinement, and a Dirac delta-function that acts
in the $^1S_0$ channel (with a renormalized strength). The wave
functions for the $^3S_1$ channel are
solutions of the following eigenvalue problem
\begin{eqnarray} &&
\left [4(|{\bf \kappa}|^2 + m^2) + {1 \over 64} \omega^2 r^2 + a \right ]
~\Psi^{HO}_{n}({\bf r})= M^2_{n} ~\Psi^{HO}_{n}({\bf r}) \, \, ~~,
\label{model1}
\end{eqnarray}
where $|{\bf \kappa}|^2 = M^2_{0}/4 -m^2$ is the square of the intrinsic quark three-momentum,
$M^2_{n}= n~\omega ~ + ~ M^2_{\rho}$ and the
eigenfunctions $\Psi^{HO}_{n}({\bf r})$ are the three-dimensional
harmonic oscillator wave functions for zero angular momentum.
The HLFD wave functions, without the Melosh rotations, are
given in the Fourier space by
\begin{eqnarray} &&
\psi_{n}(k^+,{\bf k}_{\perp},P^+_n,{\bf P}_{n \perp}) = P^+_n ~ \Psi^{HO}_{n}(|{\bf \kappa}|^2)
\label{model2} \, \, ~~.
\end{eqnarray}
The factor $P^+_n$ comes from the different normalizations used for
$\psi_{n}(k^+,{\bf k}_{\perp},P^+_n,{\bf P}_{n \perp})$ and $\Psi^{HO}_{n}(|{\bf \kappa}|^2)$.
Indeed the function $\Psi^{HO}_{n}(|{\bf \kappa}|^2)$ is normalized through the equation
\begin{eqnarray} &&
\int|\Psi^{HO}_{n}(|{\bf \kappa}|^2)|^2 d^3 \kappa = 1 ~~~\ ,
\label{norm3}
\end{eqnarray}
while the function $\psi_{n}(k^+,{\bf k}_{\perp},P^+_n,{\bf P}_{n \perp})$
is normalized through the evaluation of the
charge form factor of a vector meson at $q^2=0$,
i.e. by using the so-called charge normalization (Appendix D),
more appropriate in a relativistic
context \cite{mandel}. In the actual calculation, we have to consider that,
after properly integrating the valence component, its probability should be
recovered. This amounts to constructing a schematic model for the probability,
$P_{n,q \bar q}$, for each excited state (see Appendix E), and subsequently
to renormalizing $\psi_{n}(k^+,{\bf k}_{\perp},P^+_n,{\bf P}_{n \perp})$ in Eq.
(\ref{model2}) as
follows
\begin{eqnarray} &&
\psi_{n}(k^+,{\bf k}_{\perp},P^+_n,{\bf P}_{n \perp})~=
~\sqrt{P_{n,q \bar q}}~P^+_n ~ \Psi^{HO}_{n}(|{\bf \kappa}|^2)
\label{rinmodel2}
\end{eqnarray}
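As a consistency sketch of the normalization in Eq. (\ref{norm3}), one can verify numerically that a zero-angular-momentum harmonic-oscillator ground state integrates to unity; the oscillator parameter $b$ below is an illustrative choice, not the model's fitted value.

```python
import math

# Numerical check of the normalization, Eq. (norm3), for a zero-angular-momentum
# harmonic-oscillator ground state in momentum space. The oscillator parameter b
# (an inverse momentum scale, GeV^-1) is an illustrative choice.
b = 2.0

def psi0(kappa):
    """S-wave HO ground state, normalized so that int |psi0|^2 d^3kappa = 1."""
    return (b**2 / math.pi) ** 0.75 * math.exp(-0.5 * b**2 * kappa**2)

# midpoint-rule radial integration of 4*pi*kappa^2 |psi0|^2
dk, norm = 1e-3, 0.0
for i in range(20000):                  # integrate up to kappa = 20 GeV
    k = (i + 0.5) * dk
    norm += 4.0 * math.pi * k**2 * psi0(k) ** 2 * dk
print(norm)   # ~ 1.0
```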
In the model of Ref. \cite{FPZ02} the complete form of the pion wave function
is an eigenstate of the mass operator of Eq. (\ref{model1}) plus
a Dirac-delta interaction (in the configuration space),
which is necessary for producing a pion with a small mass (i.e. a
collapsing $q\bar{q}$ pair in the $^1S_0$ channel).
The pion wave function is found from the pole of the resolvent, explicitly written in
Ref. \cite{FPZ02}. The result is the following:
\begin{eqnarray}
\psi_{\pi}(k^+,{\bf k}_{\perp},P^+_{\pi} ,{\bf P}_{\pi \perp}) = P^+_{\pi} \sum_{n}
{\Psi^{HO}_{n }(|{\bf \kappa}|^2)\Psi^{HO}_{n }(0) \over
m^2_\pi - M^2_{n}} \ ,
\label{model5}
\end{eqnarray}
where $\Psi^{HO}_{n}(0)$ is the S-wave harmonic oscillator
eigenfunction in coordinate space at the origin.
In this model, the pion wave function approaches the asymptotic
limit, Eq. (\ref{model3}), imposed by the presence of the Dirac
delta-function in the interaction.
The relativistic constituent quark model of Ref. \cite{FPZ02}
achieves a satisfactory description of the experimental masses for both singlet and
triplet $S$-wave
mesons, with a natural explanation of the ``Iachello-Anisovitch law'' \cite{iach,ani},
namely the almost linear relation between the squared mass of the excited states
and the radial quantum
number $n$. Since the model does not include the mixing between isoscalar and isovector
mesons, in this paper we include only the contributions of the isovector
$\rho$-like vector mesons.
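As an illustration of this almost linear relation, the spectrum $M^2_n = n\,\omega + M^2_\rho$ can be tabulated with the parameter values quoted later in the text ($\omega = 1.556$ $GeV^2$, $M_\rho = 0.770$ $GeV$); this is only a numerical sketch of the eigenvalues of Eq. (\ref{model1}), not a new calculation.

```python
# Illustrative spectrum M_n^2 = n*omega + M_rho^2, with the parameter values
# quoted later in the text: omega = 1.556 GeV^2, M_rho = 0.770 GeV.
omega, M_rho = 1.556, 0.770
masses = [(n, (n * omega + M_rho**2) ** 0.5) for n in range(6)]
for n, M in masses:
    print(n, round(M, 3))   # M_0 = 0.770, M_1 ~ 1.466, M_2 ~ 1.925 GeV, ...
```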
As already discussed in Sec. VI, we approximate the pion vertex functions
which represent the antipion and pion emission by a quark, as well as
the quark-pion absorption vertex by means of a constant
\begin{eqnarray} &&
\overline {\cal{D}}_{\pi} = {\cal{D}}_{\overline \pi} = \frac{m}{f_\pi} ~ \lambda _{\pi} \ ;
\quad
~~~~{\text{and}}~~~~~~~ {\cal{D}}_\pi
= {m \over f_{\pi}} ~ \lambda _{\pi}
\label{model4}
\end{eqnarray}
in agreement with the constant form proposed in Ref. \cite{JI01} and successfully tested
in the study of the pseudo-scalar meson decays. The actual value of the
constant $\lambda _{\pi}$
is fixed by the pion charge normalization.
As anticipated in Sec. VII and Sec. VIII, to simplify our calculations we
are going to use $m_\pi=0$. Within this assumption, for the time-like
form factor only the instantaneous
contributions ${\cal {T}}_{1, (2, n)}$ and ${\cal {T}}_{3, (2, n)}$ survive, while
for the space-like form factor only the instantaneous terms
${\cal T}^{\prime}_{1, (2,n)}$ and ${\cal T}^{\prime}_{3, (2,n)} $ remain.
Then to fully evaluate the pion form factor
in the time-like and in the space-like
region we have still to assign a value to the pion and VM vertex functions
with an instantaneous quark leg, i.e. to the vertex functions
$\left [ \Lambda_{\bar{\pi}}(k - P_{\pi},P_{\bar{\pi}}) \right ] _{k^- = q^- + (k - q)^-_{on}}$
and
$\left [ \overline\Lambda_{\pi \prime}(k + P_{\pi}, P_{\pi \prime})\right ] _{k^- = q^- + (k - q)^-_{on}}$
in Eqs. (\ref{T12VTL}) and (\ref{T1IInbL}), respectively, and
to the vertex function $\left [ \Lambda_{n}(k,P_n) \right ]_{k^- = q^- + (k - q)^-_{on}}$
of Eqs. (\ref{T32nTL}) and (\ref{T3IInbL}).
The instantaneous contributions to the time-like pion form factor corresponding to the vertex
functions
$\left [ \Lambda_{\bar{\pi}}(k - P_{\pi},P_{\bar{\pi}}) \right ] _{k^- = q^- + (k - q)^-_{on}}$
and $\left [ \Lambda_{n}(k,P_n) \right ]_{k^- = q^- + (k - q)^-_{on}}$ are represented by
diagrams (a) and (b) of Fig. 8, respectively.
Let us note that the presence of the factors $(k^+ \pm P^+_{\pi})$ and $k^+$ in the
denominators of the two instantaneous terms produces an enhancement of the contributions
around the values $(k^+ \pm P^+_{\pi}) = 0$ and $k^+ = 0$ in the $k^+$ integration.
Within our assumption of a vanishing pion mass, this means that, for both the instantaneous terms,
there is an enhancement of the contribution at the end point $k^+ = 0$,
which corresponds to an infinite value
of the $z$ component of the intrinsic quark three-momentum, $\kappa_z = M_0(2x-1)/2$
($\kappa_z = - \infty$ for $x = 0$, since $M_0 \rightarrow \infty$). Therefore the high momentum part of the meson
vertex functions, i.e. the short-range part in coordinate space, is very relevant.
Then in the vertex functions with an instantaneous quark leg, ${\Lambda}^{ist}_{\pi (n)}$,
we assume that the very short-range part of the one-gluon-exchange interaction,
which includes spin-spin terms \cite{deR}, is the dominant one.
In symbolic notation we have (see Fig. 8):
\begin{eqnarray} &&
{\Lambda}^{ist} = {\cal {K}}^{ist} ~ G_0 ~ {\Lambda}^{full}
\label{ista}
\end{eqnarray}
where ${\cal {K}}^{ist}$ is the Bethe-Salpeter kernel for the instantaneous
vertex function ${\Lambda}^{ist}$, $G_0$ the propagator of two free quarks
and ${\Lambda}^{full}$ the full vertex function.
The kernel ${\cal {K}}^{ist}$ is assumed to be dominated by the short-range part
of the interaction.
Actually we drastically simplify Eq. (\ref{ista}) as follows:
\begin{eqnarray} &&
{\Lambda}^{ist} \sim c ~ {\Lambda}^{full} \, \, ~~.
\label{OGE}
\end{eqnarray}
This amounts to naively assuming that ${\Lambda}^{full}$ is an eigenstate of
${\cal {K}}^{ist} ~ G_0$. Furthermore, we assume that
${\Lambda}^{full}$ is still related to the LF meson wave function
as illustrated in Sec. IV,
i.e. ${\Lambda}^{full}_{\pi (n)} = \psi_{\pi (n)} ~ [M^2_{\pi (n)} - M^2_0] / P^+_{\pi (n)}$.
The constant $c$ is thought to roughly describe the effects of the
short-range interaction. In particular,
if we take grossly into account only the spin-spin interaction term,
then the results of Ref. \cite{DFPS} are recovered i) by choosing $c
= - 3/4$ for the pion vertex function (Fig. 8 (a)) and
$c=1/4$ for the VM vertex function (Fig. 8 (b)) and ii) by using
the probabilities
$P_{q\bar q;n} =
\frac{\delta ~ \omega^\frac12 }{2 ~ \sqrt{ 2 n+ \frac{3}{2}}}$
for the VM valence components with $\delta ~ \omega^\frac12/2 = 1$
(see Appendix E).
With this choice of the constants $c$, the relative weight of the VM instantaneous terms with
respect to the pion instantaneous terms is equal to $- 1/3$.
At variance with Ref. \cite{DFPS},
in the present paper we use this relative weight, $w_{VM} = c_{VM} / c_{\pi}$, as a free parameter.
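For orientation, the schematic probabilities quoted above, $P_{q\bar q;n} = \delta\,\omega^{1/2}/(2\sqrt{2n+3/2})$ with $\delta\,\omega^{1/2}/2 = 1$, are easily tabulated; their monotonic decrease with $n$ is what later makes the sum over resonances converge.

```python
# Tabulate the schematic valence probabilities P_n = 1/sqrt(2n + 3/2),
# i.e. the choice delta*omega^(1/2)/2 = 1 quoted in the text.
probs = [(2 * n + 1.5) ** -0.5 for n in range(20)]
print([round(p, 3) for p in probs[:4]])   # [0.816, 0.535, 0.426, 0.365]
```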
In conclusion, we replace the momentum component of
the pion vertex function in Eq. (\ref{T12VTL}) as follows
\begin{eqnarray} &&
{m \over f_{\pi}} ~\left [ \Lambda_{\bar{\pi}}(k - P_{\pi},P_{\bar{\pi}})
\right ] _{k^- = q^- + (k - q)^-_{on}}
\rightarrow \nonumber \\ &&
{c_{\pi} \over P^+_{\bar{\pi}}}
\psi_{\bar{\pi}}(k^+ - P^+_{\pi}, {\bf k}_{\perp} - {\bf P}_{\pi \perp};
P^+_{\bar{\pi}}, {\bf P}_{\bar{\pi} \perp})
~ [m^2_\pi - M^2_0(k^+ - P^+_{\pi}, {\bf k}_{\perp} - {\bf P}_{\pi \perp}; P^+_{\bar{\pi}},
{\bf P}_{\bar{\pi} \perp})]
\end{eqnarray}
and in Eq. (\ref{T1IInbL}) as follows
\begin{eqnarray} &&
{m \over f_{\pi}} ~\left [ \overline\Lambda_{\pi \prime}(k + P_{\pi}, P_{\pi \prime})
\right ] _{k^- = q^- + (k - q)^-_{on}}
\rightarrow \nonumber \\ &&
{ c_{\pi}\over P^+_{\pi \prime}}\psi^*_{\pi \prime}(k^+ + P^+_{\pi}, {\bf k}_{\perp} + {\bf P}_{\pi \perp};
P^+_{\pi \prime}, {\bf P}_{\pi \prime \perp})
~ [m^2_\pi - M^2_0(k^+ + P^+_{\pi}, {\bf k}_{\perp} + {\bf P}_{\pi \perp}; P^+_{\pi \prime},
{\bf P}_{\pi \prime \perp})] ~.\end{eqnarray}
The momentum component of the VM vertex function
in Eqs. (\ref{T32nTL}) and (\ref{T3IInbL}) is approximated by
\begin{eqnarray} &&\left [ \Lambda_{n}(k,P_n) \right ]_{k^- = q^- + (k - q)^-_{on}}
\rightarrow
{c_{VM} \over P^+_{n}} \psi_{n}(k^+, {\bf k}_{\perp}; P^+_{n}, {\bf P}_{n \perp}) ~
[M^2_n - M^2_0(k^+, {\bf k}_{\perp}; P^+_{n}, {\bf P}_{n \perp})]
. \end{eqnarray}
As explained in the previous sections, in the limit of a vanishing pion mass
both in the time-like and in the space-like case one has $P^+_\pi=0$ and
${P}^+_{\pi \prime} = P^+_{\bar{\pi}} = M_n$. Then,
the quantities $g^+_{Vn}(q^2)$ of Eq. (\ref{jmuT}) and $f^{II}_{n}(q^2)$ of
Eq. (\ref{ffpi2}) acquire the same functional form, irrespective of the sign of $q^2$,
and reduce to the same function $\xi_{n}(q^2)$:
\begin{eqnarray}
\xi_{n}(q^2) = &&{N_c \over 16 \pi^3} ~ {m\over f_{\pi}} ~ \lambda _{\pi} ~ c_{\pi}
~{\sqrt{2} \over M^2_{n}} \int_0^{M_{n}}{dk^+\over
(k^+)^2~(M_{n}-k^+)}\int d{\bf k}_\perp
\left[ {\cal T}_{1, n}(k^+,{\bf k}_\perp) + {\cal T}_{3, n} (k^+,{\bf k}_\perp) \right]
~\times \nonumber \\
\nonumber \\
&& \psi^*_{\pi \prime}(k^+, {\bf k}_{\perp}; M_n, {\bf 0}_{ \perp})
~ \left[ M^2_n - M^2_0(k^+, {\bf k}_{\perp}; M_{n}, {\bf 0}_{\perp}) \right] ~
\psi_{n}(k^+, {\bf k}_{\perp}, M_{n}, {\bf 0}_{ \perp }) ~ ,
\label{qsi}
\end{eqnarray}
where ${\cal T}_{1,n}$ and ${\cal T}_{3,n}$ are given by
\begin{eqnarray}
{\cal T}_{1, n} &=& - ~
{ \left[ m^2_\pi - M^2_0(k^+, {\bf k}_{\perp}; M_n, {\bf 0}_{ \perp}) \right] \over
\left [ q^2 - M^2_0(k^+, {\bf k}_{\perp}; M_n, {\bf 0}_{\perp}) + i\epsilon \right ] } ~
\times \nonumber \\ &&
Tr \left [ \gamma^+ ~ [(\rlap\slash k - \rlap\slash q)_{on} + m] ~
\left [ \widehat{V}_{nz}(k,k - P_n) ~ \right ] _{on}
~(\rlap\slash k_{on} + m) \right ] =
\nonumber \\
\nonumber \\
&=& - ~ 4 ~ {\left[ m^2_\pi - M^2_0(k^+, {\bf k}_{\perp}; M_n, {\bf 0}_{ \perp}) \right] \over
\left [ q^2 - M^2_0(k^+, {\bf k}_{\perp}; M_n, {\bf 0}_{\perp}) + i\epsilon \right ] } ~
\times
\nonumber \\
&& \left[ k^+(k-q)_{on, z} + (k-q)_{on}\cdot k_{on} + (k^+ - M_{n}) k_{on, z} - m^2\right.
\nonumber \\
&&\left. - ~
m ~ (2k^+-M_{n})(k_{on}-(q-k)_{on})_z~H_S(M_0) \right]
\label{T1nbL}
\end{eqnarray}
\begin{eqnarray}
{\cal T}_{3, n} &=& w_{VM} ~ Tr \left [[- \rlap\slash k _{on} + m]
~ [(\rlap\slash k - \rlap\slash q)_{on} + m]~
\left [ \widehat{V}_{nz}(k,k - P_n) ~ \right ] _{on}
~ \gamma^+ \right ] =
\nonumber \\
&=& w_{VM} ~ 4 \left[ - k^+(k-q)_{on, z} + (k-q)_{on}\cdot k_{on} + (k^+ - M_{n}) k_{on, z} - m^2 \right.
\nonumber \\
&&\left. + ~ m ~ M_{n}
\left [ k_{on}-(q-k)_{on} \right ] _z ~ H_S(M_0) \right]
\quad \ .
\label{T3nbL}
\end{eqnarray}
In the last steps in Eqs. (\ref{T1nbL}) and (\ref{T3nbL}) the traces have been explicitly evaluated and
the function $H_S(M_0)$ is given by:
\begin{eqnarray}
H_S(M_0)=\frac1{M_0+2m} \label{hs} \ .
\end{eqnarray}
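The light-front traces entering Eqs. (\ref{T1nbL}) and (\ref{T3nbL}) follow from standard $\gamma$-matrix algebra. As a minimal cross-check (not the full traces of the text), one can verify numerically the identity $Tr[(\rlap\slash a + m)(\rlap\slash b + m)\gamma^+] = 4m(a^+ + b^+)$, which holds for arbitrary four-vectors since the three-$\gamma$ term is traceless.

```python
# Cross-check of a light-front trace identity with explicit Dirac matrices
# (Dirac representation, metric (+,-,-,-)); the four-vectors are arbitrary
# illustrative numbers, m is the constituent quark mass used in the text.
I = 1j
g0 = [[1, 0, 0, 0], [0, 1, 0, 0], [0, 0, -1, 0], [0, 0, 0, -1]]
g1 = [[0, 0, 0, 1], [0, 0, 1, 0], [0, -1, 0, 0], [-1, 0, 0, 0]]
g2 = [[0, 0, 0, -I], [0, 0, I, 0], [0, I, 0, 0], [-I, 0, 0, 0]]
g3 = [[0, 0, 1, 0], [0, 0, 0, -1], [-1, 0, 0, 0], [0, 1, 0, 0]]
unit = [[float(i == j) for j in range(4)] for i in range(4)]

def mul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(4)) for j in range(4)]
            for i in range(4)]
def add(A, B):
    return [[A[i][j] + B[i][j] for j in range(4)] for i in range(4)]
def scal(c, A):
    return [[c * A[i][j] for j in range(4)] for i in range(4)]
def slash(a):   # a-slash = a0*g0 - a1*g1 - a2*g2 - a3*g3
    return add(add(scal(a[0], g0), scal(-a[1], g1)),
               add(scal(-a[2], g2), scal(-a[3], g3)))
def tr(A):
    return sum(A[i][i] for i in range(4))

m = 0.265
a, b = (1.2, 0.3, -0.1, 0.4), (0.9, -0.2, 0.5, 0.1)
gplus = add(g0, g3)                 # gamma^+ = gamma^0 + gamma^3
M = mul(mul(add(slash(a), scal(m, unit)), add(slash(b), scal(m, unit))), gplus)
print(tr(M))   # equals 4*m*(a^+ + b^+), with a^+ = a0 + a3
```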
Actually the value of $c_{\pi}$ together with the value of $\lambda_{\pi}$ is fixed by the
charge normalization and we have to assign a value only to the relative weight $w_{VM}$.
In Eq. (\ref{qsi}) there is no divergence from the poles at the end points
$k^+ = 0$ and $q^+ - k^+ = 0$, because of the Gaussian decrease of the
VM wave functions at these end points, which correspond to infinite values
of the $z$ component of the intrinsic quark three-momentum, $\kappa_z = M_0(2x-1)/2$
($\kappa_z = - \infty$ or $\kappa_z = + \infty$ for $x = 0$ or $x = 1$, respectively).
Finally, both in the time-like and in the space-like regions,
the pion electromagnetic form factor can be written as
\begin{eqnarray} &&
F_{\pi}(q^2) = \sum_n ~ {f_{Vn} \over
\left [ q^2 - M^2_n + \imath M_n \tilde{\Gamma}_n(q^2) \right ]} ~~ \xi_{n}(q^2)
\label{ffactor}
\end{eqnarray}
We stress that the pion form factor is continuous at $q^2 = 0$
in the limit $m_{\pi} \rightarrow 0$ and that only the instantaneous terms contribute in this limit.
We would like to remind the reader that
the vector meson wave functions are normalized to the probability of
the valence component, which can be roughly estimated
in a simple model,
as shown in Appendix E.
The decreasing probability of the valence
component for the excited vector meson states is essential to make
the sum over the resonances convergent.
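Equation (\ref{ffactor}) is a coherent sum of Breit-Wigner-like pole terms, a structure that can be illustrated with a toy numerical sketch; the products $f_{Vn}\,\xi_n(q^2)$ are replaced here by invented constant weights, so the numbers only exhibit the qualitative pole and interference pattern, not the model's actual form factor.

```python
# Toy illustration of Eq. (ffactor): a coherent sum over vector-meson poles.
# Masses and widths are Table-I-like values; the weights stand in for the
# products f_Vn * xi_n(q^2) and are invented placeholders, chosen only to
# exhibit the rho peak and the interference pattern.
Mv = [0.770, 1.465, 1.700]       # GeV
Gv = [0.150, 0.400, 0.250]       # GeV
wv = [1.0, -0.3, 0.15]           # placeholder for f_Vn * xi_n(q^2)

def F_pi(q2):
    return sum(w / (q2 - M**2 + 1j * M * G) for w, M, G in zip(wv, Mv, Gv))

print(abs(F_pi(0.770**2)))   # on the rho pole: large, width-limited
print(abs(F_pi(2.0)))        # between resonances: order one
print(abs(F_pi(-10.0)))      # deep space-like: small and smooth
```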
\section{Results}
The pion electromagnetic form factor is calculated through
Eqs. (\ref{ffactor}) and (\ref{qsi}), where the pion and vector
meson wave functions are eigenstates of the square mass operator
defined in Eq. (\ref{model1}) (shown for the vector channel only).
In our calculation we have a small set of parameters:
i) the constituent quark mass, ii) the oscillator strength $\omega$,
iii) the widths for the vector mesons, $\Gamma _n$,
and iv) the relative weight $w_{VM}$ of the two instantaneous contributions.
The up-down quark mass is fixed at
0.265 $GeV$ \cite{FPZ02} and the oscillator strength is fixed at
$\omega$ = 1.556 $GeV^2$, as in Ref. \cite{FPZ02}.
For the first four vector mesons, the masses and widths, presented in
Table I, are used.
The non-trivial $q^2$ dependence of $\xi_{n}(q^2)$
in our microscopic model allows a small shift of the VM masses
with respect to the values obtained in the analyses of the experimental data
by using Breit-Wigner functions
with constant values for $\xi_{n}(q^2)$.
For the radial excitations with $M_n > 2.150$ $GeV$,
the mass values corresponding to the model of Ref. \cite{FPZ02} are used.
For the unknown widths we use a single width as a
fitting parameter. We choose the value
$\Gamma _n = 0.15$ $GeV$, which gives the best agreement with the compilation of the
experimental data of Ref. \cite{baldini}. We consider 20 resonances in our calculation
to obtain stability of the results up to $q^2 = 10$ $(GeV/c)^2$.
The probabilities $P_{q\bar q,n}$ of the valence component of the VM states are fixed according to the schematic
model of Appendix E (see Eq. (\ref{prob1}) and Table II).
As we discussed in the previous Sections, it is also
necessary to know the amplitude for the virtual process where a
constituent quark radiates or absorbs a pion. This unknown
function was first investigated in a phenomenological study of decay processes
within LF dynamics \cite{JI01}, where it was
approximated by a constant, obtaining a satisfactory description of
the experimental data. We followed the approximation proposed in Ref. \cite{JI01}
in the calculation of the decay
amplitude $\xi_n(q^2)$ of Eq. (\ref{qsi}).
The value of the constant $\lambda _{\pi}$,
together with the constant $c_{\pi}$ (see the previous Section),
is fixed by the charge normalization.
The values of the coupling constants, $f_{Vn}$, are calculated using Eq. (\ref{fV2ap})
of Appendix A from the model VM wave functions.
The corresponding
partial decay widths, $\Gamma_{e^+e^-}$,
for these mesons are calculated from our values of $f_{Vn}$ using
Eq. (\ref{gee}) \cite{Jaus99} and are reported in Table I. The partial decay widths
for the vector mesons are in good agreement with the data, when available \cite{pdg}.
We perform two sets of calculations, to test the effect of the
pion wave function model. In one set we use the
asymptotic form of the pion valence wave function,
Eq. (\ref{model3}), and in another one we choose the eigenstate of
the square mass operator of the model of Ref. \cite{FPZ02}, given by the pion wave
function of Eq. (\ref{model5}).
The results for the form factor are shown in Figs. 9, 10, and 11.
In Fig. 9 the results corresponding to the weight $w_{VM} = - 0.7$ are shown, while in
Fig. 11 the results corresponding to $w_{VM} = - 0.7$ and $w_{VM} = - 1.5$ are compared in a
linear scale around the $\rho$ meson peak. In Fig. 9 we also report the results
calculated with
the masses and the widths used in Ref. \cite{DFPS} and reported in Table III. For these
calculations the oscillator strength $\omega = 1.39$ $GeV^2$, the probabilities
$P_{q\overline q;n} =\frac{1}{ \sqrt{ 2 n+ \frac{3}{2}}}$
and $c_{\pi} = - 3/4$, $c_{VM} = 1/4$ have been used.
Let us note that our results are the same within a few percent, if in the Dirac structure of
the $n{\rm th}$ VM vertex (Eq. (\ref{gams1})) the free mass is replaced with $M_n$.
In Fig. 9, we show our results in a wide region of square
momentum transfers, from -10 up to 10 $(GeV/c)^2$, comparing them with the data
collected by Baldini et al. \cite{baldini} and with the data of
Ref. \cite{JLABp}. A general
qualitative agreement with the data is seen in this wide range of
momentum transfers, independently of the detailed form of the pion
wave function.
It has to be stressed that the heights of the TL bumps directly depend on the calculated
values of $f_{Vn}$ and $\xi_{n}(q^2)$.
The results obtained with the asymptotic pion
wave function and the full model present some difference
only above 3 $(GeV/c)^2$.
The pion form factor is very well described in the space-like region,
both using the weight $w_{VM} = - 0.7$ and the weight $w_{VM} = - 1.5$, as can be clearly seen
in Fig. 10, where the ratio of the SL form factor to the monopole
factor $M(q^2) = 1/(1~ - ~ q^2/M_{\rho}^2)$ is shown.
The excellent agreement with the experimental form factor
at low momentum transfers is expected, since the
generalized $\rho$-meson dominance is built into our model.
The time-like region between 0 and 3 $(GeV/c)^2$, where $\rho(770)$,
$\rho(1450)$ and $\rho(1700)$ appear, is shown in Fig. 11 in a linear scale. The
$\rho$-meson peak is placed at the right position using a bare
mass of 770 $MeV$. From this figure it is clear that the
parameter $w_{VM}$ is able to control the region of the $\rho(770)$ peak, while in
other regions its effect is less relevant. For $w_{VM}=-1.5$, the
$\rho(770)$ peak is very well described, except for
the region around 2 $(GeV/c)^2$, where our results underestimate the
experimental data.
This dip is due to a destructive interference between the
contributions of $\rho(770)$, $\rho(1450)$, and $\rho(1700)$, and could
provide a sensitive, detailed test of the model presently adopted for
the meson wave functions and of the
other approximations introduced.
It is clear that the introduction of $\omega$-like and $\phi$-like mesons could improve the
description of the data in the TL region.
However, a consistent dynamical description of the mixing of
isospin states is far beyond the present work, and we leave it for
future developments of the model.
Finally, we have also calculated the dimensionless quantity
$F_\pi(q^2)~q^2/0.77^2$ up to
$q^2=-1000~(GeV/c)^2$, observing a smooth decrease from a value of $0.691$
at $q^2=-100~(GeV/c)^2$ to a value of $0.677$ at $q^2=-1000~(GeV/c)^2$.
\section{Summary and Conclusions}
In this work, we are able to give a unified description
of the pion electromagnetic
form factor in the space- and time-like regions, thanks to the choice of a
reference frame where $q^+~>~0$.
The main steps are briefly summarized. Within the covariant approach proposed
by Mandelstam \cite{mandel}, the matrix elements
of the electromagnetic current are evaluated between pion states,
in impulse approximation, but with all the vertices of
the triangle diagram properly
dressed. Exploiting a suitable decomposition of the fermionic propagators,
one singles out
on-shell and
instantaneous contributions. The integration over the light-front energy, $k^-$,
in the
momentum loop of the triangle diagram is performed disregarding
the effect of possible singularities of the vertex functions
and taking care of only the singularities in the propagators.
For the photon vertex function, in the processes where a $q\bar q$-pair
in the odd-parity spin-1 channel is produced, we use a generalization
of the Vector Meson Dominance approach, built up from the VM Bethe-Salpeter
amplitude (phenomenologically determined) and the VM propagator, elucidating
the relation between the hadronic part
of the photon valence wave function and the pion electromagnetic
form factor.
The obtained expressions for the electromagnetic current matrix elements
are carefully discussed and the different contributions are
interpreted in terms of valence and
nonvalence components of the pion and photon wave functions.
In the valence components of the pion and VM amplitudes, the momentum part is
described through the corresponding HLFD wave functions, evaluated in a
QCD-inspired model which shows a satisfactory description for the $^1S_0$ and
$^3S_1$ mesons. A schematic model is used for the probability, $P_{n,q\bar q}$,
of the valence
component of the mesons.
The contribution of the
nonvalence component of the photon wave function appears in the
time-like region, while the nonvalence component of the pion appears
in the space-like region. The nonvalence contributions of the photon and
pion wave functions, relevant for the process under consideration,
involve emission/absorption amplitudes, that in
principle can be calculated from the valence components of the
corresponding particles, and a suitable kernel.
However, since our knowledge of this
kernel is poor, we use a constant vertex approximation for the
emission/absorption amplitudes \cite{JI01}.
To simplify our calculation, we take advantage of the smallness of the pion
mass, which is put to zero. Then, only the ``Z-diagram''
survives in the space-like region.
We point out that, for
$m_\pi=0$, only the instantaneous terms contribute to the pion
form factor. Therefore, in order to evaluate the pion form factor we need the
instantaneous vertex functions, which we approximate by the full vertex functions
times a constant.
Only a few parameters define our light-front model: the oscillator
strength, the constituent quark mass, and the VM
masses and widths. We use the experimental
width and mass for the vector mesons, when available. For the radial
excitations above $ 2.150$ $GeV$ we use the masses of the theoretical spectrum and
a single width as a fitting parameter. It is worth noting that the results
are not markedly sensitive to the choice of the pion wave function, be it the
asymptotic wave function or the full-model one of Ref. \cite{FPZ02}.
This could be ascribed to the strong pion binding, that makes
the pion wave function
similar to its PQCD asymptotic limit \cite{lepag}.
In the space-like region, the
pion electromagnetic form factor is very well
described on the whole experimentally-explored interval, i.e. up to $q^2 = - 10$ $(GeV/c)^2$.
In the time-like
region, we find a general agreement up to 10 $(GeV/c)^2$,
except near the experimental dip at 2 $(GeV/c)^2$.
Our model can be straightforwardly improved in many respects. For instance: i) more realistic VM wave functions can
be used, such as the ones of Ref. \cite{Isgur}, which take into account, e.g., the D-state
nature of some of the VM resonances, such as $\rho(1700)$; ii) the introduction of
both a dynamical mixing of isospin states and of the
contribution of the $\phi$ meson.
Other improvements, like taking care of the non-vanishing pion mass, or
considering a more realistic model for the instantaneous vertices and for the
emission/absorption of a pion by a quark, are highly non-trivial.
In summary, our work appears to be an encouraging step forward in achieving
a detailed investigation
of important issues, such as the light-quark
content of the photon valence light-front wave function,
through the analysis of the pion electromagnetic form factor in the time-like
region. The peculiar feature represented by the smallness of the pion
mass is the key point to accomplish such an investigation.
\section*{Acknowledgments}
This work was partially supported by the Brazilian agencies CNPq
and FAPESP and by Ministero dell'Istruzione, dell'Universit\`a e della Ricerca.
J.P.B.C. M. and T.F. acknowledge the hospitality of
the Dipartimento di Fisica, Universit\`a di Roma ``Tor Vergata'' and
of Istituto Nazionale di Fisica Nucleare, Sezione Tor Vergata and
of Istituto Nazionale di Fisica Nucleare, Sezione Roma I.
\newpage
\section{Introduction}
Stellar evolution models predict that when life evolved on the Earth more than 4 billion years ago, just after the heavy bombardment, the conditions in the inner part of the solar system were significantly different from what we know today. For example, it was noted already in 1972 by Sagan and Mullen that the luminosity of the young Sun was only around 75\% of its present value, which would result in freezing temperatures on the Earth -- assuming a radiation budget similar to the present one (Sagan \& Mullen 1972; see also Gough 1981). This problem has since been known as {\it the faint young Sun paradox}. On the other hand, stellar evolution models predict that the activity of the Sun related to the chromosphere and corona should have been much stronger than what we know today.
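The freezing-temperature estimate can be made quantitative with a simple radiative-balance sketch (illustrative numbers, not taken from the works cited): the equilibrium temperature $T_{eq} = [S(1-A)/4\sigma]^{1/4}$ scales as $L^{1/4}$, so a 75\% solar constant lowers $T_{eq}$ by the factor $0.75^{1/4} \approx 0.93$, i.e. by roughly 18 K.

```python
# Radiative-balance sketch (illustrative, not from the cited works): the
# equilibrium temperature T_eq = [S(1-A)/(4*sigma)]^(1/4) scales as L^(1/4),
# so 75% of the present solar luminosity lowers T_eq by a factor 0.75**0.25.
sigma = 5.670e-8        # Stefan-Boltzmann constant, W m^-2 K^-4
S, A = 1361.0, 0.3      # present solar constant (W m^-2) and Bond albedo
T_now = (S * (1 - A) / (4 * sigma)) ** 0.25
T_young = (0.75 * S * (1 - A) / (4 * sigma)) ** 0.25
print(round(T_now, 1), round(T_young, 1))   # ~254.6 K vs ~236.9 K
```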
A number of different solutions have been proposed to {\it the faint young Sun paradox}. Sagan \& Mullen (1972) suggested that elevated levels of CO$_{2}$ could have maintained surface temperatures above freezing, but the CO$_{2}$ level needed to raise the temperature might be so high that it would be in conflict with geochemical records (Rye et al. 1995). Other greenhouse gasses like ammonia could potentially also help to raise the temperature, but on the other hand it is questionable whether large amounts of ammonia could be maintained in the Earth's atmosphere at a time when the Sun's UV radiation was up to 10 times larger than today (Sagan \& Chyba 1972).
Another solution, originally proposed by Graedel (1991), suggests that the Sun has experienced a mass loss of 5-10\% over its main-sequence life. The young Sun would therefore have been a bit more massive than today's Sun and thus brighter than what we would expect without considering mass loss. Unfortunately, mass loss rates as high as 10\% contradict both solar evolution models calibrated using helioseismology (Guzik \& Cox 1995) and measurements of stellar winds around solar-type stars (Gaidos et al. 2000; Wood et al. 2002).
Recently, attempts to solve {\it the faint young Sun paradox} have been based on atmosphere models including not only an increased greenhouse effect, but also a reduced albedo (von Paris et al. 2008; Kitzmann et al. 2010). These attempts are in line with the hypothesis put forward by Svensmark \& Friis-Christensen (1997): that galactic cosmic rays (GCRs) modulate the amount of aerosols and clouds in the lower part of the Earth's atmosphere. In other words, in order to understand the evolution of the Earth's climate now and back then, it is important to understand not only the effect from high-altitude clouds (through the greenhouse effect), but also the effect from low-altitude clouds (through the albedo effect). By including both effects in their models, Shaviv (2003) and Svensmark (2003, 2006) suggested that the young active Sun's increased ability to protect the Earth from GCRs could cause higher temperatures on the Earth.
The hypothesis is that as the Sun was much more active when life started to evolve on the Earth (which is reflected in its higher rotation rate and higher level of UV and X-ray emission) it would have been much more efficient in shielding us from the GCRs, which would have resulted in smaller amounts of aerosols and clouds in the lower part of the Earth's atmosphere and thus higher temperatures on the Earth. This hypothesis was recently strengthened by observations of direct evidence of a relation between GCRs, aerosols and clouds on short time scales as observed during major Forbush decreases (Svensmark, Bondo \& Svensmark 2009).
Though it was noted by Svensmark, Bondo \& Svensmark (2009) that large Forbush decreases today are too rare to have any significant effect on the Earth's climate, this might not have been the case 4 billion years ago when life started to evolve on the Earth, as CMEs are expected to have been much more common on the early Sun. We therefore analyze whether the reduction in the influx of GCRs originating from Forbush decreases could be significant compared to the reduction in the influx of GCRs originating from the more effective shielding capacity of the Sun at the time life evolved on the Earth.
We undertake this analysis through a case study of the young solar twin \kcet. With an age of around 700 million years, \kcet mimics the Sun at that time. The age estimate of \kcet is based on its rapid rotation, with a period of 8.6 days (Rucinski et al. 2004), and the resulting high activity level (Baliunas et al. 1995), though the uncertainties of such a simple scaling relation for the age are of course large. Other fundamental stellar parameters of \kcet, such as effective temperature, surface gravity and metallicity, come so close to solar values that \kcet qualifies as a solar analogue (see Table~1). By using the well-studied \kcet as a case study instead of (simple) scaling laws between stellar activity and age [as done by Shaviv (2003) and Svensmark (2003, 2006)], we can base our analysis on actual measurements of stellar activity rather than estimates based on physical assumptions.
\begin{table}
\caption{Stellar parameters for {$\kappa^1$} Ceti (from Gaidos \& Gonzalez 2002) and the Sun (from Christensen-Dalsgaard et al. 1996)}
\centering
\begin{tabular}{lcccc}
\hline \hline
Name & Type & $T_{\rm eff}$ [K] & log $g$ & [Fe/H] \\
\hline
{$\kappa^1$} Ceti & G5 V & 5747 (49) & 4.53 (0.06) & 0.11 (0.04)\\
Sun & G2 V & 5778 & 4.44 & 0.00 \\
\hline
\end{tabular}
\label{tab1}
\end{table}
\section{The effect of more effective shielding}
The temperature response to a change in the GCR influx is given by (Shaviv, 2003):
\begin{equation}
\Delta T_{GCR} \approx D[1-(\varepsilon_{\rm \kappa^1 Ceti}/\varepsilon_{\odot})^q],
\end{equation}
where $\varepsilon_{\rm \kappa^1 Ceti}$ is the GCR flux reaching the troposphere around an Earth-like planet around \kcet and $\varepsilon_{\odot}$ is the GCR flux reaching the troposphere around the Earth. $D$ and $q$ are constants ($D$ $\sim$ 10 K and $q$ $\sim$ 0.5).
The GCR influx can be found by solving the spherically-symmetric transport equation for the stellar modulation of cosmic rays reaching the planet's troposphere (Perko, 1987):
\begin{equation}
\frac{\partial U}{\partial r} +\frac{VP}{3 \kappa} \frac{\partial U}{\partial P} \simeq 0,
\end{equation}
where $U$ is the cosmic ray distribution function, $r$ is the heliocentric radial distance, $P$ is the particle rigidity, $V$ is the solar wind speed and $\kappa$ is the diffusion coefficient for radial propagation. The spherically-symmetric transport equation can be solved for GCRs with energies larger than a few GeV using the force-field approximation, in which the kinetic energy $E$ of the GCRs at the planet is given as:
\begin{equation}
E=E_{\rm ISM} - \Phi,
\end{equation}
where $E_{\rm ISM}$ is the kinetic energy of the GCRs in the interstellar medium at the astrospheric boundary and $\Phi$ is the modulation strength (Perko, 1987):
\begin{equation}
\Phi = \frac{rV}{3\kappa}.
\end{equation}
If we assume that the stellar wind speed and the diffusion coefficient are the same for the Sun and \kcet, the modulation strength for \kcet can be found by requiring that the ram pressure of the wind around \kcet equals that of the interstellar medium, and is therefore the same as for the Sun:
\begin{equation}
P_{\rm ram} = \rho V^2 \propto \frac{\dot{M}V}{r^2},
\end{equation}
so
\begin{equation}
\Phi \propto r \propto \sqrt{\dot{M}}.
\end{equation}
The mass loss of \kcet has been measured by Gaidos (1998) and Gaidos, G{\"u}del \& Blake (2000) to be $\dot{M}_{\rm \kappa^1 Ceti}\sim4\cdot 10^{-11} {\rm M_{\odot}/yr}$.
Following Shaviv (2003) we can now calculate the energy of the GCR influx to an Earth-like planet around \kcet relative to the energy of the GCR influx to the Earth:
\begin{equation}
\frac{\varepsilon_{\rm \kappa^1 Ceti}}{\varepsilon_{\odot}} = \frac{\int_{E_{\rm c}}^{\infty} f_{\rm \kappa^1 Ceti}E_{\rm \kappa^1 Ceti}dE}{\int_{E_{\rm c}}^{\infty} f_{\odot}E_{\odot}dE},
\end{equation}
where $E_{\rm c}$ is the cutoff energy of GCRs that can actually reach the troposphere ($\sim$ 12 GeV) and $f$ is the differential number flux reaching an Earth-like planet around either the Sun or \kcet which again is a function of the differential number flux of the interstellar medium (Shaviv, 2003):
\begin{equation}
f \propto (E+\Phi)^{-2.7}.
\end{equation}
We thus obtain:
\begin{equation}
\frac{\varepsilon_{\rm \kappa^1 Ceti}}{\varepsilon_{\odot} }=
\frac{\left(E_{\rm c} +\Phi_{\rm \kappa^1 Ceti} \right)^{-1.7}} { \left(E_{\rm c} +\Phi_{\odot} \right)^{-1.7} }
\frac{\left( \frac{E_{\rm c}}{0.7} +\frac {\Phi_{\rm \kappa^1 Ceti}}{1.7} \right)}{\left(\frac{E_{\rm c}}{0.7} +\frac{\Phi_{\odot}}{1.7} \right) },
\end{equation}
or $\varepsilon_{\rm \kappa^1 Ceti}/\varepsilon_{\odot} \sim 0.1$ -- i.e. an Earth-like planet around \kcet would receive only around 10\% of the cosmic ray flux that we receive on the Earth, and the temperature would thus be around 7 degrees warmer than it would have been had \kcet not been able to protect its planet more effectively from GCRs than the Sun.
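The chain of estimates above can be checked with a short numerical sketch. The solar mass-loss rate and the present-day modulation strength $\Phi_{\odot}$ are not quoted in the text, so the reference values below are illustrative assumptions; with them the flux ratio comes out at the order of 0.1--0.2, and evaluating Eq. (1) at the quoted ratio of 0.1 reproduces the $\approx$7 K warming:

```python
import math

# Eq. (1): temperature response to a reduced GCR influx (Shaviv 2003).
D, q = 10.0, 0.5                      # values quoted in the text

def delta_T(eps_ratio):
    return D * (1.0 - eps_ratio ** q)

# Eq. (5): the modulation strength scales as sqrt(Mdot).  The solar
# mass-loss rate and phi_sun below are assumptions, not from the text.
Mdot_sun = 2e-14                      # M_sun/yr (assumed reference value)
Mdot_kcet = 4e-11                     # M_sun/yr (Gaidos et al., from the text)
phi_sun = 1.0                         # GeV (assumed modulation strength)
phi_kcet = phi_sun * math.sqrt(Mdot_kcet / Mdot_sun)

# Eq. (8): ratio of the GCR energy influx above the cutoff E_c ~ 12 GeV.
E_c = 12.0

def eps_ratio(phi_star, phi_ref):
    amp = ((E_c + phi_star) / (E_c + phi_ref)) ** -1.7
    return amp * (E_c / 0.7 + phi_star / 1.7) / (E_c / 0.7 + phi_ref / 1.7)

eps = eps_ratio(phi_kcet, phi_sun)    # order 0.1-0.2 for these inputs
dT = delta_T(0.1)                     # ~6.8 K at the quoted ratio of 0.1
```

With the quoted $\varepsilon_{\rm \kappa^1 Ceti}/\varepsilon_{\odot} \sim 0.1$, Eq. (1) gives $\Delta T \approx 6.8$ K, consistent with the $\sim$7 degrees stated above.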
\section{The effect of Forbush Decreases}
CMEs directed toward the Earth can lead to sudden reductions in the influx of GCRs over time scales from hours to days known as Forbush decreases. It was shown by Svensmark, Bondo \& Svensmark (2009) that large Forbush decreases were followed by reduced levels of aerosols, of cloud water content, of liquid water cloud fraction and of low IR-detected clouds.
Svensmark, Bondo \& Svensmark (2009) analyzed five CMEs found over a time span of 10 years, all of which resulted in Forbush decreases associated with a $\sim$10\% decrease in the GCR influx over around a week -- this led them to note that large Forbush decreases today are too rare to have any significant effect on the Earth's climate. On the other hand, it is not a given that large Forbush decreases did not have a significant effect on the Earth's climate 4 billion years ago when life started to evolve on the Earth, as CMEs are expected to have been much more common on the early Sun. We therefore analyze whether the reduction in the influx of GCRs originating from Forbush decreases could be significant compared to the reduction in the influx of GCRs from the more effective shielding by the Sun at the time life evolved on the Earth.
Assuming that the CME rate scales linearly with the flare rate (which seems to be the case for the Sun), we can use the flare rate of \kcet to estimate the CME rate, and thus how common Forbush decreases would be on an Earth-like planet around $\kappa^1$~Ceti. The cumulative flare occurrence rate distribution of \kcet was calculated by Audard et al. (2000) using 7 days of EUV observations from the {\it Extreme Ultraviolet Explorer} (Malina \& Bowyer 1991). In order to compare this cumulative flare occurrence rate distribution to the Sun, we have analyzed the occurrence of solar flares from 1998 to 2007 observed in X-rays by the {\it Geostationary Operational Environmental Satellite} system\footnote{The {\it Geostationary Operational Environmental Satellite} system observations were obtained from http://www.ngdc.noaa.gov/stp/SOLAR/ftpsolarflares.html} (Garcia 1994).
The flare occurrence rate distribution for the Sun and \kcet are shown in Fig.~1 (solid lines) together with power law fits to these distributions of the form (Audard et al. 2000): $N(>E)=kE^{-\alpha+1}$,
where $k$ is a normalization factor and $\alpha$ is a constant measuring the hardness of the flare distribution.
In order to compare the two flare occurrence rate distributions we need to correct for the fact that the solar flares were observed in an energy range roughly 50 times smaller than the energy range in which the flares on \kcet were observed; the solar flares were observed in the soft X-ray band (1-8 {\AA}), while the flares on \kcet were observed in the EUV band (0.01-10 keV). We therefore multiply the solar flare energies by 50 (Audard et al. 2000).
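As an illustration of the fitting procedure (not the actual data), the sketch below draws synthetic flare energies from an assumed power-law differential distribution and recovers $\alpha$ from a least-squares fit to the cumulative occurrence rate, as in the fits shown in Fig.~1; the normalization and index are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic flare energies from an assumed power-law differential
# distribution dN/dE ~ E^-alpha (alpha_true is illustrative, not the
# value fitted by Audard et al. 2000).
alpha_true, E_min = 2.0, 1e30         # erg
n_flares, T_obs = 500, 7.0            # number of flares, days observed
u = rng.random(n_flares)
E = E_min * (1.0 - u) ** (1.0 / (1.0 - alpha_true))   # inverse-CDF draw

# Cumulative occurrence rate N(>E) in flares/day, as plotted in Fig. 1.
E_sorted = np.sort(E)[::-1]
N_cum = np.arange(1, n_flares + 1) / T_obs

# Least-squares fit of log N(>E) = log k + (1 - alpha) log E.
slope, log_k = np.polyfit(np.log10(E_sorted), np.log10(N_cum), 1)
alpha_fit = 1.0 - slope
```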
\begin{figure}
\includegraphics[width=\columnwidth]{helas4_karoff2_fig01.eps}
\caption{Comparison between the cumulative flare occurrence rate distributions for the Sun and $\kappa^1$~Ceti. The flare rate for the Sun has been calculated from soft X-ray (1-8 {\AA}) data from the {\it Geostationary Operational Environmental Satellite} system, and the flare rate for \kcet has been calculated from EUV (0.01-10 keV) data from the {\it Extreme Ultraviolet Explorer} (Audard et al. 2000). The solid lines show the observations. The dotted lines show power law fits to the observations of the Sun and \kcet, respectively. The dashed line shows the power law fit to the solar observations, but with the flare energies multiplied by 50 in order to make a reliable comparison to the \kcet observations, which have been integrated over a larger energy range. It is seen that whereas flares with integrated energies around $10^{32}$ erg (the ones that cause Forbush decreases) are rather rare on the Sun, they occur daily on $\kappa^1$~Ceti.}
\end{figure}
All five Forbush decreases analyzed by Svensmark, Bondo \& Svensmark (2009) were associated with flares with an integrated energy around $10^{32}$ erg. These Forbush decreases generally led to a $\sim$10\% decrease in the GCR influx over around a week. In Fig.~1 it is seen that such flares happen around once a day on $\kappa^1$~Ceti. This means that around 7 Forbush decreases would be in progress around an Earth-like planet around \kcet at any given time. Assuming that all 7 Forbush decreases lead to a 10\% reduction in the GCR influx, an Earth-like planet around \kcet would experience an approximate 50\% mean reduction in the GCR influx from Forbush decreases -- i.e. around half the reduction that is expected to occur due to the more effective shielding of GCRs around $\kappa^1$~Ceti.
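One reading consistent with the quoted $\sim$50\% figure is to treat the $\sim$7 simultaneous Forbush decreases as independent multiplicative 10\% reductions:

```python
# Flares of ~1e32 erg occur about once per day on kappa^1 Ceti (Fig. 1)
# and each Forbush decrease lasts about a week, so roughly 7 decreases
# overlap at any given time.
rate_per_day, duration_days = 1.0, 7.0
overlapping = rate_per_day * duration_days

# Treating each ~10% reduction as an independent multiplicative factor:
remaining = 0.9 ** overlapping        # fraction of GCR flux left, ~0.48
reduction = 1.0 - remaining           # mean reduction, ~52%
```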
It is apparent that an Earth-like planet around \kcet would experience a reduction in the GCR influx both from the more effective shielding by a larger astrosphere and from Forbush decreases at the same time: a 90\% reduction from the more effective shielding, and a 50\% reduction of the remaining 10\% from Forbush decreases. Therefore, adding the contribution from Forbush decreases to the contribution from the more effective shielding, Eq.~1 predicts an 8 instead of a 7 degree warmer climate. This is of course an insignificant difference -- removing what is already mostly absent changes little. On the other hand, this study has shown that if the early Sun had not been more effective in shielding the Earth from GCRs, the Earth would still have experienced a reduced GCR influx due to Forbush decreases.
\section{Conclusion}
Using the young solar twin \kcet as a case study we have shown that the reduction in the GCR influx to the Earth caused by Forbush decreases had the same order of magnitude as the reduction caused by more effective shielding from a larger heliosphere at the time life evolved on the Earth.
This does not change the conclusion made by Shaviv (2003): that the warming associated with a reduced GCR influx is enough to significantly compensate for the fainter Sun at the time life evolved on the Earth and can explain about 1/2 to 2/3 of the temperature increase between now and then. Thus the warming associated with a reduced GCR influx is enough to solve {\it the faint young Sun paradox}.
An open question is whether the GCRs that are scattered away from the CMEs during a Forbush decrease will eventually return. There is no evidence from ground-based observations of GCRs that the influx increases after a Forbush decrease, and it therefore seems safe to assume that the 10\% of the GCR influx is simply scattered so far away from the Earth during a Forbush decrease that it can be considered removed from the near-Earth environment. This is most likely also the case for an Earth-like planet around \kcet, as a large part of the CMEs will be orders of magnitude larger than those we have observed on the Sun.
\kcet is a unique laboratory for understanding the activity of the Sun when life evolved on the Earth: not only for understanding how a reduced GCR influx could have affected the climate back then, but also for understanding, for example, how a larger X-ray flux from the larger corona and a larger UV flux from a stronger chromosphere could have affected the evolution of life.
The larger flare rate of the young Sun has of course also had other consequences. Firstly, the largest flares and accompanying CMEs would have reduced the amount of ozone in the Earth's atmosphere, making life on Earth much more vulnerable to UV radiation (Schaefer et al. 2000). Secondly, large solar flares and accompanying CMEs might also have played a more direct r\^{o}le in the evolution of life on the Earth by providing an energy source for creating organic molecules, such as the lightning in the Miller-Urey experiment (Miller \& Urey 1959).
Unfortunately, we still lack a good asteroseismic estimate of the age of this star; the best current age estimates of \kcet, based on measurements of its rotation and activity, come with uncertainties of $\sim$500 million years. \kcet is therefore an obvious target for future ground-based asteroseismic campaigns -- e.g. the first observations by the {\it Stellar Observations Network Group} (Grundahl et al. 2009).
\vspace{2cm}
\section*{Acknowledgments}
CK acknowledges financial support from the Danish Natural Science Research Council. The National Oceanic and Atmospheric Administration operates the {\it Geostationary Operational Environmental Satellite} system.
\section{Applications}\label{application}
DeepProbe\xspace is versatile in the sense that it has a wide range of applicability, covering inference, ranking, adaptive sampling and decision making. We have built three applications of DeepProbe\xspace, illustrated in figure \ref{three}. In this section, we will elaborate on the details including the design, training, implementation and evaluations of these applications. Note that we build the applications on top of a commercial ``product ads'' search engine, which recommends products if a user inputs a query with product intent.
\noindent
\begin{figure}
\centering
\resizebox{0.8\columnwidth}{!}{%
\begin{tikzpicture}[font=\sffamily,>=stealth',thick,
commentl/.style={text width=3cm, align=right},
commentr/.style={commentl, align=left},]
\node[] (init) {\LARGE User};
\node[right=2cm of init] (recv) {\LARGE Agent};
\draw[->] ([yshift=-0.7cm]init.south) coordinate (fin1o) -- ([yshift=-.7cm]fin1o-|recv) coordinate (fin1e) node[rectangle, fill=green, pos = .5, align=center, above, sloped, draw] {Ingress:\\Query Rewriter}
node[pos = .5, align=center, below, sloped] {{\itshape``tablet TV connector"}};
\draw[->] ([yshift=-.3cm]fin1e) coordinate (ack1o) -- ([yshift=-2.3cm]ack1o-|recv) coordinate (ack1e)
node[rectangle, fill=green, pos = .4, align=center, right, draw] {Processing:\\Info. Directed \\Decision Maker}
node[pos = .4, align=center, left] {``Recommend or\\ ask more questions?''};
\draw[->] (ack1e-|recv) coordinate (fin2o) -- ([yshift=-.7cm]fin2o-|init) coordinate (fin2e) node[rectangle, fill=green, pos = .5, align=center, above, sloped, draw] {Egress:\\ Relevance Scoring};
\draw[->] ([yshift=-.3cm]fin2e) coordinate (ack2o) -- ([yshift=-.7cm]ack2o-|recv) coordinate (ack2e) node[pos=.5, above, sloped] {Query (If needed)};
\draw[thick, shorten >=-1cm] (init) -- (init|-ack2e);
\draw[thick, shorten >=-1cm] (recv) -- (recv|-ack2e);
\draw[dotted] (recv.285)--([yshift=2mm]recv.285|-fin1e) coordinate[pos=.5] (aux1);
\draw[dotted] (init.255)--([yshift=2mm]init.255|-fin1o);
\draw[dotted] ([yshift=1mm]init.255|-fin2e) --([yshift=-5mm]init.255|-ack2e) coordinate (aux2);
\node[commentr, right =2mm of ack2e] {\textbf{...}};
\node[below left = 0mm and 2mm of init.south, commentl]{\textbf{INITIAL QUERY}\\[-1.5mm]{\itshape ``How do I connect tablet to TV?"}};
\node[below left = -1mm and 2mm of aux2-|init, commentl]{\textbf{...}};
\end{tikzpicture}
}
\caption{Three applications (in green) of DeepProbe\xspace}
\label{three}
\end{figure}
\subsection{Ingress: Query Rewriting}\label{sec:rewriting}
Query understanding and rewriting is a vital pre-processing step for modern recommendation and information retrieval systems. In the area of product ads recommendation, a user query entered into the search engine should 1) trigger an ad recommendation action and 2) return relevant ads. If the query is in a standard form (grammatically correct and containing the right keywords), the backend information retrieval system will be able to return the recommendations with the short response time needed to ensure a high quality of service. In the first application, we apply DeepProbe\xspace to query rewriting.
\subsubsection{What \& How: Question Understanding}
A pain point we identified in product ad recommendation in our search engine is that queries in question form are not processed well. These queries are often ambiguous, and the product is referred to only implicitly, usually through a relationship to other entities. As an example, a user might type in the search box a question like
\[ How\ to\ connect\ my\ tablet\ to\ TV?\]
From a human point of view, this query clearly points to a product: a micro HDMI cable. However, it poses a challenge to the information retrieval system, as no clear keywords related to the right product ad are present in the query.
\subsubsection{Training and Data}
\label{sec:rewrite_data}
We trained DeepProbe\xspace to generate standard queries from question-form queries. To serve this purpose, we used data collected from a ``related searches'' feature on a commercial search engine. The related searches are a list of queries recommended to a user when a specific query is typed in the search box, and many of them are standard queries. We picked user-input queries starting with ``what'' and ``how'', and regard a related search query as a positive training example if it was clicked by the user. The click confirms that the standard query is indeed relevant to the question the user entered. By doing so, we were able to collect a dataset consisting of 12 million clicked (question-form query, standard query) pairs. To further focus on questions that will end up with a product ad recommendation, we filter the dataset by keeping only the pairs for which there was a product ad recommendation for the standard query itself. A total of 782 thousand such training pairs were collected. We summarize the statistics of the training dataset in Table \ref{tb:rewrite_train}.
\begin{table}
\centering
\begin{tabular}{ l | c c c c}
& size & vocabulary & average length & clicks \\
\hline
questions & 316K & 126K
& 5.5 & 782K \\
queries & 481K & 870K
& 2.8 & 782K
\end{tabular}
\caption{Statistics of the Rewrite Training Set}
\label{tb:rewrite_train}
\end{table}
\begin{table}
\centering
\begin{tabular}{l | l| l }
\# questions & \# queries & \# pairs \\
\hline
34K & 42K & 45K
\end{tabular}
\caption{Statistics of the Rewrite Test Set}
\label{tb:rewrite_test}
\end{table}
\subsubsection{Details of Model}
We used the model in Section \ref{sec:seq2seq} with vocabulary size $|\mathcal V|=100k$ for both the encoder and decoder. Any word not in $\mathcal V$ is assigned the symbol $\langle$UNK$\rangle$. %
We chose the embedding dimension $d_{emb}=100$. We used 3-layer LSTMs with hidden vector size $d_h$=300 on the decoder side, and we implemented 4 different attention scenarios as in Section \ref{sec:atten}. The results for the four different attention mechanisms are compared. The model rewrites to a sequence of words as follows.
At step $i$ of the decoder, the model picks the most likely word and uses it as the input to the embedding layer at step $i+1$, until the max length is reached or an $\langle$EOS$\rangle$ token is encountered. We used Theano \cite{2016arXiv160502688short} for model training on a Tesla K20 GPU, with cross entropy as the loss function and Adadelta \cite{zeiler2012adadelta}, a variant of Adagrad \cite{duchi2011adaptive}, for gradient descent. We do end-to-end training to learn all the parameters described in Section \ref{sec:seq2seq}, with a total of 10 epochs.
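The four attention scenarios are defined in Section \ref{sec:atten} and are not reproduced here; as a rough sketch, assuming Luong-style scoring functions (which the names dot, general and concat suggest), the attention weights and context vector can be computed as follows (the tensor variant is omitted):

```python
import numpy as np

rng = np.random.default_rng(1)
d_h, n_src = 300, 6                    # decoder hidden size used above
H = rng.standard_normal((n_src, d_h))  # encoder hidden states (6 words)
h_t = rng.standard_normal(d_h)         # decoder state at the current step

# Assumed Luong-style scoring functions for three of the four variants;
# the weight matrices here are random placeholders, not learned values.
W_a = 0.01 * rng.standard_normal((d_h, d_h))
W_1 = 0.01 * rng.standard_normal((2 * d_h, d_h))
v_a = rng.standard_normal(d_h)

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

score_dot     = H @ h_t                                   # dot
score_general = H @ (W_a @ h_t)                           # general
concat_in     = np.concatenate([np.tile(h_t, (n_src, 1)), H], axis=1)
score_concat  = np.tanh(concat_in @ W_1) @ v_a            # concat

# Attention weights and context vector, shown here for "general".
a = softmax(score_general)
context = a @ H
```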
\subsubsection{Result and Evaluation}
In our experiments, we found DeepProbe\xspace's rewriting helped in two ways. First, while many original queries are product related, they did not trigger product ads, due to the form in which the queries are presented, or their implicitness. After being rewritten, they become more keyword-like and trigger product ads.
Some of such examples are
\begingroup\makeatletter\def\f@size{7}\check@mathfonts
\begin{align*}
\text{How to connect my tablet to TV}&\rightarrow\text{ tablet tv connector}\\
\text{How to repair my broken iphone screen}&\rightarrow\text{ iphone screen replacement}\\
\text{How to charge my iphone}&\rightarrow\text{ iphone charger}\\
\text{How to protect my iphone screen}&\rightarrow\text{ iphone screen protector}
\end{align*}
\endgroup
Secondly, rewriting also helps in retrieving the correct ads, especially when implicit or complex relations are present in the query. As an explicit example, the query ``How to wire car radio'' indicates, from a human understanding perspective, that the user already has the radio and is looking for wiring products. When submitted in its original form, ads for car radios are retrieved. After DeepProbe\xspace rewrites it to ``radio wiring'', the correct ads (radio wiring harnesses) are retrieved. Another example is the query ``How to fix gps in car'', which in its original form triggers ads for mobile GPS devices; after rewriting, the correct ads, GPS holders, are returned.
As a quantitative evaluation, we test on a separate (question-form query, standard query) dataset. The test set was collected using the same procedure described in Section \ref{sec:rewrite_data}, but sampled from logs from a different time period. In addition, any pair appearing in the training dataset was removed from the test set. We summarize the statistics of the test dataset in Table \ref{tb:rewrite_test}.
\subsubsection{Quality of Rewrites}
For each pair in the test set, we generate rewrites using the different DeepProbe\xspace model variations. We evaluate each rewrite against the standard query as a baseline using the BLEU score \cite{Papineni:2002:BMA:1073083.1073135}. BLEU (bilingual evaluation understudy) is an algorithm for evaluating the quality of text which has been machine-translated from one natural language to another. We borrow the same technique to evaluate query rewriting, since BLEU is a standard evaluation for seq2seq-model-based translation. We summarize the average BLEU scores in Table \ref{tb:BLEU}. We can clearly see that the attention mechanisms consistently outperform the base model without attention. This observation differs from several recent attempts to apply seq2seq models to question answering, such as \cite{DBLP:journals/corr/VinyalsL15}, which stated that the attention mechanism is not helpful.
If we reflect based on the examples shown above, this can be explained by the source-target sequence alignment characteristic of our question {\em rewriting} application, a property that machine translation shares but not question {\em answering}.
Among the attention mechanisms, general attention performs best while concat attention is the worst. Strictly speaking, concat attention does not perform alignment directly using the hidden vectors of the source and target words, whereas dot and general attention do exactly that. Lastly, the tensor mechanism ranks second, only slightly worse than general attention. We suspect its structure is too complicated to learn when combined with recurrent neural networks.
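For reference, the BLEU evaluation above can be sketched as follows; this is a simplified sentence-level variant (bigram order with the standard brevity penalty), whereas the reported scores were presumably computed with a full reference implementation:

```python
import math
from collections import Counter

def ngrams(tokens, n):
    # Multiset of n-grams, for modified (clipped) precision.
    return Counter(tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1))

def bleu(candidate, reference, max_n=2):
    """Simplified sentence-level BLEU: clipped n-gram precisions up to
    max_n, their geometric mean, and the standard brevity penalty."""
    cand, ref = candidate.split(), reference.split()
    log_prec = 0.0
    for n in range(1, max_n + 1):
        c_ngr, r_ngr = ngrams(cand, n), ngrams(ref, n)
        overlap = sum(min(c, r_ngr[g]) for g, c in c_ngr.items())
        total = max(sum(c_ngr.values()), 1)
        log_prec += math.log(max(overlap, 1e-9) / total) / max_n
    bp = 1.0 if len(cand) >= len(ref) else math.exp(1.0 - len(ref) / max(len(cand), 1))
    return bp * math.exp(log_prec)

# A rewrite identical to the standard query scores 1.0.
score = bleu("tablet tv connector", "tablet tv connector")
```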
\subsubsection{Quality of Ad Recommendation}
We evaluate the quality of ad recommendation using the rewrites as input to the ad system.
We first sampled 1000 question queries from the test pairs and generated rewrites using the different DeepProbe\xspace model variations. Then we submitted the rewrites to the product ads search engine. This batch submission effort is usually called ``scraping'' in industry. We also scraped the system with the original question queries and the standard queries respectively, in order to compare ad coverage and quality. The results are presented in Table \ref{tb:scrape}.
The ads coverage is defined as the percentage of questions for which ads are returned.
For ads quality, we sample 3000 (original question, ad) pairs for each version of rewrites and their returned ads. Each pair is labeled by a group of trained human judges according to the relevance between the query and the ad. Each label is one of \{bad, fair, good, excellent\}, and we consider \{fair, good, excellent\} as positive. Note that along with the recommended ads, we submit the ``original question'' to the judges. Judges only compare the original question to the returned ad, without knowing that the ads were actually retrieved using rewrites. The ads quality is the percentage of positively labeled ads in each 3000-pair set.
In Table \ref{tb:scrape}, we see that only 21.0\% of the original questions triggered product ad recommendations, and 21.1\% of the returned ads are of reasonable quality. If we scrape with the related search queries collected from the log, we see much higher ad coverage at 84.7\%. This is expected, as we already filtered the test set this way. More importantly, we see even better ad quality at 25.3\%. This confirms the validity of our rewrite data collection method: the clicked related search queries are indeed relevant to the questions, so the ads returned using them are of comparable or better relevance than those obtained by scraping with the questions themselves.
Among DeepProbe\xspace's rewrites, we see that using {\it general} attention achieves both the highest coverage and quality. We see a 3.5x increase in coverage and a $50\%$ increase in quality, relative to the original query. Even when compared with the unobserved ground-truth, i.e., the clicked related search queries, we see only a $10.7\%$ decrease in coverage but a $12.4\%$ increase in quality.
\begin{table}[t]
\centering
\begin{tabular}{ l | c }
Model & BLEU Score \\
\hline
DeepProbe\xspace rewrites without attention
& 0.326 \\
DeepProbe\xspace rewrites with dot attention
& 0.349 \\
DeepProbe\xspace rewrites with general attention
& {\bf 0.388} \\
DeepProbe\xspace rewrites with concat attention
& 0.331 \\
DeepProbe\xspace rewrites with tensor attention
& 0.364 \\
\end{tabular}
\caption{BLEU scores between rewrites and standard queries}
\label{tb:BLEU}
\end{table}
\begin{table}[htb]
\centering
\resizebox{\columnwidth}{!}{%
\begin{tabular}{ l | c c c }
Scrape Set & Ads Coverage
& Ads Quality \\
\hline
Original questions & 21.0\%
& 21.1\% \\
Related search queries & 84.7\%
& 25.3\% \\
\hline
DeepProbe\xspace rewrites without attention & 67.4\%
& 26.7\% \\
DeepProbe\xspace rewrites with dot attention & 64.2\%
& 33.0\% \\
DeepProbe\xspace rewrites with general attention & 74.0\%
& {\bf 37.7\%} \\
DeepProbe\xspace rewrites with concat attention & 64.1\%
& 18.2\% \\
DeepProbe\xspace rewrites with tensor attention & 64.1\%
& 28.8\%
\end{tabular}
}
\caption{Scraping results}
\label{tb:scrape}
\end{table}
\subsubsection{Discussion}
The results indicate that doing ``question''-rewriting with appropriate training data achieves improved recommendation quality and coverage. This staged approach can be seamlessly integrated into current infrastructure. It does not require any change in the existing information retrieval system, as the rewritten query can be submitted either instead of or along with the original one.
In addition, targeting only ``what'' and ``how'' questions is just the first step towards a general-purpose question-answering system. Readers can imagine this application as part of a large-scale, comprehensive system, in which it focuses on product recommendation. Lastly, one may argue that although a significant improvement is observed, the reported ads quality is still not high. This leads to the next section, which uses the seq2seq model for scoring and keeping only the better candidates.
\subsection{Egress: Relevance Scoring}\label{sec:scoring}
DeepProbe\xspace's ability of estimating items' posterior distribution also makes it a good fit for quality control at the egress side. When a set of ads are returned from the information retrieval infrastructure, DeepProbe\xspace can serve as a relevance filter which shows only the most related ads to the user.
\subsubsection{Training and Data}
The DeepProbe\xspace scoring model was trained on our internal dataset, which consists of clicked (query, ad) pairs sampled from a commercial product ad search engine. The ads come from a product ad database, each of which is a sequence of words describing the corresponding product. The queries are user inputs to our search engine, and if a user clicked on an ad when searching with a query, we regard it as a positive (query, ad) pair. A total of 15 million clicks were sampled from a month of click logs, yielding 6.4 million distinct user queries and 5.1 million distinct ads. We summarize the statistics of the training dataset in Table \ref{tb:data}.
\subsubsection{Details of Model}
We used the model in Section \ref{sec:seq2seq} and chose a vocabulary size $|\mathcal V_q|=60k$ on the query side and $|\mathcal V_d|=100k$ on the ad side. Any word that is not part of $\mathcal V$ is assigned the symbol $\langle$UNK$\rangle$. We chose the embedding dimension $d_e=150$ on both the encoder and decoder sides, and hidden dimension $d_h=300$ on the decoder side.
Although we did notice improved performance for deeper networks, in this experiment we trained a single-layer LSTM model for the fair comparison noted below. We trained the model for 5 epochs.
\begin{table}
\centering
\begin{tabular}{ l | c c c c}
& size & vocabulary & average length & clicks \\
\hline
query & 6.4M & 68K & 4.1 & 15M \\
ads & 5.1M & 114K & 9.3 & 15M
\end{tabular}
\caption{Statistics of the Scoring Training Set}
\label{tb:data}
\end{table}
\begin{table}[t]
\begin{tabular}{l | l| l | l | l}
\# queries & \# ads & \# pairs & \# positive & \# negative \\
\hline
23K & 915K & 965K & 234K & 731K
\end{tabular}
\centering
\caption{Statistics of the Scoring Test Set}
\label{tb:testset}
\end{table}
\subsubsection{Evaluation}
As a quantitative evaluation, we test on a fully annotated test set. The test set contains around 966 thousand (query, ad) pairs, where each pair is labeled by a group of trained human judges according to the relevance between the query and the ad. Each label is one of \{bad, fair, good, excellent\}. The pairs are sampled from the early selection stage of a commercial ads search engine, where there is a significant amount of low-quality selected ads to be pruned out in downstream processing. We use AUC (the area under the receiver operating characteristic curve) as the evaluation metric,
considering the good and excellent labels as the positive class and the remaining labels as the negative class. This results in a test set consisting of 234 thousand positive and 731 thousand negative pairs.
We briefly summarize the statistics of the test set in Table \ref{tb:testset}. We use a uniform prior $\pi$ on the set of ads, and hence $Pr(ad|query) \propto Pr(query|ad)$ as in Equation (\ref{eq:likelihood}). As a result, $Pr(query|ad)$ serves as the relevance score for a pair $(query, ad)$. The AUC is then computed from the scores for all the pairs in the test set.
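A small sketch of this scoring-and-evaluation pipeline: with a uniform prior, the decoder log-likelihood is the relevance score, and the AUC can be computed from the rank (Mann-Whitney) statistic. The scores below are synthetic placeholders, not drawn from the actual test set:

```python
import numpy as np

rng = np.random.default_rng(2)

# With a uniform prior pi over ads, Pr(ad|query) is proportional to
# Pr(query|ad), so the decoder log-likelihood log Pr(query|ad) serves
# directly as the relevance score.  Synthetic scores for labelled pairs:
scores = np.concatenate([rng.normal(-3.0, 1.0, 200),   # positive pairs
                         rng.normal(-5.0, 1.0, 600)])  # negative pairs
labels = np.concatenate([np.ones(200), np.zeros(600)])

# AUC via the rank (Mann-Whitney) statistic: the probability that a
# randomly chosen positive pair outscores a randomly chosen negative.
order = scores.argsort()
ranks = np.empty(len(scores))
ranks[order] = np.arange(1, len(scores) + 1)
n_pos, n_neg = labels.sum(), (1.0 - labels).sum()
auc = (ranks[labels == 1].sum() - n_pos * (n_pos + 1) / 2) / (n_pos * n_neg)
```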
We compare against existing relevance scoring baselines, including the popular CDSSM \cite{shen2014learning} and DeepIntent \cite{zhaideepintent}, both of which are deep-learning based and have shown very satisfactory results in production. Given a (query, ad) pair, they encode the query and the ad separately into two vectors, and then compute the cosine similarity of the two vectors as the relevance score. We trained CDSSM, DeepIntent and DeepProbe\xspace on the same training dataset, and evaluated performance by comparing AUCs on the same test set.
Our results are presented in Table \ref{tb:auc}.
\begin{table}[t]
\centering
\resizebox{\columnwidth}{!}{%
\begin{tabular}{lclc|l}
Implementation & Decoder & Encoder Architecture & Encoder Embedding & AUC \\
\hline
(a) CDSSM & None & Conv / max pooling & Tri-letter hash & 0.726 \\
(b) DeepIntent & None & Conv / max pooling & Word-based & 0.728 \\
(c) DeepIntent & None & BLSTM / last pooling & Word-based & 0.798\\
(d) DeepProbe\xspace & Yes & BLSTM / last pooling & Word-based & {\bf 0.840}
\end{tabular}
}
\caption{AUC scores of different Scoring Frameworks}
\label{tb:auc}
\end{table}
Note that both CDSSM and DeepIntent use only encoders, while DeepProbe\xspace has both encoder and decoder components. To make a fair comparison, it is necessary to keep the encoder settings as similar as possible, namely the encoder architecture, encoded vector size, and depth of the recurrent neural networks. We set the encoded vector size to 300 across all models, train all models with depth 1, and set the word-embedding size to 150 where applicable. We also avoid using attention in DeepProbe\xspace to keep the comparison fair and simple. Below we discuss the different encoder architectures and their AUC performance:
1) DeepIntent with BLSTM resembles DeepProbe\xspace the most, i.e., they share the same encoder architecture. Both start with a word-based embedding layer, use a BLSTM to compute a sequence of hidden vectors, and take the last vector as the final encoded vector. Comparing this baseline against DeepProbe\xspace therefore isolates the gain from having a decoder. In Table \ref{tb:auc}, DeepProbe\xspace achieves 0.840 AUC, as shown in row (d), outperforming this baseline at 0.798 AUC in row (c).
2) Rather than a BLSTM, CDSSM uses a convolutional (Conv) layer on the encoder side. The Conv layer aggregates tri-letter-based word-hash vectors via a sliding window; the resulting sequence of vectors is reduced to a final encoded vector with max pooling. The CDSSM implementation also has a fully connected layer that reduces the size of the final encoded vector for online performance reasons. For a fair comparison, we set both the internal hidden vector size and the final encoded vector size to 300. In Table \ref{tb:auc}, the CDSSM implementation achieves a far worse AUC of 0.726, shown in row (a).
3) Since CDSSM differs in two aspects of the encoder, namely the tri-letter-based embedding and the Conv layer, we modified the DeepIntent implementation to use a Conv layer in order to understand where the loss in AUC comes from, i.e., whether it is due to the different embedding or the recurrence layer. DeepIntent with a Conv layer achieves only 0.728 AUC, as shown in row (b) of Table \ref{tb:auc}, similar to the CDSSM implementation in row (a). Comparing (b) against (c) strongly suggests that the BLSTM-based encoder outperforms the Conv-based encoder.
In summary, DeepProbe\xspace not only provides scores with a probabilistic interpretation, but also achieves better performance than similarity-based scoring methods, namely CDSSM and DeepIntent.
\subsubsection{Discussion}
We would like to briefly discuss the cost of computation and implementation, and point out why DeepProbe\xspace is capable of being a good relevance filter. First, from a practical point of view, having DeepProbe\xspace act on top of the existing information retrieval system requires no modification to the infrastructure, minimizing implementation cost. Second, from a computational point of view, when used as a relevance scoring method, DeepProbe\xspace needs to calculate a score for each (query, $\text{ad}_i$) pair for every ad requesting a relevance score. As a result, the cost of computation grows linearly with the number of ads. This is also the reason we do not use DeepProbe\xspace to directly search the entire ads database for the most relevant ones: the hundreds of millions of ads in the database would make the search too slow to guarantee service quality.
When used as the egress control of an information retrieval engine, however, the number of returned ads is limited, usually on the scale of tens. Moreover, batching and hierarchical softmax can further reduce the required computation time.
\noindent
\begin{figure}
\centering
\resizebox{\columnwidth}{!}{%
\begin{tikzpicture}[font=\sffamily,>=stealth',thick,
commentl/.style={text width=3cm, align=right},
commentr/.style={commentl, align=left},]
\node[] (user) {\LARGE User};
\node[right=3cm of user] (bot) {\LARGE Bot Agent};
\node[right=3cm of bot] (ir) {\LARGE IR System};
\draw[->] ([yshift=-0.3cm]user.south) coordinate (fin1o) -- ([yshift=-.3cm]fin1o-|bot) coordinate (fin1e) node[rectangle, pos = .5, align=center, above, sloped, draw] {Query Rewriter}
node[pos = .5, align=center, below, sloped] {{\itshape``tablet TV connector"}};
\draw[->] ([yshift=-0.5cm]bot.south) coordinate (fin3o) -- ([yshift=-0.5cm]fin3o-|ir) coordinate (fin3e) node[rectangle, pos = .5, align=center, above, sloped, draw] {Submit to IR}
node[pos = .5, align=center, below, sloped] {{\itshape``tablet TV connector"}};
\draw[->] ([yshift=-0.4cm]fin3e-|ir) coordinate (fin2o) -- ([yshift=-0.7cm]fin2o-|bot) coordinate (fin2e) node[rectangle, pos = .6, align=center, above, sloped, draw] {Returned from IR}
node[pos = .5, align=center, below, sloped] {[HDMI, micro HDMI, VGA, ...]};
\draw[->] ([yshift=-0.4cm]fin2e) coordinate (ack1o) -- ([yshift=-1.0cm]ack1o-|bot) coordinate (ack1e)
node[rectangle, pos = .4, align=center, right, draw] {Estimate Posterior \\Distribution}
node[pos = .4, align=center, left] {[HDMI (Pr=0.5), \\micro HDMI (Pr=0.4), \\VGA (Pr=0.1), ...]};
\draw[->] ([yshift=-0.1cm]ack1e) coordinate (ack2o) -- ([yshift=-1.4cm]ack2o-|bot) coordinate (ack2e)
node[rectangle, pos = .4, align=center, right, draw] {Information Based \\ Decision Making \\ Estimate Entropy}
node[pos = .35, align=center, left] {Confident, Recommend \\ \itshape`` Do you like this HDMI cable?"}
node[pos = .75, align=center, left, yshift=-0.4cm] {Not Confident,\\ Ask M.I. maximizing question \\ \itshape`` What size do you want?"};
\draw[->] ([yshift=-0.5cm]ack2e-|bot) coordinate (fin4o) -- ([yshift=-.3cm]fin4o-|user) coordinate (fin4e);
\draw[->] ([yshift=-0.1cm]fin4e) coordinate (ack3o) -- ([yshift=-1.0cm]ack3o-|user) coordinate (ack3e)
node[rectangle, pos = .4, align=center, left, draw] {User Receives Recommendation\\ or Question}
node[pos = .4, align=center, right] {\itshape ``I want micro sized."};
\draw[->] ([yshift=-0.0cm]ack3e) coordinate (ack5o) -- ([yshift=-2.0cm]fin4o-|bot) coordinate (fin5e) node[rectangle, pos = .5, align=center, above, sloped, draw] {Query Rewriter}
node[pos = .5, align=center, below, sloped] {{\itshape``micro size"}};
\draw[thick, shorten >=-1cm] (user) -- (user|-ack3e);
\draw[thick, shorten >=-1cm] (bot) -- (bot|-ack3e);
\draw[thick, shorten >=-1cm] (ir) -- (ir|-ack3e);
\node[commentr, right =2mm of fin5e] {\textbf{...}};
\node[below left = 0mm and 2mm of user.south, commentl]{\textbf{INITIAL QUERY}\\[-1.5mm]{\itshape ``How do I connect tablet to TV?"}};
\node[below left = 15mm and 2mm of fin5e-|user, commentl]{\textbf{...}};
\end{tikzpicture}
}
\caption{ChatBot Design}
\label{fig:chatbot}
\end{figure}
\subsection{Chatbot: Information Directed Conversation and Recommendation}\label{sec:chatbot}
The last application we introduce is a chatbot that specializes in product ad recommendation. Virtual agents and chatbots have gained popularity due to their user-friendliness and interactiveness. They not only offload some of the work of search engines, but also create new user interaction entry points \cite{aron2011innovative}.
Below we explain the flow of interaction with examples.
\subsubsection{Flow of Dialog and System Behaviors}
The interactive session starts with the first query submitted by a user. For example, the user can ask the chatbot ``How do I connect my tablet to TV?". The chatbot then retrieves an initial list of related ads from the information retrieval backend system. To do so, it applies DeepProbe\xspace's rewriting to the question, converting it into a standard query, in this case ``tablet tv connector". This standard query is then submitted to the information retrieval system, which returns a list of ads, e.g., ads about HDMI cables, micro HDMI cables, or VGA cables. The chatbot then uses DeepProbe\xspace's posterior distribution estimation to calculate the distribution over the returned ad list, and estimates the corresponding conditional entropy. In the decision-making step, if the conditional entropy is less than some threshold $T$, the top $k$ (3 by default) most relevant ads are returned to the user. Each ad is displayed with a picture, the selling price, and the merchant selling it, and is embedded with a hyperlink the user can click. A click redirects the user to the e-commerce web page hosted by the merchant, where the user can continue exploring and make a purchase.
Otherwise, the bot asks a conditional mutual-information maximizing question to the user, in this case the question is about the ``size'' of the connector products. For example, ``what size do you want?''. This conversation goes until a final recommendation is made. Figure \ref{fig:chatbot} gives an illustration of the procedure in the form of a timing diagram.
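The decision step above can be sketched as follows. The posterior values and the threshold are hypothetical toy numbers; the real posterior comes from DeepProbe\xspace's likelihood estimator.

```python
import math

# Sketch of the chatbot's decision step: recommend the top-k ads when the
# entropy of the posterior Pr(ad | inputs) falls below a threshold T,
# otherwise ask a clarifying question.

def entropy(posterior):
    """Shannon entropy (bits) of a distribution given as {ad: prob}."""
    return -sum(p * math.log2(p) for p in posterior.values() if p > 0)

def decide(posterior, T=1.0, k=3):
    if entropy(posterior) < T:                      # confident enough
        top = sorted(posterior, key=posterior.get, reverse=True)[:k]
        return ("recommend", top)
    return ("ask", None)                            # gather more information

posterior = {"HDMI": 0.5, "micro HDMI": 0.4, "VGA": 0.1}
print(decide(posterior, T=1.4))   # entropy ~ 1.36 bits, below T: recommend
print(decide(posterior, T=0.5))   # above T: ask a clarifying question
```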
Next we explain how we formulate such questions.
\subsubsection{Question Formulation}
By the principle of maximizing expected information gain, at each step, if the chatbot is not confident, it should ask a question that maximizes the conditional mutual information. The problem is: what is the set of questions we are maximizing over? If we allow arbitrary questions, the chatbot may face issues such as 1) the question may not be relevant to the product and thus be confusing, and 2) the mutual information may be difficult to estimate.
To address this issue, we leverage the attributes associated with each ad. For example, an ad about a laptop has attributes such as ``processor", ``RAM size", ``manufacturer" and so on. Similarly, for clothes there are attributes like ``color", ``size" and ``material". By formulating questions based on the attributes, the aforementioned issues go away. First, it is easier for users to relate: users have the perception that the chatbot is working with them to narrow down the most relevant product by confirming attribute information. Second, it is straightforward to estimate mutual information for attribute-based questions.
Notice that attributes only depend on the ads, so $\text{Input}_1^n - \text{Ad} - \text{Attribute}$
forms a Markov chain.
This allows us to estimate mutual information, as the conditional distribution, $Pr(\text{Ad}, \text{Attribute}|\text{Input}_1^n)$, can be calculated by
\begin{equation}
Pr(\text{Ad}, \text{Attribute}|\text{Input}_1^n)=Pr(\text{Attribute}|\text{Ad})\,Pr(\text{Ad}|\text{Input}_1^n)
\label{eq:attribute}
\end{equation}
where the first factor is estimated by counting and the second factor is directly provided by DeepProbe\xspace's posterior update. After the information-maximizing attribute is identified, a question is raised and the user input is collected to update the posterior distribution again. As an example, if the user is looking for a laptop, a question may look like
\[\text{\textit{What manufacturer do you like?}} \]
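The attribute-selection procedure can be sketched as below: using the factorization of the joint distribution, compute the expected information gain $I(\text{Ad}; \text{Answer} \mid \text{Input}_1^n)$ of each attribute and ask about the best one. The posterior and attribute tables are deliberately simple toy values so the outcome is easy to verify by hand.

```python
import math

# Sketch of information-gain-based question selection. attr_table[ad][value]
# approximates Pr(value | ad) (estimated by counting in the real system), and
# posterior approximates Pr(ad | inputs) from DeepProbe's posterior update.

def entropy(probs):
    return -sum(p * math.log2(p) for p in probs if p > 0)

def info_gain(posterior, attr_table):
    """I(Ad; Answer | inputs) = H(Ad | inputs) - E_answer[H(Ad | inputs, answer)],
    using the Markov chain Input - Ad - Attribute."""
    values = {v for table in attr_table.values() for v in table}
    # Pr(v | inputs) = sum_ad Pr(v | ad) Pr(ad | inputs)
    p_v = {v: sum(attr_table[ad].get(v, 0.0) * p for ad, p in posterior.items())
           for v in values}
    h_prior = entropy(posterior.values())
    h_post = 0.0
    for v, pv in p_v.items():
        if pv == 0:
            continue
        cond = [attr_table[ad].get(v, 0.0) * p / pv for ad, p in posterior.items()]
        h_post += pv * entropy(cond)
    return h_prior - h_post

posterior = {"laptop_A": 0.5, "laptop_B": 0.5}
size  = {"laptop_A": {"13in": 1.0}, "laptop_B": {"13in": 1.0}}  # uninformative
maker = {"laptop_A": {"X": 1.0},    "laptop_B": {"Y": 1.0}}     # fully informative
best = max({"size": size, "manufacturer": maker}.items(),
           key=lambda kv: info_gain(posterior, kv[1]))[0]
print(best)   # asking about the manufacturer resolves the ad completely
```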
\subsubsection{Implementation and Qualitative Feedbacks}
We built the bot using Microsoft Bot Framework \cite{BotFramework}, which is a chatbot development tool. It supports bot conversation over various platforms, including text messages, Skype, Slack, Messenger, etc. Figure \ref{fig:screenshot} is a screenshot of the chatbot with Skype as the platform.
By demonstrating the prototype to a few colleagues, we received encouraging feedback. Most were surprised by the chatbot's ability to recommend products when asked related questions. The ``how to connect tablet to tv'' case was also a big win: an HDMI ad was recommended, and clicking it brought up a web page titled ``16.4ft Ultra-thin Micro HDMI D to A Long Cable - Connect Tablet / Smart Phone / Mobile / Laptop / Camera to HD TV''. Interested users can take this opportunity to learn more about a Micro HDMI cable (it can connect not only tablets but also other devices to a TV) and purchase it. Nonetheless, several colleagues pointed out that this chatbot should not stand alone: in addition to recommending products, it should integrate with other services to provide, for example, tutorial videos. Last but not least, a colleague asked whether the chatbot can serve verticals other than products. After we explained how the system works, the colleague understood that by training on data from a different vertical and combining with the corresponding search engine, we can generalize the chatbot wherever needed.
\begin{figure}
\centering
\includegraphics[width=0.65\linewidth]{bot_LI}
\caption{Screenshot of the Chatbot}
\label{fig:screenshot}
\end{figure}
\section{Conclusion and Future Work}
In this paper, we introduced DeepProbe\xspace, a sequence-to-sequence model based framework for query understanding, ad recommendation and user interaction. In query rewriting, it significantly increases both coverage and quality. For relevance scoring, it surpasses existing systems on AUC, a key metric. It also demonstrated great potential for more efficient user interaction and chatbot design, where questions to users can be formulated rigorously based on the principle of maximizing expected information gain. As ongoing work, we would like to continue experimenting with the chatbot, in particular with quantitative experiments measuring its efficiency, i.e., the number of rounds of interaction needed for a user to acquire the information he or she needs.
\section{Implementation and Experiments}\label{implementation}
In this chapter, we describe our implementation of the ad recommendation chatbot, and discuss some of the key adaptations of the main framework needed to make it compatible with the current system.\\
As discussed in Chapter \ref{Bayesian}, in order to calculate the conditional distribution of ads given a query $Q$, the likelihood $P(Q|Ad)$ must be computed for every $Ad\in\mathcal A$. This issue is present in all other proposed information retrieval systems based on statistical language models, demanding $O(|\mathcal A|)$ complexity. When $|\mathcal A|$ is large, this poses challenges both on the system side, considering the number of queries, and for quality of service, as the chatbot is expected to be responsive and should not wait too long before returning the next action.
\section{Introduction}
Recent years have witnessed a boom in deep learning, which has revolutionized areas including computer vision, speech recognition and natural language processing. One widely used deep learning model is the sequence-to-sequence (seq2seq) model, which has demonstrated its power in machine translation \cite{sutskever2014sequence}, achieving higher BLEU scores than conventional methods such as phrase-based statistical machine translation models.
Various bells and whistles have been developed that further boost the performance of seq2seq models; one prominent example is the attention mechanism. In \cite{luong-pham-manning:2015:EMNLP}, the authors proposed a mechanism that augments the top hidden vector on the decoder side with a weighted average of the encoder hidden vectors. The weights can be calculated through cosine similarity or a generalized matrix inner product, where the weight matrix is part of the parameters to be learned. By adding attention to the deep seq2seq model, the authors were able to better align inputs and outputs, and subsequently achieve an additional 5.0 improvement in BLEU score.
Beyond natural language translation, seq2seq models can be trained on virtually any kind of paired sequence data. The authors of \cite{DBLP:journals/corr/VinyalsL15} built a model on an IT helpdesk question-answer conversation log so that it reads new user questions and responds with ``machine-translated'' answers. The authors of \cite{45189} used an email correspondence log to build a model that suggests email reply candidates for users to choose from in a mobile environment. Reflecting that a question-answering system needs a knowledge base to search answers from, we propose a staged approach that leverages an existing recommendation system as is, serving as the knowledge base to search for the right answer.
We apply the seq2seq model to understand user questions, using it to rewrite each question into a standard query form that an ordinary recommendation system can understand. The rewrite is submitted to the recommendation system to retrieve a set of candidate answers. We show that with the attention mechanism the model produces rewrites of better quality as measured by BLEU score. We also show that these rewrites retrieve ads from a commercial search engine with better human-labeled quality, demonstrating significant commercial value.
Another powerful aspect of the seq2seq model besides its generative capability, namely its statistical property as a likelihood estimator, has not been fully investigated by previous work. We built a seq2seq likelihood estimator into DeepProbe\xspace, which serves as the central model for an information-directed evaluation and interaction framework. When used as an evaluation tool, a posterior probability derived from the seq2seq likelihood estimator is calculated and serves as a relevance criterion. We can use it to refine candidates returned by a recommendation system. Comparing against existing baselines such as CDSSM \cite{shen2014learning}, we find significant performance improvement as evaluated by AUC on a manually labeled dataset. When used as an interactive tool, the seq2seq model steers an agent through the user interaction process. An agent such as a chatbot tries to identify the intent of a user in an interactive session. The agent, using the seq2seq estimator, calculates the conditional entropy through a naive Bayes procedure, which is updated every time new information arrives, i.e., a new user input.
The agent iteratively uses the information to make a decision to either make a recommendation or ask further questions to gather more information.
We build a chatbot prototype using this framework. The prototype is built on a commercial search engine which recommends product ads. The chatbot will recommend an ad if a user asks questions with product intent. When the user intention is not clear, it actively asks the user by formulating questions around product attributes that maximize the expected information gain.
The contribution of the paper is summarized as follows. We introduce DeepProbe\xspace, an information-directed interaction framework built upon a seq2seq model. We propose and implement a practical way to answer user questions in a staged approach: (1) we apply seq2seq model to understand and rewrite user questions into one that an ordinary recommendation system can understand and return candidates, (2) we use seq2seq model to score and pick better candidates, and finally (3) we use seq2seq to derive confidence measure and probe users for clarification if necessary.
\section{Models}\label{model}
\subsection{Deep Multi-layer Seq2Seq Attention Model}\label{sec:seq2seq}
We use a seq2seq neural network enhanced with attention mechanism, which is illustrated in Figure \ref{fig:seq2seq}.
A seq2seq model is comprised of an encoder and a decoder, each consisting of several vertically stacked layers. Below we give a detailed explanation.
\subsubsection{Embedding Layer}
The embedding layer takes a word and converts it to its vector representation. The parameter required for this layer is a matrix $W_{emb}\in \mathbb R^{d_{emb}\times |\mathcal V|}$. Specifically, when a word with index $i$ is given to the embedding layer, it produces $W_{\cdot, i}$, the $i$-th column of the matrix, which is a $d_{emb}$-dimensional vector. We learn separate embedding layers and parameters for the encoder and decoder, i.e., two $W_{emb}$ matrices.
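As a minimal sketch, the embedding lookup is just a column selection in $W_{emb}$. The dimensions below are toy values, not the paper's $d_e=150$:

```python
# Embedding layer sketch: the word with index i maps to the i-th column of
# W_emb, a d_emb x |V| matrix. Values here are arbitrary toy numbers.

d_emb, vocab_size = 4, 6
W_emb = [[0.01 * (r * vocab_size + c) for c in range(vocab_size)]
         for r in range(d_emb)]        # d_emb rows, vocab_size columns

def embed(word_index):
    # returns W_emb[:, word_index], a d_emb-dimensional vector
    return [row[word_index] for row in W_emb]

print(embed(2))   # a list of d_emb = 4 numbers
```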
\subsubsection{Variable-depth LSTM Recurrent Layers}
The LSTM recurrent layer with depth $l$ consists of $l$ vertically stacked LSTM blocks. Each LSTM block takes three inputs: $e_t$, $c_{t-1}$ and $h_{t-1}$, where $e_t$ is the input from below, $c_{t-1}$ and $h_{t-1}$ are inputs from the previous step. Its output, $h_t$, is computed in the following way:
\begin{align*}
i_t&=\sigma (W_{ei}e_t+W_{hi}h_{t-1}+b_i)\\
f_t&=\sigma (W_{ef}e_t+W_{hf}h_{t-1}+b_f)\\
c_t&=f_t \cdot c_{t-1}+i_t \cdot \tanh(W_{ec}e_t+W_{hc}h_{t-1}+b_c)\\
o_t&=\sigma(W_{eo}e_t+W_{ho}h_{t-1}+b_o)\\
h_t&=o_t\cdot\tanh(c_t)
\end{align*}
where $\cdot$ denotes the element-wise product between vectors. LSTM is an enhanced recurrent neural network (RNN) that addresses the short-term memory issue of a vanilla RNN by maintaining an additional cell vector $c_t$ and introducing an input gate $i_t$, forget gate $f_t$, and output gate $o_t$. A detailed discussion of the advantages of LSTM can be found in \cite{lstm}; it is omitted here due to space limits.
For the lowest LSTM layer, $e_t$ is the output of embedding layer with dimension $d_{emb}$, so $W_{e*} \in R^{d_{h} \times d_{emb}}$, $W_{h*} \in R^{d_{h} \times d_{h}}$, and $b_* \in R^{d_{h}}$ are the parameters to be learned. For the upper LSTM layers, $W_{e*}, W_{h*} \in R^{d_{h} \times d_{h}}$, and $b_* \in R^{d_{h}}$ are the parameters to be learned.
$\sigma(\cdot)$ here denotes the sigmoid, a nonlinear activation function. In the encoder, each LSTM block is in fact bi-directional (BLSTM): it outputs concatenated hidden vectors from the forward and backward directions, and the final vector fed to the decoder is the concatenation of the last vectors of both directions: $[\overrightarrow{h}_m;\overleftarrow{h}_{-1}]$.
In the decoder, each LSTM block has only the forward direction, so readers should interpret $d_h$ accordingly, e.g., $d_h$ in the decoder is twice the size of that in the encoder. In the encoder, each LSTM layer other than the lowest reduces the input size by half, so that after concatenation the final output size of each BLSTM layer stays the same.
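One forward step of the LSTM update equations above can be sketched directly. The weights here are random toy parameters of illustrative sizes, not learned values:

```python
import numpy as np

# One LSTM step, following the equations in the text:
#   i_t = sigma(W_ei e_t + W_hi h_{t-1} + b_i)          (input gate)
#   f_t = sigma(W_ef e_t + W_hf h_{t-1} + b_f)          (forget gate)
#   c_t = f_t * c_{t-1} + i_t * tanh(W_ec e_t + W_hc h_{t-1} + b_c)
#   o_t = sigma(W_eo e_t + W_ho h_{t-1} + b_o)          (output gate)
#   h_t = o_t * tanh(c_t)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def lstm_step(e_t, h_prev, c_prev, params):
    W_ei, W_hi, b_i, W_ef, W_hf, b_f, W_ec, W_hc, b_c, W_eo, W_ho, b_o = params
    i_t = sigmoid(W_ei @ e_t + W_hi @ h_prev + b_i)
    f_t = sigmoid(W_ef @ e_t + W_hf @ h_prev + b_f)
    c_t = f_t * c_prev + i_t * np.tanh(W_ec @ e_t + W_hc @ h_prev + b_c)
    o_t = sigmoid(W_eo @ e_t + W_ho @ h_prev + b_o)
    h_t = o_t * np.tanh(c_t)
    return h_t, c_t

d_emb, d_h = 3, 2                      # toy sizes
rng = np.random.default_rng(0)
params = [rng.standard_normal(s)       # (W_e*, W_h*, b_*) for the four gates
          for s in [(d_h, d_emb), (d_h, d_h), d_h] * 4]
h, c = np.zeros(d_h), np.zeros(d_h)
h, c = lstm_step(rng.standard_normal(d_emb), h, c, params)
print(h.shape, c.shape)                # both are d_h-dimensional vectors
```

Since $h_t = o_t \cdot \tanh(c_t)$ with $o_t \in (0,1)$, every component of $h_t$ is bounded in $(-1, 1)$.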
\subsubsection{Attention Layer}
For every top hidden vector of the decoder, we augment it with an attention vector, $g_t$, which is obtained by combining the top hidden vectors from the encoder. The attention mechanism will be discussed in section \ref{sec:atten}. After concatenating the attention vector $g_t$ with the output vector of the top LSTM layer $h_t$, we apply a fully connected layer to reduce the dimension back to the same size as the input hidden vector:
$
\hat{h}_t = Relu(W_c[g_t;h_t]+b_c)
$,
where $Relu$ is the rectified linear unit, $\max(0, \cdot)$. Here the parameters are $W_c \in R^{d_{h} \times 2d_{h}}$ and $b_c \in R^{d_{h}}$.
The output $\hat{h}_t$ will be passed to the next layer.
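The combination step can be sketched as follows. The dot-product scoring used here for the attention weights is only one common choice, assumed for illustration; the paper's exact scoring function is described in its attention section, and all weights below are random toy values:

```python
import numpy as np

# Sketch of the attention combination: g_t is a weighted average of the
# encoder's top hidden vectors, and hat_h_t = Relu(W_c [g_t; h_t] + b_c).

def softmax(x):
    z = np.exp(x - x.max())            # shift for numerical stability
    return z / z.sum()

def attend(enc_hs, h_t, W_c, b_c):
    scores = np.array([h @ h_t for h in enc_hs])   # one score per source step
    alpha = softmax(scores)                        # attention weights
    g_t = sum(a * h for a, h in zip(alpha, enc_hs))  # context vector
    return np.maximum(0.0, W_c @ np.concatenate([g_t, h_t]) + b_c)  # Relu

d_h = 4
rng = np.random.default_rng(1)
enc_hs = [rng.standard_normal(d_h) for _ in range(5)]  # encoder top vectors
h_t = rng.standard_normal(d_h)                         # decoder top vector
W_c = rng.standard_normal((d_h, 2 * d_h))              # reduces 2*d_h -> d_h
b_c = rng.standard_normal(d_h)
print(attend(enc_hs, h_t, W_c, b_c).shape)             # back to d_h
```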
\subsubsection{Projection Layer}
The projection layer takes the combined hidden and attention vector as input, and outputs a vector of dimension $|\mathcal V|$. Its parameters include a weight matrix $W_p\in\mathbb R^{|\mathcal V| \times d_h}$ and a bias vector $b_p\in \mathbb R^{|\mathcal V|}$. The output at step $t$ is computed as
$v_t=\text{softmax}(W_p\hat{h}_t+b_p)$.
Note that $v_t$ is a non-negative vector that sums to 1, hence it can be viewed as a distribution over the vocabulary $\mathcal V$. The likelihood of seeing a specific word with index $w_t$ is the $w_t$-th element of $v_t$, which is abbreviated as
\begin{equation}
v_t(w_t)
\label{eq:word_likelihood}
\end{equation}
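The projection step can be sketched as below, with toy dimensions in place of the real $d_h$ and $|\mathcal V|$ and random parameters in place of learned ones:

```python
import numpy as np

# Projection layer sketch: v_t = softmax(W_p hat_h_t + b_p) is a distribution
# over the vocabulary; v_t[w_t] is the likelihood of the word with index w_t.

def softmax(x):
    z = np.exp(x - x.max())
    return z / z.sum()

d_h, vocab_size = 3, 7
rng = np.random.default_rng(2)
W_p = rng.standard_normal((vocab_size, d_h))
b_p = rng.standard_normal(vocab_size)
h_hat = rng.standard_normal(d_h)       # combined hidden/attention vector

v_t = softmax(W_p @ h_hat + b_p)
w_t = 5                                # index of some observed word
print(v_t.sum(), v_t[w_t])             # sums to 1; v_t(w_t) in (0, 1)
```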
\subsubsection{Loss Function}
We perform end-to-end training to learn all the aforementioned parameters together.
For each pair of a source sequence $Src$ and a target sequence $Tgt=w_{t_1}\ldots w_{t_n}$ in the training set, we first encode $Src$ through the encoder; the loss of the pair is then the sum of per-word cross-entropy losses between $v_i$ and the label, a one-hot indicator vector of each word $w_{t_i}$.
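Since the label is one-hot, the cross-entropy at step $i$ reduces to $-\log v_i(w_{t_i})$, so the per-pair loss can be sketched as below. The distributions here are made-up toy values:

```python
import math

# Loss sketch: sum over target positions of cross-entropy between the
# predicted distribution v_i and the one-hot label of word w_{t_i}, which
# for a one-hot label reduces to -log v_i(w_{t_i}).

def pair_loss(v_seq, target_ids):
    return -sum(math.log(v[t]) for v, t in zip(v_seq, target_ids))

v_seq = [[0.7, 0.2, 0.1], [0.1, 0.8, 0.1]]   # v_1, v_2 over a 3-word vocabulary
target = [0, 1]                              # w_{t_1} = 0, w_{t_2} = 1
print(pair_loss(v_seq, target))              # -log 0.7 - log 0.8
```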
\begin{figure*}
\centering
\resizebox{0.8\linewidth}{!}{
\begin{tikzpicture}
\node (x_t) {$\text{word}_t$};
\node[draw, align=center, inner sep=1] (embed) [above= 5mm of x_t] {Embedding\\ Layer};
\node[draw, align=center, inner sep=1] (unit_t1) [above= 5mm of embed] {LSTM \\ Layer 1};
\node[draw, align=center, inner sep=1] (unit_t2) [above= 5mm of unit_t1] {LSTM \\ Layer 2};
\node (dots) [above= 5mm of unit_t2]{$\cdots$};
\node[draw, align=center, inner sep=1] (unit_tl) [above= 5mm of dots] {LSTM \\ Layer $l$};
\node[align=center, inner sep=1] (unit_tprev1) [left= 10mm of unit_t1.north] {$\vec h_{t-1}^{(1)}$};
\node[align=center, inner sep=1] (unit_tnext1) [right= 10mm of unit_t1.north] {$\vec h_{t}^{(1)}$};
\node[align=center, inner sep=1] (unit_tprev2) [left= 10mm of unit_t2.north] {$\vec h_{t-1}^{(2)}$};
\node[align=center, inner sep=1] (unit_tnext2) [right= 10mm of unit_t2.north] {$\vec h_{t}^{(2)}$};
\node[align=center, inner sep=1] (unit_tprevl) [left= 10mm of unit_tl.north] {$\vec h_{t-1}^{(l)}$};
\node[align=center, inner sep=1] (unit_tnextl) [right= 10mm of unit_tl.north] {$\vec h_{t}^{(l)}$};
\node (x_t-1) [left= 29mm of x_t] {$\text{word}_{0}$};
\node[draw, align=center, inner sep=1] (embed-1) [above= 5mm of x_t-1] {Embedding\\ Layer};
\node[draw, align=center, inner sep=1] (unit_t1-1) [above= 5mm of embed-1] {LSTM \\ Layer 1};
\node[draw, align=center, inner sep=1] (unit_t2-1) [above= 5mm of unit_t1-1] {LSTM \\ Layer 2};
\node (dots-1) [above= 5mm of unit_t2-1]{$\cdots$};
\node[draw, align=center, inner sep=1] (unit_tl-1) [above= 5mm of dots-1] {LSTM \\ Layer $l$};
\path[->] (x_t-1) edge (embed-1);
\path[->] (embed-1) edge (unit_t1-1);
\path[->] (unit_t1-1) edge (unit_t2-1);
\path[->] (unit_t2-1) edge (dots-1);
\path[->] (dots-1) edge (unit_tl-1);
\node[align=center, inner sep=1] (bunit_tprev1) [left= 10mm of unit_t1.south] {$\cev h_{t-1}^{(1)}$};
\node[align=center, inner sep=1] (bunit_tnext1) [right= 10mm of unit_t1.south] {$\cev h_{t}^{(1)}$};
\node[align=center, inner sep=1] (bunit_tprev2) [left= 10mm of unit_t2.south] {$\cev h_{t-1}^{(2)}$};
\node[align=center, inner sep=1] (bunit_tnext2) [right= 10mm of unit_t2.south] {$\cev h_{t}^{(2)}$};
\node[align=center, inner sep=1] (bunit_tprevl) [left= 10mm of unit_tl.south] {$\cev h_{t-1}^{(l)}$};
\node[align=center, inner sep=1] (bunit_tnextl) [right= 10mm of unit_tl.south] {$\cev h_{t}^{(l)}$};
\node (ldots1) [left= 5mm of unit_tprev1]{$\cdots$};
\node (ldots2) [left= 5mm of unit_tprev2]{$\cdots$};
\node (ldotsl) [left= 5mm of unit_tprevl]{$\cdots$};
\node (rdots1) [left= 5mm of bunit_tprev1]{$\cdots$};
\node (rdots2) [left= 5mm of bunit_tprev2]{$\cdots$};
\node (rdotsl) [left= 5mm of bunit_tprevl]{$\cdots$};
\node (x_t+1) [right= 15mm of x_t] {$\text{word}_{t+1}$};
\node[draw, align=center, inner sep=1] (embed+1) [above= 5mm of x_t+1] {Embedding\\ Layer};
\node[draw, align=center, inner sep=1] (unit_t1+1) [above= 5mm of embed+1] {LSTM \\ Layer 1};
\node[draw, align=center, inner sep=1] (unit_t2+1) [above= 5mm of unit_t1+1] {LSTM \\ Layer 2};
\node (dots+1) [above= 5mm of unit_t2+1]{$\cdots$};
\node[draw, align=center, inner sep=1] (unit_tl+1) [above= 5mm of dots+1] {LSTM \\ Layer $l$};
\path[->] (x_t) edge (embed);
\path[->] (embed) edge (unit_t1);
\path[->] (unit_t1) edge (unit_t2);
\path[->] (unit_t2) edge (dots);
\path[->] (unit_tprev1) edge (unit_t1.148);
\path[->] (unit_t1.32) edge (unit_tnext1);
\path[->] (unit_tprev2) edge (unit_t2.148);
\path[->] (unit_t2.32) edge (unit_tnext2);
\path[->] (dots) edge (unit_tl);
\path[->] (unit_tprevl) edge (unit_tl.148);
\path[->] (unit_tl.32) edge (unit_tnextl);
\path[->] (x_t+1) edge (embed+1);
\path[->] (embed+1) edge (unit_t1+1);
\path[->] (unit_t1+1) edge (unit_t2+1);
\path[->] (unit_t2+1) edge (dots+1);
\path[->] (unit_tnext1) edge (unit_t1+1.148);
\path[->] (unit_tnext2) edge (unit_t2+1.148);
\path[->] (unit_tnextl) edge (unit_tl+1.148);
\path[->] (dots+1) edge (unit_tl+1);
\path[<-] (bunit_tprev1) edge (unit_t1.212);
\path[<-] (bunit_tprev2) edge (unit_t2.212);
\path[<-] (bunit_tprevl) edge (unit_tl.212);
\path[<-] (bunit_tnext1) edge (unit_t1+1.212);
\path[<-] (bunit_tnext2) edge (unit_t2+1.212);
\path[<-] (bunit_tnextl) edge (unit_tl+1.212);
\path[->] (bunit_tnext1) edge (unit_t1.328);
\path[->] (bunit_tnext2) edge (unit_t2.328);
\path[->] (bunit_tnextl) edge (unit_tl.328);
\path[->] (ldots1) edge (unit_tprev1);
\path[->] (ldots2) edge (unit_tprev2);
\path[->] (ldotsl) edge (unit_tprevl);
\path[<-] (rdots1) edge (bunit_tprev1);
\path[<-] (rdots2) edge (bunit_tprev2);
\path[<-] (rdotsl) edge (bunit_tprevl);
\path[<-] (ldots1) edge (unit_t1-1.32);
\path[<-] (ldots2) edge (unit_t2-1.32);
\path[<-] (ldotsl) edge (unit_tl-1.32);
\path[->] (rdots1) edge (unit_t1-1.328);
\path[->] (rdots2) edge (unit_t2-1.328);
\path[->] (rdotsl) edge (unit_tl-1.328);
\node[align=center, inner sep=1] (unit_t1+2) [right= 10mm of unit_t1+1.north] {$\vec h_{t+1}^{(1)}$};
\node[align=center, inner sep=1] (unit_t2+2) [right= 10mm of unit_t2+1.north] {$\vec h_{t+1}^{(2)}$};
\node[align=center, inner sep=1] (unit_tl+2) [right= 10mm of unit_tl+1.north] {$\vec h_{t+1}^{(l)}$};
\node[align=center, inner sep=1] (bunit_t1+2) [right= 10mm of unit_t1+1.south] {$\cev h_{t+1}^{(1)}$};
\node[align=center, inner sep=1] (bunit_t2+2) [right= 10mm of unit_t2+1.south] {$\cev h_{t+1}^{(2)}$};
\node[align=center, inner sep=1] (bunit_tl+2) [right= 10mm of unit_tl+1.south] {$\cev h_{t+1}^{(l)}$};
\path[->] (unit_t1+1.32) edge (unit_t1+2);
\path[->] (unit_t2+1.32) edge (unit_t2+2);
\path[->] (unit_tl+1.32) edge (unit_tl+2);
\path[<-] (unit_t1+1.328) edge (bunit_t1+2);
\path[<-] (unit_t2+1.328) edge (bunit_t2+2);
\path[<-] (unit_tl+1.328) edge (bunit_tl+2);
\node[align=center, inner sep=1] (dots1+2) [right= 5mm of unit_t1+2] {$\cdots$};
\node[align=center, inner sep=1] (dots2+2) [right= 5mm of unit_t2+2] {$\cdots$};
\node[align=center, inner sep=1] (dotsl+2) [right= 5mm of unit_tl+2] {$\cdots$};
\node[align=center, inner sep=1] (bdots1+2) [right= 5mm of bunit_t1+2] {$\cdots$};
\node[align=center, inner sep=1] (bdots2+2) [right= 5mm of bunit_t2+2] {$\cdots$};
\node[align=center, inner sep=1] (bdotsl+2) [right= 5mm of bunit_tl+2] {$\cdots$};
\path[->] (unit_t1+2) edge (dots1+2);
\path[->] (unit_t2+2) edge (dots2+2);
\path[->] (unit_tl+2) edge (dotsl+2);
\path[<-] (bunit_t1+2) edge (bdots1+2);
\path[<-] (bunit_t2+2) edge (bdots2+2);
\path[<-] (bunit_tl+2) edge (bdotsl+2);
\node (y_t)[right=45mm of x_t+1]{$\langle\text{EOS}\rangle$};
\node[draw, align=center, inner sep=1] (dembed) [above= 5mm of y_t] {Embedding\\ Layer};
\node[draw, align=center, inner sep=1] (dunit_t1) [above= 5mm of dembed] {LSTM \\ Layer 1};
\node[draw, align=center, inner sep=1] (dunit_t2) [above= 5mm of dunit_t1] {LSTM \\ Layer 2};
\node (ddots) [above= 5mm of dunit_t2]{$\cdots$};
\node[draw, align=center, inner sep=1] (dunit_tl) [above= 5mm of ddots] {LSTM \\ Layer $l$};
\node[draw, align=center, inner sep=1] (datten) [above= 5mm of dunit_tl] {Attention \\Layer};
\node[draw, align=center, inner sep=1] (dproj) [above= 5mm of datten] {Projection \\Layer};
\node[align=center, inner sep=1] (dunit_tprev1) [left= 5mm of dunit_t1] {${
\begin{bmatrix}
\vec h_{m}^{(1)}\\
\cev h_{-1}^{(1)}
\end{bmatrix}}$};
\node[align=center, inner sep=1] (dunit_tnext1) [right= 5mm of dunit_t1] {$h_{1}^{(1)}$};
\node[align=center, inner sep=1] (dunit_tprev2) [left= 5mm of dunit_t2] {${
\begin{bmatrix}
\vec h_{m}^{(2)}\\
\cev h_{-1}^{(2)}
\end{bmatrix}}$};
\node[align=center, inner sep=1] (dunit_tnext2) [right= 5mm of dunit_t2] {$h_{1}^{(2)}$};
\node[align=center, inner sep=1] (dunit_tprevl) [left= 5mm of dunit_tl] {${
\begin{bmatrix}
\vec h_{m}^{(l)}\\
\cev h_{-1}^{(l)}
\end{bmatrix}}$};
\node[align=center, inner sep=1] (dunit_tnextl) [right= 5mm of dunit_tl] {$h_{1}^{(l)}$};
\node (y_t+1) [right= 15mm of y_t] {$\text{word}_{1}$};
\node[draw, align=center, inner sep=1] (dembed+1) [above= 5mm of y_t+1] {Embedding\\ Layer};
\node[draw, align=center, inner sep=1] (dunit_t1+1) [above= 5mm of dembed+1] {LSTM \\ Layer 1};
\node[draw, align=center, inner sep=1] (dunit_t2+1) [above= 5mm of dunit_t1+1] {LSTM \\ Layer 2};
\node (ddots+1) [above= 5mm of dunit_t2+1]{$\cdots$};
\node[draw, align=center, inner sep=1] (dunit_tl+1) [above= 5mm of ddots+1] {LSTM \\ Layer $l$};
\node[draw, align=center, inner sep=1] (datten+1) [above= 5mm of dunit_tl+1] {Attention \\Layer};
\node[draw, align=center, inner sep=1] (dproj+1) [above= 5mm of datten+1] {Projection \\Layer};
\iffalse
\path[->] (dots1+2) edge (dunit_tprev1);
\path[->] (dots2+2) edge (dunit_tprev2);
\path[->] (dotsl+2) edge (dunit_tprevl);
\fi
\path[->] (y_t) edge (dembed);
\path[->] (dembed) edge (dunit_t1);
\path[->] (dunit_t1) edge (dunit_t2);
\path[->] (dunit_t2) edge (ddots);
\path[->] (dunit_tprev1) edge (dunit_t1);
\path[->] (dunit_t1) edge (dunit_tnext1);
\path[->] (dunit_tprev2) edge (dunit_t2);
\path[->] (dunit_t2) edge (dunit_tnext2);
\path[->] (ddots) edge (dunit_tl);
\path[->] (dunit_tprevl) edge (dunit_tl);
\path[->] (dunit_tl) edge (dunit_tnextl);
\path[->] (dunit_tl) edge (datten);
\path[->] (datten) edge (dproj);
\path[->] (y_t+1) edge (dembed+1);
\path[->] (dembed+1) edge (dunit_t1+1);
\path[->] (dunit_t1+1) edge (dunit_t2+1);
\path[->] (dunit_t2+1) edge (ddots+1);
\path[->] (dunit_tnext1) edge (dunit_t1+1);
\path[->] (dunit_tnext2) edge (dunit_t2+1);
\path[->] (dunit_tnextl) edge (dunit_tl+1);
\path[->] (ddots+1) edge (dunit_tl+1);
\path[->] (dunit_tl+1) edge (datten+1);
\path[->] (datten+1) edge (dproj+1);
\node[align=center, inner sep=1] (dunit_t1+2) [right= 5mm of dunit_t1+1] {$h_{2}^{(1)}$};
\node[align=center, inner sep=1] (dunit_t2+2) [right= 5mm of dunit_t2+1] {$h_{2}^{(2)}$};
\node[align=center, inner sep=1] (dunit_tl+2) [right= 5mm of dunit_tl+1] {$h_{2}^{(l)}$};
\node[align=center, inner sep=1] (dsoftmax) [above= 5mm of dproj] {$v_1$};
\node[align=center, inner sep=1] (dsoftmax+1) [above= 5mm of dproj+1] {$v_2$};
\path[->] (dunit_t1+1) edge (dunit_t1+2);
\path[->] (dunit_t2+1) edge (dunit_t2+2);
\path[->] (dunit_tl+1) edge (dunit_tl+2);
\path[->] (unit_tl+1.north)[bend left=15] edge (datten);
\path[->] (unit_tl.north)[bend left=10] edge (datten);
\path[->] (unit_tl+1.north)[bend left=25] edge (datten+1);
\path[->] (unit_tl.north)[bend left=20] edge (datten+1);
\path[->] (unit_tl-1.north)[bend left=10] edge (datten);
\path[->] (unit_tl-1.north)[bend left=20] edge (datten+1);
\path[->] (dproj) edge (dsoftmax);
\path[->] (dproj+1) edge (dsoftmax+1);
\node (rdots1) [right= 5mm of dunit_t1+2]{$\cdots$};
\node (rdots2) [right= 5mm of dunit_t2+2]{$\cdots$};
\node (rdotsl) [right= 5mm of dunit_tl+2]{$\cdots$};
\path[->] ( dunit_t1+2) edge (rdots1);
\path[->] ( dunit_t2+2) edge (rdots2);
\path[->] ( dunit_tl+2) edge (rdotsl);
\draw [decorate,decoration={brace,mirror,amplitude=10pt},xshift=-4pt,yshift=0pt] (-4,-0.2) -- (5,-0.2) node [black,midway,yshift=-15pt] {Encoder};
\draw [decorate,decoration={brace,mirror,amplitude=10pt},xshift=-4pt,yshift=0pt] (7,-0.2) -- (14,-0.2) node [black,midway,yshift=-15pt] {Decoder};
\end{tikzpicture}
}
\caption{Bi-directional Multilayer LSTM Encoder + LSTM Attention Decoder}
\label{fig:seq2seq}
\end{figure*}
\subsection{Attention Mechanisms}\label{sec:atten}
The attention mechanism is a powerful add-on to recurrent neural networks, intended to combat the long-term dependency issue. Even LSTM and GRU networks, which are designed to capture long-term dependencies, are prone to missing information that occurred a long time ago.
The intuition behind the attention mechanism is that, at each step of the sequence decoding process, we force the network to look back at the source sequence, pick up the most relevant hidden vectors, and augment the current hidden vector with this extra piece of information.
To introduce the attention mechanism, we define a few notations for convenience. Let the top hidden vectors of the source sequence be $s_1,\cdots,s_m$, and the top hidden vector of the current target word be $h_i$. We summarize four variants that we experiment with \cite{luong-pham-manning:2015:EMNLP,NIPS2013_5028}.
These methods are different ways of averaging the source hidden vectors $s_1,\cdots,s_m$, where important words are supposed to receive larger weights; hence the name ``attention''. For each of the four mechanisms, the unnormalized weights $\tilde{a}_{i\cdot}$ for target word $i$ are first calculated as:
\begin{enumerate}
\item {\it (dot)} $\tilde{a}_{ij}=s_j^T h_i$
\item {\it (general)} $\tilde{a}_{ij}={s_j}^T W_g h_i$
\item {\it (concat)} $\tilde{a}_{ij}=W_{cc} [s_j; h_i]$
\item
{\it (tensor)} $\tilde{a}_{ij} = U({s_j}^T W h_i+ V [s_j; h_i] + b)$
\end{enumerate}
Then, with $a_{i\cdot} = \text{softmax}(\tilde{a}_{i\cdot})$, the attention vector is obtained as
$g_i=\sum_{j=1}^m a_{ij} s_j$,
which is combined with $h_i$ and fed into the projection layer. Accordingly, we learn the parameters $W_g\in\mathbb R^{d_h \times d_h}$, $W_{cc}\in\mathbb R^{1\times 2d_h}$, tensor $W\in\mathbb R^{d_h \times k \times d_h}$, $V\in\mathbb R^{k \times 2d_h}$, $b\in\mathbb R^{k}$,
and $U\in\mathbb R^{1 \times k}$.
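As a concrete illustration, the {\it dot} variant above can be sketched in a few lines of plain Python (a minimal sketch with toy list-based vectors; the other variants differ only in how the scores $\tilde{a}_{ij}$ are computed):

```python
import math

def softmax(xs):
    # numerically stable softmax over a list of scores
    m = max(xs)
    es = [math.exp(x - m) for x in xs]
    z = sum(es)
    return [e / z for e in es]

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

def attend(S, h):
    """'dot' attention: scores a~_ij = s_j^T h_i, weights a_i. = softmax,
    context g_i = sum_j a_ij s_j.  S is the list of source top hidden
    vectors s_1..s_m, h the top hidden vector of the current target word."""
    weights = softmax([dot(s, h) for s in S])
    return [sum(w * s[k] for w, s in zip(weights, S)) for k in range(len(h))]
```

The softmax normalization is what makes the weights interpretable as "attention" over source positions.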
The four attention mechanisms serve different purposes. While {\it dot} and {\it general} aim at discovering similarities between source and target, the last one focuses more on non-linear interactions between words, as pointed out in the recursive neural network literature \cite{socher2013recursive}. More comparisons and analysis are discussed in Section \ref{application}.
\subsection{Likelihood Estimation}\label{likelihood}
The above seq2seq model is capable of estimating the likelihood of a target sequence, $Tgt=w_{t_1}\cdots w_{t_n}$, given a source sequence $Src$. First, by the chain rule of conditional probability, we have
\[Pr(Tgt|Src)=Pr(w_{t_1}|Src)\times\cdots\times Pr(w_{t_n}|w_{t_{n-1}},\ldots,w_{t_1},Src)\]
For the $i$-th word in the target sequence, $w_{t_i}$, the conditional distribution $Pr(w_{t_i}|w_{t_{i-1}},\ldots,w_{t_1},Src)$ is estimated by the seq2seq model as in Equation (\ref{eq:word_likelihood}),
\[\widehat{Pr}(w_{t_i}|w_{t_{i-1}},\ldots,w_{t_1},Src)=v_{i}(w_{t_i})\]
Combining the chain rule with the seq2seq estimator, we obtain the estimated sequence likelihood,
\begin{equation}
\widehat{Pr}(Tgt|Src)=\prod_{i=1}^n v_{i}(w_{t_i})
\label{eq:likelihood}
\end{equation}
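In practice this product is computed as a sum of per-step log-probabilities to avoid underflow. A minimal sketch, assuming each decoder distribution $v_i$ is available as a word-to-probability mapping:

```python
import math

def sequence_log_likelihood(step_dists, target_words):
    """log Pr(Tgt|Src) = sum_i log v_i(w_{t_i}).  step_dists[i] is the
    decoder's vocabulary distribution v_i at step i (a dict word -> prob);
    target_words[i] is the i-th target word w_{t_i}."""
    return sum(math.log(v[w]) for v, w in zip(step_dists, target_words))
```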
\section{Information-Directed Adaptive Sequence Sampling}
In the above discussion, we focused on deep learning models and attention mechanisms. However, their ability has mostly been investigated in traditional, non-adaptive, one-shot inference scenarios. By non-adaptive and one-shot, we mean that the data are given to the algorithm {\em as is}, with no control over the data collection process whatsoever. Most machine learning algorithms are designed to cope with this scenario, but the rise of new interactive channels like chatbots, virtual agents, and interactive webpages demands more. An agent, like a chatbot, must be able to converse adaptively and raise clarifying questions so that the user can reach his or her goal.
Our framework is created to address this. By {\em adaptive}, we mean that it dynamically samples the next user input depending on the current estimates, and is hence more directed and less ad-hoc. In other words, it should interpret user intent and knowingly guide the user to achieve the goal in the most efficient way. Next we explain how DeepProbe\xspace integrates the seq2seq model to perform the estimation, identify the next sampling direction, and make recommendations once the agent is confident.
\subsection{Recommending an Item}
Consider a scenario where we would like to make recommendations. Denote by $\pi$ a prior distribution on the set of all possible items. In this setting, each $Item$ can be represented as a sequence, for example the title of an ad. Now suppose $k$ input sequences $Input_1^k$ from a user are revealed, e.g., from $k$ rounds of interaction; the posterior distribution on the set of items should then change accordingly, reflecting the fact that more information has been provided by the user. Applying Bayes' rule, we have
\[Pr(Item|Input_1^k) = \frac{\pi(Item)Pr(Input_1^k|Item)}{\sum_{Item} \pi(Item)Pr(Input_1^k|Item)}\]
Under a na\"ive Bayes framework, we assume conditional independence,
$Pr(Input_1^k|Item)=\prod_{i=1}^k Pr(Input_i|Item)$.
Combining the two expressions, the update rule becomes
\begin{equation}
Pr(Item|Input_1^k) = \frac{\pi(Item)\prod_{i=1}^k Pr(Input_i|Item)}{\sum_{Item} \pi(Item)\prod_{i=1}^k Pr(Input_i|Item)}
\label{eq:posterior}
\end{equation}
where we notice that each likelihood term, $Pr(Input_i|Item)$, is given by the seq2seq likelihood estimator in Equation (\ref{eq:likelihood}).
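The update in Equation (\ref{eq:posterior}) is a single normalized pass over the item set. A minimal sketch, with dictionaries standing in for the item set and with the per-input likelihoods assumed to come from the seq2seq estimator:

```python
def posterior(prior, likelihoods):
    """Naive-Bayes posterior update.  prior maps item -> pi(item);
    likelihoods maps item -> [Pr(Input_i | item) for i = 1..k].
    Returns the normalized posterior Pr(item | Input_1^k)."""
    unnorm = {}
    for item, p in prior.items():
        prod = p
        for lk in likelihoods[item]:
            prod *= lk          # conditional-independence assumption
        unnorm[item] = prod
    z = sum(unnorm.values())    # normalizing constant (the denominator)
    return {item: v / z for item, v in unnorm.items()}
```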
\subsection{Entropy as a Measure of Confidence}
\subsubsection{Definition and Discussion of Intuition}
Entropy is a functional of a probability distribution that measures how unpredictable the distribution is. We use it to determine the confidence of an agent, or how vague the situation is to the agent. It originated from information theory, where it quantifies the compressibility of an i.i.d. random source sequence \cite{cover2012elements}, but has since been widely applied to other fields, including computer vision and speech recognition. For example, the maximum entropy principle, first proposed by Hoch and Skilling \cite{skilling1984maximum,hoch1996maximum}, has shown great success in image reconstruction and de-blurring. The maximum entropy principle has also found applications in speech recognition, an example being the speech recognition system built by Peters et al. \cite{peters2006speech}. In NLP, language models, as in \cite{khudanpur1999maximum}, are sometimes built around this idea as well. We also point out that in NLP, the notion of {\it perplexity}, a standard metric used to compare statistical language models and machine translation systems such as \cite{sutskever2014sequence}, can be viewed as the exponential of the entropy. Below we give a formal definition of the entropy functional. Notice that in the following definition of conditional entropy, the entropy is not averaged over the random variable being conditioned on, and hence is itself a random variable.
\begin{definition}[Entropy, Conditional Entropy]
Given a pair of discrete random variables $(X,Y)$, where $X$ takes values in an alphabet $\mathcal X$ and $Y$ takes values in $\mathcal Y$, denote their joint distribution by $p_{X,Y}(x,y)$ and their marginals by $p_X(x)$ and $p_Y(y)$. Then:
\begin{enumerate}
\item
The entropy of $X$ is defined as
\[H(X)=-\sum_{x\in\mathcal X} p_X(x)\log p_X(x)\]
\item
The conditional entropy of $X$ given $Y=y$ is
\[H(X|Y=y)=-\sum_{x\in\mathcal X} p_{X|Y}(x|y)\log p_{X|Y}(x|y)\]
Finally, we use $H(X|Y)=\sum_{y\in\mathcal Y} p_Y(y)H(X|Y=y)$ to denote the {\em expected} conditional entropy of $X$ given $Y$.
\end{enumerate}
\end{definition}
In general, a large entropy indicates that the distribution is more widespread; entropy is maximized exactly when $X$ has a uniform distribution. Conversely, when the entropy is small, the distribution is more concentrated, and $H(X)=0$ means the distribution is deterministic.
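The definition translates directly into code. The following sketch (natural-log entropy over a dictionary-valued distribution, with the convention $0\log 0 := 0$) illustrates the two extremes just described:

```python
import math

def entropy(dist):
    """H(X) = -sum_x p(x) log p(x); dist maps outcomes to probabilities.
    Zero-probability outcomes contribute nothing (0 log 0 := 0)."""
    return -sum(p * math.log(p) for p in dist.values() if p > 0)
```

A uniform distribution over $n$ outcomes attains the maximum $\log n$; a point mass attains $0$.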
\subsubsection{Uncertainty of Sequence Posterior Estimation}
The conditional entropy can serve as an uncertainty measure of the estimated sequence posterior distribution.
Recall that in Equation (\ref{eq:posterior}) we discussed the posterior update procedure when $k$ user inputs $Input_1^k$ are observed. We define the {\it posterior uncertainty} as the entropy of this conditional distribution, $H(Item|Input_1^k)$. A large posterior uncertainty means the estimate is vague, and hence more observations are needed before a decision can be made; on the other hand, a posterior uncertainty close to 0 indicates that the estimate has essentially concentrated on its mode, in which case a confident recommendation is ready to be made. Next we explain how to sample more observations, i.e., determine the best question to ask, when the estimate is uncertain.
\subsection{Information-directed Sampling: Principle of Maximizing Expected Information Gain}
\subsubsection{Mutual Information}
Originating from information theory, the mutual information quantifies how much information can be reliably communicated through a channel. It is a functional of a pair of random variables $(X,Y)$ that measures how much knowledge one gains about $X$ when $Y$ is revealed. It is defined as the difference between the entropy of $X$ and the conditional entropy of $X$ given $Y$.
\begin{definition}[Mutual Information]
The mutual information between $(X,Y)$ is
\[I(X;Y)=H(X)-H(X|Y)\]
Similarly, conditioning on a sequence of random variables $Z_1^k=z_1^k$, the mutual information between $(X,Y)$ is
\[I(X;Y|Z_1^k=z_1^k)=H(X|Z_1^k=z_1^k)-H(X|Y,Z_1^k=z_1^k)\]
\end{definition}
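For a finite joint distribution, $I(X;Y)$ can be computed directly from the definition. A minimal sketch, with the joint given as a dict keyed by $(x,y)$ pairs:

```python
import math

def _H(d):
    # entropy of a distribution given as a dict outcome -> probability
    return -sum(p * math.log(p) for p in d.values() if p > 0)

def mutual_information(joint):
    """I(X;Y) = H(X) - H(X|Y), with joint[(x, y)] = p(x, y)."""
    px, py = {}, {}
    for (x, y), p in joint.items():
        px[x] = px.get(x, 0.0) + p
        py[y] = py.get(y, 0.0) + p
    # H(X|Y) = sum_y p(y) H(X | Y=y), the *expected* conditional entropy
    h_cond = 0.0
    for y0, p_y0 in py.items():
        cond = {x: p / p_y0 for (x, y), p in joint.items() if y == y0}
        h_cond += p_y0 * _H(cond)
    return _H(px) - h_cond
```

When $X=Y$ is a fair bit the mutual information is $\log 2$; for independent variables it is $0$.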
\subsubsection{Information-Directed Sampling Algorithm}
Now suppose the agent can proactively interact with the user, asking questions and expecting answers. To start with, assume there is a set of questions, $\mathcal Q=\{Qst_1,\cdots, Qst_q\}$. Following the principle of maximizing the expected information gain, we propose Algorithm \ref{alg:algo1}.
\begin{algorithm}
\caption{Information-directed Sequence Sampling}
\label{alg:algo1}
\begin{algorithmic}[1]
\For{ $n=1,2, \cdots$}
\State The $n$-th sequence $Input_n$ is collected from the user.
\State Estimate the likelihood, $Pr(Input_n|Item)$, using the seq2seq likelihood estimator.
\State Update the posterior distribution
\[Pr(Item|Input_1^n) = \frac{\pi(Item)\prod_{i=1}^n Pr(Input_i|Item)}{\sum_{Item} \pi(Item)\prod_{i=1}^n Pr(Input_i|Item)}\]
\State Calculate the conditional entropy $H(Item|Input_1^n)$
\If{$H(Item|Input_1^n)<T$}
\State Return $\argmax_{Item} Pr(Item|Input_1^n)$, the most likely item.
\Else
\State Choose $Qst$ that maximizes $I(Qst;Item|Input_1^n)$
\State Propose $Qst$ to user; wait for user feedback $Input_{n+1}$
\EndIf
\EndFor
\end{algorithmic}
\end{algorithm}
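The question-selection step of Algorithm \ref{alg:algo1} can be sketched as follows. The `answer_model` below, a per-question, per-item distribution over user answers, is our own illustrative assumption, since the paper leaves the answer distribution to the application:

```python
import math

def H(dist):
    return -sum(p * math.log(p) for p in dist.values() if p > 0)

def choose_question(posterior, answer_model, questions):
    """Greedy step of Algorithm 1: pick Qst maximizing
    I(Qst; Item | Input_1^n) = H(Item|Input_1^n) - H(Item|Qst, Input_1^n).
    Since the first term does not depend on Qst, we equivalently minimize
    the expected posterior entropy after the answer.  answer_model[q][item]
    is an assumed dict {answer: prob} of the user's answer to q given item."""
    def expected_entropy(q):
        answers = {a for item in posterior for a in answer_model[q][item]}
        total = 0.0
        for ans in answers:
            p_ans = sum(posterior[i] * answer_model[q][i].get(ans, 0.0)
                        for i in posterior)
            if p_ans == 0.0:
                continue
            post = {i: posterior[i] * answer_model[q][i].get(ans, 0.0) / p_ans
                    for i in posterior}
            total += p_ans * H(post)
        return total
    return min(questions, key=expected_entropy)
```

A perfectly discriminating question drives the expected posterior entropy to zero, so it is always preferred over an uninformative one.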
We would like to point out that Algorithm \ref{alg:algo1}, which maximizes the expected information gain at each step, is effectively a greedy uncertainty-reduction algorithm. This observation is stated in the lemma below.
\begin{lemma}
The information-gain-maximizing $Qst$ proposed at step $n$ is also an uncertainty minimizer at step $n$.
\end{lemma}
\begin{proof}
Note that
\[I(Qst;Item|Input_1^n)=H(Item|Input_1^n)-H(Item|Qst,Input_1^n).\]
Since $H(Item|Input_1^n)$ does not depend on $Qst$, a maximizer of $I(Qst;Item|Input_1^n)$ is immediately a minimizer of $H(Item|Qst,Input_1^n)$, and vice versa.
\end{proof}
The application to a chatbot and the question formulation procedure are discussed in Section \ref{sec:chatbot}.
\section{Training}\label{training}
We present some training details of the seq2seq likelihood estimator. Besides likelihood estimation, the same model can perform the query rewriting task, which is discussed in Section \ref{application}.
\subsection{Data Description}
The seq2seq model was trained on our internal dataset, which consists of clicked (offer, query) pairs. The offers come from a product ad database; each is a sequence of words describing the corresponding product. The queries are user inputs in our search engine; if a user clicked on an offer when searching with a query, we regard (offer, query) as a valid pair. A total of 100 million such training pairs were collected.
\subsection{Details of Model}
We used the model introduced in Section \ref{seq2seq}, with vocabulary size $|\mathcal V|=30k$. Any word not in $\mathcal V$ is mapped to the symbol $\langle$UNK$\rangle$. We chose the embedding dimension $d=300$. On the encoder side, we used a bi-directional LSTM network with a total of 4 vertically stacked layers. On the decoder side, a 4-layer LSTM network was used, and we implemented 4 different attention scenarios: no attention, cosine similarity weighted average as in \cite{luong-pham-manning:2015:EMNLP}, general matrix similarity weighted average as in \cite{luong-pham-manning:2015:EMNLP}, and our dynamic tensor attention network. The results for the four attention mechanisms are compared.
We used Theano \cite{2016arXiv160502688short} to train the model on a Tesla K20 GPU, with cross entropy as the loss function and Adadelta \cite{zeiler2012adadelta}, a variant of Adagrad \cite{duchi2011adaptive}, for gradient descent. We trained the model for 5? epochs, each epoch taking 3? days.
The Internet of things (IoT) aims at connecting ubiquitous devices to the Internet, and it has become an important driving force for information technology innovation \cite{Campbell2016, wu2014cognitive}. Power supply is crucial to support high-performance computation and communication in IoT applications. However, power sustainability is a well-known challenge for battery-powered IoT devices. There are two ways to extend battery endurance: 1) increasing the battery capacity, and 2) improving the battery charging method.
Increasing the battery capacity faces challenges such as safety, weight, cost, and recycling \cite{scrosati2010lithium}. On the other hand, with the wired charging method, carrying a power cord and looking for a power outlet are inconvenient for users. Hence, wireless power transfer (WPT), also known as wireless charging, becomes an attractive solution for improving battery endurance \cite{lu2015wireless}.
Being able to transmit Watt-level power over meter-level distances safely, resonant beam charging (RBC), also known as distributed laser charging (DLC), was presented in \cite{liu2016dlc}. Compared with other wireless charging methods, e.g., inductive coupling, magnetic resonance coupling, and radio frequency, RBC appears to be more suitable for mobile IoT devices \cite{ho2011comparative, kurs2007wireless, costanzo2014electromagnetic}. In the RBC system, the wireless power transmitter and receiver are separated in space. Without specific aiming or tracking, a resonant beam can be generated as long as the receiver is in the line of sight (LOS) of the transmitter, and multiple resonant beams can be generated from one transmitter to multiple receivers \cite{liu2016dlc}. Thus, RBC can charge multiple devices simultaneously, much like Wi-Fi communications.
Due to the various types and working statuses of RBC receivers (e.g., IoT devices), the remaining battery capacity percentage (i.e., state of charge, SOC) and the discharging status may be diverse \cite{SOC2006}. Therefore, the receivers' charging statuses (e.g., preferred charging power, charging time) may differ as well. However, in a multi-user RBC application scenario, such as the wireless sensor network discussed in \cite{lewis2004wireless}, if one receiver's battery exhausts, the network system may break down \cite{yu2015malware, ding2013sensing}. Thus, in order to keep all IoT devices in the system working as long as possible, it is necessary to study scheduling methods for the multi-user RBC system.
\begin{figure}[!t]
\centering
\includegraphics[scale=0.4]{Fig1-Application-Scenario.pdf}
\caption{Multi-user RBC Application Scenario}
\label{complex-scenarios}
\end{figure}
The system consisting of one RBC transmitter and multiple receivers is defined as the multi-user RBC system. Fig. \ref{complex-scenarios} shows a multi-user RBC application scenario, in which all receivers, including mobile phones, watches, tablets, and sensors, are operated and charged at the same time. Since the receivers' batteries and working statuses are different, the receivers have different power charging and discharging statuses; thus, each receiver's SOC is different. If the transmitting power cannot satisfy the charging power requirements of all receivers simultaneously, some receivers' batteries may exhaust due to ``starvation''. Therefore, it is desirable to design a scheduling algorithm that keeps all receivers working as long as possible for fairness in the multi-user RBC system.
The contributions of this paper are as follows. 1) We propose the First Access First Charge (FAFC) scheduling algorithm for WPT in the multi-user RBC system, which keeps all receivers working as long as possible for fairness. 2) We obtain closed-form formulas of the parameters for the FAFC implementation based on a quantitative analysis. 3) We analyze the performance of the FAFC scheduling algorithm and find its features as follows:
\begin{itemize}
\item When the transmitting power is fixed, there exists a threshold for the number of receivers being charged simultaneously in order to avoid the system running out of power. For example, the maximum receiver number threshold is about 35 when the transmitting power is 20W.
\item If the transmitting power can satisfy the consumed power of all receivers, the charging time should be prolonged to extend the working time of all receivers.
\item Regardless of the receiver number and the charging time, the system operational duration increases when increasing the transmitting power or improving the single-user's charging efficiency.
\end{itemize}
In the rest of this paper, we depict the multi-user RBC system in Section II. In Section III, we design the FAFC scheduling algorithm and present its execution flow. In Section IV, we propose quantitative formulations for the algorithm implementation. In Section V, we analyze the performance of the FAFC scheduling algorithm through MATLAB simulations.
\section{Multi-User RBC System}\label{Section2}
The RBC system can transmit Watt-level power over meter-level distances while guaranteeing mobile and safe charging for IoT devices. In addition, multiple receivers can be charged by one transmitter simultaneously \cite{liu2016dlc}. The system with one transmitter and multiple receivers is called the multi-user RBC system, as shown in Fig. \ref{RBC}.
\begin{figure}[!t]
\centering
\includegraphics[scale=0.5]{Fig2-RBC-system.pdf}
\caption{Multi-user RBC System}
\label{RBC}
\end{figure}
In Fig. \ref{RBC}, the multi-user RBC system consists of one transmitter and multiple receivers. The transmitter contains a power source, a retro-reflector R1 with 100\% reflectivity, a gain medium, a power scheduler, a feedback monitor, and a power controller. The power source provides electrical power $P_s$ to the gain medium under the control of the power controller. By stimulating the gain medium, the electrical power $P_s$ is converted into the beam power at the transmitter (i.e., the transmitting power) $P_t$. The power scheduler controls the arrangement of the receiver queue and the distribution of the transmitting power. The feedback monitor is responsible for receiving and processing feedback information and for informing the power controller of the batteries' preferred charging information.
A retro-reflector R2 with 95\% reflectivity, a photovoltaic (PV) panel, a battery, a power monitor, and a feedback generator are included in each receiver $R_i$. The resonant beam power $P_b$ arriving at R2 can be partially converted into the receiver output electrical power $P_e$ by the PV panel. The power monitor tracks the battery charging status and sends this information to the feedback generator, which feeds it back to the transmitter. The symbols in Fig. \ref{RBC} are listed in Table \ref{mulparameter}.
The wireless power transfer processes of the multi-user RBC system include: electricity-to-beam conversion, transmitting power scheduling, resonant beam transmission, beam-to-electricity conversion, charging status feedback, and feedback information processing.
From Fig. \ref{RBC}, the receivers from $R_{1}$ to $R_n$ have established charging connections with the transmitter. When the fixed transmitting power is less than the total preferred charging power of all receivers, the transmitting power cannot meet all receivers' charging requirements at the same time. Therefore, a scheduling method should be adopted to control the distribution of the transmitting power for charging the receivers.
\begin{table}[!t]
\setlength{\abovecaptionskip}{0pt}
\setlength{\belowcaptionskip}{-3pt}
\centering
\caption{Multi-user RBC System Symbols}
\begin{tabular}{C{1.5cm} C{6.0cm}}
\hline
\textbf{Symbol} & \textbf{Parameter} \\
\hline
\bfseries{$P_s$} & {Source power} \\
\bfseries{$P_t$} & {Beam power, i.e., transmitting power} \\
\bfseries{$P_b$} & {Receiver beam power } \\
\bfseries{$P_e$} & {Receiver output electrical power} \\
\bfseries{$R_i$} & {Serial number of the i-th receiver} \\
\hline
\label{mulparameter}
\end{tabular}
\end{table}
We present the First Access First Charge (FAFC) scheduling algorithm to solve the scheduling problem for WPT in the multi-user RBC system. The receiver queue of the FAFC scheduling algorithm is defined as:
\begin{itemize}
\item The receiver queue is:
\center
$R$=\{$R_i$ $\mid$ the $i$-th receiver that has connected to the transmitter for charging\}.
\end{itemize}
In the FAFC scheduling process, all receivers should be kept working as long as possible for fairness, i.e., the SOC (remaining capacity percentage) $R_{soc}$ of any receiver $R_i$ should remain greater than 0, that is:
\begin{equation}\label{purpose}
\centering
R_{soc}(R_i)>0,\ \ \ \ (\forall R_i \in R).
\end{equation}
In the next section, to satisfy the simultaneous charging needs in the multi-user RBC system, we specify the FAFC scheduling algorithm in detail.
\section{FAFC Algorithm Design}\label{Section3}
We focus here on the multi-user RBC scenario, where the working time of all devices should be maintained as long as possible. Since the receivers' battery SOC and discharging statuses differ, their charging statuses vary. To maximize the utilization of the transmitting power and meet the receivers' charging power requirements, the distribution of the transmitting power should be controlled so that receivers are charged with their preferred charging power. To solve these problems, we study the FAFC scheduling algorithm in this section.
\subsection{Design Ideas}\label{}
\begin{figure}[!t]
\centering
\includegraphics[scale=0.6]{Fig3-Schematic-Diagram.pdf}
\caption{FAFC Schematic Diagram}
\label{FAFCtime}
\end{figure}
The design ideas of the FAFC scheduling algorithm are as follows: 1) The charging time is divided into several identical charging time slots. 2) The receivers that have accessed the transmitter are queued chronologically by their access time. 3) The receivers at the head of the queue are charged with their preferred charging power during a charging time slot, while the other receivers keep waiting. 4) All receivers discharge with different powers according to their working statuses in each charging time slot. 5) At the end of a time slot, all receivers update their SOC, and the receivers that have been charged are moved to the tail of the receiver queue. The schematic diagram of the FAFC scheduling algorithm is shown in Fig. \ref{FAFCtime}.
Fig. \ref{FAFCtime} shows the FAFC scheduling principle with $n$ receivers $R_1$ to $R_n$. Each rectangle denotes an RBC receiver $R_i$ with its preferred charging power and discharging power. The left horizontal axis represents the preferred charging power $P_r$, the right one the discharging power $P_d$, and the vertical axis the receivers. The preferred charging power (the length of the left rectangle) and the discharging power (the length of the right rectangle) differ across receivers.
\begin{figure}[!t]
\centering
\includegraphics[scale=0.6]{Fig4-Execution-Flow.pdf}
\caption{FAFC Scheduling Algorithm Execution Flow}
\label{FAFCAflowchart}
\end{figure}
In Fig. \ref{FAFCtime}, the receivers in the bracket labeled ``charging'' are chosen to be charged and are routed to the tail of the receiver queue when the charging time slot ends. For example, when time slot $T_1$ ends, the receivers $R_1$, $R_2$, $R_3$, $R_4$ are moved to the tail of the receiver queue. All receivers from $R_1$ to $R_n$ discharge during the whole scheduling process. The execution flow of the FAFC scheduling algorithm is discussed in the next subsection.
\subsection{Execution Flow}\label{}
The execution flow of the FAFC scheduling algorithm is depicted in Fig. \ref{FAFCAflowchart}:
1) Access judgement: if no charging connection between the transmitter and any receiver has been established, the scheduling process ends. Otherwise, go to 2).
2) Time division: the charging time is divided into multiple small charging time slots.
3) Queuing: according to the time sequence of the receivers accessing to the transmitter, the transmitter sets up the receiver queue of all accessed receivers chronologically.
4) Feedback: the receiver identifications (receiver type, serial number, battery type, etc.) and battery preferred charging power are fed back to the transmitter.
5) Selecting: depending on the relationship between the transmitting power and the receivers' preferred charging power, one or more receivers at the head of the receiver queue are chosen to be charged.
6) Charging and discharging: the transmitter schedules transmitting power to charge selected receivers with their preferred charging power. All receivers discharge with different power according to their working status.
7) Updating: all receivers update their SOC according to the charging and discharging energy.
8) Rearrangement: the receivers which have been charged are queued to the tail of receiver queue.
9) End judgement: the scheduling process ends when one of the following three conditions is met: a) one or more receivers' batteries run out of power, b) all receivers are fully charged, or c) the pre-set charging time (for example, 1 hour or 2 hours) ends. Otherwise, return to 4).
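One iteration of the flow above (steps 5 to 8) can be sketched as a single charging time slot in Python. The field names (`soc`, `cap`, etc.) and the Wh/W bookkeeping are illustrative assumptions, not quantities fixed by the paper:

```python
def fafc_slot(queue, P_t, slot_hours):
    """One FAFC charging time slot.  Each receiver is a dict with
    'soc' (remaining energy, Wh), 'cap' (capacity, Wh), 'P_r' (preferred
    charging power, W), 'P_d' (discharging power, W), 'eta' (charging
    efficiency).  Head-of-queue receivers are charged at their preferred
    power while transmitting power remains; all receivers discharge;
    charged receivers are re-queued to the tail."""
    available = P_t
    charged = []
    for r in queue:
        if available <= 0:
            break
        grant = min(r['P_r'], available)          # allocate up to P_r
        available -= grant
        r['soc'] = min(r['cap'], r['soc'] + grant * r['eta'] * slot_hours)
        charged.append(r)
    for r in queue:                               # everyone discharges
        r['soc'] = max(0.0, r['soc'] - r['P_d'] * slot_hours)
    tail_ids = {id(r) for r in charged}           # rearrangement step
    queue[:] = [r for r in queue if id(r) not in tail_ids] + charged
    return charged
```

The caller repeats this per slot and applies the end-judgement conditions between slots.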
In addition, during each charging time slot, the power scheduler allocates the transmitting power to the receivers. Before the transmitting power is allocated, the available transmitting power $P_o$ equals the transmitting power $P_t$. The allocation process of the transmitting power $P_t$ is shown in Fig. \ref{chargingcycleallocate}.
\begin{figure}[!t]
\centering
\includegraphics[scale=0.63]{Fig5-Allocation-Process.pdf}
\caption{Charging Power Allocation Process}
\label{chargingcycleallocate}
\end{figure}
In Fig. \ref{chargingcycleallocate}, $P_{ri}$ is the preferred charging power of the receiver $R_i$, and $P_{ci}$ is the allocated charging power for $R_i$ by the transmitter. For each receiver, $P_{ci}$ is less than or equal to $P_{ri}$. The symbols of the FAFC scheduling algorithm are shown in Table \ref{Parameters of scheme}.
\begin{table}[!t]
\centering
\caption{FAFC Scheduling Algorithm Symbols}
\begin{tabular}{C{1.5cm} C{6.0cm}}
\hline
\textbf{Symbol} & \textbf{Parameter} \\
\hline
\bfseries{$P_o$} & {Transmitter available transmitting power} \\
\bfseries{$P_r$} & {Battery preferred charging power} \\
\bfseries{$P_d$} & {Receiver discharging power} \\
\bfseries{$T_i$} & {Charging time slot} \\
\bfseries{$P_c$} & {Receiver allocated charging power} \\
\hline
\label{Parameters of scheme}
\end{tabular}
\end{table}
From Fig. \ref{chargingcycleallocate}, the transmitting power allocation involves two judgments: 1) whether $P_o$ is greater than the preferred charging power $P_{ri}$ of the next receiver, and 2) whether $P_o$ is greater than zero. For 1), the next receiver is allocated its preferred power $P_{ri}$, which is then subtracted from the available power $P_o$. For 2), all of the remaining transmitting power is allocated to the next receiver and $P_o$ drops to zero; then the next charging time slot begins.
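The allocation rule of Fig. \ref{chargingcycleallocate} can be sketched as a short, illustrative Python function (not part of the paper's implementation, which uses MATLAB; the function and variable names are ours):

```python
def allocate(p_t, preferred):
    """Greedy allocation of the transmitting power p_t (W) over the
    receiver queue: each receiver at the head of the queue gets its
    full preferred power while enough power remains; the first receiver
    that cannot be fully served gets whatever remains."""
    p_o = p_t                      # available transmitting power P_o
    allocated = []
    for p_r in preferred:          # queue order = accessing-time order
        p_c = min(p_r, p_o)        # P_ci <= P_ri always holds
        allocated.append(p_c)
        p_o -= p_c
    return allocated

# 20 W shared by four receivers, each preferring 8 W:
print(allocate(20.0, [8.0, 8.0, 8.0, 8.0]))  # [8.0, 8.0, 4.0, 0.0]
```

The third receiver receives only the remainder (4 W) and the fourth receives nothing, matching the two-judgment flow above.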
In this section, we presented the FAFC scheduling algorithm. However, to implement the algorithm, various parameters, such as the battery preferred charging power and the discharging power, still need to be specified. In the next section, we will discuss the implementation of the FAFC scheduling algorithm, which includes the quantitative models of the scheduling parameters and the operational pseudo code of the algorithm.
\section{FAFC Algorithm Implementation}
In the multi-user RBC system, when the transmitting power is scheduled to keep all receivers working as long as possible for fairness, the algorithm implementation should be presented to quantitatively analyze the charging characteristics. In this section, we will specify the FAFC scheduling algorithm implementation by designing the quantitative models of scheduling parameters. Then, we will illustrate the operation pseudo-code of the algorithm.
\subsection{System Parameters}\label{}
To quantitatively analyze the scheduling algorithm, we need to specify the following parameters:\\
1) Charging Efficiency
In the multi-user RBC system, the charging efficiency ${\eta}_{o}$ of a charging link, which is affected by the electro-optical conversion efficiency ${\eta}_{el}$, the beam transmission efficiency ${\eta}_{lt}$, and the photoelectric conversion efficiency ${\eta}_{le}$ \cite{Qing2017}, can be depicted as:
\begin{equation}\label{etao}
{\eta}_{o}={\eta}_{el} {\eta}_{lt} {\eta}_{le}.
\end{equation}
The factors impacting ${\eta}_{o}$ include: the resonant beam wavelength, the PV panel temperature, the transmission environment (clear, fog, haze), the transmission distance, etc. Since the charging efficiency ${\eta}_{o}$ of a multi-user RBC system is fixed, it has no effect on the performance of the WPT scheduling algorithm. We assume the following parameters for the charging efficiency:
\begin{itemize}
\item The resonant beam wavelength is 810nm.
\item The electro-optical conversion efficiency ${\eta}_{el}$ is 40\% \cite{810nmtransmitter, zhang2017distributed}.
\item The transmission efficiency ${\eta}_{lt}$ is 100\% \cite{attenuation}.
\item The photo-electricity converter is the GaAs-based PV panel working at 25 $^{\circ}$C, and the photoelectric conversion efficiency ${\eta}_{le}$ is 50\% \cite{810nmpv, zhang2017distributed}.
\end{itemize}
Therefore, the charging efficiency ${\eta}_{o}$ is 20\%, i.e., 40\%$\times$100\%$\times$50\%.\\
2) Transmitting Power
Determined by the power source, the source power $P_s$ is fixed during a charging process. Since the electro-optical conversion efficiency ${\eta}_{el}$ is 40\%, the transmitting power $P_t$ is fixed as well, and it is equal to $0.4 \times P_s$. The relationships among $P_s$, $P_t$, $P_b$, $P_e$ specified in Fig. \ref{RBC} are depicted as:
\begin{equation}\label{transpower}
\begin{aligned}
P_{e}=P_b {\eta}_{le} = P_t {\eta}_{lt} {\eta}_{le} = P_s {\eta}_{el} {\eta}_{lt} {\eta}_{le}.
\end{aligned}
\end{equation}
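As an illustration, the power chain of \eqref{transpower} can be computed with the efficiency values assumed above (a Python sketch; the function name and defaults are ours):

```python
def received_power(p_s, eta_el=0.40, eta_lt=1.00, eta_le=0.50):
    """Power chain of the RBC link: source -> beam -> PV output.
    Default efficiencies are the values assumed in this section."""
    p_t = p_s * eta_el          # transmitting (beam) power P_t
    p_b = p_t * eta_lt          # beam power arriving at the PV panel P_b
    p_e = p_b * eta_le          # electrical output power P_e
    return p_t, p_b, p_e

p_t, p_b, p_e = received_power(50.0)
print(p_t, p_b, p_e)  # 20.0 20.0 10.0 -> overall efficiency 20%
```

For a 50 W source this yields 10 W of charging power, consistent with the 20\% overall efficiency computed above.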
3) Receiver Specification
The battery types of the same and of different receivers vary, such as Li-ion, Ni-MH, etc. \cite{hussein2011review, park2008universal}. The difference in receiver specification is mainly reflected in the battery capacity and the charging current and voltage \cite{Anonymous2008, winter2004batteries}.
Mobile phones have become one of the most widely used receiving devices, and the lithium-ion battery is generally used in mobile phones due to its excellent performance, such as high specific energy, high efficiency, and rechargeability \cite{tarascon2001issues}. Therefore, we adopt as the RBC receivers mobile phones with a lithium-ion battery whose capacity is 1000mAh, constant charging voltage is 4.2V, and constant charging current is 1A.\\
4) Receiver Number
In the multi-user RBC system, multiple receivers can be charged with their preferred charging power simultaneously. In the scheduling process, the receiver number $N_{r}$ is random.\\
5) Receiver Initial State of Charge
State of charge (SOC), i.e., the battery remaining capacity percentage, is equivalent to a fuel gauge for the battery pack in a battery electric vehicle (BEV). That is, the receiver SOC is the percentage of the battery remaining capacity (0\% = empty, 100\% = full). We assume that the initial SOC of each receiver is a random number between 0\% and 100\%. The initial SOC of the receiver $R_i$ is:
\begin{equation}\label{remaining-capacity}
\begin{aligned}
R_{soc}(R_i)=randi([0,100],1,1)\%.
\end{aligned}
\end{equation}\\
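A Python counterpart of \eqref{remaining-capacity} (the paper uses MATLAB's randi; the seed argument is our addition for reproducibility) could look like:

```python
import random

def initial_soc(n_r, seed=None):
    """Draw a random integer SOC in [0, 100] percent for each of
    n_r receivers, mirroring randi([0,100],1,1)% per receiver."""
    rng = random.Random(seed)
    return [rng.randint(0, 100) / 100.0 for _ in range(n_r)]

socs = initial_soc(5, seed=42)
print(socs)  # five values between 0.0 and 1.0
```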
6) Charging Time Slot
For the FAFC scheduling algorithm, the charging time is divided into consecutive equal time slots. The time slot is the minimum charging time unit for the chosen receivers, and it can be set according to the scheduling conditions (e.g., the charging time, the receiver number and so on) in the scheduling process.
The parameters related to the initial charging status are specified in this subsection. To realize the scheduling process, the core step is charging and discharging. In the next subsections, we will analyze the charging and discharging power for each receiver based on quantitative analysis.
\subsection{Battery Preferred Charging Power Model}\label{}
To get the battery preferred charging power, we systematically analyze the charging profile of lithium-ion battery in this subsection. Based on analyzing battery charging status, the closed-form formula between the battery preferred charging power and receiver SOC will be obtained.\\
1) Lithium-ion Battery Charging Profile
If batteries are charged with fixed current and voltage, undercharging or overcharging problems may occur \cite{dearborn2005charging}. Hence, to optimize the battery charging performance, the constant current-constant voltage (CC-CV) lithium-ion battery charging profile was studied in \cite{hussein2011review, park2008universal, dearborn2005charging}. The charging profile of a 4.2V/1A, 1000mAh lithium-ion battery is shown in Fig. \ref{chargeproflie}.
\begin{figure}[!t]
\centering
\includegraphics[scale=0.67]{Fig6-Charging-Profile.pdf}
\caption{Li-ion Battery Charging Profile}
\label{chargeproflie}
\end{figure}
In Fig. \ref{chargeproflie}, the charging process of the lithium-ion battery is divided into four stages:
Stage 1: Trickle Charge (TC) - When the voltage is less than 3V, the lithium-ion battery is charged with a low charging current, which is about 100mA.
Stage 2: Constant Current (CC) - When the voltage reaches 3V, the charging current increases from 100mA to 1000mA. Then, the battery is charged with a charging current of about 1000mA until the charging voltage increases from 3V to 4.2V.
Stage 3: Constant Voltage (CV) - At this stage, the battery is charged with a charging voltage of about 4.2V, and the charging current decreases gradually from 1000mA.
Stage 4: Charge Termination (CT) - There are two methods to terminate the entire charging process: a) A minimum charging current: when the charging current in the CV stage diminishes to 20mA, the entire charging process terminates. b) A timer: when the CV stage lasts for about 2 hours, the charging procedure terminates. Moreover, to terminate the charging process more precisely, a combination of the two termination methods can be applied.\\
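The four stages above can be summarized by a rough, illustrative classifier (the thresholds simply restate the text for a 4.2V/1A cell; real chargers use dedicated control logic):

```python
def charging_stage(voltage, current):
    """Map a (voltage V, current A) measurement to the CC-CV stage
    described in the text; purely illustrative."""
    if voltage < 3.0:
        return "TC"              # trickle charge, ~100 mA
    if voltage < 4.2:
        return "CC"              # constant current, ~1000 mA
    if current > 0.020:
        return "CV"              # constant voltage, current tapering
    return "CT"                  # terminate at <= 20 mA

print(charging_stage(2.8, 0.10))   # TC
print(charging_stage(3.6, 1.00))   # CC
print(charging_stage(4.2, 0.40))   # CV
print(charging_stage(4.2, 0.015))  # CT
```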
2) Battery Charging Power and Charging Energy
From the charging profile of the lithium-ion battery in Fig. \ref{chargeproflie}, the charging current and voltage vary with the charging time. In the charging process, the charging power $P$ of lithium-ion battery can be obtained by multiplying the charging voltage $U$ and current $I$ as:
\begin{equation}\label{power-VC}
P = UI.
\end{equation}
As the dot-dash curve in Fig. \ref{profile-power-energy-time} shows, the battery is charged with dynamic power during the whole charging procedure. Therefore, the battery charging power is a function of the charging time.
\begin{figure}[!t]
\centering
\includegraphics[scale=0.67]{Fig7-Power-and-Energy-vs-Time.pdf}
\caption{Charging Power and Energy vs. Charging Time}
\label{profile-power-energy-time}
\end{figure}
Moreover, the charging energy $E$ is the integral of the charging power over time:
\begin{equation}\label{battery-energy}
E = \int_{t_{1}}^{t_{2}} P(t)\,dt,
\end{equation}
where the charging energy is in Wh, $t_{1}$ and $t_{2}$ are the lower and upper bounds of the integration time, and $P(t)$ is the charging power as a function of the charging time $t$.
Based on \eqref{battery-energy}, we can obtain how the charging energy $E$ varies over time from 0 to 3.6h, which is shown as the dash curve in Fig. \ref{profile-power-energy-time}. The battery charging energy starts with a slow growth trend, then increases gradually as the charging time advances, and finally approaches a plateau. Thus, the charging energy is also a function of the charging time. Moreover, the battery total energy $E_o$ is about 6.3865Wh when the battery is fully charged.\\
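Equation \eqref{battery-energy} can be approximated numerically from sampled power values, e.g., with the trapezoidal rule (an illustrative Python sketch, not from the paper; power in W and time step in hours give energy in Wh):

```python
def charging_energy(power, dt):
    """Trapezoidal approximation of the integral of the sampled
    charging power P(t) (W) with a uniform step dt (h); returns Wh."""
    return sum((power[i] + power[i + 1]) * dt / 2.0
               for i in range(len(power) - 1))

# constant 4.2 W sampled every 0.5 h over 2 h -> 8.4 Wh
print(charging_energy([4.2, 4.2, 4.2, 4.2, 4.2], 0.5))  # 8.4
```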
3) Battery Preferred Charging Power and Battery Energy
From Fig. \ref{profile-power-energy-time}, the battery charging power and the charging energy depend on the charging time. The charging time increases gradually according to a fixed step size, and each charging time corresponds to the unique values of the charging power and the battery energy. Therefore, the relationship between the battery preferred charging power $P_r$ and the battery energy $E_r$ can be depicted as the stars in Fig. \ref{profile-power-energy-fit}. Moreover, the value of the preferred charging power corresponding to each battery energy obtained in Fig. \ref{profile-power-energy-time} is defined as ``Standard Value".
To implement the scheduling algorithm considering continuously-varying battery charging requirements, we need a closed-form formula to describe the relationship between the battery preferred charging power and the battery energy. Therefore, we fit the relationship between the two by using the MATLAB curve-fitting toolbox. In order to minimize the fitting inaccuracy, we choose the fitting functions with root mean square error (RMSE) less than 0.1 (RMSE $<$ 0.1) among all available fitting functions in the MATLAB curve-fitting toolbox.
The rational function satisfies the above curve-fitting criteria. A function $f(x)$ is called a rational function if and only if it can be written in the form:
\begin{equation}\label{battery-fitting}
f(x) = \frac{P(x)}{Q(x)} = \frac{\beta_1 x^{n-1}+ \cdots +\beta_{n-1} x + \beta_n}{\alpha_1 x^{m-1} + \cdots + \alpha_{m-1} x + \alpha_m},
\end{equation}
where $P(x)$ and $Q(x)$ are polynomials in $x$, and $Q(x)$ is not the zero polynomial. The relationships between the preferred charging power and battery energy for the five selected fitting functions are shown as curves in Fig. \ref{profile-power-energy-fit}.
\begin{figure}[!t]
\centering
\includegraphics[scale=0.67]{Fig8-Power-vs-Energy.pdf}
\caption{Preferred Charging Power vs. Battery Energy}
\label{profile-power-energy-fit}
\end{figure}
From Fig. \ref{profile-power-energy-fit}, the preferred charging power rises sharply at first, then grows slowly, and finally decreases gradually as the battery energy increases. In Fig. \ref{profile-power-energy-fit}, we use $f_{nm}(x)$ to denote a fitting rational function whose numerator has the highest power $n$ and whose denominator has the highest power $m$, as in \eqref{battery-fitting}.
To choose the fitting function with the best fitting accuracy in Fig. \ref{profile-power-energy-fit}, we calculate the square error $S_{se}$ between the fitting function values $P_{rf}$ and the standard values $P_{rv}$ obtained in Fig. \ref{profile-power-energy-time} as:
\begin{equation}\label{battery-SE-fitting}
S_{se}=(P_{rf}-P_{rv})^2.
\end{equation}
\begin{figure}[!t]
\centering
\includegraphics[scale=0.67]{Fig9-SE-vs-Energy.pdf}
\caption{Square Error vs. Battery Energy}
\label{profile-SE}
\end{figure}
\begin{figure}[!t]
\centering
\includegraphics[scale=0.6]{Fig10-RMSE-vs-Function.pdf}
\caption{Root Mean Square Error vs. Fitting Function}
\label{RMSE}
\end{figure}
$S_{se}$ of each fitting function in Fig. \ref{profile-power-energy-fit} is shown in Fig. \ref{profile-SE}. From Fig. \ref{profile-SE}, the square error of each fitting rational function is small, and the maximum one is about 0.085. To quantify the overall fitting accuracy of the five fitting rational functions in Fig. \ref{profile-power-energy-fit}, we calculate the RMSE $R_{se}$ based on \eqref{RMSE-function}. The RMSE values of all fitting functions are depicted in Fig. \ref{RMSE}.
\begin{equation}\label{RMSE-function}
R_{se} = \sqrt{\frac{\sum_{i=1}^n (P_{rf}-P_{rv})^2}{n}}.
\end{equation}
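Equation \eqref{RMSE-function} corresponds directly to the following Python sketch (illustrative only; the fitted and standard values are passed in as lists):

```python
import math

def rmse(fitted, standard):
    """Root mean square error between the fitted values P_rf and the
    standard values P_rv, both sampled at the same n points."""
    n = len(fitted)
    return math.sqrt(sum((f - v) ** 2 for f, v in zip(fitted, standard)) / n)

print(rmse([1.0, 2.0, 3.0], [1.0, 2.0, 4.0]))  # sqrt(1/3) ~ 0.577
```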
From Fig. \ref{RMSE}, the differences in RMSE among the five fitting rational functions are small, and the RMSEs of $f_{44}(x)$ and $f_{45}(x)$ are smaller than the others. To minimize the fitting inaccuracy, we choose $f_{44}(x)$ and $f_{45}(x)$ to evaluate the FAFC scheduling algorithm. The two fitting functions are depicted as:
\begin{itemize}
\item The fitting function with a fourth-degree numerator and a fourth-degree denominator is:
\begin{equation}\label{power-soc-fitting44}
P_{r44}(x) = \frac{\beta_1 x^{4}+ \beta_2 x^{3} + \beta_3 x^{2} +\beta_4 x + \beta_5}{x^{4} + \alpha_1 x^{3} + \alpha_2 x^{2} +\alpha_3 x + \alpha_4}.
\end{equation}
\item The fitting function with a fourth-degree numerator and a fifth-degree denominator is:
\begin{equation}\label{power-soc-fitting45}
P_{r45}(x) = \frac{{\beta_1}' x^{4}+ {\beta_2}' x^{3} + {\beta_3}' x^{2} + {\beta_4}' x + {\beta_5}' }{x^{5} + {\alpha_1}' x^{4} + {\alpha_2}' x^{3} + {\alpha_3}' x^2 + {\alpha_4}'x +{\alpha_5}'}.
\end{equation}
\end{itemize}
In \eqref{power-soc-fitting44} and \eqref{power-soc-fitting45}, $P_{r44}(x)$ and $P_{r45}(x)$ are the preferred charging power calculated by the two fitting functions, and $x$ is the battery energy $E_r$. The values of all the coefficients in \eqref{power-soc-fitting44} and \eqref{power-soc-fitting45} are shown in Table \ref{Coefficient}.
\begin{figure}[!t]
\centering
\includegraphics[scale=0.67]{Fig11-Energy-vs-SOC.pdf}
\caption{Battery Energy vs. Receiver SOC}
\label{profile-energy-soc}
\end{figure}
\begin{table}[!t]
\centering
\caption{Fitting Functions Coefficients}
\begin{tabular}{C{1.5cm} C{2cm} |C{1.5cm} C{2cm}}
\hline
\multicolumn{2}{c|}{$P_{r44}$} & \multicolumn{2}{c}{$P_{r45}$}\\
\textbf{Coefficient} & \textbf{Value} & \textbf{Coefficient} & \textbf{Value}\\
\hline
$\beta_1$ & {$-3.112$} & ${\beta_1}'$ & {$-21.65$}\\
$\beta_2$ & {$1.439$} & ${\beta_2}'$ & {$141.2$}\\
\bfseries{$\beta_3$} & {$120.4$} &{${\beta_3}'$} & {$-11.5$} \\
\bfseries{$\beta_4$} & {$-7.452$} &{${\beta_4}'$} & {$0.1526$}\\
\bfseries{$\beta_5$} & {$0.1543$} &{${\beta_5}'$} & {$0.008358$}\\
\bfseries{$\alpha_1$} & {$-9.881$} &{${\alpha_1}'$} & {$-10.7$}\\
\bfseries{$\alpha_2$} & {$44.84$} &{${\alpha_2}'$} & {$41.01$}\\
\bfseries{$\alpha_3$} & {$-5.49$} &{${\alpha_3}'$} & {$ -1.509$}\\
\bfseries{$\alpha_4$} & {$ 0.4007$} &{${\alpha_4}'$} & {$-0.3997$}\\
\bfseries{$\ $} & {$\ $} &{${\alpha_5}'$} & {$0.0362$}\\
\hline
\label{Coefficient}
\end{tabular}
\end{table}
\noindent 4) Battery Preferred Charging Power and Receiver SOC
In addition, the receiver SOC $R_{soc}$ is the ratio of the battery energy $E_r$ to the total energy $E_o$:
\begin{equation}\label{battery-energy-soc}
R_{soc} = \frac{E_r}{E_o} \times 100\%.
\end{equation}
From \eqref{battery-energy-soc}, given the total energy $E_o$, the battery energy $E_r$ has a linear relationship with the receiver SOC $R_{soc}$, which is depicted in Fig. \ref{profile-energy-soc}. Thus, the battery energy $E_r$ can be obtained from the receiver SOC $R_{soc}$. For example, $E_r$ is 3.8319Wh when $R_{soc}$ is 60\%.
From Figs. \ref{profile-power-energy-fit} and \ref{profile-energy-soc}, we can obtain the relationship between the receiver SOC and the preferred charging power for the fitting functions $f_{44}(x)$ and $f_{45}(x)$, which are depicted in Fig. \ref{profile-chargepower-soc}.
\begin{figure}[!t]
\centering
\includegraphics[scale=0.67]{Fig12-Power-vs-SOC.pdf}
\caption{Preferred Charging Power vs. Receiver SOC}
\label{profile-chargepower-soc}
\end{figure}
From formulas \eqref{power-soc-fitting44}, \eqref{power-soc-fitting45} and \eqref{battery-energy-soc}, the receiver preferred charging power $P_r$ can be obtained from the receiver SOC $R_{soc}$. For example, if the SOC of receiver $R_i$ is 60\%, then from \eqref{power-soc-fitting44} and \eqref{battery-energy-soc}, the preferred charging power $P_{r44}$ is 3.8650W; based on \eqref{power-soc-fitting45} and \eqref{battery-energy-soc}, the preferred charging power $P_{r45}$ is 3.8712W.
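The worked example above can be reproduced by evaluating \eqref{power-soc-fitting44} with the Table \ref{Coefficient} coefficients (an illustrative Python sketch; variable names are ours):

```python
def p_r44(e_r):
    """Fitted rational function P_r44 with the Table III coefficients;
    e_r is the battery energy in Wh, the result is in W."""
    b = [-3.112, 1.439, 120.4, -7.452, 0.1543]    # beta_1 .. beta_5
    a = [-9.881, 44.84, -5.49, 0.4007]            # alpha_1 .. alpha_4
    num = sum(c * e_r ** (4 - i) for i, c in enumerate(b))
    den = e_r ** 4 + sum(c * e_r ** (3 - i) for i, c in enumerate(a))
    return num / den

e_o = 6.3865                 # total battery energy (Wh)
e_r = 0.60 * e_o             # SOC 60% -> 3.8319 Wh
print(round(p_r44(e_r), 3))  # ~3.865 W, matching the worked example
```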
\subsection{Battery Discharging Power Model}\label{}
\begin{figure}[!t]
\centering
\includegraphics[scale=0.5]{Fig13-Discharge-Power.pdf}
\caption{Usage Rate and Discharging Power vs. Working Status}
\label{dischargepower}
\end{figure}
The discharging power of each receiver depends on the battery type and the working status \cite{carroll2010analysis, peng2014smartphone, murmuria2012mobile}. For example, the receiver discharging power of lithium-ion battery is different from that of Ni-MH battery, and playing music is different from playing video with the same battery. To analyze the discharging power quantitatively in the scheduling algorithm, we propose the discharging power model for the receivers with different working status.
Based on a survey of mobile phone applications, the representative working statuses of mobile phones include: standby, video, social software (e.g., Facebook, Twitter), game, and music \cite{Usingfrequency}. We assume that a receiver being charged works in one of the above five statuses. The usage rate and discharging power of the five working statuses are depicted in Fig. \ref{dischargepower}.
In Fig. \ref{dischargepower}, the vertical strip column is the usage rate of each working status, while the horizontal strip column is the discharging power. From Fig. \ref{dischargepower}, the discharging power of playing games is maximal, and standby is the most common working state.
Therefore, during a charging time slot, the discharging power of a receiver can be modeled as:
\begin{itemize}
\item Discharging power of each working status:
\begin{equation}\label{powerconsumption}
\begin{aligned}
P_{u}=\{0.0076,\ 0.4289,\ 0.4348,\ 0.6766,\ 0.1706\}.
\end{aligned}
\end{equation}
\item Usage rate of each working status:
\begin{equation}\label{probability}
\begin{aligned}
U_{p}=\{28.39\%,\ 12.35\%,\ 24.69\%,\ 12.35\%,\ 22.22\%\}.
\end{aligned}
\end{equation}
\item Discharging power of a receiver during a time slot is:
\begin{equation}\label{}
\begin{aligned}
P_{d}=randsrc(1,1,[P_{u};U_{p}]).
\end{aligned}
\end{equation}
\end{itemize}
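A Python stand-in for the MATLAB randsrc call above (illustrative; the explicit rng argument is our addition, and the powers are taken to be in W) could be:

```python
import random

# Discharging power (W) and usage rate (%) of the five working
# statuses: standby, video, social software, game, music.
P_U = [0.0076, 0.4289, 0.4348, 0.6766, 0.1706]
U_P = [28.39, 12.35, 24.69, 12.35, 22.22]

def draw_discharging_power(rng):
    """Draw one receiver's discharging power with the usage rates
    as sampling weights, mirroring randsrc(1,1,[P_u;U_p])."""
    return rng.choices(P_U, weights=U_P, k=1)[0]

rng = random.Random(2024)
slot_powers = [draw_discharging_power(rng) for _ in range(4)]
print(slot_powers)  # four values drawn from P_U
```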
In summary, all parameters related to the algorithm have now been quantified, so we can implement the algorithm based on the principle presented in Section III and the quantized parameters analyzed in this section. The operational pseudo code of the FAFC scheduling algorithm will be presented in the next subsection.
\begin{algorithm}
\caption{FAFC Scheduling Algorithm}
\begin{algorithmic}[1]
\Require $N_r$, $R_{soc}$;
\State initialize $T_{c}$, $T_{p}$, $T_{s} \leftarrow 0$, $i \leftarrow 1$, $flag \leftarrow 0$;
\State $E_r \leftarrow R_{soc}\times E_o$;
\While {$E_r \geqslant 0$ \textbf{and} $E_r < E_o$ \textbf{and} $T_s \leqslant T_p$}
\State initialize $P_t$;
\State $P_o \leftarrow P_t$;
\State $P_d \leftarrow randsrc(1,N_r,[P_u;U_p])$;
\State $P_d \leftarrow [P_d(flag:end)\ P_d(1:flag)]$;
\State do \eqref{power-soc-fitting44} (or \eqref{power-soc-fitting45}) to get $P_{r}$;
\For {$i \leftarrow 1\ to\ N_r$}
\If {$P_o \geqslant P_r(i)$}
\State $P_c(i)\leftarrow P_r(i)$;
\State $E_r(i)\leftarrow E_r(i)+P_c(i)\times T_c-P_d(i)\times T_c$;
\State $P_o \leftarrow P_o-P_c(i)$;
\If {$P_o = 0$} $flag \leftarrow i$;
\EndIf
\ElsIf {$P_o > 0$}
\State $P_c(i) \leftarrow P_o$;
\State $E_r(i) \leftarrow E_r(i)+P_c(i)\times T_c-P_d(i)\times T_c$;
\State $P_o \leftarrow P_o-P_c(i)$;
\If {$P_o=0$} $flag \leftarrow i$;
\EndIf
\Else
\State $E_r(i) \leftarrow E_r(i)-P_d(i)\times T_c$;
\EndIf
\EndFor
\State $E_r \leftarrow [E_r(flag:end)\ E_r(1:flag)];$
\State $T_s \leftarrow T_s + T_c$;
\EndWhile
\State \Return{$\frac{E_r}{E_o} \times 100\%$};
\label{FAFCA}
\end{algorithmic}
\end{algorithm}
\subsection{Operational Pseudo Code}\label{}
Based on the algorithm principle and the quantized parameters, the operational pseudo code of the FAFC scheduling algorithm can be written as Algorithm~1. The parameters of the algorithm are summarized in Table \ref{Parameters of model}.
Based on the FAFC scheduling algorithm, given the receiver number and the initial SOC, the receivers, which discharge at different power, can be charged chronologically with their preferred charging power according to the accessing time. Keeping selecting and charging receivers in the charging process, the transmitting power is scheduled to keep all receivers working as long as possible for fairness. In the charging process, we record the receiver SOC of each receiver and the charging time to determine when to terminate the scheduling process.
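The inner loop of Algorithm 1 (power allocation plus battery update within one time slot) can be sketched in Python as follows; queue rotation via flag and the termination checks of the full algorithm are omitted for brevity, and the function name is ours:

```python
def fafc_slot(e_r, p_t, p_r, p_d, t_c, e_o):
    """One FAFC charging time slot: allocate the transmitting power p_t
    greedily in queue order, then update each battery's energy by
    (charging - discharging) over the slot t_c (h). Powers are in W,
    energies in Wh; p_r and p_d are per-receiver lists in queue order."""
    p_o = p_t
    for i in range(len(e_r)):
        p_c = min(p_r[i], p_o)                     # P_ci <= P_ri
        p_o -= p_c
        e_r[i] = min(e_o, e_r[i] + (p_c - p_d[i]) * t_c)
        e_r[i] = max(0.0, e_r[i])                  # battery cannot go negative
    return e_r

# Two receivers, 5 W to share, each prefers 4 W and discharges 0.5 W:
print(fafc_slot([1.0, 1.0], 5.0, [4.0, 4.0], [0.5, 0.5], 1.0, 10.0))
# -> [4.5, 1.5]: the first is fully served, the second gets the 1 W remainder
```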
\begin{table}[!t]
\centering
\caption{FAFC Scheduling Algorithm Parameters}
\begin{tabular}{C{1.5cm} C{6cm}}
\hline
\textbf{Symbol} & \textbf{Parameter} \\
\hline
\bfseries{${N}_{r}$} & {Receiver number} \\
\bfseries{${R}_{soc}$} & {Receiver SOC (remaining capacity percentage)} \\
\bfseries{${T}_{c}$} & {Charging time slot} \\
\bfseries{${T}_{p}$} & {Pre-set charging time} \\
\bfseries{${T}_{s}$} & {Scheduling process charging time} \\
\bfseries{${E}_{r}$} & {Battery energy} \\
\bfseries{${E}_{o}$} & {Battery total energy} \\
\bfseries{${P}_{o}$} & {Available transmitting power} \\
\bfseries{${P}_{u}$} & {Discharging power of receiver working status}\\
\bfseries{${U}_{p}$} & {Usage rate of receiver working status}\\
\bfseries{${P}_{d}$} & {Receiver discharging power}\\
\bfseries{${P}_{r}$} & {Receiver preferred charging power}\\
\hline
\label{Parameters of model}
\end{tabular}
\end{table}
\section{Performance Analysis}
The performance analysis of the FAFC scheduling algorithm is based on the simulation in MATLAB. Since the receiver can keep working when its SOC is not zero, the SOC of each receiver reflects the algorithm's performance. The impacts of the receiver number, the charging time, and the transmitting power on the receivers' SOC will be evaluated in this section. The principles of simultaneous charging and the methods to improve the charging performance in the multi-user RBC system will be obtained finally.
\subsection{Simulation Design}\label{}
The simulation is divided into two parts: 1) the impacts that the charging time and the receiver number have on the receivers' SOC; 2) the impacts that the transmitting power and the receiver number have on the receivers' SOC. The parameters of the simulation are illustrated as follows.
For 1), the receiver number $N_r$ varies from 10 to 50, while the transmitting power $P_t$ is 20W. We compare the receivers' average SOC after the receivers have been charged for 1, 2, and 3 hours by the transmitter, and analyze the impacts of the receiver number and the charging time on the receivers' SOC.
For 2), the receiver number $N_r$ varies from 10 to 50, the charging time T is 3 hours, and the transmitting power $P_t$ takes 20W, 40W, 60W, 80W and 100W, respectively. After the charging process, the receivers' average SOC under different transmitting power is compared. The impacts of the transmitting power and the receiver number on the receivers' SOC are illustrated by the comparison.
Moreover, we set the time slot $T_c$ as 10s by calculating the ratio of the charging time with the battery capacity. Since the parameters (receiver initial SOC, discharging power, etc.) of each receiver are random during the scheduling process, the ``averaging multiple experiments" method is adopted to eliminate randomness. Therefore, the SOC of each receiver in the simulation results is the result of averaging multiple scheduling simulations.
\subsection{Simulation Results}\label{}
\emph{1)} When the transmitting power $P_t$ is 20W, the average SOC of all receivers after being charged for 1, 2, and 3 hours is compared. The receiver charging power $P_c$ in a charging time slot is equal to $P_{r44}$ calculated by \eqref{power-soc-fitting44} or $P_{r45}$ calculated by \eqref{power-soc-fitting45}. The 1, 2, and 3 hour durations correspond to different stages of the same charging process. The comparison results are depicted in Fig. \ref{FAFCtime-com}.
\begin{figure}[!t]
\centering
\includegraphics[scale=0.67]{Fig14-SOC-vs-Number.pdf}
\caption{Receiver Average SOC vs. Receiver Number ($P_t$=20W)}
\label{FAFCtime-com}
\end{figure}
In Fig. \ref{FAFCtime-com}, each curve denotes the average SOC for different receiver numbers and charging times. The receiver SOC decreases as the receiver number increases, since the total consumed energy grows with the receiver number while the total charging energy is fixed in each time slot. Moreover, when the charging energy is greater than the consumed energy, e.g., when the receiver number is less than 35, the receiver SOC increases as the charging time prolongs. Conversely, the average SOC decreases when the receiver number is larger than 35.
Given different charging times, to illustrate the impacts of the receiver number on the average SOC, we show the variation of the average SOC with different receiver numbers in Fig. \ref{FAFCtime-num}.
\begin{figure}[!t]
\centering
\includegraphics[scale=0.67]{Fig15-SOC-vs-Time.pdf}
\caption{Receiver Average SOC vs. Charging time ($P_t$=20W)}
\label{FAFCtime-num}
\end{figure}
\begin{figure}[!t]
\centering
\includegraphics[scale=0.67]{Fig16-SOC-vs-Number.pdf}
\caption{Receiver Average SOC vs. Receiver Number (T=3h)}
\label{FAFCpower-com}
\end{figure}
In Fig. \ref{FAFCtime-num}, the receiver SOC increases as the charging time prolongs when the receiver number is small, since the consumed energy is less than the charging energy. When the receiver number is about 30, the receiver SOC stays close to steady regardless of the charging time. Moreover, the receiver SOC decreases as the charging time prolongs when the receiver number $N_r$ is 40 or 50.
Thus, to maximize the receiver SOC, the charging time should be prolonged when the charging energy is greater than the consumed energy. Furthermore, when the transmitting power is fixed, there exists a threshold for the number of receivers being charged simultaneously, in order to avoid the system running out of power.
\emph{2)} To illustrate the impacts of the transmitting power on the receiver SOC, we depict the receivers' average SOC in Fig. \ref{FAFCpower-com} after receivers being charged for 3 hours with 20W, 40W, 60W, 80W and 100W transmitting power.
In Fig. \ref{FAFCpower-com}, each curve denotes the average SOC for different receiver numbers and transmitting powers. When the transmitting power is fixed, the average SOC decreases gradually, since the consumed energy increases as the receiver number grows. Moreover, when the receiver number is the same, the receiver SOC increases with the transmitting power.
To explain the impacts of the receiver number $N_r$ under different transmitting power on the receiver SOC, the variation trend of the average receiver SOC with different receiver numbers is shown in Fig. \ref{FAFCpower-num}.
\begin{figure}[!t]
\centering
\includegraphics[scale=0.67]{Fig17-SOC-vs-Power.pdf}
\caption{Receiver Average SOC vs. Transmitting Power (T=3h)}
\label{FAFCpower-num}
\end{figure}
From Fig. \ref{FAFCpower-num}, for the same receiver number, the receiver SOC rises when the transmitting power increases. When the receiver number is small and the transmitting power is high, since the charging energy can satisfy the consumed energy, the receivers can be fully charged approximately. For example, when the receiver number is 10, all the receivers can be almost fully charged if the transmitting power is greater than 40W.
Therefore, if the transmitter power source can meet the charging requirements, the transmitting power should be increased to extend the working time of all receivers. Moreover, the charging power for receivers will increase as the charging efficiency rises. For example, when the source power $P_s$ is 50W, the charging power $P_e$ is 10W with a charging efficiency ${\eta}_{o}$ of 20\%, while $P_e$ is 15W with a 30\% charging efficiency ${\eta}_{o}$. Thus, to improve the charging performance, it is also effective to increase the single user's charging efficiency in the multi-user RBC system.
Moreover, from Figs. \ref{FAFCtime-com}, \ref{FAFCtime-num}, \ref{FAFCpower-com}, and \ref{FAFCpower-num}, given the charging power $P_{r44}$ or $P_{r45}$, the change trend of the receiver SOC is the same and the difference between the two SOCs is small. Therefore, the small difference between the two fitting functions has no effect on the simulation results.
\subsection{Analysis Summary}\label{}
The factors impacting the performance of the FAFC scheduling algorithm include: the transmitting power, the receiver number, the charging time, and the charging efficiency. The following conclusions are obtained from the simulation results:
1) When the transmitting power and the receiver number are fixed, if the fixed charging energy is greater than the consumed energy in a charging time slot, the receiver SOC increases with the charging time increasing. Otherwise, the receiver SOC decreases as the charging time prolongs.
2) The receiver SOC decreases as the receiver number increases, under the fixed transmitting power and the fixed charging time.
3) Regardless of the variation of the charging time and the receiver number, the receiver SOC grows with the transmitter transmitting power increasing.
4) When the charging time is 3 hours and the receiver number is 10, the receivers will be almost fully charged when the transmitter transmitting power is greater than 40W.
To improve the FAFC scheduling algorithm performance (i.e., to keep all receivers working as long as possible for fairness in the multi-user RBC system), the strategies are as follows:
1) To maximize the SOC of each receiver, the charging time should be prolonged when the charging energy is greater than the consumed energy.
2) When the transmitting power is fixed, there exists a threshold for the number of receivers being charged simultaneously in order to avoid the system running out of power.
3) If the power supply can meet the charging requirements, the transmitting power should be improved to extend the working time of all receivers.
4) For the multi-user RBC system, increasing the charging efficiency of the single user is the essential method to improve the charging performance.
\section{Conclusions}\label{Section5}
We present the First Access First Charge (FAFC) scheduling algorithm for wireless power transfer in the multi-user RBC system to keep all IoT devices working as long as possible for fairness. The multi-user RBC system includes a transmitter with fixed transmitting power and multiple receivers working with different battery statuses and power consumption patterns. Based on the FAFC scheduling algorithm, the transmitting power is scheduled to charge multiple IoT devices according to their accessing time sequence. We quantify the system parameters for the FAFC scheduling algorithm implementation. The receiver preferred charging power can be obtained from its SOC, while the discharging power is determined by the working status of the IoT device. Then, the operational pseudo code is presented for the algorithm implementation. Finally, the simulation demonstrates the performance of the FAFC scheduling algorithm for the multi-user RBC system with the impacts of the transmitting power, the receiver number, and the charging time.
Based on the performance analysis, we find that, to keep all receivers working as long as possible for fairness, the charging time should be prolonged when the charging energy is greater than the consumed energy. Moreover, the transmitting power should be increased if the transmitter power source can meet the charging demands. Furthermore, the number of receivers being charged simultaneously should be limited by a threshold, and the charging efficiency of each single IoT device should be raised to improve the overall charging performance.
However, there are still several open issues that can be studied in the future, for example:
\begin{itemize}
\item In the charging process, the charging urgency of each receiver is more important than its accessing order, so the scheduling algorithm should be designed around the charging urgency rather than the accessing time sequence to improve the charging quality of each IoT device.
\item Throughout the charging process, established charging connections may be disconnected and other receivers may connect to the transmitter, which results in a dynamically changing receiver number. Thus, it is necessary to analyze the performance of the scheduling algorithm when the receiver number varies.
\item The scheduling process of multi-transmitter to multi-user could be further investigated.
\end{itemize}
\bibliographystyle{IEEEtran}
\section{Introduction}
In recent years, there has been increased interest in the role of supermassive black holes (SMBHs) in galaxy formation and evolution.
Most massive spheroidal galaxies are believed to harbor SMBHs at their centers,
and the masses of the SMBHs have a correlation with the luminosities \citep{magorrian98,bentz09,bennert10,greene10,jiang11,park15},
the stellar velocity dispersions \citep{ferrarese00,gebhardt00,tremaine02,gultekin09,woo10},
and Sersic indices \citep{graham01,graham07} of their spheroids.
The proposed scenario for the growth of SMBHs is that they are assembled by gas accretion \citep{lynden69}.
The SMBHs are thought to grow very rapidly in an active phase, i.e., the period of active galactic nuclei (AGNs).
During this phase, AGNs emit enormous amounts of energy
($10^{43}$--$10^{48}$\,$\rm erg\,s^{-1}$; e.g., \citealt{woo02}) from gamma-ray to radio.
Most of our knowledge of AGNs comes from unobscured type 1 AGNs,
found by using surveys of X-ray, ultraviolet (UV), optical, and radio observations
\citep{grazian00,becker01,anderson03,croom04,risaliti05,schneider05,veron-cetty06,young09}.
However, several studies (e.g., \citealt{comastri01,tozzi06,polletta08}) have reported that
the soft X-ray, UV, and optical based AGN surveys could neglect
a large fraction (possibly more than $50\%$) of AGNs with very red colors,
due to the dust extinction from the intervening dust and gas in their host galaxies \citep{webster95,cutri02}.
From a related point of view, there is another missing population of AGNs
whose red colors are caused by the interstellar medium of our own Galaxy \citep{im07,lee08}.
\begin{figure*}[!t]
\centering
\figurenum{1}
\includegraphics[width=\textwidth]{sample_supp.png}\\
\caption{(a) Redshifts vs. $M_{K}$ magnitudes for NIR-red AGNs and unobscured type 1 quasars.
The circles represent 29 NIR-red AGNs listed in \cite{marble03} and
the red filled circles denote the 16 NIR-red AGNs used in this work.
The blue dots represent the redshifts and $M_{K}$ magnitudes of unobscured type 1 quasars listed in SDSS DR7 \citep{shen11}.
(b) Color-color diagram using $g'-K$ and $J-K$ magnitudes.
The meanings of the circles and the blue dots are identical to the left panel,
and the green triangles denote the $g'-K$ and $J-K$ magnitudes of red quasars used in our previous studies \citep{kim15b,kim18}.
Here, $g'$-, $J$-, and $K$-band magnitudes are corrected for the Galactic extinction \citep{schlafly11},
and the $g'$-band magnitude is in AB unit, while $J$- and $K$-band magnitudes are based on the Vega magnitudes.}
\end{figure*}
The AGNs with red colors are called red AGNs, and they are considered to be a different population from unobscured type 1 AGNs.
In several simulation studies \citep{hopkins05,hopkins06,hopkins08}, red AGNs have been predicted to be in an intermediate phase between
the merger-driven star-forming galaxies, such as ultra-luminous infrared galaxies (ULIRGs; \citealt{sanders88,sanders96}),
and unobscured type 1 AGNs.
Although this explanation for red AGNs is still controversial due to several reasons
(e.g., \citealt{puchnarewicz98,whiting01,wilkes02,schawinski11,schawinski12,simmons12,kocevski12,rose13}),
this scenario is further supported by several observational studies.
For example, red AGNs have (i) high accretion rates \citep{urrutia12,kim15b},
(ii) enhanced star formation activity \citep{georgakakis09},
(iii) a frequent occurrence of merging features \citep{urrutia08,glikman15},
(iv) young radio jets \citep{georgakakis12},
(v) red continua from dust extinction \citep{glikman07,urrutia09}, and
(vi) line-luminosity ratios explainable only when considering dust extinction \citep{kim18}.
Since these observational results could be affected by the limited sample sizes of red AGNs,
there have been several efforts to search for more red AGNs
(\citealt{webster95,benn98,cutri01,cutri02,smith02,glikman07,glikman12,urrutia09,banerji12,stern12,assef13,fynbo13,lacy13}).
A significant number of these studies found red AGNs using large-area NIR photometric surveys,
such as the Two Micron All-Sky Survey (2MASS; \citealt{skrutskie06}),
the UKIRT Infrared Deep Sky Survey (UKIDSS; \citealt{lawrence07}),
and the $\it Wide$-$\it field~Infrared~Survey~Explorer$ ($\it WISE$; \citealt{wright10}) survey.
Compared to how large-area NIR photometric surveys have contributed to our understanding of red AGNs,
investigations based on NIR spectroscopic data
(e.g., \citealt{glikman07,glikman12,kim15b}) are limited.
Despite the limited contribution, the NIR spectrum includes useful information for investigating the nature of red AGNs.
For example, (i) BH masses (Paschen lines: \citealt{kim10,landt11b}, and Brackett lines: \citealt{kim15a}),
(ii) bolometric luminosities \citep{kim10},
(iii) broad-line region (BLR) sizes \citep{landt11a},
(iv) temperatures \citep{glikman06,landt11b,kim15a} and covering factors of hot dust \citep{kim15a},
(v) stellar velocity dispersions ($\sigma_{\ast}$; \citealt{woo10,kang13}), and
(vi) star forming activity \citep{imanishi11,kim12}
can be measured from the NIR spectra.
In this work, we present high signal-to-noise ratio (S/N; up to several hundreds) and medium-resolution ($R \sim 2000$)
optical and NIR spectra of a sample of 16 NIR-red AGNs at $z \sim 0.3$,
for which optical images and polarizations were obtained in previous studies \citep{smith02,marble03}.
We concentrate on a detailed description of our sample and observation (\S\,2),
spectral fittings for hydrogen lines (\S\,3), dust-reddening measurements (\S\,4),
accretion rates (\S\,5), and the $M_{\rm BH}$--$\sigma_{\ast}$ relation for NIR-red AGNs (\S\,6).
In \S\,7, we briefly summarize our results.
Throughout this work, we use a standard $\Lambda$CDM cosmological model of $H_{0}=70\,{\rm km\,s^{-1}}$ Mpc$^{-1}$,
$\Omega_{m}=0.3$, and $\Omega_{\Lambda}=0.7$,
supported by past observational studies (e.g., \citealt{im97}).
Our photometry uses the Vega magnitude system, except for the $g'$ band that is in the AB system.
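As a numerical illustration of the adopted cosmology, the sketch below integrates the flat $\Lambda$CDM comoving-distance integral and converts an apparent magnitude into an absolute one. It omits the K-correction applied when computing the tabulated $M_{K_{\rm s}}$ values, so the result is only indicative; the code is a stand-alone sketch, not the method of the paper.

```python
import math

def luminosity_distance_mpc(z, h0=70.0, om=0.3, ol=0.7, steps=10000):
    """Luminosity distance (Mpc) for flat LambdaCDM, via trapezoidal
    integration of dz'/E(z') with E = sqrt(om*(1+z)^3 + ol)."""
    c = 299792.458  # speed of light, km/s
    def inv_e(zp):
        return 1.0 / math.sqrt(om * (1.0 + zp) ** 3 + ol)
    dz = z / steps
    integral = 0.5 * (inv_e(0.0) + inv_e(z))
    for i in range(1, steps):
        integral += inv_e(i * dz)
    d_comoving = (c / h0) * integral * dz
    return (1.0 + z) * d_comoving

def absolute_mag(m_apparent, z):
    """Absolute magnitude from the distance modulus (no K-correction)."""
    d_pc = luminosity_distance_mpc(z) * 1.0e6
    return m_apparent - 5.0 * math.log10(d_pc / 10.0)

# K_s = 13.2 mag at z = 0.213 (0157+1712), before any K-correction:
print(round(absolute_mag(13.2, 0.213), 1))  # → about -26.9
```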
\section{The Sample and Observation}
\subsection{Sample}
Our sample is drawn from the 29 2MASS-based red AGNs listed in \cite{marble03}.
The 29 objects were selected with the following procedures.
First, \cite{cutri01,cutri02} chose red AGN candidates through a combination of red color in NIR ($J-K_{\rm s} > 2$)
and detection in each of the three 2MASS bands (complete to $K_{s} < 15.0$\,mag).
Then, among the candidates, 70 targets were spectroscopically confirmed in \cite{smith02}.
Furthermore, \cite{smith02} performed optical polarimetric observations
using the Two-Holer Polarimeter/Photometer on the Steward Observatory 1.5\,m telescope and the Bok 2.3\,m reflector.
Finally, \cite{marble03} selected 29 out of the 70 NIR-red AGNs within the redshift range of $0.136 \leq z \leq 0.596$,
and observed them with the Wide Field Planetary Camera 2 (WFPC2) on board the $\it Hubble~Space~Telescope$ ($\it HST$).
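The 2MASS color preselection described above reduces to a simple filter. The mini-catalog below is invented for illustration; only the $J-K_{\rm s} > 2$ and $K_{\rm s} < 15.0$ criteria come from the text.

```python
# Hypothetical mini-catalog of (name, J, Ks) Vega magnitudes.
catalog = [
    ("A", 15.9, 13.2),  # J - Ks = 2.7: red and bright enough
    ("B", 14.0, 12.9),  # J - Ks = 1.1: too blue
    ("C", 17.8, 15.4),  # red, but fainter than the Ks = 15.0 limit
]

def select_red_candidates(catalog, jk_cut=2.0, ks_limit=15.0):
    """2MASS-style preselection: red NIR color and detection above
    the survey completeness limit (simplified Cutri et al. cuts)."""
    return [name for name, j, ks in catalog
            if (j - ks) > jk_cut and ks < ks_limit]

print(select_red_candidates(catalog))  # → ['A']
```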
\begin{deluxetable*}{ccccccccccccc}[!t]
\tabletypesize{\scriptsize}
\tablecolumns{10}
\tablewidth{0pt}
\tablenum{1}
\tablecaption{Observing summary \label{tbl1}}
\tablewidth{0pt}
\tablehead{
\colhead{}& \colhead{}& \colhead{}& \colhead{}& \colhead{}& \colhead{}& \multicolumn{3}{c}{NIR Spectroscopy}& \colhead{}& \multicolumn{3}{c}{Optical Spectroscopy}\\
\cline{7-9} \cline{11-13}\\
\colhead{Object}& \colhead{R.A.}& \colhead{Decl.}&
\colhead{Redshift}& \colhead{$K_{\rm s}$}& \colhead{$M_{K_{\rm s}}$\tablenotemark{a}}&
\colhead{Telescope/}& \colhead{Exp}& \colhead{Observing}& \colhead{}& \colhead{Telescope/}& \colhead{Exp}& \colhead{Observing}\\
\colhead{}& \colhead{(J2000.0)}& \colhead{(J2000.0)}&
\colhead{($z$)}& \colhead{(mag)}& \colhead{(mag)}&
\colhead{Instrument}& \colhead{(s)}& \colhead{dates}& \colhead{}& \colhead{Instrument}& \colhead{(s)}& \colhead{dates}}
\startdata
0106$+$2603& 01 06 07.7& +26 03 34& 0.411& 14.6& -27.9& Subaru/IRCS& 800\tablenotemark{b}& 2015 Nov& & Keck/ESI& 3600& 2003 Oct\\
& & & & & & & 6000\tablenotemark{c}& 2015 Nov& & \\
0157$+$1712& 01 57 21.0& +17 12 48& 0.213& 13.2& -27.4& Gemini/GNIRS& 3600& 2015 Aug& & Keck/ESI& 7200& 2003 Oct\\
0221$+$1327& 02 21 50.6& +13 27 41& 0.140& 13.2& -26.0& Magellan/FIRE& 6657& 2015 Jan& & Keck/ESI& 5400& 2003 Oct\\
0234$+$2438& 02 34 30.6& +24 38 35& 0.310& 13.7& -27.7& Gemini/GNIRS& 2160& 2015 Aug& & --& --& --\\
0324$+$1748& 03 24 58.2& +17 48 49& 0.328& 12.8& -28.8& Magellan/FIRE& 3635& 2015 Jan& & Keck/ESI& 3600& 2004 Sep\\
0348$+$1255& 03 48 57.6& +12 55 47& 0.210& 13.6& -27.1& Gemini/GNIRS& 2880& 2015 Aug& & Keck/ESI& 3600& 2004 Sep\\
1258$+$2329& 12 58 07.4& +23 29 21& 0.259& 13.4& -27.3& IRTF/SpeX& 9000& 2016 Mar& & SDSS\\
1307$+$2338& 13 07 00.6& +23 38 05& 0.275& 13.4& -28.1& IRTF/SpeX& 18000& 2016 Mar& & --& --& --\\
1453$+$1353& 14 53 31.5& +13 53 58& 0.139& 13.1& -26.2& IRTF/SpeX& 9600& 2016 Mar& & SDSS\\
1543$+$1937& 15 43 07.7& +19 37 51& 0.228& 12.7& -27.8& Gemini/GNIRS& 3600& 2016 Apr& & Keck/ESI& 3600& 2004 Jul\\
& & & & & & IRTF/SpeX& 9600& 2016 Mar& & \\
1659$+$1834& 16 59 39.7& +18 34 36& 0.170& 12.9& -26.8& IRTF/SpeX& 8400& 2016 Mar& & Keck/ESI& 5400& 2004 Jul\\
2222$+$1952& 22 22 02.2& +19 52 31& 0.366& 13.3& -29.0& Gemini/GNIRS& 2160& 2015 Aug& & --& --& --\\
2222$+$1959& 22 22 21.1& +19 59 47& 0.211& 12.9& -27.4& Gemini/GNIRS& 2880& 2015 Aug& & Keck/ESI& 5400& 2004 Sep\\
2303$+$1624& 23 03 04.3& +16 24 40& 0.289& 14.7& -26.6& Gemini/GNIRS& 5040& 2015 Aug& & --& --& --\\
2327$+$1624& 23 27 45.6& +16 24 34& 0.364& 14.5& -27.6& Gemini/GNIRS& 3600& 2015 Aug& & Keck/ESI& 5400& 2004 Sep\\
2344$+$1221& 23 44 49.5& +12 21 43& 0.199& 12.9& -27.2& Gemini/GNIRS& 2880& 2015 Aug& & Keck/ESI& 3600& 2004 Jul
\enddata
\tablenotetext{a}{The $M_{K_{\rm s}}$ values are recalculated using the method of Marble et al. (2003)
with the standard $\Lambda$CDM cosmological model.}
\tablenotetext{b}{The grism mode observation}
\tablenotetext{c}{The echelle mode observation}
\end{deluxetable*}
Among the 29 NIR-red AGNs, we select 16 AGNs at $z \sim 0.3$ (from 0.139 to 0.411)
for which the redshifted P$\beta$ or P$\alpha$ line is observable within the sky window wavelength range.
The 16 NIR-red AGNs span a wide range of luminosities ($-29.0 < M_K < -26.0$).
Figure 1 shows the redshifts versus the $M_{K}$ magnitudes and $g'-K$ colors versus the $J-K$ colors of
the 16 NIR-red AGNs and red quasars used in our previous studies (\citealt{kim15b,kim18}; originally from \citealt{urrutia09}).
These NIR-red AGNs have red colors of $J-K > 2$ and $g'-K \gtrsim 4$.
Compared to the red quasars that we studied previously ($J-K > 1.3$ and $g'-K > 5$),
a non-negligible fraction of the 16 NIR-red AGNs have $g'-K < 5$, as blue as unobscured type 1 quasars.
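The requirement that the redshifted P$\beta$ or P$\alpha$ line fall in a usable sky window can be sketched as below; the window edges are rough assumptions about ground-based NIR transmission, not values from the paper.

```python
def line_in_window(rest_micron, z, windows):
    """Check whether a redshifted line lands inside one of the usable
    NIR atmospheric windows (window edges are rough assumptions)."""
    observed = rest_micron * (1.0 + z)
    return any(lo <= observed <= hi for lo, hi in windows)

# Approximate windows between the strong telluric absorption bands (um):
WINDOWS = [(0.95, 1.10), (1.17, 1.33), (1.49, 1.78), (1.99, 2.40)]

# P-beta (1.2818 um) at z = 0.213 vs. P-alpha (1.8751 um) at z = 0.411:
print(line_in_window(1.2818, 0.213, WINDOWS),
      line_in_window(1.8751, 0.411, WINDOWS))  # → True False
```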
For our sample, we emphasize the advantage of the availability of various types of high-quality data.
The 16 NIR-red AGNs have optical images from $\it HST$ \citep{marble03} and
optical broadband polarimetry \citep{smith02}.
We expect that a combined data set of the high-quality images from the $\it HST$ data, the optical polarization,
and the optical/NIR spectra from this study
will be unique and useful for the comprehensive investigations of the nature of NIR-red AGNs.
\subsection{NIR Observation}
We performed NIR spectroscopic observations with four telescopes and their respective instruments.
Since the 16 NIR-red AGNs have different brightnesses and redshifts,
the observations were performed with instruments and telescopes chosen to match the characteristics of each target.
We describe the details of our NIR observations below.
First, NIR spectra of nine NIR-red AGNs were obtained with the cross-dispersed mode of
the Gemini Near-infrared Spectrograph (GNIRS; \citealt{elias06}) on the 8.1\,m Gemini-North telescope.
The observational configuration is a combination of a 110 l/mm grating, short blue camera, and 0$\farcs$675 slit width,
which provides a discontinuous spectral coverage from $\sim 1$\,$\mu$m to $\sim 2.1$\,$\mu$m
with a spectral resolution of $R \sim 2600$.
Second, we used the SpeX \citep{rayner03} on the 3.0\,m NASA Infrared Telescope Facility (IRTF) for five NIR-red AGNs.
In this observation, we used the short cross-dispersion mode (SXD) with a 0$\farcs$3 slit width
to achieve a spectral resolution of $R \sim 2000$ across 0.7\,--\,2.55\,$\mu$m.
Among these five NIR-red AGNs, one, 1543$+$1937, overlaps with the nine NIR-red AGNs observed with GNIRS/Gemini.
Third, in order to obtain the NIR spectra of two NIR-red AGNs,
we used the Folded-port Infrared Echellette (FIRE) on the 6.5\,m Magellan Baade telescope with a 1$\farcs$0 slit width.
This observational configuration allows the wavelength coverage to span from 0.82 to 2.51\,$\mu$m
with a resolving power of $R \sim 3600$.
Fourth, an NIR spectrum of one NIR-red AGN was obtained with
the Infrared Camera and Spectrograph (IRCS; \citealt{tokunaga98,kobayashi00}) on the 8.2\,m Subaru telescope.
In this observation, we used both grism and echelle modes.
For the grism mode observation, we used a 0$\farcs$1 slit width and $HK$ band with the grism of 52 milliarcsecond pixel scale,
which provides a spectral coverage of 1.4\,--\,2.5\,$\mu$m with a spectral resolution of $R \sim 440$.
The Echelle mode observation was performed with a 0$\farcs$54 slit width and $K$ band,
and this provides a spectral resolution of $R \sim 6600$ with a discontinuous wavelength coverage from 1.90\,$\mu$m to 2.49\,$\mu$m.
\begin{figure*}
\centering
\figurenum{2}
\includegraphics[width=\textwidth]{figure2_1_1.png}\\
\includegraphics[width=\textwidth]{figure2_1_2.png}\\
\includegraphics[width=\textwidth]{figure2_1_3.png}\\
\caption{(Left) NIR spectra of 16 NIR-red AGNs and their S/Ns. The gray lines indicate observed spectra in the observed frame,
and the black lines indicate binned spectra with the spectral resolution.
For 0106$+$2603 and 1543$+$1937, the spectra were obtained with two different observing modes and instruments.
Each binned spectrum from an individual observation is represented by the blue and red lines,
and the sky blue and pink lines indicate their original spectra, respectively.
Moreover, several emission lines (P$\alpha$: 1.8751\,$\mu$m, P$\beta$: 1.2818\,$\mu$m, P$\gamma$: 1.0938\,$\mu$m, P$\delta$: 1.0049\,$\mu$m,
P$\epsilon$: 0.9546\,$\mu$m, P$\zeta$: 0.9229\,$\mu$m, [\ion{Fe}{2}]: 1.5995 and 1.2567\,$\mu$m,
\ion{O}{1}: 1.1287, and 0.8446\,$\mu$m, and \ion{He}{1}: 1.0830\,$\mu$m) are marked on the spectra.
However, when an emission line is not obvious due to low S/N or overlapping sky lines,
the line is marked with a question mark.
(Right) $HST$ images of 16 NIR-red AGNs.
The red boxes across the objects indicate slit widths and the yellow arrows at the top left denote north.}
\end{figure*}
\begin{figure*}[!t]
\centering
\figurenum{2}
\includegraphics[width=\textwidth]{figure2_2_1.png}\\
\includegraphics[width=\textwidth]{figure2_2_2.png}\\
\includegraphics[width=\textwidth]{figure2_2_3.png}\\
\caption{Continued}
\end{figure*}
\begin{figure*}[!t]
\centering
\figurenum{2}
\includegraphics[width=\textwidth]{spectra_fig_7.pdf}\\
\includegraphics[width=\textwidth]{spectra_fig_8.pdf}\\
\caption{Continued}
\end{figure*}
Our observations were performed under clear weather conditions with sub-arcsecond seeing of $\sim$$0\farcs6$.
For the flux calibration and telluric correction, we observed nearby A0V stars before or after the observations of the NIR-red AGNs.
In order to produce fully reduced spectra, we used the Gemini image reduction and analysis facility (IRAF) packages \citep{cooke05}, Spextool \citep{vacca03,cushing04},
FIREHOSE, and the standard IRAF spectral reduction procedures \citep{massey88}
for the spectra obtained with GNIRS, SpeX, FIRE, and IRCS, respectively.
Note that the Gemini reduction packages do not include a flux calibration process for the GNIRS data;
the recommended practice is to flux-calibrate by scaling the observed spectrum to existing photometry or spectra.
For this reason, we perform the flux calibration by scaling the GNIRS spectrum to
its $J$-, $H$-, and $K$-band magnitudes of 2MASS photometry.
For 2303$+$1624, however, the $J$-band flux was abnormally small in comparison to
the $H$- and $K$-band fluxes and had a large error,
so we used $y$-band flux from the Pan-STARRS survey \citep{chambers16} instead of the $J$-band flux.
Due to our observational setup, the GNIRS spectra were taken in four disjoint orders,
and each order was flux-scaled using an adjacent band.
The two short orders ($\sim$10,000--14,000\,$\rm \AA{}$) were scaled by $J$-band magnitude,
and we used $H$- and $K$-band magnitudes for the third-order ($\sim$15,000--18,000\,$\rm \AA{}$)
and fourth-order ($\sim$20,000--24,000\,$\rm \AA{}$), respectively.
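The band-by-band scaling amounts to one multiplicative factor per order, matching the synthetic band flux of the spectrum to the catalog magnitude. The sketch below uses a hypothetical flat response curve and zero-point flux density rather than the real 2MASS values.

```python
def scale_to_photometry(flux, response, mag, zp_flux):
    """Multiplicative factor matching the response-weighted mean flux
    density of a spectrum to a catalog magnitude; zp_flux is the band
    zero-point flux density (hypothetical numbers in the example)."""
    synthetic = sum(r * f for r, f in zip(response, flux)) / sum(response)
    target = zp_flux * 10.0 ** (-0.4 * mag)
    return target / synthetic

# Flat toy spectrum at 1e-17; a magnitude implying 2e-17 at the zero point:
flux = [1.0e-17] * 100
response = [1.0] * 100
factor = scale_to_photometry(flux, response, mag=0.0, zp_flux=2.0e-17)
print(round(factor, 6))  # → 2.0
```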
In order to check the reliability of the above GNIRS flux calibration,
we obtained the SpeX spectrum for 1543+1937 (a bright and point-like AGN in our data, see the $HST$ image in Figure 2).
Note that both SpeX and GNIRS data were obtained under clear weather and a decent seeing condition ($\sim 0 \farcs 7$).
The SpeX data were flux-scaled with a single relative scaling factor derived from its $K$-band magnitude (see the paragraph below).
Our comparison shows that the SpeX and GNIRS data match with each other within an $\sim 3$\,\% difference from
the second through the fourth orders.
However, the shortest order of the GNIRS spectrum was $\sim$20\,\% different from the SpeX data.
Although the shortest order of GNIRS spectra were not used in this study,
we caution that the flux calibration of the shortest order could be off by about 20\,\%.
In the next step, the NIR spectra from SpeX, FIRE, and IRCS
were scaled to their $K$-band magnitudes of 2MASS photometry.
This step was necessary, since there is a possibility of photon loss due to
the different observing conditions and slit widths.
We find that the scaling factors for the SpeX, FIRE, and IRCS spectra are modest,
ranging from 0.91 to 1.74 (with a median of 1.18);
in particular, the scaling factors of the spectra obtained with SpeX are somewhat smaller, with a median of $\sim$1.07 (ranging from 0.91 to 1.32).
We also note that the AGN variability in NIR is a worrisome factor in this kind of calibration,
since we are calibrating the spectra using the data that were taken at a different epoch.
However, the NIR AGN variability is known to be generally small ($\sim 0.2$\,mag; e.g., \citealt{enya02}),
so this flux calibration should be good to $\sim$20\,\% accuracy.
\begin{deluxetable}{ccc}
\tablecolumns{3}
\tablewidth{0pt}
\tablenum{2}
\tablecaption{NIR Spectrum of 0106$+$2603\label{tbl2}}
\tablehead{
\colhead{$\lambda$}& \colhead{$f_{\lambda}$}& \colhead{$f_{\lambda}$ Uncertainty}\\
\colhead{($\rm \AA{}$)}& \colhead{($\rm erg\,s^{-1}\,cm^{-2}\,\AA^{-1}$)} & \colhead{($\rm erg\,s^{-1}\,cm^{-2}\,\AA^{-1}$)}
}
\startdata
14439& 4.6018E-17& 2.2044E-17\\
14471& 2.8404E-17& 4.8711E-18\\
14504& 1.9541E-17& 4.8643E-18\\
14537& 3.7478E-17& 6.2486E-18\\
14570& 2.5344E-17& 3.7455E-18\\
14603& 2.6977E-17& 1.8525E-18\\
14636& 3.1782E-17& 2.4565E-18
\enddata
\tablecomments{This table represents only a part of the NIR spectrum of 0106$+$2603.
All the NIR spectra of the 16 NIR-red AGNs obtained with the four telescopes
are available in machine-readable format.}
\end{deluxetable}
In total, we obtained 0.7--2.55\,$\mu$m NIR spectra of 16 NIR-red AGNs at $z \sim 0.3$
with a moderate resolution of $R >$ 2000 from the four instruments and telescopes.
We summarize the observation information in Table 1.
\subsection{Optical Observation}
In addition to the NIR spectra, we obtained optical spectra for 12 NIR-red AGNs.
Two of us (G. Canalizo \& M. Lazarova) observed 10 NIR-red AGNs
using the Echellette Spectrograph and Imager (ESI; \citealt{sheinis02}) on the Keck II telescope
with a spectral wavelength range of 3900\,$\rm \AA{}$ to 11000\,$\rm \AA{}$ and
a slit width of 1$\farcs$0 to achieve a spectral resolution of $R \sim 4000$.
Descriptions of the observations for the 10 NIR-red AGNs are given in Table 1.
Information about the data reduction is given in \citet{canalizo12}.
Among the 10 spectra from the Keck/ESI observation, 5 were used in \citet{canalizo12}.
For the remaining two NIR-red AGNs, we obtained the optical spectra from Data Release 12 (DR12) of the Sloan Digital Sky Survey (SDSS).
The fiber diameter is 3$\farcs$0, and
the spectral coverage of the SDSS spectra is 3800\,$\rm \AA{}$ to 9200\,$\rm \AA{}$ with a spectral resolution of 1500--2500.
\begin{figure*}
\centering
\figurenum{3}
\includegraphics[width=\textwidth]{figure3_1_1.png}\\
\includegraphics[width=\textwidth]{figure3_1_2.png}\\
\includegraphics[width=\textwidth]{figure3_1_3.png}\\
\caption{(Left) Optical spectra of 12 NIR-red AGNs and their S/N.
The gray and black lines indicate observed and binned spectra in the observed frame, and we binned the spectra with the spectral resolution.
We mark several emission lines in the optical wavelength region,
such as [\ion{O}{2}] (3727\,$\rm \AA{}$), H$\gamma$ (4340\,$\rm \AA{}$), H$\beta$ (4861\,$\rm \AA{}$),
[\ion{O}{3}] (4959 and 5007\,$\rm \AA{}$), H$\alpha$ (6563\,$\rm \AA{}$), and [\ion{N}{2}] (6548 and 6583\,$\rm \AA{}$).
(Right) $HST$ images of 12 NIR-red AGNs.
The red boxes and yellow arrows are identical to those in Figure 2,
and the red open circles indicate SDSS fiber diameters.}
\end{figure*}
\begin{figure*}
\centering
\figurenum{3}
\includegraphics[width=\textwidth]{figure3_2_1.png}\\
\includegraphics[width=\textwidth]{figure3_2_2.png}\\
\includegraphics[width=\textwidth]{figure3_2_3.png}\\
\caption{Continued}
\end{figure*}
For these optical spectra, the slit widths and fiber diameters are somewhat larger than
the slit widths used for the NIR spectra.
Although the inconsistency of the slit width can introduce discrepancies in the spectral properties between the optical and NIR spectra,
the effects on the spectral properties measured in this study are negligible due to the following reasons:
(i) broad emission lines (BELs) come from the nuclear region, and
(ii) we use only the optical or NIR spectrum to fit the continuum.
\section{High-S/N and Medium-resolution Spectra}
We show the fully reduced NIR spectra and the $HST$ images of the 16 NIR-red AGNs in Figure 2,
and the spectra are available in machine-readable form in Table 2.
Moreover, Figure 2 also shows the S/N of each spectrum, and we mark several interesting lines
such as P$\alpha$ (1.875\,$\mu$m), P$\beta$ (1.282\,$\mu$m), P$\gamma$ (1.094\,$\mu$m), P$\delta$ (1.005\,$\mu$m),
P$\epsilon$ (0.955\,$\mu$m), P$\zeta$ (0.923\,$\mu$m), [\ion{Fe}{2}] (1.600 and 1.257\,$\mu$m),
\ion{O}{1} (1.129 and 0.845\,$\mu$m), and \ion{He}{1} (1.083\,$\mu$m).
In addition to the NIR spectra, Figure 3 shows the reduced optical spectra of the 12 NIR-red AGNs,
and several lines of [\ion{O}{2}] (3727\,$\rm \AA{}$), H$\gamma$ (4340\,$\rm \AA{}$), H$\beta$ (4861\,$\rm \AA{}$),
[\ion{O}{3}] (4959 and 5007\,$\rm \AA{}$), H$\alpha$ (6563\,$\rm \AA{}$), and [\ion{N}{2}] (6548 and 6583\,$\rm \AA{}$)
are marked on the spectra. The optical spectra are given in ascii format in Table 3.
\begin{figure*}[!t]
\centering
\figurenum{4}
\includegraphics[scale=0.25]{Fit_Hb_Continuum.eps}
\includegraphics[scale=0.25]{Fit_Ha_Continuum.eps}\\
\includegraphics[scale=0.25]{Fit_Pb_Continuum.eps}
\includegraphics[scale=0.25]{Fit_Pa_Continuum.eps}
\caption{
(a) Top: the optical spectrum of 0106$+$2603 obtained with the ESI on Keck telescope.
The black solid line denotes the optical spectrum around the H$\beta$ line, and this spectrum includes several interesting emission lines,
such as H$\gamma$ (4341\,$\rm \AA{}$), [\ion{O}{3}] (4363, 4959, and 5007\,$\rm \AA{}$), and H$\beta$ (4861\,$\rm \AA{}$).
The fitted model continuum spectrum is represented by the red dashed line,
and this model is composed of a power-law component (the green dotted line)
and a component for the Fe blends (the sky blue dot-dashed line).
Bottom: the fitted model continuum-subtracted spectrum is represented,
and this continuum-subtracted spectrum is used for estimating the line luminosity and FWHM.
(b) The original and residual spectrum around the H$\alpha$ line of 0348$+$1255, which is obtained with Keck/ESI.
The meanings of the black solid line and the red dashed line are identical to those in panel (a).
The black arrows represent [\ion{O}{1}] (6300\,$\rm \AA{}$), H$\alpha$ (6563\,$\rm \AA{}$),
[\ion{N}{2}] (6548 and 6583\,$\rm \AA{}$), and [\ion{S}{2}] (6716 and 6731\,$\rm \AA{}$).
(c) The original and residual spectrum around the P$\beta$ line of 0324$+$1748, which is taken with Magellan/FIRE.
The meanings of the black solid line and the red dashed line are identical to those in panel (a),
and P$\beta$ and \ion{O}{1} (1.1287\,$\mu$m) are marked with the black arrows.
(d) The original and residual spectrum around the P$\alpha$ line of 1307$+$2338, which is obtained with IRTF/SpeX.
The meanings of the black solid line and the red dashed line are identical to those in panel (a),
and the P$\alpha$ line is marked with the black arrow.
}
\end{figure*}
\begin{deluxetable}{ccc}
\tablecolumns{3}
\tablewidth{0pt}
\tablenum{3}
\tablecaption{Optical Spectrum of 0106$+$2603\label{tbl3}}
\tablehead{
\colhead{$\lambda$}& \colhead{$f_{\lambda}$}& \colhead{$f_{\lambda}$ Uncertainty}\\
\colhead{($\rm \AA{}$)}& \colhead{($\rm erg\,s^{-1}\,cm^{-2}\,\AA^{-1}$)} & \colhead{($\rm erg\,s^{-1}\,cm^{-2}\,\AA^{-1}$)}
}
\startdata
3899.4& 8.3145E-17& 6.4623E-18\\
3900.4& 5.3173E-17& 1.1044E-17\\
3901.3& 2.6999E-17& 3.4251E-18\\
3902.3& 6.7212E-17& 1.8250E-17\\
3903.3& 1.2357E-16& 1.4382E-17\\
3904.3& 1.4238E-16& 3.2512E-18\\
3905.2& 1.1617E-16& 1.4047E-17
\enddata
\tablecomments{This table represents only a part of the optical spectrum of 0106$+$2603.
The optical spectra of the 12 NIR-red AGNs from the Keck/ESI observation and SDSS data
are available in machine-readable format.}
\end{deluxetable}
\subsection{Spectral Fitting of Hydrogen Lines}
In this subsection, we describe how the BELs of H$\beta$, H$\alpha$, P$\beta$, and P$\alpha$ are fitted
to measure the luminosities and FWHMs.
The fitting of these lines starts with the identification of the line,
and we find the H$\beta$, H$\alpha$, P$\beta$, and P$\alpha$ lines in 11, 12, 14, and 6 NIR-red AGNs, respectively.
After the line identification, we correct the spectra for the Galactic extinction \citep{schlafly11}
using the reddening law of \cite{fitzpatrick99}.
Then, we transform the spectra to the rest-frame and
fit the continua for the H$\beta$, H$\alpha$, P$\beta$, and P$\alpha$ lines.
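These two pre-processing steps can be sketched as below. The toy reddening curve only stands in for the Fitzpatrick (1999) law; everything else is generic.

```python
def deredden(wave_aa, flux, ebv, k_curve):
    """Galactic extinction correction:
    f_corr = f_obs * 10^(0.4 * E(B-V) * k(lambda))."""
    return [f * 10.0 ** (0.4 * ebv * k_curve(w))
            for w, f in zip(wave_aa, flux)]

def to_rest_frame(wave_aa, z):
    """Shift observed wavelengths to the rest frame."""
    return [w / (1.0 + z) for w in wave_aa]

def k_toy(wave_aa):
    """Toy reddening coefficient rising toward the blue (NOT the
    Fitzpatrick 1999 curve used in the paper)."""
    return 2.5 * (10000.0 / wave_aa)

corrected = deredden([5000.0, 10000.0], [1.0, 1.0], ebv=0.1, k_curve=k_toy)
print([round(c, 3) for c in corrected])  # → [1.585, 1.259]
print(to_rest_frame([12500.0], z=0.25))  # → [10000.0]
```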
For H$\alpha$, P$\beta$, and P$\alpha$,
the continuum around each line is fitted with a single power law,
however, an additional Fe component is required for the H$\beta$ line.
The Fe blends are determined by scaling and broadening the Fe template from the spectrum of IZw1 \citep{boroson92},
and this procedure is performed with \texttt{MPFIT} \citep{markwardt09} using Interactive Data Language (IDL).
As an example, Figure 4 shows the spectra around the H$\beta$, H$\alpha$, P$\beta$, and P$\alpha$ lines,
along with the fitted continuum models, and the continuum-subtracted spectra.
Note that we omit the stellar component for the continuum fit,
since the stellar component can be fit by the power law in such narrow wavelength ranges.
We note that several lines exist around the H$\beta$, H$\alpha$, P$\beta$, and P$\alpha$ lines
(e.g., H$\gamma$ $\lambda$4340, [\ion{O}{3}] $\lambda\lambda$4959, 5007 doublet,
[\ion{S}{2}] $\lambda\lambda$6716, 6731 doublet, and \ion{O}{1} $\lambda$11287).
Hence, the continuum-fitting regions are chosen to avoid the nearby lines.
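The single power-law continuum fit can be sketched as ordinary least squares in log-log space over line-free windows (the paper's fits use \texttt{MPFIT} and, for H$\beta$, an extra Fe component; the windows and spectrum below are invented):

```python
import math

def fit_power_law(wave, flux, windows):
    """Fit f = a * wave**b by least squares in log-log space, using
    only pixels inside the line-free continuum windows."""
    xs, ys = [], []
    for w, f in zip(wave, flux):
        if f > 0 and any(lo <= w <= hi for lo, hi in windows):
            xs.append(math.log(w))
            ys.append(math.log(f))
    n = len(xs)
    mean_x, mean_y = sum(xs) / n, sum(ys) / n
    b = (sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
         / sum((x - mean_x) ** 2 for x in xs))
    a = math.exp(mean_y - b * mean_x)
    return a, b

# Noise-free toy continuum f = 2 * wave**-1.5 sampled around H-beta:
wave = [4400.0 + 10.0 * i for i in range(80)]
flux = [2.0 * w ** -1.5 for w in wave]
a, b = fit_power_law(wave, flux, windows=[(4430, 4770), (5080, 5120)])
print(round(b, 3))  # → -1.5
```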
After the continuum subtraction, we model the narrow lines
using the [\ion{O}{3}] $\lambda\lambda$4959, 5007 and [\ion{S}{2}] $\lambda\lambda$6716, 6731 doublets as templates.
In order to fit the [\ion{S}{2}] lines, we use two single Gaussian functions.
However, the [\ion{O}{3}] lines require double Gaussian functions for their asymmetric blue wings \citep{greene05}.
Although \cite{greene05} suggest that the [\ion{O}{3}] lines are not appropriate as a template for
the narrow lines of unobscured type 1 quasars due to the blue wings,
this template gives better results than the [\ion{S}{2}] template
when fitting the narrow components of the H$\beta$ line
for part of our NIR-red AGN sample.
However, since the [\ion{S}{2}] template fits the narrow components of H$\alpha$, P$\beta$, and P$\alpha$ lines better,
the template from the [\ion{S}{2}] lines is primarily used to fit these lines,
except when the [\ion{S}{2}] lines are not detected.
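The double-Gaussian [O III] model (a core plus a broader, blueshifted wing producing the asymmetric blue wing) can be sketched as follows; all parameter values are illustrative.

```python
import math

def gaussian(x, amp, center, sigma):
    return [amp * math.exp(-0.5 * ((xi - center) / sigma) ** 2) for xi in x]

def oiii_template(x, amp_core, amp_wing, center, sigma_core,
                  wing_shift, sigma_wing):
    """Core + blueshifted broader wing, in the spirit of the
    double-Gaussian [O III] profile (Greene & Ho 2005)."""
    core = gaussian(x, amp_core, center, sigma_core)
    wing = gaussian(x, amp_wing, center - wing_shift, sigma_wing)
    return [c + w for c, w in zip(core, wing)]

x = [5007.0 + 0.2 * (i - 100) for i in range(201)]
model = oiii_template(x, amp_core=1.0, amp_wing=0.3, center=5007.0,
                      sigma_core=3.0, wing_shift=4.0, sigma_wing=8.0)

# The wing component skews the profile: more flux blueward of the core.
blue = sum(f for xi, f in zip(x, model) if xi < 5007.0)
red = sum(f for xi, f in zip(x, model) if xi > 5007.0)
print(blue > red)  # → True
```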
\begin{deluxetable*}{ccccccccccccc}[!t]
\tablecolumns{13}
\tablewidth{0pt}
\tablenum{4}
\tablecaption{Line measurements of [\ion{O}{3}], [\ion{N}{2}], and [\ion{S}{2}] lines \label{tbl4}}
\tablehead{
\colhead{Object Name}& \colhead{}&
\multicolumn{2}{c}{[\ion{O}{3}] $\lambda$5007}& \colhead{}&
\multicolumn{2}{c}{[\ion{N}{2}] $\lambda$6548}& \colhead{}&
\multicolumn{2}{c}{[\ion{S}{2}] $\lambda$6716}\\
\cline{3-4} \cline{6-7} \cline{9-10}\\
\colhead{}& \colhead{}&
\colhead{$L$}& \colhead{FWHM}& \colhead{}&
\colhead{$L$}& \colhead{FWHM}& \colhead{}&
\colhead{$L$}& \colhead{FWHM}\\
\colhead{}& \colhead{}&
\colhead{($\rm{10^{38}\,erg\,s^{-1}}$)}& \colhead{($\rm km\,s^{-1}$)}& \colhead{}&
\colhead{($\rm{10^{38}\,erg\,s^{-1}}$)}& \colhead{($\rm km\,s^{-1}$)}& \colhead{}&
\colhead{($\rm{10^{38}\,erg\,s^{-1}}$)}& \colhead{($\rm km\,s^{-1}$)}
}
\startdata
0106$+$2603& & 29.72$\pm$1.45& 359.9$\pm$36.2& & 11.75$\pm$0.11& 359.9$\pm$36.2& & --& -- \\
0157$+$1712& & 41.72$\pm$4.01& 988.4$\pm$63.8& & 11.58$\pm$0.14& 563.7$\pm$20.7& & 7.510$\pm$0.395& 563.7$\pm$20.7 \\
0221$+$1327& & 154.6$\pm$12.3& 539.5$\pm$54.9& & 22.73$\pm$0.46& 539.5$\pm$54.9& & --& -- \\
0234$+$2438& & --& --& & --& --& & --& -- \\
0324$+$1748& & --& --& & --& --& & --& -- \\
0348$+$1255& & 21.54$\pm$0.74& 718.7$\pm$25.7& & 20.81$\pm$0.20& 595.8$\pm$6.8& & 20.62$\pm$0.35& 595.8$\pm$6.8 \\
1258$+$2329& & --& --& & --& --& & --& -- \\
1307$+$2338& & --& --& & --& --& & --& -- \\
1453$+$1353& & --& --& & 10.25$\pm$0.38& 695.5$\pm$47.5& & 4.699$\pm$0.349& 695.5$\pm$47.5 \\
1543$+$1937& & 483.2$\pm$23.5& 808.2$\pm$79.0& & 55.58$\pm$0.80& 387.2$\pm$10.8& & 19.70$\pm$1.03& 387.2$\pm$10.8 \\
1659$+$1834& & 344.2$\pm$5.4& 628.7$\pm$22.1& & 30.97$\pm$0.35& 501.2$\pm$5.1& & 27.22$\pm$0.40& 501.2$\pm$5.1 \\
2222$+$1952& & --& --& & --& --& & --& -- \\
2222$+$1959& & 987.2$\pm$41.4& 538.9$\pm$41.8& & 84.91$\pm$0.84& 538.9$\pm$41.8& & --& -- \\
2303$+$1624& & --& --& & --& --& & --& -- \\
2327$+$1624& & 116.3$\pm$20.8& 719.6$\pm$77.7& & 29.92$\pm$0.52& 518.9$\pm$33.1& & 13.76$\pm$1.59& 518.9$\pm$33.1 \\
2344$+$1221& & 248.9$\pm$10.1& 449.3$\pm$30.9& & 52.69$\pm$0.99& 301.1$\pm$10.0& & 17.00$\pm$0.97& 301.1$\pm$10.0
\enddata
\tablecomments{The listed fluxes are not corrected for the dust extinction caused by their host galaxies.}
\end{deluxetable*}
Moreover, the [\ion{S}{2}] narrow line template is also used for
fitting the [\ion{N}{2}] $\lambda\lambda$6548, 6583 doublet and the narrow component of the hydrogen lines.
For the fitting of the [\ion{N}{2}] lines,
we fit the H$\alpha$ and [\ion{N}{2}] lines simultaneously.
The width of each [\ion{N}{2}] line is fixed to the width of the narrow line template,
and the flux ratio of the doublet is fixed to 2.96 \citep{kim06}.
We note that the narrow line template from the [\ion{O}{3}] lines is used for
0106$+$2603 (H$\beta$ and H$\alpha$), 0157$+$1712 (H$\beta$), 0221$+$1327 (H$\beta$, H$\alpha$, P$\beta$, and P$\alpha$),
0348$+$1255 (H$\beta$), 1659$+$1834 (H$\beta$), 2222$+$1959 (H$\beta$, H$\alpha$, and P$\beta$),
2327$+$1624 (H$\beta$), and 2344$+$1221 (H$\beta$),
and the narrow line template from the [\ion{S}{2}] lines is used for
0157$+$1712 (H$\alpha$ and P$\beta$), 0348$+$1255 (H$\alpha$ and P$\beta$), 1453$+$1353 (H$\alpha$ and P$\alpha$),
1543$+$1937 (H$\alpha$), 1659$+$1834 (H$\alpha$, P$\beta$, and P$\alpha$),
2327$+$1624 (H$\alpha$), and 2344$+$1221 (H$\alpha$ and P$\beta$).
The measured FWHMs and luminosities of the [\ion{O}{3}], [\ion{N}{2}], and [\ion{S}{2}] lines
are listed in Table 4.
For 1258$+$2329 (P$\alpha$) and 2303$+$1624 (P$\beta$),
the narrow lines cannot be modeled due to the absence of the [\ion{O}{3}] and [\ion{S}{2}] lines,
so we fit Gaussian functions to the lines directly.
One of the fitted components is classified as the narrow component because its FWHM is less than 600\,$\rm km\,s^{-1}$.
Using the model of this narrow component, we simultaneously fit the broad line
($\rm FWHM > 600\,km\,s^{-1}$) with a single or double Gaussian function.
Figure 5 shows the H$\beta$, H$\alpha$, P$\beta$, and P$\alpha$ lines of the NIR-red AGNs and their fitted models.
For the fit, single, double, or multiple Gaussian functions
are used depending on the S/N and the resolution of the spectra.
For example, many of the broad lines in NIR spectra are fitted with a single or double Gaussian function due to the limited spectral resolution.
\cite{kim10} showed, using 26 unobscured type 1 AGNs with high-S/N and high-resolution spectra, that
applying single/double-component Gaussian fits to data with limited spectral resolution and S/N
can produce slightly biased line flux/FWHM values
with respect to multi-component ($> 2$ components) fits, which would be the method of choice
if high-S/N, high-resolution data were available.
Based on the results, they derived the correction factors to correct for the systematic bias,
which are $\rm flux_{multi}$/$\rm flux_{double}=1.05$, $\rm flux_{multi}$/$\rm flux_{single}=1.06$,
$\rm FWHM_{multi}$/$\rm FWHM_{double}=0.85$, and $\rm FWHM_{multi}$/$\rm FWHM_{single}=0.91$ \citep{kim10,kim18}.
We adopt these values to convert the single/double Gaussian fit results to the multi-component fitting results.
Moreover, the FWHMs are corrected for the instrumental resolution
as $\rm FWHM^2=FWHM^2_{obs} - FWHM^2_{inst}$.
We note that the FWHM of the P$\beta$ line of 2327$+$1624 is not measured,
because the P$\beta$ line is fitted by two Gaussian components that are broadly split.
The broadly split components yield four half-maximum points, so the FWHM cannot be measured.
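The quadrature correction for the instrumental resolution can be sketched as a one-liner; the instrumental-resolution value in the example below is hypothetical, not taken from our instrument setup:

```python
import math

def correct_fwhm(fwhm_obs, fwhm_inst):
    """Remove instrumental broadening in quadrature:
    FWHM^2 = FWHM_obs^2 - FWHM_inst^2 (both in km/s)."""
    if fwhm_obs <= fwhm_inst:
        return 0.0  # line is unresolved
    return math.sqrt(fwhm_obs**2 - fwhm_inst**2)

# Illustrative values only: a 600 km/s observed width with a
# hypothetical 300 km/s instrumental resolution.
print(correct_fwhm(600.0, 300.0))  # ~519.6 km/s
```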
\begin{figure*}
\centering
\figurenum{5}
\includegraphics[width=\textwidth]{Line1.eps}\\
\includegraphics[width=\textwidth]{Line2.eps}\\
\caption{Results of the fitting of the H$\beta$, H$\alpha$, P$\beta$, and P$\alpha$ lines.
The black lines indicate the continuum-subtracted observed spectra in the rest-frame.
The red lines represent the best-fit model, and
the green and blue lines represent the narrow and broad components, respectively.
Moreover, for the P$\beta$ and P$\alpha$ figures, the purple lines at the bottom show the sky OH emission lines.}
\end{figure*}
\begin{figure*}
\centering
\figurenum{5}
\includegraphics[width=\textwidth]{Line3.eps}\\
\includegraphics[width=\textwidth]{Line4.eps}\\
\caption{Continued}
\end{figure*}
Strong correlations between the FWHMs of the Balmer and Paschen lines
have been established for unobscured type 1 quasars \citep{landt08,kim10}.
A tight correlation between the two quantities for our sample would imply that
the line luminosities originate from the same BLR and that the contribution of the narrow component is negligible.
As shown in Figure 6, the measured FWHMs of Paschen lines are similar to those of Balmer lines,
and the correlations between the two quantities are similar to those of unobscured type 1 quasars.
In total, we obtain the broad line-luminosities and FWHMs of 7 H$\beta$, 12 H$\alpha$, 12 P$\beta$, and 6 P$\alpha$ lines.
The measured luminosities and FWHMs of the hydrogen lines for broad and narrow components are summarized in Table 5 and Table 6, respectively.
\section{Reddening}
In this section, we assume that the red colors of NIR-red AGNs originate from
the dust extinction in their host galaxies, as shown in several previous studies \citep{glikman07,urrutia08,urrutia09,kim18}.
Hence, measuring the color excess, $E(B-V)$, is important for investigating the intrinsic (i.e., unreddened) properties of dusty AGNs.
In the following subsections,
$E(B-V)$ values are derived using two methods,
comparison of line-luminosity ratios and continuum slopes
between unobscured and NIR-red AGNs.
In this section, we use the reddening law, $k(\lambda)$, of \cite{fitzpatrick99},
based on the Galactic extinction curve from 1000\,$\rm \AA{}$ to 3.5\,$\mu$m with $R_V=3.1$.
\subsection{Reddening derived from line-luminosity ratios}
We measure the reddening from line-luminosity ratios ($E(B-V)_{\rm line}$) of NIR-red AGNs
by using four broad line-luminosity ratios of Balmer to Paschen lines
($L_{\rm H\beta}$/$L_{\rm P\beta}$, $L_{\rm H\alpha}$/$L_{\rm P\beta}$,
$L_{\rm H\beta}$/$L_{\rm P\alpha}$, and $L_{\rm H\alpha}$/$L_{\rm P\alpha}$).
We use the correlation between the Balmer and Paschen line luminosities of 37 low-redshift ($ z < 0.5$) and bright ($J < 14$ or $K < 14.5$\,mag)
unobscured type 1 quasars adopted from \cite{kim10}
as their intrinsic line-luminosity ratios.
By comparing the line-luminosity ratios, the $E(B-V)_{\rm line}$ values can be measured for 10 out of the 16 NIR-red AGNs.
\begin{figure*}
\centering
\figurenum{6}
\includegraphics[scale=0.4]{FWHM.eps}\\
\caption{Comparisons of the FWHMs of the Balmer and Paschen lines.
The black dashed lines show the two quantities are identical,
and the blue dotted lines denote the adopted correlation of unobscured type 1 quasars from \cite{kim10}.}
\end{figure*}
The $E(B-V)_{\rm line}$ values are computed by varying the amount of dust-reddening to minimize $\chi^2$,
which is a function of the line-luminosity ratios of NIR-red AGNs
($R_{\rm obs,i,j} = L_{\rm obs, \lambda j} / L_{\rm obs, \lambda i}$)
and unobscured type 1 quasars ($R_{\rm int,i,j} = L_{\rm int, \lambda j} / L_{\rm int, \lambda i}$), expressed as
\begin{equation}
\chi^2 = \sum_{\rm i,j = 1}^{ N } \frac{(R_{\rm obs,i,j} - E (R_{\rm int,i,j}))^2}{\sigma_{\rm i,j}^2} .
\end{equation}
Here, $N$ is the number of line-luminosity ratios,
$\sigma_{\rm i,j}$ are the combined uncertainties of the line-luminosity ratios and the adopted correlations from \cite{kim10},
and $E$ is a function for the dust-reddening expressed as
\begin{equation}
\log \left( \frac{E(R_{\rm int,i,j})}{R_{\rm int,i,j}} \right) =\frac{E(B-V)}{1.086}(k({\rm \lambda i})-k({\rm \lambda j})).
\end{equation}
For estimating the uncertainty of $E(B-V)_{\rm line}$, we perform 1000 Monte-Carlo simulations.
We calculate new line-luminosity ratios by randomly perturbing the observed line luminosities
within their measurement uncertainties.
The standard deviation of the 1000 newly measured $E(B-V)_{\rm line}$ values
is taken as the uncertainty of the $E(B-V)_{\rm line}$.
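The $\chi^2$ minimization and Monte-Carlo procedure described above can be sketched as follows. This is an illustrative sketch, not the published analysis code: the grid range and step, and the perturbation of the ratios directly (rather than the individual luminosities), are simplifying assumptions.

```python
import numpy as np

def reddened_ratio(r_int, ebv, k_i, k_j):
    """Redden an intrinsic line-luminosity ratio:
    log(E(R)/R) = (E(B-V)/1.086) * (k_i - k_j)."""
    return r_int * 10**(ebv / 1.086 * (k_i - k_j))

def fit_ebv_line(r_obs, r_int, sigma, k_i, k_j,
                 grid=np.arange(-1.0, 3.0, 0.001)):
    """Grid-search E(B-V) minimizing chi^2 over all ratio pairs.
    r_obs, r_int, sigma, k_i, k_j are arrays over the available
    Balmer-to-Paschen line-luminosity ratios."""
    chi2 = [np.sum((r_obs - reddened_ratio(r_int, e, k_i, k_j))**2
                   / sigma**2) for e in grid]
    return grid[int(np.argmin(chi2))]

def ebv_uncertainty(r_obs, r_err, r_int, sigma, k_i, k_j, n=1000, seed=0):
    """Monte-Carlo uncertainty: perturb the observed ratios by their
    errors and take the standard deviation of the refit E(B-V)."""
    rng = np.random.default_rng(seed)
    trials = [fit_ebv_line(r_obs + rng.normal(0.0, r_err), r_int,
                           sigma, k_i, k_j) for _ in range(n)]
    return np.std(trials)
```

A round-trip check (redden a synthetic intrinsic ratio with a known $E(B-V)$, then refit) recovers the input value to within the grid step.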
In Figure 7, we compare the observed and the dust-extinction-corrected line luminosities with the $E(B-V)_{\rm line}$ values.
The measured $E(B-V)_{\rm line}$ values and uncertainties for the 10 NIR-red AGNs are summarized in Table 7.
\subsection{Reddening derived from continuum slopes}
We measure the color excess values from the continuum slopes ($E(B-V)_{\rm cont}$) of the NIR-red AGNs by comparing
the observed spectrum, $f(\lambda)$, to a model spectrum.
The model spectrum combines a reddened quasar composite, $Q(\lambda)$, and a reddened stellar template, $S(\lambda)$.
The intrinsic quasar composite, $Q_{0}(\lambda)$, is adopted from \cite{glikman06},
which is a composition of an optical quasar composite \citep{brotherton01} and an NIR quasar composite \citep{glikman06}.
They used unobscured type 1 quasars for constructing the optical and NIR quasar composites.
For the intrinsic stellar template, $S_{0}(\lambda)$,
we use K (MJD=51816, plate=396, and fiber=605), F (MJD=51990, plate=289, and fiber=5),
and G (MJD=51957, plate=273, and fiber=304) type stellar spectra adopted from SDSS,
since K, F, and G type stars are the dominant stellar populations of NIR-red AGN host galaxies \citep{canalizo12}.
In order to measure the $E(B-V)_{\rm cont}$ values,
we fit the model spectrum to the observed spectrum, where the model has the form
\begin{equation}
f(\lambda)=Q(\lambda) + S(\lambda).
\end{equation}
Here, $Q(\lambda)$ and $S(\lambda)$ are the reddened spectra of $Q_{0}(\lambda)$ and $S_{0}(\lambda)$, respectively,
with their $E(B-V)$ values as
\begin{equation}
\log \left( \frac{X(\lambda)}{X_{0}(\lambda)} \right) =-\frac{k(\lambda) E(B-V)_{X}}{1.086},
\end{equation}
where $E(B-V)_{Q}$ is taken as $E(B-V)_{\rm cont}$.
Here, $X(\lambda)$ denotes $Q(\lambda)$ or $S(\lambda)$, and
$X_{0}(\lambda)$ is $Q_{0}(\lambda)$ or $S_{0}(\lambda)$.
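Expressed programmatically, the model above combines the two reddened templates. This is a sketch under stated assumptions: the amplitude scalings `a_q` and `a_s` are hypothetical free parameters introduced for illustration, not an exact transcription of our fitting code.

```python
import numpy as np

def redden(flux, k_curve, ebv):
    """Apply log(X/X0) = -k(lambda) E(B-V) / 1.086, with the
    extinction curve k(lambda) sampled on the same wavelength
    grid as the flux array."""
    return flux * 10**(-k_curve * ebv / 1.086)

def model_spectrum(q0, s0, k_curve, ebv_q, ebv_s, a_q=1.0, a_s=1.0):
    """f(lambda) = Q(lambda) + S(lambda): reddened quasar composite
    plus reddened stellar template. a_q and a_s are illustrative
    amplitude scalings (assumed free in the fit)."""
    return a_q * redden(q0, k_curve, ebv_q) + a_s * redden(s0, k_curve, ebv_s)
```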
For the fit, we use only a limited wavelength range (3790--10000\,$\rm \AA{}$),
because \cite{glikman07} reported that fitting with the optical and NIR combined spectrum yields
extremely poor results for one-third of red AGNs.
Moreover, to exclude strong emission lines,
wavelength regions of 3790--4700, 5100--6400, and 6700--10000\,$\rm \AA{}$ are used.
There are remaining moderate emission lines
(e.g., \ion{He}{1} $\lambda$3889, H$\epsilon$ $\lambda$3970, H$\delta$ $\lambda$4102, H$\gamma$ $\lambda$4340,
\ion{He}{1} $\lambda$5876, [\ion{O}{1}] $\lambda\lambda$6300, 6364, and \ion{O}{1} $\lambda$8447),
but the effects on the fit are negligible.
In order to find the most likely stellar template,
we calculate $\chi^2$ values for the fits with the K, F, and G type star spectra,
and the fit with the minimum $\chi^2$ is used as the best fit.
From this fitting procedure, we measure the $E(B-V)_{\rm cont}$ values for 12 NIR-red AGNs,
and Figure 8 shows the fitting results.
The measured $E(B-V)_{\rm cont}$ values and uncertainties are summarized in Table 7.
\subsection{Discussion for the two types of reddening}
We compare the $E(B-V)_{\rm line}$ values to the $E(B-V)_{\rm cont}$ values for the 10 NIR-red AGNs
that have both $E(B-V)_{\rm line}$ and $E(B-V)_{\rm cont}$ in Figure 9.
The two types of $E(B-V)$ values are broadly consistent,
but there is a weak systematic offset of $E(B-V)_{\rm cont} - E(B-V)_{\rm line} \sim {0.281}$.
We estimate the Pearson correlation coefficient between the two quantities.
For estimating the coefficient, we exclude 0324$+$1748, which has negative values for both types of $E(B-V)$,
and assume that the negative $E(B-V)$ values ($E(B-V)_{\rm line}$ values of 1258$+$2329 and 2222$+$1959) are 0.
The measured coefficient is 0.911, and the rms scatter with respect to a one-to-one correlation is 0.223.
This result suggests that the two measurements of $E(B-V)$ are mutually consistent.
\begin{deluxetable*}{ccccccccccccc}
\tablecolumns{13}
\tablewidth{0pt}
\tablenum{5}
\tablecaption{Hydrogen line measurements for the broad component \label{tbl5}}
\tablehead{
\colhead{Object Name}& \colhead{}&
\multicolumn{2}{c}{H$\beta$}& \colhead{}&
\multicolumn{2}{c}{H$\alpha$}& \colhead{}&
\multicolumn{2}{c}{P$\beta$}& \colhead{}&
\multicolumn{2}{c}{P$\alpha$}\\
\cline{3-4} \cline{6-7} \cline{9-10} \cline{12-13}\\
\colhead{}& \colhead{}&
\colhead{FWHM}& \colhead{$L$}& \colhead{}&
\colhead{FWHM}& \colhead{$L$}& \colhead{}&
\colhead{FWHM}& \colhead{$L$}& \colhead{}&
\colhead{FWHM}& \colhead{$L$}\\
\colhead{}& \colhead{}&
\colhead{($\rm{km\,s^{-1}}$)}& \colhead{($\rm{10^{40}\,erg\,s^{-1}}$)}& \colhead{}&
\colhead{($\rm{km\,s^{-1}}$)}& \colhead{($\rm{10^{40}\,erg\,s^{-1}}$)}& \colhead{}&
\colhead{($\rm{km\,s^{-1}}$)}& \colhead{($\rm{10^{40}\,erg\,s^{-1}}$)}& \colhead{}&
\colhead{($\rm{km\,s^{-1}}$)}& \colhead{($\rm{10^{40}\,erg\,s^{-1}}$)}
}
\startdata
0106$+$2603& & 3281$\pm$5& 1.000$\pm$0.003& & 3227$\pm$8& 5.312$\pm$0.028& & --& --& &
--& --\\
0157$+$1712& & --& --& & 2090$\pm$7& 2.009$\pm$0.017& & 1827$\pm$63& 2.133$\pm$0.102& &
--& --\\
0221$+$1327& & --& --& & 2212$\pm$36& 1.875$\pm$0.070& & 2125$\pm$84& 0.682$\pm$0.043& &
2073$\pm$40& 0.436$\pm$0.013\\
0234$+$2438& & --& --& & --& --& & 1515$\pm$77& 5.735$\pm$0.202& &
--& --\\
0324$+$1748& & 2744$\pm$0& 339.5$\pm$1.6& & 2333$\pm$28& 1513$\pm$3& & 2173$\pm$28& 24.67$\pm$0.42& &
--& --\\
0348$+$1255& & --& --& & 2539$\pm$325& 1.027$\pm$0.038& & --& --& &
--& --\\
1258$+$2329& & 1252$\pm$58& 5.601$\pm$0.137& & 1569$\pm$41& 19.75$\pm$0.39& & 1337$\pm$204& 1.368$\pm$0.152& &
1876$\pm$67& 1.342$\pm$0.084\\
1307$+$2338& & --& --& & --& --& & 1125$\pm$106& 1.040$\pm$0.073& &
1113$\pm$35& 2.997$\pm$0.129\\
1453$+$1353& & --& --& & 2301$\pm$35& 0.541$\pm$0.033& & --& --& &
2083$\pm$114& 0.201$\pm$0.017\\
1543$+$1937& & 3456$\pm$10& 9.554$\pm$0.038& & 3615$\pm$4& 44.27$\pm$0.10& & 3165$\pm$365& 3.976$\pm$0.379& &
2624$\pm$62& 5.254$\pm$0.193\\
1659$+$1834& & 9603$\pm$171& 1.603$\pm$0.023& & 8561$\pm$120& 8.117$\pm$0.051& & 6822$\pm$90& 2.128$\pm$0.036& &
6045$\pm$187& 2.135$\pm$0.058\\
2222$+$1952& & --& --& & --& --& & 3212$\pm$502& 3.647$\pm$0.201& &
--& --\\
2222$+$1959& & 5316$\pm$9& 16.28$\pm$0.04& & 5268$\pm$4& 73.07$\pm$0.12& & 6099$\pm$60& 3.705$\pm$0.069& &
--& --\\
2303$+$1624& & --& --& & --& --& & --& --& &
--& --\\
2327$+$1624& & --& --& & 2554$\pm$71& 0.940$\pm$0.050& & --& 0.450$\pm$0.093& &
--& --\\
2344$+$1221& & 5018$\pm$13& 7.423$\pm$0.030& & 4899$\pm$5& 29.56$\pm$0.07& & 4223$\pm$50& 3.273$\pm$0.054& &
--& --
\enddata
\tablecomments{The listed fluxes of H$\beta$, H$\alpha$, P$\beta$, and P$\alpha$ lines are not corrected
for the dust extinction caused by their host galaxies.}
\end{deluxetable*}
\begin{deluxetable*}{ccccccccccccc}
\tablecolumns{13}
\tablewidth{0pt}
\tablenum{6}
\tablecaption{Hydrogen line measurements for the narrow component \label{tbl6}}
\tablehead{
\colhead{Object Name}& \colhead{}&
\multicolumn{2}{c}{H$\beta$}& \colhead{}&
\multicolumn{2}{c}{H$\alpha$}& \colhead{}&
\multicolumn{2}{c}{P$\beta$}& \colhead{}&
\multicolumn{2}{c}{P$\alpha$}\\
\cline{3-4} \cline{6-7} \cline{9-10} \cline{12-13}\\
\colhead{}& \colhead{}&
\colhead{$L$}& \colhead{FWHM}& \colhead{}&
\colhead{$L$}& \colhead{FWHM}& \colhead{}&
\colhead{$L$}& \colhead{FWHM}& \colhead{}&
\colhead{$L$}& \colhead{FWHM}\\
\colhead{}& \colhead{}&
\colhead{($\rm{10^{38}\,erg\,s^{-1}}$)}& \colhead{($\rm km\,s^{-1}$)}& \colhead{}&
\colhead{($\rm{10^{38}\,erg\,s^{-1}}$)}& \colhead{($\rm km\,s^{-1}$)}& \colhead{}&
\colhead{($\rm{10^{38}\,erg\,s^{-1}}$)}& \colhead{($\rm km\,s^{-1}$)}& \colhead{}&
\colhead{($\rm{10^{38}\,erg\,s^{-1}}$)}& \colhead{($\rm km\,s^{-1}$)}
}
\startdata
0106$+$2603& & 20.17$\pm$0.03& 359.9$\pm$36.2& & 140.6$\pm$0.8& 359.9$\pm$36.2& & --& --& &
--& --\\
0157$+$1712& & 6.869$\pm$0.300& 988.4$\pm$63.8& & 40.24$\pm$0.71& 563.7$\pm$20.7& & 17.18$\pm$2.95& 563.7$\pm$20.7& &
--& --\\
0221$+$1327& & 10.73$\pm$4.40& 539.5$\pm$54.9& & 32.92$\pm$1.62& 539.5$\pm$54.9& & 3.615$\pm$0.656& 539.5$\pm$54.9& &
2.566$\pm$0.124& 539.5$\pm$54.9\\
0234$+$2438& & --& --& & --& --& & --& --& &
--& --\\
0324$+$1748& & --& --& & --& --& & --& --& &
--& --\\
0348$+$1255& & 14.88$\pm$0.10& 718.7$\pm$25.7& & 45.51$\pm$0.60& 595.8$\pm$6.8& & 7.530$\pm$0.776& 595.8$\pm$6.8& &
--& --\\
1258$+$2329& & --& --& & --& --& & --& --& &
15.77$\pm$4.82& 190.3$\pm$36.1\\
1307$+$2338& & --& --& & --& --& & --& --& &
--& --\\
1453$+$1353& & --& --& & 13.76$\pm$0.87& 695.5$\pm$47.5& & --& --& &
17.55$\pm$0.65& 695.5$\pm$47.5\\
1543$+$1937& & --& --& & 128.0$\pm$2.1& 387.2$\pm$10.8& & --& --& &
--& --\\
1659$+$1834& & 35.11$\pm$9.53& 628.7$\pm$22.1& & 161.5$\pm$2.0& 501.2$\pm$5.1& & 10.57$\pm$0.65& 501.2$\pm$5.1& &
26.17$\pm$1.27& 501.2$\pm$5.1\\
2222$+$1952& & --& --& & --& --& & --& --& &
--& --\\
2222$+$1959& & 144.6$\pm$1.3& 538.9$\pm$41.8& & 875.9$\pm$5.3& 538.9$\pm$41.8& & 11.71$\pm$0.86& 538.9$\pm$41.8& &
--& --\\
2303$+$1624& & --& --& & --& --& & 7.887$\pm$3.145& 397.4$\pm$115.9& &
--& --\\
2327$+$1624& & 7.587$\pm$9.783& 719.6$\pm$77.7& & 49.27$\pm$1.45& 518.9$\pm$33.1& & --& --& &
--& --\\
2344$+$1221& & 60.26$\pm$0.91& 449.3$\pm$30.9& & 166.5$\pm$2.7& 301.1$\pm$10.0& & 6.810$\pm$1.109& 301.1$\pm$10.0& &
--& --
\enddata
\tablecomments{The listed fluxes are not corrected for the dust extinction caused by their host galaxies.}
\end{deluxetable*}
Moreover, we compare the two types of $E(B-V)$ values to the $E(B-V)$ values adopted from \cite{canalizo12} (hereafter, $E(B-V)_{\rm C12}$),
and this comparison is shown in Figure 9.
They measured the $E(B-V)_{\rm C12}$ by comparing the observed continuum spectrum to
the SDSS composite QSO spectrum \citep{berk01} reddened with a Small Magellanic Cloud reddening law \citep{bouchet85},
and five NIR-red AGNs (0157$+$1712, 0221$+$1327, 0348$+$1255, 1659$+$1834, and 2327$+$1624) overlap with our sample.
They showed that the $E(B-V)_{\rm C12}$ values were generally consistent with the $E(B-V)$ values derived from Balmer decrements,
with a difference between the two quantities of $\sim$0.3.
Our $E(B-V)_{\rm cont}$ and $E(B-V)_{\rm line}$ values are generally but somewhat weakly consistent with the $E(B-V)_{\rm C12}$ values.
Between the $E(B-V)_{\rm cont}$ and $E(B-V)_{\rm C12}$ values, the Pearson correlation coefficient is 0.579, and the rms scatter is 0.484.
For the $E(B-V)_{\rm line}$ values, the result is similar:
the coefficient is 0.663 with an rms scatter of 0.582.
We find a trend that the $E(B-V)$ values from this work are smaller than the $E(B-V)_{\rm C12}$ values,
by as much as $\Delta E(B-V) \sim 0.520$, but this trend is not significant due to small number statistics.
Unlike our results, previous studies \citep{glikman07,kim18} reported that
the two types of $E(B-V)$ values are far from a one-to-one correlation.
The Pearson correlation coefficient between the two quantities is only -0.21, with an rms scatter of 0.68 \citep{kim18}.
In the previous studies, the $E(B-V)_{\rm cont}$ values are from \cite{glikman07} and \cite{urrutia09},
and the $E(B-V)_{\rm line}$ values are from \cite{glikman07}.
To obtain the $E(B-V)_{\rm cont}$ values,
they fit the continua using the quasar component only, without the stellar component.
In this study, considering that the continuum spectra of most of the NIR-red AGNs are dominated by the quasar component,
the measurement technique for $E(B-V)_{\rm cont}$ is almost the same, and
it is unlikely that the contrasting result comes from a discrepancy in the $E(B-V)_{\rm cont}$ values.
\begin{figure*}
\centering
\figurenum{7}
\includegraphics[scale=0.5]{EBV_line.eps}\\
\caption{Observed and dust-extinction-corrected line luminosities of NIR-red AGNs.
The red squares show the observed line luminosities,
and the blue circles denote the dust-extinction-corrected line luminosities based on the measured $E(B-V)_{\rm line}$ values.
The green dotted lines represent the correlations between the Balmer and Paschen line luminosities \citep{kim10},
and these correlations are used for estimating the $E(B-V)_{\rm line}$ values.}
\end{figure*}
However, in order to measure the $E(B-V)_{\rm line}$ values,
Glikman et al. used a somewhat different method.
They measured the $E(B-V)_{\rm line}$ values using Balmer decrements (hereafter, $E(B-V)_{\rm BD1}$).
The $E(B-V)_{\rm BD1}$ values were obtained by the following formula:
\begin{equation}
E(B-V)_{\rm BD1}= \frac{1.086}{k({\rm H\beta})-k({\rm H\alpha})} \ln \left( \frac{[F_{\rm H\alpha}/F_{\rm H\beta}]_{\rm measured}}{[F_{\rm H\alpha}/F_{\rm H\beta}]_{\rm FBQS}} \right).
\end{equation}
Here, $k({\rm X})$ is the extinction law of \cite{calzetti94} at the wavelength of the X line,
and $[F_{\rm H\alpha}/F_{\rm H\beta}]_{\rm measured}$ and $[F_{\rm H\alpha}/F_{\rm H\beta}]_{\rm FBQS}$
are the $F_{\rm H\alpha}/F_{\rm H\beta}$ from the spectra of red quasars and
the Faint Images of the Radio Sky at Twenty-Centimeters (FIRST) Bright Quasar Survey (FBQS; \citealt{gregg96}) composite spectrum, respectively.
In this equation, the $[F_{\rm H\alpha}/F_{\rm H\beta}]_{\rm FBQS}$ is used as the
intrinsic $F_{\rm H\alpha}/F_{\rm H\beta}$ of red quasars,
which is a fixed value of 4.526 when only the broad component is treated \citep{glikman07}.
Using this technique, we can measure the $E(B-V)_{\rm BD1}$ values for seven NIR-red AGNs,
and they are summarized in Table 7.
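The Balmer-decrement formula above reduces to a one-line computation. In the sketch below, the extinction-curve values are placeholders for illustration, not the exact \cite{calzetti94} values:

```python
import math

def ebv_bd1(f_ha_over_hb, k_hb, k_ha, ratio_fbqs=4.526):
    """E(B-V)_BD1 from the observed broad-line Balmer decrement,
    relative to the fixed FBQS composite ratio of 4.526."""
    return 1.086 / (k_hb - k_ha) * math.log(f_ha_over_hb / ratio_fbqs)

# Placeholder extinction values k(Hbeta)=3.6, k(Halpha)=2.5:
# an observed decrement of 9.0 gives E(B-V) ~ 0.68 mag.
print(ebv_bd1(9.0, k_hb=3.6, k_ha=2.5))
```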
We compare the $E(B-V)_{\rm BD1}$ values to the $E(B-V)_{\rm line}$ and $E(B-V)_{\rm cont}$ values in Figure 10.
For the $E(B-V)_{\rm BD1}$ values versus $E(B-V)_{\rm line}$ values, six NIR-red AGNs are used,
and the measured Pearson correlation coefficient is 0.314 with an rms scatter of 0.257.
For the comparison between the $E(B-V)_{\rm BD1}$ and $E(B-V)_{\rm cont}$ values,
we used seven NIR-red AGNs that have both quantities,
and the Pearson coefficient and the rms scatter are 0.709 and 0.255, respectively.
To investigate the origin of this difference,
we measure a different type of Balmer-decrement-based $E(B-V)$ value (hereafter, $E(B-V)_{\rm BD2}$).
First, we combine two relations of $L_{\rm H\alpha}$--$L_{\rm P\alpha}$ and $L_{\rm H\beta}$--$L_{\rm P\alpha}$
from \cite{kim10} to make a relation of $L_{\rm H\alpha}$--$L_{\rm H\beta}$.
The combined relation is
\begin{equation}
\log \left( \frac{L_{\rm H\alpha}}{{\rm 10^{42}\,erg\,s^{-1}}} \right) =
0.509 + 1.056 \log \left( \frac{L_{\rm H\beta}}{{\rm 10^{42}\,erg\,s^{-1}}} \right)
\end{equation}
which is used as the intrinsic $L_{\rm H\alpha}$--$L_{\rm H\beta}$ relation of NIR-red AGNs.
Second, we measure the $E(B-V)_{\rm BD2}$ values by varying the amount of dust-reddening to minimize $\chi^2$,
which is a function of the $L_{\rm H\alpha}$/$L_{\rm H\beta}$ of NIR-red AGNs ($R_{\rm obs}$)
and unobscured type 1 quasars ($R_{\rm int}$), expressed as
\begin{equation}
\chi^2 = \frac{(R_{\rm obs}-E(R_{\rm int}))^2}{\sigma^2}.
\end{equation}
Here, the $R_{\rm int}$ is derived from the above relation of $L_{\rm H\alpha}$--$L_{\rm H\beta}$,
$E$ is a dust-reddening function, and $\sigma$ is the combined uncertainty of the $R_{\rm obs}$ and $R_{\rm int}$.
The measured $E(B-V)_{\rm BD2}$ values are summarized in Table 7.
The $E(B-V)_{\rm BD2}$ values show tighter correlations with
the $E(B-V)_{\rm line}$ and $E(B-V)_{\rm cont}$ values than the $E(B-V)_{\rm BD1}$ values,
but these comparisons may not be meaningful due to the small number statistics.
The comparisons of the $E(B-V)_{\rm BD2}$ values with
the $E(B-V)_{\rm line}$ and $E(B-V)_{\rm cont}$ values are shown in Figure 10.
Between the $E(B-V)_{\rm BD2}$ and $E(B-V)_{\rm line}$ values,
the Pearson correlation coefficient is 0.816, with an rms scatter of 0.249.
For the $E(B-V)_{\rm BD2}$ and $E(B-V)_{\rm cont}$ values,
the Pearson correlation coefficient and the rms scatter are 0.941 and 0.136, respectively.
These Pearson correlation coefficients are significantly larger than the coefficients from the $E(B-V)_{\rm BD1}$ values.
Although the difference may not be significant due to the small number statistics,
if there is a difference, we suspect that the different intrinsic $L_{\rm H\alpha}$/$L_{\rm H\beta}$ causes these conflicting results.
While the intrinsic Balmer decrement is fixed to 4.526 when deriving the $E(B-V)_{\rm BD1}$ values,
it varies with the $L_{\rm H\beta}$ values for the $E(B-V)_{\rm BD2}$ values.
For example, when the $L_{\rm H\beta}$ is increased from $\rm 10^{42}\,erg\,s^{-1}$ to $\rm 10^{44}\,erg\,s^{-1}$,
the intrinsic Balmer decrement increases from 3.23 to 4.18,
which makes up $\sim$22\% of the discrepancy of the measured $E(B-V)$ values.
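The arithmetic above follows directly from the combined relation; a minimal check:

```python
import math

def intrinsic_balmer_decrement(l_hb):
    """Intrinsic L_Halpha / L_Hbeta implied by the combined relation
    log(L_Ha/1e42) = 0.509 + 1.056 log(L_Hb/1e42); l_hb in erg/s."""
    log_lhb = math.log10(l_hb / 1e42)
    log_lha = 0.509 + 1.056 * log_lhb
    return 10**(log_lha - log_lhb)

print(intrinsic_balmer_decrement(1e42))  # ~3.23
print(intrinsic_balmer_decrement(1e44))  # ~4.18
```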
\begin{deluxetable*}{ccccc}
\tabletypesize{\scriptsize}
\tablecolumns{10}
\tablewidth{0pt}
\tablenum{7}
\tablecaption{Four kinds of $E(B-V)$ values \label{tbl7}}
\tablewidth{0pt}
\tablehead{
\colhead{Object}& \colhead{$E(B-V)_{\rm line}$}& \colhead{$E(B-V)_{\rm cont}$}& \colhead{$E(B-V)_{\rm BD1}$}& \colhead{$E(B-V)_{\rm BD2}$}\\
\colhead{}& \colhead{(mag)}& \colhead{(mag)}& \colhead{(mag)}& \colhead{(mag)}}
\startdata
0106$+$2603& --& 0.675$\pm$0.001& 0.137$\pm$0.005& 0.371$\pm$0.005\\
0157$+$1712& 1.596$\pm$0.034& 2.157$\pm$0.008& --& --\\
0221$+$1327& 0.484$\pm$0.032& 0.990$\pm$0.008& --& --\\
0234$+$2438& --& --& --& --\\
0324$+$1748& -0.745$\pm$0.007& -0.026$\pm$0.000& -0.013$\pm$0.004& -0.003$\pm$0.004\\
0348$+$1255& --& 1.326$\pm$0.010& --& --\\
1258$+$2329& -0.179$\pm$0.021& 0.091$\pm$0.001& -0.213$\pm$0.027& -0.006$\pm$0.023\\
1307$+$2338& --& --& --& --\\
1453$+$1353& 0.657$\pm$0.060& 0.890$\pm$0.002& --& --\\
1543$+$1937& 0.057$\pm$0.013& 0.169$\pm$0.000& 0.020$\pm$0.004& 0.175$\pm$0.004\\
1659$+$1834& 0.505$\pm$0.007& 0.636$\pm$0.001& 0.096$\pm$0.014& 0.315$\pm$0.012\\
2222$+$1952& --& --& --& --\\
2222$+$1959& -0.208$\pm$0.008& 0.271$\pm$0.000& -0.007$\pm$0.003& 0.129$\pm$0.002\\
2303$+$1624& --& --& --& --\\
2327$+$1624& 1.043$\pm$0.154& 0.681$\pm$0.003& --& --\\
2344$+$1221& 0.101$\pm$0.008& 0.259$\pm$0.000& -0.109$\pm$0.004& 0.073$\pm$0.004
\enddata
\end{deluxetable*}
\subsection{Color selection for dusty red AGNs}
In \cite{urrutia09}, red AGNs were classified as having $E(B-V) > 0.1$.
Only two objects among the $\sim$50 candidates in \cite{urrutia09} have $E(B-V) < 0.1$,
and these two were not classified as dusty red AGNs.
In this study, considering the rms scatter of the $E(B-V)$ values,
$E(B-V) \sim 0.2$ is virtually identical to no extinction,
and we classify the objects with $E(B-V) > 0.2$ as dusty red AGNs.
According to this criterion, among our sample, three (0324$+$1748, 1258$+$2329, and 1543$+$1937) or
five (0324$+$1748, 1258$+$2329, 1543$+$1937, 2222$+$1959, and 2344$+$1221) objects
cannot be classified as dusty red AGNs based on the $E(B-V)_{\rm cont}$ or $E(B-V)_{\rm line}$ values, respectively,
and the fraction of low-$E(B-V)$ red AGNs (LERAs) is larger than that of \cite{urrutia09}.
To figure out why this discrepancy arises,
we check the differences in sample selection for the two studies.
In order to select the red AGN candidates,
\cite{urrutia09} used the optical-NIR ($r'-K>5$) and NIR colors ($J-K>1.3$) of FIRST-detected objects,
but our sample was selected by NIR color only ($J-K>2$).
Our entire sample within the FIRST coverage has detections in this survey,
so the difference in the LERA fraction of the two samples originates from the lack of the optical-NIR color selection,
and the NIR color alone is not sufficient to select dusty red AGNs.
As shown in Figure 11, the change in $E(B-V)$ due to the increase in $J-K$ is not significant,
but the $E(B-V)$ values are larger ($E(B-V) > 0$) when $g'-K > 5$.
Moreover, this result is supported by \cite{maddox06}, in which
a significant fraction of unobscured type 1 AGNs at low redshifts have such an NIR color of $J-K > 2$.
We conclude that a significant portion of the NIR-red AGNs are unobscured or only mildly obscured type 1 AGNs.
\begin{figure*}[!t]
\centering
\figurenum{8}
\includegraphics[scale=0.35]{EBV_cont_KFGstar1.eps}\\
\includegraphics[scale=0.35]{EBV_cont_KFGstar2.eps}\\
\includegraphics[scale=0.35]{EBV_cont_KFGstar3.eps}\\
\caption{Spectra of NIR-red AGNs with the best-fit models are shown
in the rest-frame from 3790 to 10000\,$\rm \AA{}$.
The black lines denote the observed spectra.
The green and blue lines are the best-fit model stellar and quasar spectra, respectively,
that also include the dust-reddening.
The red line shows the sum of the best-fit stellar and quasar spectra.
The top right box denotes the name of NIR-red AGN, the measured $E(B-V)_{\rm cont}$, and the used stellar template.
Note that the host galaxy dominates the spectra of 0157$+$1712, 0221$+$1327, and 0348$+$1255
at short wavelengths ($\rm < 6500\,\AA{}$)
due to the heavy extinction of the AGN component.}
\end{figure*}
\section{Accretion rates}
In this section, we measure the $\lambda_{\rm Edd}$
($L_{\rm bol}$/$L_{\rm Edd}$, where $L_{\rm Edd}$ is the Eddington luminosity) of NIR-red AGNs at $z \sim 0.3$.
To obtain these quantities,
we derive BH masses and bolometric luminosities
after correcting for the dust extinction using the $E(B-V)_{\rm line}$ values.
If the measured $E(B-V)_{\rm line}$ is negative, we do not apply an extinction correction,
under the assumption that there is no dust extinction.
\begin{figure*}
\centering
\figurenum{9}
\plottwo{EBV.eps}{EBV_Canalizo.eps}
\caption{(a) Comparison between the $E(B-V)_{\rm line}$ values and the $E(B-V)_{\rm cont}$ values of NIR-red AGNs.
The blue dashed line denotes a line where the two values are identical.
(b) We compare the two types of $E(B-V)$ values with the $E(B-V)_{\rm C12}$ values.
The blue squares and red triangles represent the $E(B-V)_{\rm cont}$ and $E(B-V)_{\rm line}$ values, respectively.
The meaning of the blue dashed line is identical to that of the left panel.}
\end{figure*}
As a comparison sample, we use the unobscured type 1 quasars in
the quasar property catalog \citep{shen11} of
the SDSS Seventh Data Release (DR7; \citealt{abazajian09}).
To avoid the effects of sample bias,
we set the sample selection criteria to be identical to those of our NIR-red AGNs:
(i) $0.139 \leq z \leq 0.411$ and (ii) detection in all three 2MASS bands.
Finally, we select 4130 unobscured type 1 quasars through the sample selection criteria.
Considering that the $\lambda_{\rm Edd}$ values may depend on the $L_{\rm bol}$ values (e.g., \citealt{lusso12,suh15}),
this control sample may introduce sample bias.
Thus, we address this issue in Section 5.3
by restricting the samples to limited ranges of $L_{\rm bol}$ and $M_{\rm BH}$.
\subsection{BH masses}
\begin{figure*}
\centering
\figurenum{10}
\includegraphics[scale=0.5]{EBV_BD.eps}\\
\caption{(a) Comparison between the $E(B-V)_{\rm BD1}$ and the $E(B-V)_{\rm line}$ values of NIR-red AGNs.
The meaning of the blue dashed line is identical to Figure 9.
(b) The $E(B-V)_{\rm BD2}$ vs. $E(B-V)_{\rm line}$ values.
(c) The $E(B-V)_{\rm BD1}$ vs. $E(B-V)_{\rm cont}$ values.
(d) The $E(B-V)_{\rm BD2}$ vs. $E(B-V)_{\rm cont}$ values.}
\end{figure*}
\begin{figure*}
\centering
\figurenum{11}
\includegraphics[width=\textwidth]{color_EBV.eps}\\
\caption{(a) Comparison between the measured $E(B-V)$ values and $J-K$ magnitudes.
The blue squares and red circles represent the $E(B-V)_{\rm cont}$ and $E(B-V)_{\rm line}$ values, respectively.
The green triangles show the $J-K$ and $E(B-V)$ values of the red quasars in \cite{urrutia09}.
(b) Comparison of $g'-K$ magnitudes vs. the $E(B-V)$ values.
The meanings of the blue squares, red circles, and green triangles are identical to those of the left panel.}
\end{figure*}
In order to measure the BH masses of NIR-red AGNs,
NIR $M_{\rm BH}$ estimators (e.g., \citealt{kim10,kim15a,landt11b}) are used
to alleviate the effects of the dust extinction.
We adopt the Paschen-line-based $M_{\rm BH}$ estimators \citep{kim10,kim15b},
to which we applied a recent virial coefficient of $\log f = 0.05$ \citep{woo15}.
Note that the virial factor, $f$, is the proportionality coefficient needed to determine the BH mass based on the virial theorem:
\begin{equation}
M_{\rm BH}=f \frac{R {\Delta V}^{2}}{G},
\end{equation}
where $R$ is the BLR size, $\Delta V$ is the velocity of the BLR gas, and $G$ is the gravitational constant.
The value of $\log f = 0.05$ \citep{woo15} applies when $\Delta V$ is the FWHM of the line;
if the line dispersion is used for $\Delta V$ instead, the $\log f$ value is different.
With the updated virial coefficient applied, the Paschen-line-based $M_{\rm BH}$ estimators become
\begin{equation}
\frac{M_{\rm BH}}{M_{\rm \odot}}=10^{7.04\pm0.02} \left( \frac{L_{\rm P\beta}}{{\rm 10^{42}\,erg\,s^{-1}}} \right)^{0.48\pm0.03}
\left( \frac{{\rm FWHM_{P\beta}}}{{\rm 10^{3}\,km\,s^{-1}}} \right)^{2}
\end{equation}
and
\begin{equation}
\frac{M_{\rm BH}}{M_{\rm \odot}}=10^{7.07\pm0.04} \left( \frac{L_{\rm P\alpha}}{{\rm 10^{42}\,erg\,s^{-1}}} \right)^{0.49\pm0.06}
\left( \frac{{\rm FWHM_{P\alpha}}}{{\rm 10^{3}\,km\,s^{-1}}} \right)^{2}.
\end{equation}
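As a concrete illustration, the P$\beta$-based estimator above reduces to a one-line computation; the following Python sketch (function name ours; the quoted uncertainties on the coefficients are ignored) assumes extinction-corrected inputs:

```python
def mbh_pbeta(l_pbeta, fwhm_pbeta):
    """Virial BH mass (in M_sun) from the extinction-corrected P-beta
    line luminosity (erg/s) and FWHM (km/s); the virial coefficient
    log f = 0.05 is already folded into the 10^7.04 zero point."""
    return 10**7.04 * (l_pbeta / 1e42)**0.48 * (fwhm_pbeta / 1e3)**2

# e.g., L(P-beta) = 1e42 erg/s and FWHM = 3000 km/s give ~1e8 M_sun
mbh = mbh_pbeta(1e42, 3.0e3)
```

The P$\alpha$-based estimator follows analogously, with the $10^{7.07}$ zero point and $0.49$ luminosity slope.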
We measure the BH masses for 11 and 6 NIR-red AGNs by using the P$\beta$- and P$\alpha$-based $M_{\rm BH}$ estimators, respectively.
There is no significant difference between the measured P$\beta$- and P$\alpha$-based BH masses as shown in Figure 12,
and the measured BH masses are listed in Table 8.
\begin{figure*}
\centering
\figurenum{12}
\includegraphics[width=\textwidth]{PbvsPa.eps}\\
\caption{(a) Comparison of the $M_{\rm BH}$ values measured from the P$\beta$ and P$\alpha$ lines.
The BH masses are represented by circles, whose colors indicate the $E(B-V)_{\rm line}$ values.
(b) Comparison of the $L_{\rm bol}$ values derived from the P$\beta$ and P$\alpha$ lines.
The circles denote the $L_{\rm bol}$ values, and the color coding is identical to that in the left panel.}
\end{figure*}
To obtain the BH masses of unobscured type 1 quasars,
we use an optical $M_{\rm BH}$ estimator \citep{mclure04} consisting of
$\lambda L_{\rm 5100\,\AA{}}$ (L5100) and $\rm FWHM_{H\beta}$.
For the optical $M_{\rm BH}$ estimator,
we apply the virial coefficient of \cite{woo15} as
\begin{equation}
\frac{M_{\rm BH}}{M_{\rm \odot}}=5.27 \left( \frac{{\rm L5100}}{{\rm 10^{44}\,erg\,s^{-1}}} \right)^{0.61}
\left( \frac{{\rm FWHM_{H\beta}}}{{\rm km\,s^{-1}}} \right)^{2}.
\end{equation}
The L5100 and $\rm FWHM_{H\beta}$ values of unobscured type 1 quasars are adopted from \cite{shen11}.
\subsection{Bolometric luminosities}
To estimate the bolometric luminosities of NIR-red AGNs,
we combine several empirical relations between the bolometric luminosity ($L_{\rm bol}$),
the continuum luminosity, and the line luminosity.
We chain together the empirical relations between
$L_{\rm bol}$ and L5100 \citep{shen11},
L5100 and $L_{\rm H\alpha}$ \citep{jun15},
and $L_{\rm H\alpha}$ and the two Paschen line luminosities \citep{kim10}.
The combined relations between $L_{\rm bol}$ and the Paschen line luminosities are
\begin{equation}
\log \left( \frac{L_{\rm bol}}{\rm{10^{44}\,erg\,s^{-1}}} \right)=1.33+0.966\,\log \left( \frac{L_{\rm P\beta}}{\rm{10^{42}\,erg\,s^{-1}}} \right)
\end{equation}
and
\begin{equation}
\log \left( \frac{L_{\rm bol}}{\rm{10^{44}\,erg\,s^{-1}}} \right)=1.27+0.920\,\log \left( \frac{L_{\rm P\alpha}}{\rm{10^{42}\,erg\,s^{-1}}} \right).
\end{equation}
We measure the $L_{\rm bol}$ values for 12 and 6 NIR-red AGNs by using $L_{\rm P\beta}$ and $L_{\rm P\alpha}$, respectively.
The $L_{\rm bol}$ measured from P$\beta$ and P$\alpha$ show no significant differences, as shown in Figure 12,
and the measured $L_{\rm bol}$ values are listed in Table 8.
To obtain the $L_{\rm bol}$ values of unobscured type 1 quasars,
we use L5100 values, with the bolometric correction factor (9.26) for L5100,
both of which are adopted from \cite{shen11}.
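As a hedged illustration, the conversion from a Paschen line luminosity to $L_{\rm bol}$, and then to an Eddington ratio, can be sketched as follows (function names ours; the standard Eddington luminosity $L_{\rm Edd} = 1.26 \times 10^{38}\,(M_{\rm BH}/M_{\odot})\,{\rm erg\,s^{-1}}$ is assumed):

```python
import math

def lbol_from_pbeta(l_pbeta):
    """Bolometric luminosity (erg/s) from L(P-beta) (erg/s),
    per the combined relation above."""
    return 1e44 * 10**(1.33 + 0.966 * math.log10(l_pbeta / 1e42))

def eddington_ratio(l_bol, mbh_msun):
    """lambda_Edd = L_bol / L_Edd with L_Edd = 1.26e38 (M_BH/M_sun) erg/s."""
    return l_bol / (1.26e38 * mbh_msun)

# e.g., L(P-beta) = 1e42 erg/s and M_BH = 1e8 M_sun give lambda_Edd ~ 0.17
lam = eddington_ratio(lbol_from_pbeta(1e42), 1.0e8)
```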
\subsection{Eddington ratios of NIR-red AGNs}
\begin{figure*}
\centering
\figurenum{13}
\includegraphics[scale=0.5]{Edd.eps}\\
\caption{The $L_{\rm bol}$ and $M_{\rm BH}$ values of NIR-red AGNs and unobscured type 1 quasars.
The circles and triangles show the $L_{\rm bol}$ and $M_{\rm BH}$ values of NIR-red AGNs
derived from P$\beta$ and P$\alpha$, respectively.
The meaning of the colors in the circles and triangles is identical to that in Figure 12.
The blue dots represent the $L_{\rm bol}$ and $M_{\rm BH}$ values of unobscured type 1 quasars.
The dotted, dashed, and dash-dotted lines denote $\lambda_{\rm Edd}$ of 1, 0.1, and 0.01, respectively.}
\end{figure*}
\begin{figure}
\centering
\figurenum{14}
\includegraphics[scale=0.35]{Edd_distributions.eps}\\
\caption{(a) Distributions of the $\lambda_{\rm Edd}$ values of NIR-red AGNs and unobscured type 1 quasars.
The red solid and blue dashed histograms show the $\lambda_{\rm Edd}$ distributions of NIR-red AGNs and unobscured type 1 quasars, respectively.
(b) Distributions of the $\lambda_{\rm Edd}$ of NIR-red AGNs with $E(B-V) > 0.2$ and unobscured type 1 quasars.
The meaning of the red solid lines, blue dashed lines, and the ranges of the $L_{\rm bol}$ and $M_{\rm BH}$
are identical to those in panel (a).}
\end{figure}
When comparing the $\lambda_{\rm Edd}$ values of NIR-red AGNs to those of unobscured type 1 quasars,
we prefer the $\lambda_{\rm Edd}$ values derived from P$\alpha$ over those from P$\beta$
when both are available, to minimize the effects of the dust extinction.
The $\lambda_{\rm Edd}$ values of nine NIR-red AGNs are used for the comparison,
among which four (0157$+$1712, 0324$+$1748, 2222$+$1959, and 2344$+$1221) and
five (0221$+$1327, 1258$+$2329, 1453$+$1353, 1543$+$1937, and 1659$+$1834)
$\lambda_{\rm Edd}$ values are derived from P$\beta$ and P$\alpha$, respectively.
The $M_{\rm BH}$ and $L_{\rm bol}$ values of NIR-red AGNs and unobscured type 1 quasars are shown in Figure 13,
and Figure 14 shows their distributions of $\lambda_{\rm Edd}$ values.
We find that the median $\lambda_{\rm Edd}$ of the nine NIR-red AGNs, $\log (\lambda_{\rm Edd}) = -0.654 \pm 0.174$,
where the error represents the error of the median,
is only mildly higher than that of unobscured type 1 quasars, $\log (\lambda_{\rm Edd}) = -0.961 \pm 0.008$.
To quantify how significantly these two distributions of the $\lambda_{\rm Edd}$ values differ from each other,
we perform a Kolmogorov--Smirnov (K-S) test using the IDL-based \texttt{KSTWO} code.
The maximum deviation between the cumulative distributions of these two $\lambda_{\rm Edd}$ values, $D$, is 0.392,
and the probability of the result given the null hypothesis, $p$, is 0.094.
From this comparison, we conclude that the $\lambda_{\rm Edd}$ of the NIR-red AGNs is only slightly larger than
that of the unobscured type 1 AGNs, but statistically the difference is not significant.
A few outliers with large $\lambda_{\rm Edd}$ appear to dominate the K-S test result,
suggesting that the NIR-red AGN sample is mostly indistinguishable in its properties from unobscured type 1 AGNs,
although it can include truly dusty, high-$\lambda_{\rm Edd}$ AGNs such as 0157$+$1712 with $E(B-V)_{\rm line} = 1.596$.
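The $D$ statistic underlying this test can be reproduced with a short pure-Python sketch (the samples below are placeholders, not our measured values; a full implementation such as \texttt{KSTWO} or SciPy's \texttt{ks\_2samp} additionally converts $D$ into a $p$-value):

```python
def ks_d(sample_a, sample_b):
    """Maximum deviation D between the empirical CDFs of two samples."""
    a, b = sorted(sample_a), sorted(sample_b)

    def cdf(s, x):
        # fraction of the sample at or below x (right-continuous ECDF)
        return sum(v <= x for v in s) / len(s)

    return max(abs(cdf(a, x) - cdf(b, x)) for x in a + b)

# placeholder log(lambda_Edd) samples, NOT our measured values
d = ks_d([-0.2, -0.65, -0.9, -1.1], [-0.8, -0.95, -1.0, -1.2])
```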
Since $\lambda_{\rm Edd}$ values can depend on the $L_{\rm bol}$ values (e.g., \citealt{lusso12,suh15}),
we repeated the analysis after matching their $L_{\rm bol}$ values.
First, we divide the NIR-red AGNs into two sub-samples, four low-$L_{\rm bol}$
(0221$+$1327, 1258$+$2329, 1453$+$1353, and 1659$+$1834; $44.73 \leq \log( L_{\rm bol} / {\rm erg\,s^{-1}}) \leq 45.65$) and
five high-$L_{\rm bol}$ (0157$+$1712, 0324$+$1748, 1543$+$1937, 2222$+$1959, and 2344$+$1221;
${45.86} \leq \log( L_{\rm bol} / {\rm erg\,s^{-1}}) \leq 46.67$) NIR-red AGNs.
Second, among all the unobscured type 1 AGNs,
we choose 3688 low-$L_{\rm bol}$ ($44.76 \leq \log( L_{\rm bol} / {\rm erg\,s^{-1}}) \leq 45.65$)
and 165 high-$L_{\rm bol}$ (${45.86} \leq \log( L_{\rm bol} / {\rm erg\,s^{-1}}) \leq 46.55$) sub-samples
that have $L_{\rm bol}$ ranges similar to those of the divided NIR-red AGNs.
By comparing these $L_{\rm bol}$-limited samples, we confirm the result from the full sample:
the $\lambda_{\rm Edd}$ of NIR-red AGNs is only mildly higher than that of unobscured type 1 AGNs.
For the low-$L_{\rm bol}$ samples, the median $\log(\lambda_{\rm Edd})$ values of the NIR-red AGNs and unobscured type 1 AGNs are
$-0.654 \pm 0.216$ and $-0.995 \pm 0.008$, respectively.
For the high-$L_{\rm bol}$ samples, although the $\lambda_{\rm Edd}$ values are larger than those of the low-$L_{\rm bol}$ samples,
the result is consistent throughout.
The median $\log(\lambda_{\rm Edd})$ of the NIR-red AGNs is $-0.424 \pm 0.276$,
and that of the unobscured type 1 AGNs is ${-0.661} \pm {0.035}$.
We also compare the $\lambda_{\rm Edd}$ of unobscured type 1 quasars to those of NIR-red AGNs with $E(B-V) > 0.2$.
The comparison is shown in Figure 14.
We find that the median $\log(\lambda_{\rm Edd})$ of the NIR-red AGNs is $-0.654 \pm 0.321$,
which is consistent with the above results.
\section{$M_{\rm BH}$--$\sigma_{\ast}$ relation}
\begin{figure*}
\centering
\figurenum{15}
\includegraphics[scale=0.5]{M-sig_supp.eps}\\
\caption{$M_{\rm BH}$--$\sigma_{\ast}$ relations of AGNs.
Circles denote the $M_{\rm BH}$ and $\sigma_{\ast}$ values of NIR-red AGNs,
and the meaning of the colors in the circles is identical to that in Figure 11.
The gray circles denote the quantities of NIR-red AGNs measured in \cite{canalizo12}.
The $M_{\rm BH}$ and $\sigma_{\ast}$ values of unobscured type 1 AGNs at
$z \simeq$ 0 \citep{woo10}, 0.36 \citep{woo06}, 0.57 \citep{woo08}, and $\lesssim$1 \citep{shen15}
are represented by green triangles, blue squares, red stars, and gray asterisks, respectively.
The green dashed, black dotted, and yellow dot-dashed lines show
the $M_{\rm BH}$--$\sigma_{\ast}$ relations for local AGNs \citep{woo10}, quiescent local galaxies \citep{gultekin09},
and unobscured type 1 AGNs at $z \sim 0.26$ \citep{shen15}.}
\end{figure*}
\begin{figure*}
\centering
\figurenum{16}
\includegraphics[scale=0.4]{M-sig_supp2.eps}\\
\caption{
(a) The $L_{\rm bol}$ values vs. redshifts of NIR-red AGNs and unobscured type 1 AGNs.
The circles denote the redshifts and $L_{\rm bol}$ values of NIR-red AGNs,
and the meaning of the colors in the circles is identical to those in Figure 11.
The blue squares, red stars, and yellow asterisks represent the quantities of unobscured type 1 AGNs
at $z \simeq$ 0.36 \citep{woo06}, 0.57 \citep{woo08}, and $\lesssim$1 \citep{shen15}, respectively.
(b) $M_{\rm BH}$--$\sigma_{\ast}$ relations of the luminosity-matched AGNs.
The circles, blue squares, red stars, and yellow asterisks have meanings identical to those in the left panel.
The black dashed line indicates the $M_{\rm BH}$--$\sigma_{\ast}$ relation of the luminosity-matched unobscured type 1 AGNs,
where the measured $\alpha$ and $\beta$ are 8.335$\pm$0.016 and 0.768$\pm$0.083, respectively.
}
\end{figure*}
In this section, we discuss the $M_{\rm BH}$--$\sigma_{\ast}$ relation of NIR-red AGNs.
For the relation, we adopt the $\sigma_{\ast}$ values from \cite{canalizo12} which are measured
using the stellar absorption lines in the range of 3900--5500\,$\rm \AA{}$.
However, the $\sigma_{\ast}$ values are available for only three objects (0157$+$1712, 0221$+$1327, and 1659$+$1834) in our sample.
These three NIR-red AGNs have $E(B-V) > 0.2$.
For the BH masses, \cite{canalizo12} also estimated the BH masses using the L5100 and $\rm FWHM_{\rm H\alpha}$ values,
but we use the Paschen-line-based BH masses measured in Section 5.2.
We compare these two kinds of BH masses.
In \cite{canalizo12}, the BH masses based on L5100 and $\rm FWHM_{\rm H\alpha}$
are $10^{8.35}$, $10^{8.42}$, and $10^{8.68}$\,$M_{\rm \odot}$ for 0157$+$1712, 0221$+$1327, and 1659$+$1834, respectively,
whereas the BH masses derived from the Paschen lines are ${10^{7.96}}$, $10^{7.57}$, and $10^{8.84}$\,$M_{\rm \odot}$.
Although there is no significant difference in the BH masses for 0157$+$1712 and 1659$+$1834,
the Paschen-line-based BH mass of 0221$+$1327 is smaller by a factor of $\sim 7$.
The discrepancy for the BH mass of 0221$+$1327 comes not from the dust extinction but from the spectral line fitting:
\cite{canalizo12} measured an FWHM of 4279\,$\rm km\,s^{-1}$ from the H$\alpha$ line.
In this study, we measured the FWHM from the H$\alpha$, P$\beta$, and P$\alpha$ lines,
which gives 2212, 2125, and 2073\,$\rm km\,s^{-1}$, respectively,
and these values are significantly smaller than the previous result.
In Figure 15, the newly established $M_{\rm BH}$--$\sigma_{\ast}$ relation of NIR-red AGNs
is presented, along with those for local quiescent galaxies \citep{gultekin09} and
unobscured type 1 AGNs at $z \simeq$ 0, 0.36, 0.57, and $\lesssim 1$ \citep{woo06,woo08,woo10,shen15}.
These $M_{\rm BH}$--$\sigma_{\ast}$ relations of unobscured type 1 AGNs are modified
by applying the virial coefficient of $\log f = 0.05$ \citep{woo15}, as we did for NIR-red AGNs.
By comparing the $M_{\rm BH}$--$\sigma_{\ast}$ relation of NIR-red AGNs and local unobscured type 1 AGNs \citep{woo10},
we find offsets of $\Delta \log (M_{\rm BH}/M_{\rm \odot}) =$
$-0.389$, 0.243, and 1.190 for 0157$+$1712, 0221$+$1327, and 1659$+$1834, respectively,
resulting in a mean offset of $\Delta \log (M_{\rm BH}/M_{\rm \odot}) = 0.348 \pm 0.902$.
The $M_{\rm BH}$--$\sigma_{\ast}$ relation for NIR-red AGNs and those for unobscured type 1 AGNs at $z \simeq 0$--0.6
are consistent with each other
(e.g., $\Delta \log (M_{\rm BH}/M_{\rm \odot}) = $ 0.74, 0.62, and 0.52 for unobscured type 1 AGNs
at $z=$ 0.26, 0.36, and 0.57, respectively; \citealt{shen15,woo06,woo08}).
This result suggests that there is no significant offset in the $M_{\rm BH}$--$\sigma_{\ast}$ relation between
the NIR-red AGNs and the unobscured type 1 AGNs,
although more objects are needed to better quantify the offset.
Moreover, we compare the $M_{\rm BH}$--$\sigma_{\ast}$ relations of NIR-red AGNs and unobscured type 1 AGNs
after matching their $L_{\rm bol}$ values to exclude the selection bias introduced from their different luminosities (e.g., \citealt{shen15}).
The $L_{\rm bol}$ values of 0157$+$1712, 0221$+$1327, and 1659$+$1834 are
$10^{46.12}$, $10^{45.01}$, and $10^{45.65}$\,$\rm erg\,s^{-1}$, respectively, as shown in Figure 16,
and the $L_{\rm bol}$ values of the unobscured type 1 AGNs at $z \simeq$ 0.36 \citep{woo06}, 0.57 \citep{woo08}, and $\lesssim 1$ \citep{shen15}
are measured by applying the bolometric correction factor of 9.26 \citep{shen11} to their L5100 values.
Among them, we choose 42 unobscured type 1 AGNs whose $L_{\rm bol}$ range
($10^{45.01} \leq L_{\rm bol}/{\rm erg\,s^{-1}} \leq 10^{45.73}$) is similar to, though somewhat lower than, that of the NIR-red AGNs.
For the selected unobscured type 1 AGNs, their $M_{\rm BH}$ can be expressed as a function of $\sigma_{\rm \ast}$:
\begin{equation}
\log \left( \frac{M_{\rm BH}}{M_{\rm \odot}} \right) = \alpha + \beta \,\log \left( \frac{\sigma_{\rm \ast}}{{\rm 200\,km\,s^{-1}}} \right),
\end{equation}
and the measured $\alpha$ and $\beta$ are 8.335$\pm$0.016 and 0.768$\pm$0.083, respectively.
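As an illustration, offsets from this fitted relation can be computed directly (function names ours; the uncertainties on $\alpha$ and $\beta$ are ignored):

```python
import math

def log_mbh_predicted(sigma_star, alpha=8.335, beta=0.768):
    """log(M_BH/M_sun) predicted by the fitted relation above
    for a stellar velocity dispersion sigma_star in km/s."""
    return alpha + beta * math.log10(sigma_star / 200.0)

def offset_dex(log_mbh_measured, sigma_star):
    """Offset Delta log(M_BH/M_sun) of a source from the relation."""
    return log_mbh_measured - log_mbh_predicted(sigma_star)

# a source with sigma_star = 200 km/s and log M_BH = 8.335 lies on the relation
zero = offset_dex(8.335, 200.0)
```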
By comparing the newly measured $M_{\rm BH}$--$\sigma_{\rm \ast}$ relation of the unobscured type 1 AGNs and that of the NIR-red AGNs,
we find a mean offset of $\Delta \log (M_{\rm BH}/M_{\rm \odot}) = {-0.166 \pm 0.681}$, as presented in Figure 16.
In this luminosity-matched comparison, the number of the NIR-red AGNs is also insufficient,
but we obtain the same result that there is no significant offset between
the $M_{\rm BH}$--$\sigma_{\ast}$ relations of the NIR-red AGNs and the unobscured type 1 AGNs.
\section{Summary}
By performing NIR spectroscopic observations with
four telescopes, Gemini, IRTF, Magellan, and Subaru,
we obtained 0.7--2.5\,$\mu$m medium-resolution ($R > 2000$) and high-S/N (up to several hundred) spectra of 16 NIR-red AGNs at $z \sim 0.3$.
In addition to the NIR spectra,
we obtained optical (0.4--1.0\,$\mu$m) medium-resolution ($R \sim 4000$) spectra of 12 NIR-red AGNs from Keck/ESI observations and SDSS data.
Using both sets of spectra, we measured the line luminosities and FWHMs of H$\beta$, H$\alpha$, P$\beta$, and P$\alpha$ lines
for 7, 12, 12, and 6 NIR-red AGNs, respectively.
\begin{figure*}[!t]
\centering
\includegraphics[width=\textwidth]{table8.png}\\
\end{figure*}
Before analyzing the physical properties of NIR-red AGNs,
we derived the $E(B-V)$ values of NIR-red AGNs in two ways.
First, we estimated the $E(B-V)_{\rm line}$ values
by using the luminosity ratios of H$\beta$, H$\alpha$, P$\beta$, and P$\alpha$ lines.
Second, the $E(B-V)_{\rm cont}$ values were measured
by comparing the continuum slopes at 3790--10000\,$\rm \AA{}$.
Through these two methods, we measured the $E(B-V)_{\rm line}$ and $E(B-V)_{\rm cont}$ values for 10 and 12 NIR-red AGNs, respectively.
These two $E(B-V)$ values are consistent, and their Pearson correlation coefficient is 0.911.
Among our sample, five objects have low $E(B-V)$ values ($< 0.2$).
In comparison, a previous study \citep{urrutia09} based on combined optical--NIR and NIR color selection found
only two objects with low $E(B-V)$ values among $\sim 50$ candidates.
We therefore suspect that NIR color selection alone is not effective at picking out dusty red AGNs.
After correcting for the dust extinction with the measured $E(B-V)$ values,
we measured the $\lambda_{\rm Edd}$ values of NIR-red AGNs.
For the $M_{\rm BH}$ and $L_{\rm bol}$ values, we used Paschen-line-based $M_{\rm BH}$ and $L_{\rm bol}$ estimators
to alleviate the effects of the dust extinction.
The newly estimated median $\lambda_{\rm Edd}$ of NIR-red AGNs, $\log (\lambda_{\rm Edd}) = -0.654 \pm 0.174$, is only mildly higher than
that of unobscured type 1 quasars, $\log (\lambda_{\rm Edd}) = -0.961 \pm 0.008$.
Using the measured BH masses,
we compared the $M_{\rm BH}$--$\sigma_{\ast}$ relation of NIR-red AGNs to that of unobscured type 1 AGNs at similar redshift.
Although only three objects were used,
NIR-red AGNs show a tendency to have similar BH masses at a fixed $\sigma_{\ast}$.
Our results suggest that AGNs with red $J-K$ colors are not necessarily dust-obscured AGNs,
and the selection of dusty AGNs needs to be carefully performed.
\acknowledgements
We thank the referee for useful comments.
This work was supported by the Creative Initiative Program of the National Research Foundation of Korea (NRF),
No. 2017R1A3A3001362, funded by the Korea government (MSIP).
D.K. acknowledges support by the National Research Foundation of Korea to
the Fostering Core Leaders of the Future Basic Science Program, No. 2017-002533.
The Gemini data were taken through the K-GMT Science Program
(PID: GN-2015B-Q-51; GN-2016A-Q-86) of Korea Astronomy and Space Science Institute (KASI).
This paper includes data obtained with the 6.5\,m Magellan Telescopes
located at Las Campanas Observatory, Chile.
D.K. and M.I. are Visiting Astronomers (PID: 2015B092; 2016A043) at the Infrared Telescope Facility,
which is operated by the University of Hawaii under Cooperative Agreement no. NNX-08AE38A
with the National Aeronautics and Space Administration,
Science Mission Directorate, Planetary Astronomy Program.
This paper includes data obtained with Subaru telescope and W. M. Keck observatory on Maunakea.
\section{Introduction}\label{sec:introduction}
The proliferation of online crowdsourcing platforms has ignited a lot of research in crowdsourcing
and, more recently, in team formation. Such platforms enable their users to identify freelance experts
to complete a job. Although small tasks may require just one expert to be completed, more complicated
tasks may require a team of experts to be formed. This is particularly true in platforms that
set out to solve more complicated tasks such as {\tt Galaxy Zoo} or {\tt Foldit}.
Motivated by such applications,
researchers have studied extensively the \emph{team-formation} problem, where
the goal is to identify a set of experts that collectively perform a task while some
metric of team-quality is optimized. Beyond crowdsourcing platforms, team-formation problems arise in human-resource management, as well as in industrial and research organizations and
funding agencies that aim to identify the groups of scientists
best suited to work in specific domains of interest.
Team formation as
a combinatorial optimization problem was first introduced by Lappas~{{\emph{et al.}}}~\cite{lappas2009finding}. In their
version of the problem the input consisted of a task, which required a set of skills, along with a set of
individuals; each individual had a set of skills. The individuals were also organized in a social network, which encoded how well they could work together. The team-formation problem that Lappas~{{\emph{et al.}}}
considered was the problem of identifying a subset of individuals that collectively \emph{cover} the
required skills while at the same time the radius (or the weight of the minimum spanning tree) on the subgraph
induced by the individuals is minimized.
All follow-up works consider variants of this setting, with the common characteristic that
every team constructed needs to \emph{cover} the skills required by the task it is assigned to complete, and then also optimize some metric related to the subgraph induced by the team members~\cite{bhowmik2014submodularity,kargar2013finding,kargar2011discovering,kargar2012efficient,majumder2012capacitated, li2015team,yin2018social}
or the load assigned to the experts -- when more than one task is completed~\cite{anagnostopoulos2010power,anagnostopoulos2012online,anagnostopoulos18algorithms}.
Although all these works consider slightly different settings, e.g., some of them consider just one task while
others consider multiple tasks that arrive in an online fashion, all of them have a common characteristic:
they focus on optimizing some objective related to the cost of the team, while imposing the \emph{hard constraint} that all the skills required by any given task be covered. That is, at the core of all these
team-formation formulations lies the hard constraint of solving the \emph{set cover} problem.
In this paper, we take a different approach: we do not assume that the coverage of all skills required by the
input job is a hard constraint, but we rather assume that coverage of skills is one part of the objective function we want to optimize. The other part is the cost of paying the experts who participate in the formed
team. That is, for a team of experts $Q$ we want to optimize a function of the form:
\begin{equation}\label{eq:optimization}
g(Q) = {{{\ensuremath{f}}}}(Q) - {{{\ensuremath{c}}}}(Q),
\end{equation}
where ${{\ensuremath{f}}}(Q)$ is the number of skills required by the task and possessed by at least one expert in $Q$ and ${{\ensuremath{c}}}(Q)$ is the \emph{sum} of the costs of the experts in $Q$, i.e., the amount of money one has to pay to hire the particular team. Note that function $g$ has two parts: the \emph{coverage} part,
which is a \emph{monotone submodular} function, and the \emph{cost} part which is a \emph{linear} function.
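As a minimal sketch (all expert names, skills, and costs below are illustrative), the objective can be evaluated as follows:

```python
def coverage(team, skills_of, task_skills):
    """f(Q): number of required skills possessed by at least one expert."""
    covered = set().union(*(skills_of[e] for e in team)) if team else set()
    return len(covered & task_skills)

def objective(team, skills_of, cost_of, task_skills):
    """g(Q) = f(Q) - c(Q): skill coverage minus the total hiring cost."""
    return coverage(team, skills_of, task_skills) - sum(cost_of[e] for e in team)

# toy instance (all names and numbers illustrative)
skills_of = {"a": {"java", "sql"}, "b": {"sql"}, "c": {"ml"}}
cost_of = {"a": 0.5, "b": 0.4, "c": 2.5}
task = {"java", "sql", "ml"}
g = objective({"a", "c"}, skills_of, cost_of, task)   # covers 3 skills, costs 3.0
```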
To the best of our knowledge we are the first to formulate the team-formation problem as an optimization problem
of maximizing a submodular minus a linear function.
From the application perspective, this would correspond to scenarios where the set of skills required by
a project is more of a ``wish list'' rather than a strict requirement.
Here, we consider two basic variants of this problem:
the \emph{constrained} and the \emph{unconstrained} one.
The former refers to cases where the maximum number of experts we aim to hire is given as part of the input; the
latter finds the optimal number of experts to be hired as part of the solution. For the constrained
version, we also go beyond cardinality constraints and consider matroid constraints as well. As an example, consider the scenario where the experts are partitioned into groups; then, a matroid constraint would impose
an upper bound on the number of experts that can be hired from every group.
Finally, we also consider the online version of the above optimization problem (both for the constrained and the
unconstrained settings). Deviating from existing work on the online team-formation problem~\cite{anagnostopoulos2010power,anagnostopoulos2012online,anagnostopoulos18algorithms},
where the
tasks arrive in an online manner, we consider the problem where the task is given as part of the input
in the beginning and experts become available in an online fashion.
From the application point of view, and to the best of our knowledge, we are the first to formalize the team-formation problem
as an optimization problem that puts coverage and cost into a single combined objective. In fact, one can think of
our formulation as a generalization of existing problem definitions as we encapsulate both the coverage requirement and the cost of forming a team.
From the algorithmic point of view, one needs to understand the difficulty of optimizing function $g$ in
Equation~\eqref{eq:optimization}, which is submodular but it takes both positive and negative values. One can observe that it is \textbf{NP}-hard to decide whether the optimum value of a submodular objective is positive or not, since we could use such a subroutine and binary search over the optimum value to obtain arbitrarily good approximate solutions, which contradicts existing hardness of approximation results for problems such as maximum cut and maximum coverage \cite{papadimitriou1991optimization,Feige1998}. This also implies that no multiplicative factor approximation is possible for maximizing a potentially negative submodular function with or without constraints. Nevertheless, the objective function we consider has some structure that has been exploited
in previous works~\cite{feldman2019guess,harshaw2019submodular,sviridenko2017optimal}. These works have shown that in this case we should aim for a weaker notion of
approximation and find a subset of experts $Q$ such that:
\[
{{\ensuremath{f}}}(Q)-{{\ensuremath{c}}}(Q)\geq \alpha \cdot {{\ensuremath{f}}}({{\ensuremath{\mathrm{OPT}}}})-{{\ensuremath{c}}}({{\ensuremath{\mathrm{OPT}}}}),
\]
for some $\alpha\leq 1$.
Existing works in the theory community have proposed algorithms that achieve $\alpha=\left(1-\frac{1}{e}\right)$, which is the best guarantee one can hope for if we have a cardinality constraint, since the problem captures the classical problem of maximizing a monotone and non-negative submodular function subject to a cardinality constraint for which $1-\frac{1}{e}$ is the tight approximation guarantee \cite{Nemhauser1978,Feige1998}.
In this paper, we take the ideas in existing work a step further and propose
simple algorithms that use the greedy algorithm for submodular maximization as a blackbox routine. Therefore, our algorithms are extremely easy to implement. Moreover, they are very efficient in practice.
In terms of approximation we achieve $\alpha=1/2$, but our experimental evaluation shows that our faster
algorithms work as well as existing algorithms with theoretical $\alpha=(1-1/e)$, while achieving orders of magnitude of computational savings.
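To make the blackbox concrete, the following sketches a cost-aware greedy subroutine of this flavor (illustrative only; plain greedy by itself does not attain the guarantees discussed above, and our algorithms wrap such a routine):

```python
def greedy_team(experts, skills_of, cost_of, task_skills, k):
    """Pick up to k experts, each time adding the one maximizing the
    marginal gain of g(Q) = f(Q) - c(Q); stop when no gain is positive."""
    team, covered = set(), set()
    for _ in range(k):
        def gain(e):
            # marginal coverage of e minus its hiring cost
            return len((skills_of[e] & task_skills) - covered) - cost_of[e]
        candidates = [e for e in experts if e not in team]
        if not candidates:
            break
        best = max(candidates, key=gain)
        if gain(best) <= 0:
            break
        team.add(best)
        covered |= skills_of[best] & task_skills
    return team

# toy instance (illustrative): expert "b" is never worth hiring
skills = {"a": {"java", "sql"}, "b": {"sql"}, "c": {"ml"}}
costs = {"a": 0.5, "b": 0.4, "c": 0.3}
team = greedy_team({"a", "b", "c"}, skills, costs, {"java", "sql", "ml"}, 3)
```

Each iteration adds the expert whose marginal coverage exceeds its cost by the largest amount; on the toy instance above, expert $b$ is never hired because its only skill is already covered once $a$ joins the team.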
Finally, we demonstrate that our algorithms work in a variety of settings including the offline, the cardinality and
matroid constraints, as well as the streaming and online settings.
In our experiments, we demonstrate the efficiency and the practical utility of our proposed algorithms, compared to other baselines as well as existing algorithms. For our experimental evaluation we use real datasets
from online crowdsourcing platforms such as {{\textit{Freelancer}}} and {{\textit{Guru}}}.
Our results show that our algorithms
not only provide solutions that are at least as good as the best known theoretical algorithms, but they are
also extremely efficient in practice.
We want to emphasize, that while the techniques we develop here are motivated by team-formation applications,
they are general and can be applied to other scenarios that were formalized using set cover as a hard requirement in the problem formulation.
Such applications include recommendation systems~\cite{borodin2012max,puthiya2016coverage}
as well as influence maximization in social networks~\cite{borgs2014maximizing, goyal2011data, tang2018online}.
\spara{Contributions:}
\begin{list}{$\bullet$}{}
\item First, we formalize the team-formation problem as a combined optimization problem that takes into consideration both the coverage of the required skills achieved by the team, as well as the cost of the team
itself.
\item We design efficient algorithms with provable approximation guarantees both for the cardinality-constrained as well as the unconstrained problem.
\item We show that these algorithms generalize to the matroid constraints setting as well.
\item We also introduce a new team-formation setting where experts arrive in an online fashion. For this setting we prove that small variants of our algorithms can perform extremely well, both in theory and in practice.
\item In a thorough experimental evaluation where we use data from real online crowdsourcing platforms, we
demonstrate the efficiency and the efficacy of our algorithms in practice.
\end{list}
\spara{Roadmap:} Section~\ref{sec:related} discusses the related work both in terms of the team-formation application as well as previous algorithmic results related to the optimization of a submodular minus a linear function. Section~\ref{sec:preliminaries} introduces our problems. Sections~\ref{sec:alg-cardinality}, \ref{sec:alg-matroid}, \ref{sec:alg-online}, \ref{sec:alg-streaming}, and \ref{sec:alg-variants} present our algorithms for the different settings. We provide a thorough experimental evaluation in Section~\ref{sec:experiments} and conclude the paper in Section~\ref{sec:conclusions}.
\section{Related work}\label{sec:related}
Here we review the related work, both in terms of team-formation applications as well as algorithmic
work on the development of approximation algorithms for functions similar to the ones we seek to optimize.
\spara{Team formation with complete task coverage:}
Lappas~{{\emph{et al.}}} \cite{lappas2009finding} were the first to introduce the notion of team formation in the setting of a social network.
Given a network of experts with skills, their goal is to find a team that collectively covers all the requirements of a single task, while incurring a small communication cost (in terms of the network) between the team members.
A series of subsequent works~\cite{anagnostopoulos2012online,bhowmik2014submodularity,kargar2013finding,kargar2011discovering,kargar2012efficient,majumder2012capacitated, li2015team,yin2018social} extended that work.
All these works share two common assumptions:
$(i)$ all the required task skills need to be covered by the formed teams,
$(ii)$ the experts are organized in a network and the created teams optimize some cost function related to the team members' distances in the network.
In this work we do not assume the existence of a network among the experts and the tasks need not be fully covered.
Another line of works does not assume an underlying expert network and defines linear team cost functions, such as the load of the experts and the personnel cost~\cite{anagnostopoulos2010power,anagnostopoulos18algorithms,golshan14profit,kargar2013finding}.
These works also require that the task requirements are completely covered,
while in our setting we relax this constraint and optimize for the combined
objective of the coverage minus the cost.
Finally, another line of research considers the problem of online team formation~\cite{anagnostopoulos2010power,anagnostopoulos2012online,anagnostopoulos18algorithms} where the tasks arrive in an online fashion.
In these works, there is again the hard requirement that all skills of any task need to be
covered, while we do not have this hard constraint in our problem definition.
In our online setting, the task is given as part of the input in the beginning and the experts become available in an online fashion. This setting differs from the ones considered above and, as a result, so do the computational problems we study.
The work closest to ours is probably that of Dorn and Dustdar~\cite{dorn2010composing}, who introduce a multi-objective team-composition problem with two objectives: skill coverage and communication cost (with respect to the experts' network).
Their goal is to identify the best balance between the two objectives.
For this purpose, they use a set of heuristics that self-adjust a
trade-off parameter to decide team configurations.
There are two significant differences between their setting and ours:
first, although the authors permit partial coverage, they focus on data extraction rather than algorithm design and attempt to solve the problem using heuristic approaches.
In our work, we design algorithms with provable approximation guarantees.
Second, our goal is to design efficient algorithms with respect to running time, while Dorn and Dustdar do not touch upon this issue.
\spara{Applications of maximizing submodular functions:}
There are several applications that require the maximization of a submodular function subject to some constraint~\cite{krause2014submodular}.
Two such applications are social influence maximization~\cite{borgs2014maximizing, goyal2011data, tang2018online} and result diversification in recommender systems~\cite{borodin2012max,puthiya2016coverage}.
In our objective, we maximize a monotone submodular function minus a linear function;
thus, unlike in all previous works, our objective is not necessarily positive.
As discussed in the introduction, this makes the algorithmic problem we are solving much harder.
\spara{Algorithmic approaches:}
Most of the existing algorithms for (monotone or general) submodular maximization crucially rely on the assumption that the function is non-negative and thus they do not immediately apply to this setting. Indeed, as discussed in the introduction, the problem of maximizing a potentially negative submodular function is inapproximable in the following sense: it is $\mathbf{NP}$-hard to determine whether the optimum value is positive or not, and thus no multiplicative factor approximation is possible.
Nevertheless, the objective functions that we consider have beneficial structure, and several works~\cite{feldman2019guess,harshaw2019submodular,sviridenko2017optimal} have shown that we can obtain meaningful guarantees provided we aim for a slightly weaker notion of approximation.
More specifically,
these works give algorithms that construct a solution $S$
satisfying (in our notation)
\[
{{\ensuremath{f}}}(Q)-{{\ensuremath{c}}}(Q)\geq\alpha\cdot {{\ensuremath{f}}}({{\ensuremath{\mathrm{OPT}}}})-{{\ensuremath{c}}}({{\ensuremath{\mathrm{OPT}}}})
\]
for some $\alpha\leq1$, where ${{\ensuremath{\mathrm{OPT}}}}$ is an optimal solution to the problem. Sviridenko~{{\emph{et al.}}}~\cite{sviridenko2017optimal} reduce
the problem $\max_{Q\in\mathcal{I}}{{\ensuremath{f}}}(Q)-{{\ensuremath{c}}}(Q)$, where $\mathcal{I}$
is a matroid constraint, to the problem of maximizing ${{\ensuremath{f}}}(Q)$ subject
to both a knapsack constraint ${{\ensuremath{c}}}(Q)\leq B$ and the matroid constraint.
This is achieved by approximately guessing ${{\ensuremath{c}}}({{\ensuremath{\mathrm{OPT}}}})$ and using each
of those guesses as the knapsack budget $B$. For each fixed guess,
the resulting problem can be solved using a variant of the continuous
greedy algorithm, and the resulting solution satisfies (up to a small
error) ${{\ensuremath{f}}}(Q)-{{\ensuremath{c}}}(Q)\geq\left(1-\frac{1}{e}\right){{\ensuremath{f}}}({{\ensuremath{\mathrm{OPT}}}})-{{\ensuremath{c}}}({{\ensuremath{\mathrm{OPT}}}})$,
which is the best guarantee one can hope for. Feldman \cite{feldman2019guess}
showed that the guessing step can be removed and one can obtain the
same approximation guarantee by using the continuous greedy algorithm
on a distorted objective function: at time $t$, the objective being
optimized is $e^{t-1}F(x)-\langle w,x\rangle$, where $F$ is the
multilinear extension of ${{\ensuremath{f}}}$. Harshaw {{\emph{et al.}}}~\cite{harshaw2019submodular}
showed that, for the special case of a cardinality constraint, one
can obtain a much more efficient algorithm by combining the time-varying
distortion approach with the standard Greedy algorithm.
In this paper, we take these ideas one step further and obtain very
simple and efficient algorithms that work in a variety of models,
including the offline, online, and streaming models. Our algorithms use a
very simple scaling approach: we pick an absolute constant $s\geq1$
and optimize the function ${{\ensuremath{f}}}(Q)-s\cdot {{\ensuremath{c}}}(Q)$ using a black-box application
of standard algorithms, such as the classical Greedy algorithm and
the single-threshold Greedy algorithm.
\section{Problem definition}\label{sec:preliminaries}
Throughout the paper we will assume a set of skills $S$ and
$n$ experts $V = \{1,\ldots , n\}$ such that each expert $i$ is associated with a subset of the
skills $S_i\subseteq S$. These are the skills that expert $i$ masters.
To refer to expert $i$, we use the notation $i$ and $e_i$ interchangeably.
Each expert $i$ is also associated with a \emph{cost} $c_i$. This cost corresponds to the
cost that one has to pay in order to hire this expert.
We assume that there is a task $T\subseteq S$, i.e., a set of skills that are required in order for the task to be completed.
Given a subset of experts $Q\subseteq V$, we define the \emph{coverage} of $Q$ to be:
\[
{{\ensuremath{f}}}(Q) = \left|\Big(\bigcup_{i\in Q} S_i\Big)\cap T\right|.
\]
Similarly, given a subset of experts $Q\subseteq V$, we define the \emph{cost} of $Q$ to be:
\[
{{\ensuremath{c}}}(Q) = \sum_{i\in Q} c_i.
\]
\begin{problem}[{{\sc Cov-Cost}}]\label{problem:unconstrained}
Given a set of experts $V$ and task $T$, find a subset of experts $Q\subseteq V$ such that
\begin{equation}
g(Q) = \lambda\cdot {{\ensuremath{f}}}(Q) - {{\ensuremath{c}}}(Q)
\end{equation}
is maximized.
\end{problem}
In the above definition, $\lambda$ is a normalization coefficient that expresses our bias between coverage and cost.
One can also think of $\lambda$ as a way to convert the two quantities, which are defined in different scales, into the same units.
Determining its value is application-dependent and
we explain how to select this value in Section~\ref{sec:picklambda}.
Our algorithmic analysis is independent of this coefficient, and therefore from now on we will use
${{\ensuremath{f}}}(Q)$ to refer to $\lambda\cdot{{\ensuremath{f}}}(Q)$.
We will also refer to
$g(Q) = {{\ensuremath{f}}}(Q) - {{\ensuremath{c}}}(Q)$ as the \emph{combined objective function}.
The combined objective $g$ is a submodular function
defined as the difference between a non-negative monotone submodular function (${{\ensuremath{f}}}$) and a
non-negative linear function (${{\ensuremath{c}}}$).
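To make the definitions concrete, the following is a minimal Python sketch of the coverage, cost, and combined objective. The skill sets, costs, and value of $\lambda$ below are hypothetical toy data for illustration only:

```python
# Hypothetical toy instance: experts 0..2, each with a skill set and a cost.
SKILLS = {0: {"java", "sql"}, 1: {"sql", "ml"}, 2: {"ml", "nlp"}}
COSTS = {0: 1.0, 1: 2.0, 2: 1.5}
TASK = {"sql", "ml", "nlp"}
LAM = 2.0  # normalization coefficient lambda, folded into f as in the text

def f(Q):
    """Coverage f(Q): number of task skills covered by the experts in Q."""
    covered = set()
    for i in Q:
        covered |= SKILLS[i]
    return LAM * len(covered & TASK)

def c(Q):
    """Linear cost c(Q): total hiring cost of the experts in Q."""
    return sum(COSTS[i] for i in Q)

def g(Q):
    """Combined objective g(Q) = f(Q) - c(Q); note that it may be negative."""
    return f(Q) - c(Q)
```

Note that $g(\emptyset)=0$, so in the unconstrained problem the optimum is always non-negative, even though $g$ itself can take negative values on expensive teams.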
When we consider instances of the {{\sc Cov-Cost}} problem such that $|Q|\leq k$ we obtain the cardinality-constraint problem defined as follows.
\begin{problem}[{{\sc $k$-Cov-Cost}}]\label{problem:constrained}
Given a set of experts $V$, task $T$ and an integer $k$, find a subset of experts $Q\subseteq V$ such that
$|Q|\leq k$ and
\begin{equation}
g(Q) = {{\ensuremath{f}}}(Q) - {{\ensuremath{c}}}(Q)
\label{pb:kproblem}
\end{equation}
is maximized.
\end{problem}
In addition to a cardinality constraint, we consider a general matroid constraint. We recall the definition of a matroid and we refer the reader to \cite{Schrijver2003} for a more in-depth overview of matroids.
\begin{defn}[Matroid]
Let $\mathcal{M}=(V,\mathcal{I})$, where $V$ is a finite
ground set and $\mathcal{I}$ is a collection of subsets of $V$.
We refer to each set in $\mathcal{I}$ as an independent set. Then
$\mathcal{M}$ is a matroid if the collection $\mathcal{I}$ satisfies
the following properties:
\begin{enumerate}
\item The empty set is independent: $\emptyset\in\mathcal{I}$.
\item (hereditary property) Every subset of an independent set is independent:
if $A\subseteq B$ and $B\in\mathcal{I}$ then $A\in\mathcal{I}$.
\item (augmentation or exchange property) If $A$ and $B$ are
two independent sets and $|A|>|B|$, then there exists $e\in A\setminus B$
such that $B\cup\{e\}\in\mathcal{I}$.
\end{enumerate}
\end{defn}
We now formulate our problem under the general matroid constraint setting.
\begin{problem}[{{\sc Matroid-Cov-Cost}}]\label{problem:matroidconstrained}
Given a set of experts $V$, a task $T$ and a set of independent sets $\mathcal{I}$ find a subset of experts $Q\subseteq V$ such that $Q\in\mathcal{I}$ and
\begin{equation}
g(Q) = {{\ensuremath{f}}}(Q) - {{\ensuremath{c}}}(Q)
\end{equation}
is maximized.
\end{problem}
We see that the problem {{\sc $k$-Cov-Cost}} is a special case of the general matroid constraint problem {{\sc Matroid-Cov-Cost}}, where the set of independent sets is $\mathcal{I} = \{Q\subseteq V: |Q|\leq k\}$.
Another well-known example of a matroid constraint is the partition matroid in which the ground set $V$ is partitioned into disjoint sets $V_1,V_2,\ldots,V_{\ell}$ and $\mathcal{I} = \{Q\subseteq V: |Q\cap V_i|\leq k_i$ for all $i=1,2,\ldots,\ell\}$.
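Both matroids mentioned above admit simple membership oracles. The sketch below checks independence directly from the definitions; the `part_of`/`caps` encodings are our own illustrative choices:

```python
def uniform_independent(Q, k):
    """Uniform matroid: Q is independent iff |Q| <= k."""
    return len(Q) <= k

def partition_independent(Q, part_of, caps):
    """Partition matroid: Q is independent iff |Q ∩ V_i| <= k_i for every
    part i, where part_of maps each element to the index of its part."""
    counts = {}
    for e in Q:
        i = part_of[e]
        counts[i] = counts.get(i, 0) + 1
    return all(counts[i] <= caps[i] for i in counts)
```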
\spara{General objectives:} Our algorithms, which we present in the subsequent sections, solve the above mentioned problems in the general setting when $f$ is any monotone and non-negative submodular function (not just the coverage function) and $c$ is any non-negative linear function. Thus our algorithms are applicable beyond the team formation setting that motivated this work.
We recall that a set function $h: 2^V \rightarrow \mathbb{R}$ is \emph{monotone} if
\begin{align*}
h(S) \leq h(T) \quad \forall S \subseteq T\subseteq V.
\end{align*}
The set function $h: 2^V \rightarrow \mathbb{R}$ is \emph{submodular} if it satisfies the following diminishing-returns property:
\begin{align*}
h(T\cup \{u\}) - h(T) \leq h(S\cup \{u\}) - h(S) \quad \forall S \subseteq T \subseteq V,\ u \in V \setminus T.
\end{align*}
An equivalent definition of submodularity is the following:
\begin{align*}
h(S) + h(T) \geq h(S\cap T) + h(S \cup T) \quad \forall S, T \subseteq V.
\end{align*}
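On a small ground set, both properties can be verified exhaustively. The sketch below (with hypothetical toy data) confirms that the coverage function is monotone and has diminishing returns, and that a supermodular counterexample fails the check:

```python
from itertools import combinations

SKILLS = {0: {1, 2}, 1: {2, 3}, 2: {3}}  # hypothetical toy instance
TASK = {1, 2, 3}

def coverage(Q):
    covered = set()
    for i in Q:
        covered |= SKILLS[i]
    return len(covered & TASK)

def check_monotone_submodular(h, V):
    """Exhaustively test monotonicity and the diminishing-returns property."""
    subsets = [set(s) for r in range(len(V) + 1)
               for s in combinations(sorted(V), r)]
    for S in subsets:
        for T in subsets:
            if not S <= T:
                continue
            if h(S) > h(T):  # violates monotonicity
                return False
            for u in V - T:  # violates diminishing returns
                if h(T | {u}) - h(T) > h(S | {u}) - h(S):
                    return False
    return True
```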
\spara{Approximation guarantees:} Note that while function ${{\ensuremath{f}}}$ is monotone submodular and non-negative, the combined objective function $g$ is a \emph{potentially negative} submodular function. As discussed in the introduction, no multiplicative factor approximation is possible for the problem of maximizing a submodular function that is potentially negative. Similarly to previous work (see Section~\ref{sec:related}), our algorithms construct solutions with the following kind of weaker approximation guarantees:
\begin{equation*}
{{\ensuremath{f}}}(Q)-{{\ensuremath{c}}}(Q)\geq \alpha\cdot {{\ensuremath{f}}}({{\ensuremath{\mathrm{OPT}}}})-{{\ensuremath{c}}}({{\ensuremath{\mathrm{OPT}}}}),
\end{equation*}
where ${{\ensuremath{\mathrm{OPT}}}}$ is an optimal solution to the problem and $\alpha\leq 1$.
\spara{Online problems:}
In addition to the offline setting, we study the above problems in online and streaming models of computation. We consider the unconstrained problem in the online model where the experts arrive in an online fashion, one at a time, in an arbitrary (adversarial) order. When an expert arrives, we need to decide whether to add it to the solution, and this decision is irrevocable. We refer to this problem as {{\sc Online-Cov-Cost}}.
We also consider the cardinality-constrained problem (Problem~\ref{problem:constrained}) in the streaming model, where the experts arrive one at a time as in the online setting, but we are allowed to store a small set of experts in memory and select the final solution from the stored experts. We refer to this problem as {{\sc Streaming-$k$-Cov-Cost}}.
The online setting poses additional conceptual difficulties as we need to make decisions without knowing the full set of experts.
\section{Algorithm for {{\sc $k$-Cov-Cost}}}
\label{sec:alg-cardinality}
\begin{algorithm}[t]
\textbf{Input:} Set of experts $V$, scaled objective $\tilde{g}(Q)={{\ensuremath{f}}}(Q)-2{{\ensuremath{c}}}(Q)$, cardinality $k$. \\
\textbf{Output:} Team $Q$.
\begin{algorithmic}[1]
\STATE $Q\gets\emptyset$
\FOR{$i=1,\ldots,k$}
\STATE $e_{i} = \arg\max_{e\in V}\tilde{g}(e|Q)$\label{line:condition}
\IF{$\tilde{g}(e_i \vert Q) \leq 0$}
\STATE break
\ENDIF
\STATE $Q\gets Q\cup\{e_{i}\}$
\ENDFOR
\RETURN{$Q$}
\end{algorithmic}
\caption{\label{algo:cardinality}The {{\texttt{CSG}}} algorithm for the cardinality-constrained problem {{\sc $k$-Cov-Cost}}.}
\end{algorithm}
In this section, we describe and analyze our algorithm for the cardinality-constrained problem {{\sc $k$-Cov-Cost}}. We refer to this algorithm as {{\texttt{CSG}}} (Cost Scaled Greedy) and present its outline in Algorithm \ref{algo:cardinality}. For a set function $h$, we use the notation $h(e\vert Q):=h(Q\cup\{e\})-h(Q)$ to denote the marginal gain of $e$ on top of $Q$.
Our approach is to simply apply the standard Greedy algorithm to the scaled objective $\tilde{g}(Q) = {{\ensuremath{f}}}(Q)- 2{{\ensuremath{c}}}(Q)$. We remind the reader that the Greedy algorithm was developed for maximizing \emph{monotone} and \emph{non-negative} submodular functions, whereas the scaled objective $\tilde{g}$ (as well as $g$) is non-monotone and potentially negative. The non-monotone nature of the objective leads to marginal gains being potentially negative. To account for this, we modify the Greedy algorithm so that it only selects elements that have positive marginal gain. Throughout the paper, by elements we mean the elements of the ground set $V$, i.e., the experts.
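Assuming $f$ and $c$ are given as callables, the greedy loop on the scaled objective can be sketched in a few lines of Python; the toy instance below is hypothetical and only serves to exercise the sketch:

```python
# Hypothetical toy instance; lambda = 2 is folded into f.
SKILLS = {0: {"a", "b"}, 1: {"b", "c"}, 2: {"d"}}
COST = {0: 0.5, 1: 0.6, 2: 10.0}
TASK = {"a", "b", "c", "d"}

def f(Q):
    covered = set()
    for i in Q:
        covered |= SKILLS[i]
    return 2.0 * len(covered & TASK)

def csg(V, k):
    """CSG sketch: greedy on the scaled objective f(Q) - 2 c(Q), adding the
    element with the largest marginal gain as long as that gain is positive."""
    Q = set()
    for _ in range(k):
        best, best_gain = None, 0.0
        for e in V - Q:
            gain = f(Q | {e}) - f(Q) - 2.0 * COST[e]
            if gain > best_gain:
                best, best_gain = e, gain
        if best is None:  # no element has positive marginal gain: stop early
            break
        Q.add(best)
    return Q
```

On this instance the expensive expert 2 is never selected, even though adding it would increase coverage, because its scaled marginal gain is negative.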
\spara{Extension to matroid constraints:} Similarly to the standard Greedy algorithm, our algorithm and its analysis readily extends to the more general matroid-constrained problem {{\sc Matroid-Cov-Cost}}. We give this extension in Section~\ref{sec:alg-matroid}. The matroid algorithm can also be sped up using the lazy evaluation technique as described below.
\spara{Running time analysis and speeding up the algorithm using lazy evaluations:}
Similarly to the standard Greedy algorithm, the running time of {{\texttt{CSG}}} is $O(nk)$ evaluations of the functions ${{\ensuremath{f}}}$ and ${{\ensuremath{c}}}$, where $n$ is the number of experts and $k$ is the cardinality constraint: there are $k$ iterations and, in each iteration, we spend $O(n)$ function evaluations to compute all of the marginal gains and find the expert with maximum marginal gain.
We can speed up the algorithm using the lazy evaluations technique introduced by Minoux \cite{minoux1978accelerated} for the standard Greedy algorithm, which we now outline. The computational bottleneck of the algorithm is in finding the element with maximum marginal gain $\tilde{g}(e|Q)$ in every iteration. To speed up these computations and avoid unnecessary evaluations, we store each element in a maximum priority queue with a key $v(e)$. We initialize the keys to $v(e)=\tilde{g}(e|\emptyset)$. The keys are storing potentially outdated marginal gains and the algorithm updates them in a lazy fashion. Since $\tilde{g}$ is submodular, marginal gains can only decrease as the solution $Q$ grows and, as a result, the keys are always an upper bound on the corresponding marginal gains. In each iteration, the algorithm uses the queue to find the element with maximum marginal gain as follows. We remove from the queue the element $e$ with maximum key and evaluate its marginal gain $\tilde{g}(e|Q)$ with respect to the current solution $Q$. We then compare the marginal gain $\tilde{g}(e|Q)$ to the key $v(e')$ of the element $e'$ that is now at the top of the queue (before removing $e$ from the queue, $e'$ was the element with the second-largest key). If $\tilde{g}(e|Q)\geq v(e')$, then $e$ is the element with largest marginal gain, since the key of every element is an upper bound on its current marginal gain. Otherwise, we reinsert $e$ into the queue with key $\tilde{g}(e|Q)$ and repeat.
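The pop/re-insert logic described above can be sketched with a binary heap. The toy data below is hypothetical; `heapq` is a min-heap, so keys are stored negated:

```python
import heapq

SKILLS = {0: {"a", "b"}, 1: {"b", "c"}, 2: {"d"}}  # hypothetical toy data
COST = {0: 0.5, 1: 0.6, 2: 10.0}
TASK = {"a", "b", "c", "d"}

def scaled_gain(e, Q):
    """Marginal gain of e on top of Q for the scaled objective f - 2c."""
    def f(S):
        covered = set()
        for i in S:
            covered |= SKILLS[i]
        return 2.0 * len(covered & TASK)
    return f(Q | {e}) - f(Q) - 2.0 * COST[e]

def lazy_greedy(V, k):
    """Lazy-evaluation sketch: heap keys upper-bound the true marginal gains
    by submodularity, so a re-evaluated gain that still beats the next key
    identifies the element with maximum marginal gain."""
    Q = set()
    heap = [(-scaled_gain(e, set()), e) for e in V]
    heapq.heapify(heap)
    while len(Q) < k and heap:
        _, e = heapq.heappop(heap)
        gain = scaled_gain(e, Q)          # re-evaluate against the current Q
        if heap and gain < -heap[0][0]:   # stale key: re-insert and retry
            heapq.heappush(heap, (-gain, e))
            continue
        if gain <= 0:
            break
        Q.add(e)
    return Q
```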
We use {{\texttt{CSLG}}} to refer to this implementation of the {{\texttt{CSG}}} algorithm with lazy evaluations. The correctness of {{\texttt{CSLG}}} follows directly from submodularity, and the solution constructed is the same as that of {{\texttt{CSG}}}. The worst-case running time of {{\texttt{CSLG}}} is the same as that of {{\texttt{CSG}}}. However, it is well established in the literature that the lazy evaluations lead to significant speedups over the standard greedy procedure. As discussed in more detail in the experimental section, we observed significant speedups in our empirical evaluation.
Finally, we remark that there is also an approximate version of the lazy evaluations technique that allows us to obtain worst-case running time that is nearly-linear at a small loss in the approximation guarantee \cite{badanidiyuru2014fast}. We do not consider this variant in this paper.
\spara{Analysis of the approximation guarantee.} Our analysis builds on the analysis of the standard Greedy algorithm. Throughout the paper, we assume that there is a solution with positive objective, i.e., ${{\ensuremath{f}}}(Q)-{{\ensuremath{c}}}(Q)>0$ for some feasible solution $Q$ (this is for simplicity and without loss of generality, since otherwise $\emptyset$ is feasible and is optimal).
\begin{thm}
\label{thm:cardinality}
Algorithm \ref{algo:cardinality} returns a solution $Q$ of size at most $k$ satisfying ${{\ensuremath{f}}}(Q)-{{\ensuremath{c}}}(Q)\geq\frac{1}{2}{{\ensuremath{f}}}({{\ensuremath{\mathrm{OPT}}}})-{{\ensuremath{c}}}({{\ensuremath{\mathrm{OPT}}}})$.
\end{thm}
\begin{proof}
For analysis purposes, we consider the Greedy ordering of $Q \cup {{\ensuremath{\mathrm{OPT}}}}$, i.e., the ordering
\begin{equation}
\label{eq:greedy-ordering}
e_1, e_2, \dots, e_{|Q\cup {{\ensuremath{\mathrm{OPT}}}}|} \text{ where } e_i \in \arg\max_{e \in (Q \cup {{\ensuremath{\mathrm{OPT}}}}) \setminus \{e_1, \dots, e_{i-1} \}} \tilde{g}(e \vert \{e_1, \dots, e_{i-1}\}) \tag{GreedyOrder}
\end{equation}
That is, we select the next element $e_i$ in the ordering to be the element from the remaining set with maximum marginal gain on top of the previously selected elements $e_1, \dots, e_{i-1}$.
It follows from the execution of {{\texttt{CSG}}} that the first $|Q|$ elements in the Greedy ordering (\ref{eq:greedy-ordering}) are the elements of $Q$ in the order in which they were added to $Q$ by the algorithm.
Let $S^{(i)} = \{e_1, \dots, e_i\}$ for all $1\leq i \leq |Q\cup {{\ensuremath{\mathrm{OPT}}}}|$. Let $\ell = |{{\ensuremath{\mathrm{OPT}}}}|$. Consider the solution $S^{(\ell)}$. Since $S^{(\ell)}$ and ${{\ensuremath{\mathrm{OPT}}}}$ have the same size, there is a bijection $\pi: {{\ensuremath{\mathrm{OPT}}}} \to S^{(\ell)}$ such that, for every $i \leq \ell$, $\pi^{-1}(e_i)$ appears after or at the same position as $e_i$ in the Greedy ordering (\ref{eq:greedy-ordering}), i.e., $\pi^{-1}(e_i) = e_j$ for some index $j \geq i$. We can obtain such a mapping $\pi$ by iteratively matching each element of ${{\ensuremath{\mathrm{OPT}}}}$ to the earliest element of $S^{(\ell)}$ that is still unmatched. Since $|{{\ensuremath{\mathrm{OPT}}}}| = |S^{(\ell)}|$ and $S^{(\ell)}$ is comprised of the first $\ell$ elements in the Greedy ordering, every element $o \in {{\ensuremath{\mathrm{OPT}}}}$ will be matched to exactly one element $\pi(o) \in S^{(\ell)}$ such that $\pi(o)$ appears no later than $o$ in the Greedy ordering, as needed.
We can use this bijective mapping $\pi$ to charge ${{\ensuremath{\mathrm{OPT}}}}$ to $S^{(\ell)}$ as follows. By construction of the Greedy ordering and $\pi$, for every $i \leq \ell$, we have
\[ \tilde{g}(e_i \vert S^{(i-1)}) \geq \tilde{g}(\pi^{-1}(e_i) \vert S^{(i-1)}) \]
Let ${{\ensuremath{\mathrm{OPT}}}}^{(i)} = \pi^{-1}(S^{(i)})$ for all $i \leq \ell$. By submodularity and the fact that ${{\ensuremath{\mathrm{OPT}}}}^{(i)} = {{\ensuremath{\mathrm{OPT}}}}^{(i-1)} \cup \{\pi^{-1}(e_i)\}$, we have
\[ \tilde{g}(\pi^{-1}(e_i) \vert S^{(i-1)}) \geq \tilde{g}(\pi^{-1}(e_i) \vert S^{(\ell)} \cup {{\ensuremath{\mathrm{OPT}}}}^{(i-1)}) = \tilde{g}(S^{(\ell)}\cup {{\ensuremath{\mathrm{OPT}}}}^{(i)}) - \tilde{g}(S^{(\ell)} \cup {{\ensuremath{\mathrm{OPT}}}}^{(i-1)})\]
Therefore
\[ \tilde{g}(e_i \vert S^{(i-1)}) \geq \tilde{g}(S^{(\ell)}\cup {{\ensuremath{\mathrm{OPT}}}}^{(i)}) - \tilde{g}(S^{(\ell)} \cup {{\ensuremath{\mathrm{OPT}}}}^{(i-1)})\]
We sum up the above inequalities over all $i \leq \ell$. Note that the sums telescope. Additionally, we have ${{\ensuremath{\mathrm{OPT}}}}^{(\ell)} = \pi^{-1}(S^{(\ell)}) = {{\ensuremath{\mathrm{OPT}}}}$. Thus we obtain
\[ \tilde{g}(S^{(\ell)}) - \tilde{g}(\emptyset) \geq \tilde{g}(S^{(\ell)} \cup {{\ensuremath{\mathrm{OPT}}}}) - \tilde{g}(S^{(\ell)}) \]
and thus, using $\tilde{g}(\emptyset) = {{\ensuremath{f}}}(\emptyset) \geq 0$,
\begin{equation}
\label{eq:Sell}
\tilde{g}(S^{(\ell)}) \geq \frac{1}{2} \tilde{g}(S^{(\ell)} \cup {{\ensuremath{\mathrm{OPT}}}})
\end{equation}
Next, we show that
\begin{equation}
\label{eq:scaled-obj}
\tilde{g}(Q) \geq \tilde{g}(S^{(\ell)})
\end{equation}
We show the above inequality by considering two cases. Suppose first that $|Q| \geq \ell$. Then $S^{(\ell)} \subseteq Q$ and, since the algorithm only adds elements with positive marginal gain, we have
\[ \tilde{g}(Q) - \tilde{g}(S^{(\ell)}) = \sum_{i=\ell+1}^{|Q|} \tilde{g}(e_i \vert S^{(i-1)}) \geq 0 \]
Suppose now that $|Q| < \ell$. Then $Q \subseteq S^{(\ell)}$. The algorithm terminated because the marginal gain of every element on top of $Q$ was non-positive, and thus, by submodularity (since $Q \subseteq S^{(i-1)}$ for every $i > |Q|$), we have
\[ \tilde{g}(S^{(\ell)}) - \tilde{g}(Q) = \sum_{i = |Q|+1}^{\ell} \tilde{g}(e_i \vert S^{(i-1)}) \leq \sum_{i = |Q|+1}^{\ell} \tilde{g}(e_i \vert Q) \leq 0 \]
Thus, by (\ref{eq:Sell}) and (\ref{eq:scaled-obj}), we have
\[ \tilde{g}(Q) \geq \frac{1}{2} \tilde{g}(S^{(\ell)} \cup {{\ensuremath{\mathrm{OPT}}}}) \]
Recall that $\tilde{g} = {{\ensuremath{f}}} - 2{{\ensuremath{c}}}$. Thus
\[ {{\ensuremath{f}}}(Q) - {{\ensuremath{c}}}(Q) \geq \frac{1}{2} {{\ensuremath{f}}}(S^{(\ell)} \cup {{\ensuremath{\mathrm{OPT}}}}) - {{\ensuremath{c}}}(S^{(\ell)} \cup {{\ensuremath{\mathrm{OPT}}}}) + {{\ensuremath{c}}}(Q) \]
Since ${{\ensuremath{f}}}$ is monotone, we have ${{\ensuremath{f}}}(S^{(\ell)} \cup {{\ensuremath{\mathrm{OPT}}}}) \geq {{\ensuremath{f}}}({{\ensuremath{\mathrm{OPT}}}})$. Thus
\[ {{\ensuremath{f}}}(Q) - {{\ensuremath{c}}}(Q) \geq \frac{1}{2} {{\ensuremath{f}}}({{\ensuremath{\mathrm{OPT}}}}) - {{\ensuremath{c}}}(S^{(\ell)} \cup {{\ensuremath{\mathrm{OPT}}}}) + {{\ensuremath{c}}}(Q) \]
Thus, to finish the proof, it only remains to verify that
\[ {{\ensuremath{c}}}({{\ensuremath{\mathrm{OPT}}}}) + {{\ensuremath{c}}}(Q) \geq {{\ensuremath{c}}}(S^{(\ell)} \cup {{\ensuremath{\mathrm{OPT}}}}) \]
As before, we consider two cases: $|Q| \geq \ell$ and $|Q| < \ell$. Suppose that $|Q| \geq \ell$. Then $S^{(\ell)} \subseteq Q$ and thus ${{\ensuremath{c}}}(Q) \geq {{\ensuremath{c}}}(S^{(\ell)})$, since ${{\ensuremath{c}}}$ is non-negative. Thus
\[ {{\ensuremath{c}}}({{\ensuremath{\mathrm{OPT}}}}) + {{\ensuremath{c}}}(Q) \geq {{\ensuremath{c}}}({{\ensuremath{\mathrm{OPT}}}}) + {{\ensuremath{c}}}(S^{(\ell)}) \geq {{\ensuremath{c}}}({{\ensuremath{\mathrm{OPT}}}} \cup S^{(\ell)}) \]
Suppose that $|Q| < \ell$. Then $Q \subseteq S^{(\ell)}$ and $S^{(\ell)} \setminus Q \subseteq {{\ensuremath{\mathrm{OPT}}}}$. Thus ${{\ensuremath{\mathrm{OPT}}}} \cup Q = {{\ensuremath{\mathrm{OPT}}}} \cup S^{(\ell)}$ and hence
\[ {{\ensuremath{c}}}({{\ensuremath{\mathrm{OPT}}}}) + {{\ensuremath{c}}}(Q) \geq {{\ensuremath{c}}}({{\ensuremath{\mathrm{OPT}}}} \cup Q) = {{\ensuremath{c}}}({{\ensuremath{\mathrm{OPT}}}} \cup S^{(\ell)}) \]
Putting everything together, we have
\[ {{\ensuremath{f}}}(Q) - {{\ensuremath{c}}}(Q) \geq \frac{1}{2} {{\ensuremath{f}}}({{\ensuremath{\mathrm{OPT}}}}) - {{\ensuremath{c}}}({{\ensuremath{\mathrm{OPT}}}}) \]
\end{proof}
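As a sanity check, the guarantee of Theorem~\ref{thm:cardinality} can be verified by brute force on a small instance. The data and helper names below are hypothetical, and OPT is computed by enumerating all feasible subsets:

```python
from itertools import combinations

SKILLS = {0: {1, 2}, 1: {2, 3}, 2: {3, 4}, 3: {5}}  # hypothetical toy data
COST = {0: 0.7, 1: 1.1, 2: 0.9, 3: 3.0}
TASK = {1, 2, 3, 4, 5}

def f(Q):
    covered = set()
    for i in Q:
        covered |= SKILLS[i]
    return 1.5 * len(covered & TASK)

def c(Q):
    return sum(COST[i] for i in Q)

def csg(k):
    """Greedy on f - 2c with cardinality bound k (the CSG sketch)."""
    Q = set()
    for _ in range(k):
        best, best_gain = None, 0.0
        for e in set(SKILLS) - Q:
            gain = f(Q | {e}) - f(Q) - 2.0 * COST[e]
            if gain > best_gain:
                best, best_gain = e, gain
        if best is None:
            break
        Q.add(best)
    return Q

def guarantee_holds(k):
    """Check f(Q) - c(Q) >= (1/2) f(OPT) - c(OPT) against a brute-force OPT."""
    Q = csg(k)
    opt = max((set(s) for r in range(k + 1) for s in combinations(SKILLS, r)),
              key=lambda S: f(S) - c(S))
    return f(Q) - c(Q) >= 0.5 * f(opt) - c(opt) - 1e-9
```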
\section{Algorithm for {{\sc Matroid-Cov-Cost}}}
\label{sec:alg-matroid}
In this section, we extend the {{\texttt{CSG}}} algorithm from Section~\ref{sec:alg-cardinality} and its analysis to the more general setting of a matroid constraint, i.e., the {{\sc Matroid-Cov-Cost}} problem $\max_{Q \in \mathcal{I}} f(Q)-c(Q)$ where $\mathcal{I}$ is the collection of independent sets in a matroid $\mathcal{M} = (V, \mathcal{I})$. As before, the algorithm is the standard Greedy algorithm applied to the scaled objective $\tilde{g}(Q) = f(Q) - 2c(Q)$. We refer to this algorithm as {{\texttt{MCSG}}} (Matroid Cost Scaled Greedy) and present its outline in Algorithm \ref{algo:matroid}.
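Assuming the matroid is available as a membership oracle, the extension can be sketched as follows; the toy instance (a partition matroid with two parts) is hypothetical:

```python
SKILLS = {0: {"a"}, 1: {"b"}, 2: {"c"}}  # hypothetical toy instance
COST = {0: 0.2, 1: 0.3, 2: 0.4}
TASK = {"a", "b", "c"}
PART_OF = {0: 0, 1: 0, 2: 1}  # partition matroid: at most one expert per part
CAPS = {0: 1, 1: 1}

def f(Q):
    covered = set()
    for i in Q:
        covered |= SKILLS[i]
    return 2.0 * len(covered & TASK)

def independent(Q):
    """Membership oracle for the partition matroid."""
    counts = {}
    for e in Q:
        counts[PART_OF[e]] = counts.get(PART_OF[e], 0) + 1
    return all(counts[i] <= CAPS[i] for i in counts)

def mcsg(V):
    """MCSG sketch: greedy on f - 2c; prune elements whose addition would
    break independence, and stop when the best feasible gain is non-positive."""
    Q, N = set(), set(V)
    while True:
        N = {e for e in N if independent(Q | {e})}  # remove infeasible elements
        if not N:
            break
        gains = {e: f(Q | {e}) - f(Q) - 2.0 * COST[e] for e in N}
        e = max(gains, key=gains.get)
        if gains[e] <= 0:
            break
        Q.add(e)
        N.discard(e)
    return Q
```

On this instance the greedy first picks expert 0, which excludes expert 1 (same part), and then picks expert 2 from the other part.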
\spara{Running time.}
The worst-case running time of {{\texttt{MCSG}}} is $O(nk)$ evaluations of the functions ${{\ensuremath{f}}}$ and ${{\ensuremath{c}}}$ and $O(nk)$ matroid feasibility checks, where $n = |V|$ and $k$ is the rank of the matroid (the size of the largest independent set). The lazy evaluation technique that we described in Section~\ref{sec:alg-cardinality} can be used to speed up the matroid algorithm as well. We refer to the implementation of the algorithm with lazy evaluations as {{\texttt{MCSLG}}}.
Our experimental evaluation is for two classes of matroids: a uniform matroid $\mathcal{I} = \{Q \subseteq V \colon |Q| \leq k\}$ (cardinality constraint), and a partition matroid where $V$ is partitioned into disjoint groups $V_1, \dots, V_{\ell}$ and $\mathcal{I} = \{Q \subseteq V \colon |Q \cap V_i| \leq k_i \; \forall 1 \leq i \leq \ell\}$. For both of these matroids, checking whether an element can be feasibly added to $Q$ as the algorithm progresses takes $O(1)$ time: for each element of the ground set, we store the part $V_i$ to which it belongs; for each part $1 \leq i \leq \ell$, we store the remaining budget $k'_i$.
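The constant-time feasibility bookkeeping just described can be sketched with per-part remaining budgets; the class and field names below are ours, for illustration:

```python
class PartitionBudget:
    """O(1) feasibility checks for a partition matroid as the greedy solution
    grows: store each element's part and the remaining budget of every part."""

    def __init__(self, part_of, caps):
        self.part_of = part_of        # element -> index of its part V_i
        self.remaining = dict(caps)   # part index -> remaining budget k'_i

    def can_add(self, e):
        return self.remaining[self.part_of[e]] > 0

    def add(self, e):
        assert self.can_add(e)
        self.remaining[self.part_of[e]] -= 1
```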
\begin{algorithm}[t]
\textbf{Input:} Set of experts $V$, scaled objective $\tilde{g}(Q)={{\ensuremath{f}}}(Q)-2{{\ensuremath{c}}}(Q)$, matroid $\mathcal{M} = (V, \mathcal{I})$. \\
\textbf{Output:} Team $Q$.
\begin{algorithmic}[1]
\STATE $Q\gets\emptyset$, $N\gets V$
\FOR{$i=1,\ldots,n$}
\IF{$N=\emptyset$}
\STATE break
\ENDIF
\STATE $e_{i} = \arg\max_{e\in N} \tilde{g}(e|Q)$\label{line:condition-matroid}
\IF{$\tilde{g}(e_i \vert Q) \leq 0$}
\STATE break
\ENDIF
\STATE $Q\gets Q\cup\{e_{i}\}$
\STATE remove from $N$ every element $e$ s.t. $Q\cup \{e\}\notin \mathcal{I}$ \label{line:remove-infeasible}
\ENDFOR
\RETURN{$Q$}
\end{algorithmic}
\caption{\label{algo:matroid}The {{\texttt{MCSG}}} algorithm for the matroid-constrained problem {{\sc Matroid-Cov-Cost}}.}
\end{algorithm}
\spara{Approximation guarantee.}
We prove the following guarantee for the algorithm at the end of the section:
\begin{thm}
\label{thm:matroid}
Algorithm \ref{algo:matroid} returns a solution $Q\in\mathcal{I}$ satisfying ${{\ensuremath{f}}}(Q)-{{\ensuremath{c}}}(Q)\geq\frac{1}{2}{{\ensuremath{f}}}({{\ensuremath{\mathrm{OPT}}}})-{{\ensuremath{c}}}({{\ensuremath{\mathrm{OPT}}}})$.
\end{thm}
Note that the above guarantee matches the approximation of the standard Greedy algorithm for monotone functions (the special case when the costs are equal to $0$). There are simple examples of monotone submodular maximization with a partition matroid constraint for which Greedy only achieves a $\frac{1}{2}$ approximation. There are algorithms that achieve the optimal $1-\frac{1}{e}$ approximation for monotone submodular maximization with a matroid constraint, but these algorithms maximize the multilinear extension of a submodular function --- a continuous extension of the discrete function ${{\ensuremath{f}}}$ to the domain $[0, 1]^V$ --- and are generally very inefficient. Our algorithm is the first algorithm with provable guarantees for the problem of maximizing the difference between a monotone submodular and a linear function subject to a matroid constraint.
\subsection{Proof of Theorem~\ref{thm:matroid}}
In the remainder of this section, we prove Theorem~\ref{thm:matroid}. Note that the algorithm maintains the invariant that $Q \in \mathcal{I}$. Thus we focus on analyzing the objective value. The analysis is similar to the cardinality constraint. The main difference is in constructing the mapping between the solution $Q$ constructed by the algorithm and the optimal solution ${{\ensuremath{\mathrm{OPT}}}}$. To this end, we use the following standard result for matroids due to Brualdi \cite{brualdi1969comments} (see also Chapter 39 of the textbook \cite{Schrijver2003}).
\begin{thm}[Brualdi's theorem]
\label{thm:mapping}
Let $I$ and $J$ be two independent sets in a matroid such that $|I|=|J|$. There is a bijection $\pi\colon I\setminus J\to J\setminus I$ such that $(J\setminus\pi(e))\cup\{e\}$ is independent for every $e\in I\setminus J$.
\end{thm}
Let $Q^{(i)}$ be the solution $Q$ at the end of iteration $i$ of the algorithm. Let $Q$ and $N$ be the respective sets at the end of the algorithm. We partition ${{\ensuremath{\mathrm{OPT}}}}$ into two sets $O_1$ and $O_2$, where $O_2 = {{\ensuremath{\mathrm{OPT}}}} \cap N$ and $O_1 = {{\ensuremath{\mathrm{OPT}}}} \setminus O_2$. Note that, if $N$ is non-empty, the algorithm terminated because $\tilde{g}(e \vert Q) \leq 0$ for all $e\in N$.
We first show that $|Q| \geq |O_1|$. Suppose for contradiction that $|Q| < |O_1|$. By the augmentation property, there is an element $e \in O_1 \setminus Q$ such that $Q \cup \{e\} \in \mathcal{I}$. By the hereditary property, we have $Q^{(i)} \cup \{e\} \in \mathcal{I}$ for all $i \leq |Q|$. Thus $e$ could not have been removed from $N$ on line~\ref{line:remove-infeasible} and hence $e \in {{\ensuremath{\mathrm{OPT}}}} \cap N = O_2$, contradicting the fact that $e \in O_1$.
Let $\ell = |O_1|$. We analyze $Q^{(\ell)}$ and show the following:
\begin{lem}
We have
\[ 2({{\ensuremath{f}}}(Q^{(\ell)}) - {{\ensuremath{c}}}(Q^{(\ell)})) \geq {{\ensuremath{f}}}(Q^{(\ell)} \cup O_1) - 2{{\ensuremath{c}}}(O_1) \]
\end{lem}
\begin{proof}
Recall that we have $|Q^{(\ell)}| = |O_1| = \ell$. Moreover, $Q^{(\ell)}$ and $O_1$ are both independent. We apply Theorem~\ref{thm:mapping} to obtain a bijective mapping $\pi \colon O_1 \to Q^{(\ell)}$ such that $(Q^{(\ell)} \setminus \pi(e)) \cup \{e\} \in \mathcal{I}$ for all $e \in O_1 \setminus Q^{(\ell)}$. We augment $\pi$ by setting $\pi(e) = e$ for all $e \in O_1 \cap Q^{(\ell)}$, and obtain a bijection from $O_1$ to $Q^{(\ell)}$. We let $O_1^{(i)} = \pi^{-1}(Q^{(i)})$.
Consider an iteration $i\leq \ell$ in which $Q^{(i)} \neq Q^{(i-1)}$ and thus $Q^{(i)} = Q^{(i-1)} \cup \{e_i\}$. Let $o_i = \pi^{-1}(e_i)$. If $o_i = e_i$, then clearly
\[ \tilde{g}(e_i \vert Q^{(i-1)}) = \tilde{g}(o_i \vert Q^{(i-1)}) \]
Suppose that $o_i \neq e_i$. We have $o_i \in O_1 \setminus Q^{(\ell)}$ and, by the choice of $\pi$, we have $(Q^{(\ell)} \setminus \{e_i\}) \cup \{ o_i \} \in \mathcal{I}$. Since $(Q^{(\ell)} \setminus \{e_i\}) \supseteq Q^{(j)}$ for all $j < i$, the hereditary property implies that $Q^{(j)} \cup \{o_i\} \in \mathcal{I}$ and thus $o_i$ could not have been removed from $N$ on line~\ref{line:remove-infeasible} in any iteration $j < i$. Thus $o_i \in N$ at the beginning of iteration $i$ and thus $o_i$ is a candidate for $e_i$. Therefore, by the choice of $e_i$, we have
\[ \tilde{g}(e_i \vert Q^{(i-1)}) \geq \tilde{g}(o_i \vert Q^{(i-1)}) \]
Thus, for every iteration $i \leq \ell$ for which $Q^{(i)} = Q^{(i-1)} \cup \{e_i\}$, we have
\[
{{\ensuremath{f}}}(Q^{(i)})-{{\ensuremath{f}}}(Q^{(i-1)})-2{{\ensuremath{c}}}(e_{i})\geq {{\ensuremath{f}}}(Q^{(i-1)}\cup\{o_{i}\})-{{\ensuremath{f}}}(Q^{(i-1)})-2{{\ensuremath{c}}}(o_{i})
\]
where $o_i = \pi^{-1}(e_i)$.
We have
\[
{{\ensuremath{f}}}(Q^{(i-1)}\cup\{o_{i}\})-{{\ensuremath{f}}}(Q^{(i-1)})
= {{\ensuremath{f}}}(o_i \vert Q^{(i-1)})
\geq {{\ensuremath{f}}}(o_i \vert Q^{(\ell)} \cup O_1^{(i-1)})
= {{\ensuremath{f}}}(Q^{(\ell)} \cup O_1^{(i)})-{{\ensuremath{f}}}(Q^{(\ell)} \cup O_1^{(i-1)})
\]
where the inequality is by submodularity and the last equality is by $O_1^{(i)} = O_1^{(i-1)} \cup \{o_i\}$.
Thus
\[
{{\ensuremath{f}}}(Q^{(i)})-{{\ensuremath{f}}}(Q^{(i-1)})-2{{\ensuremath{c}}}(e_{i})\geq {{\ensuremath{f}}}(Q^{(\ell)} \cup O_1^{(i)}) -{{\ensuremath{f}}}(Q^{(\ell)} \cup O_1^{(i-1)}) - 2{{\ensuremath{c}}}(o_{i})
\]
Summing up over all $i \leq \ell$, telescoping, and using that $O_1^{(\ell)}=\pi^{-1}(Q^{(\ell)})=O_1$ and ${{\ensuremath{f}}}(\emptyset)\geq0$, we obtain
\begin{align*}
{{\ensuremath{f}}}(Q^{(\ell)})-2{{\ensuremath{c}}}(Q^{(\ell)}) & \geq {{\ensuremath{f}}}(Q^{(\ell)} \cup O_1) - {{\ensuremath{f}}}(Q^{(\ell)}) - 2{{\ensuremath{c}}}(O_1)\\
\Rightarrow 2({{\ensuremath{f}}}(Q^{(\ell)})-{{\ensuremath{c}}}(Q^{(\ell)})) & \geq {{\ensuremath{f}}}(Q^{(\ell)} \cup O_1)-2{{\ensuremath{c}}}(O_1)
\end{align*}
\end{proof}
\begin{lem}
We have
\[ {{\ensuremath{f}}}(Q) \geq {{\ensuremath{f}}}(Q \cup O_2) - 2{{\ensuremath{c}}}(O_2) \]
\end{lem}
\begin{proof}
We may assume that $O_2 \neq \emptyset$, since otherwise the lemma is immediate. For every $o \in O_2$, we have $\tilde{g}(o \vert Q) \leq 0$ and thus
\[ 2{{\ensuremath{c}}}(o) \geq {{\ensuremath{f}}}(o \vert Q) \]
Summing up and using submodularity, we obtain
\[ 2{{\ensuremath{c}}}(O_2) \geq \sum_{o \in O_2} {{\ensuremath{f}}}(o \vert Q) \geq {{\ensuremath{f}}}(Q \cup O_2) - {{\ensuremath{f}}}(Q) \]
Rearranging, we obtain
\[ {{\ensuremath{f}}}(Q) \geq {{\ensuremath{f}}}(Q \cup O_2) - 2{{\ensuremath{c}}}(O_2) \]
\end{proof}
By combining the two lemmas and using submodularity, we obtain the following result.
\begin{lem}
We have
\[ {{\ensuremath{f}}}(Q^{(\ell)}) + {{\ensuremath{f}}}(Q) - 2{{\ensuremath{c}}}(Q^{(\ell)}) \geq {{\ensuremath{f}}}({{\ensuremath{\mathrm{OPT}}}}) - 2{{\ensuremath{c}}}({{\ensuremath{\mathrm{OPT}}}})\]
\end{lem}
\begin{proof}
By combining the two lemmas above, we obtain
\[
2({{\ensuremath{f}}}(Q^{(\ell)}) - {{\ensuremath{c}}}(Q^{(\ell)})) + {{\ensuremath{f}}}(Q) \geq {{\ensuremath{f}}}(Q^{(\ell)} \cup O_1) + {{\ensuremath{f}}}(Q \cup O_2) - 2{{\ensuremath{c}}}({{\ensuremath{\mathrm{OPT}}}})
\]
Recall that $O_1$ and $O_2$ form a partition of ${{\ensuremath{\mathrm{OPT}}}}$ and $Q^{(\ell)} \subseteq Q$. By submodularity, we have
\[ {{\ensuremath{f}}}(Q^{(\ell)} \cup O_1) + {{\ensuremath{f}}}(Q \cup O_2)
\geq {{\ensuremath{f}}}((Q^{(\ell)} \cup O_1) \cap (Q \cup O_2)) + {{\ensuremath{f}}}((Q^{(\ell)} \cup O_1) \cup (Q \cup O_2))
\geq {{\ensuremath{f}}}(Q^{(\ell)}) + {{\ensuremath{f}}}(Q \cup {{\ensuremath{\mathrm{OPT}}}})
\]
where the second inequality uses monotonicity of ${{\ensuremath{f}}}$: the intersection contains $Q^{(\ell)}$, and the union equals $Q \cup {{\ensuremath{\mathrm{OPT}}}}$ since $O_1 \cup O_2 = {{\ensuremath{\mathrm{OPT}}}}$ and $Q^{(\ell)} \subseteq Q$.
Therefore
\[
{{\ensuremath{f}}}(Q^{(\ell)}) - 2{{\ensuremath{c}}}(Q^{(\ell)}) + {{\ensuremath{f}}}(Q) \geq {{\ensuremath{f}}}(Q \cup {{\ensuremath{\mathrm{OPT}}}}) - 2{{\ensuremath{c}}}({{\ensuremath{\mathrm{OPT}}}})
\]
Since ${{\ensuremath{f}}}$ is monotone, we have ${{\ensuremath{f}}}(Q \cup {{\ensuremath{\mathrm{OPT}}}}) \geq {{\ensuremath{f}}}({{\ensuremath{\mathrm{OPT}}}})$, and thus
\[
{{\ensuremath{f}}}(Q^{(\ell)}) - 2{{\ensuremath{c}}}(Q^{(\ell)}) + {{\ensuremath{f}}}(Q) \geq {{\ensuremath{f}}}({{\ensuremath{\mathrm{OPT}}}}) - 2{{\ensuremath{c}}}({{\ensuremath{\mathrm{OPT}}}})
\]
\end{proof}
Since the algorithm adds elements with positive marginal gain, we obtain the following result.
\begin{lem}
We have
\[ 2({{\ensuremath{f}}}(Q)-{{\ensuremath{c}}}(Q)) \geq {{\ensuremath{f}}}(Q^{(\ell)}) + {{\ensuremath{f}}}(Q) - 2{{\ensuremath{c}}}(Q^{(\ell)}) \]
\end{lem}
\begin{proof}
Since the algorithm adds elements with positive marginal gain, we have $\tilde{g}(Q^{(i)}) - \tilde{g}(Q^{(i-1)}) = \tilde{g}(e_i \vert Q^{(i-1)}) \geq 0$. Thus
\[ \tilde{g}(Q) - \tilde{g}(Q^{(\ell)}) = \sum_{i = \ell+1}^{|Q|} (\tilde{g}(Q^{(i)}) - \tilde{g}(Q^{(i-1)})) \geq 0 \]
Therefore
\[ {{\ensuremath{f}}}(Q) - 2{{\ensuremath{c}}}(Q) \geq {{\ensuremath{f}}}(Q^{(\ell)}) - 2{{\ensuremath{c}}}(Q^{(\ell)}) \]
and thus
\[ 2{{\ensuremath{f}}}(Q) - 2{{\ensuremath{c}}}(Q) \geq {{\ensuremath{f}}}(Q^{(\ell)}) + {{\ensuremath{f}}}(Q) - 2{{\ensuremath{c}}}(Q^{(\ell)}) \]
\end{proof}
Theorem~\ref{thm:matroid} now follows by chaining the last two lemmas: $2({{\ensuremath{f}}}(Q)-{{\ensuremath{c}}}(Q)) \geq {{\ensuremath{f}}}(Q^{(\ell)}) + {{\ensuremath{f}}}(Q) - 2{{\ensuremath{c}}}(Q^{(\ell)}) \geq {{\ensuremath{f}}}({{\ensuremath{\mathrm{OPT}}}}) - 2{{\ensuremath{c}}}({{\ensuremath{\mathrm{OPT}}}})$, and dividing by $2$ gives ${{\ensuremath{f}}}(Q)-{{\ensuremath{c}}}(Q) \geq \frac{1}{2}{{\ensuremath{f}}}({{\ensuremath{\mathrm{OPT}}}}) - {{\ensuremath{c}}}({{\ensuremath{\mathrm{OPT}}}})$.
\section{Online algorithm for {{\sc Cov-Cost}}}
\label{sec:alg-online}
In this section, we consider the unconstrained problem $\max_{Q\in2^{V}}{{\ensuremath{f}}}(Q)-{{\ensuremath{c}}}(Q)$ in the online model where the elements arrive one at a time. When an element arrives, we need to decide whether to add it to the solution, and this decision is irrevocable.
The algorithm, shown in Algorithm~\ref{algo:online}, considers the scaled objective $\tilde{g}(Q)={{\ensuremath{f}}}(Q)-2{{\ensuremath{c}}}(Q)$ and it accepts every element that has positive marginal gain with respect to this scaled objective.
\begin{algorithm}[t]
\textbf{Input:} Stream of experts $V$, scaled objective $\tilde{g} = {{\ensuremath{f}}} - 2{{\ensuremath{c}}}$. \\
\textbf{Output:} Team $Q$.
\begin{algorithmic}[1]
\STATE $Q\gets\emptyset$
\FOR{each arriving element $e$}
\IF{$\tilde{g}(e|Q)>0$}
\STATE $Q\gets Q\cup\{e\}$
\ENDIF
\ENDFOR
\RETURN{$Q$}
\end{algorithmic}
\caption{\label{algo:online}The {{\texttt{OnlineCSG}}} algorithm.}
\end{algorithm}
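As a concrete illustration, the following Python sketch implements the acceptance rule of Algorithm~\ref{algo:online} for a coverage objective. The representation of experts as skill sets (\texttt{skills}) and the cost dictionary (\texttt{cost}) are illustrative assumptions, not part of the algorithm itself.

```python
def online_csg(stream, skills, cost):
    """OnlineCSG sketch: accept an arriving element iff its marginal gain
    with respect to the scaled objective f - 2c is positive.
    Here f(Q) = |union of skill sets of Q| (a coverage function) and
    cost[e] is the linear cost of expert e; both are illustrative."""
    def f(Q):
        return len(set().union(*(skills[e] for e in Q))) if Q else 0

    Q = set()
    for e in stream:  # one pass; decisions are irrevocable
        marginal_gain = f(Q | {e}) - f(Q) - 2 * cost[e]
        if marginal_gain > 0:
            Q.add(e)
    return Q
```

On small instances one can verify by brute force that the returned team satisfies the $\frac{1}{2}{{\ensuremath{f}}}({{\ensuremath{\mathrm{OPT}}}})-{{\ensuremath{c}}}({{\ensuremath{\mathrm{OPT}}}})$ guarantee established below.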
\begin{thm}
Algorithm \ref{algo:online} returns a solution $Q$ satisfying ${{\ensuremath{f}}}(Q)-{{\ensuremath{c}}}(Q)\geq\frac{1}{2}{{\ensuremath{f}}}({{\ensuremath{\mathrm{OPT}}}})-{{\ensuremath{c}}}({{\ensuremath{\mathrm{OPT}}}})$.
\end{thm}
\begin{proof}
For every item $o\in{{\ensuremath{\mathrm{OPT}}}}\setminus Q$, $\tilde{g}(o\vert Q)\leq0$ holds. This is due to the fact that $o$ had non-positive marginal gain when it arrived and the marginal gains can only decrease, since $\tilde{g}$ is submodular. Therefore
\begin{align*}
0 & \geq\sum_{o\in{{\ensuremath{\mathrm{OPT}}}}\setminus Q}\tilde{g}(o\vert Q)\\
& \geq\tilde{g}(Q\cup{{\ensuremath{\mathrm{OPT}}}})-\tilde{g}(Q)\\
& =\Big(\underbrace{{{\ensuremath{f}}}(Q\cup{{\ensuremath{\mathrm{OPT}}}})}_{\geq {{\ensuremath{f}}}({{\ensuremath{\mathrm{OPT}}}})}-{{\ensuremath{f}}}(Q)\Big)-2\Big(\underbrace{{{\ensuremath{c}}}(Q\cup{{\ensuremath{\mathrm{OPT}}}})-{{\ensuremath{c}}}(Q)}_{={{\ensuremath{c}}}({{\ensuremath{\mathrm{OPT}}}}\setminus Q)\leq {{\ensuremath{c}}}({{\ensuremath{\mathrm{OPT}}}})}\Big)\\
& \geq {{\ensuremath{f}}}({{\ensuremath{\mathrm{OPT}}}})-{{\ensuremath{f}}}(Q)-2{{\ensuremath{c}}}({{\ensuremath{\mathrm{OPT}}}})
\end{align*}
The third inequality is by monotonicity of ${{\ensuremath{f}}}$ and non-negativity and linearity of ${{\ensuremath{c}}}$. The second inequality follows from submodularity as follows. Let $O={{\ensuremath{\mathrm{OPT}}}}\setminus Q$ and let $o_{1},o_{2},\dots,o_{|O|}$ be an arbitrary ordering of $O$. Let $O^{(i)}=\{o_{1},\dots,o_{i}\}$.
Then,
\begin{align*}
\tilde{g}(Q\cup O)-\tilde{g}(Q)
=\sum_{i=1}^{|O|}\left(\tilde{g}(Q\cup O^{(i)})-\tilde{g}(Q\cup O^{(i-1)})\right)
=\sum_{i=1}^{|O|}\tilde{g}(o_{i}\vert Q\cup O^{(i-1)})
\leq\sum_{i=1}^{|O|}\tilde{g}(o_{i}\vert Q)
\end{align*}
where the inequality is by submodularity.
Rearranging, we obtain
\[
{{\ensuremath{f}}}(Q)\geq {{\ensuremath{f}}}({{\ensuremath{\mathrm{OPT}}}})-2{{\ensuremath{c}}}({{\ensuremath{\mathrm{OPT}}}})
\]
On the other hand, since the algorithm only added elements with positive marginal gain with respect to $\tilde{g}$, we have
\[
\tilde{g}(Q)\geq0
\]
Indeed, let $e_{1},e_{2},\dots,e_{|Q|}$ be the elements of $Q$ in the order in which they were added. Let $Q^{(i)}=\{e_{1},\dots,e_{i}\}$. We have
\begin{align*}
\tilde{g}(Q)-\tilde{g}(\emptyset)
=\sum_{i=1}^{|Q|}\left(\tilde{g}(Q^{(i)})-\tilde{g}(Q^{(i-1)})\right)
=\sum_{i=1}^{|Q|}\tilde{g}(e_{i}\vert Q^{(i-1)})
\geq0
\end{align*}
where every summand is positive by the acceptance rule of the algorithm (and the sum is empty when $Q=\emptyset$). Since $\tilde{g}(\emptyset)={{\ensuremath{f}}}(\emptyset)-2{{\ensuremath{c}}}(\emptyset)={{\ensuremath{f}}}(\emptyset)\geq0$,
we have $\tilde{g}(Q)\geq0$. Therefore,
\[
{{\ensuremath{f}}}(Q)-2{{\ensuremath{c}}}(Q)\geq0\Rightarrow {{\ensuremath{c}}}(Q)\leq\frac{1}{2}{{\ensuremath{f}}}(Q)\Rightarrow {{\ensuremath{f}}}(Q)-{{\ensuremath{c}}}(Q)\geq\frac{1}{2}{{\ensuremath{f}}}(Q)
\]
By combining with the previous inequality, we obtain
\begin{align*}
{{\ensuremath{f}}}(Q)-{{\ensuremath{c}}}(Q)\geq\frac{1}{2}{{\ensuremath{f}}}(Q)
\geq\frac{1}{2}\left({{\ensuremath{f}}}({{\ensuremath{\mathrm{OPT}}}})-2{{\ensuremath{c}}}({{\ensuremath{\mathrm{OPT}}}})\right)
=\frac{1}{2}{{\ensuremath{f}}}({{\ensuremath{\mathrm{OPT}}}})-{{\ensuremath{c}}}({{\ensuremath{\mathrm{OPT}}}})
\end{align*}
\end{proof}
\section{Streaming algorithm for {{\sc $k$-Cov-Cost}}}
\label{sec:alg-streaming}
In this section, we consider the cardinality constrained problem $\max_{|Q|\leq k}{{\ensuremath{f}}}(Q)-{{\ensuremath{c}}}(Q)$ in the streaming model. The algorithm is an extension of the online algorithm from Section~\ref{sec:alg-online}. As before, we consider the scaled objective $\tilde{g}(Q)={{\ensuremath{f}}}(Q)-{\ensuremath{s}}\cdot {{\ensuremath{c}}}(Q)$, where ${\ensuremath{s}}\geq1$ is an absolute constant (the right choice for ${\ensuremath{s}}$ is no longer $2$, see Theorem~\ref{thm:streaming} below). Now, instead of picking elements whose scaled marginal gain is positive, we pick elements whose scaled marginal gain is above a suitable threshold. In other words, we apply the single-threshold Greedy algorithm \cite{badanidiyuru2014streaming,kumar2015fast} to the scaled objective. Algorithm~\ref{algo:streaming} simply applies the single-threshold Greedy algorithm with threshold $\tau$ and scaled objective $\tilde{g} = {{\ensuremath{f}}} - {\ensuremath{s}} \cdot {{\ensuremath{c}}}$, where $\tau$ and ${\ensuremath{s}} \geq 1$ are given as input. In Theorem~\ref{thm:streaming}, we show that there is a way to set $\tau$ and ${\ensuremath{s}}$ so that Algorithm~\ref{algo:streaming} returns a good approximate solution. However, the right setting for the threshold $\tau$ depends on the value of the optimal solution, which we do not know. We get around this difficulty by approximately guessing the threshold $\tau$ and running several copies of Algorithm~\ref{algo:streaming} with the different guesses. This approach is a standard technique that is due to \cite{badanidiyuru2014streaming}, and it is described at the end of the section.
\begin{algorithm}[t]
\textbf{Input:} Stream of experts $V$, scaled objective $\tilde{g} = {{\ensuremath{f}}} - {\ensuremath{s}} \cdot {{\ensuremath{c}}}$ (${\ensuremath{s}} \geq 1$ is an absolute constant), cardinality $k$, threshold $\tau$. \\
\textbf{Output:} Team $Q$.
\begin{algorithmic}[1]
\STATE $Q\gets\emptyset$
\WHILE{stream not empty}
\STATE $e\gets$next stream element
\IF{$\tilde{g}(e|Q)\geq\tau$ and $|Q|<k$}
\STATE $Q\gets Q\cup\{e\}$
\ENDIF
\ENDWHILE
\RETURN{$Q$}
\end{algorithmic}
\caption{\label{algo:streaming}The {{\texttt{Streaming-k-CSG}}} algorithm.}
\end{algorithm}
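The single-threshold rule of Algorithm~\ref{algo:streaming} can be sketched in a few lines of Python. As in the previous sketch, the skill-set representation of ${{\ensuremath{f}}}$ and the cost dictionary are illustrative assumptions.

```python
import math


def streaming_k_csg(stream, skills, cost, k, tau, s=(3 + math.sqrt(5)) / 2):
    """Streaming-k-CSG sketch: while |Q| < k, accept an element iff its
    marginal gain w.r.t. the scaled objective f - s*c is at least tau.
    f(Q) = |union of skill sets| is an illustrative coverage function."""
    def f(Q):
        return len(set().union(*(skills[e] for e in Q))) if Q else 0

    Q = set()
    for e in stream:
        if len(Q) >= k:
            break  # the cardinality budget is exhausted
        if f(Q | {e}) - f(Q) - s * cost[e] >= tau:
            Q.add(e)
    return Q
```

Theorem~\ref{thm:streaming} below shows how to set $\tau$ and ${\ensuremath{s}}$ so that this rule yields a provable guarantee.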
As discussed above, we first analyze Algorithm~\ref{algo:streaming} with an appropriate setting of $\tau$ that depends on ${{\ensuremath{\mathrm{OPT}}}}$. We discuss at the end of the section how to approximately guess the threshold. The proof of the following theorem combines the standard analysis of the single-threshold Greedy algorithm with the analysis of the online algorithm given in Section~\ref{sec:alg-online}.
\begin{thm}
\label{thm:streaming}
When run with scaling constant ${\ensuremath{s}}=\frac{1}{2}\left(3+\sqrt{5}\right)$ and threshold $\tau=\frac{1}{k}\left(\frac{1}{2}(3-\sqrt{5}){{\ensuremath{f}}}({{\ensuremath{\mathrm{OPT}}}})-{{\ensuremath{c}}}({{\ensuremath{\mathrm{OPT}}}})\right)$, Algorithm \ref{algo:streaming} returns a solution $Q$ such that $|Q| \leq k$ and
\[ {{\ensuremath{f}}}(Q)-{{\ensuremath{c}}}(Q)\geq \frac{1}{2}\left(3-\sqrt{5}\right){{\ensuremath{f}}}({{\ensuremath{\mathrm{OPT}}}})-{{\ensuremath{c}}}({{\ensuremath{\mathrm{OPT}}}}) \]
\end{thm}
\begin{proof}
It is clear from the execution of the algorithm that $|Q| \leq k$. Therefore we focus on analyzing the function value. We consider two cases, depending on whether $|Q|=k$ or $|Q|<k$.
\spara{Case 1: $|Q|=k$.}
Each of the $k$ accepted elements had marginal gain at least $\tau$ with respect to $\tilde{g}$ when it was added, so (by the telescoping argument given in Case~2 below)
\[
\tilde{g}(Q)\geq\tau k\Rightarrow {{\ensuremath{f}}}(Q)-{\ensuremath{s}}\cdot {{\ensuremath{c}}}(Q)\geq\tau k
\]
\spara{Case 2: $|Q|<k$.}
For every item $o\in{{\ensuremath{\mathrm{OPT}}}}\setminus Q$, we have
\[
\tilde{g}(o\vert Q)\leq\tau
\]
This is due to the fact that $o$ had marginal gain less than $\tau$ when it arrived and the marginal gains can only decrease due to submodularity of $\tilde{g}$. Therefore
\begin{align*}
\tau|{{\ensuremath{\mathrm{OPT}}}}\setminus Q| & \geq\sum_{o\in{{\ensuremath{\mathrm{OPT}}}}\setminus Q}\tilde{g}(o\vert Q)\\
& \geq\tilde{g}(Q\cup{{\ensuremath{\mathrm{OPT}}}})-\tilde{g}(Q)\\
& =\Big(\underbrace{{{\ensuremath{f}}}(Q\cup{{\ensuremath{\mathrm{OPT}}}})}_{\geq {{\ensuremath{f}}}({{\ensuremath{\mathrm{OPT}}}})}-{{\ensuremath{f}}}(Q)\Big)-{\ensuremath{s}}\Big(\underbrace{{{\ensuremath{c}}}(Q\cup{{\ensuremath{\mathrm{OPT}}}})-{{\ensuremath{c}}}(Q)}_{={{\ensuremath{c}}}({{\ensuremath{\mathrm{OPT}}}}\setminus Q)\leq {{\ensuremath{c}}}({{\ensuremath{\mathrm{OPT}}}})}\Big)\\
& \geq {{\ensuremath{f}}}({{\ensuremath{\mathrm{OPT}}}})-{{\ensuremath{f}}}(Q)-{\ensuremath{s}}\cdot {{\ensuremath{c}}}({{\ensuremath{\mathrm{OPT}}}})
\end{align*}
The third inequality is by monotonicity of ${{\ensuremath{f}}}$ and non-negativity and linearity of ${{\ensuremath{c}}}$. The second inequality follows from submodularity as follows. Let $O={{\ensuremath{\mathrm{OPT}}}}\setminus Q$ and let $o_{1},o_{2},\dots,o_{|O|}$ be an arbitrary ordering of $O$. Let $O^{(i)}=\{o_{1},\dots,o_{i}\}$. Then
\begin{align*}
\tilde{g}(Q\cup O)-\tilde{g}(Q)
=\sum_{i=1}^{|O|}\left(\tilde{g}(Q\cup O^{(i)})-\tilde{g}(Q\cup O^{(i-1)})\right)
=\sum_{i=1}^{|O|}\tilde{g}(o_{i}\vert Q\cup O^{(i-1)})
\leq\sum_{i=1}^{|O|}\tilde{g}(o_{i}\vert Q)
\end{align*}
where the inequality is by submodularity.
Rearranging, we obtain
\[
{{\ensuremath{f}}}(Q)\geq {{\ensuremath{f}}}({{\ensuremath{\mathrm{OPT}}}})-{\ensuremath{s}}\cdot {{\ensuremath{c}}}({{\ensuremath{\mathrm{OPT}}}})-\tau\underbrace{|{{\ensuremath{\mathrm{OPT}}}}\setminus Q|}_{\leq k}\geq {{\ensuremath{f}}}({{\ensuremath{\mathrm{OPT}}}})-{\ensuremath{s}}\cdot {{\ensuremath{c}}}({{\ensuremath{\mathrm{OPT}}}})-\tau k
\]
On the other hand, since the algorithm only added elements with marginal gain at least the threshold, we can show that
\[
\tilde{g}(Q)\geq\tau|Q|
\]
Indeed, let $e_{1},e_{2},\dots,e_{|Q|}$ be the elements of $Q$ in the order in which they were added. Let $Q^{(i)}=\{e_{1},\dots,e_{i}\}$. We have
\begin{align*}
\tilde{g}(Q)-\tilde{g}(\emptyset)
=\sum_{i=1}^{|Q|}\left(\tilde{g}(Q^{(i)})-\tilde{g}(Q^{(i-1)})\right)
=\sum_{i=1}^{|Q|}\tilde{g}(e_{i}\vert Q^{(i-1)})\geq\tau|Q|
\end{align*}
Since $\tilde{g}(\emptyset)={{\ensuremath{f}}}(\emptyset)-{{\ensuremath{c}}}(\emptyset)={{\ensuremath{f}}}(\emptyset)\geq0$,
we have $\tilde{g}(Q)\geq\tau|Q|$. Thus
\[ {{\ensuremath{f}}}(Q)-{\ensuremath{s}}\cdot {{\ensuremath{c}}}(Q)\geq\tau|Q|\geq0 \]
To summarize, we showed the following two inequalities:
\begin{align*}
{{\ensuremath{f}}}(Q) & \geq {{\ensuremath{f}}}({{\ensuremath{\mathrm{OPT}}}})-{\ensuremath{s}}\cdot {{\ensuremath{c}}}({{\ensuremath{\mathrm{OPT}}}})-\tau k\\
{{\ensuremath{f}}}(Q)-{\ensuremath{s}}\cdot {{\ensuremath{c}}}(Q) & \geq0
\end{align*}
Combining the two inequalities with coefficients ${\ensuremath{s}}-1$ and $1$, respectively, and dividing by ${\ensuremath{s}}$ gives
\begin{align*}
{{\ensuremath{f}}}(Q)-{{\ensuremath{c}}}(Q) & \geq\frac{{\ensuremath{s}}-1}{{\ensuremath{s}}}\left({{\ensuremath{f}}}({{\ensuremath{\mathrm{OPT}}}})-{\ensuremath{s}}\cdot {{\ensuremath{c}}}({{\ensuremath{\mathrm{OPT}}}})-\tau k\right)
\end{align*}
\textbf{Setting ${\ensuremath{s}},\tau$. }We now put together the two cases and
set the two parameters ${\ensuremath{s}}\geq1$ and $\tau$.
In case 1, we obtain a solution $Q$ with value
\[
{{\ensuremath{f}}}(Q)-{{\ensuremath{c}}}(Q)\geq {{\ensuremath{f}}}(Q)-{\ensuremath{s}}\cdot {{\ensuremath{c}}}(Q)\geq\tau k
\]
where the first inequality is due to ${{\ensuremath{c}}}\geq0$ and ${\ensuremath{s}}\geq1$, and
the second inequality is by our analysis above.
In case 2, we obtain a solution $Q$ with value
\[
{{\ensuremath{f}}}(Q)-{{\ensuremath{c}}}(Q)\geq\frac{{\ensuremath{s}}-1}{{\ensuremath{s}}}\left({{\ensuremath{f}}}({{\ensuremath{\mathrm{OPT}}}})-{\ensuremath{s}}\cdot {{\ensuremath{c}}}({{\ensuremath{\mathrm{OPT}}}})-\tau k\right)
\]
Thus overall we get a solution with value at least
\[
\min\left\{ \tau k,\frac{{\ensuremath{s}}-1}{{\ensuremath{s}}}\left({{\ensuremath{f}}}({{\ensuremath{\mathrm{OPT}}}})-{\ensuremath{s}}\cdot {{\ensuremath{c}}}({{\ensuremath{\mathrm{OPT}}}})-\tau k\right)\right\}
\]
We set $\tau$ to balance the two terms:
\begin{align*}
\tau k =\frac{{\ensuremath{s}}-1}{{\ensuremath{s}}}\left({{\ensuremath{f}}}({{\ensuremath{\mathrm{OPT}}}})-{\ensuremath{s}}\cdot {{\ensuremath{c}}}({{\ensuremath{\mathrm{OPT}}}})-\tau k\right)
\Rightarrow\tau k =\frac{{\ensuremath{s}}-1}{2{\ensuremath{s}}-1}\left({{\ensuremath{f}}}({{\ensuremath{\mathrm{OPT}}}})-{\ensuremath{s}}\cdot {{\ensuremath{c}}}({{\ensuremath{\mathrm{OPT}}}})\right)
\end{align*}
We set ${\ensuremath{s}}$ so that the coefficient of ${{\ensuremath{c}}}({{\ensuremath{\mathrm{OPT}}}})$ becomes $1$:
\[
\frac{{\ensuremath{s}}({\ensuremath{s}}-1)}{2{\ensuremath{s}}-1}=1\Rightarrow {\ensuremath{s}}^{2}-3{\ensuremath{s}}+1=0
\]
The above equation has two solutions: ${\ensuremath{s}}_{1}=\frac{1}{2}\left(3-\sqrt{5}\right)$ and ${\ensuremath{s}}_{2}=\frac{1}{2}\left(3+\sqrt{5}\right)$. We want ${\ensuremath{s}}\geq1$, so we pick the latter:
\[
{\ensuremath{s}}=\frac{1}{2}\left(3+\sqrt{5}\right)
\]
For this choice of ${\ensuremath{s}}$, the threshold $\tau$ and the objective value obtained are
\begin{align*}
\tau & =\frac{1}{k}\left(\frac{1}{2}\left(3-\sqrt{5}\right){{\ensuremath{f}}}({{\ensuremath{\mathrm{OPT}}}})-{{\ensuremath{c}}}({{\ensuremath{\mathrm{OPT}}}})\right)\\
{{\ensuremath{f}}}(Q)-{{\ensuremath{c}}}(Q) & \geq\frac{1}{2}\left(3-\sqrt{5}\right){{\ensuremath{f}}}({{\ensuremath{\mathrm{OPT}}}})-{{\ensuremath{c}}}({{\ensuremath{\mathrm{OPT}}}})
\end{align*}
\end{proof}
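The algebra behind the choice of ${\ensuremath{s}}$ and $\tau$ can be double-checked numerically. The following snippet verifies that ${\ensuremath{s}}=\frac{1}{2}(3+\sqrt{5})$ is a root of ${\ensuremath{s}}^{2}-3{\ensuremath{s}}+1=0$, that the coefficient $\frac{{\ensuremath{s}}-1}{2{\ensuremath{s}}-1}$ of ${{\ensuremath{f}}}({{\ensuremath{\mathrm{OPT}}}})$ equals $\frac{1}{2}(3-\sqrt{5})\approx0.382$, and that the coefficient of ${{\ensuremath{c}}}({{\ensuremath{\mathrm{OPT}}}})$ is exactly $1$.

```python
import math

# the scaling constant chosen in the proof
s = (3 + math.sqrt(5)) / 2
# s is a root of s^2 - 3s + 1 = 0
assert abs(s**2 - 3 * s + 1) < 1e-12
# the coefficient of f(OPT) in tau*k equals (3 - sqrt(5)) / 2
coeff = (s - 1) / (2 * s - 1)
assert abs(coeff - (3 - math.sqrt(5)) / 2) < 1e-12
# the coefficient of c(OPT) is s * coeff = 1, by the choice of s
assert abs(s * coeff - 1) < 1e-12
```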
\spara{Guessing $\tau$ and overall algorithm.}
Setting the threshold as suggested by the above theorem requires knowing $\hat{g}({{\ensuremath{\mathrm{OPT}}}})$, where $\hat{g}(Q):=\frac{1}{2}(3-\sqrt{5}){{\ensuremath{f}}}(Q)-{{\ensuremath{c}}}(Q)$. To remove this assumption, we use the standard technique introduced by \cite{badanidiyuru2014streaming} which we now sketch. The largest singleton value $v=\max_{e}\hat{g}(\{e\})$ gives us a $k$-approximation to $\hat{g}({{\ensuremath{\mathrm{OPT}}}})$. Given this approximation, we guess a $1+\epsilon$ approximation to $\hat{g}({{\ensuremath{\mathrm{OPT}}}})$ from a set of $O(\log k/\epsilon)$ values ranging from $v$ to $kv$. The final streaming algorithm is simply $O(\log k/\epsilon)$ copies of the basic algorithm running in parallel with different guesses. As new elements appear in the stream, the value $v=\max_{e}\hat{g}(\{e\})$ also increases over time and thus, existing copies of the basic algorithm with small guesses are dropped and new copies with higher guesses are added. An important observation is that when we introduce a new copy with a large guess, starting it from mid-stream has exactly the same outcome as if we
started it from the beginning of the stream: all previous elements have marginal gain much smaller than the guess and smaller than the threshold so they would have been rejected anyway. We refer to \cite{badanidiyuru2014streaming} for the full details. We only lose $\epsilon$ in the approximation due to guessing and we use $O\left(k \log{k} / \epsilon \right)$ total space to store the $O(\log{k}/\epsilon)$ solutions.
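The geometric grid of guesses can be sketched as follows; the helper below is a hypothetical illustration of the technique of \cite{badanidiyuru2014streaming}, not part of their implementation. Given the largest singleton value $v$, it enumerates $O(\log k/\epsilon)$ candidate values for $\hat{g}({{\ensuremath{\mathrm{OPT}}}})$ covering the range $[v, kv]$.

```python
def threshold_guesses(v, k, eps):
    """Enumerate a (1 + eps)-geometric grid of guesses covering [v, k*v]:
    for any x in [v, k*v] some guess g satisfies g <= x <= (1 + eps) * g.
    The grid has O(log(k) / eps) entries."""
    guesses = []
    g = v
    while g <= k * v:
        guesses.append(g)
        g *= 1 + eps
    guesses.append(g)  # one extra guess so the grid covers k*v itself
    return guesses
```

One copy of the basic algorithm is run per guess, so the total space is the number of guesses times the per-copy space of $O(k)$.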
\begin{thm}
There is a streaming algorithm for the cardinality-constrained problem $\max_{|Q| \leq k} {{\ensuremath{f}}}(Q) - {{\ensuremath{c}}}(Q)$ that takes as input any $\epsilon > 0$ and it returns a solution $Q$ satisfying
\[ {{\ensuremath{f}}}(Q)-{{\ensuremath{c}}}(Q)\geq \left(\frac{1}{2}\left(3-\sqrt{5}\right) - \epsilon \right){{\ensuremath{f}}}({{\ensuremath{\mathrm{OPT}}}})-{{\ensuremath{c}}}({{\ensuremath{\mathrm{OPT}}}}) \]
The algorithm uses $O\left(k \log{k} / \epsilon \right)$ space.
\end{thm}
\spara{Remark:} The result presented in this section was obtained independently and concurrently by \cite{kazemi2020regularized} and a preliminary version of our work \cite{ene2020note}.
\section{Experiments}\label{sec:experiments}
This section explores the practicality of our algorithms using real data from two major online labor markets.
We use this data to analyze the performances and the running times of our algorithms for the following problems: (i) {{\sc $k$-Cov-Cost}}, (ii) {{\sc Cov-Cost}}, and (iii) {{\sc Matroid-Cov-Cost}}.
\subsection{Datasets}
\label{sec:datasets}
We use real-world datasets from the online labor marketplaces, {\texttt{freelancer.com}} and {\texttt{guru.com}}.
We refer to these datasets as {{\textit{Freelancer}}} and {{\textit{Guru}}}, respectively.
The number of experts is 1212 and 6120 in {{\textit{Freelancer}}} and {{\textit{Guru}}}, respectively.
\spara{Obtaining the expert skills and costs.}
All expert-related data used in this work are obtained from anonymized profiles of members registered in the two marketplaces.
A profile itself includes professional-specific information about an expert.
From each profile we collect the following artifacts: (i) the set of acquired skills, (ii) the salary demands (in dollars per hour).
The set of acquired skills is a self-defined set of skills that is verified by fraud-detecting profile mechanisms.
The average number of skills per expert is 1.46 and 13.07 in {{\textit{Freelancer}}} and {{\textit{Guru}}}, respectively.
The hourly salary of an expert corresponds to the cost of adding that expert to a team.
\spara{Defining a task.}
A task is obtained through a synthetic data-generation procedure similar to the one used in the past \cite{anagnostopoulos2012online} with some additional properties.
For an input set of experts we extract the union of their skills.
We divide the skills into three categories: (i) popular, (ii) rare, and (iii) common.
A skill is assigned to a category based on its frequency in the experts' skillsets.
The top 10\% most frequent skills belong to the popular category, the bottom 10\% belong to the rare category, and the rest are in the common category.
We create the input task as follows.
First, we fix the number of skills it contains.
In our experiments we set the number of skills to be 50.
Then, a fraction $f_{p}$ of the task's skills is randomly selected from the popular category, a fraction $f_{r}$ from the rare category, and the remaining fraction $f_{c}$ from the common category.
We perform experiments for the following two parameter settings: ($f_{p}=0.1$, $f_{r}=0.1$, $f_{c}=0.8$) and ($f_{p}=0.8$, $f_{r}=0.1$, $f_{c}=0.1$).
Note that $f_{p}+f_{r}+f_{c}=1$.
Therefore, it suffices to refer to these settings as ($f_{p}=0.1$, $f_{r}=0.1$) and ($f_{p}=0.8$, $f_{r}=0.1$), respectively.
Moreover, since tasks in which rare skills comprise an $f_{r}=0.8$ fraction of the total skills are unrealistic in practice, we omit such parameter settings.
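The task-generation procedure above can be sketched as follows; the category lists and skill identifiers are illustrative assumptions.

```python
import random


def generate_task(popular, rare, common, n_skills=50, f_p=0.1, f_r=0.1):
    """Sample a synthetic task: a fraction f_p of the skills comes from the
    popular category, f_r from the rare category, and the remaining
    fraction f_c = 1 - f_p - f_r from the common category."""
    n_p = round(f_p * n_skills)
    n_r = round(f_r * n_skills)
    n_c = n_skills - n_p - n_r  # the remaining common-category skills
    task = (random.sample(popular, n_p)
            + random.sample(rare, n_r)
            + random.sample(common, n_c))
    return set(task)
```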
\subsection{Experimental setup}
In this section we discuss the details regarding the setup of the experimental evaluation.
\spara{Implementation details.}
The code is developed in Python.
For all our experiments we use single-process implementations on a 64-bit MacBook Pro with an Intel Core i7 CPU at 2.6GHz and 16 GB RAM.
We make the code, the datasets and the chosen parameters available online\footnote{\scriptsize{https://github.com/smnikolakaki/submodular-linear-cost-maximization}}.
\spara{Picking a value $\lambda$.}
\label{sec:picklambda}
Recall that the combined objective introduced in Section \ref{sec:preliminaries} compares coverage with cost.
In our setting coverage corresponds to the covered number of skills of a task and cost is the total salary demand of the experts in a team (in dollars).
The purpose of parameter $\lambda$ is to transform these two quantities into comparable units.
In our dataset, and in practice, the coverage score is in the order of tens and hundreds, while the total cost is in the order of thousands.
To decide an appropriate $\lambda$ coefficient for our experiments we did the following.
First, we considered all the experts in the larger dataset {{\textit{Guru}}} and defined a task as described in Section \ref{sec:datasets}.
For this dataset we solved the set cover problem using the well-known greedy algorithm \cite{slavik1996tight} and found a team $Q$ with a (near-)minimum number of experts that covers all the skills in the task.
To solve the set cover problem we define the universe to be the skills required by the task and the covering sets to be the skillsets of the experts.
We then computed the total cost of team $Q$, i.e., ${{\ensuremath{c}}}(Q)$.
Finally, we defined $\lambda=\frac{{{\ensuremath{c}}}(Q)}{{{\ensuremath{f}}}(Q)}$ to ensure that the coverage and the cost are on a comparable scale.
Using the aforementioned approach in our experiments we set $\lambda = 800$.
We use the same $\lambda$ coefficient for all experiments to have comparable objective values and running times.
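For completeness, a sketch of the greedy set-cover step used to calibrate $\lambda$; the data layout (expert $\to$ skill set) is an illustrative assumption.

```python
def greedy_set_cover(task_skills, expert_skills):
    """Greedy set cover: repeatedly add the expert who covers the most
    still-uncovered task skills, until the task is covered (or no
    remaining expert covers anything new)."""
    uncovered = set(task_skills)
    team = []
    while uncovered:
        best = max(expert_skills,
                   key=lambda e: len(expert_skills[e] & uncovered))
        if not expert_skills[best] & uncovered:
            break  # the remaining skills cannot be covered by anyone
        team.append(best)
        uncovered -= expert_skills[best]
    return team
```

The calibration step then sets $\lambda$ to the ratio of the team's total cost to the number of skills it covers.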
\spara{Setting the algorithmic parameter $\epsilon$.}
We presented {{\texttt{StochasticDistortedGreedy}}} and {{\texttt{Streaming-k-CSG}}} that require an error parameter $\epsilon$ as part of their input.
This is a trade-off parameter between the quality of the solution and the algorithm's running time.
To select an appropriate value of $\epsilon$ for both algorithms, we performed a set of experiments with different $\epsilon$ values and chose the value that yields a solution close to the algorithm's best without sacrificing running time.
Due to lack of space the corresponding plots will be presented in the extended version of this paper.
Throughout the experiments we fix $\epsilon=0.01$ and $\epsilon=0.05$ for the {{\texttt{StochasticDistortedGreedy}}} and {{\texttt{Streaming-k-CSG}}} algorithms, respectively.
\spara{Repeating the experiments under the same parameters.}
We observed that our datasets are sensitive to the sampling skill parameters $f_p$, $f_r$ and $f_c$ which define the skills of the input task.
Furthermore, we consider algorithms and run experiments that use random samples of the input data.
For this reason, each experiment in our evaluation is repeated $5$ times with the exact same set of parameters.
For each experiment we report the average performance of each algorithm, shown as a line, as well as the confidence interval of the result, shown as a bar around the line.
\subsection{Evaluation for \large{{{\sc $k$-Cov-Cost}}}}
\label{exp:constrained}
In this section, we evaluate the empirical performance of algorithms for the cardinality-constrained problem {{\sc $k$-Cov-Cost}}.
The algorithms that we evaluate are the following:
\begin{list}{$\bullet$}{}
\item Our algorithm {{\texttt{CSG}}} and its lazy evaluation variant {{\texttt{CSLG}}} (Section~\ref{sec:alg-cardinality}). As discussed in Section~\ref{sec:alg-cardinality}, the two algorithms construct the same solution. Thus it suffices to include one of the algorithms in the plots evaluating the objective value, and we only show {{\texttt{CSLG}}}. We show both algorithms in the plots evaluating the running time.
\item Our streaming algorithm {{\texttt{Streaming-k-CSG}}} (Section~\ref{sec:alg-streaming}). Recall that this algorithm addresses the harder online problem. We evaluate its performance against offline algorithms that have complete knowledge of the input datasets.
\item Algorithms {{\texttt{DistortedGreedy}}} and {{\texttt{StochasticDistortedGreedy}}} proposed by Harshaw {{\emph{et al.}}} \cite{harshaw2019submodular}. The {{\texttt{DistortedGreedy}}} algorithm also builds on the Greedy approach but, instead of considering a cost-scaled objective like we do, the authors design a \emph{distorted objective} which changes throughout the algorithm. The distorted objective initially places higher relative importance on the modular cost term $c$, and gradually increases the relative importance of the coverage function as the algorithm progresses. {{\texttt{DistortedGreedy}}} makes $O(nk)$ evaluations and returns a solution $Q$ of size at most $k$ satisfying ${{\ensuremath{f}}}(Q)-{{\ensuremath{c}}}(Q)\geq(1-\frac{1}{e}){{\ensuremath{f}}}({{\ensuremath{\mathrm{OPT}}}})-{{\ensuremath{c}}}({{\ensuremath{\mathrm{OPT}}}})$.
An important limitation of {{\texttt{DistortedGreedy}}} is that, due to the distortion of the coverage function that depends on the current iteration, this algorithm cannot be sped up using lazy evaluations. The {{\texttt{StochasticDistortedGreedy}}} algorithm uses the same distorted objective as {{\texttt{DistortedGreedy}}} but has faster asymptotic runtime because it optimizes over a random sample in each iteration.
\item A baseline heuristic algorithm {{\texttt{Top-k-Experts}}} which runs as follows. The algorithm gives each expert $e\in V$ a linear weight $w(e)= {{\ensuremath{f}}}(\{e\})-{{\ensuremath{c}}}(e)$ and selects the up to $k$ experts with the largest positive weights; if fewer than $k$ experts have positive weight, it selects all of them.
\item A baseline heuristic algorithm {{\texttt{Greedy}}}. This algorithm is similar to {{\texttt{CSG}}}, the only difference is that instead of computing the marginal value with respect to the scaled objective $\tilde{g} = {{\ensuremath{f}}} - 2{{\ensuremath{c}}}$ we use the original objective $g = {{\ensuremath{f}}} - {{\ensuremath{c}}}$.
\end{list}
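Among the baselines above, {{\texttt{Top-k-Experts}}} is simple enough to state in a few lines of Python; the skill-set and cost representation is an illustrative assumption.

```python
def top_k_experts(skills, cost, k):
    """Top-k-Experts baseline: weight each expert by f({e}) - c(e),
    i.e., singleton coverage minus cost, and keep the up to k experts
    with the largest positive weights."""
    weights = {e: len(skills[e]) - cost[e] for e in skills}
    positive = [e for e in weights if weights[e] > 0]
    positive.sort(key=lambda e: weights[e], reverse=True)
    return set(positive[:k])
```

Note that this baseline ignores all skill overlaps between experts, which is exactly why it is outperformed by the submodular algorithms in our evaluation.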
\begin{figure*}
\centering
\begin{minipage}[t]{0.24\linewidth}
\includegraphics[width=\linewidth]{score_constrained_guru_pop01_rare01.pdf}
\end{minipage}
\begin{minipage}[t]{0.24\linewidth}
\includegraphics[width=\linewidth]{score_constrained_guru_pop08_rare01.pdf}
\end{minipage}
\begin{minipage}[t]{0.24\linewidth}
\includegraphics[width=\linewidth]{score_constrained_freelancer_pop01_rare01.pdf}
\end{minipage} %
\begin{minipage}[t]{0.24\linewidth}
\includegraphics[width=\linewidth]{score_constrained_freelancer_pop08_rare01.pdf}
\end{minipage}
\begin{minipage}[t]{0.24\linewidth}
\includegraphics[width=\linewidth]{time_constrained_guru_pop01_rare01.pdf}
\subcaption{\label{fig:sc_time_con_gr_01_01}\scriptsize{{{\textit{Guru}}} ($f_p$=0.1,$f_r$=0.1)}}
\end{minipage}%
\begin{minipage}[t]{0.24\linewidth}
\includegraphics[width=\linewidth]{time_constrained_guru_pop08_rare01.pdf}
\subcaption{\label{fig:sc_time_con_gr_08_01}\scriptsize{{{\textit{Guru}}} ($f_p$=0.8,$f_r$=0.1)}}
\end{minipage}
\begin{minipage}[t]{0.24\linewidth}
\includegraphics[width=\linewidth]{time_constrained_freelancer_pop01_rare01.pdf}
\subcaption{\label{fig:sc_time_con_fr_01_01}\scriptsize{{{\textit{Freelancer}}} ($f_p$=0.1,$f_r$=0.1)}}
\end{minipage}
\begin{minipage}[t]{0.24\linewidth}
\includegraphics[width=\linewidth]{time_constrained_freelancer_pop08_rare01.pdf}
\subcaption{\label{fig:sc_time_con_fr_08_01}\scriptsize{{{\textit{Freelancer}}} ($f_p$=0.8,$f_r$=0.1)}}
\end{minipage}
\caption{\label{fig:perf_time_constrained} Combined objective value ($g$) (row 1) and running time (row 2) comparisons of algorithms and baselines for the cardinality-constrained problem {{\sc $k$-Cov-Cost}}. Columns \ref{fig:sc_time_con_gr_01_01} and \ref{fig:sc_time_con_gr_08_01} correspond to {{\textit{Guru}}} and columns \ref{fig:sc_time_con_fr_01_01} and \ref{fig:sc_time_con_fr_08_01} correspond to {{\textit{Freelancer}}}.}
\end{figure*}
\spara{Performance evaluation.}
\label{sec:performance}
For the performance evaluation we vary the cardinality parameter $k$ and compute the combined objective value ($g$) of the solution.
We present the results of this evaluation in the first row of Figure \ref{fig:perf_time_constrained}.
The $y$-axis represents the combined objective value ($g$) of each algorithm, and the $x$-axis corresponds to the cardinality $k$.
We observe that the performances of the algorithms follow a similar trend, which is consistent among the different datasets.
First, we notice that as $k$ increases so does the performance of the algorithms, up until a particular point where the performances seem to stabilize.
Initially, the algorithms benefit from adding more experts to the solution because the coverage of the solution outweighs its cost.
However, for some cardinality $k$ the algorithms reach a solution where adding more experts does not benefit them.
This happens in two cases: (i) when the task skills have been covered, and (ii) when the benefit from increasing the coverage is smaller than paying the corresponding cost.
When comparing the algorithms at the individual level, we notice the following.
Baseline {{\texttt{Top-k-Experts}}} is outperformed by the other algorithms, and the same holds for the streaming algorithm {{\texttt{Streaming-k-CSG}}}.
One would expect {{\texttt{Streaming-k-CSG}}} to have lower performance than the offline algorithms, since it solves the harder problem of not having access to the complete set of experts, unlike the offline algorithms.
However, even with this restriction we observe that its performance is not significantly worse, and in particular in Figure \ref{fig:sc_time_con_fr_08_01} (first row) it is very close to that of the other algorithms.
In the comparison of the offline algorithms we first notice that {{\texttt{StochasticDistortedGreedy}}} is outperformed by {{\texttt{DistortedGreedy}}}, {{\texttt{CSLG}}} and {{\texttt{Greedy}}} in all panels.
Finally, notice that the performances of {{\texttt{DistortedGreedy}}}, {{\texttt{CSLG}}} and {{\texttt{Greedy}}} are consistently almost the same.
It is interesting to see that even though {{\texttt{Greedy}}} is just a heuristic, in practice it can perform well; however, it does not find solutions with provable approximation guarantees.
The comparison between {{\texttt{DistortedGreedy}}} and {{\texttt{CSLG}}} shows that in practice considering marginal gains with respect to a cost-scaled objective as opposed to a distorted objective leads to the same performances.
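The two selection rules compared above can be made concrete with a small sketch. This is our illustration, not the paper's implementation: the function names are placeholders, and the distorted weighting follows our reading of Harshaw {{\emph{et al.}}} \cite{harshaw2019submodular}.

```python
def cost_scaled_gain(f, c, e, S):
    # Marginal gain w.r.t. the cost-scaled objective g~ = f - 2c:
    # coverage gain of adding expert e to S, minus twice e's cost.
    return f(S | {e}) - f(S) - 2.0 * c(e)

def distorted_gain(f, c, e, S, i, k):
    # Distorted marginal gain in the spirit of Harshaw et al.: at
    # iteration i (0-indexed) of k, the submodular part is discounted
    # by (1 - 1/k)^(k - (i + 1)) while the cost enters undistorted.
    return (1.0 - 1.0 / k) ** (k - (i + 1)) * (f(S | {e}) - f(S)) - c(e)
```

Here $f$ is the coverage function over sets of experts and $c$ the per-expert cost; both rules penalize cost more aggressively than the plain gain $f(e \mid S) - c(e)$ used by {{\texttt{Greedy}}}.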
\spara{Runtime analysis.}
We now investigate the scalability of our proposed algorithms.
Recall that the running time complexity of {{\texttt{DistortedGreedy}}}, {{\texttt{CSG}}}, {{\texttt{CSLG}}} and {{\texttt{Greedy}}} is $O(nk)$, the running time complexity of {{\texttt{StochasticDistortedGreedy}}} is $O(n\log{\frac{1}{\epsilon}})$, and the running time complexity of {{\texttt{Streaming-k-CSG}}} is $O(\frac{n\log{k}}{\epsilon})$.
The results are shown in the second row of Figure \ref{fig:perf_time_constrained}.
The $y$-axis represents the running time reported in seconds, and the $x$-axis corresponds to the cardinality $k$. We observe that {{\texttt{DistortedGreedy}}} and {{\texttt{CSG}}} have very close running times. {{\texttt{Streaming-k-CSG}}} is slower due to constructing several solutions in parallel.
Furthermore, {{\texttt{StochasticDistortedGreedy}}} achieves $\sim$10x speedups compared to the aforementioned algorithms, since in each iteration it only evaluates a subset of the elements.
We now draw attention to the computational savings obtained by using lazy evaluations.
When zooming in, we see that {{\texttt{CSLG}}} achieves $\sim$100x speedups compared to {{\texttt{DistortedGreedy}}}, which cannot be sped up using lazy evaluations, as well as to other greedy algorithms without lazy evaluations, such as the cost-scaled greedy {{\texttt{CSG}}} and {{\texttt{Greedy}}}.
Moreover, it achieves $\sim$2x speedups compared to {{\texttt{StochasticDistortedGreedy}}}.
The only algorithm whose running time is comparable to that of {{\texttt{CSLG}}} is {{\texttt{Top-k-Experts}}}. However, {{\texttt{Top-k-Experts}}} is only a heuristic, and we indeed see that it produces solutions whose quality is inferior to that of {{\texttt{CSLG}}}.
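As an illustration of how lazy evaluations avoid re-computing most marginal gains, the following is a minimal lazy greedy sketch. It is ours, not the authors' implementation; `gain(e, S)` stands for a cost-scaled marginal-gain oracle, and the stopping rule assumes gains only decrease (submodularity).

```python
import heapq

def lazy_cost_scaled_greedy(experts, gain, k):
    # Stale upper bounds on marginal gains are kept in a max-heap
    # (negated values in Python's min-heap); by submodularity they
    # only decrease, so most elements need not be re-evaluated in
    # every iteration.
    S = []
    heap = [(-gain(e, S), e) for e in experts]
    heapq.heapify(heap)
    while heap and len(S) < k:
        neg_bound, e = heapq.heappop(heap)
        g = gain(e, S)                      # refresh the stale bound
        if not heap or g >= -heap[0][0]:    # still the best candidate
            if g <= 0:                      # no positive gain left: stop
                break
            S.append(e)
        else:
            heapq.heappush(heap, (-g, e))   # reinsert with updated bound
    return S
```

On a modular gain function every bound stays exact, so the sketch degenerates to picking the top-$k$ positive-gain experts; the savings appear when gains shrink as the solution grows.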
\begin{figure*}
\centering
\begin{minipage}[t]{0.24\linewidth}
\includegraphics[width=\linewidth]{score_unconstrained_guru_pop01_rare01.pdf}
\end{minipage}
\begin{minipage}[t]{0.24\linewidth}
\includegraphics[width=\linewidth]{score_unconstrained_guru_pop08_rare01.pdf}
\end{minipage}
\begin{minipage}[t]{0.24\linewidth}
\includegraphics[width=\linewidth]{score_unconstrained_freelancer_pop01_rare01.pdf}
\end{minipage}
\begin{minipage}[t]{0.24\linewidth}
\includegraphics[width=\linewidth]{score_unconstrained_freelancer_pop08_rare01.pdf}
\end{minipage}
\begin{minipage}[t]{0.24\linewidth}
\includegraphics[width=\linewidth]{time_unconstrained_guru_pop01_rare01.pdf}
\subcaption{\label{fig:sc_time_un_gr_01_01}\scriptsize{{{\textit{Guru}}} ($f_p$=0.1,$f_r$=0.1)}}
\end{minipage}
\begin{minipage}[t]{0.24\linewidth}
\includegraphics[width=\linewidth]{time_unconstrained_guru_pop08_rare01.pdf}
\subcaption{\label{fig:sc_time_un_gr_08_01}\scriptsize{{{\textit{Guru}}} ($f_p$=0.8,$f_r$=0.1)}}
\end{minipage}
\begin{minipage}[t]{0.24\linewidth}
\includegraphics[width=\linewidth]{time_unconstrained_freelancer_pop01_rare01.pdf}
\subcaption{\label{fig:sc_time_un_fr_01_01}\scriptsize{{{\textit{Freelancer}}} ($f_p$=0.1,$f_r$=0.1)}}
\end{minipage}
\begin{minipage}[t]{0.24\linewidth}
\includegraphics[width=\linewidth]{time_unconstrained_freelancer_pop08_rare01.pdf}
\subcaption{ \label{fig:sc_time_un_fr_08_01}\scriptsize{{{\textit{Freelancer}}} ($f_p$=0.8,$f_r$=0.1)}}
\end{minipage}
\caption{\label{fig:perf_time_unconstrained} Combined objective value ($g$) (row 1) and running time (row 2) comparisons of algorithms and baselines for the unconstrained problem {{\sc Cov-Cost}}. Columns \ref{fig:sc_time_un_gr_01_01} and \ref{fig:sc_time_un_gr_08_01} correspond to {{\textit{Guru}}} and columns \ref{fig:sc_time_un_fr_01_01} and \ref{fig:sc_time_un_fr_08_01} correspond to {{\textit{Freelancer}}}.}
\end{figure*}
\subsection{Evaluation for \large{{{\sc Cov-Cost}}}}
\label{exp:unconstrained}
This Section presents the performance evaluation and the running time analysis of algorithms for the unconstrained problem {{\sc Cov-Cost}}.
We evaluate the algorithms that work only in the unconstrained setting as well as algorithms for the cardinality constrained problem, since we can run the latter algorithms with $k=n$. The algorithms that we evaluate are the following:
\begin{list}{$\bullet$}{}
\item Our algorithm {{\texttt{CSG}}} and its lazy evaluation variant {{\texttt{CSLG}}} (Section~\ref{sec:alg-cardinality}). As discussed in Section~\ref{sec:alg-cardinality}, the two algorithms construct the same solution. Thus it suffices to include one of the algorithms in the plots evaluating the objective value, and we only show {{\texttt{CSLG}}}. We show both algorithms in the plots evaluating the running time.
\item The algorithms for the constrained problem and the {{\texttt{Top-k-Experts}}} and {{\texttt{Greedy}}} baselines run with $k=n$ (see the previous section for a description of these algorithms).
\item Our algorithm {{\texttt{OnlineCSG}}} (Section~\ref{sec:alg-online}). Recall that this algorithm solves the harder online problem. We evaluate its performance compared to the offline algorithms that have complete knowledge of the input datasets.
\item The algorithm {{\texttt{UnconstrainedDistortedGreedy}}} proposed by Harshaw {{\emph{et al.}}} \cite{harshaw2019submodular}. This is a linear-time algorithm for the unconstrained problem that runs for $n$ iterations and in each iteration evaluates the marginal gain of a single expert sampled uniformly at random.
\end{list}
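A minimal sketch of the sampling scheme used by {{\texttt{UnconstrainedDistortedGreedy}}}, under our reading of Harshaw {{\emph{et al.}}} \cite{harshaw2019submodular}; the distortion schedule and all names below are assumptions, not the reference implementation.

```python
import random

def unconstrained_distorted_greedy(experts, f_gain, cost, seed=0):
    # n iterations; each evaluates the distorted marginal gain of a
    # single expert sampled uniformly at random, adding it only when
    # that gain is positive.
    rng = random.Random(seed)
    n = len(experts)
    S = []
    for i in range(n):
        e = rng.choice(experts)
        if e in S:
            continue
        distortion = (1.0 - 1.0 / n) ** (n - (i + 1))
        if distortion * f_gain(e, S) - cost(e) > 0:
            S.append(e)
    return S
```

Since each iteration touches one element, the total work is $O(n)$ oracle calls, matching the linear-time claim above.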
\spara{Performance evaluation.}
For the performance evaluation we vary the expert sample fraction and compute the combined objective value ($g$) of the solution.
We present the results of this evaluation in the first row of Figure \ref{fig:perf_time_unconstrained}.
The $y$-axis represents the combined objective value ($g$) of each algorithm, and the $x$-axis corresponds to the expert sample fraction.
We notice that the algorithmic performance differences are more pronounced in the {{\textit{Guru}}} dataset shown in Figures \ref{fig:sc_time_un_gr_01_01} and \ref{fig:sc_time_un_gr_08_01} (first row).
We see that {{\texttt{DistortedGreedy}}} and {{\texttt{CSLG}}} find the highest value solutions with respect to the combined objective.
Furthermore, we notice that for smaller samples of experts {{\texttt{StochasticDistortedGreedy}}} and the baseline algorithm {{\texttt{Top-k-Experts}}} can also perform well. However, as the number of experts increases and the problem becomes harder to solve, the aforementioned algorithms are outperformed by {{\texttt{DistortedGreedy}}} and {{\texttt{CSLG}}}.
Furthermore, even though, as expected, the online algorithm {{\texttt{OnlineCSG}}} is inferior to the other algorithms, it is interesting to see that its performance is not significantly worse, despite the harder problem it attempts to solve.
Note that in all experiments {{\texttt{UnconstrainedDistortedGreedy}}} has the lowest performance, which is expected since in each iteration the algorithm evaluates a single random sample.
All of the above observations hold for {{\textit{Freelancer}}} as well, yet since it has fewer experts than {{\textit{Guru}}}, the differences are less pronounced.
\spara{Runtime analysis.}
We now investigate the running time performance of the algorithms for the {{\sc Cov-Cost}} problem.
The runtime complexities are $O(n^{2})$ for {{\texttt{DistortedGreedy}}}, {{\texttt{CSG}}}, {{\texttt{CSLG}}} and {{\texttt{Greedy}}}, $O(n\log n)$ for {{\texttt{Top-k-Experts}}}, $O(n\log{\frac{1}{\epsilon}})$ for {{\texttt{StochasticDistortedGreedy}}}, and $O(n)$ for {{\texttt{UnconstrainedDistortedGreedy}}} and {{\texttt{OnlineCSG}}}.
The results are shown in the second row of Figure \ref{fig:perf_time_unconstrained}.
The $y$-axis represents the running time reported in seconds, and the $x$-axis is the expert sample fraction.
Again, as expected, {{\texttt{DistortedGreedy}}} and {{\texttt{CSG}}} have similar running times.
Note that for the {{\sc Cov-Cost}} problem these two algorithms are significantly outperformed by the other algorithms, with running-time differences of orders of magnitude.
When zooming in we see that {{\texttt{StochasticDistortedGreedy}}} is at least $\sim$100x faster.
The most impressive result, however, is the huge computational gain obtained by using lazy evaluations.
More specifically, we see that with lazy evaluations {{\texttt{CSLG}}} achieves $\sim$2000x speedups compared to {{\texttt{DistortedGreedy}}}, {{\texttt{CSG}}} and {{\texttt{Greedy}}}, and $\sim$20x speedups compared to {{\texttt{StochasticDistortedGreedy}}}.
Similar runtime performance is achieved by the baseline {{\texttt{Top-k-Experts}}}, but since it is a heuristic and does not take the submodularity of the objective into account, it does not guarantee the quality of its solutions and underperforms, as we saw in the performance evaluation.
\begin{figure*}
\centering
\begin{minipage}[t]{0.24\linewidth}
\includegraphics[width=\linewidth]{score_partition_guru_salary_pop01_rare01.pdf}
\end{minipage}
\begin{minipage}[t]{0.24\linewidth}
\includegraphics[width=\linewidth]{score_partition_guru_salary_pop08_rare01.pdf}
\end{minipage}
\begin{minipage}[t]{0.24\linewidth}
\includegraphics[width=\linewidth]{score_partition_freelancer_salary_pop01_rare01.pdf}
\end{minipage}
\begin{minipage}[t]{0.24\linewidth}
\includegraphics[width=\linewidth]{score_partition_freelancer_salary_pop08_rare01.pdf}
\end{minipage}
\begin{minipage}[t]{0.24\linewidth}
\includegraphics[width=\linewidth]{time_partition_guru_salary_pop01_rare01.pdf}
\subcaption{\label{fig:sc_time_part_gr_01_01}\scriptsize{{{\textit{Guru}}} ($f_p$=0.1,$f_r$=0.1)}}
\end{minipage}
\begin{minipage}[t]{0.24\linewidth}
\includegraphics[width=\linewidth]{time_partition_guru_salary_pop08_rare01.pdf}
\subcaption{\label{fig:sc_time_part_gr_08_01}\scriptsize{{{\textit{Guru}}} ($f_p$=0.8,$f_r$=0.1)}}
\end{minipage}
\begin{minipage}[t]{0.24\linewidth}
\includegraphics[width=\linewidth]{time_partition_freelancer_salary_pop01_rare01.pdf}
\subcaption{\label{fig:sc_time_part_fr_01_01}\scriptsize{{{\textit{Freelancer}}} ($f_p$=0.1,$f_r$=0.1)}}
\end{minipage}
\begin{minipage}[t]{0.24\linewidth}
\includegraphics[width=\linewidth]{time_partition_freelancer_salary_pop08_rare01.pdf}
\subcaption{\label{fig:sc_time_part_fr_08_01}\scriptsize{{{\textit{Freelancer}}} ($f_p$=0.8,$f_r$=0.1)}}
\end{minipage}
\caption{\label{fig:score_time_partition_matroid}Combined objective value ($g$) (row 1) and running time (row 2) comparisons of algorithms and baselines for the problem {{\sc Matroid-Cov-Cost}} with a partition matroid constraint. Columns \ref{fig:sc_time_part_gr_01_01} and \ref{fig:sc_time_part_gr_08_01} correspond to {{\textit{Guru}}} and columns \ref{fig:sc_time_part_fr_01_01} and \ref{fig:sc_time_part_fr_08_01} correspond to {{\textit{Freelancer}}}.}
\end{figure*}
\subsection{Evaluation for \large{{{\sc Matroid-Cov-Cost}}}}
\label{exp:matroid}
In this section, we evaluate the empirical performance of algorithms for the matroid-constrained problem {{\sc Matroid-Cov-Cost}}.
We evaluate the following algorithms:
\begin{list}{$\bullet$}{}
\item Our algorithm {{\texttt{MCSG}}} (Section~\ref{sec:alg-matroid}) and its lazy evaluation variant {{\texttt{MCSLG}}}. As discussed in Section~\ref{sec:alg-cardinality}, the two algorithms construct the same solution. Thus it suffices to include one of the algorithms in the plots evaluating the objective value, and we only show {{\texttt{MCSLG}}}. We show both algorithms in the plots evaluating the running time.
\item A baseline heuristic algorithm {{\texttt{Greedy}}}. This algorithm is similar to {{\texttt{MCSG}}}, the only difference is that instead of computing the marginal value with respect to the scaled objective $\tilde{g} = {{\ensuremath{f}}} - 2{{\ensuremath{c}}}$ we use the original objective $g = {{\ensuremath{f}}} - {{\ensuremath{c}}}$.
\item A baseline heuristic algorithm {{\texttt{Top-k-Experts-Matroid}}}. This algorithm is the natural generalization to the matroid setting of the {{\texttt{Top-k-Experts}}} baseline described earlier. {{\texttt{Top-k-Experts-Matroid}}} assigns a linear weight $w(e) = {{\ensuremath{f}}}(\{e\})-{{\ensuremath{c}}}(e)$ to each expert. The algorithm considers the experts in decreasing order according to the weights and, if it is feasible to add the current expert to the solution and the expert has positive marginal gain $g(e \vert Q) > 0$, the algorithm adds it to the solution.
\end{list}
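The {{\texttt{Top-k-Experts-Matroid}}} heuristic described above can be sketched as follows; the oracles `f_single`, `g_gain` and `is_feasible` are placeholders for the singleton coverage, the marginal gain of the combined objective, and the matroid independence test.

```python
def top_k_experts_matroid(experts, f_single, cost, g_gain, is_feasible):
    # Weight each expert by w(e) = f({e}) - c(e), scan in decreasing
    # order of weight, and add e when Q + {e} stays independent in the
    # matroid and the marginal gain g(e | Q) is positive.
    Q = []
    for e in sorted(experts, key=lambda x: f_single(x) - cost(x), reverse=True):
        if is_feasible(Q, e) and g_gain(e, Q) > 0:
            Q.append(e)
    return Q
```

For a partition matroid, `is_feasible` simply checks that the group of the candidate expert is not yet full.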
\spara{Experimental setting.}
In our experiments, we use a partition matroid constraint where we partition the set of experts into $\ell$ disjoint sets and then select at most $k$ experts from each set.
To demonstrate the efficacy of our methods in our application, we create salary-range expert partitions as follows.
We first derive the distinct salary values from the expert dataset, and then sort them from smallest to largest.
We split the sorted range into $\ell$ sub-ranges of equal length, where the smallest and largest values of each sub-range define a salary interval.
Finally, we use each salary interval to create a partition where the experts in this partition have salaries that belong to the corresponding interval.
To create the partitions we utilized the entire set of experts.
Furthermore, in our setting we set the number of partitions to $\ell=5$.
We experimented with other values of $\ell$ as well, but we omit the results for brevity.
Finally, even though the partition matroid constraint provides the flexibility to set a different cardinality constraint for each partition, in our experiments it is the same for all partitions.
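One way to realize this partition construction is sketched below. This is our interpretation: we read ``sub-ranges of equal length'' as contiguous blocks of (roughly) equally many distinct salary values, with each block's minimum and maximum defining a salary interval; an equal-width split of the numeric range would be an equally plausible reading.

```python
import numpy as np

def salary_partitions(salaries, num_parts=5):
    # Sort the distinct salary values, split them into num_parts
    # contiguous blocks, and assign every expert to the block that
    # contains its salary; returns one partition index per expert.
    distinct = np.unique(salaries)                  # sorted distinct values
    blocks = np.array_split(distinct, num_parts)
    idx = {s: i for i, b in enumerate(blocks) for s in b}
    return [idx[s] for s in salaries]
```

Experts with equal salaries always land in the same partition, as required for a well-defined partition matroid.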
\spara{Performance evaluation.}
For the performance evaluation we vary the cardinality constraint $k$ of the partitions.
We present the results of this evaluation in the first row of Figure \ref{fig:score_time_partition_matroid}.
The $y$-axis represents the combined objective value ($g$) of each algorithm, and the $x$-axis corresponds to the cardinality constraint $k$ which we set to be the same for each partition.
Clearly, the baseline algorithm {{\texttt{Top-k-Experts-Matroid}}} is outperformed by the other two algorithms.
However, we see that the performance of the second baseline {{\texttt{Greedy}}} is very close to the one of our proposed algorithm {{\texttt{MCSLG}}}.
This shows that {{\texttt{Greedy}}} can be a good heuristic with respect to the combined objective.
However, {{\texttt{MCSLG}}} performs as well as {{\texttt{Greedy}}}, but also provides solutions with provable approximation guarantees.
\spara{Runtime analysis.}
We now investigate the scalability of our proposed algorithms for the {{\sc Matroid-Cov-Cost}} problem.
We present the results of this evaluation in the second row of Figure \ref{fig:score_time_partition_matroid}.
Note that in the performance evaluation we did not present {{\texttt{MCSG}}}, since its performance is identical to that of {{\texttt{MCSLG}}}.
Here, we do compare the two with respect to their running times to showcase the computational benefits of using lazy evaluations.
The running time complexities are $O(n \cdot \mathrm{rank}(\mathcal{M}))$ --- where $\mathrm{rank}(\mathcal{M})$ is the size of the largest independent set in $\mathcal{M}$ --- for {{\texttt{MCSG}}}, {{\texttt{MCSLG}}} and {{\texttt{Greedy}}}, and $O(n\log n)$ for {{\texttt{Top-k-Experts-Matroid}}} (more precisely, $O(n)$ function evaluations plus $O(n \log n)$ additional time).
The $y$-axis represents the running time of each algorithm reported in seconds, and the $x$-axis is the cardinality constraint $k$ that corresponds to the maximum number of experts we can select from each partition.
Note that {{\texttt{Greedy}}} and {{\texttt{MCSG}}} have very close running times, which is expected since both are built on the same greedy algorithm and only the evaluated objective function changes.
Small differences in the running times are due to how ``easily'' the individual algorithms add elements to the solution and how many elements are valid to evaluate in each iteration.
When zooming in we see that the linear-time baseline {{\texttt{Top-k-Experts-Matroid}}} is the fastest algorithm, but its running time is very close to the proposed {{\texttt{MCSLG}}} algorithm.
Recall that even though the two algorithms have similar runtimes, the latter outperforms the former with respect to the combined objective value, and does so with provable approximation guarantees.
Finally, we see that in all datasets the lazy-evaluation variant {{\texttt{MCSLG}}} achieves $\sim$100x speedups compared to {{\texttt{Greedy}}} and {{\texttt{MCSG}}}, which do not use lazy evaluations.
\subsection{Discussion}
\label{exp:discussion}
In this section, we summarize the main contributions of our work.
First, throughout the experimental evaluation we see that the proposed algorithms based on the cost-scaled objective have competitive performance against the algorithms proposed by Harshaw {{\emph{et al.}}} \cite{harshaw2019submodular} that use the distorted objective.
Furthermore, we showcase that performing lazy evaluations always leads to huge computational savings, reaching 100x speedups on average, and up to 1000x speedups, compared both to the distorted-objective greedy, which cannot be sped up using lazy evaluations, and to other greedy algorithms, such as the standard greedy and the cost-scaled greedy without lazy evaluations.
Finally, we demonstrate that the proposed streaming and online algorithms achieve competitive performance against the offline algorithms, even though they address the harder online problem, where experts arrive in an online fashion.
\section{Conclusions}\label{sec:conclusions}
In this paper, we formalized the team-formation problem as the problem of maximizing
a positive monotone submodular function minus a linear cost function.
In that way we provided an alternative to the existing literature in team formation that aims for
complete skill coverage. For this more general version of the team-formation problem,
we devised effective and
efficient algorithms with provable approximation guarantees. We showed that these algorithms can be applied to a variety of settings including constrained, unconstrained, online and stream settings.
We also showed that these algorithms perform very well in practice, particularly for larger
datasets, reaching up to 3 orders of magnitude speedups. A direction for future research is to consider extensions of our framework to take into consideration not necessarily only linear costs, but other costs as well, such as the communication cost between experts organized in a social network.
\clearpage
\bibliographystyle{abbrv}
\section{Introduction}
Suppose we observe a series of $m$ independent $n$-dimensional
Gaussian vectors ${\bf y}_1,...,{\bf y}_m$ with independent components and
common variance:
\begin{equation} \label{eq:model} {\bf y}_j=\mbox{\boldmath $\mu$}_j+\mbox{\boldmath $\epsilon$}_j,\;\;\;
\mbox{\boldmath $\epsilon$}_j \stackrel{i.i.d.} \sim {\cal N}_n({\bf 0},\sigma_n^2
I_n),\;\;\;j=1,...,m
\end{equation}
The variance $\sigma_n^2>0$, which may depend on $n$, is assumed to be
known, and the goal is to estimate the unknown mean vectors
$\mbox{\boldmath $\mu$}_1,...,\mbox{\boldmath $\mu$}_m$.
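For concreteness, data from model \fr{eq:model} with both types of sparsity can be simulated as follows; the dimensions and sparsity levels are arbitrary illustrative choices of ours.

```python
import numpy as np

rng = np.random.default_rng(0)
n, m, sigma = 200, 100, 1.0
mu = np.zeros((n, m))
nonzero_cols = rng.choice(m, size=10, replace=False)   # between-sparsity
for j in nonzero_cols:
    support = rng.choice(n, size=15, replace=False)    # within-sparsity
    mu[support, j] = rng.normal(0.0, 3.0, size=15)
Y = mu + sigma * rng.standard_normal((n, m))           # y_j = mu_j + eps_j
```

Only 10 of the 100 mean vectors are nonzero, and each nonzero vector has only 15 nonzero entries out of 200, in line with the double-sparsity assumption below.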
The key extra assumption on the model \fr{eq:model} is {\em both} within-
and between-vectors sparsity (hereafter {\em within}- and {\em
between}-sparsity for brevity). More specifically, we assume that
some of the $\mbox{\boldmath $\mu$}_j$'s are identically zero vectors and the entire information
in the noisy data is contained only in a small fraction of them
(between-sparsity). Moreover, even within nonzero $\mbox{\boldmath $\mu$}_j$'s,
most of their components are still zeroes or at least ``negligible''
(within-sparsity). Formally, the within-sparsity can be quantified
in terms of $l_0$, strong or weak $l_p$-balls introduced further.
Neither the indices of non-zero $\mbox{\boldmath $\mu$}_j$'s nor the
locations of their ``significant'' components are known in advance.
Such a model appears in a variety of statistical applications, as we illustrate
with the following two examples.
\vspace{.5cm} \noindent {\em Example 1. Additive models}. Consider a
nonparametric regression model
$y_i=f(x_{1i},...,x_{mi})+\epsilon_i,\;i=1,...,n$, where $f:
\mathbb{R}^m \rightarrow \mathbb{R}$ is the unknown regression
function assumed to belong to some class of functions (e.g.,
H\'older, Sobolev or Besov classes), and $\epsilon_i
\stackrel{i.i.d.} \sim {\cal N}(0,\sigma_n^2)$. Estimating $f$ in
such a general setup suffers from a severe ``curse of
dimensionality'', where typically the sample size $n$ should grow
exponentially with the dimensionality $m$ to achieve consistent
estimation. It is essential then to place some extra restrictions on
the complexity of $f$. One of the most common approaches is to
consider the additive models, where
$f(x_1,...,x_m)=f_1(x_1)+...+f_m(x_m)$ and each component $f_j$ lies
in some smoothness class. In addition, similar to sparse linear
regression models, it is often reasonable to assume that only some
of the predictors among $x_1,...,x_m$ are really ``significant'', while
the impact of others is negligible if at all. Such {\em sparse}
additive models are especially relevant for $m \sim n$ and $m \gg n$
setups and have been considered in Lin \& Zhang (2006), Meier, van
de Geer \& Buhlmann (2009), Ravikumar {\em et al.} (2009), Raskutti,
Wainwright \& Yu (2012).
Expand each $f_j,\;j=1,...,m$ into (univariate) orthonormal series
$\{\psi_{ij}\}$ as $\sum \mu_{ij} \psi_{ij}(x_j)$,
where $\mu_{ij}=\int f_j(x_j) \psi_{ij}(x_j)dx_j$. The original
nonparametric additive model is then transformed into the equivalent
problem of estimating vectors of corresponding coefficients
$\mbox{\boldmath $\mu$}_1,...,\mbox{\boldmath $\mu$}_m$ within Gaussian noise \fr{eq:model}, where for
sparse additive models, most of $\mbox{\boldmath $\mu$}_j$ are zeroes
(between-sparsity). Moreover, for a properly chosen basis
$\{\psi_{ij}\}$ (e.g., Fourier series for Sobolev or wavelets for
more general Besov classes), the nonzero $\mbox{\boldmath $\mu$}_j$ will be also sparse
(within-sparsity).
\vspace{.5cm} \noindent {\em Example 2. Time-course microarray
experiments}. In time-course microarray experiments the data
consists of measurements of differences in the expression levels
between ``treated'' and ``control'' samples of $m$ genes recorded at
different times. A record on $j$-th gene at time point $t_i$ is
modelled as a measurement of an (unknown) expression profile function
$f_j(t)$ at time $t_i$ corrupted by Gaussian noise. The expression
of most genes is the same in both groups ($f_j \equiv 0$) and the
goal is to identify the differentially expressed genes and estimate
the corresponding non-identically zero expression profile functions
$f_j$. Similar to the previous example, each $f_j$ is commonly expanded into
some ``parsimonious'' orthonormal basis (e.g., Legendre polynomials,
Fourier or wavelets) as $f_j(t)=\sum_i \mu_{ij} \psi_{ij}(t)$
and in the coefficients domain the original functional model becomes
$$
y_{ij}=\mu_{ij}+z_{ij},\;\;\;j=1,...,m;\;i=1,...,n
$$
where $y_{ij}$ are empirical coefficients of the data on $j$-th gene
and $z_{ij}$ are Gaussian noise (see, e.g., Angelini {\em et. al}, 2007).
For most genes, $\mbox{\boldmath $\mu$}_j \equiv 0$ (between-sparsity), while due to the
parsimony of the chosen basis, for differentially expressed genes, $\mbox{\boldmath $\mu$}_j$
will still have a sparse representation (within-sparsity).
\vspace{.5cm}
To estimate $\mbox{\boldmath $\mu$}_1,...,\mbox{\boldmath $\mu$}_m$ in \fr{eq:model} under the assumptions of
between- and within-sparsity we proceed as follows.
From a series of
pioneering works of Donoho \& Johnstone in the nineties (e.g., Donoho \& Johnstone,
1994ab), it is well-known
that the optimal strategy for estimating a {\em single} sparse vector $\mbox{\boldmath $\mu$}_j$ from ${\bf y}_j$ is thresholding.
Various threshold estimators $\mbox{\boldmath{$\hat \mu$}}_j$ can be considered
as penalized likelihood estimators, where
$$
\mbox{\boldmath{$\hat \mu$}}_j=\arg \min_{\mbox{\boldmath{$\tilde \mu$}}_j \in \mathbb{R}^n}||{\bf y}_j-\mbox{\boldmath{$\tilde \mu$}}_j||_2^2+Pen_j(\mbox{\boldmath{$\tilde \mu$}}_j),
$$
corresponding to different choices of penalties $Pen_j(\mbox{\boldmath{$\tilde \mu$}}_j)$. In
particular, the $l_1$-type penalty
$Pen_j(\mbox{\boldmath{$\tilde \mu$}}_j)=\lambda||\mbox{\boldmath{$\tilde \mu$}}_j||_1$ leads to soft thresholding of
components of $\mbox{\boldmath{$\tilde \mu$}}_j$ with a constant threshold $\lambda/2$ that
coincides with the lasso estimator of Tibshirani (1996). Wider
classes of penalties on the {\em magnitudes} of components
$\tilde{\mu}_{ij}$ are discussed in Antoniadis \& Fan (2001). In this paper
we consider the
$l_0$ or complexity type penalties $Pen_j(||\mbox{\boldmath{$\tilde \mu$}}_j||_0)$ on the
{\em number of nonzero} components $\tilde{\mu}_{ij}$, where
$||\mbox{\boldmath{$\tilde \mu$}}_j||_0=\#\{i: \tilde{\mu}_{ij} \ne 0\}$,
that yield hard thresholding rules. In the simplest case, where
$Pen_j(||\mbox{\boldmath{$\tilde \mu$}}_j||_0)=\lambda ||\mbox{\boldmath{$\tilde \mu$}}_j||_0$, the resulting (constant)
threshold is $\sqrt{\lambda}$. More general complexity penalties
were studied in Birg\'e \& Massart (2001), Abramovich, Grinshtein \&
Pensky (2007), Abramovich {\em et al.} (2010) and Wu \& Zhou (2012).
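Both thresholding rules discussed above are one-line estimators; a sketch (ours) of the component-wise soft and hard rules, with the thresholds stated in the text:

```python
import numpy as np

def soft_threshold(y, t):
    # l1 penalty lambda * ||mu||_1 yields component-wise soft
    # thresholding at t = lambda / 2 (the lasso estimator).
    return np.sign(y) * np.maximum(np.abs(y) - t, 0.0)

def hard_threshold(y, lam):
    # l0 penalty lambda * ||mu||_0: keeping y_i costs lam, zeroing it
    # costs y_i^2, so keep y_i exactly when y_i^2 > lam, i.e. hard
    # thresholding at sqrt(lam).
    return np.where(y * y > lam, y, 0.0)
```

Soft thresholding shrinks the surviving coordinates, while hard thresholding keeps them untouched; this is the basic distinction between the $l_1$ and $l_0$ penalties exploited throughout the paper.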
Penalizing each $\mbox{\boldmath{$\tilde \mu$}}_j$ separately, however, essentially ignores
the between-sparsity, where it is assumed that most of $\mbox{\boldmath $\mu$}_j$ are
identically zeroes and should be obviously estimated by $\mbox{\boldmath{$\hat \mu$}}_j
={\bf 0}$. Thus, simultaneous estimation of all $m$ mean vectors in
\fr{eq:model} should involve an additional penalty $Pen_0(\cdot)$ on the
number of nonzero $\mbox{\boldmath{$\hat \mu$}}_j$'s that are now defined as solutions of
the following criterion: \begin{equation} \label{eq:est} \min_{\mbox{\boldmath{$\tilde \mu$}}_1,...,\mbox{\boldmath{$\tilde \mu$}}_m
\in \mathbb{R}^n} \left\{\sum_{j=1}^m \left\{
||{\bf y}_j-\mbox{\boldmath{$\tilde \mu$}}_j||_2^2+Pen_j(||\mbox{\boldmath{$\tilde \mu$}}_j||_0)\right\}+ Pen_0(k)\right\},
\end{equation} where $k=\#\{j: \mbox{\boldmath{$\tilde \mu$}}_j \ne {\bf 0}\}$. In this paper we
investigate the optimality of such an approach for estimating
$\mbox{\boldmath $\mu$}_1,...,\mbox{\boldmath $\mu$}_m$ under various within- and between-sparsity setups.
In particular, we specify the classes of complexity penalties
$Pen_j(||\mbox{\boldmath{$\tilde \mu$}}_j||_0)$ and $Pen_0(k)$ on respectively within- and
between sparsity for which the resulting estimators
$\mbox{\boldmath{$\hat \mu$}}_1,...,\mbox{\boldmath{$\hat \mu$}}_m$ achieve asymptotically minimax rates
simultaneously for the wide range of sparse and dense cases. Such types of
penalties naturally arise within a Bayesian model selection
framework. In this sense, this paper extends the results of Bayesian
MAP testimation approach developed in Abramovich, Grinshtein \&
Pensky (2007) and Abramovich {\em et al.} (2010) for estimating a
single normal mean vector to simultaneous estimation of a group of
$m$ vectors in the model \fr{eq:model}.
It is interesting to compare the proposed complexity penalization \fr{eq:est}
with lasso-type procedures. Similar to $l_0$-type penalization,
the vector-wise use of the original lasso of Tibshirani
(1996) for estimating each $\mbox{\boldmath $\mu$}_j$ in \fr{eq:model}
results in per-component (soft) thresholding of each ${\bf y}_j$ that
handles within-sparsity but ignores between-sparsity. To address
the latter, Yuan \& Lin (2006) proposed a {\em group lasso} that for the
particular model \fr{eq:model} at hand solves
$$
\min_{\mbox{\boldmath{$\tilde \mu$}}_1,...,\mbox{\boldmath{$\tilde \mu$}}_m \in \mathbb{R}^n}
\sum_{j=1}^m \left\{||{\bf y}_j-\mbox{\boldmath{$\tilde \mu$}}_j||_2^2+\lambda ||\mbox{\boldmath{$\tilde \mu$}}_j||_2 \right\}
$$
It can be easily shown that in such a setup, the group lasso
estimator is available in the closed form, namely,
$\mbox{\boldmath{$\hat \mu$}}_j=(1-\frac{\lambda/2}{||{\bf y}_j||_2})_+{\bf
y}_j,\;j=1,...,m$ which is the vector-level ``shrink-or-kill''
thresholding with a threshold $\lambda/2$. The $\mbox{\boldmath{$\hat \mu$}}_j$'s are,
therefore, either entirely zero or do not have zero components at
all. As a result, the group lasso does not handle within-sparsity. To
combine both types of sparsity, Friedman, Hastie \& Tibshirani
(2010) introduced the {\em sparse group lasso} that for the model
\fr{eq:model} is defined as \begin{equation} \label{eq:sparselasso}
\min_{\mbox{\boldmath{$\tilde \mu$}}_1,...,\mbox{\boldmath{$\tilde \mu$}}_m \in \mathbb{R}^n} \sum_{j=1}^m
\left\{||{\bf y}_j-\mbox{\boldmath{$\tilde \mu$}}_j||_2^2+ \lambda_1 ||\mbox{\boldmath{$\tilde \mu$}}_j||_2 + \lambda_2
||\mbox{\boldmath{$\tilde \mu$}}_j||_1\right\} \end{equation} yielding
$\mbox{\boldmath{$\hat \mu$}}_j=(1-\frac{\lambda_1/2}{||\tilde{\bf y}_j||_2})_+\tilde{\bf
y}_j,\;j=1,...,m$, where $\tilde{y}_{ij}={\rm
sign}(y_{ij})(|y_{ij}|-\lambda_2/2)_+,\;i=1,...,n$ is the result of
component-level soft thresholding of each ${\bf y}_j$ with a
threshold $\lambda_2/2$.
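The closed form above is easy to verify numerically; the following sketch (ours) implements the sparse group lasso estimator for model \fr{eq:model}, with the columns of $Y$ playing the role of the ${\bf y}_j$'s.

```python
import numpy as np

def sparse_group_lasso(Y, lam1, lam2):
    # Component-wise soft thresholding at lam2/2, followed by a
    # vector-level "shrink-or-kill" at lam1/2. Setting lam2 = 0
    # recovers the group lasso; lam1 = 0 recovers the plain lasso.
    Yt = np.sign(Y) * np.maximum(np.abs(Y) - lam2 / 2.0, 0.0)
    norms = np.linalg.norm(Yt, axis=0)
    safe = np.where(norms > 0.0, norms, 1.0)      # avoid division by zero
    shrink = np.maximum(1.0 - (lam1 / 2.0) / safe, 0.0)
    return Yt * np.where(norms > 0.0, shrink, 0.0)
```

As the text notes, a column is either entirely killed (when its soft-thresholded norm falls below $\lambda_1/2$) or uniformly shrunk, on top of the within-vector zeros produced by the soft-thresholding step.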
To the best of our knowledge, there are no theoretical results on optimality
of sparse group lasso similar to those presented in this paper for the
complexity penalized estimators \fr{eq:est}. Moreover, we believe that, generally,
$l_0$-type penalties are more ``natural'' for representing sparsity, and the
main reasons for using other types of penalties ($l_1$ in particular) are mostly
computational.
For a general regression model, complexity penalties indeed imply
combinatorial search over all possible models, while, for example,
sparse group lasso estimator
can be still efficiently computed by numerical iterative algorithms
(see Friedman, Hastie \& Tibshirani, 2010 and Simon {\em et al.}, 2011 for
details).
However, for the model
\fr{eq:model}, that can be essentially viewed as a special case of a general
regression setup, \fr{eq:est} can be
also solved by fast algorithms (see Section \ref{sec:bayes}) that makes such computational
arguments irrelevant.
The paper is organized as follows. In Section \ref{sec:bayes} we
develop a Bayesian formalism that gives rise to the penalized
estimators \fr{eq:est}. The asymptotic (as both $m$ and $n$ increase)
adaptive minimaxity of the resulting
{\em sparse group MAP} estimators over various sparse and dense settings is
investigated in Section \ref{sec:main}.
A short simulation study is presented in Section
\ref{sec:examples} and some concluding remarks are given in Section
\ref{sec:remarks}. All the proofs are placed in the Appendix.
\section{Bayesian sparse group MAP estimation} \label{sec:bayes}
Consider again the model \fr{eq:model}. If we knew the indices of nonzero
vectors $\mbox{\boldmath $\mu$}_j$ and the locations of their
``significant'' entries $\mu_{ij}$, we would evidently estimate them by
the corresponding $y_{ij}$ and set others to zero.
Hence, the original problem is
essentially reduced to finding an $n \times m$ indicator matrix $D$, where
$d_{ij}$ indicates whether $\mu_{ij}$ is ``significant'' or not,
and can thus be viewed as a model
selection problem. Note that due to the between- and within-sparsity assumptions,
the matrix $D$ should be sparse in a double sense:
only part of $D$'s columns ${\bf d}_j$ are supposed to be nonzero, and
even the nonzero columns are sparse.
We first introduce some notation.
Let ${\cal J}_0$ and ${\cal J}^c_0$ be the sets of indices corresponding respectively
to zero and nonzero mean vectors $\mbox{\boldmath $\mu$}_j$'s, and
$m_0=|{\cal J}^c_0|=\#\{j: \mbox{\boldmath $\mu$}_j \ne {\bf 0}, \;j=1,...,m\}$.
Denote by $h_j=\sum_{i=1}^n d_{ij}=
\#\{i: \mu_{ij} \ne 0,\;i=1,...,n\}$ the number of nonzero components in
$\mbox{\boldmath $\mu$}_j$, where evidently $h_j=0$ for $j \in {\cal J}_0$.
Consider the following Bayesian model selection procedure for
identifying nonzero components $\mu_{ij}$ or, equivalently, the
indicator matrix $D$. To capture the between- and within-sparsity
assumptions we place a hierarchical prior on $D$.
We first assume some prior distribution on the number of nonzero
mean vectors $m_0 \sim \pi_0(m_0)>0,\;m_0=0,...,m$. For a given $m_0$,
assume that all ${m \choose m_0}$ different configurations of zero
and nonzero mean vectors are equally likely, that is, conditionally
on $m_0$,
$$
P({\cal J}^c_0\; \bigl|\;|{\cal J}^c_0|=m_0)={m \choose m_0}^{-1}
$$
Obviously, $h_j\bigl|\{j \in {\cal J}_0\} \sim \delta(0)$ and, thus, ${\bf
d}_j\bigl|\{j \in {\cal J}_0\} \sim \delta({\bf 0})$ and $\mbox{\boldmath $\mu$}_j\bigl|\{j \in
{\cal J}_0\} \sim \delta({\bf 0})$. For nonzero $\mbox{\boldmath $\mu$}_j$ we place independent
priors $\pi_j(\cdot)$ on the number of their nonzero components,
that is, $h_j\bigl|\{j \in {\cal J}^c_0\} \sim \pi_j(h_j)>0,\;h_j=1,...,n$. In
this case, we again assume that for a given $h_j$, all
possible ${n \choose h_j}$ indicator vectors ${\bf d}_j$ with
$h_j$ nonzero components
have the same prior probabilities and, therefore,
$$
P({\bf d}_j \; \bigl|\; ||{\bf d}_j||_0=h_j,j \in {\cal J}^c_0)={n \choose h_j}^{-1}
$$
Finally, to complete the prior for \fr{eq:model}, we have
$\mu_{ij}\bigl|d_{ij}=0 \sim \delta(0)$, while
nonzero $\mu_{ij}$ are assumed to be i.i.d. $N(0,\gamma \sigma_n^2)$,
where $\gamma>0$.
A straightforward Bayesian calculus yields the posterior probability
for a given indicator matrix $D$:
$$
P(D\bigl|{\bf y}) \propto \pi_0(m_0) {m \choose m_0}^{-1} \prod_{j \in
{\cal J}^c_0} \left\{\pi_j(h_j){n \choose h_j}^{-1}
(1+\gamma)^{-\frac{h_j}{2}} e^{\frac{\gamma}{\gamma+1}
\frac{\sum_{i=1}^n y_{ij}^2 d_{ij}}{2\sigma_n^2}} \right\}
$$
Given the
posterior distribution $P(D|{\bf y})$ we apply the maximum {\em a
posteriori} (MAP) rule to choose the most likely configuration of
zero and nonzero $\mu_{ij}$ that leads to the following MAP
criterion:
\begin{equation}
\sum_{j \in {\cal J}^c_0} \left\{\sum_{i=1}^n
y_{ij}^2 d_{ij}+ 2\sigma_n^2(1+1/\gamma) \ln\left(\pi_j(h_j){n
\choose h_j}^{-1}(1+\gamma)^{-\frac{h_j}{2}}\right)\right\}+
2\sigma_n^2(1+1/\gamma)\ln\left(\pi_0(m_0){m \choose
m_0}^{-1}\right) \rightarrow \max_D
\label{eq:map0}
\end{equation}
From \fr{eq:map0} it follows
immediately that for a given $h_j>0$ the optimal choice $\hat{\bf
d}_j(h_j)$ for ${\bf d}_j$ is $\hat{d}_{ij}(h_j)=1$ for the $h_j$
largest $|y_{ij}|$ and zero otherwise. The criterion \fr{eq:map0} is
then reduced to \begin{equation} \label{eq:map1} \sum_{j \in {\cal J}^c_0}
\left\{\sum_{i=1}^{h_j} y_{(i)j}^2+ 2\sigma_n^2(1+1/\gamma)
\ln\left(\pi_j(h_j){n \choose
h_j}^{-1}(1+\gamma)^{-\frac{h_j}{2}}\right)\right\} +
2\sigma_n^2(1+1/\gamma)\ln\left(\pi_0(m_0){m \choose
m_0}^{-1}\right) \rightarrow \max_D, \end{equation} where $|y_{(1)j}| \geq ...
\geq |y_{(n)j}|$. For every $j=1,...,m$ define
\begin{eqnarray}
\hat{h}_j& = &\arg \min_{1 \leq h_j \leq n} \left\{\sum_{i=h_j+1}^n
y_{(i)j}^2+2\sigma_n^2(1+1/\gamma) \ln\left(\pi^{-1}_j(h_j){n
\choose h_j}(1+\gamma)^{\frac{h_j}{2}}\right)\right\}
\nonumber \\
& = & \arg \min_{1 \leq h_j \leq n}
\left\{-\sum_{i=1}^{h_j}y_{(i)j}^2+2\sigma_n^2(1+1/\gamma)\ln\left(\pi^{-1}_j(h_j){n
\choose h_j}(1+\gamma)^{\frac{h_j}{2}}\right)\right\} \label{eq:hj}
\end{eqnarray}
Then, \fr{eq:map1} is equivalent to minimizing \begin{equation} \sum_{j \in {\cal J}^c_0}
\left\{-\sum_{i=1}^{\hat{h}_j} y^2_{(i)j}+2\sigma_n^2(1+1/\gamma)
\ln\left(\pi^{-1}_j(\hat{h}_j){n \choose \hat{h}_j}
(1+\gamma)^{\frac{\hat{h}_j}{2}}\right)\right\}+
2\sigma_n^2(1+1/\gamma)\ln\left(\pi^{-1}_0(m_0){m \choose
m_0}\right) \label{eq:map2} \end{equation} over all subsets of indices ${\cal J}_0
\subseteq \{1,...,m\}$. Define \begin{equation} \label{eq:wj}
W_{j}=-\sum_{i=1}^{\hat{h}_j} y^2_{(i)j}+2\sigma_n^2(1+1/\gamma)
\ln\left(\pi^{-1}_j(\hat{h}_j){n \choose
\hat{h}_j}(1+\gamma)^{\frac{\hat{h}_j}{2}}\right) \end{equation} Then,
\fr{eq:map2} is obviously reduced to \begin{equation} \label{eq:map3} \min_{0
\leq m_0 \leq m}\left\{ \sum_{j=1}^{m_0}W_{(j)} +
2\sigma_n^2(1+1/\gamma)\ln\left(\pi^{-1}_0(m_0){m \choose
m_0}\right) \right\}, \end{equation} where $W_{(1)} \leq ... \leq W_{(m)}$ and
for $m_0=0$ the sum in the RHS of \fr{eq:map3} evidently does not
appear.
Summarizing, the efficient simple algorithm for finding the proposed
sparse group MAP estimators of $\mbox{\boldmath $\mu$}_1,...,\mbox{\boldmath $\mu$}_m$ in \fr{eq:model}
can be formulated as follows:
\vspace{.5cm}
{\em
\centerline{\bf Sparse group MAP estimation algorithm}
\begin{enumerate}
\item For every $j=1,...,m$, find $\hat{h}_j$ in \fr{eq:hj} and calculate
the corresponding $W_j$ in \fr{eq:wj}.
\item Order $W_j$ in ascending order $W_{(1)} \leq ... \leq W_{(m)}$ and
find
$$
\hat{m}_0=\arg\min_{0 \leq m_0 \leq m}\left\{
\sum_{j=1}^{m_0}W_{(j)} +
2\sigma_n^2(1+1/\gamma)\ln\left(\pi^{-1}_0(m_0){m \choose
m_0}\right) \right\}
$$
\item Let $\hat{{\cal J}^c_0}$ be the set of indices corresponding to the $\hat{m}_0$
smallest $W_j$. Set $\mbox{\boldmath{$\hat \mu$}}_j \equiv {\bf 0}$ for all $j \in \hat{{\cal J}_0}$, while
for $j \in \hat{{\cal J}^c_0}$, take the $\hat{h}_j$ largest $|y_{ij}|$ and threshold
others, that is, $\hat{\mu}_{ij}=y_{ij} \mathbb{I}\{|y_{ij}| \geq |y_{(\hat{h}_j)j}|\},\;i=1,...,n,\;j \in \hat{{\cal J}^c_0}$, where $|y_{(1)j}| \geq ... \geq |y_{(n)j}|$.
\end{enumerate}
}
\vspace{.5cm}
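The three steps above can be sketched in a few lines of NumPy. The code below is an illustrative implementation, not from the paper: it assumes truncated geometric priors $\pi_0(k) \propto q_0^k$ and $\pi_j(h) \propto q_j^h$ (as used in the simulations of Section \ref{sec:examples}), and the function name and default parameters are our own.

```python
import math
import numpy as np

def sparse_group_map(Y, sigma, gamma, q0=0.3, qj=0.3):
    """Sparse group MAP estimator for y_ij = mu_ij + noise.
    Y: (n, m) data matrix; truncated geometric priors pi_0 and pi_j
    with parameters q0 and qj (an illustrative choice)."""
    n, m = Y.shape
    c = 2.0 * sigma ** 2 * (1.0 + 1.0 / gamma)

    def log_prior(q, k, kmin, kmax):
        # truncated geometric prior pi(k) proportional to q^k, k = kmin..kmax
        z = sum(q ** i for i in range(kmin, kmax + 1))
        return k * math.log(q) - math.log(z)

    order = np.argsort(-np.abs(Y), axis=0)   # per-column ranking of |y_ij|
    W = np.empty(m)
    h_hat = np.empty(m, dtype=int)
    # Step 1: find h_hat_j and W_j for every column
    for j in range(m):
        cum = np.cumsum(Y[order[:, j], j] ** 2)
        crit = [-cum[h - 1] + c * (-log_prior(qj, h, 1, n)
                                   + math.log(math.comb(n, h))
                                   + 0.5 * h * math.log(1.0 + gamma))
                for h in range(1, n + 1)]
        h_hat[j] = int(np.argmin(crit)) + 1
        W[j] = min(crit)

    # Step 2: order the W_j and select the number of nonzero vectors
    idx = np.argsort(W)                      # W_(1) <= ... <= W_(m)
    best = -c * log_prior(q0, 0, 0, m)       # m0 = 0: all vectors zero
    m0_hat = 0
    for m0 in range(1, m + 1):
        val = W[idx[:m0]].sum() + c * (-log_prior(q0, m0, 0, m)
                                       + math.log(math.comb(m, m0)))
        if val < best:
            best, m0_hat = val, m0

    # Step 3: keep the h_hat_j largest |y_ij| within each selected vector
    Mu = np.zeros_like(Y, dtype=float)
    for j in idx[:m0_hat]:
        keep = order[:h_hat[j], j]
        Mu[keep, j] = Y[keep, j]
    return Mu
```

Note that the whole procedure requires only per-column sorting and two one-dimensional searches, in line with the computational argument made in the Introduction.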
The resulting estimation procedure therefore combines vector-wise
and component-wise thresholding. It is easily verified that the
minimizer of (\ref{eq:map2}) is, in fact, the penalized likelihood
estimator \fr{eq:est} with the complexity penalties \begin{equation}
\label{eq:penaltyj} Pen_j(0)=0,\;Pen_j(h_j)= 2\sigma_n^2(1+1/\gamma)
\ln\left(\pi^{-1}_j(h_j){n \choose
h_j}(1+\gamma)^{\frac{h_j}{2}}\right), \;h_j=1,...,n \end{equation} and \begin{equation}
\label{eq:penalty0}
Pen_0(m_0)=2\sigma_n^2(1+1/\gamma)\ln\left(\pi^{-1}_0(m_0){m \choose
m_0}\right),\;m_0=0,...,m
\end{equation}
The specific types of penalties $Pen_j(\cdot)$'s and $Pen_0(\cdot)$
depend on the choices of priors $\pi_j(\cdot)$'s and $\pi_0(\cdot)$.
For example, binomial priors $m_0 \sim B(m,\xi_0)$ and $h_j \sim
B(n,\xi_j)$ yield linear-type penalties $Pen_0(m_0)=2\sigma_n^2
\lambda^2_0 m_0$ and $Pen_j(h_j)=2\sigma_n^2 \lambda^2_j h_j$
respectively, where $\lambda^2_0=(1+1/\gamma)\ln\{(1-\xi_0)/\xi_0\}$
and $\lambda^2_j=(1+1/\gamma)\ln\{\sqrt{1+\gamma}(1-\xi_j)/\xi_j\}$.
For such a choice of $\pi_j(\cdot)$, $W_j$ in \fr{eq:wj} is
essentially obtained by hard thresholding of ${\bf y}_j$ with a {\em
constant} threshold $\sqrt{2}\sigma_n\lambda_j$. In particular,
$\xi_j=\sqrt{\gamma+1}/(\sqrt{\gamma+1}+n^{\gamma/(\gamma+1)})$
leads to the universal thresholding of Donoho \& Johnstone (1994a)
with $\lambda_j=\sqrt{\ln n}$. The (truncated) geometric priors
$\pi_j(h_j) \propto q_j^{h_j},\;h_j=1,...,n$ for some $0 < q_j < 1$,
imply the (nonlinear) so-called $2k\ln(n/k)$-type penalties. The
optimality of the resulting hard thresholding estimator with a {\em
data-driven} threshold for estimating a single normal mean vector
has been shown in Abramovich, Grinshtein \& Pensky (2007),
Abramovich {\em et al.} (2010) and Wu \& Zhou (2012).
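As a quick numerical sanity check of the binomial calibration above (illustrative code, not from the paper), one can verify that the stated choice of $\xi_j$ indeed yields $\lambda_j^2=\ln n$:

```python
import math

n, gamma = 100, 4.0
xi = math.sqrt(gamma + 1) / (math.sqrt(gamma + 1) + n ** (gamma / (gamma + 1)))
# lambda_j^2 = (1 + 1/gamma) * ln( sqrt(1+gamma) (1 - xi) / xi )
lam_sq = (1 + 1 / gamma) * math.log(math.sqrt(1 + gamma) * (1 - xi) / xi)
# the hard threshold sqrt(2)*sigma_n*lambda_j is then the universal
# threshold sigma_n*sqrt(2 ln n) of Donoho & Johnstone (1994a)
print(abs(lam_sq - math.log(n)))   # numerically zero
```

The identity holds exactly: $(1-\xi_j)/\xi_j=n^{\gamma/(\gamma+1)}/\sqrt{\gamma+1}$, so the logarithm equals $\frac{\gamma}{\gamma+1}\ln n$ and the prefactor $(1+1/\gamma)$ cancels the $\gamma/(\gamma+1)$ factor.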
\section{Adaptive minimaxity of sparse group MAP estimators} \label{sec:main}
In this section we investigate the performance of the proposed sparse
group MAP estimators \fr{eq:est} with the penalties
\fr{eq:penaltyj}-\fr{eq:penalty0}, where
the goodness-of-fit is measured by the global quadratic risk
$\sum_{j=1}^m E||\mbox{\boldmath{$\hat \mu$}}_j-\mbox{\boldmath $\mu$}_j||^2_2$.
We establish their
asymptotic minimaxity over a wide range of sparse and dense settings.
To derive these results we need the following assumption on the priors
$\pi_j(\cdot)$:
\begin{assumption}
\label{as:P}
Assume that
\begin{equation}
\pi_j(h) \leq {n \choose h}e^{-c(\gamma)h},\;h=1,...,n,\;j=1,...,m, \label{eq:P}
\end{equation}
where $c(\gamma)=8(\gamma+3/4)^2>9/2$.
\end{assumption}
Assumption (P) is, in fact, not restrictive. Indeed,
the obvious inequality ${n \choose h} \geq (n/h)^h$ implies that
for {\em any} $\pi_j(\cdot)$, \fr{eq:P} holds
for all $h \leq ne^{-c(\gamma)}$. In particular, Assumption (P) is satisfied for
binomial priors $B(n,\xi_j)$ with $\xi_j \leq e^{-c(\gamma)}/(1+e^{-c(\gamma)})$
and (truncated) geometric priors.
First, we obtain a general upper bound for the quadratic risk
of the sparse group MAP estimator that will be the key
for deriving its asymptotic minimaxity.
\begin{theorem}[general upper bound] \label{th:upper}
Consider the sparse group
MAP estimators $\mbox{\boldmath{$\hat \mu$}}_1,....,\mbox{\boldmath{$\hat \mu$}}_m$ \fr{eq:est}
of $\mbox{\boldmath $\mu$}_1,...,\mbox{\boldmath $\mu$}_m$ with the complexity penalties \fr{eq:penaltyj}-\fr{eq:penalty0}
in the model \fr{eq:model}.
Under Assumption (P) we have
\begin{eqnarray}
\sum_{j=1}^m E||\mbox{\boldmath{$\hat \mu$}}_j-\mbox{\boldmath $\mu$}_j||^2_2 & \leq & c_1(\gamma) \min_{{\cal J}_0 \subseteq \{1,...,m\}} \left\{
\sum_{j \in {\cal J}^c_0}
\min_{1 \leq h_j \leq n}
\left(\sum_{i=h_j+1}^n \mu^2_{(i)j}+Pen_j(h_j)\right) \right. \nonumber \\
& + & \left. \sum_{j \in {\cal J}_0}\sum_{i=1}^n
\mu^2_{ij}+Pen_0(|{\cal J}^c_0|)\right\} +c_2(\gamma) \sigma_n^2 (1-\pi_0(0)),
\label{eq:upper}
\end{eqnarray}
where $|\mu_{(1)j}| \geq ... \geq |\mu_{(n)j}|$ and $c_1(\gamma)$, $c_2(\gamma)$
depend only on $\gamma$.
\end{theorem}
The results of Theorem \ref{th:upper} hold for any
normal mean vectors $\mbox{\boldmath $\mu$}_1,...,\mbox{\boldmath $\mu$}_m$.
Now we consider \fr{eq:model} under the extra within- and between-sparsity assumptions
that will be defined more rigorously below.
The between-sparsity is naturally measured by the number $m_0$ of
nonzero $\mbox{\boldmath $\mu$}_j$'s. The within-sparsity can be introduced in several
ways. The most intuitive measure of within-sparsity of a single
normal mean vector $\mbox{\boldmath $\mu$} \in \mathbb{R}^n$ is the number of its
nonzero components, that is, its $l_0$ quasi-norm $||\mbox{\boldmath $\mu$}||_0$.
Define then an $l_0$-ball $l_0[\eta]$ of standardized radius $\eta$
as a set of $\mbox{\boldmath $\mu$}$ with at most a proportion $\eta$ of non-zero
entries, that is
$$
l_0[\eta]=\{\mbox{\boldmath $\mu$} \in \mathbb{R}^n~: ||\mbox{\boldmath $\mu$}||_0 \leq \eta n \}
$$
One can argue that in many practical settings, it is more reasonable
to assume that the components $\mu_i$'s of $\mbox{\boldmath $\mu$}$ are not exactly
zero but ``small''. In a wider sense the within-sparsity of $\mbox{\boldmath $\mu$}$
can be then defined by the proportion of its large entries.
Formally, define a weak $l_p$-ball $m_p[\eta]$ with a standardized
radius $\eta$ as
$$
m_p[\eta]=\{\mbox{\boldmath $\mu$} \in \mathbb{R}^n~: |\mu|_{(i)} \leq \sigma_n \eta
(n/i)^{1/p},\;i=1,...,n\},
$$
where $|\mu|_{(1)} \geq ... \geq |\mu|_{(n)}$ are the ordered absolute values of the components of $\mbox{\boldmath $\mu$}$.
For $\mbox{\boldmath $\mu$} \in m_p[\eta]$, the proportion of $|\mu_i|$'s larger
than $\sigma_n \delta$ for some $\delta>0$ is at
most $(\eta/\delta)^p$.
Within-sparsity can be also measured in terms of the $l_p$-norm of $\mbox{\boldmath $\mu$}$, where
a strong $l_p$-ball $l_p[\eta]$ with standardized radius $\eta$ is defined as
$$
l_p[\eta]=\{\mbox{\boldmath $\mu$} \in \mathbb{R}^n~: \frac{1}{n}\sum_{i=1}^n|\mu_i|^p
\leq \sigma_n^p \eta^p\}
$$
There are well-known relationships between these types of balls.
The $l_p$-norm approaches the $l_0$ quasi-norm as $p$ decreases to zero, while a weak $l_p$-ball
contains the corresponding strong $l_p$-ball but only just:
$$
l_p[\eta] \subset m_p[\eta] \not\subset l_{p'}[\eta],\;p'>p
$$
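These definitions are straightforward to code up. The following hypothetical snippet (our own helper functions, for illustration only, with $\sigma_n=1$) checks membership in the three balls and illustrates the embedding $l_p[\eta] \subset m_p[\eta]$:

```python
import numpy as np

sigma_n = 1.0

def in_l0(mu, eta):
    # at most a proportion eta of nonzero entries
    return np.count_nonzero(mu) <= eta * len(mu)

def in_weak_lp(mu, eta, p):
    # |mu|_(i) <= sigma_n * eta * (n/i)^(1/p) for the ordered |mu|'s
    n = len(mu)
    a = np.sort(np.abs(mu))[::-1]
    bound = sigma_n * eta * (n / np.arange(1, n + 1)) ** (1.0 / p)
    return bool(np.all(a <= bound))

def in_strong_lp(mu, eta, p):
    # (1/n) * sum |mu_i|^p <= (sigma_n * eta)^p
    return np.mean(np.abs(mu) ** p) <= (sigma_n * eta) ** p

mu = np.zeros(100)
mu[0] = 2.0
# a vector in the strong l_p-ball also lies in the weak l_p-ball
print(in_strong_lp(mu, 0.5, 1), in_weak_lp(mu, 0.5, 1), in_l0(mu, 0.05))
```

The embedding follows since $i\,|\mu|_{(i)}^p \leq \sum_{k=1}^n |\mu_k|^p \leq n\sigma_n^p\eta^p$ for every $i$, so membership in $l_p[\eta]$ implies membership in $m_p[\eta]$.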
We recall first the known results on minimax rates for estimating a
{\em single} normal mean vector $\mbox{\boldmath $\mu$}$ over different types of balls
introduced above. Let $\Theta[\eta_n] \subset \mathbb{R}^n$ be any
of $l_0[\eta_n],l_p[\eta_n]$ or $m_p[\eta_n]$, where the
standardized radius $\eta$ might depend on $n$. The corresponding
minimax quadratic risk for estimating a single $\mbox{\boldmath $\mu$}$ ($m=1$) over
$\Theta[\eta_n]$ in \fr{eq:model} is
$R(\Theta[\eta_n])=\inf_{\mbox{\boldmath{$\tilde \mu$}}}\sup_{\mbox{\boldmath $\mu$} \in
\Theta[\eta_n]}E||\mbox{\boldmath{$\tilde \mu$}}-\mbox{\boldmath $\mu$}||^2_2$, where the infimum is taken over
all estimates $\mbox{\boldmath{$\tilde \mu$}}$ of $\mbox{\boldmath $\mu$}$. For $p>0$ define
$\eta_{0n}=n^{-1/\min(p,2)} \sqrt{\ln n}$. Depending
on the behaviour of $\eta_n$ as $n$ increases, we distinguish between
three cases for $p>0$ and two cases for $p=0$:
\begin{itemize}
\item[a)] {\em dense}, where $\eta_n \not \rightarrow 0$;
\item[b)] {\em sparse}, where $\eta_n \rightarrow 0$ but $\eta_n/\eta_{0n} \not \rightarrow 0$ for $p>0$ and, obviously, $\eta_n \geq n^{-1}$ for $p=0$;
\item[c)] {\em super-sparse} (for $p>0$), where $\eta_n/\eta_{0n} \rightarrow 0$.
\end{itemize}
The corresponding minimax convergence rates over $R(\Theta[\eta_n])$ for
various cases and $p$ are summarized in Table \ref{tab:rates} below
(see Donoho {\em et al.}, 1992; Johnstone, 1994; Donoho \&
Johnstone, 1994b).
The rates for $m_p[\eta_n]$ are the same as for $l_p[\eta_n]$
except $p=2$, where there is an additional log-term.
Table \ref{tab:rates} defines dense and sparse
zones for $p=0$ and $p \geq 2$, and dense, sparse and super-sparse
zones for $0 < p < 2$ of different minimax rates.
\begin{table}[htb]
\begin{center}
\begin{tabular}{|l|c|c|c|}
\hline
Case & $p=0$ & $0 < p < 2$ & $p \geq 2$ \\
\hline
dense case & $\sigma_n^2 n$ & $\sigma_n^2 n$ &
$\sigma_n^2 n$ \\
sparse case & $\sigma_n^2 n \eta_n(\ln \eta_n^{-1})$ &
$\sigma_n^2 n \eta_n^p(\ln \eta_n^{-p})^{1-p/2}$ &
$\sigma_n^2 n \eta_n^2$ \\
super-sparse case &
$ - $ &
$\sigma_n^2n^{2/p}\eta_n^2$ & $\sigma_n^2 n \eta_n^2$ \\
\hline
\end{tabular}
\caption{Minimax rates (up to multiplying constants) over various
$l_0[\eta_n]$, $l_p[\eta_n]$ and
$m_p[\eta_n]$-balls. The rates are the same for $l_p[\eta_n]$ and $m_p[\eta_n]$
except for $p=2$, where for $m_p[\eta_n]$ there is an additional log-term,
not presented in the table for brevity.}
\label{tab:rates}
\end{center}
\end{table}
Consider now the model \fr{eq:model} for $m \geq 1$. Recall that
$m_0=\#\{j: \mbox{\boldmath $\mu$}_j \ne {\bf 0}\}$ and ${\cal J}^c_0$ is the set of
indices for nonzero $\mbox{\boldmath $\mu$}_j$. In what follows we assume that
$\mbox{\boldmath $\mu$}_j \in \Theta_j[\eta_{jn}]$ for $j \in {\cal J}^c_0$, where the types ($l_0$,
weak $m_p$ or strong $l_p$) and the parameters $p$ of the
corresponding balls are not necessarily the same for all $j$.
Furthermore, we allow the priors $\pi_0(\cdot)$ and $\pi_j(\cdot)$
to depend respectively on $m$ and $n$.
Theorem \ref{th:upperth} below establishes the asymptotic upper bounds
for the quadratic risks of the sparse group MAP estimator in
\fr{eq:model} under within- and between sparsity assumptions:
\begin{theorem}[upper bounds over sparse and dense settings] \label{th:upperth}
Consider the model \fr{eq:model}, where ${\cal J}^c_0 \neq \emptyset$ (not
pure noise). Assume that $\mbox{\boldmath $\mu$}_j \in \Theta_j[\eta_{jn}]$ for all $j \in {\cal J}^c_0$,
where $\eta_{jn} \geq n^{-1/\min(p_j,2)} \sqrt{\ln n}$ for all $p_j>0$
(excluding, thus, super-sparse cases).
Let $\mbox{\boldmath{$\hat \mu$}}_1,...,\mbox{\boldmath{$\hat \mu$}}_m$
be the sparse group MAP estimators \fr{eq:est} with the complexity penalties
\fr{eq:penaltyj}-\fr{eq:penalty0}, and assume that
there exist constants $c_0, c_1>0$ and $c_2>c(\gamma)$ such that
\begin{enumerate}
\item $\pi_0(k) \geq (k/m)^{c_0 k},\;k=1,...,\lfloor m/e\rfloor$ and
$\pi_0(m) \geq e^{-c_0 m}$
\item for all $j=1,...,m$, $\pi_j(\cdot)$ satisfy Assumption (P) and, in
addition,
$\pi_j(h) \geq (h/n)^{c_1 h},\;h=1,...,\lfloor ne^{-c(\gamma)}\rfloor$;
$\;\;\pi_j(n) \geq e^{-c_2 n}$
\end{enumerate}
Then, for any ${\cal J}^c_0 \subseteq \{1,...,m\}$ with $|{\cal J}^c_0|=m_0$ and all
$\Theta_j[\eta_{jn}],\; j \in {\cal J}^c_0$,
\begin{equation} \label{eq:rate}
\sup_{\mbox{\boldmath $\mu$}_j
\in \Theta_j[\eta_{jn}],j \in {\cal J}^c_0}\sum_{j=1}^m E||\mbox{\boldmath{$\hat \mu$}}_j-\mbox{\boldmath $\mu$}_j||^2_2
\leq C_1(\gamma) \max\left(\sum_{j \in {\cal J}^c_0}
R(\Theta_j[\eta_{jn}]),\sigma_n^2 m_0 \ln(m/m_0)\right)
\end{equation}
for some constant $C_1(\gamma)$ depending only on $\gamma$, where the
corresponding $R(\Theta_j[\eta_n])$ are given in Table
\ref{tab:rates} (up to multiplying constants).
\end{theorem}
Theorem \ref{th:upperth} shows that as both $m$ and $n$ increase,
the asymptotic convergence rates in \fr{eq:rate} are either of order
$\sum_{j \in {\cal J}^c_0}R(\Theta_j[\eta_{jn}])$ or $\sigma_n^2 m_0
\ln(m/m_0)$. The former is associated with the optimal rates of
estimating $m_0$ single sparse vectors in $\Theta_j[\eta_{jn}],\; j
\in {\cal J}^c_0$, while the latter appears in the optimal rates in the model
selection and corresponds to the error of selecting a subset of
$m_0$ nonzero elements out of $m$ (see, e.g. Abramovich \&
Grinshtein, 2010; Raskutti, Wainwright \& Yu, 2011; Rigollet \&
Tsybakov, 2011). From Table \ref{tab:rates} it follows that for all
within-dense and within-sparse cases, $C_1 \sigma^2_n \ln n \leq R(\Theta_j [\eta_{jn}]) \leq C_2 \sigma^2_n n,\;j \in {\cal J}^c_0$ for some $C_1,\;C_2 > 0$
and, therefore,
the first term $\sum_{j \in {\cal J}_0^c}R(\Theta_j[\eta_{jn}])$ in the upper bound (\ref{eq:rate})
is always dominating for $m_0 > m/n$, while the second term
$\sigma^2_n m_0\ln(m/m_0)$ is necessarily the main one for $m_0 < m/e^n$.
One can easily verify that the conditions on the priors
$\pi_0(\cdot)$ and $\pi_j(\cdot)$ required in Theorem \ref{th:upperth}
are satisfied, for example, for
the (truncated) geometric priors (see Section \ref{sec:bayes}). On
the other hand, no binomial priors $\pi_0=B(m,\xi_0)$ or
$\pi_j=B(n,\xi_j)$ can satisfy all of them: the requirement
$\pi_j(n)=\xi_j^n \geq e^{-c_2n}$ yields $\xi_j \geq e^{-c_2}$,
while to have $\pi_j(1)=n\xi_j(1-\xi_j)^{n-1} \geq n^{-c_1}$ one
needs $\xi_j \rightarrow 0$ as $n$ increases.
\medskip
To establish the corresponding lower bound for the
minimax risk, for simplicity of exposition we consider only the two
cases, where $p_j$ for $j \in {\cal J}^c_0$ are either all zeroes or all
positive. In fact, these are the two main scenarios appearing in
various setups.
Somewhat similar results for minimax lower bounds in the particular
context of sparse nonparametric additive models (see Introduction)
appear in Raskutti, Wainwright \& Yu (2012).
\begin{theorem}[minimax lower bounds for $l_0$-balls] \label{th:lower1}
Consider the model \fr{eq:model}, where $\mbox{\boldmath $\mu$}_j \in l_0[\eta_{jn}],
\; j \in {\cal J}^c_0$. Assume that $|{\cal J}^c_0|=m_0>0$. Then, there exists a
constant $C_2>0$ such that
\begin{equation} \label{eq:lower1}
\inf_{\mbox{\boldmath{$\tilde \mu$}}_1,...,\mbox{\boldmath{$\tilde \mu$}}_m} \sup_{\mbox{\boldmath $\mu$}_j \in l_0[\eta_{jn}],j \in
{\cal J}^c_0}\sum_{j=1}^m E||\mbox{\boldmath{$\tilde \mu$}}_j-\mbox{\boldmath $\mu$}_j||^2_2 \geq C_2 \max\left(\sum_{j
\in {\cal J}^c_0} R(l_0[\eta_{jn}]),\sigma_n^2 m_0 \ln(m/m_0)\right),
\end{equation}
where the infimum is taken over all estimates $\mbox{\boldmath{$\tilde \mu$}}_1,...,\mbox{\boldmath{$\tilde \mu$}}_m$ of
$\mbox{\boldmath $\mu$}_1,...,\mbox{\boldmath $\mu$}_m$.
\end{theorem}
Theorem \ref{th:lower1} shows that, as $m$ and $n$ increase, the
rates in \fr{eq:rate} cannot be improved for $l_0$-balls.
The proposed sparse group
MAP estimator in this case is, therefore, adaptive to the unknown degrees of
within- and between-sparsity and is simultaneously rate-optimal (in
the minimax sense) over the entire range of dense and sparse $l_0$-ball settings.
The analysis of the case $p_j>0$ is slightly more delicate.
Note first that due to the embedding properties of $l_p$-balls
for $p>0$ (see above), it is sufficient to establish the minimax lower
bounds for strong $l_p$-balls settings.
\begin{theorem}[minimax lower bounds for $l_p$-balls] \label{th:lower2}
Consider the model \fr{eq:model}, where $\mbox{\boldmath $\mu$}_j \in
l_{p_j}[\eta_{jn}], \; j \in {\cal J}^c_0$ and $|{\cal J}^c_0|=m_0>0$. In addition,
assume that $\eta^2_{jn} \geq n^{-2/\min(p_j,2)} \max\left(\ln n,\ln(m/m_0)\right)$.
Under this additional constraint, there exists a constant $C_2>0$ such that
\begin{equation}
\label{eq:lower2}
\inf_{\mbox{\boldmath{$\tilde \mu$}}_1,...,\mbox{\boldmath{$\tilde \mu$}}_m} \sup_{\mbox{\boldmath $\mu$}_j \in
l_{p_j}[\eta_{jn}],j \in {\cal J}^c_0}\sum_{j=1}^m E||\mbox{\boldmath{$\tilde \mu$}}_j-\mbox{\boldmath $\mu$}_j||^2_2 \geq
C_2 \max\left(\sum_{j \in {\cal J}^c_0} R(l_{p_j}[\eta_{jn}]),\sigma_n^2 m_0
\ln(m/m_0)\right), \end{equation}
where the infimum is taken over all
estimates $\mbox{\boldmath{$\tilde \mu$}}_1,...,\mbox{\boldmath{$\tilde \mu$}}_m$ of $\mbox{\boldmath $\mu$}_1,...,\mbox{\boldmath $\mu$}_m$.
\end{theorem}
Similarly to Theorem \ref{th:lower1}, Theorem \ref{th:lower2} implies
simultaneous optimality (in the minimax sense) of the sparse group MAP estimator
over strong and weak $l_p$-balls, but with the restriction on $\eta_{jn}$ and $m_0$. In particular, it does not
cover settings with within-super-sparsity, but might also exclude part of the
corresponding within-sparse zone (depending on $m_0$).
Within- and between-sparsity cannot {\em both} be ``too strong''.
In fact, the condition
$\eta^2_{jn} < n^{-2/\min(p_j,2)} \max\left(\ln n,\ln(m/m_0)\right),\;
j \in {\cal J}^c_0$ can be viewed as an extended definition of super-sparsity
for $m>1$.
For such a super-sparse case, the minimax bound (\ref{eq:lower2})
does not hold and can be improved (reduced).
Indeed,
consider the trivial zero estimator $\mbox{\boldmath{$\tilde \mu$}}_j \equiv {\bf 0},\;j=1,...,m$,
where, evidently,
\begin{equation} \label{eq:zero}
\sup_{\mbox{\boldmath $\mu$}_j \in
l_{p_j}[\eta_{jn}],j \in {\cal J}^c_0}\sum_{j=1}^m E||\mbox{\boldmath{$\tilde \mu$}}_j-\mbox{\boldmath $\mu$}_j||^2_2
=\sup_{\mbox{\boldmath $\mu$}_j \in
l_{p_j}[\eta_{jn}],j \in {\cal J}^c_0}\sum_{j \in {\cal J}^c_0} ||\mbox{\boldmath $\mu$}_j||^2_2
\end{equation}
The least favourable sequences that maximize $||\mbox{\boldmath $\mu$}_j||^2_2$ over
$l_{p_j}[\eta_{jn}]$ are $(\sigma_n\eta_{jn},...,\sigma_n\eta_{jn})'$
and $(\sigma_n\eta_{jn}n^{1/p_j},0,...,0)'$ for $p_j \geq 2$ and $0<p_j<2$
respectively. Thus,
$\sup_{\mbox{\boldmath $\mu$}_j \in l_{p_j}[\eta_{jn}]}||\mbox{\boldmath $\mu$}_j||^2_2=\sigma_n^2\eta_{jn}^2n^{2/\min(p_j,2)}$
and the RHS of \fr{eq:zero} is less than $\sigma^2_n m_0 \ln(m/m_0)$
for $\eta^2_{jn}<n^{-2/\min(p_j,2)} \ln(m/m_0),\;j \in {\cal J}^c_0$.
This is in line with the corresponding results for estimating
a single normal mean vector, where a zero estimator is known to be
rate-optimal for the super-sparse case (Donoho \& Johnstone, 1994b).
\section{Simulation study} \label{sec:examples}
A short simulation study was carried out to demonstrate the
performance of the proposed approach.
The data were generated according to the model \fr{eq:model} with
$m=10$ vectors $\mbox{\boldmath $\mu$}_j$ of length $n=100$. Five of the $\mbox{\boldmath $\mu$}_j$'s were
identically zero, while the other five had respectively $100, 70,
50, 20$ and $5$ nonzero components randomly sampled from
$N(0,\tau^2),\;\tau=1,3,5$, with all other components set to zero. Such a setup covers various
types of within-sparsity. Finally,
independent standard Gaussian $N(0,1)$ noise was added to all
components of each $\mbox{\boldmath $\mu$}_j$.
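For reproducibility, data of this kind can be generated along the following lines (a hypothetical NumPy sketch of the described design; the seed and variable names are our own):

```python
import numpy as np

rng = np.random.default_rng(0)
m, n, tau, sigma = 10, 100, 3.0, 1.0            # here gamma = tau^2/sigma^2 = 9
sizes = [0, 0, 0, 0, 0, 100, 70, 50, 20, 5]     # nonzero components per vector

Mu = np.zeros((n, m))
for j, k in enumerate(sizes):
    if k > 0:
        support = rng.choice(n, size=k, replace=False)   # random support
        Mu[support, j] = rng.normal(0.0, tau, size=k)    # N(0, tau^2) entries

Y = Mu + rng.normal(0.0, sigma, size=(n, m))    # add N(0,1) noise
```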
We tried binomial and truncated geometric priors for sparse group
MAP estimators.
For the binomial prior, we performed component-wise
universal hard thresholding of Donoho \& Johnstone (1994a) with a threshold
$\lambda=\sigma\sqrt{2\log n}$ within each vector, which essentially corresponds
to $\xi_j=\sqrt{\gamma+1}/(\sqrt{\gamma+1}+n^{\gamma/(\gamma+1)})$,
where $\gamma=\tau^2/\sigma^2$ (see Section \ref{sec:bayes}),
and used $\xi_0=1/m$. For the geometric prior
we set $q_0=q_j=0.3$. In addition, we compared the
performances of sparse group MAP estimators with the sparse group
lasso estimator \fr{eq:sparselasso} of Friedman, Hastie \&
Tibshirani (2010) described in the Introduction. They do not discuss the
optimal choices for $\lambda_1$ and $\lambda_2$ in
\fr{eq:sparselasso}; some heuristic arguments are given in Simon {\em et al.} (2011).
In our simulation study we considered instead two oracle-based choices for these
tuning parameters, thus giving a significant handicap to the sparse group lasso estimators.
Since in simulation examples the true mean vectors $\mbox{\boldmath $\mu$}_j$ are known, they
can be used to choose $\lambda_1$ and $\lambda_2$ optimally.
In particular, we considered a ``semi-oracle'' sparse group lasso
estimator, where we set $\lambda_2=2\sigma \sqrt{2\log n}$ yielding
universal soft thresholding within each vector (see
Introduction) to compare the sparse group lasso with the binomial
sparse group MAP. $\lambda_1$ was chosen by minimizing the
mean squared error $\sum_{j=1}^m E||\hat{\mbox{\boldmath $\mu$}}_j(\lambda_1)- \mbox{\boldmath $\mu$}_j||^2_2$
estimated by averaging over a series of 1000 replications for each value
of $\lambda_1$ by a grid search.
In addition, we applied a ``fully oracle'' sparse group lasso estimator, where
both $\lambda_1$ and $\lambda_2$ were chosen to minimize the mean squared error
by a two-dimensional grid search. It can be considered as a benchmark for the
performance of sparse group lasso. Table \ref{tab:lambda} provides the resulting
oracle choices for $\lambda_1$ and $\lambda_2$.
\begin{table}[htb]
\begin{center}
\begin{tabular}{|c|c|c|}
\hline
$\gamma$ & $\lambda_1$ & $\lambda_2$ \\
\hline
1 & 11.8 & 0.9 \\
9 & 7.2 & 1.1 \\
25 & 4.7 & 1.3 \\
\hline
\end{tabular}
\caption{The oracle choices for the parameters of the fully oracle
sparse group lasso estimator ($\gamma=\tau^2/\sigma^2$).}
\label{tab:lambda}
\end{center}
\end{table}
Table \ref{tab:lambda} shows that for all $\gamma$, the oracle
choice for $\lambda_2$ in the sparse group lasso
is much less than the conservative universal threshold
$2\sigma \sqrt{2\log n} \approx 6.06$. The oracle thresholding within each
vector is thus much less severe and keeps more coefficients.
The oracle choices for $\lambda_1$ were also quite small and, as a result,
for any $\gamma$, not a single vector
was thresholded out by the fully oracle sparse group lasso, that is, all $\hat{\mbox{\boldmath $\mu$}}_j \ne {\bf 0}$.
It was thus effectively a non-sparse estimator in the considered setup.
In Table \ref{tab:mse} we present the mean squared errors averaged
over 1000 replications for the four sparse group estimators with the
corresponding standard errors for various $\gamma$ (or, equivalently,
$\tau$).
\begin{table}[htb]
\begin{center}
\begin{tabular}{|c|c|c|c|c|}
\hline
$\gamma$ & Sparse Group MAP & Sparse Group MAP & Sparse Group Lasso & Sparse Group Lasso \\
& (binomial) & (geometric) & (semi-oracle) & (fully oracle) \\
\hline
1 & 247.40 & 245.46 & 236.85 & 161.89 \\
& (0.71) & (0.70) & (0.65) & (0.43) \\
\hline
9 & 608.02 & 378.87 & 1120.99 & 403.76 \\
& (1.96) & (1.20) & (2.29) & (0.91) \\
\hline
25 & 549.77 & 351.52 & 1595.91 & 475.47 \\
& (1.68) & (1.30) & (2.79) & (1.07) \\
\hline
\end{tabular}
\caption{MSEs averaged over 1000 replications for four sparse group
estimators and the corresponding standard errors (in brackets) for various $\gamma$.}
\label{tab:mse}
\end{center}
\end{table}
For small $\gamma$, only a few of the largest nonzero components can be
distinguished from the noise, which essentially corresponds to a
sparse setting and explains the good performance of the binomial sparse
group MAP and semi-oracle sparse group lasso estimators based on universal
(respectively, hard and soft) thresholding within each vector.
For larger $\gamma$, universal thresholding becomes ``over-conservative''.
The negative effect of its conservativeness is
much stronger for soft than for hard thresholding (see comments below).
The fully oracle sparse group lasso estimator strongly outperforms its
semi-oracle counterpart, especially for $\gamma=9\;(\tau=3)$ and
$\gamma=25\;(\tau=5)$, also indicating
that universal thresholding is far from optimal for the sparse group lasso, especially
for moderate and large $\gamma$ (see also our previous comments on the
optimal choice of $\lambda_2$).
On the other hand, the geometric sparse group MAP estimator, corresponding
to a nonlinear $2k\ln(n/k)$-type penalty (see Section \ref{sec:bayes}),
provides good results for all $\gamma$, in nice agreement with the theoretical results
of Section \ref{sec:main}. Moreover, for $\gamma=9$ and $\gamma=25$,
it outperforms even the fully oracle sparse group lasso estimator, which
was essentially intended as a benchmark rather than a fair competitor.
This indicates that the sparse group lasso faces more general problems.
In fact, this may be not so
surprising, since the soft thresholding inherent in the sparse group
lasso is well known to be superior to the hard ``keep-or-kill'' thresholding
used in sparse group MAP estimation for
small coefficients, but worse for large ones due to the additional
shrinkage. Moreover, the sparse group lasso essentially involves a {\em double}
amount of
shrinkage -- both within vectors and of each entire vector as a whole (see
\fr{eq:sparselasso}).
It thus causes
unnecessary extra bias growing with $\gamma$ that outweighs the benefits of variance reduction.
A similar phenomenon appears for the na\"ive elastic net estimator (Zou \& Hastie,
2005).
\section{Concluding remarks} \label{sec:remarks}
In this paper we considered estimation of a sparse group of sparse
normal mean vectors. The proposed approach is based on penalized
likelihood estimation with complexity penalties on both between- and
within-sparsity and can be performed by a computationally fast
algorithm. The resulting estimators naturally arise within Bayesian
framework and can be viewed as MAP estimators corresponding to the
priors on the number of nonzero mean vectors and the numbers of their
nonzero components. Such a Bayesian perspective provides a natural tool
for obtaining a wide class of penalized likelihood estimators with various
complexity penalties.
We established the adaptive minimaxity of sparse group MAP estimators
to the unknown degree of between- and
within-sparsity over a wide range of sparse and dense settings.
The short simulation study demonstrates the efficiency of
the proposed approach, which outperforms the recently
proposed sparse group lasso estimator.
\medskip
{\bf Acknowledgments}. Both authors were supported by the Israel
Science Foundation grant ISF-248/08. We are grateful to Ofir Harari for
his assistance in running simulation examples and Saharon Rosset for
fruitful discussions.
\section*{Appendix}
Throughout the proofs we use $C$ to denote a generic positive constant, not
necessarily the same each time it is used, even within a single equation.
Similarly, $C(\gamma)$ is a generic positive constant depending on $\gamma$.
\subsection*{Proof of Theorem \ref{th:upper}}
As we have mentioned in Section \ref{sec:bayes}, the sparse group MAP estimator
can be viewed as a penalized likelihood estimator \fr{eq:est} with the
complexity penalties \fr{eq:penaltyj} and \fr{eq:penalty0}. We first
re-write it in a somewhat different form that will allow us then to apply
the general results of Birg\'e \& Massart (2001) for complexity penalized
estimators.
Let ${\bf y}=(y_{11},...,y_{n1},...,y_{1m},...,y_{nm})'$ be an
amalgamated $n \times m$ vector of data. Similarly,
$\mbox{\boldmath $\mu$}=(\mu_{11},...,\mu_{n1},...,\mu_{1m},...,\mu_{nm})'$,
$\mbox{\boldmath $\epsilon$}=(\epsilon_{11},...,\epsilon_{n1},...,\epsilon_{1m},...,\epsilon_{nm})'$
and the original model \fr{eq:model} can be re-written now as \begin{equation}
\label{eq:model1} y_i=\mu_i+\epsilon_i,\;\;\;\epsilon_i
\stackrel{i.i.d.}{\sim} {\cal N}(0,\sigma_n^2),\;i=1,...,nm \end{equation}
Define an indicator vector ${\bf d}$, where $d_i=\mathbb{I}\{\mu_i
\ne 0\},\; i=1,...,nm$. In terms of the model \fr{eq:model1},
$h_j=\sum_{i=n(j-1)+1}^{nj}d_i,\;j=1,...,m$ and $m_0=\#\{j: h_j >
0\}$. For a given ${\bf d}$, define $D_{\bf d}=\sum_{j=1}^m
h_j=\#\{i: d_i=1,\;i=1,...,nm\}$ and
$$
L_{\bf d}=\frac{1}{D_{\bf d}}\left(\sum_{j=1}^m \ln\left(\pi_j^{-1}(h_j)
{n \choose h_j} \right)+\ln\left(\pi_0^{-1}(m_0){m \choose m_0}\right)\right)
$$
for ${\bf d} \not \equiv {\bf 0}$ and $L_{\bf 0}=2\ln \pi_0^{-1}(0)$, where
we formally set $\pi_j(0)=1$.
Then, the sparse group MAP estimator
$\mbox{\boldmath{$\hat \mu$}}=(\hat{\mu}_{11},...,\hat{\mu}_{n1},...,\hat{\mu}_{1m},...,\hat{\mu}_{nm})'$ is the penalized
likelihood estimator of $\mbox{\boldmath $\mu$}$ with the complexity penalty
\begin{eqnarray*}
Pen({\bf d})& = & 2\sigma_n^2(1+1/\gamma)\left(\sum_{j=1}^m
\ln\left(\pi_j^{-1}(h_j) {n \choose h_j}
(1+\gamma)^{\frac{h_j}{2}}\right)+
\ln\left(\pi_0^{-1}(m_0){m \choose m_0}\right)\right) \\
& = & \sigma_n^2(1+1/\gamma)D_{\bf d} \left(2L_{\bf
d}+\ln(1+\gamma)\right)
\end{eqnarray*}
for ${\bf d} \not \equiv {\bf 0}$ and $Pen({\bf
0})=\sigma_n^2(1+1/\gamma)L_{\bf 0}$.
One can verify that
$$
\sum_{{\bf d} \not \equiv {\bf 0}}e^{-D_{\bf d} L_{\bf d}}=\sum_{k=1}^m
\pi_0(k)=1-\pi_0(0)
$$
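Indeed, the identity follows from a short counting sketch (assuming, as in the prior specification of Section \ref{sec:bayes}, that each $\pi_j(\cdot)$ sums to one over $h=1,...,n$): by the definition of $L_{\bf d}$,
$$
e^{-D_{\bf d} L_{\bf d}}=\frac{\pi_0(m_0)}{{m \choose m_0}}
\prod_{j:\,h_j>0}\frac{\pi_j(h_j)}{{n \choose h_j}},
$$
and each configuration of counts $(h_1,...,h_m)$ is realized by exactly
$\prod_{j=1}^m {n \choose h_j}$ indicator vectors ${\bf d}$. Hence,
$$
\sum_{{\bf d} \not \equiv {\bf 0}}e^{-D_{\bf d} L_{\bf d}}=
\sum_{m_0=1}^m \frac{\pi_0(m_0)}{{m \choose m_0}}
\sum_{{\cal J}:\,|{\cal J}|=m_0}\;\prod_{j \in {\cal J}}\sum_{h_j=1}^n \pi_j(h_j)
=\sum_{m_0=1}^m \pi_0(m_0)=1-\pi_0(0).
$$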
A straightforward calculation (see the proof of Theorem 1 of
Abramovich, Grinshtein \& Pensky, 2007, for more details) also implies
that, for any ${\bf d}$, under Assumption (P),
$$
(1+1/\gamma)(2L_{\bf d}+\ln(1+\gamma)) \geq C(\gamma) (1+\sqrt{2 L_{\bf d}})^2,
$$
where $C(\gamma)>1$.
One can then apply Theorem 2 of Birg\'e \& Massart (2001) to get
\begin{eqnarray}
\sum_{j=1}^m E||\mbox{\boldmath{$\hat \mu$}}_j-\mbox{\boldmath $\mu$}_j||^2_2 & \leq & c_1(\gamma) \min_{{\cal J}_0 \subseteq \{1,...,m\}} \left\{
\sum_{j \in {\cal J}^c_0}
\min_{1 \leq h_j \leq n}
\left(\sum_{i=h_j+1}^n \mu^2_{(i)j}+Pen_j(h_j)\right) \right. \nonumber \\
& + & \left. \sum_{j \in {\cal J}_0}\sum_{i=1}^n \mu^2_{ij}+
Pen_0(m_0)\right\} +c_2(\gamma) \sigma_n^2 (1-\pi_0(0))
\label{eq:upper1}
\end{eqnarray}
\newline $\Box$
\subsection*{Proof of Theorem \ref{th:upperth}}
One can easily check from Table \ref{tab:rates} that, for
$\eta_{jn} \geq n^{-1/\min(p_j,2)}\sqrt{\ln n}$ when $p_j>0$,
the last term $c_2(\gamma)\sigma_n^2(1-\pi_0(0))$ in the RHS of \fr{eq:upper}
is of order
$O(\sigma_n^2)=o(R(\Theta_j[\eta_{jn}]))$ for all nonzero $\mbox{\boldmath $\mu$}_j$ and all
$p_j \geq 0$.
Let ${\cal J}_0^{c*}$ be the true (unknown) subset of nonzero $\mbox{\boldmath $\mu$}$'s and
$m^*_0=|{\cal J}_0^{c*}|$.
\medskip
\noindent
I. $m_0^* \leq \lfloor m/e \rfloor$.
\newline
Apply Theorem \ref{th:upper} for ${\cal J}_0={\cal J}_0^*$:
\begin{eqnarray*}
\sum_{j=1}^m E||\mbox{\boldmath{$\hat \mu$}}_j-\mbox{\boldmath $\mu$}_j||^2_2 & \leq &
c_1(\gamma)\left\{\sum_{j \in {\cal J}_0^{c*}} \min_{1 \leq h_j \leq
n} \left(\sum_{i=h_j+1}^n \mu_{(i)j}^2
+2\sigma_n^2(1+1/\gamma)\ln\left(\pi^{-1}_j(h_j){n \choose
h_j}(1+\gamma)^{\frac{h_j}{2}}\right)\right) \right. \\
& + & \left. 2\sigma_n^2(1+1/\gamma)\ln\left(\pi_0^{-1}(m_0){m
\choose m_0}\right)\right\} +c_2(\gamma)\sigma_n^2(1-\pi_0(0))
\end{eqnarray*}
Since for $m_0=1,...,\lfloor m/e \rfloor$, ${m \choose m_0} \leq
(m/m_0)^{2m_0}$ (see Lemma A1 of Abramovich {\em et al.}, 2010), the
required conditions on $\pi_0(\cdot)$ ensure that
$$
2\sigma_n^2(1+1/\gamma)\ln\left(\pi_0^{-1}(m_0){m \choose
m_0}\right) \leq C(\gamma) \sigma_n^2 m_0\ln(m/m_0)
$$
To complete the proof for this case we consider now separately
\begin{equation}
\label{eq:minhj} \min_{1 \leq h_j \leq n} \left(\sum_{i=h_j+1}^n
\mu_{(i)j}^2 +2\sigma_n^2(1+1/\gamma)\ln\left(\pi^{-1}_j(h_j){n
\choose h_j}(1+\gamma)^{\frac{h_j}{2}}\right)\right)
\end{equation}
for each $j \in {\cal J}_0^{c*}$ and show that it is
$O(R(\Theta_j[\eta_{jn}]))$ (see Table \ref{tab:rates}). We
distinguish between several cases, where the proofs for strong
$l_p$-balls will follow immediately from the proofs for the
corresponding weak $l_p$-balls due to the embedding properties
mentioned in Section \ref{sec:main}.
\medskip
\noindent {\em Case 1}: $\mbox{\boldmath $\mu$}_j \in \Theta_j[\eta_{jn}]$, where $\eta_{jn} >
e^{-c(\gamma)}$ for $p_j=0$ and $\eta^{p_j}_{jn} > e^{-c(\gamma)}$
for $p_j>0$. Taking $h^*_j=n$, the condition on $\pi_j(n)$
implies that \fr{eq:minhj} is $O(\sigma_n^2n)=O(R(\Theta_j[\eta_{jn}]))$.
\medskip
\noindent {\em Case 2}: $\mbox{\boldmath $\mu$}_j \in l_0[\eta_{jn}],\; \eta_{jn} \leq
e^{-c(\gamma)}$. Note that since $\mbox{\boldmath $\mu$}_j \not \equiv {\bf 0}$,
$\eta_{jn} \geq n^{-1}$. Choose $h^*_j=n\eta_{jn}$ and repeat the
arguments of the proof of Theorem 3 of Abramovich, Grinshtein \&
Pensky (2007), using the slightly more general Lemma A1 of Abramovich
{\em et al.} (2010) for approximating the binomial coefficient in
\fr{eq:minhj} instead of their original Lemma A.1.
\medskip
\noindent {\em Case 3}: $\mbox{\boldmath $\mu$}_j \in m_{p_j}[\eta_{jn}],\; 0 < p_j < 2,\;
n^{-1}(\ln n)^{p_j/2} \leq \eta^{p_j}_{jn} \leq e^{-c(\gamma)}$. Take
$1 \leq h^*_j=n\eta^{p_j}_{jn}(\ln \eta^{-p_j}_{jn})^{-p_j/2} \leq ne^{-c(\gamma)}$ and follow
the proof of Theorem 4 of Abramovich, Grinshtein \& Pensky (2007)
with a more general version of Lemma A1 (see Case 2).
\medskip
\noindent {\em Case 4}: $\mbox{\boldmath $\mu$}_j \in m_{p_j}[\eta_{jn}],\; p_j \geq 2,\;
n^{-p_j/2}(\ln n)^{p_j/2} \leq \eta^{p_j}_{jn} \leq e^{-c(\gamma)}$.
Take
$h^*_j=1$. Then, for $p_j>2$
$$
\sum_{i=h^*_j+1}^n \mu^2_{(i)j} < \sigma_n^2 n^{2/p_j} \eta^2_{jn}
\int_1^nx^{-2/p_j}dx < \frac{p_j}{p_j-2} \sigma_n^2 n^{2/p_j}
\eta_{jn}^2 n^{1-2/p_j}=O(\sigma_n^2 n \eta^2_{jn})
$$
and, similarly, for $p_j=2$
$$
\sum_{i=h^*_j+1}^n \mu^2_{(i)j} < \sigma_n^2 n \eta^2_{jn}
\int_1^nx^{-1}dx = \sigma_n^2 n \eta^2_{jn} \ln n
$$
On the other hand, under the conditions on $\pi_j(\cdot)$, $\pi_j(1)
\geq n^{-c_1}$, which yields
$$
2\sigma_n^2(1+1/\gamma)\ln\left(\pi^{-1}_j(1)n
\sqrt{1+\gamma}\right)=O(\sigma_n^2 \ln n) = O(\sigma_n^2 n
\eta^2_{jn})
$$
for $\eta_{jn} \geq \sqrt{n^{-1}\ln n}$.
\medskip
\noindent
II. $\lfloor m/e\rfloor < m^*_0 \leq m$.
\newline
Apply Theorem
\ref{th:upper} for ${\cal J}^c_0 = \{1,...,m\}$ (or, equivalently, ${\cal J}_0 =
\emptyset$) and $h_j=1$ for $j \in {\cal J}_0^*$~:
\begin{eqnarray}
\sum_{j=1}^m E||\mbox{\boldmath{$\hat \mu$}}_j-\mbox{\boldmath $\mu$}_j||^2_2 & \leq & c_1(\gamma)\left\{\sum_{j \in
{\cal J}_0^{c*}}\min_{1 \leq h_j \leq n}
\left(\sum_{i=h_j+1}^n \mu^2_{(i)j}+Pen_j(h_j)\right) +
\sum_{j \in {\cal J}_0^*}Pen_j(1)+Pen_0(m) \right\} \nonumber \\
& + & c_2(\gamma)\sigma_n^2(1-\pi_0(0)),
\label{eq:m0m}
\end{eqnarray}
where the conditions on $\pi_j(1)$ and
$\pi_0(m)$ imply $\sum_{j \in {\cal J}_0^*}Pen_j(1)= O(\sigma_n^2 m
\ln n)$ and $Pen_0(m)=O(\sigma_n^2 m)$. From Table \ref{tab:rates} one can
verify that for all dense and sparse cases,
$\sigma^2_n \ln n=O(R(\Theta_j [\eta_{jn}])),\;j \in {\cal J}_0^{c*},$
and, therefore,
the first term $\sum_{j \in {\cal J}_0^{c*}}$ in the RHS of \fr{eq:m0m}
is dominating for $m^*_0 \sim m$.
\newline $\Box$
\subsection*{Proof of Theorems \ref{th:lower1}-\ref{th:lower2}}
The ideas of the proofs of both theorems on the minimax lower bounds are similar
and can be combined.
Note first that no estimator can perform better than an oracle
that knows the true ${\cal J}_0$. In this (ideal) case one would obviously set
$\mbox{\boldmath{$\hat \mu$}}_j \equiv {\bf 0}$ for all $j \in {\cal J}_0$ with zero risk and, therefore,
due to the additivity of the risk function,
$$
\inf_{\mbox{\boldmath{$\tilde \mu$}}_1,...,\mbox{\boldmath{$\tilde \mu$}}_m}
\sup_{\mbox{\boldmath $\mu$}_j \in \Theta_j[\eta_{jn}],j \in {\cal J}^c_0}\sum_{j=1}^m E||\mbox{\boldmath{$\tilde \mu$}}_j-\mbox{\boldmath $\mu$}_j||^2_2
\geq C \sum_{j \in {\cal J}^c_0} R(\Theta_j[\eta_{jn}])
$$
for any $\Theta_j[\eta_{jn}]$ (see, e.g., Johnstone, 2011, Proposition 4.14).
Furthermore, following Case II in the proof of Theorem \ref{th:upperth},
$\sum_{j \in {\cal J}^c_0} R(\Theta_j[\eta_{jn}])$ dominates over
$\sigma_n^2 m_0\ln(m/m_0)$ in \fr{eq:lower1} and \fr{eq:lower2} for $m_0>m/2$.
To complete the proof we need to show, therefore, that for $m_0 \leq m/2$,
the minimal unavoidable
price for not being an oracle for selecting nonzero $\mbox{\boldmath $\mu$}_j$'s is of
order $\sigma_n^2 m_0\ln(m/m_0)$.
The main idea of the proof is
to find a subset ${\cal M}_{m_0}$ of $n \times m$ vectors
$\mbox{\boldmath $\mu$}=(\mu_{11},...,\mu_{n1},...,\mu_{1m},...,\mu_{nm})'$ with
$m_0$ nonzero $\mbox{\boldmath $\mu$}_j=(\mu_{1j},...,\mu_{nj})' \in
\Theta_j[\eta_{jn}]$ such that for any pair $\mbox{\boldmath $\mu$}^1,\mbox{\boldmath $\mu$}^2 \in {\cal
M}_{m_0}$ and some $C>0$, $||\mbox{\boldmath $\mu$}^1-\mbox{\boldmath $\mu$}^2||^2_2 \geq
C\sigma_n^2 m_0\ln(m/m_0)$, while the Kullback-Leibler
divergence $K(\mathbb{P}_{\mbox{\boldmath $\mu$}^1},\mathbb{P}_{\mbox{\boldmath $\mu$}^2})=
||\mbox{\boldmath $\mu$}^1-\mbox{\boldmath $\mu$}^2||^2_2/(2\sigma_n^2) \leq (1/16)\ln{\rm card}({\cal
M}_{m_0})$. The result will then follow immediately from Lemma A.1
of Bunea, Tsybakov \& Wegkamp (2007).
Define the subset ${\cal \tilde{D}}_{m_0}$ of all $m$-dimensional indicator
vectors with $m_0$ entries of ones, that is
${\cal \tilde{D}}_{m_0}=\{{\bf d}: {\bf d} \in \{0,1\}^m,\; ||{\bf d}||_0=m_0\}$.
By Lemma A.3 of Rigollet \& Tsybakov (2011), for $m_0 \leq m/2$ there exists a subset
${\cal D}_{m_0} \subset {\cal \tilde{D}}_{m_0}$ such that for some constant
$\tilde{c}>0$, $\ln {\rm card}({\cal D}_{m_0}) \geq \tilde{c}m_0\ln(m/m_0)$,
and for any pair ${\bf d}_1,{\bf d}_2 \in {\cal D}_{m_0}$, the Hamming distance
$\rho({\bf d}_1,{\bf d}_2)=\sum_{j=1}^m \mathbb{I}\{{\bf d}_{1j} \neq {\bf d}_{2j}\} \geq \tilde{c}
m_0$.
To any indicator vector ${\bf d} \in {\cal D}_{m_0}$ assign the
corresponding mean vector $\mbox{\boldmath $\mu$} \in {\cal M}_{m_0}$ as follows. Let
$\tilde{C}^2=(1/16)\sigma_n^2\tilde{c} \ln(m/m_0)$. Define
$\mbox{\boldmath $\mu$}_j=(\tilde{C},0,...,0)'\mathbb{I}\{d_j=1\}$ for $0 \leq p_j < 2$
and
$\mbox{\boldmath $\mu$}_j=(\tilde{C}n^{-1/2},\tilde{C}n^{-1/2},...,\tilde{C}n^{-1/2})'\mathbb{I}\{d_j=1\}
$ for $p_j \geq 2,\;j=1,...,m$. Hence, ${\rm card}({\cal M}_{m_0})={\rm
card}({\cal D}_{m_0})$. Obviously, the resulting $\mbox{\boldmath $\mu$}_j \in
l_0[\eta_{jn}]$ and a straightforward calculation shows that under the
additional constraint on $\eta_{jn}$ and $m_0$ in Theorem
\ref{th:lower2}, $\mbox{\boldmath $\mu$}_j \in l_{p_j}[\eta_{jn}]$.
For any $\mbox{\boldmath $\mu$}^1, \mbox{\boldmath $\mu$}^2 \in {\cal M}_{m_0}$ and the corresponding
${\bf d}_1, {\bf d}_2 \in {\cal D}_{m_0}$, we then have
$$
||\mbox{\boldmath $\mu$}^1-\mbox{\boldmath $\mu$}^2||^2_2 = \tilde{C}^2 \sum_{j=1}^m
\mathbb{I}\{{\bf d}_{1j}\neq{\bf d}_{2j}\} \geq \tilde{C}^2 \; \tilde{c}\;
m_0 = (1/16)\sigma_n^2 \tilde{c}^2 m_0\ln(m/m_0)
$$
and
$$
K(\mathbb{P}_{\mbox{\boldmath $\mu$}^1},\mathbb{P}_{\mbox{\boldmath $\mu$}^2}) =
\frac{\tilde{C}^2}{2\sigma_n^2} \sum_{j=1}^m
\mathbb{I}\{{\bf d}_{1j}\neq{\bf d}_{2j}\} \leq \frac{\tilde{C}^2
m_0}{\sigma_n^2} \leq (1/16)\ln{\rm card}({\cal M}_{m_0})
$$
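The middle inequality above can be spelled out (a short sketch based only on the construction of ${\cal M}_{m_0}$): since ${\bf d}_1$ and ${\bf d}_2$ both have exactly $m_0$ nonzero entries, $\sum_{j=1}^m \mathbb{I}\{{\bf d}_{1j}\neq{\bf d}_{2j}\} \leq 2m_0$, while the choice of $\tilde{C}$ and the bound $\ln {\rm card}({\cal D}_{m_0}) \geq \tilde{c}m_0\ln(m/m_0)$ yield
$$
\frac{\tilde{C}^2 m_0}{\sigma_n^2}=\frac{1}{16}\,\tilde{c}\,m_0\ln(m/m_0)
\leq \frac{1}{16}\ln {\rm card}({\cal D}_{m_0})=\frac{1}{16}\ln{\rm card}({\cal M}_{m_0}).
$$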
$\Box$
\section{Introduction}
The field of composition, non-associative algebras, and related Lie
algebras, underwent a series of interesting developments in recent times.
In \cite{D-1} Deligne proposed dimension formulas for the exceptional series
of complex simple Lie algebras, whose parametrization in terms of the dual
Coxeter number was exploited further in \cite{CD-1} by Cohen and de Man
(see also \cite{D-2}). Landsberg and Manivel subsequently pointed out the relation between the dimension formulas and the dimensions of the composition algebras themselves in \cite{LM-1}. In
\cite{D-1,CD-1} it was observed that all parameter values determining
integer outputs in the dimension formulas were already accounted for by the
known normed division algebras, with essentially one exception, intriguingly
corresponding to a would-be composition algebra of dimension six, sitting
between the quaternions and octonions.
This algebra, whose elements were named \textit{sextonions}, was recently studied by Westbury in
\cite{W-1}, who pointed out the related existence of a whole new row in the
Freudenthal Magic Square. Actually, the six-dimensional algebra of
sextonions had been observed earlier as a curiosity; indeed, it was
explicitly constructed in \cite{Kle}. Moreover, it was used in \cite{Jeu} to
study the conjugacy classes in the smallest exceptional Lie algebra $\mathbf{g}_{2}$ in characteristics other than $2$ or $3$. The sextonions
were also constructed in \cite{Rac} (\textit{cf.} Th. 5 therein), and
proved to be a maximal subalgebra of the split octonions.
In \cite{LM-2}, Landsberg and Manivel ``filled in the hole''
in the exceptional series of Lie algebras, observed by Cvitanovic, Deligne,
Cohen and de Man, showing that sextonions, through the \textit{triality}
construction of \cite{LM-1}, give rise to a non-simple \textit{intermediate}
exceptional Lie algebra, named $\mathbf{e_{7 \frac12}}$, between $\mathbf{e_7}$ and $\mathbf{e_8}$, satisfying some of the
decomposition and dimension formulas of the exceptional simple Lie algebras
\cite{D-1,CD-1,D-2,LM-1,LM-3}.
More recently, such a $190$-dimensional Lie algebra $\mathbf{e_{7 \frac12}}$ was also found by Mkrtchyan in the study of the Vogel plane \cite{MK-1}%
, in the context of the analysis of the \textit{universal} Vogel Lie algebra
\cite{V-1}.\medskip
By the Hurwitz Theorem \cite{H-1}, the real normed division algebras are the
real numbers $\mathbb{R}$, the complex numbers $\mathbb{C}$, the quaternions
$\mathbb{H}$ and the octonions $\mathfrak{C}$ (Cayley numbers). Each
algebra can be constructed from the previous one by the so-called \textit{%
Cayley-Dickson doubling procedure }\cite{Di-1,Sch-1}.
All these algebras can be complexified to give complex algebras. These
complex algebras respectively are
$\mathbb{R}\otimes \mathbb{C}$ $=$ $\mathbb{C}$, $\mathbb{C}\otimes \mathbb{C}$ $=$ $\mathbb{C}\oplus \mathbb{C}$, $\mathbb{H}\otimes \mathbb{C}$ $=$ $M_{2}(\mathbb{C})$ and $\mathfrak{C}\otimes \mathbb{C}$ (with $M_{2}$ denoting the algebra of $2\times 2$ matrices). The three complex algebras other than $\mathbb{C}$ have
a second real form, denoted $\mathbb{C}_{s}$, $\mathbb{H}_{s}$ and $%
\mathfrak{C}_{s}$, with the following isomorphisms holding: $\mathbb{C}_{s}=%
\mathbb{R}\oplus \mathbb{R}$ and $\mathbb{H}_{s}=$ $M_{2}(\mathbb{R})$. The
normed division algebras are called the \textit{compact} forms and the
aforementioned second real form is called the \textit{split} real form. It
is worth pointing out that split real forms are composition algebras but they are
not division algebras.
On the field $\mathbb{R}$, the sextonions only exist in split form $\mathbb{S%
}_{s}$, and they are intermediate between
the split quaternions $\mathbb{H}_{s}$ and the split octonions $\mathfrak{C}%
_{s}$:%
\begin{equation}
\mathbb{H}_{s}\subset \mathbb{S}_{s}\subset \mathfrak{C}_{s}.
\end{equation}
Note that $\mathbb{S}_{s}$ does not contain the divisional quaternions $%
\mathbb{H}$; see App. A.
In the present paper we will apply the formal machinery introduced in \cite{T-1} and \cite{T-2}, as well as an explicit realization of the sextonions (over the algebraically closed field $\mathbb{C}$), in order to explicitly construct the
non-semisimple Lie algebra $\mathbf{e_{7 \frac12}}$, as well as all algebras
occurring in the sextonionic row of the \textit{extended} Freudenthal Magic
Square \cite{W-1,LM-2}, in terms of Jordan pairs.
\bigskip
The plan of the paper is as follows.
In Sec. \ref{sec:octonions} we provide a realization of the sextonions in
terms of nilpotents constructed from the traceless octonions. Then, in Sec. %
\ref{zzorn}, we represent them with suitably constrained Zorn matrices.
The intermediate exceptional algebra $\mathbf{e_{7 \frac12}}$ is then
considered in Sec. \ref{jazz}, which focuses on the construction (then
developed in Secs. \ref{Blues1} and \ref{Blues2}) of the sextonionic row and
column of the extended Magic Square, by exploiting Jordan pairs for the
sextonionic rank-3 Jordan algebra.
The action of $\mathbf{g}_{2}=Der(\mathfrak{C})$ on the Zorn matrices is
recalled in Sec. \ref{g2d}, and exploited in Sec. \ref{sec:d(s)} to
determine the derivations of the sextonions, $Der(\mathbb{S})$.
Then, in Secs. \ref{Blues1} and \ref{Blues2} the explicit construction of
the intermediate algebras $\mathbf{c}_{3\frac{1}{2}}$ (which analogously
holds for $\mathbf{a}_{5\frac{1}{2}}$ and $\mathbf{d}_{6\frac{1}{2}}$) and $%
\mathbf{e_{7 \frac12}}$ is presented.
The paper is concluded by App. A, in which we prove that, on the field $%
\mathbb{R}$, $\mathbb{S}_{s}$ does not contain the divisional quaternions $%
\mathbb{H}$.
\section{Sextonions and their Nilpotent Realization}\label{sec:octonions}
The algebra of sextonions is a six-dimensional subalgebra $\mathbb{S}$ of the octonions. As mentioned above, we denote by
$\mathfrak{C}$ the algebra of the octonions over the complex field $\mathbb{C}$, whose multiplication rule is given by the Fano diagram in Figure \ref{fig:fano}.
\begin{figure}
\begin{center}
\includegraphics{Octonions.jpg}
\caption{Fano diagram for the octonions' products}\label{fig:fano}
\end{center}
\end{figure}
If $a \in \mathfrak{C}$ we write $a = a_0 + \sum_{j=1}^7{a_j u_j}$, where $a_j \in \mathbb{C}$ for $j = 1, \dots , 7$ and the $u_j$, $j = 1, \dots , 7$, denote the octonion imaginary units. We denote by $i$ the imaginary unit in $\mathbb{C}$.
We introduce two idempotent elements:
$$\rho^\pm = \frac{1}{2}(1 \pm i u_7) $$
and six nilpotent elements:
$$\varepsilon_k^{\pm} = \rho^\pm u_k \quad , \qquad k = 1,2,3$$
One can readily check that \cite{T-1}:
\begin{equation}
\begin{array}{ll}
& (\rho^\pm)^2 = \rho^\pm \quad , \quad \rho^\pm \rho^\mp = 0 \\ \\
& \rho^\pm \varepsilon_k^{\pm} = \varepsilon_k^{\pm} \rho^\mp = \varepsilon_k^{\pm} \\ \\
& \rho^\mp \varepsilon_k^{\pm} = \varepsilon_k^{\pm} \rho^\pm = 0 \\ \\
& (\varepsilon_k^{\pm})^2 = 0 \\ \\
& \varepsilon_k^{\pm} \varepsilon_{k+1}^\pm = - \varepsilon_{k+1}^\pm \varepsilon_k^{\pm} = \varepsilon_{k+2}^\mp \qquad \text{(indices modulo 3)} \\ \\
& \varepsilon_j^\pm \varepsilon_k^{\mp} = 0 \qquad j \ne k \\ \\
& \varepsilon_k^{\pm} \varepsilon_k^{\mp} = - \rho^\pm
\end{array}\end{equation}
We can write $a \in \textbf{\large $\mathfrak C$}$ as $a = \alpha_0^+ \rho^+ +\alpha_0^- \rho^- + \alpha_k^+
\varepsilon_k^+ +\alpha_k^- \varepsilon_k^-$.
The subalgebra $\mathbb{S} \subset \mathfrak{C}$ generated by $\rho^\pm , \varepsilon_1^\pm , \varepsilon_2^+ , \varepsilon_3^-$ (namely, $a \in \mathbb{S}$ \textit{iff} $\alpha_2^- = \alpha_3^+ = 0$) provides an explicit realization of the sextonions. The existence of
the non-divisional sextonionic elements can be easily understood. Indeed, in order to
construct divisional sextonions, one would need to combine a nilpotent with
its complex conjugate; but, in the above construction, this is possible
neither for $\varepsilon _{2}^{+}$ nor for $\varepsilon _{3}^{-}$.
\section{Zorn Matrix Representation}\label{zzorn}
Octonions can be represented by Zorn matrices \cite{zorn}.
If $a \in \mathfrak{C}$ and $A^\pm \in \mathbb{C}^3$ is the vector with complex components $\alpha_k^{\pm}$, $k=1,2,3$ (we use the
standard summation convention over repeated indices throughout), then we have the identification:
\begin{equation}
a = \alpha_0^+ \rho^+ +\alpha_0^- \rho^- + \alpha_k^+
\varepsilon_k^+ +\alpha_k^- \varepsilon_k^- \longleftrightarrow
\left[
\begin{array}{cc}
\alpha_0^+ & A^+ \\
A^- & \alpha_0^-
\end{array} \right],
\label{oct} \end{equation}
and the product of $a, b \in \mathfrak{C}$ corresponds to
\begin{equation}
\begin{array}{c}
\left [ \begin{array}{cc} \alpha^+ & A^+ \\ A^- & \alpha^-
\end{array}\right]
\left [ \begin{array}{cc} \beta^+ & B^+ \\ B^- & \beta^-
\end{array}\right] \\ =
\left [\begin{array}{cc} \alpha^+ \beta^+ + A^+ \cdot B^- &
\alpha^+ B^+ + \beta^- A^+ + A^- \wedge B^- \\
\alpha^- B^- + \beta^+ A^- + A^+ \wedge B^+ & \alpha^- \beta^- +
A^- \cdot B^+ \end{array} \right],
\label{zorn1}
\end{array}
\end{equation}
where $A^\pm \cdot B^\mp = - \alpha_k^\pm \beta_k^\mp$ and $A
\wedge B$ is the standard vector product of $A$ and $B$.
Thus, from Sec. \ref{sec:octonions} we can represent the sextonions as a Zorn matrix with the same Zorn product, as long as $A^+$ and $A^-$ are $\mathbb{C}^3$-vectors of the type
$$A^+ = (a^+,c^+,0) \qquad \text{and} \qquad A^- = (a^-,0,c^-)$$
Notice that $A^+$ and $A^-$ lie on orthogonal $\mathbb{C}^3$-planes sharing the line along the first component.
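The product rule \fr{zorn1} and the closure of the sextonionic subspace can also be cross-checked numerically. The following sketch is our illustration (the tuple encoding $(\alpha^+, A^+, A^-, \alpha^-)$ and the helper names are ours, not part of the construction); it verifies, e.g., $\varepsilon_1^+\varepsilon_1^-=-\rho^+$ and $\varepsilon_1^+\varepsilon_2^+=\varepsilon_3^-$:

```python
import numpy as np

def zorn_mult(x, y):
    # Zorn product: an element is encoded as (alpha_plus, A_plus, A_minus, alpha_minus),
    # with A_plus, A_minus in C^3; the dot product carries the minus sign of the text,
    # i.e. A+ . B- = -sum_k alpha_k^+ beta_k^-.
    ap, Ap, Am, am = x
    bp, Bp, Bm, bm = y
    return (ap * bp - np.dot(Ap, Bm),
            ap * Bp + bm * Ap + np.cross(Am, Bm),
            am * Bm + bp * Am + np.cross(Ap, Bp),
            am * bm - np.dot(Am, Bp))

zero3 = np.zeros(3)
rho_p = (1.0, zero3, zero3, 0.0)                                 # rho^+
eps_p = [(0.0, np.eye(3)[k], zero3, 0.0) for k in range(3)]      # eps_{k+1}^+
eps_m = [(0.0, zero3, np.eye(3)[k], 0.0) for k in range(3)]      # eps_{k+1}^-

print(zorn_mult(eps_p[0], eps_m[0])[0])    # -1.0, i.e. eps_1^+ eps_1^- = -rho^+
print(zorn_mult(eps_p[0], eps_p[1])[2])    # [0. 0. 1.], i.e. eps_1^+ eps_2^+ = eps_3^-

# closure of the sextonions: A^+ = (a, c, 0), A^- = (a', 0, c') is preserved
rng = np.random.default_rng(0)
for _ in range(100):
    u, v = rng.standard_normal(8), rng.standard_normal(8)
    x = (u[0], np.array([u[1], u[2], 0.0]), np.array([u[3], 0.0, u[4]]), u[5])
    y = (v[0], np.array([v[1], v[2], 0.0]), np.array([v[3], 0.0, v[4]]), v[5])
    _, Cp, Cm, _ = zorn_mult(x, y)
    assert abs(Cp[2]) < 1e-12 and abs(Cm[1]) < 1e-12
```

The assertions confirm, in this representation, the algebra relations of Sec. \ref{sec:octonions} and the sextonionic constraint on $A^\pm$.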
\section{$\mathbf{e_{7 \frac12}}$}\label{jazz}
In recent papers \cite{T-1, T-2} a unifying view of all exceptional Lie algebras in terms of $\mathbf{a_2}$ subalgebras and {\it Jordan Pairs} has been presented, and a {\it Zorn-matrix-like} representation of these algebras has been introduced.
The root diagram related to this view is shown in Figure \ref{fig:rootdiagram}, where the roots of the
exceptional Lie algebras are projected on a complex $\mathbf{su(3)}=\mathbf{a}_{2}$ plane,
recognizable by the dots forming the external hexagon, and it exhibits the
\textit{Jordan pair} content of each exceptional Lie algebra. There are
three Jordan pairs $(\mathbf{J_3^n},\mathbf{\overline J_3^{\raisebox{-2 pt}{\scriptsize \textbf n}}})$, each of which lies on an axis
symmetrically with respect to the center of the diagram. Each pair doubles
a simple Jordan algebra of rank $3$, $\mathbf{J_3^n}$, with involution, pairing it with its
conjugate representation $\mathbf{\overline J_3^{\raisebox{-2 pt}{\scriptsize \textbf n}}}$. Here $\mathbf{J_3^n}$ is the algebra of $3\times 3$
Hermitian matrices over $\mathbb{A}$, where $\mathbb{A}=\mathbb{R},\mathbb{C},\mathbb{H},\mathfrak{C}$ for $\mathbf{n}=$dim$_{\mathbb{R}}\mathbb{A}=1,2,4,8$, respectively, denotes the real, complex, quaternion or octonion algebra, \textit{i.e.} one of the four normed division algebras of Hurwitz's Theorem; see
\textit{e.g.} \cite{McCrimmon}.
Exceptional Lie algebras $\mathbf{f_4}$, $\mathbf{e_6}$, $\mathbf{e_7}$, $\mathbf{e_8}$ are obtained for $\mathbf{n}%
=1,2,4,8$, respectively. $\mathbf{g_2}$ (corresponding to $\mathbf{n}
=-2/3$) can also be represented in the same way, with
the Jordan algebra reduced to a single element. For further details, \textit{cf.} \cite{T-1}.
We expand that view in this paper to include $\mathbf{e_{7 \frac12}}$ \cite{LM-2}, a Lie subalgebra of $\mathbf{e}_8$ of dimension 190. If we consider the $\mathbf{e}_8$ root diagram (obtained in Figure \ref{fig:rootdiagram} for $\mathbf{n}=8$),
\begin{figure}
\begin{center}
\includegraphics{Fig_1.png}
\caption{A unifying view of the roots of exceptional Lie algebras through \textit{Jordan pairs} \cite{T-1}. For $\mathbf{n}=8$, the root diagram of $\mathbf{e}_{8}$ is obtained.}\label{fig:rootdiagram}
\end{center}
\end{figure}
then the sub-diagram of $\mathbf{e_{7 \frac12}}$ is shown in Figure \ref{fig:rootesm}, (for $\mathbf{n}=8$, as well).
\begin{figure}
\begin{center}
\includegraphics{E7m.png}
\caption{Root diagram of $\mathbf{e_{7 \frac12}}$ (for $\mathbf{n}=8$)}\label{fig:rootesm}
\end{center}
\end{figure}
In general, one can do the same for all algebras in the fourth and third rows of the Magic Square \cite{tits2,freu1}, which we denote by $\textbf{\large ${\mathfrak g}_{IV}$}$ and $\textbf{\large ${\mathfrak g}_{III}$}$, respectively (see Table \ref{ms34}). In this way, the algebras in the intermediate (fourth) row of the extended Magic Square \cite{W-1, LM-2} are explicitly constructed in terms of Jordan pairs.
\begin{table}[h!]
\begin{center}
\begin{tabular}{|c||c|c|c|c|c|c|c|}
\hline
$\mathbf{n}$ & $1$ & $2$ & $4$ & $8$ \\ \hline
\rule[-1mm]{0mm}{6mm} $\textbf{\large ${\mathfrak g}_{III}$}$ & $\mathbf{c_3}$ & $\mathbf{a_5}$ & $\mathbf{d_6}$ & $\mathbf{e_7}$ \\ \hline
\rule[-1mm]{0mm}{6mm} $\textbf{\large ${\mathfrak g}_{IV}$}$ & $\mathbf{f_4}$ & $\mathbf{e_6}$ & $\mathbf{e_7}$ & $\mathbf{e_8}$ \\ \hline
\end{tabular}
\end{center}
\caption{Third and fourth row of the magic square \label{ms34}}
\end{table}
We get a subalgebra of $\textbf{\large ${\mathfrak g}_{IV}$}$, that we denote here by $\textbf{\large ${\mathfrak g}_{III \frac{1}{2}}$}$, given by $\textbf{\large ${\mathfrak g}_{III}$}$ plus a $(6n+8)$-dimensional irreducible representation of $\textbf{\large ${\mathfrak g}_{III}$}$ plus a $\textbf{\large ${\mathfrak g}_{III}$}$-singlet, as shown in Figure \ref{fig:gIIIm} \footnote{There are some variations on the definition of {\it intermediate} algebra \cite{W-1,LM-2}, based on the grading induced by a highest root. Our realisation of $Der(\mathbb{S})$ and $\mathbf{e_{7 \frac12}}$ corresponds to the algebra denoted by $\bf{{\mathfrak g}^{\prime\prime}\mkern-1.2mu}$ in the Introduction of \cite{LM-2}.}.
\begin{figure}
\begin{center}
\includegraphics{gIIIm1.png}
\caption{ Diagram of $\textbf{\large ${\mathfrak g}_{III \frac{1}{2}}$}$}\label{fig:gIIIm}
\end{center}
\end{figure}
In particular, the irreps. of $\textbf{\large ${\mathfrak g}_{III}$}$ are symplectic (\textit{i.e.}, they admit a skew-symmetric invariant form), and they have complex dimension $6n+8=14, 20, 32, 56$ for $n = 1, 2, 4, 8$ respectively; the algebras $\textbf{\large ${\mathfrak g}_{III \frac{1}{2}}$}$ are their corresponding Heisenberg algebras (denoted by $\mathbf{H}$) through such an invariant tensor, \cite{LM-2}, $\mathbf{c_{3\frac12}} = \mathbf{c_{3}{}_\bullet H_{14}}$, $\mathbf{a_{5 \frac12}} = \mathbf{a_{5}{}_\bullet H_{20}}$, $\mathbf{d_{6 \frac12}} = \mathbf{d_{6}{}_\bullet H_{32}}$, $\mathbf{e_{7 \frac12}} = \mathbf{e_{7}{}_\bullet H_{56}}$, of complex dimension $36, 56, 99, 190$.\\
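These dimensions can be cross-checked additively (a simple consistency count, our remark): $\dim \textbf{\large ${\mathfrak g}_{III \frac{1}{2}}$}=\dim \textbf{\large ${\mathfrak g}_{III}$}+(6n+8)+1$, namely
$$
21+14+1=36,\quad 35+20+1=56,\quad 66+32+1=99,\quad 133+56+1=190
$$
for $n=1,2,4,8$, respectively.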
Let us here present a brief account of the \textit{Jordan pairs} for the
sextonions $\mathbb{S}$ by means of suitable embeddings. We start with the
maximal, non-symmetric embedding:
\begin{eqnarray}
\mathbf{e}_{7} &\supset &\mathbf{a}_{2}\oplus \mathbf{a}_{5} \\
\mathbf{133} &=&\left( \mathbf{8},\mathbf{1}\right) +\left( \mathbf{1},%
\mathbf{35}\right) +\left( \mathbf{3},\mathbf{15}\right) +\left( \overline{%
\mathbf{3}},\overline{\mathbf{15}}\right) \\
\mathbf{56} &=&\left( \mathbf{3},\mathbf{6}\right) +\left( \overline{\mathbf{%
3}},\overline{\mathbf{6}}\right) +\left( \mathbf{1},\mathbf{20}\right) ,
\end{eqnarray}%
implying that:%
\begin{equation}
\mathbf{e}_{7}\ltimes \mathbf{56}\supset \left[ \mathbf{a}_{2}\oplus \left(
\mathbf{a}_{5}\ltimes \mathbf{20}\right) \right] \ltimes \left( \mathbf{3},%
\mathbf{15}+\mathbf{6}\right) +\left( \overline{\mathbf{3}},\overline{%
\mathbf{15}}+\overline{\mathbf{6}}\right) .\label{decc}
\end{equation}%
Thus, the \textit{Jordan pairs} for the sextonionic Jordan algebra of rank
3, $\mathbf{J}_{3}^{\mathbf{n}=6}$, are given by $\left( \mathbf{3},\mathbf{%
15}+\mathbf{6}\right) +\left( \overline{\mathbf{3}},\overline{\mathbf{15}}+%
\overline{\mathbf{6}}\right) $ in (\ref{decc}).
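As a dimensional consistency check of \fr{decc} (our remark), the right-hand side counts
$$
\dim \mathbf{a}_{2}+\dim \left( \mathbf{a}_{5}\ltimes \mathbf{20}\right)
+2\cdot 3\cdot \left( 15+6\right) =8+55+126=189=\dim \mathbf{e}_{7}+56,
$$
and adding the singlet (the central element of $\mathbf{H_{56}}$) reproduces $\dim \mathbf{e_{7 \frac12}}=133+56+1=190$.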
In order to reconstruct the extended Magic Square \cite{W-1, LM-2}, one also needs to add the extra column shown in Table \ref{ms6} \cite{LM-2},
where a further algebra $\mathbf{d_{6 \frac12 \frac12}} = \mathbf{d_{6}{}_\bullet H_{32}{}_\bullet H_{44}}$
is introduced.
\begin{table}[h!]
\begin{center}
\begin{tabular}{|c||c|}
\hline $\mathbf{n}$ & $6$ \\ \hline
\rule[-1mm]{0mm}{6mm} $\textbf{\large ${\mathfrak g}_{I}$}$ & $\mathbf{c_{3\frac12}}$ \\ \hline
\rule[-1mm]{0mm}{6mm} $\textbf{\large ${\mathfrak g}_{II}$}$ & $\mathbf{a_{5 \frac12}}$ \\ \hline
\rule[-1mm]{0mm}{6mm} $\textbf{\large ${\mathfrak g}_{III}$}$ & $\mathbf{d_{6 \frac12}}$ \\ \hline
\rule[-1mm]{0mm}{6mm} $\textbf{\large ${\mathfrak g}_{III \frac{1}{2}}$}$ & $\mathbf{d_{6 \frac12 \frac12}}$ \\ \hline
\rule[-1mm]{0mm}{6mm} $\textbf{\large ${\mathfrak g}_{IV}$}$ & $\mathbf{e_{7 \frac12}}$ \\ \hline
\end{tabular}
\end{center}
\caption{Sixth column of the magic square \label{ms6}}
\end{table}
This column corresponds to the Jordan algebra that we denote by
$\mathbf{J_3^6}$
of $3 \times 3$ Hermitian matrices over the sextonions. The \emph{new} element $\mathbf{d_{6}{}_\bullet H_{32}{}_\bullet H_{44}}$ can be easily seen in the diagram of figure \ref{fig:gIIIm} for $n=6$: $\mathbf{g_0^6} = \mathbf{a_{5 \frac12}}$ is the reduced structure algebra of $\mathbf{J_3^6}$,\, $\textbf{\large ${\mathfrak g}_{III}$} = \mathbf{d_{6 \frac12}}$ the super-structure algebra of $\mathbf{J_3^6}$ and finally $\mathbf{d_{6 \frac12 \frac12}} = \mathbf{d_{6 \frac12}} \mathbf{{}_\bullet H_{44}} = \mathbf{d_{6}{}_\bullet H_{32}{}_\bullet H_{44}}$. Notice that the $44$-dimensional representation of $\mathbf{d_{6 \frac12}}$ is made of $\mathbf{J_3^6} \oplus \mathbf{\overline J_3^6} \oplus 2$.
Finally, the algebra $\mathbf{e_{7 \frac12}}$ at the end of the column is viewed as in the diagram of Figure \ref{fig:rootdiagram} for $n = 6$, with $\mathbf{g_0^6} = \mathbf{a_{5 \frac12}}$ and the subalgebra $\mathbf{e_7}$ represented by the same diagram for $n = 4$.
This completes the explicit construction of the relevant rows and columns (pertaining to the sextonions) of the extended Magic Square\footnote{It is once again worth stressing that in the present investigation, as well as in the previous papers \cite{T-1, T-2}, we only consider complex forms of the Lie algebras.} \cite{W-1, LM-2}.
\section{$\mathbf{g_2}$ action on Zorn matrices}\label{g2d}
In our previous paper \cite{T-2}, we have introduced the adjoint representation $\mathbf \varrho$ of the Lie algebra $\mathbf{g_2}$, whose generic element is the matrix:
\begin{equation}
\left[
\begin{array}{cc}
a & A^+ \\
A^- & 0
\end{array} \right]
\label{gdm2} \end{equation}
where $a \in \mathbf{a_2}$, $A^+,\ A^- \in \mathbb{C}^3$, viewed as a column vector and a row vector, respectively.\\
The commutator of two such matrices reads \cite{T-2}:
\begin{equation}
\begin{array}{c}
\left [ \left[ \begin{array}{cc} a & A^+ \\ A^- & 0
\end{array}\right]
,
\left [ \begin{array}{cc} b & B^+ \\ B^- & 0
\end{array}\right]\right] \\ =
\left [\begin{array}{cc} [a, b] + A^+ \circ B^- - B^+\circ A^- &
a B^+ - b A^+ + 2 A^- \wedge B^- \\
A^- b - B^-a + 2 A^+ \wedge B^+ & 0 \end{array} \right]
\label{comm}
\end{array}
\end{equation}
where
\begin{equation}
A^+ \circ B^- = t(A^+ B^- ) I - t(I) A^+ B^-
\label{circ}
\end{equation}
(with standard matrix products of row and column vectors and with $I$ denoting the $3\times 3$ identity matrix); $A
\wedge B$ is the standard vector product of $A$ and $B$, and $t(a)$ denotes the trace of $a$.\\
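As a numerical sanity check, outside the formalism of the paper, the bracket \eqref{comm} can be sketched in a few lines of Python (real coefficients in place of complex ones; note that $a$ must indeed be projected onto $\mathbf{a_2}$, i.e. taken traceless, for the Jacobi identity to hold):

```python
import numpy as np

rng = np.random.default_rng(0)

def circ(Ap, Bm):
    # A+ o B- = t(A+ B-) I - t(I) A+ B-,  with t(I) = 3
    M = np.outer(Ap, Bm)
    return np.trace(M) * np.eye(3) - 3 * M

def bracket(X, Y):
    # X = (a, A+, A-): a traceless 3x3 matrix, A+ a column vector, A- a row vector
    a, Ap, Am = X
    b, Bp, Bm = Y
    return (a @ b - b @ a + circ(Ap, Bm) - circ(Bp, Am),  # upper-left block
            a @ Bp - b @ Ap + 2 * np.cross(Am, Bm),       # upper-right block
            Am @ b - Bm @ a + 2 * np.cross(Ap, Bp))       # lower-left block

def add(X, Y):
    return tuple(x + y for x, y in zip(X, Y))

def rand_elt():
    a = rng.standard_normal((3, 3))
    a -= (np.trace(a) / 3) * np.eye(3)   # project onto a_2 = sl(3)
    return (a, rng.standard_normal(3), rng.standard_normal(3))

X, Y, Z = rand_elt(), rand_elt(), rand_elt()
jac = add(add(bracket(X, bracket(Y, Z)),
              bracket(Y, bracket(Z, X))),
          bracket(Z, bracket(X, Y)))
assert all(np.allclose(c, 0) for c in jac)
```

The bracket is manifestly antisymmetric, so the Jacobi identity is the only nontrivial check.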
The $\mathbf{g_2}$ generators are \cite{T-1}:
\begin{equation}
\begin{array}{l}
\mathbf \varrho (d^\pm_k) = E_{k\pm 1 \ k\pm 2} \quad \text{(mod 3)} \ , \ k = 1,2,3\\
\mathbf \varrho(\sqrt{2} H_1) = E_{11} - E_{22} \qquad \mathbf \varrho(\sqrt{6} H_2) = E_{11} + E_{22} - 2 E_{33} \\
\mathbf \varrho(g^+_k) = E_{k 4} := \mathbf e_k^+ \quad \mathbf \varrho(g^-_k) = E_{4 k} := \mathbf e_k^- \quad , \ k = 1,2,3
\end{array}
\end{equation}
where $E_{ij}$ denotes the matrix with all zero elements except a $1$ in the $\{ij\}$ position: $(E_{ij})_{k\ell} = \delta_{ik}\delta_{j\ell}$ and $\mathbf e_k^+$ are the standard basis vectors of $\mathbb{C}^3$ ($\mathbf e_k^-$ are their transpose).
The correspondence with the roots of $\mathbf{g_2}$ is shown in Figure \ref{fig:g2m}.
\begin{figure}
\begin{center}
\includegraphics{g2matrix.png}
\caption{Diagram of $\mathbf{g_2}$ with corresponding generators and {\it matrix-like} elements}\label{fig:g2m}
\end{center}
\end{figure}
In \cite{T-2} we have also introduced the following action of $\mathbf \varrho(\mathbf{g_2})$ on the octonions represented by Zorn matrices:
\begin{equation}
\begin{array}{c}
\left[ \left [ \begin{array}{cc} a & A^+ \\ A^- & 0
\end{array}\right] ,
\left[
\begin{array}{cc}
\alpha_0^+ & v^+ \\
v^- & \alpha_0^-
\end{array} \right] \right] \\ \\=
\left [\begin{array}{cc} - v^- A^+ + A^- v^+&
a v^+ + (\alpha^-_0 - \alpha^+_0) A^+ - A^- \wedge v^- \\
- v^- a - (\alpha^-_0 - \alpha^+_0) A^- - A^+ \wedge v^+ & v^- A^+ - A^- v^+ \end{array} \right]
\label{g21a}
\end{array}
\end{equation}
We see that $\mathbf \varrho(\mathbf{g_2})$ annihilates the identity and preserves traceless octonions, hence we can write $\alpha_0^+ = - \alpha_0^-$ to get a ``matrix-like'' expression of the 7-dimensional representation of $\mathbf{g_2}$.
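A numerical counterpart of \eqref{g21a} (again a sketch with real coefficients, not part of the paper's formalism) can be used to check that the action is compatible with the bracket \eqref{comm}, i.e. that $\mathbf \varrho([X,Y])\cdot o = X\cdot(Y\cdot o) - Y\cdot(X\cdot o)$ on Zorn matrices $o$:

```python
import numpy as np

rng = np.random.default_rng(1)

def circ(Ap, Bm):
    M = np.outer(Ap, Bm)
    return np.trace(M) * np.eye(3) - 3 * M

def bracket(X, Y):
    # the g_2 bracket of eq. (comm)
    a, Ap, Am = X
    b, Bp, Bm = Y
    return (a @ b - b @ a + circ(Ap, Bm) - circ(Bp, Am),
            a @ Bp - b @ Ap + 2 * np.cross(Am, Bm),
            Am @ b - Bm @ a + 2 * np.cross(Ap, Bp))

def act(X, o):
    # the action (g21a) on a Zorn matrix o = (alpha_0^+, v^+, v^-, alpha_0^-)
    a, Ap, Am = X
    ap, vp, vm, am = o
    d = np.dot(vm, Ap) - np.dot(Am, vp)
    return (-d,
            a @ vp + (am - ap) * Ap - np.cross(Am, vm),
            -(vm @ a) - (am - ap) * Am - np.cross(Ap, vp),
            d)

def rand_elt():
    a = rng.standard_normal((3, 3))
    a -= (np.trace(a) / 3) * np.eye(3)   # project onto a_2
    return (a, rng.standard_normal(3), rng.standard_normal(3))

X, Y = rand_elt(), rand_elt()
o = (rng.standard_normal(), rng.standard_normal(3),
     rng.standard_normal(3), rng.standard_normal())

lhs = act(bracket(X, Y), o)
t1, t2 = act(X, act(Y, o)), act(Y, act(X, o))
rhs = tuple(u - v for u, v in zip(t1, t2))
assert all(np.allclose(u, v) for u, v in zip(lhs, rhs))
```

The same code confirms that the identity octonion, $(1, 0, 0, 1)$, is annihilated by the action.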
\section{Derivations of $\mathbb{S}$ \label{sec:d(s)}}
We now use the representation $\mathbf \varrho$ to get a representation of the Lie algebra $Der(\mathbb{S})$, which indeed is a non-reductive subalgebra of $\mathbf{g}_{2}=Der(\mathfrak{C})$.
It was shown in \cite{W-1} that the map from the subalgebra of derivations of $\textbf{\large $\mathfrak C$}$ preserving $\mathbb{S}$, that we here denote by $Der_\textbf{\large $\mathfrak C$} (\mathbb{S})$, to $Der(\mathbb{S})$ is surjective with one-dimensional kernel; the corresponding statement at the level of automorphism groups was made in \cite{LM-2}.
Within our formalism, this result is achieved by restricting $\mathbf \varrho(\mathbf{g_2})$ to the matrices that preserve $\mathbb{S}$. One easily gets:
\begin{equation}
\left[
\begin{array}{cc}
a & S^+ \\
S^- & 0
\end{array} \right] : a =
\left(\begin{array}{ccc}
a_{11} & 0 & a_{13} \\
a_{21} & a_{22} & a_{23} \\
0 & 0 & a_{33} \\
\end{array} \right) \ , \ S^+ =
\left(\begin{array}{ccc}
s^+_1 \\
s^+_2 \\
0 \\
\end{array} \right) \ , \ S^- = (s^-_1,0,s^-_3)
\label{sm} \end{equation}
We also realize very easily that the generator corresponding to $d_1^+$, namely the element $E_{23}$ in $\mathbf \varrho(\mathbf{g_2})$, acts trivially on $\mathbb{S}$, hence it can be set to $0$. The commutator (\ref{comm}) must be modified accordingly, by setting the $\{ 23 \}$ element of $a$ equal to zero, that is by replacing the standard matrix product of two matrices
\begin{equation}
a =
\left(\begin{array}{ccc}
a_{11} & 0 & a_{13} \\
a_{21} & a_{22} & 0 \\
0 & 0 & a_{33} \\
\end{array} \right) \ , \ b =
\left(\begin{array}{ccc}
b_{11} & 0 & b_{13} \\
b_{21} & b_{22} & 0 \\
0 & 0 & b_{33} \\
\end{array} \right)
\end{equation}
with the new product
\begin{equation}
a \centerdot b = ab - E_{22}\ ab\ E_{33}
\label{nmpu}
\end{equation}
and the product $S^+ \circ S^-$ with
\begin{equation}
S^+ \underline{\circ}\ S^- = t(S^+ S^- ) I - t(I) (S^+ S^- - E_{22} S^+ S^- E_{33})
\label{nmpd}
\end{equation}
We thus have $Der(\mathbb{S}) = \mathbf{a_1} \oplus \mathbb{C} \oplus V_4$, where $V_4$ is a 4-dimensional\footnote{%
This representation also characterizes $\mathbf{a}_{1}$ as the smallest Lie
group \textquotedblleft of type $E_{7}$" \cite{Brown}, and it pertains to
the so-called $T^{3}$ model of $N=2$, $D=4$ supergravity.} (spin-$3/2$) irreducible representation of $\mathbf{a_1}$ (as confirmed by the entry in the first column, fourth row in the extended Magic Square; \textit{cfr. e.g.} \cite{LM-2}). The corresponding root diagram is shown in Figure \ref{fig:derS}, where we have also included the axes corresponding to the linear span of the Cartan generators, represented by the matrices:
\begin{figure}
\begin{center}
\includegraphics{derS.png}
\caption{Root diagram of $Der(\mathbb{S})$}\label{fig:derS}
\end{center}
\end{figure}
\begin{equation}
\left[
\begin{array}{cc}
h_{1,2} & 0 \\
0 & 0
\end{array} \right] : h_1 =
\left(\begin{array}{ccc}
-2 & 0 & 0 \\
0 & 1 & 0 \\
0 & 0 & 1 \\
\end{array} \right) \ , \ h_2 =
\left(\begin{array}{ccc}
0 & 0 & 0 \\
0 & 1 & 0 \\
0 & 0 & -1 \\
\end{array} \right)
\label{cart} \end{equation}
\noindent {\bf Proposition \ref{sec:d(s)}.1} : The algebra spanned by the generators corresponding to the roots in Figure \ref{fig:derS} is a Lie algebra. \\
{\bf Proof :}
By looking at the diagram in Figure \ref{fig:g2m}, these generators are $d^-_2, d^-_3, g^+_2, g^-_3$, spanning a subspace $L_1$ of $\mathbf{g_2}$, plus the generators $g^\pm_1, h_1, h_2$, spanning the Lie subalgebra $L_0 := \mathbf{a_1}\oplus \mathbb{C}$.
We have $[L_0,L_0] \subset L_0$, $[L_0,L_1] \subset L_1$, $[L_1,L_1] \subset L_2 \sim 0$, where $L_2$ is the span of $d^+_1$. The notation is that of the grading with respect to $h_2$.
We consider the $\mathbf{g_2}$ commutation relations among these generators and identify $d^+_1 \sim 0$. We only need to prove that the Jacobi identity is consistent with this identification.
Let $X,Y,Z \in L_0 \oplus L_1$, then consistency must be checked in only two cases (up to cyclic permutation):
\textbf{case 1:} $[X,Y] \propto d^+_1$;
\textbf{case 2:} $[[X,Y],Z] \propto d^+_1$.
\textbf{Case 1:} Consistency requires $[[Y,Z],X] + [[Z,X],Y] \sim 0$. This is true if $[d^+_1, Z] = 0$, since it is true in $\mathbf{g_2}$. On the other hand, if $[d^+_1, Z] \neq 0$ then $Z \propto h_2$ and $[Z,X] = \lambda X$, $[Z,Y] = \lambda Y$, since $X,Y$ must be in $L_1$ by hypothesis. Therefore $[[Y,Z],X] + [[Z,X],Y] = 2 \lambda [X,Y] \propto d^+_1 \sim 0$.
\textbf{Case 2:} Both $[X,Y]$ and $Z$ must be in $L_1$. In particular either $X$ or $Y$ must be in $L_1$. Suppose $X \in L_1$. Then $[Y,Z]\in L_1$ hence we have both $[X,Z] \sim 0$ and $[[Y,Z],X] \sim 0$. Similarly if $Y \in L_1$.
\\
This concludes the proof $\blacksquare $
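Proposition \ref{sec:d(s)}.1 can also be checked numerically. The following sketch (real coefficients; not part of the paper's formalism) implements the bracket \eqref{comm} with the modified products \eqref{nmpu} and \eqref{nmpd} on matrices of the form \eqref{sm} with vanishing $\{23\}$ entry, and verifies the Jacobi identity on random elements:

```python
import numpy as np

rng = np.random.default_rng(2)
E22 = np.diag([0.0, 1.0, 0.0])
E33 = np.diag([0.0, 0.0, 1.0])

def proj(M):
    # kill the {23} entry: quotient by the span of d_1^+
    return M - E22 @ M @ E33

def ucirc(Sp, Sm):
    # the modified circ product (nmpd)
    M = np.outer(Sp, Sm)
    return np.trace(M) * np.eye(3) - 3 * proj(M)

def bracket(X, Y):
    a, Sp, Sm = X
    b, Rp, Rm = Y
    return (proj(a @ b - b @ a) + ucirc(Sp, Rm) - ucirc(Rp, Sm),
            a @ Rp - b @ Sp + 2 * np.cross(Sm, Rm),
            Sm @ b - Rm @ a + 2 * np.cross(Sp, Rp))

def add(X, Y):
    return tuple(x + y for x, y in zip(X, Y))

def rand_elt():
    a11, a22, a13, a21 = rng.standard_normal(4)
    a = np.array([[a11, 0.0, a13],
                  [a21, a22, 0.0],
                  [0.0, 0.0, -a11 - a22]])   # traceless, shape (sm), {23} entry 0
    Sp = np.array([rng.standard_normal(), rng.standard_normal(), 0.0])
    Sm = np.array([rng.standard_normal(), 0.0, rng.standard_normal()])
    return (a, Sp, Sm)

X, Y, Z = rand_elt(), rand_elt(), rand_elt()
jac = add(add(bracket(X, bracket(Y, Z)),
              bracket(Y, bracket(Z, X))),
          bracket(Z, bracket(X, Y)))
assert all(np.allclose(c, 0) for c in jac)
```

One can also check that the bracket closes on the displayed shapes, as expected from the proof above.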
\section{$\mathbf{n}=1$ : Matrix representation of $\mathbf{c_{3\frac12}}$}\label{Blues1}
We denote by a dot the Jordan product $x\!\cdot\! y = \frac12(xy+yx)$ and by $t()$ the ordinary trace of $3\times 3$ matrices. We also set $t(x,y) := t(x\!\cdot\! y)$. For $\mathbf{J_3^1}$ and $\mathbf{J_3^2}$, obviously $t(x,y) = t(x y)$.
We use in this section the representation $\mathbf \varrho$ of $\mathbf{f_4}$ in the form of a matrix introduced in \cite{T-2}, restricted to the subalgebra $\mathbf{c_{3\frac12}}$:
\begin{equation}
\mathbf \varrho(\mathfrak{f}) = \left ( \begin{array}{cc} a\otimes I + I \otimes a_1 & \mathbf s^+ \\ \mathbf s^- & -I\otimes a_1^T
\end{array}\right)
\label{mfq}
\end{equation}
where
\begin{equation}
a =
\left(\begin{array}{ccc}
a_{11} & 0 & a_{13} \\
a_{21} & a_{22} & 0 \\
0 & 0 & a_{33} \\
\end{array} \right) \ , \ t(a)=0\ , \ \mathbf s^+ =
\left(\begin{array}{ccc}
s^+_1 \\
s^+_2 \\
0 \\
\end{array} \right) \ , \ \mathbf s^- = (s^-_1,0,s^-_3)
\label{sm2}
\end{equation}
and $a_1 \in \mathbf{a_2}$, $a_1^T$ is the transpose of $a_1$, $I$ is the $3\times 3$ identity matrix, $s_i^\pm \in \mathbf{J_3^1} \ , \quad i=1,2,3$.
The commutator is set to be:
\begin{equation}
\begin{array}{c}
\left[
\left ( \begin{array}{cc} a\otimes I + I \otimes a_1 & \mathbf s^+ \\ \mathbf s^- & -I\otimes a_1^T
\end{array}\right) ,
\left ( \begin{array}{cc} b\otimes I + I \otimes b_1 &\mathbf r^+ \\ \mathbf r^- & -I\otimes b_1^T
\end{array}\right) \right] \\ \\ :=
\left (\begin{array}{cc} C_{11} & C_{12}\\
C_{21} & C_{22}
\end{array} \right) \hfill
\label{fqcom}
\end{array}
\end{equation}
where, denoting by $[a\centerdot b]$ the commutator with respect to the product (\ref{nmpu}),
\begin{equation} [a\centerdot b] = a\centerdot b - b\centerdot a = [a,b] - E_{22}[a,b] E_{33} \ , \end{equation}
it holds that:
\begin{equation}
\begin{array}{ll}
C_{11} &= [a\centerdot b] \otimes I + I \otimes [a_1,b_1] + \mathbf s^+ \diamond \mathbf r^- - \mathbf r^+ \diamond \mathbf s^- \\ \\
C_{12} &= (a \otimes I) \mathbf r^+ - (b \otimes I) \mathbf s^+ + (I \otimes a_1) \mathbf r^+ + \mathbf r^+ (I \otimes a_1^T) \\
&\phantom{:=} - (I \otimes b_1) \mathbf s^+ - \mathbf s^+ (I \otimes b_1^T) + \mathbf s^- \times \mathbf r^- \\ \\
C_{21} &= - \mathbf r^- (a \otimes I) + \mathbf s^- (b \otimes I) - (I \otimes a_1^T) \mathbf r^- - \mathbf r^- (I \otimes a_1) \\
&\phantom{:=} + (I \otimes b_1^T) \mathbf s^- + \mathbf s^- (I \otimes b_1) + \mathbf s^+ \times \mathbf r^+ \\ \\
C_{22} &= I \otimes [a_1^T,b_1^T] + \mathbf s^- \bullet \mathbf r^+ - \mathbf r^- \bullet \mathbf s^+
\end{array}
\label{comrel}
\end{equation}
with the following definitions (summing over repeated indices):
\begin{equation}
\begin{array}{ll}
\mathbf s^+ \diamond \mathbf r^- &:= \left(\frac13 t(s^+_1, r^-_1) I - (1-(E_{23})_{ij}) t(s^+_i,r^-_j) E_{ij} \right) \otimes I +\\
&\phantom{:=} I \otimes \left(\frac13 t(s^+_1, r^-_1) I - s^+_1 r^-_1 \right) \\ \\
\mathbf s^- \bullet \mathbf r^+ &:= I \otimes (\frac13 t(s^-_1,r^+_1) I - s^-_1 r^+_1) \\ \\
(\mathbf s^\pm \times \mathbf r^\pm)_i &:= \epsilon_{ijk}[s_j^\pm r_k^\pm + r_k^\pm s_j^\pm -s_j^\pm t(r_k^\pm) - r_k^\pm t(s_j^\pm) \\
&\phantom{:=}- (t(s_j^\pm, r_k^\pm) - t(s_j^\pm) t( r_k^\pm)) I] \\
&:= \epsilon_{ijk} (s_j^\pm \# r_k^\pm)
\end{array}
\label{not1}
\end{equation}
Notice that:
\begin{enumerate}
\item $s \in \mathbf{J_3^1}$ is a symmetric complex matrix;
\item writing $\mathbf s^+ \diamond \mathbf r^- := c \otimes I + I\otimes c_1$ we have that both $c$ and $c_1$ are traceless, hence $c$ is a matrix like $a$ in (\ref{sm}), $c_1 \in \mathbf{a_2}$ and $\mathbf r^- \bullet \mathbf s^+ = I\otimes c_1^T$;
\item terms like $(I \otimes a_1) \mathbf r^+ + \mathbf r^+ (I \otimes a_1^T)$ are in $\mathbb{C}^3 \otimes \mathbf{J_3^1}$, namely they are matrix valued vectors with symmetric matrix elements;
\item the {\it sharp} product $\#$ of $\mathbf{J_3^1}$ matrices appearing in $\mathbf s^\pm \times \mathbf r^\pm$ is a fundamental product in the theory of Jordan Algebras, \cite{McCrimmon}. It is the linearization of $x^\# := x^2 - t(x) x - \frac12(t(x^2) - t(x)^2)I$, in terms of which we may write the fundamental cubic identity for $\mathbf{J_3^n}, n= 1,2,4,8$:
\begin{equation} x^\#\!\cdot\! x = \frac13 t(x^\#\!, x) I \quad \text{or} \quad x^3 - t(x) x^2 + t(x^\#) x - \frac13 t(x^\#\! , x) I = 0 \label{cubic} \end{equation}
where $x^3 = x^2 \!\cdot\! x$ (notice that for $\mathbf{J_3^8}$, because of non-associativity, $x^2 x \ne x x^2$ in general).
\end{enumerate}
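For $\mathbf{J_3^1}$ the cubic identity \eqref{cubic} is nothing but the Cayley--Hamilton theorem for $3\times 3$ matrices; a quick numerical sketch (ordinary matrix products, which for powers of a single element coincide with Jordan products):

```python
import numpy as np

rng = np.random.default_rng(3)

def t(x):
    return np.trace(x)

def sharp(x):
    # x^# = x^2 - t(x) x - 1/2 (t(x^2) - t(x)^2) I
    return x @ x - t(x) * x - 0.5 * (t(x @ x) - t(x) ** 2) * np.eye(3)

m = rng.standard_normal((3, 3)) + 1j * rng.standard_normal((3, 3))
x = m + m.T                      # a random element of J_3^1: complex symmetric

# x^# . x = 1/3 t(x^#, x) I, and 1/3 t(x^#, x) is the determinant of x
assert np.allclose(sharp(x) @ x, (t(sharp(x) @ x) / 3) * np.eye(3))
assert np.isclose(t(sharp(x) @ x) / 3, np.linalg.det(x))
```

Here $t(x^\#)$ reproduces the second elementary symmetric function of the eigenvalues and $\frac13 t(x^\#,x)$ the determinant, which makes the match with the characteristic polynomial explicit.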
The validity of the Jacobi identity for the algebra of matrices \eqref{mfq} with Lie product given by \eqref{fqcom} - \eqref{not1} derives from the Jacobi identity for $\mathbf \varrho(\mathbf{f_4})$ proven in \cite{T-2}, together with Proposition \ref{sec:d(s)}.1, applied to $\mathbf{c_{3\frac12}}$ by trivially extending the three-grading argument. The validity of the Jacobi identity, together with the fact that the representation $\mathbf \varrho$ fulfills the root diagram of $\mathbf{c_{3\frac12}}$ (as can be easily seen), proves that $\mathbf \varrho$ is indeed a representation of $\mathbf{c_{3\frac12}}$.
Before passing to $\mathbf{e_{7 \frac12}}$, let us point out that the cases of $\mathbf{a_{5 \frac12}}$ ($\mathbf{n}=2$) and $\mathbf{d_{6 \frac12}}$ ($\mathbf{n}=4$) can be worked out in the same fashion as for $\mathbf{c_{3\frac12}}$, starting from the representations of $\mathbf{e_6}$ and $\mathbf{e_7}$ introduced in \cite{T-2}.
\section{$\mathbf{n}=8$ : Matrix representation of $\mathbf{e_{7 \frac12}}$}\label{Blues2}
We recall a few concepts and notations from \cite{T-2}. We use the notation $L_x z := x\!\cdot\! z$ and, for $\mathbf x \in \mathbb{C}^3 \otimes \mathbf{J_3^8}$ with components $(x_1, x_2, x_3)$, $L_\mathbf x \in \mathbb{C}^3 \otimes L_{\mathbf{J_3^8}}$ denotes the corresponding operator valued vector with components $(L_{x_1}, L_{x_2}, L_{x_3})$. We can write an element $a_1$ of $\mathbf{e_6}$ as $a_1 = L_x + \sum [L_{x_i},L_{y_i}]$ where $x,x_i,y_i \in \mathbf{J_3^8}$ and $t(x) = 0$. The adjoint is defined by $a_1^\dagger:= L_x - \sum [L_{x_i},L_{y_i}]$. Notice that the operators $F := \sum [L_{x_i},L_{y_i}]$ span the $\mathbf{f_4}$ subalgebra of $\mathbf{e_6}$, the derivation algebra of $\mathbf{J_3^8}$. (Recall that the Lie algebra of the structure group of $\mathbf{J_3^8}$ is $\mathbf{e_6} \oplus \mathbb{C}$.)\\
We remark that $(a_1,-a_1^\dagger)$ is a derivation in the Jordan Pair $(\mathbf{J_3^8},\mathbf{\overline J_3^{\raisebox{-2 pt}{\scriptsize \textbf 8}}})$, and it is useful to recall that the relationship between the structure group of a Jordan algebra $J$ and the automorphism group of a Jordan Pair $V = (J,J)$ goes as follows, \cite{loos1}: if $g \in Str(J)$ then $(g, U^{-1}_{g(I)} g) \in Aut(V)$. In our case, for $g = 1 + \epsilon (L_x + F)$, at first order in $\epsilon$ (namely, in the tangent space of the corresponding group manifold) we get $ U^{-1}_{g(I)} g = 1 + \epsilon (- L_x + F) +O(\epsilon^2)$.\\
Next, we introduce a product $\star$ such that $L_x \star L_y := L_{x\cdot y} + [L_x, L_y]$, $F\star L_x := 2 F L_x$ and $L_x \star F :=2 L_x F$ for each component $x$ of $\mathbf x \in \mathbb{C}^3\otimes \mathbf{J_3^8}$ and $y$ of $\mathbf y \in \mathbb{C}^3\otimes \mathbf{J_3^8}$. If we denote by $[ ; ]$ the commutator with respect to the $\star$ product, we also require that $[F_1 ; F_2] := 2 [F_1,F_2]$. We have that $L_x \star L_y + L_y \star L_x = 2 L_{x\cdot y}$ and $[F; L_x] = F\star L_x - L_x \star F= 2 [F, L_x] = 2 L_{F(x)}$, where the last equality holds because $F$ is a derivation in $\mathbf{J_3^8}$.\\
Therefore, for $\mathfrak{f} \in \mathbf{e_{7 \frac12}}$, we write:
\begin{equation}
\mathbf \varrho(\mathfrak{f}) = \left ( \begin{array}{cc} a\otimes Id + I \otimes a_1 & L_{\mathbf s^+} \\ L_{\mathbf s^-} & -I\otimes a_1^\dagger
\end{array}\right)
\label{meo}
\end{equation}
where $a,\ \mathbf s^\pm$ have the same pattern of vanishing entries as in (\ref{sm}), now with entries $s_i^\pm \in \mathbf{J_3^8}$; $a_1 \in \mathbf{e_6}$, $I$ is the $3\times 3$ identity matrix, and $Id := L_I$ is the identity operator in $L_{\mathbf{J_3^8}}$: $L_I L_x= L_x$. Notice that $Id$ is the identity also with respect to the $\star$ product.
By extending the $\star$ product in an obvious way to the matrix elements \eqref{meo}, one achieves that $(I \otimes a_1) \star L_{\mathbf r^+} + L_{\mathbf r^+} \star (I \otimes a_1^\dagger) = 2 L_{(I \otimes a_1) \mathbf r^+}$ and $(I \otimes a_1^\dagger) \star L_{\mathbf r^-} + L_{\mathbf r^-} \star (I \otimes a_1) = 2 L_{(I \otimes a_1^\dagger) \mathbf r^-}$.
After some algebra, the commutator of two matrices like \eqref{meo} can be computed to read:
\begin{equation}
\begin{array}{c}
\left[
\left ( \begin{array}{cc} a\otimes Id + I \otimes a_1 & L_{\mathbf s^+} \\ L_{\mathbf s^-} & -I\otimes a_1^\dagger
\end{array}\right) ,
\left ( \begin{array}{cc} b\otimes Id + I \otimes b_1 &L_{\mathbf r^+} \\ L_{\mathbf r^-} & -I\otimes b_1^\dagger
\end{array}\right) \right] \\ \\ :=
\left (\begin{array}{cc} C_{11} & C_{12}\\
C_{21} & C_{22}
\end{array} \right), \hfill
\label{eocom}
\end{array}
\end{equation}
where:
\begin{equation}
\begin{array}{ll}
C_{11} &= [a\centerdot b] \otimes Id + 2 I \otimes [a_1,b_1] + L_{\mathbf s^+} \diamond L_{\mathbf r^-} - L_{\mathbf r^+} \diamond L_{\mathbf s^-} \\ \\
C_{12} &= (a \otimes Id) L_{\mathbf r^+} - (b \otimes Id) L_{\mathbf s^+} +2 L_{(I \otimes a_1) \mathbf r^+}\\
&\phantom{:=} - 2 L_{(I \otimes b_1) \mathbf s^+} + L_{\mathbf s^-} \times L_{\mathbf r^-} \\ \\
C_{21} &= - L_{\mathbf r^-} (a \otimes Id) + L_{\mathbf s^-} (b \otimes Id) - 2 L_{(I \otimes a_1^\dagger) \mathbf r^-} \\
&\phantom{:=} +2 L_{(I \otimes b_1^\dagger) \mathbf s^-} + L_{\mathbf s^+} \times L_{\mathbf r^+} \\ \\
C_{22} &= 2 I \otimes [a_1^\dagger,b_1^\dagger] + L_{\mathbf s^-} \bullet L_{\mathbf r^+} - L_{\mathbf r^-} \bullet L_{\mathbf s^+}.
\end{array}
\label{comreleo}
\end{equation}
The products in \eqref{comreleo} are defined as follows:
\begin{equation}
\begin{array}{ll}
L_{\mathbf s^+} \diamond L_{\mathbf r^-} &:= \left(\frac13 t(s^+_1, r^-_1) I - (1-(E_{23})_{ij}) t(s^+_i,r^-_j) E_{ij}\right) \otimes Id +\\
&\phantom{:=} I \otimes \left(\frac13 t(s^+_1, r^-_1) Id - L_{s^+_1 \cdot r^-_1} - [L_{s^+_1}, L_{r^-_1}] \right) \\ \\
L_{\mathbf s^-} \bullet L_{\mathbf r^+} &:= I \otimes (\frac13 t(s^-_1,r^+_1) Id - L_{s^-_1 \cdot r^+_1} - [L_{s^-_1}, L_{r^+_1}]) \\ \\
L_{\mathbf s^\pm} \times L_{\mathbf r^\pm} &:= L_{\mathbf s^\pm \times \mathbf r^\pm} = L_{\epsilon_{ijk} (s_j^\pm \# r_k^\pm)}
\end{array}
\label{not1eo}
\end{equation}
From the properties of the triple product of Jordan algebras, it holds that $ L_{s^+_1 \cdot r^-_1} + [L_{s^+_1}, L_{r^-_1}] = \frac12 V_{s^+_1 , r^-_1} \in \mathbf{e_6}\oplus \mathbb{C}$, \cite{T-2}. Moreover one can readily check that $[a_1^\dagger,b_1^\dagger] = - [a_1,b_1]^\dagger$ and $L_{\mathbf r^-} \bullet L_{\mathbf s^+} = I \otimes (\frac13 t(s^+_1,r^-_1) Id - L_{s^+_1 \cdot r^-_1} - [L_{s^+_1}, L_{r^-_1}])^\dagger$; this result implies that the commutator closes on matrices of the form \eqref{meo}, namely that we are actually considering an algebra.
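The first of these checks can be made explicit: writing $a_1 = L_x + F$ and $b_1 = L_y + G$ with $F, G \in \mathbf{f_4}$, using $[F,L_y] = L_{F(y)}$, $[L_x,G] = -L_{G(x)}$ and the fact that $[L_x,L_y] \in \mathbf{f_4}$, one computes
\begin{equation*}
\begin{array}{l}
[a_1,b_1] = [L_x + F, L_y + G] = L_{F(y)-G(x)} + [L_x,L_y] + [F,G]\ , \\[1mm]
[a_1^\dagger,b_1^\dagger] = [L_x - F, L_y - G] = -L_{F(y)-G(x)} + [L_x,L_y] + [F,G] = -[a_1,b_1]^\dagger\ .
\end{array}
\end{equation*}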
The validity of the Jacobi identity for the algebra of matrices \eqref{meo} with Lie product given by \eqref{eocom} - \eqref{not1eo} derives from the Jacobi identity\footnote{We would like to recall that the proof of the Jacobi identity given in \cite{T-2} strongly relies on identities deriving from the Jordan Pair axioms \cite{loos1}.} for $\mathbf \varrho(\mathbf{e_8})$ (proven in \cite{T-2}), together with Proposition \ref{sec:d(s)}.1, applied to $\mathbf{e_{7 \frac12}}$ by trivially extending the three-grading argument. That the Lie algebra so represented is $\mathbf{e_{7 \frac12}}$ is made obvious by a comparison with the root diagram in Figure \ref{fig:rootesm}.
\begin{proof}
\begin{enumerate}
\item[2) ] We have assumed $\Gamma \cup \{\varphi\} \vdash \Psi_1 \approx \Psi_2$,
and we get $\Gamma \cup \{\neg \varphi\} \vdash \Psi_2 \approx \Psi_2$ by reflexivity,
so $(S4)$ gives $\Gamma \vdash \varphi \mathbin{?} \Psi_1 : \Psi_2 \approx \Psi_2$.
Since we have assumed $\Gamma \vdash \varphi$, $(S3)$ gives
$\Gamma \vdash \varphi \mathbin{?} \Psi_1 : \Psi_2 \approx \Psi_1$.
Hence $\Gamma \vdash \Psi_1 \approx \Psi_2$ by symmetry and transitivity.
\item[3) ] By reflexivity, we have $\Gamma \vdash \Psi \approx \Psi$,
so ($S1$) gives both $\Gamma \cup \{\varphi\} \vdash \Psi \approx \Psi$
and $\Gamma \cup \{\neg \varphi\} \vdash \Psi \approx \Psi$,
so using ($S4$) we conclude $\Gamma \vdash \varphi \mathbin{?} \Psi : \Psi \approx \Psi$.
\item[6) ] Assume that $\Gamma \vdash \neg \varphi$.
By axiom ($S2$) we get $\Gamma \vdash \varphi \mathbin{?} \Psi_1 : \Psi_2 \approx \neg \varphi \mathbin{?} \Psi_2 : \Psi_1$,
and axiom ($S3$) gives $\Gamma \vdash \neg \varphi \mathbin{?} \Psi_2 : \Psi_1 \approx \Psi_2$,
so $\Gamma \vdash \varphi \mathbin{?} \Psi_1 : \Psi_2 \approx \Psi_2$.
\item[7) ] This is simply an instantiation of the fourth item of this proposition
where $\varphi_1 = \varphi_2 = \varphi$,
and the other two premises are guaranteed to hold because
$\{\varphi, \neg \varphi\}$ is inconsistent.
\item[8) ] Assume that $\Gamma \cup \{\varphi\} \vdash \Psi_1 \approx \Psi_2$
and $\Gamma \cup \{\neg \varphi\} \vdash \Psi_1 \approx \Psi_2$.
Then axiom (S4) gives
$\Gamma \vdash \varphi \mathbin{?} \Psi_1 : \Psi_1 \approx \Psi_2$,
and the third item of this proposition gives
$\Gamma \vdash \varphi \mathbin{?} \Psi_1 : \Psi_1 \approx \Psi_1$,
so $\Gamma \vdash \Psi_1 \approx \Psi_2$.
\item[9) ] Since $\Gamma \cup \{\varphi\} \vdash \varphi$,
we get $\Gamma \cup \{\varphi\} \vdash \varphi \mathbin{?} \Psi_1 : \Psi_2 \approx \Psi_1$
by ($S3$).
\qedhere
\end{enumerate}
\end{proof}
\subsection{The Proof of Theorem \ref{thm:completeness-swmso}}
\begin{proof}
We show the soundness of each axiom in turn.
\begin{description}
\item[($S1$):]
Assume that $\Psi_1 \sim_\Gamma \Psi_2$.
Since $\sat{\Gamma \cup \{\varphi\}} = \sat{\Gamma} \cap \sat{\varphi}$,
for any $(w,\sigma) \in \sat{\Gamma \cup \{\varphi\}}$
we have $(w,\sigma) \in \sat{\Gamma}$,
and hence $\sat{\Psi_1}(w,\sigma) = \sat{\Psi_2}(w,\sigma)$ by assumption.
We conclude that $\Psi_1 \sim_{\Gamma \cup \{\varphi\}} \Psi_2$.
\item[($S2$):]
$\sat{\varphi \mathbin{?} \Psi_1 : \Psi_2}(w,\sigma) = \sat{\Psi_1}(w,\sigma)$ if and only if
\[\sat{\neg \varphi \mathbin{?} \Psi_2 : \Psi_1}(w,\sigma) = \sat{\Psi_1}(w,\sigma),\]
and likewise
$\sat{\varphi \mathbin{?} \Psi_1 : \Psi_2}(w,\sigma) = \sat{\Psi_2}(w,\sigma)$ if and only if
$
\sat{\neg \varphi \mathbin{?} \Psi_2 : \Psi_1}(w,\sigma) = \sat{\Psi_2}(w,\sigma).
$
It follows that
$\varphi \mathbin{?} \Psi_1 : \Psi_2 \sim_\Gamma \neg \varphi \mathbin{?} \Psi_2 : \Psi_1$.
\item[($S3$):]
Assume $\Gamma \vdash \varphi$.
By Corollary \ref{cor:completeness-mso},
this means that $\Gamma \models \varphi$.
Hence, for any $(w,\sigma) \in \sat{\Gamma}$
we have $(w,\sigma) \models \varphi$,
so $\varphi \mathbin{?} \Psi_1 : \Psi_2 \sim_\Gamma \Psi_1$.
\item[($S4$):]
Assume that $\Psi \sim_{\Gamma \cup \{\varphi\}} \Psi_1$ and
$\Psi \sim_{\Gamma \cup \{\neg \varphi\}} \Psi_2$.
For every $(w,\sigma) \in \sat{\Gamma}$, either $(w,\sigma) \in \sat{\varphi}$ or $(w,\sigma) \in \sat{ \neg \varphi}$.
In either case, the corresponding assumption gives
\[\sat{\varphi \mathbin{?} \Psi_1 : \Psi_2}(w,\sigma) = \sat{\Psi}(w,\sigma),\]
and we conclude that
$\varphi \mathbin{?} \Psi_1 : \Psi_2 \sim_\Gamma \Psi$.
\qedhere
\end{description}
\end{proof}
\subsection{The Proof of Theorem \ref{thm:completeness-cwmso}}
\begin{proof}
We show the soundness of each axiom in turn.
Axiom ($C1$):
\begin{align*}
\sat{\Phi + \mathbf{0}}(w,\sigma) &= \sat{\Phi}(w,\sigma) \uplus \sat{\mathbf{0}}(w,\sigma) \\
&= \sat{\Phi}(w,\sigma) \uplus \emptyset = \sat{\Phi}(w,\sigma).
\end{align*}
Axiom ($C2$):
\begin{align*}
\sat{\Phi_1 + \Phi_2}(w,\sigma) &= \sat{\Phi_1}(w,\sigma) \uplus \sat{\Phi_2}(w,\sigma) \\
&= \sat{\Phi_2}(w,\sigma) \uplus \sat{\Phi_1}(w,\sigma) \\
&= \sat{\Phi_2 + \Phi_1}(w,\sigma).
\end{align*}
Axiom ($C3$):
\begin{align*}
\phantom{{}={}} &\sat{(\Phi_1 + \Phi_2) + \Phi_3}(w,\sigma) \\
= &\sat{(\Phi_1 + \Phi_2)}(w,\sigma) \uplus \sat{\Phi_3}(w,\sigma) \\
= &(\sat{\Phi_1}(w,\sigma) \uplus \sat{\Phi_2}(w,\sigma)) \uplus \sat{\Phi_3}(w,\sigma) \\
= &\sat{\Phi_1}(w,\sigma) \uplus (\sat{\Phi_2}(w,\sigma) \uplus \sat{\Phi_3}(w,\sigma)) \\
= &\sat{\Phi_1}(w,\sigma) \uplus \sat{\Phi_2 + \Phi_3}(w,\sigma) \\
= &\sat{\Phi_1 + (\Phi_2 + \Phi_3)}(w,\sigma).
\end{align*}
Axiom ($C4$): This follows from soundness of \textsf{step-wMSO}\ and Lemma \ref{lem:forall}.
Axiom ($C5$): If $y \notin \mathtt{var}(\Psi)$, then
\begin{align*}
\sat{{\textstyle\prod_x} \Psi}(w,\sigma) &= \multiset{\sat{\Psi}(w,\sigma[x \mapsto 1]), \dots, \sat{\Psi}(w,\sigma[x \mapsto |w|])} \\
&= \multiset{\sat{\Psi[y/x]}(w,\sigma[y \mapsto 1]), \dots, \sat{\Psi[y/x]}(w,\sigma[y \mapsto |w|])} \\
&= \sat{{\textstyle \prod_y} \Psi[y/x]}(w,\sigma).
\end{align*}
Axioms ($C6$)--($C9$): The proof of these is similar to
the corresponding proofs in Theorem \ref{thm:completeness-swmso}.
Axiom ($C10$): We evaluate by cases. If $(w,\sigma) \models \varphi$, then
\begin{align*}
\phantom{{}={}} &\sat{(\varphi \mathbin{?} \Phi' : \Phi'') + \Phi}(w,\sigma) \\ = &\sat{(\varphi \mathbin{?} \Phi' : \Phi'')}(w,\sigma) \uplus \sat{\Phi}(w,\sigma) \\
= &\sat{\Phi'}(w,\sigma) \uplus \sat{\Phi}(w,\sigma) \\
= &\sat{\Phi' + \Phi}(w,\sigma)
\end{align*}
and
\begin{align*}
\sat{\varphi \mathbin{?} (\Phi' + \Phi) : (\Phi'' + \Phi)}(w,\sigma) &= \sat{\Phi' + \Phi}(w,\sigma)
\end{align*}
Likewise, if $(w,\sigma) \models \neg \varphi$, then
\begin{align*}
\phantom{{}={}} &\sat{(\varphi \mathbin{?} \Phi' : \Phi'') + \Phi}(w,\sigma) \\ = &\sat{(\varphi \mathbin{?} \Phi' : \Phi'')}(w,\sigma) \uplus \sat{\Phi}(w,\sigma) \\
= &\sat{\Phi''}(w,\sigma) \uplus \sat{\Phi}(w,\sigma) \\
= &\sat{\Phi'' + \Phi}(w,\sigma)
\end{align*}
and
\begin{align*}
\sat{\varphi \mathbin{?} (\Phi' + \Phi) : (\Phi'' + \Phi)}(w,\sigma) &= \sat{\Phi'' + \Phi}(w,\sigma).
\end{align*}
For completeness,
assume $\Phi_1 \sim_\Gamma \Phi_2$.
By Lemma \ref{lem:normal-form}, there exist formulas $\Phi_1'$ and $\Phi_2'$,
both in normal form, such that $\Gamma \vdash \Phi_1 \approx \Phi_1'$
and $\Gamma \vdash \Phi_2 \approx \Phi_2'$.
By soundness,
this implies $\Phi_1 \sim_\Gamma \Phi_1'$ and $\Phi_2 \sim_\Gamma \Phi_2'$,
so $\Phi_1' \sim_\Gamma \Phi_2'$.
Since these are in normal form, Lemma \ref{lem:nf-completeness}
gives $\Gamma \vdash \Phi_1' \approx \Phi_2'$,
and by symmetry and transitivity, this implies $\Gamma \vdash \Phi_1 \approx \Phi_2$.
\end{proof}
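The multiset semantics underlying axioms ($C1$)--($C3$) and ($C10$) can be sketched concretely (a toy model, not the formal semantics of the paper): fixing a pair $(w,\sigma)$, a semantic value $\sat{\Phi}(w,\sigma)$ is a finite multiset of weights, $+$ is interpreted by disjoint union $\uplus$, and the conditional selects one of its branches.

```python
from collections import Counter

zero = Counter()                                      # semantics of the formula 0

def plus(c1, c2):
    return c1 + c2                                    # Counter addition = disjoint union

def cond(holds, c1, c2):
    return c1 if holds else c2                        # phi ? Phi' : Phi''

a = Counter({"r1": 2, "r2": 1})
b = Counter({"r2": 3})
c = Counter({"r3": 1})

assert plus(a, zero) == a                             # (C1)
assert plus(a, b) == plus(b, a)                       # (C2)
assert plus(plus(a, b), c) == plus(a, plus(b, c))     # (C3)
for holds in (True, False):                           # (C10), both cases
    assert plus(cond(holds, a, b), c) == cond(holds, plus(a, c), plus(b, c))
```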
\subsection{The Full Proof for the Undecidability of Equational Satisfiability}
We now present the full construction of the reduction that proves that equational satisfiability of \textsf{core-wFO}\ (and therefore also of \textsf{core-wMSO}) is undecidable.
We take special care to only use \textsf{core-wFO}\ formulas, and therefore we use a special construction for recording the positions where each symbol appears in a configuration.
Fix a pair $(w,\sigma)$.
We use a series of formulas and equations to express that $(w,\sigma)$ encodes the computation of a Turing Machine that halts.
Therefore, the question of whether there is such a pair that satisfies the resulting set of equations is undecidable.
Let $T = (Q,\Sigma,\delta,q_0,H)$ be a Turing Machine, where $Q$ is a finite set of states, $\Sigma$ is the set of symbols that the machine uses, $\delta: Q \times \Sigma \to Q \times \Sigma \times \{L,R\}$ is the machine's transition function, $q_0$ is the starting state, and $H$ is the halting state of $T$.
Let $\ensuremath{{\triangleleft}}, \ensuremath{\texttt{m}}, \ensuremath{\texttt{1}}$ be special symbols not in $\Sigma$.
A configuration of $T$ is represented by a string of the form $s_1 q s_2 \ensuremath{{\triangleleft}}$, where $q$ is the current state of the configuration, $s_1s_2$ is the string of symbols on the tape of the machine, and the head is located at the first symbol of $s_2$; $\ensuremath{{\triangleleft}}$ marks the end of the configuration.
Let $x_0 \in \Sigma^*$ be an input of $T$.
We use every $s \in Q \cup \Sigma \cup \{\ensuremath{{\triangleleft}}, \ensuremath{\texttt{1}}, \ensuremath{\texttt{m}} \}$
as a predicate, so that $s(x)$ is true
if and only if the symbol $s$ is in position $x$.
Let $[0] = \ensuremath{\texttt{1}}$, and for every $i \geq 1$, let $[i] = \ensuremath{\texttt{1}}^{2^{i-1}} \ensuremath{\texttt{m}} \ensuremath{\texttt{1}}^{2^{i-1}}$, so that in $[i]$, $\ensuremath{\texttt{1}}$ appears exactly $2^{i}$ times.
Then, for every string $y_0y_1\cdots y_j \in (Q \cup \Sigma \cup \{\ensuremath{{\triangleleft}} \})^{j+1}$, let $[y_0y_1\cdots y_j] = [0]y_0[1]y_1\cdots [j]y_j$.
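As an illustration (with hypothetical symbols, and $\ensuremath{{\triangleleft}}$ rendered as \texttt{<}), the encoding $[\,\cdot\,]$ can be sketched as:

```python
def block(i):
    # [0] = 1,  [i] = 1^(2^(i-1)) m 1^(2^(i-1)) for i >= 1
    return "1" if i == 0 else "1" * 2 ** (i - 1) + "m" + "1" * 2 ** (i - 1)

def encode(config):
    # [y_0 y_1 ... y_j] = [0] y_0 [1] y_1 ... [j] y_j
    return "".join(block(i) + y for i, y in enumerate(config))

# the i-th symbol of a configuration is preceded, within its block, by 2^i ones
assert all(block(i).count("1") == 2 ** i for i in range(8))
assert encode("qab<") == "1q1m1a11m11b1111m1111<"
```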
We want to describe that $(w,\sigma)$ encodes a halting run of $T$ on $x_0$.
In other words, we must ensure that $(w,\sigma)$
is $[c_0] \cdots [c_k]$, where $c_0 \cdots c_k$ is a sequence of configurations of $T$, such that
$c_0$ is $q_0 x_0 \ensuremath{{\triangleleft}} $ and $c_k$ is $s_1Hs_2 \ensuremath{{\triangleleft}}$, where $s_1,s_2 \in \Sigma^*$.
We must therefore ensure that the following conditions hold:
\begin{enumerate}
\item
$(w,\sigma)$ is of the form $[c_0][c_1]\cdots [c_k]$, where each $c_i$ has exactly one $\ensuremath{{\triangleleft}}$, at the end;
\item
each $c_i$ is of the form $s_1 q s_2 \ensuremath{{\triangleleft}}$, where $q \in Q$, $s_1s_2 \in \Sigma^*$, and $s_2 \neq \varepsilon$;
\item $c_0 = q_0 x_0 \ensuremath{{\triangleleft}}$;
\item $c_k = s_1 H s_2 \ensuremath{{\triangleleft}}$ for some $s_1,s_2$; and
\item for every $0 \leq i < k$, $c_{i+1}$ results from $c_i$ by applying the transition function $\delta$. This condition can be further refined into the following subconditions. For every $0 \leq i < k$, if
$c_i = x_1 ~x_2 \cdots ~x_{r} ~q_i ~y_{1} ~y_{2} \cdots ~y_{r'} \ensuremath{{\triangleleft}}$, then:
\begin{enumerate}
\item if $\delta(q_i,y_{1}) = (q,x,L)$ and $r >0$, then $c_{i+1} = x_1 ~x_2 \cdots ~x_{r-1} ~q ~x_{r} ~x ~y_{2} \cdots ~y_{r'} \ensuremath{{\triangleleft}},$
\item if $\delta(q_i,y_{1}) = (q,x,L)$ and $r =0$, then $c_{i+1} = q ~x ~y_2 \cdots ~y_{r'} \ensuremath{{\triangleleft}}$,
\item if $\delta(q_i,y_{1}) = (q,x,R)$ and $r' > 1$, then $c_{i+1} = x_1 ~x_2 \cdots ~x_{r} ~x ~q ~y_{2} \cdots ~y_{r'} \ensuremath{{\triangleleft}}$, and
\item if $\delta(q_i,y_{1}) = (q,x,R)$ and $r' = 1$, then $c_{i+1} = x_1 ~x_2 \cdots ~x_{r} ~x ~q ~\_ \ensuremath{{\triangleleft}}$, where $\_ \in \Sigma$ is the symbol used by $T$ for a blank space.
\end{enumerate}
\end{enumerate}
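Subconditions 5(a)--5(d) describe a single application of $\delta$; a concrete (hypothetical) illustration, with a state alphabet \texttt{states}, a transition table \texttt{delta}, and $\ensuremath{{\triangleleft}}$ rendered as \texttt{<}:

```python
def step(conf, delta, states, blank="_"):
    # apply delta once to a configuration x_1 .. x_r q y_1 .. y_{r'} <
    r = next(i for i, y in enumerate(conf) if y in states)  # r symbols precede q
    q, y1 = conf[r], conf[r + 1]
    qn, x, d = delta[(q, y1)]
    if d == "L":
        if r > 0:                                            # (5a)
            return conf[:r - 1] + qn + conf[r - 1] + x + conf[r + 2:]
        return qn + x + conf[2:]                             # (5b)
    if conf[r + 2] != "<":                                   # (5c)
        return conf[:r] + x + qn + conf[r + 2:]
    return conf[:r] + x + qn + blank + "<"                   # (5d)

# toy machine: replace a's by b's moving right, then halt
states = {"q", "h"}
delta = {("q", "a"): ("q", "b", "R"), ("q", "_"): ("h", "_", "L")}
assert step("qaa<", delta, states) == "bqa<"
assert step("bqa<", delta, states) == "bbq_<"
assert step("bbq_<", delta, states) == "bhb_<"
```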
We now explain how to represent each of the conditions above with a formula or equation.
We use the following macros, where $0,1 \in R$ are two distinct weights:
\begin{align*}
\sym{x} &\stackrel{\textsf{def}}{=} \bigvee_{s \in Q \cup \Sigma \cup \{\ensuremath{{\triangleleft}} \}} s(x) \\
\nxt{x,y} &\stackrel{\textsf{def}}{=} (\neg (y \leq x)) \land \forall z.~ ( z \leq x \lor y \leq z ) \\
\first{x} &\stackrel{\textsf{def}}{=} \forall y.~ y\geq x \\
\firstc{x} &\stackrel{\textsf{def}}{=} \first{x} \lor \exists y.~ \ensuremath{{\triangleleft}}(y) \land \nxt{y,x} \\
\last{x} &\stackrel{\textsf{def}}{=} \forall y.~ y\leq x \\
\nxtsymb{x,y} &\stackrel{\textsf{def}}{=} (\neg y \leq x) \land \forall z.~ (\neg \sym{z} \lor z \leq x \lor y \leq z ) \\
\valone{x} &\stackrel{\textsf{def}}{=} \prod_y (x=y) \mathbin{?} 1 : 0 \\
\valv{x}{s} &\stackrel{\textsf{def}}{=} \prod_y (x=y) \mathbin{?} s : 0 \\
\pos{x}{v} &\stackrel{\textsf{def}}{=} \forall y.~ \neg \sym{y} \lor x \leq y \mathbin{?} v : \\
&\phantom{{}\stackrel{\textsf{def}}{=}{}}\sum_{y} \exists z.~ \sym{z} \land \nxtsymb{z,x} \land \\
&\phantom{{}\stackrel{\textsf{def}}{=}{}}z \leq y \leq x \land \ensuremath{\texttt{1}}(y) \mathbin{?} v : v_0
\end{align*}
Intuitively, formula $\pos{x}{v}$ counts how many $\ensuremath{\texttt{1}}$s appear right before position $x$.
We note that, as long as condition 1 is satisfied, for each symbol $s$, the set $S$ of positions at which $s$ appears within a configuration is uniquely identified by $\sum_{i \in S} 2^i$.
Furthermore, for each configuration, $\pos{x}{v}$ constructs a map from each such $s$ (represented by the returned value $v$) to $\sum_{i \in S} 2^i$. Therefore, the way that we will use $\pos{x}{v}$ (see how we deal with condition 5, below) gives a complete description of each configuration.
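To see why such a map pins down a configuration, consider the following Python sketch. It is our own illustration of the encoding idea (the function name \texttt{encode} is ours and not part of the construction): each symbol is sent to $\sum_{i \in S} 2^i$ over its set $S$ of positions, and since the position sets of distinct symbols partition the configuration, the resulting map determines the configuration.

```python
def encode(configuration):
    """Map each symbol to sum(2**i) over the positions i at which it occurs."""
    code = {}
    for i, s in enumerate(configuration):
        code[s] = code.get(s, 0) + 2 ** i
    return code

# Two configurations using the same multiset of symbols in different orders
# receive different maps, so the map is a complete description.
c1 = encode(["q", "a", "b", "<|"])
c2 = encode(["q", "b", "a", "<|"])
assert c1 != c2
assert c1["a"] == 2 and c1["b"] == 4   # 'a' at position 1, 'b' at position 2
```

Reading off the binary expansion of each value recovers the position set of each symbol, which is exactly the role played by $\pos{x}{v}$ in the equations below.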
We will use \ensuremath{\nil}\ as the default (negative) value in conditionals,
and as such $\varphi \mathbin{?} v$ is used as shorthand for $\varphi \mathbin{?} v : \ensuremath{\nil}$.
Furthermore, we assume that $:$ binds to the nearest $\mathbin{?}$, and therefore, $\varphi_1 \mathbin{?} \varphi_2 \mathbin{?} \Phi_1 : \Phi_2$ means $\varphi_1 \mathbin{?} \varphi_2 \mathbin{?} \Phi_1 : \Phi_2 : \ensuremath{\nil}$, which can be uniquely parsed as $\varphi_1 \mathbin{?} (\varphi_2 \mathbin{?} \Phi_1 : \Phi_2) : \ensuremath{\nil}$.
We now proceed to describe, for each of the conditions 1--5,
a number of equations that ensure that this condition holds.
By an equation, we mean something of the form $\Phi \pmb{=} \Phi'$,
where $\Phi$ and $\Phi'$ are $\textsf{core-wFO}$ formulas.
Notice that by Lemma \ref{lem:embedMSOtoEq},
any first-order formula can be turned into an equation
(as long as we have at least three distinct weights),
so for some conditions we give a first-order formula rather than an equation.
A number of equations $\Phi_i \pmb{=} \Phi_i'$ ensure that the condition holds
in the sense that for any $(w,\sigma)$,
$\sat{\Phi_i}(w,\sigma) = \sat{\Phi_i'}(w,\sigma)$ for each $i$
if and only if $(w,\sigma)$ satisfies the condition.
By Lemma \ref{lem:closed-under-and},
once we have a number of equations $\Phi_i \pmb{=} \Phi_i'$
that together ensure that all conditions are satisfied,
they can be combined into the single equation $\sum_i \Phi_i \pmb{=} \sum_i \Phi_i'$:
$(w,\sigma)$ satisfies all conditions if and only if
$\sat{\sum_i \Phi_i}(w,\sigma)= \sat{\sum_i \Phi_i'}(w,\sigma)$.
\begin{enumerate}
\item
We describe this condition using a first-order formula and two equations.
The formula makes sure that the word is of the form
$d_0d_1\cdots d_k$, where each $d_i$ is of the form
$\ensuremath{\texttt{1}} y_0\ensuremath{\texttt{1}}^{n_1}\ensuremath{\texttt{m}}\ensuremath{\texttt{1}}^{n_2}y_1\cdots \ensuremath{\texttt{1}}^{n_{K-1}} \ensuremath{\texttt{m}} \ensuremath{\texttt{1}}^{n_K}y_K$, where $n_1, \ldots, n_K$ is a sequence of non-negative integers and $y_K = \ensuremath{{\triangleleft}}$:
\begin{align*}
& (\forall x.~ \neg \firstc{x} \lor (\ensuremath{\texttt{1}}(x) \land \exists y.~ \nxt{x,y} \land \sym{y} )) \\
&\land (\exists x.~ \last{x} \land \ensuremath{{\triangleleft}}(x)) \\
&\land \forall x.~ \neg \sym{x} \lor \last{x} \\
&\lor \exists y,z.~ \nxtsymb{x,z} \land x \leq y \leq z \land \ensuremath{\texttt{m}}(y) \\
&\land \forall i.~ i \leq x \lor z \leq i \lor i = y \lor \ensuremath{\texttt{1}}(i) .
\end{align*}
The following equation ensures that the same number of $\ensuremath{\texttt{1}}$'s appear before and after $\ensuremath{\texttt{m}}$:
\begin{align*}
&\sum_{x} \ensuremath{\texttt{m}}(x) \mathbin{?} \sum_{y} \exists x',y'.~ x' \leq y \leq x \leq y' \\
&\land \nxtsymb{x',y'} \land \ensuremath{\texttt{1}}(y) \mathbin{?} \valone{x} \\
&\pmb{=} \\
&\sum_{x} \ensuremath{\texttt{m}}(x) \mathbin{?} \sum_{y} \exists x',y'.~ x' \leq x \leq y \leq y' \\
&\land \nxtsymb{x',y'} \land \ensuremath{\texttt{1}}(y) \mathbin{?} \valone{x}
\end{align*}
Finally, in the context of the formula and equation above, the following equation ensures that for every $1 \leq i \leq K$, $n_i = 2^{i}$:
\begin{align*}
&\sum_{x} \sym{x} \land \neg \last{x} \mathbin{?} \\
&(\forall y.~ \neg \sym{y} \lor x \leq y) \mathbin{?} \valone{x} : \\
&\sum_{y} \exists z.~ \sym{z} \land \nxtsymb{z,x} \\
&\land z \leq y \leq x \land \ensuremath{\texttt{1}}(y) \mathbin{?} \valone{x} \\
&\pmb{=} \\
&\sum_{x} \sym{x} \land \neg \last{x} \mathbin{?} \\
&\sum_{y} \exists z.~ \ensuremath{\texttt{m}}(z) \land \nxtsymb{x,z} \\
&\land x \leq y \leq z \land \ensuremath{\texttt{1}}(y) \mathbin{?} \valone{x}
\end{align*}
\item
For this condition, it suffices to require that between each pair of state symbols, there is a $\ensuremath{{\triangleleft}}$ symbol, and between two occurrences of $\ensuremath{{\triangleleft}}$, there is a state symbol, and right after each state symbol, there is a symbol from the alphabet. The following first-order formula expresses this:
\begin{align*}
&\forall x,y.~ \neg \ensuremath{{\triangleleft}}(x) \lor \neg \ensuremath{{\triangleleft}}(y) \lor \neg x \leq y \\
&~~~~\lor \exists z.~ x \leq z \leq y \land \bigvee_{q \in Q} q(z) \\
&\land \forall x,y.~ \neg \bigvee_{q \in Q} q(x) \lor \neg \bigvee_{q \in Q} q(y) \\
&~~~~\lor \neg x \leq y \lor \exists z.~ x \leq z \leq y \land \ensuremath{{\triangleleft}}(z) \\
& \land \forall x.~ \neg \bigvee_{q \in Q} q(x) \lor \exists y.~ \sym{y} \\
&~~~~\land \nxtsymb{x,y} \land \neg \ensuremath{{\triangleleft}}(y)
\end{align*}
\item This condition can be imposed by a first-order formula that explicitly describes $c_0$.
\item By the first-order formula
$\exists x.~ H(x) \land \forall y.~ \neg \ensuremath{{\triangleleft}}(y) \lor \neg y \geq x \lor \last{y}$.
\item
We demonstrate how to treat case a. The other cases are analogous.
Fix a transition $(q,s,q',s',L) \in \delta$ and $d \in \Sigma$.
We use the following shorthand.
\begin{align*}
\tr{x,y,z} &\stackrel{\textsf{def}}{=} q(y) \land y \leq x \\
&\phantom{{}\stackrel{\textsf{def}}{=}{}}\land \forall y'.~ \neg (\ensuremath{{\triangleleft}}(y') \land y \leq y' \leq x) \\
&\phantom{{}\stackrel{\textsf{def}}{=}{}}\land s(z) \land \nxtsymb{y,z} \quad \text{and} \\
\trprime{x,y,z} &\stackrel{\textsf{def}}{=} q'(y) \land y \leq x \\
&\phantom{{}\stackrel{\textsf{def}}{=}{}}\land \forall y'.~ \neg (\ensuremath{{\triangleleft}}(y') \land y \leq y' \leq x) \\ &\phantom{{}\stackrel{\textsf{def}}{=}{}}\land s'(z) \land \nxtsymb{y,z}
\end{align*}
Let $s_1,s_2,\ldots,s_m$ be a permutation of $\Sigma$.
We use the following equation:
\begin{align*}
&\sum_{x} \ensuremath{{\triangleleft}}(x)
{\land}
\exists y. (\ensuremath{{\triangleleft}}(y) {\land} \neg y {\leq} x )
{\land}
\exists y,z. \tr{x,y,z}
{\mathbin{?}} \\
&~~\sum_{y} \sym{y} {\land} y {\leq} x {\land} \forall z. (x {\leq} z \lor \neg y {\leq} z \lor \neg \ensuremath{{\triangleleft}}(z))
{\mathbin{?}} \\
&~~~~q(y)
{\mathbin{?}}
\pos{y}{\valv{x}{q}}: s_1(y)
{\mathbin{?}}
\pos{y}{\valv{x}{s_1}}: s_2(y)
{\mathbin{?}}
\pos{y}{\valv{x}{s_2}}: \\
&~~~~\cdots : s_m(y)
{\mathbin{?}}
\pos{y}{\valv{x}{s_m}} \\
&\pmb{=} \\
&\sum_{x} \ensuremath{{\triangleleft}}(x)
{\land}
\exists y. (\ensuremath{{\triangleleft}}(y) {\land} \neg y {\leq} x)
{\land}
\exists y,z. \tr{x,y,z}
{\mathbin{?}} \\
&~\sum_{y} \sym{y} {\land} x {\leq} y {\land} \forall z. (z {\leq} x \lor \neg z {\leq} y \lor \neg \ensuremath{{\triangleleft}}(z))
{\mathbin{?}} \\
&~q'(y) {\land} \exists z. \sym{z} {\land} \nxtsymb{y,z} {\land} s_1(z)
{\mathbin{?}}
\pos{y}{\valv{x}{s_1}} {:} \\
&~q'(y) {\land} \exists z. \sym{z} {\land} \nxtsymb{y,z} {\land} s_2(z)
{\mathbin{?}}
{\pos{y}{\valv{x}{s_2}}} {:} {\cdots} \\
&~q'(y) {\land} \exists z. \sym{z} {\land} \nxtsymb{y,z} {\land} s_m(z)
{\mathbin{?}}
\pos{y}{\valv{x}{s_m}} {:} \\
&~\exists z. q'(z) {\land} \nxtsymb{z,y}
{\mathbin{?}}
\pos{y}{\valv{x}{q}} {:} \\
&~\exists z,z'. q'(z) {\land} \nxtsymb{z,z'} {\land} \nxtsymb{z',y}
{\mathbin{?}}
\pos{y}{\valv{x}{s}} {:} \\
&~s_1(y)
{\mathbin{?}}
\pos{y}{\valv{x}{s_1}} {:} s_2(y)
{\mathbin{?}}
\pos{y}{\valv{x}{s_2}} {:} \cdots {:} s_m(y) {\mathbin{?}} \pos{y}{\valv{x}{s_m}}
\end{align*}
The right-hand side of the equation ensures that if the effects of the transition are reversed, then all symbols are in the same place as in the previous configuration.
We can then make sure that the state has changed to $q'$ and the symbol to $s'$ with the following formula:
\begin{align*}
\forall x,y. &\neg (\ensuremath{{\triangleleft}}(x) \land \ensuremath{{\triangleleft}}(y) \land \neg y {\leq} x \land \exists x_q,x_s. \tr{x,x_q,x_s}) \\
&\lor \exists y_q,y_s.~ \trprime{y,y_q,y_s}.
\end{align*}
\end{enumerate}
\begin{proof}[Proof of Theorem \ref{thm:sat-is-undec}]
We use a reduction from the Halting Problem, as it is described above.
It is not hard to see why conditions 1 to 5 suffice for the correctness of the reduction, nor that the formulas we construct ensure the corresponding conditions.
Furthermore, notice that all formulas are \textsf{core-wFO}\ formulas; therefore the problem is undecidable already for \textsf{core-wFO}, and hence also for the more general \textsf{core-wMSO}.
\end{proof}
\subsection{\textsf{MSO}}
\textsf{MSO}\ over finite strings is equivalent to finite automata \cite{B60,E61,T61}, and therefore it also has a decidable validity problem (albeit with a nonelementary complexity). This means that the theory of \textsf{MSO}\ over finite strings has a recursive and complete axiomatization. One such axiomatization is given in \cite{GC12}, and therefore for a set $\Gamma \cup \{ \varphi \}$ of \textsf{MSO}\ formulas,
$\Gamma \vdash \varphi$ means that $\varphi$ is derivable from these axioms and $\Gamma$ ($\Gamma$ may be omitted when empty).
Since \textsf{FO}\ over finite strings also has a decidable validity problem,
it likewise has a recursive and complete axiomatization.
For the purpose of this paper, we fix one such axiomatization,
and we can thus also write $\Gamma \vdash \varphi$ when $\Gamma \cup \{\varphi\}$
is a set of \textsf{FO}\ formulas.
\begin{theorem}[Completeness of \textsf{MSO}\ \cite{GC12}]\label{thm:completeness-mso}
For every \textsf{MSO}\ formula $\varphi$,
$\models \varphi$ if and only if $\vdash \varphi$.
\end{theorem}
\begin{corollary}\label{cor:completeness-mso}
For every finite $\Gamma$,
$\Gamma \models \varphi$ if and only if $\Gamma \vdash \varphi$.
\end{corollary}
\subsection{\textsf{step-wMSO}}
The equational axioms for $\textsf{step-wMSO}$ are given in Table \ref{tab:swmso-axioms}.
Axiom $(S1)$ allows one to add additional assumptions to $\Gamma$,
and $(S2)$ shows how negation affects the conditional operator
by switching the order of the results.
Axiom $(S3)$ shows that if the formula $\varphi$ that is being conditioned on
can be derived from $\Gamma$ itself,
then the first choice of the conditional will always be taken.
Finally, $(S4)$ gives a way to remove assumptions
and put them into a conditional statement instead:
If the first choice of the conditional is equivalent to $\Psi$
under the assumption that $\varphi$ is true,
and the second choice of the conditional is equivalent to $\Psi$
under the assumption that $\varphi$ is false,
then the conditional is equivalent to $\Psi$.
\begin{table}
\centering
\begin{tabular}{r l}
\hline
($S1$): & $\Gamma \vdash \Psi_1 \approx \Psi_2$ implies $\Gamma \cup \{\varphi\} \vdash \Psi_1 \approx \Psi_2$ \\[1ex]
($S2$): & $\Gamma \vdash \neg \varphi \mathbin{?} \Psi_1 : \Psi_2 \approx \varphi \mathbin{?} \Psi_2 : \Psi_1$ \\[1ex]
($S3$): & $\Gamma \vdash \varphi$ implies $\Gamma \vdash \varphi \mathbin{?} \Psi_1 : \Psi_2 \approx \Psi_1$ \\[1ex]
($S4$): & $\begin{aligned}&\Gamma \cup \{\varphi\} \vdash \Psi_1 \approx \Psi \text{ and } \Gamma \cup \{\neg \varphi\} \vdash \Psi_2 \approx \Psi \\[-1ex]
&\text{implies } \Gamma \vdash \varphi \mathbin{?} \Psi_1 : \Psi_2 \approx \Psi \end{aligned}$ \\
\hline \\
\end{tabular}
\caption{Axioms for \textsf{step-wMSO}.}
\label{tab:swmso-axioms}
\end{table}
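To build intuition for these axioms, one can model the semantics of the conditional operator directly and test the axioms on concrete valuations. The following Python sketch is our own illustration and not part of the development: an \textsf{MSO}\ condition is abstracted to the Boolean value it takes on a fixed $(w,\sigma)$, a \textsf{step-wMSO}\ formula is either a weight or a conditional node, and axioms ($S2$) and ($S3$) are checked by exhaustive evaluation.

```python
from itertools import product

# A step formula is either a weight (any non-tuple value) or a node
# ('?', phi, psi1, psi2), where phi is the Boolean value that the MSO
# condition takes on the fixed pair (w, sigma) under consideration.
def ev(psi):
    if isinstance(psi, tuple) and psi[0] == '?':
        _, phi, psi1, psi2 = psi
        return ev(psi1) if phi else ev(psi2)
    return psi

# (S2): (not phi) ? psi1 : psi2  agrees with  phi ? psi2 : psi1.
for phi, r1, r2 in product([True, False], [0, 1], [0, 1]):
    assert ev(('?', not phi, r1, r2)) == ev(('?', phi, r2, r1))

# (S3): when phi holds, phi ? psi1 : psi2 collapses to psi1.
for r1, r2 in product([0, 1], repeat=2):
    assert ev(('?', True, r1, r2)) == ev(r1)
```

Axioms ($S1$) and ($S4$) concern derivability under a set of assumptions $\Gamma$ rather than a single valuation, so they are not captured by this pointwise check.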
Before proving that the axioms given in Table \ref{tab:swmso-axioms} are complete,
we first give some examples of theorems that can be derived from the axioms,
some of which will be used in the proof of completeness.
The first two of these are
particularly
interesting,
since they give properties that are common in many logical systems,
namely the principle of explosion and the cut elimination rule.
The remaining theorems show that the conditional operator
behaves
as expected,
and that all of these behaviours can be inferred
from the four axioms of Table \ref{tab:swmso-axioms}.
\begin{proposition}\label{prop:swmso-theorems}\label{prop:cwmso-theorems}
The following theorems can be derived in \textsf{step-wMSO}.
\begin{enumerate}
\item $\Gamma \vdash \Psi_1 \approx \Psi_2$ for any
$\Psi_1$ and $\Psi_2$ if $\Gamma$ is inconsistent.
\item $\Gamma \vdash \Psi_1 \approx \Psi_2$ if $\Gamma \vdash \varphi$ and $\Gamma \cup \{\varphi\} \vdash \Psi_1 \approx \Psi_2$.
\item $\Gamma \vdash \varphi \mathbin{?} \Psi : \Psi \approx \Psi$.
\item If $\Gamma \cup \{\varphi_1,\varphi_2\} \vdash \Psi_1 \approx \Psi_1'$,
$\Gamma \cup \{\varphi_1, \neg \varphi_2\} \vdash \Psi_1 \approx \Psi_2'$,
$\Gamma \cup \{\neg \varphi_1, \varphi_2\} \vdash \Psi_2 \approx \Psi_1'$, and
$\Gamma \cup \{\neg \varphi_1, \neg \varphi_2\} \vdash \Psi_2 \approx \Psi_2'$, then
$\Gamma \vdash \varphi_1 \mathbin{?} \Psi_1 : \Psi_2 \approx \varphi_2 \mathbin{?} \Psi_1' : \Psi_2'$.
\item $\Gamma \vdash \varphi_1 \mathbin{?} \Psi_1 : \Psi_2 \approx \varphi_2 \mathbin{?} \Psi_1 : \Psi_2$
if $\Gamma \vdash \varphi_1 \leftrightarrow \varphi_2$.
\item $\Gamma \vdash \varphi \mathbin{?} \Psi_1 : \Psi_2 \approx \Psi_2$
if $\Gamma \vdash \neg \varphi$.
\item If $\Gamma \cup \{\varphi\} \vdash \Psi_1 \approx \Psi_1'$ and
$\Gamma \cup \{\neg \varphi\} \vdash \Psi_2 \approx \Psi_2'$ then
$\Gamma \vdash \varphi \mathbin{?} \Psi_1 : \Psi_2 \approx \varphi \mathbin{?} \Psi_1' : \Psi_2'$.
\item If $\Gamma \cup \{\varphi\} \vdash \Psi_1 \approx \Psi_2$ and
$\Gamma \cup \{\neg \varphi\} \vdash \Psi_1 \approx \Psi_2$
then $\Gamma \vdash \Psi_1 \approx \Psi_2$.
\item $\Gamma \cup \{\varphi\} \vdash \varphi \mathbin{?} \Psi_1 : \Psi_2 \approx \Psi_1$.
\end{enumerate}
\end{proposition}
\begin{proof}
We only prove some of these claims for illustration.
\begin{enumerate}
\item[1) ] Let $\Psi_1$ and $\Psi_2$ be arbitrary \textsf{step-wMSO}\ formulas
and assume that $\Gamma$ is inconsistent.
Then $\Gamma \vdash \varphi$
and $\Gamma \vdash \neg \varphi$.
Then axiom ($S3$) gives
$\Gamma \vdash \varphi \mathbin{?} \Psi_1 : \Psi_2 \approx \Psi_1$ and $\Gamma \vdash \neg \varphi \mathbin{?} \Psi_2 : \Psi_1 \approx \Psi_2$.
Since $\Gamma \vdash \varphi \mathbin{?} \Psi_1 : \Psi_2 \approx \neg \varphi \mathbin{?} \Psi_2 : \Psi_1$ by axiom ($S2$), this implies $\Gamma \vdash \Psi_1 \approx \Psi_2$.
\item[4)] Using ($S4$), $\Gamma \cup \{\varphi_1, \varphi_2\} \vdash \Psi_1 \approx \Psi_1'$
and $\Gamma \cup \{\varphi_1, \neg \varphi_2\} \vdash \Psi_1 \approx \Psi_2'$ gives
$\Gamma \cup \{\varphi_1\} \vdash \varphi_2 \mathbin{?} \Psi_1' : \Psi_2' \approx \Psi_1$.
Likewise, using the other two assumptions, we get
$\Gamma \cup \{\neg \varphi_1\} \vdash \varphi_2 \mathbin{?} \Psi_1' : \Psi_2' \approx \Psi_2$,
and a final application of ($S4$) then gives
$\Gamma \vdash \varphi_1 \mathbin{?} \Psi_1 : \Psi_2 \approx \varphi_2 \mathbin{?} \Psi_1' : \Psi_2'$.
\item[5)] Assume that $\Gamma \vdash \varphi_1 \leftrightarrow \varphi_2$.
Then, because of reflexivity and since $\{\varphi_1, \neg \varphi_2\}$ and $\{\neg \varphi_1, \varphi_2\}$ are inconsistent under $\Gamma$, the fourth item of this proposition gives that $\Gamma \vdash \varphi_1 \mathbin{?} \Psi_1 : \Psi_2 \approx \varphi_2 \mathbin{?} \Psi_1 : \Psi_2$.
\qedhere
\end{enumerate}
\end{proof}
The proof of completeness is
by a case analysis and induction on
the structure of the two formulas $\Psi_1$ and $\Psi_2$.
Lemma \ref{lem:completeness-swmso}
covers
the case where
both
sides of the equation
are conditional statements.
\begin{lemma}\label{lem:completeness-swmso}
If $\varphi_1 \mathbin{?} \Psi_1' : \Psi_1'' \sim_\Gamma \varphi_2 \mathbin{?} \Psi_2' : \Psi_2''$, then
\[\begin{array}{l l} \Psi_1' \sim_{\Gamma \cup \{\varphi_1, \varphi_2\}} \Psi_2', & \Psi_1' \sim_{\Gamma \cup \{\varphi_1, \neg \varphi_2\}} \Psi_2'', \\ \Psi_1'' \sim_{\Gamma \cup \{\neg \varphi_1, \varphi_2\}} \Psi_2', \text{ and} & \Psi_1'' \sim_{\Gamma \cup \{\neg \varphi_1, \neg \varphi_2\}} \Psi_2''.\end{array}\]
\end{lemma}
\begin{proof}
We show why the first equivalence is true; the remaining cases are similar.
Let $\Psi_1 = \varphi_1 \mathbin{?} \Psi_1' : \Psi_1''$ and
$\Psi_2 = \varphi_2 \mathbin{?} \Psi_2' : \Psi_2''$.
If $(w,\sigma) \in \sat{\Gamma \cup \{\varphi_1, \varphi_2\}}$,
then also $(w,\sigma) \in \sat{\Gamma}$, so
\begin{align*}
\sat{\Psi_1'}(w,\sigma) &= \sat{\Psi_1}(w,\sigma) = \sat{\Psi_2}(w,\sigma) = \sat{\Psi_2'}(w,\sigma).
\qedhere
\end{align*}
\end{proof}
\begin{theorem}\label{thm:completeness-swmso}
For finite $\Gamma$ we have
$\Psi_1 \sim_\Gamma \Psi_2$ if and only if $\Gamma \vdash \Psi_1 \approx \Psi_2$.
\end{theorem}
\begin{proof}
Soundness can be proved by simply checking the
validity
of each axiom.
For completeness, note that if $\Gamma$ is inconsistent,
then immediately $\Gamma \vdash \Psi_1 \approx \Psi_2$
by Proposition~\ref{prop:swmso-theorems}(1).
In the rest of the proof we may therefore assume that $\Gamma$ is consistent.
The proof now proceeds by induction on
$|\Psi_1|_?+|\Psi_2|_?$, where $|\Psi|_?$
is
defined as follows.
\[|\Psi|_? = \begin{cases} 0, & \text{if } \Psi = r \\ 1 + |\Psi'|_? + |\Psi''|_? ,& \text{if } \Psi = \varphi \mathbin{?} \Psi' : \Psi'' \end{cases}\]
Case $|\Psi_1|_? + |\Psi_2|_? = 0$:
In this case, $\Psi_1 = r_1$ and $\Psi_2 = r_2$ for some $r_1,r_2 \in R$.
Since $\Gamma$ is consistent, there is some $(w,\sigma) \in \sat{\Gamma}$, and by assumption
$r_1 = \sat{\Psi_1}(w,\sigma) = \sat{\Psi_2}(w,\sigma) = r_2$, so we get $\Gamma \vdash \Psi_1 \approx \Psi_2$ by reflexivity.
Case $|\Psi_1|_? + |\Psi_2|_? > 0$:
In this case, without loss of generality, $\Psi_1 = \varphi \mathbin{?} \Psi_1' : \Psi_1''$.
From the semantics, we have that $\Psi_1 \sim_{\Gamma \cup \{\varphi \}} \Psi_1'$ and $\Psi_1 \sim_{\Gamma \cup \{\neg \varphi \}} \Psi_1''$, so $\Psi_2 \sim_{\Gamma \cup \{\varphi \}} \Psi_1'$ and $\Psi_2 \sim_{\Gamma \cup \{\neg \varphi \}} \Psi_1''$.
From the inductive hypothesis, we have that
\(
\Gamma \cup \{\varphi\} \vdash \Psi_1' \approx \Psi_2, \text{ and }
\Gamma \cup \{\neg \varphi\} \vdash \Psi_1'' \approx \Psi_2
.
\)
From axiom (S4), we conclude that
\( \Gamma \vdash \Psi_1 \approx \Psi_2.\)
\end{proof}
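The induction measure $|\Psi|_?$ used in the proof is straightforward to compute. The following Python sketch is our own illustration (the encoding of formulas as nested tuples is an assumption of the sketch): it computes the measure and checks that passing to a branch of a conditional strictly decreases it, which is what drives the induction.

```python
# |psi|_? from the completeness proof: the number of conditional nodes.
# A formula is a weight (non-tuple) or a node ('?', phi, psi1, psi2),
# where phi stands for an (unevaluated) MSO condition.
def size(psi):
    if isinstance(psi, tuple) and psi[0] == '?':
        return 1 + size(psi[2]) + size(psi[3])
    return 0

psi = ('?', 'phi1', ('?', 'phi2', 1, 0), 2)   # phi1 ? (phi2 ? 1 : 0) : 2
assert size(psi) == 2
# Each branch of a conditional has strictly smaller measure:
assert size(psi[2]) < size(psi) and size(psi[3]) < size(psi)
```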
\subsection{\textsf{core-wMSO}\ Without Sums}
We
present
a complete axiomatization of a fragment of $\textsf{core-wMSO}$
in which $+$
is the only allowed sum operator.
Let $\textsf{core-wMSO}(?, +)$ be the fragment
given by
\[\Phi ::= \mathbf{0} \mid \textstyle{\prod_x} \Psi \mid \varphi \mathbin{?} \Phi_1 : \Phi_2 \mid \Phi_1 + \Phi_2,\]
where $\Psi$ is a \textsf{step-wMSO}\ formula and $\varphi$ a \textsf{MSO}\ formula.
The corresponding first-order fragment \textsf{core-wFO}(?,+) is
obtained from the same grammar but letting $\Psi$ be a \textsf{step-wFO}\ formula
and $\varphi$ a \textsf{FO}\ formula.
Droste and Gastin studied the first-order fragment \textsf{core-wFO}(?,+)
in \cite{DG19},
where they showed that it is expressively equivalent to
aperiodic finitely ambiguous weighted automata.
This result contrasts with the situation for
the full first-order \textsf{core-wFO},
which they show to be expressively equivalent
to aperiodic polynomially ambiguous weighted automata.
Here \emph{aperiodic} means that there exists an integer $m \geq 1$
such that for any word $w$, $w$ concatenated with itself $m$ times is accepted
if and only if $w$ concatenated with itself $m+1$ times is accepted,
\emph{polynomially ambiguous} means that there is a polynomial $p$
such that each word $w$ has at most $p(|w|)$ successful runs,
and \emph{finitely ambiguous} means that the polynomial is constant.
We are not aware of a similar characterization
of the second-order fragment \textsf{core-wMSO}(?,+).
In \cite{GM18} it is shown that \emph{adding} various additional operators
to the logic does not increase its expressivity,
but the question of the expressive power
of various fragments of the logic is not addressed.
In the following we give some examples of the expressivity of the fragment \textsf{core-wMSO}(?,+).
\begin{example}
Consider again Example \ref{ex:1}.
The formula in that example does not belong to
\textsf{core-wMSO}(?,+),
because it uses the general sum $\sum_x$.
Instead we can count the number of $a$'s that appear before any $b$'s in a word.
To do this, consider the formula
$\varphi = P_a(x) \land \forall y.(P_b(y) \rightarrow x \leq y)$,
and let $\Psi = \varphi \mathbin{?} 1 : 0$ and $\Phi = \prod_x \Psi$.
For the word $w = abaa$ we then get
\begin{align*}
\sat{\Phi}(w,\sigma) &= \sat{\Psi}(w,\sigma[x \mapsto 1])\sat{\Psi}(w,\sigma[x \mapsto 2]) \\
&\phantom{{}={}}\sat{\Psi}(w,\sigma[x \mapsto 3])\sat{\Psi}(w,\sigma[x \mapsto 4]) = \multiset{1000},
\end{align*}
which correctly tells us that there is one $a$ before any $b$'s.
It is a simple matter to adapt this to also count
the collective number of different things,
such as the total number of $a$'s and $c$'s before any $b$'s.
However, we can also, in some sense, count individually different things.
If for example we want to count separately the number of $a$'s
and the number of $b$'s in a word, we can let
\begin{align*}
\varphi_1 = P_a(x), \quad \Psi_1 = \varphi_1 \mathbin{?} 1 : 0, \quad \Phi_1 = \prod_x \Psi_1, \\
\varphi_2 = P_b(x), \quad \Psi_2 = \varphi_2 \mathbin{?} 2 : 0, \quad \Phi_2 = \prod_x \Psi_2,
\end{align*}
and finally $\Phi = \Phi_1 + \Phi_2$.
If we again take $w = abaa$, then
\[\sat{\Phi}(w,\sigma) = \sat{\Phi_1}(w,\sigma) \uplus \sat{\Phi_2}(w,\sigma) = \multiset{1011, 0200},\]
and by counting off the number of $1$'s in this multiset,
we obtain the number of $a$'s,
and likewise the number of $2$'s gives the number of $b$'s.
\end{example}
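The computations in this example can be replayed mechanically. The following Python sketch is our own illustration of the semantics on this example (representing weight strings as Python strings and multisets as \texttt{Counter}s is our choice): a product $\prod_x \Psi$ yields the string of weights taken position by position, and $+$ yields the multiset union.

```python
from collections import Counter

def prod(w, step):
    """Semantics of prod_x Psi: the string of weights, position by position."""
    return ''.join(str(step(a)) for a in w)

w = "abaa"
phi1 = prod(w, lambda a: 1 if a == 'a' else 0)   # marks a's with 1
phi2 = prod(w, lambda a: 2 if a == 'b' else 0)   # marks b's with 2
# Phi = Phi_1 + Phi_2 denotes the multiset union of the two results.
assert Counter([phi1, phi2]) == Counter(["1011", "0200"])
assert phi1.count('1') == 3 and phi2.count('2') == 1   # three a's, one b
```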
For a formula $\Phi$,
let $\mathtt{var}(\Phi)$ be the set of variables used in $\Phi$,
and let $\Phi[y/x]$ be the formula resulting from
replacing each occurrence of the variable $x$ with the variable $y$.
The axioms for the fragment $\textsf{core-wMSO}(?,+)$ are then given in Table \ref{tab:cwfo-axioms}.
Axioms ($C1$)-($C3$) give standard properties of sum,
whereas ($C4$) and ($C5$) take care of the product.
Axioms ($C6$)-($C9$) are similar to the axioms for $\textsf{step-wMSO}$,
and finally, axiom ($C10$) shows how sum distributes over the conditional operator.
\begin{table}
\centering
\begin{tabular}{l l}
\hline
($C1$): & $\Gamma \vdash \Phi + \mathbf{0} \approx \Phi$ \\[1ex]
($C2$): & $\Gamma \vdash \Phi_1 + \Phi_2 \approx \Phi_2 + \Phi_1$ \\[1ex]
($C3$): & $\Gamma \vdash (\Phi_1 + \Phi_2) + \Phi_3 \approx \Phi_1 + (\Phi_2 + \Phi_3)$ \\[1ex]
($C4$): & $\begin{aligned}&\Gamma \vdash \Psi_1 \approx \Psi_2 \text{ implies } \Gamma \vdash \textstyle \prod_x \Psi_1 \approx \prod_x \Psi_2 \\[-1ex]
&\text{if } x \text{ is not free in } \Gamma\end{aligned}$ \\[2ex]
($C5$): & $\Gamma \vdash \prod_x \Psi \approx \prod_y \Psi[y / x]$ if $y \notin \mathtt{var}(\Psi)$ \\[1ex]
($C6$): & $\Gamma \vdash \Phi_1 \approx \Phi_2$ implies $\Gamma \cup \{\varphi\} \vdash \Phi_1 \approx \Phi_2$ \\[1ex]
($C7$): & $\Gamma \vdash \neg \varphi \mathbin{?} \Phi_1 : \Phi_2 \approx \varphi \mathbin{?} \Phi_2 : \Phi_1$ \\[1ex]
($C8$): & if $\Gamma \vdash \varphi$ then $\Gamma \vdash \varphi \mathbin{?} \Phi_1 : \Phi_2 \approx \Phi_1$ \\[1ex]
($C9$): & $\begin{aligned}&\Gamma \cup \{\varphi\} \vdash \Phi_1 \approx \Phi \text{ and } \Gamma \cup \{\neg \varphi\} \vdash \Phi_2 \approx \Phi \\[-1ex]
&\text{implies } \Gamma \vdash \varphi \mathbin{?} \Phi_1 : \Phi_2 \approx \Phi \end{aligned}$ \\[2ex]
($C10$): & $\Gamma \vdash (\varphi \mathbin{?} \Phi' : \Phi'') + \Phi \approx \varphi \mathbin{?} (\Phi' + \Phi) : (\Phi'' + \Phi)$ \\
\hline \\
\end{tabular}
\caption{Axioms for $\textsf{core-wMSO}(?,+)$.}
\label{tab:cwfo-axioms}
\end{table}
Since all of the axioms for $\textsf{step-wMSO}$ are also included
in the axiomatization for $\textsf{core-wMSO}(?,+)$ (because both include the conditional operator),
we get that the theorems we derived in Proposition \ref{prop:swmso-theorems}
are also derivable for $\textsf{core-wMSO}(?,+)$.
Likewise, Lemma \ref{lem:completeness-swmso} also carries over to $\textsf{core-wMSO}(?,+)$.
Our first lemma shows the connection between the product operator $\prod_x$
and the first-order
universal
quantifier $\forall x$,
which implies that
axiom ($C4$) is sound.
\begin{lemma}\label{lem:forall}
If $x$ does not appear as a free variable in $\Gamma,$ then
$\prod_x \Psi_1 \sim_\Gamma \prod_x \Psi_2$ if and only if $\Psi_1 \sim_{\Gamma} \Psi_2$.
\end{lemma}
\begin{proof}
(${\implies}$) $\prod_x \Psi_1 \sim_\Gamma \prod_x \Psi_2$ implies that
$\sat{\Psi_1}(w, \sigma[x \mapsto i]) = \sat{\Psi_2}(w, \sigma[x \mapsto i])$
for all $i$ and $(w,\sigma)$ such that $(w, \sigma) \models \Gamma$.
In particular,
$\sat{\Psi_1}(w,\sigma) = \sat{\Psi_2}(w,\sigma)$
for all $(w,\sigma) \models \Gamma$,
so $\Psi_1 \sim_{\Gamma} \Psi_2$.
($\impliedby$) $\Psi_1 \sim_{\Gamma} \Psi_2$
means that $\sat{\Psi_1}(w,\sigma) = \sat{\Psi_2}(w,\sigma)$
for all $(w,\sigma) \models \Gamma$.
This implies that $\sat{\Psi_1}(w,\sigma[x \mapsto i]) = \sat{\Psi_2}(w,\sigma[x \mapsto i])$
for all $i$ and $(w,\sigma[x \mapsto i]) \models \Gamma$.
But since $x$ does not appear free in $\Gamma$,
$(w,\sigma[x \mapsto i]) \models \Gamma$ if and only if $(w,\sigma) \models \Gamma$, and therefore
$\sat{\Psi_1}(w,\sigma[x \mapsto i]) = \sat{\Psi_2}(w,\sigma[x \mapsto i])$
for all $i$ and $(w,\sigma) \models \Gamma$.
This in turn implies
$\sat{\prod_x \Psi_1}(w,\sigma) = \sat{\prod_x \Psi_2}(w,\sigma)$
for all $(w,\sigma) \models \Gamma$, so $\prod_x \Psi_1 \sim_\Gamma \prod_x \Psi_2$.
\end{proof}
A key part of the proof of completeness is to put formulas
into the following notion of normal form,
where occurrences of the conditional operator are grouped together
and all come before any sum or product is applied.
\begin{definition}
A $\textsf{core-wMSO}(?, +)$ formula $\Phi$ is in \emph{normal form} if
$\Phi$ is generated by the following grammar:
\[ N ::= \varphi \mathbin{?} N_1 : N_2 \mid M \mid \mathbf{0} \quad \text{and} \quad
M ::= {\textstyle\prod_x} \Psi \mid M_1 + M_2.\]
\end{definition}
Every $\textsf{core-wMSO}(?,+)$ formula has an equivalent normal form,
which will allow us to only reason about formulas in normal form in the proof.
In order to show this, we make use of the following technical lemma,
which takes care of the case of the sum operator.
\begin{lemma}\label{lem:conditional}
If $\Phi_1$ and $\Phi_2$ are in normal form,
then there exists a formula $\Phi$, also in normal form,
such that $\Gamma \vdash \Phi \approx \Phi_1 + \Phi_2$.
\end{lemma}
\begin{proof}
The proof is by induction on the maximum number of nested occurrences
of the conditional operator within $\Phi_1$ and $\Phi_2$.
Note that since these are in normal form,
occurrences of the conditional operator will always appear
consecutively as the outermost operators.
Formally, we define, on formulas $\Phi$ in normal form,
the following function which counts the number of nested occurrences of the conditional operator:
\[\#?(\Phi) = \begin{cases} 1 + \max\{\#?(\Phi'),\#?(\Phi'')\} & \text{if } \Phi = \varphi \mathbin{?} \Phi' : \Phi'' \\ 0 & \text{otherwise.}\end{cases}\]
Let $k = \max\{\#?(\Phi_1),\#?(\Phi_2)\}$.
$k = 0$:
Then neither $\Phi_1$ nor $\Phi_2$ contains a conditional, so $\Phi_1 + \Phi_2$ is already in normal form unless one of the summands is $\mathbf{0}$, in which case ($C1$) (together with ($C2$)) removes it.
$k > 0$:
We have three cases to consider:
(1) $\#?(\Phi_1) = \#?(\Phi_2)$,
(2) $\#?(\Phi_1) < \#?(\Phi_2)$, or
(3) $\#?(\Phi_1) > \#?(\Phi_2)$.
(1) Consider $\Phi_1 = \varphi_1 \mathbin{?} \Phi_1' : \Phi_1''$
and $\Phi_2 = \varphi_2 \mathbin{?} \Phi_2' : \Phi_2''$.
Now, by three applications of axiom ($C10$),
using ($C2$) to commute the sum where needed, we get
\begin{align*}
\Gamma &\vdash (\varphi_1 \mathbin{?} \Phi_1' : \Phi_1'') + (\varphi_2 \mathbin{?} \Phi_2' : \Phi_2'') \\
&\phantom{{}\vdash{}}\approx \varphi_1 \mathbin{?} (\varphi_2 \mathbin{?} (\Phi_1' + \Phi_2') : (\Phi_1' + \Phi_2'')) \\
&\phantom{{}\vdash{}\approx \varphi_1}: (\varphi_2 \mathbin{?} (\Phi_1'' + \Phi_2') : (\Phi_1'' + \Phi_2'')).
\end{align*}
Since $\#?(\Phi_1' + \Phi_2') < k$, $\#?(\Phi_1' + \Phi_2'') < k$,
$\#?(\Phi_1'' + \Phi_2') < k$, and $\#?(\Phi_1'' + \Phi_2'') < k$,
the induction hypothesis gives formulas $\Phi'$, $\Phi''$, $\Phi'''$, and $\Phi''''$,
all in normal form, such that
$\Gamma \vdash \Phi' \approx \Phi_1' + \Phi_2'$,
$\Gamma \vdash \Phi'' \approx \Phi_1' + \Phi_2''$,
$\Gamma \vdash \Phi''' \approx \Phi_1'' + \Phi_2'$, and
$\Gamma \vdash \Phi'''' \approx \Phi_1'' + \Phi_2''$.
Thus
\[\Phi = \varphi_1 \mathbin{?} (\varphi_2 \mathbin{?} \Phi' : \Phi'') : (\varphi_2 \mathbin{?} \Phi''' : \Phi'''')\]
is in normal form and satisfies $\Gamma \vdash \Phi \approx \Phi_1 + \Phi_2$.
(2), (3) These cases are simpler versions of case (1).
\end{proof}
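The construction in the proof of Lemma \ref{lem:conditional} amounts to a small recursive procedure that pulls conditionals outward. The following Python sketch is our own illustration of it (the tuple encoding, and the evaluation on a fixed world where every condition has already been resolved to a Boolean, are assumptions of the sketch): it combines two normal-form formulas under $+$ and checks that the result is again in normal form and denotes the same multiset.

```python
from collections import Counter

# A "flat" formula (the M part of the grammar) is a list of weight strings,
# one per product; a conditional is ('?', b, f1, f2), where b is the Boolean
# the MSO condition takes on the fixed (w, sigma).
def plus(f1, f2):
    """Pull conditionals outward, as axiom (C10) (with (C2)) allows."""
    if isinstance(f1, tuple) and f1[0] == '?':
        return ('?', f1[1], plus(f1[2], f2), plus(f1[3], f2))
    if isinstance(f2, tuple) and f2[0] == '?':
        return ('?', f2[1], plus(f1, f2[2]), plus(f1, f2[3]))
    return f1 + f2                      # both flat: concatenate the sums

def normal(f):
    """Normal form: below the outer conditional spine, no '?' occurs."""
    if isinstance(f, tuple) and f[0] == '?':
        return normal(f[2]) and normal(f[3])
    return True

def denote(f):
    if isinstance(f, tuple) and f[0] == '?':
        return denote(f[2]) if f[1] else denote(f[3])
    return Counter(f)                   # a flat sum denotes a multiset

f1 = ('?', True, ['10'], ['01'])
f2 = ('?', False, ['11'], ['00'])
f = plus(f1, f2)
assert normal(f)
assert denote(f) == denote(f1) + denote(f2)   # Counter + is multiset union
```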
\begin{lemma}\label{lem:normal-form}
For each $\Gamma$ and $\textsf{core-wMSO}(?, +)$ formula $\Phi$,
there is a formula $\Phi'$ in normal form such that
$\Gamma \vdash \Phi \approx \Phi'$.
\end{lemma}
\begin{proof}
The proof is by induction on the structure of $\Phi$.
$\Phi = \mathbf{0}$ or $\Phi = \prod_x \Psi$:
Then,
$\Phi$ is already in normal form.
$\Phi = \Phi_1 + \Phi_2$:
By induction hypothesis, there exist formulas $\Phi_1'$ and $\Phi_2'$,
both in normal form, such that $\Gamma \vdash \Phi_1 \approx \Phi_1'$
and $\Gamma \vdash \Phi_2 \approx \Phi_2'$.
By Lemma \ref{lem:conditional},
there exists a formula $\Phi'$ in normal form such that
$\Gamma \vdash \Phi' \approx \Phi_1' + \Phi_2'$.
By congruence we get $\Gamma \vdash \Phi_1 + \Phi_2 \approx \Phi_1' + \Phi_2'$,
so $\Gamma \vdash \Phi \approx \Phi'$.
$\Phi = \varphi \mathbin{?} \Phi_1 : \Phi_2$:
By induction hypothesis there exist $\Phi_1'$ and $\Phi_2'$
in normal form such that $\Gamma \vdash \Phi_1 \approx \Phi_1'$
and $\Gamma \vdash \Phi_2 \approx \Phi_2'$.
Then $\Phi' = \varphi \mathbin{?} \Phi_1' : \Phi_2'$ is in normal form
and, by congruence, $\Gamma \vdash \Phi \approx \Phi'$.
\end{proof}
Notice that for formulas in normal form,
if it is not the case that $\Phi = \varphi \mathbin{?} \Phi_1 : \Phi_2$,
then $\Phi$ cannot contain any conditional statements at all,
and hence $\Phi$ must be of the form $\Phi = \sum_{i = 1}^k \prod_x \Psi_i$ for some $k$ and \textsf{step-wMSO}\ formulas $\Psi_i$ (axioms ($C2$) and ($C3$) allow us to use this finite sum notation, and ($C5$) lets us use the same variable $x$ in every product).
The following series of lemmas shows that for formulas of this form, it is enough
to consider each of the summands pairwise.
\begin{lemma}\label{lem:prod_formula}
Given two formulas $\Psi_1$ and $\Psi_2$,
there exists a formula $\varphi_{\Psi_1,\Psi_2}$ such that
$(w,\sigma) \models \forall x.\varphi_{\Psi_1,\Psi_2}$ if and only if
$\sat{\prod_x \Psi_1}(w,\sigma) = \sat{\prod_x \Psi_2}(w,\sigma)$.
In particular,
\[\prod_x \Psi_1 \sim_{\Gamma \cup \{\forall x.\varphi_{\Psi_1,\Psi_2}\}} \prod_x \Psi_2.\]
\end{lemma}
\begin{proof}
Consider the sets $R_1$ and $R_2$ of values that appear in $\Psi_1$ and $\Psi_2$, respectively.
If these sets are disjoint,
then $\sat{\prod_x \Psi_1}(w,\sigma) \neq \sat{\prod_x \Psi_2}(w,\sigma)$
for all $(w,\sigma)$, so we can take $\varphi_{\Psi_1,\Psi_2} = \bot$.
If they are not disjoint, consider any $r \in R_1 \cap R_2$.
From Lemma \ref{lem:WtoMSO},
$(w,\sigma) \models \varphi(\Psi_1,r)$ if and only if
$\sat{\Psi_1}(w,\sigma) = r$; and
$(w,\sigma) \models \varphi(\Psi_2,r)$
if and only if $\sat{\Psi_2}(w,\sigma) = r$.
Now we take $\varphi^r_{\Psi_1,\Psi_2} = \varphi(\Psi_1,r) \land \varphi(\Psi_2,r)$ and
\[\varphi_{\Psi_1,\Psi_2} = \bigvee_{r \in R_1 \cap R_2} \varphi^r_{\Psi_1,\Psi_2}.\]
We now have a formula $\varphi_{\Psi_1,\Psi_2}$ such that, for all $(w,\sigma)$,
$(w,\sigma) \models \varphi_{\Psi_1,\Psi_2}$ if and only if
$\sat{\Psi_1}(w,\sigma) = \sat{\Psi_2}(w,\sigma)$.
This is equivalent to
\begin{align*}
\forall (w,\sigma).\forall i &\in \{1,\dots,|w|\}.(w,\sigma[x \mapsto i]) \models \varphi_{\Psi_1,\Psi_2} \\ \text{ iff } &\sat{\Psi_1}(w,\sigma[x \mapsto i]) = \sat{\Psi_2}(w,\sigma[x \mapsto i])
\end{align*}
which implies that
$(w,\sigma) \models \forall x. \varphi_{\Psi_1,\Psi_2}$ iff $\sat{{\textstyle\prod_x} \Psi_1}(w,\sigma) = \sat{{\textstyle \prod_x} \Psi_2}(w,\sigma)$ for all $(w,\sigma)$.
\end{proof}
\begin{lemma}\label{lem:deduction}
If $\Gamma \vdash \bigvee_{m=1}^n \varphi_m$ and for every $m$,
it holds that $\Gamma \cup \{\varphi_m\} \vdash \Phi_1 \approx \Phi_2$,
then $\Gamma \vdash \Phi_1 \approx \Phi_2$.
\end{lemma}
\begin{proof}
The proof is by induction on $n$.
The case of $n = 1$ is trivial:
we have assumed that $\Gamma \cup \{\varphi_1\} \vdash \Phi_1 \approx \Phi_2$,
so $\Gamma \vdash \Phi_1 \approx \Phi_2$ from Proposition \ref{prop:cwmso-theorems}(2).
Now, let $n = k+1$.
We have $\Gamma \cup \{ \varphi_n \} \vdash \Phi_1 \approx \Phi_2$ and
$\Gamma \cup \{ \neg \varphi_n \} \vdash
\bigvee_{m=1}^k \varphi_m$, so by the inductive hypothesis,
$\Gamma \cup \{ \neg \varphi_n \} \vdash
\Phi_1 \approx \Phi_2$.
Hence, Proposition \ref{prop:cwmso-theorems}(8) gives
$\Gamma \vdash \Phi_1 \approx \Phi_2$.
\end{proof}
\begin{lemma}\label{lem:completeness-sum}
Let $\Gamma$ be finite.
Assume $\Phi_1 = \sum_{i=1}^k \prod_x \Psi_i$
and $\Phi_2 = \sum_{j=1}^k \prod_x \Psi_j'$
with $\Phi_1 \sim_\Gamma \Phi_2$,
and assume that for all $i$ and $j$
$\prod_x \Psi_i \sim_\Gamma \prod_x \Psi_j'$ implies
$\Gamma \vdash \prod_x \Psi_i \approx \prod_x \Psi_j'$.
Then $\Gamma \vdash \Phi_1 \approx \Phi_2$.
\end{lemma}
\begin{proof}
By definition, $\Phi_1 \sim_\Gamma \Phi_2$ means that
for all $(w,\sigma) \in \sat{\Gamma}$
there exists a permutation $(j_1, \dots, j_k)$ of $\{1,2,\ldots,k\}$, such that
for all $i$,
\begin{equation}\label{eq:proof1}
\sat{\textstyle{\prod_x} \Psi_i}(w,\sigma) = \sat{\textstyle{\prod_x} \Psi_{j_i}'}(w,\sigma)
.
\end{equation}
By Lemma \ref{lem:prod_formula},
for each such permutation $P = (j_1, \dots, j_k)$ there exist
formulas $\varphi_{1,j_1}, \dots, \varphi_{k, j_k}$ such that
\[\prod_x \Psi_i \sim_{\Gamma \cup \{\forall x. \varphi_{i,j_i}\}} \prod_x \Psi_{j_i}',\]
and by assumption, this gives
\begin{equation}\label{eq:proof2}
\Gamma \cup \{\forall x. \varphi_{i, j_i}\} \vdash \prod_x \Psi_i \approx \prod_x \Psi_{j_i}'.
\end{equation}
For each permutation $P = (j_1, \dots, j_k)$, let
\[\varphi_P = (\forall x.\varphi_{1,j_1}) \land \dots \land (\forall x. \varphi_{k, j_k}).\]
By Equation \eqref{eq:proof1} and Lemma \ref{lem:prod_formula},
for every $(w,\sigma) \in \sat{\Gamma}$ there exists a permutation
$P = (j_1, \dots, j_k)$ such that we have $(w,\sigma) \models \varphi_P$.
This means that for all $(w,\sigma) \in \sat{\Gamma}$
we have $(w,\sigma) \models \bigvee_P \varphi_P$.
By Corollary \ref{cor:completeness-mso},
this means that $\Gamma \vdash \bigvee_P \varphi_P$.
Now, from Equation \eqref{eq:proof2},
we can use ($C6$) to get
\[\Gamma \cup \{\varphi_P\} \cup \{\forall x.\varphi_{i,j_i}\} \vdash \prod_x \Psi_i \approx \prod_x \Psi'_{j_i},\]
and together with $\Gamma \cup \{\varphi_P\} \vdash \forall x.\varphi_{i,j_i}$,
this gives
$\Gamma \cup \{\varphi_P\} \vdash \prod_x \Psi_i \approx \prod_x \Psi_{j_i}'$
by Proposition \ref{prop:cwmso-theorems}(2).
We can then use congruence to get
\begin{equation}\label{eq:proof3}
\Gamma \cup \{\varphi_P\} \vdash \sum_{i = 1}^k \prod_x \Psi_i \approx \sum_{j = 1}^k \prod_x \Psi_{j_i}'.
\end{equation}
Since $\sum_{i = 1}^k {\textstyle\prod_x} \Psi_{j_i}'$
is a permutation of the summands of $\Phi_2$, we get by axioms ($C2$) and ($C3$) that
$\Gamma \cup \{\varphi_P\} \vdash \Phi_2 \approx \sum_{i = 1}^k {\textstyle\prod_x} \Psi_{j_i}'$, so
\begin{equation}\label{eq:proof4}
\Gamma \cup \{\varphi_P\} \vdash \Phi_1 \approx \Phi_2
\end{equation}
by Equation \eqref{eq:proof3}.
By Lemma \ref{lem:deduction}, Equation \eqref{eq:proof4}
together with the fact that $\Gamma \vdash \bigvee_P \varphi_P$ gives
$
\Gamma \vdash \Phi_1 \approx \Phi_2.
$
\end{proof}
We can now prove completeness for formulas in normal form,
and by Lemma \ref{lem:normal-form},
this extends to all formulas.
\begin{lemma}\label{lem:nf-completeness}
If $\Phi_1$ and $\Phi_2$ are in normal form and $\Gamma$ is finite,
then $\Phi_1 \sim_\Gamma \Phi_2$ implies $\Gamma \vdash \Phi_1 \approx \Phi_2$.
\end{lemma}
\begin{proof}
By Proposition \ref{prop:swmso-theorems}(1), we may assume that $\Gamma$ is consistent.
We note that for a formula $\Phi$ in normal form, if $\Phi$ is not a conditional and
$\Phi \neq \mathbf{0}$, then for every $(w,\sigma)$, $|\sat{\Phi}(w,\sigma)|>0$, and therefore $\Phi \not \sim_\Gamma \mathbf{0}$.
The proof now proceeds by induction on $d =
\mathtt{depth}(\Phi_1)
+
\mathtt{depth}(\Phi_2)$,
where
$\mathtt{depth}(\Phi) = 0$ if $\Phi = \mathbf{0}$ or $\Phi = \prod_x \Psi$ and
$\mathtt{depth}(\Phi) = 1 + \max\{\mathtt{depth}(\Phi'),\mathtt{depth}(\Phi'')\}$ if $\Phi = \varphi \mathbin{?} \Phi' : \Phi''$ or $\Phi = \Phi' + \Phi''$.
\emph{Case $d=0$.}
We distinguish the following two subcases.
At least one of $\Phi_1$ and $\Phi_2$ is $\mathbf{0}$: Then from the observation above, $\Phi_1 = \mathbf{0} = \Phi_2$, and therefore $\Gamma \vdash \mathbf{0} \approx \mathbf{0}$ by reflexivity.
$\Phi_1 = \prod_{x_1} \Psi_1$ and $\Phi_2 = \prod_{x_2} \Psi_2$:
In this case, we can find some $x \notin \mathtt{var}(\Psi_1) \cup \mathtt{var}(\Psi_2)$ that does not appear in $\Gamma$.
Then, $\prod_x \Psi_1[x / x_1] \sim_\Gamma \prod_x \Psi_2[x / x_2]$.
By Lemma \ref{lem:forall} we get $\Psi_1[x / x_1] \sim_{\Gamma} \Psi_2[x / x_2]$,
and by completeness of $\textsf{step-wMSO}$, this implies
$\Gamma \vdash \Psi_1[x / x_1] \approx \Psi_2[x / x_2]$.
We can then use axiom ($C4$) to obtain
$\Gamma \vdash \prod_x \Psi_1[x / x_1] \approx \prod_x \Psi_2[x / x_2]$,
and finally use axiom ($C5$) to obtain
$\Gamma \vdash \prod_{x_1} \Psi_1 \approx \prod_{x_2} \Psi_2$.
\emph{Case $d>0$.}
We distinguish the following two subcases.
Without loss of generality, $\Phi_1 = \varphi \mathbin{?} \Phi_1' : \Phi_1''$:
Then, from $\Phi_1 \sim_\Gamma \Phi_2$ we get
$\Phi_1' \sim_{\Gamma \cup \{\varphi\}} \Phi_2$ and
$\Phi_1'' \sim_{\Gamma \cup \{\neg \varphi\}} \Phi_2$, and by the inductive hypothesis this yields ${\Gamma \cup \{\varphi\}} \vdash \Phi_1' \approx \Phi_2$ and ${\Gamma \cup \{\neg \varphi\}} \vdash \Phi_1'' \approx \Phi_2$.
Axiom ($C9$) then gives us that
$\Gamma \vdash \Phi_1 \approx \Phi_2$.
$\Phi_1 = \sum_{i=1}^k \prod_x \Psi_i$
and $\Phi_2 = \sum_{j=1}^{k'} \prod_x \Psi_j'$:
Then, we must have $k = k'$
since otherwise $|\sat{\Phi_1}(w,\sigma)| \neq |\sat{\Phi_2}(w,\sigma)|$,
contradicting $\Phi_1 \sim_\Gamma \Phi_2$.
By the induction hypothesis, $\prod_x \Psi_i \sim_\Gamma \prod_x \Psi_j'$
implies $\Gamma \vdash \prod_x \Psi_i \approx \prod_x \Psi_j'$,
so Lemma~\ref{lem:completeness-sum} yields
$\Gamma \vdash \Phi_1 \approx \Phi_2$.
\end{proof}
\begin{theorem}[Completeness for $\textsf{core-wMSO}(\mathbin{?},+)$]\label{thm:completeness-cwmso}
For every finite $\Gamma$ and $\textsf{core-wMSO}(\mathbin{?},+)$ formulas $\Phi_1$ and $\Phi_2$, we have
$\Phi_1 \sim_\Gamma \Phi_2 \text{ if and only if } \Gamma \vdash \Phi_1 \approx \Phi_2$.
\end{theorem}
\begin{proof}
We prove only completeness.
Assume $\Phi_1 \sim_\Gamma \Phi_2$.
By Lemma \ref{lem:normal-form}, there exist formulas $\Phi_1'$ and $\Phi_2'$,
both in normal form, such that $\Gamma \vdash \Phi_1 \approx \Phi_1'$
and $\Gamma \vdash \Phi_2 \approx \Phi_2'$.
By soundness,
this implies $\Phi_1 \sim_\Gamma \Phi_1'$ and $\Phi_2 \sim_\Gamma \Phi_2'$,
so $\Phi_1' \sim_\Gamma \Phi_2'$.
Since these are in normal form, Lemma \ref{lem:nf-completeness}
gives $\Gamma \vdash \Phi_1' \approx \Phi_2'$,
and we conclude $\Gamma \vdash \Phi_1 \approx \Phi_2$.
\end{proof}
\subsection{Weighted Automata}
An $R$-weighted automaton over $\Sigma$ is a quintuple $A = (Q,\Delta,\texttt{wgt},I,F)$, where $Q$ is a nonempty and finite set of states, $I, F \subseteq Q$ are, respectively, the initial and final states of the automaton, $\Delta \subseteq Q \times \Sigma \times Q$ is the transition relation, and $\texttt{wgt} : \Delta \to R$ assigns a weight from $R$ to each transition of the automaton.
A run of $A$ on a word $w \in \Sigma^*$ of length $n$ is a sequence $\delta_1
\delta_2\cdots \delta_n \in \Delta^n$, where for every $i \leq n$, $\delta_i = (q_i,a_i,q_{i+1})$, and $w = a_1 a_2 \cdots a_n$.
It is an accepting run if $q_1 \in I$ and $q_{n+1} \in F$.
We extend the weight function $\texttt{wgt}$ to runs, such that $\texttt{wgt}(\delta_1
\delta_2\cdots \delta_n) = \texttt{wgt}(\delta_1)\texttt{wgt}(\delta_2)\cdots \texttt{wgt}(\delta_n)$.
We denote as $\rho(A,w)$ the set of runs of $A$ on $w$.
The semantics of $A = (Q,\Delta,\texttt{wgt},I,F)$ is given by a function $\sat{\cdot}:\Sigma^+ \to \multi{R^*}$ in the following way:
\begin{align*}
\sat{A}(w) = \multiset{\texttt{wgt}(\rho) \mid \rho \text{ is an accepting run of $A$ on $w$}}.
\end{align*}
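To make the definition concrete, the following Python sketch computes $\sat{A}(w)$ by brute-force enumeration of runs; weights are kept uninterpreted, so a run's weight is the tuple of its transition weights. The toy automaton and the abstract weights \texttt{'r'}, \texttt{'s'} are illustrative assumptions, not taken from the paper.

```python
from collections import Counter
from itertools import product

# [[A]](w) as a multiset of weight words: enumerate all state sequences
# q_1 ... q_{n+1}, keep those that form accepting runs on w, and record
# the sequence of transition weights of each.

def semantics(Q, Delta, wgt, I, F, w):
    runs = Counter()
    for states in product(sorted(Q), repeat=len(w) + 1):
        if states[0] not in I or states[-1] not in F:
            continue
        trans = list(zip(states, w, states[1:]))  # (q_i, a_i, q_{i+1}) triples
        if all(t in Delta for t in trans):
            runs[tuple(wgt[t] for t in trans)] += 1
    return runs

# Toy automaton over {a, b}, with weights from an abstract set {r, s}.
Q = {0, 1}
Delta = {(0, 'a', 0), (0, 'a', 1), (1, 'b', 1)}
wgt = {(0, 'a', 0): 'r', (0, 'a', 1): 's', (1, 'b', 1): 'r'}
print(semantics(Q, Delta, wgt, I={0}, F={1}, w="aab"))
# → Counter({('r', 's', 'r'): 1})
```

The single accepting run on \texttt{aab} stays in state $0$ for the first letter, moves to state $1$ on the second, and loops on \texttt{b}, giving the weight word $rsr$ with multiplicity one.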
\begin{theorem}[\cite{GM18}]\label{thm:form-to-A}
For every closed $\textsf{core-wMSO}$ formula $\Phi$, there is an $R$-weighted automaton $A$ over $\Sigma$, such that for every $w \in \Sigma^+$,
$\sat{\Phi}(w) = \sat{A}(w)$.
\end{theorem}
\begin{remark}
Theorem \ref{thm:form-to-A} applies only to closed formulas, yet we mainly work with possibly open formulas.
But this is not really a limitation, as every
formula $\Phi$ with a set $V$ of free variables can be thought of as a closed formula over the extended alphabet $\Sigma \cup V$.
\end{remark}
We extend the semantic equivalence $\sim$ of formulas to weighted automata as expected, but we also introduce a bounded version of this equivalence. Specifically, for every $n \geq 0$ and every pair $A_1,A_2$ of automata, $A_1 \sim_n A_2$ if, for every $w \in \Sigma^+$ of length at most $n$, $\sat{A_1}(w) = \sat{A_2}(w)$.
\begin{theorem}\label{thm:finite-equivalence-A}
Let $A_1$ and $A_2$ be two $R$-weighted automata over $\Sigma$, such that $A_1$ has $n_1$ states and $A_2$ has $n_2$ states.
Then, $A_1 \sim_{n_1 + n_2 - 1} A_2$ if and only if $A_1 \sim A_2$.
\end{theorem}
\begin{proof}
The ``if'' direction of the theorem is trivial, and therefore we prove the ``only if'' direction.
Let $n = n_1 + n_2$, and let $A_1 = (Q_1,\Delta_1,\texttt{wgt},I_1,F_1)$ and $A_2 = (Q_2,\Delta_2,\texttt{wgt},I_2,F_2)$ --- the weight function is considered the same for the two automata, for convenience.
For every word $wa \in \Sigma^+$, $\gamma r \in R^{|wa|}$, $i = 1,2$, and $S$ a set or multiset of states from $Q_i$, we define
$Q_i(S,\varepsilon,\varepsilon)= S$, and
\begin{align*}
Q_i(S,wa,\gamma r) =
\multiset{ &q \in Q_i \mid \exists q' {\in} Q_i(S,w,\gamma) \text{ such that } \\ & (q',a,q) {\in} \Delta_i
\text{ and } \texttt{wgt}((q',a,q)) {=} r }.
\end{align*}
Let $Q(S,w,\gamma) = Q_1(S\cap Q_1,w,\gamma) \uplus Q_2(S\cap Q_2,w,\gamma)$ and let $I = I_1 \cup I_2$.
We assume
that $A_1 \sim_{n - 1} A_2$
and we use strong induction on $|w|$ to prove that for every string $w$,
$\sat{A_1}(w) = \sat{A_2}(w)$.
The cases for $|w| < n$ are immediate from our assumptions.
We now consider the case where
$w$ is of length $m > n-1$, and for every word $w'$ of length less than $m$, $\sat{A_1}(w') = \sat{A_2}(w')$.
Let $\rho$ be a sequence of transitions from $A_1$ or $A_2$ of length $m$.
We prove that $\texttt{wgt}(\rho)$ appears the same number of times in $\sat{A_1}(w)$ and in $\sat{A_2}(w)$, which suffices to complete the inductive proof.
Since $m \geq n$, $w$ and $\rho$ have at least $n+1$ prefixes each, say $w_i$ and $\rho_i$ of length $i$, where $0 \leq i \leq n$.
We can fix an ordering of the states of the two automata, and therefore,
for each $i$, we can think of $Q(I,w_i,\texttt{wgt}(\rho_i))$ as a vector of nonnegative integers of dimension $n$.
These are at least $n + 1$ vectors in an $n$-dimensional space, so they must be linearly dependent.
Therefore, there is some $0 < i_0 \leq n$, such that $Q(I,w_{i_0},\texttt{wgt}(\rho_{i_0}))$ is a linear combination of $\{ Q(I,w_i,\texttt{wgt}(\rho_i)) \mid 0 \leq i < i_0 \}$ (with rational coefficients), which we denote as $$Q(I,w_{i_0},\texttt{wgt}(\rho_{i_0})) = \lambda((Q(I,w_i,\texttt{wgt}(\rho_i)))_{i=0}^{i_0 - 1}).$$
Let $w = w_{i_0} w'$ and $\rho = \rho_{i_0} \rho'$.
By a direct inductive argument,
\begin{align}
Q(I,w,\texttt{wgt}(\rho)) = \lambda((Q(I,w_iw',\texttt{wgt}(\rho_i\rho')))_{i=0}^{i_0 - 1}). \label{eq:linear-comb}
\end{align}
We observe that the number of times that
$\texttt{wgt}(\rho)$ appears in $\sat{A_1}(w)$ and in $\sat{A_2}(w)$ is the cardinality of $Q(I,w,\texttt{wgt}(\rho)) \cap F_1$ and of $Q(I,w,\texttt{wgt}(\rho)) \cap F_2$, respectively.
Therefore, for $ k = 1,2 $,
\begin{align*}
\sat{A_k}(w)(\texttt{wgt}(\rho)) =~& |Q(I,w,\texttt{wgt}(\rho)) \cap F_k| \\
=~&
| \lambda((Q(I,w_iw',\texttt{wgt}(\rho_i\rho')))_{i=0}^{i_0 - 1}) \cap F_k |
\tag*{from \eqref{eq:linear-comb}}
\\
=~&
\lambda((|Q(I,w_iw',\texttt{wgt}(\rho_i\rho')) \cap F_k|)_{i=0}^{i_0 - 1}) \\
=~&
\lambda(( \sat{A_k}(w_iw')(\texttt{wgt}(\rho_i\rho')) )_{i=0}^{i_0 - 1}),
\end{align*}
but, from the inductive hypothesis, for $i {=} 0$ to $i_0{-}1$,
$
\sat{A_1}(w_iw')(\texttt{wgt}(\rho_i\rho'))
=
\sat{A_2}(w_iw')(\texttt{wgt}(\rho_i\rho'))
$,
and therefore
$
\sat{A_1}(w)(\texttt{wgt}(\rho))
=
\sat{A_2}(w)(\texttt{wgt}(\rho)),
$
which completes the proof.
\end{proof}
\begin{corollary}\label{cor:automataP}
The equivalence problem for weighted automata is in
\P.
\end{corollary}
\begin{proof}
We observe from the proof of Theorem \ref{thm:finite-equivalence-A} that for every automaton $A$, word $w$, and $\gamma \in R^{|w|}$, $Q(I,w,\gamma)$ can be computed iteratively in polynomial time, with respect to $|A|$ and $|w|$.
Furthermore, we observe that two weighted automata $A_1 = (Q_1,\Delta_1,\texttt{wgt},I_1,F_1)$ and $A_2 = (Q_2,\Delta_2,\texttt{wgt},I_2,F_2)$ are not equivalent, if, and only if, there is a string $w$ of length at most $|Q_1|+|Q_2|$, and a $\gamma \in R^{|w|}$, such that
\[
|Q(I,w,\gamma) \cap F_1| ~~\neq~~ |Q(I,w,\gamma) \cap F_2|.
\]
We now show how
to try all possible $w$ and $\gamma$ of length at most $|Q_1|+|Q_2|$,
in polynomial time.
Let $R_A \subseteq R$ be the set of weights that appear in $A_1$ or $A_2$.
Starting from $\Lambda := \{Q(I,\varepsilon,\varepsilon)\}$, repeat the following $|Q_1|+|Q_2|$ times:
\begin{itemize}
\item compute $\Lambda := \{ Q(I,w a,\gamma r) \mid Q(I,w,\gamma) \in \Lambda, \ r \in R_A, a \in \Sigma\}$; and
\item replace $\Lambda$ by a maximal subset of linearly independent elements of $\Lambda$.
\end{itemize}
If at any step, for some element $e \in \Lambda$, $ |e \cap F_1| ~~\neq~~ |e \cap F_2|$, then we reject; otherwise we accept.
We maintain at most $|Q_1|+|Q_2|$ values in $\Lambda$ at every step, and
both steps can be done in polynomial time.
Therefore, this is a polynomial-time algorithm for the equivalence problem.
\end{proof}
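The iteration above can be sketched as follows in Python, using a combined state space for the two automata and Gauss--Jordan elimination over the rationals for the linear-independence step. The encoding of both automata over a shared index set $\{0,\dots,n-1\}$ and the toy examples are illustrative choices, and, following the $\Sigma^+$ convention of the semantics, the empty word is not checked.

```python
from fractions import Fraction

def add_to_basis(vec, basis):
    """Gauss--Jordan step: `basis` maps a pivot column to a row kept in
    reduced row-echelon form.  Returns True (and extends the basis) iff
    `vec` is linearly independent from the basis over the rationals."""
    v = [Fraction(x) for x in vec]
    for p, row in basis.items():
        if v[p] != 0:                  # row[p] == 1, rows are fully reduced
            c = v[p]
            v = [x - c * y for x, y in zip(v, row)]
    p = next((j for j, x in enumerate(v) if x != 0), None)
    if p is None:
        return False                   # dependent: no need to explore further
    lead = v[p]
    v = [x / lead for x in v]
    for q in basis:                    # keep the existing rows reduced
        if basis[q][p] != 0:
            c = basis[q][p]
            basis[q] = [x - c * y for x, y in zip(basis[q], v)]
    basis[p] = v
    return True

def equivalent(n, delta, I, F1, F2, Sigma):
    """`delta` maps transitions (q, a, q') over states 0..n-1 to weights;
    the states of A1 and A2 are disjoint subsets of 0..n-1, with final
    states F1 and F2.  Checks equality of the abstract semantics."""
    weights = set(delta.values())
    init = [1 if q in I else 0 for q in range(n)]
    basis, frontier = {}, [init]
    add_to_basis(init, basis)
    for _ in range(n):                 # words of length 1 .. n suffice
        new_frontier = []
        for vec in frontier:
            for a in Sigma:
                for r in weights:
                    # Successor configuration Q(I, w a, gamma r).
                    nxt = [sum(vec[q1] for (q1, b, q2) in delta
                               if b == a and q2 == q
                               and delta[(q1, b, q2)] == r)
                           for q in range(n)]
                    if sum(nxt[q] for q in F1) != sum(nxt[q] for q in F2):
                        return False
                    if add_to_basis(nxt, basis):
                        new_frontier.append(nxt)
        frontier = new_frontier
    return True

# A one-state 'a'-loop of weight 'r' (state 0) versus an identical copy
# (state 1), and versus an automaton that duplicates runs (states 1, 2).
same = equivalent(2, {(0, 'a', 0): 'r', (1, 'a', 1): 'r'},
                  I={0, 1}, F1={0}, F2={1}, Sigma={'a'})
diff = equivalent(3, {(0, 'a', 0): 'r', (1, 'a', 1): 'r',
                      (1, 'a', 2): 'r', (2, 'a', 2): 'r'},
                  I={0, 1}, F1={0}, F2={1, 2}, Sigma={'a'})
print(same, diff)  # → True False
```

Only configurations that enlarge the span are kept in the frontier: since the successor map is linear, successors of a dependent configuration are linear combinations of successors of the basis, so the check on basis vectors suffices, matching the pruning step of the proof.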
\begin{remark}
There are similarities between the proofs of our complexity results (Theorem \ref{thm:sat-is-undec} and Corollary \ref{cor:automataP}) and those of \cite{S61} and \cite{cortes2007lp}. However, the techniques in these papers do not directly apply in our case, due to the nature of the abstract semantics, which maintains the information of all the runs of the automata.
Furthermore, we observe that, using the techniques from \cite{KieferMQWW2013}, one could possibly improve further on the complexity bound of Corollary \ref{cor:automataP}.
\end{remark}
\begin{corollary}\label{cor:finite-sat-Form}
There is a computable function $\ell : \mathbb{N} \to \mathbb{N}$, such that for every pair $\Phi_1$ and $\Phi_2$ of $\textsf{core-wMSO}$ formulas, and environment $\Gamma$, $\Phi_1 {\sim_\Gamma} \Phi_2$ if and only if
for every $(w,\sigma) {\in} \sat{\Gamma}$ of length at most $\ell(|\Phi_1|{+}|\Phi_2| {+} |\Gamma|)$, $\sat{\Phi_1}(w,\sigma) = \sat{\Phi_2}(w,\sigma)$.
\end{corollary}
\begin{proof}
The corollary results from Theorems \ref{thm:form-to-A} and \ref{thm:finite-equivalence-A}, and the observations that $\Phi_1 \sim_\Gamma \Phi_2$
iff
$\bigwedge \Gamma \mathbin{?} \Phi_1 : \mathbf{0} \sim \bigwedge \Gamma \mathbin{?} \Phi_2 : \mathbf{0}$, and that the proof of Theorem \ref{thm:form-to-A} in \cite{GM18} is constructive.
\end{proof}
\begin{corollary}
The equational validity problem for \textsf{core-wMSO}\ is decidable.
\end{corollary}
\subsection{An Axiomatization of full \textsf{core-wMSO}}
We now present the full axiomatization for \textsf{core-wMSO}.
For brevity, we only use the second-order version of the sum operator
and elide the first-order versions of these axioms.
Specifically, in the following, axioms ($C11$) to ($C16$) have straightforward first-order versions that are omitted, and ($C17$) and the upcoming formula
$\Phi_1 \leq_l \Phi_2$ can be rewritten to accommodate mixed sequences of both first- and second-order variables; the soundness and completeness proofs then go through in a straightforward way.
We use the notation $\vec{X}$ for a sequence of variables, $X_1,X_2,\ldots,X_k$, and $|\vec{X}| = k$.
This notation can be extended to the sum operator, such that $\sum_{\vec{X}}$ denotes $\sum_{X_1}\cdots \sum_{X_k}$.
For $|\vec{X}| = |\vec{Y}|$, we use $\vec{X} \neq \vec{Y}$ for $$\exists x. \bigvee_{i=1}^k \big( (X_i(x) \land \neg Y_i(x)) \lor (\neg X_i(x) \land Y_i(x)) \big).$$
Let $\Phi_1= \sum_{\vec{X}} \varphi_1 \mathbin{?} \prod_x \Psi_1 : \mathbf{0}$ and
$\Phi_2= \sum_{\vec{X}} \varphi_2 \mathbin{?} \prod_x \Psi_2 : \mathbf{0}$.
For every $l \geq 0$, we use $\Phi_1 \leq_l \Phi_2$ for
\newcommand{\bigwedge_{ i \neq j }}{\bigwedge_{ i \neq j }}
\begin{align*}
&
\bigwedge_{m=1}^l
\forall \vec{X}^1
\vec{X}^2
\cdots \vec{X}^m.~ \exists \vec{Y}^1
\vec{Y}^2
\cdots \vec{Y}^m.\hfill \\
&\left[\!\!\!\!
\begin{array}{c} \displaystyle
\left(
\bigwedge_{ i \neq j } \vec{X}^i \neq \vec{X}^j \land \varphi_1(\vec{X}^i)
\rightarrow
\bigwedge_{ i \neq j } \vec{Y}^i \neq \vec{Y}^j \land \varphi_2(\vec{Y}^i)
\right)
\\[3ex]
{\mathlarger{\mathlarger{\land} } }
\\[2ex]
\displaystyle
\left[\!\!\!\!
\begin{array}{c} \displaystyle
\bigwedge_{ i \neq j }
\!\left(\!
\varphi_1(\vec{X}^i)
\land
\forall x. {\bigwedge_{
r {\in} R(\Psi_1)
}}
\varphi(\Psi_1(\vec{X}^i),r) {\leftrightarrow} \varphi(\Psi_1(\vec{X}^j),r)
\!\right)\!
\\[4ex]
{\mathlarger{\mathlarger{\rightarrow}}}
\\[2ex]
\displaystyle
\left[\!\!\!
\begin{array}{c} \displaystyle
\bigwedge_{ i \neq j }
\!\left(\!
\varphi_2(\vec{Y}^i)
\land
\forall x. {\bigwedge_{
r {\in} R(\Psi_2)
}}
\varphi(\Psi_2(\vec{Y}^i),r) {\leftrightarrow} \varphi(\Psi_2(\vec{Y}^j) ,r)
\!\right)\!
\\[4ex]
{\mathlarger{\mathlarger{\land}}}
\\[2ex]
\displaystyle
\forall x. {\bigwedge_{
r {\in} R(\Psi_1)
}} \varphi(\Psi_1(\vec{X}^1),r) \leftrightarrow \varphi(\Psi_2(\vec{Y}^1),r)
\end{array}
\!\!\!
\right]
\end{array}
\!\!\!\!
\right]
\end{array}
\!\!\!\!
\right]
\end{align*}
Intuitively, the formula describes that if there are $m$ \emph{distinct} sequences of sets of positions, assigned to the $X$'s, that give the same value for $\varphi_1 \mathbin{?} \prod_x \Psi_1 : \mathbf{0}$, then there are $m$ distinct sequences of sets, assigned to the $Y$'s, that give that same value for $\varphi_2 \mathbin{?} \prod_x \Psi_2 : \mathbf{0}$.
As Lemma \ref{lem:cw-less-than} demonstrates, the formula $\Phi_1 \leq_l \Phi_2$ expresses that, up to multiplicity $l$, $\Phi_2$ has all the elements of $\Phi_1$. Therefore, if both $\Phi_1 \leq_l \Phi_2$ and $\Phi_2 \leq_l \Phi_1$ hold for a string, then either the multiplicities in the values of $\Phi_1$ and $\Phi_2$ exceed $l$, or the values are the same.
\begin{lemma}\label{lem:cw-less-than}
Let $l>0$, $\Phi_1 = \sum_{\vec{X}} \varphi_1 \mathbin{?} \prod_x \Psi_1$, and
$\Phi_2 = \sum_{\vec{X}} \varphi_2 \mathbin{?} \prod_x \Psi_2$.
Then, $\Gamma \vdash \Phi_1 \leq_l \Phi_2$ if and only if
for every $(w,\sigma) \in \sat{\Gamma}$ and $\gamma \in R^{|w|}$,
\[
\sat{\Phi_2}(w,\sigma)(\gamma) \geq \min \{ l,~ \sat{\Phi_1}(w,\sigma)(\gamma) \}
.
\]
\end{lemma}
\begin{proof}
We first observe, by Lemma \ref{lem:WtoMSO}, that
\[
\forall x. \bigwedge_{ r \in R(\Psi)} \varphi(\Psi,r) \leftrightarrow \varphi(\Psi',r)
\]
is true at $(w,\sigma)$ if and only if $\sat{\Psi}(w,\sigma) = \sat{\Psi'}(w,\sigma)$.
From the definition of $\Phi_1 \leq_l \Phi_2$ above, for every $(w,\sigma) \in \sat{\Gamma}$, we have $(w,\sigma) \in \sat{\Phi_1 \leq_l \Phi_2}$ exactly when the following holds: if there are $m \leq l$ distinct assignments to the variables $\vec{X}$ for which $\varphi_1$ evaluates to true and $\Psi_1$ returns a fixed value, then there are $m$ distinct assignments to the variables $\vec{Y}$ for which $\varphi_2$ also evaluates to true and $\Psi_2$ returns that same fixed value.
By the completeness of \textsf{MSO}, $\Gamma \vdash \Phi_1 \leq_l \Phi_2$ if and only if for every $(w,\sigma) \in \sat{\Gamma}$, $(w,\sigma) \in \sat{\Phi_1 \leq_l \Phi_2}$, and, by the above observation, the lemma follows.
\end{proof}
The axioms for full \textsf{core-wMSO}\ include all the axioms for $\textsf{core-wMSO}(\mathbin{?},+)$, and, additionally, the ones in Table \ref{tab:cwmsofull-axioms}.
\begin{table}
\centering
\begin{tabular}{l l}
\hline
($C11$): & $\begin{aligned}&\Gamma \vdash \Phi_1 \approx \Phi_2 \text{ implies } \Gamma \vdash \textstyle \sum_X \Phi_1 \approx \sum_X \Phi_2 \\[-1ex]
&\text{if } X \text{ is not free in } \Gamma\end{aligned}$ \\[2ex]
($C12$): & $\Gamma \vdash \sum_X \Phi \approx \sum_Y \Phi[Y / X]$ if $Y \notin \mathtt{var}(\Phi)$ \\[1ex]
($C13$): & $\Gamma \vdash \sum_X \sum_Y \Phi \approx \sum_Y \sum_X \Phi$ \\[1ex]
($C14$): & $\Gamma \vdash \sum_X (\Phi_1 {+} \Phi_2) \approx \sum_X \Phi_1 + \sum_X \Phi_2$ \\[1ex]
($C15$): & $\Gamma \vdash \varphi \mathbin{?} \sum_X \Phi_1 : \sum_X \Phi_2 \approx \sum_X \varphi \mathbin{?} \Phi_1 : \Phi_2$ \\[1ex]
($C16$): & $\Gamma \vdash \Phi \approx \sum_X \varphi \mathbin{?} \Phi
$ if $\Gamma \vdash \exists ! X.~ \varphi(X)$ and $X \notin \mathtt{var}(\Phi)$
\\[1ex]
($C17$): & $\Gamma \vdash \sum_{\vec{X}} \varphi_1 {\mathbin{?}} {\prod_x} \Psi_1
\approx \sum_{\vec{Y}} \varphi_2 {\mathbin{?}} {\prod_x} \Psi_2$ \\
&if $\Gamma \vdash \Phi_1 \leq _l \Phi_2$ and $\Gamma \vdash \Phi_2 \leq _l \Phi_1$, \\
&for $l = 2^{\ell( |\Phi_1| {+} |\Phi_2| {+} |\Gamma | ) \cdot \max \{ |\vec{X}|,|\vec{Y}| \}}$, \\
&where $\Phi_1 = \sum_{\vec{X}} \varphi_1 \mathbin{?} \prod_x \Psi_1$ and $\Phi_2 = \sum_{\vec{Y}} \varphi_2 \mathbin{?} \prod_x \Psi_2$. \\
\hline \\
\end{tabular}
\caption{Axioms for $\textsf{core-wMSO}$.}
\label{tab:cwmsofull-axioms}
\end{table}
The most interesting case is the one of Axiom (C17).
This axiom reduces proving the equivalence of the two sides to a bounded proof of their equivalence through \textsf{MSO}.
\subsection{Soundness and Completeness}
We now prove that the axioms of Table \ref{tab:cwmsofull-axioms} are both sound and complete for \textsf{core-wMSO}.
\begin{lemma}\label{lem:merge-sums}
For every $\Gamma$ and pair of \textsf{core-wMSO}\ formulas $\sum_{\vec{X}} \varphi_1 \mathbin{?} \prod_x \Psi_1$ and $\sum_{\vec{Y}} \varphi_2 \mathbin{?} \prod_x \Psi_2$, there is a \textsf{core-wMSO}\ formula $\sum_{\vec{Z}} \varphi_3 \mathbin{?} \prod_x \Psi_3$, such that
\begin{align*}
\Gamma \vdash \sum_{\vec{X}} \varphi_1 \mathbin{?} \prod_x \Psi_1 + \sum_{\vec{Y}} \varphi_2 \mathbin{?} \prod_x \Psi_2 \approx
\sum_{\vec{Z}} \varphi_3 \mathbin{?} \prod_x \Psi_3.
\end{align*}
\end{lemma}
\begin{proof}
We first observe that $\Gamma \vdash \sum_{X} \mathbf{0} \approx \mathbf{0}$ --- a simple application of axiom ($C16$) together with the completeness of \textsf{step-wMSO}.
We can assume, due to Axioms (C12), (C13), and (C16) that $\vec{X} = \vec{Y}$.
Let $Z$ be a second-order variable that does not appear in any of the two given formulas, nor in $\Gamma$.
We can see that
there are $\varphi'_1$ and $\varphi'_2$ that only have $Z$ as a free variable, such that
$\Gamma \vdash \exists ! Z. \varphi'_1 \land \exists ! Z. \varphi'_2 \land \forall Z. \neg (\varphi'_1 \land \varphi'_2)$ --- for instance, let $\varphi'_1 = \forall x.\neg Z(x)$ and $\varphi'_2 = \forall x. Z(x)$.
We now observe that
\begin{align}
\Gamma &\vdash
\varphi_1' \mathbin{?} \varphi_1 \mathbin{?} \prod_x \Psi_1 +
\varphi_2' \mathbin{?} \varphi_2 \mathbin{?} \prod_x \Psi_2
\approx \notag \\
&\phantom{{}\vdash{}}
(\varphi_1 \land \varphi_1') \lor (\varphi_2 \land \varphi_2')
\mathbin{?}
\prod_x \varphi_1 \land \varphi_1' \mathbin{?} \Psi_1 : \Psi_2.
\label{eq:together}
\end{align}
Using the fact that $\varphi_1'$ and $\varphi_2'$ are mutually exclusive,
and by taking cases, we can see that the equation above is valid;
\eqref{eq:together} then follows from the completeness of $\textsf{core-wMSO}(\mathbin{?},+)$.
We can now derive:
\begin{align*}
\Gamma \vdash &\sum_{\vec{X}} \varphi_1 {\mathbin{?}} {\prod_x} \Psi_1 + \sum_{\vec{X}} \varphi_2 {\mathbin{?}} {\prod_x} \Psi_2 \approx
\tag{from (C16)}
\\
&
\sum_{Z}\varphi_1' \mathbin{?} \sum_{\vec{X}} \varphi_1 {\mathbin{?}} {\prod_x} \Psi_1 + \sum_{Z}\varphi_2'\mathbin{?} \sum_{\vec{X}} \varphi_2 {\mathbin{?}} {\prod_x} \Psi_2 \approx
\tag{from (C15) and (C14)}
\\
&
\sum_{Z\vec{X}}
\varphi_1' \mathbin{?} \varphi_1 \mathbin{?} \prod_x \Psi_1 +
\varphi_2' \mathbin{?} \varphi_2 \mathbin{?} \prod_x \Psi_2
\approx
\tag{from \eqref{eq:together}}
\\
& \sum_{Z\vec{X}}
(\varphi_1 {\land} \varphi_1') {\lor} (\varphi_2 {\land} \varphi_2')
\mathbin{?}
\prod_x \varphi_1 {\land} \varphi_1' \mathbin{?} \Psi_1 : \Psi_2.
\tag*{\qedhere}
\end{align*}
\end{proof}
\begin{definition}
A $\textsf{core-wMSO}$ formula $\Phi$ is in first normal form if $\Phi$ is generated by the following grammar:
\begin{align*}
Q &::= \varphi \mathbin{?} Q : Q \mid R \mid \mathbf{0};
&R &::= R + R \mid S; \ \ \text{and}
\\
S &::= \sum_X S \mid \varphi \mathbin{?} \prod_x \Psi.
\end{align*}
It is in second normal form if it is in first normal form and
$+$ does not occur in $\Phi$.
\end{definition}
\begin{lemma}\label{lem:2nfs}
For every $\Gamma$ and \textsf{core-wMSO}~ formula $\Phi$, there exists a \textsf{core-wMSO}~formula $\Phi'$ in second normal form, such that $\Gamma \vdash \Phi \approx \Phi'$.
\end{lemma}
\begin{proof}
By Lemma \ref{lem:merge-sums}, it suffices to prove the lemma for
$\Phi'$ in first normal form.
We can see that axioms ($C14$) and ($C12$) allow us to push $+$ inside any sum operator, and ($C15$) and ($C16$) allow us to do the same for conditionals.
This gives us that $\Gamma \vdash \Phi \approx \sum_{\vec{X}}\Phi''$, where
$\Phi''$ is a $\textsf{core-wMSO}(\mathbin{?},+)$ formula.
But then,
for $\textsf{core-wMSO}(\mathbin{?},+)$ formulas $\Phi_1, \Phi_2, \Phi_3$, the equivalence
\[ \varphi \mathbin{?} (\Phi_1 + \Phi_2) : \Phi_3 \sim (\varphi \mathbin{?} \Phi_1 : \Phi_3) + \varphi \mathbin{?} \Phi_2 \]
gives
\[\Gamma \vdash \varphi \mathbin{?} (\Phi_1 + \Phi_2) : \Phi_3 \approx (\varphi \mathbin{?} \Phi_1 : \Phi_3) + \varphi \mathbin{?} \Phi_2, \]
%
from the completeness of $\textsf{core-wMSO}(\mathbin{?},+)$.
Furthermore, it is not hard to see that, from the $\textsf{core-wMSO}(\mathbin{?},+)$ axioms,
\begin{align*}
\Gamma \vdash &
\varphi \mathbin{?} \Phi_1 : \Phi_2 \approx \varphi \mathbin{?} \Phi_1 + \neg \varphi \mathbin{?} \Phi_2.
\end{align*}
%
Therefore, inside the sum operators of $\sum_{\vec{X}}\Phi''$, we can
bring all conditionals in the form $\varphi \mathbin{?} \Phi_1$,
and then use axiom (C14) to
eliminate all occurrences of $+$ inside the sum operators.
The remaining proof is similar to the proof of Lemma \ref{lem:normal-form}.
\end{proof}
\begin{theorem}[Completeness for $\textsf{core-wMSO}$]\label{thm:completeness-full}
For every finite $\Gamma$ and $\textsf{core-wMSO}$ formulas $\Phi_1$ and $\Phi_2$, we have $\Phi_1 \sim_\Gamma \Phi_2$ if and only if $\Gamma \vdash \Phi_1 \approx \Phi_2$.
\end{theorem}
\begin{proof}
\emph{The soundness} of the axioms is straightforward.
The most interesting case is (C17), which we now prove sound.
We assume that
$\Gamma \vdash \Phi_1 \leq _l \Phi_2$
and
$\Gamma \vdash \Phi_2 \leq _l \Phi_1$,
for
$ l = 2^{\ell( |\Phi_1| {+} |\Phi_2| {+} |\Gamma | ) \cdot \max \{ |\vec{X}|,|\vec{Y}| \}}$,
where
\[
\Phi_1 =
\sum_{\vec{X}} \varphi_1 \mathbin{?} \prod_x \Psi_1
\quad \text{and} \quad
\Phi_2 =
\sum_{\vec{X}} \varphi_2 \mathbin{?} \prod_x \Psi_2
.\]
Let $L = \ell( |\Phi_1| {+} |\Phi_2| {+} |\Gamma | ) $.
From Lemma \ref{lem:cw-less-than}, we get that
$\sat{\Phi_1}(w,\sigma)(\gamma) \geq \min \{ l, \sat{\Phi_2}(w,\sigma)(\gamma)\}$
and
$\sat{\Phi_2}(w,\sigma)(\gamma) \geq \min \{ l, \sat{\Phi_1}(w,\sigma)(\gamma)\}$
for every $(w,\sigma)\in \sat{\Gamma}$ and $\gamma \in R^{|w|}$.
Let $(w,\sigma)\in \sat{\Gamma}$, where $|w| \leq L$.
By Corollary \ref{cor:finite-sat-Form}, it suffices to prove that
$\sat{\Phi_1}(w,\sigma) = \sat{\Phi_2}(w,\sigma)$, and to do that, from the above discussion, it suffices to prove that
for every $\gamma \in R^{|w|}$,
$l \geq \sat{\Phi_1}(w,\sigma)(\gamma)$ and $l \geq \sat{\Phi_2}(w,\sigma)(\gamma)$.
Specifically, we prove that
$\sat{\Phi_1}(w,\sigma)(\gamma) \leq 2^{|w| \cdot |\vec{X}| }$ --- the case for $\Phi_2$ is symmetric.
The proof is by induction on $|\vec{X}|$: if $\Phi_1 = \varphi_1 \mathbin{?} \prod_x \Psi_1$, then it outputs at most one value, and therefore
$\sat{\Phi_1}(w,\sigma)(\gamma) \leq 1 = 2^{|w| \cdot 0}$;
and for the inductive step,
$\sat{\sum_{Z}\Phi_1}(w,\sigma)(\gamma) \leq
2^{|w|}\sat{\Phi_1}(w,\sigma)(\gamma) \leq
2^{|w| + |w| \cdot |\vec{X}|} = 2^{|w| \cdot (|\vec{X}|+1)}$.
\emph{We now prove the completeness} of the axioms.
Let $\Phi_1 \sim_\Gamma \Phi_2$; we prove that
$\Gamma \vdash \Phi_1 \approx \Phi_2$.
By Lemma \ref{lem:2nfs}, we can assume that $\Phi_1$ and $\Phi_2$ are in second normal form.
The proof is by induction on the total number of the top-level conditionals in these formulas. The inductive step is similar to the one in the proof of Theorem \ref{thm:completeness-cwmso}, so we only deal with the base cases.
If one of the formulas is $\mathbf{0}$, then the other one is either $\mathbf{0}$ or $\sum_{\vec{X}} \Phi$, where $\Phi \sim_\Gamma \mathbf{0}$. By the completeness of $\textsf{core-wMSO}(\mathbin{?},+)$ and Axiom (C16), $\Gamma \vdash \mathbf{0} \approx \sum_{\vec{X}} \Phi$ and we are done.
Finally, let $ \sum_{\vec{X}} \varphi_1 \mathbin{?} \Phi_1 \sim_\Gamma \sum_{\vec{Y}} \varphi_2 \mathbin{?} \Phi_2 $.
From axioms (C12), (C13), and (C16), we can assume that $\vec{X} = \vec{Y}$.
From
Lemma \ref{lem:cw-less-than},
\begin{align*}
\Gamma \vdash& \sum_{\vec{X}} \varphi_1 \mathbin{?} \Phi_1 \leq_l \sum_{\vec{Y}} \varphi_2 \mathbin{?} \Phi_2, \text{ and} \\
\Gamma \vdash& \sum_{\vec{X}} \varphi_2 \mathbin{?} \Phi_2 \leq_l \sum_{\vec{Y}} \varphi_1 \mathbin{?} \Phi_1,
\end{align*}
for every $l$.
Therefore, by using axiom (C17),
\[
\Gamma \vdash \sum_{\vec{X}} \varphi_1 \mathbin{?} \Phi_1 \approx \sum_{\vec{Y}} \varphi_2 \mathbin{?} \Phi_2.
\qedhere
\]
\end{proof}
\subsubsection*{Our contribution}
We give three complete axiomatizations:
one for the full second syntactic layer
of weighted MSO, one for a fragment of
the third and final syntactic layer,
and one for the full third layer.
Each of the three axiomatizations exhibits different
characteristics and machinery, and therefore our presentation of the axioms allows for axiomatizations that are tailored to each fragment of \textsf{core-wMSO}.
Due to the modular nature of \textsf{core-wMSO}, these axiomatizations also apply to \textsf{core-wFO}, the first-order version of the logic.
We prove that the equivalence problem for weighted automata under the abstract interpretation can be solved in polynomial time (Corollary \ref{cor:automataP}).
We also show that the model checking, satisfiability, and validity problems,
appropriately translated to the equational, weighted setting,
are decidable for the second layer of the logic,
although they inherit from MSO
the \PSPACE-completeness of model checking
and the non-elementary complexity of satisfiability and validity.
However, for the third layer of the logic, things are more complicated.
The model checking and validity problems remain decidable,
but we show that the satisfiability problem is undecidable, even for the first-order fragment.
\subsubsection*{Related work}
Weighted MSO was introduced by Droste and Gastin in \cite{DG07}.
The version that we study in this paper was defined by Gastin and Monmege in \cite{GM18}, where they prove that it is equivalent to weighted automata.
Droste and Gastin defined a number of first-order restrictions of the logic in \cite{DG19}, where they prove that these correspond to suitable restrictions of weighted automata.
Naturally, more variations of weighted MSO have appeared, for example, extending the logic with multiple weights \cite{DP16}, or on infinite words \cite{DM12}.
To the best of our knowledge, the present paper is the first to give an axiomatization for \textsf{core-wMSO}.
One can find several results about the decidability and complexity of problems about weighted automata in the literature, and these tend to vary, depending on the structure of the weights.
Already in \cite{S61}, Sch\"{u}tzenberger proved that the equivalence of $(\mathbb{Q},+,\cdot)$-weighted automata can be decided in polynomial time, and we show the same result for the abstract semantics using a different proof.
On the other hand, the same problem over the $(\mathbb{Q},\max,+)$ semiring is undecidable \cite{ABK20,Krob1994}.
Droste and Gastin in \cite{DG07} show that the equivalence problem for formulas --- which we call equational validity in this paper --- over computable, commutative, and locally finite semirings is decidable.
This and other decidability results (for example, \cite{DP16} and \cite{DM12}) for weighted MSO result from the translation of formulas to automata.
This paper provides an alternative method to decide equational validity, by a proof system.
A problem similar to this paper's equational satisfiability is proven undecidable by Bollig and Gastin in \cite{BG09} for probabilistic logics on trees.
To the best of our knowledge, this paper is the first effort to tackle the decidability of equational satisfiability and validity of \textsf{core-wMSO}\ over the abstract semantics.
\section{Introduction}
\input{intro.tex}
\section{Preliminaries}
\input{prelim.tex}
\section{Syntax and semantics}
\input{syntax-semantics.tex}
\section{Decision problems}
\input{decidability.tex}
\section{Axioms}\label{sec:completeness}
\input{axioms.tex}
\section{An Axiomatization for Full \textsf{core-wMSO}}
\input{full.tex}
\section{Equational Satisfiability is Undecidable}\label{sec:undec}
\input{satisf-undec.tex}
\section{Conclusion}
\input{conclusion.tex}
\section*{Acknowledgments}
This work has been funded by the project ``Open Problems in the Equational Logic of Processes (OPEL)'' (grant no.~196050), the project ``Epistemic Logic for Distributed Runtime Monitoring'' (grant no.~184940), and the project ``MoVeMnt: Mode(l)s of Verification and Monitorability'' (grant no~217987) of the Icelandic Research Fund.
The authors are also thankful to the anonymous reviewers, whose comments have improved this paper.
\bibliographystyle{plain}
\subsection{Syntax}
We use
a countably infinite set of first-order variables $\mathcal{V}_{FO}$,
a countably infinite set of second-order variables $\mathcal{V}_{SO}$,
a finite alphabet $\Sigma$, and an arbitrary set $R$ of weights.
The syntax of weighted MSO is given by the following grammar.
\noindent \textsf{MSO}:
\[\varphi ::= \top \mid P_a(x) \mid x \leq y \mid x \in X \mid \neg \varphi \mid \varphi_1 \land \varphi_2 \mid \forall x. \varphi \mid \forall X. \varphi\]
\textsf{step-wMSO}:
\[\Psi ::= r \mid \varphi \mathbin{?} \Psi_1 : \Psi_2\]
\textsf{core-wMSO}:
\[\Phi ::= \mathbf{0} \mid \textstyle{\prod_x} \Psi \mid \varphi \mathbin{?} \Phi_1 : \Phi_2 \mid \Phi_1 + \Phi_2 \mid \textstyle{\sum_x \Phi} \mid \textstyle{\sum_X \Phi}\]
where $a \in \Sigma$, $r \in R$, $x,y \in \mathcal{V}_{FO}$, and $X \in \mathcal{V}_{SO}$.
In the rest of the paper, we use $\varphi$ to denote \textsf{MSO}\ formulas, $\Psi$ to denote \textsf{step-wMSO}\ formulas, $\Phi$ to denote \textsf{core-wMSO}\ formulas, and $\chi$ to denote \textsf{step-wMSO}\ or \textsf{core-wMSO}\ formulas.
In a similar fashion, we obtain \textsf{step-wFO}\ by only allowing conditioning on first-order formulas in \textsf{step-wMSO}\
and \textsf{core-wFO}\ by only allowing first-order formulas and removing the construct $\sum_X \Phi$
which sums over a second-order variable.
We will use \ensuremath{\nil}\ as the default (negative) value in conditionals,
and as such $\varphi \mathbin{?} \Phi$ is used as shorthand for $\varphi \mathbin{?} \Phi : \ensuremath{\nil}$.
Furthermore, we assume that $:$ binds to the nearest $\mathbin{?}$, and therefore, $\varphi_1 \mathbin{?} \varphi_2 \mathbin{?} \Phi_1 : \Phi_2$ means $\varphi_1 \mathbin{?} \varphi_2 \mathbin{?} \Phi_1 : \Phi_2 : \ensuremath{\nil}$, which can be uniquely parsed as $\varphi_1 \mathbin{?} (\varphi_2 \mathbin{?} \Phi_1 : \Phi_2) : \ensuremath{\nil}$.
For a \textsf{step-wMSO}\ formula $\Psi$, $R(\Psi) = \{ r \in R \mid r $ appears in $\Psi \}$; for brevity, we may write $r \in \Psi$ instead of $r \in R(\Psi)$.
We note here that in earlier work of Droste and Gastin \cite{DG07},
a different formulation of weighted MSO was given.
There, the syntax was essentially the same as the syntax for
classical monadic second-order logic,
and the semantics were given as a function from words and valuations to elements of $R$.
However, one can translate between the formulation presented here
and a restricted version of the formulation of \cite{DG07},
as was shown in \cite[Section 5]{GM18}.
We choose to follow the formulation of \cite{GM18}
because this gives a cleaner correspondence with weighted automata,
whereas the earlier formulation of weighted MSO required a (not fully syntactic)
restriction in order to obtain a correspondence with weighted automata,
and because the abstract semantics of this formulation allows us to
focus on the syntactic level, which is ideal for an axiomatization.
The formulas $\varphi$ of \textsf{MSO}\ are interpreted over words $w \in \Sigma^+$
together with a valuation $\sigma$ of this word,
which assigns to each first-order variable a position in the word
and to each second-order variable a set of positions in the word.
When interpreted on a string, a formula outputs a value, which may concretely be a single weight, a sequence of weights, or a (multi)set of more elementary values.
To preserve the generality of the logic, the semantics are given in two steps.
The first is an abstract semantics, where the meaning of a formula
is given as a multiset of sequences of weights.
The second is a concrete semantics, where one can translate the abstract semantics
into a given semiring structure, by assuming an appropriate operator on the abstract values.
We denote by $\Sigma^+_{val}$ the set of pairs $(w,\sigma)$ where $w \in \Sigma^+$
and $\sigma$ is a valuation of $w$.
Let $x$ be a first-order (respectively, let $X$ be a second-order) variable and $i \in \{ 1, \dots, |w| \}$ (respectively, $I \subseteq \{ 1, \dots, |w| \}$).
By $\sigma[x \mapsto i]$ (respectively $\sigma[X \mapsto I]$) we denote the valuation that maps each variable $y$ and $Y$ to $\sigma(y)$ and $\sigma(Y)$, if $y \neq x$ (respectively, if $Y \neq X$), and $x$ to $i$ (respectively, $X$ to $I$).
The semantics of \textsf{MSO}\ on finite words is standard and can be found in e.g.
\cite{L04}.
In this paper, $(w,\sigma)$ will always be a pair from $\Sigma^+_{val}$.
We denote by $\sat{\varphi}$ the set of all pairs $(w,\sigma) \in \Sigma^+_{val}$
that satisfy $\varphi$.
Likewise, for a set $\Gamma$ of \textsf{MSO}\ formulas,
we define
\[\sat{\Gamma} = \begin{cases} \Sigma^+_{val} & \text{if } \Gamma = \emptyset \\ \bigcap_{\varphi \in \Gamma} \sat{\varphi} & \text{otherwise}. \end{cases}\]
The semantics of formulas $\Psi$ of \textsf{step-wMSO}\ is given
by a function $\sat{\cdot} : \Sigma^+_{val} \rightarrow R$,
defined by $\sat{r}(w,\sigma) = r$ and
\[
\sat{\varphi \mathbin{?} \Psi_1 : \Psi_2}(w,\sigma) = \begin{cases} \sat{\Psi_1}(w,\sigma) \text{ if } (w,\sigma) \models \varphi \\ \sat{\Psi_2}(w,\sigma) \text{ otherwise.}\end{cases}
\]
The semantics of formulas $\Phi$ of \textsf{core-wMSO}\
is given by the function $\sat{\cdot} : \Sigma^+_{val} \rightarrow \multi{R^*}$:
\begin{align*}
\sat{\mathbf{0}}(w,\sigma) &= \emptyset \\
\sat{\textstyle{\prod_x} \Psi}(w,\sigma) &= \multiset{r_1 r_2 \dots r_{|w|}}, r_i = \sat{\Psi}(w,\sigma[x {\mapsto} i]) \\
\sat{\varphi \mathbin{?} \Phi_1 : \Phi_2}(w,\sigma) &= \begin{cases} \sat{\Phi_1}(w,\sigma), \text{ if } (w,\sigma) \models \varphi \\ \sat{\Phi_2}(w,\sigma), \text{ otherwise}\end{cases} \\
\sat{\Phi_1 + \Phi_2}(w,\sigma) &= \sat{\Phi_1}(w,\sigma) \uplus \sat{\Phi_2}(w,\sigma) \\
\sat{\textstyle{\sum_x} \Phi}(w,\sigma) &= \biguplus_{i \in \{1, \dots, |w|\}} \sat{\Phi}(w,\sigma[x \mapsto i]) \\
\sat{\textstyle{\sum_X} \Phi}(w,\sigma) &= \biguplus_{I \subseteq \{1, \dots, |w|\}} \sat{\Phi}(w, \sigma[X \mapsto I])
\end{align*}
Let $\Gamma$ be a set of \textsf{MSO}\ formulas.
We say that two formulas $\chi_1$ and $\chi_2$
are semantically $\Gamma$-equivalent and write
$\chi_1 \sim_\Gamma \chi_2$
if $\sat{\chi_1}(w,\sigma) = \sat{\chi_2}(w,\sigma)$
for all $(w,\sigma) \in \sat{\Gamma}$.
If $\Gamma = \emptyset$, we simply write $\chi_1 \sim \chi_2$
and say that $\chi_1$ and $\chi_2$
are semantically equivalent.
\subparagraph{Concrete semantics}
To obtain the concrete semantics of a formula for a given semiring structure $(X, +, \times, 0 , 1)$,
we assume an \emph{aggregation function} $\mathtt{aggr} : \multi{R^*} \rightarrow X$.
Note that the set $X$ may be different from the set of weights $R$.
\begin{example}\label{ex:1}
Let $\Sigma = \{a,b\}$, $R = \{0,1\}$ and consider the max-plus semiring
$(\mathbb{N} \cup \{-\infty\}, \max, +, -\infty, 0)$.
We wish to count the maximum number of consecutive $a$'s in a given string $w \in \Sigma^+$.
We define the aggregation function as
$
\mathtt{aggr}(M) = \max_{r_1 \dots r_n \in M}( r_1 + \dots + r_n),
$
thus interpreting the sum and product of the multiset sequence semiring ($\uplus$ and $\cdot$)
as the corresponding sum and product ($\max$ and $+$) in the max-plus semiring.
Now define the first-order formula $\varphi$ as
$
\varphi = x \leq y \land \forall z.((x \leq z \land z \leq y) \rightarrow P_a(z)),
$
and let
$\Psi = \varphi \mathbin{?} 1 : 0$,
$\Phi' = \textstyle{\prod_y} \Psi$,
and
$\Phi = \textstyle{\sum_x} \Phi'$,
so that $\Phi = \sum_x \prod_y \varphi \mathbin{?} 1 : 0$.
Consider the string $w = abaa$, which has a maximum number of two consecutive $a$'s.
We find that
\begin{align*}
\sat{\Phi}(w,\sigma) &= \sat{\Phi'}(w,\sigma[x \mapsto 1]) \uplus \sat{\Phi'}(w, \sigma[x \mapsto 2]) \\
&\phantom{{}={}}\uplus \sat{\Phi'}(w,\sigma[x \mapsto 3]) \uplus \sat{\Phi'}(w, \sigma[x \mapsto 4]) \\
&= \multiset{1000} \uplus \multiset{0000} \uplus \multiset{0011} \uplus \multiset{0001} \\
&= \multiset{1000, 0000, 0011, 0001}
\end{align*}
and hence the concrete semantics become
\begin{align*}
\mathtt{aggr}(\sat{\Phi}(w,\sigma)) &= \mathtt{aggr}(\multiset{1000, 0000, 0011, 0001}) \\
&= \max\{1,0,2,1\} = 2,
\end{align*}
which is
the maximum number of
consecutive $a$'s in $w$.
\end{example}
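For readers who want to experiment, the computation in Example \ref{ex:1} can be reproduced with a small Python sketch (the function names and the encoding of sequences as strings are ours; the paper defines the semantics purely mathematically):

```python
from collections import Counter

w = "abaa"   # the example word; positions are 1-based, as in the paper
n = len(w)

def phi(x, y):
    """MSO formula: x <= y and every position z in [x, y] carries the letter a."""
    return x <= y and all(w[z - 1] == "a" for z in range(x, y + 1))

def step(x, y):
    """step-wMSO formula  phi ? 1 : 0  at the valuation mapping x, y to positions."""
    return 1 if phi(x, y) else 0

def prod_y(x):
    """Pi_y Psi: the single weight sequence r_1 ... r_|w|, as a one-element multiset."""
    return Counter(["".join(str(step(x, y)) for y in range(1, n + 1))])

def sum_x():
    """Sigma_x Phi': multiset union over all positions for x."""
    out = Counter()
    for x in range(1, n + 1):
        out += prod_y(x)
    return out

abstract = sum_x()   # the abstract semantics: a multiset of 0/1-sequences
aggr = max(sum(int(c) for c in s) for s in abstract.elements())   # max-plus aggregation
```

Running this on $w = abaa$ yields the multiset $\multiset{1000, 0000, 0011, 0001}$ and the aggregated value $2$, matching the computation above.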
Semiring semantics have some limitations in their expressive power,
and some natural quantities, such as discounted sum,
cannot be computed using these semantics.
Alternative concrete semantics have therefore been proposed that
give more expressive power, such as valuation monoids \cite{DM12}
and valuation structures \cite{DP16},
which allow one to compute more complex quantities,
such as optimal discounted cost, average of ratios, and more.
In this paper, we work exclusively with abstract semantics.
\begin{remark}
It is important to note that the abstract semantics that were defined in this section can be seen as a kind of concrete semantics, for the semiring structure $(\multi{R^*}, \uplus, \cdot, \emptyset, \multiset{\varepsilon})$,
the semiring over multisets of sequences over the weights.
\end{remark} |
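To make the remark concrete, here is a small Python sketch (our own code, not from the paper) of this semiring, representing a multiset of sequences as a \texttt{Counter} of strings: the sum is multiset union, and the product concatenates sequences pairwise, with multiplicities multiplying.

```python
from collections import Counter

def mplus(M1, M2):
    """Semiring sum: multiset union of two multisets of sequences."""
    return M1 + M2

def mtimes(M1, M2):
    """Semiring product: pairwise concatenation of sequences; multiplicities multiply."""
    out = Counter()
    for s, m in M1.items():
        for t, k in M2.items():
            out[s + t] += m * k
    return out

ZERO = Counter()         # the empty multiset: neutral for the sum, absorbing for the product
ONE = Counter({"": 1})   # the multiset containing only the empty sequence: neutral for the product

A = Counter({"ab": 2, "c": 1})   # the multiset {{ab, ab, c}}
B = Counter({"d": 1, "": 1})     # the multiset {{d, eps}}
```

One can check the semiring identities directly, e.g.\ that \texttt{ZERO} is neutral for the sum and \texttt{ONE} is neutral for the product.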
1,314,259,993,901 | arxiv | \section{Introduction}
The U.S. was one of the first countries to roll out the COVID-19 vaccines. However, vaccination doses administered per 100 people in the U.S. are among the lowest in the developed world (\cite{NytHolde2021}). Vaccine hesitancy is not a new phenomenon in the U.S., and it has been suggested that age, political affiliation, and income may all play a role. In addition, racial disparities have also been observed as a possible factor. In order to assist policy makers in developing better strategies to decrease hesitancy and increase access, it is critical to identify and quantify contributing factors associated with COVID-19 vaccination rates in the U.S., especially now that winter, with more indoor activities, is approaching and new COVID variants are spreading.
This paper applies a multivariate Ordinary Least Square (OLS) regression analysis as well as a machine learning regression (Random Forest) to quantify contributing factors for county-level vaccination hesitancy in the continental U.S. Joint effects of multiple variables (race/ethnicity, politics, age, etc.) are considered simultaneously to capture and quantify which factors affect the vaccination rate. By implementing a state-of-the-art Artificial Intelligence Explanations (AIX) algorithm, it is possible to address the black-box problem often associated with machine learning models and provide answers to the “how much” question for each measured impact factor in every individual county.
An interactive GIS online dashboard is made available so that one can search and select a county in the continental U.S., display a diagram to show the value of each contributing factor, and see by how much each factor in the selected county impacts the predicted vaccination rate.\footnote{See \href{https://www.cpp.edu/~clange/vacmap.html}{https://www.cpp.edu/\texttildelow clange/vacmap.html} for an interactive map.} This impact is also known as the SHAP value of the predictor variable.\footnote{SHAP (SHapley Additive exPlanations) is an AIX tool. Based on a trained machine learning model SHAP values quantify for each observation (county) the impact that a predictor variable such as political affiliation, race/ethnicity, age, or income has on the prediction (vaccination rate).} It is apparent that the influence of a variable is not universally the same across different geographies (counties). Consequently, county specific strategies need to be applied to help increasing vaccination rates.
By highlighting the variable impact of different factors in each county, this research can be a helpful tool for local health officials and others to find strategies to improve vaccination rates in individual communities that are currently under-vaccinated.
The paper is structured as follows: Section \ref{sec:methodology} describes the data sources and the methodology used for the research. In Section \ref{SecAnalysis} we use a traditional linear OLS regression model (see Section \ref{SubSecRegression}) and a Random Forest machine learning model (see Section \ref{SecCountyRandFor}) to predict the vaccination rate for continental U.S. counties. The Random Forest model allows us to consider non-linear impacts as well as interaction effects between variables. The idea behind a Random Forest model is presented in Section \ref{SecCountyRandForIdea}, the prediction results are discussed in Section \ref{SecCountyRandForPrediction}, and the impact factors for vaccination hesitancy for the Random Forest model are analyzed for each county separately in Section \ref{SecCountyShap} using SHAP values.
In Section \ref{SecKeyFindings} we visualize the SHAP values and derive some important key findings. In Section \ref{SecShapTrends} we analyze for each predictor variable the
direction and trend of the impact on the predicted vaccination rate by generating and plotting SHAP values for all counties. In Section \ref{SecUniqueInsights} we compare the SHAP values for some selected counties. This will give us unique insights into the vaccination behavior of individual counties and will show that different vaccination strategies need to be implemented for different counties.
\subsection{Methodology \label{sec:methodology}}
The goal is to identify and quantify impact factors that can be used to explain different COVID-19 vaccination rates in the counties of the United States.
Impact factors for vaccination rates are analyzed at the national level based on county data.
As a first approach we apply an Ordinary Least Square (OLS) model (see Section \ref{SubSecRegression})
to identify statistically significant predictor variables based on hypothesis tests. In Section \ref{SecCountyRandFor}
we use a Random Forest machine learning model.\footnote{Random Forest was introduced by \cite{Breiman2001}.} The Random Forest model improves the predictive quality. However, machine learning models are often treated as black boxes without the capacity to quantify the impacts of specific predictor variables on the final prediction. More recently, a new methodology (SHAP values) was introduced into the machine learning literature to quantify the impact of predictor variables on the estimated value.\footnote{The basic idea behind SHAP values goes back to \cite{Shap54}, who introduced it as a game theory approach. \cite{Lundberg2018} modified the methodology to interpret machine learning results and developed a related R package for tree based models such as Random Forest.} We apply SHAP value analysis to the results of the Random Forest model and will be able to determine by how much each of our (county) predictions is determined by each of the predictor variables.\footnote{Mazzanti provides a very intuitive introduction to SHAP values (see \cite{Mazzanti2020Shap}).}
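To illustrate what SHAP values compute, the following Python sketch performs a brute-force exact Shapley-value calculation for a toy two-variable model. This is for intuition only: the R package cited above uses the much faster TreeSHAP algorithm, and the model and all numbers below are made up. Features absent from a coalition are held at a baseline value, and each feature's contribution is averaged over all coalitions of the remaining features.

```python
from itertools import combinations
from math import factorial

def shapley_values(f, x, baseline):
    """Exact Shapley values of the prediction f(x), relative to f(baseline)."""
    M = len(x)

    def v(coalition):
        # Coalition members take their actual values; the rest stay at the baseline.
        z = list(baseline)
        for j in coalition:
            z[j] = x[j]
        return f(z)

    phis = []
    for i in range(M):
        others = [j for j in range(M) if j != i]
        phi = 0.0
        for size in range(M):
            for S in combinations(others, size):
                weight = factorial(size) * factorial(M - size - 1) / factorial(M)
                phi += weight * (v(S + (i,)) - v(S))
        phis.append(phi)
    return phis

# Made-up linear "vaccination rate" model in two predictors.
f = lambda z: 0.5 - 0.5 * z[0] - 0.4 * z[1]
phis = shapley_values(f, x=[0.6, 0.3], baseline=[0.5, 0.2])   # approximately [-0.05, -0.04]
```

For a linear model the Shapley value of each feature reduces to its coefficient times its deviation from the baseline, and the values always sum to the difference between the prediction and the baseline prediction.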
As the dependent variable we use the rate of Fully Vaccinated Adult Persons. We choose this variable over the variable Adult Persons who at Least Received One Shot, because the latter consists of three very diverse groups with different vaccination behaviors: i) individuals who already completed their vaccination cycle, ii) individuals who received one shot but are not yet eligible for the second shot, and iii) individuals who decided not to complete the vaccination cycle.
Based on a correlation analysis of several variables, we chose the following predictor variables:
\begin{itemize}
\item Race/Ethnicity: proportion of African Americans, Asian Americans, and Hispanics for the relevant geography.
\item Political Affiliation: Number of voters who voted for the Republican presidential candidate as a proportion of the sum of voters who voted for the Democratic or Republican presidential candidate in the 2020 presidential election.
\item Age Groups: Proportion of young adults (20-25 years) and Proportion of older adults (65 years and older).
\item Income related: To control for income effects we used the proportion of households receiving food stamps in the related geography as a predictor variable.\footnote{We also considered Median Household income and the Social Vulnerability Index as defined be the Center for Disease Control (CDC). The results were similar to the proportion of food stamp recipients but less significant. A possible reason could be that the proportion of food stamp variable better represents low income.}
\end{itemize}
For both the OLS and the Random Forest regressions the observations are weighted with the population of the related geography. This prevents a county with a small population from carrying the same weight in the analysis as a county with a large population, which is important because population varies drastically from county to county in the U.S.
\subsection{Data}
The analysis is based on continental U.S. county level data combined with other data resources. The US county level vaccination data "Percent Adults (over 18) Fully Vaccinated Against COVID-19" (outcome variable) is provided by CDC (September 2021).\footnote{See \cite{Cdc2021-9}.} Other less time sensitive predictor variables such as Percent non-Hispanic Asians, Percent non-Hispanic Black, and Percent Hispanic are from a June 2021 CDC dataset.\footnote{See \cite{Cdc2021-6}.}
Although the majority of the U.S. counties have vaccination data available, there are a few exceptions. For example, Texas county-level vaccination data is not present in the CDC dataset. Fortunately, we were able to integrate Texas county level data from the Texas Health and Human Services Agency.\footnote{See \cite{Dshs2021}.}
Since we believe political affiliation is one of the important factors of how people respond to a public health crisis such as COVID-19, we downloaded the precinct-level U.S. 2020 election results from the New York Times’s github site.\footnote{See New York Times \cite{Nyt2021}.} The original precinct data are in GeoJSON format including “detailed data (...) representing 89 percent of all votes cast” (New York Times \cite{Nyt2021}). The precinct level election data is then aggregated to U.S. county level data based on county’s FIPS code using ArcGIS geoprocessing tools from Esri.
ArcGIS Enrichment tools\footnote{See \cite{Esri2021}.} were used to augment the dataset with Esri projected 2021 Census data including county population data (Total Population and Population over 18 Years Old) as well as with income related data (Percent of Food Stamp Recipients). Our dataset has 2630 observations (counties).
\section{Modelling Impact Factors for Vaccine Hesitation\label{SecAnalysis}}
\subsection{Ordinary Least Square Model (OLS)\label{SubSecRegression}}
To quantify the impact of various factors on vaccination behavior, we first use a multivariate OLS model. This ensures that the impacts of the predictor variables are measured simultaneously and consequently that the impact of each variable is controlled for the impact of other variables.
We consider race/ethnicity impacts measured as the percentage of Asians, African Americans, and Hispanics in each of the counties. Note that White is not considered because it is the residual (together with Other Races/Ethnicities).
We also included two age variables: the percentage of people in each county between 20 and 25 years $(Perc_{Young25})$ as well as the percentage of people older than 65 years $(Perc_{Old65})$, assuming that these two age groups show different vaccination behavior because of their low, respectively high, risk of serious complications when infected with the COVID-19 virus.
Since the two major political parties in the U.S. have diverging views about the need of vaccination and how vaccinations should be implemented, we added a variable that measures the percentage of votes Republicans received in the last presidential election $(Perc_{Rep})$. Note that a low percentage for Republicans indicates a high percentage for Democrats and vice versa (neglecting votes for presidential candidates other than Democrats or Republicans).
We also included income effects on vaccination behavior. After running several models with different variables to correct for income effects (median household income, percent of households below the poverty level, and percent of households receiving food stamps), we decided to use the percentage of households receiving food stamps $(Perc_{FoodSt})$ to control for income effects. Overall, the model results are similar regardless of which variable is used to control for income effects. However, the model based on $Perc_{FoodSt}$ provides the best results in terms of prediction quality, measured as adjusted $r^2$, as well as significance of the other explanatory variables ($p$ values). This is likely because low income rather than income in general influences vaccination rates. The related OLS model is shown below in equation \ref{eq:OlsModel}:
\begin{eqnarray}
Perc_{FullVac}&=&\beta_1 Perc_{FoodSt}+\beta_2 Perc_{Asian}+\beta_3 Perc_{Black} +\beta_4 Perc_{Hisp}\nonumber \\
&+&\beta_5Perc_{Young25}
+\beta_6 Perc_{Old65}
+\beta_7 Perc_{Rep}
+ \beta_0
\label{eq:OlsModel}
\end{eqnarray}
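The population-weighted fit of equation (\ref{eq:OlsModel}) amounts to solving the weighted normal equations. The Python sketch below demonstrates this on synthetic data; the county dataset itself is not reproduced here, so all numbers are placeholders.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-ins: 500 "counties", 7 predictors plus an intercept column.
N, M = 500, 7
X = np.column_stack([np.ones(N), rng.uniform(0.0, 1.0, size=(N, M))])
beta_true = np.array([0.7, -0.3, 0.1, -0.4, 0.05, -0.2, 0.1, -0.5])
pop = rng.integers(1_000, 1_000_000, size=N).astype(float)   # county populations
y = X @ beta_true + rng.normal(0.0, 0.01, size=N)            # synthetic vaccination rates

# Population-weighted least squares: minimize sum_i pop_i (y_i - x_i beta)^2,
# i.e. solve the weighted normal equations (X' W X) beta = X' W y with W = diag(pop).
w_norm = pop / pop.sum()
beta_hat = np.linalg.solve(X.T @ (w_norm[:, None] * X), X.T @ (w_norm * y))
```

With the low noise level chosen here, the weighted estimate recovers the generating coefficients closely; rescaling the weights by a constant leaves the solution unchanged.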
\begin{table}[htbp]
\begin{center}
\caption{OLS Regression Results}
\label{TabCountyOlsRegr}
\input{TabCountyRegressionsFoodSt.tex}
\end{center}
\end{table}
The results of the OLS analysis are displayed in Table \ref{TabCountyOlsRegr}. All variables are significant at the 99\% significance level and have the expected sign. The coefficients show by how much the vaccination rate is expected to increase when one of the variables increases by one percentage point. The variable $Perc_{Rep}$ has the strongest negative effect: the expected vaccination rate decreases by about half of the percentage change of the Republican vote. For example, if the Republican vote increases by 5 percentage points, the vaccination rate is expected to decrease by 2.5 percentage points (a factor of 0.5). The variable $Perc_{Black}$ shows the second strongest negative effect. An increase of $Perc_{Black}$ by one percentage point is related to a decrease of 0.4 percentage points in the expected vaccination rate. The latter shows the need for community outreach targeting Black neighborhoods.
The strongest positive impact can be observed for the percentage of Asians and of seniors in a county, with an increase of about 0.1 percentage points in the vaccination rate for each percentage point increase of the respective population share. This might confirm that the older population is more willing to get vaccinated because of their higher vulnerability. Also, Asians are generally known for taking greater precautions to avoid getting infected. The variable for young people, who are at lower risk of hospitalization or death, has a significant negative impact.
The impact of $Perc_{FoodSt}$ is negative, which reflects that poverty is an important issue when it comes to vaccination hesitancy. The positive impact of $Perc_{Hisp}$ on the expected vaccination rate is significant but quantitatively smaller than the impact of the other variables. The relatively small impact of $Perc_{Hisp}$ might be explained by two effects that work in different directions, resulting in a small overall impact. Hispanics might have a vaccination behavior similar to that of African Americans (negative impact on the vaccination rate). On the other hand, predominantly Hispanic neighborhoods were the ones hit hardest at the beginning of the pandemic (positive impact on the vaccination rate).\footnote{This argument will be confirmed when the results of the machine learning model are interpreted using SHAP values in Section \ref{SecCountyRandFor}.}
The OLS results have to be interpreted with great care. We cannot conclude that an increase of one of the variables in a specific county by one percentage point would lead to a vaccination rate lower by $x$ percentage points in that specific county. The results reported above hold only on average across all counties. When implementing machine learning in Section \ref{SecCountyRandFor}, we will derive so-called SHAP values for each county and each variable. These SHAP values reflect the impact of each explanatory variable on the expected vaccination rate of a specific county.
In addition, interactions between explanatory variables are neglected in the OLS model. For example, an increase of the Republican vote might have a different impact on vaccination rates in a poorer county (more food stamp recipients) than in a relatively richer county. A linear model like the one used here cannot capture these effects.\footnote{It is possible to add interaction terms explicitly as additional variables to an OLS model, but this is usually not as effective as using a tree-based machine learning model like Random Forest. The latter also does not require one to explicitly define interaction terms; it considers interactions in its dynamically optimized structure.} Therefore, in the next section we will apply a nonlinear Random Forest model to consider these effects.
\subsection{Random Forest Model\label{SecCountyRandFor}}
In this section we use a Random Forest model to predict vaccination rates. The idea behind a Random Forest model is presented in Section \ref{SecCountyRandForIdea} and the prediction results are discussed in Section \ref{SecCountyRandForPrediction}. We also quantify the impacts on the vaccination rate for each of the predictor variables separately for each county by using SHAP values (see Section \ref{SecCountyShap}).
In order to compare the predictive performance of the machine learning model with the OLS model we use the same data as before, but the observations are randomly assigned to a training dataset (2238 observations) and a testing dataset (392 observations). While the training dataset is used to find the optimal parameters for the OLS model and to optimize the Random Forest model, the testing dataset is never used for any type of optimization and is exclusively reserved to validate the prediction quality of the two models.
Optimizing both the OLS and the Random Forest model exclusively on the training data and comparing the models on the testing data is needed because most machine learning models, including Random Forest, have a tendency to overfit when the model's complexity is high.\footnote{While increasing the number of trees in a Random Forest model cannot lead to overfitting, increasing other hyper-parameters, such as the tree depth, can.} Overfitting occurs when a complex machine learning model approximates the training dataset almost perfectly but performs poorly on the testing dataset. This would make a comparison between the two models impossible if the complete dataset were used for training and at the same time for validation.
\begin{figure}[htbp]
\centering
\includegraphics[width=1\textwidth]{DecisionTree.png}
\caption{Decision Tree}\label{FigDecTree}
\end{figure}
\subsubsection{The Idea Behind a Random Forest Model\label{SecCountyRandForIdea}}
A Random Forest model consists of many Decision Tree models (2000 in our case). The prediction of a Random Forest model is calculated as the mean of the predictions from these Decision Tree models. Therefore, explaining the idea behind a Decision Tree model is a good starting point for understanding a Random Forest model.
In a Decision Tree (see Figure \ref{FigDecTree} for an example) each observation of a dataset is guided through a treelike structure along internal decision nodes and ends up in one of several bins (terminal nodes). At each internal node an observation is tested against an inequality. Depending on the outcome of the test, the observation moves down one of two branches (the left branch if the test condition is fulfilled, the right branch otherwise). This is repeated at the following internal nodes until the observation reaches a terminal node at the bottom of the tree. The decision rules at each internal node (choosing the predictor variable and the splitting value) are determined by an optimization algorithm based on the training data.
After decision rules for all nodes in a tree have been established and all training data observations have been sorted into the terminal nodes (bins), the decision tree can be used for predictions. For example a county with 53\% voting for Republicans, a proportion of 28\% of African Americans, and a proportion of 15\% of people older than 65 moves right at the first node (condition not fulfilled), moves left at the following node (condition fulfilled), and moves left (condition fulfilled) into the middle terminal node/bin of the tree in Figure \ref{FigDecTree}.
This county would be predicted to have a vaccination rate of 0.32. The reason is that the average vaccination rate of all training observation (counties) that also ended up in the same terminal node during the training process was 0.32. Overall 45 training observations ended up in the middle terminal node (2\% of the 2,238 training observations).
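The traversal just described is simply a chain of threshold tests. The Python sketch below mirrors the walk-through for the example county (53\% Republican vote, 28\% African Americans, 15\% seniors); the split variables follow the text, but the thresholds and the other leaf values are invented for illustration, since only the 0.32 leaf is given above.

```python
def predict_tree(county):
    """Walk one hypothetical decision tree; all thresholds are illustrative only."""
    if county["perc_rep"] <= 0.50:        # 53% Republican vote: condition fails -> right
        return 0.55                       # invented leaf value
    if county["perc_black"] <= 0.35:      # 28%: condition holds -> left
        if county["perc_old65"] <= 0.20:  # 15%: condition holds -> left (middle bin)
            return 0.32                   # mean rate of the 45 training counties in this bin
        return 0.45                       # invented leaf value
    return 0.28                           # invented leaf value

example_county = {"perc_rep": 0.53, "perc_black": 0.28, "perc_old65": 0.15}
prediction = predict_tree(example_county)   # -> 0.32, as in the walk-through above
```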
Decision trees, although intuitive, are called weak predictors because they respond sensitively to small changes in the data or optimization parameters. However, by combining many different decision trees --- different in terms of chosen variables and splitting values at each node --- this problem can be mitigated.
As stated earlier, a Random Forest prediction\footnote{See \cite{Breiman2001}.} is derived from the mean prediction of all its decision trees and usually delivers better and more stable predictions than the individual decision trees. The idea that a combination of weak predictors can lead to a strong prediction is analogous to the wisdom of crowds phenomenon described in \cite{Galton1907} where visitors at a stock and poultry exhibition in England submitted guesses about the weight of an ox. Although most of the visitors were off with their predictions, surprisingly the mean of all predictions was very close to the real weight of the ox.\footnote{See \cite{Galton1907}.}
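This averaging effect is easy to reproduce numerically. In the Python sketch below (synthetic numbers, not Galton's data), the error of the mean guess is far smaller than the error of a typical individual guess.

```python
import numpy as np

rng = np.random.default_rng(42)

true_weight = 1198.0                                      # the "truth" to be guessed
guesses = true_weight + rng.normal(0.0, 60.0, size=800)   # 800 noisy individual guesses

crowd_error = abs(guesses.mean() - true_weight)           # error of the averaged guess
typical_error = np.abs(guesses - true_weight).mean()      # mean individual error
```

With unbiased, independent errors, the error of the mean shrinks roughly with the square root of the number of guesses, which is the same mechanism that lets a forest of weak trees outperform any single tree.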
In order to generate diverse decision trees, two strategies are employed with Random Forest:
\begin{description}
\item Bagging:\footnote{See \cite{Breiman1996}.} Each Decision Tree is presented with a different training dataset, which has the same number of records as the original dataset and is generated by drawing with replacement from the original dataset (Bootstrapping\footnote{See \cite{Efron1979}.}).
\item Random Subspace Method:\footnote{See \cite{Ho1998}.} For each decision tree only a random subset of exogenous variables is considered as candidates for the decision rules. To find a reasonable value for the number of variables to be considered, a rule of thumb suggests using $\sqrt{M}$ randomly chosen predictor variables, where $M$ is the number of all predictor variables.
\end{description}
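Both strategies can be illustrated with a short Python sketch (hypothetical helper functions written for illustration, not the Ranger implementation):

```python
import math
import random

def bootstrap_sample(data, rng):
    """Bagging: draw len(data) records with replacement (Breiman 1996)."""
    return [rng.choice(data) for _ in data]

def candidate_variables(all_vars, rng):
    """Random subspace method: consider only ~sqrt(M) predictors per split."""
    m = int(math.sqrt(len(all_vars)))   # rule of thumb, rounded down
    return rng.sample(all_vars, m)

rng = random.Random(0)
predictors = ["PercRep", "PercBlack", "PercHisp", "PercAsian",
              "PercSenior", "PercYoung", "PercFoodSt"]
print(len(candidate_variables(predictors, rng)))  # sqrt(7) rounded down -> 2
```

With the seven predictors of this study, each split considers two randomly chosen candidate variables, matching the default used below.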
\subsubsection{Random Forest Predictions\label{SecCountyRandForPrediction}}
The Ranger R package was used to run the Random Forest model.\footnote{See \cite{RangerPackage} for details about the Ranger package.} The Random Forest model used consists of 2,000 decision trees, with a minimum node size of 5 (default)\footnote{When the minimum node size (observations in a decision node) is reached, a decision tree stops further branching at this node and the node becomes a terminal node.} and a subset of two variables (rounded down from $\sqrt{7}$; default) used as candidates for the decision rules. The default values were confirmed by cross validation on the training dataset.
When comparing the Random Forest model to the OLS model based on the testing data, the Random Forest model is able to reduce the Mean Absolute Error (MAE) from 8.3\% for the OLS model to 7.8\% for the Random Forest Model.
\begin{align*}
MAE=\frac{1}{N}\sum_{i=1}^N
\left|\widehat{V_i^{acc}}-V_i^{acc}\right|\quad \mbox{with: $\widehat{V_i^{acc}}:=$ predicted and $V_i^{acc}:=$ true vaccination rate}
\end{align*}
Since the squared error underlying $r^2$ was minimized during training for both the OLS and the Random Forest model, it makes sense to also compare performance in terms of $r^2$. Based on the testing data the Random Forest model improved to $r_{RF}^2=0.441$ compared to $r_{OLS}^2=0.399$ for the OLS.
\begin{align*}
r^2=1-\frac{\sum_{i=1}^N\left(\widehat{V_i^{acc}}-V_i^{acc}\right)^2}{\sum_{i=1}^N\left(\overline{V^{acc}}-V_i^{acc}\right)^2}\quad \mbox{with: $\widehat{V_i^{acc}}:=$ predicted, $V_i^{acc}:=$ true, and $\overline{V^{acc}}:=$ mean vaccination rate}
\end{align*}
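Both metrics are straightforward to compute. A minimal Python sketch with toy values (illustrative numbers, not the paper's data):

```python
def mae(pred, true):
    """Mean absolute error between predicted and true rates."""
    return sum(abs(p - t) for p, t in zip(pred, true)) / len(true)

def r_squared(pred, true):
    """Coefficient of determination: 1 - SS_res / SS_tot."""
    mean_true = sum(true) / len(true)
    ss_res = sum((p - t) ** 2 for p, t in zip(pred, true))
    ss_tot = sum((mean_true - t) ** 2 for t in true)
    return 1 - ss_res / ss_tot

# Toy vaccination rates for four hypothetical counties
true = [0.50, 0.60, 0.70, 0.80]
pred = [0.52, 0.58, 0.74, 0.76]
print(round(mae(pred, true), 3))        # -> 0.03
print(round(r_squared(pred, true), 3))  # -> 0.92
```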
Because machine learning models consider interactions between variables and are inherently non-linear, they usually provide better predictions than OLS models. However, in the past they were often considered black-box models because most machine learning models do not provide information about the quantitative impact of the predictor variables.
This is no longer true because significant progress has been made on Artificial Intelligence Explanations (AIX) such as LIME and SHAP values.\footnote{While Local Interpretable Model-Agnostic Explanations (LIME) (see \cite{Ribeiro2016}) is based on a local linear approximation of the prediction surface of the underlying machine learning model, SHAP values estimate the impact of each variable for each observation (counties in our case) and for each of the variables separately (see \cite{Lundberg2017}).} We will use SHAP values in the following section to quantify the impact of each predictor variable in each individual county on the predicted vaccination rate. SHAP values provide information not only about why but also about how much a predictor variable influences the vaccination rate.
\subsubsection{Quantifying the Impact of Predictor Variables Based on SHAP Values\label{SecCountyShap}}
Very recently, SHAP values became popular in the machine learning literature to quantify the impact of predictor variables for a trained machine learning model.\footnote{SHAP values for tree like machine learning models were first introduced by \cite{Lundberg2017} and are now used for a wide range of machine learning models. An introduction to the methodology can be found in \cite{Molnar2020}, Chapter 5 and a more intuitive introduction is provided by \cite{Mazzanti2020Shap}.} We use the R package ShapR to estimate SHAP values for every predictor variable and for each of the 2,630 counties.\footnote{See \cite{ShapRPackage} for details about the package.}
\begin{figure}[htbp]
\centerline{\includegraphics[width=0.7\textwidth]{Orange.png}}
\caption{SHAP Values for Orange County, CA}\label{FigShap3Counties}
\end{figure}
SHAP values are estimated separately for each individual observation (county) and each predictor variable based on a trained (Random Forest) model. In contrast, the coefficients in an OLS model only quantify the impact of the predictor variable on the {\it average} of all observations (counties).
A SHAP value indicates by how much a specific predictor variable contributes (negatively or positively) to a specific county's predicted vaccination rate. SHAP values are not only influenced by the value of the related predictor variable. They also depend on the interaction between all other predictor variables. While OLS coefficients can be estimated analytically, SHAP values are determined numerically based on repeated predicting.
In Figure \ref{FigShap3Counties} SHAP values together with the empirical values for the related variables are displayed for Orange County, CA. With 20.3\% Asians (see left panel in Figure \ref{FigShap3Counties}) the proportion of Asians living in the county is relatively high compared to the national average (5\%).\footnote{See \cite{Cdc2021-9}. Percent of population older than 18, where states provided vaccination data.} This relatively high proportion of Asians, combined with the influence from the interaction of the other variables, contributes a 9.8\% increase to the predicted vaccination rate (SHAP value for the variable Asians) in Orange County. The low proportion of Hispanics living in Orange County (4\%) compared to the national average (19\%), together with the interaction among the other variables, leads to a SHAP value of -1.5\% (a decrease of the predicted vaccination rate). The low proportion of African Americans living in Orange County (1.6\%) compared to the national average (12\%) impacts the vaccination rate slightly negatively with -1.7\%, while counties with a higher proportion of African Americans usually show a larger negative impact. E.g., Figure \ref{FigUniqueBlack} on page \pageref{FigUniqueBlack} shows a decrease of 4.6\% for the proportion of African Americans in Kemper County. The impact of young people, seniors, and poverty is also small in Orange County. In contrast, Politics has a larger impact in Orange County. Figure \ref{FigShap3Counties} shows that Orange County's vote for Republicans was about 45\% in the recent presidential election, which puts the county's political climate slightly in favor of Democrats. As a consequence, the SHAP value for Politics is +3.7\% (an increase of the predicted vaccination rate).
All SHAP values are in reference to the national average of the vaccination rate (62.6\%) as of September 21, 2021.\footnote{In order to keep SHAP values, Predicted Vaccination Rates, and the National Vaccination Rate compatible, the National Vaccinations Rate was calculated as a weighted average from the counties' vaccination rates. Weights were based on the population over 18 years of the respective county. Consequently, there are small differences between the National Vaccination Rate reported here and the one reported for the same time by the CDC (65.7\%).} The sum of the National Vaccination Rate and all SHAP values equals the Predicted Vaccination Rate:
\begin{eqnarray*}
\mbox{\bf Orange County, CA: }&& \\
\underbrace{0.716}_{Perc^{Orange}_{PredFullVac}}&=& \underbrace{0.626}_{Perc^{National}_{FullVac}}\\
&+&
\underbrace{0.098}_{SHAP_{Asian}}+
\underbrace{(-0.015)}_{SHAP_{Hisp}} +
\underbrace{(-0.017)}_{SHAP_{Black}}\\
&+&
\underbrace{(-0.007)}_{SHAP_{Senior}}+
\underbrace{(-0.008)}_{SHAP_{Young}}+
\underbrace{0.000}_{SHAP_{Poverty}}+
\underbrace{0.037}_{SHAP_{Politics}}
\end{eqnarray*}
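This additivity property (baseline plus all SHAP values equals the predicted rate) can be checked directly. A minimal Python sketch with illustrative placeholder SHAP values (not the exact figures above):

```python
national_rate = 0.626  # baseline: national average vaccination rate

# Illustrative SHAP values for a hypothetical county (placeholders, not the
# exact Orange County figures)
shap = {"Asian": 0.05, "Politics": 0.03, "Black": -0.02}

# Additivity: predicted rate = baseline + sum of all SHAP values
predicted = national_rate + sum(shap.values())
print(round(predicted, 3))  # -> 0.686
```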
This leaves the more technical question of how the quantitative impact of a variable $x$ for a county $i$ (i.e., a predictor variable's SHAP value) is estimated.
SHAP values are generated based on repeated predictions from the trained Random Forest model. The basic idea of generating a SHAP value for a specific predictor variable for a specific county $i$ can be illustrated as follows:
\begin{description}
\item[Step 1:] The Random Forest model is used to predict the vaccination rate for all counties based on a dataset where the variable $x$ is switched off. Since setting the values of variable $x$ to zero would introduce bias, the procedure of switching off a variable is embedded in the Decision Tree estimations.
\item[Step 2:] The prediction from Step 1 is repeated but now with the values of variable $x$ switched on.
\item[Step 3:] The (positive or negative) difference for the predicted vaccination rate for each county between Step 2 and Step 1 estimates the contribution of variable $x$ towards the predicted vaccination rate for each of the counties.
\end{description}
Steps 1 -- 3 are repeated for all possible coalitions of predictor variables, where the simplest one considers only variable $x$, others consider some predictor variables besides $x$, and the largest coalition considers all predictor variables. The estimated SHAP value for variable $x$ is then calculated as the weighted average of all contributions from all scenarios (coalitions) based on Step 3. The weighting schema ensures that for each county the sum of all SHAP values plus the national mean of all counties' vaccination rates equals the predicted vaccination rate.
The set of all possible coalitions forms what is called in mathematics a power set. The number of all possible coalitions in a power set can be calculated as $2^{NumberPredictors}$, which equals 128 ($2^7$) in our case. Given that we consider 7 predictor variables and 2,630 counties, the algorithm would have to repeat Steps 1--3 approximately 2.4 million times ($128 \cdot 2{,}630 \cdot 7$). SHAP algorithms like the ShapR algorithm used here\footnote{See \cite{ShapRPackage}.} optimize the procedure and thus lower the number of iterations. However, the calculation of the SHAP values displayed in Figure \ref{FigShapAllVar} is still very computationally intensive. It took about 16 hours on a computer with a 7th-generation Intel processor with 8 logical cores to estimate all SHAP values.
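The coalition count and the weighting can be sketched in a few lines of Python; the weight formula below is the classical Shapley weight, and ShapR's actual optimizations are not reproduced here:

```python
from math import comb, factorial

M = 7                  # predictor variables
counties = 2630

coalitions = 2 ** M    # size of the power set of predictors
print(coalitions)      # -> 128

# Naive upper bound on Step 1-3 repetitions (before any optimization)
print(coalitions * counties * M)  # -> 2356480, i.e. ~2.4 million

# Classical Shapley weight of a coalition of size s when attributing
# the contribution of one variable among M variables:
def shapley_weight(s, M=M):
    return factorial(s) * factorial(M - s - 1) / factorial(M)

# The weights over all coalitions a variable can join sum to 1,
# which is what makes the SHAP values add up to the prediction:
print(round(sum(comb(M - 1, s) * shapley_weight(s) for s in range(M)), 10))
```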
\section{Key Findings\label{SecKeyFindings}}
When plotting SHAP values for all analyzed counties organized by predictor variables, trends and non-linearities are present for some of the variables (see Section \ref{SecShapTrends}).
In addition, by analyzing the SHAP values for a few selected counties, unique insights into vaccination behavior can be discovered (see Section \ref{SecUniqueInsights}). These insights are a direct benefit of calculating SHAP values, because SHAP values allow analyzing individual observations (counties).
\begin{figure}[htbp]
\centering
\includegraphics[width=1.0\textwidth]{AllComplDataShapSept21.png}
\caption{SHAP Values for Predictor Variables for Counties from the Continental U.S.}\label{FigShapAllVar}
\end{figure}
\subsection{SHAP Value Trends by Predictor Variable Based on All Counties\label{SecShapTrends}}
We calculated SHAP values for each of the seven predictor variables for all continental U.S. counties where data were available. They are plotted in the seven diagrams in Figure \ref{FigShapAllVar}. In addition, a map that displays detailed information for each individual county similar to Figure \ref{FigShap3Counties} is available on the Internet.\footnote{See \href{https://www.cpp.edu/~clange/vacmap.html}{https://www.cpp.edu/\texttildelow clange/vacmap.html} for an interactive map.}
Each diagram in Figure \ref{FigShapAllVar} displays data points for 2,630 counties. Each point represents the value for the related predictor variable (horizontal axis) and the resulting SHAP value (vertical axis) for a specific U.S. county. Negative or positive SHAP values indicate the contribution to the vaccination rate for that county. The red trend lines are just provided for visualization purposes. No claim is made that the SHAP values follow these trends.
The exact impact of the predictor variables varies from county to county, but a large percentage of Republican voters as well as a high percentage of African American residents or a high rate of food stamp recipients impacts the predicted vaccination rate negatively compared to the national average vaccination rate (see panels 7, 2, and 6 in Figure \ref{FigShapAllVar}, respectively). On the other hand, when a county has a large percentage of Asians, its predicted vaccination rate tends to be higher.
The impact seems to be non-linear for the predictor variables $PercBlack$ and $PercFoodSt$ and linear for the variables $PercAsian$ and $PercRep$. The impact of age groups and the Hispanic population is inconclusive overall, but there seems to be a positive trend for counties with a predominant (greater than 65\%) proportion of Hispanics (see panel 3 in Figure \ref{FigShapAllVar}).
The results are mostly compatible with the traditional OLS regression analysis (see Section~\ref{eq:OlsModel}). However, the traditional regression analysis also indicates that overall a large percentage of Hispanic population or seniors in a county impacts the predicted vaccination rate positively, while a high percentage of younger people (between ages 20 and 25) tends to negatively affect the vaccination rate. The impact of these variables is mostly inconclusive for the machine learning model.
\subsection{Unique Insights \label{SecUniqueInsights}}
When analyzing and comparing selected counties, some interesting and unique insights about vaccination hesitancy in different U.S. counties can be discovered. Below are a few examples:
\paragraph{Predominantly Hispanic Counties Have High Vaccination Rates:}
\begin{figure}[htbp]
\centering
\includegraphics[width=0.57\textwidth]{Imperial.png}
\caption{Hispanic Vaccine Impact (SHAP) in Imperial County, CA}\label{FigUniqueHispanicic}
\end{figure}
Although we could not derive a clear overall trend for the impact of the predictor $PercHisp$ in Figure \ref{FigShapAllVar} on page \pageref{FigShapAllVar}, there is a clear and strong positive trend for counties with a predominantly Hispanic population. As long as the Hispanic population share exceeds 65 percent, the higher the proportion of Hispanics, the higher the vaccination rate. A possible reason for this finding could be that in the first wave of COVID, counties with a high Hispanic population share were hit hard with hospitalizations and deaths. Experiencing this first hand might have led to a high acceptance of vaccinations. This is supported by Figure \ref{FigUniqueHispanicic}, showing SHAP values for Imperial County, CA. This county has 84 percent Hispanic residents and is also high in poverty (10\% receiving food stamps). However, it has one of the highest vaccination rates in the country, and the largest impact on the vaccination rate (+15\%) stems from the SHAP value for Hispanic ethnicity.
\begin{figure}[htbp]
\centerline{
\includegraphics[width=0.565\textwidth]{Orange.png}
\includegraphics[width=0.575\textwidth]{Merced.png}}
\caption{Different Political Impact in the Counties of Orange, CA and Merced, CA}\label{FigShapOcAndMerced}
\end{figure}
\paragraph{Politics Matters but the Impact Varies Even in Similar Political Climates:}
When analyzing the impact of political affiliation many papers consider presidential votes as the only predictor variable. This neglects controlling for the effects of other variables like income or race and it also neglects the interactions between predictor variables. For example, survey results published by the Kaiser Family Foundation (see \cite{Kaiser2021}) indicate that Republicans are much more likely to say no to vaccinations, but the only predictor variable that was used to analyze this was the presidential vote for Republicans.
One of the key benefits of this study is to consider interaction effects of multiple variables. Why this is important can be seen in Figure \ref{FigShapOcAndMerced}: Both Orange County, CA and Merced County, CA have about 45 percent Republican voters, resulting in a political climate that is slightly Democratic. Although the proportions of the presidential votes are almost identical, the impact of Politics on the vaccination rate is more than double in Orange County compared to Merced County (+3.7\% vs. +1.3\%). This might be attributed to an indirect effect of the large Asian population in Orange County (20 percent Asian population), which might have increased the effect of the variable Politics in Orange County. This is in addition to the direct effect Asians have on the vaccination rate which is +9.8\% in Orange County compared to -1.3\% in Merced.
\paragraph{Not all Asian Communities are Equal:}
\begin{figure}[htbp]
\centerline{
\includegraphics[width=0.57\textwidth]{Marin.png}
\includegraphics[width=0.57\textwidth]{SanJoaquin.png}}
\caption{ASIAN Impact on the Vaccination Rate (SHAP) in Marin, CA and San Joaquin Counties, CA }\label{FigUniqueAsians}
\end{figure}
The model reveals for most counties that when a county has a larger percentage of Asians, its predicted vaccination rate tends to be higher, which might be due to cultural characteristics and vaccine mobilization efforts in local Asian communities (see Figure \ref{FigShapAllVar} on page \pageref{FigShapAllVar}). But not all Asian communities are equal. In the example in Figure \ref{FigUniqueAsians}, Marin County, CA has merely a 5.8 percent Asian population, while the nearby San Joaquin County has a 15 percent Asian population. However, due to other disparities in the two Asian communities, surprisingly the influence of the Asian population on the vaccination rate is higher in Marin County than in San Joaquin County (+9.8\% vs. +1.5\%). One of these disparities could be that many Asians in Marin County work in the computer industry. In contrast, in San Joaquin County many Asians work in the farming industry.
\paragraph{African American Communities Vary across Different Geographies:}
\begin{figure}[htbp]
\centerline{
\includegraphics[width=.57\textwidth]{kemper.png}
\includegraphics[width=0.57\textwidth]{PrinceGeorge.png}}
\caption{Comparing Kemper County, MS and Prince George County, MD}\label{FigUniqueBlack}
\end{figure}
Although the impact of the variable $PercentBlack$ on the vaccination rate is negative for most counties (see Figure \ref{FigShapAllVar} on page \pageref{FigShapAllVar}), this is not true in all cases. Figure \ref{FigUniqueBlack} shows proportions and SHAP values for Prince George County, MD and Kemper County, MS. Both counties are about 62\% African American. In Kemper County, the SHAP value for Black reflects a negative impact of 4.6\% on the vaccination rate. In contrast, in Prince George County, the impact is positive at 3.4\%. Because Prince George County borders Washington, DC, it is possible that its residents have a very different exposure to and understanding of current affairs than the residents of Kemper County and are thus more likely to get vaccinated.
\section{Summary}
The study uses an OLS model as benchmark and a Random Forest model to estimate which socioeconomic factors impact the vaccination rates in U.S. continental counties.
The Random Forest model generates a higher prediction quality than the OLS model. More importantly, the Random Forest model considers interactions between the predictor variables, accounts for non-linearities, and provides predictions for each individual county's vaccination rate. By implementing an Artificial Intelligence Explanations algorithm (SHAP values), it is possible in every county to quantify how much each impact factor contributes to the county's vaccination rate.
For most counties a higher percentage vote for Republicans, a greater African American population share, and a higher poverty rate lower the vaccination rate, while a higher Asian population share increases the predicted vaccination rate.
The impact on the vaccination rate from the Hispanic population proportion, the percentage of seniors, and the percentage of young people in a county is either weak or inconclusive. However, when the population share of Hispanics exceeds 65\%, the impact of Hispanics starts to increase.
An interactive online mapping dashboard that identifies impact factors for individual U.S. counties is available at \href{https://www.cpp.edu/~clange/vacmap.html}{https://www.cpp.edu/\texttildelow clange/vacmap.html}. It is apparent that the influence of impact factors is not universally the same across different geographies.
\section{Methodology and Data}
\input{A21Methodology.tex}
\input{A22Data.tex}
\newpage
\input{A3County.tex}
\newpage
\input{A5PolicyImplications.tex}
\input{A6Summary.tex}
\newpage
\printbibliography
\end{document} |
\chapter*{Preface}
\addcontentsline{toc}{chapter}{Preface}
This thesis is about local and non-local Dirichlet forms on the Sierpi\'nski gasket and the Sierpi\'nski carpet. We are concerned with the following three problems in analysis on the Sierpi\'nski gasket and the Sierpi\'nski carpet.
\begin{enumerate}[(1)]
\item A unified purely \emph{analytic} construction of local regular Dirichlet forms on the Sierpi\'nski gasket and the Sierpi\'nski carpet. We give a purely analytic construction of a self-similar local regular Dirichlet form on the Sierpi\'nski carpet using $\Gamma$-convergence of stable-like non-local closed forms which gives an answer to an open problem in analysis on fractals. We also apply this construction on the Sierpi\'nski gasket.
\item Determination of walk dimension \emph{without} using diffusion. Although the walk dimension is a parameter that determines the behaviour of diffusion, we give two approaches to the determination of the walk dimension \emph{prior} to the construction of diffusion.
\begin{itemize}
\item We construct non-local regular Dirichlet forms on the Sierpi\'nski gasket from regular Dirichlet forms on certain augmented rooted tree whose certain boundary at infinity is the Sierpi\'nski gasket. Then the walk dimension is determined by a critical value of a certain parameter of the random walk on the augmented rooted tree.
\item We determine a critical value of the index of a non-local quadratic form by finding a more convenient equivalent semi-norm.
\end{itemize}
\item Approximation of local Dirichlet forms by non-local Dirichlet forms. We prove that non-local Dirichlet forms can approximate local Dirichlet forms as direct consequences of our construction of local Dirichlet forms. We also prove that on the Sierpi\'nski gasket the local Dirichlet form can be obtained as a Mosco limit of non-local Dirichlet forms. Let us emphasize that we do \emph{not} need the subordination technique based on heat kernel estimates.
\end{enumerate}
\medskip
{\textbf{Acknowledgement}}
Firstly, I would like to express my sincere gratitude to my supervisor Prof. Alexander Grigor'yan. During the three years of my PhD program, he gave me a lot of valuable advice and continuous encouragement on my research. The most important thing I learnt from him is to solve difficult problems using simple ideas.
Secondly, I would like to thank Prof. Jiaxin Hu from Tsinghua University. Without his recommendation, I could not have the opportunity to do research in Bielefeld with many excellent colleagues.
Thirdly, I would like to thank Prof. Ka-Sing Lau from the Chinese University of Hong Kong. Due to his invitation, I was able to discuss fractals with many talented researchers in Hong Kong.
Fourthly, I would like to thank Prof. Martin T. Barlow, Prof. Zhen-Qing Chen, Prof. Wolfhard Hansen, Dr. Michael Hinz, Prof. Moritz Ka{\ss}mann, Prof. Jun Kigami, Prof. Takashi Kumagai and Prof. Alexander Teplyaev for their valuable suggestions and helpful discussions.
Fifthly, I would like to thank my friends, Eryan Hu, Yuhua Sun, Qingsong Gu, Shilei Kong and Jun Cao. They gave me a lot of help in my research and life. In particular, I would like to thank Eryan Hu for the discussions of basic theory of Dirichlet forms and Qingsong Gu for the discussions of two classical papers about the Sierpi\'nski carpet.
Finally, I would like to thank my parents, Gaoyong Yang and Yanfang Zhang for their consistent support.
\begin{flushright}
Bielefeld, December 19, 2018,
\hfill Meng Yang
\end{flushright}
\tableofcontents
\newpage
\pagenumbering{arabic}
\setcounter{page}{1}
\chapter{Introduction and Main Results}\label{ch_intro}
\section{Motivation and History}
This thesis is about local and non-local Dirichlet forms on the Sierpi\'nski gasket and the Sierpi\'nski carpet. Both the Sierpi\'nski gasket and the Sierpi\'nski carpet can be regarded as two-dimensional generalizations of the Cantor set.
The Sierpi\'nski gasket (SG) is a typical example of p.c.f. (post-critically finite) self-similar sets. The SG is the simplest self-similar set in some sense, see Figure \ref{fig_SG}.
\begin{figure}[ht]
\centering
\includegraphics[width=0.5\textwidth]{gasket.jpg}
\caption{The Sierpi\'nski Gasket}\label{fig_SG}
\end{figure}
The SG can be obtained as follows. Given an equilateral triangle with sides of length 1, divide the triangle into four congruent triangles, each with sides of length $1/2$, and remove the central one. Then divide each of the three remaining triangles into four congruent triangles, each with sides of length $1/4$, and remove the central ones, see Figure \ref{fig_SG_construction}. The SG is the compact connected set that remains after repeating the above procedure infinitely many times.
\begin{figure}[ht]
\centering
\subfigure{
\begin{tikzpicture}[scale=1.2]
\draw[fill=black] (0,0)--(4,0)--(2,2*1.7320508076)--cycle;
\draw[fill=white] (2,0)--(1,1*1.7320508076)--(3,1*1.7320508076)--cycle;
\end{tikzpicture}
}
\hspace{0.2in}
\subfigure{
\begin{tikzpicture}[scale=1.2]
\draw[fill=black] (0,0)--(4,0)--(2,2*1.7320508076)--cycle;
\draw[fill=white] (2,0)--(1,1*1.7320508076)--(3,1*1.7320508076)--cycle;
\draw[fill=white] (1,0)--(0.5,0.5*1.7320508076)--(1.5,0.5*1.7320508076)--cycle;
\draw[fill=white] (3,0)--(2.5,0.5*1.7320508076)--(3.5,0.5*1.7320508076)--cycle;
\draw[fill=white] (2,1.7320508076)--(1.5,1.5*1.7320508076)--(2.5,1.5*1.7320508076)--cycle;
\end{tikzpicture}
}
\caption{The Construction of the Sierpi\'nski Gasket}\label{fig_SG_construction}
\end{figure}
The Sierpi\'nski carpet (SC) is a typical example of \emph{non-p.c.f.} self-similar sets. It was first introduced by Wac\l aw Sierpi\'nski in 1916, see Figure \ref{fig_SC}.
\begin{figure}[ht]
\centering
\includegraphics[width=0.5\textwidth]{carpet.jpg}
\caption{The Sierpi\'nski Carpet}\label{fig_SC}
\end{figure}
The SC can be obtained as follows. Divide the unit square into nine congruent squares, each with sides of length $1/3$, and remove the central one. Then divide each of the eight remaining squares into nine congruent squares, each with sides of length $1/9$, and remove the central ones, see Figure \ref{fig_SC_construction}. Repeating the above procedure infinitely many times, we obtain the SC.
\begin{figure}[ht]
\centering
\begin{tikzpicture}[scale=0.4]
\draw[fill=black] (0,0)--(9,0)--(9,9)--(0,9)--cycle;
\draw[fill=white] (3,3)--(6,3)--(6,6)--(3,6)--cycle;
\draw[fill=black] (11,0)--(20,0)--(20,9)--(11,9)--cycle;
\draw[fill=white] (14,3)--(17,3)--(17,6)--(14,6)--cycle;
\draw[fill=white] (12,1)--(13,1)--(13,2)--(12,2)--cycle;
\draw[fill=white] (15,1)--(16,1)--(16,2)--(15,2)--cycle;
\draw[fill=white] (18,1)--(19,1)--(19,2)--(18,2)--cycle;
\draw[fill=white] (12,4)--(13,4)--(13,5)--(12,5)--cycle;
\draw[fill=white] (18,4)--(19,4)--(19,5)--(18,5)--cycle;
\draw[fill=white] (12,7)--(13,7)--(13,8)--(12,8)--cycle;
\draw[fill=white] (15,7)--(16,7)--(16,8)--(15,8)--cycle;
\draw[fill=white] (18,7)--(19,7)--(19,8)--(18,8)--cycle;
\end{tikzpicture}
\caption{The Construction of the Sierpi\'nski Carpet}\label{fig_SC_construction}
\end{figure}
In recent decades, self-similar sets have been regarded as underlying spaces for analysis and probability. Apart from classical Hausdorff measures, this approach requires the introduction of Dirichlet forms (DFs) including local ones and non-local ones.
\subsection*{Construction of Local Regular Dirichlet Forms}
Local regular Dirichlet forms and associated diffusions (also called Brownian motion (BM)) have been constructed in many fractals, see \cite{BP88,BB89,Lin90,KZ92,Kig93,Bar98,Kig01}. In p.c.f. self-similar sets including the SG, the construction is relatively transparent, while similar construction on the SC is much more involved.
The construction of BM on the SG was given by Barlow and Perkins \cite{BP88}. The construction of a local regular Dirichlet form on the SG was given by Kigami \cite{Kig89} using difference quotients method which was generalized to p.c.f. self-similar sets in \cite{Kig93,Kig01}. Subsequently, Strichartz \cite{Str01} gave the characterization of the Dirichlet form and the Laplacian using the averaging method.
BM on the SC was first constructed by Barlow and Bass \cite{BB89} using \emph{extrinsic} approximation domains in $\mathbb{R}^2$ (see black domains in Figure \ref{fig_SC_construction}) and time-changed reflected BMs in those domains. Technically, \cite{BB89} is based on the following two ingredients in approximation domains:
\begin{enumerate}[(a)]
\item\label{SC_enum_a} Certain resistance estimates.
\item\label{SC_enum_b} Uniform Harnack inequality for harmonic functions with Neumann boundary condition.
\end{enumerate}
For the proof of the uniform Harnack inequality, Barlow and Bass used certain probabilistic techniques based on Knight move argument (this argument was generalized later in \cite{BB99a} to deal also with similar problems in higher dimensions).
Subsequently, Kusuoka and Zhou \cite{KZ92} gave an alternative construction of BM on the SC using \emph{intrinsic} approximation graphs and Markov chains in those graphs. However, in order to prove the convergence of Markov chains to a diffusion, they used the two aforementioned ingredients of \cite{BB89}, reformulated in terms of approximation graphs.
An important fact about the local regular Dirichlet forms on the SG and the SC is that these Dirichlet forms are resistance forms in the sense of Kigami whose existence gives many important corollaries, see \cite{Kig01,Kig03,Kig12}.
\subsection*{Heat Kernel Estimates and Walk Dimension}
Let $K$ be the SG or the SC and $\mathcal{E}_{\mathrm{loc}}$ the self-similar local regular Dirichlet form on $K$. The heat semigroup associated with $\mathcal{E}_{\mathrm{loc}}$ has a heat kernel $p_t(x,y)$ satisfying the following estimates: for all $x,y\in K$, $t\in(0,1)$
\begin{equation}\label{eqn_hk}
p_t(x,y)\asymp\frac{C}{t^{\alpha/\beta^*}}\exp\left(-c\left(\frac{|x-y|}{t^{1/\beta^*}}\right)^{\frac{\beta^*}{\beta^*-1}}\right),
\end{equation}
where $\alpha$ is the Hausdorff dimension of $K$ and $\beta^*$ is a new parameter called the \emph{walk dimension of the BM}. It is frequently denoted also by $d_w$. The estimates (\ref{eqn_hk}) on the SG were obtained by Barlow and Perkins \cite{BP88}. The estimates (\ref{eqn_hk}) on the SC were obtained by Barlow and Bass \cite{BB92,BB99a} and by Hambly, Kumagai, Kusuoka and Zhou \cite{HKKZ00}. Equivalent conditions of sub-Gaussian heat kernel estimates for local regular Dirichlet forms on metric measure spaces were explored by many authors, see Andres and Barlow \cite{AB15}, Grigor'yan and Hu \cite{GH14a,GH14b}, Grigor'yan, Hu and Lau \cite{GHL10,GHL15}, Grigor'yan and Telcs \cite{GT12}.
It is known that
\begin{equation}\label{eqn_alpha}
\alpha=
\begin{cases}
\log3/\log2,&\text{for the SG},\\
\log8/\log3,&\text{for the SC},
\end{cases}
\end{equation}
and
\begin{equation}\label{eqn_beta_up}
\beta^*=
\begin{cases}
\log5/\log2,&\text{for the SG},\\
\log(8\rho)/\log3,&\text{for the SC},
\end{cases}
\end{equation}
where $\rho>1$ is a parameter from the aforementioned resistance estimates, whose exact value still remains unknown. Barlow, Bass and Sherwood \cite{BB90,BBS90} gave the following two bounds:
\begin{itemize}
\item $\rho\in[7/6,3/2]$, based on a shorting and cutting technique.
\item $\rho\in[1.25147,1.25149]$, based on numerical calculation.
\end{itemize}
McGillivray \cite{McG02} generalized the above estimates to higher dimensions.
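The numerical values of $\alpha$ and $\beta^*$ above, and the resulting bounds for the SC coming from the bounds on $\rho$, are easy to evaluate. A minimal sketch in Python (the function names are ours):

```python
import math

def sim_dim(num_maps: int, ratio: float) -> float:
    """Similarity (= Hausdorff) dimension log N / log(1/r) for an IFS
    with N maps of common contraction ratio r."""
    return math.log(num_maps) / math.log(1.0 / ratio)

alpha_SG = sim_dim(3, 1 / 2)          # log3/log2, approximately 1.585
alpha_SC = sim_dim(8, 1 / 3)          # log8/log3, approximately 1.893

beta_SG = math.log(5) / math.log(2)   # walk dimension of the SG, approximately 2.322

def beta_SC(rho: float) -> float:
    """Walk dimension of the SC, log(8*rho)/log(3), for a resistance parameter rho."""
    return math.log(8 * rho) / math.log(3)

# Bounds on beta* of the SC implied by rho in [7/6, 3/2]:
beta_SC_low, beta_SC_high = beta_SC(7 / 6), beta_SC(3 / 2)
```

With the numerical estimate $\rho\approx1.2515$ one gets $\beta^*\approx2.097$ for the SC; note that $\alpha<\beta^*$ on both fractals, consistent with the sub-Gaussian regime.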
Although the walk dimension $\beta^*$ of the BM appears as a parameter in the heat kernel estimates (\ref{eqn_hk}), it was proved in \cite{GHL03} by Grigor'yan, Hu and Lau that $\beta^*$ is in fact an invariant of the underlying metric measure space.
\subsection*{Approximation of Local DFs by Non-Local DFs}
Consider the following stable-like non-local quadratic form
\begin{equation}\label{eqn_nonlocal}
\begin{aligned}
&\mathcal{E}_\beta(u,u)=\int_K\int_K\frac{(u(x)-u(y))^2}{|x-y|^{\alpha+\beta}}\nu(\mathrm{d} x)\nu(\mathrm{d} y),\\
&\mathcal{F}_\beta=\myset{u\in L^2(K;\nu):\mathcal{E}_\beta(u,u)<+\infty},
\end{aligned}
\end{equation}
where $\alpha=\mathrm{dim}_{\mathcal{H}}K$ as above, $\nu$ is the normalized Hausdorff measure on $K$ of dimension $\alpha$, and $\beta>0$ is so far arbitrary.
Using the heat kernel estimates (\ref{eqn_hk}) and the subordination technique, it was proved in \cite{Pie08} that
\begin{equation}\label{eqn_approximation}
\varliminf_{\beta\uparrow\beta^*}(\beta^*-\beta)\mathcal{E}_\beta(u,u)\asymp\mathcal{E}_{\mathrm{loc}}(u,u)\asymp\varlimsup_{\beta\uparrow\beta^*}(\beta^*-\beta)\mathcal{E}_\beta(u,u)
\end{equation}
for all $u\in\mathcal{F}_{\mathrm{loc}}$.
This is similar to the following classical result
\begin{equation}\label{eqn_classical}
\lim_{\beta\uparrow2}(2-\beta)\int_{\mathbb{R}^n}\int_{\mathbb{R}^n}\frac{(u(x)-u(y))^2}{|x-y|^{n+\beta}}\mathrm{d} x\mathrm{d} y=C(n)\int_{\mathbb{R}^n}|\nabla u(x)|^2\mathrm{d} x,
\end{equation}
for all $u\in W^{1,2}(\mathbb{R}^n)$, where $C(n)$ is some positive constant (see \cite[Example 1.4.1]{FOT11}).
\section{Goals of the Thesis}
In this thesis, we are concerned with the following three problems in analysis on the SG and the SC.
\begin{enumerate}[(1)]
\item A unified purely \emph{analytic} construction of local regular DFs on the SG and the SC.
\item Determination of walk dimension \emph{without} using diffusion.
\item Approximation of local DFs by non-local DFs.
\end{enumerate}
\subsection*{Analytic Construction of Local Regular Dirichlet Forms}
The problem of a purely analytic construction of a local regular Dirichlet form on the SC (similar to that on p.c.f. self-similar sets) has been open until now and was explicitly raised by Hu \cite{Hu13}.
We give a direct purely \emph{analytic} construction of local regular Dirichlet forms which works both on the SC and the SG. Note that Kigami's construction cannot be applied to the SC because it relies on a certain monotonicity result and a harmonic extension result, both of which originate from a certain compatibility condition.
The most essential ingredient of our construction on the SC is a certain resistance estimate in approximation graphs, similar to the ingredient (\ref{SC_enum_a}). We obtain the second ingredient, the uniform Harnack inequality on approximation graphs, as a consequence of (\ref{SC_enum_a}). A possibility of such an approach was mentioned in \cite{BCK05}. In fact, in order to prove a uniform Harnack inequality on approximation graphs, we extend resistance estimates from finite graphs to the infinite graphical Sierpi\'nski carpet (see Figure \ref{fig_graphSC}) and then deduce from them a uniform Harnack inequality, first on the infinite graph and then also on finite graphs. By this argument, we avoid the most difficult part of the proof in \cite{BB89}.
\begin{figure}[ht]
\centering
\begin{tikzpicture}[scale=0.3]
\foreach \x in {0,1,...,27}
\draw (\x,0)--(\x,28);
\foreach \y in {0,1,...,27}
\draw (0,\y)--(28,\y);
\foreach \x in {0,1,2}
\foreach \y in {0,1,2}
\draw[fill=white] (9*\x+3,9*\y+3)--(9*\x+6,9*\y+3)--(9*\x+6,9*\y+6)--(9*\x+3,9*\y+6)--cycle;
\draw[fill=white] (9,9)--(18,9)--(18,18)--(9,18)--cycle;
\foreach \x in {0,1,...,27}
\foreach \y in {0,0.5,1,...,27.5}
\draw[fill=black] (\x,\y) circle (0.08);
\foreach \y in {0,1,...,27}
\foreach \x in {0,0.5,1,...,27.5}
\draw[fill=black] (\x,\y) circle (0.08);
\draw[fill=white,draw=white] (9.25,9.25)--(17.75,9.25)--(17.75,17.75)--(9.25,17.75)--cycle;
\foreach \x in {0,1,2}
\foreach \y in {0,1,2}
\draw[fill=white,draw=white] (9*\x+3.25,9*\y+3.25)--(9*\x+5.75,9*\y+3.25)--(9*\x+5.75,9*\y+5.75)--(9*\x+3.25,9*\y+5.75)--cycle;
\end{tikzpicture}
\caption{The Infinite Graphical Sierpi\'nski Carpet}\label{fig_graphSC}
\end{figure}
\subsection*{Determination of Walk Dimension Without Using Diffusion}
We develop techniques for determining the walk dimension of fractal spaces without using diffusions. Using the quadratic form $(\mathcal{E}_\beta,\mathcal{F}_\beta)$ defined in (\ref{eqn_nonlocal}), we define the walk dimension of the fractal space $K$ by
\begin{equation}\label{eqn_beta_low}
\beta_*:=\sup\myset{\beta>0:(\mathcal{E}_\beta,\mathcal{F}_\beta)\text{ is a regular Dirichlet form on }L^2(K;\nu)}.
\end{equation}
In fact, this definition applies to any metric measure space and does not require an a priori construction of any Dirichlet form.
However, if the local regular Dirichlet form with heat kernel estimates (\ref{eqn_hk}) is available, then by means of the subordination technique, it was proved in \cite{Pie00,GHL03} that $(\mathcal{E}_\beta,\mathcal{F}_\beta)$ is a regular Dirichlet form on $L^2(K;\nu)$ if $\beta\in(0,\beta^*)$ and that $\mathcal{F}_\beta$ consists only of constant functions if $\beta\in(\beta^*,+\infty)$, which implies the identity
$$\beta_*=\beta^*.$$
In this thesis, we provide direct approaches to the computation of $\beta_*$ based on (\ref{eqn_beta_low}) without using heat kernels.
We prove by means of these approaches that
$$\beta_*=
\begin{cases}
\log5/\log2,&\text{for the SG},\\
\log(8\rho)/\log3,&\text{for the SC},
\end{cases}
$$
thus matching (\ref{eqn_beta_up}).
On the SG, we provide an approach using a reflection technique and a trace technique from abstract Dirichlet form theory to construct non-local regular Dirichlet forms on the SG from regular Dirichlet forms on a certain augmented rooted tree, a certain boundary at infinity of which is the SG.
\subsection*{Approximation of Local DFs by Non-Local DFs}
We prove on the SG and the SC the relations (\ref{eqn_approximation}) between $\mathcal{E}_{\mathrm{loc}}$ and $\mathcal{E}_\beta$ without using the subordination technique. In fact, (\ref{eqn_approximation}) follows as a direct consequence of our construction of $\mathcal{E}_{\mathrm{loc}}$. Moreover, on the SG, we prove that $\mathcal{E}_{\mathrm{loc}}$ can be obtained as a Mosco limit of non-local Dirichlet forms $E_\beta$ as $\beta\uparrow\beta_*$, where $E_\beta\asymp\mathcal{E}_\beta$.
\vspace{1.5em}
Hence, in this thesis, we develop an alternative approach to analysis on fractals that is based on systematic use of the quadratic form $\mathcal{E}_\beta$ and the notion of the walk dimension $\beta_*$ defined in (\ref{eqn_beta_low}). Although this approach has been implemented on the SG and the SC, there are indications that it may work in more general spaces.
An ultimate goal of this approach would be to construct local regular Dirichlet forms on rather general fractal spaces as renormalized limits of $\mathcal{E}_\beta$ as $\beta\uparrow\beta_*$. However, this will be a subject of another work.
\section{Basic Notions of the SG and the SC}\label{sec_notion}
First, we give basic notions of the SG as follows.
Consider the following points in $\mathbb{R}^2$:
$$p_0=(0,0),p_1=(1,0),p_2=(\frac{1}{2},\frac{\sqrt{3}}{2}).$$
Let $f_i(x)=(x+p_i)/2$, $x\in\mathbb{R}^2$, $i=0,1,2$. Then the SG is the unique non-empty compact set $K$ in $\mathbb{R}^2$ satisfying $K=\cup_{i=0}^2f_i(K)$. Let $\nu$ be the normalized Hausdorff measure on $K$ of dimension $\alpha=\log3/\log2$. Then $(K,|\cdot|,\nu)$ is a metric measure space, where $|\cdot|$ is the Euclidean metric in $\mathbb{R}^2$.
Let
$$V_0=\myset{p_0,p_1,p_2},V_{n+1}=f_0(V_n)\cup f_1(V_n)\cup f_2(V_n)\text{ for all }n\ge0.$$
Then $\myset{V_n}$ is an increasing sequence of finite sets and $K$ is the closure of $V^*=\cup_{n=0}^\infty V_n$.
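The vertex sets $V_n$ can be generated directly from the definition. In the following sketch (the names are ours), a point $(x,t)$ represents $(x,t\sqrt{3}/2)$, so that all arithmetic is exact over the rationals:

```python
from fractions import Fraction

# p_0, p_1, p_2 in the representation (x, t) ~ (x, t*sqrt(3)/2).
P = [(Fraction(0), Fraction(0)), (Fraction(1), Fraction(0)),
     (Fraction(1, 2), Fraction(1))]

def sg_vertices(n: int) -> set:
    """V_n for the SG: V_0 = {p_0, p_1, p_2} and
    V_{k+1} = f_0(V_k) | f_1(V_k) | f_2(V_k), where f_i(x) = (x + p_i)/2."""
    V = set(P)
    for _ in range(n):
        V = {((x + px) / 2, (t + pt) / 2) for (x, t) in V for (px, pt) in P}
    return V
```

Since $f_i(p_i)=p_i$, the sets are automatically increasing, and one checks $\#V_n=(3^{n+1}+3)/2$.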
Let $W_0=\myset{\emptyset}$ and
$$W_n=\myset{w=w_1\ldots w_n:w_i=0,1,2,i=1,\ldots,n}\text{ for all }n\ge1.$$
For all
\begin{align*}
w^{(1)}&=w^{(1)}_1\ldots w^{(1)}_m\in W_m,\\
w^{(2)}&=w^{(2)}_1\ldots w^{(2)}_n\in W_n,
\end{align*}
denote $w^{(1)}w^{(2)}\in W_{m+n}$ by
$$w^{(1)}w^{(2)}=w^{(1)}_1\ldots w^{(1)}_mw^{(2)}_1\ldots w^{(2)}_n.$$
For all $i=0,1,2$, denote
$$i^n=\underbrace{i\ldots i}_{n\ \text{times}}.$$
For all $w=w_1\ldots w_{n-1}w_n\in W_n$, denote $w^-=w_1\ldots w_{n-1}\in W_{n-1}$.
For all $w=w_1\ldots w_n\in W_n$, let
\begin{align*}
f_w&=f_{w_1}\circ\ldots\circ f_{w_n},\\
V_w&=f_{w_1}\circ\ldots\circ f_{w_n}(V_0),\\
K_w&=f_{w_1}\circ\ldots\circ f_{w_n}(K),\\
P_w&=f_{w_1}\circ\ldots\circ f_{w_{n-1}}(p_{w_n}),
\end{align*}
where $f_\emptyset=\mathrm{id}$ is the identity map.
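The word maps can be made concrete: $f_w=f_{w_1}\circ\ldots\circ f_{w_n}$ is obtained by applying the letter maps from right to left, and $V_w=f_w(V_0)$. A small sketch (the names are ours), again writing a point $(x,t)$ for $(x,t\sqrt{3}/2)$ so that the arithmetic is exact:

```python
from fractions import Fraction

P = [(Fraction(0), Fraction(0)), (Fraction(1), Fraction(0)),
     (Fraction(1, 2), Fraction(1))]

def f(i, point):
    """f_i(x) = (x + p_i)/2 in the exact (x, t) representation."""
    (x, t), (px, pt) = point, P[i]
    return ((x + px) / 2, (t + pt) / 2)

def f_word(w, point):
    """f_w = f_{w_1} o ... o f_{w_n}: the last letter acts first."""
    for i in reversed(w):
        point = f(i, point)
    return point

V_01 = {f_word((0, 1), p) for p in P}   # V_w for the word w = 01
```

In particular $f_{i^n}(p_i)=p_i$ for every letter $i$, reflecting that $p_i$ is the fixed point of $f_i$.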
For all $n\ge1$, let $X_n$ be the graph with vertex set $W_n$ and edge set $H_n$ given by
$$H_n=\myset{(w^{(1)},w^{(2)}):w^{(1)},w^{(2)}\in W_n,w^{(1)}\ne w^{(2)},K_{w^{(1)}}\cap K_{w^{(2)}}\ne\emptyset}.$$
For example, $X_3$ is shown in Figure \ref{fig_X3}. Denote $w^{(1)}\sim_n w^{(2)}$ if $(w^{(1)},w^{(2)})\in H_n$. If $w^{(1)}\sim_nw^{(2)}$ satisfies $P_{w^{(1)}}\ne P_{w^{(2)}}$, we say that $w^{(1)}\sim_nw^{(2)}$ is of type \Rmnum{1}. If $w^{(1)}\sim_nw^{(2)}$ satisfies $P_{w^{(1)}}=P_{w^{(2)}}$, we say that $w^{(1)}\sim_nw^{(2)}$ is of type \Rmnum{2}. For example, $000\sim_3001$ is of type \Rmnum{1}, $001\sim_3010$ is of type \Rmnum{2}.
\begin{figure}[ht]
\centering
\begin{tikzpicture}
\draw (0,0)--(9,0);
\draw (0,0)--(9/2,9/2*1.7320508076);
\draw (9,0)--(9/2,9/2*1.7320508076);
\draw (3,0)--(3/2,3/2*1.7320508076);
\draw (6,0)--(15/2,3/2*1.7320508076);
\draw (3,3*1.7320508076)--(6,3*1.7320508076);
\draw (4,4*1.7320508076)--(5,4*1.7320508076);
\draw (7/2,7/2*1.7320508076)--(4,3*1.7320508076);
\draw (11/2,7/2*1.7320508076)--(5,3*1.7320508076);
\draw (1,1*1.7320508076)--(2,1.7320508076);
\draw (1/2,1/2*1.7320508076)--(1,0);
\draw (2,0)--(5/2,1/2*1.7320508076);
\draw (7,1.7320508076)--(8,1.7320508076);
\draw (13/2,1.7320508076/2)--(7,0);
\draw (17/2,1.7320508076/2)--(8,0);
\draw[fill=black] (0,0) circle (0.06);
\draw[fill=black] (1,0) circle (0.06);
\draw[fill=black] (2,0) circle (0.06);
\draw[fill=black] (3,0) circle (0.06);
\draw[fill=black] (6,0) circle (0.06);
\draw[fill=black] (7,0) circle (0.06);
\draw[fill=black] (8,0) circle (0.06);
\draw[fill=black] (9,0) circle (0.06);
\draw[fill=black] (1/2,1.7320508076/2) circle (0.06);
\draw[fill=black] (5/2,1.7320508076/2) circle (0.06);
\draw[fill=black] (13/2,1.7320508076/2) circle (0.06);
\draw[fill=black] (17/2,1.7320508076/2) circle (0.06);
\draw[fill=black] (1,1.7320508076) circle (0.06);
\draw[fill=black] (2,1.7320508076) circle (0.06);
\draw[fill=black] (7,1.7320508076) circle (0.06);
\draw[fill=black] (8,1.7320508076) circle (0.06);
\draw[fill=black] (3/2,3/2*1.7320508076) circle (0.06);
\draw[fill=black] (15/2,3/2*1.7320508076) circle (0.06);
\draw[fill=black] (3,3*1.7320508076) circle (0.06);
\draw[fill=black] (4,3*1.7320508076) circle (0.06);
\draw[fill=black] (5,3*1.7320508076) circle (0.06);
\draw[fill=black] (6,3*1.7320508076) circle (0.06);
\draw[fill=black] (7/2,7/2*1.7320508076) circle (0.06);
\draw[fill=black] (11/2,7/2*1.7320508076) circle (0.06);
\draw[fill=black] (4,4*1.7320508076) circle (0.06);
\draw[fill=black] (5,4*1.7320508076) circle (0.06);
\draw[fill=black] (9/2,9/2*1.7320508076) circle (0.06);
\draw (0,-0.3) node {$000$};
\draw (1,-0.3) node {$001$};
\draw (2,-0.3) node {$010$};
\draw (3,-0.3) node {$011$};
\draw (6,-0.3) node {$100$};
\draw (7,-0.3) node {$101$};
\draw (8,-0.3) node {$110$};
\draw (9,-0.3) node {$111$};
\draw (0.1,1/2*1.7320508076) node {$002$};
\draw (2.9,1/2*1.7320508076) node {$012$};
\draw (6.1,1/2*1.7320508076) node {$102$};
\draw (8.9,1/2*1.7320508076) node {$112$};
\draw (0.6,1*1.7320508076) node {$020$};
\draw (2.4,1*1.7320508076) node {$021$};
\draw (6.6,1*1.7320508076) node {$120$};
\draw (8.4,1*1.7320508076) node {$121$};
\draw (1.1,3/2*1.7320508076) node {$022$};
\draw (7.9,3/2*1.7320508076) node {$122$};
\draw (2.6,3*1.7320508076) node {$200$};
\draw (4,3*1.7320508076-0.3) node {$201$};
\draw (5,3*1.7320508076-0.3) node {$210$};
\draw (6.4,3*1.7320508076) node {$211$};
\draw (3.1,7/2*1.7320508076) node {$202$};
\draw (5.9,7/2*1.7320508076) node {$212$};
\draw (3.6,4*1.7320508076) node {$220$};
\draw (5.4,4*1.7320508076) node {$221$};
\draw (4.5,9/2*1.7320508076+0.3) node {$222$};
\end{tikzpicture}
\caption{$X_3$}\label{fig_X3}
\end{figure}
For all $n\ge1,u\in L^2(K;\nu)$, let $P_nu:W_n\to\mathbb{R}$ be given by
$$P_nu(w)=\frac{1}{\nu(K_{w})}\int_{K_w}u(x)\nu(\mathrm{d} x)=\int_K(u\circ f_w)(x)\nu(\mathrm{d} x),w\in W_n.$$
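$P_nu(w)$ is the $\nu$-average of $u$ over the cell $K_w$, equivalently the $\nu$-average of $u\circ f_w$ over $K$. Since the letters of the address of a $\nu$-random point are i.i.d.\ uniform, such averages can be approximated by sampling random addresses. A Monte Carlo sketch (the names are ours):

```python
import random

S = 3 ** 0.5 / 2
P = [(0.0, 0.0), (1.0, 0.0), (0.5, S)]

def random_point(depth: int = 30) -> tuple:
    """Approximate a nu-distributed point of the SG: repeatedly apply f_i
    for i.i.d. uniform letters i; the orbit converges to a random address."""
    x, y = 0.0, 0.0
    for _ in range(depth):
        px, py = random.choice(P)
        x, y = (x + px) / 2, (y + py) / 2
    return x, y

def average(u, samples: int = 20000) -> float:
    """Monte Carlo estimate of the integral of u over K with respect to nu
    (this is P_0 u for the empty word)."""
    return sum(u(*random_point()) for _ in range(samples)) / samples

random.seed(0)
mean_x = average(lambda x, y: x)   # by the x -> 1-x symmetry of nu, equals 1/2
```

The same estimator applied to $u\circ f_w$ approximates $P_nu(w)$ for any word $w$.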
Then, we give basic notions of the SC as follows.
Consider the following points in $\mathbb{R}^2$:
$$p_0=(0,0),p_1=(\frac{1}{2},0),p_2=(1,0),p_3=(1,\frac{1}{2}),$$
$$p_4=(1,1),p_5=(\frac{1}{2},1),p_6=(0,1),p_7=(0,\frac{1}{2}).$$
Let $f_i(x)=(x+2p_i)/3$, $x\in\mathbb{R}^2$, $i=0,\ldots,7$. Then the SC is the unique non-empty compact set $K$ in $\mathbb{R}^2$ satisfying $K=\cup_{i=0}^7f_i(K)$. Let $\nu$ be the normalized Hausdorff measure on $K$ of dimension $\alpha=\log8/\log3$. Then $(K,|\cdot|,\nu)$ is a metric measure space, where $|\cdot|$ is the Euclidean metric in $\mathbb{R}^2$.
Let
$$V_0=\myset{p_0,\ldots,p_7},V_{n+1}=\cup_{i=0}^7f_i(V_n)\text{ for all }n\ge0.$$
Then $\myset{V_n}$ is an increasing sequence of finite sets and $K$ is the closure of $V^*=\cup_{n=0}^\infty V_n$.
Let $W_0=\myset{\emptyset}$ and
$$W_n=\myset{w=w_1\ldots w_n:w_i=0,\ldots,7,i=1,\ldots,n}\text{ for all }n\ge1.$$
Similarly to the SG, define $w^{(1)}w^{(2)}\in W_{m+n}$ and $i^n$ for all $w^{(1)}\in W_m$, $w^{(2)}\in W_n$, $i=0,\ldots,7$.
For all $w=w_1\ldots w_n\in W_n$, let
\begin{align*}
f_w&=f_{w_1}\circ\ldots\circ f_{w_n},\\
V_w&=f_{w_1}\circ\ldots\circ f_{w_n}(V_0),\\
K_w&=f_{w_1}\circ\ldots\circ f_{w_n}(K),\\
P_w&=f_{w_1}\circ\ldots\circ f_{w_{n-1}}(p_{w_n}),
\end{align*}
where $f_\emptyset=\mathrm{id}$ is the identity map.
\section{Statement of the Main Results}
We list the main results of this thesis. We use the notions introduced in Section \ref{sec_notion}.
\subsection*{Analytic Construction of Local Regular Dirichlet Forms}
We give a unified purely analytic construction of local regular Dirichlet forms on the SG and the SC using $\Gamma$-convergence of non-local closed forms. On the SG, this construction is much more complicated than Kigami's, but it can be applied to more general fractal spaces, in particular the SC.
\begin{mythm}\label{thm_main_SG_con}
Let $K$ be the SG. There exists a self-similar strongly local regular Dirichlet form $(\mathcal{E}_{\mathrm{loc}},\mathcal{F}_{\mathrm{loc}})$ on $L^2(K;\nu)$ satisfying
\begin{align*}
&\mathcal{E}_{\mathrm{loc}}(u,u)\asymp\sup_{n\ge1}\left(\frac{5}{3}\right)^n\sum_{w^{(1)}\sim_nw^{(2)}}\left(P_nu(w^{(1)})-P_nu(w^{(2)})\right)^2,\\
&\mathcal{F}_{\mathrm{loc}}=\myset{u\in L^2(K;\nu):\mathcal{E}_{\mathrm{loc}}(u,u)<+\infty}.
\end{align*}
\end{mythm}
\begin{mythm}\label{thm_main_SC_con}
Let $K$ be the SC. There exists a self-similar strongly local regular Dirichlet form $(\mathcal{E}_{{\mathrm{loc}}},\mathcal{F}_{{\mathrm{loc}}})$ on $L^2(K;\nu)$ satisfying
\begin{align*}
&\mathcal{E}_{{\mathrm{loc}}}(u,u)\asymp\sup_{n\ge1}3^{(\beta^*-\alpha)n}\sum_{w\in W_n}
{\sum_{\mbox{\tiny
$
\begin{subarray}{c}
p,q\in V_w\\
|p-q|=2^{-1}\cdot3^{-n}
\end{subarray}
$
}}}
(u(p)-u(q))^2,\\
&\mathcal{F}_{{\mathrm{loc}}}=\myset{u\in C(K):\mathcal{E}_{\mathrm{loc}}(u,u)<+\infty}.
\end{align*}
\end{mythm}
Theorem \ref{thm_main_SG_con} is Theorem \ref{SG_con_thm_BM}. Theorem \ref{thm_main_SC_con} is Theorem \ref{SC_con_thm_BM}.
\subsection*{Determination of Walk Dimension Without Using Diffusion}
We determine the walk dimensions of the SG and the SC. The point of our approach is that we do \emph{not} need the diffusion. This gives a partial answer to a problem raised by Pietruska-Pa\l uba \cite[PROBLEM 3]{Pie09}.
Denote
$$\beta^*:=
\begin{cases}
\log5/\log2,&\text{for the SG},\\
\log(8\rho)/\log3,&\text{for the SC},
\end{cases}
$$
where $\rho$ is some parameter in resistance estimates.
Recall that
$$\beta_*:=\sup\myset{\beta>0:(\mathcal{E}_\beta,\mathcal{F}_\beta)\text{ is a regular Dirichlet form on }L^2(K;\nu)}.$$
\begin{mythm}\label{thm_main_det}
Let $K$ be the SG or the SC. For all $\beta\in(\alpha,\beta^*)$, the quadratic form $(\mathcal{E}_\beta,\mathcal{F}_\beta)$ is a regular Dirichlet form on $L^2(K;\nu)$. For all $\beta\in[\beta^*,+\infty)$, the space $\mathcal{F}_\beta$ consists only of constant functions. Consequently, $\beta_*=\beta^*$.
\end{mythm}
For the SG, this is Theorem \ref{SG_det_thm_E_K} and Theorem \ref{SG_det_thm_ub} (alternatively, see also Theorem \ref{SG_app_thm_det} and Theorem \ref{SG_con_thm_nonlocal}). For the SC, this is Theorem \ref{SC_con_thm_walk}.
We give bounds for the walk dimension of the SC as follows.
\begin{mythm}\label{thm_main_SC_bound}
For the SC, we have
$$\beta_*\in\left[\frac{\log\left(8\cdot\frac{7}{6}\right)}{\log3},\frac{\log\left(8\cdot\frac{3}{2}\right)}{\log3}\right].$$
\end{mythm}
This is Theorem \ref{SC_con_thm_bound}. This gives a partial answer to an open problem raised by Barlow \cite[Open Problem 2]{Bar13}.
\subsection*{Approximation of Local DFs by Non-Local DFs}
We give the approximation of local Dirichlet forms by non-local Dirichlet forms on the SG and the SC, which is a direct consequence of our construction. Pietruska-Pa\l uba mentioned in \cite{Pie09}, after Theorem 8 stating (\ref{eqn_approximation}), that ``how to do this without appealing to stochastic processes is unknown so far''. The point of our approach is that we do \emph{not} need the diffusion.
\begin{mythm}\label{thm_main_app}
Both on the SG and the SC, there exists some positive constant $C$ such that for all $u\in\mathcal{F}_{\mathrm{loc}}$, we have
$$\frac{1}{C}\mathcal{E}_{\mathrm{loc}}(u,u)\le\varliminf_{\beta\uparrow\beta^*}(\beta^*-\beta)\mathcal{E}_\beta(u,u)\le\varlimsup_{\beta\uparrow\beta^*}(\beta^*-\beta)\mathcal{E}_{\beta}(u,u)\le C\mathcal{E}_{\mathrm{loc}}(u,u).$$
\end{mythm}
For the SG, this is Corollary \ref{SG_con_cor_conv}. For the SC, this is Corollary \ref{SC_con_cor_approx}.
On the SG, we give a new semi-norm $E_\beta$ by
$$E_\beta(u,u):=\sum_{n=1}^\infty2^{(\beta-\alpha)n}\sum_{w\in W_n}\sum_{p,q\in V_w}(u(p)-u(q))^2.$$
\begin{mythm}\label{thm_main_Mosco}
Let $K$ be the SG.
\begin{enumerate}[(a)]
\item For all $\beta\in(\alpha,+\infty)$, for all $u\in C(K)$, we have
$$E_\beta(u,u)\asymp\mathcal{E}_\beta(u,u).$$
\item For all $u\in L^2(K;\nu)$, we have
$$(1-5^{-1}\cdot 2^{\beta})E_\beta(u,u)\uparrow\mathfrak{E}_{\mathrm{loc}}(u,u)$$
as $\beta\uparrow\beta^*=\log5/\log2$.
\item For every sequence $\myset{\beta_n}\subseteq(\alpha,\beta^*)$ with $\beta_n\uparrow\beta^*$, we have $(1-5^{-1}\cdot 2^{\beta_n})E_{\beta_n}\to\mathfrak{E}_{\mathrm{loc}}$ in the sense of Mosco.
\end{enumerate}
\end{mythm}
Part (a) is Theorem \ref{SG_app_thm_main}. Part (b) is Theorem \ref{SG_app_thm_incre}. Part (c) is Theorem \ref{SG_app_thm_conv_main}.
The intrinsic idea of this thesis is the \emph{discretization} of a non-local quadratic form on a self-similar set using a quadratic form on an augmented rooted tree or a scaled summation of quadratic forms on infinitely many finite graphs. This enables us to investigate many interesting problems related to Dirichlet forms on self-similar sets.
\section{Structure of the Thesis}
This thesis is organized as follows.
In Chapter \ref{ch_pre}, we collect some preliminaries for later chapters.
In Chapter \ref{ch_SG_det}, we determine the walk dimension of the SG without using diffusion. We construct non-local regular Dirichlet forms on the SG from regular Dirichlet forms on a certain augmented rooted tree, a certain boundary at infinity of which is the SG. This chapter is based on my joint work \cite{GY16} with Prof. Alexander Grigor'yan.
In Chapter \ref{ch_SG_app}, we consider approximation of the local Dirichlet form by non-local Dirichlet forms on the SG. This chapter is based on my work \cite{MY17}.
In Chapter \ref{ch_SG_con} and Chapter \ref{ch_SC_con}, we give a purely analytic construction of self-similar local regular Dirichlet forms on the SG and the SC using approximation of stable-like non-local closed forms. Chapter \ref{ch_SG_con} is based on my work \cite{MY18}. Chapter \ref{ch_SC_con} is based on my joint work \cite{GY17} with Prof. Alexander Grigor'yan.
NOTATION. The letters $c,C$ will always refer to some positive constants and may change at each occurrence. The sign $\asymp$ means that the ratio of the two sides is bounded from above and below by positive constants. The sign $\lesssim$ ($\gtrsim$) means that the LHS is bounded from above (below) by a positive constant times the RHS.
\chapter{Preliminary}\label{ch_pre}
In this chapter, we collect some preliminaries for later chapters.
\section{Dirichlet Form Theory}
The book \cite{FOT11} by Fukushima, Oshima and Takeda is a standard reference.
Let $H$ be a real Hilbert space with inner product $(\cdot,\cdot)$.
We say that $(\mathcal{E},\mathcal{D}[\mathcal{E}])$ is a symmetric form on $H$ if
\begin{itemize}
\item $\mathcal{D}[\mathcal{E}]$ is a dense subspace of $H$.
\item For all $u,v\in\mathcal{D}[\mathcal{E}]$, we have $\mathcal{E}(u,v)=\mathcal{E}(v,u)$.
\item For all $u,v,u_1,u_2,v_1,v_2\in\mathcal{D}[\mathcal{E}]$, $c\in\mathbb{R}$, we have
$$\mathcal{E}(u_1+u_2,v)=\mathcal{E}(u_1,v)+\mathcal{E}(u_2,v),\mathcal{E}(u,v_1+v_2)=\mathcal{E}(u,v_1)+\mathcal{E}(u,v_2),$$
$$\mathcal{E}(cu,v)=\mathcal{E}(u,cv)=c\mathcal{E}(u,v).$$
\item For all $u\in\mathcal{D}[\mathcal{E}]$, we have $\mathcal{E}(u,u)\ge0$.
\end{itemize}
For all $\alpha\in(0,+\infty)$, we denote $\mathcal{E}_\alpha(u,v)=\mathcal{E}(u,v)+\alpha(u,v)$ for all $u,v\in\mathcal{D}[\mathcal{E}]$.
We say that a symmetric form $(\mathcal{E},\mathcal{D}[\mathcal{E}])$ on $H$ is closed or a closed (symmetric) form if $(\mathcal{D}[\mathcal{E}],\mathcal{E}_1)$ is a Hilbert space.
We say that $\myset{T_t:t\in(0,+\infty)}$ is a semi-group on $H$ if
\begin{enumerate}[(1)]
\item For all $t\in(0,+\infty)$, $T_t$ is a symmetric operator with domain $\mathcal{D}(T_t)=H$.
\item For all $t\in(0,+\infty)$, $u\in H$, we have $(T_tu,T_tu)\le(u,u)$.
\item For all $t,s\in(0,+\infty)$, we have $T_t\circ T_s=T_{t+s}$.
\end{enumerate}
We say that a semi-group $\myset{T_t:t\in(0,+\infty)}$ on $H$ is strongly continuous if
\begin{itemize}
\item For all $u\in H$, we have $\lim_{t\downarrow0}(T_tu-u,T_tu-u)=0$.
\end{itemize}
We say that $\myset{G_\alpha:\alpha\in(0,+\infty)}$ is a resolvent on $H$ if
\begin{enumerate}[(1)]
\item For all $\alpha\in(0,+\infty)$, $G_\alpha$ is a symmetric operator with domain $\mathcal{D}(G_\alpha)=H$.
\item For all $\alpha\in(0,+\infty)$, $u\in H$, we have $(\alpha G_\alpha u,\alpha G_\alpha u)\le(u,u)$.
\item For all $\alpha,\beta\in(0,+\infty)$, we have $G_\alpha-G_\beta+(\alpha-\beta)G_\alpha\circ G_\beta=0$.
\end{enumerate}
We say that a resolvent $\myset{G_\alpha:\alpha\in(0,+\infty)}$ on $H$ is strongly continuous if
\begin{itemize}
\item For all $u\in H$, we have $\lim_{\alpha\to+\infty}(\alpha G_\alpha u-u,\alpha G_\alpha u-u)=0$.
\end{itemize}
\begin{myprop}(\cite[Exercise 1.3.1, Lemma 1.3.1, Lemma 1.3.2, Exercise 1.3.2, Theorem 1.3.1]{FOT11})
There exists a one-to-one correspondence among the family of closed forms, the family of non-positive definite self-adjoint operators, the family of strongly continuous semi-groups and the family of strongly continuous resolvents.
\end{myprop}
Let $(X,m)$ be a measure space. Let $L^2(X;m)$ be the space of all $L^2$-integrable extended-real-valued functions on $(X,m)$, then $L^2(X;m)$ is a real Hilbert space.
Let $(\mathcal{E},\mathcal{D}[\mathcal{E}])$ be a symmetric form on $L^2(X;m)$. We say that $(\mathcal{E},\mathcal{D}[\mathcal{E}])$ on $L^2(X;m)$ is Markovian or has the Markovian property if for all $\varepsilon\in(0,+\infty)$, there exists a function $\phi_\varepsilon:\mathbb{R}\to\mathbb{R}$ satisfying
$$\phi_\varepsilon(t)=t\text{ for all }t\in[0,1],$$
$$\phi_\varepsilon(t)\in[-\varepsilon,1+\varepsilon]\text{ for all }t\in\mathbb{R},$$
$$0\le\phi_\varepsilon(t)-\phi_\varepsilon(s)\le t-s\text{ for all }t,s\in\mathbb{R}\text{ with }t\ge s,$$
such that for all $u\in\mathcal{D}[\mathcal{E}]$, we have $\phi_\varepsilon(u)\in\mathcal{D}[\mathcal{E}]$ and $\mathcal{E}(\phi_\varepsilon(u),\phi_\varepsilon(u))\le\mathcal{E}(u,u)$.
A Dirichlet form is a Markovian closed symmetric form.
Let $u$ be a function on $X$. We say that $(u\vee0)\wedge1$ is the unit contraction of $u$. We say that $v$ is a normal contraction of $u$ if $|v(x)|\le|u(x)|$ and $|v(x)-v(y)|\le|u(x)-u(y)|$ for all $x,y\in X$.
We say that
\begin{enumerate}[(1)]
\item The unit contraction operates on $(\mathcal{E},\mathcal{D}[\mathcal{E}])$ on $L^2(X;m)$ if for all $u\in\mathcal{D}[\mathcal{E}]$, we have the unit contraction $v=(u\vee0)\wedge1\in\mathcal{D}[\mathcal{E}]$ and $\mathcal{E}(v,v)\le\mathcal{E}(u,u)$.
\item Every normal contraction operates on $(\mathcal{E},\mathcal{D}[\mathcal{E}])$ on $L^2(X;m)$ if for all $u\in\mathcal{D}[\mathcal{E}]$ and every normal contraction $v$ of $u$, we have $v\in\mathcal{D}[\mathcal{E}]$ and $\mathcal{E}(v,v)\le\mathcal{E}(u,u)$.
\end{enumerate}
Let $S$ be a linear operator with domain $\mathcal{D}(S)=L^2(X;m)$. We say that $S$ is Markovian if for all $u\in L^2(X;m)$ with $0\le u\le 1$ $m$-a.e., we have $0\le Su\le 1$ $m$-a.e.
We say that a semi-group $\myset{T_t:t\in(0,+\infty)}$ on $L^2(X;m)$ is Markovian if for all $t\in(0,+\infty)$, $T_t$ is Markovian.
We say that a resolvent $\myset{G_\alpha:\alpha\in(0,+\infty)}$ on $L^2(X;m)$ is Markovian if for all $\alpha\in(0,+\infty)$, $\alpha G_\alpha$ is Markovian.
Let $(X,d,m)$ be a metric measure space, that is, $(X,d)$ is a locally compact separable metric space and $m$ is a Radon measure on $X$ with full support.
\begin{myprop}(\cite[Theorem 1.4.1]{FOT11})
Let $(\mathcal{E},\mathcal{D}[\mathcal{E}])$ be a closed form on $L^2(X;m)$, $\myset{T_t:t\in(0,+\infty)}$ on $L^2(X;m)$ its corresponding strongly continuous semi-group and $\myset{G_\alpha:\alpha\in(0,+\infty)}$ on $L^2(X;m)$ its corresponding strongly continuous resolvent. Then the following are equivalent.
\begin{enumerate}[(1)]
\item $(\mathcal{E},\mathcal{D}[\mathcal{E}])$ on $L^2(X;m)$ is Markovian.
\item The unit contraction operates on $(\mathcal{E},\mathcal{D}[\mathcal{E}])$ on $L^2(X;m)$.
\item Every normal contraction operates on $(\mathcal{E},\mathcal{D}[\mathcal{E}])$ on $L^2(X;m)$.
\item $\myset{T_t:t\in(0,+\infty)}$ on $L^2(X;m)$ is Markovian.
\item $\myset{G_\alpha:\alpha\in(0,+\infty)}$ on $L^2(X;m)$ is Markovian.
\end{enumerate}
\end{myprop}
Let $C_c(X)$ denote the space of all continuous functions with compact support.
Let $(\mathcal{E},\mathcal{D}[\mathcal{E}])$ be a symmetric form on $L^2(X;m)$. We say that $(\mathcal{E},\mathcal{D}[\mathcal{E}])$ on $L^2(X;m)$ is regular if $\mathcal{D}[\mathcal{E}]\cap C_c(X)$ is $\mathcal{E}_1$-dense in $\mathcal{D}[\mathcal{E}]$ and uniformly dense in $C_c(X)$. We say that $(\mathcal{E},\mathcal{D}[\mathcal{E}])$ on $L^2(X;m)$ is local if for all $u,v\in\mathcal{D}[\mathcal{E}]$ with compact supports, we have $\mathcal{E}(u,v)=0$. We say that $(\mathcal{E},\mathcal{D}[\mathcal{E}])$ on $L^2(X;m)$ is strongly local if for all $u,v\in\mathcal{D}[\mathcal{E}]$ with compact supports and $v$ is constant in an open neighborhood of $\mathrm{supp}(u)$, we have $\mathcal{E}(u,v)=0$.
Let $(\mathcal{E},\mathcal{D}[\mathcal{E}])$ on $L^2(X;m)$ be a Dirichlet form and $\myset{T_t:t\in(0,+\infty)}$ on $L^2(X;m)$ its corresponding strongly continuous Markovian semi-group. Then for all $t\in(0,+\infty)$, $T_t$ can be extended to be a contractive linear operator on $L^\infty(X;m)$. We say that $(\mathcal{E},\mathcal{D}[\mathcal{E}])$ on $L^2(X;m)$ or $\myset{T_t:t\in(0,+\infty)}$ on $L^2(X;m)$ is conservative if $T_t1=1$ $m$-a.e. for all (or equivalently, for some) $t\in(0,+\infty)$.
We say that $(\mathcal{E},\mathcal{D}[\mathcal{E}])$ is a closed form on $L^2(X;m)$ in the wide sense if $\mathcal{D}[\mathcal{E}]$ is complete under the inner product $\mathcal{E}_1$ but $\mathcal{D}[\mathcal{E}]$ is not necessarily dense in $L^2(X;m)$. If $(\mathcal{E},\mathcal{D}[\mathcal{E}])$ is a closed form on $L^2(X;m)$ in the wide sense, we extend $\mathcal{E}$ to be $+\infty$ outside $\mathcal{D}[\mathcal{E}]$, hence the information of $\mathcal{D}[\mathcal{E}]$ is encoded in $\mathcal{E}$.
Note that a closed form is a closed form in the wide sense.
We collect the definitions and some results about $\Gamma$-convergence and Mosco convergence as follows.
\begin{mydef}\label{def_gamma}
Let $\mathcal{E}^n,\mathcal{E}$ be closed forms on $L^2(X;m)$ in the wide sense. We say that $\mathcal{E}^n$ is $\Gamma$-convergent to $\mathcal{E}$ if the following conditions are satisfied.
\begin{enumerate}[(1)]
\item For all $\myset{u_n}\subseteq L^2(X;m)$ that converges \emph{strongly} to $u\in L^2(X;m)$, we have
$$\varliminf_{n\to+\infty}\mathcal{E}^n(u_n,u_n)\ge\mathcal{E}(u,u).$$
\item For all $u\in L^2(X;m)$, there exists a sequence $\myset{u_n}\subseteq L^2(X;m)$ converging \emph{strongly} to $u$ in $L^2(X;m)$ such that
$$\varlimsup_{n\to+\infty}\mathcal{E}^n(u_n,u_n)\le\mathcal{E}(u,u).$$
\end{enumerate}
\end{mydef}
The following result shows that $\Gamma$-convergence is a very weak notion of convergence.
\begin{myprop}\label{prop_gamma}(\cite[Proposition 6.8, Theorem 8.5, Theorem 11.10, Proposition\\
\noindent 12.16]{Dal93})
Let $\myset{\mathcal{E}^n}$ be a sequence of closed forms on $L^2(X;m)$ in the wide sense, then there exist some subsequence $\myset{\mathcal{E}^{n_k}}$ and some closed form $(\mathcal{E},\mathcal{D}[\mathcal{E}])$ on $L^2(X;m)$ in the wide sense such that $\mathcal{E}^{n_k}$ is $\Gamma$-convergent to $\mathcal{E}$.
\end{myprop}
\begin{mydef}\label{def_Mosco}
Let $\mathcal{E}^n$, $\mathcal{E}$ be closed forms on $L^2(X;m)$. We say that $\mathcal{E}^n$ converges to $\mathcal{E}$ in the sense of Mosco if the following conditions are satisfied.
\begin{enumerate}[(1)]
\item\label{def_Mosco_1} For all $\myset{u_n}\subseteq L^2(X;m)$ that converges \emph{weakly} to $u\in L^2(X;m)$, we have
$$\varliminf_{n\to+\infty}\mathcal{E}^n(u_n,u_n)\ge\mathcal{E}(u,u).$$
\item\label{def_Mosco_2} For all $u\in L^2(X;m)$, there exists a sequence $\myset{u_n}\subseteq L^2(X;m)$ converging \emph{strongly} to $u$ in $L^2(X;m)$ such that
$$\varlimsup_{n\to+\infty}\mathcal{E}^n(u_n,u_n)\le\mathcal{E}(u,u).$$
\end{enumerate}
\end{mydef}
Let $\myset{T_t:t\in(0,+\infty)}$, $\myset{T^n_t:t\in(0,+\infty)}$ be the strongly continuous semi-groups on $L^2(X;m)$ and $\myset{G_\alpha:\alpha\in(0,+\infty)}$, $\myset{G^n_\alpha:\alpha\in(0,+\infty)}$ the strongly continuous resolvents on $L^2(X;m)$ corresponding to closed forms $(\mathcal{E},\mathcal{D}[\mathcal{E}])$, $(\mathcal{E}^n,\mathcal{D}[\mathcal{E}^n])$ on $L^2(X;m)$. We have the following equivalence.
\begin{myprop}(\cite[Theorem 2.4.1, Corollary 2.6.1]{Mos94})\label{prop_Mosco}
The following are equivalent.
\begin{enumerate}[(1)]
\item $\mathcal{E}^n$ converges to $\mathcal{E}$ in the sense of Mosco.
\item $T^n_tu\to T_tu$ in $L^2(X;m)$ for all $t\in(0,+\infty)$, $u\in L^2(X;m)$.
\item $G^n_\alpha u\to G_\alpha u$ in $L^2(X;m)$ for all $\alpha\in(0,+\infty)$, $u\in L^2(X;m)$.
\end{enumerate}
\end{myprop}
We have the following corollary.
\begin{mycor}\label{cor_Mosco}
Let $(\mathcal{E},\mathcal{D}[\mathcal{E}])$ be a closed form on $L^2(X;m)$. Then for all $\myset{u_n}\subseteq L^2(X;m)$ that converges \emph{weakly} to $u\in L^2(X;m)$, we have
\begin{equation}\label{eqn_Mosco}
\mathcal{E}(u,u)\le\varliminf_{n\to+\infty}\mathcal{E}(u_n,u_n).
\end{equation}
\end{mycor}
\begin{proof}
Let $\mathcal{E}^n=\mathcal{E}$ for all $n\ge1$. By Proposition \ref{prop_Mosco}, $\mathcal{E}^n$ trivially converges to $\mathcal{E}$ in the sense of Mosco. Equation (\ref{eqn_Mosco}) then follows immediately from Definition \ref{def_Mosco} (\ref{def_Mosco_1}).
\end{proof}
Note that a direct proof of Corollary \ref{cor_Mosco} would be tedious.
\section{Some Results on the SG}\label{sec_SG}
We use the notation for the SG introduced in Section \ref{sec_notion}.
Let us recall Kigami's classical construction of the self-similar local regular Dirichlet form on the SG.
\begin{mythm}\label{thm_SG_con}(\cite{Kig89,Kig93,Kig01})
Let
$$\mathfrak{E}_n(u,u)=\left(\frac{5}{3}\right)^n\sum_{w\in W_n}\sum_{p,q\in V_w}(u(p)-u(q))^2,n\ge0,u\in l(K),$$
where $l(S)$ is the set of all real-valued functions on the set $S$. Then $\mathfrak{E}_n(u,u)$ is monotone increasing in $n$ for all $u\in l(K)$. Let
$$
\begin{aligned}
&\mathfrak{E}_{\mathrm{loc}}(u,u)=\lim_{n\to+\infty}\mathfrak{E}_n(u,u)=\lim_{n\to+\infty}
\left(\frac{5}{3}\right)^n\sum_{w\in W_n}\sum_{p,q\in V_w}(u(p)-u(q))^2,\\
&\mathfrak{F}_{\mathrm{loc}}=\myset{u\in C(K):\mathfrak{E}_{\mathrm{loc}}(u,u)<+\infty},
\end{aligned}
$$
then $(\mathfrak{E}_{\mathrm{loc}},\mathfrak{F}_{\mathrm{loc}})$ is a self-similar local regular Dirichlet form on $L^2(K;\nu)$.
\end{mythm}
Given $x_0,x_1,x_2\in\mathbb{R}$, we define $U=U^{(x_0,x_1,x_2)}:K\to\mathbb{R}$ as follows. We define $U$ on $V^*$ by induction. Let $U(p_i)=x_i$, $i=0,1,2$. Assume that we have defined $U$ on $P_{w}$ for all $w\in W_{n+1}$. For all $w\in W_n$, noting that $P_{wii}=P_{wi}$ and $P_{wij}=P_{wji}$ for all $i,j=0,1,2$, define
$$
\begin{aligned}
U(P_{w01})&=U(P_{w10})=\frac{2U(P_{w0})+2U(P_{w1})+U(P_{w2})}{5},\\
U(P_{w12})&=U(P_{w21})=\frac{U(P_{w0})+2U(P_{w1})+2U(P_{w2})}{5},\\
U(P_{w02})&=U(P_{w20})=\frac{2U(P_{w0})+U(P_{w1})+2U(P_{w2})}{5}.\\
\end{aligned}
$$
This gives the definition of $U$ on $P_w$ for all $w\in W_{n+2}$. Then $U$ is well-defined and uniformly continuous on $V^*$, hence it extends uniquely to a continuous function $U$ on $K$.
Let
$$\mathcal{U}=\myset{U^{(x_0,x_1,x_2)}:x_0,x_1,x_2\in\mathbb{R}}.$$
The family $\mathcal{U}$ has the following energy and separation properties.
\begin{mythm}(\cite{Kig89,Kig93,Kig01})\label{thm_SG_fun}
\begin{enumerate}[(1)]
\item For all $U=U^{(x_0,x_1,x_2)}\in\mathcal{U}$, $n\ge0$, we have
$$\sum_{w\in W_n}\sum_{p,q\in V_w}(U(p)-U(q))^2=\left(\frac{3}{5}\right)^n\left((x_0-x_1)^2+(x_1-x_2)^2+(x_0-x_2)^2\right).$$
\item $\mathcal{U}$ separates points, that is, for all $x,y\in K$ with $x\ne y$, there exists $U\in\mathcal{U}$ such that $U(x)\ne U(y)$.
\end{enumerate}
\end{mythm}
\begin{myrmk}
In Kigami's construction, $U^{(x_0,x_1,x_2)}$ is the standard harmonic function with boundary values $x_0,x_1,x_2$ on $p_0,p_1,p_2$, respectively.
\end{myrmk}
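To make the scaling concrete, the following Python sketch (an illustration only; the function names are ours, not part of Kigami's construction) implements the harmonic extension rule above with exact rational arithmetic and checks the identity of Theorem \ref{thm_SG_fun} (1): the level-$n$ pairwise energy of a harmonic function equals $(3/5)^n$ times the level-$0$ energy, so the renormalized energies $\mathfrak{E}_n(U,U)$ are constant in $n$.

```python
from fractions import Fraction

def children(cell):
    """Split a cell with corner values (a, b, c) into its three subcells,
    filling the midpoints by the harmonic extension rule (2-2-1 over 5)."""
    a, b, c = cell
    m01 = (2*a + 2*b + c) / 5
    m12 = (a + 2*b + 2*c) / 5
    m02 = (2*a + b + 2*c) / 5
    return [(a, m01, m02), (m01, b, m12), (m02, m12, c)]

def cell_energy(cell):
    # sum of (u(p)-u(q))^2 over unordered pairs of corners of one cell
    a, b, c = cell
    return (a - b)**2 + (b - c)**2 + (a - c)**2

def level_energy(x0, x1, x2, n):
    """Sum of cell energies over all cells w in W_n for the harmonic
    extension of boundary values (x0, x1, x2)."""
    cells = [(Fraction(x0), Fraction(x1), Fraction(x2))]
    for _ in range(n):
        cells = [ch for cell in cells for ch in children(cell)]
    return sum(cell_energy(cell) for cell in cells)
```

Since the arithmetic is exact, the factor $(3/5)^n$ can be verified as an identity rather than up to rounding.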
\begin{mylem}\label{lem_SG_holder}(\cite[Theorem 4.11 (\rmnum{3})]{GHL03})
For all $u\in L^2(K;\nu)$, let
$$
\begin{aligned}
E(u)&=\sum_{n=1}^\infty2^{(\beta-\alpha)n}\sum_{w^{(1)}\sim_nw^{(2)}}\left(P_nu(w^{(1)})-P_nu(w^{(2)})\right)^2,\\
F(u)&=\sup_{n\ge1}2^{(\beta-\alpha)n}\sum_{w^{(1)}\sim_nw^{(2)}}\left(P_nu(w^{(1)})-P_nu(w^{(2)})\right)^2.
\end{aligned}
$$
Then for all $\beta\in(\alpha,+\infty)$, there exists some positive constant $c$ such that
\begin{equation}\label{eqn_holder_E}
|u(x)-u(y)|^2\le cE(u)|x-y|^{\beta-\alpha},
\end{equation}
\begin{equation}\label{eqn_holder_F}
|u(x)-u(y)|^2\le cF(u)|x-y|^{\beta-\alpha},
\end{equation}
for $\nu$-almost every $x,y\in K$ and for all $u\in L^2(K;\nu)$.
\end{mylem}
\begin{myrmk}
If $u\in L^2(K;\nu)$ satisfies $E(u)<+\infty$ or $F(u)<+\infty$, then $u$ has a continuous version in $C^{\frac{\beta-\alpha}{2}}(K)$. The proof of the above lemma does not rely on the heat kernel.
\end{myrmk}
Let us introduce the Besov spaces on the SG as follows. Define
$$
\begin{aligned}
\left[u\right]_{B^{2,2}_{\alpha,\beta}(K)}&=\sum_{n=1}^\infty2^{(\alpha+\beta)n}\int\limits_K\int\limits_{B(x,2^{-n})}(u(x)-u(y))^2\nu(\mathrm{d} y)\nu(\mathrm{d} x),\\
\left[u\right]_{B^{2,\infty}_{\alpha,\beta}(K)}&=\sup_{n\ge1}2^{(\alpha+\beta)n}\int\limits_K\int\limits_{B(x,2^{-n})}(u(x)-u(y))^2\nu(\mathrm{d} y)\nu(\mathrm{d} x),
\end{aligned}
$$
and
$$
\begin{aligned}
B_{\alpha,\beta}^{2,2}(K)&=\myset{u\in L^2(K;\nu):\left[u\right]_{B^{2,2}_{\alpha,\beta}(K)}<+\infty},\\
B_{\alpha,\beta}^{2,\infty}(K)&=\myset{u\in L^2(K;\nu):\left[u\right]_{B^{2,\infty}_{\alpha,\beta}(K)}<+\infty}.
\end{aligned}
$$
\section{Some Results on the SC}\label{sec_SC}
We use the notation for the SC introduced in Section \ref{sec_notion}.
\begin{mylem}\label{lem_SC_holder}(\cite[Theorem 4.11 (\rmnum{3})]{GHL03})
For all $u\in L^2(K;\nu)$, let
$$E(u)=\int_K\int_K\frac{(u(x)-u(y))^2}{|x-y|^{\alpha+\beta}}\nu(\mathrm{d} x)\nu(\mathrm{d} y),$$
$$F(u)=\sup_{n\ge1}3^{(\alpha+\beta)n}\int_K\int_{B(x,3^{-n})}(u(x)-u(y))^2\nu(\mathrm{d} y)\nu(\mathrm{d} x).$$
Then for all $\beta\in(\alpha,+\infty)$, there exists some positive constant $c$ such that
$$
\begin{aligned}
|u(x)-u(y)|^2&\le cE(u)|x-y|^{\beta-\alpha},\\
|u(x)-u(y)|^2&\le cF(u)|x-y|^{\beta-\alpha},
\end{aligned}
$$
for $\nu$-almost every $x,y\in K$ and for all $u\in L^2(K;\nu)$.
\end{mylem}
\begin{myrmk}
If $u\in L^2(K;\nu)$ satisfies $E(u)<+\infty$ or $F(u)<+\infty$, then $u$ has a continuous version in $C^{\frac{\beta-\alpha}{2}}(K)$. The proof of the above lemma does not rely on the heat kernel.
\end{myrmk}
Let us introduce the Besov spaces on the SC as follows. Define
$$
\begin{aligned}
\left[u\right]_{B^{2,2}_{\alpha,\beta}(K)}&=\sum_{n=1}^\infty3^{(\alpha+\beta)n}\int\limits_K\int\limits_{B(x,3^{-n})}(u(x)-u(y))^2\nu(\mathrm{d} y)\nu(\mathrm{d} x),\\
\left[u\right]_{B^{2,\infty}_{\alpha,\beta}(K)}&=\sup_{n\ge1}3^{(\alpha+\beta)n}\int\limits_K\int\limits_{B(x,3^{-n})}(u(x)-u(y))^2\nu(\mathrm{d} y)\nu(\mathrm{d} x),
\end{aligned}
$$
and
$$
\begin{aligned}
B_{\alpha,\beta}^{2,2}(K)&=\myset{u\in L^2(K;\nu):[u]_{B^{2,2}_{\alpha,\beta}(K)}<+\infty},\\
B_{\alpha,\beta}^{2,\infty}(K)&=\myset{u\in L^2(K;\nu):[u]_{B^{2,\infty}_{\alpha,\beta}(K)}<+\infty}.
\end{aligned}
$$
\section{Some Auxiliary Results}
We give two techniques from electrical networks.
The first is the $\Delta$-Y transform (see \cite[Lemma 2.1.15]{Kig01}).
\begin{mylem}\label{lem_DeltaY}
The electrical networks in Figure \ref{fig_DeltaY} are equivalent, where
$$
\begin{aligned}
R_1&=\frac{R_{12}R_{31}}{R_{12}+R_{23}+R_{31}},\\
R_2&=\frac{R_{12}R_{23}}{R_{12}+R_{23}+R_{31}},\\
R_3&=\frac{R_{23}R_{31}}{R_{12}+R_{23}+R_{31}},
\end{aligned}
$$
and
$$
\begin{aligned}
R_{12}&=\frac{R_1R_2+R_2R_3+R_3R_1}{R_3},\\
R_{23}&=\frac{R_1R_2+R_2R_3+R_3R_1}{R_1},\\
R_{31}&=\frac{R_1R_2+R_2R_3+R_3R_1}{R_2}.
\end{aligned}
$$
\begin{figure}[ht]
\centering
\subfigure[$\Delta$-circuit]{
\begin{tikzpicture}
\draw (0,0)--(3,0)--(1.5,1.5*1.7320508076)--cycle;
\draw[fill=black] (0,0) circle (0.06);
\draw[fill=black] (3,0) circle (0.06);
\draw[fill=black] (1.5,1.5*1.7320508076) circle (0.06);
\draw (0,-0.3) node {$p_2$};
\draw (3,-0.3) node {$p_3$};
\draw (1.5,1.5*1.7320508076+0.3) node {$p_1$};
\draw (1.5,-0.3) node {$R_{23}$};
\draw (0.5,0.75*1.7320508076+0.3) node {$R_{12}$};
\draw (2.5,0.75*1.7320508076+0.3) node {$R_{31}$};
\end{tikzpicture}
}
\hspace{1in}
\subfigure[Y-circuit]{
\begin{tikzpicture}
\draw (0,0)--(1.5,1.5/1.7320508076);
\draw (3,0)--(1.5,1.5/1.7320508076);
\draw (1.5,1.5*1.7320508076)--(1.5,1.5/1.7320508076);
\draw[fill=black] (0,0) circle (0.06);
\draw[fill=black] (3,0) circle (0.06);
\draw[fill=black] (1.5,1.5*1.7320508076) circle (0.06);
\draw[fill=black] (1.5,1.5/1.7320508076) circle (0.06);
\draw (0,-0.3) node {$p_2$};
\draw (3,-0.3) node {$p_3$};
\draw (1.5,1.5*1.7320508076+0.3) node {$p_1$};
\draw (1.5,1.5/1.7320508076-0.3) node {$p_0$};
\draw (1.5+0.3,0.75*1.7320508076+0.3) node {$R_{1}$};
\draw (0.7,0.7) node {$R_2$};
\draw (2.3,0.7) node {$R_3$};
\end{tikzpicture}
}
\caption{$\Delta$-Y Transform}\label{fig_DeltaY}
\end{figure}
\end{mylem}
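As a sanity check (an illustration only; the helper names are ours), the following Python sketch verifies with exact rational arithmetic that the two formula sets in Lemma \ref{lem_DeltaY} invert each other, and that both circuits yield the same two-terminal resistance when the third terminal is left open.

```python
from fractions import Fraction as Fr

def delta_to_y(R12, R23, R31):
    """Resistances (R1, R2, R3) of the Y-circuit equivalent to the Delta."""
    s = R12 + R23 + R31
    return (R12 * R31 / s, R12 * R23 / s, R23 * R31 / s)

def y_to_delta(R1, R2, R3):
    """Resistances (R12, R23, R31) of the Delta-circuit equivalent to the Y."""
    p = R1 * R2 + R2 * R3 + R3 * R1
    return (p / R3, p / R1, p / R2)

def parallel(a, b):
    # effective resistance of two resistors in parallel
    return a * b / (a + b)
```

Here $R_1,R_2,R_3$ join the center $p_0$ to $p_1,p_2,p_3$ respectively, matching Figure \ref{fig_DeltaY}.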
The second is the shorting and cutting technique (see \cite{DS84}). Shorting certain sets of vertices decreases the resistance between any two vertices, while cutting certain sets of vertices increases it.
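The effect of shorting and cutting can be seen on a toy network (our own hypothetical example, not from \cite{DS84}): terminals $a,b$ are joined by a branch $a$-$c$-$b$ with resistances $1,1$ and a branch $a$-$d$-$b$ with resistances $1,2$. Shorting $c$ and $d$ strictly decreases the effective resistance between $a$ and $b$, while cutting the branch through $d$ strictly increases it.

```python
from fractions import Fraction as Fr

def series(*rs):
    # resistors in series add
    return sum(rs)

def parallel(*rs):
    # resistors in parallel combine via reciprocals
    return 1 / sum(1 / r for r in rs)

one, two = Fr(1), Fr(2)

# terminals a, b joined by branch a-c-b (1 + 1) and branch a-d-b (1 + 2)
R_plain = parallel(series(one, one), series(one, two))    # 6/5
# shorting c and d merges them into a single node between a and b
R_short = series(parallel(one, one), parallel(one, two))  # 7/6
# cutting the branch through d leaves only a-c-b
R_cut = series(one, one)                                  # 2
```

Exact arithmetic makes the strict inequalities $R_{\mathrm{short}}<R_{\mathrm{plain}}<R_{\mathrm{cut}}$ verifiable as identities.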
We give two elementary results as follows.
\begin{myprop}\label{prop_ele1}
Let $\myset{x_n}$ be a \emph{monotone increasing} sequence in $[0,+\infty]$. Then $(1-\lambda)\sum_{n=1}^\infty\lambda^nx_n$ is monotone increasing in $\lambda\in(0,1)$ and
$$\lim_{\lambda\uparrow1}(1-\lambda)\sum_{n=1}^\infty\lambda^nx_n=\lim_{n\to+\infty}x_n=\sup_{n\ge1}x_n.$$
\end{myprop}
\begin{proof}
Let $\lambda_1,\lambda_2\in(0,1)$ with $\lambda_1<\lambda_2$. We show that
$$(1-\lambda_1)\sum_{n=1}^\infty\lambda_1^nx_n\le(1-\lambda_2)\sum_{n=1}^\infty\lambda_2^nx_n.$$
If $\sum_{n=1}^\infty\lambda_2^nx_n=+\infty$, then this result is obvious.
If $\sum_{n=1}^\infty\lambda_2^nx_n<+\infty$, then $\myset{x_n}\subseteq[0,+\infty)$. For all $\lambda\in(0,\lambda_2]$, we have
$$\sum_{n=1}^\infty\lambda^nx_n\le\sum_{n=1}^\infty\lambda_2^nx_n<+\infty,$$
hence
$$
\begin{aligned}
&(1-\lambda)\sum_{n=1}^\infty\lambda^nx_n=\sum_{n=1}^\infty\lambda^nx_n-\sum_{n=1}^\infty\lambda^{n+1}x_n=\sum_{n=1}^\infty\lambda^nx_n-\sum_{n=2}^\infty\lambda^{n}x_{n-1}\\
=&\lambda^1x_1+\sum_{n=2}^\infty\lambda^nx_n-\sum_{n=2}^\infty\lambda^{n}x_{n-1}=\lambda x_1+\sum_{n=2}^\infty\lambda^n(x_n-x_{n-1}).
\end{aligned}
$$
Since $\myset{x_n}$ is a monotone increasing sequence in $[0,+\infty)$, we have $x_n-x_{n-1}\ge0$ for all $n\ge2$. Hence $(1-\lambda)\sum_{n=1}^\infty\lambda^nx_n$ is monotone increasing in $\lambda\in(0,\lambda_2]$. In particular
$$(1-\lambda_1)\sum_{n=1}^\infty\lambda_1^nx_n\le(1-\lambda_2)\sum_{n=1}^\infty\lambda_2^nx_n.$$
Hence $(1-\lambda)\sum_{n=1}^\infty\lambda^nx_n$ is monotone increasing in $\lambda\in(0,1)$.
Since $\myset{x_n}$ is a monotone increasing sequence in $[0,+\infty]$, we denote
$$x_\infty=\lim_{n\to+\infty}x_n=\sup_{n\ge1}x_n\in[0,+\infty].$$
It is obvious that for all $\lambda\in(0,1)$, we have
$$(1-\lambda)\sum_{n=1}^\infty\lambda^nx_n\le(1-\lambda)\sum_{n=1}^\infty\lambda^nx_\infty=(1-\lambda)\frac{\lambda}{1-\lambda}x_\infty=\lambda x_\infty,$$
hence
$$\varlimsup_{\lambda\uparrow1}(1-\lambda)\sum_{n=1}^\infty\lambda^nx_n\le x_\infty.$$
On the other hand, for all $A<x_\infty$, there exists some positive integer $N\ge1$ such that for all $n>N$, we have $x_n>A$, hence
$$(1-\lambda)\sum_{n=1}^\infty\lambda^nx_n\ge(1-\lambda)\sum_{n=N+1}^\infty\lambda^nx_n\ge(1-\lambda)\sum_{n=N+1}^\infty\lambda^nA=(1-\lambda)\frac{\lambda^{N+1}}{1-\lambda}A=\lambda^{N+1}A,$$
hence
$$\varliminf_{\lambda\uparrow1}(1-\lambda)\sum_{n=1}^\infty\lambda^nx_n\ge A.$$
Since $A<x_\infty$ is arbitrary, we have
$$\varliminf_{\lambda\uparrow1}(1-\lambda)\sum_{n=1}^\infty\lambda^nx_n\ge x_\infty.$$
Therefore
$$\lim_{\lambda\uparrow1}(1-\lambda)\sum_{n=1}^\infty\lambda^nx_n=x_\infty=\lim_{n\to+\infty}x_n=\sup_{n\ge1}x_n.$$
\end{proof}
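Proposition \ref{prop_ele1} can be illustrated numerically (a sketch with our own helper; the truncation threshold is an arbitrary choice): for the monotone sequence $x_n=n/(n+1)$ with supremum $1$, the Abel means $(1-\lambda)\sum_{n\ge1}\lambda^nx_n$ increase in $\lambda$ and approach $1$ as $\lambda\uparrow1$.

```python
def abel_mean(x, lam, tol=1e-12):
    """(1 - lam) * sum_{n>=1} lam^n * x(n), truncated once the
    geometric weight lam^n drops below tol; for 0 <= x <= 1 the
    truncation error is at most tol."""
    total, power, n = 0.0, lam, 1
    while power > tol:
        total += power * x(n)
        power *= lam
        n += 1
    return (1 - lam) * total

x = lambda n: n / (n + 1)  # monotone increasing, sup = 1
lams = [0.5, 0.9, 0.99, 0.999]
means = [abel_mean(x, lam) for lam in lams]
```

The computed means increase along the grid and the last one is already within $10^{-2}$ of the supremum.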
\begin{myprop}\label{prop_ele2}
Let $\myset{x_n}$ be a sequence in $[0,+\infty]$. Then
\begin{enumerate}[(1)]
\item $$\varliminf_{n\to+\infty}x_n\le\varliminf_{\lambda\uparrow1}(1-\lambda)\sum_{n=1}^\infty\lambda^nx_n\le\varlimsup_{\lambda\uparrow1}(1-\lambda)\sum_{n=1}^\infty\lambda^nx_n\le\varlimsup_{n\to+\infty}x_n\le\sup_{n\ge1}x_n.$$
\item If there exists some positive constant $C$ such that
$$x_n\le Cx_{n+m}\text{ for all }n,m\ge1,$$
then
$$\sup_{n\ge1}x_n\le C\varliminf_{n\to+\infty}x_n.$$
\end{enumerate}
\end{myprop}
\begin{proof}
(1) It is obvious that
$$\varliminf_{\lambda\uparrow1}(1-\lambda)\sum_{n=1}^\infty\lambda^nx_n\le\varlimsup_{\lambda\uparrow1}(1-\lambda)\sum_{n=1}^\infty\lambda^nx_n,$$
$$\varlimsup_{n\to+\infty}x_n\le\sup_{n\ge1}x_n.$$
We show that
$$\varlimsup_{\lambda\uparrow1}(1-\lambda)\sum_{n=1}^\infty\lambda^nx_n\le\varlimsup_{n\to+\infty}x_n.$$
If $\varlimsup_{n\to+\infty}x_n=+\infty$, then the result is obvious. Assume that $\varlimsup_{n\to+\infty}x_n<+\infty$.
For all $A>\varlimsup_{n\to+\infty}x_n$, there exists some positive integer $N\ge1$ such that for all $n>N$, we have $x_n<A$, hence
$$
\begin{aligned}
&(1-\lambda)\sum_{n=1}^\infty\lambda^nx_n\le(1-\lambda)\sum_{n=1}^N\lambda^nx_n+(1-\lambda)\sum_{n=N+1}^\infty\lambda^nA\\
=&(1-\lambda)\sum_{n=1}^N\lambda^nx_n+(1-\lambda)\frac{\lambda^{N+1}}{1-\lambda}A=(1-\lambda)\sum_{n=1}^N\lambda^nx_n+\lambda^{N+1}A,
\end{aligned}
$$
hence
$$\varlimsup_{\lambda\uparrow1}(1-\lambda)\sum_{n=1}^\infty\lambda^nx_n\le A.$$
Since $A>\varlimsup_{n\to+\infty}x_n$ is arbitrary, we have
$$\varlimsup_{\lambda\uparrow1}(1-\lambda)\sum_{n=1}^\infty\lambda^nx_n\le\varlimsup_{n\to+\infty}x_n.$$
Similarly, we have
$$\varliminf_{\lambda\uparrow1}(1-\lambda)\sum_{n=1}^\infty\lambda^nx_n\ge\varliminf_{n\to+\infty}x_n.$$
(2) For all $A<\sup_{n\ge1}x_n$, there exists some positive integer $N\ge1$ such that $x_N>A$. By assumption, for all $n>N$, we have
$$x_n\ge\frac{1}{C}x_N\ge\frac{1}{C}A,$$
hence
$$\varliminf_{n\to+\infty}x_n\ge\frac{1}{C}A.$$
Since $A<\sup_{n\ge1}x_n$ is arbitrary, we have
$$\varliminf_{n\to+\infty}x_n\ge\frac{1}{C}\sup_{n\ge1}x_n,$$
that is,
$$\sup_{n\ge1}x_n\le C\varliminf_{n\to+\infty}x_n.$$
\end{proof}
\chapter{Determination of the Walk Dimension of the SG}\label{ch_SG_det}
This chapter is based on my work \cite{GY16} joint with Prof. Alexander Grigor'yan.
\section{Background and Statement}
Our approach is based on a recent paper \cite{KLW17} of S.-L. Kong, K.-S. Lau and T.-K. Wong. They introduced conductances with parameter $\lambda\in(0,1)$ on the Sierpi\'nski graph $X$ to obtain a random walk (and a corresponding quadratic form) on $X$ and showed that the Martin boundary of that random walk is homeomorphic to the SG $K$. Let $\bar{X}$ be the Martin compactification of $X$. It was also proved in \cite{KLW17} that the quadratic form on $X$ induces a quadratic form on $K\cong\bar{X}\backslash X$ of the form (\ref{eqn_nonlocal}) with $\beta=-\log\lambda/\log2$. However, no restriction on $\beta$ was established, so the above quadratic form on $K$ need not be a regular Dirichlet form.
In this chapter, we establish the exact restriction on $\lambda$ (hence on $\beta$) under which $(\mathcal{E}_\beta,\mathcal{F}_\beta)$ is a regular Dirichlet form on $L^2(K;\nu)$. Our method is as follows.
Firstly, we introduce a measure $m$ on $X$ to obtain a regular Dirichlet form $(\mathcal{E}_X,\mathcal{F}_X)$ on $L^2(X;m)$ associated with the above random walk on $X$. Then we extend this Dirichlet form to an \emph{active reflected} Dirichlet form $(\mathcal{E}^\mathrm{ref},\mathcal{F}^\mathrm{ref}_a)$ on $L^2(X;m)$, which, however, is not regular.
Secondly, we \emph{regularize} $(\mathcal{E}^\mathrm{ref},\mathcal{F}^\mathrm{ref}_a)$ on $L^2(X;m)$ using the theory of \cite{Fuk71}. The result of the regularization is a regular Dirichlet form $(\mathcal{E}_{\bar{X}},\mathcal{F}_{\bar{X}})$ on $L^2(\bar{X};m)$ that is an extension of $(\mathcal{E}_{{X}},\mathcal{F}_{{X}})$ on $L^2({X};m)$. By \cite{Fuk71}, regularization is always possible, but we show that the regularized form ``sits'' on $\bar{X}$ provided $\lambda>1/5$, which is equivalent to $\beta<\beta^*:=\log5/\log2$.
Thirdly, we take the trace of $\mathcal{E}_{\bar{X}}$ on $K$ and obtain a regular Dirichlet form $(\mathcal{E}_K,\mathcal{F}_K)$ on $L^2(K;\nu)$ of the form (\ref{eqn_nonlocal}).
If $\beta>\beta^*$, then we show directly that $\mathcal{F}_K$ consists only of constant functions. Hence we conclude that $\beta_*=\beta^*=\log5/\log2$. This approach allows us to detect the critical value $\beta_*$ of the index $\beta$ of the jump process without constructing the diffusion.
This chapter is organized as follows. In section \ref{SG_det_sec_SG}, we review basic constructions of the SG $K$ and the Sierpi\'nski graph $X$. In section \ref{SG_det_sec_rw}, we give a transient reversible random walk $Z$ on $X$. In section \ref{SG_det_sec_df_X}, we construct a regular Dirichlet form $\mathcal{E}_X$ on $X$ and its corresponding symmetric Hunt process $\myset{X_t}$. We prove that the Martin boundaries of $\myset{X_t}$ and $Z$ coincide. We show that $\mathcal{E}_X$ is stochastically incomplete and that $\myset{X_t}$ goes to infinity in finite time almost surely. In section \ref{SG_det_sec_ref}, we construct the active reflected Dirichlet form $(\mathcal{E}^{\mathrm{ref}},\mathcal{F}^{\mathrm{ref}}_a)$ and show that $\mathcal{F}_X\subsetneqq\mathcal{F}^{\mathrm{ref}}_a$, hence $\mathcal{E}^{\mathrm{ref}}$ is not regular. In section \ref{SG_det_sec_repre}, we construct a regular Dirichlet form $(\mathcal{E}_{\bar{X}},\mathcal{F}_{\bar{X}})$ on $L^2(\bar{X};m)$ which is a regular representation of the Dirichlet form $(\mathcal{E}^\mathrm{ref},\mathcal{F}^\mathrm{ref}_a)$ on $L^2(X;m)$, where $\bar{X}$ is the Martin compactification of $X$. In section \ref{SG_det_sec_trace}, we take the trace of the regular Dirichlet form $(\mathcal{E}_{\bar{X}},\mathcal{F}_{\bar{X}})$ on $L^2(\bar{X};m)$ to $K$ to obtain a regular Dirichlet form $(\mathcal{E}_K,\mathcal{F}_K)$ on $L^2(K;\nu)$ of the form (\ref{eqn_nonlocal}). In section \ref{SG_det_sec_trivial}, we show that $\mathcal{F}_K$ consists only of constant functions if $\lambda\in(0,1/5)$, or equivalently $\beta\in(\beta^*,+\infty)$. Hence $\beta_*=\beta^*=\log5/\log2$.
\section{The SG and the Sierpi\'nski Graph}\label{SG_det_sec_SG}
In this section, we review some basic constructions of the SG and the Sierpi\'nski graph.
Let
$$p_0=(0,0),p_1=(1,0),p_2=(\frac{1}{2},\frac{\sqrt{3}}{2}),$$
$$f_i(x)=\frac{1}{2}(x+p_i),x\in\mathbb{R}^2,i=0,1,2.$$
Then the SG is the unique nonempty compact set $K$ satisfying
$$K=f_0(K)\cup f_1(K)\cup f_2(K).$$
Let
$$V_1=\myset{p_0,p_1,p_2},V_{n+1}=f_0(V_n)\cup f_1(V_n)\cup f_2(V_n)\text{ for all }n\ge1,$$
then $\myset{V_n}$ is an increasing sequence of finite sets such that $K$ is the closure of $\cup_{n=1}^\infty V_n$.
Let $W_0=\myset{\emptyset}$ and
$$W_n=\myset{w=w_1\ldots w_n:w_i=0,1,2,i=1,\ldots,n}\text{ for all }n\ge1,$$
and $W=\cup_{n=0}^\infty W_n$. An element $w=w_1\ldots w_n\in W_n$ is called a finite word of length $n$, and we write $|w|=n$ for all $n\ge1$. The element $\emptyset\in W_0$ is called the empty word, and we set $|\emptyset|=0$; by convention, the word of length zero is the empty word.
Let
$$W_\infty=\myset{w=w_1w_2\ldots:w_i=0,1,2,i=1,2,\ldots}$$
be the set of all infinite sequences with elements in $\myset{0,1,2}$, then an element $w\in W_\infty$ is called an infinite word. For all $w=w_1\ldots w_n\in W$ with $n\ge1$, we write
$$f_w=f_{w_1}\circ\ldots\circ f_{w_n}$$
and $f_{\emptyset}=\mathrm{id}$. It is obvious that $K_w=f_w(K)$ is a compact set for all $w\in W$. For all $w=w_1w_2\ldots\in W_\infty$, we write
$$K_w=\bigcap_{n=0}^\infty K_{w_1\ldots w_n}.$$
Since $K_{w_1\ldots w_{n+1}}\subseteq K_{w_1\ldots w_n}$ for all $n\ge0$ and $\mathrm{diam}(K_{w_1\ldots w_n})\to0$ as $n\to+\infty$, the set $K_w\subseteq K$ is a one-point set. On the other hand, for all $x\in K$, there exists $w\in W_\infty$ such that $\myset{x}=K_w$, but this $w$ is not unique. For example, for the midpoint $x$ of the segment connecting $p_0$ and $p_1$, we have $\myset{x}=K_{100\ldots}=K_{011\ldots}$, where $100\ldots$ denotes the element $w=w_1w_2\ldots\in W_\infty$ with $w_1=1$ and $w_n=0$ for all $n\ge2$, and $011\ldots$ has the analogous meaning.
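The coding of points by infinite words can be checked numerically (a sketch; the function name and the truncation depth are our own choices): applying $f_{w_1}\circ\cdots\circ f_{w_n}$ to a fixed base point approximates the point $K_w$, and the two words $100\ldots$ and $011\ldots$ indeed produce the same midpoint $(1/2,0)$.

```python
def point_of_word(word, steps=48):
    """Approximate the point K_w for an infinite word: the last letter of
    `word` is repeated forever, the word is truncated at `steps` letters,
    and f_{w_1} o ... o f_{w_n} is applied to the base point (0, 0)."""
    p = {'0': (0.0, 0.0), '1': (1.0, 0.0), '2': (0.5, 3**0.5 / 2)}
    letters = (word + word[-1] * steps)[:steps]
    x, y = 0.0, 0.0
    # innermost map f_{w_n} is applied first, so iterate in reverse
    for c in reversed(letters):
        px, py = p[c]
        x, y = (x + px) / 2, (y + py) / 2
    return x, y
```

Since each $f_i$ contracts by $1/2$, truncating at $48$ letters localizes the point to within $2^{-48}$.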
By the representation of infinite words, we construct the Sierpi\'nski graph as follows. First, we construct a ternary tree. Take the root $o$ as the empty word $\emptyset$. It has three child nodes, the words $0,1,2$ in $W_1$. The nodes $0,1,2$ in turn have child nodes in $W_2$: $0$ has child nodes $00,01,02$; $1$ has child nodes $10,11,12$; $2$ has child nodes $20,21,22$. In general, each node $w_1\ldots w_n$ has three child nodes $w_1\ldots w_n0$, $w_1\ldots w_n1$, $w_1\ldots w_n2$ in $W_{n+1}$ for all $n\ge1$. We use the terms node and finite word interchangeably hereafter. For all $n\ge1$ and each node $w=w_1\ldots w_n$, the node $w_1\ldots w_{n-1}$ is called the father node of $w$ and is denoted by $w^-$. We obtain the vertex set $V$ consisting of all nodes. Next, we construct the edge set $E$, a subset of $V\times V$. Let
\begin{align*}
E_v&=\myset{(w,w^-),(w^-,w):w\in W_n,n\ge1},\\
E_h&=\myset{(w_1,w_2):w_1,w_2\in W_n,w_1\ne w_2,K_{w_1}\cap K_{w_2}\ne\emptyset,n\ge1},
\end{align*}
and $E=E_v\cup E_h$. $E_v$ is the set of all vertical edges and $E_h$ is the set of all horizontal edges. Then $X=(V,E)$ is the Sierpi\'nski graph, see Figure \ref{SG_det_fig_Sierpinski_graph}. We write $X$ for simplicity.
\begin{figure}[ht]
\centering
\begin{tikzpicture}
\draw (0,0)--(4,0);
\draw (0,0)--(2,3.4641016151);
\draw (4,0)--(2,3.4641016151);
\draw (2,3.4641016151)--(2,-1.3333333333);
\draw (4,0)--(2,-1.3333333333);
\draw (0,0)--(2,-1.3333333333);
\draw (1,1.7320508076)--(3,1.7320508076);
\draw (2,1.7320508076-0.6666666667)--(3,1.7320508076);
\draw (1,1.7320508076)--(2,1.7320508076-0.6666666667);
\draw (1,1.7320508076)--(1.3333333333,0);
\draw (1,1.7320508076)--(0.6666666666,-0.4444444444);
\draw (0.6666666666,-0.4444444444)--(1.3333333333,0);
\draw (2,1.7320508076-0.6666666667)--(1.3333333333,-0.8888888888);
\draw (2,1.7320508076-0.6666666667)--(2.6666666666,-0.8888888888);
\draw (2.6666666666,-0.8888888888)--(1.3333333333,-0.8888888888);
\draw (3,1.7320508076)--(3.3333333333,-0.4444444444);
\draw (3,1.7320508076)--(2.6666666666,0);
\draw (2.6666666666,0)--(3.3333333333,-0.4444444444);
\draw[fill=black] (0,0) circle (0.06);
\draw[fill=black] (4,0) circle (0.06);
\draw[fill=black] (2,3.4641016151) circle (0.06);
\draw[fill=black] (2,-1.3333333333) circle (0.06);
\draw[fill=black] (1,1.7320508076) circle (0.06);
\draw[fill=black] (2,1.7320508076-0.6666666667) circle (0.06);
\draw[fill=black] (3,1.7320508076) circle (0.06);
\draw[fill=black] (1.3333333333,0) circle (0.06);
\draw[fill=black] (0.6666666666,-0.4444444444) circle (0.06);
\draw[fill=black] (1.3333333333,-0.8888888888) circle (0.06);
\draw[fill=black] (2.6666666666,-0.8888888888) circle (0.06);
\draw[fill=black] (3.3333333333,-0.4444444444) circle (0.06);
\draw[fill=black] (2.6666666666,0) circle (0.06);
\draw (2,3.8) node {$\emptyset$};
\draw (0.9,2) node {$0$};
\draw (2.1,1.4) node {$1$};
\draw (3.1,2) node {$2$};
\draw (-0.1,-0.3) node {$00$};
\draw (4.1,-0.3) node {$22$};
\draw (1.3,-0.3) node {$02$};
\draw (2.7,-0.3) node {$20$};
\draw (0.6,-0.75) node {$01$};
\draw (3.4,-0.75) node {$21$};
\draw (1.2,-1.2) node {$10$};
\draw (2.8,-1.2) node {$12$};
\draw (2,-1.6) node {$11$};
\end{tikzpicture}
\caption{The Sierpi\'nski graph}\label{SG_det_fig_Sierpinski_graph}
\end{figure}
For all $x,y\in V$, if $(x,y)\in E$, then we write $x\sim y$ and say that $y$ is a neighbor of $x$. It is obvious that the relation $\sim$ is symmetric. A path in $X$ is a finite sequence $\pi=[x_0,\ldots,x_n]$ of distinct nodes with $x_0\sim x_1,\ldots,x_{n-1}\sim x_n$; $n$ is called the length of the path. For all $x,y\in V$, let $d(x,y)$ be the graph metric, that is, the minimum length of all paths connecting $x$ and $y$; a path connecting $x$ and $y$ of length $d(x,y)$ is called a geodesic. Hereafter, we write $x\in X$ to mean $x\in V$. It is obvious that $X$ is a connected and locally finite graph, that is, for all $x,y\in X$ with $x\ne y$ there exists a path connecting $x$ and $y$, and for all $x\in X$ the set of its neighbors $\myset{y\in X:x\sim y}$ is finite. We write $S_n=\myset{x\in X:|x|=n}$ and $B_n=\cup_{i=0}^nS_i$ for the sphere and the closed ball of radius $n$.
Roughly speaking, for all $n\ge1$, $S_n$ looks like a union of disconnected triangles (see Figure \ref{SG_det_fig_SG_S3} for $S_3$), while $V_n$ looks like a union of connected triangles (see Figure \ref{SG_det_fig_SG_V3} for $V_3$). We define a mapping $\Phi_n:S_n\to V_n$ as follows. For all $n\ge2$ and $w=w_1\ldots w_n\in W_n$, write $p_{w}=p_{w_1\ldots w_n}=f_{w_1\ldots w_{n-1}}(p_{w_n})$; for $n=1$ and $w=0,1,2$, write $p_w=p_0,p_1,p_2$, respectively. By induction, we have $V_n=\cup_{w\in W_n}\myset{p_w}$ for all $n\ge1$. Define $\Phi_n(w)=p_w$. Then $\Phi_n$ is onto, and many pairs of points are mapped to the same point, for instance $\Phi_3(001)=\Phi_3(010)$. This property divides the edges in $S_n$ into two types. An edge in $S_n$ with end nodes $x,y$ is called of type \Rmnum{1} if $\Phi_n(x)\ne\Phi_n(y)$, such as the edge in $S_3$ with end nodes $000$ and $001$, and of type \Rmnum{2} if $\Phi_n(x)=\Phi_n(y)$, such as the edge in $S_3$ with end nodes $001$ and $010$. By induction, it is obvious that only these two types of edges occur on each sphere $S_n$.
\begin{figure}[ht]
\centering
\begin{tikzpicture}[scale=0.7]
\tikzstyle{every node}=[font=\small,scale=0.7]
\draw (0,0)--(9,0);
\draw (0,0)--(9/2,9/2*1.7320508076);
\draw (9,0)--(9/2,9/2*1.7320508076);
\draw (3,0)--(3/2,3/2*1.7320508076);
\draw (6,0)--(15/2,3/2*1.7320508076);
\draw (3,3*1.7320508076)--(6,3*1.7320508076);
\draw (4,4*1.7320508076)--(5,4*1.7320508076);
\draw (7/2,7/2*1.7320508076)--(4,3*1.7320508076);
\draw (11/2,7/2*1.7320508076)--(5,3*1.7320508076);
\draw (1,1*1.7320508076)--(2,1.7320508076);
\draw (1/2,1/2*1.7320508076)--(1,0);
\draw (2,0)--(5/2,1/2*1.7320508076);
\draw (7,1.7320508076)--(8,1.7320508076);
\draw (13/2,1.7320508076/2)--(7,0);
\draw (17/2,1.7320508076/2)--(8,0);
\draw[fill=black] (0,0) circle (0.06);
\draw[fill=black] (1,0) circle (0.06);
\draw[fill=black] (2,0) circle (0.06);
\draw[fill=black] (3,0) circle (0.06);
\draw[fill=black] (6,0) circle (0.06);
\draw[fill=black] (7,0) circle (0.06);
\draw[fill=black] (8,0) circle (0.06);
\draw[fill=black] (9,0) circle (0.06);
\draw[fill=black] (1/2,1.7320508076/2) circle (0.06);
\draw[fill=black] (5/2,1.7320508076/2) circle (0.06);
\draw[fill=black] (13/2,1.7320508076/2) circle (0.06);
\draw[fill=black] (17/2,1.7320508076/2) circle (0.06);
\draw[fill=black] (1,1.7320508076) circle (0.06);
\draw[fill=black] (2,1.7320508076) circle (0.06);
\draw[fill=black] (7,1.7320508076) circle (0.06);
\draw[fill=black] (8,1.7320508076) circle (0.06);
\draw[fill=black] (3/2,3/2*1.7320508076) circle (0.06);
\draw[fill=black] (15/2,3/2*1.7320508076) circle (0.06);
\draw[fill=black] (3,3*1.7320508076) circle (0.06);
\draw[fill=black] (4,3*1.7320508076) circle (0.06);
\draw[fill=black] (5,3*1.7320508076) circle (0.06);
\draw[fill=black] (6,3*1.7320508076) circle (0.06);
\draw[fill=black] (7/2,7/2*1.7320508076) circle (0.06);
\draw[fill=black] (11/2,7/2*1.7320508076) circle (0.06);
\draw[fill=black] (4,4*1.7320508076) circle (0.06);
\draw[fill=black] (5,4*1.7320508076) circle (0.06);
\draw[fill=black] (9/2,9/2*1.7320508076) circle (0.06);
\draw (0,-0.3) node {$000$};
\draw (1,-0.3) node {$001$};
\draw (2,-0.3) node {$010$};
\draw (3,-0.3) node {$011$};
\draw (6,-0.3) node {$100$};
\draw (7,-0.3) node {$101$};
\draw (8,-0.3) node {$110$};
\draw (9,-0.3) node {$111$};
\draw (0.1,1/2*1.7320508076) node {$002$};
\draw (2.9,1/2*1.7320508076) node {$012$};
\draw (6.1,1/2*1.7320508076) node {$102$};
\draw (8.9,1/2*1.7320508076) node {$112$};
\draw (0.6,1*1.7320508076) node {$020$};
\draw (2.4,1*1.7320508076) node {$021$};
\draw (6.6,1*1.7320508076) node {$120$};
\draw (8.4,1*1.7320508076) node {$121$};
\draw (1.1,3/2*1.7320508076) node {$022$};
\draw (7.9,3/2*1.7320508076) node {$122$};
\draw (2.6,3*1.7320508076) node {$200$};
\draw (4,3*1.7320508076-0.3) node {$201$};
\draw (5,3*1.7320508076-0.3) node {$210$};
\draw (6.4,3*1.7320508076) node {$211$};
\draw (3.1,7/2*1.7320508076) node {$202$};
\draw (5.9,7/2*1.7320508076) node {$212$};
\draw (3.6,4*1.7320508076) node {$220$};
\draw (5.4,4*1.7320508076) node {$221$};
\draw (4.5,9/2*1.7320508076+0.3) node {$222$};
\end{tikzpicture}
\caption{$S_3$}\label{SG_det_fig_SG_S3}
\end{figure}
\begin{figure}[ht]
\centering
\begin{tikzpicture}[scale=1.125*0.7]
\tikzstyle{every node}=[font=\small,scale=0.7]
\draw (0,0)--(8,0);
\draw (0,0)--(4,4*1.7320508076);
\draw (8,0)--(4,4*1.7320508076);
\draw (4,0)--(2,2*1.7320508076);
\draw (2,2*1.7320508076)--(6,2*1.7320508076);
\draw (6,2*1.7320508076)--(4,0);
\draw (2,0)--(3,1*1.7320508076);
\draw (3,1*1.7320508076)--(1,1*1.7320508076);
\draw (1,1*1.7320508076)--(2,0);
\draw (6,0)--(5,1*1.7320508076);
\draw (5,1*1.7320508076)--(7,1*1.7320508076);
\draw (7,1*1.7320508076)--(6,0);
\draw (4,2*1.7320508076)--(3,3*1.7320508076);
\draw (3,3*1.7320508076)--(5,3*1.7320508076);
\draw (5,3*1.7320508076)--(4,2*1.7320508076);
\draw[fill=black] (0,0) circle (0.06);
\draw[fill=black] (2,0) circle (0.06);
\draw[fill=black] (4,0) circle (0.06);
\draw[fill=black] (6,0) circle (0.06);
\draw[fill=black] (8,0) circle (0.06);
\draw[fill=black] (1,1.7320508076) circle (0.06);
\draw[fill=black] (3,1.7320508076) circle (0.06);
\draw[fill=black] (5,1.7320508076) circle (0.06);
\draw[fill=black] (7,1.7320508076) circle (0.06);
\draw[fill=black] (2,2*1.7320508076) circle (0.06);
\draw[fill=black] (4,2*1.7320508076) circle (0.06);
\draw[fill=black] (6,2*1.7320508076) circle (0.06);
\draw[fill=black] (3,3*1.7320508076) circle (0.06);
\draw[fill=black] (5,3*1.7320508076) circle (0.06);
\draw[fill=black] (4,4*1.7320508076) circle (0.06);
\draw (0,-0.3) node {$p_{000}$};
\draw (2,-0.3) node {$p_{001}=p_{010}$};
\draw (4,-0.3) node {$p_{011}=p_{100}$};
\draw (6,-0.3) node {$p_{101}=p_{110}$};
\draw (8,-0.3) node {$p_{111}$};
\draw (4,4*1.7320508076+0.3) node {$p_{222}$};
\draw (0,1*1.7320508076) node {$p_{002}=p_{020}$};
\draw (8,1*1.7320508076) node {$p_{112}=p_{121}$};
\draw (2.95,1*1.7320508076+0.3) node {$p_{012}=p_{021}$};
\draw (5.05,1*1.7320508076+0.3) node {$p_{102}=p_{120}$};
\draw (1,2*1.7320508076) node {$p_{022}=p_{200}$};
\draw (7,2*1.7320508076) node {$p_{122}=p_{211}$};
\draw (4,2*1.7320508076-0.3) node {$p_{201}=p_{210}$};
\draw (2,3*1.7320508076) node {$p_{202}=p_{220}$};
\draw (6,3*1.7320508076) node {$p_{212}=p_{221}$};
\end{tikzpicture}
\caption{$V_3$}\label{SG_det_fig_SG_V3}
\end{figure}
The Sierpi\'nski graph is a hyperbolic graph, see \cite[Theorem 3.2]{LW09}. For an arbitrary graph $X$, choose a node $o$ as root, define the graph metric $d$ as above, and write $|x|=d(o,x)$. For all $x,y\in X$, define the Gromov product
$$|x\wedge y|=\frac{1}{2}(|x|+|y|-d(x,y)).$$
$X$ is called a hyperbolic graph if there exists $\delta>0$ such that for all $x,y,z\in X$, we have
$$|x\wedge y|\ge\min\myset{|x\wedge z|,|z\wedge y|}-\delta.$$
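As a toy illustration (ours, not from \cite{LW09}): if we keep only the vertical edges, the resulting rooted ternary tree has $d(x,y)=|x|+|y|-2n(x,y)$, where $n(x,y)$ is the common-prefix length of the words $x,y$, so $|x\wedge y|=n(x,y)$ and the hyperbolicity inequality holds with $\delta=0$. The sketch below checks this on all words of length at most $3$.

```python
from itertools import product

def gromov(x, y):
    """Gromov product |x ^ y| in the rooted ternary tree: with only the
    vertical edges, it equals the common-prefix length of the words."""
    n = 0
    for a, b in zip(x, y):
        if a != b:
            break
        n += 1
    return n

# all words over {0,1,2} of length <= 3, including the empty word (root)
words = [''.join(w) for k in range(4) for w in product('012', repeat=k)]
```

The Sierpi\'nski graph itself has horizontal edges as well, so its hyperbolicity constant is positive; the tree is merely the $\delta=0$ special case.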
It is known that the definition is independent of the choice of root $o$. For a hyperbolic graph, we can introduce a metric as follows. Choose $a>0$ such that $a'=e^{3\delta a}-1<\sqrt{2}-1$. For all $x,y\in X$, define
$$
\rho_a(x,y)=
\begin{cases}
\exp{(-a|x\wedge y|)},&\text{if }x\ne y,\\
0,&\text{if }x=y,
\end{cases}
$$
then $\rho_a$ satisfies
$$\rho_a(x,y)\le(1+a')\max{\myset{\rho_a(x,z),\rho_a(z,y)}}\text{ for all }x,y,z\in X.$$
This means that, in general, $\rho_a$ satisfies only this quasi-ultrametric inequality and need not be a metric. But we can define
$$\theta_a(x,y)=\inf{\myset{\sum_{i=1}^n\rho_a(x_{i-1},x_i):x=x_0,\ldots,x_n=y,x_i\in X,i=0,\ldots,n,n\ge1}},$$
for all $x,y\in X$. Then $\theta_a$ is a metric equivalent to $\rho_a$, so we use $\rho_a$ rather than $\theta_a$ for simplicity. It is known that a sequence $\myset{x_n}\subseteq X$ with $|x_n|\to+\infty$ is a Cauchy sequence in $\rho_a$ if and only if $|x_m\wedge x_n|\to+\infty$ as $m,n\to+\infty$. Let $\hat{X}$ be the completion of $X$ with respect to $\rho_a$; then $\partial_hX=\hat{X}\backslash X$ is called the hyperbolic boundary of $X$. By \cite[Corollary 22.13]{Wo00}, $\hat{X}$ is compact. It is obvious that hyperbolicity depends only on the graph structure of $X$. We introduce a description of the hyperbolic boundary in terms of geodesic rays. A geodesic ray is a sequence $[x_0,x_1,\ldots]$ of distinct nodes such that $x_n\sim x_{n+1}$ and the path $[x_0,\ldots,x_n]$ is geodesic for all $n\ge0$. Two geodesic rays $\pi=[x_0,x_1,\ldots]$ and $\pi'=[y_0,y_1,\ldots]$ are called equivalent if $\varliminf_{n\to+\infty}d(y_n,\pi)<+\infty$, where $d(x,\pi)=\inf_{n\ge0}d(x,x_n)$. There exists a one-to-one correspondence between the family of all equivalence classes of geodesic rays and the hyperbolic boundary, as follows.
By \cite[Proposition 22.12(b)]{Wo00}, equivalence of geodesic rays is an equivalence relation. By \cite[Lemma 22.11]{Wo00}, for every geodesic ray $\pi=[x_0,x_1,\ldots]$ and every $u\in X$, there exist $k,l\ge0$ and $u=u_0,\ldots,u_k=x_l$ such that
$$[u,u_1,\ldots,u_k,x_{l+1},x_{l+2},\ldots]$$
is a geodesic ray. It is obvious that this new geodesic ray is equivalent to $\pi$, hence in each equivalence class we can take a geodesic ray of the form $\pi=[x_0,x_1,\ldots]$ with $|x_n|=n$ and $x_n\sim x_{n+1}$ for all $n\ge0$. By \cite[Proposition 22.12(c)]{Wo00}, we can define a one-to-one mapping $\tau$ from the family of equivalence classes of geodesic rays to the hyperbolic boundary,
$$\tau:[x_0,x_1,\ldots]\mapsto\text{the limit }\xi\text{ of }\myset{x_n}\text{ in }\rho_a.$$
By the above, we can choose $[x_0,x_1,\ldots]$ of the form $|x_n|=n$, $x_n\sim x_{n+1}$ for all $n\ge0$; we say that $[x_0,x_1,\ldots]$ is a geodesic ray from $o$ to $\xi$.
For $y\in\hat{X}$ and $x\in X$, we say that $y$ is in the subtree with root $x$ if $x$ lies on the geodesic path or on some geodesic ray from $o$ to $y$. If $y$ is in the subtree with root $x$ and $x\ne y$, then it is obvious that $|x\wedge y|=|x|$ and $\rho_a(x,y)=e^{-a|x|}$. For a more detailed discussion of hyperbolic graphs, see \cite[Chapter \Rmnum{4}, \Rmnum{4}.22]{Wo00}.
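As a quick illustration of the Gromov product and the hyperbolicity inequality, the following Python sketch checks the inequality with $\delta=0$ on a small rooted tree. The tree and its node labels are a hypothetical toy example, not the Sierpi\'nski graph itself; trees are the simplest $0$-hyperbolic graphs.

```python
from itertools import product

# Hypothetical toy example: a small rooted tree encoded by parent pointers.
parent = {"o": None, "a": "o", "b": "o", "aa": "a", "ab": "a", "ba": "b"}
nodes = list(parent)

def anc(x):
    # list of ancestors of x, from x up to the root o
    path = []
    while x is not None:
        path.append(x)
        x = parent[x]
    return path

def depth(x):
    return len(anc(x)) - 1          # |x| = d(o, x)

def dist(x, y):
    # graph distance in a tree via the lowest common ancestor
    ay = set(anc(y))
    lca = next(a for a in anc(x) if a in ay)
    return depth(x) + depth(y) - 2 * depth(lca)

def gromov(x, y):
    # Gromov product |x ^ y| = (|x| + |y| - d(x, y)) / 2
    return (depth(x) + depth(y) - dist(x, y)) / 2

# the defining inequality holds with delta = 0 over all triples of a tree
assert all(gromov(x, y) >= min(gromov(x, z), gromov(z, y))
           for x, y, z in product(nodes, repeat=3))
```

In a tree the Gromov product $|x\wedge y|$ is exactly the depth of the lowest common ancestor, which is why $\delta=0$ works there.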
\cite[Theorem 3.2, Theorem 4.3, Proposition 4.4]{LW09} showed that for a general class of fractals satisfying the open set condition (OSC), one can construct an augmented rooted tree which is a hyperbolic graph whose hyperbolic boundary is H\"older equivalent to the fractal through a canonical mapping. In particular, the SG satisfies the OSC, and the Sierpi\'nski graph is an augmented rooted tree, hence hyperbolic. The canonical mapping $\Phi$ can be described as follows.
For every $\xi\in\partial_hX$, there corresponds through the mapping $\tau$ a geodesic ray $[x_0,x_1,\ldots]$ in the equivalence class corresponding to $\xi$ with $|x_n|=n$ and $x_n\sim x_{n+1}$ for all $n\ge0$; then there exists an element $w\in W_\infty$ such that $w_1\ldots w_n=x_n$ for all $n\ge1$. Then $\myset{\Phi(\xi)}=K_w$ and
\begin{equation}\label{SG_det_eqn_Holder}
\lvert\Phi(\xi)-\Phi(\eta)\rvert\asymp\rho_a(\xi,\eta)^{\log2/a}\text{ for all }\xi,\eta\in\partial_hX.
\end{equation}
\section{Random Walk on \texorpdfstring{$X$}{X}}\label{SG_det_sec_rw}
In this section, we recall a transient reversible random walk on $X$ from \cite{KLW17}. Let $c:X\times X\to[0,+\infty)$ be a conductance function satisfying
\begin{align*}
c(x,y)&=c(y,x),\\
\pi(x)&=\sum_{y\in X}c(x,y)\in(0,+\infty),\\
c(x,y)&>0\text{ if and only if }x\sim y,
\end{align*}
for all $x,y\in X$. Let $P(x,y)=c(x,y)/\pi(x)$, $x,y\in X$, then $P$ is a transition probability satisfying
$$\pi(x)P(x,y)=\pi(y)P(y,x)\text{ for all }x,y\in X.$$
We construct a reversible random walk $Z=\myset{Z_n}$ on $X$ with transition probability $P$. We introduce some related quantities. For all $x,y\in X$, let $P^{(0)}(x,y)=\delta_{xy}$ and
$$P^{(n+1)}(x,y)=\sum_{z\in X}P(x,z)P^{(n)}(z,y)\text{ for all }n\ge0.$$
Define
$$G(x,y)=\sum_{n=0}^\infty P^{(n)}(x,y),x,y\in X,$$
then $G$ is the Green function of $Z$, and $Z$ is called transient if $G(x,y)<+\infty$ for all, or equivalently for some, $x,y\in X$. Define
$$F(x,y)=\mathbb{P}_x\left[Z_n=y\text{ for some }n\ge0\right],$$
that is, the probability of ever reaching $y$ starting from $x$. By Markovian property, we have
$$G(x,y)=F(x,y)G(y,y).$$
For a more detailed discussion of the general theory of random walks, see \cite[Chapter \Rmnum{1}, \Rmnum{1}.1]{Wo00}.
Here, we consider a specific random walk, the $\lambda$-return ratio random walk introduced in \cite{KLW17}, that is,
$$\frac{c(x,x^-)}{\sum_{y:y^-=x}c(x,y)}=\frac{P(x,x^-)}{\sum_{y:y^-=x}P(x,y)}=\lambda\in(0,+\infty)\text{ for all }x\in X\text{ with }|x|\ge1.$$
For all $n\ge0$, $x\in S_n$, $y\in S_{n+1}$, we take $c(x,y)$ to be the same value, denoted by $c(n,n+1)=c(n+1,n)$. Then
$$\lambda=\frac{c(n-1,n)}{3c(n,n+1)},$$
that is,
$$c(n,n+1)=\frac{c(n-1,n)}{3\lambda}=\ldots=\frac{c(0,1)}{(3\lambda)^n}.$$
Take $c(0,1)=1$, then $c(n,n+1)=1/(3\lambda)^n$. Moreover, \cite[Definition 4.4]{KLW17} imposed restrictions on the conductances of horizontal edges. For all $n\ge1$, $x,y\in S_n$, $x\sim y$, let
$$
c(x,y)=
\begin{cases}
\frac{C_1}{(3\lambda)^n},&\text{ if the edge with end nodes }x,y\text{ is of type \Rmnum{1}},\\
\frac{C_2}{(3\lambda)^n},&\text{ if the edge with end nodes }x,y\text{ is of type \Rmnum{2}},
\end{cases}
$$
where $C_1,C_2$ are some positive constants.
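The conductance scheme above is easy to check numerically. In the sketch below the value of $\lambda$ is an illustrative assumption (any $\lambda\in(0,1)$ works), not a value fixed by the text; each node in $S_n$ with $n\ge1$ has one parent edge of conductance $c(n-1,n)$ and three child edges of conductance $c(n,n+1)$.

```python
lam = 0.25                               # illustrative lambda in (0, 1)

def c_vertical(n):
    # c(n, n+1) = 1 / (3*lambda)^n, normalized by c(0, 1) = 1
    return 1.0 / (3 * lam) ** n

# lambda-return ratio: c(x, x^-) / (sum over the 3 children) = lambda
for n in range(1, 12):
    assert abs(c_vertical(n - 1) / (3 * c_vertical(n)) - lam) < 1e-9
```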
\cite[Proposition 4.1, Lemma 4.2]{KLW17} showed that if $\lambda\in(0,1)$, then $Z$ is transient and
\begin{equation}\label{SG_det_eqn_G}
G(o,o)=\frac{1}{1-\lambda},
\end{equation}
\begin{equation}\label{SG_det_eqn_F}
F(x,o)=\lambda^{|x|}\text{ for all }x\in X.
\end{equation}
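These two formulas can be sanity-checked through the embedded level chain: horizontal moves preserve $|x|$, and conditional on a vertical move the walk steps down with probability $\lambda/(1+\lambda)$ and up with probability $1/(1+\lambda)$; moreover every step from $o$ goes to level $1$. The check below, with an illustrative value of $\lambda$, verifies consistency numerically; it is not a proof.

```python
lam = 0.3                                 # illustrative lambda in (0, 1)
p_down, p_up = lam / (1 + lam), 1 / (1 + lam)

def h(n):
    # candidate hitting probability of level 0 from level n: F = lambda^n
    return lam ** n

# h is harmonic for the embedded level chain away from level 0
for n in range(1, 30):
    assert abs(h(n) - (p_down * h(n - 1) + p_up * h(n + 1))) < 1e-12

# return probability to o is F at level 1, i.e. lambda, so
# G(o, o) = sum over k >= 0 of lambda^k = 1/(1 - lambda)
G_oo = sum(lam ** k for k in range(300))
assert abs(G_oo - 1 / (1 - lam)) < 1e-9
```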
For a transient random walk, we can introduce the Martin kernel given by
$$K(x,y)=\frac{G(x,y)}{G(o,y)},$$
and the Martin compactification $\bar{X}$, that is, the smallest compactification such that $K(x,\cdot)$ can be extended continuously for all $x\in X$. The Martin boundary is given by $\partial_MX=\bar{X}\backslash X$. Then the Martin kernel $K$ can be defined on $X\times\bar{X}$.
\cite[Theorem 5.1]{KLW17} showed that the Martin boundary $\partial_MX$, the hyperbolic boundary $\partial_hX$ and the SG $K$ are homeomorphic. Hence the completion $\hat{X}$ of $X$ with respect to $\rho_a$ and the Martin compactification $\bar{X}$ are homeomorphic. It is often more convenient to consider $\hat{X}$ rather than $\bar{X}$. We use $\partial X$ to denote all these boundaries. We list some general results on the Martin boundary for later use.
\begin{mythm}\label{SG_det_thm_conv}(\cite[Theorem 24.10]{Wo00})
Let $Z$ be transient. Then $\myset{Z_n}$ converges to a $\partial_MX$-valued random variable $Z_\infty$, $\mathbb{P}_x$-a.s. for all $x\in X$. The hitting distribution of $\myset{Z_n}$, that is, the distribution of $Z_\infty$ under $\mathbb{P}_x$, denoted by $\nu_x$, satisfies
$$\nu_x(B)=\int_BK(x,\cdot)\mathrm{d}\nu_o\text{ for all Borel measurable set }B\subseteq\partial_M X,$$
that is, $\nu_x$ is absolutely continuous with respect to $\nu_o$ with Radon-Nikodym derivative $K(x,\cdot)$.
\end{mythm}
For every $\nu_o$-integrable function $\varphi$ on $\partial_MX$, the function
$$h(x)=\int_{\partial_MX}\varphi\mathrm{d}\nu_x=\int_{\partial_MX}K(x,\cdot)\varphi\mathrm{d}\nu_o,x\in X,$$
is harmonic on $X$. It is called the Poisson integral of $\varphi$, denoted by $H\varphi$.
\cite[Theorem 5.6]{KLW17} showed that the hitting distribution $\nu_o$ is the normalized Hausdorff measure on $K$. We write $\nu$ for $\nu_o$ for simplicity.
Using conductance $c$, we construct an energy on $X$ given by
$$\mathcal{E}_X(u,u)=\frac{1}{2}\sum_{x,y\in X}c(x,y)(u(x)-u(y))^2.$$
In \cite{Sil74}, Silverstein constructed Na\"im kernel $\Theta$ on $\bar{X}\times\bar{X}$ using Martin kernel to induce an energy on $\partial X$ given by
$$\mathcal{E}_{\partial X}(u,u)=\mathcal{E}_X(Hu,Hu)=\frac{1}{2}\pi(o)\int_{\partial X}\int_{\partial X}(u(x)-u(y))^2\Theta(x,y)\nu(\mathrm{d} x)\nu(\mathrm{d} y),$$
for all $u\in L^2(\partial_MX;\nu)$ with $\mathcal{E}_{\partial X}(u,u)<+\infty$.
\cite[Theorem 6.3]{KLW17} computed the Na\"im kernel directly:
\begin{equation}\label{SG_det_eqn_Theta}
\Theta(x,y)\asymp\frac{1}{|x-y|^{\alpha+\beta}},
\end{equation}
where $\alpha=\log3/\log2$ is the Hausdorff dimension of the SG and $\beta=-\log\lambda/\log2\in(0,+\infty)$ for $\lambda\in(0,1)$. No upper bound for the walk-dimension exponent $\beta$ appeared in their calculation.
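The endpoint values of $\beta$ for the range of $\lambda$ used later can be checked numerically. The identities below are elementary; the remark that $\log5/\log2$ is the walk dimension of the SG is the standard value from the literature, stated here only for orientation.

```python
import math

alpha = math.log(3) / math.log(2)        # Hausdorff dimension of the SG

def beta(lam):
    # beta = -log(lambda) / log(2), lambda in (0, 1)
    return -math.log(lam) / math.log(2)

# endpoints of lambda in (1/5, 1/3):
# lambda = 1/3 gives beta = alpha = log3/log2,
# lambda = 1/5 gives beta = log5/log2 (the walk dimension of the SG)
assert abs(beta(1 / 3) - alpha) < 1e-12
assert abs(beta(1 / 5) - math.log(5) / math.log(2)) < 1e-12
```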
\section{Regular Dirichlet Form on \texorpdfstring{$X$}{X}}\label{SG_det_sec_df_X}
In this section, we construct a regular Dirichlet form $\mathcal{E}_X$ on $X$ and its corresponding symmetric Hunt process $\myset{X_t}$. We prove that the Martin boundaries of $\myset{X_t}$ and $Z$ coincide. We show that $\mathcal{E}_X$ is stochastically incomplete and $\myset{X_t}$ goes to infinity in finite time almost surely.
Let $m:X\to(0,+\infty)$ be a positive function given by
$$m(x)=\left(\frac{c}{3\lambda}\right)^{|x|},x\in X,$$
where $c\in(0,\lambda)\subseteq(0,1)$. Then $m$ can be regarded as a measure on $X$. Note that
$$m(X)=\sum_{x\in X}m(x)=\sum_{n=0}^\infty3^n\cdot\left(\frac{c}{3\lambda}\right)^n=\sum_{n=0}^{\infty}\left(\frac{c}{\lambda}\right)^n<+\infty,$$
hence $m$ is a finite measure on $X$. We construct a symmetric form on $L^2(X;m)$ given by
$$
\begin{cases}
&\mathcal{E}_X(u,u)=\frac{1}{2}\sum_{x,y\in X}c(x,y)(u(x)-u(y))^2,\\
&\mathcal{F}_X=\text{the }(\mathcal{E}_X)_1\text{-closure of }C_c(X),
\end{cases}
$$
where $C_c(X)$ is the set of all functions with finite support. It is obvious that $(\mathcal{E}_X,\mathcal{F}_X)$ is a regular Dirichlet form on $L^2(X;m)$. By \cite[Theorem 7.2.1]{FOT11}, it corresponds to a symmetric Hunt process on $X$. Roughly speaking, this process is a variable speed continuous time random walk that holds at a node for an exponentially distributed time and then jumps according to the random walk. For some discussion of continuous time random walks, see \cite[Chapter 2]{Nor98}. We give a detailed construction as follows.
Let $(\Omega,\mathcal{F},\mathbb{P})$ be a probability space on which are given a random walk $\myset{Y_n}$ with transition probability $P$ and initial distribution $\sigma$, and a sequence of independent exponentially distributed random variables $\myset{S_n}$ with parameter $1$, that is, $\mathbb{P}[S_n\in\mathrm{d} t]=e^{-t}\mathrm{d} t$. Assume that $\myset{S_n}$ is independent of $\myset{Y_n}$. Let $\alpha(x)=\pi(x)/m(x)$, $x\in X$. For all $n\ge1$, let
$$T_n=\frac{S_n}{\alpha(Y_{n-1})},$$
$$J_0=0,J_n=T_1+\ldots+T_n.$$
Then $T_n$ is called the $n$-th holding time and $J_n$ is called the $n$-th jumping time. Let
$$
X_t=
\begin{cases}
Y_n,&\text{if }J_n\le t<J_{n+1}\text{ for some }n\ge0,\\
\partial,&\text{otherwise},
\end{cases}
$$
where $\partial$ is a death point. This construction is similar to that of the Poisson process, and the resulting process is called a variable speed continuous time random walk in some literature. $\myset{X_t}$ is a symmetric Hunt process with initial distribution $\sigma$. We claim that $\myset{X_t}$ is the symmetric Hunt process corresponding to $\mathcal{E}_X$.
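The construction above can be sketched as a simulation. Everything in the sketch (the 3-node chain, its conductances, and the measure) is a hypothetical toy example standing in for $(X,c,m)$; it is only meant to show the mechanics of the holding times $T_n=S_n/\alpha(Y_{n-1})$ and the jump times $J_n$.

```python
import random

random.seed(0)

# hypothetical toy data standing in for (X, c, m)
nodes = [0, 1, 2]
c = {(0, 1): 1.0, (1, 0): 1.0, (1, 2): 0.5, (2, 1): 0.5}
pi = {x: sum(v for (a, _), v in c.items() if a == x) for x in nodes}
m = {0: 1.0, 1: 0.5, 2: 0.25}
alpha = {x: pi[x] / m[x] for x in nodes}   # speed function alpha = pi/m

def step(x):
    # one step of the embedded walk, P(x, y) = c(x, y) / pi(x)
    r, acc = random.random() * pi[x], 0.0
    for (a, b), v in c.items():
        if a == x:
            acc += v
            if r <= acc:
                return b
    return x                               # unreachable for r in [0, pi(x))

# Y: embedded chain Y_n, J: jump times J_n = T_1 + ... + T_n
Y, J = [0], [0.0]
for _ in range(10):
    S = random.expovariate(1.0)            # S_n ~ Exp(1)
    J.append(J[-1] + S / alpha[Y[-1]])     # T_n = S_n / alpha(Y_{n-1})
    Y.append(step(Y[-1]))

# X_t = Y_n for J_n <= t < J_{n+1}; jump times are strictly increasing
assert all(J[i] < J[i + 1] for i in range(len(J) - 1))
```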
Indeed, we only need to show that their generators coincide. By \cite[Corollary 1.3.1]{FOT11}, the generator of $\mathcal{E}_X$ is characterized by
$$\mathcal{E}_X(u,v)=(-Au,v)\text{ for all }u,v\in C_c(X).$$
Noting that
\begin{align*}
\mathcal{E}_X(u,v)&=\frac{1}{2}\sum_{x,y\in X}c(x,y)(u(x)-u(y))(v(x)-v(y))\\
&=\sum_{x\in X}\left(\frac{1}{m(x)}\sum_{y\in X}c(x,y)(u(x)-u(y))\right)v(x)m(x),
\end{align*}
we have
$$Au(x)=\frac{1}{m(x)}\sum_{y\in X}c(x,y)(u(y)-u(x))\text{ for all }u\in C_c(X).$$
On the other hand, the generator of $\myset{X_t}$ is characterized by $\lim_{t\downarrow0}\frac{1}{t}\left(\mathbb{E}_xu(X_t)-u(x)\right)$ for all $u\in C_c(X)$.
Since $X$ is locally finite, we have
\begin{align*}
&\lim_{t\downarrow0}\frac{1}{t}\left(\mathbb{E}_xu(X_t)-u(x)\right)=\lim_{t\downarrow0}\frac{1}{t}\sum_{y\in X}(u(y)-u(x))\mathbb{P}_x[X_t=y]\\
=&\lim_{t\downarrow0}\frac{1}{t}\sum_{y:y\ne x}(u(y)-u(x))\mathbb{P}_x[X_t=y]=\sum_{y:y\ne x}(u(y)-u(x))\lim_{t\downarrow0}\frac{1}{t}\mathbb{P}_x[X_t=y].
\end{align*}
Let $p_{xy}(t)=\mathbb{P}_x[X_t=y]$, then for all $x\ne y$, we have $p_{xy}(0)=0$ and
$$\lim_{t\downarrow0}\frac{1}{t}\mathbb{P}_x[X_t=y]=\lim_{t\downarrow0}\frac{1}{t}p_{xy}(t)=p_{xy}'(0)$$
if the derivative exists. We derive an equation that $p_{xy}$ satisfies; the idea of the following calculation is from \cite[Theorem 2.8.4]{Nor98}.
Note that
$$p_{xy}(t)=\mathbb{P}_x[X_t=y]=\mathbb{P}_x[X_t=y,t<J_1]+\mathbb{P}_x[X_t=y,t\ge J_1],$$
where
\begin{align*}
\mathbb{P}_x[X_t=y,t<J_1]&=\mathbb{P}_x[Y_0=y,t<T_1]=\mathbb{P}_x[Y_0=y]\mathbb{P}_x[t<\frac{S_1}{\alpha(x)}]=\delta_{xy}e^{-\alpha(x)t},\\
\mathbb{P}_x[X_t=y,t\ge J_1]&=\sum_{z\in X}\mathbb{P}_x[X_t=y,t\ge J_1,Y_1=z]\\
&=\sum_{z\in X}\int_0^t\mathbb{P}_x[Y_1=z]\mathbb{P}_x[J_1\in\mathrm{d} s]\mathbb{P}_z[X_{t-s}=y]\\
&=\sum_{z\in X}\int_0^tP(x,z)\alpha(x)e^{-\alpha(x)s}p_{zy}(t-s)\mathrm{d} s.
\end{align*}
Hence
\begin{equation}\label{SG_det_eqn_pxy}
p_{xy}(t)=\delta_{xy}e^{-\alpha(x)t}+\sum_{z\in X}\int_0^tP(x,z)\alpha(x)e^{-\alpha(x)s}p_{zy}(t-s)\mathrm{d} s.
\end{equation}
Since $X$ is locally finite and $p_{xy}\in[0,1]$, Equation (\ref{SG_det_eqn_pxy}) shows that $p_{xy}$ is continuous, hence continuously differentiable, and by induction infinitely differentiable. Note that Equation (\ref{SG_det_eqn_pxy}) is equivalent to
$$e^{\alpha(x)t}p_{xy}(t)=\delta_{xy}+\sum_{z\in X}\alpha(x)P(x,z)\int_0^te^{\alpha(x)s}p_{zy}(s)\mathrm{d} s.$$
Differentiating both sides with respect to $t$, we have
$$e^{\alpha(x)t}\left(\alpha(x)p_{xy}(t)+p_{xy}'(t)\right)=\sum_{z\in X}\alpha(x)P(x,z)e^{\alpha(x)t}p_{zy}(t),$$
that is,
$$p_{xy}'(t)=\sum_{z\in X}\alpha(x)P(x,z)p_{zy}(t)-\alpha(x)p_{xy}(t).$$
Letting $x\ne y$ and $t=0$, we have
$$p_{xy}'(0)=\alpha(x)P(x,y)=\frac{\pi(x)}{m(x)}\frac{c(x,y)}{\pi(x)}=\frac{c(x,y)}{m(x)},$$
hence
\begin{align*}
&\lim_{t\downarrow0}\frac{1}{t}\left(\mathbb{E}_xu(X_t)-u(x)\right)=\sum_{y:y\ne x}\left(u(y)-u(x)\right)p_{xy}'(0)\\
=&\frac{1}{m(x)}\sum_{y:y\ne x}c(x,y)\left(u(y)-u(x)\right)=\frac{1}{m(x)}\sum_{y\in X}c(x,y)\left(u(y)-u(x)\right),
\end{align*}
which coincides with $Au(x)$.
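The identity just derived, $Au(x)=\frac{1}{m(x)}\sum_{y}c(x,y)(u(y)-u(x))=\lim_{t\downarrow0}\frac{1}{t}(\mathbb{E}_xu(X_t)-u(x))$, can be checked numerically on a finite chain. The 3-node chain, conductances and measure below are hypothetical toy data, and the semigroup is approximated by a truncated power series for $e^{tQ}$; this is a consistency check under those assumptions, not part of the proof.

```python
# hypothetical 3-node chain standing in for (X, c, m)
nodes = [0, 1, 2]
c = {(0, 1): 1.0, (1, 0): 1.0, (1, 2): 0.5, (2, 1): 0.5}
m = [1.0, 0.5, 0.25]

# generator matrix: Q[x][y] = c(x, y)/m(x) for x != y, rows sum to zero
Q = [[c.get((x, y), 0.0) / m[x] for y in nodes] for x in nodes]
for x in nodes:
    Q[x][x] = -sum(Q[x][y] for y in nodes if y != x)

def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(3)) for j in range(3)]
            for i in range(3)]

def semigroup(t, terms=25):
    # truncated power series for P_t = exp(tQ)
    P = [[float(i == j) for j in range(3)] for i in range(3)]
    term = [row[:] for row in P]
    for k in range(1, terms):
        term = [[t / k * v for v in row] for row in matmul(term, Q)]
        P = [[P[i][j] + term[i][j] for j in range(3)] for i in range(3)]
    return P

u = [1.0, -1.0, 2.0]
t = 1e-5
Pt = semigroup(t)
for x in nodes:
    rate = (sum(Pt[x][y] * u[y] for y in nodes) - u[x]) / t   # (P_t u - u)/t
    Au = sum(c.get((x, y), 0.0) * (u[y] - u[x]) for y in nodes) / m[x]
    assert abs(rate - Au) < 1e-3
```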
By the construction of $\myset{X_t}$ in terms of $\myset{Y_n}$, the Martin boundary of $\myset{X_t}$ is the same as the Martin boundary of $Z$.
Indeed, we calculate the Green function of $\myset{X_t}$ explicitly. By the correspondence between $\mathcal{E}_X$ and $\myset{X_t}$, we only need to calculate the Green function of $\mathcal{E}_X$. By \cite[Theorem 1.5.4]{FOT11}, the Green operator $G$ is characterized by $\mathcal{E}_X(Gu,v)=(u,v)$ for all $u,v\in C_c(X)$. Note that
$$Gu(x)=\int_XG(x,\mathrm{d} y)u(y)=\sum_{y\in X}g(x,y)u(y)m(y)$$
where $g$ is the Green function. Taking $u=\mathbf{1}_{x_0}$, $v=\mathbf{1}_{y_0}$, $x_0,y_0\in X$, then we have
$$(u,v)=\sum_{x\in X}u(x)v(x)m(x)=\delta_{x_0y_0}m(x_0),$$
$$Gu(x)=g(x,x_0)m(x_0),$$
and
\begin{align*}
\mathcal{E}_X(Gu,v)&=\frac{1}{2}\sum_{x,y\in X}c(x,y)(Gu(x)-Gu(y))(v(x)-v(y))\\
&=\frac{1}{2}\sum_{x,y\in X}c(x,y)(g(x,x_0)-g(y,x_0))m(x_0)(v(x)-v(y))\\
&=\sum_{x,y\in X}c(x,y)(g(x,x_0)-g(y,x_0))m(x_0)v(x)\\
&=\sum_{y\in X}c(y_0,y)(g(y_0,x_0)-g(y,x_0))m(x_0).
\end{align*}
Letting $g(y,x_0)=G(y,x_0)/C(x_0)$, where $C$ is some function to be determined, we have
\begin{align*}
\sum_{y\in X}c(y_0,y)(g(y_0,x_0)-g(y,x_0))m(x_0)&=\sum_{y\in X}\frac{\pi(y_0)}{C(x_0)}P(y_0,y)(G(y_0,x_0)-G(y,x_0))m(x_0)\\
&=\frac{\pi(y_0)}{C(x_0)}(G(y_0,x_0)-G(y_0,x_0)+\delta_{x_0y_0})m(x_0)\\
&=\frac{\pi(y_0)}{C(x_0)}\delta_{x_0y_0}m(x_0)\\
&=\delta_{x_0y_0}m(x_0),
\end{align*}
hence $C(x_0)=\pi(x_0)$ and
$$g(x,y)=\frac{G(x,y)}{\pi(y)}.$$
Hence the Martin kernel of $\myset{X_t}$ is given by
$$k(x,y)=\frac{g(x,y)}{g(o,y)}=\frac{G(x,y)/\pi(y)}{G(o,y)/\pi(y)}=\frac{G(x,y)}{G(o,y)}=K(x,y)\text{ for all }x,y\in X.$$
Hence the Martin boundaries of $\myset{X_t}$ and $Z$ coincide. Moreover, $\mathcal{E}_X$ is transient.
\begin{mythm}\label{SG_det_thm_sto}
$(\mathcal{E}_X,\mathcal{F}_X)$ on $L^2(X;m)$ is stochastically incomplete.
\end{mythm}
We prove stochastic incompleteness by considering the lifetime
$$\zeta=\sum_{n=1}^\infty T_n=\lim_{n\to+\infty}J_n.$$
This quantity is called the (first) explosion time in \cite[Chapter 2, 2.2]{Nor98}. We first need a preparatory proposition.
\begin{myprop}\label{SG_det_prop_jump}
The jumping times $J_n$ are stopping times of $\myset{X_t}$ for all $n\ge0$.
\end{myprop}
\begin{proof}
Let $\myset{\mathcal{F}_t}$ be the minimum completed admissible filtration with respect to $\myset{X_t}$. It is obvious that $J_0=0$ is a stopping time of $\myset{X_t}$. Assume that $J_n$ is a stopping time of $\myset{X_t}$, then for all $t\ge0$, we have
$$\myset{J_{n+1}\le t}=\bigcup_{s\in\mathbb{Q},s\le t}\left(\myset{J_n\le s}\cap\myset{X_s\ne X_{J_n}}\right)\in\mathcal{F}_t,$$
hence $J_{n+1}$ is a stopping time of $\myset{X_t}$. By induction, it follows that $J_n$ are stopping times of $\myset{X_t}$ for all $n\ge0$.
\end{proof}
\begin{proof}[Proof of Theorem \ref{SG_det_thm_sto}]
By Equation (\ref{SG_det_eqn_F}), we have
\begin{align*}
\mathbb{E}_o\zeta&=\mathbb{E}_o\sum_{n=1}^\infty T_n=\sum_{n=1}^\infty\mathbb{E}_o\left[\frac{S_n}{\alpha(Y_{n-1})}\right]=\sum_{n=1}^\infty\mathbb{E}_o[{S_n}]\mathbb{E}_o\left[\frac{1}{\alpha(Y_{n-1})}\right]\\
&=\sum_{n=1}^\infty\mathbb{E}_o\frac{m}{\pi}(Y_{n-1})=\sum_{n=0}^\infty\mathbb{E}_o\frac{m}{\pi}(Y_{n})=\sum_{n=0}^\infty\sum_{x\in X}\frac{m(x)}{\pi(x)}P^{(n)}(o,x)\\
&=\sum_{x\in X}\frac{m(x)}{\pi(x)}G(o,x)=\sum_{x\in X}\frac{m(x)}{\pi(x)}\frac{\pi(x)G(x,o)}{\pi(o)}=\sum_{x\in X}\frac{m(x)G(x,o)}{\pi(o)}\\
&=\sum_{x\in X}\frac{m(x)F(x,o)G(o,o)}{\pi(o)}=\frac{G(o,o)}{\pi(o)}\sum_{n=0}^\infty3^n\cdot\left(\frac{c}{3\lambda}\right)^n\cdot\lambda^n\\
&=\frac{G(o,o)}{\pi(o)}\sum_{n=0}^\infty c^n.
\end{align*}
Since $c\in(0,\lambda)\subseteq(0,1)$, we have $\mathbb{E}_o\zeta<+\infty$, hence
$$\mathbb{P}_o[\zeta<+\infty]=1.$$
For all $x\in X$, let $n=|x|$, note that $P^{(n)}(o,x)>0$, by Proposition \ref{SG_det_prop_jump} and strong Markov property, we have
\begin{align*}
\mathbb{E}_o\zeta&\ge\mathbb{E}_o\left[\zeta\mathbf{1}{\myset{X_{J_n}=x}}\right]=\mathbb{E}_o\left[\mathbb{E}_o\left[\zeta\mathbf{1}{\myset{X_{J_n}=x}}|X_{J_n}\right]\right]\\
&=\mathbb{E}_o\left[\mathbf{1}{\myset{X_{J_n}=x}}\mathbb{E}_o\left[\zeta|X_{J_n}\right]\right]=\mathbb{E}_o\left[\mathbf{1}{\myset{X_{J_n}=x}}\mathbb{E}_{X_{J_n}}\left[\zeta\right]\right]\\
&=P^{(n)}(o,x)\mathbb{E}_x\left[\zeta\right].
\end{align*}
Hence $\mathbb{E}_x\zeta<+\infty$, hence
$$\mathbb{P}_x\left[\zeta<+\infty\right]=1.$$
Therefore, $\mathcal{E}_X$ is stochastically incomplete.
\end{proof}
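The geometric series at the heart of the proof of Theorem \ref{SG_det_thm_sto} collapses to $\sum_n c^n$, which can be checked numerically; the values of $\lambda$ and $c$ below are illustrative choices with $0<c<\lambda<1$, as the theorem assumes.

```python
lam, c0 = 0.3, 0.2                        # illustrative: 0 < c0 < lam < 1

# sum over n of 3^n * (c/(3*lam))^n * lam^n collapses to sum of c^n
lhs = sum(3 ** n * (c0 / (3 * lam)) ** n * lam ** n for n in range(200))
assert abs(lhs - 1 / (1 - c0)) < 1e-9
```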
By \cite[Proposition 1.17(b)]{Wo00}, for a transient random walk $Z$ on $X$ and every finite set $A\subseteq X$, we have $\mathbb{P}_x\left[Z_n\in A\text{ for infinitely many }n\right]=0$ for all $x\in X$. Roughly speaking, a transient random walk goes to infinity almost surely. For the variable speed continuous time random walk $\myset{X_t}$ on $X$, we have the following theorem.
\begin{mythm}\label{SG_det_thm_infty}
$\myset{X_t}$ goes to infinity in finite time almost surely, that is,
$$\mathbb{P}_x\left[\lim_{t\uparrow\zeta}\lvert X_t\rvert=+\infty,\zeta<+\infty\right]=1\text{ for all }x\in X.$$
\end{mythm}
\begin{proof}
There exists $\Omega_0$ with $\mathbb{P}_x(\Omega_0)=1$ such that $\zeta(\omega)<+\infty$ for all $\omega\in\Omega_0$. For all $m\ge1$, we have
$$\mathbb{P}_x\left[Y_n\in B_m\text{ for infinitely many }n\right]=0,$$
hence there exists $\Omega_m$ with $\mathbb{P}_x(\Omega_m)=1$ such that for all $\omega\in\Omega_m$, there exists $N=N(\omega)\ge1$ such that for all $n\ge N$, we have $Y_n(\omega)\notin B_m$. Moreover
$$\mathbb{P}_x\left(\Omega_0\cap\bigcap_{m=1}^\infty\Omega_m\right)=1.$$
For all $\omega\in\Omega_0\cap\bigcap_{m=1}^\infty\Omega_m$, we have
$$J_n(\omega)\le J_{n+1}(\omega)<\zeta(\omega)<+\infty.$$
Since $\omega\in\Omega_m$, there exists $N=N(\omega)\ge1$ such that for all $n>N$, we have $Y_n(\omega)\notin B_m$. By definition, we have
$$X_t(\omega)=Y_n(\omega)\text{ if }J_n(\omega)\le t<J_{n+1}(\omega).$$
Let $T=J_{N(\omega)}(\omega)$, then for all $t\in(T,\zeta(\omega))$, there exists $n\ge N$ such that $J_n(\omega)\le t<J_{n+1}(\omega)$, hence $X_t(\omega)=Y_n(\omega)\not\in B_m$, that is,
$$\lim_{t\uparrow\zeta(\omega)}\lvert X_t(\omega)\rvert=+\infty.$$
\end{proof}
\section{Active Reflected Dirichlet Space \texorpdfstring{$(\mathcal{E}^{\mathrm{ref}},\mathcal{F}^{\mathrm{ref}}_a)$}{(Eref,Frefa)}}\label{SG_det_sec_ref}
In this section, we construct active reflected Dirichlet form $(\mathcal{E}^{\mathrm{ref}},\mathcal{F}^{\mathrm{ref}}_a)$ and show that $\mathcal{F}_X\subsetneqq\mathcal{F}^{\mathrm{ref}}_a$, hence $\mathcal{E}^{\mathrm{ref}}$ is not regular.
The reflected Dirichlet space was introduced by Chen \cite{Chen92} as a generalization of reflected Brownian motion in Euclidean space. He considered abstract Dirichlet forms instead of constructing the reflection pathwise from a probabilistic viewpoint. A more detailed discussion is incorporated into his book with Fukushima \cite[Chapter 6]{CF12}.
Given a regular transient Dirichlet form $(\mathcal{E},\mathcal{F})$ on $L^2(X;m)$, the reflection can be constructed in either of the following two ways:
\begin{itemize}
\item The linear span of $\mathcal{F}$ and all harmonic functions of finite ``$\mathcal{E}$-energy".
\item All functions that are ``locally" in $\mathcal{F}$ and have finite ``$\mathcal{E}$-energy".
\end{itemize}
We use the second way, which is more convenient. Recall the Beurling-Deny decomposition: since $(\mathcal{E},\mathcal{F})$ is regular, we have
$$\mathcal{E}(u,u)=\frac{1}{2}\mu^c_{<u>}(X)+\int_{X\times X\backslash d}(u(x)-u(y))^2J(\mathrm{d} x\mathrm{d} y)+\int_Xu(x)^2k(\mathrm{d} x)$$
for all $u\in\mathcal{F}_e$, here we use the convention that all functions in $\mathcal{F}_e$ are quasi-continuous. By this formula, we can define
$$\hat{\mathcal{E}}(u,u)=\frac{1}{2}\mu^c_{<u>}(X)+\int_{X\times X\backslash d}(u(x)-u(y))^2J(\mathrm{d} x\mathrm{d} y)+\int_Xu(x)^2k(\mathrm{d} x)$$
for all $u\in\mathcal{F}_{\mathrm{loc}}$, where
$$\mathcal{F}_{\mathrm{loc}}=\myset{u:\forall G\subseteq X\text{ relatively compact open,}\exists v\in\mathcal{F},\text{s.t. }u=v,m\text{-a.e. on }G}.$$
We give the definition of reflected Dirichlet space as follows. \cite[Theorem 6.2.5]{CF12} gave
$$
\begin{cases}
&\mathcal{F}^{\mathrm{ref}}=\myset{u:\text{ finite }m\text{-a.e.},\exists\myset{u_n}\subseteq\mathcal{F}_{\mathrm{loc}}\text{ that is }\hat{\mathcal{E}}\text{-Cauchy},u_n\to u,m\text{-a.e. on }X},\\
&\hat{\mathcal{E}}(u,u)=\lim_{n\to+\infty}\hat{\mathcal{E}}(u_n,u_n).
\end{cases}
$$
Let $\tau_ku=\left((-k)\vee u\right)\wedge k$, $k\ge1$, then \cite[Theorem 6.2.13]{CF12} gave
$$
\begin{cases}
&\mathcal{F}^{\mathrm{ref}}=\myset{u:|u|<+\infty,m\text{-a.e.},\tau_ku\in\mathcal{F}_{\mathrm{loc}}\forall k\ge1,\sup_{k\ge1}\hat{\mathcal{E}}(\tau_ku,\tau_ku)<+\infty},\\
&\mathcal{E}^\mathrm{ref}(u,u)=\lim_{k\to+\infty}\hat{\mathcal{E}}(\tau_ku,\tau_ku).
\end{cases}
$$
Let $\mathcal{F}^\mathrm{ref}_a=\mathcal{F}^\mathrm{ref}\cap L^2(X;m)$, then $(\mathcal{E}^\mathrm{ref},\mathcal{F}^\mathrm{ref}_a)$ is called the active reflected Dirichlet space. \cite[Theorem 6.2.14]{CF12} showed that $(\mathcal{E}^\mathrm{ref},\mathcal{F}^\mathrm{ref}_a)$ is a Dirichlet form on $L^2(X;m)$.
Returning to our case, since
$$\mathcal{E}_X(u,u)=\frac{1}{2}\sum_{x,y\in X}c(x,y)(u(x)-u(y))^2\text{ for all }u\in\mathcal{F}_X,$$
$\mathcal{E}_X$ has only a jumping part, so we have
$$\hat{\mathcal{E}}_X(u,u)=\frac{1}{2}\sum_{x,y\in X}c(x,y)(u(x)-u(y))^2\text{ for all }u\in(\mathcal{F}_X)_{\mathrm{loc}}.$$
By the definition of the local Dirichlet space,
$$(\mathcal{F}_X)_{\mathrm{loc}}=\myset{u:\forall G\subseteq X\text{ relatively compact open,}\exists v\in\mathcal{F}_X,\text{s.t. }u=v,m\text{-a.e. on }G}.$$
For every relatively compact open set $G\subseteq X$, $G$ is a finite set. For any function $u$ on $X$, let
$$v(x)=
\begin{cases}
u(x),&\text{if }x\in G,\\
0,&\text{if }x\in X\backslash G,
\end{cases}
$$
then $v\in C_c(X)\subseteq\mathcal{F}_X$ and $u=v$ on $G$, hence $(\mathcal{F}_X)_{\mathrm{loc}}=\myset{u:u\text{ is a finite function on }X}$. Therefore
$$\mathcal{F}^\mathrm{ref}=\myset{u:|u(x)|<+\infty,\forall x\in X,\sup_{k\ge1}\frac{1}{2}\sum_{x,y\in X}c(x,y)\left(\tau_ku(x)-\tau_ku(y)\right)^2<+\infty}.$$
By monotone convergence theorem, we have
$$\frac{1}{2}\sum_{x,y\in X}c(x,y)\left(\tau_ku(x)-\tau_ku(y)\right)^2\uparrow\frac{1}{2}\sum_{x,y\in X}c(x,y)\left(u(x)-u(y)\right)^2,$$
hence
$$\sup_{k\ge1}\frac{1}{2}\sum_{x,y\in X}c(x,y)\left(\tau_ku(x)-\tau_ku(y)\right)^2=\frac{1}{2}\sum_{x,y\in X}c(x,y)\left(u(x)-u(y)\right)^2,$$
and
$$
\begin{cases}
&\mathcal{F}^\mathrm{ref}=\myset{u:\text{ finite function},\frac{1}{2}\sum_{x,y\in X}c(x,y)\left(u(x)-u(y)\right)^2<+\infty},\\
&\mathcal{E}^\mathrm{ref}(u,u)=\frac{1}{2}\sum_{x,y\in X}c(x,y)\left(u(x)-u(y)\right)^2.
\end{cases}
$$
Moreover,
$$\mathcal{F}^\mathrm{ref}_a=\myset{u\in L^2(X;m):\frac{1}{2}\sum_{x,y\in X}c(x,y)\left(u(x)-u(y)\right)^2<+\infty}.$$
Indeed, one can show directly that $(\mathcal{E}^\mathrm{ref},\mathcal{F}^\mathrm{ref}_a)$ is a Dirichlet form on $L^2(X;m)$. In general, $(\mathcal{E}^\mathrm{ref},\mathcal{F}^\mathrm{ref}_a)$ on $L^2(X;m)$ is not regular and $\mathcal{F}_X\subsetneqq\mathcal{F}^\mathrm{ref}_a$. This is analogous to $H^1_0(D)\subsetneqq H^1(D)$, where $D$ is the open unit disk in $\mathbb{R}^d$. We need to show that $\mathcal{F}_X\ne\mathcal{F}^\mathrm{ref}_a$, otherwise the reflection is meaningless. Then we construct a regular representation of $(\mathcal{E}^\mathrm{ref},\mathcal{F}^\mathrm{ref}_a)$ on $L^2(X;m)$, which enlarges the space $X$ to the Martin compactification $\bar{X}$, where the Martin boundary $\partial X$ appears.
\begin{mythm}
$\mathcal{F}_X\subsetneqq\mathcal{F}^{\mathrm{ref}}_a$, hence $\mathcal{E}^{\mathrm{ref}}$ is not regular.
\end{mythm}
\begin{proof}
Since $m(X)<+\infty$, we have $1\in\mathcal{F}^{\mathrm{ref}}_a$ and $\mathcal{E}^{\mathrm{ref}}(1,1)=0$. By \cite[Theorem 1.6.3]{FOT11}, $\mathcal{E}^{\mathrm{ref}}$ is recurrent, and by \cite[Lemma 1.6.5]{FOT11}, $\mathcal{E}^{\mathrm{ref}}$ is conservative, that is, stochastically complete. Since $\mathcal{E}_X$ is transient and stochastically incomplete, we have $\mathcal{F}_X\ne\mathcal{F}^{\mathrm{ref}}_a$. Although $\mathcal{E}^{\mathrm{ref}}$ is not regular and there is no corresponding Hunt process, recurrence and conservativeness are still well-defined, see \cite[Chapter 1, 1.6]{FOT11}.
\end{proof}
\section{Regular Representation of \texorpdfstring{$(\mathcal{E}^\mathrm{ref},\mathcal{F}^\mathrm{ref}_a)$}{(Eref,Frefa)}}\label{SG_det_sec_repre}
In this section, we construct a regular Dirichlet form $(\mathcal{E}_{\bar{X}},\mathcal{F}_{\bar{X}})$ on $L^2(\bar{X};m)$ which is a regular representation of Dirichlet form $(\mathcal{E}^\mathrm{ref},\mathcal{F}^\mathrm{ref}_a)$ on $L^2(X;m)$, where $\bar{X}$ is the Martin compactification of $X$ and $m$ is given as above.
Recall that $(\frac{1}{2}\mathbf{D},H^1(D))$ on $L^2(D)$ is not regular and $(\frac{1}{2}\mathbf{D},H^1(D))$ on $L^2(\bar{D},\mathbf{1}_D(\mathrm{d} x))$ is a regular representation. Our construction is very simple and similar to this case. Let
$$
\begin{cases}
&\mathcal{E}_{\bar{X}}(u,u)=\frac{1}{2}\sum_{x,y\in X}c(x,y)(u(x)-u(y))^2,\\
&\mathcal{F}_{\bar{X}}=\myset{u\in C(\bar{X}):\sum_{x,y\in X}c(x,y)(u(x)-u(y))^2<+\infty}.
\end{cases}
$$
We show that $(\mathcal{E}_{\bar{X}},\mathcal{F}_{\bar{X}})$ is a regular Dirichlet form on $L^2({\bar{X}};m)$.
\begin{mythm}\label{SG_det_thm_main}
If $\lambda\in(1/5,1/3)$, then $(\mathcal{E}_{\bar{X}},\mathcal{F}_{\bar{X}})$ is a regular Dirichlet form on $L^2({\bar{X}};m)$.
\end{mythm}
First, we need a lemma.
\begin{mylem}\label{SG_det_lem_ext}
If $\lambda<1/3$, then every function $u$ on $X$ with
$$C=\frac{1}{2}\sum_{x,y\in X}c(x,y)(u(x)-u(y))^2<+\infty$$
can be extended continuously to $\bar{X}$.
\end{mylem}
\begin{proof}
Since $\hat{X}$ is homeomorphic to $\bar{X}$, we consider $\hat{X}$ instead. For all $\xi\in\partial X$, take geodesic ray $[x_0,x_1,\ldots]$ with $|x_n|=n$, $x_n\sim x_{n+1}$ for all $n\ge0$ such that $x_n\to\xi$ in $\rho_a$. Then
$$\lvert u(x_n)-u(x_{n+1})\rvert\le\sqrt{\frac{2C}{c(x_n,x_{n+1})}}=\sqrt{2C}(\sqrt{3\lambda})^n,$$
since $\lambda<1/3$, $\myset{u(x_n)}$ is a Cauchy sequence; define $u(\xi)=\lim_{n\to+\infty}u(x_n)$.
First, we show that this is well-defined. Indeed, for any two equivalent geodesic rays $[x_0,x_1,\ldots]$ and $[y_0,y_1,\ldots]$ with $|x_0|=|y_0|=0$, by \cite[Proposition 22.12(a)]{Wo00}, we have $d(x_n,y_n)\le2\delta$ for all $n\ge0$. Take an integer $M\ge 2\delta$; then for arbitrary fixed $n\ge0$, there exist $z_0=x_n,\ldots,z_M=y_n$ with $|z_i|=n$ for all $i=0,\ldots,M$ and $z_i=z_{i+1}$ or $z_{i}\sim z_{i+1}$ for all $i=0,\ldots, M-1$, and we have
$$\lvert u(x_n)-u(y_n)\rvert\le\sum_{i=0}^{M-1}|u(z_{i})-u(z_{i+1})|\le\sum_{i=0}^{M-1}\sqrt{\frac{2C}{c(z_i,z_{i+1})}}\le M\sqrt{\frac{2C}{\min{\myset{C_1,C_2}}}}(\sqrt{3\lambda})^n.$$
Since $\lambda<1/3$, letting $n\to+\infty$, we have $\lvert u(x_n)-u(y_n)\rvert\to0$, so $u(\xi)$ is well-defined, and
$$|u(\xi)-u(x_n)|\le\sum_{i=n}^\infty|u(x_i)-u(x_{i+1})|\le\sum_{i=n}^\infty\sqrt{2C}(\sqrt{3\lambda})^i=\frac{\sqrt{2C}}{1-\sqrt{3\lambda}}(\sqrt{3\lambda})^n.$$
Next, we show that the extended function $u$ is continuous on $\hat{X}$. We only need to show that for every sequence $\myset{\xi_n}\subseteq\partial X$ with $\xi_n\to\xi\in\partial X$ in $\rho_a$, we have $u(\xi_n)\to u(\xi)$. Since $\partial X$ with $\rho_a$ is H\"older equivalent to $K$ with the Euclidean metric by Equation (\ref{SG_det_eqn_Holder}), we use them interchangeably: $\myset{\xi_n}\subseteq K$ and $\xi_n\to\xi\in K$ in the Euclidean metric.
For all $\varepsilon>0$, there exists $M\ge1$ such that $(\sqrt{3\lambda})^M<\varepsilon$. Take $w\in W_M$ such that $\xi\in K_w$; there are at most 12 words $\tilde{w}\in W_M$, $\tilde{w}\ne w$, such that $\tilde{w}\sim w$, see Figure \ref{SG_det_figure_nbhd}. Indeed, a careful analysis of the geometry of the SG shows that there are at most 3.
\begin{figure}[ht]
\centering
\begin{tikzpicture}[scale=0.7]
\tikzstyle{every node}=[font=\small,scale=0.7]
\draw (0,2*1.7320508076)--(2,2*1.7320508076);
\draw (-1,1.7320508076)--(3,1.7320508076);
\draw (-2,0)--(4,0);
\draw (-1,-1.7320508076)--(3,-1.7320508076);
\draw (-2,0)--(0,2*1.7320508076);
\draw (-1,-1.7320508076)--(2,2*1.7320508076);
\draw (1,-1.7320508076)--(3,1.7320508076);
\draw (3,-1.7320508076)--(4,0);
\draw (-2,0)--(-1,-1.7320508076);
\draw (-1,1.7320508076)--(1,-1.7320508076);
\draw (0,2*1.7320508076)--(3,-1.7320508076);
\draw (2,2*1.7320508076)--(4,0);
\draw[line width=2pt] (0,0)--(1,1.7320508076)--(2,0)--cycle;
\draw (1,0.7) node {$K_w$};
\end{tikzpicture}
\caption{A Neighborhood of $K_w$}\label{SG_det_figure_nbhd}
\end{figure}
Let
$$U=\left(\bigcup_{\tilde{w}:\tilde{w}\in W_M,\tilde{w}\sim w}K_{\tilde{w}}\right)\cup K_w,$$
there exists $N\ge1$ such that $\xi_n\in U$ for all $n>N$. Fix $n>N$. If $\xi_n\in K_w$, then
$$\lvert u(\xi_n)-u(\xi)\rvert\le\lvert u(\xi_n)-u(w)\rvert+\lvert u(\xi)-u(w)\rvert\le\frac{2\sqrt{2C}}{1-\sqrt{3\lambda}}(\sqrt{3\lambda})^M<\frac{2\sqrt{2C}}{1-\sqrt{3\lambda}}\varepsilon.$$
If $\xi_n\in K_{\tilde{w}}$, $\tilde{w}\in W_M$, $\tilde{w}\sim w$, then
\begin{align*}
\lvert u(\xi_n)-u(\xi)\rvert&\le\lvert u(\xi_n)-u(\tilde{w})\rvert+\lvert u(\tilde{w})-u(w)\rvert+\lvert u(w)-u(\xi)\rvert\\
&\le\frac{2\sqrt{2C}}{1-\sqrt{3\lambda}}(\sqrt{3\lambda})^M+\sqrt{\frac{2C}{\min{\myset{C_1,C_2}}}}(\sqrt{3\lambda})^M\\
&<\left(\frac{2\sqrt{2C}}{1-\sqrt{3\lambda}}+\sqrt{\frac{2C}{\min{\myset{C_1,C_2}}}}\right)\varepsilon.
\end{align*}
Hence
$$|u(\xi_n)-u(\xi)|<\left(\frac{2\sqrt{2C}}{1-\sqrt{3\lambda}}+\sqrt{\frac{2C}{\min{\myset{C_1,C_2}}}}\right)\varepsilon,$$
for all $n>N$, hence $\lim_{n\to+\infty}u(\xi_n)=u(\xi)$. Therefore, the extended function $u$ is continuous on $\hat{X}$.
\end{proof}
\begin{proof}[Proof of Theorem \ref{SG_det_thm_main}]
Since $C_c(X)\subseteq\mathcal{F}_{\bar{X}}$ is dense in $L^2({\bar{X}};m)$, we have $\mathcal{E}_{\bar{X}}$ is a symmetric form on $L^2({\bar{X}};m)$.
We show closed property of $\mathcal{E}_{\bar{X}}$. Let $\myset{u_k}\subseteq\mathcal{F}_{\bar{X}}$ be an $(\mathcal{E}_{\bar{X}})_1$-Cauchy sequence. Then there exists $u\in L^2(\bar{X};m)$ such that $u_k\to u$ in $L^2(\bar{X};m)$, hence $u_k(x)\to u(x)$ for all $x\in X$. By Fatou's lemma, we have
\begin{align*}
&\frac{1}{2}\sum_{x,y\in X}c(x,y)\left((u_k-u)(x)-(u_k-u)(y)\right)^2\\
=&\frac{1}{2}\sum_{x,y\in X}c(x,y)\lim_{l\to+\infty}\left((u_k-u_l)(x)-(u_k-u_l)(y)\right)^2\\
\le&\varliminf_{l\to+\infty}\frac{1}{2}\sum_{x,y\in X}c(x,y)\left((u_k-u_l)(x)-(u_k-u_l)(y)\right)^2\\
=&\varliminf_{l\to+\infty}\mathcal{E}_{\bar{X}}(u_k-u_l,u_k-u_l).
\end{align*}
Letting $k\to+\infty$, we have
$$\frac{1}{2}\sum_{x,y\in X}c(x,y)\left((u_k-u)(x)-(u_k-u)(y)\right)^2\to0,$$
and
$$\frac{1}{2}\sum_{x,y\in X}c(x,y)\left(u(x)-u(y)\right)^2<+\infty.$$
By Lemma \ref{SG_det_lem_ext}, $u$ can be extended continuously to $\bar{X}$, hence $u\in C(\bar{X})$ and $u\in\mathcal{F}_{\bar{X}}$. Therefore $\mathcal{E}_{\bar{X}}$ is closed.
It is obvious that $\mathcal{E}_{\bar{X}}$ is Markovian. Hence $\mathcal{E}_{\bar{X}}$ is a Dirichlet form on $L^2(\bar{X};m)$.
Since ${\bar{X}}$ is compact, we have $C_c({\bar{X}})=C({\bar{X}})$. To show that $\mathcal{E}_{\bar{X}}$ is regular, we need to show that $C_c({\bar{X}})\cap\mathcal{F}_{\bar{X}}=C({\bar{X}})\cap\mathcal{F}_{\bar{X}}=\mathcal{F}_{\bar{X}}$ is $(\mathcal{E}_{\bar{X}})_1$-dense in $\mathcal{F}_{\bar{X}}$ and uniformly dense in $C_c({\bar{X}})=C({\bar{X}})$. The first density is trivial, so we need only show that $\mathcal{F}_{\bar{X}}$ is uniformly dense in $C({\bar{X}})$. Since $\bar{X}$ is compact, $\mathcal{F}_{\bar{X}}$ is a sub-algebra of $C(\bar{X})$. By the Stone--Weierstrass theorem, we only need to show that $\mathcal{F}_{\bar{X}}$ separates points. The idea of our proof comes from the classical construction of the local regular Dirichlet form on the SG.
For all $p,q\in\bar{X}$ with $p\ne q$, it suffices to construct $v\in\mathcal{F}_{\bar{X}}$ such that $v(p)\ne v(q)$.
If $p\in X$, then letting $v(p)=1$ and $v(x)=0$ for all $x\in X\backslash\myset{p}$, we have
$$\sum_{x,y\in X}c(x,y)(v(x)-v(y))^2<+\infty.$$
By Lemma \ref{SG_det_lem_ext}, $v$ can be extended to a function in $C(\bar{X})$ still denoted by $v$, hence $v\in\mathcal{F}_{\bar{X}}$. Moreover, $v(q)=0\ne1=v(p)$.
If $q\in X$, then the proof is similar.
If $p,q\in\bar{X}\backslash X=\partial X=K$, then there exist a sufficiently large $m\ge1$ and $w^{(1)},w^{(2)}\in S_m$ with $p\in K_{w^{(1)}}$, $q\in K_{w^{(2)}}$ and $K_{w^{(1)}}\cap K_{w^{(2)}}=\emptyset$, hence $w^{(1)}\not\sim w^{(2)}$. Let $v=0$ in $B_m$ and
$$v(w^{(1)}0)=v(w^{(1)}1)=v(w^{(1)}2)=1.$$
For all $w\in S_{m+1}\backslash\myset{w^{(1)}0,w^{(1)}1,w^{(1)}2}$, let
$$
v(w)=
\begin{cases}
1,&\text{if }w\sim w^{(1)}0\text{ or }w\sim w^{(1)}1\text{ or }w\sim w^{(1)}2,\\
0,&\text{otherwise},
\end{cases}
$$
then
$$v(w^{(2)}0)=v(w^{(2)}1)=v(w^{(2)}2)=0.$$
In the summation $\sum_{x,y\in S_{m+1}}c(x,y)(v(x)-v(y))^2$, horizontal edges of type \Rmnum{2} make no contribution, since $v$ takes the same value at both endpoints of each such edge. Assume that we have constructed $v$ on $B_n$ such that in the summations $\sum_{x,y\in S_i}c(x,y)(v(x)-v(y))^2$, $i=m+1,\ldots,n$, horizontal edges of type \Rmnum{2} make no contribution, that is, $v$ takes the same value at both endpoints of each such edge. We construct $v$ on $S_{n+1}$ as follows.
In the summation $\sum_{x,y\in S_n}c(x,y)(v(x)-v(y))^2$, the nonzero terms all come from the edges of the smallest triangles in $S_n$. Pick one such triangle in $S_n$; it generates three triangles in $S_{n+1}$, nine triangles in $S_{n+2}$, and so on. See Figure \ref{SG_det_figure_gene}.
\begin{figure}[ht]
\centering
\begin{tikzpicture}
\draw (0,0)--(3,0);
\draw (3,0)--(3/2,3/2*1.7320508076);
\draw (0,0)--(3/2,3/2*1.7320508076);
\draw (6,0)--(9,0);
\draw (9,0)--(7.5,3/2*1.7320508076);
\draw (6,0)--(7.5,3/2*1.7320508076);
\draw (6.5,1/2*1.7320508076)--(7,0);
\draw (8.5,1/2*1.7320508076)--(8,0);
\draw (7,1.7320508076)--(8,1.7320508076);
\draw[fill=black] (0,0) circle (0.06);
\draw[fill=black] (3,0) circle (0.06);
\draw[fill=black] (3/2,3/2*1.7320508076) circle (0.06);
\draw[fill=black] (6,0) circle (0.06);
\draw[fill=black] (7,0) circle (0.06);
\draw[fill=black] (8,0) circle (0.06);
\draw[fill=black] (9,0) circle (0.06);
\draw[fill=black] (6.5,1.7320508076/2) circle (0.06);
\draw[fill=black] (8.5,1.7320508076/2) circle (0.06);
\draw[fill=black] (7,1.7320508076) circle (0.06);
\draw[fill=black] (8,1.7320508076) circle (0.06);
\draw[fill=black] (7.5,3/2*1.7320508076) circle (0.06);
\draw (0,-0.3) node {$a$};
\draw (3,-0.3) node {$b$};
\draw (1.5,3/2*1.7320508076+0.3) node {$c$};
\draw (6,-0.3) node {$a$};
\draw (9,-0.3) node {$b$};
\draw (7.5,3/2*1.7320508076+0.3) node {$c$};
\draw (7,-0.3) node {$x$};
\draw (8,-0.3) node {$x$};
\draw (6.3,1/2*1.7320508076) node {$z$};
\draw (6.8,1.7320508076) node {$z$};
\draw (8.7,1/2*1.7320508076) node {$y$};
\draw (8.2,1.7320508076) node {$y$};
\end{tikzpicture}
\caption{Generation of triangles}\label{SG_det_figure_gene}
\end{figure}
We only need to assign the values of $v$ on the three triangles in $S_{n+1}$ from the values of $v$ on the triangle in $S_n$. As in Figure \ref{SG_det_figure_gene}, $x,y,z$ are the values of $v$ at the corresponding nodes, to be determined from $a,b,c$. The contribution of this one triangle in $S_n$ to $\sum_{x,y\in S_n}c(x,y)(v(x)-v(y))^2$ is
$$A_1=\frac{C_1}{(3\lambda)^n}\left[(a-b)^2+(b-c)^2+(a-c)^2\right].$$
The contribution of these three triangles in $S_{n+1}$ to $\sum_{x,y\in S_{n+1}}c(x,y)(v(x)-v(y))^2$ is
\begin{align*}
A_2=\frac{C_1}{(3\lambda)^{n+1}}&\left[(a-x)^2+(a-z)^2+(x-z)^2\right.\\
&+(b-x)^2+(b-y)^2+(x-y)^2\\
&\left.+(c-y)^2+(c-z)^2+(y-z)^2\right].
\end{align*}
Regarding $A_2$ as a function of $x,y,z$, an elementary calculation shows that $A_2$ attains its minimum when
$$
\begin{cases}
x=\frac{2a+2b+c}{5},\\
y=\frac{a+2b+2c}{5},\\
z=\frac{2a+b+2c}{5},
\end{cases}
$$
and
\begin{align*}
A_2&=\frac{C_1}{(3\lambda)^{n+1}}\cdot\frac{3}{5}\left[(a-b)^2+(b-c)^2+(a-c)^2\right]\\
&=\frac{1}{5\lambda}\left(\frac{C_1}{(3\lambda)^n}\left[(a-b)^2+(b-c)^2+(a-c)^2\right]\right)=\frac{1}{5\lambda}A_1.
\end{align*}
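The elementary minimization above is a small quadratic computation. The following sketch is purely illustrative (not part of the proof): it checks numerically that the stated $x,y,z$ give the minimum and that the bracket contracts by the factor $3/5$.

```python
import itertools
import random

def child_bracket(a, b, c, x, y, z):
    # the bracket in A_2: energy of the three level-(n+1) triangles
    return ((a - x)**2 + (a - z)**2 + (x - z)**2
            + (b - x)**2 + (b - y)**2 + (x - y)**2
            + (c - y)**2 + (c - z)**2 + (y - z)**2)

def parent_bracket(a, b, c):
    # the bracket in A_1: energy of the level-n triangle
    return (a - b)**2 + (b - c)**2 + (a - c)**2

def minimizer(a, b, c):
    # the claimed minimizing values of x, y, z
    return (2*a + 2*b + c) / 5, (a + 2*b + 2*c) / 5, (2*a + b + 2*c) / 5

random.seed(0)
for _ in range(100):
    a, b, c = (random.uniform(-1, 1) for _ in range(3))
    x, y, z = minimizer(a, b, c)
    # minimum value equals (3/5) times the parent bracket
    assert abs(child_bracket(a, b, c, x, y, z) - 0.6 * parent_bracket(a, b, c)) < 1e-12
    # stationarity: perturbing (x, y, z) never decreases the quadratic A_2
    for dx, dy, dz in itertools.product((-1e-3, 0.0, 1e-3), repeat=3):
        assert (child_bracket(a, b, c, x + dx, y + dy, z + dz)
                >= child_bracket(a, b, c, x, y, z) - 1e-15)
```

Since $A_2=\frac{C_1}{(3\lambda)^{n+1}}\cdot\frac{3}{5}[\cdots]$ and $\frac{3}{5}\cdot\frac{1}{3\lambda}=\frac{1}{5\lambda}$, this is exactly the contraction $A_2=\frac{1}{5\lambda}A_1$.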
By the above construction, horizontal edges of type \Rmnum{2} in $S_{n+1}$ also make no contribution to
$$\sum_{x,y\in S_{n+1}}c(x,y)(v(x)-v(y))^2$$
and
$$\sum_{x,y\in S_{n+1}}c(x,y)(v(x)-v(y))^2=\frac{1}{5\lambda}\sum_{x,y\in S_{n}}c(x,y)(v(x)-v(y))^2.$$
Since $\lambda>1/5$, we have
$$\sum_{n=0}^\infty\sum_{x,y\in S_{n}}c(x,y)(v(x)-v(y))^2<+\infty;$$
this is the total contribution of all horizontal edges to $\sum_{x,y\in X}c(x,y)(v(x)-v(y))^2$.
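To spell out the convergence (an elaboration of the estimate above): write $E_n=\sum_{x,y\in S_n}c(x,y)(v(x)-v(y))^2$; since $v=0$ on $B_m$ we have $E_n=0$ for $n\le m$, and iterating the contraction gives

```latex
E_n=\left(\frac{1}{5\lambda}\right)^{n-m-1}E_{m+1}\quad(n\ge m+1),
\qquad
\sum_{n=0}^{\infty}E_n
=E_{m+1}\sum_{k=0}^{\infty}\left(\frac{1}{5\lambda}\right)^{k}
=\frac{5\lambda}{5\lambda-1}\,E_{m+1},
```

which is finite precisely because $\lambda>1/5$, that is, $1/(5\lambda)<1$.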
We now consider the contribution of all vertical edges. For all $n\ge m$, by construction $v|_{S_{n+1}}$ is uniquely determined by $v|_{S_n}$, hence the contribution of the vertical edges between $S_n$ and $S_{n+1}$ is uniquely determined by $v|_{S_n}$. As above, we pick one smallest triangle in $S_n$ and consider the contribution of the vertical edges connecting it to $S_{n+1}$; there are nine such vertical edges between $S_n$ and $S_{n+1}$. These nine vertical edges contribute
\begin{align*}
A_3&=\frac{1}{(3\lambda)^n}\left[(a-x)^2+(a-z)^2+(a-a)^2\right.\\
&+(b-x)^2+(b-y)^2+(b-b)^2\\
&\left.+(c-y)^2+(c-z)^2+(c-c)^2\right]\\
&=\frac{14}{25C_1}\left(\frac{C_1}{(3\lambda)^n}\left[(a-b)^2+(b-c)^2+(a-c)^2\right]\right)=\frac{14}{25C_1}A_1.
\end{align*}
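The coefficient $14/25$ is again elementary arithmetic. As a hedged numerical illustration (not part of the proof), one can check that the six nonvanishing vertical differences always sum to $\frac{14}{25}$ of the parent bracket:

```python
import random

def vertical_bracket(a, b, c):
    # values on the child triangles, as in the construction above
    x = (2*a + 2*b + c) / 5
    y = (a + 2*b + 2*c) / 5
    z = (2*a + b + 2*c) / 5
    # the nine vertical edges; the three joining equal values vanish
    return ((a - x)**2 + (a - z)**2
            + (b - x)**2 + (b - y)**2
            + (c - y)**2 + (c - z)**2)

def parent_bracket(a, b, c):
    return (a - b)**2 + (b - c)**2 + (a - c)**2

random.seed(1)
for _ in range(100):
    a, b, c = (random.uniform(-1, 1) for _ in range(3))
    assert abs(vertical_bracket(a, b, c) - (14/25) * parent_bracket(a, b, c)) < 1e-12
```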
Hence
$$\sum_{x\in S_n,y\in S_{n+1}}c(x,y)(v(x)-v(y))^2=\frac{14}{25C_1}\sum_{x,y\in S_n}c(x,y)(v(x)-v(y))^2,$$
and
$$\sum_{n=0}^\infty\sum_{x\in S_n,y\in S_{n+1}}c(x,y)(v(x)-v(y))^2<+\infty\Leftrightarrow\sum_{n=0}^\infty\sum_{x,y\in S_n}c(x,y)(v(x)-v(y))^2<+\infty.$$
Since $\lambda>1/5$, both summations converge, and
$$\sum_{x,y\in X}c(x,y)(v(x)-v(y))^2<+\infty.$$
By Lemma \ref{SG_det_lem_ext}, $v$ can be extended to a function in $C(\bar{X})$ still denoted by $v$, hence $v\in\mathcal{F}_{\bar{X}}$. Since $v$ is constructed by convex interpolation on $X\backslash B_{m+1}$, we have $v(p)=1\ne0=v(q)$.
Therefore, $(\mathcal{E}_{\bar{X}},\mathcal{F}_{\bar{X}})$ is a regular Dirichlet form on $L^2({\bar{X}};m)$.
\end{proof}
\begin{mythm}
$\mathcal{E}_{\bar{X}}$ on $L^2({\bar{X}};m)$ is a regular representation of $\mathcal{E}^\mathrm{ref}$ on $L^2(X;m)$.
\end{mythm}
Regular representation theory was developed by Fukushima \cite{Fuk71} and incorporated into his book \cite[Appendix, A4]{FOT11}.
\begin{proof}
We only need to construct an algebraic isomorphism $\Phi:(\mathcal{F}^\mathrm{ref}_a)_b\to(\mathcal{F}_{\bar{X}})_b$ such that for all $u\in(\mathcal{F}^\mathrm{ref}_a)_b$, we have
\begin{equation}\label{SG_det_eqn_iso}
\lVert u\rVert_{L^\infty(X;m)}=\lVert\Phi(u)\rVert_{L^\infty(\bar{X};m)},\quad(u,u)_X=(\Phi(u),\Phi(u))_{\bar{X}},\quad\mathcal{E}^\mathrm{ref}(u,u)=\mathcal{E}_{\bar{X}}(\Phi(u),\Phi(u)).
\end{equation}
Indeed, for all $u\in(\mathcal{F}^\mathrm{ref}_a)_b$, we have $\sum_{x,y\in X}c(x,y)(u(x)-u(y))^2<+\infty$; hence, by Lemma \ref{SG_det_lem_ext}, we may define $\Phi(u)$ as the continuous extension of $u$ to ${\bar{X}}$. Since $\mathcal{E}^\mathrm{ref}$ and $\mathcal{E}_{\bar{X}}$ have the same expression for the energy and $m(\partial X)=0$, Equation (\ref{SG_det_eqn_iso}) is obvious.
\end{proof}
Moreover we have
\begin{mythm}\label{SG_det_thm_part}
$(\mathcal{E}_X,\mathcal{F}_X)$ on $L^2(X;m)$ is the part form on $X$ of $(\mathcal{E}_{\bar{X}},\mathcal{F}_{\bar{X}})$ on $L^2({\bar{X}};m)$.
\end{mythm}
\begin{proof}
By \cite[Theorem 3.3.9]{CF12}, since $X\subseteq{\bar{X}}$ is an open subset and $\mathcal{F}_{\bar{X}}$ is a special standard core of $(\mathcal{E}_{\bar{X}},\mathcal{F}_{\bar{X}})$ on $L^2(\bar{X};m)$, we have
$$(\mathcal{F}_{\bar{X}})_X=\myset{u\in\mathcal{F}_{\bar{X}}:\mathrm{supp}(u)\subseteq X}=\myset{u\in\mathcal{F}_{\bar{X}}:u\in C_c(X)}=C_c(X).$$
Since $\mathcal{F}_X$ is the $(\mathcal{E}_X)_1$-closure of $C_c(X)$, the part form of $(\mathcal{E}_{\bar{X}},\mathcal{F}_{\bar{X}})$ on $X$ is exactly $(\mathcal{E}_X,\mathcal{F}_X)$ on $L^2(X;m)$.
\end{proof}
From the probabilistic viewpoint, $(\mathcal{E}_X,\mathcal{F}_X)$ on $L^2(X;m)$ corresponds to the absorbed process $\myset{X_t}$ and $(\mathcal{E}_{\bar{X}},\mathcal{F}_{\bar{X}})$ on $L^2({\bar{X}};m)$ corresponds to the reflected process $\myset{\bar{X}_t}$. By \cite[Theorem 3.3.8]{CF12}, $\myset{X_t}$ is the part process of $\myset{\bar{X}_t}$ on $X$, which can be described as follows.
Let
$$\tau_X=\inf{\myset{t>0:\bar{X}_t\notin X}}=\inf{\myset{t>0:\bar{X}_t\in\partial X}}=\sigma_{\partial X},$$
then
$$X_t=
\begin{cases}
\bar{X}_t,&0\le t<\tau_X,\\
\partial,&t\ge\tau_X,
\end{cases}
$$
and
$$\zeta=\tau_X=\sigma_{\partial X}.$$
\section{Trace Form on \texorpdfstring{$\partial X$}{partial X}}\label{SG_det_sec_trace}
In this section, we take the trace of the regular Dirichlet form $(\mathcal{E}_{\bar{X}},\mathcal{F}_{\bar{X}})$ on $L^2(\bar{X};m)$ to $K$ to obtain a regular Dirichlet form $(\mathcal{E}_K,\mathcal{F}_K)$ on $L^2(K;\nu)$ of the form (\ref{eqn_nonlocal}).
Firstly, we show that $\nu$ is a measure of finite energy integral with respect to $\mathcal{E}_{\bar{X}}$, that is,
\begin{equation*}
\int_{\bar{X}}\lvert u(x)\rvert\nu(\mathrm{d} x)\le C\sqrt{(\mathcal{E}_{\bar{X}})_1(u,u)}\text{ for all }u\in\mathcal{F}_{\bar{X}}\cap C_c({\bar{X}})=\mathcal{F}_{\bar{X}},
\end{equation*}
where $C$ is some positive constant. Since $\nu(\partial X)=1$, the Cauchy--Schwarz inequality gives $\int_{\bar{X}}\lvert u\rvert\mathrm{d}\nu\le(\int_{\bar{X}}\lvert u\rvert^2\mathrm{d}\nu)^{1/2}$, so we only need to show the following.
\begin{mythm}\label{SG_det_thm_trace}
\begin{equation}\label{SG_det_eqn_trace}
\left(\int_{\bar{X}}\lvert u(x)\rvert^2\nu(\mathrm{d} x)\right)^{1/2}\le C\sqrt{(\mathcal{E}_{\bar{X}})_1(u,u)}\text{ for all }u\in\mathcal{F}_{\bar{X}}.
\end{equation}
\end{mythm}
We need some preparation.
\begin{mythm}(\cite[Theorem 1.1]{ALP99})\label{thm_ALP}
Suppose that a reversible random walk $\myset{Z_n}$ is transient. Then for all $f$ with
$$D(f)=\frac{1}{2}\sum_{x,y\in X}c(x,y)(f(x)-f(y))^2<+\infty,$$
$\myset{f(Z_n)}$ converges almost surely and in $L^2$ under $\mathbb{P}_x$ for all $x\in X$.
\end{mythm}
For all $f$ with $D(f)<+\infty$, under $\mathbb{P}_o$, $\myset{f(Z_n)}$ converges almost surely and in $L^2$ to a random variable $W$, that is,
$$f(Z_n)\to W,\ \mathbb{P}_o\text{-a.s.},\qquad\mathbb{E}_o\left[\left(f(Z_n)-W\right)^2\right]\to0,$$
and $W$ is a terminal random variable. By Theorem \ref{SG_det_thm_conv}, we have $Z_n\to Z_\infty$, $\mathbb{P}_o$-a.s. By \cite[Corollary 7.65]{Wo09}, $W$ is of the form $W=\varphi(Z_\infty)$, $\mathbb{P}_o$-a.s., where $\varphi$ is a measurable function on $\partial X$. Hence we may define a map $f\mapsto\varphi$; in some sense this is the operation of taking a boundary value.
Let $\mathbf{D}=\myset{f:D(f)<+\infty}$. The Dirichlet norm of $f\in\mathbf{D}$ is given by $\lVert f\rVert^2=D(f)+\pi(o)f(o)^2$. Let $\mathbf{D}_0$ be the family of all functions that are limits in the Dirichlet norm of functions with finite support. We have the following Royden decomposition.
\begin{mythm}(\cite[Theorem 3.69]{Soa94})\label{SG_det_thm_Soa}
For all $f\in\mathbf{D}$, there exist a unique harmonic Dirichlet function $f_{HD}$ and a unique $f_0\in\mathbf{D}_0$ such that $f=f_{HD}+f_0$. Moreover, $D(f)=D(f_{HD})+D(f_0)$.
\end{mythm}
\begin{mylem}(\cite[Lemma 2.1]{ALP99})\label{SG_det_lem_ALP}
For all $f\in\mathbf{D}_0$, $x\in X$, we have
$$\pi(x)f(x)^2\le D(f)G(x,x).$$
Furthermore, there exists a superharmonic function $h\in\mathbf{D}_0$ such that $h\ge|f|$ pointwise and $D(h)\le D(f)$.
\end{mylem}
\begin{proof}[Proof of Theorem \ref{SG_det_thm_trace}]
Since $\mathcal{F}_{\bar{X}}\subseteq C(\bar{X})$, for all $u\in\mathcal{F}_{\bar{X}}$ the boundary value can be taken simply as $u|_{\partial X}$. We retain the notation $f,\varphi$. Then
$$f(Z_n)\to\varphi(Z_\infty),\mathbb{P}_o\text{-a.s.},$$
$$\mathbb{E}_o\left[\left(f(Z_n)-\varphi(Z_\infty)\right)^2\right]\to0.$$
Under $\mathbb{P}_o$, the hitting distribution of $\myset{Z_n}$, that is, the distribution of $Z_\infty$, is $\nu$, the normalized Hausdorff measure on $K$; hence
$$\int_{\partial X}|\varphi|^2\mathrm{d}\nu=\mathbb{E}_o\left[\varphi(Z_\infty)^2\right]=\lim_{n\to+\infty}\mathbb{E}_o\left[f(Z_n)^2\right].$$
We only need to estimate $\mathbb{E}_o\left[f(Z_n)^2\right]$ in terms of
$$D(f)+(f,f)=\frac{1}{2}\sum_{x,y\in X}c(x,y)(f(x)-f(y))^2+\sum_{x\in X}f(x)^2m(x).$$
By Theorem \ref{SG_det_thm_Soa}, we only need to consider harmonic Dirichlet functions and functions in $\mathbf{D}_0$.
For all $f\in\mathbf{D}$, we have
\begin{align*}
\sum_{k=0}^\infty\mathbb{E}_o\left[\left(f(Z_{k+1})-f(Z_k)\right)^2\right]=&\sum_{k=0}^\infty\mathbb{E}_o\left[\mathbb{E}_o\left[\left(f(Z_{k+1})-f(Z_k)\right)^2|Z_k\right]\right]\\
=&\sum_{k=0}^\infty\mathbb{E}_o\left[\mathbb{E}_{Z_k}\left[\left(f(Z_{1})-f(Z_0)\right)^2\right]\right]\\
=&\sum_{k=0}^\infty\sum_{x\in X}P^{(k)}(o,x)\mathbb{E}_{x}\left[\left(f(Z_{1})-f(Z_0)\right)^2\right]\\
=&\sum_{x,y\in X}\left(\sum_{k=0}^\infty P^{(k)}(o,x)\right)P(x,y)\left(f(x)-f(y)\right)^2\\
=&\sum_{x,y\in X}G(o,x)\frac{c(x,y)}{\pi(x)}(f(x)-f(y))^2\\
=&\sum_{x,y\in X}\frac{\pi(x)G(x,o)}{\pi(o)}\frac{c(x,y)}{\pi(x)}(f(x)-f(y))^2\\
=&\sum_{x,y\in X}\frac{F(x,o)G(o,o)}{\pi(o)}c(x,y)(f(x)-f(y))^2\\
\le&\frac{G(o,o)}{\pi(o)}\sum_{x,y\in X}c(x,y)(f(x)-f(y))^2\\
=&\frac{2G(o,o)}{\pi(o)}D(f).
\end{align*}
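The only non-algebraic steps above are $G(o,x)=\pi(x)G(x,o)/\pi(o)$ (reversibility of the Green function) and $G(x,o)=F(x,o)G(o,o)$ with $F\le1$. The first identity can be illustrated on a toy reversible transient chain; the graph, conductances and killing weights below are arbitrary illustrative choices, not the network of this paper:

```python
# A reversible substochastic kernel on three states: symmetric conductances
# plus a killing weight at each site, so the chain is transient.
states = [0, 1, 2]
c = {(0, 1): 2.0, (1, 0): 2.0, (1, 2): 1.0, (2, 1): 1.0, (0, 2): 0.5, (2, 0): 0.5}
kill = {0: 1.0, 1: 0.7, 2: 2.0}  # weight lost to the cemetery at each site
pi = {x: sum(c.get((x, y), 0.0) for y in states) + kill[x] for x in states}
P = {(x, y): c.get((x, y), 0.0) / pi[x] for x in states for y in states}

def green(x, y, n_terms=2000):
    # G(x, y) = sum_k P^k(x, y), via the Neumann series of the kernel
    row = {z: 1.0 if z == x else 0.0 for z in states}  # P^0(x, .)
    total = row[y]
    for _ in range(n_terms):
        row = {z: sum(row[w] * P[(w, z)] for w in states) for z in states}
        total += row[y]
    return total

# pi(x) G(x, y) = pi(y) G(y, x): the Green function is reversible
for x in states:
    for y in states:
        assert abs(pi[x] * green(x, y) - pi[y] * green(y, x)) < 1e-9
```

The identity holds term by term, since $\pi(x)P^k(x,y)=\pi(y)P^k(y,x)$ for every $k$ by reversibility.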
Let $f$ be a harmonic Dirichlet function; then $\myset{f(Z_n)}$ is a martingale. For all $n\ge1$,
\begin{align*}
\mathbb{E}_o\left[f(Z_n)^2\right]&=\mathbb{E}_o\left[\left(\sum_{k=0}^{n-1}(f(Z_{k+1})-f(Z_k))+f(Z_0)\right)^2\right]\\
&=\sum_{k=0}^{n-1}\mathbb{E}_o\left[(f(Z_{k+1})-f(Z_k))^2\right]+f(o)^2\\
&\le\sum_{k=0}^{\infty}\mathbb{E}_o\left[(f(Z_{k+1})-f(Z_k))^2\right]+f(o)^2\\
&\le\frac{2G(o,o)}{\pi(o)}D(f)+f(o)^2,
\end{align*}
hence
\begin{equation}\label{SG_det_eqn_hd}
\mathbb{E}_o\left[f_{HD}(Z_n)^2\right]\le\frac{2G(o,o)}{\pi(o)}D(f_{HD})+f_{HD}(o)^2.
\end{equation}
Let $f\in\mathbf{D}_0$. Let $h$ be as in Lemma \ref{SG_det_lem_ALP}. Then $h\ge0$. Since $h$ is superharmonic, we have
$$\mathbb{E}_o\left[h(Z_{k+1})-h(Z_k)|Z_0,\ldots,Z_k\right]\le0,$$
hence
\begin{align*}
&\mathbb{E}_o\left[h(Z_{k+1})^2-h(Z_k)^2\right]\\
=&\mathbb{E}_o\left[(h(Z_{k+1})-h(Z_{k}))^2\right]+2\mathbb{E}_o\left[h(Z_k)(h(Z_{k+1})-h(Z_k))\right]\\
=&\mathbb{E}_o\left[(h(Z_{k+1})-h(Z_{k}))^2\right]+2\mathbb{E}_o\left[\mathbb{E}_o\left[h(Z_k)(h(Z_{k+1})-h(Z_k))|Z_0,\ldots,Z_k\right]\right]\\
=&\mathbb{E}_o\left[(h(Z_{k+1})-h(Z_{k}))^2\right]+2\mathbb{E}_o\left[h(Z_k)\mathbb{E}_o\left[h(Z_{k+1})-h(Z_k)|Z_0,\ldots,Z_k\right]\right]\\
\le&\mathbb{E}_o\left[(h(Z_{k+1})-h(Z_{k}))^2\right].
\end{align*}
We have
\begin{align*}
\mathbb{E}_o\left[h(Z_n)^2\right]&=\sum_{k=0}^{n-1}\mathbb{E}_o\left[h(Z_{k+1})^2-h(Z_k)^2\right]+h(o)^2\\
&\le\sum_{k=0}^{n-1}\mathbb{E}_o\left[(h(Z_{k+1})-h(Z_{k}))^2\right]+\frac{G(o,o)}{\pi(o)}D(h)\\
&\le\sum_{k=0}^{\infty}\mathbb{E}_o\left[(h(Z_{k+1})-h(Z_{k}))^2\right]+\frac{G(o,o)}{\pi(o)}D(h)\\
&\le\frac{2G(o,o)}{\pi(o)}D(h)+\frac{G(o,o)}{\pi(o)}D(h)\\
&=\frac{3G(o,o)}{\pi(o)}D(h),
\end{align*}
hence
$$\mathbb{E}_o\left[f(Z_n)^2\right]\le\mathbb{E}_o\left[h(Z_n)^2\right]\le\frac{3G(o,o)}{\pi(o)}D(h)\le\frac{3G(o,o)}{\pi(o)}D(f).$$
We have
\begin{equation}\label{SG_det_eqn_d0}
\mathbb{E}_o\left[f_0(Z_n)^2\right]\le\frac{3G(o,o)}{\pi(o)}D(f_0).
\end{equation}
Combining Equation (\ref{SG_det_eqn_hd}) and Equation (\ref{SG_det_eqn_d0}), we have
\begin{align*}
\mathbb{E}_o\left[f(Z_n)^2\right]&=\mathbb{E}_o\left[(f_{HD}(Z_n)+f_0(Z_n))^2\right]\le2\mathbb{E}_o\left[f_{HD}(Z_n)^2+f_0(Z_n)^2\right]\\
&\le2\left(\frac{2G(o,o)}{\pi(o)}D(f_{HD})+f_{HD}(o)^2+\frac{3G(o,o)}{\pi(o)}D(f_0)\right)\\
&\le2\left(\frac{5G(o,o)}{\pi(o)}D(f)+(f(o)-f_0(o))^2\right)\\
&\le2\left(\frac{5G(o,o)}{\pi(o)}D(f)+2f(o)^2+2f_0(o)^2\right)\\
&\le2\left(\frac{5G(o,o)}{\pi(o)}D(f)+2\frac{1}{m(o)}f(o)^2m(o)+2\frac{G(o,o)}{\pi(o)}D(f_0)\right)\\
&\le2\left(\frac{7G(o,o)}{\pi(o)}D(f)+2\frac{1}{m(o)}\sum_{x\in X}f(x)^2m(x)\right)\\
&\le\max{\myset{\frac{14G(o,o)}{\pi(o)},\frac{4}{m(o)}}}\left(D(f)+\sum_{x\in X}f(x)^2m(x)\right).
\end{align*}
Letting $C^2=\max{\myset{\frac{14G(o,o)}{\pi(o)},\frac{4}{m(o)}}}$, which is a constant depending only on the conductance $c$ and the measure $m$, we have
$$\int_{\partial X}|\varphi|^2\mathrm{d}\nu=\lim_{n\to+\infty}\mathbb{E}_o\left[f(Z_n)^2\right]\le C^2(D(f)+\sum_{x\in X}f(x)^2m(x)).$$
Rewriting in terms of $u$, we obtain Equation (\ref{SG_det_eqn_trace}).
\end{proof}
Secondly, we obtain a regular Dirichlet form on $L^2(\partial X;\nu)$ by the abstract theory of trace forms. For a more detailed discussion of trace forms, see \cite[Chapter 5, 5.2]{CF12} and \cite[Chapter 6, 6.2]{FOT11}. We recall the results used here.
Taking the trace of a regular Dirichlet form corresponds to a time change of the corresponding Hunt process. Taking traces is realized by smooth measures; the family of all smooth measures is denoted by $S$. Taking time changes is realized by positive continuous additive functionals, abbreviated as PCAFs; the family of all PCAFs is denoted by $A_c^+$. The family of all equivalence classes of $A_c^+$ and the family $S$ are in one-to-one correspondence, see \cite[Theorem 5.1.4]{FOT11}.
We fix a regular Dirichlet form $(\mathcal{E},\mathcal{F})$ on $L^2(E;m)$ and its corresponding Hunt process $X=\myset{X_t}$.
\begin{itemize}
\item Firstly, we introduce the basic setup of time change. Given a PCAF $A\in A_c^+$, let $F$ be its support; then $F$ is quasi closed and nearly Borel measurable. Let $\tau$ be the right-continuous inverse of $A$ and let $\check{X}_t=X_{\tau_t}$; then $\check{X}$ is a right process with state space $F$, called the time-changed process of $X$ by $A$.
\item Secondly, we introduce the basic setup of trace forms. For an arbitrary non-polar, quasi closed, nearly Borel measurable, finely closed set $F$, define the hitting distribution $H_F$ of ${X}$ for $F$ by
$$H_Fg(x)=\mathbb{E}_x\left[g(X_{\sigma_F})\mathbf{1}_{\sigma_F<+\infty}\right],\quad x\in E,\ g\text{ a nonnegative Borel function.}$$
By \cite[Theorem 3.4.8]{CF12}, for all $u\in\mathcal{F}_e$, we have $H_F|u|(x)<+\infty$, q.e. and $H_Fu\in\mathcal{F}_e$. Define
$$\check{\mathcal{F}}_e=\mathcal{F}_e|_F,\check{\mathcal{E}}(u|_F,v|_F)={\mathcal{E}}(H_Fu,H_Fv),u,v\in\mathcal{F}_e.$$
Two elements in $\check{\mathcal{F}}_e$ can be identified if they coincide q.e. on $F$. We still need a measure on $F$. Let
$$S_F=\myset{\mu\in S:\text{the quasi support of }\mu=F},$$
where the quasi support of a Borel measure is the smallest (up to q.e. equivalence) quasi closed set outside which the measure vanishes. Let $\mu\in S_F$. By \cite[Theorem 3.3.5]{CF12}, two elements of $\check{\mathcal{F}}_e$ coincide q.e. on $F$ if and only if they coincide $\mu$-a.e. Define $\check{\mathcal{F}}=\check{\mathcal{F}}_e\cap L^2(F;\mu)$. Then $(\check{\mathcal{E}},\check{\mathcal{F}})$ is a symmetric form on $L^2(F;\mu)$.
\item Thirdly, the relation between trace forms and time-changed processes is as follows. Given $A\in A_c^+$, or equivalently $\mu\in S$, let $F$ be the support of $A$; then $F$ satisfies the conditions in the second setup, and by \cite[Theorem 5.2.1(\rmnum{1})]{CF12}, $\mu\in S_F$. We obtain $(\check{\mathcal{E}},\check{\mathcal{F}})$ on $L^2(F;\mu)$. By \cite[Theorem 5.2.2]{CF12}, the regular Dirichlet form corresponding to $\check{X}$ is exactly $(\check{\mathcal{E}},\check{\mathcal{F}})$ on $L^2(F;\mu)$.
\end{itemize}
We have $F\subseteq\mathrm{supp}(\mu)$ q.e. But $F$ can be strictly contained in $\mathrm{supp}(\mu)$ q.e., while usually we indeed need a trace form on $\mathrm{supp}(\mu)$. \cite{CF12} provides a solution, not for all smooth measures, but for the subset
$$\mathring{S}=\myset{\mu:\text{positive Radon measure charging no }\mathcal{E}\text{-polar set}}.$$
For non-$\mathcal{E}$-polar, quasi closed subset $F$ of $E$, let
$$\mathring{S}_F=\myset{\mu\in\mathring{S}:\text{the quasi support of }\mu\text{ is }F}.$$
Note that if $\mu\in\mathring{S}_F$, it may happen that $\mathrm{supp}(\mu)\supsetneqq F$ q.e. We want some $\mu\in\mathring{S}_F$ such that $\mathrm{supp}(\mu)=F$ q.e. We have the following criterion.
\begin{mylem}(\cite[Lemma 5.2.9(\rmnum{2})]{CF12})\label{SG_det_lem_ac}
Let $F$ be a non-$\mathcal{E}$-polar, nearly Borel, finely closed set. Let $\nu\in\mathring{S}$ satisfy $\nu(E\backslash F)=0$. Assume the 1-order hitting distribution $H_F^1(x,\cdot)$ of $X$ for $F$ is absolutely continuous with respect to $\nu$ for $m$-a.e. $x\in E$. Then $\nu\in\mathring{S}_F$.
\end{mylem}
\begin{mycor}(\cite[Corollary 5.2.10]{CF12})\label{SG_det_cor_cf}
Let $F$ be a closed set. If there exists $\nu\in\mathring{S}_F$ such that the topological support $\mathrm{supp}(\nu)=F$, then for all $\mu\in\mathring{S}_F$, we have $(\check{\mathcal{E}},\check{\mathcal{F}})$ is a regular Dirichlet form on $L^2(F;\mu)$.
\end{mycor}
Roughly speaking, given a positive Radon measure $\mu$ charging no $\mathcal{E}$-polar set, let $F=\mathrm{supp}(\mu)$. Firstly, we check Lemma \ref{SG_det_lem_ac} to obtain $\mu\in\mathring{S}_F$; then the quasi support of $\mu$ is $F$, and the support of the corresponding PCAF $A$ can be taken to be $F$. Secondly, by Corollary \ref{SG_det_cor_cf}, the time-changed process $\check{X}$ of $X$ by $A$ corresponds to the regular Dirichlet form $(\check{\mathcal{E}},\check{\mathcal{F}})$ on $L^2(F;\mu)$.
Returning to our case, $\nu$ is a probability measure of finite energy integral, hence $\nu$ is a positive Radon measure charging no $\mathcal{E}_{\bar{X}}$-polar set. We need to check the absolute continuity condition in Lemma \ref{SG_det_lem_ac}. We prove the following theorem.
\begin{mythm}\label{SG_det_thm_hit}
The hitting distributions of $\myset{\bar{X}_t}$ and $\myset{Z_n}$ for $\partial X$ coincide.
\end{mythm}
\begin{proof}
Recall that $\myset{X_t}$ is characterized by the random walk $\myset{Y_n}$ and the jumping times $\myset{J_n}$, that $\myset{X_t}$ is the part process of $\myset{\bar{X}_t}$ on $X$, and that $\zeta=\tau_X=\sigma_{\partial X}<+\infty$, $\mathbb{P}_x$-a.s. for all $x\in X$.
First, we show that the jumping times $J_n$ are stopping times of $\myset{\bar{X}_t}$ for all $n\ge0$. Let $\myset{\mathcal{F}_t}$ and $\myset{\bar{\mathcal{F}}_t}$ be the minimal completed admissible filtrations with respect to $\myset{X_t}$ and $\myset{\bar{X}_t}$, respectively. By Proposition \ref{SG_det_prop_jump}, the $J_n$ are stopping times of $\myset{X_t}$. Since for all Borel measurable sets $B\subseteq\bar{X}$ we have
$$\myset{X_t\in B}=\myset{\bar{X}_t\in B\cap X}\cap\myset{t<\zeta}\in\bar{\mathcal{F}}_t,$$
we obtain $\mathcal{F}_t\subseteq\bar{\mathcal{F}}_t$ for all $t\ge0$. Hence the $J_n$ are stopping times of $\myset{\bar{X}_t}$ for all $n\ge0$.
Then, since $J_n\uparrow\zeta=\sigma_{\partial X}$, by the quasi-left-continuity of $\myset{\bar{X}_t}$, for all $x\in X$ we have
$$\mathbb{P}_x\left[\lim_{n\to+\infty}\bar{X}_{J_n}=\bar{X}_{\sigma_{\partial X}},\sigma_{\partial X}<+\infty\right]=\mathbb{P}_x\left[\sigma_{\partial X}<+\infty\right],$$
that is,
$$\mathbb{P}_x[\lim_{n\to+\infty}\bar{X}_{J_n}=\bar{X}_{\sigma_{\partial X}}]=1.$$
Noting that $J_n<\zeta=\sigma_{\partial X}$, we have $\bar{X}_{J_n}=X_{J_n}=Y_n$, hence
$$\mathbb{P}_x\left[\lim_{n\to+\infty}Y_n=\bar{X}_{\sigma_{\partial X}}\right]=1.$$
Hence the hitting distributions of $\myset{\bar{X}_t}$ and $\myset{Z_n}$ for $\partial X$ coincide under $\mathbb{P}_x$ for all $x\in X$.
\end{proof}
By Theorem \ref{SG_det_thm_hit}, the hitting distribution of $\myset{\bar{X}_t}$ for $\partial X$ under $\mathbb{P}_x$ is exactly $\nu_x$, hence
\begin{equation}\label{SG_det_eqn_hit}
\begin{aligned}
H_{\partial X}g(x)&=\mathbb{E}_x\left[g(\bar{X}_{\sigma_{\partial X}})\mathbf{1}_{\myset{\sigma_{\partial X}<+\infty}}\right]=\mathbb{E}_x\left[g(\bar{X}_{\sigma_{\partial X}})\right]\\
&=\int_{\partial X}g\mathrm{d}\nu_x=\int_{\partial X}K(x,\cdot)g\mathrm{d}\nu=Hg(x),
\end{aligned}
\end{equation}
for all $x\in X$ and all nonnegative Borel functions $g$.
By Theorem \ref{SG_det_thm_hit} and Theorem \ref{SG_det_thm_conv}, $\nu$ satisfies the condition of Lemma \ref{SG_det_lem_ac} with $F=\mathrm{supp}(\nu)=\partial X$. By the above remark, we obtain a regular Dirichlet form $\check{\mathcal{E}}$ on $L^2(\partial X;\nu)$.
We have an explicit representation of $\check{\mathcal{E}}$ as follows.
\begin{mythm}\label{SG_det_thm_check_E}
We have
$$
\begin{cases}
\check{\mathcal{E}}(u,u)\asymp\int_K\int_K\frac{(u(x)-u(y))^2}{|x-y|^{\alpha+\beta}}\nu(\mathrm{d} x)\nu(\mathrm{d} y)<+\infty,\\
\check{\mathcal{F}}=\myset{u\in C(K):\int_K\int_K\frac{(u(x)-u(y))^2}{|x-y|^{\alpha+\beta}}\nu(\mathrm{d} x)\nu(\mathrm{d} y)<+\infty},
\end{cases}
$$
where $\beta\in(\alpha,\beta^*)$.
\end{mythm}
To prove this theorem, we need some preparation.
\begin{mylem}\label{SG_det_lem_bdy_har}
Assume $\lambda<1/3$. For all $u\in C(\partial X)=C(K)$ with
$$\int_K\int_K\frac{(u(x)-u(y))^2}{|x-y|^{\alpha+\beta}}\nu(\mathrm{d} x)\nu(\mathrm{d} y)<+\infty,$$
let $v\in C(\bar{X})$ be the continuous extension of $Hu$ given by Lemma \ref{SG_det_lem_ext}. Then $v|_{\partial X}=u$.
\end{mylem}
We need a calculation result from \cite[Theorem 5.3]{KLW17} as follows.
\begin{equation}\label{SG_det_eqn_Martin}
K(x,\xi)\asymp\lambda^{|x|-|x\wedge\xi|}(\frac{1}{2})^{-\frac{\log3}{\log2}|x\wedge\xi|}=\lambda^{|x|}\left(\frac{3}{\lambda}\right)^{|x\wedge\xi|},
\end{equation}
where $x\in X$ and $\xi\in\partial X$.
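The second equality in (\ref{SG_det_eqn_Martin}) is the exponent identity $(1/2)^{-\frac{\log 3}{\log 2}t}=3^{t}$. A quick numerical sanity check, with an illustrative value of $\lambda$ in $(1/5,1/3)$, is:

```python
import math

def kernel_forms(lam, n, t):
    # lambda^{|x| - |x ^ xi|} * (1/2)^{-(log 3 / log 2) |x ^ xi|}
    # versus lambda^{|x|} * (3 / lambda)^{|x ^ xi|}
    lhs = lam**(n - t) * 0.5**(-(math.log(3) / math.log(2)) * t)
    rhs = lam**n * (3.0 / lam)**t
    return lhs, rhs

# lambda = 0.28 is an illustrative value, not fixed by the paper
for n in range(21):
    for t in (0.0, n / 3.0, float(n)):
        lhs, rhs = kernel_forms(0.28, n, t)
        assert abs(lhs - rhs) <= 1e-9 * max(lhs, rhs, 1.0)
```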
\begin{proof}
By the estimate of Na\"im kernel, we have
$$\sum_{x,y\in X}c(x,y)(Hu(x)-Hu(y))^2<+\infty,$$
hence Lemma \ref{SG_det_lem_ext} can be applied here and $v$ is well-defined. We only need to show that for all $\myset{x_n}\subseteq X$ and $\xi\in\partial X$ with $x_n\to\xi$, we have $Hu(x_n)\to u(\xi)$ as $n\to+\infty$. Indeed, since
$$Hu(x)=\int_{\partial X}K(x,\eta)u(\eta)\nu(\mathrm{d}\eta)=\int_{\partial X}u(\eta)\nu_x(\mathrm{d}\eta)=\mathbb{E}_x\left[u(Z_\infty)\right],$$
we have
$$H1(x)=\int_{\partial X}K(x,\eta)\nu(\mathrm{d}\eta)=1$$
for all $x\in X$. Then
\begin{align*}
\lvert Hu(x_n)-u(\xi)\rvert&=\left\lvert\int_{\partial X}K(x_n,\eta)u(\eta)\nu(\mathrm{d}\eta)-u(\xi)\right\rvert=\left\lvert\int_{\partial X}K(x_n,\eta)(u(\eta)-u(\xi))\nu(\mathrm{d}\eta)\right\rvert\\
&\le\int_{\partial X}K(x_n,\eta)|u(\eta)-u(\xi)|\nu(\mathrm{d}\eta).
\end{align*}
Since $u\in C(\partial X)$ and $\partial X$ is compact, for all $\varepsilon>0$ there exists $\delta>0$ such that for all $\eta,\xi\in\partial X$ with $\theta_a(\eta,\xi)<\delta$, we have $|u(\eta)-u(\xi)|<\varepsilon$; moreover, there exists $M<+\infty$ such that $|u(x)|\le M$ for all $x\in\partial X$. Then
\begin{align*}
&\lvert Hu(x_n)-u(\xi)\rvert\\
&\le\int_{\theta_a(\eta,\xi)<\delta}K(x_n,\eta)|u(\eta)-u(\xi)|\nu(\mathrm{d}\eta)+\int_{\theta_a(\eta,\xi)\ge\delta}K(x_n,\eta)|u(\eta)-u(\xi)|\nu(\mathrm{d}\eta)\\
&<\varepsilon\int_{\theta_a(\eta,\xi)<\delta}K(x_n,\eta)\nu(\mathrm{d}\eta)+2M\int_{\theta_a(\eta,\xi)\ge\delta}K(x_n,\eta)\nu(\mathrm{d}\eta)\\
&\le\varepsilon+2M\int_{\theta_a(\eta,\xi)\ge\delta}K(x_n,\eta)\nu(\mathrm{d}\eta).
\end{align*}
There exists $N\ge1$ such that for all $n>N$ we have $\theta_a(x_n,\xi)<\delta/2$; then for all $\eta$ with $\theta_a(\eta,\xi)\ge\delta$,
$$\theta_a(x_n,\eta)\ge\theta_a(\eta,\xi)-\theta_a(x_n,\xi)\ge\delta-\frac{\delta}{2}=\frac{\delta}{2}.$$
By Equation (\ref{SG_det_eqn_Martin}), we have
\begin{align*}
K(x_n,\eta)&\asymp \lambda^{|x_n|}\left(\frac{3}{\lambda}\right)^{|x_n\wedge\eta|}=\lambda^{|x_n|}e^{|x_n\wedge\eta|\log(\frac{3}{\lambda})}=\lambda^{|x_n|}e^{-a|x_n\wedge\eta|\frac{1}{a}\log(\frac{\lambda}{3})}\\
&=\lambda^{|x_n|}\rho_a(x_n,\eta)^{\frac{1}{a}\log(\frac{\lambda}{3})}=\frac{\lambda^{|x_n|}}{\rho_a(x_n,\eta)^{\frac{1}{a}\log(\frac{3}{\lambda})}}.
\end{align*}
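The chain of equalities above uses only $\rho_a(x_n,\eta)=e^{-a|x_n\wedge\eta|}$ and $-a\cdot\frac{1}{a}\log(\lambda/3)=\log(3/\lambda)$. A numerical illustration, with illustrative values of $\lambda$ and $a$ (not fixed by the paper), is:

```python
import math

def kernel_via_rho(lam, a, n, t):
    # direct form lambda^{|x_n|} (3/lambda)^{|x_n ^ eta|} versus the rewrite
    # lambda^{|x_n|} / rho_a^{(1/a) log(3/lambda)} with rho_a = e^{-a |x_n ^ eta|}
    direct = lam**n * (3.0 / lam)**t
    rho = math.exp(-a * t)
    rewritten = lam**n / rho**((1.0 / a) * math.log(3.0 / lam))
    return direct, rewritten

# illustrative values: lambda in (1/5, 1/3), a > 0
for n in range(21):
    for t in (0.0, n / 2.0, float(n)):
        d, r = kernel_via_rho(0.3, 1.7, n, t)
        assert abs(d - r) <= 1e-9 * max(d, r, 1.0)
```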
Since $\rho_a$ and $\theta_a$ are equivalent, there exists a positive constant $C$ independent of $x_n$ and $\eta$ such that
$$K(x_n,\eta)\le C\frac{\lambda^{|x_n|}}{\delta^{\frac{1}{a}\log(\frac{3}{\lambda})}}.$$
Hence
$$\lvert Hu(x_n)-u(\xi)\rvert\le\varepsilon+2M\int_{\theta_a(\eta,\xi)\ge\delta}C\frac{\lambda^{|x_n|}}{\delta^{\frac{1}{a}\log(\frac{3}{\lambda})}}\nu(\mathrm{d}\eta)\le\varepsilon+2MC\frac{\lambda^{|x_n|}}{\delta^{\frac{1}{a}\log(\frac{3}{\lambda})}},$$
Letting $n\to+\infty$, we have $|x_n|\to+\infty$ and hence $\lambda^{|x_n|}\to0$, so
$$\varlimsup_{n\to+\infty}\lvert Hu(x_n)-u(\xi)\rvert\le\varepsilon.$$
Since $\varepsilon>0$ is arbitrary, we have $\lim_{n\to+\infty}Hu(x_n)=u(\xi)$.
\end{proof}
\begin{mythm}\label{SG_det_thm_f_e}
$(\mathcal{F}_{\bar{X}})_e=\mathcal{F}_{\bar{X}}$; here we use the convention that all functions in extended Dirichlet spaces are quasi continuous.
\end{mythm}
\begin{proof}
It is obvious that $(\mathcal{F}_{\bar{X}})_e\supseteq\mathcal{F}_{\bar{X}}$. For all $u\in(\mathcal{F}_{\bar{X}})_e$, by definition there exists an $\mathcal{E}_{\bar{X}}$-Cauchy sequence $\myset{u_n}\subseteq\mathcal{F}_{\bar{X}}$ that converges to $u$ $m$-a.e. Hence $u_n(x)\to u(x)$ for all $x\in X$. By Fatou's lemma, we have
\begin{align*}
\frac{1}{2}\sum_{x,y\in X}c(x,y)(u(x)-u(y))^2&=\frac{1}{2}\sum_{x,y\in X}\lim_{n\to+\infty}c(x,y)(u_n(x)-u_n(y))^2\\
&\le\varliminf_{n\to+\infty}\frac{1}{2}\sum_{x,y\in X}c(x,y)(u_n(x)-u_n(y))^2\\
&=\varliminf_{n\to+\infty}\mathcal{E}_{\bar{X}}(u_n,u_n)\\
&<+\infty.
\end{align*}
By Lemma \ref{SG_det_lem_ext}, $u|_X$ can be extended to a continuous function $v$ on $\bar{X}$. Since $u,v$ are quasi continuous on $\bar{X}$ and $u=v$ $m$-a.e., we have $u=v$ q.e., so we may take $u$ to be $v$. Hence $u$ can be taken to be continuous, $u\in\mathcal{F}_{\bar{X}}$, and $(\mathcal{F}_{\bar{X}})_e\subseteq\mathcal{F}_{\bar{X}}$.
\end{proof}
\begin{myrmk}
It was proved in \cite[Proposition 2.9]{HK06} that a result of the above type holds in much more general frameworks.
\end{myrmk}
\begin{proof}[Proof of Theorem \ref{SG_det_thm_check_E}]
By Equation (\ref{SG_det_eqn_hit}) and Equation (\ref{SG_det_eqn_Theta}), we have
\begin{align*}
\check{\mathcal{E}}(u|_{\partial X},u|_{\partial X})&=\mathcal{E}_{\bar{X}}(H_{\partial X}u,H_{\partial X}u)=\mathcal{E}_{\bar{X}}(Hu,Hu)\\
&=\frac{1}{2}\sum_{x,y\in X}c(x,y)(Hu(x)-Hu(y))^2\\
&\asymp\int_K\int_K\frac{(u(x)-u(y))^2}{|x-y|^{\alpha+\beta}}\nu(\mathrm{d} x)\nu(\mathrm{d} y),
\end{align*}
and
\begin{align*}
\check{\mathcal{F}}&=(\mathcal{F}_{\bar{X}})_e|_{\partial X}\cap L^2(\partial X;\nu)=\mathcal{F}_{\bar{X}}|_{\partial X}\cap L^2(\partial X;\nu)\\
&=\myset{u|_{\partial X}\in L^2(\partial X;\nu):u\in C(\bar{X}),\sum_{x,y\in X}c(x,y)(u(x)-u(y))^2<+\infty}\\
&=\myset{u|_{\partial X}:u\in C(\bar{X}),\sum_{x,y\in X}c(x,y)(u(x)-u(y))^2<+\infty}.
\end{align*}
For all $u|_{\partial X}\in\check{\mathcal{F}}$, we have $H_{\partial X}u=Hu\in(\mathcal{F}_{\bar{X}})_e=\mathcal{F}_{\bar{X}}$, $u|_{\partial X}\in C(\partial X)=C(K)$ and
\begin{align*}
\int_K\int_K\frac{(u(x)-u(y))^2}{|x-y|^{\alpha+\beta}}\nu(\mathrm{d} x)\nu(\mathrm{d} y)&\asymp\frac{1}{2}\sum_{x,y\in X}c(x,y)(Hu(x)-Hu(y))^2\\
&=\mathcal{E}_{\bar{X}}(Hu,Hu)<+\infty,
\end{align*}
that is, $\check{\mathcal{F}}\subseteq\text{RHS}$.
On the other hand, for all $u\in\text{RHS}$, the function $Hu$ satisfies
$$\frac{1}{2}\sum_{x,y\in X}c(x,y)(Hu(x)-Hu(y))^2\asymp\int_K\int_K\frac{(u(x)-u(y))^2}{|x-y|^{\alpha+\beta}}\nu(\mathrm{d} x)\nu(\mathrm{d} y)<+\infty.$$
By Lemma \ref{SG_det_lem_ext}, $Hu$ can be extended to a continuous function $v\in C(\bar{X})$. By Lemma \ref{SG_det_lem_bdy_har}, we have $v|_{\partial X}=u$. Moreover,
$$\frac{1}{2}\sum_{x,y\in X}c(x,y)(v(x)-v(y))^2=\frac{1}{2}\sum_{x,y\in X}c(x,y)(Hu(x)-Hu(y))^2<+\infty,$$
hence $v\in\mathcal{F}_{\bar{X}}$ and $u\in\check{\mathcal{F}}$, so $\text{RHS}\subseteq\check{\mathcal{F}}$.
\end{proof}
Then we have the following result.
\begin{mythm}\label{SG_det_thm_E_K}
Let
$$
\begin{cases}
&\mathcal{E}_K(u,u)=\int_K\int_K\frac{(u(x)-u(y))^2}{|x-y|^{\alpha+\beta}}\nu(\mathrm{d} x)\nu(\mathrm{d} y),\\
&\mathcal{F}_K=\myset{u\in L^2(K;\nu):\int_K\int_K\frac{(u(x)-u(y))^2}{|x-y|^{\alpha+\beta}}\nu(\mathrm{d} x)\nu(\mathrm{d} y)<+\infty}.
\end{cases}
$$
If $\beta\in(\alpha,\beta^*)$, then $(\mathcal{E}_K,\mathcal{F}_K)$ is a regular Dirichlet form on $L^2(K;\nu)$.
\end{mythm}
\begin{proof}
Let
$$
\begin{cases}
&\mathcal{E}_K(u,u)=\int_K\int_K\frac{(u(x)-u(y))^2}{|x-y|^{\alpha+\beta}}\nu(\mathrm{d} x)\nu(\mathrm{d} y),\\
&\mathcal{F}_K=\myset{u\in C(K):\int_K\int_K\frac{(u(x)-u(y))^2}{|x-y|^{\alpha+\beta}}\nu(\mathrm{d} x)\nu(\mathrm{d} y)<+\infty}.
\end{cases}
$$
By Theorem \ref{SG_det_thm_check_E}, if $\beta\in(\alpha,\beta^*)$, then $(\mathcal{E}_K,\mathcal{F}_K)$ is a regular Dirichlet form on $L^2(K;\nu)$. We only need to show that
$$\mathcal{F}_K=\myset{u\in L^2(K;\nu):\int_K\int_K\frac{(u(x)-u(y))^2}{|x-y|^{\alpha+\beta}}\nu(\mathrm{d} x)\nu(\mathrm{d} y)<+\infty}.$$
Indeed, it is obvious that $\mathcal{F}_K\subseteq\text{RHS}$. On the other hand, since $\beta\in(\alpha,\beta^*)$, by \cite[Theorem 4.11 (\rmnum{3})]{GHL03}, the $\text{RHS}$ can be embedded into a H\"older space with exponent $(\beta-\alpha)/2$, hence every function in the $\text{RHS}$ has a continuous version, so $\text{RHS}\subseteq\mathcal{F}_K$.
\end{proof}
\begin{myrmk}
A more general result was proved in \cite{Kum03}.
\end{myrmk}
\section{Triviality of \texorpdfstring{$\mathcal{F}_K$}{FK} when \texorpdfstring{$\beta\in(\beta^*,+\infty)$}{beta in (beta*,+infty)}}\label{SG_det_sec_trivial}
In this section, we show that $\mathcal{F}_K$ consists only of constant functions if $\lambda\in(0,1/5)$ or $\beta\in(\beta^*,+\infty)$. Hence $\beta_*=\beta^*=\log5/\log2$.
\begin{mythm}\label{SG_det_thm_constant}
If $\lambda<{1}/{5}$, then for every continuous function $u$ on $\bar{X}$ with
\begin{equation}\label{SG_det_eqn_finite}
C=\frac{1}{2}\sum_{x,y\in X}c(x,y)(u(x)-u(y))^2<+\infty,
\end{equation}
the restriction $u|_{\partial X}$ is constant.
\end{mythm}
\begin{proof}
By Lemma \ref{SG_det_lem_ext}, Equation (\ref{SG_det_eqn_finite}) implies that $u|_X$ extends continuously to $\bar{X}$, and this extension coincides with $u$. Assume for contradiction that $u|_{\partial X}$ is not constant. First, we consider
$$\sum_{x,y\in S_n}c(x,y)(u(\Phi_n(x))-u(\Phi_n(y)))^2.$$
By the proof of Theorem \ref{SG_det_thm_main}, we have
$$\sum_{x,y\in S_{n+1}}c(x,y)(u(\Phi_{n+1}(x))-u(\Phi_{n+1}(y)))^2\ge\frac{1}{5\lambda}\sum_{x,y\in S_n}c(x,y)(u(\Phi_n(x))-u(\Phi_n(y)))^2.$$
Since $u|_{\partial X}$ is continuous on $\partial X$ and $u|_{\partial X}$ is not constant, there exists $N\ge1$ such that
$$\sum_{x,y\in S_N}c(x,y)(u(\Phi_N(x))-u(\Phi_N(y)))^2>0.$$
Since $\lambda<1/5$, for all $n\ge N$, we have
\begin{align*}
&\sum_{x,y\in S_n}c(x,y)(u(\Phi_n(x))-u(\Phi_n(y)))^2\\
&\ge\frac{1}{(5\lambda)^{n-N}}\sum_{x,y\in S_N}c(x,y)(u(\Phi_N(x))-u(\Phi_N(y)))^2\to+\infty,
\end{align*}
as $n\to+\infty$. Next, we consider the relation between
$$\sum_{x,y\in S_n}c(x,y)(u(x)-u(y))^2\text{ and }\sum_{x,y\in S_n}c(x,y)(u(\Phi_n(x))-u(\Phi_n(y)))^2.$$
Indeed
\begin{align*}
&\sum_{x,y\in S_n}c(x,y)(u(x)-u(y))^2\\
\le&\sum_{x,y\in S_n}c(x,y)\left(\lvert u(x)-u(\Phi_n(x))\rvert+\lvert u(\Phi_n(x))-u(\Phi_n(y))\rvert+\lvert u(\Phi_n(y))-u(y)\rvert\right)^2\\
\le&3\sum_{x,y\in S_n}c(x,y)\left((u(x)-u(\Phi_n(x)))^2+(u(\Phi_n(x))-u(\Phi_n(y)))^2+(u(\Phi_n(y))-u(y))^2\right)\\
=&3\sum_{x,y\in S_n}c(x,y)(u(\Phi_n(x))-u(\Phi_n(y)))^2\\
&+3\sum_{x,y\in S_n}c(x,y)\left((u(x)-u(\Phi_n(x)))^2+(u(\Phi_n(y))-u(y))^2\right).
\end{align*}
For all $x\in S_n$, there are at most 3 elements $y\in S_n$ such that $c(x,y)>0$ and for all $x,y\in S_n$, $c(x,y)\le\max{\myset{C_1,C_2}}/(3\lambda)^n$. By symmetry, we have
\begin{align*}
&\sum_{x,y\in S_n}c(x,y)\left((u(x)-u(\Phi_n(x)))^2+(u(\Phi_n(y))-u(y))^2\right)\\
=&2\sum_{x,y\in S_n}c(x,y)(u(x)-u(\Phi_n(x)))^2\le6\sum_{x\in S_n}\frac{\max{\myset{C_1,C_2}}}{(3\lambda)^n}(u(x)-u(\Phi_n(x)))^2.
\end{align*}
For all $x\in S_n$, there exists a geodesic ray $[x_0,x_1,\ldots]$ with $|x_k|=k$, $x_k\sim x_{k+1}$ for all $k\ge0$ and $x_n=x$, $x_k\to\Phi_n(x)$ as $k\to+\infty$. For distinct $x,y\in S_n$, the corresponding geodesic rays $[x_0,x_1,\ldots]$, $[y_0,y_1,\ldots]$ satisfy $x_k\ne y_k$ for all $k\ge n$. Then
\begin{align*}
\frac{1}{(3\lambda)^n}(u(x)-u(\Phi_n(x)))^2&\le\frac{1}{(3\lambda)^n}\left(\sum_{k=n}^\infty|u(x_k)-u(x_{k+1})|\right)^2\\
&=\left(\sum_{k=n}^\infty\frac{1}{(3\lambda)^{n/2}}|u(x_k)-u(x_{k+1})|\right)^2\\
&=\left(\sum_{k=n}^\infty{(3\lambda)^{(k-n)/2}}\frac{1}{(3\lambda)^{k/2}}|u(x_k)-u(x_{k+1})|\right)^2\\
&\le\sum_{k=n}^\infty(3\lambda)^{k-n}\sum_{k=n}^\infty\frac{1}{(3\lambda)^k}(u(x_k)-u(x_{k+1}))^2\\
&=\frac{1}{1-3\lambda}\sum_{k=n}^\infty c(x_k,x_{k+1})(u(x_k)-u(x_{k+1}))^2,
\end{align*}
hence
\begin{align*}
\sum_{x\in S_n}\frac{1}{(3\lambda)^n}(u(x)-u(\Phi_n(x)))^2&\le\frac{1}{1-3\lambda}\sum_{x\in S_n}\sum_{k=n}^\infty c(x_k,x_{k+1})(u(x_k)-u(x_{k+1}))^2\\
&\le\frac{1}{1-3\lambda}\left(\frac{1}{2}\sum_{x,y\in X}c(x,y)(u(x)-u(y))^2\right)\\
&=\frac{1}{1-3\lambda}C.
\end{align*}
We have
$$
\sum_{x,y\in S_n}c(x,y)\left((u(x)-u(\Phi_n(x)))^2+(u(\Phi_n(y))-u(y))^2\right)\le\frac{6\max{\myset{C_1,C_2}}}{1-3\lambda}C,
$$
and
$$
\sum_{x,y\in S_n}c(x,y)(u(x)-u(y))^2\le 3\sum_{x,y\in S_n}c(x,y)(u(\Phi_n(x))-u(\Phi_n(y)))^2+\frac{18\max{\myset{C_1,C_2}}}{1-3\lambda}C.
$$
Similarly, we have
$$
\sum_{x,y\in S_n}c(x,y)(u(\Phi_n(x))-u(\Phi_n(y)))^2\le 3\sum_{x,y\in S_n}c(x,y)(u(x)-u(y))^2+\frac{18\max{\myset{C_1,C_2}}}{1-3\lambda}C.
$$
Since
$$\lim_{n\to+\infty}\sum_{x,y\in S_n}c(x,y)(u(\Phi_n(x))-u(\Phi_n(y)))^2=+\infty,$$
we have
$$\lim_{n\to+\infty}\sum_{x,y\in S_n}c(x,y)(u(x)-u(y))^2=+\infty.$$
Therefore
$$C=\frac{1}{2}\sum_{x,y\in X}c(x,y)(u(x)-u(y))^2=+\infty,$$
a contradiction. Hence $u|_{\partial X}$ is constant.
\end{proof}
\begin{mythm}\label{SG_det_thm_ub}
If $\lambda\in(0,1/5)$ or $\beta\in(\beta^*,+\infty)$, then $(\mathcal{E}_K,\mathcal{F}_K)$ on $L^2(K;\nu)$ is trivial, that is, $\mathcal{F}_K$ consists only of constant functions. Hence the walk dimension of the SG satisfies $\beta_*=\beta^*=\log5/\log2$.
\end{mythm}
\begin{proof}
For all $u\in\mathcal{F}_K$, let $v=Hu$ on $X$, then we have
$$\frac{1}{2}\sum_{x,y\in X}c(x,y)(v(x)-v(y))^2<+\infty.$$
Since $\lambda<1/5<1/3$, by Lemma \ref{SG_det_lem_ext}, $v$ can be extended continuously to $\bar{X}$; we still denote the extension by $v$. By Lemma \ref{SG_det_lem_bdy_har}, we have $v|_{\partial X}=u$. By Theorem \ref{SG_det_thm_constant}, $v|_{\partial X}$ is constant, hence $u$ is constant, and $\mathcal{F}_K$ consists only of constant functions.
\end{proof}
\begin{myrmk}
We cannot obtain the triviality of $\mathcal{F}_K$ when $\beta=\beta^*$ from the above proof. We will consider the case $\beta=\beta^*$ in Theorem \ref{SG_app_thm_det} and Theorem \ref{SG_con_thm_nonlocal}.
\end{myrmk}
\chapter{Approximation of Dirichlet Forms on the SG}\label{ch_SG_app}
This chapter is based on my work \cite{MY17}.
\section{Background and Statement}
We use the notions of the SG introduced in Section \ref{sec_notion}.
Let $\nu$ be the normalized Hausdorff measure on the SG $K$.
Let $(\mathcal{E}_\beta,\mathcal{F}_\beta)$ be given by
\begin{align*}
&\mathcal{E}_\beta(u,u)=\int_K\int_K\frac{(u(x)-u(y))^2}{|x-y|^{\alpha+\beta}}\nu(\mathrm{d} x)\nu(\mathrm{d} y),\\
&\mathcal{F}_\beta=\myset{u\in C(K):\mathcal{E}_\beta(u,u)<+\infty},
\end{align*}
where $\alpha=\log3/\log2$ is the Hausdorff dimension of the SG. By Chapter \ref{ch_SG_det}, $(\mathcal{E}_\beta,\mathcal{F}_\beta)$ is a non-local regular Dirichlet form on $L^2(K;\nu)$ for all $\beta\in(\alpha,\beta^*)$, where $\beta^*=\log5/\log2$ is the walk dimension of the SG (see also \cite{Kum03}, which uses heat kernel estimates and a subordination technique, and \cite{KL16}, which uses effective resistance on graphs). We give another simple proof in this chapter.
Let $(\mathfrak{E}_{{\mathrm{loc}}},\mathfrak{F}_{{\mathrm{loc}}})$ be given by
\begin{align*}
&\mathfrak{E}_{\mathrm{loc}}(u,u)=\lim_{n\to+\infty}\left(\frac{5}{3}\right)^n\sum_{w\in W_n}\sum_{p,q\in V_w}(u(p)-u(q))^2,\\
&\mathfrak{F}_{\mathrm{loc}}=\myset{u\in C(K):\mathfrak{E}_{\mathrm{loc}}(u,u)<+\infty}.
\end{align*}
Then $(\mathfrak{E}_{\mathrm{loc}},\mathfrak{F}_{\mathrm{loc}})$ is a self-similar local regular Dirichlet form on $L^2(K;\nu)$ which corresponds to the diffusion on the SG, see Theorem \ref{thm_SG_con}.
Analogously to (\ref{eqn_classical}), one may expect that $(\beta^*-\beta)\mathcal{E}_\beta(u,u)$ converges to $\mathfrak{E}_{\mathrm{loc}}(u,u)$. However, this is not known. Using the sub-Gaussian estimates for the heat kernel of $\mathfrak{E}_{\mathrm{loc}}$, it was shown in \cite[Theorem 3.1]{Pie08} and \cite[2.1]{Kum03} that the Dirichlet form $\tilde{\mathcal{E}}_\beta$ obtained from $\mathfrak{E}_{\mathrm{loc}}$ by subordination of order $\beta/\beta^*$ has the following properties:
\begin{equation}\label{SG_app_eqn_sub}
\begin{aligned}
&\tilde{\mathcal{E}}_\beta(u,u)\asymp(\beta^*-\beta)\mathcal{E}_\beta(u,u),\\
&\tilde{\mathcal{E}}_\beta(u,u)\to\mathfrak{E}_{\mathrm{loc}}(u,u)\text{ as }\beta\uparrow\beta^*.
\end{aligned}
\end{equation}
Moreover, the jump kernel of $\tilde{\mathcal{E}}_\beta$ is of the order $|x-y|^{-(\alpha+\beta)}$ for all $\beta\in(0,\beta^*)$.
In this chapter, we construct explicitly a different semi-norm $E_\beta$ of jump type that has properties similar to (\ref{SG_app_eqn_sub}). Our construction has the following two advantages. Firstly, it is independent of any knowledge about the local Dirichlet form $\mathfrak{E}_{\mathrm{loc}}$ except for its definition. Secondly, we obtain a monotone convergence result for all functions in $L^2(K;\nu)$, which implies a Mosco convergence, while \cite[Theorem 3.1]{Pie08} gave a convergence result only for functions in $\mathfrak{F}_{\mathrm{loc}}$.
The new semi-norm $E_\beta$ is defined as follows.
$$E_\beta(u,u):=\sum_{n=1}^\infty2^{(\beta-\alpha)n}\sum_{w\in W_n}\sum_{p,q\in V_w}(u(p)-u(q))^2.$$
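The definition of $E_\beta$ is directly computable from the cell structure. The following sketch is a numerical illustration only; it assumes the standard planar embedding of the SG with corners $(0,0)$, $(1,0)$, $(1/2,\sqrt{3}/2)$, and the truncation level $N$ and the test function are arbitrary choices.

```python
import itertools
import math

# Corner points of the unit SG; the contractions are f_i(x) = (x + p_i) / 2.
P = [(0.0, 0.0), (1.0, 0.0), (0.5, math.sqrt(3.0) / 2.0)]

def cell_vertices(word):
    """Vertices V_w of the cell K_w with address `word` (a tuple over {0,1,2})."""
    verts = []
    for corner in P:
        x, y = corner
        for i in reversed(word):  # apply f_{w_n} first, f_{w_1} last
            x, y = (x + P[i][0]) / 2.0, (y + P[i][1]) / 2.0
        verts.append((x, y))
    return verts

def E_beta_truncated(u, beta, N):
    """Partial sum over n = 1..N of 2^{(beta-alpha)n} sum_{w in W_n} sum_{p,q in V_w} (u(p)-u(q))^2."""
    alpha = math.log(3.0) / math.log(2.0)
    total = 0.0
    for n in range(1, N + 1):
        level = 0.0
        for word in itertools.product(range(3), repeat=n):
            v = cell_vertices(word)
            level += sum((u(p) - u(q)) ** 2 for p in v for q in v)
        total += 2.0 ** ((beta - alpha) * n) * level
    return total

u = lambda p: p[0]  # restriction of the first coordinate to K
print(E_beta_truncated(u, beta=1.8, N=5))
```

For this particular $u$, each level contributes $3\cdot(2^\beta/4)^n$ to the sum over ordered pairs, so the truncated value can be checked against a closed form.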
We state the main results in the next two theorems. Our first main result is as follows.
\begin{mythm}\label{SG_app_thm_main}
For all $\beta\in(\alpha,+\infty)$, for all $u\in C(K)$, we have
$$E_\beta(u,u)\asymp\mathcal{E}_\beta(u,u).$$
\end{mythm}
Recall that a similar result for the unit interval was proved in \cite{Kam97} as follows. Let $I=[0,1]$. Then for all $\beta\in(1,+\infty)$, for all $u\in C(I)$, we have
\begin{equation}\label{SG_app_eqn_interval}
\sum_{n=1}^\infty2^{(\beta-1)n}\sum_{i=0}^{2^n-1}(u(\frac{i}{2^n})-u(\frac{i+1}{2^n}))^2\asymp\int_{I}\int_I\frac{(u(x)-u(y))^2}{|x-y|^{1+\beta}}\mathrm{d} x\mathrm{d} y.
\end{equation}
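Both sides of (\ref{SG_app_eqn_interval}) are easy to evaluate for simple test functions, which gives a quick numerical sanity check of the equivalence. The following sketch is an illustration only; the test function $u(x)=x$, the exponent $\beta=3/2$ and the truncation level are arbitrary choices.

```python
import math

def dyadic_energy(u, beta, N):
    # Truncation of the dyadic sum in the interval estimate:
    # sum over n = 1..N of 2^{(beta-1)n} * sum_i (u(i/2^n) - u((i+1)/2^n))^2.
    total = 0.0
    for n in range(1, N + 1):
        h = 2.0 ** (-n)
        s = sum((u(i * h) - u((i + 1) * h)) ** 2 for i in range(2 ** n))
        total += 2.0 ** ((beta - 1.0) * n) * s
    return total

beta = 1.5
# For u(x) = x the level-n inner sum equals 2^{-n}, so the full series is
# sum_n 2^{(beta-2)n} = sqrt(2)+1 at beta = 3/2, while the double integral
# equals int_0^1 int_0^1 |x-y|^{1-beta} dx dy = 2/((2-beta)(3-beta)) = 8/3.
print(dyadic_energy(lambda x: x, beta, N=20), 2.0 / ((2.0 - beta) * (3.0 - beta)))
```

The two sides are comparable, as the estimate asserts, but of course not equal.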
Consider the convergence problem. Assume that $(\mathcal{E},\mathcal{F})$ is a quadratic form on $L^2(K;\nu)$, where the energy $\mathcal{E}$ has an explicit expression and the domain $\mathcal{F}\subseteq C(K)$. We use the following convention to extend $\mathcal{E}$ to $L^2(K;\nu)$. Each $u\in L^2(K;\nu)$ has at most one continuous version. If $u$ has a continuous version $\tilde{u}$, then we define $\mathcal{E}(u,u)$ as the energy of $\tilde{u}$ given by the explicit expression, which might be $+\infty$; if $u$ has no continuous version, then we define $\mathcal{E}(u,u)$ as $+\infty$.
It is obvious that $\mathcal{F}_{\beta_1}\supseteq\mathcal{F}_{\beta_2}\supseteq\mathfrak{F}_{\mathrm{loc}}$ for all $\alpha<\beta_1<\beta_2<\beta^*$. We use Theorem \ref{SG_app_thm_main} to answer the convergence question as follows.
\begin{mythm}\label{SG_app_thm_incre}
For all $u\in L^2(K;\nu)$, we have
$$(1-5^{-1}\cdot 2^{\beta})E_\beta(u,u)\uparrow\mathfrak{E}_{\mathrm{loc}}(u,u)$$
as $\beta\uparrow\beta^*=\log5/\log2$.
\end{mythm}
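\begin{myrmk}
For functions whose graph energies are exactly self-similar, the convergence in Theorem \ref{SG_app_thm_incre} can be verified by a direct computation. Indeed, suppose that $u$ satisfies
$$\sum_{w\in W_n}\sum_{p,q\in V_w}(u(p)-u(q))^2=\left(\frac{3}{5}\right)^n\mathfrak{E}_{\mathrm{loc}}(u,u)\text{ for all }n\ge1,$$
which holds, for instance, for harmonic functions on the SG. Since $2^{-\alpha}=1/3$, we have
$$E_\beta(u,u)=\sum_{n=1}^\infty2^{(\beta-\alpha)n}\left(\frac{3}{5}\right)^n\mathfrak{E}_{\mathrm{loc}}(u,u)=\sum_{n=1}^\infty\left(\frac{2^\beta}{5}\right)^n\mathfrak{E}_{\mathrm{loc}}(u,u)=\frac{5^{-1}\cdot2^\beta}{1-5^{-1}\cdot2^\beta}\mathfrak{E}_{\mathrm{loc}}(u,u),$$
hence
$$(1-5^{-1}\cdot2^\beta)E_\beta(u,u)=5^{-1}\cdot2^\beta\,\mathfrak{E}_{\mathrm{loc}}(u,u)\uparrow\mathfrak{E}_{\mathrm{loc}}(u,u)\text{ as }\beta\uparrow\beta^*=\log5/\log2.$$
\end{myrmk}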
Moreover, we also have a Mosco convergence.
\begin{mythm}\label{SG_app_thm_conv_main}
For every sequence $\myset{\beta_n}\subseteq(\alpha,\beta^*)$ with $\beta_n\uparrow\beta^*$, we have $(1-5^{-1}\cdot 2^{\beta_n})E_{\beta_n}\to\mathfrak{E}_{\mathrm{loc}}$ in the sense of Mosco.
\end{mythm}
As a byproduct of Theorem \ref{SG_app_thm_main}, we obtain the following result about a trace problem. Let us introduce the notion of Besov spaces. Let $(M,d,\mu)$ be a metric measure space and let $\alpha,\beta>0$ be two parameters. Let
$$B^{2,2}_{\alpha,\beta}(M)=\myset{u\in L^2(M;\mu):\sum_{n=0}^\infty2^{(\alpha+\beta)n}\int\limits_M\int\limits_{d(x,y)<2^{-n}}(u(x)-u(y))^2\mu(\mathrm{d} y)\mu(\mathrm{d} x)<+\infty}.$$
If $\beta>\alpha$, then $B^{2,2}_{\alpha,\beta}(M)$ can be embedded in $C^{\frac{\beta-\alpha}{2}}(M)$. We regard the SG $K$ and the unit interval $I$ as metric measure spaces with Euclidean metrics and normalized Hausdorff measures. Let $\alpha_1=\log3/\log2$, $\alpha_2=1$ be the Hausdorff dimensions, and $\beta_1^*=\log5/\log2$, $\beta_2^*=2$ be the walk dimensions of $K$ and $I$, respectively.
Let us identify $I$ as the segment $[p_0,p_1]\subseteq K$. Choose some $\beta_1\in(\alpha_1,\beta_1^*)$. Any function $u\in B^{2,2}_{\alpha_1,\beta_1}(K)$ is continuous on $K$ and, hence, has the trace $u|_I$ on $I$. The trace problem is the problem of identifying the space of all traces $u|_I$ of all functions $u\in B_{\alpha_1,\beta_1}^{2,2}(K)$. This problem was considered by A. Jonsson using general Besov spaces in $\mathbb{R}^n$, see remarks after \cite[Theorem 3.1]{Jon05}. The following result follows from \cite{Jon05}.
\begin{mythm}\label{SG_app_thm_trace_main}
Let $\beta_1,\beta_2$ satisfy $\beta_1\in(\alpha_1,\beta_1^*)$ and $\beta_1-\alpha_1=\beta_2-\alpha_2$. Then the trace space of $B^{2,2}_{\alpha_1,\beta_1}(K)$ to $I$ is $B^{2,2}_{\alpha_2,\beta_2}(I)$.
\end{mythm}
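The exponent matching in Theorem \ref{SG_app_thm_trace_main} is elementary arithmetic; the following sketch computes the interval exponent corresponding to a sample value $\beta_1=2$, an arbitrary choice in $(\alpha_1,\beta_1^*)$.

```python
import math

alpha1 = math.log(3.0) / math.log(2.0)      # Hausdorff dimension of K
alpha2 = 1.0                                # Hausdorff dimension of I
beta1_star = math.log(5.0) / math.log(2.0)  # walk dimension of K

beta1 = 2.0                      # any beta1 in (alpha1, beta1_star)
beta2 = beta1 - alpha1 + alpha2  # matching interval exponent
print(beta2)  # ≈ 1.415, so the trace space is B^{2,2}_{1, beta2}(I)
```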
We give here a new short proof of Theorem \ref{SG_app_thm_trace_main} using Theorem \ref{SG_app_thm_main}.
Finally, we construct explicitly a sequence of non-local Dirichlet forms with jumping kernels equivalent to $|x-y|^{-\alpha-\beta}$ that converges \emph{exactly} to the local Dirichlet form. We need some notions as follows. For all $n\ge1$, $w=w_1\ldots w_n\in W_n$ and $p\in V_w$, we have $p=P_{w_1\ldots w_nw_{n+1}}$ for some $w_{n+1}\in\myset{0,1,2}$. Let $\gamma\ge1$ be an integer and define
$$K^{(i)}_{p,n}=K_{w_1\ldots w_nw_{n+1}\ldots w_{n+1}},\quad i\ge1,$$
where the word ends with $\gamma ni$ copies of $w_{n+1}$.
\begin{mythm}\label{SG_app_thm_jumping_kernel}
For every sequence $\myset{\beta_i}\subseteq(\alpha,\beta^*)$ with $\beta_i\uparrow\beta^*$, there exist functions $a_i$, bounded from above and below by positive constants, given by
$$a_i=\delta_iC_i+(1-\delta_i),$$
where $\myset{\delta_i}\subseteq(0,1)$ is an arbitrary sequence with $\delta_i\uparrow1$ and
$$C_i(x,y)=\sum_{n=1}^{\Phi(i)}2^{-2\alpha n}\sum_{w\in W_n}\sum_{p,q\in V_w}\frac{1}{\nu(K^{(i)}_{p,n})\nu(K^{(i)}_{q,n})}1_{K^{(i)}_{p,n}}(x)1_{K^{(i)}_{q,n}}(y),$$
where $\Phi:\mathbb{N}\to\mathbb{N}$ is increasing and $(1-5^{-1}\cdot2^{\beta_i})\Phi(i)\ge i$ for all $i\ge1$. Then for all $u\in\mathfrak{F}_{\mathrm{loc}}$, we have
$$\lim_{i\to+\infty}(1-5^{-1}\cdot2^{\beta_i})\iint_{K\times K\backslash\mathrm{diag}}\frac{a_i(x,y)(u(x)-u(y))^2}{|x-y|^{\alpha+\beta_i}}\nu(\mathrm{d} x)\nu(\mathrm{d} y)=\mathfrak{E}_{\mathrm{loc}}(u,u).$$
\end{mythm}
\begin{myrmk}
The shape of the function $C_i$ reflects the inhomogeneity of the fractal structure with respect to the Euclidean structure. Of course, the subordination technique in \cite{Pie08} ensures the existence of such functions $a_i$, but Theorem \ref{SG_app_thm_jumping_kernel} provides them explicitly.
\end{myrmk}
\section{Proof of Theorem \ref{SG_app_thm_main}}
First, we give some equivalent semi-norms that are more convenient for later use.
\begin{mylem}\label{SG_app_lem_equiv1}
For all $u\in L^2(K;\nu)$, we have
$$\int_K\int_K\frac{(u(x)-u(y))^2}{|x-y|^{\alpha+\beta}}\nu(\mathrm{d} x)\nu(\mathrm{d} y)\asymp\sum_{n=0}^\infty2^{(\alpha+\beta)n}\int_K\int_{B(x,2^{-n})}(u(x)-u(y))^2\nu(\mathrm{d} y)\nu(\mathrm{d} x).$$
\end{mylem}
\begin{proof}
On the one hand, we have
\begin{align*}
&\int_K\int_K\frac{(u(x)-u(y))^2}{|x-y|^{\alpha+\beta}}\nu(\mathrm{d} x)\nu(\mathrm{d} y)\\
=&\int_K\int_{B(x,1)}\frac{(u(x)-u(y))^2}{|x-y|^{\alpha+\beta}}\nu(\mathrm{d} y)\nu(\mathrm{d} x)\\
=&\sum_{n=0}^\infty\int_K\int_{B(x,2^{-n})\backslash{B(x,2^{-(n+1)})}}\frac{(u(x)-u(y))^2}{|x-y|^{\alpha+\beta}}\nu(\mathrm{d} y)\nu(\mathrm{d} x)\\
\le&\sum_{n=0}^\infty2^{(\alpha+\beta)(n+1)}\int_K\int_{B(x,2^{-n})\backslash{B(x,2^{-(n+1)})}}{(u(x)-u(y))^2}\nu(\mathrm{d} y)\nu(\mathrm{d} x)\\
\le&2^{\alpha+\beta}\sum_{n=0}^\infty2^{(\alpha+\beta)n}\int_K\int_{B(x,2^{-n})}{(u(x)-u(y))^2}\nu(\mathrm{d} y)\nu(\mathrm{d} x).
\end{align*}
On the other hand, we have
\begin{align*}
&\sum_{n=0}^\infty2^{(\alpha+\beta)n}\int_K\int_{B(x,2^{-n})}{(u(x)-u(y))^2}\nu(\mathrm{d} y)\nu(\mathrm{d} x)\\
=&\sum_{n=0}^\infty\sum_{k=n}^\infty2^{(\alpha+\beta)n}\int_K\int_{B(x,2^{-k})\backslash{B(x,2^{-(k+1)})}}{(u(x)-u(y))^2}\nu(\mathrm{d} y)\nu(\mathrm{d} x)\\
=&\sum_{k=0}^\infty\sum_{n=0}^k 2^{(\alpha+\beta)n}\int_K\int_{B(x,2^{-k})\backslash{B(x,2^{-(k+1)})}}{(u(x)-u(y))^2}\nu(\mathrm{d} y)\nu(\mathrm{d} x)\\
\le&\sum_{k=0}^\infty\frac{2^{(\alpha+\beta)(k+1)}}{2^{\alpha+\beta}-1}\int_K\int_{B(x,2^{-k})\backslash{B(x,2^{-(k+1)})}}{(u(x)-u(y))^2}\nu(\mathrm{d} y)\nu(\mathrm{d} x)\\
\le&\frac{2^{\alpha+\beta}}{2^{\alpha+\beta}-1}\sum_{k=0}^\infty\int_K\int_{B(x,2^{-k})\backslash{B(x,2^{-(k+1)})}}\frac{(u(x)-u(y))^2}{|x-y|^{\alpha+\beta}}\nu(\mathrm{d} y)\nu(\mathrm{d} x)\\
=&\frac{2^{\alpha+\beta}}{2^{\alpha+\beta}-1}\int_K\int_{K}\frac{(u(x)-u(y))^2}{|x-y|^{\alpha+\beta}}\nu(\mathrm{d} x)\nu(\mathrm{d} y).
\end{align*}
\end{proof}
Moreover, we have
\begin{mycor}\label{SG_app_cor_arbi}
Fix an arbitrary integer $N\ge0$ and a real number $c>0$. For all $u\in L^2(K;\nu)$, we have
$$\int_K\int_K\frac{(u(x)-u(y))^2}{|x-y|^{\alpha+\beta}}\nu(\mathrm{d} x)\nu(\mathrm{d} y)\asymp\sum_{n=N}^\infty2^{(\alpha+\beta)n}\int_K\int_{B(x,c2^{-n})}(u(x)-u(y))^2\nu(\mathrm{d} y)\nu(\mathrm{d} x).$$
\end{mycor}
\begin{proof}
We only need to show that for all $n\ge1$, there exists some positive constant $C=C(n)$ such that
$$\int_K\int_K(u(x)-u(y))^2\nu(\mathrm{d} x)\nu(\mathrm{d} y)\le C\int_K\int_{B(x,{2^{-n}})}(u(x)-u(y))^2\nu(\mathrm{d} y)\nu(\mathrm{d} x).$$
Indeed, the SG satisfies the chain condition (see \cite[Definition 3.4]{GHL03}), that is, there exists a positive constant $C_1$ such that for all $x,y\in K$ and all integers $N\ge1$ there exist $z_0,\ldots,z_N\in K$ with $z_0=x$, $z_N=y$ and
$$|z_i-z_{i+1}|\le C_1\frac{|x-y|}{N}\text{ for all }i=0,\ldots,N-1.$$
Take an integer $N\ge2^{n+2}C_1+1$ and fix $x,y\in K$; then there exist $z_0,\ldots,z_N$ with $z_0=x$, $z_N=y$ and
$$|z_i-z_{i+1}|\le C_1\frac{|x-y|}{N}\le 2^{-(n+2)}\text{ for all }i=0,\ldots,N-1.$$
For all $i=0,\ldots,N-1$, for all $x_i\in B(z_i,2^{-(n+2)})$, $x_{i+1}\in B(z_{i+1},2^{-(n+2)})$, we have
$$|x_i-x_{i+1}|\le|x_i-z_i|+|z_i-z_{i+1}|+|z_{i+1}-x_{i+1}|\le{3}\cdot{2^{-(n+2)}}<2^{-n}.$$
Fix $x_0=z_0=x$, $x_N=z_N=y$, note that
$$(u(x)-u(y))^2=(u(x_0)-u(x_N))^2\le N\sum_{i=0}^{N-1}(u(x_i)-u(x_{i+1}))^2.$$
Integrating with respect to $x_1\in B(z_1,2^{-(n+2)}),\ldots,x_{N-1}\in B(z_{N-1},2^{-(n+2)})$ and dividing by $\nu(B(z_1,2^{-(n+2)})),\ldots,\nu(B(z_{N-1},2^{-(n+2)}))$, we have
\begin{align*}
(u(x)-u(y))^2&\le N\left(\frac{1}{\nu(B(z_1,2^{-(n+2)}))}\int_{B(z_1,2^{-(n+2)})}(u(x_0)-u(x_1))^2\nu(\mathrm{d} x_1)\right.\\
&+\frac{1}{\nu(B(z_{N-1},2^{-(n+2)}))}\int_{B(z_{N-1},2^{-(n+2)})}(u(x_{N-1})-u(x_N))^2\nu(\mathrm{d} x_{N-1})\\
&+\sum_{i=1}^{N-2}\frac{1}{\nu(B(z_i,2^{-(n+2)}))\nu(B(z_{i+1},2^{-(n+2)}))}\\
&\left.\int_{B(z_i,2^{-(n+2)})}\int_{B(z_{i+1},2^{-(n+2)})}(u(x_i)-u(x_{i+1}))^2\nu(\mathrm{d} x_i)\nu(\mathrm{d} x_{i+1})\right).
\end{align*}
Noting that $\nu(B(z_i,2^{-(n+2)}))\asymp 2^{-\alpha n}$ for all $i=1,\ldots,N-1$, we have
\begin{align*}
(u(x)-u(y))^2&\le C_2\left(\int_{B(x,2^{-n})}(u(x)-u(x_1))^2\nu(\mathrm{d} x_1)\right.\\
&+\int_{B(y,2^{-n})}(u(y)-u(x_{N-1}))^2\nu(\mathrm{d} x_{N-1})\\
&\left.+\int_K\int_{B(x,2^{-n})}(u(x)-u(y))^2\nu(\mathrm{d} y)\nu(\mathrm{d} x)\right),
\end{align*}
where $C_2=C_2(n)$ is some positive constant. Since $\nu(K)=1$, integrating with respect to $x,y\in K$, we have
$$\int_K\int_K(u(x)-u(y))^2\nu(\mathrm{d} x)\nu(\mathrm{d} y)\le 4C_2\int_K\int_{B(x,{2^{-n}})}(u(x)-u(y))^2\nu(\mathrm{d} y)\nu(\mathrm{d} x).$$
Letting $C=4C_2$ gives the desired result.
\end{proof}
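The chaining step above rests on the elementary inequality $(u(x_0)-u(x_N))^2\le N\sum_{i=0}^{N-1}(u(x_i)-u(x_{i+1}))^2$, obtained from the Cauchy--Schwarz inequality applied to the telescoping sum. A minimal numerical illustration, with arbitrary random data standing in for the values $u(x_i)$ along a chain:

```python
import random

random.seed(0)
N = 7
# Values u(x_0), ..., u(x_N) along a chain; any real numbers will do.
vals = [random.uniform(-1.0, 1.0) for _ in range(N + 1)]

lhs = (vals[0] - vals[-1]) ** 2
rhs = N * sum((vals[i] - vals[i + 1]) ** 2 for i in range(N))
assert lhs <= rhs  # Cauchy-Schwarz applied to the telescoping sum
print(lhs, rhs)
```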
The following result states that a Besov space can be embedded in some H\"older space.
\begin{mylem}\label{SG_app_lem_holder}(\cite[Theorem 4.11 (iii)]{GHL03})
For all $u\in C(K)$, let
$$E(u)=\int_K\int_K\frac{(u(x)-u(y))^2}{|x-y|^{\alpha+\beta}}\nu(\mathrm{d} x)\nu(\mathrm{d} y).$$
Then there exists some positive constant $c$ such that
$$|u(x)-u(y)|^2\le cE(u)|x-y|^{\beta-\alpha},$$
for all $x,y\in K$, for all $u\in C(K)$.
\end{mylem}
Note that the proof of the above lemma does not rely on the heat kernel.
We divide Theorem \ref{SG_app_thm_main} into the following Theorem \ref{SG_app_thm_equiv2_1} and Theorem \ref{SG_app_thm_equiv2_2}. The idea of the proofs of these theorems comes from \cite{Jon96}, where the case of a local Dirichlet form was considered.
\begin{mythm}\label{SG_app_thm_equiv2_1}
For all $u\in C(K)$, we have
$$\sum_{n=1}^\infty2^{(\beta-\alpha)n}\sum_{w\in W_n}\sum_{p,q\in V_w}(u(p)-u(q))^2\lesssim\int_K\int_K\frac{(u(x)-u(y))^2}{|x-y|^{\alpha+\beta}}\nu(\mathrm{d} x)\nu(\mathrm{d} y).$$
\end{mythm}
\begin{proof}
First, fix $n\ge1$ and $w=w_1\ldots w_n\in W_n$, and consider $\sum_{p,q\in V_w}(u(p)-u(q))^2$. For all $x\in K_w$, we have
$$(u(p)-u(q))^2\le 2(u(p)-u(x))^2+2(u(x)-u(q))^2.$$
Integrating with respect to $x\in K_w$ and dividing by $\nu(K_w)$, we have
$$(u(p)-u(q))^2\le\frac{2}{\nu(K_w)}\int_{K_w}(u(p)-u(x))^2\nu(\mathrm{d} x)+\frac{2}{\nu(K_w)}\int_{K_w}(u(q)-u(x))^2\nu(\mathrm{d} x),$$
hence
\begin{align*}
&\sum_{p,q\in V_w}(u(p)-u(q))^2\\
\le&\sum_{p,q\in V_w,p\ne q}\left[\frac{2}{\nu(K_w)}\int_{K_w}(u(p)-u(x))^2\nu(\mathrm{d} x)+\frac{2}{\nu(K_w)}\int_{K_w}(u(q)-u(x))^2\nu(\mathrm{d} x)\right]\\
\le&2\cdot2\cdot2\sum_{p\in V_w}\frac{1}{\nu(K_w)}\int_{K_w}(u(p)-u(x))^2\nu(\mathrm{d} x).
\end{align*}
Consider $(u(p)-u(x))^2$ for $p\in V_w$, $x\in K_w$. There exists $w_{n+1}\in\myset{0,1,2}$ such that $p=f_{w_1}\circ\ldots\circ f_{w_n}(p_{w_{n+1}})$. Let $k,l\ge1$ be integers to be determined later, and let
$$w^{(i)}=w_1\ldots w_nw_{n+1}\ldots w_{n+1}$$
with $ki$ terms of $w_{n+1}$, $i=0,\ldots,l$. For all $x^{(i)}\in K_{w^{(i)}}$, $i=0,\ldots,l$, we have
\begin{align*}
(u(p)-u(x^{(0)}))^2&\le2(u(p)-u(x^{(l)}))^2+2(u(x^{(0)})-u(x^{(l)}))^2\\
&\le2(u(p)-u(x^{(l)}))^2+2\left[2(u(x^{(0)})-u(x^{(1)}))^2+2(u(x^{(1)})-u(x^{(l)}))^2\right]\\
&=2(u(p)-u(x^{(l)}))^2+2^2(u(x^{(0)})-u(x^{(1)}))^2+2^2(u(x^{(1)})-u(x^{(l)}))^2\\
&\le\ldots\le2(u(p)-u(x^{(l)}))^2+2^2\sum_{i=0}^{l-1}2^i(u(x^{(i)})-u(x^{(i+1)}))^2.
\end{align*}
Integrating with respect to $x^{(0)}\in K_{w^{(0)}}$, \ldots, $x^{(l)}\in K_{w^{(l)}}$ and dividing by $\nu(K_{w^{(0)}})$, \ldots, $\nu(K_{w^{(l)}})$, we have
\begin{align*}
&\frac{1}{\nu(K_{w^{(0)}})}\int_{K_{w^{(0)}}}(u(p)-u(x^{(0)}))^2\nu(\mathrm{d} x^{(0)})\\
\le&\frac{2}{\nu(K_{w^{(l)}})}\int_{K_{w^{(l)}}}(u(p)-u(x^{(l)}))^2\nu(\mathrm{d} x^{(l)})\\
&+2^2\sum_{i=0}^{l-1}\frac{2^i}{\nu(K_{w^{(i)}})\nu(K_{w^{(i+1)}})}\int_{K_{w^{(i)}}}\int_{K_{w^{(i+1)}}}(u(x^{(i)})-u(x^{(i+1)}))^2\nu(\mathrm{d} x^{(i)})\nu(\mathrm{d} x^{(i+1)}).
\end{align*}
Now let us use $\nu(K_{w^{(i)}})=(1/3)^{n+ki}=2^{-\alpha(n+ki)}$. For the first term, by Lemma \ref{SG_app_lem_holder}, we have
\begin{align*}
\frac{1}{\nu(K_{w^{(l)}})}\int_{K_{w^{(l)}}}(u(p)-u(x^{(l)}))^2\nu(\mathrm{d} x^{(l)})&\le \frac{cE(u)}{\nu(K_{w^{(l)}})}\int_{K_{w^{(l)}}}|p-x^{(l)}|^{\beta-\alpha}\nu(\mathrm{d} x^{(l)})\\
&\le cE(u){2}^{-(\beta-\alpha)(n+kl)}.
\end{align*}
For the second term, for all $x^{(i)}\in K_{w^{(i)}},x^{(i+1)}\in K_{w^{(i+1)}}$, we have $|x^{(i)}-x^{(i+1)}|\le2^{-(n+ki)}$, hence
\begin{align*}
&\sum_{i=0}^{l-1}\frac{2^i}{\nu(K_{w^{(i)}})\nu(K_{w^{(i+1)}})}\int_{K_{w^{(i)}}}\int_{K_{w^{(i+1)}}}(u(x^{(i)})-u(x^{(i+1)}))^2\nu(\mathrm{d} x^{(i)})\nu(\mathrm{d} x^{(i+1)})\\
\le&\sum_{i=0}^{l-1}{2^{i+\alpha(n+ki+n+ki+k)}}\int_{K_{w^{(i)}}}\int_{|x^{(i+1)}-x^{(i)}|\le2^{-n-ki}}(u(x^{(i)})-u(x^{(i+1)}))^2\nu(\mathrm{d} x^{(i)})\nu(\mathrm{d} x^{(i+1)})\\
=&\sum_{i=0}^{l-1}{2^{i+\alpha k+2\alpha(n+ki)}}\int_{K_{w^{(i)}}}\int_{|x^{(i+1)}-x^{(i)}|\le2^{-(n+ki)}}(u(x^{(i)})-u(x^{(i+1)}))^2\nu(\mathrm{d} x^{(i)})\nu(\mathrm{d} x^{(i+1)}),
\end{align*}
and
\begin{align*}
&\frac{1}{\nu(K_w)}\int_{K_w}(u(p)-u(x))^2\nu(\mathrm{d} x)=\frac{1}{\nu(K_{w^{(0)}})}\int_{K_{w^{(0)}}}(u(p)-u(x^{(0)}))^2\nu(\mathrm{d} x^{(0)})\\
\le& 2cE(u)2^{-(\beta-\alpha)(n+kl)}\\
&+4\sum_{i=0}^{l-1}{2^{i+\alpha k+2\alpha(n+ki)}}\int_{K_{w^{(i)}}}\int_{|x^{(i+1)}-x^{(i)}|\le2^{-n-ki}}(u(x^{(i)})-u(x^{(i+1)}))^2\nu(\mathrm{d} x^{(i)})\nu(\mathrm{d} x^{(i+1)}).
\end{align*}
Hence
\begin{align*}
&\sum_{w\in W_n}\sum_{p,q\in V_w}(u(p)-u(q))^2\\
\le&8\sum_{w\in W_n}\sum_{p\in V_w}\frac{1}{\nu(K_w)}\int_{K_w}(u(p)-u(x))^2\nu(\mathrm{d} x)\\
\le&8\sum_{w\in W_n}\sum_{p\in V_w}\left(2cE(u)2^{-(\beta-\alpha)(n+kl)}\right.\\
&\left.+4\sum_{i=0}^{l-1}{2^{i+\alpha k+2\alpha(n+ki)}}\int_{K_{w^{(i)}}}\int_{|x^{(i+1)}-x^{(i)}|\le2^{-n-ki}}(u(x^{(i)})-u(x^{(i+1)}))^2\nu(\mathrm{d} x^{(i)})\nu(\mathrm{d} x^{(i+1)})\right).
\end{align*}
For the first term, we have
$$\sum_{w\in W_n}\sum_{p\in V_w}2^{-(\beta-\alpha)(n+kl)}=3\cdot3^n\cdot2^{-(\beta-\alpha)(n+kl)}=3\cdot2^{\alpha n-(\beta-\alpha)(n+kl)}.$$
For the second term, note that for each fixed $i=0,\ldots,l-1$, distinct pairs $(p,w)$ with $p\in V_w$, $w\in W_n$ correspond to distinct cells $K_{w^{(i)}}$, hence
\begin{align*}
&\sum_{i=0}^{l-1}\sum_{w\in W_n}\sum_{p\in V_w}2^{i+\alpha k+2\alpha(n+ki)}\\
&\cdot\int_{K_{w^{(i)}}}\int_{|x^{(i+1)}-x^{(i)}|\le2^{-n-ki}}(u(x^{(i)})-u(x^{(i+1)}))^2\nu(\mathrm{d} x^{(i)})\nu(\mathrm{d} x^{(i+1)})\\
\le&\sum_{i=0}^{l-1}2^{i+\alpha k+2\alpha(n+ki)}\int_{K}\int_{|x^{(i+1)}-x^{(i)}|\le2^{-(n+ki)}}(u(x^{(i)})-u(x^{(i+1)}))^2\nu(\mathrm{d} x^{(i)})\nu(\mathrm{d} x^{(i+1)})\\
=&2^{\alpha k}\sum_{i=0}^{l-1}2^{i-(\beta-\alpha)ki-(\beta-\alpha)n}\\
&\cdot\left(2^{(\alpha+\beta)(n+ki)}\int_{K}\int_{|x^{(i+1)}-x^{(i)}|\le2^{-(n+ki)}}(u(x^{(i)})-u(x^{(i+1)}))^2\nu(\mathrm{d} x^{(i)})\nu(\mathrm{d} x^{(i+1)})\right).
\end{align*}
For simplicity, denote
$$E_{n}(u)=2^{(\alpha+\beta)n}\int_{K}\int_{|x-y|\le2^{-n}}(u(x)-u(y))^2\nu(\mathrm{d} x)\nu(\mathrm{d} y).$$
We have
\begin{align*}
&\sum_{w\in W_n}\sum_{p,q\in V_w}(u(p)-u(q))^2\\
\le&48cE(u)\cdot2^{\alpha n-(\beta-\alpha)(n+kl)}+32\cdot2^{\alpha k}\sum_{i=0}^{l-1}2^{i-(\beta-\alpha)ki-(\beta-\alpha)n}E_{n+ki}(u).
\end{align*}
Hence
\begin{align*}
&\sum_{n=1}^\infty2^{(\beta-\alpha)n}\sum_{w\in W_n}\sum_{p,q\in V_w}(u(p)-u(q))^2\\
\le&48cE(u)\sum_{n=1}^\infty2^{\beta n-(\beta-\alpha)(n+kl)}+32\cdot2^{\alpha k}\sum_{n=1}^\infty\sum_{i=0}^{l-1}2^{i-(\beta-\alpha)ki}E_{n+ki}(u).
\end{align*}
Take $l=n$, then
\begin{align*}
&\sum_{n=1}^\infty2^{(\beta-\alpha)n}\sum_{w\in W_n}\sum_{p,q\in V_w}(u(p)-u(q))^2\\
\le&48cE(u)\sum_{n=1}^\infty2^{[\beta-(\beta-\alpha)(k+1)]n}+32\cdot2^{\alpha k}\sum_{n=1}^\infty\sum_{i=0}^{n-1}2^{i-(\beta-\alpha)ki}E_{n+ki}(u)\\
=&48cE(u)\sum_{n=1}^\infty2^{[\beta-(\beta-\alpha)(k+1)]n}+32\cdot2^{\alpha k}\sum_{i=0}^\infty2^{i-(\beta-\alpha)ki}\sum_{n=i+1}^\infty E_{n+ki}(u)\\
\le&48cE(u)\sum_{n=1}^\infty2^{[\beta-(\beta-\alpha)(k+1)]n}+32\cdot2^{\alpha k}C_1E(u)\sum_{i=0}^\infty 2^{[1-(\beta-\alpha)k]i},
\end{align*}
where $C_1$ is some positive constant from Lemma \ref{SG_app_lem_equiv1}. Take $k\ge1$ such that
$$\beta-(\beta-\alpha)(k+1)<0$$
and
$$1-(\beta-\alpha)k<0,$$
then the above two series converge, hence
$$\sum_{n=1}^\infty2^{(\beta-\alpha)n}\sum_{w\in W_n}\sum_{p,q\in V_w}(u(p)-u(q))^2\lesssim\int_K\int_K\frac{(u(x)-u(y))^2}{|x-y|^{\alpha+\beta}}\nu(\mathrm{d} x)\nu(\mathrm{d} y).$$
\end{proof}
\begin{mythm}\label{SG_app_thm_equiv2_2}
For all $u\in C(K)$, we have
\begin{equation}\label{SG_app_eqn_equiv2_1}
\int_K\int_K\frac{(u(x)-u(y))^2}{|x-y|^{\alpha+\beta}}\nu(\mathrm{d} x)\nu(\mathrm{d} y)\lesssim\sum_{n=1}^\infty2^{(\beta-\alpha)n}\sum_{w\in W_n}\sum_{p,q\in V_w}(u(p)-u(q))^2,
\end{equation}
or equivalently
\begin{equation}\label{SG_app_eqn_equiv2_2}
\sum_{n=1}^\infty2^{(\alpha+\beta)n}\int\limits_K\int\limits_{B(x,2^{-n-1})}(u(x)-u(y))^2\nu(\mathrm{d} y)\nu(\mathrm{d} x)\lesssim\sum_{n=1}^\infty2^{(\beta-\alpha)n}\sum_{w\in W_n}\sum_{p,q\in V_w}(u(p)-u(q))^2.
\end{equation}
\end{mythm}
\begin{proof}
Note that $V_n=\cup_{w\in W_n}V_w$; clearly its cardinality satisfies $\#V_n\asymp3^n=2^{\alpha n}$. Let $\nu_n$ be the measure on $V_n$ that assigns mass $1/\#V_n$ to each point of $V_n$; then $\nu_n$ converges weakly to $\nu$.
First, fix $n\ge1$ and $m\ge n$; we estimate
$$2^{(\alpha+\beta)n}\int_K\int_{B(x,2^{-n-1})}(u(x)-u(y))^2\nu_m(\mathrm{d} y)\nu_m(\mathrm{d} x).$$
Note that
\begin{align*}
&\int_K\int_{B(x,2^{-n-1})}(u(x)-u(y))^2\nu_m(\mathrm{d} y)\nu_m(\mathrm{d} x)\\
=&\sum_{w\in W_n}\int_{K_w}\int_{B(x,2^{-n-1})}(u(x)-u(y))^2\nu_m(\mathrm{d} y)\nu_m(\mathrm{d} x).
\end{align*}
Fix $w\in W_n$. There exist at most four $\tilde{w}\in W_n$ such that $K_{\tilde{w}}\cap K_w\ne\emptyset$; let
$$K_w^*=\cup_{\tilde{w}\in W_n,K_{\tilde{w}}\cap K_w\ne\emptyset}K_{\tilde{w}}.$$
For all $x\in K_w$, $y\in B(x,2^{-n-1})$, we have $y\in K_w^*$, hence
$$\int_{K_w}\int_{B(x,2^{-n-1})}(u(x)-u(y))^2\nu_m(\mathrm{d} y)\nu_m(\mathrm{d} x)\le\int_{K_w}\int_{K_w^*}(u(x)-u(y))^2\nu_m(\mathrm{d} y)\nu_m(\mathrm{d} x).$$
For all $x\in K_w$ and $y\in K_w^*$, there exists $\tilde{w}\in W_n$ such that $y\in K_{\tilde{w}}$ and $K_{\tilde{w}}\cap K_w\ne\emptyset$. Take $z\in V_w\cap V_{\tilde{w}}$; then
$$(u(x)-u(y))^2\le2(u(x)-u(z))^2+2(u(z)-u(y))^2,$$
and
\begin{align*}
&\int_{K_w}\int_{K_w^*}(u(x)-u(y))^2\nu_m(\mathrm{d} y)\nu_m(\mathrm{d} x)\\
\le&\sum_{\tilde{w}\in W_n,K_{\tilde{w}}\cap K_w\ne\emptyset}\int_{K_w}\int_{K_{\tilde{w}}}(u(x)-u(y))^2\nu_m(\mathrm{d} y)\nu_m(\mathrm{d} x)\\
\le&\sum_{\tilde{w}\in W_n,K_{\tilde{w}}\cap K_w\ne\emptyset}2\int_{K_w}\int_{K_{\tilde{w}}}\left((u(x)-u(z))^2+(u(z)-u(y))^2\right)\nu_m(\mathrm{d} y)\nu_m(\mathrm{d} x).
\end{align*}
Hence
\begin{align}
&\sum_{w\in W_n}\int_{K_w}\int_{K_w^*}(u(x)-u(y))^2\nu_m(\mathrm{d} y)\nu_m(\mathrm{d} x)\nonumber\\
\le&2\cdot2\cdot4\cdot2\sum_{w\in W_n}\sum_{z\in V_w}\int_{K_w}(u(x)-u(z))^2\nu_m(\mathrm{d} x)\left(\int_{K_w}\nu_m(\mathrm{d} y)\right)\nonumber\\
=&32\sum_{w\in W_n}\sum_{z\in V_w}\int_{K_w}(u(x)-u(z))^2\nu_m(\mathrm{d} x)\frac{\#(V_m\cap K_w)}{\#V_m}\nonumber\\
=&32\sum_{w\in W_n}\sum_{z\in V_w}\sum_{x\in V_m\cap K_w}(u(x)-u(z))^2\frac{1}{\#V_m}\frac{\#(V_m\cap K_w)}{\#V_m}\nonumber\\
=&32\frac{\#V_{m-n}}{(\#V_m)^2}\sum_{w\in W_n}\sum_{z\in V_w}\sum_{x\in V_m\cap K_w}(u(x)-u(z))^2.\label{SG_app_eqn_tmp1}
\end{align}
Let us estimate $(u(x)-u(z))^2$ for $z\in V_w$, $x\in V_m\cap K_w$, $w\in W_n$. We construct a finite sequence $p_n,\ldots,p_{m+1}$ as follows. If $w=w_1\ldots w_n\in W_n$, then
\begin{align*}
z&=P_{w_1\ldots w_nw_{n+1}},\\
x&=P_{w_1\ldots w_n\tilde{w}_{n+1}\ldots\tilde{w}_m\tilde{w}_{m+1}}.
\end{align*}
Let
\begin{align*}
p_n&=P_{w_1\ldots w_nw_{n+1}}=z,\\
p_{n+1}&=P_{w_1\ldots w_n\tilde{w}_{n+1}},\\
p_{n+2}&=P_{w_1\ldots w_n\tilde{w}_{n+1}\tilde{w}_{n+2}},\\
&\ldots\\
p_{m+1}&=P_{w_1\ldots w_n\tilde{w}_{n+1}\ldots\tilde{w}_{m}\tilde{w}_{m+1}}=x,
\end{align*}
then $|p_i-p_{i+1}|=0$ or $2^{-i}$, $i=n,\ldots,m$ and
\begin{align*}
&(u(x)-u(z))^2=(u(p_n)-u(p_{m+1}))^2\\
\le&2(u(p_n)-u(p_{n+1}))^2+2(u(p_{n+1})-u(p_{m+1}))^2\\
\le&2(u(p_n)-u(p_{n+1}))^2+2\left[2(u(p_{n+1})-u(p_{n+2}))^2+2(u(p_{n+2})-u(p_{m+1}))^2\right]\\
=&2(u(p_n)-u(p_{n+1}))^2+2^2(u(p_{n+1})-u(p_{n+2}))^2+2^2(u(p_{n+2})-u(p_{m+1}))^2\\
\le&\ldots\le\sum_{i=n}^{m}2^{i-n+1}(u(p_i)-u(p_{i+1}))^2.
\end{align*}
Let us sum the resulting inequality over all $z\in V_w$, $x\in V_m\cap K_w$, $w\in W_n$. For all $i=n,\ldots,m$, $p,q\in V_i\cap K_w$ with $|p-q|=2^{-i}$, the term $(u(p)-u(q))^2$ occurs in the sum a number of times of the order $3^{m-i}$, each time with weight at most $2^{i-n+1}$, hence
$$\sum_{w\in W_n}\sum_{z\in V_w}\sum_{x\in V_m\cap K_w}(u(x)-u(z))^2\le c\sum_{i=n}^m\sum_{w\in W_i}\sum_{p,q\in V_w}(u(p)-u(q))^2\cdot3^{m-i}\cdot2^{i-n}.$$
It follows from Equation (\ref{SG_app_eqn_tmp1}) that
\begin{align*}
&\sum_{w\in W_n}\int_{K_w}\int_{K_{{w}}^*}(u(x)-u(y))^2\nu_m(\mathrm{d} y)\nu_m(\mathrm{d} x)\\
\le& c\frac{3^{m-n}}{3^{2m}}\sum_{i=n}^m\sum_{w\in W_i}\sum_{p,q\in V_w}(u(p)-u(q))^2\cdot3^{m-i}\cdot2^{i-n}\\
=&c\sum_{i=n}^m\sum_{w\in W_i}\sum_{p,q\in V_w}(u(p)-u(q))^2\cdot 3^{-n-i}\cdot2^{i-n}.
\end{align*}
Letting $m\to+\infty$, we obtain
$$\int_K\int_{B(x,2^{-n-1})}(u(x)-u(y))^2\nu(\mathrm{d} y)\nu(\mathrm{d} x)\le c\sum_{i=n}^\infty\sum_{w\in W_i}\sum_{p,q\in V_w}(u(p)-u(q))^2\cdot 3^{-n-i}\cdot2^{i-n},$$
and
\begin{align*}
&2^{(\alpha+\beta)n}\int_K\int_{B(x,2^{-n-1})}(u(x)-u(y))^2\nu(\mathrm{d} y)\nu(\mathrm{d} x)\\
\le& c\sum_{i=n}^\infty\sum_{w\in W_i}\sum_{p,q\in V_w}(u(p)-u(q))^2\cdot 2^{-(\alpha-1)i}\cdot2^{(\beta-1)n},
\end{align*}
and hence
\begin{align*}
&\sum_{n=1}^\infty2^{(\alpha+\beta)n}\int_K\int_{B(x,2^{-n-1})}(u(x)-u(y))^2\nu(\mathrm{d} y)\nu(\mathrm{d} x)\\
\le& c\sum_{n=1}^\infty\sum_{i=n}^\infty\sum_{w\in W_i}\sum_{p,q\in V_w}(u(p)-u(q))^2\cdot 2^{-(\alpha-1)i}\cdot2^{(\beta-1)n}\\
=&c\sum_{i=1}^\infty\sum_{n=1}^i\sum_{w\in W_i}\sum_{p,q\in V_w}(u(p)-u(q))^2\cdot 2^{-(\alpha-1)i}\cdot2^{(\beta-1)n}\\
\le&\frac{2^{\beta-1}c}{2^{\beta-1}-1}\sum_{i=1}^\infty2^{(\beta-\alpha)i}\sum_{w\in W_i}\sum_{p,q\in V_w}(u(p)-u(q))^2,
\end{align*}
which proves Equation (\ref{SG_app_eqn_equiv2_2}). Applying Corollary \ref{SG_app_cor_arbi}, we obtain Equation (\ref{SG_app_eqn_equiv2_1}).
\end{proof}
\section{Another Approach of Determination of the Walk Dimension of the SG}
In this section, we give another determination of the walk dimension of the SG that is much simpler than the one given in Chapter \ref{ch_SG_det}.
We prove the following result.
\begin{mythm}\label{SG_app_thm_det}
For all $\beta\in(\alpha,\beta^*)$, $(\mathcal{E}_\beta,\mathcal{F}_\beta)$ is a regular Dirichlet form on $L^2(K;\nu)$. For all $\beta\in[\beta^*,+\infty)$, $\mathcal{F}_\beta$ consists only of constant functions.
\end{mythm}
\begin{proof}
By Fatou's lemma, it is obvious that $(\mathcal{E}_\beta,\mathcal{F}_\beta)$ is a closed form on $L^2(K;\nu)$ in the wide sense.
Let $\beta\in(\alpha,\beta^*)$. By \cite[Theorem 4.11 (\rmnum{3})]{GHL03}, we have $\mathcal{F}_\beta\subseteq C(K)$. We only need to show that $\mathcal{F}_\beta$ is uniformly dense in $C(K)$; then $\mathcal{F}_\beta$ is dense in $L^2(K;\nu)$, and hence $(\mathcal{E}_\beta,\mathcal{F}_\beta)$ is a regular closed form on $L^2(K;\nu)$.
Indeed, for all $U=U^{(x_0,x_1,x_2)}\in\mathcal{U}$, by Theorem \ref{thm_SG_fun}, we have
\begin{align*}
E_\beta(U,U)&=\sum_{n=1}^\infty2^{(\beta-\alpha)n}\sum_{w\in W_n}\sum_{p,q\in V_w}(U(p)-U(q))^2\\
&=\sum_{n=1}^\infty2^{(\beta-\alpha)n}\left(\frac{3}{5}\right)^{n}\left((x_0-x_1)^2+(x_1-x_2)^2+(x_0-x_2)^2\right)<+\infty,
\end{align*}
hence $U\in\mathcal{F}_\beta$ and $\mathcal{U}\subseteq\mathcal{F}_\beta$. By Theorem \ref{thm_SG_fun} again, $\mathcal{F}_\beta$ separates points. It is obvious that $\mathcal{F}_\beta$ is a sub-algebra of $C(K)$, that is, for all $u,v\in\mathcal{F}_\beta$ and $c\in\mathbb{R}$, we have $u+v,cu,uv\in\mathcal{F}_\beta$. By the Stone-Weierstrass theorem, $\mathcal{F}_\beta$ is uniformly dense in $C(K)$.
It is obvious that $\mathcal{E}_\beta$ has the Markovian property, hence $(\mathcal{E}_\beta,\mathcal{F}_\beta)$ is a regular Dirichlet form on $L^2(K;\nu)$.
Now let $\beta\in[\beta^*,+\infty)$. Assume that $u\in\mathcal{F}_\beta$ is not constant, then there exists some integer $N\ge1$ such that $\mathfrak{E}_N(u,u)>0$. By Theorem \ref{thm_SG_con}, we have
\begin{align*}
E_\beta(u,u)&=\sum_{n=1}^\infty2^{(\beta-\alpha)n}\left(\frac{3}{5}\right)^n\mathfrak{E}_n(u,u)\ge\sum_{n=N+1}^\infty2^{(\beta-\alpha)n}\left(\frac{3}{5}\right)^n\mathfrak{E}_n(u,u)\\
&\ge\sum_{n=N+1}^\infty2^{(\beta-\alpha)n}\left(\frac{3}{5}\right)^n\mathfrak{E}_N(u,u)=+\infty,
\end{align*}
a contradiction. Hence $\mathcal{F}_\beta$ consists only of constant functions.
\end{proof}
\section{Proofs of Theorems \ref{SG_app_thm_incre} and \ref{SG_app_thm_conv_main}}
First, we prove Theorem \ref{SG_app_thm_incre}.
\begin{proof}[Proof of Theorem \ref{SG_app_thm_incre}]
For simplicity, let $\lambda=5^{-1}\cdot2^{\beta}$, or equivalently $\beta=\log(5\lambda)/\log2$, so that $\beta\in(\alpha,\beta^*)$ corresponds to $\lambda\in(3/5,1)$. Then
\begin{align*}
(1-5^{-1}\cdot 2^{\beta})E_\beta(u,u)&=\left(1-\frac{2^{\beta}}{5}\right)\sum_{n=1}^\infty\left(\frac{2^\beta}{5}\right)^n\left[\left(\frac{5}{3}\right)^n\sum_{w\in W_n}\sum_{p,q\in V_w}(u(p)-u(q))^2\right]\\
&=(1-\lambda)\sum_{n=1}^\infty\lambda^n\mathfrak{E}_n(u,u).
\end{align*}
If $u$ has no continuous version, then this result is obvious. Hence, we may assume that $u$ is continuous. Let
$$x_n=\mathfrak{E}_n(u,u)=\left(\frac{5}{3}\right)^n\sum_{w\in W_n}\sum_{p,q\in V_w}(u(p)-u(q))^2.$$
By Theorem \ref{thm_SG_con}, $\myset{x_n}$ is a monotone increasing sequence in $[0,+\infty]$ and
$$\lim_{n\to+\infty}x_n=\lim_{n\to+\infty}\mathfrak{E}_n(u,u)=\mathfrak{E}_{\mathrm{loc}}(u,u)=\lim_{n\to+\infty}\left(\frac{5}{3}\right)^n\sum_{w\in W_n}\sum_{p,q\in V_w}(u(p)-u(q))^2.$$
By Proposition \ref{prop_ele1}, we have $(1-\lambda)\sum_{n=1}^\infty\lambda^nx_n$ is monotone increasing in $\lambda\in(3/5,1)$, that is, $(1-5^{-1}\cdot2^\beta)E_\beta(u,u)$ is monotone increasing in $\beta\in(\alpha,\beta^*)$. Moreover
$$\lim_{\beta\uparrow\beta^*}(1-5^{-1}\cdot2^\beta)E_\beta(u,u)=\lim_{\lambda\uparrow1}(1-\lambda)\sum_{n=1}^\infty\lambda^nx_n=\lim_{n\to+\infty}x_n=\lim_{n\to+\infty}\mathfrak{E}_n(u,u)=\mathfrak{E}_{\mathrm{loc}}(u,u).$$
\end{proof}
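The Abelian limit used in this proof, namely that $(1-\lambda)\sum_{n\ge1}\lambda^nx_n$ is monotone increasing in $\lambda$ and tends to $\lim_{n\to+\infty}x_n$ for a monotone increasing sequence $\myset{x_n}$, can be illustrated numerically. The sketch below uses an arbitrary illustrative sequence $x_n=1-2^{-n}$ (not from the text) and truncates the series at a large index.

```python
# Numerical sanity check of the Abelian limit used in the proof:
# for a monotone increasing bounded sequence x_n, the weighted means
# (1 - lambda) * sum_{n>=1} lambda^n * x_n increase in lambda and
# tend to lim_n x_n as lambda -> 1.

def abel_mean(lam, x):
    # truncated value of (1 - lam) * sum_{n>=1} lam^n * x_n
    return (1 - lam) * sum(lam ** n * xn for n, xn in enumerate(x, start=1))

# illustrative monotone sequence x_n = 1 - 2^{-n}, with limit 1
x = [1 - 2.0 ** (-n) for n in range(1, 5001)]

means = [abel_mean(lam, x) for lam in (0.7, 0.9, 0.99, 0.999)]
assert all(m1 < m2 for m1, m2 in zip(means, means[1:]))  # monotone in lambda
assert abs(means[-1] - 1) < 0.01                         # close to the limit
```

This is only a consistency illustration of Proposition \ref{prop_ele1}, not part of the proof.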
Next, we prove Theorem \ref{SG_app_thm_conv_main}.
\begin{proof}[Proof of Theorem \ref{SG_app_thm_conv_main}]
First, we check (\ref{def_Mosco_2}) in Definition \ref{def_Mosco}. For all $u\in L^2(K;\nu)$, let $u_n=u$ for all $n\ge1$, then $u_n$ converges trivially to $u$ in $L^2(K;\nu)$ and by Theorem \ref{SG_app_thm_incre}, we have
$$\mathfrak{E}_{\mathrm{loc}}(u,u)=\lim_{n\to+\infty}(1-5^{-1}\cdot2^{\beta_n})E_{\beta_n}(u,u)=\lim_{n\to+\infty}(1-5^{-1}\cdot2^{\beta_n})E_{\beta_n}(u_n,u_n).$$
Then, we check (\ref{def_Mosco_1}) in Definition \ref{def_Mosco}. Let $\myset{u_n}\subseteq L^2(K;\nu)$ converge weakly to $u\in L^2(K;\nu)$. For all $m\ge1$, by Corollary \ref{cor_Mosco}, we have
$$(1-5^{-1}\cdot2^{\beta_m})E_{\beta_m}(u,u)\le\varliminf_{n\to+\infty}(1-5^{-1}\cdot2^{\beta_m})E_{\beta_m}(u_n,u_n).$$
By Theorem \ref{SG_app_thm_incre}, for all $n\ge m$, we have
$$(1-5^{-1}\cdot2^{\beta_m})E_{\beta_m}(u_n,u_n)\le(1-5^{-1}\cdot2^{\beta_n})E_{\beta_n}(u_n,u_n),$$
hence
$$(1-5^{-1}\cdot2^{\beta_m})E_{\beta_m}(u,u)\le\varliminf_{n\to+\infty}(1-5^{-1}\cdot2^{\beta_m})E_{\beta_m}(u_n,u_n)\le\varliminf_{n\to+\infty}(1-5^{-1}\cdot2^{\beta_n})E_{\beta_n}(u_n,u_n).$$
By Theorem \ref{SG_app_thm_incre} again, we have
$$\mathfrak{E}_{\mathrm{loc}}(u,u)=\lim_{m\to+\infty}(1-5^{-1}\cdot2^{\beta_m})E_{\beta_m}(u,u)\le\varliminf_{n\to+\infty}(1-5^{-1}\cdot2^{\beta_n})E_{\beta_n}(u_n,u_n).$$
Hence $(1-5^{-1}\cdot2^{\beta_n})E_{\beta_n}$ converges to $\mathfrak{E}_{\mathrm{loc}}$ in the sense of Mosco.
\end{proof}
Mosco convergence in Theorem \ref{SG_app_thm_conv_main} implies that appropriate time-changed jump processes approximate the diffusion at least in the sense of finite-dimensional distributions.
\section{Proof of Theorem \ref{SG_app_thm_trace_main}}
Similarly to Lemma \ref{SG_app_lem_equiv1}, we have the following result for the unit interval: for all $u\in L^2(I)$, we have
$$\int_{I}\int_I\frac{(u(x)-u(y))^2}{|x-y|^{1+\beta}}\mathrm{d} x\mathrm{d} y\asymp\sum_{n=0}^\infty2^{n}2^{\beta n}\int_I\int_{B(x,2^{-n})}(u(x)-u(y))^2\mathrm{d} y\mathrm{d} x.$$
Combining this result with Equation (\ref{SG_app_eqn_interval}), we obtain that for all $u\in C(I)$
$$\sum_{n=1}^\infty2^{-n}2^{\beta n}\sum_{i=0}^{2^n-1}(u(\frac{i}{2^n})-u(\frac{i+1}{2^n}))^2\asymp\sum_{n=0}^\infty2^{n}2^{\beta n}\int_I\int_{B(x,2^{-n})}(u(x)-u(y))^2\mathrm{d} y\mathrm{d} x.$$
Since $\beta_1\in(\alpha_1,\beta_1^*)$, we have
$$\beta_2=\beta_1-(\alpha_1-\alpha_2)\in(\alpha_2,\beta_1^*-\alpha_1+\alpha_2)\subseteq(\alpha_2,\beta_2^*).$$
For all $u\in B^{2,2}_{\alpha_1,\beta_1}(K)$, we have $u\in C(K)$, hence $u|_I\in C(I)$. Note that
$$\sum_{n=1}^\infty2^{-\alpha_2n}2^{\beta_2 n}\sum_{i=0}^{2^n-1}(u(\frac{i}{2^n})-u(\frac{i+1}{2^n}))^2\le\sum_{n=1}^\infty2^{-\alpha_1 n}2^{\beta_1 n}\sum_{w\in W_n}\sum_{p,q\in V_w}(u(p)-u(q))^2<+\infty,$$
hence $u|_I\in B^{2,2}_{\alpha_2,\beta_2}(I)$.
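The interval inclusion for the traced exponent can be verified numerically with the standard values $\alpha_1=\log3/\log2$, $\beta_1^*=\log5/\log2$ for the SG and, as assumptions consistent with the usual conventions, $\alpha_2=1$, $\beta_2^*=2$ for the unit interval:

```python
import math

alpha1 = math.log(3) / math.log(2)      # Hausdorff dimension of the SG
alpha2 = 1.0                            # dimension of the unit interval I (assumed)
beta1_star = math.log(5) / math.log(2)  # walk dimension of the SG
beta2_star = 2.0                        # walk dimension of I (assumed)

# For any beta1 in (alpha1, beta1*), the traced exponent
# beta2 = beta1 - (alpha1 - alpha2) lies in (alpha2, beta2*).
upper = beta1_star - alpha1 + alpha2
assert upper <= beta2_star              # interval inclusion used in the text

beta1 = (alpha1 + beta1_star) / 2       # an illustrative choice of beta1
beta2 = beta1 - (alpha1 - alpha2)
assert alpha2 < beta2 < beta2_star
```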
\section{Proof of Theorem \ref{SG_app_thm_jumping_kernel}}
Firstly, we construct equivalent semi-norms with jumping kernels that converge exactly to the local Dirichlet form.
For all $\beta\in(\alpha,\beta^*)$, $((1-5^{-1}\cdot2^{\beta})E_\beta,\mathcal{F}_{\beta})$ is a non-local regular Dirichlet form on $L^2(K;\nu)$. By the Beurling-Deny formula, there exists a unique jumping measure $J_\beta$ on $K\times K\backslash\mathrm{diag}$ such that for all $u\in\mathcal{F}_{\beta}$, we have
$$(1-5^{-1}\cdot2^\beta)E_{\beta}(u,u)=\iint_{K\times K\backslash\mathrm{diag}}(u(x)-u(y))^2J_\beta(\mathrm{d} x\mathrm{d} y).$$
It is obvious that
$$J_{\beta}(\mathrm{d} x\mathrm{d} y)=(1-5^{-1}\cdot2^{\beta})\sum_{n=1}^\infty2^{(\beta-\alpha)n}\sum_{w\in W_n}\sum_{p,q\in V_w}\delta_p(\mathrm{d} x)\delta_q(\mathrm{d} y),$$
where $\delta_p,\delta_q$ are Dirac measures at $p,q$, respectively. Hence $J_\beta$ is singular with respect to $\nu\times\nu$ and no jumping kernel exists. Note that
\begin{align*}
&\sum_{n=1}^\infty2^{(\beta-\alpha)n}\sum_{w\in W_n}\sum_{p,q\in V_w}(u(p)-u(q))^2\\
=&\iint_{K\times K\backslash\mathrm{diag}}(u(x)-u(y))^2\left(\sum_{n=1}^\infty2^{(\beta-\alpha)n}\sum_{w\in W_n}\sum_{p,q\in V_w}\delta_p(\mathrm{d} x)\delta_q(\mathrm{d} y)\right),
\end{align*}
where
\begin{align*}
&\sum_{n=1}^\infty2^{(\beta-\alpha)n}\sum_{w\in W_n}\sum_{p,q\in V_w}\delta_p(\mathrm{d} x)\delta_q(\mathrm{d} y)=\sum_{n=1}^\infty\sum_{w\in W_n}\sum_{p,q\in V_w}\frac{|p-q|^{2\alpha}}{|p-q|^{\alpha+\beta}}\delta_p(\mathrm{d} x)\delta_q(\mathrm{d} y)\\
=&\frac{1}{|x-y|^{\alpha+\beta}}\sum_{n=1}^\infty2^{-2\alpha n}\sum_{w\in W_n}\sum_{p,q\in V_w}\delta_p(\mathrm{d} x)\delta_q(\mathrm{d} y)=\frac{1}{|x-y|^{\alpha+\beta}}J(\mathrm{d} x\mathrm{d} y),
\end{align*}
and
$$J(\mathrm{d} x\mathrm{d} y)=\sum_{n=1}^\infty2^{-2\alpha n}\sum_{w\in W_n}\sum_{p,q\in V_w}\delta_p(\mathrm{d} x)\delta_q(\mathrm{d} y).$$
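The rewriting of the weight $2^{(\beta-\alpha)n}$ above uses only the identity $2^{(\beta-\alpha)n}=|p-q|^{2\alpha}/|p-q|^{\alpha+\beta}$ for $|p-q|=2^{-n}$, which can be checked numerically; the value of $\beta$ below is an arbitrary illustrative choice in $(\alpha,\beta^*)$.

```python
import math

alpha = math.log(3) / math.log(2)   # Hausdorff dimension of the SG
beta = 2.0                          # illustrative exponent in (alpha, beta*)

for n in range(1, 10):
    d = 2.0 ** (-n)                 # |p - q| for p, q in V_w, w in W_n
    lhs = 2.0 ** ((beta - alpha) * n)
    rhs = d ** (2 * alpha) / d ** (alpha + beta)
    assert abs(lhs - rhs) < 1e-9 * lhs
```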
\begin{myprop}\label{SG_app_prop_kernel1}
Let
$$c_i(x,y)=\sum_{n=1}^\infty2^{-2\alpha n}\sum_{w\in W_n}\sum_{p,q\in V_w}\frac{1}{\nu(K^{(i)}_{p,n})\nu(K^{(i)}_{q,n})}1_{K^{(i)}_{p,n}}(x)1_{K^{(i)}_{q,n}}(y),$$
then for all $u\in C(K)$, we have
\begin{equation}\label{SG_app_eqn_kernel1}
\begin{aligned}
&\left(1-C(\frac{2^{\alpha-\gamma i}}{1-2^{\alpha-\gamma i}}+\frac{2^{\alpha-\frac{\beta-\alpha}{2}\gamma i}}{1-2^{\alpha-\frac{\beta-\alpha}{2}\gamma i}})\right)\sum_{n=1}^\infty 2^{(\beta-\alpha)n}\sum_{w\in W_n}\sum_{p,q\in V_w}(u(p)-u(q))^2\\
\le&\iint_{K\times K\backslash\mathrm{diag}}\frac{c_i(x,y)(u(x)-u(y))^2}{|x-y|^{\alpha+\beta}}\nu(\mathrm{d} x)\nu(\mathrm{d} y)\\
\le&\left(1+C(\frac{2^{\alpha-\gamma i}}{1-2^{\alpha-\gamma i}}+\frac{2^{\alpha-\frac{\beta-\alpha}{2}\gamma i}}{1-2^{\alpha-\frac{\beta-\alpha}{2}\gamma i}})\right)\sum_{n=1}^\infty 2^{(\beta-\alpha)n}\sum_{w\in W_n}\sum_{p,q\in V_w}(u(p)-u(q))^2.
\end{aligned}
\end{equation}
\end{myprop}
\begin{proof}
Note that
\begin{align*}
&\iint_{K\times K\backslash\mathrm{diag}}\frac{c_i(x,y)(u(x)-u(y))^2}{|x-y|^{\alpha+\beta}}\nu(\mathrm{d} x)\nu(\mathrm{d} y)\\
=&\sum_{n=1}^\infty2^{-2\alpha n}\sum_{w\in W_n}\sum_{p,q\in V_w}\frac{1}{\nu(K^{(i)}_{p,n})\nu(K^{(i)}_{q,n})}\int_{K^{(i)}_{p,n}}\int_{K^{(i)}_{q,n}}\frac{(u(x)-u(y))^2}{|x-y|^{\alpha+\beta}}\nu(\mathrm{d} x)\nu(\mathrm{d} y).
\end{align*}
Since
$$\frac{1}{\nu(K^{(i)}_{p,n})\nu(K^{(i)}_{q,n})}1_{K^{(i)}_{p,n}}(x)1_{K^{(i)}_{q,n}}(y)\nu(\mathrm{d} x)\nu(\mathrm{d} y)\text{ converges weakly to }\delta_p(\mathrm{d} x)\delta_q(\mathrm{d} y),$$
for all $u\in C(K)$, we have
\begin{align*}
&\lim_{i\to+\infty}\frac{1}{\nu(K^{(i)}_{p,n})\nu(K^{(i)}_{q,n})}\int_{K^{(i)}_{p,n}}\int_{K^{(i)}_{q,n}}\frac{(u(x)-u(y))^2}{|x-y|^{\alpha+\beta}}\nu(\mathrm{d} x)\nu(\mathrm{d} y)\\
=&\frac{(u(p)-u(q))^2}{|p-q|^{\alpha+\beta}}=2^{(\alpha+\beta)n}(u(p)-u(q))^2.
\end{align*}
By Fatou's lemma, we have
\begin{align*}
&\sum_{n=1}^\infty 2^{(\beta-\alpha)n}\sum_{w\in W_n}\sum_{p,q\in V_w}(u(p)-u(q))^2\\
\le&\varliminf_{i\to+\infty}\iint_{K\times K\backslash\mathrm{diag}}\frac{c_i(x,y)(u(x)-u(y))^2}{|x-y|^{\alpha+\beta}}\nu(\mathrm{d} x)\nu(\mathrm{d} y).
\end{align*}
If the LHS is $+\infty$, then $E(u)=+\infty$, and the limit on the RHS exists and equals $+\infty$. Hence, we may assume that $E(u)<+\infty$. By Lemma \ref{SG_app_lem_holder}, we have
$$|u(x)-u(y)|^2\le cE(u)|x-y|^{\beta-\alpha}\text{ for all }x,y\in K.$$
Consider
\begin{align*}
&\Big\lvert\frac{1}{\nu(K^{(i)}_{p,n})\nu(K^{(i)}_{q,n})}\int_{K^{(i)}_{p,n}}\int_{K^{(i)}_{q,n}}\frac{(u(x)-u(y))^2}{|x-y|^{\alpha+\beta}}\nu(\mathrm{d} x)\nu(\mathrm{d} y)-\frac{(u(p)-u(q))^2}{|p-q|^{\alpha+\beta}}\Big\rvert\\
\le&\frac{1}{\nu(K^{(i)}_{p,n})\nu(K^{(i)}_{q,n})}\int_{K^{(i)}_{p,n}}\int_{K^{(i)}_{q,n}}\Big\lvert\frac{(u(x)-u(y))^2}{|x-y|^{\alpha+\beta}}-\frac{(u(p)-u(q))^2}{|p-q|^{\alpha+\beta}}\Big\rvert\nu(\mathrm{d} x)\nu(\mathrm{d} y).
\end{align*}
For all $x\in K^{(i)}_{p,n}, y\in K^{(i)}_{q,n}$, we have
\begin{align*}
&\Big\lvert\frac{(u(x)-u(y))^2}{|x-y|^{\alpha+\beta}}-\frac{(u(p)-u(q))^2}{|p-q|^{\alpha+\beta}}\Big\rvert\\
\le&\frac{1}{|x-y|^{\alpha+\beta}|p-q|^{\alpha+\beta}}\left((u(x)-u(y))^2\lvert |p-q|^{\alpha+\beta}-|x-y|^{\alpha+\beta}\rvert\right.\\
&\left.+\lvert(u(x)-u(y))^2-(u(p)-u(q))^2\rvert\cdot |x-y|^{\alpha+\beta}\right).
\end{align*}
Since
\begin{align*}
&|p-q|\ge|x-y|\ge|p-q|-|x-p|-|y-q|\\
\ge&|p-q|-\frac{2}{2^{\gamma ni}}|p-q|=\left(1-\frac{2}{2^{\gamma ni}}\right)|p-q|,
\end{align*}
for all $i\ge2$, we have $|x-y|\ge|p-q|/2$; moreover, for all $i\ge1$, we have
$$|p-q|^{\alpha+\beta}\ge|x-y|^{\alpha+\beta}\ge\left(1-\frac{2}{2^{\gamma ni}}\right)^{\alpha+\beta}|p-q|^{\alpha+\beta}.$$
Therefore, we have
$$\frac{1}{|x-y|^{\alpha+\beta}|p-q|^{\alpha+\beta}}\le\frac{2^{\alpha+\beta}}{|p-q|^{2(\alpha+\beta)}}=2^{\alpha+\beta}2^{2(\alpha+\beta)n},$$
\begin{align*}
&(u(x)-u(y))^2\le cE(u)|x-y|^{\beta-\alpha}\\
\le&cE(u)|p-q|^{\beta-\alpha}=cE(u)2^{-(\beta-\alpha)n},
\end{align*}
\begin{align*}
\lvert |p-q|^{\alpha+\beta}-|x-y|^{\alpha+\beta}\rvert&\le|p-q|^{\alpha+\beta}\left[1-(1-\frac{2}{2^{\gamma ni}})^{\alpha+\beta}\right]\\
&\le 2(\alpha+\beta)2^{-(\alpha+\beta)n-\gamma ni},
\end{align*}
\begin{align*}
&\lvert(u(x)-u(y))^2-(u(p)-u(q))^2\rvert\\
=&\lvert(u(x)-u(y))+(u(p)-u(q))\rvert\cdot\lvert(u(x)-u(y))-(u(p)-u(q))\rvert\\
\le&\left(|u(x)-u(y)|+|u(p)-u(q)|\right)\left(|u(x)-u(p)|+|u(y)-u(q)|\right)\\
\le& cE(u)\left(|x-y|^{\frac{\beta-\alpha}{2}}+|p-q|^{\frac{\beta-\alpha}{2}}\right)\left(|x-p|^{\frac{\beta-\alpha}{2}}+|y-q|^{\frac{\beta-\alpha}{2}}\right)\\
\le& 4cE(u)2^{-\frac{\beta-\alpha}{2}(2n+\gamma ni)},
\end{align*}
$$|x-y|^{\alpha+\beta}\le|p-q|^{\alpha+\beta}=2^{-(\alpha+\beta)n}.$$
It follows that
$$\Big\lvert\frac{(u(x)-u(y))^2}{|x-y|^{\alpha+\beta}}-\frac{(u(p)-u(q))^2}{|p-q|^{\alpha+\beta}}\Big\rvert\le2^{\alpha+\beta}cE(u)\left(2(\alpha+\beta)2^{2\alpha n-\gamma ni}+4\cdot 2^{2\alpha n-\frac{\beta-\alpha}{2}\gamma ni}\right),$$
and, hence,
\begin{align*}
&\Big|\sum_{n=1}^\infty 2^{(\beta-\alpha)n}\sum_{w\in W_n}\sum_{p,q\in V_w}(u(p)-u(q))^2\\
&-\sum_{n=1}^\infty2^{-2\alpha n}\sum_{w\in W_n}\sum_{p,q\in V_w}\frac{1}{\nu(K^{(i)}_{p,n})\nu(K^{(i)}_{q,n})}\int_{K^{(i)}_{p,n}}\int_{K^{(i)}_{q,n}}\frac{(u(x)-u(y))^2}{|x-y|^{\alpha+\beta}}\nu(\mathrm{d} x)\nu(\mathrm{d} y)\Big|\\
\le&\sum_{n=1}^\infty 2^{-2\alpha n}2^{\alpha n}2^{\alpha+\beta}cE(u)\left(2(\alpha+\beta)2^{2\alpha n-\gamma ni}+4\cdot 2^{2\alpha n-\frac{\beta-\alpha}{2}\gamma ni}\right)\\
\le& CE(u)\sum_{n=1}^\infty\left(2^{\alpha n-\gamma ni}+2^{\alpha n-\frac{\beta-\alpha}{2}\gamma ni}\right)=CE(u)\sum_{n=1}^\infty\left(2^{(\alpha-\gamma i)n}+2^{(\alpha-\frac{\beta-\alpha}{2}\gamma i)n}\right).
\end{align*}
Choose $\gamma\ge1$ such that $\alpha-\gamma<0$ and $\alpha-\frac{\beta-\alpha}{2}\gamma<0$, then
$$\sum_{n=1}^\infty\left(2^{(\alpha-\gamma i)n}+2^{(\alpha-\frac{\beta-\alpha}{2}\gamma i)n}\right)=\frac{2^{\alpha-\gamma i}}{1-2^{\alpha-\gamma i}}+\frac{2^{\alpha-\frac{\beta-\alpha}{2}\gamma i}}{1-2^{\alpha-\frac{\beta-\alpha}{2}\gamma i}}\to0,$$
as $i\to+\infty$. Hence
\begin{align*}
&\Big|\iint_{K\times K\backslash\mathrm{diag}}\frac{c_i(x,y)(u(x)-u(y))^2}{|x-y|^{\alpha+\beta}}\nu(\mathrm{d} x)\nu(\mathrm{d} y)-\sum_{n=1}^\infty 2^{(\beta-\alpha)n}\sum_{w\in W_n}\sum_{p,q\in V_w}(u(p)-u(q))^2\Big|\\
\le& CE(u)\left(\frac{2^{\alpha-\gamma i}}{1-2^{\alpha-\gamma i}}+\frac{2^{\alpha-\frac{\beta-\alpha}{2}\gamma i}}{1-2^{\alpha-\frac{\beta-\alpha}{2}\gamma i}}\right),
\end{align*}
hence we have Equation (\ref{SG_app_eqn_kernel1}).
\end{proof}
Secondly, we do appropriate cutoff to have bounded jumping kernels.
\begin{myprop}\label{SG_app_prop_kernel2}
Let $\myset{\beta_i}\subseteq(\alpha,\beta^*)$ be a sequence with $\beta_i\uparrow\beta^*$, and let
$$C_i(x,y)=\sum_{n=1}^{\Phi(i)}2^{-2\alpha n}\sum_{w\in W_n}\sum_{p,q\in V_w}\frac{1}{\nu(K^{(i)}_{p,n})\nu(K^{(i)}_{q,n})}1_{K^{(i)}_{p,n}}(x)1_{K^{(i)}_{q,n}}(y),$$
where $\Phi:\mathbb{N}\to\mathbb{N}$ is increasing and $(1-5^{-1}\cdot2^{\beta_i})\Phi(i)\ge i$ for all $i\ge1$. Then for all $u\in\mathfrak{F}_{\mathrm{loc}}$, we have
$$\lim_{i\to+\infty}(1-5^{-1}\cdot 2^{\beta_i})\iint_{K\times K\backslash\mathrm{diag}}\frac{C_i(x,y)(u(x)-u(y))^2}{|x-y|^{\alpha+\beta_i}}\nu(\mathrm{d} x)\nu(\mathrm{d} y)=\mathfrak{E}_{\mathrm{loc}}(u,u).$$
\end{myprop}
\begin{proof}
By the proof of Proposition \ref{SG_app_prop_kernel1}, for all $u\in\mathfrak{F}_{\mathrm{loc}}$, we have
\begin{align*}
&\Big\lvert\iint_{K\times K\backslash\mathrm{diag}}\frac{C_i(x,y)(u(x)-u(y))^2}{|x-y|^{\alpha+\beta_i}}\nu(\mathrm{d} x)\nu(\mathrm{d} y)-\sum_{n=1}^{\Phi(i)}2^{(\beta_i-\alpha)n}\sum_{w\in W_n}\sum_{p,q\in V_w}(u(p)-u(q))^2\Big\rvert\\
\le& CE_{\beta_i}(u,u)\sum_{n=1}^{\Phi(i)}\left(2^{(\alpha-\gamma i)n}+2^{(\alpha-\frac{\beta_i-\alpha}{2}\gamma i)n}\right)\le CE_{\beta_i}(u,u)\left(\frac{2^{\alpha-\gamma i}}{1-2^{\alpha-\gamma i}}+\frac{2^{\alpha-\frac{\beta_i-\alpha}{2}\gamma i}}{1-2^{\alpha-\frac{\beta_i-\alpha}{2}\gamma i}}\right),
\end{align*}
hence
\begin{align*}
&\Big\lvert(1-5^{-1}\cdot2^{\beta_i})\iint_{K\times K\backslash\mathrm{diag}}\frac{C_i(x,y)(u(x)-u(y))^2}{|x-y|^{\alpha+\beta_i}}\nu(\mathrm{d} x)\nu(\mathrm{d} y)\\
&-(1-5^{-1}\cdot2^{\beta_i})\sum_{n=1}^{\Phi(i)}2^{(\beta_i-\alpha)n}\sum_{w\in W_n}\sum_{p,q\in V_w}(u(p)-u(q))^2\Big\rvert\\
\le& C(1-5^{-1}\cdot2^{\beta_i})E_{\beta_i}(u,u)\left(\frac{2^{\alpha-\gamma i}}{1-2^{\alpha-\gamma i}}+\frac{2^{\alpha-\frac{\beta_i-\alpha}{2}\gamma i}}{1-2^{\alpha-\frac{\beta_i-\alpha}{2}\gamma i}}\right)\to0,
\end{align*}
as $i\to+\infty$. Hence we only need to show that for all $u\in\mathfrak{F}_{\mathrm{loc}}$
$$\lim_{i\to+\infty}(1-5^{-1}\cdot2^{\beta_i})\sum_{n=1}^{\Phi(i)}2^{(\beta_i-\alpha)n}\sum_{w\in W_n}\sum_{p,q\in V_w}(u(p)-u(q))^2=\mathfrak{E}_{\mathrm{loc}}(u,u).$$
Let $\lambda_i=5^{-1}\cdot 2^{\beta_i}$, then $\myset{\lambda_i}\subseteq(3/5,1)$ and $\lambda_i\uparrow1$. We use the notation from the proof of Theorem \ref{SG_app_thm_incre}. We only need to show that for all $u\in\mathfrak{F}_{\mathrm{loc}}$
$$\lim_{i\to+\infty}(1-\lambda_i)\sum_{n=1}^{\Phi(i)}\lambda_i^nx_n=\lim_{n\to+\infty}x_n.$$
It is obvious that
$$\varlimsup_{i\to+\infty}(1-\lambda_i)\sum_{n=1}^{\Phi(i)}\lambda_i^nx_n\le\varlimsup_{i\to+\infty}(1-\lambda_i)\sum_{n=1}^{\infty}\lambda_i^nx_n=\lim_{n\to+\infty}x_n.$$
On the other hand, for all $A<\lim_{n\to+\infty}x_n$, there exists some positive integer $N_0\ge1$ such that for all $n>N_0$, we have $x_n>A$, hence
\begin{align*}
&(1-\lambda_i)\sum_{n=1}^{\Phi(i)}\lambda_i^nx_n\ge(1-\lambda_i)\sum_{n=N_0+1}^{\Phi(i)}\lambda_i^nA\\
=&(1-\lambda_i)\frac{\lambda_i^{N_0+1}\left(1-\lambda_i^{\Phi(i)-N_0}\right)}{1-\lambda_i}A=\lambda_i^{N_0+1}\left(1-\lambda_i^{\Phi(i)-N_0}\right)A.\\
\end{align*}
It is obvious that $\lambda_i^{N_0}\to1$ as $i\to+\infty$. Since $(1-\lambda_i)\Phi(i)\ge i$, we have
\begin{align*}
\lambda_i^{-\Phi(i)}&=(1+\lambda_i^{-1}-1)^{\Phi(i)}=\left[(1+\lambda_i^{-1}-1)^{\frac{1}{1-\lambda_i}}\right]^{(1-\lambda_i)\Phi(i)}\\
&\ge\left[(1+\lambda_i^{-1}-1)^{\frac{\lambda_i^{-1}}{\lambda_i^{-1}-1}}\right]^{i}\to+\infty.
\end{align*}
Hence
$$\varliminf_{i\to+\infty}(1-\lambda_i)\sum_{n=1}^{\Phi(i)}\lambda_i^nx_n\ge A.$$
Since $A<\lim_{n\to+\infty}x_n$ is arbitrary, we have
$$\lim_{i\to+\infty}(1-\lambda_i)\sum_{n=1}^{\Phi(i)}\lambda_i^nx_n=\lim_{n\to+\infty}x_n.$$
\end{proof}
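The last step of this proof relies on $\lambda_i^{\Phi(i)}\to0$ whenever $\lambda_i\uparrow1$ and $(1-\lambda_i)\Phi(i)\ge i$. A numerical illustration, with the hypothetical choices $\lambda_i=1-1/i$ and $\Phi(i)=i^2$ (so that $(1-\lambda_i)\Phi(i)=i$ exactly):

```python
# Check lambda_i^{Phi(i)} -> 0 when (1 - lambda_i) * Phi(i) >= i.
# Illustrative choice: lambda_i = 1 - 1/i and Phi(i) = i^2.
vals = []
for i in range(2, 40):
    lam = 1 - 1 / i
    phi = i * i
    assert (1 - lam) * phi >= i - 1e-12   # hypothesis of the proposition
    vals.append(lam ** phi)               # approximately e^{-i}

assert all(v1 > v2 for v1, v2 in zip(vals, vals[1:]))  # decreasing
assert vals[-1] < 1e-15                                # tends to 0
```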
Now we give the proof of Theorem \ref{SG_app_thm_jumping_kernel}.
\begin{proof}[Proof of Theorem \ref{SG_app_thm_jumping_kernel}]
For all $u\in\mathfrak{F}_{\mathrm{loc}}$, by Proposition \ref{SG_app_prop_kernel2}, we have
$$\lim_{i\to+\infty}(1-5^{-1}\cdot 2^{\beta_i})\iint_{K\times K\backslash\mathrm{diag}}\frac{C_i(x,y)(u(x)-u(y))^2}{|x-y|^{\alpha+\beta_i}}\nu(\mathrm{d} x)\nu(\mathrm{d} y)=\mathfrak{E}_{\mathrm{loc}}(u,u).$$
By Theorem \ref{SG_app_thm_main} and Theorem \ref{SG_app_thm_incre}, we have
\begin{align*}
\frac{1}{C}\mathfrak{E}_{\mathrm{loc}}(u,u)&\le\varliminf_{i\to+\infty}(1-5^{-1}\cdot 2^{\beta_i})\iint_{K\times K\backslash\mathrm{diag}}\frac{(u(x)-u(y))^2}{|x-y|^{\alpha+\beta_i}}\nu(\mathrm{d} x)\nu(\mathrm{d} y)\\
&\le\varlimsup_{i\to+\infty}(1-5^{-1}\cdot 2^{\beta_i})\iint_{K\times K\backslash\mathrm{diag}}\frac{(u(x)-u(y))^2}{|x-y|^{\alpha+\beta_i}}\nu(\mathrm{d} x)\nu(\mathrm{d} y)\le C\mathfrak{E}_{\mathrm{loc}}(u,u),
\end{align*}
hence
$$\lim_{i\to+\infty}(1-\delta_i)(1-5^{-1}\cdot 2^{\beta_i})\iint_{K\times K\backslash\mathrm{diag}}\frac{(u(x)-u(y))^2}{|x-y|^{\alpha+\beta_i}}\nu(\mathrm{d} x)\nu(\mathrm{d} y)=0,$$
hence
$$\lim_{i\to+\infty}(1-5^{-1}\cdot 2^{\beta_i})\iint_{K\times K\backslash\mathrm{diag}}\frac{a_i(x,y)(u(x)-u(y))^2}{|x-y|^{\alpha+\beta_i}}\nu(\mathrm{d} x)\nu(\mathrm{d} y)=\mathfrak{E}_{\mathrm{loc}}(u,u).$$
It is obvious that $a_i=\delta_iC_i+(1-\delta_i)$ is bounded from above and below by positive constants.
\end{proof}
\chapter{Construction of Local Regular Dirichlet Form on the SG}\label{ch_SG_con}
This chapter is based on my work \cite{MY18}.
\section{Background and Statement}
We have used the local regular Dirichlet form $\mathfrak{E}_{\mathrm{loc}}$ given by Kigami in Chapter \ref{ch_SG_app}. Here we give another construction of a local regular Dirichlet form. This construction is much more complicated than Kigami's, but it can be applied to more general spaces. We will apply it to the SC in Chapter \ref{ch_SC_con}.
We use the notation for the SG introduced in Section \ref{sec_notion}.
Our main result is as follows.
\begin{mythm}\label{SG_con_thm_BM}
There exists a self-similar strongly local regular Dirichlet form $(\mathcal{E}_{\mathrm{loc}},\mathcal{F}_{\mathrm{loc}})$ on $L^2(K;\nu)$ satisfying
\begin{align*}
&\mathcal{E}_{\mathrm{loc}}(u,u)\asymp\sup_{n\ge1}\left(\frac{5}{3}\right)^n\sum_{w^{(1)}\sim_nw^{(2)}}\left(P_nu(w^{(1)})-P_nu(w^{(2)})\right)^2,\\
&\mathcal{F}_{\mathrm{loc}}=\myset{u\in L^2(K;\nu):\mathcal{E}_{\mathrm{loc}}(u,u)<+\infty}.
\end{align*}
\end{mythm}
\begin{myrmk}
This theorem was also proved by Kusuoka and Zhou \cite[Theorem 7.19, Example 8.4]{KZ92} using approximation of Markov chains. Here we use $\Gamma$-convergence of stable-like non-local closed forms.
\end{myrmk}
The Besov spaces $B^{2,2}_{\alpha,\beta}(K)$ and $B^{2,\infty}_{\alpha,\beta}(K)$ have the following equivalent semi-norms.
\begin{mylem}(\cite[Lemma 3.1]{HK06}, Lemma \ref{SG_app_lem_equiv1})\label{SG_con_lem_equiv}
For all $\beta\in(0,+\infty)$, for all $u\in L^2(K;\nu)$, we have
$$\mathcal{E}_\beta(u,u)\asymp\mathfrak{E}_\beta(u,u)\asymp[u]_{B^{2,2}_{\alpha,\beta}(K)},$$
$$\sup_{n\ge1}2^{(\beta-\alpha)n}\sum_{w^{(1)}\sim_nw^{(2)}}\left(P_nu(w^{(1)})-P_nu(w^{(2)})\right)^2\asymp[u]_{B^{2,\infty}_{\alpha,\beta}(K)},$$
where
\begin{align*}
\mathcal{E}_\beta(u,u)&=\int_K\int_K\frac{(u(x)-u(y))^2}{|x-y|^{\alpha+\beta}}\nu(\mathrm{d} x)\nu(\mathrm{d} y),\\
\mathfrak{E}_\beta(u,u)&=\sum_{n=1}^\infty2^{(\beta-\alpha)n}\sum_{w^{(1)}\sim_nw^{(2)}}\left(P_nu(w^{(1)})-P_nu(w^{(2)})\right)^2.
\end{align*}
\end{mylem}
We have the following two corollaries whose proofs are obvious by Lemma \ref{SG_con_lem_equiv} and the proof of Theorem \ref{SG_con_thm_BM}.
Firstly, we have the characterization of the domain of the local Dirichlet form.
\begin{mycor}\label{SG_con_cor_chara}
$\mathcal{F}_{\mathrm{loc}}=B^{2,\infty}_{\alpha,\beta^*}(K)$ and $\mathcal{E}_{\mathrm{loc}}(u,u)\asymp[u]_{B^{2,\infty}_{\alpha,\beta^*}(K)}$ for all $u\in\mathcal{F}_{\mathrm{loc}}$, where $\alpha=\log3/\log2$ is the Hausdorff dimension of the SG and $\beta^*=\log5/\log2$ is the walk dimension of the BM.
\end{mycor}
Secondly, we have the approximation of non-local Dirichlet forms to the local Dirichlet form.
\begin{mycor}\label{SG_con_cor_conv}
There exists some positive constant $C$ such that for all $u\in\mathcal{F}_{\mathrm{loc}}$
$$\frac{1}{C}\mathcal{E}_{\mathrm{loc}}(u,u)\le\varliminf_{\beta\uparrow\beta^*}(\beta^*-\beta)\mathcal{E}_\beta(u,u)\le\varlimsup_{\beta\uparrow\beta^*}(\beta^*-\beta)\mathcal{E}_\beta(u,u)\le C\mathcal{E}_{\mathrm{loc}}(u,u),$$
$$\frac{1}{C}\mathcal{E}_{\mathrm{loc}}(u,u)\le\varliminf_{\beta\uparrow\beta^*}(\beta^*-\beta)\mathfrak{E}_\beta(u,u)\le\varlimsup_{\beta\uparrow\beta^*}(\beta^*-\beta)\mathfrak{E}_\beta(u,u)\le C\mathcal{E}_{\mathrm{loc}}(u,u),$$
$$\frac{1}{C}\mathcal{E}_{\mathrm{loc}}(u,u)\le\varliminf_{\beta\uparrow\beta^*}(\beta^*-\beta)[u]_{B^{2,2}_{\alpha,\beta}(K)}\le\varlimsup_{\beta\uparrow\beta^*}(\beta^*-\beta)[u]_{B^{2,2}_{\alpha,\beta}(K)}\le C\mathcal{E}_{\mathrm{loc}}(u,u).$$
\end{mycor}
This chapter is organized as follows. In Section \ref{SG_con_sec_resist}, we give resistance estimates and introduce good functions. In Section \ref{SG_con_sec_monotone}, we give a weak monotonicity result. In Section \ref{SG_con_sec_BM}, we prove Theorem \ref{SG_con_thm_BM}.
\section{Resistance Estimates and Good Functions}\label{SG_con_sec_resist}
Firstly, we give resistance estimates.
For all $n\ge1$, let us introduce an energy on $X_n$ given by
$$E_n(u,u)=\sum_{w^{(1)}\sim_nw^{(2)}}\left(u(w^{(1)})-u(w^{(2)})\right)^2,\quad u\in l(W_n).$$
For all $w^{(1)},w^{(2)}\in W_n$ with $w^{(1)}\ne w^{(2)}$, we define the resistance
\begin{align*}
R_n(w^{(1)},w^{(2)})&=\left(\inf\myset{E_n(u,u):u(w^{(1)})=1,u(w^{(2)})=0,u\in l(W_n)}\right)^{-1}\\
&=\sup\myset{\frac{\left(u(w^{(1)})-u(w^{(2)})\right)^2}{E_n(u,u)}:E_n(u,u)\ne0,u\in l(W_n)}.
\end{align*}
It is obvious that
$$\left(u(w^{(1)})-u(w^{(2)})\right)^2\le R_n(w^{(1)},w^{(2)})E_n(u,u)\text{ for all }w^{(1)},w^{(2)}\in W_n,u\in l(W_n),$$
and $R_n$ is a metric on $W_n$, hence
$$R_n(w^{(1)},w^{(2)})\le R_n(w^{(1)},w^{(3)})+R_n(w^{(3)},w^{(2)})\text{ for all }w^{(1)},w^{(2)},w^{(3)}\in W_n.$$
\begin{mythm}\label{SG_con_thm_resist}
Considering effective resistances between any two of $0^n,1^n,2^n$, the electrical network $X_n$ is equivalent to the electrical network in Figure \ref{SG_con_fig_resist}, where
$$r_n=\frac{1}{2}\left(\frac{5}{3}\right)^n-\frac{1}{2}.$$
\begin{figure}[ht]
\centering
\begin{tikzpicture}
\draw (0,0)--(1.5,1.5/1.7320508076);
\draw (3,0)--(1.5,1.5/1.7320508076);
\draw (1.5,1.5*1.7320508076)--(1.5,1.5/1.7320508076);
\draw[fill=black] (0,0) circle (0.06);
\draw[fill=black] (3,0) circle (0.06);
\draw[fill=black] (1.5,1.5*1.7320508076) circle (0.06);
\draw[fill=black] (1.5,1.5/1.7320508076) circle (0.06);
\draw (0,-0.3) node {$0^n$};
\draw (3,-0.3) node {$1^n$};
\draw (1.5,1.5*1.7320508076+0.3) node {$2^n$};
\draw (1.5+0.3,0.75*1.7320508076+0.3) node {$r_{n}$};
\draw (0.7,0.7) node {$r_n$};
\draw (2.3,0.7) node {$r_n$};
\end{tikzpicture}
\caption{An Equivalent Electrical Network}\label{SG_con_fig_resist}
\end{figure}
\end{mythm}
\begin{proof}
Using the $\Delta$-Y transform directly, we have
$$r_1=\frac{1\cdot1}{1+1+1}=\frac{1}{3}=\frac{1}{2}\left(\frac{5}{3}\right)^1-\frac{1}{2}.$$
For $n+1$, using the $\Delta$-Y transform again, the electrical network $X_{n+1}$ is equivalent to the electrical networks in Figure \ref{SG_con_fig_resist1}.
\begin{figure}[ht]
\centering
\subfigure{
\begin{tikzpicture}[scale=0.5]
\draw (0,0)--(1.5,1.5/1.7320508076);
\draw (3,0)--(1.5,1.5/1.7320508076);
\draw (1.5,1.5*1.7320508076)--(1.5,1.5/1.7320508076);
\draw (0+6,0)--(1.5+6,1.5/1.7320508076);
\draw (3+6,0)--(1.5+6,1.5/1.7320508076);
\draw (1.5+6,1.5*1.7320508076)--(1.5+6,1.5/1.7320508076);
\draw (0+3,0+3*1.7320508076)--(1.5+3,1.5/1.7320508076+3*1.7320508076);
\draw (3+3,0+3*1.7320508076)--(1.5+3,1.5/1.7320508076+3*1.7320508076);
\draw (1.5+3,1.5*1.7320508076+3*1.7320508076)--(1.5+3,1.5/1.7320508076+3*1.7320508076);
\draw (1.5,1.5*1.7320508076)--(3,3*1.7320508076);
\draw (6,3*1.7320508076)--(7.5,1.5*1.7320508076);
\draw (3,0)--(6,0);
\draw[fill=black] (0,0) circle (0.06);
\draw[fill=black] (3,0) circle (0.06);
\draw[fill=black] (3/2,3/2*1.7320508076) circle (0.06);
\draw[fill=black] (1.5,1.5/1.7320508076) circle (0.06);
\draw[fill=black] (0+6,0) circle (0.06);
\draw[fill=black] (3+6,0) circle (0.06);
\draw[fill=black] (3/2+6,3/2*1.7320508076) circle (0.06);
\draw[fill=black] (1.5+6,1.5/1.7320508076) circle (0.06);
\draw[fill=black] (0+3,0+3*1.7320508076) circle (0.06);
\draw[fill=black] (3+3,0+3*1.7320508076) circle (0.06);
\draw[fill=black] (3/2+3,3/2*1.7320508076+3*1.7320508076) circle (0.06);
\draw[fill=black] (1.5+3,1.5/1.7320508076+3*1.7320508076) circle (0.06);
\draw (0,-0.3) node {\tiny{$0^{n+1}$}};
\draw (9,-0.3) node {\tiny{$1^{n+1}$}};
\draw (4.5,9/2*1.7320508076+0.3) node {\tiny{$2^{n+1}$}};
\draw (1.5+0.3,0.75*1.7320508076+0.4) node {\tiny{$r_{n}$}};
\draw (0.7,0.7) node {\tiny{$r_n$}};
\draw (2.3,0.7) node {\tiny{$r_n$}};
\draw (1.5+0.3+6,0.75*1.7320508076+0.4) node {\tiny{$r_{n}$}};
\draw (0.7+6,0.7) node {\tiny{$r_n$}};
\draw (2.3+6,0.7) node {\tiny{$r_n$}};
\draw (1.5+0.3+3,0.75*1.7320508076+0.4+3*1.7320508076) node {\tiny{$r_{n}$}};
\draw (0.7+3,0.7+3*1.7320508076) node {\tiny{$r_n$}};
\draw (2.3+3,0.7+3*1.7320508076) node {\tiny{$r_n$}};
\draw (4.5,0.3) node {\tiny{$1$}};
\draw (2.2,2.25*1.7320508076+0.3) node {\tiny{$1$}};
\draw (6.8,2.25*1.7320508076+0.3) node {\tiny{$1$}};
\end{tikzpicture}
}
\hspace{0.2in}
\subfigure{
\begin{tikzpicture}[scale=0.5]
\draw (0,0)--(4.5,4.5/1.7320508076);
\draw (9,0)--(4.5,4.5/1.7320508076);
\draw (4.5,4.5*1.7320508076)--(4.5,4.5/1.7320508076);
\draw[fill=black] (4.5,4.5/1.7320508076) circle (0.06);
\draw[fill=black] (0,0) circle (0.06);
\draw[fill=black] (1.5,1.5/1.7320508076) circle (0.06);
\draw[fill=black] (3+6,0) circle (0.06);
\draw[fill=black] (1.5+6,1.5/1.7320508076) circle (0.06);
\draw[fill=black] (3/2+3,3/2*1.7320508076+3*1.7320508076) circle (0.06);
\draw[fill=black] (1.5+3,1.5/1.7320508076+3*1.7320508076) circle (0.06);
\draw (0,-0.3) node {\tiny{$0^{n+1}$}};
\draw (9,-0.3) node {\tiny{$1^{n+1}$}};
\draw (4.5,9/2*1.7320508076+0.3) node {\tiny{$2^{n+1}$}};
\draw (0.7,0.7) node {\tiny{$r_n$}};
\draw (2.3+6,0.7) node {\tiny{$r_n$}};
\draw (1.5+0.3+3,0.75*1.7320508076+0.4+3*1.7320508076) node {\tiny{$r_{n}$}};
\draw (5.5,4.33012701892219323) node {\tiny{$\frac{2}{3}r_n+\frac{1}{3}$}};
\draw (2,2.3) node {\tiny{$\frac{2}{3}r_n+\frac{1}{3}$}};
\draw (7,2.3) node {\tiny{$\frac{2}{3}r_n+\frac{1}{3}$}};
\end{tikzpicture}
}
\caption{Equivalent Electrical Networks}\label{SG_con_fig_resist1}
\end{figure}
Hence
$$r_{n+1}=\frac{5}{3}r_n+\frac{1}{3}.$$
By an elementary calculation, we have
$$r_n=\frac{1}{2}\left(\frac{5}{3}\right)^n-\frac{1}{2}\text{ for all }n\ge1.$$
\end{proof}
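As a quick numerical sanity check outside the proof, one can verify that the closed form satisfies the recursion; the helper names below are ours, not from the text.

```python
# Sanity check (not part of the proof): the closed form
# r_n = (1/2)(5/3)^n - 1/2 satisfies r_{n+1} = (5/3) r_n + 1/3 with r_1 = 1/3.
def r_closed(n):
    return 0.5 * (5.0 / 3.0) ** n - 0.5

def r_iterated(n):
    r = 1.0 / 3.0  # r_1, the resistance of the level-1 trace network
    for _ in range(n - 1):
        r = (5.0 / 3.0) * r + 1.0 / 3.0
    return r

for n in range(1, 15):
    assert abs(r_closed(n) - r_iterated(n)) < 1e-8
```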
\begin{myrmk}\label{SG_con_rmk_resist012}
For all $n\ge1$, we have
$$R_n(0^n,1^n)=R_n(1^n,2^n)=R_n(0^n,2^n)=2r_n=\left(\frac{5}{3}\right)^n-1.$$
\end{myrmk}
\begin{myprop}\label{SG_con_prop_resist}
For all $n\ge1,w\in W_n$, we have
$$R_n(w,0^n),R_n(w,1^n),R_n(w,2^n)\le\frac{5}{2}\left(\frac{5}{3}\right)^n.$$
\end{myprop}
\begin{proof}
By symmetry, we only need to consider $R_n(w,0^n)$. Letting
$$w=w_1\ldots w_{n-2}w_{n-1}w_n,$$
we construct a finite sequence in $W_n$ as follows.
\begin{align*}
w^{(1)}&=w_1\ldots w_{n-2}w_{n-1}w_{n}=w,\\
w^{(2)}&=w_1\ldots w_{n-2}w_{n-1}w_{n-1},\\
w^{(3)}&=w_1\ldots w_{n-2}w_{n-2}w_{n-2},\\
&\ldots\\
w^{(n)}&=w_1\ldots w_1w_1w_1,\\
w^{(n+1)}&=0\ldots000.
\end{align*}
For all $i=1,\ldots,n-1$, by the cutting technique, we have
\begin{align*}
&R_n(w^{(i)},w^{(i+1)})\\
=&R_n(w_1\ldots w_{n-i-1}w_{n-i}w_{n-i+1}\ldots w_{n-i+1},w_1\ldots w_{n-i-1}w_{n-i}w_{n-i}\ldots w_{n-i})\\
\le& R_{i}(w_{n-i+1}\ldots w_{n-i+1},w_{n-i}\ldots w_{n-i})\le R_i(0^i,1^i)=\left(\frac{5}{3}\right)^i-1\le\left(\frac{5}{3}\right)^i.
\end{align*}
Since
$$R_n(w^{(n)},w^{(n+1)})=R_n(w_1^n,0^n)\le R_n(0^n,1^n)=\left(\frac{5}{3}\right)^n-1\le\left(\frac{5}{3}\right)^n,$$
we have
$$
R_n(w,0^n)=R_n(w^{(1)},w^{(n+1)})\le\sum_{i=1}^nR_n(w^{(i)},w^{(i+1)})\le\sum_{i=1}^n\left(\frac{5}{3}\right)^i\le\frac{5}{2}\left(\frac{5}{3}\right)^n.
$$
\end{proof}
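The geometric-sum bound that closes the proof above can be confirmed numerically; this is a sanity check of the arithmetic only, with names of our choosing.

```python
# Check: sum_{i=1}^n (5/3)^i = (5/2)((5/3)^n - 1) <= (5/2)(5/3)^n.
def geo_sum(n):
    return sum((5.0 / 3.0) ** i for i in range(1, n + 1))

for n in range(1, 25):
    closed = 2.5 * ((5.0 / 3.0) ** n - 1.0)
    assert abs(geo_sum(n) - closed) < 1e-6 * (5.0 / 3.0) ** n
    assert geo_sum(n) <= 2.5 * (5.0 / 3.0) ** n
```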
Secondly, we introduce \emph{good} functions with an energy property and a separation property.
Let $U^{(x_0,x_1,x_2)}$ and $\mathcal{U}$ be defined as in Section \ref{sec_SG}.
For all $n\ge1$, let
$$A_n(u)=E_n(P_nu,P_nu)=\sum_{w^{(1)}\sim_nw^{(2)}}\left(P_nu(w^{(1)})-P_nu(w^{(2)})\right)^2,u\in L^2(K;\nu).$$
For all $n\ge0$, let
$$B_n(u)=\sum_{w\in W_n}\sum_{p,q\in V_w}(u(p)-u(q))^2,u\in l(K).$$
By Theorem \ref{thm_SG_fun}, for all $U=U^{(x_0,x_1,x_2)}\in\mathcal{U},n\ge0$, we have
$$B_n(U)=\left(\frac{3}{5}\right)^n\left((x_0-x_1)^2+(x_1-x_2)^2+(x_0-x_2)^2\right).$$
We calculate $A_n(U)$ as follows.
\begin{mythm}\label{SG_con_thm_AnU}
For all $n\ge1$, we have
$$A_n(U)=\frac{2}{3}\left[\left(\frac{3}{5}\right)^n-\left(\frac{3}{5}\right)^{2n}\right]\left((x_0-x_1)^2+(x_1-x_2)^2+(x_0-x_2)^2\right).$$
\end{mythm}
\begin{myrmk}
The above result was also obtained in \cite[Theorem 3.1]{Str01}.
\end{myrmk}
\begin{proof}
We observe the following facts.
\begin{itemize}
\item Every pair $w^{(1)}\sim_nw^{(2)}$ of type \Rmnum{1} is of the form $wi,wj$ for some $w\in W_{n-1}$ and $i,j=0,1,2$ with $i\ne j$. On the other hand, for all $w\in W_{n-1}$ and $i,j=0,1,2$ with $i\ne j$, the pair $wi\sim_nwj$ is of type \Rmnum{1}.
\item For all $w^{(1)},w^{(2)}\in W_n$ such that
$$w^{(1)}=w^{(1)}_1\ldots w^{(1)}_n\sim_nw^{(2)}=w^{(2)}_1\ldots w^{(2)}_n$$
is of type \Rmnum{2}, there exists $k=1,\ldots,n-1$ such that $w^{(1)}_1\ldots w^{(1)}_k\sim_kw^{(2)}_1\ldots w^{(2)}_k$ is of type \Rmnum{1} and
\begin{align*}
w^{(2)}_{k}&=w^{(1)}_{k+1}=\ldots=w^{(1)}_{n},\\
w^{(1)}_{k}&=w^{(2)}_{k+1}=\ldots=w^{(2)}_{n}.
\end{align*}
On the other hand, for all $w^{(1)},w^{(2)}\in W_k$ such that
$$w^{(1)}_1\ldots w^{(1)}_k\sim_kw^{(2)}_1\ldots w^{(2)}_k$$
is of type \Rmnum{1}, the pair
$$w^{(1)}_1\ldots w^{(1)}_kw^{(2)}_k\ldots w^{(2)}_k\sim_nw^{(2)}_1\ldots w^{(2)}_kw^{(1)}_k\ldots w^{(1)}_{k}$$
is of type \Rmnum{2} for all $n=k+1,k+2,\ldots$.
\end{itemize}
It is obvious that for all $n\ge1,w\in W_n$, we have $V_{w}=\myset{P_{w0},P_{w1},P_{w2}}$ and
$$P_nU(w)=\frac{1}{\nu(K_w)}\int_{K_w}U(x)\nu(\mathrm{d} x)=\frac{1}{3}\left(U(P_{w0})+U(P_{w1})+U(P_{w2})\right).$$
\begin{figure}[ht]
\centering
\begin{tikzpicture}
\draw (0,0)--(4,0)--(2,2*1.7320508076)--cycle;
\draw (2,0)--(1,1.7320508076)--(3,1.7320508076)--cycle;
\draw[fill=black] (0,0) circle (0.06);
\draw[fill=black] (4,0) circle (0.06);
\draw[fill=black] (2,2*1.7320508076) circle (0.06);
\draw[fill=black] (1,1*1.7320508076) circle (0.06);
\draw[fill=black] (3,1*1.7320508076) circle (0.06);
\draw[fill=black] (2,0) circle (0.06);
\draw (0,-0.5) node {$P_{w0}$};
\draw (4,-0.5) node {$P_{w1}$};
\draw (2,2*1.7320508076+0.5) node {$P_{w2}$};
\draw (-0.1,1.7320508076) node {$P_{w02}=P_{w20}$};
\draw (2,-0.5) node {$P_{w01}=P_{w10}$};
\draw (4.1,1.7320508076) node {$P_{w12}=P_{w21}$};
\draw (2,1.7320508076+0.5) node {$K_{w2}$};
\draw (1,0.5) node {$K_{w0}$};
\draw (3,0.5) node {$K_{w1}$};
\end{tikzpicture}
\caption{Cells and Nodes}\label{SG_con_fig_AnU1}
\end{figure}
For all $n\ge1,w\in W_{n-1}$, we have
\begin{align*}
&P_{n}U(w0)=\frac{1}{3}\left(U(P_{w00})+U(P_{w01})+U(P_{w02})\right)\\
=&\frac{1}{3}\left(U(P_{w0})+\frac{2U(P_{w0})+2U(P_{w1})+U(P_{w2})}{5}+\frac{2U(P_{w0})+U(P_{w1})+2U(P_{w2})}{5}\right)\\
=&\frac{1}{3}\frac{9U(P_{w0})+3U(P_{w1})+3U(P_{w2})}{5}=\frac{3U(P_{w0})+U(P_{w1})+U(P_{w2})}{5}.
\end{align*}
Similarly
\begin{align*}
P_{n}U(w1)&=\frac{U(P_{w0})+3U(P_{w1})+U(P_{w2})}{5},\\
P_{n}U(w2)&=\frac{U(P_{w0})+U(P_{w1})+3U(P_{w2})}{5}.
\end{align*}
Hence
\begin{align*}
&\left(P_{n}U(w0)-P_{n}U(w1)\right)^2+\left(P_{n}U(w1)-P_{n}U(w2)\right)^2+\left(P_{n}U(w0)-P_{n}U(w2)\right)^2\\
=&\frac{4}{25}\left[\left(U(P_{w0})-U(P_{w1})\right)^2+\left(U(P_{w1})-U(P_{w2})\right)^2+\left(U(P_{w0})-U(P_{w2})\right)^2\right].
\end{align*}
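The $4/25$ contraction identity just derived can be confirmed numerically on random data; the function names here are ours, chosen for this check only.

```python
import random

# Check: if U takes values x0, x1, x2 on V_w, its averages over the three
# sub-cells are (3*xi + xj + xk)/5, and the sum of squared differences of
# the averages is (4/25) times that of the original values.
def cell_averages(x0, x1, x2):
    return ((3 * x0 + x1 + x2) / 5,
            (x0 + 3 * x1 + x2) / 5,
            (x0 + x1 + 3 * x2) / 5)

def energy_sum(a, b, c):
    return (a - b) ** 2 + (b - c) ** 2 + (a - c) ** 2

random.seed(0)
for _ in range(100):
    x0, x1, x2 = (random.uniform(-1, 1) for _ in range(3))
    p0, p1, p2 = cell_averages(x0, x1, x2)
    assert abs(energy_sum(p0, p1, p2) - (4 / 25) * energy_sum(x0, x1, x2)) < 1e-12
```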
\begin{figure}[ht]
\centering
\begin{tikzpicture}
\draw (0,0)--(4,0)--(2,2*1.7320508076)--cycle;
\draw (4,0)--(8,0)--(6,2*1.7320508076)--cycle;
\draw (2,0)--(3,1.7320508076);
\draw (2,2/1.7320508076) node {$K_{w^{(1)}}$};
\draw (2.9,1/1.7320508076-0.25) node {\tiny{$K_{w^{(1)}w^{(2)}_n}$}};
\draw (6,0)--(5,1.7320508076);
\draw (6,2/1.7320508076) node {$K_{w^{(2)}}$};
\draw (5.1,1/1.7320508076-0.25) node {\tiny{$K_{w^{(2)}w^{(1)}_n}$}};
\draw (0,-0.5) node {$a_2$};
\draw (2,2*1.7320508076+0.5) node {$a_1$};
\draw (8,-0.5) node {$b_2$};
\draw (6,2*1.7320508076+0.5) node {$b_1$};
\draw (4,-0.5) node {$c$};
\end{tikzpicture}
\caption{Adjacent Cells}\label{SG_con_fig_AnU2}
\end{figure}
Let $n\ge1$ and $w^{(1)}=w^{(1)}_1\ldots w^{(1)}_n\sim_nw^{(2)}=w^{(2)}_1\ldots w^{(2)}_n$. Assume that $U$ takes the values $a_1,a_2,c$ on $V_{w^{(1)}}$ and $b_1,b_2,c$ on $V_{w^{(2)}}$, see Figure \ref{SG_con_fig_AnU2}. By the above, we have
$$P_nU(w^{(1)})=\frac{a_1+a_2+c}{3},P_nU(w^{(2)})=\frac{b_1+b_2+c}{3},$$
$$P_{n+1}U(w^{(1)}w^{(2)}_n)=\frac{a_1+a_2+3c}{5},P_{n+1}U(w^{(2)}w^{(1)}_n)=\frac{b_1+b_2+3c}{5},$$
hence
$$P_nU(w^{(1)})-P_nU(w^{(2)})=\frac{1}{3}\left((a_1+a_2)-(b_1+b_2)\right),$$
$$P_{n+1}U(w^{(1)}w^{(2)}_n)-P_{n+1}U(w^{(2)}w^{(1)}_n)=\frac{1}{5}\left((a_1+a_2)-(b_1+b_2)\right).$$
Hence
$$P_{n+1}U(w^{(1)}w^{(2)}_n)-P_{n+1}U(w^{(2)}w^{(1)}_n)=\frac{3}{5}\left(P_nU(w^{(1)})-P_nU(w^{(2)})\right).$$
Therefore
\begin{align*}
A_n(U)&=\frac{4}{25}B_{n-1}(U)+\left(\frac{3}{5}\right)^2\left[\frac{4}{25}B_{n-2}(U)\right]+\ldots+\left(\frac{3}{5}\right)^{2(n-1)}\left[\frac{4}{25}B_{0}(U)\right]\\
&=\frac{4}{25}\left((x_0-x_1)^2+(x_1-x_2)^2+(x_0-x_2)^2\right)\\
&\cdot\left[\left(\frac{9}{25}\right)^0\left(\frac{3}{5}\right)^{n-1}+\left(\frac{9}{25}\right)^1\left(\frac{3}{5}\right)^{n-2}+\ldots+\left(\frac{9}{25}\right)^{n-1}\left(\frac{3}{5}\right)^{0}\right]\\
&=\frac{2}{3}\left[\left(\frac{3}{5}\right)^{n}-\left(\frac{3}{5}\right)^{2n}\right]\left((x_0-x_1)^2+(x_1-x_2)^2+(x_0-x_2)^2\right).
\end{align*}
\end{proof}
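The geometric sum in the last display can be checked against the closed form numerically; this is a sketch of the arithmetic only, with hypothetical helper names.

```python
# Check: (4/25) * sum_{k=0}^{n-1} (9/25)^k (3/5)^(n-1-k)
#        = (2/3) [ (3/5)^n - (3/5)^(2n) ].
def an_series(n):
    return (4 / 25) * sum((9 / 25) ** k * (3 / 5) ** (n - 1 - k) for k in range(n))

def an_closed(n):
    return (2 / 3) * ((3 / 5) ** n - (3 / 5) ** (2 * n))

for n in range(1, 20):
    assert abs(an_series(n) - an_closed(n)) < 1e-12
```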
\section{Weak Monotonicity Result}\label{SG_con_sec_monotone}
In this section, we give a weak monotonicity result using the resistance estimates.
For all $n\ge1$, let
$$D_n(u)=\left(\frac{5}{3}\right)^nA_n(u)=\left(\frac{5}{3}\right)^n\sum_{w^{(1)}\sim_nw^{(2)}}\left(P_nu(w^{(1)})-P_nu(w^{(2)})\right)^2,u\in L^2(K;\nu).$$
The weak monotonicity result is as follows.
\begin{mythm}\label{SG_con_thm_monotone}
There exists some positive constant $C$ such that
$$D_n(u)\le CD_{n+m}(u)\text{ for all }u\in L^2(K;\nu),n,m\ge1.$$
Indeed, we can take $C=36$.
\end{mythm}
\begin{myrmk}
In Kigami's construction, the energies are monotone, that is, the above inequality holds with $C=1$. Hence the above result is called weak monotonicity.
\end{myrmk}
Theorem \ref{SG_con_thm_monotone} can be reduced to the following graph version.
For all $n\ge1$, let
$$G_n(u)=\left(\frac{5}{3}\right)^nE_n(u,u)=\left(\frac{5}{3}\right)^n\sum_{w^{(1)}\sim_nw^{(2)}}\left(u(w^{(1)})-u(w^{(2)})\right)^2,u\in l(W_n).$$
For all $n,m\ge1$, let $M_{n,m}:l(W_{n+m})\to l(W_n)$ be a mean value operator given by
$$(M_{n,m}u)(w)=\frac{1}{3^m}\sum_{v\in W_m}u(wv),w\in W_n,u\in l(W_{n+m}).$$
\begin{mythm}\label{SG_con_thm_monotone_graph}
There exists some positive constant $C$ such that
$$G_n(M_{n,m}u)\le CG_{n+m}(u)\text{ for all }u\in l(W_{n+m}),n,m\ge1.$$
\end{mythm}
\begin{proof}[Proof of Theorem \ref{SG_con_thm_monotone} using Theorem \ref{SG_con_thm_monotone_graph}]
Note that $P_nu=M_{n,m}(P_{n+m}u)$, hence
\begin{align*}
D_n(u)&=\left(\frac{5}{3}\right)^n\sum_{w^{(1)}\sim_nw^{(2)}}\left(P_nu(w^{(1)})-P_nu(w^{(2)})\right)^2=G_n(P_nu)\\
&=G_n(M_{n,m}(P_{n+m}u))\le CG_{n+m}(P_{n+m}u)\\
&=C\left(\frac{5}{3}\right)^{n+m}\sum_{w^{(1)}\sim_{n+m}w^{(2)}}\left(P_{n+m}u(w^{(1)})-P_{n+m}u(w^{(2)})\right)^2=CD_{n+m}(u).
\end{align*}
\end{proof}
\begin{myrmk}
The constant in Theorem \ref{SG_con_thm_monotone} can be taken as the one in Theorem \ref{SG_con_thm_monotone_graph}.
\end{myrmk}
\begin{proof}[Proof of Theorem \ref{SG_con_thm_monotone_graph}]
Fix $n\ge1$ and assume that $W\subseteq W_n$ is connected, that is, for all $w^{(1)},w^{(2)}\in W$, there exists a finite sequence $\myset{v^{(1)},\ldots,v^{(k)}}\subseteq W$ with $v^{(1)}=w^{(1)},v^{(k)}=w^{(2)}$ and $v^{(i)}\sim_nv^{(i+1)}$ for all $i=1,\ldots,k-1$. Let
$$E_W(u,u)=
\sum_{\mbox{\tiny
$
\begin{subarray}{c}
w^{(1)},w^{(2)}\in W\\
w^{(1)}\sim_nw^{(2)}
\end{subarray}
$}}
(u(w^{(1)})-u(w^{(2)}))^2,u\in l(W).$$
For all $w^{(1)},w^{(2)}\in W$, let
\begin{align*}
R_W(w^{(1)},w^{(2)})&=\inf\myset{E_W(u,u):u(w^{(1)})=1,u(w^{(2)})=0,u\in l(W)}^{-1}\\
&=\sup\myset{\frac{(u(w^{(1)})-u(w^{(2)}))^2}{E_W(u,u)}:E_W(u,u)\ne0,u\in l(W)}.
\end{align*}
It is obvious that
$$\left(u(w^{(1)})-u(w^{(2)})\right)^2\le R_W(w^{(1)},w^{(2)})E_W(u,u)\text{ for all }w^{(1)},w^{(2)}\in W,u\in l(W),$$
and $R_W$ is a metric on $W$, hence
$$R_W(w^{(1)},w^{(2)})\le R_W(w^{(1)},w^{(3)})+R_W(w^{(3)},w^{(2)})\text{ for all }w^{(1)},w^{(2)},w^{(3)}\in W.$$
By definition, we have
\begin{align*}
G_n(M_{n,m}u)&=\left(\frac{5}{3}\right)^n\sum_{w^{(1)}\sim_nw^{(2)}}\left(M_{n,m}u(w^{(1)})-M_{n,m}u(w^{(2)})\right)^2\\
&=\left(\frac{5}{3}\right)^n\sum_{w^{(1)}\sim_nw^{(2)}}\left(\frac{1}{3^m}\sum_{v\in W_m}\left(u(w^{(1)}v)-u(w^{(2)}v)\right)\right)^2\\
&\le\left(\frac{5}{3}\right)^n\sum_{w^{(1)}\sim_nw^{(2)}}\frac{1}{3^m}\sum_{v\in W_m}\left(u(w^{(1)}v)-u(w^{(2)}v)\right)^2.
\end{align*}
Fix $w^{(1)}\sim_nw^{(2)}$; then there exist $i,j=0,1,2$ with $i\ne j$ such that $w^{(1)}i^m\sim_{n+m}w^{(2)}j^m$, see Figure \ref{SG_con_fig_monotone}.
\begin{figure}[ht]
\centering
\begin{tikzpicture}
\draw (0,0)--(4,0)--(2,2*1.7320508076)--cycle;
\draw (5,0)--(9,0)--(7,2*1.7320508076)--cycle;
\draw (4,0)--(5,0);
\draw[fill=black] (4,0) circle (0.06);
\draw[fill=black] (5,0) circle (0.06);
\draw (2,2/1.7320508076) node {$w^{(1)}W_m$};
\draw (7,2/1.7320508076) node {$w^{(2)}W_m$};
\draw (3.7,-0.5) node {$w^{(1)}i^m$};
\draw (5.3,-0.5) node {$w^{(2)}j^m$};
\draw[fill=black] (1.3,1.7) circle (0.06);
\draw (2,2) node {$w^{(1)}v$};
\draw[fill=black] (6.3,1.7) circle (0.06);
\draw (7,2) node {$w^{(2)}v$};
\end{tikzpicture}
\caption{$w^{(1)}W_m$ and $w^{(2)}W_m$}\label{SG_con_fig_monotone}
\end{figure}
Fix $v\in W_m$; we have
$$\left(u(w^{(1)}v)-u(w^{(2)}v)\right)^2\le R_{w^{(1)}W_m\cup w^{(2)}W_m}(w^{(1)}v,w^{(2)}v)E_{w^{(1)}W_m\cup w^{(2)}W_m}(u,u).$$
By the cutting technique and Proposition \ref{SG_con_prop_resist}, we have
\begin{align*}
&R_{w^{(1)}W_m\cup w^{(2)}W_m}(w^{(1)}v,w^{(2)}v)\\
\le& R_{w^{(1)}W_m\cup w^{(2)}W_m}(w^{(1)}v,w^{(1)}i^m)+R_{w^{(1)}W_m\cup w^{(2)}W_m}(w^{(1)}i^m,w^{(2)}j^m)\\
&+R_{w^{(1)}W_m\cup w^{(2)}W_m}(w^{(2)}j^m,w^{(2)}v)\\
\le& R_m(v,i^m)+1+R_m(v,j^m)\le5\left(\frac{5}{3}\right)^m+1\le6\left(\frac{5}{3}\right)^m,
\end{align*}
hence
\begin{align*}
&(u(w^{(1)}v)-u(w^{(2)}v))^2\le6\left(\frac{5}{3}\right)^mE_{w^{(1)}W_m\cup w^{(2)}W_m}(u,u)\\
=&6\left(\frac{5}{3}\right)^m\left(E_{w^{(1)}W_m}(u,u)+E_{w^{(2)}W_m}(u,u)+\left(u(w^{(1)}i^m)-u(w^{(2)}j^m)\right)^2\right).
\end{align*}
Hence
\begin{align*}
&\frac{1}{3^m}\sum_{v\in W_m}\left(u(w^{(1)}v)-u(w^{(2)}v)\right)^2\\
\le&6\left(\frac{5}{3}\right)^m\left(E_{w^{(1)}W_m}(u,u)+E_{w^{(2)}W_m}(u,u)+\left(u(w^{(1)}i^m)-u(w^{(2)}j^m)\right)^2\right).
\end{align*}
In the summation with respect to $w^{(1)}\sim_nw^{(2)}$, the terms $E_{w^{(1)}W_m}(u,u),E_{w^{(2)}W_m}(u,u)$ are summed at most 6 times, hence
\begin{align*}
&G_n(M_{n,m}u)\le\left(\frac{5}{3}\right)^n\sum_{w^{(1)}\sim_nw^{(2)}}\frac{1}{3^m}\sum_{v\in W_m}\left(u(w^{(1)}v)-u(w^{(2)}v)\right)^2\\
\le&6\left(\frac{5}{3}\right)^n6\left(\frac{5}{3}\right)^mE_{n+m}(u,u)=36\left(\frac{5}{3}\right)^{n+m}E_{n+m}(u,u)=36G_{n+m}(u).
\end{align*}
\end{proof}
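The chained resistance bound used in the middle of the proof (two terms of $\frac{5}{2}(\frac{5}{3})^m$ plus the unit edge, absorbed into $6(\frac{5}{3})^m$) is a small arithmetic fact that can be checked directly; the function name is ours.

```python
# Check: for all m >= 1,
# R_m(v, i^m) + 1 + R_m(v, j^m) <= 2*(5/2)(5/3)^m + 1 = 5(5/3)^m + 1 <= 6(5/3)^m.
def chain_bound_ok(m):
    bound = 5.0 * (5.0 / 3.0) ** m + 1.0
    return bound <= 6.0 * (5.0 / 3.0) ** m

assert all(chain_bound_ok(m) for m in range(1, 40))
```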
\section{Proof of Theorem \ref{SG_con_thm_BM}}\label{SG_con_sec_BM}
For all $\beta>0$, let
$$\mathfrak{E}_\beta(u,u)=\sum_{n=1}^\infty2^{(\beta-\alpha)n}\sum_{w^{(1)}\sim_nw^{(2)}}\left(P_nu(w^{(1)})-P_nu(w^{(2)})\right)^2,$$
and write $\mathscr{E}_\beta(u,u)=[u]_{B^{2,2}_{\alpha,\beta}(K)}$ for simplicity.
We obtain non-local regular closed forms and Dirichlet forms as follows.
\begin{mythm}\label{SG_con_thm_nonlocal}
For all $\beta\in(\alpha,\beta^*)$, $(\mathfrak{E}_\beta,\mathcal{F}_\beta)$ is a non-local regular closed form on $L^2(K;\nu)$, $(\mathcal{E}_\beta,\mathcal{F}_\beta)$ and $(\mathscr{E}_\beta,\mathcal{F}_\beta)$ are non-local regular Dirichlet forms on $L^2(K;\nu)$. For all $\beta\in[\beta^*,+\infty)$, $\mathcal{F}_\beta$ consists only of constant functions.
\end{mythm}
\begin{myrmk}
$\mathfrak{E}_\beta$ does not have the Markovian property, but $\mathcal{E}_\beta$ and $\mathscr{E}_\beta$ do.
\end{myrmk}
\begin{proof}
By Fatou's lemma, it is obvious that $(\mathfrak{E}_\beta,\mathcal{F}_\beta)$ is a closed form on $L^2(K;\nu)$ in the wide sense.
Let $\beta\in(\alpha,\beta^*)$. By Lemma \ref{lem_SG_holder}, we have $\mathcal{F}_\beta\subseteq C(K)$. We only need to show that $\mathcal{F}_\beta$ is uniformly dense in $C(K)$; then $\mathcal{F}_\beta$ is dense in $L^2(K;\nu)$, hence $(\mathfrak{E}_\beta,\mathcal{F}_\beta)$ is a regular closed form on $L^2(K;\nu)$.
Indeed, by Theorem \ref{SG_con_thm_AnU}, for all $U=U^{(x_0,x_1,x_2)}\in\mathcal{U}$, we have
\begin{align*}
&\mathfrak{E}_\beta(U,U)=\sum_{n=1}^\infty2^{(\beta-\alpha)n}A_n(U)\\
=&\sum_{n=1}^\infty2^{(\beta-\alpha)n}\frac{2}{3}\left[\left(\frac{3}{5}\right)^{n}-\left(\frac{3}{5}\right)^{2n}\right]\left((x_0-x_1)^2+(x_1-x_2)^2+(x_0-x_2)^2\right)\\
\le&\frac{2}{3}\left((x_0-x_1)^2+(x_1-x_2)^2+(x_0-x_2)^2\right)\sum_{n=1}^\infty2^{(\beta-\alpha)n}\left(\frac{3}{5}\right)^n<+\infty,
\end{align*}
hence $U\in\mathcal{F}_\beta$, $\mathcal{U}\subseteq\mathcal{F}_\beta$. By Theorem \ref{thm_SG_fun}, we have $\mathcal{F}_\beta$ separates points. It is obvious that $\mathcal{F}_\beta$ is a sub-algebra of $C(K)$, that is, for all $u,v\in\mathcal{F}_\beta,c\in\mathbb{R}$, we have $u+v,cu,uv\in\mathcal{F}_\beta$. By Stone-Weierstrass theorem, $\mathcal{F}_\beta$ is uniformly dense in $C(K)$.
Since $\mathcal{E}_\beta$ and $\mathscr{E}_\beta$ do have the Markovian property, by the above, $(\mathcal{E}_\beta,\mathcal{F}_\beta)$ and $(\mathscr{E}_\beta,\mathcal{F}_\beta)$ are non-local regular Dirichlet forms on $L^2(K;\nu)$.
Let $\beta\in[\beta^*,+\infty)$ and assume that $u\in\mathcal{F}_\beta$ is not constant; then there exists some integer $N\ge1$ such that $D_N(u)>0$. By Theorem \ref{SG_con_thm_monotone}, we have
\begin{align*}
\mathfrak{E}_\beta(u,u)&=\sum_{n=1}^\infty2^{(\beta-\alpha)n}\left(\frac{3}{5}\right)^nD_n(u)\ge\sum_{n=N+1}^\infty2^{(\beta-\alpha)n}\left(\frac{3}{5}\right)^nD_n(u)\\
&\ge\frac{1}{C}\sum_{n=N+1}^\infty2^{(\beta-\alpha)n}\left(\frac{3}{5}\right)^nD_N(u)=+\infty,
\end{align*}
which is a contradiction. Hence $\mathcal{F}_\beta$ consists only of constant functions.
\end{proof}
Take $\myset{\beta_n}\subseteq(\alpha,\beta^*)$ with $\beta_n\uparrow\beta^*$. By Proposition \ref{prop_gamma}, there exist a subsequence, still denoted by $\myset{\beta_n}$, and a closed form $(\mathcal{E},\mathcal{F})$ on $L^2(K;\nu)$ in the wide sense such that $(\beta^*-\beta_n)\mathfrak{E}_{\beta_n}$ is $\Gamma$-convergent to $\mathcal{E}$. Without loss of generality, we may assume that
$$0<\beta^*-\beta_n<\frac{1}{n+1}\text{ for all }n\ge1.$$
We have the characterization of $(\mathcal{E},\mathcal{F})$ on $L^2(K;\nu)$ as follows.
\begin{mythm}\label{SG_con_thm_E}
\begin{align*}
&\mathcal{E}(u,u)\asymp\sup_{n\ge1}D_n(u)=\sup_{n\ge1}\left(\frac{5}{3}\right)^n\sum_{w^{(1)}\sim_nw^{(2)}}\left(P_nu(w^{(1)})-P_nu(w^{(2)})\right)^2,\\
&\mathcal{F}=\myset{u\in L^2(K;\nu):\sup_{n\ge1}\left(\frac{5}{3}\right)^n\sum_{w^{(1)}\sim_nw^{(2)}}\left(P_nu(w^{(1)})-P_nu(w^{(2)})\right)^2<+\infty}.
\end{align*}
Moreover, $(\mathcal{E},\mathcal{F})$ is a regular closed form on $L^2(K;\nu)$ and
$$\frac{1}{2(\log2)C^2}\sup_{n\ge1}D_n(u)\le\mathcal{E}(u,u)\le\frac{1}{\log2}\sup_{n\ge1}D_n(u).$$
\end{mythm}
\begin{proof}
Recall that
$$\mathfrak{E}_{\beta}(u,u)=\sum_{n=1}^\infty2^{(\beta-\alpha)n}A_n(u)=\sum_{n=1}^\infty2^{(\beta-\beta^*)n}D_n(u).$$
We use the weak monotonicity result Theorem \ref{SG_con_thm_monotone} and the elementary result Proposition \ref{prop_ele2}.
On the one hand, for all $u\in L^2(K;\nu)$
\begin{align*}
\mathcal{E}(u,u)&\le\varliminf_{n\to+\infty}(\beta^*-\beta_n)\mathfrak{E}_{\beta_n}(u,u)=\varliminf_{n\to+\infty}(\beta^*-\beta_n)\sum_{k=1}^\infty2^{(\beta_n-\beta^*)k}D_k(u)\\
&=\frac{1}{\log2}\varliminf_{n\to+\infty}(1-2^{\beta_n-\beta^*})\sum_{k=1}^\infty2^{(\beta_n-\beta^*)k}D_k(u)\le\frac{1}{\log2}\sup_{k\ge1}D_k(u).
\end{align*}
On the other hand, for all $u\in L^2(K;\nu)$, there exists $\myset{u_n}\subseteq L^2(K;\nu)$ converging strongly to $u$ in $L^2(K;\nu)$ such that
\begin{align*}
&\mathcal{E}(u,u)\ge\varlimsup_{n\to+\infty}(\beta^*-\beta_n)\mathfrak{E}_{\beta_n}(u_n,u_n)=\varlimsup_{n\to+\infty}(\beta^*-\beta_n)\sum_{k=1}^\infty2^{(\beta_n-\beta^*)k}D_k(u_n)\\
\ge&\varlimsup_{n\to+\infty}(\beta^*-\beta_n)\sum_{k=n+1}^\infty2^{(\beta_n-\beta^*)k}D_k(u_n)\ge\frac{1}{C}\varlimsup_{n\to+\infty}(\beta^*-\beta_n)\sum_{k=n+1}^\infty2^{(\beta_n-\beta^*)k}D_n(u_n)\\
=&\frac{1}{C}\varlimsup_{n\to+\infty}\left[(\beta^*-\beta_n)\frac{2^{(\beta_n-\beta^*)(n+1)}}{1-2^{\beta_n-\beta^*}}D_n(u_n)\right].
\end{align*}
Since $0<\beta^*-\beta_n<1/(n+1)$, we have $2^{(\beta_n-\beta^*)(n+1)}>1/2$. Since
$$\lim_{n\to+\infty}\frac{\beta^*-\beta_n}{1-2^{\beta_n-\beta^*}}=\frac{1}{\log2},$$
we have
$$\mathcal{E}(u,u)\ge\frac{1}{2C}\varlimsup_{n\to+\infty}\frac{\beta^*-\beta_n}{1-2^{\beta_n-\beta^*}}D_n(u_n)\ge\frac{1}{2(\log2)C}\varlimsup_{n\to+\infty}D_n(u_n).$$
Since $u_n\to u$ in $L^2(K;\nu)$, for all $k\ge1$, we have
$$D_k(u)=\lim_{n\to+\infty} D_k(u_n)=\lim_{k\le n\to+\infty} D_k(u_n)\le C\varliminf_{n\to+\infty} D_n(u_n).$$
Taking supremum with respect to $k\ge1$, we have
$$\sup_{k\ge1}D_k(u)\le C\varliminf_{n\to+\infty}D_n(u_n)\le C\varlimsup_{n\to+\infty}D_n(u_n)\le 2(\log2)C^2\mathcal{E}(u,u).$$
By Lemma \ref{lem_SG_holder}, we have $\mathcal{F}\subseteq C(K)$. We only need to show that $\mathcal{F}$ is uniformly dense in $C(K)$; then $\mathcal{F}$ is dense in $L^2(K;\nu)$, hence $(\mathcal{E},\mathcal{F})$ is a regular closed form on $L^2(K;\nu)$.
Indeed, by Theorem \ref{SG_con_thm_AnU}, for all $U=U^{(x_0,x_1,x_2)}\in\mathcal{U}$, we have
\begin{align*}
&\sup_{n\ge1}D_n(U)=\sup_{n\ge1}\left(\frac{5}{3}\right)^nA_n(U)\\
=&\sup_{n\ge1}\left(\frac{5}{3}\right)^n\frac{2}{3}\left[\left(\frac{3}{5}\right)^n-\left(\frac{3}{5}\right)^{2n}\right]\left((x_0-x_1)^2+(x_1-x_2)^2+(x_0-x_2)^2\right)\\
\le&\frac{2}{3}\left((x_0-x_1)^2+(x_1-x_2)^2+(x_0-x_2)^2\right)<+\infty,
\end{align*}
hence $U\in\mathcal{F}$, $\mathcal{U}\subseteq\mathcal{F}$. By Theorem \ref{thm_SG_fun}, we have $\mathcal{F}$ separates points. It is obvious that $\mathcal{F}$ is a sub-algebra of $C(K)$. By Stone-Weierstrass theorem, $\mathcal{F}$ is uniformly dense in $C(K)$.
\end{proof}
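The elementary limit $\lim_{n\to+\infty}(\beta^*-\beta_n)/(1-2^{\beta_n-\beta^*})=1/\log2$ used in the proof can be verified numerically; `ratio` is our name for the quotient, with `eps` standing for $\beta^*-\beta_n$.

```python
# Numerical check: eps / (1 - 2^(-eps)) -> 1/log 2 as eps -> 0+.
def ratio(eps):
    return eps / (1.0 - 2.0 ** (-eps))

target = 1.4426950408889634  # 1/log(2)
for eps in (1e-1, 1e-3, 1e-5):
    assert abs(ratio(eps) - target) < eps
```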
Now we prove Theorem \ref{SG_con_thm_BM} using a standard approach as follows.
\begin{proof}[Proof of Theorem \ref{SG_con_thm_BM}]
For all $u\in L^2(K;\nu),n,k\ge1$, we have
\begin{align*}
&\sum_{w^{(1)}\sim_{n+k}w^{(2)}}\left(P_{n+k}u(w^{(1)})-P_{n+k}u(w^{(2)})\right)^2\\
=&\sum_{w\in W_n}\sum_{w^{(1)}\sim_kw^{(2)}}\left(P_{n+k}u(ww^{(1)})-P_{n+k}u(ww^{(2)})\right)^2\\
&+\sum_{w^{(1)}=w^{(1)}_1\ldots w^{(1)}_n\sim_nw^{(2)}=w^{(2)}_1\ldots w^{(2)}_n}\left(P_{n+k}u(w^{(1)}w^{(2)}_n\ldots w^{(2)}_n)-P_{n+k}u(w^{(2)}w^{(1)}_n\ldots w^{(1)}_n)\right)^2,
\end{align*}
where for all $i=1,2$
\begin{align*}
&P_{n+k}u(ww^{(i)})=\int_K(u\circ f_{ww^{(i)}})(x)\nu(\mathrm{d} x)\\
=&\int_K(u\circ f_{w}\circ f_{w^{(i)}})(x)\nu(\mathrm{d} x)=P_k(u\circ f_w)(w^{(i)}),
\end{align*}
hence
\begin{align*}
\sum_{w\in W_n}A_k(u\circ f_w)=&\sum_{w\in W_n}\sum_{w^{(1)}\sim_kw^{(2)}}\left(P_{k}(u\circ f_w)(w^{(1)})-P_{k}(u\circ f_w)(w^{(2)})\right)^2\\
\le&\sum_{w^{(1)}\sim_{n+k}w^{(2)}}\left(P_{n+k}u(w^{(1)})-P_{n+k}u(w^{(2)})\right)^2=A_{n+k}(u),
\end{align*}
and
\begin{align*}
\left(\frac{5}{3}\right)^n\sum_{w\in W_n}D_k(u\circ f_w)&=\left(\frac{5}{3}\right)^{n+k}\sum_{w\in W_n}A_k(u\circ f_w)\\
&\le\left(\frac{5}{3}\right)^{n+k}A_{n+k}(u)=D_{n+k}(u).
\end{align*}
For all $u\in\mathcal{F},n\ge1,w\in W_n$, we have
\begin{align*}
&\sup_{k\ge1}D_k(u\circ f_w)\le\sup_{k\ge1}\sum_{w\in W_n}D_k(u\circ f_w)\\
\le&\left(\frac{3}{5}\right)^n\sup_{k\ge1}D_{n+k}(u)\le\left(\frac{3}{5}\right)^n\sup_{k\ge1}D_k(u)<+\infty,
\end{align*}
hence $u\circ f_w\in\mathcal{F}$.
For all $u\in L^2(K;\nu),n\ge1$, let
\begin{align*}
\mybar{\mathcal{E}}(u,u)&=\sum_{i=0}^2\left(u(p_i)-\int_Ku(x)\nu(\mathrm{d} x)\right)^2,\\
\mybar{\mathcal{E}}^{(n)}(u,u)&=\left(\frac{5}{3}\right)^n\sum_{w\in W_n}\mybar{\mathcal{E}}(u\circ f_w,u\circ f_w).
\end{align*}
By Lemma \ref{lem_SG_holder}, we have
\begin{align*}
\mybar{\mathcal{E}}(u,u)&=\sum_{i=0}^2\left(\int_K\left(u(p_i)-u(x)\right)\nu(\mathrm{d} x)\right)^2\le\sum_{i=0}^2\int_K\left(u(p_i)-u(x)\right)^2\nu(\mathrm{d} x)\\
&\le\sum_{i=0}^2\int_Kc^2|p_i-x|^{\beta^*-\alpha}\left(\sup_{k\ge1}D_k(u)\right)\nu(\mathrm{d} x)\le3c^2\sup_{k\ge1}D_k(u),
\end{align*}
hence
\begin{equation}\label{SG_con_eqn_Ebar_upper}
\begin{aligned}
\mybar{\mathcal{E}}^{(n)}(u,u)&\le\left(\frac{5}{3}\right)^n\sum_{w\in W_n}3c^2\sup_{k\ge1}D_k(u\circ f_w)\\
&\le3c^2C\left(\frac{5}{3}\right)^n\sum_{w\in W_n}\varliminf_{k\to+\infty}D_k(u\circ f_w)\\
&\le3c^2C\left(\frac{5}{3}\right)^n\varliminf_{k\to+\infty}\sum_{w\in W_n}D_k(u\circ f_w)\\
&\le 3c^2C\varliminf_{k\to+\infty}D_{n+k}(u)\le3c^2C\sup_{k\ge1}D_k(u).
\end{aligned}
\end{equation}
On the other hand, for all $u\in L^2(K;\nu),n\ge1$, we have
$$D_n(u)=\left(\frac{5}{3}\right)^n\sum_{w^{(1)}\sim_nw^{(2)}}\left(\int_K(u\circ f_{w^{(1)}})(x)\nu(\mathrm{d} x)-\int_K(u\circ f_{w^{(2)}})(x)\nu(\mathrm{d} x)\right)^2.$$
For all $w^{(1)}\sim_n w^{(2)}$, there exist $i,j=0,1,2$ such that
$$K_{w^{(1)}}\cap K_{w^{(2)}}=\myset{f_{w^{(1)}}(p_i)}=\myset{f_{w^{(2)}}(p_j)}.$$
Hence
\begin{align}
D_n(u)=&\left(\frac{5}{3}\right)^n\sum_{w^{(1)}\sim_nw^{(2)}}\left[\left((u\circ f_{w^{(1)}})(p_i)-\int_K(u\circ f_{w^{(1)}})(x)\nu(\mathrm{d} x)\right)\right.\nonumber\\
&-\left.\left((u\circ f_{w^{(2)}})(p_j)-\int_K(u\circ f_{w^{(2)}})(x)\nu(\mathrm{d} x)\right)\right]^2\nonumber\\
\le&2\left(\frac{5}{3}\right)^n\sum_{w^{(1)}\sim_nw^{(2)}}\left[\left((u\circ f_{w^{(1)}})(p_i)-\int_K(u\circ f_{w^{(1)}})(x)\nu(\mathrm{d} x)\right)^2\right.\nonumber\\
&+\left.\left((u\circ f_{w^{(2)}})(p_j)-\int_K(u\circ f_{w^{(2)}})(x)\nu(\mathrm{d} x)\right)^2\right]\nonumber\\
\le&6\left(\frac{5}{3}\right)^n\sum_{w\in W_n}\sum_{i=0}^2\left((u\circ f_{w})(p_i)-\int_K(u\circ f_{w})(x)\nu(\mathrm{d} x)\right)^2\nonumber\\
=&6\left(\frac{5}{3}\right)^n\sum_{w\in W_n}\mybar{\mathcal{E}}(u\circ f_w,u\circ f_w)=6\mybar{\mathcal{E}}^{(n)}(u,u).\label{SG_con_eqn_Ebar_lower}
\end{align}
For all $u\in L^2(K;\nu),n\ge1$, we have
\begin{align}
\mybar{\mathcal{E}}^{(n+1)}(u,u)&=\left(\frac{5}{3}\right)^{n+1}\sum_{w\in W_{n+1}}\mybar{\mathcal{E}}(u\circ f_w,u\circ f_w)\nonumber\\
&=\left(\frac{5}{3}\right)^{n+1}\sum_{i=0}^2\sum_{w\in W_{n}}\mybar{\mathcal{E}}(u\circ f_i\circ f_w,u\circ f_i\circ f_w)\nonumber\\
&=\frac{5}{3}\sum_{i=0}^2\mybar{\mathcal{E}}^{(n)}(u\circ f_i,u\circ f_i).\label{SG_con_eqn_Ebar_ss}
\end{align}
Let
$$\tilde{\mathcal{E}}^{(n)}(u,u)=\frac{1}{n}\sum_{l=1}^n\mybar{\mathcal{E}}^{(l)}(u,u),u\in L^2(K;\nu),n\ge1.$$
By Equation (\ref{SG_con_eqn_Ebar_upper}), we have
$$\tilde{\mathcal{E}}^{(n)}(u,u)\le3c^2C\sup_{k\ge1}D_k(u)\asymp\mathcal{E}(u,u)\text{ for all }u\in\mathcal{F},n\ge1.$$
Since $(\mathcal{E},\mathcal{F})$ is a regular closed form on $L^2(K;\nu)$, by \cite[Definition 1.3.8, Remark 1.3.9, Definition 1.3.10, Remark 1.3.11]{CF12}, $(\mathcal{F},\mathcal{E}_1)$ is a separable Hilbert space. Let $\{u_i\}_{i\ge1}$ be a dense subset of $(\mathcal{F},\mathcal{E}_1)$. For all $i\ge1$, $\{\tilde{\mathcal{E}}^{(n)}(u_i,u_i)\}_{n\ge1}$ is a bounded sequence. By a diagonal argument, there exists a subsequence $\{n_k\}_{k\ge1}$ such that $\{\tilde{\mathcal{E}}^{(n_k)}(u_i,u_i)\}_{k\ge1}$ converges for all $i\ge1$. Hence $\{\tilde{\mathcal{E}}^{(n_k)}(u,u)\}_{k\ge1}$ converges for all $u\in\mathcal{F}$. Let
$$\mathcal{E}_{{\mathrm{loc}}}(u,u)=\lim_{k\to+\infty}\tilde{\mathcal{E}}^{(n_k)}(u,u)\text{ for all }u\in\mathcal{F}_{{\mathrm{loc}}}:=\mathcal{F}.$$
Then
$$\mathcal{E}_{\mathrm{loc}}(u,u)\le3c^2C\sup_{k\ge1}D_k(u)\text{ for all }u\in\mathcal{F}_{\mathrm{loc}}=\mathcal{F}.$$
By Equation (\ref{SG_con_eqn_Ebar_lower}), for all $u\in\mathcal{F}_{\mathrm{loc}}=\mathcal{F}$, we have
$$\mathcal{E}_{\mathrm{loc}}(u,u)=\lim_{k\to+\infty}\tilde{\mathcal{E}}^{(n_k)}(u,u)\ge\varliminf_{n\to+\infty}\mybar{\mathcal{E}}^{(n)}(u,u)\ge\frac{1}{6}\varliminf_{k\to+\infty}D_k(u)\ge\frac{1}{6C}\sup_{k\ge1}D_k(u).$$
Hence
$$\mathcal{E}_{\mathrm{loc}}(u,u)\asymp\sup_{k\ge1}D_k(u)\text{ for all }u\in\mathcal{F}_{\mathrm{loc}}=\mathcal{F}.$$
Hence $(\mathcal{E}_{\mathrm{loc}},\mathcal{F}_{\mathrm{loc}})$ is a regular closed form on $L^2(K;\nu)$. Since $1\in\mathcal{F}_{\mathrm{loc}}$ and $\mathcal{E}_{\mathrm{loc}}(1,1)=0$, by \cite[Lemma 1.6.5, Theorem 1.6.3]{FOT11}, $(\mathcal{E}_{\mathrm{loc}},\mathcal{F}_{\mathrm{loc}})$ on $L^2(K;\nu)$ is conservative.
For all $u\in\mathcal{F}_{\mathrm{loc}}=\mathcal{F}$, we have $u\circ f_i\in\mathcal{F}=\mathcal{F}_{\mathrm{loc}}$ for all $i=0,1,2$. Moreover, by Equation (\ref{SG_con_eqn_Ebar_ss}), we have
\begin{align*}
\frac{5}{3}\sum_{i=0}^2\mathcal{E}_{\mathrm{loc}}(u\circ f_i,u\circ f_i)&=\frac{5}{3}\sum_{i=0}^2\lim_{k\to+\infty}\tilde{\mathcal{E}}^{(n_k)}(u\circ f_i,u\circ f_i)\\
&=\frac{5}{3}\sum_{i=0}^2\lim_{k\to+\infty}\frac{1}{n_k}\sum_{l=1}^{n_k}\mybar{\mathcal{E}}^{(l)}(u\circ f_i,u\circ f_i)\\
&=\lim_{k\to+\infty}\frac{1}{n_k}\sum_{l=1}^{n_k}\left[\frac{5}{3}\sum_{i=0}^2\mybar{\mathcal{E}}^{(l)}(u\circ f_i,u\circ f_i)\right]\\
&=\lim_{k\to+\infty}\frac{1}{n_k}\sum_{l=1}^{n_k}\mybar{\mathcal{E}}^{(l+1)}(u,u)=\lim_{k\to+\infty}\frac{1}{n_k}\sum_{l=2}^{n_k+1}\mybar{\mathcal{E}}^{(l)}(u,u)\\
&=\lim_{k\to+\infty}\left[\frac{1}{n_k}\sum_{l=1}^{n_k}\mybar{\mathcal{E}}^{(l)}(u,u)+\frac{1}{n_k}\mybar{\mathcal{E}}^{(n_k+1)}(u,u)-\frac{1}{n_k}\mybar{\mathcal{E}}^{(1)}(u,u)\right]\\
&=\lim_{k\to+\infty}\tilde{\mathcal{E}}^{(n_k)}(u,u)=\mathcal{E}_{\mathrm{loc}}(u,u).
\end{align*}
Hence $(\mathcal{E}_{\mathrm{loc}},\mathcal{F}_{\mathrm{loc}})$ on $L^2(K;\nu)$ is self-similar.
For all $u,v\in\mathcal{F}_{\mathrm{loc}}$ such that $\mathrm{supp}(u)$ and $\mathrm{supp}(v)$ are compact and $v$ is constant in an open neighborhood $U$ of $\mathrm{supp}(u)$, the set $K\backslash U$ is compact and $\mathrm{supp}(u)\cap(K\backslash U)=\emptyset$, hence
$$\delta=\mathrm{dist}(\mathrm{supp}(u),K\backslash U)>0.$$
Taking sufficiently large $n\ge1$ such that $2^{1-n}<\delta$, by self-similarity, we have
$$\mathcal{E}_{\mathrm{loc}}(u,v)=\left(\frac{5}{3}\right)^n\sum_{w\in W_n}\mathcal{E}_{\mathrm{loc}}(u\circ f_w,v\circ f_w).$$
For all $w\in W_n$, either $u\circ f_w=0$ or $v\circ f_w$ is constant, hence $\mathcal{E}_{\mathrm{loc}}(u\circ f_w,v\circ f_w)=0$. Therefore $\mathcal{E}_{\mathrm{loc}}(u,v)=0$, that is, $(\mathcal{E}_{\mathrm{loc}},\mathcal{F}_{\mathrm{loc}})$ on $L^2(K;\nu)$ is strongly local.
For all $u\in\mathcal{F}_{\mathrm{loc}}$, it is obvious that $u^+,u^-,1-u,\mybar{u}=(0\vee u)\wedge 1\in\mathcal{F}_{\mathrm{loc}}$ and
$$\mathcal{E}_{\mathrm{loc}}(u,u)=\mathcal{E}_{\mathrm{loc}}(1-u,1-u).$$
Since $u^+u^-=0$ and $(\mathcal{E}_{\mathrm{loc}},\mathcal{F}_{\mathrm{loc}})$ on $L^2(K;\nu)$ is strongly local, we have $\mathcal{E}_{\mathrm{loc}}(u^+,u^-)=0$. Hence
\begin{align*}
\mathcal{E}_{\mathrm{loc}}(u,u)&=\mathcal{E}_{\mathrm{loc}}(u^+-u^-,u^+-u^-)=\mathcal{E}_{\mathrm{loc}}(u^+,u^+)+\mathcal{E}_{\mathrm{loc}}(u^-,u^-)-2\mathcal{E}_{\mathrm{loc}}(u^+,u^-)\\
&=\mathcal{E}_{\mathrm{loc}}(u^+,u^+)+\mathcal{E}_{\mathrm{loc}}(u^-,u^-)\ge\mathcal{E}_{\mathrm{loc}}(u^+,u^+)=\mathcal{E}_{\mathrm{loc}}(1-u^+,1-u^+)\\
&\ge\mathcal{E}_{\mathrm{loc}}((1-u^+)^+,(1-u^+)^+)=\mathcal{E}_{\mathrm{loc}}(1-(1-u^+)^+,1-(1-u^+)^+)\\
&=\mathcal{E}_{\mathrm{loc}}(\mybar{u},\mybar{u}),
\end{align*}
that is, $(\mathcal{E}_{\mathrm{loc}},\mathcal{F}_{\mathrm{loc}})$ on $L^2(K;\nu)$ is Markovian. Hence $(\mathcal{E}_{\mathrm{loc}},\mathcal{F}_{\mathrm{loc}})$ is a self-similar strongly local regular Dirichlet form on $L^2(K;\nu)$.
\end{proof}
\begin{myrmk}
The idea of the standard approach is from \cite[Section 6]{KZ92}. The proof of the Markovian property is from the proof of \cite[Theorem 2.1]{BBKT10}.
\end{myrmk}
\chapter{Construction of Local Regular Dirichlet Form on the SC}\label{ch_SC_con}
This chapter is based on my work \cite{GY17} joint with Prof. Alexander Grigor'yan.
\section{Background and Statement}
We apply the method introduced in Chapter \ref{ch_SG_con} to the SC. We use the notation for the SC introduced in Section \ref{sec_notion}.
Let $\nu$ be the normalized Hausdorff measure on the SC $K$.
Let $(\mathcal{E}_\beta,\mathcal{F}_\beta)$ be given by
$$
\begin{aligned}
&\mathcal{E}_\beta(u,u)=\int_K\int_K\frac{(u(x)-u(y))^2}{|x-y|^{\alpha+\beta}}\nu(\mathrm{d} x)\nu(\mathrm{d} y),\\
&\mathcal{F}_\beta=\myset{u\in L^2(K;\nu):\mathcal{E}_\beta(u,u)<+\infty},
\end{aligned}
$$
where $\alpha=\log8/\log3$ is the Hausdorff dimension of the SC and $\beta>0$ is so far arbitrary. Then $(\mathcal{E}_\beta,\mathcal{F}_\beta)$ is a quadratic form on $L^2(K;\nu)$ for all $\beta\in(0,+\infty)$. Note that $(\mathcal{E}_\beta,\mathcal{F}_\beta)$ is not necessarily a regular Dirichlet form on $L^2(K;\nu)$ related to a stable-like jump process. The \emph{walk dimension of the SC} is defined as
$$\beta_*:=\sup\myset{\beta>0:(\mathcal{E}_\beta,\mathcal{F}_\beta)\text{ is a regular Dirichlet form on }L^2(K;\nu)}.$$
We give a new semi-norm $E_\beta$ as follows.
$$E_\beta(u,u):=\sum_{n=1}^\infty3^{(\beta-\alpha)n}\sum_{w\in W_n}
{\sum_{\mbox{\tiny
$
\begin{subarray}{c}
p,q\in V_w\\
|p-q|=2^{-1}\cdot3^{-n}
\end{subarray}
$
}}}
(u(p)-u(q))^2.$$
Our first result is as follows.
\begin{mylem}\label{SC_con_lem_equiv}
For all $\beta\in(\alpha,+\infty)$, for all $u\in C(K)$, we have
$$E_\beta(u,u)\asymp\mathcal{E}_\beta(u,u).$$
\end{mylem}
We established a similar equivalence on the SG; see Theorem \ref{SG_app_thm_main}.
We use Lemma \ref{SC_con_lem_equiv} to bound the walk dimension of the SC as follows.
\begin{mythm}\label{SC_con_thm_bound}
\begin{equation}\label{SC_con_eqn_bound_beta}
\beta_*\in\left[\frac{\log\left(8\cdot\frac{7}{6}\right)}{\log3},\frac{\log\left(8\cdot\frac{3}{2}\right)}{\log3}\right].
\end{equation}
\end{mythm}
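As a quick numerical sanity check (not part of the proof), the endpoints of the interval in (\ref{SC_con_eqn_bound_beta}) can be evaluated explicitly; the following Python sketch computes them together with $\alpha=\log8/\log3$.

```python
import math

# Hausdorff dimension of the SC and the two endpoints of the interval
# [log(8 * 7/6) / log 3, log(8 * 3/2) / log 3] bounding beta_*.
alpha = math.log(8) / math.log(3)
lower = math.log(8 * 7 / 6) / math.log(3)
upper = math.log(8 * 3 / 2) / math.log(3)

print(alpha, lower, upper)  # approximately 1.8928, 2.0331, 2.2619
```

In particular $\alpha<\beta_*$, consistent with the range $\beta\in(\alpha,+\infty)$ in Lemma \ref{SC_con_lem_equiv}.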
This estimate also follows from the results of \cite{BB90} and \cite{BBS90}, where the same bound
for $\beta^*$ was obtained by means of shorting and cutting techniques, while the identity $\beta_{*}=\beta^{*}$
follows from the sub-Gaussian heat kernel estimates by means of a subordination technique.
Here we prove the estimate (\ref{SC_con_eqn_bound_beta}) of $\beta_*$ directly, without using heat kernels or subordination.
We give a direct proof of the following result.
\begin{mythm}\label{SC_con_thm_walk}
$$\beta_*=\beta^*:=\frac{\log(8\rho)}{\log3},$$
where $\rho$ is a certain parameter arising in resistance estimates.
\end{mythm}
Hino and Kumagai \cite{HK06} established other equivalent semi-norms as follows.
For all $n\ge1,u\in L^2(K;\nu)$, let
$$P_nu(w)=\frac{1}{\nu(K_w)}\int_{K_w}u(x)\nu(\mathrm{d} x),\quad w\in W_n.$$
For all $w^{(1)},w^{(2)}\in W_n$, denote $w^{(1)}\sim_nw^{(2)}$ if $\mathrm{dim}_{\mathcal{H}}(K_{w^{(1)}}\cap K_{w^{(2)}})=1$. Let
$$\mathfrak{E}_\beta(u,u):=\sum_{n=1}^\infty3^{(\beta-\alpha)n}
{\sum_{\mbox{\tiny
$
\begin{subarray}{c}
w^{(1)}\sim_nw^{(2)}
\end{subarray}
$
}}}
\left(P_nu(w^{(1)})-P_nu(w^{(2)})\right)^2.$$
\begin{mylem}\label{SC_con_lem_equivHK}(\cite[Lemma 3.1]{HK06})
For all $\beta\in(0,+\infty),u\in L^2(K;\nu)$, we have
$$\mathfrak{E}_\beta(u,u)\asymp\mathcal{E}_\beta(u,u).$$
\end{mylem}
We combine $E_\beta$ and $\mathfrak{E}_\beta$ to construct a local regular Dirichlet form on $K$ using a $\Gamma$-convergence technique as follows.
\begin{mythm}\label{SC_con_thm_BM}
There exists a self-similar strongly local regular Dirichlet form $(\mathcal{E}_{{\mathrm{loc}}},\mathcal{F}_{{\mathrm{loc}}})$ on $L^2(K;\nu)$ satisfying
\begin{align}
&\mathcal{E}_{{\mathrm{loc}}}(u,u)\asymp\sup_{n\ge1}3^{(\beta^*-\alpha)n}\sum_{w\in W_n}
{\sum_{\mbox{\tiny
$
\begin{subarray}{c}
p,q\in V_w\\
|p-q|=2^{-1}\cdot3^{-n}
\end{subarray}
$
}}}
(u(p)-u(q))^2,\label{SC_con_eqn_Kigami}\\
&\mathcal{F}_{{\mathrm{loc}}}=\myset{u\in C(K):\mathcal{E}_{\mathrm{loc}}(u,u)<+\infty}.\nonumber
\end{align}
\end{mythm}
By the uniqueness result in \cite{BBKT10}, the above local regular Dirichlet form coincides with those given in \cite{BB89} and \cite{KZ92}.
As a direct corollary, non-local Dirichlet forms approximate the local Dirichlet form as follows.
\begin{mycor}\label{SC_con_cor_approx}
There exists some positive constant $C$ such that for all $u\in\mathcal{F}_{\mathrm{loc}}$, we have
$$\frac{1}{C}\mathcal{E}_{\mathrm{loc}}(u,u)\le\varliminf_{\beta\uparrow\beta^*}(\beta^*-\beta)\mathcal{E}_\beta(u,u)\le\varlimsup_{\beta\uparrow\beta^*}(\beta^*-\beta)\mathcal{E}_{\beta}(u,u)\le C\mathcal{E}_{\mathrm{loc}}(u,u).$$
\end{mycor}
We characterize $(\mathcal{E}_{\mathrm{loc}},\mathcal{F}_{\mathrm{loc}})$ on $L^2(K;\nu)$ as follows.
\begin{mythm}\label{SC_con_thm_Besov}
$\mathcal{F}_{\mathrm{loc}}=B_{\alpha,\beta^*}^{2,\infty}(K)$ and $\mathcal{E}_{\mathrm{loc}}(u,u)\asymp[u]_{B^{2,\infty}_{\alpha,\beta^*}(K)}$ for all $u\in\mathcal{F}_{\mathrm{loc}}$.
\end{mythm}
We give a direct proof of this theorem using (\ref{SC_con_eqn_Kigami}), thus avoiding heat kernel estimates, while relying on some geometric properties of the SC.
Finally, using (\ref{SC_con_eqn_Kigami}) of Theorem \ref{SC_con_thm_BM}, we give an alternative proof of sub-Gaussian heat kernel estimates as follows.
\begin{mythm}\label{SC_con_thm_hk}
$(\mathcal{E}_{\mathrm{loc}},\mathcal{F}_{\mathrm{loc}})$ on $L^2(K;\nu)$ has a heat kernel $p_t(x,y)$ satisfying
$$p_t(x,y)\asymp\frac{C}{t^{\alpha/\beta^*}}\exp\left(-c\left(\frac{|x-y|}{t^{1/\beta^*}}\right)^{\frac{\beta^*}{\beta^*-1}}\right),$$
for all $x,y\in K,t\in(0,1)$.
\end{mythm}
This chapter is organized as follows. In Section \ref{SC_con_sec_equiv}, we prove Lemma \ref{SC_con_lem_equiv}. In Section \ref{SC_con_sec_bound}, we prove Theorem \ref{SC_con_thm_bound}. In Section \ref{SC_con_sec_resistance}, we give resistance estimates. In Section \ref{SC_con_sec_harnack}, we give uniform Harnack inequality. In Section \ref{SC_con_sec_monotone}, we give two weak monotonicity results. In Section \ref{SC_con_sec_good}, we construct one good function. In Section \ref{SC_con_sec_walk}, we prove Theorem \ref{SC_con_thm_walk}. In Section \ref{SC_con_sec_BM}, we prove Theorem \ref{SC_con_thm_BM}. In Section \ref{SC_con_sec_Besov}, we prove Theorem \ref{SC_con_thm_Besov}. In Section \ref{SC_con_sec_hk}, we prove Theorem \ref{SC_con_thm_hk}.
\section{Proof of Lemma \ref{SC_con_lem_equiv}}\label{SC_con_sec_equiv}
We need some preparation as follows.
\begin{mylem}\label{SC_con_lem_equiv1}
For all $u\in L^2(K;\nu)$, we have
$$\int_K\int_K\frac{(u(x)-u(y))^2}{|x-y|^{\alpha+\beta}}\nu(\mathrm{d} x)\nu(\mathrm{d} y)\asymp\sum_{n=0}^\infty3^{(\alpha+\beta)n}\int_K\int_{B(x,3^{-n})}(u(x)-u(y))^2\nu(\mathrm{d} y)\nu(\mathrm{d} x).$$
\end{mylem}
\begin{mycor}\label{SC_con_cor_arbi}
Fix arbitrary integer $N\ge0$ and real number $c>0$. For all $u\in L^2(K;\nu)$, we have
$$\int_K\int_K\frac{(u(x)-u(y))^2}{|x-y|^{\alpha+\beta}}\nu(\mathrm{d} x)\nu(\mathrm{d} y)\asymp\sum_{n=N}^\infty3^{(\alpha+\beta)n}\int_K\int_{B(x,c3^{-n})}(u(x)-u(y))^2\nu(\mathrm{d} y)\nu(\mathrm{d} x).$$
\end{mycor}
The proofs of the above results are essentially the same as those of Lemma \ref{SG_app_lem_equiv1} and Corollary \ref{SG_app_cor_arbi}, except that the contraction ratio $1/2$ is replaced by $1/3$. We also need the fact that the SC satisfies the chain condition, see \cite[Definition 3.4]{GHL03}.
We divide Lemma \ref{SC_con_lem_equiv} into the following Theorem \ref{SC_con_thm_equiv1} and Theorem \ref{SC_con_thm_equiv2}. The idea of the proofs of these theorems comes from \cite{Jon96}. We do need to pay special attention to the difficulty brought by the non-p.c.f. property.
\begin{mythm}\label{SC_con_thm_equiv1}
For all $u\in C(K)$, we have
$$\sum_{n=1}^\infty3^{(\beta-\alpha)n}\sum_{w\in W_n}
{\sum_{\mbox{\tiny
$
\begin{subarray}{c}
p,q\in V_w\\
|p-q|=2^{-1}\cdot3^{-n}
\end{subarray}
$
}}}
(u(p)-u(q))^2\lesssim\int_K\int_K\frac{(u(x)-u(y))^2}{|x-y|^{\alpha+\beta}}\nu(\mathrm{d} x)\nu(\mathrm{d} y).$$
\end{mythm}
\begin{proof}
First, fix $n\ge1$ and $w=w_1\ldots w_n\in W_n$, and consider
$$
{\sum_{\mbox{\tiny
$
\begin{subarray}{c}
p,q\in V_w\\
|p-q|=2^{-1}\cdot3^{-n}
\end{subarray}
$
}}}
(u(p)-u(q))^2.$$
For all $x\in K_w$, we have
$$(u(p)-u(q))^2\le2(u(p)-u(x))^2+2(u(x)-u(q))^2.$$
Integrating with respect to $x\in K_w$ and dividing by $\nu(K_w)$, we have
$$(u(p)-u(q))^2\le\frac{2}{\nu(K_w)}\int_{K_w}(u(p)-u(x))^2\nu(\mathrm{d} x)+\frac{2}{\nu(K_w)}\int_{K_w}(u(x)-u(q))^2\nu(\mathrm{d} x),$$
hence
$$
{\sum_{\mbox{\tiny
$
\begin{subarray}{c}
p,q\in V_w\\
|p-q|=2^{-1}\cdot3^{-n}
\end{subarray}
$
}}}
(u(p)-u(q))^2\le2\cdot2\cdot2\sum_{p\in V_w}\frac{1}{\nu(K_w)}\int_{K_w}(u(p)-u(x))^2\nu(\mathrm{d} x).$$
Consider $(u(p)-u(x))^2$ for $p\in V_w$ and $x\in K_w$. There exists $w_{n+1}\in\myset{0,\ldots,7}$ such that $p=f_{w_1}\circ\ldots\circ f_{w_n}(p_{w_{n+1}})$. Let $k,l\ge1$ be integers to be determined later, and let
$$w^{(i)}=w_1\ldots w_nw_{n+1}\ldots w_{n+1}$$
with $ki$ terms of $w_{n+1}$, $i=0,\ldots,l$. For all $x^{(i)}\in K_{w^{(i)}}$, $i=0,\ldots,l$, we have
$$
\begin{aligned}
(u(p)-u(x^{(0)}))^2&\le2(u(p)-u(x^{(l)}))^2+2(u(x^{(0)})-u(x^{(l)}))^2\\
&\le2(u(p)-u(x^{(l)}))^2+2\left[2(u(x^{(0)})-u(x^{(1)}))^2+2(u(x^{(1)})-u(x^{(l)}))^2\right]\\
&=2(u(p)-u(x^{(l)}))^2+2^2(u(x^{(0)})-u(x^{(1)}))^2+2^2(u(x^{(1)})-u(x^{(l)}))^2\\
&\le\ldots\le2(u(p)-u(x^{(l)}))^2+2^2\sum_{i=0}^{l-1}2^i(u(x^{(i)})-u(x^{(i+1)}))^2.
\end{aligned}
$$
Integrating with respect to $x^{(0)}\in K_{w^{(0)}}$, \ldots, $x^{(l)}\in K_{w^{(l)}}$ and dividing by $\nu(K_{w^{(0)}})$, \ldots, $\nu(K_{w^{(l)}})$, we have
$$
\begin{aligned}
&\frac{1}{\nu(K_{w^{(0)}})}\int_{K_{w^{(0)}}}(u(p)-u(x^{(0)}))^2\nu(\mathrm{d} x^{(0)})\\
\le&\frac{2}{\nu(K_{w^{(l)}})}\int_{K_{w^{(l)}}}(u(p)-u(x^{(l)}))^2\nu(\mathrm{d} x^{(l)})\\
&+2^2\sum_{i=0}^{l-1}\frac{2^i}{\nu(K_{w^{(i)}})\nu(K_{w^{(i+1)}})}\int_{K_{w^{(i)}}}\int_{K_{w^{(i+1)}}}(u(x^{(i)})-u(x^{(i+1)}))^2\nu(\mathrm{d} x^{(i)})\nu(\mathrm{d} x^{(i+1)}).
\end{aligned}
$$
Now let us use $\nu(K_{w^{(i)}})=(1/8)^{n+ki}=3^{-\alpha(n+ki)}$. For the first term, by Lemma \ref{lem_SC_holder}, we have
$$
\begin{aligned}
\frac{1}{\nu(K_{w^{(l)}})}\int_{K_{w^{(l)}}}(u(p)-u(x^{(l)}))^2\nu(\mathrm{d} x^{(l)})&\le \frac{cE(u)}{\nu(K_{w^{(l)}})}\int_{K_{w^{(l)}}}|p-x^{(l)}|^{\beta-\alpha}\nu(\mathrm{d} x^{(l)})\\
&\le{2}^{(\beta-\alpha)/2}cE(u){3}^{-(\beta-\alpha)(n+kl)}.
\end{aligned}
$$
For the second term, for all $x^{(i)}\in K_{w^{(i)}},x^{(i+1)}\in K_{w^{(i+1)}}$, we have
$$|x^{(i)}-x^{(i+1)}|\le\sqrt{2}\cdot3^{-(n+ki)},$$
hence
$$
\begin{aligned}
&\sum_{i=0}^{l-1}\frac{2^i}{\nu(K_{w^{(i)}})\nu(K_{w^{(i+1)}})}\int_{K_{w^{(i)}}}\int_{K_{w^{(i+1)}}}(u(x^{(i)})-u(x^{(i+1)}))^2\nu(\mathrm{d} x^{(i)})\nu(\mathrm{d} x^{(i+1)})\\
\le&\sum_{i=0}^{l-1}{2^{i}\cdot3^{\alpha k+2\alpha(n+ki)}}\int\limits_{K_{w^{(i)}}}\int\limits_{|x^{(i+1)}-x^{(i)}|\le\sqrt{2}\cdot3^{-(n+ki)}}(u(x^{(i)})-u(x^{(i+1)}))^2\nu(\mathrm{d} x^{(i)})\nu(\mathrm{d} x^{(i+1)}),
\end{aligned}
$$
and
$$
\begin{aligned}
&\frac{1}{\nu(K_w)}\int_{K_w}(u(p)-u(x))^2\nu(\mathrm{d} x)=\frac{1}{\nu(K_{w^{(0)}})}\int_{K_{w^{(0)}}}(u(p)-u(x^{(0)}))^2\nu(\mathrm{d} x^{(0)})\\
\le& 2\cdot{2}^{(\beta-\alpha)/2}cE(u)3^{-(\beta-\alpha)(n+kl)}\\
&+4\sum_{i=0}^{l-1}{2^{i}\cdot3^{\alpha k+2\alpha(n+ki)}}\int\limits_{K_{w^{(i)}}}\int\limits_{|x-y|\le\sqrt{2}\cdot3^{-(n+ki)}}(u(x)-u(y))^2\nu(\mathrm{d} x)\nu(\mathrm{d} y).
\end{aligned}
$$
Hence
$$
\begin{aligned}
&\sum_{w\in W_n}
{\sum_{\mbox{\tiny
$
\begin{subarray}{c}
p,q\in V_w\\
|p-q|=2^{-1}\cdot3^{-n}
\end{subarray}
$
}}}
(u(p)-u(q))^2\\
\le&8\sum_{w\in W_n}\sum_{p\in V_w}\frac{1}{\nu(K_w)}\int_{K_w}(u(p)-u(x))^2\nu(\mathrm{d} x)\\
\le&8\sum_{w\in W_n}\sum_{p\in V_w}\left(2\cdot{2}^{(\beta-\alpha)/2}cE(u)3^{-(\beta-\alpha)(n+kl)}\right.\\
&\left.+4\sum_{i=0}^{l-1}{2^{i}\cdot3^{\alpha k+2\alpha(n+ki)}}\int\limits_{K_{w^{(i)}}}\int\limits_{|x-y|\le\sqrt{2}\cdot3^{-(n+ki)}}(u(x)-u(y))^2\nu(\mathrm{d} x)\nu(\mathrm{d} y)\right).
\end{aligned}
$$
For the first term, we have
$$\sum_{w\in W_n}\sum_{p\in V_w}3^{-(\beta-\alpha)(n+kl)}=8\cdot8^n\cdot3^{-(\beta-\alpha)(n+kl)}=8\cdot3^{\alpha n-(\beta-\alpha)(n+kl)}.$$
For the second term, fix $i=0,\ldots,l-1$; distinct points $p\in V_w$, $w\in W_n$ correspond to distinct cells $K_{w^{(i)}}$, hence
$$
\begin{aligned}
&\sum_{i=0}^{l-1}\sum_{w\in W_n}\sum_{p\in V_w}2^{i}\cdot3^{\alpha k+2\alpha(n+ki)}\int\limits_{K_{w^{(i)}}}\int\limits_{|x-y|\le\sqrt{2}\cdot3^{-(n+ki)}}(u(x)-u(y))^2\nu(\mathrm{d} x)\nu(\mathrm{d} y)\\
\le&\sum_{i=0}^{l-1}2^{i}\cdot3^{\alpha k+2\alpha(n+ki)}\int\limits_{K}\int\limits_{|x-y|\le\sqrt{2}\cdot3^{-(n+ki)}}(u(x)-u(y))^2\nu(\mathrm{d} x)\nu(\mathrm{d} y)\\
=&3^{\alpha k}\sum_{i=0}^{l-1}2^{i}\cdot3^{-(\beta-\alpha)(n+ki)}\left(3^{(\alpha+\beta)(n+ki)}\int\limits_{K}\int\limits_{|x-y|\le\sqrt{2}\cdot3^{-(n+ki)}}(u(x)-u(y))^2\nu(\mathrm{d} x)\nu(\mathrm{d} y)\right).
\end{aligned}
$$
For simplicity, denote
$$E_{n}(u)=3^{(\alpha+\beta)n}\int_{K}\int_{|x-y|\le\sqrt{2}\cdot3^{-n}}(u(x)-u(y))^2\nu(\mathrm{d} x)\nu(\mathrm{d} y).$$
We have
\begin{equation}\label{SC_con_eqn_equiv1_1}
\begin{aligned}
&\sum_{w\in W_n}
{\sum_{\mbox{\tiny
$
\begin{subarray}{c}
p,q\in V_w\\
|p-q|=2^{-1}\cdot3^{-n}
\end{subarray}
$
}}}
(u(p)-u(q))^2\\
\le&128\cdot{2}^{(\beta-\alpha)/2}cE(u)3^{\alpha n-(\beta-\alpha)(n+kl)}+32\cdot3^{\alpha k}\sum_{i=0}^{l-1}2^{i}\cdot3^{-(\beta-\alpha)(n+ki)}E_{n+ki}(u).
\end{aligned}
\end{equation}
Hence
$$
\begin{aligned}
&\sum_{n=1}^\infty3^{(\beta-\alpha)n}\sum_{w\in W_n}
{\sum_{\mbox{\tiny
$
\begin{subarray}{c}
p,q\in V_w\\
|p-q|=2^{-1}\cdot3^{-n}
\end{subarray}
$
}}}
(u(p)-u(q))^2\\
\le&128\cdot{2}^{(\beta-\alpha)/2}cE(u)\sum_{n=1}^\infty3^{\beta n-(\beta-\alpha)(n+kl)}+32\cdot3^{\alpha k}\sum_{n=1}^\infty\sum_{i=0}^{l-1}2^{i}\cdot3^{-(\beta-\alpha)ki}E_{n+ki}(u).
\end{aligned}
$$
Take $l=n$, then
$$
\begin{aligned}
&\sum_{n=1}^\infty3^{(\beta-\alpha)n}\sum_{w\in W_n}
{\sum_{\mbox{\tiny
$
\begin{subarray}{c}
p,q\in V_w\\
|p-q|=2^{-1}\cdot3^{-n}
\end{subarray}
$
}}}
(u(p)-u(q))^2\\
\le&128\cdot{2}^{(\beta-\alpha)/2}cE(u)\sum_{n=1}^\infty3^{\left[\beta-(\beta-\alpha)(k+1)\right]n}+32\cdot3^{\alpha k}\sum_{n=1}^\infty\sum_{i=0}^{n-1}2^{i}\cdot3^{-(\beta-\alpha)ki}E_{n+ki}(u)\\
=&128\cdot{2}^{(\beta-\alpha)/2}cE(u)\sum_{n=1}^\infty3^{\left[\beta-(\beta-\alpha)(k+1)\right]n}+32\cdot3^{\alpha k}\sum_{i=0}^\infty2^{i}\cdot3^{-(\beta-\alpha)ki}\sum_{n=i+1}^{\infty}E_{n+ki}(u)\\
\le&128\cdot{2}^{(\beta-\alpha)/2}cE(u)\sum_{n=1}^\infty3^{\left[\beta-(\beta-\alpha)(k+1)\right]n}+32\cdot3^{\alpha k}\sum_{i=0}^\infty3^{\left[1-(\beta-\alpha)k\right]i}C_1E(u),
\end{aligned}
$$
where $C_1$ is some positive constant from Corollary \ref{SC_con_cor_arbi}. Take $k\ge1$ sufficiently large such that $\beta-(\beta-\alpha)(k+1)<0$ and $1-(\beta-\alpha)k<0$, then the above two series converge, hence
$$\sum_{n=1}^\infty3^{(\beta-\alpha)n}\sum_{w\in W_n}
{\sum_{\mbox{\tiny
$
\begin{subarray}{c}
p,q\in V_w\\
|p-q|=2^{-1}\cdot3^{-n}
\end{subarray}
$
}}}
(u(p)-u(q))^2\lesssim\int_K\int_K\frac{(u(x)-u(y))^2}{|x-y|^{\alpha+\beta}}\nu(\mathrm{d} x)\nu(\mathrm{d} y).$$
\end{proof}
\begin{mythm}\label{SC_con_thm_equiv2}
For all $u\in C(K)$, we have
\begin{equation}\label{SC_con_eqn_equiv2_1}
\int_K\int_K\frac{(u(x)-u(y))^2}{|x-y|^{\alpha+\beta}}\nu(\mathrm{d} x)\nu(\mathrm{d} y)\lesssim\sum_{n=1}^\infty3^{(\beta-\alpha)n}\sum_{w\in W_n}
{\sum_{\mbox{\tiny
$
\begin{subarray}{c}
p,q\in V_w\\
|p-q|=2^{-1}\cdot3^{-n}
\end{subarray}
$
}}}
(u(p)-u(q))^2,
\end{equation}
or equivalently, for all $c\in(0,1)$, we have
\begin{equation}\label{SC_con_eqn_equiv2_2}
\begin{aligned}
&\sum_{n=2}^\infty3^{(\alpha+\beta)n}\int\limits_K\int\limits_{B(x,c3^{-n})}(u(x)-u(y))^2\nu(\mathrm{d} y)\nu(\mathrm{d} x)\\
\lesssim&\sum_{n=1}^\infty3^{(\beta-\alpha)n}\sum_{w\in W_n}
{\sum_{\mbox{\tiny
$
\begin{subarray}{c}
p,q\in V_w\\
|p-q|=2^{-1}\cdot3^{-n}
\end{subarray}
$
}}}
(u(p)-u(q))^2.
\end{aligned}
\end{equation}
\end{mythm}
\begin{proof}
Note that $V_n=\cup_{w\in W_n}V_w$; it is obvious that its cardinality satisfies $\#V_n\asymp8^n=3^{\alpha n}$. Let $\nu_n$ be the measure on $V_n$ which assigns mass $1/\#V_n$ to each point of $V_n$; then $\nu_n$ converges weakly to $\nu$.
First, for $n\ge2$ and $m>n$, we estimate
$$3^{(\alpha+\beta)n}\int_K\int_{B(x,c3^{-n})}(u(x)-u(y))^2\nu_m(\mathrm{d} y)\nu_m(\mathrm{d} x).$$
Note that
$$
\begin{aligned}
\int\limits_K\int\limits_{B(x,c3^{-n})}(u(x)-u(y))^2\nu_m(\mathrm{d} y)\nu_m(\mathrm{d} x)=\sum\limits_{w\in W_n}\int\limits_{K_w}\int\limits_{B(x,c3^{-n})}(u(x)-u(y))^2\nu_m(\mathrm{d} y)\nu_m(\mathrm{d} x).
\end{aligned}
$$
Fix $w\in W_n$; there exist at most nine $\tilde{w}\in W_n$ such that $K_{\tilde{w}}\cap K_w\ne\emptyset$, see Figure \ref{SC_con_fig_Kw}.
\begin{figure}[ht]
\centering
\begin{tikzpicture}[scale=0.5]
\draw (0,0)--(6,0)--(6,6)--(0,6)--cycle;
\draw (2,0)--(2,6);
\draw (4,0)--(4,6);
\draw (0,2)--(6,2);
\draw (0,4)--(6,4);
\draw[very thick] (2,2)--(4,2)--(4,4)--(2,4)--cycle;
\draw (3,3) node {$K_w$};
\end{tikzpicture}
\caption{A Neighborhood of $K_w$}\label{SC_con_fig_Kw}
\end{figure}
Let
$$K_w^*=
{\bigcup_{\mbox{\tiny
$
\begin{subarray}{c}
\tilde{w}\in W_n\\
K_{\tilde{w}}\cap K_w\ne\emptyset
\end{subarray}
$
}}}
K_{\tilde{w}}.$$
For all $x\in K_w$, $y\in B(x,c3^{-n})$, we have $y\in K_w^*$, hence
$$
\begin{aligned}
&\int_{K_w}\int_{B(x,c3^{-n})}(u(x)-u(y))^2\nu_m(\mathrm{d} y)\nu_m(\mathrm{d} x)\le\int_{K_w}\int_{K_w^*}(u(x)-u(y))^2\nu_m(\mathrm{d} y)\nu_m(\mathrm{d} x)\\
&=
{\sum_{\mbox{\tiny
$
\begin{subarray}{c}
\tilde{w}\in W_n\\
K_{\tilde{w}}\cap K_w\ne\emptyset
\end{subarray}
$
}}}
\int_{K_w}\int_{K_{\tilde{w}}}(u(x)-u(y))^2\nu_m(\mathrm{d} y)\nu_m(\mathrm{d} x).
\end{aligned}
$$
Note that $\myset{P_w}=K_w\cap V_{n-1}$ for all $w\in W_n$. Fix $\tilde{w},w\in W_n$ with $K_{\tilde{w}}\cap K_w\ne\emptyset$. If $P_{\tilde{w}}\ne P_w$, then either $|P_{\tilde{w}}-P_w|=2^{-1}\cdot3^{-(n-1)}$ or there exists a unique $z\in V_{n-1}$ such that
\begin{equation}\label{SC_con_eqn_med}
\lvert P_{\tilde{w}}-z\rvert=\lvert P_w-z\rvert=2^{-1}\cdot3^{-(n-1)}.
\end{equation}
Let $z_1=P_{\tilde{w}}$, $z_3=P_w$ and
$$z_2=
\begin{cases}
P_{\tilde{w}}=P_w,&\text{if }P_{\tilde{w}}=P_w,\\
P_{\tilde{w}},&\text{if }|P_{\tilde{w}}-P_w|=2^{-1}\cdot3^{-(n-1)},\\
z,&\text{if }P_{\tilde{w}}\ne P_w\text{ and }z \text{ is given by Equation (\ref{SC_con_eqn_med})}.
\end{cases}
$$
Then for all $x\in K_w$, $y\in K_{\tilde{w}}$, we have
$$
\begin{aligned}
&(u(x)-u(y))^2\\
\le&4\left[(u(y)-u(z_1))^2+(u(z_1)-u(z_2))^2+(u(z_2)-u(z_3))^2+(u(z_3)-u(x))^2\right].
\end{aligned}
$$
For $i=1,2$, we have
$$
\begin{aligned}
&\int_{K_w}\int_{K_{\tilde{w}}}(u(z_i)-u(z_{i+1}))^2\nu_m(\mathrm{d} y)\nu_m(\mathrm{d} x)=(u(z_i)-u(z_{i+1}))^2\left(\frac{\#(K_w\cap V_m)}{\#V_m}\right)^2\\
\asymp&(u(z_i)-u(z_{i+1}))^2\left(\frac{8^{m-n}}{8^m}\right)^2=3^{-2\alpha n}(u(z_i)-u(z_{i+1}))^2.
\end{aligned}
$$
Hence
$$
\begin{aligned}
&\sum_{w\in W_n}
{\sum_{\mbox{\tiny
$
\begin{subarray}{c}
\tilde{w}\in W_n\\
K_{\tilde{w}}\cap K_w\ne\emptyset
\end{subarray}
$
}}}
\int_{K_w}\int_{K_{\tilde{w}}}(u(x)-u(y))^2\nu_m(\mathrm{d} y)\nu_m(\mathrm{d} x)\\
\lesssim&3^{-\alpha n}\sum\limits_{w\in W_n}\int\limits_{K_w}(u(x)-u(P_w))^2\nu_m(\mathrm{d} x)+3^{-2\alpha n}\sum_{w\in W_{n-1}}
{\sum_{\mbox{\tiny
$
\begin{subarray}{c}
p,q\in V_w\\
|p-q|=2^{-1}\cdot3^{-(n-1)}
\end{subarray}
$
}}}
(u(p)-u(q))^2\\
\asymp&3^{-\alpha(m+n)}\sum\limits_{w\in W_n}\sum\limits_{x\in K_w\cap V_m}(u(x)-u(P_w))^2\\
&+3^{-2\alpha n}\sum_{w\in W_{n-1}}
{\sum_{\mbox{\tiny
$
\begin{subarray}{c}
p,q\in V_w\\
|p-q|=2^{-1}\cdot3^{-(n-1)}
\end{subarray}
$
}}}
(u(p)-u(q))^2.
\end{aligned}
$$
Let us estimate $(u(x)-u(P_w))^2$ for $x\in K_w\cap V_m$. We construct a finite sequence
$$p_1,\ldots,p_{4(m-n+1)},p_{4(m-n+1)+1}$$
such that
\begin{itemize}
\item $p_1=P_w$ and $p_{4(m-n+1)+1}=x$.
\item For all $k=0,\ldots,m-n$, we have
$$p_{4k+1},p_{4k+2},p_{4k+3},p_{4k+4},p_{4(k+1)+1}\in V_{n+k}.$$
\item For all $i=1,2,3,4$, we have
$$\lvert p_{4k+i}-p_{4k+i+1}\rvert=0\text{ or }2^{-1}\cdot3^{-(n+k)}.$$
\end{itemize}
Then
$$
\begin{aligned}
\left(u(x)-u(P_w)\right)^2\lesssim\sum_{k=0}^{m-n}4^{k}&\left[(u(p_{4k+1})-u(p_{4k+2}))^2+(u(p_{4k+2})-u(p_{4k+3}))^2\right.\\
&\left.+(u(p_{4k+3})-u(p_{4k+4}))^2+(u(p_{4k+4})-u(p_{4(k+1)+1}))^2\right].
\end{aligned}
$$
For all $k=n,\ldots,m$ and all $p,q\in V_k\cap K_w$ with $|p-q|=2^{-1}\cdot 3^{-k}$, the term $(u(p)-u(q))^2$ occurs in the sum a number of times of the order $8^{m-k}=3^{\alpha(m-k)}$, hence
$$
\begin{aligned}
&3^{-\alpha(m+n)}\sum\limits_{w\in W_n}\sum\limits_{x\in K_w\cap V_m}(u(x)-u(P_w))^2\\
\lesssim&3^{-\alpha(m+n)}\sum_{k=n}^{m}4^{k-n}\cdot3^{\alpha(m-k)}\sum_{w\in W_k}
{\sum_{\mbox{\tiny
$
\begin{subarray}{c}
p,q\in V_w\\
|p-q|=2^{-1}\cdot3^{-k}
\end{subarray}
$
}}}
(u(p)-u(q))^2\\
=&\sum_{k=n}^{m}4^{k-n}\cdot3^{-\alpha(n+k)}\sum_{w\in W_k}
{\sum_{\mbox{\tiny
$
\begin{subarray}{c}
p,q\in V_w\\
|p-q|=2^{-1}\cdot3^{-k}
\end{subarray}
$
}}}
(u(p)-u(q))^2.
\end{aligned}
$$
Hence
$$
\begin{aligned}
&\int_K\int_{B(x,c3^{-n})}(u(x)-u(y))^2\nu_m(\mathrm{d} y)\nu_m(\mathrm{d} x)\\
\lesssim&\sum_{k=n}^{m}4^{k-n}\cdot3^{-\alpha(n+k)}\sum_{w\in W_k}
{\sum_{\mbox{\tiny
$
\begin{subarray}{c}
p,q\in V_w\\
|p-q|=2^{-1}\cdot3^{-k}
\end{subarray}
$
}}}
(u(p)-u(q))^2\\
&+3^{-2\alpha n}\sum_{w\in W_{n-1}}
{\sum_{\mbox{\tiny
$
\begin{subarray}{c}
p,q\in V_w\\
|p-q|=2^{-1}\cdot3^{-(n-1)}
\end{subarray}
$
}}}
(u(p)-u(q))^2.
\end{aligned}
$$
Letting $m\to+\infty$, we have
\begin{equation}\label{SC_con_eqn_equiv2_3}
\begin{aligned}
&\int_K\int_{B(x,c3^{-n})}(u(x)-u(y))^2\nu(\mathrm{d} y)\nu(\mathrm{d} x)\\
\lesssim&\sum_{k=n}^\infty4^{k-n}\cdot3^{-\alpha(n+k)}\sum_{w\in W_k}
{\sum_{\mbox{\tiny
$
\begin{subarray}{c}
p,q\in V_w\\
|p-q|=2^{-1}\cdot3^{-k}
\end{subarray}
$
}}}
(u(p)-u(q))^2\\
&+3^{-2\alpha n}\sum_{w\in W_{n-1}}
{\sum_{\mbox{\tiny
$
\begin{subarray}{c}
p,q\in V_w\\
|p-q|=2^{-1}\cdot3^{-(n-1)}
\end{subarray}
$
}}}
(u(p)-u(q))^2.
\end{aligned}
\end{equation}
Hence
$$
\begin{aligned}
&\sum_{n=2}^\infty3^{(\alpha+\beta)n}\int_K\int_{B(x,c3^{-n})}(u(x)-u(y))^2\nu(\mathrm{d} y)\nu(\mathrm{d} x)\\
\lesssim&\sum_{n=2}^\infty\sum_{k=n}^\infty4^{k-n}\cdot3^{\beta n-\alpha k}\sum_{w\in W_k}
{\sum_{\mbox{\tiny
$
\begin{subarray}{c}
p,q\in V_w\\
|p-q|=2^{-1}\cdot3^{-k}
\end{subarray}
$
}}}
(u(p)-u(q))^2\\
&+\sum_{n=2}^\infty3^{(\beta-\alpha)n}\sum_{w\in W_{n-1}}
{\sum_{\mbox{\tiny
$
\begin{subarray}{c}
p,q\in V_w\\
|p-q|=2^{-1}\cdot3^{-(n-1)}
\end{subarray}
$
}}}
(u(p)-u(q))^2\\
\lesssim&\sum_{k=2}^\infty\sum_{n=2}^k4^{k-n}\cdot3^{\beta n-\alpha k}\sum_{w\in W_k}
{\sum_{\mbox{\tiny
$
\begin{subarray}{c}
p,q\in V_w\\
|p-q|=2^{-1}\cdot3^{-k}
\end{subarray}
$
}}}
(u(p)-u(q))^2\\
&+\sum_{n=1}^\infty3^{(\beta-\alpha)n}\sum_{w\in W_{n}}
{\sum_{\mbox{\tiny
$
\begin{subarray}{c}
p,q\in V_w\\
|p-q|=2^{-1}\cdot3^{-n}
\end{subarray}
$
}}}
(u(p)-u(q))^2\\
\lesssim&\sum_{n=1}^\infty3^{(\beta-\alpha)n}\sum_{w\in W_{n}}
{\sum_{\mbox{\tiny
$
\begin{subarray}{c}
p,q\in V_w\\
|p-q|=2^{-1}\cdot3^{-n}
\end{subarray}
$
}}}
(u(p)-u(q))^2.
\end{aligned}
$$
\end{proof}
\section{Proof of Theorem \ref{SC_con_thm_bound}}\label{SC_con_sec_bound}
First, we consider the lower bound. We need some preparation.
\begin{myprop}\label{SC_con_prop_lower}
Assume that $\beta\in(\alpha,+\infty)$. Let $f:[0,1]\to\mathbb{R}$ be a strictly increasing continuous function. Assume that the function $U(x,y)=f(x)$, $(x,y)\in K$ satisfies $\mathcal{E}_{\beta}(U,U)<+\infty$. Then $(\mathcal{E}_\beta,\mathcal{F}_\beta)$ is a regular Dirichlet form on $L^2(K;\nu)$.
\end{myprop}
\begin{myrmk}
The above proposition means that only \emph{one} sufficiently good function in the domain is needed to ensure that the domain is large enough.
\end{myrmk}
\begin{proof}
We only need to show that $\mathcal{F}_\beta$ is uniformly dense in $C(K)$; then $\mathcal{F}_\beta$ is dense in $L^2(K;\nu)$. By Fatou's lemma, $\mathcal{F}_\beta$ is complete under the $(\mathcal{E}_\beta)_1$-norm. It is obvious that $\mathcal{E}_\beta$ has the Markovian property. Hence $(\mathcal{E}_\beta,\mathcal{F}_\beta)$ is a Dirichlet form on $L^2(K;\nu)$. Moreover, $\mathcal{F}_\beta\cap C(K)=\mathcal{F}_\beta$ is trivially $(\mathcal{E}_\beta)_1$-dense in $\mathcal{F}_\beta$ and uniformly dense in $C(K)$. Hence $(\mathcal{E}_\beta,\mathcal{F}_\beta)$ on $L^2(K;\nu)$ is regular.
Indeed, by assumption, $U\in\mathcal{F}_\beta$, hence $\mathcal{F}_\beta\ne\emptyset$. It is obvious that $\mathcal{F}_\beta$ is a sub-algebra of $C(K)$, that is, for all $u,v\in\mathcal{F}_\beta$ and $c\in\mathbb{R}$, we have $u+v,cu,uv\in\mathcal{F}_\beta$. We show that $\mathcal{F}_\beta$ separates points. For all distinct $(x^{(1)},y^{(1)}),(x^{(2)},y^{(2)})\in K$, we have $x^{(1)}\ne x^{(2)}$ or $y^{(1)}\ne y^{(2)}$.
If $x^{(1)}\ne x^{(2)}$, then since $f$ is strictly increasing, we have
$$U(x^{(1)},y^{(1)})=f(x^{(1)})\ne f(x^{(2)})=U(x^{(2)},y^{(2)}).$$
If $y^{(1)}\ne y^{(2)}$, then let $V(x,y)=f(y)$, $(x,y)\in K$, we have $V\in\mathcal{F}_\beta$ and
$$V(x^{(1)},y^{(1)})=f(y^{(1)})\ne f(y^{(2)})=V(x^{(2)},y^{(2)}).$$
By the Stone--Weierstrass theorem, $\mathcal{F}_\beta$ is uniformly dense in $C(K)$.
\end{proof}
Now we prove the lower bound.
\begin{proof}[Proof of Lower Bound]
The point is to construct an explicit function. We define $f:[0,1]\to\mathbb{R}$ as follows. Let $f(0)=0$ and $f(1)=1$. First, we determine the values of $f$ at $1/3$ and $2/3$. We consider the minimum of the following function
$$\varphi(x,y)=3x^2+2(x-y)^2+3(1-y)^2,x,y\in\mathbb{R}.$$
By an elementary calculation, $\varphi$ attains its minimum $6/7$ at $(x,y)=(2/7,5/7)$. Assume that we have defined $f$ at the points $i/3^n$, $i=0,1,\ldots,3^n$. Then, at level $n+1$, for all $i=0,1,\ldots,3^{n}-1$, we define
$$
f\left(\frac{3i+1}{3^{n+1}}\right)=\frac{5}{7}f\left(\frac{i}{3^n}\right)+\frac{2}{7}f\left(\frac{i+1}{3^n}\right),\quad f\left(\frac{3i+2}{3^{n+1}}\right)=\frac{2}{7}f\left(\frac{i}{3^n}\right)+\frac{5}{7}f\left(\frac{i+1}{3^n}\right).
$$
By induction, $f$ is defined at all triadic points. It is obvious that $f$ is uniformly continuous on the set of all triadic points, so we extend $f$ to a continuous function on $[0,1]$. It is obvious that $f$ is increasing. For all $x,y\in[0,1]$ with $x<y$, there exist triadic points $i/3^n,(i+1)/3^n\in(x,y)$, so $f(x)\le f(i/3^n)<f((i+1)/3^n)\le f(y)$; hence $f$ is strictly increasing.
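The elementary calculation behind the values $2/7$ and $5/7$ can be verified directly: $\varphi$ is a convex quadratic, so its unique critical point is its minimum. The following Python sketch (an illustration using exact rational arithmetic) checks that the gradient of $\varphi$ vanishes at $(2/7,5/7)$ and that $\varphi(2/7,5/7)=6/7$.

```python
from fractions import Fraction as F

def phi(x, y):
    # phi(x, y) = 3x^2 + 2(x - y)^2 + 3(1 - y)^2
    return 3 * x**2 + 2 * (x - y)**2 + 3 * (1 - y)**2

x, y = F(2, 7), F(5, 7)

# gradient: d phi/dx = 6x + 4(x - y), d phi/dy = -4(x - y) - 6(1 - y)
dphi_dx = 6 * x + 4 * (x - y)
dphi_dy = -4 * (x - y) - 6 * (1 - y)

print(phi(x, y), dphi_dx, dphi_dy)  # 6/7, 0, 0
```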
Let $U(x,y)=f(x)$, $(x,y)\in K$. By induction, we have
$$\sum_{w\in W_{n+1}}
{\sum_{\mbox{\tiny
$
\begin{subarray}{c}
p,q\in V_w\\
|p-q|=2^{-1}\cdot3^{-(n+1)}
\end{subarray}
$
}}}
(U(p)-U(q))^2=\frac{6}{7}\sum_{w\in W_{n}}
{\sum_{\mbox{\tiny
$
\begin{subarray}{c}
p,q\in V_w\\
|p-q|=2^{-1}\cdot3^{-n}
\end{subarray}
$
}}}
(U(p)-U(q))^2\text{ for all }n\ge1.$$
Hence
\begin{equation}\label{SC_con_eqn_lower}
\sum_{w\in W_{n}}
{\sum_{\mbox{\tiny
$
\begin{subarray}{c}
p,q\in V_w\\
|p-q|=2^{-1}\cdot3^{-n}
\end{subarray}
$
}}}
(U(p)-U(q))^2=\left(\frac{6}{7}\right)^n\text{ for all }n\ge1.
\end{equation}
For all $\beta\in(\log8/\log3,\log(8\cdot7/6)/\log3)$, we have $3^{\beta-\alpha}<7/6$. By Equation (\ref{SC_con_eqn_lower}), we have
$$\sum_{n=1}^\infty3^{(\beta-\alpha)n}\sum_{w\in W_n}
{\sum_{\mbox{\tiny
$
\begin{subarray}{c}
p,q\in V_w\\
|p-q|=2^{-1}\cdot3^{-n}
\end{subarray}
$
}}}
(U(p)-U(q))^2<+\infty.$$
By Lemma \ref{SC_con_lem_equiv}, $\mathcal{E}_\beta(U,U)<+\infty$. By Proposition \ref{SC_con_prop_lower}, $(\mathcal{E}_\beta,\mathcal{F}_\beta)$ is a regular Dirichlet form on $L^2(K;\nu)$ for all $\beta\in(\log8/\log3,\log(8\cdot7/6)/\log3)$. Hence
$$\beta_*\ge\frac{\log(8\cdot\frac{7}{6})}{\log3}.$$
\end{proof}
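The convergence used in the last step is that of the geometric series $\sum_{n\ge1}\left(3^{\beta-\alpha}\cdot\frac{6}{7}\right)^n$, whose common ratio drops below $1$ exactly at $\beta=\log(8\cdot\frac{7}{6})/\log3$. The following Python sketch (a numerical illustration only) checks this threshold.

```python
import math

alpha = math.log(8) / math.log(3)
beta_lower = math.log(8 * 7 / 6) / math.log(3)

def ratio(beta):
    # common ratio of the geometric series sum_n (3^{beta - alpha} * 6/7)^n
    return 3 ** (beta - alpha) * 6 / 7

print(ratio(beta_lower))  # equals (7/6) * (6/7) = 1 up to rounding
```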
\begin{myrmk}
\normalfont
The construction of the above function is similar to that given in the proof of \cite[Theorem 2.6]{Bar13}. Indeed, the above function is constructed in a self-similar way. Let $f_n:[0,1]\to\mathbb{R}$ be given by $f_0(x)=x$, $x\in[0,1]$ and for all $n\ge0$
$$
f_{n+1}(x)=
\begin{cases}
\frac{2}{7}f_n(3x),&\text{if }0\le x\le\frac{1}{3},\\
\frac{3}{7}f_n(3x-1)+\frac{2}{7},&\text{if }\frac{1}{3}<x\le\frac{2}{3},\\
\frac{2}{7}f_n(3x-2)+\frac{5}{7},&\text{if }\frac{2}{3}<x\le1.
\end{cases}
$$
See Figure \ref{SC_con_fig_f012} for the figures of $f_0,f_1,f_2$.
\begin{figure}[ht]
\centering
\subfigure[$f_0$]{
\begin{tikzpicture}[scale=2.5]
\draw[->] (0,0)--(1.5,0);
\draw[->] (0,0)--(0,1.5);
\draw[dashed] (1,0)--(1,1)--(0,1);
\draw[thick] (0,0)--(1,1);
\draw (-0.2,-0.2) node {\footnotesize{$O$}};
\draw (1,-0.2) node {\footnotesize{$1$}};
\draw (-0.2,1) node {\footnotesize{$1$}};
\end{tikzpicture}
}
\hspace{0.00in}
\subfigure[$f_1$]{
\begin{tikzpicture}[scale=2.5]
\draw[->] (0,0)--(1.5,0);
\draw[->] (0,0)--(0,1.5);
\draw[dashed] (1,0)--(1,1)--(0,1);
\draw[dashed] (1/3,0)--(1/3,2/7)--(0,2/7);
\draw[dashed] (2/3,0)--(2/3,5/7)--(0,5/7);
\draw[thick] (0,0)--(1/3,2/7)--(2/3,5/7)--(1,1);
\draw (-0.2,-0.2) node {\footnotesize{$O$}};
\draw (1,-0.2) node {\footnotesize{$1$}};
\draw (-0.2,1) node {\footnotesize{$1$}};
\draw (-0.2,2/7) node {\scriptsize{${2}/{7}$}};
\draw (-0.2,5/7) node {\scriptsize{${5}/{7}$}};
\draw (1/3,-0.2) node {\scriptsize{$\frac{1}{3}$}};
\draw (2/3,-0.2) node {\scriptsize{$\frac{2}{3}$}};
\end{tikzpicture}
}
\hspace{0.0in}
\subfigure[$f_2$]{
\begin{tikzpicture}[scale=2.5]
\draw[->] (0,0)--(1.5,0);
\draw[->] (0,0)--(0,1.5);
\draw[dashed] (1,0)--(1,1)--(0,1);
\draw[dashed] (1/3,0)--(1/3,2/7)--(0,2/7);
\draw[dashed] (2/3,0)--(2/3,5/7)--(0,5/7);
\draw[dashed] (1/9,0)--(1/9,4/49)--(0,4/49);
\draw[dashed] (2/9,0)--(2/9,10/49)--(0,10/49);
\draw[dashed] (4/9,0)--(4/9,20/49)--(0,20/49);
\draw[dashed] (5/9,0)--(5/9,29/49)--(0,29/49);
\draw[dashed] (7/9,0)--(7/9,39/49)--(0,39/49);
\draw[dashed] (8/9,0)--(8/9,45/49)--(0,45/49);
\draw[thick] (0,0)--(1/9,4/49)--(2/9,10/49)--(1/3,2/7)--(4/9,2/7+3/7*2/7)--(5/9,2/7+3/7*5/7)--(2/3,5/7)--(7/9,39/49)--(8/9,45/49)--(1,1);
\draw (-0.2,-0.2) node {\footnotesize{$O$}};
\draw (1,-0.2) node {\tiny{$1$}};
\draw (-0.2,1) node {\tiny{$1$}};
\draw (-0.2,2/7) node {\tiny{${2}/{7}$}};
\draw (-0.2,5/7) node {\tiny{${5}/{7}$}};
\draw (1/3,-0.2) node {\tiny{$\frac{1}{3}$}};
\draw (2/3,-0.2) node {\tiny{$\frac{2}{3}$}};
\draw (1/9,-0.2) node {\tiny{$\frac{1}{9}$}};
\draw (2/9,-0.2) node {\tiny{$\frac{2}{9}$}};
\draw (4/9,-0.2) node {\tiny{$\frac{4}{9}$}};
\draw (5/9,-0.2) node {\tiny{$\frac{5}{9}$}};
\draw (7/9,-0.2) node {\tiny{$\frac{7}{9}$}};
\draw (8/9,-0.2) node {\tiny{$\frac{8}{9}$}};
\draw (-0.2,4/49) node {\tiny{${4}/{49}$}};
\draw (-0.2,10/49) node {\tiny{${10}/{49}$}};
\draw (-0.2,20/49) node {\tiny{${20}/{49}$}};
\draw (-0.2,29/49) node {\tiny{${29}/{49}$}};
\draw (-0.2,39/49) node {\tiny{${39}/{49}$}};
\draw (-0.2,45/49) node {\tiny{${45}/{49}$}};
\end{tikzpicture}
}
\caption{The Figures of $f_0,f_1,f_2$}\label{SC_con_fig_f012}
\end{figure}
It is obvious that
$$f_n(\frac{i}{3^n})=f(\frac{i}{3^n})\text{ for all }i=0,\ldots,3^n,n\ge0,$$
and
$$\max_{x\in[0,1]}\lvert f_{n+1}(x)-f_n(x)\rvert\le\frac{3}{7}\max_{x\in[0,1]}\lvert f_{n}(x)-f_{n-1}(x)\rvert\text{ for all }n\ge1,$$
hence $f_n$ converges uniformly to $f$ on $[0,1]$. Let $g_1,g_2,g_3:\mathbb{R}^2\to\mathbb{R}^2$ be given by
$$g_1(x,y)=\left(\frac{1}{3}x,\frac{2}{7}y\right),g_2(x,y)=\left(\frac{1}{3}x+\frac{1}{3},\frac{3}{7}y+\frac{2}{7}\right),g_3(x,y)=\left(\frac{1}{3}x+\frac{2}{3},\frac{2}{7}y+\frac{5}{7}\right).$$
Then $\myset{(x,f(x)):x\in[0,1]}$ is the unique non-empty compact set $G$ in $\mathbb{R}^2$ satisfying
$$G=g_1(G)\cup g_2(G)\cup g_3(G).$$
\end{myrmk}
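The self-similar recursion for $f_n$ above can be implemented directly. The following Python sketch (an illustration, using exact rational arithmetic; the helper \texttt{f} is ours) evaluates the limit function $f$ at triadic rationals and reproduces the values $2/7$, $5/7$, $4/49$, $29/49$ shown in Figure \ref{SC_con_fig_f012}.

```python
from fractions import Fraction as F

def f(x):
    # evaluate f at a triadic rational x in [0, 1] via the self-similar
    # structure: f = (2/7) f(3x), (3/7) f(3x-1) + 2/7, (2/7) f(3x-2) + 5/7
    # on [0, 1/3], (1/3, 2/3], (2/3, 1] respectively
    if x == 0:
        return F(0)
    if x == 1:
        return F(1)
    if x <= F(1, 3):
        return F(2, 7) * f(3 * x)
    if x <= F(2, 3):
        return F(3, 7) * f(3 * x - 1) + F(2, 7)
    return F(2, 7) * f(3 * x - 2) + F(5, 7)

vals = [f(F(i, 9)) for i in range(10)]
print(vals)  # values at i/9, matching the labels in the figure of f_2
```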
Second, we consider the upper bound. We shrink the SC to another fractal. Let $\mathcal{C}$ denote the Cantor ternary set in $[0,1]$. Then $[0,1]\times\mathcal{C}$ is the unique non-empty compact set $\tilde{K}$ in $\mathbb{R}^2$ satisfying
$$\tilde{K}=\cup_{i=0,1,2,4,5,6}f_i(\tilde{K}).$$
Let
$$\tilde{V}_0=\myset{p_0,p_1,p_2,p_4,p_5,p_6},\tilde{V}_{n+1}=\cup_{i=0,1,2,4,5,6}f_i(\tilde{V}_n)\text{ for all }n\ge0.$$
Then $\myset{\tilde{V}_n}$ is an increasing sequence of finite sets and $[0,1]\times\mathcal{C}$ is the closure of $\cup_{n=0}^\infty\tilde{V}_n$. Let $\tilde{W}_0=\myset{\emptyset}$ and
$$\tilde{W}_n=\myset{w=w_1\ldots w_n:w_i=0,1,2,4,5,6,i=1,\ldots,n}\text{ for all }n\ge1.$$
For all $w=w_1\ldots w_n\in\tilde{W}_n$, let
$$\tilde{V}_w=f_{w_1}\circ\ldots\circ f_{w_n}(\tilde{V}_0).$$
\begin{proof}[Proof of Upper Bound]
Assume that $(\mathcal{E}_\beta,\mathcal{F}_\beta)$ is a regular Dirichlet form on $L^2(K;\nu)$, then there exists $u\in\mathcal{F}_\beta$ such that $u|_{\myset{0}\times[0,1]}=0$ and $u|_{\myset{1}\times[0,1]}=1$. By Lemma \ref{SC_con_lem_equiv}, we have
\begin{equation}\label{SC_con_eqn_upper}
\begin{aligned}
+\infty&>\sum_{n=1}^\infty3^{(\beta-\alpha)n}\sum_{w\in W_n}
{\sum_{\mbox{\tiny
$
\begin{subarray}{c}
p,q\in V_w\\
|p-q|=2^{-1}\cdot3^{-n}
\end{subarray}
$
}}}
(u(p)-u(q))^2\\
&\ge\sum_{n=1}^\infty3^{(\beta-\alpha)n}\sum_{w\in\tilde{W}_n}
{\sum_{\mbox{\tiny
$
\begin{subarray}{c}
p,q\in\tilde{V}_w\\
|p-q|=2^{-1}\cdot3^{-n}
\end{subarray}
$
}}}
(u(p)-u(q))^2\\
&=\sum_{n=1}^\infty3^{(\beta-\alpha)n}\sum_{w\in\tilde{W}_n}
{\sum_{\mbox{\tiny
$
\begin{subarray}{c}
p,q\in\tilde{V}_w\\
|p-q|=2^{-1}\cdot3^{-n}
\end{subarray}
$
}}}
((u|_{[0,1]\times\mathcal{C}})(p)-(u|_{[0,1]\times\mathcal{C}})(q))^2\\
&\ge\sum_{n=1}^\infty3^{(\beta-\alpha)n}\sum_{w\in\tilde{W}_n}
{\sum_{\mbox{\tiny
$
\begin{subarray}{c}
p,q\in\tilde{V}_w\\
|p-q|=2^{-1}\cdot3^{-n}
\end{subarray}
$
}}}
(\tilde{u}(p)-\tilde{u}(q))^2,
\end{aligned}
\end{equation}
where $\tilde{u}$ is the function on $[0,1]\times\mathcal{C}$ that is the minimizer of
$$\sum_{n=1}^\infty3^{(\beta-\alpha)n}\sum_{w\in\tilde{W}_n}
{\sum_{\mbox{\tiny
$
\begin{subarray}{c}
p,q\in\tilde{V}_w\\
|p-q|=2^{-1}\cdot3^{-n}
\end{subarray}
$
}}}
(\tilde{u}(p)-\tilde{u}(q))^2:\tilde{u}|_{\myset{0}\times\mathcal{C}}=0,\tilde{u}|_{\myset{1}\times\mathcal{C}}=1,\tilde{u}\in C([0,1]\times\mathcal{C}).$$
By the symmetry of $[0,1]\times\mathcal{C}$, the minimizer is $\tilde{u}(x,y)=x$ for all $(x,y)\in[0,1]\times\mathcal{C}$. By induction, we have
$$\sum_{w\in\tilde{W}_{n+1}}
{\sum_{\mbox{\tiny
$
\begin{subarray}{c}
p,q\in\tilde{V}_w\\
|p-q|=2^{-1}\cdot3^{-(n+1)}
\end{subarray}
$
}}}
(\tilde{u}(p)-\tilde{u}(q))^2=\frac{2}{3}\sum_{w\in\tilde{W}_n}
{\sum_{\mbox{\tiny
$
\begin{subarray}{c}
p,q\in\tilde{V}_w\\
|p-q|=2^{-1}\cdot3^{-n}
\end{subarray}
$
}}}
(\tilde{u}(p)-\tilde{u}(q))^2\text{ for all }n\ge1,$$
hence
$$\sum_{w\in\tilde{W}_{n}}
{\sum_{\mbox{\tiny
$
\begin{subarray}{c}
p,q\in\tilde{V}_w\\
|p-q|=2^{-1}\cdot3^{-n}
\end{subarray}
$
}}}
(\tilde{u}(p)-\tilde{u}(q))^2=\left(\frac{2}{3}\right)^n\text{ for all }n\ge1.$$
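The identity above can be checked by brute-force enumeration for small $n$. The following sketch is illustrative only; it assumes the conventions fixed earlier in the paper, namely that $p_i$ are the corners and edge midpoints of the unit square and $f_i(x)=(x+2p_i)/3$, so that the indices $\{0,1,2,4,5,6\}$ generate $[0,1]\times\mathcal{C}$.

```python
from fractions import Fraction as F
from itertools import product

# Assumed conventions (an assumption about the paper's earlier definitions):
# p_i are the corners and edge midpoints of the unit square, f_i(x) = (x + 2 p_i)/3,
# and the indices {0,1,2,4,5,6} generate [0,1] x C.
P = {0: (F(0), F(0)), 1: (F(1, 2), F(0)), 2: (F(1), F(0)),
     4: (F(1), F(1)), 5: (F(1, 2), F(1)), 6: (F(0), F(1))}

def f(i, p):
    """The contraction f_i applied to a point p."""
    return ((p[0] + 2 * P[i][0]) / 3, (p[1] + 2 * P[i][1]) / 3)

def energy(n):
    """Sum over w in W~_n and pairs p,q in V~_w with |p-q| = 3^{-n}/2
    of (u(p) - u(q))^2 for u(x, y) = x, in exact rational arithmetic."""
    d2 = F(1, 4) * F(1, 9) ** n          # squared distance (3^{-n}/2)^2
    total = F(0)
    for w in product(P, repeat=n):
        pts = list(P.values())
        for i in reversed(w):             # V~_w = f_{w_1} o ... o f_{w_n}(V~_0)
            pts = [f(i, p) for p in pts]
        for a in range(len(pts)):
            for b in range(a + 1, len(pts)):
                dx, dy = pts[a][0] - pts[b][0], pts[a][1] - pts[b][1]
                if dx * dx + dy * dy == d2:
                    total += dx * dx      # qualifying pairs are horizontal, so (u(p)-u(q))^2 = dx^2
    return total

for n in (1, 2, 3):
    assert energy(n) == F(2, 3) ** n     # matches the closed form (2/3)^n
```

Each level-$n$ cell contributes four horizontal pairs of squared increment $3^{-2n}/4$, and there are $6^n$ cells, giving $6^n\cdot 9^{-n}=(2/3)^n$.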
By Equation (\ref{SC_con_eqn_upper}), we have
$$\sum_{n=1}^\infty3^{(\beta-\alpha)n}\left(\frac{2}{3}\right)^n<+\infty,$$
hence $3^{\beta-\alpha}\cdot\frac{2}{3}<1$. Since $\alpha=\log8/\log3$, this gives $\beta<\log(8\cdot\frac{3}{2})/\log3$. Hence
$$\beta_*\le\frac{\log(8\cdot\frac{3}{2})}{\log3}.$$
\end{proof}
\section{Resistance Estimates}\label{SC_con_sec_resistance}
In this section, we give resistance estimates using electrical network techniques.
We consider two sequences of finite graphs related to $V_n$ and $W_n$, respectively.
For all $n\ge1$, let $\mathcal{V}_n$ be the graph with vertex set $V_n$ and edge set given by
$$\myset{(p,q):p,q\in V_n,|p-q|=2^{-1}\cdot3^{-n}}.$$
For example, the graph $\mathcal{V}_2$ is shown in Figure \ref{SC_con_fig_V2}.
\begin{figure}[ht]
\centering
\begin{tikzpicture}
\foreach \x in {0,1,...,9}
\draw (\x,0)--(\x,9);
\foreach \y in {0,1,...,9}
\draw (0,\y)--(9,\y);
\draw[fill=white] (3,3)--(6,3)--(6,6)--(3,6)--cycle;
\foreach \x in {0,1,...,9}
\foreach \y in {0,0.5,1,...,9}
\draw[fill=black] (\x,\y) circle (0.06);
\foreach \y in {0,1,...,9}
\foreach \x in {0,0.5,1,...,9}
\draw[fill=black] (\x,\y) circle (0.06);
\draw[fill=white,draw=white] (3.25,3.25)--(5.75,3.25)--(5.75,5.75)--(3.25,5.75)--cycle;
\end{tikzpicture}
\caption{$\mathcal{V}_2$}\label{SC_con_fig_V2}
\end{figure}
Let $\mathcal{W}_n$ be the graph with vertex set $W_n$ and edge set given by
$$\myset{(w^{(1)},w^{(2)}):w^{(1)},w^{(2)}\in W_n,\mathrm{dim}_{\mathcal{H}}\left(K_{w^{(1)}}\cap K_{w^{(2)}}\right)=1}.$$
For example, the graph $\mathcal{W}_2$ is shown in Figure \ref{SC_con_fig_W2}.
\begin{figure}[ht]
\centering
\begin{tikzpicture}
\draw (0,0)--(8,0)--(8,8)--(0,8)--cycle;
\draw (2,0)--(2,8);
\draw (6,0)--(6,8);
\draw (0,2)--(8,2);
\draw (0,6)--(8,6);
\draw (3,0)--(3,2);
\draw (5,0)--(5,2);
\draw (0,3)--(2,3);
\draw (0,5)--(2,5);
\draw (6,3)--(8,3);
\draw (6,5)--(8,5);
\draw (3,6)--(3,8);
\draw (5,6)--(5,8);
\draw (2,1)--(3,1);
\draw (5,1)--(6,1);
\draw (1,2)--(1,3);
\draw (7,2)--(7,3);
\draw (1,5)--(1,6);
\draw (7,5)--(7,6);
\draw (2,7)--(3,7);
\draw (5,7)--(6,7);
\draw[fill=black] (0,0) circle (0.06);
\draw[fill=black] (1,0) circle (0.06);
\draw[fill=black] (2,0) circle (0.06);
\draw[fill=black] (3,0) circle (0.06);
\draw[fill=black] (4,0) circle (0.06);
\draw[fill=black] (5,0) circle (0.06);
\draw[fill=black] (6,0) circle (0.06);
\draw[fill=black] (7,0) circle (0.06);
\draw[fill=black] (8,0) circle (0.06);
\draw[fill=black] (0,1) circle (0.06);
\draw[fill=black] (2,1) circle (0.06);
\draw[fill=black] (3,1) circle (0.06);
\draw[fill=black] (5,1) circle (0.06);
\draw[fill=black] (6,1) circle (0.06);
\draw[fill=black] (8,1) circle (0.06);
\draw[fill=black] (0,2) circle (0.06);
\draw[fill=black] (1,2) circle (0.06);
\draw[fill=black] (2,2) circle (0.06);
\draw[fill=black] (3,2) circle (0.06);
\draw[fill=black] (4,2) circle (0.06);
\draw[fill=black] (5,2) circle (0.06);
\draw[fill=black] (6,2) circle (0.06);
\draw[fill=black] (7,2) circle (0.06);
\draw[fill=black] (8,2) circle (0.06);
\draw[fill=black] (0,3) circle (0.06);
\draw[fill=black] (1,3) circle (0.06);
\draw[fill=black] (2,3) circle (0.06);
\draw[fill=black] (6,3) circle (0.06);
\draw[fill=black] (7,3) circle (0.06);
\draw[fill=black] (8,3) circle (0.06);
\draw[fill=black] (0,4) circle (0.06);
\draw[fill=black] (2,4) circle (0.06);
\draw[fill=black] (6,4) circle (0.06);
\draw[fill=black] (8,4) circle (0.06);
\draw[fill=black] (0,5) circle (0.06);
\draw[fill=black] (1,5) circle (0.06);
\draw[fill=black] (2,5) circle (0.06);
\draw[fill=black] (6,5) circle (0.06);
\draw[fill=black] (7,5) circle (0.06);
\draw[fill=black] (8,5) circle (0.06);
\draw[fill=black] (0,6) circle (0.06);
\draw[fill=black] (1,6) circle (0.06);
\draw[fill=black] (2,6) circle (0.06);
\draw[fill=black] (3,6) circle (0.06);
\draw[fill=black] (4,6) circle (0.06);
\draw[fill=black] (5,6) circle (0.06);
\draw[fill=black] (6,6) circle (0.06);
\draw[fill=black] (7,6) circle (0.06);
\draw[fill=black] (8,6) circle (0.06);
\draw[fill=black] (0,7) circle (0.06);
\draw[fill=black] (2,7) circle (0.06);
\draw[fill=black] (3,7) circle (0.06);
\draw[fill=black] (5,7) circle (0.06);
\draw[fill=black] (6,7) circle (0.06);
\draw[fill=black] (8,7) circle (0.06);
\draw[fill=black] (0,8) circle (0.06);
\draw[fill=black] (1,8) circle (0.06);
\draw[fill=black] (2,8) circle (0.06);
\draw[fill=black] (3,8) circle (0.06);
\draw[fill=black] (4,8) circle (0.06);
\draw[fill=black] (5,8) circle (0.06);
\draw[fill=black] (6,8) circle (0.06);
\draw[fill=black] (7,8) circle (0.06);
\draw[fill=black] (8,8) circle (0.06);
\end{tikzpicture}
\caption{$\mathcal{W}_2$}\label{SC_con_fig_W2}
\end{figure}
On $\mathcal{V}_n$, the energy
$$
{\sum_{\mbox{\tiny
$
\begin{subarray}{c}
p,q\in V_n\\
|p-q|=2^{-1}\cdot3^{-n}
\end{subarray}
$
}}}
(u(p)-u(q))^2,u\in l(V_n),$$
corresponds to the weighted graph in which all edges have conductance $1$, while the energy
$$\sum_{w\in W_n}
{\sum_{\mbox{\tiny
$
\begin{subarray}{c}
p,q\in V_w\\
|p-q|=2^{-1}\cdot3^{-n}
\end{subarray}
$
}}}
(u(p)-u(q))^2,u\in l(V_n),$$
corresponds to a weighted graph in which some edges have conductance $1$ and the other edges have conductance $2$, since each term $(u(p)-u(q))^2$ is counted once or twice according to whether the pair $p,q$ belongs to one cell or to two adjacent cells.
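The once-or-twice counting can be made concrete at level $n=1$: a pair $p,q$ at distance $3^{-1}/2$ lying on a side shared by two adjacent cells is counted in both cells' sums. A small enumeration sketch, where the coordinates $p_0,\ldots,p_7$ and the maps $f_i(x)=(x+2p_i)/3$ are assumptions about the conventions fixed earlier in the paper:

```python
from fractions import Fraction as F
from collections import Counter

# Assumed conventions: p_0,...,p_7 are the corners and edge midpoints of the
# unit square, listed counterclockwise from the origin, and f_i(x) = (x + 2 p_i)/3.
P = [(F(0), F(0)), (F(1, 2), F(0)), (F(1), F(0)), (F(1), F(1, 2)),
     (F(1), F(1)), (F(1, 2), F(1)), (F(0), F(1)), (F(0), F(1, 2))]

mult = Counter()   # pair of points -> number of cells whose sum counts it
for i in range(8):
    cell = [((x + 2 * P[i][0]) / 3, (y + 2 * P[i][1]) / 3) for x, y in P]
    for a in range(8):
        for b in range(a + 1, 8):
            dx, dy = cell[a][0] - cell[b][0], cell[a][1] - cell[b][1]
            if dx * dx + dy * dy == F(1, 36):   # |p-q| = 3^{-1}/2
                mult[frozenset([cell[a], cell[b]])] += 1

# 32 pairs interior to a single cell (conductance 1) and 16 pairs on shared
# sides (conductance 2): each of the 8 shared sides carries 2 such pairs.
assert Counter(mult.values()) == Counter({1: 32, 2: 16})
```

The $8$ cells contribute $8$ pairs each, and the $64=32\cdot1+16\cdot2$ pair instances match the multiplicity count.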
Since
$$
\begin{aligned}
{\sum_{\mbox{\tiny
$
\begin{subarray}{c}
p,q\in V_n\\
|p-q|=2^{-1}\cdot3^{-n}
\end{subarray}
$
}}}
(u(p)-u(q))^2&\le\sum_{w\in W_n}
{\sum_{\mbox{\tiny
$
\begin{subarray}{c}
p,q\in V_w\\
|p-q|=2^{-1}\cdot3^{-n}
\end{subarray}
$
}}}
(u(p)-u(q))^2\\
&\le 2
{\sum_{\mbox{\tiny
$
\begin{subarray}{c}
p,q\in V_n\\
|p-q|=2^{-1}\cdot3^{-n}
\end{subarray}
$
}}}
(u(p)-u(q))^2,
\end{aligned}
$$
we use
$$D_n(u,u):=\sum_{w\in W_n}
{\sum_{\mbox{\tiny
$
\begin{subarray}{c}
p,q\in V_w\\
|p-q|=2^{-1}\cdot3^{-n}
\end{subarray}
$
}}}
(u(p)-u(q))^2,u\in l(V_n),$$
as the energy on $\mathcal{V}_n$. Assume that $A,B$ are two disjoint subsets of $V_n$. Let
$$R_n(A,B)=\left(\inf\myset{D_n(u,u):u|_A=0,u|_B=1,u\in l(V_n)}\right)^{-1}.$$
Denote
$$R_n^V=R_n\left(V_n\cap\left(\myset{0}\times[0,1]\right),V_n\cap\left(\myset{1}\times[0,1]\right)\right),$$
$$R_n(x,y)=R_n(\myset{x},\myset{y}),x,y\in V_n.$$
It is well known that the effective resistance $R_n$ is a metric on $V_n$, hence
$$R_n(x,y)\le R_n(x,z)+R_n(z,y)\text{ for all }x,y,z\in V_n.$$
On $\mathcal{W}_n$, the energy
$$\mathfrak{D}_n(u,u):=\sum_{w^{(1)}\sim_nw^{(2)}}(u(w^{(1)})-u(w^{(2)}))^2,u\in l(W_n),$$
where $w^{(1)}\sim_nw^{(2)}$ means that $(w^{(1)},w^{(2)})$ is an edge in $\mathcal{W}_n$, corresponds to the weighted graph in which all edges have conductance $1$. Assume that $A,B$ are two disjoint subsets of $W_n$. Let
$$\mathfrak{R}_n(A,B)=\left(\inf\myset{\mathfrak{D}_n(u,u):u|_A=0,u|_B=1,u\in l(W_n)}\right)^{-1}.$$
Denote
$$\mathfrak{R}_n(w^{(1)},w^{(2)})=\mathfrak{R}_n(\myset{w^{(1)}},\myset{w^{(2)}}),w^{(1)},w^{(2)}\in W_n.$$
It is well known that $\mathfrak{R}_n$ is a metric on $W_n$, hence
$$\mathfrak{R}_n(w^{(1)},w^{(2)})\le\mathfrak{R}_n(w^{(1)},w^{(3)})+\mathfrak{R}_n(w^{(3)},w^{(2)})\text{ for all }w^{(1)},w^{(2)},w^{(3)}\in W_n.$$
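For small $n$ these effective resistances can be computed exactly from the graph Laplacian: for singletons, the effective resistance equals $(e_a-e_b)^{T}L^{+}(e_a-e_b)$, where $L^{+}$ is the Moore-Penrose pseudoinverse of the Laplacian $L$. The sketch below treats $n=1$ under the assumption (not stated in this section) that the eight level-$1$ cells are labelled counterclockwise, so that $\mathcal{W}_1$ is the $8$-cycle $0\sim1\sim\cdots\sim7\sim0$.

```python
import numpy as np

# Assumption: the eight level-1 cells are labelled counterclockwise, so two
# cells share a side exactly when their labels differ by 1 mod 8; then W_1 is
# the 8-cycle with unit conductances.
n_vertices = 8
edges = [(i, (i + 1) % n_vertices) for i in range(n_vertices)]

L = np.zeros((n_vertices, n_vertices))    # graph Laplacian
for a, b in edges:
    L[a, a] += 1; L[b, b] += 1
    L[a, b] -= 1; L[b, a] -= 1
Lp = np.linalg.pinv(L)                    # Moore-Penrose pseudoinverse

def R(a, b):
    """Effective resistance between vertices a and b."""
    e = np.zeros(n_vertices); e[a] = 1.0; e[b] = -1.0
    return float(e @ Lp @ e)

# Opposite cells: two 4-edge paths in parallel, 4*4/(4+4) = 2.
assert abs(R(0, 4) - 2.0) < 1e-9
# Adjacent cells: 1 ohm in parallel with a 7-edge path, 7/8.
assert abs(R(0, 1) - 7 / 8) < 1e-9
# Effective resistance satisfies the triangle inequality used above.
assert R(0, 4) <= R(0, 2) + R(2, 4) + 1e-12
```

The same Laplacian computation applies verbatim to $\mathcal{V}_n$ and to larger $\mathcal{W}_n$ once their edge lists are built.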
The main result of this section is as follows.
\begin{mythm}\label{SC_con_thm_resist}
There exists some positive constant $\rho\in\left[7/6,3/2\right]$ such that for all $n\ge1$
$$R_n^V\asymp\rho^n,$$
$$R_n(p_0,p_1)=\ldots=R_n(p_6,p_7)=R_n(p_7,p_0)\asymp\rho^n,$$
$$\mathfrak{R}_n(0^n,1^n)=\ldots=\mathfrak{R}_n(6^n,7^n)=\mathfrak{R}_n(7^n,0^n)\asymp\rho^n.$$
\end{mythm}
\begin{myrmk}
By the triangle inequality, for all $i,j=0,\ldots,7$ and $n\ge1$
$$R_{n}(p_i,p_j)\lesssim\rho^n,$$
$$\mathfrak{R}_{n}(i^n,j^n)\lesssim\rho^n.$$
\end{myrmk}
We have a direct corollary as follows.
\begin{mycor}\label{SC_con_cor_resist_upper}
For all $n\ge1,p,q\in V_n,w^{(1)},w^{(2)}\in W_n$
$$R_n(p,q)\lesssim\rho^n,$$
$$\mathfrak{R}_n(w^{(1)},w^{(2)})\lesssim\rho^n.$$
\end{mycor}
\begin{proof}
It suffices to show that $\mathfrak{R}_n(w,0^n)\lesssim\rho^n$ for all $w\in W_n,n\ge1$. Then for all $w^{(1)},w^{(2)}\in W_n$
$$\mathfrak{R}_n(w^{(1)},w^{(2)})\le\mathfrak{R}_n(w^{(1)},0^n)+\mathfrak{R}_n(w^{(2)},0^n)\lesssim\rho^n.$$
The proof of $R_n(p,q)\lesssim\rho^n$ for all $p,q\in V_n,n\ge1$ is similar.
Indeed, for all $n\ge1,w=w_1\ldots w_n\in W_n$, we construct a finite sequence as follows.
$$
\begin{aligned}
w^{(1)}&=w_1\ldots w_{n-2}w_{n-1}w_n=w,\\
w^{(2)}&=w_1\ldots w_{n-2}w_{n-1}w_{n-1},\\
w^{(3)}&=w_1\ldots w_{n-2}w_{n-2}w_{n-2},\\
&\ldots\\
w^{(n)}&=w_1\ldots w_1w_1w_1,\\
w^{(n+1)}&=0\ldots 000=0^n.
\end{aligned}
$$
For all $i=1,\ldots,n-1$, by the cutting technique
$$
\begin{aligned}
&\mathfrak{R}_n(w^{(i)},w^{(i+1)})=\mathfrak{R}_n(w_1\ldots w_{n-i}w_{n-i+1}\ldots w_{n-i+1},w_1\ldots w_{n-i}w_{n-i}\ldots w_{n-i})\\
\le&\mathfrak{R}_i(w_{n-i+1}\ldots w_{n-i+1},w_{n-i}\ldots w_{n-i})=\mathfrak{R}_i(w_{n-i+1}^i,w_{n-i}^i)\lesssim\rho^i.
\end{aligned}
$$
Since $\mathfrak{R}_n(w^{(n)},w^{(n+1)})=\mathfrak{R}_n(w_1^n,0^n)\lesssim\rho^n$, we have
$$\mathfrak{R}_n(w,0^n)=\mathfrak{R}_n(w^{(1)},w^{(n+1)})\le\sum_{i=1}^n\mathfrak{R}_n(w^{(i)},w^{(i+1)})\lesssim\sum_{i=1}^n\rho^i\lesssim\rho^n.$$
\end{proof}
We need the following results as preparation.
Firstly, we have resistance estimates in some symmetric cases.
\begin{mythm}\label{SC_con_thm_resist1}
There exists some positive constant $\rho\in[7/6,3/2]$ such that for all $n\ge1$
$$R_n^V\asymp\rho^n,$$
$$R_n(p_1,p_5)=R_n(p_3,p_7)\asymp\rho^n,$$
$$R_n(p_0,p_4)=R_n(p_2,p_6)\asymp\rho^n.$$
\end{mythm}
\begin{proof}
The proof is similar to those of \cite[Theorem 5.1]{BB90} and \cite[Theorem 6.1]{McG02}, where the flow technique and the potential technique are used; here we need the discrete versions instead of the continuous ones.
It follows that there exists some positive constant $C$ such that
$$\frac{1}{C}x_nx_m\le x_{n+m}\le Cx_nx_m\text{ for all }n,m\ge1,$$
where $x_n$ denotes any of the above resistances. Then $\myset{\log(Cx_n)}$ is subadditive and $\myset{\log(x_n/C)}$ is superadditive, so by Fekete's lemma the limit $\rho=\lim_{n\to+\infty}x_n^{1/n}$ exists and satisfies $x_n/C\le\rho^n\le Cx_n$ for all $n\ge1$, that is, $x_n\asymp\rho^n$. Since the above resistances are comparable to each other, there exists \emph{one} positive constant $\rho$ such that all of them are equivalent to $\rho^n$ for all $n\ge1$.
By the shorting and cutting techniques, we have $\rho\in[7/6,3/2]$, see \cite[Equation (2.6)]{Bar13} or \cite[Remarks 5.4]{BB99a}.
\end{proof}
Secondly, by symmetry and the shorting technique, we have the following relations.
\begin{myprop}\label{SC_con_prop_resist2}
For all $n\ge1$
$$R_n(p_0,p_1)\le\mathfrak{R}_n(0^n,1^n),$$
$$R_n^V\le R_n(p_1,p_5)=R_n(p_3,p_7)\le\mathfrak{R}_n(1^n,5^n)=\mathfrak{R}_n(3^n,7^n),$$
$$R_n^V\le R_n(p_0,p_4)=R_n(p_2,p_6)\le\mathfrak{R}_n(0^n,4^n)=\mathfrak{R}_n(2^n,6^n).$$
\end{myprop}
Thirdly, we have the following relations.
\begin{myprop}\label{SC_con_prop_resist3}
For all $n\ge1$
$$\mathfrak{R}_n(0^n,1^n)\lesssim R_n(p_0,p_1),$$
$$\mathfrak{R}_n(1^n,5^n)=\mathfrak{R}_n(3^n,7^n)\lesssim R_n(p_1,p_5)=R_n(p_3,p_7),$$
$$\mathfrak{R}_n(0^n,4^n)=\mathfrak{R}_n(2^n,6^n)\lesssim R_n(p_0,p_4)=R_n(p_2,p_6).$$
\end{myprop}
\begin{proof}
The idea is to use electrical network transformations that \emph{increase} resistances to transform the weighted graph $\mathcal{W}_n$ into the weighted graph $\mathcal{V}_{n-1}$.
Firstly, we apply the transformation in Figure \ref{SC_con_fig_trans1}, where the resistances of the resistors in the new network depend only on the shape of the networks in Figure \ref{SC_con_fig_trans1}. This yields the weighted graph in Figure \ref{SC_con_fig_trans2}, in which the resistances between any two points are larger than those in the weighted graph $\mathcal{W}_n$. For $\mathfrak{R}_n(i^n,j^n)$, we have the equivalent weighted graph in Figure \ref{SC_con_fig_trans3}.
\begin{figure}[ht]
\centering
\begin{tikzpicture}
\draw (0,0)--(2,0)--(2,1)--(0,1)--cycle;
\draw (1,0)--(1,1);
\draw[fill=black] (0,0) circle (0.06);
\draw[fill=black] (1,0) circle (0.06);
\draw[fill=black] (2,0) circle (0.06);
\draw[fill=black] (0,1) circle (0.06);
\draw[fill=black] (1,1) circle (0.06);
\draw[fill=black] (2,1) circle (0.06);
\draw (3,0)--(3,1);
\draw (4,0)--(4,1);
\draw (5,0)--(5,1);
\draw (3,0.5)--(5,0.5);
\draw[fill=black] (3,0) circle (0.06);
\draw[fill=black] (4,0) circle (0.06);
\draw[fill=black] (5,0) circle (0.06);
\draw[fill=black] (3,1) circle (0.06);
\draw[fill=black] (4,1) circle (0.06);
\draw[fill=black] (5,1) circle (0.06);
\draw (2.5,0.5) node {$\Rightarrow$};
\end{tikzpicture}
\caption{First Transformation}\label{SC_con_fig_trans1}
\end{figure}
\begin{figure}[ht]
\centering
\begin{tikzpicture}
\draw[fill=black] (0,0) circle (0.06);
\draw[fill=black] (0.5,0) circle (0.06);
\draw[fill=black] (1,0) circle (0.06);
\draw[fill=black] (1.5,0) circle (0.06);
\draw[fill=black] (2,0) circle (0.06);
\draw[fill=black] (2.5,0) circle (0.06);
\draw[fill=black] (3,0) circle (0.06);
\draw[fill=black] (3.5,0) circle (0.06);
\draw[fill=black] (4,0) circle (0.06);
\draw[fill=black] (4.5,0) circle (0.06);
\draw[fill=black] (5,0) circle (0.06);
\draw[fill=black] (5.5,0) circle (0.06);
\draw[fill=black] (6,0) circle (0.06);
\draw[fill=black] (6.5,0) circle (0.06);
\draw[fill=black] (7,0) circle (0.06);
\draw[fill=black] (7.5,0) circle (0.06);
\draw[fill=black] (8,0) circle (0.06);
\draw[fill=black] (8.5,0) circle (0.06);
\draw[fill=black] (0,0.5) circle (0.06);
\draw[fill=black] (1,0.5) circle (0.06);
\draw[fill=black] (1.5,0.5) circle (0.06);
\draw[fill=black] (2.5,0.5) circle (0.06);
\draw[fill=black] (3,0.5) circle (0.06);
\draw[fill=black] (4,0.5) circle (0.06);
\draw[fill=black] (4.5,0.5) circle (0.06);
\draw[fill=black] (5.5,0.5) circle (0.06);
\draw[fill=black] (6,0.5) circle (0.06);
\draw[fill=black] (7,0.5) circle (0.06);
\draw[fill=black] (7.5,0.5) circle (0.06);
\draw[fill=black] (8.5,0.5) circle (0.06);
\draw[fill=black] (0,1) circle (0.06);
\draw[fill=black] (0.5,1) circle (0.06);
\draw[fill=black] (1,1) circle (0.06);
\draw[fill=black] (1.5,1) circle (0.06);
\draw[fill=black] (2,1) circle (0.06);
\draw[fill=black] (2.5,1) circle (0.06);
\draw[fill=black] (3,1) circle (0.06);
\draw[fill=black] (3.5,1) circle (0.06);
\draw[fill=black] (4,1) circle (0.06);
\draw[fill=black] (4.5,1) circle (0.06);
\draw[fill=black] (5,1) circle (0.06);
\draw[fill=black] (5.5,1) circle (0.06);
\draw[fill=black] (6,1) circle (0.06);
\draw[fill=black] (6.5,1) circle (0.06);
\draw[fill=black] (7,1) circle (0.06);
\draw[fill=black] (7.5,1) circle (0.06);
\draw[fill=black] (8,1) circle (0.06);
\draw[fill=black] (8.5,1) circle (0.06);
\draw[fill=black] (0,1.5) circle (0.06);
\draw[fill=black] (0.5,1.5) circle (0.06);
\draw[fill=black] (1,1.5) circle (0.06);
\draw[fill=black] (3,1.5) circle (0.06);
\draw[fill=black] (3.5,1.5) circle (0.06);
\draw[fill=black] (4,1.5) circle (0.06);
\draw[fill=black] (4.5,1.5) circle (0.06);
\draw[fill=black] (5,1.5) circle (0.06);
\draw[fill=black] (5.5,1.5) circle (0.06);
\draw[fill=black] (7.5,1.5) circle (0.06);
\draw[fill=black] (8,1.5) circle (0.06);
\draw[fill=black] (8.5,1.5) circle (0.06);
\draw[fill=black] (0,2) circle (0.06);
\draw[fill=black] (1,2) circle (0.06);
\draw[fill=black] (3,2) circle (0.06);
\draw[fill=black] (4,2) circle (0.06);
\draw[fill=black] (4.5,2) circle (0.06);
\draw[fill=black] (5.5,2) circle (0.06);
\draw[fill=black] (7.5,2) circle (0.06);
\draw[fill=black] (8.5,2) circle (0.06);
\draw[fill=black] (0,2.5) circle (0.06);
\draw[fill=black] (0.5,2.5) circle (0.06);
\draw[fill=black] (1,2.5) circle (0.06);
\draw[fill=black] (3,2.5) circle (0.06);
\draw[fill=black] (3.5,2.5) circle (0.06);
\draw[fill=black] (4,2.5) circle (0.06);
\draw[fill=black] (4.5,2.5) circle (0.06);
\draw[fill=black] (5,2.5) circle (0.06);
\draw[fill=black] (5.5,2.5) circle (0.06);
\draw[fill=black] (7.5,2.5) circle (0.06);
\draw[fill=black] (8,2.5) circle (0.06);
\draw[fill=black] (8.5,2.5) circle (0.06);
\draw[fill=black] (0,3) circle (0.06);
\draw[fill=black] (0.5,3) circle (0.06);
\draw[fill=black] (1,3) circle (0.06);
\draw[fill=black] (1.5,3) circle (0.06);
\draw[fill=black] (2,3) circle (0.06);
\draw[fill=black] (2.5,3) circle (0.06);
\draw[fill=black] (3,3) circle (0.06);
\draw[fill=black] (3.5,3) circle (0.06);
\draw[fill=black] (4,3) circle (0.06);
\draw[fill=black] (4.5,3) circle (0.06);
\draw[fill=black] (5,3) circle (0.06);
\draw[fill=black] (5.5,3) circle (0.06);
\draw[fill=black] (6,3) circle (0.06);
\draw[fill=black] (6.5,3) circle (0.06);
\draw[fill=black] (7,3) circle (0.06);
\draw[fill=black] (7.5,3) circle (0.06);
\draw[fill=black] (8,3) circle (0.06);
\draw[fill=black] (8.5,3) circle (0.06);
\draw[fill=black] (0,3.5) circle (0.06);
\draw[fill=black] (1,3.5) circle (0.06);
\draw[fill=black] (1.5,3.5) circle (0.06);
\draw[fill=black] (2.5,3.5) circle (0.06);
\draw[fill=black] (3,3.5) circle (0.06);
\draw[fill=black] (4,3.5) circle (0.06);
\draw[fill=black] (4.5,3.5) circle (0.06);
\draw[fill=black] (5.5,3.5) circle (0.06);
\draw[fill=black] (6,3.5) circle (0.06);
\draw[fill=black] (7,3.5) circle (0.06);
\draw[fill=black] (7.5,3.5) circle (0.06);
\draw[fill=black] (8.5,3.5) circle (0.06);
\draw[fill=black] (0,4) circle (0.06);
\draw[fill=black] (0.5,4) circle (0.06);
\draw[fill=black] (1,4) circle (0.06);
\draw[fill=black] (1.5,4) circle (0.06);
\draw[fill=black] (2,4) circle (0.06);
\draw[fill=black] (2.5,4) circle (0.06);
\draw[fill=black] (3,4) circle (0.06);
\draw[fill=black] (3.5,4) circle (0.06);
\draw[fill=black] (4,4) circle (0.06);
\draw[fill=black] (4.5,4) circle (0.06);
\draw[fill=black] (5,4) circle (0.06);
\draw[fill=black] (5.5,4) circle (0.06);
\draw[fill=black] (6,4) circle (0.06);
\draw[fill=black] (6.5,4) circle (0.06);
\draw[fill=black] (7,4) circle (0.06);
\draw[fill=black] (7.5,4) circle (0.06);
\draw[fill=black] (8,4) circle (0.06);
\draw[fill=black] (8.5,4) circle (0.06);
\draw[fill=black] (0,4.5) circle (0.06);
\draw[fill=black] (0.5,4.5) circle (0.06);
\draw[fill=black] (1,4.5) circle (0.06);
\draw[fill=black] (1.5,4.5) circle (0.06);
\draw[fill=black] (2,4.5) circle (0.06);
\draw[fill=black] (2.5,4.5) circle (0.06);
\draw[fill=black] (3,4.5) circle (0.06);
\draw[fill=black] (3.5,4.5) circle (0.06);
\draw[fill=black] (4,4.5) circle (0.06);
\draw[fill=black] (0,5) circle (0.06);
\draw[fill=black] (1,5) circle (0.06);
\draw[fill=black] (1.5,5) circle (0.06);
\draw[fill=black] (2.5,5) circle (0.06);
\draw[fill=black] (3,5) circle (0.06);
\draw[fill=black] (4,5) circle (0.06);
\draw[fill=black] (0,5.5) circle (0.06);
\draw[fill=black] (0.5,5.5) circle (0.06);
\draw[fill=black] (1,5.5) circle (0.06);
\draw[fill=black] (1.5,5.5) circle (0.06);
\draw[fill=black] (2,5.5) circle (0.06);
\draw[fill=black] (2.5,5.5) circle (0.06);
\draw[fill=black] (3,5.5) circle (0.06);
\draw[fill=black] (3.5,5.5) circle (0.06);
\draw[fill=black] (4,5.5) circle (0.06);
\draw[fill=black] (0,6) circle (0.06);
\draw[fill=black] (0.5,6) circle (0.06);
\draw[fill=black] (1,6) circle (0.06);
\draw[fill=black] (3,6) circle (0.06);
\draw[fill=black] (3.5,6) circle (0.06);
\draw[fill=black] (4,6) circle (0.06);
\draw[fill=black] (0,6.5) circle (0.06);
\draw[fill=black] (1,6.5) circle (0.06);
\draw[fill=black] (3,6.5) circle (0.06);
\draw[fill=black] (4,6.5) circle (0.06);
\draw[fill=black] (0,7) circle (0.06);
\draw[fill=black] (0.5,7) circle (0.06);
\draw[fill=black] (1,7) circle (0.06);
\draw[fill=black] (3,7) circle (0.06);
\draw[fill=black] (3.5,7) circle (0.06);
\draw[fill=black] (4,7) circle (0.06);
\draw[fill=black] (0,7.5) circle (0.06);
\draw[fill=black] (0.5,7.5) circle (0.06);
\draw[fill=black] (1,7.5) circle (0.06);
\draw[fill=black] (1.5,7.5) circle (0.06);
\draw[fill=black] (2,7.5) circle (0.06);
\draw[fill=black] (2.5,7.5) circle (0.06);
\draw[fill=black] (3,7.5) circle (0.06);
\draw[fill=black] (3.5,7.5) circle (0.06);
\draw[fill=black] (4,7.5) circle (0.06);
\draw[fill=black] (0,8) circle (0.06);
\draw[fill=black] (1,8) circle (0.06);
\draw[fill=black] (1.5,8) circle (0.06);
\draw[fill=black] (2.5,8) circle (0.06);
\draw[fill=black] (3,8) circle (0.06);
\draw[fill=black] (4,8) circle (0.06);
\draw[fill=black] (0,8.5) circle (0.06);
\draw[fill=black] (0.5,8.5) circle (0.06);
\draw[fill=black] (1,8.5) circle (0.06);
\draw[fill=black] (1.5,8.5) circle (0.06);
\draw[fill=black] (2,8.5) circle (0.06);
\draw[fill=black] (2.5,8.5) circle (0.06);
\draw[fill=black] (3,8.5) circle (0.06);
\draw[fill=black] (3.5,8.5) circle (0.06);
\draw[fill=black] (4,8.5) circle (0.06);
\draw (0,0)--(8.5,0);
\draw (0,0)--(0,8.5);
\draw (1,1)--(1.5,1);
\draw (1,1)--(1,1.5);
\draw (1.25,0)--(1.25,1);
\draw (0,1.25)--(1,1.25);
\draw (1,0.5)--(1.5,0.5);
\draw (0.5,1)--(0.5,1.5);
\draw (1.5,1)--(3,1);
\draw (2.75,1)--(2.75,0);
\draw (2.5,0.5)--(3,0.5);
\draw (3,1)--(3,1.5);
\draw (4,1)--(4,1.5);
\draw (3,1.25)--(4,1.25);
\draw (3.5,1)--(3.5,1.5);
\draw (4,1)--(4.5,1);
\draw (4,0.5)--(4.5,0.5);
\draw (4.25,0)--(4.25,1);
\draw (4.5,1)--(4.5,1.5);
\draw (5,1)--(5,1.5);
\draw (5.5,1)--(5.5,1.5);
\draw (4.5,1.25)--(5.5,1.25);
\draw (5.5,1)--(6,1);
\draw (5.5,0.5)--(6,0.5);
\draw (5.75,0)--(5.75,1);
\draw (6,1)--(7,1);
\draw (7,1)--(7.5,1);
\draw (7,0.5)--(7.5,0.5);
\draw (7.25,0)--(7.25,1);
\draw (7.5,1)--(7.5,1.5);
\draw (8,1)--(8,1.5);
\draw (8.5,1)--(8.5,1.5);
\draw (7.5,1.25)--(8.5,1.25);
\draw (8.5,0)--(9,0);
\draw (8.5,0.5)--(9,0.5);
\draw (8.5,1)--(9,1);
\draw (8.75,0)--(8.75,1);
\draw (1,1.5)--(1,3);
\draw (0.5,2.5)--(0.5,3);
\draw (0,2.75)--(1,2.75);
\draw (1,3)--(3,3);
\draw (3,3)--(3,1);
\draw (3.5,2.5)--(3.5,3);
\draw (4,2.5)--(4,3);
\draw (3,2.75)--(4,2.75);
\draw (4,1.5)--(4.5,1.5);
\draw (4,2)--(4.5,2);
\draw (4,2.5)--(4.5,2.5);
\draw (4.25,1.5)--(4.25,2.5);
\draw (4.5,2.5)--(4.5,3);
\draw (5,2.5)--(5,3);
\draw (5.5,2.5)--(5.5,3);
\draw (4.5,2.75)--(5.5,2.75);
\draw (5.5,1.5)--(5.5,2.5);
\draw (5.5,3)--(7.5,3);
\draw (7.5,3)--(7.5,1.5);
\draw (8,2.5)--(8,3);
\draw (8.5,2.5)--(8.5,3);
\draw (7.5,2.75)--(8.5,2.75);
\draw (8.5,1.5)--(9,1.5);
\draw (8.5,2)--(9,2);
\draw (8.5,2.5)--(9,2.5);
\draw (8.75,1.5)--(8.75,2.5);
\draw (8.5,3)--(9,3);
\draw (8.5,3.5)--(9,3.5);
\draw (8.5,4)--(9,4);
\draw (8.75,3)--(8.75,4);
\draw (7,3.5)--(7.5,3.5);
\draw (7,4)--(7.5,4);
\draw (7.25,3)--(7.25,4);
\draw (5.5,3.5)--(6,3.5);
\draw (5.5,4)--(6,4);
\draw (5.75,3)--(5.75,4);
\draw (6,4)--(7,4);
\draw (4.5,4)--(5.5,4);
\draw (4,3)--(4.5,3);
\draw (4,3.5)--(4.5,3.5);
\draw (4,4)--(4.5,4);
\draw (4.25,3)--(4.25,4);
\draw (4,4)--(4,4.5);
\draw (3.5,4)--(3.5,4.5);
\draw (3,4)--(3,4.5);
\draw (3,4.25)--(4,4.25);
\draw (2.5,4)--(3,4);
\draw (2.5,3.5)--(3,3.5);
\draw (2.75,3)--(2.75,4);
\draw (1,4)--(1.5,4);
\draw (1,3.5)--(1.5,3.5);
\draw (1.25,3)--(1.25,4);
\draw (7.5,4)--(8.5,4);
\draw (0.5,4)--(0.5,4.5);
\draw (1,4)--(1,4.5);
\draw (0,4.25)--(1,4.25);
\draw (1.5,4)--(1.5,4.5);
\draw (2,4)--(2,4.5);
\draw (2.5,4)--(2.5,4.5);
\draw (1.5,4.25)--(2.5,4.25);
\draw (1,4.5)--(1.5,4.5);
\draw (1,5)--(1.5,5);
\draw (1,5.5)--(1.5,5.5);
\draw (1.25,4.5)--(1.25,5.5);
\draw (2.5,4.5)--(3,4.5);
\draw (2.5,5)--(3,5);
\draw (2.5,5.5)--(3,5.5);
\draw (2.75,4.5)--(2.75,5.5);
\draw (4,4.5)--(4,5.5);
\draw (1.5,5.5)--(2.5,5.5);
\draw (0.5,5.5)--(0.5,6);
\draw (1,5.5)--(1,6);
\draw (0,5.75)--(1,5.75);
\draw (3,5.5)--(3,6);
\draw (3.5,5.5)--(3.5,6);
\draw (4,5.5)--(4,6);
\draw (3,5.75)--(4,5.75);
\draw (1,6)--(1,7);
\draw (3,6)--(3,7);
\draw (4,6)--(4,7);
\draw (0.5,7)--(0.5,7.5);
\draw (1,7)--(1,7.5);
\draw (0,7.25)--(1,7.25);
\draw (3,7)--(3,7.5);
\draw (3.5,7)--(3.5,7.5);
\draw (4,7)--(4,7.5);
\draw (3,7.25)--(4,7.25);
\draw (1,7.5)--(3,7.5);
\draw (1,8)--(1.5,8);
\draw (1,8.5)--(1.5,8.5);
\draw (1.25,7.5)--(1.25,8.5);
\draw (2.5,8)--(3,8);
\draw (2.5,8.5)--(3,8.5);
\draw (2.75,7.5)--(2.75,8.5);
\draw (0,8.5)--(0,9);
\draw (0.5,8.5)--(0.5,9);
\draw (1,8.5)--(1,9);
\draw (1.5,8.5)--(1.5,9);
\draw (2,8.5)--(2,9);
\draw (2.5,8.5)--(2.5,9);
\draw (3,8.5)--(3,9);
\draw (3.5,8.5)--(3.5,9);
\draw (4,8.5)--(4,9);
\draw (0,8.75)--(1,8.75);
\draw (1.5,8.75)--(2.5,8.75);
\draw (3,8.75)--(4,8.75);
\draw (4,8.5)--(4,7.5);
\draw (2,9.5) node {\ldots};
\draw (9.5,2) node {\vdots};
\end{tikzpicture}
\caption{First Transformation}\label{SC_con_fig_trans2}
\end{figure}
\begin{figure}[ht]
\centering
\begin{tikzpicture}
\draw[fill=black] (0,0) circle (0.06);
\draw[fill=black] (0.5,0) circle (0.06);
\draw[fill=black] (1,0) circle (0.06);
\draw[fill=black] (1.5,0) circle (0.06);
\draw[fill=black] (2,0) circle (0.06);
\draw[fill=black] (2.5,0) circle (0.06);
\draw[fill=black] (3,0) circle (0.06);
\draw[fill=black] (3.5,0) circle (0.06);
\draw[fill=black] (4,0) circle (0.06);
\draw[fill=black] (4.5,0) circle (0.06);
\draw[fill=black] (5,0) circle (0.06);
\draw[fill=black] (5.5,0) circle (0.06);
\draw[fill=black] (6,0) circle (0.06);
\draw[fill=black] (6.5,0) circle (0.06);
\draw[fill=black] (7,0) circle (0.06);
\draw[fill=black] (7.5,0) circle (0.06);
\draw[fill=black] (8,0) circle (0.06);
\draw[fill=black] (8.5,0) circle (0.06);
\draw[fill=black] (0,0.5) circle (0.06);
\draw[fill=black] (1,0.5) circle (0.06);
\draw[fill=black] (1.5,0.5) circle (0.06);
\draw[fill=black] (2.5,0.5) circle (0.06);
\draw[fill=black] (3,0.5) circle (0.06);
\draw[fill=black] (4,0.5) circle (0.06);
\draw[fill=black] (4.5,0.5) circle (0.06);
\draw[fill=black] (5.5,0.5) circle (0.06);
\draw[fill=black] (6,0.5) circle (0.06);
\draw[fill=black] (7,0.5) circle (0.06);
\draw[fill=black] (7.5,0.5) circle (0.06);
\draw[fill=black] (8.5,0.5) circle (0.06);
\draw[fill=black] (0,1) circle (0.06);
\draw[fill=black] (0.5,1) circle (0.06);
\draw[fill=black] (1,1) circle (0.06);
\draw[fill=black] (1.5,1) circle (0.06);
\draw[fill=black] (2,1) circle (0.06);
\draw[fill=black] (2.5,1) circle (0.06);
\draw[fill=black] (3,1) circle (0.06);
\draw[fill=black] (3.5,1) circle (0.06);
\draw[fill=black] (4,1) circle (0.06);
\draw[fill=black] (4.5,1) circle (0.06);
\draw[fill=black] (5,1) circle (0.06);
\draw[fill=black] (5.5,1) circle (0.06);
\draw[fill=black] (6,1) circle (0.06);
\draw[fill=black] (6.5,1) circle (0.06);
\draw[fill=black] (7,1) circle (0.06);
\draw[fill=black] (7.5,1) circle (0.06);
\draw[fill=black] (8,1) circle (0.06);
\draw[fill=black] (8.5,1) circle (0.06);
\draw[fill=black] (0,1.5) circle (0.06);
\draw[fill=black] (0.5,1.5) circle (0.06);
\draw[fill=black] (1,1.5) circle (0.06);
\draw[fill=black] (3,1.5) circle (0.06);
\draw[fill=black] (3.5,1.5) circle (0.06);
\draw[fill=black] (4,1.5) circle (0.06);
\draw[fill=black] (4.5,1.5) circle (0.06);
\draw[fill=black] (5,1.5) circle (0.06);
\draw[fill=black] (5.5,1.5) circle (0.06);
\draw[fill=black] (7.5,1.5) circle (0.06);
\draw[fill=black] (8,1.5) circle (0.06);
\draw[fill=black] (8.5,1.5) circle (0.06);
\draw[fill=black] (0,2) circle (0.06);
\draw[fill=black] (1,2) circle (0.06);
\draw[fill=black] (3,2) circle (0.06);
\draw[fill=black] (4,2) circle (0.06);
\draw[fill=black] (4.5,2) circle (0.06);
\draw[fill=black] (5.5,2) circle (0.06);
\draw[fill=black] (7.5,2) circle (0.06);
\draw[fill=black] (8.5,2) circle (0.06);
\draw[fill=black] (0,2.5) circle (0.06);
\draw[fill=black] (0.5,2.5) circle (0.06);
\draw[fill=black] (1,2.5) circle (0.06);
\draw[fill=black] (3,2.5) circle (0.06);
\draw[fill=black] (3.5,2.5) circle (0.06);
\draw[fill=black] (4,2.5) circle (0.06);
\draw[fill=black] (4.5,2.5) circle (0.06);
\draw[fill=black] (5,2.5) circle (0.06);
\draw[fill=black] (5.5,2.5) circle (0.06);
\draw[fill=black] (7.5,2.5) circle (0.06);
\draw[fill=black] (8,2.5) circle (0.06);
\draw[fill=black] (8.5,2.5) circle (0.06);
\draw[fill=black] (0,3) circle (0.06);
\draw[fill=black] (0.5,3) circle (0.06);
\draw[fill=black] (1,3) circle (0.06);
\draw[fill=black] (1.5,3) circle (0.06);
\draw[fill=black] (2,3) circle (0.06);
\draw[fill=black] (2.5,3) circle (0.06);
\draw[fill=black] (3,3) circle (0.06);
\draw[fill=black] (3.5,3) circle (0.06);
\draw[fill=black] (4,3) circle (0.06);
\draw[fill=black] (4.5,3) circle (0.06);
\draw[fill=black] (5,3) circle (0.06);
\draw[fill=black] (5.5,3) circle (0.06);
\draw[fill=black] (6,3) circle (0.06);
\draw[fill=black] (6.5,3) circle (0.06);
\draw[fill=black] (7,3) circle (0.06);
\draw[fill=black] (7.5,3) circle (0.06);
\draw[fill=black] (8,3) circle (0.06);
\draw[fill=black] (8.5,3) circle (0.06);
\draw[fill=black] (0,3.5) circle (0.06);
\draw[fill=black] (1,3.5) circle (0.06);
\draw[fill=black] (1.5,3.5) circle (0.06);
\draw[fill=black] (2.5,3.5) circle (0.06);
\draw[fill=black] (3,3.5) circle (0.06);
\draw[fill=black] (4,3.5) circle (0.06);
\draw[fill=black] (4.5,3.5) circle (0.06);
\draw[fill=black] (5.5,3.5) circle (0.06);
\draw[fill=black] (6,3.5) circle (0.06);
\draw[fill=black] (7,3.5) circle (0.06);
\draw[fill=black] (7.5,3.5) circle (0.06);
\draw[fill=black] (8.5,3.5) circle (0.06);
\draw[fill=black] (0,4) circle (0.06);
\draw[fill=black] (0.5,4) circle (0.06);
\draw[fill=black] (1,4) circle (0.06);
\draw[fill=black] (1.5,4) circle (0.06);
\draw[fill=black] (2,4) circle (0.06);
\draw[fill=black] (2.5,4) circle (0.06);
\draw[fill=black] (3,4) circle (0.06);
\draw[fill=black] (3.5,4) circle (0.06);
\draw[fill=black] (4,4) circle (0.06);
\draw[fill=black] (4.5,4) circle (0.06);
\draw[fill=black] (5,4) circle (0.06);
\draw[fill=black] (5.5,4) circle (0.06);
\draw[fill=black] (6,4) circle (0.06);
\draw[fill=black] (6.5,4) circle (0.06);
\draw[fill=black] (7,4) circle (0.06);
\draw[fill=black] (7.5,4) circle (0.06);
\draw[fill=black] (8,4) circle (0.06);
\draw[fill=black] (8.5,4) circle (0.06);
\draw[fill=black] (0,4.5) circle (0.06);
\draw[fill=black] (0.5,4.5) circle (0.06);
\draw[fill=black] (1,4.5) circle (0.06);
\draw[fill=black] (1.5,4.5) circle (0.06);
\draw[fill=black] (2,4.5) circle (0.06);
\draw[fill=black] (2.5,4.5) circle (0.06);
\draw[fill=black] (3,4.5) circle (0.06);
\draw[fill=black] (3.5,4.5) circle (0.06);
\draw[fill=black] (4,4.5) circle (0.06);
\draw[fill=black] (0,5) circle (0.06);
\draw[fill=black] (1,5) circle (0.06);
\draw[fill=black] (1.5,5) circle (0.06);
\draw[fill=black] (2.5,5) circle (0.06);
\draw[fill=black] (3,5) circle (0.06);
\draw[fill=black] (4,5) circle (0.06);
\draw[fill=black] (0,5.5) circle (0.06);
\draw[fill=black] (0.5,5.5) circle (0.06);
\draw[fill=black] (1,5.5) circle (0.06);
\draw[fill=black] (1.5,5.5) circle (0.06);
\draw[fill=black] (2,5.5) circle (0.06);
\draw[fill=black] (2.5,5.5) circle (0.06);
\draw[fill=black] (3,5.5) circle (0.06);
\draw[fill=black] (3.5,5.5) circle (0.06);
\draw[fill=black] (4,5.5) circle (0.06);
\draw[fill=black] (0,6) circle (0.06);
\draw[fill=black] (0.5,6) circle (0.06);
\draw[fill=black] (1,6) circle (0.06);
\draw[fill=black] (3,6) circle (0.06);
\draw[fill=black] (3.5,6) circle (0.06);
\draw[fill=black] (4,6) circle (0.06);
\draw[fill=black] (0,6.5) circle (0.06);
\draw[fill=black] (1,6.5) circle (0.06);
\draw[fill=black] (3,6.5) circle (0.06);
\draw[fill=black] (4,6.5) circle (0.06);
\draw[fill=black] (0,7) circle (0.06);
\draw[fill=black] (0.5,7) circle (0.06);
\draw[fill=black] (1,7) circle (0.06);
\draw[fill=black] (3,7) circle (0.06);
\draw[fill=black] (3.5,7) circle (0.06);
\draw[fill=black] (4,7) circle (0.06);
\draw[fill=black] (0,7.5) circle (0.06);
\draw[fill=black] (0.5,7.5) circle (0.06);
\draw[fill=black] (1,7.5) circle (0.06);
\draw[fill=black] (1.5,7.5) circle (0.06);
\draw[fill=black] (2,7.5) circle (0.06);
\draw[fill=black] (2.5,7.5) circle (0.06);
\draw[fill=black] (3,7.5) circle (0.06);
\draw[fill=black] (3.5,7.5) circle (0.06);
\draw[fill=black] (4,7.5) circle (0.06);
\draw[fill=black] (0,8) circle (0.06);
\draw[fill=black] (1,8) circle (0.06);
\draw[fill=black] (1.5,8) circle (0.06);
\draw[fill=black] (2.5,8) circle (0.06);
\draw[fill=black] (3,8) circle (0.06);
\draw[fill=black] (4,8) circle (0.06);
\draw[fill=black] (0,8.5) circle (0.06);
\draw[fill=black] (0.5,8.5) circle (0.06);
\draw[fill=black] (1,8.5) circle (0.06);
\draw[fill=black] (1.5,8.5) circle (0.06);
\draw[fill=black] (2,8.5) circle (0.06);
\draw[fill=black] (2.5,8.5) circle (0.06);
\draw[fill=black] (3,8.5) circle (0.06);
\draw[fill=black] (3.5,8.5) circle (0.06);
\draw[fill=black] (4,8.5) circle (0.06);
\draw (0,0)--(8.5,0);
\draw (0,0)--(0,8.5);
\draw (1,1)--(1.5,1);
\draw (1,1)--(1,1.5);
\draw (1.25,0)--(1.25,1);
\draw (0,1.25)--(1,1.25);
\draw (1.5,1)--(3,1);
\draw (2.75,1)--(2.75,0);
\draw (3,1)--(3,1.5);
\draw (4,1)--(4,1.5);
\draw (3,1.25)--(4,1.25);
\draw (4,1)--(4.5,1);
\draw (4.25,0)--(4.25,1);
\draw (4.5,1)--(4.5,1.5);
\draw (5.5,1)--(5.5,1.5);
\draw (4.5,1.25)--(5.5,1.25);
\draw (5.5,1)--(6,1);
\draw (5.75,0)--(5.75,1);
\draw (6,1)--(7,1);
\draw (7,1)--(7.5,1);
\draw (7.25,0)--(7.25,1);
\draw (7.5,1)--(7.5,1.5);
\draw (8.5,1)--(8.5,1.5);
\draw (7.5,1.25)--(8.5,1.25);
\draw (8.5,0)--(9,0);
\draw (8.5,1)--(9,1);
\draw (8.75,0)--(8.75,1);
\draw (1,1.5)--(1,3);
\draw (0,2.75)--(1,2.75);
\draw (1,3)--(3,3);
\draw (3,3)--(3,1);
\draw (4,2.5)--(4,3);
\draw (3,2.75)--(4,2.75);
\draw (4,1.5)--(4.5,1.5);
\draw (4,2.5)--(4.5,2.5);
\draw (4.25,1.5)--(4.25,2.5);
\draw (4.5,2.5)--(4.5,3);
\draw (5.5,2.5)--(5.5,3);
\draw (4.5,2.75)--(5.5,2.75);
\draw (5.5,1.5)--(5.5,2.5);
\draw (5.5,3)--(7.5,3);
\draw (7.5,3)--(7.5,1.5);
\draw (8.5,2.5)--(8.5,3);
\draw (7.5,2.75)--(8.5,2.75);
\draw (8.5,1.5)--(9,1.5);
\draw (8.5,2.5)--(9,2.5);
\draw (8.75,1.5)--(8.75,2.5);
\draw (8.5,3)--(9,3);
\draw (8.5,4)--(9,4);
\draw (8.75,3)--(8.75,4);
\draw (7,4)--(7.5,4);
\draw (7.25,3)--(7.25,4);
\draw (5.5,4)--(6,4);
\draw (5.75,3)--(5.75,4);
\draw (6,4)--(7,4);
\draw (4.5,4)--(5.5,4);
\draw (4,3)--(4.5,3);
\draw (4,4)--(4.5,4);
\draw (4.25,3)--(4.25,4);
\draw (4,4)--(4,4.5);
\draw (3,4)--(3,4.5);
\draw (3,4.25)--(4,4.25);
\draw (2.5,4)--(3,4);
\draw (2.75,3)--(2.75,4);
\draw (1,4)--(1.5,4);
\draw (1.25,3)--(1.25,4);
\draw (7.5,4)--(8.5,4);
\draw (1,4)--(1,4.5);
\draw (0,4.25)--(1,4.25);
\draw (1.5,4)--(1.5,4.5);
\draw (2.5,4)--(2.5,4.5);
\draw (1.5,4.25)--(2.5,4.25);
\draw (1,4.5)--(1.5,4.5);
\draw (1,5.5)--(1.5,5.5);
\draw (1.25,4.5)--(1.25,5.5);
\draw (2.5,4.5)--(3,4.5);
\draw (2.5,5.5)--(3,5.5);
\draw (2.75,4.5)--(2.75,5.5);
\draw (4,4.5)--(4,5.5);
\draw (1.5,5.5)--(2.5,5.5);
\draw (1,5.5)--(1,6);
\draw (0,5.75)--(1,5.75);
\draw (3,5.5)--(3,6);
\draw (4,5.5)--(4,6);
\draw (3,5.75)--(4,5.75);
\draw (1,6)--(1,7);
\draw (3,6)--(3,7);
\draw (4,6)--(4,7);
\draw (1,7)--(1,7.5);
\draw (0,7.25)--(1,7.25);
\draw (3,7)--(3,7.5);
\draw (4,7)--(4,7.5);
\draw (3,7.25)--(4,7.25);
\draw (1,7.5)--(3,7.5);
\draw (1,8.5)--(1.5,8.5);
\draw (1.25,7.5)--(1.25,8.5);
\draw (2.5,8.5)--(3,8.5);
\draw (2.75,7.5)--(2.75,8.5);
\draw (0,8.5)--(0,9);
\draw (1,8.5)--(1,9);
\draw (1.5,8.5)--(1.5,9);
\draw (2.5,8.5)--(2.5,9);
\draw (3,8.5)--(3,9);
\draw (4,8.5)--(4,9);
\draw (0,8.75)--(1,8.75);
\draw (1.5,8.75)--(2.5,8.75);
\draw (3,8.75)--(4,8.75);
\draw (4,8.5)--(4,7.5);
\draw (2,9.5) node {\ldots};
\draw (9.5,2) node {\vdots};
\end{tikzpicture}
\caption{First Transformation}\label{SC_con_fig_trans3}
\end{figure}
\begin{figure}[ht]
\centering
\begin{tikzpicture}
\draw (0.25-1,0.25)--(0.25-1,-0.25)--(-0.25-1,-0.25)--(-0.25-1,0.25)--cycle;
\draw (0-1,0.25)--(0-1,0.75);
\draw (0-1,-0.25)--(0-1,-0.75);
\draw (0.25-1,0)--(0.75-1,0);
\draw (-0.25-1,0)--(-0.75-1,0);
\draw[fill=black] (0.25-1,0.25) circle (0.06);
\draw[fill=black] (0.25-1,-0.25) circle (0.06);
\draw[fill=black] (-0.25-1,-0.25) circle (0.06);
\draw[fill=black] (-0.25-1,0.25) circle (0.06);
\draw[fill=white] (-1,0) circle (0.06);
\draw (0,0) node {$\Rightarrow$};
\draw (0.25,0)--(1.75,0);
\draw (1,-0.75)--(1,0.75);
\draw[fill=black] (0.25+1,0.25) circle (0.06);
\draw[fill=black] (0.25+1,-0.25) circle (0.06);
\draw[fill=black] (-0.25+1,-0.25) circle (0.06);
\draw[fill=black] (-0.25+1,0.25) circle (0.06);
\draw[fill=white] (1,0) circle (0.06);
\draw (-5,-1.5)--(-5,-2)--(-4.5,-2);
\draw[fill=black] (-5,-1.5) circle (0.06);
\draw[fill=black] (-5,-2) circle (0.06);
\draw[fill=black] (-4.5,-2) circle (0.06);
\draw[fill=white] (-5,-1.75) circle (0.06);
\draw[fill=white] (-4.75,-2) circle (0.06);
\draw[fill=white] (-4.75,-1.75) circle (0.06);
\draw (-4,-2) node {$\Rightarrow$};
\draw (-3,-1.75)--(-2.5,-1.75)--(-2.5,-2);
\draw (-2.75,-2)--(-2.75,-1.5)--(-3,-1.5);
\draw[fill=black] (-5+2,-1.5) circle (0.06);
\draw[fill=black] (-5+2,-2) circle (0.06);
\draw[fill=black] (-4.5+2,-2) circle (0.06);
\draw[fill=white] (-5+2,-1.75) circle (0.06);
\draw[fill=white] (-4.75+2,-2) circle (0.06);
\draw[fill=white] (-4.75+2,-1.75) circle (0.06);
\draw (3,-2)--(3,-1.5)--(3.5,-1.5);
\draw[fill=black] (3,-2) circle (0.06);
\draw[fill=black] (3,-1.5) circle (0.06);
\draw[fill=black] (3.5,-1.5) circle (0.06);
\draw[fill=white] (3,-1.75) circle (0.06);
\draw[fill=white] (3.25,-1.5) circle (0.06);
\draw[fill=white] (3.25,-1.75) circle (0.06);
\draw (4,-2) node {$\Rightarrow$};
\draw (5,-1.75)--(5.5,-1.75)--(5.5,-1.5);
\draw (5.25,-1.5)--(5.25,-2)--(5,-2);
\draw[fill=black] (5,-2) circle (0.06);
\draw[fill=black] (5,-1.5) circle (0.06);
\draw[fill=black] (5.5,-1.5) circle (0.06);
\draw[fill=white] (5,-1.75) circle (0.06);
\draw[fill=white] (5.25,-1.5) circle (0.06);
\draw[fill=white] (5.25,-1.75) circle (0.06);
\draw (-5,-2.5)--(-4.5,-2.5)--(-4.5,-3);
\draw[fill=black] (-5,-2.5) circle (0.06);
\draw[fill=black] (-4.5,-2.5) circle (0.06);
\draw[fill=black] (-4.5,-3) circle (0.06);
\draw[fill=white] (-4.75,-2.5) circle (0.06);
\draw[fill=white] (-4.5,-2.75) circle (0.06);
\draw[fill=white] (-4.75,-2.75) circle (0.06);
\draw (-4,-3) node {$\Rightarrow$};
\draw (-2.75,-2.5)--(-2.75,-3)--(-2.5,-3);
\draw (-3,-2.5)--(-3,-2.75)--(-2.5,-2.75);
\draw[fill=black] (-5+2,-2.5) circle (0.06);
\draw[fill=black] (-4.5+2,-2.5) circle (0.06);
\draw[fill=black] (-4.5+2,-3) circle (0.06);
\draw[fill=white] (-4.75+2,-2.5) circle (0.06);
\draw[fill=white] (-4.5+2,-2.75) circle (0.06);
\draw[fill=white] (-4.75+2,-2.75) circle (0.06);
\draw (3,-3)--(3.5,-3)--(3.5,-2.5);
\draw[fill=black] (3,-3) circle (0.06);
\draw[fill=black] (3.5,-3) circle (0.06);
\draw[fill=black] (3.5,-2.5) circle (0.06);
\draw[fill=white] (3.25,-3) circle (0.06);
\draw[fill=white] (3.5,-2.75) circle (0.06);
\draw[fill=white] (3.25,-2.75) circle (0.06);
\draw (4,-3) node {$\Rightarrow$};
\draw (5,-3)--(5,-2.75)--(5.5,-2.75);
\draw (5.25,-3)--(5.25,-2.5)--(5.5,-2.5);
\draw[fill=black] (3+2,-3) circle (0.06);
\draw[fill=black] (3.5+2,-3) circle (0.06);
\draw[fill=black] (3.5+2,-2.5) circle (0.06);
\draw[fill=white] (3.25+2,-3) circle (0.06);
\draw[fill=white] (3.5+2,-2.75) circle (0.06);
\draw[fill=white] (3.25+2,-2.75) circle (0.06);
\end{tikzpicture}
\caption{Second Transformation}\label{SC_con_fig_trans4}
\end{figure}
Secondly, we apply the transformations in Figure \ref{SC_con_fig_trans4}, where the resistances of the resistors in the new networks depend only on the shapes of the networks in Figure \ref{SC_con_fig_trans4}. In this way we obtain a weighted graph with vertex set $V_{n-1}$ in which all conductances are equivalent to $1$. Moreover, the resistances between any two points are larger than those in the weighted graph $\mathcal{W}_n$, hence we obtain the desired result.
\end{proof}
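The local network reductions used in these transformations are instances of the classical star-triangle (Y-$\Delta$) reduction. A minimal numerical sketch of that reduction (for generic arm resistances, not the paper's specific carpet networks) is:

```python
def y_to_delta(ra, rb, rc):
    """Star-triangle (Y-Delta) reduction: given the three arm resistances
    of a Y network, return the equivalent Delta side resistances
    (r_ab, r_bc, r_ca), where r_ab is the side opposite arm rc, etc."""
    s = ra * rb + rb * rc + rc * ra
    return s / rc, s / ra, s / rb

# A Y with three unit-resistance arms is equivalent to a Delta
# with three sides of resistance 3.
print(y_to_delta(1.0, 1.0, 1.0))
```

With unit arms the equivalent triangle has sides of resistance $3$; constants of this kind, depending only on the shape of the local network, are what the transformed networks above produce.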
Now, we estimate $R_n(p_0,p_1)$ and $\mathfrak{R}_n(0^n,1^n)$ as follows.
\begin{proof}[Proof of Theorem \ref{SC_con_thm_resist}]
The idea is that replacing each point by a copy of the network should multiply resistances by the resistance of an individual network.
By Proposition \ref{SC_con_prop_resist2} and Proposition \ref{SC_con_prop_resist3}, we have for all $n\ge1$
$$R_n(p_0,p_1)\asymp\mathfrak{R}_n(0^n,1^n).$$
By Theorem \ref{SC_con_thm_resist1} and Proposition \ref{SC_con_prop_resist2}, we have for all $n\ge1$
$$\mathfrak{R}_n(0^n,1^n)\ge R_n(p_0,p_1)\ge\frac{1}{4}R_n(p_1,p_5)\asymp\rho^n.$$
We only need to show that for all $n\ge1$
$$\mathfrak{R}_n(0^n,1^n)\lesssim\rho^n.$$
Firstly, we estimate $\mathfrak{R}_{n+1}(0^{n+1},12^n)$. Cutting certain edges in $\mathcal{W}_{n+1}$, we obtain the electrical network in Figure \ref{SC_con_fig_resist1}, which is equivalent to the electrical networks in Figure \ref{SC_con_fig_resist2}.
\begin{figure}[ht]
\centering
\begin{tikzpicture}
\draw (0,0)--(2,0)--(2,2)--(0,2)--cycle;
\draw (3,0)--(5,0)--(5,2)--(3,2)--cycle;
\draw (6,0)--(8,0)--(8,2)--(6,2)--cycle;
\draw (0,3)--(2,3)--(2,5)--(0,5)--cycle;
\draw (6,3)--(8,3)--(8,5)--(6,5)--cycle;
\draw (0,6)--(2,6)--(2,8)--(0,8)--cycle;
\draw (3,6)--(5,6)--(5,8)--(3,8)--cycle;
\draw (6,6)--(8,6)--(8,8)--(6,8)--cycle;
\draw (2,2)--(2,3);
\draw (2,2)--(3,2);
\draw (0,5)--(0,6);
\draw (2,8)--(3,8);
\draw (5,6)--(6,6);
\draw (6,6)--(6,5);
\draw (8,3)--(8,2);
\draw (5,0)--(6,0);
\draw[fill=black] (0,0) circle (0.06);
\draw[fill=black] (5,0) circle (0.06);
\draw (0,-0.5) node {$0^{n+1}$};
\draw (5,-0.5) node {$12^n$};
\draw (1,1) node {$0W_n$};
\draw (4,1) node {$1W_n$};
\draw (7,1) node {$2W_n$};
\draw (7,4) node {$3W_n$};
\draw (7,7) node {$4W_n$};
\draw (4,7) node {$5W_n$};
\draw (1,7) node {$6W_n$};
\draw (1,4) node {$7W_n$};
\end{tikzpicture}
\caption{An Equivalent Electrical Network}\label{SC_con_fig_resist1}
\end{figure}
\begin{figure}[ht]
\centering
\subfigure{
\begin{tikzpicture}[scale=0.5]
\draw (0,0)--(2,2);
\draw (5,0)--(3,2);
\draw (6,0)--(8,2);
\draw (2,3)--(0,5);
\draw (8,3)--(6,5);
\draw (0,6)--(2,8);
\draw (5,6)--(3,8);
\draw (2,2)--(2,3);
\draw (2,2)--(3,2);
\draw (0,5)--(0,6);
\draw (2,8)--(3,8);
\draw (5,6)--(6,6);
\draw (6,6)--(6,5);
\draw (8,3)--(8,2);
\draw (5,0)--(6,0);
\draw[fill=black] (0,0) circle (0.06);
\draw[fill=black] (5,0) circle (0.06);
\draw (0,-0.5) node {$0^{n+1}$};
\draw (5,-0.5) node {$12^n$};
\draw (0,1.4) node {\tiny{$\mathfrak{R}_n(0^n,4^n)$}};
\draw (2,4.4) node {\tiny{$\mathfrak{R}_n(2^n,6^n)$}};
\draw (0,7.4) node {\tiny{$\mathfrak{R}_n(0^n,4^n)$}};
\draw (5,7.4) node {\tiny{$\mathfrak{R}_n(2^n,6^n)$}};
\draw (8,4.4) node {\tiny{$\mathfrak{R}_n(2^n,6^n)$}};
\draw (5,1.4) node {\tiny{$\mathfrak{R}_n(2^n,6^n)$}};
\draw (8,0.6) node {\tiny{$\mathfrak{R}_n(0^n,4^n)$}};
\draw (2.5,1.7) node {\tiny{$1$}};
\draw (1.7,2.5) node {\tiny{$1$}};
\draw (-0.3,5.5) node {\tiny{$1$}};
\draw (2.5,8.3) node {\tiny{$1$}};
\draw (5.5,6.3) node {\tiny{$1$}};
\draw (6.3,5.5) node {\tiny{$1$}};
\draw (8.3,2.5) node {\tiny{$1$}};
\draw (5.5,0.3) node {\tiny{$1$}};
\end{tikzpicture}}
\hspace{0.01pt}
\subfigure{
\begin{tikzpicture}
\draw (0,0)--(2,0);
\draw (2,0)..controls (3,1) and (4,1) ..(5,0);
\draw (2,0)..controls (3,-1) and (4,-1) ..(5,0);
\draw[fill=black] (0,0) circle (0.06);
\draw[fill=black] (5,0) circle (0.06);
\draw (0,-0.5) node {$0^{n+1}$};
\draw (5,-0.5) node {$12^n$};
\draw (1,0.5) node {$\mathfrak{R}_n(0^n,4^n)$};
\draw (3.5,1) node {$5\mathfrak{R}_n(0^n,4^n)+7$};
\draw (3.5,-1) node {$\mathfrak{R}_n(0^n,4^n)+1$};
\end{tikzpicture}
}
\caption{Equivalent Electrical Networks}\label{SC_con_fig_resist2}
\end{figure}
Hence
$$
\begin{aligned}
\mathfrak{R}_{n+1}(0^{n+1},12^n)&\le\mathfrak{R}_n(0^n,4^n)+\frac{\left(5\mathfrak{R}_n(0^n,4^n)+7\right)\left(\mathfrak{R}_n(0^n,4^n)+1\right)}{\left(5\mathfrak{R}_n(0^n,4^n)+7\right)+\left(\mathfrak{R}_n(0^n,4^n)+1\right)}\\
&\lesssim\mathfrak{R}_n(0^n,4^n)+\frac{5}{6}\mathfrak{R}_n(0^n,4^n)=\frac{11}{6}\mathfrak{R}_n(0^n,4^n)\lesssim\rho^{n+1}.
\end{aligned}
$$
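The reduction in Figure \ref{SC_con_fig_resist2} combines one series resistor with two parallel branches. A quick numerical sketch (writing $R$ for $\mathfrak{R}_n(0^n,4^n)$, which grows like $\rho^n$) confirms that the parallel pair behaves like $\frac{5}{6}R$ for large $R$:

```python
def parallel(r1, r2):
    """Resistance of two resistors in parallel."""
    return r1 * r2 / (r1 + r2)

def upper_bound(R):
    """Series resistance R plus the parallel pair (5R+7, R+1),
    as in the reduced network above."""
    return R + parallel(5 * R + 7, R + 1)

# As R grows, upper_bound(R) / R approaches 1 + 5/6 = 11/6.
for R in [10.0, 1e3, 1e6]:
    print(R, upper_bound(R) / R)
```

This is only an asymptotic statement, which is why the display above uses $\lesssim$ rather than $\le$.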
Secondly, from $0^{n+1}$ to $1^{n+1}$, we construct a finite sequence as follows. For $i=1,\ldots,n+2$,
$$
w^{(i)}=
\begin{cases}
1^{i-1}0^{n+2-i},\text{ if }i\text{ is an odd number},\\
1^{i-1}2^{n+2-i},\text{ if }i\text{ is an even number}.\\
\end{cases}
$$
By the cutting technique, if $i$ is odd, then
$$
\begin{aligned}
&\mathfrak{R}_{n+1}(w^{(i)},w^{(i+1)})=\mathfrak{R}_{n+1}(1^{i-1}0^{n+2-i},1^{i}2^{n+1-i})\\
\le&\mathfrak{R}_{n+2-i}(0^{n+2-i},12^{n+1-i})\lesssim\rho^{n+2-i}.
\end{aligned}
$$
If $i$ is an even number, then
$$
\begin{aligned}
&\mathfrak{R}_{n+1}(w^{(i)},w^{(i+1)})=\mathfrak{R}_{n+1}(1^{i-1}2^{n+2-i},1^{i}0^{n+1-i})\\
\le&\mathfrak{R}_{n+2-i}(2^{n+2-i},10^{n+1-i})=\mathfrak{R}_{n+2-i}(0^{n+2-i},12^{n+1-i})\lesssim\rho^{n+2-i}.
\end{aligned}
$$
Hence
$$
\begin{aligned}
&\mathfrak{R}_{n+1}(0^{n+1},1^{n+1})=\mathfrak{R}_{n+1}(w^{(1)},w^{(n+2)})\\
\le&\sum_{i=1}^{n+1}\mathfrak{R}_{n+1}(w^{(i)},w^{(i+1)})\lesssim\sum_{i=1}^{n+1}\rho^{n+2-i}=\sum_{i=1}^{n+1}\rho^{i}\lesssim\rho^{n+1}.
\end{aligned}
$$
\end{proof}
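The last display sums a geometric series with ratio $\rho>1$, so the sum is comparable to its largest term. A quick check with a hypothetical value $\rho=1.25$ (only $\rho>1$ is used) is:

```python
rho = 1.25  # hypothetical value of the resistance scaling factor; only rho > 1 matters

def chain_sum(n):
    """sum_{i=1}^{n+1} rho^i, the chaining bound for R_{n+1}(0^{n+1}, 1^{n+1})."""
    return sum(rho ** i for i in range(1, n + 2))

# Since rho > 1 the sum is dominated by its largest term rho^{n+1}:
# rho^{n+1} <= chain_sum(n) <= rho^{n+1} * rho / (rho - 1).
for n in [1, 5, 20]:
    print(n, chain_sum(n) / rho ** (n + 1))
```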
\section{Uniform Harnack Inequality}\label{SC_con_sec_harnack}
In this section, we establish the following uniform Harnack inequality.
\begin{mythm}\label{SC_con_thm_harnack}
There exist some constants $C\in(0,+\infty),\delta\in(0,1)$ such that for all $n\ge1,x\in K,r>0$, for every nonnegative harmonic function $u$ on $V_n\cap B(x,r)$, we have
$$\max_{V_n\cap B(x,\delta r)}u\le C\min_{V_n\cap B(x,\delta r)}u.$$
\end{mythm}
\begin{myrmk}
The point of the above theorem is that the constant $C$ is \emph{uniform} in $n$.
\end{myrmk}
The idea is as follows. Firstly, we use resistance estimates in the finite graphs $V_n$ to obtain resistance estimates in an infinite graph $V_\infty$. Secondly, we obtain Green function estimates in $V_\infty$. Thirdly, we obtain an elliptic Harnack inequality in $V_\infty$. Finally, we transfer the elliptic Harnack inequality in $V_\infty$ to the uniform Harnack inequality in $V_n$.
Let $\mathcal{V}_\infty$ be the graph with vertex set $V_\infty=\cup_{n=0}^\infty3^nV_n$ and edge set given by
$$\myset{(p,q):p,q\in V_\infty,|p-q|=2^{-1}}.$$
The graph $\mathcal{V}_\infty$ is illustrated in Figure \ref{fig_graphSC}.
Locally, $\mathcal{V}_\infty$ looks like $\mathcal{V}_n$. Let the conductances of all edges be $1$. Let $d$ be the integer-valued graph metric, that is, $d(p,q)$ is the minimum of the lengths of all paths connecting $p$ and $q$. It is obvious that
$$d(p,q)\asymp|p-q|\text{ for all }p,q\in V_\infty.$$
By the shorting and cutting techniques, we reduce $\mathcal{V}_\infty$ to $\mathcal{V}_n$ to obtain the following resistance estimates:
$$R(x,y)\asymp\rho^{\frac{\log d(x,y)}{\log 3}}=d(x,y)^{\frac{\log\rho}{\log3}}=d(x,y)^\gamma\text{ for all }x,y\in V_\infty,$$
where $\gamma=\log\rho/\log3$.
Let $g_B$ be the Green function in a ball $B$. We have Green function estimates as follows.
\begin{mythm}(\cite[Proposition 6.11]{GHL14})\label{SC_con_thm_green}
There exist some constants $C\in(0,+\infty),\eta\in(0,1)$ such that for all $z\in V_\infty,r>0$, we have
$$g_{B(z,r)}(x,y)\le Cr^\gamma\text{ for all }x,y\in B(z,r),$$
$$g_{B(z,r)}(z,y)\ge\frac{1}{C}r^\gamma\text{ for all }y\in B(z,\eta r).$$
\end{mythm}
We obtain an elliptic Harnack inequality in $V_\infty$ as follows.
\begin{mythm}(\cite[Lemma 10.2]{GT01},\cite[Theorem 3.12]{GH14a})\label{SC_con_thm_harnack_infinite}
There exist some constants $C\in(0,+\infty)$, $\delta\in(0,1)$ such that for all $z\in V_\infty,r>0$, for every nonnegative harmonic function $u$ on $V_\infty\cap B(z,r)$, we have
$$\max_{B(z,\delta r)}u\le C\min_{B(z,\delta r)}u.$$
\end{mythm}
\begin{myrmk}
We give an alternative approach as follows. It was proved in \cite{BCK05} that sub-Gaussian heat kernel estimates are equivalent to resistance estimates for random walks on fractal graphs under a strongly recurrent condition. Hence we obtain sub-Gaussian heat kernel estimates, see \cite[Example 4]{BCK05}. It was proved in \cite[Theorem 3.1]{GT02} that sub-Gaussian heat kernel estimates imply the elliptic Harnack inequality. Hence we obtain the elliptic Harnack inequality in $V_\infty$.
\end{myrmk}
Now we obtain Theorem \ref{SC_con_thm_harnack} directly.
\section{Weak Monotonicity Results}\label{SC_con_sec_monotone}
In this section, we give two weak monotonicity results.
For all $n\ge1$, let
$$a_n(u)=\rho^n\sum_{w\in W_n}
{\sum_{\mbox{\tiny
$
\begin{subarray}{c}
p,q\in V_w\\
|p-q|=2^{-1}\cdot3^{-n}
\end{subarray}
$
}}}
(u(p)-u(q))^2,u\in l(V_n).$$
We have one weak monotonicity result as follows.
\begin{mythm}\label{SC_con_thm_monotone1}
There exists some positive constant $C$ such that for all $n,m\ge1,u\in l(V_{n+m})$, we have
$$a_n(u)\le Ca_{n+m}(u).$$
\end{mythm}
\begin{proof}
For all $w\in W_n,p,q\in V_w$ with $|p-q|=2^{-1}\cdot3^{-n}$, by the cutting technique and Corollary \ref{SC_con_cor_resist_upper}, we have
$$
\begin{aligned}
\left(u(p)-u(q)\right)^2&\le R_m(f_w^{-1}(p),f_w^{-1}(q))
\sum_{v\in W_m}
{\sum_{\mbox{\tiny
$
\begin{subarray}{c}
x,y\in V_{wv}\\
|x-y|=2^{-1}\cdot3^{-(n+m)}
\end{subarray}
$
}}}
(u(x)-u(y))^2\\
&\le C\rho^m\sum_{v\in W_m}
{\sum_{\mbox{\tiny
$
\begin{subarray}{c}
x,y\in V_{wv}\\
|x-y|=2^{-1}\cdot3^{-(n+m)}
\end{subarray}
$
}}}
(u(x)-u(y))^2.
\end{aligned}
$$
Hence
$$
\begin{aligned}
a_n(u)&=\rho^n\sum_{w\in W_n}
{\sum_{\mbox{\tiny
$
\begin{subarray}{c}
p,q\in V_w\\
|p-q|=2^{-1}\cdot3^{-n}
\end{subarray}
$
}}}
(u(p)-u(q))^2\\
&\le\rho^n\sum_{w\in W_n}
{\sum_{\mbox{\tiny
$
\begin{subarray}{c}
p,q\in V_w\\
|p-q|=2^{-1}\cdot3^{-n}
\end{subarray}
$
}}}
\left(C\rho^m\sum_{v\in W_m}
{\sum_{\mbox{\tiny
$
\begin{subarray}{c}
x,y\in V_{wv}\\
|x-y|=2^{-1}\cdot3^{-(n+m)}
\end{subarray}
$
}}}
(u(x)-u(y))^2\right)\\
&=C\rho^{n+m}\sum_{w\in W_{n+m}}
{\sum_{\mbox{\tiny
$
\begin{subarray}{c}
p,q\in V_w\\
|p-q|=2^{-1}\cdot3^{-(n+m)}
\end{subarray}
$
}}}
(u(p)-u(q))^2=Ca_{n+m}(u).
\end{aligned}
$$
\end{proof}
For all $n\ge1$, let
$$b_n(u)=\rho^n
{\sum_{\mbox{\tiny
$
\begin{subarray}{c}
w^{(1)}\sim_nw^{(2)}
\end{subarray}
$
}}}
(P_nu(w^{(1)})-P_nu(w^{(2)}))^2,u\in L^2(K;\nu).$$
We have another weak monotonicity result as follows.
\begin{mythm}\label{SC_con_thm_monotone2}
There exists some positive constant $C$ such that for all $n,m\ge1,u\in L^2(K;\nu)$, we have
$$b_n(u)\le Cb_{n+m}(u).$$
\end{mythm}
\begin{myrmk}
This result was also obtained in \cite[Proposition 5.2]{KZ92}. Here we give a direct proof using resistance estimates.
\end{myrmk}
This result can be reduced as follows.
For all $n\ge1$, let
$$B_n(u)=\rho^n\sum_{w^{(1)}\sim_nw^{(2)}}(u(w^{(1)})-u(w^{(2)}))^2,u\in l(W_n).$$
For all $n,m\ge1$, let $M_{n,m}:l(W_{n+m})\to l(W_n)$ be a mean value operator given by
$$(M_{n,m}u)(w)=\frac{1}{8^m}\sum_{v\in W_m}u(wv),w\in W_n,u\in l(W_{n+m}).$$
\begin{mythm}\label{SC_con_thm_monotonicity2}
There exists some positive constant $C$ such that for all $n,m\ge1,u\in l(W_{n+m})$, we have
$$B_n(M_{n,m}u)\le CB_{n+m}(u).$$
\end{mythm}
\begin{proof}[Proof of Theorem \ref{SC_con_thm_monotone2} using Theorem \ref{SC_con_thm_monotonicity2}]
For all $u\in L^2(K;\nu)$, note that
$$P_nu=M_{n,m}(P_{n+m}u),$$
hence
$$
\begin{aligned}
b_n(u)&=\rho^n\sum_{w^{(1)}\sim_nw^{(2)}}(P_nu(w^{(1)})-P_nu(w^{(2)}))^2=B_n(P_nu)\\
&=B_n(M_{n,m}(P_{n+m}u))\le CB_{n+m}(P_{n+m}u)\\
&=C\rho^{n+m}\sum_{w^{(1)}\sim_{n+m}w^{(2)}}(P_{n+m}u(w^{(1)})-P_{n+m}u(w^{(2)}))^2=Cb_{n+m}(u).
\end{aligned}
$$
\end{proof}
\begin{proof}[Proof of Theorem \ref{SC_con_thm_monotonicity2}]
Fix $n\ge1$. Assume that $W\subseteq W_n$ is connected, that is, for all $w^{(1)},w^{(2)}\in W$, there exists a finite sequence $\myset{v^{(1)},\ldots,v^{(k)}}\subseteq W$ such that $v^{(1)}=w^{(1)},v^{(k)}=w^{(2)}$ and $v^{(i)}\sim_nv^{(i+1)}$ for all $i=1,\ldots,k-1$. Let
$$\mathfrak{D}_W(u,u):=
{\sum_{\mbox{\tiny
$
\begin{subarray}{c}
w^{(1)},w^{(2)}\in W\\
w^{(1)}\sim_nw^{(2)}
\end{subarray}
$
}}}
(u(w^{(1)})-u(w^{(2)}))^2,u\in l(W).$$
For all $w^{(1)},w^{(2)}\in W$, let
$$
\begin{aligned}
\mathfrak{R}_W(w^{(1)},w^{(2)})&=\inf\myset{\mathfrak{D}_W(u,u):u(w^{(1)})=0,u(w^{(2)})=1,u\in l(W)}^{-1}\\
&=\sup\myset{\frac{(u(w^{(1)})-u(w^{(2)}))^2}{\mathfrak{D}_W(u,u)}:\mathfrak{D}_W(u,u)\ne0,u\in l(W)}.
\end{aligned}
$$
It is obvious that
$$(u(w^{(1)})-u(w^{(2)}))^2\le\mathfrak{R}_W(w^{(1)},w^{(2)})\mathfrak{D}_W(u,u)\text{ for all }w^{(1)},w^{(2)}\in W,u\in l(W),$$
and $\mathfrak{R}_W$ is a metric on $W$, hence
$$\mathfrak{R}_W(w^{(1)},w^{(2)})\le\mathfrak{R}_W(w^{(1)},w^{(3)})+\mathfrak{R}_W(w^{(3)},w^{(2)})\text{ for all }w^{(1)},w^{(2)},w^{(3)}\in W.$$
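The properties of the effective resistance $\mathfrak{R}_W$ used here, the variational bound and the triangle inequality, can be checked numerically on any small unit-conductance network via the Laplacian pseudoinverse, $R(a,b)=(e_a-e_b)^\top L^+(e_a-e_b)$; a sketch:

```python
import numpy as np

def effective_resistance(edges, n, a, b):
    """Effective resistance between nodes a, b of a unit-conductance
    graph on n nodes, computed via the Laplacian pseudoinverse."""
    L = np.zeros((n, n))
    for i, j in edges:
        L[i, i] += 1.0; L[j, j] += 1.0
        L[i, j] -= 1.0; L[j, i] -= 1.0
    Lp = np.linalg.pinv(L)
    e = np.zeros(n); e[a] += 1.0; e[b] -= 1.0
    return float(e @ Lp @ e)

# Triangle on 3 nodes: adjacent resistance is 1 in parallel with 2, i.e. 2/3.
tri = [(0, 1), (1, 2), (2, 0)]
print(effective_resistance(tri, 3, 0, 1))
```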
Fix $w^{(1)}\sim_nw^{(2)}$. Then there exist $i,j=0,\ldots,7$ such that $w^{(1)}i^m\sim_{n+m}w^{(2)}j^m$, see Figure \ref{SC_con_fig_monotonicity}.
\begin{figure}[ht]
\centering
\begin{tikzpicture}
\draw (0,0)--(3,0)--(3,3)--(0,3)--cycle;
\draw (4,0)--(7,0)--(7,3)--(4,3)--cycle;
\draw (3,0)--(4,0);
\draw (3,3)--(4,3);
\draw (3.5,1.5) node {$\vdots$};
\draw[fill=black] (3,0) circle (0.06);
\draw[fill=black] (4,0) circle (0.06);
\draw (1.5,1.5) node {$w^{(1)}W_m$};
\draw (5.5,1.5) node {$w^{(2)}W_m$};
\draw (2.5,-0.5) node {$w^{(1)}i^m$};
\draw (4.5,-0.5) node {$w^{(2)}j^m$};
\draw (1,2.5) node {$w^{(1)}v$};
\draw[fill=black] (1,2.2) circle (0.06);
\draw (5,2.5) node {$w^{(2)}v$};
\draw[fill=black] (5,2.2) circle (0.06);
\end{tikzpicture}
\caption{$w^{(1)}W_m$ and $w^{(2)}W_m$}\label{SC_con_fig_monotonicity}
\end{figure}
Fix $v\in W_m$; then
$$(u(w^{(1)}v)-u(w^{(2)}v))^2\le\mathfrak{R}_{w^{(1)}W_m\cup w^{(2)}W_m}(w^{(1)}v,w^{(2)}v)\mathfrak{D}_{w^{(1)}W_m\cup w^{(2)}W_m}(u,u).$$
By the cutting technique and Corollary \ref{SC_con_cor_resist_upper}, we have
$$
\begin{aligned}
&\mathfrak{R}_{w^{(1)}W_m\cup w^{(2)}W_m}(w^{(1)}v,w^{(2)}v)\\
\le&\mathfrak{R}_{w^{(1)}W_m\cup w^{(2)}W_m}(w^{(1)}v,w^{(1)}i^m)+\mathfrak{R}_{w^{(1)}W_m\cup w^{(2)}W_m}(w^{(1)}i^m,w^{(2)}j^m)\\
&+\mathfrak{R}_{w^{(1)}W_m\cup w^{(2)}W_m}(w^{(2)}j^m,w^{(2)}v)\\
\le&\mathfrak{R}_m(v,i^m)+1+\mathfrak{R}_m(v,j^m)\lesssim\rho^m.
\end{aligned}
$$
Hence
$$
\begin{aligned}
&(u(w^{(1)}v)-u(w^{(2)}v))^2\lesssim\rho^m\mathfrak{D}_{w^{(1)}W_m\cup w^{(2)}W_m}(u,u)\\
=&\rho^m\left(\mathfrak{D}_{w^{(1)}W_m}(u,u)+\mathfrak{D}_{w^{(2)}W_m}(u,u)\right.\\
&\left.+
{\sum_{\mbox{\tiny
$
\begin{subarray}{c}
v^{(1)},v^{(2)}\in W_m\\
w^{(1)}v^{(1)}\sim_{n+m}w^{(2)}v^{(2)}
\end{subarray}
$
}}}
(u(w^{(1)}v^{(1)})-u(w^{(2)}v^{(2)}))^2\right).
\end{aligned}
$$
Hence
$$
\begin{aligned}
&\left(M_{n,m}u(w^{(1)})-M_{n,m}u(w^{(2)})\right)^2=\left(\frac{1}{8^m}\sum_{v\in W_m}\left(u(w^{(1)}v)-u(w^{(2)}v)\right)\right)^2\\
\le&\frac{1}{8^m}\sum_{v\in W_m}\left(u(w^{(1)}v)-u(w^{(2)}v)\right)^2\\
\lesssim&\rho^m\left(\mathfrak{D}_{w^{(1)}W_m}(u,u)+\mathfrak{D}_{w^{(2)}W_m}(u,u)\right.\\
&\left.+
{\sum_{\mbox{\tiny
$
\begin{subarray}{c}
v^{(1)},v^{(2)}\in W_m\\
w^{(1)}v^{(1)}\sim_{n+m}w^{(2)}v^{(2)}
\end{subarray}
$
}}}
(u(w^{(1)}v^{(1)})-u(w^{(2)}v^{(2)}))^2\right).
\end{aligned}
$$
In the summation with respect to $w^{(1)}\sim_nw^{(2)}$, the terms $\mathfrak{D}_{w^{(1)}W_m}(u,u),\mathfrak{D}_{w^{(2)}W_m}(u,u)$ are summed at most $8$ times, hence
$$
\begin{aligned}
B_n(M_{n,m}u)&=\rho^n\sum_{w^{(1)}\sim_nw^{(2)}}\left(M_{n,m}u(w^{(1)})-M_{n,m}u(w^{(2)})\right)^2\\
&\lesssim\rho^n\sum_{w^{(1)}\sim_nw^{(2)}}\rho^m\left(\mathfrak{D}_{w^{(1)}W_m}(u,u)+\mathfrak{D}_{w^{(2)}W_m}(u,u)\right.\\
&\left.+
{\sum_{\mbox{\tiny
$
\begin{subarray}{c}
v^{(1)},v^{(2)}\in W_m\\
w^{(1)}v^{(1)}\sim_{n+m}w^{(2)}v^{(2)}
\end{subarray}
$
}}}
(u(w^{(1)}v^{(1)})-u(w^{(2)}v^{(2)}))^2\right)\\
&\le8\rho^{n+m}\sum_{w^{(1)}\sim_{n+m}w^{(2)}}\left(u(w^{(1)})-u(w^{(2)})\right)^2=8B_{n+m}(u).
\end{aligned}
$$
\end{proof}
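The averaging step in the proof above, $\big(\frac{1}{8^m}\sum_v x_v\big)^2\le\frac{1}{8^m}\sum_v x_v^2$, is the Cauchy--Schwarz inequality for the uniform mean; a quick numerical check:

```python
import random

def mean_square_gap(xs):
    """(mean of squares) minus (square of mean); nonnegative by
    Cauchy-Schwarz, which is the averaging step for M_{n,m}."""
    n = len(xs)
    mean = sum(xs) / n
    return sum(x * x for x in xs) / n - mean * mean

random.seed(0)
xs = [random.uniform(-1.0, 1.0) for _ in range(8 ** 2)]  # |W_m| = 8^m values
print(mean_square_gap(xs) >= 0.0)
```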
\section{One Good Function}\label{SC_con_sec_good}
In this section, we construct \emph{one} good function with an energy property and a separation property.
By a standard argument, the Harnack inequality yields H\"older continuity as follows.
\begin{mythm}\label{SC_con_thm_holder}
For all $0\le\delta_1<\varepsilon_1<\varepsilon_2<\delta_2\le1$, there exist some positive constants $\theta=\theta(\delta_1,\delta_2,\varepsilon_1,\varepsilon_2)$, $C=C(\delta_1,\delta_2,\varepsilon_1,\varepsilon_2)$ such that for all $n\ge1$, for every bounded harmonic function $u$ on $V_n\cap(\delta_1,\delta_2)\times[0,1]$, we have
$$|u(x)-u(y)|\le C|x-y|^\theta\left(\max_{V_n\cap[\delta_1,\delta_2]\times[0,1]}|u|\right)\text{ for all }x,y\in V_n\cap[\varepsilon_1,\varepsilon_2]\times[0,1].$$
\end{mythm}
\begin{proof}
The proof is similar to \cite[Theorem 3.9]{BB89}.
\end{proof}
For all $n\ge1$, let $u_n\in l(V_n)$ satisfy $u_n|_{V_n\cap\myset{0}\times[0,1]}=0,u_n|_{V_n\cap\myset{1}\times[0,1]}=1$ and
$$D_n(u_n,u_n)=\sum_{w\in W_n}
{\sum_{\mbox{\tiny
$
\begin{subarray}{c}
p,q\in V_w\\
|p-q|=2^{-1}\cdot3^{-n}
\end{subarray}
$
}}}
(u_n(p)-u_n(q))^2=(R_n^V)^{-1}.$$
Then $u_n$ is harmonic on $V_n\cap(0,1)\times[0,1]$, $u_n(x,y)=1-u_n(1-x,y)=u_n(x,1-y)$ for all $(x,y)\in V_n$ and
$$u_n|_{V_n\cap\myset{\frac{1}{2}}\times[0,1]}=\frac{1}{2},u_n|_{V_n\cap[0,\frac{1}{2})\times[0,1]}<\frac{1}{2},u_n|_{V_n\cap(\frac{1}{2},1]\times[0,1]}>\frac{1}{2}.$$
By the Arzel\`a-Ascoli theorem, Theorem \ref{SC_con_thm_holder} and a diagonal argument, there exist a subsequence, still denoted by $\myset{u_n}$, and a function $u$ on $K$ with $u|_{\myset{0}\times[0,1]}=0$ and $u|_{\myset{1}\times[0,1]}=1$ such that $u_n$ converges uniformly to $u$ on $K\cap[\varepsilon_1,\varepsilon_2]\times[0,1]$ for all $0<\varepsilon_1<\varepsilon_2<1$. Hence $u$ is continuous on $K\cap(0,1)\times[0,1]$, $u_n(x)\to u(x)$ for all $x\in K$ and $u(x,y)=1-u(1-x,y)=u(x,1-y)$ for all $(x,y)\in K$.
\begin{myprop}\label{SC_con_prop_u}
The function $u$ given above has the following properties.
\begin{enumerate}[(1)]
\item There exists some positive constant $C$ such that
$$a_n(u)\le C\text{ for all }n\ge1.$$
\item For all $\beta\in(\alpha,\log(8\rho)/\log3)$, we have
$$E_{\beta}(u,u)<+\infty.$$
Hence $u\in C^{\frac{\beta-\alpha}{2}}(K)$.
\item
$$u|_{K\cap\myset{\frac{1}{2}}\times[0,1]}=\frac{1}{2},u|_{K\cap[0,\frac{1}{2})\times[0,1]}<\frac{1}{2},u|_{K\cap(\frac{1}{2},1]\times[0,1]}>\frac{1}{2}.$$
\end{enumerate}
\end{myprop}
\begin{proof}
(1) By Theorem \ref{SC_con_thm_resist1} and Theorem \ref{SC_con_thm_monotone1}, for all $n\ge1$, we have
$$
\begin{aligned}
&a_n(u)=\lim_{m\to+\infty}a_{n}(u_{n+m})\le C\varliminf_{m\to+\infty}a_{n+m}(u_{n+m})\\
=&C\varliminf_{m\to+\infty}\rho^{n+m}D_{n+m}(u_{n+m},u_{n+m})=C\varliminf_{m\to+\infty}\rho^{n+m}\left(R_{n+m}^V\right)^{-1}\le C.
\end{aligned}
$$
(2) By (1), for all $\beta\in(\alpha,\log(8\rho)/\log3)$, we have
$$E_{\beta}(u,u)=\sum_{n=1}^\infty\left(3^{\beta-\alpha}\rho^{-1}\right)^na_n(u)\le C\sum_{n=1}^\infty\left(3^{\beta-\alpha}\rho^{-1}\right)^n<+\infty.$$
By Lemma \ref{SC_con_lem_equiv} and Lemma \ref{lem_SC_holder}, we have $u\in C^{\frac{\beta-\alpha}{2}}(K)$.
(3) It is obvious that
$$u|_{K\cap\myset{\frac{1}{2}}\times[0,1]}=\frac{1}{2},u|_{K\cap[0,\frac{1}{2})\times[0,1]}\le\frac{1}{2},u|_{K\cap(\frac{1}{2},1]\times[0,1]}\ge\frac{1}{2}.$$
By symmetry, we only need to show that
$$u|_{K\cap(\frac{1}{2},1]\times[0,1]}>\frac{1}{2}.$$
Suppose there exists $(x,y)\in K\cap(1/2,1)\times[0,1]$ such that $u(x,y)=1/2$. Since $u_n-\frac{1}{2}$ is a nonnegative harmonic function on $V_n\cap(\frac{1}{2},1)\times[0,1]$, by Theorem \ref{SC_con_thm_harnack}, for all $1/2<\varepsilon_1<x<\varepsilon_2<1$, there exists some positive constant $C=C(\varepsilon_1,\varepsilon_2)$ such that for all $n\ge1$
$$\max_{V_n\cap[\varepsilon_1,\varepsilon_2]\times[0,1]}\left(u_n-\frac{1}{2}\right)\le C\min_{V_n\cap[\varepsilon_1,\varepsilon_2]\times[0,1]}\left(u_n-\frac{1}{2}\right).$$
Since $u_n$ converges uniformly to $u$ on $K\cap[\varepsilon_1,\varepsilon_2]\times[0,1]$, we have
$$\sup_{K\cap[\varepsilon_1,\varepsilon_2]\times[0,1]}\left(u-\frac{1}{2}\right)\le C\inf_{K\cap[\varepsilon_1,\varepsilon_2]\times[0,1]}\left(u-\frac{1}{2}\right)=0.$$
Hence
$$u-\frac{1}{2}=0\text{ on }K\cap[\varepsilon_1,\varepsilon_2]\times[0,1]\text{ for all }\frac{1}{2}<\varepsilon_1<x<\varepsilon_2<1.$$
Hence
$$u=\frac{1}{2}\text{ on }K\cap(\frac{1}{2},1)\times[0,1].$$
By continuity, we have
$$u=\frac{1}{2}\text{ on }K\cap[\frac{1}{2},1]\times[0,1],$$
which is a contradiction.
\end{proof}
\section{Proof of Theorem \ref{SC_con_thm_walk}}\label{SC_con_sec_walk}
Firstly, we consider the upper bound. Assume that $(\mathcal{E}_\beta,\mathcal{F}_\beta)$ is a regular Dirichlet form on $L^2(K;\nu)$, then there exists $u\in\mathcal{F}_\beta$ such that $u|_{\myset{0}\times[0,1]}=0$ and $u|_{\myset{1}\times[0,1]}=1$. Hence
$$
\begin{aligned}
+\infty&>E_\beta(u,u)=\sum_{n=1}^\infty3^{(\beta-\alpha)n}D_n(u,u)\ge\sum_{n=1}^\infty3^{(\beta-\alpha)n}D_n(u_n,u_n)\\
&=\sum_{n=1}^\infty3^{(\beta-\alpha)n}\left(R_n^V\right)^{-1}\ge C\sum_{n=1}^\infty\left(3^{\beta-\alpha}\rho^{-1}\right)^n.
\end{aligned}
$$
Hence $3^{\beta-\alpha}\rho^{-1}<1$, that is, $\beta<{\log\left(8\rho\right)}/{\log3}=\beta^*$. Hence $\beta_*\le\beta^*$.
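The convergence criterion behind $\beta^*=\log(8\rho)/\log3$ is elementary: the series $\sum_n\big(3^{\beta-\alpha}\rho^{-1}\big)^n$ converges exactly when its ratio is below $1$. A sketch with $\alpha=\log8/\log3$ and a hypothetical value $\rho=1.25$:

```python
import math

alpha = math.log(8) / math.log(3)   # Hausdorff dimension of the carpet
rho = 1.25                          # hypothetical resistance scaling factor

beta_star = math.log(8 * rho) / math.log(3)

def series_ratio(beta):
    """Common ratio of the geometric series sum_n (3^(beta-alpha) / rho)^n."""
    return 3 ** (beta - alpha) / rho

# The ratio crosses 1 exactly at beta_star.
print(series_ratio(beta_star - 0.01) < 1, series_ratio(beta_star + 0.01) < 1)
```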
Secondly, we consider the lower bound. As in the proof of Proposition \ref{SC_con_prop_lower}, to show that $(\mathcal{E}_\beta,\mathcal{F}_\beta)$ is a regular Dirichlet form on $L^2(K;\nu)$ for all $\beta\in(\alpha,\beta^*)$, we only need to show that $\mathcal{F}_\beta$ separates points.
Let $u\in C(K)$ be the function in Proposition \ref{SC_con_prop_u}. By Proposition \ref{SC_con_prop_u} (2), we have $E_{\beta}(u,u)<+\infty$, hence $u\in\mathcal{F}_\beta$.
For all distinct $z_1=(x_1,y_1),z_2=(x_2,y_2)\in K$, without loss of generality, we may assume that $x_1<x_2$. Replacing $z_i$ by $f_w^{-1}(z_i)$ for some $w\in W_n$ and some $n\ge1$, we only have the following cases.
\begin{enumerate}[(1)]
\item $x_1\in[0,\frac{1}{2}),x_2\in[\frac{1}{2},1]$.
\item $x_1\in[0,\frac{1}{2}],x_2\in(\frac{1}{2},1]$.
\item $x_1,x_2\in[0,\frac{1}{2})$, there exist distinct $w_1,w_2\in\myset{0,1,5,6,7}$ such that
$$z_1\in K_{w_1}\backslash K_{w_2}\text{ and }z_2\in K_{w_2}\backslash K_{w_1}.$$
\item $x_1,x_2\in(\frac{1}{2},1]$, there exist distinct $w_1,w_2\in\myset{1,2,3,4,5}$ such that
$$z_1\in K_{w_1}\backslash K_{w_2}\text{ and }z_2\in K_{w_2}\backslash K_{w_1}.$$
\end{enumerate}
For the first case, $u(z_1)<{1}/{2}\le u(z_2)$. For the second case, $u(z_1)\le{1}/{2}<u(z_2)$.
\begin{figure}[ht]
\centering
\begin{tikzpicture}[scale=0.5]
\draw (0,0)--(6,0)--(6,6)--(0,6)--cycle;
\draw (2,0)--(2,6);
\draw (4,0)--(4,6);
\draw (0,2)--(6,2);
\draw (0,4)--(6,4);
\draw (1,1) node {$K_0$};
\draw (3,1) node {$K_1$};
\draw (5,1) node {$K_2$};
\draw (5,3) node {$K_3$};
\draw (5,5) node {$K_4$};
\draw (3,5) node {$K_5$};
\draw (1,5) node {$K_6$};
\draw (1,3) node {$K_7$};
\end{tikzpicture}
\caption{The Location of $z_1,z_2$}\label{SC_con_fig_characterization}
\end{figure}
Consider the third case. If $w_1,w_2$ do \emph{not} belong to the same one of the following sets
$$\myset{0,1},\myset{7},\myset{5,6},$$
then we construct a function $w$ as follows. Let $v(x,y)=u(y,x)$ for all $(x,y)\in K$, then
$$v|_{[0,1]\times\myset{0}}=0,v|_{[0,1]\times\myset{1}}=1,$$
$$v(x,y)=v(1-x,y)=1-v(x,1-y)\text{ for all }(x,y)\in K,$$
$$E_\beta(v,v)=E_\beta(u,u)<+\infty.$$
Let
$$w=
\begin{cases}
v\circ f_i^{-1}-1,&\text{on }K_i,i=0,1,2,\\
v\circ f_i^{-1},&\text{on }K_i,i=3,7,\\
v\circ f_i^{-1}+1,&\text{on }K_i,i=4,5,6,\\
\end{cases}
$$
then $w\in C(K)$ is well-defined and $E_\beta(w,w)<+\infty$, hence $w\in\mathcal{F}_\beta$. Moreover, $w(z_1)\ne w(z_2)$, $w|_{[0,1]\times\myset{0}}=-1,w|_{[0,1]\times\myset{1}}=2,w(x,y)=w(1-x,y)=1-w(x,1-y)$ for all $(x,y)\in K$.
If $w_1,w_2$ \emph{do} belong to the same one of the following sets
$$\myset{0,1},\myset{7},\myset{5,6},$$
then it can only happen that $w_1,w_2\in\myset{0,1}$ or $w_1,w_2\in\myset{5,6}$; without loss of generality, we may assume that $w_1=0$ and $w_2=1$, then $z_1\in K_0\backslash K_1$ and $z_2\in K_1\backslash K_0$.
Let
$$
w=
\begin{cases}
u\circ f_i^{-1}-1,&\text{on }K_i,i=0,6,7,\\
u\circ f_i^{-1},&\text{on }K_i,i=1,5,\\
u\circ f_i^{-1}+1,&\text{on }K_i,i=2,3,4,\\
\end{cases}
$$
then $w\in C(K)$ is well-defined and $E_{\beta}(w,w)<+\infty$, hence $w\in\mathcal{F}_\beta$. Moreover $w(z_1)\ne w(z_2)$, $w|_{\myset{0}\times[0,1]}=-1,w|_{\myset{1}\times[0,1]}=2,w(x,y)=w(x,1-y)=1-w(1-x,y)$ for all $(x,y)\in K$.
For the fourth case, by reflection about $\myset{\frac{1}{2}}\times[0,1]$, we reduce to the third case.
Hence $\mathcal{F}_\beta$ separates points, hence $(\mathcal{E}_\beta,\mathcal{F}_\beta)$ is a regular Dirichlet form on $L^2(K;\nu)$ for all $\beta\in(\alpha,\beta^*)$, hence $\beta_*\ge\beta^*$.
In conclusion, $\beta_*=\beta^*$.
\section{Proof of Theorem \ref{SC_con_thm_BM}}\label{SC_con_sec_BM}
In this section, we use a $\Gamma$-convergence technique to construct a local regular Dirichlet form on $L^2(K;\nu)$ which corresponds to the BM. The idea of this construction is from \cite{KS05}.
The construction of local Dirichlet forms on p.c.f. self-similar sets relies heavily on a monotonicity result which is ensured by a compatibility condition, see \cite{Kig93,Kig01}. Our key observation is that even with only weak monotonicity results, we can still apply the $\Gamma$-convergence technique to obtain a limit.
Take $\myset{\beta_n}\subseteq(\alpha,\beta^*)$ with $\beta_n\uparrow\beta^*$. By Proposition \ref{prop_gamma}, there exist a subsequence, still denoted by $\myset{\beta_n}$, and a closed form $(\mathcal{E},\mathcal{F})$ on $L^2(K;\nu)$ in the wide sense such that $(\beta^*-\beta_n)\mathfrak{E}_{\beta_n}$ is $\Gamma$-convergent to $\mathcal{E}$. Without loss of generality, we may assume that
$$0<\beta^*-\beta_n<\frac{1}{n+1}\text{ for all }n\ge1.$$
We have the characterization of $(\mathcal{E},\mathcal{F})$ on $L^2(K;\nu)$ as follows.
\begin{mythm}\label{SC_con_thm_gamma}
$$
\begin{aligned}
&\mathcal{E}(u,u)\asymp\sup_{n\ge1}3^{(\beta^*-\alpha)n}\sum_{w\in W_n}
{\sum_{\mbox{\tiny
$
\begin{subarray}{c}
p,q\in V_w\\
|p-q|=2^{-1}\cdot3^{-n}
\end{subarray}
$
}}}
(u(p)-u(q))^2,\\
&\mathcal{F}=\myset{u\in C(K):\sup_{n\ge1}3^{(\beta^*-\alpha)n}\sum_{w\in W_n}
{\sum_{\mbox{\tiny
$
\begin{subarray}{c}
p,q\in V_w\\
|p-q|=2^{-1}\cdot3^{-n}
\end{subarray}
$
}}}
(u(p)-u(q))^2<+\infty}.
\end{aligned}
$$
Moreover, $(\mathcal{E},\mathcal{F})$ is a regular closed form on $L^2(K;\nu)$.
\end{mythm}
\begin{proof}
Recall that $\rho=3^{\beta^*-\alpha}$, then
$$
\begin{aligned}
E_{\beta}(u,u)&=\sum_{n=1}^\infty3^{(\beta-\alpha)n}\sum_{w\in W_n}
{\sum_{\mbox{\tiny
$
\begin{subarray}{c}
p,q\in V_w\\
|p-q|=2^{-1}\cdot3^{-n}
\end{subarray}
$
}}}
(u(p)-u(q))^2=\sum_{n=1}^\infty3^{(\beta-\beta^*)n}a_n(u),\\
\mathfrak{E}_\beta(u,u)&=\sum_{n=1}^\infty3^{(\beta-\alpha)n}
{\sum_{\mbox{\tiny
$
\begin{subarray}{c}
w^{(1)}\sim_nw^{(2)}
\end{subarray}
$
}}}
\left(P_nu(w^{(1)})-P_nu(w^{(2)})\right)^2=\sum_{n=1}^\infty3^{(\beta-\beta^*)n}b_n(u).
\end{aligned}
$$
We use the weak monotonicity results, Theorem \ref{SC_con_thm_monotone1} and Theorem \ref{SC_con_thm_monotone2}, and the elementary result, Proposition \ref{prop_ele2}.
For any $u\in L^2(K;\nu)$, there exists $\myset{u_n}\subseteq L^2(K;\nu)$ converging strongly to $u$ in $L^2(K;\nu)$ such that
\begin{align*}
\mathcal{E}(u,u)&\ge\varlimsup_{n\to+\infty}(\beta^*-\beta_n)\mathfrak{E}_{\beta_n}(u_n,u_n)\\
&=\varlimsup_{n\to+\infty}(\beta^*-\beta_n)\sum_{k=1}^\infty3^{(\beta_n-\beta^*)k}b_k(u_n)\\
&\ge\varlimsup_{n\to+\infty}(\beta^*-\beta_n)\sum_{k=n+1}^\infty3^{(\beta_n-\beta^*)k}b_k(u_n)\\
&\ge C\varlimsup_{n\to+\infty}(\beta^*-\beta_n)\sum_{k=n+1}^\infty3^{(\beta_n-\beta^*)k}b_n(u_n)\\
&=C\varlimsup_{n\to+\infty}\left\{b_n(u_n)\left[(\beta^*-\beta_n)\frac{3^{(\beta_n-\beta^*)(n+1)}}{1-3^{\beta_n-\beta^*}}\right]\right\}.
\end{align*}
Since $0<\beta^*-\beta_n<1/(n+1)$, we have $3^{(\beta_n-\beta^*)(n+1)}>1/3$. Since
$$\lim_{n\to+\infty}\frac{\beta^*-\beta_n}{1-3^{\beta_n-\beta^*}}=\frac{1}{\log3},$$
there exists some positive constant $C$ such that
$$(\beta^*-\beta_n)\frac{3^{(\beta_n-\beta^*)(n+1)}}{1-3^{\beta_n-\beta^*}}\ge C\text{ for all }n\ge1.$$
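As a purely numerical sanity check of this limit (a hedged Python illustration, not part of the proof), one can verify that $h/(1-3^{-h})$ approaches $1/\log 3$ as $h=\beta^*-\beta_n$ decreases to $0$:

```python
import math

# Numerical sanity check of the limit used above:
# with h = beta* - beta_n > 0, the quantity h / (1 - 3^(-h))
# tends to 1/log 3 as h decreases to 0.
def ratio(h):
    return h / (1.0 - 3.0 ** (-h))

limit = 1.0 / math.log(3.0)
for h in (1e-2, 1e-4, 1e-6):
    print(h, ratio(h), abs(ratio(h) - limit))
```

The ratio stays bounded away from $0$ for small $h>0$, consistent with the existence of the positive constant $C$ above.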
Hence
$$\mathcal{E}(u,u)\ge C\varlimsup_{n\to+\infty}b_n(u_n).$$
Since $u_n\to u$ in $L^2(K;\nu)$, for all $k\ge1$, we have
$$b_k(u)=\lim_{n\to+\infty}b_k(u_n)=\lim_{k\le n\to+\infty}b_k(u_n)\le C\varliminf_{n\to+\infty}b_n(u_n).$$
For all $m\ge1$, we have
$$
\begin{aligned}
(\beta^*-\beta_m)\sum_{k=1}^\infty3^{(\beta_m-\beta^*)k}b_k(u)&\le C(\beta^*-\beta_m)\sum_{k=1}^\infty3^{(\beta_m-\beta^*)k}\varliminf_{n\to+\infty}b_n(u_n)\\
&=C(\beta^*-\beta_m)\frac{3^{\beta_m-\beta^*}}{1-3^{\beta_m-\beta^*}}\varliminf_{n\to+\infty}b_n(u_n).
\end{aligned}
$$
Hence $\mathcal{E}(u,u)<+\infty$ implies $\mathfrak{E}_{\beta_m}(u,u)<+\infty$; by Lemma \ref{lem_SC_holder}, we have $\mathcal{F}\subseteq C(K)$. Hence
$$\varliminf_{m\to+\infty}(\beta^*-\beta_m)\sum_{k=1}^\infty3^{(\beta_m-\beta^*)k}b_k(u)\le C\varliminf_{n\to+\infty}b_n(u_n).$$
Hence for all $u\in\mathcal{F}\subseteq C(K)$, we have
$$
\begin{aligned}
\mathcal{E}(u,u)&\ge C\varlimsup_{n\to+\infty}b_n(u_n)\ge C\varliminf_{n\to+\infty}b_n(u_n)\\
&\ge C\varliminf_{m\to+\infty}(\beta^*-\beta_m)\sum_{k=1}^\infty3^{(\beta_m-\beta^*)k}b_k(u)\\
&\ge C\varliminf_{m\to+\infty}(\beta^*-\beta_m)\sum_{k=1}^\infty3^{(\beta_m-\beta^*)k}a_k(u)\\
&\ge C\sup_{n\ge1}a_n(u).
\end{aligned}
$$
On the other hand, for all $u\in\mathcal{F}\subseteq C(K)$, we have
$$
\begin{aligned}
\mathcal{E}(u,u)&\le\varliminf_{n\to+\infty}(\beta^*-\beta_n)\mathfrak{E}_{\beta_n}(u,u)\\
&\le C\varliminf_{n\to+\infty}(\beta^*-\beta_n)E_{\beta_n}(u,u)\\
&=C\varliminf_{n\to+\infty}(\beta^*-\beta_n)\sum_{k=1}^\infty3^{(\beta_n-\beta^*)k}a_k(u)\\
&=C\varliminf_{n\to+\infty}\frac{\beta^*-\beta_n}{1-3^{\beta_n-\beta^*}}(1-3^{\beta_n-\beta^*})\sum_{k=1}^\infty3^{(\beta_n-\beta^*)k}a_k(u)\\
&\le C\sup_{n\ge1}a_n(u).
\end{aligned}
$$
Therefore, for all $u\in\mathcal{F}\subseteq C(K)$, we have
$$\mathcal{E}(u,u)\asymp\sup_{n\ge1}a_n(u)=\sup_{n\ge1}3^{(\beta^*-\alpha)n}\sum_{w\in W_n}
{\sum_{\mbox{\tiny
$
\begin{subarray}{c}
p,q\in V_w\\
|p-q|=2^{-1}\cdot3^{-n}
\end{subarray}
$
}}}
(u(p)-u(q))^2,$$
and
$$\mathcal{F}=\myset{u\in C(K):\sup_{n\ge1}3^{(\beta^*-\alpha)n}\sum_{w\in W_n}
{\sum_{\mbox{\tiny
$
\begin{subarray}{c}
p,q\in V_w\\
|p-q|=2^{-1}\cdot3^{-n}
\end{subarray}
$
}}}
(u(p)-u(q))^2<+\infty}.$$
It is obvious that the function $u\in C(K)$ in Proposition \ref{SC_con_prop_u} is in $\mathcal{F}$. As in the proof of Theorem \ref{SC_con_thm_walk}, $\mathcal{F}$ is uniformly dense in $C(K)$. Hence $(\mathcal{E},\mathcal{F})$ is a regular closed form on $L^2(K;\nu)$.
\end{proof}
Now we prove Theorem \ref{SC_con_thm_BM} as follows.
\begin{proof}[Proof of Theorem \ref{SC_con_thm_BM}]
For all $n\ge1,u\in l(V_{n+1})$, we have
$$
\begin{aligned}
&\rho\sum_{i=0}^7a_n(u\circ f_i)=\rho\sum_{i=0}^7\rho^n\sum_{w\in W_n}
{\sum_{\mbox{\tiny
$
\begin{subarray}{c}
p,q\in V_w\\
|p-q|=2^{-1}\cdot3^{-n}
\end{subarray}
$
}}}
(u\circ f_i(p)-u\circ f_i(q))^2\\
=&\rho^{n+1}\sum_{w\in W_{n+1}}
{\sum_{\mbox{\tiny
$
\begin{subarray}{c}
p,q\in V_w\\
|p-q|=2^{-1}\cdot3^{-(n+1)}
\end{subarray}
$
}}}
(u(p)-u(q))^2=a_{n+1}(u).
\end{aligned}
$$
Hence for all $n,m\ge1,u\in l(V_{n+m})$, we have
$$\rho^m\sum_{w\in W_m}a_n(u\circ f_w)=a_{n+m}(u).$$
For all $u\in\mathcal{F},n\ge1,w\in W_n$, we have
$$\sup_{k\ge1}a_k(u\circ f_w)\le\sup_{k\ge1}\sum_{w\in W_n}a_k(u\circ f_w)=\rho^{-n}\sup_{k\ge1}a_{n+k}(u)\le\rho^{-n}\sup_{k\ge1}a_{k}(u)<+\infty,$$
hence $u\circ f_w\in\mathcal{F}$.
Let
$$\mybar{\mathcal{E}}^{(n)}(u,u)=\rho^n\sum_{w\in W_n}\mathcal{E}(u\circ f_w,u\circ f_w),u\in\mathcal{F},n\ge1.$$
Then
$$
\begin{aligned}
\mybar{\mathcal{E}}^{(n)}(u,u)&\ge C\rho^n\sum_{w\in W_n}\varlimsup_{k\to+\infty}a_k(u\circ f_w)\ge C\rho^n\varlimsup_{k\to+\infty}\sum_{w\in W_n}a_k(u\circ f_w)\\
&=C\varlimsup_{k\to+\infty}a_{n+k}(u)\ge C\sup_{k\ge1}a_k(u).
\end{aligned}
$$
Similarly
$$
\begin{aligned}
\mybar{\mathcal{E}}^{(n)}(u,u)&\le C\rho^n\sum_{w\in W_n}\varliminf_{k\to+\infty}a_k(u\circ f_w)\le C\rho^n\varliminf_{k\to+\infty}\sum_{w\in W_n}a_k(u\circ f_w)\\
&=C\varliminf_{k\to+\infty}a_{n+k}(u)\le C\sup_{k\ge1}a_k(u).
\end{aligned}
$$
Hence
$$\mybar{\mathcal{E}}^{(n)}(u,u)\asymp\sup_{k\ge1}a_k(u)\text{ for all }u\in\mathcal{F},n\ge1.$$
Moreover, for all $u\in\mathcal{F}$, $n\ge1$, we have
$$
\begin{aligned}
\mybar{\mathcal{E}}^{(n+1)}(u,u)&=\rho^{n+1}\sum_{w\in W_{n+1}}\mathcal{E}(u\circ f_w,u\circ f_w)\\
&=\rho^{n+1}\sum_{i=0}^7\sum_{w\in W_n}\mathcal{E}(u\circ f_i\circ f_w,u\circ f_i\circ f_w)\\
&=\rho\sum_{i=0}^7\left(\rho^n\sum_{w\in W_n}\mathcal{E}((u\circ f_i)\circ f_w,(u\circ f_i)\circ f_w)\right)\\
&=\rho\sum_{i=0}^7\mybar{\mathcal{E}}^{(n)}(u\circ f_i,u\circ f_i).
\end{aligned}
$$
Let
$$\tilde{\mathcal{E}}^{(n)}(u,u)=\frac{1}{n}\sum_{l=1}^n\mybar{\mathcal{E}}^{(l)}(u,u),u\in\mathcal{F},n\ge1.$$
It is obvious that
$$\tilde{\mathcal{E}}^{(n)}(u,u)\asymp\sup_{k\ge1}a_k(u)\text{ for all }u\in\mathcal{F},n\ge1.$$
Since $(\mathcal{E},\mathcal{F})$ is a regular closed form on $L^2(K;\nu)$, by \cite[Definition 1.3.8, Remark 1.3.9, Definition 1.3.10, Remark 1.3.11]{CF12}, the space $(\mathcal{F},\mathcal{E}_1)$ is a separable Hilbert space. Let $\{u_i\}_{i\ge1}$ be a dense subset of $(\mathcal{F},\mathcal{E}_1)$. For all $i\ge1$, $\{\tilde{\mathcal{E}}^{(n)}(u_i,u_i)\}_{n\ge1}$ is a bounded sequence. By a diagonal argument, there exists a subsequence $\{n_k\}_{k\ge1}$ such that $\{\tilde{\mathcal{E}}^{(n_k)}(u_i,u_i)\}_{k\ge1}$ converges for all $i\ge1$. Since
$$\tilde{\mathcal{E}}^{(n)}(u,u)\asymp\sup_{k\ge1}a_k(u)\asymp\mathcal{E}(u,u)\text{ for all }u\in\mathcal{F},n\ge1,$$
we have $\{\tilde{\mathcal{E}}^{(n_k)}(u,u)\}_{k\ge1}$ converges for all $u\in\mathcal{F}$. Let
$$\mathcal{E}_{{\mathrm{loc}}}(u,u)=\lim_{k\to+\infty}\tilde{\mathcal{E}}^{(n_k)}(u,u)\text{ for all }u\in\mathcal{F}_{{\mathrm{loc}}}:=\mathcal{F}.$$
Then
$$\mathcal{E}_{{\mathrm{loc}}}(u,u)\asymp\sup_{k\ge1}a_k(u)\asymp\mathcal{E}(u,u)\text{ for all }u\in\mathcal{F}_{\mathrm{loc}}=\mathcal{F}.$$
Hence $(\mathcal{E}_{\mathrm{loc}},\mathcal{F}_{\mathrm{loc}})$ is a regular closed form on $L^2(K;\nu)$. It is obvious that $1\in\mathcal{F}_{\mathrm{loc}}$ and $\mathcal{E}_{\mathrm{loc}}(1,1)=0$; by \cite[Lemma 1.6.5, Theorem 1.6.3]{FOT11}, $(\mathcal{E}_{\mathrm{loc}},\mathcal{F}_{\mathrm{loc}})$ on $L^2(K;\nu)$ is conservative.
For all $u\in\mathcal{F}_{\mathrm{loc}}=\mathcal{F}$, we have $u\circ f_i\in\mathcal{F}=\mathcal{F}_{\mathrm{loc}}$ for all $i=0,\ldots,7$ and
$$
\begin{aligned}
\rho\sum_{i=0}^7\mathcal{E}_{\mathrm{loc}}(u\circ f_i,u\circ f_i)&=\rho\sum_{i=0}^7\lim_{k\to+\infty}\tilde{\mathcal{E}}^{(n_k)}(u\circ f_i,u\circ f_i)\\
&=\rho\sum_{i=0}^7\lim_{k\to+\infty}\frac{1}{n_k}\sum_{l=1}^{n_k}\mybar{\mathcal{E}}^{(l)}(u\circ f_i,u\circ f_i)\\
&=\lim_{k\to+\infty}\frac{1}{n_k}\sum_{l=1}^{n_k}\left[\rho\sum_{i=0}^7\mybar{\mathcal{E}}^{(l)}(u\circ f_i,u\circ f_i)\right]\\
&=\lim_{k\to+\infty}\frac{1}{n_k}\sum_{l=1}^{n_k}\mybar{\mathcal{E}}^{(l+1)}(u,u)\\
&=\lim_{k\to+\infty}\frac{1}{n_k}\sum_{l=2}^{n_k+1}\mybar{\mathcal{E}}^{(l)}(u,u)\\
&=\lim_{k\to+\infty}\left[\frac{1}{n_k}\sum_{l=1}^{n_k}\mybar{\mathcal{E}}^{(l)}(u,u)+\frac{1}{n_k}\mybar{\mathcal{E}}^{(n_k+1)}(u,u)-\frac{1}{n_k}\mybar{\mathcal{E}}^{(1)}(u,u)\right]\\
&=\lim_{k\to+\infty}\tilde{\mathcal{E}}^{(n_k)}(u,u)=\mathcal{E}_{\mathrm{loc}}(u,u).
\end{aligned}
$$
Hence $(\mathcal{E}_{\mathrm{loc}},\mathcal{F}_{\mathrm{loc}})$ on $L^2(K;\nu)$ is self-similar.
For all $u,v\in\mathcal{F}_{\mathrm{loc}}$ such that $\mathrm{supp}(u)$ and $\mathrm{supp}(v)$ are compact and $v$ is constant in an open neighborhood $U$ of $\mathrm{supp}(u)$, the set $K\backslash U$ is compact and $\mathrm{supp}(u)\cap(K\backslash U)=\emptyset$, hence
$$\delta=\mathrm{dist}(\mathrm{supp}(u),K\backslash U)>0.$$
Taking sufficiently large $n\ge1$ such that $3^{1-n}<\delta$, by self-similarity, we have
$$\mathcal{E}_{\mathrm{loc}}(u,v)=\rho^n\sum_{w\in W_n}\mathcal{E}_{\mathrm{loc}}(u\circ f_w,v\circ f_w).$$
For all $w\in W_n$, we have $u\circ f_w=0$ or $v\circ f_w$ is constant, hence $\mathcal{E}_{\mathrm{loc}}(u\circ f_w,v\circ f_w)=0$, hence $\mathcal{E}_{\mathrm{loc}}(u,v)=0$, that is, $(\mathcal{E}_{\mathrm{loc}},\mathcal{F}_{\mathrm{loc}})$ on $L^2(K;\nu)$ is strongly local.
For all $u\in\mathcal{F}_{\mathrm{loc}}$, it is obvious that $u^+,u^-,1-u,\mybar{u}=(0\vee u)\wedge1\in\mathcal{F}_{\mathrm{loc}}$ and
$$\mathcal{E}_{\mathrm{loc}}(u,u)=\mathcal{E}_{\mathrm{loc}}(1-u,1-u).$$
Since $u^+u^-=0$ and $(\mathcal{E}_{\mathrm{loc}},\mathcal{F}_{\mathrm{loc}})$ on $L^2(K;\nu)$ is strongly local, we have $\mathcal{E}_{\mathrm{loc}}(u^+,u^-)=0$. Hence
$$
\begin{aligned}
\mathcal{E}_{\mathrm{loc}}(u,u)&=\mathcal{E}_{\mathrm{loc}}(u^+-u^-,u^+-u^-)\\
&=\mathcal{E}_{\mathrm{loc}}(u^+,u^+)+\mathcal{E}_{\mathrm{loc}}(u^-,u^-)-2\mathcal{E}_{\mathrm{loc}}(u^+,u^-)\\
&=\mathcal{E}_{\mathrm{loc}}(u^+,u^+)+\mathcal{E}_{\mathrm{loc}}(u^-,u^-)\\
&\ge\mathcal{E}_{\mathrm{loc}}(u^+,u^+)=\mathcal{E}_{\mathrm{loc}}(1-u^+,1-u^+)\\
&\ge\mathcal{E}_{\mathrm{loc}}((1-u^+)^+,(1-u^+)^+)=\mathcal{E}_{\mathrm{loc}}(1-(1-u^+)^+,1-(1-u^+)^+)\\
&=\mathcal{E}_{\mathrm{loc}}(\mybar{u},\mybar{u}),
\end{aligned}
$$
that is, $(\mathcal{E}_{\mathrm{loc}},\mathcal{F}_{\mathrm{loc}})$ on $L^2(K;\nu)$ is Markovian. Hence $(\mathcal{E}_{\mathrm{loc}},\mathcal{F}_{\mathrm{loc}})$ is a self-similar strongly local regular Dirichlet form on $L^2(K;\nu)$.
\end{proof}
\begin{myrmk}
The idea of the construction of $\mybar{\mathcal{E}}^{(n)},\tilde{\mathcal{E}}^{(n)}$ is from \cite[Section 6]{KZ92}. The proof of the Markovian property is from the proof of \cite[Theorem 2.1]{BBKT10}.
\end{myrmk}
\section{Proof of Theorem \ref{SC_con_thm_Besov}}\label{SC_con_sec_Besov}
Theorem \ref{SC_con_thm_Besov} is a special case of the following result.
\begin{myprop}\label{SC_con_prop_equiv_local}
For all $\beta\in(\alpha,+\infty)$, for all $u\in C(K)$, we have
$$\sup_{n\ge1}3^{(\beta-\alpha)n}\sum_{w\in W_n}
{\sum_{\mbox{\tiny
$
\begin{subarray}{c}
p,q\in V_w\\
|p-q|=2^{-1}\cdot3^{-n}
\end{subarray}
$
}}}
(u(p)-u(q))^2\asymp[u]_{B^{2,\infty}_{\alpha,\beta}(K)}.$$
\end{myprop}
\begin{proof}[Proof of Proposition \ref{SC_con_prop_equiv_local}]
The proof is very similar to that of Lemma \ref{SC_con_lem_equiv}. We only point out the differences. To show that LHS$\lesssim$RHS, by the proof of Theorem \ref{SC_con_thm_equiv1}, we still have Equation (\ref{SC_con_eqn_equiv1_1}) where $E(u)$ is replaced by $F(u)$. Then
$$
\begin{aligned}
&3^{(\beta-\alpha)n}\sum_{w\in W_n}
{\sum_{\mbox{\tiny
$
\begin{subarray}{c}
p,q\in V_w\\
|p-q|=2^{-1}\cdot3^{-n}
\end{subarray}
$
}}}
(u(p)-u(q))^2\\
\le&128\cdot2^{(\beta-\alpha)/2}cF(u)3^{\beta n-(\beta-\alpha)(n+kl)}+32\cdot3^{\alpha k}\sum_{i=0}^{l-1}2^i\cdot3^{-(\beta-\alpha)ki}E_{n+ki}(u).
\end{aligned}
$$
Take $l=n$, then
$$
\begin{aligned}
&3^{(\beta-\alpha)n}\sum_{w\in W_n}
{\sum_{\mbox{\tiny
$
\begin{subarray}{c}
p,q\in V_w\\
|p-q|=2^{-1}\cdot3^{-n}
\end{subarray}
$
}}}
(u(p)-u(q))^2\\
\le&128\cdot2^{(\beta-\alpha)/2}cF(u)3^{[\beta-(\beta-\alpha)(k+1)]n}+32\cdot3^{\alpha k}\sum_{i=0}^{n-1}2^i\cdot3^{-(\beta-\alpha)ki}E_{n+ki}(u)\\
\le&128\cdot2^{(\beta-\alpha)/2}cF(u)3^{[\beta-(\beta-\alpha)(k+1)]n}+32\cdot3^{\alpha k}\sum_{i=0}^{\infty}3^{[1-(\beta-\alpha)k]i}\left(\sup_{n\ge1}E_{n}(u)\right).
\end{aligned}
$$
Take $k\ge1$ sufficiently large such that $\beta-(\beta-\alpha)(k+1)<0$ and $1-(\beta-\alpha)k<0$, then
$$
\begin{aligned}
&\sup_{n\ge1}3^{(\beta-\alpha)n}\sum_{w\in W_n}
{\sum_{\mbox{\tiny
$
\begin{subarray}{c}
p,q\in V_w\\
|p-q|=2^{-1}\cdot3^{-n}
\end{subarray}
$
}}}
(u(p)-u(q))^2\\
\lesssim&\sup_{n\ge1}3^{(\alpha+\beta)n}\int_K\int_{B(x,3^{-n})}(u(x)-u(y))^2\nu(\mathrm{d} y)\nu(\mathrm{d} x).
\end{aligned}
$$
To show that LHS$\gtrsim$RHS, by the proof of Theorem \ref{SC_con_thm_equiv2}, we still have Equation (\ref{SC_con_eqn_equiv2_3}). Then
$$
\begin{aligned}
&\sup_{n\ge2}3^{(\alpha+\beta)n}\int_K\int_{B(x,c3^{-n})}(u(x)-u(y))^2\nu(\mathrm{d} y)\nu(\mathrm{d} x)\\
\lesssim&\sup_{n\ge2}\sum_{k=n}^\infty4^{k-n}\cdot3^{\beta n-\alpha k}\sum_{w\in W_k}
{\sum_{\mbox{\tiny
$
\begin{subarray}{c}
p,q\in V_w\\
|p-q|=2^{-1}\cdot3^{-k}
\end{subarray}
$
}}}
(u(p)-u(q))^2\\
&+\sup_{n\ge2}3^{(\beta-\alpha)n}\sum_{w\in W_{n-1}}
{\sum_{\mbox{\tiny
$
\begin{subarray}{c}
p,q\in V_w\\
|p-q|=2^{-1}\cdot3^{-(n-1)}
\end{subarray}
$
}}}
(u(p)-u(q))^2\\
\lesssim&\sup_{n\ge2}\sum_{k=n}^\infty4^{k-n}\cdot3^{\beta(n-k)}\left(\sup_{k\ge1}3^{(\beta-\alpha)k}\sum_{w\in W_k}
{\sum_{\mbox{\tiny
$
\begin{subarray}{c}
p,q\in V_w\\
|p-q|=2^{-1}\cdot3^{-k}
\end{subarray}
$
}}}
(u(p)-u(q))^2\right)\\
&+\sup_{n\ge1}3^{(\beta-\alpha)n}\sum_{w\in W_n}
{\sum_{\mbox{\tiny
$
\begin{subarray}{c}
p,q\in V_w\\
|p-q|=2^{-1}\cdot3^{-n}
\end{subarray}
$
}}}
(u(p)-u(q))^2\\
\lesssim&\sup_{n\ge1}3^{(\beta-\alpha)n}\sum_{w\in W_n}
{\sum_{\mbox{\tiny
$
\begin{subarray}{c}
p,q\in V_w\\
|p-q|=2^{-1}\cdot3^{-n}
\end{subarray}
$
}}}
(u(p)-u(q))^2.
\end{aligned}
$$
\end{proof}
We have the following properties of Besov spaces for large exponent.
\begin{mycor}\label{SC_con_cor_chara}
$B^{2,2}_{\alpha,\beta^*}(K)=\myset{\text{constant functions}}$ but $B^{2,\infty}_{\alpha,\beta^*}(K)$ is uniformly dense in $C(K)$. $B^{2,2}_{\alpha,\beta}(K)=B^{2,\infty}_{\alpha,\beta}(K)=\myset{\text{constant functions}}$ for all $\beta\in(\beta^*,+\infty)$.
\end{mycor}
\begin{proof}
By Theorem \ref{SC_con_thm_BM} and Theorem \ref{SC_con_thm_Besov}, $B^{2,\infty}_{\alpha,\beta^*}(K)$ is uniformly dense in $C(K)$. Assume that $u\in C(K)$ is non-constant. Then there exists $N\ge1$ such that $a_N(u)>0$. By Theorem \ref{SC_con_thm_monotone1}, for all $\beta\in[\beta^*,+\infty)$, we have
\begin{align*}
&\sum_{n=1}^\infty3^{(\beta-\beta^*)n}a_n(u)\ge\sum_{n=N+1}^\infty3^{(\beta-\beta^*)n}a_n(u)\\
\ge&C\sum_{n=N+1}^\infty3^{(\beta-\beta^*)n}a_N(u)=+\infty,
\end{align*}
for all $\beta\in(\beta^*,+\infty)$, we have
\begin{align*}
&\sup_{n\ge1}3^{(\beta-\beta^*)n}a_n(u)\ge\sup_{n\ge N+1}3^{(\beta-\beta^*)n}a_n(u)\\
\ge&C\sup_{n\ge N+1}3^{(\beta-\beta^*)n}a_N(u)=+\infty.
\end{align*}
By Lemma \ref{SC_con_lem_equiv} and Proposition \ref{SC_con_prop_equiv_local}, we have $B^{2,2}_{\alpha,\beta}(K)=\myset{\text{constant functions}}$ for all $\beta\in[\beta^*,+\infty)$ and $B^{2,\infty}_{\alpha,\beta}(K)=\myset{\text{constant functions}}$ for all $\beta\in(\beta^*,+\infty)$.
\end{proof}
\section{Proof of Theorem \ref{SC_con_thm_hk}}\label{SC_con_sec_hk}
We use effective resistance as follows.
Let $(M,d,\mu)$ be a metric measure space and $(\mathcal{E},\mathcal{F})$ a regular Dirichlet form on $L^2(M;\mu)$. Assume that $A,B$ are two disjoint subsets of $M$. Define \emph{effective resistance} as
$$R(A,B)=\inf\myset{\mathcal{E}(u,u):u|_A=0,u|_B=1,u\in\mathcal{F}\cap C_0(M)}^{-1}.$$
Denote
$$R(x,B)=R(\myset{x},B),R(x,y)=R(\myset{x},\myset{y}),x,y\in M.$$
It is obvious that if $A_1\subseteq A_2$, $B_1\subseteq B_2$, then
$$R(A_1,B_1)\ge R(A_2,B_2).$$
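The notion of effective resistance can be illustrated numerically on a finite resistor network, where it reduces to linear algebra: $R(x,y)=(e_x-e_y)^{T}L^{+}(e_x-e_y)$, with $L$ the graph Laplacian and $L^{+}$ its pseudo-inverse. The following Python sketch is a toy illustration with unit conductances; it does not involve the fractal $K$ of the theorem.

```python
import numpy as np

# Toy illustration of effective resistance on a finite network:
# R(x, y) = (e_x - e_y)^T L^+ (e_x - e_y), with L the graph Laplacian
# and L^+ its Moore-Penrose pseudo-inverse.
def effective_resistance(L, x, y):
    n = L.shape[0]
    e = np.zeros(n)
    e[x], e[y] = 1.0, -1.0
    return float(e @ np.linalg.pinv(L) @ e)

# Path 0 -- 1 -- 2 with unit conductances: two unit resistors in series.
L_path = np.array([[ 1.0, -1.0,  0.0],
                   [-1.0,  2.0, -1.0],
                   [ 0.0, -1.0,  1.0]])
print(effective_resistance(L_path, 0, 2))  # prints ~2.0 (series resistors)
```

The monotonicity stated above is also visible here: enlarging either terminal set can only lower the resistance.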
\begin{proof}[Proof of Theorem \ref{SC_con_thm_hk}]
First, we show that
$$R(x,y)\asymp|x-y|^{\beta^*-\alpha}\text{ for all }x,y\in K.$$
By Lemma \ref{lem_SC_holder}, we have
$$(u(x)-u(y))^2\le c\mathcal{E}_{\mathrm{loc}}(u,u)|x-y|^{\beta^*-\alpha}\text{ for all }x,y\in K,u\in\mathcal{F}_{\mathrm{loc}},$$
hence
$$R(x,y)\lesssim|x-y|^{\beta^*-\alpha}\text{ for all }x,y\in K.$$
On the other hand, we claim
$$R(x,B(x,r)^c)\asymp r^{\beta^*-\alpha}\text{ for all }x\in K,r>0\text{ with }B(x,r)^c\ne\emptyset.$$
Indeed, fix $C>0$. If $u\in\mathcal{F}_{\mathrm{loc}}$ satisfies $u(x)=1$, $u|_{B(x,r)^c}=0$, then $\tilde{u}:y\mapsto u(x+C(y-x))$ satisfies $\tilde{u}\in\mathcal{F}_{\mathrm{loc}}$, $\tilde{u}(x)=1$, $\tilde{u}|_{B(x,Cr)^c}=0$. By Theorem \ref{SC_con_thm_BM}, it is obvious that
$$\mathcal{E}_{\mathrm{loc}}(\tilde{u},\tilde{u})\asymp C^{-(\beta^*-\alpha)}\mathcal{E}_{\mathrm{loc}}(u,u),$$
hence
$$R(x,B(x,Cr)^c)\asymp C^{\beta^*-\alpha}R(x,B(x,r)^c).$$
Hence
$$R(x,B(x,r)^c)\asymp r^{\beta^*-\alpha}.$$
For all $x,y\in K$, we have
$$R(x,y)\ge R(x,B(x,|x-y|)^c)\asymp|x-y|^{\beta^*-\alpha}.$$
Then, we follow a standard analytic approach as follows. First, we obtain Green function estimates as in \cite[Proposition 6.11]{GHL14}. Then, we obtain heat kernel estimates as in \cite[Theorem 3.14]{GH14a}. Note that since we are dealing with a compact set, the final estimates only hold for finite time $t\in(0,1)$.
\end{proof}
\newpage
\fancyhead[RE,LO]{\textit{BIBLIOGRAPHY}}
\bibliographystyle{plain}
\def$'${$'$}
\section{Introduction}
\subsection{Contribution of this report}
Civil aviation is considered to have been responsible for about 2\% of anthropogenic carbon dioxide $(CO_2)$ emissions in 1990 \cite{IPCC1999}, and this percentage is widely believed to have increased since then. This report shows that significant reduction in fuel use, and consequently in $CO_2$ emissions, could be achieved by the adoption of `free flight' type of trajectories in the Terminal Manoeuvring Area (TMA) of an airport, under the control of an algorithm which optimises the trajectories of all the aircraft within the TMA simultaneously while maintaining safe separation.
The report outlines the technical problem formulation and solution methodology, and describes the (ground-based) computing technology required to solve the problem quickly enough to make the approach feasible. It also discusses briefly changes in on-board equipment and procedures which would be required.
Typical approach and departure trajectories generated by the algorithm are presented and discussed, including special cases such as an aircraft landing with low fuel reserve.
A major part of the report is concerned with demonstrating that our proposed method would give substantial fuel savings in realistic scenarios. For this purpose we have taken traffic data from the vicinity of London Gatwick Airport over a 24-hour period, estimated the fuel used in actual approach and departure trajectories, and used our algorithm to generate alternative trajectories for the same aircraft. Examination of 4 hours out of the 24, including periods of both low and high traffic density, indicates that a fuel-use reduction of about 30\% can be achieved over this duration --- specifically, a reduction of over 14 tonnes of fuel (kerosene), equivalent to over 46 tonnes of $CO_2$. These results have been obtained with inclusion of a realistic representation of uncertainty, principally in the form of a random wind and turbulence field. Aircraft characteristics have been taken from the BADA database~\cite{BADA}.
One feature of our approach is that the objectives being optimised can be easily re-defined. For example, noise pollution over heavily-populated areas can be reduced by penalising trajectories that overfly such areas at low altitudes. We demonstrate this capability, with reference to population centres in the vicinity of London Gatwick Airport. More radical re-definition of objectives is also possible; for example, instead of reducing fuel use, the same approach could be used to increase traffic density and runway utilisation.
\subsection{Potential for fuel and emissions reductions in aircraft operations}
Work carried out by the Society of British Aerospace Companies and NATS in the UK has identified areas where savings in $CO_2$ emissions could be obtained~\cite{NATS09}. Savings of 10\% in $CO_2$ emissions were targeted from operational changes in air-traffic management. These savings were broken down between four flight modes:
\begin{itemize}
\item 3.25\% Climb: smooth continuous climb departures are more fuel efficient than stepped climb. Management of other aircraft to allow space for such climbing is needed.
\item 1.5\% Cruise: although generally well optimised, further savings can be found via improving matching of aircraft performance characteristics to route decisions.
\item 4.75\% Descent: continuous descent approaches, where aircraft approach the runway in a long glide, offer significant fuel savings when compared to descent in bursts with levelling out. Reduction in time spent holding also contributes significantly to this.
\item 0.5\% Ground Operations: reducing delays on stands and taxiways.
\end{itemize}
Of the planned savings, 80\% were to be gained from the climb and descent phases of each flight. In continuous climbs or descents the aim is to minimise the amount of time aircraft spend holding at fixed altitudes. In a congested airspace, particularly around the terminal manoeuvring area (TMA) of an airport, achieving this is extremely difficult for existing air-traffic management systems and procedures. This motivates our focus on operations within a TMA in this report.
\subsection{Trajectory Optimisation in a TMA}
Many existing proposals for multi-aircraft trajectory optimisation focus on cruise-like conditions where trajectories can be confined to two dimensions, or movement in the third dimension is highly discouraged for reasons of passenger comfort~\citep{HPS02,GT00,BP00}. Some work has been done using a novel conic approximation of the TMA to extend a 2-dimensional approach~\citep{PKMKZ08}. Methods such as mixed integer linear programming (MILP) require a linearisation of the problem and are too slow on centralised problems with large numbers of vehicles, due to the number of binary variables required~\citep{RH02}.
Works more specific to the TMA often focus primarily on runway ordering optimisation through dynamic programming techniques~\citep{MPB10}. These techniques can include stochastic elements to account for uncertainty within the problem~\citep{S12}. The work of \citep{K12} compares direct and indirect optimisation methods for a single aircraft's landing trajectory, with the possible use of a dynamic programming technique to allow real-time operation. That work generated Shortest and Fastest Continuous Descent Approach (SF-CDA) paths that reduce the noise annoyance and fuel consumption of commercial aircraft.
This report tackles the TMA problem by combining the use of model predictive control (MPC) with sequential Monte Carlo (SMC) optimisation. In model predictive control an (open loop) optimal control problem, defined over a finite prediction horizon, is solved at each time step \citep{M02}. The solution is a trajectory of control actions, defined over the horizon. An initial segment of this solution is applied to the system being controlled (in our case, the aircraft in the TMA), then when new state measurements have been obtained a new solution is obtained and the process is repeated indefinitely. More details of MPC are given in section \ref{sec:MPC}.
A paramount concern of air-traffic management is the maintenance of safe separation of aircraft. Separation constraints are inherently non-convex: there may exist two safe trajectories, each going either side of an exclusion region for example, but a convex (weighted-average) combination of these trajectories is not necessarily safe, because it may penetrate the exclusion region. Optimisation problems with non-convex constraints (and/or non-convex objective functions) are known as `non-convex' problems, and are inherently difficult to solve~\citep{BV04}, because they can have multiple local optima, and a `go downhill' search strategy may well result in becoming stuck at one of those local minima.
Stochastic optimisation algorithms address the problem of non-convex optimisation. They do so by `learning' from exploration where good solutions lie in the search space and concentrating further exploration there, but with occasional large jumps to probe whether a more promising region should be explored. Well-known examples of stochastic optimisation methods are `simulated annealing'~\citep{KirGelVec83} and `genetic algorithms'~\citep{Gol89}, while a less well-known one is `sequential Monte Carlo'~\citep{DFG01}, which is the one we employ in this study. A `nice to have' feature of most stochastic optimisation algorithms is that they guarantee to find a solution arbitrarily close to the global optimum (or set of global optima, in case there is more than one), \emph{if} one runs the algorithm for sufficiently many iterations. A further and important advantage of stochastic optimisation methods is that the solutions found have a degree of robustness in cases where the problem to be solved itself contains some randomness (in our case: random wind and turbulence, uncertain aircraft parameters, etc).
The use of stochastic optimisation, and specifically SMC algorithms, in the context of MPC and with application to planning trajectories for aircraft and UAV's has previously been proposed in~\citep{KML08,EM11,VGS11}. In these and other studies it has been noted that conventional implementation of the SMC algorithm resulted in computation times which are too long for real-time use. However, Monte Carlo methods are known to have a highly-parallelisable structure, and very
significant speed-ups for statistical, chemical and economic applications have been demonstrated through the use of Graphical Processor Units (GPU's)~\citep{LYGDH10,FVR07,R00,SPFHTS07}.
Previous work by the authors also demonstrated a 98\% computational speed-up for implementing SMC on a GPU for \emph{en-route} conflict resolution~\citep{EMCL13}.
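The parallelisable structure is easy to see in code: the per-particle cost evaluations in SMC are mutually independent, so they can be evaluated simultaneously. In the illustrative Python sketch below, numpy vectorisation stands in for a GPU kernel; the cost function is a made-up example, not the report's objective.

```python
import numpy as np

# Per-particle cost evaluations in SMC are independent, so they map
# naturally onto SIMD/GPU hardware.  Here numpy vectorisation stands in
# for a GPU kernel: one call evaluates the cost for all particles at once.
def cost(u):
    return (u - 2.0) ** 2 + 0.1 * np.sin(5.0 * u)

particles = np.random.default_rng(1).uniform(-10, 10, size=50_000)

costs_loop = np.array([cost(p) for p in particles])  # serial evaluation
costs_vec = cost(particles)                          # 'parallel' evaluation

assert np.allclose(costs_loop, costs_vec)            # identical results
```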
\section{Model Predictive Control}
\label{sec:MPC}
Model Predictive Control (MPC), also known as Receding Horizon Control, is an optimisation-based control strategy. A constrained, finite-horizon, optimal control problem is solved online at each iteration based on the available state/measurements. This solution yields the controls for the duration of a finite horizon.
\input{mpc_block_diagram}
Figure \ref{fig:mpc_block_diagram} shows the components of a typical Model Predictive Control (MPC) feedback system~\cite{M02}. At the top, labelled ``Setup'', are the components that define the controller: a \emph{model} which will be used to generate the predictions needed by the optimiser, an \emph{objective function} which will be minimised by the optimiser, and \emph{constraints} which need to be satisfied by the solution generated by the optimiser. Below this the block labelled ``Online tasks'' shows what needs to be executed at each sampling/update interval:
\begin{enumerate}
\item Measurements are obtained from sensors located on the plant which is being controlled, and used by an observer to estimate the plant state vector (not always necessary).
\item A specific optimal control problem is defined, which depends on the current (estimated) plant state, using all the ingredients specified in the ``Setup''.
\item This optimal control problem is solved by an optimisation algorithm. Usually this computes a whole sequence of future input commands. Only the first of these is applied to the plant being controlled. Note that the optimisation problem is time-critical; it needs to be solved within the control-update interval.
\end{enumerate}
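The online tasks above can be sketched as a minimal receding-horizon loop in Python. This is a toy illustration only: the plant is a scalar integrator, and exhaustive search over a small discrete control set stands in for the optimiser; all names and values are illustrative.

```python
from itertools import product

# Minimal receding-horizon (MPC) loop on a toy scalar plant x+ = x + u.
# At each step: (1) measure the state, (2) pose a finite-horizon problem,
# (3) optimise over control sequences, then apply only the first control.
def predict(x, u_seq):
    xs = []
    for u in u_seq:
        x = x + u          # trivial prediction model
        xs.append(x)
    return xs

def mpc_step(x, horizon=3, controls=(-1.0, 0.0, 1.0)):
    best_seq, best_cost = None, float("inf")
    for u_seq in product(controls, repeat=horizon):      # brute-force 'optimiser'
        cost = sum(xk ** 2 for xk in predict(x, u_seq))  # drive state to 0
        if cost < best_cost:
            best_cost, best_seq = cost, u_seq
    return best_seq[0]      # apply only the first control of the solution

x = 5.0
for _ in range(10):
    x = x + mpc_step(x)     # here the 'plant' happens to match the model
print(x)                    # prints 0.0: the state is regulated to the origin
```

In the report's setting the brute-force search is replaced by the SMC optimiser of the next section, and the plant/model mismatch (wind, turbulence) makes the repeated re-optimisation essential.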
On the right of Fig.\ref{fig:mpc_block_diagram}, labelled ``Real world'', is the plant being controlled. Disturbances acting on this plant are shown explicitly, in order to emphasise that the plant never behaves exactly as predicted by the model.
In this work's application to air-traffic management, the `plant' will be the whole set of $N$ aircraft in the airspace being controlled. The state of each aircraft will be a vector of $n=6$ components: its position in 3-D space, its airspeed, its heading, and its mass (the latter varying because of fuel consumption). The control applied to each aircraft will be a vector of $m=3$ components: the engine thrust, the climb angle and the bank angle. Thus the complete `plant' will have a state vector with $nN$ components and an input vector with $mN$ components. The disturbances acting on the `plant' will be primarily the wind/turbulence acting on each aircraft, but these can also represent the effects of modelling errors when making predictions.
The prediction model used consists of standard, relatively simple, nonlinear difference equations for each aircraft, arising from the time discretisation of the problem. If measurements of all the aircraft states are taken at time step $k$, then we use our model to provide predictions at the following $H$ steps $k+1,k+2,\ldots,k+H$. $H$ is called the `prediction horizon'. These predictions depend on predicted values of the $m$ controls applied to each aircraft over $H$ steps, and it is the role of the optimiser to find the best values of these. Many alternative forms of optimisation have been used with MPC; in simpler systems, linear or quadratic programming is often sufficient. In this report Sequential Monte Carlo optimisation is used to solve for the applied control sequence.
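As a concrete illustration of the kind of difference equations involved, the Python sketch below advances a simplified point-mass aircraft state (position, airspeed, heading, mass) one time step under the three controls (thrust, climb angle, bank angle). This is a hedged sketch: the drag model, fuel-flow coefficient and numerical values are illustrative placeholders, not the BADA parameters used in the report.

```python
import math

# One Euler step of a simplified point-mass aircraft model.
# State: (x, y, h, v, psi, m); controls: thrust T, climb angle gamma,
# bank angle phi.  Drag and fuel-flow models are placeholders.
G = 9.81

def step(state, T, gamma, phi, dt=1.0, cd=0.02, fuel_coeff=1e-5):
    x, y, h, v, psi, m = state
    x += v * math.cos(psi) * math.cos(gamma) * dt
    y += v * math.sin(psi) * math.cos(gamma) * dt
    h += v * math.sin(gamma) * dt
    psi += (G / v) * math.tan(phi) * dt      # coordinated-turn heading rate
    drag = cd * v ** 2                       # placeholder drag model
    v += ((T - drag) / m - G * math.sin(gamma)) * dt
    m -= fuel_coeff * T * dt                 # fuel burn proportional to thrust
    return (x, y, h, v, psi, m)

s0 = (0.0, 0.0, 1000.0, 150.0, 0.0, 60000.0)
s1 = step(s0, T=120000.0, gamma=0.05, phi=0.0)  # climbing, wings level
```

Iterating `step` over the horizon yields the predicted trajectory used by the optimiser.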
\section{Sequential Monte Carlo Optimisation}
A Sequential Monte Carlo method approximates the probability distribution $p(\xi_k|\zeta_1,\zeta_2,\ldots,\zeta_k)$ of some (vector) variable $\xi_k$ at time step $k$, conditional on observations $\zeta_1,\zeta_2,\ldots,\zeta_k$ which have been made up to that time. It assumes that $\xi_k$ is the value of a Markovian random process: $p(\xi_k|\xi_1,\xi_2,\ldots,\xi_{k-1})=p(\xi_k|\xi_{k-1})$, and that the probability of an observation $p(\zeta_k|\xi_k)$ depends only on $\xi_k$. It does this by propagating, by simulation, a large number $L$ of samples of $\xi_k$, and approximating the distribution by a `histogram' of their values. Each of these samples is often referred to as a `particle', hence an alternative name for SMC is `particle filter', and in general it estimates the state of a `hidden Markov model'~\cite{DFG01},\cite[chapter~14]{RC99}.
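As a generic illustration of the propagate--weight--resample cycle (not the report's optimiser, which is detailed in the following subsections), a minimal bootstrap particle filter step might look like the following; the `transition` and `likelihood` arguments stand in for $p(\xi_k|\xi_{k-1})$ and $p(\zeta_k|\xi_k)$.

```python
import random

# Minimal bootstrap particle filter step (a generic illustration):
# propagate L samples through the Markov transition, re-weight them by
# the observation likelihood, then resample so that high-weight
# particles are duplicated and low-weight ones discarded.

def particle_filter_step(particles, weights, transition, likelihood, obs):
    # Propagate each particle through p(xi_k | xi_{k-1}).
    particles = [transition(p) for p in particles]
    # Re-weight by p(zeta_k | xi_k) and normalise.
    weights = [w * likelihood(obs, p) for w, p in zip(weights, particles)]
    total = sum(weights)
    weights = [w / total for w in weights]
    # Multinomial resampling: draw a new population of the same size.
    particles = random.choices(particles, weights=weights, k=len(particles))
    weights = [1.0 / len(particles)] * len(particles)
    return particles, weights
```

After resampling, the `histogram' of particle values approximates $p(\xi_k|\zeta_1,\ldots,\zeta_k)$ with uniform weights.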
When SMC is used to solve the MPC problem described in section \ref{sec:MPC}, a `hidden state' $\xi_k$ is the optimal solution $\mathbf{u}_k^*$, and an `observation' $\zeta_k$ is a value $J_{k:k+H}(x_k,\mathbf{u}_k)$ of the cost that results from a trial solution $\mathbf{u}_k$, together with the corresponding set of aircraft trajectories $\hat{x}_{k+j|k}$, $j=1,\ldots,H$ --- these are needed to check whether any constraint violations are predicted to occur. $L$ needs to be chosen sufficiently large for the approximate distribution of $\xi_k$ to be sufficiently concentrated about the global optimum (or set of global optima) to be useful --- that is, that any sample drawn from this distribution is very likely to be very close to a global optimiser. See~\cite{VGS11} for a precise account of how an augmented statistical problem can be set up such that an optimisation problem is re-cast as an inference problem, in a context very similar to the one considered in this report.
\subsection{Algorithm}
\begin{algorithm*}[htb]
\caption{Sequential Monte Carlo}
\label{alg:SMC}
\begin{algorithmic}[1]
\STATE $J \leftarrow 0$
\STATE Define a monotonically increasing SampleSchedule of length $J_{\max}$
\STATE Clone all aircraft's current states to give each particle its own copy.
\STATE Set the weights of all aircraft to $1/L$, where $L$ is the number of particles.
\STATE For each particle, and for each aircraft in the particle, randomly generate controls over the entire horizon $H$ \label{alg:initial}
\WHILE {$J\leq J_{\max}$}
\STATE $j \leftarrow 0$
\FOR {each particle $l$}
\WHILE {$j\leq$ SampleSchedule(J)} \label{alg:inner1}
\STATE Sample disturbance realisations for all agents and all time steps to horizon $H$.\label{alg:disturb}
\STATE Simulate trajectories for all agents till horizon $H$.\label{alg:plan}
\IF {an aircraft $i$ fails constraints} \label{alg:constraint}
\STATE Set aircraft $i$'s weight to 0.
\ENDIF
\STATE Calculate each aircraft's cost.
\STATE Scale each aircraft's weightings by their cost. \label{alg:weighting}
\STATE $j \leftarrow j+1$
\ENDWHILE \label{alg:inner2}
\ENDFOR
\STATE $J \leftarrow J+1$
\IF {$J \leq J_{\max}$}
\STATE Resample all particles \label{alg:resample}
\STATE Perturb all aircraft's controls with Gaussian white noise.\label{alg:perturb}
\STATE Set the weights of all agents to $1/L$ where $L$ is the number of particles.
\ENDIF
\ENDWHILE
\STATE Draw final sample from particle population. \label{alg:final}
\end{algorithmic}
\end{algorithm*}
Algorithm \ref{alg:SMC} summarises the implementation of SMC. Each step of the algorithm is now discussed in further depth, with specific reference to the application of multiple-aircraft trajectory planning with conflict avoidance.
\subsection{Particles}
\label{sec:particle}
A particle represents a single instance of the problem with the data of all associated agents. The number of particles used in the optimisation is a design variable dependent on the complexity of the problem: the more complex the problem, the larger the number of particles may need to be to adequately characterise the search space. Each particle contains:
\begin{itemize}
\item An initial state for each of the $N$ individual aircraft. These initial states for aircraft are the same across all particles and are provided to the optimisation at the beginning of the algorithm.
\item The control inputs for every step of the MPC horizon $H$ for every aircraft. In this report's formulation there are 3 controls for each aircraft (thrust, bank angle and climb angle), totalling $3NH$ control inputs. These control inputs are different for every particle.
\item A separate weighting for each of the $N$ individual aircraft in the scenario, used for resampling.
\end{itemize}
The controls inside particles are initialised as random samples from a uniform distribution for each control between its minimum and maximum values. After this initialisation at line~\ref{alg:initial}, the controls are updated following resampling and perturbation in lines~\ref{alg:resample}--\ref{alg:perturb}. The perturbation moves particles locally in the search space, whilst resampling causes particles to cluster around areas of the search space with positive attributes as determined by the objective function. The weights of all aircraft are initialised as $1/L$, where $L$ is the number of particles, and these are updated in line~\ref{alg:weighting}.
\subsection{Trajectory Planning and Disturbance Realisations}
In lines~\ref{alg:disturb}--\ref{alg:plan} the future trajectory of each aircraft in each particle is simulated given the initial state and controls (from the particle's data) and the disturbance samples. The discretised aircraft dynamics model is a standard point-mass model with 6 states and 3 inputs:
\begin{subequations}
\begin{align}
x_i(k+1)=&x_i(k)+\delta t (v_{s,i}(k) \cos(\chi_i(k))\cos(\gamma_i(k)))+w_x(k)\delta t \\
y_i(k+1)=&y_i(k)+\delta t(v_{s,i}(k) \sin(\chi_i(k))\cos(\gamma_i(k)))+w_y(k)\delta t \\
z_i(k+1)=&z_i(k)+\delta t(v_{s,i}(k) \sin(\gamma_i(k)))\\
v_{s,i}(k+1)=&v_{s,i}(k)+\delta t(\frac{T_i(k)-D_i(k)}{m_i(k)}-g\sin(\gamma_i(k))) \\
\chi_i(k+1)=&\chi_i(k)+\delta t(\frac{L_i(k)\sin(\phi_i(k))}{m_i(k)v_{s,i}(k)})\\
m_i(k+1)=&m_i(k)-\delta t(\eta T_i(k))
\end{align}%
\label{eqn:hor}%
\end{subequations}%
where: $\delta t$ is the step length; $x, y$ and $z$ are the Cartesian coordinates of the aircraft ($z$ acting as the altitude of the aircraft); $m$ is the total mass of the aircraft; $v_s$ is the true airspeed; $\chi$ is the heading angle; $\gamma$ the climb angle and $\phi$ the bank angle. The subscript $i$ denotes the $i^{\mathrm{th}}$ aircraft. The control variables are $\phi$ (bank), $T$ (thrust) and $\gamma$ (climb). Lift $L$ and drag $D$ are calculated using the standard aerodynamic relations. The fuel usage is controlled by the constant $\eta$.
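Equation~\ref{eqn:hor} maps directly onto a one-step simulator. The sketch below assumes angles in radians, and takes lift, drag and the wind realisation as inputs computed elsewhere; the function and argument names are illustrative choices, not the report's code.

```python
import math

# One Euler step of the point-mass dynamics in Equation (eqn:hor).
# Angles are in radians; lift L_f, drag D_f, wind (w_x, w_y) and the
# fuel-flow constant eta are supplied by the caller.

def aircraft_step(state, controls, forces, wind, dt, eta, g=9.81):
    x, y, z, v_s, chi, m = state
    T, gamma, phi = controls          # thrust, climb angle, bank angle
    L_f, D_f = forces                 # lift and drag
    w_x, w_y = wind
    x1 = x + dt * v_s * math.cos(chi) * math.cos(gamma) + w_x * dt
    y1 = y + dt * v_s * math.sin(chi) * math.cos(gamma) + w_y * dt
    z1 = z + dt * v_s * math.sin(gamma)
    v1 = v_s + dt * ((T - D_f) / m - g * math.sin(gamma))
    chi1 = chi + dt * L_f * math.sin(phi) / (m * v_s)
    m1 = m - dt * eta * T
    return (x1, y1, z1, v1, chi1, m1)
```

In straight, level, thrust-balanced flight ($\gamma=\phi=0$, $T=D$) the airspeed, altitude and heading are unchanged and only the position and fuel mass evolve, as expected from the equations.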
Each aircraft is given an initial state $X_{0,i}=(x_{0,i},y_{0,i},z_{0,i},\chi_{0,i},v_{s,0,i},m_{0,i})$. Departing aircraft are modelled from an initial altitude of 400m, where the aircraft is assumed to have taken off from the runway and completed its initial climb. A departing aircraft is considered as having left the TMA once it has reached a set distance $D_{TMA}$ from the airport at which point it is assumed to enter a neighbouring control zone and is handled by alternative ATC.
Arrival aircraft are allowed the flexibility to enter the problem at any point along the TMA's boundary. Arrival aircraft are deemed to have `landed' or passed from our control once they have satisfied the following constraints:
\begin{subequations}
\begin{align}
\sqrt{x_i(k)^2+y_i(k)^2}\leq& P_{\mathrm{runway}}\\
\beta_i(x_i(k),y_i(k),z_i(k))\leq& P_{\beta}\\
|\arctan(y_i(k)/x_i(k))|\leq& P_{\chi}\label{eqn:cone}\\
|\chi_i(k)-180|\leq& P_{\chi}\label{eqn:heading}\\
v_{s,i}(k)\leq& P_{v_s}\label{eqn:landspeed}
\end{align}
\label{eqn:TO_init}
\end{subequations}
The first three of these constraints define the landing sector, an arc with radius $P_{\mathrm{runway}}$, horizontal angle $P_{\chi}$ and vertical angle $P_{\beta}$, where $\beta_i$ is defined later in Equation~\ref{eqn:flow}. Constraint \ref{eqn:heading} ensures that the aircraft is pointing towards the runway, assuming an East-to-West landing without significant crosswind. Finally, constraint \ref{eqn:landspeed} limits the airspeed $P_{v_s}$ at which the aircraft can enter the landing sector for a successful landing. These are shown pictorially in Figure~\ref{fig:envelope}.
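A sketch of the landing test of Equation~\ref{eqn:TO_init}: angles are in degrees, $\beta_i$ is passed in as a function, and `atan2` replaces $\arctan(y/x)$ to avoid the singularity at $x=0$ (an implementation choice, not part of the report's formulation).

```python
import math

# Check the landing-sector constraints of Equation (eqn:TO_init).
# Angles in degrees; `beta` is the approach-angle function of
# Equation (eqn:flow), supplied by the caller.

def has_landed(x, y, z, chi, v_s, beta,
               P_runway, P_beta, P_chi, P_vs):
    in_range = math.hypot(x, y) <= P_runway            # within the arc radius
    in_glide = beta(x, y, z) <= P_beta                 # within vertical angle
    in_cone = abs(math.degrees(math.atan2(y, x))) <= P_chi  # within horizontal angle
    aligned = abs(chi - 180.0) <= P_chi                # heading towards runway
    slow = v_s <= P_vs                                 # below landing speed
    return in_range and in_glide and in_cone and aligned and slow
```

All five conditions must hold simultaneously; an aircraft that is inside the sector but too fast is not deemed to have landed.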
\begin{figure}[ht]
\begin{center}
\def12cm{14cm}
\includegraphics[width=12cm,viewport=1.7in 4.0in 10.8in 6.0in,clip=true]{landingEnvelope.pdf}
\caption{Landing Envelope Shape and Size}
\label{fig:envelope}
\end{center}
\end{figure}
The disturbance realisations added to the system are the primary method of simulating uncertainty within the system. This uncertainty can come from wind, sensor noise, controller noise and human factors. Within the inner loop of optimisation (lines \ref{alg:inner1}-\ref{alg:inner2}) each aircraft within a particle will have drawn $\mathrm{SampleSchedule}(J)$ disturbance realisations and planned the same number of trajectories. This inner loop serves to simulate different disturbance scenarios on an aircraft to determine if the controls are valid with respect to constraints, and their fitness with respect to the objective function. This information is stored in the aircraft's weight within the particle as described in the next two subsections. In applications with no uncertainty there would be no need for repeated planning and simulation of the aircraft and the inner loop would only be executed once. In this report we have restricted the disturbances to being wind and this is discussed in greater depth in Section~\ref{sec:wind}.
\subsection{Constraint Handling}
\label{sec:constraints}
There are two types of constraint handled by line~\ref{alg:constraint}. The first are unary constraints which deal only with one aircraft at a time. Examples of these include flight envelope constraints and the minimum aircraft mass constraint.
The flight envelope constraints provide upper and lower bounds for both the state and control variables:
\begin{subequations}
\begin{align}
z_{\mathrm{min}}\leq z(t)&\leq z_{\mathrm{max}}\\
-\gamma_{\mathrm{max}}\leq \gamma(t)&\leq \gamma_{\mathrm{max}} \\
-\phi_{\mathrm{max}}< \phi(t) &< \phi_{\mathrm{max}}\\
T_{\mathrm{min}} \leq T(t) &\leq T_{\mathrm{max}}\\
v_{\mathrm{min}} \leq v_s(t) &\leq v_{\mathrm{max}}.
\end{align}
\end{subequations}
The minimum aircraft mass constraint enforces that the total aircraft mass must always remain above the mass of the aircraft alone without fuel:
\begin{equation}
m(t)\geq m_{\mathrm{Aircraft}}.
\end{equation}
This acts as a constraint on the total fuel use across the entire modelled flight. Further unary constraints can be added, for example to create exclusion zones where aircraft are not allowed to fly, or must fly above a certain altitude. These are not included in the existing model but can be implemented in future work.
The second type of constraint is the binary constraint, which holds between two aircraft in the same particle, as in conflict avoidance. In this report the protection zone around each aircraft is modelled as a cylinder with horizontal radius $P_r$ and altitude separation $P_h$. Two aircraft $i$ and $j$ avoid each other if they satisfy the following constraint at every time step in the MPC horizon:
\begin{align}
\nonumber (x_{i}(k)-x_{j}(k))^2+(y_{i}(k)-y_{j}(k))^2 &\geq (2P_r)^2 \\
\nonumber \vee |z_{i}(k)-z_{j}(k)| &\geq 2P_h \\
\forall k, \forall i,j &\in \{1,...,N\}: i\neq j.
\label{eqn:avoidance}
\end{align}
If an aircraft fails any of its unary constraints then its weight stored in the particle is set to 0. If a pair of aircraft fail a binary constraint then both aircraft have their weights set to 0. Any aircraft with a weighting of 0 will not be resampled in the resampling phase of the algorithm described in Subsection~\ref{sec:resample}.
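The binary check of Equation~\ref{eqn:avoidance} for one pair of predicted trajectories can be sketched as follows (an illustrative function, with trajectories as lists of $(x,y,z)$ tuples over the horizon):

```python
# Pairwise conflict test for Equation (eqn:avoidance): at each time step
# two aircraft are safe if they are separated horizontally by at least
# 2*P_r or vertically by at least 2*P_h.

def conflict_free(traj_i, traj_j, P_r, P_h):
    for (xi, yi, zi), (xj, yj, zj) in zip(traj_i, traj_j):
        horiz_ok = (xi - xj) ** 2 + (yi - yj) ** 2 >= (2 * P_r) ** 2
        vert_ok = abs(zi - zj) >= 2 * P_h
        if not (horiz_ok or vert_ok):
            return False
    return True
```

If this returns `False` for any pair, both aircraft in the pair have their weights set to 0, as described above.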
The constraint handling method described in this section is a departure from the generic formulation of SMC optimisation. In the generic formulation a weight is linked to a single particle, which contains the entire simulation of all $N$ aircraft. In that case, if a single aircraft fails any constraint the entire simulation is weighted 0 and all controls from that particle are discarded when the particles are resampled in the next iteration. Conversely, in our implementation, aircraft are weighted individually inside a particle. If a single aircraft fails a constraint, the weights of the non-failing aircraft remain non-zero, allowing their controls to continue to the next iteration. The application under consideration is highly constrained, and this alteration gives greater flexibility in keeping valid control solutions for aircraft which would otherwise have been discarded. This in turn allows a smaller number of particles, as the distribution they are required to model is less complex than in the case of a multi-aircraft weighted particle.
\subsection{Particle Costing}
The SMC method requires a non-negative function as the objective function $J_{T}$ which is to be maximised. This objective function is made up of multiple elements and is different for arrival and departure aircraft due to their distinct priorities. However, each element of the objective function for both types of aircraft takes the same basic form: the original minimisation objective is negated, shifted by the supremum of the function, and divided by the difference between the supremum and the infimum. The supremum and infimum are readily available via geometric arguments, as will be explained in greater depth case by case.
\subsubsection{Departure:}
Departing aircraft are given a desired target altitude $z_{t_f,i}$, a desired airspeed $v_{s,D}$ and bearing $\theta_{i,F}$. These are used to evaluate the aircraft's progress in the objective function. However, it is not necessary for the aircraft to have reached these before leaving the TMA, as it can take a while to reach cruise altitude and traffic in the TMA could make reaching the exact target bearing unrealistic.
The departing aircraft's objective function $J^D_{T,i}$ is made up of four elements: reaching the target bearing needed for onward flight to the destination; a cost on fuel used; a reward for climbing towards the required cruise altitude; and a reward for maintaining the desired speed.
\begin{subequations}
\begin{eqnarray}
J^D_{1,i}(k:k+H)=&\frac{-A_i(k:k+H)+ \mathrm{sup} A_i(k:k+H) }{\mathrm{sup}A_i(k:k+H)- \mathrm{inf}A_i(k:k+H) }\\
J^D_{2,i}(k:k+H)=&\frac{(m_i(k)-m_i(k+H))+ \delta t H T_{\mathrm{max}}\eta}{\delta t H T_{\mathrm{max}}\eta}\\
J^D_{3,i}(k:k+H)=&\frac{-B_i(k:k+H)+ \mathrm{sup} B_i(k:k+H) }{\mathrm{sup}B_i(k:k+H)- \mathrm{inf}B_i(k:k+H)}\\
J^D_{4,i}(k:k+H)=&\frac{-C_i(k:k+H)+ \mathrm{sup} C_i(k:k+H) }{\mathrm{sup}C_i(k:k+H)- \mathrm{inf}C_i(k:k+H)}
\end{eqnarray}
\end{subequations}
where:
\begin{subequations}
\begin{align}
A_i(k:k+H)=&\sum^{k+H}_{j=k} \frac{|\theta_i(j)-\theta_{i,F}|}{H}\\
B_i(k:k+H)=&\sum^{k+H}_{j=k} \frac{\sqrt{(z_{t_f,i}-z_i(j))^2}}{H}\\
C_i(k:k+H)=&\sum^{k+H}_{j=k} \frac{|v_{s,i}(j)-v_{s,D}|}{H}
\end{align}
\end{subequations}
$A$ is the absolute angle between the desired bearing of the aircraft leaving the airport and its actual bearing from the airport. $B$ is the distance between the desired cruise altitude and the altitude of the aircraft. $C$ is the difference between the desired and actual airspeed of the aircraft. The sup and inf of $A$ are approximated as the maximum deflection from the desired bearing, 180 degrees, and the minimum deflection, 0 degrees, respectively. Similarly for $B$ the sup is the maximum distance the aircraft can be from its target altitude given its current altitude, obtained by either climbing or descending with $\gamma_{\mathrm{max}}$ and $v_{\mathrm{max}}$. The inf is then the minimum distance the aircraft can be from its target altitude, similarly using $\gamma_{\mathrm{max}}$ and $v_{\mathrm{max}}$, which is 0 once the aircraft can reach the desired altitude. With this formulation, an aircraft $i$ which is on its desired bearing for all steps of the horizon has a cost of 1 for $J^D_{1,i}$, while an aircraft $j$ which is 90 degrees away from its desired bearing for all steps of the horizon has a cost of 0.5 for $J^D_{1,j}$. The cost $J^D_2$ is the fuel minimisation recast as a maximisation; $\delta t H T_{\mathrm{max}}\eta$ is the maximum amount of fuel that could be used over the horizon, and the minimum is approximated as 0.
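The normalisation pattern shared by all the cost elements can be sketched for the bearing term; the function names are illustrative, and the supremum and infimum values (180 and 0 degrees) are those given above.

```python
# Each objective element follows the same normalisation pattern:
# J = (-A + sup A) / (sup A - inf A), mapping "smaller is better"
# onto [0, 1] with 1 the best achievable value.

def normalised_cost(value, sup, inf):
    return (-value + sup) / (sup - inf)

def bearing_cost(bearings, target):
    # Mean absolute bearing error over the horizon, then normalise
    # with sup = 180 degrees and inf = 0 degrees.
    A = sum(abs(b - target) for b in bearings) / len(bearings)
    return normalised_cost(A, sup=180.0, inf=0.0)
```

This reproduces the worked example in the text: an aircraft on its desired bearing throughout scores 1, and one a constant 90 degrees off scores 0.5.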
The four parts of the cost $J^D$ are all normalised and then scaled in importance by the weighting factors $\alpha_j$. These weighting coefficients have been tuned empirically and have a large effect on the behaviour of aircraft. The choice of $\alpha_j$ reflects policy priorities, e.g.\ fuel use or noise abatement.
\begin{subequations}
\begin{eqnarray}
J^D_{T,i}(k:k+H)=&\sum^4_{j=1} \frac{\alpha_j J^D_{j,i}(k:k+H)}{H} \label{eqn:maxcost}\\
0 \leq& J^D_{T,i}(k:k+H) \leq 1 \\
\sum^{4}_{j=1}\alpha_j =& 1
\end{eqnarray}
\end{subequations}
\subsubsection{Arrival}
In contrast to departures, arriving aircraft have far stricter terminal constraints: to land, the aircraft must be travelling towards the runway; be in line with the runway within a heading tolerance; be within a certain distance of the runway; be travelling below a given speed; and be within a set altitude trajectory to accomplish landing. The model does not extend to fully landing the plane and continues to assume pilot control for this aspect of the flight, similar to that of the initial take off.
Arriving aircraft will head towards the airport following an objective function made up of 3 elements. The first is a cost on the fuel used to encourage minimal fuel use; the second rewards the aircraft for following a nominal altitude descent trajectory; and the third rewards the aircraft for keeping its heading close to a nominal flow field designed to encourage the aircraft to turn smoothly before approaching the runway. Such a flow field is entirely designable based on the airport or goals of the ATC for the airport. For this work the airport flow field has been formulated as shown in Figure~\ref{fig:flowfield} and the cost function penalises aircraft for their heading not aligning with the flow field. The use of a nominal altitude descent trajectory rather than relying purely on the minimisation of fuel or a target altitude is justified by the finite horizon of planning which might not be long enough for an aircraft to reach the landing sector. By providing a nominal trajectory the aircraft's descent is smoother and less likely to drop in altitude sharply before having to fly close to the ground till it reaches the landing sector.
\begin{figure}[ht]
\begin{center}
\def12cm{10cm}
\includegraphics[width=12cm,viewport=0.7in 0.1in 6.3in 4.8in,clip=true]{flowfield}
\caption{Example flowfield for a single runway airport with East to West landing pattern}
\label{fig:flowfield}
\end{center}
\end{figure}
The arrival problem is then optimised for the maximisation of the cost $J^A_{T}$ which is the sum of the weighted costs for heading offset from flowfield, fuel usage and nominal altitude descent trajectory following:
\begin{subequations}
\begin{eqnarray}
J^A_{1,i}(k:k+H)=&\frac{-D_i(k:k+H)+ \mathrm{sup} D_i(k:k+H) }{\mathrm{sup}D_i(k:k+H)- \mathrm{inf}D_i(k:k+H) }\\
J^A_{2,i}(k:k+H)=&\frac{(m_i(k)-m_i(k+H))+ \delta t H T_{\mathrm{max}}\eta}{\delta t H T_{\mathrm{max}}\eta}\\
J^A_{3,i}(k:k+H)=&\frac{-E_i(k:k+H)+ \mathrm{sup} E_i(k:k+H) }{\mathrm{sup}E_i(k:k+H)- \mathrm{inf}E_i(k:k+H) }
\end{eqnarray}
\end{subequations}
where:
\begin{subequations}
\begin{align}
D_i(k:k+H)=&\sum^{k+H}_{j=k} \frac{|\chi_i(j)-\hat{\chi}_{i,f}(x_i(j),y_i(j))|}{H}\\
E_i(k:k+H)=&\sum^{k+H}_{j=k} \frac{|\beta_i(x_i(j),y_i(j),z_i(j))-\beta_{i,f}|}{H}
\end{align}
\end{subequations}
and
\begin{equation}
\hat{\chi}_{i,f}(x_i(j),y_i(j))=2\arctan\left (\frac{x_i(j)}{y_i(j)}\right )+90
\end{equation}
\begin{equation}
\beta_i(x_i(j),y_i(j),z_i(j))=\arctan\left(\frac{z_i(j)}{\left(180-2\arctan\left(\frac{x_i(j)}{y_i(j)}\right)\right)\frac{\left\|(x_i(j),y_i(j))\right\|}{\cos(\arctan(x_i(j)/y_i(j)))} }\right)
\label{eqn:flow}
\end{equation}
$D$ is the normalised difference between the flow field's heading and the aircraft's, and $E$ is the normalised difference between the aircraft's angle to the ground origin and the desired angle of approach. The geometric terms $\beta$ and $\hat\chi$ are based on the flow field shown in Figure~\ref{fig:flowfield}, with an east-west runway centred on the origin and arrival flights landing on the eastern end of the runway. This example flow field is used throughout the report for its elegant geometric properties. The flow field is simple to change; doing so would change Equation~\ref{eqn:flow}. The distance from the aircraft to the origin used in $E$ is measured not as the Euclidean distance but as the distance remaining along the arc of the flow field the aircraft is currently on. The point of this design choice is to encourage a constant descent angle for the aircraft throughout its travel to the runway. The Euclidean distance would have led to variations in the angle of descent over the full path for aircraft starting at positions greater than 90 degrees from the desired landing bearing.
Like the departures, the arrivals objective function is designed to be non-negative and the total cost $J^A_T$ and all its constituent costs are normalised between 0 and 1. The weightings on the terms of the cost function $\tilde\alpha$ are subject to priorities of the desired trajectory.
\begin{subequations}
\begin{eqnarray}
J^A_{T,i}(k:k+H)=&\sum^3_{j=1} \frac{\tilde\alpha_j J^A_{j,i}(k:k+H)}{H} \label{eqn:maxcostarr}\\
0 \leq& J^A_{T,i}(k:k+H) \leq 1 \\
\sum^{3}_{j=1}\tilde\alpha_j =& 1
\end{eqnarray}
\end{subequations}
\subsection{Weight Scaling}
\label{sec:weighting}
As previously mentioned in subsection~\ref{sec:particle} each aircraft in a particle has a weight associated with it. This weight records the degree of success an aircraft has in the simulations with various noise realisations. To do this in line~\ref{alg:weighting} the cost of aircraft $i$ in particle $l$ is multiplied by the weight of aircraft $i$ in particle $l$ each time the inner loop \ref{alg:inner1}-\ref{alg:inner2} is executed. An aircraft only needs to violate one constraint in any execution of the inner loop to have its weighting set to 0. This weighting would remain as 0 until resampling (described in the next section) removes that aircraft's proposed controls from the population.
Ideally, at each iteration of the inner loop, the weights associated with aircraft $i$ would be normalised to sum to 1 across all $L$ particles. This would reduce numerical issues as the inner loop progresses; without such a normalisation step the weights are always decreasing:
\begin{subequations}
\begin{align}
W_{i,l}^{j+1}&=W_{i,l}^j J_{T,i,l}^j\\
W_{i,l}^0 &= 1/L \\
0 &\leq J_{T,i,l}^j \leq 1
\end{align}
\end{subequations}
where $W_{i,l}^j$ represents the weight and $J_{T,i,l}^j$ the objective function value of aircraft $i$ in particle $l$ at inner loop iteration $j$, and $L$ is the number of particles. The cost is always between 0 and 1 and the weight starts at less than 1.
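The weight recursion, together with the normalisation across particles suggesteded above as a guard against numerical underflow, can be sketched as follows (an illustrative helper, not the report's code):

```python
# Weight update for one aircraft across all L particles: multiply each
# particle's weight for this aircraft by its cost, then renormalise so
# the weights sum to 1, avoiding numerical underflow over many iterations.

def update_weights(weights, costs):
    # weights[l], costs[l]: weight and cost of this aircraft in particle l.
    new = [w * c for w, c in zip(weights, costs)]
    total = sum(new)
    if total == 0.0:          # every particle violated a constraint
        return new
    return [w / total for w in new]
```

A zero cost (constraint violation) permanently zeroes that particle's weight for the aircraft, so it cannot be selected at resampling.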
\subsection{Particle Resampling and Perturbation}
\label{sec:resample}
In lines~\ref{alg:resample}--\ref{alg:perturb} the aircraft are individually resampled to generate a new population of particles. These new particles have their aircraft controls perturbed by Gaussian white noise to separate particles with similar controls across the local search space. This perturbation also allows particles to explore away from points which were in the original randomly sampled population of controls.
Resampling is done on the basis of sequential importance sampling using a method such as Kitagawa resampling~\citep{K96}. The higher a weight aircraft $i$ has in particle $l$ compared to the weight of aircraft $i$ in all other particles, the more likely it is to have its controls resampled into the new population.
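Systematic (Kitagawa-style) resampling can be sketched as follows; this is a generic sketch of the scheme, and the report's implementation may differ in detail. A single uniform draw stratifies the whole population, so high-weight entries are duplicated and zero-weight entries are eliminated with low variance.

```python
import random

# Systematic resampling: one uniform draw u places L evenly spaced
# positions (i + u)/L over [0, 1); each position selects the index whose
# cumulative normalised weight first reaches it.

def systematic_resample(weights):
    L = len(weights)
    total = sum(weights)
    u = random.random()
    positions = [(i + u) / L for i in range(L)]
    cumulative, cum = [], 0.0
    for w in weights:
        cum += w / total
        cumulative.append(cum)
    indices, j = [], 0
    for pos in positions:
        while cumulative[j] < pos:
            j += 1
        indices.append(j)
    return indices  # indices[l] = which old entry the new entry l copies
```

Entries with weight 0 (constraint violators) can never be selected, so their controls drop out of the population.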
As the aircraft are resampled separately to form the new particles, no conflict-avoidance guarantees hold once resampling has taken place: the controls of aircraft $i$ from particle $l$ may now sit in a new particle alongside the controls of aircraft $j$ from particle $m$. This again arises from the departure from the generic one-weight-per-particle method mentioned in Section~\ref{sec:constraints}, justified by the significant reduction in the dimensionality of the search space and the corresponding effect on both computation time and the particle population needed. As long as the final sample from the distribution of particles is drawn correctly with regard to the binary conflict constraints, the lack of conflict guarantees at the beginning of each inner loop is not problematic.
\subsection{Final Sample Selection}
This is the final step of the SMC optimisation (line~\ref{alg:final}) before it hands back to the MPC update process. This step determines the controls to be used by all aircraft for the next update step. In many generic SMC applications this final sample is taken as the mean of the values of the particles. In multimodal distributions this would give an inaccurate estimate of the global optimisers (consider the mean of a control where half of the population steers left around an obstacle and the other half right). An alternative statistical measure to offset this problem would be the mode of the distribution. However, this is also unsuitable in our implementation: the presence of binary constraints across aircraft control distributions could lead to constraint violation. Therefore throughout this report we select the best performing particle in the final iteration as our estimate of the maximiser.
To find the best performing particle the weights of all aircraft in the particle are multiplied together and the particle with the greatest overall weight is selected as the estimator of the maximiser
\begin{equation}
\max_{l} \prod_{i=1}^{N} W_{i,l}.
\end{equation}
Any particle where an aircraft has failed any constraint in simulation would have a weight 0 associated with the failing aircraft and thus could not be selected as the best performing particle.
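The selection rule is a simple argmax over the product of per-aircraft weights; a minimal sketch (illustrative names, not the report's code):

```python
# Final sample selection: pick the particle whose product of per-aircraft
# weights is largest. Any particle containing a constraint-violating
# aircraft has a zero factor and so cannot win (unless every particle does).

def best_particle(W):
    # W[l][i]: weight of aircraft i in particle l.
    def score(row):
        p = 1.0
        for w in row:
            p *= w
        return p
    return max(range(len(W)), key=lambda l: score(W[l]))
```

In a production implementation the product would typically be computed in log space to avoid floating-point underflow for large $N$; that refinement is omitted here for clarity.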
\subsection{Rolling Window}
To allow for longer scenarios where aircraft both enter and leave the problem, as would happen in a real airport, the whole application is considered in the form of a rolling window. The length of the window is the same as the length of the horizon $H$ for the MPC, and the window moves with every completion of an MPC update. Any aircraft entering the problem (either through take off or landing) does so at a pre-determined time step, though this can be relaxed to consider uncertainty on arrival into the problem. Only aircraft which are considered active in the problem have their trajectories optimised by the algorithm previously described.
As we are using MPC, aircraft may become active in the problem midway through a horizon; their trajectories are optimised from the point they enter the problem. Similarly, an arrival aircraft may complete mid-horizon, thus leaving the problem. When trajectories are planned in the algorithm (lines~\ref{alg:disturb}--\ref{alg:plan}), arrival aircraft active in the problem are tested to see if they have entered the landing sector. If they have, they are noted as finished and future time steps within that horizon are not planned for that aircraft. Departure aircraft are less constrained and are not tested for early completion during the trajectory planning stage. Arrival aircraft which have met their landing constraints mid-horizon are given the best possible cost, 1, for all remaining steps of the horizon as a bonus for landing.
\begin{figure}[htb]
\begin{center}
\def12cm{14cm}
\includegraphics[width=12cm,viewport=0in 0.5in 10.5in 8.4in,clip=true]{rolling_window_diagram}
\caption{Graphical representation of active aircraft (light blue), completed aircraft (dark blue) and aircraft which haven't joined the problem (hatched) for rolling window simulation at three different points in simulation}
\label{fig:rollingwindow}
\end{center}
\end{figure}
Figure~\ref{fig:rollingwindow} shows an example problem where aircraft enter and leave the problem during simulation. Active aircraft are shown in light blue, finished aircraft which have left the problem are shown in dark blue and aircraft yet to enter the problem are shown in hatched lines. Aircraft will not be deactivated until they no longer appear within the bounds of the rolling window (MPC horizon), likewise they are not activated until they have appeared in the MPC horizon. The aircraft which have not finished are shown planning for an unknown time into the future as finishing can only be detected within the MPC horizon.
\subsection{Wind Model}
\label{sec:wind}
Large portions of the uncertainty about aircraft flight plans result from the inherent uncertainty in meteorological forecasts. Wind speed and direction are probably the most important meteorological factors affecting aircraft trajectories in the TMA. The wind disturbance is assumed to be made up of two components: a nominal component representing the wind forecast and a stochastic component representing the forecast errors. This report uses a spatio-temporal wind model for the forecast errors based on the model used by Lymperopoulos and Lygeros~\citep{LL10}. Their model approximations were designed for wind prediction over a far larger geographical scale than that of the TMA example; however, following experimentation, they appear to be suitable for our application.
The model of Lymperopoulos and Lygeros assumes an isotropic wind field (invariant under rotations) with uncorrelated wind speeds in the South-North and East-West directions. Vertical wind is neglected in the model; however, correlation between wind at different altitudes is considered. The wind field is generated from a random field where each point is Gaussian with zero mean and a covariance matrix $R(t,P,t',P')$, where $t$ and $t'$ are points in time and $P$ and $P'$ are points in three-dimensional space. Through these assumptions, and by using an exponential function, the correlation can be approximated as:
\begin{equation}
\rho_{xy}(t,P,t',P')=\sigma(z)\sigma(z')\exp(-\lambda|t-t'|)\exp \left(-\beta \left \|\begin{array}{c}
{x-x'}\\
{y-y'} \end{array}\right \| \right )\exp(-\gamma| z-z'|).
\label{eqn:cov}
\end{equation}
Here $\sigma(z)$ is the standard deviation of the wind error in m/s at altitude $z$. The values of $\lambda$, $\beta$ and $\gamma$ used by Lygeros and Glover~\citep{LG04} were $\lambda=6\times10^{-6}\,\mathrm{m}^{-1}$, $\beta=1.6\times10^{-6}\,\mathrm{m}^{-1}$ and $\gamma=1.5\times10^{-5}\,\mathrm{m}^{-1}$; these, together with $\sigma(z)$, are based on the data reported in \citep{CRKB98}.
To use the correlation function above Lymperopoulos and Lygeros proposed the following method for generating wind realisations.
The wind field is divided into a grid with $N_x$ points in the South-North direction, $N_y$ points in the East-West direction and $N_z$ vertically. For each point in the three-dimensional grid, two random numbers are generated from a zero-mean Gaussian distribution. The first of these random numbers in each pair relates to the South-North wind error and the second to the East-West wind error; these numbers are stored in two vectors $W_X(k)$ and $W_Y(k)$ at time step $k$.
The covariance matrix $\hat{R}\in \mathbb{R}^{N_x N_y N_z \times N_x N_y N_z}$, in which each entry compares two grid points using Equation~\ref{eqn:cov}, is the covariance matrix for both $W_X(k)$ and $W_Y(k)$ given the isotropic assumption. Lygeros and Lymperopoulos assume this covariance matrix to be constant in time and thus generate wind samples using the following linear Gaussian model:
\begin{eqnarray}
\begin{array}{cc}
W_X(0)=\hat{Q}v_X(0), & W_X(k+1)=aW_X(k)+Qv_X(k+1),\\
W_Y(0)=\hat{Q}v_Y(0), & W_Y(k+1)=aW_Y(k)+Qv_Y(k+1),\\
\end{array}
\end{eqnarray}
where $v_X(k),v_Y(k)\in\mathbb{R}^{N_x N_y N_z}$ are standard independent Gaussian random vectors, and $Q$ and $\hat{Q}$ are obtained by Cholesky decomposition of the covariance matrix $\hat{R}$ using
\begin{equation}
\begin{array}{ccc} QQ^T=(1-a^2)\hat{R} &\mathrm{and} & \hat{Q}\hat{Q}^T=\hat{R}\end{array}
\end{equation}
where $a=e^{-\delta t/G_t}$, with $G_t$ a time-correlation parameter obtained from \citep{CRKB98}. The wind error for individual aircraft is then obtained by tri-linear interpolation between the grid points.
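The sampling recursion above can be sketched with numpy. This is a minimal sketch, not the report's implementation: the toy covariance matrix stands in for a $\hat{R}$ built from Equation~\ref{eqn:cov}, and the value of $G_t$ is an assumed placeholder.

```python
import numpy as np

rng = np.random.default_rng(0)

n = 8                                    # N_x * N_y * N_z grid points (illustrative)
A = rng.standard_normal((n, n))
R_hat = A @ A.T + n * np.eye(n)          # toy SPD matrix standing in for R_hat

dt, G_t = 10.0, 300.0                    # G_t: time-correlation parameter (assumed value)
a = np.exp(-dt / G_t)

Q_hat = np.linalg.cholesky(R_hat)              # Q_hat Q_hat^T = R_hat
Q = np.linalg.cholesky((1 - a**2) * R_hat)     # Q Q^T = (1 - a^2) R_hat

W = Q_hat @ rng.standard_normal(n)             # W(0) = Q_hat v(0)
for k in range(5):
    W = a * W + Q @ rng.standard_normal(n)     # W(k+1) = a W(k) + Q v(k+1)
```

The scaling $(1-a^2)\hat{R}$ makes the AR(1) recursion stationary, so every $W(k)$ retains the spatial covariance $\hat{R}$.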
\section{Parallelisation}
\label{sec:parallel}
It has previously been proposed that a large speed-up can be obtained by parallelising the SMC algorithm~\citep{LYGDH10,FVR07,R00,SPFHTS07}. The authors' previous work~\citep{EMCL13} focused on implementation on a graphics processing unit (GPU) using the CUDA programming language provided by NVIDIA and demonstrated a 98\% improvement in computation time. CUDA is a scalable parallel programming language, similar to C and C++, with the libraries and utilities needed to design kernels to run on a GPU. These kernels are repeatable functions representing the code that the user wishes to execute in parallel over different data. GPUs can run thousands of threads at the same time, whereas a conventional computer will only run a few. The execution of, and near-instantaneous switching between, threads is how a GPU achieves high efficiency on parallel applications.
\subsection{GPU Structure}
A GPU differs from a standard CPU in the number and type of cores it has. A CPU has a few large cores optimised for sequential operations, whilst a GPU has many hundreds of smaller cores efficient at running tasks in parallel. GPU programming follows a data-driven model of computation: typically each thread executes the same operation on different elements of the data in parallel. Each thread has an id number which is used to compute the memory addresses where that thread's data is stored; the id number can also be used for control decisions inside the kernel. To simplify handling many thousands of threads, and to take advantage of potential structure in the data, threads are grouped into blocks containing up to 512 threads (larger limits are available in more recent architectures) in 1D, 2D or 3D arrays. These blocks can in turn be structured into grids. Threads are also bundled into blocks for cooperation purposes: if threads need to share information they can use a shared memory resource accessible by all threads in the same block. This shared memory is much smaller than the global GPU memory but significantly faster, as it sits on-chip with the computation cores themselves. The primary decisions when implementing an existing algorithm in parallel are therefore: firstly, identifying the code to parallelise inside a kernel; and secondly, deciding on intelligent use of shared memory for the threads executing that kernel.
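The id-to-address mapping described above is, in CUDA terms, the standard global-index computation. A minimal sketch in Python (names are illustrative, not taken from the implementation) shows how a (block, thread) pair selects one particle's data slot:

```python
def particle_index(block_id: int, thread_id: int, threads_per_block: int) -> int:
    """CUDA-style global index: blockIdx.x * blockDim.x + threadIdx.x."""
    return block_id * threads_per_block + thread_id

# With an 80-block x 128-thread layout (the arrangement used later in this
# report), every particle index in 0..10239 is produced exactly once.
indices = {particle_index(b, t, 128) for b in range(80) for t in range(128)}
```

Each thread then reads and writes only the memory belonging to its own particle, which is why no inter-thread communication is needed.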
\subsection{Kernel Design}
\begin{figure}[htb]
\begin{center}
\def12cm{14cm}
\includegraphics[width=12cm,viewport=0.1in 1in 10in 6.8in,clip=true]{SMC_kernel}
\caption{Graphical Representation of SMC Algorithm with Kernel Highlighted}
\label{fig:paralleldiag}
\end{center}
\end{figure}
Figure \ref{fig:paralleldiag} shows a simplified graphical representation of the SMC algorithm, focusing on the operations taking place in a single computation step of the MPC. The boxed region of the flowchart indicates the GPU kernel. This kernel incorporates the inner simulation loop, specifically lines \ref{alg:inner1}-\ref{alg:inner2} of the algorithm. This code was a bottleneck in the sequential implementation, as it must be executed multiple times for each particle, where the only difference between executions is the particle's individual data. This made it a perfect candidate for parallelisation with one particle per thread, as there is minimal thread interaction and minimal use of shared memory.
In the algorithm used there is no need for the threads to synchronise or communicate, so the problem can be considered `embarrassingly parallel'. The remaining implementation decision was to select an appropriate random number generator for use within the kernel. This report's implementation uses the NVIDIA-provided library CURAND, specifically the XORWOW generator (an xorshift generator combined with a Weyl sequence~\citep{MX03}), which seeds each thread and maintains the state of each generator between kernel calls. This generator was chosen for its superior speed compared to the others included in CURAND.
\subsection{Computational Savings}
\label{sec:compsave}
Previously, the MPC-SMC method was demonstrated for fast computation of many-aircraft problems in cruise-like conditions~\citep{EMCL13}, with computation time per MPC update step of up to 33 seconds for a 20-vehicle problem. That implementation had the simplification of fixed scenario lengths, with all aircraft entering the problem at the start and no aircraft leaving before the end. As such, all aircraft were active during the entire scenario and the computation time per step of the algorithm was deterministic, based on the number of aircraft in the problem. This is not true for this report's implementation, where the number of active aircraft varies step by step as aircraft both enter and leave the scenario.
Figure~\ref{fig:times} compares our previous implementation's time per update step with the average for this report; fewer points are available for this report's implementation due to the nature of the scenarios being solved, but the comparison offers an insight into the magnitudes involved.
\begin{figure}[htb]
\begin{center}
\def12cm{12cm}
\includegraphics[width=12cm]{TMAvsOLD.pdf}
\caption{Comparison of previous and current computation time spent per time step for problems with different number of vehicles}
\label{fig:times}
\end{center}
\end{figure}
This implementation is significantly slower than the previous incarnation, most notably due to the 10-fold increase in the number of particles needed to adequately represent the search space. The problem of landing vehicles in a small terminal area whilst obeying all flight constraints is significantly harder than the previous cruise-like situation, and 1024 particles were not enough to reliably find feasible solutions. A larger or faster GPU would directly benefit computation speed by allowing more cores to process the particle threads and each core to work faster.
\subsection{Structural Considerations}
Given that the layout of parallelisation previously defined is such that no individual simulation thread needs to communicate with any other thread during its time in the kernel, there is a high level of flexibility in how threads and blocks are laid out for computation. In GPU computation, individual runs of the kernel are threads, which are bundled together into blocks for structural reasons. All threads in a block run on the same microprocessor on the GPU, and the current maximum number of threads a block can hold is 512. Threads inside the same block can make use of low-latency memory on board their specific microprocessor, which is important if they need to communicate or synchronise (not needed in our application). Groups of blocks are held together in a grid, and there is no practical limit on the number of blocks a grid can hold. There can be more blocks than microprocessors on the GPU, in which case blocks that cannot immediately be assigned a processor are queued until they can be.
With a fixed number of particles used throughout our TMA simulations ($10240 = 5\times2^{11}$), there are 17 combinations of threads and blocks which exactly factorise the total number of particles and do not exceed the maximum number of threads one block can hold. These 17 combinations are shown in Table \ref{tab:comp_sense}. The computation speed of a time step depends mostly on how many aircraft are in the problem and how many of them are active at a given time. Therefore, to test the variation of computational speed with the layout of threads in blocks, it was essential to set up a standard problem in which a fixed number of aircraft were always active for enough time steps to observe a fixed computational speed. The test problem chosen was a 10-aircraft problem in which all aircraft entered the problem at the 5th MPC time step (with a horizon of 6 steps, so aircraft were optimised from time step one over increasing parts of the horizon until the 5th step was reached). The times for the first 4 time steps were disregarded, as aircraft were not active across the entire horizon and thus the computation time varied from step to step. After the 5th time step, fixed step times were observed regardless of structural layout. The average computation time for a given structural layout was then obtained by averaging 3-4 time steps after this settling period.
\begin{table}[h]
\centering
\begin{tabular}{ c | c | c }
\hline
No. Threads in Block & No. Blocks & Average Computation Time (s)\\
\hline
1 & 10240 & 1497\\
2 & 5120 & 761\\
4 & 2560 & 394\\
5 & 2048 & 330\\
8 & 1280 & 201\\
10 & 1024 & 180\\
16 & 640 & 109\\
20 & 512 & 98\\
32 & 320 & 72\\
40 & 256 & 70\\
64 & 160 & 60\\
80 & 128 & 61\\
128 & 80 & 58\\
160 & 64 & 57\\
256 & 40 & 58\\
320 & 32 & 56\\
512 & 20 & 64\\
\hline
\end{tabular}
\caption{Average Computation Time for a single update in different Thread*Block layouts}
\label{tab:comp_sense}
\end{table}
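The 17 admissible layouts in Table \ref{tab:comp_sense} are exactly the divisors of $L = 10240 = 5\times2^{11}$ that do not exceed the 512-thread block limit, which can be checked with a one-line enumeration:

```python
# Enumerate (threads per block, blocks) pairs that exactly factorise
# L = 10240 particles without exceeding the 512-thread block limit.
L = 10240
layouts = [(t, L // t) for t in range(1, 513) if L % t == 0]
# layouts runs from (1, 10240) up to (512, 20): 17 combinations in total.
```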
\begin{figure}[htb]
\begin{center}
\def12cm{12cm}
\includegraphics[width=12cm]{comp_sense.pdf}
\caption{Average Computation Time for a single update in different Thread*Block layouts}
\label{fig:comp_sense}
\end{center}
\end{figure}
All tests were performed on the same GPU unit; individual GPU setups (specifically the number of cores) would be expected to affect the average computation time and the best thread-block layout. The GPU used for these tests was an NVIDIA GeForce GTX 580, which has 512 cores. Table \ref{tab:comp_sense} and Figure \ref{fig:comp_sense} show the average step computation time for the different layouts. Clearly, sending a single thread per block to one of the GPU's processors suffers too greatly from the overheads of memory writing and instructing the processor. More surprisingly, sending 20 threads to every processor on the GPU is also still quite slow. The best observed step times strike a balance between sending a large group of threads to each processor (reducing the relative effect of computational overhead) and using more of the GPU simultaneously.
\section{Simulation Results}
This section presents the results of simulations performed with the previously outlined algorithm and application setup. These simulations vary in complexity from single aircraft paths, used to tune the objective function coefficients, to tests designed to see how far the implementation can be pushed with the current setup. The final part of the results discusses the computational speed of the method compared to that of a previous implementation~\citep{EMCL13}.
\subsection{Simulation Setup}
For all simulations within this report the following parameters were kept constant. MPC time steps were 10 seconds in length ($\delta t = 10$) with a horizon length of $H=6$ time steps. This $\delta t$ is arguably far shorter than needed in the application, where standard ATC would use 30- to 60-second updates. For simplicity, aircraft have been standardised with the aerodynamic properties of an Airbus A320 obtained from BADA~\citep{BADA}. The TMA is considered as a circle of radius 30km with a single runway at the origin. Aircraft both land and depart East to West from this one runway. There is no additional runway scheduler running alongside the simulations, so separation near the landing envelope is maintained by the distance separation used for conflict avoidance in Equation~\ref{eqn:avoidance}.
The algorithm used $J_{\max}=100$, a schedule function of $\mathrm{SampleSchedule}(J)=\lfloor 3+5e^{0.05J}\rfloor$ and $L=10240$ particles (which have been naively arranged as 80 blocks of 128 threads for the GPU implementation). The simulations were performed on an NVIDIA Tesla C2070, which has 448 cores.
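The schedule function quoted above grows exponentially in the iteration counter $J$, so early iterations draw few samples per particle and later ones draw many. A direct transcription:

```python
import math

def sample_schedule(J: int) -> int:
    """SampleSchedule(J) = floor(3 + 5 * exp(0.05 J)) as quoted in the text."""
    return math.floor(3 + 5 * math.exp(0.05 * J))
```

At $J=0$ this gives 8 samples, rising to 745 by the final iteration $J=J_{\max}=100$.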
The disturbances from wind for all scenarios were simulated using the wind model outlined in Section~\ref{sec:wind}. A three-dimensional grid of eight points was used to generate the wind at the outlying reaches of the TMA, with tri-linear interpolation giving the disturbances at the aircraft's individual locations. No prevailing wind was used in the scenarios, so the aircraft are only subject to the random disturbances of the wind field itself. A denser grid of points could be used for sampling the wind speed; however, this slows computation significantly owing to the need to generate many more random numbers within each inner loop of the GPU kernel.
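The tri-linear interpolation step can be sketched as successive linear interpolations along each axis. This is a generic sketch (function and variable names are illustrative): `corners` holds the eight grid-point wind values of one cell, indexed `[ix][iy][iz]`, and `(fx, fy, fz)` are the aircraft's fractional coordinates within the cell.

```python
import numpy as np

def trilinear(corners, fx, fy, fz):
    """Interpolate between the 8 corner values of a unit cell."""
    c = np.asarray(corners, dtype=float)   # shape (2, 2, 2)
    c = c[0] * (1 - fx) + c[1] * fx        # collapse x -> shape (2, 2)
    c = c[0] * (1 - fy) + c[1] * fy        # collapse y -> shape (2,)
    return c[0] * (1 - fz) + c[1] * fz     # collapse z -> scalar
```

In the wind model this would be applied separately to the South-North and East-West error components.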
\subsection{Single Aircraft Trajectories}
\label{sec:SAT}
The results presented here were used to tune the coefficients in the objective functions for both departure and arrival aircraft, and to evaluate the behaviour of arrival aircraft when exposed to the flow field. For each of the departure and arrival cases, 12 aircraft have been independently simulated and then plotted together in Figures~\ref{fig:depart} and~\ref{fig:arrive} to give an overview of behaviour. The arrival aircraft were started at different places around the airport and simulated until they reached the same landing zone. Departing aircraft were given different desired bearings and simulated until they reached the edge of the TMA zone.
\begin{figure}[htb]
\begin{center}
\def12cm{12cm}
\includegraphics[width=12cm]{departures.pdf}
\caption{Departure trajectories for 12 single aircraft taking off from a runway at the origin in a Westerly direction before heading to their desired bearing}
\label{fig:depart}
\end{center}
\end{figure}
\begin{figure*}[h]
\begin{center}
\def12cm{13cm}
\includegraphics[width=12cm]{arrivals.pdf}
\caption{Arrival trajectories for 12 single aircraft entering the TMA at different angles before proceeding to the landing envelope to land on a runway from the East}
\label{fig:arrive}
\end{center}
\end{figure*}
The effect of the flow field on arrival aircraft is clear in Figure~\ref{fig:arrive}: aircraft approaching from the Eastern side of the runway take minimal diversion prior to arrival, whereas the vehicle approaching from due West of the runway is given a long diversion outside the TMA before re-approaching on a more suitable bearing. In reality, aircraft are allowed to approach an airport from only a restricted number of directions, and flying directly into departing traffic would be unlikely.
From this tuning the weightings shown in Table~\ref{tab:coeff} were used for landing and take off aircraft in the remainder of the scenarios in the report. Tuning these parameters differently will have significant effects on the trajectories generated.
\begin{table}[h]
\centering
\begin{tabular}{ c | c | c | l }
\hline
Aircraft Type & Coefficient & Value & Associated Cost \\
\hline
\multirow{4}{*}{Departure}&$\alpha_1$ & 0.4 & Difference in bearing from desired to current\\
&$\alpha_2$ & 0.1 & Fuel minimisation\\
&$\alpha_3$ & 0.25 & Difference in altitude from desired to current\\
&$\alpha_4$ & 0.25 & Difference in airspeed from desired to current\\
\hline
\multirow{4}{*}{Arrival} & $\tilde\alpha_1$ & 0.25 & Difference from aircraft heading to flowfield\\
& $\tilde\alpha_2$ & 0.65 & Difference from aircraft altitude to nominal altitude\\
& $\tilde\alpha_3$ & 0.1 & Fuel minimisation\\
\hline
\end{tabular}
\caption{Example objective function weightings used for optimisation}
\label{tab:coeff}
\end{table}
\subsection{Standard Arrivals and Departures}
The simulations presented here demonstrate the unified handling of departure and arrival trajectories for the airport. Scenarios were randomly generated such that both arrival and departure aircraft joined the simulation at fixed times, spread evenly across the entire simulation interval with a variance of a few time steps. The simulations were run for 100 time-step updates (a total scenario length of around 15 minutes). Between 5 and 10 vehicles of each type (arrival and departure) were included in each scenario. Figure~\ref{fig:LandDepart} shows an example of final trajectories for a scenario with 10 landings and 10 departures. Not all scenarios completed successfully: in cases where a departing aircraft joined the problem just as a landing aircraft was due to enter the landing envelope, the separation constraints were violated and no valid solution could be found. This is one of the restrictions of fixed departure times; it could be remedied by allowing flexibility in scheduling, as at a real airport. Whether this flexibility can be implemented directly in the SMC optimisation, without an outside scheduling agent, is a topic of future work.
\begin{figure}[htb]
\begin{center}
\def12cm{12cm}
\includegraphics[width=12cm]{LandDepart.pdf}
\caption{Trajectories of 10 arrival (blue) and 10 departure (red) vehicles over a 100 time step simulation}
\label{fig:LandDepart}
\end{center}
\end{figure}
\subsection{Low Fuel Scenario}
In the low fuel scenario, two arriving aircraft with the same distance to landing are initialised, one with plenty of fuel and the other with very little fuel remaining. This scenario is run 20 times, until both aircraft have landed, to see how the aircraft behave. In each run the aircraft are subject to a different set of wind disturbances, as well as the stochastic nature of the algorithm. Figure~\ref{fig:lowfuel} shows one of the generated trajectories. With a human operator we would expect the low-fuel aircraft to be prioritised for landing, with the other aircraft diverted or held. However, as Table~\ref{tab:fuel} shows, the order in which the aircraft landed was more evenly split. The cases where the low-fuel aircraft landed second were due to that aircraft using its fuel sparingly for acceleration, most likely because of the fuel minimisation term in the objective function; the high-fuel aircraft therefore proceeded faster towards the landing envelope and landed first. In both situations the low-fuel aircraft lands before it runs out of fuel. To alter this behaviour, an outside observer with some form of priorities would need to be added to the system to schedule the order in which aircraft may enter the landing envelope.
\begin{table}[htb]
\centering
\begin{tabular}{ c | c | c }
\hline
Aircraft which landed first & Number of times & Average fuel (\kilogram) remaining on low fuel aircraft \\
\hline
High Fuel & 11 & 22.6 \\
\hline
Low Fuel& 9 & 6.2 \\
\hline
\end{tabular}
\caption{Breakdown of which aircraft landed first in low fuel scenario and average fuel remaining }
\label{tab:fuel}
\end{table}
\begin{figure}[htb]
\begin{center}
\def12cm{12cm}
\includegraphics[width=12cm]{low_fuel20.pdf}
\caption{Example trajectory for an aircraft with low fuel and an aircraft with high fuel approaching the landing envelope at the same time (low fuel landing first)}
\label{fig:lowfuel}
\end{center}
\end{figure}
\subsection{Congested Arrival Schedule}
The congested arrival schedule demonstrates the number of aircraft the system can handle concurrently. Previous work by the authors demonstrated the method for up to 20 simultaneous aircraft in cruise-like situations. However, with the limited resource of a single runway it is not necessarily sensible to initialise 20 aircraft in the first time step and run until all have landed. Instead, aircraft are introduced to the scenario gradually, as would happen at a real airport, and removed from the scenario as they land. No departure aircraft are considered in this scenario, as this would necessitate flexible scheduling of take-offs, which is currently neglected for simplicity (but, as mentioned, is the subject of future work). Figure~\ref{fig:congest} shows the landing trajectories of 24 arrival aircraft over time. The maximum concurrency observed in this scenario was 8 vehicles.
\begin{figure}[htb]
\begin{center}
\def12cm{12cm}
\includegraphics[width=12cm]{Concurrency.pdf}
\caption{Aircraft trajectories from a 100 step simulation with 24 arriving aircraft from random locations}
\label{fig:congest}
\end{center}
\end{figure}
\section{Comparison to Real Airport Data}
\subsection{Collection of Real Data}
Data was collected from FlightRadar24, using the open broadcast system Automatic Dependent Surveillance-Broadcast (ADS-B). ADS-B signals are picked up by a network of receivers which forward the information to the FlightRadar24 servers. The information each aircraft broadcasts includes its identity, GPS position, heading, airspeed and timestamps. In Europe, roughly 75\% of passenger aircraft carry ADS-B equipment. Relatively accurate information about aircraft can be obtained in populated areas (99\% of Europe is covered by ADS-B receivers), though coverage can be limited as aircraft fly further over oceans. As this work focuses on the terminal manoeuvring area of an airport such as Gatwick, and the area around this airport has good receiver coverage, equipped aircraft can be reliably tracked.
Data from the FlightRadar24 servers is stored in one-minute update intervals which can be queried back as far as a month. The data used for comparison in this work was minute updates from the 14th of September 2013 over the full 24 hours, resulting in 1440 snapshots of aircraft across the globe. This data was then filtered to include only aircraft which were within 50km of London Gatwick airport and had Gatwick as either their departure or arrival airport. Other traffic within the TMA not related to the airport was neglected, to avoid the need to model a third class of aircraft (aircraft which are neither arrival nor departure). Aircraft can continue to broadcast their ADS-B signals whilst on the ground at the airport, so a minimum altitude was also imposed to detect only aircraft which were actually in flight.
The remaining data was then parsed into Matlab data files into three key data structures:
\begin{itemize}
\item The numerical state data across the entire 1440 time intervals, containing the x, y co-ordinates translated from latitude and longitude, the heading of the aircraft, the altitude of the aircraft and the true airspeed.
\item The non-numerical data for each individual aircraft detected throughout the day, containing its flight code, aircraft type, registration, departure airport and arrival airport.
\item The over-arching numerical data about each aircraft, including the first time step it was detected in the area, the last time step it was detected in the area, a binary value indicating whether it was an arrival or departure aircraft, and 7 key aerodynamic coefficients and aircraft measures relating to the model of aircraft (obtained by cross-referencing the aircraft model with BADA).
\end{itemize}
The data obtained from FlightRadar24 was relatively raw, in that there were data corruption issues and errors which needed to be filtered out before simulations of fuel usage could take place. Around 600 aircraft were registered as appearing in the 50km TMA of LGW and related to the airport; this was reduced to 556 once aircraft which appeared for fewer than 3 intervals were removed from the data set. These 556 aircraft were then iteratively reduced to 528 as aircraft which vanished before leaving the TMA or landing, or which had clear errors in their GPS data, were removed. One aircraft on the day performed a go-around for a second attempt at landing, and was removed from the data set in order to provide a fair comparison of fuel use between our simulations and the real data. Of the remaining aircraft, 13 incorrectly identified whether they were arriving or departing. This was corrected directly in the data and the aircraft remained part of the data set.
Weather reports for the 14th of September were obtained in order to provide estimates of the nominal wind directions throughout the day. These estimates of wind direction and strength were updated at 20-30 minute intervals and then interpolated to give a minute-by-minute estimate. In cases where no clear wind direction was recorded in the weather data, a random wind bearing was chosen. There were no unusual or remarkable weather conditions on the day in question; the prevailing wind was West to East, so no changes in runway direction were recorded over the day.
Since the data obtained from FlightRadar24 contains no estimate of the aircraft's weight, several assumptions are made. All arrival aircraft are assumed to operate at their maximum take-off load with 20\% of their fuel reserve left when they enter the TMA. All departure aircraft are assumed to operate at their maximum take-off load with their full fuel reserve at take-off. Maximum loads were obtained from each aircraft model's specifications.
\subsection{Analysis of Real Data}
\begin{figure}[htb]
\begin{center}
\def12cm{12cm}
\includegraphics[width=12cm]{flight_concurrency.pdf}
\caption{Pattern of 528 real aircraft (arrival red circles, departure blue cross) active across 24 hours at Gatwick airport}
\label{fig:flight_concurrency}
\end{center}
\end{figure}
With the remaining 528 vehicles' data available, we processed the information to get an overview of the airport operations prior to the fuel use calculations. Figure~\ref{fig:flight_concurrency} shows a concurrency plot of aircraft in the TMA within a 40km radius of the airport (although aircraft were recorded up to 50km away, they were only considered to enter the TMA at 40km). The beginning and end points for each aircraft are marked by either a circle (arrival aircraft) or a cross (departure aircraft), connected by a line. The y axis counts how many aircraft are concurrently active in the TMA and the x axis displays the time in minutes across the day (1440 minutes = 24 hours). Departures from the airport do not begin until around 4am, at which point activity at the airport increases rapidly and stays high across the daytime hours until it curtails later into the evening. Arrivals occur at all times of the day and night. This plot allows us to identify key times at which to simulate the airport for various traffic demands over the 24 hours.
\begin{figure}[htbp]
\begin{center}
\def12cm{11cm}
\includegraphics[width=12cm]{gatwick_arrivals.pdf}
\caption{Paths of all arrival aircraft within the 24 hour period at Gatwick airport}
\label{fig:gat_arrival}
\end{center}
\end{figure}
\begin{figure}[htbp]
\begin{center}
\def12cm{11cm}
\includegraphics[width=12cm]{gatwick_departures.pdf}
\caption{Paths of all departure aircraft within the 24 hour period at Gatwick airport}
\label{fig:gat_departure}
\end{center}
\end{figure}
Figures~\ref{fig:gat_arrival} and \ref{fig:gat_departure} display the paths taken by all arrival and all departure aircraft, respectively, across the day. These paths give an indication of the way aircraft are managed at the airport in defined streams, avoiding interaction between arrival and departure aircraft.
\subsubsection{Fuel Estimate 1}
Fuel estimate 1 assumes that aircraft fly on the heading recorded at each minute interval, accelerating or decelerating such that their airspeed matches the next interval's value. Any difference between the resulting x,y position and the next interval's recorded x,y position is attributed to disturbance, or `wind'. The aircraft dynamics model is effectively run in reverse to calculate the change in mass of the aircraft, and thus the fuel burned.
\begin{subequations}
\begin{align}
\eta_i(k)=&C_{f1,i}\left(1+{{v_{s,i}(k)}\over{C_{f2,i}}}\right) \\
\gamma_i(k)=& \sin^{-1}\left({{z_i(k+1)-z_i(k)}\over{\delta t v_{s,i}(k)}}\right) \\
w_{x,i}(k) =&{{x_i(k+1)-x_i(k)}\over{\delta t}}- (v_{s,i}(k) \cos(\chi_i(k))\cos(\gamma_i(k))) \\
w_{y,i}(k) =&{{y_i(k+1)-y_i(k)}\over{\delta t}}- (v_{s,i}(k) \sin(\chi_i(k))\cos(\gamma_i(k))) \\
T_i(k)=& {{m_i(k)(v_{s,i}(k+1)-v_{s,i}(k))}\over{\delta t}}+D_i(k) +m_i(k)g\sin(\gamma_i(k)) \\
m_i(k+1)=&m_i(k)-\delta t(\eta_i(k) T_i(k))
\end{align}
\label{eqn:blah}
\end{subequations}
Here $\eta_i(k)$ is the fuel burn rate of aircraft $i$ at step $k$, based on the true airspeed and two aircraft-model-specific aerodynamic coefficients $C_{f1,i},C_{f2,i}$ obtained from BADA.
The following values are known from the data: $x_i(k), y_i(k), z_i(k), v_{s,i}(k), \chi_i(k), m_i(1), x_i(k+1), y_i(k+1), z_i(k+1), v_{s,i}(k+1), \chi_i(k+1)$, with $m_i(k+1)$ to be calculated. This method of fuel estimation additionally yields the parameters $w_x$ and $w_y$, which act as an indication of how accurate the fuel usage estimate might be: when $w_x$ and $w_y$ are of the same order of magnitude as the nominal wind conditions at that time, the fuel estimate is likely reasonable. Figures \ref{fig:w_x} and \ref{fig:w_y} show the estimates of $w_x$ and $w_y$ obtained at each time step for every aircraft. The vast majority of the estimates lie around the origin, which indicates good general performance; however, there are also many cases where the wind estimates are clearly too high for the associated fuel estimates to be entirely accurate. This motivates the need for a secondary method of fuel estimation which can be used for comparison, both with this first estimate and with the data obtained from our own fuel simulations.
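One backward step of this first estimate can be sketched directly from the equation set. This is a minimal sketch, not the implementation used for the results: drag $D_i(k)$ and the BADA coefficients are treated as given inputs, and the numeric values in the usage example below are purely illustrative.

```python
import math

def fuel_step(x, y, z, v, chi, m, x1, y1, z1, v1, D, Cf1, Cf2, dt=60.0, g=9.81):
    """One reverse-dynamics step of fuel estimate 1.

    (x, y, z, v, chi, m) are the state at interval k; (x1, y1, z1, v1) the
    recorded values at k+1; D is drag; Cf1, Cf2 are BADA burn coefficients.
    Returns the updated mass and the residual wind components (w_x, w_y).
    """
    eta = Cf1 * (1 + v / Cf2)                        # fuel burn rate coefficient
    gamma = math.asin((z1 - z) / (dt * v))           # climb angle
    wx = (x1 - x) / dt - v * math.cos(chi) * math.cos(gamma)
    wy = (y1 - y) / dt - v * math.sin(chi) * math.cos(gamma)
    T = m * (v1 - v) / dt + D + m * g * math.sin(gamma)   # required thrust
    m1 = m - dt * eta * T                            # updated mass
    return m1, wx, wy
```

For level, unaccelerated flight exactly along the recorded heading the residuals $w_x, w_y$ vanish and the thrust reduces to the drag, as expected.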
\begin{figure}[hptb]
\begin{center}
\def12cm{12cm}
\includegraphics[width=12cm]{w_x_plot.pdf}
\caption{Estimates of $w_x$ over the 24 hour data period}
\label{fig:w_x}
\end{center}
\end{figure}
\begin{figure}[hptb]
\begin{center}
\def12cm{12cm}
\includegraphics[width=12cm]{w_y_plot.pdf}
\caption{Estimates of $w_y$ over the 24 hour data period}
\label{fig:w_y}
\end{center}
\end{figure}
\subsubsection{Fuel Estimate 2}
To provide a second estimate of fuel use from the real data, an alternative method was used. This method uses dead reckoning of the total distance travelled between two points, $d_i$, and assumes no wind disturbance was present. This distance gives an estimate of the true airspeed of the aircraft, $\hat{v}_{s,i}$. From the airspeed estimate, the fuel burn coefficient $\eta_i$ is obtained, along with the climb angle $\gamma_i$. An estimate of the track angle $\hat{\chi}_i$ is obtained from the two positions of the aircraft which, when compared with the aircraft's reported heading, gives the bank angle needed to change the heading by that margin. Finally, these values are used to work out the thrust needed to achieve the velocity change between the estimate of true airspeed and the airspeed reported by the aircraft at the next time step.
\begin{subequations}
\begin{align}
d_i(k)=& \sqrt{(x_i(k+1)-x_i(k))^2+(y_i(k+1)-y_i(k))^2+(z_i(k+1)-z_i(k))^2}\\
\hat{v}_{s,i}(k)=&{{d_i(k)}\over{\delta t}}\\
\eta_i(k)=&C_{f1,i}\left(1+{{\hat{v}_{s,i}(k)}\over{C_{f2,i}}}\right) \\
\gamma_i(k)=& \sin^{-1}\left({{z_i(k+1)-z_i(k)}\over{\delta t \hat{v}_{s,i}(k)}}\right) \\
\hat{\chi}_i(k)=&\tan^{-1}\left({{y_i(k+1)-y_i(k)}\over{x_i(k+1)-x_i(k)}}\right)\\
\delta \chi_i(k)=&\hat{\chi}_i(k)-\chi_i(k)\\
\phi_i(k)=&\tan^{-1}\left({{\delta \chi_i(k) \hat{v}_{s,i}(k)}\over{g \delta t}}\right)\\
T_i(k)=& {{m_i(k)(v_{s,i}(k+1)-\hat{v}_{s,i}(k))}\over{\delta t}}+D_i(k) +m_i(k)g\sin(\gamma_i(k)) \\
m_i(k+1)=&m_i(k)-\delta t(\eta_i(k) T_i(k))
\end{align}
\end{subequations}
This method can suffer when the $x$-$y$ distance change is actually very small because the aircraft was circling and the path was poorly captured by the 1 minute sample time. This leads to poor estimates of airspeed, which can result in large thrusts being needed to reach the airspeed at the next interval and thus artificially high fuel usage estimates in some cases.
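The update above can be sketched as a single step function. The coefficient values, the helper name and the externally supplied drag force $D_i(k)$ are illustrative assumptions (in the report the drag comes from the aircraft model):

```python
import math

G = 9.81    # gravitational acceleration (m/s^2)
DT = 60.0   # data sample time delta-t (s)

def fuel_step(p0, p1, v_next, chi, m, C_f1, C_f2, drag):
    """One step of the dead-reckoning fuel estimate (Fuel Estimate 2).
    p0, p1 : (x, y, z) positions at steps k and k+1 (m)
    v_next : reported true airspeed at step k+1 (m/s)
    chi    : reported track angle at step k (rad)
    m      : mass estimate at step k (kg)
    C_f1, C_f2 : fuel burn coefficients (illustrative values/units)
    drag   : drag force D_i(k) in N, assumed supplied externally
    Returns (m_next, T): mass estimate at k+1 and thrust over interval k."""
    dx, dy, dz = p1[0] - p0[0], p1[1] - p0[1], p1[2] - p0[2]
    d = math.sqrt(dx * dx + dy * dy + dz * dz)  # dead-reckoned distance
    v_hat = d / DT                              # airspeed estimate (no-wind assumption)
    eta = C_f1 * (1.0 + v_hat / C_f2)           # fuel burn coefficient
    gamma = math.asin(dz / (DT * v_hat))        # climb angle (fails if d ~ 0: the circling case)
    chi_hat = math.atan2(dy, dx)                # track angle from positions
    d_chi = chi_hat - chi
    phi = math.atan((d_chi * v_hat) / (G * DT))  # bank angle for the heading change
    # thrust for the observed speed change, plus drag and climb components
    T = m * (v_next - v_hat) / DT + drag + m * G * math.sin(gamma)
    m_next = m - DT * eta * T                   # mass update
    return m_next, T
```

Note the `d ~ 0` comment: the degenerate short-distance case is exactly the circling failure mode discussed above, where an under-estimated $\hat{v}_{s,i}(k)$ inflates the thrust term.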
\subsection{Comparison Between Fuel Estimates}
For both fuel estimates aircraft were prevented from burning negative fuel and gaining weight. The two estimates were used as guidelines for how much fuel the aircraft might have used, though neither method can reliably claim to always upper- or lower-bound the fuel use. Figures \ref{fig:fuel_subtract} and \ref{fig:fuel_comp} show the difference between fuel estimate 1 and fuel estimate 2 for each of the 528 aircraft. The vast majority of cases are within 100 kg of each other. The significant outliers are typically caused by a very poor result from fuel estimate 2, due to a very short Euclidean distance apparently being travelled by the aircraft in the one minute sampling time. In reality the outlier aircraft were still circling in a stack, with close to one revolution of the stack per minute. A short Euclidean distance leads to a severe under-estimate of $\hat{v}_{s,i}(k)$, which then leads to a significant over-estimate of the thrust needed to increase airspeed to match the observed airspeed $v_{s,i}(k+1)$.
\begin{figure}[hptb]
\begin{center}
\def12cm{12cm}
\includegraphics[width=12cm]{fuel_subtract_plot.pdf}
\caption{Fuel Estimate 1 - Fuel Estimate 2 for each aircraft}
\label{fig:fuel_subtract}
\end{center}
\end{figure}
\begin{figure}[hptb]
\begin{center}
\def12cm{12cm}
\includegraphics[width=12cm]{fuel_vs_plot.pdf}
\caption{Fuel Estimates 1 and 2 plotted against each other}
\label{fig:fuel_comp}
\end{center}
\end{figure}
\subsection{Comparison Between Algorithm and Real Data}
Running a single simulation of our algorithm across the full 24 hour period of airport activity would be ideal, but the raw size of the problem would cause memory and stability issues. Several adjustments therefore need to be made when comparing to the real data. The most important of these is to shorten the sampling time from the 60 seconds of the recorded data to time steps of 20 seconds. As highlighted in the fuel estimates, with a linear model between time steps, longer sampling times have negative effects on the accuracy of aircraft trajectories. Additionally, although we only have recorded data from the aircraft every minute, they are running a sophisticated inner loop controller in the form of a pilot, whose decisions are not limited to minute-by-minute sampling times. The shorter sampling time of 20 seconds was chosen as a compromise between the additional computational load and the ability of the model to cope. Lengthening our sampling time would instead require some form of inner loop controller and a distinct change in the structure of our simulations.
Given the real world data available at every minute interval it is possible to initialise our own simulations starting at any particular minute of the day and run them for a set number of steps. Computational speed is highly impacted by the number of aircraft within the simulation window. In high density traffic simulations taking place at the peaks of the day, the length of the simulation window was limited to one hour (180 simulation time steps). In low density operations (e.g. very early in the day) the simulation window length was increased to take advantage of the reduced number of aircraft.
Fuel comparison simulations were classified into two types based on traffic density: low (under 5 aircraft active per time step) and high. These traffic densities were identified from the concurrency analysis shown in Figure \ref{fig:flight_concurrency}.
\subsubsection{Low Density Traffic}
Low density traffic mainly occurs in the very early hours of the morning, between midnight and 4.30 am (data time steps 0 to 230). Traffic in low density periods was dominated by arrivals, as take-offs do not begin at the airport until close to dawn in order to reduce the impact of noise pollution on residents. A similar period of low density arrival traffic takes place in the 100 minutes leading up to midnight. In this period the majority of aircraft are arrivals; however, one delayed departure does take place as well. Examining the low traffic density periods is a useful way to fine tune arrival aircraft objective function weightings and to debug the simulations.
\begin{figure}[p]
\begin{center}
\def12cm{12cm}
\includegraphics[width=12cm]{sim_low_density_1_traj.pdf}
\caption{Simulated trajectories of aircraft in first 230 minutes of Sept 14th}
\label{fig:low_1_traj}
\end{center}
\end{figure}
\begin{figure}[p]
\begin{center}
\def12cm{12cm}
\includegraphics[width=12cm]{low_density_1_fuel_comp.pdf}
\caption{Fuel comparison between fuel estimates and simulations for first 230 minutes of Sept 14th}
\label{fig:low_1_fuel}
\end{center}
\end{figure}
\begin{figure}[p]
\begin{center}
\def12cm{12cm}
\includegraphics[width=12cm]{sim_low_density_2_traj.pdf}
\caption{Simulated trajectories of aircraft in last 100 minutes of Sept 14th (Blue = arrival aircraft, Red = departure aircraft)}
\label{fig:low_2_traj}
\end{center}
\end{figure}
\begin{figure}[p]
\begin{center}
\def12cm{12cm}
\includegraphics[width=12cm]{low_density_2_fuel_comp.pdf}
\caption{Fuel comparison between fuel estimates and simulations for last 100 minutes of Sept 14th}
\label{fig:low_2_fuel}
\end{center}
\end{figure}
The final trajectories of the low density aircraft traffic are shown in Figures \ref{fig:low_1_traj} and \ref{fig:low_2_traj}. In the last 100 minutes of Sept 14th a single arrival aircraft was unable to land successfully in the landing envelope on its first pass, needing to do a go-around before landing successfully on the second attempt. The fuel cost of a go-around is significant and should show up as an outlier in the fuel comparison between the real data and the simulations.
Figures \ref{fig:low_1_fuel} and \ref{fig:low_2_fuel} and Tables \ref{tab:low_1_fuel} and \ref{tab:low_2_fuel} show the comparisons between the fuel estimates obtained from the data and the simulated fuel use. In both cases the majority of aircraft make fuel savings in the simulation compared to the fuel estimates. In all tables $F_S$ is the fuel used in the simulation, $F_1$ is the first fuel use estimate, $F_2$ is the second fuel estimate and mean $F$ is the mean of the two fuel estimates.
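The "Fuel Saving \%" column in these tables compares the simulated fuel use against the mean of the two data-derived estimates. A minimal sketch (the helper name is ours):

```python
def fuel_saving_percent(F_s, F_1, F_2):
    """Percentage fuel saving of the simulation relative to the mean of the
    two data-derived estimates; negative values mean the simulation burnt
    more fuel than the estimated real usage."""
    mean_F = 0.5 * (F_1 + F_2)                    # mean F column
    return mean_F, 100.0 * (mean_F - F_s) / mean_F
```

Checking the first row of the first table: $F_S=40.23$, $F_1=36.47$, $F_2=33.25$ gives a mean of 34.86 kg and a saving of $-15.4\%$.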
\begin{table}[htb]
\centering
\begin{tabular}{ c | c | c | c | c| c}
\hline
Aircraft No. & $F_S$ (kg) & $F_1$ (kg) & $F_2$ (kg) & Mean $F$ (kg) & Fuel Saving \% \\
\hline
1 &40.23 &36.47 &33.25 &34.86 & -15.40\\
2 &184.20 &30.08 &21.02 &25.55 & -620.97\\
3 &73.02 &193.63 &269.61 &231.62 & 68.48\\
4 &13.31 &72.11 &84.99 &78.55 &83.05\\
5 &5.91 &91.17 &125.85 &108.52 &94.55\\
6 &83.87 &505.48 &516.39 &510.93 &83.58\\
7 &123.20 &187.12 &225.91 &206.51 &40.34\\
8 &353.95 &746.28 &741.06 &743.67 &52.41\\
9 &41.77 &86.49 &151.63 &119.06 &64.92\\
10 &5.90 &68.14 &85.73 &76.94 &92.34\\
11 &36.21 &552.18 &565.02 &558.60 &93.52\\
12 &171.88 &255.35 &278.81 &267.08 &35.64\\
13 &1.53 &81.74 &163.87 &122.81 &98.76\\
14 &80.95 &142.83 &157.48 &150.15 &46.09\\
15 &1612.85 &1187.06 &1173.86 &1180.46 &-36.63\\
16 &326.73 &1482.06 &1541.67 &1511.87 &78.39\\
17 &27.35 &904.74 &929.20 &916.97 &97.02\\
18 &327.89 &799.50 &753.69 &776.59 &57.78\\
\hline
Total &3510.7 &7422.4 &7819.0 & 7620.7 &53.93\\
\hline
\end{tabular}
\caption{Fuel use comparison between simulation and estimates for first 230 minutes of Sept 14th}
\label{tab:low_1_fuel}
\end{table}
\begin{table}[htb]
\centering
\begin{tabular}{ c | c | c | c | c| c}
\hline
Aircraft No. & $F_S$ (kg) & $F_1$ (kg) & $F_2$ (kg) & Mean $F$ (kg) & Fuel Saving \% \\
\hline
1 &67.15 &93.90 &128.59 &111.24 &39.64 \\
2 &1166.72 &515.56 &516.18 &515.87 &-126.16\\
3 &54.88 &159.55 &252.38 &205.96 &73.36\\
4 &654.54 &146.76 &192.92 &169.84 &-285.38\\
5 &868.04 &885.26 &937.35 &911.31 &4.75\\
6 &206.54 &343.97 &362.41 &353.19 &41.52\\
7 &345.05 &811.48 &773.74 &792.61 &56.47\\
8 &482.51 &76.57 &80.11 &78.34 &-515.93\\
9 &413.82 &236.52 &243.02 &239.77 &-72.59\\
10 &50.04 &330.67 &432.73 &381.70 &86.89\\
11 &95.04 &297.55 &340.01 &318.78 &70.19\\
12 &26.60 &84.90 &241.64 &163.27 &83.71\\
13 &95.42 &121.57 &155.55 &138.56 &31.13\\
14 &8.34 &70.90 &91.49 &81.20 &89.73\\
15 &47.48 &49.43 &97.68 &73.55 &35.45\\
16 &0.079 &0 &504.90 &252.45 &99.97\\
\hline
Total &4582.2 &4224.6 & 5350.7 &4787.7 & 4.29\\
\hline
\end{tabular}
\caption{Fuel use comparison between simulation and estimates for last 100 minutes of Sept 14th}
\label{tab:low_2_fuel}
\end{table}
In the first low density simulations (first 230 minutes of Sept 14th) there is one main outlier case where a significantly higher percentage of fuel was used by the simulation compared to the fuel estimates, and a second case where both the fuel estimates and the simulated fuel use were very high. Figure \ref{fig:outlier_1} shows the largest outlier, where 620\% more fuel was used by the simulated aircraft. Both paths appear superficially similar; however, given the sampling rates, three points for the simulated aircraft (red) should cover a similar distance to one point for the real data (blue), so the simulated aircraft is travelling much faster than the real one. Given that the distance to travel before reaching the landing envelope is small, the objective function in the simulation is likely at fault, favouring an earlier completion time, so that the aircraft leaves the problem, over the fuel minimisation itself. Notably, in the fuel estimations the estimates for 3 of the 5 steps taken were calculated as negative and recorded as 0; it is likely that this has also contributed to the significant difference in fuel estimates.
The second case, where both fuel estimates were high and the simulation fuel was also high, is shown in Figure \ref{fig:outlier_15}. This is the first take-off of the day and takes a longer trajectory than the real data, contributing to the increased fuel use.
\begin{figure}[hptb]
\begin{center}
\def12cm{12cm}
\includegraphics[width=12cm]{low_density_1_traj_2.pdf}
\caption{Trajectory comparison of 2nd vehicle in first 230 minutes of Sept 14th (blue= real aircraft, red= simulated)}
\label{fig:outlier_1}
\end{center}
\end{figure}
\begin{figure}[hptb]
\begin{center}
\def12cm{12cm}
\includegraphics[width=12cm]{low_density_1_traj_15.pdf}
\caption{Trajectory comparison of 15th vehicle in first 230 minutes of Sept 14th (blue= real aircraft, red= simulated)}
\label{fig:outlier_15}
\end{center}
\end{figure}
Considering the second low density simulations (last 100 minutes of Sept 14th), there are two outlier cases where significantly more fuel was used by the simulations than the fuel estimates. Figure \ref{fig:outlier_8} shows the largest outlier, where over 500\% more fuel was used by the simulation than the mean of the fuel estimates. This appears to be a straightforward path requirement, with the aircraft entering the TMA directly in line with the landing envelope. The simulated aircraft significantly slows down whilst following the nominal descent altitude and appears to keep this slow speed until close to the end of its trajectory, where it rapidly descends and accelerates into the flight envelope very close to the airport. The significantly longer time that the simulated aircraft took to land compared to the real aircraft (12 minutes vs 6 minutes) would explain the increased fuel use. The behaviour of the simulated aircraft could be due to congestion around the flight envelope from other arrival traffic, causing this aircraft to slow down in order to allow others to land before it.
\begin{figure}[hptb]
\begin{center}
\def12cm{12cm}
\includegraphics[width=12cm]{low_density_2_traj_8.pdf}
\caption{Trajectory comparison of 8th vehicle in last 100 minutes of Sept 14th (blue= real aircraft, red= simulated)}
\label{fig:outlier_8}
\end{center}
\end{figure}
The second significant outlier in the second low density simulations is shown in Figure \ref{fig:outlier_4}. Here the failure of the simulated arrival aircraft to enter the landing envelope before reaching the airport has resulted in the need for a go-around, which obviously requires significantly more fuel than the real aircraft's successful approach.
\begin{figure}[hptb]
\begin{center}
\def12cm{12cm}
\includegraphics[width=12cm]{low_density_2_traj_4.pdf}
\caption{Trajectory comparison of 4th vehicle in last 100 minutes of Sept 14th (blue= real aircraft, red= simulated)}
\label{fig:outlier_4}
\end{center}
\end{figure}
\subsubsection{High Density Traffic}
The high density traffic scenarios are selected to push the simulation to its limits. In these cases the airport is operating at maximum capacity, handling an arrival or departure runway movement every minute.
Given that the simulation has no built-in runway control, occasional problems with collision avoidance feasibility occur when multiple departure aircraft take off every minute (every 3 simulation time steps). In the highest density scenario considered, 2 aircraft were delayed by up to 2 time steps so as to increase the gap between departure aircraft such that they do not break the collision avoidance constraints. Similarly, 2 aircraft in the second high density scenario had to be delayed outside of the simulation envelope to allow enough space for the remaining aircraft to take off without collision avoidance constraint violations. Aircraft in the simulation tend to turn quickly to their target bearing once they have taken off, and the cylindrical avoidance constraints are conservative; in reality the aircraft cannot reach one another given their turn rates and speeds. A more sophisticated collision avoidance system would likely fix this problem.
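The delays described above were applied by hand; a simple automatic spacing pass could achieve the same effect. The sketch below is one possible scheme and is not the procedure actually used in the simulations:

```python
def space_departures(scheduled_steps, min_gap=3):
    """Push back departure time steps so consecutive take-offs are at least
    `min_gap` simulation steps apart (one take-off per minute = 3 steps of
    20 s). Hypothetical greedy scheme, not the report's manual adjustment."""
    spaced = []
    last = None
    for t in sorted(scheduled_steps):
        if last is not None and t - last < min_gap:
            t = last + min_gap   # delay this departure to restore the gap
        spaced.append(t)
        last = t
    return spaced
```

A greedy pass like this only ever delays aircraft, so it preserves the original departure order while guaranteeing the minimum spacing.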
\begin{figure}[p]
\begin{center}
\def12cm{12cm}
\includegraphics[width=12cm]{sim_2100_traj.pdf}
\caption{Simulated trajectories of aircraft in highest density traffic flow 11.40am-12.40pm of Sept 14th (Blue = arrival aircraft, Red = departure aircraft)}
\label{fig:high_1_traj}
\end{center}
\end{figure}
\begin{figure}[p]
\begin{center}
\def12cm{12cm}
\includegraphics[width=12cm]{high_density_fuel_2100_comp.pdf}
\caption{Fuel comparison between fuel estimates and simulations for highest density traffic flow 11.40am-12.40pm of Sept 14th}
\label{fig:high_1_fuel}
\end{center}
\end{figure}
\begin{figure}[p]
\begin{center}
\def12cm{12cm}
\includegraphics[width=12cm]{sim_3000_traj.pdf}
\caption{Simulated trajectories of aircraft in 4.40pm-5.40pm of Sept 14th (Blue = arrival aircraft, Red = departure aircraft)}
\label{fig:high_2_traj}
\end{center}
\end{figure}
\begin{figure}[p]
\begin{center}
\def12cm{12cm}
\includegraphics[width=12cm]{high_density_fuel_3000_comp.pdf}
\caption{Fuel comparison between fuel estimates and simulations in 4.40pm-5.40pm of Sept 14th}
\label{fig:high_2_fuel}
\end{center}
\end{figure}
The final trajectories of the high density aircraft traffic are shown in Figures \ref{fig:high_1_traj} and \ref{fig:high_2_traj}. Figures \ref{fig:high_1_fuel} and \ref{fig:high_2_fuel}, along with Tables \ref{tab:high_1_fuel} and \ref{tab:high_2_fuel}, show the comparisons between the fuel estimates obtained from the data and the simulated fuel use. In both cases the majority of aircraft make fuel savings in the simulation compared to the fuel estimates. The first high density scenario (11.40am-12.40pm) is especially good for fuel savings, with only some very short paths failing to make a saving over the estimate. The second high density scenario shows a greater variation of fuel savings, with a number of paths failing to make savings over the estimates, but only two paths show significant outlying behaviour.
\begin{table}[phtb]
\centering
\begin{tabular}{ c | c | c | c | c| c}
\hline
Aircraft No. & $F_S$ (kg) & $F_1$ (kg) & $F_2$ (kg) & Mean $F$ (kg) & Fuel Saving \% \\
\hline
1& 6.75 &41.20 &118.71 &79.96 &91.55\\
2& 20.09 &45.35 &47.99 &46.67 &56.94\\
3& 143.47 &31.78 &43.06 &37.42 &-283.43\\
4& 60.46 &222.96 &271.58 &247.27 &75.55\\
5& 370.88 &394.79 &491.19 &442.99 &16.28\\
6& 236.31 &401.19 &395.79 &398.49 &40.70\\
7& 35.04 &141.03 &148.90 &144.96 &75.83\\
8& 937.13 &1203.14 &1158.40 &1180.76 &20.63\\
9& 15.94 &114.13 &238.54 &176.34 &90.96\\
10& 347.35 &194.67 &315.22 &254.94 &-36.25\\
11& 391.88 &581.55 &587.00 &584.28 &32.93\\
12& 224.07 &245.99 &282.36 &264.18 &15.18\\
13& 469.25 &573.37 &536.30 &554.83 &15.42\\
14& 286.07 &924.53 &991.72 &958.13 &70.14\\
15& 135.32 &130.35 &168.95 &149.65 &9.57\\
16& 357.87 &763.70 &705.73 &734.71 &51.29\\
17& 540.73 &784.48 &824.65 &804.56 &32.79\\
18& 113.22 &142.08 &577.63 &359.86 &68.54\\
19& 2077.19 &2405.27 &2317.04 &2361.16 &12.03\\
20& 502.14 &770.08 &817.34 &793.71 &36.74\\
21& 335.33 &215.76 &225.92 &220.84 &-51.85\\
22& 437.88 &826.45 &807.23 &816.84 &46.39\\
23& 193.77 &84.12 &143.55 &113.83 &-70.22\\
24& 407.62 &484.46 &482.49 &483.48 &15.69\\
25& 156.55 &63.27 &106.31 &84.79 &-84.63\\
26& 358.29 &697.57 &691.05 &694.31 &48.40\\
27& 127.01 &179.45 &184.48 &181.97 &30.20\\
28& 8.36 &223.60 &851.40 &537.50 &98.44\\
29& 1785.12 &2546.99 &2300.89 &2423.95 &26.35\\
30& 192.42 &196.04 &229.73 &212.88 &9.61\\
31& 356.64 &853.68 &868.37 &861.02 &58.58\\
32& 751.03 &1227.82 &1529.48 &1378.65 &45.52\\
33& 234.34 &272.95 &280.80 &276.87 &15.36\\
34& 619.16 &1064.15 &1066.75 &1065.45 &41.89\\
35& 342.84 &522.50 &481.20 &501.85 &31.68\\
36& 34.08 &140.89 &139.81 &140.35 &75.72\\
37& 164.20 &568.39 &570.20 &569.30 &71.16\\
38& 403.58 &671.65 &623.49 &647.57 &37.68\\
39& 23.49 &163.07 &163.19 &163.13 &85.60\\
40& 23.23 &151.44 &155.20 &153.32 &84.85\\
41& 218.78 &522.63 &471.82 &497.23 &56.00\\
42& 118.15 &474.08 &433.16 &453.62 &73.95\\
43& 2.97 &107.74 &125.47 &116.60 &97.46\\
\hline
Total &14566 &22370 &23970 & 23170 &37.13\\
\hline
\end{tabular}
\caption{Fuel use comparison between simulation and estimates for 11.40am to 12.40pm of Sept 14th}
\label{tab:high_1_fuel}
\end{table}
\begin{table}[phtb]
\centering
\begin{tabular}{ c | c | c | c | c| c}
\hline
Aircraft No. & $F_S$ (kg) & $F_1$ (kg) & $F_2$ (kg) & Mean $F$ (kg) & Fuel Saving \% \\
\hline
1 &9.42 &104.07 &124.51 &114.29 &91.76\\
2 &105.33 &74.91 &108.67 &91.81 &-14.73\\
3 &366.33 &401.63 &398.24 &399.94 &8.40\\
4 &102.73 &212.76 &234.45 &223.61 &54.06\\
5 &148.83 &243.54 &295.60 &269.57 &44.79\\
6 &612.76 &791.94 &702.23 &747.09 &17.98\\
7 &255.55 &208.69 &219.98 &214.34 &-19.23\\
8 &511.21 &668.67 &606.37 &637.51 &19.81\\
9 &520.56 &638.37 &705.13 &671.75 &22.51\\
10 &171.92 &268.15 &375.57 &321.86 &46.59\\
11 &77.09 &110.13 &183.20 &146.67 &47.44\\
12 &524.75 &807.87 &894.76 &851.31 &38.36\\
13 &511.04 &744.91 &694.32 &719.62 &28.98\\
14 &457.58 &594.28 &560.27 &577.28 &20.74\\
15 &448.99 &554.31 &492.59 &523.45 &14.23\\
16 &136.85 &180.33 &176.21 &178.27 &23.24\\
17 &670.78 &820.13 &809.46 &814.80 &17.68\\
18 &590.81 &285.33 &303.03 &294.18 &-100.84\\
19 &238.68 &249.56 &270.99 &260.27 &8.30\\
20 &614.25 &811.95 &755.17 &783.56 &21.61\\
21 &441.54 &640.34 &577.50 &608.92 &27.49\\
22 &760.94 &240.40 &278.13 &259.27 &-193.49\\
23 &421.44 &572.56 &556.21 &564.39 &25.33\\
24 &733.80 &701.65 &663.05 &682.35 &-7.54\\
25 &7.79 &86.75 &112.10 &99.42 &92.17\\
26 &771.69 &826.40 &819.08 &822.74 &6.21\\
27 &33.47 &397.96 &419.55 &408.75 &91.81\\
28 &880.66 &748.89 &664.47 &706.68 &-24.62\\
29 &890.05 &586.08 &640.08 &613.08 &-45.18\\
30 &441.65 &612.92 &579.21 &596.07 &25.91\\
31 &449.05 &401.71 &366.04 &383.87 &-16.98\\
32 &0.13 &55.08 &56.32 &55.70 &99.78\\
\hline
Total &12908 &14642 &14643 &14642 &11.85\\
\hline
\end{tabular}
\caption{Fuel use comparison between simulation and estimates for 4.40pm to 5.40pm of Sept 14th}
\label{tab:high_2_fuel}
\end{table}
\begin{figure}[hptb]
\begin{center}
\def12cm{12cm}
\includegraphics[width=12cm]{high_density_1_traj_3.pdf}
\caption{Trajectory comparison of 3rd vehicle in 11.40am-12.40pm scenario Sept 14th (blue= real aircraft, red= simulated)}
\label{fig:outlier_3}
\end{center}
\end{figure}
Figure~\ref{fig:outlier_3} shows the single outlier case from the first high density simulation, where on one of the shortest paths in the simulation 283\% more fuel was used by the simulation than the fuel estimates. Similar to the case discussed in the low density simulations (Figure \ref{fig:outlier_1}), the simulated aircraft performs a large airspeed acceleration in the step before it enters the landing envelope. This serves to rush entry into the envelope and obtain an improved cost for finishing sooner, over a slower, more fuel efficient approach. Since both fuel estimates are very low for this section of the path, this rash acceleration shows up very poorly in the percentage fuel saving metric.
\begin{figure}[hptb]
\begin{center}
\def12cm{12cm}
\includegraphics[width=12cm]{high_density_2_traj_18.pdf}
\caption{Trajectory comparison of 18th vehicle in 4.40pm-5.40pm scenario Sept 14th (blue= real aircraft, red= simulated)}
\label{fig:outlier_18}
\end{center}
\end{figure}
Figure~\ref{fig:outlier_18} shows the first outlier case from the second high density simulation, where an arrival aircraft has taken a very circuitous route towards the landing envelope. Initially pushed slightly off course by other aircraft arriving at a similar point in the TMA, the route taken is significantly further south than the standard arrival path. The aircraft also shows undesirable behaviour at the point where it begins to turn back towards the airport and the landing envelope. This path is currently recorded as having a 100\% increase in fuel use, but since the aircraft has yet to land within the simulation window the true figure would be far higher. Altering the flow field to encourage less of a diversion for this arrival aircraft would improve matters. Lowering the cost function weighting on the flow field could also help, but only when used alongside an increase in horizon length.
\begin{figure}[hptb]
\begin{center}
\def12cm{12cm}
\includegraphics[width=12cm]{high_density_2_traj_20.pdf}
\caption{Trajectory comparison of 22nd vehicle in 4.40pm-5.40pm scenario Sept 14th (blue= real aircraft, red= simulated)}
\label{fig:outlier_22}
\end{center}
\end{figure}
Figure~\ref{fig:outlier_22} shows the second outlier case from the second high density simulation, where the 22nd vehicle takes an aborted go-around to reach the landing envelope. This go-around appears to have been motivated by a collision avoidance manoeuvre and explains the 193\% increase in fuel used over the fuel estimates, which follow a far simpler and smoother path.
\subsubsection{Overall Fuel and CO$_2$ Savings}
The summary tables for all four scenarios (both high and low density) demonstrate an overall fuel saving, summed across all aircraft, of between 4.29\% and 53.93\%. In total across all four scenarios 14653.5 kg of fuel was saved, corresponding to an overall saving of 29.7\% across every aircraft simulated. On average during flight 3.15 g of CO$_2$ is produced for each gram of aircraft fuel burnt \cite{CORINAIR}, leading to a saving of 46158.52 kg of CO$_2$.
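The CO$_2$ figure follows directly from the CORINAIR emission factor:

```python
CO2_PER_KG_FUEL = 3.15  # kg of CO2 per kg of aircraft fuel burnt (CORINAIR average)

def co2_saving_kg(fuel_saved_kg):
    """CO2 saving implied by a given fuel saving."""
    return fuel_saved_kg * CO2_PER_KG_FUEL
```

With the 14653.5 kg of fuel saved across all four scenarios, this gives $14653.5 \times 3.15 \approx 46158.5$ kg of CO$_2$.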
\section{Model Extension}
\subsection{Noise Pollution Reduction}
For the noise pollution reduction cost, a basic population density distribution was generated over the area surrounding Gatwick airport. Google Maps satellite images were used to identify areas of population (assumed to be the grey, built-up areas on the satellite map) of radius 1.5 km or greater, and their co-ordinates relative to the airport were noted. In this way 20 population centres were identified in the TMA area (shown in Figure \ref{fig:pop_circles}).
\begin{figure}[htbp]
\begin{center}
\def12cm{13cm}
\includegraphics[width=12cm]{pop_centres_circles.pdf}
\caption{Location of 20 population centres around Gatwick airport and their relative size. $1=$ Crawley, $2=$ Horsham, $3=$ Haywards Heath, $4=$ Reigate, $5=$ East Grinstead, $6=$ Crowborough, $7=$ Horley, $8=$ Burgess Hill, $9=$ Dorking, $10=$ Billingshurst, $11=$ Leatherhead, $12=$ Tunbridge Wells, $13=$ Tonbridge, $14=$ Sevenoaks, $15=$ Orpington, $16=$ Croydon, $17=$ Sutton, $18=$ Guildford, $19=$ Godalming, $20=$ Uckfield.}
\label{fig:pop_circles}
\end{center}
\end{figure}
\begin{figure}[htbp]
\begin{center}
\def12cm{14cm}
\includegraphics[width=12cm,viewport=0.5in 8in 7in 11in,clip=true]{Gatwick_Airport_Google.pdf}
\caption{Map of the local area around Gatwick airport (courtesy of GoogleMaps)}
\label{fig:gatwick_map}
\end{center}
\end{figure}
A grid of 1km spacing was then imposed across the entire area and the population density in each location was estimated using a Gaussian function:
\begin{equation}
\mathrm{popdense}(x,y)=\min \left(\sum_{i=1}^{20} {{e^{-{{b_i(x,y)}^2}\over{2{c_i}^2}}}\over{c_i\sqrt{2\pi}} }, 1\right).
\end{equation}
Here $b_i$ is the distance from the grid point to the $i$-th population centre and $c_i$ is the radius of that population centre. This results in the population density map shown in Figure \ref{fig:pop_dense}, where brighter areas correspond to higher population density. Ideally aircraft would not fly over populated areas, or, if they did so, would fly at heights such that their noise pollution is negligible. The population density grid is therefore augmented with a cost based on the aircraft's altitude.
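The grid construction can be sketched as follows; the function name is ours and any example centres passed to it are hypothetical, not the 20 real ones:

```python
import math

def popdense(x, y, centres):
    """Population density at grid point (x, y), following the Gaussian sum
    above; `centres` is a list of (cx, cy, radius) tuples, one per
    population centre, with all lengths in km."""
    total = 0.0
    for cx, cy, c in centres:
        b = math.hypot(x - cx, y - cy)  # distance b_i to the i-th centre
        total += math.exp(-b ** 2 / (2.0 * c ** 2)) / (c * math.sqrt(2.0 * math.pi))
    return min(total, 1.0)              # density is capped at 1
```

Note that with this normalisation a single centre of radius $c_i$ contributes at most $1/(c_i\sqrt{2\pi})$ at its peak, so for the radii used here (1.5 km and above) the cap at 1 only binds where several centres overlap.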
\begin{figure}[bhtp]
\begin{center}
\def12cm{12cm}
\includegraphics[width=12cm]{pop_dense.pdf}
\caption{Population density estimations in local area surrounding Gatwick airport}
\label{fig:pop_dense}
\end{center}
\end{figure}
\begin{equation}
J_{\mathrm{noise}}=1-\max\left(\left(1-\left( 1- {{A_c-a}\over{A_c}}\right)^2\right),0\right)\mathrm{popdense}(x,y),
\end{equation}
where $a$ is the altitude of the aircraft and $A_c$ is an altitude cutoff above which the noise of the aircraft becomes negligible to the population on the ground; experimentally this has been set at 4000 m. The quadratic altitude element heavily penalises aircraft flying low to the ground. If an aircraft flies above the altitude cutoff the quadratic element goes to 0 and $J_{\mathrm{noise}}$ stays at 1 regardless of the population density. This cost function could be further augmented with terms relating the thrust of an aircraft's engines to the amount of noise produced. The cost is then weighted according to the relative importance of noise pollution reduction versus the other cost terms, such as following the flow field, following the nominal altitude and fuel saving.
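The altitude shaping can be sketched directly from the formula; `pop` here is just the local population density value in $[0,1]$, and the function name is ours:

```python
def noise_cost(a, pop, A_c=4000.0):
    """Noise reward J_noise for an aircraft at altitude a (m) over local
    population density pop in [0, 1]; A_c is the altitude cutoff."""
    # (1 - (A_c - a)/A_c) = a/A_c, so the shaping term is max(1 - (a/A_c)^2, 0)
    shaping = max(1.0 - (1.0 - (A_c - a) / A_c) ** 2, 0.0)
    return 1.0 - shaping * pop
```

At ground level over a fully populated cell the reward drops to 0; at or above the cutoff it is 1 regardless of the population density, matching the behaviour described above.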
This implementation of a noise reduction reward is an extension of a basic penalty method which could trivially be used to impose no-fly zones or minimum altitude constraints around an airport. Specific no-fly zones could also be implemented through hard constraints; however, soft constraints via cost penalties are easier for the SMC method to handle and, given enough time, will converge to an optimal solution.
\subsection{Noise Pollution Simulation Results}
To test the performance of the noise pollution cost function and its effects on aircraft trajectories, 22 individual scenarios were run: first with the noise pollution reduction carrying a weighting of 10\% importance in the cost function and then with 20\% importance. The 22 scenarios chosen were single aircraft simulations similar to those used to test the original cost functions in Section \ref{sec:SAT}, with the exception of 2 omitted scenarios where either an arrival aircraft flies directly into the path of take-off aircraft or a departure aircraft is asked to leave in the direction of the runway landing envelope. Single aircraft trajectories were chosen for this testing in order to remove behaviour caused by interaction between aircraft and to limit the trajectory changes seen to those directly caused by the noise pollution element of the cost function. All simulations were performed under the same wind conditions and with the same type of aircraft. Arrival aircraft were arranged to start at 30\degree\ intervals around the perimeter of the TMA and told to head to the landing envelope. Departure aircraft all start from the airport and are given a desired bearing on which to leave the TMA. Notably, departure aircraft do not have to reach their desired bearing to leave the TMA: they are considered to have left the TMA once they pass more than 30 km from the airport, regardless of their current bearing.
\begin{figure}[bhtp]
\begin{center}
\def12cm{12cm}
\includegraphics[width=12cm]{10noise_arrive.pdf}
\caption{Trajectories for 11 separate arrival simulations in presence of 10\% weighted noise pollution reduction cost}
\label{fig:10arrive}
\end{center}
\end{figure}
\begin{figure}[bhtp]
\begin{center}
\def12cm{12cm}
\includegraphics[width=12cm]{20noise_arrive.pdf}
\caption{Trajectories for 11 separate arrival simulations in presence of 20\% weighted noise pollution reduction cost}
\label{fig:20arrive}
\end{center}
\end{figure}
Figure \ref{fig:10arrive} shows the arrival aircraft simulations over the top of the population density map for the 10\% noise pollution cost weighting and Figure \ref{fig:20arrive} for the 20\% weighting. Compared to the pure flow field with fuel reduction cost shown in Figure \ref{fig:arrive} (which operated on a shorter sampling time of 10 seconds instead of the 20 seconds used for the noise pollution simulations) there is a significant change in the pathing of aircraft to the landing envelope. With the 10\% weighting for noise pollution the paths behave more like those of the original, non-augmented cost function than those for the 20\% weighting. This is because the 20\% importance weighting requires a reduction in the weighting of many other elements of the cost function which would otherwise ensure smooth aircraft pathing via the flow field and nominal altitude profiles. In the 20\% importance simulations at least 3 of the aircraft had to complete go-around manoeuvres to reach the landing envelope successfully. This behaviour was limited to aircraft entering from the southern side of the airport, where they had to avoid the population zone at East Grinstead and could do so by heading either to the left or to the right of the region. With the 10\% importance weighting, aircraft were still prone to pass over East Grinstead in order to enter the landing envelope faster, driven by the importance of the other elements of the cost function. Arrival aircraft entering from the northern side of the TMA are seen to take a larger diversion than they normally would so as to avoid the Croydon/Sutton population centre. Since more outlying districts of London exist beyond Croydon and Sutton, the population centres for this section of the map would need to be extended to encourage aircraft to use the gap between Croydon and Leatherhead instead.
\begin{figure}[bhtp]
\begin{center}
\def12cm{12cm}
\includegraphics[width=12cm]{10noise_depart.pdf}
\caption{Trajectories for 11 separate departure simulations in presence of 10\% weighted noise pollution reduction cost}
\label{fig:10depart}
\end{center}
\end{figure}
\begin{figure}[bhtp]
\begin{center}
\def12cm{12cm}
\includegraphics[width=12cm]{20noise_depart.pdf}
\caption{Trajectories for 11 separate departure simulations in presence of 20\% weighted noise pollution reduction cost}
\label{fig:20depart}
\end{center}
\end{figure}
Figures \ref{fig:10depart} and \ref{fig:20depart} show the departure aircraft simulations. Here some immediate differences can be seen between Figure \ref{fig:depart} and both the 10\% and 20\% importance weightings. The 10\% importance weighting again roughly follows the pattern established by the original cost functions, with the exception of the trajectory with the desired 0\degree\ bearing to leave the TMA. The departure cost function contains elements which reward altitude gain, fuel minimisation, desired bearing matching and minimisation of noise pollution. No element of the function rewards gaining distance from the airport itself; instead it relies on the general tendency of an aircraft matching a bearing and gaining altitude to fly in a straight line along that bearing and thus leave the airport. The 0\degree\ bearing simulation finds the degenerate case where altitude gain at roughly the right bearing can be achieved by continuously circling and climbing. This behaviour is likely initiated by the noise pollution term, which reduces the penalty for noise pollution the higher the aircraft passes over a population region (reaching 0 at the 4 km altitude cutoff). With the 10\% importance weighting this circling climb continues until the maximum altitude is reached and the simulation is terminated by the constraint violation. The same behaviour is visible with the 20\% importance weighting, although there the aircraft does break out of the degenerate behaviour and leaves the TMA before the maximum altitude constraint is reached.
The 20\% importance weighting departures on the northern half display a preference to avoid flying over the Leatherhead population centre, taking a diversion around the area rather than the tighter turn visible in the simulations with no noise pollution cost. Of significant interest is the effect that the 20\% importance weighting has had on the southern departures. In the 10\% importance simulations, flying over Crawley's population centre still seems to be considered acceptable. By comparison, at 20\% importance the lower population density gap between Crawley and Horsham is no longer considered worth flying through. A secondary break in population hubs exists between Horsham and Billinghurst, but by this point the aircraft are close enough to the TMA borders that they gain a greater improvement in cost function by leaving the TMA early rather than by turning onto their desired bearing.
The noise pollution reduction cost is clearly shown to cause deviations in aircraft paths. Care is needed to maintain a balance between all the elements of the cost function such that degenerate behaviour like the uncontrolled climbing is avoided without clouding the importance of the noise pollution reduction.
\section{Conclusions}
This report has presented the application of Sequential Monte Carlo (SMC) optimisation in the framework of Model Predictive Control (MPC) for the control of aircraft in the Terminal Manoeuvring Area (TMA). Through parallelisation on graphics processing units (GPUs), the otherwise slow SMC method can be significantly accelerated. This allows for the solution of very complicated scenarios with both arrival and departure aircraft, in three dimensions, in the presence of a stochastic wind model and non-convex avoidance constraints.
The goal of the majority of the control optimisation done in this report is to reduce fuel usage, and thus CO$_2$ production, in and around the TMA. The benefits of this optimisation were demonstrated by comparison to real air traffic data around Gatwick airport over a 24 hour period of normal traffic patterns. Despite some aircraft trajectories looking far longer than the real paths, significant fuel savings were achieved through improved usage of continuous descent and continuous ascent profiles. Some of this fuel saving comes from the freedom that the aircraft are given in our model to fly where they choose in the airspace. The use of a flow field to guide arrival aircraft, though very influential on our final aircraft trajectories, is far less strict than the specific corridors and beacons used in the actual airport. Our results therefore do rely on a level of flexibility and accuracy of aircraft positioning in the airspace. In total, across all simulated scenarios, 14653.5 kg of fuel was saved in comparison to the real data estimates. This amounts to an overall fuel saving of 29.7\% across all simulated aircraft, and a saving of 46158.52 kg of CO$_2$.
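As a consistency check on these figures (our own cross-check, not part of the report's analysis): assuming the standard Jet A-1 emission factor of roughly $3.15$ kg of CO$_2$ per kg of fuel burned, the reported CO$_2$ saving follows directly from the fuel saving.

```python
# Consistency check. Assumption: the standard Jet A-1 emission factor of
# ~3.15 kg CO2 per kg of fuel burned (this value is not stated in the report).
fuel_saved_kg = 14653.5                      # total fuel saved, all scenarios
emission_factor = 3.15                       # kg CO2 per kg fuel (assumed)
co2_saved_kg = fuel_saved_kg * emission_factor
# co2_saved_kg is ~46158.5 kg, matching the reported 46158.52 kg of CO2.
implied_total_fuel = fuel_saved_kg / 0.297   # ~49338 kg burned in the real data
```

The 29.7\% relative saving then implies a real-data fuel burn of roughly 49000 kg across the compared flights.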
The model was also extended to include the effects of noise pollution. Firstly, it proved entirely feasible to incorporate this type of cost into the existing model. The minimal number of restrictions on the form of the cost for SMC (bounded maximisation) easily allows the inclusion of complicated costs or objectives which may not have been suitable in other optimisation methods. The results from this extension also demonstrated how behaviour varies with the importance weighting given to noise pollution in the overall objective function, and the importance of tuning these weightings thoroughly.
\section{Future Directions}
The work presented in this report has a wide scope for continuation. Some directions are inspired by challenging the smaller simplifying decisions taken along the way, others by the desire to reach a compromise between the current workings of ATC and the system envisioned here. This discussion of possible future directions is by no means exhaustive.
\subsection{Algorithmic Improvements}
The underlying SMC algorithm was adapted to reduce the dimensionality of the search space through parallel solution of multiple agent problems. Aside from this adjustment, and the minor alterations used to make the method run in a GPU kernel, the base algorithm has been kept very simple. One option for algorithmic improvement would be to structure the search patterns used by SMC: instead of traversing the search space in a purely random fashion through perturbations of the controls, a hybrid of more ordered local searching could be combined with random elements, in a similar fashion to simulated annealing. Work into how best to characterise the search space, and how many particles are needed for a given complexity of problem, would also directly benefit this work.
Currently all control variables optimised have been considered as continuous but bounded values. Although realistic from a control perspective, this is not necessarily viable for real world air traffic control. Asking a pilot to set the thrust, bank and climb of an aircraft to very specific (non-round) values on such a rapid update cycle would be infeasible. One alternative would be to adapt the optimisation algorithm to operate on discrete controls. This would significantly diminish the size of the search space, but would require thought on how the search space is traversed. How finely to discretise the controls would also be an interesting topic to consider. The algorithm has the scope to optimise between specific set piece moves of aircraft (such as climb 1000 feet and change heading by 10\degree) which would be very easy to relay in the current ATC environment, but would reduce the potential fuel savings due to the coarser nature of the controls. A cost benefit analysis of any discretisation would be vital in order to accurately weigh the alternatives.
Similarly, the MPC set-up could be altered to address the tension between the rapid update cycle required by the linearised dynamics equations and the communications overhead of delivering these instructions to a pilot. If the decision steps were placed further apart, with monitoring steps between them to allow for smooth updates of the dynamics equations and constraint management, the regularity of communication needed would be reduced. However, this would potentially place further conservatism on the planned trajectories, since the aircraft must remain within their constraints and away from avoidance regions under less regular control. It could also allow the control horizon to be greatly extended, which would give better longer term decision making in avoidance situations.
\subsection{Scenario Enhancements}
The scenarios considered in the report have had a number of limitations imposed upon them, most notably the use of a single runway which is outside of the control of the simulation itself. One clear extension would be to build a runway management system into the optimisation, for instance through basic runway `blocking', where an aircraft which has just used the runway blocks it from being used for a short period of time. This would need to use the landing envelope to predict for how long an aircraft blocks the runway after entering the envelope. Additionally, some form of priority system may need to be developed so as to stop arrival aircraft always blocking a runway and delaying departure aircraft from taking off. This scenario enhancement would benefit from a longer control horizon, or some estimate of time of arrival, to plan runway movements accordingly.
Multiple runways are common in larger airports across the world and could be implemented within this work. In its simplest form, the decision on which runway each aircraft uses could be pre-allocated; if a runway management system is in place, a more dynamic choice could be made by the optimisation itself. This would require more investigation into how flow fields for arrival aircraft guide them to their desired runway, along with emergency switching of flow fields for last minute runway changes.
With respect to the noise pollution model extension, a simple but potentially interesting adaptation to the noise pollution cost could be the variation of the population density field over time. Some airports operate periods of respite for population centres over the day. By imposing a harsh penalty on aircraft flying over a given population centre during its respite time, the model would demonstrate how air traffic paths evolve over the day as the underlying population density field is artificially shaped.
\subsection{Computational Speed Ups}
Scope for additional computational speed up exists. The SMC algorithm was demonstrated as near real time for en-route traffic optimisation, but the increase in complexity of scenarios for TMA optimisation necessitated an increase in particle numbers, which increased the computational solve time. Some improvements can be made through data storage optimisation: many values are currently stored as doubles where a float would be just as effective. How data is copied to the GPU could also be improved, as this makes up a large overhead.
Some investigation into whether any benefit is gained from switching to larger or multiple GPUs along with how to optimise the spread of particles across these GPUs (similar to the analysis carried out in Section \ref{sec:compsave}) would be required.
Computational speed up could also result from a more algorithmic adjustment, where the number of particles is allowed to vary throughout the SMC loops. The largest number of particles is required at the beginning of the optimisation, to characterise as much of the search space as possible. As the method searches further and narrows down into areas of interest, this number of active particles can be allowed to drop. This would require some changes in coding, since we currently assume a fixed number of particles, and thus of threads, for the kernels. It is, however, entirely possible that even with that coding overhead, varying particle numbers downwards through the optimisation would yield speed ups.
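The tapering particle budget can be illustrated on a toy problem. The sketch below is purely illustrative; the function names, decay schedule and selection rule are our own assumptions and not the implementation used in this report.

```python
import random

def smc_maximise(objective, bounds, n0=512, decay=0.7, iters=8):
    """Toy SMC-style maximiser with a shrinking particle budget (sketch only).

    Starts with n0 particles to cover the search space, then lets the
    active particle count decay geometrically as the search narrows,
    mirroring the speed-up idea discussed in the text.
    """
    lo, hi = bounds
    particles = [random.uniform(lo, hi) for _ in range(n0)]
    n, sigma = n0, 0.2 * (hi - lo)
    for _ in range(iters):
        scores = [objective(p) for p in particles]
        ranked = [p for _, p in sorted(zip(scores, particles), reverse=True)]
        n = max(16, int(n * decay))                 # shrink the budget
        survivors = ranked[: max(1, n // 2)]        # keep the best half
        particles = [min(hi, max(lo, random.choice(survivors)
                                 + random.gauss(0.0, sigma)))
                     for _ in range(n)]             # perturb resampled particles
        sigma *= 0.6                                # tighten the local search
    return max(particles, key=objective)

random.seed(0)
best = smc_maximise(lambda x: -(x - 1.3) ** 2, (-5.0, 5.0))  # converges near 1.3
```

With this schedule the total number of objective evaluations is cut by more than half relative to a fixed budget of $n_0$ particles per loop, at the cost of the bookkeeping discussed in the text.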
\section{Data access statement}
The data used for this study is archived in the University of Cambridge `DSpace@Cambridge' repository, under the title \emph{Air traffic in Gatwick TMA on 14 September 2013}. It can be accessed at\\
\url{http://www.repository.cam.ac.uk/handle/1810/248130}
\section{Acknowledgements}
The authors would like to acknowledge the funding from EPSRC (Engineering and Physical Sciences Research Council - UK) Grant No. EP/G066477/1.
Data for this study was downloaded from FlightRadar24.
The authors would also like to acknowledge their collaborators Thomas Chau and Wayne Luk from Imperial College, London for their provision of GPU facilities and expertise. Finally the authors would like to thank Thomas Kent and Arthur Richards from the University of Bristol for their expertise in accessing FlightRadar24 data.
\bibliographystyle{plainnat}
\section{Introduction}
The thought experiment known as ``Maxwell's Demon''~\cite{maxwell1871theory}
addressed the issue that the Second Law of thermodynamics is statistical in nature. An ideal gas at temperature $T$ is enclosed in an isolated container, divided into two equal parts by a fixed wall with a trap door operated by some sentient being, later called a ``demon'', who opens the door selectively to incoming particles, sorting them by velocity. This process would result in a temperature gradient that could be used to obtain work from the system. Maxwell's idea started an ongoing debate, to which Szil\'{a}rd\ contributed significantly with a model that circumvents the necessity of a sentient being, replacing it by a simple mechanism which, importantly, retains the main feature of the ``demon'', namely that of having a memory.
Szil\'{a}rd's engine consists of a single particle gas within a container divided into two equal partitions separated by a movable, frictionless wall \cite{szilard-german}. When the container is put into contact with a single thermal reservoir at temperature $T$, the movable wall may be used to extract work (\textit{e.g.} by lifting a weight) when moved towards the empty side of the box, as the particle transfers kinetic energy in successive elastic collisions. Being able to do this requires knowledge of which side is empty at the beginning of work extraction, which has to remain present throughout, i.e., it requires a memory. When operated cyclically, the average extractable work is compensated by the average amount of work that has to be done to run the memory. A quasi-static, isothermal volume expansion yields work $W_{\text{ext}}= k_B T \ln\left({V \over V/2}\right) = k_B T \ln 2$, which corresponds to $k_B T$ times the mutual information captured about the coarse grained particle location. This idea has served as a foundation not only for computing fundamental thermodynamic bounds for information processing (e.g. \cite{landauer1961irreversibility, bennett1982thermodynamics, sagawa2012thermodynamics, parrondo2015thermodynamics, CB}), but also for concretely demonstrating how information can be turned into work, and vice versa. Recently, interest in these issues has spiked with increased experimental capabilities \cite{sagawa2009minimal, berut2012expLandauer, mandal2012work, SagawaUedaFeedback12, mandal2014, koski2014experimental, exp-landauer2014, martinez2016brownian, hong2016experimental, gavrilov2016erasure, gavrilov2017direct, lathouwers2017memory, kumar2018nanoscale, paneru2018lossless, admon2018experimental, wolpertbook2019}.
Different variations of the standard Szil\'{a}rd\ engine have been discussed, including generalisations to $N$-particles systems \cite{kim2011quantum,kim2011information} and non-ideal (classical or quantum) particles \cite{Horowitz_2011,bengtsson2018quantum}.
Here, we study how much work can be extracted, on average, when a Szil\'{a}rd\ engine, operated quasi-statically, contains an ideal gas of $N$ particles, and when $q$ partitions can be created in the box. We assume that the observer counts and memorizes how many particles fall into each partition, and then exploits the isothermal expansion of the ideal gas in the different compartments to extract work, as in the original Szil\'{a}rd\ box \cite{szilard-german}.
We show in Sec. \ref{sec2} that the average extracted work is proportional to the mutual information retained in memory about the location of a single particle, not about the location vector of the ensemble. The latter information controls the minimal cost of memorizing the counts, and the difference between the two controls a lower bound on the dissipation of the engine (when run cyclically). We calculate how much average work can maximally be extracted when the choice of where to place the movable walls before the measurement is optimized, for fixed $N$ and $q$. To build intuition, Sec. \ref{sec3} treats the case with only one movable wall, $q=2$, and confirms agreement with previous work. The general case is then treated in Sec. \ref{sec4}.
\section{Mutual information and work}
\label{sec2}
We consider a Szil\'{a}rd\ engine, generalized to $N$ particles inside a container of longitudinal size $L$ and transverse unit area.
The gas in the container is coupled to a thermal reservoir, and is in thermal equilibrium at the beginning of each cycle. The mass of the walls is assumed to be much larger than the mass of the particles (which is set to unity). The engine is run cyclically as follows:
\begin{enumerate}
\item Preparation step (assumed not to require work): Insert walls which divide the container along the longitudinal axis into $q$ partitions with lengths \mbox{$\bm{\ell}= (\ell_1, \ldots, \ell_q)$}, see Fig.~\ref{fig:diagram}a.
\item Measurement and data representation step: the observer has {\it a priori} knowledge of the experimental setup, including
$\bm{\ell}$, and is provided with a snapshot of the $N$ particles' x-positions, denoted by $\bm{x}=(x_1, \ldots, x_N)$. Using this measurement, the observer commits to memory the counts of how many particles reside in each partition, denoted by $\bm{k}=(k_1, \ldots, k_q)$.
\item Work extraction step: using the information committed to memory, weights are attached in such a way that work is extracted while the walls move quasi-statically until the pressure is equalized across the container. The observer knows the number density in each partition, $k_i/\ell_i$, and thus the local pressure, $P_i=k_B T k_i/\ell_i$ \footnote{The volume of partition $i$ is $\ell _i$, because the container has transverse unit area.}. This enables the observer to determine how much each partition will be moved by the expansion of the gas, and in which direction. In equilibrium, the pressure is $P_i^{(\rm eq)}=N k_BT/L$ in all partitions, which implies that the lengths of the partitions after work extraction are $\ell_i^{(\rm eq)}=k_i L/N$.
\item Return to beginning: Walls are pulled out (assumed not to require work).
\end{enumerate}
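The cycle above is straightforward to simulate. The sketch below is our own illustration (with $k_B T = 1$ and illustrative partition lengths): it draws $N$ uniform particle positions, performs the counting measurement, and evaluates the isothermal expansion work $k_B T \sum_i k_i \ln[k_i L/(N\ell_i)]$ of Eq.~(\ref{eq:work-extr}), with the convention $0\ln 0 = 0$ for empty partitions.

```python
import math
import random

def szilard_cycle(N, lengths, kT=1.0):
    """One cycle of the N-particle, q-partition engine (illustrative sketch)."""
    L = sum(lengths)
    # Steps 1-2: place particles uniformly, then count them per partition.
    xs = sorted(random.uniform(0.0, L) for _ in range(N))
    counts, edge, j = [0] * len(lengths), lengths[0], 0
    for x in xs:
        while x > edge and j < len(lengths) - 1:
            j += 1
            edge += lengths[j]
        counts[j] += 1
    # Step 3: isothermal expansion work, k_B T * sum_i k_i ln(k_i L / (N l_i)),
    # using the convention 0 ln 0 = 0 for empty partitions.
    W = kT * sum(k * math.log(k * L / (N * l))
                 for k, l in zip(counts, lengths) if k > 0)
    return counts, W

random.seed(0)
counts, W = szilard_cycle(N=20, lengths=[0.1, 0.2, 0.3, 0.25, 0.15])
```

Note that $W(\bm{k})$ is $k_B T N$ times a Kullback--Leibler divergence between the empirical occupation fractions $k_i/N$ and the a priori probabilities $\ell_i/L$, so it is non-negative on every cycle.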
\begin{figure}[h]
\centering
\includegraphics[width=\linewidth]{figures/diagram_2.pdf}
\caption{Schematic representation of a Szilard engine with $q=5$ partitions and $N=20$ particles. Panel (a) depicts the initial state, where the length of each partition is given by $\ell_i$ and the red arrows indicate in which direction each wall is going to move as the engine performs work. The final state, which is assumed to be reached quasi-statically, is shown in panel (b), where the length of each partition is given by $\ell_i^{(\rm eq)} = k_iL/N$ and thus the pressure is the same in all of them. The values indicated in the lower panel correspond to the counts obtained by setting $L=1$.
}
\label{fig:diagram}
\end{figure}
We know that the total information captured by the observer's memory about the available data determines the energetic cost of the memory, while only the relevant part of this information determines the potential energetic gain \cite{CB}. To assess the engine's overall efficiency, we need to calculate both aspects.
In the setup we consider here, available data are the x-positions of the ensemble, $\bm{x}$, and work extraction can happen only via movement of the walls towards pressure equalization. Since the particles of an ideal gas are non-interacting, only information about the locations of the $N$ individual particles matters, none of the correlations between particle positions make any difference to calculating the density, and thus the pressure, in the different partitions.
But the counts $\bm{k}$ do contain information about the ensemble, because knowledge of $k_i$ constrains the possible counts in the other partitions, $k_{j\neq i}$.
Counting is thus intrinsically wasteful in this situation, because it captures some irrelevant information. Overall engine dissipation is proportional to irrelevant information \cite{CB} when the observer is viewed as a part of the information engine and the observer's energetic costs are taken into account.
We thus have a quantitative expectation of the costs and gains associated with the observer's memory, given the physical constraints imposed by our specific work extraction protocol. To make that precise, let $\bm{K}$ denote a random variable with realizations $\bm{k} \in \mathcal{K}$. This, together with $p(\bm{k}|\bm{x})$, characterizes the memory we have chosen here: the values of all $N$ x-positions are mapped onto memory states, which are specified by the count vector $\bm{k}$.
We are using the following standard notational shortcuts: vectors are bold face symbols, entropy functionals are written as $H[X] = -\langle \ln{p(x)} \rangle_{p(x)}$, and \mbox{$H[\bm{X}] = -\langle \ln{p(\bm{x})} \rangle_{p(\bm{x})} = -\langle \ln{p(x_1, \dots, x_N)} \rangle_{p(x_1, \dots, x_N)}$}, where $\langle \cdot \rangle_{p}$ denotes the average. Conditional entropy and information are written accordingly \cite{CoverThomas}. For readers unfamiliar with information theory, all terms and calculations pertaining to this section are written out in detail in Appendix \ref{A}.
While the counts committed to memory capture information in the amount of $I[\bm{X}, \bm{K}]$, the given work extraction protocol should not allow the observer to recover that much work. We expect that the observer can use only the total information captured about individual particle locations, which is
$\sum_{i=1}^N I[X_i, \bm{K}] = N I[X,\bm{K}]$, because the particles are identical.
\subsection{Extractable work and relevant information}
\label{W-I}
The work extracted \cite{callen1998thermodynamics} when the partition lengths change from $\ell_i$ to $k_i L/N$ is
\beeq{
W(\bm{k})&=\sum_{i=1}^q\int_{\ell_i}^{L\frac{k_i}{N}} d V_i P_i= k_B T \sum_{i=1}^q\int_{\ell_i}^{L\frac{k_i}{N}} d V_i k_i/V_i \\
&= k_BT\sum_{i=1}^q k_i \ln \frac{k_iL}{N\ell_i}\,.
\label{eq:work-extr}
}
The expected extracted work \footnote{Averages are denoted by $\bracket{\cdot}$}, $\bracket{W}$, then results from averaging $W(\bm{k})$ over the distribution $P(\bm{k})$ of measurement vectors given by the multinomial distribution
\beeq{
P(\bm{k}) = \frac{N!}{k_1! \cdots k_q!} \, p_1^{k_1} \cdots\, p_q^{k_q}, \label{multin dist}
}
where $p_i = \ell_i/L$ is the probability of finding any one particle in partition $i$.
We now calculate the relevant information captured by the counts:
\beeq{
I[X ,\bm{K}]
&= \sum_{\bm{k}} P(\bm{k})\int {\rm d}x P({x}|\bm{k}) \ln{\frac{P({x}|\bm{k})}{P(x)}} ~.
}
First, note that
the probability of finding a single particle in any location along the x-axis is $P(x) = 1/L$. Second, within each partition, $j$, the overall probability of finding a particle, given the count vector $\bm{k}$, is $k_j/N$.
The probability of finding a single particle in position $x$ within partition $j$ is uniform over the length of the partition, $\ell_j$. Therefore, the probability of finding a particle in position $x$, given the counts $\bm{k}$ is:
\beeq{
P(x|\bm{k}) = \frac{k_i}{N \ell_i},\quad \sum_{j=0}^{i-1} \ell_j \leq x < \sum_{j=1}^{i} \ell_j\, ,
\label{eq:Pxgk}
}
for $i=1,\ldots,q$, using the convention $\ell_0=0$.
Putting everything together lets us compute how much information the counts contain about the location of a single particle:
\beeq{
I[X ,\bm{K}]
&=\left\langle \sum_{i=1}^q \frac{k_i}{N} \ln \frac{k_i L}{N \ell_i} \right\rangle_{P(\bm{k})} \, .
\label{eq:a}
}
We thus arrive at our main result: combining Eqs. (\ref{eq:work-extr}) and (\ref{eq:a}), tells us that the average extracted work is proportional to $N$ times the single particle location information captured by the counts,
\begin{equation}
\label{mainresult}
\bracket{W}= k_B T NI[X,\bm{K}].
\end{equation}
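Eq.~(\ref{mainresult}) can be checked numerically. The sketch below is our own verification (with $k_B T = 1$ and $q=2$): the sample mean of the per-cycle work over many simulated measurement cycles converges to the exact multinomial (here binomial) average $N I[X,\bm{K}]$.

```python
import math
import random

def exact_NI(N, p):
    """N I[X,K] for two partitions, by exact summation over binomial counts."""
    total = 0.0
    for k in range(N + 1):
        P = math.comb(N, k) * p**k * (1 - p)**(N - k)
        if 0 < k:
            total += P * k * math.log(k / (N * p))
        if k < N:
            total += P * (N - k) * math.log((N - k) / (N * (1 - p)))
    return total

def mean_work(N, p, cycles, rng):
    """Monte-Carlo estimate of <W>/k_B T from simulated measurement cycles."""
    tot = 0.0
    for _ in range(cycles):
        k = sum(rng.random() < p for _ in range(N))   # count in left partition
        if 0 < k:
            tot += k * math.log(k / (N * p))
        if k < N:
            tot += (N - k) * math.log((N - k) / (N * (1 - p)))
    return tot / cycles

rng = random.Random(0)
N, p = 5, 0.3
mc, ex = mean_work(N, p, 200_000, rng), exact_NI(N, p)
# mc and ex agree to Monte-Carlo accuracy, confirming <W> = k_B T N I[X,K].
```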
\subsection{Irrelevant information retained in memory}
To physically run the memory that keeps the counts of particles in each partition, heat in the amount of at least $k_B T I[\bm{X},\bm{K}]$ joules has to be dissipated per cycle, on average (e.g. \cite{parrondo2015thermodynamics, CB}). But the memory can be used to extract, on average, work up to only $k_B T NI[X,\bm{K}]$ joules per cycle. The lower limit on overall average dissipation per cycle is the difference, which is proportional to the irrelevant information retained in memory \cite{CB}. Here, the counts keep irrelevant information in the amount of
\begin{eqnarray}
I_{\rm irrel} &=& I[\bm{X}, \bm{K}] - N I[X, \bm{K}] \label{Iirrel-1}\\
&=& I[\bm{X} | \bm{K}] \geq 0 \label{Iirrel-2}~,
\end{eqnarray}
equal to the conditional multi information,
\begin{equation}
I[\bm{X} | \bm{K}] = \left\langle \ln{\left[ {P(x_1, \dots, x_N | k_1, \dots, k_q) \over \prod_{i=1}^{N} P(x_i | k_1, \dots, k_q)} \right]} \right\rangle_{P(\bm{x},\bm{k})}~, \label{Iirrel-3}
\end{equation}
a non-negative quantity.
To get from line (\ref{Iirrel-1}) to line (\ref{Iirrel-2}), note that $I[\bm{X}, \bm{K}] - \sum_{i=1}^N I[X_i, \bm{K}] = I[\bm{X} | \bm{K}] - I[\bm{X}]$. But the multi-information vanishes here, $I[\bm{X}] = 0$, because the particles are non-interacting.
The mapping from particle locations to the vector of counts characterizes the memory accessible to the information engine. Counting is a deterministic mapping, which means that the conditional entropy $H[\bm{K}|\bm{X}]$ is zero, since the counts are completely determined by the locations (unless measurement errors are taken into account). Therefore, we have $I[\bm{X},\bm{K}] = H[\bm{K}]$. The irrelevant information retained in the counts is thus $I_{\rm irrel}^{\rm count} =$
\begin{eqnarray}
H[\bm{K}] - N I[X,\bm{K}]
= \ln\left[\frac{N^N}{N!}\right] - \sum_{i=1}^q \left\langle \ln \left[ \frac{k_i^{k_i}}{k_i!} \right] \right\rangle_{p(k_i)}\,.
\end{eqnarray}
We use Stirling's approximation, $n!\cong n^n e^{-n}\sqrt{2\pi n}$, to estimate this quantity for very large $N$ and equidistant partitions:
\begin{eqnarray}
I_{\rm irrel}^{\rm count} &\cong&
{1\over 2} \Bigl( (q-1) \ln(2\pi N) -q\ln(q)\Bigr)\,.
\end{eqnarray}
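The quality of this estimate can be probed numerically; the sketch below is our own check. For equidistant partitions each marginal count is Binomial$(N, 1/q)$, so the exact expression reduces to a single binomial average.

```python
import math

def exact_I_irrel(N, q):
    """ln(N^N/N!) - q <ln(k^k/k!)>, k ~ Binomial(N, 1/q) (equidistant walls)."""
    p = 1.0 / q
    avg = 0.0
    for k in range(1, N + 1):   # the k = 0 term vanishes (0 ln 0 - ln 0! = 0)
        logP = (math.lgamma(N + 1) - math.lgamma(k + 1) - math.lgamma(N - k + 1)
                + k * math.log(p) + (N - k) * math.log(1 - p))
        avg += math.exp(logP) * (k * math.log(k) - math.lgamma(k + 1))
    return N * math.log(N) - math.lgamma(N + 1) - q * avg

def stirling_I_irrel(N, q):
    """Large-N estimate: (1/2) [ (q-1) ln(2 pi N) - q ln q ]."""
    return 0.5 * ((q - 1) * math.log(2 * math.pi * N) - q * math.log(q))
```

For $N = 500$ and $q = 4$, for example, the exact and Stirling values already agree to within about $0.01$ nats.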
\subsection{Discussion}
Other memories could perhaps do better, as could different work extraction mechanisms. The problem of finding an optimal memory was discussed for the case of generalized, partially observable information engines in \cite{CB}. The case where particles have a weak, short range, repulsive potential is studied in \cite{Horowitz_2011}. Furthermore, it is obvious that if the particles can initially be prepared in a non-equilibrium state, then a larger amount of work can be extracted from a cycle of the Szil\'ard engine, a point made in \cite{Touzo_2020}. However, none of these variations on the original Szil\'{a}rd\ scheme are the focus of this paper.
Instead, here we explore the consequences of the choices made in our specific setup: a naive, counting observer, and a work extraction mechanism based solely on pressure equalization. These choices constitute a {\em direct} extension of the Szil\'{a}rd\ engine to $N$ particles and $q$ partitions, without the introduction of additional variations or optimizations.
In this straightforward extension there is then only one remaining choice, namely the initial placement of the dividers. Conceptually, we could ascribe the task of inserting the partitions to the observer, which would also justify the fact that the observer knows $\bm{\ell}$.
We will now compute those partition locations $\bm{\ell}$ that maximize relevant information, and hence maximize extractable work (Sec.~\ref{W-I}). This tells us how to design the best naive $N$ particle $q$ partition classical extension of Szil\'{a}rd's engine.
\section{Maximizing work extraction by positioning divider partitions}
\label{Max-l}
We now ask: given a system of $N$ particles and $q$ partitions, does there exist a partitioning that is optimal in the sense that it maximizes the average extractable work? That is, we seek the location vector $\hat{\bm{\ell}}$ that maximizes $I[X ,\bm{K}]$.
\subsection{{Maximum} work extraction for a single movable wall}
\label{sec3}
To build intuition, let us start with the case of one movable wall, i.e., two partitions. Let $p$ denote the probability of finding a particle in the left partition of longitudinal size $\ell$, which is $p=\ell/L=\bracket{k}/N$. The probability of observing $k$ particles out of the $N$ possible in the left partition is then:
\beeq{
P(k)=\binom{N}{k}p^k (1-p)^{N-k}\,.
}
The conditional probability $P(x|k)$ is:
\beeq{
P(x|k) = \begin{cases}
k\, / \, {N \ell}, & 0 \leq x \leq \ell\\
(N-k)\, / \, {N (L-\ell)}, & \ell < x \leq L
\end{cases}\,,
}
and the marginal probability is $P(x)=1/L$.
This yields the mutual information between the measurement $k$ and the position of any single particle:
\beeq{
I[X, K] \!=\! \sum_{k} \!P(k) \Bigg[ {k\over N} \ln{\frac{k L}{N \ell}} + {N-k\over N} \ln{\frac{(N-k) L}{N (L-\ell)}} \Bigg]\,,
\label{eq:1}
}
which is in agreement with earlier work \cite{kim2011quantum,kim2011information}.
For one particle, a quick and intuitive calculation shows that the maximal value of $I[X, K]$ is attained by placing the wall in the middle. But this is not the case for arbitrary particle numbers $N$.
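This can be seen by evaluating Eq.~(\ref{eq:1}) directly; the sketch below is our own numerical check (in nats). A centred wall beats a slightly off-centre one for $N \le 3$, but loses to it from $N = 4$ onwards.

```python
import math

def NI(N, p):
    """N I[X,K] for a single wall at p = l/L, by the binomial sum (in nats)."""
    s = 0.0
    for k in range(N + 1):
        P = math.comb(N, k) * p**k * (1 - p)**(N - k)
        if 0 < k:
            s += P * k * math.log(k / (N * p))
        if k < N:
            s += P * (N - k) * math.log((N - k) / (N * (1 - p)))
    return s

# Centred wall (p = 0.5) vs. a wall at p = 0.45:
comparison = {N: (NI(N, 0.5), NI(N, 0.45)) for N in (1, 2, 3, 4, 10)}
# centred > off-centre for N <= 3; the inequality reverses for N >= 4.
```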
\begin{figure}
\centering
\includegraphics[width=8.5cm,height=5.5cm]{figures/Fig1.pdf}
\caption{The mutual information $N\, I[X,\bm{K}]$ as a function of the position of the wall $p=\ell/L$, for a Szil\'{a}rd\ engine containing $N$ particles. The different curves correspond to different values of $N$, while the dotted line is the asymptotic limit ${1\over 2\ln 2} \simeq 0.7213$ bits.}
\label{fig1}
\end{figure}
To illustrate this, we plot $N I[X, K]$ against $p=\ell/L$ in Fig. \ref{fig1}, for various values of $N$. Only for $N\leq 3$ does the optimal position of the movable wall, $\hat{\ell}$, correspond to halving the volume ($\hat{\ell}=L/2$). For larger numbers of particles, the optimum is given by an asymmetric configuration: the symmetric solution with the wall in the middle becomes a local minimum, and two maxima appear at $\hat{\ell}$ and $L-\hat{\ell}$. This agrees with what was reported in \cite{pal2016role}. To understand the mechanism behind the appearance of this asymmetric solution, we can perform an asymptotic expansion of $NI[X,K]$ for large $N$. Introducing the average count $n:=\bracket{k}=N\ell/L$, and the random variable $\Delta$, such that $k=n+\Delta$, we first write the mutual information as:
\beeq{
N I[X,K]&=\sum_{k}P(k)\left[k\ln \frac{kL}{\ell N}+ (N-k)\ln\frac{(N-k)L}{N(L-\ell)}\right]\\
&=\sum_{k}P(k)\Bigg[\left(n + \Delta\right) \ln\left(1 + \frac{\Delta}{n}\right)\\
&+ \left(N-n-\Delta\right) \ln\left(1-\frac{\Delta}{N-n}\right)\Bigg]\,,
}
and then perform a Taylor expansion for small values of $\Delta$. Recalling the following statistical properties of the binomial distribution, $\avg{\Delta}=0$, $\avg{\Delta^2} = N p\, (1-p) $ , $\avg{\Delta^3} = N p (1-p) (1-2p)$ and $\avg{\Delta^4} = 3 N(N-2) p^2 (1-p)^2 + N p (1-p)$, we obtain
\beeq{
N I[X, K] = \frac{1}{2} + \frac{1}{4N} + \frac{(1-2p)^2}{12 N p (1-p)} + \mathcal{O}\left({N^{-2}}\right)\,.
\label{explargeN}
}
Apart from the first term, all orders go to zero in the limit of an infinite number of particles. For finite $N$, however, the second term gives the leading order correction for the symmetric partition, while the third one is responsible for making the symmetric partition a local minimum: as $p$ deviates from $1/2$, this term increases. The same asymptotic expansion can be carried out for the general case of $q$ partitions, which we will calculate later.
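The accuracy of the expansion in Eq.~(\ref{explargeN}) is easy to probe numerically; the sketch below (our own check) compares it to the exact binomial sum.

```python
import math

def NI_exact(N, p):
    """Exact N I[X,K] for one wall at p = l/L, via the binomial sum."""
    s = 0.0
    for k in range(N + 1):
        logP = (math.lgamma(N + 1) - math.lgamma(k + 1) - math.lgamma(N - k + 1)
                + k * math.log(p) + (N - k) * math.log(1 - p))
        P = math.exp(logP)
        if 0 < k:
            s += P * k * math.log(k / (N * p))
        if k < N:
            s += P * (N - k) * math.log((N - k) / (N * (1 - p)))
    return s

def NI_expansion(N, p):
    """1/2 + 1/(4N) + (1-2p)^2 / (12 N p (1-p)), dropping O(N^-2) terms."""
    return 0.5 + 1.0 / (4 * N) + (1 - 2 * p) ** 2 / (12 * N * p * (1 - p))

# For N = 200, p = 0.4 the two values differ only at O(N^-2).
err = abs(NI_exact(200, 0.4) - NI_expansion(200, 0.4))
```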
For a more quantitative analysis, we use an integral representation of the natural logarithm,
\begin{equation}
\ln z = \int_{0}^{\infty} \frac{du}{u} \left( e^{-u} - e^{-zu} \right)\,,
\label{Eq::log_int}
\end{equation}
to rewrite Eq.~\eqref{eq:a} as
\begin{eqnarray}
NI[X,\bm{K}]&=\sum_{i=1}^q\sum_{k_i}P(k_i) & k_i\ln\left(\frac{k_i }{\bracket{k_i}}\right)\\
&=\sum_{i=1}^q\int_0^\infty\frac{du}{u}\Bigg[ & \bracket{k_i\left(e^{-u} - e^{-k_iu}\right)}\\
&& -\bracket{k_i}\left(e^{-u} - e^{-\bracket{k_i}u}\right)\Bigg]~. \notag
\end{eqnarray}
Noticing that
\beeq{
\bracket{e^{-k_i u} }&= \sum_{k_i=0}^{N} {N \choose k_i} (p_i e^{-u})^{k_i} (1-p_i)^{N-k_i}\\
&= \left( 1-p_i(1 - e^{-u} ) \right)^N\,,
}
we obtain
\beeq{
N I[X,\bm{K}] &=\sum_{i=1}^{q}\mathcal{F}_N(n_i)\,,
\label{eq:I-F}
}
with $n_i := \avg{k_i}$, and the function
$\mathcal{F}_N(x)$ is given by
\beeq{
\mathcal{F}_N (x) &= \int_0^\infty \!\frac{du}{u^2} \left\{ \left[ 1 - \frac{x}{N} (1-e^{-u})\right]^N - e^{-x u} \right\}\,.
\label{eq:3}
}
We can hence write Eq. \eqref{eq:1} as
\beeq{
N I[X, K] = \mathcal{F}_N (n) + \mathcal{F}_N (N-n)\,,
\label{eq:2}
}
where $n=\bracket{k}$ is the average count.
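The integral representation can be verified against the direct binomial sum; the sketch below is our own check, using simple trapezoidal quadrature for Eq.~(\ref{eq:3}). The integrand is finite at $u=0$, with limiting value $x(N-x)/2N$, so no special treatment of the origin is needed.

```python
import math
import numpy as np

def F_N(x, N):
    """Trapezoidal quadrature of F_N(x); the integrand is finite at u = 0."""
    u = np.linspace(1e-6, 80.0, 400_000)
    f = ((1.0 - (x / N) * (1.0 - np.exp(-u))) ** N - np.exp(-x * u)) / u**2
    return float(np.sum(0.5 * (f[1:] + f[:-1]) * np.diff(u)))

def NI_sum(N, p):
    """Direct binomial evaluation of N I[X,K] for a single wall at p = l/L."""
    s = 0.0
    for k in range(N + 1):
        P = math.comb(N, k) * p**k * (1 - p)**(N - k)
        if 0 < k:
            s += P * k * math.log(k / (N * p))
        if k < N:
            s += P * (N - k) * math.log((N - k) / (N * (1 - p)))
    return s

N, n = 10, 3.0                       # wall at l/L = n/N = 0.3
lhs = F_N(n, N) + F_N(N - n, N)      # integral representation
rhs = NI_sum(N, n / N)               # direct sum
# lhs and rhs agree to quadrature accuracy (roughly 1e-3).
```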
\begin{figure}
\centering
\includegraphics[width=8.5cm,height=5.5cm]{figures/Fig2.pdf}
\caption{$\mathcal{F}_N(n)/\ln(2)$ [in bits] as a function of $n$, plotted for various values of $N$. In the limit of an infinite number of particles the maximum of $\mathcal{F}_N(n)$ is achieved at $\hat{n}\simeq 1.338$ and has a value of $\mathcal{F}_\infty(\hat{n})\simeq 0.8371$ bits.}
\label{fig2}
\end{figure}
Thus, maximizing $N I[X, K]$ with respect to the position $\ell$ of the wall for a fixed number of particles is equivalent to maximizing $\mathcal{F}_N(n) + \mathcal{F}_N (N-n)$ with respect to $n$. A plot of the function $\mathcal{F}_N(n)/\ln(2)$ (scaled to be in units of bits) can be found in Fig.~\ref{fig2} for various values of $N$. The optimal value $\hat{n}(N)$ which maximizes the mutual information must obey $\mathcal{F}'_N(n)=\mathcal{F}_N'(N-n)$.
For $N\leq 3$ one finds that $\hat{n}=\frac{N}{2}$, while for larger numbers of particles the optimal solution is an asymmetric partition (consistent with Fig.~\ref{fig1}). For $N\rightarrow \infty$, either $\mathcal{F}_N (n)$ or $\mathcal{F}_N (N-n)$ becomes the dominant term in Eq.~\eqref{eq:2}, and one can write that
\beeq{
\lim_{N \to \infty} N I[X, K]&= \mathcal{F}_{\infty}(n) \\
&= \int_0^\infty \frac{du}{u^2} \left( e^{-n(1-e^{-u})} - e^{-nu} \right)\,.
\label{eq:3b}
}
The maximal value of the mutual information occurs at $\hat{n}(\infty)\simeq 1.338$ and is $\mathcal{F}_{\infty}(\hat{n})\simeq 0.8371$ bits. This is to be contrasted with a symmetric partition, which in this limit gives a lower value for the mutual information of $1/2$ nat $=1/(2\ln 2)\simeq 0.7213$ bits (see Fig.~\ref{fig1}).
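The quoted optimum can be reproduced by maximizing $\mathcal{F}_\infty$ numerically, for instance:

```python
import numpy as np
from scipy.integrate import quad
from scipy.optimize import minimize_scalar

def F_inf(n):
    # F_infinity(n), Eq. (3b): the N -> infinity limit of F_N
    f = lambda u: (np.exp(-n * (1 - np.exp(-u))) - np.exp(-n * u)) / u**2
    return quad(f, 0, np.inf, limit=200)[0]

# locate the maximum of F_infinity over a generous bracket
res = minimize_scalar(lambda n: -F_inf(n), bounds=(0.1, 10), method='bounded')
n_hat = res.x                      # expected near 1.338
bits = F_inf(n_hat) / np.log(2)    # expected near 0.8371 bits
```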
The larger the number of particles $N$, the closer to the edge of the box we have to insert the movable wall to maximize average work extraction, while the average work extracted by a wall in the middle goes to zero. This explains not only why, for a container filled with an ordinary gas ($N$ roughly between $10^{22}$ and $10^{23}$), zero work can be extracted, on average, by inserting a wall in the middle, but also why there is no chance of extracting macroscopic work by implementing the optimal partitioning: the required wall positions are far too extreme to realize.
Optimizing the average extracted work (for $q=2$) also with respect to the number of particles, $N$, gives as the best choice either one or two particles (with the wall in the middle), as can be appreciated from Figs.~\ref{fig1} and \ref{fig3}.
\subsection{Optimal work extraction with $q$ partitions}
\label{sec4}
For one particle, we can trivially insert as many partitions as our experimental setup allows, and measure to the same resolution, in order to get more work out of the information engine, but we equally have to spend more energy to run the memory. We have, for one particle, that the cost and the potential benefit of the memory are precisely equal, because $I[\bm{X},\bm{K}] = I[X ,\bm{K}]$ for $N=1$. Therefore, the overall bound on the engine's dissipation is unaffected. With one particle, a cyclically run Szil\'{a}rd\ engine can, in principle, achieve zero dissipation.
How much work can be extracted from a Szil\'{a}rd\ box with $N$ particles and $q$ partitions? We can use Eq. \eqref{eq:I-F} to analyze this general case similarly to the $q=2$ case discussed above. Finding the optimal partition that maximizes the mutual information for fixed $N$ and $q$, \textit{i.e.}, $\hat{\bm{\ell}}=(\hat{\ell}_1,\ldots,\hat{\ell}_q):=\text{arg max}_{\bm{\ell}} \left( N I[X,\bm{K}]\right)$, together with the maximal value, $\hat{\mathcal{I}}_{q}(N) = \max_{\bm{\ell}}\left( N I[X, \bm{K}]\right)$, then reduces to finding the number vector $\bm{n}=(n_1,\ldots,n_q)$ that maximizes Eq. \eqref{eq:I-F}. The number vector has to be normalized, $\sum_{i=1}^qn_i=N$, and to carry out the optimization, we introduce
\beeq{
\mathcal{I}^{(N)}_q(\bm{n},\lambda)=\sum_{i=1}^q\mathcal{F}_N(n_i)-\lambda\left(\sum_{i=1}^qn_i-N\right)\,,
}
which must be maximized with respect to $\{\bm{n},\lambda\}$, where $\lambda$ is a Lagrange multiplier.
Then the optimal $\hat{\bm{n}}$ must obey
\beeq{
0&=\frac{\partial \mathcal{I}^{(N)}_q(\bm{n},\lambda)}{\partial n_i}\Big|_{\bm{n}=\hat{\bm{n}}}\,,\quad\quad i=1,\ldots,q\,,\\
0&=\frac{\partial \mathcal{I}^{(N)}_q(\bm{n},\lambda)}{\partial \lambda}\Big|_{\bm{n}=\hat{\bm{n}}}\,,
}
yielding:
\beeq{
\lambda&=\mathcal{F}_N^\prime(\hat{n}_i)\,,\quad\quad i=1,\ldots,q\,,\\
N&=\sum_{i=1}^q\hat{n}_i\,.
}
We must, moreover, be sure that
\beeq{
\sum_{i,j=1}^q\frac{\partial^2 \mathcal{I}^{(N)}_q(\bm{n},\lambda)}{\partial n_i\partial n_j}\Big|_{\bm{n}=\hat{\bm{n}}}\delta n_i\delta n_j<0\,,
}
with $\delta n_i=n_i-\hat{n}_i$, or equivalently,
\beeq{
\sum_{i=1}^q \mathcal{F}_N''(\hat n_i)(\delta n_i)^2<0\,,
\label{eq:cond}
}
with $\sum_{i=1}^q\delta n_i=0$.
\begin{figure}
\centering
\includegraphics[width=8.5cm,height=5.5cm]{figures/FigAppendix1.pdf}
\caption{Plot of $\mathcal{F}_N'(n)$ (solid blue line) and $\mathcal{F}_N''(n)$ (solid red line) as a function of $n$, for $N=10$. For a given value of $\lambda$ (dashed orange line), there are two solutions of $\mathcal{F}_N'(n)=\lambda$ (black circles). These, denoted $n^{-}$ and $n^{+}$, are such that $\mathcal{F}_N''(n^{-})<0$ and $\mathcal{F}_N''(n^{+})>0$. }
\label{fig:app1}
\end{figure}
To understand the solutions to this system of equations, we plot $\mathcal{F}_N'(n)$ and $\mathcal{F}_N''(n)$ for different values of $N$ and $q$ in Fig. \ref{fig:app1}. The symmetric solution $n_i=N/q$ is stable as long as $\mathcal{F}_N''(N/q)<0$ (red curve), but beyond that point unstable, and we need to investigate asymmetric solutions. For a given value of $N$ and $q$, the equation $\lambda=\mathcal{F}_N'(\hat{n}_i)$ has two solutions $\hat{n}_i=n^{-}$ and $n^{+}$ with $n^{-}<n^{+}$. Since the Lagrange multiplier imposes a global constraint, this implies that, regardless of the order of the indices labeling the partitions, a general solution may correspond to having $q^{-}$ partitions with solution $n^{-}$ and $q^{+}$ partitions with $n^{+}$, such that $q=q^{-}+q^{+}$ and $N=q^{-}n^{-}+q^{+}n^{+}$. Importantly, the solution $n^{-}$ is such that $\mathcal{F}''_N(n^{-})\leq 0$, while for $n^{+}$ we have instead that $\mathcal{F}''_N(n^{+})\geq 0$ (see solid red line in Fig. \ref{fig:app1}). Therefore, the stability condition \eqref{eq:cond} reads
\beeq{
\left|\mathcal{F}''_N(n^{+})\right|\sum_{\ell=1}^{q_+}(\delta n^{+}_\ell)^2-\left|\mathcal{F}''_N(n^{-})\right|\sum_{\ell=1}^{q_-}(\delta n^{-}_\ell)^2<0\,,
\label{eq:cond2}
}
with
\beeq{
\sum_{\ell=1}^{q_{+}}\delta n^{+}_\ell+\sum_{\ell=1}^{q_{-}}\delta n^{-}_\ell=0\,.
\label{eq:cond3}
}
This automatically implies that $q^{+} \leq 1$, because if $q^{+}$ were 2 or larger, the inequality (\ref{eq:cond2}) could be violated by the choice $\delta n^{-}_\ell=0$ for all $\ell=1,\ldots,q^{-}$. This means that the optimal asymmetric partition corresponds to having one large partition and $q-1$ small and equal partitions.
\begin{figure}
\centering
\includegraphics[width=8.5cm,height=5.5cm]{figures/Fig3a.pdf}\\
\includegraphics[width=8.5cm,height=5.5cm]{figures/Fig3b.pdf}
\caption{Plot of the mutual information for the symmetric partition $q\mathcal{F}_N(N/q)$ (orange rhomboid markers) and the optimal mutual information $\mathcal{I}_q(N)$ (brown triangle markers) as a function of $N$, in units of nats. Top panel: plot for a particular number of partitions ($q=6$). There exists a critical value $N^{\star}(q)$ (vertical solid red line) of the number of particles, below which the optimal partition is the symmetric one, while above it the optimal partition is asymmetric. Moreover, the optimal mutual information becomes maximal for a particular number of particles $\hat{N}(q)<N^{\star}(q)$ ($\hat{N}(q)$ is shown by the vertical solid blue line). Bottom panel: plots for various values of $q$, showing the same general features.}
\label{fig3}
\end{figure}
This, in turn, allows us to write down a general expression for $\hat{\mathcal{I}}_{q}(N) $, namely:
\beeq{
\hat{\mathcal{I}}_q (N)&=\mathcal{F}_N (n^+) + (q-1) \mathcal{F}_N (n^{-})\,,
\label{eq:final}
}
with the constraint $n^{+}+(q-1)n^{-}=N$. Expression \eqref{eq:final} also contains the symmetric solution, corresponding to $n^{+}=n^{-}=N/q$.
Thus, the optimal value of the mutual information can always be written as
\beeq{
\mathcal{I}_q(N)=\max_{0\leq n^{+}\leq N}\left[\mathcal{F}_N(n^+)+(q-1)\mathcal{F}_N\left(\frac{N-n^+}{q-1}\right)\right]\,,
\label{eq:f}
}
and is plotted in Fig. \ref{fig3} (brown triangle markers), and compared to the mutual information achieved for the symmetric partition, corresponding to $n_i=N/q$ for $i=1,\ldots,q$ (orange rhomboid markers).
We see that, for fixed $q$, there exists a critical number of particles $N^\star(q)$ such that for $N\leq N^\star(q)$ the optimal partition is always the symmetric one, while for $N>N^\star(q)$ the optimal partition is asymmetric. We also found that for $q\gg 1$ this critical number of particles is given by $N^\star(q)\simeq 2.1803\, q$. Additionally, Fig. \ref{fig3} shows that there exists a value $\hat{N}(q)<N^\star(q)$ for which $\mathcal{I}_q(N)$ is maximal at fixed $q$.
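This computation can be sketched numerically using the one-dimensional maximization of Eq. \eqref{eq:f}, restricted to the branch with one large partition; the values of $N$ and $q$ below are arbitrary illustrative choices:

```python
import numpy as np
from scipy.integrate import quad
from scipy.optimize import minimize_scalar

def F(x, N):
    # F_N(x), Eq. (3): integral representation of the per-partition term
    if x < 1e-12:
        return 0.0
    f = lambda u: ((1 - x / N * (1 - np.exp(-u)))**N - np.exp(-x * u)) / u**2
    return quad(f, 0, np.inf, limit=200)[0]

def I_q(N, q):
    # maximize Eq. (f) over the branch with one large partition, n+ >= N/q
    obj = lambda nplus: -(F(nplus, N) + (q - 1) * F((N - nplus) / (q - 1), N))
    res = minimize_scalar(obj, bounds=(N / q, N), method='bounded')
    return -res.fun, res.x

q = 6
val_small, n_small = I_q(8, q)    # N = 8  below N*(q): symmetric optimum
val_large, n_large = I_q(40, q)   # N = 40 above N*(q): asymmetric optimum
sym_small = q * F(8 / q, 8)       # symmetric value for comparison
sym_large = q * F(40 / q, 40)
```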
We obtain an asymptotic value of $\hat{N}(q)$ by noticing that, since this maximum is achieved for a symmetric partition, we must have $\hat{N}=nq$. The optimal mutual information is then given by $\hat{N} I[X,\bm{K}]= q\mathcal{F}_{nq}(n)$, so that for a large number of partitions, and using Eq. (\ref{eq:3b}), we obtain $\hat{N}(q)\sim 1.338\, q$. Note also that as $N$ goes to infinity, the value of $\mathcal{I}_q(N)$ given by Eq. \eqref{eq:f} is dominated by the $q-1$ smaller partitions, yielding the asymptotic value $\mathcal{I}_q(N)\sim (q-1)\times 0.8371$ bits, as shown in the top panel of Fig. \ref{fig3}.\\
\section{Conclusions}
\label{sec6}
In a Szil\'{a}rd\ engine that can use an ideal gas with $N$ particles, and for which the box can be partitioned into $q$ partitions with walls that can move to extract work via a quasi-static process, the average extracted work is proportional to the information retained in the counts of how many particles fall into each partition about the single particle locations. The cost of running a memory that contains counts of how many particles are in each partition is proportional to the information retained about the ensemble locations. Run cyclically, the engine's efficiency is thus limited by the difference---the information retained in memory that is not relevant with respect to work extraction. This provides
a non-negative lower bound on engine dissipation.
We calculated the maximum average extracted work, by optimizing over the initial wall locations, for given $N$ and $q$. For fixed $q$, there is a critical value $N^{\star}(q)$ below which the optimal partitioning is symmetric (partitions of equal volume), and above which an asymmetric solution is preferable. The maximum value of the average extractable work occurs at $\hat{N}(q)<N^{\star}(q)$. Asymptotically, as $N\rightarrow \infty$, $\hat{N}(q)$ is linear in $q$, as is the maximal extractable average work.
The extension of Szil\'{a}rd's engine we have explored here is basic in the sense that it allows for work extraction only via movement of the partitions along the $x$-axis, without building in additional work extraction mechanisms, such as those used, e.g., in \cite{Horowitz_2011}, and in that it uses a naive, counting observer, rather than an optimal observer, as discussed in \cite{CB}.
\section{Introduction}
\label{sect:intro}
According to the hierarchical bottom-up scenario, clusters of galaxies are
thought to be formed by accreting and merging of subunits. The structure and
dynamics of the rich galaxy clusters with ongoing merger events are of great
importance for understanding cluster evolution. N-body numerical
simulations show that substructure may occur in individual rich clusters
before their final collapse and virialization (White 1976; Burns et al.\
1994). Since the cluster merging events are the most common and energetic
phenomena in the Universe, more and more observational efforts in optical
and X-ray bands have been devoted to the nearby rich clusters with
significant substructures (e.g., White, Briel \& Henry 1993; Gastaldello et
al.\ 2003).
When these subsystems are slightly more separated, they may be classified as
separate galaxy clusters. The interacting system of clusters A399 and A401
is a good example. They have long been treated as a merging pair of clusters
since they are close to each other, with a projected separation of $\sim 0.6
^\circ$ between their central cD galaxies (McGlynn \& Fabian 1984; Oegerle
\& Hill 1994). The temperature map for this binary cluster, derived from the
$ASCA$ spatially resolved spectroscopic data, possibly suggests a physical
link or a massive dark matter filament between these two clusters
(Markevitch et al.\ 1998). The X-ray observation with the {\it ROSAT} High
Resolution Imager (HRI) unveiled a significant linear structure of A399
which points directly to the core of A401; this feature might have
resulted from a past violent interaction (Fabian, Peres \& White 1997).
Recent {\it XMM-Newton} observations also confirmed the enhanced X-ray flux
and temperature in the region between two clusters (Sakelliou \& Ponman
2004).
Therefore it is of great interest to search for optical anomalies in the
dynamics of the member galaxies. In general, the merger history can be
modelled on the basis of the structure and dynamics of cluster galaxies,
intracluster gas, and intergalactic medium (e.g., Colless \& Dunn 1996).
This paper will use the existing redshift measurements to investigate the
possible structures in the two-dimensional projected map and in velocity space. A prevalent
mixture-modeling algorithm, known as the KMM algorithm (Ashman et al.\
1994), will be applied to obtain a robust separation of these two clusters.
Disregarding the rotation of the system, we will try to model this cluster
pair as a two-body system on the basis of the velocity distribution and
virial mass estimates. In \S2, we present the spatial distribution and
localized variations in velocity distribution for all the member galaxies in
the A399/A401 system as a whole. We apply the KMM partition algorithm and
discuss the velocity distribution for each cluster in \S3. Then, the virial
mass estimate and two-body modeling for this binary system of galaxy
clusters are performed in \S4. Finally, a summary is given in \S5.
\section{PROPERTIES OF THE SAMPLE}
A399 and A401 are nearby clusters of galaxies ($z \sim$ 0.07181 and 0.07366;
Oegerle \& Hill 2001), classified respectively as type I-II and I clusters
by Bautz \& Morgan (1970). With respect to the center of this binary system
($2^{\rm h}58^{\rm m}30^{\rm s}, +13^{\circ}20'$; J2000.0), 1331
extragalactic objects with positional offsets less than 100.0 arcmin were
extracted from the NASA/IPAC Extragalactic Database (NED). However, only 240
galaxies have spectroscopic redshifts. The remainder appear only in one of
the imaging surveys from radio, infrared, and X-ray wavebands.
Most of the spectroscopic data were contributed by Hill \& Oegerle (1993)
who carried out a survey of the cD clusters. The redshift measurements were
detailed in Hill \& Oegerle (1993), and the typical velocity uncertainty for
the galaxies is less than 100 km~s$^{-1}$. The distribution of spectroscopic
redshifts for 240 known galaxies is shown in Fig.~1. There are 215 galaxies
with their redshifts in a range of 18,000 km~s$^{-1}$ $< cz <$ 25,000 km~s$^{-1}$, with a
peak at $\sim$ 21,500 km~s$^{-1}$. Only one peak can be found in the velocity
distribution because the velocity dispersions for individual clusters are
larger than the apparent velocity separation between two clusters. It is
unambiguous to treat these 215 galaxies as the members of this cluster pair
since their redshift distribution is quite concentrated and isolated. The
contamination from the foreground and background galaxies should be very
slight. The spatial distribution for these 215 galaxies is presented in
Fig.~2. We superpose the contour map of the surface density that has been
smoothed by a Gaussian window with $\sigma = 2'$. Because of the severe
overlap in the redshift distributions, clusters A399 and A401 can not be
completely resolved by the velocity distribution only.
\begin{figure}[b]
\begin{center}
\mbox{\epsfxsize=0.7\textwidth\epsfysize=0.65\textwidth\epsfbox{fig1.eps}}
\caption{Distribution of the radial velocities for 240 galaxies
with known spectroscopic redshifts. The bin size is 1000 km~s$^{-1}$.}
\label{Fig:plot1}
\end{center}
\end{figure}
\begin{figure}[h]
\begin{center}
\mbox{\epsfxsize=0.6\textwidth\epsfysize=0.6\textwidth\epsfbox{fig2.eps}}
\caption{Spatial distribution for 215 member galaxies of the
binary system of galaxy clusters, superposed with the contour map of
the surface density where the smoothing Gaussian window with a
radius of $2'$ is used. The contour levels are 0.03, 0.09, 0.15,
and 0.21 arcmin$^{-2}$, respectively. } \label{Fig:plot2}
\end{center}
\end{figure}
In order to detect the substructures in both the velocity space
and the projected map, we make use of the $\kappa$-test (Colless
\& Dunn 1996) for the A399/A401 system as a whole. The statistic
${\kappa}_n$ is defined to characterize the local deviation on the
scale of groups of $n$ nearest neighbors. A larger ${\kappa}_n$
means a greater possibility that the local velocity distribution
differs from the overall distribution. The probability that
$\kappa_n$ is larger than the observed value $\kappa_n^{\rm obs}$,
$P(\kappa_n>\kappa_n^{\rm obs})$, can be estimated by Monte Carlo
simulations in which the velocities are randomly shuffled. Table 1 gives the
results of $\kappa$-test for 215 known member galaxies, and $10^3$
simulations are used to estimate $P(\kappa_n>\kappa_n^{\rm obs})$
for all cases.
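The test can be sketched on synthetic data as follows; this is a hedged reimplementation of the statistic described above (summing $-\log_{10}$ of the Kolmogorov--Smirnov probabilities over the local neighbor groups), not the authors' actual code, and the toy catalogue parameters below are invented:

```python
import numpy as np
from scipy.stats import ks_2samp

rng = np.random.default_rng(1)

# toy catalogue: two overlapping "clusters" in (x, y) with offset velocities
pos = np.vstack([rng.normal([-1, 0], 0.7, (60, 2)),
                 rng.normal([+1, 0], 0.7, (60, 2))])
vel = np.concatenate([rng.normal(0.0, 1.0, 60), rng.normal(1.5, 1.0, 60)])

def kappa(pos, vel, n=10):
    # kappa_n: sum over galaxies of -log10 of the KS probability that the
    # velocities of the n nearest neighbors share the overall distribution
    stat = 0.0
    for i in range(len(vel)):
        d = np.linalg.norm(pos - pos[i], axis=1)
        nbrs = np.argsort(d)[:n]          # galaxy i and its nearest neighbors
        stat += -np.log10(ks_2samp(vel[nbrs], vel).pvalue)
    return stat

k_obs = kappa(pos, vel)
# Monte Carlo null: shuffle velocities to estimate P(kappa_n > kappa_obs)
k_null = [kappa(pos, rng.permutation(vel)) for _ in range(50)]
p_value = np.mean([k > k_obs for k in k_null])
```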
It is found that the optimum scale of the nearest neighbors is 10,
and the substructure appears most obvious on this scale. The
bubble plot in Fig.~3 shows the location of localized variation
using neighbor size $n=10$, and the bubble size for each galaxy is
proportional to $-\log[P_{\rm KS}(D>D_{\rm obs})]$. Therefore
larger bubbles indicate a greater difference between local and
overall velocity distributions. A comparison with Fig.~2 shows
that the bubble clusterings near the central regions of A399 and
A401 are found to be significant, which indicates a distinct
difference between the localized and whole velocity distributions.
\begin{table}
\caption[]{Result of $\kappa$-Test for 215 Member Galaxies in the Binary
System}
\begin{center}
\begin{tabular}{cccccccccc} \hline
\noalign{\smallskip}
$n$ & 4 & 5& 6& 7& 8& 9& 10&11 &12\\
\hline \noalign{\smallskip}
$P(\kappa_n>\kappa_n^{\rm obs})$ & 18.3\%&
44.4\%& 42.6\%& 34.1\%& 17.0\%& 11.9\%& 6.8\% & 15.1\%& 14.5\%\\
\noalign{\smallskip} \hline
\end{tabular} \end{center}
\end{table}
\begin{figure} \begin{center}
\mbox{\epsfxsize=0.6\textwidth\epsfysize=0.6\textwidth\epsfbox{fig3.eps}}
\caption{ Bubble plot showing the degree of difference between the
local velocity distribution for groups of ten nearest neighbors
and the overall distribution of 215 known cluster galaxies. }
\end{center} \end{figure}
\section{THE KMM PARTITION INTO TWO CLUSTERS}
For studying the dynamical properties for each cluster, those 215 galaxies
should be correctly partitioned. It is relatively easy to partition the
galaxies whose projected locations appear close to the cluster centers.
However, for the galaxies located exactly between the cluster centers, the
partition might become a rather ambiguous task.
In order to achieve a robust partition, we apply a prevalent technique of
mixture modeling, namely KMM algorithm, on the sample of 215 galaxies. The
KMM is a maximum-likelihood algorithm which assigns objects into groups and
assesses the improvement in fitting a multi-group model over a single group
model (Ashman et al.\ 1994). A detailed description of the KMM algorithm
can be also found in Nemec \& Nemec (1993). For a dynamically relaxed
cluster, the line-of-sight velocities of galaxies are expected to be
Gaussian distributed. Since A399 and A401 are two gravitationally distinct
clusters of galaxies, we here apply the KMM algorithm which estimates the
statistical significance of bimodality based on the three-dimensional data:
the projected positions of the galaxies and the radial velocity, just as
Colless \& Dunn (1996) did. When an initial partition into two clusters or a
set of initial parameters for each cluster is given, the KMM algorithm can
start iterating toward the maximum-likelihood solution. Table 2 lists the
various initial partitions/parameters that we used and the corresponding
final solutions, where $(\bar{x}_1, \bar{y}_1, \bar{v}_1)$ and $(\bar{x}_2,
\bar{y}_2, \bar{v}_2)$ are the mean positions and velocities of A401 and
A399, respectively, $(\sigma_{x_1}, \sigma_{y_1}, \sigma_{v_1})$ and
$(\sigma_{x_2}, \sigma_{y_2}, \sigma_{v_2})$ are their standard deviations
in positions and velocities, and $f_1$ and $f_2$ are the fractions of the
sample in the two clusters. Furthermore, the estimate of the overall
correct allocation rate ($R$) given by the KMM algorithm is listed.
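The flavor of this mixture assignment can be illustrated with a minimal hand-rolled EM fit of a two-component Gaussian mixture in $(x, y, v)$ space. This is a simplified sketch (diagonal covariances, mock data drawn near the Table 2 values), not the actual algorithm of Ashman et al.\ (1994):

```python
import numpy as np

rng = np.random.default_rng(2)
# mock "A401" and "A399" galaxies: (x ['], y ['], v [km/s])
A = rng.normal([5, 15, 22100], [10, 11, 1200], (127, 3))
B = rng.normal([-12, -17, 21500], [9, 9, 1240], (88, 3))
data = np.vstack([A, B])

def em_two_gaussians(X, n_iter=200):
    n, d = X.shape
    # initialize responsibilities from a crude split along the first coordinate
    lab = (X[:, 0] < np.median(X[:, 0])).astype(float)
    resp = np.column_stack([1 - lab, lab])
    for _ in range(n_iter):
        w = resp.sum(0) / n                               # mixing fractions
        mu = (resp.T @ X) / resp.sum(0)[:, None]          # component means
        var = np.stack([(resp[:, j, None] * (X - mu[j])**2).sum(0)
                        / resp[:, j].sum() for j in range(2)])
        logp = np.stack([
            -0.5 * (((X - mu[j])**2 / var[j]) + np.log(2 * np.pi * var[j])).sum(1)
            + np.log(w[j]) for j in range(2)], axis=1)
        logp -= logp.max(1, keepdims=True)                # numerical stability
        resp = np.exp(logp)
        resp /= resp.sum(1, keepdims=True)
    return resp.argmax(1), mu, w

labels, mu, w = em_two_gaussians(data)
true = np.concatenate([np.zeros(127, int), np.ones(88, int)])
rate = max(np.mean(labels == true), np.mean(labels != true))  # up to relabeling
```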
\begin{table}[t]
\caption[]{Initial parameters and final solutions of the KMM algorithm}
\vspace{-9mm}
\begin{center}
\begin{tabular}{ccccccc} \hline
\noalign{\smallskip}
Case & $(\bar{x}_1, \bar{y}_1, \bar{v}_1)$ & $(\sigma_{x_1}, \sigma_{y_1},
\sigma_{v_1})$ & $(\bar{x}_2, \bar{y}_2, \bar{v}_2)$ & $(\sigma_{x_2},
\sigma_{y_2}, \sigma_{v_2})$ & $(f_1, f_2)$ & Rate \\
\noalign{\smallskip} \hline \noalign{\smallskip}
& & &Initial Parameters & & & \\
1& (5.9,16.9,22133) & (9.8,9.9,1208) & (-11.4,-15.7,21536) & (9.8,9.5,1227) &
(0.521,0.479) & \\
2& (4.0,14.6,22080) & (10.7,11.1,1212) & (-11.7,-18.3,21505) &
(10.3,7.7,1234)& (0.595,0.405) & \\
3& (5.0,16.8,22126) & (10.5,9.7,1204) &
(-9.0,-18.1,21378) & (10.4,8.9,1232) & (0.530,0.470) & \\
\noalign{\smallskip} \hline \noalign{\smallskip}
& & & Final Solutions & & & \\
1 & (4.8,14.4,22107) & (10.2,11.7,1185) & (-12.6,-17.4,21477) &
(9.1,8.8,1241) & (0.586,0.414) & 95.2\% \\
2 & (4.9,14.5,22107) & (10.2,11.7,1185) & (-12.6,-17.3,21477) &
(9.2,8.9,1240) & (0.585,0.415) & 95.2\%\\
3 & (4.8,14.4,22107) & (10.2,11.7,1185) & (-12.6,-17.4,21477) &
(9.1,8.8,1241) & (0.587,0.413) & 95.1\% \\
\noalign{\smallskip} \hline
\end{tabular} \end{center}
\end{table}
We chose to specify initial positions and dispersions for two clusters in
case 1, while in case 2 we specify a simple partition of sample in which all
galaxies with the declination offset $y>-5$ arcmin are assigned to be A401
members. With the different initial parameters we specified, the KMM
algorithm searched for an optimum two-group solution, and converged to very
similar results. The estimate of the correct allocation rate reaches 95\%.
In case 3 we tried another initial partition: A399 concentration includes
all galaxies within an angular distance of 20 arcmin of the central cD galaxy
UGC~2438, and we obtain the same final solution.
According to the final partition into two clusters, there are 127 galaxies
belonging to A401, and 88 galaxies for A399. The spatial distribution for
each cluster is plotted in Fig.~4(a). Then, we apply the $\kappa$-tests
again for individual clusters, and the probability
$P(\kappa_n>\kappa_n^{\rm obs})$ is estimated for each cluster. Table 3
presents the $\kappa$-test results, and the corresponding bubble plots are
given in Fig.~4(b). Compared with the previous $\kappa$-test on the whole binary
system (see Table 1 and Fig.~3), the variation of localized velocity
distribution for each cluster is significantly decreased, which indicates
that the KMM algorithm arrived at a robust partition.
\begin{figure}[t]
\plottwo{fig4_1.eps} {fig4_2.eps} \caption{(a) The projected
positions for the member galaxies of A399 (denoted by astroids)
and of A401 (denoted by plus sign). The dotted ellipses are the
$2\sigma$ contours of the fitted Gaussians. (b) Bubble plots for
groups of six nearest neighbors for 127 galaxies in A401 and 88
galaxies in A399. The dashed line separates two clusters.}
\end{figure}
\begin{table}[h]
\caption[]{Results of $\kappa$-tests for 88 galaxies in A399 and for 127
galaxies in A401} \vspace{-5mm}
\begin{center}
\begin{tabular}{cccccccc}
\noalign{\smallskip} \hline
Group size $n$ & 2& 3 & 4 & 5& 6& 7& 8\\
\noalign{\smallskip} \hline $P(\kappa_n>\kappa_n^{\rm obs})$ for A399
&33.5\%
&8.7\% & 17.5\%&
56.7\%& 52.7\%& 57.1\%& 59.1\%\\
$P(\kappa_n>\kappa_n^{\rm obs})$ for A401 &7.9\% & 59.4\%& 59.0\%&
87.3\%& 93.1\%& 87.6\%& 72.0\%\\
\noalign{\smallskip} \hline
\end{tabular} \end{center}
\end{table}
\section{MERGER DYNAMICS BETWEEN A399 AND A401}
\subsection{Velocity Structure}
The radial velocity distributions for the binary system and each cluster are
given in Fig.~5. The solid lines represent the best-fit Gaussians with the
mean velocities and dispersions listed in Table 2. To characterize the
kinematical properties of clusters of galaxies, two robust estimators
analogous to the velocity mean and standard deviation, namely the biweight
location ($C_{BI}$) and scale ($S_{BI}$), are defined by Beers et al.\
(1990). These two quantities are resistant in the presence of outliers, and
robust for a broad range of probable non-Gaussian underlying populations.
Using the $ROSTAT$ software, we compute the biweight location and scale of the
line-of-sight velocity distribution for each cluster. As a result, we obtain
$C_{BI}=21491\pm141$ km~s$^{-1}$\ and $S_{BI}=1315 \pm 82$ km~s$^{-1}$\ for A399, and
$C_{BI}=22069\pm107$ km~s$^{-1}$\ and $S_{BI}=1212 \pm 71$ km~s$^{-1}$\ for A401. The
physical line-of-sight velocity difference between these two clusters is
$V_r=\Delta (C_{BI})/(1+\bar{z})=539\pm165$ km~s$^{-1}$.
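For reference, the biweight estimators used above can be sketched as follows, with the conventional tuning constants $c=6$ for location and $c=9$ for scale of Beers et al.\ (1990); the mock velocity sample below is invented:

```python
import numpy as np

def biweight_location(x, c=6.0):
    # C_BI: median plus a weighted correction, downweighting outliers
    M = np.median(x)
    mad = np.median(np.abs(x - M))
    u = (x - M) / (c * mad)
    m = np.abs(u) < 1
    return M + np.sum((x[m] - M) * (1 - u[m]**2)**2) / np.sum((1 - u[m]**2)**2)

def biweight_scale(x, c=9.0):
    # S_BI: resistant analogue of the standard deviation
    M = np.median(x)
    mad = np.median(np.abs(x - M))
    u = (x - M) / (c * mad)
    m = np.abs(u) < 1
    num = np.sqrt(np.sum((x[m] - M)**2 * (1 - u[m]**2)**4))
    den = np.abs(np.sum((1 - u[m]**2) * (1 - 5 * u[m]**2)))
    return np.sqrt(len(x)) * num / den

rng = np.random.default_rng(3)
v = rng.normal(21491, 1315, 88)   # mock A399-like velocities, km/s
v[:4] += 8000                     # a few interlopers the estimators resist
c_bi, s_bi = biweight_location(v), biweight_scale(v)
```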
\begin{figure}[t] \begin{center}
\mbox{\epsfxsize=0.6\textwidth\epsfysize=0.9\textwidth\epsfbox{fig5.eps}}
\caption{The velocity distributions for the
galaxies in (a) the entire binary system, (b) A401, and (c) A399, superposed
with the fitted Gaussian distributions.} \end{center} \end{figure}
\subsection{Virial Mass Estimate}
A399 and A401 are gravitationally bound with each other. In order to
determine the dynamical state of the binary system, we will apply the virial
theorem for estimating the mass of each cluster. Assuming that the galaxy
cluster is bound and the galaxy orbits are random, the virial mass ($M_{\rm
vt}$) can be estimated from the following standard formula (Geller \&
Peebles 1973; Oegerle \& Hill 1994):
$$M_{\rm vt}=\frac{3\pi}{G}\sigma^2_{\rm r} D N_{\rm p} \left( \sum_{i>j}^{N}
\frac{1}{\theta_{ij}} \right)^{-1} ,$$
where $\sigma_{\rm r}$ is the
line-of-sight velocity dispersion, $D$ is the cosmological distance of the
cluster, $N_{\rm p}=N(N-1)/2$ is the number of galaxy pairs, and
$\theta_{ij}$ is the angular separation between galaxies $i$ and $j$. The
extended X-ray emission associated with A399 and A401, revealed by the {\it
ROSAT} HRI imaging observations (Fabian et al.\ 1997), indicates that at
least the first of these assumptions is reasonable. We derive the virial
masses of $2.0h^{-1} \times 10^{15} M_{\odot}$ and $2.1 h^{-1}\times 10^{15}
M_{\odot}$ for A399 and A401, respectively.
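A numerical sketch of this estimator, with made-up galaxy positions and A399-like values of $\sigma_{\rm r}$ and $D$ (all inputs below are illustrative assumptions, not the actual catalogue):

```python
import numpy as np

G = 4.301e-9        # gravitational constant in Mpc (km/s)^2 / M_sun
rng = np.random.default_rng(4)

D = 216.0                              # assumed cluster distance, h^-1 Mpc
sigma_r = 1315.0                       # line-of-sight dispersion, km/s
xy = rng.normal(0, 0.0015, (88, 2))    # mock angular positions, radians

N = len(xy)
Np = N * (N - 1) / 2                   # number of galaxy pairs
i, j = np.triu_indices(N, k=1)
theta = np.hypot(*(xy[i] - xy[j]).T)   # pairwise angular separations
# virial mass with the harmonic pair sum (inverse of sum of 1/theta_ij)
M_vt = 3 * np.pi / G * sigma_r**2 * D * Np / np.sum(1.0 / theta)
```

With these illustrative inputs the estimate lands near the $\sim 10^{15}\,M_{\odot}$ scale quoted in the text.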
\subsection{Two-Body Models}
With the estimate of the mass for each cluster, we can investigate the state
of evolution. The concise dynamical model for this binary system is the
two-body model which was first applied to clusters by Beers, Geller \&
Huchra (1982). In this model two clusters are treated as point masses
following a linear orbit under their mutual gravity. They are presumed to
have started with zero separation and then to have moved apart before
turning around. Given the projected separation of this binary system $R_{\rm
p}$, the relative radial velocity $V_{\rm r}$ and the total mass $M$, the
model yields the projection angle $\alpha$ (the angle between the
line joining the two clusters and the plane of the sky), the true separation
$R$, and relative velocity $V$. The equations of motion are as follows:
\begin{eqnarray}
V &=& \frac{V_{\rm r}}{\sin\alpha} = \left( \frac{2GM}{R_{\rm m}}
\right)^{1/2}\, \frac{\sin
\chi}{(1- \cos \chi)}, \\
R &=& \frac{R_{\rm p}}{\cos\alpha} = \frac{R_{\rm m}}{2}(1-\cos \chi),\\
t &=& t_0 = \left( \frac{R_{\rm m}^3}{8GM} \right) ^{1/2} (\chi-\sin \chi),
\end{eqnarray}
where $R_{\rm m}$ is the separation at maximum expansion, $M$ is the total mass of the binary
system, $t_0$ is the age of the universe, and $\chi$ is the developmental
angle tracing the merger process. The two clusters have zero separation when
$\chi=0,\,2\pi$, while they are at maximum expansion when $\chi=\pi,\,
3\pi$. Due to the ambiguity in observing the system only in projection, this
model usually results in more than one orbital solution.
The KMM analysis provides the initial estimate of the projected separation
$R_{\rm p}$ of the two-body model. Assuming a Friedmann-Robertson-Walker
cosmology with $\Omega_m=0.3$, $\Omega_{\Lambda}=0.7$, we adopt the age of
the universe as $t_0=9.43 h^{-1}$\,Gyr $= 2.976\,h^{-1} \times 10^{17} {\rm
s}$, and the angular separation between the centroids of these two clusters
($\sim 36.3$ arcmin) corresponds to a projection distance $R_{\rm p}$ of
$2.05 \,h^{-1}$Mpc at the average redshift. We take the total mass $M = \sum
M_{\rm vt} = 4.1\,h^{-1} \times 10^{15} M_{\odot}$ in our modeling.
Previous applications of the two-body model sought dynamical
solutions within a range of $0<\chi<2\pi$, assuming that the subclusters
start to separate at $t=0$ and are moving apart or coming together for
the first time in their history (Beers, Geller \& Huchra 1982; Oegerle \&
Hill 1994; Colless \& Dunn 1996). However, numerical simulations of cluster
collisions by McGlynn \& Fabian (1984) showed that clusters can even
pass through each other without destroying their optical components. For this
pair of clusters, recent observational evidence in the X-ray and radio bands
supports the picture that A399 and A401 have passed through each
other in the past. As mentioned in \S1, Fabian et al.\ (1997) found a linear
X-ray structure in A399 pointing at the cD galaxy of A401, indicating a past
violent interaction. On the other hand, the absence of cooling flows in
both A399 and A401 was first found by Edge et al.\ (1992), and confirmed by
Markevitch et al.\ (1998) and Sakelliou \& Ponman (2004). According to the
numerical experiments of McGlynn \& Fabian (1984), the structure of a
cooling flow can be disrupted by the merger of two similar clusters. A
simulations by Burns et al.\ (1997) showed that mergers of clusters with a
mass ratio of 1:4 may destroy a pre-existing cooling flow. The picture of a
previous interaction between these clusters is also supported by the
temperature map for this system which shows a bridge of matter between
clusters (Markevitch et al.\ 1998). Furthermore, an extensive cluster radio
halo is found to be associated with A401, suggesting the coalescence of
clusters (Harris et al.\ 1980).
Based on the above observations, we suppose that the two clusters started at
$t=0$ with zero separation, expanded to a maximum extent, and have since
experienced a close encounter, which corresponds to a $\chi$ range from 2$\pi$ to
4$\pi$. From the equations of motion given above, four two-body models are
allowed within $2\pi<\chi<4\pi$, including two expanding (outgoing) and two
collapsing (incoming) models. Fig.~6 shows the ($\alpha, V_{\rm r}$)-plane
where four solutions at $V_{\rm r}=539\pm165$ km~s$^{-1}$\ are found in the bound
region. Fig.~6 also shows the limit curve for the bound region, which can be
defined by the Newtonian criterion $V_r^2 R_{\rm p} \leq 2GM\, \sin^2\alpha
\, \cos \alpha$. Meanwhile, we derive three solutions within $0<\chi<2\pi$
for reference only. The seven solutions are presented in Table 4.
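The four bound solutions in $2\pi<\chi<4\pi$ can be recovered numerically by scanning the development angle $\chi$ through Eqs. (1)--(3); a sketch, taking $h=1$ and the inputs quoted in the text (incoming branches have $V<0$, so the magnitude of the predicted radial velocity is matched to the observed one):

```python
import numpy as np

G = 4.301e-9                  # gravitational constant, Mpc (km/s)^2 / M_sun
M = 4.1e15                    # total mass, M_sun (h = 1)
R_p = 2.05                    # projected separation, Mpc
t0 = 2.976e17 / 3.0857e19     # age of the universe, converted to Mpc/(km/s)
V_obs = 539.0                 # observed radial velocity difference, km/s

def v_r(chi):
    # Eqs. (1)-(3): for a given chi, solve for R_m, then R, V, and alpha
    Rm = (8 * G * M * t0**2 / (chi - np.sin(chi))**2)**(1.0 / 3.0)
    R = Rm * (1 - np.cos(chi)) / 2
    if R < R_p:               # projection impossible: would need cos(alpha) > 1
        return np.nan
    V = np.sqrt(2 * G * M / Rm) * np.sin(chi) / (1 - np.cos(chi))
    alpha = np.arccos(R_p / R)
    return abs(V) * np.sin(alpha)

chis = np.linspace(2 * np.pi + 1e-3, 4 * np.pi - 1e-3, 20000)
vals = np.array([v_r(c) for c in chis])
s = np.sign(vals - V_obs)
cross = np.nan_to_num(s[:-1] * s[1:]) < 0   # sign changes bracket the roots
roots = chis[:-1][cross]
n_solutions = len(roots)
```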
\begin{table}[h]
\caption[]{Gravitationally bound solutions for the two-body model}
\vspace{-5mm}
\begin{center}
\begin{tabular}{ccccccc}
\noalign{\smallskip} \hline
Case & Dynamical & $\chi$ & $\alpha$ & $V$ & $R$ & $R_{\rm m}$ \\
& Status & ($\pi$ rad.) & (degree) & (km~s$^{-1}$) & ($h^{-1}$ Mpc) & ($h^{-1}$ Mpc) \\
\noalign{\smallskip} \hline
(a)& Outgoing & $ 0.776$ & $81.6^{+0.6}_{-0.6}$ & $545^{+165}_{-166}$ &
$14.09^{+1.04}_{-1.00}$ & $15.99^{+3.32}_{-2.16}$ \\
(b)& Incoming & $ 1.175$ &$75.8^{+1.2}_{-1.3}$ & $556^{+175}_{-172}$ &
$8.38^{+0.75}_{-0.71}$ & $9.04^{+0.45}_{-0.46}$ \\
(c)& Incoming & $ 1.636$ &$9.0^{+2.8}_{-2.8}$ & $3460^{+27}_{-15}$ &
$2.08^{+0.01}_{-0.02}$ & $7.10^{+0.00}_{-0.01}$ \\
\noalign{\smallskip} \hline \noalign{\smallskip}
(d)&Outgoing & $ 2.374$ & $9.0^{+2.9}_{-2.8}$ & $3430^{+31}_{-22}$ &
$2.08^{+0.02}_{-0.02}$ &$6.75^{+0.00}_{-0.01}$ \\
(e)&Outgoing & $ 2.854$ &$67.5^{+0.2}_{-0.4}$ &$583^{+181}_{-179}$
&$5.36^{+0.04}_{-0.09}$ &$5.65^{+0.12}_{-0.11}$ \\
(f)&Incoming & $ 3.141$ &$64.4^{+1.1}_{-1.4}$ &$598^{+192}_{-187}$ &
$4.74^{+0.21}_{-0.23}$ & $4.98^{+0.09}_{-0.07}$ \\
(g)&Incoming & $ 3.523$ &$10.3^{+3.4}_{-3.2}$ &$3008^{+33}_{-35}$
&$2.08^{+0.03}_{-0.01}$ &$4.48^{+0.01}_{-0.00}$ \\
\noalign{\smallskip} \hline
\end{tabular} \end{center}
\end{table}
Fig.~6 shows that A399 and A401 are very likely to be gravitationally bound
unless the projection angle $\alpha$ is smaller than $7^{\circ}$. The
unbound orbit requires the true relative velocity $V > 4,400$ km~s$^{-1}$, which
would lead to a very rapid separation of the clusters. It is unlikely that
we are viewing the cluster system at such a special epoch, when the
projected separation rate would exceed 4 $h^{-1}$ Mpc per Gyr for a
pair of clusters only 2.05 $h^{-1}$ Mpc apart.
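The Newtonian bound criterion above can be evaluated directly. The sketch below is illustrative only: the total mass $M$ is a placeholder, not the virial mass derived in this paper, while $R_{\rm p}$ and $V_r$ follow the values quoted above.

```python
import numpy as np

# Direct evaluation of the Newtonian bound criterion
#   V_r^2 R_p <= 2 G M sin^2(alpha) cos(alpha).
# M below is an illustrative placeholder mass.
G = 4.301e-9           # G in Mpc (km/s)^2 / M_sun
M = 2.0e15             # assumed total mass of the pair, in M_sun
R_P = 2.05             # projected separation, h^-1 Mpc
V_R = 539.0            # relative radial velocity, km/s

def is_bound(alpha_deg, v_r=V_R, r_p=R_P, mass=M):
    """True if the pair satisfies the Newtonian bound criterion."""
    a = np.deg2rad(alpha_deg)
    return v_r**2 * r_p <= 2.0 * G * mass * np.sin(a)**2 * np.cos(a)
```

With these placeholder values, large projection angles satisfy the criterion while nearly plane-of-sky configurations (small $\alpha$) do not, mirroring the behaviour of the limit curve in Fig.~6.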
\begin{figure} \begin{center}
\mbox{\epsfxsize=0.8\textwidth\epsfysize=0.7\textwidth\epsfbox{fig6.eps}}
\caption{Projection angle $\alpha$ as a function of relative
radial velocity $V_r$ predicted by the two-body model. Four bound
solutions are presented at $V_r=539\pm165$ km~s$^{-1}$ within $2\pi <
\chi < 4\pi$. The limit between the bound and unbound regions is
also given.}
\end{center} \end{figure}
For cases (d) and (g), the present relative velocity of this bound system
{\em exceeds} the physical velocity dispersion within each cluster, and the
clusters are so close together that they should have begun to coalesce or
have just coalesced. If the system were in such an evolutionary state, we
would expect to see some strong merging distortion between these two
clusters in the X-ray surface brightness contours, contrary to the {\it
ROSAT} PSPC image in Fig.~1 of Fabian et al.\ (1997). Therefore these two
solutions can be definitely ruled out.
Cases (e) and (f) are two solutions with larger projection angles, but their
dynamical states are different. Case (e) is an expanding (outgoing) model in
which the last encounter occurred about $2.5h^{-1}$ Gyr ago, and the
cluster pair will expand for another $1.0h^{-1}$ Gyr, reaching a maximum
extent of $5.65 h^{-1}$ Mpc. The true relative velocity is 583 km~s$^{-1}$, and
they are $\sim 5.4h^{-1}$ Mpc apart along the direction with a projection
angle of $67^{\circ}.5$. A collapsing model defined in case (f) is also
allowed for this system. According to this model, the two clusters passed
through each other $3.7h^{-1}$ Gyr ago, and reached the last maximum
expansion of $5.0h^{-1}$ Mpc about $0.8h^{-1}$ Gyr ago.
It is rather hard to determine whether the system is collapsing or expanding
at present. The high resolution observations by the {\it XMM-Newton}
observatory confirmed the enhancement in X-ray flux and temperature in the
region between the two clusters, but no clues of intracluster compression or
shock wave were found (Sakelliou \& Ponman 2004). Gastaldello et al.\ (2003)
pointed out that the profiles of X-ray surface brightness, temperature and
metallicity will shed light on the large-scale dynamics of the binary
cluster system. Sakelliou \& Ponman (2004) presented a clear contour plot of
the residual smoothed images for these two clusters (see figure 6 therein)
where positive residuals can be found on the south-west of the central cD
galaxy in A401 as well as on the north of the one in A399. This indicates
that A401 is moving from south-west to the north-east while A399 is moving
to the south. This feature seems to favor the scenario in which this pair
of clusters is currently expanding. Since the projection angle is
$67^{\circ}.5$ in case (e), we cannot expect to see a significant
azimuthally asymmetric surface brightness for each cluster in the {\it
ROSAT} PSPC brightness contour maps of this cluster pair (see figure 2 in
Markevitch et al.\ 1998). However, a steeper gradient can be marginally
detectable in the north-east region of A401, and this also points toward an
expanding picture. Therefore case (e) is the more likely solution of the
two-body model.
It should be noted that the two-body model disregards any angular momentum
of the system, and it ignores the distribution of the matter within
individual clusters, which becomes important when two clusters are close to
merging. The gravitational interaction from the matter outside the cluster
pair is also neglected. It is well appreciated that dark matter
dominates the overall dynamical mass of individual clusters. Since our
estimate of the dynamical mass is based on the virial theorem, the two-body
model assumes that the dark matter within a cluster is in approximate virial
equilibrium. Despite the above-mentioned restrictions, the two-body model is
still a concise approach which is widely used to discuss the dynamical state
of gravitationally bound systems.
\section{SUMMARY}
We have investigated the dynamics of the cluster pair A399/A401, using the
existing redshift measurements. We applied the KMM algorithm on a sample of
215 galaxies with known radial velocities, and obtained a robust separation
of this cluster pair. Based on the velocity structure and virial mass
estimate of individual clusters, we explored the two-body model for studying
the merger dynamics between clusters. Because the observational features in
the radio and X-ray wavebands suggest that this pair of clusters has
experienced a close encounter in the past, we derived four gravitationally
bound solutions within the $\chi$ range between $2\pi$ and $4\pi$. The recent X-ray
data from the {\it ROSAT} and {\it XMM-Newton} observations can be used to
choose the most likely two-body model. In this scenario, the pair of
clusters with a true separation of 5.4$h^{-1}$ Mpc is currently expanding at
583 km~s$^{-1}$\ along the direction with a projection angle of $ 67^{\circ}.5$,
and it will reach a maximum extent of $5.65 \,h^{-1}$ Mpc in about
$1.0h^{-1}$ Gyr.
\begin{acknowledgements}
We thank the anonymous referee for his helpful comments. This work is
supported by the National Key Base Sciences Research Foundation under
contract G1999075402, and the National Natural Science Foundation of China
through grant 10273007.
\end{acknowledgements}
\section{Introduction}
\quad Simulated annealing (SA) is an umbrella term which denotes a set of stochastic optimization methods.
The goal of SA is to find the global minimum of a function $f: \mathbb{R}^d \to \mathbb{R}$, in particular when $f$ is non-convex.
These methods have many applications in physics, operations research and computer science,
see e.g. \cite{VA87, KAJ94, DCM19}.
The name is inspired by annealing in metallurgy, which is a process aiming to increase the size of crystals by heating and controlled cooling.
The stochastic version of SA was independently proposed by \cite{KGV83} and \cite{Cer85}.
The idea is as follows: consider a stochastic process related to $f$ which is subject to thermal noise.
When simulating this process, one decreases the temperature slowly over time.
As a result, the stochastic process escapes from saddle points and local optima, and converges to the global minimum of $f$ with high probability.
This is generally true if the cooling is slow enough, and the challenge is to find the right stochastic process, with the fastest possible cooling schedule, that still approximates the global optimum.
\quad In this paper, we explore the convergence rate of continuous-time SA and its discretization.
To be more precise, define the {\em continuous-time SA process} $(X_t; \, t \ge 0)$ by
\begin{equation}
\label{eq:SA}
dX_t = - \nabla f(X_t) dt + \sqrt{2 \tau_t} \, dB_t, \quad X_0 \stackrel{d}{=} \mu_0(dx),
\end{equation}
where $(B_t; \, t \ge 0)$ is $d$-dimensional Brownian motion, $\tau_t$ is the cooling schedule of temperature, and $\mu_0(dx)$ is some initial distribution.
This formulation was first considered by \cite{GH86}.
If $\tau_t \equiv \tau$ is constant in time, the process \eqref{eq:SA} is the well-known {\em overdamped Langevin equation} whose stationary distribution is the Gibbs measure $\nu_{\tau}(dx) \propto \exp(-f(x)/\tau)dx$.
Thus, we sometimes call the process \eqref{eq:SA} an {\em SA adapted overdamped Langevin equation} as well.
See Section \ref{sc3} for background.
We also consider the Euler-Maruyama discretization of the continuous-time simulated annealing process.
Let $\eta_k$ be the step size at iteration $k$, and $\Theta_k:=\sum_{j \le k} \eta_j$ be the cumulative step size up to iteration $k$.
The {\em discrete SA process} $(x_k; \, k = 0,1, \ldots)$ is defined by
\begin{equation}
\label{eq:discreteSA}
x_{k+1} = x_k - \nabla f(x_k) \eta_k + \sqrt{2 \tau_{\scaleto{\Theta_k}{5 pt}} \eta_k} Z_k, \quad x_0 \stackrel{d}{=} \mu_0(dx),
\end{equation}
where $(Z_k; \, k = 0, 1, \ldots)$ are independent and identically distributed standard normal vectors,
and $\tau_{\scaleto{\Theta_k}{5 pt}}$ is the cooling schedule at iteration $k$.
For $\tau_t \equiv \tau$ constant in time, the scheme \eqref{eq:discreteSA} is known as the
{\em unadjusted Langevin algorithm} (ULA) which approximates the Gibbs measure $\nu_{\tau}$.
The algorithm was introduced in \cite{Parisi81, GM94}, and further studied by \cite{RT96, DM17}.
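The discrete scheme \eqref{eq:discreteSA} is straightforward to implement. The sketch below runs it on a one-dimensional double well; the objective, the schedule constants, and the horizon are illustrative choices of ours, not those analyzed in this paper.

```python
import numpy as np

# Minimal sketch of the discrete SA scheme
#   x_{k+1} = x_k - grad f(x_k) eta_k + sqrt(2 tau(Theta_k) eta_k) Z_k
# on an illustrative double well with min f = 0 at x = +/- 1.
f = lambda x: (x**2 - 1.0)**2
grad_f = lambda x: 4.0 * x * (x**2 - 1.0)

def discrete_sa(x0, n_iter, E=1.0, theta=0.6, seed=0):
    rng = np.random.default_rng(seed)
    x, Theta = x0, 0.0
    for k in range(n_iter):
        eta = 0.05 * (k + 1.0)**(-theta)       # step size eta_k
        Theta += eta                           # cumulative step size Theta_k
        tau = E / np.log(Theta + np.e)         # cooling schedule ~ E / ln(t)
        x += -grad_f(x) * eta + np.sqrt(2.0 * tau * eta) * rng.standard_normal()
    return x

x_final = discrete_sa(x0=3.0, n_iter=2000)    # drifts into one of the wells near +/- 1
```

Even started far from the minima, the iterate is pulled into a well, with residual fluctuations controlled by the current temperature.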
\quad The goal of this paper is to study the decay in time of the tail probability
\begin{equation*}
\mathbb{P}(f(X_t) > \min f +\delta) \quad \mbox{or} \quad \mathbb{P}(f(x_k) > \min f+ \delta),
\end{equation*}
under suitable conditions on the function $f$, the cooling schedule $\tau_t$, and the discretization scheme $\eta_k, \Theta_k$.
Let us mention a few motivations.
First, there is a line of work on the interplay between sampling and optimization (\cite{RRT17, MCC19, MCJ19}).
Note that if $\tau_t \equiv \tau$ is constant in time, the overdamped Langevin equation converges to the Gibbs measure
$\nu_{\tau}$,
and for $\tau$ sufficiently small, the Gibbs measure $\nu_{\tau}$ approximates the Dirac mass at the global minimum of $f$.
Combining these two ingredients, SA sets the cooling schedule $\tau_t$ to decrease to $0$ over time.
It is then expected that the SA process \eqref{eq:SA} or \eqref{eq:discreteSA} converges to the global minimum as $t \to \infty$ or $k \to \infty$.
There are also recent works on various noisy gradient-based algorithms (\cite{GHJY15, JG17, CDT20, GHT20}), which aim to escape saddle points and find a local minimum of $f$.
In comparison with these methods, SA has the advantage of finding the global minimum, but at the cost of a longer exploration time.
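The second ingredient above, that $\nu_{\tau}$ approximates the Dirac mass at the global minimum for small $\tau$, can be seen numerically. The tilted double well below is an illustrative choice of ours; its global minimum sits near $x = -1.03$.

```python
import numpy as np

# Numerical illustration: the Gibbs measure nu_tau concentrates at the
# global minimum of f as tau -> 0 (illustrative 1d tilted double well).
f = lambda x: (x**2 - 1.0)**2 + 0.25 * x

def mass_near_global_min(tau, half_width=0.3):
    xs = np.linspace(-3.0, 3.0, 20001)
    w = np.exp(-(f(xs) - f(xs).min()) / tau)   # unnormalized Gibbs weights
    w /= w.sum()                               # uniform grid: discrete normalization
    x_star = xs[np.argmin(f(xs))]              # global minimizer
    return w[np.abs(xs - x_star) <= half_width].sum()
```

The mass in a fixed window around the global minimizer grows toward $1$ as the temperature is lowered, while at high temperature both wells carry appreciable mass.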
\quad The main tool in our analysis is the Eyring-Kramers law, which is a set of functional inequalities for the Gibbs measure at low temperatures.
Let us explain our results.
It was shown in \cite{GH86, CH87} that the correct order of $\tau_t$ for the process \eqref{eq:SA} to converge to the global minimum of $f$ is $(\ln t)^{-1}$.
In fact, there is a phase transition related to the {\em critical depth} $E_{*}$ of the function $f$:
\begin{enumerate}[itemsep = 3 pt]
\item[(a)]
If $\tau_t \le \frac{E}{\ln t}$ for $t$ large enough with $E < E_{*}$, then
$\limsup_{t \to \infty} \mathbb{P}(f(X_t) \le \min f + \delta) < 1$.
\item[(b)]
If $\tau_t \ge \frac{E}{\ln t}$ for $t$ large enough with $E > E_{*}$, then
$\lim_{t \to \infty} \mathbb{P}(f(X_t) \le \min f + \delta) = 1$.
\end{enumerate}
Roughly speaking, the critical depth $E_{*}$ is the largest hill one needs to climb starting from a local minimum to the global minimum.
The formal definition of the critical depth $E_{*}$ will be given in Assumption \ref{assump:nondeg}, but
see Figure \ref{fig:CD} below for an illustration when $f$ is a double-well function.
\begin{figure}[h]
\centering
\includegraphics[width=0.6\columnwidth]{DW.png}
\caption{Illustration of the critical depth of a double-well function.}
\label{fig:CD}
\end{figure}
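In one dimension the critical depth can be computed directly on a grid, since the saddle height between two minima is simply the maximum of $f$ on the segment joining them. The test function below is an illustrative choice of ours.

```python
import numpy as np

# Grid-based sketch of the critical depth E_* for an illustrative
# one-dimensional tilted double well.
f = lambda x: (x**2 - 1.0)**2 + 0.25 * x

xs = np.linspace(-2.0, 2.0, 400001)
fs = f(xs)

i_global = np.argmin(fs)                              # global minimum, near x = -1.03
right = xs > 0.2                                      # bracket the secondary well
i_local = np.where(right)[0][np.argmin(fs[right])]    # local minimum, near x = +0.97

lo, hi = sorted((int(i_global), int(i_local)))
f_saddle = fs[lo:hi + 1].max()                        # communicating saddle height

E_star = f_saddle - fs[i_local]                       # depth of the non-global well
```

For this function the secondary well has depth $E_* \approx 0.76$: the height of the barrier one must climb from the local minimum before descending to the global one.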
Part $(a)$ was proved by \cite{HKS89} using a sophisticated argument that involves the Poincar\'e inequality.
Part $(b)$ was proved by \cite{Miclo92}, who characterized the fastest cooling schedule by the Eyring-Kramers law for the log-Sobolev inequality.
See also \cite{Miclo95, Zitt08, FT20} for similar results under different conditions on the function $f$.
The convergence rate of $(f(X_t); \, t \ge 0)$ to the global minimum of $f$ was considered by \cite{Mar97}, but only for $\delta$ sufficiently small.
The rate of $\mathbb{P}(f(X_t) > \min f +\delta)$ for general $\delta > 0$ had not been studied, since no estimates of the log-Sobolev inequality for the Gibbs measure at low temperatures were known until the mid-2010s.
Taking advantage of recently developed theory (\cite{MS14, MSTW18, VW19}), we are able to give a non-asymptotic convergence rate of both continuous-time and discrete SA.
\quad To simplify the notation, we assume throughout this paper that
\begin{equation*}
\min_{\mathbb{R}^d} f(x) = 0,
\end{equation*}
i.e. the global minimum value of $f$ is $0$; this is without loss of generality, by replacing $f$ with $f - \min f$.
Our results are outlined as follows, and the precise statement of these results will be given in Section \ref{sc2}.
\begin{theorem*}[Informal]
Under some assumptions on the function $f$, we have:
\begin{enumerate}[itemsep = 3 pt]
\item[(i)]
Assume that $\tau_t \ge \frac{E}{\ln t}$ for $t$ large enough with $E > E_{*}$.
For $\delta > 0$,
there exists $C > 0$ (depending on $\delta, f, E, d$) such that
\begin{equation*}
\mathbb{P}(f(X_t) > \delta) \le C t^{-\min\left(\frac{\delta}{E}, \frac{1}{2}(1 - \frac{E_{*}}{E}) \right)}.
\end{equation*}
\item[(ii)]
Assume that $\tau_t \ge \frac{E}{\ln t}$ for $t$ large enough with $E > E_{*}$.
Also assume that $\Theta_k \to \infty$, $\eta_{k+1} \Theta_k \to 0$ and
$\frac{\ln \Theta_k}{\sum_{j \le k} \eta_{j+1} \Theta_j^{-E_{*}/E}} \to 0$ as $k \to \infty$.
For $\delta > 0$,
there exists $C > 0$ (depending on $\delta, f, E, d$) such that
\begin{equation*}
\mathbb{P}(f(x_k) > \delta) \le C \Theta_k^{-\min\left(\frac{\delta}{E}, \frac{1}{2}(1 - \frac{E_{*}}{E}) \right)}.
\end{equation*}
\end{enumerate}
\end{theorem*}
\quad The contribution of this paper is twofold:
\begin{itemize}[itemsep = 3 pt]
\item
{\bf Polynomial decay rate}.
We prove that the tail probability $\mathbb{P}(f(X_t) > \delta)$ (resp. $\mathbb{P}(f(x_k) > \delta)$) decays polynomially in time (resp. in cumulative step size).
In the continuous setting, \cite{M18} obtained the same rate of convergence for SA adapted underdamped Langevin equation, and \cite{MSTW18} considered an improvement of SA via parallel tempering.
However, the convergence rate for continuous-time SA, i.e. part $(i)$, has not yet been recorded in the literature,
though this result is probably understood in the Eyring-Kramers folklore.
We provide a self-contained treatment which bridges the literature and, more importantly, can be applied to obtain the new result in the discrete setting.
\item
{\bf Choice of step size}.
Part $(ii)$ for the discrete simulated annealing is, to the best of our knowledge, completely novel; it also gives practical guidance on the choice of step size for discretization.
The condition $\Theta_k \to \infty$ indicates that the step size cannot be chosen too small, while the condition $\eta_{k+1} \Theta_k \to 0$ suggests that the step size cannot be chosen too large.
The condition $\frac{\ln \Theta_k}{\sum_{j \le k} \eta_{j+1} \Theta_j^{-E_{*}/E}} \to 0$ is purely technical.
For instance, $\eta_{k} = k^{-\theta}$ with $\theta \in (\frac{1}{2}, 1]$ satisfies the conditions in $(ii)$ to ensure the convergence.
\end{itemize}
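The three step-size conditions can be checked numerically for the polynomial schedule above. In the sketch below, $\theta = 0.6$ and the ratio $E_*/E = 0.5$ are illustrative choices, not values fixed by the theory.

```python
import numpy as np

# Sanity check that eta_k = k^(-theta), theta in (1/2, 1], meets the
# step-size conditions (illustrative theta and E_*/E ratio).
theta, r = 0.6, 0.5
k = np.arange(1, 10**5 + 1, dtype=float)
eta = k**(-theta)
Theta = np.cumsum(eta)                           # Theta_k, diverges as k -> infinity
cond2 = eta[1:] * Theta[:-1]                     # eta_{k+1} Theta_k, tends to 0
denom = np.cumsum(eta[1:] * Theta[:-1]**(-r))    # sum_j eta_{j+1} Theta_j^(-E_*/E)
cond3 = np.log(Theta[1:]) / denom                # tends to 0
```

Both `cond2` and `cond3` shrink along the tail of the range, while `Theta` diverges, consistent with the requirements of part $(ii)$.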
Also note that the rate $\min\left(\frac{\delta}{E}, \frac{1}{2}(1 - \frac{E_{*}}{E}) \right)$ is smaller than $\frac{1}{2}$.
It is interesting to know whether this rate is optimal, and we leave the problem for future work.
\quad There is another interesting problem: the dependence of the constant $C$ on the dimension $d$.
The issue is subtle, since most analysis including the Eyring-Kramers law uses Laplace's method.
However, Laplace's method may fail if both the dimension $d$ and the inverse temperature $1/\tau$ tend to infinity (\cite{SM95}).
As shown in Remark \ref{rk:Laplace}, we obtain an upper bound for $C$ which is exponential in $d$.
This suggests that the convergence becomes exponentially slow as the dimension increases, which is consistent with the fact that
finding the global minimum of a general nonconvex function is NP-hard (\cite{JK17}).
\quad Finally, we mention a few approaches to accelerate or improve SA.
\cite{FQG97} considered a cooling schedule depending on both time and state;
\cite{Pav07} used the L\'evy flight;
\cite{M18} studied SA adapted to underdamped Langevin equation;
\cite{MSTW18} applied the replica exchange technique;
\cite{GXZ20} employed a relaxed stochastic control formulation, originally proposed by \cite{WZZ} for reinforcement learning, to derive a state-dependent temperature control schedule.
\medskip
The remainder of the paper is organized as follows.
Section \ref{sc2} presents the assumptions and our main results.
Section \ref{sc3} provides background on diffusion processes and functional inequalities.
The result for the continuous-time simulated annealing (Theorem \ref{thm:contrate}) is proved in Section \ref{sc4}.
The result for the discrete simulated annealing (Theorem \ref{thm:discreterate}) is proved in Section \ref{sc5}.
We conclude with Section \ref{sc6}.
\section{Main results}
\label{sc2}
\quad In this section, we make precise the informal statement in the introduction, and present the main results of the paper.
We first collect the notations that will be used throughout this paper.
\begin{itemize}[label = {--}, itemsep = 3 pt]
\item
The notation $| \cdot |$ is the Euclidean norm of a vector, and $a \cdot b$ is the scalar product of vectors $a$ and $b$.
\item
For a function $f: \mathbb{R}^d \to \mathbb{R}$,
let $\nabla f$, $\nabla^2 f$ and $\Delta f$ denote its gradient, Hessian and Laplacian respectively.
\item
For $X, Y$ two random variables, $||X - Y||_{TV}$ denotes the total variation distance between the distributions of $X$ and $Y$.
\item
The symbol $a \sim b$ means that $a/b \to 1$ as some problem parameter tends to $0$ or $\infty$.
Similarly, the symbol $a = \mathcal{O}(b)$ means that $a/b$ is bounded as some problem parameter tends to $0$ or $\infty$.
\end{itemize}
We use $C$ for a generic constant which depends on problem parameters ($\delta, f, E \ldots$), and may change from line to line.
\quad Next, we present a few assumptions on the function $f$.
These assumptions are standard in the study of metastability.
\begin{assumption}
\label{assump:reggrow}
Let $f: \mathbb{R}^d \to \mathbb{R}$ be smooth, bounded from below,
and satisfy the conditions:
\begin{enumerate}[itemsep = 3 pt]
\item[(i)]
$f$ is non-degenerate on the set of critical points. That is, for some $C > 0$,
\begin{equation*}
\frac{|\xi|}{C} \le |\nabla^2f(x) \xi| \le C|\xi| \quad \mbox{for each } x \in \{z: \nabla f(z) = 0\}.
\end{equation*}
\item[(ii)]
There exists $C > 0$ such that
\begin{equation*}
\liminf_{|x| \to \infty} \frac{|\nabla f(x)|^2 - \Delta f(x)}{|x|^2} \ge C, \quad \inf_{x} \nabla^2f(x) \ge -C.
\end{equation*}
\end{enumerate}
\end{assumption}
\quad Let us make a few comments on Assumption \ref{assump:reggrow}.
The condition $(ii)$ implies that $f$ has at least quadratic growth at infinity.
This is a necessary and sufficient condition to obtain the log-Sobolev inequality (see \cite[Theorem 3.1.21]{Royer07} and
Section \ref{sc32}) which is key to convergence analysis.
The conditions $(i)$ and $(ii)$ imply that the set of critical points is discrete and finite \cite[Remark 1.6]{MS14}.
In particular, it follows that the set of local minimum points $\{m_1, \ldots, m_N\}$ is also finite, with $N$ the number of local minimum points of $f$.
\quad To keep the presentation simple, we make additional assumptions on $f$, following \cite[Assumption 2.5]{MSTW18}.
Define the saddle height $\widehat{f}(m_i,m_j)$ between two local minimum points $m_i, m_j$ by
\begin{equation}
\widehat{f}(m_i,m_j) : = \inf \left\{\max_{s \in [0,1]} f(\gamma(s)): \gamma \in \mathcal{C}[0,1], \, \gamma(0) = m_i, \, \gamma(1) = m_j \right\}.
\end{equation}
See Figure \ref{fig:CD} for an illustration of the saddle height $\widehat{f}(m_0,m_1)$ when $f$ is a double-well function with $m_0$ the global minimum and $m_1$ the local minimum.
\begin{assumption}
\label{assump:nondeg}
Let $m_1, \cdots, m_N$ be the positions of the local minima of $f$.
\begin{enumerate}[label=(\roman*), itemsep = 3 pt]
\item
$m_1$ is the unique global minimum point of $f$, and $m_1, \ldots, m_N$ are ordered in the sense that there exists $\delta > 0$ such that
\begin{equation*}
f(m_N) \geq f(m_{N-1}) \geq \cdots \geq f(m_2) \ge \delta \quad \mbox{and} \quad f(m_1) = 0.
\end{equation*}
\item
For each $i, j \in \{1, \ldots, N\}$, the saddle height between $m_i, m_j$ is attained at a unique critical point $s_{ij}$ of index one. That is, $f(s_{ij}) = \widehat{f}(m_i,m_j)$, and if $\{\lambda_1, \ldots, \lambda_d\}$ are the eigenvalues of $\nabla^2 f(s_{ij})$, then $\lambda_1< 0$ and $\lambda_i > 0$ for $i \in \{2, \ldots, d\}$.
The point $s_{ij}$ is called the communicating saddle point between the minima $m_i$ and $m_j$.
\item
There exists $p \in [N]$ such that the energy barrier $f(s_{p1}) - f(m_p)$ dominates all the others. That is,
there exists $\delta > 0$ such that for all $i \in [N] \setminus \{p\}$,
\begin{equation*}
E_* := f(s_{p1}) - f(m_p) \ge f(s_{i1}) - f(m_i) + \delta.
\end{equation*}
The dominating energy barrier $E_*$ is called the critical depth.
\end{enumerate}
\end{assumption}
\quad The convergence result for the continuous-time SA \eqref{eq:SA} is stated as follows.
The proof will be given in Section \ref{sc4}.
\begin{theorem}
\label{thm:contrate}
Let $f$ satisfy Assumption \ref{assump:reggrow} $\&$ \ref{assump:nondeg}.
Assume that $\tau_t \sim \frac{E}{\ln t}$ and $\frac{d}{dt}\left(\frac{1}{\tau_t}\right) = \mathcal{O}\left(\frac{1}{t}\right)$
as $t \to \infty$ with $E > E_{*}$.
Also assume the moment condition for the initial distribution $\mu_0$: for each $p \ge 1$,
there exists $C_p > 0$ such that
\begin{equation}
\label{eq:moment}
\int_{\mathbb{R}^d} f(x)^p \mu_0(dx) \le C_p.
\end{equation}
Then for each $\delta, \varepsilon > 0$, there exists $C > 0$ (depending on $\delta, \varepsilon, f, \mu_0, E, d$) such that
\begin{equation}
\label{eq:main1}
\mathbb{P}(f(X_t) > \delta) \le C t^{-\min\left(\frac{\delta}{E}, \frac{1}{2}(1 - \frac{E_{*}}{E}) \right) + \varepsilon}.
\end{equation}
\end{theorem}
\quad To get the convergence result for the discrete simulated annealing, we need an additional condition on the function $f$.
\begin{assumption}
\label{assump:upper}
The gradient $\nabla f$ is globally Lipschitz, i.e. $|\nabla f(x) - \nabla f(y)| \le L |x - y|$ for some $L > 0$.
\end{assumption}
\quad The convergence result for the discrete simulated annealing \eqref{eq:discreteSA} is stated as follows.
The proof will be given in Section \ref{sc5}.
\begin{theorem}
\label{thm:discreterate}
Let $f$ satisfy Assumption \ref{assump:reggrow}, \ref{assump:nondeg} $\&$ \ref{assump:upper}, and let the condition \eqref{eq:moment} for $\mu_0$ hold.
Assume that $\tau_t \sim \frac{E}{\ln t}$ and $\frac{d}{dt}\left(\frac{1}{\tau_t}\right) = \mathcal{O}\left(\frac{1}{t}\right)$ as $t \to \infty$ with $E > E_{*}$.
Also assume that $\Theta_k \to \infty$,
\begin{equation}
\label{eq:dominate}
\eta_{k+1}\Theta_k \to 0,
\end{equation}
and
\begin{equation}
\label{eq:dominate2}
\frac{\ln \Theta_k}{\sum_{j \le k} \eta_{j+1} \Theta_j^{-E_{*}/E}} \to 0,
\end{equation}
as $k \to \infty$.
Then for each $\delta, \varepsilon > 0$, there exists $C > 0$ (depending on $\delta, \varepsilon, f, \mu_0, E, d$) such that
\begin{equation}
\label{eq:main2}
\mathbb{P}(f(x_k) > \delta) \le C \Theta_k^{-\min\left(\frac{\delta}{E}, \frac{1}{2}(1 - \frac{E_{*}}{E}) \right) + \varepsilon}.
\end{equation}
\end{theorem}
\section{Preliminaries}
\label{sc3}
\quad In this section, we present a few vocabularies and basic results of diffusion processes and functional inequalities.
We also explain how these results are applied in the setting of SA, which will be useful in our convergence analysis.
\subsection{Diffusion processes and SA}
\label{sc31}
Consider the general diffusion process $(X_t; \, t \ge 0)$ in $\mathbb{R}^d$ of form:
\begin{equation}
\label{eq:diffusion}
dX_t = b(t, X_t) dt + \sigma(t, X_t) dB_t, \quad X_0 \stackrel{d}{=} \mu_0(dx),
\end{equation}
where $(B_t; t \ge 0)$ is a $d$-dimensional Brownian motion,
with the drift $b: \mathbb{R}_{+} \times \mathbb{R}^d \to \mathbb{R}^d$ and the covariance matrix $\sigma: \mathbb{R}_{+} \times \mathbb{R}^d \to \mathbb{R}^{d \times d}$.
To ensure the well-posedness of the SDE \eqref{eq:diffusion}, some growth and regularity conditions on $b$ and $\sigma$ are required.
For instance,
\begin{itemize}[itemsep = 3 pt]
\item
If $b$ and $\sigma$ are Lipschitz and have linear growth in $x$ uniformly in $t$, then \eqref{eq:diffusion} has a strong solution which is pathwise unique.
\item
If $b$ is bounded, and $\sigma$ is bounded, continuous and strictly elliptic, then \eqref{eq:diffusion} has a weak solution which is unique in distribution.
\end{itemize}
We refer to \cite{SV79, RW87} for background and further developments on the well-posedness of SDEs, and to
\cite[Chapter 1]{CE05} for a review of related results.
\quad Another important aspect is the distributional property of $(X_t; \, t \ge 0)$ governed by the SDE \eqref{eq:diffusion}.
Let $\mathcal{L}$ be the infinitesimal generator of the diffusion process $X$ defined by
\begin{align}
\label{eq:infg}
\mathcal{L}g(t,x) &: = b(t,x) \cdot \nabla g(x) + \frac{1}{2} \sigma(t,x) \sigma(t,x)^T:\nabla^2 g(x) \notag\\
&: = \sum_{i=1}^d b_i(t,x) \frac{\partial}{\partial x_i} g(x) + \frac{1}{2} \sum_{i,j = 1}^d \left( \sigma(t,x) \sigma(t,x)^T \right)_{ij} \frac{\partial^2}{\partial x_i \partial x_j} g(x),
\end{align}
and $\mathcal{L}^{*}$ be the corresponding adjoint operator given by
\begin{align}
\label{eq:adjoint}
\mathcal{L}^{*}g(t,x) & := -\nabla \cdot (b(t,x) g(x)) + \frac{1}{2} \nabla^2:(\sigma(t,x) \sigma(t,x)^T g(x)) \notag \\
&:= -\sum_{i = 1}^d \frac{\partial}{\partial x_i} (b_i(t,x) g(x)) + \frac{1}{2} \sum_{i,j = 1}^d \frac{\partial^2}{\partial x_i \partial x_j} (\sigma(t,x) \sigma(t,x)^T g(x))_{ij},
\end{align}
where $g: \mathbb{R}^d \to \mathbb{R}$ is a suitably smooth test function,
and $:$ denotes the Frobenius inner product which is the component-wise inner product of two matrices.
The probability density $\rho_t(\cdot)$ of the process $X$ at time $t$ then satisfies the Fokker-Planck equation:
\begin{equation}
\label{eq:FPdiffusion}
\frac{\partial \rho_t}{\partial t} = \mathcal{L}^{*} \rho_t.
\end{equation}
Specializing \eqref{eq:FPdiffusion} to the SA process \eqref{eq:SA} with $b(t,x) = -\nabla f(x)$ and $\sigma(t,x)= \sqrt{2 \tau_t} \, I_d$, we have that the probability density $\mu_t(\cdot)$ of $X$ governed by the SDE \eqref{eq:SA} satisfies
\begin{equation}
\label{eq:FPSA}
\frac{\partial \mu_t}{\partial t} = \nabla \cdot (\mu_t \nabla f) + \tau_t \Delta \mu_t.
\end{equation}
Under further growth conditions on $b$ and $\sigma$, it can be shown that as $t\to \infty$,
$\rho_t(\cdot) \to \rho_{\infty}(\cdot)$ which is the stationary distribution of $(X_t; \, t \ge 0)$.
It is easily deduced from \eqref{eq:FPdiffusion} that $\rho_{\infty}$ is characterized by the equation $\mathcal{L}^{*} \rho_{\infty} = 0$;
see \cite{EK86, MT933} for a general theory on stability of diffusion processes, and \cite[Section 2]{Tang19} for a summary with various pointers to the literature.
\quad However, for general $b$ and $\sigma$, the stationary distribution $\rho_{\infty}(\cdot)$ does not have a closed-form expression.
One good exception is $b(t,x) = - \nabla f(x)$ and $\sigma(t,x) = \sqrt{2 \tau} \, I_d$, where $X$ is governed by the overdamped Langevin equation:
\begin{equation}
\label{eq:overdamped}
dX_t = - \nabla f(X_t) dt + \sqrt{2 \tau} \, dB_t, \quad X_0 \stackrel{d}{=} \mu_0(dx).
\end{equation}
Such a process is time-reversible, and
the stationary distribution, under some growth condition on $f$, is the Gibbs measure
\begin{equation}
\label{eq:Gibbs}
\nu_{\tau}(dx) = \frac{1}{Z_{\tau}} \exp \left(-\frac{f(x)}{\tau} \right)dx,
\end{equation}
where $Z_{\tau}:= \int_{\mathbb{R}^d} \exp(-f(x)/\tau)dx$ is the normalizing constant.
Much is known about the overdamped Langevin dynamics.
For instance, if $f$ is $\lambda$-convex (i.e. $\nabla^2 f + \lambda I_d$ is positive definite),
the overdamped Langevin process \eqref{eq:overdamped} converges exponentially in the Wasserstein metric with rate $\lambda$ to the Gibbs measure $\nu_{\tau}$ (\cite{CMV06}).
See also \cite{BGL14} for modern techniques to analyze the evolution of the overdamped Langevin equation and generalizations.
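The stationarity of the Gibbs measure can be verified symbolically. The sketch below checks, in one dimension and for a generic smooth $f$, that the unnormalized density $\rho = e^{-f/\tau}$ satisfies $\mathcal{L}^{*}\rho = 0$, i.e. $\frac{d}{dx}(\rho f') + \tau \rho'' = 0$.

```python
import sympy as sp

# Symbolic check (1d, generic smooth f) that rho = exp(-f/tau) annihilates
# the adjoint generator of the overdamped Langevin dynamics:
#   d/dx (rho * f') + tau * rho'' = 0.
x = sp.Symbol('x')
tau = sp.Symbol('tau', positive=True)
f = sp.Function('f')(x)
rho = sp.exp(-f / tau)

residual = sp.simplify(sp.diff(rho * sp.diff(f, x), x) + tau * sp.diff(rho, x, 2))
```

The two terms cancel identically, confirming that the Gibbs measure \eqref{eq:Gibbs} is invariant for \eqref{eq:overdamped}.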
\quad Now we turn to the SA process \eqref{eq:SA}.
The difference between the overdamped Langevin process \eqref{eq:overdamped} and the process \eqref{eq:SA} is that the temperature $\tau_t$ of the latter is decreasing in time.
Due to the time dependence, the limiting distribution of SA is unknown.
As we will see in Section \ref{sc4}, the idea is to approximate \eqref{eq:SA} by a process of Gibbs measures with temperature $\tau_t$.
Since $\tau_t$ decreases to $0$ in the limit, the problem boils down to studying Gibbs measures at low temperatures.
In the next section, we recall some results of Gibbs measures at low temperatures,
which are motivated by applications in molecular dynamics and Bayesian statistics.
\subsection{Functional inequalities and the Eyring-Kramers law}
\label{sc32}
Here we present functional inequalities of Gibbs measures at low temperatures $(\tau \to 0)$.
Let $\mu$ and $\nu$ be two probability measures on $\mathbb{R}^d$ such that $\mu$ is absolutely continuous relative to $\nu$, with $d\mu/d\nu$ the Radon-Nikodym derivative.
Define the relative entropy or KL-divergence $H(\mu|\nu)$ of $\mu$ with respect to $\nu$ by
\begin{equation}
\label{eq:KL}
H(\mu|\nu):= \int \log\bigg( \frac{d\mu}{d\nu} \bigg) d\mu = \int \frac{d\mu}{d\nu} \log\bigg( \frac{d\mu}{d\nu} \bigg)d \nu,
\end{equation}
and the Fisher information $I(\mu|\nu)$ of $\mu$ with respect to $\nu$ by
\begin{equation}
\label{eq:FI}
I(\mu|\nu):= \frac{1}{2}\int \bigg|\nabla \bigg(\frac{d\mu}{d\nu}\bigg) \bigg|^2 \bigg( \frac{d \mu}{d \nu}\bigg)^{-1} d\nu.
\end{equation}
We say that the probability measure $\nu$ satisfies the log-Sobolev inequality (LSI) with constant $\alpha > 0$,
if for all probability measures $\mu$ with $I(\mu|\nu) < \infty$,
\begin{equation}
H(\mu|\nu) \le \frac{1}{\alpha} I(\mu|\nu).
\end{equation}
The constant $\alpha$ is called the LSI constant for the probability measure $\nu$.
For instance, the LSI constant $\alpha = 1$ for $\nu$ the multivariate Gaussian with mean $0$ and covariance matrix $I_d$.
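As a quick sanity check of the Gaussian case (a direct computation, using Gaussian translates as test measures): for $\nu = \mathcal{N}(0, I_d)$ and $\mu = \mathcal{N}(m, I_d)$, we have $\frac{d\mu}{d\nu}(x) = \exp\big(m \cdot x - \frac{|m|^2}{2}\big)$, so that
\begin{equation*}
H(\mu|\nu) = \int \Big(m \cdot x - \frac{|m|^2}{2}\Big) \mu(dx) = \frac{|m|^2}{2},
\end{equation*}
while $\nabla \big(\tfrac{d\mu}{d\nu}\big) = m \, \tfrac{d\mu}{d\nu}$ gives
\begin{equation*}
I(\mu|\nu) = \frac{1}{2}\int |m|^2 \Big(\frac{d\mu}{d\nu}\Big)^2 \Big(\frac{d\mu}{d\nu}\Big)^{-1} d\nu = \frac{|m|^2}{2}.
\end{equation*}
Hence $H(\mu|\nu) = I(\mu|\nu)$ for every $m$, so the constant $\alpha = 1$ cannot be improved.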
\quad The LSI also plays an important role in studying the convergence rate of the overdamped Langevin equation.
Recall that $\nu_{\tau}$ is the Gibbs measure defined by \eqref{eq:Gibbs}, and assume that $\nu_{\tau}$ satisfies the LSI with constant $\alpha_{\tau} > 0$.
It follows from \cite[Theorem 1.7]{Sch12} that, letting $\mu_{\tau, t}$ be the probability distribution of $X_t$ defined by \eqref{eq:overdamped},
we have
\begin{equation}
\label{eq:entropydecay}
H(\mu_{\tau,t}|\nu_{\tau}) \le e^{-2 \tau \alpha_{\tau} t}H(\mu_{\tau, 0}|\nu_{\tau}).
\end{equation}
So the larger the value of $\alpha_{\tau}$, the faster the convergence of the overdamped Langevin process in the KL divergence.
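For the quadratic potential $f(x) = x^2/2$ the decay bound \eqref{eq:entropydecay} can be checked in closed form (an illustration we add here, not taken from the references): the overdamped Langevin dynamics is then an Ornstein--Uhlenbeck process, $\nu_{\tau} = N(0,\tau)$ has LSI constant $\alpha_{\tau} = 1/\tau$ with the conventions above, and if $X_0 \sim N(m_0, \tau)$ then $X_t \sim N(m_0 e^{-t}, \tau)$, so the bound holds with equality:

```python
import numpy as np

def kl_gauss(m1, v1, m2, v2):
    """KL divergence between N(m1, v1) and N(m2, v2) (v1, v2 are variances)."""
    return 0.5 * (v1 / v2 + (m1 - m2)**2 / v2 - 1 + np.log(v2 / v1))

tau, m0 = 0.5, 2.0
alpha = 1.0 / tau                      # LSI constant of nu_tau = N(0, tau)
H0 = kl_gauss(m0, tau, 0.0, tau)
gaps = []
for t in [0.0, 0.5, 1.0, 3.0]:
    Ht = kl_gauss(m0 * np.exp(-t), tau, 0.0, tau)   # law of X_t is N(m0 e^{-t}, tau)
    bound = np.exp(-2 * tau * alpha * t) * H0       # right side of (eq:entropydecay)
    gaps.append(abs(Ht - bound))
print(gaps)                            # the bound is attained for this quadratic f
```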
The subscript `$\tau$' in $\alpha_{\tau}$ suggests the dependence of the LSI constant on the temperature $\tau$, and we are interested in the asymptotics of $\alpha_{\tau}$ at low temperatures as $\tau \to 0$.
This problem was considered in \cite[Corollary 3.18]{MS14}, where a sharp lower bound for $\alpha_{\tau}$ as $\tau \to 0$ was derived.
\begin{lemma}
\label{fact:EK}
Let $f$ satisfy Assumption \ref{assump:reggrow} $\&$ \ref{assump:nondeg}. Then
the Gibbs measure $\nu_{\tau}$ defined by \eqref{eq:Gibbs} satisfies the LSI with constant $\alpha_{\tau} > 0$ such that
\begin{equation}
\label{eq:EKformula}
\alpha_{\tau} \sim C \exp\left( -\frac{E_{*}}{\tau}\right) \quad \mbox{as } \tau \to 0,
\end{equation}
where $C > 0$ depends on $f, d$.
\end{lemma}
\quad The Eyring-Kramers law provides an estimate on the spectral gap of the overdamped Langevin equation \eqref{eq:overdamped}.
It dates back to \cite{Eyring35, Kramers40} in the study of metastability in chemical reactions, and was proved rigorously in \cite{Bovier04, Bovier05}.
Lemma \ref{fact:EK} is the LSI version of the Eyring-Kramers law, which is stronger than the spectral gap estimate implied by the Poincar\'e inequality.
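The scaling \eqref{eq:EKformula} can be illustrated numerically (a rough sketch we add here; the finite-difference discretization, truncation interval and grid size are ad hoc): for the symmetric double well $f(x) = (x^2-1)^2$ in dimension one, with barrier $E_{*} = 1$, the spectral gap $\lambda_{\tau}$ of the generator of \eqref{eq:overdamped} should satisfy $\tau \ln(1/\lambda_{\tau}) \to E_{*}$, and this is already visible at moderate $\tau$:

```python
import numpy as np

def spectral_gap(tau, a=2.0, n=800):
    """Second-smallest eigenvalue of -L, L g = tau g'' - f' g', for
    f(x) = (x^2 - 1)^2 on [-a, a] with reflecting boundary, via the
    Dirichlet form E(g) = (tau/h) sum_i w_{i+1/2} (g_{i+1} - g_i)^2 in L^2(nu)."""
    x = np.linspace(-a, a, n)
    h = x[1] - x[0]
    f = (x**2 - 1)**2
    fm = ((0.5 * (x[:-1] + x[1:]))**2 - 1)**2        # f at grid midpoints
    w = np.exp(-f / tau)                             # unnormalized Gibbs weights
    wm = np.exp(-fm / tau)
    K = np.zeros((n, n))
    idx = np.arange(n - 1)
    K[idx, idx + 1] = K[idx + 1, idx] = -(tau / h) * wm
    np.fill_diagonal(K, -K.sum(axis=1))              # zero row sums: Neumann b.c.
    d = 1.0 / np.sqrt(h * w)
    A = d[:, None] * K * d[None, :]                  # symmetrized eigenproblem
    return np.linalg.eigvalsh(A)[1]                  # eigenvalue 0 is the constants

est = {tau: tau * np.log(1.0 / spectral_gap(tau)) for tau in [0.3, 0.2, 0.15]}
print(est)                                           # values near E_* = 1
```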
\quad Further define the Wasserstein distance $W_2(\mu, \nu)$ between $\mu$ and $\nu$ by
\begin{equation}
\label{eq:W2}
W_2(\mu, \nu): = \inf_{\Pi}\sqrt{ \int |x - y|^2 \Pi(dx,dy)},
\end{equation}
where the infimum is over all joint distributions $\Pi$ coupling $\mu$ and $\nu$.
We say that the probability measure $\nu$ satisfies Talagrand's inequality with constant $\gamma > 0$, if for all probability measures $\mu$ with $H(\mu|\nu)< \infty$,
\begin{equation}
\label{eq:Talagrand}
W_2(\mu, \nu) \le \sqrt{\frac{2}{\gamma}H(\mu|\nu)}.
\end{equation}
It follows from \cite[Theorem 1]{OV00} that the LSI implies Talagrand's inequality with the same constant.
That is,
if $\nu$ satisfies the LSI with constant $\alpha > 0$, then $\nu$ also satisfies Talagrand's inequality with constant $\gamma = \alpha$.
Combining with Lemma \ref{fact:EK}, we get a lower bound estimate of Talagrand's inequality constant for the Gibbs measure $\nu_{\tau}$.
\begin{lemma}
\label{fact:Tala}
Let $f$ satisfy Assumption \ref{assump:reggrow} $\&$ \ref{assump:nondeg}.
Then the Gibbs measure $\nu_{\tau}$ defined by \eqref{eq:Gibbs} satisfies Talagrand's inequality with constant $\gamma_{\tau} > 0$ such that
\begin{equation}
\label{eq:EKformulaTala}
\gamma_{\tau} \sim C \exp\left( -\frac{E_{*}}{\tau}\right) \quad \mbox{as } \tau \to 0
\end{equation}
where $C > 0$ depends on $f, d$.
\end{lemma}
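Both inequalities above are saturated by Gaussian examples. As a quick check (ours, bypassing the infimum in \eqref{eq:W2} via closed forms), for $\nu = N(0,1)$, where $\gamma = 1$, Talagrand's inequality in the form $W_2(\mu,\nu)^2 \le \frac{2}{\gamma}H(\mu|\nu)$ can be verified for Gaussian $\mu$ using the explicit distance $W_2(N(m_1,s_1^2), N(m_2,s_2^2))^2 = (m_1-m_2)^2 + (s_1-s_2)^2$:

```python
import numpy as np

def w2_gauss_sq(m1, s1, m2, s2):
    # squared W2 distance between one-dimensional Gaussians (closed form)
    return (m1 - m2)**2 + (s1 - s2)**2

def kl_gauss(m1, s1, m2, s2):
    v1, v2 = s1**2, s2**2
    return 0.5 * (v1 / v2 + (m1 - m2)**2 / v2 - 1 + np.log(v2 / v1))

gamma = 1.0                               # Talagrand constant of nu = N(0,1)
rng = np.random.default_rng(0)
slacks = []
for _ in range(200):
    m, s = rng.normal(), float(np.exp(0.3 * rng.normal()))
    slacks.append((2 / gamma) * kl_gauss(m, s, 0.0, 1.0) - w2_gauss_sq(m, s, 0.0, 1.0))
print(min(slacks))                        # nonnegative; zero iff mu is a shift of nu
```

Here the slack equals $2(s - 1 - \ln s) \ge 0$, vanishing exactly when $\mu$ is a translate of $\nu$.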
\section{Continuous-time simulated annealing}
\label{sc4}
\quad In this section, we prove Theorem \ref{thm:contrate} by using the ideas developed in \cite{Miclo92,M18,MSTW18}.
Let $\mu_t$ be the probability measure of $X_t$ defined by \eqref{eq:SA}.
The key idea is to compare $\mu_t$ with the time-dependent Gibbs measure $\nu_{\tau_t}$ given by
\begin{equation}
\label{eq:Gibbstime}
\nu_{\tau_t}(dx) = \frac{1}{Z_{\tau_t}} \exp\left(-\frac{f(x)}{\tau_t} \right) dx,
\end{equation}
where $Z_{\tau_t}: = \int_{\mathbb{R}^d} \exp(-f(x)/\tau_t)\, dx$ is the normalizing constant.
Note that $\nu_{\tau_t}$ will concentrate on the minimum point of $f$ as $t \to \infty$ since $\tau_t \to 0$ as $t \to \infty$.
We will see that $\nu_{\tau_t}$ is close to $\mu_t$ in some sense as $t \to \infty$.
The proof of Theorem \ref{thm:contrate} is broken into four steps.
\medskip
{\bf Step 1: Reduce $\mu_t$ to $\nu_{\tau_t}$.}
We establish a bound that relates $\nu_{\tau_t}$ to $\mu_t$.
Let $(\widetilde{X}_t; \, t \ge 0)$ be a process whose distribution is $\nu_{\tau_t}$ at time $t$, coupled with $(X_t; \, t \ge 0)$ on the same probability space.
Fix $\delta > 0$. We have
\begin{align}
\label{eq:Pbound}
\mathbb{P}(f(X_t) > \delta) &= \mathbb{P}(f(X_t) > \delta, f(\widetilde{X}_t)> \delta) + \mathbb{P}(f(X_t) > \delta, f(\widetilde{X}_t) \le \delta) \notag\\
& \le \mathbb{P}(f(\widetilde{X}_t)> \delta) + \|\mu_t - \nu_{\tau_t}\|_{TV} \notag \\
& \le \mathbb{P}(f(\widetilde{X}_t)> \delta) + \sqrt{2 H(\mu_t|\nu_{\tau_t})},
\end{align}
where we use Pinsker's inequality \cite[Lemma 2.5]{Tsy09} in the last inequality.
Now the problem boils down to estimating $\mathbb{P}(f(\widetilde{X}_t)> \delta)$ and $H(\mu_t|\nu_{\tau_t})$.
\medskip
{\bf Step 2: Long-time behavior of $f(\widetilde{X}_t)$.}
We study the asymptotics of $\mathbb{P}(f(\widetilde{X}_t)> \delta)$ as $t \to \infty$.
The following lemma provides a quantitative estimate of how $\nu_{\tau_t}$, or equivalently $\widetilde{X}_t$, concentrates on the minimum point of $f$ as $t \to \infty$.
\begin{lemma}
\label{lem:firterm}
Let $f$ satisfy Assumption \ref{assump:reggrow} $\&$ \ref{assump:nondeg}.
Assume that $\tau_t \sim \frac{E}{\ln t}$ as $t \to \infty$ with $E > E_{*}$.
For each $\varepsilon \in (0, \delta)$, there exists $C > 0$ (depending on $\delta, \varepsilon, f, E, d$) such that
\begin{equation}
\label{eq:nuest}
\mathbb{P}(f(\widetilde{X}_t)> \delta) \le C t^{-\frac{\delta - \varepsilon}{E}}.
\end{equation}
\end{lemma}
\begin{proof}
Note that
\begin{equation}
\label{eq:firtermdef}
\mathbb{P}(f(\widetilde{X}_t)> \delta) = \frac{\int_{f(x) > \delta} \exp(-f(x)/\tau_t) dx}{\int_{\mathbb{R}^d} \exp(-f(x)/\tau_t) dx}.
\end{equation}
Under Assumption \ref{assump:reggrow}, $f$ has quadratic growth, and hence at least linear growth at infinity \cite[Lemma 3.14]{MS14}: there exists $C > 0$ such that for $R$ large enough,
\begin{equation*}
f(x) \ge \min_{|z| = R} f(z) + C(|x| - R) \quad \mbox{for } |x| > R.
\end{equation*}
We can also choose $R$ sufficiently large so that $\min_{|z| = R} f(z) > \delta$.
Consequently,
\begin{align}
\label{eq:firtermnum}
\int_{f(x) > \delta} \exp(-f(x)/\tau_t) dx & = \int_{f(x) > \delta, |x| \le R} \exp(-f(x)/\tau_t) dx + \int_{f(x) > \delta, |x| > R}\exp(-f(x)/\tau_t) dx \notag \\
& = e^{-\frac{\delta}{\tau_t}} \vol(B_R) (1 + \mathcal{O}( \tau_t)),
\end{align}
where $\vol(B_R)$ is the volume of a ball with radius $R$.
Further by Laplace's method,
\begin{equation}
\label{eq:firtermdenom}
\int_{\mathbb{R}^d} \exp(-f(x)/\tau_t) dx \sim C (\tau_t)^{\frac{d}{2}}.
\end{equation}
By injecting \eqref{eq:firtermnum} and \eqref{eq:firtermdenom} into \eqref{eq:firtermdef}, we get
\begin{equation}
\mathbb{P}(f(\widetilde{X}_t)> \delta) \le C t^{-\frac{\delta}{E}} (\ln t)^{\frac{d}{2}},
\end{equation}
which clearly yields \eqref{eq:nuest} since $(\ln t)^{\frac{d}{2}}/t^{\frac{\varepsilon}{E}} \to 0$ as $t \to \infty$.
\end{proof}
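For intuition, the bound \eqref{eq:nuest} can be checked in closed form in the simplest case $f(x) = x^2$ on $\mathbb{R}$ (a single well, so $E_{*} = 0$ and any $E > 0$ is admissible; this example is ours): both integrals in \eqref{eq:firtermdef} are Gaussian, giving $\mathbb{P}(f(\widetilde{X}_t) > \delta) = \mathrm{erfc}(\sqrt{\delta/\tau_t})$, and the elementary bound $\mathrm{erfc}(z) \le e^{-z^2}$ yields the polynomial decay directly:

```python
import math

delta, E = 0.5, 1.0
probs = []
for t in [1e2, 1e4, 1e6, 1e8]:
    tau = E / math.log(t)                       # cooling schedule tau_t = E / ln t
    p = math.erfc(math.sqrt(delta / tau))       # exact value of P(f(X~_t) > delta)
    probs.append((t, p, t ** (-delta / E)))     # compare with the bound t^{-delta/E}
for t, p, bound in probs:
    print(t, p, bound)
```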
\begin{remark}
\label{rk:Laplace}
It is interesting to get a bound for $\mathbb{P}(f(\widetilde{X}_t) > \delta)$ when the dimension $d$ is large.
As mentioned in the introduction, the Laplace bound \eqref{eq:firtermdenom} may fail when $d, t \to \infty$ simultaneously.
Recall that $m_0$ is the minimum point of $f$.
By continuity of $f$, there exists $r > 0$ such that $f(x) < \varepsilon$ when $|x - m_0| < r$.
Thus,
\begin{align}
\label{eq:firtermdenom2}
\int_{\mathbb{R}^d} \exp(-f(x)/\tau_t) dx & \ge \int_{|x - m_0| < r} \exp(-f(x)/\tau_t) dx \notag \\
& \ge e^{-\frac{\varepsilon}{\tau_t}} \vol(B_r).
\end{align}
Further, if $t/e^{\frac{Ed}{CR}} \to \infty$ as $d \to \infty$,
\begin{equation}
\label{eq:firtermnum2}
\int_{f(x) > \delta} \exp(-f(x)/\tau_t) dx = e^{-\frac{\delta}{\tau_t}} \vol(B_R) (1 + \mathcal{O}( \tau_t d)).
\end{equation}
Combining \eqref{eq:firtermdenom2} and \eqref{eq:firtermnum2}, we get
\begin{equation}
\label{eq:nuest22}
\mathbb{P}(f(\widetilde{X}_t) > \delta) \le C \gamma^d t^{-\frac{\delta - \varepsilon}{E}},
\end{equation}
where $C > 0$ depends on $\delta, \varepsilon, f, E$, and $\gamma = \max(R/r, e^{\frac{\delta - \varepsilon}{CR}})$.
Also note that \cite{RRT17} obtained the bound $\mathbb{E}f(\widetilde{X}_t) \le C d/\ln t$.
By Markov's inequality, we get
\begin{equation}
\label{eq:boundRRT}
\mathbb{P}(f(\widetilde{X}_t) > \delta) \le C\delta^{-1} d \, (\ln t)^{-1}.
\end{equation}
In comparison with \eqref{eq:boundRRT}, the bound \eqref{eq:nuest22} is better in `$t$' but worse in `$d$'.
In terms of relaxation time, i.e. letting $\mathbb{P}(f(\widetilde{X}_t) > \delta)$ be of constant order,
both estimates show an exponential dependence of $t$ on $d$.
This suggests that SA is exponentially slow as the dimension increases.
\end{remark}
{\bf Step 3: Differential inequality for $H(\mu_t|\nu_{\tau_t})$.}
To get an estimate of $H(\mu_t|\nu_{\tau_t})$, we need to consider the time derivative $\frac{d}{dt}H(\mu_t|\nu_{\tau_t})$.
The following lemma is a reformulation of \cite[Proposition 3]{Miclo92}.
For ease of reference, we give a simplified proof here. First let us fix some notation. For an absolutely continuous measure $\mu(dx)$, we abuse notation by writing $\mu(dx) = \mu(x) dx$, i.e. $\mu(x)$ is the density of $\mu(dx)$.
So for two such probability measures $\mu$ and $\nu$, the Radon-Nikodym derivative $\frac{d\mu}{d\nu}(x)$ is identified with $\frac{\mu(x)}{\nu(x)}$.
\begin{lemma}
\label{lem:DI}
Let $\tau_t$ be decreasing in $t$. We have
\begin{equation}
\label{eq:diffineq}
\frac{d}{dt}H(\mu_t|\nu_{\tau_t}) \le -2 \tau_t I\left(\mu_t|\nu_{\tau_t}\right) + \frac{d}{dt}\bigg(\frac{1}{\tau_t} \bigg)\, \mathbb{E}f(X_t),
\end{equation}
where $I(\mu_t|\nu_{\tau_t})$ is the Fisher information defined by \eqref{eq:FI}.
\end{lemma}
\begin{proof}
Observe that
\begin{align}
\label{eq:DE}
\frac{d}{dt}H(\mu_t|\nu_{\tau_t}) & = \frac{d}{dt} \int \mu_t \ln\bigg(\frac{\mu_t}{\nu_{\tau_t}} \bigg) dx \notag\\
& = \underbrace{\int \frac{\partial \mu_t}{\partial t} \ln\bigg(\frac{\mu_t}{\nu_{\tau_t}} \bigg) dx}_{(a)} +\underbrace{\int \mu_t \frac{\partial}{\partial t} \bigg(\ln\bigg(\frac{\mu_t}{\nu_{\tau_t}} \bigg)\bigg) dx}_{(b)}.
\end{align}
We first consider the term (a).
Recall that $\mu_t$ satisfies the Fokker-Planck equation \eqref{eq:FPSA}.
Together with the fact that $\nabla(\tau_t \nu_{\tau_t}) = - \nu_{\tau_t} \nabla f$, we have
\begin{equation}
\label{eq:FPSA2}
\frac{\partial \mu_t}{\partial t} = \nabla \cdot \bigg(\tau_t \nu_{\tau_t} \nabla\bigg(\frac{\mu_t}{\nu_{\tau_t}} \bigg) \bigg).
\end{equation}
By injecting \eqref{eq:FPSA2} into the term (a) and further by integration by parts, we get
\begin{align}
\label{eq:terma}
(a) & = \int \nabla \cdot \bigg(\tau_t \nu_{\tau_t} \nabla\bigg(\frac{\mu_t}{\nu_{\tau_t}} \bigg) \bigg) \ln\bigg(\frac{\mu_t}{\nu_{\tau_t}} \bigg) dx \notag\\
& = - \int \tau_t \nu_{\tau_t} \nabla\bigg(\frac{\mu_t}{\nu_{\tau_t}} \bigg) \cdot \nabla \ln\bigg(\frac{\mu_t}{\nu_{\tau_t}} \bigg) dx \notag\\
& = - \tau_t \int \bigg| \nabla\bigg(\frac{\mu_t}{\nu_{\tau_t}} \bigg)\bigg|^2 \bigg(\frac{\mu_t}{\nu_{\tau_t}} \bigg)^{-1} d \nu_{\tau_t}
= -2 \tau_t I(\mu_t| \nu_{\tau_t}).
\end{align}
Now we consider the term (b). Direct computation leads to
\begin{align}
\label{eq:termb}
(b) = \int \bigg(\frac{\partial \mu_t}{\partial t} - \frac{\mu_t}{\nu_{\tau_t}} \frac{\partial \nu_{\tau_t}}{\partial t} \bigg)dx &= - \int \frac{\partial}{\partial t} \left(\ln \nu_{\tau_t} \right) d\mu_t \notag \\
& = \int \frac{d}{dt}\left(\ln Z_{\tau_t}\right) d\mu_t + \frac{d}{dt}\bigg(\frac{1}{\tau_t} \bigg) \mathbb{E}f(X_t) \notag\\
& \le \frac{d}{dt}\bigg(\frac{1}{\tau_t} \bigg) \, \mathbb{E}f(X_t),
\end{align}
where we use the fact that $\int \mu_t dx = 1$ in the second equation, and that $\tau_t$ is decreasing in $t$ in the last inequality.
Combining \eqref{eq:DE} with \eqref{eq:terma} and \eqref{eq:termb} yields \eqref{eq:diffineq}.
\end{proof}
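The rewriting \eqref{eq:FPSA2} of the Fokker--Planck equation can be verified symbolically in one dimension (a sketch using generic smooth $f$ and a generic density, denoted $m$ below; the normalizing constant $Z_{\tau}$ cancels on both sides):

```python
import sympy as sp

x, tau = sp.symbols('x tau', positive=True)
f = sp.Function('f')(x)
m = sp.Function('m')(x)            # plays the role of the density mu_t
nu = sp.exp(-f / tau)              # Gibbs density up to the constant Z_tau
# Check: d/dx( tau nu d/dx(m/nu) ) = d/dx( m f' ) + tau m''  (1-d version of eq:FPSA2)
lhs = sp.diff(tau * nu * sp.diff(m / nu, x), x)
rhs = sp.diff(m * sp.diff(f, x), x) + tau * sp.diff(m, x, 2)
residual = sp.simplify(sp.expand(lhs - rhs))
print(residual)                    # expected: 0
```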
{\bf Step 4: Estimating $H(\mu_t|\nu_{\tau_t})$ via the Eyring-Kramers law.}
Note that there are two terms on the right hand side of \eqref{eq:diffineq}.
We start with an estimate of the second term.
\begin{lemma}
\label{lem:expfX}
Let $f$ satisfy Assumption \ref{assump:reggrow}, and assume that the condition \eqref{eq:moment} for $\mu_0$ holds.
For each $\varepsilon > 0$, there exists $C > 0$ (depending on $\varepsilon, f$) such that
\begin{equation}
\label{eq:expfX}
\mathbb{E}f(X_t) \le C(1+t)^{\varepsilon}.
\end{equation}
\end{lemma}
\begin{proof}
It is easy to see that Assumption \ref{assump:reggrow} implies Assumption $\mbox{H}_1$ in \cite{Miclo92}.
Together with the moment condition \eqref{eq:moment}, the proof follows the line of reasoning in \cite[Lemma 2]{Miclo92}.
\end{proof}
\quad Now we apply the Eyring-Kramers law, combining with a Gr\"{o}nwall-type argument to bound $H(\mu_t|\nu_{\tau_t})$ for large $t$.
\begin{lemma}
\label{lem:estH}
Let $f$ satisfy Assumption \ref{assump:reggrow} $\&$ \ref{assump:nondeg}, and assume that the condition \eqref{eq:moment} for $\mu_0$ holds.
Assume that $\tau_t \sim \frac{E}{\ln t}$ and $\frac{d}{dt}\left(\frac{1}{\tau_t}\right) = \mathcal{O}\left(\frac{1}{t}\right)$ as $t \to \infty$ with $E > E_{*}$.
For each $\varepsilon > 0$, there exists $C > 0$ (depending on $\varepsilon, f, E, d$) such that
\begin{equation}
\label{eq:estH}
H(\mu_t| \nu_{\tau_t}) \le Ct^{-\left(1 - \frac{E_{*}}{E} - \varepsilon \right)}.
\end{equation}
\end{lemma}
\begin{proof}
Using Lemma \ref{fact:EK} and the bound \eqref{eq:diffineq}, we have
\begin{equation}
\label{eq:dH}
\frac{d}{dt} H(\mu_t|\nu_{\tau_t}) \le - 2 \tau_t \alpha_t H(\mu_t|\nu_{\tau_t}) + \frac{C}{t} \, \mathbb{E}f(X_t),
\end{equation}
where $\alpha_t$ is the LSI constant for the Gibbs measure $\nu_{\tau_t}$.
By the Eyring-Kramers formula \eqref{eq:EKformula}, for each $\varepsilon > 0$,
there exist $C > 0$ and $t_0 > 0$,
\begin{equation}
\label{eq:alphat}
2 \tau_t \alpha_t \ge C t^{-\left( \frac{E_{*}}{E} - \varepsilon \right)} \quad \mbox{for } t \ge t_0.
\end{equation}
Combining \eqref{eq:dH} with \eqref{eq:expfX}, \eqref{eq:alphat}, we get
\begin{equation}
\label{eq:diffEK}
\frac{d}{dt} H(\mu_t|\nu_{\tau_t}) \le -C t^{-\left( \frac{E_{*}}{E} - \varepsilon \right)} H(\mu_t|\nu_{\tau_t}) + C't^{-1 + \varepsilon}.
\end{equation}
Fix $\varepsilon \in (0, \frac{1}{2} - \frac{E_{*}}{2E})$, let
\begin{equation*}
Q(t): = H(\mu_t|\nu_{\tau_t}) - \frac{2C'}{C}t^{-1 + \frac{E_{*}}{E}+ 2 \varepsilon}.
\end{equation*}
Then for $t_0$ sufficiently large and $t \ge t_0$,
we have $\frac{d}{dt} Q(t) \le -Ct^{-\frac{E_{*}}{E} + \varepsilon} Q(t)$ by \eqref{eq:diffEK}.
This implies that $Q(t) \le Q(t_0) e^{-C \int_{t_0}^t s^{-\frac{E_{*}}{E} + \varepsilon} ds}$.
Thus,
\begin{equation}
\label{eq:finalest}
H(\mu_t|\nu_{\tau_t}) \le \frac{2C'}{C}t^{-1+ \frac{E_{*}}{E} + 2 \varepsilon} + H(\mu_{t_0}|\nu_{\tau_{t_0}}) e^{-\frac{C}{\kappa}(t^{\kappa} - t_0^{\kappa})},
\end{equation}
where $\kappa: = 1 - \frac{E_{*}}{E} - \varepsilon > 0$.
Note that the first term on the right hand side of \eqref{eq:finalest} dominates, and the conclusion follows.
\end{proof}
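The Gr\"{o}nwall step above can also be visualized numerically (an illustration with arbitrary constants; we integrate the ODE version of \eqref{eq:diffEK} with equality): the solution settles on the quasi-stationary level of order $t^{-1+E_{*}/E}$, so the normalized quantity below stays bounded, in line with \eqref{eq:estH}.

```python
import numpy as np

# Integrate h'(t) = -C t^{-a} h + Cp t^{-1+eps}, a = E_*/E - eps, from a large
# initial value, and track h(t) * t^{1 - E_*/E - 2 eps}. Constants are arbitrary.
ratio_EstarE, eps = 0.5, 0.05
a = ratio_EstarE - eps
C, Cp = 1.0, 1.0
t, hval = 1.0, 5.0
ratios = []
while t < 1e6:
    dt = min(0.01 * t, 0.5 * t**a)   # keep explicit Euler stable: dt * C t^{-a} <= 0.5
    hval += dt * (-C * t**(-a) * hval + Cp * t**(-1 + eps))
    t += dt
    ratios.append(hval * t**(1 - ratio_EstarE - 2 * eps))
print(max(ratios), ratios[-1])       # bounded overall, small at the final time
```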
\quad Finally, by injecting \eqref{eq:nuest}, \eqref{eq:estH} into \eqref{eq:Pbound}, we get the desired estimate \eqref{eq:main1}.
\section{Discrete simulated annealing}
\label{sc5}
\quad This section is devoted to the proof of Theorem \ref{thm:discreterate}.
The idea is close to that employed for the continuous-time SA process \eqref{eq:SA}.
However, the analysis is more complicated due to discretization, and additional tools from \cite{VW19} on the convergence of ULA are used.
Recall that $\eta_k$ is the step size at iteration $k$, and $\Theta_k:=\sum_{j \le k} \eta_j$ is the cumulative step size up to iteration $k$.
Let $\mu_k$ be the probability density of $x_k$ defined by \eqref{eq:discreteSA}, and
\begin{equation}
\label{eq:Gibbsdistime}
\nu_{\tau_{\scaleto{\Theta_k}{4 pt}}}(dx) = \frac{1}{Z_{\tau_{\scaleto{\Theta_k}{4 pt}}}} \exp\left(-\frac{f(x)}{\tau_{\scaleto{\Theta_k}{5 pt}}} \right) dx,
\end{equation}
where $Z_{\tau_{\scaleto{\Theta_k}{4 pt}}}: = \int_{\mathbb{R}^d} \exp(-f(x)/\tau_{\scaleto{\Theta_k}{5 pt}}) dx$ is the normalizing constant.
We divide the proof into four steps.
\medskip
{\bf Step 1: Reduce $\mu_k$ to $\nu_{\tau_{\scaleto{\Theta_k}{4 pt}}}$.}
This step is similar to Step $1$ $\&$ $2$ in Section \ref{sc4}.
Let $(\widetilde{x}_k; \, k \ge 0)$ be a sequence whose distribution is $\nu_{\tau_{\scaleto{\Theta_k}{4 pt}}}$ at epoch $k$, coupled with $(x_k; \, k \ge 0)$ on the same probability space.
Fix $\delta > 0$.
The same argument as in \eqref{eq:Pbound} shows that
\begin{equation}
\label{eq:Pbounddis}
\mathbb{P}(f(x_k) > \delta) \le \mathbb{P}(f(\widetilde{x}_k) > \delta) + \sqrt{2 H(\mu_k|\nu_{\tau_{\scaleto{\Theta_k}{4 pt}}})}.
\end{equation}
Assume that $\Theta_k \to \infty$ and $\tau_{\scaleto{\Theta_k}{5 pt}} \sim \frac{E}{\ln \Theta_k}$ as $k \to \infty$ with $E > E_{*}$.
By Lemma \ref{lem:firterm}, we get a bound for the first term on the right hand side of \eqref{eq:Pbounddis}.
That is, for each $\varepsilon \in (0, \delta)$, there exists $C>0$ (depending on $\varepsilon, \delta, f, E, d$) such that
\begin{equation}
\label{eq:nuestdiscrete}
\mathbb{P}(f(\widetilde{x}_k) > \delta) \le C \Theta_k^{-\frac{\delta - \varepsilon}{E}}.
\end{equation}
So it remains to estimate $H(\mu_k|\nu_{\tau_{\scaleto{\Theta_k}{4 pt}}})$, which is the task of the next three steps.
\medskip
{\bf Step 2: Continuous-time coupling.}
To make use of continuous-time tools, we couple the sequence $(x_k; \, k \ge 0)$ by a continuous-time process $(X_t; \, t \ge 0)$ such that
$(X_{\Theta_k}; \, k \ge 0)$ has the same distribution as $(x_k; \, k \ge 0)$.
To do this, define the process $X$ by
\begin{equation}
\label{eq:couple}
dX_t = -\nabla f(x_k) dt + \sqrt{2 \tau_{\scaleto{\Theta_k}{5 pt}}} dB_t, \quad t \in [\Theta_k, \Theta_{k+1}),
\end{equation}
where we identify $X_{\Theta_k}$ with $x_k$.
Note that the Fokker-Planck equation \eqref{eq:FPSA} plays an important role in the analysis of continuous-time SA \eqref{eq:SA}.
It is desirable to get a version of the Fokker-Planck equation for the coupled process \eqref{eq:couple}.
The result is stated as follows.
\begin{lemma}
For $t \in [\Theta_k, \Theta_{k+1})$, the probability density $\mu_t$ of $X_t$ defined by \eqref{eq:couple} satisfies the following equation:
\begin{equation}
\label{eq:FPcouple}
\frac{\partial \mu_t}{\partial t} = \nabla \cdot \bigg(\tau_{\scaleto{\Theta_k}{5 pt}} \nu_{\tau_{\scaleto{\Theta_k}{4 pt}}} \nabla \bigg(\frac{\mu_t}{\nu_{\tau_{\scaleto{\Theta_k}{4 pt}}}} \bigg) \bigg)
+ \nabla \cdot \left(\mu_t \, \mathbb{E}[\nabla f(x_k) - \nabla f(X_t)| X_t = x] \right).
\end{equation}
\end{lemma}
\begin{proof}
Let $\mu_{t|s}(x|y)$ denote the conditional density of $X_t$ at $x$ given $X_s = y$.
By conditioning on $X_{\Theta_k} = x_k$, we have
\begin{equation}
\label{eq:conditionFP}
\frac{\partial \mu_{t|\Theta_k}(x|x_k)}{\partial t} = \nabla \cdot (\mu_{t|\Theta_k}(x|x_k) \nabla f(x_k)) + \tau_{\Theta_k} \Delta \mu_{t|\Theta_k}(x|x_k).
\end{equation}
By integrating \eqref{eq:conditionFP} against $\mu_{\Theta_k}$, and using the fact that $\mu_{t|\Theta_k}(x|x_k) \mu_{\Theta_k}(x_k) = \mu_t(x) \mu_{\Theta_k|t}(x_k|x)$, we get
\begin{equation}
\label{eq:FPcouple2}
\frac{\partial \mu_t}{\partial t} = \nabla \cdot (\mu_t(x) \, \mathbb{E}[\nabla f(x_k)|X_t = x]) + \tau_{\scaleto{\Theta_k}{5 pt}} \Delta \mu_t.
\end{equation}
Further by \eqref{eq:FPSA2}, we have
\begin{equation}
\label{eq:FPSA3}
\nabla \cdot \bigg(\tau_{\scaleto{\Theta_k}{5 pt}} \nu_{\tau_{\scaleto{\Theta_k}{4 pt}}} \nabla \bigg(\frac{\mu_t}{\nu_{\tau_{\scaleto{\Theta_k}{4 pt}}}} \bigg) \bigg) = \nabla \cdot (\mu_t \nabla f(x)) + \tau_{\scaleto{\Theta_k}{5 pt}} \Delta \mu_t.
\end{equation}
Combining \eqref{eq:FPcouple2} and \eqref{eq:FPSA3} yields \eqref{eq:FPcouple}.
\end{proof}
\quad There are two terms on the right hand side of \eqref{eq:FPcouple}.
Comparing to \eqref{eq:FPSA2}, the first term is the usual Fokker-Planck term, while the second term corresponds to the discretization error.
\medskip
{\bf Step 3: One-step analysis of $H(\mu_k|\nu_{\tau_{\scaleto{\Theta_k}{4 pt}}})$.}
Here we use the coupled process \eqref{eq:couple} to study the one-step decay of $H(\mu_k|\nu_{\tau_{\scaleto{\Theta_k}{4 pt}}})$.
\begin{lemma}
\label{lem:onestep}
Let $f$ satisfy Assumption \ref{assump:reggrow}, \ref{assump:nondeg} $\&$ \ref{assump:upper}, and assume that
the condition \eqref{eq:moment} for $\mu_0$ holds.
Assume that $\tau_t \sim \frac{E}{\ln t}$ and $\frac{d}{dt}\left(\frac{1}{\tau_t}\right) = \mathcal{O}\left(\frac{1}{t}\right)$ as $t \to \infty$ with $E > E_{*}$.
Also assume that $\Theta_k \to \infty$ and $\eta_{k+1}\Theta_k \to 0$ as $k \to \infty$.
Then, for each $\varepsilon > 0$, there exist $C, C' > 0$ (depending on $\varepsilon, f, E, d$) such that
\begin{align}
\label{eq:keyonestep}
H(\mu_{k+1}|\nu_{\tau_{\scaleto{\Theta_{k+1}}{4 pt}}}) & \le
\bigg(1 - C\eta_{k+1}\Theta_k^{-(\frac{E_{*}}{E} - \varepsilon)}\bigg)H(\mu_{k}|\nu_{\tau_{\scaleto{\Theta_k}{4 pt}}}) \notag \\
& \qquad \qquad \qquad \quad+ C'(\eta_{k+1}^2+ \eta_{k+1}^3 \ln \Theta_k + \eta_{k+1} \Theta_k^{-1 + \varepsilon}).
\end{align}
\end{lemma}
\begin{proof}
Write
\begin{equation}
\label{eq:decomp}
H(\mu_{k+1}|\nu_{\tau_{\scaleto{\Theta_{k+1}}{4 pt}}}) = \underbrace{H(\mu_{k+1}|\nu_{\tau_{\scaleto{\Theta_{k}}{4 pt}}})}_{(a)} + \underbrace{(H(\mu_{k+1}|\nu_{\tau_{\scaleto{\Theta_{k+1}}{4 pt}}}) - H(\mu_{k+1}|\nu_{\tau_{\scaleto{\Theta_{k}}{4 pt}}}))}_{(b)}.
\end{equation}
We first use the coupled process \eqref{eq:couple} to study the term $(a)$.
Note that
\begin{align}
\label{eq:timedecomp}
\frac{d}{dt} H(\mu_t | \nu_{\tau_{\scaleto{\Theta_k}{4 pt}}}) & = \int \frac{\partial \mu_t}{\partial t} \ln \bigg(\frac{\mu_t}{\nu_{\tau_{\scaleto{\Theta_k}{4 pt}}}} \bigg)dx + \int \mu_t \frac{d}{dt}\ln \bigg(\frac{\mu_t}{\nu_{\tau_{\scaleto{\Theta_k}{4 pt}}}} \bigg) dx \notag\\
& = \int \nabla \cdot \bigg(\tau_{\scaleto{\Theta_k}{5 pt}} \nu_{\tau_{\scaleto{\Theta_k}{4 pt}}} \nabla \bigg(\frac{\mu_t}{\nu_{\tau_{\scaleto{\Theta_k}{4 pt}}}} \bigg) \bigg) \ln \bigg(\frac{\mu_t}{\nu_{\tau_{\scaleto{\Theta_k}{4 pt}}}} \bigg)dx \notag \\
&\quad + \underbrace{\int \nabla \cdot \left(\mu_t \, \mathbb{E}[\nabla f(x_k) - \nabla f(X_t)| X_t = x] \right)\ln \bigg(\frac{\mu_t}{\nu_{\tau_{\scaleto{\Theta_k}{4 pt}}}} \bigg)dx}_{(c)} + \frac{d}{dt}\int \mu_t(dx) \notag \\
& = - 2 \tau_{\scaleto{\Theta_k}{5 pt}} I(\mu_t|\nu_{\tau_{\scaleto{\Theta_k}{4 pt}}}) + (c),
\end{align}
where we use \eqref{eq:FPcouple} in the second equation, and \eqref{eq:terma} in the third equation.
Now we need to estimate the term $(c)$ in \eqref{eq:timedecomp}.
By integration by parts and the fact that $a \cdot b \le \frac{1}{\tau_{\scaleto{\Theta_k}{4 pt}}} |a|^2 + \frac{\tau_{\scaleto{\Theta_k}{4 pt}}}{4} |b|^2$, we get
\begin{align}
\label{eq:discreterr}
(c) & = \mathbb{E}\bigg((\nabla f(X_t) -\nabla f(x_k)) \cdot \nabla \log \bigg(\frac{\mu_t}{\nu_{\tau_{\scaleto{\Theta_k}{4 pt}}}} \bigg) \bigg) \notag \\
& \le \frac{1}{\tau_{\scaleto{\Theta_k}{5 pt}}} \mathbb{E}|\nabla f(X_t) -\nabla f(x_k)|^2 + \frac{\tau_{\scaleto{\Theta_k}{5 pt}}}{4} \mathbb{E} \bigg| \nabla \log \bigg(\frac{\mu_t}{\nu_{\tau_{\scaleto{\Theta_k}{4 pt}}}} \bigg)\bigg|^2 \notag\\
& \le \frac{L^2}{\tau_{\scaleto{\Theta_k}{5 pt}}} \mathbb{E}|X_t - x_k|^2 + \frac{\tau_{\scaleto{\Theta_k}{5 pt}}}{2} I(\mu_t|\nu_{\tau_{\scaleto{\Theta_k}{4 pt}}}),
\end{align}
where $L$ is the Lipschitz constant of $\nabla f$ by Assumption \ref{assump:upper}.
Recall from \eqref{eq:couple} that $X_t - x_k = - \nabla f(x_k) (t - \Theta_k) + \sqrt{2 \tau_{\scaleto{\Theta_k}{5 pt}} (t - \Theta_k)} Z$, where $Z$ is standard normal.
Consequently,
\begin{align}
\label{eq:diffonestep}
\mathbb{E}|X_t - x_k|^2 &= (t - \Theta_k)^2 \mathbb{E}|\nabla f(x_k)|^2 + 2 \tau_{\scaleto{\Theta_k}{5 pt}} (t - \Theta_k) d \notag \\
& \le \eta_{k+1}^2 \mathbb{E}|\nabla f(x_k)|^2 + C \tau_{\scaleto{\Theta_k}{5 pt}} \eta_{k+1}.
\end{align}
According to Lemma \ref{fact:Tala}, $\nu_{\tau_{\scaleto{\Theta_k}{4 pt}}}$ satisfies Talagrand's inequality with constant $\gamma_{\tau_{\scaleto{\Theta_k}{4 pt}}} \sim \kappa \exp(-E_{*}/\tau_{\scaleto{\Theta_k}{5 pt}})$.
Further by \cite[Lemma 10]{VW19},
\begin{equation}
\label{eq:gradest}
\mathbb{E}|\nabla f(x_k)|^2 \le \frac{C}{\gamma_{\tau_{\scaleto{\Theta_k}{4 pt}}}}H(\mu_k|\nu_{\tau_{\scaleto{\Theta_k}{4 pt}}}) + C.
\end{equation}
Combining \eqref{eq:discreterr} with \eqref{eq:diffonestep}, \eqref{eq:gradest} and the fact that $\tau_{\scaleto{\Theta_k}{5 pt}} \sim \frac{E}{\ln \Theta_k}$ as $k \to \infty$, we have
\begin{equation}
\label{eq:cest}
(c) \le C \bigg(\eta_{k+1}^2 \Theta_k^{\frac{E_{*}}{E}} \ln \Theta_k \bigg) H(\mu_k|\nu_{\tau_{\scaleto{\Theta_k}{4 pt}}}) + C (\eta_{k+1}+ \eta_{k+1}^2 \ln \Theta_k) + \frac{\tau_{\scaleto{\Theta_k}{5 pt}}}{2} I(\mu_t|\nu_{\tau_{\scaleto{\Theta_k}{4 pt}}}).
\end{equation}
Injecting \eqref{eq:cest} into \eqref{eq:timedecomp} and further by Lemma \ref{fact:EK}, we get
\begin{align*}
\frac{d}{dt} H(\mu_t |\nu_{\tau_{\scaleto{\Theta_k}{4 pt}}}) &\le -\frac{3}{2}\tau_{\scaleto{\Theta_k}{5 pt}} I(\mu_t|\nu_{\tau_{\scaleto{\Theta_k}{4 pt}}}) + C' \bigg(\eta_{k+1}^2 \Theta_k^{\frac{E_{*}}{E}} \ln \Theta_k \bigg) H(\mu_k|\nu_{\tau_{\scaleto{\Theta_k}{4 pt}}}) + C' ( \eta_{k+1}+ \eta_{k+1}^2 \ln \Theta_k) \notag \\
& \le -\frac{3}{2} C \Theta_k^{-(\frac{E_{*}}{E} - \varepsilon)}H(\mu_t|\nu_{\tau_{\scaleto{\Theta_k}{4 pt}}}) + C' \bigg(\eta_{k+1}^2 \Theta_k^{\frac{E_{*}}{E}+ \varepsilon} H(\mu_k|\nu_{\tau_{\scaleto{\Theta_k}{4 pt}}}) + (\eta_{k+1}+ \eta_{k+1}^2 \ln \Theta_k) \bigg).
\end{align*}
Now by a Gr\"{o}nwall argument, we have
\begin{align}
\label{eq:decompfirst}
H(\mu_{k+1}|\nu_{\tau_{\scaleto{\Theta_k}{4 pt}}}) & \le e^{- \frac{3}{2}C \eta_{k+1} \Theta_k^{-(\frac{E_{*}}{E} - \varepsilon)}}\left( (1 + C' \eta_{k+1}^3 \Theta_k^{\frac{E_{*}}{E} + \varepsilon}) H(\mu_{k}|\nu_{\tau_{\scaleto{\Theta_k}{4 pt}}}) + C'(\eta_{k+1}^2+ \eta_{k+1}^3 \ln \Theta_k) \right) \notag \\
& \le e^{-\frac{5}{4}C \eta_{k+1}\Theta_k^{-(\frac{E_{*}}{E} - \varepsilon)}}H(\mu_{k}|\nu_{\tau_{\scaleto{\Theta_k}{4 pt}}})
+ C'(\eta_{k+1}^2+ \eta_{k+1}^3 \ln \Theta_k) \notag \\
& \le \bigg(1 - C\eta_{k+1}\Theta_k^{-(\frac{E_{*}}{E} - \varepsilon)}\bigg)H(\mu_{k}|\nu_{\tau_{\scaleto{\Theta_k}{4 pt}}})
+ C'(\eta_{k+1}^2+ \eta_{k+1}^3 \ln \Theta_k),
\end{align}
where we use the fact that $\eta_{k+1} \Theta_k^{\frac{E_{*}}{E}} \to 0$ as $k \to \infty$ in the second inequality.
\quad Now we consider the term $(b)$ in \eqref{eq:decomp}.
Note that
\begin{align}
\label{eq:diffHkk1}
H(\mu_{k+1}|\nu_{\tau_{\scaleto{\Theta_{k+1}}{4 pt}}}) - H(\mu_{k+1}|\nu_{\tau_{\scaleto{\Theta_{k}}{4 pt}}}) &= \ln \bigg(\frac{Z_{\tau_{\scaleto{\Theta_{k+1}}{4 pt}}}}{Z_{\tau_{\scaleto{\Theta_k}{4 pt}}}} \bigg) + \bigg( \frac{1}{\tau_{\scaleto{\Theta_{k+1}}{5 pt}}} - \frac{1}{\tau_{\scaleto{\Theta_k}{4 pt}}}\bigg) \mathbb{E}f(x_{k+1}) \notag \\
& \le C \frac{\eta_{k+1}}{\Theta_k} \mathbb{E}f(x_{k+1}).
\end{align}
We claim that $\mathbb{E}f(x_{k+1}) \le C$ for some constant $C > 0$, which gives
$\mathbb{E}f(x_{k+1}) \le C \Theta_k^{\varepsilon}$ for each $\varepsilon>0$.
Suppose not: choose $C > 0$ sufficiently large, and let $\mathbb{E}f(x_{k+1})$ be the first term of the sequence exceeding $C$.
By Assumption \ref{assump:upper},
\begin{align*}
f(x_{k+1}) \le f(x_k) - \eta_k |\nabla f(x_k)|^2 +\sqrt{2 \tau_{\scaleto{\Theta_k}{5 pt}}\eta_k} \nabla f(x_k) \cdot Z_k
+ \frac{L}{2} |\eta_k \nabla f(x_k) +\sqrt{2 \tau_{\scaleto{\Theta_k}{5 pt}}\eta_k} Z_k |^2.
\end{align*}
Further by taking expectation, we get
\begin{equation}
\label{eq:diffkk1}
\mathbb{E}f(x_{k+1}) - \mathbb{E}f(x_k) \le - \eta_k \bigg(1 - \frac{\eta_k L}{2} \bigg) \mathbb{E}|\nabla f(x_k)|^2 + Ld \tau_{\scaleto{\Theta_k}{5 pt}}\eta_k.
\end{equation}
Thus, $\mathbb{E}f(x_{k+1}) - \mathbb{E}f(x_k) \le Ld \tau_{\scaleto{\Theta_k}{5 pt}}\eta_k$ which implies that $\mathbb{E}f(x_k) > C-1$ for $k$ large enough.
By Assumption \ref{assump:reggrow}(ii),
$\mathbb{E}|\nabla f(x_k)|^2 > C'$ for some $C' > 0$.
Combining with \eqref{eq:diffkk1} and taking $\eta_k$ small, we get $\mathbb{E}f(x_{k+1}) < \mathbb{E}f(x_k) \le C$ for $k$ large enough.
This contradicts the assumption that $\mathbb{E}f(x_{k+1})$ is the first term exceeding $C$.
Now by \eqref{eq:diffHkk1}, we get
\begin{equation}
\label{eq:decompsecond}
H(\mu_{k+1}|\nu_{\tau_{\scaleto{\Theta_{k+1}}{4 pt}}}) - H(\mu_{k+1}|\nu_{\tau_{\scaleto{\Theta_{k}}{4 pt}}}) \le C \eta_{k+1} \Theta_k^{-1 + \varepsilon}.
\end{equation}
Combining \eqref{eq:decomp} with \eqref{eq:decompfirst}, \eqref{eq:decompsecond} yields \eqref{eq:keyonestep}.
\end{proof}
{\bf Step 4: Estimating $H(\mu_k|\nu_{\tau_{\scaleto{\Theta_k}{4 pt}}})$.}
We use Lemma \ref{lem:onestep} to derive an estimate for $H(\mu_k|\nu_{\tau_{\scaleto{\Theta_k}{4 pt}}})$.
Under the condition \eqref{eq:dominate}, the term $\eta_{k+1} \Theta_k^{-1 + \varepsilon}$ dominates
$\eta_{k+1}^2$, $\eta_{k+1}^3 \ln \Theta_k$ as $k \to \infty$.
Thus, the recursion \eqref{eq:keyonestep} yields
\begin{equation*}
H(\mu_{k+1}|\nu_{\tau_{\scaleto{\Theta_{k+1}}{4 pt}}}) \le
\bigg(1 - C\eta_{k+1}\Theta_k^{-(\frac{E_{*}}{E} - \varepsilon)}\bigg)H(\mu_{k}|\nu_{\tau_{\scaleto{\Theta_k}{4 pt}}}) + C' \eta_{k+1} \Theta_k^{-1 + \varepsilon}.
\end{equation*}
Since $E_{*}/E < 1$, a similar argument as in Lemma \ref{lem:estH} shows that
\begin{equation*}
H(\mu_{k+1}|\nu_{\tau_{\scaleto{\Theta_{k+1}}{4 pt}}}) - C \Theta_k^{-(1 - \frac{E_{*}}{E} - \varepsilon)} \le \bigg(1 - C' \eta_{k+1}\Theta_k^{-(\frac{E_{*}}{E} - \varepsilon)}\bigg) \bigg(H(\mu_{k}|\nu_{\tau_{\scaleto{\Theta_k}{4 pt}}}) - C \Theta_{k-1}^{-(1 - \frac{E_{*}}{E} - \varepsilon)}\bigg).
\end{equation*}
This together with the condition \eqref{eq:dominate2} implies that
\begin{equation}
\label{eq:estHdis}
H(\mu_{k}|\nu_{\tau_{\scaleto{\Theta_k}{4 pt}}}) \le C\Theta_{k}^{-(1 - \frac{E_{*}}{E} - \varepsilon)}.
\end{equation}
By injecting \eqref{eq:nuestdiscrete} and \eqref{eq:estHdis} into \eqref{eq:Pbounddis} we obtain \eqref{eq:main2}.
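As a toy illustration of the discrete scheme analyzed in this section (our own sketch: the asymmetric double well, step size and schedule below are ad hoc and do not satisfy all conditions of Theorem \ref{thm:discreterate}; in particular the step size is kept constant), one can watch most chains escape the local well and settle near the global minimum:

```python
import numpy as np

# f(x) = (x^2 - 1)^2 + 0.3 x: local minimum near x = +1, global minimum near x = -1.
def f(x):
    return (x**2 - 1)**2 + 0.3 * x

def grad_f(x):
    return 4 * x * (x**2 - 1) + 0.3

rng = np.random.default_rng(1)
n_chains, n_steps, eta, E = 200, 20000, 0.01, 1.5
x = np.ones(n_chains)                     # all chains start in the wrong (local) well
theta = 0.0                               # cumulative step size Theta_k
for k in range(n_steps):
    theta += eta
    tau = E / np.log(theta + np.e)        # cooling schedule tau ~ E / ln(Theta_k)
    x += -eta * grad_f(x) + np.sqrt(2 * tau * eta) * rng.standard_normal(n_chains)
frac_global = float(np.mean(x < 0))       # fraction of chains in the global well
print(frac_global, float(np.mean(f(x))))
```

Since the schedule is frozen at a finite temperature at the end of the run, the terminal occupancies are roughly Gibbs at the final $\tau$, which already puts most of the mass in the global well.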
\section{Conclusion}
\label{sc6}
\quad In this paper, we study the convergence rate of SA in both continuous and discrete settings.
The main tool is functional inequalities for the Gibbs measure at low temperatures.
We prove that the tail probability, in both settings, exhibits a polynomial decay in time.
The decay rate is also given as a function of the model parameters.
In the discrete setting, we derive a condition on the step size to ensure the convergence to the global minimum.
This condition may be useful in tuning the step size.
\quad There are a few directions to extend this work.
For instance, one can study the convergence rate of SA for L\'evy flight with a suitable cooling schedule.
Another problem is to study the dependence of the convergence rate in the dimension $d$.
This requires a deep understanding of the Eyring-Kramers law in high dimension, and is related to the Laplace approximation of high dimensional integrals.
Both problems are worth exploring, but may be challenging.
\bigskip
{\bf Acknowledgement:}
Tang thanks Georg Menz for helpful discussions.
He also gratefully acknowledges financial support through a start-up grant at Columbia University. Zhou gratefully acknowledges financial support through a start-up grant at Columbia University and through the Nie Center for Intelligent Asset Management.
\bibliographystyle{abbrvnat}
\section{Introduction.}
Let us introduce some notations. As usual, we denote by $\mathbb{R}, \mathbb{C}, \mathbb{N}, \mathbb{Z}, \mathbb{Z}_+$
the sets of real numbers, complex numbers, positive integers, integers and non-negative integers,
respectively.
By $\mathbb{Z}^n_+$ we mean $\mathbb{Z}_+\times \ldots \times\mathbb{Z}_+$, and $\mathbb{R}^n = \mathbb{R}\times \ldots \times\mathbb{R}$,
where the Cartesian products are taken with $n$ copies.
Let $\mathbf{k} = (k_1,\ldots,k_n)\in\mathbb{Z}^n_+$, $\mathbf{t} = (t_1,\ldots,t_n)\in\mathbb{R}^n$. Then
$\mathbf{t}^{\mathbf{k}}$ means the monomial $t_1^{k_1}\ldots t_n^{k_n}$, and $|\mathbf{k}| = k_1 + \ldots + k_n$.
By $\mathfrak{B}(\mathbb{R}^n)$ we denote the set of all Borel subsets of $\mathbb{R}^n$.
Let $\mathcal{K}$ be an arbitrary finite subset of $\mathbb{Z}^n_+$. Let $\mathcal{S} = (s_{\mathbf{k}})_{\mathbf{k}\in\mathcal{K}}$
be an arbitrary set of real numbers.
\textit{The truncated multidimensional moment problem} consists of finding a (non-negative) measure $\mu$ on $\mathfrak{B}(\mathbb{R}^n)$
such that
\begin{equation}
\label{f1_1}
\int \mathbf{t}^{\mathbf{k}} d\mu(\mathbf{t}) = s_{\mathbf{k}},\qquad \forall \mathbf{k}\in\mathcal{K}.
\end{equation}
The multidimensional moment problem (both the full and the truncated versions) turned out to be much more complicated than its
one-dimensional prototype~\cite{cit_1000_Berezansky_1965__Book}, \cite{cit_980_Berg_Christiansen_Ressel__Book}, \cite{cit_990_Marshall_Book}.
An operator-theoretical interpretation of the (full)
multidimensional moment problem was given
by Fuglede in~\cite{cit_980_Fuglede}. It should be noticed that the operator approach to moment problems was introduced
by Naimark in 1940--1943 and then developed by many authors, see historical notes in~\cite{cit_15000_Zagorodnyuk_2017_J_Adv_Math_Stud}.
Elegant conditions for the solvability of the multidimensional moment problem in the case of the support on semi-algebraic sets were given by
Schm\"udgen in~\cite{cit_995_Schmudgen__1991}, \cite{cit_995_Schmudgen__2003}.
Other conditions for the solvability of the multidimensional moment problem, using an extension of the moment sequence,
were given by Putinar and Vasilescu, see~\cite{cit_993_P_V__1999}, \cite{cit_14100_Vasilescu}.
Developing the idea of Putinar and Vasilescu, we presented different conditions for the solvability of
the two-dimensional moment problem and proposed an algorithm (which essentially consists of solving linear equations)
for a construction of the solutions set~\cite{cit_14500_Zagorodnyuk_2010_AFA}.
An analytic parametrization for all solutions of the two-dimensional moment problem in a strip was given
in~\cite{cit_14700_Zagorodnyuk_2013_MFAT}.
Another approach to multidimensional and complex moment problems (including truncated problems), using extension arguments for $*$-semigroups,
has been developed by Cicho\'n, Stochel and Szafraniec, see~\cite{cit_982_C_St_Sz_2011} and references therein.
Still another approach for the two-dimensional moment problem was proposed by Ovcharenko
in~\cite{cit_991_Ovcharenko_1}, \cite{cit_991_Ovcharenko_2}.
In this paper we shall focus on the truncated multidimensional moment problem.
A general approach for this moment problem was given by Curto and Fialkow in their books~\cite{cit_985_Curto_Fialkow__Book1}
and~\cite{cit_985_Curto_Fialkow__Book2}.
These books entailed a series of papers by a group of mathematicians, see recent papers~\cite{cit_987_Fialkow_2011},
\cite{cit_14200_Vasilescu}, \cite{cit_14000_Yoo}
and references therein.
This approach includes a rank-preserving (flat) extension of the matrix of prescribed moments.
Effective optimization algorithms for the multidimensional moment problems were given in the book of Lasserre~\cite{cit_985_Lasserre_Book}.
Another approach for truncated moment problems, using a notion of an idempotent, was presented by Vasilescu in~\cite{cit_14220_Vasilescu}.
Atomic solutions to various matrix truncated $K$-moment problems were studied by Kimsey and Woerdeman in~\cite{cit_983_Kimsey_Woerdeman}.
There exists a connection between the truncated multidimensional moment problems and completion problems for subnormal
operators, see, e.g.,~\cite{cit_984_Kimsey}. Observe that the complexification of the real truncated moment problem requires an even dimension $d$.
The complexification of the truncated multidimensional moment problem and the use of hyponormal operators was investigated
by Kimsey and Putinar
in~\cite{cit_982_Kimsey_Putinar}.
We should also mention recent papers~\cite{cit_15000_Zagorodnyuk_2018_AOT}, \cite{cit_998_Dio_Schmudgen_ArXiv}
on the subject.
Even in the one-dimensional case ($n=1$) the operator approach is not effective for all types of truncations $\mathcal{K}$.
Thus, we need to define some admissible types of truncations for which solutions can be obtained. The second feature of the truncated case is that
we need to ensure that the corresponding multiplication operators in the associated Hilbert space are well-defined.
Once all of the above is done, we arrive at a problem of an extension of commuting symmetric operators.
In the case of dimensional stability (see the precise definition below), the corresponding operators are self-adjoint and we
obtain an atomic solution.
Weaker conditions, which allow an explicit check for solutions, are given as well.
\noindent
{\bf Notations. }
Besides the given above notations we shall use the following conventions.
By $\mathbb{Z}_{k,l}$ we mean all integers $j$ satisfying the following inequality:
$k\leq j\leq l$; ($k,l\in\mathbb{Z}$).
If $H$ is a Hilbert space then $(\cdot,\cdot)_H$ and $\| \cdot \|_H$ mean
the scalar product and the norm in $H$, respectively.
Indices may be omitted in obvious cases.
For a linear operator $A$ in $H$, we denote by $D(A)$
its domain, by $R(A)$ its range, and $A^*$ means the adjoint operator
if it exists. If $A$ is invertible then $A^{-1}$ means its
inverse. $\overline{A}$ means the closure of the operator, if the
operator is closable. If $A$ is bounded then $\| A \|$ denotes its
norm.
For a set $M\subseteq H$
we denote by $\overline{M}$ the closure of $M$ in the norm of $H$.
By $\mathop{\rm Lin}\nolimits M$ we mean
the set of all linear combinations of elements from $M$,
and $\mathop{\rm span}\nolimits M:= \overline{ \mathop{\rm Lin}\nolimits M }$.
By $E_H$ we denote the identity operator in $H$, i.e. $E_H x = x$,
$x\in H$. In obvious cases we may omit the index $H$. If $H_1$ is a subspace of $H$, then $P_{H_1} =
P_{H_1}^{H}$ is an operator of the orthogonal projection on $H_1$
in $H$.
\section{Necessary conditions for the solvability of the moment problem.}
Consider the following operator $W_j$ on $\mathbb{Z}^n_+$:
\begin{equation}
\label{f2_1}
W_j (k_1, \ldots, k_{j-1}, k_j, k_{j+1},\ldots, k_n) = (k_1, \ldots, k_{j-1}, k_j + 1, k_{j+1},\ldots, k_n),
\end{equation}
for $j=1,\ldots,n$. Thus, the operator $W_j$ increases the $j$-th coordinate by one.
The following kind of subsets probably appeared for the first time in the work of Kimsey and Woerdeman~\cite{cit_983_Kimsey_Woerdeman}.
\begin{definition}
\label{d2_1}
A finite subset $K\subset \mathbb{Z}^n_+$ is said to be \textbf{admissible}, if the following conditions hold:
\begin{itemize}
\item[1)] $\mathbf{0} = (0,\ldots,0)\in K$;
\item[2)] $\forall \mathbf{k}\in K\backslash\{ \mathbf{0} \}$,
\begin{equation}
\label{f2_5}
\mathbf{k} = W_{a_{|\mathbf{k}|}} W_{a_{|\mathbf{k}| - 1}} \ldots W_{a_1} \mathbf{0},
\end{equation}
for some $a_j\in\{ 1,\ldots,n \}$, and
\begin{equation}
\label{f2_7}
\widetilde{\mathbf{k}}_r := W_{a_r} \ldots W_{a_1} \mathbf{0} \in K,\qquad \forall r=1,2,\ldots, |\mathbf{k}|.
\end{equation}
\end{itemize}
\end{definition}
\begin{example}
\label{e1_1}
1) Let $K = K_r = \{ \mathbf{k}\in\mathbb{Z}^n_+: |\mathbf{k}|\leq r \}$, $r\in\mathbb{Z}_+$. Then $K$ is admissible, since
$\forall \mathbf{k} = (k_1,\ldots,k_n) \in K\backslash\{ \mathbf{0} \}$,
$$ \mathbf{k} = W_n^{k_n} W_{n-1}^{k_{n-1}} \ldots W_1^{k_1} \mathbf{0}. $$
\noindent
2) Let $K = K_{d_1,d_2,\ldots,d_n} = \{ \mathbf{k}=(k_1,\ldots,k_n)\in\mathbb{Z}^n_+: k_1\leq d_1,\ldots,k_n\leq d_n \}$,
$d_1,\ldots, d_n\in\mathbb{Z}_+$.
Notice that the truncated two-dimensional moment problem with rectangular data appeared in~\cite{cit_982_Kimsey_Putinar},
\cite{cit_15000_Zagorodnyuk_2018_AOT}.
The general case of the set $K_{d_1,d_2,\ldots,d_n}$ was proposed to the author by Prof. Vasilescu.
The set $K_{d_1,d_2,\ldots,d_n}$ is admissible, since
$\forall \mathbf{k} = (k_1,\ldots,k_n) \in K\backslash\{ \mathbf{0} \}$,
$$ \mathbf{k} = W_n^{k_n} W_{n-1}^{k_{n-1}} \ldots W_1^{k_1} \mathbf{0}. $$
\end{example}
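The admissibility of a concrete finite set can be checked mechanically: by Definition~\ref{d2_1} it suffices that every nonzero $\mathbf{k}\in K$ has some predecessor $\mathbf{k}-\vec e_j$ in $K$; induction on $|\mathbf{k}|$ then produces the whole chain~(\ref{f2_5})--(\ref{f2_7}). A minimal numerical sketch (our own illustration, not part of the paper; the function name is ours):

```python
from itertools import product

def is_admissible(K):
    """Definition 2.1: 0 lies in K and every nonzero k in K is reachable
    from 0 by unit coordinate increments W_j that never leave K."""
    K = {tuple(k) for k in K}
    n = len(next(iter(K)))
    zero = (0,) * n
    if zero not in K:
        return False
    for k in K - {zero}:
        # It suffices that some predecessor k - e_j lies in K:
        # induction on |k| then yields the chain (2.5)-(2.7).
        if not any(k[j] > 0 and k[:j] + (k[j] - 1,) + k[j + 1:] in K
                   for j in range(n)):
            return False
    return True

# the sets of Example 1.1 in Z^2_+: the simplex K_3 and the box K_{2,3}
K_simplex = [k for k in product(range(4), repeat=2) if sum(k) <= 3]
K_box = list(product(range(3), range(4)))
```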
Suppose that for an admissible finite set $K\subset\mathbb{Z}^n_+$ the moment problem~(\ref{f1_1}), with
$\mathcal{K} = K+K$ and some $\mathcal{S} = (s_{\mathbf{k}})_{\mathbf{k}\in\mathcal{K}}$,
has a solution $\mu$. Let us investigate which properties of the data $\mathcal{S}$ this fact yields.
The first property is the usual positivity condition. For practical purposes, it is useful to introduce an
indexation of the set $K$ by a single index $j$.
\begin{equation}
\label{f2_9}
K = \left\{
\mathbf{k}_j(\in\mathbb{Z}^n_+),\quad j=0,1,\ldots,\rho
\right\}.
\end{equation}
Of course, $\rho + 1$ is the number of elements in $K$.
Consider an arbitrary polynomial of the following form:
\begin{equation}
\label{f2_20}
p(\mathbf{t}) = \sum_{j=0}^\rho \alpha_j \mathbf{t}^{\mathbf{k}_j},\qquad \alpha_j\in\mathbb{C}.
\end{equation}
Then
$$ 0 \leq \int |p|^2 d\mu =
\sum_{j,m=0}^\rho \alpha_j \overline{\alpha_m} s_{\mathbf{k}_j + \mathbf{k}_m}. $$
Denote
\begin{equation}
\label{f2_23}
\Gamma = \left( s_{\mathbf{k}_j + \mathbf{k}_m} \right)_{m,j=0}^\rho.
\end{equation}
We obtain the first necessary condition of the solvability:
\begin{equation}
\label{f2_25}
\Gamma \geq 0.
\end{equation}
We now suppose that for an admissible finite set $K\subset\mathbb{Z}^n_+$ the moment problem~(\ref{f1_1}), with
$\mathcal{K} = K+K$ and some $\mathcal{S} = (s_{\mathbf{k}})_{\mathbf{k}\in\mathcal{K}}$, is given and condition~(\ref{f2_25})
holds (we do not require that the moment problem is solvable).
A set $\mathfrak{L}$ of all polynomials of the form~(\ref{f2_20}) is a linear vector space. Let us consider the following functional:
$$ <p, q> = \sum_{j,m=0}^\rho \alpha_j \overline{\beta_m} s_{\mathbf{k}_j + \mathbf{k}_m}, $$
where $p$ is from~(\ref{f2_20}), and $q$ has the same form as $p$, but with $\beta_j (\in\mathbb{C})$ instead of $\alpha_j$.
The functional $<\cdot,\cdot>$ is sesquilinear, $<p,p>\geq 0$, and $\overline{<p,q>} = <q,p>$.
Introducing the equivalence classes $[p]_{\mathfrak{L}}$ in $\mathfrak{L}$ we obtain a finite-dimensional Hilbert space $H$.
\textit{We now return to the case of the solvable moment problem}. Consider the space $L^2_\mu$ which consists of (the equivalence classes
of) complex-valued measurable functions $f$ such that $\int |f(\mathbf{t})|^2 d\mu < \infty$.
The equivalence class in $L^2_\mu$ will be denoted by $[\cdot]_{L^2_\mu}$.
Denote by $T_l$ the multiplication operator in $L^2_\mu$:
\begin{equation}
\label{f2_27}
T_l f(\mathbf{t}) = t_l f(\mathbf{t}),\qquad f\in D_l,
\end{equation}
where $D_l = \{ f(\mathbf{t})\in L^2_\mu:\ t_l f(\mathbf{t})\in L^2_\mu \}$.
\noindent
Consider the associated Hilbert space $H$, defined as above. The following transformation is useful:
\begin{equation}
\label{f2_29}
W \sum_{j=0}^\rho \alpha_j [ \mathbf{t}^{\mathbf{k}_j} ]_{L^2_\mu} = \sum_{j=0}^\rho \alpha_j [ \mathbf{t}^{\mathbf{k}_j} ]_{\mathfrak{L}},\qquad
\alpha_j\in\mathbb{C}.
\end{equation}
The transformation $W$ is well-defined, linear and isometric. It maps
$L^2_{\mu;K} := \mathop{\rm Lin}\nolimits \{ [ \mathbf{t}^{\mathbf{k}_j} ]_{L^2_\mu} \}_{j=0}^\rho$
onto the whole space $H$. Denote
$\vec e_r := (\delta_{r,m})_{m=1}^n \in \mathbb{Z}^n_+$, $r=1,\ldots,n$, and
\begin{equation}
\label{f2_31}
\Omega_l = \{ j\in \{ 0,\ldots,\rho \}:\ \mathbf{k}_j + \vec e_l \in K \},\qquad l=1,\ldots,n.
\end{equation}
Observe that
$$ W T_l W^{-1} \sum_{j\in\Omega_l} \alpha_j [ \mathbf{t}^{\mathbf{k}_j} ]_{\mathfrak{L}} =
\sum_{j\in\Omega_l} \alpha_j [ \mathbf{t}^{\mathbf{k}_j + \vec e_l} ]_{\mathfrak{L}},\quad \alpha_j\in\mathbb{C}. $$
Since the operator $W T_l W^{-1}$ is well defined, the following implication holds:
\begin{equation}
\label{f2_33}
\left( \sum_{j\in\Omega_l} \alpha_j [ \mathbf{t}^{\mathbf{k}_j} ]_{\mathfrak{L}} = 0,\quad \mbox{for some $\alpha_j\in\mathbb{C}$} \right)
\Rightarrow
\left( \sum_{j\in\Omega_l} \alpha_j [ \mathbf{t}^{\mathbf{k}_j + \vec e_l} ]_{\mathfrak{L}} = 0 \right).
\end{equation}
The latter implication is equivalent to the following one:
$$ \left( \left( \sum_{j\in\Omega_l} \alpha_j [ \mathbf{t}^{\mathbf{k}_j} ]_{\mathfrak{L}}, [ \mathbf{t}^{\mathbf{k}_m} ]_{\mathfrak{L}} \right)_H
= 0,\ \forall m\in\Omega_l,\quad \mbox{for some $\alpha_j\in\mathbb{C}$} \right)
\Rightarrow $$
\begin{equation}
\label{f2_35}
\left( \left( \sum_{j\in\Omega_l} \alpha_j [ \mathbf{t}^{\mathbf{k}_j + \vec e_l} ]_{\mathfrak{L}}, [ \mathbf{t}^{\mathbf{k}_m + \vec e_l} ]_{\mathfrak{L}}
\right)_H = 0,\ \forall m\in\Omega_l \right)
\end{equation}
or, equivalently,
$$ \left( \sum_{j\in\Omega_l} \alpha_j s_{\mathbf{k}_j + \mathbf{k}_m} = 0,\ \forall m\in\Omega_l,\quad
\mbox{for some $\alpha_j\in\mathbb{C}$} \right)
\Rightarrow $$
\begin{equation}
\label{f2_37}
\left( \sum_{j\in\Omega_l} \alpha_j s_{\mathbf{k}_j + \vec e_l + \mathbf{k}_m + \vec e_l}
= 0,\ \forall m\in\Omega_l \right).
\end{equation}
Denote
\begin{equation}
\label{f2_39}
\Gamma_l = \left( s_{\mathbf{k}_j + \mathbf{k}_m} \right)_{m,j\in\Omega_l},\quad
\widehat \Gamma_l = \left( s_{\mathbf{k}_j + \vec e_l + \mathbf{k}_m + \vec e_l} \right)_{m,j\in\Omega_l},\qquad l=1,2,\ldots,n,
\end{equation}
where the indices from $\Omega_l$ are taken in the increasing order.
We obtain the second necessary condition of the solvability:
\begin{equation}
\label{f2_45}
\mathop{\rm Ker}\nolimits \Gamma_l \subseteq \mathop{\rm Ker}\nolimits \widehat \Gamma_l,\qquad l=1,2,\ldots,n.
\end{equation}
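Both necessary conditions are finite-dimensional and can be tested numerically from the prescribed moments alone. The following sketch (our own illustration, with hypothetical names) builds $\Gamma$ from~(\ref{f2_23}) and the pairs $(\Gamma_l, \widehat\Gamma_l)$ from~(\ref{f2_39}), and then checks~(\ref{f2_25}) and~(\ref{f2_45}):

```python
import numpy as np

def moment_matrices(K, s):
    """Gamma from (2.23) and the pairs (Gamma_l, hat Gamma_l) from (2.39),
    for an indexed admissible set K = [k_0, ..., k_rho] and moments s
    given as a dict on K + K."""
    n = len(K[0])
    add = lambda a, b: tuple(x + y for x, y in zip(a, b))
    Gamma = np.array([[s[add(kj, km)] for kj in K] for km in K], float)
    pairs = []
    for l in range(n):
        e = tuple(int(r == l) for r in range(n))             # the vector e_l
        Om = [j for j, k in enumerate(K) if add(k, e) in K]  # Omega_l
        G = np.array([[s[add(K[j], K[m])] for j in Om] for m in Om], float)
        Gh = np.array([[s[add(add(K[j], e), add(K[m], e))] for j in Om]
                       for m in Om], float)
        pairs.append((G, Gh))
    return Gamma, pairs

def necessary_conditions_hold(Gamma, pairs, tol=1e-9):
    """(2.25): Gamma >= 0; (2.45): hat Gamma_l vanishes on Ker Gamma_l."""
    if np.linalg.eigvalsh(Gamma).min() < -tol:
        return False
    for G, Gh in pairs:
        w, V = np.linalg.eigh(G)
        null = V[:, np.abs(w) < tol]                         # basis of Ker G
        if null.size and np.abs(Gh @ null).max() > tol:
            return False
    return True
```

For moments coming from an actual measure (e.g. an atomic one), both tests must pass, which gives a quick sanity check of an implementation.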
\section{The operator approach to the moment problem. The dimensional stability.}
Suppose that for an admissible finite set $K\subset\mathbb{Z}^n_+$ the moment problem~(\ref{f1_1}), with
$\mathcal{K} = K+K$ and some $\mathcal{S} = (s_{\mathbf{k}})_{\mathbf{k}\in\mathcal{K}}$, is given.
Choose and fix some indexation~(\ref{f2_9}). Assume that conditions~(\ref{f2_25}),(\ref{f2_45})
hold.
\noindent
We may construct the associated Hilbert space $H$, as in the previous section. For $l=1,\ldots,n$ we consider the following operators:
\begin{equation}
\label{f3_5}
M_l \sum_{j\in\Omega_l} \alpha_j [ \mathbf{t}^{\mathbf{k}_j} ]_{\mathfrak{L}} =
\sum_{j\in\Omega_l} \alpha_j [ \mathbf{t}^{\mathbf{k}_j + \vec e_l} ]_{\mathfrak{L}},\quad \alpha_j\in\mathbb{C},
\end{equation}
with $D(M_l) = \mathop{\rm Lin}\nolimits \{ [ \mathbf{t}^{\mathbf{k}_j} ]_{\mathfrak{L}} \}_{j\in\Omega_l}$.
By condition~(\ref{f2_45}) the operator $M_l$ is well-defined. Moreover, it is linear and symmetric.
In particular, we have
\begin{equation}
\label{f3_7}
M_l [ \mathbf{t}^{\mathbf{k}_j} ]_{\mathfrak{L}} =
[ \mathbf{t}^{\mathbf{k}_j + \vec e_l} ]_{\mathfrak{L}},\qquad j\in\Omega_l;\quad l=1,\ldots,n.
\end{equation}
Suppose that there exist commuting self-adjoint operators $\widetilde M_j\supseteq M_j$ ($j=1,\ldots,n$) in
a \textit{finite-dimensional} Hilbert space $\widetilde H\supseteq H$.
Observe that in this case operators $\widetilde M_j$ are bounded and defined on the whole space $\widetilde H$.
Choose an arbitrary $\mathbf{k}=(k_1,\ldots,k_n)\in K\backslash\{ \mathbf{0} \}$. We shall use the notations from Definition~\ref{d2_1}.
By the induction argument one can verify that
\begin{equation}
\label{f3_11}
\left[ \mathbf{t}^{\widetilde{\mathbf{k}}_r} \right]_{\mathfrak{L}} =
\widetilde M_{a_r} \ldots \widetilde M_{a_1} [1]_{\mathfrak{L}},\qquad r=1,2,\ldots, |\mathbf{k}|.
\end{equation}
In particular, we obtain that
\begin{equation}
\label{f3_15}
\left[ \mathbf{t}^{\mathbf{k}} \right]_{\mathfrak{L}} =
\widetilde M_{a_{|\mathbf{k}|}} \ldots \widetilde M_{a_1} [1]_{\mathfrak{L}}.
\end{equation}
Since operators $\widetilde M_j$ commute we may rearrange the product in~(\ref{f3_15}). Moreover, it is clear that
$W_1$ appears in the product in~(\ref{f2_5}) $k_1$ times, $W_2$ appears $k_2$ times, ..., $W_n$ appears $k_n$ times.
Then we get
\begin{equation}
\label{f3_17}
\left[ \mathbf{t}^{\mathbf{k}} \right]_{\mathfrak{L}} =
\widetilde M_{1}^{k_1} \widetilde M_{2}^{k_2} \ldots \widetilde M_{n}^{k_n} [1]_{\mathfrak{L}},\qquad
\forall \mathbf{k}=(k_1,\ldots,k_n)\in K.
\end{equation}
We can now construct a solution of the moment problem. For an arbitrary $\mathbf{k}=(k_1,\ldots,k_n)\in (K+K)$,
$\mathbf{k} = \mathbf{k}'+ \mathbf{k}''$,
$\mathbf{k}'=(k_1',\ldots,k_n'), \mathbf{k}''=(k_1'',\ldots,k_n'')\in K$, we may write
$$ s_{\mathbf{k}} = \left( [\mathbf{t}^{\mathbf{k}'}], [\mathbf{t}^{\mathbf{k}''}] \right)_H =
\left( \widetilde M_{1}^{k_1'} \ldots \widetilde M_{n}^{k_n'} [1]_{\mathfrak{L}},
\widetilde M_{1}^{k_1''} \ldots \widetilde M_{n}^{k_n''} [1]_{\mathfrak{L}} \right)_H = $$
\begin{equation}
\label{f3_18}
= \left( \widetilde M_{1}^{k_1} \ldots \widetilde M_{n}^{k_n} [1]_{\mathfrak{L}}, [1]_{\mathfrak{L}} \right)_H =
\int \mathbf{t}^{\mathbf{k}} d\mu(\mathbf{t}),
\end{equation}
where
\begin{equation}
\label{f3_19}
\mu(\delta) = \left(
E(\delta) [1]_{\mathfrak{L}}, [1]_{\mathfrak{L}}
\right)_H,\qquad \delta\in\mathfrak{B}(\mathbb{R}^n),
\end{equation}
where $E(\delta)$ is the spectral measure of a commuting tuple $\widetilde M_1,\ldots,\widetilde M_n$.
Consequently, we get a solution $\mu$ of the moment problem.
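In a finite-dimensional space, the measure~(\ref{f3_19}) can be computed concretely by joint diagonalization of the matrices of the commuting tuple. A sketch (our own illustration, under the generic assumption that a random linear combination of the matrices has simple spectrum, so that its eigenvectors are joint eigenvectors of the whole tuple):

```python
import numpy as np

def atomic_solution(Ms, v):
    """Atoms and weights of mu in (3.19): Ms is a commuting tuple of real
    symmetric matrices, v the coordinate vector of [1] in an orthonormal
    basis.  A generic random combination of the Ms is diagonalized; its
    eigenvectors then serve as joint eigenvectors of the whole tuple."""
    rng = np.random.default_rng(0)
    c = rng.standard_normal(len(Ms))
    _, U = np.linalg.eigh(sum(ci * Mi for ci, Mi in zip(c, Ms)))
    atoms = np.array([np.diag(U.T @ M @ U) for M in Ms]).T   # one row per atom
    weights = (U.T @ v) ** 2               # mu({x}) = (E({x})[1], [1])
    return atoms, weights
```

Atoms whose weight vanishes numerically do not belong to the support and can be discarded.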
Denote
\begin{equation}
\label{f3_30}
\Omega_0 = \{ j\in \{0,\ldots,\rho \}:\ \mathbf{k}_j + \vec e_1, \mathbf{k}_j + \vec e_2, \ldots, \mathbf{k}_j + \vec e_n \in K \},
\end{equation}
and
\begin{equation}
\label{f3_35}
H_0 = \mathop{\rm Lin}\nolimits \{ [ \mathbf{t}^{\mathbf{k}_j} ]_{\mathfrak{L}} \}_{j\in\Omega_0}.
\end{equation}
Observe that
$$ \Omega_0\subseteq \Omega_j,\qquad j=1,\ldots,n, $$
and therefore
$$ H_0\subseteq D(M_j),\qquad j=1,\ldots,n. $$
\begin{definition}
\label{d3_1}
Suppose that for an admissible finite set $K\subset\mathbb{Z}^n_+$ the moment problem~(\ref{f1_1}), with
$\mathcal{K} = K+K$ and some $\mathcal{S} = (s_{\mathbf{k}})_{\mathbf{k}\in\mathcal{K}}$, is given and
conditions~(\ref{f2_25}),(\ref{f2_45})
hold (for some indexation). Define the associated Hilbert space $H$ and its subspace $H_0$.
The set of moments $\mathcal{S}$ is said to be \textbf{dimensionally stable}, if
$\dim H = \dim H_0$.
\end{definition}
Suppose that for the moment problem, as in Definition~\ref{d3_1}, the set $\mathcal{S}$ is dimensionally stable.
Then operators $M_l$ are self-adjoint and defined on the whole $H$. Observe that for $l,r\in\{1,\ldots, n\}:\ l\not= r$, we have
\begin{equation}
\label{f3_37}
M_l M_r [ \mathbf{t}^{\mathbf{k}_j} ]_{\mathfrak{L}} = M_l [ \mathbf{t}^{\mathbf{k}_j + \vec e_r} ]_{\mathfrak{L}},\qquad
\forall j\in\Omega_0.
\end{equation}
In general, it is not clear if the element $\mathbf{k}_j + \vec e_r =: \mathbf{k}_s$, with $s\in\{0,\ldots,\rho\}$,
has the property $s\in\Omega_l$. Thus, we cannot apply
relation~(\ref{f3_7}) to get $[ \mathbf{t}^{\mathbf{k}_j + \vec e_r + \vec e_l} ]_{\mathfrak{L}}$.
\textit{However, for an important type of admissible sets $K$, described in Example~\ref{e1_1}, part~2), this property
holds and we come to the commutativity of operators $M_k$.}
Then we can construct an atomic solution $\mu$ by relation~(\ref{f3_19}).
Assume that $s_{\mathbf{0}} \not= 0$. Then $H\not= \{ 0 \}$.
Apply the Gram-Schmidt orthogonalization process, removing linearly dependent elements, to the elements
$[ \mathbf{t}^{\mathbf{k}_j} ]_{\mathfrak{L}}$, $j\in\Omega_0$. Then we get an orthonormal basis
$\mathfrak{F} = \{ f_j \}_{j=0}^\tau$ in $H_0 = H$.
For an arbitrary admissible $K$, the commutativity of operators $M_j$ can be directly checked by using their matrices with respect to $\mathfrak{F}$.
We do not know whether the commutativity holds for all admissible sets $K$ (in the case of dimensional stability).
Observe that conditions~(\ref{f2_25}),(\ref{f2_45}) can be verified by standard tools. To verify the dimensional stability,
one can find the projections of the elements $[ \mathbf{t}^{\mathbf{k}_j} ]_{\mathfrak{L}}$, $j\in\{ 0,\ldots, \rho \}\backslash\Omega_0$,
on the subspace $H_0$, using an orthonormal basis in $H_0$. The norms of the differences between these elements and their projections should be zero.
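Equivalently, since $H$ and $H_0$ are spanned by the vectors $[ \mathbf{t}^{\mathbf{k}_j} ]_{\mathfrak{L}}$, dimensional stability can be read off from ranks of Gram matrices: $\dim H = \mathop{\rm rank}\Gamma$ and $\dim H_0$ is the rank of the principal submatrix of $\Gamma$ on $\Omega_0$. A sketch of this reformulation (our own illustration):

```python
import numpy as np

def dimensionally_stable(Gamma, Omega0, tol=1e-9):
    """dim H = rank(Gamma); dim H_0 = rank of the principal submatrix of
    Gamma on the index set Omega_0.  Both matrices are Gram matrices of
    spanning vectors, so their ranks equal the dimensions of the spans."""
    def rank(A):
        w = np.linalg.eigvalsh(A)
        return int(np.sum(w > tol * max(1.0, abs(w).max())))
    return rank(Gamma) == rank(Gamma[np.ix_(Omega0, Omega0)])
```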
\begin{example}
\label{e3_1}
Consider the truncated moment problem~(\ref{f1_1}) with $n=2$,
$K = K_{2,2}$ (see~Example~\ref{e1_1}), $\mathcal{K} = K+K = K_{4,4}$, and the following moments:
$$ s_{(0,0)} = 3,\ s_{(0,1)} = s_{(0,2)} = s_{(0,3)} = s_{(0,4)} = 1, $$
$$ s_{(1,0)} = 4,\ s_{(1,1)} = s_{(1,2)} = s_{(1,3)} = s_{(1,4)} = 0, $$
$$ s_{(2,0)} = 8,\ s_{(2,1)} = s_{(2,2)} = s_{(2,3)} = s_{(2,4)} = 0, $$
$$ s_{(3,0)} = 16,\ s_{(3,1)} = s_{(3,2)} = s_{(3,3)} = s_{(3,4)} = 0, $$
$$ s_{(4,0)} = 32,\ s_{(4,1)} = s_{(4,2)} = s_{(4,3)} = s_{(4,4)} = 0. $$
Choose the following indexation in the set $K$:
$$ \mathbf{k}_0 = (0,0),\ \mathbf{k}_1 = (0,1),\ \mathbf{k}_2 = (0,2), $$
$$ \mathbf{k}_3 = (1,0),\ \mathbf{k}_4 = (1,1),\ \mathbf{k}_5 = (1,2), $$
$$ \mathbf{k}_6 = (2,0),\ \mathbf{k}_7 = (2,1),\ \mathbf{k}_8 = (2,2). $$
Thus, we have $\rho = 8$. The matrix $\Gamma = (s_{\mathbf{k}_j + \mathbf{k}_m})_{m,j=0}^8$ has the following form:
\begin{equation}
\label{f3_45}
\Gamma =
\left(
\begin{array}{ccccccccc}
3 & 1 & 1 & 4 & 0 & 0 & 8 & 0 & 0 \\
1 & 1 & 1 & 0 & 0 & 0 & 0 & 0 & 0 \\
1 & 1 & 1 & 0 & 0 & 0 & 0 & 0 & 0 \\
4 & 0 & 0 & 8 & 0 & 0 & 16 & 0 & 0 \\
0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\
0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\
8 & 0 & 0 & 16 & 0 & 0 & 32 & 0 & 0 \\
0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\
0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \end{array}
\right).
\end{equation}
The non-negativity of $\Gamma$ can be verified directly, by checking that the determinants of all submatrices, standing on the intersections
of rows and columns with the same indices, are non-negative.
The matrices $\Gamma_1, \Gamma_2, \widehat\Gamma_1, \widehat\Gamma_2$ have the following forms:
\begin{equation}
\label{f3_50}
\Gamma_1 =
\left(
\begin{array}{cccccc}
3 & 1 & 1 & 4 & 0 & 0 \\
1 & 1 & 1 & 0 & 0 & 0 \\
1 & 1 & 1 & 0 & 0 & 0 \\
4 & 0 & 0 & 8 & 0 & 0 \\
0 & 0 & 0 & 0 & 0 & 0 \\
0 & 0 & 0 & 0 & 0 & 0 \end{array}
\right),\quad
\Gamma_2 =
\left(
\begin{array}{cccccc}
3 & 1 & 4 & 0 & 8 & 0 \\
1 & 1 & 0 & 0 & 0 & 0 \\
4 & 0 & 8 & 0 & 16 & 0 \\
0 & 0 & 0 & 0 & 0 & 0 \\
8 & 0 & 16 & 0 & 32 & 0 \\
0 & 0 & 0 & 0 & 0 & 0\end{array}
\right),
\end{equation}
\begin{equation}
\label{f3_52}
\widehat\Gamma_1 =
\left(
\begin{array}{cccccc}
8 & 0 & 0 & 16 & 0 & 0 \\
0 & 0 & 0 & 0 & 0 & 0 \\
0 & 0 & 0 & 0 & 0 & 0 \\
16 & 0 & 0 & 32 & 0 & 0 \\
0 & 0 & 0 & 0 & 0 & 0 \\
0 & 0 & 0 & 0 & 0 & 0\end{array}
\right),\quad
\widehat\Gamma_2 =
\left(
\begin{array}{cccccc}
1 & 1 & 0 & 0 & 0 & 0 \\
1 & 1 & 0 & 0 & 0 & 0 \\
0 & 0 & 0 & 0 & 0 & 0 \\
0 & 0 & 0 & 0 & 0 & 0 \\
0 & 0 & 0 & 0 & 0 & 0 \\
0 & 0 & 0 & 0 & 0 & 0\end{array}
\right).
\end{equation}
The linear algebraic equation
$$ \Gamma_1 \vec x = 0,\quad \vec x = (x_1,\ldots,x_6)^T, $$
has the following solution:
$x_3,x_4,x_5,x_6$ are arbitrary complex numbers, $x_1 = -2 x_4$, $x_2 = - x_3 + 2 x_4$.
It is readily checked that for any solution it holds $\widehat \Gamma_1 \vec x = 0$.
\noindent
On the other hand, the linear algebraic equation
$$ \Gamma_2 \vec x = 0,\quad \vec x = (x_1,\ldots,x_6)^T, $$
has the following solution:
$x_2,x_4,x_5,x_6$ are arbitrary complex numbers, $x_1 = - x_2$, $x_3 = \frac{1}{2} x_2 - 2 x_5$.
It is readily checked that for any solution it holds $\widehat \Gamma_2 \vec x = 0$.
Thus, conditions~(\ref{f2_25}),(\ref{f2_45}) hold. Let us check the dimensional stability.
Consider the associated Hilbert space $H$.
For simplicity, we denote
$$ g_j = [ \mathbf{t}^{\mathbf{k}_j} ]_{\mathfrak{L}},\qquad j=0,\ldots, 8. $$
Observe that
$$ \Omega_0 = \{ 0, 1, 3, 4 \}. $$
Let us apply the Gram-Schmidt orthogonalization process, removing linearly dependent elements, to the sequence
$g_0, g_1, g_3, g_4$. Notice that all norms and scalar products are calculated by the moments:
\begin{equation}
\label{f3_54}
(g_j,g_r)_H = \left(
[ \mathbf{t}^{\mathbf{k}_j} ]_{\mathfrak{L}},
[ \mathbf{t}^{\mathbf{k}_r} ]_{\mathfrak{L}}
\right)_H =
s_{\mathbf{k}_j + \mathbf{k}_r},\qquad j,r=0,1,\ldots, 8.
\end{equation}
We obtain an orthonormal basis
$\mathfrak{F} = \{ f_0, f_1 \}$ in $H_0$, with
\begin{equation}
\label{f3_55}
f_0 = \frac{1}{\sqrt{3}} g_0,\quad
f_1 = \sqrt{\frac{3}{2}} \left(
g_1 - \frac{1}{3} g_0
\right).
\end{equation}
Moreover, it turns out that
\begin{equation}
\label{f3_57}
g_3 = 2 g_0 - 2 g_1,\quad g_4 = 0.
\end{equation}
It remains to verify that the projections of elements $g_2,g_5,g_6,g_7,g_8$ on $H_0$ coincide with the corresponding elements.
For example,
$$ g_2 - (g_2,f_0) f_0 - (g_2,f_1) f_1 = g_2 - g_1, $$
but
$$ \| g_2 - g_1 \|_H^2 = ( g_2 - g_1, g_2 - g_1 )_H = (g_2,g_2) - (g_2,g_1) - (g_1,g_2) + (g_1,g_1) = 0. $$
For other elements, we proceed in a similar way.
Consequently, the sequence $\mathcal{S} = (s_{\mathbf{k}})_{\mathbf{k}\in\mathcal{K}}$ is dimensionally stable.
Let us construct an atomic solution $\mu$ of the moment problem.
Observe that
$$ \Omega_1 = \{ 0, 1, 2, 3, 4, 5 \},\quad \Omega_2 = \{ 0, 1, 3, 4, 6, 7 \}. $$
The operators $M_1$ and $M_2$ act in the following way:
$$ M_1 g_0 = g_3 = 2 g_0 - 2 g_1,\quad M_1 g_1 = g_4 =0; $$
$$ M_2 g_0 = g_1,\quad M_2 g_1 = g_2 = g_1. $$
Therefore
$$ M_1 f_0 = \frac{4}{3} f_0 - \frac{2\sqrt{2}}{3} f_1,\quad M_1 f_1 = - \frac{2\sqrt{2}}{3} f_0 + \frac{2}{3} f_1; $$
$$ M_2 f_0 = \frac{1}{3} f_0 + \frac{\sqrt{2}}{3} f_1,\quad M_2 f_1 = \frac{\sqrt{2}}{3} f_0 + \frac{2}{3} f_1. $$
The matrices $\mathcal{M}_1,\mathcal{M}_2$ of operators $M_1,M_2$, respectively, for the basis
$\mathfrak{F}$ are:
$$ \mathcal{M}_1 = \left(
\begin{array}{cc} \frac{4}{3} & - \frac{2\sqrt{2}}{3}\\
- \frac{2\sqrt{2}}{3} & \frac{2}{3}\end{array}
\right),\quad
\mathcal{M}_2 = \left(
\begin{array}{cc} \frac{1}{3} & \frac{\sqrt{2}}{3}\\
\frac{\sqrt{2}}{3} & \frac{2}{3}\end{array}
\right). $$
The matrix $\mathcal{M}_1$ has eigenvalues $\lambda_1 = 0, \lambda_2 = 2$, with eigenvectors respectively:
$$ \vec u_1 = \frac{1}{\sqrt{3}} (1,\sqrt{2})^T,\quad \vec u_2 = \frac{1}{\sqrt{3}} (-\sqrt{2}, 1)^T. $$
The matrix $\mathcal{M}_2$ has eigenvalues $\widetilde\lambda_1 = 0, \widetilde\lambda_2 = 1$, with eigenvectors respectively:
$$ \vec v_1 = \frac{1}{\sqrt{3}} (\sqrt{2},-1)^T,\quad \vec v_2 = \frac{1}{\sqrt{3}} (1,\sqrt{2})^T. $$
Denote
$$ \mathcal{H}_1 = \mathop{\rm Lin}\nolimits \left\{ \frac{1}{\sqrt{3}} (f_0 + \sqrt{2} f_1) \right\},\quad
\mathcal{H}_2 = \mathop{\rm Lin}\nolimits \left\{ \frac{1}{\sqrt{3}} (-\sqrt{2} f_0 + f_1) \right\}, $$
$$ \widetilde{\mathcal{H}}_1 = \mathop{\rm Lin}\nolimits \left\{ \frac{1}{\sqrt{3}} (\sqrt{2} f_0 - f_1) \right\},\quad
\widetilde{\mathcal{H}}_2 = \mathop{\rm Lin}\nolimits \left\{ \frac{1}{\sqrt{3}} (f_0 + \sqrt{2} f_1) \right\}. $$
Observe that the spectral measure $E(\delta)$ in relation~(\ref{f3_19}) can have jumps at points $(x,y)$ with $x\in \{ \lambda_1,\lambda_2 \}$,
$y\in\{ \widetilde{\lambda}_1, \widetilde{\lambda}_2 \}$.
The measure support is contained in this set of four points.
Thus, the measure $\mu$ has at most $4$ atoms. Notice that
$$ \mu(\{ (x,y) \}) = (E(\{ (x,y) \}) g_0, g_0)_H = (E_1(\{ x \}) E_2(\{ y \}) g_0, g_0)_H = $$
\begin{equation}
\label{f3_60}
= (E_2(\{ y \}) g_0, E_1(\{ x \}) g_0)_H,\qquad \forall (x,y)\in\mathbb{R}^2.
\end{equation}
Observe that
$$ E_1(\{ 0 \}) g_0 = P_{\mathcal{H}_1} g_0 = \frac{1}{\sqrt{3}} (f_0 + \sqrt{2} f_1), $$
$$ E_1(\{ 2 \}) g_0 = P_{\mathcal{H}_2} g_0 = \frac{\sqrt{6}}{3} (\sqrt{2} f_0 - f_1), $$
$$ E_2(\{ 0 \}) g_0 = P_{\widetilde{\mathcal{H}}_1} g_0 = \frac{\sqrt{6}}{3} (\sqrt{2} f_0 - f_1), $$
$$ E_2(\{ 1 \}) g_0 = P_{\widetilde{\mathcal{H}}_2} g_0 = \frac{1}{\sqrt{3}} (f_0 + \sqrt{2} f_1). $$
By~(\ref{f3_60}) we conclude that the solution $\mu$ is $2$-atomic, having jumps $1$ and $2$ at points $(0,1)$
and $(2,0)$, respectively.
\end{example}
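The computations of this example are easy to double-check numerically. A sketch of such a verification (our own illustration; it uses $s_{(2,0)} = 8$, the value prescribed by the matrix $\Gamma$ in~(\ref{f3_45})):

```python
import numpy as np
from itertools import product

# matrices of M_1, M_2 in the basis F = {f_0, f_1}
r2 = np.sqrt(2)
M1 = np.array([[4, -2 * r2], [-2 * r2, 2]]) / 3
M2 = np.array([[1, r2], [r2, 2]]) / 3
assert np.allclose(M1 @ M2, M2 @ M1)                 # commuting pair
assert np.allclose(np.linalg.eigvalsh(M1), [0, 2])   # lambda_1, lambda_2
assert np.allclose(np.linalg.eigvalsh(M2), [0, 1])   # tilde lambda_1, lambda_2

# prescribed moments (zero where not listed)
s = {(a, 0): v for a, v in enumerate([3, 4, 8, 16, 32])}
s.update({(0, b): 1 for b in range(1, 5)})

# the 2-atomic solution: jumps 1 at (0,1) and 2 at (2,0)
mu = {(0.0, 1.0): 1.0, (2.0, 0.0): 2.0}
for k in product(range(5), repeat=2):                # all of K + K = K_{4,4}
    integral = sum(w * x ** k[0] * y ** k[1] for (x, y), w in mu.items())
    assert integral == s.get(k, 0)
```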
In the case \textit{when the dimensional stability does not hold}, one can parametrize self-adjoint extensions $\widetilde M_j$ in a finite-dimensional
Hilbert space $\widetilde H\supseteq H$ for each $M_j$ separately ($j=1,\ldots,n$). Then one may study the commutativity
of the extensions $\widetilde M_j$. For example, this can be done by the investigation of the commutativity of their (finite size) matrices
with respect to an orthonormal basis in $\widetilde H$.
On the other hand, one can write conditions that ensure that all $M_j$, but one $M_{j_0}$, are self-adjoint. Then one can
parametrize self-adjoint extensions $\widetilde M_{j_0}$ of $M_{j_0}$ inside $H$. Finally, it remains to check the commutativity of all operators,
using their matrices in an orthonormal basis.
In the next section we shall rewrite the above ideas as a detailed algorithm for the case $n=2$.
\section{An algorithm for the truncated two-dimensional moment problem.}
In this section we shall describe an algorithm which, under certain conditions, allows one to construct atomic solutions of
the moment problem~(\ref{f1_1}). For simplicity, we restrict ourselves to the case $n=2$.
\noindent
\textbf{Algorithm 1.}
\noindent
\textbf{The given data}: an admissible finite set $K\subset\mathbb{Z}^2_+$,
$\mathcal{K} := K+K$ and a set of prescribed moments $\mathcal{S} = (s_{\mathbf{k}})_{\mathbf{k}\in\mathcal{K}}$.
\noindent
\textbf{Step 1.}
Choose and fix some indexation~(\ref{f2_9}) for the set $K$, with $\mathbf{k}_0 = \mathbf{0}$.
\noindent
\textbf{Step 2.}
Check conditions~(\ref{f2_25}) and (\ref{f2_45}). If they do not hold, then \textit{the moment problem
has no solutions} and we stop the algorithm.
\noindent
\textbf{Step 3.}
Consider the associated Hilbert space $H$, which is defined as in the paragraph following formula~(\ref{f2_25}).
Although this space consists of abstract elements (equivalence classes), all required numerical calculations will be performed
by means of the following basic relation:
\begin{equation}
\label{f4_5}
\left( [ \mathbf{t}^{\mathbf{k}_j} ]_{\mathfrak{L}}, [ \mathbf{t}^{\mathbf{k}_m} ]_{\mathfrak{L}} \right)_H =
s_{\mathbf{k}_j + \mathbf{k}_m},\qquad j,m\in\{ 0,1,...,\rho \}.
\end{equation}
For $l=1,2$ we consider the multiplication operators $M_l$ as in~(\ref{f3_5}).
For convenience, we denote
\begin{equation}
\label{f4_6}
g_j = [ \mathbf{t}^{\mathbf{k}_j} ]_{\mathfrak{L}},\qquad j\in\{ 0,1,...,\rho \}.
\end{equation}
Then
\begin{equation}
\label{f4_7}
\left( g_j, g_m \right)_H =
s_{\mathbf{k}_j + \mathbf{k}_m},\qquad j,m\in\{ 0,1,...,\rho \},
\end{equation}
and
\begin{equation}
\label{f4_9}
M_l g_j =
[ \mathbf{t}^{\mathbf{k}_j + \vec e_l} ]_{\mathfrak{L}} =: g_{\eta(l;j)},\qquad j\in\Omega_l;\quad l=1,2.
\end{equation}
\noindent
\textbf{Step 4.}
If $\| g_0 \|_H^2 = s_{\mathbf{k}_0 + \mathbf{k}_0} = s_\mathbf{0} = 0$, then the moment problem can not have any solution
different from $\mu=0$. In this case, if all the moments are zero then $\mu = 0$ is a solution, otherwise there are no solutions.
Thus, in the case $s_\mathbf{0} = 0$ we stop the algorithm.
\noindent
\textbf{Step 5.} (\textit{The construction of an orthonormal basis}).
Apply the Gram-Schmidt orthogonalization procedure to the sequence
$$ g_0, g_1, ..., g_\rho, $$
removing the linearly dependent elements, if they appear.
We get an orthonormal basis
$$ \mathfrak{F} = \{ f_0, f_1, ..., f_{\rho'} \} $$
in the Hilbert space $H$ (where $0\leq \rho'\leq\rho$).
By the construction, an element $f_j$ is a linear combination of $g_k$s, with explicitly calculated coefficients.
Notice that $f_0 \not= 0$.
\noindent
\textbf{Step 6.} (\textit{The parametrization of extensions for each $M_l$ separately}).
Observe that $M_l$ is defined on elements $g_j$, $j\in \Omega_l$ ($l=1,2$).
At first, define linear operators $\widetilde M_l$ on these elements in the same way.
Denote
\begin{equation}
\label{f4_15}
\Omega_l' := \{ 0,1,...,\rho \} \backslash \Omega_l,\qquad l=1,2.
\end{equation}
For $l=1,2$ one should repeat the following procedure.
Choose an arbitrary element $g_k$, $k\in\Omega_l'$. Calculate the norm of its projection on $D(\widetilde M_l)$.
If $g_k\in D(\widetilde M_l)$ then we skip this element. Otherwise, we set
\begin{equation}
\label{f4_17}
\widetilde M_l g_k = \sum_{j=0}^{\rho'} (\alpha_{l;k,j} + \beta_{l;k,j} i) f_j,\qquad \alpha_{l;k,j},\beta_{l;k,j}\in\mathbb{R}.
\end{equation}
Then we take another element $g_k$, $k\in\Omega_l'$, and proceed in a similar way. We continue this procedure to define
$\widetilde M_l$ for all $g_k$, $k\in\Omega_l'$.
Using the linearity, we construct some linear (but not necessarily self-adjoint) extensions $\widetilde M_l$ of $M_l$ on the whole $H$ ($l=1,2$).
Notice that the case $D(M_l)=H$ is not excluded; in that case the corresponding parameters
$\alpha_{l;k,j},\beta_{l;k,j}$ are absent.
\noindent
\textbf{Step 7.} (\textit{The calculation of matrices of $\widetilde M_l$}).
Observe that each $f_j$ is a linear combination of $g_k$s (by the Gram-Schmidt orthogonalization):
\begin{equation}
\label{f4_19}
f_j = \sum_{k=0}^\rho c_{j;k} g_k,\qquad c_{j;k}\in\mathbb{C};\ j\in\mathbb{Z}_{0,\rho'};
\end{equation}
and vice versa:
\begin{equation}
\label{f4_22}
g_j = \sum_{k=0}^{\rho'} d_{j;k} f_k,\qquad d_{j;k}\in\mathbb{C};\ j\in\mathbb{Z}_{0,\rho}.
\end{equation}
Then
$$ \widetilde M_l f_j = \sum_{k=0}^\rho c_{j;k} \widetilde M_l g_k. $$
By~(\ref{f4_9}), (\ref{f4_17}) and~(\ref{f4_22}) we see that $\widetilde M_l f_j$
is a linear combination of $f_k$ with some coefficients, which may depend linearly on $\alpha_{l;k,j},\beta_{l;k,j}$.
\noindent
In the basis $\mathfrak{F}$, we calculate the matrices $\mathcal{M}_1$, $\mathcal{M}_2$ of $\widetilde M_1$ and $\widetilde M_2$, respectively.
Thus, the coefficients of $\mathcal{M}_1, \mathcal{M}_2$ may depend
\textit{linearly} on real parameters $\alpha_{l;k,j},\beta_{l;k,j}$.
\noindent
\textbf{Step 8.} (\textit{The check for the self-adjointness and the commutativity}).
The following conditions:
\begin{equation}
\label{f4_25}
\mathcal{M}_1 = \mathcal{M}_1^*,\quad \mathcal{M}_2 = \mathcal{M}_2^*,
\end{equation}
and
\begin{equation}
\label{f4_27}
\mathcal{M}_1 \mathcal{M}_2 = \mathcal{M}_2 \mathcal{M}_1,
\end{equation}
ensure the self-adjointness and the commutativity of $\widetilde M_1$ and $\widetilde M_2$.
Conditions~(\ref{f4_25}) generate linear algebraic systems for the unknown real parameters $\alpha_{l;k,j},\beta_{l;k,j}$. They
can be solved by elementary methods (e.g. by Gaussian elimination).
\noindent
Substitute the general solutions of linear systems~(\ref{f4_25}) (which may depend on free real parameters) into relation~(\ref{f4_27}).
Observe that the coefficients of matrices $\mathcal{M}_1,\mathcal{M}_2$ may depend \textit{linearly} on these new free real parameters.
If the parameters of $\mathcal{M}_1$ or the parameters of $\mathcal{M}_2$ are absent, then we obtain a linear algebraic system of
equations. In general, if we fix the parameters of $\mathcal{M}_2$, then we get a linear system with respect to the parameters of $\mathcal{M}_1$.
Similarly, if we fix the parameters of $\mathcal{M}_1$, then we get a linear system with respect to the parameters of $\mathcal{M}_2$.
If we cannot obtain a solution in this way, we stop the algorithm.
Choose and fix arbitrary parameters $\alpha_{1;k,j},\beta_{1;k,j}$, $\alpha_{2;k,j},\beta_{2;k,j}$,
satisfying relations~(\ref{f4_25}),(\ref{f4_27}).
In what follows, we shall consider matrices $\mathcal{M}_1,\mathcal{M}_2$ corresponding to these parameters.
\noindent
\textbf{Step 9.}
Find all eigenvalues and eigenvectors of matrices $\mathcal{M}_1,\mathcal{M}_2$.
\noindent
\textbf{Step 10.}
Calculate the (atomic) solution of the moment problem by formula~(\ref{f3_19}).
Observe that the solution can have atoms at points $(x,y)$, where $x$ is an eigenvalue of $\mathcal{M}_1$,
$y$ is an eigenvalue of $\mathcal{M}_2$. Notice that
\begin{equation}
\label{f4_28}
\mu(\{ (x,y) \}) = (E(\{ (x,y) \}) g_0, g_0)_H = (E_2(\{ y \}) g_0, E_1(\{ x \}) g_0)_H,
\end{equation}
where $E_j(\delta)$ ($\delta\in\mathfrak{B}(\mathbb{R})$) is the spectral measure of the (bounded) self-adjoint operator $\widetilde M_j$,
$j=1,2$.
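Steps 9 and 10 admit a compact numerical sketch (ours; it presumes the matrices $\mathcal{M}_1,\mathcal{M}_2$ and the coordinates of $g_0$ in the basis $\mathfrak{F}$ are already in hand): assemble each spectral projection from an orthonormal basis of the corresponding eigenspace, and evaluate the atom masses by~(\ref{f4_28}).

```python
import numpy as np

def atomic_solution(M1, M2, g0):
    """Atoms of mu via (f4_28): mu({(x, y)}) = (E_2({y}) g0, E_1({x}) g0)_H,
    where E_j is the spectral measure of the self-adjoint matrix M_j."""
    w1, V1 = np.linalg.eigh(M1)   # eigenvalues (ascending) and orthonormal eigenvectors
    w2, V2 = np.linalg.eigh(M2)
    atoms = {}
    for x in np.unique(np.round(w1, 8)):
        P1 = V1[:, np.isclose(w1, x)]       # basis of the eigenspace for x
        E1g0 = P1 @ (P1.T @ g0)             # E_1({x}) g0
        for y in np.unique(np.round(w2, 8)):
            P2 = V2[:, np.isclose(w2, y)]
            E2g0 = P2 @ (P2.T @ g0)         # E_2({y}) g0
            mass = float(E2g0 @ E1g0)
            if mass > 1e-9:                 # keep genuine atoms only
                atoms[(float(x), float(y))] = mass
    return atoms
```

For the data of Example~\ref{e4_1} below, this yields the masses $1$ and $3$ at $(1,0)$ and $(1,4)$.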
Let us illustrate this algorithm by the following examples.
\begin{example}
\label{e4_1}
Consider the truncated moment problem~(\ref{f1_1}) with $n=2$,
$K = K_{1,1}$ (see~Example~\ref{e1_1}), $\mathcal{K} = K+K = K_{2,2}$, and the following moments:
$$ s_{(0,0)} = 4,\ s_{(0,1)} = 12,\ s_{(0,2)} = 48, $$
$$ s_{(1,0)} = 4,\ s_{(1,1)} = 12,\ s_{(1,2)} = 48, $$
$$ s_{(2,0)} = 4,\ s_{(2,1)} = 12,\ s_{(2,2)} = 48. $$
\noindent
\textbf{Step 1.}
Choose the following indexation in the set $K$:
$$ \mathbf{k}_0 = (0,0),\ \mathbf{k}_1 = (0,1),\ \mathbf{k}_2 = (1,0),\ \mathbf{k}_3 = (1,1). $$
Thus, we have $\rho = 3$.
\noindent
\textbf{Step 2.}
The matrix $\Gamma = (s_{\mathbf{k}_j + \mathbf{k}_m})_{m,j=0}^3$ has the following form:
\begin{equation}
\label{f4_30}
\Gamma =
\left(
\begin{array}{cccc}
4 & 12 & 4 & 12 \\
12 & 48 & 12 & 48 \\
4 & 12 & 4 & 12 \\
12 & 48 & 12 & 48 \end{array}
\right).
\end{equation}
The non-negativity of $\Gamma$ holds. It is verified by checking that the determinants of all principal submatrices (i.e., those standing on the intersections
of rows and columns with the same indices) are non-negative.
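For larger index sets, checking all principal minors quickly becomes expensive; numerically it is simpler to verify that the smallest eigenvalue of $\Gamma$ is non-negative. A short numpy check for the matrix~(\ref{f4_30}) (our illustration):

```python
import numpy as np

# Gamma of (f4_30); a real symmetric matrix is non-negative
# if and only if its smallest eigenvalue is non-negative.
Gamma = np.array([[4., 12, 4, 12], [12, 48, 12, 48],
                  [4, 12, 4, 12], [12, 48, 12, 48]])
w = np.linalg.eigvalsh(Gamma)    # eigenvalues in ascending order
assert w[0] > -1e-9              # Gamma >= 0 (up to rounding)
```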
Observe that
\begin{equation}
\label{f4_32}
\Omega_1 = \{ 0,1 \},\ \Omega_2 = \{ 0,2 \}.
\end{equation}
The matrices $\Gamma_1, \Gamma_2, \widehat\Gamma_1, \widehat\Gamma_2$ have the following forms:
\begin{equation}
\label{f4_50}
\Gamma_1 =
\left(
\begin{array}{cc}
4 & 12 \\
12 & 48\end{array}
\right),\quad
\Gamma_2 =
\left(
\begin{array}{cc}
4 & 4 \\
4 & 4\end{array}
\right),
\end{equation}
\begin{equation}
\label{f4_52}
\widehat\Gamma_1 =
\left(
\begin{array}{cc}
4 & 12 \\
12 & 48 \end{array}
\right),\quad
\widehat\Gamma_2 =
\left(
\begin{array}{cc}
48 & 48 \\
48 & 48\end{array}
\right).
\end{equation}
Therefore conditions~(\ref{f2_45}) hold.
\noindent
\textbf{Step 3.}
Consider the associated Hilbert space $H$, which is defined as in the paragraph following formula~(\ref{f2_25}).
Consider the multiplication operators $M_l$ as in~(\ref{f3_5}). Denote
$$ g_j = [ \mathbf{t}^{\mathbf{k}_j} ]_{\mathfrak{L}},\qquad j=0,1,2,3. $$
Notice that
\begin{equation}
\label{f4_55}
M_1 g_0 = g_2,\quad M_1 g_1 = g_3,
\end{equation}
\begin{equation}
\label{f4_57}
M_2 g_0 = g_1,\quad M_2 g_2 = g_3,
\end{equation}
and
\begin{equation}
\label{f4_58}
D(M_1) = \mathop{\rm Lin}\nolimits \{ g_0, g_1 \},\quad
D(M_2) = \mathop{\rm Lin}\nolimits \{ g_0, g_2 \}.
\end{equation}
\noindent
\textbf{Step 4.}
In our case we have $\| g_0 \|_H^2 = s_{\mathbf{k}_0 + \mathbf{k}_0} = s_\mathbf{0} = 4\not= 0$.
\noindent
\textbf{Step 5.}
Let us apply the Gram-Schmidt orthogonalization process, removing linearly dependent elements, to the sequence
$g_0, g_1, g_2, g_3$.
We shall use the property~(\ref{f4_7}).
We obtain that
\begin{equation}
\label{f4_61}
f_0 = \frac{1}{2} g_0,\quad f_1 = \frac{1}{2\sqrt{3}} \left(
g_1 - 3 g_0 \right),
\end{equation}
and
\begin{equation}
\label{f4_63}
g_2 = g_0,\quad g_3 = g_1.
\end{equation}
Therefore
$\mathfrak{F} := \{ f_0, f_1 \}$ is an orthonormal basis in $H$, and $\rho' = 1$.
\noindent
\textbf{Step 6.}
Notice that
\begin{equation}
\label{f4_65}
\Omega_1' = \{ 2,3 \},\ \Omega_2' = \{ 1,3 \}.
\end{equation}
By~(\ref{f4_58}) and (\ref{f4_63}) we see that $D(M_1) = H$ and $D(M_2) = \mathop{\rm Lin}\nolimits \{ g_0 \}$.
Therefore $\widetilde M_1 = M_1$.
Define $\widetilde M_2$ on $g_1$ in the following way:
\begin{equation}
\label{f4_67}
\widetilde M_2 g_1 = \sum_{j=0}^{1} (\alpha_{2;1,j} + \beta_{2;1,j} i) f_j,\qquad \alpha_{2;1,j},\beta_{2;1,j}\in\mathbb{R}.
\end{equation}
Since $g_3 = g_1$, the procedure of the extension is finished.
\noindent
\textbf{Step 7.}
By~(\ref{f4_55}),(\ref{f4_63}) we see that $M_1 = \widetilde M_1 = E_H$. Therefore, $\mathcal{M}_1$ is the identity matrix.
Let us calculate $\mathcal{M}_2$.
Observe that
\begin{equation}
\label{f4_71}
g_0 = 2 f_0,\quad g_1 = 6 f_0 + 2\sqrt{3} f_1.
\end{equation}
By~(\ref{f4_61}),(\ref{f4_71}),(\ref{f4_57}),(\ref{f4_67}) we get
$$ \widetilde M_2 f_0 = 3 f_0 + \sqrt{3} f_1, $$
$$ \widetilde M_2 f_1 = \left(
\frac{1}{2\sqrt{3}} (\alpha_{2;1,0} + \beta_{2;1,0} i) - 18
\right) f_0
+
\left(
\frac{1}{2\sqrt{3}} (\alpha_{2;1,1} + \beta_{2;1,1} i) - 6\sqrt{3}
\right)
f_1. $$
Then
\begin{equation}
\label{f4_73}
\mathcal{M}_2 =
\left(
\begin{array}{cc}
3 & \frac{1}{2\sqrt{3}} (\alpha_{2;1,0} + \beta_{2;1,0} i) - 18 \\
\sqrt{3} & \frac{1}{2\sqrt{3}} (\alpha_{2;1,1} + \beta_{2;1,1} i) - 6\sqrt{3} \end{array}
\right).
\end{equation}
\noindent
\textbf{Step 8.}
Conditions~(\ref{f4_25}) imply that
\begin{equation}
\label{f4_75}
\alpha_{2;1,0} = 36\sqrt{3} + 6,\quad \beta_{2;1,0} = \beta_{2;1,1} = 0.
\end{equation}
Condition~(\ref{f4_27}) is satisfied automatically, since $\mathcal{M}_1$ is the identity matrix.
The real parameter $\alpha_{2;1,1}$ is free. We choose
$\alpha_{2;1,1} = 36 + 2\sqrt{3}$ to get
\begin{equation}
\label{f4_77}
\mathcal{M}_2 =
\left(
\begin{array}{cc}
3 & \sqrt{3} \\
\sqrt{3} & 1 \end{array}
\right).
\end{equation}
\noindent
\textbf{Step 9.}
The matrix $\mathcal{M}_1$ has an eigenvalue $\lambda_1 = 1$ and the eigensubspace
$\mathcal{H}_1 = H$.
The matrix $\mathcal{M}_2$ has eigenvalues $\widetilde\lambda_1 = 0$ and $\widetilde\lambda_2 = 4$, with eigensubspaces
$$ \widetilde{\mathcal{H}}_1 = \mathop{\rm Lin}\nolimits \left\{ -\frac{1}{2} f_0 + \frac{\sqrt{3}}{2} f_1 \right\},\quad
\widetilde{\mathcal{H}}_2 = \mathop{\rm Lin}\nolimits \left\{ \frac{\sqrt{3}}{2} f_0 + \frac{1}{2} f_1 \right\}, $$
respectively.
\noindent
\textbf{Step 10.}
Observe that
$$ E_2 (\{ 0 \}) g_0 = P_{\widetilde{\mathcal H}_1} g_0 = \frac{1}{2} f_0 - \frac{\sqrt{3}}{2} f_1, $$
$$ E_2 (\{ 4 \}) g_0 = P_{\widetilde{\mathcal H}_2} g_0 = \frac{3}{2} f_0 + \frac{\sqrt{3}}{2} f_1. $$
By formula~(\ref{f4_28}) we obtain that the solution $\mu$ is $2$-atomic with jumps $1$, $3$ at points
$(1,0)$ and $(1,4)$, respectively.
\end{example}
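As a sanity check, the $2$-atomic measure just obtained reproduces all the prescribed moments $s_{(k_1,k_2)}$ of Example~\ref{e4_1}; a short script (our addition):

```python
# The 2-atomic solution of Example e4_1: mass 1 at (1, 0) and mass 3 at (1, 4).
atoms = [((1.0, 0.0), 1.0), ((1.0, 4.0), 3.0)]

def moment(atoms, k1, k2):
    """The moment s_(k1,k2) = integral of x^k1 y^k2 with respect to mu."""
    return sum(m * x**k1 * y**k2 for (x, y), m in atoms)

# Here s_(k1,k2) depends only on k2: the prescribed values are 4, 12, 48.
for k1 in range(3):
    for k2 in range(3):
        assert moment(atoms, k1, k2) == [4, 12, 48][k2]
```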
\begin{example}
\label{e4_2}
Consider the truncated moment problem~(\ref{f1_1}) with $n=2$,
$K = K_{1,1}$ (see~Example~\ref{e1_1}), $\mathcal{K} = K+K = K_{2,2}$, and the following moments:
$$ s_{(0,0)} = 3,\ s_{(0,1)} = 2,\ s_{(0,2)} = 2, $$
$$ s_{(1,0)} = 3,\ s_{(1,1)} = 2,\ s_{(1,2)} = 2, $$
$$ s_{(2,0)} = 5,\ s_{(2,1)} = 4,\ s_{(2,2)} = 4. $$
\noindent
\textbf{Step 1.} The same as in the previous example.
\noindent
\textbf{Step 2.}
The matrix $\Gamma = (s_{\mathbf{k}_j + \mathbf{k}_m})_{m,j=0}^3$ has the following form:
\begin{equation}
\label{f4_80}
\Gamma =
\left(
\begin{array}{cccc}
3 & 2 & 3 & 2 \\
2 & 2 & 2 & 2 \\
3 & 2 & 5 & 4 \\
2 & 2 & 4 & 4 \end{array}
\right).
\end{equation}
The non-negativity of $\Gamma$ holds.
Observe that
\begin{equation}
\label{f4_82}
\Omega_1 = \{ 0,1 \},\ \Omega_2 = \{ 0,2 \}.
\end{equation}
The matrices $\Gamma_1, \Gamma_2, \widehat\Gamma_1, \widehat\Gamma_2$ have the following forms:
\begin{equation}
\label{f4_90}
\Gamma_1 =
\left(
\begin{array}{cc}
3 & 2 \\
2 & 2\end{array}
\right),\quad
\Gamma_2 =
\left(
\begin{array}{cc}
3 & 3 \\
3 & 5\end{array}
\right),
\end{equation}
\begin{equation}
\label{f4_92}
\widehat\Gamma_1 =
\left(
\begin{array}{cc}
5 & 4 \\
4 & 4 \end{array}
\right),\quad
\widehat\Gamma_2 =
\left(
\begin{array}{cc}
2 & 2 \\
2 & 4\end{array}
\right).
\end{equation}
Therefore conditions~(\ref{f2_45}) hold.
\noindent
\textbf{Step 3.} The same as in the previous example.
\noindent
\textbf{Step 4.}
In our case we have $\| g_0 \|_H^2 = s_{\mathbf{k}_0 + \mathbf{k}_0} = s_\mathbf{0} = 3\not= 0$.
\noindent
\textbf{Step 5.}
Let us apply the Gram-Schmidt orthogonalization process, removing linearly dependent elements, to the sequence
$g_0, g_1, g_2, g_3$.
We shall use the property~(\ref{f4_7}).
We obtain that
\begin{equation}
\label{f4_101}
f_0 = \frac{1}{\sqrt{3}} g_0,\quad f_1 = \sqrt{ \frac{3}{2} } \left(
g_1 - \frac{2}{3} g_0 \right),\quad f_2 = \frac{1}{\sqrt{2}} (g_2 - g_0),
\end{equation}
and
\begin{equation}
\label{f4_103}
g_3 = -g_0 + g_1 + g_2.
\end{equation}
Therefore
$\mathfrak{F} := \{ f_0, f_1, f_2 \}$ is an orthonormal basis in $H$, and $\rho' = 2$.
\noindent
\textbf{Step 6.}
Notice that
\begin{equation}
\label{f4_105}
\Omega_1' = \{ 2,3 \},\ \Omega_2' = \{ 1,3 \}.
\end{equation}
Define $\widetilde M_1$ on $g_2$ in the following way:
\begin{equation}
\label{f4_107}
\widetilde M_1 g_2 = \sum_{j=0}^{2} (\alpha_{1;2,j} + \beta_{1;2,j} i) f_j,\qquad \alpha_{1;2,j},\beta_{1;2,j}\in\mathbb{R}.
\end{equation}
We define $\widetilde M_2$ on $g_1$ by the following formula:
\begin{equation}
\label{f4_109}
\widetilde M_2 g_1 = \sum_{j=0}^{2} (\alpha_{2;1,j} + \beta_{2;1,j} i) f_j,\qquad \alpha_{2;1,j},\beta_{2;1,j}\in\mathbb{R}.
\end{equation}
Since $g_3 = -g_0 + g_1 + g_2$, the procedure of the extension is finished.
Notice that we have $12$ free real parameters at this moment.
\noindent
\textbf{Step 7.}
Observe that
$$ g_0 = \sqrt{3} f_0,\quad g_1 = \frac{2}{\sqrt{3}} f_0 + \sqrt{ \frac{2}{3} } f_1,\quad g_2 = \sqrt{3} f_0 + \sqrt{2} f_2, $$
\begin{equation}
\label{f4_111}
g_3 = \frac{2}{\sqrt{3}} f_0 + \sqrt{ \frac{2}{3} } f_1 + \sqrt{2} f_2.
\end{equation}
By~(\ref{f4_101}),(\ref{f4_111}),(\ref{f4_107}),(\ref{f4_109}),(\ref{f4_55}) and (\ref{f4_57}) we get
$$ \widetilde M_1 f_0 = f_0 + \sqrt{ \frac{2}{3} } f_2,\quad \widetilde M_1 f_1 = f_1 + \frac{1}{ \sqrt{3} } f_2, $$
$$ \widetilde M_1 f_2 =
\frac{1}{ \sqrt{2} }
\left(
\alpha_{1;2,0} + \beta_{1;2,0} i - \sqrt{3}
\right) f_0
+
\frac{1}{ \sqrt{2} }
\left(
\alpha_{1;2,1} + \beta_{1;2,1} i
\right)
f_1
+ $$
$$ +
\frac{1}{ \sqrt{2} }
\left(
\alpha_{1;2,2} + \beta_{1;2,2} i - \sqrt{2}
\right) f_2; $$
$$ \widetilde M_2 f_0 = \frac{2}{3} f_0 + \frac{ \sqrt{2} }{3} f_1, $$
$$ \widetilde M_2 f_1 =
\left(
\sqrt{ \frac{3}{2} } (\alpha_{2;1,0} + \beta_{2;1,0} i) - \frac{ 2\sqrt{2} }{3}
\right) f_0
+ $$
$$ + \left(
\sqrt{ \frac{3}{2} } (\alpha_{2;1,1} + \beta_{2;1,1} i) - \frac{ 2 }{3}
\right) f_1
+
\sqrt{ \frac{3}{2} }
\left(
\alpha_{2;1,2} + \beta_{2;1,2} i
\right)
f_2, $$
$$ \widetilde M_2 f_2 = f_2. $$
Then
\begin{equation}
\label{f4_114}
\mathcal{M}_1 =
\left(
\begin{array}{ccc}
1 & 0 & \frac{1}{ \sqrt{2} }
\left(
\alpha_{1;2,0} + \beta_{1;2,0} i - \sqrt{3}
\right) \\
0 & 1 & \frac{1}{ \sqrt{2} }
\left(
\alpha_{1;2,1} + \beta_{1;2,1} i
\right) \\
\sqrt{\frac{2}{3}} & \frac{1}{\sqrt{3}} & \frac{1}{ \sqrt{2} }
\left(
\alpha_{1;2,2} + \beta_{1;2,2} i - \sqrt{2}
\right) \end{array}
\right),
\end{equation}
\begin{equation}
\label{f4_116}
\mathcal{M}_2 =
\left(
\begin{array}{ccc}
\frac{2}{3} &
\sqrt{ \frac{3}{2} } (\alpha_{2;1,0} + \beta_{2;1,0} i) - \frac{ 2\sqrt{2} }{3} & 0 \\
\frac{\sqrt{2}}{3} &
\sqrt{ \frac{3}{2} } (\alpha_{2;1,1} + \beta_{2;1,1} i) - \frac{ 2 }{3} & 0\\
0 & \sqrt{ \frac{3}{2} }
\left(
\alpha_{2;1,2} + \beta_{2;1,2} i
\right) & 1\end{array}\right).
\end{equation}
\noindent
\textbf{Step 8.}
Conditions~(\ref{f4_25}) imply that
$$ \beta_{1;2,j} = \beta_{2;1,j} = 0,\qquad j=0,1,2; $$
\begin{equation}
\label{f4_118}
\alpha_{1;2,0} = \frac{5}{ \sqrt{3} },\ \alpha_{1;2,1} = \sqrt{ \frac{2}{3} },\
\alpha_{2;1,0} = \frac{2}{ \sqrt{3} },\ \alpha_{2;1,2} = 0.
\end{equation}
There remain two free real parameters: $\alpha_{1;2,2}$ and $\alpha_{2;1,1}$.
Matrices $\mathcal{M}_1$, $\mathcal{M}_2$ take the following form:
\begin{equation}
\label{f4_122}
\mathcal{M}_1 =
\left(
\begin{array}{ccc}
1 & 0 & \sqrt{\frac{2}{3}} \\
0 & 1 & \frac{1}{ \sqrt{3} } \\
\sqrt{\frac{2}{3}} & \frac{1}{\sqrt{3}} &
\frac{1}{ \sqrt{2} }
\alpha_{1;2,2} - 1 \end{array}
\right),
\end{equation}
\begin{equation}
\label{f4_124}
\mathcal{M}_2 =
\left(
\begin{array}{ccc}
\frac{2}{3} &
\frac{ \sqrt{2} }{3} & 0 \\
\frac{\sqrt{2}}{3} &
\sqrt{ \frac{3}{2} } \alpha_{2;1,1} - \frac{ 2 }{3} & 0\\
0 & 0 & 1\end{array}\right).
\end{equation}
Condition~(\ref{f4_27}) will be satisfied if
$\alpha_{2;1,1} = \sqrt{ \frac{2}{3} }$.
One free real parameter, $\alpha_{1;2,2}$, remains. We set
$\alpha_{1;2,2} = 2\sqrt{2}$.
Therefore
\begin{equation}
\label{f4_126}
\mathcal{M}_1 =
\left(
\begin{array}{ccc}
1 & 0 & \sqrt{\frac{2}{3}} \\
0 & 1 & \frac{1}{ \sqrt{3} } \\
\sqrt{\frac{2}{3}} & \frac{1}{\sqrt{3}} & 1 \end{array}
\right),\quad
\mathcal{M}_2 =
\left(
\begin{array}{ccc}
\frac{2}{3} &
\frac{ \sqrt{2} }{3} & 0 \\
\frac{\sqrt{2}}{3} &
\frac{ 1 }{3} & 0\\
0 & 0 & 1\end{array}\right).
\end{equation}
\noindent
\textbf{Step 9.}
The matrix $\mathcal{M}_1$ has eigenvalues $\lambda_0 = 0$, $\lambda_1 = 1$ and $\lambda_2 = 2$, with eigensubspaces
$$ \mathcal{H}_0 = \mathop{\rm Lin}\nolimits \left\{ -\frac{1}{\sqrt{3}} f_0 - \frac{1}{ \sqrt{6} } f_1
+ \frac{1}{ \sqrt{2} } f_2 \right\},\quad
\mathcal{H}_1 = \mathop{\rm Lin}\nolimits \left\{ \frac{1}{\sqrt{3}} f_0 - \sqrt{ \frac{2}{3} } f_1 \right\}, $$
$$ \mathcal{H}_2 = \mathop{\rm Lin}\nolimits \left\{ \frac{1}{\sqrt{3}} f_0 + \frac{1}{ \sqrt{6} } f_1
+ \frac{1}{ \sqrt{2} } f_2 \right\}, $$
respectively.
The matrix $\mathcal{M}_2$ has eigenvalues $\widetilde\lambda_0 = 0$ and $\widetilde\lambda_1 = 1$, with eigensubspaces
$$ \widetilde{\mathcal{H}}_0 = \mathop{\rm Lin}\nolimits \left\{ -\frac{1}{ \sqrt{3} } f_0 + \sqrt{ \frac{2}{3} } f_1 \right\}, $$
$$ \widetilde{\mathcal{H}}_1 =
\mathop{\rm Lin}\nolimits \left\{ \frac{1}{\sqrt{2}} f_0 + \frac{1}{2} f_1 + \frac{1}{2} f_2;\
-\frac{1}{ \sqrt{6} } f_0 - \frac{1}{ 2\sqrt{3} } f_1
+ \frac{ \sqrt{3} }{2} f_2 \right\}, $$
respectively.
\noindent
\textbf{Step 10.}
Observe that
$$ E_1 (\{ 0 \}) g_0 = P_{\mathcal H_0} g_0 = \frac{1}{ \sqrt{3} } f_0 + \frac{1}{ \sqrt{6} } f_1 - \frac{1}{ \sqrt{2} } f_2, $$
$$ E_1 (\{ 1 \}) g_0 = P_{\mathcal H_1} g_0 = \frac{1}{ \sqrt{3} } f_0 - \sqrt{ \frac{2}{3} } f_1, $$
$$ E_1 (\{ 2 \}) g_0 = P_{\mathcal H_2} g_0 = \frac{1}{ \sqrt{3} } f_0 + \frac{1}{ \sqrt{6} } f_1 + \frac{1}{ \sqrt{2} } f_2; $$
$$ E_2 (\{ 0 \}) g_0 = P_{\widetilde{\mathcal H}_0} g_0 = \frac{1}{ \sqrt{3} } f_0 - \sqrt{ \frac{2}{3} } f_1, $$
$$ E_2 (\{ 1 \}) g_0 = P_{\widetilde{\mathcal H}_1} g_0 = \frac{ 2\sqrt{3} }{3} f_0 + \frac{ \sqrt{6} }{3} f_1. $$
By formula~(\ref{f4_28}) we obtain that the solution $\mu$ is $3$-atomic with unit jumps at points
$(0,1)$, $(1,0)$ and $(2,1)$.
\end{example}
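The outcome of Steps 8--10 in Example~\ref{e4_2} can likewise be confirmed numerically: the matrices of~(\ref{f4_126}) are self-adjoint and commute, have the stated spectra, and the resulting measure has unit masses at the three atoms. A sketch (our addition):

```python
import numpy as np

r = np.sqrt
# M_1, M_2 of (f4_126) in the orthonormal basis {f_0, f_1, f_2}; g_0 = sqrt(3) f_0.
M1 = np.array([[1, 0, r(2 / 3)], [0, 1, 1 / r(3)], [r(2 / 3), 1 / r(3), 1]])
M2 = np.array([[2 / 3, r(2) / 3, 0], [r(2) / 3, 1 / 3, 0], [0, 0, 1]])
g0 = np.array([r(3), 0, 0])

assert np.allclose(M1, M1.T) and np.allclose(M2, M2.T)  # conditions (f4_25)
assert np.allclose(M1 @ M2, M2 @ M1)                    # condition (f4_27)
assert np.allclose(np.linalg.eigvalsh(M1), [0, 1, 2])   # spectrum of M_1
assert np.allclose(np.linalg.eigvalsh(M2), [0, 1, 1])   # spectrum of M_2

w1, V1 = np.linalg.eigh(M1)
w2, V2 = np.linalg.eigh(M2)

def proj(V, w, lam, v):
    """E({lam}) v: project v on the eigenspace of the eigenvalue lam."""
    P = V[:, np.isclose(w, lam)]
    return P @ (P.T @ v)

# Formula (f4_28): unit masses at the three atoms (0,1), (1,0) and (2,1).
for x, y in [(0, 1), (1, 0), (2, 1)]:
    assert abs(proj(V2, w2, y, g0) @ proj(V1, w1, x, g0) - 1) < 1e-8
```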
\begin{remark}
\label{r4_1}
Observe that in Algorithm~1 we restricted ourselves to considering possible extensions $\widetilde M_j$ of $M_j$ inside the original Hilbert
space $H$. Instead of $H$, one can consider any finite-dimensional Hilbert space $\widetilde H\supseteq H$ and construct
possible extensions $\widetilde M_j$ of $M_j$ in $\widetilde H$.
\end{remark}
\noindent
\textbf{Acknowledgements.} The author is grateful to Prof. Vasilescu for a useful discussion on the moment problems.
\section{Introduction}
With the advancement of high throughput technologies, it is now common to encounter high dimensional data with the number of parameters ($d$) often far exceeding the sample size ($n$). In this high dimensional setting it is often of interest to investigate relationships among thousands of variables.
This paper is motivated by the recent surge of interest in understanding the effects of the microbiome on our external and internal environment and also on public health. For example, it is often of interest to understand the relationships among various bacterial populations and how such relationships may affect health outcomes. In some cases it may also be of interest to identify microbial biomarkers which can classify subjects into two different populations using microbiome data. A detailed review of the recent literature on this topic is provided by Clemente et al. (2012).
In order to address such scientific questions, one needs to first estimate the covariance matrix $(\boldsymbol\Sigma)$ or its inverse, the precision matrix $(\boldsymbol\Omega=\boldsymbol\Sigma^{-1}).$ Estimation of $\boldsymbol\Sigma$ and $\boldsymbol\Omega$ when the dimension exceeds the sample size, i.e. $n \le d$, has been discussed extensively in the literature. The existing literature can be broadly classified into two categories. The first approach involves estimation of the precision matrix by exploiting its natural sparsity in comparison to the covariance matrix [cf. Friedman, Hastie and Tibshirani, 2007, Cai, Liu and Luo, 2011, and Rothman, Bickel, Levina and Zhu, 2008]. A limitation of this approach is that it does not apply to low rank matrices $\boldsymbol\Sigma$, since the precision matrix does not exist in this case. The second popular approach is to estimate $\boldsymbol\Sigma$ directly by assuming that $\boldsymbol\Sigma$ is itself sparse. One of several methods for this purpose is to threshold each element of the sample covariance matrix [cf. Bickel and Levina, 2008, and Rothman, Levina and Zhu, 2009].
All papers mentioned above assume the availability of independent and identically distributed (i.i.d) copies of the vector ${\bf X}=(X_1,X_2,...,X_d)^T$ whose distribution is Gaussian or more generally sub-Gaussian with $\boldsymbol\mu$ and $\boldsymbol\Sigma$ as the $d$ dimensional mean vector and covariance matrix respectively. Note that a real valued random variable $X_1$ is said to be sub-Gaussian if there exists a $b>0$ such that for every $t\in {\mathbb R},$ one has $Ee^{tX_1}\le e^{b^2t^2/2}.$
In contrast to typical high dimensional data, not all variables (i.e. microbes) are observed in a microbial expression sample. Thus if ${\bf X}$ represents a $d$ dimensional vector of abundances of $d$ taxa in a specimen obtained from an ecosystem, then not all components of ${\bf X}$ may be observed. We refer to this missingness as structural zeros; it is due to the underlying biology and not due to error in measurement or values below the minimum detection level. For example, it is known that the bacterial genus {\it Bacteroides} is prevalent in the human gut when the associated diet is a high protein/fat diet, whereas it may be completely absent otherwise, i.e. for a carbohydrate rich diet. The total abundance of such bacteria is coded as 0 counts in the observational vector ${\bf X}$.
The missing structure required to model structural zeros is more general than the typical notions of missingness in the literature. More precisely, under the classical notions of missingness, such as missing completely at random (MCAR) or missing at random (MAR), it is assumed that in place of ${\bf X}$ we observe a surrogate vector ${\bf U}={\bf X}\oplus{\bf W},$ where $\oplus$ represents a component-wise product and ${\bf W}$ is a $d$-dimensional vector of independent Bernoulli random variables. In effect, not all components of ${\bf X}$ are observed in ${\bf U}$. For example, ${\bf U}=(0,0,X_3,\ldots,X_d)^{T}$ corresponds to the case where the first two components of ${\bf X}=(X_1,\ldots,X_d)^T$ are not observed in ${\bf U}$, with ${\bf W}=(0,0,1,\ldots,1)^T.$ In this example, although $X_1$ and $X_2$ are absent in ${\bf U},$ they still influence the distribution of the remaining components $X_3,\ldots,X_d$ through the underlying dependence structure of $\boldsymbol\Sigma$ and are only hidden by the corresponding multiplicative Bernoulli noise vector ${\bf W}.$ In contrast, in the case of structural zeros the observed vector itself is ${\bf X}=(0,0,X_3,\ldots,X_d),$ i.e., the first two components are truly absent from the observation, and thus the missing components should not influence the distribution of the remaining components.
In this paper we define a general framework which allows for structural zeros in the model, and we discuss consistent methods of estimating sparse high dimensional covariance and precision matrices under this setup. We establish consistency of the proposed estimation methodology and support it empirically with a simulation study. We also apply our methodology to analyse the global human gut microbiome data of Yatsunenko et al. (2012). Estimation of covariance and precision matrices in the traditional missing values setting has also been discussed in the literature [cf. Loh and Wainwright, 2012, and Lounici, 2012]. As shall become apparent in the following, our model allows for a more general notion of missingness while assuming weaker conditions in comparison to typical notions of missingness.
\section{Notations and Framework} \label{notations}
Throughout the paper, for any $l\times m$ matrix ${\bf A}=[a_{ij}]$ define the $\ell_0,$ $\ell_1,$ $Sup,$ $Spectral$ and $Frobenius$ norms as $\|{\bf A}\|_0=\textnormal{Card}\{(i,j)\,:\, a_{ij}\ne 0\},$ $\|{\bf A}\|_1=\sum_{i,j}|a_{ij}|,$ $\|{\bf A}\|_{\infty}=\max_{i,j}|a_{ij}|,$ $\|{\bf A}\|_2=\sup_{\|x\|_2\le 1}\|{\bf A}x\|_2$ and $\|{\bf A}\|_{F}=\sqrt{\sum_{i,j}a_{ij}^2}$, respectively. Also ${\bf A}\succ 0$ indicates that the matrix ${\bf A}$ is positive definite. We use $c_0,$ $c_1$ and $c_2$ as generic constants which may change according to the context. For any set of indices $S,$ its cardinality is denoted by $|S|.$ For a subset $A\subseteq \{1,2,\cdots,d\}$, ${\bf b}_A$ denotes the vector of components of ${\bf b}$ with indices in $A.$ Also a $d\times d$ matrix $\boldsymbol\Sigma$ is partitioned as
\begin{eqnarray}\label{sigpar}
\boldsymbol\Sigma = \left(\begin{matrix} \boldsymbol\Sigma_{AA} & \boldsymbol\Sigma_{AA^c}\\ \boldsymbol\Sigma _{A^cA} & \boldsymbol\Sigma_{A^cA^c}
\end{matrix}\right),\qquad \mbox{where $A^c$ denotes the complement set of $A$.}
\end{eqnarray}
We begin by describing a framework that characterizes structural zeros. As briefly stated in the Introduction, these structural zeros represent components that are biologically absent in the specimen. Hence, intuitively, the framework should allow for the distribution of the specimen to be completely determined by only the observed components. Restated statistically, for each $1\le i\le n$ the distribution of an observation should be characterized conditionally on its missing structure. Hence we first define the missing structure.
Let the sample space ${\cal S}$ of possible configurations of missing components in a given sample be as follows.
\begin{eqnarray}\label{label}
{\cal S}=\begin{cases}
(1,\ldots, 1), \\
(0,1,\ldots, 1), (1,0,\ldots, 1), \ldots, (1,\ldots, 1,0) \\
(0,0,1,...1), (0,1,0..,1), \ldots, (1,\ldots, 0,0)\\
.\\
.\\
(0,0,\ldots, 1), (0,0,\ldots, 1,0), \ldots, (1,0,...0)
\end{cases}
\end{eqnarray}
Here $0,1$ correspond to the cases where a component is unobserved or observed in the sample respectively. We shall represent each of the above $2^d -1$ events of the sample space by Configuration $(j)$, $j = 1, 2, \ldots, 2^d -1$, in the order written in (\ref{label}). For example, Configuration ($1$) is the case where all components are observed and Configuration $(2^d-1)$ corresponds to the configuration where only the first component is observed. For each sample $i$, $1\le i\le n,$ we assume that the missing structure is generated by independent random variables ${\bf M}_i,$ $1\le i \le n,$ with sample space described in (\ref{label}).
In many applications, it may be unreasonable to assume that the missingness is generated by identically distributed random variables. The distribution function may be influenced by factors or covariates such as the geographical location, age, race and gender of the subject. To allow for this flexibility, let ${\bf z}_i,$ $1\le i\le n$ be $q$-dimensional vectors of non-random covariates which can possibly influence the distribution of the missingness. More precisely, define the distribution of the random variables ${\bf M}_i,$ $1\le i\le n$ by,
\begin{eqnarray}\label{delta}
P\Big({\bf M}_i\,\, \textnormal {is in Configuration}\,\, (j)\,\,\Big) =\delta_{(j)}({\bf z}_i),\quad 0\le \delta_{(j)}({\bf z}_i)\le 1,\quad \,1\le j\le 2^d-1.
\end{eqnarray}
This feature of allowing the distribution to be influenced by factors or covariates while preserving independence is reminiscent of the MAR structure of missingness. We now proceed to define the conditional distribution of the observed components of a specimen.
Let $\boldsymbol\mu=(\mu^{1},...,\mu^{d})^{T},$ $\mu^{k}\in{\mathbb R}$ and $\boldsymbol \Sigma=[\sigma_{ij}]_{d\times d}$ be a d-dimensional vector and symmetric matrix respectively. For a subject $i,$ with missing configuration given by the random variable ${\bf M}_i$, we denote the observed components by the index set
\begin{eqnarray}
A_i=\{j,\, M_{ij}=1\}.
\end{eqnarray}
Note that the index set $A_i$ is a random set which is determined by the r.v. ${\bf M}_i.$ Now assume that, conditioned on ${\bf M}_i,$ the components of ${\bf X}_i$ with indices in the index set $A_i$ jointly follow a Gaussian distribution with mean and covariance given by the corresponding sub-vector of $\boldsymbol\mu$ and sub-matrix of $\boldsymbol \Sigma $ respectively, i.e., for any ${\bf x}\in {\mathbb R}^{d},$
\begin{eqnarray}\label{yid}
P\Big({\bf X}_{A_i} \le {\bf x}_{A_i}\Big| {\bf M}_i\Big)=\Phi_{A_i}({\bf x}_{A_i}),
\end{eqnarray}
where $\Phi_{A_i}$ represents the Gaussian distribution function with mean $\boldsymbol\mu_{A_i}$ and covariance matrix $\boldsymbol\Sigma_{A_iA_i}.$ For example, let ${\bf M}_i =(1,1,0,...,0)$, then the observed vector is ${\bf X}_i=(X_{i1},X_{i2},0...,0)$ with the conditional distribution of the observed components as $P\Big(X_{i1}\le x_{i1}, X_{i2}\le x_{i2}\Big| {\bf M_i}\Big)=\Phi(x_{i1},x_{i2}).$
For $1\le l, m\le d$ let
\begin{eqnarray}\label{nlm}
n(l)= \{ i\,\,:\,\, l\in A_i,\, 1\le i\le n \}, \quad {\rm and}\quad n(l,m)= \{ i\,\,:\, l,m \in A_i\,,1\le i\le n \}\nonumber
\end{eqnarray}
be the set of subjects for which the $l^{th}$ component is observed and the set of subjects for which both the $l^{th}$ and $m^{th}$ components are observed, respectively. Note that these are random sets, with cardinalities $|n(l)|$ and $|n(l,m)|$.
For a given subject $i = 1, 2, \ldots, n$, with covariate vector ${\bf z}_i$, and for $1\le l,m\le d,$ define
\begin{eqnarray}
C_{{\bf z}_i}(l)&=&\big\{1\le j\le 2^d-1,\,\,\textnormal{component $l$ is present in Configuration $(j)$}\nonumber\\
&& \hskip 1.9in \textnormal {with covariate ${\bf z}_i$}\big\},\nonumber\\
C_{{\bf z}_i}(l,m)&=&\{1\le j\le 2^d-1,\,\,\textnormal{components $l$ and $m$ are present in Configuration $(j)$}\nonumber\\
&& \hskip 1.75in \textnormal {with covariate ${\bf z}_i$}\big\}
\end{eqnarray}
In the sequel we make the following additional assumption over the missing structure.
\begin{description}
\item[{\bf (A1)}] There exists a constant $\delta_{\min}>0$ such that for any $1\le l,\,m\le d,$
\begin{eqnarray}
\textnormal {(i)}\,\,\frac{1}{n}\sum_{i=1}^{n}\sum_{j\in C_{{\bf z}_i} (l)}\delta_{(j)}({\bf z}_i)=\delta{(l)}>\delta_{\min}\quad\textnormal{(ii)}\,\,\frac{1}{n}\sum_{i=1}^{n}\sum_{j\in C_{{\bf z}_i}(l,m)}\delta_{(j)}({\bf z}_i)=\delta(l,m)>\delta_{\min}.\nonumber
\end{eqnarray}
\end{description}
Note that {\bf (A1)} is a mild assumption on the missing structure. When there are no covariates, (i) reduces to $\sum_{j \in C(l)}\delta_{(j)}>\delta_{\min},$ and (ii) reduces to $\sum_{j \in C(l,m)}\delta_{(j)}>\delta_{\min}$. Thus in this case, Assumption {\bf (A1)} requires that each component is present in an observational vector with a nonzero probability and that every pair of components are present in each observational vector with a nonzero probability.
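To make the missing-data mechanism concrete, the following numpy sketch simulates observations under the structural-zeros model. This is our illustration only: the configuration probabilities $\delta_{(j)}$ are induced by independent per-component observation probabilities (one admissible choice, without covariates), and patterns outside the sample space ${\cal S}$, i.e. with no observed component, are redrawn.

```python
import numpy as np

def sample_structural_zeros(n, mu, Sigma, p_obs, seed=None):
    """Draw n observations: component l of sample i is observed with
    probability p_obs[l]; given the pattern M_i, the observed block is
    N(mu_A, Sigma_AA) and the unobserved components are exactly zero."""
    rng = np.random.default_rng(seed)
    d = len(mu)
    X = np.zeros((n, d))
    M = np.zeros((n, d), dtype=bool)
    for i in range(n):
        A = rng.random(d) < p_obs
        while not A.any():       # the sample space S excludes the all-zero pattern
            A = rng.random(d) < p_obs
        X[i, A] = rng.multivariate_normal(mu[A], Sigma[np.ix_(A, A)])
        M[i] = A
    return X, M
```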
\section{Estimation of the Covariance and Precision Matrices}\label{est}
In this section we derive the theoretical properties of two methodologies: a generalised thresholding procedure to estimate the covariance matrix $\boldsymbol\Sigma$, and an $\ell_1$ minimisation approach to estimate the precision matrix $\boldsymbol\Omega.$ We shall derive these properties under the structural zeros setup while allowing the dimension of the observed vector to increase exponentially with the sample size. The consistency results to follow later in this section hold for the following class of approximately sparse matrices.
\begin{description}
\item[{\bf (A2)}] We assume that the covariance and precision matrices belong to the following classes of matrices respectively:
\begin{eqnarray}
\textnormal{(i)}\,\,\,{\cal M}(q,s_o(d),K)&=&\Big\{\boldsymbol\Sigma\,:\, \sigma_{ii}\le K,\,\,\max_{1\le i\le d}\sum_{j=1}^{d}|\sigma_{ij}|^{q}\le s_0(d)\Big\}\,\,\,\textnormal{and}\nonumber\\
\textnormal{(ii)}\,\,\,{\cal U}(q,s_o(d),K)&=&\Big\{\boldsymbol\Omega\,:\,\boldsymbol\Omega\succ 0,\,\,\|\boldsymbol\Omega\|_1\le K,\,\,\max_{1\le i\le d}\sum_{j=1}^{d}|\omega_{ij}|^{q}\le s_0(d)\Big\}.\nonumber
\end{eqnarray}
Here $0\le q<1.$
\end{description}
The quantity $s_0(d)$ is allowed to depend on $d$ and thus is not an explicit restriction on sparsity. Two examples of matrices that satisfy the above restrictions are as follows: first, a $p$-diagonal matrix satisfies this condition for any $0\le q<1$ with $s_0(d)=K^qp$; second, an $AR(1)$ covariance matrix with $\sigma_{ij}=\rho^{|i-j|}$ satisfies the restriction with $s_0(d)=c_0$ for some constant $c_0<\infty.$
To describe our methodology we need the following definitions. Let
\begin{eqnarray}\label{meanest}
\hat \mu^{l}= \frac{1}{|n(l)|}\sum_{i\in n(l)}X_{il},\qquad\,\, 1\le l\le d,
\end{eqnarray}
and define the re-normalized sample covariance matrix $\hat{\boldsymbol\Sigma}$ as follows:
\begin{eqnarray}\label{hsig}
\hat\sigma_{lm}=\sum_{i\in n(l,m)}(X_{il}-\hat\mu^{l})(X_{im}-\hat\mu^{m})\Big/|n(l,m)|\quad \textnormal{and}\quad \hat{\boldsymbol\Sigma}=\big[\hat\sigma_{lm}\big]_{l,m=1,..,d}.
\end{eqnarray}
The matrix $\hat{\boldsymbol\Sigma}$ is the initial estimator from which consistent estimators of the covariance matrix $\boldsymbol\Sigma$ and the precision matrix $\boldsymbol\Omega$ are obtained. The following is a key result needed for deriving the convergence rates of these estimators.
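A minimal sketch of the estimator (\ref{meanest})--(\ref{hsig}) follows, written in Python with NumPy (the paper's computations are in R; the function name and the NaN encoding of structural zeros are our illustrative choices).

```python
import numpy as np

def pairwise_cov(X):
    """Re-normalized sample covariance: X is n x d with NaN marking missing entries."""
    n, d = X.shape
    obs = ~np.isnan(X)                  # observation pattern, the role of M_i
    mu = np.nanmean(X, axis=0)          # hat mu^l, averaging over n(l)
    R = np.where(obs, X - mu, 0.0)      # centred values, zero where missing
    counts = obs.T.astype(float) @ obs  # |n(l, m)|, pairwise observation counts
    return (R.T @ R) / counts           # hat sigma_lm

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 4))
X[rng.random(X.shape) < 0.2] = np.nan   # drop roughly 20% of the entries
S_hat = pairwise_cov(X)
```

Each entry $\hat\sigma_{lm}$ averages only over the observations where both components $l$ and $m$ are present, exactly as in (\ref{hsig}).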
\begin{lem}\label{ecrossl}
Let $\hat{\boldsymbol\Sigma}$ be as defined in (\ref{hsig}) and assume that $\sigma_{ii}\le K,$ $1\le i\le d$ for some constant $K<\infty$ along with condition {\bf (A1)}. Then with probability at least $1-c_1\exp(-c_2\log d),$
\begin{eqnarray}\label{maxb}
\big\|\hat{\boldsymbol\Sigma}-\boldsymbol\Sigma\big\|_{\infty}\le c_0\sqrt{\frac{\log d}{n}},
\end{eqnarray}
for some constant $c_0<\infty.$
\end{lem}
{To appreciate this fairly innocuous result, note that $\hat\sigma_{lm},$ $1\le l,m\le d,$ are defined through ${\bf X}_i,$ $1\le i\le n,$ whose distribution is in turn defined conditionally on the missing structure ${\bf M}_i.$ However, Lemma \ref{ecrossl} provides an unconditional probability bound on the desired random quantity under only the mild assumption {\bf (A1)} on the missing structure. The key to the proof of this result is the observation that $|n(l,m)|,$ $1\le l,m\le d,$ is a sum of independent random variables, which allows the application of Hoeffding's inequality in combination with conditional expectation arguments. The details of the proof are provided in the appendix. We now proceed with the estimation of $\boldsymbol\Sigma$ and $\boldsymbol\Omega$.}
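In an implementation, all of the counts $|n(l)|$ and $|n(l,m)|$ appearing above can be read off at once from the missingness indicators, since $|n(l,m)|=\sum_i M_{il}M_{im}$; a small illustrative sketch (ours, not from the paper):

```python
import numpy as np

# Rows are observations, columns are components; 1 = observed, 0 = missing.
M = np.array([[1, 1, 0],
              [1, 0, 1],
              [1, 1, 1]])

# counts[l, m] = |n(l, m)| = sum_i M_il * M_im; the diagonal gives |n(l)|.
counts = M.T @ M
```

Here component 1 is jointly observed with component 2 in only one of the three rows, so $|n(2,3)|=1$ (0-indexed `counts[1, 2]`).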
\subsection{{\bf Covariance Matrix}}\label{cov}
Let $s_{\lambda}(x)$ be a generalized thresholding operator as defined by Rothman, Levina and Zhu (2009). We restate this definition for the convenience of the reader. A function $s_{\lambda}\,:\,{\mathbb R}\to {\mathbb R}$ satisfying
\begin{eqnarray}\label{sla}
\textnormal(i)\,\,|s_{\lambda}(x)|\le |x|,\quad \textnormal(ii)\,\,s_{\lambda}(x)=0\,\, \textnormal{for}\,\,|x|\le \lambda\,\,\textnormal{and}\,\, \textnormal(iii)\,\,|s_{\lambda}(x)-x|\le \lambda
\end{eqnarray}
is said to be a generalised thresholding operator. In view of this definition, the covariance matrix $\boldsymbol\Sigma$ can be estimated by,
\begin{eqnarray}
s_{\lambda}(\hat{\boldsymbol\Sigma})=\big[s_{\lambda}(\hat\sigma_{ij})\big]_{i,j=1,...,d}.\nonumber
\end{eqnarray}
The two most common examples of the thresholding operators are the hard and soft thresholding operators defined as,
\begin{eqnarray}
s_{\lambda}^{H}(x)=x{\bf 1}(|x|>\lambda),\qquad s_{\lambda}^{s}(x)=\textnormal{sign}(x)(|x|-\lambda)_{+},
\end{eqnarray}
respectively. The soft thresholding operator can alternatively be defined as,
\begin{eqnarray}
s_{\lambda}^{s}(x)=\textnormal{arg\,min}_{\theta} \Big\{\frac{1}{2}(\theta-x)^2+\lambda|\theta|\Big\},\nonumber
\end{eqnarray}
and has been studied by various authors, beginning with Donoho et al. (1995) and Tibshirani (1996). The hard thresholding operator was first investigated in this context by Bickel and Levina (2008) and by several authors since then. Other examples of thresholding operators include the SCAD of Fan and Li (2001) and the adaptive Lasso of Zou (2006).
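The hard and soft operators above can be sketched directly; the following illustration (ours) also checks properties (i)--(iii) of (\ref{sla}) numerically.

```python
import numpy as np

def hard_threshold(x, lam):
    # s^H_lambda(x) = x * 1(|x| > lambda)
    return x * (np.abs(x) > lam)

def soft_threshold(x, lam):
    # s^s_lambda(x) = sign(x) * (|x| - lambda)_+
    return np.sign(x) * np.maximum(np.abs(x) - lam, 0.0)

x = np.array([-2.0, -0.3, 0.1, 0.8, 1.5])
h = hard_threshold(x, lam=0.5)
s = soft_threshold(x, lam=0.5)
```

Applied elementwise to $\hat{\boldsymbol\Sigma}$, either operator yields $s_\lambda(\hat{\boldsymbol\Sigma})$; note that the soft operator shrinks the surviving entries by $\lambda$ while the hard operator leaves them unchanged.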
The following result provides the consistency of the proposed estimator.
\begin{thm}\label{covr}
{Suppose conditions (\ref{yid}), {\bf(A1)} and {\bf (A2)}(i) hold, and assume that $s_{\lambda}$ satisfies condition (\ref{sla}).} If $\lambda=K'\sqrt{\log d}/\sqrt{n}=o(1)$ for a sufficiently large constant $K'$, then, uniformly on ${\cal M}(q, s_0(d),K)$,
\begin{eqnarray}
\big\|s_{\lambda}(\hat{\boldsymbol\Sigma})-\boldsymbol\Sigma\big\|_2=O\bigg(s_0(d)\Big(\sqrt{\frac{\log d}{n}}\Big)^{1-q}\bigg),
\end{eqnarray}
with probability at least $1-c_1\exp(-c_2\log d).$
\end{thm}
{In the standard i.i.d Gaussian setting, Rothman, Levina and Zhu (2009) introduced this generalized thresholding methodology by thresholding the usual sample covariance matrix.}
\subsection{{\bf Precision Matrix}}\label{preci}
In some problems it is of interest to estimate the precision matrix directly, for example to explore the underlying conditional independence structure via graphical models. In addition, the precision matrix under a Gaussian setup is naturally sparser than the corresponding covariance matrix. Here we describe a methodology to estimate the precision matrix under our structural zeros setup.
Let $\hat{\boldsymbol\Omega}_1$ be the solution of the following convex program,
\begin{eqnarray}\label{clime}
\min{\|\boldsymbol\Omega\|_{1}}\quad \textnormal{subject to}\quad \big\|\hat{\boldsymbol\Sigma}\boldsymbol\Omega-{\bf I}\big\|_{\infty}\le \lambda_{\boldsymbol\Omega},\quad \boldsymbol\Omega\in{\mathbb R}^{d\times d},
\end{eqnarray}
with a suitable choice of $\lambda_{\boldsymbol\Omega}>0.$ Here ${\bf I}$ represents the identity matrix and $\hat{\boldsymbol\Sigma}$ is as defined in (\ref{hsig}). Since the solution $\hat{\boldsymbol\Omega}_1$ may not be symmetric in general, the final estimate $\hat{\boldsymbol\Omega}$ is obtained by symmetrizing $\hat{\boldsymbol\Omega}_1=[\hat\omega_{ij}^1]_{d\times d}$ as follows,
\begin{eqnarray}
\hat{\boldsymbol\Omega}&=&(\hat\omega_{ij}),\quad {\textnormal{with}},\nonumber\\
\hat\omega_{ij}&=&\hat\omega_{ji}=\hat\omega_{ij}^1{\bf 1}[|\hat\omega_{ij}^1|\le|\hat\omega_{ji}^1|]+\hat\omega_{ji}^1{\bf 1}[|\hat\omega_{ij}^1|>|\hat\omega_{ji}^1|],\nonumber
\end{eqnarray}
i.e., the entry of smaller magnitude between $\hat\omega_{ij}^1$ and $\hat\omega_{ji}^1$ is chosen for the final estimate $\hat{\boldsymbol\Omega}.$
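The symmetrization step can be written compactly; the sketch below (ours, with an arbitrary $2\times 2$ input standing in for a solution of (\ref{clime})) keeps, for each pair $(i,j)$, the entry of smaller magnitude.

```python
import numpy as np

def symmetrize_min(Omega1):
    # For each (i, j), keep omega^1_ij if |omega^1_ij| <= |omega^1_ji|,
    # otherwise keep omega^1_ji; the result is symmetric by construction.
    keep_ij = np.abs(Omega1) <= np.abs(Omega1.T)
    return np.where(keep_ij, Omega1, Omega1.T)

Omega1 = np.array([[2.0,  0.5],
                   [-0.1, 3.0]])   # possibly asymmetric l1-program solution
Omega_hat = symmetrize_min(Omega1)
```

In this example the off-diagonal pair $(0.5,\,-0.1)$ resolves to $-0.1$ in both positions.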
The following theorem provides the consistency of this methodology.
\begin{thm}\label{fcor}
Suppose (\ref{yid}) holds and assume condition {\bf(A1)}. If $\boldsymbol\Omega \in {\cal U}(q,s_0(d),K)$ and $\lambda_{\boldsymbol\Omega}=c_0\sqrt{\log d/n},$ then the following bounds hold with probability at least $1-c_1\exp(-c_2\log d),$
\begin{eqnarray}
&\textnormal(i)&\,\,\,\|\hat{\boldsymbol\Omega}-\boldsymbol\Omega\|_{\infty}\le O\Big(\sqrt{\frac{\log d}{n}}\Big)\nonumber\\
&\textnormal(ii)&\,\,\, \|\hat{\boldsymbol\Omega}-\boldsymbol\Omega\|_{2}\le O\bigg(s_0(d)\Big(\sqrt{\frac{\log d}{n}}\Big)^{1-q}\bigg)\quad\textnormal{and},\nonumber\\
&\textnormal(iii)&\,\,\frac{1}{d}\|\hat{\boldsymbol\Omega}-\boldsymbol\Omega\|_F^2\le O\bigg(s_0(d)\Big(\sqrt{\frac{\log d}{n}}\Big)^{2-q}\bigg).\nonumber
\end{eqnarray}
\end{thm}
{This methodology was introduced by Cai, Liu and Luo (2011) under the standard i.i.d. Gaussian setup, where it is implemented using the sample covariance matrix as the initial estimate.} The proofs of the error bounds in Theorem \ref{covr} and Theorem \ref{fcor} follow by deterministic arguments on the event where the inequality (\ref{maxb}) holds; they are thus the same as those of Rothman, Levina and Zhu (2009) and Cai, Liu and Luo (2011), respectively, and are hence omitted.
\section{Simulation Study}
In this section we numerically evaluate the performance of the methodology developed in this paper. All computations were done in R. The Lasso optimizations were performed with the `glmnet' package developed by Friedman, Hastie, Simon and Tibshirani (2015), and the estimation of the precision matrix was done with the `clime' package of Cai, Liu and Luo (2011). The tuning parameters $\lambda$ and $\lambda_{\boldsymbol\Omega}$ are chosen by cross validation, with the loss functions chosen as $\|s_{\lambda}(\hat{\boldsymbol\Sigma})-\hat{\boldsymbol\Sigma}\|_F$ and $\textnormal{Tr}(\hat{\boldsymbol\Sigma}\hat{\boldsymbol\Omega}-{\bf I})^2$, respectively.
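The two cross-validation loss functions just quoted can be sketched as follows (our Python illustration of the formulas, not the paper's R code; the held-out covariance argument is an assumption about how the losses are evaluated).

```python
import numpy as np

def cov_cv_loss(S_thresh, S_holdout):
    # ||s_lambda(Sigma_hat) - Sigma_hat||_F, with the second argument a
    # held-out re-normalized sample covariance matrix
    return np.linalg.norm(S_thresh - S_holdout, ord='fro')

def precision_cv_loss(S_holdout, Omega_hat):
    # Tr((Sigma_hat * Omega_hat - I)^2)
    d = S_holdout.shape[0]
    A = S_holdout @ Omega_hat - np.eye(d)
    return np.trace(A @ A)

S = np.array([[1.0, 0.2], [0.2, 1.0]])
loss_exact = precision_cv_loss(S, np.linalg.inv(S))  # near zero for the exact inverse
loss_id = precision_cv_loss(S, np.eye(2))            # positive for a poor estimate
```

Both losses vanish only when the candidate estimate reproduces the held-out covariance (or its inverse), which is what makes them usable for tuning $\lambda$ and $\lambda_{\boldsymbol\Omega}$.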
\subsection{Simulation Setup and Results}
We examine the performance of the proposed methodologies in estimating the covariance and precision matrices under two types of Gaussian graphical models, namely band and cluster structured graphs. These precision matrices are generated by the package ``fastclime'' developed by Pang, Liu and Vanderbei (2014). For a $d$-dimensional graph, a bandwidth of around $d/20$ or around $d/20$ clusters is assumed in the two cases, respectively. The adjacency matrices of these graphs with $d=50$ are illustrated below.
\begin{figure}[]
\caption{\small{Plots of adjacency matrices of banded and cluster precision matrices, respectively, at $d=50$.}}
\vspace{1.5mm}
\begin{minipage}{.48\textwidth}
\centering
\includegraphics[scale=0.25]{omega_band.png}
\end{minipage}
\begin{minipage}{.48\textwidth}
\centering
\includegraphics[scale=0.25]{omega_cluster.png}
\end{minipage}
\label{omill}
\end{figure}
The precision matrices are generated so that the corresponding covariance matrix $\boldsymbol\Sigma = \boldsymbol\Omega^{-1}$ is a correlation matrix. For further details on the construction of these matrices see page 5 of Pang, Liu and Vanderbei (2014).
We generate the missing structure vectors ${\bf M}_i,$ $1\le i\le n,$ collected in the matrix $\big[m_{ij}\big]_{n\times d},$ with $m_{ij}\stackrel{i.i.d.}{\sim} \textnormal{Bernoulli}(1-\rho_j),$ $1\le i \le n,$ $1\le j\le d.$ Here $\rho_j$ denotes the probability that the $j^{th}$ component is missing, and these probabilities are generated from a uniform distribution on $(0, 0.75).$ For each $i$, $1\le i\le n$, the non-missing components are assumed to be normally distributed with the corresponding mean sub-vector of $\boldsymbol\mu$ and sub-block of the matrix $\boldsymbol\Sigma.$ Without loss of generality, the mean vector $\boldsymbol\mu$ is taken to be the $d$-dimensional vector of zeros.
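This missingness mechanism can be sketched as follows (ours; for brevity the sketch uses an identity covariance rather than the band or cluster matrices from ``fastclime'').

```python
import numpy as np

rng = np.random.default_rng(1)
n, d = 200, 30

# Component-wise missingness probabilities rho_j ~ Uniform(0, 0.75)
rho = rng.uniform(0.0, 0.75, size=d)

# m_ij ~ Bernoulli(1 - rho_j) i.i.d.; 1 = observed, 0 = missing
M = (rng.random((n, d)) >= rho).astype(int)

# Gaussian data with mean zero; here Sigma = I for simplicity of illustration
X_full = rng.multivariate_normal(np.zeros(d), np.eye(d), size=n)
X = np.where(M == 1, X_full, np.nan)   # observed data with NaN for missing
```

The observed fraction in column $j$ concentrates around $1-\rho_j$, matching the Bernoulli specification.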
{The covariance and precision estimators derived in this paper are based on the re-normalized sample covariance matrix (\ref{hsig}). In this simulation study we compare the covariance and precision estimators based on the re-normalized sample covariance matrix with those based on the usual sample covariance matrix in terms of the spectral norm loss function, i.e. $\|\hat{\boldsymbol\Sigma}-\boldsymbol\Sigma\|_2$ and $\|\hat{\boldsymbol\Omega}-\boldsymbol\Omega\|_2$, respectively. In the simulation experiments, the sample sizes $n$ varied from $75$ to $300$ and the dimension $d$ varied from $25$ to $175$.}
{\noindent}$\bullet$ {\bf Covariance matrix}: A total of 160 independent models were generated in this study. Estimates are computed for both the hard and soft thresholding procedures described in Section \ref{est}. Simulation results are illustrated in Figure \ref{grafsim1} and Figure \ref{grafsim2}.
{\noindent}$\bullet$ {\bf Precision matrix}: A total of 112 independent models were generated in this study. Simulation results are illustrated in Figure \ref{grafsim3}.
\begin{figure}[]
\caption{\small{Plots of $\|\hat{\boldsymbol\Sigma}-\boldsymbol\Sigma\|_2$ against $n/\log d$ for the clustered graph model (CS) and the banded graph model (BS), for the soft thresholding procedure.}}
\vspace{1.5mm}
\begin{minipage}{.48\textwidth}
\centering
{\begin{center} \bf CS\end{center}}
\includegraphics[scale=0.35]{cluster_soft.png}
\end{minipage}
\begin{minipage}{.48\textwidth}
\centering
{\begin{center} \bf BS\end{center} }
\includegraphics[scale=0.35]{band_soft.png}
\end{minipage}
\label{grafsim1}
\end{figure}
\begin{figure}[]
\caption{\small{Plots of $\|\hat{\boldsymbol\Sigma}-\boldsymbol\Sigma\|_2$ against $n/\log d$ for the clustered graph model (CH) and the banded graph model (BH), for the hard thresholding procedure.}}
\vspace{1.5mm}
\begin{minipage}{.48\textwidth}
\centering
{\begin{center}\bf CH\end{center}}
\includegraphics[scale=0.35]{cluster_hard.png}
\end{minipage}
\begin{minipage}{.48\textwidth}
\centering
{\begin{center}\bf BH\end{center}}
\includegraphics[scale=0.35]{band_hard.png}
\end{minipage}
\label{grafsim2}
\end{figure}
\begin{figure}[]
\caption{\small{Plots of $\|\hat{\boldsymbol\Omega}- \boldsymbol\Omega \|_2$ against $n/\log d$ for the clustered graph model (CP) and the banded graph model (BP), for the $\ell_1$ minimization procedure.}}
\vspace{1.5mm}
\begin{minipage}{.48\textwidth}
\centering
{\begin{center}\bf CP\end{center}}
\includegraphics[scale=0.35]{cluster_preci.png}
\end{minipage}
\begin{minipage}{.48\textwidth}
\centering
{\begin{center}\bf BP\end{center}}
\includegraphics[scale=0.35]{band_preci.png}
\end{minipage}
\label{grafsim3}
\end{figure}
{Figures \ref{grafsim1}, \ref{grafsim2} \& \ref{grafsim3} clearly illustrate consistency of both the covariance and precision matrix estimators, in agreement with the theoretical results. Moreover, the proposed methodology based on the re-normalized sample covariance matrix almost uniformly outperforms the estimates obtained via the usual sample covariance matrix, which ignores the structural zeros in the data.}
\vspace{2mm}
\noindent{\bf Note:} In Figures \ref{grafsim1}, \ref{grafsim2} \& \ref{grafsim3} the two colors of the dots represent the spectral norm of the estimation error, in each independently generated model, for the two estimates being compared. To measure the average performance over the independently simulated models, nonparametric regression lines and corresponding confidence bands are drawn; these are fitted via the loess method with its smoothing parameter set to $0.75$.
\section{Analysis of Global Human Gut Microbiome Data}
In this section we apply the proposed methodology to analyze the global human gut microbiome data of Yatsunenko et al. (2012). The data consist of microbial taxa counts obtained from $317$ subjects from the U.S. (US), $99$ from Venezuela (VE) and $114$ from Malawi (MA). The available data can be analyzed at various levels of bacterial taxonomy. We illustrate our methodology by analyzing these data at three levels, namely, the genus, the family and the order. We shall generically use the term ``taxa'' to mean either genus or family or order.
The microbiome data are measured in terms of count variables called operational taxonomic units (OTUs). For details regarding these data one may refer to Mandal et al. (2015). Corresponding to the $i^{th}$ sample, let ${\bf Z}_i,$ $1\le i\le n,$ denote the $(d+1)$-dimensional vector of counts of taxa, assumed to be independent over $1\le i\le n.$ Any taxon which appears in all $n$ samples can serve as a reference category; without loss of generality, we take the $(d+1)^{th}$ taxon to be this reference taxon. We define random variables ${\bf X}_i=(X_{i1},...,X_{id})^T$ where, for each $1\le j\le d,$
\begin{eqnarray}\label{log}
X_{ij} = \begin{cases}
\log \Big(Z_{i,j}/Z_{i,d+1}\Big), \quad\textnormal{if}\,\, Z_{i,j}\ne 0 \\
{\textnormal {NA}}, \hskip 1in\,\,\textnormal{if}\,\, Z_{i,j}= 0
\end{cases}.
\end{eqnarray}
In this definition we use `NA' to represent structural zeros, since the log ratio term can itself take the value zero. The reference taxon is chosen as Bifidobacterium, Bifidobacteriaceae and Bifidobacteriales at the genus, family and order level, respectively. As described in the Introduction, the structural zeros (represented by NA) in each observation correspond to taxa that are biologically absent in the specimen. Although by construction the ${\bf X}_i$'s are independent over $1\le i\le n,$ unlike in Aitchison (1986), due to the structural zeros the log ratio transformed observations cannot be assumed to be identically distributed. Instead, the distribution of ${\bf X}_i,$ $1\le i\le n,$ is assumed to be as described in (\ref{yid}).
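The log-ratio transform (\ref{log}) admits a short sketch (ours; NaN plays the role of `NA', and the last column is the reference taxon, assumed positive in every sample).

```python
import numpy as np

def log_ratio_transform(Z):
    """Z: n x (d+1) count matrix; returns n x d log ratios with NaN for zeros."""
    Z = np.asarray(Z, dtype=float)
    counts, ref = Z[:, :-1], Z[:, -1:]
    with np.errstate(divide='ignore'):
        X = np.log(counts / ref)           # log(Z_ij / Z_i,d+1)
    return np.where(counts > 0, X, np.nan)  # zero counts become structural NA

Z = np.array([[10, 0, 5],     # taxon 2 absent in sample 1
              [ 2, 4, 8]])
X = log_ratio_transform(Z)
```

In the first sample the second taxon has a zero count and is mapped to NaN rather than to a numeric value, preserving the distinction between a structural zero and a log ratio that happens to equal zero.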
Before proceeding to the analysis, we reduce the data set by retaining only those taxa that are present in at least 20\% of the samples. Although this step is not essential for our methodology, it is done to maintain a reasonable sample size for each pairwise covariance and in turn maintain the reliability of the estimates. In doing so, the number of taxa reduces to 227, 99 and 52 at the genus, the family and the order levels, respectively.
\vspace{2.5mm}
\noindent
{\bf Classification of subjects to geographical location}
\vspace{1.5mm}
We use the estimates of the covariance matrix obtained by soft thresholding and of the precision matrix obtained in Section \ref{est} to classify subjects of the above global gut data to their respective geographical locations. For each pair of locations, a two sample t-test is performed and the 10, 25 and 50 most significant components are selected. Here the t-statistic is computed only over the observed components of the log transformed observation vector. Furthermore, we also perform classification between Venezuela and Malawi subjects with the $d=179$ most significant components to illustrate the performance of the proposed methodology in the case $d>n$.
For each pair of locations, the data are divided into training and test sets: we randomly assign $5/6^{th}$ of the data to the training set and the remaining $1/6^{th}$ to the test set. The training set is used to estimate the means of the respective populations as well as the common covariance matrix (precision matrix) using the procedures described in Section \ref{est}.
Let ${\bf X}=(X_{1},..,X_{d})^{T}$ denote the $d$-dimensional observation to be classified and let $A$ denote the collection of indices of its observed (non-NA) components. For location $r = 1, 2$, let $\hat{\boldsymbol\mu}_{rA}$ denote the corresponding sub-vector of $\hat{\boldsymbol\mu}_r$ and $\hat{\boldsymbol\Sigma}_{AA}$ denote the corresponding sub-block of $\hat{\boldsymbol\Sigma}.$ Since the observation ${\bf X}$ is assumed to be conditionally Gaussian as described in (\ref{yid}), we can implement the following linear discriminant function for classification.
\begin{eqnarray}
\delta_{r}({\bf X}_A)={\bf X}_A^{T}{\hat{\boldsymbol\Sigma}_{AA}}^{-1}\hat{\boldsymbol\mu}_{rA}-\frac{1}{2}\hat{\boldsymbol\mu}_{rA}^{T}{\hat{\boldsymbol\Sigma}_{AA}}^{-1}\hat{\boldsymbol\mu}_{rA}.
\end{eqnarray}
We classify ${\bf X}$ into location 1 if $\delta_1 ({\bf X}_A) > \delta_2 ({\bf X}_A)$, and otherwise into location 2.
Here $\hat{\boldsymbol\Sigma}$ is the estimated covariance matrix, which can be obtained via the generalized thresholding procedure of Section \ref{cov} or by inverting the precision matrix estimate $\hat{\boldsymbol\Omega}$ obtained from Section \ref{preci}. The mean estimates $\hat{\boldsymbol\mu}_{r},$ $r=1,2,$ are in turn computed using the training data for the corresponding locations.
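The discriminant rule restricted to the observed components can be sketched as follows (ours; the toy means and identity covariance are illustrative stand-ins for the trained estimates).

```python
import numpy as np

def discriminant(x, mu_r, Sigma_hat):
    # delta_r(X_A) = X_A' S_AA^{-1} mu_rA - 0.5 * mu_rA' S_AA^{-1} mu_rA,
    # computed over the observed index set A only.
    A = ~np.isnan(x)
    xA, muA = x[A], mu_r[A]
    SAA_inv = np.linalg.inv(Sigma_hat[np.ix_(A, A)])
    return xA @ SAA_inv @ muA - 0.5 * muA @ SAA_inv @ muA

Sigma_hat = np.eye(3)
mu1 = np.array([1.0, 1.0, 1.0])     # toy location-1 mean
mu2 = np.array([-1.0, -1.0, -1.0])  # toy location-2 mean
x = np.array([0.9, np.nan, 1.2])    # second component structurally missing

label = 1 if discriminant(x, mu1, Sigma_hat) > discriminant(x, mu2, Sigma_hat) else 2
```

The missing component is simply dropped, and the mean sub-vectors and covariance sub-block are restricted to $A$ before forming the linear discriminant.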
\vspace{2mm}
\noindent {\it Tuning parameters:} The tuning parameters $\lambda$ and $\lambda_{\boldsymbol\Omega}$ are chosen via 5-fold cross validation within the combined training data set of the two locations being classified. The loss functions used to evaluate the cross validation error for covariance and precision matrix estimation are $\|s_{\lambda}(\hat{\boldsymbol\Sigma})-\hat{\boldsymbol\Sigma}\|_F$ and $\textnormal{Tr}(\hat{\boldsymbol\Sigma}\hat{\boldsymbol\Omega}-{\bf I})^2$, respectively. Also, if a pair $(l,m)$ is never jointly observed, then we set the corresponding pairwise covariance to zero.
The percentage of correctly classified observations from the test sample is computed and we repeat the above process twenty times and average the correct classification percentages over these 20 repeats as a measure of success of the procedure.
The classification results at the order, family and genus level of bacterial taxonomy are tabulated in Table \ref{cl1} - Table \ref{cl3}. There is a uniformly decreasing trend in the percentages of correct classification among the pairs US-MA, US-VE and VE-MA. This is possibly due to the populations of Venezuela and Malawi being microbially similar, as indicated by Figure \ref{snr}, which shows the empirical survival functions of the pairwise differences in the sample means divided by the corresponding standard deviation, i.e. the difference in the signal to noise (S/N) ratio. It is clear that the difference in the S/N ratio for the Malawi and Venezuela subjects is uniformly smaller than for the other two pairs.
Lastly, we perform classification between Venezuela and Malawi samples at the genus level with the 179 most significant taxa using the soft thresholding method of the re-normalized sample covariance matrix. Note that the training sample size here is 178, thus allowing us to implement the procedure in the $d>n$ setup. In this case the percentage of correct classification for Venezuela, Malawi and overall are $58.5\%,$ $55.7\%$ and $57\%$ respectively.
\begin{table}[]
\centering
\caption{Classification percentages of U.S. Vs. Malawi}
\vspace{1.5mm}
\label{cl1}
\begin{tabular}{|l|c|c|c|c|c|c|c|c|c|}
\hline
\textbf{} & \multicolumn{3}{c|}{\textbf{10 Taxa}} & \multicolumn{3}{c|}{\textbf{25 Taxa}} & \multicolumn{3}{c|}{\textbf{50 Taxa}} \\ \hline
\textbf{} & \multicolumn{1}{l|}{$\hat{\boldsymbol\Omega}$} & \multicolumn{1}{l|}{$s_{\lambda}(\hat{\boldsymbol\Sigma})$} & \multicolumn{1}{l|}{$\hat{\boldsymbol\Sigma}$} & \multicolumn{1}{l|}{$\hat{\boldsymbol\Omega}$} & \multicolumn{1}{l|}{$s_{\lambda}(\hat{\boldsymbol\Sigma})$} & \multicolumn{1}{l|}{$\hat{\boldsymbol\Sigma}$} & \multicolumn{1}{l|}{$\hat{\boldsymbol\Omega}$} & \multicolumn{1}{l|}{$s_{\lambda}(\hat{\boldsymbol\Sigma})$} & \multicolumn{1}{l|}{$\hat{\boldsymbol\Sigma}$} \\ \hline
\textbf{Order} & 79.3 & 74.5 & 72.2 & 75.1 & 71.3 & 71.3 & 75.6 & 67 & 65.5 \\ \hline
\textbf{Family} & 94.1 & 92.2 & 92.2 & 88.1 & 92.2 & 83.9 & 85.2 & 83.1 & 83.3 \\ \hline
\textbf{Genus} & 96.6 & 97.5 & 97.5 & 93.3 & 93.4 & 90 & 92.2 & 83.4 & 83.8 \\ \hline
\end{tabular}
\end{table}
\begin{table}[]
\centering
\caption{Classification percentages of U.S. Vs. Venezuela}
\vspace{1.5mm}
\label{cl2}
\begin{tabular}{|l|c|c|c|c|c|c|c|c|c|}
\hline
\textbf{} & \multicolumn{3}{c|}{\textbf{10 Taxa}} & \multicolumn{3}{c|}{\textbf{25 Taxa}} & \multicolumn{3}{c|}{\textbf{50 Taxa}} \\ \hline
\textbf{} & \multicolumn{1}{l|}{$\hat{\boldsymbol\Omega}$} & \multicolumn{1}{l|}{$s_{\lambda}(\hat{\boldsymbol\Sigma})$} & \multicolumn{1}{l|}{$\hat{\boldsymbol\Sigma}$} & \multicolumn{1}{l|}{$\hat{\boldsymbol\Omega}$} & \multicolumn{1}{l|}{$s_{\lambda}(\hat{\boldsymbol\Sigma})$} & \multicolumn{1}{l|}{$\hat{\boldsymbol\Sigma}$} & \multicolumn{1}{l|}{$\hat{\boldsymbol\Omega}$} & \multicolumn{1}{l|}{$s_{\lambda}(\hat{\boldsymbol\Sigma})$} & \multicolumn{1}{l|}{$\hat{\boldsymbol\Sigma}$} \\ \hline
\textbf{Order} & 76.9 & 76.1 & 76.3 & 78.2 & 74.4 & 75.3 & 75.5 & 75.2 & 74.6 \\ \hline
\textbf{Family} & 76.8 & 74.2 & 74.9 & 78.1 & 87.8 & 87.8 & 75.6 & 80.2 & 76.6 \\ \hline
\textbf{Genus} & 79.2 & 72.7 & 72.6 & 79.7 & 90.9 & 77.1 & 79.5 & 78.5 & 78.3 \\ \hline
\end{tabular}
\end{table}
\begin{table}[]
\centering
\caption{Classification percentages of Venezuela Vs. Malawi}
\vspace{1.5mm}
\label{cl3}
\begin{tabular}{|l|c|c|c|c|c|c|c|c|c|}
\hline
\textbf{} & \multicolumn{3}{c|}{\textbf{10 Taxa}} & \multicolumn{3}{c|}{\textbf{25 Taxa}} & \multicolumn{3}{c|}{\textbf{50 Taxa}} \\ \hline
\textbf{} & \multicolumn{1}{l|}{$\hat{\boldsymbol\Omega}$} & \multicolumn{1}{l|}{$s_{\lambda}(\hat{\boldsymbol\Sigma})$} & \multicolumn{1}{l|}{$\hat{\boldsymbol\Sigma}$} & \multicolumn{1}{l|}{$\hat{\boldsymbol\Omega}$} & \multicolumn{1}{l|}{$s_{\lambda}(\hat{\boldsymbol\Sigma})$} & \multicolumn{1}{l|}{$\hat{\boldsymbol\Sigma}$} & \multicolumn{1}{l|}{$\hat{\boldsymbol\Omega}$} & \multicolumn{1}{l|}{$s_{\lambda}(\hat{\boldsymbol\Sigma})$} & \multicolumn{1}{l|}{$\hat{\boldsymbol\Sigma}$} \\ \hline
\textbf{Order} & 62.2 & 63.2 & 63.2 & 60.2 & 71.5 & 68.4 & 62.0 & 63.1 & 58.4 \\ \hline
\textbf{Family} & 58.2 & 59.5 & 59.4 & 62.8 & 62.5 & 62.0 & 58.2 & 60.7 & 59.4 \\ \hline
\textbf{Genus} & 63.1 & 62.1 & 64.1 & 61.1 & 82.1 & 78.5 & 61.1 & 65.7 & 59.7 \\ \hline
\end{tabular}
\end{table}
\begin{figure}[]
\caption{Survival functions of SNR for different pairs}
\label{snr}
\begin{center}
\includegraphics[scale=0.35]{SNR2.png}
\end{center}
\end{figure}
\section*{Acknowledgments}
Shyamal Peddada and Abhishek Kaul were supported [in part] by the Intramural Research Program of the NIH, National Institute of Environmental Health Sciences (Z01 ES101744-04). Ori Davidov was partially supported by the Israeli Science Foundation Grant No. 1256/13.
\section{Appendix}
The results to follow rely critically on Hoeffding's inequality (Hoeffding (1963)). This inequality is restated below from B\"{u}hlmann and van de Geer (2011) for the convenience of the reader.
\begin{lem} Let $Z_1,\ldots,Z_n$ be independent random variables with values in some space ${\cal L}$ and let $\gamma$ be a real valued function on ${\cal L},$ satisfying
\begin{eqnarray}
E\gamma(Z_i)=0,\quad |\gamma(Z_i)|\le c_i\,\,\forall\,\, i.
\end{eqnarray}
Then for all $K>0,$
\begin{eqnarray}
E\exp\big[\sum_{i=1}^n\gamma(Z_i)/K\big]\le \exp\big[\frac{\sum_{i=1}^nc_i^2}{2K^2}\big].
\end{eqnarray}
\end{lem}
\noindent The {\bf proof of Lemma \ref{ecrossl}} relies on the following two results.
\begin{lem}\label{ecrossl1}
Let $\eta_{il}=X_{il}-\mu^{l},$ $1\le i\le n,$ $1\le l\le d,$ and assume conditions {\bf (A1)}, (\ref{yid}) and that $\sigma_{ii}\le K,$ for a constant $K<\infty.$ Then with probability at least $1-c_1\exp(-c_2\log d),$
\begin{eqnarray}
\max_{1\le l,m\le d}\frac{1}{|n(l,m)|}\Big|\sum_{i\in n(l,m)}\eta_{il}\eta_{im}-\sigma_{lm}\Big|\le c_0\sqrt{\frac{\log d}{n}}.\nonumber
\end{eqnarray}
\end{lem}
\vspace{3mm}
\noindent {\bf Proof of Lemma \ref{ecrossl1}}
Observe that
\begin{eqnarray}\label{ine2}
\Big|\sum_{i\in n(l,m)}\big(\eta_{il}\eta_{im}-E(\eta_{il}\eta_{im})\big)\Big|&\le& \frac{1}{4}\Big|\sum_{i \in n(l,m)}\big((\eta_{il}+\eta_{im})^2-E(\eta_{il}+\eta_{im})^2\big)\Big|\nonumber\\
&&+\frac{1}{4}\Big|\sum_{i \in n(l,m)}\big((\eta_{il}-\eta_{im})^2-E(\eta_{il}-\eta_{im})^2\big)\Big|\nonumber\\
&=:& (TI)+(TII).
\end{eqnarray}
For any $1\le i\le n,$ by the definition of $\eta_{il}$ and $\eta_{im},$ the sums $\eta_{il}+\eta_{im},$ $1\le l,m \le d,$ are conditionally Gaussian given ${\bf M}_i$; also, by elementary properties of Gaussian distributions, we have
$E\big[e^{t(\eta_{il}+\eta_{im})^2}\Big|{\bf M}_i\big]\le c_0$ for all $|t|$ sufficiently small. This fact can be used to show (see, e.g., Lemma 12 of Yuan (2010)),
\begin{eqnarray}\label{expo}
E\Big[e^{t\big[(\eta_{il}+\eta_{im})^2-E(\eta_{il}+\eta_{im})^2\big]}\Big| {\bf M}_i\Big]\le e^{c_1t^2},\qquad\textnormal{for some constant}\,\,\, c_1>0
\end{eqnarray}
Let ${\bf M}$ be the sigma field generated by the r.v.'s $({\bf M}_1,..,{\bf M}_n).$ Observing that $|n(l,m)|$ is entirely characterized by ${\bf M},$ we apply the exponential bound (\ref{expo}) together with Chebyshev's inequality with $\lambda>0$ and $t=\lambda/2c_1,$ to obtain
\begin{eqnarray}
P\left(\frac{1}{|n(l,m)|}\sum_{i\in n(l,m)}\big((\eta_{il}+\eta_{im})^2-E(\eta_{il}+\eta_{im})^2\big)>\lambda\Big| {\bf M}\right)\le \exp\big[-|n(l,m)|\lambda^2/4c_1\big]\nonumber
\end{eqnarray}
Repeating this argument for the left tail and combining both we obtain,
\begin{eqnarray}\label{pb}
P\left(\frac{1}{|n(l,m)|}\Big|\sum_{i\in n(l,m)}\big((\eta_{il}+\eta_{im})^2-E(\eta_{il}+\eta_{im})^2\big)\Big|>\lambda\Big| {\bf M}\right)\le 2\exp\big[-|n(l,m)|\lambda^2/4c_1\big].\nonumber
\end{eqnarray}
Now applying a trivial union bound we obtain,
\begin{eqnarray}
P\left(\max_{1\le l,m \le d}\frac{1}{|n(l,m)|}\Big|\sum_{i\in n(l,m)}\big((\eta_{il}+\eta_{im})^2-E(\eta_{il}+\eta_{im})^2\big)\Big|>\lambda\Big| {\bf M}\right)\hskip 1.25in\nonumber\\
\le 2\sum_{l=1}^{d}\sum_{m=1}^{d}\exp\big[-|n(l,m)|\lambda^2/4c_1\big].\nonumber
\end{eqnarray}
Applying the tower and monotonicity properties of conditional expectation we obtain,
\begin{eqnarray}\label{cmain}
P\left(\max_{1\le l,m \le d}\frac{1}{|n(l,m)|}\Big|\sum_{i\in n(l,m)}\big((\eta_{il}+\eta_{im})^2-E(\eta_{il}+\eta_{im})^2\big)\Big|>\lambda\right)\hskip 1.5in\nonumber\\
\le 2d^2 \max_{1\le l,m\le d}E\exp\big[-|n(l,m)|\lambda^2/4c_1\big]
\end{eqnarray}
Recall the definition of $n(l,m)$ from (\ref{nlm}) and observe that it can equivalently be written as,
\begin{eqnarray}
|n(l,m)|=\sum_{i=1}^{n}I_{ilm}
\end{eqnarray}
where $I_{ilm}={\bf 1}[M_{il}=1\,\,\textnormal{and}\,\,M_{im}=1]$ for every $1\le l,m\le d,$ with ${\bf 1}$ representing the indicator function. Note that by construction the $I_{ilm}$ are independent r.v.'s over $1\le i\le n.$ Now
\begin{eqnarray}\label{hoeff}
\max_{1\le l,m\le d}E\exp\big[\frac{-|n(l,m)|\lambda^2}{4c_1}\big]&=&\max_{1\le l,m\le d}\exp\big[-\frac{\lambda^2}{4c_1}E|n(l,m)|\big]E\exp\big[-\frac{\lambda^2}{4c_1}\big(|n(l,m)|-E|n(l,m)|\big)\big]\nonumber\\
&\le& \exp\big[-\frac{n\lambda^2\delta_{\min}}{4c_1}\big]\max_{1\le l,m\le d}E\exp\big[-\frac{\lambda^2}{4c_1}\big(|n(l,m)|-E|n(l,m)|\big)\big]
\end{eqnarray}
Observe that $|I_{ilm}-E(I_{ilm})|\le 2$ and apply Hoeffding's inequality (Hoeffding (1963)) to the expected value on the right-hand side of (\ref{hoeff}) to obtain,
\begin{eqnarray}\label{hoef}
E\exp\big[-\frac{\lambda^2}{4c_1}\big(|n(l,m)|-E|n(l,m)|\big)\big]\le \exp \big[\frac{4n\lambda^4}{16c_1^2}\big].
\end{eqnarray}
Combining (\ref{hoef}) and (\ref{hoeff}) with (\ref{cmain}) we obtain
\begin{eqnarray}
P\left(\max_{l,m}\frac{1}{|n(l,m)|}\Big|\sum_{i\in n(l,m)}\big((\eta_{il}+\eta_{im})^2-E(\eta_{il}+\eta_{im})^2\big)\Big|>\lambda\right)\hskip 1.5in\nonumber\\
\le 2d^2\exp\big[-\frac{n\lambda^2\delta_{\min}}{4c_1}\big]\exp\big[\frac{n\lambda^4}{4c_1^2}\big].\nonumber
\end{eqnarray}
This provides a probability bound for term (TI) in (\ref{ine2}). Repeating the above arguments for term (TII) of (\ref{ine2}) and combining it with the bound for (TI) we obtain
\begin{eqnarray}
P\Big(\max_{l,m}\frac{1}{|n(l,m)|} \big|\sum_{i\in n(l,m)} \eta_{il}\eta_{im}-E(\eta_{il}\eta_{im})\big|\ge\lambda\Big)\hskip 1.5in\nonumber\\
\le 2d^2\exp\big[-\frac{n\lambda^2\delta_{\min}}{4c_1}\big]\exp\big[\frac{4n\lambda^4}{16c_1^2}\big]\nonumber
\end{eqnarray}
Choosing $\lambda= c_0\sqrt{\frac{\log d}{n}}$ for a sufficiently large constant $c_0$ we obtain the statement of the Lemma. This completes the proof. \hfill $\Box$
\begin{rem}
In addition to the result of Lemma \ref{ecrossl1}, we shall also need the following probability bound. Assuming the conditions of Lemma \ref{ecrossl1} we have
\begin{eqnarray}
\max_{1\le l\le d} \frac{1}{|n(l)|}\big|\sum_{i\in n(l)} \eta_{il}\big|\le c_0\sqrt{\frac{\log d}{n}}
\end{eqnarray}
with probability at least $1-c_1\exp(-c_2\log d).$ Applying arguments similar to (\ref{hoeff}) and (\ref{hoef}), this result is straightforward to obtain by observing that $\frac{1}{\sqrt{|n(l)|}}\sum_{i\in n(l)}\eta_{il}$ conditioned on ${\bf M}$ is a Gaussian r.v. with finite variance.
\end{rem}
\noindent {\bf Proof of Lemma \ref{ecrossl}}
Without loss of generality assume that $\mu^l=0,$ $1\le l\le d.$ Then,
\begin{eqnarray}\label{iner}
|\hat\sigma_{l,m}-\sigma_{l,m}|&=&\frac{1}{|n(l,m)|}\Big|\sum_{i\in n(l,m)}(X_{il}-\hat\mu^l)(X_{im}-\hat\mu^m)- \sigma_{lm}\Big|\nonumber\\
&\le& \frac{1}{|n(l,m)|}\Big|\sum_{i\in n(l,m)}X_{il}X_{im}-\sigma_{lm} \Big| + \frac{1}{|n(l,m)|}\Big|\sum_{i\in n(l,m)}\hat\mu^l\hat\mu^m\Big|\nonumber\\
&&+\frac{1}{|n(l,m)|}\Big|\sum_{i \in n(l,m)} X_{im}\hat\mu^{l}\Big|+\frac{1}{|n(l,m)|}\Big|\sum_{i \in n(l,m)} X_{il}\hat\mu^m\Big|\nonumber\\
&=& (I)+(II)+(III)+(IV).
\end{eqnarray}
Term (I) of (\ref{iner}) can be bounded by a direct application of Lemma \ref{ecrossl1}. Consider term (II):
\begin{eqnarray}
\frac{1}{|n(l,m)|}\Big|\sum_{i\in n(l,m)}\hat\mu^l\hat\mu^m\Big|\le \max_{1\le l,m\le d}|\hat\mu^{l}||\hat\mu^{m}|\le c_0\frac{\log d}{n}
\end{eqnarray}
with probability at least $1-c_1\exp(-c_2\log d),$ by the Remark above. Lastly, terms (III) and (IV) can be bounded in probability by similar arguments. Combining these bounds we obtain,
\begin{eqnarray}
\max_{1\le l,m\le d}|\hat\sigma_{l,m}-\sigma_{l,m}|\le c_0\sqrt{\frac{\log d}{n}}
\end{eqnarray}
with probability at least $1-c_1\exp(-c_2\log d).$ This completes the proof of this Lemma.\hfill$\Box$
\end{document}
\section{Introduction}
\label{sec:intro}
\begin{figure}
\includegraphics[width=\linewidth]{flow}
\vspace{-20pt}
\caption{
Given only unannotated 2D images as training data, our model learns (1) to reconstruct and predict the pose of 3D meshes from a single test image, and (2) to generate new 3D mesh samples.
It is trained end-to-end (orange dashed arrow) to reconstruct input images, via a differentiable renderer that produces lit, shaded RGB images, allowing us to exploit shading cues in the loss.
}
\label{fig:flow}
\end{figure}
Reconstructing 3D objects from 2D images is a long-standing research area in computer vision.
While traditional methods rely on multiple images of the same object instance~\cite{seitz2006comparison,furukawa15cgft,broadhurst01iccv,laurentini94pami,debonet99iccv,gargallo07accv,liu10cvpr}, there has recently been a surge of interest in learning-based methods that can infer 3D structure from a single image, assuming that it shows an object of a class seen during training~\cite{fan17cvpr,zhu17iccv-rethinking,gwak173dv,choy16eccv,yan16nips,girdhar16eccv,tulsiani17cvpr,wiles17bmvc}.
A problem related to reconstruction is that of generating new 3D shapes of a given object class \textit{a priori}, i.e. without conditioning on an image.
Again, there have recently been several works that apply deep learning techniques to this task~\cite{wu16nips,gadelha173dv,sinha17cvpr,zou17iccv,li17tog}.
Most learning-based methods for reconstruction and generation rely on strong supervision.
For generation, \cite{wu16nips,rezende16nips,sinha17cvpr,zou17iccv,li17tog} use large collections of manually constructed 3D shapes~\cite{shapenet15arxiv,wu15cvpr-shapenets}.
For reconstruction, \cite{girdhar16eccv,choy16eccv,fan17cvpr,zhu17iccv-rethinking} require training images paired with aligned 3D meshes; \cite{gwak173dv} relaxes this slightly by not requiring the images and meshes to be paired.
While other methods do not rely on 3D ground-truth, they still require annotations on the 2D training images such as keypoints~\cite{kar15cvpr,vicente14cvpr} and object poses \cite{tulsiani17cvpr,yan16nips,wiles17bmvc}. Furthermore, some of them also require multiple views for each object instance \cite{yan16nips,wiles17bmvc,rezende16nips}.
In this paper, we consider the more challenging setting where we only have access to unannotated 2D images for training, without ground-truth pose, keypoints, or 3D shape, and with a single view per object instance; this setting is considered in just one previous work~\cite{gadelha173dv}.
It is well known that \textit{shading} provides an important cue for 3D understanding~\cite{horn75pcv}.
It allows determination of surface orientations, if the lighting and material characteristics are known; this has been explored in numerous works on shape-from-shading over the years~\cite{horn75pcv,zhang99pami,barron15pami}.
Unlike learning-based approaches, these methods can only reconstruct non-occluded parts of an object, and achieving good results requires strong priors~\cite{barron15pami}.
Conversely, existing learning-based generation and reconstruction methods can reason over occluded or visually-ambiguous areas, but do not leverage shading information in their loss.
Furthermore, the vast majority use voxel grids as an output representation (except \cite{zou17iccv,sinha17cvpr}); while easy to work with, these cannot model surfaces that are not axis-aligned, limiting the usefulness of shading cues.
To exploit shading information in a learning-based approach, we therefore need to move beyond voxels; a natural choice of representation is then 3D \textit{meshes}.
Meshes are ubiquitous in computer graphics, and have desirable properties for our task: they can represent surfaces of arbitrary orientation and dimensions at fixed cost, and are able to capture fine details.
Thus, they avoid the visually displeasing `blocky' reconstructions that result from voxels.
We also go beyond monochromatic light, considering the case of coloured directional lighting; this provides even stronger shading cues when combined with arbitrarily-oriented mesh surfaces.
In this paper, we present a unified framework for both reconstruction and generation of 3D shapes, that is trained with only 2D supervision, and models 3D meshes rather than voxels (\fig{flow}).
Our framework is very general, and can be trained in similar settings to existing models~\cite{tulsiani17cvpr,yan16nips,wiles17bmvc}, while also supporting weaker supervision scenarios.
It allows:
\begin{itemize}
\item
use of different \textbf{mesh parameterisations}, which lets us incorporate useful modeling priors such as smoothness or composition from primitives
\item
exploitation of \textbf{shading cues} due to monochromatic or coloured directional lighting, letting us discover concave structures that silhouette-based methods~\cite{gadelha173dv,tulsiani17cvpr,yan16nips} cannot
\item
training with \textbf{varying degrees of supervision}: single or multiple views per instance, with or without ground-truth pose annotation
\end{itemize}
To achieve this, we design a probabilistic generative model that captures the full image formation process, whereby the shape and pose of a 3D mesh are first sampled independently, then a 2D rendering is produced from these (\sect{generative}).
We use stochastic gradient variational Bayes~\cite{kingma14iclr,rezende14icml} for training (\sect{training}).
This involves learning an \textit{inference network} that can predict 3D shape and pose from a single image, with the shape placed in a canonical frame of reference, i.e. disentangled from the pose.
Together, the model plus its inference network resemble a variational autoencoder~\cite{kingma14iclr} on pixels.
It represents 3D shapes in a compact latent embedding space, and has extra layers in the decoder corresponding to the mesh representation and renderer.
As we do not provide 3D supervision, the encoder and decoder must bootstrap and guide one another during training. The decoder learns the manifold of shapes, while at the same time the encoder learns to map images onto this.
This learning process is driven purely by the objective of reconstructing the training images.
While this is an ambiguous task and the model cannot guarantee to reconstruct the true shape of an object from a single image, its generative capability means that it always produces a plausible instance of the relevant class; the encoder ensures that this is consistent with the observed image.
This works because
the generative model must learn to produce shapes that reproject well over {\em all} training images, starting from low-dimensional latent representations.
This creates an inductive bias towards regularity, which avoids degenerate solutions with unrealistic shapes that could, in isolation, explain each individual training image.
We display samples from our model in \sect{gen-results}, showing that (i) the use of meshes yields more natural samples than those from voxel-based methods, and (ii) our samples are diverse and realistic.
In \sect{recon-results}, we quantitatively evaluate the performance of our method on single-view reconstruction and pose estimation, in the various settings described above.
We show that
(i) it learns to predict pose, and disentangle it from shape;
(ii) exploiting information from shading improves the results;
(iii) it achieves comparable or better performance than prior works with equivalent supervision; and
(iv) it still performs well when given weaker supervision than supported by prior works.
\begin{figure}
\includegraphics[width=\linewidth]{light-and-paramns}
\vspace{-22pt}
\caption{
\textbf{Lighting}: Coloured directional lighting (a) provides strong cues for surface orientation; white light (b) provides less information; silhouettes (c) provide none at all. Our model is able to exploit the shading information from coloured or white lighting.
\textbf{Mesh parameterisations}: \textbf{ortho-block} \& \textbf{full-block} (assembly from cuboidal primitives, of fixed or varying orientation) are suited to objects consisting of compact parts (d-e); \textbf{subdivision} (per-vertex deformation of a subdivided cube) is suited to complex continuous surfaces (f).
}
\label{fig:light-and-paramns}
\end{figure}
\section{Generative Model}
\label{sec:generative}
Our goal is to build a probabilistic generative model of 3D meshes for a given object class.
For this to be trainable with 2D supervision, we cast the entire image-formation process as a directed model (\fig{flow}).
We assume that the content of an image can be explained by two independent latent components---the shape of the mesh, and its pose relative to the camera.
These are modelled by two low-dimensional random variables, $\mathbf{z}$ and $\theta$ respectively. The joint distribution over these and the resulting pixels $\mathbf{x}$ factorises as
$
P(\mathbf{x}, \mathbf{z}, \theta)
=
P(\theta) P(\mathbf{z}) P(\mathbf{x} \,|\, \mathbf{z}, \theta)
$.
Following \cite{gadelha173dv,yan16nips,tulsiani17cvpr,wiles17bmvc}, we assume that the pose $\theta$ is parameterised by just the azimuth angle, with $\theta \sim \text{Uniform}(-\pi, \pi)$.
The camera is then placed at fixed distance and elevation relative to the object.
Following recent works on deep latent variable models~\cite{kingma14iclr,goodfellow14nips}, we assume that $\mathbf{z}$ is drawn from a standard isotropic Gaussian, and then transformed by a deterministic \textit{decoder network},
$F_{\phi}$, parameterised by weights $\phi$ which are to be learnt.
This produces the \textit{mesh parameters} $\Pi = F_{\phi}(\mathbf{z})$.
Intuitively, the decoder network $F_{\phi}$ transforms and entangles the dimensions of $\mathbf{z}$ such that all values in the latent space map to plausible values for $\Pi$, even if these lie on a highly nonlinear manifold.
Note that our approach contrasts with previous models that directly output pixels~\cite{kingma14iclr,goodfellow14nips} or voxels~\cite{wu16nips,gadelha173dv} from a decoder network.
We use $\Pi$ as inputs to a fixed mesh parameterisation function $M(\Pi)$, which yields vertices $\mathbf{v}_{\text{object}}$ of triangles defining the shape of the object in 3D space, in a canonical pose (different options for $M$ are described below).
The vertices are transformed into camera space according to the pose $\theta$, by a fixed function $T$: $\mathbf{v}_{\text{camera}} = T(\mathbf{v}_{\text{object}},\, \theta)$.
They are then rendered into an RGB image $I_0 = \mathcal{G}(\mathbf{v}_{\text{camera}})$ by a rasteriser $\mathcal{G}$ with Gouraud shading~\cite{gouraud71tc} and Lambertian directional lighting~\cite{lambert60bk}.
We are free to choose the lighting parameters: our experiments include tri-directional coloured lighting, and white directional lighting with an ambient component.
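To make the rendering pipeline concrete, here is a minimal numpy sketch of the fixed transform $T$ (azimuth rotation followed by a camera-space translation) and of Lambertian diffuse shading. The conventions here (rotation about the $y$ axis, a camera looking down $-z$, the distance value) are illustrative assumptions, not the paper's implementation.

```python
# Illustrative sketch (conventions are our assumptions, not the paper's code):
# the fixed transform T rotates vertices by azimuth theta and places them in
# front of the camera; Lambertian shading gives per-vertex diffuse colours.
import numpy as np

def T(v_object, theta, cam_distance=2.0):
    """Rotate each vertex about the vertical (y) axis by theta, then shift
    the object cam_distance units along -z, towards a camera at the origin."""
    c, s = np.cos(theta), np.sin(theta)
    R = np.array([[c, 0.0, s],
                  [0.0, 1.0, 0.0],
                  [-s, 0.0, c]])
    v = v_object @ R.T
    v[:, 2] -= cam_distance
    return v

def lambert(normals, light_dir, light_rgb, ambient_rgb):
    """Per-vertex Lambertian colour: ambient + max(0, n . l) * light colour."""
    l = np.asarray(light_dir) / np.linalg.norm(light_dir)
    diffuse = np.clip(normals @ l, 0.0, None)[:, None] * np.asarray(light_rgb)
    return np.asarray(ambient_rgb) + diffuse

v_cam = T(np.array([[1.0, 0.0, 0.0]]), np.pi / 2)   # quarter-turn azimuth
colours = lambert(np.array([[0.0, 0.0, 1.0]]),      # normal facing the light
                  [0.0, 0.0, 1.0], [1.0, 0.0, 0.0], [0.1, 0.1, 0.1])
```

Coloured lighting amounts to summing one such diffuse term per directional light, which is what makes surface orientation recoverable from RGB shading.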
The final observed pixel values $\mathbf{x}$ are modelled as independent Gaussian random variables, with means equal to the values in an $L$-level Gaussian pyramid~\cite{burt83tc}, whose base level equals $I_0$, and whose $L$\textsuperscript{th} level has smallest dimension equal to $1$:
\begin{align}
P_{\phi}(\mathbf{x} \,|\, \mathbf{z}, \theta) &= \prod_l P_{\phi}(\mathbf{x}_l \,|\, \mathbf{z}, \theta)
&
\mathbf{x}_l &\sim \text{Normal}\left( I_l, \tfrac{\epsilon}{2^l} \right)
\\
I_0 &= \mathcal{G}(T(M(F_{\phi}(\mathbf{z})),\, \theta))
&
I_{l+1} &= I_l * k_G
\end{align}
where $k_G$ is a small Gaussian kernel, $\epsilon$ is the noise magnitude at the base scale, and $*$ denotes convolution with stride two.
We use a multi-scale pyramid instead of just the raw pixel values to ensure that, during training, there will be gradient forces over long distances in the image, thus avoiding bad local minima where the reconstruction is far from the input.
\paragraph{Mesh parameterisations.}
After the decoder network has transformed the latent embedding $\mathbf{z}$ into the mesh parameters $\Pi$, these are converted to actual 3D vertices using a simple, non-learnt mesh-parameterisation function $M$.
One possible choice for $M$ is the identity function, in which case the decoder network directly outputs vertex locations.
However, initial experiments showed that this does not work well: it produces very irregular meshes with large numbers of intersecting triangles.
Conversely, using a more sophisticated form for $M$ enforces regularity of the mesh.
We use three different parameterisations in our experiments.
In our first parameterisation, $\Pi$ specifies the locations and scales of a fixed number of axis-aligned cuboidal \textit{primitives} (\fig{light-and-paramns}d), from which the mesh is assembled~\cite{zou17iccv,tulsiani17cvpr-blocks}.
Changing $\Pi$ can produce configurations with different topologies, depending which blocks touch or overlap, but all surfaces will necessarily be axis-aligned.
In our experiments we call this \textbf{ortho-block}.
Our second parameterisation is strictly more powerful than the first: we still assemble the mesh from cuboidal primitives, but now parameterise each with a 3D rotation, in addition to its location and scale.
In our experiments we call this \textbf{full-block} (\fig{light-and-paramns}e).
The above parameterisations are naturally suited to objects composed of compact parts, but cannot represent complex continuous surfaces.
For these, we define a third parameterisation, \textbf{subdivision} (\fig{light-and-paramns}f).
This parameterisation is based on a single unit cube, centred at the origin; the edges and faces of the cube are subdivided several times along each axis.
Then, $\Pi$ specifies a list of displacements, one per vertex, which deform the subdivided cube into the required shape.
\section{Variational Training}
\label{sec:training}
We wish to learn the parameters of our model from a training set of 2D images of objects of a single class.
More precisely, we assume access to a set of images $\{\mathbf{x}^{(i)}\}$, each showing an unknown object instance at unknown pose.
Note that we do \textit{not} require that there are multiple views of each object (in contrast with \cite{yan16nips}), nor that the object poses are given as supervision (in contrast with \cite{yan16nips,tulsiani17cvpr,wiles17bmvc}).
We seek to maximise the marginal log-likelihood of the training set, which is given by $\sum_i \log P_{\phi}(\mathbf{x}^{(i)})$, with respect to $\phi$.
For each image, we have
\begin{equation}
\label{eq:loglik}
\log P_{\phi}(\mathbf{x}^{(i)}) = \log \int_{\mathbf{z},\theta} P_{\phi}(\mathbf{x}^{(i)} \,|\, \mathbf{z}, \theta) P(\mathbf{z}) P(\theta) d\mathbf{z} \, d\theta.
\end{equation}
Unfortunately this is intractable, due to the integral over the latent space $\mathbf{z}, \theta$.
Hence, we use amortised variational inference, in the form of stochastic gradient variational Bayes \cite{kingma14iclr,rezende14icml}.
This introduces an approximate posterior $Q_{\omega}(\mathbf{z}, \theta \,|\, \mathbf{x})$, parameterised by some $\omega$ that we learn jointly with the model parameters $\phi$.
Intuitively, $Q$ maps an image $\mathbf{x}$ to a distribution over likely values of the latent variables $\mathbf{z}$ and $\theta$.
Instead of the log-likelihood \eqn{loglik}, we then maximise the \textit{evidence lower bound} (ELBO):
\begin{equation}
\label{eq:elbo}
\mathop{\mathbb{E}}_{\mathbf{z}, \theta \sim Q_{\omega}(\mathbf{z}, \theta \,|\, \mathbf{x}^{(i)})}\left[
\log P_{\phi}( \mathbf{x}^{(i)} \,|\, \mathbf{z}, \theta )
\right] - \kldiv{
Q_{\omega}(\mathbf{z}, \theta \,|\, \mathbf{x}^{(i)})
}{
P(\mathbf{z}) P(\theta)
}
\le
\log P_{\phi}(\mathbf{x}^{(i)}).
\end{equation}
This lower-bound on the log-likelihood can be evaluated efficiently, as the necessary expectation is now with respect to $Q$, for which we are free to choose a tractable form.
The expectation can then be approximated using a single sample.
We let $Q$ be a mean-field approximation, factorised as $Q_{\omega}(\mathbf{z}, \theta \,|\, \mathbf{x}) = Q_{\omega}(\mathbf{z} \,|\, \mathbf{x}) Q_{\omega}(\theta \,|\, \mathbf{x})$.
$Q_{\omega}(\mathbf{z} \,|\, \mathbf{x})$ is a multivariate Gaussian with diagonal covariance.
The mean and variance of each latent dimension are given by an \textit{encoder network}, $\mathrm{enc}_{\omega}(\mathbf{x})$, which takes the image $\mathbf{x}$ as input.
For this encoder network we use a CNN with architecture similar to \cite{wiles17bmvc}.
When training with multiple views per instance, we apply the encoder network to each image separately, then calculate the final shape embedding $\mathbf{z}$ by max-pooling each dimension over all views.
For the pose $\theta$, we could similarly use a Gaussian posterior.
However, many objects are roughly symmetric with respect to rotation, and so the true posterior is typically multi-modal.
We capture this multi-modality by decomposing the rotation into coarse and fine parts~\cite{mousavian17cvpr}: an integer random variable $\theta_{\text{coarse}}$ that chooses from $R$ rotation bins, and a small Gaussian offset $\theta_{\text{fine}}$ relative to this.
We apply this transformation in both the generative $P(\theta)$ and variational $Q_{\omega}(\theta)$, giving
\begin{equation}
\theta = -\pi + \theta_{\text{coarse}} \frac{2\pi}{R} + \theta_{\text{fine}}
\end{equation}
\begin{align}
\label{eq:theta-prior}
P(\theta_{\text{coarse}} = r) &= 1/R, &\;\;\; P(\theta_{\text{fine}}) &= \text{Normal}(\theta_{\text{fine}} \,|\, 0, \pi / R )
\\
Q_{\omega}\left( \theta_{\text{coarse}} = r \,\Big\vert\, \mathbf{x}^{(i)} \right) &= \rho_r \left( \mathbf{x}^{(i)} \right), &\;\;\; Q_{\omega}(\theta_{\text{fine}}) &= \text{Normal}\left( \theta_{\text{fine}} \,\Big\vert\, \xi(\mathbf{x}^{(i)}), \zeta(\mathbf{x}^{(i)}) \right)
\end{align}
where the variational parameters $\rho_r, \xi, \zeta$ for image $\mathbf{x}^{(i)}$ are again estimated by the encoder network $\mathrm{enc}_{\omega}(\mathbf{x}^{(i)})$.
Provided $R$ is sufficiently small, we can integrate directly with respect to $\theta_{\text{coarse}}$ when evaluating \eqn{elbo}, i.e. sum over all possible rotations. We found in initial experiments that this significantly improves performance.
\begin{figure}
\centering
\includegraphics[width=0.95\linewidth]{gen-samples.png}
\vspace{-10pt}
\caption{\textbf{(a-c)} Samples from \cite{gadelha173dv} (grey background), shown next to stylistically-similar samples from our model (white background). Both are trained with a single view per instance, and without ground-truth pose. However, our model outputs meshes, and uses shading in the loss. \textbf{(d)} For \textit{sofa}, we only show samples from our model, as \cite{gadelha173dv} cannot handle sofas due to the concavities. \textbf{(e)} Additional samples from our model, showing their diversity and quality.}
\label{fig:samples}
\end{figure}
\paragraph{Imposing a uniform pose prior.}
While the above allows our training process to reason over different poses, it is still prone to predicting the same pose $\theta$ for every image;
clearly this does not correspond to the prior on $\theta$ given by \eqn{theta-prior}.
The model is therefore relying on the shape embedding $\mathbf{z}$ to model all variability, rather than disentangling shape and pose.
The ELBO \eqn{elbo} does include a KL-divergence term that should encourage latent variables to match their prior.
However, it does not have a useful effect for $\theta_{\text{coarse}}$: minimising the KL divergence from a uniform distribution for each sample individually corresponds to independently minimising all the probabilities $Q_{\omega}(\theta_{\text{coarse}})$, which does not encourage uniformity of the full distribution.
The effect we desire is to match the aggregated posterior distribution $\left\langle Q_{\omega}(\theta \,|\, \mathbf{x}^{(i)}) \right\rangle_i$ to the prior $P(\theta)$, where $\langle \,\cdot\, \rangle_i$ is the empirical mean over the training set.
As $\theta_{\text{coarse}}$ follows a categorical distribution in both generative and variational models, we can directly minimise the L1 distance between the aggregated posterior and the prior:
\begin{equation}
\sum_{r=1}^R \Big\lvert
\left\langle
Q_{\omega}\left(\theta_{\text{coarse}} = r \,|\, \mathbf{x}^{(i)}\right)
\right\rangle_i
- P\left(\theta_{\text{coarse}} = r\right)
\Big\rvert
=
\sum_{r=1}^R \Big\lvert
\left\langle
\rho_r(\mathbf{x}^{(i)})
\right\rangle_i
- \frac{1}{R} \;
\Big\rvert.
\end{equation}
We use this term in place of $\kldiv{ Q(\theta_{\text{coarse}} \,|\, \mathbf{x}^{(i)}) }{ P(\theta_{\text{coarse}}) }$ in our loss, approximating the empirical mean with a single minibatch.
\paragraph{Loss.}
Our final loss function for a minibatch $\mathcal{B}$ is then given by
\begin{multline}
\label{eq:loss}
\hspace{-10pt}
\sum_{r=1}^R
\left\{
-
\left\langle
\rho_r(\mathbf{x}^{(i)})
\mathop{\mathbb{E}}_{
\mathbf{z}, \theta_{\text{fine}} \sim Q_{\omega}
}\left[
\log P_{\phi}\!\left( \mathbf{x}^{(i)} \,\Big\vert\, \mathbf{z}, \theta_{\text{coarse}} = r, \theta_{\text{fine}} \right)
\right]
\right\rangle_{\!i \in \mathcal{B}}
\!
+ \alpha \,
\Big\lvert
\!
\left\langle
\rho_r(\mathbf{x}^{(i)})
\right\rangle_{\!i \in \mathcal{B}}
\!
- \frac{1}{R} \;
\Big\rvert
\right\} \\
+ \beta \,
\left\langle
\kldiv{
Q_{\omega}\left( \mathbf{z}, \theta_{\text{fine}} \,\Big\vert\, \mathbf{x}^{(i)} \right)
}{
P(\mathbf{z}) P(\theta_{\text{fine}})
}
\right\rangle_{\!i \in \mathcal{B}}
\end{multline}
where $\beta$ increases the relative weight of the KL term as in \cite{higgins17iclr}, and $\alpha$ controls the strength of the pose prior matching.
We minimise \eqn{loss} with respect to $\phi$ and $\omega$ using ADAM~\cite{kingma15iclr}, applying the reparameterisation trick~\cite{kingma14iclr,rezende14icml} to handle the Gaussian random variables.
\paragraph{Differentiable rendering.}
Note that optimising \eqn{loss} by gradient descent requires differentiating through the mesh-rendering operation $\mathcal{G}$ used to calculate $P_{\phi}(\mathbf{x} \,|\, \mathbf{z}, \theta)$, to find the derivative of the pixels with respect to the vertex locations and colours.
While computing exact derivatives of $\mathcal{G}$ is very expensive, \cite{loper14eccv} describes an efficient approximation.
We employ a similar technique here, and have made our TensorFlow implementation publicly available\footnote{\textit{DIRT: a fast Differentiable Renderer for TensorFlow}, \url{https://github.com/pmh47/dirt}}.
\begin{figure}
\centering
\includegraphics[width=\linewidth]{recon-samples.png}
\vspace{-20pt}
\caption{Qualitative examples of reconstructions. Each row of five images shows (i) ShapeNet ground-truth; (ii) our reconstruction with \textbf{subdivision} parameterisation; (iii) reconstruction aligned to canonical pose; (iv) our reconstruction with \textbf{blocks}; (v) aligned reconstruction. Experimental setting: single-view training, colour lighting, shading loss.}
\label{fig:reconstructions}
\end{figure}
\section{Experiments}
We follow recent works \cite{gadelha173dv,yan16nips,tulsiani17cvpr,fan17cvpr} and evaluate our approach using the ShapeNet dataset~\cite{shapenet15arxiv}.
Using synthetic data has two advantages: it allows
(i) controlled experiments modifying lighting and other parameters;
(ii) benchmarking the performance of the reconstruction network against ground-truth 3D shapes.
Our experiments focus on the four classes \textit{aeroplane}, \textit{car}, \textit{chair}, and \textit{sofa}.
The first three are used in \cite{gadelha173dv,tulsiani17cvpr,yan16nips}, while the fourth is an example of a highly concave class that is not easily handled by silhouette-based approaches.
To rigorously evaluate the performance of our model, we vary several factors:
\begin{itemize}
\item
\textbf{Mesh parameterisations:} We evaluate the three parameterisations described in \sect{generative}.
\item
\textbf{Lighting:} Unlike previous works~\cite{gadelha173dv,wiles17bmvc,yan16nips,tulsiani17cvpr}, our method is able to exploit shading in the images.
We test in two settings, illumination by
(i) three coloured directional lights (\textbf{colour}), and
(ii) one white directional light plus a white ambient component (\textbf{white}).
\item
\textbf{Reconstruction loss:}
We typically calculate the reconstruction loss (pixel log-likelihood) over the RGB shaded image (\textbf{shading}), but for comparison with \cite{yan16nips,tulsiani17cvpr,wiles17bmvc} we also experiment with using only the silhouette in the loss (\textbf{silhouette}), disregarding differences in shading between the input and reconstructed pixels.
\item
\textbf{Pose supervision:} Previous works that train for 3D reconstruction with 2D supervision require the ground-truth pose of each training instance~\cite{yan16nips,wiles17bmvc,tulsiani17cvpr}.
Although our method does not need this, we evaluate whether it can benefit from it.
\item
\textbf{Multiple views:} \cite{yan16nips,wiles17bmvc} require that multiple views of each instance are presented together in each training batch, and \cite{tulsiani17cvpr} also focuses on this setting.
Our model does not require this, but for comparison we include results with four views per instance at training time, and either one or four at test time.
\end{itemize}
During training, we construct each minibatch by randomly sampling 128 meshes from the relevant ShapeNet class uniformly with replacement.
For each selected mesh, we render a single image, using a pose sampled from $\text{Uniform}(-\pi,\,\pi)$.
Only these images are used to train the model, not the meshes themselves.
In experiments using multiple views, we instead sample 32 meshes and four poses per mesh, and correspondingly render four images.
\subsection{Generation}
\label{sec:gen-results}
\fig{samples} shows examples of meshes sampled from our model, using the same setting as \cite{gadelha173dv} (i.e. single-view training without pose supervision). That is the only prior work that learns a 3D generative model with just images as supervision.
We manually selected samples from our model that are stylistically similar to those from \cite{gadelha173dv} to allow side-by-side comparison.
We see that in all cases, generating meshes tends to give cleaner, more visually-pleasing samples than voxels (as used by \cite{gadelha173dv}).
For \textit{chair}, our model is able to capture the very narrow legs; for \textit{aeroplane}, it captures the diagonal edges of the wings; for \textit{car}, it captures the smoothly curved edges.
We have also successfully learnt a model for the concave class \textit{sofa}---which is impossible for \cite{gadelha173dv} as it does not consider shading.
Finally, note that our samples are diverse:
the model generates various different styles for each class.
\subsection{Reconstruction}
\label{sec:recon-results}
We now evaluate the performance of our model on 3D reconstruction from a single image.
We benchmark on a held-out test set, following the protocol of \cite{yan16nips}, where each object is presented at 24 different poses, and statistics aggregated across objects and poses.
We evaluate according to the following measures:
\begin{itemize}
\item
\textit{iou}: to measure the shape reconstruction error, we calculate the mean intersection-over-union between the predicted mesh and ground-truth; this follows recent works on reconstruction~\cite{yan16nips,tulsiani17cvpr}. To calculate this, we voxelise both meshes at a resolution of $32^3$.
\item
\textit{err}: to measure the pose estimation error, we calculate the median error in degrees of predicted rotations.
\item
\textit{acc}: again to evaluate pose estimation, we measure the fraction of instances whose predicted rotation is within $\pi/6$ of the ground-truth rotation.
\end{itemize}
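For concreteness, the stated metrics can be implemented as follows; this is our own sketch of the definitions above (voxelisation itself is assumed done, e.g. by an external tool, and the grids are boolean occupancy arrays).

```python
# Sketch of the evaluation metrics: IoU between 32^3 occupancy grids,
# median rotation error in degrees, and accuracy at a pi/6 threshold.
import numpy as np

def iou(vox_a, vox_b):
    """Intersection-over-union of two boolean occupancy grids."""
    inter = np.logical_and(vox_a, vox_b).sum()
    union = np.logical_or(vox_a, vox_b).sum()
    return inter / union

def rotation_errors(pred, gt):
    """Smallest absolute angular difference, accounting for wrap-around."""
    d = np.abs(np.asarray(pred) - np.asarray(gt)) % (2 * np.pi)
    return np.minimum(d, 2 * np.pi - d)

def pose_metrics(pred, gt):
    """(median error in degrees, fraction of errors below pi/6)."""
    errs = rotation_errors(pred, gt)
    return np.degrees(np.median(errs)), np.mean(errs < np.pi / 6)

# Two toy occupancy grids overlapping in a third of their union.
a = np.zeros((32, 32, 32), bool); a[:16] = True
b = np.zeros((32, 32, 32), bool); b[8:24] = True
```

The wrap-around in `rotation_errors` matters because azimuth is periodic: a prediction of $\pi$ against a ground truth of $-\pi$ is a perfect match, not a half-turn error.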
\begin{table}[t]
\small \centering
\setlength{\tabcolsep}{2pt}
\rowcolors{3}{gray!20}{white}
\begin{tabular}{r|cccc|cccc|cccc|cccc}
& \multicolumn{4}{c|}{\textbf{car}} & \multicolumn{4}{c|}{\textbf{chair}} & \multicolumn{4}{c|}{\textbf{aeroplane}} & \multicolumn{4}{c}{\textbf{sofa}} \\[-3pt]
& \textit{iou} & \textit{err} & \textit{acc} & \textit{iou|$\theta$} &
\textit{iou} & \textit{err} & \textit{acc} & \textit{iou|$\theta$} &
\textit{iou} & \textit{err} & \textit{acc} & \textit{iou|$\theta$} &
\textit{iou} & \textit{err} & \textit{acc} & \textit{iou|$\theta$} \\
\hline
\textit{ortho-block} & 0.71 & 7.3 & \textbf{0.84} & 0.74 & 0.41 & 9.2 & \textbf{0.69} & 0.49 & 0.30 & 7.9 & 0.73 & 0.24 & \textbf{0.59} & \textbf{7.3} & \textbf{0.94} & \textbf{0.69} \\
\textit{full-block} & 0.54 & 6.5 & 0.82 & 0.63 & \textbf{0.46} & \textbf{4.6} & \textbf{0.69} & \textbf{0.51} & \textbf{0.51} & \textbf{4.4} & \textbf{0.89} & \textbf{0.57} & 0.39 & 9.1 & 0.70 & 0.68 \\
\textit{subdivision} & \textbf{0.77} & \textbf{4.7} & \textbf{0.84} & \textbf{0.81} & 0.39 & 7.9 & 0.65 & \textbf{0.51} & 0.49 & 6.7 & 0.64 & \textbf{0.57} & 0.39 & 14.7 & 0.52 & 0.59
\end{tabular}
\\[4pt]
\caption{
Reconstruction performance for four classes, with three different mesh parameterisations (\sect{generative}).
For each class, the first three columns are in the default setting of no pose supervision and correspond to the metrics in \sect{recon-results}; iou|$\theta$ is the IOU when trained with pose supervision. Higher is better for \textit{iou} and \textit{acc}; lower is better for \textit{err}. Experimental setting: single-view training, colour lighting, shading loss.
}
\label{tab:class-vs-paramn}
\end{table}
\begin{table}[t]
\small
\centering
\setlength{\tabcolsep}{2.1pt}
\rowcolors{3}{gray!20}{white}
\begin{tabular}{r|cccc|cccc|cccc|cccc}
& \multicolumn{4}{c|}{\textbf{car}} & \multicolumn{4}{c|}{\textbf{chair}} & \multicolumn{4}{c|}{\textbf{aeroplane}} & \multicolumn{4}{c}{\textbf{sofa}} \\[-3pt]
& \textit{iou} & \textit{err} & \textit{acc} & \textit{iou|$\theta$} &
\textit{iou} & \textit{err} & \textit{acc} & \textit{iou|$\theta$} &
\textit{iou} & \textit{err} & \textit{acc} & \textit{iou|$\theta$} &
\textit{iou} & \textit{err} & \textit{acc} & \textit{iou|$\theta$} \\
\hline
\textit{colour} & \textbf{0.77} & \textbf{4.7} & \textbf{0.84} & \textbf{0.81} & \textbf{0.46} & \textbf{4.6} & \textbf{0.69} & \textbf{0.51} & \textbf{0.51} & \textbf{4.4} & \textbf{0.89} & \textbf{0.57} & \textbf{0.59} & \textbf{7.3} & \textbf{0.94} & 0.69 \\
\textit{white} & 0.58 & 13.8 & 0.82 & 0.81 & 0.25 & 33.6 & 0.49 & 0.42 & 0.42 & 7.7 & 0.85 & 0.54 & 0.51 & 56.1 & 0.49 & \textbf{0.71} \\
\textit{cl.+sil.} & 0.46 & 65.2 & 0.29 & 0.64 & 0.28 & 51.7 & 0.35 & 0.48 & 0.20 & 17.8 & 0.57 & 0.47 & 0.27 & 89.8 & 0.15 & 0.57
\end{tabular}
\\[4pt]
\caption{
Reconstruction performance with different lighting and loss. \textit{colour} indicates three coloured directional lights with shading loss; \textit{white} indicates a single white directional light plus white ambient, with shading loss; \textit{cl.+sil.} indicates coloured lighting with only the silhouette used in the loss. Our model can exploit the extra information gained by considering shading in the loss, and coloured directional lighting helps further. Experimental setting: single-view training, best mesh parameterisations from \tab{class-vs-paramn}.
}
\label{tab:lighting}
\end{table}
\begin{table}[t]
\small
\centering
\setlength{\tabcolsep}{6pt}
\rowcolors{3}{gray!20}{white}
\begin{tabular}{r|cccc|cccc}
& \multicolumn{4}{c|}{\textbf{car}} & \multicolumn{4}{c}{\textbf{chair}} \\[-3pt]
& \textit{iou} & \textit{err} & \textit{acc} & \textit{iou|$\theta$} &
\textit{iou} & \textit{err} & \textit{acc} & \textit{iou|$\theta$} \\
\hline
\textit{single-view} & 0.77 & 4.7 & 0.84 & 0.81 & 0.46 & 4.6 & 0.69 & 0.51 \\
\textit{4-view train, 4-view test} & \textbf{0.83} & \textbf{2.6} & \textbf{0.94} & \textbf{0.86} & \textbf{0.51} & 4.7 & 0.72 & \textbf{0.55} \\
\textit{4-view train, 1-view test} & 0.81 & 5.1 & 0.93 & 0.83 & 0.46 & \textbf{2.5} & \textbf{0.78} & 0.50
\end{tabular}
\\[4pt]
\caption{
Reconstruction performance with multiple views at train/test time. Our model is able to exploit the extra information gained through multiple views, and can benefit even when testing with a single view. Experimental setting: best mesh parameterisations from \tab{class-vs-paramn}, colour lighting, shading loss.
}
\label{tab:multi-view}
\end{table}
\begin{table}[t]
\small
\renewcommand*{\arraystretch}{0.95}
\centering
\setlength{\tabcolsep}{6pt}
\rowcolors{1}{white}{gray!20}
\begin{tabular}{>{\itshape}c >{\itshape}c >{\itshape}c|cccc}
& \textbf{lighting} & \textbf{loss} & \textbf{car} & \textbf{chair} & \textbf{aeroplane} & \textbf{sofa} \\
\hline
DRC~\cite{tulsiani17cvpr} & white & silhouette & 0.73 & 0.43 & 0.50 & - \\
DRC~\cite{tulsiani17cvpr} & white & depth & 0.74 & 0.44 & 0.49 & - \\
PTN~\cite{yan16nips} & white & silhouette & 0.71 & 0.50 & 0.56 & 0.62 \\
PTN, our images & colour & silhouette & 0.66 & 0.22 & 0.42 & 0.46 \\
\hline
ours & white & silhouette & 0.71 & 0.25 & 0.53 & 0.68 \\
ours & white & shading & 0.79 & 0.44 & 0.54 & \textbf{0.69} \\
ours & colour & shading & \textbf{0.83} & \textbf{0.51} & \textbf{0.57} & \textbf{0.69} \\
\hline
PSG~\cite{fan17cvpr} & white & 3D & \textit{0.83} & \textit{0.54} & \textit{0.60} & \textit{0.71}
\end{tabular}
\\[4pt]
\caption{
Reconstruction performance (iou|$\theta$) in a setting matching \cite{tulsiani17cvpr,yan16nips} (multi-view training; best parameterisations from \tab{class-vs-paramn}), but with mesh output instead of voxels. \textit{PTN, our images} is running the unmodified public code of \cite{yan16nips} with its normal silhouette loss, on our coloured images. The final row shows performance of a state-of-the-art method~\cite{fan17cvpr} with full 3D supervision---note that our colour results are comparable with this, in spite of using only unannotated 2D images as supervision.
}
\label{tab:competitors}
\end{table}
\paragraph{Object classes and mesh parameterisations.}
\tab{class-vs-paramn} shows the performance of our model on four different classes, comparing the three mesh parameterisations of \sect{generative}.
This focuses on our default setting of colour lighting, shading loss, single-view training without pose supervision (columns \textit{iou, err, acc}); we also give \textit{iou} when trained with pose supervision (column \textit{iou}|$\theta$).
We see that different parameterisations are better suited to different classes, in line with our expectations.
Cars have smoothly curved edges, and are well-approximated by a single simply-connected surface; hence, \textbf{subdivision} performs well.
Chairs vary in topology (e.g. the back may be solid or slatted) and sometimes have non-axis-aligned surfaces, so the flexible \textbf{full-block} parameterisation performs best.
Aeroplanes have one dominant topology and include non-axis-aligned surfaces; both \textbf{full-block} and \textbf{subdivision} perform well here.
Sofas often consist of axis-aligned blocks, so the \textbf{ortho-block} parameterisation is expressive enough to model them.
We hypothesise that it performs better than the other more flexible parameterisations as it is easier for training to find a good solution in a more restricted representation space.
This is effectively a form of regularisation.
Overall, the best reconstruction performance is achieved for cars, which accords with \cite{tulsiani17cvpr,yan16nips,fan17cvpr}.
The low values of \textit{err} (and corresponding high values of \textit{acc}) indicate that the model has indeed learnt to disentangle pose from shape.
This is noteworthy given the model has seen only unannotated 2D images with arbitrary poses---disentanglement of these factors presumably arises because it is easier for the model to learn to reconstruct in a canonical frame, given that it is encouraged by our loss to predict diverse poses.
However, providing the ground-truth poses as input improves reconstruction performance further in almost all cases (column \textit{iou}|$\theta$ vs. \textit{iou}).
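For concreteness, the metrics referenced in these tables can be sketched as follows. This is a hypothetical minimal reimplementation, not the evaluation code used for the tables: we take \textit{iou} to be voxel intersection-over-union over occupancy grids, \textit{err} the geodesic rotation error in degrees, and \textit{acc} the fraction of instances under an assumed 30-degree threshold; the precise definitions are those of \sect{recon-results}.

```python
import math

# Illustrative sketch only; grid resolution, thresholds and the exact
# evaluation protocol are assumptions, not taken from the paper.

def voxel_iou(pred, gt):
    """IoU of two occupancy grids, given as flat 0/1 lists on the same grid."""
    inter = sum(1 for p, g in zip(pred, gt) if p and g)
    union = sum(1 for p, g in zip(pred, gt) if p or g)
    return inter / union

def rotation_error_deg(R_pred, R_gt):
    """Geodesic distance on SO(3) between two rotations (3x3 nested lists)."""
    # trace of R_pred @ R_gt^T
    tr = sum(R_pred[i][j] * R_gt[i][j] for i in range(3) for j in range(3))
    c = max(-1.0, min(1.0, (tr - 1.0) / 2.0))
    return math.degrees(math.acos(c))

def accuracy(errors_deg, threshold=30.0):
    """Fraction of predictions with rotation error below the threshold."""
    return sum(1 for e in errors_deg if e < threshold) / len(errors_deg)
```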
\paragraph{Benefit of lighting.}
\tab{lighting} shows how reconstruction performance varies with the different choices of lighting, \textbf{colour} and \textbf{white}, using \textbf{shading} loss.
Coloured directional lighting provides more information during training than white lighting, and the results are correspondingly better.
We also show performance with \textbf{silhouette} loss for coloured light.
This performs significantly worse than with shading in the loss, in spite of the input images being identical.
Thus, back-propagating information from shading through the renderer does indeed help with learning---it is not merely that colour images contain more information for the encoder network.
As in the previous experiment, we see that pose supervision helps the model (column \textit{iou}|$\theta$ vs. \textit{iou}).
In particular, only with pose supervision are silhouettes informative enough for the model to learn a canonical frame of reference reliably, as evidenced by the high median rotation errors without it (column \textit{err}).
\paragraph{Multi-view training/testing.}
\tab{multi-view} shows results when we provide 4 views of each object instance to the model.
Using 4 views at both training and testing time improves results in all cases---the model has learnt to exploit the additional information about each instance.
There is also a smaller performance improvement when we train with 4 views, but test with only one---although the network has not been optimised for the single-view task during training.
\paragraph{Comparison to previous works.}
\tab{competitors} compares our results with previous works.
Here, we conduct experiments in a setting matching \cite{tulsiani17cvpr,yan16nips}: multiple views at training time, with ground-truth pose supervision.
This shows that our results using meshes are roughly comparable with these previous works using voxels, even when only silhouette supervision is used (our results are worse on `chair', but better on `sofa').
Furthermore, when we add shading information to the loss (which these previous works cannot), our results show a significant improvement; coloured lighting helps even further.
We also show results for \cite{yan16nips} using our coloured lighting images as input, but their silhouette loss.
This performs worse than our method on the same images, again showing that shading in the loss is useful---our colour images are not simply more informative to the encoder network than those of \cite{yan16nips}.
Interestingly, when trained with shading or colour, our method outperforms \cite{tulsiani17cvpr} even when the latter is trained with depth information.
When trained with colour, our results are even close to \cite{fan17cvpr}, which is a state-of-the-art method trained with full 3D supervision.
\section{Conclusion}
We have presented a framework for generation and reconstruction of 3D meshes.
Our approach is flexible and supports many different supervision settings, including weaker supervision than any prior works (i.e. a single view per training instance, and without pose annotations).
Unlike prior works, we can exploit shading cues due to directional lighting; we have shown that this improves performance over silhouettes.
Moreover, performance is higher than that of a method with depth supervision~\cite{tulsiani17cvpr}, and even close to the state-of-the-art results using full 3D supervision~\cite{fan17cvpr}.
Finally, ours is the first method that can learn a generative model of 3D meshes, trained with only 2D images.
We have shown that use of meshes leads to more visually-pleasing results than prior voxel-based works~\cite{gadelha173dv}.
\section{Introduction}\label{sec:intro}
\emph{Catenoids} are among the simplest examples of a non-flat minimal hypersurface in Euclidean space. With respect to the Lorentzian generalization of the minimal hypersurface equation, which is a quasilinear wave equation that will be referred to as the \emph{hyperbolic vanishing mean curvature equation} (HVMC equation) in this paper, these minimal hypersurfaces furnish examples of nontrivial asymptotically flat time-independent solutions. From this point of view, a fundamental question is that of the \emph{nonlinear asymptotic stability} of catenoids as solutions to the HVMC equation -- this will be the subject of the present paper.
Our main result is the nonlinear asymptotic stability, modulo suitable translation and boost, of the $n$-dimensional catenoid as a solution to the HVMC equation with respect to a ``codimension-$1$'' set of initial data perturbations without any symmetry assumptions, for $n \geq 5$ (see Theorem~\ref{thm:main-0} below). The codimension-$1$ condition is necessary and sharp, in view of the fact that the linearized HVMC equation around the catenoid admits a one-parameter family of growing solutions corresponding to the negative eigenvalue of the stability operator (second variation of area). The necessity for an adjustment of the translation and boost parameters (i.e., \emph{modulation}) stems from the kernel of the linearized equation arising from Lorentz invariance. Our result extends the pioneering work of Donninger--Krieger--Szeftel--Wong \cite{DKSW}, which considers the same problem in radial symmetry for $n \geq 2$, to the non-symmetric context albeit for $n \geq 5$.
Beyond the intrinsic interest in the asymptotic stability problem for the catenoid, our motivation for this work is to take on specific challenges for soliton stability problems brought on by the quasilinear nature of the corresponding hyperbolic evolution equations.
We are hopeful for applications of our approach in the study of well-known topological solitons arising in quasilinear wave equations such as the Skyrmion for the Skyrme model.
\subsection{Stability Problems for the Hyperbolic Vanishing Mean Curvature Equation}
We begin by giving a precise formulation of the hyperbolic vanishing mean curvature equation.
Let $(\mathbb R^{1+(n+1)},\boldsymbol{\eta})$ be the $1+(n+1)$ dimensional Minkowski space with the standard metric
\begin{align*}
\begin{split}
\boldsymbol{\eta}=-\mathrm{d} X^0\otimes \mathrm{d} X^0+\mathrm{d} X^1\otimes \mathrm{d} X^1+\dots + \mathrm{d} X^{n+1}\otimes \mathrm{d} X^{n+1},
\end{split}
\end{align*}
and let $\mathcal M$ be an $n+1$ dimensional connected orientable manifold without boundary. We consider embeddings $\Phi:\mathcal M\to \mathbb R^{1+{(n+1)}}$ such that the pull-back metric $\Phi^\ast\boldsymbol{\eta}$ is Lorentzian (i.e., $\Phi(\mathcal M)$ is timelike), and which satisfy
\begin{align}\label{eq:HVMC1}
\begin{split}
\Box_{\Phi^\ast \boldsymbol{\eta}}\Phi=0.
\end{split}
\end{align}
The vector $\Box_{\Phi^\ast \boldsymbol{\eta}}\Phi$ is the \emph{mean curvature vector} of $\Phi(\mathcal M)$ as a hypersurface in $\mathbb R^{1+(n+1)}$, and equation~\eqref{eq:HVMC1} is the requirement that this hypersurface have vanishing mean curvature (VMC). Embeddings satisfying these requirements are called \emph{(timelike) maximal} and equation~\eqref{eq:HVMC1} is referred to as the hyperbolic vanishing mean curvature equation (HVMC equation). When there is no risk of confusion, by a slight abuse of notation, we will often identify $\mathcal M$ with its image $\Phi(\mathcal M)$ and simply refer to $\mathcal M$ as a hypersurface of $\mathbb R^{1+(n+1)}$. The HVMC equation is the hyperbolic analogue of the elliptic minimal surface equation (or the parabolic mean curvature flow), and arises variationally as the Euler-Lagrange equations of the area functional
\begin{align}\label{eq:area1}
\begin{split}
\mathcal A(\Phi)=\int_{\mathcal M}\sqrt{|\det \Phi^\ast\boldsymbol{\eta}|}.
\end{split}
\end{align}
Maximal hypersurfaces are also called \emph{membranes} when $n=2$, and \emph{strings} when $n=1$.
As \eqref{eq:HVMC1} is a system of wave equations, it is natural to consider the associated Cauchy problem which can be described as follows. Given a coordinate patch $U\subseteq \mathcal M$ with coordinates $s=(s^0,\dots,s^n)$, let $U_0:=\{s\in U~\vert~ s^0=0\}\subset U$, and consider two functions $\Phi_0,\Phi_1\colon U_0\to \mathbb R^{1+(n+1)}$. We assume that $\Phi_0$ is an embedding, that $\Phi_0^\ast \boldsymbol{\eta}$ is Riemannian, and that the metric
\begin{align*}
g_{\mu\nu}:=\begin{cases}
\boldsymbol{\eta}(\partial_{\mu}\Phi_0,\partial_\nu\Phi_0),\quad &\mu,\nu=1,\dots,n\\
\boldsymbol{\eta}(\Phi_1,\partial_\nu\Phi_0),\quad &\mu=0,\nu=1,\dots n\\
\boldsymbol{\eta}(\partial_\mu\Phi_0,\Phi_1),\quad &\mu=1,\dots n, \nu=0\\
\boldsymbol{\eta}(\Phi_1,\Phi_1),\quad &\mu=\nu=0
\end{cases}
\end{align*}
satisfies $\sup_{U_0}\det g<0$. We then ask if there is a neighborhood $V\subseteq U$ of $U_0$ such that there is a timelike embedding $\Phi\colon V\to \mathbb R^{1+(n+1)}$ satisfying \eqref{eq:HVMC1}, as well as $\Phi\vert_{U_0}=\Phi_0$ and $\partial_{0}\Phi\vert_{U_0}=\Phi_1$. Due to the diffeomorphism invariance of the problem, the solution cannot be unique. But, it is shown in \cite{AC2} that this problem admits a solution $\Phi$ and that any two solutions $\Phi$ and $\Psi$ are related by a diffeomorphism. In the present work we are interested in manifolds $\mathcal M$ that can be written as direct products $\mathcal M=\mathbb R\times M$ (in fact, we will soon restrict attention to the case where $M$ is a catenoid). In this case we use $(t,x)$ to denote points in $\mathbb R\times M$. Given $\Phi_0:M\to \{0\}\times \mathbb R^{n+1}\subseteq \mathbb R^{1+(n+1)}$ and a family of future directed timelike vectors $\Phi_1:M\to \mathbb R^{1+(n+1)}$, by finite speed of propagation and standard patching arguments, the result of \cite{AC2} implies the existence of an interval $I\ni 0$, and a unique solution $\Phi\colon I\times M\to\mathbb R^{1+(n+1)}$ to \eqref{eq:HVMC1} such that $\Phi(t,M)\subseteq \{t\}\times\mathbb R^{n+1}$, $\Phi(0)=\Phi_0$, and $\partial_t\Phi(0)=\Phi_1$.
Having a satisfactory local theory, one can consider the question of global (in time) dynamics of solutions to \eqref{eq:HVMC1}. For instance, in the context of the Cauchy problem formulated on $\mathbb R\times M$, one can ask if the local solution extends from $I\times M$ to all of $\mathbb R\times M$, and if so, how it behaves as $t\to\pm\infty$. A special class of maximal hypersurfaces for which the global dynamics are easily described are the products of Riemannian VMC surfaces in $\mathbb R^{n+1}$ with $\mathbb R$. More precisely, if $\Phi_0\colon M\to\mathbb R^{n+1}$ is a Riemannian embedding with vanishing mean curvature, then $\Phi\colon \mathbb R\times M\to\mathbb R^{1+(n+1)}$ given by $\Phi(t,x)=(t,\Phi_0(x))$ satisfies \eqref{eq:HVMC1} with $\Phi(0)=\Phi_0$ and $\partial_t\Phi(0)=(1,0)$. We refer to such product solutions as \emph{stationary} solutions.
A natural question regarding the long time dynamics of solutions of \eqref{eq:HVMC1} is the stability of stationary solutions. The simplest case is when $\Phi_0$ is a linear embedding of a hyperplane in $\mathbb R^{n+1}$. When $n\geq 3$, it was proved in \cite{B1} that small perturbations of a hyperplane solution lead to global solutions which decay back to a hyperplane. A similar result when $n=2$ was later proved in \cite{Lin1}. Analytically, the results in \cite{B1,Lin1} amount to proving global existence and decay estimates for solutions to a system of quasilinear wave equations with small initial data on Minkowski space. From this point of view, hyperplanes can be thought of as the zero solution to~\eqref{eq:HVMC1}.
The first stability result for a non-flat stationary solution of \eqref{eq:HVMC1} is contained in \cite{DonKrie1,DKSW} for the Lorentzian catenoid. The Riemannian catenoid is a VMC surface of revolution in $\mathbb R^{n+1}$ (see Section~\ref{subsec:Riemcat} for a more detailed description), and the Lorentzian catenoid is the corresponding stationary solution. The authors in \cite{DKSW} consider radial perturbations of the $(1+2)$ dimensional Lorentzian catenoid that satisfy an additional discrete symmetry\footnote{This is an important technical assumption, which avoids the resonances of the linearized operator in dimension two.}. Their main result asserts that if the initial data belong to a codimension one subset in an appropriate topology, then the corresponding solution can be extended globally and converges to a Lorentzian catenoid as $t\to\infty$. The codimension one restriction on the initial data is necessary and sharp (see the comments following Theorem~\ref{thm:main-0}). A similar result for radial perturbations of the Lorentzian helicoid was subsequently obtained in \cite{Marchali1}. From a PDE point of view, \cite{DKSW,Marchali1} establish the codimension one (asymptotic) stability of a time-independent solution to a quasilinear wave equation on Minkowski space under radial symmetry.
In this work we prove the codimension one stability of the $(1+n)$ dimensional Lorentzian catenoid in dimensions $n\geq 5$ without any symmetry restrictions on the perturbations. In the previous paragraphs we mentioned the results on the HVMC equation which are most directly related to our work. A more complete account is given in Section~\ref{subsubsec:morehistory} below. Before providing a simplified statement of our main theorem, we describe the catenoid solution in more detail in the next subsection.
\subsection{The Catenoid Solution}\label{subsec:Riemcat}
We recall some basic geometric properties of the catenoid.
In $\mathbb R^{n+1}$ let ${\underline X}=(X',X^{n+1})$ where $X'=(X^1,\dots,X^n)$ denote the first $n$ coordinates and $X^{n+1}$ the last coordinate. Suppose $I$ is an interval in $\mathbb R$ (possibly all of $\mathbb R$), which we identify with the $X^{n+1}$ axis, and let
\begin{align*}
\begin{split}
\mathfrak f: I \to [1,\infty)
\end{split}
\end{align*}
be a given even function with\footnote{One could also consider $\mathfrak f(0)=\mathfrak f_0>0$, corresponding to a different radius for the neck of the catenoid. But this can be reduced to the case considered here by a rescaling ${\underline X}\mapsto\lambda{\underline X}$.} $\mathfrak f(0)=1$, $\mathfrak f'(0)=0$. Consider the surface of revolution obtained by rotating the graph of
\begin{align}\label{eq:catpar}
\begin{split}
X^{n+1} \mapsto X' =(\mathfrak f(X^{n+1}),0,\dots,0)
\end{split}
\end{align}
about the $X^{n+1}$-axis. It can be parameterized as
\begin{align}\label{eq:Fdef}
\begin{split}
F:I\times \mathbb S^{n-1}\to \mathbb R^{n+1},\qquad F(\frakz,\omega)=(\mathfrak f(\frakz)\Theta(\omega),\frakz),
\end{split}
\end{align}
where $\Theta:\mathbb S^{n-1}\to \mathbb R^{n}$ is the standard embedding of the unit sphere. As a level set our surface is
\begin{align*}
\begin{split}
\{G({\underline X}):=|X'|^2-\mathfrak f^2(X^{n+1})=0\},
\end{split}
\end{align*}
and the unit outward normal is
\begin{align}\label{eq:normal}
\begin{split}
\mathfrak n= \frac{\nabla G}{|\nabla G|}= \frac{1}{(1+(\mathfrak f'(\frakz))^2)^{\frac{1}{2}}}(\Theta(\omega),-\mathfrak f'(\frakz)).
\end{split}
\end{align}
Below we identify $\partial_\frakz$ and $\partial_{a}$, $a\in \{\omega^1,\dots,\omega^{n-1}\}$, with their images in $\mathbb R^{n+1}$ under $dF$, that is,
\begin{align*}
\begin{split}
\partial_\frakz = (\mathfrak f'\Theta,1),\qquad \partial_{a}= (\mathfrak f \partial_{a}\Theta,0).
\end{split}
\end{align*}
Differentiating \eqref{eq:normal} with respect to the ambient covariant derivative $\nabla$ we find that
\begin{align*}
\begin{split}
\nabla_{\partial_\frakz}\mathfrak n = -\frac{\mathfrak f''}{(1+(\mathfrak f')^2)^{\frac{3}{2}}}\partial_\frakz,\qquad \nabla_{\partial_a}\mathfrak n = \frac{1}{\mathfrak f(1+(\mathfrak f')^2)^{\frac{1}{2}}}\partial_a.
\end{split}
\end{align*}
From this we see that the second fundamental form $ {\mathrm{I\!I}} $ of the surface, as a matrix with components $\angles{\nabla_{\partial_j} \mathfrak n}{\partial_i}$, $i,j\in\{\frakz,\omega^1,\dots,\omega^{n-1}\}$, is (here $\mathring{{\slashed{g}}}$ denotes the round metric on $\mathbb S^{n-1}$)
\begin{align*}
\begin{split}
{\mathrm{I\!I}} =\pmat{\lambda_1(1+(\mathfrak f')^2)&0\\0&\slashed{\lambda} \mathfrak f^2\mathring{{\slashed{g}}}},\qquad \lambda_1:=-\frac{\mathfrak f''}{(1+(\mathfrak f')^2)^{\frac{3}{2}}}, \quad \slashed{\lambda}:=\frac{1}{\mathfrak f(1+(\mathfrak f')^2)^{\frac{1}{2}}}.
\end{split}
\end{align*}
It follows that the principal curvatures of the surface are $\{\lambda_1, \dots,\lambda_n\}$ where $\lambda_1$ is as above and $\lambda_j=\slashed{\lambda}$ for $j\geq 2$. The mean curvature of the embedding is then
\begin{align*}
\begin{split}
{\bf H}=\frac{1}{n}\Bigg(-\frac{\mathfrak f''}{(1+(\mathfrak f')^2)^{\frac{3}{2}}}+\frac{n-1}{\mathfrak f(1+(\mathfrak f')^2)^{\frac{1}{2}}}\Bigg).
\end{split}
\end{align*}
Therefore, for the surface to have zero mean curvature, $\mathfrak f$ must satisfy the ODE
\begin{align}\label{eq:psiODE}
\frac{\mathfrak f''}{(1+(\mathfrak f')^2)^{\frac{3}{2}}}-\frac{n-1}{\mathfrak f(1+(\mathfrak f')^2)^{\frac{1}{2}}}=0,\qquad \mathfrak f(0)=1,\quad \mathfrak f'(0)=0.
\end{align}
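As an aside (a well-known fact, not used in what follows): for $n=2$, \eqref{eq:psiODE} is solved by $\mathfrak f(\frakz)=\cosh\frakz$, the classical catenoid profile. A minimal numerical sketch of the ODE residual, via central finite differences:

```python
import math

def vmc_residual(f, z, n, h=1e-4):
    # residual of f''/(1+f'^2)^{3/2} - (n-1)/(f (1+f'^2)^{1/2}),
    # with f' and f'' approximated by central differences of step h
    fp = (f(z + h) - f(z - h)) / (2.0 * h)
    fpp = (f(z + h) - 2.0 * f(z) + f(z - h)) / (h * h)
    return fpp / (1.0 + fp * fp) ** 1.5 - (n - 1) / (f(z) * (1.0 + fp * fp) ** 0.5)
```

For $\mathfrak f=\cosh$ and $n=2$ the residual vanishes up to the discretisation error.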
\begin{definition}
The Riemannian Catenoid is the surface of revolution ${\underline{\calC}}$ in $\mathbb R^{n+1}$ defined by~\eqref{eq:catpar}, where $\mathfrak f$ satisfies \eqref{eq:psiODE}. The Lorentzian Catenoid is the surface $\mathcal C:=\mathbb R\times {\underline{\calC}}$ in the Minkowski space $\mathbb R^{1+(n+1)}$.
\end{definition}
There is a qualitative difference between the shape of the catenoid in dimension $n=2$ and dimensions $n\geq3$. Indeed, \eqref{eq:psiODE} implies the following ODE for $\frakz$:
\begin{align*}
\begin{split}
\frac{\mathrm{d}^2 \frakz}{\mathrm{d} \mathfrak f^2}+\frac{n-1}{\mathfrak f}\frac{\mathrm{d}\frakz}{\mathrm{d} \mathfrak f}+\frac{n-1}{\mathfrak f}\big(\frac{\mathrm{d} \frakz}{\mathrm{d} \mathfrak f}\big)^3=0.
\end{split}
\end{align*}
From this, one can derive that $I=\mathbb R$ when $n=2$ and $I=(-S,S)$ when $n\geq 3$, where (see for instance \cite{Tam-Zhou} for more details on these calculations)
\begin{align}\label{eq:Sendpointdef1}
\begin{split}
S:=\int_{1}^{\infty}\frac{\mathrm{d} \mathfrak f}{\sqrt{\mathfrak f^{2(n-1)}-1}}<\infty.
\end{split}
\end{align}
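The finiteness of $S$ for $n\geq3$ can also be checked numerically. The sketch below (pure Python, trapezoidal rule; an illustration of \eqref{eq:Sendpointdef1}, not part of the argument) uses the substitutions $\mathfrak f=1+u^2$ near the neck, which removes the square-root singularity at $\mathfrak f=1$, and $\mathfrak f=1/s$ for the tail, which maps $[2,\infty)$ to the bounded interval $(0,\tfrac12]$:

```python
import math

def catenoid_S(n, m=20000):
    """S = int_1^infty df / sqrt(f^{2(n-1)} - 1), finite for n >= 3."""
    p = 2 * (n - 1)

    def g1(u):
        # f = 1 + u^2 covers f in [1, 2] (u in [0, 1]); df = 2u du;
        # the limit of the integrand at u = 0 is 2/sqrt(p)
        return 2.0 * u / math.sqrt((1.0 + u * u) ** p - 1.0) if u > 0 else 2.0 / math.sqrt(p)

    def g2(s):
        # f = 1/s covers f in [2, infinity) (s in (0, 1/2]);
        # the integrand becomes s^{n-3} / sqrt(1 - s^p)
        return s ** (n - 3) / math.sqrt(1.0 - s ** p)

    def trap(g, a, b):
        h = (b - a) / m
        return h * (0.5 * g(a) + 0.5 * g(b) + sum(g(a + i * h) for i in range(1, m)))

    return trap(g1, 0.0, 1.0) + trap(g2, 0.0, 0.5)
```

One finds, e.g., $S\approx 1.311$ for $n=3$, and $S$ decreases in $n$, as is clear from the integrand.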
As we will see more explicitly below, one significance of this difference for the analysis is that in dimension $n=2$, the zero modes for the linearized operator, which correspond to the symmetries of the ambient space, are not eigenfunctions (that is, they do not belong to $L^2$), but rather resonances. In this work we will consider only the high dimensional case $n\geq 5$, where the geometry of the catenoid approaches the flat geometry at a fast rate. To make these statements precise, we compute an expression for the induced metric on ${\underline{\calC}}$. Using polar coordinates $X'={\underline R} \underline{\Omega}$ for the first $n$ coordinates in the ambient $\mathbb R^{n+1}$ and denoting the $X^{n+1}$ coordinate by ${\underline Z}$, the ambient Euclidean metric becomes
\begin{align*}
\begin{split}
\mathrm{d} {\underline Z}^2+ \mathrm{d} {\underline R}^2 + {\underline R}^2\mathrm{d} \underline{\Omega}^2,
\end{split}
\end{align*}
where $\mathrm{d} \underline{\Omega}^2$ denotes the standard metric on $\mathbb S^{n-1}$. On ${\underline{\calC}}\cap\{{\underline Z}>0\}$ we introduce radial coordinates $({\underline r},{\underline{\theta}})\in[1,\infty)\times \mathbb S^{n-1}$ by
\begin{align}\label{eq:dZdr}
\begin{split}
({\underline r},{\underline{\theta}})\mapsto ({\underline Z}= {\underline Z}({\underline r}), {\underline R} = {\underline r},\underline{\Omega} = \Theta({\underline{\theta}})),\qquad \frac{\mathrm{d} {\underline Z}}{\mathrm{d} {\underline r}} = \frac{1}{\sqrt{{\underline r}^{2(n-1)}-1}}.
\end{split}
\end{align}
The induced Riemannian metric on ${\underline{\calC}}$ in these coordinates becomes
\begin{align*}
\begin{split}
\Big(1+\frac{1}{{\underline r}^{2(n-1)}-1}\Big)\mathrm{d} {\underline r}\otimes \mathrm{d} {\underline r}+{\underline r}^2 \mathring{{\slashed{g}}}_{ab}\mathrm{d}{\underline{\theta}}^a\otimes\mathrm{d}{\underline{\theta}}^b.
\end{split}
\end{align*}
Instead of the geometric radial coordinate function ${\underline r}$, it is more convenient to use $\rho \in(-\infty,\infty)$ with
\begin{align*}
\begin{split}
{\underline r}(\rho)=\jap{\rho}:=\sqrt{1+\rho^2}.
\end{split}
\end{align*}
The coordinates $(\rho,\omega)\in \mathbb R\times \mathbb S^{n-1}$, where $\omega={\underline{\theta}}$, now describe all of ${\underline{\calC}}$ (not just half) with ${\underline Z}={\underline Z}({\underline r}(\rho))$ if $\rho\geq0$ and ${\underline Z}= - {\underline Z}({\underline r}(\rho))$ if $\rho<0$. The metric on ${\underline{\calC}}$ in these coordinates becomes
\begin{align}\label{eq:RiemmCatmetric1}
\begin{split}
\frac{\rho^2\jap{\rho}^{2(n-2)}}{\jap{\rho}^{2(n-1)}-1}\mathrm{d} \rho\otimes\mathrm{d} \rho+\jap{\rho}^2\mathring{{\slashed{g}}}_{ab} \mathrm{d} \omega^a\otimes\mathrm{d}\omega^b.
\end{split}
\end{align}
Using $t$ to denote the variable in $\mathbb R$ in $\mathcal C=\mathbb R\times {\underline{\calC}}$, the Lorentzian metric on $\mathcal C$ becomes
\begin{align*}
\begin{split}
-\mathrm{d} t\otimes\mathrm{d} t+\frac{\rho^2\jap{\rho}^{2(n-2)}}{\jap{\rho}^{2(n-1)}-1}\mathrm{d} \rho\otimes\mathrm{d} \rho+\jap{\rho}^2 \mathring{{\slashed{g}}}_{ab} \mathrm{d} \omega^a\otimes\mathrm{d}\omega^b.
\end{split}
\end{align*}
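The $\rho\rho$-component in \eqref{eq:RiemmCatmetric1} follows from the chain rule, since $\mathrm{d}{\underline r}/\mathrm{d}\rho=\rho/\jap{\rho}$. As a quick sanity check (illustrative only; the function names are ours), one can compare the stated expression with the chain-rule form numerically:

```python
import math

def g_rhorho_stated(rho, n):
    # the component as stated: rho^2 <rho>^{2(n-2)} / (<rho>^{2(n-1)} - 1),
    # with <rho> = sqrt(1 + rho^2)
    r = math.sqrt(1.0 + rho * rho)
    return rho * rho * r ** (2 * (n - 2)) / (r ** (2 * (n - 1)) - 1.0)

def g_rhorho_chain(rho, n):
    # chain rule from the r-coordinate metric:
    # (1 + 1/(r^{2(n-1)} - 1)) * (dr/drho)^2, with dr/drho = rho/<rho>
    r = math.sqrt(1.0 + rho * rho)
    return (1.0 + 1.0 / (r ** (2 * (n - 1)) - 1.0)) * (rho / r) ** 2
```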
From the second variation of the area functional one can see that the stability, or linearized, operator for the catenoid as a minimal surface is $\Hbar:=\Delta_{\underline{\calC}}+ | {\mathrm{I\!I}} |^2$ (respectively, $\Box_\mathcal C+| {\mathrm{I\!I}} |^2$ in the Lorentzian case), where $\Delta_{\underline{\calC}}$ denotes the Laplacian on ${\underline{\calC}}$ (respectively, $\Box_\mathcal C=-\partial_t^2+\Delta_{\underline{\calC}}$ denotes the d'Alembertian on $\mathcal C$). See for instance \cite{Tam-Zhou,F-CS}. As mentioned earlier, it is shown in \cite{F-CS,Tam-Zhou} that $\Hbar$ admits a unique positive eigenvalue, indicating the instability of the catenoid as a minimal surface:
\begin{align*}
\begin{split}
\Hbar \underline{\varphi}_\mu=\mu^2\underline{\varphi}_\mu.
\end{split}
\end{align*}
Heuristically, this instability corresponds to the shrinking of the neck of the catenoid. See for instance~\cite{KL1,DKSW} for more discussion on this. On the other hand, since every translation of ${\underline{\calC}}$ in the ambient $\mathbb R^{n+1}$ is another minimal surface (another catenoid), by differentiating in the translation parameter one obtains $n+1$ zero modes of $\Hbar$. Explicitly, in the $(\rho,\omega)$ coordinates above, these are given by
\begin{align*}
\begin{split}
{\underline e}_j=\frac{\Theta^j(\omega)}{\jap{\rho}^{n-1}},\quad 1\leq j\leq n,\qquad {\underline e}_{n+1}=\frac{\sqrt{\jap{\rho}^{2(n-1)}-1}}{\jap{\rho}^{n-1}},
\end{split}
\end{align*}
corresponding to translations in the direction $\frac{\partial}{\partial X^j}$ respectively. In general, the zero mode ${\underline e}_{n+1}$, corresponding to translation in the direction of the axis of symmetry, does not belong to $L^2$ for any $n\geq 2$. For the other directions, ${\underline e}_j$ are in $L^2$ when $n\geq3$, while they logarithmically fail to be in $L^2$ when $n=2$. In the Lorentzian case, the Lorentz boosts of the ambient $\mathbb R^{1+(n+1)}$ give the additional zero modes $t{\underline e}_j$ of $\Box_\mathcal C+| {\mathrm{I\!I}} |^2$, which, for $1\leq j\leq n$ and $n\geq 3$, are referred to as the \emph{generalized eigenfunctions} of the linear operator.\footnote{Ambient rotations about the axis of symmetry map ${\underline{\calC}}$ to itself, so differentiation along the rotation parameter yields the trivial zero mode ${\underline e}=0$ for $\Hbar$. Similarly for translations along the time axis in the Lorentzian case. When $n\geq 3$, scaling changes the value of $S$ in \eqref{eq:Sendpointdef1} and differentiation in the scaling parameter yields a zero solution which is neither an eigenfunction nor a resonance.} See Section~\ref{subsec:eigenfunctions} for the discussion of eigenfunctions and generalized eigenfunctions in the context of the first order formulation.
We end this subsection by giving a more explicit description of the boosted and translated catenoid, which will be needed for the statement of our theorem. For any $\ell_0\in \mathbb R^n$ let $P_{\ell_0}$ denote the orthogonal projections in the direction of $\ell_0$, and $P_{\ell_0}^\perp$ the orthogonal projection to the complement. Here and below, by a slight abuse of notation, we view $\mathbb R^n$ as a subset of $\mathbb R^{n+1}$ using the embedding $X'\mapsto (X',0)^\intercal$. The corresponding ambient Lorentz boost $\Lambda_{\ell_0}$ is defined by
\begin{align*}
\begin{split}
\Lambda_{\ell_0}=\pmat{\gamma_0&-\gamma_0\ell_0^\intercal\\-\gamma_0 \ell_0&\gamma_0 P_{\ell_0}+P_{\ell_0}^\perp},\quad \gamma_0=\frac{1}{\sqrt{1-|\ell_0|^2}}.
\end{split}
\end{align*}
The Lorentzian catenoid boosted by $\ell_0\in \mathbb R^n$ and translated by $a_0\in \mathbb R^n$ is the following HVMC submanifold of $\mathbb R^{1+(n+1)}$,
\begin{align*}
\begin{split}
\mathcal C_{a_0,\ell_0}:=\{\Lambda_{-\ell_0} X~\vert~ X\in \mathcal C\}+\pmat{0\\a_0}.
\end{split}
\end{align*}
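The block formula for $\Lambda_{\ell_0}$ can be sanity-checked numerically. Using $\gamma_0 P_{\ell_0}+P_{\ell_0}^\perp=\mathrm{Id}+(\gamma_0-1)P_{\ell_0}$, the sketch below (illustrative only) builds $\Lambda_{\ell_0}$ and verifies the defining relation $\Lambda_{\ell_0}^\intercal\boldsymbol{\eta}\,\Lambda_{\ell_0}=\boldsymbol{\eta}$; since $\mathbb R^n\subset\mathbb R^{n+1}$ via $X'\mapsto(X',0)^\intercal$, the last spatial row and column are those of the identity:

```python
import math

def lorentz_boost(l):
    """Boost matrix Lambda_l on R^{1+(n+1)} for a velocity l in R^n, |l| < 1."""
    n = len(l)
    v2 = sum(x * x for x in l)
    gamma = 1.0 / math.sqrt(1.0 - v2)
    d = n + 2  # one time direction plus (n+1) spatial directions
    L = [[1.0 if i == j else 0.0 for j in range(d)] for i in range(d)]
    if v2 == 0.0:
        return L  # no boost
    L[0][0] = gamma
    for i in range(n):
        L[0][i + 1] = L[i + 1][0] = -gamma * l[i]
        for j in range(n):
            # gamma P + P^perp = Id + (gamma - 1) P, P the projection onto l
            L[i + 1][j + 1] = (1.0 if i == j else 0.0) + (gamma - 1.0) * l[i] * l[j] / v2
    # the last spatial direction (the catenoid axis) is left untouched
    return L
```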
\subsection{First Statement of the Main Result}\label{sec:introfirststatement}
We are now ready to give a first formulation of the main result of this paper. For a manifold $\mathcal X$, we will use the notation $\mathcal T_p\mathcal X$ to denote the tangent space at $p\in \mathcal X$. We also use the parameterization $F\colon I\times \mathbb S^{n-1}\to \mathbb R^{n+1}$ introduced in \eqref{eq:Fdef}.
\begin{theorem}\label{thm:main-0} Let $\Phi_0\colon I\times \mathbb S^{n-1} \to\{0\}\times\mathbb R^{n+1}$, $n\geq 5$, be an embedding and~$\Phi_1\colon I\times \mathbb S^{n-1}\to \mathbb R^{1+(n+1)}$ be a family of future directed timelike vectors such that $\Phi_0=F$ and $\Phi_1=(1,0)$ outside of a compact set. Suppose $\Phi_0$ and $\Phi_1$ belong to an appropriate codimension-1 subset in a suitable topology, and are sufficiently close to $F$ and $(1,0)$, respectively, in this topology. Then there is a unique complete timelike VMC hypersurface $\mathcal M$ in $\mathbb R^{1+(n+1)}$ such that $\mathcal M\cap \{X^0=0\}=\Phi_0({\underline{\calC}})$ and $\mathcal T_{\Phi_0(p)}\mathcal M$ is spanned by $\mathrm{d}_p \Phi_0(\mathcal T_p{\underline{\calC}})$ and $\Phi_1(p)$ for any $p\in I \times \mathbb S^{n-1}$. Moreover, there exist $a_0,\ell_0\in \mathbb R^n$ such that the ambient Euclidean distance between $\mathcal M\cap\{X^0=t\}$ and $\mathcal C_{a_0,\ell_0}\cap \{X^0=t\}$ tends to zero as $t\to\infty$.
\end{theorem}
A more precise version of the theorem will be stated as Theorem~\ref{thm:main} below. We pause to make a few comments.
\begin{itemize}[leftmargin=*]
\item[{\bf{1.}}] In view of the growing mode of the stability operator (see Section~\ref{subsec:Riemcat}), the codimension-1 restriction on the data in Theorem~\ref{thm:main} is optimal. See for instance \cite{KL1}. However, we do not pursue the question of uniqueness or regularity of the codimension-1 set in the initial data topology. See Item (3) in Section~\ref{subsubsec:introoverallscheme} for more on this point.
\item [{\bf{2.}}] The fact that the solution approaches a boosted and translated catenoid is related to the presence of a non-trivial kernel and generalized kernel for the linearized operator. As we saw in Section~\ref{subsec:Riemcat}, the kernel and generalized kernel are generated by the translation and boost symmetries. Therefore, to obtain decay for the perturbative part of the solution we need to choose the translation and boost parameters dynamically (modulation) to stay away from the kernel and generalized kernel. To give a more precise description of how we track these parameters, we need to decompose the solution into a \emph{profile} and a perturbation, and set up a first order formulation of the problem. These aspects are summarized in Sections~\ref{sec:profileintro} and~\ref{subsec:introfirstorderandcodim}. In Remark~\ref{rem:parametersintro1} in Section~\ref{subsec:introsecondthm} we give more precise references for how the parameters are tracked. Translation and boost symmetries are common features of quasilinear soliton stability problems that arise from Lorentz invariant theories. Developing a robust modulation approach for translation invariant quasilinear wave equations is one of the main achievements of this work. In this direction, our novel profile construction plays a central role. We hope that our methods will find applications in other quasilinear soliton stability problems.
\item [{\bf{3.}}] The assumption $\Phi_0=F$ and $\Phi_1=(1,0)$ outside a compact set can be replaced by sufficient decay at spatial infinity. Indeed, outside an ambient cone ${\mathscr{C}}$ with vertex at $(-R,0)$, with $R$ sufficiently large, the problem reduces to a quasilinear wave equation on Minkowski space. By finite speed of propagation, this problem can be analyzed separately in this region, for instance using the vectorfield $(t-r)^{p} \partial_{t}$. This will lead to suitably decaying and small data on the cone ${\mathscr{C}}$ which can be taken as the starting point of the analysis in this paper.
\item [{\bf{4.}}] We consider dimensions $n\geq5$ in this work, because this range provides a more accessible analytic setting in which to approach some of the structural challenges in quasilinear soliton stability problems. Specifically, this restriction has the following two advantages: (i) The faster spatial decay rate of the difference between the catenoid and flat metrics (faster decay of the tail of the soliton) amounts to weaker interactions between the profile and the radiation. (ii) The faster time decay of waves in higher dimensions allows us to directly obtain that the time derivatives of the boost and translation parameters are twice integrable. This strong decay enters in proving integrated local energy decay for the perturbative part of the solution. See Section~\ref{subsec:ideas-iled}. The case $n=2$ (outside of radial symmetry) poses the additional challenge that the zero modes corresponding to translations and boosts are no longer eigenfunctions, but rather resonances (see Section~\ref{subsec:Riemcat}). We expect that the general scheme in this paper is applicable to dimensions $n=3,4$ and hope to address these cases in future work.
\item[{\bf{5.}}] The minimal surface equation is widely studied in Riemannian geometry and calculus of variations. In particular, the spectral properties of the stability operator for the catenoid are well-understood. See for instance \cite{F-CS,Tam-Zhou}. This makes our problem a natural starting point for the study of asymptotic stability of solitons in quasilinear wave equations.
\end{itemize}
\subsection{Overall Scheme and Main Difficulties} \label{subsec:ideas-outline}
In Sections~\ref{sec:profileintro}--\ref{subsec:introoutlinerp} below, we will describe the main ideas for the proof of Theorem~\ref{thm:main-0}. Before we do so, let us begin with an executive summary of the overall scheme, as well as a discussion of the main difficulties in the proof.
\subsubsection{Overall scheme}\label{subsubsec:introoverallscheme} The overall scheme of our proof of Theorem~\ref{thm:main-0} is as follows:
\begin{enumerate}
\item {\it Decomposition of solution.} The basic idea is to make the (formal) decomposition
\begin{equation} \label{eq:basic-decomp}
\hbox{(Solution)} = \underbrace{\mathcal Q}_{\hbox{profile}} \hbox{``}+\hbox{''} \underbrace{\psi}_{\hbox{perturbation}}
\end{equation}
This decomposition will be made precise in Section~\ref{sec:profileintro}.
A key goal is to show that the perturbation $\psi$ decays to zero as $t \to \infty$ in a suitable sense (see Item~(4) and Section~\ref{subsec:introoutlinerp}). In the absence of any obstructions, the profile $\mathcal Q$ would be the object that we wish to prove the asymptotic stability of -- the catenoid in our case. However, as discussed earlier, the linearized HVMC equation around the catenoid, $(-\partial_{t}^{2} + \underline{H}) \psi = 0$, admits non-decaying solutions, namely (i)~a $1$-dimensional family of exponentially growing solutions, which arises from the simple negative eigenvalue of $\underline{H}$, and (ii)~a $2n$-dimensional family of solutions growing at most linearly in $t$, which arises from the $n$-dimensional kernel of $\underline{H}$ generated by translational symmetries. To avoid these obstructions, we employ the ideas of \emph{modulation} and \emph{shooting}, which we turn to now.
\smallskip
\item {\it Modulation.} To ensure transversality to the $2n$-dimensional family of non-decaying solutions in (ii), we impose $2n$ orthogonality conditions on $\psi$ at each time. To compensate for such a restriction, we allow the profile $\mathcal Q$ to depend on $2n$ time-dependent parameters. Since the family in (ii) arises from translational symmetries, it is natural to introduce an $n$-dimensional \emph{position} vector $\xi(\sigma) = (\xi^{1}, \ldots, \xi^{n})(\sigma)$, an $n$-dimensional \emph{velocity} (or \emph{boost}) vector $\ell(\sigma) = (\ell^{1}, \ldots, \ell^{n})(\sigma)$, a foliation $\sigma$ (whose leaves will represent an appropriate notion of time for our problem) and an \emph{approximate solution} (or \emph{profile}) $\mathcal Q = \mathcal Q(\xi(\cdot), \ell(\cdot))$ to HVMC that represents ``a moving catenoid at position $\xi(\sigma)$ with velocity $\ell(\sigma)$ at each time $\sigma$''. Appropriate choices of the profile $\mathcal Q$, the foliation $\sigma$ and the $2n$ orthogonality conditions would lead, upon combination with the HVMC equation, to $2n$ equations that dictate the evolution of $(\xi(\sigma), \ell(\sigma))$ in terms of $\psi$.
\smallskip
\item {\it Shooting argument.} To avoid the exponential growth stemming from obstruction (i), we further decompose the perturbation $\psi$ as follows:
\begin{equation*}
\psi = a_{+}(\sigma) Z_{+} + a_{-}(\sigma) Z_{-} + \phi,
\end{equation*}
where $Z_{+}$ and $Z_{-}$ are uniformly bounded functions, $a_{+}(\sigma)$ and $a_{-}(\sigma)$ obey ODEs in $\sigma$ with growing and damping linear parts, respectively, and $\phi$ obeys $2n+2$ orthogonality conditions so as to be transversal (to a sufficient extent) to all possible linear obstructions to decay at each time\footnote{Strictly speaking, the term $a_{-}(\sigma) Z_{-}$ decays forward-in-time, so the reader may wonder why we also took it out of $\phi$ by imposing $2n+2$ orthogonality conditions (as opposed to $2n+1$, the dimension of the space of forward-in-time non-decaying solutions to $(-\partial_{t}^{2} + \underline{H}) \psi = 0$). The reason is that we want $\phi$ to exhibit only dispersive behaviors like $\mathbb P_{c} \psi$ in the simple example $(-\partial_{t}^{2} + \underline{H}) \psi = 0$.}. If $\mathcal Q$ were simply the Lorentzian catenoid, then we may choose $a_{+} Z_{+} + a_{-} Z_{-}$ and $\phi$ to be the $L^{2}$-orthogonal projections of $\psi$ to the negative eigenvalue and the absolutely continuous spectrum of $\underline{H}$, respectively.
By analyzing the ODE for $a_{-}$, the modulation equations for $(\xi, \ell)$, and the wave equation for $\phi$ (see (4) below), we will show that $\dot{\xi} - \ell$, $\dot{\ell}$ and $\psi$ decay as long as the unstable mode $a_{+}(\sigma)$ satisfies the so-called \emph{trapping assumption}, which roughly says that $a_{+}(\sigma)$ decays in time. We then employ a topological \emph{shooting argument} to select a family of initial data -- which is codimension $1$ in the sense described below in Theorem~\ref{thm:main} and \eqref{eq:shootingbdata1} -- such that $a_{+}$ indeed continues to satisfy the trapping assumption for all times $\sigma \geq 0$.
\smallskip
\item {\it Integrated local energy decay and vectorfield method.} Finally, we study the quasilinear wave equation satisfied by $\phi$, which also satisfies $2n+2$ orthogonality conditions. Under suitable bootstrap assumptions (to handle nonlinear terms) and the trapping assumption for $a_{+}$ (see (3)), we prove the pointwise decay of $\phi$ via the following steps:
\begin{align*}
&\hbox{(transversality to linear obstructions)} \\
&\Rightarrow \hbox{(uniform boundedness of energy and integrated local energy decay (ILED))} \\
& \Rightarrow
\hbox{(pointwise decay)}
\end{align*}
Here, integrated local energy decay estimates (ILED; also known as Morawetz estimates) refer to, roughly speaking, bounds on integrals of the energy density on spacetime cylinders for finite energy solutions. They are a weak form of dispersive decay. These have the advantage of being $L^{2}$-based and hence being amenable to a wide range of techniques, such as the vectorfield method, the Fourier transform, spectral theory, etc. A powerful philosophy that has recently arisen in the works \cite{DR1, Tat2, MTT, OlSt} related to the problem of black hole stability is to view integrated local energy decay as a key intermediate step for obtaining stronger pointwise decay (see also \cite{Tat1, MeTa} for papers in the related context of global Strichartz estimates). Specifically, in our proof we adapt the $r^{p}$-method of Dafermos--Rodnianski \cite{DR1}, extended by Schlue \cite{Schlue1} and Moschidis \cite{Moschidis1}. See Section~\ref{subsec:introoutlinerp} for further discussions.
\end{enumerate}
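As a schematic model for the interplay of Items (3) and (4), the unstable and stable modes satisfy ODEs of the form (the following display is purely illustrative: $\mu>0$ stands for the rate associated with the negative eigenvalue of $\underline{H}$, and $N_{\pm}$ collect nonlinear and coupling terms)
\begin{align*}
\begin{split}
\dot a_+=\mu a_++N_+,\qquad \dot a_-=-\mu a_-+N_-,
\end{split}
\end{align*}
so that by Duhamel's formula $a_+(\sigma)=e^{\mu\sigma}\big(a_+(0)+\int_0^\sigma e^{-\mu s}N_+(s)\,\mathrm{d} s\big)$. If $N_+$ decays, there is exactly one value of the initial data, $a_+(0)=-\int_0^\infty e^{-\mu s}N_+(s)\,\mathrm{d} s$, for which the exponentially growing part cancels and $a_+$ decays; this single scalar constraint is the heuristic source of the codimension-1 restriction on the data.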
\subsubsection{Main difficulties}\label{subsubsec:intromaindiff} We face several significant challenges in implementing the above scheme for our problem. A summary of the main difficulties is as follows:
\begin{itemize}
\item {\it (Quasilinearity)} First and foremost, the hyperbolic vanishing mean curvature equation is \emph{quasilinear}. While the stability property of the linearized equation $(-\partial_{t}^{2} + \underline{H}) \phi = f$ around the Lorentzian catenoid $\mathcal C$ is well-understood, upgrading it to the nonlinear asymptotic stability result Theorem~\ref{thm:main-0} involves a number of serious difficulties; specifically, see {\it (Proof of integrated local energy decay)} below. Furthermore, since the highest order term is nonlinear, at various places in the proof we need to be careful to avoid any derivative losses.
\item {\it (Gauge choice)} Another basic point about HVMC is that it is an equation for a geometric object, namely a hypersurface in $\mathbb R^{1+(n+1)}$. Hence, we need to fix a way of describing the hypersurface by a function to perform any analysis -- this is the well-known problem of \emph{gauge choice}.
\item {\it (Profile and foliation construction)} In order for the above scheme to work, it is crucial for the profile $\mathcal Q$, representing a moving catenoid at position $\xi(\sigma)$ with velocity $\ell(\sigma)$ at each time $\sigma$, to solve HVMC up to an adequately small error. Unfortunately, the obvious construction based on the standard $t = X^{0}$-foliation would lead to an inappropriately large error. The key issue is the inaccuracy of the construction in the far-away region (i.e., as $\rho \to \pm \infty$), which is fatal due to the slow spatial decay of the catenoid (i.e., mere polynomial decay towards the flat hyperplane as $\rho \to \pm \infty$). As we will see, we are led to consider a different foliation $\sigma$ consisting of \emph{moving asymptotically null leaves}; see Section~\ref{sec:profileintro}.
\item {\it (Proof of integrated local energy decay)} Existing methods \cite{MMT1, MST}, combined with the detailed knowledge of the spectral properties of the stability operator $\underline{H}$ for $\underline{\mathcal C}$ \cite{F-CS, Tam-Zhou}, establish integrated local energy decay for the linearized equation $(-\partial_{t}^{2} + \underline{H}) \phi = f$ around the Lorentzian catenoid $\mathcal C$ when $\phi = \mathbb P_{c} \phi$ ($L^{2}$-projection to the absolutely continuous spectrum). See Section~\ref{subsec:iled-catenoid}. Transferring this estimate to the solution $\phi$ to the quasilinear wave equation satisfying our orthogonality conditions, however, is met with several difficulties, such as (i) quasilinearity, (ii) existence of a trapped null geodesic (traveling around the collar $\set{\rho = 0}$ in the case of $\mathcal C$), (iii) existence of zero and negative eigenvalues of $\underline{H}$ (what we referred to as linear obstructions to decay) and (iv) nonstationarity of the profile $\mathcal Q$.
\item {\it (Modulation theory and vectorfield method)} Standard modulation theory \cite{Stuart1, Weinstein1} is based on the standard $t = X^{0}$-foliation, whose leaves are flat spacelike hypersurfaces; the method needs to be adapted to the foliation $\sigma$ used in our profile construction. Similarly, the standard $r^{p}$-method utilizes a foliation consisting of \emph{non}-moving asymptotically null leaves \cite{DR1, Schlue1, Moschidis1}, which needs to be adapted to our foliation $\sigma$ of moving asymptotically null leaves. The presence of linear obstructions to decay (i.e., zero and negative eigenvalues for $\underline{H}$) also needs to be incorporated.
\end{itemize}
In Sections~\ref{sec:profileintro}--\ref{subsec:introoutlinerp}, we describe the main ideas in this paper for resolving the above difficulties.
\subsection{Profile, Foliation and Gauge Construction}\label{sec:profileintro}
Here we describe our profile $\mathcal Q$ and foliation $\sigma$, as well as the gauge we use to express our solution as a scalar function $\psi$ on $\mathcal Q$; these constructions make the basic decomposition \eqref{eq:basic-decomp} precise. Since this decomposition is needed for the discussion of other parts of our proof, we shall give the full construction here.
Let $\xi = \xi(\sigma)$ and $\ell = \ell(\sigma)$ be two functions on an interval $I \subseteq \mathbb R$ with values in $\mathbb R^n$, satisfying $|\ell|,|{\dot{\xi}}|<1$.
By a slight abuse of notation, we will always view $\xi$ and $\ell$ as vectors both in $\mathbb R^n$ and in $\mathbb R^{n+1}$, using the embedding of $\mathbb R^{n}$ in $\mathbb R^{n+1}$ given by $X'\mapsto (X',0)^\intercal$. Denoting the orthogonal projection in the direction of $\ell$ by $P_\ell$ and the projection on the orthogonal complement by $P_\ell^\perp$, the linear boost in the ambient $\mathbb R^{1+(n+1)}$ in the direction of $\ell$ is
\begin{align}\label{eq:Lambdadef1}
\begin{split}
\Lambda_\ell=\pmat{\gamma&-\gamma\ell^\intercal\\-\gamma \ell&A_\ell},\qquad A_\ell= \gamma P_\ell+P_\ell^\perp,\quad \gamma=\frac{1}{\sqrt{1-|\ell|^2}},
\end{split}
\end{align}
and the inverses of $\Lambda_\ell$ and $A_\ell$ are
\begin{align*}
\begin{split}
\Lambda_\ell^{-1}=\Lambda_{-\ell},\qquad A_\ell^{-1}=\gamma^{-1}P_\ell+P_\ell^\perp.
\end{split}
\end{align*}
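These identities can be verified directly: since $P_{-\ell}=P_\ell$, we have $A_{-\ell}=A_\ell$, and using $\ell^\intercal A_\ell=\gamma\ell^\intercal$, $A_\ell\ell=\gamma\ell$, $\ell\ell^\intercal=|\ell|^2P_\ell$ and $A_\ell^2=\gamma^2P_\ell+P_\ell^\perp$, a block-by-block computation gives
\begin{align*}
\begin{split}
\Lambda_\ell\Lambda_{-\ell}=\pmat{\gamma^2(1-|\ell|^2)&\gamma^2\ell^\intercal-\gamma\ell^\intercal A_\ell\\ \gamma A_\ell\ell-\gamma^2\ell&A_\ell^2-\gamma^2\ell\ell^\intercal}=\pmat{1&0\\0&P_\ell+P_\ell^\perp}=\mathrm{Id},
\end{split}
\end{align*}
where the bottom-right block uses $A_\ell^2-\gamma^2\ell\ell^\intercal=\gamma^2(1-|\ell|^2)P_\ell+P_\ell^\perp=P_\ell+P_\ell^\perp$. The formula for $A_\ell^{-1}$ follows similarly from $P_\ell^2=P_\ell$, $(P_\ell^\perp)^2=P_\ell^\perp$ and $P_\ell P_\ell^\perp=0$.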
Recall that ${\underline{\calC}}$ denotes the Riemannian catenoid in $\mathbb R^{n+1}$, and $\mathcal C=\mathbb R\times {\underline{\calC}}$ the product Lorentzian catenoid in $\mathbb R^{1+(n+1)}$.
Given two functions $\xi(\cdot)$ and $\ell(\cdot)$ as above, let
\begin{align}\label{eq:calCsigmadefintro1}
\begin{split}
\mathcal C_{\sigma}:=\{\Lambda_{-\ell(\sigma)} X~\vert~ X\in \mathcal C\}+\pmat{0\\\xi(\sigma)-\sigma\ell(\sigma)}.
\end{split}
\end{align}
Note that if ${\dot{\ell}}\equiv0$ and $\xi(\sigma)\equiv \sigma\ell+a_0$ for a constant vector $a_0\in \mathbb R^n\subseteq\mathbb R^{n+1}$, then $\mathcal C_\sigma$ is the Lorentzian catenoid obtained by boosting $\mathcal C$ by $\Lambda_{-\ell}$ and then translating the result by $a_0$.
We will assume that $|\ell|,|{\dot{\xi}}|<1$ and that $|\ell(0)|,|\xi(0)|$, and $|{\dot{\ell}}|$ are sufficiently small\footnote{The smallness assumptions are not essential and are a consequence of how we have set things up. The smallness requirement on $|{\dot{\ell}}|$ is to guarantee that the curve $\xi-\gamma R\ell$ is timelike. The smallness conditions on $|\ell(0)|$ and $|\xi(0)|$ are so that ${\tilde{\boldsymbol{\Upsigma}}}_0$ contains ${\mathscr{C}}_{-R}$. If we remove these assumptions we simply need to take $R$ larger and replace $R-2$ by $R-C$ for a larger constant $C$ in the definition of ${\mathscr{C}}_{-R}$. In our applications the smallness of the initial data and the bootstrap assumptions imply all the smallness conditions required here.}. We will first define a foliation of the interior of the ambient cone
\begin{align*}
\begin{split}
{\mathscr{C}}_{-R} =\{X\in\mathbb R^{1+(n+1)}~\vert~ X^0+R-2=|{\underline X}|\}
\end{split}
\end{align*}
as $\cup_\sigma {\boldsymbol{\Upsigma}}_\sigma$, and then define the profile of our solution on the leaf ${\boldsymbol{\Upsigma}}_\sigma$ to be $\mathcal C_\sigma\cap{\boldsymbol{\Upsigma}}_\sigma$. Note that we will restrict attention to compactly supported perturbations\footnote{This simplifying assumption is not essential for the proof. See the third comment following the statement of Theorem~\ref{thm:main-0}.} of $\mathcal C$ (in a suitable sense to be described below), so by finite speed of propagation we already know the form of our solution in the exterior of ${\mathscr{C}}_{-R}$. The leaves ${\boldsymbol{\Upsigma}}_\sigma$ will be chosen to be asymptotically null, more precisely hyperboloidal, away from the moving center $\xi(\sigma)$. To define this foliation, we first fix reference hyperboloids defined as (here $\gamma$ is evaluated at~$\sigma$)
\begin{align*}
\begin{split}
\mathcal H_\sigma = \{y=(y^0,y',y^{n+1})~\vert~y^0-\gamma^{-1}\sigma+R=\sqrt{|y'|^2+1}\}.
\end{split}
\end{align*}
In general, we will denote the restriction to $\{X^{n+1}=S\}$ by an underline, so (a similar construction can be carried out with respect to $\{X^{n+1}=-S\}$)
\begin{align*}
\begin{split}
{\underline{\calH}}_\sigma = \mathcal H_\sigma\cap\{y^{n+1}=S\}.
\end{split}
\end{align*}
The boosted and translated hyperboloids are denoted by ${\tilde{\boldsymbol{\Upsigma}}}_\sigma$, that is (here $\ell$ and $\xi$ are evaluated at~$\sigma$),
\begin{align}\label{eq:tilSigmadefintro1}
\begin{split}
{\tilde{\boldsymbol{\Upsigma}}}_\sigma=\big(\Lambda_{-\ell}\mathcal H_\sigma+(0,\xi-\sigma\ell)^\intercal\big)=\{X~\vert~X^0-\sigma+\gamma R=\sqrt{|X'-\xi+\gamma R\ell|^2+1}\}.
\end{split}
\end{align}
The restriction of ${\tilde{\boldsymbol{\Upsigma}}}_\sigma$ to $\{X^{n+1}=S\}$ is denoted by (here $x=(x^0,x')\in \mathbb R\times \mathbb R^{n}$ denotes the rectangular coordinates on $\{X^{n+1}=S\}$)
\begin{align*}
\begin{split}
{\underline{\tilbsUpsigma}}_\sigma:={\tilde{\boldsymbol{\Upsigma}}}_\sigma \cap \{X^{n+1}=S\}=\{x~\vert~x^0-\sigma+\gamma R=\sqrt{|x'-\xi+\gamma R\ell|^2+1}\}.
\end{split}
\end{align*}
\begin{remark}
Let ${\underline{{\mathscr{C}}}}_{-R}=\{(x^0,x',S)~\vert~x^0+R-2=|x'|\}$. The fact that $\eta=\xi-\gamma R\ell$ is a timelike curve (that is, $|{\dot{\eta}}|<1$) implies that $\cup_{\sigma\geq0}{\underline{\tilbsUpsigma}}_\sigma$ gives a foliation of the region ${\mathscr{R}}:=\{(x^0,x')\in\{X^{n+1}=S\}~\vert~x^0+\gamma(0)R\geq \sqrt{|x'-\eta(0)|^2+1}\}$, which contains ${\underline{{\mathscr{C}}}}_{-R}$ because we have assumed that $|\xi(0)|$ and $|\ell(0)|$ are small (${\mathscr{R}}$ is also contained in a slightly larger cone ${\underline{{\mathscr{C}}}}_{-(R+3)}$). Indeed, if $(x^0,x')\in{\mathscr{R}}$ belongs to ${\underline{\tilbsUpsigma}}_{\sigma_2}\cap{\underline{\tilbsUpsigma}}_{\sigma_1}$, $\sigma_2>\sigma_1$, then
\begin{align*}
\begin{split}
\sigma_2-\sigma_1&=\sqrt{|x'-\eta(\sigma_1)|^2+1}-\sqrt{|x'-\eta(\sigma_2)|^2+1}\leq \big||x'-\eta(\sigma_1)|-|x'-\eta(\sigma_2)|\big|\leq |\eta(\sigma_2)-\eta(\sigma_1)|\\
&<\sigma_2-\sigma_1,
\end{split}
\end{align*}
which is impossible. Here, to pass to the last inequality, we have used $|{\dot{\eta}}|<1$. It follows that the map $U:(\sigma,y)=(\sigma,y^0,y')\mapsto \Lambda_{-\ell}y+(0,\xi-\sigma\ell)^\intercal$ defined on $\cup_{\sigma\geq0}\{\sigma\}\times{\underline{\calH}}_\sigma$ is a diffeomorphism onto its image. To see that it covers all of ${\mathscr{R}}$, given $x=(x^0,x')\in{\mathscr{R}}$ choose $\sigma_0>x^0+R+5$ and note that $x$ lies between ${\underline{\tilbsUpsigma}}_0=U(\{0\}\times{\underline{\calH}}_0)$ and ${\underline{\tilbsUpsigma}}_{\sigma_0}=U(\{\sigma_0\}\times{\underline{\calH}}_{\sigma_0})$, and since $U$ is a diffeomorphism onto its image, $x$ must lie on ${\underline{\tilbsUpsigma}}_\sigma$ for some $\sigma\in[0,\sigma_0)$. It follows from this that the ${\tilde{\boldsymbol{\Upsigma}}}_\sigma$ foliate a region containing ${\mathscr{C}}_{-R}$.
\end{remark}
Fix $\mathfrak m:\mathbb R\times\mathbb R\to \mathbb R$ to be a smoothed out version of the minimum function such that for some small~$\delta_1>0$
\begin{align*}
\begin{split}
\mathfrak m(x,y)= \min(x,y)\qquad \mathrm{when~}|x-y|>\delta_1.
\end{split}
\end{align*}
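One concrete choice of $\mathfrak m$, given here only for illustration (any smoothed-out minimum with the stated property works), starts from the identity $\min(x,y)=\frac{1}{2}(x+y-|x-y|)$ and mollifies the absolute value:
\begin{align*}
\begin{split}
\mathfrak m(x,y)=\frac{1}{2}\big(x+y-(|\cdot|*\varphi_{\delta_1})(x-y)\big),
\end{split}
\end{align*}
where $\varphi_{\delta_1}$ is a smooth, even, nonnegative mollifier supported in $(-\delta_1,\delta_1)$ with unit integral. Since $|\cdot|$ is affine on each component of $\{|s|\geq\delta_1\}$ together with its $\delta_1$-neighborhood away from the origin, the convolution reproduces $|x-y|$ when $|x-y|>\delta_1$, so indeed $\mathfrak m(x,y)=\min(x,y)$ there. This choice also satisfies $\partial_x\mathfrak m,\partial_y\mathfrak m\in[0,1]$ and $\partial_x\mathfrak m+\partial_y\mathfrak m=1$, properties which are convenient when checking that the level sets of the resulting time function form a smooth foliation.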
Define $\sigma,\sigma_{\mathrm{temp}}: \{-S\leq X^{n+1}\leq S\} \to \mathbb R$ by
\begin{align}\label{eq:sigmadefintro1}
\begin{split}
&\sigma_{\mathrm{temp}}(X)=\sigma'\qquad \mathrm{if~}X\in {\tilde{\boldsymbol{\Upsigma}}}_{\sigma'},\\
&\sigma(X)= \mathfrak m(X^0,\sigma_{\mathrm{temp}}(X)).
\end{split}
\end{align}
Finally let
\begin{align}\label{eq:Sigmadefintro1}
\begin{split}
{\boldsymbol{\Upsigma}}_{\sigma'}=\{X~\vert~\sigma(X)=\sigma'\},\qquad {\underline{\bsUpsigma}}_{\sigma'} = {\boldsymbol{\Upsigma}}_{\sigma'}\cap \{X^{n+1}=S\}.
\end{split}
\end{align}
The transition from ${\tilde{\boldsymbol{\Upsigma}}}_\sigma$ to ${\boldsymbol{\Upsigma}}_\sigma$ corresponds to considering an asymptotically null (more precisely, hyperboloidal) foliation only starting at a large radius from the center $\xi(\sigma)$. Note that $\mathfrak m$ can be chosen so that $\cup_\sigma{\boldsymbol{\Upsigma}}_\sigma$ gives a smooth foliation of a region containing ${\mathscr{C}}_{-R}$. Indeed, as we have seen, the level sets of $\sigma_{\mathrm{temp}}$ and $X^0$ provide such foliations, and $d\sigma_{\mathrm{temp}}$ and $dX^0$ are full rank. Since
\begin{align*}
\begin{split}
d\sigma = (\partial_{x}\mathfrak m) dX^0+ (\partial_y\mathfrak m) d\sigma_{\mathrm{temp}},
\end{split}
\end{align*}
we can choose $\mathfrak m$ so that $d\sigma$ is also full rank. Going forward, we will assume that $\mathfrak m$ is chosen as such. The hyperboloidal (where $X^0\geq \sigma_{\mathrm{temp}}+\delta_1$) and flat (where $\sigma_{\mathrm{temp}}\geq X^0+\delta_1$) parts of these surfaces are denoted by ${\boldsymbol{\Upsigma}}_\sigma^{\mathrm{hyp}}, {\underline{\bsUpsigma}}_\sigma^{\mathrm{hyp}}$ and ${\boldsymbol{\Upsigma}}_\sigma^{\mathrm{flat}}, {\underline{\bsUpsigma}}_\sigma^{\mathrm{flat}}$, respectively. We have thus constructed the \emph{foliation} $\sigma$ adapted to $\xi, \ell$. We define our \emph{profile} as the hypersurface
\begin{align*}
\begin{split}
\mathcal Q := \cup_{\sigma} \Sigma_{\sigma}, \quad \hbox{ where } \Sigma_\sigma:=\mathcal C_\sigma\cap {\boldsymbol{\Upsigma}}_\sigma.
\end{split}
\end{align*}
Next, we \emph{fix a gauge}, that is, describe a parameterization (or embedding) of the VMC hypersurface $\mathcal M$ and a way to measure its deviation from the profile $\mathcal Q = \cup_\sigma \Sigma_\sigma$.
We will do this by fixing a vector $$N:\cup_{\sigma}\Sigma_\sigma\to\mathbb R^{1+(n+1)},$$ and defining \emph{the perturbation} $\psi:\cup_{\sigma}\Sigma_\sigma\to \mathbb R$
by the requirement that (later we will further decompose $\psi$ as in Item (3) of Section~\ref{subsubsec:introoverallscheme}; see also Section~\ref{subsec:introfirstorderandcodim})
\begin{align}\label{eq:psidefintro1}
\begin{split}
p+\psi(p)N(p)\in \mathcal M,\qquad \forall~ p\in \cup_{\sigma}\Sigma_\sigma.
\end{split}
\end{align}
Under suitable smallness assumptions on the perturbation, this condition determines $\psi$ uniquely (see Lemma~\ref{rem:normalneighborhood}).
If ${\dot{\ell}}\equiv0$ and ${\dot{\xi}}\equiv \ell$, from the second variation of the area we would expect that the relevant perturbations are those which are in the direction of the (spacetime) normal to~$\Sigma_\sigma$. In general, since $\xi$ and $\ell$ do not necessarily obey ${\dot{\ell}}=0$ and ${\dot{\xi}}=\ell$, there will be additional errors. Nevertheless, the most natural geometric choice for $N$ still seems to be the normal to $\cup_{\sigma}\Sigma_\sigma$, or perhaps $\Lambda_{-\ell}n$, where $n$ is the normal to the straight Lorentzian catenoid~$\mathcal C$. However, for reasons that will be discussed below we choose to work with a less geometric $N$ which is defined as follows. First, if $\Sigma_\sigma\ni p=\Lambda_{-\ell}q+(0,\xi(\sigma)-\sigma\ell(\sigma))^\intercal$ for some $q$ in $\mathcal C$, let $n_\wp(p)=\Lambda_{-\ell(\sigma)}n(q)$, where $n(q)$ is the normal to $\mathcal C$ at $q$.
Let
\begin{align}\label{eq:tilNdefintro1}
\begin{split}
{\tilde{N}}=\chi {\tilde{N}}_{\mathrm{int}}+(1-\chi)\frac{\partial}{\partial X^{n+1}}
\end{split}
\end{align}
where ${\tilde{N}}_{\mathrm{int}}$ is the normal to $\Sigma_{\sigma}$ viewed as a submanifold of ${\boldsymbol{\Upsigma}}_{\sigma}$, and where $\chi$ is some cutoff function which is equal to one in $\mathcal C_\sigma\cap{\boldsymbol{\Upsigma}}_\sigma^{\mathrm{flat}}$ and equal to zero in $\mathcal C_\sigma\cap{\boldsymbol{\Upsigma}}_\sigma^{\mathrm{hyp}}$. We then define $N$ to be parallel to ${\tilde{N}}$ and normalized so that $\boldsymbol{\eta}(n_\wp,N)=1$:
\begin{align}\label{eq:Ndefintro1}
\begin{split}
N\parallel {\tilde{N}},\qquad \boldsymbol{\eta}(n_\wp,N)=1.
\end{split}
\end{align}
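Explicitly, since $N$ is required to be parallel to ${\tilde{N}}$, the normalization in \eqref{eq:Ndefintro1} determines it uniquely as
\begin{align*}
\begin{split}
N=\frac{{\tilde{N}}}{\boldsymbol{\eta}(n_\wp,{\tilde{N}})},
\end{split}
\end{align*}
which is well defined whenever $\boldsymbol{\eta}(n_\wp,{\tilde{N}})\neq0$, that is, whenever ${\tilde{N}}$ is not $\boldsymbol{\eta}$-orthogonal to $n_\wp$. Note that $N$ need not be a unit vector; it is the pairing with $n_\wp$, not the length of $N$ itself, that is normalized.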
A few remarks are in order about this gauge choice.
\begin{itemize}[leftmargin=*]
\item[{\bf{1.}}] In the exterior hyperboloidal region, $N$ is parallel to $\frac{\partial}{\partial X^{n+1}}$. This choice is motivated by the fact that in this region the catenoid looks almost like a hyperplane, so we are in fact parameterizing the VMC hypersurface $\mathcal M$ as a graph over a hyperplane. The advantage is that this simplifies the derivation of the equations and the form of the nonlinear terms. As will become clear in the course of the proof of our main theorem, the precise structure of the nonlinearity is important only in this exterior region where we will be able to treat the difference between a hyperplane and a catenoid perturbatively. We will come back to the normalization of the length of $N$.
\item[{\bf{2.}}] In the interior our choice of $N$ is crude, but since $\ell$ and $\xi-\sigma\ell$ are small, it is still close to the geometric normal $n_{\wp}$. Our choice in this region is consistent with our general philosophy that besides some spectral information on the linearized operator and appropriate modulation equations for the parameters (which will be a consequence of our first order formulation and orthogonality conditions), precise structures are not so important in the interior region.
\item[{\bf{3.}}] Finally, the reason for the length normalization of $N$ is that we want the linear part of the equation satisfied by $\psi$ to be (except for errors coming from ${\dot{\ell}}$ and ${\dot{\xi}}-\ell$ not vanishing)
\begin{align*}
\begin{split}
\Box \psi +| {\mathrm{I\!I}} |^2\psi,
\end{split}
\end{align*}
where $\Box$ and $ {\mathrm{I\!I}} $ denote, respectively, the wave operator and second fundamental form of $\mathcal C_\sigma$, in the case where ${\dot{\ell}}\equiv0$ and ${\dot{\xi}}\equiv\ell$. This is important because $\Box+| {\mathrm{I\!I}} |^2$ is precisely the operator $-\partial_{t}^{2} + \underline{H}$ conjugated by the Lorentz transform with parameter $\ell$ (after a suitable translation).
\end{itemize}
To summarize, our profile is defined as $\mathcal Q = \cup_\sigma \Sigma_\sigma$ with $\Sigma_\sigma=\mathcal C_\sigma\cap{\boldsymbol{\Upsigma}}_\sigma$, and $\mathcal C_\sigma$ and ${\boldsymbol{\Upsigma}}_\sigma$ as defined in \eqref{eq:calCsigmadefintro1}, \eqref{eq:tilSigmadefintro1}, \eqref{eq:sigmadefintro1}, \eqref{eq:Sigmadefintro1}, and the perturbation is described by a scalar function $\psi:\cup_\sigma \Sigma_\sigma\to \mathbb R$ defined by \eqref{eq:psidefintro1}, \eqref{eq:tilNdefintro1}, \eqref{eq:Ndefintro1}. We will denote the hyperboloidal and flat parts of the profile by $\mathcal C_{{\mathrm{hyp}}}:=\mathcal Q\cap\{X^0\geq \sigma_{\mathrm{temp}}(X)+\delta_1\}$ and $\mathcal C_{{\mathrm{flat}}}:=\mathcal Q\cap\{X^0\leq \sigma_{\mathrm{temp}}(X)-\delta_1\}$, respectively.
\begin{remark}
The following simplified picture is helpful when thinking about the foliation and the definition of the profile. Imagine the scenario where we want to decompose a solution $u$ of a semilinear equation $\Box u = F(u)$ in terms of a soliton $Q$ and a remainder $\psi$. Suppose the equation is translation and Lorentz invariant, and let $Q_{\xi,\ell}$ denote the translated and boosted soliton. We foliate the domain by leaves which are flat up to a radius of order $R$ about $\xi(\tau)$, and then become asymptotically null and approach the cone through $(-R,0)^\intercal$ translated by $\xi(\tau)$ and boosted by $\ell(\tau)$, as in the following figure
\begin{center}
\begin{tikzpicture}[scale=0.8,transform shape]
\draw (-4,0) -- (4,0);
\draw[->] (0,0) -- (0,3.5) ;
\coordinate (A) at (0,0);
\coordinate (B) at (0.25,0.4);
\coordinate (C) at (0.5,1.2);
\coordinate (D) at (0.75,3);
\draw[thick] plot [smooth] coordinates { (A) (B) (C) (D)};
\draw[thick] plot [smooth] coordinates { (-1,0) (A) (1,0)};
\draw[thick] plot [smooth] coordinates { (1,0) (2.25,0.7) (3.35,1.9)};
\draw[thick] plot [smooth] coordinates { (-1,0) (-2.05,0.9) (-3.15,2.3)};
\draw[thick] plot [smooth] coordinates { (-0.75,0.4) (B) (1.25,0.4)};
\draw[thick] plot [smooth] coordinates { (1.25,0.4) (2.4,1.1) (3.6,2.5)};
\draw[thick] plot [smooth] coordinates { (-0.75,0.4) (-1.8,1.3) (-2.9,2.7)};
\draw[thick] plot [smooth] coordinates { (-0.5,1.2) (C) (1.5,1.2)};
\draw[thick] plot [smooth] coordinates { (1.5,1.2) (2.65,2.1) (3.75,3.5)};
\draw[thick] plot [smooth] coordinates { (-0.5,1.2) (-1.55,2.1) (-2.65,3.5)};
\node[above] at (D) {$\xi$};
\node[right] at (3.75,3.5) {${\boldsymbol{\Upsigma}}_\tau$};
\node[right] at (3.35,1.9) {${\boldsymbol{\Upsigma}}_0$};
\node[right] at (4,0) {$x^0=0$};
\end{tikzpicture}
\end{center}
Our profile construction corresponds to decomposing the solution as $u=Q_{\xi(\tau),\ell(\tau)}+\psi$ on the leaf~${\boldsymbol{\Upsigma}}_\tau$.
\end{remark}
\subsection{First-Order Formulation, Modulation Equations and Selection of a Codimension One Set of Initial Data}\label{subsec:introfirstorderandcodim}
The role of the first-order formulation is to derive the evolution equations for the modulation parameters $\xi$ and $\ell$. The modulation parameters are fixed by imposing the matching number of ``orthogonality conditions'' on the perturbation. The orthogonality conditions also guarantee that the perturbation stays away from the kernel of the linearized operator. Our approach is based on that of \cite{Stuart1}, which is in turn motivated by \cite{Weinstein1}.
The first order formulation is closely related to a Hamiltonian formulation of the original Euler-Lagrange equations. To arrive at an adequate first order formulation we need to fix a time function. In our case we have already discussed the foliation and we simply take the time function to be $\sigma$ in \eqref{eq:sigmadefintro1}. This is a degenerate choice because the level sets of $\sigma$ are asymptotically null. To deal with this, we make use of the observation that the orthogonality conditions may be localized to a large compact set (see for instance \cite{GNT1}), and we impose conditions that involve the perturbation only on the flat part of $\Sigma_{\sigma}$. An implication of localizing the orthogonality conditions is that the perturbation enters linearly in the parameter ODEs. Since the derivatives of parameters also enter linearly in the equation for the perturbation (see Section~\ref{subsec:introoutlinerp}), some care is needed to avoid circularity in the estimates. The key here is that the linear contributions of the perturbation stem from the localization of the eigenfunctions to the complement of the large compact set. Hence, the spatial decay of the eigenfunctions furnishes extra smallness.
Two more technical issues deserve further explanation. In view of the gauge invariance of the problem, the choice of momentum variable for the first order formulation is not obvious. The proper choice must be such that the orthogonality conditions result in non-degenerate first order ODEs for $\ell$ and $\xi$. We motivate our choice in Section~\ref{susec:momentumvariable}. The derivation of the equations in first order form is rather technical and occupies most of Section~\ref{sec:interior}.
Additionally, due to the quasilinear nature of the HVMC equation, sometimes more derivatives of the modulation parameters arise than we can a priori control in our bootstrap. In principle, it may be possible to use the hyperbolic structure of the equation to solve for the highest order time derivatives in terms of spatial derivatives of the perturbation, and to use integration by parts to avoid the loss of regularity (see for instance \cite[Section~4.1.3]{DKSW}). However, this approach would have to carefully exploit the structure of the equation, which becomes especially difficult in view of the complex form of the equations in the first order formulation. Instead, we modify the orthogonality conditions to obtain \emph{smoothing of the modulation parameters}. This is a robust approach that does not rely on the algebraic structures of the equations. The details are carried out in Section~\ref{subsec:modulationeqs}. Conceptually, we exploit the freedom that while the final values of the parameters are determined by the initial conditions for the HVMC equation, their trajectories are not. Technically, this is achieved by choosing the orthogonality conditions so that $\xi$ and $\ell$ satisfy ODEs of the forms ${\dot{\ell}}=S\mathcal F_{\ell}$ and ${\dot{\xi}}-\ell=S\mathcal F_\xi$, where $\mathcal F_\ell$ and $\mathcal F_\xi$ depend on the perturbation and its derivatives, and $S$ is a smoothing operator in the time variable. We choose the integral kernel $k(\sigma, \sigma')$ of $S$ to be compactly supported in the range $\sigma' < \sigma$ to preserve the causality of the smoothed-out modulation equations (i.e., $\xi(\sigma), \ell(\sigma)$ are independent of the solution at future times $\sigma' > \sigma$).
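In schematic terms, and deferring the precise choice of kernel to Section~\ref{subsec:modulationeqs}, the operator $S$ acts as
\begin{align*}
\begin{split}
(S\mathcal F)(\sigma)=\int_{-\infty}^{\sigma}k(\sigma,\sigma')\,\mathcal F(\sigma')\,\mathrm{d}\sigma',\qquad \mathrm{supp}\,k(\sigma,\cdot)\ \text{compact and contained in}\ \{\sigma'<\sigma\},
\end{split}
\end{align*}
so that time derivatives of $S\mathcal F$ fall on the kernel $k$ rather than on $\mathcal F$. This is the mechanism by which arbitrarily many time derivatives of ${\dot{\ell}}$ and ${\dot{\xi}}-\ell$ can be controlled without differentiating the perturbation.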
Finally we say a few words about the shooting argument discussed in Item (3) in Section~\ref{subsubsec:introoverallscheme}. The decomposition $\psi=a_{+}Z_{+}+a_{-}Z_{-}+\phi$ is derived in Section~\ref{subsec:unstableint}. The ODEs satisfied by $a_{\pm}$ are given in equation~\eqref{eq:wpoutline10}, and again involve a smoothing operator in the time variable. The trapping assumption is stated in equation~\eqref{eq:a+trap}. Note that this is at the level of the derivative of $a_{+}$. Finally, the standard topological argument (see for instance \cite{MRR1}) is described in Step 2a of the proof of Theorem~\ref{thm:main} in Section~\ref{subsec:proofofmaintheorem}.
For more background on the construction of center-stable manifolds we refer to \cite{Schlag1,NakSchbook1}.
\subsection{Uniform Boundedness of Energy and Integrated Local Energy Decay} \label{subsec:ideas-iled}
We now discuss the ideas behind our proof of the uniform boundedness and integrated local energy decay estimates for $\phi$. In the case of the linearized equation $(-\partial_{t}^{2} + \underline{H}) \psi = f$ around the Lorentzian catenoid, both bounds follow from existing methods \cite{Tat1, MeTa, MMT1, MST}; see Section~\ref{subsec:iled-catenoid} below. The challenge is to extend these estimates to $\phi$ in our decomposition of the solution. Here, $\phi$ solves the equation
\begin{equation*}
\mathcal P \phi = f,
\end{equation*}
where $\mathcal P$ is the linearized HVMC operator around $\mathcal Q$ modulo terms involving $\dot{\ell}$ and $\dot{\xi}-\ell$, which are regarded as nonlinearities. The right-hand side $f$ consists of the profile error (i.e., the failure of $\mathcal Q$ to solve HVMC; this includes terms linear in $\dot{\ell}$ and $\dot{\xi}-\ell$) and the quasilinear nonlinearity. We refer to \eqref{eq:phi1}, \eqref{eq:LEDcalPint1} (interior) and \eqref{eq:abstractexteq1} (exterior) for the precise expressions. We work under a trapping assumption for $a_{+}$ and suitable bootstrap assumptions on $\xi$, $\ell$, $a_{-}$ and $\phi$; see Section~\ref{sec:bootstrap}.
The proof of uniform boundedness of energy for $\phi$ follows by using the global almost stationary vectorfield ${\bf T}$ (see Section~\ref{sec:GNGC} for the definition) as a vectorfield multiplier, and using the orthogonality conditions to obtain coercivity of the spatial part of the operator $\mathcal P$. To control higher ${\bf T}$-derivatives, we use ${\bf T}$ as a commuting vectorfield, and then use the equation and elliptic regularity estimates to also control higher spatial derivatives. We refer to Proposition~\ref{prop:energyestimate1} for the precise statements and proofs.
The proof of integrated local energy decay for $\phi$ is significantly more difficult due to the reasons discussed in Section~\ref{subsec:ideas-outline}, including
\begin{enumerate}
\item [(i)] quasilinearity (i.e., occurrence of nonlinear second-order terms),
\item [(ii)] trapping (i.e., existence of unstable trapped null geodesics along $\set{\rho = 0}$),
\item [(iii)] eigenvalues of the stability operator (i.e., zero and negative eigenvalues of $\underline{H}$),
\item [(iv)] nonstationarity of $\mathcal Q$.
\end{enumerate}
We seek to \emph{divide-and-conquer} these difficulties.
Our first main tool is a \emph{vectorfield multiplier argument} that resembles the proof for the base case $-\partial_{t}^{2} + \underline{H}$ on the Lorentzian catenoid $\mathcal C$ in Section~\ref{subsec:iled-catenoid}, but adapted to our profile $\mathcal Q$. This argument gives the desired integrated local energy decay estimate with an additional lower order term $\| \phi \|_{L^{2}(K)}$ on the right-hand side, where $K$ is a spacetime cylinder around the trajectory $\xi$. For details, see the proof of Lemma~\ref{lem:phifar1} below. In particular, this argument takes care of issues (i) (quasilinearity) and (ii) (trapping), which are ``high time frequency'' issues.
To handle the remaining issues we introduce our next key tool, the \emph{time smoothing operator} $P_{\leq N}$, where $N^{-1}$ is the smoothing scale (equivalently, $N$ is the time frequency localization scale). Our initial observation is that $\| \phi - P_{\leq N} \phi \|_{L^{2}(K)}$ is small compared to the left-hand side of integrated local energy decay if $N$ is sufficiently large (see Lemma~\ref{lem:LEDhighfreq1} below), so we only need to control $P_{\leq N} \phi$. We have thus reduced the problem to the consideration of only ``low time frequencies''!
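Heuristically, the smallness of the high frequency part can be seen from a Poincar\'e-type inequality in time: since $1-P_{\leq N}$ removes time frequencies below $N$, one expects a schematic bound of the form
\begin{align*}
\begin{split}
\| \phi - P_{\leq N} \phi \|_{L^{2}(K)} \lesssim N^{-1} \| {\bf T} \phi \|_{L^{2}(K)},
\end{split}
\end{align*}
where the right-hand side is controlled by the left-hand side of the integrated local energy decay estimate, with a constant that becomes small for $N$ large. The precise statement is Lemma~\ref{lem:LEDhighfreq1}; the schematic inequality above is only meant to convey the mechanism.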
The key benefit of $P_{\leq N}$ is that, by elliptic regularity (for the part of $\mathcal P$ that does not involve ${\bf T}$), any potentially dangerous second order term $\partial^{2} P_{\leq N} \phi$ may be bounded in terms of $P_{\leq N} \phi$ and $\mathcal P (P_{\leq N} \phi)$. Hence, \emph{the equation $\mathcal P P_{\leq N} \phi = P_{\leq N} f + [\mathcal P, P_{\leq N}] \phi$ may be thought of as }$\mathcal P_{0} P_{\leq N} \phi = \hbox{(perturbative error)}$, where $\mathcal P_{0}$ on the left-hand side is the operator obtained by conjugating $(-\partial_{t}^{2} + \underline{H})$ with the Lorentz transformation with parameter $\underline{\ell}$. In the context of the bootstrap argument, $\underline{\ell}$ would be the final velocity parameter. This summarizes how issue~(iv) (nonstationarity of $\mathcal Q$) shows up and gets resolved in our proof.
It remains to establish an integrated local energy decay estimate for $P_{\leq N} \phi$, which would in particular control $\| P_{\leq N} \phi \|_{L^{2}(K)}$. As discussed earlier, to obtain such a bound from the properties of $-\partial_{t}^{2} + \underline{H}$, we need $P_{\leq N} \phi$ to satisfy $2n+2$ orthogonality conditions on suitable time slices (in this case, they are boosts of $\{ X^{0} = const \}$ by $\underline{\ell}$). This is issue~(iii) (eigenvalues of the stability operator). Our idea is to use a suitable multiplier argument to \emph{transfer} our orthogonality conditions on $\Sigma_{\sigma}$ to the needed ones; see the proof of Proposition~\ref{prop:LED1}. We remark that, at this point, we need doubly integrable decay rates of $\dot{\xi} - \ell$ and $\dot{\ell}$ to control the error.
This procedure also requires the right-hand side of the equation to be localized to the flat portion of $\Sigma_{\sigma}$. For this reason, we perform the so-called \emph{near-far decomposition} in our proof (in fact twice, for technical purposes); see \eqref{eq:phifardef1}, \eqref{eq:calPphinear1}, \eqref{eq:psi-nearfar} and \eqref{eq:psi-nearnear}.
We end this part with a remark on the time smoothing operator $P_{\leq N}$. We define this operator as a smooth cutoff in time frequencies, where the Fourier transform is defined in suitable coordinates. However, unlike the Fourier transform in time, whose definition usually requires taking the Laplace transform first and then considering its analytic continuation, the time smoothing operator is easier to make sense of as an integral operator on physical space.
\subsection{Vectorfield Method for Moving Profile}\label{subsec:introoutlinerp} The final part of the scheme from Section~\ref{subsubsec:introoverallscheme} is proving pointwise decay for the perturbation. For this purpose we use the $r^p$-vectorfield method introduced in \cite{DR1}. This method combines an integrated local energy decay (ILED) estimate in a bounded region with vectorfield estimates outside a compact set to obtain pointwise decay. We refer to \cite[Sections 1.1--1.4]{Moschidis1} for a review of the history of the vectorfield method. The $r^p$ method applies to wave equations on asymptotically flat spacetimes. In its simplest form in \cite{DR1} it yields the (interior) pointwise decay rate $t^{-1}$ on $\mathbb R^{1+n}$. In \cite{Schlue1} the method was extended to give the decay rate $t^{-\frac{3}{2}+\delta}$. A further extension was obtained in \cite{Moschidis1} giving the rate $t^{-\frac{n}{2}}$. We refer to \cite{DR1,Moschidis1} for a general review of the method, and to \cite[Section 9.4]{Moschidis1} for an explanation of the scheme for the improved $t^{-\frac{n}{2}}$ decay. In this work we adapt the method from \cite{Moschidis1}. Our setup differs from the one in \cite{Moschidis1} in a few important respects which we now describe.
The first new aspect is that our foliation is centered at the trajectory $\xi$ (see Section~\ref{sec:profileintro}). To deal with this, we introduce a null frame that is adapted to the dynamically constructed foliation. We then define our weighted multiplier and commutator vectorfields with respect to this null frame, with spatial weights that are measured from the moving center $\xi$. The remarkable fact is that the wave equation written in the moving null frame has the right structure for the application of the $r^p$-vectorfield method. In particular, because in general ${\dot{\ell}}\neq0$ and ${\dot{\xi}}\neq \ell$, there will be new error terms with time decay in the wave equation itself, and in the multiplier and commutator identities. The important point is that these errors do not grow at spatial infinity, so they can be estimated in our bootstrap argument. Related to this issue is the failure of the profile to be an exact solution of the HVMC equation. This implies that the radiation satisfies a wave equation with a source term with time decay. One of the main advantages of our foliation, and the adapted definition of the commutators, is that there is no spatial growth when the commutators fall on the source term.
Another difference of our setup with that of \cite{Moschidis1} is that our linearized operator has an order zero potential. Moreover, the elliptic part of the operator has a nontrivial kernel. These differences become relevant when using the improved decay of higher time derivatives of the perturbation to get improved decay for arbitrary derivatives. In \cite{Moschidis1} this is achieved by viewing the wave equation as an elliptic equation with the time derivatives as source terms, and applying global elliptic estimates. In our context we need a separate argument to deal with the zero order potential and the kernel. These arguments are presented in Lemma~\ref{lem:ellipticestimate1} and Corollary~\ref{cor:higherdL2decay}. The orthogonality conditions from the first order formulation are used here to guarantee that the projection of the perturbation on the kernel has sufficient decay.
Our modified scheme in dimension $n=5$ gives the pointwise decay $t^{-\frac{9}{4}+\frac{\kappa}{2}}$, $\kappa>0$ arbitrary, for the perturbation. This is different from the rate $t^{-\frac{5}{2}}$ in \cite{Moschidis1}, and we now explain the reason for the discrepancy. Let $\phi$ denote the perturbation. An intermediate step in deriving pointwise decay is proving that $\|\partial^3\phi\|_{L^2}\lesssim t^{-3}$, when at least one of the derivatives is with respect to time. Global elliptic estimates are then applied in \cite{Moschidis1} to conclude that $\|\partial^3\phi\|_{L^2}\lesssim t^{-3}$ for arbitrary derivatives. A similar argument gives $\|\partial^2\phi\|_{L^2}\lesssim t^{-2}$. The $t^{-\frac{5}{2}}$ pointwise decay in \cite{Moschidis1} then follows from the Gagliardo-Nirenberg estimate $\|\phi\|_{L^\infty}\lesssim \|\partial^2\phi\|_{L^2}^{\frac{1}{2}}\|\partial^3\phi\|_{L^2}^{\frac{1}{2}}$. In our case, the equation for $\phi$ contains a source term that depends linearly on the parameter derivatives, which we denote by ${\dot{\wp}}$. Spatial derivatives do not improve the time decay of this term, so we cannot hope to improve the decay of $\|\partial^k\phi\|_{L^2}$ beyond the decay of ${\dot{\wp}}$. On the other hand, the ODEs for the parameters can be used to bound $|{\dot{\wp}}|$ by a small multiple of $\|\jap{r}^{-c_1}\phi\|_{L^2}$, $c_1<\frac{7}{2}$ (here $r$ denotes the distance to the center $\xi$). Using the elliptic estimates discussed in the previous paragraph (see Lemma~\ref{lem:ellipticestimate1}) we can estimate $\|\jap{r}^{-c_2}\phi\|_{L^2}$, $c_2<\frac{5}{2}$, by~$|{\dot{\wp}}|$, where the restriction on $c_2$ comes from the order zero potential in the linear operator. Taking $c_1=c_2=\frac{5}{2}-\kappa$ we get the estimate $\|\jap{r}^{-\frac{5}{2}+\kappa}\phi\|_{L^2}\lesssim t^{-\frac{5}{2}+\kappa}$. This sharp estimate can then be used to obtain the non-sharp estimate $\|\partial^3\phi\|_{L^2}\lesssim t^{-\frac{5}{2}+\kappa}$. 
Combined with Gagliardo-Nirenberg we obtain the pointwise decay $t^{-\frac{9}{4}+\frac{\kappa}{2}}$. Note that if we used elliptic estimates with fractional derivatives (instead of weights) and a fractional Gagliardo-Nirenberg estimate, we could hope to obtain the decay rate $t^{-\frac{5}{2}+\kappa}$. Since the rate $t^{-\frac{9}{4}+\frac{\kappa}{2}}$ is already sufficient to close our bootstrap, we did not further complicate the argument by introducing fractional derivatives.
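For the reader's convenience, the exponent arithmetic behind this rate reads
\begin{align*}
\begin{split}
\|\phi\|_{L^\infty}\lesssim \|\partial^2\phi\|_{L^2}^{\frac{1}{2}}\|\partial^3\phi\|_{L^2}^{\frac{1}{2}}\lesssim \big(t^{-2}\big)^{\frac{1}{2}}\big(t^{-\frac{5}{2}+\kappa}\big)^{\frac{1}{2}}=t^{-\frac{9}{4}+\frac{\kappa}{2}},
\end{split}
\end{align*}
using the bounds $\|\partial^2\phi\|_{L^2}\lesssim t^{-2}$ and $\|\partial^3\phi\|_{L^2}\lesssim t^{-\frac{5}{2}+\kappa}$ discussed above.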
\subsection{Second Statement of the Main Result}\label{subsec:introsecondthm}
To state our main result more precisely, we first describe the initial data. Consider two functions
\begin{align}\label{eq:initialdata1}
\begin{split}
\psi_0,\psi_1\in C^\infty_0({\underline{\calC}}),\qquad {\mathrm{supp}}~ \psi_0,\psi_1\subseteq {\underline{\calC}}\cap\{|{\underline X}|<R/2\}.
\end{split}
\end{align}
Using the notation introduced in Sections~\ref{subsec:Riemcat} and~\ref{sec:profileintro}, see \eqref{eq:Fdef} and \eqref{eq:tilNdefintro1}, consider
\begin{align}\label{eq:initialdata2}
\begin{split}
\Phi_0[\psi_0]= F+(\psi_0\circ F) N\circ F,\qquad \Phi_1[\psi_1]=(1,0)+(\psi_1\circ F)N\circ F.
\end{split}
\end{align}
We also let ${\tilde{\varphi}}_\mu:=\chi \underline{\varphi}_\mu$ where the cutoff function $\chi$ is equal to one on ${\underline{\calC}}\cap\{|{\underline X}|<R/3\}$ and supported on ${\underline{\calC}}\cap\{|{\underline X}|<R/2\}$. As discussed earlier, our stability theorem holds under a codimension one condition on the initial data. This condition is given by the vanishing of a certain functional on $(\psi_0,\psi_1)$. Since the exact form of the vanishing condition is somewhat complicated to state at this point, we defer it to Section~\ref{sec:bootstrap}, and simply refer to \eqref{eq:codim1} in the statement of our main theorem.
\begin{theorem}\label{thm:main}
Let $n\geq 5$, and consider $\Phi_0$, $\Phi_1$ as in \eqref{eq:initialdata1}, \eqref{eq:initialdata2}, and assume that $(\psi_0,\psi_1)$ satisfy \eqref{eq:codim1}. If $\epsilon\geq0$ is sufficiently small, then there exist $b_0\in\mathbb R$ with $|b_0|\lesssim 1$ and $\Phi:\mathbb R\times I \times \mathbb S^{n-1}\to \mathbb R^{1+(n+1)}$ satisfying \eqref{eq:HVMC1}, such that $\Phi\vert_{\{t=0\}}=\Phi_0[\epsilon(\psi_0+b_0{\tilde{\varphi}}_\mu)]$ and $\partial_t\Phi\vert_{\{t=0\}}=\Phi_1[\epsilon(\psi_1-\mu b_0{\tilde{\varphi}}_\mu)]$. Moreover, there exist $\ell,\xi:\mathbb R\to \mathbb R^{n}$ satisfying $|\ell|,|{\dot{\xi}}|\lesssim \epsilon$ and
\begin{align*}
\begin{split}
|{\dot{\ell}}(\sigma)|, |{\dot{\xi}}(\sigma)-\ell(\sigma)|\to0\qquad \mathrm{as}~\sigma\to\infty,
\end{split}
\end{align*}
such that the image of $\Phi$ can be parameterized as
\begin{align*}
\begin{split}
\cup_\sigma\Sigma_\sigma\ni p\mapsto p+\psi(p)N(p),
\end{split}
\end{align*}
with $\|\psi\|_{L^\infty(\Sigma_\sigma)}\to0$ as $\sigma\to\infty$. More precisely, there exists a positive $\kappa\ll1$ such that
\begin{align*}
\begin{split}
|{\dot{\ell}}(\sigma)|,|{\dot{\xi}}(\sigma)-\ell(\sigma)|\lesssim \epsilon \sigma^{-\frac{5}{2}+\kappa},\qquad \|\psi\|_{L^\infty(\Sigma_\sigma)}\lesssim \epsilon \sigma^{-\frac{9}{4}+\kappa},\qquad \mathrm{as}~\sigma\to\infty.
\end{split}
\end{align*}
\end{theorem}
More precise decay estimates on $\psi$ and the parameters can be found in Propositions~\ref{prop:bootstrappar1} and~\ref{prop:bootstrapphi1}. We now make a few remarks about Theorem~\ref{thm:main}.
\begin{remark}\label{rem:parametersintro1}
It follows from the decay rates of ${\dot{\ell}}$ and ${\dot{\xi}}-\ell$ that there exist ${\underline a},{\underline \ell}\in\mathbb R^n$ such that $\ell(t)\to{\underline \ell}$ and $\xi(t)-{\underline \ell} t\to {\underline a}$ as $t\to\infty$. In this sense our theorem implies that the solution approaches a fixed, boosted and translated Lorentzian catenoid. The differential equations governing the evolution of the parameters are derived in Section~\ref{subsec:modulationeqs}.
\end{remark}
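To see how the convergence of the parameters follows from the stated decay rates, note that for large $\sigma$,
\begin{align*}
\begin{split}
{\underline \ell}:=\lim_{\sigma\to\infty}\ell(\sigma)\ \ \text{exists, with}\qquad |\ell(\sigma)-{\underline \ell}|\leq\int_\sigma^\infty|{\dot{\ell}}(\sigma')|\,\mathrm{d}\sigma'\lesssim \epsilon\,\sigma^{-\frac{3}{2}+\kappa},
\end{split}
\end{align*}
which is again integrable. Integrating ${\dot{\xi}}-{\underline \ell}=({\dot{\xi}}-\ell)+(\ell-{\underline \ell})$ once more then gives $\xi(\sigma)-{\underline \ell}\sigma\to{\underline a}$ for some ${\underline a}\in\mathbb R^n$. It is this double integrability of ${\dot{\wp}}$ that is also exploited in Section~\ref{subsec:ideas-iled}.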
\begin{remark}
As mentioned earlier, the codimension one condition on the data in Theorem~\ref{thm:main} is optimal. However, in this work, we do not pursue the question of uniqueness and continuous dependence of $b_0$ on the initial data $\psi_0$ and $\psi_1$. As a result, we cannot infer that the set of initial data considered in Theorem~\ref{thm:main} forms a codimension one \emph{submanifold} in any topology.
\end{remark}
\subsection{Further Discussions}\label{subsec:discussion}
Further discussions of related works and subjects are in order.
\subsubsection{Other prior works on the hyperbolic vanishing mean curvature equation}\label{subsubsec:morehistory}
Beyond the previously mentioned result \cite{AC2} on local existence for the HVMC equation for sufficiently smooth initial data, we point out the low regularity local well-posedness results \cite{Ettinger, AIT21}.
We refer to \cite{Wong1} for the study of local well-posedness in relation to the action principle formulation.
The global nonlinear stability of hyperplanes under the HVMC evolution was considered in \cite{B1, Lin1, Stefanov11, Wong17}. Under symmetric perturbations the nonlinear stability of the Lorentzian catenoid was studied in \cite{KL1, DKSW} and that of the Lorentzian helicoid in \cite{Marchali1}. Simple planar travelling wave solutions to the HVMC equation were proven to be globally nonlinearly stable in \cite{AW20}.
Singularity formation has been analyzed in \cite{NT13, JNO15, Wong18, BMP1}.
For a discussion of the relevance of the HVMC equation in physics, we refer the reader to \cite{AC1, AC2, Hoppe13}.
The Lorentzian constant positive mean-curvature flow has been considered in \cite{Wong2}.
\subsubsection{Comparison with black hole stability}
The present paper concerns nonlinear asymptotic stability of a stationary solution to a multi-dimensional quasilinear wave equation without any symmetry assumptions. Despite obvious differences in the inherent complexities of the underlying PDEs, our main result may be formally compared with the recent colossal works \cite{DHRT1,KlSz1,KlSz2} on the nonlinear asymptotic stability of Kerr and Schwarzschild black holes as stationary solutions to the vacuum Einstein equation, which is a $(3+1)$-dimensional quasilinear wave equation, without any symmetry assumptions.
Our problem and the black hole stability problem share some important features. The nontrivial kernel of the linearized operator around the stationary solution necessitates modulation of some parameters and a suitable choice of gauge (i.e., a way to represent the solution among many equivalent descriptions). In the case of the Schwarzschild black hole, a finite codimension condition on the initial data naturally appears, as in our problem \cite{DHRT1, KlSz1}. At the level of proofs, this paper and the above works share the same basic strategy for proving the pointwise decay of the perturbation, namely, to first prove an integrated local energy decay (or Morawetz) estimate and the uniform boundedness of energy, then to establish pointwise decay by some version of the vectorfield method. Indeed, this powerful strategy was mostly developed in works with the black hole stability problem in mind -- see \cite{DR1}, and also \cite{Tat2, MTT, OlSt}.
Needless to say, our problem is simpler compared to the black hole stability problem in a number of aspects, such as the spatial dimension, the gauge choice (compare our choice described in Section~\ref{sec:profileintro} with \cite{DHRT1, KlSz1, KlSz2}), and the analysis of the linearized problem (compare the discussion in Section~\ref{subsec:Riemcat} with \cite{HHV1,DHR1,ABBM1,HKW1}). Nevertheless, in this paper we satisfactorily resolve a key issue that is shared by many soliton stability problems, but not by the black hole stability problem -- this is the issue of \emph{modulation of the translation and boost parameters}. In our problem, as well as in many soliton stability problems, the stationary solution (the catenoid or the soliton) is defined on a natural ambient spacetime, and it is of interest to track the evolution of the translation and boost parameters in relation to the ambient spacetime. In contrast, in general relativity there is no notion of an ambient spacetime, and the analogous issue is subsumed in the choice of the gauge in the black hole stability problem. As discussed earlier, this issue is resolved in our work by a new construction of a dynamic profile representing a ``moving catenoid,'' the use of localized orthogonality conditions that enables us to utilize a suitable first-order formulation of the equation to derive the evolution equations for the parameters, a robust scheme for establishing integrated local energy decay for perturbations of the dynamic profile starting from the case of the stationary solution, as well as an adaptation of the $r^{p}$-method for the dynamic profile. In view of the pervasiveness of the same issue in soliton stability problems, we are hopeful that our ideas might be useful elsewhere as well.
\subsubsection{Soliton stability problem for semilinear dispersive equations}
There is a vast literature on the problem of stability of solitons for \emph{semilinear} dispersive equations; for those who are interested, we recommend the excellent survey articles of Kowalczyk--Martel--Mu\~{n}oz \cite{KMM17} and Tao \cite{Tao09} as a good starting point. In relation to this rich and beautiful subject, our aim in this paper is to specifically tackle those challenges that arise from the \emph{quasilinearity} of the equation. Our aim, in turn, is motivated by the conjectured asymptotic stability of some well-known topological solitons solving quasilinear wave equations, such as the Skyrmion for the Skyrme model \cite{MantonSutcliffe}.
\subsection{Outline of the Paper}\label{subsec:introremainingoutline}
The remainder of this paper is organized as follows. Section~\ref{sec:prelim} contains the notation and some preliminary results. In Section~\ref{sec:interior} we derive a first order formulation of our problem in terms of a vector unknown $\vec{\psi}$, for a given set of parameters $\xi$ and $\ell$. We also state the corresponding orthogonality conditions and carry out a further decomposition of $\vec{\psi}=\vec{\phi}+a_{+}{\vec Z}_{+}+a_{-}{\vec Z}_{-}$, by separating the contributions of the growing and decaying modes of the linearized operator. For this decomposition and our choice of orthogonality conditions, we then derive the modulation equations satisfied by $\xi$, $\ell$, and $a_{\pm}$.
In Section~\ref{sec:coordinates}, we give a more detailed description of the foliation, various coordinates, and vectorfields, again for a given choice of parameters $\xi$ and $\ell$. We also derive expressions for the relevant operators in terms of the described coordinates and vectorfields.
The bootstrap assumptions are stated in Section~\ref{sec:bootstrap}, where, in Propositions~\ref{prop:bootstrappar1} and~\ref{prop:bootstrapphi1} we also give more precise decay estimates than the ones given in Theorem~\ref{thm:main}. The proof of Propositions~\ref{prop:bootstrappar1} and~\ref{prop:bootstrapphi1} will occupy the remaining sections of the paper, and in Section~\ref{sec:bootstrap} we further show that Theorem~\ref{thm:main} follows from the bootstrap propositions.
Section~\ref{sec:parametercontrol} contains the proof of Proposition~\ref{prop:bootstrappar1}, which closes the bootstrap assumptions for all parameters except the growing mode $a_{+}$. For the latter, a separate shooting argument is needed, which is carried out in the proof of Theorem~\ref{thm:main} in the last part of Section~\ref{sec:bootstrap}.
The proof of Proposition~\ref{prop:bootstrapphi1} is contained in Sections~\ref{sec:LED} and~\ref{sec:exterior}. Section~\ref{sec:LED} contains a general local energy decay estimate at the linear level. In view of the calculations in Section~\ref{sec:coordinates} and the bootstrap assumptions in Section~\ref{sec:bootstrap}, the assumptions on the linear operator in this estimate are satisfied for us. In Section~\ref{sec:exterior} we use the linear result of Section~\ref{sec:LED} to prove nonlinear energy and local energy decay estimates. Using these, we also prove $r^p$ weighted energy estimates, which in turn are used to prove decay estimates and Proposition~\ref{prop:bootstrapphi1}.
\section{Preliminaries}\label{sec:prelim}
\subsection{Notation and Conventions}
Here we collect some of the notation and conventions that are used repeatedly in this work. This is meant as a reference for the reader, and some of the precise definitions will appear only later in the paper. Some of the notation and conventions which are used more locally in various parts of the paper do not appear in this list.
\subsubsection{\underline{The profile and the main variables}} ${\underline{\calC}}$ denotes the Riemannian catenoid with its standard embedding in $\mathbb R^{n+1}$, and $\mathcal C = \mathbb R \times \underline{\mathcal C}$ the product Lorentzian catenoid in $\mathbb R^{1+(n+1)}$. The boost and translation\footnote{To be precise, to leading order $\xi\approx t\ell+a$ where $a$ is a fixed translation parameter.} parameters are denoted by $\ell$ and $\xi$ respectively, where $|\ell|,|{\dot{\xi}}|<1$. In our applications we will always have $|\ell|,|{\dot{\xi}}|\ll1$. Here, and below, the dot over a parameter denotes the time derivative. We will also sometimes use a prime $'$ to denote the derivative of a function of a single variable (such as time). Given $\ell$ and $\xi$ as above, the boosted catenoid $\mathcal C_\sigma$ and ${\boldsymbol{\Upsigma}}_\sigma$ are defined as in \eqref{eq:calCsigmadefintro1} and \eqref{eq:Sigmadefintro1}, and the profile is $\cup_\sigma\Sigma_\sigma$, $\Sigma_\sigma=\mathcal C_\sigma\cap{\boldsymbol{\Upsigma}}_\sigma$. The almost normal vector to the profile is denoted by $N:\cup_\sigma\Sigma_\sigma\to \mathbb R^{1+(n+1)}$, and the perturbation, defined in \eqref{eq:psidefintro1}, by $\psi:\cup_\sigma\Sigma_\sigma\to \mathbb R$. In the first order formulation, $\vec{\psi}=(\psi,{\dot{\psi}})$ denotes\footnote{When there is no risk of confusion, we identify row and column vectors in this work. So, for instance, we use both $(\psi,{\dot{\psi}})$ and $(\psi,{\dot{\psi}})^\intercal$ for $\vec{\psi}$.} the vector form of the perturbation, where ${\dot{\psi}}$ is the momentum variable and roughly corresponds to the time derivative of $\psi$. 
Corresponding to the negative eigenvalue of the linearized operator $-\Delta_{\underline{\calC}}-| {\mathrm{I\!I}} |^2$ (see Section~\ref{subsec:Riemcat}) there are two projection coefficients in the first order formulation: $a_{+}$ denotes the unstable (growing mode) coefficient corresponding to the eigenfunction, and $a_{-}$ the stable (decaying mode) coefficient. The remainder, after subtracting the contribution of the corresponding eigenfunction from $\vec{\psi}$, is denoted by $\vec{\phi}$ at the vector level (in the first order formulation) and by $\phi$ at the scalar level (see Section~\ref{subsec:unstableint}). We will denote the flat and hyperboloidal parts of the profile by $\mathcal C_{{\mathrm{flat}}}:=\{X^0\geq \sigma_{\mathrm{temp}}(X)+\delta_1\}$ and $\mathcal C_{{\mathrm{hyp}}}:=\{X^0\leq \sigma_{\mathrm{temp}}(X)-\delta_1\}$ respectively. We often refer to the region inside a large compact set contained in $\mathcal C_{\mathrm{flat}}$ as the interior, and the complement of this region as the exterior.
\subsubsection{\underline{Parameter derivatives}} We will use ${\dot{\wp}}$ to denote the parameter derivatives ${\dot{\ell}}$ and ${\dot{\xi}}-\ell$. When used as a vector, ${\dot{\wp}}=({\dot{\ell}},{\dot{\xi}}-\ell)^\intercal$ in that order. When used schematically, for instance in estimates or to denote dependence on parameter derivatives, the order will not be important, so for example $O({\dot{\wp}})$ denotes terms that are bounded by $|{\dot{\ell}}|$ or $|{\dot{\xi}}-\ell|$. The distinction will be clear from the context. More generally, ${\dot{\wp}}^{(k)}$ denotes a total of $k$ derivatives of the parameters, so for instance ${\dot{\wp}}^{(2)}$ could be any of $\ddot{\ell}$, $|{\dot{\xi}}-\ell|^2$, ${\ddot{\xi}}-{\dot{\ell}}$, etc. $\dot{\wp}^{(\leq k)}$ denotes a total of up to $k$, but at least one, parameter derivatives. $\wp^{(\leq k)}$ denotes a total of up to $k$ parameter derivatives, but possibly also an undifferentiated $\ell$. We sometimes also use the notation $\wp$ for $\ell$. Note that $\xi$ itself cannot be written as $\wp$ ($\xi$ is expected to grow linearly in time), but ${\dot{\xi}}$ can be written as ${\dot{\xi}}={\dot{\xi}}-\ell+\ell$, which is a sum of terms of the form ${\dot{\wp}}$ and $\wp$. A similar notation is used for ${\dot{a}}_{\pm}^{(k)}$, $a_{\pm}^{(\leq k)}$, etc. Note that here even the undifferentiated $a_{\pm}$ are expected to have time decay (for the growing mode $a_{+}$ only after appropriately modifying the initial data; see Theorem~\ref{thm:main}).
\subsubsection{\underline{Constants}} $\epsilon$ is the smallness parameter for the size of the initial perturbation. $\kappa$ is a small positive absolute constant which arises in the decay rates in the bootstrap argument; see Section~\ref{sec:bootstrap}. In our bootstrap argument, the energy of the perturbation enters to linear order in estimating the parameter derivatives, and the parameter derivatives enter linearly in the energy estimates. What breaks the circularity is that the linear appearance of the energy of the perturbation in the estimates for the parameter derivatives is always accompanied by a small (but not decaying) constant. This small constant is denoted by $\delta_\wp$ in the bootstrap assumptions of Section~\ref{sec:bootstrap}. The final time of the bootstrap interval is denoted by $\tau_f$. There are also a few large radii that appear in our arguments. $R$ is a large constant such that the initial data are supported in ${\underline{\calC}}\cap\{|{\underline X}|<R/2\}$; see \eqref{eq:initialdata1}. Also, the transition region from the flat to hyperboloidal parts of the foliation happens in the region $|{\underline X}|\simeq R$; see Section~\ref{sec:profileintro}. The constant $R_1\gg1$ is such that $R_1\ll R$ and such that the support of the test functions\footnote{These are the truncated eigenfunctions of the linearized operator in the first order formulation.} ${\vec Z}_i$, $i\in\{\pm,1,\dots,2n\}$, in Section~\ref{sec:interior} is contained in $|\rho|\leq R_1$ (see Section~\ref{subsec:modulationeqs}). We will use ${R_1}$ as an absolute constant and gain smallness in inverse powers of ${R_1}$. The size of the data, $\epsilon$, is considered small relative to any inverse power of ${R_1}$. In particular, since in view of the bootstrap assumptions in Section~\ref{sec:bootstrap} we have $|\ell|\lesssim \epsilon$, quantities such as $({R_1})^m|\ell|$ are considered small, for any power $m>0$.
The smallness of the constant $\delta_\wp$ above is in terms of $\ell$ and inverse powers of ${R_1}$: the energy of the perturbation enters linearly in the equation for the parameter derivatives because the eigenfunctions are truncated at scale ${R_1}$, so one should expect the corresponding error to become smaller for larger ${R_1}$.
\subsubsection{\underline{Coordinates, derivatives, and vectorfields}}\label{subsec:prelimvfs} We will mainly work with two sets of coordinates: $(t,\rho,\omega)$ in the interior (flat) part of the foliation and $(\tau,r,\theta)$ in the exterior (hyperboloidal) part. The precise definitions are given in Sections~\ref{sec:interior} and~\ref{sec:profile2} for the interior, and in Section~\ref{sec:profile2} for the exterior. In addition to these, on a few occasions we will use the global non-geometric coordinates $(\uptau,\uprho,\uptheta)$, see Section~\ref{sec:GNGC}, and the global geometric coordinates $({\tilde{\uptau}},{\tilde{\uprho}},{\tilde{\uptheta}})$, see Section~\ref{sec:GGC}. ${\bfT}$ denotes the global almost stationary vectorfield, which in terms of the global non-geometric coordinates introduced in Section~\ref{sec:GNGC} is given by $\partial_{\uptau}$. In general $\partial$ denotes arbitrary derivatives that have size of order one, and $\partial_\Sigma$ the subset of these derivatives that are tangential to the leaves of the foliations. In the exterior region, ${\tilde{\partial}}_\Sigma$ denotes derivatives which can be written as a linear combination of $\partial_\Sigma$ and $r^{-1}{\bfT}$, with coefficients of size of order one. In general we denote the number of derivatives by a superscript. For instance $\partial_\Sigma^{\leq k}$ means up to $k$ tangential derivatives. There are also a few commutator and multiplier vectorfields which are used in the exterior in Section~\ref{sec:exterior} in the context of proving decay estimates for the perturbation. The precise definitions are given in Section~\ref{sec:profile2}, but we give a brief description here: $L$ and ${\underline L}$ are the outgoing and incoming almost null vectorfields. $\Omega$ is the rotation vectorfield. $T$, which is comparable to and almost collinear with ${\bfT}$, is defined by $T=\frac{1}{2}(L+{\underline L})$. 
In the exterior region where these vectorfields are defined we use $\tilde{r}L$, $\Omega$, and $T$ as commutators, and use $X^k$ (when $k=1$ we simply write $X$) to denote an arbitrary string of $k$ such vectorfields. Here ${\tilde{r}}$ is a geometric radial variable introduced in Section~\ref{sec:profile2}.
\subsubsection{\underline{Volume forms}} In general we use $\mathrm{d} V$ to denote the induced volume form from the ambient space $\mathbb R^{1+(n+1)}$. If there is any risk of confusion we use a subscript to denote the subset on which the volume form is induced (for instance $\mathrm{d} V_S$ for the subset $S$). When working in a fixed set of coordinates we sometimes write out the volume form explicitly. In the exterior region, it is sometimes more convenient to use the coordinate volume form for the Minkowski metric rather than the geometric induced one. It will be clear from the bootstrap assumptions that these two volume forms are comparable and therefore various norms defined with respect to them are equivalent. The volume form on the standard unit sphere will be denoted by $\mathrm{d}\theta$ or $\mathrm{d} S$ interchangeably (or $\mathrm{d} \omega$, $\mathrm{d} \uptheta$, etc, depending on the coordinate system we are using).
\subsubsection{\underline{Cutoffs}} We use the notation $\chi$ for smooth cutoff functions defined on $\cup_\sigma\Sigma_\sigma$ and taking values in $[0,1]$. We may denote the set on which $\chi$ is equal to one by a subscript. For instance $\chi_S$ is equal to one on $S$ and equal to zero outside of a neighborhood of $S$ (we will make the support more precise when needed). For a positive number $c$, $\chi_{\geq c}$ denotes a cutoff which is one in the region $\{|\uprho|\geq c\}$ and equal to zero outside of $\{|\uprho|\geq \frac{c}{2}\}$. Here $\uprho$ is the radial coordinate from the global non-geometric coordinates in Section~\ref{sec:GNGC}. We also define $\chi_{<c}:=1-\chi_{\geq c}$.
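For instance, one concrete construction with these support properties (any fixed choice works) is in terms of a profile $\varsigma\in C^\infty(\mathbb R;[0,1])$ with $\varsigma\equiv0$ on $(-\infty,\frac{1}{2}]$ and $\varsigma\equiv1$ on $[1,\infty)$:
\begin{align*}
\begin{split}
\chi_{\geq c}(\uprho):=\varsigma(|\uprho|/c),\qquad \chi_{<c}:=1-\chi_{\geq c},
\end{split}
\end{align*}
so that $\chi_{\geq c}$ is equal to one on $\{|\uprho|\geq c\}$ and vanishes on $\{|\uprho|\leq \frac{c}{2}\}$.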
\subsubsection{\underline{Dimension}} The main result of this work is valid for dimensions $n\geq 5$. For concreteness we have set $n=5$ in many places (for instance for the decay rates in the bootstrap assumptions) and kept the notation $n$ in other places (for instance in some multiplier identities) where we thought this would add to the clarity of exposition. The reader can set $n=5$ everywhere, and the modifications for higher-dimensional cases are minimal.
\subsubsection{\underline{The normal and decay of eigenfunctions}}\label{subsubsec:normal} For the standard Riemannian catenoid as described in Section~\ref{subsec:Riemcat}, and with the notation used there, the normal vector is given by
\begin{align*}
\begin{split}
\nu\equiv \nu(z,\omega)=\Big(\frac{\Theta(\omega)}{\jap{z}^{n-1}},\sqrt{1-\jap{z}^{2-2n}}\Big).
\end{split}
\end{align*}
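Note that, since $|\Theta(\omega)|=1$, $\nu$ is indeed a unit vector:
\begin{align*}
\begin{split}
|\nu|^2=\frac{|\Theta(\omega)|^2}{\jap{z}^{2(n-1)}}+1-\jap{z}^{2-2n}=\jap{z}^{2-2n}+1-\jap{z}^{2-2n}=1.
\end{split}
\end{align*}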
As mentioned in Section~\ref{subsec:Riemcat}, the first $n$ components, $\nu^i=\frac{\Theta^i}{\jap{z}^{n-1}}$, $i=1,\dots,n$,
appear as eigenfunctions of the main linearized operator $\underline{H}$. It is useful to keep in mind that these have decay $\jap{z}^{1-n}$ and satisfy (here $\sqrt{|h|}$ denotes the volume density associated to the metric \eqref{eq:RiemmCatmetric1})
\begin{align*}
\begin{split}
\int \nu^i\nu^j \sqrt{|h|}\mathrm{d} \omega\mathrm{d} z = C\delta^{ij},
\end{split}
\end{align*}
where $\delta^{ij}$ is the Kronecker delta and $C$ is a constant of order one. We also remark that since the metric is asymptotically flat, the eigenfunction $\underline{\varphi}_\mu$ from Section~\ref{subsec:Riemcat} is exponentially decaying.
\subsubsection{\underline{Two asymptotic ends}} Many of the estimates and identities in this work are derived only near one of the asymptotic ends of the solution. In all cases, the other asymptotic end can be treated in exactly the same way, possibly with a change of overall sign. This remark applies in particular to many of the vectorfield identities and estimates, for instance in Sections~\ref{sec:LED} and~\ref{sec:exterior}.
\subsubsection{\underline{Notation for the second fundamental form}} As discussed in the introduction, the stability (or linearized) operator for the Riemannian catenoid is $-\Delta_{\underline{\calC}}-| {\mathrm{I\!I}} |^2$, where $ {\mathrm{I\!I}} $ denotes the second fundamental form. We will sometimes use the notation $V=| {\mathrm{I\!I}} |^2$ when working with this linearized operator. When proving more general linear estimates (such as local energy decay) we still use $V$ for the potential, and impose conditions on the linearized operator which are satisfied by $-\Delta_{\underline{\calC}}-| {\mathrm{I\!I}} |^2$. This distinction between the different uses of $V$ will be clear from the context.
\subsubsection{\underline{Exterior parametrization over a hyperplane}}\label{subsubsec:calOnotation} Outside a large compact set, we can parameterize each asymptotic end of the solution as a graph over a hyperplane (for instance the hyperplanes $\{X^{n+1}=\pm S\}$). The function giving this parameterization for the Riemannian catenoid is denoted by $Q$. We use $Q_\wp$ to denote the corresponding function when taking into account the boost and translation parameters, although when there is no risk of confusion we drop $\wp$ from the notation and simply write $Q$. See Section~\ref{sec:coordinates}.
\subsubsection{\underline{The $O$ and $\mathcal O$ notation}} The notation $f=O(g)$ is used as usual to mean $|f|\leq C|g|$ for some constant $C$. The notation $f=o_{\alpha}(g)$ is also used in the usual way to mean that $|f|/|g|$ goes to zero as the parameter $\alpha$ approaches a limiting value which will be clear from the context (usually zero or infinity). We will also use the notation $\mathcal O$ whose meaning we now explain. In order to prove decay estimates for $\phi$ we will commute the equation it satisfies with ${\bfT}$ (see Subsection~\ref{subsec:prelimvfs} for the notation). To obtain the desired decay in time, it is important that $k$ applications of ${\bfT}$ improve the decay of $\phi$ by $\uptau^{-k}$ for $k\leq 2$ (the upper bound $2$ comes from setting $n=5$). Similarly, we will need improved decay estimates on the time derivatives of the parameters up to two commutations of ${\bfT}$. These improved decay rates are reflected in the bootstrap assumptions in Section~\ref{sec:bootstrap} and the estimates in Section~\ref{sec:parametercontrol} (see for instance Proposition~\ref{lem:OmegaiTkphi}). For this, it is important that the various error terms that appear in our estimates have improved time decay up to two orders of differentiation in ${\bfT}$. In this process, we also need to commute the equation satisfied by $\phi$ with the weighted derivatives ${\tilde{r}} L$ and $\Omega$ (see Subsection~\ref{subsec:prelimvfs}), which have size of order ${\tilde{r}}$, near the asymptotically flat ends. Again, it is important that certain error terms have faster ${\tilde{r}}$ decay in the exterior region with every application of $L$ and $\frac{1}{{\tilde{r}}}\Omega$, up to the order of commutation. To capture these improved decay properties we use the notation $\mathcal O$. 
That is, an error term of the form $\mathcal O(f)$ is still bounded by $|f|$ after any number of differentiations by ${\tilde{r}} L$ or $\Omega$ in the exterior, and by $\sum_{j\leq k}|{\dot{\wp}}^{(j)}{\bfT}^{k-j}f|$ after $k$ differentiations by ${\bfT}$ globally. For instance, an error that is denoted by $\mathcal O({\dot{\wp}})$ will still be bounded by $\mathcal O({\dot{\wp}})$ after applications of ${\tilde{r}} L$ and $\Omega$ in the exterior, and by $\mathcal O({\dot{\wp}}^{(k+1)})$ after $k$ applications of ${\bfT}$ globally. Also note that more than two differentiations in ${\bfT}$ do not change the decay rate, so for instance a term of the form $\mathcal O({\dot{\wp}})$ is still bounded by $\mathcal O({\dot{\wp}}^{(3)})$ after $k$ applications of ${\bfT}$, $k\geq 3$. That we can bound higher derivatives of the parameters by their lower derivatives is a consequence of the parameter smoothing, which is carried out in Sections~\ref{subsec:modulationeqs} and~\ref{subsec:unstableint} to prevent loss of regularity (see also Section~\ref{sec:parametercontrol} for the corresponding estimates). Even though we start using this notation already in Section~\ref{sec:interior}, the corresponding properties of these error terms follow only after the bootstrap estimates are stated in Section~\ref{sec:bootstrap}. It is worth mentioning that the error terms in Sections~\ref{sec:interior} and~\ref{sec:parametercontrol} are always estimated after integrating against a compactly supported function. Therefore, the spatial decay of these terms is not relevant and is not specified when using the $\mathcal O$ notation there.
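Schematically, and with the usual abuse of notation for this calculus, these rules can be recorded as
\begin{align*}
\begin{split}
{\bfT}^k\,\mathcal O({\dot{\wp}})=\mathcal O({\dot{\wp}}^{(k+1)})\ \text{for }k\leq 2,\qquad {\bfT}^k\,\mathcal O({\dot{\wp}})=\mathcal O({\dot{\wp}}^{(3)})\ \text{for }k\geq 3,\qquad ({\tilde{r}} L)\,\mathcal O(f)=\Omega\,\mathcal O(f)=\mathcal O(f),
\end{split}
\end{align*}
where the last identities are understood in the exterior region and up to the order of commutation.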
\subsection{Local Existence}\label{subsec:lwp}
As mentioned in the introduction, a systematic treatment of local existence for the HVMC equation is contained in \cite{AC1,AC2}. For our purposes it is convenient to also have a formulation with respect to an almost null foliation of the ambient space. The results of \cite{AC1,AC2} can be adapted to this setting using the arguments of \cite{Luk1} (see also \cite{Rendall1}) which proves local existence for a class of nonlinear wave equations with characteristic initial data. Without reproducing the details of these arguments, we record the desired corollary of these works for our future reference. Before doing so, we need to introduce some more notation. Recall the definition of the profile $\cup_\sigma(\mathcal C_\sigma\cap {\boldsymbol{\Upsigma}}_\sigma)$ from Section~\ref{sec:profileintro}. Given fixed $\ell_0, \xi_0\in \mathbb R^n$, let $\mathring{\mathcal C}_\sigma(\ell_0,\xi_0)$ and $\mathring{{\boldsymbol{\Upsigma}}}_\sigma(\ell_0,\xi_0)$ denote the submanifolds of $\mathbb R^{1+(n+1)}$ corresponding to the choices $\ell\equiv\ell_0$ and $\xi(\tau)\equiv\xi_0+\ell_0\tau$. The corresponding choice of transversal vector $N$ is denoted by ${\mathring N}_{\ell_0,\xi_0}$. Then, for each $\tau>0$ let
\begin{align*}
\begin{split}
\mathcal D_0^\tau(\ell_0,\xi_0):=\cup_{\sigma\in[0,\tau)} \mathring{\Sigma}_\sigma(\ell_0,\xi_0):=\cup_{\sigma\in[0,\tau)} (\mathring{\mathcal C}_\sigma(\ell_0,\xi_0)\cap \mathring{{\boldsymbol{\Upsigma}}}_\sigma(\ell_0,\xi_0)).
\end{split}
\end{align*}
We equip each leaf $\mathring{\Sigma}_\sigma(\ell_0,\xi_0)=\mathring{\mathcal C}_\sigma(\ell_0,\xi_0)\cap \mathring{{\boldsymbol{\Upsigma}}}_\sigma(\ell_0,\xi_0)$ with the (Riemannian) metric induced from the ambient space, and denote the tangential derivatives of size one by ${\mathring{\partial}}_\Sigma$. The restriction of $\frac{\partial}{\partial X^0}+\ell_0$ to $\mathcal D_0^\tau(\ell_0,\xi_0)$ is denoted by $T$ (note that $\frac{\partial}{\partial X^0}+\ell_0$ is tangent to $\mathcal D_0^\tau(\ell_0,\xi_0)$). We use $\rho$ to denote the distance along $\mathring{\Sigma}_\sigma(\ell_0,\xi_0)$ to ${\underline X}=\xi_0+\sigma\ell_0$, with respect to the induced metric.
\begin{proposition}\label{prop:LWP}
Let $n\geq 5$, and consider $\mathring{\Phi}_0(p)=p+\mathring{\epsilon}\mathring{\psi}_0 {\mathring N}_{\ell_0,\xi_0}$, $\mathring{\Phi}_1= (1,\ell_0)+\mathring{\epsilon}\mathring{\psi}_1 {\mathring N}_{\ell_0,\xi_0}$, where $\mathring{\psi}_j$ are smooth functions on $\mathring{\Sigma}_0(\ell_0,\xi_0)$ with $\|{\mathring{\partial}}_\Sigma^k\mathring{\psi}_0\|_{L^2(\mathring{\Sigma}_0(\ell_0,\xi_0))}$ and $\|\jap{\rho}^{-1}{\mathring{\partial}}_\Sigma^{k-1}\mathring{\psi}_1\|_{L^2(\mathring{\Sigma}_0(\ell_0,\xi_0))}$ finite for $k\leq M$, $M$ sufficiently large. If $|\ell_0|$ and $\mathring{\epsilon}>0$ are sufficiently small, then there exists $\tau\gtrsim1$ and a unique smooth function $\mathring{\psi}:\mathcal D_0^{\tau}(\ell_0,\xi_0)\to\mathbb R$, such that $\mathring{\Phi}:\mathcal D_0^{\tau}(\ell_0,\xi_0)\to\mathbb R^{1+(n+1)}$ defined by
\begin{align}\label{eq:lwppar}
\begin{split}
\mathring{\Phi}(p)= p+\mathring{\psi}(p){\mathring N}_{\ell_0,\xi_0}(p)
\end{split}
\end{align}
satisfies \eqref{eq:HVMC1}, and $\mathring{\psi}(0)=\mathring{\epsilon}\mathring{\psi}_0$, $T\mathring{\psi}(0)=\mathring{\epsilon}\mathring{\psi}_1$.
\end{proposition}
We also want to be able to parameterize the solution given by Proposition~\ref{prop:LWP} as in Section~\ref{sec:profileintro} for other choices of $\ell(\tau)$ and $\xi(\tau)$, with $|\ell|,|{\dot{\xi}}|\ll1$. This is possible according to the following normal neighborhood lemma.
\begin{lemma}\label{rem:normalneighborhood}
Consider $\ell$ and $\xi$ with $|\ell|$, $|{\dot{\xi}}|$, and $|\xi(0)-\xi_0|$ sufficiently small, and the solution $\mathcal M$ from Proposition~\ref{prop:LWP} parameterized by \eqref{eq:lwppar} for $\tau\in[0,\tau_0]$. Then condition \eqref{eq:psidefintro1} for $\sigma\in [0,\tau_0]$ determines $\psi$ uniquely and $p\mapsto p+\psi(p) N(p)$ as defined in \eqref{eq:psidefintro1} gives another parameterization of $\mathcal M$.
\end{lemma}
\begin{proof}
To see that $\psi$ is uniquely determined, we need to show that for each $p\in\cup\Sigma_\sigma$, the line $p+sN(p)$ intersects $\mathcal M$ only once. Let $P$ be a point of intersection. By construction, there is a unique point ${\mathring p}\in \cup\mathring{\Sigma}_\sigma(\ell_0,\xi_0)$ such that $P$ is on the line through ${\mathring p}$ in the direction of ${\mathring N}_{(\ell_0,\xi_0)}({\mathring p})$. Moreover, since ${\mathring N}_{(\ell_0,\xi_0)}$ is almost normal to $\mathcal M$, this line satisfies $$\mathrm{dist}(P+s{\mathring N}_{(\ell_0,\xi_0)}({\mathring p}),\mathcal M)\gtrsim |s|,$$ where $\mathrm{dist}(A,\mathcal M)$ denotes the Euclidean distance from the point $A$ to $\mathcal M$. But then, since $|N(p)-{\mathring N}_{(\ell_0,\xi_0)}({\mathring p})|\ll1$ (which follows from the smallness of $\ell$ and $\ell_0$),
\begin{align*}
\begin{split}
\mathrm{dist}(P+sN(p),\mathcal M)\geq \mathrm{dist}(P+s{\mathring N}_{(\ell_0,\xi_0)}({\mathring p}),\mathcal M)-|s||N(p)-{\mathring N}_{(\ell_0,\xi_0)}({\mathring p})|\gtrsim |s|,
\end{split}
\end{align*}
which shows that the line $P+sN(p)$ does not intersect $\mathcal M$ again. To see that we have a parameterization, suppose $p+\psi(p)N(p) = q+\psi(q)N(q)$ for some $p,q\in\cup_\sigma\Sigma_\sigma$. Then by derivative bounds on $\psi$ and $N$,
\begin{align*}
\begin{split}
|p-q|\leq |\psi(p)N(p)-\psi(q)N(q)|\ll |p-q|,
\end{split}
\end{align*}
which can happen only if $p=q$.
\end{proof}
\subsection{Local Energy Decay for the Product Lorentzian Catenoid} \label{subsec:iled-catenoid}
The notation used in this section is independent of the rest of the paper. We prove a local energy decay (LED) estimate for $\Box+V$, where $\Box$ denotes the wave operator of the product Lorentzian catenoid with metric
\begin{align*}
\begin{split}
g= -\mathrm{d} t\otimes\mathrm{d} t+\frac{\rho^2\jap{\rho}^{2(n-2)}}{\jap{\rho}^{2(n-1)}-1}\mathrm{d} \rho\otimes\mathrm{d}\rho+\jap{\rho}^2 \mathrm{d} \omega\otimes \mathrm{d} \omega,
\end{split}
\end{align*}
and $V$ is a smooth, time-independent potential satisfying $|V|\lesssim \jap{\rho}^{-6}$. This abstract estimate will be used during the proof of LED in Section~\ref{sec:LED}, and the choice of multipliers here motivates the ones made there. To start, let $\psi$ satisfy
\begin{align}\label{eq:LEDproduct1}
\begin{split}
(\Box+V)\psi=g.
\end{split}
\end{align}
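We note in passing that the coefficient of $\mathrm{d}\rho\otimes\mathrm{d}\rho$ in the metric above satisfies, by direct computation,
\begin{align*}
\begin{split}
\frac{\rho^2\jap{\rho}^{2(n-2)}}{\jap{\rho}^{2(n-1)}-1}\to 1\ \text{as }|\rho|\to\infty,\qquad \frac{\rho^2\jap{\rho}^{2(n-2)}}{\jap{\rho}^{2(n-1)}-1}\to \frac{1}{n-1}\ \text{as }\rho\to0,
\end{split}
\end{align*}
so $g$ is asymptotically flat at both ends and remains smooth and nondegenerate across the neck $\{\rho=0\}$.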
In coordinates, we have
\begin{align*}
\begin{split}
\Box:=-\partial_t^2+\Delta = -\partial_t^2+g^{\rho\rho}\partial_\rho^2+\frac{\partial_\rho(\sqrt{|g|}g^{\rho\rho})}{\sqrt{|g|}}\partial_\rho+\jap{\rho}^{-2}{\mathring{\slashed{\Delta}}},
\end{split}
\end{align*}
where ${\mathring{\slashed{\Delta}}}$ denotes the Laplacian on the round sphere of radius one. We will also use the notation ${\slashed{\Delta}}:=\jap{\rho}^{-2}{\mathring{\slashed{\Delta}}}$ and ${\slashed{\nabla}}:=\jap{\rho}^{-1}{\mathring{\slashed{\nabla}}}$, where ${\mathring{\slashed{\nabla}}}$ denotes the gradient on the round sphere of radius one. We use $L^p_x$ to denote the $L^p$ norm on constant $t$ hypersurfaces with respect to the volume form induced by $g$. We will also use the notation $L^p_tL^q_x[t_1,t_2]$ to indicate that the $L^p_t$ norm is calculated over the time interval $[t_1,t_2]$. In this section we use the notation
\begin{align*}
\begin{split}
&\|\phi\|_{LE[t_1,t_2]}^2:=\|\rho\jap{\rho}^{-\frac{1}{2}(3+\alpha)}\partial_t\phi\|_{L^2_{t,x}[t_1,t_2]}^2+\|\rho\jap{\rho}^{-\frac{1}{2}(5+\alpha)}\phi\|_{L^2_{t,x}[t_1,t_2]}^2+\|\rho\jap{\rho}^{-\frac{3}{2}}{\slashed{\nabla}}\phi\|_{L^2_{t,x}[t_1,t_2]}^2\\
&\phantom{\|\phi\|_{LE[t_1,t_2]}^2:=}+\|\jap{\rho}^{-\frac{1+\alpha}{2}}\partial_\rho\phi\|_{L^2_{t,x}[t_1,t_2]}^2,\\
&\|f\|_{LE^\ast[t_1,t_2]}^2:=\|\jap{\rho}^{\frac{1+\alpha}{2}}f\|_{L^2_{t,x}[t_1,t_2]}^2.
\end{split}
\end{align*}
Note the degeneracy at $\rho=0$ for the non-radial derivatives in the definition of the $LE$ norm. This degeneracy appears due to the presence of the trapped sphere at $\rho=0$. The energy norm is defined as
\begin{align*}
\begin{split}
\|\phi\|_E^2=\|\partial_t\phi\|_{L^2_x}^2+\|\partial_x\phi\|_{L^2_x}^2,
\end{split}
\end{align*}
where by definition $\|\partial_x\phi\|_{L^2_x}^2=\int (g^{-1})^{ij}(\partial_i\phi)(\partial_j\phi)\sqrt{|g|}\mathrm{d}\omega\mathrm{d}\rho$ (here and in the remainder of this section $i,j$ run over $1,\dots,n$). The $L^2_x$ pairing with respect to $\sqrt{|g|}$ will be denoted by~$\angles{\cdot}{\cdot}$. We also use the following notation to denote the $L^2_{t,x}$ norm over the region $\{|\rho|\leq1\}$:
\begin{align*}
\begin{split}
\|\phi\|_{L^2_{t,x}(\{|\rho|\leq1\})}.
\end{split}
\end{align*}
We assume that $\Delta+V$ has spectrum consisting of the absolutely continuous part $(-\infty,0]$ and possibly a finite number of eigenvalues at zero and in $(0,\infty)$. In particular, there are no threshold resonances\footnote{By a threshold resonance, we mean a function $\varphi$ belonging to the spatial part of $LE$ but not to $L^2_x$, such that $(\Delta+V)\varphi=0$. Threshold resonances do not exist in our applications, because the strong spatial decay of $V$ and the difference between the coefficients of $\Delta$ and the Euclidean Laplacian imply that a threshold resonance $\varphi$ must decay at the same rate as the Newtonian potential. In dimensions $n\geq 5$ this implies that $\varphi\in L^2_x$.}. The eigenfunctions are assumed to satisfy the decay rate $\jap{\rho}^{-n+1}$ or faster. We use $\mathbb P_c$ to denote the projection onto the continuous spectrum of $\Delta+V$ (with respect to the volume form induced by $g$). These conditions are easily verified when $V=| {\mathrm{I\!I}} |^2$ is the squared norm of the second fundamental form of the standard embedding of ${\underline{\calC}}$ in $\mathbb R^{n+1}$; see Section~\ref{subsec:Riemcat} and \cite{F-CS,Tam-Zhou}.
\begin{proposition}\label{prop:LEDproduct}
Suppose $\psi$ satisfies \eqref{eq:LEDproduct1} on a time interval $[t_1,t_2]$. Then for any small constant $\delta>0$ the following four estimates hold
\begin{align*}
\begin{split}
&\sup_{[t_1,t_2]}\|\mathbb P_c\psi(t)\|_E\lesssim \|\mathbb P_c\psi(t_1)\|_{E}+\|\mathbb P_cg\|_{L^1_tL^2_x[t_1,t_2]},\\
&\sup_{[t_1,t_2]}\|\mathbb P_c\psi(t)\|_E\lesssim \|\mathbb P_c\psi(t_1)\|_{E}+C_\delta\|\mathbb P_cg\|_{LE^\ast[t_1,t_2]}+\delta(\|\mathbb P_c\psi\|_{LE[t_1,t_2]}+\|\partial_t\mathbb P_c\psi\|_{L^2_{t,x}(\{|\rho|\leq1\})}),\\
&\sup_{[t_1,t_2]}\|\mathbb P_c\psi(t)\|_E\lesssim \|\mathbb P_c\psi(t_1)\|_{E}+C_\delta\|\partial_t\mathbb P_cg\|_{LE^\ast[t_1,t_2]}+\|\mathbb P_cg\|_{L^\infty_tL^2_x[t_1,t_2]}+\delta\|\mathbb P_c\psi\|_{LE[t_1,t_2]},\\
&\|\mathbb P_c\psi\|_{LE[t_1,t_2]}\lesssim \sup_{[t_1,t_2]}\|\mathbb P_c\psi(t)\|_E+\|\mathbb P_cg\|_{LE^\ast[t_1,t_2]+L^1_tL^2_x[t_1,t_2]}.
\end{split}
\end{align*}
\end{proposition}
\begin{remark}\label{rem:LEDproduct1}
Combining the first and third estimates in the proposition we get
\begin{align*}
\begin{split}
\sup_{[t_1,t_2]}\|\mathbb P_c\psi(t)\|_E+\|\mathbb P_c\psi\|_{LE[t_1,t_2]}\lesssim \|\mathbb P_c\psi(t_1)\|_{E}+\|\mathbb P_cg\|_{L^1_tL^2_x[t_1,t_2]}.
\end{split}
\end{align*}
This is sufficient for most applications, but in a few instances we need to estimate $\mathbb P_cg$ in the $LE^\ast$ norm. For this we need to use the second and third energy estimates in the statement of the proposition. Note that the last term on the right-hand side of the second estimate cannot be absorbed by the $LE$ norm due to the degeneracy of the $LE$ norm at $\rho=0$.
\end{remark}
\begin{proof}[Proof of Proposition~\ref{prop:LEDproduct}]
Let $\phi=\mathbb P_c\psi$ and $f=\mathbb P_cg$. The first two estimates are standard energy estimates which follow from multiplying the equation by $\partial_t\phi$. The fact that $\phi$ is orthogonal to the eigenfunctions of $\Delta+V$ guarantees that the flux on $\{t=\mathrm{constant}\}$ bounds the energy. For the third estimate, the integral of $f\partial_t\phi$ is treated by integrating by parts in $t$ in the region $\{|\rho|\leq 1\}$ (where the $LE$ norm is degenerate), and by observing that, using Hardy and Poincar\'e type inequalities (see for instance the proof of Lemma~\ref{lem:LEDhighfreq1} in Section~\ref{sec:LED}),
\begin{align*}
\begin{split}
\int_{t_1}^{t_2}\int_{\{|\rho|\leq1\}}(\partial_tf)\phi\,\mathrm{d} x \,\mathrm{d} t\leq \delta\|\phi\|_{LE[t_1,t_2]}^2+C_\delta\|\partial_tf\|_{LE^\ast[t_1,t_2]}^2.
\end{split}
\end{align*}
For the last estimate in the statement of the proposition we start by proving the weaker estimate
\begin{align}\label{eq:LEDproducttemp1}
\begin{split}
\|\phi\|_{LE[t_1,t_2]}\lesssim \sup_{[t_1,t_2]}\|\phi(t)\|_E+\|f\|_{LE^\ast[t_1,t_2]+L^1_tL^2_x[t_1,t_2]}+\|\phi\|_{L^2_{t,x}(\mathcal B)},
\end{split}
\end{align}
where $\mathcal B:=[t_1,t_2]\times B$ and $B$ denotes a large compact spatial region. With $\partial_\rho^\ast$ denoting the formal adjoint of $\partial_\rho$ for the pairing $\angles{\cdot}{\cdot}$, let
\begin{align*}
\begin{split}
Q:=\beta\partial_\rho-\partial_\rho^\ast\beta,\qquad \partial_\rho^\ast=-\partial_\rho-(\partial_\rho\log\sqrt{|g|}),
\end{split}
\end{align*}
where $\beta\equiv\beta(\rho)$ is to be chosen. The main positive commutator identity is
\begin{align}\label{eq:comm1}
\begin{split}
\angles{(\Box+V) \phi}{Q\phi}= -\partial_t\angles{\partial_t\phi}{Q\phi}+\overbrace{ \angles{\Delta \phi}{Q\phi} }^{-\frac{1}{2}\angles{[Q,\Delta]\phi}{\phi} }-\frac{1}{2}\overbrace{\angles{[Q,V]\phi}{\phi}}^{2\angles{(\beta\partial_\rho V)\phi}{\phi}}.
\end{split}
\end{align}
Note that
\[
Q \phi = (\beta \partial_\rho - \partial_\rho^\ast \beta) \phi = 2 \beta (\partial_\rho \phi) - (\partial_\rho^\ast \beta) \phi.
\]
Then we have
\begin{equation} \label{equ:DeltaQ}
\begin{aligned}
\langle - \Delta \phi, Q \phi \rangle &= \langle \partial_\rho^\ast g^{\rho \rho} \partial_\rho \phi, Q \phi \rangle - \langle \jap{\rho}^{-2} {\mathring{\slashed{\Delta}}} \phi, Q \phi \rangle \\
&= \langle \partial_\rho^\ast g^{\rho \rho} \partial_\rho \phi, 2 \beta (\partial_\rho \phi) - (\partial_\rho^\ast \beta) \phi \rangle - \langle \jap{\rho}^{-2} {\mathring{\slashed{\Delta}}} \phi, 2 \beta (\partial_\rho \phi) - (\partial_\rho^\ast \beta) \phi \rangle.
\end{aligned}
\end{equation}
For the first term on the right-hand side we integrate by parts repeatedly and rearrange to find that
\begin{align*}
\langle \partial_\rho^\ast g^{\rho \rho} \partial_\rho \phi, 2 \beta (\partial_\rho \phi) - (\partial_\rho^\ast \beta) \phi \rangle &= \langle g^{\rho \rho} \partial_\rho \phi, (2 (\partial_\rho \beta) - (\partial_\rho^\ast \beta)) \partial_\rho \phi \rangle + \langle g^{\rho \rho} \partial_\rho \phi, 2 \beta (\partial_\rho^2 \phi) \rangle \\
&\quad- \langle g^{\rho \rho} \partial_\rho \phi, (\partial_\rho \partial_\rho^\ast \beta) \phi \rangle \\
&= \langle \partial_\rho \phi, ( 2 g^{\rho \rho} (\partial_\rho \beta) - g^{\rho \rho} (\partial_\rho^\ast \beta) + \partial_\rho^\ast ( g^{\rho \rho} \beta ) ) \partial_\rho \phi \rangle \\
&\quad- \frac{1}{2} \langle g^{\rho \rho} (\partial_\rho \partial_\rho^\ast \beta), \partial_\rho (\phi^2) \rangle \\
&= \langle ( 2 g^{\rho \rho} (\partial_\rho \beta) - (\partial_\rho g^{\rho \rho}) \beta ) \partial_\rho \phi, \partial_\rho \phi \rangle - \frac{1}{2} \langle (\partial_\rho^\ast g^{\rho \rho} \partial_\rho \partial_\rho^\ast \beta) \phi, \phi \rangle \\
&= \langle (2 g^{\rho \rho} (\partial_\rho \beta) - (\partial_\rho g^{\rho \rho}) \beta) \partial_\rho \phi, \partial_\rho \phi \rangle + \frac{1}{2} \langle (\Delta \partial_\rho^\ast \beta) \phi, \phi \rangle.
\end{align*}
For the second term on the right-hand side of~\eqref{equ:DeltaQ} we obtain that
\begin{align*}
- \langle \jap{\rho}^{-2} {\mathring{\slashed{\Delta}}} \phi, 2 \beta (\partial_\rho \phi) - (\partial_\rho^\ast \beta) \phi \rangle &= \langle \jap{\rho}^{-2} {\mathring{\slashed{\nabla}}} \phi, 2 \beta {\mathring{\slashed{\nabla}}} \partial_\rho \phi \rangle - \langle \jap{\rho}^{-2} (\partial_\rho^\ast \beta) {\mathring{\slashed{\nabla}}} \phi, {\mathring{\slashed{\nabla}}} \phi \rangle \\
&= \langle \partial_\rho^\ast ( \jap{\rho}^{-2} \beta ) {\mathring{\slashed{\nabla}}} \phi, {\mathring{\slashed{\nabla}}} \phi \rangle - \langle \jap{\rho}^{-2} (\partial_\rho^\ast \beta) {\mathring{\slashed{\nabla}}} \phi, {\mathring{\slashed{\nabla}}} \phi \rangle \\
&= 2 \langle {\textstyle \frac{\rho}{\jap{\rho}^4} } \beta {\mathring{\slashed{\nabla}}} \phi, {\mathring{\slashed{\nabla}}} \phi \rangle \\
&= 2 \langle {\textstyle \frac{\rho}{\jap{\rho}^2} } \beta {\slashed{\nabla}} \phi, {\slashed{\nabla}} \phi \rangle.
\end{align*}
Putting the above identities together, we arrive at
\begin{align}\label{eq:comm-hard}
\langle - \Delta \phi, Q \phi \rangle = \langle (2 g^{\rho \rho} (\partial_\rho \beta) - (\partial_\rho g^{\rho \rho}) \beta) \partial_\rho \phi, \partial_\rho \phi \rangle + 2 \langle {\textstyle \frac{\rho}{\jap{\rho}^2} } \beta {\slashed{\nabla}} \phi, {\slashed{\nabla}} \phi \rangle + \frac{1}{2} \langle (\Delta \partial_\rho^\ast \beta) \phi, \phi \rangle.
\end{align}
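As a quick sanity check of the signs in \eqref{eq:comm-hard} near the neck, consider $\beta=\rho$: since $g^{\rho\rho}$ is even in $\rho$ with $g^{\rho\rho}(0)=n-1$,
\begin{align*}
\begin{split}
\big(2g^{\rho\rho}(\partial_\rho\beta)-(\partial_\rho g^{\rho\rho})\beta\big)\big|_{\rho=0}=2(n-1)>0,\qquad 2\frac{\rho}{\jap{\rho}^2}\beta=\frac{2\rho^2}{\jap{\rho}^2}\geq0,
\end{split}
\end{align*}
with the angular coefficient vanishing to second order at $\rho=0$, consistent with the degeneracy of the angular part of the $LE$ norm there.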
In view of \eqref{eq:comm-hard}, the identity \eqref{eq:comm1} gives control of the spatial part of the $LE$ norm in \eqref{eq:LEDproducttemp1} with the choice
\begin{align*}
\begin{split}
\beta:=\chi \rho + K(1-\chi) \beta_E,
\end{split}
\end{align*}
where $K$ is a large constant, $\beta_E$ is the Euclidean choice for LED (which is an odd function), and $\chi$ is a radial cut-off which is equal to one on $[-R,R]$, for some large $R$, decays to zero monotonically, and vanishes outside of $[-2R,2R]$. The point is that in this way $\rho \beta$ is positive and, if $K$ is sufficiently large,
\begin{align*}
\begin{split}
\partial_\rho\beta= \chi+K(1-\chi)\partial_\rho\beta_E-(\partial_\rho\chi)(K \beta_E-\rho)
\end{split}
\end{align*}
is also positive in the gluing region. Note that for the angular derivative we get a degeneracy of order two at $\rho=0$. In order to control $\partial_t \phi$ in the LED estimate we also use the multiplier identity
\begin{align}\label{eq:comm2}
\begin{split}
\angles{\Box\phi}{\gamma\phi}=\angles{\gamma\partial_t\phi}{\partial_t\phi}-\angles{\gamma\nabla_g\phi}{\nabla_g\phi}-\partial_t\angles{\gamma\partial_t\phi}{\phi}+\frac{1}{2}\angles{(\Delta\gamma)\phi}{\phi}.
\end{split}
\end{align}
For \eqref{eq:comm2}, applied with $\Box\phi=f-V\phi$, the choice $\gamma(\rho)=\frac{\rho^2}{\jap{\rho}^{3+\alpha}}$ gives
\begin{align*}
\begin{split}
\angles{\frac{\rho^2}{\jap{\rho}^{3+\alpha}}\partial_t\phi}{\partial_t\phi}=&\angles{f-V\phi}{\frac{\rho^2}{\jap{\rho}^{3+\alpha}}\phi}+\angles{\frac{\rho^2}{\jap{\rho}^{3+\alpha}}\nabla_g\phi}{\nabla_g\phi}-\frac{1}{2}\angles{(\Delta\frac{\rho^2}{\jap{\rho}^{3+\alpha}})\phi}{\phi}+\partial_t\angles{\frac{\rho^2}{\jap{\rho}^{3+\alpha}}\partial_t\phi}{\phi}.
\end{split}
\end{align*}
Adding this identity to a suitable multiple of \eqref{eq:comm-hard}, we arrive at \eqref{eq:LEDproducttemp1}.
The $L^2_{t,x}$ error in the bounded region $\mathcal B$ in \eqref{eq:LEDproducttemp1} can now be removed by a standard contradiction argument using the absence of threshold resonances and embedded eigenvalues. See for instance \cite{MMT1} and references therein for an implementation of this argument. Here we provide a brief outline with precise references to \cite{MMT1}. By introducing a suitable damping with parameter $\varepsilon$ and applying the Fourier transform in the time variable, with time frequency variable $\tau$, we can deduce from what we have proved above that for any function $v$ on ${\underline{\calC}}$ (see \cite[Section 4.1]{MMT1}, Steps 1--3)
\begin{align}\label{eq:LEDfreq1}
\begin{split}
\|v\|_{LE_x}\lesssim \|(\Delta+V-\tau-i\varepsilon)v\|_{LE^\ast_x}+\|v\|_{L^2_x(B)},
\end{split}
\end{align}
uniformly in $\varepsilon>0$ and $\tau\in\mathbb R$. Here $\|\cdot\|_{LE_x}$ and $\|\cdot\|_{LE_x^\ast}$ denote the spatial parts of the local energy and dual local energy norms. In fact, by applying the spectral projection $\mathbb P_c$, we can replace $v$ by $\mathbb P_cv$ everywhere in this estimate. For this we use the $O(\jap{\rho}^{-n+1})$ decay of the eigenfunctions of $\Delta+V$ to observe that $\mathbb P_c$ is bounded in the $LE^\ast_x$ norm (which is just the weighted space $L^2(\jap{\rho}^{1+\alpha}\sqrt{|g|}\mathrm{d} \omega\mathrm{d}\rho)$) in dimensions $n\geq 5$. By similar considerations, the estimate we want to prove is equivalent to having the following bound uniformly in $\varepsilon>0$ and $\tau\in\mathbb R$:
\begin{align}\label{eq:LEDfreq2}
\begin{split}
\|\mathbb P_c v\|_{LE_x}\lesssim \|(\Delta+V-\tau-i\varepsilon)\mathbb P_cv\|_{LE^\ast_x}.
\end{split}
\end{align}
For large $|\tau|$, estimate \eqref{eq:LEDfreq2} follows from \eqref{eq:LEDfreq1} (with $v$ replaced by $\mathbb P_cv$) using elliptic estimates exactly as in Step 4 in \cite[Section 4.1]{MMT1}. For $|\tau|\lesssim1$, assuming \eqref{eq:LEDfreq2} fails, we can find sequences $\varepsilon_n\to0$, $\tau_n\to\tau$, and $v_n\in LE_x$ such that $\mathbb P_cv_n=v_n$,
\begin{align*}
\begin{split}
\|(\Delta+V-\tau_n-i\varepsilon_n)v_n\|_{LE_x^\ast}\to 0,\qquad \|v_n\|_{L^2_x(B)}=1,
\end{split}
\end{align*}
and $v_n\rightharpoonup v$ in $LE_x$ and $v_n\to v$ in $L^2_{x,\mathrm{loc}}$ for some $v\in LE_x$ with $\mathbb P_c v=v$. It follows that $(\Delta+V-\tau)v=0$ and $\|v\|_{L^2_x(B)}=1$. To derive a contradiction from this we consider the three separate cases $\tau<0$, $\tau>0$, and $\tau=0$. The first two cases give a contradiction exactly as in Step 6 and Steps 8--10 of \cite[Section 4.1]{MMT1}, respectively. In the case $\tau=0$, as in Step 7 of \cite[Section 4.1]{MMT1}, we see that $v$ must be a threshold resonance, which we assume does not exist, or an eigenfunction with zero eigenvalue, which is ruled out by the fact that $v=\mathbb P_cv$, where we recall that we were allowed to replace $v$ by $\mathbb P_cv$ in \eqref{eq:LEDfreq1} because $\mathbb P_c$ is bounded in $LE_x^\ast$.
\end{proof}
\section{Interior}\label{sec:interior}
In this section we first define the momentum variable ${\dot{\psi}}$ and we derive, in vector form, the first-order equation for $\vec{\psi}$. Then we introduce suitable orthogonality conditions for the modulation parameters $\ell(t)$ and $\xi(t)$, and derive the equations satisfied by $\dot{{\wp}}=({\dot{\ell}},{\dot{\xi}}-\ell)^\intercal$.
Finally, we perform a further decomposition of the perturbation $\vec{\psi}$ to take into account the unstable mode of the linearized operator.
Throughout this section, all error estimates are to be understood to hold under the bootstrap assumptions~\eqref{eq:a+trap}--\eqref{eq:Tjphienergyb2}. Moreover, we refer to Subsection~\ref{subsubsec:calOnotation} for the definition of the $\mathcal O$ notation.
\subsection{Setup}\label{subsec:setupint1}
Our parametrization for the profile in the flat region $\mathcal C_{\mathrm{flat}} := \{ \sigma_{\text{temp}}(X)\geq X^0+ \delta_1\}$ is
\begin{equation} \label{equ:definition_profile_flat_region}
\Psi_\wp(t, \rho, \omega) = \bigl( t, \xi + \gamma^{-1} P_\ell F(\rho, \omega) + P_\ell^\perp F(\rho, \omega) \bigr),
\end{equation}
where $\xi = \xi(t)$ and $\ell = \ell(t)$ are our time-dependent modulation parameters.
We denote the normal to $\Sigma_t$, in the case where the parameters are treated as fixed, by
\begin{equation*}
n_\wp := \Lambda_{-\ell} \begin{pmatrix} 0 \\ \nu \end{pmatrix} = \begin{pmatrix} \gamma \ell \cdot \nu \\ A_{-\ell} \nu \end{pmatrix},
\end{equation*}
where $\nu$ is the geometric normal to the Riemannian catenoid $\underline{\mathcal C}$.
Then $\widetilde{N}_{int} = (0, A_{-\ell} \nu)$ is the normal to $\Sigma_t$ viewed as a subspace of ${\boldsymbol{\Upsigma}}_t$. In the interior we define $N$ to be parallel to ${\widetilde{N}}_{int}$ and such that $\boldsymbol{\eta}(n_\wp, N) = 1$, that is,
\begin{equation*}
N := \begin{pmatrix} 0 \\ |A_{-\ell} \nu|^{-2} A_{-\ell} \nu \end{pmatrix}.
\end{equation*}
Moreover, we write
\begin{equation*}
W := N - n_\wp.
\end{equation*}
We introduce the scalar perturbation $\psi$ in the interior via the decomposition
\begin{equation*}
\Phi = \Psi_\wp + \psi N.
\end{equation*}
Next, we introduce the metric components
\begin{equation*}
g_{\mu \nu} := \boldsymbol{\eta} \bigl( \partial_\mu \Phi, \partial_\nu \Phi \bigr), \quad \quad k_{\mu \nu} := \boldsymbol{\eta} \bigl( \partial_\mu \Psi_\wp, \partial_\nu \Psi_\wp \bigr), \quad \quad 0 \leq \mu, \nu \leq n,
\end{equation*}
and the Lagrangian density
\begin{equation*}
\mathcal L := \sqrt{|g|} = \sqrt{-\det(g)}.
\end{equation*}
We also introduce
\begin{align*}
\Psi_{\mu; \wp} &:= \partial_\mu \Psi_\wp \Big|_{\substack{\dot{\ell} = 0 \\\dot{\xi} = \ell}}, \quad \quad 0 \leq \mu \leq n, \\
h_{\mu \nu} &:= \boldsymbol{\eta} \bigl( \partial_\mu \Psi_\wp, \partial_\nu \Psi_\wp \bigr) \Big|_{\substack{\dot{\ell} = 0 \\ \dot{\xi} = \ell}} = \boldsymbol{\eta} \bigl( \Psi_{\mu; \wp}, \Psi_{\nu; \wp} \bigr), \quad \quad 0 \leq \mu, \nu \leq n.
\end{align*}
Then we have
\begin{align*}
\Psi_{0;\wp} = \begin{pmatrix} 1 \\ \ell \end{pmatrix}, \quad \quad \Psi_{j;\wp} = \begin{pmatrix} 0 \\ \gamma^{-1} P_\ell \partial_j F + P_\ell^\perp \partial_j F \end{pmatrix}, \quad \quad 1 \leq j \leq n.
\end{align*}
{\bf In what follows, we still view $\ell$ as time-dependent in the expressions for $\Psi_{\mu; \wp}$ and $h_{\mu\nu}$.}
\begin{remark} \label{rem:HVMCj_for_Psinu}
A parametrization of the Lorentzian catenoid boosted by a fixed $\ell \in \mathbb R^{n+1}$, $|\ell| < 1$, and translated by $a\in \mathbb R^{n+1}$ (with $\xi$ corresponding to $a+t\ell$) is given by
\begin{equation*}
\Gamma_{a,\ell}(t, \rho, \omega) := \bigl( t, a+t\ell + \gamma^{-1} P_\ell F(\rho, \omega) + P_\ell^\perp F(\rho, \omega) \bigr).
\end{equation*}
Then $\Gamma_{a,\ell}$ satisfies the HVMC equation
\begin{equation*}
\frac{1}{\sqrt{|\kappa|}} \partial_\mu \bigl( \sqrt{|\kappa|} \kappa^{\mu \nu} \pnu \Gamma_{a,\ell} \bigr) = 0
\end{equation*}
with $\kappa_{\mu \nu} = \boldsymbol{\eta}( \partial_\mu \Gamma_{a,\ell}, \partial_\nu \Gamma_{a,\ell} )$.
Since the metric coefficients $\kappa_{\mu \nu}$ are time-independent and since by direct computation $\partial_t \pnu \Gamma_{a,\ell} = 0$ for fixed $\ell$, we in fact have
\begin{equation} \label{equ:remark_HVMCj_pj_kappa}
\partial_j \bigl( \sqrt{|\kappa|} \kappa^{j \nu} \pnu \Gamma_{a,\ell} \bigr) = 0.
\end{equation}
Formally, the expressions for $h_{\mu \nu}$ and $\Psi_{\nu; \wp}$ are the same as those for $\kappa_{\mu \nu}$ and $\pnu \Gamma_{a,\ell}$, except that the parameter $\ell$ is considered time-dependent in $h_{\mu \nu}$ and $\Psi_{\nu; \wp}$, while it is considered time-independent in $\kappa_{\mu \nu}$ and $\pnu \Gamma_{a,\ell}$. Since in the preceding equation~\eqref{equ:remark_HVMCj_pj_kappa} only spatial derivatives act from outside, we can conclude that
\begin{equation*}
\begin{aligned}
\partial_j \bigl( \sqrt{|h|} h^{j\nu} \Psi_{\nu;\wp} \bigr) = 0.
\end{aligned}
\end{equation*}
\end{remark}
\begin{remark} \label{rem:hinv0j_smallness}
We point out that the coefficients $(h^{-1})^{0j} = \mathcal O(\ell)$, $1 \leq j \leq n$, have extra smallness. This follows from observing that $h_{00}= -1 + |\ell|^2$, $h_{0j} = \mathcal O(|\ell|)$, and $h_{ij} = \partial_i F \cdot \partial_j F + \mathcal O(|\ell|^2)$.
\end{remark}
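For the reader's convenience, we briefly sketch the computation behind Remark~\ref{rem:hinv0j_smallness}. From the explicit expressions for $\Psi_{\mu;\wp}$ we get $h_{00} = \boldsymbol{\eta}\bigl( (1,\ell), (1,\ell) \bigr) = -1 + |\ell|^2$. Writing $h = \bar h + E$ with
\begin{align*}
\begin{split}
\bar h_{00} = -1, \qquad \bar h_{0j} = 0, \qquad \bar h_{ij} = \partial_i F \cdot \partial_j F,
\end{split}
\end{align*}
so that $E_{00} = |\ell|^2$, $E_{0j} = \mathcal O(|\ell|)$, and $E_{ij} = \mathcal O(|\ell|^2)$, a Neumann series expansion gives
\begin{align*}
\begin{split}
(h^{-1})^{\mu\nu} = (\bar h^{-1})^{\mu\nu} - (\bar h^{-1})^{\mu\alpha} E_{\alpha\beta} (\bar h^{-1})^{\beta\nu} + \mathcal O(|\ell|^2).
\end{split}
\end{align*}
Since $\bar h$ is block diagonal, $(\bar h^{-1})^{0j} = 0$, and hence $(h^{-1})^{0j} = -(\bar h^{-1})^{00} E_{0k} (\bar h^{-1})^{kj} + \mathcal O(|\ell|^2) = \mathcal O(|\ell|)$.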
\subsection{Definition of the momentum variable}\label{susec:momentumvariable}
The HVMC equation $\Box_g \Phi = 0$ for $\Phi = \Psi_\wp + \psi N$ gives rise to a second-order quasilinear wave equation for the scalar $\psi$. In order to be able to formulate first-order modulation equations later on, our first goal is to arrive at a suitable formulation of an associated system of linearized first-order equations for
\begin{equation}
\vec{\psi} := \begin{pmatrix} \psi \\ \dot{\psi} \end{pmatrix}
\end{equation}
for a suitably defined momentum variable $\dot{\psi}$.
We begin by motivating our definition of $\dot{\psi}$. A good starting point is to examine the Euler-Lagrange equation for $\psi$ given by (here and below the notation $\frac{\delta}{\delta\psi}$, etc., simply denotes the partial derivative of $\mathcal L$ with respect to the corresponding variable, and not the functional derivative)
\begin{equation} \label{equ:EL_for_phi}
\partial_t \biggl( \frac{\delta \mathcal L}{\delta (\partial_t \psi)} \biggr) + \partial_j \biggl( \frac{\delta \mathcal L}{\delta (\partial_j \psi)} \biggr) = \frac{\delta \mathcal L}{\delta \psi}.
\end{equation}
It suggests that the quantity $\dot{\psi}$ should be part of
\begin{equation*}
\frac{\delta \mathcal L}{\delta (\partial_t \psi)} = \frac12 \sqrt{|g|} (g^{-1})^{\mu \nu} \frac{\delta g_{\mu \nu}}{\delta (\partial_t \psi)}.
\end{equation*}
Since $g_{\mu \nu} = \boldsymbol{\eta}(\partial_\mu \Phi, \partial_\nu \Phi)$ and $\partial_\mu \Phi = \partial_\mu \Psi_\wp + (\partial_\mu \psi) N + \psi \partial_\mu N$, we have
\begin{equation*}
\frac{\delta g_{\mu \nu}}{\delta (\partial_t \psi)} = \delta_{\nu 0} \boldsymbol{\eta}(\partial_\mu \Phi, N) + \delta_{\mu 0} \boldsymbol{\eta}(N, \partial_\nu \Phi),
\end{equation*}
and therefore
\begin{equation*}
\frac{\delta \mathcal L}{\delta (\partial_t \psi)} = \boldsymbol{\eta}( \sqrt{|g|} (g^{-1})^{0 \nu} \partial_\nu \Phi, N).
\end{equation*}
Next, we determine the precise expression for $\frac{\delta \mathcal L}{\delta (\partial_t \psi)}$ up to quadratic (and higher-order) terms in the perturbation $\psi$ and its derivatives $\partial_\mu \psi$. To this end we first record the following expansions
\begin{equation} \label{equ:expansion_ginversemunu}
\begin{aligned}
(g^{-1})^{\mu \nu} &= (k^{-1})^{\mu \nu} - (k^{-1})^{\mu \alpha} \boldsymbol{\eta}( \partial_\alpha \Psi_\wp, (\partial_\beta \psi) N + \psi \partial_\beta N ) (k^{-1})^{\beta \nu} \\
&\quad \quad - (k^{-1})^{\mu \alpha} \boldsymbol{\eta}\bigl( (\partial_\alpha \psi) N + \psi (\partial_\alpha N), \partial_\beta \Psi_\wp \bigr) (k^{-1})^{\beta \nu} + \mathcal O\bigl( (\psi, \partial_\mu \psi)^2 \bigr)
\end{aligned}
\end{equation}
as well as
\begin{equation} \label{equ:expansion_sqrtmodg}
\begin{aligned}
\sqrt{|g|} &= \sqrt{|k|} + \sqrt{|k|} (k^{-1})^{\alpha \beta} \boldsymbol{\eta}( \partial_\alpha \Psi_\wp, (\partial_\beta \psi) N + \psi \partial_\beta N ) + \mathcal O\bigl( (\psi, \partial_\mu \psi)^2 \bigr).
\end{aligned}
\end{equation}
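We briefly indicate how these expansions arise. Writing $g_{\mu\nu} = k_{\mu\nu} + \delta g_{\mu\nu}$ with
\begin{equation*}
\delta g_{\mu\nu} = \boldsymbol{\eta}\bigl( \partial_\mu \Psi_\wp, (\partial_\nu \psi) N + \psi \partial_\nu N \bigr) + \boldsymbol{\eta}\bigl( (\partial_\mu \psi) N + \psi \partial_\mu N, \partial_\nu \Psi_\wp \bigr) + \mathcal O\bigl( (\psi, \partial_\mu \psi)^2 \bigr),
\end{equation*}
the expansion \eqref{equ:expansion_ginversemunu} follows from $\delta (g^{-1})^{\mu\nu} = - (k^{-1})^{\mu\alpha} \delta g_{\alpha\beta} (k^{-1})^{\beta\nu} + \mathcal O(\delta g^2)$, while \eqref{equ:expansion_sqrtmodg} follows from Jacobi's formula
\begin{equation*}
\sqrt{|g|} = \sqrt{|k|} \Bigl( 1 + \frac12 (k^{-1})^{\alpha\beta} \delta g_{\alpha\beta} + \mathcal O(\delta g^2) \Bigr),
\end{equation*}
where the symmetry of $(k^{-1})^{\alpha\beta}$ combines the two linear terms of $\delta g_{\alpha\beta}$ into the single term recorded in \eqref{equ:expansion_sqrtmodg}.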
Expanding up to terms that are at least quadratic in the perturbation $\psi$ and its derivatives $\partial_\mu \psi$, we have
\begin{equation} \label{equ:expand_delta_L_delta_ptphi}
\begin{aligned}
\frac{\delta \mathcal L}{\delta (\partial_t \psi)} &= \boldsymbol{\eta}( \sqrt{|g|} (g^{-1})^{0 \nu} \partial_\nu \Phi, N) \\
&= \sqrt{|k|} (k^{-1})^{0 \nu} \boldsymbol{\eta}( \partial_\nu \Psi_\wp, N) + \boldsymbol{\eta} \biggl( \frac{\delta(\sqrt{|g|} (g^{-1})^{0 \nu} \partial_\nu \Phi)}{\delta (\partial_\mu \psi)} \bigg|_{\psi=0} \partial_\mu \psi, N \biggr) \\
&\quad + \boldsymbol{\eta} \biggl( \frac{\delta(\sqrt{|g|} (g^{-1})^{0 \nu} \partial_\nu \Phi)}{\delta \psi} \bigg|_{\psi=0} \psi, N \biggr) + \mathcal E_1,
\end{aligned}
\end{equation}
where the remainder term $\mathcal E_1$ satisfies $\mathcal E_1 = \mathcal O\bigl( (\psi, \partial_\mu \psi)^2 \bigr)$.
By direct computation, using the expansions \eqref{equ:expansion_ginversemunu} and \eqref{equ:expansion_sqrtmodg}, we obtain
\begin{equation} \label{equ:deltaLdeltaphi_munu_1storder}
\begin{aligned}
\frac{\delta (\sqrt{|g|} (g^{-1})^{\mu \nu} \pnu \Phi)}{\delta \psi} \bigg|_{\psi=0} &= \sqrt{|k|} (k^{-1})^{\mu \nu} \pnu N + \sqrt{|k|} (k^{-1})^{\mu \nu} (k^{-1})^{\alpha \beta} \boldsymbol{\eta}(\partial_\alpha \Psi_\wp, \partial_\beta N) \pnu \Psi_\wp \\
&\quad - \sqrt{|k|} (k^{-1})^{\mu \alpha} (k^{-1})^{\beta \nu} \boldsymbol{\eta}(\partial_\alpha \Psi_\wp, \partial_\beta N) \pnu \Psi_\wp \\
&\quad - \sqrt{|k|} (k^{-1})^{\mu \alpha} (k^{-1})^{\beta \nu} \boldsymbol{\eta}(\partial_\alpha N, \partial_\beta \Psi_\wp) \pnu \Psi_\wp
\end{aligned}
\end{equation}
and
\begin{equation} \label{equ:deltaLdeltaphi_0nu_1storder}
\begin{aligned}
\frac{\delta (\sqrt{|g|} (g^{-1})^{0\nu} \pnu \Phi)}{\delta(\partial_\mu \psi)} \bigg|_{\psi=0} &= \sqrt{|k|} (k^{-1})^{0\mu} N + \sqrt{|k|} (k^{-1})^{\alpha \mu} (k^{-1})^{0\nu} \boldsymbol{\eta}(\partial_\alpha \Psi_\wp, N) \pnu \Psi_\wp \\
&\quad - \sqrt{|k|} (k^{-1})^{0\alpha} (k^{-1})^{\mu\nu} \boldsymbol{\eta}(\partial_\alpha \Psi_\wp, N) \pnu \Psi_\wp \\
&\quad - \sqrt{|k|} (k^{-1})^{0\mu} (k^{-1})^{\beta \nu} \boldsymbol{\eta}(N, \partial_\beta \Psi_\wp) \pnu \Psi_\wp.
\end{aligned}
\end{equation}
For later use we also record that
\begin{equation} \label{equ:deltaLdeltaphi_jnu_1storder}
\begin{aligned}
\frac{\delta (\sqrt{|g|} (g^{-1})^{j\nu} \pnu \Phi)}{\delta (\partial_\mu \psi)} \bigg|_{\psi=0} &= \sqrt{|k|} (k^{-1})^{j\mu} N + \sqrt{|k|} (k^{-1})^{\alpha \mu} (k^{-1})^{j\nu} \boldsymbol{\eta}(\partial_\alpha \Psi_\wp, N) \pnu \Psi_\wp \\
&\quad - \sqrt{|k|} (k^{-1})^{j\alpha} (k^{-1})^{\mu \nu} \boldsymbol{\eta}(\partial_\alpha \Psi_\wp, N) \pnu \Psi_\wp \\
&\quad - \sqrt{|k|} (k^{-1})^{j\mu} (k^{-1})^{\beta \nu} \boldsymbol{\eta}(N, \partial_\beta \Psi_\wp) \pnu \Psi_\wp.
\end{aligned}
\end{equation}
We proceed with a further examination of the terms on the right-hand side of~\eqref{equ:expand_delta_L_delta_ptphi}.
Using that $\boldsymbol{\eta}(\partial_j \Psi_\wp, N) = 0$, we can rewrite the first term on the RHS of \eqref{equ:expand_delta_L_delta_ptphi} as
\begin{equation}
\begin{aligned}
\sqrt{|k|} (k^{-1})^{0 \nu} \boldsymbol{\eta}( \partial_\nu \Psi_\wp, N) &= \sqrt{|k|} (k^{-1})^{00} \boldsymbol{\eta}( \partial_t \Psi_\wp, N) \\
&= \sqrt{|h|} (h^{-1})^{00} \boldsymbol{\eta} \bigl( (1,\ell), N \bigr) \\
&\quad + \sqrt{|h|} (h^{-1})^{00} \boldsymbol{\eta} \bigl( \partial_t \Psi_\wp - (1,\ell), N \bigr) \\
&\quad + \boldsymbol{\eta} \Bigl( \bigl( \sqrt{|k|} (k^{-1})^{00} - \sqrt{|h|} (h^{-1})^{00} \bigr) \partial_t \Psi_\wp, N \Bigr) \\
&= \sqrt{|h|} (h^{-1})^{00} \boldsymbol{\eta} \bigl( (1,\ell), N \bigr) \\
&\quad + \sqrt{|h|} (h^{-1})^{00} \boldsymbol{\eta} \Bigl( \bigl( \dot{\ell} \cdot \nabla_\ell + (\dot{\xi}-\ell) \cdot \nabla_\xi \bigr) \Psi_\wp, N \Bigr) + \mathcal E_2,
\end{aligned}
\end{equation}
where the remainder term satisfies
\begin{equation*}
\begin{aligned}
\mathcal E_2 := \boldsymbol{\eta} \Bigl( \bigl( \sqrt{|k|} (k^{-1})^{00} - \sqrt{|h|} (h^{-1})^{00} \bigr) \partial_t \Psi_\wp, N \Bigr) = \mathcal O\bigl({\dot{\wp}}^2, {\dot{\wp}} \ell\bigr).
\end{aligned}
\end{equation*}
Using~\eqref{equ:deltaLdeltaphi_0nu_1storder} we obtain that the second term on the right-hand side of~\eqref{equ:expand_delta_L_delta_ptphi} is explicitly given by
\begin{equation} \label{equ:expand_delta_L_delta_ptphi_2nd_term}
\begin{aligned}
&\boldsymbol{\eta} \biggl( \frac{\delta(\sqrt{|g|} (g^{-1})^{0 \nu} \partial_\nu \Phi)}{\delta (\partial_\mu \psi)} \bigg|_{\psi=0} \partial_\mu \psi, N \biggr) \\
&\quad = \sqrt{|k|} (k^{-1})^{0\mu} (\partial_\mu \psi) \boldsymbol{\eta}\bigl(N, N - (k^{-1})^{\alpha \beta} \boldsymbol{\eta}( N, \partial_\alpha \Psi_\wp ) \partial_\beta \Psi_\wp \bigr).
\end{aligned}
\end{equation}
Now recall that if $\Psi_\wp$ were a genuine maximal embedding, then $\{ n_\wp, \pnu \Psi_\wp\}$ would form a basis of the ambient space with $(k^{-1})^{\alpha \beta} \boldsymbol{\eta}( N, \partial_\alpha \Psi_\wp ) \partial_\beta \Psi_\wp$ denoting the tangential part of $N$. Since $\boldsymbol{\eta}(N, n_\wp) = 1$ by construction, the right-hand side of the preceding identity~\eqref{equ:expand_delta_L_delta_ptphi_2nd_term} would then just read $\sqrt{|k|} (k^{-1})^{0\mu} \partial_\mu \psi$. To quantify the difference, we write
\begin{equation*}
\begin{aligned}
(k^{-1})^{\alpha \beta} \boldsymbol{\eta}( N, \partial_\alpha \Psi_\wp ) \partial_\beta \Psi_\wp = (h^{-1})^{\alpha \beta} \boldsymbol{\eta}( N, \Psi_{\alpha; \wp} ) \Psi_{\beta; \wp} + \mathcal E_{3,1} + \mathcal E_{3,2}
\end{aligned}
\end{equation*}
with remainder terms
\begin{align*}
\mathcal E_{3,1} &:= \bigl( (k^{-1})^{\alpha \beta} - (h^{-1})^{\alpha \beta} \bigr) \boldsymbol{\eta}( N, \partial_\alpha \Psi_\wp ) \partial_\beta \Psi_\wp, \\
\mathcal E_{3,2} &:= (h^{-1})^{\alpha \beta} \boldsymbol{\eta}\bigl( N, \partial_\alpha \Psi_\wp - \Psi_{\alpha;\wp} \bigr) \partial_\beta \Psi_\wp + (h^{-1})^{\alpha \beta} \boldsymbol{\eta}( N, \Psi_{\alpha;\wp} ) \bigl( \partial_\beta \Psi_\wp - \Psi_{\beta; \wp} \bigr)
\end{align*}
of the form $\mathcal O({\dot{\wp}})$.
Correspondingly, we have
\begin{equation*}
\begin{aligned}
\boldsymbol{\eta} \biggl( \frac{\delta(\sqrt{|g|} (g^{-1})^{0 \nu} \partial_\nu \Phi)}{\delta (\partial_\mu \psi)} \bigg|_{\psi=0} \partial_\mu \psi, N \biggr) &= \sqrt{|k|} (k^{-1})^{0\mu} \partial_\mu \psi + \mathcal E_3
\end{aligned}
\end{equation*}
with remainder term
\begin{equation*}
\begin{aligned}
\mathcal E_3 := \sqrt{|k|} (k^{-1})^{0\mu} (\partial_\mu \psi) \boldsymbol{\eta}\bigl(N, \mathcal E_{3,1} + \mathcal E_{3,2} \bigr) = \mathcal O\bigl( (\partial \psi) {\dot{\wp}}\bigr).
\end{aligned}
\end{equation*}
In order to obtain a more favorable structure of the linearized equation for $\vec{\psi}$, it is preferable to remove the linear terms involving $\psi$ from the above candidate for $\dot{\psi}$, that is, in \eqref{equ:expand_delta_L_delta_ptphi}.
But in order to make sure that no second-order derivatives of the parameters appear when we compute $\partial_t{\dot{\psi}}$, we replace $k$ by $h$ when subtracting off the linear contributions of $\psi$ in \eqref{equ:expand_delta_L_delta_ptphi}. More specifically, we only subtract off those expressions in which we think of $\ell$ as being time-independent (so, for instance, $\partial_t N$ should be thought of as zero and we use $\Psi_{\nu; \wp}$ instead of $\partial_\nu \Psi_\wp$). We introduce the following more succinct notation for these terms:
\begin{equation} \label{equ:definition_B}
\begin{aligned}
B \psi &:= \boldsymbol{\eta} \biggl( \frac{\delta (\sqrt{|g|} (g^{-1})^{0 \nu} \partial_\nu \Phi)}{\delta \psi} \bigg|_{\substack{\psi = 0 \\ k = h}} \psi, N \biggr) \\
&= \sqrt{|h|} (h^{-1})^{0j} \boldsymbol{\eta}( \partial_j N, N ) \psi \\
&\quad - \sqrt{|h|} (h^{-1})^{0 \alpha} (h^{-1})^{j \nu} \boldsymbol{\eta}( \Psi_{\alpha; \wp}, \partial_j N ) \boldsymbol{\eta}( \Psi_{\nu; \wp}, N ) \psi \\
&\quad - \sqrt{|h|} (h^{-1})^{0 j} (h^{-1})^{\beta \nu} \boldsymbol{\eta}( \partial_j N, \Psi_{\beta; \wp} ) \boldsymbol{\eta}( \Psi_{\nu; \wp}, N ) \psi \\
&\quad + \sqrt{|h|} (h^{-1})^{\alpha j} (h^{-1})^{0\nu} \boldsymbol{\eta}( \Psi_{\alpha; \wp}, \partial_j N ) \boldsymbol{\eta}( \Psi_{\nu; \wp}, N) \psi.
\end{aligned}
\end{equation}
We denote the resulting difference by
\begin{equation}
\mathcal E_4 := \boldsymbol{\eta} \biggl( \frac{\delta (\sqrt{|g|} (g^{-1})^{0 \nu} \partial_\nu \Phi)}{\delta \psi} \bigg|_{\substack{\psi = 0}} \psi, N \biggr) - \boldsymbol{\eta} \biggl( \frac{\delta (\sqrt{|g|} (g^{-1})^{0 \nu} \partial_\nu \Phi)}{\delta \psi} \bigg|_{\substack{\psi = 0 \\ k = h}} \psi, N \biggr),
\end{equation}
which satisfies $\mathcal E_4 = \mathcal O( {\dot{\wp}} \psi )$.
\begin{remark}
The notation $\frac{\delta (\cdot)}{\delta \psi} \big|_{\substack{\psi = 0 \\ k = h}}$ indicates that we compute $\frac{\delta}{\delta \psi}$ while the $t$-dependence of $\xi$ and $\ell$ is frozen, i.e., we replace $\dot{\xi}$ by $\ell$ as well as $\dot{\ell}$ by $0$, and we use $\Psi_{\nu; \wp}$ instead of $\partial_\nu \Psi_\wp$. This means that $\partial_t (B \psi)$ will involve at most one $t$-derivative of $\xi$ or $\ell$. Moreover, in those terms $\dot{\xi}$ or $\dot{\ell}$ are always multiplied by $\psi$.
\end{remark}
We thus arrive at the following definition:
\begin{equation}
\begin{aligned}
\dot{\psi} := \boldsymbol{\eta} \bigl( \sqrt{|g|} (g^{-1})^{0 \nu} \partial_\nu \Phi, N \bigr) - \boldsymbol{\eta} \bigl( \sqrt{|h|} (h^{-1})^{00} (1, \ell), N \bigr) - B \psi.
\end{aligned}
\end{equation}
\subsection{Relation between $\dot{\psi}$ and $\partial_t \psi$}
Next, we record the relation between $\dot{\psi}$ and $\partial_t \psi$.
From the preceding we obtain
\begin{equation*}
\begin{aligned}
\dot{\psi} = \sqrt{|h|} (h^{-1})^{00} \boldsymbol{\eta} \Bigl( \bigl( \dot{\ell} \cdot \nabla_\ell + (\dot{\xi}-\ell) \cdot \nabla_\xi \bigr) \Psi_\wp, N \Bigr) + \sqrt{|k|} (k^{-1})^{0\mu} \partial_\mu \psi + \mathcal E_1 + \ldots + \mathcal E_4.
\end{aligned}
\end{equation*}
Solving for $\partial_t \psi$ yields
\begin{equation*}
\begin{aligned}
\partial_t \psi &= \frac{1}{\sqrt{|k|} (k^{-1})^{00}} \dot{\psi} - \frac{\sqrt{|k|} (k^{-1})^{0j}}{\sqrt{|k|} (k^{-1})^{00}} \partial_j \psi \\
&\quad - \frac{\sqrt{|h|} (h^{-1})^{00}}{\sqrt{|k|} (k^{-1})^{00}} \boldsymbol{\eta} \Bigl( \bigl( \dot{\ell} \cdot \nabla_\ell + (\dot{\xi}-\ell) \cdot \nabla_\xi \bigr) \Psi_\wp, N \Bigr) - \frac{1}{\sqrt{|k|} (k^{-1})^{00}} \bigl( \mathcal E_1 + \ldots + \mathcal E_4 \bigr).
\end{aligned}
\end{equation*}
Upon rewriting the prefactors in terms of $h$, which accrues further errors, we arrive at the relation
\begin{equation} \label{equ:relation_ptphi_dotphi}
\begin{aligned}
\partial_t \psi &= \frac{1}{\sqrt{|h|} (h^{-1})^{00}} \dot{\psi} - \frac{(h^{-1})^{0j}}{(h^{-1})^{00}} \partial_j \psi - \boldsymbol{\eta} \Bigl( \bigl( \dot{\ell} \cdot \nabla_\ell + (\dot{\xi}-\ell) \cdot \nabla_\xi \bigr) \Psi_\wp, N \Bigr) + f,
\end{aligned}
\end{equation}
where
\begin{equation*}
\begin{aligned}
f &:= - \frac{1}{\sqrt{|k|} (k^{-1})^{00}} \bigl( \mathcal E_1 + \ldots + \mathcal E_4 \bigr) + \biggl( \frac{1}{\sqrt{|k|} (k^{-1})^{00}} - \frac{1}{\sqrt{|h|} (h^{-1})^{00}} \biggr) \dot{\psi} \\
&\quad - \biggl( \frac{\sqrt{|k|} (k^{-1})^{0j}}{\sqrt{|k|} (k^{-1})^{00}} - \frac{\sqrt{|h|} (h^{-1})^{0j}}{\sqrt{|h|} (h^{-1})^{00}} \biggr) \partial_j \psi \\
&\quad + \biggl(- \frac{\sqrt{|h|} (h^{-1})^{00}}{\sqrt{|k|} (k^{-1})^{00}} + 1\biggr) \boldsymbol{\eta} \Bigl( \bigl( \dot{\ell} \cdot \nabla_\ell + (\dot{\xi}-\ell) \cdot \nabla_\xi \bigr) \Psi_\wp, N \Bigr).
\end{aligned}
\end{equation*}
\begin{remark}
Note that the term $f$ still contains $\partial_t \psi$ terms, but those come with additional smallness. Correspondingly, under suitable smallness assumptions we can use the implicit function theorem to solve for $\partial_t \psi$, as we will do further below.
\end{remark}
\subsection{Computation of $\partial_t \dot{\psi}$}
Next, we compute the time derivative of $\dot{\psi}$,
\begin{equation} \label{equ:pt_dotphi}
\begin{aligned}
\partial_t \dot{\psi} &= \boldsymbol{\eta} \Bigl( \partial_t \bigl( \sqrt{|g|} (g^{-1})^{0 \nu} \partial_\nu \Phi \bigr), N \Bigr) - \boldsymbol{\eta} \Bigl( \partial_t \bigl( \sqrt{|h|} (h^{-1})^{00} (1, \ell) \bigr), N \Bigr) - \partial_t \bigl( B \psi \bigr) \\
&\quad \quad + \boldsymbol{\eta} \Bigl( \sqrt{|g|} (g^{-1})^{0 \nu} \partial_\nu \Phi - \sqrt{|h|} (h^{-1})^{00} (1, \ell), \partial_t N \Bigr).
\end{aligned}
\end{equation}
We rewrite the first term on the right-hand side of~\eqref{equ:pt_dotphi} as
\begin{equation} \label{equ:pt_dotphi_1st_term}
\begin{aligned}
\boldsymbol{\eta} \Bigl( \partial_t \bigl( \sqrt{|g|} (g^{-1})^{0 \nu} \partial_\nu \Phi \bigr), N \Bigr) &= \boldsymbol{\eta} \Bigl( \partial_\mu \bigl( \sqrt{|g|} (g^{-1})^{\mu \nu} \partial_\nu \Phi \bigr), N \Bigr) - \boldsymbol{\eta} \Bigl( \partial_j \bigl( \sqrt{|g|} (g^{-1})^{j \nu} \partial_\nu \Phi \bigr), N \Bigr) \\
&= - \boldsymbol{\eta} \Bigl( \partial_j \bigl( \sqrt{|g|} (g^{-1})^{j \nu} \partial_\nu \Phi \bigr) - \partial_j \bigl( \sqrt{|k|} (k^{-1})^{j \nu} \partial_\nu \Psi_\wp \bigr) , N \Bigr) \\
&\quad - \boldsymbol{\eta} \Bigl( \partial_j \bigl( \sqrt{|k|} (k^{-1})^{j \nu} \partial_\nu \Psi_\wp \bigr) - \partial_j \bigl( \sqrt{|h|} (h^{-1})^{j \nu} \Psi_{\nu; \wp} \bigr), N \Bigr),
\end{aligned}
\end{equation}
where we used the HVMC equation $\partial_\mu \bigl( \sqrt{|g|} (g^{-1})^{\mu \nu} \partial_\nu \Phi \bigr) = 0$ and that $\partial_j \bigl( \sqrt{|h|} (h^{-1})^{j \nu} \Psi_{\nu; \wp} \bigr) = 0$, see Remark~\ref{rem:HVMCj_for_Psinu}. Then to leading order the first term on the right-hand side of~\eqref{equ:pt_dotphi_1st_term} is given by
\begin{equation*}
\begin{aligned}
&- \boldsymbol{\eta} \Bigl( \partial_j \bigl( \sqrt{|g|} (g^{-1})^{j \nu} \partial_\nu \Phi \bigr) - \partial_j \bigl( \sqrt{|k|} (k^{-1})^{j \nu} \partial_\nu \Psi_\wp \bigr) , N \Bigr) \\
&= - \boldsymbol{\eta} \biggl( \partial_j \Bigl( \frac{ \delta (\sqrt{|g|} (g^{-1})^{j \nu} \pnu \Phi) }{\delta (\partial_\mu \psi)} \bigg|_{\substack{\psi = 0, \\ k = h}} \partial_\mu \psi \Bigr), N \biggr) - \boldsymbol{\eta} \biggl( \partial_j \Bigl( \frac{ \delta (\sqrt{|g|} (g^{-1})^{j \nu} \pnu \Phi) }{\delta \psi} \bigg|_{\substack{\psi = 0, \\ k = h}} \psi \Bigr), N \biggr) + \mathcal E_5 + \mathcal E_6
\end{aligned}
\end{equation*}
with remainder terms
\begin{align*}
\mathcal E_5 &:= \boldsymbol{\eta} \biggl( \partial_j \Bigl( \frac{ \delta (\sqrt{|g|} (g^{-1})^{j \nu} \pnu \Phi) }{\delta (\partial_\mu \psi)} \bigg|_{\substack{\psi = 0, \\ k = h}} \partial_\mu \psi \Bigr), N \biggr) + \boldsymbol{\eta} \biggl( \partial_j \Bigl( \frac{ \delta (\sqrt{|g|} (g^{-1})^{j \nu} \pnu \Phi) }{\delta \psi} \bigg|_{\substack{\psi = 0, \\ k = h}} \psi \Bigr), N \biggr) \\
&\quad - \boldsymbol{\eta} \biggl( \partial_j \Bigl( \frac{ \delta (\sqrt{|g|} (g^{-1})^{j \nu} \pnu \Phi) }{\delta (\partial_\mu \psi)} \bigg|_{\psi = 0} \partial_\mu \psi \Bigr), N \biggr) - \boldsymbol{\eta} \biggl( \partial_j \Bigl( \frac{ \delta (\sqrt{|g|} (g^{-1})^{j \nu} \pnu \Phi) }{\delta \psi} \bigg|_{\psi = 0} \psi \Bigr), N \biggr)
\end{align*}
and
\begin{align*}
\mathcal E_6 &:= \boldsymbol{\eta} \biggl( \partial_j \Bigl( \frac{ \delta (\sqrt{|g|} (g^{-1})^{j \nu} \pnu \Phi) }{\delta (\partial_\mu \psi)} \bigg|_{\psi = 0} \partial_\mu \psi \Bigr), N \biggr) + \boldsymbol{\eta} \biggl( \partial_j \Bigl( \frac{ \delta (\sqrt{|g|} (g^{-1})^{j \nu} \pnu \Phi) }{\delta \psi} \bigg|_{\psi = 0} \psi \Bigr), N \biggr) \\
&\quad - \boldsymbol{\eta} \Bigl( \partial_j \bigl( \sqrt{|g|} (g^{-1})^{j \nu} \partial_\nu \Phi \bigr) - \partial_j \bigl( \sqrt{|k|} (k^{-1})^{j \nu} \partial_\nu \Psi_\wp \bigr) , N \Bigr).
\end{align*}
We have
\begin{equation*}
\mathcal E_5 = \mathcal O\bigl( {\dot{\wp}} (\partial \psi), {\dot{\wp}} (\partial^2 \psi) \bigr) \quad \text{and} \quad \mathcal E_6 = \mathcal O\bigl( (\psi, \partial \psi, \partial^2 \psi)^2 \bigr).
\end{equation*}
The second term on the right-hand side of~\eqref{equ:pt_dotphi_1st_term} is an error term with
\begin{equation*}
\mathcal E_7 := - \boldsymbol{\eta} \Bigl( \partial_j \bigl( \sqrt{|k|} (k^{-1})^{j \nu} \partial_\nu \Psi_\wp \bigr) - \partial_j \bigl( \sqrt{|h|} (h^{-1})^{j \nu} \Psi_{\nu; \wp} \bigr), N \Bigr) = \mathcal O\bigl( {\dot{\wp}}^2, {\dot{\wp}} \ell \bigr).
\end{equation*}
To see that $\mathcal E_7$ is a quadratic error, we use that $\sqrt{|k|}-\sqrt{|h|} = \mathcal O(|\ell|^2)$ and $(k^{-1})^{ij}-(h^{-1})^{ij} = \mathcal O(|\ell|^2)$. The latter observations follow from Remark~\ref{rem:hinv0j_smallness} and a Taylor expansion.
For the second term on the right-hand side of~\eqref{equ:pt_dotphi} we have
\begin{equation*}
\begin{aligned}
- \boldsymbol{\eta} \Bigl( \partial_t \bigl( \sqrt{|h|} (h^{-1})^{00} (1, \ell) \bigr), N \Bigr) &= - \boldsymbol{\eta} \Bigl( \sqrt{|h|} (h^{-1})^{00} (0, \dot{\ell}), N \Bigr) + \mathcal E_8
\end{aligned}
\end{equation*}
with remainder term
\begin{equation*}
\begin{aligned}
\mathcal E_8 := - \partial_t \Bigl( \sqrt{|h|} (h^{-1})^{00} \Bigr) \boldsymbol{\eta}\bigl( (1,\ell), N \bigr) = \mathcal O\bigl( {\dot{\wp}} \ell \bigr),
\end{aligned}
\end{equation*}
and for the third term on the right-hand side of~\eqref{equ:pt_dotphi}, we compute
\begin{equation*}
\begin{aligned}
-\partial_t (B \psi) &= -\boldsymbol{\eta} \biggl( \partial_t \Bigl( \frac{\delta (\sqrt{|g|} (g^{-1})^{0 \nu} \partial_\nu \Phi)}{\delta \psi} \bigg|_{\substack{\psi = 0 \\ k = h}} \psi \Bigr), N \biggr) + \mathcal E_9
\end{aligned}
\end{equation*}
with
\begin{equation*}
\begin{aligned}
\mathcal E_9 := - \boldsymbol{\eta} \biggl( \frac{\delta (\sqrt{|g|} (g^{-1})^{0 \nu} \partial_\nu \Phi)}{\delta \psi} \bigg|_{\substack{\psi = 0 \\ k = h}} \psi , \partial_t N \biggr) = \mathcal O\bigl( \psi \dot{\ell} \bigr).
\end{aligned}
\end{equation*}
Finally, the fourth term on the right-hand side of~\eqref{equ:pt_dotphi} is again an error term of the form
\begin{equation*}
\begin{aligned}
\mathcal E_{10} := \boldsymbol{\eta} \Bigl( \sqrt{|g|} (g^{-1})^{0 \nu} \partial_\nu \Phi - \sqrt{|h|} (h^{-1})^{00} (1, \ell), \partial_t N \Bigr) = \mathcal O\bigl( (\psi, \partial \psi) {\dot{\wp}} \bigr) + \mathcal O\bigl( {\dot{\wp}}^2 \bigr).
\end{aligned}
\end{equation*}
Combining the preceding expressions, we find that
\begin{equation} \label{equ:pt_dotphi_comp1}
\begin{aligned}
\partial_t \dot{\psi} &= - \boldsymbol{\eta} \biggl( \partial_j \Bigl( \frac{ \delta (\sqrt{|g|} (g^{-1})^{j \nu} \pnu \Phi) }{\delta (\partial_\mu \psi)} \bigg|_{\substack{\psi = 0, \\ k = h}} \partial_\mu \psi \Bigr), N \biggr) - \boldsymbol{\eta} \biggl( \partial_\mu \Bigl( \frac{ \delta (\sqrt{|g|} (g^{-1})^{\mu \nu} \pnu \Phi) }{\delta \psi} \bigg|_{\substack{\psi = 0, \\ k = h}} \psi \Bigr), N \biggr) \\
&\quad - \boldsymbol{\eta} \Bigl( \sqrt{|h|} (h^{-1})^{00} (0, \dot{\ell}), N \Bigr) + \mathcal E_6 + \ldots + \mathcal E_{10} \\
&= - \partial_j \biggl( \boldsymbol{\eta} \Bigl( \frac{ \delta (\sqrt{|g|} (g^{-1})^{j \nu} \pnu \Phi) }{\delta (\partial_\mu \psi)} \bigg|_{\substack{\psi = 0, \\ k = h}} \partial_\mu \psi , N \Bigr) \biggr) + \boldsymbol{\eta} \Bigl( \frac{ \delta (\sqrt{|g|} (g^{-1})^{j \nu} \pnu \Phi) }{\delta (\partial_\mu \psi)} \bigg|_{\substack{\psi = 0, \\ k = h}} \partial_\mu \psi , \partial_j N \Bigr) \\
&\quad - \boldsymbol{\eta} \biggl( \frac{ \delta (\sqrt{|g|} (g^{-1})^{\mu \nu} \pnu \Phi) }{\delta \psi} \bigg|_{\substack{\psi = 0, \\ k = h}} \partial_\mu \psi, N \biggr) - \boldsymbol{\eta} \biggl( \partial_\mu \Bigl( \frac{ \delta (\sqrt{|g|} (g^{-1})^{\mu \nu} \pnu \Phi) }{\delta \psi} \bigg|_{\substack{\psi = 0, \\ k = h}} \Bigr) \psi, N \biggr) \\
&\quad - \boldsymbol{\eta} \Bigl( \sqrt{|h|} (h^{-1})^{00} (0, \dot{\ell}), N \Bigr) + \mathcal E_6 + \ldots + \mathcal E_{10}.
\end{aligned}
\end{equation}
From \eqref{equ:deltaLdeltaphi_jnu_1storder} we obtain for the first term on the right-hand side of~\eqref{equ:pt_dotphi_comp1} that
\begin{align*}
&- \partial_j \biggl( \boldsymbol{\eta} \Bigl( \frac{ \delta (\sqrt{|g|} (g^{-1})^{j \nu} \pnu \Phi) }{\delta (\partial_\mu \psi)} \bigg|_{\substack{\psi = 0, \\ k = h}} \partial_\mu \psi , N \Bigr) \biggr) \\
&= - \partial_j \Bigl( \sqrt{|h|} (h^{-1})^{j\mu} \bigl( \boldsymbol{\eta}(N, N) - (h^{-1})^{\beta \nu} \boldsymbol{\eta}(N, \Psi_{\beta; \wp}) \boldsymbol{\eta}(\Psi_{\nu; \wp}, N) \bigr) \partial_\mu \psi \Bigr) \\
&= - \partial_j \Bigl( \sqrt{|h|} (h^{-1})^{j\mu} \partial_\mu \psi \Bigr),
\end{align*}
where we used that
\begin{equation*}
\begin{aligned}
\boldsymbol{\eta}(N, N) - (h^{-1})^{\beta \nu} \boldsymbol{\eta}(N, \Psi_{\beta; \wp}) \boldsymbol{\eta}(\Psi_{\nu; \wp}, N) = \boldsymbol{\eta}(N, n_\wp) = 1.
\end{aligned}
\end{equation*}
The latter identity follows from the fact that $\{ \Psi_{\mu;\wp}, n_\wp \}$ forms a basis for the ambient space.
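More explicitly, decomposing $N = n_\wp + W$ and using that $n_\wp$ is $\boldsymbol{\eta}$-orthogonal to the tangent vectors $\Psi_{\mu;\wp}$, the tangential part $W$ of $N$ is given by
\begin{equation*}
W = (h^{-1})^{\beta \nu} \boldsymbol{\eta}( N, \Psi_{\beta; \wp} ) \Psi_{\nu; \wp},
\end{equation*}
so that, in view of $\boldsymbol{\eta}(N, n_\wp) = 1$,
\begin{equation*}
\boldsymbol{\eta}(N, N) = \boldsymbol{\eta}(N, n_\wp) + \boldsymbol{\eta}(N, W) = 1 + (h^{-1})^{\beta \nu} \boldsymbol{\eta}( N, \Psi_{\beta; \wp} ) \boldsymbol{\eta}( \Psi_{\nu; \wp}, N ),
\end{equation*}
which rearranges to the claimed identity.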
Using~\eqref{equ:deltaLdeltaphi_jnu_1storder} and~\eqref{equ:deltaLdeltaphi_munu_1storder}, it follows that the second and third terms on the right-hand side of~\eqref{equ:pt_dotphi_comp1} exactly cancel each other out.
To evaluate the fourth term on the right-hand side of~\eqref{equ:pt_dotphi_comp1} we first need the following identity.
\begin{lemma}\label{lem:secondff}
Let $ {\mathrm{I\!I}} $ be the second fundamental form of the embedding $\Psi_\wp|_{\substack{\dot{\ell}=0, \dot{\xi}=\ell}}$.
Then we have
\begin{equation}
\frac{1}{\sqrt{|h|}} \boldsymbol{\eta} \biggl( \partial_j \Bigl( \frac{ \delta (\sqrt{|g|} (g^{-1})^{j \nu} \pnu \Phi) }{\delta \psi} \bigg|_{\substack{\psi = 0, \\ k = h}} \Bigr), N \biggr) = | {\mathrm{I\!I}} |^2.
\end{equation}
\end{lemma}
\begin{proof}
First, we observe that
\begin{equation*}
\begin{aligned}
\frac{1}{\sqrt{|h|}} \boldsymbol{\eta} \biggl( \partial_j \Bigl( \frac{ \delta (\sqrt{|g|} (g^{-1})^{j \nu} \pnu \Phi) }{\delta \psi} \bigg|_{\substack{\psi = 0, \\ k = h}} \Bigr), N \biggr) &= \boldsymbol{\eta} \biggl( \frac{\delta \bigl( \Box_g \Phi \bigr)}{\delta \psi} \bigg|_{\substack{\psi = 0, \\ k = h}} , N \biggr).
\end{aligned}
\end{equation*}
We now split the evaluation of the right-hand side into several steps. In what follows, we write $N = n_\wp + W$, where $W$ denotes the tangential part of $N$, and $\nabla$ denotes the covariant derivative with respect to the embedding $\Phi$.
\medskip
\noindent {\it Step 1: Computation of the part of $\Box_g \Phi$ that is linear in $\psi$.}
We begin by expanding
\begin{equation*}
\begin{aligned}
\Box_g \Phi &= (g^{-1})^{\mu \nu} \nabla_\mu \nabla_\nu \bigl( \Psi_\wp + \psi N \bigr) \\
&= \Box_h \Psi_\wp + (\dot{g}^{-1})^{\mu \nu} \nabla_\mu \nabla_\nu \Psi_\wp - (h^{-1})^{\mu \nu} \dot{\Gamma}^\lambda_{\mu \nu} \partial_\lambda \Psi_\wp + (h^{-1})^{\mu \nu} \nabla_\mu \nabla_\nu \bigl( \psi N \bigr) + \mathcal O \bigl( (\psi, \partial \psi)^2 \bigr) \\
&= \Box_{h} \Psi_\wp + (\dot{g}^{-1})^{\mu \nu} \nabla_\mu \nabla_\nu \Psi_\wp - (h^{-1})^{\mu \nu} \dot{\Gamma}^\lambda_{\mu \nu} \partial_\lambda \Psi_\wp \\
&\quad + (h^{-1})^{\mu \nu} \partial_\mu \partial_\nu \bigl( \psi N \bigr) - (h^{-1})^{\mu \nu} \Gamma_{\mu \nu}^\lambda \partial_\lambda \bigl( \psi N \bigr) + \mathcal O \bigl( (\psi, \partial \psi)^2 \bigr),
\end{aligned}
\end{equation*}
where
\begin{equation*}
\begin{aligned}
(\dot{g}^{-1})^{\mu \nu} &= - (h^{-1})^{\mu \alpha} \boldsymbol{\eta}( \partial_\alpha \Psi_\wp, \partial_\beta N ) (h^{-1})^{\beta \nu} \psi - (h^{-1})^{\mu \alpha} \boldsymbol{\eta}( \partial_\alpha N, \partial_\beta \Psi_\wp ) (h^{-1})^{\beta \nu} \psi + \mathcal O( \partial \psi ) + \mathcal O \bigl( (\psi, \partial \psi)^2 \bigr)
\end{aligned}
\end{equation*}
and
\begin{equation*}
\begin{aligned}
\dot{\Gamma}_{\mu \nu}^\lambda &= (h^{-1})^{\lambda k} \bigl( \boldsymbol{\eta}( \partial_\mu \partial_\nu \Psi_\wp, \partial_k N ) + \boldsymbol{\eta}( \partial_k \Psi_\wp, \partial_\mu \partial_\nu N ) \bigr) \psi \\
&\quad - (h^{-1})^{\lambda \rho} \Gamma_{\mu\nu}^\sigma \bigl( \boldsymbol{\eta}( \partial_\rho \Psi_\wp, \partial_\sigma N ) + \boldsymbol{\eta}( \partial_\sigma \Psi_\wp, \partial_\rho N ) \bigr) \psi + \mathcal O( \partial \psi ) + \mathcal O \bigl( (\psi, \partial \psi)^2 \bigr).
\end{aligned}
\end{equation*}
Observe that
\begin{equation*}
\begin{aligned}
- (h^{-1})^{\mu \nu} \dot{\Gamma}_{\mu \nu}^\lambda &= - (h^{-1})^{\mu \nu} (h^{-1})^{\lambda k} \boldsymbol{\eta}( \partial_\mu \partial_\nu \Psi_\wp, \partial_k N ) \psi + (h^{-1})^{\mu \nu} (h^{-1})^{\lambda \rho} \Gamma_{\mu \nu}^\sigma \boldsymbol{\eta}( \partial_\sigma \Psi_\wp, \partial_\rho N ) \psi \\
&\quad - (h^{-1})^{\mu\nu} (h^{-1})^{\lambda k} \boldsymbol{\eta}( \partial_k \Psi_\wp, \partial_\mu \partial_\nu N ) \psi + (h^{-1})^{\mu \nu} (h^{-1})^{\lambda \rho} \Gamma^{\sigma}_{\mu\nu} \boldsymbol{\eta}( \partial_\rho \Psi_\wp, \partial_\sigma N ) \psi \\
&\quad + \mathcal O( \partial \psi ) + \mathcal O \bigl( (\psi, \partial \psi)^2 \bigr) \\
&= - \boldsymbol{\eta}( (h^{-1})^{\mu \nu} \partial_\mu \partial_\nu \Psi_\wp, \nabla^\lambda N ) \psi + \boldsymbol{\eta}( (h^{-1})^{\mu \nu} \Gamma_{\mu \nu}^\sigma \partial_\sigma \Psi_\wp, \nabla^\lambda N ) \psi \\
&\quad - \boldsymbol{\eta}( \nabla^\lambda \Psi_\wp, (h^{-1})^{\mu\nu} \partial_\mu \partial_\nu N ) \psi + \boldsymbol{\eta}( \nabla^\lambda \Psi_\wp, (h^{-1})^{\mu\nu} \Gamma_{\mu\nu}^\sigma \partial_\sigma N ) \psi \\
&\quad + \mathcal O( \partial \psi ) + \mathcal O \bigl( (\psi, \partial \psi)^2 \bigr) \\
&= - \boldsymbol{\eta}( \Box_{h} \Psi_\wp, \nabla^\lambda N ) \psi - \boldsymbol{\eta}( \nabla^\lambda \Psi_\wp, \Box_h N ) \psi + \mathcal O( \partial \psi ) + \mathcal O \bigl( (\psi, \partial \psi)^2 \bigr).
\end{aligned}
\end{equation*}
In what follows, we use the notation
\begin{equation*}
\begin{aligned}
\tilde{\nabla}_\mu \tilde{\nabla}_\nu \Psi_\wp &:= \partial_\mu \partial_\nu \Psi_\wp - \tilde{\Gamma}_{\mu \nu}^\lambda \partial_\lambda \Psi_\wp, \\
\tilde{\Gamma}_{\mu \nu}^\lambda &:= \frac12 (h^{-1})^{\lambda \kappa} \bigl( \partial_\mu h_{\kappa \nu} + \partial_\nu h_{\mu\kappa} - \partial_\kappa h_{\mu \nu} \bigr), \\
\tilde{\nabla}^\mu \Psi_\wp &:= (h^{-1}){}^{\mu\nu} \partial_\nu \Psi_\wp, \\
[ \tilde{\nabla}_\mu, \tilde{\nabla}_\nu ] \tilde{\nabla}_\lambda \Psi_\wp &= \tilde{R}_{\mu\nu\lambda\sigma} \tilde{\nabla}^\sigma \Psi_\wp, \\
\tilde{R}_{\mu\nu} &= (h^{-1})^{\lambda \sigma} \tilde{R}_{\mu\lambda\nu\sigma}.
\end{aligned}
\end{equation*}
Correspondingly, we obtain
\begin{equation*}
\begin{aligned}
\frac{\delta \bigl( \Box_g \Phi \bigr)}{\delta \psi} \bigg|_{\substack{\psi = 0, \\ k = h}} &= \Box_h N + \frac{ \delta \bigl( (\dot{g}^{-1})^{\mu \nu} \bigr) }{\delta \psi} \bigg|_{\substack{\psi = 0, \\ k = h}} \tilde{\nabla}_\mu \tilde{\nabla}_\nu \Psi_\wp - \frac{\delta \bigl( (h^{-1})^{\mu \nu} \dot{\Gamma}^\lambda_{\mu \nu} \partial_\lambda \Psi_\wp \bigr)}{\delta \psi} \bigg|_{\substack{\psi = 0, \\ k = h}} \\
&= \Box_{h} N - \boldsymbol{\eta}( \tilde{\nabla}^\mu \Psi_\wp, \tilde{\nabla}^\nu N ) (\tilde{\nabla}_\mu \tilde{\nabla}_\nu \Psi_\wp) - \boldsymbol{\eta}( \tilde{\nabla}^\nu \Psi_\wp, \tilde{\nabla}^\mu N ) (\tilde{\nabla}_\mu \tilde{\nabla}_\nu \Psi_\wp) \\
&\quad - \boldsymbol{\eta}( \underbrace{\Box_{h} \Psi_\wp}_{=0}, \partial^\lambda N ) \partial_\lambda \Psi_\wp - \boldsymbol{\eta}( \partial^\lambda \Psi_\wp, \Box_h N ) \partial_\lambda \Psi_\wp \\
&= \Box_h N - \boldsymbol{\eta}( \tilde{\nabla}^\mu \Psi_\wp, \tilde{\nabla}^\nu N ) (\tilde{\nabla}_\mu \tilde{\nabla}_\nu \Psi_\wp) - \boldsymbol{\eta}( \tilde{\nabla}^\nu \Psi_\wp, \tilde{\nabla}^\mu N ) (\tilde{\nabla}_\mu \tilde{\nabla}_\nu \Psi_\wp) \\
&\quad - \boldsymbol{\eta}( \tilde{\nabla}^\lambda \Psi_\wp, \Box_h N ) \partial_\lambda \Psi_\wp.
\end{aligned}
\end{equation*}
\noindent {\it Step 2: Computation of $\boldsymbol{\eta}( \Box_h W, n_\wp )$ and $\boldsymbol{\eta}( \Box_h N, n_\wp )$.}
Since $W$ is tangential, we may write
\begin{equation*}
\begin{aligned}
W = \boldsymbol{\eta}( W, \partial_\mu \Psi_\wp ) \tilde{\nabla}^\mu \Psi_\wp.
\end{aligned}
\end{equation*}
In what follows we will use that for all $\mu, \nu, \sigma$
\begin{equation} \label{equ:vanishing_nabla2_Psi_tested_partial_Psi}
\begin{aligned}
\boldsymbol{\eta}( \tilde{\nabla}_\mu \tilde{\nabla}_\nu \Psi_\wp, \partial_\sigma \Psi_\wp ) &= 0.
\end{aligned}
\end{equation}
To see this we expand
\begin{equation*}
\begin{aligned}
\boldsymbol{\eta}( \tilde{\nabla}_\mu \tilde{\nabla}_\nu \Psi_\wp, \partial_\sigma \Psi_\wp ) &= \boldsymbol{\eta}( \partial_\mu \partial_\nu \Psi_\wp, \partial_\sigma \Psi_\wp ) - \tilde{\Gamma}^\lambda_{\mu \nu} h_{\lambda \sigma} \\
&= \boldsymbol{\eta}( \partial_\mu \partial_\nu \Psi_\wp, \partial_\sigma \Psi_\wp ) - \frac12 (h^{-1})^{\lambda \kappa} \bigl( \partial_\mu h_{\kappa \nu} + \partial_\nu h_{\mu \kappa} - \partial_\kappa h_{\mu \nu} \bigr) h_{\lambda \sigma} \\
&= \boldsymbol{\eta}( \partial_\mu \partial_\nu \Psi_\wp, \partial_\sigma \Psi_\wp ) - \frac12 \bigl( \partial_\mu h_{\sigma \nu} + \partial_\nu h_{\mu \sigma} - \partial_\sigma h_{\mu \nu} \bigr) \\
&= \boldsymbol{\eta}( \partial_\mu \partial_\nu \Psi_\wp, \partial_\sigma \Psi_\wp ) - \boldsymbol{\eta}( \partial_\mu \partial_\nu \Psi_\wp, \partial_\sigma \Psi_\wp ) \\
&= 0.
\end{aligned}
\end{equation*}
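In the penultimate step we used the standard identity for the derivatives of the induced metric $h_{\mu \nu} = \boldsymbol{\eta}( \partial_\mu \Psi_\wp, \partial_\nu \Psi_\wp )$, namely
\begin{equation*}
\partial_\mu h_{\sigma \nu} = \boldsymbol{\eta}( \partial_\mu \partial_\sigma \Psi_\wp, \partial_\nu \Psi_\wp ) + \boldsymbol{\eta}( \partial_\sigma \Psi_\wp, \partial_\mu \partial_\nu \Psi_\wp ),
\end{equation*}
whose cyclic combination yields
\begin{equation*}
\frac12 \bigl( \partial_\mu h_{\sigma \nu} + \partial_\nu h_{\mu \sigma} - \partial_\sigma h_{\mu \nu} \bigr) = \boldsymbol{\eta}( \partial_\mu \partial_\nu \Psi_\wp, \partial_\sigma \Psi_\wp ).
\end{equation*}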
Using~\eqref{equ:vanishing_nabla2_Psi_tested_partial_Psi}, we obtain by direct computation
\begin{equation*}
\begin{aligned}
\Box_h W &= \boldsymbol{\eta}( W, \partial_\mu \Psi_\wp ) \tilde{\nabla}^\mu \underbrace{\Box_h \Psi_\wp}_{= \, 0} + \boldsymbol{\eta}( W, \partial_\mu \Psi_\wp ) {\tilde{R}}^{\mu \lambda} \partial_\lambda \Psi_\wp + \bigl( \Box_h \boldsymbol{\eta}( W, \partial_\mu \Psi_\wp ) \bigr) \tilde{\nabla}^\mu \Psi_\wp \\
&\quad + 2 \boldsymbol{\eta}( \partial_\nu W, \partial_\mu \Psi_\wp ) \tilde{\nabla}^\nu \tilde{\nabla}^\mu \Psi_\wp + 2 \underbrace{\boldsymbol{\eta}( W, \tilde{\nabla}_\nu \tilde{\nabla}_\mu \Psi_\wp )}_{= \, 0} \tilde{\nabla}^\nu \tilde{\nabla}^\mu \Psi_\wp.
\end{aligned}
\end{equation*}
Note that $\boldsymbol{\eta}( W, \tilde{\nabla}_\nu \tilde{\nabla}_\mu \Psi_\wp ) = 0$ follows from \eqref{equ:vanishing_nabla2_Psi_tested_partial_Psi} since $W$ is tangential.
Testing against $n_\wp$ and inserting the relation $W = N - n_\wp$, we arrive at the identity
\begin{equation*}
\begin{aligned}
\boldsymbol{\eta}( \Box_h W, n_\wp ) &= 2 \boldsymbol{\eta}( \partial_\nu W, \partial_\mu \Psi_\wp ) \boldsymbol{\eta}( \tilde{\nabla}^\nu \tilde{\nabla}^\mu \Psi_\wp, n_\wp ) \\
&= 2 \boldsymbol{\eta}( \partial_\nu N, \partial_\mu \Psi_\wp ) \boldsymbol{\eta}( \tilde{\nabla}^\nu \tilde{\nabla}^\mu \Psi_\wp, n_\wp ) - 2 \boldsymbol{\eta}( \partial_\nu n_\wp, \partial_\mu \Psi_\wp ) \boldsymbol{\eta}( \tilde{\nabla}^\nu \tilde{\nabla}^\mu \Psi_\wp, n_\wp ) \\
&= 2 \boldsymbol{\eta}( \partial_\nu N, \partial_\mu \Psi_\wp ) \boldsymbol{\eta}( \tilde{\nabla}^\nu \tilde{\nabla}^\mu \Psi_\wp, n_\wp ) + 2 \boldsymbol{\eta}( \partial_\nu n_\wp, \partial_\mu \Psi_\wp ) \boldsymbol{\eta}( \tilde{\nabla}^\mu \Psi_\wp, \tilde{\nabla}^\nu n_\wp ).
\end{aligned}
\end{equation*}
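The sign flip in the last step follows by differentiating the orthogonality relation $\boldsymbol{\eta}( \partial_\mu \Psi_\wp, n_\wp ) = 0$:
\begin{equation*}
0 = \partial_\nu \boldsymbol{\eta}( \partial_\mu \Psi_\wp, n_\wp ) = \boldsymbol{\eta}( \partial_\nu \partial_\mu \Psi_\wp, n_\wp ) + \boldsymbol{\eta}( \partial_\mu \Psi_\wp, \partial_\nu n_\wp ),
\end{equation*}
which, combined with $\boldsymbol{\eta}( \tilde{\nabla}_\nu \tilde{\nabla}_\mu \Psi_\wp, n_\wp ) = \boldsymbol{\eta}( \partial_\nu \partial_\mu \Psi_\wp, n_\wp )$ (the Christoffel contribution drops out by orthogonality), gives $\boldsymbol{\eta}( \tilde{\nabla}^\nu \tilde{\nabla}^\mu \Psi_\wp, n_\wp ) = - \boldsymbol{\eta}( \tilde{\nabla}^\mu \Psi_\wp, \tilde{\nabla}^\nu n_\wp )$ after raising indices.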
Moreover, using that $\Box_h n_\wp = - | {\mathrm{I\!I}} |^2 n_\wp$ by \cite[Corollary II]{RV70}, we obtain
\begin{equation*}
\begin{aligned}
\boldsymbol{\eta}( \Box_h N, n_\wp ) &= \boldsymbol{\eta}( \Box_h n_\wp, n_\wp ) + \boldsymbol{\eta}( \Box_h W, n_\wp ) \\
&= - | {\mathrm{I\!I}} |^2 + 2 \boldsymbol{\eta}( \partial_\nu N, \partial_\mu \Psi_\wp ) \boldsymbol{\eta}( \tilde{\nabla}^\nu \tilde{\nabla}^\mu \Psi_\wp, n_\wp ) + 2 \boldsymbol{\eta}( \partial_\nu n_\wp, \partial_\mu \Psi_\wp ) \boldsymbol{\eta}( \tilde{\nabla}^\mu \Psi_\wp, \tilde{\nabla}^\nu n_\wp ).
\end{aligned}
\end{equation*}
\medskip
\noindent {\it Step 3: Final computation.}
We decompose into
\begin{equation*}
\begin{aligned}
\boldsymbol{\eta} \biggl( \frac{\delta \bigl( \Box_g \Phi \bigr)}{\delta \psi} \bigg|_{\substack{\psi = 0, \\ k = h}}, N \biggr) = \boldsymbol{\eta} \biggl( \frac{\delta \bigl( \Box_g \Phi \bigr)}{\delta \psi} \bigg|_{\substack{\psi = 0, \\ k = h}}, n_\wp \biggr) + \boldsymbol{\eta} \biggl( \frac{\delta \bigl( \Box_g \Phi \bigr)}{\delta \psi} \bigg|_{\substack{\psi = 0, \\ k = h}}, W \biggr).
\end{aligned}
\end{equation*}
Then on the one hand we have
\begin{equation*}
\begin{aligned}
\boldsymbol{\eta} \biggl( \frac{\delta \bigl( \Box_g \Phi \bigr)}{\delta \psi} \bigg|_{\substack{\psi = 0, \\ k = h}}, n_\wp \biggr) &= \boldsymbol{\eta}( \Box_h N, n_\wp ) - \boldsymbol{\eta}( \tilde{\nabla}^\mu \Psi_\wp, \tilde{\nabla}^\nu N ) \boldsymbol{\eta}( \tilde{\nabla}_\mu \tilde{\nabla}_\nu \Psi_\wp, n_\wp ) \\
&\quad - \boldsymbol{\eta}( \tilde{\nabla}^\nu \Psi_\wp, \tilde{\nabla}^\mu N ) \boldsymbol{\eta}( \tilde{\nabla}_\mu \tilde{\nabla}_\nu \Psi_\wp, n_\wp ) - \boldsymbol{\eta}( \tilde{\nabla}^\lambda \Psi_\wp, \Box_h N ) \underbrace{\boldsymbol{\eta}( \partial_\lambda \Psi_\wp, n_\wp )}_{= \, 0} \\
&= - | {\mathrm{I\!I}} |^2 + 2 \boldsymbol{\eta}( \partial_\mu \Psi_\wp, \partial_\nu N ) \boldsymbol{\eta}( \tilde{\nabla}^\nu \tilde{\nabla}^\mu \Psi_\wp, n_\wp ) + 2 \boldsymbol{\eta}( \partial_\mu \Psi_\wp, \partial_\nu n_\wp ) \boldsymbol{\eta}( \tilde{\nabla}^\mu \Psi_\wp, \tilde{\nabla}^\nu n_\wp ) \\
&\quad - \boldsymbol{\eta}( \tilde{\nabla}^\mu \Psi_\wp, \tilde{\nabla}^\nu N ) \boldsymbol{\eta}( \tilde{\nabla}_\mu \tilde{\nabla}_\nu \Psi_\wp, n_\wp ) - \boldsymbol{\eta}( \tilde{\nabla}^\nu \Psi_\wp, \tilde{\nabla}^\mu N ) \boldsymbol{\eta}( \tilde{\nabla}_\mu \tilde{\nabla}_\nu \Psi_\wp, n_\wp ) \\
&= - | {\mathrm{I\!I}} |^2 + 2 \boldsymbol{\eta}( \partial_\mu \Psi_\wp, \partial_\nu n_\wp ) \boldsymbol{\eta}( \tilde{\nabla}^\mu \Psi_\wp, \tilde{\nabla}^\nu n_\wp ) \\
&= | {\mathrm{I\!I}} |^2.
\end{aligned}
\end{equation*}
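In the last step we used the Weingarten-type relation (with the sign convention ${\mathrm{I\!I}}_{\mu \nu} = \boldsymbol{\eta}( \partial_\mu \partial_\nu \Psi_\wp, n_\wp )$): since $\boldsymbol{\eta}( \partial_\mu \Psi_\wp, n_\wp ) = 0$, we have $\boldsymbol{\eta}( \partial_\mu \Psi_\wp, \partial_\nu n_\wp ) = - {\mathrm{I\!I}}_{\mu \nu}$, and hence
\begin{equation*}
2 \boldsymbol{\eta}( \partial_\mu \Psi_\wp, \partial_\nu n_\wp ) \boldsymbol{\eta}( \tilde{\nabla}^\mu \Psi_\wp, \tilde{\nabla}^\nu n_\wp ) = 2 (h^{-1})^{\mu \alpha} (h^{-1})^{\nu \beta} {\mathrm{I\!I}}_{\mu \nu} {\mathrm{I\!I}}_{\alpha \beta} = 2 | {\mathrm{I\!I}} |^2,
\end{equation*}
so that $- | {\mathrm{I\!I}} |^2 + 2 | {\mathrm{I\!I}} |^2 = | {\mathrm{I\!I}} |^2$.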
On the other hand, using~\eqref{equ:vanishing_nabla2_Psi_tested_partial_Psi}, i.e.\ that $\boldsymbol{\eta}( \tilde{\nabla}_\mu \tilde{\nabla}_\nu \Psi_\wp, \partial_\lambda \Psi_\wp ) = 0$, we find
\begin{equation*}
\begin{aligned}
\boldsymbol{\eta} \biggl( \frac{\delta \bigl( \Box_g \Phi \bigr)}{\delta \psi} \bigg|_{\substack{\psi = 0, \\ k = h}}, W \biggr) &= \boldsymbol{\eta}( \Box_h N, W ) - \boldsymbol{\eta}( \tilde{\nabla}^\mu \Psi_\wp, \tilde{\nabla}^\nu N ) \underbrace{\boldsymbol{\eta}( \tilde{\nabla}_\mu \tilde{\nabla}_\nu \Psi_\wp, W )}_{= \, 0} \\
&\quad - \boldsymbol{\eta}( \tilde{\nabla}^\nu \Psi_\wp, \tilde{\nabla}^\mu N ) \underbrace{\boldsymbol{\eta}( \tilde{\nabla}_\mu \tilde{\nabla}_\nu \Psi_\wp, W )}_{= \, 0} - \underbrace{\boldsymbol{\eta}( \tilde{\nabla}^\lambda \Psi_\wp, \Box_h N ) \boldsymbol{\eta}( \partial_\lambda \Psi_\wp, W )}_{= \, \boldsymbol{\eta}( \Box_h N, W )} \\
&= 0.
\end{aligned}
\end{equation*}
This finishes the proof.
\end{proof}
Using the preceding lemma, we find that the fourth term on the right-hand side of~\eqref{equ:pt_dotphi_comp1} simplifies to
\begin{equation*}
\begin{aligned}
&- \boldsymbol{\eta} \biggl( \partial_\mu \Bigl( \frac{ \delta (\sqrt{|g|} (g^{-1})^{\mu \nu} \pnu \Phi) }{\delta \psi} \bigg|_{\substack{\psi = 0, \\ k = h}} \Bigr) \psi, N \biggr) \\
&= - \boldsymbol{\eta} \biggl( \partial_j \Bigl( \frac{ \delta (\sqrt{|g|} (g^{-1})^{j \nu} \pnu \Phi) }{\delta \psi} \bigg|_{\substack{\psi = 0, \\ k = h}} \Bigr) \psi, N \biggr) - \boldsymbol{\eta} \biggl( \partial_t \Bigl( \frac{ \delta (\sqrt{|g|} (g^{-1})^{0 \nu} \pnu \Phi) }{\delta \psi} \bigg|_{\substack{\psi = 0, \\ k = h}} \Bigr) \psi, N \biggr) \\
&= - \sqrt{|h|} | {\mathrm{I\!I}} |^2 \psi + \mathcal E_{11}
\end{aligned}
\end{equation*}
with remainder term
\begin{equation*}
\begin{aligned}
\mathcal E_{11} := - \boldsymbol{\eta} \biggl( \partial_t \Bigl( \frac{ \delta (\sqrt{|g|} (g^{-1})^{0 \nu} \pnu \Phi) }{\delta \psi} \bigg|_{\substack{\psi = 0, \\ k = h}} \Bigr) \psi, N \biggr) = \mathcal O\bigl( {\dot{\wp}} \psi \bigr).
\end{aligned}
\end{equation*}
We arrive at the equation
\begin{equation*}
\begin{aligned}
\partial_t \dot{\psi} &= - \partial_j \bigl( \sqrt{|h|} (h^{-1})^{j\mu} \partial_\mu \psi \bigr) - \sqrt{|h|} | {\mathrm{I\!I}} |^2 \psi - \boldsymbol{\eta} \bigl( \sqrt{|h|} (h^{-1})^{00} (0, \dot{\ell}), N \bigr) + \mathcal E_6 + \ldots + \mathcal E_{11}.
\end{aligned}
\end{equation*}
Finally, inserting for $\partial_t \psi$ on the right-hand side the relation~\eqref{equ:relation_ptphi_dotphi} between $\dot{\psi}$ and $\partial_t \psi$, we obtain
\begin{equation} \label{equ:pt_dotphi_final}
\begin{aligned}
\partial_t \dot{\psi} &= - \sqrt{|h|} L \psi - \partial_j \biggl( \frac{(h^{-1})^{j0}}{(h^{-1})^{00}} \dot{\psi} \biggr) - \boldsymbol{\eta} \bigl( \sqrt{|h|} (h^{-1})^{00} (0, \dot{\ell}), N \bigr) + \dot{f},
\end{aligned}
\end{equation}
where we introduce the linear operator
\begin{equation*}
\begin{aligned}
L := \frac{1}{\sqrt{|h|}} \partial_j \bigl( \sqrt{|h|} (\underline{h}^{-1})^{jk} \partial_k \bigr) + | {\mathrm{I\!I}} |^2
\end{aligned}
\end{equation*}
with
\begin{equation*}
\begin{aligned}
(\underline{h}^{-1})^{jk} := (h^{-1})^{jk} - \frac{(h^{-1})^{j0} (h^{-1})^{0k}}{(h^{-1})^{00}},
\end{aligned}
\end{equation*}
and where
\begin{equation*}
\begin{aligned}
\dot{f} &:= \partial_j \biggl( \sqrt{|h|} (h^{-1})^{j0} \boldsymbol{\eta} \Bigl( \bigl( \dot{\ell} \cdot \nabla_\ell + (\dot{\xi}-\ell) \cdot \nabla_\xi \bigr) \Psi_\wp, N \Bigr) \biggr) - \partial_j \bigl( \sqrt{|h|} (h^{-1})^{j0} f \bigr) + \mathcal E_6 + \ldots + \mathcal E_{11}.
\end{aligned}
\end{equation*}
Note that the first term in the preceding definition of $\dot{f}$ is of the form $\mathcal O(\ell {\dot{\wp}})$ since $(h^{-1})^{j0} = \mathcal O(\ell)$ by Remark~\ref{rem:hinv0j_smallness}, which justifies its treatment as a quadratic error.
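As an aside, $(\underline{h}^{-1})^{jk}$ is precisely the inverse of the spatial block $h_{jk}$; this is a standard block-inversion (Schur complement) identity, valid as long as $(h^{-1})^{00} \neq 0$. Indeed, from $(h^{-1})^{j \mu} h_{\mu i} = \delta^j_i$ and $(h^{-1})^{0 \mu} h_{\mu i} = 0$ we obtain
\begin{equation*}
(\underline{h}^{-1})^{jk} h_{ki} = \bigl( \delta^j_i - (h^{-1})^{j0} h_{0i} \bigr) - \frac{(h^{-1})^{j0}}{(h^{-1})^{00}} \bigl( - (h^{-1})^{00} h_{0i} \bigr) = \delta^j_i.
\end{equation*}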
\medskip
Introducing the matrix operator
\begin{equation} \label{equ:definition_matrix_operator}
\begin{aligned}
M := \begin{pmatrix}
- \frac{(h^{-1})^{0j}}{(h^{-1})^{00}} \partial_j & \frac{1}{\sqrt{|h|} (h^{-1})^{00}} \\
-\sqrt{|h|} L & - \partial_j \Bigl( \frac{(h^{-1})^{j0}}{(h^{-1})^{00}} \Bigr)
\end{pmatrix}
\end{aligned}
\end{equation}
and setting
\begin{equation*}
\begin{aligned}
\vec{K} := \begin{pmatrix}
K \\ \dot{K}
\end{pmatrix}
= \begin{pmatrix}
- \boldsymbol{\eta} \Bigl( \bigl( \dot{\ell} \cdot \nabla_\ell + (\dot{\xi}-\ell) \cdot \nabla_\xi \bigr) \Psi_\wp, N \Bigr) \\ - \boldsymbol{\eta} \bigl( \sqrt{|h|} (h^{-1})^{00} (0, \dot{\ell}), N \bigr)
\end{pmatrix},
\qquad
\vec{f} := \begin{pmatrix}
f \\ \dot{f}
\end{pmatrix},
\end{aligned}
\end{equation*}
we obtain from \eqref{equ:relation_ptphi_dotphi} and \eqref{equ:pt_dotphi_final} the following first-order formulation of the HVMC equation for $\vec{\psi}$
\begin{equation} \label{equ:first_order_linearized_HVMC}
\begin{aligned}
(\partial_t - M) \vec{\psi} = \vec{K} + \vec{f}.
\end{aligned}
\end{equation}
\medskip
We end this subsection by computing the second order equation for $\psi$ coming from~\eqref{equ:first_order_linearized_HVMC}.
Upon rearranging, we obtain from~\eqref{equ:first_order_linearized_HVMC} that
\begin{equation} \label{equ:dotpsi_from_matrix_equation}
\dot{\psi} = \sqrt{|h|}(h^{-1})^{0\nu} \partial_\nu \psi - \sqrt{|h|}(h^{-1})^{00} (K+f),
\end{equation}
and
\begin{align*}
\begin{split}
\partial_t \dot{\psi} + \partial_j\bigl( \sqrt{|h|}(h^{-1})^{ij} \partial_i\psi \bigr) - \partial_j \biggl( \sqrt{|h|} \frac{(h^{-1})^{0i}(h^{-1})^{0j}}{(h^{-1})^{00}} \partial_i\psi \biggr) + \partial_j \biggl( \frac{(h^{-1})^{0j}}{(h^{-1})^{00}} \dot{\psi} \biggr) + \sqrt{|h|} | {\mathrm{I\!I}} |^2 \psi = \dot{K} + \dot{f}.
\end{split}
\end{align*}
Substituting~\eqref{equ:dotpsi_from_matrix_equation} for $\dot{\psi}$ in the preceding identity, we arrive at
\begin{align*}
\begin{split}
\partial_\mu \bigl( \sqrt{|h|}(h^{-1})^{\mu\nu} \partial_\nu \psi \bigr) + \sqrt{|h|} | {\mathrm{I\!I}} |^2 \psi = (\dot{K} + \dot{f}) + \partial_\nu \bigl(\sqrt{|h|}(h^{-1})^{0\nu} (K+f) \bigr).
\end{split}
\end{align*}
We conclude that if $\vec{\psi}$ satisfies~\eqref{equ:first_order_linearized_HVMC}, then $\psi$ satisfies the wave equation
\begin{align} \label{eq:phi1}
\begin{split}
\frac{1}{\sqrt{|h|}} \partial_\mu \bigl( \sqrt{|h|} (h^{-1})^{\mu\nu} \partial_\nu \psi \bigr) + | {\mathrm{I\!I}} |^2 \psi = \frac{1}{\sqrt{|h|}} (\dot{K} + \dot{f}) + \frac{1}{\sqrt{|h|}} \partial_\nu \bigl( \sqrt{|h|}(h^{-1})^{0\nu} (K+f) \bigr).
\end{split}
\end{align}
\subsection{Eigenfunctions} \label{subsec:eigenfunctions}
In this subsection we determine the eigenfunctions and generalized eigenfunctions of the matrix operator $M$ defined in~\eqref{equ:definition_matrix_operator}
when the parameter $\ell(t)$ is time-independent, i.e., $\ell(t) = \ell$ for some fixed $\ell \in \mathbb R^{n+1}$ with $|\ell| < 1$, and $\xi(t)=a+t\ell$ for some fixed $ a\in \mathbb R^{n+1}$. We will denote the particular choice of $\ell$ for which we want to compute the eigenfunctions by $\ell_0$, and use the notation $M_{\ell_0}$ for the corresponding operator.
To find the eigenfunctions and generalized eigenfunctions of $M_{\ell_0}$, we consider the following maximal embeddings
\begin{align*}
\begin{split}
\Psi_{\xi_0,\ell_0} &:= \bigl( t, \xi_0 + \gamma_0^{-1}P_{\ell_0}F+P_{\ell_0}^\perp F \bigr), \qquad \xi_0 = a_0+t\ell_0, \\
\Psi_{\xi,\ell} &:= \bigl( t, \xi + \gamma^{-1} P_\ell F+P_\ell^\perp F \bigr), \qquad \quad \, \xi = a+t\ell,
\end{split}
\end{align*}
for fixed $(a_0, \ell_0)$ and $(a, \ell)$.
For each $(a,\ell)$ we write
\begin{align*}
\Psi_{\xi,\ell} = \Psi_{\xi_0,\ell_0} + \psi_{\xi,\ell}N_{\ell_0},
\end{align*}
where $N_{\ell_0}$ is the normal to $\text{im}(\Psi_{\xi_0,\ell_0}(t)) \cap {\boldsymbol{\Upsigma}}_t$ viewed as a subspace of ${\boldsymbol{\Upsigma}}_t$.
Then we have $\psi_{\xi_0,\ell_0} \equiv 0$.
The metric $\Psi_{\xi_0,\ell_0}^\ast\boldsymbol{\eta}$ is denoted by $h$ and the metric $\Psi_{\xi,\ell}^\ast\boldsymbol{\eta}$ by $g$ (note that in this subsection there is no difference between $h$ and what would be $k$).
In view of~\eqref{equ:definition_B}, we define
\begin{align*}
\begin{split}
B_0 &:= \sqrt{|h|} (h^{-1})^{0 j}\boldsymbol{\eta}(\partial_j N_{\ell_0},N_{\ell_0}) \\
&\qquad - \sqrt{|h|} (h^{-1})^{0 \kappa} (h^{-1})^{\nu j}\boldsymbol{\eta}(\partial_\kappa\Psi_{\xi_0,\ell_0},\partial_j N_{\ell_0})\boldsymbol{\eta}(\partial_\nu\Psi_{\xi_0,\ell_0},N_{\ell_0}) \\
&\qquad - \sqrt{|h|} (h^{-1})^{0 j} (h^{-1})^{\nu\lambda}\boldsymbol{\eta}(\partial_\lambda\Psi_{\xi_0,\ell_0},\partial_j N_{\ell_0})\boldsymbol{\eta}(\partial_\nu\Psi_{\xi_0,\ell_0},N_{\ell_0}) \\
&\qquad + \sqrt{|h|} (h^{-1})^{0 \nu} (h^{-1})^{\kappa j}\boldsymbol{\eta}(\partial_\kappa\Psi_{\xi_0,\ell_0},\partial_j N_{\ell_0})\boldsymbol{\eta}(\partial_\nu\Psi_{\xi_0,\ell_0},N_{\ell_0})
\end{split}
\end{align*}
and let
\begin{align*}
\begin{split}
{\dot{\psi}}_{\xi,\ell} := \boldsymbol{\eta}\bigl( \sqrt{|g|} (g^{-1})^{0\nu}\partial_\nu\Psi_{\xi,\ell}, N_{\ell_0} \bigr) - \boldsymbol{\eta}\bigl( \sqrt{|h|} (h^{-1})^{00} (1,\ell_0),N_{\ell_0}\bigr) - B_0 \psi_{\xi,\ell}.
\end{split}
\end{align*}
Then ${\dot{\psi}}_{\xi_0,\ell_0} = 0$ and $\vec{\psi}_{\xi,\ell} = (\psi_{\xi,\ell}, {\dot{\psi}}_{\xi,\ell})$ satisfies
\begin{align*}
(\partial_t-M_{\ell_0}) \vec{\psi}_{\xi,\ell} = \vec{\mathcal F},
\end{align*}
where $\vec{\mathcal F}$ depends at least quadratically on $\vec{\psi}_{\xi,\ell}$. In particular
\begin{align*}
\begin{split}
\frac{\delta \vec{\mathcal F}}{\delta a} \Big|_{(a,\ell)=(a_0,\ell_0)} = \frac{\delta\vec{\mathcal F}}{\delta\ell}\Big|_{(a,\ell)=(a_0,\ell_0)}=0.
\end{split}
\end{align*}
It follows that $\frac{\delta\vec{\psi}_{\xi,\ell}}{\delta a^i}\vert_{(a,\ell)=(a_0,\ell_0)}$ and $\frac{\delta\vec{\psi}_{\xi,\ell}}{\delta \ell^i}\vert_{(a,\ell)=(a_0,\ell_0)}$ for $1 \leq i \leq n$ are solutions of
\begin{align*}
(\partial_t-M_{\ell_0}) \vec{\varphi} = 0.
\end{align*}
To compute these parameter derivatives more easily, we also observe that in view of~\eqref{equ:relation_ptphi_dotphi}
\begin{align*}
{\dot{\psi}}_{\xi,\ell} = \sqrt{|h|} (h^{-1})^{0\nu} \partial_\nu \psi_{\xi,\ell} + {\tilde{f}},
\end{align*}
where ${\tilde{f}}$ depends quadratically on $\psi_{\xi,\ell}$, whence $\frac{\delta {\tilde{f}}}{\delta a^i}\vert_{(a,\ell)=(a_0,\ell_0)}=\frac{\delta{\tilde{f}}}{\delta\ell^i}\vert_{(a,\ell)=(a_0,\ell_0)}=0$. Moreover, we note that $\frac{\delta}{\delta a^i}$ is the same as $\frac{\delta}{\delta\xi^i}$ and
\begin{align*}
\begin{split}
\frac{\delta \psi_{\xi,\ell}}{\delta \xi^i} \Bigr|_{\ell=\ell_0} = \boldsymbol{\eta}\Bigl(\frac{\delta\Psi_{\xi,\ell}}{\delta \xi^i} \Bigr|_{\ell=\ell_0},|N_{\ell_0}|^{-2}N_{\ell_0}\Bigr), \qquad \frac{\delta \psi_{\xi,\ell}}{\delta \ell^i} \Bigr|_{\ell=\ell_0} = \boldsymbol{\eta}\Bigl(\frac{\delta\Psi_{\xi,\ell}}{\delta \ell^i} \Bigr|_{\ell=\ell_0},|N_{\ell_0}|^{-2}N_{\ell_0}\Bigr).
\end{split}
\end{align*}
With these observations we compute
\begin{align*}
\begin{split}
\vec{\varphi}_i = \begin{pmatrix} \varphi_i \\ {\dot{\fy}}_i \end{pmatrix} := \begin{pmatrix} \frac{\delta\psi_{\xi,\ell}}{\delta\xi^i} \bigl|_{\ell=\ell_0} \\ \frac{\delta{\dot{\psi}}_{\xi,\ell}}{\delta\xi^i} \bigl|_{\ell=\ell_0} \end{pmatrix} = \begin{pmatrix} \frac{\delta\psi_{\xi,\ell}}{\delta\xi^i} \bigl|_{\ell=\ell_0} \\ \sqrt{|h|} (h^{-1})^{0\nu} \bigl|_{\ell=\ell_0} \partial_\nu\frac{\delta\psi_{\xi,\ell}}{\delta\xi^i} \bigl|_{\ell=\ell_0} \end{pmatrix}, \quad 1 \leq i \leq n,
\end{split}
\end{align*}
to be
\begin{equation*}
\begin{aligned}
\varphi_i = |\ell_0|^{-2} (\gamma_0 - 1) (\ell_0 \cdot \nu) \ell_0^i + \nu^i, \qquad {\dot{\fy}}_i = \sqrt{|h|} (h^{-1})^{0j} \bigl|_{\ell=\ell_0} \partial_j \varphi_i
\end{aligned}
\end{equation*}
with
\begin{equation*}
\gamma_0 = (1-|\ell_0|^2)^{-\frac12}.
\end{equation*}
Since $\vec{\varphi}_i$ is independent of $t$, we conclude that
\begin{align*}
M_{\ell_0} \vec{\varphi}_i=0, \qquad 1 \leq i \leq n.
\end{align*}
Similarly, for the derivatives with respect to $\ell^i$ we have (where, to simplify the expression, we used that the first $n$ components of $\nu$ and $F$ are proportional)
\begin{align*}
\begin{split}
\frac{\delta \psi_{\xi,\ell}}{\delta \ell^i} \Bigr|_{\ell=\ell_0} = \boldsymbol{\eta}\Bigl(\frac{\delta\Psi_{\xi,\ell}}{\delta \ell^i} \Bigr|_{\ell=\ell_0}, |N_{\ell_0}|^{-2}N_{\ell_0}\Bigr)
&= -\gamma_0(\ell_0 \cdot F) \nu^i - |\ell_0|^{-2} \gamma_0 (\gamma_0-1) (\ell_0 \cdot F)(\ell_0 \cdot \nu) \ell_0^i + t\varphi_i.
\end{split}
\end{align*}
We set
\begin{align*}
\begin{split}
\varphi_{n+i} := -\gamma_0 (\ell_0 \cdot F)\nu^i-|\ell_0|^{-2} \gamma_0 (\gamma_0-1)(\ell_0 \cdot F)(\ell_0 \cdot \nu) \ell_0^i, \quad 1 \leq i \leq n,
\end{split}
\end{align*}
and thus have
\begin{equation*}
\frac{\delta \psi_{\xi,\ell}}{\delta \ell^i} \Bigr|_{\ell=\ell_0} = \varphi_{n+i} + t \varphi_i, \quad 1 \leq i \leq n.
\end{equation*}
It follows that
\begin{align*}
\begin{split}
\frac{\delta {\dot{\psi}}_{\xi,\ell}}{\delta \ell^i} \Bigr|_{\ell=\ell_0} &= \sqrt{|h|} (h^{-1})^{0\nu} \partial_\nu \Bigl( \frac{\delta \psi_{\xi,\ell}}{\delta \ell^i} \Bigr) \Bigr|_{\ell=\ell_0} \\
&= \sqrt{|h|}(h^{-1})^{0j} \bigr|_{\ell=\ell_0} \partial_j \varphi_{n+i} + \sqrt{|h|}(h^{-1})^{00} \bigr|_{\ell=\ell_0} \varphi_i + t \sqrt{|h|} (h^{-1})^{0j} \bigr|_{\ell=\ell_0} \partial_j \varphi_{i} \\
&= \sqrt{|h|}(h^{-1})^{0j} \bigr|_{\ell=\ell_0} \partial_j \varphi_{n+i} + \sqrt{|h|}(h^{-1})^{00} \bigr|_{\ell=\ell_0} \varphi_i + t {\dot{\fy}}_i.
\end{split}
\end{align*}
Hence, upon defining
\begin{equation*}
{\dot{\fy}}_{n+i} := \sqrt{|h|}(h^{-1})^{0j} \bigr|_{\ell=\ell_0} \partial_j \varphi_{n+i} + \sqrt{|h|}(h^{-1})^{00} \bigr|_{\ell=\ell_0} \varphi_i, \quad 1 \leq i \leq n,
\end{equation*}
as well as
\begin{equation*}
\vec{\varphi}_{n+i} = \begin{pmatrix} \varphi_{n+i} \\ {\dot{\fy}}_{n+i} \end{pmatrix}, \quad 1 \leq i \leq n,
\end{equation*}
we have
\begin{equation*}
\begin{pmatrix} \frac{\delta\psi_{\xi,\ell}}{\delta\ell^i} \Bigr|_{\ell=\ell_0} \\ \frac{\delta{\dot{\psi}}_{\xi,\ell}}{\delta\ell^i} \Bigr|_{\ell=\ell_0} \end{pmatrix} = \vec{\varphi}_{n+i} + t \vec{\varphi}_i, \quad 1 \leq i \leq n.
\end{equation*}
Now recall that we have shown for time-independent $\ell = \ell_0$,
\begin{align*}
\begin{split}
(\partial_t - M_{\ell_0}) \begin{pmatrix} \frac{\delta\psi_{\xi,\ell}}{\delta\ell^i} \Bigr|_{\ell=\ell_0} \\ \frac{\delta{\dot{\psi}}_{\xi,\ell}}{\delta\ell^i} \Bigr|_{\ell=\ell_0} \end{pmatrix} = 0, \quad 1 \leq i \leq n.
\end{split}
\end{align*}
Since $\vec{\varphi}_i$ satisfies $(\partial_t-M_{\ell_0}) \vec{\varphi}_i = M_{\ell_0} \vec{\varphi}_i=0$ and since $\vec{\varphi}_{n+i}$ satisfies $\partial_t\vec{\varphi}_{n+i}=0$, we conclude
\begin{align*}
M_{\ell_0} \vec{\varphi}_{n+i}=\vec{\varphi}_i, \quad 1 \leq i \leq n.
\end{align*}
\subsection{Modulation equations}\label{subsec:modulationeqs}
In this section we revert to the notation used before Section~\ref{subsec:eigenfunctions}. In particular, $\ell = \ell(t)$ and $\xi=\xi(t)$ are assumed to be time-dependent again, and $\xi(t)$ is no longer assumed to be of the form $a+t\ell$.
We begin with the definition of the symplectic form.
To this end we introduce the operator
\begin{equation*}
\begin{aligned}
J = \begin{pmatrix} 0 & 1 \\ -1 & 0 \end{pmatrix}.
\end{aligned}
\end{equation*}
Note that $J^\ast = - J$, in the sense that $(J{\vec u})\cdot{\vec v}=-{\vec u}\cdot(J{\vec v})$.
We define the symplectic form $\boldsymbol{\Omega}$ as
\begin{equation*}
\begin{aligned}
\boldsymbol{\Omega}(\vec{u}, \vec{v}) := \langle \vec{u}, J \vec{v} \rangle, \quad \vec{u} = \begin{pmatrix} u_1 \\ u_2 \end{pmatrix}, \quad \vec{v} = \begin{pmatrix} v_1 \\ v_2 \end{pmatrix},
\end{aligned}
\end{equation*}
where
\begin{equation*}
\langle \vec{u}, \vec{v} \rangle = \int \bigl( u_1 v_1 + u_2 v_2 \bigr) \, \mathrm{d} \omega \, \mathrm{d} \rho.
\end{equation*}
We emphasize that we use $\mathrm{d}\omega \,\mathrm{d} \rho$ for the volume form because, in our applications, $\sqrt{|h|}$ is incorporated into the definition of $\vec{\psi}$.
Recall the definition of the matrix operator
\begin{equation*}
\begin{aligned}
M := \begin{pmatrix}
- \frac{(h^{-1})^{0j}}{(h^{-1})^{00}} \partial_j & \frac{1}{\sqrt{|h|} (h^{-1})^{00}} \\
-\sqrt{|h|} L & - \partial_j \Bigl( \frac{(h^{-1})^{j0}}{(h^{-1})^{00}} \Bigr)
\end{pmatrix}.
\end{aligned}
\end{equation*}
Its adjoint with respect to the inner product $\langle \vec{u}, \vec{v} \rangle$ is given by
\begin{equation*}
\begin{aligned}
M^\ast := \begin{pmatrix}
\partial_j \Bigl( \frac{(h^{-1})^{j0}}{(h^{-1})^{00}} \Bigr) & -\sqrt{|h|} L \\
\frac{1}{\sqrt{|h|} (h^{-1})^{00}} & \frac{(h^{-1})^{0j}}{(h^{-1})^{00}} \partial_j
\end{pmatrix}.
\end{aligned}
\end{equation*}
Then we have
\begin{equation*}
\begin{aligned}
J M + M^\ast J = 0.
\end{aligned}
\end{equation*}
In particular, it follows that
\begin{equation*}
\boldsymbol{\Omega}({\vec u}, M {\vec v}) = - \boldsymbol{\Omega}(M{\vec u}, {\vec v}).
\end{equation*}
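Indeed, unwinding the definition of $\boldsymbol{\Omega}$ and using $J M = - M^\ast J$,
\begin{equation*}
\boldsymbol{\Omega}({\vec u}, M {\vec v}) = \langle {\vec u}, J M {\vec v} \rangle = - \langle {\vec u}, M^\ast J {\vec v} \rangle = - \langle M {\vec u}, J {\vec v} \rangle = - \boldsymbol{\Omega}(M {\vec u}, {\vec v}).
\end{equation*}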
Motivated by the discussion in Subsection~\ref{subsec:eigenfunctions}, we define for arbitrary $\ell \in \mathbb R^n$, $|\ell| < 1$,
\begin{equation*}
\begin{aligned}
\vec{\varphi}_i := \begin{pmatrix} \varphi_i \\ {\dot{\fy}}_i \end{pmatrix}, \quad \vec{\varphi}_{n+i} := \begin{pmatrix} \varphi_{n+i} \\ {\dot{\fy}}_{n+i} \end{pmatrix}, \quad 1 \leq i \leq n,
\end{aligned}
\end{equation*}
where
\begin{equation*}
\begin{aligned}
\varphi_i &:= |\ell|^{-2}(\gamma-1)(\ell\cdot\nu)\ell^i+\nu^i, \\
{\dot{\fy}}_i &:= \sqrt{|h|}(h^{-1})^{0j}\partial_j\varphi_i, \\
\varphi_{n+i} &:= -\gamma(\ell\cdot F)\nu^i-|\ell|^{-2}\gamma(\gamma-1)(\ell\cdot F)(\ell\cdot\nu)\ell^i, \\
{\dot{\fy}}_{n+i} &:= \sqrt{|h|}(h^{-1})^{0j}\partial_j\varphi_{n+i}+\sqrt{|h|}(h^{-1})^{00}\varphi_i.
\end{aligned}
\end{equation*}
Recall that here we no longer assume that $\frac{\mathrm{d}\xi}{\mathrm{d} t}=\ell$ and $\frac{\mathrm{d}\ell}{\mathrm{d} t}=0$.
From Subsection~\ref{subsec:eigenfunctions} we still obtain
\begin{equation*}
M \vec{\varphi}_i = 0, \quad M \vec{\varphi}_{n+i} = \vec{\varphi}_i, \quad 1 \leq i \leq n,
\end{equation*}
but $\vec{\varphi}_i$ and $\vec{\varphi}_{n+i}$ are no longer elements of the kernel, respectively generalized kernel, of $(\partial_t - M)$.
Next, we introduce truncated versions of the generalized eigenfunctions given by
\begin{equation*}
{\vec Z}_i := \chi \vec{\varphi}_i, \quad {\vec Z}_{n+i} := \chi \vec{\varphi}_{n+i}, \quad i = 1, \ldots, n,
\end{equation*}
where the smooth cut-off function $\chi \in C_c^\infty(\mathbb R)$ satisfies $\chi(\rho) = 1$ for $|\rho| \leq {R_1}$ and $\chi(\rho) = 0$ for $|\rho| \geq 2{R_1}$.
Then, using \eqref{equ:first_order_linearized_HVMC}, we find for $i=1, \ldots, n$ that
\begin{equation*}
\begin{aligned}
\partial_t \bigl( \boldsymbol{\Omega}( \vec{\psi}, {\vec Z}_i ) \bigr) &= \boldsymbol{\Omega}({\vec K}, {\vec Z}_i) + \boldsymbol{\Omega}({\vec f}, {\vec Z}_i) + \boldsymbol{\Omega}(\vec{\psi}, (\partial_t-M) {\vec Z}_i), \\
\partial_t \bigl( \boldsymbol{\Omega}( \vec{\psi}, {\vec Z}_{n+i} ) \bigr) &= \boldsymbol{\Omega}({\vec K}, {\vec Z}_{n+i}) + \boldsymbol{\Omega}({\vec f}, {\vec Z}_{n+i}) + \boldsymbol{\Omega}(\vec{\psi}, (\partial_t-M){\vec Z}_{n+i}).
\end{aligned}
\end{equation*}
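These identities follow by differentiating under the pairing, inserting $\partial_t \vec{\psi} = M \vec{\psi} + \vec{K} + \vec{f}$ from~\eqref{equ:first_order_linearized_HVMC}, and using the antisymmetry $\boldsymbol{\Omega}({\vec u}, M {\vec v}) = - \boldsymbol{\Omega}(M {\vec u}, {\vec v})$: for ${\vec Z} \in \{ {\vec Z}_i, {\vec Z}_{n+i} \}$,
\begin{equation*}
\begin{aligned}
\partial_t \bigl( \boldsymbol{\Omega}( \vec{\psi}, {\vec Z} ) \bigr) &= \boldsymbol{\Omega}( M \vec{\psi} + {\vec K} + {\vec f}, {\vec Z} ) + \boldsymbol{\Omega}( \vec{\psi}, \partial_t {\vec Z} ) \\
&= \boldsymbol{\Omega}( {\vec K}, {\vec Z} ) + \boldsymbol{\Omega}( {\vec f}, {\vec Z} ) + \boldsymbol{\Omega}( \vec{\psi}, (\partial_t - M) {\vec Z} ).
\end{aligned}
\end{equation*}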
We determine the leading order behavior of $\boldsymbol{\Omega}( \vec{\psi}, {\vec Z}_i )$ and $\boldsymbol{\Omega}( \vec{\psi}, {\vec Z}_{n+i} )$.
To this end we first observe that to leading order
\begin{equation*}
\begin{aligned}
K &= -(\dot{\xi}-\ell) \cdot \bigl( \nu + \mathcal O(|\ell|^2) \nu \bigr) + \dot{\ell} \cdot \mathcal O(|\ell|), \\
\dot{K} &= -\sqrt{|h|} (h^{-1})^{00} \dot{\ell} \cdot \bigl( \nu + \mathcal O(|\ell|^2) \nu \bigr),
\end{aligned}
\end{equation*}
and
\begin{equation*}
\begin{aligned}
\varphi_i &= \nu^i + \mathcal O(|\ell|^2), \\
\dot{\varphi}_i &= \sqrt{|h|} (h^{-1})^{0j} \partial_j \varphi_i = \mathcal O(|\ell|), \\
\varphi_{n+i} &= \mathcal O(|\ell|), \\
\dot{\varphi}_{n+i} &= \sqrt{|h|} (h^{-1})^{00} \nu^i + \mathcal O(|\ell|).
\end{aligned}
\end{equation*}
We obtain corresponding leading order expressions for ${\vec Z}_i$ and ${\vec Z}_{n+i}$.
Thus, we find that to leading order
\begin{equation*}
\begin{aligned}
\boldsymbol{\Omega}({\vec K}, {\vec Z}_i) &= \sum_j \dot{\ell}_j ( d_{ij} + r_{ij} ) + \sum_j (\dot{\xi}_j - \ell_j) b_{ij}, \\
\boldsymbol{\Omega}({\vec K}, {\vec Z}_{n+i}) &= \sum_j \dot{\ell}_j {\tilde{b}}_{ij} - \sum_j (\dot{\xi}_j - \ell_j) ( d_{ij} + {\tilde{r}}_{ij})
\end{aligned}
\end{equation*}
with (see Section~\ref{subsubsec:normal})
\begin{equation} \label{equ:derivation_modul_equ_def_dij}
d_{ij} = \int \chi \nu^i \nu^j \sqrt{|h|} (h^{-1})^{00} \, \mathrm{d} \rho \, \mathrm{d} \omega \, \simeq \, \begin{cases} 1, & i = j, \\
o_{{R_1},\ell}(1), & i \neq j, \end{cases}
\end{equation}
and
\begin{equation*}
r_{ij} = b_{ij} = {\tilde{r}}_{ij} = {\tilde{b}}_{ij} = \mathcal O(|\ell|).
\end{equation*}
Below we denote by $D$ the $n \times n$ matrix with entries $d_{ij}$ defined in~\eqref{equ:derivation_modul_equ_def_dij}. Clearly, $D$ is invertible for small $|\ell|$ and sufficiently large ${R_1}$.
Parts of the quantities $\boldsymbol{\Omega}({\vec f}, \vec{Z}_i)$ and $\boldsymbol{\Omega}({\vec f}, \vec{Z}_{n+i})$ are (at least) linear in ${\dot{\wp}}$ with coefficients that are $\mathcal O({\dot{\wp}}, \ell, \psi, \partial \psi)$. Note, however, that not all of $\boldsymbol{\Omega}({\vec f}, \vec{Z}_i)$ and $\boldsymbol{\Omega}({\vec f}, \vec{Z}_{n+i})$ contains a factor of ${\dot{\wp}}$; for instance, the contribution of $\mathcal E_1$ does not.
Finally, we have
\begin{equation*}
\begin{aligned}
\boldsymbol{\Omega}(\vec{\psi}, (\partial_t - M) {\vec Z}_i) &= \boldsymbol{\Omega}(\vec{\psi}, \chi \, \dot{\ell} \cdot \nabla_\ell \vec{\varphi}_i) - \boldsymbol{\Omega}(\vec{\psi}, M {\vec Z}_i), \\
\boldsymbol{\Omega}(\vec{\psi}, (\partial_t - M) {\vec Z}_{n+i}) &= \boldsymbol{\Omega}(\vec{\psi}, \chi \, \dot{\ell} \cdot \nabla_\ell \vec{\varphi}_{n+i}) - \boldsymbol{\Omega}(\vec{\psi}, M {\vec Z}_{n+i}).
\end{aligned}
\end{equation*}
Putting the preceding observations together, we have found that schematically
\begin{equation} \label{equ:derivation_modul_equ1}
\begin{aligned}
\partial_t {\vec{\bfOmega}} + {\vec N} = \begin{pmatrix} D + R & R \\ R & D + R \end{pmatrix} {\dot{\wp}} + {\vec H},
\end{aligned}
\end{equation}
where (recall the notation from Section~\ref{subsec:prelimvfs})
\begin{equation*}
\begin{aligned}
{\vec{\bfOmega}} &= \bigl( \boldsymbol{\Omega}(\vec{\psi}, {\vec Z}_1), \ldots, \boldsymbol{\Omega}(\vec{\psi}, {\vec Z}_{2n}) \bigr), \\
{\vec N} &= \bigl( \boldsymbol{\Omega}(\vec{\psi}, M {\vec Z}_1), \ldots, \boldsymbol{\Omega}(\vec{\psi}, M {\vec Z}_{2n}) \bigr), \\
R &= \mathcal O(\psi, \partial \psi, \partial_\Sigma \partial \psi, {\dot{\wp}}, \ell), \\
{\vec H} &= \mathcal O\bigl( (\psi, \partial \psi, \partial_\Sigma \partial \psi)^2 \bigr).
\end{aligned}
\end{equation*}
Note that here we have separated ${\vec N}$ from ${\vec H}$ to emphasize that ${\vec N}$, which contains the linear contribution in $\vec{\psi}$, is the principal source term for $\partial_t{\vec{\bfOmega}}$.
In what follows, we would like to view~\eqref{equ:derivation_modul_equ1} as an equation entirely in terms of $\vec{\psi} = (\psi, {\dot{\psi}})$, ${\dot{\wp}}$, and $\ell$. However, at this point the right-hand side of~\eqref{equ:derivation_modul_equ1} still involves $\partial_t \psi$. To remedy this, we use the relation~\eqref{equ:relation_ptphi_dotphi} between $\partial_t \psi$ and ${\dot{\psi}}$ to replace any $\partial_t \psi$ terms on the right-hand side of~\eqref{equ:derivation_modul_equ1}. Some care has to be taken here because the quadratic error term ${\vec f}$ entering~\eqref{equ:derivation_modul_equ1} still contains terms involving $\partial_t \psi$, but these are of the form $\mathcal O(\psi, \partial \psi, \ell, {\dot{\wp}}) \partial_t \psi$.
We can therefore use the implicit function theorem to infer from the relation~\eqref{equ:relation_ptphi_dotphi} that for sufficiently small $\psi$, $\partial \psi$, $\partial_\Sigma \partial \psi$, $\ell$, and ${\dot{\wp}}$, we can write
\begin{equation} \label{equ:derivation_modul_equ1_addon}
\partial_t \psi = \frac{1}{\sqrt{|h|} (h^{-1})^{00}} \dot{\psi} - \frac{(h^{-1})^{0j}}{(h^{-1})^{00}} \partial_j \psi - \boldsymbol{\eta} \Bigl( \bigl( \dot{\ell} \cdot \nabla_\ell + (\dot{\xi}-\ell) \cdot \nabla_\xi \bigr) \Psi_\wp, N \Bigr) + \mathcal O\bigl( (\partial^{\leq 2}_\Sigma \vec{\psi}, \ell, {\dot{\wp}})^2 \bigr).
\end{equation}
\begin{remark}
The smallness required for this application of the implicit function theorem will follow from the assumptions on the initial data and our bootstrap assumptions in Section~\ref{sec:bootstrap}, and will be assumed in the remainder of this section.
\end{remark}
Inserting the relation~\eqref{equ:derivation_modul_equ1_addon} into the right-hand side of~\eqref{equ:derivation_modul_equ1} and rearranging, we obtain
\begin{equation} \label{equ:derivation_modul_equ2}
\begin{aligned}
\partial_t {\vec{\bfOmega}} + {\vec N} = \begin{pmatrix} D + {\widetilde{R}} & {\widetilde{R}} \\ {\widetilde{R}} & D + {\widetilde{R}} \end{pmatrix} {\dot{\wp}} + \vec{{\widetilde{H}}} =: {\vec F}(\partial^{\leq 2}_\Sigma \vec{\psi}, \ell, {\dot{\wp}}),
\end{aligned}
\end{equation}
where ${\widetilde{R}}$ and ${\widetilde{H}}$ are smooth functions of the form
\begin{equation*}
\begin{aligned}
{\widetilde{R}} &= \mathcal O(\partial^{\leq 2}_\Sigma \vec{\psi}, \ell, {\dot{\wp}}), \\
\vec{{\widetilde{H}}} &= \mathcal O\bigl( (\partial^{\leq 2}_\Sigma \vec{\psi})^2, \ell \vec{\psi}, \ell \partial_\Sigma \vec{\psi} \bigr).
\end{aligned}
\end{equation*}
As a consequence of the implicit function theorem, there exists a smooth function ${\vec G}$ such that for small $x$, $y$, $w$, and $q$, we have $q = {\vec F}(x,y,w)$ if and only if $w = {\vec G}(x,y,q)$. Moreover, in view of the structure of ${\vec F}$ defined in~\eqref{equ:derivation_modul_equ2}, ${\vec G}$ must then also satisfy (uniformly for all sufficiently small $x$, $y$, and $q$) the estimate
\begin{equation} \label{equ:derivation_modul_equ_bound_from_IFT}
|{\vec G}(x,y,q)| \lesssim |x| + |q|.
\end{equation}
Thus, assuming $\partial_\Sigma^{\leq2} \vec{\psi}$, $\ell$, and ${\dot{\wp}}$ are sufficiently small (in a pointwise sense) as remarked above, \eqref{equ:derivation_modul_equ2} holds if and only if we have
\begin{equation} \label{equ:derivation_modul_equ3}
\begin{aligned}
{\dot{\wp}} = {\vec G}(\partial_\Sigma^{\leq2} \vec{\psi}, \ell, \partial_t {\vec{\bfOmega}} + {\vec N}).
\end{aligned}
\end{equation}
Note that in this equation for ${\dot{\wp}}$, second-order derivatives of $\vec{\psi}$ appear in the source term. This carries the potential danger that, after eventually commuting the equation with the maximum number of $\partial_t$ derivatives, there is a loss of regularity. To avoid this issue, one can try to replace \eqref{equ:derivation_modul_equ3} by a smoothed-out version of it. To achieve this, we could try to impose orthogonality conditions for ${\vec{\bfOmega}}$ that lead to a differential equation of the form
\begin{equation}
\begin{aligned}
{\dot{\wp}} = {\vec G}( S\partial_\Sigma^{\leq2} \vec{\psi}, \ell, S {\vec N} ),
\end{aligned}
\end{equation}
where $S$ is a smoothing operator (in time) that will be defined shortly. While this seems feasible in principle, technically, it seems simpler (see \eqref{equ:derivation_modul_equ5} below) to allow more flexibility in the final differential equation for ${\dot{\wp}}$, and to let it be of the form
\begin{equation}\label{equ:derivation_modul_equ3_alt}
\begin{aligned}
{\dot{\wp}} = {\vec G}( S\partial_\Sigma^{\leq2} \vec{\psi}, \ell, S{\vec N} - \beta \vec{\omega}),
\end{aligned}
\end{equation}
where $\beta > 0$ and $\beta \vec{\omega}$ is no larger than ${\vec N}$.
Comparing with \eqref{equ:derivation_modul_equ3}, this is equivalent to
\begin{equation}
\begin{aligned}
\partial_t {\vec{\bfOmega}} + {\vec N} = {\vec F}(\partial_\Sigma^{\leq2} \vec{\psi}, \ell, {\vec G}( S\partial_\Sigma^{\leq2} \vec{\psi}, \ell, S{\vec N} - \beta \vec{\omega}) ).
\end{aligned}
\end{equation}
Using that ${\vec F}(0,\ell, {\vec G}(0,\ell, q)) = q$, this is also equivalent to
\begin{equation}
\begin{aligned}
\partial_t {\vec{\bfOmega}} = (S-I) {\vec N} - \beta \vec{\omega} - {\vec F}_\omega,
\end{aligned}
\end{equation}
where
\begin{equation}\label{eq:Fomegadef1}
\begin{aligned}
{\vec F}_\omega(\partial_\Sigma^{\leq2} \vec{\psi}, \ell, S{\vec N}-\beta\vec{\omega}) &:= {\vec F}(0,\ell, {\vec G}(0,\ell, S{\vec N}-\beta\vec{\omega})) - {\vec F}(\partial_\Sigma^{\leq2} \vec{\psi}, \ell, {\vec G}(S\partial_\Sigma^{\leq2} \vec{\psi}, \ell, S{\vec N}-\beta\vec{\omega}))
\end{aligned}
\end{equation}
has additional smallness of order $\mathcal O( \partial_\Sigma^{\leq2} \vec{\psi} )$. Indeed, by~\eqref{equ:derivation_modul_equ_bound_from_IFT} it follows from the preceding that
\begin{equation}\label{eq:Fomegaestimate1}
\begin{aligned}
|{\vec F}_\omega(\partial_\Sigma^{\leq2} \vec{\psi}, \ell, S{\vec N}-\beta\vec{\omega})| \lesssim |\partial_\Sigma^{\leq2} \vec{\psi}| \bigl( |\partial_\Sigma^{\leq2} \vec{\psi}| + |S{\vec N}-\beta\vec{\omega}| \bigr).
\end{aligned}
\end{equation}
We now seek to impose a decomposition
\begin{equation} \label{equ:derivation_modul_equ4}
\begin{aligned}
{\vec{\bfOmega}} = {\vec{\Upomega}} + \vec{\omega}
\end{aligned}
\end{equation}
such that
\begin{equation} \label{equ:derivation_modul_equ5}
\begin{aligned}
\partial_t{\vec{\Upomega}} &= (S-I)({\vec N}+{\vec F}_\omega), \\
\partial_t\vec{\omega} &= -\beta\vec{\omega}-S{\vec F}_\omega.
\end{aligned}
\end{equation}
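As a consistency check, summing the two equations in~\eqref{equ:derivation_modul_equ5} and using ${\vec{\bfOmega}} = {\vec{\Upomega}} + \vec{\omega}$, we recover exactly the equation for $\partial_t {\vec{\bfOmega}}$ derived above:
\begin{equation*}
\partial_t {\vec{\bfOmega}} = (S-I)({\vec N} + {\vec F}_\omega) - \beta \vec{\omega} - S {\vec F}_\omega = (S-I){\vec N} - \beta \vec{\omega} - {\vec F}_\omega.
\end{equation*}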
The motivation for this decomposition can be explained as follows. We view ${\vec N}$ as the main source term for $\partial_t{\vec{\bfOmega}}$. As we will see momentarily, the smoothing operator $S$ can be chosen to be almost local in time, so that for instance $S \psi$ and $\psi$ will satisfy comparable decay estimates in time. We will also see that with this choice, we can write $S-I=\partial_t{\tilde{S}}$ where ${\tilde{S}}$ is another almost local (but not smoothing) operator, so comparing with \eqref{equ:derivation_modul_equ5} we see that ${\vec{\Upomega}}$ is of the same order as ${\vec N}$. This captures the main contribution to ${\vec{\bfOmega}}$. For the remainder $\vec{\omega}$ we no longer have the structure $S-I$, but \eqref{equ:derivation_modul_equ3_alt} was conceived so that the equation in \eqref{equ:derivation_modul_equ5} satisfied by $\vec{\omega}$ comes with the damping term $-\beta \vec{\omega}$, which can be used to estimate $\vec{\omega}$ in terms of $S{\vec F}_\omega$. The details of this argument are presented in Section~\ref{sec:parametercontrol}.
To achieve \eqref{equ:derivation_modul_equ4} and \eqref{equ:derivation_modul_equ5}, we will invoke the implicit function theorem on suitable Banach spaces of time-dependent curves on a time interval $J = [0, \tau_0]$.
We first introduce the definition of the smoothing operator (in time) $S$. Let $k \in C_c^\infty(\mathbb R)$ be a smooth bump function supported in the interval $[0,1]$ such that $\int_0^1 k(s) \, \mathrm{d} s = 1$. For a given (locally integrable) function $h(t)$ defined for $t \geq -1$, we set
\begin{equation*}
(S h)(t) := \int_\mathbb R \chi_{[-1,\infty)}(s) h(s) k(t-s) \, \mathrm{d} s, \quad t \geq -1.
\end{equation*}
Then $(S h)(t)$ is a smooth function for all $t \geq 0$.
We also define an associated operator $\widetilde{S}$ with the property that $(S-I) h = \frac{\mathrm{d}}{\mathrm{d} t} (\widetilde{S} h)$.
To this end we set $\tilde{k}(r) := 0$ for $r < 0$ and $\tilde{k}(r) := -\int_r^\infty k(s) \, \mathrm{d} s$ for $r \geq 0$. Note that $\tilde{k}(r)$ is also supported in the interval $[0,1]$. Then the operator
\begin{equation*}
(\widetilde{S} h)(t) := \int_\mathbb R \chi_{[-1,\infty)}(s) h(s) \tilde{k}(t-s) \, \mathrm{d} s, \quad t \geq -1,
\end{equation*}
has the desired properties.
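Indeed, since $\tilde{k}'(r) = k(r)$ for $r \neq 0$ and $\tilde{k}$ jumps by $\tilde{k}(0^+) - \tilde{k}(0^-) = -\int_0^\infty k(s) \, \mathrm{d} s = -1$ at $r = 0$, we have $\tilde{k}' = k - \delta_0$ in the sense of distributions, and therefore for $t \geq 0$,
\begin{equation*}
\frac{\mathrm{d}}{\mathrm{d} t} (\widetilde{S} h)(t) = \int_\mathbb R \chi_{[-1,\infty)}(s) h(s) \tilde{k}'(t-s) \, \mathrm{d} s = (S h)(t) - h(t) = \bigl( (S-I) h \bigr)(t).
\end{equation*}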
Let $\Phi$ be a sufficiently regular (for instance $C^5$) solution to HVMC. Given $C^1$ curves $\xi(t)$ and $\ell(t)$ defined on some time interval $J = [0,\tau_0]$ in the domain of definition of $\Phi$, we denote by $\Psi_{\wp}$ the associated profile in the flat region, defined as in~\eqref{equ:definition_profile_flat_region}. We let
\begin{equation}
\psi_{\xi,\ell} := \boldsymbol{\eta}( \Phi - \Psi_\wp, n_\wp ),
\end{equation}
and correspondingly define $\dot{\psi}_{\xi,\ell}$ as in~\eqref{equ:relation_ptphi_dotphi}.
First, we trivially extend $\xi(t)$, $\ell(t)$, and $\partial_\Sigma^{\leq 2} \vec{\psi}_{\xi,\ell}$ to times $-1 \leq t \leq 0$.
Then we define $\vec{\omega}(t)$ to be the solution of
\begin{equation}\label{eq:omegasolutionform1}
\left\{ \begin{aligned}
\vec{\omega}(t) &= - \int_0^t e^{-\beta(t-s)} S {\vec F}_\omega(\partial_\Sigma^{\leq 2} \vec{\psi}_{\xi,\ell}, \ell, \vec{\omega})(s) \, \mathrm{d} s, &\quad t \geq 0, \\
\vec{\omega}(t) &= 0, \quad &t < 0.
\end{aligned} \right.
\end{equation}
Observe that $\vec{\omega}$ has additional smallness in view of the definition of ${\vec F}_\omega$.
Next, in view of \eqref{equ:derivation_modul_equ5}, we define
\begin{equation}\label{eq:upOmegasolutionform1}
{\vec{\Upomega}}(t) := {\widetilde{S}} \bigl( {\vec N}+{\vec F}_\omega \bigr)(t).
\end{equation}
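With these definitions the two equations in~\eqref{equ:derivation_modul_equ5} indeed hold: differentiating~\eqref{eq:upOmegasolutionform1} and using $\partial_t {\widetilde{S}} = S - I$, and differentiating the Duhamel formula~\eqref{eq:omegasolutionform1}, we obtain
\begin{equation*}
\begin{aligned}
\partial_t {\vec{\Upomega}} &= \partial_t {\widetilde{S}} \bigl( {\vec N} + {\vec F}_\omega \bigr) = (S-I)({\vec N} + {\vec F}_\omega), \\
\partial_t \vec{\omega}(t) &= - \bigl( S {\vec F}_\omega \bigr)(t) + \beta \int_0^t e^{-\beta(t-s)} \bigl( S {\vec F}_\omega \bigr)(s) \, \mathrm{d} s = - \beta \vec{\omega}(t) - \bigl( S {\vec F}_\omega \bigr)(t).
\end{aligned}
\end{equation*}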
Finally, we define\footnote{To eventually apply the implicit function theorem to $\Upsilon$ we need to specify its domain of definition. This can be done for instance as follows. Let $X=C^5_c(\{|\rho|< {\tilde{R}}_1\};\mathbb R^{1+(n+1)})$ denote the space of $5$ times continuously differentiable functions (of $(\rho,\omega)$) supported in $\{|\rho|\leq {\tilde{R}}_1\}$ for some large ${\tilde{R}}_1$. We let
\begin{align*}
\begin{split}
\mathcal X=C([-1,\tau_0];X)\cap C^1([-1,\tau_0];X),\quad \mathcal Y=C([-1,\tau_0];\mathbb R)\cap C^1([-1,\tau_0];\mathbb R).
\end{split}
\end{align*}
Then we can view $\Upsilon$ as a function
\begin{align*}
\begin{split}
\Upsilon: \mathcal X\times \mathcal Y^{2n}\to \mathcal Y^{2n},
\end{split}
\end{align*}
where $\mathcal Y^{2n}$ represents the $2n$ components of $\ell$ and $\xi$. Note that by construction, if ${\tilde{R}}_1$ is chosen such that $\chi$ in the definition of ${\vec Z}_i$ is supported on $\{|\rho|\leq {\tilde{R}}_1\}$, then $\Upsilon(\chi \Psi_{0,0},0,0)=0$.
}
\begin{align*}
\vec\Upsilon(\Phi, \xi, \ell) := {\vec{\bfOmega}}(\vec{\psi}_{\xi,\ell})(t) - {\vec{\Upomega}}(t) - \vec{\omega}(t).
\end{align*}
Observe that by definition
\begin{equation*}
\vec\Upsilon(\Psi_{0,0},0,0) = 0,
\end{equation*}
where $\Psi_{0,0}(t,\rho,\omega) = (t, F(\rho,\omega))$ is the parametrization of the standard Lorentzian catenoid.
We now want to show that the Fr\'echet derivative $D_{\xi, \ell} \vec\Upsilon(\Psi_{0,0},0,0)$ is invertible.
Then the existence of the decomposition~\eqref{equ:derivation_modul_equ4} satisfying~\eqref{equ:derivation_modul_equ5} follows from the implicit function theorem under our bootstrap assumptions.
To this end, we observe that by the preceding definitions and in view of the computations in Subsection~\ref{subsec:eigenfunctions}, we have for $1 \leq i, j \leq n$ that
\begin{equation*}
\begin{aligned}
\frac{\delta \bigl( \boldsymbol{\Omega}(\vec{\psi}_{\xi,\ell}, {\vec Z}_i) \bigr)}{\delta \xi_j} \bigg|_{(\xi, \ell)=(0,0)} &= \boldsymbol{\Omega}(\vec{\varphi}_j, {\vec Z}_i) \big|_{\ell=0} = 0, \\
\frac{\delta \bigl( \boldsymbol{\Omega}(\vec{\psi}_{\xi,\ell}, {\vec Z}_i) \bigr)}{\delta \ell_j} \bigg|_{(\xi, \ell)=(0,0)} &= \boldsymbol{\Omega}(\vec{\varphi}_{n+j}, {\vec Z}_i) \big|_{\ell=0} = - \int \chi \nu^j \nu^i \sqrt{|h|}\big|_{\ell=0} \, \mathrm{d} \omega \, \mathrm{d} \rho, \\
\frac{\delta \bigl( \boldsymbol{\Omega}(\vec{\psi}_{\xi,\ell}, {\vec Z}_{n+i}) \bigr)}{\delta \xi_j} \bigg|_{(\xi, \ell)=(0,0)} &= \boldsymbol{\Omega}(\vec{\varphi}_j, {\vec Z}_{n+i}) \big|_{\ell=0} = \int \chi \nu^j \nu^i \sqrt{|h|}\big|_{\ell=0} \, \mathrm{d} \omega \, \mathrm{d} \rho, \\
\frac{\delta \bigl( \boldsymbol{\Omega}(\vec{\psi}_{\xi,\ell}, {\vec Z}_{n+i}) \bigr)}{\delta \ell_j} \bigg|_{(\xi, \ell)=(0,0)} &= \boldsymbol{\Omega}(\vec{\varphi}_{n+j}, {\vec Z}_{n+i}) \big|_{\ell=0} = 0.
\end{aligned}
\end{equation*}
This determines the contributions of ${\vec{\bfOmega}}(\vec{\psi}_{\xi,\ell})(t)$ to the Fr\'echet derivative $D_{\xi, \ell} \vec\Upsilon(\Psi_{0,0},0,0)$.
Since ${\vec F}_\omega$ has additional smallness, $\vec{\omega}$ does not contribute to $D_{\xi, \ell} \vec\Upsilon(\Psi_{0,0},0,0)$, and we only need to examine the part $(\widetilde{S} {\vec N})(t)$ in ${\vec{\Upomega}}(t)$ more carefully. Since $M \vec{\varphi}_i = 0$ for $1 \leq i \leq n$, and ${\vec Z}_i = \chi \vec{\varphi}_i$, we have
that $M{\vec Z}_i$ is supported in $\{ |\rho| \simeq {R_1}\}$ and is of size $\mathcal O({R_1}^{-n+1},\ell)$. Thus, we find
\begin{equation*}
\begin{aligned}
\frac{\delta \bigl( \boldsymbol{\Omega}(\vec{\psi}_{\xi,\ell}, M {\vec Z}_i) \bigr)}{\delta \xi_j} \bigg|_{(\xi, \ell)=(0,0)} = \frac{\delta \bigl( \boldsymbol{\Omega}(\vec{\psi}_{\xi,\ell}, M {\vec Z}_i) \bigr)}{\delta \ell_j} \bigg|_{(\xi, \ell)=(0,0)} = o_{R_1}(1).
\end{aligned}
\end{equation*}
Since $M \vec{\varphi}_{n+i} = \vec{\varphi}_i$, $1 \leq i \leq n$, we have $M {\vec Z}_{n+i} = {\vec Z}_i$ up to an error term that is supported in $\{ |\rho| \simeq {R_1} \}$ and is of size $\mathcal O({R_1}^{-2}, \ell)$.
Thus,
\begin{equation*}
\begin{aligned}
\frac{\delta \bigl( \boldsymbol{\Omega}(\vec{\psi}_{\xi,\ell}, M {\vec Z}_{n+i}) \bigr)}{\delta \xi_j} \bigg|_{(\ell,\xi)=(0,0)} &= o_{R_1}(1), \\
\frac{\delta \bigl( \boldsymbol{\Omega}(\vec{\psi}_{\xi,\ell}, M {\vec Z}_{n+i}) \bigr)}{\delta \ell_j} \bigg|_{(\ell,\xi)=(0,0)} &= \boldsymbol{\Omega}(\vec{\varphi}_{n+j}, {\vec Z}_i) \big|_{\ell=0} + o_{R_1}(1) = - \int \chi \nu^j \nu^i \sqrt{|h|}\big|_{\ell=0} \, \mathrm{d} \omega \, \mathrm{d} \rho + o_{R_1}(1).
\end{aligned}
\end{equation*}
It follows that the Fr\'echet derivative $D_{\xi, \ell} \vec\Upsilon(\Psi_{0,0},0,0)$ is a map of the form
\begin{equation} \label{equ:frechet_derivative_form_schematic}
\begin{aligned}
C \begin{bmatrix} 0 & Id \\ Id & Id \end{bmatrix} + o_{R_1}(1),
\end{aligned}
\end{equation}
where $C$ is a constant of order one as in Section~\ref{subsubsec:normal}.
Clearly, the map~\eqref{equ:frechet_derivative_form_schematic} is invertible as a linear map of Banach spaces for sufficiently large ${R_1}$, and hence so is the Fr\'echet derivative $D_{\xi, \ell} \vec\Upsilon(\Psi_{0,0},0,0)$.
\subsection{Controlling the unstable mode}\label{subsec:unstableint}
Finally, we need to take into account the exponential instability caused by the positive eigenvalue of the linearized operator $L$.
At this point we assume that the modulation parameters $\ell(t)$ and $\xi(t)$ have been determined in terms of $\vec{\psi}$, so we treat these as given and enact a further decomposition of the perturbation $\vec{\psi}$.
Starting from the first-order equation~\eqref{equ:first_order_linearized_HVMC} and inserting the relation~\eqref{equ:derivation_modul_equ1_addon} between $\partial_t \psi$ and $\dot{\psi}$ (furnished by the implicit function theorem), we obtain a first-order evolution equation for the perturbation $\vec{\psi}$ of the form
\begin{equation} \label{equ:unstable_mode_first_order_psi_equation}
(\partial_t - M) \vec{\psi} = \vec{F}_1\bigl(\partial_\Sigma^{\leq 2} \vec{\psi}, \ell, {\dot{\wp}}\bigr).
\end{equation}
Recall from Section~\ref{sec:intro} that the linearized operator $\Hbar := \Delta_{\underline{\calC}} + | {\mathrm{I\!I}} |^2$ of the Riemannian catenoid has a positive eigenvalue $\mu^2 > 0$ with associated (exponentially decaying) eigenfunction $\underline{\varphi}_\mu$.
For the operator $M$ we introduce the time-independent ``almost eigenfunctions''
\begin{equation*}
\vec{Z}_\pm := c_\pm \bigl( \chi \underline{\varphi}_\mu, \mp \mu \sqrt{|h|}\big|_{\ell=0} \chi \underline{\varphi}_\mu \bigr),
\end{equation*}
where $\chi \in C_c^\infty(\mathbb R)$ is the previously introduced smooth cut-off to $|\rho| \lesssim {R_1}$ and where $c_\pm$ are normalization constants such that
\begin{equation*}
\boldsymbol{\Omega}(\vec{Z}_+, \vec{Z}_-) = - \boldsymbol{\Omega}(\vec{Z}_-, \vec{Z}_+) = 1.
\end{equation*}
Then we have
\begin{equation*}
M \vec{Z}_\pm = \pm \mu \vec{Z}_\pm + \mathcal E_\pm,
\end{equation*}
where the errors $\mathcal E_\pm$ consist of terms that are supported around $|\rho| \simeq {R_1}$ or that have additional smallness in terms of the parameter $\ell$.
We now enact a decomposition of $\vec{\psi}$ into
\begin{equation} \label{equ:unstable_mode_decomposition}
\vec{\psi} = \vec{\phi} + a_+ \vec{Z}_+ + a_- \vec{Z}_-,
\end{equation}
where the time-dependent parameters $a_+(t)$ and $a_-(t)$ will be defined shortly by imposing suitable orthogonality conditions.
Inserting~\eqref{equ:unstable_mode_decomposition} into \eqref{equ:unstable_mode_first_order_psi_equation}, we find
\begin{align}\label{eq:wpoutline6}
\begin{split}
(a_{+}'-\mu a_{+}){\vec Z}_{+}+(a_{-}'+\mu a_{-}){\vec Z}_{-} &= a_{+}(M{\vec Z}_{+}-\mu{\vec Z}_{+})+a_{-}(M{\vec Z}_{-}+\mu{\vec Z}_{-}) \\
&\quad \quad -(\partial_t-M)\vec{\phi} + {\vec F}_1(\partial_\Sigma^{\leq 2} \vec{\psi}, \ell, \dot{{\wp}}).
\end{split}
\end{align}
Note that the dependence of ${\vec F}_1(\partial_\Sigma^{\leq 2} \vec{\psi}, \ell, \dot{{\wp}})$ on $\vec{\psi}$ comes with additional smallness. Thus, if we express $\vec{\psi}$ in ${\vec F}_1(\partial_\Sigma^{\leq 2} \vec{\psi}, \ell, \dot{{\wp}})$ in terms of $\vec{\phi}$ and $a_{\pm}$ via~\eqref{equ:unstable_mode_decomposition}, the terms involving $a_{\pm}$ also come with extra smallness. Also note that at this point we have already determined the parameters $\ell(t)$ and $\xi(t)$, so we treat these as given.
Now taking the $\boldsymbol{\Omega}$ inner product of \eqref{eq:wpoutline6} with ${\vec Z}_{-}$, multiplying by $e^{-\mu t}$, and recalling that
\begin{align*}
\begin{split}
\boldsymbol{\Omega}(M\vec{\phi},{\vec Z}_{-})=-\boldsymbol{\Omega}(\vec{\phi},M{\vec Z}_{-})=\mu\boldsymbol{\Omega}(\vec{\phi},{\vec Z}_{-})-\boldsymbol{\Omega}(\vec{\phi},M{\vec Z}_{-}+\mu{\vec Z}_{-}),
\end{split}
\end{align*}
we get
\begin{align}\label{eq:wpoutline7}
\begin{split}
\frac{\mathrm{d}}{\mathrm{d} t}\big(e^{-\mu t}a_{+}\big)=-\frac{\mathrm{d}}{\mathrm{d} t}\big(e^{-\mu t}\boldsymbol{\Omega}(\vec{\phi},{\vec Z}_{-})\big)-e^{-\mu t}F_{+},
\end{split}
\end{align}
where
\begin{align}\label{eq:Fplusdef}
\begin{split}
-F_{+} &:= \boldsymbol{\Omega}({\vec F}_1,{\vec Z}_{-})-\boldsymbol{\Omega}(\vec{\phi},M{\vec Z}_{-}+\mu{\vec Z}_{-})+a_{+}\boldsymbol{\Omega}(M{\vec Z}_{+}-\mu{\vec Z}_{+},{\vec Z}_{-})\\
&\quad+a_{-}\boldsymbol{\Omega}(M{\vec Z}_{-}+\mu{\vec Z}_{-},{\vec Z}_{-}).
\end{split}
\end{align}
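Here, only the $a_+$ term survives the pairing of the left-hand side of~\eqref{eq:wpoutline6} with ${\vec Z}_-$: by the normalization and the antisymmetry of $\boldsymbol{\Omega}$,
\begin{equation*}
\boldsymbol{\Omega}\bigl( (a_+' - \mu a_+) {\vec Z}_+ + (a_-' + \mu a_-) {\vec Z}_-, {\vec Z}_- \bigr) = (a_+' - \mu a_+) \boldsymbol{\Omega}({\vec Z}_+, {\vec Z}_-) = a_+' - \mu a_+,
\end{equation*}
and multiplying by $e^{-\mu t}$ turns $e^{-\mu t}(a_+' - \mu a_+)$ into the total derivative $\frac{\mathrm{d}}{\mathrm{d} t}(e^{-\mu t} a_+)$.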
Similarly, taking the $\boldsymbol{\Omega}$ inner product of \eqref{eq:wpoutline6} with ${\vec Z}_{+}$ and multiplying by $e^{\mu t}$, we get
\begin{align}\label{eq:wpoutline8}
\begin{split}
\frac{\mathrm{d}}{\mathrm{d} t}\big(e^{\mu t}a_{-}\big)=\frac{\mathrm{d}}{\mathrm{d} t}\big(e^{\mu t}\boldsymbol{\Omega}(\vec{\phi},{\vec Z}_{+})\big)+e^{\mu t}F_{-},
\end{split}
\end{align}
where
\begin{align*}
\begin{split}
F_{-}=\boldsymbol{\Omega}({\vec F}_1,{\vec Z}_{+})-\boldsymbol{\Omega}(\vec{\phi},M{\vec Z}_{+}-\mu{\vec Z}_{+})+a_{+}\boldsymbol{\Omega}(M{\vec Z}_{+}-\mu{\vec Z}_{+},{\vec Z}_{+})+a_{-}\boldsymbol{\Omega}(M{\vec Z}_{-}+\mu{\vec Z}_{-},{\vec Z}_{+}).
\end{split}
\end{align*}
Motivated by \eqref{eq:wpoutline7} and \eqref{eq:wpoutline8}, we require the orthogonality conditions
\begin{align} \label{eq:wpoutline9}
\begin{split}
\boldsymbol{\Omega}(\vec{\phi},{\vec Z}_{-}) = e^{\mu t}{\tilde{S}}(e^{-\mu t}F_{+}), \qquad \boldsymbol{\Omega}(\vec{\phi},{\vec Z}_{+}) = e^{-\mu t}{\tilde{S}}(e^{\mu t}F_{-}).
\end{split}
\end{align}
In view of \eqref{eq:wpoutline7} and \eqref{eq:wpoutline8} and recalling that $\partial_t {\tilde{S}} = S-Id$, the orthogonality conditions~\eqref{eq:wpoutline9} lead to the following equations for $a_{\pm}$:
\begin{align}\label{eq:wpoutline10}
\begin{split}
\frac{\mathrm{d}}{\mathrm{d} t}\big(e^{-\mu t}a_{+}\big)=-S(e^{-\mu t}F_{+}),\qquad \frac{\mathrm{d}}{\mathrm{d} t}\big(e^{\mu t}a_{-}\big)=S(e^{\mu t}F_{-}).
\end{split}
\end{align}
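Integrating~\eqref{eq:wpoutline10} in time yields the representation formulas
\begin{equation*}
a_+(t) = e^{\mu t} \Bigl( a_+(0) - \int_0^t S\bigl( e^{-\mu \cdot} F_+ \bigr)(s) \, \mathrm{d} s \Bigr), \qquad a_-(t) = e^{-\mu t} \Bigl( a_-(0) + \int_0^t S\bigl( e^{\mu \cdot} F_- \bigr)(s) \, \mathrm{d} s \Bigr),
\end{equation*}
which make the growing factor $e^{\mu t}$ in $a_+$ and the decaying factor $e^{-\mu t}$ in $a_-$ explicit.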
Finally note that derivatives commute nicely with \eqref{eq:wpoutline9} in the sense that (using the product rule and the fact that $[\frac{\mathrm{d}}{\mathrm{d} t},{\tilde{S}}]=0$)
\begin{align}\label{eq:wpoutline11}
\begin{split}
\frac{\mathrm{d}}{\mathrm{d} t}\boldsymbol{\Omega}(\vec{\phi},{\vec Z}_{-})=e^{\mu t}{\tilde{S}}(e^{-\mu t}F_{+}'),\qquad \frac{\mathrm{d}}{\mathrm{d} t}\boldsymbol{\Omega}(\vec{\phi},{\vec Z}_{+})=e^{-\mu t}{\tilde{S}}(e^{\mu t}F_{-}'),
\end{split}
\end{align}
and similarly for higher derivatives.
To conclude, we briefly explain how to obtain the decomposition~\eqref{equ:unstable_mode_decomposition} satisfying the orthogonality conditions \eqref{eq:wpoutline9}.
Given $\vec{\psi}$ and $a_\pm(t)$ on some time interval $J = [0,\tau_0]$, we first trivially extend these to times $-1 \leq t \leq 0$.
Then we consider
\begin{equation*}
\Upsilon_\mu(\vec{\psi}, a_+, a_-) := \begin{pmatrix} \boldsymbol{\Omega}\bigl( \vec{\psi} - a_+ \vec{Z}_+ - a_- \vec{Z}_-, \vec{Z}_- \bigr) - e^{\mu t}{\tilde{S}}(e^{-\mu t}F_{+}) \\ \boldsymbol{\Omega}\bigl( \vec{\psi} - a_+ \vec{Z}_+ - a_- \vec{Z}_-, \vec{Z}_+ \bigr) - e^{-\mu t}{\tilde{S}}(e^{\mu t}F_{-}) \end{pmatrix}.
\end{equation*}
Note that the orthogonality conditions~\eqref{eq:wpoutline9} are equivalent to $\Upsilon_\mu(\vec{\psi}, a_+, a_-) = (0,0)$. We have $\Upsilon_\mu(0,0,0) = 0$ and we compute the Fr\'echet derivative
\begin{equation*}
D_{a_+, a_-}\Upsilon_\mu(0,0,0) = \begin{pmatrix} -1 & 0 \\ 0 & 1 \end{pmatrix} + o_{{R_1},\ell}(1).
\end{equation*}
Hence, for given $\vec{\psi}$ satisfying suitable bootstrap assumptions, the existence of the decomposition~\eqref{equ:unstable_mode_decomposition} obeying the orthogonality conditions~\eqref{eq:wpoutline9} follows (for sufficiently large ${R_1}$) from the implicit function theorem.
\section{Coordinates, vectorfields, and a more precise description of the profile} \label{sec:coordinates}
Given $\ell$, $\xi$, and $a_{\pm}$, we give a more detailed description of the foliation and the profile. Moreover, we obtain various expressions for the linear operator acting on $\phi$. Our starting point is to derive a parameterization of the profile.
\subsection{Parameterization of the Profile}\label{sec:profile2}
We give separate parameterizations of the profile $\cup_\sigma \Sigma_\sigma$ in the interior and exterior regions.
In the interior, our parameterization is the same as in the first-order formulation in Section~\ref{sec:interior}. That is, we parameterize the profile as
\begin{align*}
\begin{split}
(t,\xi+\gamma^{-1}P_\ell F(\rho,\omega)+P_\ell^\perp F(\rho,\omega)),
\end{split}
\end{align*}
where $\ell$, $\xi$, and $\gamma$ are functions of $t$. According to the definition of the profile in Section~\ref{sec:profileintro}, and with the notation used there, this parameterization is valid in the flat region $\mathcal C_{{\mathrm{flat}}}=\{X^0\geq \sigma_{\mathrm{temp}}(X)+\delta_1\}$, where as usual $X=(X^0,\dots, X^{n+1})$ are the rectangular coordinates in the ambient $\mathbb R^{1+(n+1)}$.
In the exterior we eventually want to parameterize the VMC surface as a graph over a hyperplane, so we start by parameterizing the profile itself as a graph. For this, let the function $x^0(\sigma,x')$ be defined by the requirement that $(x^0(\sigma,x'),x')\in {\underline{\bsUpsigma}}_\sigma$. Note that in the hyperboloidal region $\mathcal C_{{\mathrm{hyp}}}=\{X^0\leq \sigma_{\mathrm{temp}}(X)-\delta_1\}$
\begin{align}\label{eq:x0hyptemp1}
\begin{split}
x^0(\sigma,x')=\sigma-\gamma R+\sqrt{|x'-\xi+\gamma R\ell|^2+1},
\end{split}
\end{align}
while in the flat region $\{X^0\geq \sigma_{\mathrm{temp}}(X)+\delta_1\}$
\begin{align*}
\begin{split}
x^0(\sigma,x')=\sigma.
\end{split}
\end{align*}
The expression for $x^0$ in the intermediate region $\{|X^0-\sigma_{{\mathrm{temp}}}(X)|<\delta_1\}$ is not explicit and depends on the choice of the smoothed out minimum function $\mathfrak m$ in Section~\ref{sec:profileintro}. With $x^0$ determined, we define the function $\mathcal Q(x^0,x')$ by the requirement that $(x^0(\sigma,x'),x',\mathcal Q(x^0(\sigma,x'),x'))\in\Sigma_\sigma$. The map
\begin{align*}
\begin{split}
(\sigma,x')\mapsto (x^0(\sigma,x'),x',\mathcal Q(x^0(\sigma,x'),x'))
\end{split}
\end{align*}
is then a parameterization of the profile in the exterior region $\{|x'|\gg1\}$ (more precisely, this parameterization is valid in a neighborhood of the support of $1-\chi$ in the definition \eqref{eq:tilNdefintro1} of $N$). We now want to derive more explicit expressions for $\mathcal Q$ in the flat and hyperboloidal parts of ${\boldsymbol{\Upsigma}}_\sigma$. First, for reasons that will become clear momentarily, we let
\begin{align*}
\begin{split}
\eta = \xi-\gamma R\ell,
\end{split}
\end{align*}
and define the non-geometric polar coordinates $(\tau,r,\theta)$ by
\begin{align*}
\begin{split}
\tau = \sigma-\gamma(\sigma)R\qquad {\ \ \text{and} \ \ }\qquad x'=\eta(\tau)+r\Theta(\theta).
\end{split}
\end{align*}
Here $\Theta$ denotes the standard parameterization of $\mathbb S^{n-1}\subseteq\mathbb R^n$, and here and in what follows, by a slight abuse of notation, we simply write $\eta(\tau)$ for $\eta(\sigma(\tau))$ (and similarly for other parameters $\ell$, $\xi$, etc.). Differentiation with respect to $\tau$ is denoted by a prime, and differentiation with respect to $\sigma$ by a dot, so for instance
\begin{align*}
\begin{split}
\eta' = \frac{\mathrm{d} }{\mathrm{d} \tau}\eta(\tau) = \frac{\mathrm{d}\sigma}{\mathrm{d} \tau}\frac{\mathrm{d}}{\mathrm{d} \sigma}\eta(\sigma)\vert_{\sigma=\sigma(\tau)},\qquad {\dot{\eta}}= \frac{\mathrm{d}}{\mathrm{d} \sigma}\eta(\sigma).
\end{split}
\end{align*}
Note that
\begin{align*}
\begin{split}
\frac{\mathrm{d}\sigma}{\mathrm{d} \tau}=(1-{\dot{\gamma}} R)^{-1}\simeq 1.
\end{split}
\end{align*}
In the hyperboloidal region $\mathcal C_{\mathrm{hyp}}$ we have
\begin{align*}
\begin{cases}
x^0=\tau+ \jap{r}\\
x'=\eta(\tau)+r\Theta(\theta)
\end{cases},
\end{align*}
while in the flat region $\mathcal C_{\mathrm{flat}}$
\begin{align*}
\begin{cases}
x^0=\tau+ \gamma R\\
x'=\eta(\tau)+r\Theta(\theta)
\end{cases}.
\end{align*}
In general, our parameterization of the profile in polar coordinates becomes
\begin{align*}
\begin{split}
(x^0(\tau,r),\eta(\tau)+r\Theta(\theta),\mathcal Q(x^0(\tau,r),\eta(\tau)+r\Theta(\theta))).
\end{split}
\end{align*}
Now to motivate our definition of the polar coordinates, we investigate the form of $\mathcal Q$ in the hyperboloidal region more closely. Note that if $(x^0,x')\in \Sigma_\sigma$ is given by (recall the definition of ${\tilde{\boldsymbol{\Upsigma}}}_\sigma$ from Section~\ref{sec:profileintro})
\begin{align*}
\begin{split}
\pmat{x^0\\ x'}=\Lambda_{-\ell}\pmat{y^0\\y'}+\pmat{0\\\xi-\sigma\ell},
\end{split}
\end{align*}
then using \eqref{eq:x0hyptemp1},
\begin{align*}
\begin{split}
&y^0=\gamma^{-1}(\sigma-\gamma R) +\gamma(\sqrt{1+|x'-(\xi-\gamma R\ell)|^2}-(x'-(\xi-\gamma R\ell))\cdot \ell),\\
&y'= A_\ell(x'-(\xi-\gamma R\ell))-\gamma \sqrt{1+|x'-(\xi-\gamma R\ell)|^2} \ell,
\end{split}
\end{align*}
and the profile parameterization becomes
\begin{align*}
\begin{split}
(x^0,x',Q(A_\ell(x'-\eta(\sigma))-\gamma\ell \sqrt{1+|x'-\eta(\sigma)|^2})).
\end{split}
\end{align*}
Here $Q(y)=Q(|y|)$ is given by the parameterization of the Riemannian catenoid and satisfies the ODE (after identifying it with a function of a single variable)
\begin{align}\label{eq:QRiem1}
\begin{split}
Q''({\tilde{r}})+\frac{n-1}{{\tilde{r}}}Q'({\tilde{r}})-(1+(Q'({\tilde{r}}))^2)^{-1}(Q'({\tilde{r}}))^2Q''({\tilde{r}})=0.
\end{split}
\end{align}
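For later use, note that \eqref{eq:QRiem1} can be integrated once: rearranging it as
\begin{align*}
\begin{split}
\frac{Q''({\tilde{r}})}{Q'({\tilde{r}})(1+(Q'({\tilde{r}}))^2)}=-\frac{n-1}{{\tilde{r}}},
\end{split}
\end{align*}
and integrating, we find that for some constant $c$,
\begin{align*}
\begin{split}
\frac{Q'({\tilde{r}})}{\sqrt{1+(Q'({\tilde{r}}))^2}}=c\,{\tilde{r}}^{-(n-1)},\qquad\text{that is,}\qquad Q'({\tilde{r}})=\frac{c}{\sqrt{{\tilde{r}}^{2(n-1)}-c^2}}.
\end{split}
\end{align*}
In particular, $Q'({\tilde{r}})=\calO({\tilde{r}}^{-(n-1)})$ as ${\tilde{r}}\to\infty$.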
Therefore, in our polar coordinates these expressions take the simple forms
\begin{align}\label{eq:yx1}
\begin{split}
&y^0=\gamma^{-1}\tau +\gamma\jap{r}(1-\frac{r}{\jap{r}}\Theta\cdot\ell),\\
&y'=A_\ell(r\Theta-\jap{r}\ell)=rA_\ell\Theta-\gamma \jap{r}\ell,
\end{split}
\end{align}
and
\begin{align}\label{eq:extpar1}
\begin{split}
(\tau+\jap{r},\eta(\tau)+r\Theta,Q(rA_\ell\Theta-\gamma\jap{r}\ell)).
\end{split}
\end{align}
For future use, we also record the following coordinate change formulas in the hyperboloidal region:
\begin{align}\label{eq:extchangeofvars1}
\begin{split}
&\partial_\tau = \partial_{x^0}+\eta',\\
&\partial_r = \frac{r}{\jap{r}}\partial_{x^0}+\Theta,\\
&\partial_a= r\Theta_a, ~a=1,\dots,n-1,
\end{split}
\end{align}
and
\begin{align}\label{eq:extchangeofvars2}
\begin{split}
&\partial_{x^0}=\frac{1}{1-\frac{r}{\jap{r}}\Theta\cdot\eta'}\partial_\tau-\frac{\eta'\cdot\Theta}{1-\frac{r}{\jap{r}}\Theta\cdot\eta'}\partial_r-\frac{(\mathring{{\slashed{g}}}^{-1})^{ab}\Theta_b\cdot\eta'}{r(1-\frac{r}{\jap{r}}\Theta\cdot\eta')}\partial_a,\\
&\partial_{x^j}=-\frac{r\Theta^j}{\jap{r}(1-\frac{r}{\jap{r}}\Theta\cdot\eta')}\partial_\tau+\frac{\Theta^j}{1-\frac{r}{\jap{r}}\Theta\cdot\eta'}\partial_r+\Big(\frac{(\mathring{{\slashed{g}}}^{-1})^{ab}\Theta_b^j}{r}+\frac{(\mathring{{\slashed{g}}}^{-1})^{ab}\Theta_b\cdot\eta' \Theta^j}{\jap{r}(1-\frac{r}{\jap{r}}\Theta\cdot\eta')}\Big)\partial_a.
\end{split}
\end{align}
Next, we want to define the rotation and outgoing and incoming null vectorfields $\Omega$, $L$, ${\underline L}$ and the geometric radial function ${\tilde{r}}$, corresponding to the Minkowski metric, at a point on ${\underline{\bsUpsigma}}_\sigma$. These will play an important role in the analysis in the exterior region, and will appear in our bootstrap assumptions in Section~\ref{sec:bootstrap} below. Let $(x^0,x')=(\tau+\jap{r},r\Theta+\eta)$ be a point on the hyperboloidal part of ${\underline{\bsUpsigma}}_\sigma$. With $\tau=\tau(\sigma)$, let $(y^0,y')$ be related to $(x^0,x')$ by \eqref{eq:yx1}. Note that by construction
\begin{align*}
\begin{split}
\pmat{x^0\\x'}= \Big(\Lambda_{-\ell}\pmat{y^0\\y'} +\pmat{0\\-\sigma\ell(\sigma)+\xi(\sigma)}\Big)\Big{\vert}_{\sigma=\sigma(\tau)}.
\end{split}
\end{align*}
We define our $L$, ${\underline L}$, $\Omega$, and $T$ as the push forward by $\Lambda_{-\ell}$ of the corresponding vectorfields in the $y$ coordinates. That is,
\begin{align}\label{eq:VFdef0}
\begin{split}
& T=\Lambda_{-\ell}\partial_{y^0},\\
&L= \Lambda_{-\ell}(\partial_{y^0}+\frac{y^i}{|y'|}\partial_{y^i}),\\
&{\underline L} = \Lambda_{-\ell}(\partial_{y^0}-\frac{y^i}{|y'|}\partial_{y^i}),\\
&\Omega_{jk}=\Lambda_{-\ell}(y^j\partial_{y^k}-y^k\partial_{y^j}).
\end{split}
\end{align}
Using \eqref{eq:extchangeofvars2} we can find the following coordinate representations for these vectorfields:
\begin{align}\label{eq:VFexpansion1}
\begin{split}
&T=\calO(1)\partial_\tau+\calO({\dot{\wp}})\partial_r+\calO({\dot{\wp}} r^{-1})\partial_a,\\
&L=\calO(r^{-2})\partial_\tau+\calO(1)\partial_r+\calO(r^{-3})\partial_a,\\
&{\underline L} = \calO(1)\partial_\tau+\calO(1)\partial_r+\calO({\dot{\wp}} r^{-1}+r^{-3})\partial_a,\\
&\Omega_{jk}=\calO(\wp r)\partial_r+\calO(1) \partial_a.
\end{split}
\end{align}
Using the notation ${\slashed{\partial}}^a=(\mathring{{\slashed{g}}}^{-1})^{ab}\partial_b$, for $T$ and $L$ we also record the following more precise expressions:
\begin{align}\label{eq:Tprecise1}
\begin{split}
T&= \gamma\Big(1+\frac{r\Theta\cdot(\eta'-\ell)}{\jap{r}(1-\frac{r}{\jap{r}}\Theta\cdot\eta')}\Big)\partial_\tau-\frac{\gamma(\eta'-\ell)\cdot\Theta}{1-\frac{r}{\jap{r}}\Theta\cdot\eta'}\partial_r\\
&\quad-\frac{\gamma}{r(1-\frac{r}{\jap{r}}\Theta\cdot\eta')}\big((\Theta\cdot\ell){\slashed{\partial}}^a\Theta\cdot(\eta'-\ell)-({\slashed{\partial}}^a\Theta\cdot\ell)\Theta\cdot(\eta'-\ell)-{\slashed{\partial}}^a\Theta\cdot(\eta'-\ell)\big)\partial_a,
\end{split}
\end{align}
and
\begin{align}\label{eq:Lprecise1}
\begin{split}
L&=(\frac{1}{2}\gamma^{-1}(1-\Theta\cdot\ell)^{-1}(1-\Theta\cdot\eta')^{-1}r^{-2}+\calO(r^{-4}))\partial_\tau\\
&\quad+(\gamma^{-1}(1-\Theta\cdot\ell)^{-1}+\calO(r^{-2}))\partial_r+\calO(r^{-3})\partial_a.
\end{split}
\end{align}
By inverting these relations we can also express the coordinate derivatives in the $(\tau,r,\theta)$ coordinates in terms of $L$, ${\underline L}$, and $\Omega$:
\begin{align}\label{eq:VFbasis1}
\begin{split}
&\partial_\tau=\calO(1)L+\calO(1){\underline L},\\
&\partial_r= \calO(1)L+\calO(r^{-2}){\underline L}+\calO( r^{-3})\Omega,\\
&\partial_\theta=\calO(\wp r)L+\calO(1)\Omega+\calO(\wp r^{-1}){\underline L}.
\end{split}
\end{align}
Similarly, the geometric radial function is defined in terms of the $y$ variables as ${\tilde{r}}=|y'|$ which in radial coordinates reads
\begin{align}\label{eq:tilrprecise1}
\begin{split}
{\tilde{r}} =|rA_\ell\Theta-\gamma \jap{r}\ell|=\gamma(1-\Theta\cdot\ell)r+\calO(r^{-1}).
\end{split}
\end{align}
In particular ${\tilde{r}}\simeq r$.
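The leading behavior of ${\tilde{r}}$ can also be verified directly. Assuming, as in the standard form of the boost, that $A_\ell$ acts as the identity on $\ell^\perp$ and as multiplication by $\gamma$ along $\ell$ (so that $(A_\ell\Theta)\cdot\ell=\gamma\,\Theta\cdot\ell$ and $|A_\ell\Theta|^2=1+\gamma^2(\Theta\cdot\ell)^2$, where we have used $\gamma^2|\ell|^2=\gamma^2-1$), we compute
\begin{align*}
\begin{split}
{\tilde{r}}^2=|rA_\ell\Theta-\gamma \jap{r}\ell|^2&=r^2\big(1+\gamma^2(\Theta\cdot\ell)^2\big)-2\gamma^2 r\jap{r}\,\Theta\cdot\ell+(\gamma^2-1)\jap{r}^2\\
&=\gamma^2\big(\jap{r}-r\,\Theta\cdot\ell\big)^2-1.
\end{split}
\end{align*}
Since $\jap{r}=r+\calO(r^{-1})$, taking square roots recovers the expansion \eqref{eq:tilrprecise1}.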
\subsection{Parameterization of the VMC Surface and Derivation of the Equations}
Recall from Section~\ref{sec:profileintro}, that to derive a parameterization of the VMC surface we first introduced an almost-normal vectorfield
\begin{align*}
\begin{split}
N:\cup_\sigma \Sigma_\sigma\to \mathbb R^{1+(n+1)},
\end{split}
\end{align*}
and then defined
\begin{align*}
\begin{split}
\psi:\cup_{\sigma}\Sigma_\sigma\to \mathbb R
\end{split}
\end{align*}
by the requirement that $p+\psi(p)N(p)\in \mathcal M$ for all $p\in \cup_{\sigma}\Sigma_\sigma$ (see also Lemma~\ref{rem:normalneighborhood}). Then in Section~\ref{sec:interior} we further decomposed $\psi$ as
\begin{align*}
\begin{split}
\psi=\phi+a_\mu Z_\mu,\qquad a_\mu:=a_{+}+a_{-}.
\end{split}
\end{align*}
In view of the compact support of $Z_\mu$, the functions $\phi$ and $\psi$ agree outside a compact region of $\mathcal C_{{\mathrm{flat}}}$. In the hyperboloidal region $\mathcal C_{{\mathrm{hyp}}}$ we will sometimes work with the following renormalized version of $\phi$:
\begin{align*}
\begin{split}
\varphi:=\angles{\phi N}{ \partial_{X^{n+1}}}.
\end{split}
\end{align*}
The advantage of $\varphi$ is that it satisfies a simpler equation, while the advantage of $\phi$ is that the linear part of the equation satisfied by it has the form which is familiar from the second variation of the area functional. The exact relation between $\phi$ and $\varphi$ can be calculated as follows. In the region $\mathcal C_{{\mathrm{hyp}}}$ the normal $n_\wp$ is given by (here the indices run over any set of coordinates, such as $(\tau,r,\theta)$ or $(x^0,x')$, and $Q_\nu$ denotes $\partial_\nu Q$ with $\ell$ treated as fixed as in Section~\ref{sec:interior})
\begin{align}\label{eq:nwpext1}
\begin{split}
n_\wp= (1+(m^{-1})^{\mu\nu} Q_\mu Q_\nu)^{-\frac{1}{2}}(-(m^{-1})^{\mu\nu}Q_\mu \partial_\nu,1).
\end{split}
\end{align}
It follows from the normalization $\boldsymbol{\eta}(N,n_\wp)=1$ that in this region
\begin{align*}
\begin{split}
N= (1+(m^{-1})^{\mu\nu} Q_\mu Q_\nu)^{\frac{1}{2}}\frac{\partial}{\partial X^{n+1}},
\end{split}
\end{align*}
and hence
\begin{align}\label{eq:varphiphi1}
\begin{split}
\varphi = s\phi,\qquad s:=(1+(m^{-1})^{\mu\nu} Q_\mu Q_\nu)^{\frac{1}{2}}.
\end{split}
\end{align}
Since $(m^{-1})^{\mu\nu} Q_\mu Q_\nu=O(r^{-2(n- 1)})$, the relation \eqref{eq:varphiphi1} implies that the various energy norms for $\phi$ and $\varphi$ are equivalent.
In the remainder of this section we give more explicit expressions for $N$ using the parameterizations introduced in the previous section, and derive the equations satisfied by $\psi$, $\phi$, and $\varphi$ in the respective coordinates. Finally, we introduce a set of global coordinates and discuss the structure of the linearized operator in these various coordinate systems.
\subsubsection{Interior Non-Geometric Coordinates}\label{sec:INGC}
Here $N$ is defined to lie on the $\{X^0=\mathrm{constant}\}$ hypersurfaces of the ambient space, and the equations satisfied by $\psi$ and $\phi$ can be read off from the first order formulation. In particular, according to \eqref{eq:phi1}, in the coordinate system $(t,\rho,\omega)$ introduced in Section~\ref{sec:interior} and in the region where $\chi\equiv1$ in \eqref{eq:tilNdefintro1}, the linear part of the equation satisfied by $\phi$ can be written as
\begin{align}\label{eq:LEDcalPint1}
\begin{split}
\mathcal P \phi=\frac{1}{\sqrt{|h|}}\partial_\mu(\sqrt{|h|}(h^{-1})^{\mu\nu}\partial_\nu\phi)+V\phi+a^{\mu\nu}\partial^2_{\mu\nu}\phi+b^\mu\partial_\mu\phi+c\phi,
\end{split}
\end{align}
where $a$ is symmetric and $a,b,c=\calO({\dot{\wp}}^{\leq 2})$, and $V=| {\mathrm{I\!I}} |^2$.
Besides the coordinates introduced above, in the flat part of the foliation we will often use the coordinates $({\tilde{t}},{\tilde{\rho}},{\tilde{\omega}})$ defined by (note that this is a valid change of variables in the flat region where $\rho$ is bounded)
\begin{align}\label{eq:ttilt1}
\begin{split}
{\tilde{t}}=\gamma^{-1}(t)t-\ell(t)\cdot F(\rho,\omega),\quad {\tilde{\rho}}=\rho,\quad {\tilde{\omega}}=\omega.
\end{split}
\end{align}
The corresponding coordinate derivatives are related as follows:
\begin{align}\label{eq:ttilt2}
\begin{split}
&\partial_t=(1+\upkappa)\partial_{\tilde{t}},\quad \partial_\rho=\ell\cdot F_\rho\partial_{\tilde{t}}+\partial_{\tilde{\rho}},\quad \partial_\omega= \ell\cdot F_\omega\partial_{\tilde{t}}+\partial_{\tilde{\omega}},\\
&\partial_{\tilde{t}}=(1+\upkappa)^{-1}\partial_t,\quad \partial_{\tilde{\rho}}=-(1+\upkappa)^{-1}\ell\cdot F_\rho\partial_t+\partial_\rho,\quad \partial_{\tilde{\omega}}=-(1+\upkappa)^{-1}\ell\cdot F_\omega\partial_t+\partial_\omega,
\end{split}
\end{align}
where
\begin{align}\label{eq:kappadef1}
\begin{split}
\upkappa:=t\frac{\mathrm{d}}{\mathrm{d} t}\gamma^{-1}-{\dot{\ell}}\cdot F.
\end{split}
\end{align}
Note that by these relations $\det\frac{\partial({\tilde{t}},{\tilde{\rho}},{\tilde{\omega}})}{\partial(t,\rho,\omega)}=1+\upkappa$. In the calculations in the flat region we will often use both the $(t,\rho,\omega)$ and the $({\tilde{t}},{\tilde{\rho}},{\tilde{\omega}})$ coordinates. To emphasize which coordinate system is being used in each calculation, we will use a tilde to indicate that calculations are being done in the $({\tilde{t}},{\tilde{\rho}},{\tilde{\omega}})$ coordinates. For instance, we write $h_{\mu\nu}$ for the components of $h$ in the $(t,\rho,\omega)$ coordinates and ${\tilde{h}}_{\mu\nu}$ for its components in the $({\tilde{t}},{\tilde{\rho}},{\tilde{\omega}})$ coordinates, and similarly for $|h|$ and $|{\tilde{h}}|$.
\subsubsection{Exterior Non-Geometric Coordinates}\label{sec:ENGC}
Since $\cup_{\sigma}\Sigma_\sigma$ can be parameterized by $\cup_{\sigma}{\underline{\bsUpsigma}}_\sigma$, by a slight abuse of notation we will often view $N$, $\nu$, $\phi$, and $\varphi$ as functions on $\cup_{\sigma}{\underline{\bsUpsigma}}_\sigma$ in the exterior region, and use coordinates, such as $(x^0,x')$ or $(\tau,r,\theta)$, as their arguments. In the graph formulation, the requirement that $\mathcal M$ be a VMC surface is equivalent to the following PDEs for $u= Q_\wp+\varphi$:
\begin{align}\label{eq:uMinkowskieq1}
\begin{split}
\nabla_\mu\Big(\frac{\nabla^\mu u}{\sqrt{1+\nabla^\alpha u \nabla_\alpha u}}\Big)=\frac{1}{\sqrt{|m|}}\partial_\mu\Big(\frac{\sqrt{|m|}(m^{-1})^{\mu\nu}\partial_\nu u}{\sqrt{1+(m^{-1})^{\alpha\beta}\partial_\alpha u\partial_\beta u}}\Big)=0.
\end{split}
\end{align}
Here $m$ denotes the Minkowski metric
\begin{align*}
\begin{split}
m=-\mathrm{d} x^0\otimes \mathrm{d} x^0+\sum_{i=1}^n \mathrm{d} x^i\otimes \mathrm{d} x^i,
\end{split}
\end{align*}
and $\nabla$ the corresponding covariant derivative. The equation for $u$ can be expanded as
\begin{align}\label{eq:uext1}
\begin{split}
\Box_m u -(1+\nabla^\alpha u \nabla_\alpha u )^{-1}(\nabla^\nu u )(\nabla^\mu u) \nabla_\nu \nabla_\mu u=0.
\end{split}
\end{align}
Plugging in the decomposition for $u$ we arrive at the following equation for $\varphi$ and $\phi$ (see \eqref{eq:varphiphi1}):
\begin{align}\label{eq:abstractexteq1}
\begin{split}
s\mathcal P\phi=\calP_{\mathrm{graph}}\varphi=\sum_{i=0}^3\mathcal F_i,\qquad \mathcal P:=\mathcal P_{\mathrm{graph}}+s^{-1}[\mathcal P_{\mathrm{graph}},s].
\end{split}
\end{align}
Here $\mathcal F_i$ denotes inhomogeneous terms of order $i$ in $\varphi$. A more explicit expression for $\mathcal P$ is derived in \eqref{eq:extgeomlin1} below. One advantage of working with $\varphi$ rather than $\phi$ is that the terms $\mathcal F_i$ on the right-hand side are easier to compute. The source term $\mathcal F_0$, which is independent of $\varphi$ (but depends on the derivatives of the parameters), is calculated in Lemma~\ref{lem:sourceext1} below. The linear operator $\mathcal P$ is given by (here $Q\equiv Q_{\wp}$)
\begin{align}\label{eq:callP1}
\begin{split}
\calP_{\mathrm{graph}}&=\Box_m-(1+\nabla^\alpha Q \nabla_\alpha Q)^{-1}(\nabla^\mu Q)(\nabla^\nu Q)\nabla_{\mu}\nabla_\nu-2(1+\nabla^\alpha Q \nabla_\alpha Q)^{-1}(\nabla^{\mu}\nabla^\nu Q)(\nabla_\mu Q)\nabla_\nu\\
&\quad+2(1+\nabla^\alpha Q\nabla_\alpha Q)^{-2}(\nabla^\nu Q)(\nabla^\mu Q)(\nabla^\lambda Q)(\nabla_\lambda \nabla_\mu Q)\nabla_\nu.
\end{split}
\end{align}
The quadratic and cubic terms are (here $Q\equiv Q_{\wp}$)
\begin{align}\label{eq:calF2_1}
\begin{split}
\mathcal F_2&=-\frac{2\nabla^\mu Q\nabla^\nu \varphi\nabla^2_{\mu\nu}\varphi}{1+\nabla^\alpha u \nabla_\alpha u}-\frac{\nabla^2_{\mu\nu}Q\nabla^\mu\varphi\nabla^\nu\varphi}{1+\nabla_\alpha u \nabla^\alpha u}+\frac{\nabla^\mu Q \nabla^\nu Q \nabla^2_{\mu\nu}Q \nabla^\beta \varphi\nabla_\beta \varphi}{(1+\nabla^\alpha u \nabla_\alpha u)(1+\nabla^\alpha Q \nabla_\alpha Q)}\\
&\quad +\frac{2\nabla^\beta Q\nabla_\beta \varphi(\nabla^\mu Q\nabla^\nu Q \nabla^2_{\mu\nu} \varphi+2\nabla^2_{\mu\nu}Q\nabla^\mu Q \nabla^\nu \varphi)}{(1+\nabla^\alpha u \nabla_\alpha u)(1+\nabla^\alpha Q\nabla_\alpha Q)}-\frac{4(\nabla^\mu Q\nabla^\nu Q\nabla^2_{\mu\nu}Q)(\nabla^\beta Q\nabla_\beta \varphi)^2}{(1+\nabla^\alpha u\nabla_\alpha u)(1+\nabla^\alpha Q\nabla_\alpha Q)^2},
\end{split}
\end{align}
and
\begin{align}\label{eq:calF3_1}
\begin{split}
\mathcal F_3&=-\frac{\nabla^\mu\varphi\nabla^\nu\varphi\nabla^2_{\mu\nu}\varphi}{1+\nabla^\alpha u \nabla_\alpha u}-\frac{\nabla^\beta\varphi\nabla_\beta\varphi(\nabla^\mu Q\nabla^\nu Q \nabla^2_{\mu\nu} \varphi+2\nabla^2_{\mu\nu}Q\nabla^\mu Q \nabla^\nu \varphi)}{(1+\nabla^\alpha u \nabla_\alpha u)(1+\nabla^\alpha Q\nabla_\alpha Q)}\\
&\quad+\frac{2(\nabla^\mu Q \nabla^\nu Q\nabla^2_{\mu\nu}Q)(\nabla^\beta Q \nabla_\beta\varphi)(\nabla^\gamma\varphi\nabla_\gamma\varphi)}{(1+\nabla^\alpha u \nabla_\alpha u)(1+\nabla^\alpha Q\nabla_\alpha Q)^2}.
\end{split}
\end{align}
In view of \eqref{eq:callP1}, the linearized operator $\calP_{\mathrm{graph}}$ has the expansion
\begin{align}\label{eq:callP2}
\begin{split}
\calP_{\mathrm{graph}}\psi&=\Box_m\psi+{\mathrm{Err}}_{\calP}(\psi),
\end{split}
\end{align}
where for some symmetric bounded tensor $p_2$ and a bounded tensor $p_1$,
\begin{align}\label{eq:ErrcallP1}
\begin{split}
{\mathrm{Err}}_{\calP}\psi&= \jap{r}^{-4}p_2^{\mu\nu}\partial^2_{\mu\nu}\psi+\jap{r}^{-5}p_1^\mu \partial_\mu\psi.
\end{split}
\end{align}
Due to the asymptotic flatness of the metric of the catenoid, the Minkowski wave operator $\Box_m$ will play a major role in the exterior analysis. For this reason, it is convenient to derive expressions for $m$, $m^{-1}$, and $\Box_m$ in the $(\tau,r,\theta)$ coordinates. Starting with $m$, we have
\begin{align}
m&=-{\tilde{\gamma}}^{-2}\mathrm{d}\tau\otimes \mathrm{d}\tau-(1-\jap{r}^{-2}-\Theta\cdot\eta')(\mathrm{d}\tau\otimes\mathrm{d} r +\mathrm{d} r\otimes \mathrm{d} \tau)+r\Theta_a\cdot\eta'(\mathrm{d}\tau\otimes \mathrm{d}\theta^a+\mathrm{d} \theta^a\otimes \mathrm{d}\tau)\nonumber\\
&\quad +\jap{r}^{-2} \mathrm{d} r\otimes \mathrm{d} r + r^2\mathring{{\slashed{g}}}_{ab}\mathrm{d}\theta^a\otimes \mathrm{d} \theta^b,\label{eq:mform1}
\end{align}
where we have used the notation ${\tilde{\gamma}}=(1-|\eta'|^2)^{-\frac{1}{2}}$. In matrix form this is
\begin{align}\label{eq:mmatrix1}
\begin{split}
m= m_0+\jap{r}^{-2}m_1=\pmat{-{\tilde{\gamma}}^{-2}&-1+\Theta\cdot\eta'&r\Theta_\theta\cdot\eta'\\-1+\Theta\cdot\eta'&0&0\\r\Theta_\theta\cdot\eta'&0&r^2\mathring{{\slashed{g}}}}+\jap{r}^{-2}\pmat{0&1&0\\1&1&0\\0&0&0}.
\end{split}
\end{align}
It follows that (here $|m|=|\det m|$)\footnote{Here we have used the fact that $|\Theta_\theta\cdot\eta'|^2=1-(\Theta\cdot\eta')^2$, where $
|\Theta_\theta\cdot\eta'|^2=\sum_{a=1}^{n-1}(\Theta_a\cdot \eta') \det \mathring{{\slashed{g}}}_a$, and where
$\mathring{{\slashed{g}}}_a$ is $\mathring{{\slashed{g}}}$ with the $a^\mathrm{th}$ column replaced by $\Theta_\theta\cdot\eta'=(\Theta_1\cdot\eta',\dots,\Theta_{n-1}\cdot\eta')^\intercal$.}
\begin{align}\label{eq:mdvol1}
\begin{split}
| m|^{\frac{1}{2}} = (1-\Theta\cdot\eta')r^{n-1}|\mathring{{\slashed{g}}}|^{\frac{1}{2}}(1+\jap{r}^{-2}\frac{{\tilde{\gamma}}^{-2}+(1-(\Theta\cdot\eta')^2)}{(1-\Theta\cdot\eta')^2}).
\end{split}
\end{align}
The inverse $m^{-1}$ can be calculated using \eqref{eq:extchangeofvars2} (in this formula $\Theta^a=(\mathring{{\slashed{g}}}^{-1})^{ab}\Theta_b$):
\begin{align*}
\begin{split}
m^{-1}&=\pmat{\frac{-\jap{r}^{-2}}{(1-\frac{r}{\jap{r}}\Theta\cdot\eta')^2}&\frac{-1+\Theta\cdot\eta'+\jap{r}^{-2}}{(1-\frac{r}{\jap{r}}\Theta\cdot\eta')^2}&\frac{\jap{r}^{-2}\Theta^\theta\cdot\eta'}{r(1-\frac{r}{\jap{r}}\Theta\cdot\eta')^2}\\\frac{-1+\Theta\cdot\eta'+\jap{r}^{-2}}{(1-\frac{r}{\jap{r}}\Theta\cdot\eta')^2}&\frac{1-(\Theta\cdot\eta')^2}{(1-\frac{r}{\jap{r}}\Theta\cdot\eta')^2}&\frac{(1-\Theta\cdot\eta'-\jap{r}^{-2})\Theta^\theta\cdot\eta'}{r(1-\frac{r}{\jap{r}}\Theta\cdot\eta')^2}\\\frac{\jap{r}^{-2}\Theta^\theta\cdot\eta'}{r(1-\frac{r}{\jap{r}}\Theta\cdot\eta')^2}&\frac{(1-\Theta\cdot\eta'-\jap{r}^{-2})\Theta^\theta\cdot\eta'}{r(1-\frac{r}{\jap{r}}\Theta\cdot\eta')^2}&\frac{\mathring{{\slashed{g}}}^{-1}}{r^2}-\frac{\jap{r}^{-2}\Theta^a\cdot\eta'\Theta^b\cdot\eta'}{r^2(1-\frac{r}{\jap{r}}\Theta\cdot\eta')^2}}.
\end{split}
\end{align*}
Expanding in powers of $r$, we can write this as
\begin{align}\label{eq:minvdecomp1}
\begin{split}
m^{-1}=m_0^{-1}+\jap{r}^{-2}m_1,
\end{split}
\end{align}
where
\begin{align}\label{eq:m02inv1}
\begin{split}
m_0^{-1}=\pmat{0&\frac{-1}{1-\Theta\cdot\eta'}&0\\\frac{-1}{1-\Theta\cdot\eta'}&\frac{1+\Theta\cdot\eta'}{1-\Theta\cdot\eta'}&\frac{\Theta^\theta\cdot\eta'}{r(1-\Theta\cdot\eta')}\\0&\frac{\Theta^\theta\cdot\eta'}{r(1-\Theta\cdot\eta')}&r^{-2}\mathring{{\slashed{g}}}^{-1}},
\end{split}
\end{align}
and $m_1$ is a matrix of size $\calO(1)$. Next, to derive an expression for the wave operator $\Box_m$ we write
\begin{align*}
\begin{split}
\Box_m\psi&=\frac{1}{\sqrt{|m|}}\partial_\mu(\sqrt{|m|}(m^{-1})^{\mu\nu}\partial_\nu\psi)\\
&=\frac{1}{\sqrt{|m_0|}}\partial_\mu (\sqrt{|m_0|}(m_0^{-1})^{\mu\nu}\partial_\nu\psi)+\jap{r}^{-2}m_1^{\mu\nu}\partial_{\mu\nu}^2\psi\\
&\quad+\frac{1}{\sqrt{|m|}}\partial_\mu(\sqrt{|m_0|}((m^{-1})^{\mu\nu}-(m_0^{-1})^{\mu\nu})+(\sqrt{|m|}-\sqrt{|m_0|})(m^{-1})^{\mu\nu})\partial_\nu\psi\\
&\quad +(|m|^{-\frac{1}{2}}-|m_0|^{-\frac{1}{2}})\partial_\mu(\sqrt{|m_0|}(m_0^{-1})^{\mu\nu})\partial_\nu\psi.
\end{split}
\end{align*}
Using the fact that $\sqrt{|m_0|}(m_0^{-1})^{\tau\nu}$ is independent of $\tau$, we arrive at the expression
\begin{align}\label{eq:Boxm1}
\begin{split}
\Box_m\psi&=2(m_0^{-1})^{\tau r}\partial^2_{\tau r}\psi+\frac{n-1}{r}(m_0^{-1})^{\tau r}\partial_\tau\psi+2(m_0^{-1})^{\theta r}\partial^2_{\theta r}\psi\\
&\quad + (m_0^{-1})^{rr}\partial^2_r\psi+\frac{n-1}{r}(m_0^{-1})^{rr}\partial_r\psi+\frac{1}{\sqrt{|m_0|}}\partial_\theta(\sqrt{|m_0|}(m_0^{-1})^{\theta r})\partial_r\psi\\
&\quad
+\frac{1}{r^2}{\slashed{\Delta}}_{\mathbb S^{n-1}}\psi+\frac{n-2}{r}(m_0^{-1})^{\theta r}\partial_\theta\psi-\frac{\Theta^\theta\cdot\eta'}{r^2(1-\Theta\cdot\eta')}\partial_\theta\psi+{\mathrm{Err}}_{\Box_m}(\psi),
\end{split}
\end{align}
where
\begin{align}\label{eq:ErrBoxm1}
\begin{split}
{\mathrm{Err}}_{\Box_m}\psi&= \jap{r}^{-2}m_1^{\mu\nu}\partial^2_{\mu\nu}\psi+(\calO(r^{-2}|{\dot{\wp}}|)+\calO(r^{-3}))\partial_\tau\psi+(\calO(r^{-2}|{\dot{\wp}}|) +\calO(r^{-3}))\partial_r\psi\\
&\quad+(\calO(r^{-3}|{\dot{\wp}}|)+\calO(r^{-4}))\partial_\theta\psi.
\end{split}
\end{align}
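As a consistency check, when $\eta'=0$ we have $(m_0^{-1})^{\tau r}=-1$, $(m_0^{-1})^{rr}=1$, and $(m_0^{-1})^{\theta r}=0$ by \eqref{eq:m02inv1}, and \eqref{eq:Boxm1} reduces to
\begin{align*}
\begin{split}
\Box_m\psi=-2\partial^2_{\tau r}\psi-\frac{n-1}{r}\partial_\tau\psi+\partial^2_r\psi+\frac{n-1}{r}\partial_r\psi+\frac{1}{r^2}{\slashed{\Delta}}_{\mathbb S^{n-1}}\psi+{\mathrm{Err}}_{\Box_m}(\psi).
\end{split}
\end{align*}
This agrees, up to error terms of the form \eqref{eq:ErrBoxm1}, with rewriting the flat wave operator $-\partial^2_{x^0}+\partial^2_{\rho}+\frac{n-1}{\rho}\partial_\rho+\rho^{-2}{\slashed{\Delta}}_{\mathbb S^{n-1}}$, where $\rho=|x'-\eta|$ with $\eta$ constant, using the relations $\partial_{x^0}=\partial_\tau$ and $\partial_\rho=\partial_r-\frac{r}{\jap{r}}\partial_\tau$, which follow from $x^0=\tau+\jap{r}$ and $\rho=r$.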
\begin{remark}
When working in the exterior region, it is often easier to work with the metric $m$ rather than the induced metric on the leaves of the foliation, and treat the difference as an error. In such cases we will often also use the volume form coming from the coordinate expression for $m$. Due to the asymptotic flatness of the induced metric this volume form is comparable in size with the geometrically induced volume form, and therefore the various inequalities we derive remain valid if we change to the geometric volume form.
\end{remark}
For future reference, we end this subsection by deriving a geometric expression for the linear operator in the equation satisfied by $\phi$ (rather than $\varphi$). Recall equation \eqref{eq:uMinkowskieq1}, where $u=Q+\varphi$. Writing (see \eqref{eq:varphiphi1})
\begin{align*}
\begin{split}
\varphi= s\phi,\qquad s=\sqrt{1+(m^{-1})^{\alpha\beta} Q_\alpha Q_\beta},
\end{split}
\end{align*}
our goal is to derive the linear part of \eqref{eq:uMinkowskieq1} in terms of $\phi$. The computation is similar to those in Section~\ref{sec:interior}, so we will be brief. Define
\begin{align}\label{eq:kmetrixextdef1}
\begin{split}
h_{\alpha\beta}= \boldsymbol{\eta}((\partial_\alpha,Q_\alpha), (\partial_\beta, Q_\beta))= m_{\alpha\beta}+Q_\alpha Q_\beta.
\end{split}
\end{align}
Then by direct computation
\begin{align}\label{eq:exteriorgeomlintemp1}
\begin{split}
(h^{-1})^{\alpha\beta} = (m^{-1})^{\alpha\beta}-\frac{ Q^\alpha Q^\beta}{1+|\nabla Q|^2},\qquad |h|= |m|(1+|\nabla Q|^2),
\end{split}
\end{align}
where we have used the notation $|\nabla Q|^2:= (m^{-1})^{\alpha\beta} Q_\alpha Q_\beta$ and $ Q^\alpha:=(m^{-1})^{\alpha\beta} Q_\beta$. It follows from the expression for $h^{-1}$ in \eqref{eq:exteriorgeomlintemp1} that the linear part of \eqref{eq:uMinkowskieq1} is
\begin{align}\label{eq:exteriorgeomlintemp2}
\begin{split}
\Box_h \phi+s^{-1}(\Box_h s-2s^{-1}(h^{-1})^{\mu\nu}\partial_\mu s\partial_\nu s)\phi+\calO({\dot{\wp}}^{\leq 2}r^{-6})\partial^{\leq 2}\phi.
\end{split}
\end{align}
Now we claim that in the case when $Q$ corresponds to a true parameterization of a boosted and translated catenoid (that is, when ${\dot{\ell}}=0$ and ${\dot{\xi}}=\ell$), the coefficient of $\phi$ above is precisely $V=| {\mathrm{I\!I}} |^2$. To see this, recall from Lemma~\ref{lem:secondff} (more precisely from \cite[Corollary II]{RV70}) that in this case we would have
\begin{align*}
\begin{split}
\Box_hn_\wp=-| {\mathrm{I\!I}} |^2n_\wp.
\end{split}
\end{align*}
Since, with $N=s(0,1)$, we have $\boldsymbol{\eta}(n_{\wp},N)=1$, it follows from the expression \eqref{eq:nwpext1} that in this case (that is, when ${\dot{\ell}}=0$ and ${\dot{\xi}}=\ell$)
\begin{align*}
\begin{split}
-| {\mathrm{I\!I}} |^2=\boldsymbol{\eta}(\Box_h n_\wp,N)= s\Box_h s^{-1}=-s^{-1}\Box_hs+2s^{-2}(h^{-1})^{\mu\nu}\partial_\mu s\partial_\nu s,
\end{split}
\end{align*}
which proves our claim. Returning to \eqref{eq:exteriorgeomlintemp2}, since the only errors come from when derivatives fall on parameters, we see that the linear part of \eqref{eq:uMinkowskieq1} in terms of $\phi$ is, with $V=| {\mathrm{I\!I}} |^2$,
\begin{align}\label{eq:extgeomlin1}
\begin{split}
\Box_h\phi + V\phi +\calO({\dot{\wp}}^{\leq 2}r^{-6})\partial^{\leq2}\phi.
\end{split}
\end{align}
\subsubsection{Global Non-Geometric Coordinates}\label{sec:GNGC}
Under the assumption that $\wp$ is sufficiently small, we introduce a global set of coordinates $(\uptau,\uprho,\uptheta)$ which glue the interior and exterior coordinates introduced earlier. The smallness of $\wp$ will be guaranteed by the choice of initial data and the bootstrap assumptions (to be described in Section~\ref{sec:bootstrap}). The procedure is as follows. Let $\Psi_{\mathrm{int}}$ and $\Psi_{\mathrm{ext}}$, defined on overlapping open sets $V_{\mathrm{int}}$ and $V_{\mathrm{ext}}$, denote the coordinate maps associated to the rectangular coordinates of $(t,\rho,\omega)$ and $(\sigma,r,\theta)$ respectively. More precisely, for a point $p=(t,\xi(t)+\gamma^{-1}P_{\ell(t)} F(\rho,\omega)+P_{\ell(t)}^\perp F(\rho,\omega))$ in $V_{\mathrm{int}} \cap \big(\cup_\sigma \Sigma_\sigma\big)$ let
\begin{align*}
\begin{split}
\Psi_{\mathrm{int}}(p)= (y^0,\dots,y^n), \qquad y^0=t, ~y^i= \rho\Theta^i(\omega); i=1,\dots,n.
\end{split}
\end{align*}
The coordinate map $\Psi_{\mathrm{ext}}$ is defined similarly. In the overlapping region which is, by assumption, contained in $\mathcal C_{\mathrm{flat}}$, for a point $p=(\sigma,\eta(\sigma)+r\Theta(\theta))$,
\begin{align*}
\Psi_{\mathrm{ext}}(p)=(y^0,\dots,y^n),\qquad y^0=\sigma,~y^i=r\Theta^i(\theta); i=1,\dots,n.
\end{align*}
Let $\chi$ be a cutoff function which is supported in $V_{\mathrm{int}}$ and is equal to one on $V_{\mathrm{int}}\backslash(V_{\mathrm{int}}\cap V_{\mathrm{ext}})$. We define the global rectangular coordinates by
\begin{align*}
\begin{split}
y=\Psi(p):= \chi(p)\Psi_{\mathrm{int}}(p)+(1-\chi(p))\Psi_{\mathrm{ext}}(p),
\end{split}
\end{align*}
and the global polar coordinates by expressing $y$ in polar coordinates $(\uptau,\uprho,\uptheta)$. To see that $\Psi$ defines a coordinate map we need to check that $d\Psi(p)$ is invertible for all $p\in V_{\mathrm{int}}\cap V_{\mathrm{ext}}$ and that $\Psi$ is one to one. For the derivatives, it suffices to show that $d(\Psi\circ \Psi^{-1}_{\mathrm{int}})$ is invertible. By the definition of $\Psi$,
\begin{align*}
\begin{split}
\Psi\circ\Psi_{\mathrm{int}}^{-1}(x)=\chi\circ\Psi^{-1}_{\mathrm{int}}(x)x+(1-\chi\circ\Psi^{-1}_{\mathrm{int}}(x))\Psi_{\mathrm{ext}}\circ\Psi_{\mathrm{int}}^{-1}(x),
\end{split}
\end{align*}
so (here $I$ denotes the identity matrix)
\begin{align*}
\begin{split}
d(\Psi\circ\Psi_{\mathrm{int}}^{-1})(x)&=d(\Psi_{\mathrm{ext}}\circ\Psi_{\mathrm{int}}^{-1})(x)\\
&\quad+\chi\circ\Psi_{\mathrm{int}}^{-1}(x)(I-d(\Psi_{\mathrm{ext}}\circ\Psi_{\mathrm{int}}^{-1})(x))+\nabla(\chi\circ\Psi_{\mathrm{int}}^{-1})(x) (x-\Psi_{\mathrm{ext}}\circ\Psi_{\mathrm{int}}^{-1}(x)).
\end{split}
\end{align*}
Since $x-\Psi_{\mathrm{ext}}\circ\Psi_{\mathrm{int}}^{-1}(x)$ is small for $x\in \Psi_{\mathrm{int}}(V_{\mathrm{int}}\cap V_{\mathrm{ext}})$, it suffices to show that $I-d(\Psi_{\mathrm{ext}}\circ\Psi_{\mathrm{int}}^{-1})(x)$ is also small for such $x$. To compute the derivative we write $(\sigma,z)$ for $\Psi_{\mathrm{ext}}\circ\Psi_{\mathrm{int}}^{-1}(x)$ and $(t,y)$ for $x$. Then according to the above formulas for $\Psi_{\mathrm{int}}$ and $\Psi_{\mathrm{ext}}$, $(t,y)$ and $(\sigma,z)$ are related by
\begin{align*}
\begin{split}
t= \sigma,\quad \xi(t)+\gamma^{-1}(t) P_{\ell(t)}(\jap{y}|y|^{-1}y)+P_{\ell(t)}^\perp(\jap{y}|y|^{-1}y)= \eta(\sigma)+z.
\end{split}
\end{align*}
The desired invertibility then follows by implicitly differentiating these relations to get
\begin{align*}
\begin{split}
&\frac{\partial\sigma}{\partial t}=1,\quad \frac{\partial\sigma}{\partial y^j}=0,\\
&\frac{\partial z}{\partial t}+\eta'(\sigma)\frac{\partial\sigma}{\partial t}=-\gamma'(t)\gamma^{-2}(t)P_\ell(\jap{y}|y|^{-1}y)+(\gamma^{-1}-1)\frac{\jap{y}}{|y|}\big(\frac{y\cdot\ell}{|\ell|^2}{\dot{\ell}}+\frac{y\cdot{\dot{\ell}}}{|\ell|^2}\ell-2\frac{({\dot{\ell}}\cdot\ell)(y\cdot\ell)}{|\ell|^4}\ell\big)+\xi'(t),\\
&\frac{\partial z}{\partial y^j}=-\gamma^{-1}P_\ell(|y|^{-3}\jap{y}^{-1}y^jy)-P_\ell^\perp(|y|^{-3}\jap{y}^{-1}y^jy)+\gamma^{-1}P_\ell(\jap{y}|y|^{-1}e_j)+P_\ell^\perp(\jap{y}|y|^{-1}e_j),
\end{split}
\end{align*}
and using the smallness of $\ell$ and $\eta'(\sigma)-\xi'(t)$ (by assumption). The fact that $\Psi$ is one to one can be shown using similar considerations.
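The reduction to the smallness of $I-d(\Psi_{\mathrm{ext}}\circ\Psi_{\mathrm{int}}^{-1})$ rests on the elementary Neumann series bound: if $A$ is a square matrix with $\|I-A\|<1$, then $A$ is invertible with
\begin{align*}
\begin{split}
A^{-1}=\sum_{k=0}^\infty (I-A)^k,\qquad \|A^{-1}\|\leq \frac{1}{1-\|I-A\|}.
\end{split}
\end{align*}
Applying this with $A=d(\Psi\circ\Psi_{\mathrm{int}}^{-1})(x)$ yields the invertibility of $d\Psi(p)$ for $p\in V_{\mathrm{int}}\cap V_{\mathrm{ext}}$.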
We also remark that since in the overlapping region $t(p)=\sigma(p) = X^0(p)$, the coordinate $\uptau$ satisfies $\uptau=t=\sigma$. The normalized vectorfield $${\bfT}:=\partial_\uptau$$ plays a distinguished role in this work as a globally defined, almost Killing, unit timelike vectorfield. Note that $T$, defined in \eqref{eq:VFdef0} in the exterior, and ${\bfT}$ differ only by terms which decay in $\uptau$ (see~\eqref{eq:Tprecise1}).
\begin{remark}\label{rem:nongeomglobal1}
Since the global coordinates $(\uptau,\uprho,\uptheta)$ agree with the coordinates introduced in the previous two subsections in the respective regions, and in view of the invariant form $\Box+V$ appearing in \eqref{eq:LEDcalPint1} and \eqref{eq:extgeomlin1}, by inspection of the calculations in the interior and exterior regions (see \eqref{eq:LEDcalPint1}, \eqref{eq:varphiphi1}, \eqref{eq:callP2}, \eqref{eq:ErrcallP1}, \eqref{eq:Boxm1}, \eqref{eq:ErrBoxm1}, \eqref{eq:extgeomlin1}), the operator $\mathcal P$ in \eqref{eq:LEDcalPint1} and \eqref{eq:callP2} satisfies the following properties:
\begin{enumerate}
\item $\mathcal P$ admits the decomposition
\begin{align*}
\begin{split}
\mathcal P=\mathcal P_\uptau+\mathcal P_{\mathrm{ell}},
\end{split}
\end{align*}
where $\mathcal P_{\mathrm{ell}}$ is elliptic and does not contain $\partial_{\uptau}$ derivatives. $\mathcal P_{\mathrm{ell}}$ can be further decomposed as
\begin{align*}
\begin{split}
\mathcal P_{\mathrm{ell}}=\Delta_{\underline{\calC}}+V+\mathcal P_{\mathrm{ell}}^{{\mathrm{pert}}},
\end{split}
\end{align*}
where
\begin{align*}
\begin{split}
\Delta_{\underline{\calC}}=\frac{1}{\jap{\uprho}^{n-1}|F_{\uprho}|}\partial_\uprho(\jap{\uprho}^{n-1}|F_\uprho|^{-1}\partial_\uprho)+\frac{1}{\jap{\uprho}^2}{\mathring{\slashed{\Delta}}},
\end{split}
\end{align*}
and (recall the notation from Section~\ref{subsec:prelimvfs})
\begin{align*}
\begin{split}
\mathcal P_{\mathrm{ell}}^{\mathrm{pert}}=o_{\wp,R}(1)(\partial_\Sigma^2+\jap{\uprho}^{-1}\partial_\Sigma+\jap{\uprho}^{-2})+\calO({\dot{\wp}})\partial_\Sigma+\calO({\dot{\wp}})\jap{\uprho}^{-2}.
\end{split}
\end{align*}
Here $o_{\wp,R}(1)$ denotes coefficients which are $\calO(\wp)$ or can be made arbitrarily small by taking $R$ (the transition radius from the flat to the hyperboloidal foliation) large. Finally, $\mathcal P_\uptau$ has the form
\begin{align*}
\begin{split}
\mathcal P_\uptau=\calO(1)\partial {\bfT}+\calO(\jap{\uprho}^{-1}){\bfT}.
\end{split}
\end{align*}
\item If $|{\dot{\wp}}^{\leq 2}|\lesssim \epsilon \uptau^{-\gamma-1}$, for some $\gamma>1$, then the operator $\mathcal P$ takes the form
\begin{align*}
\begin{split}
\mathcal P\psi=\uppi_q^{\mu\nu}\partial^2_{\mu\nu}\psi+\uppi_l^\mu\partial_\mu\psi+\uppi_c\psi,
\end{split}
\end{align*}
where the coefficients satisfy the following properties. In view of the invariant form $\Box_h+V$ appearing in \eqref{eq:LEDcalPint1} and \eqref{eq:extgeomlin1} (see also the comments following \eqref{eq:extgeomlin1}), let
\begin{align*}
\begin{split}
&\underline{\uppi}_q^{\mu\nu}=(h^{-1})^{\mu\nu}\vert_{\uptau=t_2}, \quad \underline{\uppi}_l^\nu=|h|^{-\frac{1}{2}}\partial_\mu(|h|^{\frac{1}{2}}(h^{-1})^{\mu\nu})\vert_{\uptau=t_2},\quad \underline{\uppi}_c=V\vert_{\substack{\dot{\ell} = 0 \\ \dot{\xi} = \ell\\ \uptau=t_2}},\\
& \mathring{\uppi}_q=\uppi_q-\underline{\uppi}_q,\quad \mathring{\uppi}_l=\uppi_l-\underline{\uppi}_l,\quad \mathring{\uppi}_c=\uppi_c-\underline{\uppi}_c,
\end{split}
\end{align*}
and correspondingly decompose $\mathcal P$ as $\mathcal P=\mathcal P_0+\mathcal P_{\mathrm{pert}}$ with
\begin{align}\label{eq:calPP0Ppertdecomp1}
\begin{split}
\mathcal P_0=\underline{\uppi}_q^{\mu\nu}\partial^2_{\mu\nu}+\underline{\uppi}_l^\mu\partial_\mu+\underline{\uppi}_c, \qquad \mathcal P_{\mathrm{pert}}=\mathring{\uppi}_q^{\mu\nu}\partial^2_{\mu\nu}+\mathring{\uppi}_l^\mu\partial_\mu+\mathring{\uppi}_c.
\end{split}
\end{align}
Then $|\mathring{\uppi}_q|, |\mathring{\uppi}_l|, |\mathring{\uppi}_c|\lesssim \jap{\uptau}^{-\gamma}$, and, with $y$ denoting the spatial variables $(\uprho,\uptheta)$,
\begin{align*}
\begin{split}
\sup_y(\jap{\uprho}^{2}|\mathring{\uppi}_q^{\uptau\uptau}|+|\mathring{\uppi}_q^{\uptau y}|+|\mathring{\uppi}_q^{yy}|)\lesssim \epsilon \jap{\uptau}^{-\gamma}.
\end{split}
\end{align*}
Moreover, in the hyperboloidal part of the foliation, $\mathcal P_{\mathrm{pert}}$ has the more precise structure
\begin{align*}
\begin{split}
{\mathring a}\partial_\uptau(\partial_\uprho+\frac{n-1}{2\uprho})+{\mathring a}^{\mu\nu}\partial^2_{\mu\nu}+{\mathring b}^\mu \partial_\mu +{\mathring c},
\end{split}
\end{align*}
where
\begin{align}\label{eq:calPP0Ppertdecomp2}
\begin{split}
&|{\mathring a}|, |{\mathring a}^{yy}|\lesssim \epsilon \jap{\uptau}^{-\gamma}, \quad |\partial_y{\mathring a}^{yy}|, |{\mathring a}^{\uptau y}|, |{\mathring b}^y|\lesssim \epsilon\jap{\uptau}^{-\gamma}\uprho^{-1},\quad |{\mathring a}^{\uptau\uptau}|, |{\mathring b}^\uptau|\lesssim \epsilon \jap{\uptau}^{-\gamma}\uprho^{-2},\\
&|{\mathring c}|\lesssim \epsilon\jap{\uptau}^{-\gamma} \uprho^{-4}.
\end{split}
\end{align}
\item $\mathcal P$ can be written as
\begin{align*}
\begin{split}
\mathcal P\uppsi=\frac{1}{\sqrt{|{\bf h}|}}\partial_\mu (\sqrt{|{\bf h}|}({\bf h}^{-1})^{\mu\nu}\partial_\nu\uppsi)+V\uppsi+{\tilde{\boldsymbol{\calP}}},
\end{split}
\end{align*}
where the Lorentzian metric ${\bf h}$ agrees with $h$ in \eqref{eq:LEDcalPint1} in the region where $(\uptau,\uprho,\uptheta)$ agrees with $(t,\rho,\omega)$, and with $m$ in \eqref{eq:mform1} in the region where $(\uptau,\uprho,\uptheta)$ agrees with $(\tau,r,\theta)$. Moreover, the perturbative part ${\tilde{\boldsymbol{\calP}}}$ can be written as
\begin{align*}
\begin{split}
{\tilde{\boldsymbol{\calP}}}= {\bf p}^{\mu\nu}\partial_{\mu\nu}^2+{\bf q}^\mu \partial_\mu+{\bf s}
\end{split}
\end{align*}
for a symmetric tensor ${\bf p}$, a vectorfield ${\bf q}$, and a scalar ${\bf s}$ satisfying (in the notation of item~(1) above)
\begin{align*}
\begin{split}
\jap{\uprho}^2|{\bf p}|+\jap{\uprho}^3|{\bf q}|+\jap{\uprho}^4|{\bf s}|=o_{\wp,{\Green{R}}}(1)+\calO({\dot{\wp}}^{\leq 2}).
\end{split}
\end{align*}
\end{enumerate}
\end{remark}
\subsubsection{Global Geometric Coordinate}\label{sec:GGC}
Here we introduce a new set of coordinates $({\tilde{\uptau}},{\tilde{\uprho}},{\tilde{\uptheta}})$ to which we refer as \emph{geometric global coordinates}. Their main property of interest to us is that the operator $\mathcal P_0$ introduced in Remark~\ref{rem:nongeomglobal1} above has the following expression in these coordinates:
\begin{align*}
\begin{split}
\mathcal P_0=-\partial_{{\tilde{\uptau}}}^2+{\tilde{\Delta}}+V({\tilde{\uprho}}).
\end{split}
\end{align*}
Here, with ${\mathring{\slashed{\Delta}}}$ denoting the Laplacian on the round sphere $\mathbb S^{n-1}$,
\begin{align*}
\begin{split}
{\tilde{\Delta}}=\frac{1}{\jap{{\tilde{\uprho}}}^{n-1}|F_{{\tilde{\uprho}}}|}\partial_{\tilde{\uprho}}(\jap{{\tilde{\uprho}}}^{n-1}|F_{\tilde{\uprho}}|^{-1}\partial_{\tilde{\uprho}})+\frac{1}{\jap{{\tilde{\uprho}}}^2}{\mathring{\slashed{\Delta}}}.
\end{split}
\end{align*}
The geometric global coordinates can be defined as follows. Let ${\underline \ell}=\ell(t_2)$, $\underline{\xi}=\xi(t_2)$, and $\underline{\gamma}=\gamma(t_2)$. With these choices we consider two parameterizations of the catenoid defined in \eqref{eq:calCsigmadefintro1}, where $\ell\equiv {\underline \ell}$ and $\xi\equiv\underline{\xi}$. The first parameterization is exactly by the non-geometric global coordinates $(\uptau,\uprho,\uptheta)$ from Section~\ref{sec:GNGC}, corresponding to the choice of parameters $\ell\equiv {\underline \ell}$, $\xi\equiv\underline{\xi}+\sigma{\underline \ell}$. The second parameterization is simply
\begin{align*}
\begin{split}
\Lambda_{-{\underline \ell}}({\tilde{\uptau}},F({\tilde{\uprho}},{\tilde{\uptheta}}))+(0,\underline{\xi}),
\end{split}
\end{align*}
where $F$ and $\Lambda$ are as in \eqref{eq:Fdef} and \eqref{eq:Lambdadef1}. The coordinate change between $(\uptau,\uprho,\uptheta)$ and $({\tilde{\uptau}},{\tilde{\uprho}},{\tilde{\uptheta}})$ is then obtained by equating the $X^i$, $i=0,\dots,n$, coordinates of the ambient space with respect to these two parameterizations. The desired form of $\mathcal P_0$ follows from the coordinate invariance of this operator. Explicit formulas for the coordinate transformation can be given in the regions where the non-geometric coordinates agree with the coordinates $(t,\rho,\omega)$ and $(\tau,r,\theta)$; they read, respectively,
\begin{align}\label{eq:coordinatetransformationrelation1}
\begin{split}
{\tilde{\uptau}}=\underline{\gamma}^{-1}\uptau-{\underline \ell}\cdot F(\uprho,\uptheta), \quad {\tilde{\uprho}}=\uprho,\quad {\tilde{\uptheta}}=\uptheta
\end{split}
\end{align}
in the interior, and by
\begin{align*}
\begin{split}
\uptau+\jap{\uprho}=\underline{\gamma}{\tilde{\uptau}}+\underline{\gamma}{\underline \ell}\cdot F({\tilde{\uprho}},{\tilde{\uptheta}}), \quad \uptau{\underline \ell}+\uprho\Theta(\uptheta)=\underline{\gamma}{\tilde{\uptau}}{\underline \ell}+A_{\underline \ell} F({\tilde{\uprho}},{\tilde{\uptheta}})
\end{split}
\end{align*}
in the exterior.
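For orientation, we note the chain-rule relations implied by the interior transformation \eqref{eq:coordinatetransformationrelation1}: since ${\tilde{\uprho}}=\uprho$ and ${\tilde{\uptheta}}=\uptheta$ there, for each angular coordinate we have, schematically,
\begin{align*}
\begin{split}
\partial_\uptau=\underline{\gamma}^{-1}\partial_{{\tilde{\uptau}}},\qquad \partial_\uprho=\partial_{{\tilde{\uprho}}}-({\underline \ell}\cdot\partial_\uprho F)\partial_{{\tilde{\uptau}}},\qquad \partial_\uptheta=\partial_{{\tilde{\uptheta}}}-({\underline \ell}\cdot\partial_\uptheta F)\partial_{{\tilde{\uptau}}},
\end{split}
\end{align*}
so in the interior the passage to the geometric global coordinates amounts to a constant rescaling of the time derivative together with an ${\underline \ell}$-dependent tilt of the spatial derivatives.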
\section{Main Bootstrap Argument and the Proof of Theorem~\ref{thm:main}}\label{sec:bootstrap}
In the first part of this section we set up the bootstrap assumptions and state the propositions which assert that the bootstrap regime is trapped. These are Propositions~\ref{prop:bootstrappar1} and~\ref{prop:bootstrapphi1}, and their proofs will occupy most of the remainder of the paper. In the rest of this section we will prove Theorem~\ref{thm:main} assuming Propositions~\ref{prop:bootstrappar1} and~\ref{prop:bootstrapphi1}.
For concreteness, we set $n=5$ for the remainder of the paper, but our arguments are easily adaptable to higher dimensions. We can now state our bootstrap assumptions. We assume that there exist $\xi,\ell$ defined on $[0,\tau_f)$ and a parameterization \eqref{eq:psidefintro1} for $\tau\in[0,\tau_f)$ such that the orthogonality conditions \eqref{equ:derivation_modul_equ4} and \eqref{eq:wpoutline9} are satisfied. We also assume that the following \emph{trapping assumption} holds (see \eqref{eq:Fplusdef} for the definition of $F_{+}$ and \eqref{equ:unstable_mode_decomposition} for the relation between $\psi$ and $\phi$):
\begin{equation} \label{eq:a+trap}
|\mu a_{+}(\tau)-e^{\mu \tau }S(e^{-\mu \tau}F_{+})|\leq C_{\mathrm{trap}} \delta_{\wp} \epsilon\tau^{-3},
\end{equation}
and the following estimates hold for all $\tau,\sigma_1,\sigma_2\in[0,\tau_f)$ (recall from Section~\ref{subsec:prelimvfs} that ${\tilde{\partial}}_\Sigma$ denotes size one tangential derivatives $\partial_\Sigma$ or $\jap{\uprho}^{-1} {\bfT}$ and that in the exterior $X$ denotes any of the vectorfields ${\tilde{r}} L$, $\Omega$, or $T$ introduced in Section~\ref{sec:profile2}):
\begin{align}
&|a_{+}^{(k)}|\leq 2 C_{k} \delta_{\wp}\epsilon \tau^{-\frac{5}{2}+\kappa}, \quad \forall k\geq 0.\label{eq:a+b1}\\
&|a_{-}^{(k)}|\leq 2C_k \delta_{\wp} \epsilon\tau^{-\frac{5}{2}+\kappa},\quad \forall k\geq 0.\label{eq:a-b1}\\
&|{\dot{\wp}}^{(k)}|\leq 2C_k \delta_{\wp}\epsilon \tau^{-\frac{5}{2}+\kappa},\quad \forall k\geq 1.\label{eq:wpb1}\\
&|\phi|+\chi_{\geq R}|X^k\phi|\leq 2C\epsilon\tau^{-\frac{9}{4}+\frac{\kappa}{2}},\quad k\leq M-7.\label{eq:phiptwiseb1}\\
&|\partial^k\phi|+\chi_{\geq R}|\partial^{k-j}X^j\phi|\leq 2C\epsilon\tau^{-\frac{5}{2}+\kappa},\quad 1\leq k\leq M-8,~j<k.\label{eq:dphiptwiseb1}\\
&\chi_{\geq R}|\partial^{k-j}X^j\phi|\leq 2C\epsilon\jap{r}^{-\frac{3}{2}}\tau^{-1},\quad 1\leq k\leq M-8,~j<k.\label{eq:dphiptwiseb2}\\
&\chi_{\geq R}|\partial^{k-j}X^j\phi|\leq 2C\epsilon\jap{r}^{-2}\tau^{-\frac{1}{2}},\quad 1\leq k\leq M-8,~j<k.\label{eq:dphiptwiseb3}\\
&\| \chi_{\leq R}\partial^k\phi\|_{L^2(\Sigma_\tau)}+\|\jap{r}^{-\frac{5}{2}+\kappa}(\chi_{\geq R}X^k\phi)\|_{L^2(\Sigma_\tau)}\leq 2C \epsilon\tau^{-\frac{5}{2}+\kappa},\quad 0\leq k\leq M-4.\label{eq:dphiL2b1}
\end{align}
\begin{align}
&\|\partial^2_\Sigma (\chi_{\leq R}\partial^k{\bfT}\phi)\|_{L^2(\Sigma_\tau)}+\|\partial^2_\Sigma(\chi_{\geq R} X^k{\bfT}\phi)\|_{L^2(\Sigma_\tau)}\leq 2C \epsilon\tau^{-3},\quad 0\leq k \leq M-4.\label{eq:d2Tphienergyb1}\\
&\|\chi_{\leq R}\partial_\Sigma \partial^{k}{\bfT}^j\phi\|_{L^2(\Sigma_\tau)}+\|\chi_{\geq R} {\tilde{\partial}}_\Sigma X^{k}{\bfT}^j\phi\|_{L^2(\Sigma_\tau)}\leq 2C \epsilon\tau^{-1-j},\quad 0\leq j\leq 2,~ k+3j \leq M-2.\label{eq:Tjphienergyb1}\\
&\|\chi_{\geq R}r^{p} (\partial_r+\frac{n-1}{2r}) X^{k}{\bfT}^j\phi\|_{L^2(\Sigma_\tau)}\leq 2C \epsilon\tau^{-1-j+\frac{p}{2}},\quad 0\leq p\leq 2,~ k+3j \leq M-2.\label{eq:Tjphienergyb2}
\end{align}
We state our bootstrap propositions in three steps. First we close the bootstrap assumptions for the parameters, but with a suboptimal rate for $\mu a_{+}-e^{\mu \tau}S(e^{-\mu\tau}F_{+})$ in \eqref{eq:a+trap}. We then use this to improve the bootstrap bounds on $\phi$, and finally we show that the initial data and parameters can be chosen such that the bootstrap regime remains trapped. The precise statements of the first two steps are contained in the following two propositions; the last step is carried out in the proof of Theorem~\ref{thm:main} in Section~\ref{subsec:proofofmaintheorem}.
\begin{proposition}\label{prop:bootstrappar1}
Suppose the estimates \eqref{eq:a+trap}--\eqref{eq:Tjphienergyb2} and orthogonality conditions \eqref{equ:derivation_modul_equ4} and \eqref{eq:wpoutline9} are satisfied. If $\epsilon$ is sufficiently small and $C,C_k$ appearing on the right-hand side of \eqref{eq:a+trap}--\eqref{eq:Tjphienergyb2} are sufficiently large (compared to $C_{\mathrm{trap}}$), then the following improved estimates hold:
\begin{align}
&|a_{+}^{(k)}|\leq C_{k} \delta_{\wp} \epsilon\tau^{-\frac{5}{2}+\kappa}, \quad \forall k\geq 0.\label{eq:a+1}\\
&|a_{-}^{(k)}|\leq C_k \delta_{\wp} \epsilon\tau^{-\frac{5}{2}+\kappa},\quad \forall k\geq 0.\label{eq:a-1}\\
&|{\dot{\wp}}^{(k)}|\leq C_k \delta_{\wp} \epsilon\tau^{-\frac{5}{2}+\kappa},\quad \forall k\geq 1.\label{eq:wp1}
\end{align}
\end{proposition}
\begin{proposition}\label{prop:bootstrapphi1}
Suppose the estimates \eqref{eq:a+trap}--\eqref{eq:Tjphienergyb2} and orthogonality conditions \eqref{equ:derivation_modul_equ4} and \eqref{eq:wpoutline9} are satisfied. If $\epsilon$ is sufficiently small and $C,C_k$ appearing on the right-hand side of \eqref{eq:a+trap}--\eqref{eq:Tjphienergyb2} are sufficiently large (compared to $C_{\mathrm{trap}}$), then the following improved estimates hold:
\begin{align}
&|\phi|+\chi_{\geq R}|X^k\phi|\leq C\epsilon\tau^{-\frac{9}{4}+\frac{\kappa}{2}},\quad k\leq M-7.\label{eq:phiptwise1}\\
&|\partial^k\phi|+\chi_{\geq R}|\partial^{k-j}X^j\phi|\leq C\epsilon\tau^{-\frac{5}{2}+\kappa},\quad 1\leq k\leq M-8,~j<k.\label{eq:dphiptwise1}\\
&|\partial^k\phi|+\chi_{\geq R}|\partial^{k-j}X^j\phi|\leq C\epsilon\jap{r}^{-\frac{3}{2}}\tau^{-1+\kappa},\quad 1\leq k\leq M-8,~j<k.\label{eq:dphiptwise2}\\
&|\partial^k\phi|+\chi_{\geq R}|\partial^{k-j}X^j\phi|\leq C\epsilon\jap{r}^{-2}\tau^{-\frac{1}{2}+\kappa},\quad 1\leq k\leq M-8,~j<k.\label{eq:dphiptwise3}\\
&\| \jap{r}^{-\frac{5}{2}+\kappa}(\chi_{\leq R}\partial^k\phi)\|_{L^2(\Sigma_\tau)}+\|\jap{r}^{-\frac{5}{2}+\kappa}(\chi_{\geq R}X^k\phi)\|_{L^2(\Sigma_\tau)}\leq C \epsilon\tau^{-\frac{5}{2}+\kappa},\quad 0\leq k\leq M-4.\label{eq:dphiL21}\\
&\|\partial^2_\Sigma (\chi_{\leq R}{\bfT}\partial^k\phi)\|_{L^2(\Sigma_\tau)}+\|\partial^2_\Sigma(\chi_{\geq R}{\bfT} X^k\phi)\|_{L^2(\Sigma_\tau)}\leq C \epsilon\tau^{-3},\quad 0\leq k \leq M-4.\label{eq:d2Tphienergy1}\\
&\|\chi_{\leq R}\partial_\Sigma \partial^{k}{\bfT}^j\phi\|_{L^2(\Sigma_\tau)}+\|\chi_{\geq R} {\tilde{\partial}}_\Sigma X^{k}{\bfT}^j\phi\|_{L^2(\Sigma_\tau)}\leq C \epsilon\tau^{-1-j},\quad 0\leq j\leq 2,~k+3j \leq M-2.\label{eq:Tjphienergy1}\\
&\|\chi_{\geq R}r^{p} (\partial_r+\frac{n-1}{2r}) X^{k}{\bfT}^j\phi\|_{L^2(\Sigma_\tau)}\leq C \epsilon\tau^{-1-j+\frac{p}{2}},\quad 0\leq p\leq 2,~ k+3j \leq M-2.\label{eq:Tjphienergy2}
\end{align}
\end{proposition}
Propositions~\ref{prop:bootstrappar1} and \ref{prop:bootstrapphi1} will be proved in Sections~\ref{sec:parametercontrol} and \ref{sec:exterior} respectively. Observe that Proposition~\ref{prop:bootstrappar1} does \emph{not} improve the trapping assumption \eqref{eq:a+trap}, but only the bounds \eqref{eq:a+b1}--\eqref{eq:wpb1}. We will employ a topological (shooting) argument to find a global solution for which~\eqref{eq:a+trap} holds for all times, by choosing the initial data appropriately.
\subsection{Proof of Theorem~\ref{thm:main}}\label{subsec:proofofmaintheorem}
Assuming Propositions~\ref{prop:bootstrappar1} and~\ref{prop:bootstrapphi1}, we will prove Theorem~\ref{thm:main}.
\begin{proof}[Proof of Theorem~\ref{thm:main}]
The proof consists of two steps as we now explain. Given $(\psi_0,\psi_1)$, for each $b$ (see the statement of Theorem~\ref{thm:main}) we let $\tau_f(b)$ be the maximal time on which there is a solution parameterized as in \eqref{eq:psidefintro1} such that the bootstrap assumptions \eqref{eq:a+trap}--\eqref{eq:Tjphienergyb2} and orthogonality conditions \eqref{equ:derivation_modul_equ4} and \eqref{eq:wpoutline9} are satisfied. By local well-posedness (Proposition~\ref{prop:LWP}), the normal neighborhood lemma (Lemma~\ref{rem:normalneighborhood}), and the implicit function theorem arguments in Sections~\ref{subsec:modulationeqs} and~\ref{subsec:unstableint}, we know that $\tau_f(b)$ is strictly positive for each choice of $b$. Our goal is to show that $\tau_f(b)$ is infinite for some choice of $b$. Suppose $\tau_f(b)$ is finite for all choices of $b$. In the first step we show that condition \eqref{eq:a+trap} must get saturated, that is, the inequality must become an equality, at $\tau=\tau_f$. In the second step, we show that if $(\psi_0,\psi_1)$ satisfy a certain codimension one condition, equation \eqref{eq:codim1}, then there is a choice of $b$ for which \eqref{eq:a+trap} is not saturated by time $\tau_f(b)$, and this is the desired contradiction.
Before turning to the details of Steps 1 and 2, we clarify one point. Because of the nonlocal nature of the orthogonality conditions \eqref{equ:derivation_modul_equ4} and \eqref{eq:wpoutline9} (coming from the smoothing operator $S$), in order to apply the implicit function theorem to get parameters which guarantee \eqref{equ:derivation_modul_equ4} and \eqref{eq:wpoutline9}, we first need to define $\psi$ for $\tau\in[-1,0]$. For this, we fix an extension procedure which is continuous (say with respect to some finite-regularity Sobolev norm) and linear in $(\psi_0,\psi_1)$, and work with this fixed extension throughout the proof.
{\underline{\emph{Step 1.}}}
Fix $b$ and let $\tau^\ast\in(0,\infty)$ be such that the bootstrap conditions described above (including the orthogonality conditions and the parameterization \eqref{eq:psidefintro1}) are satisfied on $[0,\tau^\ast]$. We want to show that if \eqref{eq:a+trap} is strict on $[0,\tau^\ast]$ then $\tau_f(b)>\tau^\ast$, where as above $\tau_f(b)$ is the maximal time on which the bootstrap conditions are satisfied. By Propositions~\ref{prop:bootstrappar1} and~\ref{prop:bootstrapphi1} (with $\tau_f$ replaced by $\tau^\ast$) we can improve the bootstrap assumptions \eqref{eq:a+b1}--\eqref{eq:Tjphienergyb2} on $[0,\tau^\ast]$. Now suppose \eqref{eq:a+trap} is strict on $[0,\tau^\ast]$. By Proposition~\ref{prop:LWP} applied with $\ell_0$ and $\xi_0$ fixed at the values of $\ell$ and $\xi$ at times close to $\tau^\ast$, we can extend the solution on an interval of size of order one beyond $\tau^\ast$. Applying Lemma~\ref{rem:normalneighborhood} and the implicit function theorems in Sections~\ref{subsec:modulationeqs} and~\ref{subsec:unstableint} (note that while the details of the proofs there were carried out for zero $\ell_0$ and $\xi_0$, identical arguments can be used for other choices) we can extend $\xi$ and $\ell$ and the parameterization \eqref{eq:psidefintro1} beyond $\tau^\ast$ such that the orthogonality conditions \eqref{equ:derivation_modul_equ4} and \eqref{eq:wpoutline9} are still satisfied. Moreover, since \eqref{eq:a+trap} is strict on $[0,\tau^\ast]$, by continuity it is still satisfied on a larger interval. It follows that on this larger interval all the bootstrap conditions are satisfied and hence $\tau_f(b)>\tau^\ast$.
{\underline{\emph{Step 2.}}} Assume, for contradiction, that $\tau_f(b)$ is finite for every choice of $b$. To simplify notation let
\begin{align*}
\begin{split}
q(\tau):=\mu a_{+}(\tau)-e^{\mu \tau}S(e^{-\mu \tau}F_{+}(\tau)),\qquad q_0=q(0),\qquad \uplambda(\tau):=C_\mathrm{trap}\delta_\wp\epsilon \jap{\tau}^{-3},\qquad \uplambda_0:=C_\mathrm{trap}\delta_\wp\epsilon.
\end{split}
\end{align*}
Note that $q={\dot{a}}_{+}$.
{\underline{\emph{Step 2a.}}} We claim that if $(\psi_0,\psi_1)$ satisfy an orthogonality condition (see \eqref{eq:codim1}), then for each $|q_0|\leq \uplambda_0$ there is a choice of $b$ in a neighborhood of zero for which $q(0)=q_0$. From Section~\ref{subsec:unstableint}, the orthogonality condition \eqref{eq:wpoutline9} determines $a_{+}$ such that
\begin{align*}
\begin{split}
a_+(t)=\boldsymbol{\Omega}(\vec{\psi},{\vec Z}_{-})-e^{\mu t}{\tilde{S}}(e^{-\mu t}F_{+}).
\end{split}
\end{align*}
Recalling the extension procedure to $[-1,0]$ described at the beginning of the proof of Theorem~\ref{thm:main}, we define a map $\mathcal Z:C^\infty({\underline{\calC}})\times C^\infty({\underline{\calC}})\times I\to \mathbb R$, where $I$ is a neighborhood of zero in $\mathbb R$, by
\begin{align*}
\begin{split}
\mathcal Z(\psi_0,\psi_1,b)=q(0),
\end{split}
\end{align*}
where $q$ is determined using initial data
\begin{align}\label{eq:shootingbdata1}
\begin{split}
\Phi\vert_{\{t=0\}}=\Phi_0[\epsilon(\psi_0+b{\tilde{\varphi}}_\mu)]{\ \ \text{and} \ \ } \partial_t\Phi\vert_{\{t=0\}}=\Phi_1[\epsilon(\psi_1-\mu b{\tilde{\varphi}}_\mu)],
\end{split}
\end{align}
as in the statement of Theorem~\ref{thm:main}. We then restrict attention to $(\psi_0,\psi_1)$ satisfying the codimension one condition
\begin{align}\label{eq:codim1}
\begin{split}
\mathcal Z(\psi_0,\psi_1,0)=0.
\end{split}
\end{align}
Now since $\boldsymbol{\Omega}(({\tilde{\varphi}}_\mu,-\mu{\tilde{\varphi}}_\mu)^{\intercal},{\vec Z}_{-})\simeq 1$, by \eqref{eq:shootingbdata1} and a similar argument as for the implicit function theorem in Section~\ref{subsec:unstableint}, we see that $\big|\frac{\partial q_0}{\partial b}\vert_{(\psi_0,\psi_1,0)}\big|\gtrsim1$. Our claim then follows from the implicit function theorem and \eqref{eq:codim1}.
{\underline{\emph{Step 2b.}}} By Step 1 and Step 2a, and our contradiction assumption, for every choice of $q_0$ there is $\tau_{\mathrm{trap}}(q_0)$ such that the corresponding solution satisfies $|q(\tau)|<\uplambda(\tau)$ for $\tau<\tau_{\mathrm{trap}}(q_0)$ and $|q(\tau_{\mathrm{trap}}(q_0))|=\uplambda(\tau_{\mathrm{trap}}(q_0))$. We use a standard shooting argument (see for instance \cite{DKSW,OP1}) to derive a contradiction from this. The main observation is that if
$
\frac{1}{2}\uplambda(\tau)<| q(\tau)|<\uplambda(\tau),
$
for some $\tau\leq \tau_f$, then
\begin{align}\label{eq:outgoing1}
\begin{split}
\frac{\mathrm{d}}{\mathrm{d}\tau}q^2(\tau)\geq \mu q^2(\tau).
\end{split}
\end{align}
Indeed, rewriting the equation $\frac{\mathrm{d}}{\mathrm{d} \tau}(e^{-\mu \tau}{\dot{a}}_{+})=-S(e^{-\mu \tau}{\dot{F}}_{+})$ as
\begin{align*}
\begin{split}
\dot{q}(\tau)= \mu q(\tau)-e^{\mu\tau} S(e^{-\mu \tau}{\dot{F}}_{+}(\tau)),
\end{split}
\end{align*}
and multiplying by $2q(\tau)$, the first term on the right gives $2\mu q^2(\tau)$. On the other hand, by the arguments in Section~\ref{sec:parametercontrol},
\begin{align*}
\begin{split}
|e^{\mu\tau} S(e^{-\mu \tau}{\dot{F}}_{+}(\tau))|\leq c \uplambda(\tau)< 2c |q(\tau)|,
\end{split}
\end{align*}
for some $c\ll \mu$, proving \eqref{eq:outgoing1}. We will show that the map $\Lambda:(-\uplambda_0,\uplambda_0)\to\{\pm\uplambda_0\}$, $\Lambda(q_0)=\uplambda_0\,\mathrm{sgn}\big(q(\tau_{\mathrm{trap}}(q_0))\big)$, is continuous. Since, by \eqref{eq:outgoing1}, $\Lambda(q_0)=-\uplambda_0$ if $q_0$ is close to $-\uplambda_0$ and $\Lambda(q_0)=\uplambda_0$ if $q_0$ is close to $\uplambda_0$, the continuity of $\Lambda$ contradicts the intermediate value theorem. By continuous dependence on initial data, it suffices to prove that $\tau_{\mathrm{trap}}(\cdot)$ is continuous. Fix $q_0\in(-\uplambda_0,\uplambda_0)$ and let $q$ denote the corresponding solution. By \eqref{eq:outgoing1}, given $\upvarepsilon>0$ there exists $\updelta\in(0,1)$ such that if $(1-\updelta)\uplambda(\tau)<|q(\tau)|<\uplambda(\tau)$ for some $\tau<\tau_f$, then $|\tau_{\mathrm{trap}}(q_0)-\tau|<\upvarepsilon$. Let $\tau_1<\tau_f$ be such that $(1-\updelta^2)\uplambda(\tau_1)<|q(\tau_1)|<(1-\updelta^3)\uplambda(\tau_1)$, and note that if $q_1$ is sufficiently close to $q_0$ then the solution ${\tilde{q}}$ corresponding to $q_1$ satisfies $(1-\updelta)\uplambda(\tau_1)<|{\tilde{q}}(\tau_1)|<\uplambda(\tau_1)$, and hence $|\tau_{\mathrm{trap}}(q_0)-\tau_{\mathrm{trap}}(q_1)|\leq |\tau_{\mathrm{trap}}(q_0)-\tau_1|+|\tau_{\mathrm{trap}}(q_1)-\tau_1|<2\upvarepsilon$.
\end{proof}
\section{Parameter Control}\label{sec:parametercontrol}
In this section we prove Proposition~\ref{prop:bootstrappar1}. In the process we will also derive estimates on $\boldsymbol{\Omega}_i({\bfT}^k\phi):=\boldsymbol{\Omega}({\bfT}^k\vec{\phi},{\vec Z}_i)$, $i\in\{\pm\mu,1,\dots,2n\}$, $k\geq0$, which are of independent interest for the local-energy decay estimate. Recall equations~\eqref{equ:derivation_modul_equ3_alt} and \eqref{eq:wpoutline10} from Section~\ref{sec:interior} for ${\dot{\wp}}=({\dot{\ell}},{\dot{\xi}}-\ell)^{\intercal}$, $a_{-}$, and $a_{+}$,
\begin{align}\label{eq:pareqsrepeat1}
\begin{split}
{\dot{\wp}} = {\vec F}_{\wp}(S\vec{\psi},\ell,S{\vec N}-\beta\vec{\omega}),\qquad \frac{\mathrm{d}}{\mathrm{d} t}(e^{-\mu t}a_{+})=-S(e^{-\mu t}F_{+}),\qquad \frac{\mathrm{d}}{\mathrm{d} t}(e^{\mu t}a_{-})=S(e^{\mu t}F_{-}),
\end{split}
\end{align}
where by a slight abuse of notation we have suppressed the derivatives on $\vec{\psi}$ in \eqref{equ:derivation_modul_equ3_alt} and written ${\vec F}_{\wp}(S\vec{\psi},\ell,S{\vec N}-\beta\vec{\omega}):= {\vec G}( S\partial_\Sigma^{\leq2} \vec{\psi}, \ell, S{\vec N} - \beta \vec{\omega})$. Here $\vec{\omega}$ is defined as in \eqref{equ:derivation_modul_equ5} (see also \eqref{eq:omegasolutionform1}) as the solution of
\begin{align}\label{eq:vecomegarepeat1}
\begin{cases}
\partial_t\vec{\omega}+\beta\vec{\omega}=-S{\vec F}_\omega(\vec{\psi},\ell,\vec{\omega})\quad &t> 0\\
\vec{\omega}=0\quad &t\leq0
\end{cases},
\end{align}
with ${\vec F}_\omega$ as in \eqref{eq:Fomegadef1} (where again we have suppressed the derivatives on $\vec{\psi}$ in the notation). In view of the spatial support of the test functions in imposing orthogonality conditions, all the integrations appearing in the definitions of ${\vec F}_\wp$, $F_{\pm}$, and ${\vec F}_\omega$ are over the region $\{\rho\leq {R_1}\}$. We will also need the equations for $\boldsymbol{\Omega}_i(\psi)=\boldsymbol{\Omega}(\vec{\psi},{\vec Z}_i)$, $i=1,\dots,2n$, and $\boldsymbol{\Omega}_{\pm}(\phi)=\boldsymbol{\Omega}(\vec{\phi},{\vec Z}_{\pm}^\mu)$. First, recall that $\boldsymbol{\Omega}_i(\psi)=\Upomega_i+\omega_i$, where $\vec{\omega}=(\omega_1,\dots,\omega_{2n})$ is as in \eqref{eq:vecomegarepeat1} and $\vec{\Upomega}=(\Upomega_1,\dots\Upomega_{2n})$ satisfies
\begin{align}\label{eq:Upomegarepeat1}
\begin{split}
\vec{\Upomega}={\tilde{S}}({\vec N}+{\vec F}_\omega),
\end{split}
\end{align}
with ${\vec F}_\omega$ and ${\vec N}$ as above (see \eqref{equ:derivation_modul_equ5} and \eqref{eq:upOmegasolutionform1}). Similarly, with ${\tilde{F}}_{\pm}=(S-I)F_{\pm}$,\footnote{To be precise we should write $(S(e^{-\mu \cdot}{\tilde{F}}_{+}))(t)$ instead of $S(e^{-\mu t}{\tilde{F}}_{+}(t))$.}
\begin{align}\label{eq:bfOmegapmrepeat1}
\begin{split}
&\boldsymbol{\Omega}_{+}(\phi)(t)=e^{\mu t}S(e^{-\mu t}{\tilde{F}}_{+}(t)),\\
&\boldsymbol{\Omega}_{-}(\phi)(t)=e^{-\mu t}S(e^{\mu t}{\tilde{F}}_{-}(t)),
\end{split}
\end{align}
and
\begin{align}\label{eq:bfOmegapmrepeat2}
\begin{split}
&\frac{\mathrm{d}}{\mathrm{d} t}\boldsymbol{\Omega}_{+}(\phi)(t)=e^{\mu t}S(e^{-\mu t}{\tilde{F}}_{+}'(t)),\\
&\frac{\mathrm{d}^2}{\,\mathrm{d} t^2}\boldsymbol{\Omega}_{+}(\phi)(t)=e^{\mu t}S(e^{-\mu t}{\tilde{F}}_{+}''(t)),
\end{split}
\end{align}
and
\begin{align}\label{eq:bfOmegapmrepeat3}
\begin{split}
&\frac{\mathrm{d}}{\mathrm{d} t}\boldsymbol{\Omega}_{-}(\phi)(t)=e^{-\mu t}S(e^{\mu t}{\tilde{F}}_{-}'(t)),\\
&\frac{\mathrm{d}^2}{\,\mathrm{d} t^2}\boldsymbol{\Omega}_{-}(\phi)(t)=e^{-\mu t}S(e^{\mu t}{\tilde{F}}_{-}''(t)).
\end{split}
\end{align}
Finally recall that the smoothing operator $S$ and the operator ${\tilde{S}}$ are given by
\begin{align*}
\begin{split}
Sf(t)=\int K_S(s)f(t-s)\mathrm{d} s,\qquad {\tilde{S}} f(t)=\int K_{\tilde{S}}(s)f(t-s)\mathrm{d} s,
\end{split}
\end{align*}
where the smooth kernel $K_S$ and the non-smooth kernel $K_{\tilde{S}}$ are supported in $[0,1]$. In particular, if $|f(t)|\lesssim \jap{t}^{-\gamma}$ for some $\gamma>0$, then, since $\jap{t-s}\simeq\jap{t}$ uniformly for $s\in[0,1]$ and the kernels are integrable, we also have $$|{\tilde{S}} f(t)|+ |Sf(t)|\lesssim \jap{t}^{-\gamma}.$$ We will use this observation in this section without further mention. Our starting point is to estimate $\vec{\omega}$.
\begin{lemma}\label{lem:vecomega1}
Under the bootstrap assumptions \eqref{eq:a+trap}--\eqref{eq:Tjphienergyb2}, if ${R_1}$ is sufficiently large, then
\begin{align*}
\begin{split}
\big|\big(\frac{\mathrm{d}}{\mathrm{d} t}\big)^k\vec{\omega}\big|\lesssim \epsilon^2 \jap{t}^{-\frac{9}{2}+\kappa},\quad \forall k\geq 0.
\end{split}
\end{align*}
\end{lemma}
\begin{proof}
Integrating equation \eqref{eq:vecomegarepeat1} gives
\begin{align*}
\begin{split}
\vec{\omega}(t)=-\int_0^t e^{-\beta(t-s)}(S{\vec F}_\omega(\vec{\psi},\ell,\vec{\omega}))(s)\mathrm{d} s.
\end{split}
\end{align*}
Recalling that $|{\vec F}_\omega(x,p,w)|\lesssim |x| (|x|+|w|)$ (see \eqref{eq:Fomegaestimate1}), the desired estimate for $k=0$ follows from the assumptions \eqref{eq:a+trap}--\eqref{eq:Tjphienergyb2}. Here a point that deserves further clarification is the relation between $\vec{\psi}$ and $\phi$ and its derivatives. First, note that by writing $\vec{\psi}=\vec{\phi}+a_{+}{\vec Z}_\mu^{+}+a_{-}{\vec Z}_\mu^{-}$ and using the bootstrap assumptions on $a_{\pm}$, we can reduce the estimate on $\vec{\psi}$ to that on $\vec{\phi}$. Then observe that by the first component of equation \eqref{eq:wpoutline6} (viewed as an equation for $\vec{\phi}$) we can estimate $|{\dot{\phi}}|\lesssim |\partial\phi|+|{\dot{\wp}}|+|\phi|^2$. Finally, the higher order estimates $k\geq 1$ follow by exactly the same argument after differentiating equation \eqref{eq:vecomegarepeat1} and absorbing the time derivatives by the smoothing operator $S$.
\end{proof}
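For completeness, we record the elementary convolution bound underlying the time integration in the proof above: for $\beta>0$ and $\gamma\geq 0$,
\begin{align*}
\begin{split}
\int_0^t e^{-\beta(t-s)}\jap{s}^{-\gamma}\,\mathrm{d} s\lesssim_{\beta,\gamma}\jap{t}^{-\gamma},
\end{split}
\end{align*}
which follows by splitting the integral into the regions $s\leq t/2$, where $e^{-\beta(t-s)}\leq e^{-\beta t/2}$ decays faster than any power of $\jap{t}$, and $s\geq t/2$, where $\jap{s}^{-\gamma}\lesssim\jap{t}^{-\gamma}$ and the exponential factor is integrable.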
For the proof of Proposition~\ref{prop:bootstrappar1} we also need to prove some estimates for $\boldsymbol{\Omega}_i(\phi)$.
\begin{lemma}\label{lem:Omega1}
Under the bootstrap assumptions \eqref{eq:a+trap}--\eqref{eq:Tjphienergyb2} and for ${R_1}$ sufficiently large,
\begin{align*}
\begin{split}
|\boldsymbol{\Omega}_i(\phi)|\lesssim o_{{R_1}}(1)\epsilon\jap{t}^{-\frac{5}{2}+\kappa}+\calO(\wp)\epsilon\jap{t}^{-\frac{5}{2}+\kappa}+\epsilon\jap{t}^{-\frac{9}{2}+\kappa},\quad i\in\{\pm,1,\dots,2n\},
\end{split}
\end{align*}
where $o_{{R_1}}(1)$ denotes a constant that goes to zero as ${R_1}\to \infty$.
\end{lemma}
\begin{proof}
Starting with $\boldsymbol{\Omega}_1(\psi),\dots,\boldsymbol{\Omega}_{2n}(\psi)$, observe that in view of Lemma~\ref{lem:vecomega1} it suffices to estimate $\vec{\Upomega}$. For this we distinguish between the first $n$ components and the last $n$ components of $\vec{\Upomega}$ by representing them as $\Upomega_i$ and $\Upomega_{n+i}$, respectively, with $i=1,\dots,n$. Appealing again to Lemma~\ref{lem:vecomega1} and the bootstrap assumptions \eqref{eq:a+trap}--\eqref{eq:Tjphienergyb2}, the only terms on the right-hand side of \eqref{eq:Upomegarepeat1} which need special treatment are the linear terms in $\vec{\psi}$. To use the bootstrap assumptions to draw this conclusion, note that one can account for the difference between $\vec{\psi}$ and $\phi$ and its derivatives in the same way as in the proof of Lemma~\ref{lem:vecomega1}. For the linear terms, recall from the form of ${\vec N}$ from Section~\ref{sec:interior} (see \eqref{equ:derivation_modul_equ1}) that the only term that is not bounded by $\calO(\wp)\epsilon\jap{t}^{-\frac{5}{2}+\kappa}$ directly by the bootstrap assumptions is $\boldsymbol{\Omega}(\vec{\psi},M{\vec Z}_i)$. But $M{\vec Z}_i$ would be zero if it were not for the cutoff function in the definition of ${\vec Z}_i$. Treating the difference between $\vec{\psi}$ and $\phi$ as before, since every term in the definition of ${\vec Z}_i$ comes with a decay of $r^{-n+1}$ or a factor of $\ell$, our task reduces to estimating ${R_1}^{-1}\angles{\chi_{\{r\simeq {R_1}\}}\partial\phi}{\calO(r^{-n+1})}$. This is then bounded by (recall that $n=5$)
\begin{align}\label{eq:lemOmega1temp1}
\begin{split}
{R_1}^{-1}\|\chi_{\{r\simeq {R_1}\}}r^{-\frac{5}{2}+\kappa}\partial\phi\|_{L^2}\Big(\int_{\{r\simeq {R_1}\}}r^{-5+1+5-2\kappa}\mathrm{d} r\Big)^{\frac{1}{2}}\lesssim \epsilon {R_1}^{-\kappa}\jap{t}^{-\frac{5}{2}+\kappa},
\end{split}
\end{align}
where for the last estimate we have used the bootstrap assumption \eqref{eq:dphiL2b1}. The case of $\Upomega_{n+i}$ is similar, with the difference that now we need to estimate $\boldsymbol{\Omega}(\vec{\psi},M{\vec Z}_{n+i})$ instead of $\boldsymbol{\Omega}(\vec{\psi},M{\vec Z}_i)$. Since $M{\vec Z}_{n+i}$ would be ${\vec Z}_{i}$ if it were not for the cutoffs in the definitions of ${\vec Z}_i$ and ${\vec Z}_{n+i}$, this leads to estimating $\boldsymbol{\Omega}(\vec{\psi},{\vec Z}_i)$ and ${R_1}^{-2}\angles{\chi_{\{r\simeq {R_1}\}}\phi}{\calO(r^{-n+1})}$. The first term was already treated above, and the second term is bounded, using \eqref{eq:dphiL2b1}, as (recall that $n=5$)
\begin{align}\label{eq:lemOmega1temp2}
\begin{split}
{R_1}^{-2}\|\chi_{\{r\simeq {R_1}\}}r^{-\frac{5}{2}+\kappa}\phi\|_{L^2}\Big(\int_{\{r\simeq {R_1}\}}r^{-5+1+5-2\kappa}\mathrm{d} r\Big)^{\frac{1}{2}}\lesssim \epsilon {R_1}^{-1-\kappa}\jap{t}^{-\frac{5}{2}+\kappa}.
\end{split}
\end{align}
This proves the estimates for $\boldsymbol{\Omega}_{i}(\psi)$ and the passage to $\boldsymbol{\Omega}(\phi)$ is again by decomposing $\vec{\psi}$ in terms of $\vec{\phi}$ and $a_{\pm}$ and observing the extra smallness in ${R_1}$ coming from $\angles{{\vec Z}_i}{{\vec Z}_{\pm}}$, $i=1,\dots,2n$. The estimates for $\boldsymbol{\Omega}_{\pm}(\phi)$ using \eqref{eq:bfOmegapmrepeat1} are similar, where again we use the smallness of $\angles{{\vec Z}_i}{{\vec Z}_{\pm}}$, $i=1,\dots,2n$, when estimating the linear contributions of ${\dot{\wp}}$ in $F_{\pm}$.
\end{proof}
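For the reader's convenience, the ${R_1}$ gain in \eqref{eq:lemOmega1temp1} and \eqref{eq:lemOmega1temp2} comes from the following power counting. With $n=5$, the exponent $-5+1+5-2\kappa$ equals $1-2\kappa$, so for small $\kappa>0$,
\begin{align*}
\begin{split}
\Big(\int_{\{r\simeq {R_1}\}}r^{1-2\kappa}\,\mathrm{d} r\Big)^{\frac{1}{2}}\simeq {R_1}^{1-\kappa},
\end{split}
\end{align*}
and the prefactors ${R_1}^{-1}$ and ${R_1}^{-2}$ then produce the factors ${R_1}^{-\kappa}$ and ${R_1}^{-1-\kappa}$ on the right-hand sides of \eqref{eq:lemOmega1temp1} and \eqref{eq:lemOmega1temp2}, respectively.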
We are now ready to prove Proposition~\ref{prop:bootstrappar1}.
\begin{proof}[Proof of Proposition~\ref{prop:bootstrappar1}]
We start with the estimates for ${\dot{\wp}}$ for which we use the first equation in~\eqref{eq:pareqsrepeat1}. As in the proof of Lemma~\ref{lem:vecomega1}, in view of the presence of the smoothing operator $S$, the higher derivatives are treated in the same way as ${\dot{\wp}}$. For ${\dot{\wp}}$, in view of the estimate for $\vec{\omega}$ from Lemma~\ref{lem:vecomega1} and the bootstrap assumptions \eqref{eq:a+trap}--\eqref{eq:Tjphienergyb2}, the only terms on the right-hand side of the equation ${\dot{\wp}}={\vec F}_\wp$ which need special attention are the linear terms in $\vec{\psi}$. Here the difference between $\vec{\psi}$ and $\phi$ and its derivatives is accounted for in the same way as in the proof of Lemma~\ref{lem:vecomega1}. Turning to these linear terms, recall that as above (see \eqref{equ:derivation_modul_equ1}) they are given by $\boldsymbol{\Omega}(\vec{\psi},M{\vec Z}_i)$ and $\boldsymbol{\Omega}(\vec{\psi},M{\vec Z}_{n+i})$, $i=1,\dots,n$. But, these can be estimated exactly as in \eqref{eq:lemOmega1temp1} and \eqref{eq:lemOmega1temp2}. The estimate for $a_{-}$ is similar, where now we use the last equation in \eqref{eq:pareqsrepeat1} which leads to the representation
\begin{align*}
\begin{split}
a_{-}(t)=a_{-}(0)e^{-\mu t}+\int_0^te^{-\mu t}S(e^{\mu s}F_{-}(s))\mathrm{d} s.
\end{split}
\end{align*}
The first term already has better decay than we need. For the second term we can again consider the linear and quadratic and higher order contributions of $F_{-}$ separately, and gain smallness in ${R_1}$ for the linear terms from the smallness of $\angles{{\vec Z}_i}{{\vec Z}_{\pm}}$. The higher derivatives are treated similarly, where in the case where derivatives fall on $SF_{-}$ we can absorb them in the smoothing operator $S$. To estimate $a_{+}$, we use the triangle inequality and the bootstrap assumption \eqref{eq:a+trap} to bound,
\begin{align*}
\begin{split}
|a_{+}(t)|\lesssim \delta_{\wp}t^{-3}+|e^{\mu t}S(e^{-\mu t}F_{+})|.
\end{split}
\end{align*}
The desired estimate then follows by estimating $F_{+}$ using similar considerations as for $F_{-}$. The higher derivative estimates for $a_{+}$ also follow similarly by using equation \eqref{eq:pareqsrepeat1} to express ${\dot{a}}_{+}$ algebraically in terms of $a_{+}$ and $F_{+}$ as ${\dot{a}}_{+}=\mu a_{+}-e^{\mu t}S(e^{-\mu t}F_{+})$.
\end{proof}
In addition to Proposition~\ref{prop:bootstrappar1}, we will also need some integrated estimates on $\boldsymbol{\Omega}_i({\bfT}^k\phi)$, $a_\pm^{(k)}$, and ${\dot{\wp}}^{(k)}$ for our local-energy decay estimate. These estimates are the content of the next lemma.
\begin{lemma}\label{lem:OmegaiTkphi}
Under the bootstrap assumptions \eqref{eq:a+trap}--\eqref{eq:Tjphienergyb2}, if ${R_1}$ is sufficiently large, then for $k=1,2$, $j\geq 0$, and $i\in\{\pm,1,\dots,2n\}$,
\begin{align}\label{eq:OmegaiTkphi}
\begin{split}
&\|\boldsymbol{\Omega}_i({\bfT}^k\phi)\|_{L^2([t_1,t_2])}+\|{\dot{a}}_{\pm}^{(k+j)}\|_{L^2([t_1,t_2])}+\|{\dot{\wp}}^{(k+1+j)}\|_{L^2([t_1,t_2])}\\
&\lesssim (\delta_\wp+o_{{R_1}}(1)+\calO(\wp))\epsilon \jap{t_1}^{-3}+\epsilon\jap{t_1}^{-\frac{9}{2}+2\kappa}\\
&\quad+(o_{{R_1}}(1)+\calO(\wp))(\|{\bfT}^k\phi\|_{LE([t_1,t_2])}+\sup_{t_1\leq \tau\leq t_2}\|{\bfT}^k\phi\|_{E(\Sigma_\tau)}),
\end{split}
\end{align}
where $o_{{R_1}}(1)$ denotes a constant that goes to zero as ${R_1}\to \infty$.
\end{lemma}
\begin{proof}
We start with the estimate for $\boldsymbol{\Omega}_i({\bfT}^k\psi)$, $i\in\{1,\dots,n\}$. This is achieved by differentiating the expression $\boldsymbol{\Omega}_i(\psi)=\omega_i+\Upomega_i$. The desired estimate is then derived similarly to the proof of Lemma~\ref{lem:Omega1}. Indeed, using the bootstrap assumptions to estimate the quadratic and higher order terms by $\epsilon^2t_1^{-\frac{9}{2}+\kappa}$, we see that except for the contribution of $\boldsymbol{\Omega}({\bfT}^k\vec{\psi},M{\vec Z}_i)$ the remaining terms are bounded by
\begin{align*}
\begin{split}
\calO(\wp)\|{\bfT}^k\phi\|_{L^2([t_1,t_2])}+\calO(\wp)\|{\dot{a}}_{\pm}^{(k)}\|_{L^2([t_1,t_2])}+\calO(\wp)\|{\dot{\wp}}^{(k+1)}\|_{L^2([t_1,t_2])}.
\end{split}
\end{align*}
Here as usual we have expressed $\vec{\psi}$ in terms of $\phi$ and its derivatives as well as $a_{\pm}$ and ${\dot{\wp}}$. Note that the last two terms above can be absorbed into the left-hand side of \eqref{eq:OmegaiTkphi}. Similarly, using the smallness of $M{\vec Z}_i$, $i=1,\dots,n$, we can estimate the contribution of $\boldsymbol{\Omega}({\bfT}^k\vec{\psi},M{\vec Z}_i)$ by
\begin{align*}
\begin{split}
o_{{R_1}}(1)\|{\bfT}^k\phi\|_{L^2([t_1,t_2])}+o_{{R_1}}(1)\|{\dot{a}}_{\pm}^{(k)}\|_{L^2([t_1,t_2])}+o_{{R_1}}(1)\|{\dot{\wp}}^{(k+1)}\|_{L^2([t_1,t_2])}.
\end{split}
\end{align*}
Note that even though ${\tilde{S}}$ in \eqref{eq:Upomegarepeat1} is not a smoothing operator, since only $\vec{\psi}$ and not ${\bfT}\vec{\psi}$ appears in ${\vec N}$ and ${\vec F}_\omega$ (and similarly in ${\tilde{F}}_{\pm}$ in the discussion for $\boldsymbol{\Omega}_{\pm}({\bfT}^k\phi)$ below) and spatial derivatives can be integrated by parts to the lower order terms, there is no loss of regularity in these estimates. The passage from $\boldsymbol{\Omega}_i({\bfT}^k\psi)$ to $\boldsymbol{\Omega}_i({\bfT}^k\phi)$ follows as usual. The estimate for $\boldsymbol{\Omega}_{n+i}({\bfT}^k\phi)$ now also follows from that of $\boldsymbol{\Omega}_{i}({\bfT}^k\phi)$ in the same way as in the proof of Lemma~\ref{lem:Omega1}. The estimates for $\boldsymbol{\Omega}_{\pm}({\bfT}^k\phi)$ are proved similarly where now we use the differentiated equations \eqref{eq:bfOmegapmrepeat2} and \eqref{eq:bfOmegapmrepeat3}. Once the estimates for $\boldsymbol{\Omega}_i({\bfT}^k\phi)$, $i\in\{\pm,1,\dots,2n\}$, are established, the estimates for ${\dot{a}}_{-}^{(k+j)}$ and ${\dot{\wp}}^{(k+1+j)}$ follow as in the proof of Proposition~\ref{prop:bootstrappar1} by differentiating the corresponding equations in \eqref{eq:pareqsrepeat1}. As usual, any excess derivatives can be absorbed by the smoothing operator $S$. The argument for ${\dot{a}}_{+}$ is more delicate, and it is here that the improved decay in \eqref{eq:a+trap} comes in. Differentiating the differential equation for $a_{+}$ from \eqref{eq:pareqsrepeat1}, and using the notation ${\dot{F}}_{+}=\frac{\mathrm{d}}{\mathrm{d} t}F_{+}$, gives
\begin{align*}
\begin{split}
\frac{\mathrm{d}}{\mathrm{d} t}(e^{-\mu t}{\dot{a}}_{+})=-S(e^{-\mu t}{\dot{F}}_{+}).
\end{split}
\end{align*}
Integrating from the final bootstrap time $\tau_f$ and using the algebraic relation ${\dot{a}}_{+}=\mu a_{+}-e^{\mu t}S(e^{-\mu t}F_{+})$ (which is a rewriting of the differential equation for $a_{+}$), for any $t\leq \tau_f$ we get
\begin{align}\label{eq:higheraplustemp1}
\begin{split}
{\dot{a}}_{+}(t)=(\mu a_{+}(\tau_f)-e^{\mu\tau_f}S(e^{-\mu \tau_f}F_{+}(\tau_f)))e^{-\mu(\tau_f-t)}-\int_{t}^{\tau_f}e^{-\mu (s-t)}(e^{\mu s}S(e^{-\mu s}{\dot{F}}_{+}(s)))\mathrm{d} s.
\end{split}
\end{align}
The desired estimate for ${\dot{a}}_{+}$ now follows from \eqref{eq:a+trap} and an application of Schur's test with the kernel $e^{-\mu(s-t)}\chi_{s\geq t}$, where we use similar considerations as before to bound ${\dot{F}}_{+}$. The higher order estimates for $a_{+}$ are proved similarly by differentiating equation \eqref{eq:higheraplustemp1}.
\end{proof}
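For convenience, we record the elementary form of Schur's test used in the last step; this is the standard statement, included only as a reminder. Since the kernel $K(t,s)=e^{-\mu(s-t)}\chi_{\{s\geq t\}}$ satisfies
\begin{align*}
\begin{split}
\sup_{t}\int_t^{\tau_f} e^{-\mu(s-t)}\mathrm{d} s\leq \mu^{-1}\qquad{\ \ \text{and} \ \ }\qquad \sup_{s}\int_0^{s} e^{-\mu(s-t)}\mathrm{d} t\leq \mu^{-1},
\end{split}
\end{align*}
Schur's test yields
\begin{align*}
\begin{split}
\Big\|\int_t^{\tau_f} e^{-\mu(s-t)}g(s)\mathrm{d} s\Big\|_{L^2([t_1,t_2])}\leq \mu^{-1}\|g\|_{L^2([t_1,\tau_f])},
\end{split}
\end{align*}
which is how the integral term in \eqref{eq:higheraplustemp1} is placed in $L^2([t_1,t_2])$.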
\section{Local Energy Decay}\label{sec:LED}
In this section we prove linear energy and local energy decay estimates. The relatively straightforward nonlinear applications are postponed to the next section after the nonlinearity and source terms of the equation are calculated more carefully.
For any $\tau_1<\tau_2$, let
\begin{align*}
\begin{split}
\Sigma_{\tau_1}^{\tau_2}= \bigcup_{\tau\in[\tau_1,\tau_2]}\Sigma_\tau,
\end{split}
\end{align*}
and consider two functions $\uppsi,f:\Sigma_0^T\to \mathbb R$, satisfying
\begin{align}\label{eq:LEDlinearmodel1}
\begin{split}
\mathcal P\uppsi=f.
\end{split}
\end{align}
We make the following assumptions on $\mathcal P$, which are consistent with the linear operator arising in our problem: In the global non-geometric coordinates from Section~\ref{sec:GNGC}, $\mathcal P$ satisfies the properties stated in Remark~\ref{rem:nongeomglobal1}, while $\mathcal P_0$ in Remark~\ref{rem:nongeomglobal1} takes the form given in Section~\ref{sec:GGC} in the global geometric coordinates defined there. In the interior coordinates from Section~\ref{sec:INGC} and the exterior coordinates from Section~\ref{sec:ENGC}, we assume that $\mathcal P$ takes the forms \eqref{eq:LEDcalPint1} and \eqref{eq:abstractexteq1}, \eqref{eq:callP2}, \eqref{eq:ErrcallP1}, \eqref{eq:extgeomlin1}, respectively. We use $K_{\mathrm{int}}$ and $K_{\mathrm{ext}}$ to denote two large compact regions in $\Sigma_0^T$ with $K_{\mathrm{ext}}\subseteq K_{\mathrm{int}}$, such that the coordinates $(t,\rho,\omega)$ (Section~\ref{sec:INGC}) are defined in a neighborhood $U_{\mathrm{int}}$ of $K_{\mathrm{int}}$, and the coordinates $(\tau,\rho,\theta)$ (Section~\ref{sec:ENGC}) are defined in a neighborhood $U_{\mathrm{ext}}$ of $\overline{K_{\mathrm{ext}}^c}$. We assume that ${R_1}$ in Section~\ref{sec:interior} (see for instance Section~\ref{subsec:modulationeqs}) is such that the region $\{\rho\leq {R_1}\}$ is much larger than $K_{\mathrm{int}}$.
For any $\tau$ the energy norm of $\uppsi$ on $\Sigma_\tau$ is defined by
\begin{align}\label{eq:standardenergydef1}
\begin{split}
E[\uppsi](\tau)\equiv\|\uppsi\|_{E(\Sigma_\tau)}^2&:=\int_{\Sigma_\tau}\chi_{\leq {\tilde{R}}}(|\partial\uppsi|^2+\jap{\rho}^{-2}|\uppsi|^2)\mathrm{d} V\\
&\quad+\int_{\Sigma_\tau}\chi_{\geq {\tilde{R}}}(|\partial_\Sigma \uppsi|^2+r^{-2}|T\uppsi|^2+r^{-2}|\uppsi|^2) \mathrm{d} V.
\end{split}
\end{align}
Here $\chi_{\geq {\tilde{R}}}$ is a cutoff function supported in $\mathcal C_{\mathrm{hyp}}$, for some fixed large ${\tilde{R}}\gg1$, and $\chi_{\leq {\tilde{R}}}=1-\chi_{\geq {\tilde{R}}}$. The local energy norm on any (space-time) region $\mathcal R$ of the domain of definition of $\uppsi$ is defined by
\begin{align*}
\begin{split}
\|\uppsi\|_{LE(\mathcal R)}^2=\int_{\mathcal R}\chi_{\leq {\tilde{R}}} ((\rho{\tilde{\psi}})^2+(\rho\partial {\tilde{\psi}})^2+( \partial_\rho{\tilde{\psi}})^2)\mathrm{d} V+\int_{\mathcal R}\chi_{\geq {\tilde{R}}}( r^{-3-\alpha}\uppsi^2+r^{-1-\alpha}(\partial\uppsi)^2)\mathrm{d} V.
\end{split}
\end{align*}
Here $0<\alpha\ll 1$ is a fixed small positive number, and $\rho$ and $r$ are the radial coordinates introduced in Section~\ref{sec:coordinates}. The dual local energy norm is defined by
\begin{align*}
\begin{split}
\|f\|_{LE^\ast(\mathcal R)}^2=\int_{\mathcal R}\chi_{\leq {\tilde{R}}} f^2\mathrm{d} V+\int_{\mathcal R}\chi_{\geq {\tilde{R}}}r^{1+\alpha}f^2\mathrm{d} V.
\end{split}
\end{align*}
We use the notation
\begin{align*}
\begin{split}
\|\uppsi\|_{L^pL^q(\Sigma_{\tau_1}^{\tau_2})}=\Big(\int_{\tau_1}^{\tau_2}\|\uppsi\|_{L^q(\Sigma_\tau)}^p\mathrm{d} \tau\Big)^{\frac{1}{p}},
\end{split}
\end{align*}
with the usual modification when $p=\infty$. When $p=q$ we simply write $\|\uppsi\|_{L^p(\Sigma_{\tau_1}^{\tau_2})}$, and similarly with $\Sigma_{\tau_1}^{{\tau_2}}$ replaced by any other region. We also occasionally use the notation
\begin{align*}
\begin{split}
\angles{\uppsi_1}{\uppsi_2}\equiv\angles{\uppsi_1}{\uppsi_2}_{\Sigma_{\tau}}=\int_{\Sigma_{\tau}}\uppsi_1\uppsi_2\,\mathrm{d} V_{\Sigma_\tau}.
\end{split}
\end{align*} Since our focus in this section is on linear estimates, we introduce ${\boldsymbol{\Upomega}}_k$ as a linear proxy for $\boldsymbol{\Omega}_k$ (which was defined nonlinearly in terms of $\vec{\psi}$ and $\vec{\phi}$ in Section~\ref{sec:interior}):
\begin{align*}
\begin{split}
&{\boldsymbol{\Upomega}}_k(\uppsi)(c)=-\int_{\{\uptau= c\}}Z_kn^\alpha\partial_\alpha \uppsi \sqrt{|h|}\mathrm{d} y,\quad {\boldsymbol{\Upomega}}_{n+k}(\uppsi)(c)=\int_{\{\uptau= c\}}\uppsi Z_kn^\alpha\partial_\alpha {\tilde{\uptau}} \sqrt{|h|}\mathrm{d} y,\quad k=1,\dots,n,\\
&{\boldsymbol{\Upomega}}_{\mu}^{\pm}(\uppsi)(c)=\int_{\{\uptau= c\}}(\pm\mu \uppsi Z_\mu \partial_\alpha {\tilde{\uptau}}-Z_\mu \partial_\alpha \uppsi)n^\alpha \sqrt{|h|}\mathrm{d} y.
\end{split}
\end{align*}
Here $n$ denotes the normal to $\Sigma_c$ with respect to $h$, and $y$ denotes the spatial variables (say $(\uprho,\uptheta)$) on $\Sigma_c$. Our goal in this section is to prove the following two estimates. The first is the energy estimate.
\begin{proposition}\label{prop:energyestimate1}
Suppose $\uppsi$ satisfies $\mathcal P\uppsi=f$, and $\sum_{k\in\{\pm\mu,1,\dots,2n\}}|{\boldsymbol{\Upomega}}_k(\uppsi(t))|\leq \delta\|\uppsi\|_{E(\Sigma_t)}$. Then if $\delta$ is sufficiently small, for any $t_1<t_2$ and $\varepsilon\ll 1$, $\uppsi$ satisfies the estimates
\begin{align}\label{eq:linenergyestimate1}
\begin{split}
&\sup_{t_1\leq t\leq t_2}\|\uppsi\|_{E(\Sigma_{t})}\lesssim \|\uppsi\|_{E(\Sigma_{t_1})}+C_\varepsilon \|f\|_{L^1L^2(\Sigma_{t_1}^{t_2})},\\
&\sup_{t_1\leq t\leq t_2}\|\uppsi\|_{E(\Sigma_{t})}\lesssim \|\uppsi\|_{E(\Sigma_{t_1})}+C_\varepsilon \|f\|_{LE^\ast(\Sigma_{t_1}^{t_2})}+\|{\bfT} f\|_{LE^\ast(\Sigma_{t_1}^{t_2})}\\
&\phantom{\sup_{t_1\leq t\leq t_2}\|\uppsi\|_{E(\Sigma_{t})}\lesssim }+\|f\|_{L^\infty L^2(\Sigma_{t_1}^{t_2})}+\varepsilon \|\uppsi\|_{LE(\Sigma_{t_1}^{t_2})}.
\end{split}
\end{align}
\end{proposition}
The second estimate we will prove in this section is a local energy decay (LED) estimate.
\begin{proposition}\label{prop:LED1}
Suppose $\uppsi$ satisfies $\mathcal P\uppsi=f$, and $\sum_{k\in\{\pm\mu,1,\dots,2n\}}|{\boldsymbol{\Upomega}}_k(\uppsi(t))|\leq \delta\|\uppsi\|_{E(\Sigma_t)}$. Then for any $t_1<t_2$ and $\varepsilon\ll 1$, $\uppsi$ satisfies the estimates
\begin{align*}
\begin{split}
&\|\uppsi\|_{LE(\Sigma_{t_1}^{t_2})}\lesssim \sum_{k\in\{\pm\mu,1,\dots,2n\}}\|{\boldsymbol{\Upomega}}_k(\uppsi)\|_{L^2([t_1,t_2])}+\|\uppsi\|_{E(\Sigma_{t_1})}+\|f\|_{L^1L^2(\Sigma_{t_1}^{t_2})},\\
&\|\uppsi\|_{LE(\Sigma_{t_1}^{t_2})}\lesssim \sum_{k\in\{\pm\mu,1,\dots,2n\}}\|{\boldsymbol{\Upomega}}_k(\uppsi)\|_{L^2([t_1,t_2])}+\|\uppsi\|_{E(\Sigma_{t_1})}+\|f\|_{LE^\ast(\Sigma_{t_1}^{t_2})}\\
&\phantom{\|\uppsi\|_{LE(\Sigma_{t_1}^{t_2})}\lesssim}+\|{\bfT} f\|_{LE^\ast(\Sigma_{t_1}^{t_2})}+\|f\|_{L^\infty L^2(\Sigma_{t_1}^{t_2})}.
\end{split}
\end{align*}
\end{proposition}
\begin{remark}\label{rem:Omegalinear1}
As mentioned earlier, ${\boldsymbol{\Upomega}}_k$ is a linear substitute for $\boldsymbol{\Omega}_k$. It is easy to see from our proofs that in Propositions~\ref{prop:energyestimate1} and~\ref{prop:LED1} one can replace ${\boldsymbol{\Upomega}}_k$ by any other choice $\tilde{{\boldsymbol{\Upomega}}}_k$ as long as $\|\tilde{{\boldsymbol{\Upomega}}}_k(\uppsi)-{\boldsymbol{\Upomega}}_k(\uppsi)\|_{L^2([t_1,t_2])}$ is bounded by a small multiple of the $LE$ norm of $\uppsi$. In our nonlinear applications we will use this observation to apply these propositions with ${\boldsymbol{\Upomega}}_k$ replaced by $\boldsymbol{\Omega}_k$. The condition $|\boldsymbol{\Omega}_k(\uppsi(t))|\leq \delta\|\uppsi\|_{E(\Sigma_t)}$ will always be satisfied in our applications as a consequence of the orthogonality conditions. See for instance the arguments in Lemmas~\ref{lem:Omega1} and~\ref{lem:OmegaiTkphi}.
\end{remark}
\begin{remark}\label{rem:fLED1}
The proof of Proposition~\ref{prop:LED1} requires several multiplier identities. In applications, where we consider the equation after commuting derivatives, we may want to perform some integration by parts in the term $fQ\uppsi$, where $Q\uppsi$ denotes the multiplier, before placing $f$ in $LE^\ast(\Sigma_{t_1}^{t_2})$ or $L^1L^2(\Sigma_{t_1}^{t_2})$. This is the case for instance where $f$ is of the form $\partial_\Sigma^2g$, where $g$ denotes the unknown with fewer commuted derivatives. While such integration by parts manipulations are not explicitly contained in the statement of Proposition~\ref{prop:LED1}, they can be easily incorporated by an inspection of the proof. Specifically, they can be performed in the treatment of equation \eqref{eq:phifardef1} in Lemma~\ref{lem:phifar1}.
\end{remark}
\begin{remark}
The explanation for the second estimate in \eqref{eq:linenergyestimate1} is the same as for the corresponding estimate in Proposition~\ref{prop:LEDproduct}. See Remark~\ref{rem:LEDproduct1}.
\end{remark}
We start with the proof of Proposition~\ref{prop:energyestimate1}.
\begin{proof}[Proof of Proposition~\ref{prop:energyestimate1}]
Recall from Remark~\ref{rem:nongeomglobal1}, part (3), that in the global coordinates $(\uptau,\uprho,\upomega)$
\begin{align*}
\begin{split}
\mathcal P\uppsi=\frac{1}{\sqrt{|{\bf h}|}}\partial_\mu (\sqrt{|{\bf h}|}({\bf h}^{-1})^{\mu\nu}\partial_\nu\uppsi)+V\uppsi+{\tilde{\boldsymbol{\calP}}},
\end{split}
\end{align*}
where ${\tilde{\boldsymbol{\calP}}}$ has the structure given in Remark~\ref{rem:nongeomglobal1}, and $|\partial_\uptau {\bf h}|\lesssim \epsilon \uptau^{-\gamma}$ for some $\gamma>1$. We multiply equation \eqref{eq:LEDlinearmodel1} by $\partial_\uptau\uppsi\sqrt{|{\bf h}|}$ and integrate. Note that the contribution of $f\partial_\uptau\uppsi$ can be estimated by the right-hand side of each estimate in \eqref{eq:linenergyestimate1} plus a small multiple of the corresponding left-hand side, as in the proof of Proposition~\ref{prop:LEDproduct}. The main term in $\mathcal P\uppsi \partial_\uptau\uppsi\sqrt{|{\bf h}|}$ is
\begin{align}\label{eq:linenergyestimatetemp1}
\begin{split}
(\mathcal P-{\tilde{\boldsymbol{\calP}}})\uppsi\partial_\uptau\uppsi \sqrt{|{\bf h}|}&=\partial_\mu\big(\sqrt{|{\bf h}|}({\bf h}^{-1})^{\mu\nu}\partial_\nu\uppsi\partial_\uptau\uppsi\big)+\frac{1}{2}\partial_\uptau\big(\sqrt{|{\bf h}|}V\uppsi^2-\sqrt{|{\bf h}|}({\bf h}^{-1})^{\mu\nu}\partial_\mu\uppsi\partial_\nu\uppsi\big)\\
&\quad+\frac{1}{2}\partial_\uptau(\sqrt{|{\bf h}|}({\bf h}^{-1})^{\mu\nu})\partial_\mu\uppsi\partial_\nu\uppsi-\frac{1}{2}\partial_\uptau(\sqrt{|{\bf h}|}V)\uppsi^2.
\end{split}
\end{align}
In view of the assumption on ${\boldsymbol{\Upomega}}_k(\uppsi)$, the first line gives us the desired control of the energy of $\uppsi$. Indeed, we can write $\uppsi=\uppsi^\perp+\sum_i\angles{\uppsi}{{\underline Z}_i}_{\Sigma_\tau}{\underline Z}_i$ where ${\underline Z}_i$ denote truncated eigenfunctions $\chi \varphi_i$ of $\Delta_{\underline{\calC}}+V$ supported in some region $\{\uprho\leq \uprho_1\}$ with $\uprho_1$ large (specifically, $\uprho_1\geq {R_1}$), normalized to have $L^2$ norm equal to one, and where $\angles{\uppsi}{{\underline Z}_i}_{\Sigma_\tau}=\int_{\Sigma_\tau} \uppsi {\underline Z}_i \mathrm{d} V$. The first line of \eqref{eq:linenergyestimatetemp1} then bounds the energy of $\uppsi^\perp$, and the energy of $\uppsi$ can be bounded in terms of that of $\uppsi^\perp$ using the assumption on ${\boldsymbol{\Upomega}}_k(\uppsi)$. The second line of \eqref{eq:linenergyestimatetemp1} can be absorbed by a small multiple of the energy in view of the $\uptau$ decay of $\partial_\uptau(\sqrt{|{\bf h}|}({\bf h}^{-1})^{\mu\nu})$. Here note that in view of the form of ${\bf h}$ from Remark~\ref{rem:nongeomglobal1} the terms in $\partial_\uptau(\sqrt{|{\bf h}|}({\bf h}^{-1})^{\mu\nu})$ where at least one of $\mu,\nu$ is $\uptau$, in particular $\partial_\uptau(\sqrt{|{\bf h}|}({\bf h}^{-1})^{\uptau\uprho})$, come with extra $\uprho$ decay which allows us to bound the corresponding errors in the exterior region by the energy. Finally, the contribution of ${\tilde{\boldsymbol{\calP}}}\uppsi$ can again be bounded by a small multiple of the energy in view of the $\uptau$ decay of the coefficients of ${\tilde{\boldsymbol{\calP}}}$. Here the only term in ${\tilde{\boldsymbol{\calP}}}$ that needs special attention is ${\mathring a} (\partial_\uprho+\frac{n-1}{2\uprho})\partial_\uptau\uppsi$ which, after integration by parts, yields
\begin{align*}
\begin{split}
\frac{1}{2}\big(\partial_\uprho({\mathring a} \sqrt{|{\bf h}|})-\frac{n-1}{\uprho}{\mathring a}\sqrt{|{\bf h}|}\big)(\partial_\uptau\uppsi)^2.
\end{split}
\end{align*}
Since $|\partial_\uprho({\mathring a} \sqrt{|{\bf h}|})-\frac{n-1}{\uprho}{\mathring a}\sqrt{|{\bf h}|}|\lesssim \uptau^{-\gamma}\uprho^{-2}$ for large $\uprho$, this contribution can be bounded by the energy as well.
\end{proof}
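For orientation, we note the flat-space model behind the identity \eqref{eq:linenergyestimatetemp1}: for the Minkowski wave operator $\Box=-\partial_t^2+\Delta$ one has the classical computation
\begin{align*}
\begin{split}
\Box\uppsi\,\partial_t\uppsi=\nabla\cdot(\partial_t\uppsi\nabla\uppsi)-\frac{1}{2}\partial_t\big((\partial_t\uppsi)^2+|\nabla\uppsi|^2\big),
\end{split}
\end{align*}
so integrating over a time slab and discarding the spatial divergence gives conservation of the energy $\frac{1}{2}\int((\partial_t\uppsi)^2+|\nabla\uppsi|^2)$. The terms in the last line of \eqref{eq:linenergyestimatetemp1}, as well as the contribution of ${\tilde{\boldsymbol{\calP}}}$, measure the failure of this exact conservation for $\mathcal P$.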
We next turn to the proof of Proposition~\ref{prop:LED1}. The proofs of the two estimates in this proposition are different only in which energy estimate from Proposition~\ref{prop:energyestimate1} we use to bound the fluxes that come up in the integration by parts, so we give the proof only for the first estimate. Our starting point is a local energy decay estimate allowing for an $L^2$ error in a bounded region. For this we need to define some cutoff functions and auxiliary potentials. We fix $\chi_1\equiv \chi_1({\tilde{\uprho}})$ to be a smooth, non-negative cutoff function, non-decreasing in $|{\tilde{\uprho}}|$, supported in $U_{\mathrm{ext}}$, that is equal to one on $\overline{K_{\mathrm{ext}}^c}$ and satisfies $({\mathrm{sgn}} {\tilde{\uprho}}) \frac{\mathrm{d}}{\mathrm{d} {\tilde{\uprho}}} \chi_{1} \geq 0$. Similarly, $\chi_2\equiv\chi_2(\rho)$ is a smooth, non-negative cutoff supported in $U_{\mathrm{int}}$ that is equal to one on $K_{\mathrm{int}}$. Let $V_{\mathrm{temp}}\equiv V_{\mathrm{temp}}(\rho)$ be a compactly supported, non-negative, smooth potential such that
\begin{align*}
\begin{split}
{\mathrm{supp}}\,V_{\mathrm{temp}} \subseteq U_{\mathrm{int}},\qquad ({\mathrm{sgn}}\rho)\frac{\mathrm{d}}{\mathrm{d}\rho}V_{\mathrm{temp}}(\rho)\leq0~\mathrm{in~}U_{\mathrm{int}},\qquad ({\mathrm{sgn}}\rho)\frac{\mathrm{d}}{\mathrm{d}\rho}V_{\mathrm{temp}}(\rho)\leq -v_0<0~\mathrm{in~} K_{\mathrm{int}},
\end{split}
\end{align*}
and for some large constant $M$ to be fixed later, let
\begin{align}\label{eq:Vfardef1}
\begin{split}
V_{\mathrm{far}}:=MV_{\mathrm{temp}}.
\end{split}
\end{align}
Let $\uppsi_{\mathrm{far}}$ be the solution to
\begin{align}\label{eq:phifardef1}
\begin{split}
(\mathcal P-V_{\mathrm{far}})\uppsi_{\mathrm{far}} = f,\qquad (\uppsi_{\mathrm{far}},\partial_\uptau\uppsi_{\mathrm{far}})\vert_{\Sigma_{t_1}}=(\uppsi,\partial_\uptau\uppsi)\vert_{\Sigma_{t_1}},
\end{split}
\end{align}
and $\uppsi_{\mathrm{near}}:=\uppsi-\uppsi_{\mathrm{far}}$. Note that $\uppsi_{\mathrm{near}}$ satisfies
\begin{align}\label{eq:calPphinear1}
\begin{split}
\mathcal P\uppsi_{\mathrm{near}}=-V_{\mathrm{far}}\uppsi_{\mathrm{far}},\qquad (\uppsi_{\mathrm{near}},\partial_\uptau\uppsi_{\mathrm{near}})\vert_{\Sigma_{t_1}}=(0,0).
\end{split}
\end{align}
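Indeed, since $\mathcal P$ is linear, subtracting \eqref{eq:phifardef1} from \eqref{eq:LEDlinearmodel1} gives
\begin{align*}
\begin{split}
\mathcal P\uppsi_{\mathrm{near}}=\mathcal P\uppsi-\mathcal P\uppsi_{\mathrm{far}}=f-(f+V_{\mathrm{far}}\uppsi_{\mathrm{far}})=-V_{\mathrm{far}}\uppsi_{\mathrm{far}},
\end{split}
\end{align*}
and the data of $\uppsi_{\mathrm{near}}$ on $\Sigma_{t_1}$ vanish by the choice of data in \eqref{eq:phifardef1}.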
\begin{lemma}\label{lem:phifar1}
$\uppsi_{\mathrm{far}}$ and $\uppsi_{\mathrm{near}}$ as defined above satisfy
\begin{align}\label{eq:lemphifarbound1}
\begin{split}
\|\uppsi_{\mathrm{far}}\|_{LE(\Sigma_{t_1}^{t_2})}\lesssim \|\uppsi\|_{E(\Sigma_{t_1})}+ \|f\|_{L^1L^2(\Sigma_{t_1}^{t_2})},
\end{split}
\end{align}
and
\begin{align}\label{eq:lemphifarbound2}
\begin{split}
\|\uppsi_{\mathrm{near}}\|_{LE(\Sigma_{t_1}^{t_2})}&\lesssim \|\uppsi\|_{E(\Sigma_{t_1})}+ \|f\|_{L^1L^2(\Sigma_{t_1}^{t_2})}+\|\uppsi_{\mathrm{near}}\|_{L^2(K_{\mathrm{ext}})}.
\end{split}
\end{align}
\end{lemma}
\begin{proof}
The proof consists of two multiplier arguments, one in the exterior and one in the interior. The proofs for the estimates for $\uppsi_{\mathrm{near}}$ and $\uppsi_{\mathrm{far}}$ are similar so we carry out the details for $\uppsi_{\mathrm{far}}$ which is slightly more involved.
To simplify notation, we write $\upphi$ for $\uppsi_{\mathrm{far}}$ and $U$ for $V_{\mathrm{far}}-V$ in the remainder of the proof. In addition to the smooth cutoffs $\chi_1$ and $\chi_2$ introduced above, we write $\chi_A$ to denote an appropriate cutoff supported in a set $A$. Starting with the exterior, we use \eqref{eq:callP2} and \eqref{eq:ErrcallP1} to write the equation there as (to be precise, we have used the conjugation \eqref{eq:varphiphi1} and $\upphi$ corresponds to the conjugated variable, but the estimates transfer easily between the conjugated and original variables)
\begin{align}\label{eq:psifarLEDtemp0}
\begin{split}
\Box_m\upphi-U\upphi+{\mathrm{Err}}_\mathcal P(\upphi)=f.
\end{split}
\end{align}
Let $Q$ be the multiplier defined in the $({\tilde{\uptau}},{\tilde{\uprho}},{\tilde{\uptheta}})$ coordinates, relative to the parameter values at $t_2$, as
\begin{align}\label{eq:psifarLEDtemp0.5}
\begin{split}
Q=-2\beta_1(\partial_{\tilde{\uprho}}-\partial_{\tilde{\uptau}})+(\beta_1'+\frac{n-1}{{\tilde{\uprho}}}\beta_1),
\end{split}
\end{align}
where (here $\chi_1$ is as defined before the statement of Lemma~\ref{lem:phifar1})
\begin{align*}
\begin{split}
\beta_1\equiv \beta_1({\tilde{\uprho}}) = (\frac{{\tilde{\uprho}}}{\jap{{\tilde{\uprho}}}}-\frac{\delta {\tilde{\uprho}}}{\jap{{\tilde{\uprho}}}^{1+\alpha}})\chi_1({\tilde{\uprho}}),
\end{split}
\end{align*}
for suitable small constants $\alpha$ and $\delta$. We multiply equation \eqref{eq:psifarLEDtemp0} by $Q\upphi |m|^{\frac{1}{2}}$. Note that, except for the cutoff $\chi_1$, this choice of $Q$ is the standard multiplier for the proof of LED on Minkowski space near each asymptotically flat end ${\tilde{\uprho}} \to \pm \infty$. As usual, for concreteness, we focus on the end ${\tilde{\uprho}} \to \infty$. The main contribution comes from $\Box_m-U$. By direct computation, for any vectorfield $a^\mu\partial_\mu$ (here $i,j$ denote the tangential indices, corresponding to $\uprho$ and $\uptheta$),
\begin{align*}
(\Box_m-U) \upphi a^\lambda\partial_\lambda\upphi |m|^{\frac{1}{2}}
&=-\frac{1}{2}\partial_\uptau(|m|^{\frac{1}{2}}(m^{-1})^{ij}a^\uptau \partial_i\upphi\partial_j\upphi-|m|^{\frac{1}{2}}U a^\uptau \upphi^2)-\frac{1}{2}\partial_\mu(U a^\mu|m|^{\frac{1}{2}})\upphi^2\\
&\quad+\partial_i((m^{-1})^{i\nu}|m|^{\frac{1}{2}}a^j\partial_\nu\upphi \partial_j\upphi-\frac{1}{2}(m^{-1})^{\mu\nu}|m|^{\frac{1}{2}}a^i\partial_\mu\upphi\partial_\nu\upphi+\frac{1}{2}|m|^{\frac{1}{2}}U a^i \upphi^2)\\
&\quad+\frac{1}{2}\partial_\lambda((m^{-1})^{\mu\nu}|m|^{\frac{1}{2}}a^\lambda)\partial_\mu\upphi\partial_\nu\upphi-(m^{-1})^{\mu\nu}|m|^{\frac{1}{2}}(\partial_\mu a^\lambda)\partial_\nu\upphi\partial_\lambda\upphi,
\end{align*}
and for any scalar function $S$,
\begin{align*}
(\Box_m-U)\upphi S\upphi |m|^{\frac{1}{2}}
&=\partial_\uptau(|m|^{\frac{1}{2}}(m^{-1})^{\uptau\nu}\partial_\nu\upphi S\upphi-\frac{1}{2}|m|^{\frac{1}{2}}(m^{-1})^{\uptau\nu}\partial_\nu S\upphi^2)\\
&\quad+\partial_j(|m|^{\frac{1}{2}}(m^{-1})^{j\nu}\partial_\nu\upphi S\upphi-\frac{1}{2}|m|^{\frac{1}{2}}(m^{-1})^{j\nu}\partial_\nu S\upphi^2)\\
&\quad+\frac{1}{2}|m|^{\frac{1}{2}}(\Box_m S)\upphi^2-|m|^{\frac{1}{2}}S(m^{-1})^{\mu\nu}\partial_\mu\upphi\partial_\nu\upphi-|m|^{\frac{1}{2}}US\upphi^2.
\end{align*}
We apply and add these identities in the $(\uptau,\uprho,\uptheta)$ coordinates, with $a^\mu$ and $S$ determined by $Q$ above. It follows that, with $B^\mu[\upphi]$ determined through these identities,
\begin{align}\label{eq:psifarLEDtemp1.5}
\begin{split}
fQ\upphi|m|^{\frac{1}{2}}&=\partial_\mu B^\mu[\upphi]+\mathcal P_{\mathrm{pert}}\upphi Q\upphi |m|^{\frac{1}{2}}-|m|^{\frac{1}{2}}(|m|^{-\frac{1}{2}}\frac{1}{2}\partial_\mu(U a^\mu|m|^{\frac{1}{2}})\upphi^2-US\upphi^2)\\
&\quad+|m|^{\frac{1}{2}}\big(\frac{1}{2}|m|^{-\frac{1}{2}}\partial_\lambda((m^{-1})^{\mu\nu}|m|^{\frac{1}{2}}a^\lambda)\partial_\mu\upphi\partial_\nu\upphi-(m^{-1})^{\mu\nu}(\partial_\mu a^\lambda)\partial_\nu\upphi\partial_\lambda\upphi\big)\\
&\quad +|m|^{\frac{1}{2}}(\frac{1}{2}(\Box_m S)\upphi^2-S(m^{-1})^{\mu\nu}\partial_\mu\upphi\partial_\nu\upphi).
\end{split}
\end{align}
The contribution of $B^\mu[\upphi]$ can be bounded by the energy (see \eqref{eq:minvdecomp1} and \eqref{eq:m02inv1} for the form of $m$). To calculate the bulk terms, we first write $m={\underline m}+{\mathring m}$ where ${\underline m}$ is defined by freezing the $\uptau$ values of the coefficients at $\uptau=t_2$, and ${\mathring m}:=m-{\underline m}$. In view of \eqref{eq:minvdecomp1} and \eqref{eq:m02inv1}, and the $\uptau$ decay of ${\mathring m}$, the contribution of ${\mathring m}$ is bounded by the energy. Here note that the term $|m|^{\frac{1}{2}}(m^{-1})^{\uptau\uprho}$, which could lead to a transversal derivative with no spatial decay on the leaves $\Sigma_\uptau$, is independent of $\uptau$ to leading order in $\uprho$, so its leading order contribution to ${\mathring m}$ vanishes (see \eqref{eq:mdvol1}, \eqref{eq:m02inv1}). For the contribution of ${\underline m}$, except for the multiplicative $|{\underline m}|^{\frac{1}{2}}$, the expression of the bulk terms is coordinate invariant, so we can calculate in the $({\tilde{\uptau}},{\tilde{\uprho}},{\tilde{\uptheta}})$ coordinates. But then, using the asymptotic flatness of the metric in these coordinates, and the fact that $\chi_1'\geq 0$, if $\delta$ is sufficiently small the last two lines of \eqref{eq:psifarLEDtemp1.5} give control of
\begin{align}\label{eq:psifarLEDtemp1.75}
\begin{split}
-\uprho^{-1-\alpha}((\partial_\uprho\upphi)^2+(\uprho^{-1}\partial_\uptheta\upphi)^2+(\uprho^{-1}\upphi)^2).
\end{split}
\end{align}
Using similar considerations, the contributions of $\mathcal P_{\mathrm{pert}}$ and $U$ to \eqref{eq:psifarLEDtemp1.5} can be bounded by the energy and a small multiple of the LE norm of $\upphi$. To get control of the remaining derivative $\partial_\uptau\upphi$, we again multiply \eqref{eq:psifarLEDtemp0}, this time by $\beta_1'\upphi |m|^{\frac{1}{2}}$, and manipulate as above to get, for the appropriate choice of ${\tilde{B}}^\mu[\upphi]$,
\begin{align}\label{eq:psifarLEDtemp2.5}
\begin{split}
f\beta_1'\upphi |m|^{\frac{1}{2}}&=\partial_\mu {\tilde{B}}^\mu[\upphi]+\mathcal P_{\mathrm{pert}}\upphi\,\beta_1'\upphi |m|^{\frac{1}{2}}+|m|^{\frac{1}{2}}U\beta_1'\upphi^2\\
&\quad+|m|^{\frac{1}{2}}(\frac{1}{2}(\Box_m \beta_1')\upphi^2-\beta_1'(m^{-1})^{\mu\nu}\partial_\mu\upphi\partial_\nu\upphi).
\end{split}
\end{align}
Using similar arguments as above, and as in the standard Minkowski computation, this gives control of $\uprho^{-1-\alpha}(\partial_\uptau\upphi)^2$ in terms of \eqref{eq:psifarLEDtemp1.75}.
Note also that by a similar argument as in the proof of Proposition~\ref{prop:energyestimate1} we can prove an energy estimate for equation \eqref{eq:phifardef1}. Adding a suitably large multiple of \eqref{eq:psifarLEDtemp1.5} and the energy identity for \eqref{eq:phifardef1} to \eqref{eq:psifarLEDtemp2.5}, for a small constant $\varepsilon$ depending on the support of $\chi_1$, we get (note that the induced volume form and $\sqrt{|m|}$ are comparable in the support of $\chi_1$)
\begin{align}\label{eq:psifarLEDtemp3}
\begin{split}
\iint_{\Sigma_{t_1}^{t_2}}\chi_1 {\tilde{r}}^{-1-\alpha}((\partial\upphi)^2+({\tilde{r}}^{-1}\upphi)^2)\mathrm{d} V &\lesssim \|f\|^2_{L^1L^2(\Sigma_{t_1}^{t_2})}+\varepsilon \|\upphi\|^2_{LE(\Sigma_{t_1}^{t_2})}\\
&\quad+\iint_{\Sigma_{t_1}^{t_2}\cap \,{\mathrm{supp}} \chi_1'}|\upphi|^2\mathrm{d} V.
\end{split}
\end{align}
For the interior, we multiply the equation by two multipliers, $Q\phi$ and $P\phi$, of the form
\begin{align*}
\begin{split}
Q\phi :=q^\mu\partial_\mu\phi +|{\tilde{h}}|^{-\frac{1}{2}}\partial_\mu (|{\tilde{h}}|^{\frac{1}{2}}q^\mu \phi)\qquad{\ \ \text{and} \ \ }\qquad P\phi:= p\phi,
\end{split}
\end{align*}
where (recall the relations \eqref{eq:ttilt1}, \eqref{eq:ttilt2}, \eqref{eq:kappadef1})
\begin{align*}
\begin{split}
q^\mu=\delta_\rho^\mu\beta_2(\rho)-\gamma(t)\delta_{t}^\mu \ell(t)\cdot F_\rho(\rho,\omega) (1-\upkappa(1+\upkappa)^{-1})\beta_2(\rho), \qquad p= p_0 \rho^2\chi_2(\rho).
\end{split}
\end{align*}
Here $p_0$ is a constant to be fixed later (and with $\chi_2$ as defined before the statement of Lemma~\ref{lem:phifar1}),
\begin{align*}
\begin{split}
\beta_2(\rho)=\rho \chi_2(\rho).
\end{split}
\end{align*}
It is helpful to keep in mind that $\ell\cdot F_\rho=\frac{\rho}{\jap{\rho}}\ell\cdot\Theta$ vanishes at $\rho=0$. Note that in the $({\tilde{t}},{\tilde{\rho}},{\tilde{\omega}})$ coordinates ${\tilde{q}}^\mu\partial_\mu =\beta_2 \partial_{\tilde{\rho}}$ (recall that in our notation ${\tilde{q}}^\mu$ denotes the components of the vectorfield $q$ in the $({\tilde{t}},{\tilde{\rho}},{\tilde{\omega}})$ coordinates). Also, recalling \eqref{eq:LEDcalPint1}, we write ${\mathring{\calP}}=a^{\mu\nu}\partial^2_{\mu\nu}+b^\mu \partial_\mu + c$ for the perturbation.
For the principal part of $\mathcal P$ we can change variables to $({\tilde{t}},{\tilde{\rho}},{\tilde{\omega}})$ to get
\begin{align}\label{eq:LEDintbulk1}
\begin{split}
(\Box_h -U)\phi Q\phi |{\tilde{h}}|^{\frac{1}{2}}&= \partial_\mu\Big(2|{\tilde{h}}|^{\frac{1}{2}}({\tilde{h}}^{-1})^{\mu\nu}{\tilde{q}}^\lambda \partial_\lambda\phi\partial_\nu\phi-|{\tilde{h}}|^{\frac{1}{2}}({\tilde{h}}^{-1})^{\lambda\nu}{\tilde{q}}^\mu\partial_\lambda\phi\partial_\nu\phi-|{\tilde{h}}|^{\frac{1}{2}}U{\tilde{q}}^\lambda\phi^2\Big)\\
&\quad+\partial_\mu\Big(({\tilde{h}}^{-1})^{\mu\nu}\partial_\lambda(|{\tilde{h}}|^{\frac{1}{2}}{\tilde{q}}^\lambda)\phi\partial_\nu\phi-\frac{1}{2}({\tilde{h}}^{-1})^{\mu\nu}\partial_\nu(|{\tilde{h}}|^{-\frac{1}{2}}\partial_\lambda(|{\tilde{h}}|^{\frac{1}{2}}{\tilde{q}}^\lambda))|{\tilde{h}}|^{\frac{1}{2}}\phi^2\Big)\\
&\quad-2\big(({\tilde{h}}^{-1})^{\lambda\nu}(\partial_\lambda{\tilde{q}}^\mu)-\frac{1}{2}{\tilde{q}}^\lambda\partial_\lambda({\tilde{h}}^{-1})^{\mu\nu}\big)\partial_\mu\phi\partial_\nu\phi|{\tilde{h}}|^{\frac{1}{2}}\\
&\quad+\frac{1}{2}\Box_{\tilde{h}}(|{\tilde{h}}|^{-\frac{1}{2}}\partial_\lambda(|{\tilde{h}}|^{\frac{1}{2}}{\tilde{q}}^\lambda))\phi^2|{\tilde{h}}|^{\frac{1}{2}}+({\tilde{q}}^\lambda\partial_\lambda U)\phi^2|{\tilde{h}}|^{\frac{1}{2}},
\end{split}
\end{align}
and
\begin{align}\label{eq:LEDintbulk2}
\begin{split}
(\Box_h-U) \phi P\phi |{\tilde{h}}|^{\frac{1}{2}}&= \partial_\mu\big(|{\tilde{h}}|^{\frac{1}{2}}({\tilde{h}}^{-1})^{\mu\nu}p\phi\partial_\nu\phi-\frac{1}{2}|{\tilde{h}}|^{\frac{1}{2}}({\tilde{h}}^{-1})^{\mu\nu}\partial_\mu p \phi^2\big)\\
&\quad-p({\tilde{h}}^{-1})^{\mu\nu}\partial_\mu\phi\partial_\nu\phi|{\tilde{h}}|^{\frac{1}{2}}+\big(\frac{1}{2}\Box_{\tilde{h}} p-Up\big)\phi^2|{\tilde{h}}|^{\frac{1}{2}}.
\end{split}
\end{align}
Recalling that, in view of \eqref{eq:ttilt2}, $(1+\upkappa)^{-1}|h|^{\frac{1}{2}}=|{\tilde{h}}|^{\frac{1}{2}}$, by adding a small multiple $\epsilon_M$ of \eqref{eq:LEDintbulk2} to \eqref{eq:LEDintbulk1} and multiplying by $(1+\upkappa)^{-1}$ we get, for some constant $c_M=o(M)$,
\begin{align}\label{eq:LEDintbulk3}
\begin{split}
(U-\Box_h) \phi (Q\phi+\epsilon_MP\phi) |h|^{\frac{1}{2}}&\geq C\chi_1(c_M\rho^2(T\phi)^2+(\partial_\Sigma\phi)^2+M\phi^2)\\
&\quad-O(1)\chi_{U_{\mathrm{int}}\backslash K_{\mathrm{int}}}((\partial\phi)^2+(\phi)^2)\\
&\quad+O(\epsilon \,t^{-5/4})\chi_{U_{\mathrm{int}}}((\partial\phi)^2+(\phi/\rho)^2)\\
&\quad+\partial(O(1)\chi_{U_{\mathrm{int}}}(\partial\phi)^2+O(1)\chi_{U_{\mathrm{int}}}(\phi/\rho)^2).
\end{split}
\end{align}
In a similar manner, using the $t$ decay of $a,b,c$, we can see that
\begin{align}\label{eq:LEDintbulk4}
\begin{split}
P \phi (Q\phi+\epsilon_MP\phi) |h|^{\frac{1}{2}}&=O(\epsilon \,t^{-9/4})\chi_{U_{\mathrm{int}}}((\partial\phi)^2+(\phi/\rho)^2)\\
&\quad+\partial(O(\epsilon)\chi_{U_{\mathrm{int}}}(\partial\phi)^2+O(\epsilon)\chi_{U_{\mathrm{int}}}(\phi/\rho)^2).
\end{split}
\end{align}
The desired estimate now follows by integrating \eqref{eq:LEDintbulk3} and \eqref{eq:LEDintbulk4} (with respect to the measure $\mathrm{d} t \mathrm{d}\rho \mathrm{d}\omega$) and combining with the energy identity for \eqref{eq:phifardef1} and a suitably large multiple (independent of $M$) of \eqref{eq:psifarLEDtemp3}. Here note that the bulk error terms in \eqref{eq:LEDintbulk3} and \eqref{eq:LEDintbulk4} are absorbed by the suitably large multiple of \eqref{eq:psifarLEDtemp3}, while the bulk $L^2({\mathrm{supp}} \chi_1')$ error terms in the latter are absorbed by \eqref{eq:LEDintbulk3} if $M$ is chosen sufficiently large. The argument for \eqref{eq:lemphifarbound2} is similar, where now we incur some $L^2$ errors in a compact region in view of the absence of $V_{\mathrm{far}}$ from the left-hand side of \eqref{eq:calPphinear1}. The contribution of the source term in \eqref{eq:calPphinear1} is bounded in $LE^\ast$ using the decay of $V_{\mathrm{far}}$ and \eqref{eq:lemphifarbound1}.
\end{proof}
At this point we use the coordinates $(\uptau,\uprho,\uptheta)$ and the decomposition $\mathcal P=\mathcal P_0+\mathcal P_{\mathrm{pert}}$ (see the opening paragraphs of this section). For a globally defined function $u$, the frequency projections $P_{\leq N_0}$ and $P_N$ are defined by
\begin{align*}
\begin{split}
P_{\leq N_0}u(\uptau,\uprho,\uptheta)=\int_{-\infty}^{\infty}2^{N_0}\chi(2^{N_0}\uptau')u(\uptau-\uptau')\mathrm{d} \uptau',\qquad P_{N}u(\uptau,\uprho,\uptheta)=\int_{-\infty}^{\infty}2^{N}{\tilde{\chi}}(2^{N}\uptau')u(\uptau-\uptau')\mathrm{d} \uptau',
\end{split}
\end{align*}
where as usual $\hat\chi(\hat\uptau)=\int_{-\infty}^{\infty}\chi(\uptau)e^{-i\hat\uptau\uptau}\mathrm{d} \uptau$ and $\hat{{\tilde{\chi}}}(\hat\uptau)=\int_{-\infty}^{\infty}{\tilde{\chi}}(\uptau)e^{-i\hat\uptau\uptau}\mathrm{d} \uptau$ are supported in $\{\hat\uptau\lesssim 1\}$ and $\{\hat\uptau\simeq 1\}$ respectively.
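For orientation (a standard Littlewood--Paley sketch; the exact normalization of $\chi$ and ${\tilde{\chi}}$ is an assumption here, not fixed by the text), on the Fourier side in $\uptau$ the projections act as multipliers,
\begin{align*}
\begin{split}
\widehat{P_{\leq N_0}u}(\hat\uptau)=\hat\chi(2^{-N_0}\hat\uptau)\hat u(\hat\uptau),\qquad \widehat{P_{N}u}(\hat\uptau)=\hat{{\tilde{\chi}}}(2^{-N}\hat\uptau)\hat u(\hat\uptau),
\end{split}
\end{align*}
so by Plancherel both projections are bounded on $L^2_\uptau$, and $u=P_{\leq N_0}u+\sum_{N>N_0}P_Nu$ provided the normalization $\hat\chi(\hat\uptau)+\sum_{N>N_0}\hat{{\tilde{\chi}}}(2^{-N}\hat\uptau)\equiv 1$ holds.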
In order to apply frequency projections, we need to extend $\uppsi$, $\uppsi_{\mathrm{far}}$, and $\uppsi_{\mathrm{near}}$. For this, we view them as functions of the variables $(\uptau,\uprho,\uptheta)$ and extend them outside of their current domain of definition by requiring that they satisfy
\begin{align*}
\begin{split}
\mathcal P\uppsi-V_{\mathrm{far}} \uppsi=0,\qquad \mathcal P\uppsi_{\mathrm{far}}- V_{\mathrm{far}}\uppsi_{\mathrm{far}}=0,
\end{split}
\end{align*}
in $( \Sigma_{t_1}^{t_2})^c$. Here $\mathcal P$ is extended outside the original domain using the decomposition \eqref{eq:calPP0Ppertdecomp1}, by extending the coefficients of $\mathcal P_0$ independently of $\uptau$ and those of $\mathcal P_{\mathrm{pert}}$ smoothly, in such a way that the estimates \eqref{eq:calPP0Ppertdecomp1} are still satisfied.
It follows that $\uppsi_{\mathrm{near}}:=\uppsi-\uppsi_{\mathrm{far}}$ satisfies
\begin{align*}
\begin{split}
\mathcal P\uppsi_{\mathrm{near}} -V_{\mathrm{far}} \uppsi_{\mathrm{near}}=0,
\end{split}
\end{align*}
outside its original domain of definition. By a similar argument as in the proof of Lemma~\ref{lem:phifar1}, we can then replace $\|\uppsi_{\mathrm{far}}\|_{LE(\Sigma_{t_1}^{t_2})}$ and $\|\uppsi_{{\mathrm{near}}}\|_{LE(\Sigma_{t_1}^{t_2})}$ in the estimates \eqref{eq:lemphifarbound1} and \eqref{eq:lemphifarbound2} by $\|\uppsi_{\mathrm{far}}\|_{LE}$ and $\|\uppsi_{{\mathrm{near}}}\|_{LE}$, respectively, where $$LE\equiv LE(\cup_{\uptau}\Sigma_\uptau).$$
In view of Lemma~\ref{lem:phifar1} our task has reduced to estimating $\|\uppsi_{\mathrm{near}}\|_{L^2(K_{\mathrm{ext}})}$. The high frequency part of this error can already be absorbed by the $LE$ norm as shown in the next lemma.
\begin{lemma}\label{lem:LEDhighfreq1}
Given $\delta>0$, if $N_0$ is sufficiently large then $P_{>N_0}\uppsi_{\mathrm{near}}:=\uppsi_{\mathrm{near}} - P_{\leq N_0}\uppsi_{\mathrm{near}}$ satisfies
\begin{align*}
\begin{split}
\|P_{>N_0}\uppsi_{\mathrm{near}}\|_{L^2(K_{{\mathrm{ext}}})}\leq \delta \|\uppsi_{\mathrm{near}}\|_{LE}.
\end{split}
\end{align*}
\end{lemma}
\begin{proof}
This would be immediate from the definition of $P_{>N_0}$ and $\|\cdot\|_{LE}$ if we had $\|P_{>N_0}\rho\uppsi_{\mathrm{near}}\|_{L^2(K_{\mathrm{ext}})}$ instead of $\|P_{>N_0}\uppsi_{\mathrm{near}}\|_{L^2(K_{\mathrm{ext}})}$ (note that in $K_{\mathrm{ext}}$ the coordinates $(t,\rho,\omega)$ and $(\uptau,\uprho,\uptheta)$ agree). To insert the extra factor of $\rho$ we argue as follows. Let $u=P_{> N_0}\uppsi_{\mathrm{near}}$. Then, with $\chi\equiv \chi(\rho)$ an appropriate cutoff, we need to estimate (recall that $|{\tilde{h}}|\simeq 1$ in $K_{\mathrm{ext}}$, in particular with no vanishing at $\rho=0$)
\begin{align*}
\begin{split}
&\int_{t_1}^{t_2}\int_{\mathbb S^{n-1}}\int_{-\infty}^\infty u^2\chi \mathrm{d} \rho\,\mathrm{d}\omega\mathrm{d} t= \int_{t_1}^{t_2}\int_{\mathbb S^{n-1}}\int_{-\infty}^\infty (\partial_\rho\rho)u^2\chi \mathrm{d} \rho\,\mathrm{d}\omega\mathrm{d} t\\
&=-\int_{t_1}^{t_2}\int_{\mathbb S^{n-1}}\int_{-\infty}^{\infty} \rho u^2\partial_\rho\chi\mathrm{d} \rho\,\mathrm{d}\omega \mathrm{d} t-2\int_{t_1}^{t_2}\int_{\mathbb S^{n-1}}\int_{-\infty}^{\infty}\rho u \partial_\rho u \chi \mathrm{d} \rho\,\mathrm{d}\omega \mathrm{d} t.
\end{split}
\end{align*}
The first integral is supported away from $\{\rho=0\}$, so we can insert a factor of $\rho$ making it an acceptable $L^2$ error. For the second integral we use
\begin{align*}
\begin{split}
|\rho u \partial_\rho u| \leq C_\epsilon \rho^2 u^2 + \epsilon (\partial_\rho u)^2.
\end{split}
\end{align*}
Since the coefficient of $\partial_\rho u$ in the $LE$ norm is non-degenerate, we can absorb the last term on the right into the left-hand side (note that $P_{> N_0}$ and $\partial_\rho$ commute and that $P_{>N_0}$ is bounded on $LE$). The first term on the right is exactly the term we had hoped for.
\end{proof}
It follows that
\begin{align}\label{eq:phinearLEtemp1}
\begin{split}
\|\uppsi_{\mathrm{near}}\|_{LE}\lesssim \|\uppsi\|_{E(\Sigma_{t_1})}+ \|f\|_{L^1L^2(\Sigma_{t_1}^{t_2})}+\|P_{\leq N_0}\uppsi_{\mathrm{near}}\|_{L^2(K_{\mathrm{ext}})}.
\end{split}
\end{align}
Now to estimate $P_{\leq N_0}\uppsi_{\mathrm{near}}$ we again apply the near-far decomposition, this time with respect to $\mathcal P_0$. First note that
\begin{align*}
\begin{split}
\|P_{\leq N_0}\uppsi_{\mathrm{near}}\|_{L^2(K_{\mathrm{ext}})}\lesssim \|P_{\leq N_0}\uppsi_{\mathrm{near}}\|_{LE}.
\end{split}
\end{align*}
Let $\uppsi_{{\mathrm{near}},{\mathrm{far}}}$ be defined by (note that the operator on the left-hand side is applied to $\uppsi_{{\mathrm{near}},{\mathrm{far}}}$, while on the right-hand side $\uppsi_{\mathrm{near}}$ and $\uppsi_{\mathrm{far}}$ appear, not $\uppsi_{{\mathrm{near}},{\mathrm{far}}}$)
\begin{equation} \label{eq:psi-nearfar}
\begin{split}
\begin{cases}(\mathcal P_0-V_{\mathrm{far}})\uppsi_{{\mathrm{near}},{\mathrm{far}}} = -\mathcal P_{\mathrm{pert}}\uppsi_{\mathrm{near}}-V_{\mathrm{far}}\uppsi_{\mathrm{far}}\quad&\mathrm{in~}\Sigma_{t_1}^{t_2}\\
(\mathcal P_0-V_{\mathrm{far}})\uppsi_{{\mathrm{near}},{\mathrm{far}}} =-\mathcal P_{\mathrm{pert}}\uppsi_{\mathrm{near}}\quad&\mathrm{in~}(\Sigma_{t_1}^{t_2})^c
\end{cases},\qquad \uppsi_{{\mathrm{near}},{\mathrm{far}}}\vert_{\Sigma_{t_1}}=0,
\end{split}
\end{equation}
so that $\uppsi_{{\mathrm{near}},{\mathrm{near}}}:=\uppsi_{\mathrm{near}}-\uppsi_{{\mathrm{near}},{\mathrm{far}}}$ satisfies
\begin{equation} \label{eq:psi-nearnear}
\begin{split}
\begin{cases}\mathcal P_0\uppsi_{{\mathrm{near}},{\mathrm{near}}}=-V_{\mathrm{far}}\uppsi_{{\mathrm{near}},{\mathrm{far}}}\quad&\mathrm{~in~} \Sigma_{t_1}^{t_2}\\
(\mathcal P_0-V_{\mathrm{far}})\uppsi_{{\mathrm{near}},{\mathrm{near}}}=0\quad&\mathrm{~in~} (\Sigma_{t_1}^{t_2})^c
\end{cases},
\qquad \uppsi_{{\mathrm{near}},{\mathrm{near}}}\vert_{\Sigma_{t_1}}=0.
\end{split}
\end{equation}
Since $P_{\leq N_0}$ commutes with $\mathcal P_0$ and $V_{\mathrm{far}}$, we also have
\begin{align*}
\begin{split}
P_{\leq N_0}\uppsi_{\mathrm{near}} = P_{\leq N_0}\uppsi_{{\mathrm{near}},{\mathrm{near}}}+P_{\leq N_0}\uppsi_{{\mathrm{near}},{\mathrm{far}}}
\end{split}
\end{align*}
and
\begin{align*}
\begin{split}
&(\mathcal P_0-V_{\mathrm{far}})P_{\leq N_0}\uppsi_{{\mathrm{near}},{\mathrm{far}}}=P_{\leq N_0}f_{{\mathrm{near}},{\mathrm{far}}},\\
&\mathcal P_0P_{\leq N_0}\uppsi_{{\mathrm{near}},{\mathrm{near}}} =P_{\leq N_0}f_{{\mathrm{near}},{\mathrm{near}}},
\end{split}
\end{align*}
where
\begin{align*}
\begin{split}
f_{{\mathrm{near}},{\mathrm{far}}}:=\begin{cases}-\mathcal P_{\mathrm{pert}}\uppsi_{\mathrm{near}}-V_{\mathrm{far}}\uppsi_{\mathrm{far}}\quad&\mathrm{in~}\Sigma_{t_1}^{t_2}\\ -\mathcal P_{\mathrm{pert}}\uppsi_{\mathrm{near}}\quad&\mathrm{in~}(\Sigma_{t_1}^{t_2})^c\end{cases},\quad f_{{\mathrm{near}},{\mathrm{near}}}:=\begin{cases}-V_{\mathrm{far}}\uppsi_{{\mathrm{near}},{\mathrm{far}}}\quad&\mathrm{in~}\Sigma_{t_1}^{t_2}\\ V_{\mathrm{far}}\uppsi_{{\mathrm{near}},{\mathrm{near}}}\quad&\mathrm{in~}(\Sigma_{t_1}^{t_2})^c\end{cases}.
\end{split}
\end{align*}
By a slight abuse of notation we will sometimes write
\begin{align*}
\begin{split}
-P_{\leq N_0}(\mathcal P_{\mathrm{pert}} \uppsi_{\mathrm{near}})-P_{\leq N_0}(V_{\mathrm{far}} \uppsi_{\mathrm{far}})
\end{split}
\end{align*}
for $P_{\leq N_0}f_{{\mathrm{near}},{\mathrm{far}}}$, and write the equation for $P_{\leq N_0}\uppsi_{{\mathrm{near}},{\mathrm{far}}}$ simply as
\begin{align*}
\begin{split}
(\mathcal P_0-V_{\mathrm{far}})P_{\leq N_0}\uppsi_{{\mathrm{near}},{\mathrm{far}}}=-P_{\leq N_0}(\mathcal P_{\mathrm{pert}} \uppsi_{\mathrm{near}})-P_{\leq N_0}(V_{\mathrm{far}} \uppsi_{\mathrm{far}}).
\end{split}
\end{align*}
The proof of the following lemma will occupy much of the remainder of this section.
\begin{lemma}\label{lem:phinearfar1}
$\uppsi_{{\mathrm{near}},{\mathrm{far}}}$ satisfies
\begin{align*}
\begin{split}
\|P_{\leq N_0}\uppsi_{{\mathrm{near}},{\mathrm{far}}}\|_{L^2(K_{\mathrm{ext}})}\lesssim \|f\|_{L^1L^2(\Sigma_{t_1}^{t_2})}+\epsilon \sup_{t_1\leq t \leq t_2}\|\uppsi\|_{E(\Sigma_{t})}+\epsilon\|\uppsi_{\mathrm{near}}\|_{LE(\Sigma_{t_1}^{t_2})}.
\end{split}
\end{align*}
\end{lemma}
We postpone the proof of this lemma and proceed to prove Proposition~\ref{prop:LED1} using its statement.
\begin{proof}[Proof of Proposition~\ref{prop:LED1}]
Throughout the proof, we use an underline to denote the parameters, or other functions depending on the parameters, with values fixed at $\uptau=t_2$. So for instance we write ${\underline \ell}=\ell(t_2)$ and $\hbar$ for $h$ with $\ell$ replaced by $\underline{\ell}$. Let ${R_1}>0$ be a large constant (see for instance Section~\ref{subsec:modulationeqs}) so that $$K_{\mathrm{ext}}\subseteq \mathcal R_{t_1}^{t_2}:=\cup_{\uptau\in [t_1,t_2]}\Sigma_\uptau\cap\{\uprho\leq {R_1}\}.$$ In view of \eqref{eq:phinearLEtemp1} and Lemmas~\ref{lem:phifar1} and~\ref{lem:phinearfar1}, it suffices for us to prove the following estimate
\begin{align}\label{eq:phinearneartemp1}
\begin{split}
\|P_{\leq N_0}\uppsi_{{\mathrm{near}},{\mathrm{near}}}\|_{L^2(K_{\mathrm{ext}})}&\lesssim \sum_k\|{\boldsymbol{\Upomega}}_k(\uppsi)\|_{L^2_\uptau([t_1,t_2])}+ \epsilon \|\uppsi\|_{LE(\mathcal R_{t_1}^{t_2})}\\
&\quad+\|\uppsi_{\mathrm{far}}\|_{LE}+\|\uppsi_{{\mathrm{near}},{\mathrm{far}}}\|_{LE}+\|\uppsi\|_{LE((\Sigma_{t_1}^{t_2})^c)}.
\end{split}
\end{align}
To simplify notation let $u=P_{\leq N_0}\uppsi_{{\mathrm{near}},{\mathrm{near}}}$ and $g=-P_{\leq N_0}f_{{\mathrm{near}},{\mathrm{near}}}$, so that the equation
\begin{align*}
\begin{split}
\mathcal P_0 u = g
\end{split}
\end{align*}
is satisfied globally.
We also recall that in the coordinates $({\tilde{\uptau}},{\tilde{\uprho}},{\tilde{\uptheta}})$ the operator $\mathcal P_0$ takes the form
\begin{align*}
\begin{split}
\mathcal P_0 = -\partial_{\tilde{\uptau}}^2+{\tilde{\Delta}} +V({\tilde{\uprho}}),
\end{split}
\end{align*}
where ${\tilde{\Delta}}$ denotes the Laplacian on the Riemannian Catenoid in polar coordinates:
\begin{align*}
\begin{split}
{\tilde{\Delta}}=\frac{1}{\jap{{\tilde{\uprho}}}^{n-1}|F_{{\tilde{\uprho}}}|}\partial_{\tilde{\uprho}}(\jap{{\tilde{\uprho}}}^{n-1}|F_{\tilde{\uprho}}|^{-1}\partial_{\tilde{\uprho}})+\frac{1}{\jap{{\tilde{\uprho}}}^2}{\mathring{\slashed{\Delta}}}.
\end{split}
\end{align*}
By \eqref{eq:coordinatetransformationrelation1}, in the region $\{\uprho\leq {R_1}\}$ the two coordinates are related by
\begin{align*}
\begin{split}
{\tilde{\uptau}}=\underline{\gamma}^{-1}\uptau-{\underline \ell}\cdot F(\uprho,\uptheta),\quad {\tilde{\uprho}}=\uprho,\quad {\tilde{\uptheta}}=\uptheta.
\end{split}
\end{align*}
We will use ${\tilde{y}}$ for the spatial coordinates $({\tilde{\uprho}},{\tilde{\uptheta}})$ and use $\angles{\cdot}{\cdot}_{\tilde{y}}$ for the $L^2$ pairing with respect to\footnote{Following our convention in this section, by a slight abuse of notation, we write $\sqrt{|{\underline{\tilh}}|}$ rather than $\sqrt{|\hbar|}$ to emphasize that we are working in the $({\tilde{\uptau}},{\tilde{\uprho}},{\tilde{\uptheta}})$ coordinates.} $\sqrt{|{\underline{\tilh}}|}\mathrm{d} {\tilde{y}}$ on the $\{{\tilde{\uptau}}=\mathrm{constant}\}$ hypersurfaces. On these hypersurfaces we define the spectral projection $\mathbb P_c$ by
\begin{align*}
\begin{split}
u= \mathbb P_cu+\sum_{j=1}^{n} \angles{u}{\uppsi_j}_{\tilde{y}} \uppsi_j+\angles{u}{\uppsi_\mu}_{\tilde{y}} \uppsi_\mu,
\end{split}
\end{align*}
where $\uppsi_j$, $j=1,\dots,n$, denote the eigenfunctions of ${\tilde{\Delta}}+V$ with eigenvalue zero, and $\uppsi_{\mu}$ the eigenfunction with eigenvalue $-\mu^2<0$. In what follows, unless otherwise specified, when summing over the eigenfunctions $\uppsi_j$ we always let $j$ vary over $\{\mu,1,\dots,n\}$ without distinguishing between the zero and $-\mu^2$ eigenvalues. As in the figure below,
\begin{center}
\begin{tikzpicture}[scale=1,transform shape]
\draw[->] (0,-0.25) -- (0,2) node[right] {$\uptau$};
\draw[name path= C, red, very thick,decorate] (-1,0.5) -- (1,0.5) node[right] {$\uptau=t_1$};
\draw[name path = D, red, very thick,-,decorate] (-1,1) node[left] {$\uptau=t_2$}-- (1,1) ;
\draw[red,very thick] (1,1) -- (1,0.5);
\draw[red, very thick] (-1,1) -- (-1,0.5) (-0.5,0.76) node{$\mathcal R_{t_1}^{t_2}$};
\tikzfillbetween[of=C and D]{red, opacity=0.1};
\coordinate (A) at (-3,2.25);
\coordinate (B) at (1,1);
\coordinate (C) at (3,0.75);
\draw[name path=O, thick,blue] plot [smooth] coordinates { (A) (B) (C) };
\coordinate (D) at (-3,1.2);
\coordinate (E) at (-1,0.5);
\coordinate (F) at (3,-0.25);
\draw[name path = U, thick,blue] plot [smooth] coordinates { (D) (E) (F) };
\node[right] at (C) {$\Blue{{\tilde{\uptau}}={\tilde{t}}_2}$};
\node[right] at (F) {$\Blue{{\tilde{\uptau}}={\tilde{t}}_1}$};
\tikzfillbetween[of=O and U]{blue, opacity=0.1};
\end{tikzpicture}
\end{center}
let ${\widetilde{\calR}}_{{\tilde{t}}_1}^{{\tilde{t}}_2}=\{{\tilde{t}}_1\leq {\tilde{\uptau}}\leq {\tilde{t}}_2\}$ be the smallest infinite rectangle containing $\mathcal R_{t_1}^{t_2}$, and observe that (note that the implicit constant is independent of ${R_1}$ and rather depends on the size of $K_{\mathrm{ext}}$ which we can choose to be much smaller than ${R_1}$)
\begin{align*}
\begin{split}
\|u\|_{L^2(K_{\mathrm{ext}})}\lesssim\|u\|_{LE({\widetilde{\calR}}_{{\tilde{t}}_1}^{{\tilde{t}}_2})}.
\end{split}
\end{align*}
Let $a_j:=\angles{u}{\uppsi_j}_{\tilde{y}}$, $a_\mu:=\angles{u}{\uppsi_\mu}_{\tilde{y}}$ and $a_j':=\frac{\mathrm{d}}{\mathrm{d} {\tilde{\uptau}}}\angles{u}{\uppsi_j}_{\tilde{y}}$, $a_\mu':=\frac{\mathrm{d}}{\mathrm{d} {\tilde{\uptau}}}\angles{u}{\uppsi_\mu}_{\tilde{y}}$, and denote by ${\tilde{I}}$ the time interval $\{{\tilde{t}}_1\leq {\tilde{\uptau}} \leq {\tilde{t}}_2\}$. We now apply the LED estimate Proposition~\ref{prop:LEDproduct}, using the second and fourth estimates in the statement there. Note that since $u=P_{\leq N_0}\uppsi_{{\mathrm{near}},{\mathrm{near}}}$, we can drop the time derivative from the last term on the right-hand side of the second estimate in Proposition~\ref{prop:LEDproduct} and absorb the corresponding error by the left-hand side of the fourth estimate. Using this argument (recall that $j$ varies over $\{\mu,1,\dots,n\}$),
\begin{align}\label{eq:orthLEtemp1}
\begin{split}
\|u\|_{LE({\widetilde{\calR}}_{{\tilde{t}}_1}^{{\tilde{t}}_2})}&\lesssim \|\mathbb P_cu\|_{LE({\widetilde{\calR}}_{{\tilde{t}}_1}^{{\tilde{t}}_2})}+\sum_j(\|a_j\|_{L^2_{\tilde{\uptau}} ({\tilde{I}})}+\|a_j'\|_{L^2_{\tilde{\uptau}} ({\tilde{I}})})\\
&\lesssim \|\mathbb P_cg\|_{LE^\ast({\widetilde{\calR}}_{{\tilde{t}}_1}^{{\tilde{t}}_2})}+\sum_j(\|a_j\|_{L^2_{\tilde{\uptau}} ({\tilde{I}})}+\|a_j'\|_{L^2_{\tilde{\uptau}} ({\tilde{I}})})\\
&\lesssim \|g\|_{L^2({\widetilde{\calR}}_{{\tilde{t}}_1}^{{\tilde{t}}_2})}+\sum_j(\|a_j\|_{L^2_{\tilde{\uptau}} ({\tilde{I}})}+\|a_j'\|_{L^2_{\tilde{\uptau}} ({\tilde{I}})}).
\end{split}
\end{align}
Here, to pass to the last line, we have used that $\jap{{\tilde{y}}}^{\frac{1+\alpha}{2}}\uppsi_j\in L^2_{\tilde{y}}$ (which holds for $n\geq 4$) to bound
\begin{align*}
\begin{split}
\|\mathbb P_cg\|_{LE^\ast({\widetilde{\calR}}_{{\tilde{t}}_1}^{{\tilde{t}}_2})}\leq \|g\|_{LE^\ast({\widetilde{\calR}}_{{\tilde{t}}_1}^{{\tilde{t}}_2})}+\sum_j\|\angles{g}{\uppsi_j}_{\tilde{y}}\|_{L^2_{\tilde{\uptau}}({\tilde{I}})}\|\jap{{\tilde{y}}}^{\frac{1+\alpha}{2}}\uppsi_j\|_{L^2_{\tilde{y}}}\lesssim \|g\|_{L^2({\widetilde{\calR}}_{{\tilde{t}}_1}^{{\tilde{t}}_2})}.
\end{split}
\end{align*}
To treat the last term on the right-hand side of \eqref{eq:orthLEtemp1} we introduce some more notation. For $k=\mu,1,\dots,n$, let ${\underline Z}_k=\chi \uppsi_k$ where $\chi\equiv \chi({\tilde{\uprho}})$ is supported in $\{{\tilde{\uprho}}<{R_1}/2\}$. Then let
\begin{align}\label{eq:tilbfOmegabardef1}
\begin{split}
&{\tilde{\underline{\bfOmega}}}_k(v):=-\angles{\partial_{\tilde{\uptau}} v}{{\underline Z}_k}_{\tilde{y}},\qquad{\tilde{\underline{\bfOmega}}}_{n+k}(v):=\angles{v}{{\underline Z}_k}_{\tilde{y}},\quad k=1,\dots,n\\
&{\tilde{\underline{\bfOmega}}}^{+}_{\mu}(v):=\angles{v}{\mu {\underline Z}_\mu}_{\tilde{y}}-\angles{\partial_{\tilde{\uptau}} v}{{\underline Z}_\mu}_{\tilde{y}},\qquad{\tilde{\underline{\bfOmega}}}^{-}_{\mu}(v):=-\angles{v}{\mu {\underline Z}_\mu}_{\tilde{y}}-\angles{\partial_{\tilde{\uptau}} v}{{\underline Z}_\mu}_{\tilde{y}}.
\end{split}
\end{align}
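As a consistency check on the signs in \eqref{eq:tilbfOmegabardef1} (a sketch, with ${\underline Z}_\mu$ in both pairings and under the assumption $\angles{\uppsi_\mu}{{\underline Z}_\mu}_{\tilde{y}}\neq 0$): for a pure mode $v=a_\mu({\tilde{\uptau}})\uppsi_\mu$ one computes
\begin{align*}
\begin{split}
{\tilde{\underline{\bfOmega}}}^{\pm}_{\mu}(v)=\big(\pm\mu\, a_\mu({\tilde{\uptau}})-a_\mu'({\tilde{\uptau}})\big)\angles{\uppsi_\mu}{{\underline Z}_\mu}_{\tilde{y}},
\end{split}
\end{align*}
so ${\tilde{\underline{\bfOmega}}}^{+}_{\mu}(v)$ vanishes precisely when $a_\mu\propto e^{\mu{\tilde{\uptau}}}$, and ${\tilde{\underline{\bfOmega}}}^{-}_{\mu}(v)$ precisely when $a_\mu\propto e^{-\mu{\tilde{\uptau}}}$; that is, the two functionals separate the two exponential behaviors one expects from the $-\mu^2$ eigenvalue.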
For any $c$ we define $\mathcal T_c$ to be the intersection of $\{\uprho\leq {R_1}\}$ with the region bounded between $\{{\tilde{\uptau}}=c\}$ and $\{\uptau=\underline{\gamma} c\}$, and let $\mathcal T_{c,1}:=\mathcal T_c\cap\{\uptau\leq \underline{\gamma} c\}$ and $\mathcal T_{c,2}:=\mathcal T_c\cap\{\uptau\geq \underline{\gamma} c\}$. See the figure below.
\begin{center}\begin{tikzpicture}[scale=1,transform shape]
\draw[->] (0,-1.5) -- (0,1.5) node[right] {$\uptau$};
\draw[name path = O, anchor=center,red, very thick,decorate] (-2,0)--(-1.5,0) node[above,black]{$\mathcal T_{c,2}$}-- (1.5,0) node[below,black]{$\mathcal T_{c,1}$} -- (2,0);
\draw[red, very thick,decorate] (-2.25,0)--(-2,0) ;
\draw[red, very thick,decorate] (2,0) -- (2.25,0)node[right] {$\uptau=\underline{\gamma} c$};
\draw[dashed,thick] (2,1)node[left]{$\uprho={R_1}$}--(2,-1.5);
\draw[dashed,thick] (-2,1)--(-2,-1)node[right]{$\uprho=-{R_1}$};
\coordinate (A) at (-2,1);
\coordinate (B) at (0,0);
\coordinate (C) at (2,-1.5);
\draw [name path = U, thick,blue] plot [smooth] coordinates { (A) (B) (C) };
\node[right] at (C) {$\Blue{{\tilde{\uptau}}=c}$};
\tikzfillbetween[of=O and U]{blue, opacity=0.1};
\end{tikzpicture}
\end{center}
Finally, with ${\underline n}$ denoting the normal (with respect to $\hbar$) to $\{\uptau=c\}$, we let
\begin{align*}
\begin{split}
&{\underline{\bfOmega}}_k(v)(c)=-\int_{\{\uptau= c\}}{\underline Z}_k{\underline n}^\alpha\partial_\alpha v \sqrt{|\hbar|}\mathrm{d} y,\quad {\underline{\bfOmega}}_{n+k}(v)(c)=\int_{\{\uptau= c\}}v{\underline Z}_k{\underline n}^\alpha\partial_\alpha {\tilde{\uptau}} \sqrt{|\hbar|}\mathrm{d} y,\quad k=1,\dots,n,\\
&{\underline{\bfOmega}}_{\mu}^{\pm}(v)(c)=\int_{\{\uptau= c\}}(\pm\mu v {\underline Z}_\mu \partial_\alpha {\tilde{\uptau}}-{\underline Z}_\mu \partial_\alpha v){\underline n}^\alpha \sqrt{|\hbar|}\mathrm{d} y.
\end{split}
\end{align*}
Returning to the last term on the right-hand side of \eqref{eq:orthLEtemp1} note that
\begin{align*}
\begin{split}
&{\tilde{\underline{\bfOmega}}}_k(u)={\tilde{\underline{\bfOmega}}}_k(\mathbb P_cu)-\sum_j a_j'\angles{\uppsi_j}{{\underline Z}_k}_{\tilde{y}},\quad {\tilde{\underline{\bfOmega}}}_{n+k}(u)={\tilde{\underline{\bfOmega}}}_{n+k}(\mathbb P_cu)+\sum_j a_j\angles{\uppsi_j}{{\underline Z}_k}_{\tilde{y}},\quad k=1,\dots,n,\\
&{\tilde{\underline{\bfOmega}}}_{\mu}^{\pm}(u)={\tilde{\underline{\bfOmega}}}_{\mu}^{\pm}(\mathbb P_cu)+\sum_j\big(\pm\mu a_j\angles{\uppsi_j}{ {\underline Z}_{\mu}}_{\tilde{y}}-a_j'\angles{\uppsi_j}{{\underline Z}_{\mu}}_{\tilde{y}}\big).
\end{split}
\end{align*}
Viewing this as a linear system for $a_j,a_j'$, $j\in \{\mu,1,\dots,n\}$, with invertible coefficient matrix, and since ${\underline Z}_k$, $k\in \{\mu,1,\dots,n\}$, are compactly supported, we get (below, each sum in $k$ is over~$\pm\mu,1,\dots,2n$)
\begin{align*}
\begin{split}
\sum_j(\|a_j\|_{L^2_{\tilde{\uptau}} ({\tilde{I}})}+\|a_j'\|_{L^2_{\tilde{\uptau}} ({\tilde{I}})})&\lesssim \sum_k(\|{\tilde{\underline{\bfOmega}}}_k(u)\|_{L^2_{\tilde{\uptau}}({\tilde{I}})}+\|{\tilde{\underline{\bfOmega}}}_{k}(\mathbb P_cu)\|_{L^2_{\tilde{\uptau}}({\tilde{I}})})\\
&\lesssim \|\mathbb P_cu\|_{LE({\widetilde{\calR}}_{{\tilde{t}}_1}^{{\tilde{t}}_2})}+\sum_k\|{\tilde{\underline{\bfOmega}}}_k(u)\|_{L^2_{\tilde{\uptau}}({\tilde{I}})}\\
&\lesssim \|g\|_{L^2({\widetilde{\calR}}_{{\tilde{t}}_1}^{{\tilde{t}}_2})}+\sum_k\|{\tilde{\underline{\bfOmega}}}_k(u)\|_{L^2_{\tilde{\uptau}}({\tilde{I}})},
\end{split}
\end{align*}
where to pass to the last line we have argued as in \eqref{eq:orthLEtemp1}.
It remains to estimate $\|{\tilde{\underline{\bfOmega}}}_k(u)\|_{L^2_{\tilde{\uptau}}({\tilde{I}})}$.
Note that by the divergence theorem (note that if ${\underline{\tiln}}$ denotes the normal to $\{{\tilde{\uptau}}=\mathrm{constant}\}$, then $\partial_{{\tilde{\uptau}}}v$ in \eqref{eq:tilbfOmegabardef1} can be written as ${\underline{\tiln}}^\alpha\partial_\alpha v$), for $k=1,\dots,n$,
\begin{align}\label{eq:orthodivtemp1}
\begin{split}
{\tilde{\underline{\bfOmega}}}_k(u)({\tilde{\uptau}})={\underline{\bfOmega}}_k(u)(\underline{\gamma}{\tilde{\uptau}})+\iint_{\mathcal T_{{\tilde{\uptau}}}}({\underline Z}_k\mathcal P_0u-V{\underline Z}_k u+({\underline{\tilh}}^{-1})^{\alpha\beta}\partial_\alpha u \partial_\beta {\underline Z}_k)\sqrt{|{\underline{\tilh}}|}\mathrm{d} {\tilde{y}} \mathrm{d} {\tilde{\uptau}}',
\end{split}
\end{align}
so
\begin{align}\label{eq:tilOmegabarL2temp1}
\begin{split}
\|{\tilde{\underline{\bfOmega}}}_k(u)\|_{L^2_{\tilde{\uptau}}({\tilde{I}})}&\leq \|{\underline{\bfOmega}}_k(u)(\underline{\gamma}{\tilde{\uptau}})\|_{L^2_{\tilde{\uptau}}({\tilde{I}})}\\
&\quad+\Big\|\iint_{\mathcal T_{{\tilde{\uptau}}}}({\underline Z}_kg-V{\underline Z}_k u+({\underline{\tilh}}^{-1})^{\alpha\beta}\partial_\alpha u \partial_\beta {\underline Z}_k)\sqrt{|{\underline{\tilh}}|}\mathrm{d} {\tilde{y}} \mathrm{d} {\tilde{\uptau}}'\Big\|_{L^2_{\tilde{\uptau}}({\tilde{I}})}.
\end{split}
\end{align}
For the first term, after a change of variables, and recalling that $u=P_{\leq N_0}\uppsi_{{\mathrm{near}},{\mathrm{near}}}= P_{\leq N_0}(\uppsi-\uppsi_{\mathrm{far}}-\uppsi_{{\mathrm{near}},{\mathrm{far}}})$,
\begin{align*}
\begin{split}
\|{\underline{\bfOmega}}_k(u)(\underline{\gamma}{\tilde{\uptau}})\|_{L^2_{\tilde{\uptau}}({\tilde{I}})}&\lesssim \|{\underline{\bfOmega}}_k(u)(\uptau)\|_{L^2_\uptau([\underline{\gamma}{\tilde{t}}_1,\underline{\gamma}{\tilde{t}}_2])}\\
&\lesssim \|{\underline{\bfOmega}}_k(P_{\leq N_0}\uppsi)\|_{L^2_\uptau([\underline{\gamma}{\tilde{t}}_1,\underline{\gamma}{\tilde{t}}_2])}+\|{\underline{\bfOmega}}_k(P_{\leq N_0}(\uppsi_{{\mathrm{far}}}+\uppsi_{{\mathrm{near}},{\mathrm{far}}}))\|_{L^2_\uptau([\underline{\gamma}{\tilde{t}}_1,\underline{\gamma}{\tilde{t}}_2])}\\
&\lesssim \|{\underline{\bfOmega}}_k(P_{\leq N_0}\uppsi)\|_{L^2_\uptau([\underline{\gamma}{\tilde{t}}_1,\underline{\gamma}{\tilde{t}}_2])}+\|\uppsi_{\mathrm{far}}\|_{LE}+\|\uppsi_{{\mathrm{near}},{\mathrm{far}}}\|_{LE}.
\end{split}
\end{align*}
Here to pass to the last line we have used Lemmas~\ref{lem:phifar1} and~\ref{lem:phinearfar1}. To estimate $\|{\underline{\bfOmega}}_k(P_{\leq N_0}\uppsi)\|_{L^2_\uptau([\underline{\gamma}{\tilde{t}}_1,\underline{\gamma}{\tilde{t}}_2])}$ note that since ${\underline Z}_k$ and $\hbar$ are independent of $\uptau$ (recall that $(\uprho,\uptheta)=({\tilde{\uprho}},{\tilde{\uptheta}})$ in the support of ${\underline Z}_k$),
\begin{align*}
\begin{split}
\|{\underline{\bfOmega}}_k(P_{\leq N_0}\uppsi)\|_{L^2_\uptau([\underline{\gamma}{\tilde{t}}_1,\underline{\gamma}{\tilde{t}}_2])}&=\|P_{\leq N_0}{\underline{\bfOmega}}_k(\uppsi)\|_{L^2_\uptau([\underline{\gamma}{\tilde{t}}_1,\underline{\gamma}{\tilde{t}}_2])}\lesssim \|{\underline{\bfOmega}}_k(\uppsi)\|_{L^2_\uptau}\\
&\lesssim \|{\underline{\bfOmega}}_k(\uppsi)-{\boldsymbol{\Upomega}}_k(\uppsi)\|_{L^2_\uptau([t_1,t_2])}+\|{\boldsymbol{\Upomega}}_k(\uppsi)\|_{L^2_\uptau([t_1,t_2])}+\|\uppsi\|_{LE((\Sigma_{t_1}^{t_2})^c)}.
\end{split}
\end{align*}
Since
\begin{align*}
\begin{split}
\|{\underline{\bfOmega}}_k(\uppsi)-{\boldsymbol{\Upomega}}_k(\uppsi)\|_{L^2_\uptau([t_1,t_2])}\lesssim \epsilon \|\uppsi\|_{LE(\mathcal R_{t_1}^{t_2})},
\end{split}
\end{align*}
combining the last few estimates we get,
\begin{align}\label{eq:tilOmegabarL2temp2}
\begin{split}
\|{\underline{\bfOmega}}_k(u)(\underline{\gamma}{\tilde{\uptau}})\|_{L^2_{\tilde{\uptau}}({\tilde{I}})}&\lesssim \|{\boldsymbol{\Upomega}}_k(\uppsi)\|_{L^2_\uptau([t_1,t_2])}+ \epsilon \|\uppsi\|_{LE(\mathcal R_{t_1}^{t_2})}\\
&\quad+\|\uppsi_{\mathrm{far}}\|_{LE}+\|\uppsi_{{\mathrm{near}},{\mathrm{far}}}\|_{LE}+\|\uppsi\|_{LE((\Sigma_{t_1}^{t_2})^c)}.
\end{split}
\end{align}
To treat the second term on the right in \eqref{eq:tilOmegabarL2temp1} we write $\mathcal T_{\tilde{\uptau}}=\mathcal T_{{\tilde{\uptau}},1}\cup \mathcal T_{{\tilde{\uptau}},2}$ and treat the two regions separately. The estimates in these regions are similar so we carry out the details only for $\mathcal T_{{\tilde{\uptau}},1}$. For each $c$ define ${\tilde{\upsigma}}_\max(c)$ minimally and ${\tilde{\upsigma}}_\min(c)$ maximally such that ${\tilde{\uptau}}\in[{\tilde{\upsigma}}_\min(c),{\tilde{\upsigma}}_\max(c)]$ in $\mathcal T_{c,1}$ and let
\begin{align*}
\begin{split}
{\tilde{\upsigma}}_\max:=\sup_{{\tilde{\uptau}}\in[{\tilde{t}}_1,{\tilde{t}}_2]}{\tilde{\upsigma}}_\max({\tilde{\uptau}})\quad{\ \ \text{and} \ \ }\quad{\tilde{\upsigma}}_{\min}:=\inf_{{\tilde{\uptau}}\in[{\tilde{t}}_1,{\tilde{t}}_2]}{\tilde{\upsigma}}_\min({\tilde{\uptau}}).
\end{split}
\end{align*}
For each ${\tilde{\upsigma}}$ let
\begin{align*}
\begin{split}
w({\tilde{\upsigma}}):=\int_{\{{\tilde{\uptau}}={\tilde{\upsigma}}\}}\big|{\underline Z}_kg-V{\underline Z}_k u+({\underline{\tilh}}^{-1})^{\alpha\beta}\partial_\alpha u \partial_\beta {\underline Z}_k\big|\sqrt{|{\underline{\tilh}}|}\mathrm{d} {\tilde{y}}.
\end{split}
\end{align*}
We can then bound the contribution of $\mathcal T_{{\tilde{\uptau}},1}$ to the last term on the right in \eqref{eq:tilOmegabarL2temp1} as
\begin{align*}
\begin{split}
\Big(\int_{{\tilde{t}}_1}^{{\tilde{t}}_2}\Big(\int_{{\tilde{\upsigma}}_\min({\tilde{\uptau}})}^{{\tilde{\upsigma}}_\max({\tilde{\uptau}})}w({\tilde{\upsigma}})\mathrm{d} {\tilde{\upsigma}}\Big)^2\mathrm{d} {\tilde{\uptau}}\Big)^{\frac{1}{2}}=\Big(\int_{{\tilde{t}}_1}^{{\tilde{t}}_2}\Big(\int_{{\tilde{\upsigma}}_\min}^{{\tilde{\upsigma}}_{\max}}\chi_{\{{\tilde{\upsigma}}_{\min}({\tilde{\uptau}})\leq {\tilde{\upsigma}}\leq{\tilde{\upsigma}}_\max({\tilde{\uptau}})\}}w({\tilde{\upsigma}})\mathrm{d} {\tilde{\upsigma}}\Big)^2\mathrm{d} {\tilde{\uptau}}\Big)^{\frac{1}{2}},
\end{split}
\end{align*}
where we have used $\chi_S$ to denote the characteristic function of a set $S$. Applying Schur's test and noting that
\begin{align*}
\begin{split}
\|\chi_{\{{\tilde{\upsigma}}_{\min}({\tilde{\uptau}})\leq {\tilde{\upsigma}}\leq{\tilde{\upsigma}}_\max({\tilde{\uptau}})\}}\|_{L^\infty_{\tilde{\uptau}} L^1_{\tilde{\upsigma}}\cap L^\infty_{\tilde{\upsigma}} L_{\tilde{\uptau}}^1}\lesssim_{{R_1}}|\ell|\lesssim \epsilon,
\end{split}
\end{align*}
we get
\begin{align*}
\begin{split}
&\Big\|\iint_{\mathcal T_{{\tilde{\uptau}},1}}\big({\underline Z}_kg-V{\underline Z}_k u+({\underline{\tilh}}^{-1})^{\alpha\beta}\partial_\alpha u \partial_\beta {\underline Z}_k\big)\sqrt{|{\underline{\tilh}}|}\mathrm{d} {\tilde{y}} \mathrm{d} {\tilde{\uptau}}'\Big\|_{L^2_{\tilde{\uptau}}({\tilde{I}})}\\
&\lesssim \epsilon \Big(\int_{{\tilde{\upsigma}}_\min}^{{\tilde{\upsigma}}_\max}\Big(\int_{\{{\tilde{\uptau}}={\tilde{\uptau}}'\}} \big|{\underline Z}_kg-V{\underline Z}_k u+({\underline{\tilh}}^{-1})^{\alpha\beta}\partial_\alpha u \partial_\beta {\underline Z}_k\big|\sqrt{|{\underline{\tilh}}|}\mathrm{d} {\tilde{y}}\Big)^2\mathrm{d}{\tilde{\uptau}}'\Big)^{\frac{1}{2}}\\
&\lesssim \epsilon\big(\|u\|_{LE}+\|g\|_{L^2(\{\uprho\leq {R_1}\})}\big).
\end{split}
\end{align*}
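For reference, the version of Schur's test used above can be sketched as follows (a standard statement, recorded here with a generic kernel $K({\tilde{\uptau}},{\tilde{\upsigma}})$):
\begin{align*}
\begin{split}
\Big\|\int K({\tilde{\uptau}},{\tilde{\upsigma}})w({\tilde{\upsigma}})\mathrm{d} {\tilde{\upsigma}}\Big\|_{L^2_{\tilde{\uptau}}}\leq \|K\|_{L^\infty_{\tilde{\uptau}} L^1_{\tilde{\upsigma}}}^{\frac{1}{2}}\|K\|_{L^\infty_{\tilde{\upsigma}} L^1_{\tilde{\uptau}}}^{\frac{1}{2}}\|w\|_{L^2_{\tilde{\upsigma}}},
\end{split}
\end{align*}
applied with $K=\chi_{\{{\tilde{\upsigma}}_{\min}({\tilde{\uptau}})\leq {\tilde{\upsigma}}\leq{\tilde{\upsigma}}_\max({\tilde{\uptau}})\}}$, for which both kernel norms are $O_{R_1}(\epsilon)$ by the preceding bound on the characteristic function.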
Combining with \eqref{eq:tilOmegabarL2temp1} and \eqref{eq:tilOmegabarL2temp2} we have shown that
\begin{align*}
\begin{split}
\|{\tilde{\underline{\bfOmega}}}_k(u)\|_{L^2_{\tilde{\uptau}}({\tilde{I}})}&\lesssim \|{\boldsymbol{\Upomega}}_k(\uppsi)\|_{L^2_\uptau([t_1,t_2])}+ \epsilon \|\uppsi\|_{LE(\mathcal R_{t_1}^{t_2})}\\
&\quad+\|\uppsi_{\mathrm{far}}\|_{LE}+\|\uppsi_{{\mathrm{near}},{\mathrm{far}}}\|_{LE}+\|\uppsi\|_{LE((\Sigma_{t_1}^{t_2})^c)}.
\end{split}
\end{align*}
The estimate for $\|{\tilde{\underline{\bfOmega}}}_{n+k}(u)\|_{L^2_{\tilde{\uptau}}({\tilde{I}})}$ is similar, except that when using the divergence identity to relate ${\tilde{\underline{\bfOmega}}}_{n+k}(u)({\tilde{\uptau}})$ and ${\underline{\bfOmega}}_{n+k}(u)(\underline{\gamma}{\tilde{\uptau}})$, analogously to \eqref{eq:orthodivtemp1}, we need to integrate the quantity
\begin{align*}
\begin{split}
v{\underline Z}_k\Box {\tilde{\uptau}}+({\underline{\tilh}}^{-1})^{\alpha\beta}\partial_\alpha{\tilde{\uptau}}\partial_\beta(v{\underline Z}_k)
\end{split}
\end{align*}
over $\mathcal T_{{\tilde{\uptau}}}$. The estimates for ${\tilde{\underline{\bfOmega}}}_{\mu}^{\pm}(u)$ are obtained similarly, completing the proof of \eqref{eq:phinearneartemp1}.
\end{proof}
It remains to prove Lemma~\ref{lem:phinearfar1}. For this we will use the following technical lemma.
\begin{lemma}\label{lem:LEDcommutator1}
Suppose $a$, $b$, and $c$ satisfy
\begin{align*}
\begin{split}
\sup_y(\jap{\uprho}^{\frac{1+\alpha}{2}}|a|+|b|+|c|)\lesssim \epsilon \jap{\uptau}^{-\gamma},
\end{split}
\end{align*}
for some $\gamma>1$. Then
\begin{align*}
\begin{split}
\mathcal X&:=\|[P_{\leq N_0},a]\partial^2_\uptau \uppsi_{\mathrm{near}}\|_{L^1_\uptau L^2_y\cap L^2_{\uptau,y}(I)}^2+\|[P_{\leq N_0},b]\partial^2_y \uppsi_{\mathrm{near}}\|_{L^1_\uptau L^2_y\cap L^2_{\uptau,y}(I)}^2\\
&\quad+\|[P_{\leq N_0},c]\partial^2_{\uptau y} \uppsi_{\mathrm{near}}\|_{L^1_\uptau L^2_y\cap L^2_{\uptau,y}(I)}^2
\end{split}
\end{align*}
satisfies
\begin{align*}
\begin{split}
\mathcal X\lesssim \epsilon \|\uppsi_{\mathrm{near}}\|_{LE}^2+\epsilon\|\uppsi_{\mathrm{far}}\|_{LE(I)}^2+\epsilon\sup_\uptau \|\uppsi_{\mathrm{near}}\|_{E(\Sigma_\uptau)}^2.
\end{split}
\end{align*}
\end{lemma}
\begin{proof}
To simplify notation we will write $\phi$ instead of $\uppsi_{\mathrm{near}}$ during the proof. Note that since the coefficients $\mathring{\uppi}_q^{\mu\nu}$ of $\mathcal P_{\mathrm{pert}}$ satisfy the conditions assumed on $a$, $b$, $c$, we can simultaneously carry out our estimates with $a$, $b$, $c$ replaced by $\mathring{\uppi}_q^{\mu\nu}$. This will allow us to absorb small multiples of the quantity we are trying to estimate. With this in mind, and to simplify notation, we simply assume that $a$, $b$, and $c$ are equal to $\mathring{\uppi}_q^{\uptau\uptau}$, $\mathring{\uppi}_q^{yy}$, and $\mathring{\uppi}_q^{\uptau y}$, respectively. We will repeatedly use the following standard weighted commutator estimate:
\begin{align}\label{eq:commtempgen1}
\begin{split}
\|[P_{N},h]g\|_{L^r_\uptau}\lesssim 2^{-N}(1+2^{-\beta N})\|\jap{\uptau}^{\beta}\partial_\uptau h\|_{L^q_\uptau}\|\jap{\uptau}^{-\beta}g\|_{L^p_\uptau},\qquad \frac{1}{p}+\frac{1}{q}=\frac{1}{r}.
\end{split}
\end{align}
Indeed, with $\chi_N(\uptau)=2^N\chi(2^N\uptau)$ for an appropriate Schwartz function $\chi$, we have
\begin{align*}
\begin{split}
\|[P_{N},h]g\|_{L^r_\uptau}&\leq \int_{\mathbb R}\int_0^1|s|\chi_N(s)\big\|\big(\jap{\uptau-ts}^{\beta} h'(\uptau-ts)\big)\big(\jap{\uptau-s}^{-\beta}g(\uptau-s)\big)\jap{\uptau-s}^{\beta}\jap{\uptau-ts}^{-\beta}\big\|_{L^r_\uptau}\mathrm{d} t \mathrm{d} s\\
&\leq \int_{\mathbb R}|s|\chi_N(s)\int_0^1\|\jap{\uptau}^\beta h'\|_{L^q_\uptau}\|\jap{\uptau}^{-\beta}g\|_{L^p_\uptau}\|\jap{\uptau-(1-t)s}^\beta\jap{\uptau}^{-\beta}\|_{L^\infty_\uptau}\mathrm{d} t \mathrm{d} s\\
&\lesssim \|\jap{\uptau}^\beta h'\|_{L^q_\uptau}\|\jap{\uptau}^{-\beta}g\|_{L^p_\uptau}\int_{\mathbb R}|s|\chi_N(s)(1+|s|^\beta)\mathrm{d} s\lesssim 2^{-N}(1+2^{-\beta N})\|\jap{\uptau}^\beta \partial_\uptau h\|_{L^q_\uptau}\|\jap{\uptau}^{-\beta}g\|_{L^p_\uptau},
\end{split}
\end{align*}
which proves the commutator estimate. In our applications $N\geq 1$, so we simply use the factor $2^{-N}$ in place of $2^{-N}(1+2^{-\beta N})$. Also note that the same estimate holds if $P_N$ is replaced by $P_{\leq N}$, ${\bf P}_N$, or ${\bf P}_{\leq N}$.
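For the reader's convenience, we record the elementary identity behind the first inequality in the chain above; it is the kernel representation of the commutator combined with the fundamental theorem of calculus (here, as above, $P_N$ is assumed to act by convolution with $\chi_N$ in the $\uptau$ variable):
\begin{align*}
\begin{split}
[P_N,h]g(\uptau)=\int_{\mathbb R}\chi_N(s)\big(h(\uptau-s)-h(\uptau)\big)g(\uptau-s)\mathrm{d} s=-\int_{\mathbb R}\int_0^1 s\,\chi_N(s)\,h'(\uptau-ts)\,g(\uptau-s)\mathrm{d} t \mathrm{d} s.
\end{split}
\end{align*}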
Let
\begin{align}\label{eq:calAcommdef1}
\begin{split}
\mathcal A&:=\|[P_{\leq N_0},a]\partial^2_\uptau \phi\|_{L^1_\uptau L^2_y\cap L^2_{\uptau,y}(I)}^2+\sum_{N\geq N_0}\|[P_N,a]\partial^2_\uptau\phi\|_{L^1_\uptau L^2_y\cap L^2_{\uptau,y}(I)}^2\\
&\quad+\|[P_{\leq N_0},b]\partial^2_y \phi\|_{L^1_\uptau L^2_y\cap L^2_{\uptau,y}(I)}^2+\sum_{N\geq N_0}\|[P_N,b]\partial^2_y\phi\|_{L^1_\uptau L^2_y\cap L^2_{\uptau,y}(I)}^2\\
&\quad+\|[P_{\leq N_0},c]\partial^2_{\uptau y} \phi\|_{L^1_\uptau L^2_y\cap L^2_{\uptau,y}(I)}^2+\sum_{N\geq N_0}\|[P_N,c]\partial^2_{\uptau y}\phi\|_{L^1_\uptau L^2_y\cap L^2_{\uptau,y}(I)}^2.
\end{split}
\end{align}
Since $\mathcal A\geq \mathcal X$, it suffices to prove the estimate (the extra terms are added to absorb error terms that arise in the estimates)
\begin{align}\label{eq:calAbound}
\begin{split}
\mathcal A\lesssim \epsilon \|\uppsi_{\mathrm{near}}\|_{LE}^2+\epsilon\|\uppsi_{\mathrm{far}}\|_{LE(I)}^2+\epsilon\sup_\uptau \|\uppsi_{\mathrm{near}}\|_{E(\Sigma_\uptau)}^2.
\end{split}
\end{align}
Let us start with the second terms on each line of \eqref{eq:calAcommdef1}. Let ${\bf P}_N=\sum_{M=N-4}^{N+4}P_M$ and ${\bf P}_{\leq N}=P_{\leq N+4}$ be fattened projections, and decompose
\begin{align}\label{eq:comfreqdecomp1}
\begin{split}
[P_N,a]\partial^2_\uptau\phi&= \sum_{M=N+1}^{N+4}[P_N, {\bf P}_{\leq N}a]\partial^2_\uptau P_M\phi+[P_N,{\bf P}_{N}a]\partial^2_\uptau P_{\leq N}\phi+\sum_{M> N+4}[P_N,{\bf P}_Ma]\partial^2_\uptau P_M\phi\\
&\quad+[P_N,P_{\leq N+4}a]\partial^2_\uptau P_{\leq N}\phi+\sum_{M>N+4}[P_N,P_Ma]\partial^2_\uptau P_{\leq N+3}\phi,
\end{split}
\end{align}
with similar decompositions for $[P_N,b]\partial^2_y\phi$ and $[P_N,c]\partial^2_{\uptau y}\phi$. With $\alpha_0:=\frac{1}{2}(1+\alpha)$ and ${\tilde{\bfP}}_N=\sum_{M=N+1}^{N+4}P_M$, the contribution of the first term on the right-hand side of \eqref{eq:comfreqdecomp1} is bounded as
\begin{align*}
\begin{split}
\sum_{N\geq N_0}\|[P_N, {\bf P}_{\leq N}a]\partial^2_\uptau{\tilde{\bfP}}_N\phi\|_{L^2_{\uptau,y}}^2&\lesssim \sum_{N\geq N_0} 2^{-2N}\| \jap{\uprho}^{\alpha_0}\partial_\uptau {\bf P}_{\leq N} a\|_{L^\infty_{\uptau,y}}^2\|\jap{\uprho}^{-\alpha_0}\partial^2_\uptau {\tilde{\bfP}}_N\phi\|_{L^2_{\uptau,y}}^2\\
&\lesssim \epsilon\sum_{N\geq N_0}\|\jap{\uprho}^{-\alpha_0}\partial_\uptau {\tilde{\bfP}}_{N}\phi\|_{L^2_{\uptau,y}}^2\lesssim \epsilon \|\phi\|_{LE}^2.
\end{split}
\end{align*}
The $L^1_\uptau L^2_y$ estimate is similar. The corresponding term for $c$ is bounded as
\begin{align*}
\begin{split}
\sum_{N\geq N_0}\|[P_N, {\bf P}_{\leq N}c]\partial^2_{\uptau y}{\tilde{\bfP}}_N\phi\|_{L^1_\uptau L^2_y}^2&\lesssim \sum_{N\geq N_0} 2^{-2N}\| \jap{\uptau}^{\frac{1}{2}+\delta}\partial_\uptau {\bf P}_{\leq N} c\|_{L^2_\uptau L^\infty_{y}}^2\|\jap{\uptau}^{-\frac{1}{2}-\delta}\partial^2_{\uptau y} {\tilde{\bfP}}_N\phi\|_{L^2_{\uptau,y}}^2\\
&\lesssim \epsilon\sum_{N\geq N_0}2^{-2N}\big(\|\partial_\uptau {\tilde{\bfP}}_{N}\jap{\uptau}^{-\frac{1}{2}-\delta}\partial_y\phi\|_{L^2_{\uptau,y}}^2+\|[\jap{\uptau}^{-\frac{1}{2}-\delta},{\tilde{\bfP}}_N \partial_\uptau]\partial_y\phi\|_{L^2_{\uptau,y}}^2\big)\\
&\lesssim \epsilon \|\jap{\uptau}^{-\frac{1}{2}-\delta}\partial_y\phi\|_{L^2_{\uptau,y}}^2+\epsilon\sum_{N\geq N_0}2^{-2N}\|\partial_\uptau\jap{\uptau}^{-\frac{1}{2}-\delta}\|_{L^2_\uptau L^\infty_y}^2\|\partial_y\phi\|_{L^\infty_\uptau L^2_y}^2\\
&\lesssim \epsilon \|\partial_y\phi\|_{L^\infty_\uptau L^2_y}^2,
\end{split}
\end{align*}
which is bounded by the energy. In this calculation $[\jap{\uptau}^{-\frac{1}{2}-\delta},{\tilde{\bfP}}_N \partial_\uptau]$ was treated in a similar way as in the proof of \eqref{eq:commtempgen1}. The estimate for the same sum in $L^2_{\uptau,y}$ is the same, except that in the first step we place $\jap{\uptau}^{\frac{1}{2}+\delta}\partial_\uptau {\bf P}_{\leq N} c$ in $L^\infty_{\uptau,y}$ instead of $L^2_\uptau L^\infty_y$. Let us now turn to the more difficult term $b$:
\begin{align*}
\begin{split}
\sum_{N\geq N_0}\|[P_N, {\bf P}_{\leq N}b]\partial^2_y{\tilde{\bfP}}_N\phi\|_{L^1_\uptau L^2_y}^2&\lesssim \sum_{N\geq N_0} 2^{-2N}\| \jap{\uptau}^{\frac{1}{2}+\delta}\partial_\uptau {\bf P}_{\leq N} b\|_{L^2_\uptau L^\infty_{y}}^2\|\jap{\uptau}^{-\frac{1}{2}-\delta}\partial^2_{ y} {\tilde{\bfP}}_N\phi\|_{L^2_{\uptau,y}}^2\\
&\lesssim\epsilon\sum_{N \geq N_0}2^{-2N}\|\jap{\uptau}^{-\frac{1}{2}-\delta}\mathcal P_{\mathrm{ell}} {\tilde{\bfP}}_N\phi\|_{L^2_{\uptau,y}}^2\\
&\lesssim\epsilon\sum_{N\geq N_0}2^{-2N}\Big(\|\jap{\uptau}^{-\frac{1}{2}-\delta}[\mathcal P_{\mathrm{ell}} ,{\tilde{\bfP}}_N]\phi\|_{L^2_{\uptau,y}}^2+\|\jap{\uptau}^{-\frac{1}{2}-\delta}{\tilde{\bfP}}_N \mathcal P_\uptau\phi\|_{L^2_{\uptau,y}}^2\\
&\phantom{\lesssim\epsilon\sum_{N\geq N_0}2^{-2N}\Big(}+\|\jap{\uptau}^{-\frac{1}{2}-\delta}{\tilde{\bfP}}_N \mathcal P\phi\|_{L^2_{\uptau,y}}^2\Big).
\end{split}
\end{align*}
The last term above is bounded by the right-hand side of \eqref{eq:calAbound} using the equation for $\phi=\uppsi_{\mathrm{near}}$. In the line before last, the first term can be absorbed by $\mathcal A$, while the second term is treated as in the contribution of $a$ and $c$ above. The contribution of $\sum_{N\geq N_0}\|[P_N, {\bf P}_{\leq N}b]\partial^2_y{\tilde{\bfP}}_N\phi\|_{L^2_{\uptau,y}}^2$ can be handled similarly, where now we bound $\jap{\uptau}^{\frac{1}{2}+\delta}\partial_\uptau {\bf P}_{\leq N} b$ in $L^{\infty}_{\uptau,y}$, instead of $L^2_\uptau L^\infty_y$, in the first step.
We next consider the second term in the decomposition \eqref{eq:comfreqdecomp1}. For $a$ we have
\begin{align*}
\begin{split}
\sum_{N\geq N_0}\|[P_N,{\bf P}_{N}a]\partial^2_\uptau P_{\leq N}\phi\|_{L^2_\uptau L^2_y}^2&\lesssim \sum_{N\geq N_0} 2^{-2N}\|\jap{\uprho}^{\alpha_0} 2^N\partial_\uptau {\bf P}_{ N} a\|_{L^\infty_{\uptau,y}}^2\|\jap{\uprho}^{-\alpha_0}\partial_\uptau P_{\leq N}\phi\|_{L^2_{\uptau,y} }^2\\
&\lesssim \|\jap{\uprho}^{\alpha_0}\partial^2_\uptau a\|_{L^\infty_{\uptau,y}}^2\|\jap{\uprho}^{-\alpha_0}\partial_\uptau\phi\|_{L^2_{\uptau,y}}^2\sum_{N\geq N_0} 2^{-2N}\lesssim \epsilon \|\phi\|_{LE}^2.
\end{split}
\end{align*}
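Here, and in several of the estimates below, we have implicitly used the frequency localization of ${\bf P}_N$ in the $\uptau$ variable, in the form of the following Bernstein-type bound (valid for $N$ large enough that ${\bf P}_N$ localizes to a dyadic annulus; note that the $\uprho$ weights commute with the $\uptau$-frequency projections):
\begin{align*}
\begin{split}
2^N\|\jap{\uprho}^{\alpha_0}\partial_\uptau{\bf P}_Na\|_{L^\infty_{\uptau,y}}\lesssim \|\jap{\uprho}^{\alpha_0}\partial^2_\uptau {\bf P}_N a\|_{L^\infty_{\uptau,y}}\lesssim\|\jap{\uprho}^{\alpha_0}\partial^2_\uptau a\|_{L^\infty_{\uptau,y}}.
\end{split}
\end{align*}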
The estimate in $L^1_\uptau L^2_y$ is similar. For $c$,
\begin{align*}
\begin{split}
\sum_{N\geq N_0}\|[P_N, {\bf P}_{ N}c]\partial^2_{\uptau y}P_{\leq N}\phi\|_{L^1_\uptau L^2_y}^2&\lesssim \sum_{N\geq N_0} 2^{-2N}\| \jap{\uptau}^{\frac{1}{2}+\delta}\partial_\uptau {\bf P}_{N} c\|_{L^2_\uptau L^\infty_{y}}^2\|\jap{\uptau}^{-\frac{1}{2}-\delta}\partial^2_{\uptau y} P_{\leq N}\phi\|_{L^2_{\uptau,y}}^2\\
&\lesssim \sum_{N\geq N_0}2^{-2N}\|\partial_\uptau P_{\leq N}\jap{\uptau}^{-\frac{1}{2}-\delta}\partial_y\phi\|_{L^2_{\uptau,y}}^2\|\jap{\uptau}^{\frac{1}{2}+\delta}\partial_\uptau{\bf P}_Nc\|_{L^2_\uptau L^\infty_y}^2\\
&\quad+\sum_{N\geq N_0}2^{-2N}\|[\jap{\uptau}^{-\frac{1}{2}-\delta},P_{\leq N} \partial_\uptau]\partial_y\phi\|_{L^2_{\uptau,y}}^2\|\jap{\uptau}^{\frac{1}{2}+\delta}\partial_\uptau{\bf P}_Nc\|_{L^2_\uptau L^\infty_y}^2\\
&\lesssim\|\partial_y\phi\|_{L^\infty_\uptau L^2_y}^2\sum_{N\geq N_0}\|\jap{\uptau}^{\frac{1}{2}+\delta}{\bf P}_Nc\|_{L^2_\uptau L^\infty_y}^2\lesssim \epsilon \|\partial_y\phi\|_{L^\infty_\uptau L^2_y}^2.
\end{split}
\end{align*}
The $L^2_{\uptau,y}$ estimate is similar. For $b$ we again use elliptic estimates and argue more carefully as
\begin{align*}
\begin{split}
\sum_{N\geq N_0}\|[P_N, {\bf P}_{ N}b]\partial^2_y P_{\leq N}\phi\|_{L^1_\uptau L^2_y}^2&\lesssim \sum_{N\geq N_0} 2^{-2N}\| \jap{\uptau}^{\frac{1}{2}+\delta}\partial_\uptau {\bf P}_{N} b\|_{L^2_\uptau L^\infty_{y}}^2 \|\jap{\uptau}^{-\frac{1}{2}-\delta}[\mathcal P_{{\mathrm{ell}}}, P_{\leq N}]\phi\|_{L^2_{\uptau,y}}^2\\
&\quad+ \sum_{N\geq N_0} 2^{-2N}\| \jap{\uptau}^{\frac{1}{2}+\delta}\partial_\uptau {\bf P}_{N} b\|_{L^2_\uptau L^\infty_{y}}^2 \|\jap{\uptau}^{-\frac{1}{2}-\delta} P_{\leq N}\mathcal P_\uptau\phi\|_{L^2_{\uptau,y}}^2\\
&\quad+ \sum_{N\geq N_0} 2^{-2N}\| \jap{\uptau}^{\frac{1}{2}+\delta}\partial_\uptau {\bf P}_{N} b\|_{L^2_\uptau L^\infty_{y}}^2 \|\jap{\uptau}^{-\frac{1}{2}-\delta}P_{\leq N}\mathcal P\phi\|_{L^2_{\uptau,y}}^2.
\end{split}
\end{align*}
The last two lines can be handled as in the case of $a$ and $c$ above and using the equation for $\phi=\uppsi_{\mathrm{near}}$. The first line is bounded by
\begin{align*}
\begin{split}
&\sum_{N\geq N_0} 2^{-2N}\| \jap{\uptau}^{\frac{1}{2}+\delta}\partial_\uptau {\bf P}_{N} b\|_{L^2_\uptau L^\infty_{y}}^2 \|[\mathcal P_{{\mathrm{ell}}}, P_{\leq N_0}]\phi\|_{L^2_{\uptau,y}}^2\\
&+\sum_{N\geq N_0}\sum_{N_0< M\leq N} 2^{-2N}\| \jap{\uptau}^{\frac{1}{2}+\delta}\partial_\uptau {\bf P}_{N} b\|_{L^2_\uptau L^\infty_{y}}^2 \|[\mathcal P_{{\mathrm{ell}}}, P_{M}]\phi\|_{L^2_{\uptau,y}}^2\\
&\lesssim \mathcal A \sum_{N\geq N_0}2^{-2N}\| \jap{\uptau}^{\frac{1}{2}+\delta}\partial_\uptau {\bf P}_{N} b\|_{L^2_\uptau L^\infty_{y}}^2\lesssim \epsilon \mathcal A,
\end{split}
\end{align*}
which can be absorbed. The estimate in $\sum_{N\geq N_0}\|[P_N, {\bf P}_{ N}b]\partial^2_yP_{\leq N}\phi\|_{ L^2_{\uptau,y}}^2$ is similar.
We consider the third term in the decomposition \eqref{eq:comfreqdecomp1} next. For $a$ we have
\begin{align*}
\begin{split}
\sum_{N\geq N_0}\Big\|\sum_{M> N+4}[P_N,{\bf P}_Ma]\partial^2_\uptau P_M\phi\Big\|_{L^2_{\uptau,y}}^2&\lesssim \sum_{N\geq N_0}\Big(\sum_{M> N}2^{M-N}\|\jap{\uprho}^{\alpha_0}\partial_\uptau{\bf P}_Ma\|_{L^\infty_{\uptau,y}}\|\jap{\uprho}^{-\alpha_0}\partial_\uptau P_M\phi\|_{L^2_{\uptau,y}}\Big)^2\\
&\lesssim \|\jap{\uprho}^{\alpha_0}\partial_\uptau^3a\|_{L^\infty_{\uptau,y}}^2\|\phi\|_{LE}^2\sum_{N\geq N_0}2^{-4N}\lesssim \epsilon \|\phi\|_{LE}^2.
\end{split}
\end{align*}
The estimate in $L^1_{\uptau}L^2_y$ is similar. For $c$,
\begin{align*}
\begin{split}
&\sum_{N\geq N_0}\Big\|\sum_{M> N+4}[P_N, {\bf P}_{M}c]\partial^2_{\uptau y}P_{M}\phi\Big\|_{L^1_\uptau L^2_y}^2\\
&\lesssim \sum_{N\geq N_0} 2^{-2N}\Big(\sum_{M> N}2^M\| \jap{\uptau}^{\frac{1}{2}+\delta}\partial_\uptau {\bf P}_{M} c\|_{L^2_\uptau L^\infty_{y}}\|\jap{\uptau}^{-\frac{1}{2}-\delta}\partial_{y} {\bf P}_{M}\phi\|_{L^2_{\uptau,y}}\Big)^2\\
&\lesssim \|\jap{\uptau}^{\frac{1}{2}+\delta}\partial_\uptau^2c\|_{L^2_\uptau L^\infty_y}^2\|\partial_y\phi\|_{L^\infty_\uptau L^2_y}^2\lesssim \epsilon\|\partial_y\phi\|_{L^\infty_\uptau L^2_y}^2,
\end{split}
\end{align*}
and the contribution in $L^2_{\uptau,y}$ is bounded in a similar way. For $b$ we again apply elliptic estimates. Except for the commutator with $\mathcal P_{\mathrm{ell}}$, the resulting terms can be bounded as was done for $a$ and $c$, using the equation for $\phi=\uppsi_{\mathrm{near}}$. The commutator term is bounded, by similar arguments as earlier, as follows (we give the $L^1_\uptau L^2_y$ estimate; the $L^2_{\uptau,y}$ estimate is similar):
\begin{align*}
\begin{split}
&\sum_{N\geq N_0} 2^{-2N}\Big(\sum_{M\geq N}\|\jap{\uptau}^{\frac{1}{2}+\delta}\partial_\uptau {\bf P}_{M} b\|_{L^2_\uptau L^\infty_{y}} \|[\mathcal P_{{\mathrm{ell}}}, P_{M}]\phi\|_{L^2_{\uptau,y}}\Big)^2\\
&\lesssim \|\jap{\uptau}^{\frac{1}{2}+\delta}\partial_\uptau^2b\|_{L^2_\uptau L^\infty_y}^2\sum_{M> N_0}\|[\mathcal P_{\mathrm{ell}},P_M]\phi\|_{L^2_{\uptau,y}}^2\lesssim \epsilon \mathcal A,
\end{split}
\end{align*}
which can be absorbed.
The first term on the second line of \eqref{eq:comfreqdecomp1} can be further decomposed as
\begin{align*}
\begin{split}
\sum_{N\geq N_0}[P_N,P_{\leq N+4}a]\partial_{\uptau}^2P_{\leq N}\phi&=\sum_{N=N_0}^{N_0+10}[P_{N},P_{\leq N +4}a]P_{\leq N}\partial_{\uptau}^2\phi+\sum_{N>N_0+10}[P_N,{\bf P}_Na]\partial_{\uptau}^2P_{\leq N-4}\phi\\
&\quad+\sum_{N>N_0+10}\sum_{M=N-3}^{N}[P_N,P_{\leq N+3}a]\partial_{\uptau}^2P_M\phi,
\end{split}
\end{align*}
with similar decompositions for the contributions of $b$ and $c$. Each of the terms above can then be bounded using similar arguments to the earlier ones. Similarly, the sum over $N\geq N_0$ of the last term in \eqref{eq:comfreqdecomp1} can be decomposed as (with similar decompositions for $b$ and $c$ contributions)
\begin{align*}
\begin{split}
\sum_{N=N_0}^{N_0+10}\sum_{M>N+4}[P_N,P_Ma]\partial_{\uptau}^2P_{\leq N+3}\phi+\sum_{N=N_0}^{N_0+10}\sum_{M>N+4}\sum_{L=N-3}^{N+3}[P_N,P_Ma]\partial_{\uptau}^2P_{L}\phi,
\end{split}
\end{align*}
and each of these terms can be estimated as before.
It remains to estimate the first term on each line of the definition of $\mathcal A$. For this we use the following decomposition, where ${\tilde{\bfP}}_{\leq N_0}=P_{\leq N_0+2}$,
\begin{align}\label{eq:comfreqdecomp2}
\begin{split}
[P_{\leq N_0},a]\partial^2_\uptau\phi&=[P_{\leq N_0}, {\bf P}_{\leq N_0}a]\partial^2_\uptau {\tilde{\bfP}}_{\leq N_0}\phi+\sum_{N>N_0+2}[P_{\leq N_0},{\bf P}_Na]\partial^2_\uptau P_N\phi\\
&\quad+\sum_{N>N_0+4}[P_{\leq N_0},P_Na]\partial^2_\uptau {\tilde{\bfP}}_{\leq N_0}\phi,
\end{split}
\end{align}
and similarly for $b$ and $c$. The contribution of the first term is bounded as
\begin{align*}
\begin{split}
\|[P_{\leq N_0}, {\bf P}_{\leq N_0}a]\partial^2_\uptau{\tilde{\bfP}}_{\leq N_0}\phi\|_{L^2_{\uptau,y}}^2\lesssim \|\jap{\uprho}^{\alpha_0} \partial_\uptau {\bf P}_{\leq N_0}a\|_{L^\infty_{\uptau,y}}^2\|\jap{\uprho}^{-\alpha_0}\partial_\uptau^2{\tilde{\bfP}}_{\leq N_0}\phi\|_{L^2_{\uptau,y}}^2\lesssim \epsilon \|\phi\|_{LE}^2,
\end{split}
\end{align*}
with the $L^1_\uptau L^2_y$ estimate being similar. The corresponding contribution for $c$ is bounded as
\begin{align*}
\begin{split}
\|[P_{\leq N_0}, {\bf P}_{\leq N_0}c]\partial^2_{\uptau y}{\tilde{\bfP}}_{\leq N_0}\phi\|_{L^1_\uptau L^2_y}^2\lesssim \|\jap{\uptau}^{\frac{1}{2}+\delta}\partial_\uptau {\bf P}_{\leq N_0}c\|_{L^2_\uptau L^\infty_y}^2\|\jap{\uptau}^{-\frac{1}{2}-\delta}\partial^2_{\uptau y}{\tilde{\bfP}}_{\leq N_0}\phi\|_{L^2_{\uptau,y}}^2\lesssim \epsilon \|\partial_y\phi\|_{L^\infty_\uptau L^2_y}^2,
\end{split}
\end{align*}
and the corresponding estimate in $L^2_{\uptau,y}$ is handled similarly. For $b$ we apply elliptic estimates and, as usual, use similar arguments as for $a$ and $c$ above, as well as the equation for $\phi=\uppsi_{\mathrm{near}}$, to handle all terms except the commutator with $\mathcal P_{\mathrm{ell}}$, while this commutator error is bounded by $\epsilon \mathcal A$, which can be absorbed.
Turning to the second term in \eqref{eq:comfreqdecomp2}, the contribution of $a$ in $L^2_{\uptau,y}$ is bounded as
\begin{align*}
\begin{split}
\Big\|\sum_{N> N_0+2}[P_{\leq N_0},{\bf P}_Na]\partial^2_\uptau P_N\phi\Big\|_{L^2_{\uptau ,y}}^2&\lesssim \Big(\sum_{N> N_0}\|\jap{\uprho}^{\alpha_0}2^N\partial_\uptau{\bf P}_Na\|_{L^\infty_{\uptau,y}}\|\jap{\uprho}^{-\alpha_0}\partial_\uptau P_N\phi\|_{L^2_{\uptau,y} }\Big)^2\\
&\lesssim \|\jap{\uprho}^{\alpha_0}\partial_\uptau^3a\|_{L^\infty_{\uptau,y}}^2\|\phi\|_{LE}^2\lesssim \epsilon \|\phi\|_{LE}^2,
\end{split}
\end{align*}
and the $L^1_\uptau L^2_y$ contribution is similar. For $c$,
\begin{align*}
\begin{split}
\Big\|\sum_{N> N_0+2}[P_{\leq N_0},{\bf P}_Nc]\partial^2_{\uptau y}P_N\phi\Big\|_{L^1_\uptau L^2_y}^2&\lesssim \Big(\sum_{N\geq N_0}\|\jap{\uptau}^{\frac{1}{2}+\delta}2^N\partial_\uptau {\bf P}_{N}c\|_{L^2_\uptau L^\infty_y}\|\jap{\uptau}^{-\frac{1}{2}-\delta}\partial_yP_N\phi\|_{L_{\uptau,y}^2}\Big)^2\\
&\lesssim \epsilon \|\partial_y\phi\|_{L^\infty_\uptau L^2_y}^2.
\end{split}
\end{align*}
The estimate for the corresponding term in $L^2_{\uptau,y}$ is similar. The contribution of $b$ is also handled using elliptic estimates as usual. Finally, the last term in \eqref{eq:comfreqdecomp2} can also be handled using similar arguments as above, and we omit the details. This completes the proof of \eqref{eq:calAbound}.
\end{proof}
We can now prove Lemma~\ref{lem:phinearfar1}.
\begin{proof}[Proof of Lemma~\ref{lem:phinearfar1}]
In this proof, when working in the $(\uptau,\uprho,\upomega)$ coordinates, we will use $y$ for the coordinates $(\uprho,\upomega)$. The notation $\iint$ is used for integration over $\Sigma_{t_1}^{t_2}$. We will also use the notation $I=[t_1,t_2]$ with $LE(I)=LE(\Sigma_{t_1}^{t_2})$ and $LE=LE(\cup_{\uptau}\Sigma_{\uptau})$. Note that it suffices to estimate $\|P_{\leq N_0}\uppsi_{{\mathrm{near}},{\mathrm{far}}}\|_{LE(I)}$. By a multiplier argument similar to the one in the proof of Lemma~\ref{lem:phifar1}, with multipliers $Q_j^{\mathrm{int}} P_{\leq N_0}\uppsi_{{\mathrm{near}},{\mathrm{far}}}$ and $Q_j^{\mathrm{ext}} P_{\leq N_0}\uppsi_{{\mathrm{near}},{\mathrm{far}}}$, $j=1,2$, we get (recall that, by a slight abuse of notation, we write $P_{\leq N_0}\mathcal P_{\mathrm{pert}}\uppsi_{{\mathrm{near}}}$ for $P_{\leq N_0}\uppsi$, where $\uppsi=\mathcal P_{\mathrm{pert}}\uppsi_{{\mathrm{near}}}$ in~$\Sigma_{t_1}^{t_2}$ and $\uppsi=0$ in $(\Sigma_{t_1}^{t_2})^c$)
\begin{align*}
\begin{split}
\|P_{\leq N_0}\uppsi_{{\mathrm{near}},{\mathrm{far}}}\|_{LE(I)}^2&\lesssim \|V_{\mathrm{far}}\uppsi_{\mathrm{far}}\|_{LE^\ast(I)}^2+\sum_{j=1}^2\Big|\iint (P_{\leq N_0}\mathcal P_{\mathrm{pert}}\uppsi_{\mathrm{near}})(Q^{\mathrm{int}}_jP_{\leq N_0}\uppsi_{{\mathrm{near}},{\mathrm{far}}})\sqrt{|\hbar|}\mathrm{d} y\mathrm{d}\tau\Big|\\
&\quad +\sum_{j=1}^2\Big|\iint (P_{\leq N_0}\mathcal P_{\mathrm{pert}}\uppsi_{\mathrm{near}})(Q^{\mathrm{ext}}_jP_{\leq N_0}\uppsi_{{\mathrm{near}},{\mathrm{far}}})\sqrt{|\hbar|}\mathrm{d} y\mathrm{d}\tau\Big|\\
&\lesssim\|\uppsi\|_{E(\Sigma_{t_1})}^2+\|f\|_{L^1_\tau L^2_y(\Sigma_{t_1}^{t_2})}^2\\
&\quad+\sum_{j=1}^2\Big|\iint (P_{\leq N_0}\mathcal P_{\mathrm{pert}}\uppsi_{\mathrm{near}})(Q^{\mathrm{int}}_jP_{\leq N_0}\uppsi_{{\mathrm{near}},{\mathrm{far}}})\sqrt{|\hbar|}\mathrm{d} y\mathrm{d}\tau\Big|\\
&\quad +\sum_{j=1}^2\Big|\iint (P_{\leq N_0}\mathcal P_{\mathrm{pert}}\uppsi_{\mathrm{near}})(Q^{\mathrm{ext}}_jP_{\leq N_0}\uppsi_{{\mathrm{near}},{\mathrm{far}}})\sqrt{|\hbar|}\mathrm{d} y\mathrm{d}\tau\Big|.
\end{split}
\end{align*}
Here $Q_1^{\mathrm{int}}$ and $Q_1^{\mathrm{ext}}$ denote the first order interior and exterior multipliers (see \eqref{eq:psifarLEDtemp1.5} and \eqref{eq:LEDintbulk1}), respectively, and $Q_2^{\mathrm{int}}$ and $Q_2^{\mathrm{ext}}$ are the corresponding order zero multipliers (see \eqref{eq:psifarLEDtemp2.5} and \eqref{eq:LEDintbulk2}; we have not used the notation $P$ for the order zero multipliers to prevent confusion with the frequency projections). Therefore, our task is reduced to estimating
\begin{align}\label{eq:mainerror}
\begin{split}
\Big|\iint (P_{\leq N_0}\mathcal P_{\mathrm{pert}}\uppsi_{\mathrm{near}})(Q_j^{{\mathrm{int}},{\mathrm{ext}}} P_{\leq N_0}\uppsi_{{\mathrm{near}},{\mathrm{far}}})\sqrt{|\hbar|}\mathrm{d} y\mathrm{d}\tau\Big|.
\end{split}
\end{align}
For the interior we place both terms in $LE$ where for $P_{\leq N_0}\mathcal P_{\mathrm{pert}} \uppsi_{{\mathrm{near}}}$ we use the frequency projection to remove one derivative. More precisely, let $\chi_K\equiv\chi_K(\uprho)$ be a cutoff to a large compact region $K$ containing the supports of $Q_j^{\mathrm{int}}$, and let $\chi_{K^c}=1-\chi_K$. Then (note that we can always insert a factor of $\uprho$ in the order zero terms using the same argument as in Lemma~\ref{lem:LEDhighfreq1})
\begin{align*}
\begin{split}
&\Big|\iint \chi_K(P_{\leq N_0}\mathcal P_{\mathrm{pert}}\uppsi_{\mathrm{near}})(Q_j^{\mathrm{int}} P_{\leq N_0}\uppsi_{{\mathrm{near}},{\mathrm{far}}})\sqrt{|\hbar|}\mathrm{d} y\mathrm{d}\tau\Big|\\
&\lesssim \delta\|P_{\leq N_0}\uppsi_{{\mathrm{near}},{\mathrm{far}}}\|_{LE(I)}^2+C_\delta\iint(P_{\leq N_0}(\chi_K\mathcal P_{\mathrm{pert}}\uppsi_{\mathrm{near}}))^2\sqrt{|\hbar|}\mathrm{d} y\mathrm{d}\tau.
\end{split}
\end{align*}
The first term can be absorbed if $\delta$ is small. For the last term we write $\mathcal P_{\mathrm{pert}}$ as $$\mathcal P_{\mathrm{pert}}=\mathring{\uppi}_q^{\mu\nu}\partial^2_{\mu\nu}+\mathring{\uppi}_l^\mu\partial_\mu+\mathring{\uppi}_c,$$
where $|\mathring{\uppi}_q|,|\mathring{\uppi}_l|,|\mathring{\uppi}_c|\lesssim \epsilon$. The contribution of $\mathring{\uppi}_l^\mu\partial_\mu + \mathring{\uppi}_c$ is then already bounded by $\epsilon\|\uppsi_{\mathrm{near}}\|_{LE}^2$, which is admissible. For the second order part, if at least one of $\mu$ or $\nu$ is $\uptau$ then we can use the frequency projection to drop a $\partial_\uptau$ derivative and argue as before. When both derivatives are with respect to the spatial variables $y$, we use elliptic estimates based on $\mathcal P$. That is, let
\begin{align*}
\begin{split}
\mathcal P= \mathcal P_\uptau+\mathcal P_{\mathrm{ell}},
\end{split}
\end{align*}
where $\mathcal P_\uptau$ contains the terms with at least one $\partial_\uptau$ derivative and $\mathcal P_{\mathrm{ell}}$ is the remainder which is elliptic. Then, with ${\tilde{\chi}}_K\equiv{\tilde{\chi}}_K(\uprho)$ a cutoff with slightly larger support than $\chi_K$, recalling equation~\eqref{eq:calPphinear1}, and using elliptic regularity, we can bound $\|P_{\leq N_0}(\chi_K\mathring{\uppi}^{yy}\partial_y^2\uppsi_{\mathrm{near}})\|_{L^2_{\tau,y}(I)}$ by
\begin{align*}
\begin{split}
& \|[P_{\leq N_0},\chi_K \mathring{\uppi}^{yy}\partial_y^2]\uppsi_{\mathrm{near}}\|_{L^2_{\tau,y}(I)}+\epsilon\|{\tilde{\chi}}_K\mathcal P_{\mathrm{ell}} P_{\leq N_0}\uppsi_{\mathrm{near}}\|_{L^2_{\tau,y}(I)}+\epsilon\|{\tilde{\chi}}_K P_{\leq N_0}\uppsi_{\mathrm{near}}\|_{L^2_{\tau,y}(I)}\\
&\lesssim \|[P_{\leq N_0},\chi_K \mathring{\uppi}^{yy}\partial_y^2]\uppsi_{\mathrm{near}}\|_{L^2_{\tau,y}(I)}+\epsilon\|[{\tilde{\chi}}_K\mathcal P_{\mathrm{ell}},P_{\leq N_0}]\uppsi_{\mathrm{near}}\|_{L^2_{\tau,y}(I)}\\
&\quad+\epsilon\|P_{\leq N_0}({\tilde{\chi}}_K\mathcal P_\uptau\uppsi_{\mathrm{near}})\|_{L^2_{\tau,y}(I)}+\epsilon\|P_{\leq N_0}({\tilde{\chi}}_KV_{\mathrm{far}}\uppsi_{\mathrm{far}})\|_{L^2_{\tau,y}(I)}+\epsilon\|{\tilde{\chi}}_K P_{\leq N_0}\uppsi_{\mathrm{near}}\|_{L^2_{\tau,y}(I)}.
\end{split}
\end{align*}
The last line contributes an admissible error by using the frequency projection to drop $\partial_\uptau$ derivatives in the first term and using Lemma~\ref{lem:phifar1} for the second term. The line before last with the commutators also contributes an admissible error by Lemma~\ref{lem:LEDcommutator1}.
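The elliptic estimate invoked here, and in the analogous steps above, is, schematically, the standard interior estimate for the uniformly elliptic operator $\mathcal P_{\mathrm{ell}}$, applied for each fixed $\uptau$ with cutoffs as above:
\begin{align*}
\begin{split}
\|\chi_K\partial^2_y v\|_{L^2_y}\lesssim \|{\tilde{\chi}}_K\mathcal P_{\mathrm{ell}} v\|_{L^2_y}+\|{\tilde{\chi}}_K v\|_{L^2_y},
\end{split}
\end{align*}
used with $v=P_{\leq N_0}\uppsi_{\mathrm{near}}$ and then integrated in $\uptau$.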
For the exterior, note that by choosing the compact set $K$ above sufficiently large, we may assume that the coordinates $(\uptau,\uprho,\upomega)$ and $(\tau,r,\theta)$ agree in $K^c$. We will therefore use the notation $(\tau,r,\theta)$ instead of $(\uptau,\uprho,\upomega)$. In this region we need an extra weighted energy estimate for $\uppsi_{\mathrm{near}}$ which allows us to put more $r$ weights on $(\partial_r+\frac{n-1}{2r})\uppsi_{\mathrm{near}}$ (see \underline{\emph{Case 2}} below). The details are as follows. First recall that $\mathcal P_{\mathrm{pert}}$ is of the form
\begin{align*}
\begin{split}
{\mathring a}\partial_\tau(\partial_r+\frac{n-1}{2r})+{\mathring a}^{\mu\nu}\partial_{\mu\nu}+{\mathring b}^\mu \partial_\mu +{\mathring c},
\end{split}
\end{align*}
where for some $\gamma>1$,
\begin{align*}
\begin{split}
|{\mathring a}|, |{\mathring a}^{yy}|\lesssim \epsilon \tau^{-\gamma}, \quad |\partial_y{\mathring a}^{yy}|, |{\mathring a}^{\tau y}|, |{\mathring b}^y|\lesssim \epsilon\tau^{-\gamma}r^{-1},\quad |{\mathring a}^{\tau\tau}|, |{\mathring b}^\tau|\lesssim \epsilon \tau^{-\gamma}r^{-2},\quad |{\mathring c}|\lesssim \epsilon\tau^{-\gamma} r^{-4},
\end{split}
\end{align*}
while $Q_j^{\mathrm{ext}}$ have the structure
\begin{align*}
\begin{split}
Q_j^{\mathrm{ext}}=\beta^\tau \partial_\tau+\beta^y \partial_y +\frac{\beta}{r},
\end{split}
\end{align*}
with
\begin{align*}
\begin{split}
|\beta|, |\beta^\tau|, |\beta^y|\lesssim 1, \quad |\partial_y\beta|, |\partial_y\beta^\tau|, |\partial_y\beta^y|\lesssim r^{-1-\alpha},
\end{split}
\end{align*}
and $\beta$, $\beta^\tau$, $\beta^y$ supported outside of some large compact set ${\tilde{K}}$. We now consider the contribution of all combinations to \eqref{eq:mainerror}. Below we will use the shorthand notation $\angles{f}{g}=\iint fg \sqrt{|\hbar|}\mathrm{d} y \mathrm{d} \tau.$
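In each of the cases below, after distributing $r$ and $\tau$ weights, the error terms are estimated by the weighted Cauchy--Schwarz inequality: for any weight $w>0$ (and with the measure implicit in $\angles{\cdot}{\cdot}$),
\begin{align*}
\begin{split}
|\angles{f}{g}|\leq \|wf\|_{L^2_{\tau,y}}\|w^{-1}g\|_{L^2_{\tau,y}}\leq \tfrac{1}{2}\epsilon^{-1} \|wf\|_{L^2_{\tau,y}}^2+\tfrac{1}{2}\epsilon\|w^{-1}g\|_{L^2_{\tau,y}}^2.
\end{split}
\end{align*}
In \underline{\emph{Case 2}} below, for instance, this is applied with $w=\chi_{{\tilde{K}}^c}r^{\frac{1+\alpha}{2}}$.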
{\underline{\emph{Case 1:}}} First we consider the contribution of $\partial_\tau$ in $Q_j^{\mathrm{ext}}$ and every term except ${\mathring a}\partial_\tau(\partial_r+\frac{n-1}{2r})$ in $\mathcal P_{\mathrm{pert}}$. For ${\mathring a}^{yy}$, after one integration by parts in $\partial_y$ we get
\begin{align*}
\begin{split}
&|\angles{P_{\leq N_0}{\mathring a}^{yy}\partial_y^2\uppsi_{\mathrm{near}}}{\beta^\tau\partial_\tau P_{\leq N_0}\uppsi_{{\mathrm{near}},{\mathrm{far}}}}|\\
&\lesssim \angles{|P_{\leq N_0}{\mathring a}^{yy}\partial_r\uppsi_{\mathrm{near}}|}{|\partial_\tau P_{\leq N_0}\partial_r\uppsi_{{\mathrm{near}},{\mathrm{far}}}|}+ \angles{|P_{\leq N_0}{\mathring a}^{yy}\partial_r\uppsi_{\mathrm{near}}|}{|\partial_\tau P_{\leq N_0}r^{-1}\uppsi_{{\mathrm{near}},{\mathrm{far}}}|}\\
&\quad+ \angles{|P_{\leq N_0} r\partial_y{\mathring a}^{yy}\partial_r\uppsi_{\mathrm{near}}|}{|\partial_\tau P_{\leq N_0}r^{-1}\uppsi_{{\mathrm{near}},{\mathrm{far}}}|}+ \epsilon(\sup_{\tau}\|\uppsi_{\mathrm{near}}\|_{E}+\sup_{\tau\in I}\|P_{\leq N_0}\uppsi_{{\mathrm{near}},{\mathrm{far}}}\|_E)\\
&\lesssim \epsilon(\sup_{\tau}\|\uppsi_{\mathrm{near}}\|_{E}+\sup_{\tau\in I}\|P_{\leq N_0}\uppsi_{{\mathrm{near}},{\mathrm{far}}}\|_E),
\end{split}
\end{align*}
where in the last estimate we have used $P_{\leq N_0}$ to drop the $\tau$ derivatives and the $\tau$ decay of ${\mathring a}^{yy}$ and $r\partial_y{\mathring a}^{yy}$ to integrate in $\tau$. The contributions of the other terms in ${\mathring a}^{\mu\nu}\partial_{\mu\nu}$, ${\mathring b}^\mu \partial_\mu$, ${\mathring c}$ are handled similarly, where instead of integrating by parts we use the decay in $r$ and move one factor of $r^{-1}$ to $\uppsi_{{\mathrm{near}},{\mathrm{far}}}$.
{\underline{\emph{Case 2:}}} For the contribution of $\partial_\tau$ to $Q_j^{\mathrm{ext}}$ and ${\mathring a}\partial_\tau(\partial_r+\frac{n-1}{2r})$ to $\mathcal P_{\mathrm{pert}}$ we use the $\tau$ decay and smallness of ${\mathring a}$ to estimate (here for $\uppsi_{{\mathrm{near}},{\mathrm{far}}}$ we do not drop $\partial_\tau$)
\begin{align*}
\begin{split}
&|\angles{P_{\leq N_0}{\mathring a}\partial_\tau(\partial_r+\frac{n-1}{2r})\uppsi_{\mathrm{near}}}{\beta\partial_\tau P_{\leq N_0}\uppsi_{{\mathrm{near}},{\mathrm{far}}}}|\\
&\lesssim \epsilon\|P_{\leq N_0}\uppsi_{{\mathrm{near}},{\mathrm{far}}}\|_{LE(I)}^2+\epsilon^{-1}\|\chi_{{\tilde{K}}^c}P_{\leq N_0}{\mathring a}\partial_\tau r^{\frac{1+\alpha}{2}}(\partial_r+\frac{n-1}{2r})\uppsi_{\mathrm{near}}\|_{L^2_{\tau,y}(I)}^2\\
&\lesssim \epsilon\|P_{\leq N_0}\uppsi_{{\mathrm{near}},{\mathrm{far}}}\|_{LE(I)}^2+\epsilon \sup_{\tau}\|\chi_{{\tilde{K}}^c}r^{\frac{1+\alpha}{2}}(\partial_r+\frac{n-1}{2r})\uppsi_{\mathrm{near}}\|_{L^2_y}^2.
\end{split}
\end{align*}
For the last term we will need a weighted energy estimate for $\uppsi_{\mathrm{near}}$, which we will discuss below.
{\underline{\emph{Case 3:}}} Next we consider the contribution of $\beta^y\partial_y+r^{-1}\beta$ to $Q_j^{\mathrm{ext}}$ and the contribution of every term except ${\mathring a}^{yy}\partial^2_y$ to $\mathcal P_{\mathrm{pert}}$. Here for $\uppsi_{{\mathrm{near}},{\mathrm{far}}}$ we simply drop $P_{\leq N_0}$ while for $\uppsi_{\mathrm{near}}$ we use $P_{\leq N_0}$ to drop one $\partial_\tau$ in the second order terms in $\mathcal P_{\mathrm{pert}}$. Using the decay and smallness of the coefficients of $\mathcal P_{\mathrm{pert}}$, the corresponding contributions are then bounded by $$\epsilon(\sup_{\tau}\|\uppsi_{\mathrm{near}}\|_E^2+\sup_{\tau\in I}\|P_{\leq N_0}\uppsi_{{\mathrm{near}},{\mathrm{far}}}\|_E^2).$$
{\underline{\emph{Case 4:}}} For the contribution of ${\mathring a}^{yy}\partial^2_y$ to $\mathcal P_{\mathrm{pert}}$ and $\beta^y\partial_y+r^{-1}\beta$ to $Q_j^{\mathrm{ext}}$ we use the equation $\mathcal P\uppsi_{\mathrm{near}} = -V_{\mathrm{far}} \uppsi_{\mathrm{far}}$ and elliptic estimates for $\mathcal P_{\mathrm{ell}}$. Note that, unlike the case of the interior above, in view of the $r$ decay of the order zero term in $\mathcal P_{\mathrm{ell}}$ we do not need to add an $L^2_y$ term when applying elliptic estimates for $\mathcal P_{\mathrm{ell}}$. Using this observation and Lemma~\ref{lem:LEDcommutator1}, and with ${\tilde{K}}_1$ a compact region contained in ${\tilde{K}}$, the corresponding contribution is bounded by
\begin{align*}
\begin{split}
&\|P_{\leq N_0}\uppsi_{{\mathrm{near}},{\mathrm{far}}}\|_{L^\infty_\tau E(I)}\|\chi_{{\tilde{K}}^c}P_{\leq N_0}{\mathring a}^{yy}\partial_y^2\uppsi_{\mathrm{near}}\|_{L_\tau^1L_y^2(I)}\\
&\lesssim \|P_{\leq N_0}\uppsi_{{\mathrm{near}},{\mathrm{far}}}\|_{L^\infty_\tau E(I)}(\|\chi_{{\tilde{K}}^c}[P_{\leq N_0},{\mathring a}^{yy}]\partial_y^2\uppsi_{\mathrm{near}}\|_{L_\tau^1L_y^2(I)}+\epsilon\|\tau^{-\gamma}\chi_{{\tilde{K}}^c}\partial_y^2P_{\leq N_0}\uppsi_{{\mathrm{near}}}\|_{L^1_\tau L^2_y(I)})\\
&\lesssim \|P_{\leq N_0}\uppsi_{{\mathrm{near}},{\mathrm{far}}}\|_{L^\infty_\tau E(I)}(\|\chi_{{\tilde{K}}^c}[P_{\leq N_0},{\mathring a}^{yy}]\partial_y^2\uppsi_{\mathrm{near}}\|_{L_\tau^1L_y^2(I)}+\epsilon\|\tau^{-\gamma}\chi_{{\tilde{K}}^c_1}\mathcal P_{\mathrm{ell}} P_{\leq N_0}\uppsi_{{\mathrm{near}}}\|_{L^1_\tau L^2_y(I)})\\
&\lesssim \epsilon\|P_{\leq N_0}\uppsi_{{\mathrm{near}},{\mathrm{far}}}\|_{L^\infty_\tau E(I)}^2+\epsilon^{-1} \|\chi_{{\tilde{K}}^c}[P_{\leq N_0},{\mathring a}^{yy}]\partial_y^2\uppsi_{\mathrm{near}}\|_{L_\tau^1L_y^2(I)}^2+\epsilon\|\tau^{-\gamma} P_{\leq N_0}V_{\mathrm{far}}\uppsi_{{\mathrm{far}}}\|_{L^1_\tau L_y^2(I)}^2\\
&\quad+\epsilon\|\tau^{-\gamma}P_{\leq N_0}\chi_{{\tilde{K}}_1^c}\mathcal P_\tau\uppsi_{\mathrm{near}}\|_{L^1_\tau E(I)}^2+\epsilon \|\tau^{-\gamma}[\chi_{{\tilde{K}}^c}\mathcal P_{\mathrm{ell}}, P_{\leq N_0}]\uppsi_{{\mathrm{near}}}\|_{L^1_\tau L_y^2(I)}^2\\
&\lesssim \epsilon(\sup_{\tau\in I}\|\uppsi_{\mathrm{near}}\|_E^2+\sup_{\tau\in I}\|P_{\leq N_0}\uppsi_{{\mathrm{near}},{\mathrm{far}}}\|_E^2+\sup_{\tau\in I}\|\uppsi_{\mathrm{far}}\|_E^2+\|\uppsi_{\mathrm{far}}\|_{LE}^2).
\end{split}
\end{align*}
Putting everything together we have shown that
\begin{align*}
\begin{split}
\|P_{\leq N_0}\uppsi_{{\mathrm{near}},{\mathrm{far}}}\|_{LE(I)}&\lesssim \|f\|_{LE^\ast(I)}+\|\uppsi\|_{L^\infty_\tau E(I)}+\epsilon\|\uppsi_{{\mathrm{near}}}\|_{LE}+\epsilon\|P_{\leq N_0}\uppsi_{{\mathrm{near}},{\mathrm{far}}}\|_{L^\infty_\tau E(I)}\\
&\quad+\epsilon\|\chi_{{\tilde{K}}^c}r^{\frac{1+\alpha}{2}}(\partial_r+\frac{n-1}{2r})\uppsi_{\mathrm{near}}\|_{L^\infty_\tau L^2_y}.
\end{split}
\end{align*}
Using a similar argument with the multiplier $\partial_\tau P_{\leq N_0}\uppsi_{{\mathrm{near}},{\mathrm{far}}}$, we can also prove an energy estimate for $P_{\leq N_0}\uppsi_{{\mathrm{near}},{\mathrm{far}}}$ which allows us to absorb the last term on the first line above, and get
\begin{align}\label{eq:LEDnearcircletemp1}
\begin{split}
\|P_{\leq N_0}\uppsi_{{\mathrm{near}},{\mathrm{far}}}\|_{LE(I)}&\lesssim \|f\|_{LE^\ast(I)}+\|\uppsi\|_{L^\infty_\tau E(I)}+\epsilon\|\uppsi_{{\mathrm{near}}}\|_{LE}\\
&\quad+\epsilon\|\chi_{{\tilde{K}}^c}r^{\frac{1+\alpha}{2}}(\partial_r+\frac{n-1}{2r})\uppsi_{\mathrm{near}}\|_{L^\infty_\tau L^2_y}.
\end{split}
\end{align}
It remains to control $\|\chi_{{\tilde{K}}^c} r^{\frac{1+\alpha}{2}}(\partial_r+\frac{n-1}{2r})\uppsi_{\mathrm{near}}\|_{L^\infty_\tau L^2_y}$. This is achieved by the same argument as we will later use to prove $r^p$-energy estimates in the exterior. The more complete version of the argument is worked out in Lemma~\ref{lem:rpmult1} below, which holds independently of the results in this section. Without repeating the details, multiplying the equation $\mathcal P\uppsi_{\mathrm{near}}=-V_{\mathrm{far}}\uppsi_{\mathrm{far}}$ by $\chi_{{\tilde{K}}^c}r^p(\partial_r+\frac{n-1}{2r})\uppsi_{\mathrm{near}}$ (alternatively, by $\chi_{{\tilde{K}}^c}L({\tilde{r}}^{\frac{n-1}{2}}\uppsi_{\mathrm{near}})$), with $1+\alpha\leq p \leq 2$, and a few integrations by parts yield the estimate (recall that $V_{\mathrm{far}}$ is compactly supported)
\begin{align*}
\begin{split}
\|\chi_{{\tilde{K}}^c}r^{p/2}(\partial_r+\frac{n-1}{2r})\uppsi_{\mathrm{near}}\|_{L^\infty_\tau L^2_y}&\lesssim \|\uppsi_{\mathrm{near}}\|_{LE}+ \|\uppsi_{\mathrm{near}}\|_{L^\infty_\tau E}\\
&\quad+|\angles{V_{\mathrm{far}} \uppsi_{\mathrm{far}}}{\chi_{{\tilde{K}}^c} r^{p}(\partial_r+\frac{n-1}{2r})\uppsi_{\mathrm{near}}}_{L^2_{\tau,y}}|\\
&\lesssim \|\uppsi_{\mathrm{near}}\|_{LE}+ \|\uppsi_{\mathrm{near}}\|_{L^\infty_\tau E}+\|\uppsi_{\mathrm{far}}\|_{LE}.
\end{split}
\end{align*}
Plugging this back into \eqref{eq:LEDnearcircletemp1} completes the proof of the lemma.
\end{proof}
\section{Exterior}\label{sec:exterior}
This section contains the proof of the decay estimates on $\phi$. We will use the $r^p$ weighted vectorfield method to derive decay estimates for the energy of $\phi$ and improved decay for the energy of $T^k\phi$, $k=1,2$. Then, using elliptic and interpolation estimates, we deduce pointwise bounds from the decay of the energies, and in particular complete the proof of Proposition~\ref{prop:bootstrapphi1}. We will start by deriving some commutator formulas and expressing the equation in terms of the vectorfields $L$, ${\underline L}$, $\Omega$.
\subsection{Frame Decomposition of the Operator and Commutation Relations}
We use the relations derived in Section~\ref{sec:profile2} to calculate the commutators among the vectorfields $T$, $L$, ${\underline L}$, $\Omega$. The calculations in this subsection are valid in the hyperboloidal region $\mathcal C_{\mathrm{hyp}}$, where these vectorfields are defined.
\begin{lemma}\label{lem:commutators1}
The following commutation relations hold among the vectorfields $L,{\underline L},\Omega$:
\begin{align}\label{eq:VFcomm1}
\begin{split}
&[{\underline L},L]= \calO({\dot{\wp}}^{\leq 2})L+\calO({\dot{\wp}}^{\leq 2}r^{-2}){\underline L}+\calO({\dot{\wp}}^{\leq 2} r^{-2})\Omega,\\
&[{\underline L},\Omega]= \calO({\dot{\wp}})\Omega+\calO({\dot{\wp}} r) L+\calO({\dot{\wp}}){\underline L},\\
&[L,\Omega]=\calO({\dot{\wp}} r^{-1})L+\calO({\dot{\wp}} r^{-2})\Omega+\calO({\dot{\wp}} r^{-2}){\underline L},\\
&[\Omega_{ij},\Omega_{k\ell}]=\delta_{[ki}\Omega_{\ell j]},\\
&[T,L]=-[T,{\underline L}]= \calO({\dot{\wp}}^{\leq 2})L+\calO({\dot{\wp}}^{\leq 2}r^{-2}){\underline L}+\calO({\dot{\wp}}^{\leq 2} r^{-2})\Omega,\\
&[T,\Omega]=\calO({\dot{\wp}})\Omega+\calO({\dot{\wp}} r) L+\calO({\dot{\wp}}){\underline L}.
\end{split}
\end{align}
\end{lemma}
\begin{proof}
Starting with $[{\underline L},L]$, note that since ${\underline L} = 2T-L$ this commutator is the same as $2[T,L]$. The desired structure then follows from the relations \eqref{eq:VFexpansion1}, \eqref{eq:Tprecise1}, and \eqref{eq:VFbasis1}. For $[\Omega_{ij},\Omega_{k\ell}]$ the desired relation follows from the fact that $\Omega_{ij}=\Lambda_\ell {\tilde{\Omega}}_{ij}$, where ${\tilde{\Omega}}_{ij}=y^i\partial_{y^j}-y^j\partial_{y^i}$ are tangential to the reference hyperboloids $\mathcal H_\sigma$. This tangentiality implies that $[\Omega_{ij},\Omega_{k\ell}]=\Lambda_\ell[{\tilde{\Omega}}_{ij},{\tilde{\Omega}}_{k\ell}]$. By the same reasoning, to compute the commutator $[L,\Omega]$ we decompose ${\tilde{L}}=\partial_{y^0}+\frac{y^i}{|y'|}\partial_{y^i}$ as (recall that $T= \Lambda_\ell \partial_{y^0}$)
\begin{align*}
\begin{split}
{\tilde{L}} = (1-{\tilde{c}})\partial_{y^0}+\frac{y^i}{|y'|}\partial_{y^i}+{\tilde{c}}\, \partial_{y^0},
\end{split}
\end{align*}
where ${\tilde{c}}$ is chosen so that ${\tilde{L}}_{\mathrm{temp}}:={\tilde{L}}-{\tilde{c}}\,\partial_{y^0}$ is tangential to $\mathcal H_\sigma$. Let $c={\tilde{c}}(y(\tau,r,\theta))$, where $y$ is as in \eqref{eq:yx1}. Since ${\tilde{\Omega}}$ is tangential to $\mathcal H_\sigma$, it follows that $\Omega c = ({\tilde{\Omega}}{\tilde{c}})(y(\tau,r,\theta))$, and therefore
\begin{align*}
\begin{split}
[L,\Omega] &= \Lambda_\ell[{\tilde{L}}_{\mathrm{temp}},{\tilde{\Omega}}]+[cT,\Omega]=\Lambda_\ell[{\tilde{L}}_{\mathrm{temp}},{\tilde{\Omega}}]+(\Omega c) T+c[T,\Omega]\\
&=\Lambda_\ell([{\tilde{L}}_{\mathrm{temp}},{\tilde{\Omega}}]+({\tilde{\Omega}} {\tilde{c}}) \partial_{y^0})+c[T,\Omega]=\Lambda_\ell[{\tilde{L}},{\tilde{\Omega}}]+c[T,\Omega]\\
&=c[T,\Omega].
\end{split}
\end{align*}
To find ${\tilde{c}}$, recall that the defining equation of $\mathcal H_\sigma$, with $\sigma=\sigma(\tau)$, is $(y^0-\gamma^{-1}\tau)^2-|y'|^2=1$, and therefore ${\tilde{c}}$ must be such that ${\tilde{L}}_{\mathrm{temp}}$ is Minkowski perpendicular to $(y^0-\gamma^{-1}\tau,y')$. It follows that
\begin{align*}
\begin{split}
{\tilde{c}} = 1-\frac{|y'|}{y^0-\gamma^{-1}\tau}=\calO({\tilde{r}}^{-2}),
\end{split}
\end{align*}
where the last estimate follows from \eqref{eq:yx1}. The last two observations together give $[L,\Omega]=\calO(r^{-2})[T,\Omega]$, and the desired expansion follows from \eqref{eq:VFexpansion1}, \eqref{eq:Tprecise1}, and \eqref{eq:VFbasis1}. Finally, the expansions for $[{\underline L},\Omega]$, $[T,L]$, $[T,{\underline L}]$, and $[T,\Omega]$ follow from the previous ones and the observation that ${\underline L}=2T-L$.
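For the reader's convenience, we note that the size of ${\tilde{c}}$ can also be verified directly: taking the positive root in the defining relation $(y^0-\gamma^{-1}\tau)^2-|y'|^2=1$ gives $y^0-\gamma^{-1}\tau=\sqrt{1+|y'|^2}$, and hence
\begin{align*}
\begin{split}
{\tilde{c}} = 1-\frac{|y'|}{\sqrt{1+|y'|^2}}=\frac{1}{\sqrt{1+|y'|^2}\,\big(\sqrt{1+|y'|^2}+|y'|\big)}=\calO(|y'|^{-2}),
\end{split}
\end{align*}
which is $\calO({\tilde{r}}^{-2})$ since, by \eqref{eq:yx1}, $|y'|$ is comparable to ${\tilde{r}}$ in this region.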
\end{proof}
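For concreteness, the antisymmetrized relation $[\Omega_{ij},\Omega_{k\ell}]=\delta_{[ki}\Omega_{\ell j]}$ in \eqref{eq:VFcomm1} unpacks to the standard $\mathfrak{so}(n)$ commutation relation: a direct computation with ${\tilde{\Omega}}_{ij}=y^i\partial_{y^j}-y^j\partial_{y^i}$ gives
\begin{align*}
\begin{split}
[{\tilde{\Omega}}_{ij},{\tilde{\Omega}}_{k\ell}]=\delta_{jk}{\tilde{\Omega}}_{i\ell}+\delta_{i\ell}{\tilde{\Omega}}_{jk}-\delta_{ik}{\tilde{\Omega}}_{j\ell}-\delta_{j\ell}{\tilde{\Omega}}_{ik},
\end{split}
\end{align*}
and, by the tangentiality argument in the proof above, the same relation holds for $\Omega_{ij}=\Lambda_\ell{\tilde{\Omega}}_{ij}$.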
With these preparations we can turn to the calculation of the wave operator $\Box_m$ in terms of $\{L, {\underline L}, \Omega\}$.
\begin{lemma}\label{lem:BoxVF1}
For any function $\uppsi$
\begin{align}\label{eq:BoxVF1}
\begin{split}
\Box_m\uppsi&= -{\underline L} L \uppsi -\frac{n-1}{2{\tilde{r}}}({\underline L}-L)\uppsi+\frac{1}{{\tilde{r}}^2}\sum_{\Omega}\Omega^2 \uppsi\\
&\quad+\calO({\dot{\wp}}^{\leq 2})L\uppsi+\calO({\dot{\wp}}^{\leq 2} r^{-2}){\underline L} \uppsi+\calO({\dot{\wp}}^{\leq 2} r^{-2})\Omega \uppsi,
\end{split}
\end{align}
and, with ${\tilde{\uppsi}}:={\tilde{r}}^{\frac{n-1}{2}}\uppsi$,
\begin{align}\label{eq:BoxVF2}
\begin{split}
{\tilde{r}}^{\frac{n-1}{2}}\Box_m\uppsi&= -{\underline L} L {\tilde{\uppsi}}+\frac{1}{{\tilde{r}}^2}\sum_{\Omega}\Omega^2{\tilde{\uppsi}}-\frac{(n-1)(n-3)}{4{\tilde{r}}^2}{\tilde{\uppsi}}\\
&\quad+\calO({\dot{\wp}}^{\leq 2})L{\tilde{\uppsi}}+\calO({\dot{\wp}}^{\leq 2} r^{-2}){\underline L}{\tilde{\uppsi}}+\calO({\dot{\wp}}^{\leq 2} r^{-2})\Omega{\tilde{\uppsi}} +\calO({\dot{\wp}}^{\leq 2}r^{-2}){\tilde{\uppsi}}.
\end{split}
\end{align}
\end{lemma}
\begin{proof}
We work with the null frame $\{e_I\}=\{{\underline L}, L, e_A;~A=1,2\}$, where $e_A=\Lambda_\ell {\tilde{e}}_A$, and $\{{\tilde{e}}_1,{\tilde{e}}_2\}$ is a local orthonormal frame for the reference spheres on $\mathcal H_\sigma$. With $\Gamma_{IJ}^K$ the corresponding connection coefficients, that is, $\nabla_{e_I}e_J=\Gamma_{IJ}^Ke_K$, and $D_I$ denoting scalar differentiation along $e_I$, the wave operator can be written as
\begin{align}\label{eq:BoxVFtemp1}
\begin{split}
\Box_m \uppsi&=(m^{-1})^{IJ} (D_I D_J-\Gamma_{IJ}^KD_K)\uppsi\\
&=-\frac{1}{2}{\underline L} L \uppsi-\frac{1}{2}L{\underline L} \uppsi + \sum_A D_A^2 \uppsi-(m^{-1})^{IJ}(\Gamma^L_{IJ}L\uppsi+\Gamma^{\underline L}_{IJ}{\underline L} \uppsi+\Gamma^A_{IJ}D_A\uppsi )\\
&=-{\underline L} L \uppsi-\frac{1}{2}[L,{\underline L}] \uppsi + \frac{1}{{\tilde{r}}^2}\sum_\Omega \Omega^2 \uppsi-(m^{-1})^{IJ}(\Gamma^L_{IJ}L\uppsi+\Gamma^{\underline L}_{IJ}{\underline L} \uppsi+\Gamma^A_{IJ}D_A\uppsi ).
\end{split}
\end{align}
Here to pass to the last line we have used the fact that $e_A$ are tangential to the foliation, so $\sum_A D_A^2\uppsi(y(\tau,r,\theta))= \sum_A ({\tilde{e}}_A^2\uppsi)(y(\tau,r,\theta))$. To calculate the connection coefficients we use Koszul's formula, which for a frame with constant metric components reads
\begin{align*}
\begin{split}
\Gamma_{IJ}^M= \frac{1}{2}(m^{-1})^{KM}(m([e_I,e_J],e_K)-m([e_I,e_K],e_J)-m([e_J,e_K],e_I)).
\end{split}
\end{align*}
These can now be computed using Lemma~\ref{lem:commutators1}. Here note that the commutators with $e_A$ can be calculated in the same way as in the proof of Lemma~\ref{lem:commutators1}, by noting that $e_A$ is a linear combination of $\Omega_{ij}$ with coefficients of size $r^{-1}$. In particular, with ${\tilde{L}},{\tilde{\Lbar}}$ such that $L=\Lambda_\ell {\tilde{L}}$ and ${\underline L}=\Lambda_\ell {\tilde{\Lbar}}$ (see~\eqref{eq:VFdef0}),
\begin{align*}
\begin{split}
&[{\underline L},e_A]= \Lambda_\ell[{\tilde{\Lbar}},{\tilde{e}}_A]+\calO({\dot{\wp}} r^{-1})\Omega+\calO({\dot{\wp}} ) L+\calO({\dot{\wp}} r^{-1}){\underline L},\\
&[L,e_A]=\Lambda_\ell[{\tilde{L}},{\tilde{e}}_A]+\calO({\dot{\wp}} r^{-2})L+\calO({\dot{\wp}} r^{-3})\Omega+\calO({\dot{\wp}} r^{-3}){\underline L},\\
&[e_A,e_B]=\Lambda_\ell[{\tilde{e}}_A,{\tilde{e}}_B].
\end{split}
\end{align*}
It follows from this and Lemma~\ref{lem:commutators1} that
\begin{align*}
\begin{split}
&\Gamma^L_{{\underline L} L}=-\frac{1}{2}m([{\underline L},L],{\underline L})=\calO({\dot{\wp}}^{\leq 2}),\\
&\Gamma^L_{L{\underline L}}=-\frac{1}{4}(m([L,{\underline L}],{\underline L})-m([L,{\underline L}],{\underline L}))=0,\\
&\Gamma^{L}_{AA}=\frac{1}{2}m([e_A,{\underline L}],e_A).
\end{split}
\end{align*}
For the last term, for the purpose of deriving \eqref{eq:BoxVF1} it suffices to observe that $m([e_A,{\underline L}],e_A)=m([{\tilde{e}}_A,{\tilde{\Lbar}}],{\tilde{e}}_A)+\calO({\dot{\wp}})$. But, for \eqref{eq:BoxVF2} we will need the better estimate
\begin{align}\label{eq:GammaLAAtemp1}
\begin{split}
\Gamma^{L}_{AA}= \frac{1}{2}m([{\tilde{e}}_A,{\tilde{\Lbar}}],{\tilde{e}}_A)+\calO({\dot{\wp}} r^{-1}).
\end{split}
\end{align}
To prove \eqref{eq:GammaLAAtemp1}, using our usual notation as in the proof of Lemma~\ref{lem:commutators1}, note that
\begin{align*}
\begin{split}
\frac{1}{2}m([e_A,{\underline L}],e_A)=m([e_A,T],e_A)-\frac{1}{2}m([e_A,L],e_A)=m([e_A,T],e_A)+\frac{1}{2}m([{\tilde{e}}_A,{\tilde{\Lbar}}],{\tilde{e}}_A)+\calO({\dot{\wp}} r^{-2}).
\end{split}
\end{align*}
For the first term on the right, we write $[e_A,T]=\big(e_A(T^\mu)-T(e_A^\mu)\big)\partial_\mu$. On the other hand, in view of the expansions \eqref{eq:VFbasis1}, we have $m(\partial_r,e_A)=\calO(r^{-2})$ and $m(\partial_\tau,e_A)=0$. It then follows from~\eqref{eq:Tprecise1} and the expansion $e_A=\calO(r^{-1})\partial_a+\calO(1)\partial_r$ that
\begin{align*}
\begin{split}
m((e_AT^\mu)\partial_\mu,e_A)=\calO({\dot{\wp}} r^{-1}),
\end{split}
\end{align*}
and
\begin{align*}
\begin{split}
m(T(e_A^\mu)\partial_\mu,e_A)=\calO({\dot{\wp}} r^{-1})+\calO(1)m((\partial_\tau e_A^\mu)\partial_\mu,e_A).
\end{split}
\end{align*}
For the last term observe that by \eqref{eq:extchangeofvars1}, the metric components $m_{rr}=1-\frac{r}{\jap{r}}$, $m_{ra}=0$, and $m_{ab}=r^2\mathring{{\slashed{g}}}_{ab}$ are independent of $\tau$, so since $m(e_A,e_A)=1$ and $e_A=e_A^r\partial_r+e_A^a\partial_a$,
\begin{align*}
\begin{split}
m((\partial_\tau e_A^\mu)\partial_\mu,e_A)=m_{\mu\nu}e_A^\nu\partial_\tau e_A^\mu=\frac{1}{2}\partial_\tau (m_{\mu\nu}e_A^\mu e_A^\nu)=0.
\end{split}
\end{align*}
This completes the proof of \eqref{eq:GammaLAAtemp1}. Returning to the other connection coefficients, by the Koszul formula,
\begin{align*}
\begin{split}
&\Gamma_{{\underline L} L}^{\underline L}=0,\quad \Gamma_{L{\underline L}}^{\underline L}=\frac{1}{2}m([{\underline L},L],L)=\calO({\dot{\wp}}^{\leq 2} r^{-2}),\\
&\Gamma_{AA}^{\underline L}=\frac{1}{2}m([e_A,L],e_A)=\frac{1}{2}m([{\tilde{e}}_A,{\tilde{L}}],{\tilde{e}}_A)+\calO({\dot{\wp}} r^{-2}),
\end{split}
\end{align*}
and
\begin{align*}
\begin{split}
&\Gamma^B_{L{\underline L}}=\frac{1}{2}(m([L,{\underline L}],e_B)-m([L,e_B],{\underline L})-m([{\underline L},e_B],L))=\calO({\dot{\wp}} r^{-1}),\\
&\Gamma^B_{{\underline L} L}=\frac{1}{2}(m([{\underline L},L],e_B)-m([{\underline L},e_B],L)-m([L,e_B],{\underline L}))=\calO({\dot{\wp}} r^{-1}),\\
&\Gamma^B_{AA}=-m([{\tilde{e}}_A,{\tilde{e}}_B],{\tilde{e}}_A).
\end{split}
\end{align*}
Inserting the expressions we have derived for the connection coefficients into \eqref{eq:BoxVFtemp1} and using the relations \eqref{eq:VFcomm1} gives
\begin{align}\label{eq:BoxVFtemp2}
\begin{split}
\Box_m\uppsi&= -{\underline L} L \uppsi -\frac{n-1}{2{\tilde{r}}}({\underline L}-L)\uppsi+\frac{1}{{\tilde{r}}^2}\sum_{\Omega}\Omega^2 \uppsi-[L,{\underline L}]^LL\uppsi\\
&\quad+\calO({\dot{\wp}}^{\leq 2}r^{-1})L\uppsi+\calO({\dot{\wp}}^{\leq 2} r^{-2}){\underline L} \uppsi+\calO({\dot{\wp}}^{\leq 2} r^{-2})\Omega \uppsi.
\end{split}
\end{align}
The expansion \eqref{eq:BoxVF1} follows from \eqref{eq:BoxVFtemp2} and the fact that, by \eqref{eq:VFcomm1}, $[L,{\underline L}]^L=\calO({\dot{\wp}}^{\leq2})$, but we will need a more precise expression for this commutator to derive \eqref{eq:BoxVF2}. To prove \eqref{eq:BoxVF2} first note that, with the notation ${\tilde{\uppsi}}={\tilde{r}}^{\frac{n-1}{2}}\uppsi$,
\begin{align*}
\begin{split}
{\tilde{r}}^{\frac{n-1}{2}}\Box_m \uppsi=\Box_m{\tilde{\uppsi}}-\Box_m({\tilde{r}}^{\frac{n-1}{2}})\uppsi - 2 (m^{-1})^{IJ} (e_I {\tilde{r}}^{\frac{n-1}{2}})(e_J\uppsi)=:I+II+III.
\end{split}
\end{align*}
In expanding the terms $I$, $II$, $III$ we use the notation ${\mathrm{Err}}$ to denote error terms which are acceptable on the right-hand side of \eqref{eq:BoxVF2}. Starting with $I$, by \eqref{eq:BoxVF1},
\begin{align*}
\begin{split}
I=-{\underline L} L{\tilde{\uppsi}}+\frac{1}{{\tilde{r}}^2}\sum\Omega^2{\tilde{\uppsi}}-\frac{n-1}{2{\tilde{r}}}{\tilde{r}}^{\frac{n-1}{2}}({\underline L}-L)\uppsi-\frac{n-1}{2{\tilde{r}}}\uppsi({\underline L}-L){\tilde{r}}^{\frac{n-1}{2}}+{\mathrm{Err}}.
\end{split}
\end{align*}
For $II$ we use \eqref{eq:VFcomm1}, the more precise expression \eqref{eq:BoxVFtemp2} for $\Box_m$ (applied to ${\tilde{r}}^{\frac{n-1}{2}}$), and the usual decomposition $L=L_{\mathrm{temp}}+\calO(r^{-2})T$, with $L_{\mathrm{temp}}{\tilde{r}}=1$, to write
\begin{align*}
\begin{split}
II&=\frac{n-1}{2}\uppsi{\underline L} {\tilde{r}}^{\frac{n-1}{2}-1}+\frac{n-1}{2{\tilde{r}}} \uppsi({\underline L}-L){\tilde{r}}^{\frac{n-1}{2}}+{\mathrm{Err}}\\
&=\frac{n-1}{2{\tilde{r}}}\uppsi({\underline L}-L){\tilde{r}}^{\frac{n-1}{2}}-\frac{(n-1)(n-3)}{4{\tilde{r}}^2}{\tilde{\uppsi}}+\frac{n-1}{2{\tilde{r}}} \uppsi({\underline L}+\frac{n-1}{2{\tilde{r}}}){\tilde{r}}^{\frac{n-1}{2}}\\
&\quad+\frac{n-1}{2}{\tilde{\uppsi}}({\underline L}-\frac{1}{{\tilde{r}}}){\tilde{r}}^{-1}+\uppsi[L,{\underline L}]^LL{\tilde{r}}^{\frac{n-1}{2}}+{\mathrm{Err}}.
\end{split}
\end{align*}
To treat the last line as an error, we need to use the more precise expansions \eqref{eq:Lprecise1} and \eqref{eq:tilrprecise1} to get (note that each term by itself is only $\calO({\dot{\wp}}^{\leq 2}r^{-1}){\tilde{\uppsi}}$)
\begin{align*}
\begin{split}
&\frac{n-1}{2}{\tilde{\uppsi}}({\underline L}-\frac{1}{{\tilde{r}}}){\tilde{r}}^{-1}+\uppsi[L,{\underline L}]^LL{\tilde{r}}^{\frac{n-1}{2}}\\
&=\frac{n-1}{2{\tilde{r}}^2}(L{\tilde{r}} -1){\tilde{\uppsi}}+\frac{n-1}{{\tilde{r}}}[L,T]^L(L{\tilde{r}}-1){\tilde{\uppsi}}+\frac{n-1}{{\tilde{r}}}([L,T]^L-{\tilde{r}}^{-1}T{\tilde{r}}){\tilde{\uppsi}}=\calO({\dot{\wp}}^{\leq 2}r^{-2}){\tilde{\uppsi}},
\end{split}
\end{align*}
and hence
\begin{align*}
\begin{split}
II=\frac{n-1}{2{\tilde{r}}}\uppsi({\underline L}-L){\tilde{r}}^{\frac{n-1}{2}}-\frac{(n-1)(n-3)}{4{\tilde{r}}^2}{\tilde{\uppsi}}+\frac{n-1}{2{\tilde{r}}}\uppsi({\underline L}+\frac{n-1}{2{\tilde{r}}}){\tilde{r}}^{\frac{n-1}{2}}+{\mathrm{Err}}.
\end{split}
\end{align*}
For the term $III$, since $e_A{\tilde{r}}=0$,
\begin{align*}
\begin{split}
III&=(L{\tilde{r}}^{\frac{n-1}{2}}){\underline L} \uppsi+({\underline L} {\tilde{r}}^{\frac{n-1}{2}})L\uppsi\\
&=\frac{n-1}{2{\tilde{r}}}{\tilde{r}}^{\frac{n-1}{2}}({\underline L}-L)\uppsi+(L\uppsi)({\underline L}+\frac{n-1}{2{\tilde{r}}}){\tilde{r}}^{\frac{n-1}{2}}+{\mathrm{Err}}.
\end{split}
\end{align*}
Equation \eqref{eq:BoxVF2} now follows by adding the expansions for $I$, $II$, and $III$, and using the observation that ${\tilde{r}}^{\frac{n-1}{2}}(L+\frac{n-1}{2{\tilde{r}}})\uppsi=L{\tilde{\uppsi}}+{\mathrm{Err}}$.
\end{proof}
Lemma~\ref{lem:BoxVF1} and equations \eqref{eq:callP2} and \eqref{eq:ErrcallP1} yield the following representation for $\mathcal P_{\mathrm{graph}} \uppsi$:
\begin{align}
{\tilde{r}}^{\frac{n-1}{2}}\mathcal P_{\mathrm{graph}} \uppsi&= -(1+\calO(r^{-4})){\underline L} L{\tilde{\uppsi}}+\frac{1+\calO(r^{-5})}{{\tilde{r}}^2}\sum\Omega^2{\tilde{\uppsi}}-\frac{(n-1)(n-3)}{4{\tilde{r}}^2}{\tilde{\uppsi}}\nonumber\\
&\quad+\calO(r^{-4})L^2{\tilde{\uppsi}}+\calO(r^{-4}){\underline L}^2{\tilde{\uppsi}}+\calO(r^{-5})\Omega L{\tilde{\uppsi}}+\calO(r^{-5}){\underline L}\Omega{\tilde{\uppsi}} \label{eq:calPVF1}\\
&\quad+\calO({\dot{\wp}}+r^{-5})L{\tilde{\uppsi}}+\calO({\dot{\wp}} r^{-2}+r^{-5}){\underline L}{\tilde{\uppsi}}+\calO({\dot{\wp}} r^{-2}+r^{-6})\Omega{\tilde{\uppsi}}+\calO({\dot{\wp}} r^{-2}+r^{-6}){\tilde{\uppsi}}.\nonumber
\end{align}
This is the representation we will use to derive multiplier identities in the exterior. Before starting on these multiplier identities, we calculate the equations satisfied by higher order derivatives of ${\tilde{\uppsi}}$. Our goal is to calculate the analogue of equation \eqref{eq:calPVF1} satisfied by ${\tilde{r}} L$, $T$, and $\Omega$ applied to ${\tilde{\uppsi}}$. Since, due to the presence of parameters, these vectorfields do not commute, we start by deriving an estimate for the commutator of a string of them, valid in the hyperboloidal region where they are defined.
\begin{lemma}\label{lem:commutators2}
If $X_1,\dots,X_k\in\{{\tilde{r}} L,\Omega,T\}$ are $k$ vectorfields with $k_1$ factors of ${\tilde{r}} L$, $k_2$ factors of $\Omega$ and $k_3$ factors of $T$, then for any function $\uppsi$,
\begin{align*}
\begin{split}
X_k\dots X_1\uppsi= ({\tilde{r}} L)^{k_1}\Omega^{k_2} T^{k_3}\uppsi+\sum_{j_1+j_2+j_3\leq k-1}\calO({\dot{\wp}}^{\leq 2k-2(j_1+j_2+j_3)})({\tilde{r}} L)^{j_1}\Omega^{j_2}T^{j_3}\uppsi.
\end{split}
\end{align*}
\end{lemma}
\begin{proof}
The proof is by induction on $k$. For $k=2$ the statement follows from Lemma~\ref{lem:commutators1}. For the induction step, suppose there are ${\tilde{k}}_1$, ${\tilde{k}}_2$, and ${\tilde{k}}_3$ factors of ${\tilde{r}} L$, $\Omega$, and $T$, respectively, among $X_1,\dots,X_{k-1}$. Then
\begin{align*}
\begin{split}
X_k\dots X_1\uppsi=X_k({\tilde{r}} L)^{{\tilde{k}}_1}\Omega^{{\tilde{k}}_2} T^{{\tilde{k}}_3}\uppsi+X_k\sum_{j_1+j_2+j_3\leq k-2}\calO({\dot{\wp}}^{\leq 2k-2-2(j_1+j_2+j_3)})({\tilde{r}} L)^{j_1}\Omega^{j_2}T^{j_3}\uppsi=I+II.
\end{split}
\end{align*}
The term $II$ can be put in the desired form using the induction hypothesis. For the first term, if $X_k={\tilde{r}} L$, if $X_k=\Omega$ and ${\tilde{k}}_1=0$, or if $X_k=T$ and ${\tilde{k}}_1={\tilde{k}}_2=0$, then this is already of the desired form. If $X_k=\Omega$ and ${\tilde{k}}_1\neq0$, then by Lemma~\ref{lem:commutators1} the first term is
\begin{align*}
\begin{split}
({\tilde{r}} L) \Omega ({\tilde{r}} L)^{{\tilde{k}}_1-1}\Omega^{{\tilde{k}}_2} T^{{\tilde{k}}_3}\uppsi+(\calO({\dot{\wp}} r^{-1}){\tilde{r}} L +\calO({\dot{\wp}} r^{-2})\Omega+\calO({\dot{\wp}} r^{-2})T)({\tilde{r}} L)^{{\tilde{k}}_1-1}\Omega^{{\tilde{k}}_2} T^{{\tilde{k}}_3}\uppsi.
\end{split}
\end{align*}
The second term can again be put in the desired form by the induction hypothesis. Similarly by the induction hypothesis we can rearrange the first $k-1$ derivatives in the first term to put this in the desired form. If $X_k=T$ and ${\tilde{k}}_1=0$ but ${\tilde{k}}_2\neq 0$ then by Lemma~\ref{lem:commutators1} we can write $I$ as
\begin{align*}
\begin{split}
\Omega T\Omega^{{\tilde{k}}_2-1}T^{{\tilde{k}}_3}\uppsi+(\calO({\dot{\wp}}){\tilde{r}} L + \calO({\dot{\wp}})\Omega+\calO({\dot{\wp}})T)\Omega^{{\tilde{k}}_2-1}T^{{\tilde{k}}_3}\uppsi,
\end{split}
\end{align*}
which, by the induction hypothesis, can be arranged into the desired form, using the same argument as above. The case where $X_k=T$ and ${\tilde{k}}_1\neq0$ is similar.
\end{proof}
In view of Lemma~\ref{lem:commutators2}, in order to estimate $X_k\dots X_1\uppsi$, with $X_i$ as in the lemma, it suffices to consider only the rearrangement $({\tilde{r}} L)^{k_1}\Omega^{k_2}T^{k_3}\uppsi$, so it suffices to consider commutators with \eqref{eq:calPVF1} in this order. This commutator is calculated in the next lemma. For this, we let ${\widetilde{\calP}}$ denote the non-perturbative part of the operator on the right-hand side of \eqref{eq:calPVF1}, that is,
\begin{align*}
\begin{split}
{\widetilde{\calP}}:=-{\underline L} L+\frac{1}{{\tilde{r}}^2}\sum\Omega^2-\frac{(n-1)(n-3)}{4{\tilde{r}}^2}.
\end{split}
\end{align*}
\begin{lemma}\label{lem:higherordereqs1}
For any function $\uppsi$ and any integers $k_1,k_2,k_3\geq0$, and with ${\tilde{\uppsi}}={\tilde{r}}^{\frac{n-1}{2}}\uppsi$ and $k=k_1+k_2+k_3$,
\begin{equation}\label{eq:higher1}
({\tilde{r}} L+1)^{k_1}\Omega^{k_2}T^{k_3}({\tilde{r}}^{\frac{n-1}{2}}\mathcal P \uppsi)={\widetilde{\calP}}(({\tilde{r}} L)^{k_1}\Omega^{k_2}T^{k_3}{\tilde{\uppsi}})+{\mathrm{Err}}_{k_1,k_2,k_3}[{\tilde{\uppsi}}],
\end{equation}
where
\begin{align}
{\mathrm{Err}}_{k_1,k_2,k_3}[{\tilde{\uppsi}}]&=\sum_{j=0}^{k_1-1}c_{j,k_1}{\widetilde{\calP}}_1(({\tilde{r}} L)^{j}\Omega^{k_2}T^{k_3}{\tilde{\uppsi}})+\calO(r^{-4})L^2({\tilde{r}} L)^{k_1}\Omega^{k_2}T^{k_3}{\tilde{\uppsi}}+\calO(r^{-4}){\underline L}^2({\tilde{r}} L)^{k_1}\Omega^{k_2}T^{k_3}{\tilde{\uppsi}}\nonumber\\
&\quad+\calO(r^{-5})\Omega L({\tilde{r}} L)^{k_1}\Omega^{k_2}T^{k_3}{\tilde{\uppsi}}+\calO(r^{-5}){\underline L}\Omega({\tilde{r}} L)^{k_1}\Omega^{k_2}T^{k_3}{\tilde{\uppsi}}\nonumber\\
&\quad+\calO({\dot{\wp}}^{\leq 2k+2}+r^{-5})\sum_{j_1+j_2+j_3\leq k}L({\tilde{r}} L)^{j_1}\Omega^{j_2}T^{j_3}{\tilde{\uppsi}}\nonumber\\
&\quad+\calO({\dot{\wp}}^{\leq 2k+2} r^{-2}+r^{-4})\sum_{j_1+j_2+j_3\leq k}{\underline L}({\tilde{r}} L)^{j_1}\Omega^{j_2}T^{j_3}{\tilde{\uppsi}}\label{eq:errorhigher1}\\
&\quad+\calO({\dot{\wp}}^{\leq 2k+2} r^{-2}+r^{-6})\sum_{j_1+j_2+j_3\leq k}\Omega({\tilde{r}} L)^{j_1}\Omega^{j_2}T^{j_3}{\tilde{\uppsi}}\nonumber\\
&\quad+\calO({\dot{\wp}}^{\leq 2k+2} r^{-2}+r^{-5})\sum_{j_1+j_2+j_3\leq k}({\tilde{r}} L)^{j_1}\Omega^{j_2}T^{j_3}{\tilde{\uppsi}},\nonumber
\end{align}
for some constants $c_{j,k_1}$ (which are nonzero only if $k_1\geq 1$), with $c_{k_1-1,k_1}=-k_1$, and where
\begin{align*}
\begin{split}
{\widetilde{\calP}}_1=LL+\frac{1}{{\tilde{r}}^2}\sum\Omega^2-\frac{(n-1)(n-3)}{4{\tilde{r}}^2}.
\end{split}
\end{align*}
\end{lemma}
\begin{proof}
Note that the terms involving ${\widetilde{\calP}}_1$ are those that would arise from commuting $({\tilde{r}} L+1)^{k_1}$ if the parameters were treated as fixed. The proof is by induction on $k$, applied to every term on the right-hand side of \eqref{eq:calPVF1}. The treatment of the different terms is similar, so here we present the details only for ${\underline L} L{\tilde{\uppsi}}$. Starting with $T$, by Lemma~\ref{lem:commutators1}, and recalling that ${\underline L}=2T-L$,
\begin{align*}
\begin{split}
T{\underline L} L{\tilde{\uppsi}}&= {\underline L} L T{\tilde{\uppsi}}+[T,{\underline L}]L{\tilde{\uppsi}}+{\underline L}[T,L]{\tilde{\uppsi}}\\
&={\underline L} L T{\tilde{\uppsi}}+\calO({\dot{\wp}}^{\leq2})L L{\tilde{\uppsi}} + \calO({\dot{\wp}}^{\leq 2})TL{\tilde{\uppsi}}+\calO({\dot{\wp}}^{\leq2}){\underline L} T{\tilde{\uppsi}}+\calO({\dot{\wp}}^{\leq2}r^{-2}){\underline L}\Omega{\tilde{\uppsi}}+\calO({\dot{\wp}}^{\leq2}r^{-2})\Omega L{\tilde{\uppsi}}\\
&\quad+\calO({\dot{\wp}}^{\leq3})L{\tilde{\uppsi}}+\calO({\dot{\wp}}^{\leq3}r^{-2}){\underline L}{\tilde{\uppsi}}+\calO({\dot{\wp}}^{\leq3}r^{-2})\Omega{\tilde{\uppsi}}\\
&={\underline L} L T{\tilde{\uppsi}}+\calO({\dot{\wp}}^{\leq2})L L{\tilde{\uppsi}} + \calO({\dot{\wp}}^{\leq 2})LT{\tilde{\uppsi}}+\calO({\dot{\wp}}^{\leq2}){\underline L} T{\tilde{\uppsi}}+\calO({\dot{\wp}}^{\leq2}r^{-2}){\underline L}\Omega{\tilde{\uppsi}}+\calO({\dot{\wp}}^{\leq2}r^{-2})\Omega L{\tilde{\uppsi}}\\
&\quad+\calO({\dot{\wp}}^{\leq3})L{\tilde{\uppsi}}+\calO({\dot{\wp}}^{\leq3}r^{-2}){\underline L}{\tilde{\uppsi}}+\calO({\dot{\wp}}^{\leq3}r^{-2})\Omega{\tilde{\uppsi}},
\end{split}
\end{align*}
which has the desired structure. We can then inductively apply this same identity together with Lemmas~\ref{lem:commutators1} and~\ref{lem:commutators2} to conclude that
\begin{align}\label{eq:hotemp1}
\begin{split}
T^{k_3}{\underline L} L{\tilde{\uppsi}}={\underline L} LT^{k_3}{\tilde{\uppsi}}+{\mathrm{Err}},
\end{split}
\end{align}
where ${\mathrm{Err}}$ has the structure given in the statement of the lemma. Next we apply $\Omega$ to \eqref{eq:hotemp1}. Note that $\Omega$ applied to ${\mathrm{Err}}$ in \eqref{eq:hotemp1} has the desired structure by Lemmas~\ref{lem:commutators1} and~\ref{lem:commutators2}. For the main term, again by Lemma~\ref{lem:commutators1}, and with $\Psi=T^{k_3}{\tilde{\uppsi}}$,
\begin{align*}
\begin{split}
\Omega{\underline L} L \Psi&= {\underline L} L \Omega\Psi+[\Omega,{\underline L}]L\Psi+{\underline L}[\Omega,L]\Psi\\
&={\underline L} L \Omega\Psi+\calO({\dot{\wp}}){\tilde{r}} L L\Psi+\calO({\dot{\wp}})\Omega L\Psi+\calO({\dot{\wp}})TL\Psi+\calO({\dot{\wp}} r^{-2}){\underline L}\Omega\Psi+\calO({\dot{\wp}} r^{-2}){\underline L} T\Psi\\
&\quad+\calO({\dot{\wp}}^{\leq2}r^{-1})L\Psi+\calO({\dot{\wp}}^{\leq2}r^{-2})\Omega\Psi+\calO({\dot{\wp}}^{\leq 2}r^{-2}){\underline L}\Psi\\
&={\underline L} L \Omega\Psi+\calO({\dot{\wp}})L({\tilde{r}} L)\Psi+\calO({\dot{\wp}}) L\Omega\Psi+\calO({\dot{\wp}})LT\Psi+\calO({\dot{\wp}} r^{-2}){\underline L}\Omega\Psi+\calO({\dot{\wp}} r^{-2}){\underline L} T\Psi\\
&\quad+\calO({\dot{\wp}}^{\leq3})L\Psi+\calO({\dot{\wp}}^{\leq3}r^{-2})\Omega\Psi+\calO({\dot{\wp}}^{\leq 3}r^{-2}){\underline L}\Psi,
\end{split}
\end{align*}
which has the desired structure. Here we have used the fact that ${\tilde{r}} L^2\Psi= L({\tilde{r}} L)\Psi+\calO(1+{\dot{\wp}} )L\Psi$. As in the case of $T^{k_3}$, we can apply this identity inductively and use Lemmas~\ref{lem:commutators1} and~\ref{lem:commutators2} to conclude that
\begin{align}\label{eq:hotemp2}
\begin{split}
\Omega^{k_2}T^{k_3}{\underline L} L{\tilde{\uppsi}}={\underline L} L\Omega^{k_2}T^{k_3}{\tilde{\uppsi}}+{\mathrm{Err}},
\end{split}
\end{align}
where ${\mathrm{Err}}$ has the structure given in the statement of the lemma. Finally we apply ${\tilde{r}} L+1$ to \eqref{eq:hotemp2}. Again by Lemmas~\ref{lem:commutators1} and~\ref{lem:commutators2} the contribution of ${\mathrm{Err}}$ in \eqref{eq:hotemp2} has the desired form. For the main term we have, using Lemma~\ref{lem:commutators1} and with $\Psi=\Omega^{k_2}T^{k_3}{\tilde{\uppsi}}$,
\begin{align*}
\begin{split}
({\tilde{r}} L+1){\underline L} L \Psi&= {\underline L} L({\tilde{r}} L+1)\Psi+{\tilde{r}}[L,{\underline L}]L\Psi+[{\tilde{r}},{\underline L}]L^2\Psi+{\underline L}([{\tilde{r}},L]L\Psi)\\
&={\underline L} L({\tilde{r}} L+1)\Psi-{\underline L} L\Psi+LL\Psi+\calO({\dot{\wp}}^{\leq2}){\tilde{r}} L L\Psi+\calO({\dot{\wp}} r^{-1})TL\Psi\\
&\quad+\calO({\dot{\wp}}^{\leq2}r^{-1})\Omega L\Psi+\calO({\dot{\wp}} r^{-2})L\Psi\\
&={\underline L} L({\tilde{r}} L+1)\Psi-{\underline L} L\Psi+LL\Psi+\calO({\dot{\wp}}^{\leq2})L ({\tilde{r}} L)\Psi+\calO({\dot{\wp}} r^{-1})LT\Psi\\
&\quad+\calO({\dot{\wp}}^{\leq2}r^{-1}) L\Omega\Psi+\calO({\dot{\wp}}^{\leq3})L\Psi+\calO({\dot{\wp}}^{\leq3}r^{-2}){\underline L}\Psi+\calO({\dot{\wp}}^{\leq3}r^{-3})\Omega\Psi.
\end{split}
\end{align*}
The terms $-{\underline L} L\Psi+LL\Psi$ will contribute to the terms involving ${\widetilde{\calP}}_1$ on the right-hand side of \eqref{eq:higher1}, and the remaining terms have the expected form. The desired structure now follows by inductively applying this identity and using Lemmas~\ref{lem:commutators1} and~\ref{lem:commutators2}. Here the commutators with $L^2\Psi$ and the remaining terms on the right-hand side of \eqref{eq:calPVF1} are treated inductively in a similar way as with ${\underline L} L$ above.
\end{proof}
We end this subsection by deriving expansions for the source and the cubic terms in equation \eqref{eq:uext1}. The cubic term refers to the part of the expression (recall that $\varphi$ and $\phi$ are related by the conjugation \eqref{eq:varphiphi1})
\begin{align}\label{eq:purelycubic1}
\begin{split}
\frac{\nabla^\mu\varphi\nabla^\nu\varphi}{1+\nabla^\alpha v \nabla_\alpha v}\nabla^2_{\mu\nu}\varphi
\end{split}
\end{align}
in \eqref{eq:calF3_1} in which no factors of $Q$ appear, which we expect to be the most difficult term in the nonlinearity. Recall from \eqref{eq:extpar1} that in the exterior region
\begin{align*}
\begin{split}
Q\equiv Q_\wp=Q(rA_\ell\Theta-\gamma\jap{r}\ell)=Q({\tilde{r}}).
\end{split}
\end{align*}
In view of \eqref{eq:uext1} the source term $\mathcal F_0$ is given by
\begin{align}\label{eq:sourcetermrp1}
\begin{split}
\mathcal F_0=\Box_m Q-(1+\nabla^\alpha Q \nabla_\alpha Q)^{-1}\nabla^\mu Q\nabla^\nu Q\nabla^2_{\mu\nu}Q.
\end{split}
\end{align}
The more precise structure of $\mathcal F_0$ is calculated in the next lemma.
\begin{lemma}\label{lem:sourceext1}
The source term $\mathcal F_0$ satisfies the following estimate in the hyperboloidal region $\mathcal C_{\mathrm{hyp}}$:
\begin{align*}
\begin{split}
{\bfT}^k\mathcal F_0= \calO({\dot{\wp}}^{1+k\leq\cdot\leq 3+k}r^{-n+1}),\quad k=0,1,2,3.
\end{split}
\end{align*}
\end{lemma}
\begin{proof}
Recall that if $Q$ were a maximal embedding then $\mathcal F_0$ would vanish. In particular (by a slight abuse of notation we are identifying $Q(y)=Q(|y|)$ with a function of a single variable), $Q$ satisfies equation \eqref{eq:QRiem1}.
Moreover, $\Omega Q=0$ by construction. It follows from these facts and Lemma~\ref{lem:BoxVF1} that
\begin{align*}
\begin{split}
\mathcal F_0=\calO({\dot{\wp}}^{\leq 3})Q'+\calO({\dot{\wp}} r)Q'',
\end{split}
\end{align*}
which proves the desired bound for $k=0$. The higher order bounds are obtained similarly by differentiating the equation.
\end{proof}
Turning to \eqref{eq:purelycubic1}, we have the following expansion of the purely cubic part of this nonlinearity.
\begin{lemma}\label{lem:purelycubic1}
$\nabla^\mu\varphi\nabla^\nu\varphi\nabla^2_{\mu\nu}\varphi$ can be written as a linear combination of terms of the following forms in the hyperboloidal region $\mathcal C_{\mathrm{hyp}}$:
\begin{enumerate}
\item Quasilinear terms: $(L\varphi)^2{\underline L}^2\varphi$, $(L\varphi{\underline L}\varphi){\underline L} L\varphi$, $({\underline L}\varphi)^2L^2\varphi$, $(L\varphi e_A\varphi)e_A{\underline L}\varphi$, $({\underline L}\varphi e_A\varphi)e_AL\varphi$, $(e_A\varphi e_B\varphi)e_Ae_B\varphi$.
\item Semilinear terms: $\calO({\dot{\wp}}^{\leq 2}r^{-3})({\underline L}\varphi)^2L\varphi$, $\calO({\dot{\wp}} r^{-3})({\underline L}\varphi)^2e_A\varphi$, $\calO({\dot{\wp}}^{\leq 2})(L\varphi)^2{\underline L}\varphi$, \\$\calO({\dot{\wp}}^{\leq2}r^{-1}){\underline L}\varphi L\varphi e_A \varphi$, $\calO({\dot{\wp}} r^{-1}){\underline L}\varphi e_A\varphi e_B\varphi$, $\calO({\dot{\wp}})L\varphi e_A\varphi e_B\varphi$, $\calO(1)e_A\varphi e_B\varphi e_C\varphi$.
\end{enumerate}
\end{lemma}
\begin{proof}
This follows by writing this expression as
\begin{align*}
\begin{split}
(m^{-1})^{II'}(m^{-1})^{JJ'}(D_{I'}\varphi D_{J'}\varphi)(D_{I}D_J\varphi-\Gamma_{IJ}^KD_K\varphi).
\end{split}
\end{align*}
The relevant connection coefficients can be calculated using the Koszul formula as in the proof of Lemma~\ref{eq:BoxVF2} and are given by
\begin{align*}
\begin{split}
&\Gamma_{LL}^{\underline L}=0,\quad \Gamma_{LL}^L=\calO({\dot{\wp}}^{\leq 2}r^{-2}),\quad \Gamma_{LL}^A=\calO({\dot{\wp}} r^{-3}),\quad \Gamma_{{\underline L} L}^{\underline L}=0, \quad\Gamma_{{\underline L} L}^L=\calO({\dot{\wp}}^{\leq2}),\\
&\Gamma_{{\underline L} L}^A=\calO({\dot{\wp}}^{\leq 2}r^{-1}),\quad \Gamma_{A L}^L=\calO({\dot{\wp}}^{\leq 2}r^{-1}),\quad \Gamma_{AL}^{\underline L}=0,\quad \Gamma_{AL}^B=\calO({\dot{\wp}} r^{-1}), \quad \Gamma_{A{\underline L}}^{\underline L}=\calO({\dot{\wp}}^{\leq 2}r^{-1}),\\
&\Gamma_{A{\underline L}}^L=0,\quad \Gamma_{A{\underline L}}^B=\calO({\dot{\wp}}), \quad \Gamma_{AB}^L=\calO({\dot{\wp}}),\quad \Gamma_{AB}^{\underline L}=\calO({\dot{\wp}} r^{-1}), \quad \Gamma_{AB}^C=\calO(1).\qedhere
\end{split}
\end{align*}
\end{proof}
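As a consistency check for the quasilinear terms in Lemma~\ref{lem:purelycubic1}, note that in the exact Minkowski null frame, with the normalization $m(L,{\underline L})=-2$, $m(L,L)=m({\underline L},{\underline L})=0$, and the $e_A$ orthonormal on the spheres, the gradient takes the form
\begin{align*}
\begin{split}
\nabla^\mu\varphi\,\partial_\mu=-\frac{1}{2}(L\varphi){\underline L}-\frac{1}{2}({\underline L}\varphi)L+\sum_A(e_A\varphi)e_A,
\end{split}
\end{align*}
so expanding $\nabla^\mu\varphi\nabla^\nu\varphi D_\mu D_\nu\varphi$ and discarding the connection coefficients reproduces, up to constant factors, exactly the six quasilinear terms listed in the lemma; the semilinear terms then arise from the connection coefficients $\Gamma_{IJ}^K$ recorded in the proof.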
\subsection{The Main $r^p$ Multiplier Identity}
This section contains the main $r^p$ multiplier identity for $\mathcal P_{\mathrm{graph}}$. Recall that this operator arises when using the conjugated variable $\varphi$ defined in terms of $\phi$ in \eqref{eq:varphiphi1}. Since we will be interested in the exterior hyperboloidal region, we fix a cutoff function $\chi_{\geq {\tilde{R}}}$ supported in the region $\{r\geq {\tilde{R}}\}\subseteq \mathcal C_{\mathrm{hyp}}$. We will also use the notation $\chi_{\leq {\tilde{R}}}=1-\chi_{\geq {\tilde{R}}}$. Given $\uppsi$ with
\begin{align*}
\begin{split}
\mathcal P_{\mathrm{graph}} \uppsi=f,
\end{split}
\end{align*}
we let ${\tilde{\uppsi}}={\tilde{r}}^{\frac{n-1}{2}}\uppsi$ and ${\tilde{f}}={\tilde{r}}^{\frac{n-1}{2}}f$ as usual. Suppose $X_1,\dots,X_k$, $k=k_1+k_2+k_3$, are a collection of vectorfields from $\{{\tilde{r}} L, \Omega, T\}$, with $X_1,\dots, X_{k_3}=T$, $X_{k_3+1},\dots, X_{k_3+k_2}=\Omega$, and~$X_{k_3+k_2+1},\dots, X_k={\tilde{r}} L$. We let
\begin{align}\label{eq:tilphikdef1}
\begin{split}
{\tilde{\uppsi}}_k=X_k\dots X_1{\tilde{\uppsi}},\qquad {\tilde{f}}_k = X_k\dots X_1{\tilde{f}},
\end{split}
\end{align}
and if the precise choice of the vectorfields is important we write
\begin{align}\label{eq:tilphikdef2}
\begin{split}
{\tilde{\uppsi}}_k={\tilde{\uppsi}}_{k_1,k_2,k_3},\qquad {\tilde{f}}_{k}={\tilde{f}}_{k_1,k_2,k_3}.
\end{split}
\end{align}
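For orientation, we recall the effect of this conjugation in the flat model: for the Minkowski wave operator $\Box=-\partial_t^2+\Delta$ on $\mathbb{R}^{1+n}$, with $L=\partial_t+\partial_r$, ${\underline L}=\partial_t-\partial_r$, and ${\tilde{\uppsi}}=r^{\frac{n-1}{2}}\uppsi$, a direct computation gives
\begin{align*}
\begin{split}
r^{\frac{n-1}{2}}\Box\uppsi=-{\underline L} L{\tilde{\uppsi}}+\frac{1}{r^2}\Delta_\omega{\tilde{\uppsi}}-\frac{(n-1)(n-3)}{4r^2}{\tilde{\uppsi}},
\end{split}
\end{align*}
where $\Delta_\omega$ denotes the Laplacian on the unit sphere. The equation \eqref{eq:higher1} satisfied by ${\tilde{\uppsi}}_k$ is a perturbation of this model by $\calO(r^{-4})$ and $\calO({\dot{\wp}})$ corrections.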
The basic $r^p$ boundary and bulk energies are defined as follows. For any $p\in[0,2]$,
\begin{align}\label{eq:rpenergiesdef1}
\begin{split}
&\mathcal E^p_k(\sigma)\equiv\mathcal E^p_k[\uppsi](\sigma):=\int_{\Sigma_\sigma} \chi_{\tilde{R}} {\tilde{r}}^p (L{\tilde{\uppsi}}_k)^2 \mathrm{d} \theta \mathrm{d} r,\\
&\mathcal B_{k}^{p}(\sigma_1,\sigma_2)\equiv\mathcal B_{k}^{p}[\uppsi](\sigma_1,\sigma_2):=\int_{\sigma_1}^{\sigma_2}\int_{\Sigma_\tau}\chi_{\tilde{R}}{\tilde{r}}^{p-1}\big((L{\tilde{\uppsi}}_k)^2+\big(\frac{2-p}{{\tilde{r}}^2}\big)(|\Omega{\tilde{\uppsi}}_k|^2+{\tilde{\uppsi}}_k^2)\big)\mathrm{d} \theta \mathrm{d} r \mathrm{d}\tau.
\end{split}
\end{align}
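These definitions are modeled on the classical $r^p$ identity of Dafermos--Rodnianski. In the flat model, where ${\underline L} r=-1$ and hence ${\underline L} r^p=-pr^{p-1}$, multiplying $-{\underline L} L{\tilde{\uppsi}}_k$ by $r^pL{\tilde{\uppsi}}_k$ gives
\begin{align*}
\begin{split}
-r^pL{\tilde{\uppsi}}_k\,{\underline L} L{\tilde{\uppsi}}_k=-\frac{1}{2}{\underline L}\big(r^p(L{\tilde{\uppsi}}_k)^2\big)-\frac{p}{2}r^{p-1}(L{\tilde{\uppsi}}_k)^2,
\end{split}
\end{align*}
so integration produces the boundary flux appearing in $\mathcal E^p_k$ together with a good-signed bulk term with weight $r^{p-1}$; the angular and zeroth order parts of $\mathcal B_k^p$ arise similarly from the remaining terms of the conjugated operator.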
When there is a need to distinguish between the vectorfields applied to ${\tilde{\uppsi}}$ we write
\begin{align*}
\begin{split}
\mathcal E_{k_1,k_2,k_3}^p(\sigma) {\ \ \text{and} \ \ } \mathcal B_{k_1,k_2,k_3}^p(\sigma_1,\sigma_2)
\end{split}
\end{align*}
for the corresponding energies. We also define the standard energy (note that the definition agrees with $E$ in \eqref{eq:standardenergydef1} when $k=0$)
\begin{align*}
\begin{split}
E_k(\tau)&\equiv E_k[\uppsi](\tau)\\
&:=\int_{\Sigma_\tau}\chi_{\leq {\tilde{R}}}(|\partial\partial^k\uppsi|^2+\jap{\rho}^{-2}|\partial^k\uppsi|^2)\mathrm{d} V+\int_{\Sigma_\tau}\chi_{\geq {\tilde{R}}}(|\partial_\Sigma X^k \uppsi|^2+r^{-2}|TX^k\uppsi|^2+r^{-2}|X^k\uppsi|^2) \mathrm{d} V.
\end{split}
\end{align*}
\begin{lemma}\label{lem:rpmult1}
Suppose $\mathcal P_{\mathrm{graph}} \uppsi=f$ and let ${\tilde{\uppsi}}={\tilde{r}}^{\frac{n-1}{2}}\uppsi$ and ${\tilde{f}}={\tilde{r}}^{\frac{n-1}{2}}f$. If the bootstrap assumptions \eqref{eq:a+trap}--\eqref{eq:wpb1} hold, then with the notation introduced in \eqref{eq:tilphikdef1}, \eqref{eq:tilphikdef2}, \eqref{eq:rpenergiesdef1}, with $k=k_1+k_2+k_3$, and for any $0\leq p\leq 2$ and any $\tau_1<\tau_2$,
\begin{align}\label{eq:rpmult1}
\begin{split}
&\sum_{j\leq k_1}\big(\sup_{\tau\in[\tau_1,\tau_2]}\mathcal E_{j,k_2,k_3}^p(\tau)+\mathcal B_{j,k_2,k_3}^{p-1}(\tau_1,\tau_2)\big)\\
&\leq C\sum_{j\leq k_1}\mathcal E_{j,k_2,k_3}^{p}(\tau_1)+C\sum_{j\leq k_1}\int_{\tau_1}^{\tau_2}\int_{\Sigma_\tau}\chi_{\tilde{R}}{\tilde{r}}^p {\tilde{f}}_{j,k_2,k_3} (L{\tilde{\uppsi}}_{j,k_2,k_3} +\calO(r^{-5})\Omega{\tilde{\uppsi}}_{j,k_2,k_3})\mathrm{d} \theta \mathrm{d} r \mathrm{d}\tau\\
&\quad+C_{\tilde{R}}\int_{\tau_1}^{\tau_2}\int_{\Sigma_\tau}|\partial\chi_{\tilde{R}}|(|\partial_{\tau,x}{\tilde{\uppsi}}_k|^2+|{\tilde{\uppsi}}_k|^2)\mathrm{d} \theta \mathrm{d} r \mathrm{d} \tau\\
&\quad+C\sum_{j\leq k}\sup_{\tau\in[\tau_1,\tau_2]}E_j(\tau)+C\delta \sup_{\tau\in[\tau_1,\tau_2]}\sum_{j\leq k}\mathcal E_j^p(\tau)\\
&\quad+C\delta\sum_{j\leq k}\int_{\tau_1}^{\tau_2}\int_{\Sigma_\tau}\chi_{\tilde{R}}(|\partial_{\tau,x}{\tilde{\uppsi}}_j|^2+{\tilde{r}}^{-2}|{\tilde{\uppsi}}_j|^2){\tilde{r}}^{-1-\alpha}\mathrm{d} \theta \mathrm{d} r \mathrm{d}\tau.
\end{split}
\end{align}
Here $C$ and $C_{\tilde{R}}$ are large constants and $\delta=o(\epsilon)+o({\tilde{R}})$ is a small constant that is independent of $C$ and $C_{\tilde{R}}$.
\end{lemma}
\begin{proof}
To simplify notation we write $\chi$ for $\chi_{\tilde{R}}$ and ${\tilde{\uppsi}}_k$ for ${\tilde{\uppsi}}_{k_1,k_2,k_3}$, and multiply each term in the expansion \eqref{eq:higher1},~\eqref{eq:errorhigher1} by $\chi{\tilde{r}}^p L{\tilde{\uppsi}}_k$. For the first term,
\begin{align*}
\begin{split}
-(1+\calO(r^{-4}))\chi{\underline L} L{\tilde{\uppsi}}_k L{\tilde{\uppsi}}_k {\tilde{r}}^p&=-\frac{1}{2}{\underline L}((1+\calO(r^{-4}))\chi(L{\tilde{\uppsi}}_k)^2{\tilde{r}}^p)-\frac{p}{2}\chi{\tilde{r}}^{p-1}(L{\tilde{\uppsi}}_k)^2\\
&\quad+\calO({\dot{\wp}})(L{\tilde{\uppsi}}_k)^2{\tilde{r}}^p+\calO(r^{-4}){\tilde{r}}^{p-1}(L{\tilde{\uppsi}}_k)^2+\calO(1)({\underline L}\chi){\tilde{r}}^p(L{\tilde{\uppsi}}_k)^2.
\end{split}
\end{align*}
Similarly, using also \eqref{eq:VFcomm1} and Cauchy-Schwarz,
\begin{align*}
\begin{split}
(1+\calO(r^{-4}))\chi\Omega^2{\tilde{\uppsi}}_k L {\tilde{\uppsi}}_k {\tilde{r}}^{p-2}&=-\frac{1}{2}L((1+\calO(r^{-4}))\chi(\frac{\Omega}{{\tilde{r}}}{\tilde{\uppsi}}_k)^2{\tilde{r}}^{p})-\frac{2-p}{2}\chi(\frac{\Omega}{{\tilde{r}}}{\tilde{\uppsi}}_k)^2{\tilde{r}}^{p-1}\\
&\quad+\Omega((1+\calO(r^{-4}))\chi{\tilde{r}}^{p-2}\Omega{\tilde{\uppsi}}_k L{\tilde{\uppsi}}_k)\\
&\quad+\calO(1)(\Omega\chi)\Omega{\tilde{\uppsi}}_kL{\tilde{\uppsi}}_k{\tilde{r}}^{p-1}+\calO(1)(L\chi)(\frac{1}{{\tilde{r}}}\Omega{\tilde{\uppsi}}_k)^2{\tilde{r}}^{p}\\
&\quad+\calO(r^{-4})\chi(\frac{\Omega}{{\tilde{r}}}{\tilde{\uppsi}}_k)^2{\tilde{r}}^{p-1}+\calO({\dot{\wp}} r^{-2})\chi(L{\tilde{\uppsi}}_k)^2{\tilde{r}}^p\\
&\quad +\calO({\dot{\wp}} r^{-2})\chi(\frac{\Omega}{{\tilde{r}}}{\tilde{\uppsi}}_k)^2{\tilde{r}}^p+\calO({\dot{\wp}} r^{-4})\chi({\underline L}{\tilde{\uppsi}}_k)^2{\tilde{r}}^p,
\end{split}
\end{align*}
and
\begin{align*}
\begin{split}
-(1+\calO(r^{-4}))\chi{\tilde{\uppsi}}_k L{\tilde{\uppsi}}_k {\tilde{r}}^{p-2} &=-\frac{1}{2}L((1+\calO(r^{-4}))\chi(\frac{{\tilde{\uppsi}}_k}{{\tilde{r}}})^2{\tilde{r}}^p)-\frac{2-p}{2}\chi(\frac{{\tilde{\uppsi}}_k}{{\tilde{r}}})^2{\tilde{r}}^{p-1}\\
&\quad+\calO(1)(L\chi){\tilde{\uppsi}}_k^2{\tilde{r}}^{p-2}+\calO(r^{-4})\chi(\frac{{\tilde{\uppsi}}_k}{{\tilde{r}}})^2{\tilde{r}}^{p-1}+\calO({\dot{\wp}} r^{-2})(\frac{{\tilde{\uppsi}}_k}{{\tilde{r}}})^2{\tilde{r}}^p.
\end{split}
\end{align*}
Note that integrating the last three identities already gives the desired control on the left-hand side of \eqref{eq:rpmult1}. Here the terms involving ${\dot{\wp}}$ can be integrated in $\tau$ and absorbed by the left-hand side of \eqref{eq:rpmult1} or the standard energy $E_k$. Turning to the error terms, we first consider
\begin{align*}
\begin{split}
\calO(r^{-4})\chi L^2{\tilde{\uppsi}}_k L{\tilde{\uppsi}}_k {\tilde{r}}^p&=L(\calO(r^{-4})\chi(L{\tilde{\uppsi}}_k)^2{\tilde{r}}^p)+\calO(r^{-4})\chi(L{\tilde{\uppsi}}_k)^2{\tilde{r}}^{p-1}+\calO(1)(L\chi)(L{\tilde{\uppsi}}_k)^2.
\end{split}
\end{align*}
Similarly, using also \eqref{eq:VFcomm1},
\begin{align*}
\begin{split}
\calO(r^{-4})\chi{\underline L}^2{\tilde{\uppsi}}_k L{\tilde{\uppsi}}_k {\tilde{r}}^p&={\underline L}(\calO(r^{-4})\chi{\underline L}{\tilde{\uppsi}}_k L{\tilde{\uppsi}}_k{\tilde{r}}^p)+L(\calO(r^{-4})\chi({\underline L}{\tilde{\uppsi}}_k)^2{\tilde{r}}^p)+\calO(r^{-4})({\underline L}{\tilde{\uppsi}}_k)^2{\tilde{r}}^{p-1}\\
&\quad+\calO(r^{-5})\chi L{\tilde{\uppsi}}_k{\underline L}{\tilde{\uppsi}}_k {\tilde{r}}^{p-1}+\calO({\dot{\wp}} r^{-4})L{\tilde{\uppsi}}_k{\underline L}{\tilde{\uppsi}}_k {\tilde{r}}^{p}+\calO(r^{-4})({\underline L}\chi)L{\tilde{\uppsi}}_k{\underline L} {\tilde{\uppsi}}_k{\tilde{r}}^{p}\\
&\quad +\calO({\dot{\wp}} r^{-5})(\frac{\Omega}{{\tilde{r}}}{\tilde{\uppsi}}_k){\underline L}{\tilde{\uppsi}}_k{\tilde{r}}^p+\calO(r^{-4})(L\chi)({\underline L}{\tilde{\uppsi}}_k)^2{\tilde{r}}^p,
\end{split}
\end{align*}
and
\begin{align*}
\begin{split}
\calO(r^{-5})\chi\Omega L{\tilde{\uppsi}}_k L{\tilde{\uppsi}}_k {\tilde{r}}^p=\Omega(\calO(r^{-5})\chi(L{\tilde{\uppsi}}_k)^2{\tilde{r}}^p)+\calO(r^{-4})\chi(L{\tilde{\uppsi}}_k)^2{\tilde{r}}^{p-1}+\calO(r^{-5})(\Omega\chi)(L{\tilde{\uppsi}}_k)^2{\tilde{r}}^p.
\end{split}
\end{align*}
For the last term on the second line of \eqref{eq:errorhigher1} first observe that
\begin{align*}
\begin{split}
\calO(r^{-5})\chi{\underline L}\Omega{\tilde{\uppsi}}_k L{\tilde{\uppsi}}_k {\tilde{r}}^p &= {\underline L}(\calO(r^{-5})\chi\Omega{\tilde{\uppsi}}_k L{\tilde{\uppsi}}_k {\tilde{r}}^p)+\calO(r^{-5})\chi\Omega{\tilde{\uppsi}}_k{\underline L} L{\tilde{\uppsi}}_k{\tilde{r}}^p+\calO({\dot{\wp}} r^{-4})\chi(\frac{\Omega}{{\tilde{r}}}{\tilde{\uppsi}}_k)L{\tilde{\uppsi}}_k{\tilde{r}}^p\\
&\quad+\calO(r^{-5})\chi(\frac{\Omega}{{\tilde{r}}}{\tilde{\uppsi}}_k)L{\tilde{\uppsi}}_k{\tilde{r}}^p+\calO( r^{-4})({\underline L}\chi)(\frac{\Omega}{{\tilde{r}}}{\tilde{\uppsi}}_k)L{\tilde{\uppsi}}_k{\tilde{r}}^p.
\end{split}
\end{align*}
To treat the second term on the right, we use equation \eqref{eq:higher1} to solve for ${\underline L} L{\tilde{\uppsi}}_k$, and replace this term by
\begin{align*}
\begin{split}
\calO(r^{-5})\chi\Omega{\tilde{\uppsi}}_k(\mathcal P_{\mathrm{graph}}+{\underline L} L){\tilde{\uppsi}}_k{\tilde{r}}^p+\calO(r^{-5})\chi\Omega{\tilde{\uppsi}}_k {\mathrm{Err}}_k [{\tilde{\uppsi}}]{\tilde{r}}^p.
\end{split}
\end{align*}
These terms can then be treated using similar considerations as above and below. The term $\chi {\mathrm{Err}}_k[{\tilde{\uppsi}}]L {\tilde{\uppsi}}_k{\tilde{r}}^p$ in \eqref{eq:higher1} is treated using repeated applications of the product rule (for integration by parts) and Lemmas~\ref{lem:commutators1} and~\ref{lem:commutators2} to write this term as a total derivative plus acceptable terms. We discuss only the contribution of ${\widetilde{\calP}}_1$, where by an abuse of notation we write ${\tilde{\uppsi}}_{j}$ for ${\tilde{\uppsi}}_{j,k_2,k_3}$. For the top order term ${\widetilde{\calP}}_1{\tilde{\uppsi}}_{k_1-1}$ the favorable sign of the coefficient $c_{k_1-1,k_1}$ is important, as we will illustrate with the term $L^2{\tilde{\uppsi}}_{k_1-1}$ appearing in ${\widetilde{\calP}}_1{\tilde{\uppsi}}_{k_1-1}$, for which we write
\begin{align*}
\begin{split}
L^2{\tilde{\uppsi}}_{k_1-1}L{\tilde{\uppsi}}_{k_1} {\tilde{r}}^2= L({\tilde{r}}^{-1}{\tilde{\uppsi}}_{k_1})L{\tilde{\uppsi}}_{k_1}{\tilde{r}}^2= (L{\tilde{\uppsi}}_{k_1})^2{\tilde{r}}-\frac{1}{2}L({\tilde{r}}^2(L{\tilde{\uppsi}}_{k_1-1})^2)+\calO({\dot{\wp}}){\tilde{\uppsi}}_{k_1} L {\tilde{\uppsi}}_{k_1}.
\end{split}
\end{align*}
Here the first term on the right has a favorable sign after multiplication by $c_{k_1-1,k_1}$ and the other terms can be bounded by the energy fluxes. The other terms in ${\widetilde{\calP}}_1{\tilde{\uppsi}}_{k_1-1}$ are treated similarly, with a few more integrations by parts and commutations between $\Omega$ and $L$, again with the sign of $c_{k_1-1,k_1}$ playing an important role for the main bulk term. The contributions of ${\widetilde{\calP}}_1{\tilde{\uppsi}}_{j}$, $j\leq k_1-2$, are treated similarly, where the error terms are absorbed inductively by adding a suitable multiple of the estimates for lower values of $k_1$. For instance,
\begin{align*}
\begin{split}
\Omega^2{\tilde{\uppsi}}_j L{\tilde{\uppsi}}_{k_1}= \frac{1}{{\tilde{r}}}\Omega{\tilde{\uppsi}}_{j+1}\Omega{\tilde{\uppsi}}_{k_1}+\Omega(\Omega{\tilde{\uppsi}}_{j}L{\tilde{\uppsi}}_{k_1})-L(\Omega{\tilde{\uppsi}}_{j}\Omega{\tilde{\uppsi}}_{k_1})+[L,\Omega]{\tilde{\uppsi}}_j\Omega{\tilde{\uppsi}}_{k_1}-\Omega{\tilde{\uppsi}}_{j}[\Omega,L]{\tilde{\uppsi}}_{k_1},
\end{split}
\end{align*}
and the first term on the right is bounded by $\delta |{\tilde{r}}^{-1}\Omega{\tilde{\uppsi}}_{k_1}|^2{\tilde{r}}+C_\delta|{\tilde{r}}^{-1}\Omega{\tilde{\uppsi}}_{j+1}|^2{\tilde{r}}$. The first term is absorbed by a corresponding term coming from ${\widetilde{\calP}}_1{\tilde{\uppsi}}_{k_1-1}$ and the second term by a similar term in the multiplier identity for ${\tilde{\uppsi}}_{j+1}$ (instead of ${\tilde{\uppsi}}_{k_1}$), after adding a suitably large multiple of that identity. The contribution of the last four lines of \eqref{eq:errorhigher1} needs no further manipulation. Putting everything together and applying Cauchy-Schwarz we obtain (to be precise, as explained above, we should write this first for ${\tilde{\uppsi}}={\tilde{\uppsi}}_{1,k_2,k_3}$ and derive the corresponding estimate, and then inductively build up to ${\tilde{\uppsi}}={\tilde{\uppsi}}_{k_1,k_2,k_3}$)
\begin{align}\label{eq:rpmulttemp1}
\begin{split}
{\tilde{f}} L{\tilde{\uppsi}} {\tilde{r}}^p&=-\frac{1}{2}{\underline L}\big(\chi(L{\tilde{\uppsi}})^2{\tilde{r}}^p+{\mathrm{Err}}_{\underline L}\big)-\frac{1}{2}L\big(\chi{\tilde{r}}^{p-2} \big((\Omega{\tilde{\uppsi}})^2+\frac{(n-1)(n-3)}{4}{\tilde{\uppsi}}^2\big)+{\mathrm{Err}}_{L}\big)\\
&\quad+\Omega( {\mathrm{Err}}_\Omega)+{\mathrm{Err}}_{\mathrm{int}}+\calO(\jap{r}^{-5})\Omega{\tilde{\uppsi}} {\tilde{f}}{\tilde{r}}^p\\
&\quad-\frac{p}{2}\chi{\tilde{r}}^{p-1}(L{\tilde{\uppsi}})^2-\frac{2-p}{2}\chi(\frac{\Omega}{{\tilde{r}}}{\tilde{\uppsi}})^2{\tilde{r}}^{p-1}-\frac{(2-p)(n-1)(n-3)}{8}\chi({\tilde{r}}^{-1}{\tilde{\uppsi}})^2{\tilde{r}}^{p-1},
\end{split}
\end{align}
where
\begin{align*}
\begin{split}
&{\mathrm{Err}}_{\underline L}\leq C\chi\sum_{j\leq k} ((\partial_{\tau,x}{\tilde{\uppsi}}_j)^2+{\tilde{r}}^{-2}{\tilde{\uppsi}}_j^2),\\
&{\mathrm{Err}}_{L}\leq C\chi{\tilde{r}}^p\sum_{j\leq k} ((L{\tilde{\uppsi}}_j)^2+{\tilde{r}}^{-2}(\Omega{\tilde{\uppsi}}_j)^2+{\tilde{r}}^{-2}{\tilde{\uppsi}}_j^2+{\tilde{r}}^{-2}({\underline L}{\tilde{\uppsi}}_j)^2),\\
&{\mathrm{Err}}_{\Omega}\leq C\chi{\tilde{r}}^{p-1}\sum_{j\leq k} ((L{\tilde{\uppsi}}_j)^2+{\tilde{r}}^{-2}(\Omega{\tilde{\uppsi}}_j)^2+{\tilde{r}}^{-2}{\tilde{\uppsi}}_j^2+{\tilde{r}}^{-2}({\underline L}{\tilde{\uppsi}}_j)^2),\\
&{\mathrm{Err}}_{\mathrm{int}}\leq C_{\tilde{R}}\sum_{j\leq k}|\partial\chi|(|\partial_{\tau,x}{\tilde{\uppsi}}_j|^2+|{\tilde{\uppsi}}_j|^2)+C\chi\sum_{j\leq k}(|\partial_{\tau,x}{\tilde{\uppsi}}_j|^2+{\tilde{r}}^{-2}|{\tilde{\uppsi}}_j|^2){\tilde{r}}^{-1-\alpha}\\
&\phantom{{\mathrm{Err}}_{\mathrm{int}}\leq }+\calO({\dot{\wp}})\sum_{j\leq k}\big((L{\tilde{\uppsi}}_j)^2{\tilde{r}}^p+({\tilde{r}}^{-1}\Omega{\tilde{\uppsi}}_j)^2+({\tilde{r}}^{-1}{\tilde{\uppsi}}_j)^2+{\tilde{r}}^{-2}({\underline L}{\tilde{\uppsi}}_j)^2\big).
\end{split}
\end{align*}
The desired estimate \eqref{eq:rpmult1} now follows from integrating \eqref{eq:rpmulttemp1}. Note that when integrating \eqref{eq:rpmulttemp1} we also encounter a term involving $\div V$, for $V=L$, ${\underline L}$, or $\Omega$ (from the difference of $V^\mu\partial_\mu u$ and $\partial_\mu(V^\mu u)$), but these terms come with ${\dot{\wp}}$, which has extra $\tau$ integrability, and can be absorbed.
\end{proof}
\subsection{Nonlinear Energy and Local Energy Decay Estimates}
In this section we again use the variable $\phi$, not the conjugated version $\varphi$ (see \eqref{eq:varphiphi1}). However, in view of the definition \eqref{eq:varphiphi1}, and under our bootstrap assumptions, the estimates on $\varphi$ easily transfer to estimates on $\phi$. As a first step in the proof of Proposition~\ref{prop:bootstrapphi1} we apply the results of Section~\ref{sec:LED} to derive energy and local energy decay estimates for $\phi$. Let
\begin{align*}
\begin{split}
\|\phi\|_{LE_k[\sigma_1,\sigma_2]}^2:=\|\chi_{\leq R}\partial^k\phi\|_{LE[\sigma_1,\sigma_2]}^2+\|\chi_{\geq R}X^k\phi\|_{LE[\sigma_1,\sigma_2]}^2.
\end{split}
\end{align*}
Our goal is to prove the following result.
\begin{proposition}\label{prop:energyLEDnonlin1}
If the bootstrap assumptions \eqref{eq:a+trap}--\eqref{eq:Tjphienergyb2} are satisfied and $\epsilon$ is sufficiently small, then
\begin{align}
&\sup_{\sigma_1\leq \sigma \leq \sigma_2} E_k[\phi](\sigma)+\|\phi\|_{LE_k[\sigma_1,\sigma_2]}^2\lesssim \sum_{i\leq k}E_i[\phi](\sigma_1)+\epsilon^2{\boldsymbol{\upsigma}}_0(\sigma_1), \quad k\leq M,\label{eq:phienergyLEDboundtemp1}\\
&\sup_{\sigma_1\leq \sigma \leq \sigma_2} E_k[{\bfT}\phi](\sigma)+\|{\bfT}\phi\|_{LE_k[\sigma_1,\sigma_2]}^2\lesssim \sum_{i\leq k}E_i[{\bfT}\phi](\sigma_1)+\epsilon^2{\boldsymbol{\upsigma}}_1(\sigma_1), \quad k\leq M-1,\label{eq:TphienergyLEDboundtemp1}\\
&\sup_{\sigma_1\leq \sigma \leq \sigma_2} E_k[{\bfT}^2\phi](\sigma)+\|{\bfT}^2\phi\|_{LE_k[\sigma_1,\sigma_2]}^2\lesssim \sum_{i\leq k}E_i[{\bfT}^2\phi](\sigma_1)+\epsilon^2{\boldsymbol{\upsigma}}_2(\sigma_1), \quad k\leq M-2,\label{eq:T2phienergyLEDboundtemp1}
\end{align}
where ${\boldsymbol{\upsigma}}_j(\sigma_1)=\sigma_1^{-2-2j}$ if $k+3j\leq M-2$ and ${\boldsymbol{\upsigma}}_j(\sigma_1)=1$ otherwise.
\end{proposition}
We start with some estimates on the source term defined in \eqref{eq:sourcetermrp1} (see also Lemma~\ref{lem:sourceext1}).
\begin{lemma}\label{lem:sourceLE1}
Under the assumptions of Proposition~\ref{prop:energyLEDnonlin1}, and with $\mathcal R_{\sigma_1}^{\sigma_2}=\cup_{\sigma=\sigma_1}^{\sigma_2}\Sigma_\sigma$,
\begin{align}
&\|\jap{r}(\chi_{r\geq R}|X^k\mathcal F_0|+\chi_{r\leq R}|\partial^k\mathcal F_0|)\|_{L^2(\Sigma_\tau)}\lesssim \delta_\wp \epsilon \tau^{-\frac{5}{2}+\kappa},\label{eq:calF0constanttautemp1}\\
&\|\jap{r}(\chi_{r\geq R}|X^k{\bfT}\mathcal F_0|+\chi_{r\leq R}|\partial^k{\bfT}\mathcal F_0|)\|_{L^2(\Sigma_\tau)}\lesssim \delta_\wp \epsilon \tau^{-3},\label{eq:TcalF0constanttautemp1}\\
&\|\jap{r}(\chi_{r\geq R}|X^k{\bfT}^j\mathcal F_0|+\chi_{r\leq R}|\partial^k{\bfT}^j\mathcal F_0|)\|_{L^2(\mathcal R_{\sigma_1}^{\sigma_2})}\lesssim \delta_\wp \epsilon\sigma_1^{-4+2\kappa}+\delta_\wp\|{\bfT}^j\phi\|_{LE([\sigma_1,\sigma_2])},\nonumber\\
&\phantom{\|\jap{r}(\chi_{r\geq R}|X^k{\bfT}^j\mathcal F_0|+\chi_{r\leq R}|\partial^k{\bfT}^j\mathcal F_0|)\|_{L^2(\mathcal R_{\sigma_1}^{\sigma_2})}\lesssim} j=0,1,2.\label{eq:calF0tauintegratedtemp1}
\end{align}
\end{lemma}
\begin{proof}
The exterior estimates follow from Lemma~\ref{lem:sourceext1}, Proposition~\ref{prop:bootstrappar1}, and Lemma~\ref{lem:OmegaiTkphi}. For the interior, the spatial decay of the source term is not important, so the estimates follow simply by counting the number of ${\bfT}$ derivatives and using Proposition~\ref{prop:bootstrappar1} and Lemma~\ref{lem:OmegaiTkphi}.
\end{proof}
Note that even though the source term $\mathcal F_0$ was derived using the conjugated variable $\varphi$ in \eqref{eq:varphiphi1}, after bounding $s$ in \eqref{eq:varphiphi1} using our bootstrap assumptions, the same estimates are satisfied by the source term in the equation for $\phi$ (see \eqref{eq:abstractexteq1}). We can now prove Proposition~\ref{prop:energyLEDnonlin1}.
\begin{proof}[Proof of Proposition~\ref{prop:energyLEDnonlin1}]
Using the global coordinates $(\uptau,\uprho,\uptheta)$, after commuting any number of $\partial_\uptau={\bfT}$ derivatives we collect the leading order terms to write the equation in the form \eqref{eq:LEDlinearmodel1} with $\mathcal P$ as in Section~\ref{sec:LED}. We start with the proof of the estimates when $k=0$. Applying Propositions~\ref{prop:energyestimate1} and~\ref{prop:LED1}, the nonlinear terms (including products of derivatives of ${\dot{\wp}}$ and $\phi$) can be estimated using the bootstrap assumptions \eqref{eq:a+trap}--\eqref{eq:Tjphienergyb2}, simply by treating them as quadratic. The contribution of the source terms is estimated using Lemma~\ref{lem:sourceLE1}, and the contribution of $\boldsymbol{\Omega}_k$ (see Remark~\ref{rem:Omegalinear1}) using Lemma~\ref{lem:OmegaiTkphi}, where we use the smallness of $\delta_\wp$ to absorb the $LE$ norms appearing in \eqref{eq:OmegaiTkphi} and~\eqref{eq:calF0tauintegratedtemp1}. Note that by the same procedure we can prove the estimates for higher powers of ${\bfT}$ without gaining extra decay (that is, by treating powers of ${\bfT}$ which are higher than three as arbitrary derivatives). There is one point that deserves further explanation in this process. Among the error terms after commuting $\partial_\uptau^k$, there will be terms of the forms (recall that the part of the equation without spatial decay is given by \eqref{eq:Boxm1}; below $\partial_y$ denotes an arbitrary tangential derivative of size one)
\begin{align*}
\begin{split}
\calO({\dot{\wp}}^{(j)})\partial^2_y{\bfT}^{k-j}\phi\quad{\ \ \text{and} \ \ }\quad \calO({\dot{\wp}}^{(j)})(\partial_\uprho+\frac{n-1}{2\uprho})\partial_\uptau {\bfT}^{k-j}\phi,
\end{split}
\end{align*}
with $j\geq1$, and the multipliers for the energy and LED estimates contain terms of the form $\calO(1)\partial_\uptau {\bfT}^k\phi$. Since $\partial_\uptau{\bfT}^k\phi$ cannot be placed in the energy flux for ${\bfT}^k\phi$ in the hyperboloidal part of the foliation, some integrations by parts are necessary to deal with these terms. For the error terms of the form $\calO({\dot{\wp}}^{(j)})\partial^2_y{\bfT}^{k-j}\phi$, we can integrate by parts twice to obtain terms of the forms
\begin{align*}
\begin{split}
\calO({\dot{\wp}}^{(j)})\partial_y{\bfT}^{k-j+1}\phi\partial_y {\bfT}^k\phi, \quad \calO({\dot{\wp}}^{(j+1)})\partial_y{\bfT}^{k-j}\phi\partial_y {\bfT}^k\phi,\quad \calO({\dot{\wp}}^{(j)}\uprho^{-1})\partial_y{\bfT}^{k-j}\phi \partial_\uptau {\bfT}^k\phi.
\end{split}
\end{align*}
These can be bounded, respectively, as (where $L^2_y$ denotes $L^2(\Sigma_\uptau)$)
\begin{align*}
\begin{split}
&\|{\dot{\wp}}^{(j)}\|_{L^1_\uptau}\|\partial_y{\bfT}^{k-j+1}\phi\|_{L^\infty_{\uptau}L^2_y}\|\partial_y {\bfT}^k\phi\|_{L^\infty_\uptau L^2_y},\quad \|{\dot{\wp}}^{(j+1)}\|_{L^1_\uptau}\|\partial_y{\bfT}^{k-j}\phi\|_{L^\infty_{\uptau}L^2_y}\|\partial_y {\bfT}^k\phi\|_{L^\infty_\uptau L^2_y},\\
& \|{\dot{\wp}}^{(j)}\|_{L^2_\uptau}\|\partial_y{\bfT}^{k-j}\phi\|_{L^\infty_{\uptau}L^2_y}\| {\bfT}^k\phi\|_{LE}.
\end{split}
\end{align*}
For the error terms of the form $\calO({\dot{\wp}}^{(j)})(\partial_\uprho+\frac{n-1}{2\uprho})\partial_\uptau {\bfT}^{k-j}\phi$, we use the equation for ${\bfT}^{k-j}\phi$ (again see \eqref{eq:Boxm1}) to replace them by terms of the forms that were already handled above, or have better spatial decay.
Next, we use elliptic estimates to obtain energy and local energy estimates for arbitrary, size one, derivatives applied on $\phi$. For this, recall the decomposition of the operator $\mathcal P$ as
\begin{align*}
\begin{split}
\mathcal P=\mathcal P_{\mathrm{ell}}+\mathcal P_\uptau,\qquad {\ \ \text{where} \ \ }\qquad \mathcal P_\uptau=\calO(1)\partial {\bfT}+\calO(\jap{\uprho}^{-1}){\bfT}.
\end{split}
\end{align*}
Using the estimates for ${\bfT}\phi$, we can use elliptic estimates to bound $$\sup_{\sigma_1\leq \sigma \leq\sigma_2}\|\partial_\Sigma\phi\|_{E(\Sigma_\sigma)}+\|\partial_\Sigma\phi\|_{LE[\sigma_1,\sigma_2]}$$ by the right-hand side of \eqref{eq:phienergyLEDboundtemp1}. Here, for the $LE$ norm, the spatial weights can be inserted into the elliptic bound by writing
\begin{align*}
\begin{split}
\phi=\chi_{\leq 1}\phi+\sum_{j\geq 1}\chi_j \phi,
\end{split}
\end{align*}
where $\chi_{\leq 1}(\uprho)$ is supported in the region $\{|\uprho|\leq 1\}$, and $\chi_j(\uprho)$ in the region $\{|\uprho|\simeq 2^j\}$, and applying the elliptic estimate on each annulus separately. The estimates for $\partial_\Sigma^k{\bfT}^j\phi$ are proved similarly.
To upgrade the size one derivatives $\partial_\Sigma$ to vectorfield derivatives $X$ in the exterior, we argue as follows. Suppose we are commuting $X^k$ for some $k>0$. In view of Lemma~\ref{lem:commutators2} we can arrange to commute first the $T$ vectorfields, then the $\Omega$ vectorfields, and last the ${\tilde{r}} L$ vectorfields. The case where all vectorfields are $T$ was already discussed above. When there are $\Omega$ vectorfields, but no ${\tilde{r}} L$ vectorfields, the argument is similar with the following points to keep in mind. First, since the equation in Lemma~\ref{lem:higherordereqs1} was calculated in terms of the conjugated variable ${\tilde{\varphi}}={\tilde{r}}^{\frac{n-1}{2}}\varphi$, we can directly carry out the energy and LED multiplier arguments in this setting. Indeed, with $\chi$ denoting a cutoff supported in the hyperboloidal region of the foliation, for the energy estimate we multiply the equation by $\chi TX^k{\tilde{\varphi}}$ while for the LED we use the two multipliers $\chi \beta (L-{\underline L})X^k{\tilde{\varphi}}$ (for the analogue of \eqref{eq:psifarLEDtemp1.5}) and $\chi \beta' X^k{\tilde{\varphi}}$ (for the analogue of \eqref{eq:psifarLEDtemp2.5}). The errors which result from the derivatives falling on $\chi$ during the integration by parts are then absorbed by the LED estimate for the size one derivatives. Except for the treatment of the error terms of the form $\calO({\dot{\wp}}^{(\ell)})LX^m{\tilde{\varphi}}$ on the right-hand side of \eqref{eq:errorhigher1} (note that since we are not yet commuting ${\tilde{r}} L$ the terms involving ${\widetilde{\calP}}_1$ are not present), the remainder of the energy and LED estimates is similar to what has already been carried out, so we omit the details (see also below for the case $X={\tilde{r}} L$ where more details are worked out). 
The difficulty with the $\calO({\dot{\wp}}^{(\ell)})LX^m{\tilde{\varphi}}$ errors is that, for the part of the multiplier which is of the form $\calO(1){\underline L} X^k{\tilde{\varphi}}$ (with $k\neq m$), we cannot simply use the $\uptau$ decay of ${\dot{\wp}}$ to bound this contribution by the energy, as the unweighted ${\underline L}$ derivatives are not bounded by the energy flux. For the term $\calO({\dot{\wp}}^{(\ell)})LX^m{\tilde{\varphi}}$, recall that in \eqref{eq:phienergyLEDboundtemp1}--\eqref{eq:T2phienergyLEDboundtemp1} we want to estimate the corresponding contribution by $\epsilon^2\sigma_1^{-2j-2}$ for ${\bfT}^j\phi$, $j=0,1,2$ (in the more difficult case $3j+k\leq M-2$). The corresponding term we need to estimate in the multiplier identities is then the space-time integral of
\begin{align*}
\begin{split}
\calO({\dot{\wp}}^{(1+i)})(LX^m{\bfT}^{j-i}{\tilde{\varphi}}) ({\underline L} X^k {\bfT}^j{\tilde{\varphi}}),\quad 0\leq i\leq j.
\end{split}
\end{align*}
Considering the extreme cases $i=j$ and $i=0$, and with the same notation as above this is bounded, using Lemma~\ref{lem:OmegaiTkphi} and the bootstrap assumptions \eqref{eq:a+trap}--\eqref{eq:Tjphienergyb2}, by (here the ${\tilde{r}}^{n-1}$ part of the measure in the $L^2_y$ and $LE$ norms is already incorporated in ${\tilde{\varphi}}$)
\begin{align*}
\begin{split}
\|{\dot{\wp}}^{(1+j)}\|_{L^2_\uptau}\|{\tilde{r}}^{\frac{1+\alpha}{2}}LX^m{\tilde{\varphi}}\|_{L^\infty_\uptau L^2_y}\|X^m{\bfT}^j{\tilde{\varphi}}\|_{LE}\lesssim \epsilon \delta_\wp \sigma_1^{-\frac{1-\alpha}{2}} \|X^m{\bfT}^j{\tilde{\varphi}}\|_{LE}^2\lesssim \epsilon^3 \sigma_1^{-2j-2},
\end{split}
\end{align*}
when $i=j$, and
\begin{align*}
\begin{split}
\|{\dot{\wp}}\|_{L^2_\uptau}\|{\tilde{r}}^{\frac{1+\alpha}{2}}LX^m{\bfT}^j{\tilde{\varphi}}\|_{L^\infty_\uptau L^2_y}\|X^m{\bfT}^j{\tilde{\varphi}}\|_{LE}\lesssim \epsilon^2\sigma_1^{-2+\kappa}\sigma_1^{-j+\frac{1+\alpha}{2}}\|X^m{\bfT}^j{\tilde{\varphi}}\|_{LE}\lesssim \epsilon^3 \sigma_1^{-2j-2},
\end{split}
\end{align*}
when $i=0$.
Finally, we consider the case where some of the $X$ vectorfields are ${\tilde{r}} L$. The only difference from what was already considered above is that now we have to deal with the contribution of ${\widetilde{\calP}}_1$ in \eqref{eq:errorhigher1}. For this, we prove the energy and LED estimates simultaneously, using the multiplier $\chi \beta L X^k{\tilde{\varphi}}$, where $\chi$ is as above. The error terms when derivatives fall on $\chi$ can again be absorbed using the LED estimate for derivatives of size one. For simplicity of notation we consider first the case ${\tilde{r}} L{\tilde{\varphi}}$ (that is, when $k=1$ and $X={\tilde{r}} L$), and then the case $({\tilde{r}} L)^2{\tilde{\varphi}}$ to demonstrate how to treat higher powers of ${\tilde{r}} L$ inductively. One can of course replace ${\tilde{\varphi}}$ by $X^j{\tilde{\varphi}}$ where $X^j$ are a string of $\Omega$ and $T$ vectorfields. In the case $k=1$, a calculation using \eqref{eq:errorhigher1} gives (here $X={\tilde{r}} L$, and we are using the notation of Lemma~\ref{lem:higherordereqs1})
\begin{align}\label{eq:LEXtemp1}
\begin{split}
LX{\tilde{\varphi}}({\widetilde{\calP}} X{\tilde{\varphi}}-{\widetilde{\calP}}_1{\tilde{\varphi}})&=-\frac{1}{2}{\underline L} (LX{\tilde{\varphi}})^2+L({\mathrm{Err}}_L)+\Omega ({\mathrm{Err}}_\Omega)\\
&\quad-\frac{1}{{\tilde{r}}}\sum (\frac{1}{{\tilde{r}}}\Omega X{\tilde{\varphi}})^2-\frac{(n-1)(n-3)}{4{\tilde{r}}}(\frac{X{\tilde{\varphi}}}{{\tilde{r}}})^2-{\tilde{r}} (L^2{\tilde{\varphi}})^2-\frac{1}{{\tilde{r}}}(L\Omega{\tilde{\varphi}})^2\\
&\quad+\calO({\tilde{r}}^{-1})({\tilde{r}}^{-1}\Omega{\tilde{\varphi}})^2+\calO({\tilde{r}}^{-1})({\tilde{r}}^{-1}{\tilde{\varphi}})^2+\calO({\dot{\wp}}){\mathrm{Err}}_\wp,
\end{split}
\end{align}
where
\begin{align*}
\begin{split}
|{\mathrm{Err}}_\wp|+|{\mathrm{Err}}_L|&\lesssim (LX{\tilde{\varphi}})^2+(L{\tilde{\varphi}})^2+(L\Omega{\tilde{\varphi}})^2+({\tilde{r}}^{-1}\Omega X{\tilde{\varphi}})^2\\
&\quad+({\tilde{r}}^{-1}\Omega{\tilde{\varphi}})^2+({\tilde{r}}^{-1}{\underline L} X{\tilde{\varphi}})^2+({\tilde{r}}^{-1}{\underline L}{\tilde{\varphi}})^2+({\tilde{r}}^{-1}X{\tilde{\varphi}})^2+({\tilde{r}}^{-1}{\tilde{\varphi}})^2.
\end{split}
\end{align*}
Multiplying \eqref{eq:LEXtemp1} by $\chi$, integrating, and adding a multiple of the LED estimate for size one derivatives, we get control of (the remaining error terms in \eqref{eq:errorhigher1} have better $\uptau$ or ${\tilde{r}}$ decay and can be handled more easily)
\begin{align*}
\begin{split}
\int_{\Sigma_\tau}\chi (LX{\tilde{\varphi}})^2\mathrm{d}\theta\mathrm{d} r\quad {\ \ \text{and} \ \ } \int_{\sigma_1}^{\sigma_2}\int_{\Sigma_\tau}\chi ({\tilde{r}}(L^2{\tilde{\varphi}})^2+{\tilde{r}}^{-1}(\frac{\Omega}{{\tilde{r}}} X{\tilde{\varphi}})^2+{\tilde{r}}^{-1}(\frac{X{\tilde{\varphi}}}{{\tilde{r}}})^2)\mathrm{d}\theta\mathrm{d} r \mathrm{d}\tau.
\end{split}
\end{align*}
Note that since we already have control of the $LE$ norm of $\phi$, the bulk term ${\tilde{r}} (L^2{\tilde{\varphi}})^2$ gives control of the term ${\tilde{r}}^{-1-\alpha}(LX{\tilde{\varphi}})^2$, but we will need this stronger estimate to treat the higher powers of ${\tilde{r}} L$ inductively. To control the remaining terms in the energy and local energy norms we argue as follows. First, for the energy norm, note that
\begin{align*}
\begin{split}
\frac{1}{{\tilde{r}}}\Omega X{\tilde{\varphi}} = L\Omega{\tilde{\varphi}} + [\Omega,L]{\tilde{\varphi}}\quad {\ \ \text{and} \ \ }\quad\frac{1}{{\tilde{r}}}T X {\tilde{\varphi}}= LT{\tilde{\varphi}} + \frac{T{\tilde{r}}}{{\tilde{r}}} L{\tilde{\varphi}} + [T,L]{\tilde{\varphi}},
\end{split}
\end{align*}
and all of the terms on the right-hand sides are already controlled by the energies of ${\tilde{\varphi}}$, $T{\tilde{\varphi}}$, and $\Omega{\tilde{\varphi}}$. Similarly, for the local energy norm,
\begin{align*}
\begin{split}
{\tilde{r}}^{-1-\alpha}({\underline L} X{\tilde{\varphi}})^2\lesssim {\tilde{r}}^{-1-\alpha}({\underline L} {\tilde{r}})^2(L{\tilde{\varphi}})^2+{\tilde{r}}^{1-\alpha}({\underline L} L{\tilde{\varphi}})^2.
\end{split}
\end{align*}
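Here we have simply expanded $X={\tilde{r}} L$ by the Leibniz rule before squaring:
```latex
\[
{\underline L} X{\tilde{\varphi}}={\underline L}({\tilde{r}} L{\tilde{\varphi}})=({\underline L}{\tilde{r}})\,L{\tilde{\varphi}}+{\tilde{r}}\,{\underline L} L{\tilde{\varphi}},
\]
```
so that squaring and multiplying by ${\tilde{r}}^{-1-\alpha}$ yields the two terms above.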
The first term is already controlled by the local energy norm of $\phi$, while for the second term we use the equation for ${\tilde{\varphi}}$ to replace ${\underline L} L{\tilde{\varphi}}$ by terms which we have already estimated. Next, we consider the error terms when commuting $X^2$, $X={\tilde{r}} L$. The term in the multiplier argument that needs a different treatment is ${\widetilde{\calP}}_1{\tilde{\varphi}} LX^2{\tilde{\varphi}}$, where we no longer want to use the sign of the coefficient $c_{0,2}$ in \eqref{eq:errorhigher1}. These error terms can be estimated using the space-time control of ${\tilde{r}} (L^2{\tilde{\varphi}})^2$ above. For instance, the term $L^2{\tilde{\varphi}}$ in ${\widetilde{\calP}}_1$ contributes terms of the form
\begin{align*}
\begin{split}
{\tilde{r}}^{2-j}L^2{\tilde{\varphi}} L^{3-j}{\tilde{\varphi}},\qquad 0\leq j\leq 2,
\end{split}
\end{align*}
all of which can be estimated in terms of ${\tilde{r}}(L^2{\tilde{\varphi}})^2$ and the $LE$ norm of ${\tilde{\varphi}}$ after a few integrations by parts. The other terms in ${\widetilde{\calP}}_1$ are treated similarly. We can now proceed inductively to prove energy and LED estimates for higher powers of ${\tilde{r}} L$.
\end{proof}
\subsection{Proof of Proposition~\ref{prop:bootstrapphi1}}
We start by proving $\tau^{-2}$ decay for the energy at lower orders, and boundedness at higher orders.
\begin{lemma}\label{lem:energytau2decay}
Suppose the bootstrap assumptions \eqref{eq:a+trap}--\eqref{eq:Tjphienergyb2} hold. Then for $k\leq M-2$,
\begin{align}\label{eq:rpenergydecay1}
\begin{split}
E_k[\phi](\tau)\lesssim \epsilon^2\tau^{-2},
\end{split}
\end{align}
and
\begin{align}\label{eq:rpenergydecay2}
\begin{split}
\int_{\Sigma_\tau}\chi_{\geq {\tilde{R}}}((L+\frac{n-1}{2})X^k\phi)^2 r\mathrm{d} V\lesssim \epsilon^2 \tau^{-1}.
\end{split}
\end{align}
Moreover, for any $k\leq M$,
\begin{align}
E_k[\phi](\tau)+\int_{\Sigma_\tau}\chi_{\geq {\tilde{R}}}\big[r^2((L+\frac{n-1}{2})X^k\phi)^2+r(r^{-1}\Omega X^k\phi)^2+r(r^{-1}X^k\phi)^2\big]\mathrm{d} V\lesssim \epsilon^2.\label{eq:rpenergyboundedness1}
\end{align}
\end{lemma}
\begin{proof}
The estimate for $E_k$ in \eqref{eq:rpenergyboundedness1} follows directly from \eqref{eq:phienergyLEDboundtemp1}. The estimate for the second term on the left-hand side of \eqref{eq:rpenergyboundedness1} follows from Lemma~\ref{lem:rpmult1}. Here the error terms in estimate \eqref{eq:rpmult1} are absorbed by adding a suitable multiple of the local energy bound in \eqref{eq:phienergyLEDboundtemp1}, and the contribution of ${\tilde{f}}_k$ in \eqref{eq:rpmult1} is treated in the same way as in the proof of \eqref{eq:rpenergydecay1} below. We turn to the details for the proof of \eqref{eq:rpenergydecay1}. We introduce some auxiliary notation to avoid repeated long expressions in the proof:
\begin{align*}
\begin{split}
&{\mathscr{E}}_k^p(\tau):=\int_{\Sigma_\tau}\chi_{\leq {\tilde{R}}}|\partial \partial^k\phi|^2\mathrm{d} V+\int_{\Sigma_\tau}\chi_{\geq {\tilde{R}}}((L+\frac{n-1}{2})X^k\phi)^2 r^p\mathrm{d} V,\\
&{\mathscr{B}}_k^p(\tau):=\int_{\Sigma_\tau}\chi_{\leq {\tilde{R}}}|\partial \partial^k\phi|^2\mathrm{d} V+\int_{\Sigma_\tau}\chi_{\geq {\tilde{R}}}\big[((L+\frac{n-1}{2})X^k\phi)^2 +(2-p)(r^{-1}\Omega X^k\phi)^2
\\
&\phantom{{\mathscr{B}}_k^p(\tau):=\int_{\Sigma_\tau}\chi_{\leq {\tilde{R}}}|\partial \partial^k\phi|^2\mathrm{d} V+\int_{\Sigma_\tau}\chi_{\geq {\tilde{R}}}\big[}\quad+(2-p)(r^{-1}X^k\phi)^2+r^{-p-\alpha}(TX^k\phi)^2\big]r^{p-1}\mathrm{d} V.
\end{split}
\end{align*}
Adding a suitable multiple of the LED estimate \eqref{eq:phienergyLEDboundtemp1} at one higher order (to compensate for the degeneracy in the LE norm) to \eqref{eq:rpmult1} in Lemma~\ref{lem:rpmult1} with $\psi=\varphi$ gives, for any $p\leq 2$ and $\sigma_1<\sigma_2$,
\begin{align}\label{eq:energytau2decaytemp0}
\begin{split}
\sum_{j\leq k}{\mathscr{E}}_j^p(\sigma_2)+\sum_{j\leq k}\int_{\sigma_1}^{\sigma_2}{\mathscr{B}}_j^p(\tau)\mathrm{d} \tau\lesssim {\mathscr{E}}_{k+1}^p(\sigma_1)+\sum_{j\leq k}\Big|\int_{\sigma_1}^{\sigma_2}\int_{\Sigma_\tau}\chi_{\geq {\tilde{R}}}{\tilde{f}}_j L{\tilde{\fy}}_j {\tilde{r}}^p\mathrm{d}\theta\mathrm{d} r\mathrm{d}\tau\Big|.
\end{split}
\end{align}
Applying this estimate with $p=2$, $k= M-1$, $\sigma_1=0$, and $\sigma_2=\sigma$ arbitrary, and using the boundedness of ${\mathscr{E}}_{j+1}^2$ for $j\leq M-1$, we get
\begin{align}\label{eq:energytau2decaytemp1}
\begin{split}
\sum_{j\leq {M-1}}{\mathscr{E}}_j^2(\sigma)+\sum_{j\leq M-1}\int_{0}^{\sigma}{\mathscr{B}}_j^2(\tau)\mathrm{d} \tau\lesssim \epsilon^2+\sum_{j\leq M-1}\Big|\int_{0}^{\sigma}\int_{\Sigma_\tau}\chi_{\geq {\tilde{R}}}{\tilde{f}}_j L{\tilde{\fy}}_j {\tilde{r}}^2\mathrm{d}\theta\mathrm{d} r\mathrm{d}\tau\Big|.
\end{split}
\end{align}
We claim that the contribution of the last term is bounded or can be absorbed on the left. Here and in what follows we carry out the details of the calculations for two representative terms in ${\tilde{f}}_k$, corresponding to the contributions of the source term $\mathcal F_0$ and the cubic term $\nabla^\mu\varphi\nabla^\nu\varphi\nabla_{\mu\nu}\varphi$. See \eqref{eq:abstractexteq1} and Lemmas~\ref{lem:sourceext1} and \ref{lem:purelycubic1}. For the contribution of $\mathcal F_0$ we simply use estimate \eqref{eq:calF0tauintegratedtemp1}, and absorb the $LE$ norm on the left. For the contribution of the cubic term, in view of Lemma~\ref{lem:purelycubic1} and the bootstrap assumptions \eqref{eq:wpb1}, \eqref{eq:phiptwiseb1}, \eqref{eq:dphiptwiseb2}, \eqref{eq:dphiptwiseb3}, these terms satisfy the same types of estimates as the error terms on the right-hand side of \eqref{eq:calPVF1}, with additional smallness, so their contribution can be bounded in the same way as in the proof of Lemma~\ref{lem:rpmult1}. Going back to \eqref{eq:energytau2decaytemp1}, we conclude that the left-hand side of this estimate is bounded by~$\epsilon^2$, and therefore there is an increasing dyadic sequence ${\tilde{\tau}}_m$ such that ${\mathscr{B}}_j^2({\tilde{\tau}}_m)\lesssim {\tilde{\tau}}_m^{-1}\epsilon^2$ for $j\leq M-1$. Since ${\mathscr{E}}_j^1\lesssim {\mathscr{B}}_j^2$, another application of \eqref{eq:energytau2decaytemp0}, now on $[{\tilde{\tau}}_{m-1},{\tilde{\tau}}_m]$ and with $p=1$ and $k=M-2$, gives
\begin{align}\label{eq:energytau2decaytemp2}
\begin{split}
\sum_{j\leq M-2}{\mathscr{E}}_j^1({\tilde{\tau}}_m)+\sum_{j\leq M-2}\int_{{\tilde{\tau}}_{m-1}}^{{\tilde{\tau}}_m}{\mathscr{B}}_j^1(\tau)\mathrm{d} \tau\lesssim \epsilon^2{\tilde{\tau}}_m^{-1}+\sum_{j\leq M-2}\Big|\int_{{\tilde{\tau}}_{m-1}}^{{\tilde{\tau}}_m}\int_{\Sigma_\tau}\chi_{\geq {\tilde{R}}}{\tilde{f}}_j L{\tilde{\fy}}_j {\tilde{r}}\mathrm{d}\theta\mathrm{d} r\mathrm{d}\tau\Big|.
\end{split}
\end{align}
Arguing as above, the contribution of the last term on the right can be absorbed or bounded by~$\epsilon^2{\tilde{\tau}}_m^{-1}$, and we can find a possibly different increasing dyadic sequence $\tau_m\simeq {\tilde{\tau}}_m$ such that ${\mathscr{B}}_j^1(\tau_m)\lesssim \epsilon^2\tau_m^{-2}$ for $j\leq M-2$. Since $E_j\lesssim {\mathscr{B}}_j^1$, estimate \eqref{eq:rpenergydecay1} follows from another application of the energy estimate \eqref{eq:phienergyLEDboundtemp1}. Finally for \eqref{eq:rpenergydecay2}, note that by \eqref{eq:energytau2decaytemp2} we already have this estimate for $\tau=\tau_m$. The estimate for all $\tau$ now follows from another application of \eqref{eq:energytau2decaytemp0} with $p=1$ and with arbitrary $\sigma_2\in(\tau_m,\tau_{m+1})$ and $\sigma_1=\tau_m$.
\end{proof}
To improve the pointwise decay assumptions \eqref{eq:wpb1}, \eqref{eq:phiptwiseb1}, \eqref{eq:dphiptwiseb2}, \eqref{eq:dphiptwiseb3} we need better decay for the energies of ${\bfT}\phi$ and ${\bfT}^2\phi$. This is the content of the next lemma.
\begin{lemma}\label{lem:energyhighertaudecay}
Suppose the bootstrap assumptions \eqref{eq:a+trap}--\eqref{eq:Tjphienergyb2} hold. Then for any $k\leq M-5$,
\begin{align}\label{eq:Tenergytau4decay}
\begin{split}
E_k[{\bfT}\phi](\tau)\lesssim \epsilon^2\tau^{-4},
\end{split}
\end{align}
and for any $k\leq M-8$ (with a constant that is independent of \eqref{eq:d2Tphienergyb1}),
\begin{align}\label{eq:T2energytau5decay}
\begin{split}
E_k[{\bfT}^2\phi](\tau)\lesssim \epsilon^2\tau^{-6}.
\end{split}
\end{align}
\end{lemma}
\begin{proof}
The main observation is that in view of Lemma~\ref{lem:higherordereqs1}, in particular using equation \eqref{eq:higher1}, we can estimate (note that in view of \eqref{eq:Tprecise1} the difference between ${\bfT}$ and $T$ comes with factors of ${\dot{\wp}}$ which give extra decay)
\begin{align*}
\begin{split}
\sum_{j\leq k}{\mathscr{E}}^2_j[{\bfT}\phi](\tau)\lesssim \epsilon^2\tau^{-2}+\sum_{j\leq k+1}E_j[\phi](\tau)\lesssim \epsilon^2\tau^{-2}.
\end{split}
\end{align*}
Here for the first inequality we have used the bootstrap assumptions \eqref{eq:wpb1}, \eqref{eq:phiptwiseb1}, \eqref{eq:dphiptwiseb2}, \eqref{eq:dphiptwiseb3} as well as \eqref{eq:calF0constanttautemp1} to estimate the left-hand side of \eqref{eq:higher1}, and for the second inequality we have used Lemma~\ref{lem:energytau2decay}. We can now repeat the proof of Lemma~\ref{lem:energytau2decay}, starting by applying \eqref{eq:energytau2decaytemp0} to ${\bfT}\phi$ on an increasing dyadic sequence $\tau_m$. By the observations we just made, the right-hand side is now bounded by $\epsilon^2\tau_m^{-2}$, so repeating the proof of Lemma~\ref{lem:energytau2decay} we obtain \eqref{eq:Tenergytau4decay}. Returning to \eqref{eq:higher1} and repeating this argument we obtain \eqref{eq:T2energytau5decay}.
\end{proof}
Lemma~\ref{lem:energyhighertaudecay} and the elliptic estimates contained in the next lemma
allow us to obtain decay of arbitrarily high derivative norms of $\phi$. To state the lemma we recall from Remark~\ref{rem:nongeomglobal1}, part~(1), that in the global coordinates $(\uptau,\uprho,\upomega)$, $\mathcal P$ admits the decomposition
\begin{align*}
\begin{split}
\mathcal P=\mathcal P_\uptau+\mathcal P_{\mathrm{ell}},
\end{split}
\end{align*}
satisfying the properties stated there.
\begin{lemma}\label{lem:ellipticestimate1}
Suppose $\mathcal P_{\mathrm{ell}}\uppsi=g$ on $\Sigma_\tau$, and that the bootstrap assumptions~\eqref{eq:a+trap}--\eqref{eq:Tjphienergyb2} hold. Then (here the sums are over the $L^2(\Sigma_\tau)$ inner products with the truncated eigenfunctions $Z_\mu,Z_1,\dots,Z_n$)
\begin{align}\label{eq:ellipticestimate1}
\begin{split}
\|\jap{r}^{-2}\uppsi\|_{L^2(\Sigma_\tau)}+\|\partial^2\uppsi\|_{L^2(\Sigma_\tau)}\lesssim \|g\|_{L^2(\Sigma_\tau)}+\sum|\angles{\uppsi}{Z_i}|+\tau^{-\frac{5}{2}+\kappa}E(\uppsi),
\end{split}
\end{align}
and for $s\in(2,\frac{5}{2})$,
\begin{align}\label{eq:ellipticestimate1.5}
\begin{split}
\|\jap{r}^{-s}\uppsi\|_{L^2(\Sigma_\tau)}\lesssim \|\partial_\Sigma g\|_{L^2(\Sigma_\tau)}^{s-2}\| g\|_{L^2(\Sigma_\tau)}^{3-s}+\sum|\angles{\uppsi}{Z_i}|+\tau^{-\frac{5}{2}+\kappa}\sum_{j\leq 1}E_j(\uppsi).
\end{split}
\end{align}
Moreover, for small ${\tilde{\kappa}}$,
\begin{align}\label{eq:ellipticestimate2}
\begin{split}
\|\partial_\Sigma^3\uppsi\|_{L^2(\Sigma_\tau)}\lesssim \|\partial_\Sigma g\|_{L^2(\Sigma_\tau)}+ \|\partial_\Sigma g\|_{L^2(\Sigma_\tau)}^{\frac{1}{2}-{\tilde{\kappa}}} \|g\|_{L^2(\Sigma_\tau)}^{\frac{1}{2}+{\tilde{\kappa}}}+\sum|\angles{\uppsi}{Z_i}|+\tau^{-\frac{5}{2}+\kappa}\sum_{j\leq 1}E_j(\uppsi).
\end{split}
\end{align}
\end{lemma}
\begin{proof}
Since all norms are on $\Sigma_\tau$ we drop $\Sigma_\tau$ from the notation. Recalling the decomposition $\mathcal P_{\mathrm{ell}}=\Delta_{\underline{\calC}}+V+\mathcal P_{\mathrm{ell}}^{{\mathrm{pert}}}$, let $\uppsi_{\mathrm{far}}$ be a solution to
\begin{align*}
\begin{split}
(\Delta_{\underline{\calC}}+V_{\mathrm{far}})\uppsi_{\mathrm{far}}=g-\mathcal P_{\mathrm{ell}}^{\mathrm{pert}}\uppsi,
\end{split}
\end{align*}
where $V_{\mathrm{far}}$ is a potential that vanishes inside a large compact set, and is equal to $V$ outside a larger compact set. We further decompose $g-\mathcal P_{\mathrm{ell}}^{\mathrm{pert}}\uppsi$ as
\begin{align*}
\begin{split}
g-\mathcal P_{\mathrm{ell}}^{\mathrm{pert}}\uppsi={\tilde{g}}+o_{\wp,{\Green{R}}}(1)(\partial^2_\Sigma\uppsi+\jap{\uprho}^{-1}\partial_\Sigma\uppsi+\jap{\uprho}^{-2}\uppsi),
\end{split}
\end{align*}
where ${\tilde{g}}=g+\calO({\dot{\wp}})\partial_\Sigma\uppsi+\calO({\dot{\wp}})\jap{\uprho}^{-2}\uppsi$.
We will prove estimates \eqref{eq:ellipticestimate1}, \eqref{eq:ellipticestimate1.5}, \eqref{eq:ellipticestimate2} with the last term on the right-hand sides removed, and with $g$ replaced by ${\tilde{g}}$. The desired estimate then follows from the triangle inequality and the bootstrap assumptions. Treating $V_{\mathrm{far}}$ perturbatively, and writing $\varepsilon$ for $o_{\wp,{\Green{R}}}(1)$, we have
\begin{align}\label{eq:lemellipticestimate1temp1}
\begin{split}
\|\jap{\uprho}^{-2}\uppsi_{\mathrm{far}}\|_{L^2}+\|\partial_\Sigma^2\uppsi_{\mathrm{far}}\|_{L^2}\lesssim \|{\tilde{g}}\|_{L^2}+\varepsilon (\|\partial^2_\Sigma\uppsi\|_{L^2}+\|\jap{\uprho}^{-1}\partial_\Sigma\uppsi\|_{L^2}+\|\jap{\uprho}^{-2}\uppsi\|_{L^2}).
\end{split}
\end{align}
Note that $\uppsi_{\mathrm{near}}:=\uppsi-\uppsi_{\mathrm{far}}$ satisfies
\begin{align}\label{eq:lemellipticestimate1temp2}
\begin{split}
(\Delta_{\underline{\calC}}+V)\uppsi_{\mathrm{near}}=V_{\mathrm{near}} \uppsi_{\mathrm{far}},
\end{split}
\end{align}
where $V_{\mathrm{near}}=V_{\mathrm{far}}-V$. We further decompose $\uppsi_{\mathrm{near}}$ as (where the sum is over the truncated eigenfunctions $Z_i$ of $\Delta_{\underline{\calC}}+V$, which we assumed are normalized in $L^2(\Sigma_\tau)$)
\begin{align*}
\begin{split}
&\uppsi_{\mathrm{near}}=\uppsi_{\mathrm{near}}^\perp+\sum\angles{\uppsi_{\mathrm{near}}}{Z_i}Z_i,\qquad \angles{\uppsi_{\mathrm{near}}^\perp}{Z_i}=0,\\
&(\Delta_{\underline{\calC}}+V)\uppsi_{\mathrm{near}}^\perp=V_{\mathrm{near}} \uppsi_{\mathrm{far}}+\sum\angles{\uppsi_{\mathrm{near}}}{Z_i}(\Delta_{\underline{\calC}}+V)Z_i.
\end{split}
\end{align*}
Since $\uppsi_{\mathrm{near}}^\perp$ is orthogonal to the eigenfunctions of $\Delta_{\underline{\calC}}+V$,
\begin{align*}
\begin{split}
\|\jap{\uprho}^{-1}\uppsi_{\mathrm{near}}^\perp\|_{L^2}&\lesssim \|\partial_\Sigma\uppsi_{\mathrm{near}}^\perp\|_{L^2}\lesssim \|\jap{\uprho}V_{\mathrm{near}} \uppsi_{\mathrm{far}}\|_{L^2}+\sum|\angles{\uppsi_{\mathrm{near}}}{Z_i}|\\
&\lesssim \|{\tilde{g}}\|_{L^2}+\sum|\angles{\uppsi}{Z_i}|+\varepsilon (\|\partial^2_\Sigma\uppsi\|_{L^2}+\|\jap{\uprho}^{-1}\partial_\Sigma\uppsi\|_{L^2}+\|\jap{\uprho}^{-2}\uppsi\|_{L^2}),
\end{split}
\end{align*}
where we have used the fact that $V_{\mathrm{near}}$ and $Z_i$ are compactly supported. But using the decomposition $\uppsi_{\mathrm{near}}=\uppsi_{\mathrm{near}}^\perp+\sum\angles{\uppsi_{\mathrm{near}}}{Z_i}Z_i$ we can replace $\|\jap{\uprho}^{-1}\uppsi_{\mathrm{near}}^\perp\|_{L^2}$ on the left-hand side of the estimate above by $\|\jap{\uprho}^{-1}\uppsi_{\mathrm{near}}\|_{L^2}$. Using equation~\eqref{eq:lemellipticestimate1temp2} again,
\begin{align*}
\begin{split}
\|\partial^2_\Sigma \uppsi_{\mathrm{near}}\|_{L^2}&\lesssim \|\jap{\uprho}^{-2}\uppsi_{\mathrm{near}}\|_{L^2}+\|\jap{\uprho}^{-2}\uppsi_{\mathrm{far}}\|_{L^2}\\
&\lesssim \|{\tilde{g}}\|_{L^2}+\sum|\angles{\uppsi}{Z_i}|+\varepsilon (\|\partial^2_\Sigma\uppsi\|_{L^2}+\|\jap{\uprho}^{-1}\partial_\Sigma\uppsi\|_{L^2}+\|\jap{\uprho}^{-2}\uppsi\|_{L^2}).
\end{split}
\end{align*}
Estimate \eqref{eq:ellipticestimate1} follows by combining the last two estimates with \eqref{eq:lemellipticestimate1temp1} and observing that $|\jap{\uprho}^{-2}\uppsi|\lesssim |\jap{\uprho}^{-2}\uppsi_{\mathrm{near}}|+|\jap{\uprho}^{-2}\uppsi_{\mathrm{far}}|$. To prove \eqref{eq:ellipticestimate1.5} and \eqref{eq:ellipticestimate2} we use the global coordinates $(\uptau,\uprho,\upomega)$, with $(\uprho,\upomega)$ coordinates on $\Sigma_\tau$, to define the operator $\mathcal P_{\mathrm{euc}}$ by smoothly modifying the coefficients of $\mathcal P_{\mathrm{ell}}$ such that
\begin{align*}
\begin{split}
\mathcal P_{\mathrm{euc}}=\begin{cases}\Delta_{{\mathrm{euc}}}\quad &\uprho\leq R_{\mathrm{euc}}\\ \mathcal P_{\mathrm{ell}}\quad &\uprho\geq R_{\mathrm{euc}}+1\end{cases}.
\end{split}
\end{align*}
Here $R_{\mathrm{euc}}$ is a fixed large constant and $\Delta_{{\mathrm{euc}}}=\partial_\uprho^2+\frac{4}{\uprho}\partial_\uprho+\frac{1}{\uprho^2}{\slashed{\Delta}}_{\mathbb S^{4}}$. Let $\chi\equiv \chi(\uprho)$ and ${\tilde{\chi}}\equiv{\tilde{\chi}}(\uprho)$ be cutoff functions supported in the large $\uprho$ region such that ${\tilde{\chi}}\chi=\chi$, and let $\upphi_{\mathrm{euc}}$ be the solution to
\begin{align*}
\begin{split}
\mathcal P_{\mathrm{euc}}\upphi_{\mathrm{euc}}= \chi {\tilde{g}},
\end{split}
\end{align*}
and let $\uppsi_{\mathrm{euc}}={\tilde{\chi}}\upphi_{\mathrm{euc}}$. The functions $\upphi_{\mathrm{euc}}$ and $\uppsi_{\mathrm{euc}}$ are defined in terms of the coordinates $(\uprho,\upomega)$, but since $\uppsi_{\mathrm{euc}}$ is supported in the large $\uprho$ region, we can view it as a function on $\Sigma_\tau$ as well. By the Euclidean theory and treating the difference between $\Delta_{\mathrm{euc}}$ and $\mathcal P_{\mathrm{euc}}$ in $\{\uprho\geq R_{\mathrm{euc}}\}$ perturbatively (here fractional derivatives are defined on $\mathbb R^5$ using the coordinates $(\uprho,\upomega)$, $\partial_\Sigma$ denotes the coordinate derivatives $\partial_\uprho$ and $\uprho^{-1}\partial_\upomega$, and the volume form is also as in $\mathbb R^5$, which is comparable with the geometric volume form on $\Sigma_\tau$ for large $\uprho$),
\begin{align}\label{eq:lemellipticestimate1temp4}
\begin{split}
\sum_{j=0}^2\|\jap{\uprho}^{j-s}\partial_\Sigma^j\upphi_{\mathrm{euc}}\|_{L^2}+\|(-\Delta_{{\mathrm{euc}}})^{\frac{s}{2}}\upphi_{\mathrm{euc}}\|_{L^2}\lesssim \|{\tilde{\chi}}\partial_\Sigma {\tilde{g}}\|_{L^2}^{s-2}\|{\tilde{\chi}}{\tilde{g}}\|_{L^2}^{3-s}.
\end{split}
\end{align}
Now $\uppsi_{\mathrm{cat}}:=\uppsi-\uppsi_{\mathrm{euc}}$ satisfies
\begin{align}\label{eq:lemellipticestimate1temp5}
\begin{split}
\mathcal P_{\mathrm{ell}}\uppsi_{\mathrm{cat}}=(1-\chi){\tilde{g}}-[\mathcal P_{\mathrm{ell}},{\tilde{\chi}}]\upphi_{\mathrm{euc}}-{\tilde{\chi}}(\mathcal P_{\mathrm{ell}}-\mathcal P_{\mathrm{euc}})\upphi_{\mathrm{euc}}.
\end{split}
\end{align}
We again decompose $\uppsi_{\mathrm{cat}}$ as $\uppsi_{\mathrm{cat}}=\uppsi_{\mathrm{cat}}^\perp+\sum\angles{\uppsi_{\mathrm{cat}}}{Z_i}Z_i$. In view of the compact support of the right-hand side of \eqref{eq:lemellipticestimate1temp5} (note that the terms involving $\upphi_{\mathrm{euc}}$ are compactly supported in the large $\uprho$ region, so they can be viewed as functions on $\Sigma_\tau$), and by \eqref{eq:ellipticestimate1} and \eqref{eq:lemellipticestimate1temp4}, and arguing as we did for $\uppsi_{\mathrm{near}}$ above,
\begin{align*}
\begin{split}
\|\jap{\uprho}^{-1}\uppsi_{\mathrm{cat}}\|_{L^2}\lesssim \|\partial_\Sigma\uppsi_{\mathrm{cat}}\|_{L^2}\lesssim \|\partial_\Sigma {\tilde{g}}\|_{L^2}^{s-2}\|{\tilde{g}}\|_{L^2}^{3-s}+\sum|\angles{\uppsi}{Z_i}|,
\end{split}
\end{align*}
and \eqref{eq:ellipticestimate1.5} follows by observing that $|\jap{\uprho}^{-s}\uppsi|\lesssim |\jap{\uprho}^{-s}\uppsi_{\mathrm{euc}}|+|\jap{\uprho}^{-1}\uppsi_{\mathrm{cat}}|$. Note that the same argument in fact gives \eqref{eq:ellipticestimate1.5} with $\|\jap{\uprho}^{1-s}\partial_\Sigma \uppsi\|_{L^2}$ added on the left-hand side.
Therefore, to prove \eqref{eq:ellipticestimate2}, in view of the equation $\Delta_{\underline{\calC}}\uppsi=-V\uppsi+{\tilde{g}}+o_{\wp,{\Green{R}}}(1)(\partial^2_\Sigma\uppsi+\jap{\uprho}^{-1}\partial_\Sigma\uppsi+\jap{\uprho}^{-2}\uppsi)$, we can use the already established estimates and elliptic estimates for $\Delta_{\underline{\calC}}$, to get
\begin{align*}
\begin{split}
\|\partial_\Sigma^3\uppsi\|_{L^2}&\lesssim \|\partial_\Sigma {\tilde{g}}\|_{L^2}+\|\partial_\Sigma( V\uppsi)\|_{L^2}\lesssim \|\partial_\Sigma {\tilde{g}}\|_{L^2}+\|\jap{\uprho}^{-\frac{5}{2}+{\tilde{\kappa}}}\uppsi\|_{L^2}+\|\jap{\uprho}^{1-\frac{5}{2}+{\tilde{\kappa}}}\partial_\Sigma\uppsi\|_{L^2} \\
&\lesssim\|\partial_\Sigma {\tilde{g}}\|_{L^2}+\|\partial_\Sigma {\tilde{g}}\|_{L^2}^{\frac{1}{2}-{\tilde{\kappa}}}\|{\tilde{g}}\|_{L^2}^{\frac{1}{2}+{\tilde{\kappa}}}+\sum|\angles{\uppsi}{Z_i}|.\qedhere
\end{split}
\end{align*}
\end{proof}
The $L^2(\Sigma_\tau)$ decay of arbitrary higher derivatives is now a corollary of the previous two lemmas.
\begin{corollary}\label{cor:higherdL2decay}
Suppose the bootstrap assumptions \eqref{eq:a+trap}--\eqref{eq:Tjphienergyb2} hold. Then for $k\leq M-3$,
\begin{align}\label{eq:d2highertaudecay1}
\begin{split}
\|\partial^2_\Sigma (\chi_{\leq {\tilde{R}}}\partial^k\phi)\|_{L^2(\Sigma_\tau)}+\|\partial^2_\Sigma(\chi_{\geq {\tilde{R}}}X^k\phi)\|_{L^2(\Sigma_\tau)}\lesssim \epsilon\tau^{-2},
\end{split}
\end{align}
and for $k\leq M-4$,
\begin{align}\label{eq:d2Thighertaudecay1}
\begin{split}
\|\partial^2_\Sigma (\chi_{\leq {\tilde{R}}}{\bfT}\partial^k\phi)\|_{L^2(\Sigma_\tau)}+\|\partial^2_\Sigma(\chi_{\geq {\tilde{R}}}{\bfT} X^k\phi)\|_{L^2(\Sigma_\tau)}\lesssim \epsilon\tau^{-3}.
\end{split}
\end{align}
For $k\leq M-4$ and $s\geq \frac{5}{2}-\kappa$,
\begin{align}\label{eq:d3highertaudecay1}
\begin{split}
&\|\partial^3_\Sigma (\chi_{\leq {\tilde{R}}}\partial^k\phi)\|_{L^2(\Sigma_\tau)}+\|\partial^3_\Sigma(\chi_{\geq {\tilde{R}}}X^k\phi)\|_{L^2(\Sigma_\tau)}\\
&+\| \chi_{\leq {\tilde{R}}}\partial^k\phi\|_{L^2(\Sigma_\tau)}+\|\jap{r}^{-s}(\chi_{\geq {\tilde{R}}}X^k\phi)\|_{L^2(\Sigma_\tau)}\lesssim \epsilon\tau^{-\frac{5}{2}+\kappa}.
\end{split}
\end{align}
The implicit constants in \eqref{eq:d2highertaudecay1}, \eqref{eq:d2Thighertaudecay1}, \eqref{eq:d3highertaudecay1} are independent of $C_k$ in \eqref{eq:dphiL2b1} and \eqref{eq:d2Tphienergyb1}.
\end{corollary}
\begin{proof}
We start with the decomposition
\begin{align}\label{eq:higherdL2decaytemp1}
\begin{split}
\mathcal P_{\mathrm{ell}}\phi=\mathcal P\phi+\calO(1)\partial {\bfT}\phi+ \calO(r^{-1}){\bfT}\phi.
\end{split}
\end{align}
It follows from Lemmas~\ref{lem:energyhighertaudecay} and~\ref{lem:ellipticestimate1}, as well as \eqref{eq:wpb1}, \eqref{eq:phiptwiseb1}, \eqref{eq:dphiptwiseb2}, \eqref{eq:dphiptwiseb3}, \eqref{eq:calF0constanttautemp1}, that
\begin{align}
\begin{split}
\|\partial^2_\Sigma\phi\|_{L^2(\Sigma_\tau)}\lesssim \epsilon \tau^{-2},
\end{split}
\end{align}
which, using Lemma~\ref{lem:ellipticestimate1} again, implies \eqref{eq:d2highertaudecay1} for $k=0$. Note that here the contribution of $|\angles{\phi}{Z_i}|$ is bounded in terms of $\boldsymbol{\Omega}_i(\phi)$ by applying Lemma~\ref{lem:Omega1}. The case of higher $k$ is derived similarly using the $k$ times commuted equations. For \eqref{eq:d2Thighertaudecay1}, arguing as above we write
\begin{align*}
\begin{split}
\mathcal P_{\mathrm{ell}} {\bfT}\phi=[\mathcal P_{\mathrm{ell}},{\bfT}]\phi+{\bfT}\mathcal P\phi+\calO(1)\partial {\bfT}^2\phi+ \calO(r^{-1}){\bfT}^2\phi+\calO({\dot{\wp}})\partial{\bfT}\phi+\calO({\dot{\wp}} r^{-1}){\bfT}\phi,
\end{split}
\end{align*}
to get
\begin{align*}
\begin{split}
\|\partial^2_\Sigma {\bfT}\phi\|_{L^2(\Sigma_\tau)}\lesssim \epsilon\tau^{-3}.
\end{split}
\end{align*}
Here we have again bounded $\angles{{\bfT}\phi}{Z_i}$ in terms of $\boldsymbol{\Omega}_i({\bfT}\phi)$, which is bounded by $o_{\wp,{R_1}}(1)\epsilon\uptau^{-3}$ by the same arguments as in \eqref{eq:OmegaiTkphi} and Lemma~\ref{lem:Omega1}. Note that the factors $o_{\wp,{R_1}}(1)$ here and $\delta_\wp$ in \eqref{eq:TcalF0constanttautemp1} make the constants in this estimate independent of the bootstrap constants in \eqref{eq:d2Tphienergyb1}. By the same reasoning, and using equation \eqref{eq:higherdL2decaytemp1} and estimates \eqref{eq:calF0constanttautemp1} and \eqref{eq:ellipticestimate2}, we get \eqref{eq:d3highertaudecay1} with a constant that is independent of \eqref{eq:dphiL2b1}. The estimate for higher $k$ is proved similarly.
\end{proof}
Corollary~\ref{cor:higherdL2decay} and the following standard Gagliardo-Nirenberg inequality (which we state without proof) allow us to close the bootstrap assumptions \eqref{eq:phiptwiseb1} and \eqref{eq:dphiptwiseb1}.
\begin{lemma}\label{lem:GN}
For any function $\uppsi\in H^3(\Sigma_\tau)$,
\begin{align*}
\begin{split}
\|\uppsi\|_{L^\infty(\Sigma_\tau)}\lesssim \|\partial_\Sigma^2\uppsi\|_{L^2(\Sigma_\tau)}^{\frac{1}{2}}\|\partial_\Sigma^3\uppsi\|_{L^2(\Sigma_\tau)}^{\frac{1}{2}}.
\end{split}
\end{align*}
\end{lemma}
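As a dimensional consistency check (assuming, as the appearance of $\mathbb S^4$ and $\mathbb R^5$ above suggests, that $\Sigma_\tau$ is five dimensional), the exponents in Lemma~\ref{lem:GN} are the ones forced by scaling on $\mathbb R^5$: with $\uppsi_\lambda(x):=\uppsi(\lambda x)$,
```latex
\[
\|\uppsi_\lambda\|_{L^\infty}=\|\uppsi\|_{L^\infty},\qquad
\|\partial_\Sigma^2\uppsi_\lambda\|_{L^2}=\lambda^{2-\frac{5}{2}}\|\partial_\Sigma^2\uppsi\|_{L^2},\qquad
\|\partial_\Sigma^3\uppsi_\lambda\|_{L^2}=\lambda^{3-\frac{5}{2}}\|\partial_\Sigma^3\uppsi\|_{L^2},
\]
```
so the right-hand side picks up the factor $\lambda^{\frac{1}{2}(2-\frac{5}{2})+\frac{1}{2}(3-\frac{5}{2})}=\lambda^0$, matching the scale-invariant left-hand side.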
We now have all the ingredients to prove Proposition~\ref{prop:bootstrapphi1}.
\begin{proof}[Proof of Proposition~\ref{prop:bootstrapphi1}]
Since the implicit constants in Corollary~\ref{cor:higherdL2decay} are independent of those in \eqref{eq:dphiL2b1} and \eqref{eq:d2Tphienergyb1}, estimates \eqref{eq:dphiL21} and \eqref{eq:d2Tphienergy1} follow if $C$ is chosen sufficiently large. Estimates \eqref{eq:phiptwise1} and \eqref{eq:dphiptwise1} then follow from \eqref{eq:dphiL21} and \eqref{eq:d2Tphienergy1} and Lemma~\ref{lem:GN}. Estimates \eqref{eq:Tjphienergy1} and \eqref{eq:Tjphienergy2} were also already proved in the proof of Lemmas~\ref{lem:energytau2decay} and~\ref{lem:energyhighertaudecay}. To prove \eqref{eq:dphiptwiseb2}, first note that for any function $\uppsi$ and for any~$r_1> {\tilde{R}}\gg1$, by the fundamental theorem of calculus,
\begin{align*}
\begin{split}
\int_{\Sigma_\tau\cap \{r=r_1\}}(r^{\frac{3}{2}}\uppsi)^2\mathrm{d} \theta&\lesssim \int_{\Sigma_\tau\cap \{r={\tilde{R}}\}}(r^{\frac{3}{2}}\uppsi)^2\mathrm{d} \theta+\int_{\Sigma_\tau\cap\{{\tilde{R}}\leq r\leq r_1\}}|r^{\frac{3}{2}}\uppsi||\partial_r(r^{\frac{3}{2}}\uppsi)|\mathrm{d}\theta\mathrm{d} r\\
&\lesssim E[\uppsi](\tau).
\end{split}
\end{align*}
Here to pass to the second line we have used the trace inequality for the integral on $\Sigma_\tau\cap \{r={\tilde{R}}\}$, and \eqref{eq:VFbasis1} to express $\partial_r$ in terms of $L,T,\Omega$ for the integral on $\Sigma_\tau\cap\{{\tilde{R}}\leq r\leq r_1\}$. Applying the Sobolev inequality on the (non-geometric) sphere $\Sigma_\tau\cap \{r=r_1\}$ to $\phi$, and using \eqref{eq:VFbasis1} to express angular derivatives in terms of $L,\Omega,T$, for $r\geq {\tilde{R}}$ we get
\begin{align*}
\begin{split}
|r^{\frac{3}{2}}\phi(\tau)|^2\lesssim \sum_{j\leq 3}E_j[\phi].
\end{split}
\end{align*}
Estimate \eqref{eq:dphiptwiseb2} for $k=0$ now follows from \eqref{eq:rpenergydecay1}, and the estimate for higher $k$ is proved similarly. For \eqref{eq:dphiptwise3} we start with
\begin{align*}
\begin{split}
\int_{\Sigma_\tau\cap \{r=r_1\}}({\tilde{r}}^{2}\uppsi)^2\mathrm{d} \theta&\lesssim \int_{\Sigma_\tau\cap \{r={\tilde{R}}\}}({\tilde{r}}^{2}\uppsi)^2\mathrm{d} \theta+\int_{\Sigma_\tau\cap\{{\tilde{R}}\leq r\leq r_1\}}|{\tilde{r}}^{2}\uppsi||\partial_r({\tilde{r}}^{2}\uppsi)|\mathrm{d}\theta\mathrm{d} r\\
&\lesssim E[\uppsi](\tau)+\Big(\int_{\Sigma_\tau}\chi_{\geq {\tilde{R}}}((L+\frac{n-1}{2{\tilde{r}}})\uppsi)^2 r^2\mathrm{d} V\Big)^{\frac{1}{2}}(E[\uppsi](\tau))^{\frac{1}{2}},
\end{split}
\end{align*}
where we have again used the trace inequality and \eqref{eq:VFbasis1} to pass to the last line. Estimate \eqref{eq:dphiptwise3} now follows by the Sobolev estimate on the sphere $\Sigma_\tau\cap \{r=r_1\}$ as above, as well as \eqref{eq:rpenergydecay1} and~\eqref{eq:rpenergyboundedness1}.
\end{proof}
\bibliographystyle{plain}
\section{Introduction}
Let $F$ be a field and $K$ be a finite Galois extension of $F$. If $a\in K$ is such that $\{\sigma(a):\sigma\in\text{Aut}(K/F)\}$ forms a basis of $K$ over $F$, then $a$ is called a {\em normal element} of $K$ over $F$ and $\{\sigma(a):\sigma\in\text{Aut}(K/F)\}$ is called a {\em normal basis} of $K$ over $F$. The existence of normal bases was proved long ago by Deuring \cite{Deuring-MA-1933}. (For a more accessible reference, see Lang \cite[\S VI.13]{Lang-2002}.) Normal bases have applications in many areas, especially in the theory and applications of finite fields.
Let $\Bbb F_q$ denote the finite field with $q$ elements. An irreducible polynomial $f\in\Bbb F_q[X]$ of degree $n$ is said to be {\em normal} over $\Bbb F_q$ if its roots are linearly independent over $\Bbb F_q$, i.e., the roots form a normal basis of $\Bbb F_{q^n}$ over $\Bbb F_q$. Many theoretical results and computational procedures concerning finite fields utilize normal bases. There is an extensive literature on normal bases of finite fields; see \cite{Gao-thesis-1993}, \cite[\S\S 5.2 -- 5.4]{Mullen-Panario-HF-2013}, and the references therein.
In this paper, we are interested in finding sufficient conditions for an irreducible polynomial over $\Bbb F_q$ to be normal. More precisely, we show that there is a polynomial $h_n(X_1,\dots,X_n)\in\Bbb Z[X_1,\dots,X_n]$, independent of $q$, such that if $f=X^n+a_1X^{n-1}+\cdots+a_n\in\Bbb F_q[X]$ is irreducible and $h_n(a_1,\dots,a_n)\ne 0$, then $f$ is normal over $\Bbb F_q$. The polynomial $h_n$ is computed explicitly for $n\le 5$ and partially for $n=6$. There is also a characteristic specific version of the polynomial $h_n$ which is simpler but has the same property. Let $p=\text{char}\,\Bbb F_q$. There is a polynomial $h_{p,n}(X_1,\dots,X_n)\in\Bbb F_p[X_1,\dots,X_n]$, depending on $p$, such that if $f=X^n+a_1X^{n-1}+\cdots+a_n\in\Bbb F_q[X]$ is irreducible and $h_{p,n}(a_1,\dots,a_n)\ne 0$, then $f$ is normal. We remind the reader that $h_{p,n}$ is not necessarily the reduction of $h_n$ modulo $p$.
The basic idea of our approach is the notion of symmetrization of polynomials.
Let $F$ be a field, $n\ge 2$ be an integer and $f(X_0,\dots,X_{n-1})\in F[X_0,\dots,X_{n-1}]$. We are interested in finding a symmetric polynomial $g(X_0,\dots,X_{n-1})\in F[X_0,\dots,X_{n-1}]$ such that $f\mid g$; we refer to such a $g$ as a {\em symmetrization} of $f$. Naturally, we require the degree of $g$ to be as low as possible. The relevance of this question will become clear in the next section.
Let the symmetric group $S_n$ act on $F[X_0,\dots,X_{n-1}]$ by permuting $X_0,\dots,X_{n-1}$. Let $\text{Stab}(f)=\{\sigma\in S_n:\sigma(f)=f\}$ be the stabilizer of $f$ in $S_n$. Let $\mathcal C$ be a system of representatives of the left cosets of $\text{Stab}(f)$ in $S_n$. Then $g=\prod_{\sigma\in\mathcal C}\sigma(f)$ is a symmetrization of $f$. (Note that $g$ is independent of the choice of $\mathcal C$.) To reduce the degree of the symmetrization of $f$, it is sometimes helpful to factor $f$ first and then determine the symmetrization of each factor.
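For small $n$ this construction is easy to carry out by machine. The following Python sketch is an informal aside, not part of the formal development; the representation of polynomials as coefficient dictionaries and all function names are our own choices. It computes $\text{Stab}(f)$ and the symmetrization $g=\prod_{\sigma\in\mathcal C}\sigma(f)$ by brute force over $S_n$:

```python
from itertools import permutations

# A polynomial in n variables is stored as {exponent_tuple: coefficient}.
def act(sigma, poly):
    # sigma sends X_i to X_{sigma(i)}, so exponent e[i] moves to slot sigma[i]
    out = {}
    for e, c in poly.items():
        new = [0] * len(e)
        for i in range(len(e)):
            new[sigma[i]] = e[i]
        key = tuple(new)
        out[key] = out.get(key, 0) + c
    return {e: c for e, c in out.items() if c}

def mul(a, b):
    # product of two polynomials in the dictionary representation
    out = {}
    for ea, ca in a.items():
        for eb, cb in b.items():
            e = tuple(x + y for x, y in zip(ea, eb))
            out[e] = out.get(e, 0) + ca * cb
    return {e: c for e, c in out.items() if c}

def symmetrize(f, n):
    # g = product of sigma(f) over a system of left-coset
    # representatives of Stab(f) in S_n
    stab = [s for s in permutations(range(n)) if act(s, f) == f]
    seen, reps = set(), []
    for s in permutations(range(n)):
        coset = frozenset(tuple(s[t[i]] for i in range(n)) for t in stab)
        if coset not in seen:
            seen.add(coset)
            reps.append(s)
    g = {(0,) * n: 1}
    for s in reps:
        g = mul(g, act(s, f))
    return g
```

For $f=X_0-X_1$ this returns $-(X_0-X_1)^2$, in agreement with $\Phi_2$ in the appendix, and it returns $\Psi_3$ unchanged, since $\Psi_3$ is already symmetric.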
\section{Symmetrization of $\Delta_n$}
Define
\begin{equation}\label{2.1}
\Delta_n(X_0,\dots,X_{n-1})=\det\left[
\begin{matrix}
X_0&X_1&\cdots&X_{n-1}\cr
X_{n-1}&X_0&\cdots&X_{n-2}\cr
\vdots&\vdots&\ddots&\vdots\cr
X_1&X_2&\cdots&X_0
\end{matrix}\right]\in\Bbb Z[X_0,\dots,X_{n-1}].
\end{equation}
Let $f=X^n+a_1X^{n-1}+\cdots+a_n\in\Bbb F_q[X]$ be irreducible, and let $r_0=r, r_1=r^q,\dots,r_{n-1}=r^{q^{n-1}}$ be the roots of $f$ in $\Bbb F_{q^n}$. It is well known that $r_0,\dots,r_{n-1}$ are linearly independent over $\Bbb F_q$ if and only if $\Delta_n(r_0,\dots,r_{n-1})\ne 0$; see \cite[Lemma~3.51]{Lidl-Niederreiter-FF-1997}.
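This determinant criterion is straightforward to test by machine. The following Python sketch is an informal illustration only (the bitmask realization of $\Bbb F_8$ and the function names are ours). It checks the criterion for the two irreducible cubics over $\Bbb F_2$, whose roots generate $\Bbb F_8$:

```python
from itertools import permutations

def gf_mul(a, b, mod):
    # multiply in F_2[x]/(mod), where mod is a degree-3 modulus bitmask
    r = 0
    while b:
        if b & 1:
            r ^= a
        a <<= 1
        if a & 0b1000:
            a ^= mod
        b >>= 1
    return r

def circulant_det(c, mod):
    # Leibniz expansion of the 3x3 circulant determinant Delta_3 over F_8;
    # in characteristic 2 all signs coincide, so the six terms are XORed
    rows = [[c[0], c[1], c[2]], [c[2], c[0], c[1]], [c[1], c[2], c[0]]]
    d = 0
    for s in permutations(range(3)):
        t = 1
        for i in range(3):
            t = gf_mul(t, rows[i][s[i]], mod)
        d ^= t
    return d

def conjugates(r, mod):
    # r, r^2, r^4: the conjugates of r over F_2
    r2 = gf_mul(r, r, mod)
    return [r, r2, gf_mul(r2, r2, mod)]
```

For $f=X^3+X+1$ the conjugates of a root sum to $0$, so they are $\Bbb F_2$-dependent and $\Delta_3$ vanishes ($f$ is irreducible but not normal); for $f=X^3+X^2+1$ the determinant is nonzero and $f$ is normal.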
Note that $\Delta_n(r_0,\dots,r_{n-1})$ cannot be expressed in terms of the coefficients $a_1,\dots, a_n$. However, let $\Sigma_n$ denote a symmetrization of $\Delta_n$. Then $\Sigma_n(r_0,\dots,r_{n-1})$ is a polynomial in $a_1,\dots, a_n$, and moreover, $\Sigma_n(r_0,\dots,r_{n-1})\ne 0$ implies $\Delta_n(r_0,\dots,r_{n-1})\ne 0$. The purpose of this section is to determine $\Sigma_n$.
We can write
\begin{equation}\label{2.2}
\Delta_n=\prod_{i\in\Bbb Z/n\Bbb Z}\Bigl(\sum_{j\in\Bbb Z/n\Bbb Z}\epsilon_n^{ij}X_j\Bigr),
\end{equation}
where $\epsilon_n=e^{2\pi\sqrt{-1}/n}$. Let
\begin{equation}\label{2.3}
\Psi_n(X_0,\dots,X_{n-1})=\prod_{i\in(\Bbb Z/n\Bbb Z)^\times}\Bigl(\sum_{j\in\Bbb Z/n\Bbb Z}\epsilon_n^{ij}X_j\Bigr).
\end{equation}
Clearly, $\Psi_n$ is invariant under the action of the Galois group $\text{Aut}(\Bbb Q(\epsilon_n)/\Bbb Q)$. Hence $\Psi_n\in\Bbb Q[X_0,\dots,X_{n-1}]$. Since the coefficients of $\Psi_n$ are integral over $\Bbb Z$, we have $\Psi_n\in\Bbb Z[X_0,\dots,X_{n-1}]$. For the formulas of $\Psi_n$ with $n\le 6$, see the appendix A1. From \eqref{2.2} we have
\begin{equation}\label{2.4}
\Delta_n=\prod_{m\mid n}\Psi_m\Bigl(\sum_{j=0}^{n/m-1}X_{mj},\sum_{j=0}^{n/m-1}X_{1+mj},\dots,\sum_{j=0}^{n/m-1}X_{m-1+mj}\Bigr).
\end{equation}
We treat the symmetric group $S_n$ as the permutation group of $\Bbb Z/n\Bbb Z=\{0,1,\dots,n-1\}$ and $S_{n-1}<S_n$ as the permutation group of $\{1,2,\dots,n-1\}$.
For $a\in(\Bbb Z/n\Bbb Z)^\times$ and $b\in\Bbb Z/n\Bbb Z$, let $\alpha_{a,b}$ be the permutation of $\Bbb Z/n\Bbb Z$ defined by the affine map $x\mapsto ax+b$. Let $G_n=\{\alpha_{a,b}:a\in(\Bbb Z/n\Bbb Z)^\times, b\in\Bbb Z/n\Bbb Z\}=\text{AGL}(1,\Bbb Z/n\Bbb Z)<S_n$ and let $G_n^*=\{\alpha_{a,0}: a\in(\Bbb Z/n\Bbb Z)^\times\}<S_{n-1}$.
\begin{lem}\label{L2.1}
We have
\[
\text{\rm Stab}(\Psi_n)=
\begin{cases}
\{\text{\rm id}\}&\text{if}\ n=1,2,\cr
G_n&\text{if}\ n>2.
\end{cases}
\]
\end{lem}
\begin{proof}
The case $n=1$ is trivial. Since $\Psi_2(X_0,X_1)=X_0-X_1$, its stabilizer in $S_2$ is the trivial subgroup $\{\text{id}\}$.
Now assume that $n>2$. For $a\in(\Bbb Z/n\Bbb Z)^\times$, we have
\begin{align*}
\alpha_{a,0}(\Psi_n)\,&=\prod_{i\in(\Bbb Z/n\Bbb Z)^\times}\Bigl(\sum_{j\in\Bbb Z/n\Bbb Z}\epsilon_n^{ij}X_{aj}\Bigr)=\prod_{i\in(\Bbb Z/n\Bbb Z)^\times}\Bigl(\sum_{j\in\Bbb Z/n\Bbb Z}\epsilon_n^{ia^{-1}j}X_{j}\Bigr)\cr
&=\prod_{i\in(\Bbb Z/n\Bbb Z)^\times}\Bigl(\sum_{j\in\Bbb Z/n\Bbb Z}\epsilon_n^{ij}X_{j}\Bigr)=\Psi_n.
\end{align*}
Moreover, we have
\begin{align*}
\alpha_{1,1}(\Psi_n)\,&=\prod_{i\in(\Bbb Z/n\Bbb Z)^\times}\Bigl(\sum_{j\in\Bbb Z/n\Bbb Z}\epsilon_n^{ij}X_{j+1}\Bigr)=\prod_{i\in(\Bbb Z/n\Bbb Z)^\times}\Bigl(\sum_{j\in\Bbb Z/n\Bbb Z}\epsilon_n^{i(j-1)}X_{j}\Bigr)\cr
&=\prod_{i\in(\Bbb Z/n\Bbb Z)^\times}\epsilon_n^{-i}\Bigl(\sum_{j\in\Bbb Z/n\Bbb Z}\epsilon_n^{ij}X_{j}\Bigr)=\epsilon_n^{-\sum_{i\in(\Bbb Z/n\Bbb Z)^\times}i}\prod_{i\in(\Bbb Z/n\Bbb Z)^\times}\Bigl(\sum_{j\in\Bbb Z/n\Bbb Z}\epsilon_n^{ij}X_{j}\Bigr)\cr
&=\epsilon_n^{-\sum_{i\in(\Bbb Z/n\Bbb Z)^\times}i}\Psi_n.
\end{align*}
In the above, since $n>2$, we have
\[
\sum_{i\in(\Bbb Z/n\Bbb Z)^\times}i=0.
\]
Hence $\alpha_{1,1}(\Psi_n)=\Psi_n$. Since $G_n$ is generated by $\alpha_{a,0}$, $a\in(\Bbb Z/n\Bbb Z)^\times$, and $\alpha_{1,1}$, we have $G_n\subset\text{Stab}(\Psi_n)$.
On the other hand, assume that $\sigma\in \text{Stab}(\Psi_n)$. Since $\sigma(\sum_{j=0}^{n-1}\epsilon_n^jX_j)=\sum_{j=0}^{n-1}\epsilon_n^{\sigma^{-1}(j)}X_j$ divides $\sigma(\Psi_n)=\Psi_n=\prod_{i\in(\Bbb Z/n\Bbb Z)^\times}(\sum_{j\in\Bbb Z/n\Bbb Z}\epsilon_n^{ij}X_j)$, there exist $i\in(\Bbb Z/n\Bbb Z)^\times$ and $t\in\Bbb C^\times$ such that
\[
(\epsilon_n^{\sigma^{-1}(0)},\dots,\epsilon_n^{\sigma^{-1}(n-1)})=t(\epsilon_n^{i\cdot 0},\dots,\epsilon_n^{i(n-1)}).
\]
It follows from here that $t=\epsilon_n^{\sigma^{-1}(0)}$ and $\sigma^{-1}(j)=ij+\sigma^{-1}(0)$ for all $j\in\Bbb Z/n\Bbb Z$. Hence $\sigma\in G_n$.
\end{proof}
Let $\mathcal C_n$ be a system of representatives of left cosets of $\text{Stab}(\Psi_n)$ in $S_n$ and let
\begin{equation}\label{2.5}
\Phi_n=\prod_{\sigma\in\mathcal C_n}\sigma(\Psi_n).
\end{equation}
Then $\Phi_n$ is a symmetrization of $\Psi_n$. Obviously,
\[
\Phi_1=X_0,\quad \Phi_2=(X_0-X_1)(X_1-X_0).
\]
For $n>2$, $\mathcal C_n$ is a system of representatives of left cosets of $G_n$ in $S_n$. In this case, we can choose $\mathcal C_n$ to be a system of representatives of left cosets of $G_n^*$ in $S_{n-1}$.
To construct $\mathcal C_n$, we can proceed as follows. Partition $\{1,\dots,n-1\}$ as $P_1\sqcup P_2$, where $P_1=(\Bbb Z/n\Bbb Z)^\times=\{1\le i\le n-1:\text{gcd}(i,n)=1\}$ and $P_2=\{1\le i\le n-1:\text{gcd}(i,n)>1\}$. We denote the permutation group of any $P\subset\{1,\dots,n-1\}$ by $S_P$. Consider a tower of subgroups $G_n^*<S_{P_1}\times S_{P_2}<S_{n-1}$.
Then $S_{P_1\setminus\{1\}}\times S_{P_2}$ is a system of representatives of the left cosets of $G_n^*$ in $S_{P_1}\times S_{P_2}$. Let $\mathcal P=\{P\subset\{1,\dots,n-1\}: |P|=\phi(n)\}$. For each $P\in \mathcal P$, choose any $\sigma_P\in S_{n-1}$ such that $\sigma_P(P_1)=P$. Then $\{\sigma_P:P\in\mathcal P\}$ is a system of representatives of the left cosets of $S_{P_1}\times S_{P_2}$ in $S_{n-1}$. Therefore, we can choose
\begin{equation}\label{Cn}
\mathcal C_n=\{\sigma_P\alpha:\alpha\in S_{P_1\setminus\{1\}}\times S_{P_2}, P\in\mathcal P\},\quad n>2.
\end{equation}
For the formulas of $\Phi_n$ with $1\le n\le 6$, see the appendix A2.
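As a sanity check on \eqref{Cn}, the number of representatives it produces can be compared with the index $[S_n:G_n]=n!/(n\phi(n))$. The following short Python computation is illustrative only (function names are ours); it confirms the counts $3$, $6$ and $60$ of tuples appearing in the lists for $\Phi_4$, $\Phi_5$ and $\Phi_6$ in the appendix:

```python
from math import comb, factorial, gcd

def phi(n):
    # Euler totient by direct count
    return sum(1 for k in range(1, n + 1) if gcd(k, n) == 1)

def num_reps(n):
    # size of C_n as built in the text (n > 2): choose the phi(n)-set P,
    # then an element of S_{P_1 \ {1}} x S_{P_2}
    return comb(n - 1, phi(n)) * factorial(phi(n) - 1) * factorial(n - 1 - phi(n))
```

The identity $\binom{n-1}{\phi(n)}(\phi(n)-1)!\,(n-1-\phi(n))!=(n-1)!/\phi(n)=n!/(n\phi(n))$ confirms that the construction yields exactly one representative per left coset of $G_n$.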
We now determine a symmetrization of
\begin{equation}\label{2.6}
\Psi_m\Bigl(\sum_{j=0}^{n/m-1}X_{mj},\sum_{j=0}^{n/m-1}X_{1+mj},\dots,\sum_{j=0}^{n/m-1}X_{m-1+mj}\Bigr),
\end{equation}
where $m\mid n$. Partition $\Bbb Z/n\Bbb Z$ into blocks $I_i=\{i+mj:0\le j\le n/m-1\}$, $0\le i\le m-1$, and write $Y_i=\sum_{a\in I_i}X_a$. For $G<S_m$, the wreath product $G \kern1pt \wr S_{n/m}$ is the subgroup of $S_n$ consisting of all permutations obtained as follows: first permute the blocks $I_0,\dots, I_{m-1}$ using a permutation from $G$; then permute the elements within each block $I_i$ independently. More precisely,
\[
G\wr S_{n/m}=\{(\sigma;\sigma_0,\dots,\sigma_{m-1}):\sigma\in G, \sigma_i\in S_{n/m}, 0\le i\le m-1\},
\]
where $(\sigma;\sigma_0,\dots,\sigma_{m-1})$ maps $i+mj$ to $\sigma(i)+m\sigma_i(j)$. Note that $(\sigma;\sigma_0,\dots,\sigma_{m-1})(Y_i)=Y_{\sigma(i)}$ for all $0\le i\le m-1$.
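The following Python sketch (informal; the naming is ours) realizes the map $(\sigma;\sigma_0,\dots,\sigma_{m-1})$ above and verifies, for $m=2$ and $n=4$, that the resulting set of permutations has order $|G|\,((n/m)!)^m$, permutes the blocks $I_0,\dots,I_{m-1}$, and is closed under composition:

```python
from itertools import permutations, product

def wreath_elt(sigma, sigmas, m, n):
    # (sigma; sigma_0,...,sigma_{m-1}) sends i + m*j to sigma(i) + m*sigma_i(j);
    # the tuple entry at position x = i + m*j is the image of x
    return tuple(sigma[x % m] + m * sigmas[x % m][x // m] for x in range(n))

def wreath_group(G, m, n):
    # all elements of G wr S_{n/m} inside S_n, for G a list of elements of S_m
    k = n // m
    return {wreath_elt(s, ss, m, n)
            for s in G for ss in product(permutations(range(k)), repeat=m)}
```

For $G=S_2$, $m=2$, $n=4$ this produces a subgroup of $S_4$ of order $2\cdot(2!)^2=8$ preserving the block partition $\{0,2\}\sqcup\{1,3\}$.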
\begin{lem}\label{L2.2}
Let $m\mid n$ and $Y_i=\sum_{j=0}^{n/m-1}X_{i+mj}$. We have
\[
\text{\rm Stab}(\Psi_m(Y_0,\dots,Y_{m-1}))=\text{\rm Stab}(\Psi_m)\wr S_{n/m}.
\]
\end{lem}
\begin{proof}
Let $\sigma\in\text{Stab}(\Psi_m)$ and $\sigma_i\in S_{n/m}$, $0\le i\le m-1$.
Then
\begin{align*}
(\sigma;\sigma_0,\dots,\sigma_{m-1})(\Psi_m(Y_0,\dots,Y_{m-1}))\,&=\Psi_m(Y_{\sigma(0)},\dots,Y_{\sigma(m-1)})\cr
&=\Psi_m(Y_0,\dots,Y_{m-1}).
\end{align*}
Hence $\text{Stab}(\Psi_m)\wr S_{n/m}\subset \text{Stab}(\Psi_m(Y_0,\dots,Y_{m-1}))$.
On the other hand, assume that $\alpha\in \text{Stab}(\Psi_m(Y_0,\dots,Y_{m-1}))$. We first claim that $\alpha$ permutes the blocks $I_0,\dots,I_{m-1}$, i.e., $\alpha\in S_m\wr S_{n/m}$. Assume the contrary. Then there exist some $0\le i\le m-1$ and some $a,b\in I_i$ such that $\alpha(a)\in I_{i_1}$ and $\alpha(b)\in I_{i_2}$, where $i_1\ne i_2$. Since $\alpha\in S_n$ fixes $\Psi_m(Y_0,\dots,Y_{m-1})$, we know that there exist $u\in(\Bbb Z/m\Bbb Z)^\times$ and $t\in\Bbb C^\times$ such that
\[
\alpha\Bigl(\sum_{k=0}^{m-1}\epsilon_m^kY_k\Bigr)=t\Bigl(\sum_{k=0}^{m-1}\epsilon_m^{uk}Y_k\Bigr).
\]
Comparing the coefficients of $X_{\alpha(a)}$ and $X_{\alpha(b)}$ in the above gives
\[
\epsilon_m^i(X_{\alpha(a)}+X_{\alpha(b)})=t(\epsilon_m^{ui_1}X_{\alpha(a)}+\epsilon_m^{ui_2}X_{\alpha(b)}),
\]
which forces $\epsilon_m^{ui_1}=\epsilon_m^{ui_2}$, i.e., $i_1=i_2$ (as $u$ is invertible modulo $m$), a contradiction. Hence the claim is proved. Now write $\alpha=(\sigma;\sigma_0,\dots,\sigma_{m-1})\in S_m\wr S_{n/m}$, where $\sigma\in S_m$ and $\sigma_i\in S_{n/m}$, $0\le i\le m-1$. We have $\Psi_m(Y_0,\dots,Y_{m-1})=\alpha(\Psi_m(Y_0,\dots,Y_{m-1}))=\Psi_m(Y_{\sigma(0)},\dots,Y_{\sigma(m-1)})$. Since $Y_0,\dots,Y_{m-1}$ are independent indeterminates, it follows that $\sigma\in\text{Stab}(\Psi_m)$, i.e., $\alpha\in \text{Stab}(\Psi_m)\wr S_{n/m}$.
\end{proof}
For $m\mid n$, consider a tower of subgroups $\text{Stab}(\Psi_m)\wr S_{n/m}<S_m\wr S_{n/m}<S_n$ and recall that $\mathcal C_m$ is a system of representatives of the left cosets of $\text{Stab}(\Psi_m)$ in $S_m$. It is clear that
\[
\mathcal D_{n,m}:=\{(\sigma;\text{id},\dots,\text{id})\in S_m\wr S_{n/m}:\sigma\in \mathcal C_m\}
\]
is a system of representatives of the left cosets of $\text{Stab}(\Psi_m)\wr S_{n/m}$ in $S_m\wr S_{n/m}$. Let $\mathcal P_{n,m}$ be the set of all unordered partitions of $\{0,1,\dots,n-1\}$ into $m$ parts of size $n/m$. For $\{P_0,\dots,P_{m-1}\}\in\mathcal P_{n,m}$, choose a permutation $\phi_{\{P_0,\dots,P_{m-1}\}}\in S_n$ which maps $I_i$ to $P_i$, $0\le i\le m-1$. Then
\[
\mathcal E_{n,m}:=\{\phi_{\{P_0,\dots,P_{m-1}\}}:\{P_0,\dots,P_{m-1}\}\in\mathcal P_{n,m}\}
\]
is a system of representatives of the left cosets of $S_m\wr S_{n/m}$ in $S_n$. Therefore,
\[
\{\beta\alpha:\alpha\in\mathcal D_{n,m}, \beta\in\mathcal E_{n,m}\}
\]
is a system of representatives of the left cosets of $\text{Stab}(\Psi_m)\wr S_{n/m}$ in $S_n$. Let
\begin{align}\label{theta}
\Theta_{n,m}:\,&=\prod_{\substack{\alpha\in\mathcal D_{n,m}\cr \beta\in\mathcal E_{n,m}}}\beta\alpha(\Psi_m(Y_0,\dots, Y_{m-1}))\\
&=\prod_{\{P_0,\dots,P_{m-1}\}\in\mathcal P_{n,m}}\phi_{\{P_0,\dots,P_{m-1}\}}(\Phi_m(Y_0,\dots,Y_{m-1})).\nonumber
\end{align}
Then $\Theta_{n,m}$ is a symmetrization of $\Psi_m(Y_0,\dots,Y_{m-1})$. To see the second equality in \eqref{theta}, note that
\[
\prod_{\alpha\in\mathcal D_{n,m}}\alpha(\Psi_m(Y_0,\dots, Y_{m-1}))=\Bigl(\prod_{\sigma\in\mathcal C_m}\sigma(\Psi_m)\Big)(Y_0,\dots,Y_{m-1})=\Phi_m(Y_0,\dots,Y_{m-1}).
\]
For the formulas of $\Theta_{n,m}$, $1\le n\le 6$, $m\mid n$, see the appendix A3.
Now
\begin{equation}\label{Sigma}
\Sigma_n:=\prod_{m\mid n}\Theta_{n,m}\in\Bbb Z[X_0,\dots,X_{n-1}],
\end{equation}
is a symmetrization of $\Delta_n$.
\section{A Sufficient Condition for Normal Polynomials}
Let $s_i$ denote the $i$-th elementary symmetric polynomial in $X_0,\dots,X_{n-1}$. In \eqref{Sigma}, since $\Theta_{n,m}\in\Bbb Z[X_0,\dots,X_{n-1}]$ is a symmetric polynomial, there exists $\theta_{n,m}\in\Bbb Z[s_1,\dots,s_n]$ such that
\begin{equation}\label{Theta}
\Theta_{n,m}(X_0,\dots,X_{n-1})=\theta_{n,m} (s_1,\dots,s_n).
\end{equation}
For the formulas of $\theta_{n,m}$ with $1\le n\le 6$, $m\mid n$, $(n,m)\ne(6,6)$, see the appendix A4. The polynomial $\theta_{6,6}$ has degree 120 in $s_1,\dots,s_6$ with 436140 terms, where 436140 is the number of nonnegative integer solutions of $x_1+2x_2+\cdots+6x_6=120$. The computation of $\theta_{6,6}$ is not impossible but would require enormous computing power.
Let
\begin{equation}\label{h_n}
h_n=\prod_{m\mid n}\theta_{n,m}.
\end{equation}
Then
\begin{equation}\label{h}
\Sigma_n=h_n(s_1,s_2,\dots,s_n).
\end{equation}
Our main result is the following criterion for the normality of an irreducible polynomial in terms of its coefficients.
\begin{thm}\label{main}
Let $f=X^n+a_1X^{n-1}+\cdots+a_n\in\Bbb F_q[X]$ be irreducible. If $h_n(a_1,\dots,a_n)\ne 0$, where $h_n$ is defined in \eqref{h_n}, then $f$ is normal over $\Bbb F_q$.
\end{thm}
\begin{proof}
Let $r\in\Bbb F_{q^n}$ be a root of $f$, and let $r_0=r, r_1=r^q,\dots,r_{n-1}=r^{q^{n-1}}$, which are all the roots of $f$. By \eqref{h},
\[
\Sigma_n(-r_0,\dots,-r_{n-1})=h_n(a_1,\dots,a_n)\ne 0.
\]
Since $\Delta_n\mid \Sigma_n$, we have $\Delta_n(-r_0,\dots,-r_{n-1})\ne 0$. Since $\Delta_n$ is homogeneous of degree $n$, it follows that $\Delta_n(r_0,\dots,r_{n-1})\ne 0$, i.e., the roots of $f$ are linearly independent over $\Bbb F_q$. Hence $f$ is normal over $\Bbb F_q$.
\end{proof}
The above result holds in a more general setting. Let $F$ be any field. Let $f\in F[X]$ be a separable irreducible polynomial of degree $n$ and let $K$ be its splitting field over $F$. Such an $f$ is called a {\em normal polynomial} over $F$ if its roots form a basis (hence a normal basis) of $K$ over $F$.
\begin{lem}\label{L3.2} Let $K/F$ be a finite Galois extension of degree $n$ with Galois group $\text{\rm Aut}(K/F)=\{\sigma_1,\dots,\sigma_n\}$. Then $x_1,\dots,x_n\in K$ are linearly independent over $F$ if and only if
\[
\det\left[
\begin{matrix}
\sigma_1(x_1)&\sigma_1(x_2)&\cdots &\sigma_1(x_n)\cr
\sigma_2(x_1)&\sigma_2(x_2)&\cdots &\sigma_2(x_n)\cr
\vdots&\vdots&&\vdots\cr
\sigma_n(x_1)&\sigma_n(x_2)&\cdots &\sigma_n(x_n)
\end{matrix}\right]\ne 0.
\]
\end{lem}
\begin{proof}
($\Rightarrow$)
Assume that $(a_1,\dots,a_n)\in K^n$ is such that
\[
(a_1,\dots,a_n)\left[
\begin{matrix}
\sigma_1(x_1)&\sigma_1(x_2)&\cdots &\sigma_1(x_n)\cr
\sigma_2(x_1)&\sigma_2(x_2)&\cdots &\sigma_2(x_n)\cr
\vdots&\vdots&&\vdots\cr
\sigma_n(x_1)&\sigma_n(x_2)&\cdots &\sigma_n(x_n)
\end{matrix}\right]=0.
\]
This means that $a_1\sigma_1+\cdots+a_n\sigma_n=0$ as a function from $K$ to $K$ since $x_1,\dots,x_n$ form a basis of $K$ over $F$. By the linear independence of characters \cite[Ch.VI, Theorem~4.1]{Lang-2002}, we have $a_1=\cdots=a_n=0$.
\medskip
($\Leftarrow$) Assume to the contrary that there exist $b_1,\dots,b_n\in F$, not all $0$, such that $b_1x_1+\cdots+b_nx_n=0$. Then
\[
\left[
\begin{matrix}
\sigma_1(x_1)&\sigma_1(x_2)&\cdots &\sigma_1(x_n)\cr
\sigma_2(x_1)&\sigma_2(x_2)&\cdots &\sigma_2(x_n)\cr
\vdots&\vdots&&\vdots\cr
\sigma_n(x_1)&\sigma_n(x_2)&\cdots &\sigma_n(x_n)
\end{matrix}\right]
\left[\begin{matrix}b_1\cr\vdots\cr b_n\end{matrix}\right]=0,
\]
which contradicts the nonvanishing of the determinant.
\end{proof}
\begin{thm}\label{T3.3}
Let $F$ be any field and let $f=X^n+a_1X^{n-1}+\cdots+a_n\in F[X]$ be separable and irreducible with a cyclic Galois group over $F$. If $h_n(a_1,\dots,a_n)\ne 0$, where $h_n$ is defined in \eqref{h_n}, then $f$ is normal over $F$.
\end{thm}
\begin{proof}
Using Lemma~\ref{L3.2}, the proof is identical to that of Theorem~\ref{main}.
\end{proof}
\medskip
\noindent{\bf Remark.} Let $f=X^n+a_1X^{n-1}+\cdots+a_n=\prod_{i=0}^{n-1}(X-\alpha_i)$. It is not difficult to see that $h_n(a_1,\dots,a_n)\ne 0$ if and only if
\[
\Delta_n(\alpha_{\sigma(0)},\dots,\alpha_{\sigma(n-1)})\ne 0\quad\text{for all}\ \sigma\in S_n.
\]
\section{A $p$-ary Version}
Let $\text{char}\,\Bbb F_q=p$. The polynomial $h_n$ in \eqref{h_n} is independent of $p$. When taking $p$ into consideration, we get a characteristic specific version of $h_n$ which is simpler but has the same property.
Write $n=p^em$, where $p\nmid m$ and let $\varepsilon_m$ be a primitive $m$-th root of unity (in some extension of $\Bbb F_p$). The polynomial $\Delta_n$ defined in \eqref{2.1} is now treated as a polynomial over $\Bbb F_p$. The $p$-ary version of $\Psi_m$ in \eqref{2.3} is
\begin{equation}\label{Psi-p}
\Psi_{p,m}=\prod_{i\in(\Bbb Z/m\Bbb Z)^\times}\Bigl(\sum_{j\in\Bbb Z/m\Bbb Z}\varepsilon_m^{ij}X_j\Bigr).
\end{equation}
(Note that $\Psi_{p,m}$ is defined only when $p\nmid m$.) In fact, $\Psi_{p,m}$ is the reduction of $\Psi_m$ modulo $p$. To see this claim, let $\frak p$ be a prime of $\Bbb Z[\epsilon_m]$ (the ring of integers of $\Bbb Q(\epsilon_m)$) lying above $p$ and treat $\Bbb F_p(\varepsilon_m)$ as $\Bbb Z[\epsilon_m]/\frak p$.
We have
\begin{align}\label{delta-p}
\Delta_n(X_0,\dots,X_{n-1})\,&=\prod_{i\in\Bbb Z/m\Bbb Z}\Bigl(\sum_{j\in\Bbb Z/m\Bbb Z}\varepsilon_m^{ij}(X_j+X_{j+m}+\cdots+X_{j+(p^e-1)m})\Bigr)^{p^e}\\
&=\Delta_m(Y_0,\dots, Y_{m-1})^{p^e},\nonumber
\end{align}
where $Y_j=\sum_{k=0}^{p^e-1}X_{j+km}$. Moreover,
\begin{align}\label{Delta_mY}
\Delta_m(Y_0,\dots, Y_{m-1})\,&=\prod_{l\mid m}\Psi_{p,l}\Bigl(\;\sum_{j\equiv 0\,\text{(mod\, $l$)}}Y_j,\;\sum_{j\equiv 1\,\text{(mod\, $l$)}}Y_j,\;\dots,\; \sum_{j\equiv l-1\,\text{(mod\, $l$)}}Y_j\Bigr),\\
&=\prod_{l\mid m}\Psi_{p,l}(Z_0,\dots,Z_{l-1}),\nonumber
\end{align}
where
\[
Z_k=\sum_{\substack{0\le j\le m-1\cr j\equiv k\,\text{(mod\, $l$)}}}Y_j=\sum_{\substack{0\le i\le n-1\cr i\equiv k\,\text{(mod\, $l$)}}}X_i.
\]
The computations in Sections 2 and 3 carry through almost entirely in characteristic $p$. For $l\mid m$, let $\Theta_{p,n,l}$ and $\theta_{p,n,l}$ be the reductions of $\Theta_{n,l}$ and $\theta_{n,l}$ modulo $p$, respectively. Then $\Theta_{p,n,l}$ is a symmetrization of $\Psi_{p,l}(Z_0,\dots,Z_{l-1})$ and $\prod_{l\mid m}\Theta_{p,n,l}$
is a symmetrization of $\Delta_m(Y_0,\dots,Y_{m-1})$. Moreover
\[
\Theta_{p,n,l}(X_0,\dots,X_{n-1})=\theta_{p,n,l} (s_1,\dots,s_n).
\]
Let
\begin{equation}\label{hpn}
h_{p,n}=\prod_{l\mid m}\theta_{p,n,l}.
\end{equation}
Then we have the following analogue of Theorem~\ref{main}.
\begin{thm}\label{T4.1}
Let $f=X^n+a_1X^{n-1}+\cdots+a_n\in\Bbb F_q[X]$ be irreducible, where $\text{\rm char}\,\Bbb F_q=p$. If $h_{p,n}(a_1,\dots,a_n)\ne 0$, then $f$ is normal over $\Bbb F_q$.
\end{thm}
\begin{rmk}\label{R4.2}\rm
(i) For $l\mid m$ and $0\le t\le e$, the mod $p$ reduction of $\Theta_{n,p^tl}$ is a power of $\Theta_{p,n,l}$, hence the mod $p$ reduction of $\theta_{n,p^tl}$ is a power of $\theta_{p,n,l}$. Therefore, for $a_1,\dots,a_n\in\Bbb F_q$, $h_n(a_1,\dots,a_n)\ne0$ if and only if $h_{p,n}(a_1,\dots,a_n)\ne0$.
\medskip
(ii) Let $f=X^n+a_1X^{n-1}+\cdots+a_n\in\Bbb F_q[X]$ be irreducible. A necessary condition for $f$ to be normal over $\Bbb F_q$ is that the sum of its roots is nonzero, i.e., $a_1\ne 0$. When $n=p^e$, i.e., $m=1$, we have $h_{p,n}(a_1,\dots,a_n)=a_1$. Thus in this case, $f$ is normal if and only if $a_1\ne 0$. This fact was first proved by Perlis \cite{Perlis-DMJ-1942}.
\medskip
(iii) Assume that $n$ is a prime different from $\text{char}\,\Bbb F_q$. Then \eqref{Delta_mY} becomes
\[
\Delta_n(X_0,\dots,X_{n-1})=(X_0+\cdots+X_{n-1})\Psi_{p,n}(X_0,\dots,X_{n-1}).
\]
Further assume that $q$ is a generator of $(\Bbb Z/n\Bbb Z)^\times$. Let $\alpha\in\Bbb F_{q^n}\setminus\Bbb F_q$. We claim that
\[
\Psi_{p,n}(\alpha,\alpha^q,\dots,\alpha^{q^{n-1}})\ne0.
\]
Assume the contrary. Then $\sum_{j\in\Bbb Z/n\Bbb Z}\varepsilon_n^{ij}\alpha^{q^j}=0$ for some $i\in(\Bbb Z/n\Bbb Z)^\times$. Raising this equation to the power $q^k$, $k\in\Bbb N$, gives
\[
0=\sum_{j\in\Bbb Z/n\Bbb Z}\varepsilon_n^{q^kij}\alpha^{q^{j+k}}=\sum_{j\in\Bbb Z/n\Bbb Z}\varepsilon_n^{q^ki(j-k)}\alpha^{q^j}=\varepsilon_n^{-ikq^k}\sum_{j\in\Bbb Z/n\Bbb Z}\varepsilon_n^{q^kij}\alpha^{q^j},
\]
i.e.,
$\sum_{j\in\Bbb Z/n\Bbb Z}\varepsilon_n^{q^kij}\alpha^{q^j}=0$. Since $q$ generates $(\Bbb Z/n\Bbb Z)^\times$, it follows that $\sum_{j\in\Bbb Z/n\Bbb Z}\varepsilon_n^{ij}\alpha^{q^j}=0$ for {\em all} $i\in(\Bbb Z/n\Bbb Z)^\times$. Consequently, $\alpha=\alpha^q=\cdots=\alpha^{q^{n-1}}$, which is a contradiction. Now, $\Delta_n(\alpha,\alpha^q,\dots,\alpha^{q^{n-1}})\ne 0$ if and only if $\alpha+\alpha^q+\dots+\alpha^{q^{n-1}}\ne 0$.
Therefore, in this case, $a_1\ne 0$ is also a necessary and sufficient condition for $f$ (irreducible) to be normal over $\Bbb F_q$. This result was first proved by Pei et al. \cite{Pei-Wang-Omura-IEEE-IT-1986} for $q=2$.
\medskip
(iv) Chang et al. \cite{Chang-Truong-Reed-JA-2001} proved the converse of (ii) and (iii) in the following sense: If every degree $n$ irreducible polynomial over $\Bbb F_q$ with nonzero trace is normal over $\Bbb F_q$, then $n$ is either a power of $p\,(\,=\text{char}\,\Bbb F_q)$ or a prime different from $p$ such that $q$ is a generator of $(\Bbb Z/n\Bbb Z)^\times$.
\end{rmk}
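Remark~\ref{R4.2}(ii) is easy to confirm by brute force for $q=2$, $n=4$. The following Python sketch is an informal check, not part of the formal development; the bitmask realization of $\Bbb F_{16}=\Bbb F_2[x]/(x^4+x+1)$ and the function names are ours. It lists the three irreducible quartics over $\Bbb F_2$ and tests the $\Bbb F_2$-linear independence of the conjugates of a root:

```python
from itertools import product

def gf16_mul(a, b):
    # multiplication in F_16 = F_2[x]/(x^4 + x + 1); elements are 4-bit ints
    r = 0
    while b:
        if b & 1:
            r ^= a
        a <<= 1
        if a & 16:
            a ^= 0b10011
        b >>= 1
    return r

def gf16_pow(a, k):
    # square-and-multiply exponentiation in F_16
    r = 1
    while k:
        if k & 1:
            r = gf16_mul(r, a)
        a = gf16_mul(a, a)
        k >>= 1
    return r

def evaluate(coeffs, x):
    # Horner evaluation of X^4 + a1 X^3 + a2 X^2 + a3 X + a4 at x in F_16
    v = 1
    for c in coeffs:
        v = gf16_mul(v, x) ^ c
    return v

irred, normal = [], []
for coeffs in product((0, 1), repeat=4):
    a1, a2, a3, a4 = coeffs
    # irreducible quartics over F_2: f(0) = f(1) = 1 and f != (x^2+x+1)^2
    if a4 == 1 and (1 + a1 + a2 + a3 + a4) % 2 == 1 and coeffs != (0, 1, 0, 1):
        irred.append(coeffs)
        r = next(x for x in range(16) if evaluate(coeffs, x) == 0)
        conj = [gf16_pow(r, 2 ** k) for k in range(4)]
        spans = {0}
        for c in conj:
            spans |= {s ^ c for s in spans}   # F_2-span of the conjugates
        if len(spans) == 16:                  # conjugates independent: normal
            normal.append(coeffs)
```

The run finds exactly $N(2,4)=2$ normal polynomials among the three irreducible quartics, and normality coincides with $a_1\ne 0$, as Remark~\ref{R4.2}(ii) predicts for $n=p^e$.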
\section{Number of Normal Polynomials}
Let $N(q,n)$ denote the number of monic normal polynomials of degree $n$ over $\Bbb F_q$. This number is known; see Proposition~\ref{P4.1} below. There are at least two proofs, given in \cite{Akbik-JNT-1992} and \cite[Theorem~3.73]{Lidl-Niederreiter-FF-1997}, respectively. We include a short proof which uses only linear algebra. Let $p=\text{char}\,\Bbb F_q$ and $n=p^et$, where $e\ge 0$ and $t>0$ are integers such that $p\nmid t$. Let $X^t-1=Q_1\cdots Q_k$ be the factorization of $X^t-1$ into irreducibles in $\Bbb F_q[X]$ and let $d_i=\deg Q_i$. The integers $d_1,\dots,d_k$ are the sizes of the orbits in $\Bbb Z/t\Bbb Z$ under multiplication by $q$, i.e., the sizes of the $q$-cyclotomic cosets modulo $t$. For $d\mid t$, let $o_d(q)$ denote the order of $q$ in $(\Bbb Z/d\Bbb Z)^\times$. Then the multiset $\{d_1,\dots,d_k\}$ consists of $o_d(q)$ with multiplicity $\phi(d)/o_d(q)$ for all $d\mid t$.
\begin{prop}\label{P4.1}
In the above notation, we have
\[
N(q,n)=\frac 1n q^{(p^e-1)t}\prod_{i=1}^k(q^{d_i}-1)=\frac 1n q^{(p^e-1)t}\prod_{d\mid t}(q^{o_d(q)}-1)^{\phi(d)/o_d(q)}.
\]
\end{prop}
\begin{proof}
Let $\rho$ be the Frobenius of $\Bbb F_{q^n}/\Bbb F_q$, that is, $\rho(x)=x^q$ for all $x\in\Bbb F_{q^n}$. Then $\rho$ is an $\Bbb F_q$-linear map from $\Bbb F_{q^n}$ to itself. The minimal polynomial of $\rho$ over $\Bbb F_q$ is $X^n-1=\prod_{i=1}^kQ_i^{p^e}$, so the elementary divisors of $\rho$ are $Q_i^{p^e}$, $1\le i\le k$. Therefore, the matrix of $\rho$ is similar to
\[
M:=\left[
\begin{matrix} M_1\cr &\ddots\cr &&M_k\end{matrix}\right],
\]
where $M_i$ is the companion matrix of $Q_i^{p^e}$. We know that $x\in\Bbb F_{q^n}$ is a root of a normal polynomial of degree $n$ over $\Bbb F_q$ if and only if $g(\rho)(x)\ne 0$ for all $g\in\Bbb F_q[X]$ with $g\mid X^n-1$ and $g\ne X^n-1$. Identify $\Bbb F_{q^n}$ with $\Bbb F_q^n$ in a manner compatible with the above matrix $M$. Then $N(q,n)=|\mathcal X|/n$, where
\[
\mathcal X=\{x\in\Bbb F_q^n: g(M)x\ne 0\ \text{for all $g\in\Bbb F_q[X]$ with $g\mid X^n-1$ and $g\ne X^n-1$}\}.
\]
For
\[
x=\left[\begin{matrix}x_1\cr\vdots\cr x_k\end{matrix}\right]\in\Bbb F_q^n,
\]
where $x_i$ is of length $p^ed_i$, we have
\begin{align*}
&g(M)x\ne 0\ \text{for all $g\mid X^n-1$ with $g\ne X^n-1$}\cr
\Leftrightarrow\ &\Bigl(\frac{X^n-1}{Q_i}\Bigr)(M)x\ne 0\ \text{for all $1\le i\le k$}\cr
\Leftrightarrow\ &\Bigl(\frac{X^n-1}{Q_i}\Bigr)(M_i)x_i\ne 0\ \text{for all $1\le i\le k$}.
\end{align*}
By \cite[Lemma~6.11]{Hou-ams-gsm-2018},
\[
\text{nullity}\,\Bigl(\Bigl(\frac{X^n-1}{Q_i}\Bigr)(M_i)\Bigr)=\text{nullity}\,(Q_i^{p^e-1}(M_i))=\deg Q_i^{p^e-1}=(p^e-1)d_i.
\]
Hence
\[
|\mathcal X|=\prod_{i=1}^k(q^{p^ed_i}-q^{(p^e-1)d_i})=\prod_{i=1}^k q^{(p^e-1)d_i}(q^{d_i}-1)=q^{(p^e-1)t}\prod_{i=1}^k(q^{d_i}-1).
\]
\end{proof}
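The closed formula in Proposition~\ref{P4.1} is easy to evaluate. The following Python sketch is illustrative only (the naming is ours); it implements the formula and reproduces several entries of Table~\ref{Tb1}:

```python
from math import gcd

def totient(d):
    # Euler totient by direct count
    return sum(1 for k in range(1, d + 1) if gcd(k, d) == 1)

def mult_order(q, d):
    # multiplicative order of q modulo d; o_1(q) = 1 by convention
    if d == 1:
        return 1
    o, x = 1, q % d
    while x != 1:
        x = x * q % d
        o += 1
    return o

def num_normal(q, p, n):
    # N(q,n) = (1/n) q^{(p^e-1)t} prod_{d|t} (q^{o_d(q)} - 1)^{phi(d)/o_d(q)},
    # where n = p^e t with p not dividing t
    e, t = 0, n
    while t % p == 0:
        e += 1
        t //= p
    val = q ** ((p ** e - 1) * t)
    for d in range(1, t + 1):
        if t % d == 0:
            o = mult_order(q, d)
            val *= (q ** o - 1) ** (totient(d) // o)
    return val // n
```

For instance, `num_normal(2, 2, 6)` and `num_normal(8, 2, 6)` return the table values $N(2,6)=4$ and $N(8,6)=37{,}632$.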
Theorem~\ref{T4.1} is a criterion that allows us to detect the normality of irreducible polynomials from their coefficients. However, since the condition in Theorem~\ref{T4.1} is sufficient but not necessary, irreducible polynomials that do not meet the criterion may still be normal. Naturally, one would like to know how often such ``false negative'' cases occur. Let $N'(q,n)$ denote the number of monic normal polynomials of degree $n$ over $\Bbb F_q$ that are detected by Theorem~\ref{T4.1}. In Table~\ref{Tb1}, we computed $N(q,n)$ for $q\le 19$ and $n\le 6$, and $N'(q,n)$ for the same range except for $(q,n)=(5,6),(7,6),(11,6),(13,6),(17,6),(19,6)$. The result is quite surprising: for the range of $(q,n)$ computed, $N'(q,n)=N(q,n)$ with only two exceptions, $(q,n)=(2,6)$ and $(8,6)$.
\begin{table}[ht]
\caption{$N(q,n)$ vs the numbers of polynomials from Theorem~\ref{T4.1}}\label{Tb1}
\renewcommand*{\arraystretch}{1.2}
\centering
\begin{tabular}{c c|c|c|c}
\hline
$q$ & $n$ & $N(q,n)$ & \kern-0.2em $\begin{array}{ll}\textstyle\text{number of polynomials}\vspace{-0.5em}\cr \textstyle\text{from Theorem~\ref{T4.1}}\end{array}$ \kern-0.2em &\kern-0.2em $\begin{array}{ll}\textstyle\text{Remark~\ref{R4.2}}\vspace{-0.5em}\cr \textstyle\text{(ii) (iii)}\end{array}$ \kern-0.2em \\ \hline
2 & 1 & 1 &1 & 1 \\
& 2 & 1 &1 & 1 \\
& 3 & 1 &1 & 1 \\
& 4 & 2 &2 & 2 \\
& 5 & 3 &3 & 3 \\
& 6 & 4 &0 & \\
\hline
3 & 1 & 2 &2 & 2 \\
& 2 & 2 &2 & 2 \\
& 3 & 6 &6 & 6 \\
& 4 & 8 &8 \\
& 5 & 32 &32 & 32 \\
& 6 & 54 &54 & \\
\hline
4 & 1 & 3 & 3 & 3 \\
& 2 & 6 & 6 & 6 \\
& 3 & 9 & 9 & \\
& 4 & 48 & 48 & 48 \\
& 5 & 135 & 135 & \\
& 6 & 288 & 288 & \\
\hline
5 & 1 & 4 &4 & 4 \\
& 2 & 8 &8 & 8 \\
& 3 & 32 &32 & 32 \\
& 4 & 64 &64 \\
& 5 & 500 &500 & 500 \\
& 6 & 1,536 \\
\hline
7 & 1 & 6 &6 & 6 \\
& 2 & 18 &18 & 18 \\
& 3 & 72 &72 \\
& 4 & 432 &432 \\
& 5 & 2,880 &2,880 & 2,880 \\
& 6 & 7,776 \\
\hline
8 & 1 & 7 & 7 & 7 \\
& 2 & 28 & 28 & 28 \\
& 3 & 147 & 147 & 147 \\
& 4 & 896 & 896 & 896 \\
& 5 & 5,733 & 5,733 & 5,733 \\
& 6 & 37,632 & 35,280 \\
\hline
\end{tabular}
\end{table}
\addtocounter{table}{-1}
\begin{table}[ht]
\caption{continued}
\renewcommand*{\arraystretch}{1.2}
\centering
\begin{tabular}{c c|c|c|c}
\hline
$q$ & $n$ & $N(q,n)$ & \kern-0.2em $\begin{array}{ll}\textstyle\text{number of polynomials}\vspace{-0.5em}\cr \textstyle\text{from Theorem~\ref{T4.1}}\end{array}$ \kern-0.2em &\kern-0.2em $\begin{array}{ll}\textstyle\text{Remark~\ref{R4.2}}\vspace{-0.5em}\cr \textstyle\text{(ii) (iii)}\end{array}$ \kern-0.2em \\ \hline
9 & 1 & 8 & 8 & 8 \\
& 2 & 32 & 32 & 32 \\
& 3 & 216 & 216 & 216 \\
& 4 & 1,024 & 1,024 & \\
& 5 & 10,240 & 10,240 & \\
& 6 & 69,984 & 69,984 & \\
\hline
11 & 1 & 10 & 10 & 10 \\
& 2 & 50 & 50 & 50 \\
& 3 & 400 & 400 & 400 \\
& 4 & 3,000 & 3,000 & \\
& 5 & 20,000 & 20,000 & \\
& 6 & 240,000 & & \\
\hline
13 & 1 & 12 & 12 & 12 \\
& 2 & 72 & 72 & 72 \\
& 3 & 576 & 576 & \\
& 4 & 5,184 & 5,184 & \\
& 5 & 68,544 & 68,544 & 68,544 \\
& 6 & 497,664 & & \\
\hline
16 & 1 & 15 & 15 & 15 \\
& 2 & 120 & 120 & 120 \\
& 3 & 1,125 & 1,125 & \\
& 4 & 15,360 & 15,360 & 15,360 \\
& 5 & 151,875 & 151,875 & \\
& 6 & 2,304,000 & 2,304,000 & \\
\hline
17 & 1 & 16 & 16 & 16 \\
& 2 & 128 & 128 & 128 \\
& 3 & 1,536 & 1,536 & 1,536 \\
& 4 & 16,384 & 16,384 & \\
& 5 & 267,264 & 267,264 & 267,264 \\
& 6 & 3,536,944 & & \\
\hline
19 & 1 & 18 & 18 & 18 \\
& 2 & 162 & 162 & 162 \\
& 3 & 1,944 & 1,944 & \\
& 4 & 29,160 & 29,160 & \\
& 5 & 466,560 & 466,560 & \\
& 6 & 5,668,704 & & \\
\hline
\end{tabular}
\end{table}
\section*{Acknowledgments}
The author thanks Neranga Fernando for the helpful discussions and his assistance in computation.
\section*{Appendix}
\noindent A1. $\Psi_n$, $1\le n\le 6$.
{\tiny
\[
\Psi_1=X_0.
\]
\[
\Psi_2=X_0-X_1.
\]
\[
\Psi_3=X_0^2-X_1 X_0-X_2 X_0+X_1^2+X_2^2-X_1 X_2.
\]
\[
\Psi_4=X_0^2-2 X_2 X_0+X_1^2+X_2^2+X_3^2-2 X_1 X_3.
\]
\[
\longequation{
\Psi_5=X_0^4-X_1 X_0^3-X_2 X_0^3-X_3 X_0^3-X_4 X_0^3+X_1^2 X_0^2+X_2^2 X_0^2+X_3^2
X_0^2+X_4^2 X_0^2+2 X_1 X_2 X_0^2+2 X_1 X_3 X_0^2-3 X_2 X_3 X_0^2-3 X_1
X_4 X_0^2+2 X_2 X_4 X_0^2+2 X_3 X_4 X_0^2-X_1^3 X_0-X_2^3 X_0-X_3^3
X_0-X_4^3 X_0+2 X_1 X_2^2 X_0-3 X_1 X_3^2 X_0+2 X_2 X_3^2 X_0+2 X_1
X_4^2 X_0+2 X_2 X_4^2 X_0-3 X_3 X_4^2 X_0-3 X_1^2 X_2 X_0+2 X_1^2 X_3
X_0+2 X_2^2 X_3 X_0-X_1 X_2 X_3 X_0+2 X_1^2 X_4 X_0-3 X_2^2 X_4 X_0+2
X_3^2 X_4 X_0-X_1 X_2 X_4 X_0-X_1 X_3 X_4 X_0-X_2 X_3 X_4
X_0+X_1^4+X_2^4+X_3^4+X_4^4-X_1 X_2^3-X_1 X_3^3-X_2 X_3^3-X_1 X_4^3-X_2
X_4^3-X_3 X_4^3+X_1^2 X_2^2+X_1^2 X_3^2+X_2^2 X_3^2+2 X_1 X_2 X_3^2+X_1^2
X_4^2+X_2^2 X_4^2+X_3^2 X_4^2-3 X_1 X_2 X_4^2+2 X_1 X_3 X_4^2+2 X_2 X_3
X_4^2-X_1^3 X_2-X_1^3 X_3-X_2^3 X_3-3 X_1 X_2^2 X_3+2 X_1^2 X_2
X_3-X_1^3 X_4-X_2^3 X_4-X_3^3 X_4+2 X_1 X_2^2 X_4+2 X_1 X_3^2 X_4-3 X_2
X_3^2 X_4+2 X_1^2 X_2 X_4-3 X_1^2 X_3 X_4+2 X_2^2 X_3 X_4-X_1 X_2 X_3
X_4.
}
\]
\[
\longequation{
\Psi_6=X_0^2+X_1 X_0-X_2 X_0-2 X_3 X_0-X_4 X_0+X_5
X_0+X_1^2+X_2^2+X_3^2+X_4^2+X_5^2+X_1 X_2-X_1 X_3+X_2 X_3-2 X_1 X_4-X_2
X_4+X_3 X_4-X_1 X_5-2 X_2 X_5-X_3 X_5+X_4 X_5.
}
\]
}
\noindent A2. $\Phi_n$, $1\le n\le 6$.
{\tiny
\[
\Phi_1=X_0.
\]
\[
\Phi_2=(X_0-X_1)(X_1-X_0).
\]
\[
\Phi_3=X_0^2-X_1 X_0-X_2 X_0+X_1^2+X_2^2-X_1 X_2.
\]
\[
\Phi_4=\prod_{(i_1,i_2,i_3)}\Psi_4(X_0,X_{i_1},X_{i_2},X_{i_3}),
\]
where
\[
(i_1,i_2,i_3)=(1, 3, 2), (1, 2, 3), (2, 1, 3).
\]
\[
\Phi_5=\prod_{(i_2,i_3,i_4)}\Psi_5(X_0,X_1,X_{i_2},X_{i_3},X_{i_4}),
\]
where
\[
(i_2,i_3,i_4)=(2, 3, 4), (2, 4, 3), (3, 2, 4), (4, 2, 3), (3, 4, 2),
(4, 3, 2).
\]
\[
\Phi_6=\prod_{(i_1,i_2,i_3,i_4,i_5)}\Psi_6(X_0,X_{i_1},X_{i_2},X_{i_3},X_{i_4},X_{i_5}),
\]
where
\begin{align*}
(i_1,i_2,i_3,i_4,i_5)=\,&
(1,3,4,5,2),(1,3,5,4,2),(1,4,3,5,2),(1,4,5,3,2),(1,5,3,4,2),(1,5,4,3,2),\cr
&(1,2,4,5,3),(1,2,5,4,3),(1,4,2,5,3),(1,4,5,2,3),(1,5,2,4,3),(1,5,4,2,3),\cr
&(1,2,3,5,4),(1,2,5,3,4),(1,3,2,5,4),(1,3,5,2,4),(1,5,2,3,4),(1,5,3,2,4),\cr
&(1,2,3,4,5),(1,2,4,3,5),(1,3,2,4,5),(1,3,4,2,5),(1,4,2,3,5),(1,4,3,2,5),\cr
&(2,1,4,5,3),(2,1,5,4,3),(2,4,1,5,3),(2,4,5,1,3),(2,5,1,4,3),(2,5,4,1,3),\cr
&(2,1,3,5,4),(2,1,5,3,4),(2,3,1,5,4),(2,3,5,1,4),(2,5,1,3,4),(2,5,3,1,4),\cr
&(2,1,3,4,5),(2,1,4,3,5),(2,3,1,4,5),(2,3,4,1,5),(2,4,1,3,5),(2,4,3,1,5),\cr
&(3,1,2,5,4),(3,1,5,2,4),(3,2,1,5,4),(3,2,5,1,4),(3,5,1,2,4),(3,5,2,1,4),\cr
&(3,1,2,4,5),(3,1,4,2,5),(3,2,1,4,5),(3,2,4,1,5),(3,4,1,2,5),(3,4,2,1,5),\cr
&(4,1,2,3,5),(4,1,3,2,5),(4,2,1,3,5),(4,2,3,1,5),(4,3,1,2,5),(4,3,2,1,5).
\end{align*}
}
\noindent A3. $\Theta_{n,m}$, $1\le n\le 6$, $m\mid n$.
{\tiny
\[
\Theta_{1,1}=X_0.
\]
\[
\Theta_{2,1}=X_0+X_1.
\]
\[
\Theta_{2,2}=\Phi_2.
\]
\[
\Theta_{3,1}=X_0 + X_1 + X_2.
\]
\[
\Theta_{3,3}=\Phi_3.
\]
\[
\Theta_{4,1}=X_0 + X_1 + X_2 + X_3.
\]
\[
\Theta_{4,2}=\prod_{(i_1,i_2,i_3)}\Phi_2(X_0+X_{i_2},X_{i_1}+X_{i_3}),
\]
where
\[
(i_1,i_2,i_3)=(2,1,3), (1,2,3), (1,3,2).
\]
\[
\Theta_{4,4}=\Phi_4.
\]
\[
\Theta_{5,1}=X_0 + X_1 + X_2 + X_3 + X_4.
\]
\[
\Theta_{5,5}=\Phi_5.
\]
\[
\Theta_{6,1}=X_0 + X_1 + X_2 + X_3 + X_4 + X_5.
\]
\[
\Theta_{6,2}=\prod_{(i_1,i_2,i_3,i_4,i_5)}\Phi_2(X_0+X_{i_2}+X_{i_4}, X_{i_1}+X_{i_3}+X_{i_5}),
\]
where
\begin{align*}
(i_1,i_2,i_3,i_4,i_5)=\,&(3,1,4,2,5),(2,1,4,3,5),(2,1,3,4,5),(2,1,3,5,4),(1,2,4,3,5),\cr
&(1,2,3,4,5),(1,2,3,5,4),(1,3,2,4,5),(1,3,2,5,4),(1,4,2,5,3).
\end{align*}
\[
\Theta_{6,3}=\prod_{(i_1,i_2,i_3,i_4,i_5)}\Phi_3(X_0+X_{i_3},X_{i_1}+X_{i_4},X_{i_2}+X_{i_5}),
\]
where
\begin{align*}
(i_1,i_2,i_3,i_4,i_5)=\,&(2,4,1,3,5),(2,3,1,4,5),(2,3,1,5,4),(1,4,2,3,5),(1,3,2,4,5),\cr
&(1,3,2,5,4),(1,4,3,2,5),(1,2,3,4,5),(1,2,3,5,4),(1,3,4,2,5),\cr
&(1,2,4,3,5),(1,2,4,5,3),(1,3,5,2,4),(1,2,5,3,4),(1,2,5,4,3).
\end{align*}
\[
\Theta_{6,6}=\Phi_6.
\]
}
\noindent A4. $\theta_{n,m}$, $1\le n\le 6$, $m\mid n$, $(n,m)\ne (6,6)$.
{\tiny
\[
\theta_{1,1}=s_1.
\]
\[
\theta_{2,1}=s_1.
\]
\[
\theta_{2,2}=-s_1^2+4s_2.
\]
\[
\theta_{3,1}=s_1.
\]
\[
\theta_{3,3}=s_1^2-3s_2.
\]
\[
\theta_{4,1}=s_1.
\]
\[
\theta_{4,2}=-s_1^6+8 s_2 s_1^4-16 s_3 s_1^3-16 s_2^2 s_1^2+64 s_2 s_3 s_1-64
s_3^2.
\]
\[
\theta_{4,4}=s_1^6-8 s_2 s_1^4+4 s_3 s_1^3+20 s_2^2 s_1^2-24 s_4 s_1^2-8 s_2 s_3
s_1-16 s_2^3-8 s_3^2+64 s_2 s_4.
\]
\[
\theta_{5,1}=s_1.
\]
\[
\longequation{
\theta_{5,5}=
s_1^{24}-30 s_2 s_1^{22}+20 s_3 s_1^{21}+405 s_2^2 s_1^{20}-50 s_4
s_1^{20}-500 s_2 s_3 s_1^{19}-200 s_5 s_1^{19}-3250 s_2^3
s_1^{18}+150 s_3^2 s_1^{18}+1300 s_2 s_4 s_1^{18}+5500 s_2^2 s_3
s_1^{17}-850 s_3 s_4 s_1^{17}+4750 s_2 s_5 s_1^{17}+17250 s_2^4
s_1^{16}-2950 s_2 s_3^2 s_1^{16}+1125 s_4^2 s_1^{16}-14900 s_2^2 s_4
s_1^{16}-3250 s_3 s_5 s_1^{16}+500 s_3^3 s_1^{15}-35000 s_2^3 s_3
s_1^{15}+17750 s_2 s_3 s_4 s_1^{15}-49250 s_2^2 s_5 s_1^{15}+6500
s_4 s_5 s_1^{15}-63750 s_2^5 s_1^{14}+24500 s_2^2 s_3^2
s_1^{14}-24125 s_2 s_4^2 s_1^{14}+3750 s_5^2 s_1^{14}+99000 s_2^3 s_4
s_1^{14}-5250 s_3^2 s_4 s_1^{14}+61750 s_2 s_3 s_5 s_1^{14}-6750 s_2
s_3^3 s_1^{13}+13500 s_3 s_4^2 s_1^{13}+142500 s_2^4 s_3
s_1^{13}-158125 s_2^2 s_3 s_4 s_1^{13}+292375 s_2^3 s_5
s_1^{13}-15000 s_3^2 s_5 s_1^{13}-123750 s_2 s_4 s_5 s_1^{13}+168125
s_2^6 s_1^{12}+375 s_3^4 s_1^{12}-11250 s_4^3 s_1^{12}-111500 s_2^3
s_3^2 s_1^{12}+221500 s_2^2 s_4^2 s_1^{12}-65625 s_2 s_5^2
s_1^{12}-421750 s_2^4 s_4 s_1^{12}+82125 s_2 s_3^2 s_4
s_1^{12}-498125 s_2^2 s_3 s_5 s_1^{12}+75000 s_3 s_4 s_5
s_1^{12}+33750 s_2^2 s_3^3 s_1^{11}-221250 s_2 s_3 s_4^2
s_1^{11}+62500 s_3 s_5^2 s_1^{11}-387500 s_2^5 s_3 s_1^{11}-13750
s_3^3 s_4 s_1^{11}+783125 s_2^3 s_3 s_4 s_1^{11}-1094375 s_2^4 s_5
s_1^{11}+206250 s_2 s_3^2 s_5 s_1^{11}-87500 s_4^2 s_5
s_1^{11}+993750 s_2^2 s_4 s_5 s_1^{11}-318750 s_2^7 s_1^{10}+1250 s_2
s_3^4 s_1^{10}+190625 s_2 s_4^3 s_1^{10}+301250 s_2^4 s_3^2
s_1^{10}-1135000 s_2^3 s_4^2 s_1^{10}+53125 s_3^2 s_4^2
s_1^{10}+468750 s_2^2 s_5^2 s_1^{10}+6250 s_4 s_5^2 s_1^{10}+1202500
s_2^5 s_4 s_1^{10}-515625 s_2^2 s_3^2 s_4 s_1^{10}-6250 s_3^3 s_5
s_1^{10}+2221875 s_2^3 s_3 s_5 s_1^{10}-1068750 s_2 s_3 s_4 s_5
s_1^{10}-2500 s_3^5 s_1^9-70000 s_2^3 s_3^3 s_1^9-68750 s_3 s_4^3
s_1^9-175000 s_5^3 s_1^9+1471875 s_2^2 s_3 s_4^2 s_1^9-862500 s_2
s_3 s_5^2 s_1^9+712500 s_2^6 s_3 s_1^9+137500 s_2 s_3^3 s_4
s_1^9-2353750 s_2^4 s_3 s_4 s_1^9+2676250 s_2^5 s_5 s_1^9-1143750
s_2^2 s_3^2 s_5 s_1^9+1200000 s_2 s_4^2 s_5 s_1^9-4365625 s_2^3 s_4
s_5 s_1^9+181250 s_3^2 s_4 s_5 s_1^9+431250 s_2^8 s_1^8-37500 s_2^2
s_3^4 s_1^8+115625 s_4^4 s_1^8-1315625 s_2^2 s_4^3 s_1^8-486250 s_2^5
s_3^2 s_1^8+3543750 s_2^4 s_4^2 s_1^8-606250 s_2 s_3^2 s_4^2
s_1^8-1737500 s_2^3 s_5^2 s_1^8+384375 s_3^2 s_5^2 s_1^8+53125 s_2
s_4 s_5^2 s_1^8-2322500 s_2^6 s_4 s_1^8-6250 s_3^4 s_4 s_1^8+1650000
s_2^3 s_3^2 s_4 s_1^8-12500 s_2 s_3^3 s_5 s_1^8-843750 s_3 s_4^2
s_5 s_1^8-5968750 s_2^4 s_3 s_5 s_1^8+6159375 s_2^2 s_3 s_4 s_5
s_1^8+31250 s_2 s_3^5 s_1^7+18750 s_2^4 s_3^3 s_1^7+753125 s_2 s_3
s_4^3 s_1^7+1968750 s_2 s_5^3 s_1^7+93750 s_3^3 s_4^2 s_1^7-5071875
s_2^3 s_3 s_4^2 s_1^7+4550000 s_2^2 s_3 s_5^2 s_1^7+281250 s_3 s_4
s_5^2 s_1^7-875000 s_2^7 s_3 s_1^7-459375 s_2^2 s_3^3 s_4
s_1^7+4390625 s_2^5 s_3 s_4 s_1^7-4271875 s_2^6 s_5 s_1^7+37500
s_3^4 s_5 s_1^7+362500 s_4^3 s_5 s_1^7+3271875 s_2^3 s_3^2 s_5
s_1^7-6446875 s_2^2 s_4^2 s_5 s_1^7+11340625 s_2^4 s_4 s_5
s_1^7-1387500 s_2 s_3^2 s_4 s_5 s_1^7-406250 s_2^9 s_1^6-6250 s_3^6
s_1^6+162500 s_2^3 s_3^4 s_1^6-1246875 s_2 s_4^4 s_1^6+4734375 s_2^3
s_4^3 s_1^6-31250 s_3^2 s_4^3 s_1^6-2343750 s_3 s_5^3 s_1^6+437500
s_2^6 s_3^2 s_1^6-6893750 s_2^5 s_4^2 s_1^6+2534375 s_2^2 s_3^2
s_4^2 s_1^6+3490625 s_2^4 s_5^2 s_1^6-3984375 s_2 s_3^2 s_5^2
s_1^6-843750 s_4^2 s_5^2 s_1^6-1109375 s_2^2 s_4 s_5^2 s_1^6+3000000
s_2^7 s_4 s_1^6-40625 s_2 s_3^4 s_4 s_1^6-2803125 s_2^4 s_3^2 s_4
s_1^6+353125 s_2^2 s_3^3 s_5 s_1^6+7753125 s_2 s_3 s_4^2 s_5
s_1^6+9821875 s_2^5 s_3 s_5 s_1^6-356250 s_3^3 s_4 s_5
s_1^6-18446875 s_2^3 s_3 s_4 s_5 s_1^6-112500 s_2^2 s_3^5
s_1^5+318750 s_3 s_4^4 s_1^5+150000 s_2^5 s_3^3 s_1^5-2993750 s_2^2
s_3 s_4^3 s_1^5-7734375 s_2^2 s_5^3 s_1^5+3125000 s_4 s_5^3
s_1^5-684375 s_2 s_3^3 s_4^2 s_1^5+9521875 s_2^4 s_3 s_4^2
s_1^5+1250000 s_3^3 s_5^2 s_1^5-11093750 s_2^3 s_3 s_5^2 s_1^5+140625
s_2 s_3 s_4 s_5^2 s_1^5+687500 s_2^8 s_3 s_1^5+43750 s_3^5 s_4
s_1^5+506250 s_2^3 s_3^3 s_4 s_1^5-4962500 s_2^6 s_3 s_4
s_1^5+4287500 s_2^7 s_5 s_1^5-150000 s_2 s_3^4 s_5 s_1^5-2750000 s_2
s_4^3 s_5 s_1^5-5050000 s_2^4 s_3^2 s_5 s_1^5+17050000 s_2^3 s_4^2
s_5 s_1^5-1687500 s_3^2 s_4^2 s_5 s_1^5-17428125 s_2^5 s_4 s_5
s_1^5+3590625 s_2^2 s_3^2 s_4 s_5 s_1^5+253125 s_2^{10} s_1^4+25000
s_2 s_3^6 s_1^4-487500 s_4^5 s_1^4-262500 s_2^4 s_3^4 s_1^4+4900000
s_2^2 s_4^4 s_1^4-9381250 s_2^4 s_4^3 s_1^4+18750 s_2 s_3^2 s_4^3
s_1^4+16015625 s_2 s_3 s_5^3 s_1^4-162500 s_2^7 s_3^2 s_1^4+8153125
s_2^6 s_4^2 s_1^4+84375 s_3^4 s_4^2 s_1^4-4625000 s_2^3 s_3^2 s_4^2
s_1^4-3571875 s_2^5 s_5^2 s_1^4+13500000 s_2^2 s_3^2 s_5^2
s_1^4+3984375 s_2 s_4^2 s_5^2 s_1^4+4671875 s_2^3 s_4 s_5^2
s_1^4+312500 s_3^2 s_4 s_5^2 s_1^4-2481250 s_2^8 s_4 s_1^4+431250
s_2^2 s_3^4 s_4 s_1^4+2325000 s_2^5 s_3^2 s_4 s_1^4-181250 s_3^5
s_5 s_1^4-1331250 s_2^3 s_3^3 s_5 s_1^4+2468750 s_3 s_4^3 s_5
s_1^4-25781250 s_2^2 s_3 s_4^2 s_5 s_1^4-9562500 s_2^6 s_3 s_5
s_1^4+3734375 s_2 s_3^3 s_4 s_5 s_1^4+30365625 s_2^4 s_3 s_4 s_5
s_1^4+125000 s_2^3 s_3^5 s_1^3-1390625 s_2 s_3 s_4^4 s_1^3-218750
s_2^6 s_3^3 s_1^3+62500 s_3^3 s_4^3 s_1^3+5109375 s_2^3 s_3 s_4^3
s_1^3+12109375 s_2^3 s_5^3 s_1^3-7812500 s_3^2 s_5^3 s_1^3-19531250
s_2 s_4 s_5^3 s_1^3+1531250 s_2^2 s_3^3 s_4^2 s_1^3-9218750 s_2^5
s_3 s_4^2 s_1^3-7968750 s_2 s_3^3 s_5^2 s_1^3-3906250 s_3 s_4^2
s_5^2 s_1^3+11687500 s_2^4 s_3 s_5^2 s_1^3-8671875 s_2^2 s_3 s_4
s_5^2 s_1^3-312500 s_2^9 s_3 s_1^3-312500 s_2 s_3^5 s_4 s_1^3+125000
s_2^4 s_3^3 s_4 s_1^3+3109375 s_2^7 s_3 s_4 s_1^3-2453125 s_2^8 s_5
s_1^3-125000 s_2^2 s_3^4 s_5 s_1^3+937500 s_4^4 s_5 s_1^3+6671875
s_2^2 s_4^3 s_5 s_1^3+3875000 s_2^5 s_3^2 s_5 s_1^3-22390625 s_2^4
s_4^2 s_5 s_1^3+6687500 s_2 s_3^2 s_4^2 s_5 s_1^3+14656250 s_2^6
s_4 s_5 s_1^3-812500 s_3^4 s_4 s_5 s_1^3-3937500 s_2^3 s_3^2 s_4
s_5 s_1^3-93750 s_2^{11} s_1^2+15625 s_2^2 s_3^6 s_1^2+2203125 s_2
s_4^5 s_1^2+156250 s_2^5 s_3^4 s_1^2-8375000 s_2^3 s_4^4 s_1^2-296875
s_3^2 s_4^4 s_1^2+9734375 s_2^5 s_4^3 s_1^2+437500 s_2^2 s_3^2 s_4^3
s_1^2-29296875 s_2^2 s_3 s_5^3 s_1^2+19531250 s_3 s_4 s_5^3
s_1^2-31250 s_2^8 s_3^2 s_1^2-5359375 s_2^7 s_4^2 s_1^2-312500 s_2
s_3^4 s_4^2 s_1^2+3250000 s_2^4 s_3^2 s_4^2 s_1^2+1515625 s_2^6
s_5^2 s_1^2+2343750 s_3^4 s_5^2 s_1^2-3906250 s_4^3 s_5^2
s_1^2-15078125 s_2^3 s_3^2 s_5^2 s_1^2-1171875 s_2^2 s_4^2 s_5^2
s_1^2-6484375 s_2^4 s_4 s_5^2 s_1^2+8203125 s_2 s_3^2 s_4 s_5^2
s_1^2+1187500 s_2^9 s_4 s_1^2+62500 s_3^6 s_4 s_1^2-828125 s_2^3
s_3^4 s_4 s_1^2-609375 s_2^6 s_3^2 s_4 s_1^2+1312500 s_2 s_3^5 s_5
s_1^2+2250000 s_2^4 s_3^3 s_5 s_1^2-12265625 s_2 s_3 s_4^3 s_5
s_1^2+1250000 s_3^3 s_4^2 s_5 s_1^2+37734375 s_2^3 s_3 s_4^2 s_5
s_1^2+4953125 s_2^7 s_3 s_5 s_1^2-9468750 s_2^2 s_3^3 s_4 s_5
s_1^2-25875000 s_2^5 s_3 s_4 s_5 s_1^2-31250 s_2 s_3^7 s_1-31250
s_2^4 s_3^5 s_1-156250 s_3 s_4^5 s_1+1609375 s_2^2 s_3 s_4^4
s_1+93750 s_2^7 s_3^3 s_1+46875 s_2 s_3^3 s_4^3 s_1-3125000 s_2^4
s_3 s_4^3 s_1-5859375 s_2^4 s_5^3 s_1+19531250 s_2 s_3^2 s_5^3
s_1+29296875 s_2^2 s_4 s_5^3 s_1+31250 s_3^5 s_4^2 s_1-1031250 s_2^3
s_3^3 s_4^2 s_1+3593750 s_2^6 s_3 s_4^2 s_1+9375000 s_2^2 s_3^3
s_5^2 s_1+5859375 s_2 s_3 s_4^2 s_5^2 s_1-3593750 s_2^5 s_3 s_5^2
s_1-7812500 s_3^3 s_4 s_5^2 s_1+14453125 s_2^3 s_3 s_4 s_5^2
s_1+62500 s_2^{10} s_3 s_1+437500 s_2^2 s_3^5 s_4 s_1-359375 s_2^5
s_3^3 s_4 s_1-828125 s_2^8 s_3 s_4 s_1+609375 s_2^9 s_5 s_1-312500
s_3^6 s_5 s_1+93750 s_2^3 s_3^4 s_5 s_1-1562500 s_2 s_4^4 s_5
s_1-5078125 s_2^3 s_4^3 s_5 s_1+2343750 s_3^2 s_4^3 s_5 s_1-1109375
s_2^6 s_3^2 s_5 s_1+11781250 s_2^5 s_4^2 s_5 s_1-6796875 s_2^2
s_3^2 s_4^2 s_5 s_1-5187500 s_2^7 s_4 s_5 s_1+625000 s_2 s_3^4 s_4
s_5 s_1+1468750 s_2^4 s_3^2 s_4 s_5 s_1+15625 s_2^{12}+15625
s_3^8-31250 s_2^3 s_3^6+390625 s_4^6-2500000 s_2^2 s_4^5-15625 s_2^6
s_3^4+5250000 s_2^4 s_4^4+546875 s_2 s_3^2 s_4^4-4156250 s_2^6
s_4^3-156250 s_3^4 s_4^3-906250 s_2^3 s_3^2 s_4^3+9765625 s_2^3 s_3
s_5^3-48828125 s_2 s_3 s_4 s_5^3+31250 s_2^9 s_3^2+1500000 s_2^8
s_4^2+218750 s_2^2 s_3^4 s_4^2-296875 s_2^5 s_3^2 s_4^2-156250 s_2^7
s_5^2-1953125 s_2 s_3^4 s_5^2+9765625 s_2 s_4^3 s_5^2+2343750 s_2^4
s_3^2 s_5^2-7812500 s_2^3 s_4^2 s_5^2+9765625 s_3^2 s_4^2
s_5^2+1953125 s_2^5 s_4 s_5^2-11718750 s_2^2 s_3^2 s_4 s_5^2-250000
s_2^{10} s_4-109375 s_2 s_3^6 s_4+359375 s_2^4 s_3^4 s_4-140625
s_2^7 s_3^2 s_4-1093750 s_2^2 s_3^5 s_5-3906250 s_3 s_4^4
s_5-1328125 s_2^5 s_3^3 s_5+13671875 s_2^2 s_3 s_4^3 s_5-2734375 s_2
s_3^3 s_4^2 s_5-20937500 s_2^4 s_3 s_4^2 s_5-1015625 s_2^8 s_3
s_5+781250 s_3^5 s_4 s_5+7968750 s_2^3 s_3^3 s_4 s_5+8750000 s_2^6
s_3 s_4 s_5.
}
\]
\[
\theta_{6,1}=s_1.
\]
\[
\longequation{
\theta_{6,2}=
s_1^{20}-24 s_2 s_1^{18}+48 s_3 s_1^{17}+240 s_2^2 s_1^{16}-32 s_4
s_1^{16}-960 s_2 s_3 s_1^{15}-64 s_5 s_1^{15}-1280 s_2^3 s_1^{14}+960
s_3^2 s_1^{14}+640 s_2 s_4 s_1^{14}-640 s_6 s_1^{14}+7680 s_2^2 s_3
s_1^{13}-1280 s_3 s_4 s_1^{13}+1280 s_2 s_5 s_1^{13}+3840 s_2^4
s_1^{12}-15360 s_2 s_3^2 s_1^{12}+256 s_4^2 s_1^{12}-5120 s_2^2 s_4
s_1^{12}-2048 s_3 s_5 s_1^{12}+10752 s_2 s_6 s_1^{12}+10240 s_3^3
s_1^{11}-30720 s_2^3 s_3 s_1^{11}+20480 s_2 s_3 s_4 s_1^{11}-10240
s_2^2 s_5 s_1^{11}+1024 s_4 s_5 s_1^{11}-19456 s_3 s_6 s_1^{11}-6144
s_2^5 s_1^{10}+92160 s_2^2 s_3^2 s_1^{10}-4096 s_2 s_4^2
s_1^{10}-1024 s_5^2 s_1^{10}+20480 s_2^3 s_4 s_1^{10}-20480 s_3^2 s_4
s_1^{10}+32768 s_2 s_3 s_5 s_1^{10}-69632 s_2^2 s_6 s_1^{10}+18432
s_4 s_6 s_1^{10}-122880 s_2 s_3^3 s_1^9+8192 s_3 s_4^2 s_1^9+61440
s_2^4 s_3 s_1^9-122880 s_2^2 s_3 s_4 s_1^9+40960 s_2^3 s_5
s_1^9-24576 s_3^2 s_5 s_1^9-16384 s_2 s_4 s_5 s_1^9+245760 s_2 s_3
s_6 s_1^9+20480 s_5 s_6 s_1^9+4096 s_2^6 s_1^8+61440 s_3^4
s_1^8-245760 s_2^3 s_3^2 s_1^8+24576 s_2^2 s_4^2 s_1^8+8192 s_2
s_5^2 s_1^8+102400 s_6^2 s_1^8-40960 s_2^4 s_4 s_1^8+245760 s_2
s_3^2 s_4 s_1^8-196608 s_2^2 s_3 s_5 s_1^8+24576 s_3 s_4 s_5
s_1^8+212992 s_2^3 s_6 s_1^8-221184 s_3^2 s_6 s_1^8-229376 s_2 s_4
s_6 s_1^8+491520 s_2^2 s_3^3 s_1^7-98304 s_2 s_3 s_4^2 s_1^7-32768
s_3 s_5^2 s_1^7-49152 s_2^5 s_3 s_1^7-163840 s_3^3 s_4 s_1^7+327680
s_2^3 s_3 s_4 s_1^7-81920 s_2^4 s_5 s_1^7+294912 s_2 s_3^2 s_5
s_1^7+98304 s_2^2 s_4 s_5 s_1^7-1081344 s_2^2 s_3 s_6 s_1^7+425984
s_3 s_4 s_6 s_1^7-262144 s_2 s_5 s_6 s_1^7-491520 s_2 s_3^4
s_1^6+245760 s_2^4 s_3^2 s_1^6-65536 s_2^3 s_4^2 s_1^6+98304 s_3^2
s_4^2 s_1^6+32768 s_4 s_5^2 s_1^6-983040 s_2 s_6^2 s_1^6+32768 s_2^5
s_4 s_1^6-983040 s_2^2 s_3^2 s_4 s_1^6-131072 s_3^3 s_5 s_1^6+524288
s_2^3 s_3 s_5 s_1^6-294912 s_2 s_3 s_4 s_5 s_1^6-294912 s_2^4 s_6
s_1^6+1867776 s_2 s_3^2 s_6 s_1^6-131072 s_4^2 s_6 s_1^6+983040
s_2^2 s_4 s_6 s_1^6+294912 s_3 s_5 s_6 s_1^6+196608 s_3^5
s_1^5-655360 s_2^3 s_3^3 s_1^5+65536 s_5^3 s_1^5+393216 s_2^2 s_3
s_4^2 s_1^5+196608 s_2 s_3 s_5^2 s_1^5+1310720 s_3 s_6^2
s_1^5+1310720 s_2 s_3^3 s_4 s_1^5-327680 s_2^4 s_3 s_4 s_1^5+65536
s_2^5 s_5 s_1^5-1179648 s_2^2 s_3^2 s_5 s_1^5-262144 s_2^3 s_4 s_5
s_1^5+196608 s_3^2 s_4 s_5 s_1^5-1114112 s_3^3 s_6 s_1^5+1835008
s_2^3 s_3 s_6 s_1^5-3538944 s_2 s_3 s_4 s_6 s_1^5+1179648 s_2^2
s_5 s_6 s_1^5-262144 s_4 s_5 s_6 s_1^5+983040 s_2^2 s_3^4
s_1^4+65536 s_2^4 s_4^2 s_1^4-786432 s_2 s_3^2 s_4^2 s_1^4-131072
s_2^3 s_5^2 s_1^4-327680 s_3^2 s_5^2 s_1^4-262144 s_2 s_4 s_5^2
s_1^4+3014656 s_2^2 s_6^2 s_1^4-2621440 s_4 s_6^2 s_1^4-655360 s_3^4
s_4 s_1^4+1310720 s_2^3 s_3^2 s_4 s_1^4+1048576 s_2 s_3^3 s_5
s_1^4-524288 s_2^4 s_3 s_5 s_1^4+1179648 s_2^2 s_3 s_4 s_5
s_1^4+131072 s_2^5 s_6 s_1^4-4325376 s_2^2 s_3^2 s_6 s_1^4+1048576
s_2 s_4^2 s_6 s_1^4+655360 s_5^2 s_6 s_1^4-1572864 s_2^3 s_4 s_6
s_1^4+3276800 s_3^2 s_4 s_6 s_1^4-2490368 s_2 s_3 s_5 s_6
s_1^4-786432 s_2 s_3^5 s_1^3-524288 s_2 s_5^3 s_1^3+524288 s_3^3
s_4^2 s_1^3-524288 s_2^3 s_3 s_4^2 s_1^3+524288 s_3 s_4 s_5^2
s_1^3-6291456 s_2 s_3 s_6^2 s_1^3-2621440 s_2^2 s_3^3 s_4
s_1^3-262144 s_3^4 s_5 s_1^3+1572864 s_2^3 s_3^2 s_5 s_1^3+262144
s_2^4 s_4 s_5 s_1^3-1572864 s_2 s_3^2 s_4 s_5 s_1^3+4718592 s_2
s_3^3 s_6 s_1^3-2097152 s_3 s_4^2 s_6 s_1^3-786432 s_2^4 s_3 s_6
s_1^3+7864320 s_2^2 s_3 s_4 s_6 s_1^3-2097152 s_2^3 s_5 s_6
s_1^3+1048576 s_3^2 s_5 s_6 s_1^3+2097152 s_2 s_4 s_5 s_6
s_1^3+262144 s_3^6 s_1^2+524288 s_3 s_5^3 s_1^2+1572864 s_2^2 s_3^2
s_4^2 s_1^2+262144 s_2^4 s_5^2 s_1^2+1048576 s_2 s_3^2 s_5^2
s_1^2+524288 s_2^2 s_4 s_5^2 s_1^2-3145728 s_2^3 s_6^2 s_1^2+4194304
s_3^2 s_6^2 s_1^2+12582912 s_2 s_4 s_6^2 s_1^2+2621440 s_2 s_3^4
s_4 s_1^2-2097152 s_2^2 s_3^3 s_5 s_1^2+524288 s_3^3 s_4 s_5
s_1^2-1572864 s_2^3 s_3 s_4 s_5 s_1^2-2097152 s_3^4 s_6
s_1^2+1572864 s_2^3 s_3^2 s_6 s_1^2-2097152 s_2^2 s_4^2 s_6
s_1^2-3145728 s_2 s_5^2 s_6 s_1^2+524288 s_2^4 s_4 s_6
s_1^2-13631488 s_2 s_3^2 s_4 s_6 s_1^2+5767168 s_2^2 s_3 s_5 s_6
s_1^2-2097152 s_3 s_4 s_5 s_6 s_1^2+1048576 s_2^2 s_5^3 s_1-2097152
s_2 s_3^3 s_4^2 s_1-1048576 s_3^3 s_5^2 s_1-1048576 s_2^3 s_3 s_5^2
s_1-2097152 s_2 s_3 s_4 s_5^2 s_1+4194304 s_2^2 s_3 s_6^2
s_1-16777216 s_3 s_4 s_6^2 s_1-1048576 s_3^5 s_4 s_1+1048576 s_2
s_3^4 s_5 s_1+3145728 s_2^2 s_3^2 s_4 s_5 s_1-1048576 s_2^2 s_3^3
s_6 s_1+8388608 s_2 s_3 s_4^2 s_6 s_1+4194304 s_3 s_5^2 s_6
s_1+8388608 s_3^3 s_4 s_6 s_1-2097152 s_2^3 s_3 s_4 s_6 s_1+1048576
s_2^4 s_5 s_6 s_1-4194304 s_2 s_3^2 s_5 s_6 s_1-4194304 s_2^2 s_4
s_5 s_6 s_1+1048576 s_5^4-2097152 s_2 s_3 s_5^3+1048576 s_3^4
s_4^2+1048576 s_2^2 s_3^2 s_5^2+2097152 s_3^2 s_4 s_5^2+1048576 s_2^4
s_6^2+16777216 s_4^2 s_6^2-8388608 s_2^2 s_4 s_6^2-2097152 s_2 s_3^3
s_4 s_5-8388608 s_3^2 s_4^2 s_6+2097152 s_2^2 s_5^2 s_6-8388608 s_4
s_5^2 s_6+2097152 s_2^2 s_3^2 s_4 s_6-2097152 s_2^3 s_3 s_5
s_6+8388608 s_2 s_3 s_4 s_5 s_6.
}
\]
\[
\longequation{
\theta_{6,3}=
s_1^{30}-36 s_2 s_1^{28}+27 s_3 s_1^{27}+594 s_2^2 s_1^{26}+27 s_4
s_1^{26}-891 s_2 s_3 s_1^{25}+54 s_5 s_1^{25}-5940 s_2^3 s_1^{24}+324
s_3^2 s_1^{24}-864 s_2 s_4 s_1^{24}+1215 s_6 s_1^{24}+13365 s_2^2
s_3 s_1^{23}+648 s_3 s_4 s_1^{23}-1863 s_2 s_5 s_1^{23}+40095 s_2^4
s_1^{22}-9720 s_2 s_3^2 s_1^{22}-162 s_4^2 s_1^{22}+12555 s_2^2 s_4
s_1^{22}+1944 s_3 s_5 s_1^{22}-34992 s_2 s_6 s_1^{22}+2187 s_3^3
s_1^{21}-120285 s_2^3 s_3 s_1^{21}-18468 s_2 s_3 s_4 s_1^{21}+28431
s_2^2 s_5 s_1^{21}-243 s_4 s_5 s_1^{21}+28431 s_3 s_6
s_1^{21}-192456 s_2^5 s_1^{20}+131220 s_2^2 s_3^2 s_1^{20}+4374 s_2
s_4^2 s_1^{20}+10935 s_5^2 s_1^{20}-109350 s_2^3 s_4 s_1^{20}+6561
s_3^2 s_4 s_1^{20}-56862 s_2 s_3 s_5 s_1^{20}+450522 s_2^2 s_6
s_1^{20}-9477 s_4 s_6 s_1^{20}-59049 s_2 s_3^3 s_1^{19}-1458 s_3
s_4^2 s_1^{19}+721710 s_2^4 s_3 s_1^{19}+236196 s_2^2 s_3 s_4
s_1^{19}-253692 s_2^3 s_5 s_1^{19}+22599 s_3^2 s_5 s_1^{19}-729 s_2
s_4 s_5 s_1^{19}-710775 s_2 s_3 s_6 s_1^{19}+3645 s_5 s_6
s_1^{19}+673596 s_2^6 s_1^{18}+8748 s_3^4 s_1^{18}-2916 s_4^3
s_1^{18}-1049760 s_2^3 s_3^2 s_1^{18}-51759 s_2^2 s_4^2
s_1^{18}-263169 s_2 s_5^2 s_1^{18}+10935 s_6^2 s_1^{18}+634230 s_2^4
s_4 s_1^{18}-164754 s_2 s_3^2 s_4 s_1^{18}+735561 s_2^2 s_3 s_5
s_1^{18}-2187 s_3 s_4 s_5 s_1^{18}-3414636 s_2^3 s_6 s_1^{18}+295245
s_3^2 s_6 s_1^{18}+225990 s_2 s_4 s_6 s_1^{18}+708588 s_2^2 s_3^3
s_1^{17}+39366 s_2 s_3 s_4^2 s_1^{17}+161838 s_3 s_5^2
s_1^{17}-3031182 s_2^5 s_3 s_1^{17}+34992 s_3^3 s_4 s_1^{17}-1784592
s_2^3 s_3 s_4 s_1^{17}+1469664 s_2^4 s_5 s_1^{17}-572994 s_2 s_3^2
s_5 s_1^{17}-17496 s_4^2 s_5 s_1^{17}+91854 s_2^2 s_4 s_5
s_1^{17}+7798842 s_2^2 s_3 s_6 s_1^{17}-54675 s_3 s_4 s_6
s_1^{17}-87480 s_2 s_5 s_6 s_1^{17}-1732104 s_2^7 s_1^{16}-209952 s_2
s_3^4 s_1^{16}+69984 s_2 s_4^3 s_1^{16}+5511240 s_2^4 s_3^2
s_1^{16}+349920 s_2^3 s_4^2 s_1^{16}-2187 s_3^2 s_4^2 s_1^{16}+2786238
s_2^2 s_5^2 s_1^{16}-41553 s_4 s_5^2 s_1^{16}-236196 s_2 s_6^2
s_1^{16}-2571912 s_2^5 s_4 s_1^{16}+1828332 s_2^2 s_3^2 s_4
s_1^{16}+124659 s_3^3 s_5 s_1^{16}-5528736 s_2^3 s_3 s_5
s_1^{16}-50301 s_2 s_3 s_4 s_5 s_1^{16}+16879266 s_2^4 s_6
s_1^{16}-6344487 s_2 s_3^2 s_6 s_1^{16}-50301 s_4^2 s_6
s_1^{16}-2401326 s_2^2 s_4 s_6 s_1^{16}+347733 s_3 s_5 s_6
s_1^{16}+19683 s_3^5 s_1^{15}-4960116 s_2^3 s_3^3 s_1^{15}-26244 s_3
s_4^3 s_1^{15}-590490 s_5^3 s_1^{15}-452709 s_2^2 s_3 s_4^2
s_1^{15}-3346110 s_2 s_3 s_5^2 s_1^{15}-1515591 s_3 s_6^2
s_1^{15}+9093546 s_2^6 s_3 s_1^{15}-761076 s_2 s_3^3 s_4
s_1^{15}+8817984 s_2^4 s_3 s_4 s_1^{15}-5786802 s_2^5 s_5
s_1^{15}+6344487 s_2^2 s_3^2 s_5 s_1^{15}+400221 s_2 s_4^2 s_5
s_1^{15}-1194102 s_2^3 s_4 s_5 s_1^{15}-72171 s_3^2 s_4 s_5
s_1^{15}+1830519 s_3^3 s_6 s_1^{15}-49168134 s_2^3 s_3 s_6
s_1^{15}+1036638 s_2 s_3 s_4 s_6 s_1^{15}+800442 s_2^2 s_5 s_6
s_1^{15}+1627128 s_4 s_5 s_6 s_1^{15}+3247695 s_2^8 s_1^{14}+2204496
s_2^2 s_3^4 s_1^{14}+45927 s_4^4 s_1^{14}-741393 s_2^2 s_4^3
s_1^{14}-19840464 s_2^5 s_3^2 s_1^{14}-1469664 s_2^4 s_4^2
s_1^{14}+118098 s_2 s_3^2 s_4^2 s_1^{14}-16992990 s_2^3 s_5^2
s_1^{14}+1003833 s_3^2 s_5^2 s_1^{14}+1180980 s_2 s_4 s_5^2
s_1^{14}+2873718 s_2^2 s_6^2 s_1^{14}-2066715 s_4 s_6^2
s_1^{14}+7440174 s_2^6 s_4 s_1^{14}+98415 s_3^4 s_4 s_1^{14}-11757312
s_2^3 s_3^2 s_4 s_1^{14}-2716254 s_2 s_3^3 s_5 s_1^{14}-472392 s_3
s_4^2 s_5 s_1^{14}+26637660 s_2^4 s_3 s_5 s_1^{14}+1535274 s_2^2 s_3
s_4 s_5 s_1^{14}-56949480 s_2^5 s_6 s_1^{14}+58399461 s_2^2 s_3^2
s_6 s_1^{14}+314928 s_2 s_4^2 s_6 s_1^{14}+984150 s_5^2 s_6
s_1^{14}+14985324 s_2^3 s_4 s_6 s_1^{14}+787320 s_3^2 s_4 s_6
s_1^{14}-7046514 s_2 s_3 s_5 s_6 s_1^{14}-413343 s_2 s_3^5
s_1^{13}+22320522 s_2^4 s_3^3 s_1^{13}+551124 s_2 s_3 s_4^3
s_1^{13}+10431990 s_2 s_5^3 s_1^{13}+19683 s_3^3 s_4^2
s_1^{13}+2893401 s_2^3 s_3 s_4^2 s_1^{13}+29662281 s_2^2 s_3 s_5^2
s_1^{13}-1495908 s_3 s_4 s_5^2 s_1^{13}+28520667 s_2 s_3 s_6^2
s_1^{13}-9054180 s_5 s_6^2 s_1^{13}-19486170 s_2^7 s_3
s_1^{13}+7164612 s_2^2 s_3^3 s_4 s_1^{13}-29760696 s_2^5 s_3 s_4
s_1^{13}+15707034 s_2^6 s_5 s_1^{13}+334611 s_3^4 s_5 s_1^{13}+334611
s_4^3 s_5 s_1^{13}-40094271 s_2^3 s_3^2 s_5 s_1^{13}-3916917 s_2^2
s_4^2 s_5 s_1^{13}+7715736 s_2^4 s_4 s_5 s_1^{13}+905418 s_2 s_3^2
s_4 s_5 s_1^{13}-33421734 s_2 s_3^3 s_6 s_1^{13}-1436859 s_3 s_4^2
s_6 s_1^{13}+195649020 s_2^4 s_3 s_6 s_1^{13}-8739252 s_2^2 s_3 s_4
s_6 s_1^{13}-3306744 s_2^3 s_5 s_6 s_1^{13}+6770952 s_3^2 s_5 s_6
s_1^{13}-29248938 s_2 s_4 s_5 s_6 s_1^{13}-4330260 s_2^9
s_1^{12}+19683 s_3^6 s_1^{12}-13226976 s_2^3 s_3^4 s_1^{12}-905418 s_2
s_4^4 s_1^{12}+4527090 s_2^3 s_4^3 s_1^{12}-19683 s_3^2 s_4^3
s_1^{12}-8581788 s_3 s_5^3 s_1^{12}+17222625 s_6^3 s_1^{12}+49601160
s_2^6 s_3^2 s_1^{12}+3857868 s_2^5 s_4^2 s_1^{12}-1692738 s_2^2 s_3^2
s_4^2 s_1^{12}+65603439 s_2^4 s_5^2 s_1^{12}-17321040 s_2 s_3^2 s_5^2
s_1^{12}-12656169 s_2^2 s_4 s_5^2 s_1^{12}-23698332 s_2^3 s_6^2
s_1^{12}-25922511 s_3^2 s_6^2 s_1^{12}+38342484 s_2 s_4 s_6^2
s_1^{12}-15352740 s_2^7 s_4 s_1^{12}-1810836 s_2 s_3^4 s_4
s_1^{12}+48223350 s_2^4 s_3^2 s_4 s_1^{12}+25351704 s_2^2 s_3^3 s_5
s_1^{12}+8817984 s_2 s_3 s_4^2 s_5 s_1^{12}-85424220 s_2^5 s_3 s_5
s_1^{12}-826686 s_3^3 s_4 s_5 s_1^{12}-14840982 s_2^3 s_3 s_4 s_5
s_1^{12}+133372008 s_2^6 s_6 s_1^{12}+7085880 s_3^4 s_6
s_1^{12}-1889568 s_4^3 s_6 s_1^{12}-298551744 s_2^3 s_3^2 s_6
s_1^{12}+3779136 s_2^2 s_4^2 s_6 s_1^{12}-12912048 s_2 s_5^2 s_6
s_1^{12}-60859836 s_2^4 s_4 s_6 s_1^{12}-14211126 s_2 s_3^2 s_4 s_6
s_1^{12}+56844504 s_2^2 s_3 s_5 s_6 s_1^{12}+27044442 s_3 s_4 s_5
s_6 s_1^{12}+3720087 s_2^2 s_3^5 s_1^{11}+413343 s_3 s_4^4
s_1^{11}-66961566 s_2^5 s_3^3 s_1^{11}-4960116 s_2^2 s_3 s_4^3
s_1^{11}-76645602 s_2^2 s_5^3 s_1^{11}+1062882 s_4 s_5^3
s_1^{11}+59049 s_2 s_3^3 s_4^2 s_1^{11}-11160261 s_2^4 s_3 s_4^2
s_1^{11}+2775303 s_3^3 s_5^2 s_1^{11}-146264373 s_2^3 s_3 s_5^2
s_1^{11}+28225422 s_2 s_3 s_4 s_5^2 s_1^{11}-213225939 s_2^2 s_3
s_6^2 s_1^{11}-21493836 s_3 s_4 s_6^2 s_1^{11}+130911633 s_2 s_5
s_6^2 s_1^{11}+29229255 s_2^8 s_3 s_1^{11}+118098 s_3^5 s_4
s_1^{11}-38027556 s_2^3 s_3^3 s_4 s_1^{11}+69441624 s_2^6 s_3 s_4
s_1^{11}-29052108 s_2^7 s_5 s_1^{11}-6141096 s_2 s_3^4 s_5
s_1^{11}-4842018 s_2 s_4^3 s_5 s_1^{11}+158310369 s_2^4 s_3^2 s_5
s_1^{11}+21139542 s_2^3 s_4^2 s_5 s_1^{11}-3542940 s_3^2 s_4^2 s_5
s_1^{11}-29760696 s_2^5 s_4 s_5 s_1^{11}-3011499 s_2^2 s_3^2 s_4 s_5
s_1^{11}+253615455 s_2^2 s_3^3 s_6 s_1^{11}+14998446 s_2 s_3 s_4^2
s_6 s_1^{11}+25391070 s_3 s_5^2 s_6 s_1^{11}-507585204 s_2^5 s_3 s_6
s_1^{11}+9152595 s_3^3 s_4 s_6 s_1^{11}+43932456 s_2^3 s_3 s_4 s_6
s_1^{11}+3897234 s_2^4 s_5 s_6 s_1^{11}-111484512 s_2 s_3^2 s_5 s_6
s_1^{11}-9920232 s_4^2 s_5 s_6 s_1^{11}+214820262 s_2^2 s_4 s_5 s_6
s_1^{11}+3897234 s_2^{10} s_1^{10}-354294 s_2 s_3^6 s_1^{10}-413343
s_4^5 s_1^{10}+49601160 s_2^4 s_3^4 s_1^{10}+7322076 s_2^2 s_4^4
s_1^{10}+4133430 s_5^4 s_1^{10}-17419455 s_2^4 s_4^3 s_1^{10}+708588
s_2 s_3^2 s_4^3 s_1^{10}+117861804 s_2 s_3 s_5^3 s_1^{10}-248005800
s_2 s_6^3 s_1^{10}-85030560 s_2^7 s_3^2 s_1^{10}-5786802 s_2^6 s_4^2
s_1^{10}+59049 s_3^4 s_4^2 s_1^{10}+11455506 s_2^3 s_3^2 s_4^2
s_1^{10}-165691494 s_2^5 s_5^2 s_1^{10}+124061949 s_2^2 s_3^2 s_5^2
s_1^{10}+708588 s_2 s_4^2 s_5^2 s_1^{10}+69559722 s_2^3 s_4 s_5^2
s_1^{10}-13167927 s_3^2 s_4 s_5^2 s_1^{10}+127959183 s_2^4 s_6^2
s_1^{10}+386180460 s_2 s_3^2 s_6^2 s_1^{10}+38677095 s_4^2 s_6^2
s_1^{10}-293650677 s_2^2 s_4 s_6^2 s_1^{10}-183524292 s_3 s_5 s_6^2
s_1^{10}+22143375 s_2^8 s_4 s_1^{10}+13994613 s_2^2 s_3^4 s_4
s_1^{10}-130616388 s_2^5 s_3^2 s_4 s_1^{10}+354294 s_3^5 s_5
s_1^{10}-131443074 s_2^3 s_3^3 s_5 s_1^{10}+4960116 s_3 s_4^3 s_5
s_1^{10}-69087330 s_2^2 s_3 s_4^2 s_5 s_1^{10}+182697606 s_2^6 s_3
s_5 s_1^{10}+12105045 s_2 s_3^3 s_4 s_5 s_1^{10}+75346524 s_2^4 s_3
s_4 s_5 s_1^{10}-216355536 s_2^7 s_6 s_1^{10}-107528229 s_2 s_3^4
s_6 s_1^{10}+31177872 s_2 s_4^3 s_6 s_1^{10}+916027137 s_2^4 s_3^2
s_6 s_1^{10}-50782140 s_2^3 s_4^2 s_6 s_1^{10}-13226976 s_3^2 s_4^2
s_6 s_1^{10}+56627991 s_2^2 s_5^2 s_6 s_1^{10}+3956283 s_4 s_5^2 s_6
s_1^{10}+167935356 s_2^5 s_4 s_6 s_1^{10}+104634828 s_2^2 s_3^2 s_4
s_6 s_1^{10}+47652543 s_3^3 s_5 s_6 s_1^{10}-227929140 s_2^3 s_3 s_5
s_6 s_1^{10}-389251008 s_2 s_3 s_4 s_5 s_6 s_1^{10}-18600435 s_2^3
s_3^5 s_1^9-6200145 s_2 s_3 s_4^4 s_1^9+133923132 s_2^6 s_3^3
s_1^9+354294 s_3^3 s_4^3 s_1^9+24800580 s_2^3 s_3 s_4^3
s_1^9+302035635 s_2^3 s_5^3 s_1^9-59698539 s_3^2 s_5^3 s_1^9-21434787
s_2 s_4 s_5^3 s_1^9+225862425 s_3 s_6^3 s_1^9-3542940 s_2^2 s_3^3
s_4^2 s_1^9+26040609 s_2^5 s_3 s_4^2 s_1^9-38263752 s_2 s_3^3 s_5^2
s_1^9+12045996 s_3 s_4^2 s_5^2 s_1^9+434010150 s_2^4 s_3 s_5^2
s_1^9-207970578 s_2^2 s_3 s_4 s_5^2 s_1^9-131265927 s_3^3 s_6^2
s_1^9+790961355 s_2^3 s_3 s_6^2 s_1^9+310715838 s_2 s_3 s_4 s_6^2
s_1^9-716205321 s_2^2 s_5 s_6^2 s_1^9+11514555 s_4 s_5 s_6^2
s_1^9-29229255 s_2^9 s_3 s_1^9-1771470 s_2 s_3^5 s_4 s_1^9+124002900
s_2^4 s_3^3 s_4 s_1^9-110539728 s_2^7 s_3 s_4 s_1^9+35075106 s_2^8
s_5 s_1^9+46943955 s_2^2 s_3^4 s_5 s_1^9-885735 s_4^4 s_5
s_1^9+26572050 s_2^2 s_4^3 s_5 s_1^9-400529367 s_2^5 s_3^2 s_5
s_1^9-67315860 s_2^4 s_4^2 s_5 s_1^9+51018336 s_2 s_3^2 s_4^2 s_5
s_1^9+71921682 s_2^6 s_4 s_5 s_1^9-3365793 s_3^4 s_4 s_5
s_1^9-8857350 s_2^3 s_3^2 s_4 s_5 s_1^9+15411789 s_3^5 s_6
s_1^9-1022138190 s_2^3 s_3^3 s_6 s_1^9-24269139 s_3 s_4^3 s_6
s_1^9+20371905 s_5^3 s_6 s_1^9-38618046 s_2^2 s_3 s_4^2 s_6
s_1^9-264126177 s_2 s_3 s_5^2 s_6 s_1^9+855620010 s_2^6 s_3 s_6
s_1^9-124002900 s_2 s_3^3 s_4 s_6 s_1^9-148803480 s_2^4 s_3 s_4 s_6
s_1^9+15943230 s_2^5 s_5 s_6 s_1^9+718153938 s_2^2 s_3^2 s_5 s_6
s_1^9+112311198 s_2 s_4^2 s_5 s_6 s_1^9-817356258 s_2^3 s_4 s_5 s_6
s_1^9+159255153 s_3^2 s_4 s_5 s_6 s_1^9-2125764 s_2^{11}
s_1^8+2657205 s_2^2 s_3^6 s_1^8+5314410 s_2 s_4^5 s_1^8-119042784
s_2^5 s_3^4 s_1^8-31177872 s_2^3 s_4^4 s_1^8+1594323 s_3^2 s_4^4
s_1^8-53675541 s_2 s_5^4 s_1^8+43223868 s_2^5 s_4^3 s_1^8-7440174
s_2^2 s_3^2 s_4^3 s_1^8-626568939 s_2^2 s_3 s_5^3 s_1^8+13286025 s_3
s_4 s_5^3 s_1^8+1397689830 s_2^2 s_6^3 s_1^8-199290375 s_4 s_6^3
s_1^8+95659380 s_2^8 s_3^2 s_1^8+2834352 s_2^7 s_4^2 s_1^8-42515280
s_2^4 s_3^2 s_4^2 s_1^8+272983527 s_2^6 s_5^2 s_1^8+2657205 s_3^4
s_5^2 s_1^8-2125764 s_4^3 s_5^2 s_1^8-471919608 s_2^3 s_3^2 s_5^2
s_1^8-7440174 s_2^2 s_4^2 s_5^2 s_1^8-216827928 s_2^4 s_4 s_5^2
s_1^8+183347145 s_2 s_3^2 s_4 s_5^2 s_1^8-433655856 s_2^5 s_6^2
s_1^8-2252778399 s_2^2 s_3^2 s_6^2 s_1^8-468730962 s_2 s_4^2 s_6^2
s_1^8-209919195 s_5^2 s_6^2 s_1^8+1187239194 s_2^3 s_4 s_6^2
s_1^8-92470734 s_3^2 s_4 s_6^2 s_1^8+2059865316 s_2 s_3 s_5 s_6^2
s_1^8-21257640 s_2^9 s_4 s_1^8-58458510 s_2^3 s_3^4 s_4
s_1^8+233125452 s_2^6 s_3^2 s_4 s_1^8-5314410 s_2 s_3^5 s_5
s_1^8+409209570 s_2^4 s_3^3 s_5 s_1^8-62178597 s_2 s_3 s_4^3 s_5
s_1^8-11160261 s_3^3 s_4^2 s_5 s_1^8+291229668 s_2^3 s_3 s_4^2 s_5
s_1^8-252257328 s_2^7 s_3 s_5 s_1^8-
}
\]
\[
\longequation{
70150212 s_2^2 s_3^3 s_4 s_5
s_1^8-225330984 s_2^5 s_3 s_4 s_5 s_1^8+238971303 s_2^8 s_6
s_1^8+651546666 s_2^2 s_3^4 s_6 s_1^8+21789081 s_4^4 s_6
s_1^8-199821816 s_2^2 s_4^3 s_6 s_1^8-1689982380 s_2^5 s_3^2 s_6
s_1^8+241805655 s_2^4 s_4^2 s_6 s_1^8+138706101 s_2 s_3^2 s_4^2 s_6
s_1^8-60584274 s_2^3 s_5^2 s_6 s_1^8+202479021 s_3^2 s_5^2 s_6
s_1^8-76527504 s_2 s_4 s_5^2 s_6 s_1^8-317093130 s_2^6 s_4 s_6
s_1^8+33480783 s_3^4 s_4 s_6 s_1^8-399643632 s_2^3 s_3^2 s_4 s_6
s_1^8-614877237 s_2 s_3^3 s_5 s_6 s_1^8-129140163 s_3 s_4^2 s_5 s_6
s_1^8+440033148 s_2^4 s_3 s_5 s_6 s_1^8+2184222510 s_2^2 s_3 s_4
s_5 s_6 s_1^8+55801305 s_2^4 s_3^5 s_1^7-2125764 s_3 s_4^5
s_1^7+36669429 s_2^2 s_3 s_4^4 s_1^7+5845851 s_3 s_5^4
s_1^7-172186884 s_2^7 s_3^3 s_1^7-3188646 s_2 s_3^3 s_4^3
s_1^7-74401740 s_2^4 s_3 s_4^3 s_1^7-687153213 s_2^4 s_5^3
s_1^7+596276802 s_2 s_3^2 s_5^3 s_1^7+35075106 s_4^2 s_5^3
s_1^7+161026623 s_2^2 s_4 s_5^3 s_1^7-2319739965 s_2 s_3 s_6^3
s_1^7+597871125 s_5 s_6^3 s_1^7+26572050 s_2^3 s_3^3 s_4^2
s_1^7-33480783 s_2^6 s_3 s_4^2 s_1^7+208856313 s_2^2 s_3^3 s_5^2
s_1^7-116385579 s_2 s_3 s_4^2 s_5^2 s_1^7-778029624 s_2^5 s_3 s_5^2
s_1^7-41983839 s_3^3 s_4 s_5^2 s_1^7+758897748 s_2^3 s_3 s_4 s_5^2
s_1^7+1473154452 s_2 s_3^3 s_6^2 s_1^7+424089918 s_3 s_4^2 s_6^2
s_1^7-1412570178 s_2^4 s_3 s_6^2 s_1^7-1734623424 s_2^2 s_3 s_4
s_6^2 s_1^7+1749503772 s_2^3 s_5 s_6^2 s_1^7-990074583 s_3^2 s_5
s_6^2 s_1^7+163152387 s_2 s_4 s_5 s_6^2 s_1^7+17537553 s_2^{10} s_3
s_1^7+10628820 s_2^2 s_3^5 s_4 s_1^7-252965916 s_2^5 s_3^3 s_4
s_1^7+114791256 s_2^8 s_3 s_4 s_1^7-24977727 s_2^9 s_5
s_1^7-191318760 s_2^3 s_3^4 s_5 s_1^7+6908733 s_2 s_4^4 s_5
s_1^7-63772920 s_2^3 s_4^3 s_5 s_1^7+20194758 s_3^2 s_4^3 s_5
s_1^7+636134877 s_2^6 s_3^2 s_5 s_1^7+123825753 s_2^5 s_4^2 s_5
s_1^7-293355432 s_2^2 s_3^2 s_4^2 s_5 s_1^7-107351082 s_2^7 s_4 s_5
s_1^7+40920957 s_2 s_3^4 s_4 s_5 s_1^7+93002175 s_2^4 s_3^2 s_4 s_5
s_1^7-186535791 s_2 s_3^5 s_6 s_1^7+2298482325 s_2^4 s_3^3 s_6
s_1^7+261468972 s_2 s_3 s_4^3 s_6 s_1^7-136048896 s_2 s_5^3 s_6
s_1^7-41452398 s_3^3 s_4^2 s_6 s_1^7-106288200 s_2^3 s_3 s_4^2 s_6
s_1^7+875283327 s_2^2 s_3 s_5^2 s_6 s_1^7-55269864 s_3 s_4 s_5^2
s_6 s_1^7-906638346 s_2^7 s_3 s_6 s_1^7+655798194 s_2^2 s_3^3 s_4
s_6 s_1^7+356065470 s_2^5 s_3 s_4 s_6 s_1^7-58458510 s_2^6 s_5 s_6
s_1^7+133923132 s_3^4 s_5 s_6 s_1^7-63241479 s_4^3 s_5 s_6
s_1^7-2229926436 s_2^3 s_3^2 s_5 s_6 s_1^7-430467210 s_2^2 s_4^2 s_5
s_6 s_1^7+1663410330 s_2^4 s_4 s_5 s_6 s_1^7-1689982380 s_2 s_3^2
s_4 s_5 s_6 s_1^7+531441 s_2^{12} s_1^6-10628820 s_2^3 s_3^6
s_1^6+1594323 s_4^6 s_1^6-26040609 s_2^2 s_4^5 s_1^6+178564176 s_2^6
s_3^4 s_1^6+73870299 s_2^4 s_4^4 s_1^6-17006112 s_2 s_3^2 s_4^4
s_1^6+268909146 s_2^2 s_5^4 s_1^6-78653268 s_4 s_5^4 s_1^6-67493007
s_2^6 s_4^3 s_1^6+1062882 s_3^4 s_4^3 s_1^6+36137988 s_2^3 s_3^2
s_4^3 s_1^6-224799543 s_3^3 s_5^3 s_1^6+1596448764 s_2^3 s_3 s_5^3
s_1^6-167935356 s_2 s_3 s_4 s_5^3 s_1^6-3854010132 s_2^3 s_6^3
s_1^6+717445350 s_3^2 s_6^3 s_1^6+1674039150 s_2 s_4 s_6^3
s_1^6-63772920 s_2^9 s_3^2 s_1^6+5314410 s_2^8 s_4^2 s_1^6-5314410
s_2^2 s_3^4 s_4^2 s_1^6+88219206 s_2^5 s_3^2 s_4^2 s_1^6-283789494
s_2^7 s_5^2 s_1^6-27103491 s_2 s_3^4 s_5^2 s_1^6-9034497 s_2 s_4^3
s_5^2 s_1^6+1004954931 s_2^4 s_3^2 s_5^2 s_1^6+19131876 s_2^3 s_4^2
s_5^2 s_1^6+115322697 s_3^2 s_4^2 s_5^2 s_1^6+393266340 s_2^5 s_4
s_5^2 s_1^6-948622185 s_2^2 s_3^2 s_4 s_5^2 s_1^6+882192060 s_2^6
s_6^2 s_1^6-200884698 s_3^4 s_6^2 s_1^6-156775095 s_4^3 s_6^2
s_1^6+6377292000 s_2^3 s_3^2 s_6^2 s_1^6+2047642173 s_2^2 s_4^2 s_6^2
s_1^6+1656501597 s_2 s_5^2 s_6^2 s_1^6-2677399758 s_2^4 s_4 s_6^2
s_1^6+774840978 s_2 s_3^2 s_4 s_6^2 s_1^6-8448317577 s_2^2 s_3 s_5
s_6^2 s_1^6-129140163 s_3 s_4 s_5 s_6^2 s_1^6+12223143 s_2^{10} s_4
s_1^6+140831865 s_2^4 s_3^4 s_4 s_1^6-263594736 s_2^7 s_3^2 s_4
s_1^6+31886460 s_2^2 s_3^5 s_5 s_1^6-17537553 s_3 s_4^4 s_5
s_1^6-766337922 s_2^5 s_3^3 s_5 s_1^6+294418314 s_2^2 s_3 s_4^3 s_5
s_1^6+112665492 s_2 s_3^3 s_4^2 s_5 s_1^6-699376356 s_2^4 s_3 s_4^2
s_5 s_1^6+206199108 s_2^8 s_3 s_5 s_1^6-4782969 s_3^5 s_4 s_5
s_1^6+198758934 s_2^3 s_3^3 s_4 s_5 s_1^6+402832278 s_2^6 s_3 s_4
s_5 s_1^6-174312648 s_2^9 s_6 s_1^6+14348907 s_3^6 s_6
s_1^6-1967394582 s_2^3 s_3^4 s_6 s_1^6-183878586 s_2 s_4^4 s_6
s_1^6+627100380 s_2^3 s_4^3 s_6 s_1^6-102568113 s_3^2 s_4^3 s_6
s_1^6+468730962 s_3 s_5^3 s_6 s_1^6+1748972331 s_2^6 s_3^2 s_6
s_1^6-580333572 s_2^5 s_4^2 s_6 s_1^6-513372006 s_2^2 s_3^2 s_4^2
s_6 s_1^6-204604785 s_2^4 s_5^2 s_6 s_1^6-1514606850 s_2 s_3^2 s_5^2
s_6 s_1^6+64835802 s_4^2 s_5^2 s_6 s_1^6+438438825 s_2^2 s_4 s_5^2
s_6 s_1^6+397517868 s_2^7 s_4 s_6 s_1^6-325241892 s_2 s_3^4 s_4 s_6
s_1^6+820544904 s_2^4 s_3^2 s_4 s_6 s_1^6+2949497550 s_2^2 s_3^3 s_5
s_6 s_1^6+1228160151 s_2 s_3 s_4^2 s_5 s_6 s_1^6-217890810 s_2^5
s_3 s_5 s_6 s_1^6+363505644 s_3^3 s_4 s_5 s_6 s_1^6-5918126976
s_2^3 s_3 s_4 s_5 s_6 s_1^6-100442349 s_2^5 s_3^5 s_1^5+19131876
s_2 s_3 s_4^5 s_1^5+4782969 s_5^5 s_1^5+3188646 s_3^3 s_4^4
s_1^5-106819641 s_2^3 s_3 s_4^4 s_1^5-95659380 s_2 s_3 s_5^4
s_1^5+129140163 s_2^8 s_3^3 s_1^5+9565938 s_2^2 s_3^3 s_4^3
s_1^5+133923132 s_2^5 s_3 s_4^3 s_1^5+907169787 s_2^5 s_5^3
s_1^5-2032761825 s_2^2 s_3^2 s_5^3 s_1^5-181752822 s_2 s_4^2 s_5^3
s_1^5-578739249 s_2^3 s_4 s_5^3 s_1^5+76527504 s_3^2 s_4 s_5^3
s_1^5+8666739828 s_2^2 s_3 s_6^3 s_1^5-1076168025 s_3 s_4 s_6^3
s_1^5-5022117450 s_2 s_5 s_6^3 s_1^5-87687765 s_2^4 s_3^3 s_4^2
s_1^5+14348907 s_2^7 s_3 s_4^2 s_1^5-562796019 s_2^3 s_3^3 s_5^2
s_1^5-36669429 s_3 s_4^3 s_5^2 s_1^5+382637520 s_2^2 s_3 s_4^2 s_5^2
s_1^5+789189885 s_2^6 s_3 s_5^2 s_1^5+411335334 s_2 s_3^3 s_4 s_5^2
s_1^5-1403004240 s_2^4 s_3 s_4 s_5^2 s_1^5-6098285475 s_2^2 s_3^3
s_6^2 s_1^5-3228504075 s_2 s_3 s_4^2 s_6^2 s_1^5-1420541793 s_3
s_5^2 s_6^2 s_1^5+653672430 s_2^5 s_3 s_6^2 s_1^5-172186884 s_3^3
s_4 s_6^2 s_1^5+4751082540 s_2^3 s_3 s_4 s_6^2 s_1^5-1406192886
s_2^4 s_5 s_6^2 s_1^5+7963643385 s_2 s_3^2 s_5 s_6^2 s_1^5+573956280
s_4^2 s_5 s_6^2 s_1^5-1798396344 s_2^2 s_4 s_5 s_6^2 s_1^5-4782969
s_2^{11} s_3 s_1^5-31886460 s_2^3 s_3^5 s_4 s_1^5+312487308 s_2^6
s_3^3 s_4 s_1^5-70150212 s_2^9 s_3 s_4 s_1^5+7971615 s_2^{10} s_5
s_1^5+3188646 s_4^5 s_5 s_1^5+438438825 s_2^4 s_3^4 s_5
s_1^5+39858075 s_2^4 s_4^3 s_5 s_1^5-199290375 s_2 s_3^2 s_4^3 s_5
s_1^5-585116541 s_2^7 s_3^2 s_5 s_1^5-113196933 s_2^6 s_4^2 s_5
s_1^5-14348907 s_3^4 s_4^2 s_5 s_1^5+841802544 s_2^3 s_3^2 s_4^2 s_5
s_1^5+90876411 s_2^8 s_4 s_5 s_1^5-186535791 s_2^2 s_3^4 s_4 s_5
s_1^5-277412202 s_2^5 s_3^2 s_4 s_5 s_1^5+846585513 s_2^2 s_3^5 s_6
s_1^5+92470734 s_3 s_4^4 s_6 s_1^5-2700783162 s_2^5 s_3^3 s_6
s_1^5-1041092919 s_2^2 s_3 s_4^3 s_6 s_1^5+52612659 s_2^2 s_5^3 s_6
s_1^5-205667667 s_4 s_5^3 s_6 s_1^5+387420489 s_2 s_3^3 s_4^2 s_6
s_1^5+738171549 s_2^4 s_3 s_4^2 s_6 s_1^5+545258466 s_3^3 s_5^2 s_6
s_1^5-604248417 s_2^3 s_3 s_5^2 s_6 s_1^5-124357194 s_2 s_3 s_4
s_5^2 s_6 s_1^5+572361957 s_2^8 s_3 s_6 s_1^5+43046721 s_3^5 s_4
s_6 s_1^5-1677227796 s_2^3 s_3^3 s_4 s_6 s_1^5-586710864 s_2^6 s_3
s_4 s_6 s_1^5+44641044 s_2^7 s_5 s_6 s_1^5-1262703816 s_2 s_3^4 s_5
s_6 s_1^5+542069820 s_2 s_4^3 s_5 s_6 s_1^5+3163136832 s_2^4 s_3^2
s_5 s_6 s_1^5+487862838 s_2^3 s_4^2 s_5 s_6 s_1^5-430467210 s_3^2
s_4^2 s_5 s_6 s_1^5-1600700292 s_2^5 s_4 s_5 s_6 s_1^5+6375697677
s_2^2 s_3^2 s_4 s_5 s_6 s_1^5+23914845 s_2^4 s_3^6 s_1^4-12754584
s_2 s_4^6 s_1^4+57395628 s_2^3 s_4^5 s_1^4-3188646 s_3^2 s_4^5
s_1^4-153055008 s_2^7 s_3^4 s_1^4-92470734 s_2^5 s_4^4 s_1^4+68555889
s_2^2 s_3^2 s_4^4 s_1^4-623380293 s_2^3 s_5^4 s_1^4-86093442 s_3^2
s_5^4 s_1^4+597871125 s_2 s_4 s_5^4 s_1^4+60584274 s_2^7 s_4^3
s_1^4-9565938 s_2 s_3^4 s_4^3 s_1^4-90876411 s_2^4 s_3^2 s_4^3
s_1^4+1463588514 s_2 s_3^3 s_5^3 s_1^4+243931419 s_3 s_4^2 s_5^3
s_1^4-1943479737 s_2^4 s_3 s_5^3 s_1^4+741360195 s_2^2 s_3 s_4 s_5^3
s_1^4+5203870272 s_2^4 s_6^3 s_1^4-4519905705 s_2 s_3^2 s_6^3
s_1^4-4591650240 s_2^2 s_4 s_6^3 s_1^4+3228504075 s_3 s_5 s_6^3
s_1^4+19131876 s_2^{10} s_3^2 s_1^4-9565938 s_2^9 s_4^2 s_1^4+31886460
s_2^3 s_3^4 s_4^2 s_1^4-92470734 s_2^6 s_3^2 s_4^2 s_1^4+175375530
s_2^8 s_5^2 s_1^4+100442349 s_2^2 s_3^4 s_5^2 s_1^4-46235367 s_4^4
s_5^2 s_1^4+102036672 s_2^2 s_4^3 s_5^2 s_1^4-1135157976 s_2^5 s_3^2
s_5^2 s_1^4+38263752 s_2^4 s_4^2 s_5^2 s_1^4-770058009 s_2 s_3^2
s_4^2 s_5^2 s_1^4-412929657 s_2^6 s_4 s_5^2 s_1^4-43046721 s_3^4 s_4
s_5^2 s_1^4+2182628187 s_2^3 s_3^2 s_4 s_5^2 s_1^4-982102968 s_2^7
s_6^2 s_1^4+1549681956 s_2 s_3^4 s_6^2 s_1^4+899198172 s_2 s_4^3
s_6^2 s_1^4-8532816696 s_2^4 s_3^2 s_6^2 s_1^4-3813620616 s_2^3 s_4^2
s_6^2 s_1^4+903981141 s_3^2 s_4^2 s_6^2 s_1^4-3888553797 s_2^2 s_5^2
s_6^2 s_1^4+71744535 s_4 s_5^2 s_6^2 s_1^4+3207777876 s_2^5 s_4
s_6^2 s_1^4-2238429492 s_2^2 s_3^2 s_4 s_6^2 s_1^4-1420541793 s_3^3
s_5 s_6^2 s_1^4+14655017016 s_2^3 s_3 s_5 s_6^2 s_1^4+2023195887 s_2
s_3 s_4 s_5 s_6^2 s_1^4-3188646 s_2^{11} s_4 s_1^4-191318760 s_2^5
s_3^4 s_4 s_1^4+170592561 s_2^8 s_3^2 s_4 s_1^4-95659380 s_2^3 s_3^5
s_5 s_1^4+127545840 s_2 s_3 s_4^4 s_5 s_1^4+803538792 s_2^6 s_3^3
s_5 s_1^4+23914845 s_3^3 s_4^3 s_5 s_1^4-632946231 s_2^3 s_3 s_4^3
s_5 s_1^4-406552365 s_2^2 s_3^3 s_4^2 s_5 s_1^4+918330048 s_2^5 s_3
s_4^2 s_5 s_1^4-79716150 s_2^9 s_3 s_5 s_1^4+43046721 s_2 s_3^5 s_4
s_5 s_1^4-264657618 s_2^4 s_3^3 s_4 s_5 s_1^4-404958042 s_2^7 s_3
s_4 s_5 s_1^4+79716150 s_2^{10} s_6 s_1^4-129140163 s_2 s_3^6 s_6
s_1^4-30292137 s_4^5 s_6 s_1^4+2946308904 s_2^4 s_3^4 s_6
s_1^4+516560652 s_2^2 s_4^4 s_6 s_1^4+430467210 s_5^4 s_6
s_1^4-969348384 s_2^4 s_4^3 s_6 s_1^4+688747536 s_2 s_3^2 s_4^3 s_6
s_1^4-2683245609 s_2 s_3 s_5^3 s_6 s_1^4-822670668 s_2^7 s_3^2 s_6
s_1^4+704690766 s_2^6 s_4^2 s_6 s_1^4-43046721 s_3^4 s_4^2 s_6
s_1^4+696719151 s_2^3 s_3^2 s_4^2 s_6 s_1^4+503806068 s_2^5 s_5^2
s_6 s_1^4+3242852982 s_2^2 s_3^2 s_5^2 s_6 s_1^4-449599086 s_2 s_4^2
s_5^2 s_6 s_1^4-800350146 s_2^3 s_4 s_5^2 s_6 s_1^4-774840978 s_3^2
s_4 s_5^2 s_6 s_1^4-301327047 s_2^8 s_4 s_6 s_1^4+1104865839 s_2^2
s_3^4 s_4 s_6 s_1^4-806727438 s_2^5 s_3^2 s_4 s_6 s_1^4+129140163
s_3^5 s_5 s_6 s_1^4-6193944855 s_2^3 s_3^3 s_5 s_6 s_1^4+47829690
s_3 s_4^3 s_5 s_6 s_1^4-3869421921 s_2^2 s_3 s_4^2 s_5 s_6
s_1^4-465542316 s_2^6 s_3 s_5 s_6 s_1^4-2468012004 s_2 s_3^3 s_4
s_5 s_6 s_1^4+7601732064 s_2^4 s_3 s_4 s_5 s_6 s_1^4+4782969 s_3
s_4^6 s_1^3+100442349 s_2^6 s_3^5 s_1^3-57395628 s_2^2 s_3 s_4^5
s_1^3-
}
\]
\[
\longequation{
114791256 s_2 s_5^5 s_1^3-19131876 s_2 s_3^3 s_4^4
s_1^3+153055008 s_2^4 s_3 s_4^4 s_1^3+444816117 s_2^2 s_3 s_5^4
s_1^3-416118303 s_3 s_4 s_5^4 s_1^3-43046721 s_2^9 s_3^3
s_1^3-9565938 s_2^3 s_3^3 s_4^3 s_1^3-133923132 s_2^6 s_3 s_4^3
s_1^3-664832691 s_2^6 s_5^3 s_1^3-444816117 s_3^4 s_5^3
s_1^3+100442349 s_4^3 s_5^3 s_1^3+2530190601 s_2^3 s_3^2 s_5^3
s_1^3+14348907 s_2^2 s_4^2 s_5^3 s_1^3+1009206459 s_2^4 s_4 s_5^3
s_1^3-459165024 s_2 s_3^2 s_4 s_5^3 s_1^3-14004533232 s_2^3 s_3
s_6^3 s_1^3+5165606520 s_2 s_3 s_4 s_6^3 s_1^3+13774950720 s_2^2 s_5
s_6^3 s_1^3+138706101 s_2^5 s_3^3 s_4^2 s_1^3+14348907 s_2^8 s_3
s_4^2 s_1^3+746143164 s_2^4 s_3^3 s_5^2 s_1^3+28697814 s_2 s_3 s_4^3
s_5^2 s_1^3+358722675 s_3^3 s_4^2 s_5^2 s_1^3-454382055 s_2^3 s_3
s_4^2 s_5^2 s_1^3-368288613 s_2^7 s_3 s_5^2 s_1^3-1334448351 s_2^2
s_3^3 s_4 s_5^2 s_1^3+1138346622 s_2^5 s_3 s_4 s_5^2
s_1^3+10991262762 s_2^3 s_3^3 s_6^2 s_1^3-200884698 s_3 s_4^3 s_6^2
s_1^3-1147912560 s_5^3 s_6^2 s_1^3+7748409780 s_2^2 s_3 s_4^2 s_6^2
s_1^3+6543101592 s_2 s_3 s_5^2 s_6^2 s_1^3+1300967568 s_2^6 s_3
s_6^2 s_1^3+516560652 s_2 s_3^3 s_4 s_6^2 s_1^3-6552667530 s_2^4 s_3
s_4 s_6^2 s_1^3-1281835692 s_2^5 s_5 s_6^2 s_1^3-21437267058 s_2^2
s_3^2 s_5 s_6^2 s_1^3-3501133308 s_2 s_4^2 s_5 s_6^2
s_1^3+5012551512 s_2^3 s_4 s_5 s_6^2 s_1^3-258280326 s_3^2 s_4 s_5
s_6^2 s_1^3+47829690 s_2^4 s_3^5 s_4 s_1^3-210450636 s_2^7 s_3^3 s_4
s_1^3+19131876 s_2^{10} s_3 s_4 s_1^3+23914845 s_2 s_4^5 s_5
s_1^3-535692528 s_2^5 s_3^4 s_5 s_1^3-95659380 s_2^3 s_4^4 s_5
s_1^3-76527504 s_3^2 s_4^4 s_5 s_1^3+86093442 s_2^5 s_4^3 s_5
s_1^3+655266753 s_2^2 s_3^2 s_4^3 s_5 s_1^3+248714388 s_2^8 s_3^2
s_5 s_1^3+19131876 s_2^7 s_4^2 s_5 s_1^3+86093442 s_2 s_3^4 s_4^2
s_5 s_1^3-1205308188 s_2^4 s_3^2 s_4^2 s_5 s_1^3-33480783 s_2^9 s_4
s_5 s_1^3+377854551 s_2^3 s_3^4 s_4 s_5 s_1^3+377854551 s_2^6 s_3^2
s_4 s_5 s_1^3-1707519933 s_2^3 s_3^5 s_6 s_1^3-483079869 s_2 s_3
s_4^4 s_6 s_1^3+1210091157 s_2^6 s_3^3 s_6 s_1^3-157837977 s_3^3
s_4^3 s_6 s_1^3+1817528220 s_2^3 s_3 s_4^3 s_6 s_1^3+1128780684
s_2^3 s_5^3 s_6 s_1^3+2166684957 s_3^2 s_5^3 s_6 s_1^3+516560652 s_2
s_4 s_5^3 s_6 s_1^3-1219657095 s_2^2 s_3^3 s_4^2 s_6
s_1^3-1367929134 s_2^5 s_3 s_4^2 s_6 s_1^3-2238429492 s_2 s_3^3
s_5^2 s_6 s_1^3-272629233 s_3 s_4^2 s_5^2 s_6 s_1^3-1736217747 s_2^4
s_3 s_5^2 s_6 s_1^3+2468012004 s_2^2 s_3 s_4 s_5^2 s_6
s_1^3-215233605 s_2^9 s_3 s_6 s_1^3-258280326 s_2 s_3^5 s_4 s_6
s_1^3+2042327763 s_2^4 s_3^3 s_4 s_6 s_1^3+593088156 s_2^7 s_3 s_4
s_6 s_1^3+62178597 s_2^8 s_5 s_6 s_1^3+3960298332 s_2^2 s_3^4 s_5
s_6 s_1^3+4782969 s_4^4 s_5 s_6 s_1^3-1645341336 s_2^2 s_4^3 s_5
s_6 s_1^3-1052253180 s_2^5 s_3^2 s_5 s_6 s_1^3+583522218 s_2^4 s_4^2
s_5 s_6 s_1^3+2754990144 s_2 s_3^2 s_4^2 s_5 s_6 s_1^3+306110016
s_2^6 s_4 s_5 s_6 s_1^3+258280326 s_3^4 s_4 s_5 s_6
s_1^3-9795520512 s_2^3 s_3^2 s_4 s_5 s_6 s_1^3-4782969 s_4^7
s_1^2-28697814 s_2^5 s_3^6 s_1^2+23914845 s_2^2 s_4^6 s_1^2-47829690
s_2^4 s_4^5 s_1^2+28697814 s_2 s_3^2 s_4^5 s_1^2+57395628 s_3 s_5^5
s_1^2+57395628 s_2^8 s_3^4 s_1^2+47829690 s_2^6 s_4^4 s_1^2-124357194
s_2^3 s_3^2 s_4^4 s_1^2+602654094 s_2^4 s_5^4 s_1^2+229582512 s_2
s_3^2 s_5^4 s_1^2-200884698 s_4^2 s_5^4 s_1^2-1205308188 s_2^2 s_4
s_5^4 s_1^2-23914845 s_2^8 s_4^3 s_1^2+28697814 s_2^2 s_3^4 s_4^3
s_1^2+114791256 s_2^5 s_3^2 s_4^3 s_1^2-2439314190 s_2^2 s_3^3 s_5^3
s_1^2-631351908 s_2 s_3 s_4^2 s_5^3 s_1^2+1004423490 s_2^5 s_3 s_5^3
s_1^2+172186884 s_3^3 s_4 s_5^3 s_1^2-1348797258 s_2^3 s_3 s_4 s_5^3
s_1^2-2754990144 s_2^5 s_6^3 s_1^2+9298091736 s_2^2 s_3^2 s_6^3
s_1^2+4132485216 s_2^3 s_4 s_6^3 s_1^2-15496819560 s_2 s_3 s_5 s_6^3
s_1^2+4782969 s_2^{10} s_4^2 s_1^2-71744535 s_2^4 s_3^4 s_4^2
s_1^2+28697814 s_2^7 s_3^2 s_4^2 s_1^2-62178597 s_2^9 s_5^2
s_1^2-157837977 s_2^3 s_3^4 s_5^2 s_1^2+100442349 s_2 s_4^4 s_5^2
s_1^2-66961566 s_2^3 s_4^3 s_5^2 s_1^2-143489070 s_3^2 s_4^3 s_5^2
s_1^2+530909559 s_2^6 s_3^2 s_5^2 s_1^2-229582512 s_2^5 s_4^2 s_5^2
s_1^2+1391843979 s_2^2 s_3^2 s_4^2 s_5^2 s_1^2+258280326 s_2^7 s_4
s_5^2 s_1^2+258280326 s_2 s_3^4 s_4 s_5^2 s_1^2-1951451352 s_2^4
s_3^2 s_4 s_5^2 s_1^2+459165024 s_2^8 s_6^2 s_1^2-3874204890 s_2^2
s_3^4 s_6^2 s_1^2-1377495072 s_2^2 s_4^3 s_6^2 s_1^2+3702018006 s_2^5
s_3^2 s_6^2 s_1^2+2525407632 s_2^4 s_4^2 s_6^2 s_1^2-3874204890 s_2
s_3^2 s_4^2 s_6^2 s_1^2+1721868840 s_2^3 s_5^2 s_6^2 s_1^2-1549681956
s_3^2 s_5^2 s_6^2 s_1^2+1033121304 s_2 s_4 s_5^2 s_6^2
s_1^2-1607077584 s_2^6 s_4 s_6^2 s_1^2+2927177028 s_2^3 s_3^2 s_4
s_6^2 s_1^2+6973568802 s_2 s_3^3 s_5 s_6^2 s_1^2-774840978 s_3 s_4^2
s_5 s_6^2 s_1^2-7662316338 s_2^4 s_3 s_5 s_6^2 s_1^2-6715288476
s_2^2 s_3 s_4 s_5 s_6^2 s_1^2+129140163 s_2^6 s_3^4 s_4
s_1^2-47829690 s_2^9 s_3^2 s_4 s_1^2+143489070 s_2^4 s_3^5 s_5
s_1^2+43046721 s_3 s_4^5 s_5 s_1^2-243931419 s_2^2 s_3 s_4^4 s_5
s_1^2-373071582 s_2^7 s_3^3 s_5 s_1^2-186535791 s_2 s_3^3 s_4^3 s_5
s_1^2+554824404 s_2^4 s_3 s_4^3 s_5 s_1^2+602654094 s_2^3 s_3^3
s_4^2 s_5 s_1^2-545258466 s_2^6 s_3 s_4^2 s_5 s_1^2+4782969 s_2^{10}
s_3 s_5 s_1^2-129140163 s_2^2 s_3^5 s_4 s_5 s_1^2+100442349 s_2^5
s_3^3 s_4 s_5 s_1^2+186535791 s_2^8 s_3 s_4 s_5 s_1^2-19131876
s_2^{11} s_6 s_1^2+387420489 s_2^2 s_3^6 s_6 s_1^2+143489070 s_2
s_4^5 s_6 s_1^2-1707519933 s_2^5 s_3^4 s_6 s_1^2-478296900 s_2^3
s_4^4 s_6 s_1^2+86093442 s_3^2 s_4^4 s_6 s_1^2-1607077584 s_2 s_5^4
s_6 s_1^2+593088156 s_2^5 s_4^3 s_6 s_1^2-1434890700 s_2^2 s_3^2
s_4^3 s_6 s_1^2+3242852982 s_2^2 s_3 s_5^3 s_6 s_1^2-774840978 s_3
s_4 s_5^3 s_6 s_1^2+86093442 s_2^8 s_3^2 s_6 s_1^2-344373768 s_2^7
s_4^2 s_6 s_1^2+387420489 s_2 s_3^4 s_4^2 s_6 s_1^2+57395628 s_2^4
s_3^2 s_4^2 s_6 s_1^2-86093442 s_2^6 s_5^2 s_6 s_1^2+387420489 s_3^4
s_5^2 s_6 s_1^2+947027862 s_4^3 s_5^2 s_6 s_1^2-1033121304 s_2^3
s_3^2 s_5^2 s_6 s_1^2+1578379770 s_2^2 s_4^2 s_5^2 s_6
s_1^2-373071582 s_2^4 s_4 s_5^2 s_6 s_1^2+1549681956 s_2 s_3^2 s_4
s_5^2 s_6 s_1^2+105225318 s_2^9 s_4 s_6 s_1^2-1463588514 s_2^3 s_3^4
s_4 s_6 s_1^2+172186884 s_2^6 s_3^2 s_4 s_6 s_1^2-774840978 s_2
s_3^5 s_5 s_6 s_1^2+4634696961 s_2^4 s_3^3 s_5 s_6 s_1^2+57395628
s_2 s_3 s_4^3 s_5 s_6 s_1^2-387420489 s_3^3 s_4^2 s_5 s_6
s_1^2+4017693960 s_2^3 s_3 s_4^2 s_5 s_6 s_1^2+459165024 s_2^7 s_3
s_5 s_6 s_1^2+4649045868 s_2^2 s_3^3 s_4 s_5 s_6 s_1^2-3501133308
s_2^5 s_3 s_4 s_5 s_6 s_1^2-14348907 s_2 s_3 s_4^6 s_1-43046721
s_2^7 s_3^5 s_1+57395628 s_2^3 s_3 s_4^5 s_1+344373768 s_2^2 s_5^5
s_1+573956280 s_4 s_5^5 s_1+28697814 s_2^2 s_3^3 s_4^4 s_1-86093442
s_2^5 s_3 s_4^4 s_1-258280326 s_3^3 s_5^4 s_1-631351908 s_2^3 s_3
s_5^4 s_1+1779264468 s_2 s_3 s_4 s_5^4 s_1+57395628 s_2^7 s_3 s_4^3
s_1+229582512 s_2^7 s_5^3 s_1+1420541793 s_2 s_3^4 s_5^3
s_1-229582512 s_2 s_4^3 s_5^3 s_1-559607373 s_2^4 s_3^2 s_5^3
s_1+688747536 s_2^3 s_4^2 s_5^3 s_1+473513931 s_3^2 s_4^2 s_5^3
s_1-688747536 s_2^5 s_4 s_5^3 s_1+660049722 s_2^2 s_3^2 s_4 s_5^3
s_1+8264970432 s_2^4 s_3 s_6^3 s_1-6198727824 s_2^2 s_3 s_4 s_6^3
s_1-12397455648 s_2^3 s_5 s_6^3 s_1-86093442 s_2^6 s_3^3 s_4^2
s_1-14348907 s_2^9 s_3 s_4^2 s_1-229582512 s_3 s_4^4 s_5^2
s_1-387420489 s_2^5 s_3^3 s_5^2 s_1+315675954 s_2^2 s_3 s_4^3 s_5^2
s_1-1162261467 s_2 s_3^3 s_4^2 s_5^2 s_1+86093442 s_2^4 s_3 s_4^2
s_5^2 s_1+28697814 s_2^8 s_3 s_5^2 s_1+1434890700 s_2^3 s_3^3 s_4
s_5^2 s_1-200884698 s_2^6 s_3 s_4 s_5^2 s_1-7231849128 s_2^4 s_3^3
s_6^2 s_1+1033121304 s_2 s_3 s_4^3 s_6^2 s_1+4132485216 s_2 s_5^3
s_6^2 s_1-5509980288 s_2^3 s_3 s_4^2 s_6^2 s_1-6198727824 s_2^2 s_3
s_5^2 s_6^2 s_1+6198727824 s_3 s_4 s_5^2 s_6^2 s_1-1377495072 s_2^7
s_3 s_6^2 s_1+3788111448 s_2^5 s_3 s_4 s_6^2 s_1+2066242608 s_2^6
s_5 s_6^2 s_1+19629304776 s_2^3 s_3^2 s_5 s_6^2 s_1+6198727824 s_2^2
s_4^2 s_5 s_6^2 s_1-4132485216 s_2^4 s_4 s_5 s_6^2 s_1+3099363912
s_2 s_3^2 s_4 s_5 s_6^2 s_1-28697814 s_2^5 s_3^5 s_4 s_1+57395628
s_2^8 s_3^3 s_4 s_1+28697814 s_4^6 s_5 s_1-114791256 s_2^2 s_4^5
s_5 s_1+272629233 s_2^6 s_3^4 s_5 s_1+172186884 s_2^4 s_4^4 s_5
s_1+243931419 s_2 s_3^2 s_4^4 s_5 s_1-114791256 s_2^6 s_4^3 s_5
s_1-717445350 s_2^3 s_3^2 s_4^3 s_5 s_1-14348907 s_2^9 s_3^2 s_5
s_1+28697814 s_2^8 s_4^2 s_5 s_1-129140163 s_2^2 s_3^4 s_4^2 s_5
s_1+688747536 s_2^5 s_3^2 s_4^2 s_5 s_1-286978140 s_2^4 s_3^4 s_4
s_5 s_1-200884698 s_2^7 s_3^2 s_4 s_5 s_1+1291401630 s_2^4 s_3^5
s_6 s_1-43046721 s_3 s_4^5 s_6 s_1+573956280 s_2^2 s_3 s_4^4 s_6
s_1+1033121304 s_3 s_5^4 s_6 s_1+143489070 s_2^7 s_3^3 s_6
s_1+516560652 s_2 s_3^3 s_4^3 s_6 s_1-1176610374 s_2^4 s_3 s_4^3
s_6 s_1-1836660096 s_2^4 s_5^3 s_6 s_1-6715288476 s_2 s_3^2 s_5^3
s_6 s_1-2754990144 s_4^2 s_5^3 s_6 s_1+459165024 s_2^2 s_4 s_5^3
s_6 s_1+1291401630 s_2^3 s_3^3 s_4^2 s_6 s_1+860934420 s_2^6 s_3
s_4^2 s_6 s_1+1549681956 s_2^2 s_3^3 s_5^2 s_6 s_1+516560652 s_2
s_3 s_4^2 s_5^2 s_6 s_1+2238429492 s_2^5 s_3 s_5^2 s_6
s_1-1549681956 s_3^3 s_4 s_5^2 s_6 s_1-4821232752 s_2^3 s_3 s_4
s_5^2 s_6 s_1+57395628 s_2^{10} s_3 s_6 s_1+387420489 s_2^2 s_3^5
s_4 s_6 s_1-918330048 s_2^5 s_3^3 s_4 s_6 s_1-272629233 s_2^8 s_3
s_4 s_6 s_1-86093442 s_2^9 s_5 s_6 s_1-4132485216 s_2^3 s_3^4 s_5
s_6 s_1-774840978 s_2 s_4^4 s_5 s_6 s_1+1721868840 s_2^3 s_4^3 s_5
s_6 s_1+774840978 s_3^2 s_4^3 s_5 s_6 s_1-1119214746 s_2^6 s_3^2
s_5 s_6 s_1-1205308188 s_2^5 s_4^2 s_5 s_6 s_1-4390765542 s_2^2
s_3^2 s_4^2 s_5 s_6 s_1+344373768 s_2^7 s_4 s_5 s_6 s_1-774840978
s_2 s_3^4 s_4 s_5 s_6 s_1+4735139310 s_2^4 s_3^2 s_4 s_5 s_6
s_1+14348907 s_2^6 s_3^6+14348907 s_3^2 s_4^6-459165024 s_5^6-57395628
s_2^2 s_3^2 s_4^5-688747536 s_2 s_3 s_5^5+86093442 s_2^4 s_3^2
s_4^4-114791256 s_2^5 s_5^4+172186884 s_2^2 s_3^2 s_5^4-114791256 s_2
s_4^2 s_5^4+229582512 s_2^3 s_4 s_5^4-516560652 s_3^2 s_4
s_5^4-28697814 s_2^3 s_3^4 s_4^3-57395628 s_2^6 s_3^2 s_4^3-387420489
s_3^5 s_5^3+200884698 s_2^3 s_3^3 s_5^3+315675954 s_3 s_4^3
s_5^3-889632234 s_2^2 s_3 s_4^2 s_5^3-258280326 s_2^6 s_3
s_5^3-258280326 s_2 s_3^3 s_4 s_5^3+832236606 s_2^4 s_3 s_4
s_5^3-6198727824 s_2^3 s_3^2 s_6^3+18596183472 s_2^2 s_3 s_5
s_6^3+57395628 s_2^5 s_3^4 s_4^2+14348907 s_2^8 s_3^2 s_4^2+14348907
s_2^{10} s_5^2-43046721 s_4^5 s_5^2+86093442 s_2^4 s_3^4
s_5^2+186535791 s_2^2 s_4^4 s_5^2
}
\]
\[
\longequation{
-315675954 s_2^4 s_4^3
s_5^2+172186884 s_2 s_3^2 s_4^3 s_5^2+258280326 s_2^6 s_4^2
s_5^2+387420489 s_3^4 s_4^2 s_5^2-344373768 s_2^3 s_3^2 s_4^2
s_5^2-100442349 s_2^8 s_4 s_5^2-387420489 s_2^2 s_3^4 s_4
s_5^2+172186884 s_2^5 s_3^2 s_4 s_5^2+3099363912 s_2^3 s_3^4
s_6^2-6198727824 s_3 s_5^3 s_6^2+1033121304 s_2^6 s_3^2
s_6^2+3099363912 s_2^2 s_3^2 s_4^2 s_6^2+2066242608 s_2^4 s_5^2
s_6^2-6198727824 s_2^2 s_4 s_5^2 s_6^2-2066242608 s_2^4 s_3^2 s_4
s_6^2-9298091736 s_2^2 s_3^3 s_5 s_6^2-3099363912 s_2 s_3 s_4^2 s_5
s_6^2-3099363912 s_2^5 s_3 s_5 s_6^2+4132485216 s_2^3 s_3 s_4 s_5
s_6^2-28697814 s_2^7 s_3^4 s_4-86093442 s_2^5 s_3^5 s_5-14348907 s_2
s_3 s_4^5 s_5-129140163 s_3^3 s_4^4 s_5+57395628 s_2^3 s_3 s_4^4
s_5+14348907 s_2^8 s_3^3 s_5+344373768 s_2^2 s_3^3 s_4^3 s_5-86093442
s_2^5 s_3 s_4^3 s_5-286978140 s_2^4 s_3^3 s_4^2 s_5+57395628 s_2^7
s_3 s_4^2 s_5+129140163 s_2^3 s_3^5 s_4 s_5+57395628 s_2^6 s_3^3
s_4 s_5-14348907 s_2^9 s_3 s_4 s_5-387420489 s_2^3 s_3^6
s_6-86093442 s_2^6 s_3^4 s_6-387420489 s_2 s_3^2 s_4^4 s_6+688747536
s_2^2 s_5^4 s_6+2066242608 s_4 s_5^4 s_6+860934420 s_2^3 s_3^2 s_4^3
s_6+3099363912 s_3^3 s_5^3 s_6+1721868840 s_2^3 s_3 s_5^3
s_6+1033121304 s_2 s_3 s_4 s_5^3 s_6-43046721 s_2^9 s_3^2
s_6-774840978 s_2^2 s_3^4 s_4^2 s_6-602654094 s_2^5 s_3^2 s_4^2
s_6-344373768 s_2^7 s_5^2 s_6+1033121304 s_2 s_4^3 s_5^2
s_6-1549681956 s_2^4 s_3^2 s_5^2 s_6-2410616376 s_2^3 s_4^2 s_5^2
s_6-1549681956 s_3^2 s_4^2 s_5^2 s_6+1721868840 s_2^5 s_4 s_5^2
s_6+3099363912 s_2^2 s_3^2 s_4 s_5^2 s_6+516560652 s_2^4 s_3^4 s_4
s_6+172186884 s_2^7 s_3^2 s_4 s_6+1162261467 s_2^2 s_3^5 s_5
s_6+129140163 s_3 s_4^4 s_5 s_6+430467210 s_2^5 s_3^3 s_5
s_6-172186884 s_2^2 s_3 s_4^3 s_5 s_6+774840978 s_2 s_3^3 s_4^2 s_5
s_6+86093442 s_2^4 s_3 s_4^2 s_5 s_6+129140163 s_2^8 s_3 s_5
s_6-1549681956 s_2^3 s_3^3 s_4 s_5 s_6-172186884 s_2^6 s_3 s_4 s_5
s_6.
}
\]
}
In their seminal paper~\cite{Hosoya-Morikawa}, Hosoya and Morikawa explored the consequences of
the quantization of the wave function of the Universe, now known as third quantization.
The main motivation was to overcome the problem of the probabilistic interpretation
of the wave function of the Universe, the solution of the Wheeler-DeWitt
equation: since the latter is a hyperbolic second-order differential equation,
it does not admit conserved quantities that are positive definite.
Their proposal of a quantum field theory of the Universe resembles the
one that successfully solved the problem of negative probability
in the case of the Klein-Gordon equation.
As a consequence of their investigation, Hosoya and Morikawa
discovered that universes are spontaneously created from ``nothing''
(the third-quantized vacuum), in the same way particles can be created from vacuum if
the external potential is time dependent. In third-quantization, the time-dependent potential
(the Wheeler-DeWitt potential) naturally arises from Einstein gravity, and
the role of time is played by the logarithm of the expansion parameter.
In their paper, Hosoya and Morikawa calculated the average number of flat universes created
from nothing in the presence of a homogeneous scalar field (the inflaton).
More recently, Kim~\cite{Kim} calculated the number of closed and open universes in the case
of vanishing potential of the scalar field. The aim of this paper is to
evaluate this number in the general case of nonvanishing scalar potential.
The plan of the paper is as follows. In Sec. II, we briefly review third quantization
in minisuperspace and in particular the mechanism of creation of universes from nothing.
In Sec. III, an analogy between universe creation and quantum potential scattering is analyzed.
This analogy will allow us to use standard WKB methods of quantum mechanics for the
calculation of the number of created universes.
In Sections IV, V, and VI, we calculate the average numbers of flat, closed, and open
universes created out of nothing in the particular case of constant scalar potential,
both using WKB approximation and an approximate form of the Wheeler-DeWitt potential.
In Sec. VII, we discuss our results and draw our conclusions.
\section{II. Third quantization and the creation of universes from nothing}
\subsection{IIa. Nothingness and multiverse}
The Wheeler-DeWitt equation in homogeneous
and isotropic minisuperspace is (using the units $\hbar = c = 4\pi G/3 = 1$)
\begin{equation}
\label{motion}
\left[\frac{\partial^2}{\partial \alpha^2} - \frac{\partial^2}{\partial \phi^2}
+ U(\alpha,\phi) \right] \! \Psi(\alpha,\phi) = 0,
\end{equation}
where $\alpha = \ln a$, $a$ is the expansion parameter, $\phi$ is a real
scalar field (which we identify with the inflaton), $k$ is the sign of the spatial curvature,
and
\begin{equation}
\label{U}
U(\alpha,\phi) = V^2 e^{4\alpha} \left[2V(\phi) e^{2\alpha} - k \right]
\end{equation}
is the Wheeler-DeWitt potential.
The spatial volume $V$ is equal to $2\pi^2$ for closed universes ($k=1$).
For flat ($k=0$) and open ($k=-1$) universes, $V$ is a normalization volume
that can be taken as the (finite) volume of the region under consideration.
In third quantization, the ``Universe field'' $\Psi(\alpha,\phi)$ is
expanded in normal modes with the coefficients of expansions being the
annihilation and creation operators. A Fock space can be constructed
starting from a vacuum state which represents a state of {\it nothing},
a state in which even space-time does not exist.
Following~\cite{Hosoya-Morikawa}, we assume that the Universe is a neutral scalar.
In this case, we can write the Universe wave function in the in-Fock space as
\begin{equation}
\label{A}
\Psi(\alpha,\phi) =
\int \!\! \frac{dp}{2\pi} \! \left(c_p \, \psi_p(\alpha) \, e^{ip \phi} + c_p^{\dag} \, \psi_p^*(\alpha) \, e^{-ip \phi} \right) \!,
\end{equation}
where the subscript $p$ labels the mode function and its physical
meaning will be discussed later. Here, the annihilation and creation operators $c_p$ and $c_p^{\dag}$
satisfy the usual commutation relations, $[c_p, c_{p'}^{\dag}] = 2\pi \delta(p - p')$ and
$[c_p, c_{p'}] = [c_p^{\dag}, c_{p'}^{\dag}] = 0$.
The functions $\psi_p(\alpha)$ are positive frequency
solutions (with respect to $\alpha$) of the Schr\"odinger-like equation
\begin{equation}
\label{motion3}
\ddot{\psi}_p = U_p \psi_p,
\end{equation}
where a dot indicates a derivative with respect to $\alpha$, and
\begin{equation}
\label{Up}
U_p(\alpha) = -p^2 - V^2 e^{4\alpha} \left(2V_0 e^{2\alpha} - k \right) \!.
\end{equation}
Hereafter, we consider only the case of a constant
scalar potential $V(\phi) = V_0$. Also, we assume $V_0 \neq 0$ throughout the paper, with
the exception of Section VIa, where we discuss the case of open universes with
vanishing scalar potential.
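As a quick numerical illustration of Eq.~(\ref{Up}) (not from the paper; the values $p=0.1$, $V=1$, $V_0=1/2$ are assumptions chosen so that $p^2$ lies below the barrier top, which for $k=1$ works out to $4V^2a_{\rm cr}^4/27$), one can check that only the closed case develops a positive potential barrier:

```python
# Illustrative check of U_p(alpha): only the closed case (k = 1) develops
# a positive barrier when p^2 lies below the barrier top.  Parameter
# values (p = 0.1, V = 1, V0 = 1/2) are assumptions, not from the paper.
import numpy as np

def U_p(alpha, p, k, V=1.0, V0=0.5):
    # U_p(alpha) = -p^2 - V^2 e^{4 alpha} (2 V0 e^{2 alpha} - k)
    return -p**2 - V**2 * np.exp(4 * alpha) * (2 * V0 * np.exp(2 * alpha) - k)

alpha = np.linspace(-6.0, 2.0, 4001)
p = 0.1  # p^2 = 0.01, below the k = 1 barrier top ~ 4/27

barrier_top = {k: U_p(alpha, p, k).max() for k in (1, 0, -1)}
print(barrier_top)  # positive only for k = 1
```

For $k=0$ and $k=-1$ the potential is everywhere negative, so no tunneling barrier arises, in agreement with Fig.~1.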
In order to have a self-consistent quantization, the mode $\psi_p(\alpha)$
must satisfy the Wronskian condition
$\psi_p \dot{\psi}_p^{*} - \dot{\psi}_p \psi_p^{*} = i$.
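The Wronskian condition can be monitored explicitly. The minimal sketch below (illustrative parameters, flat case $k=0$; not from the paper) integrates Eq.~(\ref{motion3}) starting from the positive-frequency data $\psi_p = e^{-ip\alpha}/\sqrt{2p}$ deep in the in-region, where $U_p \simeq -p^2$, and verifies that $\psi_p \dot{\psi}_p^{*} - \dot{\psi}_p \psi_p^{*} = i$ is preserved along the evolution:

```python
# Integrate psi'' = U_p psi (flat case, k = 0) from the in-region with
# positive-frequency initial data and check conservation of the
# Wronskian psi psi'* - psi' psi* = i.  Parameters are illustrative.
import numpy as np
from scipy.integrate import solve_ivp

p, V, V0 = 1.0, 1.0, 0.5

def rhs(alpha, y):
    psi, dpsi = y
    U = -p**2 - 2 * V0 * V**2 * np.exp(6 * alpha)  # U_p for k = 0
    return [dpsi, U * psi]

a0, a1 = -8.0, 0.5
psi0 = np.exp(-1j * p * a0) / np.sqrt(2 * p)  # e^{-ip alpha}/sqrt(2p)
sol = solve_ivp(rhs, (a0, a1), [psi0, -1j * p * psi0],
                rtol=1e-10, atol=1e-12)

psi, dpsi = sol.y[:, -1]
wronskian = psi * np.conj(dpsi) - dpsi * np.conj(psi)
print(wronskian)  # remains i to integration accuracy
```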
The vacuum state $|0\rangle$ is defined by
\begin{equation}
\label{vacuum}
\forall p \in \mathbb{R} \!: ~ c_p |0\rangle = 0 ~~ \mbox{(nothingness)}
\end{equation}
and is normalized as $\langle 0|0\rangle = 1$.
The state $c_{p}^{\dag} |0\rangle$ represents the single universe,
the state $c_{p}^{\dag} c_{p'}^{\dag}|0\rangle$ represents a
double universe and, in general, the state
\begin{equation}
\label{multiverse}
\prod_{i=1}^{N} c_{p_i}^{\dag} |0\rangle ~~ \mbox{(multiverse)}
\end{equation}
represents the {\it multiverse}, namely a state with $N$ universes each
of them labeled by $p_i$.
\subsection{IIb. Universes from nothing}
As in the case of quantum field theory
in curved spacetime, the vacuum state is not unique. Different inequivalent
physical vacua can be introduced in different regions of minisuperspace.
In particular, we can define in- and out-regions for $\alpha \rightarrow -\infty$
and $\alpha \rightarrow +\infty$, respectively, to which there
correspond in- and out-vacuum states.
The in-vacuum state contains no in-universes in the in-region.
Such a ``Bunch-Davies vacuum'' can be constructed by
solving the Wheeler-DeWitt equation for the Universe states and then
by fixing the constants of integration appearing in the
general solution by matching the latter with the corresponding
adiabatic solution for $\alpha \rightarrow -\infty$. Accordingly,
we can construct the in-Fock space based on the in-vacuum by
repeatedly applying the in-creation operator on the in-vacuum state.
Another Fock space can be constructed in the same way, but this time
starting from the out-region $\alpha \rightarrow +\infty$.
It is clear from the above discussion that the two Fock spaces based on the two different
choices of the (Bunch-Davies) vacuum state are both physically
acceptable and must then be related. In particular, there will be a relation
between the in- and out-modes $\psi_p^{({\rm in})}$ and $\psi_p^{({\rm out})}$,
as well as a relation between the in- and out-creation and annihilation operators.
In order to find these relations, let us observe that
if $\psi_p^{(1)}$ and $\psi_p^{(2)}$ are two solutions of Eq.~(\ref{motion3}),
the following inner product is conserved,
\footnote{The possibility of introducing an inner product
remains valid even in superspace due to hyperbolicity of the Wheeler-DeWitt equation
(see, e.g.,~\cite{Kim} and references therein).}
\begin{equation}
\label{inner}
\langle \psi_p^{(1)} | \psi_p^{(2)} \rangle =
-i (\psi_p^{(1)} \dot{\psi}_p^{(2)} - \dot{\psi}_p^{(1)} \psi_p^{(2)}).
\end{equation}
We can then introduce the time-independent quantities
\begin{eqnarray}
\label{alphabeta1}
&& \alpha_p = \langle \psi_p^{({\rm in})} | \psi_p^{({\rm out}) *} \rangle, \\
\label{alphabeta2}
&& \beta_p = -\langle \psi_p^{({\rm in})} | \psi_p^{({\rm out})} \rangle,
\end{eqnarray}
and expand the in-$\psi$ mode in terms of the out-$\psi$ mode as
\begin{equation}
\label{in-out}
\psi_p^{({\rm in})} =
\alpha_p \psi_p^{({\rm out})} + \beta_p \psi_p^{({\rm out}) *},
\end{equation}
where we used the fact that
\begin{equation}
\label{inner2} \langle \psi_p^{({\rm in})} | \psi_p^{({\rm in}) *} \rangle =
\langle \psi_p^{({\rm out})} | \psi_p^{({\rm out}) *} \rangle = 1.
\end{equation}
Equation~(\ref{in-out}) is the desired relation between the $\psi$ in-
and out-modes. A relation of this type is known as a Bogoliubov
transformation, and the quantities $\alpha_p$ and $\beta_p$
are called Bogoliubov coefficients. They satisfy the relation
\begin{equation}
\label{relation} |\alpha_p|^2 - |\beta_p|^2 = 1,
\end{equation}
which can be easily derived from their defining equations.
To find the relation between the in- and out-creation and annihilation operators,
we insert the Bogolubov transformation in Eq.~(\ref{A})
and compare the result with the expression of $\Psi$ defined in the out-Fock space.
We find
$c_p^{({\rm out})} = \alpha_p \, c_p^{({\rm in})} - \beta_p^* \, c_{-p}^{({\rm in}) \dag}$.
From the above equation,
it follows immediately that the two Fock spaces
based on the two choices $|0, {\rm in} \rangle$ and $|0, {\rm out} \rangle$ of
the vacuum are generally different.
In particular, the in-vacuum state will contain out-universes as long as $\beta_p \neq 0$,
\begin{equation}
\label{n}
n_p = \langle 0, {\rm in} |N^{({\rm out})}_p| 0, {\rm in} \rangle = |\beta_p|^2,
\end{equation}
where $N^{({\rm out})}_p = c_p^{({\rm out}) \dag} c_p^{({\rm out})}$
is the number operator in the out-Fock space.
Note that universes are created in pairs with opposite $p$.
\subsection{IIc. Labeling universes}
\begin{figure*}[t!]
\begin{center}
\includegraphics[clip,width=0.6\textwidth]{Fig1.pdf}
\caption{The Wheeler-DeWitt potential $U_p(\alpha)$ in Eq.~(\ref{Up}) for $p = V_0 = 1$
and $k=1$ (continuous line), $k=-1$ (dashed line), and $k=0$ (dotted line). For
open and flat universes, the volume $V$ has been taken equal to unity.
The points $\alpha_1$ and $\alpha_2$ are the classical turning points in the WKB picture
discussed in Section III.}
\end{center}
\end{figure*}
Classically, the canonical momentum conjugate
to $\phi$ is given by $p_\phi^{({\rm cl})} = Va^3 d\phi/dt$~\cite{Hosoya-Morikawa}.
Accordingly, $p$ is related to the kinetic energy (density) of the scalar field,
$K_\phi$, through
\begin{equation}
\label{KK}
K_\phi = \frac12 \! \left( \! \frac{d\phi}{dt} \right)^{\!2} = \frac{p^2}{2V^2a^6} \, .
\end{equation}
Thus, $p$ essentially labels universes with different amounts of
kinetic energy of the inflaton.
In the out-region, where the created universes behave classically,
the expansion is governed by the usual Friedmann equation
\begin{equation}
\label{Friedmann}
H^2 = \left( \! \frac{d\phi}{dt} \right)^{\!2} +2V(\phi) -\frac{k}{a^2} \, ,
\end{equation}
where $H = (da/dt)/a$ is the Hubble parameter.
Taking into account Eq.~(\ref{KK}), the Friedmann equation takes the form
\begin{equation}
\label{Friedmann2}
H^2 = \frac{p^2}{V^2 a^6} + 2V_0 - \frac{k}{a^2} = -\frac{U_p(\alpha)}{V^2e^{6\alpha}} \, ,
\end{equation}
where $U_p(\alpha)$ is given by Eq.~(\ref{Up}) for the case of constant scalar potential
(see Fig.~1).
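The algebra behind Eq.~(\ref{Friedmann2}) can be cross-checked symbolically. The short sketch below (variable names are my own) confirms that $-U_p(\alpha)/(V^2e^{6\alpha})$ with $a = e^{\alpha}$ reproduces the Friedmann form term by term:

```python
# Symbolic cross-check (sympy): with a = e^alpha, the combination
# -U_p(alpha)/(V^2 e^{6 alpha}) equals p^2/(V^2 a^6) + 2 V_0 - k/a^2.
import sympy as sp

alpha, p, V, V0, k = sp.symbols('alpha p V V_0 k')
a = sp.exp(alpha)
U_p = -p**2 - V**2 * sp.exp(4 * alpha) * (2 * V0 * sp.exp(2 * alpha) - k)
lhs = p**2 / (V**2 * a**6) + 2 * V0 - k / a**2
rhs = -U_p / (V**2 * sp.exp(6 * alpha))
print(sp.simplify(lhs - rhs))  # 0
```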
{\it Flat universes.} -- For flat universes, the solution of Eq.~(\ref{Friedmann2})
with $a(0)=0$ is easily found,
\begin{equation}
\label{P1}
a(t) = \left(\frac{|p|a_{\rm cr}}{V}\right)^{\!1/3} \! \sinh^{1/3}(3t/a_{\rm cr}),
\end{equation}
where we have defined
\begin{equation}
\label{acr}
a_{\rm cr} = 1/\sqrt{2V_0}.
\end{equation}
The above expression for the expansion parameter is well approximated by
\begin{eqnarray}
\label{P2}
a(t) \!\!& \simeq &\!\!
\left\{ \begin{array}{lll}
\!\! \left(\frac{|p|t}{V}\right)^{\!1/3} \!, & ~ a \lesssim a_{\rm cr} r^{1/6}, \\
\\
\!\! \left(\frac{|p|a_{\rm cr}}{V}\right)^{\!1/3} e^{t/a_{\rm cr}-1}, & ~ a \gtrsim a_{\rm cr} r^{1/6},
\end{array}
\right.
\end{eqnarray}
where
\begin{equation}
\label{r}
r = \left(\!\frac{p}{Va_{\rm cr}^2}\!\right)^{\!2} \! .
\end{equation}
Thus, flat universes created in the out region with sufficiently large expansion parameter,
$a \gtrsim a_{\rm cr} r^{1/6}$, undergo inflation, $a(t) \propto e^{\sqrt{2V_0} t}$, while
those created with small expansion parameter, $a \lesssim a_{\rm cr} r^{1/6}$,
are dominated by the kinetic energy of the scalar field, $a(t) \propto t^{1/3}$, and do not inflate
(see the upper panel of Fig.~2).
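The two regimes in Eq.~(\ref{P2}) are easy to exhibit numerically. The sketch below (illustrative parameters, not from the paper) integrates the flat-case Friedmann equation~(\ref{Friedmann2}) and reads off the logarithmic slope $d\ln a/d\ln t$, which is $1/3$ in the kinetic-dominated regime and grows without bound once the expansion becomes exponential:

```python
# Integrate da/dt = a H for the flat case (k = 0), with H from the
# Friedmann equation, and measure d ln a / d ln t in the two regimes.
# Parameters (p = V = 1, V0 = 1/2) are illustrative assumptions.
import numpy as np
from scipy.integrate import solve_ivp

p, V, V0 = 1.0, 1.0, 0.5
a_cr = 1.0 / np.sqrt(2 * V0)

def rhs(t, y):
    a = y[0]
    H = np.sqrt(p**2 / (V**2 * a**6) + 2 * V0)
    return [a * H]

t_span = (1e-8, 6.0)
# start on the kinetic-dominated branch a ~ (|p| t / V)^{1/3}
sol = solve_ivp(rhs, t_span, [(p * t_span[0] / V) ** (1.0 / 3.0)],
                dense_output=True, rtol=1e-10, atol=1e-14)

def slope(t, dt=1e-4):
    # d ln a / d ln t via a centered finite difference
    a1 = sol.sol(t * (1 - dt))[0]
    a2 = sol.sol(t * (1 + dt))[0]
    return np.log(a2 / a1) / np.log((1 + dt) / (1 - dt))

print(slope(1e-6), slope(5.0))  # ~1/3 early; >> 1/3 late
```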
{\it Closed universes.} -- Closed universes are created in the out region only if $\alpha > \alpha_1$
(corresponding to $H^2 > 0$), where $\alpha_1$ is the largest zero of the potential
$U_p(\alpha)$ (see Fig.~1). This corresponds to expansion parameters
$a > \sqrt{x_1} a_{\rm cr}$, where $x_1$ is defined in Eq.~(\ref{x1}).
If $a < \sqrt{x_1} a_{\rm cr}$, the square of the Hubble parameter is negative,
which indicates a recollapsing universe. This analysis is true when $r < 4/27$ (see discussion
in Section V), while for $r > 4/27$ universes with any value of $a$ can be created.
Using the results of Section V (see in particular Fig.~3), the root $x_1(r)$ is
in the interval $2/3 < x_1(r) < 1$ for $0< r < 4/27$. For the sake of simplicity
and convenience, let us assume that created universes recollapse when $a \lesssim a_{\rm cr}$ for
$r \lesssim 1$.
Assuming that either $a \gtrsim a_{\rm cr}$ or $r \gtrsim 1$, the expression for the
expansion parameter can be approximated as
\begin{eqnarray}
\label{P3}
a(t) \!\!& \simeq &\!\!
\left\{ \begin{array}{lll}
\!\! \left(\frac{|p|t}{V}\right)^{\!1/3} \!, & ~ a \lesssim a_{\rm cr} r^{1/6}, ~ r \gtrsim 1, \\
\\
\!\! \left(\frac{|p|a_{\rm cr}}{V}\right)^{\!1/3} e^{t/a_{\rm cr}-1}, & ~ a \gtrsim a_{\rm cr} \max[1,r^{1/6}].
\end{array}
\right.
\end{eqnarray}
The upper branch of $a(t)$ in the above equation corresponds to the case of a dominant kinetic term
in the Hubble parameter, while the lower branch to the case of a dominant potential term.
In the former case, universes do not inflate, while in the latter they do (see the middle panel of Fig.~2).
{\it Open universes.} -- For open universes, the approximated solution of Eq.~(\ref{Friedmann2}) reads
\begin{eqnarray}
\label{P4}
a(t) \!\!& \simeq &\!\!
\left\{ \begin{array}{lll}
\!\! \left(\frac{|p|t}{V}\right)^{\!1/3} \!, & ~ a \lesssim a_{\rm cr} r^{1/4}, ~ r \lesssim 1, ~ \mbox{or} \\
& ~ a \lesssim a_{\rm cr} r^{1/6}, ~ r \gtrsim 1, \\
\\
\!\! \left(\frac{|p|a_{\rm cr}}{V}\right)^{\!1/3} e^{t/a_{\rm cr}-1}, & ~ a \gtrsim a_{\rm cr}, ~ r \lesssim 1,
~\mbox{or} \\
& ~ a \gtrsim a_{\rm cr} r^{1/6}, ~ r \gtrsim 1, \\
\\
\!\! t, & ~ a_{\rm cr} r^{1/4} \lesssim a \lesssim a_{\rm cr}, ~ r \lesssim 1.
\end{array}
\right.
\end{eqnarray}
The three branches correspond to a dominant kinetic, potential, and curvature term in
the Hubble parameter. The lower panel of Fig.~2 graphically shows the case of open
universes.
\begin{figure}[t!]
\begin{center}
\includegraphics[clip,width=0.48\textwidth]{Fig2a.pdf}
\hspace{0.3cm}
\includegraphics[clip,width=0.48\textwidth]{Fig2b.pdf}
\vspace{0.3cm}
\includegraphics[clip,width=0.48\textwidth]{Fig2c.pdf}
\caption{The expansion parameter of flat (upper panel), closed (middle panel),
and open (lower panel) universes created in the out region as a function of the parameter
$r$ defined by Eq.~(\ref{r}).}
\end{center}
\end{figure}
\section{III. Universe creation analogy with quantum potential scattering}
\subsection{IIIa. General considerations}
The Wheeler-DeWitt equation~(\ref{motion3})
for the $\psi$ modes is formally equal to the one-dimensional Schr\"odinger
equation with zero energy, mass equal to $1/2$, and potential energy
$U_p(\alpha)$, with $\alpha$ taking the place of the spatial coordinate
and $p$ being an external parameter.
Continuing the analogy, Eq.~(\ref{in-out}) connecting the
$\psi_{p}^{(\rm in)}$ and $\psi_{p}^{(\rm out)}$ modes describes the
scattering of $\psi$-waves off the potential $U_p$, the incident,
reflected, and transmitted waves being
\begin{eqnarray}
\label{waves}
&& \psi_p^{({\rm inc})} = \alpha_p \psi_p^{(\rm out)}, \\
&& \psi_p^{({\rm ref})} = \beta_p \psi_p^{(\rm out) *}, \\
&& \psi_p^{({\rm tr})} = \alpha_p \psi_p^{(\rm in)},
\end{eqnarray}
respectively, as illustrated in Fig.~1.
Moreover, one can define a density current associated to any $\psi$-mode as
\begin{equation}
\label{current}
j_p = \langle \psi_p | \psi_p^* \rangle.
\end{equation}
The conservation of the current~(\ref{current}), $\dot{j}_p = 0$, follows directly from the conservation of the inner product. The incident, reflected, and transmitted currents are then
\begin{eqnarray}
\label{currents}
&& j_p^{({\rm inc})} =
\langle \alpha_p \psi_p^{({\rm out})} | \alpha_p^* \psi_p^{({\rm out}) *} \rangle = |\alpha_p|^2, \\
&& j_p^{({\rm ref})} =
\langle \beta_p \psi_p^{({\rm out}) *} | \beta_p^* \psi_p^{({\rm out})} \rangle = -|\beta_p|^2, \\
&& j_p^{({\rm tr})} =
\langle \psi_p^{({\rm in})} | \psi_p^{({\rm in}) *} \rangle = 1,
\end{eqnarray}
where we used Eqs.~(\ref{inner2}).
Taking into account Eq.~(\ref{n}) and the Bogoliubov condition~(\ref{relation}), we find
the reflection and transmission coefficients
\begin{eqnarray}
\label{Ref}
&& R_{p} = -\frac{j_{p}^{({\rm ref})}}{j_{p}^{({\rm inc})}} = \frac{n_{p}}{1+n_{p}} \, , \\
\label{Tr}
&& T_{p} = \frac{j_{p}^{({\rm tr})}}{j_{p}^{({\rm inc})}} = \frac{1}{1+n_{p}} \, ,
\end{eqnarray}
from which the unitarity condition $R_{p} + T_{p} = 1$ directly follows.
It is clear that if $p^2 < \max U_0(\alpha)$, then the ``particle''
described by the wave function $\alpha_p \psi_p^{(\rm out)}$ will
tunnel through the potential barrier $U_p(\alpha)$.
To ``particles'' which deeply penetrate into the barrier, $p^2 \ll \max U_0(\alpha)$,
there will correspond a large reflection coefficient and, in turn,
by Eq.~(\ref{Ref}), a large ``particle'' number $n_p$.
On the other hand, if $p^2 > \max U_0(\alpha)$, the
``particle'' is reflected above the barrier. For $p^2 \gg \max U_0(\alpha)$,
the reflection coefficient for scattering above the barrier will be small.
To this case, there will correspond a small production of ``particles'', $n_p \ll 1$.
\subsection{IIIb. WKB approximation}
The usefulness of Eqs.~(\ref{Ref}) and (\ref{Tr}) resides in the fact that if the
potential $U_p$ is slowly varying, in the sense specified below,
one can apply the standard semiclassical (WKB) results for the
reflection and transmission coefficients. Using the formal equivalence
of the two problems of potential scattering in
quantum mechanics and the creation of
universes out of the vacuum in third quantization, one can then
find the expression for the universe number $n_p$.
The WKB approximation is valid whenever
the potential $U_p$ satisfies the semiclassical condition~\cite{Landau}
\begin{equation}
\label{conditionU}
\left|\frac{\dot{U}_p}{2U_p^{3/2}} \right| \ll 1.
\end{equation}
It can be verified that the above condition is satisfied
by the Wheeler-DeWitt potential~(\ref{U}) for values of $\alpha$ far from the
turning points; near the turning points, the WKB approximation is in general not valid.
{\it Large universe number.} -- Let us consider the case of closed universes, $k=1$ (see Fig.~1).
Accordingly, there will be two classical turning points, $\alpha_2(p) < \alpha_1(p)$,
for a deep penetration through the potential barrier.
Since in this case $R_p \simeq 1$, and thus $T_p \ll 1$, we have from Eq.~(\ref{Tr}),
$n_p \simeq T_p^{-1} \gg 1$. Using the standard result for the
expression of the transmission coefficient in WKB approximation~\cite{Landau},
we find
\begin{equation}
\label{sm2}
n_p = e^{2S_p},
\end{equation}
where
\begin{equation}
\label{sm3}
S_p = \int_{\alpha_2(p)}^{\alpha_1(p)} \! d\alpha \sqrt{U_p(\alpha)} \, .
\end{equation}
{\it Small universe number.} -- The probability that a ``particle'' is scattered above the
potential barrier is small for values of $p^2$ large compared
to the height of the Wheeler-DeWitt barrier $U_0(\alpha)$. This is true for
closed, flat, and open universes. Using Eq.~(\ref{Ref}), we
then have $n_p \simeq R_p \ll 1$. Using the standard result for the
expression of the reflection coefficient in WKB approximation~\cite{Landau},
we find
\begin{equation}
\label{sm4}
n_p = e^{-4 \mathrm{Im} \sigma_p},
\end{equation}
where
\begin{equation}
\label{sm5}
\sigma_p = \int_{\alpha_R}^{\alpha_I(p)} \! d\alpha \sqrt{-U_p(\alpha)} \, .
\end{equation}
Here, $\alpha_I(p)$ is the so-called imaginary turning point,
the complex solution of the equation $U_p(\alpha) = 0$ for $p^2 > \max U_0(\alpha)$,
and $\alpha_R$ is an arbitrary and inessential real parameter.
The integration in Eq.~(\ref{sm5}) has to be performed in the complex upper half-plane,
$\mbox{Im}[\alpha_I(p)] > 0$.
If the equation for the imaginary turning point admits more than one solution,
one must select the one for which $\sigma_p$ is smallest~\cite{Landau}.
\section{IV. Creation of flat inflationary universes}
{\it Exact solution.} -- The case $k=0$ was analyzed by Hosoya and Morikawa~\cite{Hosoya-Morikawa}.
An exact solution for the number of created universes is given by
\begin{equation}
\label{HM}
n_p = \frac{1}{e^{2\pi |p|/3}-1}
\end{equation}
and is, interestingly enough, independent of $V_0$. For large $|p|$, $n_p $ is exponentially suppressed, while for small $|p|$, $n_p$ is inversely proportional to $|p|$.
Equation~(\ref{HM}) is easily found by inserting the Bunch-Davies, in- and out-solutions
of Eq.~(\ref{motion3}) (with $k=0$),
\begin{eqnarray}
\label{hp}
&& \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!
\psi_p^{(\rm in)} = \sqrt{\frac{\pi}{6}} \,
\sinh^{-1/2}(p\pi/3) J_{-ip/3}(V\sqrt{2V_0}e^{3\alpha}/3), \\
\label{hp2}
&& \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!
\psi_p^{(\rm out)} = \sqrt{\frac{\pi}{12}} \,
e^{-p\pi/6} H_{-ip/3}^{(2)}(V\sqrt{2V_0}e^{3\alpha}/3),
\end{eqnarray}
into Eq.~(\ref{alphabeta2}) and then using Eq.~(\ref{n}).
Here, $J_\nu (x)$ is the Bessel function of the first kind and $H_\nu^{(2)}(x)$ is the Hankel function of
the second kind~\cite{Abramowitz}.
\begin{figure*}[t!]
\begin{center}
\includegraphics[clip,width=0.48\textwidth]{Fig3a.pdf}
\hspace{0.3cm}
\includegraphics[clip,width=0.48\textwidth]{Fig3b.pdf}
\caption{{\it Left panel}. The functions $x_1(r)$, $x_2(r)$, and $x_3(r)$ in
Eqs.~(\ref{x1})-(\ref{x3}) for $0 \leq r \leq 4/27$.
Observe that $x_1(4/27) = x_2(4/27) = 2/3$ and $x_3(4/27) = -1/3$.
{\it Right panel}. The function $f(r)$ in Eq.~(\ref{f}) for $0 \leq r \leq 4/27$.}
\end{center}
\end{figure*}
{\it WKB approximation.} -- In this case, the WKB approximation is valid for
large values of $|p|$. The imaginary turning points are
\begin{equation}
\label{new1}
\alpha_I(p) = \frac{1}{6} \! \left[\ln (p^2/2V^2V_0) + i\pi (2n + 1)\right]\!, ~~n \in \mathbb{N}.
\end{equation}
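These turning points follow from solving $U_p(\alpha) = 0$ with the flat-space ($k=0$) Wheeler-DeWitt potential $U_p(\alpha) = -p^2 - 2V^2V_0 e^{6\alpha}$ [cf.\ the definition of $q$ below Eq.~(\ref{new2})]:
\begin{equation}
e^{6\alpha} = -\frac{p^2}{2V^2V_0} ~~ \Longrightarrow ~~
6\alpha = \ln (p^2/2V^2V_0) + i\pi(2n+1) \, , ~~ n \in \mathbb{N} .
\end{equation}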
Accordingly,
\begin{equation}
\label{new2}
\sigma_p = - \frac{q}{3} + \frac{|p|}{6} \ln \!\frac{|p|+ q}{|p|-q} \, ,
\end{equation}
where $q = \sqrt{-U_p(\alpha_R)} = \sqrt{p^2 + 2V^2V_0 e^{6\alpha_R}}$,
so that $\mbox{Im} \sigma_p = \pi |p|/6$. It follows that
\begin{equation}
\label{new3}
n_p = e^{-2\pi |p|/3},
\end{equation}
in agreement with Eq.~(\ref{HM}) in the case of large $|p|$.
{\it The case of null scalar potential.} -- In the case of flat universes with null
scalar potential, the in and out $\psi$ modes are normalized plane waves. As in the case
of conformally flat quantum theories in curved space, there is no production
of ``particles'' out of the vacuum. The number of created universes is then exactly zero.
\section{V. Creation of closed inflationary universes}
The problem does not admit an exact analytical solution.
\subsection{Va. Large universe number: small $|p|$}
{\it WKB approximation.} -- Let us work in WKB approximation and consider Eqs.~(\ref{sm2}) and (\ref{sm3}).
Using the change of variable $x = 2V_0 e^{2\alpha}$, the universe number can be written as
\begin{equation}
\label{npf}
n_p = e^{2\pi^2 f(r)/3V_0},
\end{equation}
where $r$ is given by Eq.~(\ref{r}) with $V = 2\pi^2$, and we have introduced the function
\begin{equation}
\label{fInt}
f(r) = \frac32 \int_{x_2}^{x_1} \frac{dx}{x} \sqrt{-x^3+x^2-r}.
\end{equation}
Here, $x_1 \geq x_2 \geq 0$ correspond to the classical turning points, the real and positive
solutions of the equation $x^3 - x^2 + r = 0$. The three solutions of this cubic equation
can be written as
\begin{eqnarray}
\label{x1}
&& x_1(r) = \frac13 \left( \! 1 + 2\cos \frac{\theta}{3} \right) \! , \\
\label{x2}
&& x_2(r) = \frac13 \left( \! 1 - 2\cos \frac{\theta + \pi}{3} \right)\! , \\
\label{x3}
&& x_3(r) = \frac13 \left( \! 1 - 2\cos \frac{\theta - \pi}{3} \right) \! ,
\end{eqnarray}
with
\begin{equation}
\label{theta}
\theta(r) = \arccos (1-27r/2).
\end{equation}
As is easy to check, the above three solutions are real when
\begin{equation}
\label{rcondition}
0 \leq r \leq 4/27.
\end{equation}
In this case, $x_1$ and $x_2$ are positive, with $x_1 \geq x_2$, and $x_3$ is negative (see
the left panel of Fig.~3).
The integral in Eq.~(\ref{fInt}) can be expressed in terms of complete elliptic integrals as
\begin{equation}
\label{f}
f(r) = c_K K (m) + c_E E (m) + c_{\Pi} \Pi (n,m),
\end{equation}
where
\begin{equation}
\label{ccc}
c_K = \frac{x_3}{\sqrt{x_2-x_3}} \, , ~~ c_E = \frac{x_2-x_3}{\sqrt{x_2-x_3}} \, , ~~
c_{\Pi} = \frac{3 x_1 x_3}{\sqrt{x_2-x_3}} \, ,
\end{equation}
and
\begin{equation}
\label{mn}
m = \frac{x_2-x_1}{x_2-x_3} \, , ~~ n = \frac{x_2-x_1}{x_2} \, .
\end{equation}
Here, $K(m)$, $E(m)$, and $\Pi(n,m)$ are the complete elliptic integrals of the first,
second, and third kind, respectively~\cite{Abramowitz}.
A plot of the function $f(r)$ is shown in the right panel of Fig.~3.
Notice that
\begin{equation}
\label{lim}
\lim_{r \rightarrow 0}f(r) = 1, ~~ \lim_{r \rightarrow 4/27}f(r) = 0.
\end{equation}
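These limits can be verified numerically. The Python sketch below (illustrative only; the helper names are ours) evaluates the integral~(\ref{fInt}) by midpoint quadrature after the substitution $x = x_2 + (x_1 - x_2)\sin^2 t$, which removes the square-root singularities at the turning points.

```python
import math

def turning_points(r):
    # Real roots of x^3 - x^2 + r = 0, Eqs. (x1)-(x3), for 0 <= r <= 4/27.
    arg = max(-1.0, min(1.0, 1.0 - 27.0 * r / 2.0))  # clamp rounding noise
    th = math.acos(arg)
    x1 = (1.0 + 2.0 * math.cos(th / 3.0)) / 3.0
    x2 = (1.0 - 2.0 * math.cos((th + math.pi) / 3.0)) / 3.0
    x3 = (1.0 - 2.0 * math.cos((th - math.pi) / 3.0)) / 3.0
    return x1, x2, x3

def f(r, n=20000):
    # Eq. (fInt), using -x^3 + x^2 - r = (x1 - x)(x - x2)(x - x3).
    # The substitution x = x2 + (x1 - x2) sin^2 t maps [x2, x1] onto [0, pi/2].
    x1, x2, x3 = turning_points(r)
    h = (math.pi / 2.0) / n
    total = 0.0
    for i in range(n):
        s2 = math.sin((i + 0.5) * h) ** 2
        x = x2 + (x1 - x2) * s2
        total += 2.0 * (x1 - x2) ** 2 * s2 * (1.0 - s2) * math.sqrt(x - x3) / x
    return 1.5 * total * h

assert abs(f(1e-8) - 1.0) < 5e-3       # f(r) -> 1 as r -> 0
assert f(4.0 / 27.0 - 1e-9) < 1e-3     # f(r) -> 0 as r -> 4/27
```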
The WKB result~(\ref{npf}) is valid only if $n_p \gg 1$, namely when the exponent
$2\pi^2 f(r)/3V_0$ is much larger than unity. For $r \ll 1$ (the case $r \gg 1$ will
be analyzed in Section Vb), this means $V_0 \ll 2 \pi^2/3$. In this case, we have
\begin{equation}
\label{npclosedOK}
n_p = e^{2\pi^2/3V_0}, ~~ |p| \ll 1 \ll 2\pi^2/3V_0,
\end{equation}
for the average number of created universes.
The case $r \ll 1$ and $V_0 \gg 2 \pi^2/3$, namely $|p| \ll 2\pi^2/3V_0 \ll 1$,
cannot be solved in WKB approximation. We proceed as follows.
{\it Approximate Wheeler-DeWitt potential.} -- Let us approximate the Wheeler-DeWitt potential as
\begin{eqnarray}
\label{1c}
U_p(\alpha) \!\!& \simeq &\!\!
\left\{ \begin{array}{lll}
-p^2 + 4\pi^4 e^{4\alpha}, & ~~~~ \alpha \leq \alpha_\star,
\\
-p^2 - 8\pi^4 V_0 e^{6\alpha}, & ~~~~ \alpha > \alpha_\star,
\end{array}
\right.
\end{eqnarray}
where $e^{\alpha_\star} = 1/\sqrt{3V_0}$, with $\alpha_\star$ the
position of the maximum of the Wheeler-DeWitt potential. The approximate potential~(\ref{1c})
is discontinuous at $\alpha = \alpha_\star$ with a jump discontinuity of
\begin{equation}
\label{jump}
\Delta = |\!\! \lim_{\alpha \rightarrow \alpha_\star^{-}} \!\! U_p(\alpha) \,
- \lim_{\alpha \rightarrow \alpha_\star^{+}} \!\! U_p(\alpha)|
= \frac53 \! \left(\frac{2\pi^2}{3V_0} \right)^{\!2}.
\end{equation}
In the analogue case of quantum potential scattering,
the reflection and transmission coefficients obtained by approximating a smooth potential with
one possessing a jump discontinuity are trustworthy
only if the momentum of the incident particle is much smaller than the square root of the
jump (see, e.g.,~\cite{Campanelli}).
In our case, such a validity condition translates into the condition
\begin{equation}
\label{conditionDelta}
|p| \ll \sqrt{\Delta/2} \simeq 2\pi^2/3V_0.
\end{equation}
The Bunch-Davies-normalized $\psi_p^{({\rm in})}$ and $\psi_p^{({\rm out})}$ wave functions are easily found
in the case of the approximate Wheeler-DeWitt potential. They are
\begin{eqnarray}
\label{2c}
\!\!\!\!\!\!\!\!\!\!\! \psi_p^{({\rm in})} \!\!& = &\!\!
\left\{ \begin{array}{lll}
u_p, & ~~~~ \alpha \leq \alpha_\star, \\
c_1 v_p + c_2 v_p^*, & ~~~~ \alpha > \alpha_\star,
\end{array}
\right.
\end{eqnarray}
and
\begin{eqnarray}
\label{3c}
\!\!\!\!\!\!\!\!\!\!\! \psi_p^{({\rm out})} \!\!& = &\!\!
\left\{ \begin{array}{lll}
c_3 u_p + c_4 u_p^*, & ~ \alpha \leq \alpha_\star, \\
v_p, & ~ \alpha > \alpha_\star,
\end{array}
\right.
\end{eqnarray}
respectively. Here, $u_p$ is given by
\begin{equation}
\label{up}
u_p = \frac{\Gamma(1-ip/2)}{2^{ip/2} \sqrt{2p}} \, I_{-ip/2}(\pi^2e^{2\alpha}),
\end{equation}
where $\Gamma(x)$ is the Gamma function and $I_\nu(x)$ is the modified Bessel
function of the first kind~\cite{Abramowitz}. The function $u_p$ represents a normalized
in-mode of a closed universe with $V_0 = 0$. The function $v_p$, instead, is given
by the right-hand side of Eq.~(\ref{hp2}) and represents a normalized out-mode of a
flat universe with $V_0 \neq 0$.
The integration constants $c_i$ ($i=1,2,3,4$) can be found by imposing the
continuity of $\psi_p^{({\rm in})}$ and $\psi_p^{({\rm out})}$,
and their first derivatives, at $\alpha = \alpha_\star$. We find
\begin{eqnarray}
&& c_1 = c_3^*=
\langle \psi_p^{({\rm in})} | \psi_p^{({\rm out}) *} \rangle_{|\alpha=\alpha_\star} = \alpha_p, \\
&& c_2 = -c_4 =
-\langle \psi_p^{({\rm in})} | \psi_p^{({\rm out})} \rangle_{|\alpha=\alpha_\star} = \beta_p.
\end{eqnarray}
Accordingly, the average number of universes is
\begin{equation}
\label{npclosed}
n_p = |\langle u_p | v_p \rangle|^2_{\alpha=\alpha_\star}.
\end{equation}
For $|p| \rightarrow 0$, or more precisely for $|p| \ll \min[1,2\pi^2/3V_0]$,
we find
\begin{equation}
\label{npclosed2}
n_p = \frac{H(V_0/2\pi^2)}{\pi |p|}
\end{equation}
at the leading order, where
\begin{eqnarray}
\label{hc}
&& \!\!\! H(x) = \frac{\pi^2}{1296 x^2} \nonumber \\
&& \!\!\! \times \!
\left[ \sqrt{6} I_1 \!\! \left(\frac{1}{6x}\right) \! H_0^{(1)}\!\! \left(\frac{\sqrt{6}}{27x} \right)
+ 2 I_0\!\! \left(\frac{1}{6x} \right) \! H_1^{(1)} \!\! \left(\frac{\sqrt{6}}{27x} \right) \right] \nonumber \\
&& \!\!\! \times \!
\left[ \sqrt{6} I_1 \!\! \left(\frac{1}{6x} \right) \! H_0^{(2)}\!\! \left(\frac{\sqrt{6}}{27x} \right)
+ 2 I_0 \!\! \left(\frac{1}{6x} \right) \! H_1^{(2)} \!\! \left(\frac{\sqrt{6}}{27x} \right) \right] \!,
\nonumber \\
\end{eqnarray}
with $H_\nu^{(1)}(x)$ being the Hankel function of the first kind~\cite{Abramowitz}.
Figure~4 shows the function $H(x)$ together with its asymptotic expansions
for small and large values of the argument,
\begin{eqnarray}
\label{hasyc}
H(x) \!\!& = &\!\!
\left\{ \begin{array}{lll}
\frac{5}{4\sqrt{6}} \, e^{1/3x} \left( 1 + \mathcal{O}(x) \right) \! ,
& ~ x \rightarrow 0, \\
3/2 + \mathcal{O}(1/x^2), & ~ x \rightarrow \infty.
\end{array}
\right.
\end{eqnarray}
Accordingly, the average number of created universes for small $|p|$ is
\footnote{For large $|p|$, or more precisely for $|p| \gg \max[1,2\pi^2/3V_0]$,
Eq.~(\ref{npclosed}) would give an incorrect power-law decay for $n_p$,
instead of the correct exponential decay that will be derived in WKB approximation (see below).
This is due to the nonanalyticity of the approximate expression of the potential
$U_p(\alpha)$ at the point $\alpha_\star$. Indeed,
using perturbation theory and following~\cite{Landau} it is easy to find the expression
of the universe number in the case of large $|p|$. It turns out to be
\begin{equation}
\label{npclosed3}
n_p = \Delta^2/16 p^4 = 25\pi^8/729 V_0^4 p^4,
\end{equation}
where $\Delta$ is defined in Eq.~(\ref{jump}). We stress again that this result is
unphysical and follows from having approximated
the potential $U_p$ with a nonanalytical expression. Numerically, we checked that
Eq.~(\ref{npclosed}) ``correctly'' reduces to Eq.~(\ref{npclosed3}) for $|p| \gg \max[1,2\pi^2/3V_0]$.}
\begin{eqnarray}
\label{npskNEW}
n_p \!\!& \simeq &\!\!
\left\{ \begin{array}{lll}
\frac{5}{4\sqrt{6} \, \pi |p|} \, e^{2\pi^2/3V_0},
& ~ |p| \ll 1 \ll 2\pi^2/3V_0, \\ \\
\frac{3}{2\pi |p|} \, ,
& ~ |p| \ll 2\pi^2/3V_0 \ll 1.
\end{array}
\right.
\end{eqnarray}
The first equation in~(\ref{npskNEW}) is
in agreement with the result~(\ref{npclosedOK}) obtained in WKB approximation.
Notice that both equations are approximate results and that, in general,
the WKB approximation cannot be used to calculate the pre-exponential factor
in the transmission coefficient~\cite{Landau} that, in our case,
corresponds to the reciprocal of the average number of created universes.
It is interesting to observe that for small $|p|$ and large values of the scalar potential,
$|p| \ll 2\pi^2/3V_0 \ll 1$, the number of closed universes approaches the number
of flat universes [see Eq.~(\ref{HM})], and that the former is exponentially amplified
for small scalar potentials, $|p| \ll 1 \ll 2\pi^2/3V_0$.
\begin{figure}[t!]
\begin{center}
\includegraphics[clip,width=0.48\textwidth]{Fig4.pdf}
\caption{The continuous and dotted lines represent, respectively,
the function $H(x)$ in Eq.~(\ref{hc}) and its asymptotic expansions in Eq.~(\ref{hasyc}).}
\end{center}
\end{figure}
\subsection{Vb. Small universe number: large $|p|$}
{\it WKB approximation.} -- The case of large $|p|$ can only be analyzed in WKB approximation (see footnote 2).
For $r > 4/27$, the solutions $x_1(r)$ and $x_2(r)$ are complex conjugate, and $x_3(r)$ is negative. This
means that the Wheeler-DeWitt potential has no classical turning points. Using Eqs.~(\ref{sm4}) and (\ref{sm5}),
we find for the average number of created universes
\begin{equation}
\label{npg}
n_p = e^{-2\pi^2 g(r)/3V_0},
\end{equation}
where
\begin{equation}
\label{gInt}
g(r) = 3 \mbox{Im} \! \left[ \int_{x_R}^{x_1} \frac{dx}{x} \sqrt{x^3-x^2+r} \, \right] \!.
\end{equation}
Here, $x_{R}$ is a real and positive parameter, and between $x_1(r)$ and $x_2(r)$ we selected the former
as the imaginary turning point since it gives the smallest $\sigma_p$ (see discussion in Section IIIb).
Taking $x_{R} = -x_3(r)$, we find
\footnote{Interestingly enough, a numerical analysis shows that $g(r) = -f(r)$,
with $f(r)$ given by Eq.~(\ref{f}) for $r > 4/27$. We are not able to
provide an analytical proof of the above equality.}
\begin{equation}
\label{gg}
g(r) = -2 \mbox{Re} \! \left[ c_K F(\varphi,m) + c_E E(\varphi,m) + c_{\Pi} \Pi (n,\varphi,m) \right] \! ,
\end{equation}
where $c_K$, $c_E$, and $c_{\Pi}$ are given by Eq.~(\ref{ccc}), and
\begin{equation}
\label{varphi}
\varphi = \arcsin \! \sqrt{(x_2+x_3)/(x_2-x_1)} \, .
\end{equation}
Here, $F(\varphi,m)$, $E(\varphi,m)$, and $\Pi(n,\varphi,m)$
are the incomplete elliptic integrals of the first, second, and third kind,
respectively~\cite{Abramowitz}.
Notice that
\begin{equation}
\label{limitg}
\lim_{r \rightarrow 4/27}g(r) = 0.
\end{equation}
A plot of $g(r)$ and its asymptotic expansion,
\begin{equation}
\label{gexp}
g(r) = \pi \sqrt{r} - C r^{1/6} + \mathcal{O}(r^{-1/6}), ~~ r \rightarrow +\infty,
\end{equation}
is shown in Fig.~5. The constant $C$ in the above equation is defined by
\begin{equation}
\label{Cconstant}
C = \frac{3z_1 E(z_2) - \sqrt{3} z_1^*K(z_2)}{\sqrt{8}} \simeq 1.12025,
\end{equation}
where $z_1 = \sqrt{3 + i\sqrt{3}}$ and $z_2 = (1+i\sqrt{3})/2$.
Inserting the leading term of the asymptotic expansion~(\ref{gexp}) into Eq.~(\ref{npg}), we find
\begin{equation}
\label{npL}
n_p \simeq e^{-2\pi |p|/3}, ~~ |p| \gg 1,
\end{equation}
for the average number of created universes. Thus, the
number of closed universes is exponentially suppressed for large $|p|$, as in the case
of flat universes [see Eq.~(\ref{HM})].
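Explicitly, using $\sqrt{r} = |p|V_0/\pi^2$ [Eq.~(\ref{r}) with $V = 2\pi^2$, as in Section Va], the leading term of Eq.~(\ref{gexp}) gives
\begin{equation}
\frac{2\pi^2}{3V_0} \, g(r) \simeq \frac{2\pi^2}{3V_0} \, \pi \sqrt{r}
= \frac{2\pi^2}{3V_0} \, \frac{\pi |p| V_0}{\pi^2} = \frac{2\pi|p|}{3} \, ,
\end{equation}
which is the exponent in Eq.~(\ref{npL}).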
\begin{figure}[t!]
\begin{center}
\includegraphics[clip,width=0.48\textwidth]{Fig5.pdf}
\caption{The function $g(r)$ in Eq.~(\ref{gg}) for $r \geq 4/27$
with its asymptotic expansion~(\ref{gexp}).}
\end{center}
\end{figure}
{\it The case of null scalar potential.} -- In the case of closed universes with null
scalar potential, the out $\psi$ modes are exactly zero.
In this case, indeed, the Wheeler-DeWitt potential grows exponentially in the out region
($\alpha \rightarrow +\infty$), and represents an infinite potential barrier in the analogue case
of quantum potential scattering. No ``particles'' are present in the out region,
and the number of created universes is exactly zero.
The analogy between quantum potential scattering and the creation of universes from nothing
rests on the assumption that the in and out modes are normalized according to the
Bunch-Davies ``prescription'', as in the case of quantum theory in curved space.
This in turn follows from the fact that the procedure of third quantization closely
mimics the one adopted in second quantization.
\footnote{Although in this paper we use the
{\it standard} Bunch-Davies vacuum, other possibilities cannot be excluded.
Indeed, the out-vacuum used by Kim~\cite{Kim} is not a Bunch-Davies normalized vacuum.
Not surprisingly, he found that the number of created closed universes in the case of
null scalar potential is different from zero. The number of created universes, thus,
strongly depends on the choice of the vacuum.
A similar situation occurs in second quantization, where the probability of creating a universe
strongly depends on the choice of the initial conditions for the wave
function of the Universe (see, e.g.,\cite{X}).}
\section{VI. Creation of open inflationary universes}
\subsection{VIa. The case of null scalar potential}
{\it Exact solution.} -- The case $k=-1$ and $V_0 = 0$ was analyzed by Kim~\cite{Kim}.
An exact solution for the number of created universes is given by
\begin{equation}
\label{Kim}
n_p = \frac{1}{e^{\pi |p|}-1}.
\end{equation}
For large $|p|$, $n_p $ is exponentially suppressed, while for small $|p|$, $n_p$ is inversely proportional to $|p|$.
Equation~(\ref{Kim}) is easily found by inserting the Bunch-Davies, in- and out-solutions
of Eq.~(\ref{motion3}) (with $k = -1$ and $V_0 = 0$),
\begin{eqnarray}
\label{jp}
&& \!\!\!\!\!\! \psi_p^{(\rm in)} = \sqrt{\frac{\pi}{4}} \,
\sinh^{-1/2}(p\pi/2) J_{-ip/2}(Ve^{2\alpha}/2), \\
\label{jp2}
&& \!\!\!\!\!\! \psi_p^{(\rm out)} = \sqrt{\frac{\pi}{8}} \,
e^{-p\pi/4} H_{-ip/2}^{(2)}(Ve^{2\alpha}/2),
\end{eqnarray}
into Eq.~(\ref{alphabeta2}) and then using Eq.~(\ref{n}).
{\it WKB approximation.} -- In this case, the WKB approximation is valid for
large values of $|p|$. The imaginary turning points are
\begin{equation}
\label{nnew1}
\alpha_I(p) = \frac{1}{4} \left[\ln (p^2/V^2) + i\pi (2n + 1) \right] \!, ~~ n \in \mathbb{N}.
\end{equation}
Accordingly,
\begin{equation}
\label{nnew2}
\sigma_p = - \frac{s}{2} + \frac{|p|}{4} \ln \! \frac{|p|+ s}{|p|-s} \, ,
\end{equation}
where $s = \sqrt{-U_p(\alpha_R)} = \sqrt{p^2 + V^2e^{4\alpha_R}}$,
so that $\mbox{Im} \sigma_p = \pi |p|/4$. It follows that
\begin{equation}
\label{nnew3}
n_p = e^{-\pi |p|},
\end{equation}
in agreement with Eq.~(\ref{Kim}) in the case of large $|p|$.
\subsection{VIb. Small universe number: large $|p|$}
\begin{figure*}[t!]
\begin{center}
\includegraphics[clip,width=0.48\textwidth]{Fig6a.pdf}
\hspace{0.3cm}
\includegraphics[clip,width=0.48\textwidth]{Fig6b.pdf}
\caption{{\it Left panel}. The functions $y_1(r)$, $y_2(r)$, and $y_3(r)$ in
Eqs.~(\ref{y1})-(\ref{y3}).
{\it Right panel}. The continuous and dotted lines represent, respectively,
the function $k(r)$ in Eq.~(\ref{kk}) and its asymptotic expansions in Eq.~(\ref{kasy}).}
\end{center}
\end{figure*}
{\it WKB approximation.} -- The case $k=-1$ and $V_0 \neq 0$ cannot be solved analytically.
Working in WKB approximation, we consider Eqs.~(\ref{sm4}) and (\ref{sm5}) since,
in this case, there are no classical turning points.
Using the change of variable $y = 2V_0 e^{2\alpha}$, the universe number can be written as
\begin{equation}
\label{npk}
n_p = e^{-V k(r)/3V_0},
\end{equation}
where
\begin{equation}
\label{kInt}
k(r) = 3 \mbox{Im} \! \left[ \int_{y_R}^{y_1} \frac{dy}{y} \sqrt{y^3+y^2+r} \, \right] \! .
\end{equation}
Here, $y_{R}$ is a real and positive parameter, while $y_1$ corresponds to the imaginary
turning point, the solution of the equation $y^3 + y^2 + r = 0$ which
gives the smallest $\sigma_p$ in Eq.~(\ref{sm5}).
The three solutions of this cubic equation can be written as
\begin{eqnarray}
\label{y1}
&& y_1 = -\frac13 \left(1 - 2\cos \frac{\vartheta}{3} \right) \! , \\
\label{y2}
&& y_2 = -\frac13 \left(1 + 2\cos \frac{\vartheta + \pi}{3} \right) \! , \\
\label{y3}
&& y_3 = -\frac13 \left(1 + 2\cos \frac{\vartheta - \pi}{3} \right) \! ,
\end{eqnarray}
with
\begin{equation}
\label{vartheta}
\vartheta(r) = \arccos (-1-27r/2).
\end{equation}
Notice that $\vartheta(r) = \pi - \theta(-r)$,
$y_1(r) = -x_3(-r)$, $y_2(r) = -x_2(-r)$, and $y_3(r) = -x_1(-r)$, where $\theta(r)$ is
defined in Eq.~(\ref{theta}), and $x_i(r)$ are given by Eqs.~(\ref{x1})-(\ref{x3}).
As is easy to check, $y_3$ is real and negative, while $y_1$ and $y_2$ are complex conjugate (see
the left panel of Fig.~6).
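Since $\vartheta(r)$ is complex for $r > 0$, Eqs.~(\ref{y1})-(\ref{y3}) are most easily checked with complex arithmetic. A minimal Python sketch (illustrative only; the helper names are ours):

```python
import cmath

def y_roots(r):
    # Trigonometric solutions of y^3 + y^2 + r = 0, Eqs. (y1)-(y3).
    # For r > 0 the angle vartheta = arccos(-1 - 27r/2) is complex.
    v = cmath.acos(-1.0 - 27.0 * r / 2.0)
    y1 = -(1.0 - 2.0 * cmath.cos(v / 3.0)) / 3.0
    y2 = -(1.0 + 2.0 * cmath.cos((v + cmath.pi) / 3.0)) / 3.0
    y3 = -(1.0 + 2.0 * cmath.cos((v - cmath.pi) / 3.0)) / 3.0
    return y1, y2, y3

for r in (0.05, 1.0, 10.0):
    y1, y2, y3 = y_roots(r)
    # Each expression is indeed a root of the cubic ...
    for y in (y1, y2, y3):
        assert abs(y**3 + y**2 + r) < 1e-9
    # ... with y1, y2 complex conjugate and y3 real and negative.
    assert abs(y1 - y2.conjugate()) < 1e-9
    assert abs(y3.imag) < 1e-9 and y3.real < 0
```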
Taking $y_R = -y_3(r)$, the integral in Eq.~(\ref{kInt}) can be expressed in terms of
incomplete elliptic integrals as
\footnote{Numerically, we find that
$k(r) = C_K K(\mu) + C_E E(\mu) - C_{\Pi} \Pi (\nu,\mu)$. We are not able to
give an analytical proof of the above equality.}
\begin{equation}
\label{kk}
k(r) = 2\mbox{Re} \! \left[ C_K F(\omega,\mu)+ C_E E(\omega,\mu) - C_{\Pi} \Pi (\nu,\omega,\mu)\right] \!,
\end{equation}
where
\begin{equation}
\label{cccNew}
C_K = \frac{y_3}{\sqrt{y_2-y_3}} \, , ~~ C_E = \frac{y_2-y_3}{\sqrt{y_2-y_3}} \, , ~~
C_{\Pi} = \frac{3 y_1 y_3}{\sqrt{y_2-y_3}} \, ,
\end{equation}
\begin{equation}
\label{munu}
\mu = \frac{y_2-y_1}{y_2-y_3} \, , ~~ \nu = \frac{y_2-y_1}{y_2} \, ,
\end{equation}
and
\begin{equation}
\label{omega}
\omega = \arcsin \! \sqrt{(y_2+y_3)/(y_2-y_1)} \, .
\end{equation}
We show the graph of the function $k(r)$ in the
right panel of Fig.~6. Also shown are the asymptotic expansions of $k(r)$ for small and
large values of the argument,
\begin{eqnarray}
\label{kasy}
k(r) \!\!& = &\!\!
\left\{ \begin{array}{lll}
\frac32 \pi \sqrt{r} -\frac38 \pi r + \mathcal{O}(r^2), & ~ r \rightarrow 0, \\
\pi \sqrt{r} + C r^{1/6} + \mathcal{O}(r^{-1/6}), & ~ r \rightarrow \infty,
\end{array}
\right.
\end{eqnarray}
where the constant $C$ is given by Eq.~(\ref{Cconstant}).
Inserting the leading terms of the above asymptotic expansions into Eq.~(\ref{npk}), we find
\begin{eqnarray}
\label{npLk}
n_p \!\!& \simeq &\!\!
\left\{ \begin{array}{lll}
e^{-\pi |p|}, & ~ 1 \ll |p| \ll V/2V_0, \\
e^{-2\pi |p|/3}, & ~ |p| \gg \max[1,V/2V_0].
\end{array}
\right.
\end{eqnarray}
Thus, the number of open universes is exponentially suppressed for large $|p|$.
If the scalar potential is small, $V/2V_0 \gg 1$, the suppression factor is
the same as in the case of open universes with null scalar potential [see Eq.~(\ref{Kim})],
while if the scalar potential is large, $V/2V_0 \ll 1$, the suppression is similar
to that of flat universes [see Eq.~(\ref{HM})].
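Explicitly, using $\sqrt{r} = 2|p|V_0/V$ [Eq.~(\ref{r})], the leading terms of Eq.~(\ref{kasy}) give for the exponent in Eq.~(\ref{npk})
\begin{equation}
\frac{V}{3V_0}\, k(r) \simeq \frac{V}{3V_0} \, \frac{3\pi}{2} \sqrt{r} = \pi |p| \, , ~~~~
\frac{V}{3V_0}\, k(r) \simeq \frac{V}{3V_0} \, \pi \sqrt{r} = \frac{2\pi|p|}{3} \, ,
\end{equation}
in the small-$r$ and large-$r$ regimes, respectively.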
\subsection{VIc. Large universe number: small $|p|$}
{\it Approximate Wheeler-DeWitt potential.} -- The case of small $|p|$ cannot be solved in WKB approximation.
Let us proceed as in Section Va by approximating the Wheeler-DeWitt potential as
\begin{eqnarray}
\label{1}
U_p(\alpha) \!\!& \simeq &\!\!
\left\{ \begin{array}{lll}
-p^2 - V^2 e^{4\alpha}, & ~~~~ \alpha \leq \alpha_*,
\\
-p^2 - 2V^2 V_0 e^{6\alpha}, & ~~~~ \alpha > \alpha_*,
\end{array}
\right.
\end{eqnarray}
where $e^{\alpha_*} = 1/\sqrt{2V_0}$. The approximate potential is continuous at
$\alpha_*$, while its derivative is discontinuous with a jump discontinuity of
\begin{equation}
\label{jump2}
\delta = |\!\! \lim_{\alpha \rightarrow \alpha_*^{-}} \!\! \dot{U}_p(\alpha) \,
- \lim_{\alpha \rightarrow \alpha_*^{+}} \!\! \dot{U}_p(\alpha)|
= \frac{V^2}{2V_0^2} \, .
\end{equation}
As discussed in Section Va, the above approximation is trustworthy only
for values of $|p|$ small compared to the square root of the jump,
\begin{equation}
\label{conditionDelta2}
|p| \ll \sqrt{|\alpha_*|\delta/2} \sim V/2V_0.
\end{equation}
The Bunch-Davies-normalized $\psi_p^{({\rm in})}$ and $\psi_p^{({\rm out})}$
wave functions are easily found in the case of the approximate Wheeler-DeWitt potential. They are
\begin{eqnarray}
\label{2}
\!\!\!\!\!\!\!\!\!\!\! \psi_p^{({\rm in})} \!\!& = &\!\!
\left\{ \begin{array}{lll}
w_p, & ~~~~ \alpha \leq \alpha_*,
\\
c_1 v_p + c_2 v_p^*, & ~~~~ \alpha > \alpha_*,
\end{array}
\right.
\end{eqnarray}
and
\begin{eqnarray}
\label{3}
\!\!\!\!\!\!\!\!\!\!\! \psi_p^{({\rm out})} \!\!& = &\!\!
\left\{ \begin{array}{lll}
c_3 w_p + c_4 w_p^*, & ~ \alpha \leq \alpha_*, \\
v_p, & ~ \alpha > \alpha_*,
\end{array}
\right.
\end{eqnarray}
respectively. Here, $w_p$ is given by the right-hand side of Eq.~(\ref{jp}) and represents a
normalized in-mode of an open universe with $V_0 = 0$, while $v_p$ is given by the right-hand side of Eq.~(\ref{hp2}) and represents a normalized out-mode of a flat universe with $V_0 \neq 0$.
The integration constants $c_i$ ($i=1,2,3,4$) can be found by imposing the continuity of $\psi_p^{({\rm in})}$ and $\psi_p^{({\rm out})}$,
and their first derivatives, at $\alpha_*$. We find
\begin{eqnarray}
&& c_1 = c_3^*=
\langle \psi_p^{({\rm in})} | \psi_p^{({\rm out}) *} \rangle_{|\alpha=\alpha_*} = \alpha_p, \\
&& c_2 = -c_4 =
-\langle \psi_p^{({\rm in})} | \psi_p^{({\rm out})} \rangle_{|\alpha=\alpha_*} = \beta_p.
\end{eqnarray}
Accordingly, the average number of universes is
\begin{equation}
\label{npopen}
n_p = |\langle w_p | v_p \rangle|^2_{\alpha=\alpha_*}.
\end{equation}
For $|p| \rightarrow 0$, or more precisely for $|p| \ll \min[1,V/2V_0]$,
we find
\begin{equation}
\label{npopen2}
n_p = \frac{h(V_0/V)}{\pi |p|}
\end{equation}
at the leading order, where
\begin{eqnarray}
\label{h}
&& \!\!\!\!\!\!\!\!\! h(x) = \frac{\pi^2}{96 x^2} \nonumber \\
&& ~\, \times \! \left[ J_1(1/4x) H_0^{(1)}(1/6x) - J_0(1/4x) H_1^{(1)}(1/6x) \right] \nonumber \\
&& ~\, \times \! \left[ J_1(1/4x) H_0^{(2)}(1/6x) - J_0(1/4x) H_1^{(2)}(1/6x) \right] \!.
\nonumber \\
\end{eqnarray}
Figure~7 shows the function $h(x)$ together with its asymptotic expansions
for small and large values of the argument,
\begin{eqnarray}
\label{hasy}
h(x) \!\!& = &\!\!
\left\{ \begin{array}{lll}
1 + x \cos (1/2x) + \mathcal{O}(x^2), & ~ x \rightarrow 0, \\
3/2 + \mathcal{O}(1/x^2), & ~ x \rightarrow \infty.
\end{array}
\right.
\end{eqnarray}
Therefore, at the leading order
\footnote{For large $|p|$, or more precisely for $|p| \gg \max[1,V/2V_0]$,
Eq.~(\ref{npopen}) would give an incorrect power-law decay for $n_p$,
instead of the correct exponential decay previously derived in WKB approximation. This,
as already discussed in footnote 4, is
due to the nonanalyticity of the potential $U_p(\alpha)$ at the point $\alpha_*$.
Using perturbation theory~\cite{Landau}, it is easy to find
\begin{equation}
\label{npopen3}
n_p = \delta^2/64 p^6 = V^4/256 V_0^4 p^6,
\end{equation}
where $\delta$ is defined by Eq.~(\ref{jump2}).
Numerically, we checked that
Eq.~(\ref{npopen}) ``correctly'' reduces to the ``unphysical'' result~(\ref{npopen3})
for $|p| \gg \max[1,V/2V_0]$.}
\begin{eqnarray}
\label{npsk}
n_p \!\!& \simeq &\!\!
\left\{ \begin{array}{lll}
\frac{1}{\pi |p|}, & ~ |p| \ll 1 \ll V/2V_0, \\ \\
\frac{3}{2\pi |p|}, & ~ |p| \ll V/2V_0 \ll 1.
\end{array}
\right.
\end{eqnarray}
Thus, for small $|p|$ and small values of the scalar potential,
$|p| \ll 1 \ll V/2V_0$, the number of open universes approaches the number
of open universes with null scalar potential [see Eq.~(\ref{Kim})], while
for small $|p|$ and large values of the scalar potential,
$|p| \ll V/2V_0 \ll 1$, it approaches the number of flat universes [see Eq.~(\ref{HM})].
\begin{figure}[t!]
\begin{center}
\includegraphics[clip,width=0.48\textwidth]{Fig7.pdf}
\caption{The continuous and dotted lines represent, respectively,
the function $h(x)$ in Eq.~(\ref{h}) and its asymptotic expansions in Eq.~(\ref{hasy}).}
\end{center}
\end{figure}
\section{VII. Discussion and Conclusions}
{\it Discussion.} -- In order to be consistent with cosmic microwave background observations,
the scale of inflation $V_0^{1/4}$, which is directly
related to the amplitude of the primordial tensor perturbations, has to be below
$1.7 \times 10^{16}\,$GeV~\cite{Planck}.
The minimum value for the so-called ``reheat temperature'' is around $4.7 \,$MeV~\cite{MM}.
This constraint, which comes from the analysis of cosmic microwave background radiation data,
assumes a scale of inflation greater than about $43\,$MeV, which can be taken as a lower limit
for $V_0^{1/4}$. In the units used in this paper, these limits on the scale of
inflation translate into the constraint
\begin{equation}
\label{V0limit1}
2.7 \times 10^{-81} \lesssim V_0 \lesssim 6.6 \times 10^{-11}
\end{equation}
for the value of the scalar potential.
Since $V_0 \ll 1$, the number of universes created from the third-quantized vacuum is
\begin{eqnarray}
\label{number}
n_p \!\!& \sim &\!\!
\left\{ \begin{array}{lll}
\frac{1}{|p|} e^{2\pi^2/3V_0}, & ~ |p| \ll 1, ~ (k = 1), \\
\\
\frac{1}{|p|}, & ~ |p| \ll 1, ~ (k = 0,-1), \\
\\
e^{-c \pi |p|}, & ~ |p| \gg 1, ~ (k = -1,0,1),
\end{array}
\right.
\end{eqnarray}
where $c = 2/3$ for closed and flat universes, and for open universes with $|p| \gg \max[1,V/2V_0]$,
while $c = 1$ for open universes with $1 \ll |p| \ll V/2V_0$.
Thus, universes with large values of $|p|$ are essentially not created, while the
creation from nothing occurs only for those universes labelled by small values of $|p|$.
Closed universes that are created in the out region with $a \gtrsim a_{\rm cr}$
undergo inflation since, in this case, $r = p^2V_0^2/\pi^4 \ll 1$ (see the middle panel of Fig.~2).
After creation, flat universes can either be kinetic-energy dominated or inflate.
Newly created open universes can inflate, be kinetic-energy dominated, or
curvature-dominated. For flat and open universes, the type of classical evolution after
creation depends on the value of the parameter $r = 4p^2V_0^2/V^2$
and on the ``size'' $a$ of the created universe (see the upper and lower panel of Fig.~2,
respectively).
For small $|p|$ (namely for values of $|p|$ such that universes are effectively created),
the ratio of the number of closed universes to either the number of flat or open
universes is given by the factor $e^{2\pi^2/3V_0}$. Using Eq.~(\ref{V0limit1}),
this ratio is given by
\begin{equation}
\label{n01}
10^{10^{10}} \lesssim \frac{n_{\rm closed}}{n_{\rm flat,open}} \lesssim 10^{10^{81}}.
\end{equation}
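As a numerical sanity check, the double-exponential bounds in Eq.~(\ref{n01}) follow from converting the exponent $2\pi^2/3V_0$ to base ten. A minimal Python sketch, using the bounds on $V_0$ from Eq.~(\ref{V0limit1}):

```python
import math

def log10_ratio(V0):
    """Base-10 exponent of the abundance ratio e^{2 pi^2 / (3 V0)}."""
    return 2.0 * math.pi ** 2 / (3.0 * V0) / math.log(10.0)

# Limits on the scalar potential quoted in the text.
V0_min, V0_max = 2.7e-81, 6.6e-11

exp_low = log10_ratio(V0_max)   # ~ 4e10, i.e. a ratio of order 10^(10^10)
exp_high = log10_ratio(V0_min)  # ~ 1e81, i.e. a ratio of order 10^(10^81)
print(f"10^{exp_low:.2e} <= n_closed/n_flat,open <= 10^{exp_high:.2e}")
```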
Interestingly enough, recent analyses of the Planck data on the Cosmic Microwave Background radiation
favour a positive-curvature Universe~\cite{Silk,Handley}.
{\it Conclusions.} -- The creation from nothing of closed, open, and flat universes in the presence of
a scalar field (the inflaton) is a general consequence of third quantization.
Solving the Wheeler-DeWitt equation both in the WKB approximation and using a suitable
approximation of the Wheeler-DeWitt potential, we have found that
the creation of universes, whether closed, open, or flat, is inhibited for universes with large
amounts of inflaton kinetic energy. For small values of the kinetic energy, instead,
closed, open, and flat universes are created from the third-quantized vacuum, the state of ``nothingness''.
Due to the relatively small value of the inflaton potential, as observed in our universe,
and for a given small amount of scalar kinetic energy, the creation of closed universes
is exponentially favoured over the creation of flat and open ones.
\section{Introduction}
\label{sec::intro}
The study of mechanical contact and friction is a subject of high importance in many fields, from biological and engineering applications to geological sciences. Since natural and industrial surfaces always exhibit roughness at sufficient magnification, the contact between solid bodies occurs on separate patches corresponding to asperities of the contacting surfaces~\cite{Archard_1953,Archard_1957,Bowden_2001,Greenwood_1966}. The evolution of the ratio of the real contact area to the apparent one under increasing external load determines essential contact properties such as friction, wear, and adhesion, and is responsible for heat transport through contact interfaces.
At the same time, the distribution of the free volume between contacting surfaces governs the fluid transport along the interface and is responsible for leakage/percolation phenomena, see for example~\cite{Dapp_2012,paggi2015evolution}.
Lubrication, i.e. separation of contacting surfaces by a fluid lubricant, is an efficient mechanism for reducing friction and wear. However, if the applied external load, pushing the contacting bodies together, is high enough, or if the sliding velocities are small, the hydrodynamic pressure developing in the fluid is not sufficient to separate the solids, and asperities of both surfaces can come into direct contact despite the presence of the lubricant, which inevitably increases friction. This scenario corresponds to the so-called mixed regime, in which the load-bearing capacity is split between the fluid and the contact areas. For even higher pressures and lower velocities, the whole load is carried by the mechanical contacts; this regime is termed boundary lubrication, see~\cite{Hamrock_2004,Azushima_2016} for details. On the other hand, under increasing external load the lubricating fluid may be trapped in valleys (pools) delimited completely by the contact zone. Fig.~\ref{fig::trapped_area} shows an example of the morphology of the contact interface between two elastic half-spaces with rough surfaces under external load~\cite{pei2005finite,carbone2008asperity,putignano2012influence,yastrebov_2017}. Note that the fraction of the \enquote{trapped} out-of-contact area (highlighted by red color), surrounded by contact patches, is significant.
\begin{figure}[t]
\centering
\begin{subfigure}{0.325\textwidth}
\begin{center}
\includegraphics[width=0.95\textwidth]{map_1.png}
\caption{ }
\label{fig::trapped_area_1}
\end{center}
\end{subfigure}
\begin{subfigure}{0.325\textwidth}
\begin{center}
\includegraphics[width=0.95\textwidth]{map_3.png}
\caption{ }
\label{fig::trapped_area_2}
\end{center}
\end{subfigure}
\begin{subfigure}{0.325\textwidth}
\begin{center}
\includegraphics[width=0.95\textwidth]{map_5.png}
\caption{ }
\label{fig::trapped_area_3}
\end{center}
\end{subfigure}
\caption{Morphology of the contact interface between an elastic half-space with a rough surface and a rigid flat under increasing external load, numerical simulation results~\cite{Yastrebov_2015}: black is the real contact area, white is the \enquote{free} out-of-contact area and red is the \enquote{trapped} out-of-contact area, bounded inside non-simply connected contact patches.}
\label{fig::trapped_area}
\end{figure}
The entrapment of the fluid in the interface can have a strong effect on the contact properties, especially if the fluid is highly incompressible~\cite{persson2012elastic,Matsuda_2016}. First, the trapped fluid resists compression and thus opposes the growth of the real contact area. Second, the applied external load is shared between the contacting asperities of the bodies and the pressurized fluid, so that the trapped fluid provides an additional load-carrying capacity (even in motionless contacts), reducing the normal pressure in the contact spots between the solid bodies. According to Coulomb's law of friction, the maximal tangential traction at the contact spots is proportional to the normal pressure; therefore the maximal macroscopic frictional force (of the whole contact interface) is proportional to the integral of the normal pressure over the real contact area. Consequently, the presence of the pressurized trapped fluid should reduce the global (apparent) coefficient of friction.
The effect of lubricant entrapment on reduction of friction was first recognized in the study of cold metal forming processes \cite{Kudo_1965,Nellemann_1977}, where the authors performed experiments on the sheet metal drawing test and identified three states, corresponding to different levels of the external pressure~\cite{Azushima_1995}. Low values of the external load are supported completely by the mechanical contact between asperities, and the global and local coefficients of friction are equal. In the medium range of pressures, the global coefficient of friction decreases with increasing load due to the closing of lubricant pools and the generation of hydrostatic pressure in the fluid, which supports a part of the external load. At even higher loads, the fluid escapes from the pools and permeates into the contact zones, so that both the real contact area and the coefficient of friction decrease with increasing load. This effect is, however, biased by the fact that the real contact area does not evolve linearly under high pressures~\cite{Archard_1957}, but rather as a concave function of pressure~\cite{Persson_2001,yastrebov_2017}, thus also resulting in a formal decrease of the friction coefficient in contact spots. Nevertheless, experimental results together with finite-element simulations of the problem of entrapment and permeation of the fluid into the contact interface during upsetting of a cylinder were presented, and the aforementioned states were also identified~\cite{Azushima_2000,Azushima_2011}. An extensive experimental study of lubricant entrapping and escape in cold rolling processes was presented in~\cite{Bech_1999}.
In biological sciences the effect of trapped lubricant in human joints was investigated with a view to reducing friction between rough cartilage surfaces~\cite{Soltz_2003, Chan_2011}. The concept of trapped fluid arises in the study of fatigue cracks in rolling contact, which considers the process of crack growth due to pressurized fluid lubricant, forced inside the crack by the external load and trapped there~\cite{Bower_1988}. The trapped fluid problem is also relevant to geophysical studies: a landslide or an earthquake can be caused by an elevation of the pressure of the fluid in the pores inside the rock, see for example~\cite{Viesca_2012,Garagash_2012}.
The effect of the trapped fluid is also of interest for the study of basal sliding of glaciers~\cite{Cuffey_2010}: the melt water, which is responsible for the lubrication, flows in a linked system of cavities in the interface between the glacier and the bedrock, and may be trapped there. Finally, the trapped fluid problem is also of importance for poromechanics~\cite{yu2002fractal,dormieux2002jmps,budiansky1976ijss,Coussy_2004}.
\cite{Kuznetsov_1985} extended Westergaard's celebrated analytical solution for the problem of contact between a regular wavy surface and a rigid half-plane~\cite{Westergaard_1939} by taking into account the presence of a compressible fluid trapped in the valleys between contacting asperities. Kuznetsov's solution demonstrates how the external pressure is divided between the fluid and the solid contact, which results in the decrease of the global coefficient of friction under increasing external load.
However, due to the assumptions (i) that the wavy surface behaves as a flat one and that Flamant's solution holds for every surface point, and (ii) that the horizontal component of the fluid pressure is negligible, it cannot describe the escape of the lubricant and the depletion of the real contact area.
This question will be discussed later in detail.
Recently, an analytical solution was proposed for the problem of sliding of a rigid periodic punch along a viscoelastic Winkler foundation with an incompressible fluid present in the gap~\cite{Goryacheva_2012}.
Despite significant attention to the problem of the trapped fluid in the contact interface, several questions remain open, such as: the mechanism of the trap opening, the evolution of the real contact area and of the global coefficient of friction during this process, and also the distribution of the frictional shear tractions in the contact interface under external normal loading in the presence of the pressurized fluid in the interface. Note that these questions cannot be addressed in the framework of the boundary element method (BEM), since it assumes infinitesimal slopes of the surface roughness, which is, as we will show, a too restrictive assumption for the considered problem. We address these questions in the current study in the framework of the finite element method (FEM).
The paper is organized as follows. In Section~\ref{sec::problem} we present the statement of the problem of the mechanical contact coupled with pressurized compressible and incompressible fluids trapped in the contact interface.
In Section~\ref{sec::analytic} we outline existing analytical solutions of this problem, and in Section~\ref{sec::methods} we discuss methods for its numerical solution.
Section~\ref{sec::results} is devoted to results, including comparison of Kuznetsov's analytical solution with our numerical simulations, the evolution of the real contact area and the global coefficient of friction, as well as the simulation and analysis of the frictional behaviour of the system under normal and tangential external loading. In Section~\ref{sec::conclusions} we present the conclusions.
Under the assumption of small slopes, in \ref{app:aux} we derive an analytical solution for vertical displacement of a wavy surface under the action of uniformly distributed pressure.
\ref{sec::appendix} provides details of the numerical formulation of the coupled trapped fluid/mechanical contact problem.
\section{Problem statement}
\label{sec::problem}
We consider a mechanical contact problem between a deformable half-plane with a periodic wavy surface and a rigid flat under the action of a far-field external pressure. This case was historically the starting point for the study of contact of rough surfaces~\cite{Westergaard_1939,Johnson_1985}. In addition, we take into account the influence of compressible or incompressible fluid trapped in the free volume between the two bodies, see Fig.~\ref{fig::problem}. We assume the plane strain problem and a linear elastic or elastic-perfectly \textit{J2-}plastic (the latter one is presented in Section~\ref{sec::elasto_plastic}) isotropic constitutive laws for the solid. Note that this problem is similar to the one already solved by~\cite{Kuznetsov_1985}, with the difference that we
assume a {\it small but finite} profile slope, which, as will be shown in the paper, is of great importance for an accurate treatment of this problem.
\begin{figure}[h]
\centering
\includegraphics[width=0.6\textwidth]{problem.pdf}
\caption{A sketch of the problem under study.}
\label{fig::problem}
\end{figure}
The initial gap between the wavy profile and the rigid plane, as well as the volume of this gap for one wavelength of the profile, are given, respectively, by:
\begin{equation}
\label{eq::gap_volume}
g_0(X) = \Delta \left(1 - \cos\frac{2 \pi X}{\lambda}\right), \quad V_{g0} = l\:\int\limits_0^\lambda \Delta \left(1 - \cos\frac{2 \pi X}{\lambda}\right) \; dX = l\:\lambda\:\Delta,
\end{equation}
where $\Delta$ and $\lambda$ are the amplitude and wavelength of the wavy surface profile, respectively, $X$ is the horizontal coordinate in the initial (reference) configuration, and $l$ is the length in the direction of the third coordinate $z$, which under the plane strain assumption is hereinafter set equal to one length unit.
\section{Analytical solutions\label{sec::analytic}}
\subsection{Westergaard's solution}
The problem of contact between an elastic half-space with a regular wavy surface $y=\Delta\cos(2\pi x/\lambda)$ and a rigid flat without fluid in the interface was solved by~\cite{Westergaard_1939}, see also~\cite{Johnson_1985}, for the case of infinitesimal ratio $\Delta / \lambda \ll 1$, i.e. infinitesimal slope of the roughness profile. According to this solution the pressure distribution inside contact patches ($-a + \lambda n \leq x \leq a + \lambda n, \: n \in \mathbb{Z}$) is given by:
\begin{equation}
p_W(x, a) = \frac{2 \pi E}{1 - \nu^2}\frac{\Delta}{\lambda} \cos{\frac{\pi x}{\lambda}} \sqrt{\sin^2{\frac{\pi a}{\lambda}} - \sin^2{\frac{\pi x}{\lambda}}},
\label{eq::p_west}
\end{equation}
where $a$ is the half-length of the contact patch within one wavelength of the profile $\lambda$, and elsewhere $p_W = 0$; $E$ and $\nu$ are Young's modulus and Poisson's ratio, respectively, and $E^* = E/(1-\nu^2)$ denotes the effective elastic modulus in plane strain. The mean pressure over the whole contact interface is given by
\begin{equation}
\bar{p}_W(a) = \frac{1}{\lambda}\int\limits_0^{\lambda} p_W(x, a) \; dx = p^* \sin^2 \frac{\pi a}{\lambda},
\label{eq::p_west_mean}
\end{equation}
where $p^* = \pi E^* \Delta / \lambda$ is the pressure necessary to bring the entire interface into contact. In static equilibrium $\bar{p}_W$ is equal to the value of the external pressure, which we will denote by $p_{0}$. Complete contact is ensured if $p_0 \geq p^*$.
By introducing the notations $A=2a$ and $A_0=\lambda$ for the real and apparent contact areas, respectively, the ratio of the real contact area to the apparent one, based on the Westergaard's solution, is given by:
\begin{equation}
\label{eq::area_west}
\frac{A}{A_0} = \frac{2a}{\lambda} = \frac{2}{\pi} \arcsin{\sqrt{\frac{p_0}{p^*}}}, \; 0\leq p_0\leq p^*.
\end{equation}
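As an illustration, Eqs.~(\ref{eq::p_west_mean}) and~(\ref{eq::area_west}) are mutual inverses, which can be checked numerically. A minimal Python sketch, with pressures normalized by $p^*$:

```python
import math

def contact_area_fraction(p0, p_star):
    """Westergaard: A/A0 = (2/pi) arcsin(sqrt(p0/p*)), valid for 0 <= p0 <= p*."""
    return (2.0 / math.pi) * math.asin(math.sqrt(p0 / p_star))

def mean_pressure(area_fraction, p_star):
    """Inverse relation: p0 = p* sin^2(pi a / lambda), with A/A0 = 2a/lambda."""
    return p_star * math.sin(math.pi * area_fraction / 2.0) ** 2

p_star = 1.0  # normalized full-contact pressure p* = pi E* Delta / lambda
for p0 in (0.1, 0.5, 0.9):
    frac = contact_area_fraction(p0, p_star)
    # The round trip recovers the applied mean pressure.
    assert abs(mean_pressure(frac, p_star) - p0) < 1e-12
```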
\subsection{Kuznetsov's solution}
\cite{Kuznetsov_1985} extended Westergaard's solution~(\ref{eq::p_west}) by taking into account a compressible fluid trapped in the valleys between contacting peaks of the wavy profile. Similarly, under the assumption of an infinitesimal slope of the profile\footnote{
As was mentioned earlier, the infinitesimal-slope assumption implies here that (i) the wavy surface behaves as a flat one and that Flamant's solution holds for every surface point, and (ii) that the horizontal component of the fluid pressure is negligible.}, the stress state in the contact interface in the presence of the additional fluid pressure, applied beyond the contact patches, was considered as the superposition of the stress state corresponding to the same contact area, but without influence of the fluid (i.e. the Westergaard's solution~(\ref{eq::p_west})), and a uniform field of the fluid pressure $p_f$, applied everywhere and assumed not to distort the surface profile:
\begin{equation}
p_K(x, a) = \begin{cases}
p_f(a) + p_W(x, a), &\quad \text{if } -a + \lambda n \leq x \leq a + \lambda n, \: n \in \mathbb{Z}\\
p_f(a), &\quad \text{elsewhere}.
\end{cases}
\label{eq::kuz_press_eq}
\end{equation}
Integration of $p_K(x, a)$ over one period of the waviness gives the following relation between the external pressure $p_0$ and the contact area: $p_0(a) = p_f(a) + \bar{p}_W(a)$, where $\bar{p}_W(a)$ was defined in~\eqref{eq::p_west_mean}.
The fluid pressure $p_f$ can be related to the current contact half-width $a$ using a model of the compressible fluid with a bulk modulus $K$, which is defined as the ratio of infinitesimal pressure increase to the relative decrease of the volume:
\begin{equation}
K = -V_f\frac{d p_f}{dV_f}.
\label{eq::comp_def}
\end{equation}
In the linear compressibility model the bulk modulus is a constant coefficient of proportionality between the relative change of the fluid volume and the fluid pressure~\cite{Kuznetsov_1985}:
\begin{equation}
p_f = K \left(1-\frac{V_f}{V_{f0}}\right),
\label{eq::comp_lin_model}
\end{equation}
where $V_{f0}$ is the volume of the fluid in the unpressurized state and a smaller volume $V_f$ corresponds to the fluid pressure $p_f$. However, the linear model of compressible fluid~(\ref{eq::comp_lin_model}) does not provide satisfactory results for most fluids used in real-life lubrication problems, since the bulk modulus $K$ depends significantly on the pressure $p_f$~\cite{Kuznetsov_1985}. The simplest model taking this dependence into account, and yet quite accurate for most lubricating fluids, is a compressibility modulus evolving linearly with pressure~\cite{Nellemann_1977, Kuznetsov_1985}:
\begin{equation}
K = K_0 + K_1 p_f,
\label{eq::comp_modulus_nonlin}
\end{equation}
where $K_0, K_1>0$ are model parameters. The linear dependence~(\ref{eq::comp_modulus_nonlin}), substituted into~(\ref{eq::comp_def}), upon integration results in the following non-linear relation between the fluid pressure and its volume:
\begin{equation}
p_f = \frac{K_0}{K_1} \left\{\left(\frac{V_f}{V_{f0}}\right)^{-K_1}-1\right\}.
\label{eq::comp_nonlin}
\end{equation}
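The closed form~(\ref{eq::comp_nonlin}) can be verified by integrating the definition~(\ref{eq::comp_def}) with $K = K_0 + K_1 p_f$ numerically. A short Python sketch with illustrative moduli (not calibrated to any real fluid):

```python
K0, K1 = 2.0, 10.0  # illustrative model parameters

def p_closed_form(v_ratio):
    """Pressure-volume relation (comp_nonlin): p_f = (K0/K1) ((V_f/V_f0)^{-K1} - 1)."""
    return (K0 / K1) * (v_ratio ** (-K1) - 1.0)

# Forward-Euler integration of dp_f = -(K0 + K1 p_f) dV_f / V_f
# from V_f/V_f0 = 1 (where p_f = 0) down to V_f/V_f0 = 0.8.
n = 200000
v, p = 1.0, 0.0
dv = (1.0 - 0.8) / n
for _ in range(n):
    p += (K0 + K1 * p) * dv / v  # dV_f = -dv, so the pressure increases
    v -= dv

# The numerical integral reproduces the closed-form expression.
assert abs(p - p_closed_form(0.8)) / p_closed_form(0.8) < 1e-3
```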
Finally, it can be noted that the volume of the pressurized fluid $V_f$ is equal to the volume of the gap between the contacting surfaces $V_g$, which can be found from the displacement field of the Westergaard's solution~\cite{Kuznetsov_1985} and related to the current contact half-width $a$:
\begin{equation}
V_g(a) = V_{g0} \left[1 - \sin^2 \frac{\pi a}{\lambda} \left( 1 - \ln \left\{\sin^2 \frac{\pi a}{\lambda} \right\}\right)\right],
\label{eq::v_v0}
\end{equation}
where $V_{g0} = l\:\lambda\:\Delta$ is the initial gap volume, i.e., the one corresponding to $a=0$.
We generalize the original results of~\cite{Kuznetsov_1985} and allow a partial filling of the initial gap by the fluid, so that $V_{f0} = \theta \: V_{g0}, \; 0 < \theta \leq 1$.
Therefore, if the current gap volume is larger than the initial fluid volume, $V_g > V_{f0}$, i.e. $V_g/V_{g0} > \theta$, then the fluid is not yet pressurized, and Westergaard's solution is valid: $p_{0}(a) = \frac{\pi E^* \Delta}{\lambda} \sin^2\frac{\pi a}{\lambda}$. If $V_g < V_{f0}$, or, equivalently, $V_g/V_{g0} <\theta$, the equation connecting the contact area and the external load takes the following form in the case of a linearly compressible fluid:
\begin{equation}
\label{eq::p_kuz_lin}
p_{0}(a) = \frac{\pi E^* \Delta}{\lambda} \sin^2\frac{\pi a}{\lambda} + \frac{K}{\theta} \left[\theta - 1 + \sin^2 \frac{\pi a}{\lambda} \left(1 - \ln\left\{\sin^2 \frac{\pi a}{\lambda}\right\}\right)\right], \quad\mbox{ if } V_g/V_{g0} <\theta,
\end{equation}
and in the case of non-linearly compressible fluid:
\begin{equation}
\label{eq::p_kuz_nonlin}
p_{0}(a) = \frac{\pi E^* \Delta}{\lambda} \sin^2\frac{\pi a}{\lambda} + \frac{K_0}{K_1}\left[\theta^{K_1}\left(1 - \sin^2 \frac{\pi a}{\lambda} \left( 1 - \ln \left\{\sin^2 \frac{\pi a}{\lambda} \right\}\right)\right)^{-K_1}-1\right], \quad\mbox{ if } V_g/V_{g0} <\theta.
\end{equation}
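For illustration, Eq.~(\ref{eq::p_kuz_lin}) together with~(\ref{eq::v_v0}) can be evaluated and inverted numerically. The following Python sketch (with illustrative, non-calibrated values of $E^*$, $K$ and $\theta$, and slope $\Delta/\lambda = 0.01$) exploits the fact that $p_0(a)$ is monotonically increasing:

```python
import math

E_star, Delta, lam = 1.0, 0.01, 1.0  # illustrative values, slope Delta/lam = 0.01
K, theta = 0.5, 0.9                  # linear bulk modulus and gap fill fraction

def gap_ratio(a):
    """V_g / V_g0 from the Westergaard displacement field, Eq. (v_v0)."""
    s = math.sin(math.pi * a / lam) ** 2
    return 1.0 - s * (1.0 - math.log(s)) if s > 0.0 else 1.0

def p0_of_a(a):
    """External pressure vs. contact half-width (Kuznetsov, linear fluid)."""
    s = math.sin(math.pi * a / lam) ** 2
    p = math.pi * E_star * Delta / lam * s          # Westergaard (dry) part
    if gap_ratio(a) < theta:                        # fluid active: add its share
        p += (K / theta) * (theta - 1.0 + s * (1.0 - math.log(s)))
    return p

def a_of_p0(p0, lo=1e-6, hi=0.5 - 1e-9):
    """Invert p0(a) by bisection; p0(a) increases monotonically on (0, lam/2)."""
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        lo, hi = (mid, hi) if p0_of_a(mid) < p0 else (lo, mid)
    return 0.5 * (lo + hi)
```

Since $p_0(a)$ is monotone, the inverted relation always yields a contact area growing with the load, which reflects the limitation of this solution.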
It is also important to note that Kuznetsov's solution, even for an arbitrarily large compressibility modulus of the fluid, shows growth of the contact patches under increasing load. Furthermore, in the limit of an incompressible fluid, $K \rightarrow \infty$, it gives a constant value of the real contact area, which can be found from the equation $V_g(a) = V_{f0}$. Consequently, Kuznetsov's solution, based on the assumption of an infinitesimal slope of the profile, cannot predict depletion of the real contact area and escape of the fluid from the trap, which we demonstrate in the following sections by dropping the assumption of infinitesimal slopes.
\section{Numerical methods\label{sec::methods}}
\subsection{Mechanical contact}
In the case of unilateral contact between a deformable body and a rigid flat with an outer normal $\boldsymbol{\nu}$, the motion of the body is constrained, which can be formalized upon introduction of the normal gap function $g$ -- the signed distance from points on the surface of the deformable body to the rigid plane:
\begin{itemize}
\item[] $g > 0$, when the point is separated from the plane,
\item[] $g < 0$, when the point penetrates the plane (which is forbidden),
\item[] $g = 0$, when the point is on the plane.
\end{itemize}
We will denote by $\Gamma$ the potential contact zone (the whole surface), by $\Gamma_c \subset \Gamma$ the active contact zone, where the normal surface traction $\sigma_n$ must be negative in non-adhesive contact, and by $\Gamma\setminus\Gamma_c$ the inactive zone, which is out of contact. The constraints governing the frictionless unilateral contact problem are known as the Hertz-Signorini-Moreau conditions~\cite{Wriggers_2006}:
\begin{equation}
\label{eq::contact_cond}
g \geq 0, \; \sigma_n \leq 0, \; \sigma_n \: g = 0 \quad \text{at} \; \Gamma \quad \Leftrightarrow \quad
\begin{cases}
g = 0, & \sigma_n < 0, \quad \text{at} \:\Gamma_c \\
g > 0, & \sigma_n = 0, \quad \text{at} \:\Gamma\setminus\Gamma_c.
\end{cases}
\end{equation}
Therefore the considered problem is the constrained minimization problem for the potential energy of the mechanical system $\Pi(\mathbf{u})$, where $\mathbf{u}$ is the displacement field.
This problem can be solved using the Lagrange multipliers method~\cite{Kikuchi_1988,Wriggers_2006}, with the Lagrangian functional defined as:
\begin{equation}
\label{eq::Lagrangian_contact}
\mathcal{L}(\mathbf{u}, \lambda_c) = \Pi(\mathbf{u}) + \int\limits_{\Gamma_c} \lambda_c \, g(\mathbf{u}) \: d\Gamma_c,
\end{equation}
where $\lambda_c \leq 0$ is the Lagrange multiplier function, the values of which are equivalent to the normal traction in the contact zone.
\begin{figure}[h]
\centering
\begin{subfigure}{0.49\textwidth}
\centering
\includegraphics[width=0.99\textwidth]{contact_initial.pdf}
\caption{ }
\label{fig::inactive_contact}
\end{subfigure}\hfill
\begin{subfigure}{0.49\textwidth}
\centering
\includegraphics[width=0.99\textwidth]{contact_actual.pdf}
\caption{ }
\label{fig::active_contact}
\end{subfigure}
\caption{(a) Reference configuration: $\boldsymbol{X} \in \Omega^0$. (b) Actual configuration: $\boldsymbol{x} \in \Omega$, $p_0$ is the external pressure.}
\end{figure}
In the case of frictional contact, along with the Hertz-Signorini-Moreau conditions~(\ref{eq::contact_cond}), additional frictional constraints must be included in the problem, such as Coulomb's law of friction, which defines the following possible active contact states:
\begin{itemize}
\item Stick: $\vec{\dot g}_t = 0, \; |\vec\sigma_t| < \mu\: |\sigma_n|$;
\item Slip: $\vec\sigma_t = \mu \: |\sigma_n|\; \vec{\dot g}_t/|\vec{\dot g}_t|$;
\end{itemize}
where $\vec{\dot g}_t$ is the sliding velocity in the tangential plane between the corresponding points of the two surfaces, $\vec{\sigma}_t$ is the tangential contact traction and $\mu$ is the coefficient of friction (CoF).
In order to include frictional constraints in the constrained minimization problem formulated above, special methods must be used, such as the penalty method (combined with the return mapping algorithm) or the augmented Lagrangian method; for details see~\cite{Wriggers_2006,Yastrebov_2013}.
\subsection{Trapped fluid constraints}
\subsubsection{Geometrical constraint for incompressible fluid}
The volume of the gap between the contacting surfaces $V_g$ in the presence of a trapped incompressible fluid of volume $V_f$ must satisfy the following geometrical constraint:
\begin{equation}
\label{eq::gap_volume_integral}
V_g \geq V_f = \text{const}, \quad V_g(\boldsymbol{X}+\mathbf{u}) = \int\limits_{\widetilde{\Gamma}_f}g(\boldsymbol{X}+\mathbf{u}) \: d\widetilde{\Gamma}_f,
\end{equation}
where $\Gamma_f = \Gamma \setminus \Gamma_c$ and $\widetilde{\Gamma}_f$ is the projection of $\Gamma_f$ on the rigid plane. The trapped fluid may fill the gap between the contacting surfaces completely or partially, and therefore it can be present in two different states: ``inactive'', when $V_f < V_g$ and the fluid is not pressurized ($p_f = 0$), and ``active'', when $V_f = V_g$ and the pressure in the fluid is $p_f>0$, see Fig.~\ref{fig::inactive},~\ref{fig::active}. We may formulate these two states in a way similar to the Hertz-Signorini-Moreau conditions:
\begin{equation}
\label{eq::fluid_cond}
V_g \geq V_f, \quad p_f \geq 0, \quad p_f \: (V_g - V_f) = 0 \Leftrightarrow
\begin{cases}
V_g = V_f, & p_f > 0, \quad \text{(active state)}\\
V_g > V_f, & p_f = 0, \quad \text{(inactive state)}.
\end{cases}
\end{equation}
\subsubsection{Simulation of incompressible fluid using a Lagrange multiplier}
In the case of the inactive state of the trapped fluid we have only the mechanical contact problem between the elastic body and the rigid plane, while if the fluid is in the active state, we must consider additionally the gap volume constraint~(\ref{eq::gap_volume_integral}). The Lagrange multiplier method may be used again in order to fulfil this constraint, and the combined functional for the coupled problem can be defined as:
\begin{equation}
\label{eq::Lagrangian_coupled}
\mathcal{L}(\mathbf{u}, \lambda_c, \lambda_f) = \Pi(\mathbf{u}) + \int\limits_{\Gamma_c} \lambda_c \, g(\mathbf{u}) \: d\Gamma_c - \lambda_f (V_g(\mathbf{u}) - V_f),
\end{equation}
where $\lambda_f \geq 0$ is the Lagrange multiplier for the trapped fluid problem, which is equivalent to the fluid pressure $p_f$. The solution of the coupled problem is a stationary point of the Lagrangian~(\ref{eq::Lagrangian_coupled}), which requires the calculation of its variation:
\begin{align}
\label{eq::Lagrangian_variation}
\delta \mathcal{L}(\mathbf{u}, \lambda_c, \lambda_f) =& \frac{\partial \Pi(\mathbf{u})}{\partial\mathbf{u}} \delta \mathbf{u} + \int\limits_{\Gamma_c} \left[\delta \lambda_c \; g(\mathbf{u}) + \lambda_c \; \frac{\partial g(\mathbf{u})}{\partial\mathbf{u}} \delta \mathbf{u}\right] \; d\Gamma_c \nonumber \\
-&\left[\delta \lambda_f \; (V_g(\mathbf{u}) - V_f) + \lambda_f \; \frac{\partial V_g(\mathbf{u})}{\partial\mathbf{u}} \delta \mathbf{u} \right]= 0.
\end{align}
\begin{figure}[h]
\centering
\begin{subfigure}{0.49\textwidth}
\centering
\includegraphics[width=0.99\textwidth]{fluid_initial.pdf}
\caption{ }
\label{fig::inactive}
\end{subfigure}\hfill
\begin{subfigure}{0.49\textwidth}
\centering
\includegraphics[width=0.99\textwidth]{fluid_actual.pdf}
\caption{ }
\label{fig::active}
\end{subfigure}
\caption{(a) Trapped fluid is in inactive state. (b) Trapped fluid is in active state, $p_f$ is the fluid pressure.}
\end{figure}
\subsubsection{Simulation of the compressible fluid with the penalty method}
The geometrical constraint~(\ref{eq::gap_volume_integral}) for the trapped fluid can also be treated with the penalty method. In accordance with the linear penalty method, instead of the term $\lambda_f (V_g(\mathbf{u}) - V_f)$, the following term should be added in~(\ref{eq::Lagrangian_coupled}) to take into account the trapped fluid constraint:
\begin{equation}
W_f(\mathbf{u}) = \frac{\epsilon}{2} \: \left(V_{f0}-V_g(\mathbf{u}) \right)^2,
\label{eq::penalty_func}
\end{equation}
if the fluid is in the active state ($V_g < V_{f0}$), and zero otherwise. In the above formula $\epsilon$ is the penalty parameter, and $V_{f0}$ is the initial volume of the fluid.
Let us assume that the fluid is in the active state. Calculating the variation of~(\ref{eq::penalty_func}), we obtain the contribution of the trapped fluid to the balance of virtual works:
\begin{equation}
\delta W_f(\mathbf{u}) = -\epsilon \: (V_{f0}-V_g(\mathbf{u})) \frac{\partial V_g(\mathbf{u})}{\partial \mathbf{u}} \delta \mathbf{u},
\label{eq::penalty_func_var}
\end{equation}
where the term $\epsilon \: (V_{f0}-V_g(\mathbf{u}))$ equals the fluid pressure $p_f$. Under the penalty formulation the gap volume constraint~(\ref{eq::gap_volume_integral}) is never satisfied exactly, i.e. the current volume of the active fluid $V_g(\mathbf{u})$ is always smaller than the initial fluid volume $V_{f0}$. Therefore, the penalty method corresponds to a model of compressible fluid, and a comparison between~(\ref{eq::comp_lin_model}) and~(\ref{eq::penalty_func_var}) shows that the linear penalty method represents the compressibility model with the constant bulk modulus $K$, if $\epsilon = K / V_{f0}$.
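The stated equivalence can be checked directly: with $\epsilon = K/V_{f0}$, the penalty pressure coincides with the linear compressibility law~(\ref{eq::comp_lin_model}). A trivial Python sketch with illustrative numbers:

```python
K, V_f0 = 1.5, 0.01  # illustrative bulk modulus and initial fluid volume
eps = K / V_f0       # penalty parameter reproducing the bulk modulus K

for V_g in (0.009, 0.005, 0.001):        # active state: V_g < V_f0
    p_penalty = eps * (V_f0 - V_g)       # pressure from the linear penalty term
    p_linear = K * (1.0 - V_g / V_f0)    # linear compressibility with V_f = V_g
    assert abs(p_penalty - p_linear) < 1e-12
```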
In order to simulate the behaviour of the compressible fluid with pressure-dependent bulk modulus~\eqref{eq::comp_modulus_nonlin}-\eqref{eq::comp_nonlin}, the \textit{non-linear penalty} method for the trapped fluid constraint~(\ref{eq::gap_volume_integral}) may be used. The contribution of the fluid to the balance of virtual works in this case takes the form:
\begin{equation}
\delta W_f = -\frac{K_0}{K_1}\left\{\left(\frac{V_g(\mathbf{u})}{V_{f0}}\right)^{-K_1}-1\right\}\frac{\partial V_g(\mathbf{u})}{\partial \mathbf{u}} \delta \mathbf{u}.
\label{eq::penalty_nonlin_var}
\end{equation}
\section{Results and discussion\label{sec::results}}
We solved the coupled problem using the finite element method with a monolithic coupling scheme implemented in the finite element suite \textit{Z-set}~\cite{Besson_1997,Zset}.
Contrary to Kuznetsov's analytical results or BEM analyses, we did not assume infinitesimal slopes, i.e. the value of $\Delta / \lambda$ is arbitrary. We used a finite element mesh with $1024$ nodes in the contact interface per wavelength ($19364$ nodes in total in the structural mesh), see Fig.~\ref{fig::mesh}. Hereinafter, if not mentioned differently, we considered the roughness profile with $\Delta / \lambda = 0.01$; in the following, we will also discuss how this ratio affects the results. The horizontal dimension of the finite element mesh equals the wavelength $\lambda$, and the ratio of the profile amplitude $\Delta$ to the vertical mesh dimension $H$ is $\Delta / H = 0.005$. On the vertical boundaries of the mesh we apply symmetry boundary conditions ($u_x = 0$), and the bottom edge of the deformable solid is displaced vertically towards the rigid flat within 200 load steps. A corotational updated Lagrangian framework was used in our simulations, which is needed to capture properly that the fluid pressure, applied to the updated configuration, remains collinear with the element normals.
In the simulations we measure the vertical reaction, the extension of the contact area, the pressure in the contact zone and the fluid pressure.
\begin{figure}[h]
\centering
\includegraphics[width=0.4\textwidth]{mesh.png}
\caption{FEM mesh}
\label{fig::mesh}
\end{figure}
Hereinafter, if not mentioned differently, we performed frictionless simulations, and estimated the value of the global coefficient of friction using the following approach.
We consider the global coefficient of friction as the coefficient of proportionality between the maximal tangential force per wavelength $F_t$ and the normal one $F_n = p_0\lambda$, i.e. $|F_t| \le \mu_{\mbox{\tiny glob}} |F_n|$. The local Coulomb coefficient of friction enters the following inequality: $|\sigma_t| \leq \mu_{\mbox{\tiny loc}} |\sigma_n|$, where $\sigma_t$ and $\sigma_n$ are the tangential and normal components of the traction vector in the contact interface, respectively.
We neglect shear forces in the fluid and therefore the ratio between the global and local coefficients of friction can be calculated as:
\begin{equation}
\frac{\mu_{\mbox{\tiny glob}}}{\mu_{\mbox{\tiny loc}}} = \int\limits_{\Gamma_c}\left.\vphantom{A^A_A}|\sigma_n| \; d\Gamma_c \right/|F_n| =
\int\limits_{\Gamma_c}\left.\vphantom{A^A_A}|\sigma_n| \; d\Gamma_c \right/p_0 \lambda.
\label{eq::est}
\end{equation}
Finally, using the notation introduced above for the real ($A$) and apparent ($A_0$) contact areas, Eq.~\eqref{eq::est} can be rewritten as:
\begin{equation}
\frac{\mu_{\mbox{\tiny glob}}}{\mu_{\mbox{\tiny loc}}} = 1 - \frac{p_f}{p_0}\left(1-\frac{A}{A_0}\right).
\label{eq::est3}
\end{equation}
\subsection{Incompressible fluid\label{sec::results_incomp}}
In this section we study the model of an incompressible fluid trapped in the contact interface. Note that real-life lubricating fluids have significantly lower initial bulk moduli than metals. Nevertheless, this idealized model enables us to focus on the mechanism of the trap opening by the pressurized fluid, while compressible fluids will be considered in the following sections.
\begin{figure}[h]
\centering
\includegraphics[width=1.0\textwidth]{incomp_2.pdf}
\caption{
(a) The evolution of the real contact area in the vicinity of the ``activation'' point of the incompressible fluid with respect to the external pressure normalized by $E^*$ for three profiles with different slopes $\Delta/\lambda$ and three cases with different ratios of the fluid volume to the initial gap volume $V_f/V_{g0}$. (b) The evolution of the real contact area until the complete opening of the trap for the case $V_f/V_{g0} = 0.9$ shown for different slopes $\Delta/\lambda$.
\label{fig::incomp_2}
}
\end{figure}
\begin{figure}[h]
\centering
\includegraphics[width=1.0\textwidth]{aux_vy.pdf}
\caption{(a) Sketch of the auxiliary problem:
deformation of the wavy surface under uniform normal load. (b) Evolution of the ratio $V_f/V_{g0}$ ($V_f$ is the volume between the deformed surface and a horizontal plane $y=y_0$, where $y_0$ is the current position of the crest, and $V_{g0}$ is the initial volume of the gap) with the increasing external pressure $p_0$ for several profiles with different slope $\Delta/\lambda$.}
\label{fig::aux}
\end{figure}
\begin{figure}[p]
\centering
\includegraphics[width=0.5\textwidth]{el_maps.pdf}
\caption{Stress and strain components in the bulk of the deformable solid during the process of trap opening due to the increasing pressure in the fluid.
Top to bottom: vertical stress component $\sigma_{yy}$, von Mises stress $\sigma_{vM}$, hydrostatic stress $p$, horizontal strain component $\varepsilon_{xx}$ and the vertical one
$\varepsilon_{yy}$. Three loading steps are considered, corresponding to, left to right: maximal contact area (activation of the fluid), half of the contact opened, contact area
is zero (trap is opened). The considered elastic material is typical aluminium ($E = 70 \text{ GPa}, \nu = 0.33$), the fluid is assumed incompressible.
\label{fig::el_map}
}
\end{figure}
\begin{figure}[p]
\centering
\includegraphics[width=0.99\textwidth]{incomp.pdf}
\caption{(a) Real contact area evolution during opening of the contact, caused by pressurized incompressible trapped fluid (with respect to the external pressure normalized by $E^*$)
(b) Contact normal force evolution during opening of the contact (with respect to the external pressure, normalized by $E^*$). (c) Evolution of the ratio between global and local
coefficients of friction, and (d) a zoom of this evolution for $V_f/V_{g0} = 0.9$, where, in addition, the results of \emph{frictional} simulations are plotted (crosses), as well as analytical approximations given by~\eqref{eq::cof_kuz} (dashed curves).}
\label{fig::incomp}
\end{figure}
We study the evolution of the real contact area in the presence of an incompressible fluid in the interface under increasing external pressure using the Lagrange multiplier method. We investigate how the slope of the profile ($\Delta / \lambda$) and the ratio between the trapped fluid volume and the initial gap volume $V_f / V_{g0}$ affect the solution of the coupled problem. The distribution of some stress and strain components in the bulk of the deformable solid during the process of trap opening is shown in Fig.~\ref{fig::el_map}.
The evolution of the contact area close to the moment of activation of the fluid is presented in Fig.~\ref{fig::incomp_2}(a). The regime in which the fluid is not yet pressurized ($V_g>V_f$) coincides with Westergaard's equation~\eqref{eq::p_west}. According to this analytical solution, the ratio of the current gap volume to the initial one, $V_g/V_{g0}$, is a monotonically decreasing function of the contact area and does not depend on the slope of the profile $\Delta/\lambda$, see~\eqref{eq::v_v0}. Therefore, the contact area $A_\text{act}$ reached when the fluid gets pressurized ($V_g = V_f$) does not depend on the slope of the profile and increases with decreasing $V_f / V_{g0}$. For a given $\Delta/\lambda$, the pressure $p_{\text{act}}$ necessary to activate the fluid also increases with decreasing $V_f / V_{g0}$. At the same time, for a given $V_f / V_{g0}$, the value of $p_{\text{act}}$ is proportional to the slope $\Delta/\lambda$.
One can note in Fig.~\ref{fig::incomp_2}(a) that once the fluid is pressurized, the contact area slowly decreases, contrary to Kuznetsov's solution, which predicts that the contact area remains constant. In Fig.~\ref{fig::incomp_2}(b) we show the evolution of the contact area over a much wider range of loads than in Fig.~\ref{fig::incomp_2}(a) and observe a monotonic decrease of the contact area until it ultimately reaches zero, which corresponds to the opening of the trap. Surprisingly, the results of simulations with decreasing profile slope $\Delta/\lambda$ do not tend to Kuznetsov's solution (derived under the assumption of an infinitesimal $\Delta/\lambda$, i.e. that the wavy profile deforms like a flat one), but converge to a different limit! At the same time we observe that the external pressure $p_\text{open}$ necessary to open the trap also converges to a certain limit as $\Delta/\lambda\to0$.
In order to explain this intriguing result, we first note that, since the solution of the linearly elastic problem with and without contact is unique, the displacement field at the moment of trap opening with $p_0 = p_\text{open}$ must be equal (up to a rigid body motion) to the one corresponding to a distributed hydrostatic pressure $p_f = p_\text{open}$ over the whole interface. Let us consider the auxiliary problem of a uniform hydrostatic fluid pressure acting on the wavy profile, see Fig.~\ref{fig::aux}(a). Kuznetsov's solution is based on the assumption that a uniform distribution of hydrostatic pressure does not distort the wavy surface~\cite{Kuznetsov_1985}. Our numerical simulations show that for small but finite $\Delta/\lambda$ this assumption does not hold and the wavy surface distorts: the displacement of the crest is larger than that of the trough, which is a rather intuitive result.
Due to the non-zero slope of the contact interface, the fluid pressure acts not only in the vertical direction but also in the horizontal one, thus leading to an additional in-plane compression of the material near the crest and, conversely, to an additional in-plane tensile contribution near the trough, see Fig.~\ref{fig::aux}(a). Thus, there exists a linearly elastic solution for a uniformly distributed pressure $p_f$ which results in such a surface deformation that the integral of the gap equals the fluid volume $V_f$, i.e.:
\begin{equation}
\exists\, p_f \quad \text{such that} \quad \int\limits_{\Gamma} \left(y_0 - (X^y + u^y)\right)d\Gamma = V_f,
\end{equation}
where $y_0$ is the position of the crest after applying the uniform pressure $p_f$.
We derived an analytical formula for the computation of $V_f$, based on the assumption of a small but finite $\Delta/\lambda$:
\begin{equation}
V_f/V_{g0} = 1 - \frac{2(1-2\nu)(1+\nu)p_f}{E},
\label{eq:open_vol}
\end{equation}
see~\ref{app:aux} for details. The relative change of volume induced by a uniformly applied pressure $p_f$ does not depend on the value $\Delta/\lambda$, but only on the elastic properties of the solid.
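As an illustration (a sketch, not the paper's code; the function names are ours), Eq.~\eqref{eq:open_vol} can be inverted to estimate the trap-opening pressure for a given fluid filling ratio:

```python
def volume_ratio(p_f, E, nu):
    """Eq. (open_vol): V_f/V_g0 produced by a uniform pressure p_f."""
    return 1.0 - 2.0 * (1.0 - 2.0 * nu) * (1.0 + nu) * p_f / E

def opening_pressure(vf_over_vg0, E, nu):
    """Inverse of Eq. (open_vol): the uniform pressure opening the trap."""
    return E * (1.0 - vf_over_vg0) / (2.0 * (1.0 - 2.0 * nu) * (1.0 + nu))

# Aluminium-like solid (E = 70 GPa, nu = 0.33), fluid filling 90% of the gap:
p_open = opening_pressure(0.9, 70e9, 0.33)
# Round trip: this pressure reproduces exactly the prescribed volume ratio.
assert abs(volume_ratio(p_open, 70e9, 0.33) - 0.9) < 1e-12
```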
In Fig.~\ref{fig::aux}(b) this formula is compared with the numerical results for several profiles with different $\Delta/\lambda$. The numerical results tend towards the analytical solution with decreasing $\Delta/\lambda$. Therefore, we have shown that for any given $V_f/V_{g0}$ there exists a uniform pressure $p_f$ which results in such a distortion of the surface that the volume between the surface and the rigid flat equals $V_f$. Moreover, in the limit of infinitesimal slopes, this critical pressure does not depend on the slope.
The obtained result explains why the curves of the evolution of the real contact area with increasing pressure tend to a certain limit with decreasing (but finite) $\Delta/\lambda$, see Fig.~\ref{fig::incomp_2}(b), while $p_\text{open}$ is different for different $V_f/V_{g0}$, see Fig.~\ref{fig::incomp}(a). Equation~\eqref{eq:open_vol} can be readily used to compute the pressure needed to open the trap; it is valid for both incompressible and compressible fluids.
We present in Fig.~\ref{fig::incomp}(b) the evolution of the integral contact pressure, i.e. the numerator in~\eqref{eq::est}, for different values of $\Delta/\lambda$ and $V_f / V_{g0}$. The results show that, just after the fluid becomes pressurized, the integral of the contact pressure grows almost linearly, following the linear dependence of the contact reaction on the external pressure $p_0$ provided by Kuznetsov's solution in the limit $K\rightarrow \infty$:
\begin{equation}
\label{eq::react_kuz}
\frac{1}{E^* \lambda}\int_{\Gamma_c}|\sigma_n| \; d\Gamma_c = \frac{p_0}{E^*} \frac{A_\text{act}}{A_0} + \pi \left(1 - \frac{A_\text{act}}{A_0}\right) \frac{\Delta}{\lambda} \sin^2{\frac{\pi}{2}\frac{A_\text{act}}{A_0}},
\end{equation}
where, contrary to the numerical results, it is assumed that $A_\text{act}$ remains constant under increasing external pressure $p_0$.
However, because a finite slope of the profile is considered in the numerical solution, the linear part of the dependence of the contact reaction on the external pressure is followed by a non-linear concave part, which reaches a maximum and then decreases to zero. Consequently, the global coefficient of friction also vanishes. The results of the estimation of the ratio between the global and local coefficients of friction are presented in Fig.~\ref{fig::incomp}(c).
Before the fluid gets pressurized, the global CoF equals the local one. After that, the global CoF monotonically decreases with increasing external pressure $p_0$. This decrease is related to the redistribution of the external load between the contact and the fluid; the latter is assumed not to resist shear in the quasi-static limit. Note that for high values of $p_0$, i.e. close to the opening of the trap, the evolution of the global CoF is independent of the slope $\Delta/\lambda$ and depends only on the ratio $V_f/V_{g0}$. On the other hand, for low values of $p_0$ slightly above the activation pressure (see Fig.~\ref{fig::incomp}(d)), the analytical approximation under the assumption of infinite $K$ shows the global CoF decreasing as $1/p_0$:
\begin{equation}
\label{eq::cof_kuz}
\frac{\mu_{\mbox{\tiny glob}}}{\mu_{\mbox{\tiny loc}}} = \frac{A_\text{act}}{A_0} + \pi \left(1 - \frac{A_\text{act}}{A_0}\right) \frac{\Delta}{\lambda} \frac{E^*}{p_0} \sin^2{\frac{\pi}{2}\frac{A_\text{act}}{A_0}}.
\end{equation}
Note that the term containing $1/p_0$ is proportional to the ratio $\Delta/\lambda$.
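For illustration (our own sketch, not from the paper's code), the asymptotic expressions~\eqref{eq::react_kuz} and~\eqref{eq::cof_kuz} can be checked for mutual consistency: dividing the dimensionless contact reaction by $p_0/E^*$ must reproduce the CoF ratio.

```python
import math

def contact_reaction(p0, e_star, delta_over_lam, a):
    """Eq. (react_kuz): (1/(E* lambda)) * integral of |sigma_n| in the
    limit K -> infinity; here a = A_act/A0."""
    s2 = math.sin(0.5 * math.pi * a) ** 2
    return (p0 / e_star) * a + math.pi * (1.0 - a) * delta_over_lam * s2

def cof_ratio_asymptotic(p0, e_star, delta_over_lam, a):
    """Eq. (cof_kuz): mu_glob/mu_loc in the same limit."""
    s2 = math.sin(0.5 * math.pi * a) ** 2
    return a + math.pi * (1.0 - a) * delta_over_lam * (e_star / p0) * s2

# Consistency check: cof = reaction * E*/p0, term by term.
p0, e_star, slope, a = 0.02, 1.0, 0.01, 0.4
r = contact_reaction(p0, e_star, slope, a)
assert abs(r * e_star / p0 - cof_ratio_asymptotic(p0, e_star, slope, a)) < 1e-12
```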
In addition to the estimations of the global coefficient of friction~\eqref{eq::est}-\eqref{eq::est3}, based on the frictionless simulation of the coupled problem under normal loading, we performed a direct computation of $\mu_{\text{glob}} = |F_t| / |F_n|$ in a frictional simulation of the coupled problem during sliding under normal and tangential loads. Note that in the latter simulation we use the augmented Lagrangian method for both normal and frictional contact constraints and the classical Lagrange multiplier method for the fluid constraint. The comparison of the results is presented in Fig.~\ref{fig::incomp}(d) for the case $V_f/V_{g0} = 0.9$ and different ratios $\Delta/\lambda$: the analytical asymptotic solution~\eqref{eq::cof_kuz} is shown with dashed curves, the estimations based on the frictionless simulations are shown as solid curves, and the results computed with friction in the interface are shown as crosses for a few particular values of the external pressure $p_0$. This comparison shows that the frictionless result, based on the assumption that the tangential and normal contributions in the interface can be treated separately~\cite{Johnson_1985}, provides a trustworthy estimation of the global coefficient of friction.
Note that these considerations can be applied to multi-cracked materials such as rocks with fluid in the contact interfaces.
The irreversible deformation in rocks is related to frictional sliding at crack interfaces,
which starts after the mean shear traction $\langle\sigma_t\rangle$ in the interface reaches the frictional limit determined by the coefficient of friction and the contact pressure, $\mu_{\mbox{\tiny glob}}\langle\sigma_n\rangle$. Homogenized over all randomly oriented cracks, these considerations give rise to a Drucker-Prager-type constitutive behaviour with the initial yield surface given by $f = \sigma_{vm} + \mu_{\mbox{\tiny glob}} p - R_0$, where $\sigma_{vm}$ is the von Mises stress, $p = -\mathrm{trace}(\vec \sigma)/3$ is the hydrostatic pressure and $R_0$ is the initial yield stress in pure shear.
Because of the presence of an incompressible fluid in the interface, the frictional limit does not increase linearly (or, equivalently, the global coefficient of friction does not remain constant), but reaches its maximum and then decreases down to zero, as shown in Fig.~\ref{fig::incomp}(b). This behaviour is very similar to advanced pressure-dependent plasticity models with a so-called cap, which corresponds to the decay of the von Mises yield stress with increasing pressure~\cite{resende1985formulation}.
However, contrary to the pore-collapse mechanism~\cite{suarez1990indentation,perrin1993rudnicki,issen2000conditions}, here this decay results from the decrease of the global friction with the hydrostatic pressure in the presence of the fluid; this result also holds for non-linearly compressible fluids.
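A minimal sketch (our notation; the numerical values are hypothetical) of the resulting Drucker-Prager-type yield check:

```python
def yield_function(sigma_vm, p, mu_glob, R0):
    """Initial yield surface f = sigma_vm + mu_glob * p - R0
    (frictional sliding starts when f >= 0)."""
    return sigma_vm + mu_glob * p - R0

# Pure shear (p = 0): yielding starts exactly at sigma_vm = R0.
assert yield_function(100.0, 0.0, 0.3, 100.0) == 0.0
# Under hydrostatic compression (p > 0) with constant mu_glob, a lower
# von Mises stress already triggers sliding.
assert yield_function(80.0, 100.0, 0.3, 100.0) > 0.0
```

A pressure-dependent $\mu_{\mbox{\tiny glob}}(p)$, as computed in this section, would replace the constant `mu_glob` and produce the cap-like decay of the yield surface discussed above.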
\subsection{Compressible fluid with constant bulk modulus\label{sec::results_comp_lin}}
Here our analysis is extended to the case of compressible fluids. In Fig.~\ref{fig::comp_lin}(a) we present the comparison of the numerical simulation of a linearly compressible trapped fluid under the linear penalty formulation with the analytical solution \eqref{eq::p_kuz_lin}. We plot the evolution of the ratio of the real contact area to the apparent one under increasing external pressure for the case when the fluid occupies 70\% of the initial gap, i.e. $V_{f0}/V_{g0} = 0.7$. Different curves correspond to different values of the fluid bulk modulus $K_f$, normalized by the bulk modulus of the solid $K_s = E / \left[3(1 - 2\nu)\right]$; for each numerical result a
corresponding analytical curve is presented for comparison.
\begin{figure}[h]
\centering
\includegraphics[width=0.99\textwidth]{comp_lin.pdf}
\caption{(a) Evolution of the ratio of the real contact area to the apparent one under increasing external pressure $p_0$: comparison of numerical and analytical results for different values of the fluid modulus of compressibility, normalized by the bulk modulus of the solid $K_f/K_s$; $\Delta/\lambda = 0.01, \: V_{f0}/V_{g0} = 0.7$.
(b) Distribution of the normal pressure near the contact patch under the increasing external load $p_0$. Solid lines are the results of the numerical simulation and dashed lines correspond to the analytical solution under the same external pressure, $\Delta/\lambda = 0.01, \: V_{f0}/V_{g0} = 0.9$, $K_f / K_s = 6 \cdot 10^4$.}
\label{fig::comp_lin}
\end{figure}
Before the pressurization of the fluid, its presence does not affect the solution and all curves follow Westergaard's solution~\eqref{eq::p_west}. For the pressurized fluid, the results show a good agreement between the numerical and analytical solutions for $K_f / K_s \ll 1$, and for $K_f \approx 0$ the solution coincides completely with Westergaard's formula.
However, with increasing $K_f$, in the region corresponding to the active fluid, the difference between the numerical and analytical solutions becomes more pronounced.
For a ratio $K_f/K_s$ close to unity, the numerical results show an almost constant value of the real contact area under increasing load. Note that the same result holds for an incompressible fluid trapped in the interface between two incompressible solids.
For even greater $K_f/K_s$, the numerical results show a decrease of the real contact area, which means that the pressurized fluid starts to open the contact.
Due to its inherent assumption of infinitesimal slopes, the analytical solution cannot predict these effects.
In Fig.~\ref{fig::comp_lin}(a) the results were presented for $V_f/V_{g0} = 0.7$; note that the smaller this ratio, the higher the pressure necessary to bring the fluid into the active state and the larger the corresponding contact area. However, after the fluid becomes pressurized, for sufficiently high values of the external pressure, the evolution of the contact area is governed only by the bulk modulus of the fluid and the mean slope of the profile: the larger the bulk modulus or the slope, the smaller the contact area for the same external pressure.
To emphasize the difference between the analytical and numerical solutions for a nearly incompressible fluid, we plot the pressure distribution near a contact patch under the increasing load for both solutions, see Fig.~\ref{fig::comp_lin}(b).
The representation of the stress state in the contact patches as a superposition of the stress state for the same contact area without the fluid and a uniform fluid pressure~(\ref{eq::kuz_press_eq}) still holds for the numerical solution; however, unlike in the analytical solution, a significant reduction of the contact area is observed in our results for a nearly incompressible fluid.
Note that in our numerical solution, for sufficiently high external pressure, the real contact area vanishes, which means that the fluid separates the contacting surfaces everywhere and the external pressure is entirely supported by the fluid, whose pressure equals the external one, $p_f=p_0$.
\subsection{Compressible fluid with pressure-dependent bulk modulus\label{sec:results_comp_nonlin}}
\begin{figure}[p]
\centering
\includegraphics[width=0.99\textwidth]{comp_nonlin_incomp_el_pl.pdf}
\caption{
Evolution of (a) the ratio of real contact area to the apparent one, (b) the ratio between global and local coefficients of friction under increasing external pressure
for two elastic solids representing steel and aluminium, and non-linearly compressible fluids representing water, glycerine and oil.
The dashed curves correspond to the analytical solution given by~\eqref{eq::p_kuz_nonlin}.
Evolution of (c) the ratio of real contact area to the apparent one $A/A_0$, and of (d) the global to local coefficients of friction under increasing external pressure in the case of elastic-perfectly plastic solid and incompressible fluid. Note that in the initial configuration the fluid does not occupy the entire gap $V_f/V_{g0} = 0.9$. Dashed curves are presented for comparison with the cases of purely elastic solids, discussed in Sec.~\ref{sec::results_incomp}. Vertical dash-dotted line indicates the hardness taken to be $H = 3\sigma_Y$.}
\label{fig::comp_nonlin_incomp_el_pl}
\end{figure}
As was shown in Fig.~\ref{fig::comp_lin}(a) for the case of a linearly compressible fluid (with constant bulk modulus), starting from the pressurization of the fluid the real contact area evolves monotonically with the external pressure: if the fluid bulk modulus is less than that of the solid ($K_f < K_s$), the real contact area increases up to the full contact state; if $K_f > K_s$, the contact area decreases down to zero, corresponding to the opening of the trap. The latter case is interesting for the study of the process of fluid permeation into the contact zone and the reduction of the global coefficient of friction; however, as mentioned in Sec.~\ref{sec::results_incomp} for the incompressible fluid, the situation in which the initial fluid bulk modulus is greater than that of the solid is non-physical and serves as an idealized model.
On the other hand, real fluids behave non-linearly and their bulk modulus increases with increasing pressure; thus, even if the fluid bulk modulus is smaller than that of the solid in the first stage of pressurization, it eventually becomes greater than that of the solid under increasing pressure.
We present the results of the numerical simulation of the coupled problem with non-linear fluids, namely the evolution of the contact area and of the global coefficient of friction with respect to the increasing external pressure, see Figs.~\ref{fig::comp_nonlin_incomp_el_pl}(a),(b), respectively. Physically relevant values for two solid materials are used: a typical steel ($E = 200 \text{ GPa}, \nu = 0.28, K_s \approx 151.5 \text{ GPa}$) and aluminium ($E = 70 \text{ GPa}, \nu = 0.33, K_s \approx 68.6 \text{ GPa}$), and three types of fluid (see Eq.~\eqref{eq::comp_modulus_nonlin}): water ($K_0 = 2112.5 \text{ MPa}, K_1 = 6.5$), glycerine ($K_0 = 4151.5 \text{ MPa}, K_1 = 8.74$) and a typical mineral oil ($K_0 = 2000.0 \text{ MPa}, K_1 = 9.25$)~\cite{Kuznetsov_1985,Nellemann_1977}.
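To illustrate the crossover discussed above (a sketch with our own function names; the fluid constants are those listed in the text), the pressure at which a non-linear fluid with $K_f = K_0 + K_1 p_f$ becomes stiffer than the solid follows from $K_f(p^*) = K_s$:

```python
def fluid_bulk_modulus(p_f, K0, K1):
    """Pressure-dependent fluid bulk modulus K_f = K0 + K1 * p_f."""
    return K0 + K1 * p_f

def crossover_pressure(K0, K1, Ks):
    """Fluid pressure p* at which K_f(p*) = K_s."""
    return (Ks - K0) / K1

# Water (K0 = 2112.5 MPa, K1 = 6.5) against aluminium,
# whose bulk modulus is K_s = E / [3(1 - 2*nu)] with E = 70 GPa, nu = 0.33.
Ks_al = 70e3 / (3.0 * (1.0 - 2.0 * 0.33))  # in MPa
p_star = crossover_pressure(2112.5, 6.5, Ks_al)
# Beyond p*, the fluid is stiffer than the solid and starts opening the trap.
assert abs(fluid_bulk_modulus(p_star, 2112.5, 6.5) - Ks_al) < 1e-9
```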
We limit this study to a contact problem in which the fluid completely fills the gap (but only up to the upper boundary) during the whole loading process. Such a formulation remains rather general since, due to the realistic fluid model, a contact zone inevitably appears upon the first loading.
At low external pressures the numerical results coincide with the analytical solutions, also obtained for non-linear fluids, see~\eqref{eq::p_kuz_nonlin}. However, in contrast to the analytical solution, which cannot account for the depletion of the contact zone, the numerically obtained contact area, as expected, becomes a non-monotonic function of the pressure and, after reaching a maximum, decreases.
Note that for each of the considered materials, the curves obtained for water and oil coincide in the beginning of loading due to the almost equal initial bulk moduli $K_0$ of these fluids, and deviate at higher external pressures due to the difference in $K_1$, while for glycerine $K_0$ is significantly larger, leading to a smaller contact area.
The global coefficient of friction (CoF) also shows a non-monotonic behaviour, see Fig.~\ref{fig::comp_nonlin_incomp_el_pl}(b): it rapidly increases up to a certain maximal value and vanishes when the contact area reaches zero. Within the initial stage, the numerical and analytical results are very close, while at higher pressures a strong deviation between them is observed. In the analytical solution, even though the global CoF may decrease after the first extremum (a maximum; see the results obtained for steel), it eventually increases again after reaching a second extremum (a minimum). The more accurate numerical results predict a monotonic decrease of the global CoF after the first maximum. Note that in the simplified case considered here, the hydrostatic lubrication effect significantly decreases the maximal global CoF, which does not exceed $\approx$ 36 \% of the local CoF for steel and $\approx$ 24 \% for aluminium. Such a strongly non-linear behaviour of the global coefficient of friction (with one or two extrema) is explained by a competition between the non-linear fluid pressurization and the non-linear contact area evolution (see Eq.~\eqref{eq::est3}).
The numerical solution shows that the maximal value of the CoF and its slope after passing the extremum both depend on the ratio between the bulk moduli of the fluid, $K_f = K_0 + K_1 p_f$, and of the solid, $K_s$. The smaller the initial modulus $K_0$, the higher the maximal CoF
(which explains the almost equal peak values of the CoF for water and oil and the much lower value for glycerine).
At the same time, the larger the coefficient $K_1$, the faster the CoF decreases.
We performed additional simulations varying the slope of the roughness profile $\Delta/\lambda$ in the interval $[0.005; 0.02]$. The results showed that the evolution of the real contact area is almost independent of the ratio $\Delta/\lambda$ (similarly to the case of the incompressible fluid).
On the other hand, variation of this ratio has a considerable effect on the peak value of the global CoF, which increases with increasing $\Delta/\lambda$.
However, for high values of the external pressure the CoF does not depend on the slope of the profile, as was also observed in the incompressible case.
It is important to note that, even though in the numerical solution the contact area decreases with increasing external pressure, it does not reach zero even for extremely high values of the external pressure, $p_0=E^*$. Thus, a linearly elastic material model seems inadequate at such high pressures. In Section~\ref{sec::elasto_plastic} a more realistic case is presented, taking into account a non-linearly compressible fluid and a relevant elasto-plastic material behaviour.
\subsection{Elastic-perfectly plastic solid\label{sec::elasto_plastic}}
\begin{figure}[h]
\centering
\includegraphics[width=0.99\textwidth]{el_pl_map.pdf}
\caption{Accumulated plastic strain near the contact interface is shown at three different external loads for the incompressible fluid and for $V_f/V_{g0}=0.9$ and $\Delta/\lambda= 0.01$, from left to right: (1) the step corresponding to activation of the fluid (maximal contact area); (2) contact area decreased by half; (3) zero contact area (the trap is opened; as seen from the plastic field, at this moment the entire solid is plastified).}
\label{fig::el_pl_map}
\end{figure}
Here we consider elastic-perfectly plastic materials (von Mises stress criterion): steel, $E = 200 \text{ GPa}$, $\nu = 0.28$, yield stress $\sigma_Y = 250 \text{ MPa}$ and aluminium, $E = 70 \text{ GPa}, \nu = 0.33, \sigma_Y = 240 \text{ MPa}$.
It is well known that in elasto-plastic mechanical contact the contact pressure cannot exceed the material hardness, which can be reliably estimated as $H\approx3\sigma_Y$~\cite{Bowden_2001,Johnson_1985,Mesarovic1999spherical}. Thus one could expect the contact to open abruptly once the pressure in the fluid reaches the material hardness. However, as demonstrated by our simulations, due to the high hydrostatic compressive state, the pressure in the contact can significantly exceed the material hardness.
First, we study the incompressible fluid and present in Fig.~\ref{fig::comp_nonlin_incomp_el_pl}(c) the evolution of the contact area for the case $V_f / V_{g0} = 0.9$. It shows a significantly different behaviour compared to the elastic material: after the fluid becomes activated, the contact area is a non-monotonic function of the external pressure; it increases slightly and then decreases abruptly, corresponding to the state when the fluid pressure reaches the contact pressure and, consequently, permeation becomes possible. Normal tractions in the contact interface increase beyond $6 \sigma_Y$ due to the hydrostatic pressurization of the solid. In Fig.~\ref{fig::comp_nonlin_incomp_el_pl}(d) the resulting evolution of the global CoF is presented; it shows considerably lower values of the CoF for both considered materials than those observed in the purely elastic case (for the same external pressure). Fields of the accumulated plastic strain in the solid at different loading steps are presented in Fig.~\ref{fig::el_pl_map}; note that once the fluid gets pressurized, the plastic zone is not limited to the contact vicinity, but spreads over the entire interface and, consequently, the whole bulk of the solid. Notably, a secondary onset of plastic deformation appears in the trough of the wavy profile; it complements the classical plastic core appearing under the contact zone and spreading to the contact interface~\cite{Johnson_1985,Mesarovic1999spherical,kogut2002elastic,alcala2010reassessing}.
Varying the slope of the profile as in Sections~\ref{sec::results_incomp} and~\ref{sec:results_comp_nonlin}, we showed that, in contrast to the case of elastic solids, where the evolution of the contact area during the trap opening does not depend on the slope $\Delta/\lambda$, for elasto-plastic solids and a given ratio $V_f / V_{g0}$, once the fluid gets pressurized, the higher the ratio $\Delta/\lambda$, the larger the contact area.
\begin{figure}[p]
\centering
\includegraphics[width=0.99\textwidth]{el_pl.pdf}
\caption{The behaviour of the system considering elasto-plastic material and non-linearly compressible fluid: (a) evolution of the ratio of real contact area to the apparent one under increasing external pressure; (b) the same as (a), but the results are shown in range $0 \leq p_0 \leq 0.025 E^*$; (c) evolution of the ratio between global and local coefficients of friction; (d) the same as (c), but the results are shown in range $0 \leq p_0 \leq 0.025 E^*$. Dashed curves are presented for comparison with the cases of purely elastic solids. Vertical dash-dotted line indicates the hardness $p_0 = H = 3\sigma_Y$.}
\label{fig::el_pl}
\end{figure}
The behaviour of the system incorporating the elasto-plastic material and the non-linearly compressible fluid is shown in Figs.~\ref{fig::el_pl}(a-d): after reaching its maximum, the contact area abruptly decreases, resulting in a fast permeation of the fluid into the contact interface and the eventual opening of the contact. Note that after a relatively fast saturation of the contact pressure at approximately the material hardness $H\approx3\sigma_Y$, a further increase in pressure without fluid permeation still remains possible up to huge pressure values $p_0 \gg \sigma_Y$. In reality, however, due to micro-roughness, permeation of the fluid into the contact interface may happen at earlier stages of the deformation.
In Fig.~\ref{fig::el_pl}(c,d) the evolution of the global CoF is depicted; it shows a behaviour rather similar to the one observed in the case of the elastic solid, with multiple extrema in the beginning of loading. Note that the amplitude of the first maximum of the CoF increases with the increasing slope of the profile, as was also observed in the simulations with the purely elastic material.
\subsection{Friction in the contact interface}
\begin{figure}[p]
\centering
\includegraphics[width=0.99\textwidth]{incomp_fric.pdf}
\caption{Distribution of the tangential tractions in the contact interface: (a) Fluid is not pressurized. (b) Under increasing external load fluid gets pressurized, contact area is decreasing and a singularity in tangential traction appears (limited by the Coulomb's law). (c) Sketch of the analogous problem
for two bonded dissimilar solids with two aligned semi-infinite interfacial cracks in the interface. (d) Comparison of the numerical results for the shear tractions and approximation provided by the analogy with the LEFM.}
\label{fig::incomp_fric}
\end{figure}
In order to study the distribution of frictional tractions in the contact interface during the opening of the trap, we consider the coupled problem for an incompressible fluid with Coulomb's friction in the contact interface; in the preceding analysis the shear forces exerted by the trapped fluid were neglected, consistent with the quasi-static treatment.
The following geometrical parameters are used: $\Delta/\lambda = 0.01$, $V_f/ V_{g0} = 0.95$. In order to obtain more reliable results, we refined the mesh to have 512 nodes within the maximal extension of the contact zone $a/\lambda = 0.05$, with 1024 surface elements in total.
Two stages in the loading can be distinguished. During the first stage the external pressure $p_0$ increases from zero to $p_{\text{act}}$, the value necessary to bring the fluid into the active state, and the contact area reaches its maximum value. Results for the first stage are presented in Fig.~\ref{fig::incomp_fric}(a), where, in order to visualize stick and slip zones, we plot the normal tractions multiplied by the coefficient of friction (CoF) $\mu = 0.2$.
These results are very close to the classical self-similar distribution of tractions (i.e., identical for any load under a proper coordinate/pressure scaling~\cite{Spence_1968}), because in the region of interest the wavy profile is very close to a parabola.
During the second stage of loading ($p_0 > p_{\text{act}}$) the fluid is in the pressurized state and influences the interfacial traction distribution.
Since the slope of the roughness profile is small, the distribution of normal traction should resemble, at least for $p_0$ not much greater than $p_{\text{act}}$, the analytical solution for a fluid bulk modulus tending to infinity ($K\to\infty$), in which a uniform pressure offset is added everywhere to the normal-traction field corresponding to the external pressure $p_{\text{act}}$. Accordingly, the tangential traction remains almost unchanged over the majority of the contact interface. Because the contact pressure is increased by the fluid pressure offset, all points pass to the stick state, i.e. adhere to their positions. However, since the finite slope is taken into account, the distribution of normal traction slightly differs from the analytical solution in the same way as discussed in Sec.~\ref{sec::results_comp_lin}, see Fig.~\ref{fig::comp_lin}(b), i.e. a slight decrease of the contact area takes place.
For $p_0$ sufficiently greater than $p_{\text{act}}$, see Fig.~\ref{fig::incomp_fric}(b), the effects of finite slope become more pronounced: the contact area gradually decreases and a remarkable evolution of the tangential traction is observed. A singularity in the tangential traction emerges at the boundary of the contact zone, with its peak value limited by Coulomb's law. In order to explain and verify this intriguing result, we draw an analogy between the process of trap opening with interfacial friction and mode-II crack propagation in the framework of linear elastic fracture mechanics (LEFM) theory~\cite{tada1973stress}.
Note that the analogy is not complete in a physical sense: during trap opening due to pressurization of the incompressible fluid, no new surface is created, since no atomic bonds must be broken in order to separate the surfaces. The physical reason for the singularity in the tangential stress is the following: when points of the surface lose contact, their normal traction reduces not to zero, but to the value of the fluid pressure, so the frictional limit near the contact edge remains elevated. The points of the interface carry non-zero shear traction before losing contact; once liberated from this traction after losing contact, they slide freely, in the absence of frictional resistance, towards the centre of the contact zone.
The fluid activation corresponds to the maximal extension of the contact zone (we denote the maximal contact half-length by $a^*$), and during the subsequent increase of the external pressure the width of the contact zone monotonically decreases. For a sufficiently small slope of the roughness profile, the situation corresponding to a contact half-length $a < a^*$ can be considered as a configuration of two bonded dissimilar solids with two aligned semi-infinite interfacial cracks, separated by $2a$, see Fig.~\ref{fig::incomp_fric}(c). Using the superposition principle, the observed stress state, corresponding to the half-length of the contact patch $a$, can be represented as a superposition of the initial shear traction $\sigma_{t}^*(x)$, corresponding to the moment of activation of the fluid, and the stress induced by the same traction with the opposite
sign, $\sigma_t^-(x) = -\sigma_{t}^*(x)$, applied only on the crack faces in the intervals $x\in[-a^*, -a]\:\text{and}\:[a, a^*]$.
Such a traction induces singular shear stresses in the region between the two cracks $x \in[-a, a]$; thus $\sigma_t^-(x)$ can be written as:
\begin{equation}
\sigma_t^-(x) =
\begin{cases}
-\sigma_{t}^*(x),& \; x \in[-a^*, -a] \cup [a, a^*]\\
\frac{1}{\sqrt{2\pi}}\text{Im}\left\{K(a, \sigma_t^*)\ \left(\frac{(x-a)^{i \epsilon}}{\sqrt{|x-a|}} - \frac{(x+a)^{i \epsilon}}{\sqrt{|x+a|}}\right)\right\},& \; x \in[-a, a]\\
0,& \; |x| > a^*,
\end{cases}
\label{eq::lefm}
\end{equation}
where $K$ is the complex stress intensity factor, see~\cite{rice1965plane,rice1988elastic}, $\text{Im}$ denotes the imaginary part, and the two terms in brackets in the second line of~(\ref{eq::lefm}) correspond to the two semi-infinite cracks under consideration, so that $\sigma_t^-(0) = \sigma_t^*(0) = 0$.
Therefore, the resulting distribution of shear tractions is given by the superposition $\sigma_t(x) = \sigma_t^*(x) + \sigma_t^-(x)$.
The complex stress intensity factor $K$ is calculated using the existing analytical formula for the considered configuration and shear traction distribution~\cite{rice1965plane,rice1988elastic}:
\begin{equation}
K(a, \sigma_t^*) = \left[k_1(a, \sigma_t^*) + i k_2(a, \sigma_t^*)\right] \sqrt{\pi} \cosh{(\pi \epsilon)},
\label{eq::lefm_k}
\end{equation}
where
\begin{align}
&k_1(a, \sigma_t^*) = \frac{\sqrt{2}}{\pi}\int\limits_a^{a^*} \frac{\sigma_t^*(x) \sin{(\epsilon \ln{(x-a)})}}{\sqrt{x-a}} dx, \nonumber \\
&k_2(a, \sigma_t^*) = \frac{\sqrt{2}}{\pi}\int\limits_a^{a^*} \frac{\sigma_t^*(x) \cos{(\epsilon \ln{(x-a)})}}{\sqrt{x-a}} dx,
\label{eq::lefm_k1_k2}
\end{align}
and the parameter $\epsilon$ accounts for the dissimilar properties of the two bonded solids; in the case where one of them is rigid, it equals
\begin{equation}
\epsilon = -\frac{1}{2\pi} \ln{(3 - 4 \nu)}.
\end{equation}
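As a numerical illustration (our addition, not part of the original analysis), both the oscillatory index $\epsilon$ and the integrals in~(\ref{eq::lefm_k1_k2}) are straightforward to evaluate; the substitution $x = a + t^2$ removes the integrable $1/\sqrt{x-a}$ singularity. The uniform traction model used in the example is hypothetical.

```python
import math

def oscillatory_index(nu):
    # epsilon = -ln(3 - 4*nu) / (2*pi) for an elastic half-plane (Poisson
    # ratio nu) bonded to a rigid solid; it vanishes for nu = 1/2.
    return -math.log(3.0 - 4.0 * nu) / (2.0 * math.pi)

def k1_k2(a, a_star, sigma, eps, n=20000):
    # Midpoint quadrature of the k1, k2 integrals after substituting
    # x = a + t^2 (dx = 2 t dt, sqrt(x - a) = t), which gives
    #   k1 = (2*sqrt(2)/pi) * int_0^T sigma(a + t^2) * sin(2*eps*ln t) dt,
    #   k2 = (2*sqrt(2)/pi) * int_0^T sigma(a + t^2) * cos(2*eps*ln t) dt,
    # with T = sqrt(a_star - a); the substitution removes the singularity.
    T = math.sqrt(a_star - a)
    h = T / n
    k1 = k2 = 0.0
    for i in range(n):
        t = (i + 0.5) * h
        k1 += sigma(a + t * t) * math.sin(2.0 * eps * math.log(t)) * h
        k2 += sigma(a + t * t) * math.cos(2.0 * eps * math.log(t)) * h
    c = 2.0 * math.sqrt(2.0) / math.pi
    return c * k1, c * k2

# Example: a hypothetical uniform traction sigma_t^* = 1 on [a, a*] = [0, 1].
eps = oscillatory_index(0.3)
k1, k2 = k1_k2(0.0, 1.0, lambda x: 1.0, eps)
```

For $\epsilon=0$ the integrals reduce to the classical homogeneous stress intensity factor, which for a uniform traction is $(\sqrt{2}/\pi)\,\sigma_0\,2\sqrt{a^*-a}$; this serves as a check of the quadrature.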
In Fig.~\ref{fig::incomp_fric}(d) we plot the approximation of the shear traction distribution in the interface during trap opening, discussed above.
A close agreement is found between the numerical results and the analytical formulae provided by the LEFM.
Therefore, we have shown that during trap opening due to increasing fluid pressure, with friction taken into account, the tangential tractions near the contact edges are elevated up to the limit set by the Coulomb friction law. Consequently, even if the majority of the interface remains in the stick state, local slip zones emerge at the boundaries of contact zones. It is important to account for such elevated shear stress near the edges of contact zones surrounding trapped fluid in the analysis of damage evolution and crack onset under monotonic and cyclic loading, including fretting fatigue \cite{hills1994mechanics,proudhon2005fretting}.
\section{Conclusions}
\label{sec::conclusions}
In this work we solved the problem of mechanical contact between a deformable body with a wavy surface and a rigid flat, taking into account pressurized fluid trapped in the interface.
A mathematical framework for this coupled problem for both incompressible and compressible fluids was formulated.
In the latter case, either constant or pressure-dependent fluid bulk-moduli were considered; all models were implemented in the finite element framework using a monolithic approach.
The proposed framework accounts for a finite slope of the roughness profile, whereas previous investigations, using the classical boundary element method (which accounts only for vertical displacements) and existing analytical solutions, considered only infinitesimal slopes.
We show that in the considered coupled problem, a reduction of the contact area can occur due to elastic flattening of asperities by fluid pressure.
Thus the reduction of the global coefficient of friction is caused not only by the repartition of the external load between the solid contact and the pressurized fluid, but also by the contact area reduction.
The reduction of the contact area takes place if the fluid bulk modulus is higher than that of the solid.
In the case of an incompressible fluid this criterion is satisfied and the process of trap opening is observed.
However, this case is non-physical, since real lubricating fluids in the unpressurized state have a much lower bulk modulus than solids.
A more relevant case is a compressible fluid whose bulk modulus depends linearly on pressure, which ensures a non-monotonic variation of the contact area, and thus of the global coefficient of friction, leading to a reduction of both at sufficiently high pressures.
Among other applications, the obtained results are relevant for the mechanical behaviour of multi-cracked materials such as rocks. We showed that due to the presence of pressurized fluid in the interface, the frictional limit does not increase linearly with increasing external load, but reaches its maximum and decreases down to zero.
This behaviour is similar to pressure-dependent plasticity models with a cap (e.g. the Drucker-Prager cap model), in which the von Mises yield stress decays with increasing pressure.
In addition to elasticity, we considered physically more relevant elasto-plastic materials in combination with realistic fluids.
In this case the contact pressure is bounded, while the fluid can bear arbitrary pressure; consequently, at a certain external pressure the fluid abruptly permeates into the contact zones.
When interfacial friction is included in the coupled problem, previously unreported quasi-singularities appear in the shear stresses near the edges of contact patches during fluid-trap opening.
We showed that these singularities can be analytically estimated using the analogy between trap opening and crack propagation in the interface between two bonded dissimilar solids.
It is important to account for such elevated shear stress, caused by the trapped fluid, in the analysis of damage evolution and crack onset under monotonic and cyclic loading, including fretting fatigue.
The problem of trapped fluid is relevant for metal forming (drawing and rolling), where a lubricant is present in the interface and involved loads are high.
It is also relevant in poromechanics, especially in cracked media filled with fluid and subjected to complex stress states with high hydrostatic component, which can ensure contact between surfaces of internal cracks.
Finally, at the microscopic scale, where the surface roughness plays a crucial role, the trapped fluid provides additional load-bearing capacity, and thus reduces the macroscopic static friction.
Under increasing load, the trapped fluid is squeezed out of its trap, resulting in an even smaller global coefficient of friction.
\section{Acknowledgements}
The authors acknowledge the financial support of Safran Tech and MINES ParisTech (Th\`ese-Open) and are grateful to Julien Vignollet for his helpful suggestions and kind support.
Enlightening comments of an anonymous reviewer are kindly acknowledged.
\label{sec:1}
Integrable Hamiltonian systems have important applications in diverse fields of physics and are
in the focus of intense investigation by a great variety of mathematical methods.
We are interested in the family of classical many-body systems introduced in their
simplest form by Calogero \cite{Cal}, Sutherland \cite{Suth} and Ruijsenaars and Schneider \cite{RS86}.
The relevance of these systems
to numerous
areas of mathematics and physics
is apparent from the
reviews devoted to them \cite{vDV, EtiR, N, OP1, PolR, RuijKup, Banff, SuthR}.
One of their fascinating features is that several pairs of such systems enjoy a duality
relation
that converts the particle positions of one system into the
action variables of the other system, and vice versa\footnote{Self-duality occurs when the related systems are identical,
except for a possible shift of their parameters.}.
This intriguing phenomenon was first analyzed in the ground-breaking papers \cite{SR88,RIMS95} by a direct method,
while its group-theoretic background came to light more recently \cite{JHEP, G, N}.
The treatment of the self-dual Calogero system by Kazhdan, Kostant and Sternberg \cite{KKS}
served as a source of inspiration for these developments.
Since this paper is devoted to the analysis of a particular dual pair, let us next outline
in more precise terms the notion of duality that we use.
An integrable Hamiltonian system is given by an
Abelian Poisson algebra $\fH$ of smooth
functions on a $2n$-dimensional symplectic manifold
$(M,\omega)$ such that the functional dimension of $\fH$ is $n$, and all elements of $\fH$ generate complete flows.
The systems of our interest possess another distinguished Abelian Poisson algebra $\fP$, which has the same
properties as $\fH$ and for which the following requirements hold:\\
(a) There exist Darboux coordinates, $\lambda_i, \theta_j$, on a dense open submanifold $M^o$ of $M$ such that
the restriction of $\fP$ to $M^o$ is functionally generated by the $\lambda_i$.\\
(b) $\fH$ contains a distinguished function $H$ whose restriction to $M^o$ admits
interpretation as a many-body Hamiltonian describing the dynamics
of $n$ interacting `point-particles' with positions $\lambda_i$ moving along one dimensional space
(a line or a circle).\\
The function $H$ is often called the `main Hamiltonian' and
$\fP$ is sometimes called the algebra of `global position variables'.
Now, suppose that we have two systems
\begin{equation}
(M, \omega, \fH, \fP, H) \quad\hbox{and}\quad (\hat M, \hat\omega, \hat\fH, \hat\fP, \hat H),
\label{I1}\end{equation}
with associated Darboux coordinates, according to conditions (a) and (b), $(\lambda,\theta)$ and $(\hat\lambda,\hat\theta)$.
We say that these two systems are in action-angle duality (also called Ruijsenaars duality)
if there exists a \emph{global} symplectomorphism $\cR\colon (M, \omega)\to (\hat M,\hat\omega)$ such that
\begin{equation}
\fH = \hat\fP\circ \cR
\quad\hbox{and}\quad
\hat\fH = \fP \circ \cR^{-1}.
\label{I2}\end{equation}
An additional feature,
valid in all known examples, is that the
Hamiltonian flows of $(M, \omega, \fP)$ and $(\hat M, \hat\omega, \hat\fP)$
can be written down explicitly, not only on the dense open parts, but globally.
Consequently, $(M, \omega, \fH)$ is integrated by means of $(\hat M, \hat\omega, \hat\fP)$,
and $(\hat M, \hat\omega, \hat\fH)$ is integrated by means of $(M, \omega, \fP)$.
This means that $\cR$ and $\cR^{-1}$ can be interpreted as global action-angle maps
for the Liouville integrable systems $(M, \omega,\fH)$ and $(\hat M, \hat\omega, \hat\fH)$.
One may also say that $\hat\fP$ represents global position-type variables
for the many-body system $(\hat M, \hat\omega, \hat H)$ and
global action-type variables for the system $(M, \omega, H)$, together
with the analogous `dual statement'.
For further description of this curious notion and its quantum mechanical counterpart,
alias the celebrated bispectral property \cite{DG}, the reader may consult the reviews \cite{RuijKup,Banff}.
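As a toy illustration (our own, not among the examples of the cited works), note that the free particle on the line is trivially self-dual in this sense:

```latex
% Take M = \hat M = \mathbb{R}^2 with \omega = \hat\omega = dp \wedge dq,
% let \fH be generated by p (the free Hamiltonian H = p^2/2 belongs to it)
% and \fP by q, and similarly for the hatted copy. The quarter turn
\cR(q,p) := (\hat q, \hat p) = (p, -q)
% is a symplectomorphism, \cR^*\hat\omega = \omega, satisfying
% \fH = \hat\fP \circ \cR and \hat\fH = \fP \circ \cR^{-1}:
% positions and actions are exchanged by a rotation of the phase plane.
```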
We note in passing that in some examples the $\lambda_i$ are globally smooth and independent,
and then $M^o=M$, while
in other examples they lose their smoothness or independence outside a proper
submanifold $M^o$.
This should not come as a surprise since from the dual viewpoint the $\lambda_i$ are action variables,
which usually exhibit some singularities.
Their canonical conjugates $\theta_i$ may vary on the circle or on the line
depending on the example.
It was realized by Gorsky and his collaborators \cite{JHEP,G,N}, and
explored in detail by others (\cite{F-PLA}--\cite{FM}, \cite{P1,P}), that dual pairs of integrable many-body systems can be derived
by Hamiltonian reduction utilizing the following mechanism.
Suppose that we have a higher dimensional `master phase space' $\cM$ that admits a symmetry group $G$,
and two distinguished independent Abelian Poisson algebras $\fH^1$ and $\fH^2$ formed by $G$-invariant, smooth
functions on $\cM$. Then we can apply Hamiltonian reduction to $\cM$ and obtain
a reduced phase space ${\mathcal M}_{\mathrm{red}}$ equipped with two Abelian Poisson algebras
$\fH_{\mathrm{red}}^1$ and $\fH_{\mathrm{red}}^2$ that descend respectively from
$\fH^1$ and $\fH^2$.
We need to construct two distinct models $M$ and $\hat M$ of ${\mathcal M}_{\mathrm{red}}$ yielding $(M,\omega, \fH, \fP)$ and $(\hat M, \hat\omega, \hat\fH, \hat\fP)$
in such a way that the reduction of $\fH^1$ is represented by $\fH$ and $\hat\fP$,
and the reduction of $\fH^2$ is represented by $\fP$ and $\hat\fH$.
If this is achieved, then we obtain a natural map
$\cR\colon M \to \hat M$ that corresponds to the identity map on ${\mathcal M}_{\mathrm{red}}$
and relates the Abelian Poisson algebras on $M$ to those on $\hat M$
in the way stated in (\ref{I2}).
A crucial, and very intricate, requirement
is that the reduction must provide many-body systems: to fulfil this, one can rely only on experience and inspiration.
The heart of the matter is the choice of the correct master system
and its specific reduction.
The examples so far treated by the mechanism just outlined
include group theoretic reinterpretations of dual pairs previously constructed by direct methods
as well as
new dual pairs found by reduction.
At the same time, there still exist known instances of dualities
(notably, the self-dual hyperbolic RS system \cite{SR88} and the dual pair involving the
relativistic Toda system \cite{SR90})
that stubbornly resist treatment in the reduction framework.
\begin{figure}[h!]
\centering
\begin{tikzcd}
&{\mathcal M}_0 \arrow{ld}[swap]{\psi} \arrow{d}{\pi_0} \arrow{rd} {\hat\psi} \arrow[hook]{r}{\iota_0}
&\cM\\
M\arrow{d}{\lambda} \arrow[bend right]{rr}[swap]{\mathcal R}& {\mathcal M}_{\mathrm{red}} \arrow{r}{\hat\Psi} \arrow{l}[swap]{\Psi} & \hat M\arrow{d}{\hat\lambda}\\
{\mathbb R}^n & & {\mathbb R}^n
\end{tikzcd}
\qquad\qquad
\begin{tikzcd}
&\iota_0^*(\fH^1)\times\iota_0^*(\fH^2) \\
\fH\times\fP\arrow{ru}{\psi^*} \arrow{r}{\Psi^*} & \fH^1_{\mathrm{red}}\times\fH^2_{\mathrm{red}}\arrow{u}[swap]{\pi_0^*} & \hat \fP\times\hat\fH \arrow{l}[swap]{\hat\Psi^*}\arrow{lu}[swap]{\hat\psi^*} \arrow[bend left]{ll}{\mathcal R^*}\\
\end{tikzcd}
\caption{ Illustration of how symplectic reduction is used to generate duality. These diagrams are designed
to help keep track of the notations. Using the embedding $\iota_0\colon \cM_0 \to \cM$
of the `constraint surface' $\cM_0$ into the master phase space $\cM$, the reduced Abelian algebras are defined by
$\fH^i_\red \circ \pi_0 = \fH^i\circ \iota_0$ for $i=1,2$. They turn into the Abelian algebras of the models
$M$ and $\hat M$ according to
$\fH\circ\Psi=\fH^1_{\mathrm{red}}=
\hat\fP\circ\hat\Psi$ and $\fP\circ\Psi=\fH^2_{\mathrm{red}}=\hat\fH\circ\hat\Psi$.
}
\label{figure X}
\end{figure}
The crucial advantage of the above outlined
approach to action-angle dualities is that, once the correct starting point is found, the
Hamiltonian reduction \emph{automatically} gives rise to complete flows
and symplectomorphisms between
the models of the reduced phase space.
For the realisation of this advantage, it is indispensable
to provide globally valid descriptions of the
reduced system, which can be a thorny issue.
The solution of such global issues is at the heart of our current investigation.
The goal of this paper is to present a thorough analysis of a dual pair of integrable
many-body systems recently derived in \cite{FG} and \cite{FM} by reduction of the Heisenberg double of
the standard Poisson-Lie group $\SU(2n)$. It is well-known \cite{STS,STSlectures} that the Heisenberg doubles
are Poisson-Lie analogues (and deformations) of corresponding cotangent bundles.
The relevant reduction is a direct Poisson-Lie generalization---making use of Lu's momentum map, \cite{Lu}---of the reduction of the cotangent bundle $T^*\SU(2n)$ used for deriving
the trigonometric $\BC_n$ Sutherland system and its dual in \cite{FG-JMP}.
Correspondingly, the reduction
of the Heisenberg double leads to a deformation of this dual pair.
We shall not only describe the deformed dual pair, but shall also show
how duality allows us to extract non-trivial information about the dynamics.
For example, it will allow us to prove that both of the resulting integrable
many-body Hamiltonians are non-degenerate since their flows
densely fill the corresponding Liouville tori.
Furthermore, it will be shown that all the flows of $\fH$ possess a common fixed point, as do the flows of $\hat\fH$.
These results will be established by utilizing the global descriptions of the dual models
$M$ and $\hat M$ of the reduced phase space.
Our current line of research was initiated in the paper \cite{M}, where the analogous
reduction of the Heisenberg double of $\SU(n,n)$ was considered. The investigation
in \cite{FG-JMP} was strongly influenced by the work of Pusztai \cite{P}, who studied a dual pair
arising from reduction
of $T^*\SU(n,n)$. The Poisson-Lie counterpart of the $\SU(n,n)$ dual pair appears more complicated
than what we report on here; its exploration is left for the future.
Before outlining the content of the paper, let us recall from \cite{FG,FM}
the local description of our many-body systems in duality, which
arises by restricting attention to dense open submanifolds of the reduced phase space.
These systems have 3 real parameters, $\mu>0$ and $u$ and $v$, whose range will be specified below.
Here, we use hatted letters to describe the model constructed in \cite{FG}.
The manifold $\hat M$ contains a dense open proper subset $\hat M^o$
parametrized by the Cartesian product
\begin{equation}
\widehat{\cD}_+ \times \T^n = \{ (\hat\lambda, \exp({\ri \hat\theta}))\},
\label{I3}\end{equation}
where $\T^n$ is an $n$-torus and
\begin{equation}
\widehat{\cD}_+ =\{\hat\lambda\in\R^n\,\mid
\mathrm{min}(0,v-u)>\hat\lambda_1>\dots >\hat\lambda_n,\,\,\, \hat\lambda_j-\hat\lambda_{j+1}>\mu,\,\, j=1,\dots,n-1\}.
\label{I4}
\end{equation}
The $\hat\lambda_i$ and the angles $\hat\theta_i$ are Darboux coordinates, i.e., on $\hat M^o$ we have
\begin{equation}
\hat\omega =\sum_{j=1}^nd\hat\theta_j\wedge d\hat\lambda_j.
\label{I5}\end{equation}
The main Hamiltonian $\hat H$ can be written on $\hat M^o$ as
\begin{equation}
\hat H(\hat\lambda, \hat\theta)=U(\hat\lambda) - \sum_{j=1}^n\cos(\hat\theta_j) U_1(\hat\lambda_j)^{1/2}
\prod_{\substack{k=1\\(k\neq j)}}^n
\bigg[1-\frac{\sinh^2\mu}{\sinh^2(\hat\lambda_j-\hat\lambda_k)}
\bigg]^{1/2}
\label{I6}\end{equation}
with
\begin{equation}
\begin{aligned}
U(\hat\lambda)&=\frac{e^{-2u}+e^{2v}}{2}\sum_{j=1}^n\exp({-2\hat\lambda_j}),\\
U_1(\hat\lambda_j) &= \big[1-(1+e^{2(v-u)})\exp({-2\hat\lambda_j})
+ e^{2(v-u)}\exp({-4\hat\lambda_j})\big].
\end{aligned}
\label{I7}\end{equation}
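The structure of $U_1$ is clarified by the factorization $U_1(\hat\lambda_j) = \big(1-e^{-2\hat\lambda_j}\big)\big(1-e^{2(v-u)}e^{-2\hat\lambda_j}\big)$, which shows in particular that $U_1>0$ on $\widehat\cD_+$: there $\hat\lambda_j<\mathrm{min}(0,v-u)$ makes both factors negative. A quick numerical confirmation (our addition; the sampled parameter values are arbitrary):

```python
import math, random

def U1(lam, u, v):
    # U_1 as in eq. (I7)
    B = math.exp(2.0 * (v - u))
    A = math.exp(-2.0 * lam)
    return 1.0 - (1.0 + B) * A + B * A * A

def U1_factored(lam, u, v):
    # Claimed factorization: (1 - e^{-2 lam}) * (1 - e^{2(v-u)} e^{-2 lam})
    return (1.0 - math.exp(-2.0 * lam)) * (1.0 - math.exp(2.0 * (v - u) - 2.0 * lam))

random.seed(0)
samples = [(random.uniform(-3, 3), random.uniform(-1, 1), random.uniform(-1, 1))
           for _ in range(1000)]
```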
The phase space $M$ of the `dual model' possesses a dense open proper subset $M^o$ parametrized by
\begin{equation}
\cD_+\times \T^n =\{(\lambda, \exp({\ri \theta}))\}
\label{I8}\end{equation}
with
\begin{equation}
\cD_+ = \{ \lambda\in \R^n\mid \lambda_1 > \dots > \lambda_n >
\operatorname{max}(\vert v\vert, \vert u \vert),\,\,\, \lambda_{j} - \lambda_{j+1}> \mu,\,\, j=1,\ldots, n-1\}.
\label{I9}\end{equation}
It carries the Darboux form
\begin{equation}
\omega = \sum_{j=1}^nd \theta_j\wedge d\lambda_j.
\label{I10}\end{equation}
In terms of these variables, the main Hamiltonian $H$ reads
\begin{equation}
\begin{aligned}
H(\lambda,\theta)&=
V(\lambda) + e^{v-u}\sum_{j=1}^n\frac{\cos\theta_j}{\cosh^2\lambda_j}
\left[1 - \frac{\sinh^2v}{\sinh^2\lambda_j}\right]^{1/2} \left[1 - \frac{\sinh^2u}{\sinh^2\lambda_j} \right]^{1/2}\\
&\qquad\times
\prod_{\substack{k=1\\(k\neq j)}}^n \left[1 - \frac{\sinh^2\mu}{\sinh^2(\lambda_j - \lambda_k)}\right]^{1/2}
\left[1 - \frac{\sinh^2\mu}{\sinh^2(\lambda_j + \lambda_k)}\right]^{1/2}
\end{aligned}
\label{I11}\end{equation}
with
\begin{equation}
V(\lambda) =e^{v-u}\left(\frac{\sinh(v)\sinh(u)}{ \sinh^2\mu}
\prod_{j=1}^n\left[1 - \frac{\sinh^2\mu}{\sinh^2\lambda_j} \right]
-\frac{\cosh(v)\cosh(u)}{\sinh^2\mu}
\prod_{j=1}^n\left[1 + \frac{\sinh^2\mu}{\cosh^2\lambda_j} \right]
+ C_0\right)
\label{I12}\end{equation}
where $\displaystyle{ C_0= ne^{u-v} + \frac{\cosh( v -u)}{\sinh^2\mu}}$.
The constant $C_0$ is included here for later convenience.
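Indeed, a short computation (ours) shows that $C_0$ is precisely the constant for which $V$ has the finite limit $n$ at infinity:

```latex
\lim_{\lambda_1,\dots,\lambda_n\to\infty} V(\lambda)
= e^{v-u}\left(\frac{\sinh(v)\sinh(u)-\cosh(v)\cosh(u)}{\sinh^2\mu} + C_0\right)
= e^{v-u}\left(C_0 - \frac{\cosh(v-u)}{\sinh^2\mu}\right) = n,
% using cosh(v-u) = cosh(v)cosh(u) - sinh(v)sinh(u)
% and the definition of C_0.
```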
The formulae of the main Hamiltonians $\hat H$ (\ref{I6}) and $H$ (\ref{I11}) are invariant with respect to the independent transformations
$\mu \mapsto -\mu$
and $(u,v) \mapsto ( -v, -u)$.
Motivated by this, we assume throughout the paper that $\mu>0$ and at a later stage we shall also assume
that
\begin{equation}
\vert u \vert > \vert v \vert.
\label{I13}\end{equation}
The exclusion of $\vert u \vert = \vert v \vert$ is required for our reduction treatment,
while the choice (\ref{I13}) turns out to have technical advantages.
The above specified domains $\widehat\cD_+$ and $\cD_+$ emerge from the reduction,
but they can also be viewed as choices
made to guarantee the strict positivity of all expressions under the square roots
appearing in the Hamiltonians.
A few remarks are now in order. The
main Hamiltonians $\hat H$ and $H$ are reminiscent of many-body Hamiltonians introduced by
van Diejen \cite{vD1}. The precise relation was established for $\hat H$ in \cite{FG}; for $H$ it
will be described in this paper.
The coordinates $\hat\lambda_i$ and $\lambda_i$ serve as position variables
for $\hat H$ and $H$, respectively, and we shall see that they yield globally smooth
(and analytic) functions on the underlying phase space.
Note that
the deformation parameter that brings this dual pair into the one obtained by reduction
of $T^* \SU(2n)$ \cite{FG-JMP}
is here set to unity. The cotangent bundle limits of $\hat H$ and $H$ are discussed in
\cite{FG} and in \cite{FM}.
Now we outline the content of the paper and highlight our main results.
In Section 2.1, we first recall the Heisenberg double $\cM$ equipped with the Abelian
Poisson algebras $\fH^1$ and $\fH^2$, then set up the pertinent reduction.
In Section 2.2, we review
the global model $\hat M$ of the reduced phase space found in \cite{FG}.
The material in Section 2
enhances several previous results.
For instance, Lemma 2.1 and the relation (\ref{T44}) of
$\cH_j^\red$ to Chebyshev polynomials appear here for the first time.
Section 3 contains the logical outline of the construction of
the global model $M$, which is our primary task.
This is summarized by Figure 2 at the end of Section 3.
The elaboration of the details required new ideas and a certain amount of labour:
it occupies Section 4, Section 5 and Section 6.1.
Our first main result is Theorem 5.6 in Section 5.
Crucially, this theorem establishes the range of the
$\lambda$-variables that arises from the reduction.
Building on the local results of \cite{FM}, it also yields the Darboux chart (\ref{I10})
on a dense open submanifold of $\cM_\red$ parametrized by (\ref{I9}).
Our second main result is given by Theorem 6.5, which
describes the symplectomorphism $\Psi$ between $(M,\omega)$, cast as $\C^n$ with its canonical symplectic structure, and
$(\cM_\red, \omega_\red)$.
Combining Theorem 6.5 with previous developments, we explain in Section 6.2 that
our reduction engenders a realization of the diagrams of Figure 1.
We consider this to be our principal achievement.
We also present consequences for the dynamics of the systems in duality
in Section 6.2 and in Section 7. Section 7 is devoted to further discussion
of the results and open problems.
Finally, two appendices are included. The first one is purely technical, while
in the second we clarify the connection between the Hamiltonian $H$ (\ref{I11}) and van Diejen's
five-parameter family of integrable trigonometric Hamiltonians.
\section{Preparations}
\label{sec:2}
In this section
we set up the reduction of our interest and review the model $\hat M$ of the
reduced phase space.
All manifolds in this article are viewed as real. Hence the expression ``analytic'' must
always be understood to mean ``real-analytic''.
We shall focus on the $C^\infty$ character of the manifolds and maps of our concern, but shall
often also indicate their analytic nature by parenthetical remarks.
\subsection{The master system and its reduction}
We shall reduce the master phase space $\cM := \SL(2n,\C)$. Here, $\SL(2n,\C)$ is viewed as a real Lie group,
and we also need its subgroups
\begin{equation}
K:= \SU(2n),\quad B:= \SB(2n),
\label{T1}\end{equation}
where the latter is formed by upper triangular complex matrices with positive entries along the diagonal.
Every element $g\in \cM$ admits the alternative Iwasawa decompositions
\begin{equation}
g = k b = b_L k_R,
\qquad k, k_R \in K, \quad b, b_L \in B.
\label{T2}\end{equation}
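Concretely, for $\det g = 1$ the factorization $g = kb$ is nothing but the Gram-Schmidt (QR) factorization of the columns of $g$: since $\det b > 0$ and $\vert\det k\vert = 1$, one automatically lands in $K$ and $B$. A minimal numerical sketch for the $2\times 2$ case (the helper name is ours):

```python
import math

def iwasawa_2x2(g):
    # Gram-Schmidt applied to the columns of g = ((g11, g12), (g21, g22)):
    # returns (k, b) with k unitary, b upper triangular with positive
    # diagonal, and g = k b. For det g = 1 one gets k in SU(2), b in SB(2).
    (g11, g12), (g21, g22) = g
    r11 = math.sqrt(abs(g11) ** 2 + abs(g21) ** 2)
    q1 = (g11 / r11, g21 / r11)
    r12 = q1[0].conjugate() * g12 + q1[1].conjugate() * g22
    w = (g12 - r12 * q1[0], g22 - r12 * q1[1])   # residual of column 2
    r22 = math.sqrt(abs(w[0]) ** 2 + abs(w[1]) ** 2)
    q2 = (w[0] / r22, w[1] / r22)
    k = ((q1[0], q2[0]), (q1[1], q2[1]))
    b = ((r11, r12), (0.0, r22))
    return k, b

# Example: a real matrix of determinant 1.
g = ((2.0, 1.0), (1.0, 1.0))
k, b = iwasawa_2x2(g)
```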
By using these, $\cM$ is equipped with the Alekseev-Malkin \cite{AM} symplectic form
\begin{equation}
\omega_\cM=\frac{1}{2}\Im\tr(db_Lb_L^{-1}\wedge d k k^{-1})+
\frac{1}{2}\Im\tr( b^{-1}db \wedge k_R^{-1} d k_R).
\label{T3}\end{equation}
To display the corresponding Poisson bracket, for any
$\cF\in C^\infty(\cM, \R)$ we introduce the $\sl(2n,\C)$-valued
left- and right-derivatives $\nabla \cF$ and $\nabla' \cF$ by
\begin{equation}
\ds \cF(e^{sX}ge^{sY})=\Im\tr\big(X \nabla \cF(g) +Y \nabla' \cF(g)\big),
\quad\forall X,Y\in\sl(2n,\C).
\label{T4}\end{equation}
We prepare the linear operator
\begin{equation}
R = \frac{1}{2} (\pi_\cK - \pi_{\cB})
\label{T5}\end{equation}
on $\sl(2n,\C)$, utilizing the projectors associated with the real vector space decomposition
\begin{equation}
\sl(2n,\C) = \cK + \cB,
\label{T6}\end{equation}
where $\cK$ and $\cB$ are the Lie algebras of $K$ and $B$, respectively.
The Poisson bracket reads
\begin{equation}
\{ \cF, \cH\} = \Im\tr\left( \nabla \cF R(\nabla \cH) + \nabla'\cF R(\nabla' \cH) \right),
\qquad
\cF, \cH\in C^\infty(\cM,\R).
\label{T7}\end{equation}
The structure described above is known \cite{STS,STSlectures} as the Heisenberg double
of the standard Poisson-Lie group $\SU(2n)$.
The Abelian Poisson algebra $\fH^2$ is defined as follows.
Let $\cP$ denote the space of positive definite Hermitian matrices of size $2n$ and determinant $1$.
Consider the ring $C^\infty(\cP)^K$ of smooth real functions on $\cP$ that are invariant with respect
to the natural action of $K$ on $\cP$ given by conjugation of a Hermitian matrix by a unitary one.
We set
\begin{equation}
\fH^2 = \{ \hat\cH \in C^\infty(\cM)\mid \hat \cH(g) = \hat h( b b^\dagger) \,\,\hbox{with}\,\,
\hat h \in C^\infty(\cP)^K\},
\label{fH2}\end{equation}
i.e., $\fH^2$ is the pull-back of $C^\infty(\cP)^K$ by the map $ \cM \ni g\mapsto bb^\dagger \in \cP$.
A generating set for $\fH^2$ is provided by the functions $\hat \cH_j$ of the form
\begin{equation}
\hat \cH_j(g) = \hat h_j(bb^\dagger) \quad\hbox{with}\quad
\hat h_j(bb^\dagger):= \frac{1}{2}\tr\!\left((bb^\dagger)^j \right)\quad\hbox{ for}\quad j=1,\dots, 2n-1.
\label{T9}\end{equation}
The Hamiltonian vector field and the corresponding (complete) flow can be written down explicitly for any
$\hat \cH\in \fH^2$.
After our reduction the $n$ Hamiltonians descending from the functions $\hat\cH_1, \hat\cH_2,\dots,\hat\cH_n$
remain independent, and
the many-body Hamiltonian displayed in
(\ref{I6}) results from $\hat \cH_1$.
To present the other Abelian Poisson algebra of interest, $\fH^1$,
we define the matrix
\begin{equation}
I:= \diag(\1_n, - \1_n),
\label{T10}\end{equation}
where $\1_n$ is the $n\times n$ unit matrix, and introduce the subgroup
\begin{equation}
K_+ := \{ k \in K \mid k^\dagger I k = I\}.
\label{T11}\end{equation}
Let $C^\infty(K)^{K_+ \times K_+}$ denote those functions on $K$ that are invariant
with respect to both left- and right-multiplications by elements of $K_+$.
Then, referring to the Iwasawa decomposition (\ref{T2}),
we define
\begin{equation}
\fH^1 = \{ \cH \in C^\infty(\cM)\mid \cH(g) = h(k) \,\,\hbox{with}\,\,
h \in C^\infty(K)^{K_+\times K_+}\}.
\label{fH1}\end{equation}
A generating set is furnished by the functions $\cH_j$ given by
\begin{equation}
\cH_j(g) = h_j(k) \quad \hbox{with}\quad h_j(k):=\frac{1}{2}
\tr\!\left( (k^\dagger I k I)^j\right) \quad \hbox{for}\quad j=1,\dots, n.
\label{T13}\end{equation}
We recall that $C^\infty(K)$ carries a natural Poisson bracket associated with (\ref{T7}), for which
the map $g\mapsto k$ defined by (\ref{T2}) is a Poisson map. Explicitly,
\begin{equation}
\{f,h\}_K(k) =\Im\tr \left(D f(k) k (D' h(k)) k^{-1}\right),\quad \forall k\in K,\, f,h\in C^\infty(K).
\label{T14}\end{equation}
Here the $\cB$-valued left- and right-derivatives, $Df$ and $D'f$, of any $f\in C^\infty(K)$ are defined analogously to (\ref{T4}).
It is well-known that $K$ is a Poisson-Lie group and
$K_+ < K$ is a Poisson-Lie subgroup of $K$ with respect to this Poisson structure.
The following lemma implies that $\fH^1$ is an Abelian Poisson algebra.
\medskip\noindent
{\bf Lemma 2.1.} \emph{The invariant functions $C^\infty(K)^{K_+ \times K_+}$
Poisson commute with respect to
$\{\ ,\ \}_K$.}
\medskip\noindent{\bf Proof.}
Let us start by noting that every $k\in K$ may be written in
the form
\begin{equation}
k = \kappa_1 \Delta \kappa_2,\qquad \hbox{for}\ \kappa_1, \kappa_2\in K_+, \ \hbox{and}\
\Delta=\begin{pmatrix}\Gamma& \ri\Sigma\\ \ri\Sigma&\Gamma\end{pmatrix},
\label{T15}\end{equation}
where
\begin{equation}
\Gamma=\diag(\cos q_1,\dots,\cos q_n),\qquad \Sigma=\diag(\sin q_1,\dots,\sin q_n)
\label{T16}\end{equation}
with
\begin{equation}
\frac{\pi}{2}\geq q_1 \geq q_2\geq \dots \geq q_n\geq 0.
\end{equation}
If $h_1$ and $h_2$ are two $(K_+\times K_+)$-invariant smooth functions on $K$, then their Poisson bracket is
also $(K_+\times K_+)$-invariant.
Therefore it is enough to show that $\{h_1,h_2\}$ vanishes at any point of the form $\Delta$ given in (\ref{T15}).
The $(K_+\times K_+)$-invariance of $h\in C^\infty(K)$ means that the $\cB$-valued
left- and right-derivatives
$D h, D' h$ have the form
\begin{equation}
D h=\begin{pmatrix}
0&A\\ 0&0
\end{pmatrix},
\qquad
D' h=\begin{pmatrix}
0&\tilde A\\ 0&0
\end{pmatrix},
\label{T18}\end{equation}
where we use the obvious $2\times 2$ block-structure defined by $I$ (\ref{T10}).
On account of the identity
\begin{equation}
D h(k)=\pi_\cB(k(D'h(k))k^\dag) = k (D' h(k)) k^\dag - \pi_\cK(k(D'h(k))k^\dag),
\label{T19}\end{equation}
we must also have
\begin{equation}
D'h(k) - k^\dag ( D h(k)) k \in\cK.
\label{T20}\end{equation}
Applying this at $k=\Delta$, we obtain
\begin{equation}
\begin{pmatrix}
-\ri\Gamma A \Sigma & \tilde A - \Gamma A \Gamma\\
-\Sigma A \Sigma & \ri\Sigma A \Gamma
\end{pmatrix}
\in\cK,
\label{T21}\end{equation}
where the dependence of $A$ and $\tilde A$ on $\Delta$ is suppressed.
This gives us the conditions (the first two from skew symmetry of the diagonal blocks,
and the third---after a calculation---from comparison of the off-diagonal blocks)
\begin{equation}
\begin{aligned}
&(i)\qquad\,\, \Sigma A \Gamma = \Gamma A^\dag\Sigma,\\
&(ii)\qquad \Gamma A \Sigma = \Sigma A^\dag \Gamma ,\\
&(iii)\qquad\,\, \Gamma\tilde A = A \Gamma.
\end{aligned}
\label{T22}\end{equation}
Let $h_1$ and $h_2$ be two $(K_+\times K_+)$-invariant functions, and use
$A_1$, $\tilde A_1$ and $A_2$, $\tilde A_2$ as in (\ref{T18})
for their derivatives.
By substitution into the Poisson bracket (\ref{T14}) on $K$ we get
\begin{equation}
\{h_1,h_2\}_K(\Delta) =
\Im\,\tr\left(\Sigma A_1 \Sigma \tilde A_2\right),
\label{T23}\end{equation}
which
then produces
$\{h_1,h_2\}_K(\Delta)
= \Im\,\tr \left(A_1\Sigma\Gamma^{-1} A_2 \Gamma\Sigma\right)$
using $(iii)$ of (\ref{T22}). Utilizing alternatively $(ii)$ and $(i)$ then gives
\begin{equation}
\{h_1,h_2\}_K(\Delta)=
\Im\,\tr\left( \Sigma^2 A_1A_2^\dag\right)\quad\hbox{and}\quad
\{h_1,h_2\}_K(\Delta)=
\Im\,\tr\left(\Sigma^2 A_1^\dag A_2\right).
\label{T24}\end{equation}
The combination of $(i)$ and $(ii)$ yields $\Gamma^2 A \Sigma^2 = \Sigma^2 A \Gamma^2$,
and thence $[\Sigma^2,A]=0$, since $\Gamma^2 = \1_n - \Sigma^2$.
Applying this to the two expressions in (\ref{T24}) and then adding them, we have
\begin{equation}
\begin{aligned}
2\{h_1,h_2\}_K(\Delta)&=\Im\,\tr\left( A_1\Sigma^2A_2^\dag\right) + \Im\,\tr\left(\Sigma^2 A_1^\dag A_2\right)\\
&=
-\Im\,\tr\left(A_2\Sigma^2A_1^\dag\right) + \Im\,\tr\left(\Sigma^2 A_1^\dag A_2\right)
=
\Im\,\tr\left(A_1^\dag[A_2,\Sigma^2]\right)=0,
\end{aligned}
\label{T25}\end{equation}
which completes the proof.
\hfill $\square$
\medskip
The Hamiltonian vector fields corresponding to the collective Hamiltonians $\cH\in \fH^1$ (\ref{fH1})
are all complete.
Actually the completeness is valid
for any $\cH\in C^\infty(\cM)$ given by $\cH(g) = h(k)$ using the Iwasawa decomposition $g=kb$ (\ref{T2})
and any $h\in C^\infty(K)$. In this case the derivatives of $\cH$ are related to the derivatives of $h$
according to
\begin{equation}
\nabla' \cH(g) = b^{-1} (D' h(k)) b \, \in\cB,\qquad
\nabla \cH(g) = k (D' h(k)) k^{-1}.
\label{T26}\end{equation}
This implies that the integral curves $g(t) = k(t) b(t)$ of the Hamiltonian vector field of $\cH$ on $\cM$
are determined by the `decoupled' differential equations
\begin{equation}
\dot{k}= \pi_\cK( k (D' h(k)) k^{-1}) k \quad\hbox{and}\quad \dot b = - (D'h(k)) b.
\label{T27}\end{equation}
The vector field on $K$ occurring in the first equation is complete due to compactness of $K$.
After substituting a solution $k(t)$ into the second equation, $b(t)$ can be found (in principle) by
performing a finite number of integrations: this is because of the triangular structure of the group $B$.
Now, with the master phase space $\cM$ and its two distinguished Abelian Poisson
algebras $\fH^1$ and $\fH^2$ at our
disposal, we summarize the reduction procedure that concerns us.
Defining a reduction requires specifying the symmetry
and the constraints to be used.
As our symmetry group, we take the direct product $K_+ \times K_+$ and let it act
on the phase space by the map
\begin{equation}
\Phi\colon K_+ \times K_+ \times \cM \to \cM,
\qquad
(\eta_L, \eta_R, g) \mapsto \eta_L g \eta_R^{-1}.
\label{T28}\end{equation}
This is a Poisson action if $K_+$ is endowed with its
natural multiplicative Poisson structure inherited from (\ref{T14}) \cite{STS,STSlectures}.
The momentum map generating this action
sends $g$ to the pair of matrices given by the block-diagonal parts of $b_L$ and $b$ (\ref{T2}).
The constraints restrict the value of the momentum map to a suitable constant.
To define the constraints, we fix a positive number $\mu$ and a vector $\hat v \in \C^n$,
and let $\sigma$ denote the unique upper triangular matrix with positive diagonal entries
that satisfies
\begin{equation}
\sigma \sigma^\dagger = \alpha^2 \1_n +
\hat v \hat v^\dagger, \quad \hat v^\dagger \hat v = \alpha^2 (\alpha^{-2n} - 1), \quad \alpha:= e^{-\mu}.
\label{T29}\end{equation}
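The factor $\sigma$ is an upper triangular Cholesky-type factor of $\alpha^2\1_n+\hat v\hat v^\dagger$ and is easily computed numerically; the flip trick below converts the standard lower triangular Cholesky factor into an upper triangular one. An illustrative Python sketch, where the particular $\hat v$ is merely one admissible sample:

```python
import numpy as np

def sigma_factor(mu, vhat):
    """Upper triangular sigma with positive diagonal solving
    sigma sigma^dagger = alpha^2 1_n + vhat vhat^dagger, cf. (T29)."""
    n = vhat.size
    alpha = np.exp(-mu)
    M = alpha**2 * np.eye(n) + np.outer(vhat, vhat.conj())
    J = np.eye(n)[::-1]                  # anti-diagonal flip, J = J^{-1}
    L = np.linalg.cholesky(J @ M @ J)    # lower triangular, positive diagonal
    return J @ L @ J                     # upper triangular, positive diagonal

# one admissible sample vhat, with |vhat|^2 = alpha^2 (alpha^{-2n} - 1)
mu, n = 0.3, 4
alpha = np.exp(-mu)
vhat = np.sqrt(alpha**2 * (alpha**(-2*n) - 1.0) / n) * np.ones(n)
sigma = sigma_factor(mu, vhat)
```

The norm condition on $\hat v$ guarantees $\det(\sigma\sigma^\dagger)=1$, and hence $\det\sigma = 1$, as needed for $b_L$ below to lie in $B$.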
Then we impose the `left-handed' momentum map constraint forcing $b_L$ to have the form
\begin{equation}
b_L =
\begin{bmatrix} y^{-1}\sigma & \chi_L \\
0 & y \1_n \end{bmatrix},
\qquad y = e^{-u} ,
\label{T30}\end{equation}
and also impose the `right-handed' momentum map constraint by requiring that $b$ reads
\begin{equation}
b =
\begin{bmatrix} x \1_n & \chi \\
0 & x^{-1} \1_n \end{bmatrix},\qquad
x = e^{-v}
\label{T31}\end{equation}
with real parameters $u$ and $v$ subject to
$\vert u \vert \neq \vert v \vert$.
We use a $2\times 2$ block-matrix notation corresponding to $I$ (\ref{T10}), and thus $\chi_L$, $\chi$ are
$n\times n$ complex matrices.
The submanifold $\cM_0$ of $\cM$ defined by these momentum constraints,
\begin{equation}
\cM_0=\{g\in\cM\ |\ b_L(g)\ \hbox{and}\ b(g)\ \hbox{determined by
(\ref{T30}) and (\ref{T31})}\},
\label{defM0}\end{equation}
is stable under the action of the
`gauge group' $K_+(\sigma) \times K_+$, where
\begin{equation}
K_+(\sigma):= \{ \eta_L\in K_+ \mid \eta_L \diag(\sigma \sigma^\dagger, \1_n) \eta_L^{-1}
= \diag(\sigma \sigma^\dagger, \1_n) \}.
\label{T32}\end{equation}
According to general principles, the reduced phase space $\cM_\red$ is the quotient
\begin{equation}
\cM_\red = \cM_0/ (K_+(\sigma) \times K_+).
\label{T33}\end{equation}
It was shown in \cite{FG} that the `effective gauge group'
\begin{equation}
(K_+(\sigma) \times K_+)/ \Z_{2n}^\diag
\label{T34}\end{equation}
acts \emph{freely} on $\cM_0$, where
$\Z_{2n}^\diag$ is the subgroup that acts trivially:
\begin{equation}
\Z_{2n}^\diag =\{ (\zeta \1_{2n}, \zeta \1_{2n}) \in K_+(\sigma)\times K_+ \mid \zeta \in \Z_{2n}\}.
\label{T35}\end{equation}
In other words, $\pi_0\colon \cM_0 \to \cM_\red$ is a principal fibre bundle with structure group (\ref{T34}).
It follows that $\cM_\red$ is a smooth (and analytic) symplectic manifold,
and we let $\omega_\red$ denote its symplectic form that descends from $\omega_\cM$.
It is readily seen that all elements of
$\fH^1$ and $\fH^2$
are invariant with respect to the group action
$\Phi$ (\ref{T28}) on $\cM$,
and thus they give rise to two Abelian Poisson algebras
$\fH_\red^1$ and $\fH_\red^2$
on the symplectic manifold $(\cM_\red, \omega_\red)$.
Referring to equations (\ref{T9}), (\ref{T13}) and using the embedding $\iota_0\colon\cM_0\to \cM$ as in Figure 1,
the defining relations of the reduced Hamiltonians of our principal interest are
\begin{equation}
\cH_j^\red\circ \pi_0 = \cH_j \circ \iota_0, \quad \hat \cH_j^\red\circ \pi_0 = \hat \cH_j \circ \iota_0,
\label{redhams}\end{equation}
and of course $ \pi_0^*(\omega_\red)= \iota_0^* (\omega_\cM)$.
In the spirit of the general scheme outlined in the Introduction,
our task now is to construct a suitable pair of models of $\cM_\red$.
One model was found previously, and we briefly recall it next.
\subsection{The model $\hat M$ of $\cM_\red$ and its consequences}
The construction presented in this subsection is extracted from \cite{FG}, where details can be found.
As the first main step,
a parametrization of a dense open submanifold of the reduced phase space
by the domain $\widehat\cD_+ \times \T^n$ (\ref{I4}) was constructed, where the variables $\hat\lambda_i$ are related
to the invariant $\Delta$ (\ref{T15}) formed from $k$ in
$g=kb\in \cM_0$ by setting
\begin{equation}
\sin q_i = \exp({\hat\lambda_i}),
\label{T36}\end{equation}
using that $q_n>0$ for $g\in \cM_0$.
It proves useful to combine the
$\hat\lambda_i\in{\mathbb R}_{<0}$ and their
canonical conjugates $\hat\theta_i\in{\mathbb R}/2\pi{\mathbb Z}$
into
complex variables by defining
\begin{equation}
\cZ_j(\hat\lambda,\exp({\ri\hat\theta}))
=(\hat\lambda_j-\hat\lambda_{j+1}-\mu)^{\tfrac{1}{2}}\prod_{k=j+1}^n \exp({\ri\hat\theta_k}),
\quad j=1,\dots,n-1,
\label{T37}\end{equation}
and
\begin{equation}
\cZ_n(\hat\lambda,\exp({\ri\hat\theta}))=(s-{\hat\lambda_1})^{\tfrac{1}{2}}\prod_{k=1}^n\exp({\ri\hat\theta_k})\quad\hbox{with}
\quad s=\min(0, v-u).
\label{T38}\end{equation}
The variable $\cZ$ is naturally extended to run over the whole of $\C^n$, equipped with
the symplectic form
\begin{equation}
\omega_\can = \ri\sum_{j=1}^{n}d \cZ_j\wedge d \cZ_j^*.
\label{T39}\end{equation}
The domain $\widehat\cD_+ \times \T^n$ with (\ref{I5}) is
symplectomorphic to the dense open subset $(\C^*)^n$ of $\C^n$.
The main result of \cite{FG} says that
\begin{equation}
(\hat M, \hat \omega) \equiv (\C^n, \omega_\can)
\label{T40}\end{equation}
is a model of the \emph{full reduced phase space $(\cM_\red, \omega_\red)$ (\ref{T33})}.
In fact, one can construct a symplectomorphism
\begin{equation}
\hat \Psi\colon \cM_\red \to \hat M, \qquad \hat \Psi^*(\hat \omega) = \omega_\red.
\label{T41}\end{equation}
The $n$-tuples $(\hat\lambda_1,\dots, \hat\lambda_n)$ and $(\vert \cZ_1\vert^2,\dots, \vert \cZ_n\vert^2)$
yield analytic maps from $\hat M$ to $\R^n$, which
are related by an affine $\GL(n,\Z)$ transformation. Explicitly, we have
\begin{equation}
\hat\lambda_j(\cZ)=s-(j-1)\mu - |\cZ_n|^2 -\sum_{l=1}^{j-1} \vert \cZ_l \vert^2,
\qquad j=1,\dots, n.
\label{T42}\end{equation}
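As a quick consistency check, the change of variables (\ref{T37})--(\ref{T38}) and the affine relation (\ref{T42}) invert each other. A numerical round trip in Python; the sample values are ours and merely satisfy $\hat\lambda_j-\hat\lambda_{j+1}>\mu$ and $\hat\lambda_1<s$:

```python
import numpy as np

def Z_from(lam, theta, mu, s):
    """The complex variables (T37)-(T38) from (lambda_hat, theta_hat)."""
    n = lam.size
    Z = np.empty(n, dtype=complex)
    for j in range(n - 1):
        Z[j] = np.sqrt(lam[j] - lam[j+1] - mu) * np.exp(1j * theta[j+1:].sum())
    Z[n-1] = np.sqrt(s - lam[0]) * np.exp(1j * theta.sum())
    return Z

def lam_from(Z, mu, s):
    """Action variables recovered from |Z_j|^2 via the affine relation (T42)."""
    n = Z.size
    a2 = np.abs(Z)**2
    return np.array([s - j*mu - a2[n-1] - a2[:j].sum() for j in range(n)])

mu, s = 0.25, 0.0
lam = np.array([-0.5, -1.0, -1.4])       # gaps exceed mu, and lam[0] < s
theta = np.array([0.3, 1.1, -2.0])
assert np.allclose(lam_from(Z_from(lam, theta, mu, s), mu, s), lam)
```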
The functions $\vert \cZ_j \vert^2$ generate the obvious Hamiltonian action
of the torus $\T^n$ on $\hat M=\C^n$. Namely, the flows of $\vert \cZ_1\vert^2,\dots, \vert \cZ_n\vert^2$
with time parameters
$t_1,\dots, t_n$ act by the map
\begin{equation}
(\cZ_1,\dots, \cZ_n) \mapsto (\cZ_1e^{\ri t_1},\dots, \cZ_n e^{\ri t_n}).
\label{T43}\end{equation}
The origin $\cZ_1 = \dots = \cZ_n =0$ is the unique fixed point of this action.
Applying (\ref{T15}) and (\ref{T36}), the reduced Hamiltonians
$\cH_j^\red\in C^\infty(\cM_\red)$ that descend from the functions
$\cH_j$ (\ref{T13}) are found to
take the following form in terms of the model $\hat M$:
\begin{equation}
\cH_j^\red\circ \hat \Psi^{-1} = \sum_{i=1}^n P_j(\exp({ \hat\lambda_i})),
\label{T44}\end{equation}
where $P_j$ is the polynomial determined by the relations
\begin{equation}
\cos(2j q_a) = P_j( \exp({ \hat\lambda_a})),
\quad
\exp({\hat\lambda_a})=\sin q_a \quad\hbox{for}\quad 0< q_a \leq \frac{\pi}{2}.
\label{T45}\end{equation}
That is,
\begin{equation}\begin{aligned}
P_j(\exp({\hat\lambda_a})) &= T_j\bigl(\cos(2 q_a)\bigr) = T_j\bigl(1-2\sin^2(q_a)\bigr) =
(-1)^jT_j\bigl(2\sin^2(q_a)-1\bigr) \\
&= (-1)^jT_{2j}\bigl(\sin(q_a)\bigr)=(-1)^jT_{2j}\bigl(\exp({\hat\lambda_a})\bigr) ,
\end{aligned}
\label{T46}\end{equation}
where $\{T_m(x) \mid m=0,1,2,\dots\}$ are Chebyshev polynomials of the first kind, characterized by
$
T_m(\cos\varphi)=\cos(m\varphi)$.
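The chain of identities (\ref{T45})--(\ref{T46}) is readily verified numerically, using $T_m(x)=\cos(m\arccos x)$ for $x\in[-1,1]$, which applies here since $e^{\hat\lambda_a}=\sin q_a\in(0,1]$. An illustrative Python check:

```python
import numpy as np

def T(m, x):
    # Chebyshev polynomials of the first kind on [-1, 1]
    return np.cos(m * np.arccos(x))

def P(j, x):
    # P_j(x) = (-1)^j T_{2j}(x), cf. (T46)
    return (-1)**j * T(2*j, x)

# verify cos(2 j q_a) = P_j(sin q_a) on a grid of q in (0, pi/2]
q = np.linspace(0.01, np.pi/2, 200)
for j in range(1, 5):
    assert np.allclose(np.cos(2*j*q), P(j, np.sin(q)), atol=1e-10)
```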
Altogether, we see that the $\hat\lambda_j$, or equivalently the $\vert \cZ_j\vert^2$,
are action variables
for the Liouville integrable system defined by the reduced Hamiltonians $\cH_1^\red,\dots, \cH_n^\red$.
The subset of $\hat M$ on which $\prod_{j=1}^n \cZ_j=0$ is mapped by (\ref{T42})
onto the boundary of the closure of $\widehat\cD_+$, with
$\cZ=0$ corresponding to the vertex
\begin{equation}
\hat\lambda_j= s - (j-1)\mu, \qquad j=1,\dots, n, \quad s=\min(0,v-u).
\label{T47}\end{equation}
The point $\cZ=0$ is a common equilibrium for the Hamiltonians $\cH_j^\red\circ \hat \Psi^{-1}$.
Moreover, $\cH_1^\red\circ \hat \Psi^{-1}$ reaches its global minimum on $\hat M$ at $\cZ=0$.
This follows from the fact that $\cos(2 q_a)$ is monotonically
decreasing for $ 0 < q_a \leq \frac{\pi}{2}$ and the joint maximum of the $q_a$ for $a=1,\dots, n$ is attained
at the vertex (\ref{T47}) corresponding to $\cZ=0$.
On the dense open subset parametrized by $\widehat\cD_+\times \T^n$, the flow generated by
$\cH_j^\red\circ \hat \Psi^{-1}$ (\ref{T44})
is linear
\begin{equation}
\hat\lambda_a(t_j) = \hat\lambda_a(0),\quad
\hat\theta_a(t_j) = \hat\theta_a(0) + t_j \hat \Omega_{j,a}(\hat\lambda_a),\qquad a=1,\dots, n,
\label{T49}\end{equation}
with the frequencies
\begin{equation}
\hat \Omega_{j,a}(\hat\lambda_a) = \frac{\partial P_j(\exp({ \hat\lambda_a}))}{\partial \hat\lambda_a}
= 2(-1)^j j\, \exp({\hat\lambda_a})U_{2j-1}\bigl(\exp({\hat\lambda_a})\bigr),
\label{T50}\end{equation}
where $\{U_m(x)\mid m=0,1,2,\dots\}$ are Chebyshev polynomials of the second kind, characterized by
$
U_{m}(\cos\varphi)={\sin((m+1)\varphi)}/{\sin(\varphi)}
$.
It is obvious that for generic $\hat\lambda$ and any fixed $j$ the frequencies
\begin{equation}
\hat \Omega_{j,1}(\hat\lambda_1),\dots, \hat \Omega_{j,n}(\hat\lambda_n)
\label{T51}\end{equation}
are linearly independent over the field of rational numbers, and therefore
the flow of $\cH_j^\red\circ\hat \Psi^{-1}$ densely fills the generic Liouville tori.
This implies that every element in the commutant of
$\cH_j^\red\circ \hat \Psi^{-1}$ in $C^\infty(\hat M)$ is a
function of
the action variables $\hat\lambda_1,\dots, \hat\lambda_n$. In other words, each
$\cH_j^\red\circ \hat \Psi^{-1}$ is a non-degenerate
completely integrable Hamiltonian.
On the full phase space $\hat M$, the flow generated by the function
$\cH_j^\red\circ \hat \Psi^{-1}$ has the following form:
\begin{eqnarray}
&&\cZ_k(t_j) = \cZ_k(0)
\exp\left(\ri t_j \left(\hat \Omega_{j,k+1}(\hat\lambda_{k+1}) +\dots + \hat\Omega_{j,n}(\hat\lambda_n)\right)\right),
\quad k=1,\dots, n-1,\nonumber\\
&&\cZ_n(t_j) = \cZ_n(0)
\exp\left(\ri t_j\left(\hat \Omega_{j,1}(\hat\lambda_1) +\dots + \hat\Omega_{j,n}(\hat\lambda_n)\right)\right),
\label{T52}\end{eqnarray}
where here $\hat\lambda$ is evaluated on the initial value $\cZ(0)$.
As for the reduced Hamiltonians $\hat H_j := \hat \cH_j^\red \circ \hat\Psi^{-1}$
descending from $\hat \cH_j$ (\ref{T9}), the first one, $\hat H \equiv \hat H_1$, takes
the Ruijsenaars--Schneider--van Diejen (RSvD) type many-body form (\ref{I6})
in terms of the variables $(\hat\lambda, \hat\theta)$. This Hamiltonian
as well as all members of its commuting family yield
analytic functions on the full reduced phase space modelled by $\hat M$.
Explicit formulae can be obtained following the lines of \cite{FG}.
By using its analyticity and the asymptotic behavior where the particles are far apart,
it can be shown that the determinant
$\det\big[d_{\hat \theta}\hat H_1,d_{\hat \theta}\hat H_2,\dots,d_{\hat \theta}\hat H_n\big]$
is non-zero on a dense open subset of $\widehat\cD_+ \times \T^n$.
This not only implies the Liouville integrability of the Hamiltonians $\hat H_j$,
but it
shows also that the $2n$ functions
$\hat\lambda_j\in\hat \fP$ and $\hat H_j\in \hat \fH$, for $j=1,\dots, n$, are functionally independent.
In particular, the Hamiltonian vector fields of the
elements of $\hat \fP$ and $\hat \fH$ together span the tangent space $T_m \hat M$
at generic points $m\in \hat M$.
As a consequence of the formula (\ref{T44}), $\{\hat \lambda_j\}_{j=1}^n$
and $\{\cH_j^\red\circ \hat \Psi^{-1}\}_{j=1}^n$ represent
alternative generating sets for the algebra $\hat \fP$ of the global position variables.
\medskip\noindent
{\bf Remark 2.1.}
In \cite{FG} the model $\hat M$ was obtained under the assumptions that $v>u$ and $\vert u \vert \neq \vert v \vert$,
but now we find that essentially nothing
changes if only $\vert u\vert \neq \vert v\vert$ is assumed.
The condition $\hat\lambda_1 \leq s=\min(0,v-u)$ arises from the requirement that all entries
of the $n\times n$ diagonal matrix given by
$K_{22} K_{22}^\dagger = e^{-2u}\1_n - (\sin q)^2 e^{-2v}$
in equation (3.8)
of \cite{FG} must be non-negative.
Another difference is that \cite{FG}
defined $z_1,\dots, z_{n-1}$
in the same way as (\ref{T37}), but introduced a variable $z_n$
instead of $\cZ_n$ (\ref{T38}) by
\begin{equation}
z_n(\hat\lambda, \exp({\ri \hat\theta})) = (e^s-e^{\hat\lambda_1})^{\tfrac{1}{2}}\prod_{k=1}^n \exp({\ri\hat\theta_k}),
\label{T53}\end{equation}
which varies in the open disc $\D_r$ of radius $r=e^{s/2}$, and
is related to $\cZ_n\in \C$ by an analytic diffeomorphism.
\section{Constructing the model $M$ of $\cM_\red$: general outline}
\label{sec:3}
The model $\hat M$ of $\cM_\red$ was obtained
by explicitly constructing a global cross-section
of the gauge orbits in $\cM_0$. The construction of the new model $M$ that
we achieve in this paper
is somewhat more complicated. We here collect the main concepts
that will appear in the construction, hoping that this will enhance readability.
The reader is recommended to keep an eye on Figure 2, placed at the end of the section.
We shall describe the quotient $\cM_\red$ (\ref{T33}) by exhibiting a new set of unique representatives
for each orbit of the `gauge group' $K_+(\sigma) \times K_+$ acting on $\cM_0$.
We now display $\cM_\red$ as
\begin{equation}
\cM_\red = K_+(\sigma) \backslash \cM_0/ K_+ \, ,
\label{P1}\end{equation}
emphasising that $(\eta_L, \eta_R) \in K_+(\sigma) \times K_+$ acts by left- and
by right-multiplication, respectively.
We shall arrange taking the quotient into convenient consecutive steps, using in addition to the obvious direct product
structure of the gauge group also the fact that $K_+(\sigma)$ itself can be decomposed as
the direct product
\begin{equation}
K_+(\sigma) = K_+(\hat w) \times \T_1,
\label{P2}\end{equation}
where
\begin{equation}
\T_1 = \{\hat \gamma:= \diag(\gamma \1_n, \gamma^{-1} \1_n)\mid \gamma\in \U(1)\},
\label{P4}\end{equation}
\begin{equation}
K_+(\hat w)=\{ \kappa \in K_+\mid \kappa \hat w = \hat w \} \quad\hbox{with}\quad \hat
w= (\hat v,0)^T\in \C^{2n},
\label{P3}\end{equation}
and $\hat v\in \C^n$ is the fixed vector in (\ref{T29}).
It is easy to check that every element of $K_+(\sigma)$ can be written as a product of elements of these
two mutually commuting subgroups, which intersect only in the identity.
As in \cite{FM}, we call $b\in B$ `quasi-diagonal' if it has the
form
\begin{equation}
b =
\begin{bmatrix} e^{-v} \1_n & \beta \\
0 & e^v \1_n \end{bmatrix}
\quad\hbox{with}\quad
\beta= \diag(\beta_1, \ldots, \beta_n), \quad \beta_1 \geq \beta_2\geq \dots \geq \beta_n \geq 0,
\label{P5}\end{equation}
and define the subset $\cM_1$ of the `constraint surface' $\cM_0\subset \cM$ by
\begin{equation}
\cM_1:= \{ g = k b\in \cM_0\mid \hbox{$b$ is quasi-diagonal}\}.
\label{P6}\end{equation}
The `left-handed' gauge transformations by $K_+(\sigma)$ map $\cM_1$ to itself and by using this
we introduce the quotient
\begin{equation}
\cN:= K_+(\hat w)\backslash \cM_1.
\label{P7}\end{equation}
It will be useful to identify $\cN$ with the image of the map
\begin{equation}
\cM_1 \ni g= k b \mapsto (w(k), L(k), \beta)
\quad \hbox{with}\quad w(k):= k^{-1} \hat w,\,\, L(k):= k^{-1} I k I.
\label{P8}\end{equation}
Such an identification is possible since
$(w(k),L(k))=(w(k'),L(k'))$ for $k, k'\in K$ if and only if $k'k^{-1}\in K_+(\hat w)$.
Directly from the definitions, we have
$K_+(\sigma)\backslash\cM_1 =\T_1\backslash\cN$, where $\hat \gamma\in \T_1$ (\ref{P4}) acts according to
$w(\hat \gamma k)=\gamma^{-1} w(k)$, because
of the form of $\hat w$ in (\ref{P3}), while $L(k)$
and $\beta$ are unchanged.
The gauge transformation (\ref{T28}) by $(\eta_L, \eta_R)\in K_+(\sigma)\times K_+$ acts on the $k$ and $b$ components
of $g=kb\in \cM_0$ by
\begin{equation}
(k,b) \mapsto (\eta_L k \eta_R^{-1}, \eta_R b \eta_R^{-1}),
\label{P9}\end{equation}
and thus operates on the constituent $\chi$ (\ref{T31}) of $b$ according to
\begin{equation}
\chi \mapsto \eta_R(1) \chi \eta_R(2)^{-1},
\label{P10}\end{equation}
where we employ the block-matrix notation
\begin{equation}
\eta_R=
\begin{bmatrix} \eta_R(1) & 0 \\
0 & \eta_R(2) \end{bmatrix} ,
\quad
\eta_R(1), \eta_R(2)\in \U(n), \quad \det(\eta_R(1) \eta_R(2)) =1.
\label{P11}\end{equation}
Recalling the singular value decomposition of $n\times n$ matrices,
we observe from (\ref{P10}) that every element $g\in \cM_0$ can be gauge transformed into $\cM_1$,
and the components $\beta_i$ of the resulting element of $\cM_1$ are uniquely determined by $g$.
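This gauge fixing is precisely the singular value decomposition: writing $\chi = W_1 \beta W_2^\dagger$ with unitary $W_1, W_2$ and decreasing singular values $\beta$, the choices $\eta_R(1)=W_1^\dagger$, $\eta_R(2)=W_2^\dagger$ bring $\chi$ to quasi-diagonal form, and the determinant constraint in (\ref{P11}) is restored by a common scalar phase, which leaves $\beta$ unchanged. A numerical sketch (illustrative only):

```python
import numpy as np

rng = np.random.default_rng(1)
n = 3
chi = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))

# singular value decomposition chi = W1 diag(beta) W2^dagger;
# numpy returns the singular values beta in decreasing order
W1, beta, W2h = np.linalg.svd(chi)

eta1 = W1.conj().T          # eta_R(1)
eta2 = W2h                  # eta_R(2), so that eta2^{-1} = W2
# restore det(eta1 eta2) = 1 by a common scalar phase; beta is unchanged
zeta = np.linalg.det(eta1 @ eta2) ** (-1.0 / (2*n))
eta1, eta2 = zeta * eta1, zeta * eta2

assert np.allclose(eta1 @ chi @ np.linalg.inv(eta2), np.diag(beta))
assert np.isclose(np.linalg.det(eta1 @ eta2), 1.0)
```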
To proceed further, we restrict ourselves to the `regular part' defined
by the strict inequalities
\begin{equation}
\beta_1 > \beta_2 > \dots >\beta_n >0.
\label{P12}\end{equation}
We call such $\beta$ and the corresponding quasi-diagonal $b$ \emph{regular}, and apply
the notations $\cM_1^\reg$, $\cM_0^\reg$, $\cN^\reg$ and $\cM_\red^\reg$ for the corresponding subsets.
Specifically, $\cM_0^\reg$ consists of the elements of $\cM_0$ that can be
gauge transformed into $\cM_1^\reg$, $\cN^\reg = K_+(\hat w)\backslash \cM_1^\reg$ and
$\cM_\red^\reg = K_+(\sigma)\backslash\cM_0^\reg/K_+$.
\emph{ Later it will emerge that in fact $\cM_0 = \cM_0^\reg$.}
If $\beta$ is regular, then the corresponding $b$ in (\ref{P5}) is fixed by the following
Abelian subgroup, $\T_{n-1}$, of $K_+$:
\begin{equation}
\T_{n-1}:=\{ \delta= \diag(\delta_1,\dots, \delta_n, \delta_1,\dots, \delta_n)\mid
\delta_i\in \U(1),\quad \prod_{i=1}^n \delta_i^2=1\}.
\label{P13}\end{equation}
We shall also use the subgroup of $\T_1 \times \T_{n-1}$ given by
\begin{equation}
\tilde \Z_{2n}^\diag =
\{ (\hat \zeta , \zeta \1_{2n})\mid \zeta\in \Z_{2n}\},
\label{P14}\end{equation}
where $\Z_{2n}$ denotes the group of $(2n)^{\mathrm{th}}$ roots of unity and we employ the notation (\ref{P4}).
Defining
\begin{equation}
\T_n := \{ \tau=\diag(\tau_1,\dots, \tau_n, \tau_1,\dots, \tau_n)\mid \tau_i\in \U(1)\},
\label{P15}\end{equation}
we have the isomorphism
\begin{equation}
\T_{n} \simeq (\T_1 \times \T_{n-1})/\tilde \Z_{2n}^\diag ,
\label{P16}\end{equation}
which is provided by the map
\begin{equation}
\tau_i = \gamma^{-1} \delta_i
\label{P17}\end{equation}
using the above parametrizations of the elements of $\T_1$ (\ref{P4}), $\T_{n-1}$ (\ref{P13})
and $\T_n$ (\ref{P15}).
After these preparations, we come to the main points.
First, we let $\delta\in \T_{n-1}$ act on $\cM_1^\reg \times K_+$ by
\begin{equation}
\delta\colon (g, \eta) \mapsto (g \delta^{-1}, \delta \eta)
\label{P18}\end{equation}
and also let $\eta_R\in K_+$ act by
\begin{equation}
\eta_R \colon ( g, \eta) \mapsto (g, \eta \eta_R^{-1}).
\label{P19}\end{equation}
Then we introduce the identification
\begin{equation}
\cM_0^\reg \longleftrightarrow (\cM_1^\reg \times K_+)/\T_{n-1}
\label{P20}\end{equation}
by means of the map
\begin{equation}
(\cM_1^\reg \times K_+)\ni (g, \eta) \mapsto g\eta\in \cM_0^\reg,
\label{P21}\end{equation}
which is invariant with respect to the action (\ref{P18}) of $\T_{n-1}$.
Since the actions of $\T_{n-1}$ and $K_+$ on $\cM_1^\reg \times K_+$ commute, we have
\begin{equation}
\cM_0^\reg/K_+ = ((\cM_1^\reg \times K_+)/ \T_{n-1})/ K_+ =((\cM_1^\reg \times K_+)/K_+)/ \T_{n-1}= \cM_1^\reg/\T_{n-1},
\label{P22}\end{equation}
where on the right-end we refer to the action of $\T_{n-1}$ on $\cM_1^\reg$ given by
$\cM_1^\reg \ni k b \mapsto k b\delta^{-1}= k\delta^{-1} b$.
We continue by applying the decomposition (\ref{P2}) of $K_+(\sigma)$ to deduce the identification
\begin{equation}
\cM_\red^\reg = ( K_+(\hat w)\times\T_1 )\backslash \cM_1^\reg / \T_{n-1} = \T_1 \backslash \cN^\reg /\T_{n-1},
\label{P23}\end{equation}
where we have taken into account that $\cN^\reg = K_+(\hat w)\backslash \cM_1^\reg$ (see (\ref{P7})).
The action of $\T_1 \times \T_{n-1}$ on $\cN^\reg$ factors through the homomorphism (\ref{P17}).
The induced action of $\T_n$ (\ref{P15}) on $\cN^\reg$ is given,
in terms of the triples $(w, L, \beta)$ in (\ref{P8}) representing the elements of $\cN^\reg$, by the formula
\begin{equation}
(w,L,\beta) \mapsto (\tau w, \tau L \tau^{-1},\beta),\qquad \forall \tau\in \T_n.
\label{P26}\end{equation}
One sees this from the definitions in (\ref{P8}) and in (\ref{P17}) using that
$(\hat \gamma, \delta) \in \T_1 \times \T_{n-1}$ sends $g = k b\in \cM _1^\reg$ to
$(\hat \gamma k \delta^{-1}) b\in \cM_1^\reg$.
The final outcome is the following identification:
\begin{equation}
\cM_\red^\reg = \cN^\reg/\T_n.
\label{P25}\end{equation}
The action (\ref{P26}) of $\T_n$ on $\cN^\reg$
is actually a free action. This is a consequence of
the fact \cite{FG} that
the action of $(K_+(\sigma) \times K_+)/ \Z_{2n}^\diag$ on $\cM_0$ is free.
\medskip\noindent
{\bf Remark 3.1.}
Every element of $\cM_0$ can be mapped into $\cM_1$ by a gauge transformation, which is
unique up to residual gauge transformations acting on $\cM_1$.
It is a useful fact that \emph{locally},
in a neighbourhood of any fixed element of $\cM_0^\reg$, a well-defined map $f_0$ can be chosen,
\begin{equation}
f_0\colon \cM_0^\reg \ni g \mapsto g_1\in \cM_1^\reg,
\label{f01}\end{equation}
in such a manner that the gauge transformed matrix $g_1$ depends
analytically on the local coordinates on the manifold $\cM_0^\reg$.
We next explain this statement.
Let $P^\reg$ denote the manifold of $n\times n$ Hermitian matrices having distinct, positive eigenvalues,
and $G^\reg$ denote the open subset of $\GL(n,\C)$ diffeomorphic to $P^\reg \times \U(n)$ by
the polar decomposition, presented as
\begin{equation}
\chi = p(\chi) u(\chi),
\qquad
\chi \in G^\reg,\, p(\chi)\in P^\reg,\, u(\chi)\in \U(n).
\label{P27}\end{equation}
Here, $p(\chi)$ and $u(\chi)$ depend analytically on $\chi$.
Let $D^\reg\subset P^\reg$ denote the manifold of real diagonal matrices
$\beta = \diag(\beta_1,\dots, \beta_n)$ satisfying $\beta_1 > \dots > \beta_n >0$.
We recall that $P^\reg$ is diffeomorphic to $D^\reg \times (\U(n)/\T(n))$ by the correspondence
\begin{equation}
p = \xi(p) \beta(p) \xi(p)^{-1}
\quad\hbox{where} \quad \xi(p) \T(n) \in \U(n)/\T(n)
\label{P28}\end{equation}
with the standard maximal torus $\T(n)< \U(n)$.
Invoking the fact \cite{KN} that $\U(n)$ is a locally trivial bundle over the coset space $\U(n)/\T(n)$,
we see that $\xi(p)\in \U(n)$ in (\ref{P28}) can be \emph{locally}
chosen to be a well-defined, smooth function of $p$.
Now introduce $\zeta(\chi):= (\det (\xi(p(\chi))^{-2} u(\chi)))^{\frac{1}{2n}}$,
choosing it so as to give a smooth function locally around a fixed $\chi$ at hand.
As the final outcome, a locally well-defined map $f_0$ (\ref{f01}) is obtained
as follows:
\begin{equation}
f_0\colon g \mapsto g_1(g) = g \eta_R(g)^{-1}
\quad \hbox{with}\quad
\eta_R(g)^{-1}=
\zeta(\chi) \begin{bmatrix} \xi(p(\chi)) & 0 \\
0 & u(\chi)^{-1} \xi(p(\chi)) \end{bmatrix}\in K_+,
\label{P29}\end{equation}
where $\chi:=\chi(g)$ is the upper-right block of $b$ in $g=kb$. Since $\chi$ depends analytically on $g$,
the local choices guarantee that $g_1(g)$ depends analytically
on the coordinates on $\cM_0^\reg$.
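The ingredients of (\ref{P27})--(\ref{P29}) can be illustrated numerically: $p(\chi)$ is the Hermitian square root of $\chi\chi^\dagger$, $\xi(p)$ diagonalizes it, and then $\xi^{-1}\chi u^{-1}\xi = \beta$ recovers the quasi-diagonal form. A Python sketch; the scalar $\zeta$ is omitted, since it drops out of this check:

```python
import numpy as np

rng = np.random.default_rng(2)
n = 3
chi = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))

# polar decomposition chi = p u, cf. (P27): p = (chi chi^dagger)^{1/2}
w, V = np.linalg.eigh(chi @ chi.conj().T)
w, V = w[::-1], V[:, ::-1]                      # order eigenvalues decreasingly
p = V @ np.diag(np.sqrt(w)) @ V.conj().T
u = np.linalg.inv(p) @ chi                      # the unitary factor

# diagonalization p = xi beta xi^{-1}, cf. (P28)
beta, xi = np.diag(np.sqrt(w)), V

assert np.allclose(p @ u, chi)
assert np.allclose(u @ u.conj().T, np.eye(n))   # u is unitary
assert np.allclose(xi.conj().T @ chi @ np.linalg.inv(u) @ xi, beta)
```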
We remark in passing that $\beta_n^2$ resulting from (\ref{P10}) is the smallest eigenvalue of
$\chi \chi^\dagger$, and therefore
$\beta_n$ is not a smooth function on $\cM_0$ at those points where it vanishes.
As we shall see later (from equation (\ref{C12}) and Theorem 5.6), the assumption (\ref{I13}) excludes this eventuality.
\medskip
In the above we established
the various identifications only at the set-theoretic level.
Although we shall not rely on it technically,
we wish to note
that all above identifications hold
in the category of smooth (and analytic) manifolds as well.
We next prove a lemma, which implies that $\cM_1^\reg$ is an embedded submanifold of $\cM_0$;
the latter is known from \cite{FG} to be an embedded submanifold of $\cM$.
Utilizing the assumption (\ref{I13}), it will be shown later that $\cM_1^\reg= \cM_1$.
Then it follows that $\cM_1 \subset \cM_0$ represents a
reduction of the structure group
(\ref{T34}) of the principal fibre bundle $\cM_0$ over $\cM_\red$ to the subgroup
$(K_+(\sigma) \times \T_{n-1})/\Z_{2n}^\diag$, and $\cN$ is a principal fibre bundle over $\cM_\red$
with structure group
$\T_n$, in the standard sense \cite{KN}.
\medskip
\noindent{\bf Lemma 3.2.} \emph{Define ${\tilde \cM}_1 \subset \cM_0$ to be the common level set,
at zero value, of the analytic functions $\phi_\xi$ on $\cM_0$ given by
\begin{equation}
\phi_\xi(g) = \Im \tr (\xi \chi),
\label{P30}\end{equation}
where $\xi$ is any $n\times n$ complex matrix with real diagonal, and we use $\chi$ in (\ref{T31}).
Let ${\tilde \cM}_1^\reg \subset {\tilde \cM}_1$ consist of the elements $g=kb$ with $b$ of the form
(\ref{P5}), but $\beta\in \R^n$ now restricted by $\beta_i \neq \beta_j$ for $i\neq j$
and $\prod_{i=1}^n \beta_i \neq 0$.
Then the exterior derivatives of the functions $\phi_\xi$ are linearly independent at each point of
${\tilde \cM}_1^\reg$.}
\medskip\noindent{\bf Proof.}
Take an arbitrary $g\in \tilde \cM_1^\reg$ and note that the infinitesimal
gauge transformations by the elements of $\mathrm{Lie}(K_+)$ generate a $(2n-1)n$ dimensional
subspace of the tangent space $T_g \cM_0$; this dimension coincides with that of the
real linear space of the matrices $\xi$.
A general element of $\mathrm{Lie}(K_+)$ is a matrix of the form
\begin{equation}
\diag(X,Y),
\qquad
X,Y\in \mathrm{u}(n), \quad \tr(X+Y)=0,
\label{P31}\end{equation}
and denoting the induced tangent vector by $V_{(X,Y)}(g)$, we find the derivative
\begin{equation}
\langle d\phi_\xi(g), V_{(X,Y)}(g) \rangle = \Im \tr\left(\xi ( X \beta - \beta Y )\right), \qquad
\forall g\in \tilde\cM_1^\reg.
\label{P32}\end{equation}
One can easily check that this derivative
vanishes for every $(X,Y)$
if and only if $\xi=0$.
This means that the exterior derivatives $d\phi_\xi(g)$ span a $(2n-1)n$ dimensional subspace
of $T^*_g \cM_0$ at each $g\in \tilde \cM_1^\reg$, which establishes the claim.
\hfill $\square$
\medskip
The statement of the lemma is non-trivial only if $\tilde{\cM}_1^\reg$ is non-empty, which
turns out to be the case. We then also have the non-empty open subset $\tilde \cM_0^\reg$,
which is defined by the condition that the real parts of the diagonal entries of $\chi$ are
pairwise distinct and non-zero. The lemma implies directly that $\tilde \cM_1^\reg$ is an embedded
submanifold of $\tilde \cM_0^\reg$, and hence it is an embedded submanifold of $\cM_0$, too.
Finally, we see that $\cM_1^\reg$ (specified by (\ref{P12})) is
itself an open subset of $\tilde \cM_1^\reg$. It turns out to be non-empty,
and is therefore also an embedded submanifold of $\cM_0$.
Eventually, we shall obtain the desired model $M$ of $\cM_\red$ as an explicit global cross-section
for the action of $\T_n$ on $\cN = \cN^\reg$.
We shall use Remark 3.1 to show the analyticity of the natural map from
$\cM_0$ onto this cross-section.
This will enable us to prove that the construction
gives a model of the symplectic manifold $(\cM_\red, \omega_\red)$.
The procedure is summarized in the following
commutative diagram:
\begin{figure}[h!]
\begin{center}
\begin{tikzpicture}[scale=1.5]
\node (A) at (0,2) {${\mathcal M}_1$};
\node (B) at (2,2) {${\mathcal M}_0$};
\node (F) at (4,2) {$\mathcal M$};
\node (C) at (0,1) {$\mathcal N$};
\node (D) at (0,0) {$M$};
\node (E) at (2,0) {${\mathcal M}_{\red}$};
\path[right hook->] (A) edge node[below]{$\iota_1$}(B);
\path[right hook->] (B) edge node[below]{$\iota_0$}(F);
\path[->,font=\scriptsize,>=angle 90]
(A) edge node[left]{$\pi_1$} (C)
(B) edge node[right]{$$} (E)
(B) edge node[right]{$\psi$} (D)
(B) edge[dashed, bend right] node[above]{$f_0$} (A)
(C) edge node[left]{$\pi_{\mathcal N}$} (D)
(E) edge node[below]{$\Psi$} (D);
\end{tikzpicture}
\end{center}
\caption{Construction of the model $M$ of $\cM_\red$.
The vertical arrows and $\psi$ denote bundle projections; $\iota_1$ and $\iota_0$ are
embeddings.
The sets $\cM_0$, $\cM_1$ and $\cN$ are respectively defined in (\ref{defM0}),
(\ref{P6}) and (\ref{P7}). The arrow $f_0$ represents a locally well-defined gauge transformation (\ref{f01})
depending smoothly on the base point in $\cM_0$. The map $\pi_1$ is given by (\ref{P7}) and (\ref{P8}).
The map $\pi_\cN$ denotes the realization of the quotient (\ref{P25}) provided by Proposition 6.4 and Theorem 6.5.}
\label{figure Y}
\end{figure}
\section{A useful characterization of the space $\cN$}
\label{sec:4}
Proposition 4.3 below establishes the equations that determine the image of the map (\ref{P8}),
which can be identified with the space $\cN$ (\ref{P7}). More precisely, we shall proceed
with the help of new variables $(\tilde w, Q, \lambda)$
equivalent to $(w,L, \beta)$. The usefulness of this characterization
lies in the fact that we will be able to describe
all solutions of the constraint equation (\ref{cruc3}) explicitly, and shall rely on this
to construct the desired model $M$ of $\cM_\red$ (\ref{P25}).
We start by recalling a lemma from \cite{FM}.
\medskip\noindent
{\bf Lemma 4.1.} \emph{The left-handed momentum constraint on $g\in \cM$ defined by (\ref{T30}) is
equivalent to the condition
\begin{equation}
y^2 g g^\dagger - \frac{1}{2} g g^\dagger (\1_{2n} - I) g g^\dagger =
\frac{1}{2} \alpha^2 (\1_{2n}+ I) + \hat w \hat w^\dagger,
\label{cruc1}\end{equation}
where $\hat w\in \C^{2n}$ is the fixed vector introduced in (\ref{P3}).}
\medskip\noindent {\bf Proof.}
Irrespective of the constraints, $b_L$ in $g=b_L k_R\in \cM$ can be written as
\begin{equation}
b_L =
\begin{bmatrix} b_1 & \chi_L \\
0 & b_2 \end{bmatrix},
\label{C2}\end{equation}
and the left-handed constraint requires that $b_2 = y \1_n$
and $b_1 = y^{-1} \sigma$.
By simply spelling it out for $g= b_L k_R$,
the matrix on the L.H.S. of (\ref{cruc1}) reads explicitly as
\begin{equation}
y^2 \begin{bmatrix} b_1b_1^\dagger + \chi_L \chi_L^\dagger & \chi_L b_2^\dagger \\
b_2\chi_L^\dagger & b_2 b_2^\dagger \end{bmatrix}
-
\begin{bmatrix} \chi_L b_2^\dagger b_2 \chi_L^\dagger & \chi_L b_2^\dagger b_2 b_2^\dagger \\
b_2b_2^\dagger b_2 \chi_L^\dagger & (b_2 b_2^\dagger)^2 \end{bmatrix}.
\label{C3}\end{equation}
The equality of the bottom-right blocks on the two sides of (\ref{cruc1})
is equivalent to $b_2 = y\1_n$.
Then the off-diagonal blocks on both
sides are zero, while (using (\ref{T29})) the top-left block boils
down to the equality $b_1 b_1^\dagger = y^{-2} \sigma \sigma^\dagger$, which implies the statement.
\hfill $\square$
\medskip
From now on we work on $\cM_1\subset \cM_0$ (\ref{P6}).
Taking any quasi-diagonal $b$, it will prove useful to diagonalize
the positive definite matrix
\begin{equation}
bb^\dagger = \begin{bmatrix}e^{-2v}\1_n + \beta^2& e^v \beta\\ e^v \beta &e^{2v}\1_n\end{bmatrix}.
\label{C4}\end{equation}
Let us introduce the real functions $s(t)$ and $c(t)$ by the
formulae\footnote{If $v=0$ then in the limit $t\rightarrow0$ we have $s(0)=c(0)= 1/\sqrt{2}$.}
\begin{equation}
c(t):= \left[\frac{e^{2t} - e^{2v}}{e^{2t} - e^{-2t}} \right]^\frac{1}{2},
\quad
s(t):= \left[\frac{e^{2v} - e^{-2t}}{e^{2 t} - e^{-2t}} \right]^\frac{1}{2},
\qquad \forall t \geq \vert v \vert,
\label{C5}\end{equation}
which imply the identity $c^2(t) + s^2(t)=1$.
Then consider $\lambda \in \R^n$ subject to the condition
\begin{equation}
\lambda_1 \geq \lambda_2\geq \dots \geq \lambda_n \geq \vert v\vert,
\label{C6}\end{equation}
and define the diagonal matrices
\begin{equation}
\Lambda (\lambda):= \diag (e^{2\lambda_1},\dots, e^{2\lambda_n}, e^{-2 \lambda_1}, \dots, e^{-2\lambda_n}),
\label{C7}\end{equation}
\begin{equation}
C(\lambda) = \diag(c(\lambda_1),\dots, c(\lambda_n)),\qquad
S(\lambda) = \diag(s(\lambda_1),\dots, s(\lambda_n)),
\label{C8}\end{equation}
and the matrix
\begin{equation}
\rho(\lambda) = \begin{bmatrix} C(\lambda) & S(\lambda) \\
S(\lambda) & -C(\lambda) \end{bmatrix}.
\label{C9}\end{equation}
Notice that $\rho(\lambda)$ is real, symmetric and orthogonal,
\begin{equation}
\rho(\lambda) = \rho(\lambda)^* =\rho(\lambda)^\dagger = \rho(\lambda)^{-1}.
\label{C10}\end{equation}
Here and throughout the paper, the suffix star on matrices and vectors denotes complex conjugate, and dagger
denotes Hermitian adjoint.
\medskip\noindent
{\bf Lemma 4.2 \cite{FM}.} \emph{For any quasi-diagonal $b$ given by (\ref{P5}),
$b b^\dagger$ can be written as
\begin{equation}
b b^\dagger = \rho(\lambda) \Lambda(\lambda) \rho(\lambda)^{-1},
\label{C11}\end{equation}
where $\beta$ is related to $\lambda$ according to the one-to-one correspondence
\begin{equation}
\beta_i= \sqrt{ 2 (\cosh( 2 \lambda_i) - \cosh (2 v))} = 2 \sqrt{\sinh(\lambda_i + v) \sinh(\lambda_i-v)},
\qquad i=1,\dots, n.
\label{C12}\end{equation}
}
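Lemma 4.2 can be confirmed by a direct computation, and also numerically. The following Python
sketch is a pure illustration, not part of the development; the values $n=2$, $v=0.2$ and
$\lambda=(1.1,\,0.6)$ are arbitrary sample choices compatible with (\ref{C6}). It assembles
$bb^\dagger$ as in (\ref{C4}) with $\beta$ given by (\ref{C12}), and checks the properties
(\ref{C10}) of $\rho(\lambda)$ together with the diagonalization (\ref{C11}).

```python
import numpy as np

# Arbitrary sample values with lambda_1 > lambda_2 > |v|, cf. (C6).
n, v = 2, 0.2
lam = np.array([1.1, 0.6])

# beta from (C12); the two expressions agree by the identity
# cosh(2a) - cosh(2b) = 2 sinh(a+b) sinh(a-b).
beta = np.sqrt(2.0 * (np.cosh(2 * lam) - np.cosh(2 * v)))
beta_alt = 2.0 * np.sqrt(np.sinh(lam + v) * np.sinh(lam - v))

# b b^dagger for a quasi-diagonal b, as displayed in (C4).
B = np.diag(beta)
bbd = np.block([
    [np.exp(-2 * v) * np.eye(n) + B @ B, np.exp(v) * B],
    [np.exp(v) * B, np.exp(2 * v) * np.eye(n)],
])

# c(t), s(t) from (C5), and the matrices Lambda, rho from (C7)-(C9).
c = np.sqrt((np.exp(2 * lam) - np.exp(2 * v)) / (np.exp(2 * lam) - np.exp(-2 * lam)))
s = np.sqrt((np.exp(2 * v) - np.exp(-2 * lam)) / (np.exp(2 * lam) - np.exp(-2 * lam)))
Lambda = np.diag(np.concatenate([np.exp(2 * lam), np.exp(-2 * lam)]))
rho = np.block([[np.diag(c), np.diag(s)], [np.diag(s), -np.diag(c)]])
```

The agreement of the two expressions for $\beta$ rests on the product formula for the
difference of hyperbolic cosines quoted in the comment above.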
Now we are ready to formulate the main result of this section.
\medskip\noindent
{\bf Proposition 4.3.} \emph{Take an arbitrary $g = k b \in \cM_1$ (\ref{P6}) for which $\beta$ and $\lambda$
are connected by (\ref{C12}),
and (using $L$ and $w$ from (\ref{P8}))
define $Q\in \U(2n)$ and $\tilde w\in \C^{2n}$ by
\begin{equation}
Q:= \rho(\lambda)^\dagger L I \rho(\lambda) = \rho(\lambda)^\dagger k^\dagger I k \rho(\lambda)
\quad\hbox{and}\quad \tilde w:= \rho(\lambda)^\dagger w= \rho(\lambda)^\dagger k^\dagger \hat w.
\label{C13}\end{equation}
Then the matrix $Q$ and the vector $\tilde w$ satisfy the constraint equation
\begin{equation}
\Lambda(\lambda) Q \Lambda(\lambda) - \alpha^2 Q = (\Lambda(\lambda)^2 +
\alpha^2 \1_{2n} - 2 y^2 \Lambda(\lambda)) +
2\tilde w {\tilde w}^\dagger
\label{cruc3}\end{equation}
and the relations
\begin{equation}
{\tilde w^\dagger}\tilde w= \alpha^2 (\alpha^{-2n} - 1),
\quad
Q\tilde w = \tilde w.
\label{aux}\end{equation}
Conversely, pick $\lambda \in \R^n$ verifying (\ref{C6})
and suppose that a matrix $Q\in \U(2n)$ and a vector $\tilde w\in \C^{2n}$ satisfy (\ref{cruc3})
as well as the
relations (\ref{aux}) and the condition that $Q$ is conjugate to $I$ (\ref{T10}).
Then there exists $g=kb \in \cM_1$
from which $Q$ and $\tilde w$
can be constructed according to (\ref{C13}), and
such $g$ is unique up to left-multiplication by the elements of the
subgroup $K_+(\hat w)$ (\ref{P3}) of the left-handed gauge group $K_+(\sigma)$. }
\medskip
\noindent
{\bf Proof.}
It is readily checked that the constraint equation (\ref{cruc1}), together with (\ref{T29}) and
the definitions of $I$ in (\ref{T10}) and $\hat w$ in (\ref{P3}),
implies (\ref{cruc3}) and (\ref{aux}) for $Q$
and $\tilde w$ defined by (\ref{C13}).
In order to prove the converse, which
gives the reconstruction of $g\in \cM_1$ from $\lambda$, $Q$ and $\tilde w$,
we start by noting that if $Q\in \U(2n)$ is conjugate to $I$ (\ref{T10}),
then we can find an element $\kappa\in K$ for which
\begin{equation}
\rho Q \rho^{-1} = \kappa^\dagger I \kappa, \quad\hbox{where}\quad \rho := \rho(\lambda).
\label{C16}\end{equation}
Next, we observe that the auxiliary condition $Q \tilde w = \tilde w$ is equivalent to
\begin{equation}
I \kappa \rho \tilde w = \kappa \rho\tilde w.
\label{C17}\end{equation}
By using (\ref{C17}) and the property that
${\tilde w^\dagger}\tilde w= \alpha^2 (\alpha^{-2n} - 1)$, we see that there
exists an element $k_+ \in K_+$ for which
\begin{equation}
k_+ \kappa \rho \tilde w = \hat w.
\label{C18}\end{equation}
Let us now define $g=kb$ by using
\begin{equation}
k:= k_+ \kappa
\label{C19}\end{equation}
together with the quasi-diagonal $b$ associated to $\lambda$ via (\ref{C12}).
Then routine manipulations show that equation (\ref{cruc3}) implies for $g$ the left-handed momentum
map constraint (\ref{cruc1}).
Now let us inspect the ambiguity in the element $k$ constructed above, and thus in $g$.
If $\kappa'$ and $k_+'$ represent another choice in the above equalities, then we have
\begin{equation}
\kappa'= \eta_+ \kappa
\quad
\hbox{for some}\quad \eta_+\in K_+,
\label{C20}\end{equation}
and thus
\begin{equation}
k_+ \kappa \rho \tilde w = k_+'\kappa' \rho \tilde w = k_+' \eta_+ \kappa \rho \tilde w =
\hat w.
\label{C21}\end{equation}
Therefore
\begin{equation}
k_+^\dagger \hat w = (k_+' \eta_+)^\dagger \hat w,
\label{C22}\end{equation}
and hence
\begin{equation}
k_+' \eta_+ = \hat \eta_L k_+
\quad\hbox{for some}\quad \hat \eta_L \in K_+(\hat w).
\label{C23}\end{equation}
This entails that
\begin{equation}
k'= k_+' \kappa' = k_+' \eta_+ \kappa= \hat \eta_L k_+ \kappa = \hat \eta_L k
\quad\hbox{and}\quad
g' = k' b = \hat \eta_L g,
\label{C24}\end{equation}
that is, $k$ and $g$ are unique up to left-multiplication by the isotropy subgroup of the vector
$\hat w$ in $K_+$.
\hfill $\square$
\medskip
\noindent
{\bf Definition 4.4.}
Let us call a triple $(\tilde w, Q,\lambda) \in \C^{2n}\times \C^{2n\times 2n} \times \R^n$ \emph{admissible} if
$\lambda$ satisfies (\ref{C6}),
the constraint equation (\ref{cruc3}) holds, $Q$ is unitary, conjugate to $I$ (\ref{T10}), and
the auxiliary conditions (\ref{aux}) are met.
Denote by $\cN(\lambda)$ the set of admissible triples associated with fixed $\lambda$,
and let $\cM_1(\lambda)$ stand for the subset of $\cM_1$ corresponding, by Proposition 4.3,
to the admissible triples
with fixed $\lambda$. Moreover, denote by $\cM_0(\lambda)$ the subset of $\cM_0$ whose elements
can be gauge transformed into $\cM_1(\lambda)$.
Finally, denote by $\cD(u,v,\mu)$ the set of admissible $\lambda$-values, i.e.,
those that appear in admissible triples.
\medskip
It is clear from the relations (\ref{C12}) and (\ref{C13}) that the
triple $( \tilde w, Q,\lambda)$ is equivalent
to the triple $( w, L,\beta)$ (in the obvious sense that one can be expressed in terms of the other).
By using this equivalence, and Proposition 4.3,
we identify $\cN(\lambda)$ as defined above with
the image of the map (\ref{P8}), with $\beta$ taking the value (\ref{C12}).
We now elaborate the gauge transformation properties of the variables $( \tilde w,Q, \lambda)$.
For this, we
note first of all that
if a triple $( \tilde w,Q, \lambda)$ is admissible, then so is
$( \gamma^{-1} \tilde w, Q, \lambda)$ for any $\gamma\in \U(1)$.
This reflects the gauge freedom
whereby the elements $g\in \cM_1$ are transformed as $g \mapsto \hat \gamma g$
with $\hat \gamma \in \T_1$ (\ref{P4}).
The set $\cM_1(\lambda)$ is also
mapped to itself by the right-handed gauge transformations generated by those
$\eta_R = \diag(\eta_R(1), \eta_R(2)) \in K_+$
for which
\begin{equation}
\eta_R(1)\diag(\beta_1(\lambda),\dots, \beta_n(\lambda)) \eta_R(2)^{-1} =
\diag(\beta_1(\lambda), \dots, \beta_n(\lambda)).
\label{C25}\end{equation}
We denote the corresponding
subgroup of the right-handed gauge group $K_+$ by
$K_+(\lambda)$.
Using this and the relations (\ref{P2}) and (\ref{P7}),
it is readily seen that Proposition 4.3 gives rise to the following
natural identifications:
\begin{equation}
\cM_\red(\lambda) := K_+(\sigma)\backslash \cM_0(\lambda)/K_+= K_+(\sigma)\backslash
\cM_1(\lambda)/ K_+(\lambda) =\T_1\backslash \cN(\lambda)/ K_+(\lambda),
\label{C26}\end{equation}
where the last quotient refers to the gauge transformations
\begin{equation}
\cN(\lambda)\ni (\tilde w, Q, \lambda) \mapsto ( \gamma^{-1} \eta_R \tilde w, \eta_R Q \eta_R^{-1}, \lambda ),
\qquad
\forall (\hat \gamma, \eta_R)\in \T_1\times K_+(\lambda).
\label{C27}\end{equation}
In the regular case (\ref{P12}), we have
\begin{equation}
\lambda_1 > \lambda_2 > \dots > \lambda_n > \vert v \vert,
\label{C28}\end{equation}
and $K_+(\lambda) =\T_{n-1}$.
Then, the transformations (\ref{C27}) yield
the gauge action of $\T_n$ (\ref{P15}):
\begin{equation}
( \tilde w, Q,\lambda) \mapsto ( \tau\tilde w, \tau Q \tau^{-1}, \lambda),\qquad
\forall \tau \in \T_n,
\label{C29}\end{equation}
which is completely equivalent to (\ref{P26}) via the definitions in (\ref{C13}).
In order to construct the desired model of $\cM_\red$, we need to describe all admissible
triples $(\tilde w, Q,\lambda)$.
A crucial part of this problem is to find the admissible $\lambda$, which parametrize the eigenvalues
of $bb^\dagger$ for $g=k b\in \cM_0$. These eigenvalues, and thus also the components
of $\lambda$, can be viewed as continuous functions on $\cM_0$, and we are
looking for the range of the corresponding map, $\cL$,
\begin{equation}
\cD(u,v,\mu) = \cL(\cM_0) \quad\hbox{with}\quad \cL\colon g \mapsto \lambda.
\label{C30}\end{equation}
In the following section, we shall describe $\cD(u,v,\mu)$ and the corresponding solutions
of (\ref{cruc3}) explicitly. See Theorem 5.6 for the result.
We can explain at this point why an open subset of the reduced phase space
can be parametrized by the $\lambda_i$ together with $n$ angular variables,
which appear in (\ref{I8}).
To this end, let us take an arbitrary element
\begin{equation}
e^{\ri \xi} \equiv \diag(e^{\ri \xi_1},\dots, e^{\ri \xi_{2n}})
\label{C31}\end{equation}
from the torus $\T^{2n}$, and notice that if $( \tilde w, Q, \lambda)$ is admissible,
then so is
\begin{equation}
( e^{\ri \xi} \tilde w, e^{\ri \xi} Q e^{-\ri \xi},\lambda),\qquad \forall e^{\ri \xi}\in \T^{2n}.
\label{C32}\end{equation}
Indeed, the conditions described in Proposition 4.3 are respected by these transformations.
In addition to the gauge transformations by $\tau\in \T_n$ in (\ref{C29}), these $\T^{2n}$ transformations
involve $n$ arbitrary angles, which parametrize $\T^{2n}/\T_n$.
It is clear that, for generic $\lambda$, equation (\ref{cruc3})
permits the expression of $Q$ in terms of
$\lambda$ and $\tilde w$. Moreover, we shall see shortly that the $\vert \tilde w_a\vert $
can be expressed in terms of $\lambda$, and generically none of
them vanish.
This implies that generically the elements of
$\cN(\lambda)/\T_n$ can indeed be parametrized by $n$ angles.
\medskip\noindent
{\bf Remark 4.5.}
We know that the $\T_n$ action on $\cN(\lambda)$ is free, and shall also confirm this explicitly later.
Moreover, it will turn out that the $\T^{2n}$ action, sending $(\tilde w,Q,\lambda)$ to (\ref{C32}),
is transitive on $\cN(\lambda)$; and is also free
except for a certain lower dimensional subset of the admissible $\lambda$ values.
\section{Solution of the constraints}
\label{sec:5}
Locally, the general solution of the constraint equation (\ref{cruc3}) was already
found in \cite{FM}.
Here, `locally' means that the precise domain of the $\lambda$-variables
was not established there.
In this section, we shall prove that $\cD(u,v,\mu)$ (\ref{C30}) is the
closure of $\cD_+$ in (\ref{I9}), as was anticipated in \cite{FM}.
Moreover, we shall describe all admissible triples forming $\cN$ (\ref{P7}) explicitly.
When combined with the local results of \cite{FM}, this yields a model
of the reduced system coming from the Abelian Poisson algebra $\fH^1$ (\ref{fH1})
restricted
to a dense open submanifold, and will
permit us to derive the desired global model $ M$ of $\cM_\red$ in Section 6.
For technical reasons that will become clear shortly, we initially work on a certain
dense open subset of $\cM_0$. To define this subset, let us consider the following
symmetric polynomials
in $2n$ indeterminates:
\begin{equation}
p_1(\Lambda) = \prod_{k\neq \ell}^{2n} (\Lambda_k - \Lambda_\ell) (\Lambda_k \Lambda_\ell - \alpha^2),
\label{F1}\end{equation}
and
\begin{equation}
p_2(\Lambda) = \prod_{k=1}^{2n} (\Lambda_k - \alpha)( y^2 \Lambda_k - \alpha^2)(\Lambda_k - y^2) (\Lambda_k - x^{2}).
\label{F2}\end{equation}
Since $\cM_0$ (\ref{defM0}) is a joint level surface of independent
analytic functions on $\cM$, it
is an analytic submanifold of $\cM$, and
thus
we obtain analytic functions on $\cM_0$ if we substitute
the eigenvalues $\Lambda_k(g)$ of $ g g^\dagger = k bb^\dagger k^{-1}$ into the above polynomials.
This follows since, being symmetric polynomials in the eigenvalues,
the $p_i(\Lambda(g))$
can be expressed as polynomials in the coefficients of the characteristic polynomial of $g g^\dagger$.
We know that $\cM_0$ is connected and, as explained in Remark 5.1, we
can also conclude that
\begin{equation}
p(g):= p_1(\Lambda(g)) p_2(\Lambda(g))
\label{F3}\end{equation}
does
not vanish identically on $\cM_0$.
By analyticity, this implies that
\begin{equation}
\cM_0^{\sreg}:= \{ g\in\cM_0\mid p(g)\neq 0\}
\label{F4}\end{equation}
is a \emph{dense} open subset of $\cM_0$.
We call its elements \emph{strongly regular}. We shall apply the same adjective to the
$\lambda$-values for which (using (\ref{C7})) $ p(\Lambda(\lambda)) \neq 0$,
and also to the corresponding admissible triples $( \tilde w, Q, \lambda)$,
whose set is denoted $\cN^\sreg$.
The admissible strongly regular $\lambda$-values form the dense subset
\begin{equation}
\cD(u,v,\mu)^\sreg = \cL(\cM_0^\sreg) \subset \cD(u,v,\mu).
\label{F5}\end{equation}
\medskip
\noindent
{\bf Remark 5.1.}
Let us recall from \cite{FG} that the reductions of the Hamiltonians
\begin{equation}
\hat \cH_j(g) = \frac{1}{2}\tr\!\left((b b^\dagger)^j\right), \qquad j=1,\dots, n,
\label{F6}\end{equation}
provide a Liouville integrable system on the $2n$-dimensional reduced phase space $\cM_\red$.
These reduced Hamiltonians can be expressed in terms of the $\lambda_i$ ($i=1,\dots, n$) as
\begin{equation}
\hat \cH_j^\red = \sum_{i=1}^n \cosh (2j \lambda_i).
\label{F7}\end{equation}
Their functional independence implies that the range of the $\lambda$-variables
must contain an open subset of $\R^n$. It follows from this that $\cM_0^{\sreg}$
cannot be empty.
\medskip
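As a small numerical illustration of Remark 5.1 (a sketch only, not part of the proofs;
the values $n=2$, $v=0.2$ and $\lambda=(1.1,\,0.6)$ are arbitrary sample choices), one may
check that for a quasi-diagonal $b$ built from (\ref{C4}) and (\ref{C12}) the Hamiltonians
(\ref{F6}) indeed take the form (\ref{F7}), since $bb^\dagger$ has the eigenvalues
$e^{\pm 2\lambda_i}$:

```python
import numpy as np

# Arbitrary sample values satisfying lambda_1 > lambda_2 > |v|.
n, v = 2, 0.2
lam = np.array([1.1, 0.6])
beta = np.sqrt(2.0 * (np.cosh(2 * lam) - np.cosh(2 * v)))   # eq. (C12)

# b b^dagger for a quasi-diagonal b, as displayed in (C4).
B = np.diag(beta)
bbd = np.block([
    [np.exp(-2 * v) * np.eye(n) + B @ B, np.exp(v) * B],
    [np.exp(v) * B, np.exp(2 * v) * np.eye(n)],
])

# Hamiltonians (F6) and their reduced form (F7), for j = 1, ..., n.
H = [0.5 * np.trace(np.linalg.matrix_power(bbd, j)) for j in (1, 2)]
H_red = [np.sum(np.cosh(2 * j * lam)) for j in (1, 2)]
```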
Focusing on $\cN^\sreg$,
we introduce the $2n \times 2n$ diagonal matrices
\begin{equation}
\cW:= \diag(\tilde w_1,\dots, \tilde w_{2n}),\qquad
D_{l m} = \frac{ \Lambda_l^2 + \alpha^2 - 2y^2 \Lambda_l}{\Lambda_l^2 - \alpha^2} \delta_{lm},
\label{F8}\end{equation}
and the Cauchy-like matrix $C$,
\begin{equation}
C_{lm} := \frac{1}{\Lambda_l \Lambda_m - \alpha^2}.
\label{F9}\end{equation}
The denominators do not vanish since $\lambda$ is strongly regular.
The constraint equation (\ref{cruc3}) leads to the
following formula for the matrix $Q$:
\begin{equation}
Q = D + 2\cW C {\cW^\dagger}.
\label{F10}\end{equation}
Since $Q$ is conjugate to $I$ (\ref{T10}), $Q^2 = \1_{2n}$ holds, and this translates into
\begin{equation}
D^2 + 2\cW D C \cW^\dagger + 2\cW C D \cW^\dagger + 4\cW C (\cW \cW^\dagger) C \cW^\dagger = \1_{2n}.
\label{F11}\end{equation}
Let us observe that the matrix $\cW$ is
invertible whenever $\lambda$ is strongly regular.
Indeed, if some component $\tilde w_a=0$, then (\ref{F11}) yields $D_a^2=1$,
which is excluded by strong regularity.
Next, we substitute (\ref{F10}) into the equation
$Q \tilde w = \tilde w$ in (\ref{aux}), which gives
\begin{equation}
D_{jj} \tilde w_j + 2\tilde w_j \sum_{m=1}^{2n} C_{jm} \vert \tilde w_m \vert^2 = \tilde w_j,
\qquad
\forall j=1,\dots, 2n.
\label{F12}\end{equation}
Dividing by $\tilde w_j$ and then inverting the matrix $C$ produces the formula
\begin{equation}
\vert \tilde w_j \vert^2 = \frac{1}{2}
\sum_{l=1}^{2n} (C^{-1}(\lambda))_{jl} ( 1 - D(\lambda)_{ll}),
\label{simple}\end{equation}
where $C^{-1}$ is the inverse of the matrix $C$ (\ref{F9}) and we took into account (\ref{C7}).
This expresses the moduli $\vert \tilde w_j \vert$ as functions of $\lambda$.
Using the parameter $\mu$ instead of $\alpha= e^{-\mu}$,
define the $2n$ functions
\begin{equation}
\begin{aligned}
F_a(\lambda)&= \prod_{\substack{i=1\\(i\neq a)}}^{n}
\left(\frac{\sinh(\lambda_a+\lambda_i+\mu)\sinh(\lambda_a-\lambda_i+\mu)}
{{\sinh(\lambda_a-\lambda_i)}\sinh(\lambda_a+\lambda_i)}\right),
\quad
1\leq a \leq n, \\
F_{n+a}(\lambda)&=
\prod_{\substack{i=1\\(i\neq a)}}^{n}
\left(\frac{\sinh(\lambda_a+\lambda_i-\mu)\sinh(\lambda_a-\lambda_i-\mu)}
{{\sinh(\lambda_a-\lambda_i)}\sinh(\lambda_a+\lambda_i)}\right),
\end{aligned}
\label{F14}\end{equation}
as well as the functions
\begin{equation}
\begin{aligned}
\cF_a(\lambda) &= e^{-\mu}\left(e^{2 \lambda_a} - y^2\right) \frac{\sinh(\mu)}{\sinh(2\lambda_a)} F_a(\lambda),
\quad 1\leq a\leq n, \\
\cF_{n+a} (\lambda)&= e^{-\mu}\left(y^2 - e^{-2 \lambda_a}\right) \frac{\sinh(\mu)}{\sinh(2\lambda_a)} F_{n+a}(\lambda).
\end{aligned}
\label{F15}\end{equation}
\medskip\noindent
{\bf Proposition 5.2.}
\emph{
The moduli of $\tilde w_j(g)$ defined by (\ref{C13}) are gauge invariant functions of
$g=kb\in\cM_1^\reg$ and depend only
on $\lambda$ that parametrizes the eigenvalues of $bb^\dagger$ according to (\ref{C7}) and (\ref{C11}).
Explicitly, these functions are given by the relation
\begin{equation}
\vert \tilde w_j(g) \vert^2 = \cF_j(\lambda), \qquad j=1,\dots, 2n,
\label{*}\end{equation}
with the functions $\cF_j$ (\ref{F15}).
The component $Q$ in any admissible strongly regular triple $( \tilde w, Q,\lambda)\in \cN^\sreg$
can be written as (\ref{F10}), where the phases of the entries of
$\tilde w\in \C^{2n}$ can be chosen arbitrarily.}
\medskip\noindent
{\bf Proof.} For a strongly regular admissible $\lambda$, the formula (\ref{*}) is a
reformulation \cite{FM}
of (\ref{simple}).
It remains valid on the whole of $\cM_1^\reg$, since the functions on the two sides
of (\ref{*}) are gauge invariant continuous functions on $\cM_1^\reg$, and
$\cM_1^\sreg$ is a dense subset of $\cM_1^\reg$ (in consequence of the density
of $\cM_0^\sreg$ in $\cM_0$).
In the strongly regular case, the formula (\ref{F10}) for $Q$ was derived above.
The phases of $\tilde w_j$ can take arbitrary values, because one can
use arbitrary $e^{\ri \xi}\in \T^{2n}$ in equation (\ref{C32}).
\hfill $\square$
\medskip
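Although not needed for the arguments, the above solution formulae lend themselves to a
numerical sanity check. The Python sketch below is illustrative only; the values $n=2$,
$\mu=0.3$, $u=0.1$ and $\lambda=(1.5,\,0.8)$ are arbitrary sample choices, and we use
$\alpha=e^{-\mu}$, $y^2=e^{-2u}$. It builds $\tilde w$ from (\ref{F15}) with all phases set
to $1$ and $Q$ from (\ref{F10}), and verifies the eigenvector equation (\ref{F12}), the
relations (\ref{aux}), the involutivity $Q^2=\1_{2n}$, and the constraint (\ref{cruc3}).

```python
import numpy as np

# Arbitrary sample data with lambda_1 - lambda_2 > mu and lambda_2 > |u|.
n, mu, u = 2, 0.3, 0.1
lam = np.array([1.5, 0.8])
alpha, y2 = np.exp(-mu), np.exp(-2 * u)

# Eigenvalues Lambda_l of b b^dagger, ordered as in (C7).
Lam = np.concatenate([np.exp(2 * lam), np.exp(-2 * lam)])

# Diagonal matrix D (stored as a vector) and the Cauchy-like matrix C, eqs (F8), (F9).
D = (Lam**2 + alpha**2 - 2 * y2 * Lam) / (Lam**2 - alpha**2)
C = 1.0 / (np.outer(Lam, Lam) - alpha**2)

# The functions F_a and calF_a of (F14), (F15); sgn = +1 / -1 selects the two families.
def F(a, sgn):
    i = np.arange(n) != a
    num = np.sinh(lam[a] + lam[i] + sgn * mu) * np.sinh(lam[a] - lam[i] + sgn * mu)
    den = np.sinh(lam[a] - lam[i]) * np.sinh(lam[a] + lam[i])
    return np.prod(num / den)

pref = np.exp(-mu) * np.sinh(mu) / np.sinh(2 * lam)
calF = np.concatenate([
    (np.exp(2 * lam) - y2) * pref * np.array([F(a, +1) for a in range(n)]),
    (y2 - np.exp(-2 * lam)) * pref * np.array([F(a, -1) for a in range(n)]),
])

# w-tilde with all phases equal to 1, and Q from (F10).
w = np.sqrt(calF)
Q = np.diag(D) + 2 * np.outer(w, w) * C
```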
The definitions guarantee the strict positivity of $\vert \tilde w_j(\lambda)\vert$ for every
$\lambda\in \cD(u,v,\mu)^\sreg$
(see the argument below (\ref{F11})).
Thus, the explicit formula (\ref{*}) leads to a necessary condition on $\lambda$ to belong to the
(still unknown) set $\cD(u,v,\mu)^\sreg$.
Indeed, our aim below is to identify the `maximal domain' on which the functions $\cF_j$
\emph{as given by the formula} (\ref{F15}) are positive.
More precisely,
we are interested in the set
\begin{equation}
\cD_+(u,v,\mu):= \{ \lambda\in \R^n\mid \lambda_1> \lambda_2 > \dots > \lambda_n > \vert v \vert,\,\,
\cF_j(\lambda)>0,\,\,
\forall j=1,\dots, 2n \}.
\label{F17}\end{equation}
We stress that in this definition $\lambda$ is \emph{not} assumed to be admissible or strongly regular;
the formula (\ref{F15}) is used to define $\cF_j(\lambda)$ for the $\lambda$ that occur.
Next, we shall give the elements of $\cD_+(u,v,\mu)$ explicitly.
After that, we shall prove that $\cD(u,v,\mu)$ (\ref{C30}) is the closure of $\cD_+(u,v,\mu)$.
Our notation anticipates that the definition (\ref{F17}) turns out to give the set (\ref{I9}).
\medskip\noindent
{\bf Proposition 5.3.}
\emph{The set $\cD_+(u,v,\mu)$ defined by (\ref{F17}) can be described explicitly as
\begin{equation}
\cD_+(u,v,\mu)= \{ \lambda\in \R^n\mid \lambda_n > \operatorname{max}(\vert u \vert, \vert v \vert),\,\,
\lambda_i -\lambda_{i+1} > \mu,\, \forall i=1,\dots, n-1\}.
\label{F18}\end{equation}
}
\noindent
{\bf Proof.}
It is straightforward to check that if $\lambda\in \R^n$ verifies
\begin{equation}
\lambda_n > \operatorname{max}(\vert u \vert, \vert v \vert),\,\,
\lambda_i -\lambda_{i+1} > \mu,\, \forall i=1,\dots, n-1,
\label{F19}\end{equation}
then $\cF_j(\lambda)>0$, and actually also $F_j(\lambda)>0$, for all $j=1,\dots, 2n$.
To prove the converse, suppose that $\lambda$ meets the requirements imposed in (\ref{F17}), and
that it also satisfies
\begin{equation}
\lambda_n > - u.
\label{F20}\end{equation}
This latter assumption holds automatically for $\vert v \vert > \vert u\vert $, and also when $\vert u \vert > \vert v \vert$ if $u>0$.
It follows from (\ref{F20}) that
\begin{equation}
(e^{2 \lambda_a} - y^2)= (e^{2 \lambda_a} - e^{-2 u}) >0,
\label{F21}\end{equation}
and hence the positivity of $\cF_a(\lambda)$ implies
\begin{equation}
F_a(\lambda) >0, \quad \forall a=1,\dots, n.
\label{F22}\end{equation}
We note that $F_1(\lambda) >0$ holds as a consequence of $\lambda_1>\lambda_2> \dots >\lambda_n > \vert v\vert$.
Then we look at $F_2$ and find that $F_2(\lambda) >0$ forces
$\lambda_1 - \lambda_2 >\mu$.
Next we inspect $F_3$, and wish to show that its positivity implies
$\lambda_2 - \lambda_{3} > \mu$.
For this, we notice that the only factors in $F_3$ that are not manifestly positive are those in the product
\begin{equation}
\frac{\sinh(\lambda_3 - \lambda_1 + \mu)}{\sinh(\lambda_3 - \lambda_1)}\frac{ \sinh(\lambda_3-\lambda_2 +\mu) }{\sinh(\lambda_3 - \lambda_2)}.
\label{F23}\end{equation}
We recast this product slightly as
\begin{equation}
\frac{\sinh(\lambda_1 - \lambda_2 - \mu + (\lambda_2 - \lambda_3))}{\sinh(\lambda_1 - \lambda_3)} \frac{\sinh(\lambda_2 - \lambda_3 - \mu)}{\sinh(\lambda_2 - \lambda_3)},
\label{F24}\end{equation}
and since we already know that $\lambda_1 - \lambda_2 > \mu$, we see that each factor is positive
except possibly $\sinh(\lambda_2 - \lambda_3 -\mu)$. Thus the positivity of $F_3(\lambda)$ leads to
$\lambda_2 - \lambda_3 > \mu$.
We go on in this manner and find that the positivity of all
\begin{equation}
F_1(\lambda), F_2(\lambda),\ldots, F_a(\lambda)
\label{F25}\end{equation}
implies (actually is equivalent to)
\begin{equation}
\lambda_i - \lambda_{i+1} >\mu,
\quad
\forall i=1,\ldots, a-1.
\label{F26}\end{equation}
This holds for each $a=2,\dots, n$.
We now observe that if $\lambda_i - \lambda_{i+1} >\mu$ for all $i$, then $F_{n+a}(\lambda)>0$ is valid for all $a=1,\dots, n$ as well.
Therefore the positivity of $\cF_{2n}(\lambda)$ requires that
\begin{equation}
(e^{-2u} - e^{-2\lambda_n})>0,
\label{F27}\end{equation}
which in the case $u>0$ enforces that $\lambda_n > \vert u\vert $.
At this stage, the proof is complete whenever (\ref{F20}) is guaranteed. Therefore,
it only remains to show that $\lambda_n > \vert u \vert $ must
hold also when $\vert u \vert > \vert v \vert$ and $u<0$.
This follows from Lemma 5.4 below.
\hfill $\square$
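The two directions of Proposition 5.3 can also be observed numerically. The following hedged
Python sketch (arbitrary sample parameters $n=2$, $\mu=0.3$, $u=0.1$, $v=0.05$, with
$y^2=e^{-2u}$ and the functions $\cF_j$ taken from (\ref{F15})) confirms that a $\lambda$
satisfying (\ref{F19}) yields only positive values, while violating the spacing condition
$\lambda_1-\lambda_2>\mu$ produces a negative one.

```python
import numpy as np

# Arbitrary sample parameters; v only enters through the domain condition.
n, mu, u, v = 2, 0.3, 0.1, 0.05
y2 = np.exp(-2 * u)

def calF(lam):
    """The 2n functions (F15) at lambda (entries in decreasing order)."""
    def F(a, sgn):
        i = np.arange(n) != a
        num = np.sinh(lam[a] + lam[i] + sgn * mu) * np.sinh(lam[a] - lam[i] + sgn * mu)
        den = np.sinh(lam[a] - lam[i]) * np.sinh(lam[a] + lam[i])
        return np.prod(num / den)
    pref = np.exp(-mu) * np.sinh(mu) / np.sinh(2 * lam)
    return np.concatenate([
        (np.exp(2 * lam) - y2) * pref * np.array([F(a, +1) for a in range(n)]),
        (y2 - np.exp(-2 * lam)) * pref * np.array([F(a, -1) for a in range(n)]),
    ])

inside = calF(np.array([1.5, 0.8]))    # spacing 0.7 > mu, lambda_2 > max(|u|, |v|)
outside = calF(np.array([1.0, 0.9]))   # spacing 0.1 < mu violates (F19)
```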
\medskip
\noindent
{\bf Lemma 5.4.}
\emph{If $u<0$, then there does not exist any
$\lambda\in \R^n$, $\lambda_1 > \lambda_2 > \dots > \lambda_n >0$ for which $\lambda_n < \vert u \vert $
and the expressions (\ref{F15}) satisfy
$\cF_m(\lambda) >0$ for all $m=1,\dots,2n$.}
\medskip\noindent
{\bf Proof.}
If $\lambda_n < \vert u\vert$ and $\cF_1(\lambda) >0$ by (\ref{F15}), then there exists a smallest index
$1 < k \leq n$ such that $\lambda_{k-1} > \vert u \vert$ but $\lambda_k < \vert u\vert$.
This follows since $\lambda_1$ must be larger than
$\vert u\vert$, otherwise $\cF_1(\lambda)>0$ cannot hold.
The positivity of $\cF_m(\lambda)$ for all $m$ then requires
\begin{equation}
F_1(\lambda)>0,\dots, F_{k-1}(\lambda)>0,\,\, F_{k}(\lambda)<0,\dots, F_n(\lambda)<0,\,\,
F_{n+1}(\lambda) >0,\dots, F_{2n}(\lambda)>0.
\label{F28}\end{equation}
Let us now suppose that
\begin{equation}
2\leq k \leq n-1, \quad (n>2).
\label{F29}\end{equation}
We find that the positivity of $F_1,\dots, F_{k-1}$ is equivalent to
the $(k-2)$ conditions
\begin{equation}
\lambda_1 - \lambda_2 > \mu,\dots, \lambda_{k-2} - \lambda_{k-1} > \mu.
\label{F30}\end{equation}
In particular, these conditions are empty for $k=2$.
Then the negativity of $F_k$ leads to the condition
\begin{equation}
\lambda_{k-1} - \lambda_k < \mu.
\label{F31}\end{equation}
Moreover, the negativity of $F_{k+1},\dots, F_n$ leads to the conditions
\begin{equation}
\lambda_{k} - \lambda_{k+1} <\mu, \dots, \lambda_{n-1} - \lambda_n < \mu
\label{F32}\end{equation}
together with
\begin{equation}
\lambda_{k-1} - \lambda_{k+1} >\mu,\dots, \lambda_{n-2} - \lambda_n >\mu.
\label{F33}\end{equation}
But then we find that the above inequalities imply
\begin{equation}
F_{n+ k -1}(\lambda) < 0.
\label{F34}\end{equation}
We here used that $\lambda_{k-1} > \mu$, which follows from the above.
We have proved that $\lambda$ satisfying our conditions does not exist
if $2\leq k \leq n-1$.
It remains to consider the case
$k=n$,
when we must have $F_n(\lambda)<0$, but all
the other $F_j$ with $j\neq n$ must be positive. Inspecting these functions for $j=2,\dots, n-1$, we find
$\lambda_i - \lambda_{i+1} > \mu$ for $i=1,\dots, n-2$, and from $F_n(\lambda)<0$ we find $\lambda_{n-1} - \lambda_n < \mu$.
Then one can check that $F_{n+1},\dots, F_{2n-2}$ are positive, while
the positivity of $F_{2n-1}(\lambda)$ requires $\lambda_{n-1} + \lambda_n < \mu$. The inequalities derived
so far entail that $F_{2n}(\lambda) < 0$, and thus $\lambda$ with the required properties does not
exist in the $k=n$ case either.
In the above it was assumed that $n>2$, but the arguments are easily adapted to cover the $n=2$
case, too.
\hfill $\square$
\medskip
We see from Proposition 5.3 that the sets given by (\ref{F5}) and (\ref{F18}) satisfy
\begin{equation}
\cD(u,v,\mu)^\sreg \subseteq \cD_+(u,v,\mu).
\label{F35}\end{equation}
Since $\cD(u,v,\mu)^\sreg$ is a dense subset of the set $\cD(u,v,\mu)$ of admissible $\lambda$-values, we
obtain
\begin{equation}
\cD(u,v,\mu) \subseteq \overline{ \cD_+(u,v,\mu)},
\label{F36}\end{equation}
where
\begin{equation}
\overline{\cD_+(u,v,\mu)} = \{ \lambda \in \R^n\mid \lambda_n \geq \operatorname{max}(\vert u\vert, \vert v \vert),\,\,
\lambda_i -\lambda_{i+1} \geq \mu,\, \forall i=1,\dots, n-1\}
\label{F37}\end{equation}
is the closure of $\cD_+(u,v,\mu)$.
We shall shortly demonstrate that in (\ref{F36}) equality holds.
Employing the notation (\ref{C31}), let us take an arbitrary element $e^{\ri \xi}\in \T^{2n}$
and consider, for $l,m=1,\dots,2n$, the formulae
\begin{equation}
Q_{lm}(\lambda, e^{\ri \xi}) = D_{lm}(\lambda) + 2\tilde w_l(\lambda,e^{\ri \xi}) C_{lm}(\lambda) {\tilde w_m}^*
(\lambda, e^{\ri\xi}),
\qquad
\tilde w_{l}(\lambda, e^{\ri \xi}) = e^{\ri \xi_l} \sqrt{\cF_l(\lambda)},
\label{F40}\end{equation}
where non-negative square roots are used for all $\lambda \in \overline{\cD_+(u,v,\mu)}$.
The matrix element $Q_{lm}$ exhibits apparent singularities
at the $\lambda$-values for which the denominator in $C_{lm}(\lambda)$ (\ref{F9}) becomes zero.
However,
all those `poles' cancel either against zeros of
$\sqrt{\cF_l(\lambda) \cF_m(\lambda)}$ or against a corresponding pole of $D_{lm}(\lambda)$.\\
\medskip\noindent
{\bf Lemma 5.5.} \emph{The formulae (\ref{F40}) for $Q_{lm}$ and $\tilde w_l$ yield unique continuous functions on
the domain
$ \overline{ \cD_+(u,v,\mu)} \times \T^{2n}$, which are analytic on the
interior $\cD_+(u,v,\mu)\times \T^{2n}$. The components of $\rho(\lambda)$ (\ref{C9}) and
$\beta(\lambda)$ (\ref{C12}) are also analytic on $\cD_+(u,v,\mu)$ and continuous on its closure.
}
\medskip
\noindent {\bf Proof.}
For any fixed $j=1,\dots, n-1$, the matrix element
\begin{equation}
C_{j+1, n+j}(\lambda) = - \frac{1}{2}
\frac{e^{\mu + \lambda_j - \lambda_{j+1}}}{\sinh(\lambda_j -\lambda_{j+1} -\mu)},
\label{F41}\end{equation}
becomes infinite as $\lambda_j - \lambda_{j+1} -\mu$ tends to zero.
This pole is cancelled by the corresponding zero of
\begin{equation}
\sqrt{\cF_{j+1}(\lambda) \cF_{n+j}(\lambda)} = \sinh(\lambda_j - \lambda_{j+1} - \mu) f_{j+1, n+j}(\lambda),
\label{F42}\end{equation}
where $f_{j+1, n+j}(\lambda)$ remains finite as $\lambda$ approaches the pole.
The only other source of potential singularity of $Q_{lm}$ (\ref{F40}) is the vanishing of the denominators of
$D_{2n,2n}$ (\ref{F8}) and $C_{2n,2n}$ (\ref{F9}) as $\lambda_n$ tends to $\mu/2$.
This may be excluded by the form of $\cD_+(u,v,\mu)$; when it is not,
one can easily check that these poles cancel against each other.
The continuity of the resulting functions on $\overline{ \cD_+(u,v,\mu)} \times \T^{2n}$ and their
analyticity on the interior also follow immediately from their explicit formulae.
The statements regarding $\rho(\lambda)$ and $\beta(\lambda)$ are plainly true.
$ \hfill\square$
\medskip
The following theorem summarizes one of our main results.
\medskip\noindent
{\bf Theorem 5.6.}
\emph{The set of admissible triples $(\tilde w, Q, \lambda)$, which according to Proposition 4.3
is in bijective correspondence
with the set $\cN$ (\ref{P7}), is formed precisely by the triples $(\tilde w, Q,\lambda)$
given explicitly
by Lemma 5.5.
Consequently, the image $\cD(u,v,\mu)$ of the `eigenvalue map' $\cL$ (\ref{C30}) equals the closure
of $\cD_+(u,v,\mu)$, given by (\ref{F37}).
The dense open submanifold of the reduced phase space defined by
\begin{equation}
\cM_\red^+ := K_+(\sigma) \backslash \cL^{-1}(\cD_+(u,v,\mu))/K_+
\label{F43}\end{equation}
is in bijective correspondence with the set of admissible triples given by Lemma 5.5 using
$\lambda\in \cD_+(u,v,\mu)$ and $e^{\ri \xi}$ taking the form
\begin{equation}
(e^{\ri \xi_1},\dots, e^{\ri \xi_n}, e^{\ri \xi_{n+1}},\dots, e^{\ri \xi_{2n}}) =
(e^{\ri \theta_1},\dots, e^{\ri \theta_n}, 1,\dots, 1)
\quad\hbox{with}\quad e^{\ri \theta}\in \T^n.
\label{F44}\end{equation}
This yields a symplectomorphism between $\cM_\red^+$ equipped with the restriction of $\omega_\red$
and the product manifold $ \cD_+(u,v,\mu) \times \T^n$ equipped with the symplectic
form
$ \sum_{j=1}^n d \theta_j \wedge d \lambda_j$.
}
\medskip
\noindent
{\bf Proof.}
In what follows, we first show that all triples
given by Lemma 5.5 are admissible, that is, they represent elements of
$\cN$. In particular\footnote{From now on we drop $u,v,\mu$
from $\cD(u,v,\mu)$, $\cD_+(u,v,\mu)$ and $\cD^\sreg(u,v,\mu)$.}, $\cD$ defined in (\ref{C30}) is the closure of
$\cD_+$ in (\ref{F18}).
Then we apply a density argument to demonstrate that the admissible triples of Lemma 5.5 exhaust $\cN$.
Finally, we explain the statement about the model of the subset $\cM_\red^+$ of $\cM_\red$.
We have seen that for any $\lambda \in \cD^\sreg\subset \cD$ every admissible triple
$(\tilde w,Q,\lambda)$ is of the form
(\ref{F40}),
and we also know that $\cD^\sreg$ is a non-empty open
subset
of $\cD_+$. By noting that the triple (\ref{C32})
is admissible whenever $(\tilde w,Q,\lambda)$ is admissible, we conclude that the conditions on admissible triples
formulated in Definition 4.4 are satisfied by the triples given by (\ref{F40}) with
$(\lambda, e^{\ri \xi})$ taken from the open subset
$\cD^\sreg \times \T^{2n} \subset \cD_+ \times \T^{2n}$.
Because these conditions require the vanishing of analytic functions, they must then hold on the connected
open set $\cD_+ \times \T^{2n}$, and by continuity on its closure as well. Thus, we have proved that all triples
given by Lemma 5.5 are admissible. On account of (\ref{F36}),
this implies that $\cD = \overline{\cD_+}$.
We now show that Lemma 5.5 gives \emph{all} admissible triples.
To this end, let us choose an admissible triple, denoted $( \tilde w_\sharp, Q_\sharp, \lambda_\sharp)$, for which
$\lambda_\sharp\in (\cD \setminus \cD^\sreg)$. This corresponds by equation
(\ref{C13}) to some element $g_{1\sharp}\in \cM _1$, which
is obtained by a right-handed gauge transformation from some element $g_\sharp\in \cM_0$.
We fix $g_{1\sharp}$ and $g_\sharp$.
We can find a sequence $g(j)\in \cM_0^\sreg$ that converges to $g_\sharp$, because
$\cM_0^\sreg$ is a dense subset of $\cM_0$.
It is easy to see that the sequence $g(j)$ can be gauge transformed into a sequence $g_1(j)\in \cM_1$ (\ref{P6})
that \emph{converges to} $g_{1\sharp}$.
(This follows from the continuous dependence on $g$ of the eigenvalues $\beta_i^2$ of $\chi \chi^\dagger$,
where $\chi$
is the top-right block of $b$ from $g=kb\in \cM_0$.)
The convergent sequence $g_1(j)\in \cM_1$
corresponds by equation (\ref{C13}) to a sequence $(\tilde w(j), Q(j), \lambda(j))$ of strongly regular
admissible triples
that converges to $( \tilde w_\sharp, Q_\sharp, \lambda_\sharp)$.
Then, as for any $\lambda \in \cD^\sreg$ every admissible triple is of the form (\ref{F40}), we obtain
a sequence
$(\lambda(j), e^{\ri \xi}(j))\in \cD^\sreg \times \T^{2n}$ that obeys
\begin{equation}
\lim_{j\to \infty} \left(\tilde w\!\left(\lambda(j), e^{\ri \xi}(j)\right)\,,\, Q\!\left(\lambda(j), e^{\ri \xi}(j)\right)\,, \,\lambda(j)\right) =
\left(\tilde w_\sharp\,,\, Q_\sharp\,,\, \lambda_\sharp\right).
\label{F46}\end{equation}
By the compactness of $\T^{2n}$, possibly passing to a subsequence, we can assume that $e^{\ri \xi}(j)$ converges
to some $e^{\ri \xi_\sharp}$.
By the continuous dependence of the triple in Lemma 5.5 on $(\lambda, e^{\ri \xi})$, it finally follows that
\begin{equation}
\left(\tilde w_\sharp\,,\, Q_\sharp\,, \,\lambda_\sharp\right) = \left(\tilde w\!\left(\lambda_\sharp, e^{\ri \xi_\sharp}\right)\,, \, Q\!\left(\lambda_\sharp, e^{\ri \xi_\sharp}\right)\,,\, \lambda_\sharp\right),
\label{F47}\end{equation}
i.e., every admissible triple is given by Lemma 5.5.
It remains to establish the
symplectomorphism between $\cM_\red^+$ in (\ref{F43}) and $\cD_+ \times \T^n$.
Before going into this, we need some preparation.
We first note that $\cM^+_\red$ is an open subset of $\cM_\red$, since
$\cD_+$ is an open subset of $\R^n$ and $\cL\colon \cM_0 \to \R^n$
defined in (\ref{C30}) is a continuous, gauge invariant map,
which descends to a continuous map from $\cM_\red$ to $\R^n$.
As a consequence of (\ref{F35}), $\cM_\red^+$ is dense in $\cM_\red$.
It is also true that $\cL$ is an analytic map,
because its components are logarithms of eigenvalues of $gg^\dagger$, and
(\ref{F36}) ensures that
the eigenvalues of $gg^\dagger$ are pairwise distinct positive numbers for any $g\in \cM_0$.
Let us define $\cM_0^+:= \cL^{-1}(\cD_+)$, and introduce also
$\cM_1^+:= \cM_1\cap \cM_0^+$, as well as the subset $\cN^+\subset \cN$ consisting of the admissible triples
$(\tilde w, Q, \lambda)$ for which $\lambda \in\cD_+$.
Finally, let $\cS^+\subset \cN^+$ stand for the set of admissible triples parametrized by $\cD_+ \times \T^n$
using (\ref{F40}) with $\lambda\in \cD_+$ and
the phases $e^{\ri \xi_a}$ of $\tilde w_a$ satisfying (\ref{F44}).
Any admissible triple $(\tilde w, Q, \lambda) \in \cN^+$
is gauge equivalent to a unique admissible triple in $\cS^+$, parametrized
by $(\lambda, e^{\ri \theta})\in \cD_+ \times \T^n$ with
\begin{equation}
e^{\ri \theta_j } = \frac{\tilde w_j \tilde w_{j+n}^*}{ \vert \tilde w_j \tilde w_{n+j}\vert},
\qquad j=1,\dots, n.
\label{F50}\end{equation}
By this formula, we can view $e^{\ri \theta}$ as a gauge invariant
function on $\cN^+$, and we also obtain the identification $\cN^+/\T_n \simeq \cS^+$
with respect to the gauge action in (\ref{C29}).
Now we define a map
\begin{equation}
\psi_+\colon \cM_0^+ \to \cD_+ \times \T^n\equiv \cS^+
\label{psi+}\end{equation}
by composing a gauge transformation
$f_0\colon \cM_0^+ \to \cM_1^+$ with the map $\pi_1\colon \cM_1^+ \to \cN^+$
given by equation (\ref{C13}), and with the map $\cN^+ \to \cS^+$ operating according to (\ref{F50}).
(The notations are borrowed from Figure 2. See also Remark 3.1.)
Since the $\lambda$-values belonging to $\cD_+$ are regular,
the map $\psi_+$ is smooth (even analytic). It is obviously gauge invariant, surjective and maps
different gauge orbits to different points.
Therefore $\psi_+$ descends to a one-to-one smooth map
$\Psi_+\colon \cM_\red^+ \to \cD_+ \times \T^n$.
It was shown in \cite{FM} (without explicitly specifying the domain $\cD_+$ in the calculation)
that $\Psi_+$ satisfies
\begin{equation}
\Psi_+^* (\sum_{j=1}^n d \theta_j \wedge d \lambda_j)= \omega_\red^+
\label{F49}\end{equation}
with the restriction $\omega_\red^+$ of the reduced symplectic form on $\cM_\red^+ \subset \cM_\red$.
In particular, the Jacobian determinant of $\Psi_+$ is everywhere non-vanishing, and therefore the inverse map is
also smooth (and analytic).
\hfill $\square$
\medskip
We finish this section with a few remarks.
The strong regularity condition was employed to ensure
that we never divide by zero in the course of the analysis.
The non-vanishing of $ p_1$ (\ref{F1}) and the first factor of $ p_2$ (\ref{F2}) prevents zero
denominators in (\ref{F8}), (\ref{F9}) and (\ref{F14}). The non-vanishing of the second factor of
$ p_2$
was used in the argument below (\ref{F11}). The last two factors of $ p_2$
exclude the vanishing of the functions $\cF_k$ (\ref{F15}) or of a component of $\rho$ (\ref{C9}),
which are not differentiable at those excluded values of $\lambda$ on account of some square roots
becoming zero.
Notice from (\ref{F50}) that (because of vanishing denominators) the
variables $e^{\ri \theta_j}$ cannot all be well-defined at points where $\lambda$ belongs to the boundary of $\cD$.
Up to this point in the paper, we have not
used the assumption (\ref{I13}).
We shall utilize it in the following section,
where we introduce new
variables that cover also the part of $\cM_\red$ associated with the boundary of $\cD$.
Imposing $\vert u\vert >\vert v \vert $
ensures, by virtue of $\cD= \overline{\cD_+}$ (\ref{F37}), that the regularity condition (\ref{P12}) holds globally,
since $\lambda_n > \vert v \vert$ is equivalent to $\beta_n >0$.
This in turn ensures, by the arguments developed in Section 3 and Section 4 (see
(\ref{P25}) and (\ref{C26})), that we
have the identification
\begin{equation} \cM_\red =(K_+(\hat w)\times \T_1)\backslash \cM_1/ \T_{n-1} = \cN/\T_n.
\label{F38}\end{equation}
If $\vert v \vert > \vert u \vert$, then $\beta_n=0$ corresponding to
$\lambda_n = \vert v \vert$ is allowed for elements of $\cM_1$.
As mentioned after equation (\ref{P29}), this would complicate
the arguments. Also, if $\beta_n =0$, then the corresponding isotropy group
$K_+(\lambda)$ that appears in (\ref{C26}) is larger than $\T_{n-1}$ in (\ref{P13}).
The desire to avoid these complications, together with the symmetry mentioned above (\ref{I13}),
motivates adopting this assumption in Section 6.
Finally, we recall from \cite{FM} that
\emph{the reduction of $\cH_1$ (\ref{T13})
gives the RSvD type Hamiltonian (\ref{I11}) in terms of the Darboux variables $(\lambda, e^{\ri \theta})$}.
\section{The global model $M$ of $\cM_\red$ and consequences}
\label{sec:6}
We construct the global model $M$ by bringing every admissible triple $(\tilde w, Q, \lambda)\in \cN$
to a convenient normal form. We then present consequences
for our pair of integrable systems.
\subsection{Construction of the model $M$ of $\cM_\red$}
Adopting the assumption (\ref{I13}), we start with the observation that most (but not all) functions
$\vert \tilde w_a \vert(\lambda)$ contain a factor of the form
\begin{equation}
\sqrt{\lambda_j - \lambda_{j+1} - \mu}, \quad j=1,\dots, n-1,
\qquad
\sqrt{\lambda_n - \vert u \vert},
\label{S1}\end{equation}
multiplied by a function of $\lambda$ which is strictly positive and analytic
in an open neighbourhood of $\cD\equiv \cD(u,v,\mu)$.
On account of the formula (\ref{F40}), the moduli of the components of $Q$ depend only on $\lambda$,
and for certain indices they are strictly positive, analytic functions.
The precise way in which this happens depends on the sign of $u$, and
now we assume for concreteness that
\begin{equation}
\vert u \vert > \vert v \vert \quad\hbox{and}\quad u < 0.
\label{S2}\end{equation}
We shall comment on the modifications necessary when this does not hold.
\medskip \noindent
{\bf Lemma 6.1.}
\emph{Under the assumptions (\ref{S2}), for every admissible triple $(\tilde w, Q, \lambda)\in \cN$ we have
\begin{equation}\begin{aligned}
&\vert \tilde w_1 \vert = f_1(\lambda),
\\
&\vert \tilde w_j \vert = \sqrt{\lambda_{j-1} - \lambda_j -\mu} \,f_j(\lambda), \quad j=2,\dots, n-1,\\
&\vert \tilde w_n \vert = \sqrt{\lambda_n - \vert u \vert}\,\sqrt{\lambda_{n-1} - \lambda_n - \mu}\, f_n(\lambda),\\
&\vert \tilde w_{n+j} \vert = \sqrt{\lambda_{j} - \lambda_{j+1} -\mu} \,f_{n+j}(\lambda), \quad j=1,\dots, n-1,\\
&\vert \tilde w_{2n} \vert = f_{2n}(\lambda), \\
\end{aligned}\label{S3}\end{equation}
and
\begin{equation}
\begin{aligned}
&\vert Q_{j+1, n+j}\vert = f_{j+1, n+j}(\lambda),\quad j=1,\dots, n-2,\\
&\vert Q_{n, 2n-1}\vert = \sqrt{\lambda_n - \vert u \vert}\,f_{n, 2n-1}(\lambda),
\end{aligned}
\label{S4}\end{equation}
where the $f_i$ and the $f_{j+1, n+j}$ are strictly positive, analytic functions in a neighbourhood of $\cD$.
All components of $\rho(\lambda)$ (\ref{C9}) are also analytic functions in a neighbourhood of $\cD$. }
\medskip
It is straightforward to write explicit formulae for the functions $f_i$ and $f_{j+1,n+j}$.
We shall not use them, but for completeness present some of them in Appendix A.
Here, we note only that, as was pointed out in the proof of Lemma 5.5, the vanishing denominators
of $C_{j+1,n+j}$ in
$Q_{j+1, n+j}$ are cancelled by a zero of $\tilde w_{j+1} \tilde w_{n+j}^*$, for any $j$.
Analogous formulae can be written for all matrix elements of $Q$.
The only non-displayed matrix element of $Q$ that never vanishes is $Q_{1,2n}$.
The factors (\ref{S1}) lose their smoothness when they become zero, which happens at the boundary of $\cD$.
This is analogous to the failure of the function $f\colon \C \to \R$ given by $f(z) = \vert z\vert$
to be differentiable at the origin in $\C$.
Our globally valid new variables will be $n$ complex numbers running over $\C$, whose
moduli are the factors (\ref{S1}).
Before presenting this, let us remark that
in terms of a complex variable the standard symplectic form on $\R^2\simeq \C$
can be written (up to a constant) as $\ri dz \wedge d z^*$, and the equality
\begin{equation}
\ri dz \wedge d z^*= d r^2 \wedge d\phi \quad\hbox{with}\quad z = r e^{\ri \phi}
\label{S5}\end{equation}
holds on $\C^*= \C\setminus\{0\}$.
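Indeed, writing $dz = e^{\ri \phi}(dr + \ri r\, d\phi)$ and $dz^* = e^{-\ri \phi}(dr - \ri r\, d\phi)$, we obtain
\[
dz \wedge dz^* = -2 \ri\, r\, dr \wedge d\phi,
\qquad\hbox{whence}\qquad
\ri\, dz \wedge dz^* = 2 r\, dr \wedge d\phi = d r^2 \wedge d\phi.
\]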
This motivates introducing new Darboux coordinates on $\cD_+ \times \T^n$ as in the next lemma.
\medskip\noindent
{\bf Lemma 6.2.}
\emph{
The following formulae define a diffeomorphism from $\cD_+\times\T^n$ onto $(\C^*)^n$:
\begin{equation}
\zeta_j:= \sqrt{\lambda_j - \lambda_{j+1} - \mu} \prod_{l=1}^j
e^{-\ri \theta_l} \quad\hbox{for}\quad j=1,\dots, n-1,
\quad
\zeta_n:= \sqrt{\lambda_n - \vert u \vert} \prod_{l=1}^n e^{-\ri \theta_l}.
\label{S6}\end{equation}
The symplectic form that appears in (\ref{F49}) satisfies
\begin{equation}
\sum_{j=1}^n d\theta_j \wedge d\lambda_j = \ri \sum_{j=1}^n d\zeta_j \wedge d \zeta_j^*.
\label{S7}\end{equation}
}
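This can be verified by applying (\ref{S5}) to each $\zeta_j = r_j e^{-\ri \Theta_j}$, where
$\Theta_j = \sum_{l=1}^j \theta_l$ and $r_j = \vert \zeta_j \vert$, and then telescoping:
\[
\ri \sum_{j=1}^n d\zeta_j \wedge d\zeta_j^*
= - \sum_{j=1}^n d r_j^2 \wedge d\Theta_j
= - \sum_{l=1}^n \Big( \sum_{j=l}^n d r_j^2 \Big) \wedge d\theta_l
= \sum_{l=1}^n d\theta_l \wedge d\lambda_l,
\]
where in the last step we used that $\sum_{j=l}^n d r_j^2 = d\lambda_l$ by the definitions (\ref{S6}).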
Extending the definition (\ref{S6}) to $\cD \times \T^n$,
the boundary of $\cD$ corresponds to the subset of $\C^n$ on which
$\prod_{i=1}^n \zeta_i=0$.
Since we know that the boundary of $\cD$ is part of the admissible $\lambda$-values,
it is already rather clear that the $\zeta_i$ as defined above extend to global
coordinates on $\cM_\red$. Nevertheless, this requires a proof.
The proof will illuminate the origin of the complex variables $\zeta_i$.
It is clear from Lemma 6.1 that for any $(\tilde w, Q, \lambda)\in \cN$ there exists a
unique gauge transformation\footnote{One also sees from this that the action of $\T_n$ on $\cN$ is free.
This can be used to confirm that the effective gauge group (\ref{T34}) acts freely on $\cM_0$.}
(\ref{C29}) by $\tau=\tau(\tilde w, Q, \lambda)\in \T_n$ (\ref{P15})
such that for the gauge
transformed triple the first and last components of $\tau \tilde w$ are real and positive and
the components $(\tau Q \tau^{-1})_{j+1, j+n}$ are real and negative for all $j=1,\dots, n-2$.
(The choice of negative sign stems from (\ref{F41}).)
This map can be calculated explicitly.
By using this, we are able to obtain an analytic, gauge invariant map from $\cM_0$ onto
$\C^n$, which gives rise to a symplectomorphism between $\cM_\red$ and $\C^n$.
Below, we elaborate this statement.
\medskip
\noindent
{\bf Definition 6.3.} Let $S \subset \cN$ be the set of admissible triples, denoted $(\tilde w^S, Q^S, \lambda)$,
satisfying the following gauge fixing conditions:
\begin{equation}
\tilde w^S_1 >0, \quad \tilde w^S_{2n}>0, \quad Q^S_{j+1, n+j} <0 \quad\hbox{for}\quad j=1,\dots, n-2.
\label{S8}\end{equation}
As in the proof of Theorem 5.6, let
$\cS^+\subset \cN^+$ denote the set of admissible triples parametrized by $\cD_+ \times \T^n$
using (\ref{F40}) with $\lambda\in \cD_+$ and
the phases $e^{\ri \xi_a}$ of $\tilde w_a$ satisfying (\ref{F44}).
\medskip
We know that $\cS^+$ defines a unique normal form for the elements of $\cN^+\subset \cN$, and
$S$ defines a unique normal form for the whole of $\cN$.
For any $(\tilde w, Q, \lambda)\in \cN$, we define the $n$ phases $X_1, X_{2n}, X_{j+1,n+j}\in \U(1)$ by
writing
\begin{equation}
\tilde w_1 = X_1 f_1(\lambda),\quad \tilde w_{2n} = X_{2n} f_{2n}(\lambda),\quad
Q_{j+1, n+j} = - X_{j+1, n+j} f_{j+1, n+j}(\lambda)
\label{S9}\end{equation}
for every $j=1,\dots, n-2$. The map $(\tilde w, Q, \lambda)\mapsto (\tilde w^S, Q^S,\lambda)$
sends any admissible triple to the intersection of its $\T_n$ orbit (defined by (\ref{C29}))
with $S$, which is given by
\begin{equation}
(\tilde w^S, Q^S,\lambda) =( \tau \tilde w, \tau Q \tau^{-1},\lambda )
\,\,\, \hbox{with}\,\,\,
\tau_1 = X_1^{-1},\quad \tau_{2n}= X_{2n}^{-1},\quad \tau_j =X_1^{-1}\prod_{i=1}^{j-1} X_{i+1,n+i}^{-1}
\label{S10} \end{equation}
for $j=2,\dots, n-1$. This yields $\tilde w^S$ and $Q^S$
as gauge invariant functions on $\cN$, and by using them we can define the
$\C^n$ valued gauge invariant map $\pi_\cN\colon (\tilde w, Q, \lambda) \mapsto \zeta$ on $\cN$ as follows:
\begin{equation} \begin{aligned}
&\zeta_j(\tilde w, Q, \lambda) := \tilde w^S_{n+j}/f_{n+j}(\lambda), \quad j=1,\dots, n-1,\\
&\zeta_n(\tilde w, Q, \lambda):= (Q^S_{n, 2n-1})^*/f_{n,2n -1}(\lambda).
\end{aligned}\label{S11}\end{equation}
For the remaining components of the function $\tilde w^S$ given by (\ref{S10}), we find
\begin{equation}
\tilde w^S_j = \zeta_{j-1} f_j(\lambda), \quad j=2,\dots, n-1,\quad \tilde w^S_n=
\zeta_n^* \zeta_{n-1} f_n(\lambda)
\label{S12}\end{equation}
with the functions of $\lambda$ in (\ref{S3}), and of course $\tilde w^S_1 = f_1(\lambda)$ and
$\tilde w^S_{2n} = f_{2n}(\lambda)$.
The function $Q^S$ (\ref{S10}) is given by substituting $\tilde w^S$ for $\tilde w$ in the formula (\ref{F40}).
Equation (\ref{S12}) can be checked
by writing every $(\tilde w, Q, \lambda)$ in terms of
$(\lambda, e^{\ri \xi}) \in \cD \times \T^{2n}$ as in (\ref{F40}), cf.~Lemma 5.5.
By applying this, we obtain, for $j=1,\dots, n-1$,
\begin{equation}
\zeta_j = \sqrt{ \lambda_j - \lambda_{j+1} -\mu} \prod_{l=1}^j e^{-\ri \xi_l} e^{\ri \xi_{n+l}}
\quad\hbox{and}\quad
\zeta_n = \sqrt{\lambda_n - \vert u \vert} \prod_{l=1}^n e^{-\ri \xi_l} e^{\ri \xi_{n+l}}.
\label{S13}\end{equation}
This shows manifestly that the range of $\zeta$ covers the whole of $\C^n$.
If we restrict this formula to $\cS^+$, parametrized by $\cD_+ \times \T^n$ using (\ref{F44}),
\emph{then we recover our previous formulae (\ref{S6})}.
We now summarize these claims.
\medskip \noindent
{\bf Proposition 6.4.} \emph{The $\T_n$ gauge invariant map
$ \pi_\cN\colon (\tilde w, Q, \lambda) \mapsto \zeta$ exhibited
in (\ref{S11})
induces a bijection between $\cN/\T_n$
and $\C^{n}$. The restriction of the component functions $\zeta_i$ to $\cS^+ \subset \cN$
is given by the formula (\ref{S6}).
The inverse map from $\C^n$ to $S\simeq \cN/\T_n$
can be written down explicitly by first expressing $\lambda$ in terms of $\zeta$ as
\begin{equation}
\lambda_j = \vert u \vert + (n-j) \mu + \sum_{l=j}^n \vert \zeta_l \vert^2, \quad j=1,\dots, n,
\label{S14}\end{equation}
then expressing $\tilde w^S$ by means of $\zeta$ using (\ref{S11}) and (\ref{S12}), and finally obtaining
$Q^S$ as a function of $\zeta$ via substitution of $\tilde w^S(\zeta)$ for $\tilde w$
in the formula (\ref{F40}).}
\medskip
\noindent
{\bf Proof.} The surjectivity onto $\C^n$ was explained above, and
the injectivity is clear because we can explicitly write down the inverse from $\C^n$ onto
the global cross-section $S$ of the $\T_n$ action on $\cN$.
\hfill $\square$
\medskip
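We note that formula (\ref{S14}) follows from (\ref{S13}) by a telescoping sum, since
$\vert \zeta_l \vert^2 = \lambda_l - \lambda_{l+1} - \mu$ for $l=1,\dots, n-1$ and
$\vert \zeta_n \vert^2 = \lambda_n - \vert u \vert$:
\[
\sum_{l=j}^{n} \vert \zeta_l \vert^2
= \sum_{l=j}^{n-1} (\lambda_l - \lambda_{l+1} - \mu) + \lambda_n - \vert u \vert
= \lambda_j - (n-j)\mu - \vert u \vert.
\]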
Our main theorem says that the construction just presented gives a global
model of $\cM_\red$:
\begin{equation}
(M, \omega) \equiv (\C^n, \omega_{\can})
\quad \hbox{with}\quad
\omega_{\can} = \ri \sum_{j=1}^n d \zeta_j \wedge d \zeta_j^*.
\label{S15}\end{equation}
\medskip
\noindent
{\bf Theorem 6.5.} \emph{Take an arbitrary element $g_0\in \cM_0$ and pick $g(g_0)$ to be an element of $\cM_1$
which is gauge equivalent to $g_0$.
Then define the map $\psi\colon \cM_0 \to \C^n$ by the rule
\begin{equation}
\psi\colon g_0 \mapsto \zeta\left(\tilde w(g(g_0)), Q(g(g_0)), \lambda(g(g_0))\right),
\label{S16}\end{equation}
combining (\ref{S11}) with the map $ \cM_1\ni g \mapsto (\tilde w, Q, \lambda)\in \cN$ given by
equations (\ref{C12}) and (\ref{C13}).
The map $\psi$ is analytic, gauge invariant and it descends to a diffeomorphism $\Psi\colon \cM_\red \to \C^n$
having the symplectic property
\begin{equation}
\Psi^* (\omega_{\can}) = \omega_\red.
\label{S17}\end{equation}
}
\medskip
\noindent
{\bf Proof.}
Since $\psi$ does not depend on the choice of $g(g_0)$,
its analyticity follows from the possibility of an analytic local choice (see Remark 3.1) and from the explicit formulae involved in the definition
(\ref{S16}).
Its bijective character is a direct consequence of Proposition 6.4.
The symplectic property follows from Theorem 5.6
and a density argument.
Namely, on $\cM_\red^+$ we can convert $\Psi_+$ satisfying (\ref{F49}) into $\Psi$
by means of the map $(\lambda, e^{\ri \theta}) \mapsto \zeta$ as given by (\ref{S6}).
This and Lemma 6.2 imply the equality (\ref{S17}) for the restriction of $\Psi$ on $\cM_\red^+$,
and then the equality
extends to the whole space by the smoothness of $\Psi$, $\omega_\can$ and $\omega_\red$.
As a consequence of (\ref{S17}), the inverse map is
smooth as well.
\hfill $\square$
\medskip\noindent
{\bf Remark 6.6.}
The formulae of the complex variables used in Section 2.2 can be converted into those applied
in this section by introducing new `tilded variables' as
\begin{equation}
\tilde \lambda_j := - \hat\lambda_{n+1-j} + c,\quad
\tilde \theta_j := - \hat \theta_{n+1-j},\quad
\tilde \cZ_{k} := \cZ_{n-k},\quad \tilde\cZ_n := \cZ_n,
\label{tildedvars}\end{equation}
for $j=1,\dots, n$ and $k=1,\dots, n-1$.
Then $\tilde \cZ$ depends on $\tilde \lambda, \tilde \theta$
by the same formula (\ref{S6}) whereby $\zeta$ depends on $\lambda, \theta$.
By choosing the constant $c$ appropriately, the domain of $\tilde \lambda$ also becomes identical
to the domain of $\lambda$.
\medskip\noindent
{\bf Remark 6.7.}
As promised, we now comment on the modification of the construction for the cases when (\ref{S2}) does not hold.
If instead we have $\vert u \vert > \vert v \vert$ and $u>0$, then the definition (\ref{S6}) is
still applicable, but (\ref{F15}) implies that
the factor $\sqrt{\lambda_n- \vert u \vert}$ is contained in $\vert \tilde w_{2n}\vert$
instead of $\vert\tilde w_n\vert$,
and thus $\vert Q_{n,2n-1}\vert$ does not contain this factor
(cf.~(\ref{S3})). Then one may proceed by defining a global cross-section $S \subset \cN$ with the
help of the gauge fixing conditions $\tilde w_1^S>0$ and $Q^S_{j+1, n+j}<0$ for all
$j=1,\dots, n-1$ (cf.~(\ref{S8})).
The construction works quite similarly to the above one, and all consequences described in
the next subsection remain true.
As was discussed in the Introduction, we can impose (\ref{I13}) without loss of generality.
Nevertheless, it could be a good exercise to detail the construction of
the counterpart of our model $M$ when (\ref{I13}) does not hold. We only note that one must then define
$\zeta_n$ in such a way that $\vert \zeta_n\vert = \sqrt{\lambda_n - \vert v \vert}$ and use that,
on account of (\ref{C5}),
this factor is contained in a matrix element of $\rho(\lambda)$ (\ref{C9}).
\subsection{Consequences of the model of $M$ and the duality map}
Our symplectic reduction yields two Abelian Poisson algebras,
$\fH_\red^1$ and $\fH^2_\red$, on the reduced phase space $(\cM_\red, \omega_\red)$.
Concretely, $\{ \cH_j^\red\}_{j=1}^n $, descending from the functions $\cH_j$ (\ref{T13}), is a generating set for $\fH^1_\red$
and $\{ \hat \cH_j^\red\}_{j=1}^n$, descending from the functions $\hat \cH_j$ (\ref{T9}), is a generating set for
$\fH^2_\red$.
We have two models $(\hat M, \hat \omega)$ and $(M,\omega)$
of $(\cM_\red, \omega_\red)$, endowed with the symplectomorphisms
\begin{equation}
\hat \Psi\colon \cM_\red \to \hat M, \qquad \Psi\colon \cM_\red \to M,
\label{S18}\end{equation}
and the duality map
\begin{equation}
\quad \cR:= \hat \Psi \circ \Psi^{-1}\colon M \to \hat M.
\label{S19}\end{equation}
The restriction of
\begin{equation}
\hat H\equiv \hat \cH_1^\red \circ \hat \Psi^{-1}
\label{hatHid}\end{equation}
to $\hat M^o = (\C^*)^n $ acquires the
form
(\ref{I6}) if $\hat M^o$ is parametrized by $\widehat\cD_+ \times \T^n$ as described in Section 2.2,
and the restriction of
\begin{equation}
H \equiv \cH_1^\red \circ \Psi^{-1}
\label{Hid}\end{equation}
to $M^o = (\C^*)^n$ takes
the form (\ref{I11}) if $M^o = (\C^*)^n$ is parametrized by $\cD_+ \times \T^n$ as
given by (\ref{S6}).
The interpretations of the reduced Hamiltonians from the perspective of the model
$(\hat M,\hat \omega)$ were
outlined
in Section 2.2,
and we now discuss the significance of the model $(M, \omega)$.
The first basic point about $M$ is that the flow of the RSvD type Hamiltonian
$H$ (\ref{I11}) is not complete on the dense open subset $M^o \subset M$, while its
reduction origin ensures completeness on $M$.
The flows of all $\cH_j^\red \circ \Psi^{-1}$ are also complete on $M$, simply because they are projections
of complete flows on the unreduced phase space $\cM$.
The second basic point is that $(M,\omega)$ serves naturally as action-angle phase space
for the integrable Hamiltonians $\hat \cH_j^\red \circ {\hat \Psi}^{-1}$, which include the RSvD type Hamiltonian
(\ref{I6}).
Indeed, the map $\cR$ `trivializes'
the Hamiltonians $\hat \cH_j^\red \circ \hat \Psi^{-1}$, since we have
\begin{equation}
(\hat \cH_j^\red \circ \hat \Psi^{-1})\circ \cR = \hat \cH_j^\red \circ \Psi^{-1}= \sum_{l=1}^n \cosh (2 j \lambda_l).
\label{S20}\end{equation}
Thus, the functions $\lambda_l\colon M\to \R$ are action variables for the (completed)
integrable many-body system (\ref{I6}) on $\hat M$.
The actions $\lambda_l$ are related by a $\GL(n,\Z)$ transformation combined with a constant shift
to the distinguished
action variables defined by the functions $\vert \zeta_i \vert^2$ on $M = \C^n$.
These latter action variables generate the standard $\T^n$ action on
$M= \C^n$.
The origin $\zeta=0$ is a fixed point for the torus action, and it represents
the unique joint minimum of the Hamiltonians (\ref{S20}).
Moreover, this is the only equilibrium point that any single Hamiltonian of the form (\ref{S20}) possesses.
It follows from the above that $ \cR(0)\in \hat M $ is a joint equilibrium point for the Hamiltonians
$\hat \cH_j^\red \circ \hat \Psi^{-1}$.
It also follows that each Hamiltonian $\hat \cH_j^\red\circ \hat \Psi^{-1}$
is non-degenerate (has no extra conserved quantities), because this property is easily seen
for the equivalent Hamiltonians (\ref{S20}).
Of course, one can write down the analogues of equations (\ref{T49}) -- (\ref{T52}) for the
flows of the Hamiltonians (\ref{S20}) on $M$.
For any fixed $j$, the counterparts of the $n$ frequencies (\ref{T51}) are
given by $\Omega_{j,a}(\lambda) = 2 j \sinh(2j\lambda_a)$, which
are generically linearly independent over the field of
rational numbers.
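Indeed, the frequencies arise by differentiating the Hamiltonians (\ref{S20}) with respect to the
action variables (up to the overall sign fixed by one's convention for Hamilton's equations):
\[
\Omega_{j,a}(\lambda) = \frac{\partial}{\partial \lambda_a} \sum_{l=1}^n \cosh(2 j \lambda_l)
= 2 j \sinh(2 j \lambda_a).
\]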
The existence of an equilibrium point for $\hat H$ (\ref{I6})
is not obvious. It is an open problem to find the $\cZ$-coordinates of $\cR(0)\in \hat M$; we believe that it
lies inside the dense open set $\hat M^o$.
A similar open problem is to find $\cR^{-1}(0)\in M$, which gives the unique joint equilibrium for the
Hamiltonians $\cH_j^\red \circ \Psi^{-1}$.
We have established the alternative
interpretations of the $\vert \zeta_i \vert^2\in C^\infty(M)$ as action variables for
$\hat \cH_j^\red \circ \hat \Psi^{-1}$ and
global position variables for $\cH_j^\red \circ \Psi^{-1}$, respectively.
At the same time, the functions $\vert \cZ_i \vert^2\in C^\infty(\hat M)$ serve as actions for
$\cH_j^\red \circ \Psi^{-1}$
and global position variables for $\hat \cH_j^\red \circ \hat \Psi^{-1}$.
This shows that the integrable many-body systems engendered by the `main Hamiltonians' displayed in
(\ref{I6}) and (\ref{I11}) are indeed in action-angle duality.
A special feature of the dual pair at hand is that the action-angle phase spaces $(M,\omega)$ and
$(\hat M,\hat \omega)$
are also \emph{the same} in an obvious manner, namely, both are equal to $(\C^n,\omega_\can)$.
Distinguished action variables of both systems generate the standard
torus action on $\C^n\simeq \R^{2n}$ equipped with its canonical symplectic form.
It is by no means true that every Liouville integrable system corresponds to a globally well-defined
Hamiltonian torus action, and for global torus actions there could be several
inequivalent possibilities.
Integrable many-body systems in action-angle duality live on symplectomorphic phase spaces,
but their respective action variables cannot in general be intertwined by a symplectomorphism.
Apart from the current example and self-dual systems, such an action-intertwining symplectomorphism
was previously found only for dual pairs of purely scattering systems, such as
the hyperbolic Sutherland system and its Ruijsenaars dual \cite{SR88}, and the analogous
$\mathrm{BC}_n$ systems \cite{P}.
It may be worth stressing that the duality map $\cR$ (\ref{S19}) is just the identity map on $\cM_\red$
written in terms of two distinct models. On the other hand, the
map $M \to \hat M$ given by $\zeta \mapsto \cZ = \zeta$
encodes
a non-trivial map on $\cM_\red$, for which $\Psi^{-1}(\zeta) \mapsto \hat \Psi^{-1}(\zeta)$,
$\forall \zeta\in \C^n$.
We end by remarking that one can perform semiclassical quantization for both systems using their respective
action variables. Even more, one can quantize any action variable of the form
$\vert \zeta_j\vert^2\in C^\infty(\C)$ by the replacement
\begin{equation}
\zeta_j^* \zeta_j \longrightarrow {\hat \zeta_j}^\dagger \hat \zeta_j,
\label{S21}\end{equation}
where the hatted letters stand for annihilation and creation operators on the standard Fock space.
In this manner, one obtains that the spectrum of each action variable $\vert \zeta_j \vert^2$ consists
of all non-negative integers. This then gives immediately the (semi-classical) spectra of the corresponding
integrable Hamiltonians. Regarding the Hamiltonians (\ref{S20}),
one simply expresses $\{\lambda_i\}$ in terms of $\{\vert \zeta_j \vert^2\}$.
One can deal with the Hamiltonians $\cH_j^\red\circ \hat \Psi^{-1}$ (\ref{T43})
in the same spirit.
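The replacement (\ref{S21}) is easy to exhibit concretely: realizing $\hat \zeta_j$ and $\hat \zeta_j^\dagger$ as truncated matrices on a finite-dimensional Fock space, the spectrum of ${\hat \zeta_j}^\dagger \hat \zeta_j$ consists of non-negative integers, up to the truncation. A minimal numerical sketch (the cutoff dimension is an arbitrary illustrative choice):

```python
import numpy as np

d = 30                                         # Fock-space cutoff (illustrative)
a = np.diag(np.sqrt(np.arange(1.0, d)), k=1)   # annihilation operator: a|m> = sqrt(m)|m-1>
number_op = a.conj().T @ a                     # the quantized action zeta^dagger zeta
spectrum = np.sort(np.linalg.eigvalsh(number_op))
```

Up to the truncation, the spectrum is exactly $\{0,1,2,\dots\}$, and expressing $\{\lambda_i\}$ through $\{\vert\zeta_j\vert^2\}$ converts these integer spectra into the semiclassical spectra of the corresponding Hamiltonians.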
\section{Discussion and outlook}
\label{sec:7}
We have presented
a thorough description of
the models $M$ and $\hat M$ of the reduced phase space $\cM_\red$ (\ref{T33}) and gained
a detailed understanding of how these models are equipped with a pair of
integrable many-body systems in action-angle duality.
Our principal result is that we have established the validity
of Figure 1 of the Introduction for the case at hand.
In particular, we have seen that $\lambda\colon M \to \R^n$ yields via the duality map $\cR$
the momentum map
for the torus action associated with the integrable Hamiltonians
$\hat \cH_j^\red\circ \hat \Psi^{-1}$ that contain $\hat H$ (\ref{I6})
and at the same time it provides global position variables for the
Hamiltonians $\cH^\red_j\circ \Psi^{-1}$ that contain $H$ (\ref{I11}).
This and the analogous dual interpretations for
the map $\hat \lambda\colon \hat M \to \R^n$ are explained in Section 2.2 and Section 6.2.
To put it slightly differently, we have seen that
$\{ \lambda_j\}$ and $\{\hat \cH_j^\red\circ \Psi^{-1}\}$ (\ref{S20})
are alternative generating sets for the Abelian Poisson algebra $\fP$ on $(M,\omega)$, while
$\{ \hat \lambda_j\}$ and $\{ \cH_j^\red \circ \hat \Psi^{-1}\}$ (\ref{T44}) provide alternative
generating sets for $\hat \fP$ on $(\hat M, \hat \omega)$.
The main technical achievement of this paper is the construction of the model $M$,
which is summarized by Figure 2 and Theorem 6.5.
The constructions of the maps $\psi$ and $\hat \psi$ that feature in the two figures rely
respectively on the singular value decomposition and on the generalized Cartan decomposition
of certain matrices, and other algebraic operations.
These maps, and especially
the duality map $\cR$, cannot be presented explicitly, essentially because
the eigenvalues of matrices of higher rank cannot be given in closed form.
Nevertheless, the duality proves very
useful for understanding the qualitative features of the respective systems.
Our study gives rise to the first example of systems in duality for which
the two systems are different (not a self-dual case) and both have quasi-periodic motions
on compact Liouville tori.
The duality map $\cR$ allowed us to demonstrate
that in our case each one of the two systems $(M,\omega, \fH,\fP,H)$ and
$(\hat M, \hat\omega, \hat\fH,\hat\fP,\hat H)$
has a unique equilibrium position, which corresponds to the origin
in $\C^n$ used to represent both $M$ and $\hat M$.
We also pointed out that each reduced Hamiltonian $\cH_j^\red$ and $\hat \cH_j^\red$ possesses
an Abelian commutant in
the Poisson algebra $C^\infty(\cM_\red)$.
As another spin-off, let us now explain that
the particle positions evaluated along any fixed phase space trajectory of our Hamiltonians
stay in a compact set, i.e., all motions are bounded.
Indeed, any trajectory of $\cH_j^\red \circ \Psi^{-1}$ is contained in a set
$(\hat\lambda \circ \cR)^{-1}(\hat\lambda_0)$ for some
$\hat\lambda_0 \in \R^n$, which is compact, since---being equivalent to the standard
$\T^n$ momentum map on $(\C^n,\omega_\can)$ (\ref{T40})---the map $\hat\lambda \colon \hat M\to \R^n$ is proper.
This compact subset of $M$ is sent by $\lambda$ onto a compact subset of $\R^n$,
simply because $\lambda\colon M\to \R^n$ is continuous.
A similar argument can be applied to the trajectories generated by
the Hamiltonians $\hat \cH_j^\red \circ \hat \Psi^{-1}$ as well.
We remark that in principle we can derive Lax pairs
for our systems, since we know the `unreduced Lax matrices' (see (\ref{T9}) and (\ref{T13})) that
generate the Abelian Poisson algebras $\fH^1$ and $\fH^2$ on $\cM$, and
those unreduced Lax matrices satisfy Lax equations already before reduction \cite{FM,M1}.
The specific formulae should be worked out and compared with the Lax matrices obtained
recently in \cite{PG}.
We have seen that the complex `oscillator variables' provide an easy way for finding
the semiclassical spectra of the actions, by (\ref{S21}), and thus also the spectra of
the many-body Hamiltonians.
It is an interesting problem for future work to compare
this `action-angle quantization' with a `Schr\"odinger quantization'
of the RSvD type many-body Hamiltonians (\ref{I6}) and (\ref{I11}) built on analytic difference operators.
For this, the recent paper by van Diejen and Emsiz \cite{vDE} should serve
as a good starting point.
Another promising project is to explore reductions of the Heisenberg double of $\SU(2n)$
at generic values of the momentum map. This is expected to produce extensions of the many-body
systems (\ref{I6}) and (\ref{I11}) with internal degrees of freedom. A suitably generalized version of action-angle duality
should hold also for such systems, analogously to the systems
investigated by Reshetikhin \cite{Resh1,Resh2}.
Finally, we wish to draw attention to our supplementary new result presented in Appendix B, where we
show how the Hamiltonian $H$ (\ref{I11}) can be recovered as a scaling limit of
van Diejen's 5-parametric integrable Hamiltonians \cite{vD1}.
We stress that our reduced
Hamiltonians automatically have complete flows on $\cM_\red$, while
the completeness of the flow for general real forms of van Diejen's systems has not
yet been studied. However, see \cite{PG}, and also
\cite{PR} for a detailed study of classical scattering in a 2-parameter hyperbolic case.
The most intriguing open problem in this area is to find a
Hamiltonian reduction treatment for van Diejen's 5-parametric systems.
This would enhance their group theoretic understanding, and
would also help to explore their classical dynamics.
\bigskip\bigskip \noindent\bf Acknowledgements. \rm
We wish to thank Alexei Rosly and Simon Ruijsenaars for useful
discussions.
We are also grateful to Tam\'as G\"orbe and G\'abor Pusztai for comments on the manuscript.
L.F.~is indebted to Youjin Zhang for hospitality at Tsinghua University during the
final stage of the work.
This work was supported in part by the Hungarian Scientific Research
Fund (OTKA) under the grant K-111697.
\renewcommand{\thesection}{\Alph{section}}
\setcounter{section}{0}
\section{Some explicit formulae}
In this appendix we display the explicit formulae of some of the functions that appear in Lemma 6.1.
We begin by noting that
$f_1(\lambda) = \sqrt{\cF_1(\lambda)}$ and $f_{2n}(\lambda) = \sqrt{\cF_{2n}(\lambda)}$,
since for these indices the functions (\ref{*}) are positive in a neighbourhood of the domain
$\cD=\overline{\cD_+}$ (\ref{F37}). Here we used the assumption (\ref{S2}) and the explicit
formula (\ref{F15}).
To deal with the other components $\vert \tilde w_j\vert$ in (\ref{S3}), we use the analytic function
\begin{equation}
J(x)= \sinh(x)/x,
\label{A1}\end{equation}
which is positive for all $x\in \R$. Then we have the following formulae.
First,
\begin{equation}
\begin{aligned}
&f_j(\lambda) = \Bigl[J(\lambda_{j-1} - \lambda_j - \mu)
\frac{ e^{-\mu}\sinh(\mu)}{\sinh(2\lambda_j)}
\frac{(e^{2 \lambda_j} - e^{-2u})\sinh(\lambda_j+ \lambda_{j-1} + \mu)}
{\sinh(\lambda_{j-1} + \lambda_j) \sinh(\lambda_{j-1} - \lambda_j)}
\Bigr]^{\frac{1}{2}} \\
\quad &\times \Bigl[
\prod_{\substack{i=1\\(i\neq j,j-1)}}^{n} \left(\frac{\sinh(\lambda_j+\lambda_i+\mu)\sinh(\lambda_j-\lambda_i+\mu)}
{{\sinh(\lambda_j-\lambda_i)}\sinh(\lambda_j+\lambda_i)}\right)
\Bigr]^{\frac{1}{2}}, \quad j=2,\dots, n-1,
\end{aligned}
\label{A2}\end{equation}
then
\begin{equation}
\begin{aligned}
&f_n(\lambda) = \Bigl[ 2 J(\lambda_{n-1} - \lambda_n - \mu)
\frac{ \sinh(\mu)}{\sinh(2\lambda_n)}
\frac{e^{ \lambda_n-u -\mu }\sinh(\lambda_n+ \lambda_{n-1} + \mu)}
{\sinh(\lambda_{n-1} + \lambda_n) \sinh(\lambda_{n-1} - \lambda_n)}
\Bigr]^{\frac{1}{2}} \\
\quad &\times \Bigl[J(\lambda_n - \vert u \vert)
\prod_{\substack{i=1\\(i\neq n,n-1)}}^{n} \left(\frac{\sinh(\lambda_n+\lambda_i+\mu)\sinh(\lambda_n-\lambda_i+\mu)}
{{\sinh(\lambda_n-\lambda_i)}\sinh(\lambda_n+\lambda_i)}\right)
\Bigr]^{\frac{1}{2}},
\end{aligned}
\label{A3}\end{equation}
and finally
\begin{equation}
\begin{aligned}
&f_{n+j}(\lambda) = \Bigl[J(\lambda_{j} - \lambda_{j+1} - \mu)
\frac{ e^{-\mu}\sinh(\mu)}{\sinh(2\lambda_j)}
\frac{(e^{-2u}-e^{-2 \lambda_j} )\sinh(\lambda_j+ \lambda_{j+1} - \mu)}
{\sinh(\lambda_{j} + \lambda_{j+1}) \sinh(\lambda_{j} - \lambda_{j+1})}
\Bigr]^{\frac{1}{2}} \\
\quad &\times \Bigl[
\prod_{\substack{i=1\\(i\neq j,j+1)}}^{n} \left(\frac{\sinh(\lambda_j+\lambda_i-\mu)\sinh(\lambda_j-\lambda_i-\mu)}
{{\sinh(\lambda_j-\lambda_i)}\sinh(\lambda_j+\lambda_i)}\right)
\Bigr]^{\frac{1}{2}}, \quad j=1,\dots, n-1.
\end{aligned}
\label{A4}\end{equation}
It is easy to see from (\ref{F15}), (\ref{*}) that (\ref{S3}) holds with the above formulae.
Combining (\ref{F40}) and (\ref{F41}) with (\ref{S3}), one can write explicit
formulae for the functions in
(\ref{S4}) as well.
The main point is that the vanishing denominators $\sinh(\lambda_j - \lambda_{j+1} - \mu)$ of $C_{j+1, n+j}$
(\ref{F41}) cancel for each $j=1,\dots, n-1$.
The formulae are not enlightening and we omit them.
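We remark that the formulae above are straightforward to evaluate numerically. The sketch below implements $J(x)$ of (\ref{A1}), treating the removable singularity at $x=0$ by its Taylor expansion, together with the component $f_j(\lambda)$ of (\ref{A2}); the point $\lambda$ and the parameters $\mu$, $u$ are arbitrary sample values chosen to respect the ordering $\lambda_1>\lambda_2>\lambda_3>\vert u\vert>0$ and the gap conditions:

```python
import math

def J(x, tol=1e-6):
    # J(x) = sinh(x)/x, extended analytically through the removable point x = 0
    if abs(x) < tol:
        return 1.0 + x*x/6.0 + x**4/120.0
    return math.sinh(x)/x

def f_j(lam, j, mu, u):
    # f_j(lambda) of (A2), valid for j = 2, ..., n-1; lam holds lambda_1, ..., lambda_n
    n = len(lam)
    lj, ljm = lam[j-1], lam[j-2]              # lambda_j and lambda_{j-1}
    first = (J(ljm - lj - mu)
             * math.exp(-mu)*math.sinh(mu)/math.sinh(2*lj)
             * (math.exp(2*lj) - math.exp(-2*u))*math.sinh(lj + ljm + mu)
             / (math.sinh(ljm + lj)*math.sinh(ljm - lj)))
    second = 1.0
    for i in range(1, n + 1):
        if i in (j, j - 1):
            continue
        li = lam[i-1]
        second *= (math.sinh(lj + li + mu)*math.sinh(lj - li + mu)
                   / (math.sinh(lj - li)*math.sinh(lj + li)))
    return math.sqrt(first)*math.sqrt(second)

val = f_j([1.5, 0.9, 0.3], 2, 0.2, 0.25)      # sample evaluation for n = 3
```

On such sample points $f_j$ comes out as a finite positive number, as expected for points of $\cD_+$.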
\renewcommand{\thesection}{\Alph{section}}
\section{The relation of $H$ (\ref{I11}) to van Diejen's Hamiltonian}
Our starting point is the following real form
of van Diejen's Hamiltonian \cite{vD1}, with real parameters $a,b,c,d,\mu$,
\begin{equation}\label{hvd}
H_{\rm{vD}}[\mu;a,b,c,d]\,(\lambda,\theta) = \sum_{j=1}^n (\cos\theta_j) (V_jV_{-j})^{1/2}(\lambda)
\,-\,
\frac12\sum_{j=1}^n(V_j+V_{-j})(\lambda),
\end{equation}
with $V_{\pm j}=V_{\pm j}^{(1)} V_{\pm j}^{(2)}$ and $V_{\pm j}^{(1,2)}$ given by
\begin{equation}\label{V12pm}
\begin{aligned}
V_{\pm j}^{(1)}(\lambda) &= \frac{\cosh(a\pm \lambda_j)\cosh(b\pm \lambda_j)\sinh(c\pm \lambda_j)\sinh(d\pm \lambda_j)}{\cosh^2\lambda_j\sinh^2\lambda_j}\\
V_{\pm j}^{(2)}(\lambda) &= \prod_{k\neq j}^n\frac{\sinh\bigl(\mu\pm(\lambda_j+\lambda_k)\bigr)\sinh\bigl(\mu\pm(\lambda_j-\lambda_k)\bigr)}{\sinh(\lambda_j+\lambda_k)\sinh(\lambda_j-\lambda_k)}.
\end{aligned}
\end{equation}
For convenience, we shall refer to the two terms in the formula for $H_{\mathrm{vD}}(\lambda, \theta)$ as ``kinetic'' and ``potential''.
We will prove the following result.
\begin{proposition}\label{appbprop}
The Hamiltonian in (\ref{I11}) is a special limiting case of the
van Diejen Hamiltonian. Specifically, on the domain $\cD_+(u,v,\mu) \times \T^n$ (\ref{I9}), we have
\begin{equation}
H = e^{v-u}\lim_{\substack{a\rightarrow-\infty\\ b\rightarrow+\infty\\ c\rightarrow u,\, d\rightarrow v}}\Bigl(4 e^a e^{-b} H_{\rm{vD}}[\mu;a,b,c,d]\Bigr) +n.
\end{equation}
\end{proposition}
Before giving the proof of this result, let us state an intermediate one.
\begin{lemma}\label{appblemma}
The product in the kinetic term can be expressed in the form
\begin{equation}\label{kvd}
\begin{aligned}
\bigl(V_jV_{-j}\bigr)(\lambda) &= \left(1+\frac{\sinh^2a}{\cosh^2\lambda_j}\right) \left(1+\frac{\sinh^2b}{\cosh^2\lambda_j}\right) \left(1-\frac{\sinh^2c}{\sinh^2\lambda_j}\right) \left(1-\frac{\sinh^2d}{\sinh^2\lambda_j}\right)\\
&\qquad\qquad\times \prod_{k\neq j}^n\left(1-\frac{\sinh^2\mu}{\sinh^2(\lambda_j-\lambda_k)}\right)\left(1-\frac{\sinh^2\mu}{\sinh^2(\lambda_j+\lambda_k)}\right),
\end{aligned}
\end{equation}
and the potential term in (\ref{hvd}) may be written in the form
\begin{equation}\label{pvd}
\begin{aligned}
&- \frac12\sum_{j=1}^n(V_j+V_{-j})(\lambda)
=
\frac{1}{\sinh^2\mu}\cosh(a)\cosh(b)\sinh(c)\sinh(d) \prod_{k=1}^{n}\left(1-\frac{\sinh^2\mu}{\sinh^2\lambda_k}\right)\\
&\qquad +\frac{1}{\sinh^2\mu}\sinh(a)\sinh(b)\cosh(c)\cosh(d)\prod_{k=1}^{n}\left(1+\frac{\sinh^2\mu}{\cosh^2\lambda_k}\right)
+C[\mu;a,b,c,d]
\end{aligned}
\end{equation}
with constant
\begin{equation}\label{cvd}
\begin{aligned}
C[\mu;a,b,c,d] &=
\frac{1}{2\sinh^2\mu}\Bigl[\cosh(a-b)\cosh(c-d) - \cosh(a+b-\mu)\cosh(c+d-\mu) \Bigr]\\
&\qquad
-\frac{\sinh\bigl(a+b+c+d + (2n-1)\mu\bigr)}{2\sinh\mu}.
\end{aligned}
\end{equation}
\end{lemma}
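Before turning to the proofs, we note that the factorized form (\ref{kvd}) is easily confirmed numerically, using only the elementary identities $\cosh(a+x)\cosh(a-x)=\sinh^2 a+\cosh^2 x$ and $\sinh(\mu+x)\sinh(\mu-x)=\sinh^2\mu-\sinh^2 x$. A minimal check for $n=2$, $j=1$, with arbitrary sample parameters:

```python
import math

lam = [1.3, 0.4]                    # sample values with lambda_1 > lambda_2 > 0
mu, a, b, c, d = 0.2, 0.3, -0.5, 0.7, 0.25
j = 0                               # check the j = 1 factor (0-based index)

def V(eps):
    # V_{+j} (eps = +1) and V_{-j} (eps = -1) from (V12pm)
    x = lam[j]
    v1 = (math.cosh(a + eps*x)*math.cosh(b + eps*x)
          * math.sinh(c + eps*x)*math.sinh(d + eps*x))/(math.cosh(x)**2*math.sinh(x)**2)
    v2 = 1.0
    for k in range(len(lam)):
        if k == j:
            continue
        s, t = lam[j] + lam[k], lam[j] - lam[k]
        v2 *= math.sinh(mu + eps*s)*math.sinh(mu + eps*t)/(math.sinh(s)*math.sinh(t))
    return v1*v2

lhs = V(+1)*V(-1)                   # the product V_j V_{-j}

x = lam[j]                          # right-hand side of (kvd)
rhs = ((1 + math.sinh(a)**2/math.cosh(x)**2)*(1 + math.sinh(b)**2/math.cosh(x)**2)
       * (1 - math.sinh(c)**2/math.sinh(x)**2)*(1 - math.sinh(d)**2/math.sinh(x)**2))
for k in range(len(lam)):
    if k != j:
        rhs *= ((1 - math.sinh(mu)**2/math.sinh(lam[j]-lam[k])**2)
                * (1 - math.sinh(mu)**2/math.sinh(lam[j]+lam[k])**2))
```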
\begin{proofof}{Proposition}{appbprop}
Implementing the limit for the potential term, making use of (\ref{pvd}), yields
\begin{equation}
\begin{aligned}
&\lim_{\substack{a\rightarrow-\infty\\ b\rightarrow +\infty}} e^ae^{-b}\left(-\frac12\sum_{j=1}^n(V_j+V_{-j})\right)
=\\
&\quad\quad
\frac14\frac{\sinh c\sinh d}{\sinh^2\mu}
\prod_{k=1}^n\left(1 - \frac{\sinh^2\mu}{\sinh^2\lambda_k}\right)
-
\frac14\frac{\cosh c\cosh d}{\sinh^2\mu}
\prod_{k=1}^n\left(1 + \frac{\sinh^2\mu}{\cosh^2\lambda_k}\right)
+
\frac14 \frac{\cosh(c-d)}{\sinh^2\mu}.
\end{aligned}
\end{equation}
Applying the same limit to the kinetic term, using (\ref{kvd}), we obtain
\begin{equation}
\begin{aligned}
\lim_{\substack{a\rightarrow-\infty\\ b\rightarrow+\infty}}e^{2a}e^{-2b}V_jV_{-j}
&=
\frac1{16}\frac1{\cosh^4\lambda_j}\left(1-\frac{\sinh^2c}{\sinh^2\lambda_j}\right)\left(1-\frac{\sinh^2d}{\sinh^2\lambda_j}\right)\\
&\qquad\times
\prod_{k\neq j}^n\left(1-\frac{\sinh^2\mu}{\sinh^2(\lambda_j+\lambda_k)}\right) \left(1-\frac{\sinh^2\mu}{\sinh^2(\lambda_j-\lambda_k)}\right).
\end{aligned}
\end{equation}
Putting these together, we obtain
\begin{equation}
\begin{aligned}
&\lim_{\substack{a\rightarrow-\infty\\ b\rightarrow+\infty\\ c\rightarrow u,\, d\rightarrow v}}\Bigl(4e^ae^{-b}H_{\rm{vD}}[\mu;a,b,c,d]\,(\lambda,\theta)\Bigr) = \\
&\sum_{j=1}^n\frac{\cos\theta_j}{\cosh^2\lambda_j}\left[\left(1-\frac{\sinh^2u}{\sinh^2\lambda_j}\right)
\left(1-\frac{\sinh^2v}{\sinh^2\lambda_j}\right)
\prod_{k\neq j}^n\left(1-\frac{\sinh^2\mu}{\sinh^2(\lambda_j+\lambda_k)}\right) \left(1-\frac{\sinh^2\mu}{\sinh^2(\lambda_j-\lambda_k)}\right)\right]^{1/2}\\
&\qquad
+\frac{\sinh u\sinh v}{\sinh^2\mu}
\prod_{k=1}^n\left(1 - \frac{\sinh^2\mu}{\sinh^2\lambda_k}\right)
-
\frac{\cosh u\cosh v}{\sinh^2\mu}
\prod_{k=1}^n\left(1 + \frac{\sinh^2\mu}{\cosh^2\lambda_k}\right)
+
\frac{\cosh(u-v)}{\sinh^2\mu}.
\end{aligned}
\end{equation}
\end{proofof}
\begin{proofof}{Lemma}{appblemma}
Checking (\ref{kvd}) is straightforward. To derive the formula for the potential term, let us define the meromorphic one-form
\begin{equation}
\Omega(z) := F(z)dz,
\end{equation}
with the function $F$ defined by
\begin{equation}
F(z) = \frac12\frac{(Az+A^{-1})(Bz+B^{-1})(Cz-C^{-1})(Dz-D^{-1})}{(\alpha^{-2}-1)z(z^2-1)(z^2-\alpha^2)}\left(\prod_{a=1}^{2n}\frac{\alpha^{-1} z\Lambda_a-\alpha}{z-\Lambda_a}\right).
\end{equation}
The poles of $\Omega(z)$ are at $z=0$, $z=\pm1$, $z=\pm\alpha$, $z=\infty$, $z=\Lambda_a$, and the sum of the residues is zero. Thus we have
\begin{equation}\label{appbzerosum}
-\sum_{a=1}^{2n}\rez_{z=\Lambda_a}\Omega(z) = \left(\rez_{z=+1}+\rez_{z=-1}+\rez_{z=0}+\rez_{z=\infty}+\rez_{z=+\alpha}+\rez_{z=-\alpha}\right)\Omega(z).
\end{equation}
Upon making the substitutions
\begin{equation}
\alpha=e^{-\mu},\quad A=e^a,\quad B=e^b,\quad C=e^c,\quad D=e^d,\quad
\Lambda_j=e^{2\lambda_j},\quad \Lambda_{n+j}= e^{-2\lambda_j},
\end{equation}
(\ref{appbzerosum}) is the same as (\ref{pvd}). That is\\
---the sum of the residues at $z=\Lambda_1,\dots, \Lambda_{2n}$
is $(-1)$ times the van Diejen potential,\\
---the sum of the residues at $z=\pm1$ yields the first two terms on the rhs of (\ref{pvd}),\\
---the sum of the residues at $z=\pm\alpha$ yields the first line on the rhs of (\ref{cvd}),\\
---the sum of the residues at $z=0$ and $z=\infty$ yields the second line on the rhs of (\ref{cvd}).
\end{proofof}
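The residue argument just used can also be verified numerically: the residues of $\Omega$ at the finite poles must add up to minus the residue at infinity, which can be extracted from a large circle enclosing all finite poles. A sketch for $n=2$ with arbitrary sample parameters (contour integrals are evaluated by the trapezoid rule, which converges rapidly on circles):

```python
import numpy as np

mu, aa, bb, cc, dd = 0.2, 0.3, -0.5, 0.7, 0.25   # sample parameters, n = 2
alpha = np.exp(-mu)
A, B, C, D = np.exp(aa), np.exp(bb), np.exp(cc), np.exp(dd)
lam = [1.3, 0.4]
Lam = [np.exp(2*l) for l in lam] + [np.exp(-2*l) for l in lam]

def F(z):
    num = 0.5*(A*z + 1/A)*(B*z + 1/B)*(C*z - 1/C)*(D*z - 1/D)
    den = (alpha**-2 - 1)*z*(z**2 - 1)*(z**2 - alpha**2)
    prod = np.ones_like(z)
    for La in Lam:
        prod = prod*(z*La/alpha - alpha)/(z - La)
    return num/den*prod

def residue(center, r=0.02, m=2000):
    # (1/2 pi i) times the contour integral of F around 'center'
    z = center + r*np.exp(1j*np.linspace(0.0, 2*np.pi, m, endpoint=False))
    return np.mean(F(z)*(z - center))

finite_poles = [0.0, 1.0, -1.0, alpha, -alpha] + Lam
res_sum = sum(residue(p) for p in finite_poles)

# minus the residue at infinity, from a large circle around all finite poles
zbig = 20.0*np.exp(1j*np.linspace(0.0, 2*np.pi, 4000, endpoint=False))
minus_res_inf = np.mean(F(zbig)*zbig)
```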
\section{Introduction}
The quantum concept of entanglement is the most intriguing feature that allows one to establish new physical paradigms in information processing. The corresponding non-classical protocols need a certain degree of entanglement as a quantum resource \cite{BEN98}. Consequently one has to understand how entanglement can be processed and how it can be measured \cite{VEP98,MHO01}.
In this respect the typical situation is the following. Several parties share components of an entangled system. Processing of entanglement then means that they can perform local unitary operations and local measurements on their respective parts of the complete system. Under these conditions it has been shown \cite{BBP96, BDS96, DEM96, GIS96, DNMV03} that two parties, supplied with non-maximally entangled qubit pairs, can extract a sample of more strongly entangled pairs.
But so far no general approach to optimal distillation exists. However, in order to improve the fidelity, modifications of the original versions have been proposed \cite{OKU99,MS02,MET02}. Lower bounds for the fidelity of entanglement distillation based on faulty local operations have also been studied \cite{GBCZ99}.
In the present work we investigate the entanglement distillation protocol described in \cite{DEM96} for the case that only a finite sample of entangled qubit pairs is available. In particular, we propose a relatively simple, iterative distillation scheme that starts from a finite number of identical pairs and delivers a distilled pair suitable for further communication tasks. The behaviour of the corresponding mean fidelity turns out to be particularly interesting for small initial samples of entangled qubit pairs.
\section{Entanglement distillation}
For any non-classical communication, entangled systems first have to be distributed between a sender (Alice) and a receiver (Bob). In the course of this distribution the systems are influenced by various noise sources, which reduce the amount of entanglement. Any distillation process for such distributed systems is restricted to local operations and classical communication (LOCC) of the parties. Moreover, realistic entanglement distillation schemes have to take into account errors \cite{GBCZ99} and the fact that Alice and Bob share only a finite number $N$ of mixed entangled systems. In the present paper we focus on the latter restriction regarding resources.
The explicit distillation protocol we refer to was introduced in Refs. \cite{BBP96,DEM96}. In the present work we only study this specific distillation process under the constraint of finite resources. However, other highly effective distillation protocols have been proposed in the literature. Their exclusion in the present discussion does not mean that they cannot be applied to finite resources. In particular, the quantum hashing protocol \cite{BDS96} may well be applicable to finite resources and presumably leads to highly distilled qubit pairs. It will, however, require a considerable effort for the corresponding bookkeeping of the so-called likely sets.
The process \cite{BBP96,DEM96} conditionally increases the fidelity
\begin{equation}
F(\hat{\rho}) \equiv \operatorname{tr}\left(\mathbf{\Phi}^{+} \hat{\rho} \right) = A
\end{equation}
of the Bell-diagonal mixed state
\begin{equation}
\label{belldiag}
\hat{\rho} = A \: \mathbf{\Phi}^{+} \: + B \: \mathbf{\Psi}^{-} \: + C \: \mathbf{\Psi}^{+} \: + D \: \mathbf{\Phi}^{-} \;
\end{equation}
describing a qubit pair, where we introduced the abbreviations $\mathbf{\Phi}^{\pm} \equiv \ketbra{\phi^\pm}$ and $\mathbf{\Psi}^{\pm} \equiv \ketbra{\psi^\pm}$ for the four Bell-states.
This state is non-separable if any of the coefficients $A$, $B$, $C$ or $D$ is larger than $1/2$ \cite{MPRH97}. Without loss of generality we choose $A>1/2$.
The central element of the protocol is a CNOT transformation. We briefly recall the basic ideas \cite{BBP96,DEM96}. Two qubit pairs 1 and 2 each described by the state $\hat{\rho}$, Eq.~\eqref{belldiag}, are processed in one step. Alice holds the qubits $1_A$ and $2_A$ and Bob holds the qubits $1_B$ and $2_B$. These qubits can now be treated locally using a sequence of operations \cite{DEM96}: (I) Alice and Bob rotate their qubits locally. These rotations exchange the $\mathbf{\Psi}^{-}$ contribution with the $\mathbf{\Phi}^{-}$ contribution of the initial state $\hat{\rho}$, Eq.~\eqref{belldiag} \footnote{Note that this step is necessary for an iterative application of the distillation scheme since otherwise it does not converge, for more details see the analysis in \cite{CMA98}.}. (II) Alice and Bob then perform CNOT operations on their respective qubits. The qubits of pair 1 (qubits $1_A$ and $1_B$) act as control qubits. (III) Both measure their target qubits (qubits $2_A$ and $2_B$) in the computational basis $\{\ket{0},\ket{1}\}$. They obtain either the result {\guillemotright{$0$}\guillemotleft} or {\guillemotright{$1$}\guillemotleft}. (IV) Alice and Bob classically communicate their measurement results. The distillation is successful with probability
\begin{equation}
\label{ps}
p^\text{(s)} \equiv (A+B)^2+(C+D)^2
\end{equation}
if the combined result reads {\guillemotright{$00$}\guillemotleft} or {\guillemotright{$11$}\guillemotleft}. Then they keep pair 1 now described by the conditioned density operator
\begin{align}
\label{rhosvier}
\begin{split}
\rhoh^\text{(s)} & = \frac{1}{p^\text{(s)}}\bigg[ \;(A^2+B^2) \: \mathbf{\Phi}^{+} \: + 2 C D \: \mathbf{\Psi}^{-}\: \\
& \phantom{=} +\; (C^2+D^2) \: \mathbf{\Psi}^{+} \: + 2 A B \: \mathbf{\Phi}^{-} \;\bigg].
\end{split}
\end{align}
If Alice and Bob read off the results {\guillemotright{$01$}\guillemotleft} or {\guillemotright{$10$}\guillemotleft}, the reduced density operator of pair 1 reads
\begin{equation}
\label{rhouvier}
\begin{split}
\rhoh^\text{(u)} & = \frac{1}{1-p^\text{(s)}} \bigg[ \; (AC+BD) \mathbf{\Phi}^{+} + (AD+BC) \mathbf{\Psi}^{-} \\
& \phantom{=} +\; (AC+BD) \mathbf{\Psi}^{+} + (AD+BC) \mathbf{\Phi}^{-}\;\bigg].
\end{split}
\end{equation}
In this unsuccessful case Alice and Bob discard pair 1.
In the successful case pair 1 is mapped from a Bell-diagonal state $\hat{\rho}$, Eq.~\eqref{belldiag}, to the Bell-diagonal state $\rhoh^\text{(s)}$, Eq.~\eqref{rhosvier}. The corresponding new fidelity is given by
\begin{equation}
\label{fsucc}
F(\rhoh^\text{(s)}) = \frac{A^2+B^2}{(A + B)^2 + (C + D)^2}\;,
\end{equation}
which is higher than the initial fidelity for $A>0.5$.
However, this fidelity does not completely describe the single-step process. A complete description also has to take into account the unsuccessful case. The distillation process then leads to the conditioned density operator $\rhoh^\text{(u)}$, Eq.~\eqref{rhouvier}. Hence if we are interested in the performance of this particular distillation process, we have to take into account the fidelity $F(\rhoh^\text{(u)}) \leq \frac{1}{2}$. On the other hand, using local operations and classical communication the parties can always generate two qubits with fidelity $\frac{1}{2}$ if the unsuccessful case occurs. Applying such additional operations we can define the average fidelity
\begin{equation}
\label{aveinzel}
\begin{split}
\average{F} & \equiv p^\text{(s)} F(\rhoh^\text{(s)}) + (1-p^\text{(s)}) \frac{1}{2}\\
& = A + B (1 - 2 A)
\end{split}
\end{equation}
for a single step process.
For all possible values of $B$ the average fidelity $\average{F}$ turns out to be equal to or less than the original fidelity $A$. This is of course also true for the average fidelity
\begin{equation}
\begin{split}
\average{\tilde{F}} & \equiv p^\text{(s)} F(\rhoh^\text{(s)}) + (1-p^\text{(s)}) F(\rhoh^\text{(u)}) \\
& = A^2 + A(C-B) + B(1-C)
\end{split}
\end{equation}
based on the density operator $\rhoh^\text{(u)}$, Eq.~\eqref{rhouvier}.
This is consistent with the fact that the total amount of entanglement cannot increase under a LOCC process \cite{MHO01}.
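The single-step formulae above can be checked in a few lines. In the following sketch the Bell-diagonal coefficients are sampled randomly, subject only to $A>\frac12$ and normalization:

```python
import random

def distill_step(A, B, C, D):
    # success probability, Eq. (ps), and the coefficients of rho^(s), Eq. (rhosvier)
    p = (A + B)**2 + (C + D)**2
    return p, ((A**2 + B**2)/p, 2*C*D/p, (C**2 + D**2)/p, 2*A*B/p)

random.seed(1)
A = 0.5 + 0.5*random.random()              # A > 1/2, as assumed in the text
r = [random.random() for _ in range(3)]
B, C, D = ((1 - A)*x/sum(r) for x in r)    # random partition of 1 - A

p, rho_s = distill_step(A, B, C, D)
F_s = rho_s[0]                             # Eq. (fsucc)
F_u = (A*C + B*D)/(1 - p)                  # fidelity of rho^(u), Eq. (rhouvier)
avg  = p*F_s + (1 - p)*0.5                 # Eq. (aveinzel)
avg2 = p*F_s + (1 - p)*F_u
```

Both averages agree with the closed forms quoted above, and $\average{F}\leq A$ holds since $B\geq0$ and $A>\frac12$.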
However, we emphasise that so far we have just discussed a single distillation step performed on two entangled qubit pairs. The situation becomes different and more interesting when we now consider a finite ensemble with $N>2$ pairs.
\section{Distillation for a finite set of entangled systems}
Alice and Bob now perform the distillation scheme with an even number
$N$ of pairs\footnote{ Due to the fact that the entangled qubit pairs are being processed pairwise in the protocol, we assume an even number of pairs. For a single application of the protocol this assumption is without loss of generality.}.
After one distillation step they have $j \leq N/2 $ pairs left with probability
\begin{equation}
\label{pn}
p(N,j) = \binom{N/2}{j} \big[p^\text{(s)}\big]^j\big[1-p^\text{(s)}\big]^{N/2-j},
\end{equation}
which contains the success probability $p^\text{(s)}$, Eq.~\eqref{ps}.
The totally unsuccessful case occurs when all pairs have to be discarded, that is $j=0$, in a single distillation step. Hence starting from $N$ pairs we can define the average fidelity
\begin{equation}
\label{midfidelity}
\average{F}(N) \equiv p(N,0) \; F^\text{(u)} \; + [1-p(N,0)] \; F(\rhoh^\text{(s)})
\end{equation}
using the density operator $\rhoh^\text{(s)}$, Eq.~\eqref{rhosvier}, and the fidelity $F^\text{(u)}$ for the unsuccessful case.
With an increasing number $N$ of pairs the average fidelity $\average{F}(N)$ also increases because of the decreasing probability $p(N,0)$ of losing all qubit pairs. Using Eq.~\eqref{midfidelity} one finds that
\begin{equation}
\label{maxensemble}
N_{\text{min}} = 2 \frac{\:\displaystyle \ln \left[ \frac{A - F(\rhoh^\text{(s)}) }{ F^\text{(u)} - F(\rhoh^\text{(s)}) } \right] \:}{\displaystyle \ln(1-p^\text{(s)})}
\end{equation}
pairs are needed to obtain an average fidelity $\average{F}(N_\text{min})=A$. This minimal number $N_\text{min}$ of qubit pairs depends strongly on the choice of $F^\text{(u)}$. Figure \ref{ensemblesize} shows the minimal size $N_\text{min}$ for a finite ensemble described by Werner-states ($B=C=D=(1-A)/3$) \cite{WER89}. If we simply choose $F^\text{(u)}=F(\rhoh^\text{(u)})$ to characterise the CNOT distillation itself we obtain a strong dependence (dashed curve) on the initial fidelity $A$. In particular, $N_\text{min}$ diverges logarithmically, like $\ln(A-\frac{1}{2})$, for $A\rightarrow\frac{1}{2}$. If, on the other hand, we substitute the LOCC boundary $F^\text{(u)}=\frac{1}{2}$, we always get finite values for $N_\text{min}$ which only depend weakly on $A$. In the plot one can therefore clearly identify the region where on average the fidelity increases, that is $\average{F}(N)>A$.
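The estimate (\ref{maxensemble}) is simple to evaluate. The following sketch computes $N_\text{min}$ for a Werner state with $A=0.7$ (a sample value) for both choices of $F^\text{(u)}$ and verifies that the average fidelity at $N_\text{min}$ returns the initial value $A$:

```python
import math

def werner_step(A):
    # success probability, distilled fidelity and F(rho^(u)) for a Werner state
    B = C = D = (1 - A)/3.0
    p = (A + B)**2 + (C + D)**2
    F_s = (A**2 + B**2)/p
    F_u_rho = (A*C + B*D)/(1 - p)
    return p, F_s, F_u_rho

def n_min(A, F_u):
    # Eq. (maxensemble)
    p, F_s, _ = werner_step(A)
    return 2.0*math.log((A - F_s)/(F_u - F_s))/math.log(1 - p)

p, F_s, F_u_rho = werner_step(0.7)
N_locc = n_min(0.7, 0.5)               # LOCC choice F^(u) = 1/2 (solid curve)
N_cnot = n_min(0.7, F_u_rho)           # F^(u) = F(rho^(u)) (dashed curve)

p0 = (1 - p)**(N_locc/2.0)             # probability to lose all pairs
F_avg = p0*0.5 + (1 - p0)*F_s          # Eq. (midfidelity) at N = N_locc
```

For $A=0.7$ the LOCC choice gives $N_\text{min}\approx 3.3$, so four pairs already suffice on average, whereas the choice $F^\text{(u)}=F(\rhoh^\text{(u)})$ requires a larger sample that grows without bound as $A\to\frac12$.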
\begin{figure}[h!]
\begin{center}
\includegraphics{CNOTEnsembleSizeNew}
\end{center}
\caption{\label{ensemblesize} Minimal size $N_\text{min}$, Eq.~\eqref{maxensemble}, of the finite sample of entangled qubits to obtain an average fidelity $\average{F}(N_\text{min})=A$ in a single step distillation. The curves have been calculated for specific Bell-diagonal states (Werner-states) with $B=C=D=(1-A)/3$. Nevertheless, they show the generic behaviour. The dashed curve results from the substitution $F^\text{(u)}=F(\rhoh^\text{(u)})$ in Eq.~\eqref{maxensemble} and diverges for small initial fidelities. In contrast to this, the minimal size for the LOCC choice $F^\text{(u)}=\frac{1}{2}$ stays always finite (solid curve). Clearly for $A\!\to\!1$ we obtain $N_\text{min}=2$.}
\end{figure}
Moreover, if $j \geq 2 $ pairs are left after such a first step we can continue with the distillation. We consider such an iteration in the following paragraph.
\section{Iterative distillation}
The resulting state $\rhoh^\text{(s)}$ of a successful distillation step is again Bell-diagonal and hence the complete process can be applied iteratively, as long as qubit pairs are left to use. Starting from an initial density operator $\rhoh^\text{(s)}_0 \equiv \hat{\rho}$, Eq.~\eqref{belldiag}, the density operator $\rhoh^\text{(s)}_{i-1}$ is mapped on a density operator $\rhoh^\text{(s)}_i$ if the $i^\text{th}$ distillation step was successful. The corresponding coefficients of the Bell-projectors, see Eq.~\eqref{rhosvier}, transform as
\begin{equation}
\label{succmapping}
\begin{pmatrix}
A^\text{(s)}_{i} \\
B^\text{(s)}_{i} \\
C^\text{(s)}_{i} \\
D^\text{(s)}_{i}
\end{pmatrix} =
\frac{1}{p^\text{(s)}_i}
\begin{pmatrix}
\big(A^\text{(s)}_{i-1}\big)^2 + \big( B^\text{(s)}_{i-1} \big)^2 \\
2\; C^\text{(s)}_{i-1} \; D^\text{(s)}_{i-1} \\
\big( C^\text{(s)}_{i-1} \big)^2 + \big( D^\text{(s)}_{i-1} \big)^2 \\
2\; A^\text{(s)}_{i-1} \; B^\text{(s)}_{i-1}
\end{pmatrix}
\end{equation}
with the corresponding success probability $p^\text{(s)}_i$. This mapping was studied in detail in Ref.~\cite{CMA98}.
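Iterating the mapping (\ref{succmapping}) numerically illustrates the rapid growth of the fidelity along the purely successful branch; a sketch with a Werner initial state of fidelity $0.7$ (a sample value):

```python
def succ_step(A, B, C, D):
    # the mapping (succmapping) together with its success probability
    p = (A + B)**2 + (C + D)**2
    return p, ((A**2 + B**2)/p, 2*C*D/p, (C**2 + D**2)/p, 2*A*B/p)

state = (0.7, 0.1, 0.1, 0.1)           # Werner state, A_0 = 0.7
fidelities = [state[0]]
for _ in range(20):
    p, state = succ_step(*state)
    fidelities.append(state[0])
```

The coefficients remain normalized and the fidelity exceeds $0.99$ after a handful of successful steps, in accordance with the fixed-point analysis of \cite{CMA98}.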
If, however, the $i^\text{th}$ step was unsuccessful we have in principle a mapping from $\rhoh^\text{(s)}_{i-1}$ to $\rhoh^\text{(u)}_i$. In general, the density operator $\rhoh^\text{(u)}_i$, see also Eq.~\eqref{rhouvier}, has a lower fidelity than $\frac{1}{2}$, which is the fidelity of the density operator that can be generated by a LOCC process. If the step was unsuccessful we therefore assume from now on that Alice and Bob perform appropriate operations to prepare local qubits with fidelity $F^\text{(u)}=\frac{1}{2}$.
A full iteration of the entanglement distillation for finite quantum resources has to take into account all possible combinations of these successful and unsuccessful steps. Our aim then is to obtain at the end of a CNOT distillation an entangled qubit pair with an average fidelity as high as possible.
Note that one can also ask a different question for a small finite sample. Can we distil a single qubit pair with very high fidelity if we allow for a lower success probability? That is, we do not care if the distillation fails several times, but when it is successful it must deliver a highly entangled pair. Then the CNOT distillation process seems to be not very suitable. In this case quantum hashing can turn out to be a powerful method.
\begin{figure}[t]
\begin{center}
\includegraphics{CNOTFloat}
\end{center}
\caption{\label{CNOTFloat}Flow chart for the iterative CNOT distillation scheme with finite quantum resources. Starting from $N$ qubit pairs we extract a distilled pair. In every step $n/2$ projective measurements are performed and some qubit pairs have to be discarded. To increase the average fidelity a backup pair can be saved in every iteration step for an odd number $n$. Moreover, the iteration stops if only two pairs are left after the projective measurement and no backup pair was saved previously. Rarely the distillation may still fail if no pair was saved and all pairs after the last projection have to be dismissed. The scheme also defines the corresponding fidelity $F_\text{it}$ of the iterative process. Averaging over many runs we obtain $\average{F_\text{it}}(N)$ for $N$ qubit pairs.}
\end{figure}
The complete iterative CNOT distillation process for finite resources can be expressed in algorithmic form. For a given initial density operator the process depends only on the initial number of entangled qubit pairs shared by Alice and Bob. An important iterative improvement of the process can be achieved for odd numbers of qubit pairs. In each step of the iteration we may obtain an odd number $n$ of pairs and hence in the simplest case we can store the additional pair as a backup \cite{FMF01}. This backup pair can be used whenever we would have to discard all pairs in some further step of the iteration.
The stop condition of the algorithm depends on the specific aim of the process. As we have already seen in Eq.~\eqref{aveinzel}, it is impossible to obtain an increase of the average fidelity for two qubit pairs. So if we end up with only two pairs left after a distillation, it seems unwise to continue with the iteration, in particular if we have stored no backup so far. The corresponding flow chart of such an algorithm is shown in Fig. \ref{CNOTFloat}. The average fidelity $\average{F_\text{it}}(N)$ of the iterative process can be computed using this algorithm.
In the following example we present the behaviour of such an iterative distillation process. We simulate the process using the probability $p(n,j)$, Eq.~\eqref{pn}, to obtain $j$ qubit pairs when starting from $n$. Moreover, we need the mapping of Eq.~\eqref{succmapping}.
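The control flow of this algorithm can be sketched in a few lines of Python. Only the bookkeeping is illustrated here: the function \texttt{distill\_step} below is a hypothetical stand-in, with an assumed toy survival probability and an assumed toy fidelity update; it would have to be replaced by the actual probability $p(n,j)$ of Eq.~\eqref{pn} and the mapping of Eq.~\eqref{succmapping} to reproduce our simulations.

```python
import random

def distill_step(n, fidelity):
    """Hypothetical stand-in for one distillation round: n // 2 projective
    measurements; each measured pair survives with a toy probability, and a
    surviving pair gets a toy improved fidelity.  Replace with p(n, j),
    Eq. (pn), and the mapping of Eq. (succmapping) for the real protocol."""
    p_ok = fidelity**2 + (1 - fidelity)**2 / 9            # assumed toy value
    survivors = sum(random.random() < p_ok for _ in range(n // 2))
    new_fidelity = fidelity + 0.5 * fidelity * (1 - fidelity)  # assumed toy map
    return survivors, min(1.0, new_fidelity)

def iterate(N, f0, seed=1):
    """Bookkeeping of the flow chart: store a backup pair for odd n,
    fall back on it if every pair is lost, stop at two pairs."""
    random.seed(seed)
    n, f, backup = N, f0, None
    while n > 2:
        if n % 2 == 1:                 # odd number of pairs: save a backup
            backup, n = f, n - 1
        n, f = distill_step(n, f)
        if n == 0:                     # all pairs dismissed
            return backup if backup is not None else f0   # f0: failed run
    return f

print(iterate(8, 0.75))                # fidelity of one simulated run
```

Averaging \texttt{iterate} over many runs (seeds) yields the analogue of $\average{F_\text{it}}(N)$ for the toy step.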
\section{Example of the iterative distillation for small finite sets}
To demonstrate the behaviour of the iterative distillation we have again chosen Werner-states
($B^\text{(s)}_0=C^\text{(s)}_0=D^\text{(s)}_0=(1-A^\text{(s)}_0)/3$) as initial states.
\begin{figure}[ht]
\begin{center}
\includegraphics{CNOTPurifyEbits}
\end{center}
\caption{\label{FidelityDevelopMulti}Relative average fidelity $\average{F_\text{it}}(N)/A^\text{(s)}_0$ of the iteration as a function of the initial fidelity $A^\text{(s)}_0$ for different amounts $N$ of qubit pairs. The initial states are again Werner-states ($B^\text{(s)}_0=C^\text{(s)}_0=D^\text{(s)}_0=(1-A^\text{(s)}_0)/3$). For $N=4$ qubit pairs (solid line) we only get an improvement if $A^\text{(s)}_0 > 0.65$. For greater amounts of qubit pairs we always get an improvement. The advantage of the backup can be seen by comparing the two cases $N=5$ (dotted line) and $N=6$ (dashed line). Due to the fact that for odd numbers we always have a backup pair, the average fidelity becomes higher as compared to the surrounding even cases.}
\end{figure}
First we compare in Fig.~\ref{FidelityDevelopMulti} the relative average fidelity $\average{F_\text{it}}(N)/A^\text{(s)}_0$ as a function of the initial fidelity $A^\text{(s)}_0$ for small resources of qubit pairs. In accordance with the algorithm of the previous paragraph we have calculated the final average fidelity $\average{F_\text{it}}(N)$ with backup for $N=4$ (solid line), $N=5$ (dotted line) and $N=6$ (dashed line) pairs. We see that $N=5$ pairs are on average always superior to the nearby even cases, because for an odd $N$ already in the first step of the distillation a backup pair is stored. For $N=4$ there is a minimal initial fidelity which is needed to have a successful iterative distillation.
\begin{figure}[ht]
\begin{center}
\includegraphics{CNOTPurify}
\end{center}
\caption{\label{CNOTCompare} Comparison of the average fidelities of an iteration depending on the number $N$ of initially accessible qubit pairs. The plot shows the average fidelities of iterative schemes without backup (dotted curve), with backup (dashed curve), Fig.~\ref{CNOTFloat}, and the mapping of the completely successful case (solid curve), Eq.~\eqref{succmapping}. We show their behaviour for the mentioned example state with an initial fidelity $A^\text{(s)}_0=0.75$. The plot again shows the advantage of storing one pair in the case of odd numbers $N$. It turns out that for even numbers $N$ it is on average advantageous to drop one pair and to start the distillation with one pair less. This shows that simple backup schemes improve the CNOT distillation and more sophisticated methods can certainly be discussed.}
\end{figure}
Second, we show in Fig.~\ref{CNOTCompare} the generic behaviour of the average fidelity $\average{F_\text{it}}(N)$ depending on the available amount $N$ of pairs. We compare the iterative results without and with backup for a fixed initial fidelity $A^\text{(s)}_0=0.75$ to the completely successful mapping, Eq.~\eqref{succmapping}.
Storing a backup pair in the case of an odd number of pairs clearly leads to a higher average fidelity. The zigzag in the curves reveals the difference between odd and even numbers $N$ of pairs. If the iteration starts with an odd number of entangled qubit pairs, at least one pair is always saved. One can even see that for even numbers it is on average better to drop one pair before performing the first distillation step: although this means starting with one pair less, the average fidelity is higher. This is most obvious for small finite ensembles.
\section{Conclusions}
In the present paper we have analysed an iterative scheme for a known distillation protocol. This protocol is especially suitable for iteration since it imposes only weak restrictions on the density operators of the processed systems. In particular, we have emphasised the application to finite quantum resources. In this respect it was possible to demonstrate the limitations on the required number of entangled qubit pairs as well as on their initial entanglement for a successful distillation. Stronger entanglement can be obtained iteratively already for small initial numbers of pairs, even though many pairs have to be sacrificed in order to obtain one distilled pair. We have reduced this loss by introducing backup pairs in our algorithmic description.
Clearly this is not the only possibility to achieve iterative entanglement distillation. First, one can think of recycling backup systems in the distillation process. This, however, works only in a rather narrow regime in which the fidelities of the recycled pairs are close to each other. Second, it is also possible to iterate completely different distillation methods, such as quantum hashing \cite{BBP96}, for finite resources. Both possibilities require a considerable amount of bookkeeping in contrast to the iteration presented here.
Finally, it would be important to simulate distillation for an experimental system, which offers a way to control a finite number of qubit pairs. A possible candidate for this would be an optical lattice filled with atoms, which can be controlled by collisions \cite{BCJ99,JBC99,MGW03}.
\begin{acknowledgments}
We acknowledge financial support by the Deutsche Forschungsgemeinschaft within the \emph{Schwerpunktprogramm Quanten-Informationsverarbeitung} (SPP1078).
\end{acknowledgments}
\section{Introduction}
The investigation of the time evolution of an arbitrary number $N$ of
point-particles the dynamics of which is determined by Newtonian equations
of motion ("accelerations equal forces") is of course a fundamental topic in
physics and mathematics. The identification in this context of models
\textit{amenable to exact treatments} is a major area of research in
mathematical physics and applied mathematics, having a centuries-old history
and having been boosted by developments in the last few decades, which also
impacted several areas of physics beyond mechanics and many fields of pure
mathematics. An interesting related development which is now becoming of
interest is the study of such models in which the motion is restricted to
lie on an \textit{a priori} prescribed manifold: see for instance \cite
{AM1959}, \cite{CG1993}, \cite{H2011}, \cite{ET2012}. In this paper we
make some initial, simple steps in this direction by focussing on various
many-body models describing the evolution of $N$ points whose positions on a
plane are characterized by $N$ \textit{unit} 2-vectors, thereby forcing
their motion to be confined to \textit{a circle of unit radius centered at
the origin}. All these models are characterized by \textit{Newtonian}
equations of motion: accelerations equal forces, which in these models are
of \textit{one-body}, \textit{two-body} or, in some cases, \textit{many-body}
type, and might depend on the velocities of the moving particles in addition
to their positions. All these models are \textit{autonomous:} their
equations of motion are time-independent. They are \textit{all amenable to
exact treatments}: in particular they \textit{all} allow the explicit
identification of $N$ \textit{constants of motion} in terms of the $N$
dependent variables and their $N$ time-derivatives (for terminological
simplicity we hereafter call such models \textit{integrable}). In some cases
their \textit{initial-value problems} can be moreover \textit{solved} by
(explicitly performable) quadratures and subsequent functional inversions,
preceded by purely algebraic operations, such as solving systems of \textit{
linear} constant-coefficients ODEs, or (equivalently) evaluating the $N$
\textit{eigenvalues} of known (time-dependent) $N\times N$ matrices or
(equivalently) the $N$ \textit{zeros} of known (time-dependent) polynomials
of degree $N$ (for terminological simplicity we hereafter call such models
\textit{solvable}). The techniques to manufacture these models are \textit{
not new}; some of these models are themselves \textit{new}; others are
essentially \textit{reinterpretations of known models}. The dynamics of
these models are not analyzed in detail; but in some cases the main features
of their behavior are ascertained, for instance for \textit{isochronous}
models the time evolution of which is \textit{isochronous} (i.e., \textit{
completely periodic} with a \textit{fixed} period independent of the initial
data), or for models \textit{all} motions of which are \textit{multiply
periodic}.
The equations of motion of the $N$-body problems treated below are listed
with minimal comments in the following Section 2, to facilitate the hasty
reader wishing to get an immediate idea of the findings reported in this
paper. These results are then proven in the subsequent Section 3: the titles
of its subsections indicate case-by-case the techniques employed to arrive
at the relevant results. Finally, a terse Section 4 entitled "Outlook"
outlines possible developments, to be eventually reported in other papers.
Some mathematical details are confined to two Appendices.
\section{Many-body models on a circle amenable to exact treatments}
In the following subsections we display, with minimal comments, various
$N$-body problems of Newtonian type ("accelerations equal forces") describing
motions on a circle and amenable to exact treatments (detailed in the
following Section 3). But we provide firstly a terse subsection devoted to
notation.
\subsection{Notations}
The models under consideration generally feature $N$ points moving in a
plane. We identify these $N$ points by 3-vectors $\vec{r}_{n}$, $n=1,2,...,N$,
for which we use the following 3-dimensional notation:
\begin{equation}
\vec{r}_{n}\equiv \left( \cos \theta _{n},~\sin \theta _{n},~0\right) \equiv
\left( x_{n},~y_{n},~0\right) ~. \label{rn}
\end{equation}
Hereafter $N$ is an \textit{arbitrary positive integer} (generally $N\geq 2$)
and indices such as $n,$ $m,$ $\ell $ run over the \textit{positive
integers} from $1$ to $N$ (unless otherwise explicitly indicated).
Clearly these vectors $\vec{r}_{n}$ have \textit{unit} length,
\begin{subequations}
\begin{equation}
\vec{r}_{n}\cdot \vec{r}_{n}=1~. \label{rnunit}
\end{equation}
Throughout this paper a dot sandwiched between two vectors denotes the
standard \textit{scalar} product, so that for instance
\begin{equation}
\vec{r}_{n}\cdot \vec{r}_{m}=\cos \left( \theta _{n}-\theta _{m}\right) ~.
\label{rndotrm}
\end{equation}
It is moreover convenient to introduce the \textit{unit} vector $\hat{z}$
orthogonal to the $xy$-plane,
\end{subequations}
\begin{equation}
\hat{z}\equiv \left( 0,~0,~1\right) ~, \label{zhat}
\end{equation}
and to denote by the "wedge" symbol $\wedge $ the standard (3-dimensional)
\textit{vector} product, so that
\begin{subequations}
\begin{equation}
\hat{z}\wedge \vec{r}_{n}=-\vec{r}_{n}\wedge \hat{z}=\left( -\sin \theta
_{n},~\cos \theta _{n},~0\right) ~,
\end{equation}
\begin{equation}
\left( \hat{z}\wedge \vec{r}_{m}\right) \cdot \vec{r}_{n}=\left( \vec{r}
_{m}\wedge \vec{r}_{n}\right) \cdot \hat{z}=\sin \left( \theta _{n}-\theta
_{m}\right) ~. \label{rmwedgerndotzhat}
\end{equation}
Hereafter we deal with time-dependent vectors
\end{subequations}
\begin{equation}
\vec{r}_{n}\left( t\right) \equiv \left( \cos \theta _{n}\left( t\right)
,~\sin \theta _{n}\left( t\right) ,~0\right) ~,
\end{equation}
and superimposed dots indicate derivatives with respect to the time variable
$t$ so that, for instance,
\begin{subequations}
\begin{equation}
\overset{\cdot }{\vec{r}}_{n}=\dot{\theta}_{n}~\left( -\sin \theta
_{n},~\cos \theta _{n},~0\right) =\dot{\theta}_{n}~\hat{z}\wedge \vec{r}
_{n}~, \label{rndot}
\end{equation}
\begin{eqnarray}
\overset{\cdot \cdot }{\vec{r}}_{n} &=&\ddot{\theta}_{n}~\left( -\sin \theta
_{n},~\cos \theta _{n},~0\right) -\dot{\theta}_{n}^{2}~\left( \cos \theta
_{n},~\sin \theta _{n},~0\right) \notag \\
&=&\ddot{\theta}_{n}~\hat{z}\wedge \vec{r}_{n}-\dot{\theta}_{n}^{2}~\vec{r}
_{n}~. \label{rndotdot}
\end{eqnarray}
Note that here we omitted, for notational simplicity, to indicate \textit{
explicitly} the time-dependence of the quantities appearing in these $N$
equations; we will often do this below without repeating this warning.
Several other identities are reported in Appendix A: they are useful to
obtain the results reported below, but are not necessary to understand the
findings reported in the following subsections.
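As an elementary numerical check of the kinematical identities (\ref{rndot}) and (\ref{rndotdot}), one may compare a finite-difference second time-derivative of $\vec{r}_{n}\left( t\right) $ with the right-hand side of (\ref{rndotdot}); the angle $\theta \left( t\right) $ used in the sketch below is an arbitrary smooth sample choice.

```python
import math

# an arbitrary sample angle theta(t) and its first two derivatives
theta   = lambda t: 0.7 * t + 0.3 * math.sin(2 * t)
dtheta  = lambda t: 0.7 + 0.6 * math.cos(2 * t)
ddtheta = lambda t: -1.2 * math.sin(2 * t)
r = lambda t: (math.cos(theta(t)), math.sin(theta(t)))  # planar part of r_n(t)

t, h = 0.4, 1e-4
# second time-derivative of r by central differences
num = [(r(t + h)[i] - 2.0 * r(t)[i] + r(t - h)[i]) / h ** 2 for i in range(2)]
# right-hand side of (rndotdot), using zhat wedge (x, y, 0) = (-y, x, 0)
x, y = r(t)
rhs = [ddtheta(t) * (-y) - dtheta(t) ** 2 * x,
       ddtheta(t) * x - dtheta(t) ** 2 * y]
for i in range(2):
    assert abs(num[i] - rhs[i]) < 1e-5
```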
\subsection{Two models obtained via techniques of generalized Lagrangian
interpolation}
\textit{First model}:
\end{subequations}
\begin{subequations}
\label{ManyBodyForcesModel}
\begin{eqnarray}
&&\mu _{n}~\overset{\cdot \cdot }{\vec{r}}_{n}=-\mu _{n}~\left( \overset{
\cdot }{\vec{r}}_{n}\cdot \overset{\cdot }{\vec{r}}_{n}\right) ~\vec{r}_{n}
\notag \\
&&+\hat{z}\wedge \vec{r}_{n}~\left\{ \left[ \mu _{n}~\left( \overset{\cdot }{
\vec{r}}_{n}\cdot \overset{\cdot }{\vec{r}}_{n}\right) +\eta _{n}~\left(
\vec{r}_{n}\wedge \overset{\cdot }{\vec{r}}_{n}\right) \cdot \hat{z}\right]
~\sum_{\ell =1,~\ell \neq n}^{N}\left[ \frac{\left( \vec{r}_{\ell }\cdot
\vec{r}_{n}\right) }{\left( \vec{r}_{\ell }\wedge \vec{r}_{n}\right) \cdot
\hat{z}}\right] \right.  \notag \\
&&\left. +\left[ \left( \vec{r}_{n}\wedge \overset{\cdot }{\vec{r}}
_{n}\right) \cdot \hat{z}\right] \sum_{\ell =1,~\ell \neq n}^{N}\left[ \frac{
\sigma _{n}\left( \underline{\vec{r}}\right) }{\sigma _{\ell }\left(
\underline{\vec{r}}\right) }~\frac{\mu _{\ell }~\left( \vec{r}_{\ell }\wedge
\overset{\cdot }{\vec{r}}_{\ell }\right) \cdot \hat{z}+\eta _{\ell }}{\left(
\vec{r}_{\ell }\wedge \vec{r}_{n}\right) \cdot \hat{z}}\right] \right\} ~,
\label{ManyBodyForcesOnCircle}
\end{eqnarray}
\begin{equation}
\sigma _{n}\left( \underline{\vec{r}}\right) =\dprod\limits_{\ell =1,~\ell
\neq n}^{N}\left[ \left( \vec{r}_{\ell }\wedge \vec{r}_{n}\right) \cdot \hat{
z}\right] ~. \label{sigman}
\end{equation}
\textit{Second model}:
\end{subequations}
\begin{eqnarray}
&&\mu _{n}~\overset{\cdot \cdot }{\vec{r}}_{n}=-\mu _{n}~\left( \overset{
\cdot }{\vec{r}}_{n}\cdot \overset{\cdot }{\vec{r}}_{n}\right) ~\vec{r}_{n}
\notag \\
&&+\sum_{\ell =1,~\ell \neq n}^{N}\left\{ \left[ \left( \vec{r}_{\ell
}\wedge \vec{r}_{n}\right) \cdot \hat{z}\right] ^{-1}~\left\{ \left[ \left(
\vec{r}_{n}\wedge \overset{\cdot }{\vec{r}}_{n}\right) \cdot \hat{z}\right]
\left[ \mu _{\ell }~\left( \vec{r}_{\ell }\wedge \overset{\cdot }{\vec{r}}
_{\ell }\right) \cdot \hat{z}+\eta _{\ell }\right] \right. \right.  \notag \\
&&\left. \left. +\left[ \mu _{n}~\left( \vec{r}_{n}\wedge \overset{\cdot }{
\vec{r}}_{n}\right) \cdot \hat{z}+\eta _{n}\right] ~\left[ \left( \vec{r}
_{\ell }\wedge \overset{\cdot }{\vec{r}}_{\ell }\right) \cdot \hat{z}\right]
\right\} ~\left( \vec{r}_{\ell }\wedge \vec{r}_{n}\right) \right\} ~.
\label{TwoBodyForcesOnCircle}
\end{eqnarray}
In these Newtonian equations $\mu _{n}$ and $\eta _{n}$ are $2N$ arbitrary
constants, and for the rest of the notation see Subsection 2.1; note in
particular the property (\ref{rnunit}), implying that the $N$ vectors $\vec{r
}_{n}$ have \textit{unit} modulus, hence that the $N$ points whose time
evolution is determined by these equations of motion are constrained to move
on the circle of \textit{unit} radius centered at the origin of the
Cartesian plane.
These equations of motion are \textit{covariant}, implying that the
corresponding $N$-body problems are \textit{rotation-invariant}.
These two $N$-body problems are both \textit{integrable}: they possess $N$
\textit{constants of motion}, the explicit expressions of which in terms of
the vectors $\vec{r}_{n}$ and their time-derivatives $\overset{\cdot }{\vec{r
}}_{n}$ are displayed in the following Subsection 3.1. The equations of
motion of the first, (\ref{ManyBodyForcesOnCircle}), of these two models
feature \textit{many-body} forces due to the presence in their right-hand
("forces") sides of the quantities $\sigma _{n}\left( \underline{\vec{r}}
\right) $, see (\ref{sigman}), but \textit{their initial-value problem is}
\textit{solvable by purely algebraic operations}; nevertheless their time
evolution can be quite complicated (detailed analyses are not performed in
this paper; the fact that \textit{solvable} models can exhibit quite
complicated dynamics is of course well known, see for instance the papers
where a 3-body model is studied the time evolution of which is highly
nontrivial in spite of the fact that its Aristotelian equations of
motion---"velocity equal forces"---are quite neat and that its initial-value
problem can be reduced to solving a single algebraic equation \cite{CGSS}).
\subsection{Two \textit{solvable} models obtained via a reinterpretation of
known models}
The \textit{first model} is merely a transcription of the \textit{solvable}
"Sutherland model", see Subsection 3.2. It reads as follows
\begin{equation}
\overset{\cdot \cdot }{\vec{r}}_{n}=-\left( \overset{\cdot }{\vec{r}}
_{n}\cdot \overset{\cdot }{\vec{r}}_{n}\right) ~\vec{r}_{n}+g^{2}~\hat{z}
\wedge \vec{r}_{n}~\sum_{\ell =1,~\ell \neq n}^{N}\left\{ \frac{\vec{r}
_{n}\cdot \vec{r}_{\ell }}{\left[ \left( \vec{r}_{\ell }\wedge \vec{r}
_{n}\right) \cdot \hat{z}\right] ^{3}}\right\} ~. \label{2a}
\end{equation}
Here $g$ is an \textit{arbitrary} "coupling constant", and the rest of the
notation is, we trust, clear (see Subsection 2.1).
The \textit{second model} is also merely a transcription of a well-known
\textit{solvable} model ("of goldfish type"), see Subsection 3.2. It reads
as follows
\begin{eqnarray}
&&\overset{\cdot \cdot }{\vec{r}}_{n}=-\left( \overset{\cdot }{\vec{r}}
_{n}\cdot \overset{\cdot }{\vec{r}}_{n}\right) ~\vec{r}_{n}+g_{0}~\hat{z}
\wedge \vec{r}_{n}+g_{1}~\overset{\cdot }{\vec{r}}_{n}  \notag \\
&&+\hat{z}\wedge \vec{r}_{n}~\sum_{\ell =1,~\ell \neq n}^{N}\left\{ \frac{2~
\overset{\cdot }{\vec{r}}_{n}\cdot \overset{\cdot }{\vec{r}}_{\ell }+g_{2}~
\left[ \left( \overset{\cdot }{\vec{r}}_{n}\wedge \vec{r}_{\ell }+\overset{
\cdot }{\vec{r}}_{\ell }\wedge \vec{r}_{n}\right) \cdot \hat{z}\right]
+g_{3}~\vec{r}_{n}\cdot \vec{r}_{\ell }}{\left( \vec{r}_{\ell }\wedge \vec{r}
_{n}\right) \cdot \hat{z}}\right\} ~. \label{2b}
\end{eqnarray}
Here $g_{0},$ $g_{1},$ $g_{2}$ and $g_{3}$ are four \textit{arbitrary}
constants, and the rest of the notation is, we trust, clear (see
Subsection 2.1).
These equations of motion are \textit{covariant}, implying that the
corresponding $N$-body problems are \textit{rotation-invariant}.
\subsection{Two $N$-body problems on a circle obtained by changes of
dependent variables}
These two \textit{solvable} models are merely transcriptions of two
well-known one-dimensional \textit{solvable} models, see Subsection 3.3. The
\textit{first model} reads as follows:
\begin{subequations}
\begin{eqnarray}
&&\overset{\cdot \cdot }{\vec{r}}_{n}=-\left( \overset{\cdot }{\vec{r}}
_{n}\cdot \overset{\cdot }{\vec{r}}_{n}\right) ~\vec{r}_{n}-\hat{z}\wedge
\vec{r}_{n}~\left\{ 2~\left[ \left( \overset{\cdot }{\vec{r}}_{n}\cdot
\overset{\cdot }{\vec{r}}_{n}\right) ~\frac{y_{n}}{x_{n}}\right] \right.
\notag \\
&&\left. +4~x_{n}~y_{n}-x_{n}^{5}~\sum_{\ell =1,~\ell \neq n}^{N}\left[
\frac{y_{\ell }}{\left( \vec{r}_{\ell }\wedge \vec{r}_{n}\right) \cdot \hat{z
}}\right] ^{3}\right\} ~. \label{CalCircle}
\end{eqnarray}
Here $x_{n}\equiv \cos \theta _{n}$ and $y_{n}\equiv \sin \theta _{n}$ are
the two Cartesian components in the plane of the vector $\vec{r}_{n},$ see
(\ref{rn}).
This model is \textit{isochronous} with period $\pi $
\begin{equation}
\vec{r}_{n}\left( t\pm \pi \right) =\vec{r}_{n}\left( t\right) ~.
\end{equation}
The \textit{second model} reads as follows:
\end{subequations}
\begin{eqnarray}
&&\overset{\cdot \cdot }{\vec{r}}_{n}=-\left( \overset{\cdot }{\vec{r}}
_{n}\cdot \overset{\cdot }{\vec{r}}_{n}\right) ~\vec{r}_{n}-\hat{z}\wedge
\vec{r}_{n}~\left\{ 2~\left[ \left( \overset{\cdot }{\vec{r}}_{n}\cdot
\overset{\cdot }{\vec{r}}_{n}\right) ~\frac{y_{n}}{x_{n}}\right] \right.
\notag \\
&&\left. +x_{n}~y_{n}-x_{n}~\sum_{\ell =1,~\ell \neq n}^{N}\left\{ \frac{
2~+x_{n}^{2}~x_{\ell }^{2}}{x_{\ell }~\left[ \left( \vec{r}_{\ell }\wedge
\vec{r}_{n}\right) \cdot \hat{z}\right] }\right\} \right\} ~.
\label{GoldCircle}
\end{eqnarray}
Here $x_{n}\equiv \cos \theta _{n}$ and $y_{n}\equiv \sin \theta _{n}$ are
again the two Cartesian components in the plane of the vector $\vec{r}_{n},$
see (\ref{rn}).
\textit{All} solutions of this model are \textit{multiply periodic}, see
Subsection 3.3.
Note that---in contrast to the equations of motion reported in the two
preceding subsections---those displayed herein, (\ref{CalCircle}) and
(\ref{GoldCircle}), are \textit{not} written in \textit{covariant} fashion,
i.e. without any explicit appearance of the Cartesian components
$x_{n}\equiv \cos \theta _{n}$ and $y_{n}\equiv \sin \theta _{n}$ of the
vector $\vec{r}_{n}$; indeed these equations of motion are \textit{not}
rotation-invariant or, equivalently, they are \textit{not} invariant under
translations along the circle (on which the motions take place due to the
constraint (\ref{rnunit})).
\section{Proofs}
In the following subsections we substantiate the findings reported in the
preceding Section 2.
\subsection{Solvable and integrable models on the circle manufactured via
techniques of generalized Lagrangian interpolation}
In this subsection we employ the technique to manufacture many-body models
amenable to exact treatments introduced in \cite{C2001} (see in particular
Chapter 3 of this book, entitled "$N$-body problems treatable via techniques
of exact Lagrangian interpolation in spaces of one or more dimensions"). We
begin with a terse review of this method, in the specific case of
one-dimensional space with an appropriate choice of the set of "seeds"
(namely, of the $N$ functions providing the point of departure for the
generalized Lagrangian interpolation approach).
The set of seeds we conveniently take as a basis for our treatment consists
of the $N$ functions
\begin{eqnarray}
&&\left\{ s_{n}\left( \theta \right) \right\} _{n=1}^{N}=\left\{ \exp \left[
i~\left( 2~n-N-1\right) ~\theta \right] \right\} _{n=1}^{N} \notag \\
&=&\{\exp \left[ i~\left( 1-N\right) ~\theta \right] ,~\exp \left[ i~\left(
3-N\right) ~\theta \right] ,~... \notag \\
&&...\exp \left[ i~\left( N-3\right) ~\theta \right] ,~\exp \left[ i~\left(
N-1\right) ~\theta \right] \}~. \label{seeds}
\end{eqnarray}
\textit{Remark 3.1.1}. These exponential functions with \textit{imaginary}
argument are \textit{complex}, but clearly this set of seeds could be
replaced without significant changes by an equivalent set featuring instead
sines and cosines of \textit{real} arguments. The use of exponentials merely
facilitates some of the following developments. Likewise the factor $2$ in
the argument of these functions has been introduced merely to yield neater
versions of the equations of motion that will be obtained, see below. The
fact that these seeds are invariant under the transformation $\theta
\Rightarrow \theta +2\pi $ suggests interpreting the variable $\theta $ as
an \textit{angle} in the plane. $\blacksquare $
We then consider a function $f\left( \theta \right) $ representable as a
\textit{linear} superposition of these $N$ seeds,
\begin{subequations}
\label{GenLag}
\begin{equation}
f\left( \theta \right) =\sum_{n=1}^{N}\left[ h_{n}~s_{n}\left( \theta
\right) \right] ~, \label{fhn}
\end{equation}
where the $N$ coefficients $h_{n}$ are \textit{a priori} arbitrary numbers.
We denote by $f_{n}$ the $N$ values that this function takes at the $N$
(\textit{arbitrarily assigned}) "nodes" $\theta =\theta _{n}$,
\begin{equation}
f_{n}=f\left( \theta _{n}\right) ~; \label{fn}
\end{equation}
and we display the representation of this function in terms of these $N$
values via the ("generalized Lagrangian interpolation") formula
\begin{equation}
f\left( \theta \right) =\sum_{n=1}^{N}\left[ f_{n}~q^{\left( n\right)
}\left( \theta ~\left\vert \underline{\theta }\right. \right) \right] ~.
\label{fqn}
\end{equation}
The $N$ "interpolational functions" $q^{\left( n\right) }\left( \theta
~\left\vert \underline{\theta }\right. \right) $ depend on the variable
$\theta $ and on the $N$ nodes $\theta _{n}$ (hence on the $N$-vector having
these nodes as its components, hereafter denoted as $\underline{\theta }
\equiv \left( \theta _{1},~\theta _{2},~...,~\theta _{N}\right) $); they are
themselves \textit{linear} superpositions of the seeds $s_{n}\left( \theta
\right) $, to ensure consistency among (\ref{fqn}) and (\ref{fhn}); and they
feature the property
\end{subequations}
\begin{equation}
q^{\left( n\right) }\left( \theta _{m}~\left\vert \underline{\theta }\right.
\right) =\delta _{nm} \label{qdeltanm}
\end{equation}
to ensure consistency among (\ref{fqn}) and (\ref{fn}) (here and hereafter
$\delta _{nm}$ is the Kronecker symbol: $\delta _{nm}=1$ if $n=m$, $\delta
_{nm}=0$ if $n\neq m$).
The explicit representation of these interpolational functions $q^{\left(
n\right) }\left( \theta ~\left\vert \underline{\theta }\right. \right) $ in
terms of the $N$ seeds $s_{n}\left( \theta \right) $ and the $N$ nodes
$\theta _{n}$ reads \cite{C2001}
\begin{subequations}
\label{Reprqn}
\begin{equation}
q^{(n)}(\theta ~\left\vert \underline{\theta }\right. )=\frac{\Delta (\theta
_{1},\ldots ,\theta _{n-1},\theta ,\theta _{n+1},\ldots ,\theta _{N})}{
\Delta (\theta _{1},\ldots ,\theta _{N})}~,
\end{equation}
where
\begin{equation}
\Delta (\underline{\theta })=\left\vert
\begin{array}{cccc}
s_{1}(\theta _{1}) & s_{2}(\theta _{1}) & \ldots & s_{N}(\theta _{1}) \\
s_{1}(\theta _{2}) & s_{2}(\theta _{2}) & \ldots & s_{N}(\theta _{2}) \\
\vdots & \vdots & \ddots & \vdots \\
s_{1}(\theta _{N}) & s_{2}(\theta _{N}) & \ldots & s_{N}(\theta _{N})
\end{array}
\right\vert ~.
\end{equation}
This determinant---with the set of seeds (\ref{seeds})---is of Vandermonde
type, hence it can be explicitly evaluated, yielding for the interpolational
functions the expression
\end{subequations}
\begin{equation}
q^{\left( n\right) }\left( \theta ~\left\vert \underline{\theta }\right.
\right) =s_{1}\left( \theta -\theta _{n}\right) ~\dprod\nolimits_{\ell
=1,~\ell \neq n}^{N}\left[ \frac{\exp \left( 2~i~\theta \right) -\exp \left(
2~i~\theta _{\ell }\right) }{\exp \left( 2~i~\theta _{n}\right) -\exp \left(
2~i~\theta _{\ell }\right) }\right] ~. \label{qn}
\end{equation}
The next step is to introduce the time variable $t$. As in \cite{C2001}, we
assume hereafter that the $N$ seeds $s_{n}\left( \theta \right) $ are
time-independent; we moreover assume the function $f\left( \theta \right) $
to be also time-independent (thereby simplifying the more general treatment
of \cite{C2001}). A time-dependence is only introduced for the nodes $\theta
_{n}\equiv \theta _{n}\left( t\right) ;$ indeed they shall be the dependent
variables of the dynamical systems we manufacture. Of course the fact that
the nodes $\theta _{n}\left( t\right) $ evolve over time entails that the
values $f_{n}$ taken by the function $f\left( \theta \right) $ at these
nodes (see (\ref{fn})) also evolve over time:
\begin{equation}
f_{n}\equiv f_{n}\left( t\right) =f\left[ \theta _{n}\left( t\right) \right]
~. \label{fnt}
\end{equation}
We then posit a convenient relation between the time evolution of the $N$
nodes $\theta _{n}\left( t\right) $ and the time evolution of the $N$
quantities $f_{n}\left( t\right) $, by setting
\begin{equation}
f_{n}\left( t\right) =\rho _{n}\left[ \underline{\theta }\left( t\right)
\right] ~\dot{\theta}_{n}\left( t\right) +\gamma _{n}\left[ \underline{
\theta }\left( t\right) \right] ~. \label{qndot}
\end{equation}
Here we introduced the $2N$ functions $\rho _{n}\left( \underline{\theta }
\right) $ and $\gamma _{n}\left( \underline{\theta }\right) $ of the $N$
nodes $\theta _{n}$, that will be assigned later at our convenience (but
note that we forsake---again, for simplicity---the possibility to assign an
\textit{explicit} time-dependence to these functions, in addition to their
dependence on the $N$ nodes).
The next step is to ascertain the time dependence of the $N$ nodes $\theta
_{n}\equiv \theta _{n}\left( t\right) $ implied by these assignments. To
this end we time-differentiate the relation (\ref{qndot}), getting the
following expressions for the second time-derivatives of the $N$ nodes
$\theta _{n}\equiv \theta _{n}\left( t\right) $
\begin{equation}
\rho _{n}\left( \underline{\theta }\right) ~\ddot{\theta}_{n}=\dot{f}
_{n}-\sum_{m=1}^{N}\left\{ \left[ \frac{\partial ~\gamma _{n}\left(
\underline{\theta }\right) }{\partial ~\theta _{m}}+\frac{\partial ~\rho
_{n}\left( \underline{\theta }\right) }{\partial ~\theta _{m}}~\dot{\theta}
_{n}\right] ~\dot{\theta}_{m}\right\} ~. \label{thetandotdot}
\end{equation}
Our next step is to evaluate the quantity $\dot{f}_{n}$, which (see
(\ref{fnt})) reads
\begin{equation}
\dot{f}_{n}=\frac{\partial ~f\left( \theta _{n}\right) }{\partial ~\theta
_{n}}~\dot{\theta}_{n}~.
\end{equation}
To evaluate this quantity we can use the finite-dimensional representation
of the differential operator, yielding (for functions which are \textit{
linear} superpositions of the seeds $s_{n}\left( \theta \right) $, see
(\ref{GenLag})) the \textit{exact} formula \cite{C2001}
\begin{subequations}
\begin{equation}
\frac{\partial ~f\left( \theta _{n}\right) }{\partial ~\theta _{n}}
=\sum_{m=1}^{N}\left[ D_{nm}\left( \underline{\theta }\right) ~f_{m}\right]
~,
\end{equation}
with the $N\times N$ matrix $D$ defined componentwise as follows \cite{C2001}
\begin{equation}
D_{nm}\left( \underline{\theta }\right) =\frac{\partial ~q^{\left( m\right)
}(\theta ~\left\vert \underline{\theta }\right. )}{\partial ~\theta }~~
\text{evaluated at~~~}\theta =\theta _{n}~,
\end{equation}
hence in our case (see (\ref{seeds}) and (\ref{Reprqn})) reading
\end{subequations}
\begin{subequations}
\label{MatrixD}
\begin{equation}
D_{nm}\left( \underline{\theta }\right) =\delta _{nm}~\sum_{\ell =1,~\ell
\neq n}^{N}\cot \left( \theta _{n}-\theta _{\ell }\right) +\left( 1-\delta
_{nm}\right) ~\frac{\sigma _{n}\left( \underline{\theta }\right) }{\sigma
_{m}\left( \underline{\theta }\right) }~\frac{1}{\sin \left( \theta
_{n}-\theta _{m}\right) }~, \label{Dnm}
\end{equation}
\begin{equation}
\sigma _{n}\left( \underline{\theta }\right) =\dprod\limits_{\ell =1,~\ell
\neq n}^{N}\left[ \sin \left( \theta _{n}-\theta _{\ell }\right) \right] ~.
\label{sigma}
\end{equation}
Note that this definition coincides, via (\ref{rmwedgerndotzhat}), with
(\ref{sigman}).
We therefore conclude that the system (\ref{thetandotdot}) yields the
following set of $N$ Newtonian equations of motion for the dependent
variables $\theta _{n}\equiv \theta _{n}\left( t\right) $:
\end{subequations}
\begin{eqnarray}
&&\rho _{n}\left( \underline{\theta }\right) ~\ddot{\theta}_{n}=\dot{\theta}
_{n}~\left[ \rho _{n}\left( \underline{\theta }\right) ~\dot{\theta}
_{n}+\gamma _{n}\left( \underline{\theta }\right) \right] ~\sum_{\ell
=1,~\ell \neq n}^{N}\left[ \cot \left( \theta _{n}-\theta _{\ell }\right)
\right]  \notag \\
&&+\dot{\theta}_{n}~\sum_{\ell =1,~\ell \neq n}^{N}\left\{ \frac{\sigma
_{n}\left( \underline{\theta }\right) }{\sigma _{\ell }\left( \underline{
\theta }\right) }~\frac{\left[ \rho _{\ell }\left( \underline{\theta }
\right) ~\dot{\theta}_{\ell }+\gamma _{\ell }\left( \underline{\theta }
\right) \right] }{\sin \left( \theta _{n}-\theta _{\ell }\right) }\right\}
\notag \\
&&-\sum_{m=1}^{N}\left\{ \left[ \frac{\partial ~\rho _{n}\left( \underline{
\theta }\right) }{\partial ~\theta _{m}}~\dot{\theta}_{n}+\frac{\partial
~\gamma _{n}\left( \underline{\theta }\right) }{\partial ~\theta _{m}}\right]
~\dot{\theta}_{m}\right\} ~. \label{GenNbody}
\end{eqnarray}
Of course to obtain this system of $N$ second-order ODEs we also used
(\ref{qndot}).
Let us now emphasize that, as a consequence of the way these $N$-body
problems have been manufactured, they are \textit{integrable}. It is
indeed plain that the time independence of the function $f\left( \theta
\right) $ entails (via (\ref{fhn}), (\ref{fn}) and (\ref{qndot})) the
relations
\begin{subequations}
\label{FirstOrderODEs}
\begin{equation}
\sum_{m=1}^{N}\left\{ h_{m}~s_{m}\left[ \theta _{n}\left( t\right) \right] \right\} =\rho _{n}\left[ \underline{\theta }\left( t\right) \right] ~\dot{\theta}_{n}\left( t\right) +\gamma _{n}\left[ \underline{\theta }\left( t\right) \right] ~. \label{hthetadot}
\end{equation}
Here we have displayed the \textit{time-dependence} of the various
quantities, in order to emphasize the \textit{time-independence} of the $N$
coefficients $h_{m}$, which can actually be evaluated by solving this system
of $N$ \textit{linear }equations, thereby obtaining (via (\ref{Reprqn})) the
following formulas
\begin{equation}
h_{m}=q^{\left( m\right) }\left( \vartheta _{m}~\left\vert \underline{\theta
}\right. \right) ~,~~~~\vartheta _{m}\equiv \frac{i~\log \left[ \rho
_{m}\left( \underline{\theta }\right) ~\dot{\theta}_{m}+\gamma _{m}\left(
\underline{\theta }\right) \right] }{2~m-N-1}~, \label{hn}
\end{equation}
where of course the $N$ nodes $\theta _{m}\equiv \theta _{m}\left( t\right) $
and their $N$ time derivatives $\dot{\theta}_{m}\equiv \dot{\theta}_{m}\left( t\right) $ can be evaluated at any arbitrary time $t$. It is thus
plain that the $N$-body systems (\ref{GenNbody}) are \textit{integrable} for
any \textit{arbitrary} assignment of the $2N$ functions $\rho _{m}\left(
\underline{\theta }\right) $ and $\gamma _{m}\left( \underline{\theta }\right) $ of the $N$ dependent variables $\theta _{n}$, with these $N$
quantities $h_{m}$ providing $N$ \textit{constants of motion} given by
explicit (generally nontrivial) expressions in terms of the $N$ nodes
$\theta _{n}$ and their $N$ time derivatives $\dot{\theta}_{n}$.
We are still free to assign the $2N$ functions $\rho _{n}\left( \underline{\theta }\right) $ and $\gamma _{n}\left( \underline{\theta }\right) $. There
are two natural choices.
The first one reads simply
\end{subequations}
\begin{equation}
\rho _{n}\left( \underline{\theta }\right) =\mu _{n}\text{ ,~~~}\gamma
_{n}\left( \underline{\theta }\right) =\eta _{n}~, \label{rhogammaconstants}
\end{equation}
with $\mu _{n}$ and $\eta _{n}$ arbitrary \textit{constant} parameters. It
clearly yields (see (\ref{GenNbody})) an $N$-body system characterized by
the following set of Newtonian equations of motion
\begin{eqnarray}
&&\mu _{n}~\ddot{\theta}_{n}=\dot{\theta}_{n}~\left( \mu _{n}~\dot{\theta}_{n}+\eta _{n}\right) ~\sum_{\ell =1,~\ell \neq n}^{N}\left[ \cot \left( \theta _{n}-\theta _{\ell }\right) \right]  \notag \\
&&+\dot{\theta}_{n}~\sum_{\ell =1,~\ell \neq n}^{N}\left[ \frac{\sigma _{n}\left( \underline{\theta }\right) }{\sigma _{\ell }\left( \underline{\theta }\right) }~\frac{\left( \mu _{\ell }~\dot{\theta}_{\ell }+\eta _{\ell }\right) }{\sin \left( \theta _{n}-\theta _{\ell }\right) }~\right] ~.
\label{WithManyBodyForces}
\end{eqnarray}
Here the functions $\sigma _{n}\left( \underline{\theta }\right) $ of the $N$
nodes $\theta _{m}$ are of course defined by (\ref{sigma}).
The second assignment of the $2N$ functions $\rho _{n}\left( \underline{\theta }\right) $ and $\gamma _{n}\left( \underline{\theta }\right) $ is
suggested by the structure of the system (\ref{GenNbody}). It reads
\begin{equation}
\rho _{n}\left( \underline{\theta }\right) =\mu _{n}~\sigma _{n}\left( \underline{\theta }\right) ~,~\ ~\gamma _{n}\left( \underline{\theta }\right) =\eta _{n}~\sigma _{n}\left( \underline{\theta }\right) ~, \label{rhogmmasigma}
\end{equation}
where again $\mu _{n}$ and $\eta _{n}$ are arbitrary \textit{constant}
parameters and the functions $\sigma _{n}\left( \underline{\theta }\right) $
are defined as above, see (\ref{sigma}), implying (by logarithmic
differentiation)
\begin{subequations}
\begin{equation}
\frac{\partial ~\gamma _{n}\left( \underline{\theta }\right) }{\partial
~\theta _{m}}=\gamma _{n}\left( \underline{\theta }\right) ~\left\{ \delta
_{nm}~\sum_{\ell =1,~\ell \neq n}^{N}\left[ \cot \left( \theta _{n}-\theta
_{\ell }\right) \right] -\left( 1-\delta _{nm}\right) ~\cot \left( \theta
_{n}-\theta _{m}\right) \right\} ~,
\end{equation}
and likewise
\begin{equation}
\frac{\partial ~\rho _{n}\left( \underline{\theta }\right) }{\partial
~\theta _{m}}=\rho _{n}\left( \underline{\theta }\right) ~\left\{ \delta
_{nm}~\sum_{\ell =1,~\ell \neq n}^{N}\left[ \cot \left( \theta _{n}-\theta
_{\ell }\right) \right] -\left( 1-\delta _{nm}\right) ~\cot \left( \theta
_{n}-\theta _{m}\right) \right\} ~.
\end{equation}
Thereby the $N$-body system gets characterized by the following, simpler set
of Newtonian equations of motion:
\end{subequations}
\begin{equation}
\mu _{n}~\ddot{\theta}_{n}=\sum_{\ell =1,~\ell \neq n}^{N}\left[ \frac{\dot{\theta}_{n}~\left( \mu _{\ell }~\dot{\theta}_{\ell }+\eta _{\ell }\right) +\left( \mu _{n}~\dot{\theta}_{n}+\eta _{n}\right) ~\dot{\theta}_{\ell }~\cos \left( \theta _{n}-\theta _{\ell }\right) }{\sin \left( \theta _{n}-\theta _{\ell }\right) }\right] ~. \label{WithTwoBodyForces}
\end{equation}
The differences between these two $N$-body systems, (\ref{WithManyBodyForces})
and (\ref{WithTwoBodyForces}), deserve to be emphasized: the $N$-body model
(\ref{WithManyBodyForces}) involves \textit{many-body} forces, due to the
presence of the functions $\sigma _{n}\left( \underline{\theta }\right) $
and $\sigma _{\ell }\left( \underline{\theta }\right) $ in its right-hand
(``forces'') side; while the $N$-body model (\ref{WithTwoBodyForces}) only
involves \textit{two-body} forces. Both systems can be integrated once,
corresponding to the transition from their $N$ \textit{second-order}
Newtonian equations of motion to the corresponding $N$ \textit{first-order}
ODEs (\ref{hthetadot}). On the other hand, as we show below, only the first
of these two \textit{integrable} systems is \textit{solvable}.
Indeed, for the first system (but not for the second!), the $N$ first-order
ODEs (\ref{hthetadot}) are \textit{uncoupled}, reading simply, via (\ref{rhogammaconstants}),
\begin{subequations}
\label{ODEsForThetan}
\begin{equation}
\mu _{n}~\dot{\theta}_{n}=-\eta _{n}+\sum_{m=1}^{N}\left[ h_{m}~s_{m}\left(
\theta _{n}\right) \right] ~,
\end{equation}
or, equivalently (see (\ref{seeds})),
\begin{equation}
\mu _{n}~\exp \left[ \left( N+1\right) ~i~\theta _{n}\right] ~\dot{\theta}_{n}=-\eta _{n}~\exp \left[ \left( N+1\right) ~i~\theta _{n}\right] +\sum_{m=1}^{N}\left[ h_{m}~\exp \left( 2~m~i~\theta _{n}\right) \right] ~,
\end{equation}
where the $N$ quantities $h_{n}$ are explicitly known in terms of the $2N$
initial data $\theta _{n}\left( 0\right) $, $\dot{\theta}_{n}\left( 0\right)
$ (via (\ref{hn}), (\ref{rhogammaconstants}) and (\ref{qn}): see Appendix B).
These first-order ODEs can be integrated; we confine the relevant
developments to Appendix B.
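As a sanity check of this construction (a numerical aside of ours, not part of the original derivation; all parameter values and initial data below are arbitrary illustrative choices), one can integrate the system (\ref{WithManyBodyForces}) for $N=2$ and verify that the two quantities $h_{m}$, obtained at any time by solving the \textit{linear} system (\ref{hthetadot}) with the seeds $s_{m}\left( \theta \right) =\exp \left[ i\left( 2m-N-1\right) \theta \right] $, indeed stay constant along the motion:

```python
import cmath
import math

# Illustrative check (N = 2): integrate the "many-body forces" system with
# constant rho_n = mu_n, gamma_n = eta_n, and verify that the quantities h_m,
# obtained by solving the linear system
#     h_1 s_1(theta_n) + h_2 s_2(theta_n) = mu_n * thetadot_n + eta_n
# with seeds s_m(theta) = exp[i (2m - N - 1) theta], are conserved.
MU = (1.0, 1.3)    # arbitrary constants mu_n
ETA = (0.2, -0.4)  # arbitrary constants eta_n

def accel(th, dth):
    """Right-hand side of the N = 2 Newtonian equations of motion."""
    a = []
    for n in range(2):
        l = 1 - n
        d = th[n] - th[l]
        f_n = MU[n] * dth[n] + ETA[n]
        f_l = MU[l] * dth[l] + ETA[l]
        # For N = 2 the ratio sigma_n / sigma_l = sin(d) / sin(-d) = -1.
        a.append(dth[n] * (f_n / math.tan(d) - f_l / math.sin(d)) / MU[n])
    return a

def constants_of_motion(th, dth):
    """Solve the 2x2 linear system for (h_1, h_2) by Cramer's rule."""
    f = [MU[n] * dth[n] + ETA[n] for n in range(2)]
    det = 2j * cmath.sin(th[1] - th[0])
    h1 = (f[0] * cmath.exp(1j * th[1]) - f[1] * cmath.exp(1j * th[0])) / det
    h2 = (f[1] * cmath.exp(-1j * th[0]) - f[0] * cmath.exp(-1j * th[1])) / det
    return h1, h2

def rk4_step(state, dt):
    """One Runge-Kutta step for (theta_1, theta_2, thetadot_1, thetadot_2)."""
    def rhs(s):
        a = accel(s[:2], s[2:])
        return [s[2], s[3], a[0], a[1]]
    k1 = rhs(state)
    k2 = rhs([state[i] + 0.5 * dt * k1[i] for i in range(4)])
    k3 = rhs([state[i] + 0.5 * dt * k2[i] for i in range(4)])
    k4 = rhs([state[i] + dt * k3[i] for i in range(4)])
    return [state[i] + dt * (k1[i] + 2 * k2[i] + 2 * k3[i] + k4[i]) / 6
            for i in range(4)]

state = [0.3, 1.1, 0.5, -0.2]  # arbitrary initial theta_n, thetadot_n
h_start = constants_of_motion(state[:2], state[2:])
for _ in range(500):
    state = rk4_step(state, 1e-3)
h_end = constants_of_motion(state[:2], state[2:])
drift = max(abs(a - b) for a, b in zip(h_start, h_end))
```

The drift of the $h_{m}$ along the numerical trajectory is at the level of the integration error, consistent with their being constants of motion.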
Although the technique to manufacture these two \textit{solvable} and
\textit{integrable} $N$-body problems, (\ref{WithManyBodyForces}) and (\ref{WithTwoBodyForces}), is \textit{not} new \cite{C2001}, these models are, to
the best of our knowledge, themselves \textit{new}; hence a detailed
discussion of the actual behavior of these systems has not yet been done. In
the present paper we limit our consideration to pointing out how these
models can be reformulated to describe the evolution of $N$ points whose
positions on a plane are characterized by $N$ \textit{unit} 2-vectors $\vec{r}_{n}\left( t\right) $, see the notation introduced in Subsection 2.1. To
this end one utilizes the formulas (\ref{rndotdot}), (\ref{rndotrm}), (\ref{rmwedgerndotzhat}) and the relevant ones among those conveniently collected
in Appendix A. And it is plain that one thereby obtains the two models (\ref{ManyBodyForcesModel}) and (\ref{TwoBodyForcesOnCircle}).
\subsection{Solvable models on the circle manufactured by reinterpreting
known solvable models}
In this section we tersely indicate how to obtain the two models (\ref{2a})
and (\ref{2b}).
The \textit{first model} obtains from the $N$-body system characterized by
the following Newtonian equations of motion (with velocity-independent
two-body forces):
\end{subequations}
\begin{equation}
\ddot{\theta}_{n}=g^{2}~\sum_{\ell =1,~\ell \neq n}^{N}\left[ \frac{\cos
\left( \theta _{n}-\theta _{\ell }\right) }{\sin ^{3}\left( \theta
_{n}-\theta _{\ell }\right) }\right] ~. \label{Suth}
\end{equation}
Here $g$ is an arbitrary ``coupling constant'', and the rest of the notation
is, we trust, clear.
This is a well-known \textit{solvable} many-body problem, generally
associated with the name of Bill Sutherland, who was the first to show the
possibility to treat this $N$-body problem by exact methods (originally in a
quantal context \cite{S}); its treatment in a classical (Hamiltonian)
context is provided in several textbooks, see for instance \cite{P1990}, \cite{C2001} and \cite{S2004}.
It is plain that the model (\ref{2a}) is merely the transcription of this
model via the notation of Subsection 2.1.
The \textit{second model} obtains from the $N$-body system characterized by
the following Newtonian equations of motion (with velocity-dependent
one-body and two-body forces):
\begin{equation}
\ddot{\theta}_{n}=g_{0}+g_{1}~\dot{\theta}_{n}+\sum_{\ell =1,~\ell \neq
n}^{N}\left\{ \left[ 2~\dot{\theta}_{n}~\dot{\theta}_{\ell }+g_{2}~\left(
\dot{\theta}_{n}+\dot{\theta}_{\ell }\right) +g_{3}\right] ~\cot \left(
\theta _{n}-\theta _{\ell }\right) \right\} ~.
\end{equation}
Here $g_{0},$ $g_{1},$ $g_{2}$ and $g_{3}$ are $4$ arbitrary coupling
constants, and we again trust the rest of the notation to be clear.
This is also a well-known \textit{solvable} model, see for instance eq.
(2.3.5-12) on page 199 of \cite{C2001}.
And it is again plain that the model (\ref{2b}) is merely the transcription
of this model via the notation of Subsection 2.1 and Appendix A.
\subsection{How to manufacture $N$-body problems with \textit{angles} as
dependent variables}
In the preceding subsection we have shown how certain $N$-body models with
dependent variables naturally interpretable as \textit{angles} can be
reformulated as $N$-body models describing the time evolution on a plane of
particles \textit{constrained to move on a circle}. In this subsection we
indicate how, via a simple change of dependent variables, essentially
\textit{any} $N$-body model can be reformulated so that its dependent
variables can be interpreted as \textit{angles}, hence subsequently it can
also be reformulated (in fact in many ways) so that it describes the time
evolution of particles \textit{constrained to move on a plane circle}.
The trick to achieve this goal is quite elementary and general; we
illustrate it below via two examples.
Consider an $N$-body model in which the positions of the $N$
point-particles---moving in one-dimensional space---are identified by $N$
coordinates $z_{n}\equiv z_{n}\left( t\right) ,$ and perform the change of
dependent variables by positing, say
\begin{equation}
z_{n}\left( t\right) =\tan \left[ \theta _{n}\left( t\right) \right] ~.
\label{ZnTanThetan}
\end{equation}
\textit{Remark 3.3.1}. Of course this assignment defines $\theta _{n}\left(
t\right) $ only $\func{mod}\left( \pi \right) $; and clearly many other
assignments could be instead made---different but having an analogous
effect, such as $z_{n}=1/\sin \left( 2\theta _{n}\right) $, or $z_{n}=\tan
^{3}\theta _{n}$, etc. . $\blacksquare $
In the \textit{first example} we take as point of departure the $N$-body
problem characterized by the Newtonian equations of motion
\begin{subequations}
\begin{equation}
\ddot{z}_{n}=-4~z_{n}+g^{2}~\sum_{\ell =1,~\ell \neq n}^{N}\left[ \left(
z_{n}-z_{\ell }\right) ^{-3}\right] ~. \label{Cal}
\end{equation}
Here $g$ is an arbitrary (real) coupling constant. This is a well-known
\textit{solvable} model (see for instance \cite{C2001}); it is
\textit{isochronous}, all its solutions being \textit{completely periodic with
period }$\pi $,
\begin{equation}
z_{n}\left( t\pm \pi \right) =z_{n}\left( t\right) ~. \label{iso}
\end{equation}
Via the change of dependent variables (\ref{ZnTanThetan}) the equations of
motion (\ref{Cal}) become (as the diligent reader will easily verify,
utilizing if need be the identities reported in the last part of Appendix A)
\end{subequations}
\begin{subequations}
\begin{eqnarray}
&&\ddot{\theta}_{n}=-2~\dot{\theta}_{n}^{2}~\tan \theta _{n}-4~\sin \theta _{n}~\cos \theta _{n}  \notag \\
&&+g^{2}~\sum_{\ell =1,~\ell \neq n}^{N}\left[ \frac{\cos ^{5}\theta _{n}~\cos ^{3}\theta _{\ell }}{\sin ^{3}\left( \theta _{n}-\theta _{\ell }\right) }\right] ~. \label{3a}
\end{eqnarray}
\textit{Remark 3.3.2}. This model of course inherits the property of
\textit{isochrony} of the model (\ref{Cal}) it has been obtained from:
\begin{equation}
\theta _{n}\left( t\pm \pi \right) =\theta _{n}\left( t\right) ~~~\func{mod}\left( \pi \right) ~.~\blacksquare
\end{equation}
The next task is to transform these equations of motion, (\ref{3a}), into
equations of motion for points moving in the plane but constrained to stay
on a \textit{circle} of \textit{unit} radius centered at the origin. To
realize this goal one may now use the change of dependent variables from the
\textit{angles} $\theta _{n}$ to the \textit{vectors} $\vec{r}_{n}$
described in Subsection 2.1, using if need be the identities reported in the
first part of Appendix A. And it is plain that in this manner one arrives at
the equations of motion (\ref{CalCircle}).
In the \textit{second example} we take as point of departure the well-known
\textit{solvable }$N$-body problem characterized by the following Newtonian
equations of motion (see eq. (2.3.4.2-1) on page 188 of \cite{C2001}):
\end{subequations}
\begin{equation}
\ddot{z}_{n}=-z_{n}+\sum_{\ell =1,~\ell \neq n}^{N}\left( \frac{2~\dot{z}_{n}~\dot{z}_{\ell }+1}{z_{n}-z_{\ell }}\right) ~. \label{Gold}
\end{equation}
\textit{All} solutions of this model are \textit{multiply periodic}, being
(generally nonlinear) superpositions of the $N$ functions $b_{m}\left(
t\right) =\cos \left( \sqrt{m}~t+\beta _{m}\right) $, $m=1,...,N$ (with the
$N$ phases $\beta _{m}$ depending on the initial data); for special initial
data only functions $b_{m}\left( t\right) $ with $m$ a \textit{squared
integer} contribute, yielding solutions \textit{completely periodic}
with period $2\pi $ \cite{C2001}.
Via the change of dependent variables (\ref{ZnTanThetan}) the equations of
motion (\ref{Gold}) become (as the diligent reader will easily verify,
utilizing again, if need be, the identities reported in the last part of
Appendix A):
\begin{eqnarray}
&&\ddot{\theta}_{n}=-2~\dot{\theta}_{n}^{2}~\tan \theta _{n}-\sin \theta _{n}~\cos \theta _{n}  \notag \\
&&+\cos \theta _{n}~\sum_{\ell =1,~\ell \neq n}^{N}\left[ \frac{2~\dot{\theta}_{n}~\dot{\theta}_{\ell }+\cos ^{2}\theta _{n}~\cos ^{2}\theta _{\ell }}{\cos \theta _{\ell }~\sin \left( \theta _{n}-\theta _{\ell }\right) }\right] ~.
\end{eqnarray}
Then we transform these equations of motion into equations of motion for
points moving in the plane but constrained to stay on a \textit{circle} of
\textit{unit} radius centered at the origin, by using again the change of
dependent variables from the \textit{angles} $\theta _{n}$ to the \textit{vectors} $\vec{r}_{n}$ described in Subsection 2.1 via---if need be---the
identities reported in the first part of Appendix A. And it is plain that in
this manner one arrives at the equations of motion (\ref{GoldCircle}).
\section{Outlook}
Our original motivation to undertake this line of research was the intention
to manufacture $N$-body problems amenable to exact treatments describing
motions on a sphere, or more generally on manifolds. We consider the results
reported in this paper as a modest first step in that direction. We also
believe that the actual behavior of the \textit{new} models reported in this
paper---see (\ref{ManyBodyForcesModel}) and (\ref{TwoBodyForcesOnCircle})---shall
eventually deserve a more detailed scrutiny than that provided in
Subsection 3.1.
\section{Appendix A: identities}
It is plain that the notation introduced in Subsection 2.1 entails the
following additional identities:
\begin{subequations}
\begin{equation}
\overset{\cdot }{\vec{r}}_{n}\cdot \vec{r}_{n}=0~,~~~\overset{\cdot }{\vec{r}}_{n}\cdot \overset{\cdot }{\vec{r}}_{n}=\dot{\theta}_{n}^{2}~,~~~\left( \vec{r}_{n}\wedge \overset{\cdot }{\vec{r}}_{n}\right) \cdot \hat{z}=\dot{\theta}_{n}~, \label{Ardotscalar}
\end{equation}
\begin{equation}
\overset{\cdot \cdot }{\vec{r}}_{n}\cdot \vec{r}_{n}=-\dot{\theta}_{n}^{2}~,~~~\overset{\cdot \cdot }{\vec{r}}_{n}\cdot \left( \hat{z}\wedge \vec{r}_{n}\right) =\ddot{\theta}_{n}~, \label{Ardotdotscalar}
\end{equation}
\begin{equation}
\overset{\cdot }{\vec{r}}_{n}\cdot \vec{r}_{m}=-\dot{\theta}_{n}~\sin \left(
\theta _{n}-\theta _{m}\right) ~, \label{Arndotrmscal}
\end{equation}
\begin{equation}
\overset{\cdot }{\vec{r}}_{n}\cdot \overset{\cdot }{\vec{r}}_{m}=\dot{\theta}_{n}~\dot{\theta}_{m}~\cos \left( \theta _{n}-\theta _{m}\right) ~;
\label{Arndotrmdotscal}
\end{equation}
\end{subequations}
\begin{subequations}
\begin{equation}
\hat{z}\wedge \overset{\cdot }{\vec{r}}_{n}=-\dot{\theta}_{n}~\vec{r}_{n}~,
\label{Azhatvectrdot}
\end{equation}
\begin{equation}
\hat{z}\wedge \overset{\cdot \cdot }{\vec{r}}_{n}=-\ddot{\theta}_{n}~\vec{r}_{n}-\dot{\theta}_{n}^{2}~\hat{z}\wedge \vec{r}_{n}~;
\label{Azhatvectrdotdot}
\end{equation}
\end{subequations}
\begin{subequations}
\begin{equation}
\left( \overset{\cdot }{\vec{r}}_{n}\wedge \vec{r}_{m}\right) \cdot \hat{z}=-\dot{\theta}_{n}~\cos \left( \theta _{n}-\theta _{m}\right) ~,
\label{Arndotrmvect}
\end{equation}
\begin{equation}
\left( \overset{\cdot }{\vec{r}}_{n}\wedge \overset{\cdot }{\vec{r}}_{m}\right) \cdot \hat{z}=-\dot{\theta}_{n}~\dot{\theta}_{m}~\sin \left( \theta _{n}-\theta _{m}\right) ~. \label{Arndotrmdotvect}
\end{equation}
We also display here some relations among the time-dependent ``coordinates''
\end{subequations}
\begin{subequations}
\begin{equation}
z_{n}\equiv z_{n}\left( t\right) =\tan \theta _{n}\left( t\right) ~,
\label{Axn}
\end{equation}
and the ``angles'' $\theta _{n}\equiv \theta _{n}\left( t\right) $:
\begin{equation}
z_{n}-z_{m}=\frac{\sin \left( \theta _{n}-\theta _{m}\right) }{\cos \theta
_{n}~\cos \theta _{m}}~,~~~\frac{1}{z_{n}-z_{m}}=\frac{\cos \theta _{n}~\cos
\theta _{m}}{\sin \left( \theta _{n}-\theta _{m}\right) }~;
\label{Aznminuszm}
\end{equation}
\end{subequations}
\begin{equation}
\dot{z}_{n}=\frac{\dot{\theta}_{n}}{\cos ^{2}\theta _{n}}~,~~~\dot{z}_{n}~z_{m}=\frac{\dot{\theta}_{n}~\sin \theta _{m}}{\cos ^{2}\theta _{n}~\cos \theta _{m}}~,~~~\dot{z}_{n}~\dot{z}_{m}=\frac{\dot{\theta}_{n}~\dot{\theta}_{m}}{\cos ^{2}\theta _{n}~\cos ^{2}\theta _{m}}~;
\label{Azndot}
\end{equation}
\begin{subequations}
\label{Azndotzm}
\begin{equation}
\frac{\dot{z}_{n}+\dot{z}_{m}}{z_{n}-z_{m}}=\frac{\dot{\theta}_{n}~\cos
^{2}\theta _{m}+\dot{\theta}_{m}~\cos ^{2}\theta _{n}}{\cos \theta _{n}~\cos
\theta _{m}~\sin \left( \theta _{n}-\theta _{m}\right) }~,
\end{equation}
\begin{equation}
\frac{\dot{z}_{n}~z_{m}+\dot{z}_{m}~z_{n}}{z_{n}-z_{m}}=\frac{\dot{\theta}_{n}~\sin \theta _{m}~\cos \theta _{m}+\dot{\theta}_{m}~\sin \theta _{n}~\cos \theta _{n}}{\cos \theta _{n}~\cos \theta _{m}~\sin \left( \theta _{n}-\theta _{m}\right) }~,
\end{equation}
\begin{equation}
\frac{\dot{z}_{n}~\dot{z}_{m}}{z_{n}-z_{m}}=\frac{\dot{\theta}_{n}~\dot{\theta}_{m}}{\cos \theta _{n}~\cos \theta _{m}~\sin \left( \theta _{n}-\theta _{m}\right) }~;
\end{equation}
\end{subequations}
\begin{equation}
\ddot{z}_{n}=\frac{\ddot{\theta}_{n}}{\cos ^{2}\theta _{n}}+\frac{2~\dot{\theta}_{n}^{2}~\sin \theta _{n}}{\cos ^{3}\theta _{n}}=\frac{\ddot{\theta}_{n}+2~\dot{\theta}_{n}^{2}~\tan \theta _{n}}{\cos ^{2}\theta _{n}}~.
\label{Azndotdot}
\end{equation}
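As an aside (ours, not in the original text), the last identity is easily checked numerically by finite differences, here on the sample trajectory $\theta \left( t\right) =0.3+0.2~\sin t$:

```python
import math

# Finite-difference check (illustrative) of the identity
#   zddot = (thetaddot + 2 thetadot^2 tan(theta)) / cos^2(theta)
# for z(t) = tan(theta(t)), on the sample trajectory theta(t) = 0.3 + 0.2 sin t.
def theta(t):
    return 0.3 + 0.2 * math.sin(t)

def z(t):
    return math.tan(theta(t))

t0, h = 0.5, 1e-4
# Second derivative of z by central differences.
z_dd_numeric = (z(t0 + h) - 2.0 * z(t0) + z(t0 - h)) / h**2
# Right-hand side of the identity, with thetadot = 0.2 cos t, thetaddot = -0.2 sin t.
td, tdd = 0.2 * math.cos(t0), -0.2 * math.sin(t0)
z_dd_formula = (tdd + 2.0 * td**2 * math.tan(theta(t0))) / math.cos(theta(t0))**2
```

The two evaluations agree to within the finite-difference truncation error.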
\section{Appendix B: solution of the system (\protect\ref{ODEsForThetan})}
In this Appendix we indicate how the initial-value problem of the system of
$N$ (decoupled) first-order ODEs (\ref{ODEsForThetan}) is solved.
Let us, for notational convenience, make here the following change of
variables:
\begin{subequations}
\begin{equation}
\zeta _{n}\left( t\right) =\exp \left[ i~\theta _{n}\left( t\right) \right]
~, \label{Bzn}
\end{equation}
entailing
\begin{equation}
\dot{\zeta}_{n}\left( t\right) =i~\dot{\theta}_{n}\left( t\right) ~\exp
\left[ i~\theta _{n}\left( t\right) \right] ~.
\end{equation}
We then use the relation (\ref{Bzn}) to rewrite the equations of motion (\ref{ODEsForThetan}) as follows:
\end{subequations}
\begin{equation}
\mu ~\zeta ^{N}~\dot{\zeta}=i~\left[ -\eta ~\zeta
^{N+1}+\sum_{m=1}^{N}\left( h_{m}~\zeta ^{2m}\right) \right] ~. \label{ODE}
\end{equation}
\textit{Remark B.1}. Let us emphasize that, in the last formula and below
(in this Appendix B), as a notational simplification, we \textit{omit} to
indicate explicitly the time-dependence of the dependent variable $\zeta
_{n}\equiv \zeta _{n}\left( t\right) $, as well as its dependence on the
index $n$; and likewise the dependence on this index $n$ of the parameters
$\mu _{n}$ and $\eta _{n}$. $\blacksquare $
The ODE (\ref{ODE}) can clearly be solved by the following quadrature:
\begin{equation}
\int\limits_{\zeta \left( 0\right) }^{\zeta \left( t\right) }d\xi ~\xi
^{N-2}~\left\{ -\eta ~\xi ^{N-1}+\sum_{m=1}^{N}\left[ h_{m}~\xi ^{2\left(
m-1\right) }\right] \right\} ^{-1}=\frac{i~t}{\mu }~. \label{zt}
\end{equation}
To perform the integration it is convenient to introduce the $2\left(
N-1\right) $ zeros $\xi _{j}$ of the polynomial of degree $2\left(
N-1\right) $ appearing in the denominator of the integrand,
\begin{subequations}
\begin{equation}
-\eta ~\xi ^{N-1}+\sum_{m=1}^{N}\left[ h_{m}~\xi ^{2\left( m-1\right) }\right] =h_{N}~\prod\limits_{j=1}^{2\left( N-1\right) }\left( \xi -\xi _{j}\right) ~,
\end{equation}
and then the $2\left( N-1\right) $ ``residues'' $\phi _{j}$ defined by setting
\begin{equation}
\left\{ -\eta ~\xi ^{N-1}+\sum_{m=1}^{N}\left[ h_{m}~\xi ^{2\left( m-1\right) }\right] \right\} ^{-1}=h_{N}^{-1}~\sum\limits_{j=1}^{2\left( N-1\right) }\left( \frac{\phi _{j}}{\xi -\xi _{j}}\right) ~. \label{Res}
\end{equation}
Note that these formulas imply that the computation of, firstly, the
$2\left( N-1\right) $ zeros $\xi _{j}$, and, secondly, the $2\left(
N-1\right) $ residues $\phi _{j}$, is a purely \textit{algebraic} task
(although not one that can be analytically performed for $N\geq 3$); hence
these quantities can in principle be considered known functions of the
parameter $\eta $ (from which they inherit a dependence on the index $n$,
see \textit{Remark B.1}) and of the $N$ constants of motion $h_{m}$. As for
these $N$ quantities $h_{m}$ (which are of course independent of the index
$n$), they are---in the context of the \textit{initial-value} problem for the
dynamical system (\ref{WithManyBodyForces})---explicitly given by the
formulas (\ref{hn}) at $t=0$ (let us reiterate that these expressions of the
$N$ constants of motion $h_{m}$ are valid throughout the time evolution, and
of course, in particular, at the \textit{initial} time $t=0$).
The final step is to perform the integration in the left-hand side of (\ref{zt}). Via (\ref{Res}) the key ingredient to do so is the formula
\end{subequations}
\begin{eqnarray}
&&\int\limits_{\zeta _{0}}^{\zeta }d\xi ~\frac{\xi ^{N-2}}{\xi -\xi _{0}}=\int\limits_{\zeta _{0}-\xi _{0}}^{\zeta -\xi _{0}}d\xi ~\frac{\left( \xi +\xi _{0}\right) ^{N-2}}{\xi }  \notag \\
&=&\int\limits_{\zeta _{0}-\xi _{0}}^{\zeta -\xi _{0}}d\xi ~\sum_{k=0}^{N-2}\left[ \binom{N-2}{k}~\xi ^{k-1}~\xi _{0}^{N-2-k}\right]  \notag \\
&=&\xi _{0}^{N-2}~\log \left( \frac{\zeta -\xi _{0}}{\zeta _{0}-\xi _{0}}\right) +\sum_{k=1}^{N-2}\left\{ \binom{N-2}{k}~\frac{\xi _{0}^{N-2-k}}{k}~\left[ \left( \zeta -\xi _{0}\right) ^{k}-\left( \zeta _{0}-\xi _{0}\right) ^{k}\right] \right\} ~.
\end{eqnarray}
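For $N=2$ the decomposition (\ref{Res}) can be verified directly (a numerical aside of ours, with arbitrary sample values of $\eta $, $h_{1}$ and $h_{2}$): the polynomial is then quadratic, its $2$ zeros follow from the quadratic formula, and the residues reduce to $\phi _{1}=1/\left( \xi _{1}-\xi _{2}\right) $ and $\phi _{2}=1/\left( \xi _{2}-\xi _{1}\right) $:

```python
import cmath

# Numerical aside (N = 2, arbitrary sample parameters): check the
# partial-fraction decomposition of 1 / P(xi), where
#   P(xi) = h2*xi^2 - eta*xi + h1
# is the N = 2 instance of -eta*xi^(N-1) + sum_m h_m xi^(2(m-1)).
eta, h1, h2 = 0.7, 0.3 + 0.1j, 1.2 - 0.4j

# Zeros of P from the quadratic formula.
disc = cmath.sqrt(eta**2 - 4.0 * h1 * h2)
xi1 = (eta + disc) / (2.0 * h2)
xi2 = (eta - disc) / (2.0 * h2)

# Residues phi_j = 1 / prod_{k != j} (xi_j - xi_k).
phi1 = 1.0 / (xi1 - xi2)
phi2 = 1.0 / (xi2 - xi1)

# Compare 1/P with the decomposition at an arbitrary test point.
xi = 0.9 + 0.25j
lhs = 1.0 / (h2 * xi**2 - eta * xi + h1)
rhs = (phi1 / (xi - xi1) + phi2 / (xi - xi2)) / h2
err = abs(lhs - rhs)
```

The agreement is exact up to floating-point roundoff, confirming the residue formula in this lowest nontrivial case.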
\section{Introduction}
\label{sec:intro}
Measurements of ultrahigh energy cosmic rays are only possible
through the detection of secondary particles produced in extensive air showers (EAS).
The inference of some of the properties of the primary cosmic ray particles,
such as their nuclear mass, relies on the comparison of measured EAS observables
to predictions from simulations~\cite{Kampert:2012mx}.
These simulations are performed by Monte Carlo codes
that make use of hadronic interaction models to describe the nucleus-air and hadron-air
collisions along the shower development~\cite{Engel:2011zzb}.
Although simulations using recent hadronic models
can provide a good overall description of EAS,
it has been observed by cosmic ray experiments that
the hadronic models fail to describe the muon production in EAS.
Measurements by HiRes-MIA~\cite{AbuZayyad:1999xa},
Pierre Auger Observatory~\cite{Aab:2014pza,Aab:2016hkv,Aab:2014dua,Aab:2017cgk},
Telescope Array~\cite{Abbasi:2018fkz},
KASCADE-Grande~\cite{Apel:2017thr},
IceTop/IceCube~\cite{Dembinski:2017zkb}
and SUGAR~\cite{Bellido:2018toz} show that
there is an inconsistency between data and simulations for
observables related to the muonic component of air showers.
In particular,
the number of muons (\ensuremath{\mbox{$N_\mu$}}\xspace) obtained from simulations is observed to be
significantly smaller than the measured one, which is known
as the ``muon deficit problem''.
The majority of muons in EAS are produced by the decay of
charged mesons, which in turn, are produced in meson-air and nucleon-air
interactions. Depending on the primary energy
and detection distance, the relevant meson-air and nucleon-air interaction energies
are between 10 and 1000 \ensuremath{\mbox{Ge\kern-0.1em V}}\xspace~\cite{Meurer:2005dt,IoanaICRC2009}.
Therefore, measurements of particle production
in this energy range are of great value for understanding muon production
in EAS and consequently for improving its modeling. Of particular interest
are the production spectra of (anti-)baryons and $\rho^{0}$\xspace in meson-air and nucleon-air interactions.
It is well known~\cite{Pierog:2006qv,Drescher:2007hc}
that the production of (anti-)baryons and $\rho^{0}$\xspace
mesons in hadronic interactions is important to predict the muon
content of air showers. Therefore the production cross sections
of these particles need to be known accurately for a precise modeling
of air showers.
The \NASixtyOne experiment~\cite{\NASixtyOnePaper} (see~\cref{sec:shine})
has provided a number of particle production and cross section
measurements which are relevant for
the modeling of hadronic interactions in EAS (e.g. Refs.~\cite{Abgrall:2015hmv,Aduszkiewicz:2017sei}).
In this paper, however, the focus will be on the results of the 2009 run with negatively
charged pion beam colliding against a thin carbon target ($\pi^-$+C\xspace data) at 158 and 350 \ensuremath{\mbox{Ge\kern-0.1em V}\kern-0.1em/\kern-0.05em c}\xspace.
Since $\pi$-air is the most abundant hadronic interaction occurring in an EAS,
our $\pi^-$+C\xspace data is of high relevance for the tuning of
hadronic interaction models
dedicated to EAS simulations.
The results are presented in three parts: the spectra of charged hadrons
($\pi^\pm$\xspace, K$^\pm$\xspace, p\xspace and $\bar{\text{p}}$\xspace)
are presented in~\cref{sec:hadrons}, the spectra of $V^0$\xspace mesons
($\Lambda$\xspace, $\bar{\Lambda}$\xspace and K$_\text{S}^{0}$\xspace) in~\cref{sec:vzero}
and the spectra of resonance mesons ($\rho^{0}$\xspace, K$^{*0}$\xspace and $\omega$) in \cref{sec:resonance}.
\section{The \NASixtyOne experiment}
\label{sec:shine}
\begin{figure}
\centering
\begin{overpic}[clip, rviewport=0 0 1 1,width=0.495\textwidth]{figures/shineColor}
\put(15.5,23){}
\end{overpic}
\caption{Schematic layout of the \NASixtyOne experiment~\cite{\NASixtyOnePaper}.}
\label{fig:na61}
\end{figure}
\NASixtyOne (SHINE = SPS Heavy Ion and Neutrino Experiment)
is a fixed target experiment at the CERN SPS designed to study
hadron production in nucleus-nucleus and hadron-nucleus
collisions. Its physics goals comprise a) the strong interaction
program, which investigates the properties of the onset of
deconfinement and searches for the critical point of strongly
interacting matter, b) the neutrino program,
to precisely measure the hadron production important to calculate
the neutrino and antineutrino fluxes in the T2K neutrino experiment~\cite{Abe:2011ks},
and c) the cosmic rays program, focused on the measurements of the
hadron and meson production which are most relevant for the modeling
of extensive air showers. The full description of the \NASixtyOne experiment
and its science program can be found in Ref.~\cite{\NASixtyOnePaper}.
The \NASixtyOne detector measures charged particles produced
by the collision of the beam particles with the target through
a set of five Time Projection Chambers (TPCs). Since two of the TPCs
are placed in the magnetic field produced by superconducting
dipole magnets, the charge and the momenta of the particles
can be measured and the achieved resolution on \ensuremath{p}\xspace is of the order of
$\sigma(p)/p^2 = 10^{-4}$ (\ensuremath{\mbox{Ge\kern-0.1em V}\kern-0.1em/\kern-0.05em c}\xspace)$^{-1}$. Additionally, the
energy loss per unit of length (\ensuremath{\mbox{\text{d}$E$/\text{d}$x$}}\xspace) in the TPCs is used in this
work for particle identification. The experimental layout of the
\NASixtyOne detector is shown in~\cref{fig:na61}.
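To put the quoted figure in perspective (a back-of-the-envelope illustration of ours, not a statement from the collaboration), a resolution $\sigma (p)/p^{2}=10^{-4}~(\ensuremath{\mbox{Ge\kern-0.1em V}\kern-0.1em/\kern-0.05em c}\xspace)^{-1}$ means that the \textit{relative} momentum resolution $\sigma (p)/p$ grows linearly with the track momentum:

```python
# Back-of-the-envelope illustration (not from the paper): a resolution of
# sigma(p)/p^2 = 1e-4 (GeV/c)^-1 implies sigma(p)/p = 1e-4 * p, i.e. the
# relative momentum resolution grows linearly with the track momentum.
K = 1e-4  # (GeV/c)^-1

def relative_resolution(p):
    """sigma(p)/p for a track of momentum p, with p given in GeV/c."""
    return K * p

examples = {p: relative_resolution(p) for p in (2.0, 20.0, 150.0)}
```

i.e. about $0.1\%$ at $10~\ensuremath{\mbox{Ge\kern-0.1em V}\kern-0.1em/\kern-0.05em c}\xspace$ and $1.5\%$ at $150~\ensuremath{\mbox{Ge\kern-0.1em V}\kern-0.1em/\kern-0.05em c}\xspace$.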
A beam detector system composed of scintillation and Cherenkov counters is
placed upstream of the detector to identify and measure the
beam particles. The position of the beam is measured by a
set of three beam position detectors, which are also placed upstream of
the target.
\section{Production of $\pi^\pm$\xspace, K$^\pm$\xspace, p\xspace and $\bar{\text{p}}$\xspace}
\label{sec:hadrons}
\begin{figure}
\centering
\begin{overpic}[clip, rviewport=0 0 1 1,width=0.4\textwidth]{figures/dist_158_v0_c0_x13_y3}
\end{overpic}
\vspace{0.3cm}
\begin{overpic}[clip, rviewport=0 0 1 1,width=0.4\textwidth]{figures/dist_158_v0_c1_x13_y3}
\put(37,58){}
\end{overpic}
\caption{Example of the \ensuremath{\mbox{\text{d}$E$/\text{d}$x$}}\xspace distributions for one phase space
bin ($\langle p \rangle = 2.19 \ensuremath{\mbox{Ge\kern-0.1em V}\kern-0.1em/\kern-0.05em c}\xspace$ and $\langle p_\text{T}\rangle = 0.35 \ensuremath{\mbox{Ge\kern-0.1em V}\kern-0.1em/\kern-0.05em c}\xspace$)
of the 158 \ensuremath{\mbox{Ge\kern-0.1em V}\kern-0.1em/\kern-0.05em c}\xspace data set.
The black markers show the measured distributions and the colored
distributions show the result of the \ensuremath{\mbox{\text{d}$E$/\text{d}$x$}}\xspace fit. Negatively charged
particles are shown on the top and positively charged ones on the bottom.}
\label{fig:hadron:dedx}
\end{figure}
Charged particles are identified in \NASixtyOne
by the track-by-track measurement of the deposited energy,
\ensuremath{\mbox{\text{d}$E$/\text{d}$x$}}\xspace, performed by the TPCs.
After splitting the data into bins of total and transverse momentum
(\ensuremath{p}\xspace and \ensuremath{p_\text{T}}\xspace), a \ensuremath{\mbox{\text{d}$E$/\text{d}$x$}}\xspace model is fitted to the measured \ensuremath{\mbox{\text{d}$E$/\text{d}$x$}}\xspace
distributions by accounting for contributions of 5 particle types
($e$, $\pi$, $K$, $p$ and deuterons). From the results of the fit,
the particle yields of $\pi^\pm$\xspace, K$^\pm$\xspace and p($\bar{\text{p}}$)\xspace are determined.
Examples of measured \ensuremath{\mbox{\text{d}$E$/\text{d}$x$}}\xspace distributions and of the results of the
\ensuremath{\mbox{\text{d}$E$/\text{d}$x$}}\xspace fit are shown in~\cref{fig:hadron:dedx}. After
the particle identification through the \ensuremath{\mbox{\text{d}$E$/\text{d}$x$}}\xspace fit, the spectra are
corrected for detector effects (e.g.\ acceptance and efficiency) using a set
of Monte Carlo simulations. A more detailed description
of the analysis procedure can be found in Ref.~\cite{Prado:2017hub}.
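The binned template fit sketched above can be illustrated with a toy example. The following is a deliberately simplified sketch (two species instead of five, a linear least-squares fit, and invented template shapes), not the actual \NASixtyOne fitting code:

```python
# Toy binned template fit: the measured dE/dx histogram in one (p, pT) bin is
# modeled as a sum of normalized per-species templates, and the species yields
# are obtained by linear least squares (2 species -> 2x2 normal equations).

def fit_yields(data, templates):
    """Solve min_y sum_b (data[b] - sum_i y[i]*templates[i][b])**2."""
    t0, t1 = templates
    a00 = sum(x * x for x in t0)
    a01 = sum(x * y for x, y in zip(t0, t1))
    a11 = sum(x * x for x in t1)
    b0 = sum(d * x for d, x in zip(data, t0))
    b1 = sum(d * x for d, x in zip(data, t1))
    det = a00 * a11 - a01 * a01
    y0 = (a11 * b0 - a01 * b1) / det
    y1 = (a00 * b1 - a01 * b0) / det
    return y0, y1

# Invented, normalized "pion" and "proton" dE/dx template shapes over 4 bins:
pion = [0.1, 0.4, 0.4, 0.1]
proton = [0.4, 0.4, 0.1, 0.1]
# Pseudo-data built from 1000 pions and 500 protons:
data = [1000 * a + 500 * b for a, b in zip(pion, proton)]
y_pi, y_p = fit_yields(data, [pion, proton])
print(round(y_pi), round(y_p))  # recovers the injected yields: 1000 500
```

The real analysis fits five species and propagates statistical uncertainties; the sketch only shows how yields follow from template shapes.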
The single-differential spectra
as a function of \ensuremath{p}\xspace (integrated over \ensuremath{p_\text{T}}\xspace) for
$\pi^\pm$\xspace, K$^\pm$\xspace and p($\bar{\text{p}}$)\xspace are shown
in~\cref{fig:hadron:int158,fig:hadron:int350},
where the measurements are compared to the predictions of
{\scshape Epos\,1.99}\xspace~\cite{Pierog:2006qv},
{\scshape Sibyll\,2.1}\xspace~\cite{Ahn:2009wx}, {\scshape Sibyll\,2.3}\xspace~\cite{Engel:2017icrc},
{\scshape QGSJet\,II-04}\xspace~\cite{Ostapchenko:2010vb} and {\scshape Epos\,LHC}\xspace~\cite{Pierog:2013ria}.
The double-differential spectra as a function of \ensuremath{p}\xspace and \ensuremath{p_\text{T}}\xspace can
be found in Ref.~\cite{Herve:2015lra} for the $\pi^\pm$\xspace spectra and in Ref.~\cite{Prado:2017hub} for the K$^\pm$\xspace and p($\bar{\text{p}}$)\xspace spectra.
\begin{figure*}[!h]
\centering
\begin{overpic}[clip, rviewport=0 0 1 1,width=0.33\textwidth]{figures/mass_Data350_t0_ph1_h0_x5_y0}
\put(17,82){\huge$\Lambda$\xspace}
\end{overpic}
\begin{overpic}[clip, rviewport=0 0 1 1,width=0.33\textwidth]{figures/mass_Data350_t0_ph1_h1_x5_y0}
\put(17,82){\huge$\bar{\Lambda}$\xspace}
\end{overpic}
\begin{overpic}[clip, rviewport=0 0 1 1,width=0.33\textwidth]{figures/mass_Data350_t0_ph1_h2_x5_y1}
\put(17,82){\huge K$_\text{S}^{0}$\xspace}
\end{overpic}
\caption{Example of the $m_\text{inv}$\xspace distributions for one phase space
bin ($\langle p \rangle$ and $\langle p_\text{T}\rangle$ are indicated on
the top of each plot) of the 158 \ensuremath{\mbox{Ge\kern-0.1em V}\kern-0.1em/\kern-0.05em c}\xspace data set.
The black markers show the measured distributions and the colored
lines show the result of the signal extraction fit, where the signal
is shown in blue and the background in red.}
\label{fig:vzero:mass}
\end{figure*}
\section{Production of $\Lambda$\xspace, $\bar{\Lambda}$\xspace and K$_\text{S}^{0}$\xspace}
\label{sec:vzero}
Since $\Lambda(\bar{\Lambda})$\xspace and K$_\text{S}^{0}$\xspace are neutral weakly decaying particles,
they can be measured by \NASixtyOne through the detection of the
charged particles which are produced in their decays.
The invariant mass ($m_\text{inv}$\xspace) spectra for a given decay channel
can then be used to extract their signal.
The decay channels used here are $\Lambda\rightarrow p + \pi^-$,
$\bar{\Lambda}\rightarrow \bar{p} + \pi^+$,
$K_\text{S}^0\rightarrow \pi^+ + \pi^-$.
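As an illustration of how such a $V^0$ candidate mass is formed from its decay daughters (the helper below and its numbers are ours, not the analysis code), the invariant mass follows from the daughter momenta and the nominal proton and pion masses:

```python
# Invariant mass of a V0 candidate from its two decay daughters,
# e.g. Lambda -> p + pi-; all masses and momenta in GeV.
import math

M_P, M_PI = 0.938272, 0.139570

def inv_mass(p1, m1, p2, m2):
    e1 = math.sqrt(sum(c * c for c in p1) + m1 * m1)
    e2 = math.sqrt(sum(c * c for c in p2) + m2 * m2)
    px, py, pz = (a + b for a, b in zip(p1, p2))
    return math.sqrt((e1 + e2) ** 2 - (px * px + py * py + pz * pz))

# Back-to-back proton and pion with the characteristic ~0.101 GeV momentum
# of the Lambda -> p pi- decay, viewed in the parent rest frame:
m = inv_mass((0.101, 0.0, 0.0), M_P, (-0.101, 0.0, 0.0), M_PI)
print(round(m, 3))  # close to the Lambda mass of ~1.116 GeV
```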
To extract the signal, the $m_\text{inv}$\xspace distributions were fitted
with a signal contribution, modeled using Monte Carlo templates,
and a background contribution, modeled by a second-degree polynomial.
Examples of the fitted $m_\text{inv}$\xspace distributions are shown in~\cref{fig:vzero:mass}.
This analysis was performed in 2-dimensional
phase space bins of \ensuremath{p}\xspace and \ensuremath{p_\text{T}}\xspace. For each phase space bin,
the detector effects were corrected by using Monte Carlo simulations.
The full double-differential spectra
as a function of \ensuremath{p}\xspace and \ensuremath{p_\text{T}}\xspace for $\Lambda(\bar{\Lambda})$\xspace and K$_\text{S}^{0}$\xspace
at 158 and 350 \ensuremath{\mbox{Ge\kern-0.1em V}\kern-0.1em/\kern-0.05em c}\xspace can be found in Ref.~\cite{PradoHEP2018}.
In~\cref{fig:vzero:lamb,fig:vzero:kzeros} we show the measured
single-differential spectra as a function of \ensuremath{p}\xspace (integrated over \ensuremath{p_\text{T}}\xspace)
together with predictions of the hadronic models.
\section{Production of $\rho^{0}$\xspace, K$^{*0}$\xspace and $\omega$}
\label{sec:resonance}
\begin{figure}[!h]
\centering
\begin{overpic}[clip, rviewport=0 0 1 1,width=0.495\textwidth]{figures/Fits_158-03}
\put(17,82){}
\end{overpic}
\caption{Example of the $m_\text{inv}(\pi^+\pi^-)$\xspace distribution for one
\ensuremath{x_\text{F}}\xspace bin ($0.3 < \ensuremath{x_\text{F}}\xspace < 0.4$) of the 158 \ensuremath{\mbox{Ge\kern-0.1em V}\kern-0.1em/\kern-0.05em c}\xspace data set.
The black markers show the measured distributions and the colored
distributions show the results of the template fit.}
\label{fig:resonance:mass}
\end{figure}
By using the \NASixtyOne apparatus, the yields of $\rho^{0}$\xspace, K$^{*0}$\xspace and $\omega$
can be measured through the $\pi^+\pi^-$\xspace invariant mass ($m_\text{inv}(\pi^+\pi^-)$\xspace) spectra.
The signal extraction is performed by fitting Monte Carlo templates
to the measured $m_\text{inv}(\pi^+\pi^-)$\xspace distribution.
The Monte Carlo events were generated using {\scshape Epos\,1.99}\xspace as the hadronic
interaction model and passed through the full \NASixtyOne
detector simulation and reconstruction chain.
The combinatorial background was estimated
by two methods: the charge-mixing method, in which the
$\pi^+\pi^+$ and $\pi^-\pi^-$ pairs are
treated as the background, and the Monte Carlo method, in which the
background mass distribution is obtained directly from
simulations. One example of the $m_\text{inv}(\pi^+\pi^-)$\xspace distributions with
the results of the template fit is shown in~\cref{fig:resonance:mass}.
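The charge-mixing idea can be sketched as follows; the toy event below is invented, and the real analysis histograms the like-sign pair masses rather than merely counting pairs:

```python
# Charge-mixing background sketch: like-sign pion pairs carry no rho0 signal,
# so their pair spectrum models the combinatorial background lying under the
# unlike-sign (pi+ pi-) invariant mass distribution.
from itertools import combinations

def pair_counts(charges):
    unlike = like = 0
    for a, b in combinations(charges, 2):
        if a * b < 0:
            unlike += 1
        else:
            like += 1
    return unlike, like

# A toy event with two pi+ and two pi-:
unlike, like = pair_counts([+1, +1, -1, -1])
print(unlike, like)  # 4 unlike-sign pairs, 2 like-sign pairs
```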
After the signal extraction, the particle yields
were corrected for detector effects and the production
spectra were derived. The full description of the analysis procedure
and the results can be found in Ref.~\cite{Aduszkiewicz:2017anm}.
We show in~\cref{fig:resonance:spec} the obtained $\rho^{0}$\xspace, K$^{*0}$\xspace and $\omega$ spectra
together with predictions from simulations with the hadronic
models.
The $\rho^{0}$\xspace spectra are shown for both beam energies,
158 and 350 \ensuremath{\mbox{Ge\kern-0.1em V}\kern-0.1em/\kern-0.05em c}\xspace, and the $\omega$ and K$^{*0}$\xspace spectra are limited to the 158 \ensuremath{\mbox{Ge\kern-0.1em V}\kern-0.1em/\kern-0.05em c}\xspace
data set because of the large uncertainties obtained at 350 \ensuremath{\mbox{Ge\kern-0.1em V}\kern-0.1em/\kern-0.05em c}\xspace.
\section{Summary and conclusions}
\label{sec:summary}
The \NASixtyOne experiment, within its very rich program,
has provided a large number of measurements which
have been used for testing and tuning of hadronic interaction
models used by the cosmic ray community.
In this paper, we have summarized the results of the
special cosmic ray runs for $\pi^-$+C\xspace interactions.
First, we have shown the identified spectra of charged hadrons
obtained by using the \ensuremath{\mbox{\text{d}$E$/\text{d}$x$}}\xspace measurements. Of particular interest here
are the production spectra of p($\bar{\text{p}}$)\xspace, which are relevant
for studying (anti)baryon production in hadron-air interactions
and its implications for muon production in EAS.
From the $\bar{\text{p}}$\xspace spectra shown
in~\cref{fig:hadron:int158,fig:hadron:int350},
one can see that the (anti)baryon production is not underestimated
in general by the models. In particular, the {\scshape Epos}\xspace model describes
the $\bar{\text{p}}$\xspace production very well. In conclusion, the
underproduction of (anti)baryons in $\pi$-air interactions
by the hadronic models
is unlikely to be the most relevant source of the lack of muons
in simulations.
Secondly, we have shown the results of the $V^0$\xspace analysis, aimed at
the $\Lambda(\bar{\Lambda})$\xspace and K$_\text{S}^{0}$\xspace spectra. Although these measurements are
surely relevant for model testing and tuning, our main motivation here
is to reduce the systematic uncertainties on the $\pi^\pm$\xspace and p($\bar{\text{p}}$)\xspace spectra
due to the feed-down contributions from weak decays.
Since a significant fraction of $\pi^\pm$\xspace and p($\bar{\text{p}}$)\xspace detected
are produced by the decay of $\Lambda$\xspace, $\bar{\Lambda}$\xspace and K$_\text{S}^{0}$\xspace, this effect has to be
corrected. In the results shown in~\cref{sec:hadrons} (and in Ref.~\cite{Prado:2017hub})
this correction is done by using Monte Carlo simulations and the model
dependence of this procedure is added to the systematic uncertainties.
By measuring the spectra of $\Lambda$\xspace, $\bar{\Lambda}$\xspace and K$_\text{S}^{0}$\xspace, we are able
to avoid this model dependence and consequently reduce the systematic uncertainties.
Updated $\pi^\pm$\xspace and p($\bar{\text{p}}$)\xspace spectra with improved
systematic uncertainties will be presented
in the future in another publication.
Finally, we have shown the final results of
the meson resonance analysis which have
already been published in Ref.~\cite{Aduszkiewicz:2017anm}.
From the $\rho^{0}$\xspace production spectra shown in~\cref{fig:resonance:spec},
one can see that none of the hadronic models
can describe the measurements well. The small excess of
$\rho^{0}$\xspace observed relative to the predictions from simulations
can be relevant for explaining the muon deficit in simulations.
\begin{figure*}
\centering
\begin{overpic}[clip, rviewport=0.01 0.125 0.97 0.92,width=0.495\textwidth]{figures/new_int158_0_1_std}
\put(58,50){\Large\NASixtyOne}
\put(56,43){\Large PRELIMINARY}
\end{overpic}
\begin{overpic}[clip, rviewport=0.01 0.125 0.97 0.92,width=0.495\textwidth]{figures/new_int158_1_1_std}
\put(15.5,23){}
\end{overpic}
\begin{overpic}[clip, rviewport=0.01 0.125 0.97 0.91,width=0.495\textwidth]{figures/new_int158_0_2_std}
\put(15.5,23){}
\end{overpic}
\begin{overpic}[clip, rviewport=0.01 0.125 0.97 0.91,width=0.495\textwidth]{figures/new_int158_1_2_std}
\put(15.5,23){}
\end{overpic}
\begin{overpic}[clip, rviewport=0.01 0 0.97 0.91,width=0.495\textwidth]{figures/new_int158_0_3_std}
\put(60,30){
\includegraphics[clip, rviewport=0.65 0.5 1 1, width=0.16\textwidth]{figures/new_leg}
}
\end{overpic}
\begin{overpic}[clip, rviewport=0.01 0 0.97 0.91,width=0.495\textwidth]{figures/new_int158_1_3_std}
\put(15.5,23){}
\end{overpic}
\caption{Spectra of $\pi^\pm$\xspace, K$^\pm$\xspace and p($\bar{\text{p}}$)\xspace as a function of \ensuremath{p}\xspace (integrated over \ensuremath{p_\text{T}}\xspace),
for the 158 \ensuremath{\mbox{Ge\kern-0.1em V}\kern-0.1em/\kern-0.05em c}\xspace data set.
The statistical uncertainties are shown as black bars and the systematic ones as gray bands.}
\label{fig:hadron:int158}
\end{figure*}
\begin{figure*}
\centering
\begin{overpic}[clip, rviewport=0.01 0.125 0.97 0.92,width=0.495\textwidth]{figures/new_int350_0_1_std}
\put(58,50){\Large\NASixtyOne}
\put(56,43){\Large PRELIMINARY}
\end{overpic}
\begin{overpic}[clip, rviewport=0.01 0.125 0.97 0.92,width=0.495\textwidth]{figures/new_int350_1_1_std}
\put(15.5,23){}
\end{overpic}
\begin{overpic}[clip, rviewport=0.01 0.125 0.97 0.91,width=0.495\textwidth]{figures/new_int350_0_2_std}
\put(15.5,23){}
\end{overpic}
\begin{overpic}[clip, rviewport=0.01 0.125 0.97 0.91,width=0.495\textwidth]{figures/new_int350_1_2_std}
\put(15.5,23){}
\end{overpic}
\begin{overpic}[clip, rviewport=0.01 0 0.97 0.91,width=0.495\textwidth]{figures/new_int350_0_3_std}
\put(60,30){
\includegraphics[clip, rviewport=0.65 0.5 1 1, width=0.16\textwidth]{figures/new_leg}
}
\end{overpic}
\begin{overpic}[clip, rviewport=0.01 0 0.97 0.91,width=0.495\textwidth]{figures/new_int350_1_3_std}
\put(15.5,23){}
\end{overpic}
\caption{Spectra of $\pi^\pm$\xspace, K$^\pm$\xspace and p($\bar{\text{p}}$)\xspace as a function of \ensuremath{p}\xspace (integrated over \ensuremath{p_\text{T}}\xspace),
for the 350 \ensuremath{\mbox{Ge\kern-0.1em V}\kern-0.1em/\kern-0.05em c}\xspace data set.
The statistical uncertainties are shown as black bars and the systematic ones as gray bands.}
\label{fig:hadron:int350}
\end{figure*}
\begin{figure*}
\centering
\begin{overpic}[clip, rviewport=0 0 1 1,width=0.495\textwidth]{figures/new_int_158_h2}
\put(70,75){\Large\NASixtyOne PRELIMINARY}
\put(28,63){$\pi^-$+C$\rightarrow$K$_\text{S}^{0}$\xspace+X}
\put(29,58){at 158 \ensuremath{\mbox{Ge\kern-0.1em V}\kern-0.1em/\kern-0.05em c}\xspace}
\end{overpic}
\begin{overpic}[clip, rviewport=0 0 1 1,width=0.495\textwidth]{figures/new_int_350_h2}
\put(28,63){$\pi^-$+C$\rightarrow$K$_\text{S}^{0}$\xspace+X}
\put(29,58){at 350 \ensuremath{\mbox{Ge\kern-0.1em V}\kern-0.1em/\kern-0.05em c}\xspace}
\end{overpic}
\caption{Spectra of K$_\text{S}^{0}$\xspace as a function of \ensuremath{p}\xspace (integrated over \ensuremath{p_\text{T}}\xspace),
for the 158 and 350 \ensuremath{\mbox{Ge\kern-0.1em V}\kern-0.1em/\kern-0.05em c}\xspace data sets.
The statistical uncertainties are shown as black bars and the systematic ones as gray bands.}
\label{fig:vzero:kzeros}
\end{figure*}
\begin{figure*}
\centering
\begin{overpic}[clip, rviewport=0 0 1 1,width=0.47\textwidth]{figures/new_int_158_h0}
\put(70,75){\Large\NASixtyOne PRELIMINARY}
\put(36,63){$\pi^-$+C$\rightarrow$$\Lambda$\xspace+X}
\put(37,58){at 158 \ensuremath{\mbox{Ge\kern-0.1em V}\kern-0.1em/\kern-0.05em c}\xspace}
\end{overpic}
\begin{overpic}[clip, rviewport=0 0 1 1,width=0.47\textwidth]{figures/new_int_350_h0}
\put(36,63){$\pi^-$+C$\rightarrow$$\Lambda$\xspace+X}
\put(37,58){at 350 \ensuremath{\mbox{Ge\kern-0.1em V}\kern-0.1em/\kern-0.05em c}\xspace}
\end{overpic}
\begin{overpic}[clip, rviewport=0 0 1 1,width=0.47\textwidth]{figures/new_int_158_h1}
\put(36,63){$\pi^-$+C$\rightarrow$$\bar{\Lambda}$\xspace+X}
\put(37,58){at 158 \ensuremath{\mbox{Ge\kern-0.1em V}\kern-0.1em/\kern-0.05em c}\xspace}
\end{overpic}
\begin{overpic}[clip, rviewport=0 0 1 1,width=0.47\textwidth]{figures/new_int_350_h1}
\put(36,63){$\pi^-$+C$\rightarrow$$\bar{\Lambda}$\xspace+X}
\put(37,58){at 350 \ensuremath{\mbox{Ge\kern-0.1em V}\kern-0.1em/\kern-0.05em c}\xspace}
\end{overpic}
\caption{Spectra of $\Lambda(\bar{\Lambda})$\xspace as a function of \ensuremath{p}\xspace (integrated over \ensuremath{p_\text{T}}\xspace),
for the 158 and 350 \ensuremath{\mbox{Ge\kern-0.1em V}\kern-0.1em/\kern-0.05em c}\xspace data sets.
The statistical uncertainties are shown as black bars and the systematic ones as gray bands.}
\label{fig:vzero:lamb}
\end{figure*}
\begin{figure*}
\centering
\begin{overpic}[clip, rviewport=0 0 1 1,width=0.47\textwidth]{figures/Rho0_Result}
\put(58,50){}
\end{overpic}
\begin{overpic}[clip, rviewport=0 0 1 1,width=0.47\textwidth]{figures/Rho0_Result_350}
\put(15.5,23){}
\end{overpic}
\begin{overpic}[clip, rviewport=0 0 1 1,width=0.47\textwidth]{figures/K0Star_Result}
\put(60,30){}
\end{overpic}
\begin{overpic}[clip, rviewport=0 0 1 1,width=0.47\textwidth]{figures/Omega_Result}
\put(15.5,23){}
\end{overpic}
\caption{Spectra of $\rho^{0}$\xspace, K$^{*0}$\xspace and $\omega$ as a function of \ensuremath{x_\text{F}}\xspace.
The statistical uncertainties are shown as black bars and the systematic ones as gray bands.}
\label{fig:resonance:spec}
\end{figure*}
\section{Acknowledgments}
We would like to thank the CERN EP, BE and EN Departments for the strong support of \NASixtyOne.
This work was supported by the Hungarian Scientific Research Fund (Grants NKFIH 123842–123959), the
J\'anos Bolyai Research Scholarship of the Hungarian Academy of Sciences, the Polish Ministry of Science
and Higher Education (grants 667\slash N-CERN\slash 2010\slash 0, NN 202 48 4339 and NN 202 23 1837), the Polish
National Center for Science (grants 2011\slash 03\slash N\slash ST2\slash 03691, 2013\slash11\slash N\slash ST2\slash03879, 2014\slash 13\slash N\slash ST2\slash02565,
2014\slash14\slash E\slash ST2\slash00018, 2014\slash15\slash B\slash ST2\slash02537 and 2015\slash18\slash M\slash ST2\slash00125, 2015\slash19\slash N\slash ST2 \slash01689,
2016\slash23\slash B\slash ST2\slash00692), the Russian Science Foundation, grant 16-12-10176, the Russian Academy of
Science and the Russian Foundation for Basic Research (grants 08-02-00018, 09-02-00664 and
12-02-91503-CERN), the Ministry of Science and Education of the Russian Federation, grant No.
3.3380.2017\slash4.6, the National Research Nuclear University MEPhI in the framework of the Russian
Academic Excellence Project (contract No. 02.a03.21.0005, 27.08.2013), the Ministry of Education,
Culture, Sports, Science and Technology, Japan, Grant-in-Aid for Scientific Research (grants 18071005,
19034011, 19740162, 20740160 and 20039012), the German Research Foundation (grant GA 1480\slash2-2),
the Bulgarian Nuclear Regulatory Agency and the Joint Institute for Nuclear Research, Dubna (bilateral
contract No. 4418-1-15\slash17), Bulgarian National Science Fund (grant DN08\slash11), Ministry of Education and
Science of the Republic of Serbia (grant OI171002), Swiss Nationalfonds Foundation (grant
200020117913\slash1), ETH Research Grant TH-01 07-3 and the U.S. Department of Energy.
Search-oriented conversational systems aim to return an appropriate answer in response to user requests. But some requests might be ambiguous, and such systems can be enhanced by asking a good clarifying question. The main research challenges on clarifying questions for a conversational information retrieval system (ClariQ) are the following \cite{DBLP:journals/corr/abs-2009-11352}:
\begin{itemize}
\item When to ask clarifying questions during dialogues?
\item How to generate or select the clarifying questions?
\end{itemize}
In this paper, we propose a clarifying question selection system that consists of response understanding, candidate question recalling and clarifying question ranking. The whole system architecture is shown in Figure \ref{fig:system}, and our system won first place in the ConvAI3 challenge. First, we fine-tune an ELECTRA classification model to determine whether the user's responses need to be clarified. A candidate clarifying question recalling model then follows. We mainly focus on the clarifying question ranking task and treat it as a point-wise sequence classification task. We convert the related sequence pairs of the official dataset into positive samples, and construct the negative samples with the BM25 model and a random sampling approach, as introduced in Section \ref{sec:clariq_data_preparation}. In Section \ref{sec:clariq_models} we describe the details of two models: a) a vanilla ELECTRA transformer encoder, and b) a multi-task ELECTRA model. During inference, we sum up the output probabilities of these two models and take the highest-probability question as the appropriate clarifying question. Experiments show that the ensemble model performs well on the document relevance task and achieves the best recall@[20,30] metrics on the question relevance task. In the multi-turn conversation evaluation of stage 2, our system achieves the top score on all document relevance metrics.
\begin{figure}
\centering
\caption{System Architectures}
\includegraphics[width=\textwidth]{system.pdf}
\label{fig:system}
\end{figure}
\section{User Response Understanding}
\label{sec:user_response_understanding}
We solve the user response understanding task by constructing a corresponding dataset and fine-tune a sequence classification model based on ELECTRA discriminator \cite{DBLP:journals/corr/abs-2003-10555}.
\subsection{Data preparation}
\label{sec:understanding_data_preparation}
There are several automatic evaluation metrics given by the official dataset, including MRR, P@[1,3,5,10,20] and nDCG@[1,3,5,20]. These metrics evaluate the quality of the selected clarifying question together with its corresponding answer. For the training data of the user response understanding model, we use the P@5 metric to build the classification labels $l_{need\_clarify}$ and $l_{no\_need\_clarify}$. The P@N decision support metric calculates the fraction of the top $N$ recommendations that are good. We therefore label an ambiguous query with $l_{need\_clarify}$ if its P@5 score is 0, which means the query does not clearly describe the user's need. Conversely, the label $l_{no\_need\_clarify}$ is assigned to a clear query whose P@5 score is greater than 0. Data statistics and some data samples are shown in Table \ref{tab:understanding_data}.
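The labeling rule above amounts to a one-line threshold on P@5; as a minimal sketch, mirroring the two labels defined above:

```python
# Labeling rule for the response understanding model: a request whose
# P@5 score is zero is marked as needing clarification.
def label(p_at_5):
    return "need_clarify" if p_at_5 == 0 else "no_need_clarify"

assert label(0.0) == "need_clarify"
assert label(0.4) == "no_need_clarify"
```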
\begin{table}[]
\centering
\caption{Understanding Data Statistic}
\begin{tabular}{l|l|l|l|l}
\hline
\textbf{initial\_request} & \multicolumn{1}{c|}{\textbf{question}} & \multicolumn{1}{c|}{\textbf{answer}} & \multicolumn{1}{c|}{\textbf{label}} & \multicolumn{1}{c}{\textbf{\# of label}} \\ \hline
\begin{tabular}[c]{@{}l@{}}What is Fickle\\ Creek Farm\end{tabular} & \begin{tabular}[c]{@{}l@{}}are you looking for\\ information about staying\\ at fickle creek farm\end{tabular} & no & $l_{need\_clarify}$ & 4,895 \\ \hline
\begin{tabular}[c]{@{}l@{}}Tell me about\\ Obama family tree.\end{tabular} & \begin{tabular}[c]{@{}l@{}}would you like to know\\ about obamas ancestors\end{tabular} & \begin{tabular}[c]{@{}l@{}}yes particualarly\\ information about his\\ parents and grandparents\end{tabular} & $l_{no\_need\_clarify}$ & 3,671 \\ \hline
\end{tabular}
\label{tab:understanding_data}
\end{table}
\subsection{Model}
\label{sec:understanding_model}
We utilize ELECTRA, a transformer-based attentive neural architecture. The pretrained large ELECTRA models have shown strong performance on sequence classification \cite{DBLP:journals/corr/abs-2003-10555}. For fine-tuning the pretrained ELECTRA model on the previous dataset, we use the HuggingFace Transformers library \cite{DBLP:journals/corr/abs-1910-03771}. The model is optimized for 10 epochs with a batch size of 24 and a learning rate of 5e-6. The large ELECTRA architecture consists of 24 self-attention layers (1024-dimensional states and 16 attention heads) followed by a 2-layer fully connected network. The model architecture is the same as in Figure \ref{fig:electra}. In the prediction stage, the understanding model takes the last clarifying question and the user answer as input and determines whether they need to be clarified (label $l_{need\_clarify}$) or not (label $l_{no\_need\_clarify}$).
\section{Candidate Question Recalling}
\label{sec:bm25_recaller}
The official BM25 baseline information retrieval model \cite{Manning:2135372} ranks all the questions from the question bank based on the query terms appearing in each question. To enhance the BM25 model, we build the retrieval index of each question not only with its own terms but also with the terms from its related initial requests, answers and topic descriptions in the official single-turn training dataset.
Since we found that shorter questions tend to be more general, all the questions which do not appear in the training set are sorted in ascending order by their number of terms, denoted as $Q_{na}$. For each user initial request, the candidate clarifying questions are recalled not only by the enhanced BM25 model but also from $Q_{na}$. In our system setting, we recall in total 200 candidate questions for each utterance, including the top 100 most related questions from the BM25 model and the top 100 shortest questions from $Q_{na}$.
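A minimal sketch of this recalling step might look as follows; the term-overlap scorer below is only a stand-in for the real BM25 ranking, and all questions are invented:

```python
# Candidate recall sketch: merge the top-k questions ranked by a BM25-style
# score with the k shortest unseen questions from the question bank.
def overlap_score(query, question):
    # Stand-in for BM25: number of shared lowercase terms.
    return len(set(query.lower().split()) & set(question.lower().split()))

def recall_candidates(request, bank, unseen, k=2):
    by_bm25 = sorted(bank, key=lambda s: -overlap_score(request, s))[:k]
    shortest = sorted(unseen, key=lambda s: len(s.split()))[:k]
    seen, out = set(), []
    for q in by_bm25 + shortest:  # preserve order, drop duplicates
        if q not in seen:
            seen.add(q)
            out.append(q)
    return out

bank = ["do you want obama family history",
        "are you looking for a recipe",
        "which obama do you mean"]
unseen = ["what do you mean", "would you like to know more about his parents"]
print(recall_candidates("tell me about obama family tree", bank, unseen))
```

The real system uses $k=100$ for both lists, yielding 200 candidates per utterance.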
\section{Clarifying Question Ranking}
\label{sec:clariq_ranking}
\subsection{Data preparation}
\label{sec:clariq_data_preparation}
We convert this task into a sentence-pair binary classification task. The positive samples are pairs of the asked clarifying question and the initial request from the official training set. The negative samples are constructed by negative sampling, for which we try two methods. One is to randomly sample from all the irrelevant questions; the other is to use the BM25 model mentioned in Section \ref{sec:bm25_recaller}. The difference from the previous recalling model is that, for every initial request, we recall the top 200 related questions with the BM25 model and randomly sample 300 questions from the question bank. The final size of our training dataset is 95,762 samples, consisting of 2,600 positives and 93,162 negatives. For the development dataset in the training stage, we use the same approach to build 25,603 samples, including 681 positives and 24,922 negatives.
We also attach document relevance scores to the positive samples by looking up their MRR100 and nDCG@3 scores in the official evaluation file; the scores are zero for the negative samples. Table \ref{tab:construced_data} shows some samples from our training dataset and Table \ref{tab:data_statistic} shows the data statistics.
\begin{table}[]
\caption{constructed training data}
\begin{center}
\begin{tabular}{l|l|l|l|l}
\hline
\textbf{initial\_request} & \textbf{question} & \textbf{label} & \textbf{MRR100} & \textbf{nDCG@3} \\ \hline
Tell me about Obama family tree. & \makecell[l]{do you want to know more about obamas \\ parents} & 1 & 0.5 & 0.3333 \\ \hline
Tell me about Obama family tree. & \makecell[l]{are you referring to the time magazine \\ essay} & 1 & 0.6667 & 0.4115 \\ \hline
Tell me about Obama family tree. & \makecell[l]{what would you like to know about his \\ family (\textit{recall by bm25})} & 0 & 0 & 0 \\ \hline
Tell me about Obama family tree. & \makecell[l]{do you want a one pot recipe \\ (\textit{randomly sample})} & 0 & 0 & 0 \\ \hline
\end{tabular}
\end{center}
\label{tab:construced_data}
\end{table}
\begin{table}[]
\caption{data statistic}
\begin{center}
\begin{tabular}{l|l}
\hline
\textbf{Feature} & \textbf{Value \#} \\ \hline
\# of train / dev / test topics & 187 / 50 / 62 \\ \hline
\# of total question & 3,929 \\ \hline
\# of training data / positive / negative & 95,762 / 2,600 / 93,162 \\ \hline
\# of dev data / positive / negative & 25,603 / 681 / 24,922 \\ \hline
\end{tabular}
\end{center}
\label{tab:data_statistic}
\end{table}
\subsection{Models}
\label{sec:clariq_models}
In this sentence ranking task, we fine-tune two sentence classification models: one is the vanilla ELECTRA-large discriminator, the other is a multi-task model. We introduce more details in Sections \ref{sec:electra} and \ref{sec:multi_task_model}. The fine-tuning and prediction details are given in Sections \ref{sec:finetuning_details} and \ref{sec:inference_details}.
\subsubsection{ELECTRA}
\label{sec:electra}
Referring to BERT's sentence-pair data construction method \cite{DBLP:conf/naacl/DevlinCLT19}, we add a "[SEP]" token at the end of the initial\_request value and of the question value. After packing the sentence pair together, we add a "[CLS]" token before the first word. Finally we take this new single sequence as the input of the model.
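The resulting input sequence can be sketched with plain strings (a real tokenizer, such as the ELECTRA one, inserts these special tokens itself):

```python
# Sequence-pair packing: [CLS] initial_request [SEP] question [SEP]
def pack(initial_request, question):
    return "[CLS] " + initial_request + " [SEP] " + question + " [SEP]"

s = pack("Tell me about Obama family tree.",
         "do you want to know more about obamas parents")
print(s)
```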
Our sentence pair classification model is a multilayer Transformer encoder based on ELECTRA \cite{DBLP:journals/corr/abs-2003-10555}. We use a large-size model configuration; more details are given in Section \ref{sec:finetuning_details}. Figure \ref{fig:electra} shows the architecture of our sequence pair classification model.
In the fine-tuning stage, we take the final hidden state $h_{\text{[CLS]}}$, the transformer output of the first token of the sequence, to represent the input. Then $h_{\text{[CLS]}}$ is fed into a two-layer fully connected network activated by the GELU activation function \cite{DBLP:journals/corr/HendrycksG16}, which outputs the sentence representation $h_s = FC(h_{\text{[CLS]}})$. Finally the label probabilities are computed with a standard $softmax$, $p = softmax(h_s)$. All parameters of the ELECTRA discriminator and the FC layers are fine-tuned jointly to minimize the negative log-probability of the correct label.
\begin{figure}
\centering
\caption{Model Architectures}
\begin{minipage}[t]{0.48\textwidth}
\includegraphics[width=\textwidth]{electra.pdf}
\label{fig:electra}
\end{minipage}
\begin{minipage}[t]{0.48\textwidth}
\includegraphics[width=\textwidth]{electra_multitask.pdf}
\label{fig:electra_multitask}
\end{minipage}
\end{figure}
\subsubsection{Multi-task Model}
\label{sec:multi_task_model}
For the input data of the model, we apply the same data construction method as in Section \ref{sec:electra}. In addition to the classification label, we add two numerical targets as auxiliary regression tasks to fine-tune the model, as detailed in Figure \ref{fig:electra_multitask}.
The sequence encoder of the multi-task model is the same as in Section \ref{sec:electra} and outputs the sentence representation $h_{\text{[CLS]}}$. The classification head is also the same as in the previous model. For the two regression tasks, each output layer is a two-layer fully connected network followed by a $sigmoid$ activation. Finally, the parameters of the multi-task model are fine-tuned by optimizing the combination of a binary cross-entropy classification loss $Loss_{bce}$ and two MSE regression losses $MSE_{ndcg}$ and $MSE_{mrr}$:
\begin{equation}
Loss_{total} = Loss_{bce} + MSE_{ndcg} + MSE_{mrr}
\end{equation}
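As a numeric illustration of this combined objective (plain Python instead of an autograd framework; the example values are invented), where $p$ is the predicted label probability and the two regression heads predict nDCG and MRR in $[0,1]$:

```python
# Combined multi-task objective: binary cross-entropy on the label
# plus MSE on the nDCG and MRR regression targets.
import math

def total_loss(p, label, ndcg_pred, ndcg_true, mrr_pred, mrr_true):
    bce = -(label * math.log(p) + (1 - label) * math.log(1 - p))
    mse_ndcg = (ndcg_pred - ndcg_true) ** 2
    mse_mrr = (mrr_pred - mrr_true) ** 2
    return bce + mse_ndcg + mse_mrr

# A confident positive prediction with small regression errors:
loss = total_loss(0.9, 1, 0.3, 0.3333, 0.4, 0.5)
print(round(loss, 4))
```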
\subsubsection{Fine-tuning Details}
\label{sec:finetuning_details}
We use a 24-layer large ELECTRA discriminator as the sequence encoder, with self-attention heads of 1024-dimensional states and 16 attention heads. The discriminator is based on the PyTorch adaptation published by the HuggingFace Transformers library \cite{DBLP:journals/corr/abs-1910-03771}. All ELECTRA parameters are initialized from the pretrained electra-large-discriminator model released by the Google team \footnote{\url{https://huggingface.co/google/electra-large-discriminator}}.
We fine-tune the model with a batch size of 24 sequences of at most 256 tokens for 1 epoch, which is approximately 4000 steps over our constructed training dataset. We use the training framework implemented by HuggingFace Transformers, with a learning rate of 5e-6 that is linearly decayed to zero over the course of training. Fine-tuning the model takes about 2 hours on four GeForce GTX 1080Ti GPUs.
\subsubsection{Inference Details}
\label{sec:inference_details}
For stage 1 of the ClariQ challenge, we select the top 30 highest-probability questions for each initial request from an ensemble model, which sums up the predicted probabilities of the previous two models. In our deployed system, however, we select the single most relevant clarifying question from the recalled candidate question set and return it to the user.
It is worth mentioning that these two models are trained on the single-turn dataset and take the combination of the initial request and the conversation context as input in the inference stage. Regarding the system input, the clarifying question is appended to the initial request if there is a "yes" in the corresponding answer. If there is a "no" in the user's answer, nothing is added to the initial request. In other cases, the system input is the initial request together with the user's answer.
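This context-handling rule can be sketched as follows (the function name and examples are ours):

```python
# Build the ranking-model input from the conversation context:
# append the last clarifying question on "yes", keep the bare request on
# "no", and otherwise append the user's answer.
def build_input(initial_request, question, answer):
    tokens = answer.lower().split()
    if "yes" in tokens:
        return initial_request + " " + question
    if "no" in tokens:
        return initial_request
    return initial_request + " " + answer

assert build_input("obama family tree", "his parents?", "yes") == \
    "obama family tree his parents?"
assert build_input("obama family tree", "his parents?", "no") == \
    "obama family tree"
```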
\subsection{Experiments}
\begin{table}[]
\centering
\caption{Results of Document Relevance Task}
\begin{tabular}{l|c c c c|c c c c}
\hline
\multicolumn{1}{c|}{\multirow{2}{*}{\textbf{model}}} & \multicolumn{4}{c|}{\textbf{devset}} & \multicolumn{4}{c}{\textbf{testset}} \\ \cline{2-9}
\multicolumn{1}{c|}{} & \textbf{MRR} & \textbf{P@1} & \textbf{NDCG@3} & \textbf{NDCG@5} & \textbf{MRR} & \textbf{P@1} & \textbf{NDCG@3} & \textbf{NDCG@5} \\ \hline
bm25 baseline & 0.3096 & 0.2313 & 0.1608 & 0.1530 & 0.3134 & 0.2193 & 0.1151 & 0.1061 \\
bert reranker & 0.3453 & 0.2563 & 0.1824 & 0.1744 & 0.2553 & 0.1784 & 0.0892 & 0.0818 \\ \hline
electra & 0.3548 & 0.2436 & 0.2013 & 0.1921 & - & - & - & - \\
electra multitask & 0.3520 & 0.2688 & 0.2033 & 0.1925 & 0.3006 & 0.2230 & 0.1031 & 0.097 \\
\textbf{ensemble model} & \textbf{0.3761} & \textbf{0.3} & \textbf{0.2113} & \textbf{0.1955} & \textbf{0.3140} & \textbf{0.2379} & \textbf{0.1229} & \textbf{0.1097} \\ \hline
\end{tabular}
\label{tab:document_relevance}
\end{table}
\begin{table}[]
\centering
\caption{Results of Question Relevance Task}
\begin{tabular}{l|c c c c|c c c c}
\hline
\multicolumn{1}{c|}{\multirow{2}{*}{\textbf{model}}} & \multicolumn{4}{c|}{\textbf{dev}} & \multicolumn{4}{c}{\textbf{test}} \\ \cline{2-9}
\multicolumn{1}{c|}{} & \textbf{R@5} & \textbf{R@10} & \textbf{R@20} & \textbf{R@30} & \textbf{R@5} & \textbf{R@10} & \textbf{R@20} & \textbf{R@30} \\ \hline
bm25 baseline & 0.3245 & 0.5638 & 0.6675 & 0.6913 & 0.3170 & 0.5705 & 0.7292 & 0.7682 \\
bert rerank & 0.3475 & 0.6122 & 0.6913 & 0.6913 & 0.3444 & 0.6062 & 0.7585 & 0.7682 \\ \hline
electra & 0.3604 & 0.6618 & 0.8368 & 0.8632 & - & - & - & - \\
\textbf{electra multitask} & \textbf{0.3648} & \textbf{0.6753} & \textbf{0.8510} & 0.8744 & \textbf{0.3414} & \textbf{0.6351} & 0.8316 & 0.8721 \\
\textbf{ensemble model} & 0.3604 & 0.6749 & 0.8478 & \textbf{0.8761} & 0.3404 & 0.6329 & \textbf{0.8335} & \textbf{0.8744} \\ \hline
\end{tabular}
\label{tab:question_relevance}
\end{table}
\subsubsection{Evaluation Metric}
We send the top 30 highest-probability clarifying questions for the user requests from the single-turn test dataset to the task organizer for automatic evaluation. The evaluation metrics of the document relevance task are MRR, P@3 and nDCG@[3,5]. The selected clarifying question and its corresponding answer are added to the original request, which is then used to retrieve documents from the collection; these metrics thus evaluate how much the question and its answer affect the performance of document retrieval. For the question relevance task, the models are evaluated in terms of Recall@[5, 10, 20, 30]. More details about the automatic evaluation metrics can be found in the challenge description \cite{DBLP:journals/corr/abs-2009-11352}.
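As a concrete reference, the rank-based metrics above can be computed as follows (a generic sketch; function names are ours and this is not the organizers' evaluation script):

```python
def mean_reciprocal_rank(ranked_lists, relevant_sets):
    """MRR: average over queries of 1/rank of the first relevant item
    (contributes 0 when no relevant item is retrieved)."""
    total = 0.0
    for ranked, relevant in zip(ranked_lists, relevant_sets):
        for rank, item in enumerate(ranked, start=1):
            if item in relevant:
                total += 1.0 / rank
                break
    return total / len(ranked_lists)

def recall_at_k(ranked_lists, relevant_sets, k):
    """Recall@k: average fraction of relevant items found in the top k."""
    scores = [len(set(ranked[:k]) & relevant) / len(relevant)
              for ranked, relevant in zip(ranked_lists, relevant_sets)]
    return sum(scores) / len(scores)
```

For example, with ranked lists `[["a","b","c"], ["x","y","z"]]` and relevant sets `[{"b"}, {"y","z"}]`, MRR is 0.5 and Recall@2 is 0.75.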
In the multi-turn conversation evaluation in stage 2, our system achieves the top scores on all metrics, as shown in Table \ref{tab:system_evaluation}.
\subsubsection{Results}
The results of our experiments are summarized in Table \ref{tab:document_relevance} and Table \ref{tab:question_relevance}. We use the official BM25 baseline and BERT reranker as benchmarks for comparison. For our models, we compare three configurations: ELECTRA, a multi-task model based on ELECTRA, and the ensemble of these two models. The final leaderboard can be found at the official challenge website\footnote{\url{https://github.com/DeepPavlov/convai}}.
Table \ref{tab:document_relevance} shows that the ensemble model performs best on both the development and test sets in the document relevance task. As for the question relevance task, whose results are shown in Table \ref{tab:question_relevance}, the ELECTRA multi-task model achieves the best Recall@[5,10] and the ensemble model the best Recall@[20,30] on the single-turn test set.
Table \ref{tab:system_result} shows some examples of the system simulation. Our system chooses the question that asks for the user's purpose in the first case and selects a more detailed clarifying question in the second case. In the third case, our system asks a further question according to the conversation context, and in the fourth case it asks a question different from the previous one when the user answers no. When the user's answer gives more details, as in the fifth case, our system no longer asks questions.
\begin{table}[]
\centering
\caption{Results of Multi-turn Conversations Evaluation}
\begin{tabular}{l | l | c c c c}
\hline
\textbf{Rank} & \textbf{Creator} & \textbf{MRR} & \textbf{P@1} & \textbf{nDCG@3} & \textbf{nDCG@5} \\ \hline
\textbf{1} & \textbf{NTES\_ALONG} & \textbf{0.1798} & \textbf{0.1161} & \textbf{0.0553} & \textbf{0.0536} \\ \hline
2 & TAL ML & 0.1669 & 0.1067 & 0.0522 & 0.0494 \\ \hline
\end{tabular}
\label{tab:system_evaluation}
\end{table}
\begin{table}[]
\centering
\caption{Results of System Simulation}
\begin{tabular}{l}
\hline
\begin{tabular}[c]{@{}l@{}}initial request : I want to know about appraisals.\\ conversation context :\\ \ \ None\\ clarifying question : What kind of appraisal are you looking for ?\end{tabular} \\ \hline
\begin{tabular}[c]{@{}l@{}}initial request : How to cure angular cheilitis ?\\ conversation context :\\ \ \ None\\ clarifying question : Are you interested in home remedies for angular cheilitis ?\end{tabular} \\ \hline
\begin{tabular}[c]{@{}l@{}}initial request : I want to know about appraisals.\\ conversation context :\\ \ \ question : What kind of appraisal are you looking for ?\\ \ \ answer : I need information about antique appraisals.\\ clarifying question : Would you like to find appraisers near you ?\end{tabular} \\ \hline
\begin{tabular}[c]{@{}l@{}}initial request : Where can I buy pressure washers ?\\ conversation context :\\ \ \ question : Are you wondering what a pressure washer is ?\\ \ \ answer : No.\\ clarifying question : Are you looking for a place to buy pressure washer parts ?\end{tabular} \\ \hline
\begin{tabular}[c]{@{}l@{}}initial request : Find information on Hoboken .\\ conversation context :\\ \ \ question : Are you looking for an apartment in hoboken ?\\ \ \ answer : No, I would like to find restaurants there.\\ clarifying question : None (means the request is clear)\end{tabular} \\ \hline
\end{tabular}
\label{tab:system_result}
\end{table}
\section{Conclusions and Future Work}
For the challenge on clarifying questions for conversational information retrieval systems (ClariQ), we present a clarifying question selection system, a pipeline combining response understanding, candidate question recall, and clarifying question ranking. We first fine-tune an ELECTRA classification model to determine which utterances need to be clarified. For ambiguous requests, our system recalls candidate questions with an enhanced BM25 method and selects the most relevant among them with an ensemble ranking model. For the ensemble ranking model in particular, we fine-tune an ELECTRA model and a multi-task ELECTRA model on our ranking dataset. Compared with the benchmark models, our models show competitive performance on the document relevance task, and on the question relevance task our ensemble model ranks 1st among all participant models. In practical application, our system tends to return clear and coherent questions to clarify the user's query. Our system won first place in the ConvAI3 challenge at the EMNLP 2020 SCAI workshop.
For future work, we plan to investigate whether different interaction linking structures and extensive hyper-parameter tuning can further improve the system's performance. In addition, we will apply our framework to datasets from different domains to evaluate its generalization ability and robustness.
\bibliographystyle{unsrt}
\section{Introduction} \label{sec:introduction}
UAVs (Unmanned Aerial Vehicles), or drones as they are popularly known, are paving their way into different fields of application, which has led to their increased presence in the consumer market. Significant research work is now being focused on communication problems associated with UAVs and how to remediate their vulnerabilities. Drones help us reach areas that are difficult to access, often because of the lack of physical infrastructure. As a consequence, drones are often used for critical operations such as rescue, surveillance, and transportation in various fields, including agriculture, forestry, environmental protection, and security.
Initially, drone units were used independently; nowadays, however, multiple synchronized drones often perform critical operations together. In these scenarios, drone communication plays a critical role, so it is important to understand its various aspects. Different types of wireless channels and network protocols are employed in drone communications, and the communication mechanism used for a UAV network depends on the application. For example, in outdoor communication it has been observed that a simple line-of-sight point-to-point communication link between the drone and the device can be utilized without any break in signal transmission. Another example is surveillance, where drones communicate effectively through satellite links; satellite communication is the preferable choice when drones are used for security, defense, or wider outreach operations. For civil and personal applications, on the other hand, cellular communication technologies are preferred, while for indoor communication, especially in the case of mesh networks and Wireless Sensor Networks (WSNs), communication through Bluetooth and other point-to-point (P2P) protocols has been more efficient. Communication in a multi-layered network can be a complicated process when applied to drones; some of the major concerns are illustrated below.
Previous work \cite{pantelimon2018survey} has explored various communication and mission control approaches for multi-drone applications, classifying the systems as centralized or decentralized. In time-sensitive missions, centralized systems serve better; ideally, however, a hybrid of both would give the best results, with drones centrally operated while learning from each other. WiFi, Bluetooth, ZigBee, acoustic, and cellular technologies were analyzed for a UAV communication system, and it was concluded that the choice of communication technology should take into account parameters like bandwidth, range, power requirements, speed, compatibility, payload weight, and cost.
Yanmaz \emph{et al.} \cite{yanmaz2018drone} analyzed various technologies for a drone network with different functionalities such as sensing, coordination, communication, and networking. Many useful suggestions were also provided, e.g., that drones should be integrated into emerging large-scale networks such as future cellular networks. Asadpour \emph{et al.} \cite{asadpour2013ground} showed that current wireless networking standards cannot cope with the high mobility of UAVs and increased signal frequencies: the Doppler effect, together with changes in relative speeds and antenna directions, can lead to high packet losses. Selection of appropriate communication technology is essential, and aspects like accuracy, sum rate, antenna devices, and resource handling platforms should be taken into consideration, as suggested by researchers \cite{mozaffari2019tutorial, sanchez2018survey, hayat2016survey, vahidi2018low, sudheesh2018sum, zabihi2017monopole, ngamjanyaporn2017switch, zhao2018antenna, multerer2017low, pizetta2016hardware, burkle2011towards, christensen2015design, dantu2011programming}. Data transmission is a crucial aspect of any network, and an appropriate routing protocol should be chosen accordingly; for a single UAV or a swarm of UAVs, networking is an important feature \cite{rahman2014enabling, yang2017routing, kitagawa2018mobility, yoshikawa2017resource, fabra2017impact, shrit2017new, kim2016multi, uchida2014evaluation, sun2017latency, souidi2017node, naqvi2018drone, erdelj2017help, wu2017orsca, alnoman2017d2d, shi2018drone}. Drones have been incorporated into wireless sensor networks, vehicular communication systems, and mobile communication networks to extend their applications, along with the use of the Internet of Things \cite{oubbati2017intelligent, wang2016vdnet, moran2017hybrid, li2015drone, koubaa2017service, condoluci2016enabling, fotouhi2017understanding, narang2017cyber, motlagh2016low, jayakody2019self, article}.
Artificial intelligence, navigation strategies, and cryptography have been integrated into UAV communication techniques by various researchers to maintain efficient, reliable, and low-latency communications between nodes of the UAV network \cite{park2016prediction,jung2017acods,saha2018cloud,kong2017autonomous,erdelj2017help,wu2017orsca,chi2012civil,perazzo2015verifier,yuan2017outdoor,wang2016vdnet,moran2017hybrid,zhao2018antenna,thomas2015secure,ramdhan2016codeword,cheon2018toward,singandhupe2018reliable,quist2013novel,steinmann2016uas,he2017drone,multerer2017low,samland2012ar,knoedler2016detection,clarke2014regulation,oubbati2017intelligent,shrit2017new,sharma2018coagulation}. However, it is important to consider energy efficiency, as well as the speed of drones, for reliable secure communication. Drones often face issues of inadequate energy and computing resources \cite{jung2017acods, koubaa2017service, shetti2015evaluation, long2018energy, naqvi2018drone, zorbas2013energy, fotouhi2017understanding, ma2017drone, kagawa2017study, sun2017latency}, and researchers have offered insights toward optimized solutions for these problems. Another problem is communication failure due to aerial network jamming; such interference can be a serious issue, as networks of UAVs are now being used for emergency communication infrastructure and surveillance \cite{camara2014cavalry, kang2016spatial, deruyck2018designing, thapa2016impact, zahariadis2017preventive, he2017drone, miyamoto2015demo, moon2016uav, jung2017acods}.
A diagram summarizing the communication technologies of drones, their linkage with recent technological advancements, and their combined applications is laid out in Fig. \ref{introduction:scope}. The notion is to present concisely how the pieces from the left, middle, and right portions can be associated. For instance, if communication is established between a drone and an ambulance through a sophisticated vehicular communication system, an artificial intelligence algorithm (running offline on the drone or online in the cloud) can monitor the paths and determine the best route to provide emergency aid. The left portion of the diagram presents some key attributes of drone communication technology, and the figure shows their association with the four major disciplines in the middle portion. The association between these two portions alone constitutes a vast amount of research; combined with performance analysis in applications such as surveillance and emergency aid (shown in the right portion), the scope is vast. This technological entanglement will be broken down for further investigation, along with the identification of links that are missing or need further consideration.
{\color{blue}\subsection{Review of Previous Survey Works}}
\textcolor{blue}{In addition to the growing number of new solutions for UAV communication networks in recent years, a number of surveys focusing on UAV communication have been published, suggesting different technologies to improve its performance. A summary of these existing survey and tutorial articles is provided in Table \ref{Survey}. The authors of references \cite{mozaffari2019tutorial} and \cite{fotouhi2019survey} provided comprehensive studies on the use of UAVs in wireless networks, investigating two main UAV applications: UAV-assisted aerial base stations and cellular-connected UAVs. In particular, reference \cite{fotouhi2019survey} presented research on the cyber-physical security of UAV-assisted cellular communications. In \cite{yan2019comprehensive}, the authors conducted a comprehensive survey and analysis of air-to-ground channel measurements and channel models for UAV communication; they also analyzed the link budget for UAV communications and presented design guidelines for managing it, taking spreading losses and link fading into account. UAV communication research in the areas of routing, seamless handover, and energy efficiency has been discussed in \cite{gupta2015survey}. Finally, reference \cite{bithas2019survey} offered a detailed summary of ML-based UAV communication strategies for optimizing aspects such as UAV channel modeling, resource management, positioning, and security.}
\begin{table*}[!htbp]
\caption{Comparison of Existing Survey Articles}
\label{Survey}
\centering
\color{blue}
{\begin{tabular}{c c c}
\hline
\multicolumn{1}{p{1.00in}}{ \textbf{Paper}} &
\multicolumn{1}{p{1.47in}}{ \textbf{Focused communication technologies/areas}} &
\multicolumn{1}{p{2.95in}}{\textbf{Key features} }\\
\hhline{---}
\multicolumn{1}{p{1.00in}}{ \cite{yan2019comprehensive}} &
\multicolumn{1}{p{1.47in}}{ UAV communication links and channels} &
\multicolumn{1}{p{2.95in}}{\begin{itemize}
\item Channel models for UAV communications \par
\item Link budget analysis for UAV communications \par
\item MIMO Communications for UAVs \par
\item Air-to-ground (A2G), ground-to-ground (G2G), and air-to-air (A2A) channel measurements and modeling for UAV communications
\end{itemize}} \\
\hhline{---}
\multicolumn{1}{p{1.00in}}{\cite{fotouhi2019survey}} &
\multicolumn{1}{p{1.47in}}{Cellular connected UAVs} &
\multicolumn{1}{p{2.95in}}{\begin{itemize}
\item UAV types \par
\item Prototyping and field test \par
\item Mobile edge computing with UAVs \par \item Aerial base stations \par
\item Channel modeling \par \item UAV regulation \par
\item UAV communication security
\end{itemize}} \\
\hhline{---}
\multicolumn{1}{p{1.00in}}{\cite{bithas2019survey}} &
\multicolumn{1}{p{1.47in}}{ Artificial intelligence and Machine Learning (ML) for UAV communications} &
\multicolumn{1}{p{2.95in}}{\begin{itemize}
\item UAV characteristics \par \item Communication issues in ML-Enhanced UAV networks \par \item UAV communication security
\end{itemize}} \\
\hhline{---}
\multicolumn{1}{p{1.00in}}{\cite{mozaffari2019tutorial} } &
\multicolumn{1}{p{1.47in}}{UAV-enabled wireless networks} &
\multicolumn{1}{p{2.95in}}{\begin{itemize}
\item Mathematical tools for designing UAV communication systems. \par \item Cellular-Connected drones \par \item Flying Ad-hoc Networks with UAVs \par \item Trajectory Optimization
\end{itemize}} \\
\hhline{---}
\multicolumn{1}{p{1.00in}}{\cite{gupta2015survey}} &
\multicolumn{1}{p{1.47in}}{ UAV communication networks } &
\multicolumn{1}{p{2.95in}}{\begin{itemize}
\item Ad hoc networks \par
\item UAV networks and configurations \par
\item Routing protocols for UAV networks \par
\item Handover mechanisms for UAV networks
\end{itemize}} \\
\hhline{---}
\end{tabular}}
\end{table*}
{\color{blue}\subsection{Contributions of This Article}
Despite the existing UAV communication related articles highlighted in Section 1.1, no prior work has provided a comprehensive review of the emerging technologies in UAV communication. Therefore, our objective in this paper is to focus on emerging UAV communication technologies and their applications for next-generation wireless networks.
Motivated by this vision, we fully investigate various emerging UAV communication technologies together with their advantages, use case scenarios, technical challenges, and future directions. The scope of this survey covers communication and network technologies for UAVs through an investigation of suitable task modules, antennas, resource handling platforms, and network architectures. We survey most of the emerging technologies from both academic and industrial perspectives based on the most recent literature. Moreover, we provide a comprehensive summary of UAV communication related concepts such as UAV-assisted wireless networks, cellular-connected UAVs, IoT-enabled UAV communication systems, URLLC-enabled UAV communication, navigation strategies for UAVs, and machine learning and artificial intelligence-enhanced UAV networks. We also discuss the applications of UAV communication in modern technologies such as the IoT, 5G, and wireless sensor networks. Finally, we discuss key research challenges and future directions with the objective of realizing high-performance UAV communication systems.
}
{\color{blue}\subsection{Paper Organization}
This paper is divided into six sections. Section 1 provides an overview of the key points covered in this paper. In Section 2, we outline vital communication technologies that are available for UAV communication. Section 3 covers different technologies like artificial intelligence, navigation strategies, security mechanisms, and optimization theory that enhance the performance of UAV communication. Various novel applications for drone research are introduced in Section 4. Section 5 points the direction for future research and presents challenging open problems that must be addressed. Section 6 concludes the paper. Figure \ref{stp} depicts the structure of the paper.
\begin{figure*}[!htbp]
\includegraphics[width=0.95\textwidth ]{UAV.pdf}
\caption{ Structure of the Paper}
\label{stp}
\end{figure*}
}
\section{Communication \textcolor{blue}{and Network} Technologies for UAVs}
\begin{table*}[!htbp]
\caption{Technological Comparison and Evaluation of Existing Algorithms and Techniques for Drone Networks}
\label{t1}
\centering
\begin{tabular}{lllll}
\hline
\textbf{\begin{tabular}[c]{@{}l@{}}Comparison / \\ Evaluation\end{tabular}} & \textbf{\begin{tabular}[c]{@{}l@{}}Corre-\\spondent\end{tabular}} & \textbf{Selected} & \textbf{Selection criteria} & \textbf{Advantage over rest} \\ \hline
\begin{tabular}[c]{@{}l@{}}WiMax, ZigBee,\\ WiFi, XBee\end{tabular} & \cite{rahman2014enabling} & WiMax & \begin{tabular}[c]{@{}l@{}}SHERPA network \\ standard\end{tabular} & \begin{tabular}[c]{@{}l@{}}Broader coverage and lower \\ data loss rate in hostile areas.\\ Consider other parameters too.\end{tabular} \\ \hline
AFAR-D, DSDV & \cite{lee2016devising} & AFAR-D & Packet Routing & Better packet delivery ratio. \\ \hline
RMICN &
{\cite{kitagawa2018mobility}} & RMICN & \begin{tabular}[c]{@{}l@{}}Communication between \\ disjoint networks\end{tabular}
& \begin{tabular}[c]{@{}l@{}}Improved flexibility and \\ efficiency.\end{tabular}
\\ \hline
IACO & \cite{akka2018} & IACO & Path planning & \begin{tabular}[c]{@{}l@{}}Better network between\\ regions.\end{tabular}
\\
\hline
\end{tabular}
\end{table*}
To establish a proper UAV communication network, communication modules and protocols are of the utmost importance. Various methods have been suggested by the research community in which a few critical factors, such as antenna design, network architecture, and resource management platforms, were considered. In this section, communication modules, various networking schemes, and the use of the Internet of Things in drone communication are discussed. A comparison of different algorithms and methods used in drone networks is presented in Table \ref{t1}.
\subsection{Communication Modules}\label{sec:2.1}
A significant amount of research work has been dedicated to the enhancement of communication technology. In this section, different aspects of communication technology are reviewed and innovative methods for improvement are presented. In particular, accuracy and stability are critical performance criteria in UAV communication; existing wireless technologies, including WiMAX, LTE, and ZigBee, have been analyzed against these criteria by Hayat \emph{et al.} \cite{hayat2016survey}.
Vahidi \emph{et al.} \cite{vahidi2018low} used Multiple-Input Multiple-Output Orthogonal Frequency Division Multiplexing (MIMO-OFDM) to reconstruct accurate transmitted data at the receiver end with reduced overhead and computational complexity. However, maximizing the sum rate could be another basis for improving the communication system.
For high altitude platforms (HAPs), a drone ground-station interference alignment scheme has been proposed by Sudheesh \emph{et al.} \cite{sudheesh2018sum}, in which communication is assisted by a tethered balloon using half-duplex relaying. This system helps achieve the maximum degrees of freedom (DOF) and sum rate, especially when HAPs lack channel state information (CSI). The use of DOF to characterize a communication channel was pioneered by Somaraju \emph{et al.} \cite{somaraju2010degrees}.
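For reference, the sum rate maximized in such schemes is conventionally the aggregate spectral efficiency over the $K$ active links (a textbook definition, not the exact objective of \cite{sudheesh2018sum}):

```latex
\begin{equation}
  R_{\mathrm{sum}} = \sum_{k=1}^{K} \log_2\left(1 + \mathrm{SINR}_k\right),
\end{equation}
```

where $\mathrm{SINR}_k$ is the signal-to-interference-plus-noise ratio of link $k$; interference alignment raises $R_{\mathrm{sum}}$ by confining interference to a reduced subspace, leaving more DOF for the desired signals.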
A simplified diagram of various communication modules being used is shown in Fig. \ref{modules}, where the development of each of the modules has been carefully designed based on certain factors for their utility in drones, termed here as utility factor. To the right side, utility factors such as good bandwidth, expansion of radio control, and antenna security are grouped together. The efficacy of all of them corresponds to the characteristics of the antenna. Further to the right, many research outputs and products are linked to the utility factor. Many of these are described in greater detail in later sections. The complete interlinking demonstrates the association of different modules with each other and how the development of each module can be categorized under a utility factor and related to the specific components of drones. Similarly, the left portion of the diagram outlines the relation of development platforms such as Karma or its alternative research products with a particular utility factor and their categorization under resource handling platforms.
\begin{figure*}[!htbp]
\includegraphics [width=0.95\textwidth]{MODULES_DRONECOMM.png}
\caption{Different modules for handling drone resources and antennas.}
\label{modules}
\end{figure*}
\subsection{Antenna Design}
Efficient antenna design is essential for signal exchange and information interchange among drones. The work of Zabihi \emph{et al.} \cite{zabihi2017monopole} suggested a design that maximizes antenna performance by taking bandwidth requirements into consideration; their research concluded that printed designs are best, especially the wrapped PIFA (Planar Inverted F Antenna). Ngamjanyaporn \emph{et al.} \cite{ngamjanyaporn2017switch} proposed extending the radio control distance of a UAV controller through a switch-beam, circular-array antenna using a two-beam switching Yagi-Uda antenna at a 2.4 GHz operating frequency. A directional Yagi antenna has also been used to study the power amplification of a device \cite{zhao2018antenna}; that study focuses on the security aspects of LOS (line-of-sight) and nLOS (non-line-of-sight) threat scenarios. Using antenna devices such as dual-frequency PIFAs, directional antennas, and angle reflectors to draw an electronic fence, the system is able to detect intrusion by amateur drones. In the field of security, Multerer \emph{et al.} \cite{multerer2017low} used an RF jammer with a bidirectional antenna and a 3D MIMO radar for protection against surveillance.
\subsection{Resource Handling Platforms}
Research has been under way to develop operating platforms that can be used by researchers and developers to perform processing tasks with ease. A decentralized platform, AuRoRA, has been used as a ground station for sending control signals to the servo motors of vehicles as described by Pizetta \emph{et al.} \cite{pizetta2016hardware}. This approach prevented the overloading of a single computer with the integration of flight data and control signals. However, in the field of swarm robotics, controlling multiple UAVs could be a very tedious task and requires precise synchronization among them. Burkle \emph{et al.} \cite{burkle2011towards} suggested a platform for the formation of a swarm of multiple drones, with a generic ground station responsible for the integration of several sensor platforms. The drones had been integrated into a modular sensor network, centrally controlled by GCS (ground control station). Communication infrastructure was designed using channels for broadcast, control, data, and co-operation, which provided links for communication between drones and the ground control station. Christensen \emph{et al.} \cite{christensen2015design} presented the Heterogeneous Ad-hoc Network for the Coordination of Aquatic Drones (HANCAD) and Control of Aquatic Drones for Maritime Tasks (CORATAM) projects, focusing on control of swarms of aquatic drones and the communication among them. One of the main goals of the projects was to enable Mobile Ad-hoc networks (MANETs) to be used with low-cost aquatic drones. Another unique system, named “Karma”, has been proposed by Dantu \emph{et al.} \cite{dantu2011programming} and it was based on a drone hive model, which simplifies the hardware and software complexity of individual Micro Aerial Vehicles (MAV) by moving the complexity of coordination to a central hive computer entirely, thereby making communication more feasible and efficient.
\subsection{Networking Technologies for UAV Communication System}
A significant amount of research work has been focused on different aspects of drone communication networks, resulting in improved technology and more robust networks. Rahman \cite{rahman2014enabling} chose Worldwide Interoperability for Microwave Access (WiMAX) as the most suitable of the wireless communication technologies studied (ZigBee, WiFi, XBee, and WiMAX), based on the SHERPA network standard criteria. Lee \emph{et al.} \cite{lee2016devising} used an Adaptive Forward Area based Routing algorithm for drones (AFAR-D) while using Geographic Information Systems (GIS) to study flooding, which is well suited for drones when correctly modified; evaluation against the Destination-Sequenced Distance-Vector (DSDV) routing protocol confirmed a better packet delivery ratio for AFAR-D. Kitagawa \emph{et al.} \cite{kitagawa2018mobility} aimed at developing a networking system, RMICN (Router-movable Information-centric Networking), particularly for facilitating communication between disjoint networks; it uses the physically controlled movement of flying routers and relay nodes to improve flexibility and efficiency. A path planning algorithm called Improved Ant Colony Optimization (IACO) was used for a group of mobile robots \cite{akka2018}. Yoshikawa \emph{et al.} \cite{yoshikawa2017resource} focused on another aspect of resource allocation, identifying the best frequency band for individual drones so as to enable the maximum number of drones to use the main communication band while avoiding interference. After deriving the power outage probability of a radar and a drone, the authors further optimized the maximum ratio of drones using the main band relative to the total number of drones by increasing the size of the primary exclusive region. High interference was observed at the radio control unit only in the 2.4 GHz wireless band.
Fabra \emph{et al.} \cite{fabra2017impact} studied optimization techniques, and their experimental results demonstrated the incompatibility of WiFi in this band due to the large number of remote control devices already utilizing it. \textcolor{blue}{When forming a swarm network of drones, a light and efficient solution was proposed by Shrit \emph{et al.} \cite{shrit2017new} to synchronize them into position using only ad-hoc communications. For the operation of a swarm, a leader drone is piloted by a human, and the other drones autonomously follow the leader using the strength of WiFi signals. UAV swarm research has recently started to gain more interest for general applications. There have been many UAV swarm demonstrations, but in most of them the degree of autonomy has been small: each individual UAV is controlled simultaneously by a GCS. Current UAV swarm demonstrations use one of two general forms of swarm communication architecture, infrastructure-based or ad-hoc network-based \cite{campion2018review}. Flying Ad-Hoc Networks (FANETs) were described by Kim \emph{et al.} \cite{kim2016multi}, where the communication problem limiting the operational range of drones was solved. FANET relay technology can also be used for controlling drones that get disconnected from the ground control system (GCS). A ``return to the next-hop drone's location'' scheme is useful for network recovery of drones that get disconnected from neighboring drones. In addition, self-recovery networks have been explored by Uchida \emph{et al.} \cite{uchida2014evaluation}, where a resilient network consisting of Autonomous Flight Wireless (AFW) nodes with Delay Tolerant Networks (DTN) and Never Die Networks (NDN) is implemented to seek possible wireless stations and send messages in isolated areas.}
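As an illustration of the signal-strength-based following described above, a minimal control rule might look like the following (our own sketch under simplified assumptions, not the controller of \cite{shrit2017new}; it assumes the leader's bearing is known and uses received WiFi signal strength only as a distance proxy):

```python
import math

def follow_leader(follower_xy, rssi_dbm, target_rssi_dbm, bearing_rad, step=1.0):
    """Move one step toward the leader's bearing when the received WiFi
    signal is weaker than the target strength (i.e., the follower is too
    far away); otherwise hold position."""
    if rssi_dbm < target_rssi_dbm:
        x, y = follower_xy
        return (x + step * math.cos(bearing_rad),
                y + step * math.sin(bearing_rad))
    return follower_xy
```

In practice, raw RSSI is noisy, so a real controller would filter it over time and add collision avoidance between swarm members.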
\subsection{\textcolor{blue}{UAV-Assisted Wireless Sensor Networks and UAV-Assisted Vehicular Communication Systems}}
Efficiently incorporating drones into a wireless sensor network (WSN) is a strenuous task due to the positioning of dense sensors over a large area. Erdelj \emph{et al.} \cite{erdelj2017help} have shown that static WSN deployments become less effective as the stages of damage progress. Recommendations for WSN and UAV use have been made based on the proposed classification of three stages of disaster management, i.e., pre-disaster preparedness, disaster assessment, and disaster response and recovery. Wu \emph{et al.} \cite{wu2017orsca} proposed gathering mobile data by a UAV in a WSN. A routing scheme was formulated for a Route Selection and Communication Association (RSCA) problem using a regulated greedy algorithm.
\textcolor{blue}{D2D can be an efficient approach for inter-UAV communication. A review of recent advancements in D2D technologies was presented by Alnoman \emph{et al.} \cite{alnoman2017d2d}. D2D communication with frequency reuse and power control using a multi-player multi-armed bandit model has been investigated by Kuo \emph{et al.} \cite{kuo2020d2d}. Research shows that devices with cellular network capability can find other devices in impacted areas.} Additionally, the proximity of mobile devices can be exploited for high data transmission rates and for establishing private networks. Other research \cite{shi2018drone} introduced a comprehensive drone-assisted vehicular networks (DAVN) architecture for integrating drones with ground vehicular networks, using drones to improve infrastructure coverage, vehicle-to-vehicle connectivity, network inter-working efficiency, and data collection ability.
Some innovative work has also been done on UAV-assisted VANETs. Protocols such as UAVR-S (air-to-air communication) and UAVR-G (ground-to-air) have been introduced by Oubbati \emph{et al.} \cite{oubbati2017intelligent}. An ad-hoc network of UAVs acting as relays is deployed when ground communication is poor or the vehicular density is too low for routing packets. Yang \emph{et al.} \cite{yang2017routing} devised a lightweight ForWard-Back (FWB) queuing architecture: in exchange for small network delays, an appropriate path to a final destination is determined adaptively by leveraging the queuing and transmission delays. An infrastructure-less UAV-assisted vehicular ad-hoc network (VANET) system called Vehicle-Drone hybrid vehicular ad-hoc Network (VDNet) was devised by Wang \emph{et al.} \cite{wang2016vdnet}, which utilizes UAVs to boost data transmission between vehicles and achieves significant performance gains.
Li \emph{et al.} \cite{li2015drone} proposed a smart drone for the First Responder Network Authority (FirstNet). It used a form of multi-hop device-to-device (D2D) communication, relaying transmissions between the base station and terminal devices. Simulation results show that a drone is needed only if the distance or the required transmit power exceeds a specified threshold.
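The threshold logic reported in \cite{li2015drone} can be illustrated with a simple free-space link-budget sketch. This is not the paper's model; the carrier frequency, receiver sensitivity, and power limit below are hypothetical values chosen only to show the decision rule.

```python
import math

def fspl_db(distance_m: float, freq_hz: float) -> float:
    """Free-space path loss in dB (Friis formula)."""
    c = 3.0e8  # speed of light, m/s
    return 20 * math.log10(4 * math.pi * distance_m * freq_hz / c)

def relay_drone_needed(distance_m: float, freq_hz: float = 2.4e9,
                       rx_sensitivity_dbm: float = -90.0,
                       max_tx_power_dbm: float = 20.0) -> bool:
    """Deploy a relay drone only when the direct link would require
    more transmit power than the terminal device can supply."""
    required_tx_dbm = rx_sensitivity_dbm + fspl_db(distance_m, freq_hz)
    return required_tx_dbm > max_tx_power_dbm
```

Under these assumed numbers, a short link is served directly, while a link tens of kilometers long would exceed the device's power budget and trigger relay deployment.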
\subsection{IoT-Enabled UAV Communication System}
Due to their limited processing capabilities and low on-board storage, drones are unable to perform computationally demanding applications. Integration of drones with the Internet-of-Things (IoT) and the cloud is envisioned as a viable solution to this shortcoming. A service-oriented cloud-based management system, or Drone Planner, as suggested by Koubaa \emph{et al.} \cite{koubaa2017service}, uses the MAVLink protocol for communication and provides a simple yet efficient API to develop drone applications. Alternatively, machine-type multicast service (MtMS) has been proposed by Condoluci \emph{et al.} \cite{condoluci2016enabling} for enabling concurrent data transmission to MTC devices. Its architecture and procedures have been designed to optimize latency and reduce energy consumption and control overhead. Various papers exploring IoT utilization in end-to-end systems have produced significant results. Fotouhi \emph{et al.} \cite{fotouhi2017understanding} experimented with a commercial drone, the DJI Phantom, to incorporate IoT applications and revealed some key practical maneuverability factors.
\begin{figure*}[!htbp]
\includegraphics [width=0.95\textwidth]{IOT_DRONE.png}
\caption{A typical scenario of the internet as the medium for drone communication.}
\label{internet}
\end{figure*}
Fig. \ref{internet} shows a typical setup of the different components of a fully functional drone system with IoT supporting its communication. Tradeoffs between turning agility, flying speed, and battery life have been analyzed with the help of these factors and various experiments. A buses-and-drones mobile infrastructure (AidLife) has been proposed by Narang \emph{et al.} \cite{narang2017cyber}, which utilizes an existing public transport system to establish an adaptable system for reliable communication during a disaster. Motlagh \emph{et al.} \cite{motlagh2016low} have conducted a comprehensive survey on the architecture for the delivery of UAV-based IoT services. Additionally, physical collisions, IoT equipment selection, communication technology, efficient UAV networking, and regulatory concerns have been discussed. Moreover, cloudlets and computational offloading (CO) were shown to be among the best solutions for efficient computing while conserving energy.
\textcolor{blue}{\subsection{UAV-Enabled Mobile Edge Computing}
Mobile Edge Computing (MEC) provides communication services and near-user processing facilities to users and is a promising technology for future UAV communication \cite{wu2020energy,yangtwc2019,zhoutc2020}. UAV-enabled MEC networks promise to increase computing efficiency and reduce execution latency. UAVs can also be deployed as relay edge computing nodes, and UAV-enabled MEC networks have been suggested to address the shortcomings of current MEC networks with fixed base stations and limited computing capacity. In addition to wireless power transfer (WPT) and energy harvesting, which can prolong the operational time of UAVs, Zhou \emph{et al.} \cite{zhou2018uav} studied a UAV-enabled wireless MEC system and jointly optimized the number of offloaded computation bits, the local computation frequencies of the users and the UAV, and the trajectory of the UAV. However, since the flight time and battery capacity of a UAV are limited while a large number of users usually need to be served in the geographic coverage area, efficient resource allocation schemes still need to be established for UAV-enabled MEC networks with multiple users and multiple UAVs \cite{zhou2019mobile}.}
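The core offloading decision such systems make can be sketched as a latency comparison between local execution and transmit-then-compute at the edge. This is a toy illustration, not the joint optimization in \cite{zhou2018uav}; every parameter value below (task size, CPU rates, uplink rate) is hypothetical.

```python
def local_latency_s(task_bits: float, cycles_per_bit: float,
                    local_cpu_hz: float) -> float:
    """Time to process the task on the UAV/user device itself."""
    return task_bits * cycles_per_bit / local_cpu_hz

def offload_latency_s(task_bits: float, uplink_bps: float,
                      cycles_per_bit: float, edge_cpu_hz: float) -> float:
    """Uplink transmission time plus edge processing time."""
    return task_bits / uplink_bps + task_bits * cycles_per_bit / edge_cpu_hz

def should_offload(task_bits: float, uplink_bps: float, cycles_per_bit: float,
                   local_cpu_hz: float, edge_cpu_hz: float) -> bool:
    """Offload whenever the round-trip to the edge beats local execution."""
    return (offload_latency_s(task_bits, uplink_bps, cycles_per_bit, edge_cpu_hz)
            < local_latency_s(task_bits, cycles_per_bit, local_cpu_hz))
```

With a fast uplink and a much faster edge CPU the edge wins; when the uplink is the bottleneck, local computation is preferred, which mirrors why trajectory and bandwidth allocation matter so much in UAV-enabled MEC.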
\textcolor{blue}{
\subsection{URLLC-Enabled UAV Communication System}
Ultra-reliable low-latency communication (URLLC) is a key service class of fifth-generation mobile networks and is essential for mission-critical applications such as autonomous vehicles \cite{she2018uav,ozger2018towards,han2019uav,li2018uav}. The transmission of control signals from the drone operator to the UAV poses new challenges to UAV communication, as such links have strict latency and reliability requirements to serve critical safety functions, such as real-time collision monitoring. Ren \emph{et al.} \cite{ren2019achievable} suggested that the use of short packets for Control and Non-Payload Communications (CNPC) would enable URLLC in a UAV communication network. Unlike conventional communications with relatively long transmission delays and large packet sizes, packets with a finite blocklength can support extremely low-latency transmission. Since a low data rate is usually sufficient to share control details between the operator and the UAV, short packet transmission does not degrade the transmission quality. However, the literature on short packet communication has shown that certain adjustments to classical information-theoretic principles are needed to model such a channel \cite{ostman2019}. In addition, researchers have analyzed UAV relay networks under URLLC criteria; effective low-complexity iterative algorithms have been proposed to solve the optimization problems associated with such relay networks \cite{pan2019joint}. Work by Ajam \emph{et al.} \cite{ajam2020} established the ergodic sum rate of UAV-based relay networks with mixed RF and free-space optical channels. Their analysis showed that these networks can provide high rates, which can be further enhanced by optimal positioning of the UAV. These developments can help meet the requirements of URLLC.}
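The finite-blocklength adjustment mentioned above can be sketched with the standard normal approximation for the AWGN channel from the short-packet literature: the achievable rate is Shannon capacity minus a penalty term that shrinks as the blocklength grows. This is a generic textbook result, not a formula taken from the cited papers, and the SNR, blocklength, and error-probability values are illustrative.

```python
import math
from statistics import NormalDist

def short_packet_rate(snr: float, blocklength: int, error_prob: float) -> float:
    """Normal approximation to the maximal coding rate (bits per channel
    use) of an AWGN channel: R ~ C - sqrt(V/n) * Qinv(eps)."""
    capacity = math.log2(1 + snr)                          # Shannon capacity C
    dispersion = (1 - 1 / (1 + snr) ** 2) * math.log2(math.e) ** 2  # V
    q_inv = NormalDist().inv_cdf(1 - error_prob)           # Q^{-1}(eps)
    return capacity - math.sqrt(dispersion / blocklength) * q_inv
```

At CNPC-like packet lengths of a few hundred channel uses, the achievable rate sits visibly below capacity; this gap is the penalty a URLLC link budget must absorb.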
\subsection{Integrating UAVs into Cellular Networks}
In the past few years, there has been significant interest in integrating a UAV communication system into existing and future cellular networks \cite{fouda2018uav,mozaffari2017mobile,chen2017caching}. Ever since the early 2000s, many attempts have been made to integrate UAVs with cellular networks. Wzorek \emph{et al.} \cite{wzorek2006gsm} presented a prototype network created between two UAVs and a ground operator using GPRS technology in 2006. However, due to technology limitations, the idea was neither further developed nor commercialized. In 2016, the China Mobile Research Institute and Ericsson presented field results collected in a prototype LTE-UAV integrated network. In this prototype, they elaborated on how the drone ecosystem can benefit from mobile technologies, summarized key capabilities required by drone applications, and analyzed the service requirements of mobile networks \cite{zeng2018cellular,yang2018telecom,muruganathan2018overview}. Researchers further investigated this scheme, and the 3rd Generation Partnership Project (3GPP) released several proposals that investigated the ability of the LTE network to serve aerial vehicles \cite{korhonen}. This series of studies was completed at the end of 2017, and the outcomes were documented in the 3GPP technical report \cite{korhonen}, which included comprehensive analysis, evaluation, and field measurement results. Field trials were performed by a number of telecommunication companies to analyze the performance of a cellular-connected UAV in a commercial cellular network and to compare handover and link reliability between ground and airborne UEs. Overall, these studies provided insights into various aspects and shortcomings of integrating UAVs with existing cellular networks. They identified the following potential issues when aerial vehicles are integrated with the LTE network.
\begin{itemize}
\item \textbf{High Line of Sight Interference}\\
In the downlink, the percentage of cellular-connected UAVs experiencing cell-edge-like radio conditions (i.e., poor downlink SINR) is much higher than that of terrestrial UEs. This is because cellular-connected UAVs, owing to their high line-of-sight propagation probability, are subjected to downlink interference from a larger number of cells than typical terrestrial users. The number of neighboring cells causing high levels of downlink interference at a cellular-connected UAV is correspondingly higher than for terrestrial users.
\item \textbf{High Altitude}\\
Compared to conventional terrestrial users, UAVs typically fly at much higher altitudes. If the Base Transceiver Station (BTS) antennas are tilted downwards, either mechanically or electronically, a cellular-connected UAV is likely to be served by the side lobes of the antennas, especially if it is directly above the BTS antenna boresight. Due to possible nulls in the side lobes, a cellular-connected UAV may see a stronger signal from a faraway BTS than from the geographically closest one, and may therefore be served by a faraway base station instead of the closest one.
\item \textbf{Measurement Reporting Mechanism}\\
The RSRP (Reference Signal Received Power) and RSRQ (Reference Signal Received Quality) measurement of a cellular-connected UAV in the air are different from those associated with terrestrial users.
\item \textbf{High Mobility}\\
The high mobility of UAVs generally results in more frequent handovers and time-varying wireless backhaul links with ground stations. Hence, the mobility performance of cellular-connected UAVs is worse than that of terrestrial users.\\\end{itemize}
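The LOS-interference issue in the list above can be made concrete with a crude path-loss sketch: for the same cell layout, the aerial UE sees a lower downlink SINR simply because interfering cells are no longer shielded by NLOS attenuation. The path-loss exponents, distances, and power levels below are illustrative assumptions, not 3GPP channel-model values.

```python
import math

def downlink_sinr_db(serving_dist_m, interferer_dists_m, path_loss_exp,
                     tx_power_mw=1000.0, noise_mw=1e-9):
    """SINR under a simple power-law path-loss model P_rx = P_tx * d^-a."""
    signal = tx_power_mw * serving_dist_m ** (-path_loss_exp)
    interference = sum(tx_power_mw * d ** (-path_loss_exp)
                       for d in interferer_dists_m)
    return 10 * math.log10(signal / (interference + noise_mw))

cells = [500.0, 700.0, 900.0]                     # distances to interfering BTSs
ground_ue = downlink_sinr_db(200.0, cells, 3.5)   # NLOS-like exponent
aerial_ue = downlink_sinr_db(200.0, cells, 2.2)   # near-LOS exponent at altitude
```

Even though the serving and interfering distances are identical in both cases, the near-LOS exponent lets the interferers through, pushing the aerial UE toward cell-edge-like SINR.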
Most of the current research in the field of cellular-connected UAVs focuses on finding potential solutions to the issues mentioned above. In this section, several solutions and promising technologies to efficiently enable a cellular-connected communication system for UAVs are discussed. These solutions and technologies can be divided into two categories: network-based solutions and user equipment-based solutions.
\begin{itemize}
\item \textbf{Full Dimension MIMO (FD-MIMO)}\\
Full dimension multiple-input and multiple-output (FD-MIMO) is one of the crucial technologies currently being studied in the mobile communication field. The technique features scalability and the potential to deliver very high and stable throughput \cite{chandhar2017massive}. A massive MIMO cellular system may use multiple antennas at a base station to mitigate the interference in a UAV communication system. In FD-MIMO transmission, the number of antennas has been increased beyond what is supported in conventional cellular communication systems, and antennas are no longer placed in a linear one-dimensional (1D) array but in a two-dimensional (2D) planar array \cite{kim2014full}.
\item \textbf{Non-Orthogonal Multiple Access (NOMA)}\\
A multiple access technique is extremely important for a cellular-connected UAV communication system, and researchers have proposed several access techniques such as Non-Orthogonal Multiple Access (NOMA), Time Division Multiple Access (TDMA), Orthogonal Multiple Access (OMA), and Beam Division Multiple Access (BDMA). Among these, NOMA has received remarkable attention from both academia and industry \cite{liu2019uav,ding2017application,liu2018non,cai2017modulation,qureshi2018divide,khan2019machine,qureshi2017successive,wang2019multiple,nasir2019uav}. The fundamental idea of NOMA is to use different power levels for multiple users on the same resource block (time/frequency/code/space), whereas previous generations of mobile networks have used different frequencies to handle multiple users. Various recent studies have considered the use of NOMA to improve the performance of a cellular-connected UAV communication system. In \cite{liu2019uav}, the authors considered a cellular-connected UAV communication network that serves a large number of users by employing NOMA, and formulated a maximum-rate optimization problem under total power, total bandwidth, UAV altitude, and antenna beamwidth constraints.
\item \textbf{Directional Antennas of Cellular-connected UAVs}\\
In this scenario, UAVs are assumed to be equipped with directional antennas instead of omnidirectional antennas. Directional antennas are used to mitigate interference in the downlink to aerial UEs by decreasing the interference power coming from a broad range of angles. Even with a high density of UAVs, directional antennas are found to be beneficial in limiting the impact on the throughput of downlink terrestrial users. Since the use of directional antennas is closely tied to their implementation in UAVs, specific enhancements may be needed. The UAV's direction of travel and its LOS (line-of-sight) tracking capability are considered when tracking the LOS direction between a UAV and the serving cell. Depending on its capability of tracking the LOS direction to its serving cell, the UAV can align the antenna direction with the LOS direction to amplify the power of the useful signal.
\item \textbf{Beamforming for Cellular-connected UAVs}\\
Beamforming is a powerful technique widely used in signal processing, radar, sonar, navigation, and in particular, in wireless communications. In cellular mobile communications, beamforming has been used to control the transmitted and/or received signal amplitude and phase according to the desired application and channel environment \cite{bogale2017mmwave}. Applying beamforming technique in a cellular-connected UAV network has its challenges due to the highly mobile structures of the network elements. However, the Linearly Constrained Minimum Variance (LCMV) beamformer and the Reference Signal Based (RSB) beamformer have attracted increasing attention in the UAV communication research field \cite{shen2018adaptive}. Recently, Zhang \emph{et al.} \cite{zhangbf2020} have proposed a hybrid beamforming scheme for 5G and beyond cellular mobile communications, which is expected to have an increasing impact on UAV communication.
\end{itemize}
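The power-domain principle behind NOMA described above can be sketched for two users sharing one resource block, with the near (strong-channel) user performing successive interference cancellation (SIC). The power split and channel gains below are hypothetical values chosen only to illustrate the mechanism, not parameters from the cited studies.

```python
import math

def noma_rates(p_near, p_far, gain_near, gain_far, noise=1.0):
    """Achievable rates (bits/s/Hz) for two-user power-domain NOMA.
    The far (weak) user decodes its own signal while treating the near
    user's superimposed signal as noise; the near (strong) user first
    cancels the far user's signal via SIC, then decodes interference-free."""
    rate_far = math.log2(1 + p_far * gain_far / (p_near * gain_far + noise))
    rate_near = math.log2(1 + p_near * gain_near / noise)
    return rate_near, rate_far

# Convention: allocate more power to the far user with the weaker channel.
r_near, r_far = noma_rates(p_near=0.2, p_far=0.8, gain_near=10.0, gain_far=1.0)
```

Both users obtain a positive rate on the same time-frequency resource, which is exactly the multiplexing gain over OMA that motivates NOMA for dense cellular-connected UAV deployments.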
Based on the aforementioned promising technologies, it is concluded that cellular networks are capable of serving UAVs, but there may be challenges related to interference as well as mobility. More implementation-based solutions and solutions that require specification enhancements should be identified to address these issues.
{\color{blue}
\subsection{Summary of Lessons Learned}
\par To summarize, the main lessons learned from this section are:
\begin{itemize}
\item The architecture of the UAV communication network is affected by the configuration of the antenna and resource handling platforms used for communications. Antenna design for the UAV communication network is an important research direction and can be achieved using a number of techniques, such as 3D MIMO.
\item To date, researchers in the UAV communication area have investigated a variety of cellular-connected UAV use cases and have obtained some results. They have faced both opportunities and challenges, as both the 5G and UAV fields are still young. Nevertheless, researchers must continue to tackle these problems by trial and error before the 5G drone becomes a reality.
\item Despite the significant number of works on the URLLC-enabled UAV communication system, there are many fundamental open issues that need to be studied and the requirement of highly reliable and time-critical connectivity remains a challenge for UAVs. Although some difficulties remain in the implementation of a Mobile Edge Computing System (MECS) based approach to the UAV communication, MECS can be further enhanced and can provide better QoS for the UAV communication networks.
\item A variety of research work addresses the wide range of IoT technologies, existing or still under standardization, that would need to be integrated into the future communication network. UAVs have been suggested as possible solutions to ease this integration and to resolve the weaknesses of the terrestrial network.
\end{itemize}
}
\section{Recent Technological Advancements}
\begin{table*}[!htbp]
\centering
\caption{Advanced Techniques for UAV Communication Enhancement}
\label{t2}
\begin{tabular}{lll}
\hline
\textbf{Technological Advancement} & \textbf{Major topic} & \textbf{Contribution} \\ \hline
\multirow{5}{*}{Artificial intelligence} & Packet transmission failure prediction & \cite{park2016prediction} \\ \cline{2-3}
& Vehicular density estimation & \cite{oubbati2017intelligent} \\ \cline{2-3}
& Response time prediction module & \cite{jung2017acods} \\ \cline{2-3}
& Swarm Intelligence & \cite{shrit2017new} \cite{saha2018cloud} \\ \cline{2-3}
& Classification of disaster stages & \cite{erdelj2017help} \\ \hline
\multirow{4}{*}{Navigation Strategies} & Path Planning & \cite{chi2012civil} \cite{yuan2017outdoor} \\ \cline{2-3}
& Data gathering and routing & \cite{wu2017orsca} \\ \cline{2-3}
& Position verification via shortest path & \cite{perazzo2015verifier} \\ \cline{2-3}
& Sensor support for navigation & \cite{coppola2018board} \cite{moran2017hybrid} \\ \hline
\multirow{5}{*}{{Techniques for secure UAV communication}} & Electronic fence technique & \cite{zhao2018antenna} \\ \cline{2-3}
& Jelly-fish attack on MANET in UAV network & \cite{thomas2015secure} \\ \cline{2-3}
& Encryption & \begin{tabular}[c]{@{}l@{}}\cite{ramdhan2016codeword} \cite{cheon2018toward} \cite{singandhupe2018reliable}\\ \cite{quist2013novel} \cite{steinmann2016uas} \cite{he2017drone}\\ \cite{samland2012ar}\end{tabular} \\ \cline{2-3}
& Physical intrusion attack & \cite{multerer2017low} \cite{knoedler2016detection} \\ \cline{2-3}
& Rules and Regulations & \cite{clarke2014regulation} \\ \hline
\multirow{5}{*}{Optimization theory for UAV Communication System} & Preservation of energy & \begin{tabular}[c]{@{}l@{}}\cite{jung2017acods} \cite{koubaa2017service} \cite{long2018energy}\\ \cite{zorbas2013energy} \cite{fotouhi2017understanding}\end{tabular} \\ \cline{2-3}
& Data compression & \cite{shetti2015evaluation} \\ \cline{2-3}
& Power allocation & \cite{naqvi2018drone} \\ \cline{2-3}
& Battery-free network & \cite{ma2017drone} \\ \cline{2-3}
& Latency reduction & \cite{kagawa2017study} \cite{sun2017latency} \\ \hline
\end{tabular}
\end{table*}
Technological advancements such as machine learning, artificial intelligence, and navigation strategies enhance communication for drones. However, an issue of concern for drones is security. Several cryptographic practices come into play in addressing this issue. Good cryptographic design must be fast and energy-efficient. To ensure this, optimization needs to be performed. Incorporating these fields in data transmission increases the resilience and robustness of a system. In this section, the effects of these technologies are reviewed. Subtopics for each technological advancement, with their respective contributing papers, are summarized in Table \ref{t2}.
\subsection{Artificial Intelligence }
The rise of artificial intelligence (AI) has benefited a multitude of fields, including drone communication and control. AI is being applied to different aspects of communication to improve efficiency, resilience, and robustness for drones. Park \emph{et al.} \cite{park2016prediction} attempted to predict transmission failure using machine learning. Packet transmission rates of a network have been simulated with UAVs: Monte-Carlo Simulation (MCS) has been used for computing the success and failure probabilities of transmission, and the network transmission process has been simulated using a Susceptible-Infected-Recovery (SIR) model. Predictions of the Support Vector Machine with Quadratic Kernel (SVM-QK) method were found to be faster and more accurate than Linear Regression (LR). Oubbati \emph{et al.} \cite{oubbati2017intelligent} showed that vehicular density could be estimated for a given road segment using a UAV with the help of machine learning (ML) to support the deployment decision. Jung \emph{et al.} \cite{jung2017acods} presented a response-time prediction module, which guides a decision engine to smartly choose between processing data on-board or transmitting it using MultiPath TCP (MPTCP), which increases wireless network performance. ML and MPTCP together form the Adaptive Computation Offloading Drone System (ACODS), which provides performance improvement. With the help of artificial intelligence, suitable algorithms are being developed to provide efficient control over swarms of drones, as exemplified by Shrit \emph{et al.} \cite{shrit2017new}. A swarm intelligence-based design with specific communication among systems of drones and bot clusters was proposed by Saha \emph{et al.} \cite{saha2018cloud}. A master drone fetches the sensor information from the cloud upon request, thereby achieving coordination between ground and sky systems.
Additionally, a new auto-relay method has been designed by Kong \cite{kong2017autonomous} for enhancing millimeter wave communication by quickly driving drones to optimal relay locations. Directionality is adjusted by frequent matrix updates and real-time samples of link quality to find optimal locations, resulting in higher stability and accuracy than the KNN and TR algorithms. Classification algorithms have played essential roles in developing intelligent systems used for surveying regions with UAVs. Erdelj \emph{et al.} \cite{erdelj2017help} presented work for classifying disaster stages and outlined suitable network architectures for efficient communication management using UAVs. Static WSN deployments become less effective with progressing disaster stages; based on the suggested classification, WSN and UAV deployments have been recommended accordingly.
\subsection{Navigation Strategies for UAVs}
Certain drone-enhanced communication systems and applications require specialized routing and navigation strategies, among them mobile data gathering and routing schemes using UAVs \cite{wu2017orsca}. Chi \emph{et al.} \cite{chi2012civil} used 3G communication to design a path planning algorithm, a slight modification of the A* algorithm, to extend the service range of UAVs while avoiding no-signal areas and keeping communication links intact. A traveler location verifier problem (TLVP) was investigated by Perazzo \emph{et al.} \cite{perazzo2015verifier} to securely verify the positions of devices through multi-lateration verification, which requires the shortest possible path for a drone. VerifierBee, a path planning algorithm, has been proposed as a solution to improve the path length. A different routing technique uses a Decentralized Model Predictive Control (DMPC) algorithm called flocking; it was introduced by Yuan \emph{et al.} \cite{yuan2017outdoor} for a multi-drone system and depends on the communication range of the XBee wireless module used in broadcast mode. Coppola \emph{et al.} \cite{coppola2018board} proposed an innovative technique of using communication technology instead of sensors for multi-UAV collision avoidance. Wireless communication has been suggested as a relative localization tool to be used by cooperating vehicles: UAVs communicate with each other using wireless transceivers and exchange their on-board states for use in collision avoidance algorithms based on the collision cone approach. The assistance of vehicular communication systems in navigation is also on the rise.
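A minimal sketch of the kind of grid-based planning used in \cite{chi2012civil}: plain A* with a Manhattan-distance heuristic, where cells marked 1 stand in for no-signal areas the UAV must route around. The grid, costs, and endpoints are illustrative; the paper's actual modification of A* is not reproduced here.

```python
import heapq

def plan_path(grid, start, goal):
    """A* over a 4-connected grid; grid[r][c] == 1 marks a no-signal cell."""
    rows, cols = len(grid), len(grid[0])
    heuristic = lambda p: abs(p[0] - goal[0]) + abs(p[1] - goal[1])
    frontier = [(heuristic(start), 0, start, [start])]  # (f, g, cell, path)
    visited = set()
    while frontier:
        _, cost, cell, path = heapq.heappop(frontier)
        if cell == goal:
            return path
        if cell in visited:
            continue
        visited.add(cell)
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            r, c = cell[0] + dr, cell[1] + dc
            if 0 <= r < rows and 0 <= c < cols and grid[r][c] == 0:
                nxt = (r, c)
                heapq.heappush(frontier,
                               (cost + 1 + heuristic(nxt), cost + 1,
                                nxt, path + [nxt]))
    return None  # goal unreachable while avoiding no-signal areas

# A wall of no-signal cells forces a detour through the right-hand column.
no_signal = [[0, 0, 0],
             [1, 1, 0],
             [0, 0, 0]]
route = plan_path(no_signal, (0, 0), (2, 0))
```

Because the Manhattan heuristic is admissible for unit-cost moves, the returned route is a shortest path among those that keep the communication link intact.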
\subsection{Techniques for Secure UAV Communication}
The increasing use of UAVs has attracted potential security threats, especially in communication protocols \cite{sharma2019neural,sharma2018coagulation,li2019joint}. It was observed by Zhao \emph{et al.} \cite{zhao2018antenna} that high frequency bands (60 GHz) have better performance for detecting the invasion of amateur drones than the over-crowded frequency bands (2.5 GHz or 5 GHz). One major threat, the Jelly Fish attack, was explored by Thomas \emph{et al.} \cite{thomas2015secure} using MANET in sync with a UAV network. They developed a mechanism to prevent such attacks by the use of multicast routing protocols. A routing algorithm selects trustworthy nodes by making decisions for the most reliable and secure paths. Cryptography is another methodology for securing information used in vehicles. Ramdhan \emph{et al.} \cite{ramdhan2016codeword} proposed a data collection protocol based on optical codewords. Others have suggested a hierarchical UAV-network architecture composed of different levels, including sensor nodes, drone nodes, and data collection nodes. Network problems have been approached along two different branches of thought: the first, identification of nodes in the network with the help of optical codewords, and the second, for transferring data from drone nodes to route it to a root drone for further processing and decision making. A controller-based security measure using a technique of homomorphic cryptography was studied by Cheon \emph{et al.} \cite{cheon2018toward} via the design of a practical Linearly Homomorphic Authenticated Encryption (LinHAE) for implementation in controllers.
Another work, by Singandhupe \emph{et al.} \cite{singandhupe2018reliable}, generated an Advanced Encryption Standard (AES) encryption key, derived from an operator's electroencephalogram (EEG) signal, to encrypt communication between XBees. To generate secret keys for video image encryption, Quist-Aphetsi \emph{et al.} \cite{quist2013novel} used a quantum key distribution method, in which the keys are shared and known only to the two parties on the channel. Since each photon, which represents a qubit, is altered immediately when read, it is impossible for any adversary to intercept messages without being detected. Through the encryption key negotiation method discussed by Steinmann \emph{et al.} \cite{steinmann2016uas}, authentication and security can be ensured for partitioned data stored on UAVs and exchanged between a UAV and the Ground Station (GS).
A pseudo-random attribute, generated by the GS, is sent to a UAV to produce its own key. The GS stores all the random attributes so that it can regenerate all keys and decrypt data from the UAV afterward. Fabra \emph{et al.} \cite{fabra2017impact} have suggested using cryptographic keys for setting up public safety networks, along with intrusion detection systems. Another security area deals with threats from amateur drones that intrude into sensitive locations. The work by Long \emph{et al.} \cite{long2018energy} devised a surveillance system using a 3D MIMO radar and an RF jammer with a bidirectional antenna. A target is tracked via the 3D image produced by the radar, and a target detection algorithm is then applied. Afterward, based on the coordinates of the tracked target, servos are instructed to steer the directional antennas toward the intruding drone. A jamming signal is then fed to the antenna, blocking the control of the drone from its control station and thus preventing snooping.
There are other obvious security vulnerabilities, such as communication over unencrypted WLAN and the prevalence of the User Datagram Protocol (UDP). Work by Samland \emph{et al.} \cite{samland2012ar} has claimed that the introduction of a link encryption layer over wireless communication evades most security issues. Considering the increasing number of drones, a successful GSM-based Passive Coherent Location (PCL) system for the detection of small UAVs has been proposed, based on the integration of input from different base stations \cite{knoedler2016detection}. While there exists a highly articulated and well-understood regulatory regime for large aircraft, regulatory arrangements for small civil drones are very uncertain and unreliable in addressing security concerns such as behavioral and data privacy. The majority of the world waits for the International Civil Aviation Organization (ICAO) to impose regulations, but it has declared ``model aircraft'' and ``recreational uses'' to be national responsibilities, even though these craft have caused international incidents several times, shutting down airports and causing substantial economic losses. Insight regarding rules and regulations for the security aspect of drones was given by Clarke \emph{et al.} \cite{clarke2014regulation}.
\subsection{Optimization Theory for UAV Communication System}
Optimization plays an essential role in drone communication to save power and reduce latency wherever possible \cite{yang2018joint}. A technique known as computational offloading reduces the burden on on-board platforms by transmitting images to a Ground Control Station (GCS) for processing. The Adaptive Computation Offloading Drone System (ACODS) introduced by Jung \emph{et al.} \cite{jung2017acods} smartly chooses between on-board processing and transmitting, thus preserving energy in the process. To conserve energy, Koubaa \emph{et al.} \cite{koubaa2017service} suggested the use of IoT with drones to minimize the need for high on-board computational capability. Shetti \emph{et al.} \cite{shetti2015evaluation} presented another unique method of data reduction from a typical sensor, such as an on-board camera, by using a Compressive Sensing (CS) technique. No changes are required to the communication infrastructure, such as WLAN 802.11a, with this method, and it can be extended to other communication links as well. For the problem of limited battery capacity, the concept of an energy-neutral internet-of-drones has been introduced to operate a large number of drones using renewable energy resources \cite{long2018energy}. A wireless power-transfer optical-communication scheme provides harvested energy to the drones. \textcolor{blue}{Yang \emph{et al.} \cite{yangtvt2020} studied a UAV-enabled wireless communication system in which users send data to the UAV using energy harvested from their surroundings. The problem was formulated as an optimization problem, and elaborate mathematical analysis was performed to obtain the solution.} Naqvi \emph{et al.} \cite{naqvi2018drone} also focused on a power allocation strategy for a microwave base station and small base stations operating in the 28 GHz frequency band. Zorbas \emph{et al.} \cite{zorbas2013energy} presented LAS, a localized solution for shrinking the total energy consumption of a fleet of drones in an event-covering scenario. 
To ensure that drone scheduling is reliable with minimum power, drones are allowed to adjust their altitude using a localized approach. Greater energy conservation has been observed compared to statically placed drones. A different study by Fotouhi \emph{et al.} \cite{fotouhi2017understanding} considered battery life, maximum turning frequency, and acceleration. They analyzed the tradeoff between turning agility, flying speed, and battery life. A variety of moving models such as circular, zigzag, and straight-line patterns were evaluated for assessing drone limitations.
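The adaptive offloading choice described above — process an image on board or transmit it to the GCS — can be sketched with a toy per-bit energy model. This is an illustrative assumption for exposition only, not the actual ACODS policy, which also considers channel conditions and latency.

```python
# Illustrative compute-vs-offload decision in the spirit of adaptive
# computation offloading. The per-bit energy costs are toy assumptions,
# not values or rules from the cited papers.

def should_offload(image_bits, cpu_j_per_bit, tx_j_per_bit):
    """True if transmitting the image to the GCS costs less energy
    than processing it on board."""
    e_local = image_bits * cpu_j_per_bit   # energy to process on board
    e_tx = image_bits * tx_j_per_bit       # energy to transmit to the GCS
    return e_tx < e_local
```

A real policy would additionally weigh link quality, queueing delay, and mission deadlines, which the cited works model in far more detail.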
Drones can also be leveraged as a full-duplex relay for battery-free networks, as described in a new system, RFly \cite{ma2017drone}. The relay can ideally be integrated with an already deployed RFID infrastructure to preserve phase and timing of forwarded packets.
Along with power reduction, optimization of latency is a vital component in designing competent systems. A wireless communication system has been developed to improve latency, especially in multi-hop networks. Beyond Line of Sight (BLOS) communication has been adopted for controlling robots and drones using Time Division Multiple Access (TDMA) in the data-link layer to ameliorate the fluctuation in delay time \cite{kagawa2017study}. Interference robustness is strengthened when a system switches between four frequencies using RF modules, out of which 169 MHz gives a larger coverage area than 920 MHz. Samland \emph{et al.} \cite{samland2012ar} utilized another optimization technique, focusing on the energy capacity limitation of a drone base station to minimize the latency ratio of mobile users. The Latency-Aware Drone Base-Station Placement (LEAP) algorithm was designed to achieve the desired results.
{\color{blue}\subsection{Summary of Lessons Learned}
To summarize, the main lessons learned from this section are:
\begin{itemize}
\item Optimizing the UAV trajectory is a critical design concern, because it greatly affects the performance of UAV communication networks. Several constraints and parameters must be addressed in order to optimize the trajectory of UAVs. The trajectory of a UAV is determined on the basis of the user's QoS specifications, the energy usage of the UAV, the size of the UAV, as well as the shape and placement of environmental barriers.
\item With an increasing number of UAVs operating in the sky, security is becoming an increasingly important requirement for UAVs, to protect the data they collect and transmit to the ground against potential hijacking attempts, although the issue can be greatly mitigated by implementing new software and hardware technologies.
\item A number of approaches must be combined to overcome the key challenges of UAV communication systems and to allow the effective use of UAVs for wireless networking applications. Machine learning and other artificial intelligence techniques can be used to address navigation planning, response-time prediction, and packet-transmission-failure prediction.
\end{itemize}}
\section{Applications of UAV Communication}
The communication network capacity of UAVs has been utilized in a variety of applications. In surveillance or situations where other modes of communication fail, drones may prove a useful tool to provide aid by developing into a self-sustaining infrastructure. This section looks into the work in the deployment of UAVs for various scenarios. Table \ref{t3} summarizes the features of different techniques utilized to establish emergency communication infrastructure.
\begin{table*}[!htbp]
\begin{center}
\caption{Communication Techniques and Their Features for Emergency Applications Through Drones}
\label{t3}
\begin{tabular}{lll}
\hline
\textbf{Reference} & \textbf{Techniques / Equipment} & \textbf{Features} \\ \hline
\cite{camara2014cavalry} & Store, carry and forward technique & \begin{tabular}[c]{@{}l@{}}1. Faster deployment of new infrastructure\\ 2. Push-button deployment\end{tabular} \\ \hline
\cite{kang2016spatial} & Cooperative spatial retreat (CSR) & \begin{tabular}[c]{@{}l@{}}1. Evacuation from collapsed\\ \hspace{3mm} communication sites\end{tabular} \\ \hline
\cite{deruyck2018designing} & Special emergency network deployment tool & \begin{tabular}[c]{@{}l@{}}1. With intervention duration and\\ \hspace{3mm} number of users, drone requirement \\ \hspace{3mm} increases linearly\end{tabular} \\ \hline
\cite{thapa2016impact} & Powerful dual-band AP & \begin{tabular}[c]{@{}l@{}}1. Low-cost balloon network \\ 2. Provide free WiFi signals\end{tabular} \\ \hline
\cite{zahariadis2017preventive} & 5G architecture \& radar & \begin{tabular}[c]{@{}l@{}}1. Preventive Maintenance as a Service \\ \hspace{3mm} (PMaas) \\ 2. Mobile device monitor\end{tabular} \\ \hline
\cite{he2017drone} & Special communication hardware & \begin{tabular}[c]{@{}l@{}}1. Aerial mobile stations \\ 2. Reduce coverage gap and network \\ \hspace{3mm} congestion\end{tabular} \\ \hline
\cite{miyamoto2015demo} & \begin{tabular}[c]{@{}l@{}}Survivor devices with connectionless broadcast \\ communication\end{tabular} & \begin{tabular}[c]{@{}l@{}}1. Devices emit stress signals\\ 2. Separate mobile application\end{tabular} \\ \hline
\cite{moon2016uav} & Kalman filter \& special hardware module & \begin{tabular}[c]{@{}l@{}}1. 3-D position detection using sensor \\ \hspace{3mm} fusion \\ 2. Diverse signal detection\end{tabular} \\ \hline
\end{tabular}
\end{center}
\end{table*}
\subsection{UAV-Aided Disaster Management Network}
Drones are being tested to provide network infrastructure in case of emergencies, like natural disasters, to replace damaged infrastructure or reduce the deployment time of new infrastructure. An architecture composed of specialized drones has been proposed in \cite{camara2014cavalry} which uses internal modules to organize and accomplish specific objectives. It has been devised in a ``push-button'' way to deploy as a fleet of drones for scanning a region and conveying information. The store-carry-and-forward technique was emphasized. For improving the trust of net-drones, the Cooperative Spatial Retreat (CSR) method was devised by Kang \emph{et al.} \cite{kang2016spatial} for net-drones to physically evacuate from an area when a communication collapse is imminent. A deployment tool for UAV-aided emergency networks was suggested by Deruyck \emph{et al.} \cite{deruyck2018designing} and applied in a realistic large-scale disaster scenario at the center of Ghent, Belgium. Their study showed that the number of required drones scaled linearly with the intervention duration and the number of users covered. Thapa \emph{et al.}
\cite{thapa2016impact} presented a framework consisting of a low-cost balloon network with a powerful dual-band AP for rescue operation when other internet connections get interrupted. In such situations, balloons are used to provide free WiFi signals. Aerial vehicles can also act as monitoring mobile devices (MDs) and for searching trapped earthquake survivors. Zahariadis \emph{et al.} \cite{zahariadis2017preventive} also utilized drones’ remote control for critical infrastructures with a 5G architecture to provide Preventive Maintenance as a Service (PMaaS) in a distribution and transmission network of energy (electricity and gas).
Drones have made their way into surveillance from the beginning of the era of UAVs. An introduction of drones in the field of security was provided by He \emph{et al.} \cite{he2017drone}, where drones were equipped with communication hardware and sent to suitable positions for ensuring public safety. These drones act as aerial mobile stations with the advantage of reducing coverage gaps and network congestion. A survivor locator system consisting of smart devices, drones, and connectionless broadcast communication for survivor devices was demonstrated by Miyamoto \emph{et al.} \cite{miyamoto2015demo}. Survivor devices may emit messages to a rescue team, which could be detected using opportunistic, connection-oriented content sharing. A prototype for such an application was developed, which exploited hardware functionalities.
Additionally, a drone-based framework was suggested by Moon \emph{et al.} \cite{moon2016uav}, which used sensor fusion for 3D positioning, exploiting WiFi for the 2D position and barometer data for the Z value of buried personal mobile phones.
Drones are typically equipped with a hardware module for detecting diverse signal strengths such as RSSI (Received Signal Strength Indication). Studies found that conventional GPS modules equipped on drones gave poor accuracy. Thus, a variety of algorithms, such as Kalman filter and other optimization algorithms, have been considered to reduce distance errors. Alternatively, techniques such as Real Time Kinematic (RTK), Post Processed Kinematic (PPK) and Ground Control Points (GCPs) can also be used to improve the accuracy \cite{rtkppk2017}.
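As an illustration of the filtering idea above, a minimal one-dimensional Kalman filter can smooth noisy RSSI-derived distance readings. This is a generic sketch; the noise variances \(q\) and \(r\) are illustrative assumptions, not values from the cited studies.

```python
# Minimal 1-D Kalman filter for smoothing noisy RSSI-derived distance
# estimates (illustrative sketch; parameters are assumptions).

def kalman_smooth(measurements, q=1e-3, r=0.5):
    """Filter a sequence of noisy distance readings.

    q: process-noise variance (how fast the true distance may drift)
    r: measurement-noise variance (RSSI ranging error)
    """
    x, p = measurements[0], 1.0   # initial state estimate and covariance
    out = []
    for z in measurements:
        p = p + q                 # predict: covariance grows by process noise
        k = p / (p + r)           # Kalman gain
        x = x + k * (z - x)       # update: blend prediction with measurement
        p = (1.0 - k) * p         # posterior covariance
        out.append(x)
    return out
```

Each update is a convex combination of the prediction and the new reading, so the filtered estimate never leaves the range of the observed data.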
Naqvi \emph{et al.} \cite{naqvi2018drone} suggested a cellular-connected UAV communication network that provides mobile connectivity to disaster areas where the terrestrial cellular network might have been damaged due to ongoing conflict, natural hazard, or technological hazards. In the paper, the authors have proposed a routing protocol for a cellular-connected UAV communication network to maintain reliable and secure connectivity within affected areas.
Since cellular-connected UAV communication is a prominent topic in the 5G arena, both academia and industry concur that cellular-connected UAV communication networks will enable real-time feedback loops. These help control UAVs delivering emergency supplies in disaster areas while maintaining non-stop connectivity with public safety agencies and emergency response teams. Due to the expansive mobility of UAVs, rapid service recovery is possible in case the terrestrial cellular network is damaged. Such networks also allow first responders to maintain a closed-circuit communication and command mechanism and provide additional power to amplify broadcast warnings and updates.
Furthermore, research in the field of UAV communication resorts to device-to-device (D2D) communication as it increases the reliability of the cellular network and the uplink capacity available to responders outside the affected areas. Fundamentally, these promising technologies create a ‘ubiquitous’ experience for the emergency workers that allows them to get an immediate aerial view of the damaged areas and identify the crucial physical infrastructure, facilitating improved situational awareness. Even if network infrastructure is not damaged by hazards, UAVs may continue to act as flying base stations for the cellular network and allow the release of some traffic from the terrestrial network to provide additional bandwidth for people in the affected areas.
Since airplanes cannot stay airborne for a long time and satellites are too far above the Earth's surface, emergency responders can no longer rely on conventional aircraft alone to get live updates, such as aerial photography and videography, from the affected areas. Thus, UAVs act as their substitutes in the air. Generally, these live video or GIS updates help to identify and locate vulnerable and affected people, infrastructure, livestock, and other entities. However, the transmission of the information feed, such as real-time video streaming or images, from the UAVs to the responder's end relies on the quality and capabilities of the wireless links. Usually, the quality of the link depends on the speed of the UAVs and the distance between the ground station and the UAVs. However, traditional video-streaming techniques used for mobile and web applications are not suitable for UAVs because of their high mobility. As a solution to this problem, Wang \emph{et al.} \cite{wang2016skyeyes} proposed a new video streaming algorithm to improve the quality of real-time video streaming and reduce the uncertainty of the wireless links of the UAV communication system.
Mayor \emph{et al.} \cite{mayor2019deploying} presented a UAV communication strategy for disaster management, which integrates a WiFi network with the UAV network to enable VoIP communication for affected people. The authors used well-known machine learning algorithms such as K-means clustering and genetic algorithms to improve the performance of the network. However, one critical weakness of this network is its inability to deal with user mobility. To date, researchers in the UAV communication field have explored many methods to build UAV-aided disaster management networks and have achieved some results. They have faced both opportunities and challenges, since both the real-time response systems and UAV communication fields are still in their infancy. However, researchers will continue to explore through trial and error and tackle these challenges until 5G-aided emergency drones become a reality.
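As a rough illustration of the clustering approach mentioned above, the following sketch places \(k\) UAV base stations at the centroids of ground-user clusters using plain Lloyd's iterations. It is a generic K-means sketch under simplifying assumptions (2D positions, Euclidean distance), not the implementation of Mayor \emph{et al.}

```python
import random

def kmeans_uav_placement(users, k, iters=50, seed=0):
    """Place k UAV base stations at centroids of user clusters.

    users: list of (x, y) ground-user positions.
    Returns the k centroid positions (plain Lloyd's algorithm).
    """
    rng = random.Random(seed)
    centers = rng.sample(users, k)
    for _ in range(iters):
        # Assign each user to its nearest UAV position.
        clusters = [[] for _ in range(k)]
        for (ux, uy) in users:
            j = min(range(k),
                    key=lambda i: (ux - centers[i][0]) ** 2 + (uy - centers[i][1]) ** 2)
            clusters[j].append((ux, uy))
        # Move each UAV to the mean of its assigned users.
        new_centers = []
        for j, pts in enumerate(clusters):
            if pts:
                new_centers.append((sum(p[0] for p in pts) / len(pts),
                                    sum(p[1] for p in pts) / len(pts)))
            else:
                new_centers.append(centers[j])  # keep an empty cluster's UAV in place
        if new_centers == centers:
            break
        centers = new_centers
    return centers
```

A deployed system would further constrain altitudes, coverage radii, and backhaul links, which this sketch deliberately omits.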
\textcolor{blue}{\section{Challenges, Open Issues, and Future Directions}}
\begin{table*}[!htbp]
\caption{Drone Communication: Advanced Algorithms and Platforms}
\centering
\label{t4}
\begin{tabular}{lccl}
\hline
\textbf{Source} & \textbf{Algorithm / Platform} & \textbf{Domain} & \textbf{Functionality} \\ \hline
\cite{pizetta2016hardware} & AuRoRA & Resource Handling & \begin{tabular}[c]{@{}l@{}}1. Serves as ground station\\ 2. Prevent overloading of single \\ computer ground stations\end{tabular} \\ \hline
\cite{dantu2011programming} & Karma & Resource Handling & \begin{tabular}[c]{@{}l@{}}1. Shift complex coordination \\ task to central hive computer\end{tabular} \\ \hline
\cite{lee2016devising} & AFAR & Drone network & \begin{tabular}[c]{@{}l@{}}1. Utilizes geographical \\ information and flooding\end{tabular} \\ \hline
\cite{akka2018} & IACO & Path Planning & \begin{tabular}[c]{@{}l@{}}1. Can successfully solve mobile\\ agent routing problem \\ 2. Robust and self-adaptive\end{tabular} \\ \hline
\cite{perazzo2015verifier} & VerifierBee & Path Planning & 1. Give shortest path for TLVP \\ \hline
\cite{yuan2017outdoor} & DMPC & Multi-UAV system & 1. Based on XBee communication \\ \hline
\cite{cheon2018toward} & LinHAE & Cryptography & \begin{tabular}[c]{@{}l@{}}1. Linear homography authentication\\ for controllers\end{tabular} \\ \hline
\cite{ma2017drone} & RFly & Drone network & \begin{tabular}[c]{@{}l@{}}1. Combine with existing RFID\\ infrastructure \\ 2. Preserve phase and timing of\\ forwarded packets\end{tabular} \\ \hline
\cite{sun2017latency} & LEAP & Optimization of latency & \begin{tabular}[c]{@{}l@{}}1. Study the energy capacity\\ limitation of drone base station\end{tabular} \\ \hline
\end{tabular}
\end{table*}
\begin{figure*}[!htbp]
\includegraphics [width=0.95\textwidth]{5g.png}
\caption{Proposed Cellular-connected UAV Communication Network}
\label{5g}
\end{figure*}
{\color{blue}
\begin{table*}[!htbp]
\centering
\caption{Communication Advancements in Multi-UAVs}
\label{multiuav}
\begin{tabular}{llll}
\hline
\textbf{Target} & \textbf{Key technology} & \textbf{Peculiarity} & \textbf{Contribution} \\ \hline
\multirow{3}{*}{\textit{\begin{tabular}[c]{@{}l@{}}Formation of swarm \\ of multiple drones\end{tabular}}} & \begin{tabular}[c]{@{}l@{}}Integration of sensor network \\ with GCS\end{tabular} & Centrally controlled GCS & \cite{burkle2011towards} \\ \cline{2-4}
& \begin{tabular}[c]{@{}l@{}}Use of Ad-hoc \\ Communication\end{tabular} & \begin{tabular}[c]{@{}l@{}}Piloted leader drone and autonomous \\ followers, Light and efficient\end{tabular} & \cite{shrit2017new} \\ \cline{2-4}
& ‘Karma’: Hive Drone Model & \begin{tabular}[c]{@{}l@{}}Complexity of individual MAV \\ moved to Central computer entirely\end{tabular} & \cite{dantu2011programming} \\ \hline
\multirow{3}{*}{\textit{\begin{tabular}[c]{@{}l@{}l@{}}Communication \\ between aquatic \\drones\end{tabular}}} & \begin{tabular}[c]{@{}l@{}}HANCAD and CORATAM \\ projects\end{tabular} & \begin{tabular}[c]{@{}l@{}}Enabling MANET on low-cost aquatic\\ drones\end{tabular} & \cite{christensen2015design} \\ \cline{2-4}
& \begin{tabular}[c]{@{}l@{}}Cloud information based \\ Drones-bots cluster system\end{tabular} & \begin{tabular}[c]{@{}l@{}}Coordination between sky and ground \\ with information fetch by master\\ drone on requirement\end{tabular} & \cite{saha2018cloud} \\ \hline
\textit{\begin{tabular}[c]{@{}l@{}}Multi UAV collision \\ Avoidance\end{tabular}} & \begin{tabular}[c]{@{}l@{}}Exchange of on-board \\ states of vehicles\end{tabular} & \begin{tabular}[c]{@{}l@{}}Without use of sensors, application of \\ collision cone technique with \\ communication modules\end{tabular} & \cite{coppola2018board} \\ \hline
\textit{Routing technique} & \begin{tabular}[c]{@{}l@{}}DMPC(Decentralized Model \\ Predictive Control)\end{tabular} & \begin{tabular}[c]{@{}l@{}}Use of XBee wireless modules in\\ broadcast mode\end{tabular} & \cite{yuan2017outdoor} \\ \hline
\end{tabular}
\end{table*}
With the emergence of new communication technologies such as 5G, communication networks are becoming more resilient, reliable, and robust. New technologies are being devised to utilize the present structure of stationary base stations and mobile users. The novel approaches of UAV utilization in the various works mentioned in this paper still lack many of the benefits of current advancements, mainly due to the dynamic nature and unstable structure of multi-UAV systems. Wireless technologies like IEEE 802.11x WLAN can provide high throughput and meet the requirements of many applications, yet they are not optimized for such highly mobile networks. A reliable wireless technology that can sustain high throughput over extended coverage is still lacking. Instead of inventing new energy resources for UAVs, new research has been shifting towards better utilization of existing energy resources. A technique called computation offloading has shown reliable results for reducing on-board energy consumption, and many other adjustments using advanced intelligent techniques and incorporating IoT have been implemented separately. Nevertheless, there is still room to exploit such technologies further. Regardless of how progressive drone technology is becoming, communication and video footage recordings are still not secure. Privacy breaches and hijacking of communication channels can cause harm in critical missions. Therefore, it is necessary to devise less complicated encryption techniques for drones, which are secure and can be easily implemented on UAVs.
\subsection{Artificial intelligence techniques for the future UAV communication systems}
The use of artificial intelligence techniques in UAV communication systems will be expanded in the coming decade. Researchers will leverage artificial neural networks, deep learning, and machine learning techniques to optimize UAV communication networks, as these techniques have shown prominent advantages in many applications \cite{sharma2019neural,challita2019machine,zhang2018machine,chen2009survey,challita2018artificial,wang2019deep}. There are significant challenges in implementing artificial intelligence in areas such as position verification, route management, and estimating the success rate of missions related to UAVs. When designing an AI-based approach for a UAV communication system, the first challenge is to choose suitable artificial intelligence techniques. As there are so many artificial intelligence techniques which can be utilized for different applications, it is tough even for an experienced researcher to choose a suitable technique. As the UAV communication system is a multi-dimensional network that is more complex than current terrestrial communication networks, how to devise the appropriate artificial intelligence technique still needs further exploration by the research community.
On the other hand, when we deploy AI-based proposals, the computation time and the transmission latency between the UAV and ground station will negatively impact the network performance. Since AI techniques usually involve much more computations than conventional methods, it is necessary to improve the computation efficiency when considering the use of AI techniques to optimize the performance of a UAV communication network. Moreover, to deploy the AI-based strategies, we also need to make some modifications to current communication hardware. \textcolor{blue}{In \cite{wang2019deep}, a novel UAV communication network was proposed. The UAVs possess visible light communication capability and the communication strongly depends on the ambient illumination. An algorithm that combines gated recurrent units (GRUs) and convolutional neural networks (CNNs) in machine learning is used to optimize UAV deployment and minimize total transmit power.} In the future, research should be continued to address problems in constructing an efficient and reliable AI-based UAV communication system.
\subsection{Future UAV networks}
Communication and networking strategies mentioned in this survey are essential in UAV
interaction and proper functioning. Future technology for inter- and intra-UAV communication
will come from scientific innovation. LoRa and 6LoWPAN
have also emerged as potential technologies for UAV communication in short
distance. The challenges related to frequency disturbance, rate adaptation, high altitude
performance, and mobility can be handled with communication and network technologies
mentioned in this work. However, it is observed that in the future, power, connectivity, and stable
functioning will need to be improved. Total flying time, control in dense geographical areas beyond
sight, and data compression failure prediction also need to be improved. Energy conservation and
utilization is still a challenge in present systems, especially in multi-UAV scenarios where frequent
data transmissions and connection with ground operators are required.
\subsection{Future Cellular-Connected UAV Networks}
The use of cellular connection, channel characteristics enhancement for high altitude UAVs, and communication link features such as uplink and downlink traffic management, will be open issues for future UAV communication \cite{zeng2018cellular}. The involvement of 5G and new communication methods such as Non-Orthogonal Multiple Access (NOMA), Industrial IoUAV, and others \cite{li2018uav,sohail2018non,nasir2019uav,hou2018multiple,basnayake2020new} has shown promising results in energy saving, fast integration, and ease of adoption. On the other hand, researchers in both academia and industry are currently investigating accurate models for cellular-connected UAV networks using different techniques. One such proposed model is shown in Fig. \ref{5g}. To date, many use cases of cellular-connected UAVs have been explored and some preliminary results have been reported \cite{soni2019performance,sharma2019impact,jayakody2019self,sharma2018positioning,mozaffari2018beyond,mozaffari2017optimal,lin2018mobile,lyu2019network,zhang2018machine,li2018uav}.
\subsection{UAV communication in the Future World}
Future UAVs integrated with 5G and IoT technologies will have strong implications for smart cities, for both commercial and safety purposes. However, it is important to consider the rules and regulations governing usage in each application. In enhancing the cognitive capabilities of UAV communication, artificial intelligence, communication technologies, and security will play vital roles in future UAVs. Furthermore, it is expected that the use of UAVs will not be limited to construction, mining, forestry, and agriculture-related operations, but will include public safety, transportation, surveillance, and security. It is expected that with the ongoing development of smart cities, 5G, IoT, and artificial intelligence, UAV communication will be more robust, stable, and reliable.}
\section{Conclusion}
Wireless communication technology for both indoor and outdoor communication is becoming more ubiquitous, consequently leading to advances in UAV communication. Table \ref{t4} lists various platforms and algorithms with their specific domains and functionalities. This paper reviews recent UAV improvements in communication technologies. The inclusion of 5G technology will provide safer and more reliable networks. Through testing of UAV usability in diverse geographical locations, it was observed that reliable and safe communication features are still a challenge in UAV communication. This paper analyzes UAV communication technologies for both hardware and algorithm-based software, including antenna arrays and signal management, and the utilization of centralized and decentralized techniques. Technologies such as FANET, NDN, AFW, and DTN help in synchronization and latency minimization. \textcolor{blue}{Various methods of communication, such as the queuing delay and transmission delay (QDTD) based routing protocol \cite{yangicc2017} and the Certificateless Signcryption Tag Key Encapsulation Mechanism (eCLSC-TKEM) \cite{won2015}, serve as initial steps in establishing secure and reliable communication between drones and other entities.}
Numerous available techniques and multiple layers of communication have been implemented to maximize security features. However, due to the constraints of power consumption and latency-related issues, implementation is still at a testing stage and needs improvement. Power consumption is a huge challenge for UAVs. A brief review of power and optimization techniques for UAVs has been provided, including various methods suggested by researchers, such as ACODS, power optimization of input/output devices, and analysis of battery life. It has been observed that the current solutions are not adequate to significantly increase the flying time of UAVs. Drones are used in diverse scenarios, whether for navigation, surveillance, emergency communication infrastructure, or IoT purposes, even though the communication technologies differ across applications. The major technologies utilized by drones for targeted tasks are shown in Table \ref{multiuav}. The vast diversity in functionality shows that the future possibilities for inter-drone and intra-drone communication are virtually unlimited. The essential components of UAV-based networks and infrastructures are communication, mechanical structure, and optimization algorithms. The right balance of drone type, application, and communication technology should be able to produce safe, reliable, and powerful drones with long flying times and minimal communication latency.
\section*{Acknowledgements}
This work was funded, in part, by the Scheme for Promotion of Academic and Research Collaboration (SPARC), Ministry of Human Resource Development, India, under grant SPARC/2018-2019/P145/SL; in part, by the Competitiveness Enhancement Program of the National Research Tomsk Polytechnic University; and, in part, by the international cooperation project of Sri Lanka Technological Campus, Sri Lanka and Tomsk Polytechnic University, No. RRSG/19/5008. The reported work was also funded, in part, by Russian Foundation for Basic Research grants No. 19-37-90037 and No. 19-37-90105.
\bibliographystyle{IEEEtran}
\section{Introduction}
\input{sections/introduction}
\subsection{Notation}
\input{sections/notation}
\section{Gauss--Newton acceleration of PANOC}\label{sec:gn-panoc}
\input{sections/derivation}
\section{Optimal control}\label{sec:optimal-control}
\input{sections/optimal-control}
\section{Algorithmic details}\label{sec:algorithm}
\input{sections/algorithm}
\section{Experimental results}\label{sec:experiments}
\input{sections/experiments}
\section{Conclusion}\label{sec:conclusion}
\input{sections/conclusion}
\subsection{Problem formulation}
Consider the following general formulation of a nonlinear optimal control
problem with finite horizon \(N\).
\newcommand\xinit{x_\text{init}}
\newcommand\xref{x_\text{r}}
\newcommand\uref{u_\text{r}}
\begin{equation}\label{eq:OCP} \tag{OCP}\hspace{-0.8em}
\begin{aligned}
&\minimize_{u,x} && \sum_{k=0}^{N-1} \ell_k\big(h_k(x^k, u^k)\big) + \ell_N\big(h_N(x^N)\big)\hspace{-0.8em} \\
&\subjto && u \in \U \\
&&& x^0 = \xinit \\
&&& x^{k+1} = f(x^k, u^k) \quad\quad \raisebox{0.3ex}{$\scriptstyle(0 \le k \lt N)$}
\end{aligned}
\end{equation}
The function \(f : \R^\nnx \times \R^\nnu \to \R^\nnx\) models the discrete-time,
nonlinear dynamics of the system, which starts from an initial state \(\xinit\).
The functions \(h_k : \R^\nnx \times \R^\nnu \to \R^{n_y^k}\) for \(0 \le k \lt N\)
and \(h_N : \R^\nnx \to \R^{n_y^N}\) can be used to represent the (possibly time-varying) output mapping of the
system,
and the convex functions \(\ell_k : \R^{n_y^k} \to \R\) and
\(\ell_N : \R^{n_y^N} \to \R\) define the stage costs and the terminal cost
respectively.
The problem \eqref{eq:OCP} can be transformed into formulation \eqref{eq:P} as follows.
Recursively define the state transition function \(\Phi^k\) as \(\Phi^0(u) \defeq \xinit\)
and \(\Phi^{k+1}(u) \defeq f\big(\Phi^k(u), u^k\big)\). Define \(G\) as the function
that maps a sequence of inputs to the interleaved states and inputs over the horizon,
\(G(u) = \begin{pmatrix}
\Phi^0(u), & u^0, & \Phi^1(u), & u^1, & \dots, & \Phi^N(u)
\end{pmatrix}\).
Using this definition, the \textit{single-shooting} or \textit{sequential} formulation of problem \eqref{eq:OCP}
is an instance of \eqref{eq:P}, with \(\ell = \ell_0 \oplus \dots \oplus \ell_N\),
\(h = h_0 \times \dots \times h_N\), \(F = h \circ G\), \(\psi = \ell \circ F\) and \(g = \delta_{\U}\).
Specifically,
\begin{equation}\label{eq:SS-OCP} \tag{SS-OCP}
\begin{aligned}
&\minimize_u && \ell\big(h\big(G(u)\big)\big) \\
&\subjto && u \in \U. \\
\end{aligned}
\end{equation}
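The single-shooting evaluation above — rolling the dynamics forward through \(\Phi^k\) and summing the stage costs — can be sketched as follows. A toy scalar system stands in for the problem data \(f\), \(h_k\), \(\ell_k\); the function names are illustrative, not part of any released solver.

```python
# Single-shooting evaluation of the (SS-OCP) objective: roll the dynamics
# forward from x_init (the recursion defining Phi^k), interleave states and
# inputs as in G(u), and sum the stage costs plus the terminal cost.

def rollout(f, x_init, u_seq):
    """Return the state trajectory [Phi^0(u), ..., Phi^N(u)]."""
    xs = [x_init]
    for u in u_seq:
        xs.append(f(xs[-1], u))   # x^{k+1} = f(x^k, u^k)
    return xs

def single_shooting_cost(f, stage_cost, terminal_cost, x_init, u_seq):
    """Evaluate sum_k ell_k(h_k(x^k, u^k)) + ell_N(h_N(x^N)) via rollout."""
    xs = rollout(f, x_init, u_seq)
    J = sum(stage_cost(x, u) for x, u in zip(xs[:-1], u_seq))
    return J + terminal_cost(xs[-1])
```

For example, with \(f(x,u) = 0.5x + u\), quadratic costs, \(x_\text{init} = 1\), and zero inputs over two steps, the trajectory is \((1, 0.5, 0.25)\).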
\subsection{Gauss--Newton approximations for optimal control} \label{sec:gn-for-oc}
By specializing the Gauss--Newton QP \eqref{eq:eqp} for this class of optimal
control problems,
and by exploiting the separable structure of the objective function,
the Gauss--Newton step can be shown to be the solution to
the equality-constrained, finite-horizon, linear quadratic regulator problem \eqref{eq:ocp-qp}.
For the sake of readability, we defined the following variables.
\vspace{0.5em}
\begin{equation}
\makebox[0pt][c]{\(
\begin{aligned}
&\begin{aligned}
\bar x^k &\defeq \Phi^k(\bar u) & \hhbar^k &\defeq h_k(\barxuk) \\
A_k &\defeq \jac_f^x(\barxuk) &
B_k &\defeq \jac_f^u(\barxuk) \\
q^k &\defeq \tp{\jac_{h_k}^x\!(\barxuk)} \nabla \ell_k(\hhbar^k)\;\; &
r^k &\defeq \tp{\jac_{h_k}^u\!(\barxuk)} \nabla \ell_k(\hhbar^k) \\
\Lambda_k &\defeq \partial^2 \ell_k(\hhbar^k) \\
\end{aligned} \\
&\begin{aligned}
Q_k &\defeq \tp{\jac_{h_k}^x\!(\barxuk)} \Lambda_k\, \jac_{h_k}^x\!(\barxuk) \\
S_k &\defeq \tp{\jac_{h_k}^u\!(\barxuk)} \Lambda_k\, \jac_{h_k}^x\!(\barxuk) \\
R_k &\defeq \tp{\jac_{h_k}^u\!(\barxuk)} \Lambda_k\, \jac_{h_k}^u\!(\barxuk) \\
\end{aligned}
\end{aligned}
\)}
\end{equation}
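For a single stage \(k\), the matrices \(Q_k\), \(S_k\), \(R_k\) above are congruence transforms of the cost Hessian \(\Lambda_k\) by the Jacobians of the output map \(h_k\). The following dependency-free sketch (plain nested lists, purely illustrative) assembles them from given Jacobians.

```python
# Assemble the Gauss--Newton LQR data for one stage k from the output-map
# Jacobians Jx (ny x nx), Ju (ny x nu) and the stage-cost Hessian Lam
# (ny x ny): Q_k = Jx' Lam Jx, S_k = Ju' Lam Jx, R_k = Ju' Lam Ju.

def matmul(A, B):
    return [[sum(a * b for a, b in zip(row, col)) for col in zip(*B)] for row in A]

def transpose(A):
    return [list(col) for col in zip(*A)]

def gauss_newton_stage(Jx, Ju, Lam):
    """Return (Q_k, S_k, R_k) given Jacobians of h_k and Hessian of ell_k."""
    JxT, JuT = transpose(Jx), transpose(Ju)
    Q = matmul(matmul(JxT, Lam), Jx)
    S = matmul(matmul(JuT, Lam), Jx)
    R = matmul(matmul(JuT, Lam), Ju)
    return Q, S, R
```

Since \(\Lambda_k \succeq 0\) by convexity of \(\ell_k\), the assembled blocks inherit the positive semidefiniteness required by the LQR subproblem.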
In order to transform \eqref{eq:ocp-qp} into a standard linear quadratic
regulator formulation, eliminate the fixed variables \(u_{\K}\).
The result is the problem \eqref{eq:ocp-qp-elim}, where we used the following
definitions.
\begin{equation}
\begin{aligned}
\hat S_k &\defeq S_{k}{\scriptstyle[\J\!,\,\cdot\,]} &
\hat R_k &\defeq R_{k}{\scriptstyle[\J\!,\J]} \\
\hat q_k &\defeq q^k + \tp S_{k}{\scriptstyle[\,\cdot\,,\K]}\, u^k_{\K} &\hspace{1.6em}
\hat r_k &\defeq r^k_{\!\J} + R_{k}{\scriptstyle[\J\!,\K]}\, u^k_{\K} \\
\hat B_k &\defeq B_{k}{\scriptstyle[\,\cdot\,,\J]} &
\hat c_k &\defeq B_{k}{\scriptstyle[\,\cdot\,,\K]}\, u^k_{\K} \\
\end{aligned}
\end{equation}
\begin{rem}
In the absence of box constraints, we have \(\K = \emptyset\), and the algorithm
reduces to the iterative linear quadratic regulator (ILQR) method for nonlinear
MPC of \cite{torodov_ilqr} with a line search.
\end{rem}
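To illustrate the LQR structure being exploited, the following sketch runs the backward Riccati recursion for the scalar case (one state, one input) with a cross term \(S_k\). This is a didactic reduction under that scalar assumption, not the full matrix-valued solver used in the algorithm.

```python
# Scalar backward Riccati recursion for the time-varying LQR subproblem
#   min sum_k 1/2 (Q_k x^2 + 2 S_k x u + R_k u^2) + 1/2 Q_N x_N^2
#   s.t. x^{k+1} = A_k x^k + B_k u^k,
# returning feedback gains K_k (so u^k = -K_k x^k) and cost-to-go P_k.

def lqr_backward_scalar(A, B, Q, S, R, QN):
    N = len(A)
    P = [0.0] * (N + 1)
    K = [0.0] * N
    P[N] = QN
    for k in range(N - 1, -1, -1):
        g = S[k] + B[k] * P[k + 1] * A[k]   # state-input coupling term
        H = R[k] + B[k] ** 2 * P[k + 1]     # input curvature (must be > 0)
        K[k] = g / H                        # optimal feedback gain
        P[k] = Q[k] + A[k] ** 2 * P[k + 1] - g * g / H
    return K, P
```

For instance, with \(A_0 = B_0 = Q_0 = R_0 = Q_N = 1\) and \(S_0 = 0\) over one step, the recursion gives \(K_0 = 1/2\) and \(P_0 = 3/2\).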
\subsection{Handling state constraints}\label{sec:state-constr}
Consider a standard state-constrained finite-horizon optimal control problem of
the following form.
\begin{equation}\label{eq:OCP-state-constr} \tag{SC-OCP}\hspace{-1em}
\begin{aligned}
& \minimize_{u,x} & & \tfrac12\sum_{k=0}^{N-1} \bigg[ \normsq{x^k - \xref}_Q + \normsq{u^k -\uref}_R \bigg] \hspace{-1.1em} \\
& & & \hspace{-1.21em}+ \tfrac12 \normsq{x^N - \xref}_{Q_N} \\
& \subjto & & u \in \U \\
& & & \begin{aligned}
& x^0 = \xinit \\
& x^{k+1} = f(x^k, u^k) & & \quad \raisebox{0.3ex}{$\scriptstyle(0 \le k \lt N)$} \\
& c_k(x^k) \in \D_k & & \quad \raisebox{0.3ex}{$\scriptstyle(0 \le k \le N)$} \\
\end{aligned}
\end{aligned}
\end{equation}
As before, \(f\) describes the possibly nonlinear discrete-time dynamics,
\(\xinit\) is the initial state of the system, \(\xref\) is the reference state,
and \(\uref\) the reference input. The inputs are constrained by the box \(\U\),
and the smooth, possibly nonlinear functions \(c_k\) of the states allow
general equality and inequality constraints to be expressed by confining
their images to the boxes \(\D_k\).
It is common practice to relax the state constraints by means of a penalty
method. That is, the hard constraints are turned into soft constraints
by adding them as quadratic penalty terms to the objective function,
e.g. \(\frac{\mu}{2}\dist_{\D_k}^2\!\!\big(c_k(x^k)\big)\) for some sufficiently
large \(\mu > 0\).
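The penalty and its gradient are inexpensive to evaluate: the squared distance to a box reduces to a clipped difference. The following NumPy sketch (illustrative only, not the solver's implementation) makes this concrete; the gradient \(\mu(z - \Pi_{\D}(z))\) is Lipschitz but not differentiable on the boundary of \(\D\), which is why the Clarke generalized Jacobian appears below.

```python
import numpy as np

# Sketch (not from the paper's code): the quadratic penalty
# (mu/2) * dist_D^2(z) for a box D = [lb, ub] and its gradient
# mu * (z - proj_D(z)). The gradient is Lipschitz continuous but not
# differentiable where a component of z lies on the boundary of D.
def penalty_and_gradient(z, lb, ub, mu):
    p = np.clip(z, lb, ub)          # projection of z onto the box D
    d = z - p                       # z - proj_D(z)
    return 0.5 * mu * (d @ d), mu * d
```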
Such a soft-constrained optimal control problem fits into the framework of
\eqref{eq:SS-OCP} by defining
\begin{equation}
\begin{aligned}
\ell_k(x, u, z) & \defeq \tfrac12 \normsq{x-\xref}_Q
+ \tfrac12 \normsq{u-\uref}_R + \tfrac{\mu_k}2 \dist^2_{\D_k}(z),\hspace{-1.5em} \\
\ell_N(x, z) & \defeq \tfrac12 \normsq{x-\xref}_{Q_N}
+ \tfrac{\mu_N}2 \dist^2_{\D_N}(z), \\
h_k(x, u) & \defeq \begin{pmatrix}
x, &
u, &
c_k(x) \vphantom{\big|}
\end{pmatrix}, \\
h_N(x) & \defeq \begin{pmatrix}
x, &
c_N(x) \vphantom{\big|}
\end{pmatrix}. \\
\end{aligned}
\end{equation}
Because of the squared distance, the cost \(\ell\) is no longer twice
differentiable, but its gradient \(\nabla\ell\) is locally Lipschitz continuous,
and hence its Clarke generalized Jacobian \(\partial^2\ell\) is well defined and nonempty \cite[Prop.~7.1.4]{pang}.
Additionally, the gradient is semismooth, so Proposition~\ref{prop:lna} applies.
The following proposition gives a sufficient condition for the solution to
the Gauss--Newton QP \eqref{eq:eqp} to be uniquely defined.
\begin{prop}
If the cost matrix \(R\) is positive definite, \(Q\) is positive semidefinite,
and \(\mu_k \ge 0\) for all \(k\),
then the Gauss--Newton matrix \(\Hgn\) for the soft-constrained optimal
control problem is positive definite.
\end{prop}
\begin{pf}By algebraic manipulations of \(\Hgn\).\\
Because of the block-diagonal structure of \(\partial^2\ell\) and \(\jac_h\),
their product \(L \defeq \tp{\jac_h}\partial^2\ell\,\jac_h\) is also block-diagonal,
with blocks of the form
\begin{equation*}
\begin{pmatrix}
Q + \tp C_k M_k C_k & 0 \\ 0 & R
\end{pmatrix} \possdef 0,
\end{equation*}
where \(C_k \defeq \jac_{c_k}(x^k)\) and \(M_k \in \partial^2\big(\frac{\mu}{2}\dist_{\D_k}^2\!(c_k(x^k))\big)\). Because of the
structure of \(G\) (it includes the identity map of \(u\)), the block
rows of \(\jac_G(u)\) that correspond to the inputs have full rank (they contain \(\nnu\times\nnu\) identity matrices)
and line up with the positive definite blocks \(R\) in \(L\).
Hence, the full product
\(\Hgn = \tp{\jac_G(u)}L\,\jac_G(u)\) is positive definite.
\hfill\(\square \)
\end{pf}
\subsection{Evaluation of the objective and its gradient}
Application of PANOC to problem \eqref{eq:SS-OCP} requires efficient evaluation
of the cost function \(\psi = \ell \circ h \circ G\) and its gradient. This can
be achieved by performing a
forward simulation (Algorithm~\ref{alg:lqr-sim}) followed by a backward sweep
(Algorithm~\ref{alg:lqr-grad}). The backward sweep only requires the evaluation
of gradient-vector products, but the Jacobian matrices \(A_k\) and \(B_k\) of
the dynamics can later be reused for the computation of the Gauss--Newton step.
\begin{algorithm2e}
\newcommand\assign{\leftarrow}
\DontPrintSemicolon
\KwIn{$\bar u, \xinit$}
\KwOut{$\psi, \bar x, \hhbar$}
$\bar x^0 \assign \xinit$\;
$\psi \assign 0$\;
\For{$k = 0,\, ...,\, N-1$}
{
$\bar x^{k+1} \assign f(\barxuk)$\;
$\hhbar^k \assign h_k(\barxuk)$\;
$\psi \assign \psi + \ell_k(\hhbar^k)$\;
}
$\hhbar^N \assign h_N(\bar x^N)$\;
$\psi \assign \psi + \ell_N(\hhbar^N)$\;
\caption{Forward simulation}\label{alg:lqr-sim}
\end{algorithm2e}
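The forward simulation translates directly into code. The sketch below is a plain-Python rendering of Algorithm~\ref{alg:lqr-sim}, with the dynamics \(f\), output maps \(h_k\), and stage costs \(\ell_k\) supplied as callables; it is an illustration, not the actual solver implementation.

```python
def forward_simulation(u_bar, x_init, f, h, ell, h_N, ell_N, N):
    """Roll out the dynamics and accumulate the objective psi.

    f(x, u)    -> next state       (discrete-time dynamics)
    h(k, x, u) -> output hhbar^k   (stage output map h_k)
    ell(k, hh) -> stage cost       (ell_k)
    """
    x_bar = [x_init]
    hh_bar = []
    psi = 0.0
    for k in range(N):
        x_bar.append(f(x_bar[k], u_bar[k]))      # xbar^{k+1} = f(xbar^k, u^k)
        hh_bar.append(h(k, x_bar[k], u_bar[k]))  # hhbar^k = h_k(xbar^k, u^k)
        psi += ell(k, hh_bar[k])
    hh_bar.append(h_N(x_bar[N]))                 # terminal output and cost
    psi += ell_N(hh_bar[N])
    return psi, x_bar, hh_bar
```

For a scalar integrator \(f(x,u) = x + u\) with quadratic costs, a rollout from \(x^0 = 1\) with \(u = (1, 1)\) visits the states \(1, 2, 3\).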
\begin{algorithm2e}
\newcommand\assign{\leftarrow}
\DontPrintSemicolon
\KwIn{$\bar u^k, \bar x^k, \hhbar^k$}
\KwOut{$\nabla\psi, A_k, B_k, q^k, r^k$}
$\lambda^N \assign \tp{\jac_{h_N}\!(\bar x^N)} \nabla \ell_N(\hhbar^N) $\;
\For{$k = N - 1,\, ...,\, 0$}
{
$\begin{pmatrix}
A_k & B_k
\end{pmatrix} \assign \jacf(\barxuk)$\;
$q^k \assign \tp{\jac_{h_k}^x\!(\barxuk)} \nabla \ell_{k}(\hhbar^k)$\;
$r^k \assign \tp{\jac_{h_k}^u\!(\barxuk)} \nabla \ell_{k}(\hhbar^k)$\;
$\nabla_{\!u^{\hspace{-0.6pt}k}}\psi \assign r^k + \tp B_k \lambda^{k+1}$\;
$\lambda^k \assign q^k + \tp A_k \lambda^{k+1}$\;
}
\caption{Backward gradient evaluation}\label{alg:lqr-grad}
\end{algorithm2e}
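For concreteness, the backward sweep can be specialized to quadratic stage costs \(\ell_k(x,u) = \frac12\|x\|^2 + \frac12\|u\|^2\) with \(h_k\) the identity, so that \(q^k = \bar x^k\) and \(r^k = \bar u^k\); constant Jacobians \(A, B\) are assumed for brevity. The sketch below is this illustrative reduction of Algorithm~\ref{alg:lqr-grad}, not the general implementation.

```python
import numpy as np

# Backward sweep (Algorithm 2) specialized to l_k = 0.5||x||^2 +
# 0.5||u||^2 with h_k = identity, so q^k = x^k and r^k = u^k, and to
# constant dynamics Jacobians A, B. Illustrative only.
def backward_gradient(x_bar, u_bar, A, B, N):
    grad = np.zeros_like(u_bar)
    lam = x_bar[N].copy()                  # lambda^N = q^N
    for k in range(N - 1, -1, -1):
        grad[k] = u_bar[k] + B.T @ lam     # r^k + B_k^T lambda^{k+1}
        lam = x_bar[k] + A.T @ lam         # q^k + A_k^T lambda^{k+1}
    return grad
```

The adjoint recursion reproduces the gradient of the rolled-out objective, which can be checked against finite differences.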
\subsection{Solution of the LQR problem}
The Gauss--Newton step \(\delu\) can be computed as the solution to
\eqref{eq:ocp-qp-elim} using LQR factorization and LQR solution
routines based on the Riccati recursion \cite[\S 8.8.3]{rawlings2017model},
\cite[Alg.~3-4]{patrinoslqrsolve}.
These routines, specialized to the problem at hand, are listed in
Algorithms~\ref{alg:lqr-factor} and \ref{alg:lqr-solve}.
\begin{algorithm2e}[h]
\newcommand\assign{\leftarrow}
\DontPrintSemicolon
\KwIn{$Q_k, \hat S_k, \hat R_k, \hat q_k, \hat r_k, A_k, \hat B_k, \hat c_k$}
\KwOut{$K_k, e_k$}
$P_N \assign Q_N$\;
$s_N \assign \hat q_N$\;
\For{$k = N - 1,\, ...,\, 0$}
{
$\bar R \assign \hat R_k + \tp{\hat B_k} P_{k+1} \hat B_k$\;
$\bar S \assign \hat S_k + \tp{\hat B_k} P_{k+1} A_k$\;
$y \assign P_{k+1} \hat c_k + s_{k+1}$\;
$K_k \assign -\inv{\bar R} \bar S$\;
$e_k \assign -\inv{\bar R} (\tp{\hat B_k} y + \hat r_k)$\;
$s_k \assign \tp{\bar S} e_k + \tp A_k y + \hat q_k$\;
$P_k \assign Q_k + \tp A_k P_{k+1} A_k + \tp{\bar S}\! K_k$\;
}
\caption{LQR factor}\label{alg:lqr-factor}
\end{algorithm2e}
\begin{algorithm2e}[h]
\newcommand\assign{\leftarrow}
\DontPrintSemicolon
\KwIn{$A_k, B_k, K_k, e_k, \delu_\K$}
\KwOut{$\delu_{\!\J}, \delx$}
$\delx^0 \assign 0$\;
\For{$k = 0,\, ...,\, N - 1$}
{
$\delu^k_{\!\J} \assign K_k \delx^k + e_k$\;
$\delx^{k+1} \assign A_k \delx^k + B_k \delu^k$\;
}
\caption{LQR solve}\label{alg:lqr-solve}
\end{algorithm2e}
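To make the recursion concrete, the following Python sketch implements Algorithms~\ref{alg:lqr-factor} and \ref{alg:lqr-solve} for the unconstrained case \(\K = \emptyset\), where the hatted quantities coincide with the plain ones, \(S_k = 0\), and \(\hat c_k = 0\); stage-constant problem data are used for brevity, and the real solver keeps the stage-dependent structure.

```python
import numpy as np

# Riccati recursion (Algorithms 3-4) for the unconstrained case
# K = emptyset: hatted quantities equal the plain ones, S_k = 0 and
# c_k = 0. Stage-constant data (A, B, Q, R, q, r) for brevity.
def lqr_factor(A, B, Q, R, q, r, Q_N, q_N, N):
    P, s = Q_N, q_N
    K, e = [None] * N, [None] * N
    for k in range(N - 1, -1, -1):
        R_bar = R + B.T @ P @ B
        S_bar = B.T @ P @ A                       # S_k = 0 here
        y = s                                     # P_{k+1} c_k + s_{k+1}, c_k = 0
        K[k] = -np.linalg.solve(R_bar, S_bar)
        e[k] = -np.linalg.solve(R_bar, B.T @ y + r)
        s = S_bar.T @ e[k] + A.T @ y + q
        P = Q + A.T @ P @ A + S_bar.T @ K[k]
    return K, e

def lqr_solve(A, B, K, e, N, nx):
    dx, du = np.zeros(nx), []
    for k in range(N):
        du.append(K[k] @ dx + e[k])
        dx = A @ dx + B @ du[k]
    return du
```

The computed step is a stationary point of the condensed quadratic objective, which can be verified by differentiating the explicit cost at the solution.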
An important observation is that the cost for the computation of the
Gauss--Newton direction using these routines scales \textit{linearly} with the
horizon length \(N\). In the worst case, when \(\K(\bar u) = \emptyset\),
Algorithm~\ref{alg:lqr-factor} requires the factorization of \(N\) matrices of
size \(\nnu \times \nnu\), plus a fixed number of matrix products per stage.
In contrast, general direct solution methods for
system \eqref{eq:struc-panoc-sys} require a single factorization of a much
larger \(\nnu N \times \nnu N\) matrix, with a cost that scales
\textit{cubically} with \(N\).
\subsection{Practical considerations} \label{sec:practical-considerations}
For iterates that are far from the solution, the quadratic Gauss--Newton model
might not approximate the actual function well, and the Gauss--Newton step might
not perform much better than an L--BFGS step.
Considering the significant difference in computational cost between
Gauss--Newton and L--BFGS (the former requires evaluation of the Jacobians of
the dynamics, matrix factorizations and multiplications, whereas the latter
only requires a limited number of vector operations),
we propose to only compute the Gauss--Newton step every \(k_\mathrm{GN} \ge 1\)
iterations. In between, much cheaper structured \panoc{} L--BFGS steps
are used \cite[\S III]{pas2022alpaqa}. When eventually a Gauss--Newton step is
accepted by the line search with step size \(\tau = 1\),
the algorithm continues to perform Gauss--Newton steps, for as long as they
keep getting accepted with unit step size.
Using this technique, the algorithm initially maintains a relatively low
cost per iteration, and eventually enjoys the fast local convergence of the
more expensive Gauss--Newton steps. This will be corroborated experimentally
in the following section.
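The schedule just described can be summarized in a few lines. The sketch below is a schematic of the switching logic only; the callables stand in for the actual direction computations and the \panoc{} line search.

```python
# Schematic of the step-selection heuristic described above: attempt a
# Gauss-Newton step every k_gn iterations, use L-BFGS in between, and
# stay in Gauss-Newton mode for as long as unit steps are accepted.
# gn_step / lbfgs_step / line_search stand in for the actual
# computations; line_search returns the accepted step size tau.
def run_schedule(num_iters, k_gn, gn_step, lbfgs_step, line_search):
    gn_mode = False
    history = []
    for it in range(num_iters):
        use_gn = gn_mode or it % k_gn == 0
        direction = gn_step() if use_gn else lbfgs_step()
        tau = line_search(direction)
        gn_mode = use_gn and tau == 1.0
        history.append("GN" if use_gn else "LBFGS")
    return history
```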
\subsection{Number of iterations}
In a first experiment, the convergence in terms of the number of iterations is compared for
the PANOC algorithm with Gauss--Newton acceleration as described in this
publication, and for the structured PANOC algorithm with L--BFGS acceleration without the
off-diagonal Hessian--vector term from \cite[]{pas2022alpaqa}. For the Gauss--Newton
accelerator, the parameter \(k_\mathrm{GN}\)
from Section~\ref{sec:practical-considerations} is set to one (i.e. a
Gauss--Newton step is computed on each PANOC iteration). The L--BFGS memory is set to 40,
equal to the length of the horizon.
Figure~\ref{fig:conv-iter} shows the convergence of the two algorithms. Initially,
they both perform similarly, but after around 20 iterations, the Gauss--Newton
directions are accepted with unit step size, enabling very fast linear
convergence.
It should be noted that similar graphs in terms of absolute solver run time
would look quite different: even though the reduction of the residual per
iteration is comparable for the first 20 iterations,
the computational cost per iteration for the Gauss--Newton accelerator is around
one order of magnitude higher than for the L--BFGS accelerator. This can be
greatly improved by increasing \(k_\mathrm{GN}\).
\subsection{Run time in function of horizon length}
In a second experiment, we explore the effect of the horizon length on the
solver run time. For each horizon length between \(N=10\)
and \(N=45\), 256 optimal control problems are constructed, each with a different
initial state \(\xinit\), obtained by applying uniformly random inputs in
\([-1, 1]\) for five time steps.
The parameter \(k_\mathrm{GN}\) described in Section~\ref{sec:practical-considerations}
was set to 30 for this experiment, and the L--BFGS memory was set equal to the
horizon length \(N\). The solvers declare convergence when
\(\left\| u^{(\nu)} - \proj{\U}\!\left(u^{(\nu)} - \nabla \psi(u^{(\nu)})\right) \right\| \le 10^{-10}\).
The run times of both algorithms (structured PANOC with L--BFGS, and
PANOC with Gauss--Newton acceleration) are reported in Figure~\ref{fig:horiz}.
The algorithm with Gauss--Newton acceleration is more than twice as fast as
the L--BFGS variant, and its run time scales only slightly worse than linearly
with the horizon length \(N\), with longer horizons proving somewhat more challenging.
\subsection{Model predictive control}
Finally, both solvers are applied in a closed-loop controller. A disturbance
of \([-1, 1, 1]\, \mathrm{m/s}\) is applied for five time steps, and the
system with the MPC controller is subsequently simulated for one minute. The run times of the
two solvers described earlier are reported in Figure~\ref{fig:mpc}.
The Gauss--Newton solver (with \(k_\mathrm{GN} = 10\)) outperforms the
L--BFGS-based solver in terms of both average and worst-case run time.
The fast local convergence of Gauss--Newton is especially noticeable when the
initial guess is close to the solution, e.g. by warm starting the solver using
the shifted solution from the previous time step, and when the system starts to
settle near the end of the simulation. For reference,
the popular \texttt{Ipopt} solver \cite[]{ipopt} requires around 1.7 seconds to
solve the first OCP (invoked from CasADi, without just-in-time compilation),
which is over 50 times longer than the 30 ms required by the \panoc{} solver
with Gauss--Newton acceleration.
\begin{figure}
\centering
\includegraphics[width=0.8\linewidth]{fig/chain/gn-chain-conv-iter.pdf}
\vspace{-1em}
\caption{Comparison of the convergence of structured \panoc{} with L--BFGS and \panoc{} with the proposed
Gauss--Newton accelerator (\(k_\mathrm{GN} = 1\)), when applied to the chain of masses MPC benchmark.} \label{fig:conv-iter}
\end{figure}
\begin{figure}
\centering
\includegraphics[width=\linewidth]{fig/chain/gn-chain-compare-N-horiz-confidence-parallel.pdf}
\vspace{-1em}
\caption{Median solver run time over the 256 test problems for each horizon length, for structured \panoc{} with L--BFGS and \panoc{} with the Gauss--Newton accelerator (\(k_\mathrm{GN} = 30\)).
The shaded area indicates the P10 and P90 percentiles.}\label{fig:horiz}
\end{figure}
\begin{figure}
\centering
\includegraphics[width=\linewidth]{fig/chain/gn-chain-mpc-times.pdf}
\vspace{-1em}
\caption{Solver run times for structured \panoc{} with L--BFGS and \panoc{} with the Gauss--Newton accelerator (\(k_\mathrm{GN} = 10\)) when applied to a model predictive control problem.
For data labeled \textit{warm}, the shifted solution of the previous time step is used as initial guess for the solvers, whereas it is set to zero for data labeled \textit{cold}.}\label{fig:mpc}
\end{figure}
\subsection{Linear Newton approximations for \panoc}
Local solutions to \eqref{eq:P} correspond to fixed points of the \textit{forward-backward operator}
\(\fbop(u) \defeq \prox_{\gamma g}\big(u - \gamma\nabla\psi(u)\big)\),
and are characterized by the nonlinear inclusion
\(0 \in \fpr(u)\),
where \(\fpr \defeq \inv\gamma(\Id - \fbop)\) is the \textit{fixed-point residual} of \(\fbop\).
Traditionally, \panoc{} applies the L--BFGS quasi-Newton method to this root-finding
problem to achieve fast convergence. A line search over the \textit{forward-backward envelope} \(\fbe\)
is used as a globalization strategy.
This paper explores alternative directions to accelerate \panoc{} by studying
generalized Jacobians to construct a \textit{linear Newton approximation} (LNA) \cite[]{pang}
of the fixed-point residual \(\fpr\).
\newcommand\proxjac{B}
\begin{prop}\label{prop:lna} (LNA scheme for \(\fpr\))\\
Suppose that \(\nabla\psi\) is semismooth around \(\bar u\in \R^n\) and
that \(\prox_{\gamma g}\) with \(\gamma > 0\) is semismooth at
\(\bar u - \gamma\nabla \psi(\bar u)\). Then,
\begin{equation}\label{eq:lna}
H_\gamma(u) \defeq \inv\gamma \I - \proxjac(u)\big(\inv\gamma\I - \partial^2\psi(u)\big),
\end{equation}
where \(\proxjac(u) = \partial_C \prox_{\gamma g} \big(u - \gamma\nabla\psi(u)\big)\)
and \(\partial^2\psi(u) = \partial_C\big(\nabla\psi(u)\big)\),
furnishes an LNA scheme for \(\fpr\) at \(\bar u\).
\cite[Lem.~6]{6760233}
\cite[Prop.~3.7]{fbtruncnewton}
\cite[\S 15.4.13]{themelis2019acceleration}
\end{prop}
\begin{pf}
Because of the semismoothness of \(\prox_{\gamma g}\) and \(\nabla\psi\),
\(\proxjac(u)\) is an LNA scheme for \(\prox_{\gamma g}\) at \(\bar u - \gamma\nabla\psi(\bar u)\),
and \(\I - \gamma\partial^2\psi(u) = \partial_C\big( u - \gamma\nabla\psi(u)\big)\) is
an LNA scheme for \(\Id - \gamma\nabla\psi\) at \(\bar u\). By \cite[Thm.~7.5.17]{pang},
the product \(\proxjac(u)\big(\I - \gamma\partial^2\psi(u)\big)\) is an LNA scheme for
the composition \(\fbop = \prox_{\gamma g} \circ (\Id - \gamma\nabla\psi)\) at \(\bar u\);
substituting this into \(\fpr = \inv\gamma(\Id - \fbop)\) yields \eqref{eq:lna} as an LNA scheme for \(\fpr\).
\hfill\(\square\)
\end{pf}
This proposition motivates using a solution \(\delu\) of the Newton system \(H_\gamma(\bar u)\,\delu = -\fpr(\bar u)\)
as an update direction for \panoc{}, using the LNA around the current iterate
\(\bar u\).
\subsection{Structured PANOC}
In the case where the nonsmooth term \(g\) in \eqref{eq:P} is the indicator of a closed rectangular
box \(\U\), i.e. \(g \defeq \delta_\U\),
\(\prox_g\) is a separable projection. This structure can be exploited to reduce
the dimension of the Newton system \cite[\S III]{pas2022alpaqa}.
Represent the box \(\U \defeq \mybigtimes_{i=1}^n \U_i\)
as a Cartesian product of one-dimensional intervals. Then, \(\proxjac(u) = \partial_C \proj{\U}\!\big(u - \gamma\nabla\psi(u)\big)\) is a
set of diagonal matrices with
\begin{equation}
\proxjac(u)_{ii} \in \begin{cases}
\{0\} &\text{ if } u_i - \gamma\nabla_{\!i}\psi(u) \not\in \U_i, \\
\{1\} &\text{ if } u_i - \gamma\nabla_{\!i}\psi(u) \in \interior \U_i, \\
[0, 1] &\text{ if } u_i - \gamma\nabla_{\!i}\psi(u) \in \boundary \U_i. \\
\end{cases}
\end{equation}
Motivated by these different cases, let us define the index sets \(\K(u) \defeq \defset{i\in \N_{[1,\,n]}}{u_i - \gamma \nabla_{\!i} \psi(u) \not\in \interior \U_i}\)
and \(\J(u) \defeq \defset{i\in \N_{[1,\,n]}}{u_i - \gamma \nabla_{\!i} \psi(u) \in \interior \U_i}\)
of active and inactive constraints respectively,
and choose \(\hat \proxjac(u) \in \proxjac(u)\), defining \(\hat \proxjac(u)_{ii} \defeq 0\) if \(i \in \K(u)\) and \(\hat \proxjac(u)_{ii} \defeq 1\) if \(i \in \J(u)\).
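As an illustration, computing this split and the chosen Jacobian element for a box takes a few lines of NumPy; components whose forward step lands on the boundary are assigned to \(\K\) (so \(\hat\proxjac(u)_{ii} = 0\)), as above. This is a sketch, not the library implementation.

```python
import numpy as np

# Sketch: active/inactive index split for a box U = [lb, ub] and the
# chosen Clarke-Jacobian element Bhat of the projection. Components
# whose forward step lands on the boundary go into K (Bhat_ii = 0).
def split_active_set(u, grad_psi, gamma, lb, ub):
    v = u - gamma * grad_psi             # forward (gradient) step
    inactive = (lb < v) & (v < ub)       # J: strictly inside U_i
    K = np.flatnonzero(~inactive)
    J = np.flatnonzero(inactive)
    bhat_diag = inactive.astype(float)   # diag of Bhat: 1 on J, 0 on K
    return K, J, bhat_diag
```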
\newcommand\newtdir{\delu}
By permutation of \eqref{eq:lna}, the Newton step \(\newtdir\) at a point \(\bar u\) can then be computed by solving
the system
\begin{equation}\label{eq:struc-panoc-sys}
\left\{
\begin{aligned}
\newtdir_\K &= \bar u_\K - \fbop(\bar u)_\K, \\
\partial^2_{\!\J\!\J}\psi(\bar u)\, \newtdir_{\!\J} &= -\nabla_{\!\!\J}\psi(\bar u) - \partial^2_{\!\J\!\K} \psi(\bar u)\, \newtdir_\K. \\
\end{aligned}\right.
\end{equation}
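Given any symmetric approximation \(H\) of \(\partial^2\psi(\bar u)\), the permuted system above amounts to fixing the active components from the forward-backward step and solving a smaller linear system for the inactive ones, as in the following NumPy sketch (index sets as defined above; the dense matrix \(H\) here is a generic placeholder).

```python
import numpy as np

# Sketch of solving the reduced Newton system: the active components
# of the step are fixed from the forward-backward step, and only the
# inactive block is solved for. H approximates the Hessian of psi,
# g = grad psi(u), Tu = T_gamma(u); K, J are index arrays as in the text.
def structured_newton_step(H, g, u, Tu, K, J):
    du = np.empty_like(u)
    du[K] = u[K] - Tu[K]                          # active part fixed
    rhs = -g[J] - H[np.ix_(J, K)] @ du[K]
    du[J] = np.linalg.solve(H[np.ix_(J, J)], rhs)
    return du
```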
\input{sections/lqr-eqn}
\subsection{Gauss--Newton approximation}
We will now specialize to problems where the smooth term is a composition
\(\psi(u) \defeq \ell\big(F(u)\big)\)
of \(\ell : \R^m \to \R\) convex and \(F : \R^n \to \R^m\).
Considering the computational cost of evaluating and factorizing the second-order
derivatives of \(\psi\), the proposed method approximates \eqref{eq:struc-panoc-sys}
using the Gauss--Newton matrix \(\Hgn \defeq \tp{\jacF(u)}\, \partial^2 \ell\big(F(u)\big)\, \jacF(u)\)
\cite[\S3]{schraudolph2002fast}.
\begin{rem}
For \(\psi \in C^2\), we have \(\nabla^2\psi = \Hgn + \egn\) with
\(\egn(u) \defeq \sum_{i=1}^m \nabla_{\!i\,} \ell\big(F(u)\big)\, \nabla^2 F_i(u)\).
If the function \(F\) is linear around a solution \(u^\star\), or if \(F(u^\star)\)
is a stationary point of \(\ell\), the error term \(\egn\) vanishes, and the
Gauss--Newton approximation approaches the true Hessian matrix of \(\psi\).
\end{rem}
Substituting \(\partial^2\psi\) by \(\Hgn\) in \eqref{eq:struc-panoc-sys} and
writing the solution to the resulting system as the solution of an equality
constrained quadratic program yields
\begin{equation}\label{eq:eqp}\tag{GN-QP}
\begin{aligned}
&\minimize_{\delu}&& \tfrac12\, \tp \delu \Hgn(\bar u)\, \delu + \ttp{\nabla\psi(\bar u)} \delu \\
&\subjto && \delu_\K = \bar u_\K - \fbop(\bar u)_\K.
\end{aligned}
\end{equation}
The following sections explore methods for efficiently solving this
Gauss--Newton QP by making use of the particular structure of finite-horizon
optimal control problems. The Gauss--Newton step \(\delu\) can then be used as
an accelerated direction for \panoc.
\section{Introduction}
The ability to detect high-energy radiation is crucial for applications in high-energy physics, medicine, as well as homeland security \cite{Rod97, Kno10}. For these applications, one frequently relies on scintillators, which are materials that convert high energy radiation into low energy photons. By correlating the number of photons generated during a short time interval with the energy of an incident radiation quantum, it is possible to deduce the energy spectrum of the incoming signal. Provided the energy resolution is sufficiently high this in principle allows identification of the radiation source. The energy resolution that is achievable using classical scintillators such as NaI and CsI is, however, limited and does not meet the requirements for radioactive isotope identification \cite{NelGosKna11}.
An analysis of counting statistics demonstrates that resolution improves with an increase in luminosity, which usually results from a higher conversion efficiency, i.e. relatively more photons are generated per incident energy. Yet the resolution of most scintillators is significantly worse than what can be expected based on their respective luminosities \cite{Dor10}. During the last decades it has been established that this discrepancy can be traced to the non-linear response of the material to the energy of the incident radiation quantum, which accordingly has been studied intensively \cite{DorHaaEij95, RooVal96, Vas08, MosPayCho08, MosBizWil12}.
While the fundamental physical mechanisms from track creation to final photon emission are understood at least on a qualitative level, the relative importance of each of these events is still unclear. In spite of several recent promising attempts to resolve this situation \cite{PayCheHul09, KerRosCan09, BizMosSin09, WilGriLi11, WanWilGri13}, there is currently no established numerical framework that has successfully combined the aforementioned physical mechanisms into a predictive model. One of the major reasons is the enormous complexity and uncertainty connected with thermalization and transport of excitation carriers (primarily electron, holes, and excitons) as well as their respective contributions to the response of the system.
Existing models \cite{PayCheHul09, KerRosCan09, BizMosSin09, WilGriLi11, WanWilGri13} typically rely on a number of physical parameters, such as dielectric function, migration barriers of self-trapped excitons, electron and hole mobilities, defect trapping rates, as well as Auger recombination rates of free carriers and excitonic states. Since at least some of these quantities are notoriously difficult to measure experimentally, parameter-free electronic-structure calculations are very valuable for providing not only physical insight but input data for such models. They also serve to complement and guide experimental efforts. The impact of electronic properties such as fundamental band gaps and charge carrier mobilities has recently been studied for a number of scintillating materials \cite{SetGauRom09, LiGriWil11}. There remains, however, a pronounced gap in our knowledge and understanding of both quantities and processes that relate to the interaction of charge carriers and excitations, specifically excitonic effects. This is at least in part because the methods required to access these effects come with a significant computational burden.
Two materials that are known to be very promising for scintillator applications are LaBr$_3$:Ce (see e.g., Ref.~\onlinecite{LoeDorEij01}) and SrI$_2$:Eu (see e.g., Refs.~\onlinecite{CheHulDro08, WilLoeGlo08, HawGroCui08, ChePayAsz09}), both of which exhibit very high luminosity and significantly improved energy resolution compared to NaI. While LaBr$_3$ has already been characterized rather extensively both experimentally \cite{DorLoeVin06, DotMcGHar07, Dor10, BizDor07} and theoretically \cite{BizDor07, VanJafKer10, CanChaBou11, AndKolDor07, Sin10, McIGaoTho07, AbeSadErh12}, information on the properties of SrI$_2$, in particular its electronic structure, is sparse \cite{Sin08}. Even fundamental quantities such as the band gap are still to be determined consistently and the influence of excitonic and polaronic effects is entirely unknown at this point. Yet, a deeper understanding of this material is crucial since properties such as the band gap, exciton binding energies and dielectric functions constitute essential building blocks for any theoretical study of e.g., free carrier transport, polaron formation and migration as well as Auger recombination. In addition, they represent important parameters for the interpretation of experimental data.
The objective of the present work is to alleviate this situation by providing predictions for electronic and optical properties on the basis of modern theoretical spectroscopy techniques. We specifically aim at accurately describing band structures and densities of states (including fundamental band gaps), optical absorption spectra, exciton binding energies and exciton localization. In order to assess the reliability of our parameter-free simulations and to put our results in context, we also carry out calculations for the classic scintillator NaI, for which extensive experimental data are available.
The remainder of this paper is organized as follows. In \sect{sect:computational_parameters} we summarize the computational approach employed in this work. A convenient decomposition of the six-dimensional exciton wave function into single-particle densities and an envelope function is introduced in \sect{sect:envelope_function}. Results for band structure, dielectric properties and absorption spectra are presented in Sects.~\ref{sect:qp} and \ref{sect:absorption}. Exciton binding energies as well as an analysis of their spatial extent are the subject of Sects.~\ref{sect:binding_energies} and \ref{sect:excdens}. Finally, we summarize the results and discuss them in the context of the scintillation properties of NaI and SrI$_2$ in \sect{sect:discussion}.
\section{Theoretical approach}
Since this work aims at a precise description of electronic structure as well as optical properties it requires techniques beyond standard density functional theory (DFT). Specifically, it is essential to employ computational schemes that are capable of describing quasiparticle (QP) and excitonic effects. To this end, we use a combination of DFT and $G_0W_0$ calculations \cite{Hed65} to describe single-particle excitations that govern band structures and densities of states. Two-particle (electron-hole) excitations have to be taken into account when computing exciton binding energies, dielectric functions, and optical absorption spectra. This is achieved by solving the Bethe-Salpeter equation (BSE) for the optical polarization function \cite{SalBet51, OniReiRub02}.
\subsection{Computational parameters}
\label{sect:computational_parameters}
In this work we use experimental values for the crystallographic geometries. Sodium iodide adopts the rocksalt structure with a lattice constant of 6.48\,\AA. SrI$_2$ belongs to space-group Pbca (number 61 in the International Tables of Crystallography, Ref.~\onlinecite{itca}). Its unit cell contains 24 atoms with nine internal degrees of freedom. The experimentally determined lattice parameters are $a=15.22\,\AAA$, $b=8.22\,\AAA$, and $c=7.90\,\AAA$ with the following internal coordinates: Sr on Wyckoff site $8c$ ($x=0.1105$, $y=0.4505$, $z=0.2764$), I(1) on Wyckoff site $8c$ ($x=0.2020$, $y=0.1077$, $z=0.1630$), and I(2) on Wyckoff site $8c$ ($x=-0.0341$, $y=0.2682$, $z=0.0054$) \cite{BarBecGru69}.
All calculations were carried out using the projector-augmented wave method to describe the electron-ion interaction \cite{Blo94, KreJou99}. We used a plane-wave expansion for the wave functions with a cutoff energy of 228\,eV for both materials. DFT and $G_0W_0$ electronic structures were generated using the Vienna \emph{Ab-initio} Simulation Package \cite{KreHaf93, KreHaf94, KreFur96a, KreFur96b, ShiKre06}. The corresponding BSE implementation has been discussed in Refs.~\onlinecite{SchGluHah03, FucRodSch08, RodFucFur08}.
For NaI and SrI$_2$ we use the generalized-gradient approximation \cite{PerBurErn96} and the local-density approximation \cite{CepAld80}, respectively, to represent exchange-correlation effects at the DFT level. Brillouin-zone (BZ) integrations for both DFT and $G_0W_0$ calculations were carried out by summing over $\Gamma$-centered $6\,\times\,6\,\times\,6$ Monkhorst-Pack\cite{MonPac76} (MP) grids in the case of NaI. For SrI$_2$ the density of states (DOS) was computed on the DFT level using a $6\,\times\,11\,\times\,11$ $\Gamma$-centered MP $\kb$-point grid while the band structure was obtained on the basis of a $4\,\times\,7\,\times\,7$ mesh. Calculations of $G_0W_0$ QP energies were carried out for several $\kb$-point grids up to $\Gamma$-centered $3\,\times\,4\,\times\,4$ and up to 2880 bands were included in the calculations to achieve convergence of the dielectric function entering the screened interaction $W$. Spin-orbit coupling (SOC) was taken into account using the projector-augmented wave implementation described in Ref.~\onlinecite{AbeSadErh12}. Based on convergence tests we estimate that these computational parameters yield QP shifts around the band edges that are converged to within 50\,meV for both materials.
In order to describe optical properties, we calculated dielectric functions using regular MP meshes of $16\,\times\,16\,\times\,16$ and $4\,\times\,6\,\times\,6$ $\kb$-points for NaI and SrI$_2$, respectively. For a more efficient sampling of the BZ, each grid was displaced by a small random vector. For NaI and SrI$_2$ we used 32 and 48 conduction bands, respectively. In addition, the number of Kohn-Sham states contributing to the BSE Hamiltonian is limited by the BSE cutoff energy which specifies the maximum non-interacting electron-hole pair energy that is taken into account. Here we used a BSE cutoff energy of at least 13.0 and 5.0\,eV for NaI and SrI$_2$ to set up the excitonic Hamiltonian from independent electron-hole pairs. The screened Coulomb interaction $W$ was constructed assuming the $\boldsymbol q$-diagonal model function of Bechstedt \etal\ \cite{BecDelCap92} and static electronic dielectric constants of 3.69 and 4.58 as obtained for NaI and SrI$_2$ on the DFT level.
The Coulomb singularity present in the BSE Hamiltonian effectively prevents accurate one-shot calculations of exciton binding energies for a given $\kb$-point mesh \cite{PusAmb02, FucRodSch08}. This is somewhat alleviated by the introduction of so-called singularity corrections. Still, present implementations are left with error terms that are proportional to the inverse number of $\kb$-points. Therefore, the most efficient scheme currently consists of extrapolating calculated binding energies for a number of $\kb$-point grids to the continuum limit \cite{FucRodSch08}. We sample the BZ using both regular and hybrid $\kb$-point meshes as defined in Ref.~\onlinecite{FucRodSch08}. For NaI we employed regular $\Gamma$-centered grids up to $21\times21\times21$ as well as $11^3:6^3:x^3$ hybrid grids with $x$=$\{22, 25\,\nicefrac[]{2}{3}, 29\,\nicefrac[]{1}{3}, 33\}$. For SrI$_2$ the finest regular meshes we used were $7\times 9\times 9$, $5\times 11\times 11$, and $3\times 13\times 13$ and, in addition, we employed hybrid meshes of $5\times5\times5:4\times2\times2:x$ with $x$=$\{5\times15\times15,5\times20\times20, 5\times25\times25 \}$ for finer sampling around the $\Gamma$-point in the $\kb_y$ and $\kb_z$ directions.
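Since the residual error is proportional to the inverse number of \(\kb\)-points, the continuum limit can be read off from a linear fit in \(1/N_k\), as in the short sketch below (the sample data in the accompanying check are purely illustrative, not computed values).

```python
import numpy as np

# Sketch of the continuum-limit extrapolation: exciton binding
# energies from successively denser k-point grids are fit linearly
# in 1/N_k and evaluated at 1/N_k -> 0. Input data are illustrative.
def extrapolate_binding_energy(n_kpoints, e_bind):
    x = 1.0 / np.asarray(n_kpoints, dtype=float)
    slope, intercept = np.polyfit(x, np.asarray(e_bind, dtype=float), 1)
    return intercept          # extrapolated binding energy
```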
\subsection{Electron-hole densities}
\label{sect:envelope_function}
The electron-hole separation of a free exciton is readily obtained within the effective mass approximation for a two-band model \cite{Ell57}. In cases where this approximation is not applicable, the BSE wave functions must be analyzed instead. To this end, Dvorak \etal\ have computed the relative distribution of the electron around the hole as well as spread and localization lengths by a straightforward integration of the full excitonic wave function \cite{DvoWeiWu13}. The BSE wave functions and the electron-hole distribution function are, however, \mbox{six-dimensional} functions of the electron and hole coordinates, which renders the problem, at least computationally, more involved. Furthermore, when discussing the spatial distribution of electrons or holes, a charge density (or wave function) is usually constructed by fixing the electron (hole) at the position of an atom belonging to the conduction (valence) band edge \cite{RohLou98, IsmLou05, HumPusAmb04, CudAttTok10}. This choice is, however, somewhat arbitrary, since the Bloch states may be strongly hybridized. To alleviate this problem, and to avoid representing a density over the entire Born-von-K\'arm\'an cell on a dense Cartesian mesh when calculating the electron-hole separation, we follow another route.
The electron-hole pair distribution function $\rho(\rb_e, \rb_h)$ specifies the probability of simultaneously finding an electron at $\rb_e$ and a hole at $\rb_h$. In the single-particle band picture we thus can write
\begin{align}
\rho(\rb_e, \rb_h)=\rho_e(\rb_e) \rho_h(\rb_h).
\end{align}
In this approximation both electron and hole can be completely delocalized over the entire crystal. However, since electron and hole are coupled through the Coulomb interaction the distribution function assumes a more complex form
\begin{align}
\rho(\rb_e, \rb_h)=\rho_e(\rb_e) \rho_h(\rb_h) g_{eh}(\rb_e, \rb_h),
\end{align}
where $g_{eh}$ represents the explicit pair correlation between electron and hole. Thus, given an electron at $\rb_e$ the probability of finding a hole at $\rb_h$ is $\rho_h(\rb_h) g_{eh}(\rb_e, \rb_h)$. In the following we will derive explicit expressions for the excitonic single-particle densities and an approximate correlation function that turns out to be solely a function of the electron-hole separation. Thereby, we will obtain a partitioning of the pair distribution function into two single-particle {\em cell-periodic} densities, representing the local variations of electron and hole densities, as well as an associated pair distribution function that is coarse-grained over each periodic cell.
The exciton wave function can be represented in terms of electron and hole coordinates via the amplitude \cite{Str88, RohLou00}
\begin{align}
\chi_j(\rb_e, \rb_h) &= \braket{N;0}{\psihd(\rb_e)\psih(\rb_h) }{N;j},
\end{align}
where $\ket{N;j}$ is the $j$-th excited state of the $N$-electron system, and $\psih(\rb)$ and $\psihd(\rb)$ are the standard annihilation and creation operators acting on coordinate $\rb$, respectively. Translated into the basis of Bloch orbitals this becomes
\begin{align}
\chi_j(\rb_e, \rb_h) &= \sum_{\kb c v} A^j_{\kb c v} \phi^*_{\kb v}(\rb_h) \phi_{\kb c}(\rb_e),
\label{eq:exc_wf}
\end{align}
where the coefficients $A^j_{\kb c v}$ are the eigenvectors of the BSE matrix. Therefore we can write the associated electron-hole density as
\begin{align}
\rho^{j}_{eh}(\rb_e, \rb_h) &= \sum_{\kb c v} \sum_{\kb' c' v'} A^{j*}_{\kb c v} A^{j}_{\kb' c' v'} \notag \\
& \times \phi_{\kb v}(\rb_h) \phi^*_{\kb c}(\rb_e) \phi^*_{\kb' v'}(\rb_h) \phi_{\kb' c'}(\rb_e). \label{eq:excden}
\end{align}
By integrating over either the electron or hole coordinate we obtain the cell-periodic hole and electron charge densities, respectively,
\begin{subequations}
\begin{align}
\rho^j_e(\rb_e) &= \sum_\kb \sum_{c\,c'} \phi^*_{\kb c}(\rb_e) \phi_{\kb c'}(\rb_e) \sum_v A^{j*}_{\kb c v} A^{j}_{\kb c' v}, \label{eq:rhoe} \\
\rho^j_h(\rb_h) &= \sum_\kb \sum_{v\,v'} \phi_{\kb v}(\rb_h) \phi^*_{\kb v'}(\rb_h) \sum_c A^{j*}_{\kb c v} A^{j}_{\kb c v'}. \label{eq:rhoh}
\end{align}
\end{subequations}
We note that the electron-hole density of the entire $N$-electron system can be written as $\rho^j=\rho^{0}+\rho^j_e-\rho^j_h$, where $\rho^0$ is the ground state charge density. This can be seen by expanding the $N$-electron exciton wave function $\chi_j(\rb_1,\ldots,\rb_N)=\ibraket{\rb_1,\ldots,\rb_N}{N;j}$ into Slater determinants.
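The structure of the single-particle densities in Eqs.~(\ref{eq:rhoe}) and (\ref{eq:rhoh}) can be checked numerically. The following minimal Python sketch (with a random, normalized toy eigenvector standing in for an actual BSE solution) verifies that the band-space density matrices entering these expressions are Hermitian and carry unit trace, so that $\rho^j_e$ and $\rho^j_h$ each integrate to exactly one particle:

```python
import numpy as np

rng = np.random.default_rng(0)
nk, nc, nv = 4, 3, 2  # toy numbers of k-points, conduction and valence bands

# Toy BSE eigenvector A[k, c, v], normalized so that sum |A|^2 = 1
A = rng.normal(size=(nk, nc, nv)) + 1j * rng.normal(size=(nk, nc, nv))
A /= np.linalg.norm(A)

# Band-space density matrices entering Eqs. (rhoe)/(rhoh):
# D_e[k, c, c'] = sum_v A*[k,c,v] A[k,c',v],  D_h[k, v, v'] = sum_c A*[k,c,v] A[k,c,v']
D_e = np.einsum('kcv,kdv->kcd', A.conj(), A)
D_h = np.einsum('kcv,kcw->kvw', A.conj(), A)

# rho_e(r) = sum_k sum_{cc'} phi*_{kc}(r) phi_{kc'}(r) D_e[k,c,c'] and analogously
# for rho_h.  Because A is normalized, both density matrices have unit trace,
# so each density integrates to one electron (hole).
tr_e = np.einsum('kcc->', D_e).real
tr_h = np.einsum('kvv->', D_h).real
print(tr_e, tr_h)  # both ~1.0
```

With orthonormal Bloch orbitals, contracting these Hermitian density matrices with the $\phi^*\phi$ products then yields real, cell-periodic densities.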
Returning to the question of electron-hole separation we again consider the two-particle density of \eq{eq:excden} and make the variable substitution $\rb_{e/h}=\rb'_{e/h}+\Rb_{e/h}$, where now $\rb'_{e/h}$ is constrained to one unit cell and $\Rb_{e/h}$ denotes a lattice vector. By integrating the resulting two-particle density over $\rb'_e$ and $\rb'_h$ we obtain a coarse-grained pair distribution function
\begin{align}
\widetilde{g}^{j}_{eh}(\Rb_e, \Rb_h) &=\sum_{\kb c v} \sum_{\kb' c' v'} A^{j*}_{\kb c v} A^{j}_{\kb' c' v'} \nonumber \\
\times & \exp \left[
i\left(\kb'-\kb\right)\cdot \left(\Rb_e-\Rb_h\right)
\right] I^*_{\kb v,\kb' v'} I_{\kb c,\kb' c'}, \label{eq:envelope}
\end{align}
where $I_{\kb n,\kb' n'}=\int d\rb \phi^*_{\kb n}(\rb) \phi_{\kb' n'}(\rb)$ is an integral over one unit cell only. Also note that $\widetilde{g}_{eh}$ of \eq{eq:envelope} is only a function of the electron-hole separation $\Rb_e-\Rb_h$ and thus resembles an envelope function. As a result, we can approximately write the electron-hole pair distribution function as
\begin{align}
\rho^j_{eh}(\rb_e+\Rb_e,\rb_h+\Rb_h) =
\rho^j_e(\rb_e)\rho^j_h(\rb_h) \widetilde{g}^{j}_{eh}(\Rb_e-\Rb_h).
\label{eq:eh_pair_distribution}
\end{align}
In this simplified picture, we can therefore write the exciton pair distribution function as an envelope function $\widetilde{g}_{eh}$ modulated by the product of two periodic single-particle charge densities, $\rho^j_e$ and $\rho^j_h$.
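As a sanity check on \eq{eq:envelope}, the following Python sketch evaluates $\widetilde{g}_{eh}$ for random toy amplitudes and one-cell overlap matrices, constrained only by the Hermiticity $I_{\kb' n',\kb n}=I^*_{\kb n,\kb' n'}$ that any true overlap integral obeys. The result is real, as a coarse-grained probability must be, and by construction depends only on the separation $\Rb_e-\Rb_h$:

```python
import numpy as np

rng = np.random.default_rng(1)
nk, nc, nv = 6, 2, 2
ks = 2 * np.pi * np.arange(nk) / nk  # 1D k-grid (lattice constant set to 1)

# Toy BSE amplitudes A[k, c, v], normalized
A = rng.normal(size=(nk, nc, nv)) + 1j * rng.normal(size=(nk, nc, nv))
A /= np.linalg.norm(A)

def herm_overlap(n):
    # toy one-cell overlap integrals with I_{k'n',kn} = I*_{kn,k'n'} imposed
    X = rng.normal(size=(nk, n, nk, n)) + 1j * rng.normal(size=(nk, n, nk, n))
    return 0.5 * (X + X.conj().transpose(2, 3, 0, 1))

Ic, Iv = herm_overlap(nc), herm_overlap(nv)

def envelope(R):
    # g(R) = sum A*_{kcv} A_{k'c'v'} exp[i(k'-k) R] I*_{kv,k'v'} I_{kc,k'c'}
    phase = np.exp(1j * (ks[None, :] - ks[:, None]) * R)  # indices [k, k']
    return np.einsum('kcv,pdw,kp,kvpw,kcpd->',
                     A.conj(), A, phase, Iv.conj(), Ic)

g = np.array([envelope(R) for R in range(nk)])
print(np.max(np.abs(g.imag)))  # ~0: g is real by the Hermiticity of I
```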
\section{Results}
\subsection{QP band structures}
\label{sect:qp}
\newcommand{0.65}{0.65}
\begin{figure}
\includegraphics[scale=0.65]{fig1.eps}
\caption{
(a) NaI non-relativistic QP energies from $G_0W_0$ (empty circles) superimposed on DFT+$\Delta$ band structure (colored lines). The spin-orbit split valence bands are shown by black lines and filled squares. (b) Partial density of states for NaI as obtained from DFT+$\Delta$ calculations.
}
\label{fig:nai_bands}
\end{figure}
\begin{figure*}
\includegraphics[scale=0.65]{fig2a.eps}
\includegraphics[scale=0.65]{fig2b.eps}
\includegraphics[scale=0.65]{fig2c.eps}
\caption{
Partial density of states for SrI$_2$ obtained from DFT calculations (a) without and (b) with spin-orbit coupling. The major change upon the inclusion of SOC effects is an upward shift of the valence band edge by 0.3\,eV, which is indicated by the gray bar in panel (b).
(c) QP energy shifts from $G_0W_0$ calculations with respect to initial DFT eigenenergies.
(d) QP energies from $G_0W_0$ superimposed onto DFT+$\Delta$ band structure. The two different sets of $G_0W_0$ QP energies correspond to $\Gamma$-centered $\kb$-point grids with (1) $2\times2\times2$ and (2) $3\times4\times4$ divisions, respectively.
}
\label{fig:SrI2_dos}
\label{fig:SrI2_gw}
\end{figure*}
In the case of NaI, DFT calculations yield a band gap of 3.7\,eV. This value reduces to 3.4\,eV when SOC is taken into account, which is considerably less than experimental values that lie in the range between 5.8 and 6.3\,eV \cite{TeeBal67, BroGahFuj70, PooJenLec74}. The $G_0W_0$ band gap is 5.8\,eV without and 5.4\,eV with SOC, in much better agreement with experiment, see \fig{fig:nai_bands}. The major effect of SOC is to cause a splitting of the I\,$5p$ band by about 0.9\,eV at the $\Gamma$-point.
The partial DOS of SrI$_2$ is shown in \fig{fig:SrI2_dos} calculated both without and with SOC taken into account. The uppermost valence band is composed almost exclusively of I\,$5p$ states whereas the conduction band is dominated by Sr\,$4d$ states with a contribution from Sr\,$5s$ states at the bottom of the conduction band. The largest change due to SOC is a splitting of 1.15\,eV observed for the Sr\,$4p$ states that lie between 15.8 and 17.0\,\eV below the valence band maximum. The band gap decreases by 0.3\,eV upon inclusion of SOC due to a shift of the valence band maximum (VBM). Otherwise the structure of both valence and lower conduction band states is largely preserved. Figure~\ref{fig:SrI2_gw}(c) demonstrates that there are no qualitative changes in going from DFT to $G_0W_0$ within a band manifold. The valence band width increases from 2.86\,eV to 3.00\,eV and the band gap from 3.7 to 5.5\,eV. The band characters and their ordering are only weakly affected and, as a result, a rigid upward shift (a scissor correction, referred to as DFT+$\Delta$ from here on) of the DFT conduction band yields reasonable agreement between DFT+$\Delta$ and $G_0W_0$ QP energies. This is demonstrated in \fig{fig:SrI2_gw}(d), which shows QP energies from $G_0W_0$ superimposed on a DFT band structure.
Due to the prohibitive computational cost, SOC effects were not taken into account on the $G_0W_0$ level for SrI$_2$. Our calculations for NaI as well as LaBr$_3$ (see Ref.~\onlinecite{AbeSadErh12}), however, show that DFT and $GW$ yield similar results for SOC induced shifts. We therefore can employ the band gap reduction calculated on the DFT level [compare \fig{fig:SrI2_dos}(a,b)] to estimate the SOC corrected band gap of SrI$_2$ as 5.2\,eV.
\subsection{Dielectric functions and absorption spectra}
\label{sect:absorption}
\begin{figure}
\includegraphics[scale=0.65]{fig3.eps}
\caption{
(a) Imaginary part of the dielectric function and (b) absorption spectrum for NaI. The vertical arrow indicates the fundamental QP band gap (excluding SOC effects). The experimental absorption data is taken from Ref.~\onlinecite{TeeBal67}.
}
\label{fig:dielec}
\end{figure}
\begin{figure}
\includegraphics[scale=0.65]{fig4.eps}
\caption{
(a) Imaginary part of the dielectric function and (b) absorption spectrum for SrI$_2$. Panel (b) also shows excitation spectra recorded by Pankratov \etal\ (Ref. \onlinecite{PanPopShi13}). The vertical arrow indicates the fundamental QP band gap (excluding SOC effects).
}
\label{fig:absorp}
\end{figure}
We now address two-particle excitations and optical properties based on the dielectric function. As discussed above, the good agreement between DFT+$G_0W_0$ and rigidly shifted DFT band structures (DFT+$\Delta$) enables us to use the latter as the starting point for the BSE calculation. In this work we use values of 2.03\,eV and 1.82\,eV for the rigid shift $\Delta$ for NaI and SrI$_2$, respectively.
We first study the spectroscopic properties of NaI by comparing the imaginary part of the dielectric function computed using DFT+$\Delta$ to the one obtained from BSE calculations. Figure~\ref{fig:dielec} reveals a pronounced peak with large oscillator strength near the absorption onset related to an excitonic bound state, which is not captured by the single-particle DFT picture. Overall the BSE result exhibits a pronounced redistribution of peak weights leading to structural changes in the dielectric function. One also notices a red shift of the entire spectrum in going from DFT+$\Delta$ to BSE. These features are attributed to excitonic effects and are also apparent in the predicted absorption spectrum, which is shown in \fig{fig:dielec}(b) along with experimental data recorded at 10\,K \cite{TeeBal67}. The BSE spectrum is overall in good agreement with the experimental data, in particular if compared to the DFT+$\Delta$ result. The most pronounced difference occurs between 6.5 and 7.5\,eV, where the experimental spectrum exhibits two peaks whereas there is only one distinct feature in the BSE data.
On the basis of the similarity between the absorption spectra of gaseous xenon and the iodide ion, Teegarden and Baldini \cite{TeeBal67} proposed that the two lowest peaks, which are separated by approximately 1\,eV, originate from the spin-orbit split $5p6s$ atomic state. In fact, the separation of the two features is comparable to the spin-orbit splitting of 0.9\,eV calculated for the I\,$5p$ band on the DFT/$G_0W_0$ level [see \fig{fig:nai_bands}(a)]. Unfortunately, SOC is currently not included in our BSE calculations and thus we cannot directly assess this assignment. It should, however, be noted that the second and third peaks in the experimental spectrum, which overlap with a single feature in the BSE spectrum, are also separated by about 0.9\,eV.
Figure~\ref{fig:absorp} displays the predicted dielectric function and absorption spectrum of SrI$_2$ at the DFT+$\Delta$ and BSE levels. As in the case of NaI the BSE spectra show a strong red shift with respect to DFT+$\Delta$ and a redistribution of spectral weight due to excitonic effects. The absorption spectra are rather featureless but the onset of absorption clearly lies below the fundamental QP band gap and a shoulder is apparent around 5.5\,eV. The latter feature is corroborated by experimental emission spectra \cite{PanPopShi13}.
\subsection{Exciton binding energies}
\label{sect:binding_energies}
\begin{figure}
\centering
\includegraphics[scale=0.65]{fig5.eps}
\caption{
Convergence of the binding energy of the lowest exciton state in NaI with respect to $\kb$-point sampling.
}
\label{fig:bse_convergence_NaI}
\end{figure}
As mentioned in \sect{sect:computational_parameters} the slow convergence of exciton binding energies with $\kb$-point sampling requires an extrapolation scheme \cite{FucRodSch08}, the applicability of which is contingent upon reaching the linear regime. We illustrate this approach for NaI in \fig{fig:bse_convergence_NaI}, where it can be seen that linear behavior is not quite reached even for regular meshes containing as many as $21\,\times\,21\,\times\,21$ $\kb$-points. However, denser sampling was possible by using hybrid $\kb$-point meshes \cite{FucRodSch08}, which are well suited for a material with approximately parabolic bands. This denser sampling justifies a linear fit of the exciton binding energy (see \fig{fig:bse_convergence_NaI}) and extrapolation yields a value of 216\,meV for the lowest excitonic bound state. This number is in good agreement with experimental measurements at 80\,K, which yield a binding energy of 240\,meV \cite{EbyTeeDut59}.
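The extrapolation procedure itself is simple. The sketch below (Python, with synthetic binding energies that follow an assumed $E_b(N_k)=E_\infty + c/N_k$ form; the coefficient values are illustrative, not our computed data) shows how the continuum-limit value is read off as the intercept of a linear fit in $1/N_k$:

```python
import numpy as np

# Synthetic binding energies (meV) obeying E_b(N_k) = E_inf + c / N_k,
# mimicking the leading 1/N_k error of the singularity correction.
E_inf, c = 216.0, -900.0                      # assumed, for illustration only
n_k = np.array([11**3, 14**3, 17**3, 21**3])  # total k-points of regular grids
E_b = E_inf + c / n_k

# Linear fit of E_b against 1/N_k; the intercept is the continuum-limit value.
slope, intercept = np.polyfit(1.0 / n_k, E_b, 1)
print(round(intercept, 1))  # -> 216.0
```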
\begin{figure}
\includegraphics[scale=0.65]{fig6.eps}
\caption{
Convergence of exciton binding energies in SrI$_2$ from BSE with respect to $\kb$-point sampling for (a) dipole forbidden and (b,c) dipole allowed transitions. Empty and filled symbols denote binding energies calculated using standard and hybrid meshes, respectively. Arrows and gray bars indicate extrapolated binding energies and associated error estimates.
}
\label{fig:bse_convergence_SrI}
\end{figure}
As illustrated in \fig{fig:SrI2_gw}(d), the band structure of SrI$_2$ is much more complicated than the one for NaI. The minimum of the lowest conduction band is less pronounced and, as a consequence, a large number of very flat conduction bands lie within a small energy range and are likely to contribute to the lowest excitonic states. This situation is further exacerbated by the large number of very shallow valence bands with several extrema that are energetically close [see \fig{fig:SrI2_gw}(d)] and, hence, are also expected to contribute to the formation of the lowest excitonic states. In addition, unlike NaI, SrI$_2$ is not of cubic symmetry, hence, the exciton localization is generally anisotropic, an aspect that will be explored in more detail below. This situation motivates the application of anisotropic $\kb$-point meshes when converging exciton binding energies using both regular (empty symbols in \fig{fig:bse_convergence_SrI}) and hybrid grids (filled symbols in \fig{fig:bse_convergence_SrI}).
Due to the symmetry of the states that are involved, the lowest bound exciton state corresponds to an optically (dipole) forbidden transition. In \fig{fig:bse_convergence_SrI} we show the dependence of the binding energy on $\kb$-point sampling for this state as well as for the first states that are dipole allowed for $\vec{x}$-polarized ($\vec{E}\,||\,\vec{x}$) and $\vec{z}$-polarized ($\vec{E}\,||\,\vec{z}$) light, respectively. The most highly converged data point in \fig{fig:bse_convergence_SrI} corresponds to 685 $\kb$-points in the full BZ and requires setting up and (iteratively) diagonalizing a BSE matrix with a rank of over 140,000. We extract binding energies by linear extrapolation \cite{FucRodSch08} based on the two most highly converged data points for the data sets corresponding to $n_k^x=5$ and $n_k^x=7$ as indicated in \fig{fig:bse_convergence_SrI}.
In this fashion we obtain a binding energy of $195\pm25\,\meV$ for the dipole forbidden state from separate fits to regular and hybrid meshes, where the error estimate is based on the deviation between the fits. For the dipole allowed excitons we obtain binding energies of $180\pm20\,\meV$ ($\vec{x}$-polarized) and $190\pm25\,\meV$ ($\vec{z}$-polarized). These values are in rough agreement with the estimate of 260\,meV obtained by Pankratov \etal\ based on the effective mass approximation \cite{PanPopShi13}. Note, however, that this agreement should be considered rather fortuitous due to the strong anisotropy of the hole effective mass tensor. The analysis of the excitonic wave functions in the following section will provide further evidence for anisotropic excitonic properties.
\subsection{Exciton densities and electron-hole separation}
\label{sect:excdens}
\begin{figure*}
\includegraphics[scale=0.65]{fig7.eps}
\caption{
(a) The logarithm of the electron-hole density $\rho_{eh}(\rb_e, \rb_h)$ defined in \eq{eq:excden} for the exciton ground state of NaI in a $\left\{110\right\}$ plane assuming the hole is located at an iodine site, i.e. $\rb_h=0$. The black bar indicates the lattice constant. Also shown are logarithmic contour maps of the envelope function $\widetilde{g}_{eh}(\rb)$ defined in \eq{eq:envelope} for (b) the ground state exciton of NaI and (c,d) the lowest lying dipole forbidden exciton state of SrI$_2$ projected onto (001) and (100).
}
\label{fig:envelopes}
\end{figure*}
\begin{figure}
\centering
\includegraphics[scale=0.65]{fig8.eps}
\caption{
Envelope function $\widetilde{g}_{eh}(r)$ and exciton electron density $\rho_{eh}(r,0)$ for NaI as defined in Eqs.~(\ref{eq:envelope}) and (\ref{eq:excden}) projected onto the $\left<100\right>$ direction with the hole placed on an iodine atom ($\rb_h+\Rb_h=0$). The vertical dotted lines indicate the lattice spacing ($a_0=6.480\,\AAA$).
}
\label{fig:env_vs_den}
\end{figure}
The cell-periodic electron and hole densities [see Eqs.~(\ref{eq:rhoe}) and (\ref{eq:rhoh})] for the three degenerate exciton ground states in NaI are spherically centered around iodine atoms, reflecting the fact that the heavy hole bands are almost exclusively formed from iodine $p$-states. The conduction bands of NaI are strongly hybridized and thus it is not entirely obvious that the electron will also localize on iodine atoms. This picture is, however, corroborated by visualizing the electron charge density of the exciton with the hole placed at an iodine atom as shown in \fig{fig:envelopes}(a). The exciton ground state envelope function shown in \fig{fig:envelopes}(b) is spherically symmetric as well. The deviations from spherical symmetry at large electron-hole separation that are apparent in \fig{fig:envelopes}(b) are an artifact of finite $\kb$-point sampling and the periodic boundary conditions applied in our calculations. Fitting the envelope function to the charge density of a $1s$ hydrogen-like wave function $\rho(\rb)\propto\exp[-2r/a]$ yields a Bohr radius (defined as the most probable electron-hole distance) of $a=9.3\,\text{\AA}$ for a $14^3$ $\Gamma$-centered MP grid. This corresponds to an exciton binding energy of 210\,meV, in very good agreement with the extrapolated value of 216\,meV obtained above. Thus it is no surprise that the effective mass approximation is also reasonably accurate for NaI, yielding a binding energy of 256\,meV.
Figure~\ref{fig:env_vs_den} illustrates the relation between the envelope function $\widetilde{g}_{eh}(\rb)$ introduced in Eqs.~(\ref{eq:envelope}--\ref{eq:eh_pair_distribution}) and the electron-hole density $\rho_{eh}(\rb_e,\rb_h)$ in the case of NaI. While the latter exhibits fluctuations that obey the lattice periodicity, the envelope function is smooth and decays monotonically and exponentially. The lattice periodicity enters in \eq{eq:eh_pair_distribution} via the excitonic single-particle densities $\rho_e(\rb_e)$ and $\rho_h(\rb_h)$.
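The hydrogenic fit used above can be sketched in a few lines (Python; the radially averaged envelope is synthetic and noise-free, with the Bohr radius put in by hand for illustration):

```python
import numpy as np

# Synthetic radially averaged envelope mimicking a 1s hydrogen-like density
a_true = 9.3                     # Bohr radius in Angstrom (illustrative value)
r = np.linspace(2.0, 30.0, 50)   # electron-hole separations (Angstrom)
rho = np.exp(-2.0 * r / a_true)

# log(rho) = const - (2/a) r, so a follows from the slope of a linear fit
slope, _ = np.polyfit(r, np.log(rho), 1)
a_fit = -2.0 / slope
print(round(a_fit, 2))  # -> 9.3
```

For the $1s$ density the radial probability $r^2\exp[-2r/a]$ peaks at $r=a$, which is why $a$ can be read as the most probable electron-hole distance.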
\begin{figure}
\centering
\includegraphics[width=\columnwidth]{fig9.eps}
\caption{
Excitonic single-particle (a-c) electron $\rho_e(\rb_e)$ and (d-f) hole densities $\rho_h(\rb_h)$ according to Eqs.~(\ref{eq:rhoe}) and (\ref{eq:rhoh}) for the lowest energy dipole forbidden, $\vec{x}$-polarized, and $\vec{z}$-polarized excitons in SrI$_2$. Small purple and large green spheres indicate I and Sr ions, respectively.
}
\label{fig:sri_dens}
\end{figure}
SrI$_2$ again proves much more complex and appears not to be well suited for a simple description in terms of the effective mass equation. The hole and electron densities of the three states discussed in the previous section are displayed in \fig{fig:sri_dens}. In the dipole forbidden and $\vec{x}$-polarized states the electron densities exhibit $s$-like maxima centered on all atoms with equal amplitude, whereas the Sr\,$4d$-character dominates in the $\vec{z}$-polarized case. Note that the exciton single-particle densities shown in \fig{fig:sri_dens}(a-c) do not simply coincide with the single-particle density of the lowest lying conduction band state at $\Gamma$, which primarily exhibits Sr\,$5s$-character.
The hole densities shown in \fig{fig:sri_dens}(d-f) display I\,$p$-character for all three considered states. It is noteworthy that the dipole forbidden state only has appreciable hole density around iodine atoms belonging to the I(2) set of Wyckoff sites (compare \sect{sect:computational_parameters}) as shown in \fig{fig:sri_dens}(d). The same behavior is observed for other dipole forbidden states. In the dipole allowed excitons we find hole densities of equal amplitude on all iodine atoms.
The envelope functions for the SrI$_2$ excitons discussed here differ from those of NaI in several aspects:
({\em i}) They do not display hydrogenic ($\rho(\rb)\propto\exp[-2r/a]$) density dependence, at least not in the vicinity of the center-of-mass. Rather, by fitting several different functional types, an ellipsoidal Gaussian function was judged to most closely reproduce the BSE result. In principle, it might be possible to benchmark our results against the eigenvalues of the anisotropic effective mass equation \cite{Dev69, Sch97}. This is, however, beyond the scope of the present work.
({\em ii}) Because of the crystal symmetry the envelope functions in SrI$_2$ are anisotropic as illustrated in \fig{fig:envelopes}(c,d). For the dipole forbidden state the best fit resulted in anisotropic full widths at half maximum (FWHM) of 21, 17, and 21\,\AA\ in the $x$, $y$, and $z$-directions, respectively.
({\em iii}) The effective Bohr radius of 9.3\,\AA\ for NaI corresponds to a FWHM of 6.4\,\AA. As a result, despite the apparent dominance of the localized Sr\,$4d$ states of the conduction bands, the lowest lying SrI$_2$ excitons have a spread which is about two to three times larger than the $1s$ exciton in NaI. This is solely due to the single $s$-like minimum at the $\Gamma$-point [see top panel of \fig{fig:SrI2_gw}(d)].
\section{Summary and Conclusion}
\label{sect:discussion}
In this work we have studied, from first principles, single- and two-particle excitations in two prototypical scintillator materials, NaI and SrI$_2$. This was motivated by the need to understand the role of free excitons in scintillator non-proportionality. The two systems were judiciously chosen to represent two significantly different responses to incident radiation. NaI is a standard scintillator material with a strongly non-linear dependence of light yield on incoming photon/electron energy, while SrI$_2$ has recently been discovered to have excellent proportionality.
On the basis of DFT and $G_0W_0$ calculations we obtained rather similar band gaps of 5.5 and 5.2\,eV for NaI and SrI$_2$, respectively. The dielectric functions of NaI and SrI$_2$ displayed significant red shift of the oscillator strengths due to excitonic effects. As a result the optical spectra calculated from BSE deviate substantially both in intensity and structure from the single-particle RPA spectra calculated with DFT+$\Delta$ over the energy range considered in this study, i.e. up to approximately 6\,eV above the conduction band edge. Although the difference is expected to diminish at high energies, these results highlight the need for incorporating excitonic effects in the dielectric models used in the study of carrier and exciton generation by the photoelectrons during the cascade.
Almost all models for scintillation require knowledge of the binding energy of excitons, which determines their population relative to free carriers at very early times ($<1\,\text{ps}$) after the impact of the ionizing radiation. In this regard the main result of this work is that the calculated ground-state exciton binding energies fall in the range between 200 and 220\,meV for both NaI and SrI$_2$. Hence, the superior energy resolution of SrI$_2$ cannot be directly correlated with the population of free excitons.
To study the localization and spatial structure of excitons we introduced a decomposition of the six-dimensional exciton wave function into a product of single-particle densities and an envelope function. In the case of NaI we obtained a perfectly spherical envelope function, which can be fit very well assuming a hydrogen-like wave function. The binding energy obtained in this way is in good agreement with the values obtained directly from BSE calculations and via the effective mass approximation, which is expected given the hydrogen-like character of the excitonic ground state.
The SrI$_2$ low-energy excitons revealed more complex character, which could not be fitted using simple hydrogenic wave functions. The envelope function was found to possess anisotropic first moments in accord with its anisotropic effective mass tensor. More surprising is the finding that excitonic wave functions in SrI$_2$ are more extended than in NaI, which suggests that non-radiative exciton-exciton annihilation mediated by exchange is stronger in SrI$_2$. This is unexpected as such processes contribute to stronger non-proportional response in conventional models of scintillation. This study thus calls for more detailed investigation of the temporal evolution of carrier density during early stages of scintillation from first principles, and considering the relative importance of free versus self-trapped excitons in scintillator non-proportionality.
\begin{acknowledgments}
We acknowledge fruitful discussions with C.~R\"odl. This work was performed under the auspices of the U.S. Department of Energy by Lawrence Livermore National Laboratory under Contract DE-AC52-07NA27344 with support from the National Nuclear Security Administration Office of Nonproliferation Research and Development (NA-22). P.E. acknowledges support through the ``Areas of Advance -- Materials Science'' at Chalmers and computer time allocations by the Swedish National Infrastructure for Computing at NSC (Link\"oping) and C3SE (Gothenburg).
\end{acknowledgments}
Spontaneous symmetry breaking is a feature of many physical systems, often accompanying
a phase transition. For example, there are no preferred
directions in a liquid such as water --- it has complete
rotational symmetry. But when it freezes the resulting
crystal \emph{does} have a definite orientation --- the full
rotational symmetry is broken, leaving only a very restricted
group of symmetries of the crystal. It would be difficult for
a microscopic inhabitant of the crystal to infer the existence
of the larger symmetry!
When a liquid is cooled through its freezing point, crystals
of solid will start to form, often nucleated by small
impurities, but one cannot predict what their orientation will
be. The choice is random, and different choices may be made
at different centres, so that when the entire liquid has
frozen, there may be mismatches, leading to grain boundaries
or more subtle defects where the crystal lattice is deformed.
For instance, one can have an extra layer of atoms ending on a
linear `edge defect', where the crystal is strained. Similar
`topological defects' occur in many theories of fundamental
particle interactions.
Spontaneous symmetry breaking is a ubiquitous feature of our
theories of fundamental particle interactions. (See, for
example, \cite{Wei96}.) The famous electroweak theory, for
which Sheldon Glashow, Abdus Salam and Steven Weinberg won the
1979 Nobel Prize, exhibits an underlying symmetry between the
carriers of the electromagnetic force and the weak force ---
the photon and the W and Z particles. When we only had access
to low-energy experiments this symmetry was completely hidden,
its existence as difficult to guess as would be the full
symmetry of the crystal to its microscopic inhabitant. The
symmetry \emph{is} apparent in very high-energy experiments, at
energy scales well above 100 GeV ($10^{11}$ electron volts),
but we live in a low-temperature phase where it is
spontaneously broken by the so-called Higgs mechanism which
imparts masses to the W and Z bosons, while leaving the photon
massless.
Following the success of the electroweak model, physicists
started to ask if the strong interactions too could be brought
within a single unified framework. There is some experimental
support for this idea of `grand unification'. The strengths
of the fundamental interactions are determined by three
`coupling constants' $g_1,g_2,g_3$, which, despite their name,
depend weakly (logarithmically) on energy. If one
extrapolates from the low energies where they are measured one
finds that all three come together at an energy scale of about
$10^{16}$ GeV, strongly suggesting that something interesting
happens at that scale. Several different `grand unified
theories' (GUTs) have been proposed to embody this idea; the most
successful involve a symmetry between bosons and fermions,
called supersymmetry.
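The near-meeting of the couplings is easy to reproduce at one loop. The sketch below (Python) evolves the inverse couplings $\alpha_i^{-1}(\mu)=\alpha_i^{-1}(M_Z)-(b_i/2\pi)\ln(\mu/M_Z)$ using rough measured values at $M_Z$ and the one-loop coefficients of the minimal supersymmetric model in SU(5) normalization; all three pairwise crossings then land near $2\times10^{16}$ GeV:

```python
import numpy as np

# One-loop running: alpha_i^{-1}(mu) = alpha_i^{-1}(M_Z) - (b_i/2pi) ln(mu/M_Z)
MZ = 91.19                                # GeV
inv_alpha = np.array([59.0, 29.6, 8.5])   # rough alpha_i^{-1}(M_Z), i = 1, 2, 3
b = np.array([33.0 / 5.0, 1.0, -3.0])     # MSSM one-loop beta coefficients

def crossing_scale(i, j):
    # scale mu at which alpha_i^{-1}(mu) = alpha_j^{-1}(mu)
    t = 2.0 * np.pi * (inv_alpha[i] - inv_alpha[j]) / (b[i] - b[j])
    return MZ * np.exp(t)

for (i, j) in [(0, 1), (1, 2), (0, 2)]:
    print(f"g{i+1} and g{j+1} meet near {crossing_scale(i, j):.1e} GeV")
```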
Unfortunately particle energies of $10^{16}$ GeV are far
beyond any scale accessible to present or future laboratory
experiments, so GUTs are hardly likely to get the kind of
solid experimental support the electroweak theory received.
There is however one place --- or rather time --- where we
believe such energies did occur, the very early universe in
the first fraction of a second after the Big Bang. We know
that the temperature of the early universe, back to when it
was a few minutes old (the time of primordial nucleosynthesis),
was falling like the inverse square root of its age, $T\propto
1/\sqrt{t}$. If we trace its history back still further, our
best guess is that its temperature would have been above the
electroweak phase transition, at a temperature of around
$10^{15}$ K, when it was less than a microsecond old. Before
that the full electroweak symmetry would have been evident.
Even further back, if the grand unification idea is correct,
it would have gone through a GUT transition, at the
unimaginably early age of $10^{-35}$ s.
Of course no observers were around to check these
speculations. We have to rely instead on very indirect
evidence of these early phase transitions. One clue could
come from looking for the characteristic topological defects
that might have formed in the process of spontaneous symmetry
breaking. Such defects are often stable, so some at least of
them could have survived to much later times, perhaps even
to the present day. Many types of defects are possible,
depending on the nature of the symmetry breaking --- point
defects (monopoles), linear defects (cosmic strings), planar
defects (domain walls), as well as combinations of these.
However, the most interesting have turned out to be cosmic
strings. (Reviews can be found in \cite{HinK95,VilS94,Raj03}.)
\section{Cosmic strings}
The simplest model that shows what cosmic strings might be
like is one with a single complex scalar field $\phi$, which
we can also think of as a pair of real fields $\phi_{1,2}$,
with $\phi=\phi_1+i\phi_2$. The symmetry in this case is a
phase symmetry. We assume that the Hamiltonian which defines
the dynamics of the field is invariant under the phase
rotation $\phi\to\phi e^{i\alpha}$, or equivalently
\begin{equation}
\begin{split}
&\phi_1\to\phi_1\cos\alpha-\phi_2\sin\alpha,\\
&\phi_2\to\phi_1\sin\alpha+\phi_2\cos\alpha.
\end{split}
\end{equation}
In particular, there is a potential energy term $V$ which is a
function only of $|\phi|$, usually taken to be the `sombrero
potential'
\begin{equation}
V=\frac{1}{2}\lambda(|\phi|^2-\eta^2)^2=
\frac{1}{2}\lambda(\phi_1^2+\phi_2^2-\eta^2)^2,
\end{equation}
\begin{figure}[htb]
\centerline{\psfig{file=sombrero.eps}}
\vspace*{8pt}
\caption{The sombrero potential.}
\end{figure}
where $\eta$ is a constant (see Fig.~1). The important thing
to notice is that the minimum is not at $\phi=0$ but around the
circle $|\phi|=\eta$. There is a degenerate ground state: any
of the points $\phi=\eta e^{i\alpha}$ around the circle
defines a ground state.
At high temperatures, there are large fluctuations in $\phi$
and the central hump in the potential is unimportant. Then
the symmetry is obvious: fluctuations of $\phi$ in any
direction are equally likely. But as the temperature falls
there comes a point where the energy is too low to permit
fluctuations over the hump. Then the field tends to settle
towards one of the ground states. Which point on the circle
of minima is chosen is a matter of random choice, like the
choice of direction of fall for a pencil balanced on its tip.
The spontaneous choice then breaks the original symmetry.
When a large system goes through a phase transition like this,
each part of it has to make this random choice of the phase angle
$\alpha$, but as in the process of crystallization the choice
may not be the same everywhere --- one part of the system does
not `know' of the choice made in a distant part \cite{Kib76}.
Because there are terms in the energy involving the gradient of
$\phi$, the phase angle will tend to become more uniform as
the system cools. But this process may be frustrated; the
random choices may not fit together neatly. In particular,
there may be linear defects --- cosmic strings (see Fig.~2) ---
around which the phase angle varies by $2\pi$ (or a multiple of
$2\pi$).
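As a toy numerical check (an illustration, not part of the original
discussion), the statement that the phase varies by $2\pi$ around a
string can be verified by sampling $\phi$ on a closed loop around the
core and unwrapping its phase; the single-winding configuration
$\phi=\eta e^{i\alpha}$ used here is an assumed example.

```python
import numpy as np

# Sample phi = eta * exp(i*alpha) on a closed loop encircling a string
# core and recover the winding number from the total phase change.
# The value of eta and the single-winding profile are illustrative choices.
eta = 1.0
s = np.linspace(0.0, 2.0 * np.pi, 400)   # parameter along the loop
phi = eta * np.exp(1j * s)               # phase winds once around the circle

alpha = np.unwrap(np.angle(phi))         # continuous phase along the loop
winding = (alpha[-1] - alpha[0]) / (2.0 * np.pi)

print(round(float(winding)))             # -> 1: one string is enclosed
```

For a loop enclosing a string of winding number $n$ (or $n$ separate
strings), the same computation returns $n$.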
\begin{figure}[htb]
\centerline{\psfig{file=string.eps}}
\vspace*{8pt}
\caption{A cosmic string. The directions of the arrows
indicate the values of $\alpha$. The field $\phi$ vanishes in
the core of the string.}
\end{figure}
\section{Strings in the early universe}
The important thing about cosmic strings is their stability.
Continuity of $\phi$ means that a string cannot simply come to
an end; it must form a closed loop or extend to infinity (or
at least beyond the region we can see). For this reason,
strings, once formed, are hard to eliminate. In the core of
the string, $\phi$ vanishes, so there is trapped potential
energy (as well as gradient energy). So strings represent
trapped energy. In fact, the core of the string may be
regarded as a relic of the earlier high-temperature phase, and
the energy density there is similar to what it was before the
transition. To lower this energy, strings will tend to
shrink, smoothing out kinks, though at the same time they are
being stretched by the expansion of the universe. They may
cross and exchange partners (see Fig.~3a) --- a process known
as `intercommuting', which creates new kinks. A string may
also cross itself, forming a closed loop (see Fig.~3b). Such
\begin{figure}[htb]
\centerline{\psfig{file=intercommuting.eps}}
\vspace*{8pt}
\caption{a. Intercommuting of strings. b. Formation of a
closed loop.}
\end{figure}
loops may shrink and eventually disappear, but a long string
cannot do so directly. If a random tangle of strings was
formed in the early universe, there would always be some
longer than the radius of the visible universe, so a few would
remain even today.
Analogous defects are formed in many condensed-matter systems
undergoing low-temperature phase transitions. Examples are
vortex lines in superfluid helium, magnetic flux tubes in some
superconductors, and disclination lines in liquid crystals. A
nematic liquid crystal, for example, consists of rod-shaped
molecules that like to line up parallel to each other.
Everywhere in the liquid there is a preferred orientation, but
note that diametrically opposite directions are equivalent.
Around a disclination line, the preferred orientation rotates
by $180^\circ$ (see Fig.~4). Along the line, molecules do
not know which way to turn, and there is excess trapped
energy. It is easy to see in this case too that a
disclination line cannot simply end.
\begin{figure}[htb]
\centerline{\psfig{file=nematic.eps}}
\vspace*{8pt}
\caption{A disclination line in a nematic liquid crystal.}
\end{figure}
Experiments on cosmic strings would be impossible, even if we
knew for sure that they existed. But because there are these
various analogues of cosmic strings, many interesting
experiments have been done testing various aspects of the
cosmic string formation and evolution scenario \cite{Kib02},
though of course none of this can tell us whether there really
are cosmic strings in the universe. For that we have to turn
to astronomical observations, to which we shall return later.
In the late 80s and early 90s, cosmic strings generated a lot
of excitement among cosmologists because they seemed to offer
a plausible explanation for the origin of the density
inhomogeneities from which galaxies later developed. Because
they represent a lot of trapped energy, cosmic strings
thrashing about in the early universe would significantly
perturb the matter distribution, and it is not hard to get at
least a rough estimate of how big the effect would be. The key
parameter is the energy per unit length of the string which,
for reasons of relativistic invariance (the characteristic
speed of waves on the string is the speed of light, $c$), is
equal to the string tension $\mu$. The strings are
exceedingly thin, but very massive. Typically, for strings
produced at a possible GUT transition, the mass per unit
length $\mu/c^2$ would be of order $10^{21}$ kg m$^{-1}$; a
string of length equal to the solar diameter would be about as
massive as the Sun itself. The gravitational effects
of a string are governed by a dimensionless parameter,
$G\mu/c^4$, where $G$ is Newton's constant. In particular,
strings in the early universe would create density
perturbations in which the fractional change in density is
\begin{equation}
\frac{\delta\rho}{\rho}\sim\frac{G\mu}{c^4}.
\end{equation}
It happens that for GUT-scale strings, this ratio would be
roughly $10^{-6}$ or $10^{-7}$. This is just the right order
of magnitude to seed galaxy formation. In what follows, we
shall choose units in which $c=1$ and talk of the parameter
$G\mu$.
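The orders of magnitude quoted above are easy to check with a few
lines of arithmetic. In the sketch below, only the
$10^{21}$ kg m$^{-1}$ figure comes from the text; the constants are
standard SI values.

```python
# Order-of-magnitude check of the GUT-scale string numbers quoted above:
# a solar-diameter length of string weighs roughly a solar mass, and
# G*mu/c^4 comes out between 1e-7 and 1e-6.
G = 6.674e-11            # Newton's constant, m^3 kg^-1 s^-2
c = 2.998e8              # speed of light, m/s
mu_over_c2 = 1e21        # mass per unit length, kg/m (GUT-scale estimate)

solar_diameter = 1.39e9  # m
solar_mass = 1.99e30     # kg
string_mass = mu_over_c2 * solar_diameter
print(string_mass / solar_mass)     # ~ 0.7: about one solar mass

# Dimensionless G*mu/c^4, with mu = (mu/c^2) * c^2 the energy per length:
G_mu = G * mu_over_c2 / c**2
print(G_mu)                         # ~ 7e-7, i.e. between 1e-7 and 1e-6
```

Both claims come out as stated: the string segment weighs roughly a
solar mass, and $G\mu$ lands in the $10^{-7}$--$10^{-6}$ range.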
Unfortunately this nice idea, like so many others, succumbed
to the harsh realities of observation, in particular the
observations of the small anisotropies in the cosmic microwave
background. Measurements made by the COBE (COsmic Background
Explorer) satellite and more recently by WMAP (the Wilkinson
Microwave Anisotropy Probe) have yielded very precise
information about these anisotropies. In particular, the
angular power spectrum shows a series of peaks, so-called
`acoustic peaks', representing particular scales on which the
anisotropy is large (see Fig.~5). The cosmic string scenario
\begin{figure}[htb]
\centerline{\psfig{file=cmb.eps}}
\vspace*{8pt}
\caption{Angular power spectrum of CMB \cite{Pog+03}. The
solid (red) line corresponds to $B=0$, the dotted (black) line
to $B=0.05$, the short-dash (green) line to $B=0.1$, the long-dash
(light blue) line to $B=0.15$ and the dot-dash (dark blue) line to
$B=1$, where $B$ is the fraction of the power due to defects.}
\end{figure}
has no explanation for these features, predicting instead a
single broad, flattish hump. On the other hand, the peaks are
exactly what was predicted by the rival theory of inflation,
in which the origin of the density perturbations can be traced
to quantum fluctuations during an early period of accelerated
expansion.
So inflation won, and cosmic strings were relegated at best to
a minor supporting role, responsible for no more than 10\% of
the density perturbation at most. Many people lost interest
in the idea.
\section{Superstring theory}
There has, however, been a revival of interest, stemming
largely from developments in our understanding of a very
different kind of string --- the fundamental strings of
superstring theory, or its more modern incarnation, M-theory.
(For a recent review, see \cite{Gre00}.)
Fundamental string theory also originated in a search for
unification, in particular the unification of gravity with the
other interactions. This has long been the holy grail of
theoretical physics, but has proved remarkably elusive. A
major obstacle to creating a quantum theory of gravity
(unified or not) has been the appearance of infinities.
\emph{All} quantum field theories are plagued by infinities,
but for the other interactions we have learned how to tame
them, by the process of `renormalization'. This allows us to
extract meaningful, finite answers to physically significant
questions, hiding all the infinities in supposedly
unmeasurable `renormalization constants'. This has never been
a wholly satisfactory procedure, but in any case it fails
entirely in the case of gravity. No quantum version of
Einstein's theory of general relativity is renormalizable;
there are infinities that cannot be swept under the carpet.
The basic reason for the appearance of infinities is the fact
that we are dealing with point particles. They appear even in
classical electromagnetic theory, for example, in the
`self-energy' of a charged particle: the potential energy
stored in a spherical distribution of charge goes to infinity
as its radius tends to zero.
This observation led to a very intriguing proposal: perhaps the
fundamental objects of our theory should not be point
particles, but extended objects, in particular strings. The
basic idea is that all the particles we commonly think of as
elementary --- electrons, quarks, photons, and the rest ---
can be regarded as different modes or oscillation states of
a fundamental string.
Even with strings as the basic building blocks, it proved
difficult to eliminate all the infinities. One feature that
made it easier was to incorporate \emph{supersymmetry} into
the theory. This is a remarkable symmetry that relates bosons
(particles of integer spin in units of $\hbar$) to fermions
(with half-integral spin). In a perfectly supersymmetric
world, every boson would be partnered by a corresponding
fermion of equal mass, and \emph{vice versa}. Such partners
have never been seen, so if our world is fundamentally
supersymmetric, this symmetry too must be broken. But the
great virtue of supersymmetry is that it removes a lot of
infinities. This is because bosons and fermions often make
equal and opposite contributions, so that if they are exact
partners the infinities cancel.
One other strange feature was needed. Superstring models
were eventually constructed that did seem to be free of all
infinities, but \emph{not} in the familiar four dimensions
(three of space and one of time). The models were only
consistent in 10 dimensions (nine space and one time) --- or
even sometimes 26! So the suggestion emerged that our
universe is fundamentally ten-dimensional, but six of the
dimensions are curled up very small, so that from our
macroscopic perspective it looks four-dimensional --- just as a
drinking straw looks one-dimensional on a large scale.
These fundamental strings, as originally envisaged, were very
different in many ways from cosmic strings. Firstly, their
characteristic energy scale was much larger. Gravity is
strongly energy-dependent, but would become as strong as the
other interactions at the so-called \emph{Planck scale},
corresponding to an energy around $10^{19}$ GeV, at least a
thousand times higher than the GUT scale. Thus the parameter
$G\mu$ for a fundamental string was of order one.
Moreover, the fundamental strings could never have extended to
macroscopic size: if you try to expand a fundamental string it
will simply break into several small pieces.
But this picture has changed in several ways. There are other
mechanisms for reducing the dimension from 10 to 4, whose
effects are rather different, in particular the
\emph{brane-world} idea. Strings are not the only localized
objects in a superstring theory. There can be two-dimensional
\emph{membranes} or their higher-dimensional analogues, which
have come to be known as `$p$-branes' (of dimension $p$) --- a
particle is a 0-brane, a string a 1-brane, and so on. We have
D-branes (or D$p$-branes), where the D denotes Dirichlet
boundary conditions (see \cite{Gaunt98} for a review). Essentially this
means that in addition
to closed loops of fundamental string there may be open strings
whose ends are tied to D-branes. There are also anti-branes,
$\overline{\mathrm{D}}$-branes. A D$p$-brane and a
$\overline{\mathrm{D}p}$-brane have equal and opposite conserved
charges, which means that they attract each other. Open strings
usually give rise to matter fields while gravity comes from closed
loops. This means that matter may be trapped on a
D$p$-brane while gravity feels all the extra dimensions.
The brane-world concept has also emerged recently from M-theory.
M-theory is a conjectured umbrella theory that, in certain
limits, reduces to the five known string theories or to
supergravity. Supergravity was another attempt to unify all
the forces, including gravity and incorporating
supersymmetry, and was formulated in $11$ dimensions! It was
shown that string theory at low energy is described by
eleven-dimensional supergravity with the eleventh dimension
compactified on an interval with $\mathbb{Z}_2$ (mirror)
symmetry. The two boundaries of space--time are
10-dimensional planes, on which matter is confined. The
extra six dimensions are compactified, but these compact
dimensions are substantially smaller than the space between
the two boundary branes. Thus, viewed on a larger scale,
space--time looks five-dimensional with four-dimensional
boundary branes. This is an example of a brane world model
(see \cite{braxbd04} for a recent review).
More generally speaking, in brane-world models normal matter
is confined on a hypersurface (called a brane) embedded in a
higher-dimensional space (called the bulk). Our universe may
be such a brane-like object. Within the brane-world scenario,
constraints on the size of extra dimensions become weaker
because the standard particles propagate only in three spatial
dimensions, while gravity (and some other exotic matter)
can propagate in the bulk. Newton's law of gravity, however, is
sensitive to the presence of extra dimensions, and deviations
are expected on scales below the brane separation. Gravity has
so far been tested only on scales larger than a tenth of a
millimeter \cite{Hoyle01}; possible deviations below that scale can be
envisaged.
\section{Cosmic Superstrings}
One of the main motivations behind the brane-world scenario was to
try to explain the
vast difference between the Planck scale of gravity of $10^{19}$
GeV and the electroweak scale, which mediates radioactive decay,
of $10^2$ GeV. The idea was to introduce \emph{warped}
space-time. In special relativity we are used to the invariant
distance being given by $ds^2=dt^2-d\boldsymbol{x}^2$. When
space-time is `warped' this becomes
\begin{equation}
ds^2=e^{-A(\underline{y})}(dt^2-d\boldsymbol{x}^2)
-d\underline{y}^2,
\end{equation}
where $\underline{y}$ represents the extra dimensions and
$A(\underline{y})$, the `warp' factor, is a known, positive function. The
warp factor is essentially a gravitational red-shift in the
compact directions. In five-dimensional brane worlds this
`warping' of space-time was used to generate a hierarchy of
scales such that gravity, which propagates on both the brane
and in the bulk, could be at the Planck scale whilst the usual
physics, confined to the brane, could have an energy scale much
less than this. However, the warping of space-time is more
general than brane-worlds and arises in many string theory
models where there are $6$ compact extra dimensions. In some
cases the compact dimensions form a simple space such as a
sphere or a torus, with constant warp factor. But there are
also solutions in which the warp factor varies strongly as a
function of the compact dimensions, $\underline{y}$, with
special regions known as \emph{throats} where it falls sharply
to very small values, as shown in Figure~6. Here the
\begin{figure}[htb]
\centerline{\psfig{file=throat.eps}}
\vspace*{8pt}
\caption{Space with throat. In the middle we have the compactified
space, with the throat on the left of the figure. The
D$/\overline{\mathrm{D}}$ branes are in the throat.}
\end{figure}
warp factor is essentially $1$ in the central region and much
less than $1$ in the throat. This means that, for a
four-dimensional observer, the fundamental string mass per unit
length would appear to be
\begin{equation}
\mu=e^{-A(y)}\mu_0,
\end{equation}
where $\mu_0$ is the ten-dimensional scale. Consequently
fundamental strings may not be at such high energies after
all, even if $\mu_0$ is at the Planck scale.
Another recent development in string theory is the concept of
\emph{brane inflation}. The idea of \emph{inflation} is that,
in the very early Universe there was a period of exponential
expansion such that the visible Universe today was
exponentially small at very early times. Since string theory
is meant to be a theory of everything it should also explain
the Universe at very early times, and consequently inflation.
Recently it was discovered that string theory could give rise
to inflation (for a review see \cite{quevedo02}). Consider
a Universe containing an extra \emph{brane} and
\emph{anti-brane}. The brane and anti-brane are attracted to
each other in the same way as an electron and positron are.
However, when they annihilate they give rise to
lower-dimensional branes \cite{mahbub} rather than photons.
If the early universe contained an extra brane and
anti-brane separated in the compact dimension, then the
distance between them plays the role of a scalar field,
called the inflaton. The potential energy of these branes
drives the exponential expansion and therefore inflation. As
the branes approach the potential between them becomes
steeper before they annihilate. Once they annihilate lower
dimensional branes are formed. It was shown that D-strings
were generically formed in brane inflation
\cite{sarangitye}.
The most fully developed model of brane inflation involves a
D$3/\overline{\mathrm{D}3}$ pair at the bottom of a throat.
The D$3$-branes wrap the usual three spatial dimensions, so
they appear as points in the throat of the extra dimensions,
as shown in Figure~6. The inflation and subsequent brane
annihilation take place in the throat. After brane inflation
lower dimensional D-branes are formed, also in the throat. In
the above example D1-branes, or D-strings, are formed. In
addition, fundamental strings, called F-strings, can also be
formed. Since the brane annihilation took place in the throat,
where space-time is highly warped, the energy scale of these
strings is no longer the Planck scale but much lower.
Estimates give the range
$10^{-11}<G\mu<10^{-6}$, depending on details of the
theory (see \cite{polchinski04} for a review).
The idea of cosmic superstrings is not new --- they were
proposed as long ago as 1985 \cite{witten} --- but they were
dismissed as being at too high an energy scale and as unstable.
However, the recent developments in string theory mean that
not merely is this a distinct possibility but also perhaps the
best way of observing string theory. The cosmic superstrings
can have a range of values of string mass per unit length,
$\mu$, compatible with observations
and also the throat essentially provides a stabilizing
potential. These two features circumvent the previous problems
with cosmic superstrings.
Whilst both D- and F-strings can be formed in brane inflation
they are fundamentally different objects. D-strings are more
similar to the usual cosmic strings discussed in section~2 and
are essentially classical objects, though like all D-branes
they have a conserved charge. On the other hand F-strings are
quantum mechanical objects.
Finally it now appears that grand unified theories will
almost inevitably give rise to cosmic strings. In the long
road from M-theory to the low-energy physics we observe in the
laboratory, a natural route is via grand unified theories at
an intermediate stage. In grand unified theories the coupling
constants for the weak, electromagnetic and strong
interactions meet at a high energy scale of about $10^{16}$
GeV. However, the theory is only successful when it is
supersymmetric. A recent study \cite{jeannerot} considered all
possible grand unified theories, up to a certain level of
complexity, with all possible symmetry breaking schemes which
gave rise to the electroweak theory at low energy. All the
theories considered had a period of inflation. The ones that
were not in conflict with observations \emph{all} predicted
the formation of cosmic strings at the end of inflation.
Consequently it seems that cosmic strings are almost inevitable.
Since the grand unified theories studied were supersymmetric
it seems natural to study the nature of cosmic strings in
such theories. Supersymmetric theories give rise to two sorts
of strings, called D-term or F-term strings \cite{DDT}, where
the D and F refer to the type of potential required to break
the symmetry, and have nothing directly to do with the distinction
between the D- and F-strings discussed above.
A recent analysis of supersymmetric theories with a
D-term suggests that D-term cosmic strings may well be
D-strings \cite{dvali}. However, F-term strings are classical
objects, and not apparently related to D- or F-strings.
Nevertheless, the subtle relationships between different string
theories regarded as limits of M-theory may affect some of
these distinctions.
\section{Cosmology of D- and F-strings}
The cosmology of D-strings and F-strings is a little different
from that of ordinary cosmic strings. In section~3 it was
explained that when two cosmic strings meet they intercommute
and that loops are formed by a cosmic string
self-intersecting. For ordinary cosmic strings, the
probability of intercommutation is $1$. This is not the case
for D- and F-strings. For D-strings this is because they can
`miss' each other in the compact dimension and F-strings are
fundamentally quantum objects so their scattering can only be
computed with a quantum mechanical calculation. The
probability of intercommuting has been estimated to lie in the
range $10^{-3}<P<1$ for F-strings and $10^{-1}<P<1$ for
D-strings. Similarly the probability of a string
self-intersecting is reduced. This means that a network of
such strings could look different from that of cosmic
strings. There are suggestions that such a network would be
denser, with the distance between strings related to $P$, and
slower \cite{dvali&vilenkin, sakellariadou}. It is likely
that the net result would be to increase the number of string
loops, despite the reduction in string self-intersection. A
network of D- or F-strings could also emit exotic particles
as a result of the underlying superstring theory. Ordinary
cosmic strings emit particles, but those coming from cosmic
superstrings could have distinctive characteristics.
Another interesting possibility is that, because they are different strings,
when D- and F-strings meet they are unable to intercommute, instead forming
a three-string junction, with a composite DF-string, as shown in
Fig.~7. If this were the case then
\begin{figure}[htb]
\centerline{\psfig{file=network.eps}}
\vspace*{8pt}
\caption{Crossing of strings of different types.}
\end{figure}
they would not form loops very effectively. This could be a problem
since a usual cosmic string network loses energy via loop production.
It is also possible in string theory to have bound states of $p$
F-strings and $q$ D-strings! The evolution of such an exotic system
would be different from that discussed in section~3. It is possible
that such a network would become \emph{frozen}
and just stretch with the expansion of the universe. Consequently,
there is much to investigate in the cosmology of D- and F-strings.
\section{Observation of cosmic strings}
The most promising way of observing cosmic strings is by
searching for their very characteristic gravitational effects,
either as gravitational lenses or emitters of gravitational
waves.
The gravitational field around a straight, static string is
quite unusual. Particles in the vicinity feel no gravitational
acceleration, because in general relativity tension is a
negative source of gravity and, since tension equals energy
per unit length, their effects cancel. Space-time around the
string is locally flat, but not globally flat. In
cross-section, the space is cone-shaped, with a \emph{deficit
angle} $\delta=8\pi G\mu$, as though a wedge of angle $\delta$
had been removed and the edges stuck together. The deficit
angle is $\delta=5''\!\!.2 (G\mu/10^{-6})$, so for GUT-scale
strings it is a few seconds of arc.
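The arcsecond figure follows directly from $\delta=8\pi G\mu$; a
minimal check, using only the standard radian-to-arcsecond
conversion:

```python
import math

# Deficit angle delta = 8*pi*G*mu (radians); for G*mu = 1e-6 this
# should come out near the quoted 5.2 arcseconds.
G_mu = 1e-6                               # dimensionless, GUT-scale value
delta_rad = 8.0 * math.pi * G_mu
arcsec_per_rad = 180.0 * 3600.0 / math.pi # ~ 206265
delta_arcsec = delta_rad * arcsec_per_rad
print(round(delta_arcsec, 1))             # ~ 5.2 arcsec
```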
Thus the string acts as a cylindrical gravitational lens,
creating double images of sources behind the string, with a
typical angular separation of order $\delta$ (see Fig.~8). A
\begin{figure}[htb]
\centerline{\psfig{file=lensing.eps}}
\vspace*{8pt}
\caption{Gravitational lensing by a cosmic string.}
\end{figure}
string would yield a very special pattern of lensing. We
should expect to see an approximately linear array of lensed
pairs, each separated in the transverse direction. In most
cases the two images would have essentially the same magnitude. (The
exception would be if we see only part of one of the images.)
This is a very unusual signature; most ordinary gravitational
lenses produce images of substantially different magnitude,
usually an odd number of them \cite{gravlens}.
There are several factors that may complicate this picture.
Cosmic strings are not generally either straight or static.
Whenever strings exchange partners kinks are created that
straighten out only very slowly, so we expect a lot of
small-scale structure on the strings. Viewed from a large
scale, the effective tension and energy per unit length will
no longer be equal. Since the total length of a kinky
string between two points is greater, it will have a larger
effective energy per unit length, $U$, while the effective
tension $T$, the average longitudinal component of the tension
force, is reduced, so $T<\mu<U$. This means that there is a
non-zero gravitational acceleration towards the string,
proportional to $U-T$. Moreover, the strings acquire large
velocities, generally a significant fraction of the speed of
light. If the string is moving with velocity $\boldsymbol{v}$
perpendicular to its direction, the expression for the angular
separation of an image pair is
\begin{equation}
\alpha = \frac{8\pi GU}{\gamma(1-v_r)}
\frac{D_{ls}}{D_s}\sin\theta,
\end{equation}
where $D_s$ is the angular-diameter distance of the
source, $D_{ls}$ that of the source from the lensing
string, $\theta$ is the angle between the string and the line
of sight, $\gamma=(1-\boldsymbol{v}^2)^{-1/2}$, and $v_r$ is
the radial component of $\boldsymbol{v}$.
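To get a feel for this formula, one can evaluate it for plausible
numbers; all the inputs below ($GU$, the velocity, the angle and the
distance ratio) are illustrative assumptions, not values from the
text.

```python
import math

# Image separation alpha = 8*pi*G*U / (gamma*(1 - v_r)) * (D_ls/D_s) * sin(theta)
# evaluated for sample values (all inputs are illustrative assumptions).
GU = 1e-6                   # effective G*U
v = 0.5                     # string speed in units of c
v_r = 0.2                   # radial component of the velocity
theta = math.pi / 2         # string perpendicular to the line of sight
Dls_over_Ds = 0.5           # angular-diameter distance ratio

gamma = 1.0 / math.sqrt(1.0 - v**2)
alpha_rad = 8.0 * math.pi * GU / (gamma * (1.0 - v_r)) * Dls_over_Ds * math.sin(theta)
arcsec = alpha_rad * 180.0 * 3600.0 / math.pi
print(arcsec)               # ~ 2.8 arcsec for these numbers
```

So for $G U\sim 10^{-6}$ the expected image separations are of order
a few arcseconds, as stated above.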
Another very characteristic effect is the distortion of the
CMB produced by a moving string. A string moving across our
field of vision will cause a blue-shift of the radiation ahead
of it, and a red-shift of that behind, leading to a
discontinuity in temperature of magnitude $\delta T/T=8\pi
GU\gamma v_\perp\sin\theta$, where $v_\perp$ is the component
of the string velocity normal to the plane containing the
string and the line of sight.
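Plugging representative numbers into this expression (all values
below are illustrative assumptions) gives a temperature step of order
$10^{-5}$, comparable to the overall level of the observed CMB
anisotropy.

```python
import math

# Temperature step behind a moving string:
# dT/T = 8*pi*G*U * gamma * v_perp * sin(theta).
# Inputs are illustrative; the motion is taken purely perpendicular,
# so the same v_perp is used in the Lorentz factor.
GU = 1e-6
v_perp = 0.5                # in units of c
theta = math.pi / 2         # string perpendicular to the line of sight
gamma = 1.0 / math.sqrt(1.0 - v_perp**2)
dT_over_T = 8.0 * math.pi * GU * gamma * v_perp * math.sin(theta)
print(dT_over_T)            # ~ 1.5e-5
```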
Accelerated cosmic strings are sources of gravitational
radiation. The most important signal would come from special
places where the strings are moving exceptionally fast.
Loops of string undergo periodic oscillations, with a period
related to the size of the loop. The dynamical equations
predict that during each oscillation there will be a few
points at which the string instantaneously forms a cusp, where
it doubles back on itself (see Fig.~9). In the neighbourhood
\begin{figure}[htb]
\centerline{\psfig{file=cusp.eps}}
\vspace*{8pt}
\caption{Cosmic string loop with a cusp.}
\end{figure}
of the cusp, the string velocity approaches the speed of
light. Such an event would generate an intense pulse of
gravitational radiation, strongly beamed in the direction of
motion of the cusp. If massive cosmic strings do indeed
exist, these pulses are likely to be among the most prominent
signals seen by the gravitational-wave detectors now in
operation or planned, in particular LIGO and LISA.
This effect has already provided a stringent, though indirect,
limit on the value of $G\mu$. This comes from observations of
the timing of millisecond pulsars. Gravitational waves
between us and a pulsar would distort the intervening
space-time, and so cause random fluctuations in the pulsar
timing. The fact that pulsar timings are extremely regular
places an upper limit on the energy density in gravitational
waves, and hence on $G\mu$. The upper limit is of order
$10^{-7}$, though there is considerable uncertainty because
this depends on assumptions about the evolution of small-scale
structure.
For cosmic superstrings the situation is similar. However,
because the intercommutation probability is less than unity
the evolution of the network is a little different, resulting
in a denser, slower network of strings with more cusps on it.
A recent study suggests this could enhance the gravitational
radiation emitted \cite{damour&vilenkin04}
and that such strings could be detectable with the
gravitational-wave detectors in the near future. Seeing such
cosmic strings could provide a window into superstring theory!
\section{Recent observations}
\subsection{A possible example of lensing by a cosmic string}
One exciting recent piece of evidence was the observation of a
possible example of cosmic-string lensing by a
Russian--Italian collaboration, between the Observatory of
Capodimonte in Naples and the Sternberg Astronomical Institute
in Moscow.
What Sazhin \emph{et al.}\ \cite{Saz+03} saw was a pair of
images of apparently very similar elliptical galaxies, both
with a red-shift of 0.46, separated by $2''$. The images have
the same magnitude and the same colour --- the magnitudes in
three separate wavelength bands are equal within the errors.
They could of course be images of two distinct, but very
similar galaxies that just happen to lie very close together.
Close pairs are not unusual, but it would be a remarkable
coincidence to find two so similar so close together. The
images could also be due to lensing by some more conventional
foreground object, but the authors show that it would have to
be a giant galaxy, of which there is no sign.
They conclude that the most likely explanation is lensing by a
cosmic string. If so, the observed separation requires that
$G\mu>4\times 10^{-7}$, which is at least marginally in
conflict with the upper limits from CMB anisotropy and pulsar
timing observations. However, it should be remembered that
this is actually a limit on the effective quantity $GU$ rather
than $G\mu$ so there is at least some room for manoeuvre.
Another important piece of evidence comes from a later study
by the same authors of the surrounding region \cite{Saz+04}.
If the image pair is due to lensing by a cosmic string, one
would expect other lensed pairs in the vicinity, along the
line of the cosmic string. The authors searched for such
pairs in a $16'\times16'$ patch of sky around the original
image pair (which is called CSL-1, the
\emph{Capodimonte--Sternberg Lens Candidate no.~1} --- there
are three others yet to be analyzed). Among the roughly 2200
galaxies within this patch, they found 11 very likely
candidates for lensed pairs, based on separation and colour
matching. They estimate that a string should produce
somewhere between 9 and 200 lensed pairs (depending on its
configuration), while they should expect no more than 2 due to
conventional lensing. So this adds weight to their
interpretation, though they emphasize that the identification
needs to be confirmed by a spectroscopic analysis of the pairs.
We can learn a lot from the distribution and alignment of the
candidate pairs. A straight string should produce a linear
array of lensed pairs, with the pairs separated in the
transverse direction. A picture of the six brightest pairs
\cite{Saz+04b} (see Fig.~9) does not show such a sharp
\begin{figure}[htb]
\centerline{\psfig{file=candidates.eps}}
\vspace*{8pt}
\caption{Positions of candidate lensed pairs in the vicinity
of CSL-1 \cite{Saz+04b}.}
\end{figure}
concentration, but nor do they seem to be randomly scattered.
The position angles of the pairs nos.\ 2,3,5,6 do suggest that
they could line up on a smooth curve of string \cite{Saz04}.
The others could perhaps be fitted to a string with a couple of
kinks. The important test of this idea will come from a
spectroscopic analysis of the candidate pairs, to show whether
they are indeed images of the same object.
\subsection{Possible lensing by an oscillating loop}
The other intriguing development is an analysis by Schild
\emph{et al}.\ \cite{Sch+04} of brightness fluctuations in a
very well known gravitational lens system that has been studied
extensively for 25 years. The system is famous because it has
been used to provide an estimate of the Hubble constant,
independent of other distance measurements. It consists of a
pair of images, which are known to be images of a single
quasar, because of the constant observed time delay:
brightness fluctuations in image $A$ are generally followed by
similar fluctuations in
$B$ 417.1 days later. The lensing in this case is due to a
clearly visible foreground galaxy, and the time delay occurs
because one light path from the quasar is a little more than
one light year longer than the other.
In addition to the correlated fluctuations there are
independent fluctuations of the two images, primarily caused
by microlensing by individual stars in the foreground galaxy,
that is, lensing in which different images are not resolved.
What Schild \emph{et al}.\ have found in data from
observations in 1994--95 is an apparent sequence of synchronous
fluctuations in both images with an amplitude of about 4\% and
no time delay. They see a sequence of three or four
oscillations with a period of about 80 days.
If these oscillations do have a common origin, they must be
due to some object quite close to us. One possibility would
be lensing by a binary star, but to get the right period and
amplitude the stars would need to be among our near neighbours
and to have masses of around eighty solar masses. It is
inconceivable that such massive stars so near us could have
escaped detection.
Another possibility is an oscillating loop of cosmic string.
Since the period is proportional to the length of the string
loop, the required 80-day period would imply a length of 160
light-days. The loop would probably be moving with a
substantial fraction of the speed of light, so it would only
remain between us and the source for a year or so; thus it is
not surprising that only a few oscillation cycles were
seen. The apparent angular size would need to be somewhat
less than the separation between the images (otherwise there
would be sharp spikes of intensity when the string actually
crossed one of the paths). On the other hand the loop cannot
be very far away, otherwise the required value of $G\mu$ would
be impossibly large. In fact, it must be well inside our
galaxy. Since we know from CMB and gravity-wave limits that
loops of this size must be quite rare, to find one so near
would be remarkably fortuitous.
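For orientation (our sketch, using the standard result that a loop of invariant length $L$ oscillates with fundamental period $T = L/2$ in units where $c = 1$):

```python
def loop_length_light_days(period_days):
    """Invariant length of a string loop whose fundamental oscillation
    period is T = L/(2c), so L = 2 c T."""
    return 2.0 * period_days

# An 80-day period implies the 160 light-day length quoted above
length = loop_length_light_days(80)
```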
Of course, the sequence of synchronous fluctuations might just
be a coincidence, but the authors argue that the probability
of seeing three or more synchronous oscillations by chance is
quite low. Nevertheless, chance coincidence may be the simplest
explanation; what is needed is a proper statistical analysis of how
likely such a sequence is to occur by chance at some time during the
many years of observation.
\section{Conclusions}
As we have described, recent developments in fundamental
string theory, especially the brane-world concept, have
greatly extended the range of different kinds of cosmic
strings that might have been formed in the very early
history of the universe. There have been intriguing
hints of observations that might be signatures of cosmic
strings. Further work in the near future should clarify
their status. Even if these observations turn out to have
more prosaic explanations, the quest for evidence of cosmic
strings will certainly continue, in particular via searches
for the gravitational waves they emit. The very
characteristic signature of emission from a cusp
\emph{might} be detectable even by the present generation
of gravitational-wave detectors, and certainly by the
next. This may well provide the first direct evidence for
an underlying superstring or M-theory.
\section*{Acknowledgments}
We are indebted for valuable comments and discussion to Ana
Ach\'ucarro, Levon Pogosian, Fernando Quevedo, Mairi
Sakellariadou and Miguel Sazhin. The work reported here was
supported in part by PPARC, and in part by the ESF through the
COSLAB Programme.
\section{Introduction}
Working out propagators is the difficult part about formulating
quantum field theoretic perturbation theory on exotic backgrounds.
It is typically accomplished by solving the differential equation
that the propagator must obey, however, this procedure is ambiguous
up to a homogeneous solution. It has long been realized that some
choices for this homogeneous solution do not make the resulting
Green's function into a true propagator. That is, the Green's
function does not correspond to the expectation value of the
time-ordered product of two free fields in the presence of any state
\cite{TW1}.
There is nothing mysterious about the problem, nor does it require
field theory to understand. Consider the simple harmonic oscillator
whose position as a function of time is $q(t)$ and whose Lagrangian
is,
\begin{equation}
L = \frac12 m \dot{q}^2 - \frac12 m \omega^2 q^2 \; .
\end{equation}
The propagator equation for this system is,
\begin{equation}
-m \Bigl[ \Bigl(\frac{d}{dt}\Bigr)^2 + \omega^2\Bigr] i\Delta(t;t')
= i \delta(t \!-\! t') \; . \label{HOprop}
\end{equation}
The general solution to this equation which is symmetric under
interchange of $t$ and $t'$ has three free parameters,
\begin{eqnarray}
\lefteqn{i\Delta(t;t') = -\frac{i}{2 m \omega} \, \sin\Bigl[\omega
\vert t \!-\! t'\vert\Bigr] + \alpha \cos(\omega t) \cos(\omega t')}
\nonumber \\
& & \hspace{5.5cm} + \beta \sin\Bigl[\omega (t \!+\! t')\Bigr] +
\gamma \sin(\omega t) \sin(\omega t') \; . \qquad \label{gensol}
\end{eqnarray}
Although any choice of $\alpha$, $\beta$ and $\gamma$ in
(\ref{gensol}) gives a solution to the propagator equation
(\ref{HOprop}), the result is not a propagator unless they obey two
inequalities,
\begin{equation}
\alpha + \gamma \geq \frac1{2 m \omega} \qquad {\rm and} \qquad
\alpha \gamma \geq \frac14 \beta^2 \; . \label{ineqs}
\end{equation}
To see this, note first that the Heisenberg picture operator $q(t)$
can be expressed in terms of its initial position and momentum as,
\begin{equation}
q(t) = q_0 \cos(\omega t) + \frac{p_0}{m \omega} \, \sin(\omega t)
\; .
\end{equation}
For $i\Delta(t;t')$ to be a propagator there must be a state $\vert
\psi\rangle$ such that,
\begin{eqnarray}
\lefteqn{i\Delta(t;t') = \Bigl\langle \psi \Bigl\vert T\Bigl[ q(t)
q(t') \Bigr] \Bigr\vert \psi \Bigr\rangle } \\
& & = -\frac{i}{2 m \omega} \, \sin\Bigl[\omega \vert t \!-\!
t'\vert\Bigr] + \Bigl\langle \psi \Bigl\vert \frac{q_0^2}{2}
\Bigr\vert \psi \Bigr\rangle \,
\cos(\omega t) \cos(\omega t') \nonumber \\
& & \hspace{.3cm} + \Bigl\langle \psi \Bigl\vert \frac{q_0 p_0 \!+\!
p_0 q_0}{2 m \omega} \Bigr\vert \psi \Bigr\rangle \,
\sin\Bigl[\omega (t \!+\! t')\Bigr] + \Bigl\langle \psi \Bigl\vert
\frac{p_0^2}{2 m^2 \omega^2} \Bigr\vert \psi \Bigr\rangle \,
\sin(\omega t) \sin(\omega t') \; . \qquad
\end{eqnarray}
We can therefore identify the constants $\alpha$, $\beta$ and
$\gamma$ as,
\begin{equation}
\alpha = \Bigl\langle \psi \Bigl\vert \frac{q_0^2}{2} \Bigr\vert
\psi \Bigr\rangle \quad , \quad \beta = \Bigl\langle \psi \Bigl\vert
\frac{q_0 p_0 \!+\! p_0 q_0}{2 m \omega} \Bigr\vert \psi
\Bigr\rangle \quad , \quad \gamma = \Bigl\langle \psi \Bigl\vert
\frac{p_0^2}{2 m^2 \omega^2} \Bigr\vert \psi \Bigr\rangle \; .
\end{equation}
For the ground state one has,
\begin{equation}
{\rm Ground\ State} \qquad \Longrightarrow \qquad \alpha = \gamma =
\frac1{4 m \omega} \quad {\rm and} \quad \beta = 0 \; .
\end{equation}
For a general state the first inequality in (\ref{ineqs}) results
from requiring the expectation value of the energy to be greater
than or equal to $\frac12 \omega$; the second is just the Schwarz
inequality.
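These statements are easy to verify symbolically. The sketch below (ours) checks that each homogeneous piece of (\ref{gensol}) is annihilated by the operator in (\ref{HOprop}), and that the ground-state values saturate the first inequality in (\ref{ineqs}) while satisfying the second:

```python
import sympy as sp

t, tp, m, w = sp.symbols('t t_prime m omega', positive=True)

# The three homogeneous pieces of the general solution (gensol)
terms = [sp.cos(w*t)*sp.cos(w*tp),
         sp.sin(w*(t + tp)),
         sp.sin(w*t)*sp.sin(w*tp)]

# Each is annihilated by -m (d^2/dt^2 + omega^2), so it can be added
# to any solution of the propagator equation (HOprop)
for f in terms:
    assert sp.simplify(-m*(sp.diff(f, t, 2) + w**2*f)) == 0

# Ground-state values alpha = gamma = 1/(4 m omega), beta = 0
alpha = gamma = 1/(4*m*w)
beta = 0
assert sp.simplify(alpha + gamma - 1/(2*m*w)) == 0    # saturates first inequality
assert (alpha*gamma - beta**2/sp.Integer(4)).is_nonnegative   # second inequality
```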
A more subtle set of issues can arise in gauge theories. To
understand the issue of interest for this work we must digress to
explain the difference between an ``exact gauge'' and an ``average
gauge'' \cite{TW2}. The former is obtained by choosing the gauge
parameter to make the vector potential obey some equation at each
point in space and time. This is the normal type of gauge fixing in
classical field theory. Familiar examples are,
\begin{eqnarray}
{\rm Temporal\ Gauge} & : & A_0(t,\vec{x}) = 0 \; , \\
{\rm Coulomb\ Gauge} & : & \vec{\nabla} \!\cdot\! \vec{A}(t,\vec{x})
= 0
\; , \\
{\rm Lorentz\ Gauge} & : & \partial^{\mu} A_{\mu}(t,\vec{x}) =
-\dot{A}_0(t,\vec{x}) + \vec{\nabla} \!\cdot\! \vec{A}(t,\vec{x}) =
0 \; .
\end{eqnarray}
Although exact gauges can be used in quantum field theory the more
common type of gauge fixing is accomplished by adding some
noninvariant term to the invariant Lagrangian. For example, the
Feynman gauge Lagrangian is,
\begin{equation}
\mathcal{L} = -\frac14 F_{\mu\nu} F^{\mu\nu} - \frac12
(\partial^{\mu} A_{\mu})^2 \; . \label{Feyn}
\end{equation}
The functional integral representation for this type of gauge
condition can be viewed as a weighted average of exact gauges. For
example, the Feynman gauge functional formalism results from
imposing the exact gauge,
\begin{equation}
\partial^{\mu} A_{\mu}(t,\vec{x}) = f(t,\vec{x}) \; ,
\end{equation}
where $f(x)$ is a ${\rm C}\llap{\vrule height7.1pt width1pt depth-.4pt\phantom t}$-number field. One then functionally
averages over $f(x)$ with a Gaussian weighting functional,
\begin{equation}
\rlap{$\Biggl\rfloor$}\Biggl\lceil [df] \, \exp\Bigl[-\frac{i}2 \int \!\! dt \int \!\! d^3x \,
f^2(t,\vec{x}) \Bigr] \; .
\end{equation}
From this discussion it is obvious that a fairly involved set of
functional changes of variables connects the exact gauge conditions
of the canonical formalism to the average gauge conditions typically
employed in the functional formalism. The late Sidney Coleman worked
this out explicitly for flat space on the manifold $R^4$ to derive
the Faddeev-Popov ansatz for this case \cite{SRC}, but the result is
often assumed without justification for general metrics on any
manifold. We suspect that the unjustified use of average gauge
fixing is behind a dispute about the graviton propagator on de
Sitter background.
It is easy to see that certain average gauges can be problematic
when linearization instabilities are present. Consider flat space
electrodynamics on the manifold $T^3 \times R$. Because the spatial
sections are compact, both sides of the spatially averaged, $\mu = 0$
Maxwell equation must vanish separately,
\begin{equation}
\partial_{\nu} F^{\nu\mu} = J^{\mu} \qquad \Longrightarrow \qquad
\int \!\! d^3x \, \partial_i F^{i0}(t,\vec{x}) = \int \!\! d^3x \,
J^0(t,\vec{x}) \; .
\end{equation}
Because this zero charge constraint follows from the invariant field
equations it must be true as well in every exact gauge. However,
naively imposing an average gauge can result in a very different
theory. For example the field equations of Feynman gauge
(\ref{Feyn}) are,
\begin{equation}
\Bigl[-\partial_t^2 + \nabla^2\Bigr] A^{\mu}(t,\vec{x}) =
J^{\mu}(t,\vec{x}) \; .
\end{equation}
These equations can be solved for any total charge. One can argue
about how the problem happened, or how significant it is, but there
cannot be any doubt that something went wrong.
The issues we have been discussing are relevant to a debate between
cosmologists and relativists concerning perturbative quantum gravity
on de Sitter background. From the perspective of inflationary
cosmology it is natural to view de Sitter as a special case of
homogeneous, isotropic and spatially flat geometries whose invariant
element in co-moving coordinates takes the general form,
\begin{equation}
ds^2 = -dt^2 + a^2(t) d\vec{x} \cdot d\vec{x} \; .
\end{equation}
One gets the open coordinate sub-manifold of de Sitter by setting
the scale factor to $a(t) = e^{Ht}$ with constant $H$. For any scale
factor the transverse-traceless components of the graviton field
obey the same equation as the massless, minimally coupled scalar
\cite{Grishchuk},
\begin{equation}
\Bigl[\Bigl(\frac{\partial}{\partial t}\Bigr)^2 + 3 H
\frac{\partial}{\partial t} - \frac{\nabla^2}{a^2}\Bigr]
h^{TT}_{ij}(t,\vec{x}) = 0 \; .
\end{equation}
The power spectrum for gravitational radiation \cite{AAS1} is
proportional to the canonically normalized, super-horizon scalar
mode functions $u(t,k) \sim H/k^{\frac32}$,
\begin{equation}
\mathcal{P}_h \sim G \times \vert u(t,k)\vert^2 \times k^3 \sim G
H^2 \; .
\end{equation}
From scale invariance --- which would be exact in de Sitter
--- one sees that the mode functions of physical gravitons diverge
too strongly at small $k$ to give a convergent result for the
Fourier mode sum of a part of the graviton propagator which must be
present in any gauge. It follows that there can be no de Sitter
invariant graviton propagator, just as there is no de Sitter
invariant propagator for the massless, minimally coupled scalar
\cite{AF}.
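To make the small-$k$ problem explicit (a standard estimate, included here
for orientation), the part of the propagator in question involves the mode
sum
\begin{equation}
\int \!\! d^{3}k \, \bigl\vert u(t,k) \bigr\vert^2 \, e^{i \vec{k}
\cdot \vec{x}} \sim \int \!\! d^{3}k \, \frac{H^2}{k^3} \sim 4 \pi
H^2 \!\! \int_0 \frac{dk}{k} \; ,
\end{equation}
which diverges logarithmically at $k = 0$.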
People who abhor the breaking of de Sitter invariance typically
dismiss it as unphysical, but this argument cannot be accepted
because the tensor power spectrum is certainly physical. (Indeed,
strenuous efforts are underway to observe it!) Nor is there any
support to be gained from the tiny distinction between de Sitter and
primordial inflation, which typically makes the infrared behavior
worse in any case. So one would think that the noninvariance of free
gravitons on de Sitter must be accepted as universally as that of
the massless, minimally coupled scalar. This is not so because
relativists have been able to find de Sitter invariant solutions to
the propagator equation in average gauges \cite{math}. Explicit
solutions in what seem to be valid gauges are just as compelling as
inferences from the tensor power spectrum. However, we have just
seen that average gauges may not be reliable on manifolds such as de
Sitter which possess linearization instabilities.
The early recognition of a problem \cite{IAEM} with one of these de
Sitter invariant solutions led to the development of a noninvariant,
average gauge \cite{TW3,RPW1}. The associated propagator has been
shown to obey the Ward identities at tree order \cite{TW4} and one
loop \cite{TW5}; and the only fully renormalized, dimensionally
regulated loop results for quantum gravity on de Sitter background
have been obtained using it \cite{TW6,MW,KW1}. Although the gauge
fixing functional of this propagator is not de Sitter invariant, it
does preserve the one-parameter subgroup of dilatations,
\begin{eqnarray}
t & \longrightarrow & t - \frac1{H} \, \ln(k) \; , \\
\vec{x} & \longrightarrow & k \vec{x} \; ,
\end{eqnarray}
so the fact that the propagator breaks dilatation invariance cannot
be blamed on the gauge. Moreover, Kleppe has shown that the
propagator's de Sitter breaking is physical by the standard
technique of supplementing naive de Sitter transformations with a
compensating gauge transformation to restore the noninvariant gauge
condition \cite{Kleppe}.
As with the cosmological power spectrum, one would think these
results decisive, but the interest in a de Sitter invariant graviton
propagator persists \cite{Higuchi}. Two things seem to be necessary
to settle the issue:
\begin{itemize}
\item{A proof that the average gauges for which de Sitter invariant
solutions have been found are not valid; and}
\item{An explicit construction of the graviton propagator in an exact,
de Sitter invariant gauge which is valid over the full de Sitter
manifold.}
\end{itemize}
Of course the imposition of a de Sitter invariant gauge would make
the propagator equation de Sitter invariant, but that does not imply
a de Sitter invariant solution for the graviton propagator any more
than it does for the massless, minimally coupled scalar propagator
which obeys an invariant equation but is not invariant \cite{AF}. If
the graviton propagator in an exact gauge breaks de Sitter
invariance then the breaking must be accepted as physical. This
would not only resolve a contentious dispute, the resulting
propagator might be easier to use and it would reduce the number of
counterterms \cite{MW,KW1}.
In section 2 of this paper we give a proof that the average, de Sitter
invariant gauges are not valid; in subsequent sections we develop the
machinery for constructing the propagator in exact, de Donder gauge.
The technique for our construction is to find the field-dependent gauge
transformation $\xi_{\mu}[h]$ which enforces the
gauge condition,
\begin{equation}
h_{\mu\nu}' = h_{\mu\nu} - 2 \xi_{(\mu ; \nu)}[h] \qquad {\rm such\
that} \qquad g^{\rho\sigma} \Bigl[h_{\mu \rho ; \sigma}' - \frac12
h_{\rho\sigma ; \mu}'\Bigr] = 0 \; .
\end{equation}
(In this and subsequent formulae, $g_{\mu\nu}$ stands for the
spacelike background de Sitter metric and a semi-colon denotes
covariant differentiation with respect to this background.) Then we
use this transformation to carry the existing, noninvariant
propagator \cite{TW3,RPW1} into de Donder gauge. If the de Sitter
breaking of the existing propagator is completely due to the
noninvariant gauge then the de Donder gauge result should be
invariant; if the de Sitter breaking is physical then it should
persist after the transformation.
As a warmup for de Sitter gravity we here carry out the same exercise
for de Sitter electromagnetism. That is, we find the field-dependent
gauge transformation $\theta[A]$ which imposes exact Lorentz gauge,
\begin{equation}
A_{\mu}' = A_{\mu} - \partial_{\mu} \theta[A] \qquad {\rm such\
that} \qquad \partial_{\mu} \Bigl( \sqrt{-g} g^{\mu\nu}
A_{\nu}'\Bigr) = 0 \; .
\end{equation}
Then we use this transformation on the photon propagator in a
noninvariant, average gauge \cite{RPW1,KW2}. Because we already know
the unique, de Sitter invariant solution for the propagator equation
in Lorentz gauge \cite{TW7}, obtaining this known solution by
transformation demonstrates that the technique will remove de Sitter
breaking that arises from using a noninvariant gauge condition. The
simpler setting of electromagnetism, and the close relation between
the noninvariant electromagnetic and gravitational gauges should
also teach us much of value for the main project.
This paper consists of six sections, the first of which is now ending.
In section 2 we review the functional changes of variables which
carry one from an exact, canonical gauge to a covariant, average
gauge, with special attention to what goes wrong when linearization
instabilities are present. The context is flat space electrodynamics
on the $D$-dimensional manifolds $R^D$ and $T^{D-1} \!\times\! R$. In
section 3 we switch to $D$-dimensional de Sitter and carry out the
field-dependent gauge transformation that enforces exact, Lorentz
gauge. Of course this gauge transformation is ambiguous up to a
homogeneous solution, which is uniquely determined by de Sitter
invariance but which we leave unspecified at this stage. In section
4 we describe the unique, de Sitter invariant solution that was
found by solving the propagator equation in Lorentz gauge \cite{TW7}.
In section 5 we show how the hitherto unspecified, homogeneous part
of the gauge transformation from section 3 can be chosen to make the
two propagators agree. Our conclusions comprise section 6.
\section{Deriving Average Gauges}
The purpose of this section is to explain how one derives average
gauges from the exact gauges of the canonical formalism. We shall
take flat space quantum electrodynamics (QED) as the object of study
and first sketch the technique for passing from Coulomb gauge to
Feynman gauge on the manifold $R^D$. We then consider the same
process for the manifold $T^{D-1} \times R$ to show explicitly why
full Feynman gauge cannot be imposed.
The QED Lagrangian is,
\begin{equation}
\mathcal{L} = -\frac14 F_{\mu\nu} F^{\mu\nu} + \overline{\psi}
\gamma^{\mu} \Bigl(i \partial_{\mu} \!-\! e A_{\mu}\Bigr) \psi - m
\overline{\psi} \psi \; .
\end{equation}
The canonical dynamical variables of Coulomb gauge ($\vec{\nabla}
\cdot \vec{A}(t,\vec{x}) = 0$) are the transverse components of the
vector potential, $A_i^T(t,\vec{x})$, and the electric field,
$E_i^T(t,\vec{x})$, as well as $\psi(t,\vec{x})$ and
$\overline{\psi}(t,\vec{x})$. The $\mu = 0$ component of the vector
potential is not an independent dynamical variable but rather a
nonlocal functional $\Phi[\overline{\psi},\psi](t,\vec{x})$ of the
charged fields,
\begin{equation}
A_0 = \Phi[\overline{\psi},\psi] \equiv -\frac1{\nabla^2} \Bigl[e
\overline{\psi} \gamma^0 \psi\Bigr] \; .
\end{equation}
And the Hamiltonian density of Coulomb gauge is,
\begin{equation}
\mathcal{H} = \frac12 E_i^T E_i^T + \frac14 F_{ij} F_{ij} - \frac12
\Phi \nabla^2 \Phi + \overline{\psi}\Bigl( -i \gamma^i \partial_i +
e \gamma^i A_i^T + m \Bigr) \psi \; .
\end{equation}
The usual connection between the canonical and functional formalisms
is made through the vacuum expectation values of $T^*$-ordered
products of operators. ($T^*$-ordering is the same as time-ordering
except that derivatives are taken outside the ordering.) Suppose
$O[\overline{\psi},\psi,E^T,A^T]$ represents an arbitrary functional
of the canonical operators. The fundamental relation between the
canonical and functional integral formalisms is,
\begin{eqnarray}
\lefteqn{\Bigl\langle \Omega \Bigl\vert T^*\Bigl(
\mathcal{O}\Bigl[\overline{\psi},\psi,E^T,A^T\Bigr]\Bigr) \Bigr\vert
\Omega \Bigr\rangle } \nonumber \\
& & \hspace{1cm} = \rlap{$\Biggl\rfloor$}\Biggl\lceil [d\overline{\psi}] [d\psi] [dE^T] [dA^T] \,
e^{i \int \! d^Dx \, [i\overline{\psi} \gamma^0 \dot{\psi} + E^T_i
\dot{A}^T_i - \mathcal{H}]} \, \mathcal{O}\Bigl[ \overline{\psi},
\psi,E^T,A^T\Bigr] \; . \qquad \label{FI1}
\end{eqnarray}
Because they will play no role in our analysis we have suppressed
the initial and final state wave functionals.
We henceforth denote expression (\ref{FI1}) as $\langle
\mathcal{O}\rangle$. A 7-step process of functional manipulations
carries it to Feynman gauge on $R^D$:
\begin{enumerate}
\item{{\it Perform the Gaussian integrals over the transverse
electric field,}
\begin{eqnarray}
\lefteqn{\rlap{$\Biggl\rfloor$}\Biggl\lceil [dE^T] \, e^{i \int d^Dx \, [E^T_i \dot{A}^T_i -
\frac12 E^T_i E^T_i]} \,
\mathcal{O}\Bigl[\overline{\psi},\psi,E^T,A^T\Bigr]} \nonumber \\
& & \hspace{5.6cm} =
\mathcal{O}'\Bigl[\overline{\psi},\psi,A^T\Bigr] e^{i\int \! d^Dx \,
\frac12 \dot{A}^T_i \dot{A}^T_i} \; , \qquad \label{FI2}
\end{eqnarray}
where $\mathcal{O}'[\overline{\psi},\psi,A^T]$ is
$\mathcal{O}[\overline{\psi},\psi,\dot{A}^T,A^T]$ plus the delta
function correlator terms that arise from eliminating the various
pairings of $E^T$'s.}
\item{{\it Restore the longitudinal component of the vector potential},
\begin{equation}
\rlap{$\Biggl\rfloor$}\Biggl\lceil [dA^T] = \rlap{$\Biggl\rfloor$}\Biggl\lceil [d\vec{A}] \, \delta\Bigl[\vec{\nabla}
\!\cdot\! \vec{A}\Bigr] \, \sqrt{{\rm det}(-\nabla^2)} \;.
\label{FI3}
\end{equation}
Note that this gives the square root of the Faddeev-Popov
determinant for Coulomb gauge.}
\item{{\it Restore the temporal component of the vector potential,}
\begin{equation}
e^{i \int \! d^Dx \, \frac12 \Phi \nabla^2 \Phi} = \sqrt{{\rm
det}(-\nabla^2)} \times \rlap{$\Biggl\rfloor$}\Biggl\lceil [dA_0] \, e^{i \int \! d^Dx \,
[\frac12 \partial_i A_0 \partial_i A_0 - \overline{\psi} \gamma^0 e
A_0 \psi]} \; . \label{FI4}
\end{equation}
Note that this gives the remaining bit of the Faddeev-Popov
determinant. At this stage $\langle \mathcal{O}\rangle$ takes the
form,
\begin{eqnarray}
\lefteqn{\langle \mathcal{O}\rangle = \rlap{$\Biggl\rfloor$}\Biggl\lceil [d\overline{\psi}]
[d\psi] [dA] \, \delta\Bigl[ \vec{\nabla}\! \cdot \! \vec{A}\Bigr]
\, {\rm det}(-\nabla^2) } \nonumber \\
& & \hspace{-.3cm} \times e^{i\int \! d^Dx \, [\frac12 \dot{A}_i
\dot{A}_i + \frac12 \partial_i A_0 \partial_i A_0 - \frac14 F_{ij}
F_{ij} + \overline{\psi} (i \gamma^{\mu} \partial_{\mu} -
\gamma^{\mu} e A_{\mu} - m ) \psi]} \, \mathcal{O}''\Bigl[
\overline{\psi},\psi,A\Bigr] \; , \qquad \label{FI5}
\end{eqnarray}
where $\mathcal{O}''[\overline{\psi},\psi,A]$ is
$\mathcal{O}'[\overline{\psi},\psi,\vec{A}]$ with possible factors
of $\Phi[\overline{\psi},\psi]$ replaced by $A_0$ (if desired; it is
not necessary) and with the addition of appropriate correlator terms
for pairings of $A_0$'s.}
\item{{\it Express the integrand as an invariant.} Any good gauge
can be used to express an arbitrary functional of the fields as a
gauge invariant which happens to agree with the original functional
when the gauge condition is obeyed \cite{TW2,RPW2}. We do this for
the action and for the operator
$\mathcal{O}''[\overline{\psi},\psi,A]$,
\begin{eqnarray}
\delta\Bigl[\vec{\nabla} \!\cdot\! \vec{A}\Bigr] \times e^{i \int \!
d^Dx \, [\frac12 \dot{A}_i \dot{A}_i + \frac12 \partial_i A_0
\partial_i A_0]} = \delta\Bigl[\vec{\nabla} \! \cdot \!
\vec{A}\Bigr] \times e^{i \int \! d^Dx \, \frac12 F_{0i} F_{0i}} \;
, \label{FI6a} \\
\delta\Bigl[\vec{\nabla} \!\cdot\! \vec{A}\Bigr] \times
\mathcal{O}''\Bigl[\overline{\psi},\psi,A\Bigr] =
\delta\Bigl[\vec{\nabla} \!\cdot\! \vec{A}\Bigr] \times
\mathcal{O}_{\rm inv}\Bigl[\overline{\psi},\psi,A\Bigr] \; .
\label{FI6b}
\end{eqnarray}
After invariantizing in this way our expression for $\langle
\mathcal{O}\rangle$ is,
\begin{equation}
\langle \mathcal{O}\rangle = \rlap{$\Biggl\rfloor$}\Biggl\lceil [d\overline{\psi}] [d\psi] [dA]
\, \delta\Bigl[\vec{\nabla} \!\cdot\! \vec{A}\Bigr] \, {\rm
det}(-\nabla^2) \, e^{i S_{\rm inv}} \, \mathcal{O}_{\rm inv} \; .
\label{FI7}
\end{equation}}
\item{{\it Make a functional change of variables to Lorentz gauge.}
Consider the field-dependent gauge transformation,
\begin{eqnarray}
A_{\mu}' & = & A_{\mu} - \partial_{\mu} \theta_1[A] \; , \\
\psi' & = & e^{ie \theta_1[A]} \times \psi \; , \\
\overline{\psi}' & = & e^{-ie \theta_1[A]} \times \overline{\psi} \; ,
\end{eqnarray}
where the gauge parameter is,
\begin{equation}
\theta_1[A] = -\frac1{\partial^2} \, \dot{A}_0 \; .
\end{equation}
Because $S_{\rm inv}$ and $\mathcal{O}_{\rm inv}$ are gauge
invariant, only the gauge fixing delta functional and the measure
will change. To get them, note that the inverse transformation for
the vector potential is,
\begin{equation}
A_{\mu} = A_{\mu} ' -\partial_{\mu} \frac1{\nabla^2} \dot{A}_0' \; .
\end{equation}
It follows that the Coulomb gauge condition on $A_{\mu}$ implies the
Lorentz gauge condition on $A_{\mu}'$,
\begin{equation}
\partial_i A_i = \partial^{\mu} A_{\mu} ' \; . \label{AtoA'}
\end{equation}
And the functional Jacobian converts the Faddeev-Popov determinant
to the one appropriate for Lorentz gauge,
\begin{equation}
[dA] \, {\rm det}(-\nabla^2) = [dA'] \, {\rm det}(-\partial^2) \; .
\end{equation}
We can therefore write $\langle \mathcal{O} \rangle$ as,
\begin{equation}
\langle \mathcal{O}\rangle = \rlap{$\Biggl\rfloor$}\Biggl\lceil [d\overline{\psi}'] [d\psi']
[dA'] \, \delta\Bigl[\partial^{\mu} A_{\mu}'\Bigr] \, {\rm
det}(-\partial^2) \, e^{i S_{\rm inv}} \, \mathcal{O}_{\rm inv} \; .
\label{FI8}
\end{equation}}
\item{{\it Add an inhomogeneous, ${\rm C}\llap{\vrule height7.1pt width1pt depth-.4pt\phantom t}$-number term to the gauge
fixing functional.} Make an additional change of variable,
\begin{eqnarray}
A_{\mu}'' & = & A_{\mu}' - \partial_{\mu} \theta_2[f] \; , \\
\psi'' & = & e^{i e \theta_2[f]} \times \psi' \; , \\
\overline{\psi}'' & = & e^{-i e \theta_2[f]} \times \overline{\psi}'
\; ,
\end{eqnarray}
where the gauge parameter is defined in terms of an arbitrary
${\rm C}\llap{\vrule height7.1pt width1pt depth-.4pt\phantom t}$-number field $f(t,\vec{x})$,
\begin{equation}
\theta_2[f] = \frac1{\partial^2} \, f \; .
\end{equation}
After the transformation we have,
\begin{equation}
\langle \mathcal{O}\rangle = \rlap{$\Biggl\rfloor$}\Biggl\lceil [d\overline{\psi}''] [d\psi'']
[dA''] \, \delta\Bigl[\partial^{\mu} A_{\mu}'' - f\Bigr] \, {\rm
det}(-\partial^2) \, e^{i S_{\rm inv}} \, \mathcal{O}_{\rm inv} \; .
\label{FI9}
\end{equation}}
\item{{\it Functionally average over the inhomogeneous term.}
By construction $\langle \mathcal{O} \rangle$ has no dependence upon
the function $f(t,\vec{x})$. It is therefore unchanged if we
multiply by a normalized Gaussian and functionally integrate over $f$,
\begin{eqnarray}
\langle \mathcal{O} \rangle & = & \rlap{$\Biggl\rfloor$}\Biggl\lceil [df] \, e^{i \int \! d^Dx \,
\frac12 f^2} \times \langle \mathcal{O}\rangle \; , \\
& = & \rlap{$\Biggl\rfloor$}\Biggl\lceil [d\overline{\psi}''] [d\psi''] [dA''] \, {\rm
det}(-\partial^2) \, e^{i S_{\rm Feynman}} \, \mathcal{O}_{\rm inv}
\; . \qquad \label{FI10}
\end{eqnarray}
This is the Feynman gauge functional formalism we sought.}
\end{enumerate}
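As a consistency check on step 5 (our own sketch, restricted to a single plane wave in one space dimension, with a generic off-shell wave vector so that $1/\partial^2$ acts as multiplication by $1/(\omega^2 - k^2)$), one can verify relation (\ref{AtoA'}) symbolically:

```python
import sympy as sp

t, x, w, kx = sp.symbols('t x omega k', real=True)
a0, a1 = sp.symbols('a_0 a_1')

phase = sp.exp(sp.I*(kx*x - w*t))
A0, A1 = a0*phase, a1*phase          # a single plane wave, one space dimension

# With mostly-plus signature the d'Alembertian acts on this wave as
# multiplication by (omega^2 - k^2)
box = w**2 - kx**2

# Step-5 gauge parameter: theta_1[A] = -(1/box) dA_0/dt
theta1 = -sp.diff(A0, t) / box

A0p = A0 - sp.diff(theta1, t)
A1p = A1 - sp.diff(theta1, x)

# Relation (AtoA'): the Lorentz divergence of A' equals the spatial
# divergence of A
lhs = -sp.diff(A0p, t) + sp.diff(A1p, x)     # partial^mu A'_mu
rhs = sp.diff(A1, x)                          # partial_i A_i
assert sp.simplify(lhs - rhs) == 0
```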
Let us see how the sequence of functional manipulations described above
changes when the noncompact spatial manifold $R^{D-1}$ is replaced by the
compact manifold $T^{D-1}$. A major difference is that Fourier integrals
become discrete sums. Suppose the range of each spatial coordinate is $-L
\leq x^i < +L$ for $i = 1,2,\ldots,(D\!-\!1)$. Then any function
$f(t,\vec{x})$ can be expressed as a discrete Fourier sum,
\begin{equation}
f(t,\vec{x}) = \sum_{\vec{n} \in Z^{D-1}} f_{\vec{n}}(t) e^{i k
\vec{n} \cdot \vec{x}} \; ,
\end{equation}
where $k \equiv \pi/L$ is the fundamental wave number. Note that the action
of $\nabla^2$ on any such function annihilates the $\vec{n} = 0$ mode,
\begin{equation}
\nabla^2 f(t,\vec{x}) = - \sum_{\vec{n} \in Z^{D-1}} (k n)^2
f_{\vec{n}}(t) e^{i k \vec{n} \cdot \vec{x}} \; .
\end{equation}
Hence the instantaneous Coulomb potential can only be defined for
configurations of $\overline{\psi}(t,\vec{x})$ and $\psi(t,\vec{x})$ which
have zero total charge (this is the linearization stability constraint!) and
the resulting potential has no $\vec{n} = 0$ mode,
\begin{eqnarray}
\Phi(t,\vec{x}) & = & -\frac1{\nabla^2} \Bigl[\overline{\psi} \gamma^0 e
\psi\Bigr](t,\vec{x}) \; , \\
& = & \sum_{\vec{n} \neq 0} \frac1{(k n)^2 (2L)^{D-1}} \int \!\! d^{D-1}\!x'
\, e^{i k \vec{n} \cdot (\vec{x} - \vec{x}')} \; \overline{\psi}(t,\vec{x}')
\gamma^0 e \psi(t,\vec{x}') \; . \qquad
\end{eqnarray}
Of course this means that when $A_0(t,\vec{x})$ is restored in step 3, it
cannot contain any $\vec{n} = 0$ mode.
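The obstruction is simply that $\nabla^2$ has a zero eigenvalue on a compact space, so $1/\nabla^2$ exists only on sources with no $\vec{n} = 0$ mode. A discrete illustration (ours, for a one-dimensional periodic lattice; not part of the original argument):

```python
import numpy as np

# The periodic 1-d lattice Laplacian diagonalizes under the DFT with
# eigenvalues -4 sin^2(pi m / N); the m = 0 (constant) mode is annihilated.
N = 8
rng = np.random.default_rng(0)
rho = rng.standard_normal(N)                  # a charge density on the circle

rho_hat = np.fft.fft(rho)
eig = -4.0 * np.sin(np.pi * np.arange(N) / N)**2
assert abs(eig[0]) < 1e-14      # zero eigenvalue: 1/nabla^2 undefined there

# 1/nabla^2 exists only on sources with no m = 0 mode, i.e. zero total charge
rho_hat[0] = 0.0
phi_hat = np.zeros(N, dtype=complex)
phi_hat[1:] = rho_hat[1:] / eig[1:]
phi = np.fft.ifft(phi_hat).real

# Applying the lattice Laplacian recovers the zero-charge part of the source
lap_phi = np.roll(phi, -1) - 2.0*phi + np.roll(phi, 1)
assert np.allclose(lap_phi, np.fft.ifft(rho_hat).real)
```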
Another important change concerns restoring the longitudinal part of
the vector potential in step 2. Under a gauge transformation the Fourier
mode $\vec{A}_{\vec{n}}(t)$ goes to,
\begin{equation}
\vec{A}_{\vec{n}}'(t) = \vec{A}_{\vec{n}}(t) - i k \vec{n} \, \theta_{\vec{n}}(t) \; .
\end{equation}
It follows that all $D\!-\!1$ components of the zero mode $\vec{A}_{\vec{0}}(t)$ are physical.
(In the noninteracting theory they would behave like free quantum mechanical
particles rather than harmonic oscillators.) A second consequence is that
the Coulomb gauge delta functional lacks an $\vec{n} = 0$ mode,
\begin{equation}
\delta\Bigl[\vec{\nabla} \!\cdot\! \vec{A}\Bigr] = \prod_{t \in R}
\prod_{\vec{n} \neq 0} \delta\Bigl( k \vec{n} \!\cdot\! \vec{A}_{\vec{n}}(t)
\Bigr) \; .
\end{equation}
To recapitulate, the changes associated with working on the compact spatial
manifold $T^{D-1}$ are:
\begin{itemize}
\item{The field $A_0(t,\vec{x})$ contains no $\vec{n} = 0$ mode; and}
\item{The Coulomb gauge delta functional contains no $\vec{n} = 0$ mode.}
\end{itemize}
It is immediately obvious that, while we can still make the functional
change of variables in step 7 to enforce exact Lorentz gauge, neither the
resulting $A_0'(t,\vec{x})$ nor the Lorentz gauge delta functional will
contain an $\vec{n} = 0$ mode. {\it It follows that there is no valid way
to add the $\vec{n} = 0$ mode of the Feynman gauge fixing term.} So the
result is just what we expected: adding the full Feynman gauge fixing
term is incorrect, and all conclusions drawn from this formalism are suspect.
It should be clear that the same sort of problem must occur as well in de
Sitter, and for gravitons as well as for electromagnetism, {\it so we now
have a proof that the average gauges for which de Sitter invariant solutions
have been found are not valid.}
Let us return to the context of flat space electromagnetism on the manifold
$T^{D-1} \!\times \! R$ and consider what goes wrong if the problem we have
just demonstrated is ignored and the covariant gauge fixing term (with
$\vec{n} = 0$ mode) is erroneously added to the invariant action. In that
case there is an extra, homogeneous contribution to $A_0(t,\vec{x})$, which
should not be present, and a corresponding extra contribution to the propagator.
For a point charge $q$ this extra term produces a homogeneous contribution
to $A_0(t,\vec{x})$ which grows like $t^2$,
\begin{equation}
A_0(t,\vec{x}) = \frac{q t^2}{2^D L^{D-1}} + {\rm Higher\ Modes} \; .
\end{equation}
One might object that the extra term is harmless because it makes no
contribution to the electric field. However, the undifferentiated potential
does contribute to the interaction energy. A special case of some interest
in quantum electrodynamics is the interaction of a particle with its own force
fields. In this context it has been noted that using the de Sitter invariant,
Feynman gauge propagator \cite{AJ} (which must also contain a spurious
homogeneous part) results in on-shell singularities for the one loop
self-mass-squared of a charged scalar \cite{KW2}. Just as the analysis of
this section suggests, these on-shell singularities disappear either:
\begin{itemize}
\item{When using a non-de Sitter invariant gauge on the open coordinate
sub-manifold (which has no linearization instability) \cite{KW2}; or}
\item{When using the de Sitter invariant Lorentz propagator (which is exact)
\cite{PTW1}.}
\end{itemize}
It should be emphasized that there is no mistake in the Allen-Jacobson
solution for the Feynman gauge propagator \cite{AJ}; it is the gauge fixing
functional which is at fault.
\section{Imposing Lorentz Gauge on de Sitter}
The purpose of this section is to make a field-dependent gauge transformation
which carries the photon propagator from a non-de Sitter invariant average
gauge, defined on the open coordinate submanifold, to the de Sitter invariant,
exact Lorentz gauge, which can be extended to the entire de Sitter manifold.
We begin by reviewing the geometry and coordinate system, then we give the
noninvariant gauge condition and the associated propagator. The next step is
making the transformation. Of course this is ambiguous up to surface terms
which we leave to be specified in Section 5. We close by decomposing the
transformed propagator (without the homogeneous contributions) into two
convenient pieces.
We work on the open conformal coordinate submanifold of $D$-dimensional de
Sitter space. A spacetime point $x^{\mu}$ can be decomposed into its temporal
($x^0$) and spatial ($x^i$) components, which take values in the ranges,
\begin{equation}
-\infty < x^0 < 0 \qquad {\rm and} \qquad -\infty < x^i <
+\infty \; .
\end{equation}
In these coordinates the invariant element is,
\begin{equation}
ds^2 \equiv g_{\mu\nu} dx^{\mu} dx^{\nu} = a_x^2 \eta_{\mu\nu} dx^{\mu}
dx^{\nu} \; ,
\end{equation}
where $\eta_{\mu\nu}$ is the Lorentz metric and $a_x = -1/Hx^0$ is the
scale factor. The parameter $H$ is known as the ``Hubble constant''.
Most of the various propagators between points $x^{\mu}$ and
$z^{\mu}$ can be expressed in terms of the de Sitter length function
$y(x;z)$,
\begin{equation}
y(x;z) \equiv a_x a_z H^2 \Bigl[ \Bigl\Vert \vec{x} \!-\! \vec{z}\Bigr\Vert^2
- \Bigl( \vert x^0 \!-\! z^0\vert \!-\! i \epsilon\Bigr)^2 \Bigr] \; . \label{ydef}
\end{equation}
Except for the factor of $i\epsilon$ (whose purpose is to enforce Feynman
boundary conditions) the function $y(x;z)$ is closely related to the
invariant length $\ell(x;z)$ from $x^{\mu}$ to $z^{\mu}$,
\begin{equation}
y(x;z) = 4 \sin^2\Bigl( \frac12 H \ell(x;z)\Bigr) \; .
\end{equation}
Because $y(x;z)$ is a de Sitter invariant, so too are covariant derivatives
of it,
\begin{eqnarray}
\frac{\partial y(x;z)}{\partial x^{\mu}} & = & H a_x \Bigl(y \delta^0_{\mu}
\!+\! 2 a_z H \Delta x_{\mu} \Bigr) \; , \\
\frac{\partial y(x;z)}{\partial z^{\nu}} & = & H a_z \Bigl(y \delta^0_{\nu}
\!-\! 2 a_x H \Delta x_{\nu} \Bigr) \; , \\
\frac{\partial^2 y(x;z)}{\partial x^{\mu} \partial z^{\nu}} & = & H^2 a_x a_z
\Bigl(y \delta^0_{\mu} \delta^0_{\nu} \!+\! 2 a_z H \Delta x_{\mu}
\delta^0_{\nu} \!-\! 2 a_x \delta^0_{\mu} H \Delta x_{\nu} \!-\! 2
\eta_{\mu\nu}\Bigr) \; . \qquad
\end{eqnarray}
Here and subsequently $\Delta x_{\mu} \equiv \eta_{\mu\nu} (x \!-\! z)^{\nu}$.
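The first-derivative identities above can be verified symbolically; the following check (ours, in $D = 4$ with the $i\epsilon$ dropped) uses sympy and the conventions $\eta_{\mu\nu} = {\rm diag}(-1,1,1,1)$, $a_x = -1/H x^0$.

```python
import sympy as sp

# Symbolic check (D = 4) of dy/dx^mu and dy/dz^nu for
# y(x;z) = a_x a_z H^2 [ ||x - z||^2 - (x^0 - z^0)^2 ].
H = sp.symbols('H', positive=True)
x = sp.symbols('x0 x1 x2 x3')
z = sp.symbols('z0 z1 z2 z3')
ax, az = -1/(H*x[0]), -1/(H*z[0])
eta = [-1, 1, 1, 1]                      # eta_{mu nu} = diag(-1,1,1,1)
Dx  = [x[m] - z[m] for m in range(4)]    # (x - z)^mu
Dxl = [eta[m]*Dx[m] for m in range(4)]   # Delta x_mu = eta_{mu nu}(x-z)^nu
y = ax*az*H**2*sum(eta[m]*Dx[m]**2 for m in range(4))

residuals = []
for mu in range(4):
    d0 = 1 if mu == 0 else 0
    residuals.append(sp.cancel(
        sp.diff(y, x[mu]) - H*ax*(y*d0 + 2*az*H*Dxl[mu])))
    residuals.append(sp.cancel(
        sp.diff(y, z[mu]) - H*az*(y*d0 - 2*ax*H*Dxl[mu])))
assert all(r == 0 for r in residuals)
```

The mixed second derivative then follows by differentiating either first-derivative formula once more.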
Electromagnetism is conformally invariant in $D=4$ dimensions, which means
that it takes the same form in conformal coordinates as in flat space. This
is obvious from the gauge invariant Lagrangian,
\begin{equation}
\mathcal{L}_{\rm inv} = -\frac14 F_{\mu\nu} F_{\rho\sigma} g^{\mu\rho}
g^{\sigma\nu} \sqrt{-g} = -\frac14 a^{D-4} F_{\mu\nu} F_{\rho\sigma}
\eta^{\mu\rho} \eta^{\nu\sigma} \; .
\end{equation}
The wonderful simplicity of using known flat space results will not be
preserved if one adds any multiple of the de Sitter invariant, Feynman
gauge fixing functional,
\begin{equation}
\mathcal{L}_{\rm dS} = -\frac12 \Bigl(g^{\mu\nu} A_{\mu ; \nu}\Bigr)^2
= -\frac12 a^{D-4} \Bigl(\eta^{\mu\nu} A_{\mu , \nu} - (D\!-\!2) H a A_0
\Bigr)^2 \; . \label{LdS}
\end{equation}
(A semi-colon denotes covariant differentiation whereas a comma stands
for the ordinary derivative.) However, a very simple formalism results
from replacing the factor of $(D\!-\!2)$ with $(D\!-\!4)$,
\begin{equation}
\mathcal{L}_{\rm NdS} = -\frac12 a^{D-4} \Bigl(\eta^{\mu\nu} \partial_{\mu}
A_{\nu} - (D \!-\! 4) H a A_0\Bigr)^2 \; . \label{LNdS}
\end{equation}
With this gauge fixing functional the propagator takes the form
\cite{RPW1,KW2},
\begin{equation}
i\Bigl[\mbox{}_{\mu} \Delta^{\rm NdS}_{\nu}\Bigr](x;z) = a_x a_z
i\Delta_B(x;z) \Bigl(\eta_{\mu\nu} + \delta^0_{\mu} \delta^0_{\nu}\Bigr)
- a_x a_z i\Delta_C(x;z) \delta^0_{\mu} \delta^0_{\nu} \; , \label{NdSprop}
\end{equation}
where the de Sitter invariant scalar propagators are,
\begin{eqnarray}
i\Delta_B(x;z) & \equiv & B\Bigl(y(x;z)\Bigr) =
\frac{H^{D-2}}{(4\pi)^{\frac{D}2}} \frac{\Gamma(D \!-\!2)}{\Gamma(\frac{D}2)}
\, \mbox{}_2 F_1\Bigl(D\!-\!2,1;\frac{D}2;1 \!-\! \frac{y}4\Bigr) \; , \qquad \\
i\Delta_C(x;z) & \equiv & C\Bigl(y(x;z)\Bigr) =
\frac{H^{D-2}}{(4\pi)^{\frac{D}2}} \frac{\Gamma(D \!-\!3)}{\Gamma(\frac{D}2)}
\, \mbox{}_2 F_1\Bigl(D\!-\!3,2;\frac{D}2;1 \!-\! \frac{y}4\Bigr) \; . \qquad
\end{eqnarray}
One nice thing about (\ref{NdSprop}) is that its tensor factors are constants.
Another is that each of the scalar propagators which multiply them consists
of the conformal propagator plus a series of less singular terms which vanish
in $D=4$ dimensions,
\begin{eqnarray}
\lefteqn{B(y) = \frac{H^{D-2}}{(4\pi)^{\frac{D}2}} \Biggl\{\Gamma\Bigl(
\frac{D}2 \!-\! 1\Bigr) \Bigl(\frac4{y}\Bigr)^{\frac{D}2 -1} +
\sum_{n=0}^{\infty} \Biggl[ \frac{\Gamma(n \!+\! \frac{D}2)}{\Gamma(n \!+\! 2)}
\Bigl(\frac{y}4 \Bigr)^{n -\frac{D}2 + 2} } \nonumber \\
& & \hspace{6.7cm} - \frac{\Gamma(n\!+\! D\!-\!2)}{\Gamma(n\!+\! \frac{D}2)}
\Bigl(\frac{y}4\Bigr)^{n}\Biggr]\Biggr\} \; , \qquad \label{Bexp} \\
\lefteqn{C(y) = \frac{H^{D-2}}{(4\pi)^{\frac{D}2}} \Biggl\{\Gamma\Bigl(
\frac{D}2 \!-\! 1\Bigr) \Bigl(\frac4{y}\Bigr)^{\frac{D}2 -1} -
\sum_{n=0}^{\infty} \Biggl[ \Bigl(n \!-\! \frac{D}2 \!+\!3 \Bigr)
\frac{\Gamma(n \!+\! \frac{D}2 \!-\! 1)}{\Gamma(n \!+\! 2)}
\Bigl(\frac{y}4 \Bigr)^{n-\frac{D}2+2} } \nonumber \\
& & \hspace{6.7cm} - (n \!+\! 1) \frac{\Gamma(n\!+\!
D\!-\!3)}{\Gamma(n\!+\! \frac{D}2)} \Bigl(\frac{y}4\Bigr)^{n}\Biggr]\Biggr\}
\; . \qquad \label{Cexp}
\end{eqnarray}
So the infinite series terms only need to be retained when they multiply
a potentially divergent quantity. Because the terms with higher values of $n$
vanish more and more rapidly at coincidence (that is, for $y = 0$) only a
finite number of these extra terms ever needs to be included.
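As a numerical spot check (ours), one can confirm that the expansion (\ref{Bexp}) resums to the closed hypergeometric form of $i\Delta_B$, and that every term of the infinite series vanishes identically at $D = 4$; the overall prefactor $H^{D-2}/(4\pi)^{D/2}$ is dropped from both sides.

```python
from mpmath import mp, mpf, gamma, hyp2f1

mp.dps = 30
# Check that the series (Bexp) reproduces the hypergeometric closed form
# of B(y), here at D = 3 and y = 1 (prefactor scaled out of both sides).
D, y = mpf(3), mpf(1)
closed = gamma(D-2)/gamma(D/2) * hyp2f1(D-2, 1, D/2, 1 - y/4)
series = gamma(D/2 - 1) * (4/y)**(D/2 - 1)
for n in range(60):
    series += gamma(n + D/2)/gamma(n + 2) * (y/4)**(n - D/2 + 2) \
            - gamma(n + D - 2)/gamma(n + D/2) * (y/4)**n
assert abs(series - closed) < mpf('1e-20')

# At D = 4 the two members of each bracketed term coincide, so the
# infinite series vanishes term by term.
D4 = mpf(4)
for n in range(6):
    term = gamma(n + D4/2)/gamma(n + 2) * (y/4)**(n - D4/2 + 2) \
         - gamma(n + D4 - 2)/gamma(n + D4/2) * (y/4)**n
    assert abs(term) < mpf('1e-25')
```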
It is now time to make the field-dependent transformation to Lorentz gauge,
\begin{equation}
A_{\mu}'(x) = A_{\mu}(x) - \partial_{\mu} \theta[A](x) \; .
\end{equation}
This would be step 5 in the scheme of the previous section. The fact that
$A_{\mu}'$ obeys Lorentz gauge implies a differential equation for $\theta[A]$,
\begin{equation}
\partial_{\mu} \Bigl( \sqrt{-g} g^{\mu\nu} \partial_{\nu} \theta\Bigr) =
\partial_{\mu} \Bigl( \sqrt{-g} g^{\mu\nu} A_{\nu}\Bigr) \; .
\end{equation}
Of course there are many solutions, related to one another by homogeneous
terms. Any choice of homogeneous term will enforce Lorentz gauge, whereas
there can be at most one choice which gives a de Sitter invariant propagator.
Because Section 5 is devoted to establishing de Sitter invariance and
correspondence with the known solution \cite{TW7}, we postpone specification
of the homogeneous term until then. For now we express the solution in a
general way,
\begin{equation}
\overline{\theta}[A](x) = \int_{V} \! d^Dx' \, G(x;x') \frac{\partial}{
\partial x^{\prime \rho}} \Bigl( \sqrt{-g(x')} g^{\rho\sigma}(x')
A_{\sigma}(x') \Bigr) \; , \label{thetabar}
\end{equation}
where $G(x;x')$ is some Green's function of the scalar d'Alembertian, which
we specify in the next section, and $V$ is some region of the manifold. The
actual solution for $\theta[A](x)$ consists of (\ref{thetabar}) --- with
definite choices for $G(x;x')$ and $V$ --- plus a definite homogeneous
solution. For now we study the field transformed with only
$\overline{\theta}[A](x)$,
\begin{equation}
\overline{A}_{\mu}(x) \equiv A_{\mu}(x) - \partial_{\mu}
\overline{\theta}[A](x) \; .
\end{equation}
The transformed propagator is the vacuum expectation value of the
$T^*$-ordered product of two $\overline{A}$'s. Because $T^*$-ordering
moves any derivatives outside the time-ordering symbol we can express this
as,
\begin{eqnarray}
\lefteqn{\Bigl\langle \Omega \Bigl\vert T^*\Bigl[ \overline{A}_{\mu}(x)
\overline{A}_{\nu}(z)\Bigr] \Bigr\vert \Omega \Bigr\rangle =
\Bigl\langle \Omega \Bigl\vert T\Bigl[ A_{\mu}(x) A_{\nu}(z)\Bigr] \Bigr\vert
\Omega \Bigr\rangle } \nonumber \\
& & \hspace{-.5cm} -\frac{\partial}{\partial x^{\mu}} \int_{V} \! d^Dx' \,
G(x;x') \times \frac{\partial}{\partial x^{\prime \rho}} \Biggl[ \sqrt{-g(x')}
g^{\rho\sigma}(x') \Bigl\langle \Omega \Bigl\vert T\Bigl[ A_{\sigma}(x')
A_{\nu}(z)\Bigr] \Bigr\vert \Omega \Bigr\rangle \Biggr] \nonumber \\
& & \hspace{-.5cm} -\frac{\partial}{\partial z^{\nu}} \int_{V} \! d^Dz' \,
G(z;z') \times \frac{\partial}{\partial z^{\prime \alpha}} \Biggl[ \sqrt{-g(z')}
g^{\alpha\beta}(z') \Bigl\langle \Omega \Bigl\vert T\Bigl[ A_{\mu}(x)
A_{\beta}(z')\Bigr] \Bigr\vert \Omega \Bigr\rangle \Biggr] \nonumber \\
& & \hspace{-.5cm} + \frac{\partial}{\partial x^{\mu}} \frac{\partial}{
\partial z^{\nu}} \int_{V} \! d^Dx' \, G(x;x') \int_{V} \! d^Dz' \, G(z;z')
\nonumber \\
& & \times \frac{\partial}{\partial x^{\prime \rho}}
\frac{\partial}{\partial z^{\prime \alpha}} \Biggl[ \! \sqrt{-g(x')}
g^{\rho\sigma}(x') \sqrt{-g(z')} g^{\alpha\beta}(z')
\Bigl\langle \Omega \Bigl\vert T\Bigl[ A_{\sigma}(x')
A_{\beta}(z')\Bigr] \Bigr\vert \Omega \Bigr\rangle \! \Biggr] . \qquad
\end{eqnarray}
By substituting the noninvariant propagator (\ref{NdSprop}) and using the
fact that $y(x;z)$ depends upon spatial coordinates only through their
difference, we can write the three differentiated, square-bracketed terms as,
\begin{eqnarray}
\lefteqn{\frac{\partial}{\partial x^{\prime \rho}} \Biggl[ \sqrt{-g(x')}
g^{\rho\sigma}(x') \Bigl\langle \Omega \Bigl\vert T\Bigl[ A_{\sigma}(x')
A_{\nu}(z)\Bigr] \Bigr\vert \Omega \Bigr\rangle \Biggr] =
-\frac{\partial}{\partial z^{\nu}} \Bigl[a_{x'}^{D-1} a_z B\Bigl(y(x';z)
\Bigr)\Bigr] } \nonumber \\
& & \hspace{1.4cm} + \delta^0_{\nu} \Biggl\{ \frac{\partial}{\partial z^0}
\Bigl[a_{x'}^{D-1} a_z B\Bigl(y(x';z)\Bigr)\Bigr] +
\frac{\partial}{\partial x^{\prime 0}} \Bigl[a_{x'}^{D-1} a_z
C\Bigl(y(x';z)\Bigr)\Bigr] \Biggr\} , \qquad \\
\lefteqn{\frac{\partial}{\partial z^{\prime \alpha}} \Biggl[ \sqrt{-g(z')}
g^{\alpha\beta}(z') \Bigl\langle \Omega \Bigl\vert T\Bigl[ A_{\mu}(x)
A_{\beta}(z')\Bigr] \Bigr\vert \Omega \Bigr\rangle \Biggr] =
-\frac{\partial}{\partial x^{\mu}} \Bigl[a_x a_{z'}^{D-1} B\Bigl(y(x;z')
\Bigr)\Bigr] } \nonumber \\
& & \hspace{1.4cm} + \delta^0_{\mu} \Biggl\{ \frac{\partial}{\partial x^0}
\Bigl[a_x a_{z'}^{D-1} B\Bigl(y(x;z')\Bigr)\Bigr] +
\frac{\partial}{\partial z^{\prime 0}} \Bigl[a_x a_{z'}^{D-1}
C\Bigl(y(x;z')\Bigr)\Bigr] \Biggr\} , \qquad \\
\lefteqn{\frac{\partial}{\partial x^{\prime \rho}}
\frac{\partial}{\partial z^{\prime \alpha}} \Biggl[ \! \sqrt{-g(x')}
g^{\rho\sigma}(x') \sqrt{-g(z')} g^{\alpha\beta}(z')
\Bigl\langle \Omega \Bigl\vert T\Bigl[ A_{\sigma}(x')
A_{\beta}(z')\Bigr] \Bigr\vert \Omega \Bigr\rangle \! \Biggr] =} \nonumber \\
& & \hspace{-.5cm} \frac{\partial}{\partial x^{\prime i}}
\frac{\partial}{\partial z^{\prime i}} \Bigl[ (a_{x'} a_{z'})^{D-1} B\Bigl(
y(x';z')\Bigr)\Bigr] - \frac{\partial}{\partial x^{\prime 0}}
\frac{\partial}{\partial z^{\prime 0}} \Bigl[ (a_{x'} a_{z'})^{D-1} C\Bigl(
y(x';z')\Bigr)\Bigr] \; . \qquad
\end{eqnarray}
All of this suggests that we would do well to organize the transformed
propagator into a doubly differentiated ``Integral Term'' and the
remaining, ``Other Term'',
\begin{equation}
\Bigl\langle \Omega \Bigl\vert T^*\Bigl[ \overline{A}_{\mu}(x)
\overline{A}_{\nu}(z)\Bigr] \Bigr\vert \Omega \Bigr\rangle =
\frac{\partial}{\partial x^{\mu}} \frac{\partial}{\partial z^{\nu}}
\overline{\mathcal{I}}(x;z) + \Bigl[\mbox{}_{\mu} \overline{\mathcal{O}}_{\nu}
\Bigr](x;z) \; . \label{breakup}
\end{equation}
The Integral Term is,
\begin{eqnarray}
\lefteqn{ \overline{\mathcal{I}}(x;z) \equiv } \nonumber \\
& & \hspace{-.5cm} a_z \int_{V} \! d^Dx' \, G(x;x') a_{x'}^{D-1}
B\Bigl( y(x';z)\Bigr) + a_x \int_{V} \! d^Dz' \, G(z;z')
a_{z'}^{D-1} B\Bigl( y(x;z')\Bigr) \nonumber \\
& & \hspace{-.5cm} + \int_{V} \! d^Dx' \, G(x;x') \int_{V} \! d^Dz' \, G(z;z')
\Biggl\{ \frac{\partial}{\partial x^{\prime i}}
\frac{\partial}{\partial z^{\prime i}} \Bigl[ (a_{x'} a_{z'})^{D-1} B\Bigl(
y(x';z')\Bigr)\Bigr] \nonumber \\
& & \hspace{5.4cm} - \frac{\partial}{\partial x^{\prime 0}}
\frac{\partial}{\partial z^{\prime 0}} \Bigl[ (a_{x'} a_{z'})^{D-1} C\Bigl(
y(x';z')\Bigr)\Bigr] \Biggr\} . \qquad
\end{eqnarray}
And the Other Term contains everything else,
\begin{eqnarray}
\lefteqn{ \Bigl[\mbox{}_{\mu} \overline{\mathcal{O}}_{\nu} \Bigr](x;z) \equiv
a_x a_z B\Bigl(y(x;z)\Bigr) \Bigl[\eta_{\mu\nu} \!+\! \delta^0_{\mu}
\delta^0_{\nu}\Bigr] - a_x a_z C\Bigl(y(x;z)\Bigr) \delta^0_{\mu}
\delta^0_{\nu} } \nonumber \\
& & - \delta^0_{\nu} \frac{\partial}{\partial x^{\mu}}
\int_{V} \! d^Dx' \, G(x;x') \Biggl\{\frac{\partial}{\partial z^0}
\Bigl[a_{x'}^{D-1} a_z B\Bigl(y(x';z)\Bigr)\Bigr] \nonumber \\
& & \hspace{1cm} + \frac{\partial}{\partial x^{\prime 0}} \Bigl[a_{x'}^{D-1}
a_z C\Bigl(y(x';z)\Bigr)\Bigr] \Biggr\} - \delta^0_{\mu} \frac{\partial}{
\partial z^{\nu}} \int_{V} \! d^Dz' \, G(z;z') \nonumber \\
& & \hspace{2.cm} \times \Biggl\{\! \frac{\partial}{\partial x^0}
\Bigl[a_x a_{z'}^{D-1} B\Bigl(y(x;z')\Bigr)\Bigr]
+ \frac{\partial}{\partial z^{\prime 0}} \Bigl[a_x a_{z'}^{D-1}
C\Bigl(y(x;z')\Bigr)\Bigr] \! \Biggr\} . \qquad
\end{eqnarray}
It remains to act the derivatives to simplify our expressions for $\overline{
\mathcal{I}}(x;z)$ and $[\mbox{}_{\mu} \overline{\mathcal{O}}_{\nu}](x;z)$.
This is facilitated by some important identities obeyed by any function of
$y(x;z)$,
\begin{eqnarray}
\lefteqn{\square_x F(y) =
\frac{i 4 \pi^{\frac{D}2} \delta^D(x \!-\! z)}{\Gamma(\frac{D}2 \!-\! 1) \,
H^{D-2} a^D} \times {\rm Res}[F] + H^2 \Biggl\{\! (4y \!-\! y^2) F'' +
D (2 \!-\! y) F' \! \Biggr\} , \qquad } \label{FBox} \\
\lefteqn{\frac{\partial^2 F(y)}{\partial x^0 \partial z^0} =
\frac{i 4 \pi^{\frac{D}2} \delta^D(x \!-\! z)}{\Gamma(\frac{D}2 \!-\! 1) \,
(H a)^{D-2}} \times {\rm Res}[F] + a_x a_z H^2 \Biggl\{ \Bigl[8 \!-\! (4 y
\!-\! y^2) \Bigr] F'' } \nonumber \\
& & \hspace{3.9cm} - (2\!-\!y) F' + \Bigl(\frac{a_x}{a_z} \!+\! \frac{a_z}{a_x}
\Bigr) \Bigl[ -2(2 \!-\!y) F'' \!+\! 2 F'\Bigr] \Biggr\}
\; , \qquad \label{F00} \\
\lefteqn{\frac{\partial^2 F(y)}{\partial x^i \partial z^i} = a_x a_z H^2
\Biggl\{4(2 \!-\!y) F'' -2 (D\!-\!1) F' - 4 \Bigl(\frac{a_x}{a_z} \!+\!
\frac{a_z}{a_x}\Bigr) F'' \Biggr\} \; , \qquad } \label{Fii} \\
\lefteqn{ H \Bigl[a_x \frac{\partial}{\partial z^0} \!+\! a_z \frac{\partial}{
\partial x^0}\Bigr] F(y) = a_x a_z H^2 \Biggl\{-2 (2 \!-\!y) F' +
2 \Bigl(\frac{a_x}{a_z} \!+\! \frac{a_z}{a_x}\Bigr) F'\Biggr\} . \qquad }
\label{F0}
\end{eqnarray}
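These identities can be spot-checked symbolically away from coincidence; the sketch below (ours) uses the smooth test function $F(y) = y^3$, for which ${\rm Res}[F] = 0$ so the delta-function terms are absent, in $D = 4$.

```python
import sympy as sp

# Symbolic spot check of (FBox), (Fii) and (F0) for F(y) = y^3 in D = 4.
H = sp.symbols('H', positive=True)
x = sp.symbols('x0 x1 x2 x3')
z = sp.symbols('z0 z1 z2 z3')
Dd = 4
ax, az = -1/(H*x[0]), -1/(H*z[0])
y = ax*az*H**2*(sum((x[i]-z[i])**2 for i in (1, 2, 3)) - (x[0]-z[0])**2)
F = y**3
Fp, Fpp = 3*y**2, 6*y                 # F'(y) and F''(y), evaluated at y
r = ax/az + az/ax

# (FBox): box_x F = H^2 [ (4y - y^2) F'' + D (2 - y) F' ]
box_F = ax**(-Dd)*(-sp.diff(ax**(Dd-2)*sp.diff(F, x[0]), x[0])
                   + ax**(Dd-2)*sum(sp.diff(F, x[i], 2) for i in (1, 2, 3)))
res1 = sp.cancel(box_F - H**2*((4*y - y**2)*Fpp + Dd*(2-y)*Fp))

# (Fii): sum_i d^2 F / dx^i dz^i
lhs = sum(sp.diff(F, x[i], z[i]) for i in (1, 2, 3))
res2 = sp.cancel(lhs - ax*az*H**2*(4*(2-y)*Fpp - 2*(Dd-1)*Fp - 4*r*Fpp))

# (F0): H [ a_x d/dz^0 + a_z d/dx^0 ] F
lhs = H*(ax*sp.diff(F, z[0]) + az*sp.diff(F, x[0]))
res3 = sp.cancel(lhs - ax*az*H**2*(-2*(2-y)*Fp + 2*r*Fp))

assert res1 == 0 and res2 == 0 and res3 == 0
```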
Here $\square$ is the covariant scalar d'Alembertian and ${\rm Res}[F]$ is
the coefficient of $y^{1-\frac{D}2}$ in the Laurent expansion of $F(y)$. We
shall also require some identities specific to $B(y)$ and $C(y)$,
\begin{eqnarray}
(4y \!-\! y^2) B''(y) + D(2 \!-\! y) B'(y) & = & (D \!-\! 2) B(y)
\; , \label{BID}\\
(4y \!-\! y^2) C''(y) + D(2 \!-\! y) C'(y) & = & 2 (D \!-\! 3) C(y)
\; . \label{CID}
\end{eqnarray}
And there is a very useful relation between $B(y)$ and $C(y)$,
\begin{equation}
C(y) = \frac12 (2\!-\!y) B(y) + \frac{k}{D\!-\!3} \qquad {\rm where} \qquad
k \equiv \frac{H^{D-2}}{(4\pi)^{\frac{D}2}} \frac{\Gamma(D\!-\!1)}{\Gamma(
\frac{D}2)} \; . \label{BCID}
\end{equation}
Note finally that substituting (\ref{BCID}) into (\ref{CID}) and using
(\ref{BID}) implies,
\begin{equation}
(4y \!-\! y^2) B'(y) + (D\!-\!2) (2 \!-\! y) B(y) = -2 k \; . \label{BID2}
\end{equation}
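The identities (\ref{BID})-(\ref{BID2}) can be verified numerically at generic $D$ and $y$; in the check below (ours) the common prefactor $H^{D-2}/(4\pi)^{\frac{D}2}$ is scaled out of $B$, $C$ and $k$ alike, which is consistent because the identities are linear in these quantities.

```python
from mpmath import mp, mpf, gamma, hyp2f1, diff

mp.dps = 30
# Numerical check of (BID), (CID), (BCID) and (BID2) at D = 3.5, y = 1.2.
D = mpf('3.5')
B = lambda y: gamma(D-2)/gamma(D/2) * hyp2f1(D-2, 1, D/2, 1 - y/4)
C = lambda y: gamma(D-3)/gamma(D/2) * hyp2f1(D-3, 2, D/2, 1 - y/4)
k = gamma(D-1)/gamma(D/2)
y = mpf('1.2')
Bp, Bpp = diff(B, y), diff(B, y, 2)
Cp, Cpp = diff(C, y), diff(C, y, 2)
checks = [
    (4*y - y**2)*Bpp + D*(2-y)*Bp - (D-2)*B(y),          # (BID)
    (4*y - y**2)*Cpp + D*(2-y)*Cp - 2*(D-3)*C(y),        # (CID)
    C(y) - (2-y)/2*B(y) - k/(D-3),                       # (BCID)
    (4*y - y**2)*Bp + (D-2)*(2-y)*B(y) + 2*k,            # (BID2)
]
assert all(abs(c) < 1e-10 for c in checks)
```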
It is best to start with the Other Term because it involves only first
derivatives. The reduction is straightforward for the $B$ term,
\begin{eqnarray}
\frac{\partial}{\partial z^0} \Bigl[ a_x^{D-1} a_z B\Bigl( y(x;z) \Bigr)\Bigr]
& = & a_x^{D-1} a_z \Bigl[ H a_z B + \frac{\partial y}{\partial z^0}
\, B'\Bigr] \; , \\
& = & a_x^D a_z H \Biggl\{ 2 B' + \frac{a_z}{a_x} \Bigl[- (2 \!-\!y)
B' + B\Bigr] \Biggr\} \; . \qquad \label{Bcontr}
\end{eqnarray}
We begin the same way with the $C$ term but then use (\ref{BCID}) to convert
most of the $C$'s to $B$'s and simplify with (\ref{BID2}),
\begin{eqnarray}
\lefteqn{\frac{\partial}{\partial x^0} \Bigl[ a_x^{D-1} a_z C\Bigl( y(x;z)
\Bigr)\Bigr] = a_x^{D-1} a_z \Bigl[ (D \!-\!1) H a_x C + \frac{\partial y}{
\partial x^0} \, C'\Bigr] \; , } \\
& & = a_x^D a_z H \Biggl\{ -(2 \!-\! y) C' + (D \!-\! 1) C + \frac{a_z}{a_x}
\Bigl[2 C'\Bigr] \Biggr\} \; , \qquad \\
& & = a_x^D a_z H \Biggl\{ 2 C -\frac12 (2 \!-\! y)^2 B' + \frac12 (D \!-\! 2)
(2 \!-\!y) B + k + \nonumber \\
& & \hspace{8.5cm} \frac{a_z}{a_x} \Bigl[(2 \!-\! y) B' \!-\! B\Bigr] \Biggr\}
\; , \qquad \\
& & = a_x^D a_z H \Biggl\{ 2 C -2 B' + \frac{a_z}{a_x} \Bigl[(2 \!-\! y) B'
\!-\! B\Bigr] \Biggr\} \; . \qquad \label{Ccontr}
\end{eqnarray}
Hence (\ref{Bcontr}) and (\ref{Ccontr}) almost completely cancel and our
final result for the Other Term is,
\begin{eqnarray}
\lefteqn{ \Bigl[\mbox{}_{\mu} \overline{\mathcal{O}}_{\nu} \Bigr](x;z) \equiv
a_x a_z B\Bigl(y(x;z)\Bigr) \eta_{\mu\nu} + a_x a_z \Biggl\{ \frac12 y(x;z)
B\Bigl(y(x;z)\Bigr)\!-\! \frac{k}{D\!-\!3} \Biggr\} \delta^0_{\mu}
\delta^0_{\nu} } \nonumber \\
& & \hspace{2cm} - 2 H a_z \delta^0_{\nu} \frac{\partial}{\partial x^{\mu}}
\int_{V} \! d^Dx' \sqrt{-g(x')} \, G(x;x') C\Bigl(y(x';z)\Bigr) \nonumber \\
& & \hspace{3.3cm} - 2 H a_x \delta^0_{\mu} \frac{\partial}{\partial z^{\nu}}
\int_{V} \! d^Dz' \sqrt{-g(z')} \, G(z;z') C\Bigl(y(x;z')\Bigr) .
\qquad \label{Other}
\end{eqnarray}
The Integral term $\overline{\mathcal{I}}(x;z)$ involves second derivatives.
We only need (\ref{Fii}) to reduce the spatial case,
\begin{eqnarray}
\lefteqn{\frac{\partial}{\partial x^i} \frac{\partial}{\partial z^i}
\Bigl[ (a_{x} a_{z})^{D-1} B\Bigl(y(x;z)\Bigr)\Bigr] } \nonumber \\
& & \hspace{1cm} = H^2 (a_x a_z)^D \Biggl\{4 (2 \!-\!y)
B'' - 2(D\!-\!1) B' + \Bigl(\frac{a_x}{a_z} \!+\! \frac{a_z}{a_x}\Bigr)
\Bigl[ -4 B''\Bigr]\Biggr\} . \qquad
\end{eqnarray}
Reducing the temporal derivative term is more involved. We begin by passing
the derivatives through the scale factors, then employ relations (\ref{F00})
and (\ref{F0}), and convert most of the $C(y)$'s to $B(y)$ using (\ref{BCID}),
eliminating second derivatives with (\ref{BID}-\ref{CID}) as needed. The
result is,
\begin{eqnarray}
\lefteqn{\frac{\partial}{\partial x^0} \frac{\partial}{\partial z^0}
\Bigl[ (a_x a_z)^{D-1} C\Bigl( y(x;z)\Bigr)\Bigr] = (a_x a_z)^{D-1}
\Biggl\{ \frac{\partial}{\partial x^0} \frac{\partial}{\partial z^0} }
\nonumber \\
& & \hspace{1.5cm} + (D\!-\!1) H \Bigl[a_x \frac{\partial}{\partial z^0} \!+\!
a_z \frac{\partial}{\partial x^0}\Bigr] + (D\!-\!1)^2 a_x a_z H^2\Biggr\}
C\Bigl(y(x;z)\Bigr) \; , \qquad \\
& & = i a_x^D \delta^D(x\!-\! z) + H^2 (a_x a_z)^D \Biggl\{4 C + 4(2\!-\!y)
B'' - 2 (D\!+\!3) B' \nonumber \\
& & \hspace{4.9cm} + \Bigl(\frac{a_x}{a_z} \!+\! \frac{a_z}{a_x}\Bigr)
\Bigl[-4 B'' + 2 (2 \!-\! y) B' - 2 B\Bigr] \Biggr\} . \qquad
\end{eqnarray}
Adding the two terms gives a compact form,
\begin{eqnarray}
\lefteqn{\frac{\partial}{\partial x^i} \frac{\partial}{\partial z^i}
\Bigl[ (a_{x} a_{z})^{D-1} B(y)\Bigr] - \frac{\partial}{\partial x^0}
\frac{\partial}{\partial z^0} \Bigl[ (a_x a_z)^{D-1} C(y)\Bigr] =
-i a_x^D \delta^D(x \!-\! z) } \nonumber \\
& & \hspace{1.5cm} + (a_x a_z)^D H^2 \Biggl\{\! -4 C + 8 B'
+ \Bigl(\frac{a_x}{a_z} \!+\! \frac{a_z}{a_x}\Bigr) \Bigl[-2 (2 \!-\!y) B'
+ 2 B\Bigr] \! \Biggr\} . \qquad
\end{eqnarray}
A further simplification can be effected by means of the identity,
\begin{equation}
(a_x a_z)^D \square_x \Bigl[ \frac{a_x}{a_z} B(y)\Bigr] = i a_x^D \delta^D(x
\!-\! z) + (a_x a_z)^D H^2 \Biggl\{-4 B' + \frac{a_x}{a_z} \Bigl[2 (2 \!-\!y)
B' - 2 B\Bigr] \Biggr\} .
\end{equation}
Using this and the result with $x^{\mu}$ and $z^{\mu}$ interchanged gives,
\begin{eqnarray}
\lefteqn{\frac{\partial}{\partial x^i} \frac{\partial}{\partial z^i}
\Bigl[ (a_{x} a_{z})^{D-1} B(y)\Bigr] - \frac{\partial}{\partial x^0}
\frac{\partial}{\partial z^0} \Bigl[ (a_x a_z)^{D-1} C(y)\Bigr] =
i a_x^D \delta^D(x \!-\! z) } \nonumber \\
& & \hspace{3.5cm} + (a_x a_z)^D \Biggl\{\! -4 H^2 C - \square_x \Bigl[
\frac{a_x}{a_z} B\Bigr] - \square_z \Bigl[ \frac{a_z}{a_x} B\Bigr]
\Biggr\} . \qquad
\end{eqnarray}
Our final result for the Integral Term involves the surface integral,
\begin{eqnarray}
\lefteqn{\mathcal{S}_B(x;z) \equiv \frac{a_x}{a_z} B\Bigl(y(x;z)\Bigr)
\!-\! \int_{V} \!\! d^Dx' \sqrt{-g(x')} \, G(x;x') \square_{x'} \Biggl[\!
\frac{a_{x'}}{a_z} B\Bigl(y(x';z)\Bigr) \!\Biggr] , \qquad } \\
& & \hspace{-.7cm} = \!\! \int_{\partial V} \!\!\!\!\!\! d^{D-1}\!x_{\mu}' \!
\sqrt{-g'} \!g^{\prime \mu\nu} \! \Biggl[\! \frac{a_{x'}}{a_z} i\Delta_B(x';z)
\partial_{\nu}' G(x;x') \!-\! G(x;x') \partial_{\nu}' \Bigl[
\frac{a_{x'}}{a_z} i\Delta_B(x';z)\Bigr] \! \Biggr] . \qquad
\end{eqnarray}
This function is obviously homogeneous; that is, $\square_x$ annihilates it.
In the next section we will show how to choose the homogeneous contribution
to the full gauge parameter $\theta[A](x)$ so as to cancel it and similar
terms. The final result for the Integral Term is,
\begin{eqnarray}
\lefteqn{\overline{\mathcal{I}}(x;z) = \int_{V} \! d^Dx' \sqrt{-g(x')} \,
G(x;x') \mathcal{S}_B(z;x') } \nonumber \\
& & + \int_{V} \! d^Dz' \sqrt{-g(z')} \, G(z;z') \mathcal{S}_B(x;z')
+ i \int_{V} \! d^Dx' \sqrt{-g(x')} \, G(x;x') G(z;x') \nonumber \\
& & - 4 H^2 \int_{V} \! d^Dx' \sqrt{-g(x')} \, G(x;x')
\int_{V} \! d^Dz' \sqrt{-g(z')} \, G(z;z') C\Bigl( y(x';z')\Bigr) \; .
\qquad \label{Integral}
\end{eqnarray}
\section{The Invariant Propagator}
The purpose of this section is to show that the transformed propagator of
the previous section agrees, up to surface terms, with the unique de Sitter
invariant propagator which was found by solving the propagator equation in
Lorentz gauge \cite{TW7}. Of course we begin by describing that solution.
We then decompose it in analogy with the scheme (\ref{breakup}) of the
previous section, into an ``Integral Term'' and an ``Other Term.'' At this
stage there is a digression to derive an identity for the convolution of
scalar propagators. The section closes by applying this identity to
demonstrate that the two propagators agree up to surface integrals.
The Lorentz gauge propagator equation has a unique de Sitter invariant
solution which can be expressed in terms of a function $\gamma(y)$ \cite{TW7},
\begin{eqnarray}
\lefteqn{ i\Bigl[\mbox{}_{\mu} \Delta^{\rm dS}_{\nu}\Bigr](x;z) =
\frac1{4 (D\!-\!1) H^2} \Biggl\{ \frac{\partial^2 y(x;z)}{\partial x^{\mu}
\partial z^{\nu}} \Bigl[-(4y \!-\! y^2) \gamma' \!-\! (D\!-\!1)
(2\!-\!y) \gamma\Bigr] } \nonumber \\
& & \hspace{6cm} + \frac{\partial y}{\partial x^{\mu}} \,
\frac{\partial y}{\partial z^{\nu}} \Bigl[ (2 \!-\!y) \gamma'
\!-\! (D\!-\!1) \gamma\Bigr] \! \Biggr\} . \qquad \label{dSprop}
\end{eqnarray}
The function $\gamma(y)$ has a very complicated series expansion,
\begin{eqnarray}
\lefteqn{\gamma(y) = \frac12 \Bigl(\frac{D\!-\!1}{D\!-\!3}\Bigr)
\frac{H^{D-2}}{(4\pi)^{\frac{D}2}} \Biggl\{ (D\!-\!3) \Gamma\Bigl(\frac{D}2
\!-\!1\Bigr) \Bigl(\frac{4}{y}\Bigr)^{\frac{D}2-1} \!\!+ \sum_{n=0}^{\infty}
\Biggl[ \frac{(n\!+\!1) \Gamma(n \!+\!D \!-\!1)}{\Gamma(n \!+\! \frac{D}2
\!+\!1)} } \nonumber \\
& & \hspace{2.5cm} \times \Bigl[\psi\Bigl(2\!-\!\frac{D}2\Bigr) \!-\!
\psi\Bigl(\frac{D}2 \!-\! 1\Bigr) \!+\! \psi(n \!+\! D \!-\! 1) \!-\!
\psi(n \!+\! 2)\Bigr] \Bigl(\frac{y}{4}\Bigr)^n \nonumber \\
& & - \frac{(n\!-\!\frac{D}2 \!+\! 3) \Gamma(n \!+\! \frac{D}2 \!+\!1)}{
\Gamma(n \!+\! 3)} \Bigl[\psi\Bigl(2\!-\!\frac{D}2\Bigr) \!-\!
\psi\Bigl(\frac{D}2 \!-\! 1\Bigr) \nonumber \\
& & \hspace{4cm} + \psi\Bigl(n \!+\! \frac{D}2 \!+\! 1\Bigr) \!-\!
\psi\Bigl(n \!-\! \frac{D}2 \!+\! 4\Bigr)\Bigr] \Bigl(\frac{y}{4}\Bigr)^{n
-\frac{D}2 + 2} \Biggr] \Biggr\} . \qquad \label{gamma}
\end{eqnarray}
Although it might seem unwieldy, this formalism has been used to perform
several two loop computations in scalar quantum electrodynamics
\cite{PTW1,PTW2}.
The function $\gamma(y)$ obeys the second order differential equation,
\begin{equation}
(4y \!-\! y^2) \gamma'' + (D\!+\!2) (2\!-\!y) \gamma' - 2 (D\!-\!1) \gamma
= 2 (D\!-\!1) B'(y) \; . \qquad \label{gameqn}
\end{equation}
One consequence is,
\begin{equation}
\frac{\partial}{\partial y} \Biggl[-(4y \!-\! y^2) \gamma' \!-\! (D\!-\!1)
(2 \!-\!y) \gamma \!+\! 2(D\!-\!1) B\Biggr] = (2 \!-\! y) \gamma' \!-\!
(D\!-\!1) \gamma \; .
\end{equation}
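This follows by differentiating term by term and then using (\ref{gameqn}) to eliminate $(4y \!-\! y^2)\gamma''$; we spell out the short intermediate step:

```latex
\begin{eqnarray}
\lefteqn{\frac{\partial}{\partial y}\Bigl[-(4y \!-\! y^2) \gamma' \!-\!
(D\!-\!1)(2\!-\!y)\gamma \!+\! 2(D\!-\!1) B\Bigr] = -(4y \!-\! y^2)\gamma''}
\nonumber \\
& & - (4 \!-\! 2y)\gamma' + (D\!-\!1)\gamma - (D\!-\!1)(2\!-\!y)\gamma'
+ 2(D\!-\!1)B' \; , \nonumber \\
& & = \Bigl[(D\!+\!2) \!-\! 2 \!-\! (D\!-\!1)\Bigr](2\!-\!y)\gamma'
- (D\!-\!1)\gamma = (2\!-\!y)\gamma' - (D\!-\!1)\gamma \; . \nonumber
\end{eqnarray}
```

In the second equality the $B'$ terms cancel and the coefficients of $(2\!-\!y)\gamma'$ combine to unity.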
Hence we can decompose the invariant propagator in a form analogous to
that of the transformed propagator (\ref{breakup}),
\begin{eqnarray}
\lefteqn{ i\Bigl[\mbox{}_{\mu} \Delta^{\rm dS}_{\nu}\Bigr](x;z) = -
\frac1{2 H^2} B\Bigl(y(x;z)\Bigr) \, \frac{\partial^2 y(x;z)}{\partial x^{\mu}
\partial z^{\nu}} } \nonumber \\
& & \hspace{-.1cm} + \frac1{4 (D\!-\! 1) H^2} \frac{\partial}{\partial x^{\mu}}
\frac{\partial}{\partial z^{\nu}} \, I\Biggl[-(4y \!-\! y^2) \gamma' \!-\!
(D\!-\!1) (2 \!-\!y) \gamma \!+\! 2(D\!-\!1) B\Biggr] , \qquad \label{invprop}
\end{eqnarray}
where the notation ``$I[f]$'' for a function $f(y)$ stands for its indefinite
integral,
\begin{equation}
I[f](y) \equiv \int^y \!\! dy' \, f(y') \; .
\end{equation}
At this point it is useful to digress on the subject of scalar propagators.
Three were introduced in the previous section --- $+i \times G(x;z)$,
$i\Delta_B(x;z)$ and $i\Delta_C(x;z)$ --- and it might seem that there is a
bewildering variety of them, each with its own important special properties.
However, a unified treatment can be given in terms of the equation,
\begin{equation}
\sqrt{-g(x)} \, \Biggl\{ \square_x + \Bigl[ \nu^2 \!-\! \Bigl(\frac{D-1}2
\Bigr)^2 \Bigr] H^2 \Biggr\} i\Delta_{\nu}(x;z) = i \delta^D(x \!-\! z) \; .
\label{nuprop}
\end{equation}
The three propagators of the previous section correspond to the following
choices for $\nu$:
\begin{eqnarray}
i \times G(x;z) & \Longrightarrow & \nu = \Bigl(\frac{D-1}2\Bigr) \; , \\
i \Delta_B(x;z) & \Longrightarrow & \nu = \Bigl(\frac{D-3}2\Bigr) \; , \\
i \Delta_C(x;z) & \Longrightarrow & \nu = \Bigl(\frac{D-5}2\Bigr) \; .
\end{eqnarray}
For general $\nu$ the spatial plane wave mode functions corresponding to
Bunch-Davies vacuum are,
\begin{equation}
u_{\nu}(x^0,k) \equiv \sqrt{\frac{\pi}{4 H}} \; a^{-\frac{D-1}2} \,
H^{(1)}_{\nu}(-k x^0) \; . \label{unu}
\end{equation}
When it exists, the Fourier mode sum for the propagator is \cite{JMPW},
\begin{eqnarray}
\lefteqn{i\Delta_{\nu}(x;z) = \int \!\! \frac{d^{D-1}k}{(2\pi)^{D-1}} \,
e^{i \vec{k} \cdot (\vec{x} - \vec{z})} \Biggl\{ \theta(x^0 \!-\! z^0)
u_{\nu}(x^0,k) u^*_{\nu}(z^0,k) } \nonumber \\
& & \hspace{6.4cm} + \theta(z^0 \!-\! x^0) u^*_{\nu}(x^0,k) u_{\nu}(z^0,k)
\Biggr\} . \qquad \label{modesum}
\end{eqnarray}
When this sum exists the result is de Sitter invariant \cite{CT},
\begin{equation}
i\Delta_{\nu}(x;z) = \frac{H^{D-2}}{(4\pi)^{\frac{D}2}}
\frac{\Gamma(\frac{D-1}2 \!+\! \nu) \Gamma(\frac{D-1}2 \!-\! \nu)}{
\Gamma(\frac{D}2)} \, \mbox{}_2 F_1\Bigl( \frac{D-1}2 \!+\! \nu,
\frac{D-1}2 \!-\! \nu; \frac{D}2;1 \!-\! \frac{y}4\Bigr) \; .
\end{equation}
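One can check numerically (our sketch) that this hypergeometric form satisfies (\ref{nuprop}) away from coincidence: by (\ref{FBox}), the smooth part of $\square$ acting on any $F(y)$ is $H^2[(4y \!-\! y^2) F'' + D(2\!-\!y) F']$, so the equation reduces to an ordinary differential equation in $y$. All $y$-independent prefactors are dropped.

```python
from mpmath import mp, mpf, diff, hyp2f1

mp.dps = 30
# Check that F(y) = 2F1((D-1)/2 + nu, (D-1)/2 - nu; D/2; 1 - y/4) obeys
# (4y - y^2) F'' + D (2 - y) F' + [nu^2 - ((D-1)/2)^2] F = 0
# for the three values of nu used in the text, at D = 4.6 and y = 0.7.
D, y = mpf('4.6'), mpf('0.7')
residuals = []
for nu in [(D-1)/2, (D-3)/2, (D-5)/2]:
    F = lambda t, nu=nu: hyp2f1((D-1)/2 + nu, (D-1)/2 - nu, D/2, 1 - t/4)
    residuals.append((4*y - y**2)*diff(F, y, 2) + D*(2-y)*diff(F, y)
                     + (nu**2 - ((D-1)/2)**2)*F(y))
assert all(abs(r) < 1e-10 for r in residuals)
```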
When the Fourier mode sum (\ref{modesum}) is infrared divergent it must be
cut off, either by making the mode functions less singular for super-horizon
wave lengths \cite{Vilenkin} or by working on a spatially compact manifold
\cite{TW8}. Either procedure breaks de Sitter invariance. A special case of
some importance to our discussion is $\nu = (D-1)/2$, for which the result is
\cite{OW,JMPW},
\begin{equation}
\nu = \Bigl(\frac{D\!-\!1}2\Bigr) \qquad \Longrightarrow \qquad
i\Delta_A(x;z) = A\Bigl( y(x;z)\Bigr) + k \ln(a_x a_z) \; , \label{DeltaA}
\end{equation}
where the constant $k$ was defined in (\ref{BCID}) and the function $A(y)$
has the expansion,
\begin{eqnarray}
\lefteqn{A(y) \equiv \frac{H^{D-2}}{(4\pi)^{\frac{D}2}} \Biggl\{
\frac{\Gamma(\frac{D}2)}{\frac{D}2 \!-\! 1}
\Bigl(\frac{4}{y}\Bigr)^{ \frac{D}2 -1} \!+\!
\frac{\Gamma(\frac{D}2 \!+\! 1)}{\frac{D}2 \!-\! 2}
\Bigl(\frac{4}{y} \Bigr)^{\frac{D}2-2} \!-\! \pi
\cot\Bigl(\frac{\pi D}2\Bigr)
\frac{\Gamma(D \!-\! 1)}{\Gamma(\frac{D}2)} } \nonumber \\
& & \hspace{.7cm} + \sum_{n=1}^{\infty} \Biggl[\frac1{n}
\frac{\Gamma(n \!+\! D \!-\! 1)}{\Gamma(n \!+\! \frac{D}2)}
\Bigl(\frac{y}4 \Bigr)^n \!\!\!\! - \frac1{n \!-\! \frac{D}2 \!+\!
2} \frac{\Gamma(n \!+\! \frac{D}2 \!+\! 1)}{ \Gamma(n \!+\! 2)}
\Bigl(\frac{y}4 \Bigr)^{n - \frac{D}2 +2} \Biggr] \Biggr\} . \qquad
\end{eqnarray}
As with the expansions (\ref{Bexp}-\ref{Cexp}) for $B(y)$ and $C(y)$, the
infinite series terms of $A(y)$ vanish for $D=4$, so they only need to be
retained when multiplying a potentially divergent quantity, and even then
one only needs to include a handful of them. This makes loop computations
manageable. For a massless, minimally coupled scalar with a quartic
self-interaction, two loop results have been obtained for the expectation
value of the stress tensor \cite{OW}, for the scalar self-mass-squared
\cite{BOW} and for the quantum-corrected mode functions \cite{KO}. In
Yukawa theory it has been used to compute the expectation value of the
coincident vertex function at two loop order \cite{MW2}, and it has been
used for a variety of two loop computations in scalar quantum electrodynamics
\cite{PTW1,PTW2}. It should also be noted that the de Sitter breaking
correction to $i\Delta_A(x;z)$ in expression (\ref{DeltaA}) can be derived
from the infrared-truncated mode sum \cite{JMPW}, and it serves to reproduce
the classic and well-known result for the coincidence limit of the propagator
\cite{classic}.
The function $A(y)$ obeys a differential equation analogous to (\ref{BID})
and (\ref{CID}),
\begin{equation}
(4 y \!-\! y^2) A'' + D (2 \!-\! y) A' = (D \!-\! 1) k \; . \label{AID}
\end{equation}
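As a consistency check, one can verify numerically that the truncated expansion (dropping the overall factor $H^{D-2}/(4\pi)^{\frac{D}2}$, which cancels) turns the left hand side of (\ref{AID}) into a $y$-independent constant. A sympy sketch at a generic non-integer dimension, chosen to avoid the poles of the expansion:

```python
# Numerical check that the truncated expansion of A(y) makes the left hand
# side of (\ref{AID}), (4y - y^2) A'' + D (2 - y) A', independent of y.
# The overall factor H^{D-2}/(4 pi)^{D/2} is dropped since it cancels here.
import sympy as sp

y = sp.symbols('y', positive=True)
D = sp.Rational(13, 2)   # generic non-integer dimension avoids the poles
N = 8                    # truncation order of the infinite series

A = (sp.gamma(D/2)/(D/2 - 1)*(4/y)**(D/2 - 1)
     + sp.gamma(D/2 + 1)/(D/2 - 2)*(4/y)**(D/2 - 2)
     - sp.pi*sp.cot(sp.pi*D/2)*sp.gamma(D - 1)/sp.gamma(D/2))
for n in range(1, N):
    A += (sp.gamma(n + D - 1)/sp.gamma(n + D/2)*(y/4)**n/n
          - sp.gamma(n + D/2 + 1)/sp.gamma(n + 2)
            *(y/4)**(n - D/2 + 2)/(n - D/2 + 2))

lhs = (4*y - y**2)*sp.diff(A, y, 2) + D*(2 - y)*sp.diff(A, y)
vals = [float(lhs.subs(y, yy)) for yy in (sp.Rational(1, 100), sp.Rational(1, 50))]
# both values agree up to truncation error, confirming a constant right hand side
assert abs(vals[0] - vals[1]) < 1e-6*abs(vals[0])
```

The common constant value is $(D\!-\!1) k$ in the units used here.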
A number of identities relate the derivative of $A(y)$ to $B(y)$ and $C(y)$,
\begin{eqnarray}
A' & = & -\frac12 (D \!-\!3) B + C' \; , \label{ABC} \\
(4 y \!-\! y^2) A' & = & -2 (D \!-\! 2) B - (2 \!-\! y) k \; .
\end{eqnarray}
It is also useful to note the result of acting the scalar d'Alembertian on
a function of the scale factor,
\begin{equation}
\square f(a) = -H^2 \Bigl[a^2 f''(a) + D a f'(a)\Bigr] \; . \label{aID}
\end{equation}
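Relation (\ref{aID}) is simple to verify symbolically in the conformal coordinates used throughout, where $a = -1/(H\eta)$ and, acting on a function of the scale factor alone, $\square = -a^{-D}\partial_0\bigl(a^{D-2}\partial_0\bigr)$. By linearity it suffices to test power laws $f(a) = a^p$; a sympy sketch:

```python
# Symbolic check of Box f(a) = -H^2 [ a^2 f''(a) + D a f'(a) ].
# In conformal coordinates a = -1/(H eta) with eta < 0; substituting
# eta = -s (s > 0) gives a = 1/(H s), and the two sign flips from
# d/deta = -d/ds cancel, so Box = -(1/a^D) d_s ( a^{D-2} d_s ) here.
# By linearity it is enough to test power laws f(a) = a^p.
import sympy as sp

s, H = sp.symbols('s H', positive=True)
D, p = sp.symbols('D p', real=True)
a = 1/(H*s)

box_f = -sp.diff(a**(D - 2)*sp.diff(a**p, s), s)/a**D
fp = p*a**(p - 1)             # f'(a)  for f = a^p
fpp = p*(p - 1)*a**(p - 2)    # f''(a)
target = -H**2*(a**2*fpp + D*a*fp)
assert sp.simplify(box_f - target) == 0
```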
Now consider Green's second identity for any two functions $F(x')$ and
$G(x')$,
\begin{eqnarray}
\lefteqn{F(x') \sqrt{-g(x')} \, \square' G(x') - G(x') \sqrt{-g(x')} \,
\square' F(x')} \nonumber \\
& & \hspace{2cm} = \partial_{\mu}' \Biggl\{ \sqrt{-g(x')} \, g^{\mu\nu}(x')
\Bigl[ F(x') \partial_{\nu}' G(x') - G(x') \partial_{\nu}' F(x')\Bigr]
\Biggr\} . \qquad \label{Green2}
\end{eqnarray}
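The identity (\ref{Green2}) holds for any metric and can be checked symbolically; the following sympy sketch does so for a conformally flat metric with $D=4$ weighting and fields depending on $\eta$ and one spatial coordinate:

```python
# Symbolic check of Green's second identity for g_{mu nu} = a(eta)^2 eta_{mu nu}
# (D = 4 weighting), with arbitrary F, G depending on eta and one spatial
# coordinate; the remaining coordinates contribute nothing for such fields.
import sympy as sp

eta, x1 = sp.symbols('eta x1')
a = sp.Function('a')(eta)                 # arbitrary scale factor
F = sp.Function('F')(eta, x1)
G = sp.Function('G')(eta, x1)

coords = [eta, x1]
sqrtg = a**4                              # sqrt(-g) for D = 4
ginv = [-1/a**2, 1/a**2]                  # g^{mu nu}, signature (-,+,+,+)

def box(f):
    """Covariant scalar d'Alembertian."""
    return sum(sp.diff(sqrtg*ginv[m]*sp.diff(f, coords[m]), coords[m])
               for m in range(2))/sqrtg

lhs = F*sqrtg*box(G) - G*sqrtg*box(F)
rhs = sum(sp.diff(sqrtg*ginv[m]*(F*sp.diff(G, coords[m])
                                 - G*sp.diff(F, coords[m])), coords[m])
          for m in range(2))
assert sp.expand(lhs - rhs) == 0
```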
We choose $G(x')$ to be any symmetric Green's function $G(x;x') = G(x';x)$,
\begin{equation}
\sqrt{-g(x)} \, \square G(x;x') = \delta^D(x \!-\! x') \; .
\end{equation}
We can obviously integrate (\ref{Green2}) over any region $V$ with boundary
$\partial V$ to conclude,
\begin{eqnarray}
\lefteqn{ F(x) = \int_{V} \!\! d^Dx' \sqrt{-g(x')} \, G(x;x') \square_{x'}
F(x') } \nonumber \\
& & \hspace{1cm} + \int_{\partial V} \!\! d^{D-1}\!x_{\mu}' \sqrt{-g'} \,
g^{\prime\mu\nu} \Biggl[F(x') \partial_{\nu}' G(x;x') - G(x;x') \partial_{\nu}'
F(x') \Biggr] \; . \qquad \label{intrel}
\end{eqnarray}
Relation (\ref{intrel}) is true for any Green's function so we are free to
use the $A$-type propagator, $G(x;x') = -i \times i\Delta_A(x;x')$. Relation
(\ref{intrel}) is also valid for any function $F(x)$ so we are free to
make the choice,
\begin{equation}
F(x) \longrightarrow \frac{i\Delta_{\nu}(x;z) \!-\! i\Delta_A(x;z)}{[
(\frac{D-1}2)^2 \!-\! \nu^2] \, H^2} \; .
\end{equation}
The surface terms involving $i\Delta_A$ obviously cancel so the result is,
\begin{eqnarray}
\lefteqn{-i \!\! \int_{V} \! d^Dx' \sqrt{-g(x')} \, i\Delta_A(x;x')
i\Delta_{\nu}(x';z) = \frac{i\Delta_{\nu}(x;z) \!-\! i\Delta_A(x;z)}{[(
\frac{D-1}2)^2 \!-\! \nu^2] H^2} } \nonumber \\
& & \hspace{-.7cm} +i \!\! \int_{\partial V} \!\!\!\! d^{D-1}\!x'_{\rho}
\sqrt{-g'} \, g^{\prime \rho\sigma} \Biggl[\frac{i\Delta_{\nu}(x';z)
\partial_{\sigma}' i\Delta_A(x;x')\!-\!i\Delta_A(x;x') \partial_{\sigma}'
i\Delta_{\nu}(x';z)}{[(\frac{D-1}2)^2 \!-\! \nu^2] H^2} \Biggr] \! . \qquad
\label{convolution}
\end{eqnarray}
We call (\ref{convolution}) the ``Convolution Identity.''
Choosing $\nu = (D-5)/2$ in the Convolution Identity (\ref{convolution})
gives us a relation for the $C$-type propagator $C(y)$,
\begin{equation}
-i \!\! \int_{V} \! d^Dx' \sqrt{-g(x')} \, i\Delta_A(x;x')
i\Delta_C(x';z) = \frac{i\Delta_C(x;z) \!-\! i\Delta_A(x;z)
\!-\! \mathcal{S}_C(x;z)}{2 (D \!-\! 3) H^2} \; , \label{convC}
\end{equation}
where the surface term is,
\begin{equation}
\mathcal{S}_C(x;z) \equiv \!\! \int_{\partial V}\!\!\!\!\!\! d^{D-1}\!x'_{\rho}
\!\sqrt{-g'} g^{\prime \rho\sigma} \Bigl[i\Delta_C(x';z)
\partial_{\sigma}' G(x;x')\!-\! G(x;x') \partial_{\sigma}'
i\Delta_C(x';z) \Bigr] . \label{surfC}
\end{equation}
We now substitute (\ref{convC}) in our result (\ref{Other}) for the Other
Term of the previous section,
\begin{eqnarray}
\lefteqn{ \Bigl[\mbox{}_{\mu} \overline{\mathcal{O}}_{\nu} \Bigr](x;z) =
a_x a_z B\Bigl(y(x;z)\Bigr) \eta_{\mu\nu} + a_x a_z \Biggl\{ \frac12 y(x;z)
B\Bigl(y(x;z)\Bigr)\!-\! \frac{k}{D\!-\!3} \Biggr\} \delta^0_{\mu}
\delta^0_{\nu} } \nonumber \\
& & \hspace{3cm} - 2 H \Biggl[a_z \delta^0_{\nu} \frac{\partial}{\partial
x^{\mu}} \!+\! a_x \delta^0_{\mu} \frac{\partial}{\partial z^{\nu}} \Biggr]
\Biggl\{ \frac{i\Delta_C(x;z) \!-\! i\Delta_A(x;z)}{2 (D \!-\! 3) H^2}
\Biggr\} \nonumber \\
& & \hspace{4cm} + \frac{a_z \delta^0_{\nu}}{(D \!-\! 3) H} \frac{\partial
\mathcal{S}_C(x;z)}{\partial x^{\mu}} + \frac{a_x \delta^0_{\mu}}{(D \!-\! 3)H}
\frac{\partial \mathcal{S}_C(z;x)}{\partial z^{\nu}} \; . \qquad
\end{eqnarray}
The derivative is easy to simplify using (\ref{ABC}),
\begin{eqnarray}
\lefteqn{- 2 H a_z \delta^0_{\nu} \frac{\partial}{\partial x^{\mu}} \Biggl\{
\frac{i\Delta_C(x;z) \!-\! i\Delta_A(x;z)}{2 (D \!-\! 3) H^2}\Biggr\} }
\nonumber \\
& & \hspace{3cm} = -\frac{a_z \delta^0_{\nu}}{(D \!-\!3) H} \,
\frac{\partial y}{\partial x^{\mu}} \, \Bigl(C' - A'\Bigr) + \frac{k}{D\!-\!3}
\, \delta^0_{\mu} \delta^0_{\nu} \, a_x a_z \; , \qquad \\
& & \hspace{3cm} = a_x a_z \Biggl\{ -a_z H \Delta x_{\mu} \delta^0_{\nu} B
-\frac12 y B \delta^0_{\mu} \delta^0_{\nu} + \frac{k}{D \!-\! 3} \,
\delta^0_{\mu} \delta^0_{\nu} \Biggr\} . \qquad
\end{eqnarray}
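This simplification rests on the gradient of the de Sitter length function. Assuming the standard conformal-coordinate form $y(x;z) = a_x a_z H^2 [\|\vec{x} - \vec{z}\|^2 - (\eta_x - \eta_z)^2]$ with $a = -1/(H\eta)$ (as defined earlier in the paper; the $i\varepsilon$ prescription is irrelevant here), one has $\partial y/\partial x^{\mu} = a_x H ( y \, \delta^0_{\mu} + 2 a_z H \Delta x_{\mu})$, which together with $C' - A' = \frac12 (D\!-\!3) B$ from (\ref{ABC}) produces exactly the terms shown. A sympy check of the gradient identity:

```python
# Check of dy/dx^mu = a_x H ( y delta^0_mu + 2 a_z H Dx_mu ), assuming the
# conformal-coordinate form y = a_x a_z H^2 [ r^2 - (eta_x - eta_z)^2 ]
# with a = -1/(H eta); the i*epsilon prescription is dropped since it does
# not affect the identity.  Here Dx_0 = -(eta_x - eta_z), Dx_i = (x - z)^i.
import sympy as sp

ex, ez, r, H = sp.symbols('eta_x eta_z r H')
ax = -1/(H*ex)
az = -1/(H*ez)
y = ax*az*H**2*(r**2 - (ex - ez)**2)

# temporal component (mu = 0)
claimed0 = ax*H*(y + 2*az*H*(-(ex - ez)))
assert sp.simplify(sp.diff(y, ex) - claimed0) == 0

# a spatial component, with r standing in for (x - z)^i
claimedi = ax*H*(2*az*H*r)
assert sp.simplify(sp.diff(y, r) - claimedi) == 0
```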
Combining everything results in an expression for the Other Term which is
almost de Sitter invariant, modulo the surface terms,
\begin{eqnarray}
\lefteqn{ \Bigl[\mbox{}_{\mu} \overline{\mathcal{O}}_{\nu} \Bigr](x;z) =
a_x a_z \Biggl\{ -\frac12 y \delta^0_{\mu} \delta^0_{\nu} - a_z H \Delta
x_{\mu} \delta^0_{\nu} + a_x \delta^0_{\mu} H \Delta x_{\nu} + \eta_{\mu\nu}
\Biggr\} B } \nonumber \\
& & \hspace{.5cm} + \frac{k}{D \!-\!3} \, a_x a_z \delta^0_{\mu} \delta^0_{\nu}
+ \frac{a_z \delta^0_{\nu}}{(D \!-\! 3) H} \frac{\partial \mathcal{S}_C(x;z)}{
\partial x^{\mu}} + \frac{a_x \delta^0_{\mu}}{(D \!-\!3) H} \frac{\partial
\mathcal{S}_C(z;x)}{\partial z^{\nu}} \; , \qquad \\
& & \hspace{0cm}=\!-\frac{\partial^2 y(x;z)}{\partial x^{\mu}\partial z^{\nu}}
\frac{B}{2 H^2} \!+\! \frac{k a_x a_z \delta^0_{\mu} \delta^0_{\nu}}{D
\!-\!3} \!+\! \frac{a_z \delta^0_{\nu}}{D \!-\!3} \frac{\partial
\mathcal{S}_C(x;z)}{\partial H x^{\mu}} \!+\! \frac{a_x \delta^0_{\mu}}{
D \!-\! 3} \frac{\partial \mathcal{S}_C(z;x)}{\partial H z^{\nu}} .
\qquad \label{Obar}
\end{eqnarray}
The first term on the right hand side of (\ref{Obar}) agrees with the first
term in our decomposition (\ref{invprop}) for the invariant propagator. We
must obviously choose the homogeneous contributions to the gauge parameter
$\theta[A](x)$ so as to cancel the surface terms in (\ref{Obar}). That leaves
the term proportional to $k$, which relation (\ref{aID}) allows us to
recognize as a potential part of the Integral Term,
\begin{equation}
\frac{k a_x a_z \delta^0_{\mu} \delta^0_{\nu}}{D \!-\!3} = \frac{\partial}{
\partial x^{\mu}} \, \frac{\partial}{\partial z^{\nu}} \Biggl\{
\frac{k \ln^2(a_x a_z)}{2 (D \!-\! 3) H^2} + {\rm const} \times \ln(a_x a_z)
\Biggr\} \; . \label{logsq}
\end{equation}
We will presently see that precisely the bracketed expression is needed to
make $\overline{\mathcal{I}}(x;z)$ de Sitter invariant, up to surface terms.
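The step (\ref{logsq}) is easy to verify directly in conformal coordinates; a sympy sketch (only the $\mu = \nu = 0$ components are nontrivial, since the bracketed expression depends only on the two conformal times):

```python
# Symbolic check of (\ref{logsq}): the double time derivative of
# k ln^2(a_x a_z)/(2 (D-3) H^2) gives k a_x a_z/(D-3); all other
# components of the double gradient vanish since the bracket depends
# only on the two conformal times.  Here a = -1/(H eta).
import sympy as sp

ex, ez, H, D, k = sp.symbols('eta_x eta_z H D k')
ax = -1/(H*ex)
az = -1/(H*ez)

bracket = k*sp.log(ax*az)**2/(2*(D - 3)*H**2)
double_grad = sp.diff(bracket, ex, ez)
assert sp.simplify(double_grad - k*ax*az/(D - 3)) == 0
```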
A matter of great importance for us is what the Convolution Identity
(\ref{convolution}) gives when the index $\nu$ is chosen to be $(D\!-\!1)/2$,
corresponding to the $A$-type propagator. The first term on the right hand
side becomes a derivative with respect to the index $\nu$,
\begin{eqnarray}
\lim_{\nu \rightarrow (\frac{D-1}2)} \Biggl[ \frac{i\Delta_{\nu}(x;z) \!-\!
i\Delta_A(x;z)}{[(\frac{D-1}2)^2 \!-\! \nu^2] H^2} \Biggr] & = & -
\frac{\frac{\partial}{\partial \nu} \, i\Delta_{\nu}(x;z) }{(D \!-\!1) H^2}
\Biggl\vert_{\nu = (\frac{D-1}2)} \; , \qquad \\
& \equiv & -\frac{i\Delta_{A'}(x;z)}{(D \!-\!1) H^2} \; . \label{DeltaA'}
\end{eqnarray}
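The L'H\^opital step in (\ref{DeltaA'}) can be checked numerically on any smooth stand-in for the $\nu$-dependence; a minimal sketch with $f = \sin$:

```python
# Numerical check of the L'Hopital step (\ref{DeltaA'}): for smooth f(nu),
#   lim [f(nu) - f(nu0)] / ([nu0^2 - nu^2] H^2) = -f'(nu0)/((D-1) H^2),
# with nu0 = (D-1)/2.  Here f = sin is a stand-in for the nu-dependence.
import math

D, H = 7.0, 1.3
nu0 = (D - 1)/2

eps = 1e-6
nu = nu0 + eps
ratio = (math.sin(nu) - math.sin(nu0))/((nu0**2 - nu**2)*H**2)
expected = -math.cos(nu0)/((D - 1)*H**2)
assert abs(ratio - expected) < 1e-4*abs(expected)
```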
Hence the convolution of two $A$-type propagators gives,
\begin{equation}
-i \!\! \int_{V} \! d^Dx' \sqrt{-g(x')} \, i\Delta_A(x;x') i\Delta_A(x';z)
= -\frac{i\Delta_{A'}(x;z)}{(D \!-\!1) H^2} + \frac{\mathcal{S}_A(x;z)}{
(D \!-\!1) H^2} \; ,
\end{equation}
where the surface term is,
\begin{equation}
\mathcal{S}_A(x;z) \!\equiv \!\!
\int_{\partial V} \!\!\!\!\!\! d^{D-1}\!x'_{\rho} \! \sqrt{-g'}
g^{\prime \rho\sigma} \Bigl[i\Delta_{A'}(x';z) \partial_{\sigma}' G(x;x')
\!-\! G(x;x') \partial_{\sigma}' i\Delta_{A'}(x';z) \Bigr] .
\end{equation}
Like the $A$ propagator, the $A'$ propagator breaks de Sitter invariance.
The simplest way to see this is by differentiating relation (\ref{nuprop})
with respect to $\nu$ and then setting $\nu = (D-1)/2$,
\begin{eqnarray}
0 & = & \frac{\partial}{\partial \nu} \Biggl\{ \Biggl[\square_x + \Bigl[\nu^2 -
\Bigl(\frac{D\!-\!1}2\Bigr)^2\Bigr] H^2 \Biggr] i\Delta_{\nu}(x;z)
\Biggr\}_{\nu = (\frac{D-1}2)} \; , \qquad \\
& = & \square_x i\Delta_{A'}(x;z) + (D\!-\!1) H^2 i\Delta_A(x;z) \; .
\qquad \label{BoxA}
\end{eqnarray}
Because $i\Delta_A(x;z) = A(y) + k \ln(a_x a_z)$ has a de Sitter breaking
term it is clear that $i\Delta_{A'}(x;z)$ must as well. From relation
(\ref{aID}) we infer,
\begin{equation}
i\Delta_{A'}(x;z) = \mathcal{A}\Bigl(y(x;z)\Bigr) + {\rm const} \times
\ln(a_x a_z) + \frac12 k \ln^2(a_x a_z) \; . \label{A'form}
\end{equation}
We do not require the coefficient of the $\ln(a_x a_z)$ term but the series
expansion for $\mathcal{A}(y)$ is,
\begin{eqnarray}
\lefteqn{\mathcal{A}(y) = \frac{H^{D-2}}{(4\pi)^{\frac{D}2}} \,
\Bigl(\frac{4}{y}\Bigr)^{\frac{D}2 - 2} \!\!\times 2 \Bigl(\frac{D \!-\! 1}{D
\!-\! 4}\Bigr) \Gamma\Bigl(\frac{D}2 \!-\! 1\Bigr) - \frac{H^{D-2}}{(4\pi)^{
\frac{D}2}} \sum_{n=0}^{\infty} \Biggl\{ \frac{(\frac14 y)^{n -\frac{D}2 +3}}{
n \!-\! \frac{D}2 \!+\!3} } \nonumber \\
& & \hspace{0cm} \times \frac{\Gamma(n\!+\!\frac{D}2 \!+\! 2)}{(n \!+\! 2)!}
\Biggl[\psi\Bigl(2 \!-\! \frac{D}2\Bigr) \!-\! \psi\Bigl(\frac{D}2 \!-\! 1
\Bigr) \!+\! \psi\Bigl(n \!+\! \frac{D}2 \!+\! 2\Bigr) \!-\! \psi\Bigl(n \!-\!
\frac{D}2 \!+\! 3\Bigr)\Biggr] \nonumber \\
& & \hspace{-.7cm} - \frac{(\frac14 y)^{n+1}}{n \!+\! 1} \!\times\!
\frac{\Gamma(n \!+\! D)}{\Gamma(n \!+\! \frac{D}2 \!+\! 1)}
\Biggl[ \psi\Bigl(2 \!-\! \frac{D}2 \Bigr) \!-\! \psi\Bigl(\frac{D}2 \!-\! 1
\Bigr) \!+\! \psi(n \!+\! D) \!-\! \psi(n \!+\! 1)\Biggr] \!\Biggr\} . \qquad
\label{calA}
\end{eqnarray}
One can hardly fail to notice the similarity between the series expansion
(\ref{gamma}) for $\gamma(y)$ and the expansion (\ref{calA}) we have just
given for the de Sitter invariant part of the $A'$ propagator. The relation
between them is,
\begin{equation}
\mathcal{A}(y) = \frac14 \Bigl(\frac{D \!-\! 3}{D \!-\! 1}\Bigr)
I\Bigl[(4 y \!-\! y^2) \gamma' + (D\!-\!1) (2 \!-\! y) \gamma\Bigr]
-\frac12 (D \!-\!2) I[B] + {\rm const} \; . \label{calAgam}
\end{equation}
It is tedious but straightforward to check (\ref{calAgam}) using the
series expansions, but a simpler way of recognizing it is to act the scalar
d'Alembertian on both sides. In view of (\ref{BoxA}) the left hand side gives,
\begin{equation}
\frac{\square}{H^2} \mathcal{A}(y) = -(D\!-\! 1) A(y) + {\rm const} \; .
\end{equation}
To compute the right hand side we need the lemma,
\begin{equation}
(2\!-\!y) (4y \!-\! y^2) \gamma' + (4y \!-\! y^2) \gamma + D (2 \!-\! y)^2
\gamma = 2 (D \!-\! 1) I\Bigl[ (2 \!-\! y) B'\Bigr] + {\rm const} \; .
\label{lemma}
\end{equation}
This follows from differentiation with respect to $y$ and using the
equation (\ref{gameqn}) for $\gamma(y)$. Now act $\square/H^2$ on the first
term on the right hand side of (\ref{calAgam}), then use the $\gamma$
equation (\ref{gameqn}), and finally relations (\ref{BID2}) and (\ref{lemma}),
\begin{eqnarray}
\lefteqn{\frac{\square}{H^2} I\Bigl[(4 y \!-\! y^2) \gamma' \!+\! (D\!-\!1)
(2 \!-\!y) \gamma\Bigr] = (4y \!-\!y^2) \Bigl[(4y \!-\! y^2) \gamma'' \!+\!
(D\!+\!1) (2 \!-\! y) \gamma' } \nonumber \\
& & \hspace{2.5cm} - (D\!-\!1) \gamma\Bigr] + D (2 \!-\!y)
\Bigl[(4 y \!-\! y^2) \gamma' \!+\! (D \!-\!1) (2 \!-\!y) \gamma\Bigr]
\; , \qquad \\
& & \hspace{-.3cm} = (D \!-\!1) \Biggl\{(2 \!-\! y) (4 y \!-\! y^2) \gamma'
\!+\! (4 y \!-\! y^2) \gamma \!+\! D (2 \!-\! y)^2 \gamma \!+\! 2 (4 y \!-\!
y^2) B'\Biggr\} , \qquad \\
& & \hspace{-.3cm} = 2 (D \!-\!1) \Biggl\{(D \!-\!1) I\Bigl[(2 \!-\! y) B'
\Bigr] \!-\! (D \!-\! 2) (2 \!-\! y) B' \!+\! {\rm const} \Biggr\} . \qquad
\end{eqnarray}
Acting on the right hand side of (\ref{calAgam}) and using identities
(\ref{BID2}), (\ref{BCID}) and (\ref{ABC}) proves the relation,
\begin{eqnarray}
\lefteqn{\frac{\square}{H^2} \Biggl\{\frac14 \Bigl(\frac{D \!-\! 3}{D \!-\! 1}
\Bigr) I\Bigl[(4 y \!-\! y^2) \gamma' \!+\! (D\!-\!1) (2 \!-\!y) \gamma\Bigr] -
\frac12 (D\!-\!2) I[B] \Biggr\} } \nonumber \\
& & \hspace{.5cm} = \frac12 (D \!-\!3) (D\!-\! 1) I\Bigl[ (2 \!-\!y) B'\Bigr]
- \frac12 (D\!-\!3) (D\!-\!2) (2 \!-\! y) B \nonumber \\
& & \hspace{7cm} - (D\!-\!2) (2 \!-\! y) B + {\rm const} \; , \qquad \\
& & \hspace{.5cm} = \frac12 (D \!-\!3) (D \!-\!1) I[B] -\frac12 (D\!-\!1)
(2 \!-\! y) B + {\rm const} \; , \qquad \\
& & \hspace{.5cm} = -(D \!-\! 1) A + {\rm const} \; .
\end{eqnarray}
The point of enduring all this analysis is that we can now recognize the
``Integral Term'' of the invariant propagator (\ref{invprop}) as a
collection of propagators plus the $\ln^2(a_x a_z)$ term of expression
(\ref{logsq}),
\begin{eqnarray}
\mathcal{I}(x;z) & \equiv & \frac1{4 (D\!-\!1) H^2} \, I\Biggl[- (4 y \!-\!
y^2) \gamma' \!-\! (D\!-\!1) (2 \!-\! y) \gamma \!+\! 2 (D \!-\!1) B\Biggr] ,
\label{invInt0} \\
& = & -\frac{\mathcal{A}}{(D \!-\! 3) H^2} - \Biggl[\frac{C \! - \! A}{(D
\!-\! 3)^2 H^2} \Biggr] + {\rm const} \; , \qquad \\
& = & - \frac{i\Delta_{A'}(x;z)}{(D \!-\!3) H^2} - \Biggl[\frac{i\Delta_C(x;z)
\!-\! i\Delta_A(x;z)}{(D \!-\! 3)^2 H^2} \Biggr] + \frac{k \ln^2(a_x a_z)}{2
(D \!-\!3) H^2} \nonumber \\
& & \hspace{5.5cm} + {\rm const} \times \ln(a_x a_z) + {\rm const}
\; . \qquad \label{invInt}
\end{eqnarray}
Note that the two unknown constants are irrelevant because they drop
out when one differentiates with respect to $x^{\mu}$ and $z^{\nu}$ to
get the propagator.
We can make contact between the Integral Term (\ref{invInt}) of the invariant
propagator and the Integral Term (\ref{Integral}) of the transformed
propagator by expressing the propagators as convolution integrals,
\begin{eqnarray}
\lefteqn{-\frac{i\Delta_{A'}(x;z)}{(D \!-\!3) H^2} =
\Bigl(\frac{D \!-\! 1}{D \!-\! 3}\Bigr) \!\! \int_{V} \!\!
d^Dx' \!\sqrt{-g(x')} \, G(x;x') i\Delta_A(x';z) - \frac{\mathcal{S}_A(x;z)}{
(D \!-\!3) H^2} \, , \qquad } \label{A'term} \\
\lefteqn{-\Biggl[\frac{i\Delta_C(x;z) \!-\! i\Delta_A(x;z)}{(D \!-\! 3)^2 H^2}
\Biggr] = - \!\! \int_{V} \!\! d^Dx' \sqrt{-g(x')} \,
G(x;x') \frac{i\Delta_C(x';z)}{D \!-\!3} } \nonumber \\
& & - \frac{\mathcal{S}_C(x;z)}{2 (D \!-\!3)^2 H^2} - \!
\int_{V} \!\! d^Dz' \sqrt{-g(z')} \, G(z;z') \frac{i\Delta_C(x;z')}{D \!-\!3}
- \frac{\mathcal{S}_C(z;x)}{2 (D \!-\!3)^2 H^2} \, . \qquad \label{Cterm}
\end{eqnarray}
Now break up the prefactor of (\ref{A'term}) as,
\begin{equation}
\Bigl(\frac{D \!-\! 1}{D \!-\! 3}\Bigr) = 1 + \frac1{D \!-\! 3} +
\frac1{D \!-\!3} \; ,
\end{equation}
and combine the convolutions multiplying the last two factors with the
convolutions of (\ref{Cterm}) to produce the combination $i\Delta_C -
i\Delta_A$ that can be recognized as another convolution,
\begin{eqnarray}
\lefteqn{-\frac{i\Delta_{A'}(x;z)}{(D \!-\!3) H^2} -\Biggl[\frac{i\Delta_C(x;z)
\!-\! i\Delta_A(x;z)}{(D \!-\! 3)^2 H^2} \Biggr] } \nonumber \\
& & = -i \!\! \int_{V} \!\! d^Dx' \sqrt{-g(x')} \, i\Delta_A(x;x')
i\Delta_A(x';z) \nonumber \\
& & \hspace{1cm} + i \!\! \int_{V} \!\! d^Dx' \sqrt{-g(x')} \, i\Delta_A(x;x')
\Biggl[ \frac{ i\Delta_C(z;x') \!-\! i\Delta_A(z;x')}{D -3} \Biggr]
\nonumber \\
& & \hspace{1cm} + i \!\! \int_{V} \!\! d^Dz' \sqrt{-g(z')} \, i\Delta_A(z;z')
\Biggl[ \frac{ i\Delta_C(x;z') \!-\! i\Delta_A(x;z')}{D -3} \Biggr]
\nonumber \\
& & \hspace{4cm} - \frac{\mathcal{S}_A(x;z)}{(D \!-\!3) H^2}
- \frac{\mathcal{S}_C(x;z)}{2 (D \!-\!3)^2 H^2} - \frac{\mathcal{S}_C(z;x)}{
2 (D \!-\!3)^2 H^2} \; , \qquad \\
& & = -i \!\! \int_{V} \!\! d^Dx' \sqrt{-g(x')} \, i\Delta_A(x;x')
i\Delta_A(x';z) \nonumber \\
& & \hspace{.5cm} + 4 H^2 \!\! \int_{V} \!\! d^Dx' \sqrt{-g(x')} \,
i\Delta_A(x;x') \!\! \int_{V} \!\! d^Dz' \sqrt{-g(z')} \, i\Delta_A(z;z') \,
i\Delta_C(x';z') \nonumber \\
& & \hspace{.5cm} - \!\! \int_{V} \!\! d^Dx' \sqrt{-g(x')} \, G(x;x')
\frac{\mathcal{S}_C(z;x')}{D \!-\! 3}
- \!\! \int_{V} \!\! d^Dz' \sqrt{-g(z')} \, G(z;z')
\frac{\mathcal{S}_C(x;z')}{D \!-\! 3} \nonumber \\
& & \hspace{4cm} - \frac{\mathcal{S}_A(x;z)}{(D \!-\!3) H^2} - \frac{
\mathcal{S}_C(x;z)}{ 2 (D \!-\!3)^2 H^2} - \frac{\mathcal{S}_C(z;x)}{2
(D \!-\! 3)^2 H^2} \; . \qquad
\end{eqnarray}
We obtain the desired relation by adding the Integral Term (\ref{Integral})
of the transformed propagator to the $\ln^2(a_x a_z)$ contribution
(\ref{logsq}) from the Other Term (and some pieces which drop out when
differentiated with respect to $x^{\mu}$ and $z^{\nu}$),
\begin{eqnarray}
\lefteqn{\overline{\mathcal{I}}(x;z) + \frac{k \ln^2(a_x a_z)}{2 (D\!-\!3) H^2}
+ {\rm const} \times \ln(a_x a_z) + {\rm const} } \nonumber \\
& & \hspace{1.5cm}=\mathcal{I}(x;z) + \frac{\mathcal{S}_A(x;z)}{(D \!-\!3) H^2}
+ \Biggl[\frac{\mathcal{S}_C(x;z) \!+\! \mathcal{S}_C(z;x)}{2 (D\!-\!3)^2 H^2}
\Biggr] \nonumber \\
& & \hspace{2.5cm} + \!\! \int_{V} \!\! d^Dx' \sqrt{-g(x')} \, G(x;x')
\Biggl[\mathcal{S}_B(z;x') \!+\! \frac{\mathcal{S}_C(z;x')}{D \!-\! 3}
\Biggr] \nonumber \\
& & \hspace{3.5cm} + \!\! \int_{V} \!\! d^Dz' \sqrt{-g(z')} \, G(z;z')
\Biggl[\mathcal{S}_B(x;z') \!+\! \frac{\mathcal{S}_C(x;z')}{D \!-\! 3}
\Biggr] . \qquad \label{Ibar}
\end{eqnarray}
\section{Determining the Homogeneous Part}
The purpose of this section is to show that we can make the transformed
propagator agree with the invariant one by correctly choosing the homogeneous
part of the full gauge parameter $\theta[A](x)$. We begin by summarizing the
relevant results of the previous two sections concerning the gauge parameter
$\overline{\theta}[A](x)$, given in (\ref{thetabar}), which enforces Lorentz
gauge but not de Sitter invariance. The resulting propagator agrees with the
invariant one (\ref{dSprop}-\ref{gamma}) up to three surface terms which we
denote as ``$A$-type,'' ``$B$-type'' and ``$C$-type'' according to the mode
functions which they involve. We then exhibit a homogeneous gauge parameter
$\Delta \theta[A](x)$, depending upon $A_0$, which can be used to absorb the
$B$-type and $C$-type surface terms. The section closes by deriving a
homogeneous gauge parameter $\delta \theta[A](x)$, depending upon $A_i$,
which absorbs the $A$-type surface terms and results in complete agreement
with the invariant propagator.
\subsection{Summary of Previous Results}
Our goal is to construct a functional change of variables that is also a
gauge transformation,
\begin{equation}
A_{\mu}'(x) = A_{\mu}(x) - \partial_{\mu} \theta[A](x) \; .
\end{equation}
We want the field-dependent gauge parameter $\theta[A](x)$ to do two things:
\begin{enumerate}
\item{Make the field $A_{\mu}'(x)$ obey Lorentz gauge; and}
\item{Make the propagator associated with $A_{\mu}'(x)$ agree with the
unique, de Sitter invariant solution of the Lorentz gauge propagator
equation \cite{TW7}.}
\end{enumerate}
The first condition implies a second order differential equation for
$\theta[A](x)$,
\begin{equation}
\sqrt{-g} \, \square \theta = \partial_{\mu} \Bigl[\sqrt{-g} g^{\mu\nu}
A_{\nu}\Bigr] \; . \label{condition}
\end{equation}
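The step from the gauge condition to (\ref{condition}) is linear and can be checked symbolically: the Lorentz-gauge violation of $A_{\mu} - \partial_{\mu}\theta$ is $\partial_{\mu}[\sqrt{-g}\, g^{\mu\nu} A_{\nu}] - \sqrt{-g}\, \square \theta$, which vanishes exactly when (\ref{condition}) holds. A two-coordinate sympy sketch with a conformally flat metric:

```python
# Symbolic check that A'_mu = A_mu - d_mu theta obeys Lorentz gauge exactly
# when sqrt(-g) Box theta = d_mu [ sqrt(-g) g^{mu nu} A_nu ]  (\ref{condition}).
# Conformally flat metric, D = 4, fields depending on eta and one x.
import sympy as sp

eta, x1 = sp.symbols('eta x1')
a = sp.Function('a')(eta)
A = [sp.Function('A0')(eta, x1), sp.Function('A1')(eta, x1)]
th = sp.Function('theta')(eta, x1)

coords = [eta, x1]
sqrtg = a**4
ginv = [-1/a**2, 1/a**2]

def div(V):
    """d_mu [ sqrt(-g) g^{mu nu} V_nu ]"""
    return sum(sp.diff(sqrtg*ginv[m]*V[m], coords[m]) for m in range(2))

box_th = div([sp.diff(th, c) for c in coords])/sqrtg
violation = div([A[m] - sp.diff(th, coords[m]) for m in range(2)])
# the violation is exactly div(A) - sqrt(-g) Box theta, so it vanishes
# once (\ref{condition}) is imposed
assert sp.expand(violation - (div(A) - sqrtg*box_th)) == 0
```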
Of course this only defines $\theta[A](x)$ up to a term which is
annihilated by the scalar d'Alembertian. Because propagators obey Feynman
boundary conditions we took the inhomogeneous solution to be the convolution
of $-i$ times the scalar propagator ($G(x;z) \equiv -i \times i\Delta_A(x;z)$)
with the right hand side of (\ref{condition}),
\begin{equation}
\overline{\theta}[A](x) \equiv \int_{V} \!\! d^Dx' \, G(x;x')
\frac{\partial}{\partial x^{\prime \rho}} \Bigl[ \sqrt{-g(x')} \,
g^{\rho\sigma}(x') A_{\sigma}(x')\Bigr] \; . \label{thbar}
\end{equation}
The result of just performing this transformation defines a field,
\begin{equation}
\overline{A}_{\mu}(x) \equiv A_{\mu}(x) - \partial_{\mu}
\overline{\theta}[A](x) \; , \label{partial}
\end{equation}
which obeys the Lorentz gauge condition but whose propagator does not quite
agree with the invariant one.
We decomposed the propagator of $\overline{A}_{\mu}(x)$ into the double
gradient of an ``Integral Term'' (\ref{Integral}) and an ``Other Term''
(\ref{Other}),
\begin{equation}
\Bigl\langle \Omega \Bigl\vert T^*\Bigl[\overline{A}_{\mu}(x) \overline{A
}_{\nu}(z) \Bigr] \Bigr\vert \Omega \Bigr\rangle = \frac{\partial}{\partial
x^{\mu}} \frac{\partial}{\partial z^{\nu}} \, \overline{\mathcal{I}}(x;z) +
\Bigl[\mbox{}_{\mu} \overline{\mathcal{O}}_{\nu}\Bigr](x;z) \; .
\end{equation}
The invariant propagator can be broken up in similar fashion (\ref{invprop}),
\begin{equation}
i\Bigl[\mbox{}_{\mu} \Delta^{\rm dS}_{\nu}\Bigr](x;z)= \frac{\partial}{\partial
x^{\mu}} \frac{\partial}{\partial z^{\nu}} \, \mathcal{I}(x;z) - \frac1{2 H^2}
B\Bigl(y(x;z)\Bigr) \frac{\partial^2 y(x;z)}{\partial x^{\mu} \partial z^{\nu}}
\; .
\end{equation}
It is desirable to shift the double gradient of a spatially constant term,
\begin{equation}
\Delta \overline{\mathcal{I}}(x;z) \equiv \frac{k \ln^2(a_x a_z)}{2 (D \!-\!3)
H^2} + {\rm const} \times \ln(a_x a_z) + {\rm const} \; ,
\end{equation}
from $[\mbox{}_{\mu} \overline{\mathcal{O}}_{\nu}](x;z)$ to $\overline{
\mathcal{I}}(x;z)$. When this is done, the difference between what we want the
full transformation to produce and what the $\overline{\theta}[A](x)$
transformation actually gives is,
\begin{eqnarray}
\lefteqn{\Bigl[\mbox{}_{\mu} \mathcal{O}_{\nu}\Bigr] -
\Bigl[\mbox{}_{\mu} \overline{\mathcal{O}}_{\nu}\Bigr] +
\frac{\partial^2 \Delta \overline{\mathcal{I}}}{ \partial x^{\mu}
\partial z^{\nu}} = - \frac{a_z \delta^0_{\nu}}{D \!-\!3} \frac{\partial
\mathcal{S}_C(x;z)}{\partial H x^{\mu}} - \frac{a_x \delta^0_{\mu}}{
D \!-\!3} \frac{\partial \mathcal{S}_C(z;x)}{\partial H z^{\nu}} \; , \qquad}
\label{DelO} \\
\lefteqn{\mathcal{I}(x;z) - \overline{\mathcal{I}}(x;z) -
\Delta \overline{\mathcal{I}}(x;z)
= - \frac{\mathcal{S}_A(x;z)}{(D \!-\!3) H^2}
- \Biggl[\frac{\mathcal{S}_C(x;z) \!+\! \mathcal{S}_C(z;x)}{2 (D\!-\!3)^2 H^2}
\Biggr] } \nonumber \\
& & \hspace{2.5cm} - \!\! \int_{V} \!\! d^Dx' \sqrt{-g(x')} \, G(x;x')
\Biggl[\mathcal{S}_B(z;x') \!+\! \frac{\mathcal{S}_C(z;x')}{D \!-\! 3}
\Biggr] \nonumber \\
& & \hspace{3.5cm} - \!\! \int_{V} \!\! d^Dz' \sqrt{-g(z')} \, G(z;z')
\Biggl[\mathcal{S}_B(x;z') \!+\! \frac{\mathcal{S}_C(x;z')}{D \!-\! 3}
\Biggr] . \qquad \label{DelI}
\end{eqnarray}
Each of the surface integrals, $\mathcal{S}_F(x;z)$, consists of a Dirichlet
and a Neumann contribution,
\begin{equation}
\mathcal{S}_F(x;z) \equiv \!\! \int_{\partial V}\!\!\!\!\!\! d^{D-1}\!x'_{\rho}
\!\sqrt{-g'} g^{\prime \rho\sigma} \Bigl[F(x';z) \partial_{\sigma}' G(x;x')
\!-\! G(x;x') \partial_{\sigma}' F(x';z) \Bigr] . \label{surfF}
\end{equation}
The functions $F(x;z)$ associated with the three integrals are,
\begin{eqnarray}
\mathcal{S}_A(x;z) & \Longrightarrow & F(x';z) = i\Delta_{A'}(x';z) \; , \\
\mathcal{S}_B(x;z) & \Longrightarrow & F(x';z) = \frac{a_{x'}}{a_z}
i\Delta_B(x';z) \; , \\
\mathcal{S}_C(x;z) & \Longrightarrow & F(x';z) = i\Delta_C(x';z) \; .
\end{eqnarray}
Note that each $\mathcal{S}_F(x;z)$ obeys the homogeneous equation with
respect to its first argument $x^{\mu}$. The integral $\mathcal{S}_A(x;z)$
obeys the homogeneous equation with respect to $z^{\mu}$ as well, and it is
also symmetric under interchange of $x^{\mu}$ and $z^{\mu}$.
\subsection{Absorbing the $B$-type and $C$-type Surface Terms}
Rather than absorb all the surface terms at once it is simpler to
first cancel those of the ``Other Term,'' which must also reduce those
that remain in the ``Integral Term'' to pure $A$-type (homogeneous on
both $x^{\mu}$ and $z^{\mu}$). We accordingly seek a homogeneous gauge
parameter $\Delta \theta[A](x)$ which cancels (\ref{DelO}). Because
this will also change the ``Integral Term'' we write out the full
transformed field,
\begin{equation}
\widehat{A}_{\mu}(x) = \overline{A}_{\mu}(x) -\frac{\partial}{\partial x^{\mu}}
\Delta \theta[A](x) \; .
\end{equation}
The propagator of $\widehat{A}$ is,
\begin{eqnarray}
\lefteqn{\Bigl\langle \Omega \Bigl\vert T^*\Bigl[\widehat{A}_{\mu}(x)
\widehat{A}_{\nu}(z) \Bigr] \Bigr\vert \Omega \Bigr\rangle =
\Bigl\langle \Omega \Bigl\vert T^*\Bigl[ \overline{A}_{\mu}(x)
\overline{A}_{\nu}(z) \Bigr] \Bigr\vert \Omega \Bigr\rangle } \nonumber \\
& & - \frac{\partial}{\partial x^{\mu}} \Bigl\langle \Omega \Bigl\vert
T^*\Bigl[\Delta \theta(x) A_{\nu}(z) \Bigr] \Bigr\vert \Omega \Bigr\rangle
- \frac{\partial}{\partial z^{\nu}} \Bigl\langle \Omega \Bigl\vert T^*\Bigl[
A_{\mu}(x) \Delta \theta(z) \Bigr] \Bigr\vert \Omega \Bigr\rangle \nonumber \\
& & \hspace{1.3cm} + \frac{\partial}{\partial x^{\mu}} \frac{\partial}{\partial
z^{\nu}} \Bigl\langle \Omega \Bigl\vert T^*\Bigl[\Delta \theta(x)
\overline{\theta}(z) \!+\! \overline{\theta}(x) \Delta \theta(z) \!+\!
\Delta \theta(x) \Delta \theta(z) \Bigr] \Bigr\vert \Omega \Bigr\rangle
\; . \qquad \label{fullprop}
\end{eqnarray}
The terms on the final line of (\ref{fullprop}) must belong to the Integral
Term (\ref{DelI}), and most of the middle line of (\ref{fullprop}) must
similarly belong to the Other Term (\ref{DelO}). If we assume $\Delta
\theta[A](x)$ depends only upon $A_0$ then the break is clean and we have,
\begin{eqnarray}
\lefteqn{ \Bigl\langle \Omega \Bigl\vert T^*\Bigl[\Delta \theta(x) A_0(z)
\Bigr] \Bigr\vert \Omega \Bigr\rangle = \frac{a_z \mathcal{S}_C(x;z)}{(D \!-\!
3) H} \; , } \label{con1} \\
\lefteqn{\Bigl\langle \Omega \Bigl\vert T^*\Bigl[\Delta \theta(x)
\overline{\theta}(z) \!+\! \overline{\theta}(x) \Delta \theta(z) \!+\!
\Delta \theta(x) \Delta \theta(z) \Bigr] \Bigr\vert \Omega \Bigr\rangle
= \Bigl( A\!\!-\!\!{\rm type\ Terms} \Bigr) } \nonumber \\
& & - \Biggl[\frac{\mathcal{S}_C(x;z) \!+\! \mathcal{S}_C(z;x)}{2 (D\!-\!3)^2
H^2} \Biggr] - \!\! \int_{V} \!\! d^Dx' \sqrt{-g(x')} \, G(x;x')
\Biggl[\mathcal{S}_B(z;x') \!+\! \frac{\mathcal{S}_C(z;x')}{D \!-\! 3}
\Biggr] \nonumber \\
& & \hspace{3.5cm} - \!\! \int_{V} \!\! d^Dz' \sqrt{-g(z')} \, G(z;z')
\Biggl[\mathcal{S}_B(x;z') \!+\! \frac{\mathcal{S}_C(x;z')}{D \!-\! 3}
\Biggr] . \qquad \label{con2}
\end{eqnarray}
It is straightforward to see that relation (\ref{con1}) fixes the homogeneous
part of the gauge parameter to be,
\begin{equation}
\Delta \theta(x) = \frac{-1}{(D \!-\!3) H} \! \int_{\partial V} \!\!\!
d^{D-1}\!x_{\rho}' \sqrt{-g'} g^{\prime \rho\sigma} \Biggr[ \frac{A_0(x')}{
a_{x'}} \, \partial_{\sigma}' G(x;x') - G(x;x') \partial_{\sigma}'
\frac{A_0(x')}{a_{x'}} \Biggr] . \label{Deltheta}
\end{equation}
Combining (\ref{thbar}) and (\ref{Deltheta}) gives,
\begin{equation}
\Bigl\langle \Omega \Bigl\vert T^*\Bigl[\Delta \theta(x) \overline{\theta}(z)
\Bigr] \Bigr\vert \Omega \Bigr\rangle = \frac{-1}{(D \!-\!3) H} \! \int_{V}\!\!
d^Dz' \, G(z;z') \frac{\partial}{\partial z^{\prime 0}} \Bigl[ a_{z'}^{D-1}
\mathcal{S}_C(x;z')\Bigr] \; .
\end{equation}
The surface integral $\mathcal{S}_C(x;z')$ has the form (\ref{surfF}) with
the function $F(x';z') = i\Delta_C(x';z')$. Multiplying this by the factor
of $a_{z'}^{D-1}$ and taking the derivative gives an expression which we can
simplify using relations (\ref{ABC}) and (\ref{BCID}),
\begin{eqnarray}
\lefteqn{ \frac{\partial}{\partial z^{\prime 0}} \Bigl[ a_{z'}^{D-1}
i\Delta_C(x';z')\Bigr] = H a_{z'}^D \Biggl\{ (D \!-\!1) C - (2 \!-\!y) C'
+ 2 \frac{a_{x'}}{a_{z'}} \, C'\Biggr\} , } \\
& & = H a_{z'}^D \Biggl\{2 C + (D \!-\! 3) C -\frac12 (D \!-\!3)
(2 \!-\!y) B \nonumber \\
& & \hspace{5cm} + (D \!-\!3) \frac{a_{x'}}{a_{z'}} \, B - (2 \!-\! y) A' + 2
\frac{a_{x'}}{a_{z'}} \, A' \Biggr\} , \qquad \\
& & = H a_{z'}^D \Biggl\{2 C + (D \!-\! 3) \frac{a_{x'}}{a_{z'}} \, B\Biggr\}
+ a_{z'}^{D-1} \frac{\partial}{\partial z^{\prime 0}} \Bigl[ i\Delta_A(x';z')
\Bigr] \; .
\end{eqnarray}
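The first line of this simplification uses the identity $\partial y/\partial z^{\prime 0} = H a_{z'} [\, y - 2 + 2 a_{x'}/a_{z'}]$, whose two pieces give the $-(2\!-\!y) C'$ and $2 (a_{x'}/a_{z'}) C'$ terms. It again follows from the conformal-coordinate form of $y$ (an assumption of this sketch, since that form is defined elsewhere in the paper); a sympy check:

```python
# Check of d y/d eta_z = H a_z [ y - 2 + 2 a_x/a_z ], the identity behind
# the first line of the derivative simplification, assuming the
# conformal-coordinate form y = a_x a_z H^2 [ r^2 - (eta_x - eta_z)^2 ]
# with a = -1/(H eta).
import sympy as sp

ex, ez, r, H = sp.symbols('eta_x eta_z r H')
ax = -1/(H*ex)
az = -1/(H*ez)
y = ax*az*H**2*(r**2 - (ex - ez)**2)

lhs = sp.diff(y, ez)
rhs = H*az*(y - 2 + 2*ax/az)
assert sp.simplify(lhs - rhs) == 0
```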
The final term involving $i\Delta_A(x';z')$ gives rise to an $A$-type surface
term whose form we will work out in the next subsection. We can therefore
write,
\begin{eqnarray}
\lefteqn{\Bigl\langle \Omega \Bigl\vert T^*\Bigl[\Delta \theta(x)
\overline{\theta}(z) \Bigr] \Bigr\vert \Omega \Bigr\rangle
= \Bigl( A\!\!-\!\!{\rm type\ Terms} \Bigr) } \nonumber \\
& & \hspace{2cm} -\!\! \int_{V} \!\! d^Dz' \sqrt{-g(z')} \, G(z;z')
\Biggl[\mathcal{S}_B(x;z') \!+\! \frac2{D \!-\!3} \, \mathcal{S}_C(x;z')
\Biggr] . \qquad \label{mixed1}
\end{eqnarray}
Interchanging $x^{\mu}$ and $z^{\mu}$ gives,
\begin{eqnarray}
\lefteqn{\Bigl\langle \Omega \Bigl\vert T^*\Bigl[\overline{\theta}(x)
\Delta \theta(z) \Bigr] \Bigr\vert \Omega \Bigr\rangle
= \Bigl( A\!\!-\!\!{\rm type\ Terms} \Bigr) } \nonumber \\
& & \hspace{2cm} -\!\! \int_{V} \!\! d^Dx' \sqrt{-g(x')} \, G(x;x')
\Biggl[\mathcal{S}_B(z;x') \!+\! \frac2{D \!-\!3} \, \mathcal{S}_C(z;x')
\Biggr] . \qquad \label{mixed2}
\end{eqnarray}
The term with two $\Delta \theta$'s yields a surface integral of surface
integrals, which we can rewrite as a volume integral of surface integrals
using Green's second identity,
\begin{eqnarray}
\lefteqn{\Bigl\langle \Omega \Bigl\vert T^*\Bigl[\Delta \theta(x)
\Delta \theta(z) \Bigr] \Bigr\vert \Omega \Bigr\rangle = - \frac1{(D \!-\!3)^2
H^2} \!\! \int_{\partial V} \!\!\! d^{D-1}\!x_{\rho}' \sqrt{-g(x')} \,
g^{\rho\sigma}(x') } \nonumber \\
& & \hspace{3cm} \times \Biggl[ \mathcal{S}_C(z;x')
\frac{\partial}{\partial x^{\prime \sigma}} G(x;x') - G(x;x') \frac{\partial}{
\partial x^{\prime \sigma}} \mathcal{S}_C(z;x') \Biggr] \; , \qquad \\
& & \hspace{-.7cm} = \! \frac{-1}{(D \!-\!3)^2 H^2} \!\!\int_{V} \!\!\! d^D\!x'
\!\sqrt{\!-g(x')} \Bigl[ \mathcal{S}_C(z;x') \square_{x'} G(x;x') \!-\! G(x;x')
\square_{x'} \mathcal{S}_C(z;x')\Bigr] . \qquad
\end{eqnarray}
Of course we can use the defining relation $\sqrt{-g(x')} \, \square_{x'}
G(x;x') = \delta^D(x \!-\! x')$, and the quantity $\sqrt{-g(x')} \,
\square_{x'} \mathcal{S}_C(z;x')$ involves,
\begin{equation}
\sqrt{-g(x')} \, \square_{x'} i\Delta_C(z';x') = i\delta^D(x' \!-\! z')
+ 2(D\!-\!3) H^2 \sqrt{-g(x')} \, i\Delta_C(z';x') \; . \label{BoxC}
\end{equation}
The delta function in (\ref{BoxC}) gives another $A$-type Term whose form
we work out in the next subsection. Hence we have,
\begin{eqnarray}
\lefteqn{\Bigl\langle \Omega \Bigl\vert T^*\Bigl[\Delta \theta(x)
\Delta \theta(z) \Bigr] \Bigr\vert \Omega \Bigr\rangle =
\Bigl( A\!\!-\!\!{\rm type\ Terms} \Bigr) } \nonumber \\
& & \hspace{2cm} - \frac{\mathcal{S}_C(z;x)}{(D \!-\! 3)^2 H^2} +
\frac2{D \!-\!3} \! \int_{V} \!\! d^Dx' \sqrt{-g(x')} \, G(x;x')
\mathcal{S}_C(z;x') \; . \qquad
\end{eqnarray}
The result is symmetric in $x^{\mu}$ and $z^{\mu}$ so we can express it as,
\begin{eqnarray}
\lefteqn{\Bigl\langle \Omega \Bigl\vert T^*\Bigl[\Delta \theta(x)
\Delta \theta(z) \Bigr] \Bigr\vert \Omega \Bigr\rangle =
\Bigl( A\!\!-\!\!{\rm type\ Terms} \Bigr) -
\frac{\mathcal{S}_C(x;z) \!+\! \mathcal{S}_C(z;x)}{2 (D \!-\!3)^2 H^2} }
\nonumber \\
& & \hspace{-.7cm} + \int_{V} \!\! d^Dx' \! \sqrt{-g(x')} \, G(x;x')
\frac{\mathcal{S}_C(z;x')}{D \!-\! 3} + \!\! \int_{V} \!\! d^Dz' \!
\sqrt{-g(z')} \, G(z;z') \frac{\mathcal{S}_C(x;z')}{D \!-\! 3} \; .
\qquad \label{pure}
\end{eqnarray}
Combining expressions (\ref{mixed1}-\ref{mixed2}) with (\ref{pure})
gives the desired form (\ref{con2}),
\begin{eqnarray}
\lefteqn{\Bigl\langle \Omega \Bigl\vert T^*\Bigl[\Delta \theta(x)
\overline{\theta}(z) \!+\! \overline{\theta}(x) \Delta \theta(z) \!+\!
\Delta \theta(x) \Delta \theta(z) \Bigr] \Bigr\vert \Omega \Bigr\rangle
= \Bigl( A\!\!-\!\!{\rm type\ Terms} \Bigr) } \nonumber \\
& & - \Biggl[\frac{\mathcal{S}_C(x;z) \!+\! \mathcal{S}_C(z;x)}{2 (D\!-\!3)^2
H^2} \Biggr] - \!\! \int_{V} \!\! d^Dx' \sqrt{-g(x')} \, G(x;x')
\Biggl[\mathcal{S}_B(z;x') \!+\! \frac{\mathcal{S}_C(z;x')}{D \!-\! 3}
\Biggr] \nonumber \\
& & \hspace{3.5cm} - \!\! \int_{V} \!\! d^Dz' \sqrt{-g(z')} \, G(z;z')
\Biggl[\mathcal{S}_B(x;z') \!+\! \frac{\mathcal{S}_C(x;z')}{D \!-\! 3}
\Biggr] . \qquad
\end{eqnarray}
\subsection{Absorbing the $A$-type Surface Terms}
We should begin this section by clarifying precisely what the $A$-type
surface terms are. They reside entirely in the ``Integral Term,'' and
they consist of $\mathcal{S}_A/(D-3)H^2$ plus the $A$-type surface terms
induced by the gauge parameter $\Delta \theta[A]$. We first reduce
$\mathcal{S}_A$ to a pair of temporal surface terms, then derive
similar expressions for the $A$-type surface terms from $\Delta \theta[A]$.
This will motivate our construction of the final gauge parameter $\delta
\theta[A]$ which absorbs the $A$-type surface terms and gives full
agreement with the invariant propagator.
Recall that the surface integral $\mathcal{S}_A(x;z)$ is,
\begin{equation}
\mathcal{S}_A(x;z) \equiv \!\! \int_{\partial V}\!\!\!\!\!\! d^{D-1}\!x'_{\rho}
\!\sqrt{-g'} g^{\prime \rho\sigma} \Bigl[i\Delta_{A'}(x';z) \partial_{\sigma}'
G(x;x') \!-\! G(x;x') \partial_{\sigma}' i\Delta_{A'}(x';z) \Bigr] ,
\end{equation}
where $i\Delta_{A'}(x;z)$ is the derivative with respect to $\nu$ (evaluated
at $\nu = (D\!-\!1)/2$) of the Fourier mode sum,
\begin{eqnarray}
\lefteqn{i\Delta_{\nu}(x;z) = \int \!\! \frac{d^{D-1}k}{(2\pi)^{D-1}} \,
e^{i \vec{k} \cdot (\vec{x} - \vec{z})} \Biggl\{ \theta(x^0 \!-\! z^0)
u_{\nu}(x^0,k) u^*_{\nu}(z^0,k) } \nonumber \\
& & \hspace{6.4cm} + \theta(z^0 \!-\! x^0) u^*_{\nu}(x^0,k) u_{\nu}(z^0,k)
\Biggr\} . \qquad
\end{eqnarray}
Because $G(x;z)$ is $-i$ times the same mode sum (again evaluated at $\nu =
(D\!-\!1)/2$) we see that the surface terms at spatial infinity make no
contribution. One can therefore express $\mathcal{S}_A(x;z)$ as a Fourier
mode sum of temporal surface terms,
\begin{eqnarray}
\lefteqn{\mathcal{S}_A(x;z) = i \! \int \!\! \frac{d^{D-1}k}{(2\pi)^{D-1}} \,
e^{i \vec{k} \cdot (\vec{x} - \vec{z})} } \nonumber \\
& & \hspace{-.3cm} \times \Biggl\{ u_A^*(x^0,k) u_A^*(z^0,k) \times
\mathcal{F}(-k\eta_2) - u_A(x^0,k) u_A(z^0,k) \times \mathcal{F}^*(-k\eta_1)
\Biggr\} , \qquad
\end{eqnarray}
where $\eta_1$ and $\eta_2$ are the initial and final times, respectively,
and the function $\mathcal{F}(-k\eta)$ is,
\begin{eqnarray}
\lefteqn{\mathcal{F}(-k\eta) \equiv a^{D-2} \Biggl\{ \frac{\partial
u_{\nu}(\eta,k)}{\partial \nu} \frac{\partial u_{\nu}(\eta,k)}{\partial \eta}
- u_{\nu}(\eta,k) \frac{\partial^2 u_{\nu}(\eta,k)}{\partial \nu \partial \eta}
\Biggr\}_{\nu = \frac{D-1}2} \!\!\!\!\! , } \\
& & = \frac{\pi}{4 H a} \Biggl\{ \frac{\partial H^{(1)}_{\nu}(-k\eta)}{\partial
\nu} \frac{\partial H^{(1)}_{\nu}(-k\eta)}{\partial \eta} - H^{(1)}_{\nu}(-k
\eta) \frac{\partial^2 H^{(1)}_{\nu}(-k\eta)}{\partial \nu \partial \eta}
\Biggr\}_{\nu = \frac{D-1}2} \!\!\!\!\! . \qquad
\end{eqnarray}
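The equality of the two lines above can be checked numerically. The check assumes the mode functions referenced in (\ref{unu}) take the form $u_{\nu}(\eta,k) = \sqrt{\pi/(4 H a^{D-1})}\, H^{(1)}_{\nu}(-k\eta)$ up to an $\eta$-independent phase (an assumption here, since (\ref{unu}) lies outside this section); the $\eta$-dependent prefactor then drops out of the antisymmetric combination. A minimal sketch, with $D$, $H$, $k$ and $\eta$ arbitrary test values:

```python
import numpy as np
from scipy.special import hankel1

D, H, k, eta, h = 4, 1.0, 1.0, -2.0, 1e-4
nu = (D - 1) / 2.0
a = -1.0 / (H * eta)   # de Sitter scale factor in conformal time (eta < 0)

def u(nu_, eta_):
    # assumed mode function: u_nu = sqrt(pi/(4 H a^{D-1})) H^(1)_nu(-k eta)
    a_ = -1.0 / (H * eta_)
    return np.sqrt(np.pi / (4 * H * a_ ** (D - 1))) * hankel1(nu_, -k * eta_)

def d_nu(f, nu_, eta_):
    # central finite difference with respect to the index nu
    return (f(nu_ + h, eta_) - f(nu_ - h, eta_)) / (2 * h)

def d_eta(f, nu_, eta_):
    # central finite difference with respect to conformal time eta
    return (f(nu_, eta_ + h) - f(nu_, eta_ - h)) / (2 * h)

Hfun = lambda n, e: hankel1(n, -k * e)

# first line: a^{D-2} { du/dnu du/deta - u d^2u/(dnu deta) }
lhs = a ** (D - 2) * (
    d_nu(u, nu, eta) * d_eta(u, nu, eta)
    - u(nu, eta) * d_eta(lambda n, e: d_nu(u, n, e), nu, eta)
)
# second line: (pi/4Ha) { dH/dnu dH/deta - H d^2H/(dnu deta) }
rhs = (np.pi / (4 * H * a)) * (
    d_nu(Hfun, nu, eta) * d_eta(Hfun, nu, eta)
    - Hfun(nu, eta) * d_eta(lambda n, e: d_nu(Hfun, n, e), nu, eta)
)
rel_err = abs(lhs - rhs) / abs(rhs)
```

The cancellation of the prefactor derivatives is exact, so the residual is pure finite-difference noise.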
This function $\mathcal{F}(z)$ has the interesting property that it can be
related to the product of two Hankel functions, without any derivatives
with respect to the index or the argument \cite{TW9}. To see the relation
we define,
\begin{eqnarray}
\mathcal{E}_{\nu}(z) & \equiv & z \Bigl[ H^{(1)}_{\nu}(z)\Bigr]^2 \; , \\
\mathcal{G}_{\nu}(z) & \equiv & z \Biggl[ \partial_{\nu} H^{(1)}_{\nu}(z)
\partial_z H^{(1)}_{\nu}(z) - H^{(1)}_{\nu}(z) \partial_{\nu} \partial_z
H^{(1)}_{\nu}(z) \Biggr] .
\end{eqnarray}
Of course we have,
\begin{equation}
\mathcal{F}(-k\eta) = -\frac{\pi}{4} \times \mathcal{G}_{\nu}(-k\eta) \; ,
\end{equation}
and the relation to $\mathcal{E}_{\nu}$ is \cite{TW9},
\begin{equation}
\partial_z \mathcal{G}_{\nu}(z) = -\frac{2\nu}{z^2} \, \mathcal{E}_{\nu}(z)\; .
\end{equation}
The integration constant can be fixed using the asymptotic expansion for
large $z$ to give,
\begin{equation}
\mathcal{G}_{\nu}(z) = 2 \nu \!\! \int_{z}^{\infty} \!\! dz' \,
\frac{\mathcal{E}_{\nu}(z')}{z^{\prime 2}} \; .
\end{equation}
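The differential relation between $\mathcal{G}_{\nu}$ and $\mathcal{E}_{\nu}$ follows from differentiating the Bessel equation with respect to the index, and it is easy to confirm numerically with finite differences. A sketch (the values of $\nu$ and $z$ are arbitrary):

```python
import numpy as np
from scipy.special import hankel1, h1vp

def dnu_H(nu, z, h=1e-4):
    # d/dnu of H^(1)_nu(z) by central difference
    return (hankel1(nu + h, z) - hankel1(nu - h, z)) / (2 * h)

def dnu_dz_H(nu, z, h=1e-4):
    # mixed d^2/(dnu dz), using scipy's exact z-derivative h1vp
    return (h1vp(nu + h, z) - h1vp(nu - h, z)) / (2 * h)

def calG(nu, z):
    # G_nu(z) = z [ (dH/dnu)(dH/dz) - H d^2H/(dnu dz) ]
    return z * (dnu_H(nu, z) * h1vp(nu, z) - hankel1(nu, z) * dnu_dz_H(nu, z))

def calE(nu, z):
    # E_nu(z) = z [H^(1)_nu(z)]^2
    return z * hankel1(nu, z) ** 2

nu, z, dz = 1.5, 2.3, 1e-3
lhs = (calG(nu, z + dz) - calG(nu, z - dz)) / (2 * dz)   # dG/dz
rhs = -2 * nu / z ** 2 * calE(nu, z)
rel_err = abs(lhs - rhs) / abs(rhs)
```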
The key identity for $\Delta \theta[A]$ to produce $A$-type surface terms is,
\begin{eqnarray}
\lefteqn{\frac{\partial}{\partial x^{\prime 0}} \frac{\partial}{\partial
z^{\prime 0}} i\Delta_C(x';z') = \frac{i}{a_{x'}^{D-2}} \,
\delta^D(x' \!-\! z') } \nonumber \\
& & \hspace{1cm} + \frac{\partial^2 y(x';z')}{\partial x^{\prime 0}
\partial z^{\prime 0}} \, C'\Bigl(y(x';z')\Bigr) +
\frac{\partial y(x';z')}{\partial x^{\prime 0}}
\frac{\partial y(x';z')}{\partial z^{\prime 0}} \,
C''\Bigl(y(x';z')\Bigr) \; . \qquad \label{keyID}
\end{eqnarray}
The $A$-type surface terms come exclusively from the delta function term;
the other contributions produce $B$-type and $C$-type surface terms we have
already included. Note that, because one gets a $D$-dimensional delta
function whereas the initial and final surface integrals are only
$(D-1)$-dimensional, it is necessary to regulate $\Delta \theta[A]$ to make
the $A$-type surface term well-defined. An obvious regularization is to
integrate the initial and final time surfaces over a small range of duration
$\Delta \eta = 2 \epsilon$,
\begin{eqnarray}
\lefteqn{\Delta \theta_{\epsilon}(x) \equiv \frac1{2 \epsilon (D \!-\!3) H}
\Biggl[ \int_{\eta_2 -\epsilon}^{\eta_2 + \epsilon} \!\!\!\!\!\!\!\!\!\!
dx^{\prime 0} - \int_{\eta_1 -\epsilon}^{\eta_1 + \epsilon}
\!\!\!\!\!\!\!\!\!\! dx^{\prime 0}\Biggr] \int \!\! d^{D-1}\!\vec{x}' }
\nonumber \\
& & \hspace{1cm} \times a_{x'}^{D-2} \Biggl\{ \frac1{a_{x'}} \, A_0(x')
\frac{\partial}{\partial x^{\prime 0}} \, G(x;x') - G(x;x')
\frac{\partial}{\partial x^{\prime 0}} \Bigl[\frac1{a_{x'}} \, A_0(x')\Bigr]
\Biggr\} . \qquad
\end{eqnarray}
Let us now work out the $A$-type surface term from $\Delta \theta_{\epsilon}(x)
\times \overline{\theta}(z)$. The full expectation value is,
\begin{eqnarray}
\lefteqn{ \Bigl\langle \Omega \Bigl\vert T^*\Bigl[ \Delta \theta_{\epsilon}(x)
\overline{\theta}(z) \Bigr] \Bigr\vert \Omega \Bigr\rangle =
\Biggl[ \int_{\eta_2 -\epsilon}^{\eta_2 + \epsilon} \!\!\!\!\!\!\!\!\!\!
dx^{\prime 0} \!-\! \int_{\eta_1 -\epsilon}^{\eta_1 + \epsilon}
\!\!\!\!\!\!\!\!\!\! dx^{\prime 0}\Biggr]\! \int \!\! d^{D-1}\!\vec{x}'
\!\! \int_{V} \!\! d^Dz' \frac{a_{x'}^{D-2} G(z;z')}{2 \epsilon (D \!-\!3) H}
} \nonumber \\
& & \hspace{-.5cm} \times \frac{\partial}{\partial z^{\prime 0}}
\Biggl\{ a_{z'}^{D-1} i\Delta_C(x';z') \frac{\partial}{\partial x^{\prime 0}}
\, G(x;x') - a_{z'}^{D-1} G(x;x') \frac{\partial}{\partial x^{\prime 0}} \,
i\Delta_C(x';z')\Biggr\} . \qquad
\end{eqnarray}
However, we already accounted for most of this in the previous subsection;
it is only the delta function from using (\ref{keyID}) on the final surface
term which makes the new contribution we seek,
\begin{eqnarray}
\lefteqn{\Bigl\langle \Omega \Bigl\vert T^*\Bigl[ \Delta \theta_{\epsilon}(x)
\overline{\theta}(z) \Bigr] \Bigr\vert \Omega \Bigr\rangle_{A-{\rm type}} =
\frac{-i}{2 \epsilon (D \!-\!3) H}
\Biggl[ \int_{\eta_2 -\epsilon}^{\eta_2 + \epsilon} \!\!\!\!\!\!\!\!\!\!
dx^{\prime 0} \!-\! \int_{\eta_1 -\epsilon}^{\eta_1 + \epsilon}
\!\!\!\!\!\!\!\!\!\! dx^{\prime 0}\Biggr] } \nonumber \\
& & \hspace{3.5cm} \times \!\! \int \!\! d^{D-1}\!\vec{x}'
\!\! \int_{V} \!\! d^Dz' \, G(z;z') a_{z'}^{D-1} G(x;x')
\delta^D(x' \!-\! z') \; . \qquad
\end{eqnarray}
The integration over $z^{\prime \mu}$ is not affected by our regularization
of $\Delta \theta_{\epsilon}(x)$,
\begin{equation}
\int \!\! d^Dz' \equiv \int_{\eta_1}^{\eta_2} \!\!\!\! dz^{\prime 0} \!
\int \!\! d^{D-1}\vec{z}' \; .
\end{equation}
The delta function can therefore be saturated over only half of the
$x^{\prime 0}$ range. Taking the unregulated limit gives,
\begin{eqnarray}
\lefteqn{\lim_{\epsilon \rightarrow 0} \Bigl\langle \Omega \Bigl\vert
T^*\Bigl[ \Delta \theta_{\epsilon}(x) \overline{\theta}(z) \Bigr] \Bigr\vert
\Omega \Bigr\rangle_{A-{\rm type}} } \nonumber \\
& & \hspace{.5cm} = \lim_{\epsilon \rightarrow 0}
\frac{-i}{2 \epsilon (D \!-\!3) H} \Biggl[ \int_{\eta_2 -\epsilon}^{\eta_2}
\!\!\!\!\!\!\!\!\!\! dx^{\prime 0} \!-\! \int_{\eta_1}^{\eta_1 + \epsilon}
\!\!\!\!\!\!\!\!\!\! dx^{\prime 0}\Biggr] \! \int \!\! d^{D-1}\!\vec{x}' \,
G(z;x') a_{x'}^{D-1} G(x;x') \; , \qquad \\
& & \hspace{.5cm} = \frac{i}{2 (D\!-\!3) H} \int \!\! d^{D-1}\!\vec{x}' \,
\Biggl[ a_{x'}^{D-1} i\Delta_A(x;x') i\Delta_A(z;x') \Biggr]_{x^{\prime 0}
= \eta_1}^{x^{\prime 0} = \eta_2} . \qquad \label{left}
\end{eqnarray}
One obviously gets the same result (\ref{left}) from $\overline{\theta}(x)
\times \Delta \theta_{\epsilon}(z)$ so the total for these ``mixed'' terms
is,
\begin{eqnarray}
\lefteqn{\lim_{\epsilon \rightarrow 0} \Bigl\langle \Omega \Bigl\vert
T^*\Bigl[ \Delta \theta_{\epsilon}(x) \overline{\theta}(z) \!+\!
\overline{\theta}(x) \Delta \theta_{\epsilon}(z) \Bigr] \Bigr\vert
\Omega \Bigr\rangle_{A-{\rm type}} } \nonumber \\
& & \hspace{-.5cm} = \frac{i}{(D\!-\!3) H} \int \!\! d^{D-1}\!\vec{x}' \,
\Biggl[ a_{x'}^{D-1} i\Delta_A(x;x') i\Delta_A(z;x') \Biggr]_{x^{\prime 0}
= \eta_1}^{x^{\prime 0} = \eta_2} , \qquad \\
& & \hspace{-.5cm} = \frac{i}{(D\!-\!3) H} \int \!\! \frac{d^{D-1}k}{
(2\pi)^{D-1}} \, e^{i \vec{k} \cdot (\vec{x} - \vec{z})} \Biggl\{ a_2^{D-1}
u_A^*(x^0,k) u_A^*(z^0,k) \Bigl[u_A(\eta_2,k)\Bigr]^2 \nonumber \\
& & \hspace{4.5cm} - a_1^{D-1} u_A(x^0,k) u_A(z^0,k) \Bigl[u_A^*(\eta_1,k)
\Bigr]^2 \Biggr\} , \\
& & \hspace{-.5cm} = \frac{i}{(D\!-\!3) H^2} \int \!\! \frac{d^{D-1}k}{
(2\pi)^{D-1}} \, e^{i \vec{k} \cdot (\vec{x} - \vec{z})} \Biggl\{
u_A^*(x^0,k) u_A^*(z^0,k) \times \frac{\pi}4 \Bigl[ H^{(1)}_{\nu}(-k\eta_2)
\Bigr]^2 \nonumber \\
& & \hspace{4.5cm} -u_A(x^0,k) u_A(z^0,k) \times \frac{\pi}4 \Bigl[
H^{(1)}_{\nu}(-k\eta_1) \Bigr]^{*2} \Biggr\} . \qquad \label{mixed}
\end{eqnarray}
Expression (\ref{mixed}) combines nicely with the $A$-type surface term
from $\overline{\theta}$,
\begin{eqnarray}
\lefteqn{ \frac{\mathcal{S}_A(x;z)}{(D \!-\!3) H^2}
= \frac{i}{(D\!-\!3) H^2} \int \!\! \frac{d^{D-1}k}{(2\pi)^{D-1}} \,
e^{i \vec{k} \cdot (\vec{x} - \vec{z})} } \nonumber \\
& & \hspace{-.5cm} \times \Biggl\{ u_A^*(x^0,k) u_A^*(z^0,k) \times
\mathcal{F}(-k\eta_2) - u_A(x^0,k) u_A(z^0,k) \times \mathcal{F}^*(-k\eta_1)
\Biggr\} . \qquad
\end{eqnarray}
By partial integration we can express $\mathcal{F}(z)$ as,
\begin{eqnarray}
\mathcal{F}(z) & = & -\frac{\pi}4 \times 2\nu \int_z^{\infty} \!\!\!\! dz'
\frac1{z'} \Bigl[ H^{(1)}_{\nu}(z')\Bigr]^2 \; , \\
& = & -\frac{\pi}4 \times \Bigl[ H^{(1)}_{\nu}(z)\Bigr]^2 -\frac{\pi}4
\times \int_z^{\infty} \!\!\!\! dz' \frac1{z^{\prime 2\nu}}
\frac{\partial}{\partial z'} \Bigl[ z^{\prime \nu} H^{(1)}_{\nu}(z')\Bigr]^2
\; . \qquad \label{reexp}
\end{eqnarray}
So the first term of (\ref{reexp}) is cancelled by (\ref{mixed}).
The full expectation value for $\Delta \theta_{\epsilon}(x) \times \Delta
\theta_{\epsilon}(z)$ is,
\begin{eqnarray}
\lefteqn{\Bigl\langle \Omega \Bigl\vert T^*\Bigl[ \Delta \theta_{\epsilon}(x)
\Delta \theta_{\epsilon}(z) \Bigr] \Bigr\vert \Omega \Bigr\rangle
= \frac{-1}{4 \epsilon^2 (D \!-\!3)^2 H^2} } \nonumber \\
& & \hspace{-.5cm} \times \Biggl[ \int_{\eta_2 -\epsilon}^{\eta_2 + \epsilon}
\!\!\!\!\!\!\!\!\!\! dx^{\prime 0} \!-\! \int_{\eta_1-\epsilon}^{\eta_1 +
\epsilon} \!\!\!\!\!\!\!\!\!\! dx^{\prime 0} \Biggr] \! \int \!\!
d^{D-1}\!\vec{x}' \, a_{x'}^{D-2} \Biggl[ \int_{\eta_2 -\epsilon}^{\eta_2 +
\epsilon} \!\!\!\!\!\!\!\!\!\! dz^{\prime 0} \!-\! \int_{\eta_1 - \epsilon}^{
\eta_1 + \epsilon} \!\!\!\!\!\!\!\!\!\! dz^{\prime 0} \Biggr] \! \int
\!\! d^{D-1}\!\vec{z}' \, a_{z'}^{D-2} \nonumber \\
& & \hspace{0cm} \times \Biggl\{ i\Delta_C(x';z') \frac{\partial G(x;x')}{
\partial x^{\prime 0}} \frac{\partial G(z;z')}{\partial z^{\prime 0}}
- G(z;z') \frac{\partial G(x;x')}{\partial x^{\prime 0}}
\frac{\partial i\Delta_C(x';z')}{\partial z^{\prime 0}} \nonumber \\
& & \hspace{.5cm} - G(x;x') \frac{\partial G(z;z')}{\partial z^{\prime 0}}
\frac{\partial i\Delta_C(x';z')}{\partial x^{\prime 0}}
+ G(x;x') G(z;z') \frac{\partial^2 i\Delta_C(x';z')}{\partial x^{\prime 0}
\partial z^{\prime 0}} \Biggr\} . \qquad
\end{eqnarray}
As with the mixed term (\ref{left}) we have already reduced most of this
in the previous subsection. The only new contribution derives from the
delta function one obtains by using (\ref{keyID}) on the final term,
\begin{eqnarray}
\lefteqn{\Bigl\langle \Omega \Bigl\vert T^*\Bigl[ \Delta \theta_{\epsilon}(x)
\Delta \theta_{\epsilon}(z) \Bigr] \Bigr\vert \Omega \Bigr\rangle_{A\!-\!{\rm
type}} \!\!\!\!\! = \frac{-i}{4 \epsilon^2 (D \!-\!3)^2 H^2}
\Biggl[ \int_{\eta_2 -\epsilon}^{\eta_2 + \epsilon} \!\!\!\!\!\!\!\!\!\!
dx^{\prime 0} \!-\! \int_{\eta_1-\epsilon}^{\eta_1 + \epsilon}
\!\!\!\!\!\!\!\!\!\! dx^{\prime 0} \Biggr] \! \int \!\! d^{D-1}\!\vec{x}'
\, a_{x'}^{D-2} } \nonumber \\
& & \hspace{2.5cm} \times \Biggl[ \int_{\eta_2 -\epsilon}^{\eta_2+ \epsilon}
\!\!\!\!\!\!\!\!\!\! dz^{\prime 0} \!-\! \int_{\eta_1- \epsilon}^{\eta_1 +
\epsilon} \!\!\!\!\!\!\!\!\!\! dz^{\prime 0} \Biggr] \! \int \!\!
d^{D-1}\!\vec{z}' \, G(x;x') G(z;z') \delta^D(x' \!-\!z') \; , \qquad \\
& & \hspace{-.5cm} = \frac{i}{4 \epsilon^2 (D \!-\!3)^2 H^2}
\Biggl[ \int_{\eta_2 -\epsilon}^{\eta_2 + \epsilon} \!\!\!\!\!\!\!\!\!\!
dx^{\prime 0} \!-\! \int_{\eta_1 - \epsilon}^{\eta_1 + \epsilon}
\!\!\!\!\!\!\!\!\!\! dx^{\prime 0} \Biggr] \! \int \!\! d^{D-1}\!\vec{x}'
\, a_{x'}^{D-2} i\Delta_A(x;x') i\Delta_A(z;x') \; , \qquad \\
& & \hspace{-.5cm} = \frac{i}{4 \epsilon^2 (D \!-\!3)^2 H^2} \!\int \!\!
\frac{d^{D-1}\!k}{(2\pi)^{D-1}} \, e^{i \vec{k} \cdot (\vec{x} - \vec{z})}
\Biggl\{ u_A^*(x^0,k) u_A^*(z^0,k)
\!\! \int_{\eta_2 -\epsilon}^{\eta_2 + \epsilon} \!\!\!\!\!\!\!\!\! d\eta \,
a_{\eta}^{D-2} \Bigl[u_A(\eta,k)\Bigr]^2 \nonumber \\
& & \hspace{4cm} - u_A(x^0,k) u_A(z^0,k)
\!\! \int_{\eta_1 -\epsilon}^{\eta_1 + \epsilon} \!\!\!\!\!\!\!\!\! d\eta \,
a_{\eta}^{D-2} \Bigl[u_A^*(\eta,k)\Bigr]^2 \Biggr\} . \qquad
\end{eqnarray}
The integrations with respect to $\eta$ can be performed, but it is not
possible to take the unregulated limit,
\begin{eqnarray}
\lefteqn{\int_{\eta_2 -\epsilon}^{\eta_2 + \epsilon} \!\!\!\!\!\!\!\!\!
d\eta \, a_{\eta}^{D-2} \Bigl[u_A(\eta,k)\Bigr]^2 = -\frac{\pi}4 \!
\int_{\eta_2 -\epsilon}^{\eta_2 + \epsilon} \!\!\!\!\!\!\!\!\! d\eta \,
\eta \Bigl[H^{(1)}_{\nu}(-k\eta)\Bigr]^2 \; , } \\
& & \hspace{1cm} = -\frac{\pi}{4 k^2} \! \int_{-k(\eta_2 -\epsilon)}^{-k(\eta_2
+ \epsilon)} \!\!\!\!\!\!\!\!\! dz \, z \Bigl[H^{(1)}_{\nu}(z)\Bigr]^2 \; , \\
& & \hspace{1cm} = -\frac{\pi}{8 k^2} \Biggl\{ (z^2 \!-\! \nu^2) \Bigl[
H^{(1)}_{\nu}(z) \Bigr]^2 + z^2 \Bigl[ \frac{\partial}{\partial z}
H^{(1)}_{\nu}(z)\Bigr]^2 \Biggr\} \Biggr\vert_{z = -k (\eta_2-\epsilon)}^{z
= -k (\eta_2 + \epsilon)} , \qquad \\
& & \hspace{1cm} = -\frac{\pi}4 \times 2\epsilon \times \eta_2 \Bigl[
H^{(1)}_{\nu}(-k\eta_2) \Bigr]^2 + O(\epsilon^3) \; . \qquad
\end{eqnarray}
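The antiderivative used in the third line is the standard Bessel result $\int\! dz\, z\, Z_{\nu}^2 = \tfrac12\{(z^2 - \nu^2) Z_{\nu}^2 + z^2 Z_{\nu}^{\prime 2}\}$. As a sanity check it can be compared against direct quadrature; the index and interval below are arbitrary test values:

```python
import numpy as np
from scipy.integrate import quad
from scipy.special import hankel1, h1vp

nu = 1.5  # test value; in the text nu = (D-1)/2

def F(z):
    # claimed antiderivative of z [H^(1)_nu(z)]^2
    return 0.5 * ((z ** 2 - nu ** 2) * hankel1(nu, z) ** 2
                  + z ** 2 * h1vp(nu, z) ** 2)

a, b = 1.0, 3.0
# integrate real and imaginary parts separately
re, _ = quad(lambda z: (z * hankel1(nu, z) ** 2).real, a, b)
im, _ = quad(lambda z: (z * hankel1(nu, z) ** 2).imag, a, b)
err = abs((re + 1j * im) - (F(b) - F(a)))
```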
Combining the various $A$-type surface terms gives a result of the form,
\begin{eqnarray}
\lefteqn{\Bigl( A\!-\!{\rm type\ Terms} \Bigr) = \frac{-i}{(D \!-\!3) H^2}
\!\int \!\! \frac{d^{D-1}\!k}{(2\pi)^{D-1}} \, e^{i \vec{k} \cdot (\vec{x}
- \vec{z})} } \nonumber \\
& & \hspace{.3cm} \times \Biggl\{ u_A^*(x^0,k) u_A^*(z^0,k)
\mathcal{A}(\eta_2,k,\epsilon) - u_A(x^0,k) u_A(z^0,k)
\mathcal{A}^*(\eta_1,k,\epsilon) \Biggr\} . \qquad \label{finA}
\end{eqnarray}
The function $\mathcal{A}(\eta,k,\epsilon)$ is,
\begin{equation}
\mathcal{A}(\eta,k,\epsilon) = \frac{\pi}4 \Biggl\{
\frac{ \eta [H^{(1)}_{\nu}(-k\eta)]^2}{2 \epsilon (D\!-\!3)} +
\int_{-k\eta}^{\infty} \!\!\!\!\!\!\!\! dz' \, \frac1{z^{\prime 2\nu}}
\frac{\partial}{\partial z'} \Bigl[ z^{\prime \nu} H^{(1)}_{\nu}(z')\Bigr]^2
+ O(\epsilon) \Biggr\} . \label{scriptA}
\end{equation}
We seek a homogeneous gauge parameter $\delta \theta[A](x)$ which cancels
(\ref{finA}-\ref{scriptA}), in the limit that $\epsilon$ goes to zero,
without changing the ``Other Term.'' If we construct it from $A_i$,
rather than $A_0$ then there will be no interference between $\Delta
\theta[A]$ and $\delta \theta[A]$. Suppose further that $\delta \theta[A](x)$,
like $\Delta \theta[A](x)$ involves an integral over a dummy variable
$x^{\prime \mu}$, and that the field $A_i(x')$ is differentiated with
respect to $x^{\prime 0}$. What we want is that the expectation value of
$\delta \theta[A](x) \times A_i(z)$ is zero unless there is also a
derivative with respect to $z^0$. If we can construct a $\delta \theta[A]$
with this property then the only nonzero contribution to the transformed
propagator will come from $\delta \theta[A](x) \times \delta \theta[A](z)$.
It is simplest to construct the term we want by analogy with the simple
harmonic oscillator, whose Heisenberg position operator is,
\begin{equation}
q(t) = \frac1{\sqrt{2m\omega}} \Bigl[ a \, e^{-i\omega t} + a^{\dagger}
e^{i \omega t}\Bigr] \; . \label{SHOq}
\end{equation}
Note that we can isolate the raising and lowering operators by taking
linear combinations of $\dot{q}$ and $i\omega q$,
\begin{equation}
\dot{q}(t) + i \omega q(t) = \frac{2i\omega}{\sqrt{2m\omega}} \,
a^{\dagger} e^{i\omega t} \qquad , \qquad \dot{q}(t) - i \omega q(t)
= \frac{-2i\omega}{\sqrt{2m\omega}} \, a\, e^{-i\omega t} \; .
\label{isolate}
\end{equation}
We assume the usual commutation relations and ground state $\vert \Omega
\rangle$,
\begin{equation}
[a,a^{\dagger}] = 1 \qquad , \qquad a \vert \Omega \rangle = 0 = \langle
\Omega \vert a^{\dagger} \; . \label{SHOaO}
\end{equation}
If $t$ comes before the last time $t_2$, and after the earliest time $t_1$,
then we have,
\begin{eqnarray}
\Bigl\langle \Omega \Bigl\vert T^*\Bigl[ \Bigl(\dot{q}(t_2) \!+\! i \omega
q(t_2)\Bigr) q(t)\Bigr] \Bigr\vert \Omega \Bigr\rangle & = & 0 \qquad
\forall \; t< t_2 \; , \\
\Bigl\langle \Omega \Bigl\vert T^*\Bigl[ \Bigl(\dot{q}(t_1) \!-\! i \omega
q(t_1)\Bigr) q(t)\Bigr] \Bigr\vert \Omega \Bigr\rangle & = & 0 \qquad
\forall \; t_1< t \; .
\end{eqnarray}
The only nonzero expectation value comes from the $T^*$-ordered product of
two factors of the combinations (\ref{isolate}),
\begin{eqnarray}
\Bigl\langle \Omega \Bigl\vert T^*\Bigl[ \Bigl(\dot{q}(t_2) \!+\! i \omega
q(t_2)\Bigr) \Bigl(\dot{q}(t_2') \!+\! i \omega q(t_2')\Bigr) \Bigr]
\Bigr\vert \Omega \Bigr\rangle & = & \frac{i}{m} \, \delta(t_2 \!-\! t_2')
\; , \qquad \\
\Bigl\langle \Omega \Bigl\vert T^*\Bigl[ \Bigl(\dot{q}(t_1) \!-\! i \omega
q(t_1)\Bigr) \Bigl(\dot{q}(t_1') \!-\! i \omega q(t_1')\Bigr) \Bigr]
\Bigr\vert \Omega \Bigr\rangle & = & -\frac{i}{m} \, \delta(t_1 \!-\! t_1')
\; . \qquad
\end{eqnarray}
We construct a gauge parameter $\delta \theta[A](x)$ with the desired
properties by analogy. The free field mode sum for $A_i(x)$ is \cite{RPW1,KW2},
\begin{equation}
A_i(x) = \int \!\! \frac{d^{D-1}k}{(2\pi)^{D-1}} \Biggl\{ a_{x} u_B(x^0,k)
e^{i \vec{k} \cdot \vec{x}} \beta_i(\vec{k}) + a_{x} u_B^*(x^0,k)
e^{-i \vec{k} \cdot \vec{x}} \beta_i^{\dagger}(\vec{k}) \Biggr\} ,
\end{equation}
where the $u_B(\eta,k)$ mode functions are given by (\ref{unu}) with
index $\nu = (D-3)/2$ and the $\beta_i(\vec{k})$ are canonically
normalized annihilation operators. One can isolate $\beta_i(\vec{k})$ by
taking the spatial Fourier transform,
\begin{equation}
\widetilde{A}_i(x^0,\vec{k}) \equiv \int \!\! d^{D-1}\vec{x} \, e^{-i
\vec{k} \cdot \vec{x}} A_i(x^0,\vec{x}) \; .
\end{equation}
Now form linear combinations analogous to (\ref{isolate}),
\begin{eqnarray}
\frac1{a} \, \widetilde{A}_i(\eta,\vec{k}) \frac{\partial}{\partial \eta}
\, u_B^*(\eta,k) - u_B^*(\eta,k) \frac{\partial}{\partial \eta} \,
\Bigl[\frac1{a} \, \widetilde{A}_i(\eta,\vec{k}) \Bigr] & = &
\frac{i \beta_i(\vec{k})}{a^{D-2}} \; , \qquad \\
\frac1{a} \, \widetilde{A}_i(\eta,-\vec{k}) \frac{\partial}{\partial \eta}
\, u_B(\eta,k) - u_B(\eta,k) \frac{\partial}{\partial \eta} \,
\Bigl[\frac1{a} \, \widetilde{A}_i(\eta,-\vec{k}) \Bigr] & = &
-\frac{i \beta_i^{\dagger}(\vec{k})}{a^{D-2}} \; . \qquad
\end{eqnarray}
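The right hand sides follow from the canonical Wronskian normalization $a^{D-2} \bigl( u_B \partial_{\eta} u_B^* - u_B^* \partial_{\eta} u_B \bigr) = i$ of the mode functions. This can be checked numerically under the assumption (consistent with the reductions of the previous subsection, but an assumption nonetheless since (\ref{unu}) is not reproduced here) that $u_B(\eta,k) = \sqrt{\pi/(4 H a^{D-1})}\, H^{(1)}_{\nu}(-k\eta)$ with $\nu = (D-3)/2$, up to a constant phase which drops out of the Wronskian. A sketch with arbitrary test values:

```python
import numpy as np
from scipy.special import hankel1

D, H, k = 4, 1.0, 1.0
nu = (D - 3) / 2.0                       # B-type index

def a_of(eta):
    return -1.0 / (H * eta)              # de Sitter scale factor, conformal time eta < 0

def uB(eta):
    # assumed mode function normalization (up to a constant phase)
    return np.sqrt(np.pi / (4 * H * a_of(eta) ** (D - 1))) * hankel1(nu, -k * eta)

eta, h = -2.0, 1e-6
du = (uB(eta + h) - uB(eta - h)) / (2 * h)          # d u_B / d eta
W = a_of(eta) ** (D - 2) * (uB(eta) * np.conj(du) - np.conj(uB(eta)) * du)
```

The check should return $W = i$ to finite-difference accuracy, independently of the test values chosen.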
It follows that the desired gauge parameter is,
\begin{eqnarray}
\lefteqn{ \delta \theta_{\epsilon}(x)} \nonumber \\
& & \hspace{-.5cm} = \frac{i}{\sqrt{2\epsilon}\,(D\!-\!3) H}
\! \int \!\! \frac{d^{D-1}k}{(2\pi)^{D-1}} \, e^{i \vec{k} \cdot \vec{x}}
\Biggl\{ u_A^*(x^0,k) \! \int_{\eta_2 -\epsilon}^{\eta_2 + \epsilon}
\!\!\!\!\!\!\!\!\!\! dx^{\prime 0} a_{x'}^{D-2} \Bigl[(D \!-\!3)
\mathcal{A}(x^{\prime 0},k,\epsilon) \Bigr]^{\frac12} \nonumber \\
& & \hspace{1.4cm} \times \Biggl[ \frac{k_i \widetilde{A}_i(x^{\prime 0},
\vec{k})}{k a_{x'}} \frac{\partial u_B(x^{\prime 0},k)}{\partial x^{\prime 0}}
- u_B(x^{\prime 0},k) \frac{\partial}{\partial x^{\prime 0}} \Bigl[
\frac{k_i \widetilde{A}_i(x^{\prime 0},\vec{k})}{k a_{x'}}\Bigr] \Biggr]
\nonumber \\
& & \hspace{0cm} - u_A(x^0,k) \! \int_{\eta_1 -\epsilon}^{\eta_1 + \epsilon}
\!\!\!\!\!\!\!\!\!\! dx^{\prime 0} a_{x'}^{D-2} \Bigl[(D\!-\!3)
\mathcal{A}^*(x^{\prime 0},k,\epsilon) \Bigr]^{\frac12} \nonumber \\
& & \hspace{1.4cm} \times \Biggl[ \frac{k_i \widetilde{A}_i(x^{\prime 0},
\vec{k})}{k a_{x'}}\frac{\partial u_B^*(x^{\prime 0},k)}{\partial x^{\prime 0}}
- u_B^*(x^{\prime 0},k) \frac{\partial}{\partial x^{\prime 0}} \Bigl[
\frac{k_i \widetilde{A}_i(x^{\prime 0},\vec{k})}{k a_{x'}}\Bigr] \Biggr]
\Biggr\} . \qquad
\end{eqnarray}
\section{Discussion}
There are two generic ways to freeze local symmetries:
\begin{itemize}
\item{{\it Exact Gauge Fixing}, in which the fields are made to obey some
equation; and}
\item{{\it Average Gauge Fixing}, in which a term is added to the Lagrangian.}
\end{itemize}
We have shown that certain average gauges cannot be derived from the
canonical formalism on manifolds such as de Sitter for which there are
linearization instabilities. Ignoring this problem in electrodynamics
causes the vector potential to possess an unphysical and incorrect part
which drops out of the field strength but affects interaction energies.
This may be the origin of the on-shell singularities found in Feynman
gauge for the one loop self-mass-squared of charged scalars on de Sitter
\cite{KW2}.
We have also constructed the field-dependent gauge transformation that
enforces exact, Lorentz gauge on de Sitter electrodynamics. This was
applied to the photon propagator from a non-de Sitter invariant, average
gauge and the result agrees exactly with the de Sitter invariant solution
previously obtained from solving the Lorentz gauge propagator equation
\cite{TW7}. It was already known from adding the compensating gauge
transformation to the naive de Sitter transformation that the propagator
in the non-invariant gauge shows no physical breaking of de Sitter
invariance \cite{RPW1}. So the fact that our transformation technique
produces an invariant result demonstrates that the technique indeed
eliminates unphysical breaking of de Sitter invariance.
In a subsequent work we will employ the same technique to transform the
graviton propagator from a non-de Sitter invariant, average gauge
\cite{TW3,RPW1} to the exact and de Sitter invariant, de Donder gauge.
Adding the compensating transformation shows that the breaking of de
Sitter invariance in this propagator is physical \cite{Kleppe}, so the
expectation is that the transformation technique will not remove it.
Because the transformed propagator will obey a de Sitter invariant gauge
condition, this should settle the issue about whether or not free
gravitons have any de Sitter invariant states. Note that simply obeying
a de Sitter invariant propagator equation does not guarantee a de Sitter
invariant solution, as the case of the massless, minimally coupled
scalar proves \cite{AF}. Note also that physical graviton modes obey
precisely the same equation as the massless, minimally coupled scalar
\cite{Grishchuk}.
Constructing the de Donder gauge propagator is a worthy goal in its
own right for two reasons. First, exploiting the gauge condition makes a
vast simplification in tensor algebra \cite{KW1}. Second, using a de
Sitter invariant gauge would preclude the need for noninvariant
counterterms, even though the actual propagator is not de Sitter
invariant \cite{MW,KW1}.
A significant technical result of this paper is the ``Convolution Identity''
(\ref{convolution}) for integrating the propagator $i\Delta_A$ of a massless,
minimally coupled scalar up against the propagator $i\Delta_{\nu}$ of a
massless scalar with conformal coupling,
\begin{equation}
\xi = \frac1{D (D \!-\!1)} \Bigl[ \Bigl(\frac{D \!-\!1}2\Bigr)^2 - \nu^2
\Bigr] \; .
\end{equation}
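As a consistency check on this formula, setting $\nu = \frac12$ reproduces the standard conformal coupling $\xi = (D-2)/[4(D-1)]$ in any dimension, while $\nu = (D-1)/2$ gives $\xi = 0$, the minimally coupled case. A symbolic sketch:

```python
import sympy as sp

D, nu = sp.symbols('D nu', positive=True)
# xi = [((D-1)/2)^2 - nu^2] / [D(D-1)]
xi = (((D - 1) / 2) ** 2 - nu ** 2) / (D * (D - 1))

xi_conformal = sp.simplify(xi.subs(nu, sp.Rational(1, 2)))
xi_minimal = sp.simplify(xi.subs(nu, (D - 1) / 2))

target = (D - 2) / (4 * (D - 1))   # standard conformal coupling
```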
The result follows from Green's second identity,
\begin{eqnarray}
\lefteqn{-i \!\! \int_{V} \! d^Dx' \sqrt{-g(x')} \, i\Delta_A(x;x')
i\Delta_{\nu}(x';z) = \frac{i\Delta_{\nu}(x;z) \!-\! i\Delta_A(x;z)}{[(
\frac{D-1}2)^2 \!-\! \nu^2] H^2} } \nonumber \\
& & \hspace{-.7cm} -i \!\! \int_{\partial V} \!\!\!\! d^{D-1}\!x'_{\rho}
\sqrt{-g'} \, g^{\prime \rho\sigma} \Biggl[\frac{i\Delta_{\nu}(x';z)
\partial_{\sigma}' i\Delta_A(x;x')\!-\!i\Delta_A(x;x') \partial_{\sigma}'
i\Delta_{\nu}(x';z)}{[(\frac{D-1}2)^2 \!-\! \nu^2] H^2} \Biggr] \! . \qquad
\end{eqnarray}
We expect this to be of great utility in the subsequent graviton project
because field dependent gauge transformations result in precisely such
convolutions.
\centerline{\bf Acknowledgements}
This work was partially supported by FQXi Mini Grant \#MGB-08-008,
by FOM grant 07PR2522, by Utrecht University, by European Union grant
MRTN-CT-2004-512194, by Hellenic grant INTERREG IIIA, by NSF grants
PHY-0653085 and PHY-0855021, and by the Institute for Fundamental Theory
at the University of Florida.
\def\section{\@startsection {section}{1}{\z@}{-8.5ex plus -1ex minus
-.2ex}{3.3ex plus .2ex}{\large\bf}}
\def\subsection{\@startsection{subsection}{2}{\z@}{-3.25ex plus
-1ex minus -.2ex}{1.5ex plus .2ex}{\bf}}
\def\subsubsection{\@startsection{subsubsection}{3}{\z@}{-3.25ex plus
-1ex minus -.2ex}{1.5ex plus .2ex}{\sl}}
\renewcommand{\thefootnote}{\alph{footnote}}
\begin{document}
\begin{titlepage}
\vspace*{-2cm}
\begin{flushright}
\end{flushright}
\vspace{0.3cm}
\begin{center}
{\Large {\bf Type II defects revisited}} \\
\vspace{1cm} {\large E.\ Corrigan\footnote{\noindent E-mail: {\tt edward.corrigan@york.ac.uk}}}\\
\vspace{0.5cm}
{\em Department of Mathematics \\ University of York, York YO10 5DD, U.K.} \\
\vspace{0.3cm} {\large and}\\ \vspace{0.5cm}
{\large C.\ Zambon\footnote{\noindent E-mail: {\tt cristina.zambon@durham.ac.uk}}} \\
\vspace{0.3cm}
{\em Department of Physics \\ Durham University, Durham DH1 3LE, U.K.} \\
\vspace{2cm} {\bf{ABSTRACT}}\\ \end{center} \vspace{.5cm}
\p Energy and momentum conservation in the context of a type II, purely transmitting, defect, within a single scalar relativistic two-dimensional field theory, places a severe constraint not only on the nature of the defect but also on the potentials for the scalar fields to either side of it. The constraint is of an unfamiliar type since it requires the Poisson Bracket of the defect contributions to energy and momentum with respect to the defect discontinuity and its conjugate to be balanced by the potential difference across the defect. It is shown that the only solutions to the constraint correspond to the known integrable field theories.
\p \\
\vfill
\end{titlepage}
\section{Introduction}
Defects within (relativistic) integrable field theory models in two dimensions have been studied for some time from both classical and quantum viewpoints (see, for example \cite{dms1994,kl1999,Mintchev02,bcz2003,bcz2004, bcz2005,gyz2006,cz2007,hk2007,gyz2007,c2008, cz2009,n2009,ad2012,agsz2014,d2016}). In essence, a defect always involves a discontinuity of some kind, and in an integrable model experience has shown that this discontinuity is a jump in the field value at a specific point (similar to the discontinuity in velocity across a shock in a fluid flow), with `sewing' conditions across the defect relating the fields on either side in such a manner that suitably adjusted conservation laws are maintained. Characteristically, such defects break space translation invariance and are purely transmitting. Intriguingly, insisting upon sewing conditions that maintain the conservation of energy and momentum seems to be sufficient to guarantee integrability. There is no direct proof of this but there is a body of evidence from many specific cases that indicates it should be the case.
\p So far, there are basically two types of defect that appear to be integrable, called type I (where the defect has no degrees of freedom of its own \cite{bcz2003,bcz2004}), and type II (where the defect carries its own degrees of freedom \cite{cz2009,cr2013}). However, they can be mixed together as they have been recently, for example, to discuss defects within the $d^{(1)}_r$ series of affine Toda field theories \cite{br2017}. There may be other possibilities, yet to be found, that for example encompass affine Toda field theories based on the $e_r^{(1)},\ r=6,7,8$ root systems.
\p The aim of this paper is to take the first step towards a systematic classification, by examining defect sewing relations required to preserve energy-momentum conservation, but without specifying the field theories themselves, in order to determine the constraints on the field theory potentials. For type I this is straightforward and was carried out previously, demonstrating that, for example, within the class of affine Toda field theories only those based on the roots represented by the extended Dynkin diagrams for $a^{(1)}_r$ can support type I defects \cite{bcz2004,cz2007}. The simplest example of the type I defect is included here for comparison with the type II defect, which is more intricate. For type II, the analysis seems to be far from straightforward and the main part of the paper is classifying the possibilities in the simplest of cases where there is a single scalar field defined on each side of the defect. In either situation, the only possibilities are the known integrable field theories, except that the Tzitz\'eica model ($a_2^{(2)}$ affine Toda) is excluded from the set of models supporting type I defects but can support a type II defect.
\p One intriguing possibility is that integrable models are actually characterised by their ability to support integrable discontinuities. However, a proof of that fact, if true, remains distant.
\section{The formalism}
In this paper, field theories will be analysed by examining carefully the sewing conditions across a defect taking into account the requirements of energy-momentum conservation including contributions from the defect. For the purposes of this article the defect is taken to be situated at $x=0$ (though in principle it might be situated anywhere along the $x$-axis), with scalar fields $u$ and $v$ to the left and right of it, respectively. There is no a priori assumption that the fields are of the same type, though they often are. In other words, in their respective domains the two fields satisfy the equations
$$\partial^2 u=-U^\prime(u),\ \ (x<0);\quad \partial^2 v=-V^\prime(v),\ \ (x>0),$$ where $U(u)$ and $V(v)$ are the potentials.
The field equations need to be supplemented by conditions relating the fields $u,v$ and/or their derivatives across the defect. The idea is that, by making very few assumptions, not only are the defect conditions specified by the requirements, but so also are the potentials $U,\ V$.
\subsection{Energy}
\p Consider first the contributions to the total energy and how it might be conserved. The time derivative of the contributions to the total energy from the fields to either side of the defect is given (on using the equations of motion) by
$$\dot E=\int_{-\infty}^0\,\left(\frac{1}{2}(u_t^2+u_x^2)+U(u)\right)_tdx +\int_0^{\infty}\,\left(\frac{1}{2}(v_t^2+v_x^2)+V(v)\right)_tdx = [u_tu_x]^0+[v_tv_x]_0,$$
assuming the contributions from $\pm \infty$ are zero. Thus, the sewing conditions should be designed to convert the right hand side to a total time derivative of the energy contributed by the defect.
\p One possibility (type I) is to require:
$$x=0:\quad u_x=v_t- {\cal E}_u,\ \ v_x=u_t+{\cal E}_v,$$
where ${\cal E}$ depends on both $u$ and $v$, and partial derivatives with respect to $u$ or $v$ are denoted by subscripts; then
$$\dot E=-u_t{\cal E}_u-v_t{\cal E}_v=-\frac{d{\cal E}}{dt}.$$
Thus, the total energy $E+{\cal E}$ is conserved.
\p Another possibility (type II) is to introduce a quantity $\lambda$, defined only at $x=0$ but depending on time, and then to set
$$x=0:\quad u_x=\lambda_t- {\cal E}_u,\ \ v_x=\lambda_t+{\cal E}_v,\ \ u_t-v_t =-\ce_\lambda,$$
where now $\ce$ depends on $u,\ v$ and $\lambda$; then
$$\dot E=-u_t{\cal E}_u-v_t{\cal E}_v-\lambda_t\ce_\lambda=-\frac{d{\cal E}}{dt},$$
and $E+\ce$ is conserved as before, though in this case $\ce$ has additional dependence on $\lambda$. The defect does not break time translation invariance so it is not surprising that little effort is required to conserve energy, and the energy ${\cal E}$ introduced by the impurity is unconstrained.
\p It is also worth recalling that both sets of sewing relations follow directly from Lagrangian descriptions of the defect:
\begin{equation}\label{defectL}
{\cal L}(u,v)={\cal L}(u)\theta(-x)+{\cal L}_D\delta(x)+{\cal L}(v)\theta(x)
\end{equation}
with
\begin{equation}\label{defectLI}
{\cal L}(u)=\frac{1}{2}(u_t^2-u_x^2)-U(u),\quad {\cal L}(v)=\frac{1}{2}( v_t^2-v_x^2) -V(v),
\end{equation}
and with the type I or type II defect Lagrangian ${\cal L}_D$ given by
\begin{equation}\label{defectLII}
{\cal L}_I=uv_t-{\cal E}(u,v),\quad {\cal L}_{II}=(u-v)\lambda_t-{\cal E}(u,v,\lambda).
\end{equation}
In these expressions, subscripts denote derivatives with respect to $t$ and $x$, and the defect energy functional ${\cal E}$ depends only on the fields, not their time (or space) derivatives.
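\p For example, in the type I case, varying the action built from \eqref{defectL} with respect to $u$ and $v$ produces, at $x=0$, the boundary contributions
$$\delta u\left(-u_x+v_t-{\cal E}_u\right)=0,\qquad \delta v\left(v_x-u_t-{\cal E}_v\right)=0,$$
where the $u_t$ term arises from integrating the variation of $uv_t$ by parts in time; these are precisely the type I sewing conditions.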
\subsection{Momentum}
\p In a similar manner, the time derivative of the contributions to the total field momentum is given by
$$\dot P=\int_{-\infty}^0\left(u_t u_x\right)_t dx +\int_0^{\infty}\left(v_t v_x\right)_t dx=\left[\frac{1}{2}(u_t^2+u_x^2)-U(u)\right]^0+\left[\frac{1}{2}(v_t^2+v_x^2)-V(v)\right]_0,$$
with the same assumption as before. Since space translation is broken explicitly by the defect the requirement of overall momentum conservation is expected to impose stringent conditions on the fields. The two cases introduced above will be dealt with separately.
\subsubsection{Type I}
\p Using the type I sewing conditions (in this section all fields are evaluated at $x=0$):
$$\dot P=-v_t\ce_u-u_t\ce_v +\frac{1}{2}\ce_u^2-\frac{1}{2}\ce_v^2 -U(u)+V(v)= -\frac{d\cp}{dt},$$
where $\cp$ is related to $\ce$ and strongly constrained by the following relationships:
\begin{equation}\label{typeIconditions}\ce_u=\cp_v,\ \ \ce_v=\cp_u,\ \ \frac{1}{2}\left(\ce_u^2-\ce_v^2\right) =U(u)-V(v).\end{equation}
These conditions are powerful. The first pair require that $\ce\pm\cp$ is a function of $u\pm v$. To examine the third condition, it is convenient to define new variables $p,\ q$ by
$$p=\frac{u+v}{2},\ \ q=\frac{u-v}{2},\ {\rm at}\ x=0,$$
then the last condition of \eqref{typeIconditions} becomes
$$\frac{\ce_p\,\ce_q}{2}=U(p+q)-V(p-q).$$
Then, since $\ce=F(p)+G(q), \ \cp=F(p)-G(q)$, for some functions $F,G$, this requires $$\frac{F^\prime(p)G^\prime(q)}{2}=U(p+q)-V(p-q),$$
which restricts possible choices for the potentials $U,V$. This is because the difference on the right hand side must factor into a function of $p$ multiplied by a function of $q$. From this observation, it is straightforward to find the possible solutions for $F,\ G, \ U,$ and $V$. It is enough to note that the left hand side must satisfy
$$\left(F^\prime(p)G^\prime(q)\right)_{pp}=\left(F^\prime(p)G^\prime(q)\right)_{qq}$$
and hence that
$$\frac{F^{\prime\prime\prime}}{F^\prime}=\frac{G^{\prime\prime\prime}}{G^\prime}=k^2,$$
where $k$ is constant. For example, if $k\ne 0$
$$F^\prime(p)=\alpha e^{kp}+\beta e^{-kp},\ \ G^\prime(q)=\gamma e^{kq}+\delta e^{-kq},$$
where $\alpha,\ \beta,\ \gamma,\ \delta$ are also constants. Also, if $k=0$ then
$$F^\prime(p)=\alpha p+\beta,\ \ G^\prime(q)=\gamma q+\delta.$$
Hence, the allowed potentials can be deduced, leading to the following possibilities: the fields $u,v$ can both be free massive (with the same mass), or free massless, or both be Liouville, or both be sine/sinh-Gordon (with the same parameters). Alternatively, one of $u$ or $v$ could be free massless and the other could be Liouville; in that case both field theories are conformal.
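\p For instance, in the $k=0$ case the choice $F^\prime(p)=2mp,\ G^\prime(q)=2mq$ gives $\ce=m(p^2+q^2)=\frac{m}{2}(u^2+v^2)$ and $\cp=m(p^2-q^2)=muv$, while
$$\frac{F^\prime(p)G^\prime(q)}{2}=2m^2pq=\frac{m^2}{2}\left((p+q)^2-(p-q)^2\right),$$
corresponding to the free massive potentials $U(u)=m^2u^2/2$ and $V(v)=m^2v^2/2$; the conditions \eqref{typeIconditions} are then easily checked directly.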
\subsubsection{Type II}
\p Using the type II sewing conditions leads to a different type of constraint on the defect contributions to the total energy and momentum. Considering the field contributions to the momentum, following the same steps as in the previous subsection, gives
$$\dot P=-p_t\ce_\lambda -\lambda_t\ce_p +\frac{1}{2}\ce_u^2-\frac{1}{2}\ce_v^2 -U(u)+V(v)=-\frac{d\cp}{dt},$$
which, assuming $\cp$ is a function only of $q,p,\lambda$, and noting
$$\frac{d\cp}{dt}=q_t\cp_q+p_t\cp_p+\lambda_t\cp_\lambda= -\left(\frac{1}{2}\ce_\lambda\cp_q-p_t\cp_p-\lambda_t\cp_\lambda\right),$$ requires
$$\ce_\lambda=\cp_p,\ \ \ce_p=\cp_\lambda, \ \ \frac{1}{2}\left(\cp_\lambda\ce_q-\cp_q\ce_\lambda\right)=U(p+q)-V(p-q).$$
The last of these is intriguing because as far as the defect contribution to the Lagrangian \eqref{defectLII} is concerned $\lambda$ and $q$ are conjugate variables. Thus, the nonlinear relationship states that the Poisson bracket with respect to these conjugate variables of the defect energy and momentum is twice the `potential difference' across the defect.
\p Now, since $\ce,\ \cp$ are functions of $\lambda,\ p, \ q$ and $\ce\pm\cp$ is a function of $p\pm \lambda$ together with $ q$, it follows that $$\ce=F(p+\lambda,q)+G(p-\lambda, q), \ \cp=F(p+\lambda,q)-G(p-\lambda,q).$$
Then, explicitly in terms of $F,\ G$ the nonlinear Poisson Bracket constraint is:
\begin{equation}\label{PBrel}
F_\lambda G_q-F_q G_\lambda=\{F,G\}=U(p+q)-V(p-q).
\end{equation}
The constraint equation \eqref{PBrel} is powerful because the left hand side depends on $\lambda$ while the right hand side does not.
\p While several examples are known, the general solution to \eqref{PBrel} is not yet clear. The objective in this article is to describe an approach to solving a functional equation of this unfamiliar type in which all four functions $F,G,U,V$ are strongly constrained. In particular, it is necessary to investigate whether or not there are any solutions beyond those known already, all of which correspond to integrable field theories, namely, sine-Gordon, Tzitz\'eica, Liouville and massive or massless free.
\section{An approach to solving the Poisson Bracket Equation}
\p One approach, used with success previously \cite{cz2009}, is to guess that solutions for $F,G$ must be sums of exponentials. An alternative is to be systematic: assume each has a Taylor expansion, and write
$$F(p+\lambda,q)=\sum_{k=0}^\infty \frac{(p+\lambda)^k}{2^k k!}\,f_k(q),\ \ G(p-\lambda,q)=\sum_{l=0}^\infty \frac{(p-\lambda)^l}{2^ll!}\,g_l(q).$$
Then the Poisson bracket relation becomes:
$$F_\lambda G_q-F_q G_\lambda=\frac{1}{2}\sum_{k,l} \frac{(p+\lambda)^k}{2^kk!}\frac{(p-\lambda)^l}{2^ll!}(f_{k+1}g^\prime_l+f_k^\prime g_{l+1}).$$
The latter can be rewritten (grouping together terms of constant $N=k+l$) as
$$F_\lambda G_q-F_q G_\lambda=\frac{1}{2}\sum_{N=0}^\infty\sum_{k=0}^N \frac{(p+\lambda)^k}{2^kk!}\frac{(p-\lambda)^{N-k}}{2^{N-k}(N-k)!}\,(f_{k+1}g^\prime_{N-k}+f_k^\prime g_{N-k+1}),$$
which seems to require the combinations $f_{k+1}g^\prime_{N-k}+f_k^\prime g_{N-k+1}$ within the set of terms corresponding to a particular $N$ to be independent of $k$, since the right hand side of \eqref{PBrel} does not depend on $\lambda$. Gathering the terms together, they can then be recognised as the coefficients in the binomial expansion of $(p+\lambda + p-\lambda)^N=(2p)^N$, which is clearly independent of $\lambda$, as required. In other words,
$$F_\lambda G_q-F_q G_\lambda=\frac{1}{2}\sum_{N=0}^\infty \frac{p^N}{N!}\, h_N(q),$$ where
\begin{equation}\label{fghrels}
h_N(q)=f^{\phantom{\prime}}_{k+1}\,g^\prime_{N-k}+f_k^\prime \, g^{\phantom{\prime}}_{N-k+1}, \ \ k=0,\dots, N.
\end{equation}
\p On the other hand, assuming the potentials also have a Taylor expansion, the right hand side can be written
\begin{equation}U(p+q)-V(p-q)=\sum_{N=0}^\infty \frac{p^N}{N!}\,\left(U^{(N)}(q)-V^{(N)}(-q)\right)\end{equation}
where the superscript $(N)$ denotes the $N^{th}$ derivative of $U$ or $V$. Hence, matching the two sides of \eqref{PBrel} term by term,
$$h_N(q)=2\left(U^{(N)}(q)-V^{(N)}(-q)\right).$$
\p The aim is to find compatible expressions for the $g,$ $f$ and $h$ sequences of functions of $q$ in such a way as to be able to reconstruct the functions $G$ and $F$ and then seek potentials $U,$ $V$ such that \eqref{PBrel} will be satisfied.
\p In order to achieve this, assuming that none of the coefficients $f_k, g_k$ vanish, notice first that the expressions \eqref{fghrels} can be rearranged to
\begin{equation}\label{fghrelsRearranged}
\frac{h_N}{f_{k+1}g_{N-k+1}}= a_{N-k+1}+b_{k+1},\ \ k=0,\dots, N, \quad N=0, 1,2,\dots\end{equation}
where
\begin{equation}\label{abdefs}a_{N-k+1}=\frac{g'_{N-k}}{g_{N-k+1}},\ \ b_{k+1}=\frac{f'_{k}}{f_{k+1}}.
\end{equation}
\section{Special case: the sinh-Gordon model}
\label{h2k=0}
\p The simplest special case to analyse assumes the two potentials $U,V$ are the same and even. In other words, $U(q)=V(q)=U(-q)$ and hence $h_N(q)=0$ when $N$ is even.
Then it follows directly from \eqref{fghrelsRearranged} and \eqref{abdefs} that $a_l=-b_l$ for all positive integers $l.$ Moreover, $a_l=a_2$ if $l$ is an even positive integer and $a_l=a_1$ if $l$ is odd. Hence relations \eqref{fghrelsRearranged} can be rewritten to involve only the functions $a_1,$ $a_2$. Thus, for instance
\begin{eqnarray}\label{specialcase}
a_2-a_1&=&\frac{h_1}{f_1g_2}=-\frac{h_1}{f_2g_1},\quad N=1,\nonumber\\
a_2-a_1&=&\frac{h_3}{f_1g_4}=-\frac{h_3}{f_2g_3}=\frac{h_3}{f_3g_2}=-\frac{h_3}{f_4g_1},\quad N=3,
\end{eqnarray}
and so on.
Comparing these for different $N$ then implies relations among the $f_l,g_l$ themselves. For example,
$$h_3=\frac{h_1 g_3}{g_1}=\frac{h_1 g_4}{g_2},\quad \Rightarrow \quad \frac{g_3}{g_1}=\frac{g_4}{g_2}.$$
On the other hand $a_1=a_3$ and $a_2=a_4,$ which implies
$$\frac{g_3}{g_1}=\frac{g_2'}{g_0'},\quad\frac{g_4}{g_2}=\frac{g_3'}{g_1'}.$$
It follows that
$$\frac{g_3}{g_1}=\frac{g_3'}{g_1'},$$
which in turn implies
$$g_3=\alpha\, g_1,$$
where $\alpha$ is a constant.
Hence,
$$g_2=\alpha g_0+c,\quad g_3=\alpha\, g_1,\quad g_4=\alpha^2\,g_0+\alpha c,\quad h_3=\alpha\, h_1,$$
where $c$ is also a constant.
Thus, in general:
$$g_{2k+1}=\alpha\, g_{2k-1},\quad g_{2k+2}=\alpha \, g_{2k},\quad h_{2k+1}=\alpha\, h_{2k-1},\quad k=1,\dots$$
implying
$$g_{2k+1}=\alpha^k\,g_1,\quad g_{2k}=\alpha^k\,g_0+\alpha^{k-1}c,\quad h_{2k+1}=\alpha^k\,h_1,\quad k=1,\dots\ .$$
In order to find expressions for members of the sequence of $\{ f_l\}$, remember that $b_1=-a_1$ and $b_2=-a_2,$ which implies
$$f_1=-\frac{f_0^\prime g_1}{g_0'},\quad f_2=-\frac{f_1^\prime g_2}{g_1'}.$$
On the other hand, the first line of \eqref{specialcase} requires
$$f_2=-\frac{f_1g_2}{g_1}.$$
Equating the two expressions for $f_2$ gives
$$\frac{f_1}{f_1'}=\frac{g_1}{g_1'}\quad \implies f_1=\beta \,g_1,$$
where $\beta$ is a constant. It follows that
$$f_0'=-\beta g_0',\quad f_2=-\beta(\alpha\,g_0+c),\quad f_3=\beta \alpha g_1,\quad f_4=-\beta \alpha (\alpha\,g_0+c),$$
and hence
$$f_{2k+1}=\beta\alpha^k\,g_1,\quad f_{2k}=-\beta(\alpha^k\,g_0+\alpha^{k-1}c).$$
Finally, still using \eqref{specialcase},
$$h_1=f_1g_2(a_2-a_1)=\beta\left(g_1g_1'-\alpha g_0g_0'-cg_0'\right).$$
In summary:
\begin{eqnarray*}
g_{2k+1}&=&\alpha^k\,g_1,\quad f_{2k+1}=\beta g_{2k+1},\quad k=0,1\dots\\
f_0&=&-\beta g_0+d,\quad g_{2k}=\alpha^k\,g_0+\alpha^{k-1}c,\quad f_{2k}=-\beta g_{2k},\quad k=1,2\dots,\\
h_1&=&\frac{\beta}{2}\left(g_1^2-\alpha g_0^2-2cg_0\right)', \quad h_{2k+1}=\alpha^k\,h_1,\quad k=0,1\dots.
\end{eqnarray*}
Here, $g_0,$ $g_1$ are undetermined functions of $q$, and $\alpha,$ $\beta,$ $c,$ and $d$ are constants.
Using this data, the reconstructed $F$ and $G$ functions are:
\begin{eqnarray*}
F(p+\lambda,q)&=&-\beta g_0\cosh\left(\frac{\sqrt{\alpha}(p+\lambda)}{2}\right)-\frac{\beta c}{\alpha}\left(\cosh\left(\frac{\sqrt{\alpha}(p+\lambda)}{2}\right)-1\right)\\
&&+\frac{\beta g_1}{\sqrt{\alpha}}\sinh\left(\frac{\sqrt{\alpha}(p+\lambda)}{2}\right)+d\\
G(p-\lambda,q)&=&g_0\cosh\left(\frac{\sqrt{\alpha}(p-\lambda)}{2}\right)+\frac{c}{\alpha}\left(\cosh\left(\frac{\sqrt{\alpha}(p-\lambda)}{2}\right)-1\right)\\
&&+\frac{g_1}{\sqrt{\alpha}}\sinh\left(\frac{\sqrt{\alpha}(p-\lambda)}{2}\right).
\end{eqnarray*}
Then
$$F_\lambda G_q-F_q G_\lambda=\frac{\beta}{2\sqrt{\alpha}}\left(g_1g_1'-\alpha g_0g_0'-cg_0'\right)\sinh(\sqrt{\alpha}\,p)=U(p+q)-U(p-q).$$
To satisfy this relation requires
$$g_1g_1'-\alpha g_0g_0'-cg_0'=A\,\left(e^{\sqrt{\alpha}\,q}-\,e^{-\sqrt{\alpha}\,q}\right),$$
from which it follows that
$$U(u)=\frac{A\beta}{4\sqrt{\alpha}}\,\left(e^{\sqrt{\alpha}\,u}+e^{-\sqrt{\alpha}\,u}\right),
$$
which corresponds to the sinh-Gordon potential.
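\p As a check, with $U(u)=\frac{A\beta}{2\sqrt{\alpha}}\,\cosh(\sqrt{\alpha}\,u)$ the right hand side becomes
$$U(p+q)-U(p-q)=\frac{A\beta}{\sqrt{\alpha}}\,\sinh(\sqrt{\alpha}\,p)\sinh(\sqrt{\alpha}\,q),$$
which coincides with the left hand side, since $A\left(e^{\sqrt{\alpha}\,q}-e^{-\sqrt{\alpha}\,q}\right)=2A\sinh(\sqrt{\alpha}\,q)$.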
\p Note: if $\alpha=0$, then
$$F(p+\lambda,q)=-\frac{\beta c}{2}\left(\frac{p+\lambda}{2}\right)^2-\beta g_0+ d+\beta g_1 \left(\frac{p+\lambda}{2}\right),\quad
G(p-\lambda,q)=\frac{c}{2}\left(\frac{p-\lambda}{2}\right)^2 + g_0+ g_1 \left(\frac{p-\lambda}{2}\right),$$
and
$$F_\lambda G_q-F_q G_\lambda =U(p+q)-U(p-q)=\frac{\beta p}{2} \,(g_1g_1'-c g_0').$$
The only non-zero solution requires
$$g_1g_1'-c g_0'\sim q,$$
and leads to the potential for a free massive scalar field,
$$U(u)=\frac{\beta}{8}\,u^2.$$
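\p As a check, taking $g_1g_1'-cg_0'=q$ gives $F_\lambda G_q-F_qG_\lambda=\frac{\beta pq}{2}$, while
$$U(p+q)-U(p-q)=\frac{\beta}{8}\left((p+q)^2-(p-q)^2\right)=\frac{\beta pq}{2},$$
as required.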
\section{General case}
\label{generalcase}
\p Consider a pair of relations \eqref{fghrelsRearranged} which share one of the ratios appearing on their right hand sides, and subtract them. If this operation is performed for all possible pairs sharing a common ratio,
the following expressions are found
\begin{eqnarray}
f_{k+1}&=&\frac{h_{k+r}g_{s+1}-h_{k+s}g_{r+1}}{g_{r}'g_{s+1}-g_{s}'g_{r+1}},\quad f_{k}'=\frac{h_{k+s}g_{r}'-h_{k+r}g_{s}'}{g_{r}'g_{s+1}-g_{s}'g_{r+1}},\quad r\neq s=0,1,\dots,\quad k =0,1,\dots \nonumber\\
g_{k+1}&=&\frac{h_{k+r}f_{s+1}-h_{k+s}f_{r+1}}{f_{r}'f_{s+1}-f_{s}'f_{r+1}},\quad g_{k}'=\frac{h_{k+s}f_{r}'-h_{k+r}f_{s}'}{f_{r}'f_{s+1}-f_{s}'f_{r+1}},\quad r\neq s=0,1,\dots,\quad k =0,1,\dots \label{generalrelationsfg}
\end{eqnarray}
where formulas in the first and second lines are obtained taking into account `$f$-common' ratios and `$g$-common' ratios, respectively.
\p Consider first only a subset of these expressions. The strategy is to find a solution for the subset and look for a pattern that allows a generalisation of the formulas found
to all $f,$ $g,$ $h$-functions.
Then verify whether these expressions satisfy all \eqref{generalrelationsfg} relations or whether the remaining \eqref{generalrelationsfg} introduce further constraints on the possible solution.
The subset adopted contains the expressions, for which the pairs of indices $(r,s)$ in \eqref{generalrelationsfg} are $(0,1),$ $(1,2),$ $(0,2).$ Combine the two expressions obtained for each pair of indices $(r,s)$
by eliminating the $f$-functions. Then the relations found are:
\begin{eqnarray}\label{GeneralRelations_hg_firstline}
g_{k+1}&=&\frac{\left|\begin{array}{ccc}
h_k & h_{k+1} & 0 \\
h_0 & h_1 & g_1 \\
h_1 & h_2 & g_2
\end{array}\right|}{H_A},\quad
g_{k}'=\frac{\left|\begin{array}{ccc}
h_k & h_{k+1} & 0 \\
h_0 & h_1 & g_0' \\
h_1 & h_2 & g_1'
\end{array}\right|}{H_A},\quad
g_{k+1}=\frac{\left|\begin{array}{ccc}
h_{k+1} & h_{k+2} & 0 \\
h_2 & h_3 & g_2 \\
h_3 & h_4 & g_3
\end{array}\right|}{H_B},\\ \nonumber \\
g_{k}'&=&\frac{\left|\begin{array}{ccc}
h_{k+1} & h_{k+2} & 0 \\
h_2 & h_3 & g_1' \\
h_3 & h_4 & g_2'
\end{array}\right|}{H_B},\quad
g_{k+1}=\frac{\left|\begin{array}{ccc}
h_k & h_{k+2} & 0 \\
h_0 & h_2 & g_1 \\
h_2 & h_3 & g_3
\end{array}\right|}{H_C},\quad
g_{k}'=\frac{\left|\begin{array}{ccc}
h_k & h_{k+2} & 0 \\
h_0 & h_2 & g_0' \\
h_1 & h_3 & g_2'
\end{array}\right|}{H_C},\label{GeneralRelations_hg_secondline}
\end{eqnarray}
where
\begin{eqnarray}\label{FGfunctions}
H_A&=&\left|\begin{array}{cc}
h_1 & h_0 \\
h_2 & h_1
\end{array}
\right|=\left|\begin{array}{cc}
g_0' & g_1 \\
g_1' & g_2
\end{array}
\right|\left|\begin{array}{cc}
f_0' & f_1 \\
f_1' & f_2
\end{array}
\right|=G_AF_A,\nonumber\\
H_B&=&\left|\begin{array}{cc}
h_3 & h_2 \\
h_4 & h_3
\end{array}
\right|=\left|\begin{array}{cc}
g_1' & g_2 \\
g_2' & g_3
\end{array}
\right|\left|\begin{array}{cc}
f_1' & f_2 \\
f_2' & f_3
\end{array}
\right|=G_BF_B,\nonumber\\
H_C&=&\left|\begin{array}{cc}
h_2 & h_0 \\
h_4 & h_2
\end{array}
\right|=\left|\begin{array}{cc}
g_0' & g_1 \\
g_2' & g_3
\end{array}
\right|\left|\begin{array}{cc}
f_0' & f_1 \\
f_2' & f_3
\end{array}
\right|=G_CF_C.
\end{eqnarray}
The assumption is that $H_A$, $H_B$, $H_C$ are different from zero. The cases in which these determinants are zero do not lead to new results. An example of these cases will be discussed in appendix \ref{H_A=0}.
\p Consider, for instance, the expressions involving $H_A$: for $k=0$ and $k=1$ these relations are identically satisfied, and additional information starts to emerge for $k=2.$ Similar
considerations apply to all the other expressions. Then, by expanding the determinants with respect to their $g$-column, the first non-trivial relations from each expression in
\eqref{GeneralRelations_hg_firstline}, \eqref{GeneralRelations_hg_secondline} are:
\begin{equation}\label{noidentities-g}
g_3H_A=g_2\Lambda-g_1\Delta,\quad
g_1H_B=g_2\Gamma-g_3\Delta,\quad
g_2H_C=g_3\Lambda+g_1\Gamma,
\end{equation}
and
\begin{equation}\label{noidentities-gprime}
g_2'H_A=g_1'\Lambda-g_0'\Delta,\quad
g_0'H_B=g_1'\Gamma-g_2'\Delta,\quad
g_1'H_C=g_2'\Lambda+g_0'\Gamma,
\end{equation}
where
\begin{equation*}
\Lambda=\left|\begin{array}{cc}
h_2 & h_3 \\
h_0 & h_1
\end{array}\right|,\quad
\Delta=\left|\begin{array}{cc}
h_2 & h_3 \\
h_1 & h_2
\end{array}\right|,
\quad
\Gamma=\left|\begin{array}{cc}
h_2 & h_1 \\
h_4 & h_3
\end{array}\right|.
\end{equation*}
After some algebra, they lead to
\begin{equation}\label{g_2g_3_relations}
g_3=g_2\frac{\Lambda}{H_A}-g_1\frac{\Delta}{H_A},\quad g_2'=g_1'\frac{\Lambda}{H_A}-g_0'\frac{\Delta}{H_A},
\end{equation}
$$g_2\left(\Gamma H_A-\Lambda\Delta\right)=g_1\left(H_B H_A-\Delta^2\right),\quad
g_2\left(H_CH_A-\Lambda^2\right)=g_1\left(\Gamma H_A-\Lambda\Delta\right),$$
$$g_1'\left(\Gamma H_A-\Lambda\Delta\right)=g_0'\left(H_B H_A-\Delta^2\right),\quad
g_1'\left(H_C H_A-\Lambda^2\right)=g_0'\left(\Gamma H_A-\Lambda\Delta\right),$$
where the compatibility condition reads
$$\left(H_BH_A-\Delta^2\right)\left(H_CH_A-\Lambda^2\right)=\left(\Gamma H_A-\Lambda\Delta\right)^2.$$
Notice that it is possible to write the determinants $\Lambda,$ $\Delta$ and $\Gamma$ as products of $F$ and $G$-determinants \eqref{FGfunctions}. Better still,
these, as well as the $H$-determinants, can be written as products of only $F_A,$ $G_A,$ $G_B$ and $G_C.$ In fact
\begin{equation}\label{FGrelations}
H_A=\frac{F_A}{G_A}\,G_A^2,\quad H_B=\frac{F_A}{G_A}\,G_B^2,\quad H_C=\frac{F_A}{G_A}\,G_C^2,\quad
\frac{F_B}{F_A}=\frac{G_B}{G_A},\quad \frac{F_C}{F_A}=\frac{G_C}{G_A},
\end{equation}
$$\Lambda=\frac{F_A}{G_A}\,G_AG_C,\quad \Delta=\frac{F_A}{G_A}\,G_BG_A,\quad \Gamma=\frac{F_A}{G_A}\,G_CG_B.$$
It follows that
$$H_BH_A-\Delta^2=H_CH_A-\Lambda^2=\Gamma H_A-\Lambda\Delta=0$$
and also
$$H_BH_C-\Gamma^2=\Lambda H_B-\Delta\Gamma=\Delta H_C-\Lambda\Gamma=0,$$
which is equivalent to
\begin{eqnarray}\label{hDeterminant}
\left|\begin{array}{ccc}
h_0 & h_1 & h_2 \\
h_1 & h_2 & h_3 \\
h_2 & h_3 & h_4
\end{array}\right|
=0.
\end{eqnarray}
The expressions \eqref{g_2g_3_relations} become
\begin{equation}\label{g3andg2prime}
g_3=g_2\frac{G_C}{G_A}-g_1\frac{G_B}{G_A},\quad g_2'=g_1'\frac{G_C}{G_A}-g_0'\frac{G_B}{G_A}.
\end{equation}
Finally, from \eqref{hDeterminant} it is possible to infer the following
$$
h_2=h_1\frac{G_C}{G_A}-h_0\frac{G_B}{G_A},\quad
h_3=h_2\frac{G_C}{G_A}-h_1\frac{G_B}{G_A},\quad
h_4=h_3\frac{G_C}{G_A}-h_2\frac{G_B}{G_A}.
$$
It seems that a pattern starts to emerge. In order to explore it, consider the expressions with $H_A$ in \eqref{GeneralRelations_hg_firstline} for $k=3.$ They lead to
\begin{equation}\label{g4andg3prime}
g_4=g_3\frac{G_C}{G_A}-g_2\frac{G_B}{G_A},\quad g_3'=g_2'\frac{G_C}{G_A}-g_1'\frac{G_B}{G_A}.
\end{equation}
On the other hand the last expression with $H_B$ in \eqref{GeneralRelations_hg_firstline} for $k=3$ and the middle expression with $H_A$ for $k=4$ lead to
\begin{equation}\label{h5andg4prime}
h_5=h_4\frac{G_C}{G_A}-h_3\frac{G_B}{G_A},\quad g_4'=g_3'\frac{G_C}{G_A}-g_2'\frac{G_B}{G_A}.
\end{equation}
Then, differentiating expressions for $g_3,$ $g_4$ in \eqref{g3andg2prime}, \eqref{g4andg3prime} and comparing them with expressions for $g_3',$ $g_4'$ in \eqref{g4andg3prime}, \eqref{h5andg4prime}, it is found
$$g_2\left(\frac{G_C}{G_A}\right)'=g_1\left(\frac{G_B}{G_A}\right)',\quad g_3\left(\frac{G_C}{G_A}\right)'=g_2\left(\frac{G_B}{G_A}\right)'.$$
These expressions are satisfied if $G_C/G_A$ and $G_B/G_A$ are constants or if
$$\frac{g_3}{g_2}=\frac{g_2}{g_1}\quad \Rightarrow \quad G_C=G_A\left(\frac{g_2}{g_1}\right)+ G_B\left(\frac{g_1}{g_2}\right)$$
and
$$\frac{g_1}{g_2}=\frac{G_C'G_A-G_A'G_C}{G_B'G_A-G_A'G_B}\quad \Rightarrow \quad G_A\left(\frac{g_2}{g_1}\right)'\left(G_A-G_B\left(\frac{g_2}{g_1}\right)^2\right)=0.$$
Since $G_A\neq 0$ and $(g_2/g_1)'\neq 0$\footnote{In fact, $(g_2/g_1)'= 0$ implies $H_B=0,$ whereas $H_B$ must be different from zero.}, it is found
$$\frac{G_B}{G_A}=\left(\frac{g_2}{g_1}\right)^2,\quad \frac{G_C}{G_A}=2\left(\frac{g_2}{g_1}\right),$$
where the latter is obtained using the expression for $g_3$ in \eqref{g3andg2prime}.
\p Before summarising the results obtained so far, a few words about the $f$-functions are necessary. Because of the symmetry between the $g$ and the $f$-functions, an analysis started
with expressions similar to the ones in \eqref{GeneralRelations_hg_firstline}, \eqref{GeneralRelations_hg_secondline} with the $g$-functions replaced by the $f$-functions, would have led to similar results. Note also the relations $F_C/F_A=G_C/G_A$ and $F_B/F_A=G_B/G_A$ in
\eqref{FGrelations}. Taking all of this into account, the tentative, incomplete solutions are:
\begin{eqnarray}\label{solutionsI_II}
\mbox{Solution A:}\phantom{mmmmmm}\nonumber\\
\frac{G_C}{G_A}&=&\frac{F_C}{F_A}=2\,\xi,\quad \frac{G_B}{G_A}=\frac{F_B}{F_A}=\xi^2,\quad
\xi=\frac{g_2}{g_1},\quad f_2=f_1 \xi,\nonumber\\
g_0'&=&2\frac{g_1'}{\xi}-\frac{g_2'}{\xi^2},\quad f_0'=\frac{f_1'}{\xi}-\frac{f_1\xi'}{\xi^2}\nonumber\\
g_{k+1}&=&2\xi g_{k}-\xi^2g_{k-1},\ \ f_{k+1}=\xi^kf_1,\ \ h_k=2\xi h_{k-1}-\xi^2h_{k-2},\ k=2,3,\dots\nonumber\\
\end{eqnarray}
where $\xi$ is a function of $q$ which cannot be a constant, and
\begin{eqnarray}\label{solutionsI_I}
\mbox{Solution B:}\phantom{mmmmmm}\nonumber\\
\frac{G_C}{G_A}&=&\frac{F_C}{F_A}=a,\ \ \frac{G_B}{G_A}=\frac{F_B}{F_A}=b,
\nonumber\\
g_2&=&a \,g_1-b \,g_0+c,\ \
f_2=a \,f_1-b \,f_0+d,\nonumber\\
g_{k+1}&=&ag_{k}-bg_{k-1},\ \ f_{k+1}=af_{k}-bf_{k-1},\ \ h_k=ah_{k-1}-bh_{k-2},\quad k=2,3,\dots\nonumber \\
\end{eqnarray}
where $a,$ $b,$ $c,$ $d$ are constants.
\subsection{Solution A}
\label{GeneralCaseII}
\p For solution A, the missing functions $h_0$ and $h_1$ can be found using \eqref{fghrels}. Then, the relations for solutions A can be rewritten in simple forms as
\begin{eqnarray}
g_0'&=&\left(\frac{g_1}{\xi}\right)',\quad f_0'=\left(\frac{f_1}{\xi}\right)',\quad
g_{k+1}=\xi^kg_1,\quad f_{k+1}=\xi^kf_1, \quad k=0,1,\dots\nonumber\\
\xi(q)&=&\frac{g_2}{g_1},\quad h_k=\xi^{k}\left(g_1\left(\frac{f_1}{\xi}\right)'+f_1\left(\frac{g_1}{\xi}\right)'+\frac{kf_1g_1\xi'}{\xi^2}\right),\quad k=0,1,\dots,
\end{eqnarray}
where $g_0',$ $g_1,$ $g_2,$ $f_0',$ $f_1$ are free functions of $q.$
These expressions can be used to verify that all \eqref{generalrelationsfg} relations are satisfied. Then, $G$ and $F$ functions can be reconstructed. They are:
$$F(p+\lambda,q)=f_0+\frac{f_1}{\xi}\left(e^{\xi(p+\lambda)/2}-1\right),\quad G(p-\lambda,q)=g_0+\frac{g_1}{\xi}\left(e^{\xi(p-\lambda)/2}-1\right).$$
It follows that
$$F_\lambda G_q-F_q G_\lambda=\frac{e^{p\,\xi }}{2}\left(f_1\left(\frac{g_1}{\xi}\right)'+g_1\left(\frac{f_1}{\xi}\right)'+p\frac{f_1g_1\xi'}{\xi}\right).$$
Given that the function $\xi$ cannot be a constant, there are no potentials $U$ and $V$ such that \eqref{PBrel} is satisfied.
\subsection{Solution B}
\label{GeneralCaseI}
\p Solution B seems to be more complicated to analyse. Similarly to what was done for solution A, it is possible to obtain expressions for $h_0$ and $h_1$ using \eqref{fghrels}. They are:
\begin{eqnarray}
h_0&=&f_0'g_1+f_1g_0',\nonumber\\
h_1&=&g_1f_1'+g_0'(a f_1-b f_0 +d)=f_1g_1'+f_0'(a g_1-b g_0 +c).\label{h_0h_1I}
\end{eqnarray}
The second line provides a constraint. Before investigating this constraint, it is useful to look at the expressions for $h_2$ provided by \eqref{fghrels}, that is:
$$h_2=g_1f_2'+g_0'f_3=f_0'g_3+f_1g_2'=g_2f_1'+g_1'f_2.$$
Using \eqref{solutionsI_I}, it is easy to see that the first two expressions for $h_2$ can be rewritten as $h_2=ah_1-bh_0.$ On the other hand, the third expression becomes
$$h_2=h_1\left(\frac{g_1'}{g_0'}+\frac{f_1'}{f_0'}\right)-h_0\left(\frac{g_1'}{g_0'}\frac{f_1'}{f_0'}\right),$$
which implies
$$\frac{g_1'}{g_0'}+\frac{f_1'}{f_0'}=a,\quad \frac{g_1'}{g_0'}\frac{f_1'}{f_0'}=b.$$
Hence
$$\left(\frac{g_1'}{g_0'}\right)^2-a\left(\frac{g_1'}{g_0'}\right)+b=0$$
with roots
$$\alpha=\frac{a}{2}+\frac{\sqrt{a^2-4b}}{2},\quad \beta=\frac{a}{2}-\frac{\sqrt{a^2-4b}}{2},\quad a=\beta+\alpha, \quad b=\alpha\beta.$$
Then
\begin{equation}\label{g_1f_1}
g_1=\alpha g_0+\gamma,\quad f_1=\beta f_0+\delta,
\end{equation}
where $\gamma$ and $\delta$ are constants.
\p Before going back to the constraint in \eqref{h_0h_1I}, notice that the tentative solution \eqref{solutionsI_I} can be rewritten in a more compact formulation using only the functions
$g_0,$ $g_1,$ $f_0,$ $f_1,$ $h_0$ and $h_1,$ as
\begin{eqnarray}\label{solutionsIg0f0g1f1}
g_{k+1}&=&A_{k}(ag_1-bg_0+c)-A_{k-1}bg_1,\quad f_{k+1}=A_{k}(af_1-bf_0+d)-A_{k-1}bf_1,\nonumber\\
h_{k+1}&=&A_{k}(ah_1-bh_0)-A_{k-1}bh_1,\nonumber\\
A_{k+1}&=&aA_k-bA_{k-1},\quad A_{0}=0,\quad A_1=1,\qquad\qquad k=1,2,\dots.
\end{eqnarray}
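\p As a check of \eqref{solutionsIg0f0g1f1}, $k=1$ gives $g_2=A_1(ag_1-bg_0+c)-A_0\,bg_1=ag_1-bg_0+c$, while $k=2$ gives
$$g_3=A_2(ag_1-bg_0+c)-A_1\,bg_1=ag_2-bg_1,$$
in agreement with \eqref{solutionsI_I}.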
Then, using \eqref{g_1f_1}, and the constant $\alpha,$ $\beta$ instead of $a,$ $b,$ these expressions become
\begin{eqnarray}\label{solutionsIg0f0}
g_{k+1}&=&\alpha(A_{k+1} - A_k \beta)g_0+A_{k+1}\gamma ,\quad f_{k+1}=\beta(A_{k+1} - A_k \alpha)f_0+A_{k+1}\delta,\nonumber\\
h_{k+1}&=&A_{k+1}h_1-\alpha\beta A_{k} h_0,\nonumber\\
A_{k+1}&=&(\alpha+\beta)A_k-\alpha\beta A_{k-1},\quad A_{0}=0,\quad A_1=1,\qquad \qquad k=1,2,\dots,
\end{eqnarray}
and the $h_0$ and $h_1$ functions in \eqref{h_0h_1I} are:
\begin{equation*}
h_0=f_0'g_0\alpha+f_0g_0'\beta+f_0'\gamma+g_0'\delta,\quad
2h_1=(\alpha+\beta)h_0+g_0'\alpha\delta+f_0'\beta\gamma,
\end{equation*}
where the constants of integrations, $c$ and $d,$ have been absorbed into the constants $\gamma$ and $\delta.$
\p Now it is time to analyse the constraint in \eqref{h_0h_1I}. It reads:
\begin{equation}\label{constraintI}
(\alpha- \beta)(\beta g_0'f_0+\alpha g_0f_0')+ f_0'\gamma\alpha-g_0'\delta\beta=0.
\end{equation}
There are two possibilities that will be explored in the next two subsections.
\subsubsection{Solution B$_1$}
\label{GeneralCaseIa}
\p If $\alpha=\beta$ the constraint \eqref{constraintI} simplifies to
\begin{equation}\label{zetaconstantg0f0}
f_0'\gamma=g_0'\delta,\quad \Rightarrow \quad f_0=\frac{\delta}{\gamma}\,g_0+\varepsilon,
\end{equation}
where $\varepsilon$ is a constant and the functions $h_0,$ $h_1$ become
\begin{equation}\label{zetaconstanth0h1}
h_0=g_0\,g_0'\,2\alpha \frac{\delta}{\gamma}+g_0'(2\delta+\alpha \varepsilon),\quad h_1=\alpha (h_0+g_0'\delta).
\end{equation}
Finally, it can easily be noticed that $A_k$ in \eqref{solutionsIg0f0} can be rewritten as $A_k=k\,\alpha^{k-1}$ for all $k.$ Hence, expressions \eqref{solutionsIg0f0} simplify and
the solution B$_1$ is:
\begin{eqnarray}\label{solutionIa}
g_{k}&=&\alpha^k\,g_0+k\,\gamma\,\alpha^{k-1},\quad
f_{k}=\alpha^k\left(\frac{\delta}{\gamma}\,g_0+\varepsilon\right)+k\,\delta\alpha^{k-1},\nonumber \\
h_{k}&=&\alpha^k (h_0+k \,g_0'\delta),\qquad \qquad k=0,1,2,\dots
\end{eqnarray}
where $g_0$ is a free function of $q$ and $\alpha,$ $\delta,$ $\gamma,$ $\varepsilon$ are constants.
The claim is that this solution satisfies all the relations \eqref{generalrelationsfg}. Some details will be provided in appendix \ref{all_relations}.
\p Hence
\begin{equation*}
F(p+\lambda,q)=e^{(p+\lambda)\alpha/2}\left(\frac{\delta}{\gamma}g_0+\varepsilon+\delta\left(\frac{p+\lambda}{2}\right)\right),\ \ G(p-\lambda,q)=e^{(p-\lambda)\alpha/2}\left(g_0+\gamma\left(\frac{p-\lambda}{2}\right)\right),
\end{equation*}
and
$$F_\lambda G_q-F_q G_\lambda=\frac{e^{p\,\alpha}}{2}\left(g_0'g_0 \,2\alpha\frac{\delta}{\gamma}+g_0'(\alpha\varepsilon+2\delta+\alpha \,\delta \,p)\right)=U(p+q)-V(p-q).$$
If $\delta\ne 0$, the presence of the last term proportional to $p$ makes it hopeless to find suitable potentials. However, setting $\delta=0$ the previous expression becomes
$$F_\lambda G_q-F_q G_\lambda=e^{p\,\alpha}\,g_0'\frac{\alpha\, \varepsilon}{2}.$$
This suggests
$$g_0'\sim (e^{\alpha\,q}-e^{-\alpha\,q})\quad \Rightarrow \quad U(u)=V(u)=\frac{\alpha\, \varepsilon}{2}\,e^{\alpha u},$$
or
$$g_0'\sim e^{\alpha\,q}\quad \Rightarrow \quad U(u)=\frac{\alpha\, \varepsilon}{2}\,e^{\alpha u},\quad V=0.$$
In the first case, there is a Liouville field on either side of the defect, while in the second case there is a Liouville field on one side of the defect and a free massless field on the other.
\subsubsection{Solution B$_2$}
\label{GeneralCaseIb}
\p Alternatively, the constraint \eqref{constraintI} can be rewritten as
$$\frac{g_0'}{f_0'}=\frac{\alpha g_0(\alpha- \beta)+\alpha \gamma}{\beta\delta-\beta f_0(\alpha- \beta)}\equiv \zeta,$$
where $\zeta$ is a function of $q.$ This leads to two equations that need to be solved
\begin{equation}\label{systemOfDofEqI}
g_0'=\zeta f_0',\quad g_0=\frac{\zeta \delta\beta-\alpha\gamma}{\alpha(\alpha- \beta)}-\frac{\beta}{\alpha}\,\zeta f_0.
\end{equation}
By differentiating the second equation it is found
$$\frac{f_0'(\alpha^2- \beta^2)}{\beta\delta-\beta f_0(\alpha-\beta)}=\frac{\zeta'}{\zeta},$$
which leads to
\begin{equation}\label{g_0f_0Ib}
f_0=-\frac{\varepsilon\zeta^{-\beta/(\alpha+\beta)}}{\beta(\alpha- \beta)}+\frac{\delta}{(\alpha- \beta)},\quad
g_0=\frac{\varepsilon\zeta^{\alpha/(\alpha+\beta)}}{\alpha(\alpha- \beta)}-\frac{\gamma}{(\alpha- \beta)},
\end{equation}
and
\begin{equation}\label{h_0h_1Ib}
h_0=-f_0'\frac{\beta \gamma}{(\alpha- \beta)}+g_0'\frac{\alpha \delta}{(\alpha- \beta)},\quad
h_1=-f_0'\frac{\beta^2 \gamma}{(\alpha- \beta)}+g_0'\frac{\alpha^2 \delta}{(\alpha- \beta)}.
\end{equation}
Looking at expressions \eqref{solutionsIg0f0}, it can be noticed that
$$A_{k+1}-\alpha A_k=\beta^{k},\quad A_{k+1}-\beta A_k=\alpha^{k},\quad A_k=\left(\frac{\alpha^k- \beta^k}{\alpha- \beta}\right),\qquad k=0,1,\dots$$
Clearly $\alpha\neq \beta,$ which is fine since the case $\alpha= \beta$ has been explored in the previous subsection.
Then the solution B$_2$ is:
\begin{eqnarray}\label{SolutionIb}
f_{k}&=&\beta^{k}f_0+\delta\,\left(\frac{\alpha^k- \beta^k}{\alpha- \beta}\right),\quad
g_{k}=\alpha^kg_0+\gamma\,\left(\frac{\alpha^k- \beta^k}{\alpha- \beta}\right),\nonumber\\
h_k&=&-f_0'\frac{\beta^{k+1} \gamma}{(\alpha- \beta)}+g_0'\frac{\alpha^{k+1} \delta}{(\alpha- \beta)},\qquad \qquad k=0,1,\dots
\end{eqnarray}
with $g_0$ and $f_0$ given in \eqref{g_0f_0Ib}. Once again, all relations \eqref{generalrelationsfg} are satisfied. Details can be found in appendix \ref{all_relations}.
Hence
\begin{eqnarray*}
F(p+\lambda,q)&=&\frac{\delta}{(\alpha- \beta)}\,e^{\alpha(p+\lambda)/2}+\left(f_0-\frac{\delta}{(\alpha- \beta)}\right)e^{\beta(p+\lambda)/2}\\
G(p-\lambda,q)&=&\left(g_0+\frac{\gamma}{(\alpha- \beta)}\right)\,e^{\alpha(p-\lambda)/2}-\frac{\gamma}{(\alpha- \beta)}\,e^{\beta(p-\lambda)/2},
\end{eqnarray*}
\begin{eqnarray}\label{PB_relationIb}
F_\lambda G_q-F_q G_\lambda&=&g'_0\,\frac{\delta\,\alpha}{2(\alpha- \beta)}\,e^{\alpha p}-f'_0\,\frac{\gamma\,\beta}{2(\alpha- \beta)}\,e^{\beta p}\nonumber\\
&=&\frac{\varepsilon\,\zeta'\,\zeta^{-\beta/(\alpha+\beta)}}{2(\alpha-\beta)^2(\alpha+\beta)}\left(\delta\alpha\,e^{\alpha p}-\gamma\beta\,\zeta^{-1}\, e^{\beta p}\right)=U(p+q)-V(p-q).\nonumber\\
\end{eqnarray}
Before looking at the most general solution, it is easy to see that a solution is provided by
$$\zeta\sim e^{(\alpha+\beta)q}\quad \Rightarrow \quad U(u)=\frac{\varepsilon\,\delta\,\alpha}{2(\alpha+\beta)^2}\,e^{\alpha u}\quad V(v)=\frac{\varepsilon\,\gamma\,\beta}{2(\alpha+\beta)^2}\,e^{\beta v}.$$
Notice that this suggests the possibility to have two Liouville potentials on the two sides of the defect with different and arbitrary normalisations.
This is not surprising given that no mass is involved and that a type II defect can be seen as the result of two fused type I defects \cite{cz2010}.
\p The most general solution to \eqref{PB_relationIb} is instead obtained by setting
\begin{eqnarray}\label{zeta_forIb}
\frac{\varepsilon\,\alpha\,\delta}{2(\alpha-\beta)^2(\alpha+\beta)}\,\zeta'\,\zeta^{-\beta/(\alpha+\beta)}&=&A\,e^{\alpha q}-B\,e^{-\alpha q},\nonumber\\
\frac{-\varepsilon\,\beta\,\gamma}{2(\alpha-\beta)^2(\alpha+\beta)}\,\zeta'\,\zeta^{-1-\beta/(\alpha+\beta)}&=&C\,e^{\beta q}-D\,e^{-\beta q},
\end{eqnarray}
whose ratio suggests
$$-\frac{\alpha\,\delta}{\beta\,\gamma}\zeta=\left(\frac{A\,e^{\alpha q}-B\,e^{-\alpha q}}{C\,e^{\beta q}-D\,e^{-\beta q}}\right)\equiv \frac{X(q)}{Y(q)}.$$
Then, the first expression in \eqref{zeta_forIb} can be rewritten as follows
\begin{equation}\label{XY_relation}
c(X'Y-Y'X)X^{n-1}=Y^{n+2}\quad \mbox{with} \quad c\equiv\frac{\varepsilon\,\gamma^{n+1}}{2\,\delta^n\,\beta^2}\frac{n^{3+n}}{(2n+1)^2(n+1)^n},
\end{equation}
where $n=-\beta/(\alpha+\beta).$ This expression has a solution only for $n=1$ and $n=-2$ (the cases $n=0$ and $n=-1$ are excluded since they imply $b=0$ and therefore $H_B=0$). These two solutions lead to the same $U$ and $V$ potentials.
Note in fact that by sending $n$ to $-n-1,$ \eqref{XY_relation} becomes
$$c'(Y'X-X'Y)Y^{n-1}=X^{n+2},$$
where $c'$ is a constant, unimportant for the current discussion. Clearly, if \eqref{XY_relation} has a solution, then this expression has a solution as well and the two solutions are related by the swapping of $\alpha$ and $\beta.$
In the end, setting $n=1,$ the most general solution to \eqref{zeta_forIb} is:
$$\zeta=\frac{18 \beta}{\varepsilon\,\gamma}\left(C\,e^{\beta q}+D\,e^{-\beta q}\right)$$
$$U(u)=C\,e^{\beta u}-\frac{36\,D^2\,\delta\,\beta}{\varepsilon \gamma^2}\,e^{-2\beta u}\quad V(v)=D\,e^{\beta v}-\frac{36\,C^2\,\delta\,\beta}{\varepsilon \gamma^2}\,e^{-2\beta v},$$
which correspond to the Tzitz\'eica potentials. Once again there is freedom in the choice of the exponential coefficients, which reflects the freedom of shifting the fields $u$ and $v$ by arbitrary constants,
as was the case for the sinh-Gordon potential in section \ref{h2k=0}.
Setting $\beta=-1,$ $C=D=2$ and $36\,\delta/(\varepsilon\,\gamma^2)=1/4$ gives
$$U(u)=V(u)=2\,e^{-u}+e^{2u},$$
so that a more familiar form for the Tzitz\'eica potential is recovered.
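As a quick numerical cross-check of this reduction (an illustrative script, not part of the derivation; it assumes the normalisation $36\,\delta/(\varepsilon\,\gamma^2)=1/4$ together with $\beta=-1$ and $C=D=2$):

```python
import numpy as np

# Numerical cross-check that the general Tzitzeica potentials reduce to
# the familiar form 2 e^{-u} + e^{2u}.  Assumed normalisation for this
# sketch: beta = -1, C = D = 2 and k = 36*delta/(epsilon*gamma**2) = 1/4.
beta, C, D, k = -1.0, 2.0, 2.0, 0.25

def U(u):
    # U(u) = C e^{beta u} - [36 D^2 delta beta/(eps gamma^2)] e^{-2 beta u}
    return C * np.exp(beta * u) - k * D**2 * beta * np.exp(-2.0 * beta * u)

def V(v):
    return D * np.exp(beta * v) - k * C**2 * beta * np.exp(-2.0 * beta * v)

def familiar(u):
    return 2.0 * np.exp(-u) + np.exp(2.0 * u)

u = np.linspace(-2.0, 2.0, 201)
assert np.allclose(U(u), familiar(u))
assert np.allclose(V(u), familiar(u))
```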
\p Finally, note that the sinh-Gordon potential is not a solution of \eqref{PB_relationIb}. In fact, it corresponds to the case $\alpha=-\beta,$ which is explicitly excluded since it would imply $a=0$, and therefore $H_C=0.$
\section{Conclusion}
\p This article adds another piece to the complex mosaic that is integrable field theory. That these models are special is a well-known fact.
It turns out that they are the only relativistic field theories able to support a purely transmitting defect,
which is defined by the requirement of both energy and momentum conservation. All of this is achieved by preserving their most distinctive feature: integrability. Somehow, demanding both energy and momentum conservation
singles out the integrable models. Previously known results concerning the sinh-Gordon,
Tzitz\'eica and Liouville models have been recovered.
In addition, it was interesting to see how the possibility to have two differently normalised Liouville models on either side of the defect emerges naturally from the current investigation.
This is a possibility that, though not surprising, was not explicitly considered previously.
\p Nevertheless, the present investigation has a limitation: only models with a single scalar field have been considered. Previously, multi-scalar field theories supporting type I defects have been analysed and found to be the non-affine and affine Toda field models based on the $a_n^{(1)}$ root data \cite{bcz2004,cz2007}. In order to extend the more general investigation to multi-scalar field models, it is useful to borrow an
idea presented recently in \cite{br2017}, and applied successfully there to the $d_r^{(1)}$ affine Toda models, of mixing the type I and type II defects in order
to increase the range of models that support these kinds of defects, and hopefully to show that all integrable Toda models can be accommodated. The most general sewing conditions for the type I defects can be found in
\cite{bcz2004,cz2007} and are:
$$u_x=Au_t+(1-A)v_t-{\cal E}_u,\quad v_x=-Av_t+(1+A)u_t+{\cal E}_v,$$
where the fields $u$ and $v$ are now vectors representing multi component scalar fields and $A$ is an antisymmetric matrix.
On the other hand the sewing conditions for the type II defect look unchanged with respect to the ones seen previously. In fact they are:
$$u_x=\lambda_t- {\cal E}_u,\ \ v_x=\lambda_t+{\cal E}_v,\ \ u_t-v_t =-{\cal E}_\lambda,$$
where $u$, $v$ and $\lambda$ are now vectors.
The mixing idea consists of splitting the space in which the fields live into two pieces. The fields belonging to one part satisfy type I sewing conditions at the defect, while the fields belonging to the other part
satisfy type II sewing conditions. In order to keep track of this, it is convenient to introduce two projection operators, $\Gamma_1$ and $\Gamma_2$, such that $\Gamma_1+\Gamma_2=1, \ \Gamma_k^2=\Gamma_k,\ k=1,2.$
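A minimal numerical illustration of such a pair of complementary projectors (the four-dimensional field space and its split into the first two and last two components are arbitrary choices for this sketch):

```python
import numpy as np

# Two complementary projectors splitting the field space: Gamma1
# projects onto the "type I" subspace, Gamma2 onto the "type II" one.
# The 2+2 split of a 4-dimensional space is an arbitrary example.
n = 4
Gamma1 = np.diag([1.0, 1.0, 0.0, 0.0])
Gamma2 = np.eye(n) - Gamma1

# Defining properties: Gamma1 + Gamma2 = 1 and Gamma_k^2 = Gamma_k.
assert np.allclose(Gamma1 + Gamma2, np.eye(n))
assert np.allclose(Gamma1 @ Gamma1, Gamma1)
assert np.allclose(Gamma2 @ Gamma2, Gamma2)
# The two subspaces are orthogonal: Gamma1 Gamma2 = 0.
assert np.allclose(Gamma1 @ Gamma2, np.zeros((n, n)))
```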
Momentum conservation leads to the following constraints:
\begin{eqnarray*}
&&{\cal E}_{(\Gamma_1 \,q)}+2A{\cal E}_{(\Gamma_1\, p)}=-{\cal P}_{(\Gamma_1\, q)},\quad {\cal E}_{(\Gamma_1\, p)}={\cal P}_{(\Gamma_1\, p)},\\
&&{\cal E}_{(\Gamma_2 \,p)}={\cal P}_{(\Gamma_2 \,\lambda)},\quad {\cal E}_{(\Gamma_2 \,\lambda)}={\cal P}_{(\Gamma_2 \,p)},\\
&& \frac{1}{2}\left({\cal E}_{(\Gamma_2\, q)}{\cal P}_{(\Gamma_2 \,\lambda)}-{\cal E}_{(\Gamma_2\, \lambda)}{\cal P}_{(\Gamma_2\, q)}+{\cal E}_{(\Gamma_1\, p)}{\cal E}_{(\Gamma_1\, q)}\right)=U-V,
\end{eqnarray*}
where the usual definitions $p=(u+v)/2,$ $q=(u-v)/2$ have been used. Note that the subscripts in parentheses indicate derivatives.
It is useful to introduce a new field variable $\xi=-A\Gamma_1 \,q.$ Then the previous constraints become
\begin{eqnarray*}
&&{\cal E}_{(\Gamma_1 \,\xi)}-2{\cal E}_{(\Gamma_1\, p)}=-{\cal P}_{(\Gamma_1\, \xi)},\quad {\cal E}_{(\Gamma_1\, p)}={\cal P}_{(\Gamma_1\, p)},\\
&&{\cal E}_{(\Gamma_2 \,p)}={\cal P}_{(\Gamma_2 \,\lambda)},\quad {\cal E}_{(\Gamma_2 \,\lambda)}={\cal P}_{(\Gamma_2 \,p)},\quad {\cal E}_{(\Gamma_2\, \xi)}=-{\cal P}_{(\Gamma_2\, \xi)},
\end{eqnarray*}
which imply
\begin{eqnarray*}
{\cal E}&=&F(\Gamma_2 (p+\lambda), \Gamma_2 q, \Gamma_1(p+\xi))+G(\Gamma_2 (p-\lambda), \Gamma_2 q, \Gamma_2\xi),\\
{\cal P}&=&F(\Gamma_2 (p+\lambda), \Gamma_2 q, \Gamma_1(p+\xi))-G(\Gamma_2 (p-\lambda), \Gamma_2 q, \Gamma_2\xi)
\end{eqnarray*}
and
\begin{equation}\label{newPBrelation}
{F}_{(\Gamma_2\, \lambda)}{G}_{(\Gamma_2 \,q)}-{G}_{(\Gamma_2\, \lambda)}{F}_{(\Gamma_2 \,q)}-\frac{1}{2}{F}_{(\Gamma_1\, p)}\Gamma_1 A{F}_{(\Gamma_1\, \xi)}-\frac{1}{2}{F}_{(\Gamma_1\, p)}\Gamma_2 A{F}_{(\Gamma_2\, \xi)}=U-V.
\end{equation}
It can be seen that the first two terms on the left-hand side are similar to the terms in the Poisson bracket relation \eqref{PBrel} investigated in the present article. On the other hand, the other two terms take into account a mixing, represented by the fields
$\xi,$ which are not confined to a single subspace. In \cite{br2017}, solutions of an expression similar to \eqref{newPBrelation} have been found. However, a complete set of solutions is still missing.
\section{Introduction} \label{sec:intro}
Stellar activity can manifest as a number of magnetic phenomena, most prominently as starspots and flares. In fully convective stars or cool stars with convective outer envelopes, starspots are dark regions where the local magnetic field suppresses the convection \citep[][and references therein]{ber05,str09}. Flares are the energetic results of magnetic field line reconnection during which there are rapid increases in the stellar flux due to sudden energy release \citep[particularly in the optical and UV; see][and references therein]{pri02}.
The \emph{Kepler} space telescope launched in 2009 with the primary goal of detecting Earth-like planets in Earth-like orbits around Sun-like stars \citep{bor10,koc10}, but the observations have provided a wealth of data for stellar activity studies. For example, \citet{bas10} provided an early look at \emph{Kepler} activity, while \citet{wal11,dav14} focused on the stellar flares. These studies are of particular interest to improving our understanding of stellar activity, as many of the observed stars behave significantly differently than the Sun with larger starspots and with more frequent, as well as stronger, flares \citep[e.g.,][]{mae12,roe13,vid16}. Beyond stellar astrophysics, understanding stellar activity in detail will reveal vital information for the interaction between stars and their planetary systems \citep[e.g.,][]{vid17,roe17a}.
On the Sun, spot groups can be resolved into individual sunspots, but this is not possible for other stars, even with interferometry where a small number of spotted stars can be resolved \citep[e.g.,][]{roe16b,roe17b}. Because of this, the starspots that we see evidence of photometrically are likely to be analogous to sunspot groups. Additionally, on the Sun, the origin of solar flares can be determined, but the limitations of stellar photometry prevent the position of the flare on the stellar disk from being determined.
Here, we investigate whether the stellar flares show a connection with the location of the starspot groups using a collection of stars showing evidence of both starspots and flares. In Section \ref{sec:obs}, we describe the observations used for this study. In Section \ref{sec:analysis}, we describe our method of analysis for the light curves, and in Section \ref{sec:discussion}, we describe our results. In the final section, Section \ref{sec:conclusions}, we conclude. In the Appendix, we include a discussion of the model light curves we used to test our analysis method and those results; a simple model of a spotted, flaring star with flares connected with the spots; and further discussion of subsets of the results.
\section{Observations} \label{sec:obs}
The \emph{Kepler} space telescope observed $\sim 150,000$ stars during its four years of observation \citep{bor10,koc10}. Among those stars were 34,030 main-sequence stars for which \citet{mcq14} found rotation periods based on rotational modulation using an autocorrelation method. \citet{dav16} analyzed the flares of the \emph{Kepler} catalog, finding 4,041 flaring dwarf stars (stars with 100 or more potential flaring events with at least ten of those events above a completeness threshold). The intersection of these two works is a set of 402 stars, which we use as our initial sample.
\citet{dav16} used both long (30-minute exposure) and short (1-minute exposure) cadence data, but here we use only the long cadence data in order to keep our sample homogeneous and because the timescale of stellar rotation is on the order of days; the short-cadence light curves therefore provide no additional information for our science case, as the rotational timescale is much longer than the uncertainties introduced by the light curve sampling.
The stars in this sample are all assumed to be on the main sequence \citep[following][]{mcq14} and range from late-F to mid-M spectral types \citep[based on masses from][]{dav16}.
\section{Analysis} \label{sec:analysis}
This subset of \emph{Kepler} targets represents some of the most active stars in the archive. Here, we analyze the potential correlation between the position of starspots and the occurrence of flares.
We are developing a suite of codes for analyzing stellar activity, particularly with respect to understanding activity in the context of potential planetary hosts, called Stellar Activity for Understanding and Characterizing Exoplanetary Systems (SAUCES)\footnote{SAUCES is currently written in \texttt{IDL} and is available at \url{https://github.com/RMRoettenbacher/SAUCES}. This suite of codes will be expanded with future studies to further investigate stellar activity. \textbf{Codes will be available upon publication.}}.
Here, SAUCES uses the long-cadence \emph{Kepler} light curve\footnote{The \emph{Kepler} light curves have been obtained from the Barbara A.\ Mikulski Archive for Space Telescopes (MAST).} \citep{tho16}. The Simple Aperture Photometry (SAP) long-cadence light curve is the flux in the custom apertures created for the star for 270 integrations spanning 29.424 minutes after the Photometric Analysis module of the \emph{Kepler} Science Operations Center data-processing pipeline has been applied. The Pre-search Data Conditioning (PDC) light curve has had an additional PDC module applied to remove astrophysical signatures in order to better isolate transits and eclipses. The PDC module can eliminate the signatures of starspots, so we use the SAP light curve. For SAUCES, individual data points are removed if either the SAP or PDC flux values are negative or if any of the SAP quality flags are listed.
To remove the systematics inherent in the \emph{Kepler} data while preserving stellar astrophysics, SAUCES applies cotrending basis vectors (CBVs) with code adapted from \texttt{kepcotrend.pro}\footnote{Currently available at \url{http://www.lpl.arizona.edu/~bjackson/idl\_code/kepcotrend.pro}.}, developed by B.\ Jackson and N.\ Thom. The SAP light curves are median-divided to account for the flux jumps across the different quarters of observation.
SAUCES then performs a period search on the CBV-removed, median-divided light curves using a Fast Fourier Transform (FFT) to detect the period. We recognize that there are other period-finding algorithms available \citep[e.g., autocorrelation;][]{mcq14}, but the FFT method lends itself well to understanding if a rotation period is indicative of a spotted surface or a more regular light curve, such as those from pulsations or eclipsing binaries. Because pulsations and eclipsing binaries are periodic with minimal variation from cycle to cycle, the periodogram of such a signal will resemble a Dirac-$\delta$ function at the periods evident in the data. Conversely, starspots are not necessarily as regular---potentially exhibiting evolution (growth or decay) and drift on the surface, on timescales as short as a rotation period. Differential rotation could also cause a periodogram signature that is more complex than Dirac-$\delta$ functions \citep[e.g.,][]{ola03,rei13}. While active longitudes are possible \citep[e.g.,][]{ola06,roe16a}, starspots often form at different latitudes across the stellar surface \citep[e.g.,][]{roe13,vid14,oza18}. Because starspots vary over time (shape, size, and potentially even latitude), the signatures of longer observations can yield an ensemble of peaks around the strongest frequency. If a star has more than one spot at different latitudes evolving over the length of observation, those starspots will result in periodogram peaks at different periods due to differential rotation \citep[e.g., Figure 8 of][]{roe16a}.
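The FFT period step can be sketched on a synthetic, evenly sampled light curve as follows (illustrative Python, not the SAUCES source; the injected period, amplitude, and noise level are arbitrary choices):

```python
import numpy as np

# Illustrative FFT period search on a synthetic, evenly sampled light
# curve, in the spirit of the period step described in the text (not
# the pipeline source).
cadence = 0.0204                      # days, ~Kepler long cadence
t = np.arange(0.0, 90.0, cadence)     # 90-day baseline
p_rot = 3.7                           # injected rotation period [days]
rng = np.random.default_rng(0)
flux = 1.0 - 0.02 * np.cos(2.0 * np.pi * t / p_rot) \
       + 0.001 * rng.normal(size=t.size)

# Power spectrum of the mean-subtracted flux; a single stable spot
# gives one dominant peak, while evolving spots broaden it.
power = np.abs(np.fft.rfft(flux - flux.mean()))**2
freq = np.fft.rfftfreq(t.size, d=cadence)
best_period = 1.0 / freq[np.argmax(power)]
# Recovered to within the frequency resolution of the 90-day baseline.
assert abs(best_period - p_rot) < 0.2
```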
By running a FFT on the \emph{Kepler} light curves of the stars at the intersection of the \citet{mcq14} and \citet{dav16} catalogs, SAUCES can distinguish between stars that belong in the sample of flaring, spotted stars and those that have mistakenly received the classification (e.g., RR Lyrae variables, non-radially pulsating stars, eclipsing binaries) based on the characteristics of the periodogram. Of the 402 stars in the intersection of the \citet{mcq14} and \citet{dav16} samples, SAUCES recategorizes 227 stars as eclipsing binaries or pulsating stars. Additionally, we categorized the light curves as likely starspots or not based on a by-eye identification aimed to eliminate stars with light curves that suggest they are $\gamma$ Doradus or $\delta$ Scuti stars (for examples of these light curves and a detailed discussion, see \citealt{uyt11}).
Here, we will focus only on the 119 stars that both of these methods have designated as spotted. We acknowledge that this is a dramatic reduction from the initial 402 stars in the intersection of the \citet{mcq14} and \citet{dav16} catalogs, but our aims here require a sample of light curves that show evidence of evolving starspots.
SAUCES then locates the local minima in the light curves created by the rotational modulation of starspot groups. The light curve is first smoothed with a Gaussian kernel with a width of $0.1 \times P_\mathrm{rot}$ to remove noise that could create false local minima, where $P_\mathrm{rot}$ is the rotation period estimated by the FFT method mentioned above (with the associated \citet{mcq14} period used as a starting parameter for centering the FFT search to reduce computation time; see Figure \ref{lcs}). Once the light curve is smoothed, local minima are recorded simply by finding the lowest point in moving windows of about 10.5 hours, which was chosen to allow SAUCES to find multiple minima in a starspot group should they be present.
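A sketch of this smoothing and minimum-finding step (illustrative only; the kernel construction and window handling in the actual pipeline may differ):

```python
import numpy as np

# Smooth with a Gaussian kernel of width 0.1 * P_rot, then take the
# lowest point in sliding windows of ~10.5 hours, following the text.
def find_spot_minima(time, flux, p_rot, window_days=10.5 / 24.0):
    dt = np.median(np.diff(time))
    sigma = 0.1 * p_rot / dt              # kernel width in samples
    half = int(4 * sigma)
    x = np.arange(-half, half + 1)
    kernel = np.exp(-0.5 * (x / sigma)**2)
    kernel /= kernel.sum()
    smooth = np.convolve(flux, kernel, mode="same")

    w = max(1, int(window_days / dt))     # sliding-window size in samples
    minima = []
    for i in range(w, smooth.size - w):
        if smooth[i] == smooth[i - w:i + w + 1].min():
            minima.append(time[i])
    return np.array(minima)

# Synthetic one-spot light curve: minima recur once per rotation.
t = np.arange(0.0, 30.0, 0.0204)
p_rot = 3.0
flux = 1.0 - 0.03 * np.cos(2.0 * np.pi * t / p_rot)
minima = find_spot_minima(t, flux, p_rot)
# Consecutive minima are separated by ~one rotation period.
assert np.allclose(np.diff(minima), p_rot, atol=0.1)
```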
\begin{figure*}
\begin{center}
\includegraphics[scale=0.6,angle=90]{Kepler_LCs_CBV_AND_SMOOTH_sample_plot.eps}
\end{center}
\caption{Sample \emph{Kepler} long-cadence light curves (in gray) for four spotted, flaring stars in our sample. Overplotted (in black) are the light curves smoothed with a Gaussian kernel of width $0.1 \times P_\mathrm{rot}$. These smoothed light curves are then used to determine the time of local minima for locating the position of starspots facing \emph{Kepler}. Near several of the minima caused by starspots shown, flares occur, distorting the shape of the smoothed light curve. This effect is accounted for by SAUCES, as described in Section \ref{sec:analysis}. }
\label{lcs}
\end{figure*}
The CBV-corrected light curves are separately run through our flare-finding algorithm FLAre deTection With Ransac Method \citep[FLATW'RM\footnote{FLATW'RM is available at \url{https://github.com/vidakris/flatwrm}.};][]{vid18}.
FLATW'RM uses a machine-learning algorithm to give a robust model of the light curves in order to detect flare events and uses a voting system implemented to keep false positive detections to a minimum.
For further details on the FLATW'RM algorithm, see \citet{vid18}.
FLATW'RM detects flares and reports the times the flare starts and ends, the time of maximum flux, and the maximum percent increase of flux over the light curve around the flare, along with estimated flare energies, either from the raw light curve or by fitting an analytic flare model. Here, we require the events to consist of at least three sequential data points in order to be detected and analyzed by FLATW'RM. We note that the flares found by FLATW'RM (with no restrictions on flux increases) are fewer in number than \citet{dav16} reported, and, in many cases, significantly fewer. There are, however, seven stars for which FLATW'RM identified more events as flares than \citet{dav16}. The discrepancy is potentially due to different spurious events that fit the selection criteria, but are not flares. We account for this discrepancy by focusing our analysis on potential flares with flux increases $\ge 1\%$, while \citet{dav16} identifies flares that are above a $68\%$ completeness threshold.
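The three-sequential-point criterion can be illustrated with a toy sigma-clipping flare finder (FLATW'RM itself uses a RANSAC-based machine-learning fit; only the selection criterion is mimicked here, and all thresholds are arbitrary):

```python
import numpy as np

# Toy flare finder illustrating the ">= 3 consecutive points" criterion
# applied when running FLATW'RM on long-cadence data.  This simple
# sigma-clipping sketch is not the FLATW'RM algorithm.
def find_flare_candidates(flux, n_sigma=3.0, min_points=3):
    quiet = np.median(flux)
    scatter = 1.4826 * np.median(np.abs(flux - quiet))  # robust sigma (MAD)
    above = flux > quiet + n_sigma * scatter
    events = []
    start = None
    for i, flag in enumerate(np.append(above, False)):
        if flag and start is None:
            start = i
        elif not flag and start is not None:
            if i - start >= min_points:
                events.append((start, i - 1))
            start = None
    return events

# Quiet light curve with one injected 4-point flare and one 2-point blip.
flux = np.ones(200) + 0.001 * np.sin(np.arange(200) / 7.0)
flux[50:54] += np.array([0.05, 0.03, 0.02, 0.01])   # long enough: kept
flux[120:122] += 0.05                               # too short: rejected
events = find_flare_candidates(flux)
assert events == [(50, 53)]
```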
SAUCES compares the time of the observed flare maximum and the time of the starspot minimum, where the flux has decreased the most due to the presence of a starspot group. We compute the difference in phase of flare maximum and the phase of starspot minimum, $\Delta \varphi$. Strong flares occurring in or near the center of the light curve minimum will distort the shape of the light curve and will affect the smoothed light curve, shifting the location of the local minimum. In the event that the time of flare maximum is less than 0.40 days from the starspot minimum,
we recalculate the time of the light curve local minimum by shifting one rotation period before and one rotation period after and locating the local minima there. We then use the average of those two local minima times as the time of the local minimum nearest to the flare. In doing so, we assume that the starspot is somewhat stable over three rotation periods \citep[some M dwarfs, e.g.,][show daily but insubstantial variations in starspots]{vid16} and that any changes in the location of the spot in phase are due to differential rotation (a slightly different period for the spot than that assigned to the star) and not due to a drift in latitude or dramatic evolution. If a flare is also located in one of the neighboring rotations, SAUCES readjusts the time of local minimum using the next spot minima (two rotations before the flare event and two rotations after). If there is a flare in one of those minima as well, SAUCES does not continue searching further away because many of the stars in the \emph{Kepler} sample show dramatic evolution on timescales shorter than five rotations. In these cases, SAUCES assigns this last calculation for $\Delta \varphi$. For our sample, this occurs for $1.8\%$ of the flares observed and will not significantly affect our results.
If local minima are reported near a gap in the data ($\ge 0.5$ days), we also apply these same shifts using the surrounding minima to determine $\Delta \varphi$.
It may be the case that a flare starts and peaks more than 0.40 days away from the nearest local minimum, but the flare is of sufficiently high energy that it will affect the light curve for a long enough time after the initial brightening that the tail will significantly alter the shape of the light curve nearby. For flares that end within 0.40 days of the local minimum, we apply the same readjustments to the location of the local minimum as described above.
The strength of a flare dictates the lifetime of the flare, and, therefore, how long it affects the light curve. In the event of an isolated, single flare, predicting this lifetime and the flare's impact on the light curve could be done through the models determined by FLATW'RM. However, we use the same, conservative length of impact for all the flares to account for the possibility that the flares are compound events leading to an incorrect model from FLATW'RM, as the relatively sparse sampling of the long-cadence \emph{Kepler} light curves and the exponential nature of the events can yield potentially unreasonable fits.
Additionally, we only look at the flares that occur when only one resolved starspot group is present on the stellar surface during the rotation in which the flare occurs. With this restriction, we can clearly identify the flares that occur when the single starspot group is facing \emph{Kepler} ($-0.25 < \Delta \varphi < 0.25$) or when it is on the opposite side of the star ($\Delta \varphi > 0.25$ or $\Delta \varphi < -0.25$). We also eliminate flares that have start times within two data points of a gap in the data ($\ge 0.5$ days) or end times within one data point of a gap in the data to further eliminate spurious results. For examples and tests of this method, see Appendices A and B.
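The phase-difference bookkeeping reduces to wrapping $(t_\mathrm{flare}-t_\mathrm{spot})/P_\mathrm{rot}$ into $[-0.5,0.5)$, for example (an illustrative function, not the pipeline code):

```python
import numpy as np

# Delta-phi is the phase of the flare maximum relative to the nearest
# starspot minimum, wrapped to [-0.5, 0.5) so that |Delta-phi| < 0.25
# means the spot group faces the observer.  Names are illustrative.
def phase_difference(t_flare_max, t_spot_min, p_rot):
    dphi = (t_flare_max - t_spot_min) / p_rot
    return (dphi + 0.5) % 1.0 - 0.5   # wrap into [-0.5, 0.5)

p_rot = 3.0
assert phase_difference(10.0, 10.0, p_rot) == 0.0     # at the minimum
assert phase_difference(10.75, 10.0, p_rot) == 0.25   # quarter phase later
assert phase_difference(8.5, 10.0, p_rot) == -0.5     # opposite side
# A flare one full rotation after the minimum wraps back to zero.
assert phase_difference(13.0, 10.0, p_rot) == 0.0
```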
\section{Discussion} \label{sec:discussion}
For the sample of 119 stars, FLATW'RM detected 2447 flares that increase the flux of the star by $1\%$ or more in the light curves of 111 of the stars. We plot the strength of each flare against the phase difference, $\Delta \varphi$, in Figure \ref{strengths}. In Figure \ref{hist}, we plot the number of flares in phase difference bins of $\Delta \varphi = 0.01$ for flares with an increase of at least $1\%$ in flux (with flux increases between $1-5\%$ and $>5\%$, inclusive, for a total of 1972 and 475 flares, respectively). On solar-type stars, flares with increases in brightness of $0.1-1\%$ are considered superflares \citep[e.g.,][]{mae12}. As all of the flares observed increase the overall stellar flux in the \emph{Kepler} bandpass by at least $1\%$, the flares in this sample are dramatically stronger than solar flares across the observed spectral types, with the strongest flares occurring in the lower mass stars.
\begin{figure}
\begin{center}
\includegraphics[scale=0.35,angle=90]{All_stars_flare_flux001-10_jdfix040_endjdfix040_2minmove_1spot_100bins_log_bw_plot.eps}
\end{center}
\caption{Flare strengths as a function of phase difference, $\Delta \varphi$, for the 111 flaring, spotted stars in our sample. Each black dot represents one flare. All flares with flux increases of $1\%$ or more are plotted. Note that the ordinate axis is presented on a logarithmic scale. }
\label{strengths}
\end{figure}
\begin{figure}
\begin{center}
\includegraphics[scale=0.35,angle=90]{All_stars_flare_flux001-10_005-10_jdfix040_endjdfix040_2minmove_1spot_100bins_color_histplot.eps}
\end{center}
\caption{Histogram of the number of flares occurring in bins of phase difference $\Delta \varphi = 0.01$. The orange bins represent the flares that have flux increases between $1-5\%$, and the purple bins represent the flares that have flux increases $>5\%$. Together, the purple and orange regions represent the total number of flares with flux increases $\ge 1\%$ that occur in each phase bin for the 111 flaring, spotted stars in our sample.}
\label{hist}
\end{figure}
We performed a one-sided Kuiper test (an invariant Kolmogorov--Smirnov test appropriate for circular variables)
to determine whether the phase differences between the starspot minima and the flare maxima were drawn from a uniform distribution, as might be expected from the results of \citet{hun12,roe13,doy18} and others, whose observations of active stars suggested that the flares were possibly uniformly distributed.
Differing from previous studies, this work combines the results from 111 stars, indicating that the phase differences between the starspot minima and the flare maxima are not drawn from a uniform distribution ($p \approx 0$ for flares with flux increases $\ge 1\%$ and $p = 0.002$ for flares with flux increases $>5 \%$). In Figure \ref{hist}, there is a distinct peak in the distribution centered around a zero phase difference between the starspot minimum and the flare maximum (note that the starspot groups are in the field of view between $-0.25 \lesssim \Delta \varphi \lesssim 0.25$). This suggests that the flares may be correlated with the starspots such that the magnetic fields causing the large starspot group are more frequently reconnecting to make flares than the magnetic fields elsewhere on the star.
Interestingly, when we look only at the strong flares (only the purple distribution in Figure \ref{hist}), the peak is not nearly as prominent, suggesting that the small flares are more tied to the location of the starspots than the large flares.
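For reference, the Kuiper statistic $V=D^{+}+D^{-}$ underlying such a test can be computed as follows (an illustrative implementation, not the code used for this work; the significance levels quoted above come from the standard asymptotic formula, which is not reproduced here):

```python
import numpy as np

# One-sample Kuiper statistic against a uniform distribution on
# [-0.5, 0.5).  V = D+ + D- is invariant under cyclic shifts, which
# makes it the appropriate KS-type statistic for a circular variable
# such as phase.
def kuiper_statistic(phases):
    x = np.sort((np.asarray(phases) + 0.5) % 1.0)  # map to [0, 1)
    n = x.size
    cdf_hi = np.arange(1, n + 1) / n               # empirical CDF steps
    cdf_lo = np.arange(0, n) / n
    d_plus = np.max(cdf_hi - x)
    d_minus = np.max(x - cdf_lo)
    return d_plus + d_minus

rng = np.random.default_rng(1)
uniform = rng.uniform(-0.5, 0.5, 2000)
clumped = rng.normal(0.0, 0.05, 2000)   # flares clustered near the spot
v_uni = kuiper_statistic(uniform)
v_clump = kuiper_statistic(clumped)
# A phase distribution peaked near the spot minimum gives a much larger
# Kuiper statistic than a uniform one.
assert v_clump > 5 * v_uni
```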
We considered whether this trend would change dependent upon spectral type. In order to investigate this, we divided the sample at approximately the boundary between G and K stars, around $0.78 M_\odot$ \citep{dav16}. This gave a sample of 57 F/G stars and 54 K/M stars. While the K and M stars were more active, more weak flares occur close to the starspot in both the subsample of F/G stars and the subsample of K/M stars (see Appendix C).
Long-term monitoring of the fully convective M-dwarf V374\,Peg (M4V, $P_\mathrm{rot} \approx 0.44$ days) has shown that flares can occur anywhere on the surface and are not necessarily associated with large starspot groups \citep{vid16}, although on that object, stronger flares seemed to be associated with one of the active regions. This is in agreement with the numerical model of \cite{yad15} that suggests that the magnetic field is distributed over the stellar surface in these types of stars, implying a flare distribution independent of rotation phase. Additionally, \citet{haw14} and \citet{lur15} investigated the relationship between starspots and flares on the M dwarfs GJ 1243 (M4, $P_\mathrm{rot} = 0.5927 \pm 0.0002$ days), GJ 1245A (M5, $P_\mathrm{rot} = 0.2632 \pm 0.0001$ days), and GJ 1245B (M5, $P_\mathrm{rot} = 0.709 \pm 0.001$ days). In all three cases, they found that there was no correlation between starspot location and flare occurrence. They suggest that the stars may have polar spots that do not cause periodic changes in the light curve but are associated with flares. Interestingly, the light curves of V374 Peg, GJ 1243, and GJ 1245AB all vary (evolve) very slowly, appearing nearly constant over the years of observation. This is a distinct difference from the stars in our sample, which show surface evolution.
The shorter term study of 34 dwarfs in a field of \emph{K2} by \citet{doy18} reported no evidence of correlation between starspot location and flare timing. While the goal was similar, their sample was significantly different from ours: short-cadence \emph{K2} light curves are used, allowing for the detection of shorter, less-energetic flares; the lengths of observation ($\sim 80$ days) are significantly shorter than for the \emph{Kepler} light curves; $29\%$ would be rejected by our analysis, as they have rotation periods below 1 day, and they were unable to determine the periods of another $48\%$ of their sample given their short light curves; and several of their targets exhibit more than one spot structure at a time. Of the seven of their stars with confirmed periods over $1$ day that could have been included in our sample, only three exhibit flares that lasted long enough to affect three long-cadence \emph{Kepler} data points, as we require for FLATW'RM. The significant differences between our samples are likely to have led to our conflicting conclusions.
Because the \citet{haw14,lur15,vid16,doy18} studies focused on stars that largely would not be included in our sample, we suggest that the correlation we see has potential to be appropriate for certain classes of stars.
The highest-energy flares investigated here seem to be observed regardless of their proximity to the starspot groups, including when the starspot group is out of view. Because these flares are large, energetic events, it is possible that they can be seen not only when the flaring regions are located on the hemisphere facing \emph{Kepler}, but also over the limb. The weaker flares, perhaps, follow the same trend, but are not strong enough to be seen over the stellar limb, appearing to occur more frequently around the starspot groups. An additional explanation could be if some of the events are associated with a polar spot that can be seen regardless of the rotational phase of the object, and which would cause only small rotational variations in the light curve.
In the case of the Sun, flares are observed mainly in bipolar regions between spots of different polarity. Solar flares where reconnection occurs between different active regions are rare. However, it is possible that on stars with stronger magnetic fields such reconnections between active nests are more frequent. This could explain why stronger flares seem to be less associated with the dark spots (as the measured timing of the flare is associated approximately with the middle of the brightening flare loop). On the Sun, in the case of bipolar spots, the leading spots are known to be larger and longer lived than the trailing sunspots \citep[see, e.g.,][]{mur14,zag15}. If this analogy holds for other stars, too, that would mean that the weight of a detected active region lies closer to the leading spot. This could explain the connection between the weaker flares and the starspots and the slight shift to $-0.1$ in phase instead of 0 in Figure \ref{hist}, suggesting that the size of the active region is on the scale of 0.2 in phase \citep[$30^\circ$ diameter, which is not unlikely in photometric spot models, cf.][]{qiu17}. Furthermore, strong flares covering a large area of the stellar disk can also wash out the exact timing of the event (on the Sun, flare area is roughly correlated with flare strength). For a simple model illustrating such an effect, see Appendix B.
\subsection{Caveats} \label{sec:caveats}
A number of complications in the data could move the location of the spot minima causing some of the phase differences to be shifted slightly. In Section \ref{sec:analysis}, we accounted for a number of situations in which the spot minima could be altered. There are, however, a few situations that could continue to be problematic and are not easily accounted for with the SAUCES algorithms. We discuss those issues here.
\begin{itemize}
\item In the case where the adjacent rotations must be considered because the flare occurs near the starspot minimum, if evolution changes the starspot group significantly from one rotation to the next, using the adjacent rotations to estimate the actual time of minimum near the flare could lead to an inaccurate phase difference, $\Delta \varphi$.
\item Low-amplitude spots across the stellar surface or a polar spot will produce changes in the light curve too small to detect, while still carrying active regions into and out of view; the detected starspot minima will then be wrongly associated with the flares.
\item A number of widely-separated spots rotating in and out of view could present in the light curve as a single spot structure \citep[e.g., Figures 3 and 7 of][]{roe17b}, which would shift the minimum associated with the starspot group.
\end{itemize}
\section{Conclusions} \label{sec:conclusions}
The sample of stars included here are at the cross-section of the \citet{mcq14} and \citet{dav16} samples, while also fitting our criterion of showing starspot evolution. Our sample of 119 main-sequence stars with evolving starspots were run through the FLATW'RM algorithm \citep{vid18} to detect the flares and the SAUCES pipeline to identify the starspot minima and compare the timing of the flares and starspots.
We find 2447 flares that increase the observed flux by at least $1\%$ over the quiescent stellar level in 111 of the four-year, long-cadence \emph{Kepler} light curves in our sample. Of these flares, 475 increase the flux of the star by at least $5\%$. We find that, though the flares were not uniformly distributed across the surface, the smaller-energy flares appeared to occur predominantly when the large starspot group was in view. This could be a result of stronger flares being visible both when the flaring region faces \emph{Kepler} and when the region has crossed over the limb, while weaker flares are not strong enough to be seen over the limb and thus appear more correlated with the starspots. Another potential explanation could be the presence of polar starspots that are not detectable with \emph{Kepler} photometry alone.
To further investigate whether the starspot groups and the flares are connected, in future work, we will apply FLATW'RM and SAUCES to a wider variety of stars in the \emph{Kepler} archive. However, efforts are required to further automate isolating the stars with evolving stellar activity.
\section*{Acknowledgements}
The authors thank the useful discussion with L.~van Driel--Gesztelyi, A.N.~Aarnio, J.D.~Monnier, and A.B.~Davis. The authors acknowledge the Hungarian National Research, Development and Innovation Office grants OTKA K-109276, OTKA K-113117, and support through the Lend\"ulet-2012 Program (LP2012-31) of the Hungarian Academy of Sciences. KV is supported by the Bolyai J\'anos Research Scholarship of the Hungarian Academy of Sciences. This work has used K2 data from the proposal number GO12046. Funding for the \emph{Kepler} and \emph{K2} missions is provided by the NASA Science Mission directorate.
This research has made use of the SIMBAD database,
operated at CDS, Strasbourg, France.
\facility{Kepler}
\software{FLATW'RM \citep{vid18}, SAUCES}
\section*{Appendix} \label{sec:appendix}
\subsection*{Appendix A} \label{sec:appendixA}
In order to test the abilities of SAUCES, we performed a series of tests using 1000-day-long model light curves. For each of the four model light curves, we included two starspots on a star with a rotation period of $P_\mathrm{rot} = 3.00$ days. The larger spot was centered on a longitude of $120^\circ$ (0.33 in phase) and the smaller spot was centered on a longitude of $320^\circ$ (0.89 in phase). The spots did not change over time in shape, size, darkness, or location (i.e., the spots do not evolve or show differential rotation with respect to one another). While the stars in our observational sample do exhibit starspot evolution, we restrict that sample to instances with only a single starspot group. Because we look at individual rotations, we do not see the effects of differential rotation. Even in the cases of looking to the rotations before and after to better identify the starspot minimum, differential rotation is not dramatic in these systems.
For each of our four models, we included 50 flares distributed (A) from a uniform distribution, (B) all exactly at longitude $120^\circ$ (phase of 0.33), (C) from a uniform distribution within $\pm 0.05$ of phase 0.33, and (D) from a uniform distribution within $\pm 0.10$ of phase 0.33. The flares are given random strengths with lifetimes generated from the flare strengths, following the description of a single-peak flare of \citet{dav14}.
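The test setup above is easy to reproduce schematically. The following is a minimal illustration, not the actual model code used in this work: the Gaussian-in-phase spot profile, the instant-rise/exponential-decay flare template, and all function names are our simplifying assumptions.

```python
import numpy as np

def spot_modulation(t, p_rot=3.0, phases=(0.33, 0.89), depths=(0.02, 0.008), width=0.08):
    """Flux dips from two fixed, non-evolving starspots (Gaussian in phase)."""
    phi = (t / p_rot) % 1.0
    flux = np.ones_like(t)
    for p0, depth in zip(phases, depths):
        dphi = (phi - p0 + 0.5) % 1.0 - 0.5     # wrapped phase distance
        flux -= depth * np.exp(-0.5 * (dphi / width) ** 2)
    return flux

def inject_flares(t, flux, t_peaks, amps, tau=0.02):
    """Instant rise plus exponential decay: a minimal flare template."""
    flux = flux.copy()
    for tp, amp in zip(t_peaks, amps):
        mask = t >= tp
        flux[mask] += amp * np.exp(-(t[mask] - tp) / tau)
    return flux

rng = np.random.default_rng(0)
t = np.arange(0.0, 1000.0, 1.0 / 48.0)          # ~30 min cadence over 1000 d
flux = spot_modulation(t)
# Test B analogue: every flare peaks at phase 0.33 of the 3-day rotation
t_peaks = (rng.integers(0, 333, size=50) + 0.33) * 3.0
amps = rng.uniform(0.01, 0.10, size=50)
flux = inject_flares(t, flux, t_peaks, amps)
```

Tests A, C, and D follow by drawing the flare phases from the corresponding uniform distributions instead of fixing them at 0.33.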
FLATW'RM recovers 47, 45, 48, and 49 flares, respectively. The deviation from the 50 injected flares is the result of some smaller flares not satisfying the search criteria of FLATW'RM, and of SAUCES ignoring flares that occur within one rotation of the beginning or the end of the light curve, cases that make identifying the nearest minima difficult in observational light curves.
For the user-supplied parameters of FLATW'RM, we required each flare contain at least three data points. For more details on the FLATW'RM algorithm, see \citet{vid18}. We apply the SAUCES algorithms to the test data sets as described in Section \ref{sec:analysis}. We calculate the difference in phases, $\Delta \varphi$, between the starspot minima and the maximum brightness of the flares. We show the results accounting for the effect of the flares on the surrounding light curve in Figure \ref{testsABCD}.
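The phase comparison at the heart of this test can be sketched in a few lines. This is only an illustration of the $\Delta \varphi$ definition, not the SAUCES implementation, and it assumes a single known rotation period:

```python
import numpy as np

def phase_differences(t_flares, t_minima, p_rot):
    """Signed phase of each flare relative to its nearest starspot minimum,
    wrapped into the interval [-0.5, +0.5)."""
    t_flares = np.asarray(t_flares, dtype=float)
    t_minima = np.asarray(t_minima, dtype=float)
    nearest = np.abs(t_flares[:, None] - t_minima[None, :]).argmin(axis=1)
    dt = t_flares - t_minima[nearest]
    return (dt / p_rot + 0.5) % 1.0 - 0.5

# a flare at a minimum gives 0; a quarter rotation after a minimum gives +0.25
dphi = phase_differences([3.0, 3.75], t_minima=[0.0, 3.0, 6.0], p_rot=3.0)
```

Wrapping into $[-0.5, +0.5)$ mirrors the phase-difference convention of the histograms discussed in the text.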
We note that for Tests B--D, none of the flares are close enough to the smaller starspot located at a longitude of $320^\circ$ to cause SAUCES to compare the timing of the flare to that starspot. Because the flares of Test A are drawn from a uniform distribution and can occur anywhere in the light curve with respect to the starspots, it is not surprising that the smaller phase differences contain more flares than the larger phase differences. Unlike in the cases of single spots, the largest possible separation for Test A between a flare and a spot is $\pm 0.28$, which is indeed observed (compared to $\pm 0.50$ when only one spot is present). That phase difference is only possible when the starspot at longitude $120^\circ$ is rotating out of view before the other spot comes into view. Phase differences of $-0.22 \le \Delta \varphi \le +0.22$ are accessible as either spot rotates out of view. Because there are two spots in the model light curve, SAUCES correctly finds only phase differences smaller than $\Delta \varphi \approx 0.28$, the maximum phase difference a flare can be from a local minimum. Therefore, we do not expect a uniform distribution to be returned in our histogram.
We note that Test A differs from our observed sample in that there are two spots present, a case we do not consider in the observed sample but include here to illustrate the difficulty of differentiating the nearest starspot when multiple spots are present. While the second spot is still present for Tests B--D, the flares by definition do not occur close enough to the second spot for SAUCES to calculate $\Delta \varphi$ between the flare and the smaller spot at longitude $320^\circ$.
In Test B, we see that in most cases SAUCES does properly account for flares that occur in the center of the starspot signature. The cases here where the detection is offset from $\Delta \varphi = 0$ are related to flares occurring at rotations such that SAUCES cannot properly determine the minimum (e.g., flares in at least three sequential rotations or three of four sequential rotations), affecting SAUCES' ability to accurately locate the minimum (see Section \ref{sec:analysis} for details on how the algorithm handles flares in local minima). Tests C and D reproduce the distributions which were injected, and are also subject to the same potential problems as the other tests.
While the location of some flares may be skewed when flares happen in sequential minima, we found that this was the case for only $1.8\%$ of the flares in our observational sample. The effect is greater here in Test B because the flares are limited to only one phase and must then occur in $15\%$ of the rotations.
\begin{figure*}
\centering
\begin{subfigure}
\centering
\includegraphics[scale=0.35,angle=90]{Test_Flares_1_flare_flux001-10_jdfix040_endjdfix040_2minmove_100bins_bw_plot_NEW.eps}
\end{subfigure}
\begin{subfigure}
\centering
\includegraphics[scale=0.35,angle=90]{Test_Flares_2_flare_flux001-10_jdfix040_endjdfix040_2minmove_100bins_bw_plot_NEW.eps}
\end{subfigure}
\begin{subfigure}
\centering
\includegraphics[scale=0.35,angle=90]{Test_Flares_3_flare_flux001-10_jdfix040_endjdfix040_2minmove_100bins_bw_plot_NEW.eps}
\end{subfigure}
\begin{subfigure}
\centering
\includegraphics[scale=0.35,angle=90]{Test_Flares_4_flare_flux001-10_jdfix040_endjdfix040_2minmove_100bins_bw_plot_NEW.eps}
\end{subfigure}
\caption{Histograms of the comparison of the phase difference between each model flare and the nearest model starspot. Upper left: For Test A, the flares occurred over a uniform distribution of phases. Upper right: For Test B, the flares occurred when the spot centered on longitude $120^\circ$ faces the observer (phase = 0.33). Lower left: For Test C, the flares occurred within $\pm 0.05$ in phase of when the spot centered on longitude $120^\circ$ faces the observer (phase = 0.33). Lower right: For Test D, the flares occurred within $\pm 0.10$ in phase of when the spot centered on longitude $120^\circ$ faces the observer (phase = 0.33). Each bin represents 0.01 in phase.}
\label{testsABCD}
\end{figure*}
\subsection*{Appendix B} \label{sec:appendixB}
Similarly to the study in Appendix A, here we model a 1000-day long light curve with $P_\mathrm{rot} = 3.00$ days. Two spots are again present, but they are now located adjacent to each other at the same latitude. The ``leading'' spot, or the first spot to appear over the limb as the star rotates, is defined to have a radius of $20^\circ$ on the surface of the star. The ``trailing'' spot is assigned a radius of $15^\circ$ on the surface of the star. The proximity of the spots and the difference in their sizes creates a light curve shape where there is only one local minimum, when the center of the larger spot is facing the observer.
With this model, we aim to investigate the potential of a starspot group with a leading-trailing pair of starspots (like that described in Section \ref{sec:discussion}). The leading spot is slightly larger, and the location at which the flare occurs varies depending on the test.
In Test E, the flares are uniformly distributed across the light curve. In Test F, the flares occur only when the point between the starspots is facing the observer. In Tests G and H, the flares occur within $\pm 0.05$ and $\pm 0.10$ in phase of when the point between the starspots is facing the observer. These phase difference limits are chosen because flares occurring within this range will still be strongly associated with the starspots (the sum of the starspots' radii spans approximately 0.1 in phase). With Tests F--H, the over-density of flares in our observed sample near $\Delta \varphi = -0.1$ is recreated (at a different phase, dependent upon the spot characteristics), suggesting that such a scenario of starspot groups in an active nest is possible, as described in Section \ref{sec:discussion}.
Each light curve included 50 flares, of which 48, 44, 49, and 48 were recovered by FLATW'RM. Again, this deviation from 50 flares has the same cause as in Appendix A: flares can go undetected if they are weak, of short duration, or occur too close to the beginning or end of the light curve. The results of applying the SAUCES algorithms are shown in Figure \ref{testsEFGH}.
While no single star in our sample exhibits the features displayed in these tests, the combination of all stars in the sample does show a correlation between the starspots and the flares with flux increases between $1\% - 5\%$. The correlation, as well as the particular over-density near $\Delta \varphi = -0.1$, may be explained by many stars exhibiting behavior similar to that shown in Tests F--H.
\begin{figure*}
\centering
\begin{subfigure}
\centering
\includegraphics[scale=0.35,angle=90]{Test_Flares_1_flare_flux001-10_jdfix040_endjdfix040_2minmove_100bins_bw_plot_1520.eps}
\end{subfigure}
\begin{subfigure}
\centering
\includegraphics[scale=0.35,angle=90]{Test_Flares_2_flare_flux001-10_jdfix040_endjdfix040_2minmove_100bins_bw_plot_1520.eps}
\end{subfigure}
\begin{subfigure}
\centering
\includegraphics[scale=0.35,angle=90]{Test_Flares_3_flare_flux001-10_jdfix040_endjdfix040_2minmove_100bins_bw_plot_1520.eps}
\end{subfigure}
\begin{subfigure}
\centering
\includegraphics[scale=0.35,angle=90]{Test_Flares_4_flare_flux001-10_jdfix040_endjdfix040_2minmove_100bins_bw_plot_1520.eps}
\end{subfigure}
\caption{Histograms of the comparison of the phase difference between each model flare and the nearest model starspot for a leading-trailing spot configuration. Upper left: For Test E, the flares occurred over a uniform distribution of phases. Upper right: For Test F, the flares occurred when the point between the starspots is facing the observer. Lower left: For Test G, the flares occurred within $\pm 0.05$ in phase of when the point between the starspots is facing the observer. Lower right: For Test H, the flares occurred within $\pm 0.10$ in phase of when the point between the starspots is facing the observer. Each bin represents 0.01 in phase.}
\label{testsEFGH}
\end{figure*}
\subsection*{Appendix C} \label{sec:appendixC}
The stars in our sample range in spectral type from late-F to early-M, according to masses given by \citet{dav16}. Here, we cut the distribution in mass at $0.78 \ M_\odot$, approximately the mass boundary between G and K stars. Our objective was to determine whether or not the distinct distribution present in Figure \ref{hist} would be associated with only certain masses of main sequence stars.
In Figure \ref{FGandKM}, the histograms for F/G stars (left) and K/M stars (right) both show a concentration of flares occurring near the starspots for flares with flux increases between $1\%$ and $5\%$. While the peak is stronger for the late-type stars, more flares occurred in that subset of light curves, potentially strengthening the signal. The phenomenon of these weaker flares being associated with the starspots is therefore observed across the spectral types of convective-envelope, main-sequence stars in our sample.
\begin{figure*}
\centering
\begin{subfigure}
\centering
\includegraphics[scale=0.35,angle=90]{All_stars_flare_flux001-10_005-10_jdfix040_endjdfix040_2minmove_1spot_100bins_FGonly_color_histplot.eps}
\end{subfigure}
\begin{subfigure}
\centering
\includegraphics[scale=0.35,angle=90]{All_stars_flare_flux001-10_005-10_jdfix040_endjdfix040_2minmove_1spot_100bins_KMonly_color_histplot.eps}
\end{subfigure}
\caption{Histogram of the number of flares occurring in bins of phase difference $\Delta \varphi = 0.01$. The color scheme follows that of Figure \ref{hist}. Left: Flares occurring on the late-F and G-type stars in the sample. The increased number of flares in the phases near the starspots is weakly evident. Right: Flares occurring on the K and early-M-type stars in the sample. The lower-energy (orange) flares are more numerous in the phases closely surrounding the starspots.}
\label{FGandKM}
\end{figure*}
\section{Introduction}
Conclusive evidence exists for black holes in the stellar-mass
($\sim 3-30~M_\odot$) and supermassive (SMBH; $\sim
10^6-10^{10}~M_\odot$) ranges. This evidence comes from direct
measurements of masses from orbital motion in, respectively,
binaries and galactic nuclei. In contrast, the range between
these masses does not yet provide clear dynamical evidence for
black holes of intermediate mass (IMBHs). This is essentially
because candidates are rare (hence the nearest possible IMBH in
a binary is not easily observed) and have small radii of
influence (hence SMBH-like observations of nearby stars are also
challenging). There are, however, numerous indirect suggestions
of the $\sim 10^{2-4}~M_\odot$ mass range from the high fluxes
of ultraluminous X-ray sources (ULXs; see \cite{Fab89,CM99}),
from their relatively low thermal temperatures
\cite{M03,M04a,M04b}, and from aspects of the dynamics of
globular clusters \cite{vdM02,GRH02,NGB08}. See
\cite{MillerColbert04} for a recent summary of the data. The
lack of definitive dynamical evidence means that alternate
scenarios have also been proposed for ULXs, including beamed
emission \cite{Rey97,Ki01} and super-Eddington emission
\cite{RB03,Beg06}.
Here we assume that IMBHs exist and explore various ways in
which their interactions could lead to gravitational radiation
detectable in the $\sim 10^{-4}-10^{-2}$~Hz frequency range
of the {\it Laser Interferometer Space Antenna} (LISA).
In \S~2 we discuss the broader context of IMBHs and proposed
formation mechanisms. We also give basic formulae for the
amplitude and frequency of gravitational waves from binaries,
showing why IMBHs could be bridge sources between space-based
and ground-based detectors. In \S~3.1 we evaluate the prospects
for IMBH-IMBH mergers and what they might tell us. In \S~3.2 we
discuss IMBH-SMBH mergers and their prospects as uniquely
precise testbeds for strong gravity predictions of general
relativity. We present our conclusions in \S~4.
\section{Context and formation of IMBHs}
Most formation scenarios of supermassive black holes propose
that they pass through a life stage in which they are IMBHs.
Therefore, study of IMBHs can yield insight into the early
formation of structure in the universe. This is particularly
true if the formation of SMBH seeds is in part due to interactions
in dense stellar clusters, as this is a leading candidate for
how IMBHs form in the current universe. We now discuss proposed
formation mechanisms and their implications.
\subsection{Suggested formation mechanisms for IMBHs}
The reason for setting the lower limit on IMBHs at $\sim 100~M_\odot$
is that this appears to be in excess of the maximum black hole mass
that can form from a solitary star in the current universe
(see \cite{MR01} for a discussion in the context of IMBHs).
IMBH existence therefore requires new formation scenarios.
The three basic ideas are:
\begin{itemize}
\item The first generation of stars had negligible metallicity.
This reduces radiative opacity and thus the strength of radiation
driven winds, hence stars can start more massive and retain more
of their mass than current stars \cite{MR01}.
\item In young massive stellar clusters that have relaxation times
for the most massive stars that are less than the lifetimes of those
stars (about 2.5~million years), mass segregation leads to collisions
and mergers of massive stars. If the combined objects do not have
catastrophic wind mass loss (see \cite{Belkus07}), they can in principle
accumulate up to thousands of solar masses and might then become
black holes with similar masses \cite{E01,PZ02,PZ04,G04}.
\item If a seed black hole with a mass of more than $\sim 200~M_\odot$
is formed by the previous process in a dense stellar cluster, subsequent
binary-single and binary-binary interactions can allow it to merge
and accumulate mass without being vulnerable to ejection by either
three-body kicks or recoil from asymmetric gravitational wave
emission \cite{MH02a,MH02b,Gultekin2004,Gultekin2006,OLeary2006,OLeary2007}.
\end{itemize}
Of special interest to LISA observations is that some simulations
suggest that if the initial binary fraction in a stellar cluster
exceeds $\sim 10$\% then more than one IMBH can form in that cluster
(\cite{G04}; but see \cite{Belkus07}).
We will explore the consequences of this in \S~3.1.
In addition, since massive stellar clusters are often found near
the nuclei of galaxies that are interacting actively, the clusters
themselves can sink to the centers of galaxies where their IMBHs
spiral into the galaxy's supermassive black hole. We discuss this
in \S~3.2.
The last piece of basic physics has to do with gravitational
waves themselves. For a circular binary with a total mass $M=m_1+m_2$
and a reduced mass of $\mu=m_1m_2/M$ at a distance $d$ from us small
enough that redshifts are unimportant,
orbiting at a frequency $f_{\rm orb}$ so that the dominant gravitational
wave frequency is $f_{\rm GW}=2f_{\rm orb}$, the angle-averaged
dimensionless strain amplitude that we observe is
\begin{equation}
h=6\times 10^{-21}\left(f_{\rm GW}/1~{\rm Hz}\right)^{2/3}
\left(M_{\rm ch}/10^3~M_\odot\right)^{5/3}\left(1~{\rm Gpc}/d\right)\; .
\end{equation}
Here $M_{\rm ch}$ is the ``chirp mass'', defined as
$M^{5/3}_{\rm ch}\equiv \mu M^{2/3}$. The maximum frequency
of orbits that evolve relatively slowly is often approximated
by the frequency at the innermost stable circular orbit (ISCO),
although technically the ISCO concept is strictly valid only
when there are no mechanisms for angular momentum loss (i.e.,
for test particles in geodesic orbits). For a nonrotating
spacetime, this maximum frequency is
\begin{equation}
f_{\rm GW,max}({\rm ISCO})=4.4~{\rm Hz}(10^3 M_\odot/M)\; .
\end{equation}
From this expression we see that IMBHs in the entire mass
range of $M\sim 10^{2-4}~M_\odot$ are potential LISA sources,
but also that towards the low end of the masses they might
be sources for ground-based detectors, which focus on
$f_{\rm GW}>10~{\rm Hz}$.
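Equations (1) and (2) are straightforward to evaluate numerically. The sketch below simply encodes the scalings as printed above (the function names are ours), illustrating why a $10^3~M_\odot$ binary sits between the LISA and ground-based bands:

```python
def chirp_mass(m1, m2):
    """Chirp mass from M_ch^(5/3) = mu * M^(2/3) (all masses in M_sun)."""
    m = m1 + m2
    mu = m1 * m2 / m
    return (mu * m ** (2.0 / 3.0)) ** (3.0 / 5.0)

def strain(f_gw_hz, m_ch_msun, d_gpc):
    """Angle-averaged strain amplitude of equation (1)."""
    return 6.0e-21 * f_gw_hz ** (2.0 / 3.0) * (m_ch_msun / 1.0e3) ** (5.0 / 3.0) / d_gpc

def f_isco_hz(m_total_msun):
    """Maximum GW frequency of equation (2), nonrotating case."""
    return 4.4 * (1.0e3 / m_total_msun)

# an equal-mass 500+500 Msun binary: M_ch ~ 435 Msun, ISCO at 4.4 Hz --
# above the LISA band quoted in the text but below the >10 Hz ground-based band
mch = chirp_mass(500.0, 500.0)
```

Raising the total mass to $10^4~M_\odot$ pushes the ISCO frequency down to 0.44~Hz, keeping the late inspiral within reach of space-based detectors only.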
\section{Gravitational waves from mergers of IMBHs with other black holes}
We will focus on mergers of IMBHs with other IMBHs or with SMBHs,
because the rate at which LISA will detect the coalescence of
stellar-mass black holes with IMBHs is negligibly low \cite{Will04},
although such mergers may be detected by next-generation ground-based
instruments such as Advanced LIGO or Advanced Virgo \cite{Man08}.
\subsection{IMBH-IMBH mergers}
As demonstrated by \cite{Fre06}, if more than one IMBH forms
in a young stellar cluster, as may happen if the primordial binary
fraction in the cluster exceeds $\sim$10\% \cite{G04}, the subsequent
coalescence of the IMBHs can be visible out to large distances.
Specifically, \cite{Fre06} found that a comparable-mass binary with a
total rest-frame mass of $1000~M_\odot$ would have a coalescence
visible with LISA out to a redshift $z\approx 1$. Given that
the star formation rate increases dramatically from $z=0$ to
$z=1$, if these mergers occur within a few tens of millions
of years after the formation of the clusters then LISA observations
of such events could be unique probes of star formation and
cluster dynamics.
To explore this further, \cite{Ama08}
performed detailed N-body simulations of two IMBHs in a cluster.
They started with equal-mass IMBHs (either $300~M_\odot$ or
$1000~M_\odot$ each) at a separation of 0.1~pc, in a cluster
of $3.2\times 10^4$ stars either all at $1~M_\odot$ or selected
from a Kroupa \cite{Kroupa00b} initial mass function. They then followed
the inspiral of the IMBHs until the black holes eventually formed a very
hard binary. At that point, they passed off the properties of
the binary to a three-body scattering program, where the speeds
and masses of the interacting stars were drawn from the appropriate
external distribution. Their three major conclusions are:
\begin{itemize}
\item In the simulations, and by extension in real clusters with
more stars, the IMBH merger has only a minor effect on the cluster
in general. Superficially it might seem that the effect could be
substantial, because the binding energy of an IMBH binary at the
point of the last scattering with a star can exceed the total
binding energy of the cluster by a significant factor. Given that
interactions with stars are what harden the binary to this point, it
might thus appear that the cluster could be disrupted. The reason
that this does not happen is that the energy extracted from
three-body binary hardening is only put back into the cluster if the
star emerges with a speed less than the escape speed of the
cluster. In contrast, high-speed ejections share very little energy
with the cluster because the relaxation time (which is needed to
alter energies significantly) is orders of magnitude greater than
the time to leave the cluster. Since the binding energy of the IMBH
binary is much less than the binding energy of the cluster at the
point when stars are ejected from the cluster, the net effect on the
cluster is small.
\item The duration of the merger is indeed tens of millions of years
or less, and as expected is dominated by the phase in which the
binary is hard (because the cross section for interactions is less
than when the binary is wider). In none of the dozens of runs done
in \cite{Ama08} did this phase take more than $10^8$~years.
As a result, these mergers will indeed serve as good snapshots of
star formation.
\item In all other categories of comparable-mass black hole mergers,
the expected eccentricity upon entry into the frequency band of the
relevant detector (LISA for supermassive black holes; Virgo or LIGO
for stellar-mass holes) is likely to be so small as to be basically
negligible (see \cite{Ama07} for a discussion;
for potential exceptions for stellar-mass holes, see
\cite{Wen03} and \cite{OLeary2008}). For IMBH-IMBH coalescences in young
clusters, however, \cite{Ama08} find that at the LISA
lower frequency limit of $f_{\rm GW}\approx 10^{-4}$~Hz the eccentricity
is commonly $e\sim 0.1-0.3$. As they demonstrate, this is detectable
and will have to be included in algorithms that characterize the
inspiral.
\end{itemize}
In Figure~1 we show two sample evolutions of
eccentricity versus semimajor axis for runs similar to those of
\cite{Ama08}. The systematic increase in eccentricity
down to separations of $a\sim$few AU is consistent with the results
of \cite{Quinlan96}, who found that a massive binary interacting with
much less massive stars commonly increases its eccentricity. The
decrease at smaller semimajor axes is the result of circularization
by gravitational radiation dominating over increases in eccentricity
caused by three-body interactions.
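The circularization by gravitational radiation in that competition follows the standard orbit-averaged Peters (1964) equations. The sketch below integrates those equations alone for an equal-mass $1000~M_\odot$--$1000~M_\odot$ binary; the three-body hardening term is deliberately omitted, and the starting separation and step control are our illustrative choices, not parameters of the simulations cited above.

```python
G = 6.674e-11       # m^3 kg^-1 s^-2
C = 2.998e8         # m s^-1
MSUN = 1.989e30     # kg
AU = 1.496e11       # m

def peters_rhs(a, e, m1, m2):
    """Orbit-averaged da/dt and de/dt from Peters (1964), SI units."""
    pref = G**3 * m1 * m2 * (m1 + m2) / (C**5 * (1.0 - e**2) ** 3.5)
    dadt = -(64.0 / 5.0) * pref / a**3 * (1.0 + 73.0 / 24.0 * e**2 + 37.0 / 96.0 * e**4)
    dedt = -(304.0 / 15.0) * e * pref * (1.0 - e**2) / a**4 * (1.0 + 121.0 / 304.0 * e**2)
    return dadt, dedt

# equal-mass 1000 + 1000 Msun binary, pure GW-driven decay from a = 1 AU
m1 = m2 = 1000.0 * MSUN
a, e = 1.0 * AU, 0.3
while a > 0.05 * AU:
    dadt, dedt = peters_rhs(a, e, m1, m2)
    dt = 1.0e-3 * a / abs(dadt)     # shrink a by ~0.1% per step
    a += dadt * dt
    e = max(e + dedt * dt, 0.0)
# by a ~ 0.05 AU the orbit has largely circularized (e well below 0.1)
```

With only the gravitational-radiation term present, the eccentricity falls steadily; in the full simulations this decay must overcome the eccentricity pumping from three-body interactions, which is why the turnover in Figure~1 occurs only once the pericenter is small.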
\begin{figure}
\resizebox{\hsize}{!}{\includegraphics{aenew.ps}}
\caption{Two sample tracks (black lines) in semimajor axis versus eccentricity
for a $1000~M_\odot-1000~M_\odot$ binary starting at $a=400$~AU
and $e=0.5$ in a stellar cluster of total number density
$n=3\times 10^5$~pc$^{-3}$ with a Kroupa \cite{Kroupa00b} mass
function. The dashed line shows the semimajor axis at which the
dominant gravitational wave frequency is $f_{\rm GW}=10^{-4}$~Hz,
the lower end of the LISA sensitivity band.
The convergence at small semimajor axis occurs because
for these cluster and binary parameters inspiral becomes dominated
by gravitational radiation when the pericenter distance becomes
less than about 0.1~AU. This figure is similar to figures in
\cite{Ama08}.\label{fig:binary}}
\end{figure}
\subsection{IMBH-SMBH Mergers}
When IMBHs merge with the central supermassive black holes in
galaxies, the resulting gravitational waveform contains uniquely
precise information about the spacetime around a rotating black
hole. The basic reason is that the mass ratio is extreme enough
(typically $\sim 1000:1$) that approximate techniques can be used
to follow the inspiral, without the need to resort to
computationally expensive numerical relativity simulations. These
are therefore similar to the more familiar extreme mass ratio
inspirals (EMRIs), in which a stellar-mass black hole of typically
$\sim 10~M_\odot$ coalesces with a supermassive black hole
(see \cite{Ama07} for a review).
However, when the secondary is an IMBH with a mass of $\sim
1000~M_\odot$ the amplitude of the waves at a given luminosity
distance is two orders of magnitude greater than for a standard
EMRI, hence the waveform and comparisons of it with general
relativistic predictions can be obtained with far greater
precision.
\cite{M05} showed that for favorable cases (e.g.,
$M_{\rm SMBH}=10^{5.5}~M_\odot$, $M_{\rm IMBH}=10^3~M_\odot$,
and an angular diameter distance of 3~Gpc) the signal to noise
near the end of the inspiral can be great enough that the source
would be detectable with LISA in a standard power density
spectrum, without the need for matched filtering. More specifically,
if one took a power density spectrum over the optimal period of
time, in which the frequency drift during this period is equal to
the frequency resolution of the spectrum (which equals the reciprocal
of the period of integration), then the signal to noise of the
single peak occupied by the inspiral during this period would
be $S/N=$tens. As a result, it would be possible to string together
such detections and connect the phases of the inspiral, thus
building up an empirical waveform without the need to assume
that general relativity is correct. Hence even a single detection
of such an event would provide a uniquely powerful test of the
properties of massive spinning black holes.
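The optimal-period criterion above (frequency drift $\dot f\,T$ equal to the resolution $1/T$) fixes the segment length at $T_{\rm opt}=\dot f^{-1/2}$. A sketch, assuming the leading-order quadrupole chirp rate for $\dot f$ (the text does not specify one) and a chirp mass of roughly $10^4~M_\odot$, appropriate to the favorable $10^{5.5}+10^3~M_\odot$ case:

```python
import math

G = 6.674e-11       # m^3 kg^-1 s^-2
C = 2.998e8         # m s^-1
MSUN = 1.989e30     # kg

def fdot_gw(f, m_ch_msun):
    """Leading-order (quadrupole) chirp rate df/dt for a circular binary."""
    x = G * m_ch_msun * MSUN / C**3         # chirp mass in seconds
    return (96.0 / 5.0) * math.pi ** (8.0 / 3.0) * x ** (5.0 / 3.0) * f ** (11.0 / 3.0)

def t_optimal(f, m_ch_msun):
    """Integration time T at which the drift fdot*T equals the resolution 1/T."""
    return fdot_gw(f, m_ch_msun) ** -0.5

# near f = 1 mHz the optimal segment for this system is on the order of days
T = t_optimal(1.0e-3, 1.0e4)
```

As the inspiral chirps to higher frequency, $\dot f$ grows and the optimal segments shorten, which is why many such segments must be strung together to build the empirical waveform.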
The rate of IMBH-SMBH mergers depends on various details of the
dynamics of IMBHs in dense stellar clusters. The basic idea is
that although IMBHs not in galactic centers cannot by themselves
spiral to the core within a Hubble time (because dynamical friction
on such light objects is too weak), if they form in massive
clusters within tens to possibly hundreds of parsecs of the center
then the cluster will sink as a unit within a few billion years.
The cluster itself is eventually disrupted by the tidal field of
the galaxy and supermassive black hole, leaving the now solitary
IMBH much closer than before and able, in principle, to spiral in
to merger (see \cite{M05}). This has been proposed as one
mechanism to shepherd the young, massive S stars observed near the
center of our Galaxy \cite{Han03}. \cite{PZ06,Mat07} examined this
process using N-body simulations, and concluded that the rate of
LISA detections of these events could be tens per year, depending
on how efficiently IMBHs form in clusters.
More recently, \cite{Koch08} explored
additional effects, such as the interactions of IMBHs with themselves
around an SMBH, assuming that the full coalescence process takes
longer than the time needed for new clusters and IMBHs to sink to
the center. They found that such encounters tend to eject one IMBH
(although slowly, so that it will sink back in), and leave the
other in an eccentric orbit that decays readily by emission of
gravitational radiation. Regardless of the properties or rates
of such encounters, any detected by LISA will be valuable probes
of strong gravity.
\section{Conclusions}
The evidence for IMBHs is currently strong but circumstantial,
pointing to the need for dynamical mass measurements of binary
motion that will establish their existence definitively.
Nonetheless, their likely formation mechanisms and dynamical
interactions link them to many exciting topics in the current
and early universe. As gravitational wave sources they will be
unique in several respects: as bridge objects between space-based
and ground-based detectors, as comparable-mass binaries with
palpable eccentricities, and as the events that potentially will yield
the most precise tests of general relativity. Many explorations
need to be done, but current results are highly encouraging for
their study.
\ack
We thank Tal Alexander, Pau Amaro-Seoane, Mike Gill, Doug Hamilton,
Clovis Hopman, Vanessa Lauburg, Fred Rasio, Derek Richardson, and
Michele Trenti for stimulating conversations. This work was supported
in part by NASA ATFP grant NNX08AH29G.
\bigskip
\section{Related Work}
In this section, we review the prior work for multivariate time series prediction. We first introduce some of the known classical statistical methods and then present some of the state-of-the-art deep learning methods for time series modeling.
Time series prediction has been well studied with various statistical modeling methods for both univariate and multivariate data. For univariate time series modeling, the Auto Regressive Integrated Moving Average (ARIMA) model is often applied using the Box-Jenkins modeling process \cite{box2015time}. Other variants of the ARIMA models have been proposed to model temporal patterns of the data, for example, seasonality (SARMA) and coefficient-dependent periodicity (PARMA) \cite{hannan1955test}. Furthermore, methods combining ARIMA and neural networks have also been developed to model both linear and non-linear dynamics \cite{zhang2003time}. For multivariate time series prediction, vector-based methods, such as Vector Auto Regression (VAR), extend the Auto Regressive (AR) models for univariate time series \cite{box2015time,lutkepohl2005new}. Variants of the VAR model have also been proposed, such as the VARMAX model \cite{milhoj2016multiple} and the VARX model \cite{bierens2004var}, which subsume properties of the original VAR model and jointly learn interactions between the given variables. Additionally, the Gaussian Process \cite{alvarez2009sparse}, as a non-parametric statistical model, predicts over a distribution of continuous variables, as opposed to the aforementioned parametric methods. Statistical models often have very high computational complexity and face numerical issues when the number of variables in the time series is high.
With the advent of deep learning, new neural network architectures have been proposed for multivariate time series prediction. Borovykh et al. \cite{borovykh2017conditional} develop a multivariate time series model based on the WaveNet architecture \cite{van2016wavenet}, which was originally designed for speech audio signal processing. They augment the original architecture by simplifying and optimizing its core algorithms with dilated convolutions to capture long-term multivariate temporal dependencies in noisy signals. Another multivariate time series modeling framework, LSTNet, is proposed by Lai et al. \cite{lai2018modeling}; it combines CNN and RNN components to extract hierarchical short-term and long-term temporal dependencies from the time series. In addition to modeling these dependencies, LSTNet also accounts for the autoregressive component of the model as a residual connection between the CNN and LSTM components. Qin et al. \cite{qin2017dual} develop a dual-stage attention-based neural network architecture for multivariate time series modeling, which utilizes two RNNs in an encoder-decoder architecture. Through each stage of the attention-based architecture, the method attends to both the feature and temporal dimensions to adaptively select the relevant driving series.
\section{Experiment}
We evaluate the performance of the proposed neural network architecture in this section. We first introduce several baseline models used for the benchmark comparison. We then introduce the evaluation metrics and data sets, and finally present the experimental results.
\subsection{Baseline Models}
To compare the performance and robustness of our proposed model, we benchmarked against the following methods:
\begin{itemize}
\item Vector Auto Regression (VAR)
\item Long Short-Term Memory (LSTM)
\item Gated Recurrent Unit (GRU)
\item 1D Convolutional Neural Network (CNN)
\item LSTNet \cite{lai2018modeling}
\end{itemize}
\noindent
All implementations were developed in PyTorch \cite{paszke2017automatic}, with the exception of the Vector Auto Regression (VAR) model, which is based on the Python StatsModels package \cite{seabold2010statsmodels}. Our evaluations span models which are stateful (i.e. VAR and RNN variants), stateless (i.e. CNN variants), and a hybrid of the two (i.e. LSTNet).
For the Recurrent Neural Network variants, we implemented a many-to-one single-layer vanilla architecture for the Long Short-Term Memory (LSTM) and Gated Recurrent Unit (GRU) models, each with two fully connected layers appended at the end. Likewise, the 1D Convolutional Neural Network (CNN) architecture utilizes a single 1D CNN and a max pooling block with two fully connected layers. In this architecture, we treat the 2D input time series as a single-channel image and perform convolution over the temporal-variable plane. Our LSTNet architecture is based on the code package from Lai et al. \cite{lai2018modeling}. All the neural network based models generate batched multi-step predictions instead of a single-time-step output. For Vector Auto Regression, the predictions are generated in an iterative manner, where the prediction at the current time step is appended to the previous input in order to predict the value at the next time step.
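As a hedged sketch (not the actual StatsModels implementation), the iterative forecasting loop described above can be written as follows; \texttt{model\_step} stands in for any fitted one-step predictor, and the toy persistence model used here is purely illustrative.

```python
# Sketch of the iterative (recursive) forecasting scheme used for the VAR
# baseline: each one-step prediction is appended to the input window and
# fed back to predict the next step.

def iterative_forecast(history, model_step, horizon):
    """Roll a one-step predictor forward `horizon` steps."""
    window = list(history)
    preds = []
    for _ in range(horizon):
        y_next = model_step(window)   # one-step-ahead prediction
        preds.append(y_next)
        window.append(y_next)         # feed the prediction back as input
    return preds

# Toy one-step model: persistence (predict the last observed value).
persistence = lambda w: w[-1]
print(iterative_forecast([1.0, 2.0, 3.0], persistence, horizon=4))
# -> [3.0, 3.0, 3.0, 3.0]
```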
\subsection{Evaluation Metrics}
To compare all the methods, we use the following two evaluation metrics. The objective of our models is to minimize the Root Mean Squared Error (RMSE), while jointly maximizing the empirical correlation coefficient (CORR).\\
\noindent
\textbf{Root Mean Square Error (RMSE):}
$$RMSE = \dfrac{1}{n} \sum_{i=1}^{n} \sqrt{\sum_{j=1}^{m} \sum_{t=1}^{h} (\mathcal{Y}_{ijt} - \hat{\mathcal{Y}}_{ijt})^2}$$
\noindent
where $\mathcal{Y}$, $\hat{\mathcal{Y}} \in \Bbb R^{n \times m \times h}$ and $n$ is the total number of samples. In this formulation we aggregate the error across all of the $m$ different features over the full $h$-step horizon of the predicted values.\\
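The RMSE above can be transcribed directly; the nested-list representation of the $(n, m, h)$ arrays is an illustrative choice, not tied to our implementation.

```python
import math

# Direct transcription of the RMSE definition: for each of the n samples,
# take the square root of the squared error summed over all m features and
# h horizon steps, then average over samples. Y and Yhat are nested lists
# of shape (n, m, h).

def rmse(Y, Yhat):
    total = 0.0
    for Yi, Yhi in zip(Y, Yhat):
        sq = sum((y - yh) ** 2
                 for row, rhat in zip(Yi, Yhi)
                 for y, yh in zip(row, rhat))
        total += math.sqrt(sq)
    return total / len(Y)

Y    = [[[1.0, 2.0], [3.0, 4.0]]]   # n=1 sample, m=2 features, h=2 steps
Yhat = [[[1.0, 2.0], [3.0, 1.0]]]
print(rmse(Y, Yhat))                # -> 3.0
```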
\noindent
\textbf{Empirical Correlation Coefficient (CORR):}
$$CORR = \dfrac{1}{n} \sum_{i=1}^{n}
\dfrac{
\sum_{j=1}^{m} \sum_{t=1}^{h} (\mathcal{Y}_{ijt} - \bar{\mathcal{Y}}_{i:t}) (\hat{\mathcal{Y}}_{ijt} - \bar{\hat{\mathcal{Y}}}_{i:t})
}
{
\sqrt{\sum_{j=1}^{m} \sum_{t=1}^{h} (\mathcal{Y}_{ijt} - \bar{\mathcal{Y}}_{i:t})^2 \; \sum_{j=1}^{m} \sum_{t=1}^{h} (\hat{\mathcal{Y}}_{ijt} - \bar{\hat{\mathcal{Y}}}_{i:t})^2}
}
$$
\noindent
where $\mathcal{Y}$, $\hat{\mathcal{Y}} \in \Bbb R^{n \times m \times h}$ denote the ground truth and the predicted values, and $\bar{\mathcal{Y}}$, $\bar{\hat{\mathcal{Y}}}$ denote the mean values of the ground truth and of the predictions, respectively.
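The CORR metric can be sketched as follows, under one reading of the notation: the means are taken per sample over all $m \times h$ entries, and the denominator is normalized in the standard Pearson form, i.e. by the product of the two summed squares.

```python
import math

# Hedged sketch of the empirical correlation coefficient: a per-sample
# Pearson correlation over the flattened (m, h) entries, averaged over the
# n samples. Y and Yhat are nested lists of shape (n, m, h).

def corr(Y, Yhat):
    total = 0.0
    for Yi, Yhi in zip(Y, Yhat):
        a = [v for row in Yi for v in row]     # flatten ground truth
        b = [v for row in Yhi for v in row]    # flatten predictions
        ma, mb = sum(a) / len(a), sum(b) / len(b)
        num = sum((x - ma) * (y - mb) for x, y in zip(a, b))
        den = math.sqrt(sum((x - ma) ** 2 for x in a)
                        * sum((y - mb) ** 2 for y in b))
        total += num / den
    return total / len(Y)

Y = [[[1.0, 2.0], [3.0, 4.0]]]
print(corr(Y, Y))   # -> 1.0 (a perfect prediction)
```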
\subsection{Datasets}
We use four benchmark datasets from a wide variety of application domains to compare the performance of all the models. Each dataset exhibits a different degree of non-stationarity, which allows us to evaluate the potential strengths and weaknesses of our proposed architecture. Due to memory constraints, for each dataset except the Exchange Rate dataset we select ten features at random as a form of dimensionality reduction. We plot the autocorrelation functions for each dataset, as shown in Figure \ref{acf_plot}.
\begin{figure}[ht]
\centering
\includegraphics[width=0.475\textwidth]{figures/ACF_Plots.png}
\centering{\caption{Autocorrelation function plots of the raw datasets}\label{acf_plot}}
\end{figure}
\subsubsection{Electricity Usage \cite{UCI_electricity}}
The dataset contains electricity consumption recordings sampled at 15-minute intervals between 2012 and 2014 for 321 clients. The dataset was pre-processed to reflect hourly electricity consumption instead. From the ACF plot, we see that different features have different levels of autocorrelation, notably a strong presence of seasonality, as most features exhibit a consistent lag of around 24 hours.
\subsubsection{Foreign Exchange Rates}
The dataset was collected over a period of 26 years between 1990 and 2016, and consists of the daily exchange rates of eight different countries, for a total sample size of 7,588. Of the four datasets, it shows the weakest autocorrelational signal.
\subsubsection{Solar Power Production \cite{NREL_solar}}
The dataset comes from the Solar Power Data for Integration Studies, which contains the solar power output and hourly day-ahead forecasts for approximately 6,000 simulated PV plants, sampled every 5 minutes throughout 2006. From the ACF plot, we can see that the selected features all have uniform seasonality patterns with lag values around 140 hours.
\subsubsection{San Francisco Traffic \cite{PEMS_traffic}}
The dataset is from the Caltrans Performance Measurement System (PeMS), where traffic data was collected in real time from many detectors on San Francisco Bay Area freeways over the span of one year between 2015 and 2016. The detector readings represent traffic density between 0 and 1, where 0 indicates no traffic and 1 indicates heavy congestion. From the ACF plot, we can see that most of the selected features have a seasonality with lag values around 160 hours.
For evaluation, we split each dataset into three partitions in a chronological manner: the first 60\% of the data is used for training the model, the next 20\% for computing validation scores, and the remaining 20\% for testing.
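A minimal sketch of this chronological split (the integer index arithmetic is an illustrative choice, not the exact code used in our pipeline):

```python
# Chronological 60/20/20 split: no shuffling, so the validation period
# strictly follows the training period and the test period follows both.

def chrono_split(series, train_pct=60, val_pct=20):
    n = len(series)
    i_train = n * train_pct // 100
    i_val = n * (train_pct + val_pct) // 100
    return series[:i_train], series[i_train:i_val], series[i_val:]

train, val, test = chrono_split(list(range(10)))
print(train, val, test)   # -> [0, 1, 2, 3, 4, 5] [6, 7] [8, 9]
```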
\subsection{Hyperparameters}
For training the models, we tuned the hyperparameters using the \textit{Gaussian Process Optimization Framework (GPyOpt)} \cite{gpyopt2016}. We focused the parameter optimization on the Temporal-Slicing Stack Transformation, in particular the window size $\omega$, between 5 and 10, and the stride $s$, between 1 and 5. We also tuned the learning rate of our model over $[0, 0.01]$ and applied a gradient clipping value of 10, and we adjusted the hidden weight dimensions of the model such that the first dense layer had more parameters than the second (output) dense layer or the final output dimension of the model. We perform 100 iterations of the optimization process, using the empirical correlation coefficient as the objective, and select the model with the highest score.
\subsection{Experimental Results}
\input{tables/result.tex}
\begin{figure*}[ht]
\includegraphics[width=\textwidth]{figures/CORR_Plots.png}
\caption{Empirical correlation coefficient plots by prediction horizon}
\label{corr_plot}
\end{figure*}
\begin{figure}[ht]
\centering
\includegraphics[width=0.48\textwidth]{figures/prediction_plot.png}
\centering{\caption{Comparison of TSSNet and LSTNet predictions}\label{pred_plot}}
\end{figure}
We present the experimental results on the two evaluation metrics in Table \ref{results}. For each dataset, we provide the model a fixed input of $T=168$ time steps and obtain predictions over horizon windows of 15, 30, 60, and 120 time steps. We note that these horizons are multi-time-step windowed predictions, as opposed to single-point estimations. The best performing models with respect to the empirical correlations are shown in bold for each dataset and horizon. We also provide a comparative graphical summary of the empirical correlation coefficient in Figure \ref{corr_plot}.
Based on the empirical results, our proposed model (denoted by TSSNet) outperforms all baselines on the Electricity, Solar Energy, and Traffic datasets, and outperforms some of the models on the Exchange Rate dataset. Our analysis rests on two key observations: i) the use of stateful versus stateless model types, and ii) the autocorrelations present in the dataset.
In Figure \ref{corr_plot}, we observe a noticeable clustering between stateless models (i.e. 1D CNN, LSTNet, and TSSNet) and stateful models (i.e. LSTM, GRU, and VAR) in terms of overall performance. Furthermore, we observe a sharper performance degradation over larger temporal horizons for stateful models than for stateless models. Among the stateless models, our proposed model consistently retains an upper-bound performance, while the 1D CNN model primarily remains a lower bound, with LSTNet in between. This pattern is present in the Exchange Rate dataset as well.
The other factor affecting the overall performance of our model is the presence of autocorrelations in the dataset, in particular seasonality. As previously shown in Figure \ref{acf_plot}, there is a strong presence of autocorrelation within the Electricity, Solar Energy, and Traffic datasets, but a weaker one in the Exchange Rate dataset. This realization, in the context of our empirical results, sheds light on some observations. For highly autocorrelated data, stateless models tend to perform better than stateful models. However, with weaker autocorrelational signals, such as in the Exchange Rate dataset, our model performs better than the other stateless models, but marginally worse than the stateful model variants.
In particular, our experimental results with 1D CNNs and LSTNet demonstrate that the way convolution is applied over the temporal data can lead to differences in performance. Hence, this comparison indicates that our transformation method enables highly efficient information gain with respect to the autocorrelational features captured over a large temporal region.
We also plot sample predictions against our test set at horizon $h=120$, with an empirical comparison against LSTNet; the results are shown in Figure \ref{pred_plot}. We have chosen LSTNet for its marginally close performance in terms of the empirical correlation coefficient, despite its slightly lower RMSE scores as shown in Table \ref{results}. From Figure \ref{pred_plot}, we can observe how effectively our model predicts non-linear patterns, as it is able to adapt to most of the acute signals in the data. On the Electricity dataset, we observe much more precise adaptations to the peaks and troughs of the cyclical signals. On the Traffic dataset, the proposed TSSNet is able to capture the sinusoidal patterns, but is slightly less precise on sudden peaks such as those found in the last two cycles of the data. On the Solar Energy dataset, we observe a closer fit to the overall shape of the data, as the model very closely captures both the non-linear and linear components. For the Exchange Rate dataset, however, both models have a relatively hard time due to the highly noisy signal. TSSNet's predictions remain centered around a consistent range of values, while LSTNet's predictions shift sporadically from time to time.
\section{Introduction}
Multivariate time series analysis has found widespread application in many fields, e.g., financial market prediction, weather forecasting, and energy consumption prediction. It is used to model and explain the underlying temporal patterns among a group of time series variables in dynamical systems. Various methods have been proposed to predict multivariate time series based on statistical modeling and deep neural networks.
Classical statistical models assume that the time series is stationary, i.e., the summary statistics of the data are consistent over time. Preprocessing procedures are usually needed to remove trend, seasonality, and other time-dependent structures from the raw series in order to make the data stationary. In addition, these models also assume the independence condition in the underlying linear regression problem, i.e., the random errors in the model are not correlated over time. Autocorrelation and partial autocorrelation functions are usually applied to identify the appropriate order of variables. Constructing statistically meaningful prediction models requires various preprocessing, transformation, and feature engineering steps, which are time consuming and difficult to scale. On the other hand, deep learning approaches, such as recurrent neural networks (RNNs), have demonstrated state-of-the-art performance in modeling time series data through the use of stateful models. Recently, convolutional neural networks (CNNs) have also been applied to predict multivariate time series. Specifically, a CNN is used as a stateless model to directly extract features from raw time series and generate predictions, or as part of the feature extraction step within an RNN architecture.
Existing work using CNNs for multivariate time series prediction treats the time series as an image. For example, the number of variables\footnote{The terms ``variable'' and ``feature'' are used interchangeably in this paper.} is equal to the width of the image while the number of time steps is equal to the length of the image. The convolution is conducted over the temporal-variable plane. In this context, one of the key structural premises of the convolution operation is a locally spatially dependent topology of the data, as opposed to the dense layer, where all the inputs are jointly modeled. That is, in a convolutional layer, neurons receive input from a restricted sub-area of the previous layer (aka the receptive field), whereas neurons in a dense layer receive input from the entire previous layer. Therefore, when the CNN kernel convolves over the variable-temporal plane, it is only able to observe a narrow set of variable interactions within a given time window. Alternatively, convolution operations that treat each feature in the time series as an independent channel suffer from similar issues, where the kernel is only able to observe a small local window of time steps as its filters convolve over the data. In both scenarios, the limited view of the receptive field renders a local focus of the time series. Unlike image processing, where objects in different regions can be quite distinct, time series data tend to be relatively ``homogeneous''. The prediction of future values depends more on the global pattern within the historical time window than on a local pattern. In this paper, we propose a novel neural network architecture for multivariate time series prediction with a new class of transformation functions known as \textit{temporal-slicing stack transformations}.
These operations transform the original input data structure into a higher-order tensor, where the individual features in the time series are rearranged from a 1D temporal sequence into a 2D matrix. This transformation expands the view of the receptive field. As a result, the convolution is operated on a temporally larger region, which may help capture time-dependent features such as trend and seasonality, as well as variable interactions in a longer range.
\begin{figure}[ht]
\centering
\includegraphics[width=0.5\textwidth]{figures/tt_diag.jpg}
\centering{\caption{Visualization of the Temporal-Slicing Stack Transformation process}\label{tt_visual}}
\end{figure}
\begin{figure*}[ht!]
\centering
\includegraphics[width=\textwidth]{figures/TTT_ACF_Figure.png}
\centering{\caption{Comparison of the autocorrelation plots and the resulting Temporal Tensor Transformations}\label{tt_acf}}
\end{figure*}
The temporal-slicing stack transformation is illustrated in Figure \ref{tt_visual}, where a window slides over the 2D time series data and extracts a collection of time series slices. These slices are then stacked on top of each other sequentially to form a 3D tensor, whose three dimensions are ``features'', ``time'', and ``stack''. For multivariate time series, the transformation converts a 2D time series into a 3D tensor; for univariate time series, it renders a 2D matrix. The resulting structure yields an emergent pattern from which a spatial feature extractor, such as a CNN, can explicitly model complex and nonlinear autocorrelational features. To illustrate this, we use the transformation of univariate time series as an example. Figure \ref{tt_acf} shows the transformation results for univariate time series from four different datasets, alongside their corresponding autocorrelation plots. The transformation exhibits strong patterns for time series with high autocorrelation. Compared with the original 1D time series, the 2D matrix carries much richer information and may therefore enable more informative feature extraction.
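The slicing operation above can be sketched minimally as follows; the exact axis ordering of the stacked tensor is an implementation choice and is assumed here. A window of size $\omega$ slides along the time axis with stride $s$, and the resulting slices form the new ``stack'' dimension.

```python
# Temporal-slicing stack transformation: a time series of shape (T, m)
# becomes a tensor of shape (num_slices, w, m), where each slice is a
# window of w consecutive time steps taken at stride s. For univariate
# input (m = 1) the stacked slices form a 2D matrix.

def temporal_slicing_stack(series, w, s):
    """series: list of T time steps, each a list of m feature values."""
    T = len(series)
    return [series[i:i + w] for i in range(0, T - w + 1, s)]

# Univariate example: T = 6, w = 3, s = 1 gives 4 stacked slices.
x = [[t] for t in range(6)]
tensor = temporal_slicing_stack(x, w=3, s=1)
print(len(tensor), len(tensor[0]), len(tensor[0][0]))   # -> 4 3 1
```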
The following sections of this paper are organized as follows. We highlight various statistical and deep learning approaches for time series analysis in Section II. Definitions of the temporal tensor transformation, the temporal-slicing stack transformation, and the proposed model architecture are described in Section III. The experimental setup, dataset, metrics, and results are presented in Section IV. In Section V, we perform sensitivity analysis based on experiments with synthesized datasets and controlling the input and output dimensions. We finally conclude and discuss future directions in Section VI.
\section{Sensitivity Analysis}
\begin{figure*}[ht]
\centering
\includegraphics[width=0.8\textwidth]{figures/featmap.png}
\caption{CNN feature maps of synthesized function datasets} \label{featmap}
\end{figure*}
In this section, we perform a sensitivity analysis to evaluate and understand the advantages and limitations of our proposed architecture. This analysis fixes the model architecture and manipulates the input and output data parameters in a controlled manner. We observe the resulting effects on the overall performance of the model and draw conclusions about the robustness of our architecture.
In particular, our objectives in this analysis are twofold. First, we empirically study and demonstrate how our proposed model learns emergent autocorrelational patterns from controlled, synthesized inputs, and we analyze the resulting activation maps to better understand the extracted features. Second, we evaluate the robustness of the proposed architecture by varying the input and output dimensions. These two experiments help identify some of the core inner workings of the model and draw insights from it.
\subsection{Synthesized Data Sensitivity Analysis}
In this section, we describe the methods used for observing the underlying feature maps generated by the CNN. We perform these experiments to empirically identify properties of autocorrelational signals such as seasonality and trend. For this experiment, we generated a set of synthetic datasets from simple functions which explicitly exhibit seasonality and trend, as demonstrated in Figure \ref{featmap}. Through these controlled and simplified experiments, we can uncover some of the underlying mechanisms for extracting autocorrelational features from the data.
We overfit our models on inputs from known simple deterministic functions, such as the sine and linear functions. We then evaluate the resulting activation maps (after the first convolutional layer) to identify which salient features are extracted by the learned CNN filters. Furthermore, to evaluate the robustness of our model, we inject noise of various degrees by adding a noise factor $\epsilon$ within the sine function argument as $\sin(x + \alpha \epsilon)$, where $\alpha$ is the ratio of the noise present and $\epsilon \in \Bbb R$ is a random number sampled from a standard normal distribution. For each function, we empirically evaluate robustness by increasing $\alpha$ from 0 to 0.75.
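The noisy synthetic series can be generated as in the following sketch; the sampling step and seed handling are illustrative assumptions, and only the form $\sin(x + \alpha \epsilon)$ comes from the setup above.

```python
import math
import random

# Synthetic seasonal signal with noise injected inside the sine argument:
# y_i = sin(x_i + alpha * eps_i), with eps_i ~ N(0, 1). alpha = 0 recovers
# the clean sine; the trend variants add a linear x term to the output.

def noisy_sine(n, alpha, step=0.1, seed=0):
    rng = random.Random(seed)
    return [math.sin(i * step + alpha * rng.gauss(0.0, 1.0))
            for i in range(n)]

clean = noisy_sine(100, alpha=0.0)
noisy = noisy_sine(100, alpha=0.75)
print(clean[0], max(abs(v) for v in noisy) <= 1.0)   # -> 0.0 True
```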
Based on the experimental formulation above, we show the feature maps generated by the CNN layer in Figure \ref{featmap}. Along the vertical axis, we list the different deterministic functions used to generate the synthetic datasets, while along the horizontal axis we show the effect of the different rates of noise injection.
Empirically, we found that the CNN filters act both as a feature extractor and as a de-noising component of the model. For the first function, $y = \sin(x)$, we can observe that the model picks up the uniform repetitive pattern of the data, as exhibited by the uniform checkerboard-like pattern. Even with the injected noise, the filter de-noises a fair amount of the artifacts and still retains the core checkerboard-like pattern exhibited when $\alpha = 0$. For the second function, $y = \sin(x) + x$, we observe that across all values of $\alpha$ the activation map shows a uniform gradient-like pattern. This indicates that the model attenuates the $\sin(x)$ component of the function and focuses only on the linear component. In essence, the filter treats $\sin(x)$ as a constant and emphasizes the monotonically increasing nature of the time series arising from the linear $x$ term, and the resulting activation map reveals a gradient-like pattern indicating an increasing trend. For the third and fourth functions, we observe the same uniform checkerboard pattern as for the first function, together with a slight gradient effect: the values in the left region of the plot are darker and those in the right region are lighter. This implies that the model's learned filters can extract both the seasonal component and the linear trend concurrently from the synthetic dataset. The fourth function shows a slightly darker pattern, indicative of the extra linear factor.
From these controlled experiments, we empirically demonstrate the effectiveness of CNNs as feature extractors for identifying core autocorrelational components, notably seasonality and trend, from the transformed temporal tensor representations. Furthermore, we demonstrate that the learned filters can act as de-noising filters, which enables generalization to noisy input time series.
\subsection{Sensitivity Analysis on Input vs Prediction Horizon}
In our previous experiments, we evaluated our models against a fixed input size of 168 time steps while varying the output horizon. In this experiment, we perform a sensitivity analysis by varying both the input and output dimensions. One key motivation is to assess how the balance of input and output sizes influences the overall predictive power of the model. Various degrees of temporal granularity, such as daily, weekly, or monthly perspectives of the input data, can strongly influence the outcome of the prediction due to the observational scope of the information provided to the model. This is particularly important when considering autocorrelational patterns, as long-range dependencies of repetitive patterns can change significantly with the temporal granularity of the data. As a result, the representation the model learns will also differ and consequently affect the prediction performance. For this analysis, we constrain our experiment to the proposed model architecture (i.e. TSSNet). We vary the input size over 32, 64, 128, and 256, and the output horizon over 15, 30, 60, and 120, and we run every combination of these input and output dimensions.
\input{tables/io_results.tex}
\begin{figure*}[ht]
\includegraphics[width=\textwidth]{figures/results_2a.png}
\caption{Input size vs output horizon correlation degradation}
\label{results_2a}
\end{figure*}
\begin{figure*}[ht]
\includegraphics[width=\textwidth]{figures/results_2b.png}
\caption{Output horizon vs input size correlation degradation}
\label{results_2b}
\end{figure*}
The experimental results are shown in Table \ref{results_2}. We also provide two plots, which evaluate the empirical correlation coefficient degradation from the perspective of the input and output dimensions, respectively, as shown in Figures \ref{results_2a} and \ref{results_2b}. From these plots, we can draw several key insights regarding the overall robustness of our model, as well as some heuristics which can potentially help improve its performance.
In Figure \ref{results_2a}, we present a plot which demonstrates how the empirical correlation coefficient for different input dimensions degrades with respect to the different output horizons. First, we observe a performance degradation effect similar to that in our previous experiments. In particular, one key observation is that the variance of the correlation with respect to the different output horizons is greater when the data is not highly autocorrelated. For example, the Exchange Rate time series does not have strong autocorrelation and therefore exhibits a wider variance of correlation scores across input window sizes with respect to the output horizon. In contrast, for datasets such as Electricity and Traffic, the overall correlation degradation is relatively smaller and its variance bounds are significantly tighter. One further detail to note in Figure \ref{results_2a} is the top performing model for each dataset: while the model with an input window of 32 performs best on the Exchange Rate dataset, larger input windows generally perform better on the other datasets. These results suggest that the overall input size can significantly affect the performance of the model, and provide further evidence that our proposed model can better extract features from data which exhibits long-range autocorrelational patterns.
In Figure \ref{results_2b}, we present a different view of the results, which demonstrates how the empirical correlation coefficient for different output dimensions is influenced by varying the input sizes. This analysis evaluates the robustness of the model's predictions at each output horizon dimension with respect to varying the input window dimension. Unlike the analysis in our first experiment, here we focus on empirically evaluating the information gain the model obtains when we provide a greater amount of temporal information (i.e., a larger input window). In Figure \ref{results_2b}, we notice very similar effects on the variance bounds of the correlation results with respect to the different input window sizes: highly autocorrelated data tend to have smaller bounds while non-autocorrelated data have larger variance ranges. In particular, when increasing the input window sizes for the different output horizon dimensions, we note that the overall performance degradation is minimal, and in some cases performance improves for datasets which exhibit high autocorrelation. In contrast, when the data does not contain autocorrelational signals (e.g., the Exchange Rate), the model performance drops significantly as we increase the input window sizes. These results empirically reinforce the notion that our model robustly learns long-range temporal dependencies from data with high autocorrelation.
\section{Conclusion}
We propose a neural network architecture for multivariate time series prediction through a new class of transformation functions known as Temporal Tensor Transformations. In particular, we present the Temporal-Slicing Stack Transformation, which utilizes a slice-based operator to transform the original 2D time series into a 3D tensor. This transformation encodes long-range auto-correlational features that can be extracted by a Convolutional Neural Network. Both the experimental results and the sensitivity analysis provide strong evidence that the proposed architecture is able to learn non-linear auto-correlational patterns effectively from the data.
For future work, we plan to investigate various components of the proposed architecture. To further understand the underlying mechanisms and sensitivity of the architecture, we aim to carefully identify which hyperparameters are most sensitive with respect to the overall model performance. Additionally, to improve the prediction performance of the model on data without strong autocorrelation, we will explore hybrid approaches that combine our stateless feature-extraction approach with recurrent neural networks. As an extension of the proposed temporal tensor transformation, we can also explore other types of transformation methods which utilize explicit components, such as multivariate interactions and variable sampling rates among different variables. The core challenge behind these classes of transformation functions lies in how to design new structures that enable efficient information gain through the use of CNNs for feature extraction.
\section{Model Architecture}
In this section, we present the proposed \textit{Temporal Tensor Transformation Network} architecture. First, we introduce the key notations used in the paper as well as the formal problem definition for multivariate time series prediction. Then we introduce the \textit{Temporal Tensor Transformation} operation. Finally, we present the proposed neural network architecture.
\subsection{Notations}
We use $x$ to denote a one-dimensional vector, $X$ to denote a two-dimensional matrix, and $\mathcal{X}$ to represent a three-dimensional tensor. Scalar entries are denoted by lower-case letters with subscripted index triplets. For instance, $x_{i, j, k}$ of a 3D tensor $\mathcal{X}$ indicates the \textit{(i, j, k)-th} scalar value of the tensor $\mathcal{X}$. Further notation and variables will be introduced and defined as they appear in context.
\subsection{Problem Definition}
We now formally define the multivariate time series prediction problem. Given a complete multivariate time series dataset $X = \{ x_1, x_2, ..., x_n \} \in \Bbb R^{m \times n}$, where each $x_i \in \Bbb R^{m}$, $m$ is the number of features, and $n$ is the total number of time steps in the multivariate series, our objective is to predict a future series of values up to a defined horizon window. Specifically, given a subset of the time series up to time step $T$, our model, denoted as function $F(\cdot)$, takes in an input series $X_{T} = \{ x_1, x_2, ..., x_T\} \in \Bbb R^{m \times T}$ and generates an output sequence $\hat{Y}_{T+h} \in \Bbb R^{m \times h}$, where $h$ is the horizon window. Hence, the model can be formulated as the mapping function $F(X_{T}) = \hat{Y}_{T+h}$.
\subsection{Temporal Tensor Transformation}
The proposed transformation augments the original data by adding a single higher-order dimension to the original data input. For example, if the input series data is a 1D vector of values (i.e., univariate time series), the transformed data structure will be a 2D matrix structure. Likewise, if the input is a 2D matrix (i.e., multivariate time series), the resulting structure will be a 3D tensor.
We define the Temporal Tensor Transformation as a mapping function $TT: X \rightarrow \tilde{\mathcal{X}}$, where $X \in \Bbb R^{m \times T}$ is the input multivariate time series and the resulting transformation generates a 3D tensor $\tilde{\mathcal{X}} \in \Bbb R^{m \times \omega \times o}$. Here, $\omega$ is the slice window size (i.e., the number of steps within a time window), and $o$ is the number of slices or the stack height of the resulting transformed tensor. The value of $o$ can be computed based on the hyperparameters. Our specific temporal tensor transformation is referred to as the \textit{Temporal-Slicing Stack Transformation}. Before introducing the Temporal-Slicing Stack operation, we will first introduce the hyperparameters involved in the slicing process.
The \textit{window size}, denoted by $\omega$, defines the slicing window of the transformation function and determines one of the major dimensions of the output tensor. The length of the slicing window is fixed by the number of features in the time series, i.e., $m$; the resulting slicing window is therefore of dimension $m \times \omega$. The \textit{stride} parameter of the slicing window, denoted by $s$, indicates how many time steps to advance the slicing window along the temporal dimension. The greater the stride, the lower the overall pattern resolution, due to the loss of information among contiguous values within the time series. The \textit{dilation} parameter $d$ is similar to the parameter introduced by Yu et al. \cite{yu2015multi} for dilated CNNs, and allows us to slice a wider receptive window without a corresponding increase in memory usage. The \textit{padding} value $p$ plays the same role as padding in CNNs: it allows for shift-invariant transformations as well as retaining the dimensional size of the data. Specifically, padding in our context refers to the process of symmetrically appending a set of 1D vectors of size $m$ to \textit{both} ends of the input time series matrix along the temporal dimension. However, values appended to the time series may be problematic, due to the risk of contaminating the original series with noise or out-of-distribution values. Thus, we need to choose the padding values carefully, for example, by replicating the adjacent values or using the local mean of vectors within a predetermined time window.
Given the hyperparameters defined above, we can deterministically derive the number of slices $o$ as follows:
$$o = \Bigl\lfloor \dfrac{T + 2p - 2d(\omega - 1) - 1}{s} + 1\Bigr\rfloor$$
We summarize the Temporal-Slicing Stack Transformation in Algorithm \ref{alg}. For simplicity, we do not consider padding and dilation in this formulation (\textit{i.e. $p = 0$ and $d = 1$}).
\begin{algorithm}
\caption{Temporal-Slicing Stack Transformation}
\label{alg}
\begin{algorithmic}[1]
\renewcommand{\algorithmicrequire}{\textbf{Input:}}
\renewcommand{\algorithmicensure}{\textbf{Output:}}
\REQUIRE $X \in \Bbb R^{m \times T}$: 2D input multivariate time series
\ENSURE $\mathcal{\tilde{X}} \in \Bbb R^{m \times \omega \times o}$: 3D output temporal tensor \\
\textit{Init}: $\mathcal{\tilde{X}} \in \Bbb R^{m \times \omega \times o}$
\FOR {$i = 1$ to $o$}
\STATE $\mathcal{\tilde{X}}$[:, :, $i$] $\leftarrow X$[:, $(i-1)s + 1$:$(i-1)s + \omega$]
\ENDFOR
\RETURN $\mathcal{\tilde{X}}$
\end{algorithmic}
\end{algorithm}
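As a concrete reference, the transformation can be sketched in NumPy as follows (the function name, default arguments, and the edge-replication padding are our illustrative choices; the algorithm above omits padding and dilation):

```python
import numpy as np

def temporal_slicing_stack(X, omega, s=1, p=0, d=1):
    """Temporal-Slicing Stack Transformation (sketch of Algorithm 1).

    X     : (m, T) multivariate time series
    omega : slice window size; s : stride; p : symmetric padding; d : dilation

    Returns an (m, omega, o) tensor of stacked slices. Padding replicates the
    edge vectors, one of the "safe" choices discussed in the text.
    """
    m, T = X.shape
    if p > 0:
        # append p copies of the boundary vectors to both ends (temporal axis)
        X = np.pad(X, ((0, 0), (p, p)), mode="edge")
    # number of slices: o = floor((T + 2p - 2d(omega - 1) - 1) / s + 1)
    o = int(np.floor((T + 2 * p - 2 * d * (omega - 1) - 1) / s + 1))
    out = np.empty((m, omega, o))
    for i in range(o):
        # dilated slice of width omega starting at temporal offset i*s
        idx = i * s + d * np.arange(omega)
        out[:, :, i] = X[:, idx]
    return out
```

For $d = 1$ each slice is simply a contiguous window of $\omega$ columns advanced by $s$ steps, matching the algorithm above.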
\begin{figure*}[ht!]
\centering
\includegraphics[width=0.975\textwidth]{figures/arch.png}
\centering{\caption{Temporal Tensor Transformation Network Architecture}\label{architecture}}
\end{figure*}
\subsection{Neural Network Architecture}
In this section, we describe the deep learning architecture on top of the proposed temporal tensor transformation, which is referred to as \textit{TSSNet}. The neural network architecture is based on a fairly simple network structure, as shown in Figure \ref{architecture}.
\subsubsection{Temporal-Slicing Stack Transformation}
Given the initial input multivariate time series $X$, we first perform the temporal tensor transformation as described in Algorithm \ref{alg}. As noted previously, the Temporal-Slicing Stack transformation is defined as a mapping of $TT: X \rightarrow \tilde{\mathcal{X}}$, where the input data is a two dimensional matrix $X \in \Bbb R^{m \times T}$ and the output is a three dimensional tensor $\tilde{\mathcal{X}} \in \Bbb R^{m \times \omega \times o}$.
\subsubsection{Convolutional Neural Network}
After transforming the input time series data to $\tilde{\mathcal{X}}$, we utilize a CNN to extract features from the tensor along the $\omega \times o$ plane. That is, we treat each feature as a separate channel, so the total number of channels is equal to the number of features $m$. We also fix one dimension of the CNN kernel to be the stack height $o$; the dimension of the kernel is thus $o \times k$, where $k$ is the width of the convolving kernel. Suppose we have $l$ CNN kernels; then the $i$-th kernel learns the following set of weights:
$$h_{i} = W_{i} \ast \tilde{\mathcal{X}} + b_{i}, \hspace{0.5cm}i \in \{1,2,...,l\}$$
\noindent
where $\ast$ denotes the convolution operator, and $W_{i}$ and $b_{i}$ are the weight and bias parameters, respectively. Note that during this process we do not apply any activation function, as the model is sensitive to operations that collapse the range of input values (e.g., negative values become zero under ReLU). We empirically set the number of kernels to $m$. The output feature maps from the convolution operation are subsequently fed into a max pooling operation, which is applied over the same $\omega \times o$ plane. We repeat these two operations twice in succession.
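To make the kernel geometry concrete, the following minimal sketch applies one full-stack-height kernel to a single channel's $\omega \times o$ plane (the plane/kernel axis orientation is our reading of the text, transposed for convenience, and no activation is applied, as described above; this is an illustration, not the training implementation):

```python
import numpy as np

def full_height_conv(plane, kernel, bias=0.0):
    """Cross-correlate one (omega, o) feature plane with a kernel spanning
    the full stack height o, sliding only along the window axis.

    plane  : (omega, o) slice of the transformed tensor for one channel
    kernel : (k, o) weights, with the o-axis fixed to the stack height
    Returns a 1D feature map of length omega - k + 1 (valid mode).
    """
    omega, o = plane.shape
    k = kernel.shape[0]
    out = np.empty(omega - k + 1)
    for j in range(omega - k + 1):
        # element-wise product over the full k x o receptive field
        out[j] = np.sum(plane[j:j + k, :] * kernel) + bias
    return out
```

Because the kernel covers the entire stack dimension, this reduces to a one-dimensional convolution along $\omega$ whose receptive field mixes information across all slices.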
\subsubsection{Dense Layer}
After the features are extracted by the CNN layers, the corresponding feature maps are flattened into a single vector. This vector is then fed into one fully connected hidden layer, $fc_1$:
$$fc_1 = W_1 \times h + b_1$$
\noindent
where $h$ is the flattened 1D feature vector from the preceding CNN feature maps, and $W_1$ and $b_1$ are the weight and bias parameters, respectively. As a heuristic, the dimension of the hidden weight parameters used in this dense layer is usually greater than the output layer dimension. Finally, the latent vector $fc_1$ is fed into the output layer, where we learn the following set of parameters:
$$\hat{y} = W_2 \times fc_1 + b_2$$
\noindent
where $W_2$ and $b_2$ are the weight and bias parameters. The final output is a 1D vector $\hat{y} \in \Bbb R^{mh}$, which is reshaped into a 2D matrix of dimensions $m \times h$.
\subsection{Objective Function}
To train the proposed neural network architecture, we minimize the squared error loss function:
$$\min_{\Theta} \; \sum_{t \in T_{train}} \| Y_t - \hat{Y_t}\|^{2}_{F}$$
\noindent
where the cost is minimized w.r.t. the parameters $\Theta$, and $\|\cdot\|^2_F$ denotes the Frobenius norm.
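For reference, the objective amounts to the following short NumPy sketch (summation over the training index set $T_{train}$ is simplified to a Python list of target/prediction pairs; the names are ours):

```python
import numpy as np

def squared_frobenius_loss(Y, Y_hat):
    """Sum of squared Frobenius norms || Y_t - Y_hat_t ||_F^2 over a batch
    of (m, h) target/prediction matrix pairs."""
    return sum(np.sum((y - yh) ** 2) for y, yh in zip(Y, Y_hat))
```

In practice this is the quantity minimized with respect to $\Theta$ by the gradient-based methods described below.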
\subsection{Optimization Method}
To optimize the cost function, we utilize canonical methods for optimizing standard neural networks. Specifically, we can apply common gradient-based methods such as stochastic gradient descent (SGD) or the Adam algorithm \cite{kingma2014adam}.
\section{Introduction}
\label{sec:intro}
It is often convenient to regard a space of certain loops in a smooth
manifold as a smooth manifold itself with the aim of doing
differential topology thereon. Depending on the application this
approach can vary from the conceptual to the rigorous. The two most
popular types of loop are continuous and smooth, for both of which
there is a rigorous theory of infinite dimensional manifolds making
these into smooth manifolds: \cite{wk}, \cite{sl}, \cite{jm},
\cite{ho}. Other types of loop have also been considered: it is often
convenient to use a manifold modelled on a Hilbert space whence one
usually uses the space of loops with square-integrable first
derivative.
There is a standard method of constructing the smooth structure which
is used in each case mentioned above. Our main purpose in this paper
is to extend this construction to a reasonably arbitrary class of
loops. In so doing, we obtain a list of conditions on the model space
such that if they hold then this method applies. This enables us to
reduce the general problem of whether or not a particular type of loop
forms a smooth manifold to a check-list for the model space and means
that we can avoid writing out the full construction in each and every
case.
Before giving the list of conditions, we feel it relevant to comment
on calculus. The examples of spaces given in the first paragraph were
all ``nice'' as regards calculus. Two were Banach spaces (one a
Hilbert space, no less) and the other is one of the nicest Fr\'echet
spaces that one could hope to meet. In all of these cases, calculus
is well-understood and well-defined. However, once one leaves the
realm of Fr\'echet spaces and departs for more general locally convex
topological spaces then the notion of what is ``smooth'' becomes
increasingly hard to pin down. The usual idea of taking ``smooth'' to
mean ``infinitely differentiable'' leads to many complications, not
least that this is not uniquely definable. Fortunately, an
alternative approach has been developed in which the notion of
``smooth'' is based on something other than differentiability. This
calculus is laid out in the weighty tome \cite{akpm}. The
introduction and the historical remarks at the end of chapter~1 of
\cite{akpm} make for an interesting read on this subject.
The impact that this has on our work is more subtle than might be
expected. The place where one would expect this issue to arise is in
showing that the transition functions are smooth. However, this
depends on certain functions on the model space being smooth and so we
build this into one of our conditions. We can therefore phrase the
corresponding condition in such terms that it could apply whatever
type of calculus we were using.
There are other places, however, where the calculus used makes an
appearance. The most important being the definition of an infinite
dimensional manifold. One of the issues in infinite dimensional
calculus is that maps can be smooth without being continuous and this
leads to a certain \emph{laissez faire} attitude to topology. The
traditional definition of a manifold is of a topological space with a
smooth atlas. The definition in \cite{akpm} is of a \emph{set} with a
smooth atlas which is then topologised using said atlas. Thus if one
wishes to apply the results of this paper using a calculus other than
that of \cite{akpm}, this issue might be important. Certainly, the
traditional approach to building infinite dimensional manifolds
modelled on Banach spaces has been to mirror the standard approach of
topology first and smooth structure second. Therefore, if taking a
different calculus, it might be necessary to add the additional step
of checking that the original topology and the new topology were one
and the same. We shall not bother with this issue explicitly, but
shall provide some tools which will help if it is considered important
by others.
Having made that point, we turn to our list of conditions. We start
with a class of maps \(S^1 \to \ensuremath {\mathbb{R}}\xspace\) which we write as \(L^x \ensuremath {\mathbb{R}}\xspace\), or
as \(C^x(S^1, \ensuremath {\mathbb{R}}\xspace)\) when we want to emphasise the domain, and refer to
as \(C^x\)-\hspace{0pt}{}loops. We want to consider actual maps, and not
equivalence classes of maps, because we want to be able to ``locate''
our maps on a manifold in order to be able to examine them in charts.
Thus we regard \(L^x \ensuremath {\mathbb{R}}\xspace\) as a subset of \(\map(S^1,\ensuremath {\mathbb{R}}\xspace)\). Using the
natural identification of \(\map(S^1, \ensuremath {\mathbb{R}}\xspace^n)\) with \(\map(S^1, \ensuremath {\mathbb{R}}\xspace)^n\)
we define \(L^x \ensuremath {\mathbb{R}}\xspace^n\) as \((L^x \ensuremath {\mathbb{R}}\xspace)^n\). For a subset \(A \subseteq
\ensuremath {\mathbb{R}}\xspace^n\), we define \(L^x A\) as the subset of \(L^x \ensuremath {\mathbb{R}}\xspace^n\) consisting
of maps which take values in \(A\).
Our conditions are:
\begin{enumerate}
\item Being in \(L^x \ensuremath {\mathbb{R}}\xspace\) is a \emph{local} property.
\label{cond:local}
That is, a loop \(\gamma \colon S^1 \to \ensuremath {\mathbb{R}}\xspace\) lies in \(L^x
\ensuremath {\mathbb{R}}\xspace\) if there exists an open cover \(\m{U}\) of \(S^1\) and
loops \(\gamma_U \in L^x \ensuremath {\mathbb{R}}\xspace\) for \(U \in \m{U}\) such that
\(\gamma\) agrees with \(\gamma_U\) on \(U\).
\item The set \(L^x \ensuremath {\mathbb{R}}\xspace\) is a subspace of \(\map(S^1, \ensuremath {\mathbb{R}}\xspace)\).
\label{cond:vspace}
\item The vector space \(L^x \ensuremath {\mathbb{R}}\xspace\) can be given a topology with
respect to which it is a locally convex topological vector space.
\label{cond:lctvs}
\item The locally convex topological vector space \(L^x \ensuremath {\mathbb{R}}\xspace\) is
\emph{convenient}.
\label{cond:cmplt}
This is a completeness condition. We have phrased it in the language
of \cite{akpm} but it is the same as a concept known as \emph{locally
complete} which is from ordinary functional analysis. Local
completeness is weaker than sequential completeness, though it
coincides with completeness for metrisable spaces. This completeness
condition is to ensure that derivatives that ought to exist actually
do.
\item As subspaces of \(\map(S^1, \ensuremath {\mathbb{R}}\xspace)\) we have inclusions:
\[
L \ensuremath {\mathbb{R}}\xspace \subseteq L^x \ensuremath {\mathbb{R}}\xspace \subseteq L^0 \ensuremath {\mathbb{R}}\xspace
\]
where \(L \ensuremath {\mathbb{R}}\xspace = \ensuremath {C^\infty}\xspace(S^1, \ensuremath {\mathbb{R}}\xspace)\) and \(L^0 \ensuremath {\mathbb{R}}\xspace = C^0(S^1, \ensuremath {\mathbb{R}}\xspace)\). The
inclusion maps are all continuous when each is given its standard
topology.
\label{cond:smthcts}
We will be considering loops in a smooth manifold and therefore will
need to know that the condition of being a \(C^x\)--loop is invariant
under post-composition by diffeomorphisms. This essentially forces
smooth loops to be \(C^x\)--loops. For the other inclusion, as we
remarked above we want to be able to ``locate'' a loop on a manifold
so that we can consider it in charts. The simplest way to do this is
to ensure that a \(C^x\)--loop is continuous.
\item The action of post-composition of \(C^x\)--loops by smooth maps
is well-defined and is smooth. That is, let \(U \subseteq \ensuremath {\mathbb{R}}\xspace^m\) and
\(V \subseteq \ensuremath {\mathbb{R}}\xspace^n\) be open sets; let \(\phi \colon U \to V\) be a smooth
map. The induced map \(\phi_* \colon L^x U \to L^x V\),
\(\gamma \mapsto \phi \circ \gamma\), is well-defined and is smooth.
\label{cond:postcomp}
This is the crucial condition that will ensure that the transition
functions are defined and are diffeomorphisms.
\end{enumerate}
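As a sanity check (our illustrative example, not part of the formal development), consider the class \(L^{1,2} \ensuremath {\mathbb{R}}\xspace = H^1(S^1, \ensuremath {\mathbb{R}}\xspace)\) of loops with square-integrable first derivative mentioned in the opening paragraph. The inclusions of condition~\ref{cond:smthcts} read \(\ensuremath {C^\infty}\xspace(S^1, \ensuremath {\mathbb{R}}\xspace) \subseteq H^1(S^1, \ensuremath {\mathbb{R}}\xspace) \subseteq C^0(S^1, \ensuremath {\mathbb{R}}\xspace)\), with continuity of the second inclusion following from the Cauchy--Schwarz estimate
\[
|\gamma(s) - \gamma(t)| = \Bigl| \int_t^s \gamma'(u)\,du \Bigr| \le |s - t|^{1/2} \, \|\gamma'\|_{L^2},
\]
so that \(H^1\)-loops are continuous and the \(H^1\)-norm dominates the supremum norm up to a constant.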
Having stated our conditions, we can now state our first theorem. We
make two assumptions on our target manifold: that it be orientable and
that it have no boundary. The first is really a convenience to allow
us not to have to discuss twisted model spaces; the second is
necessary as the loop space of a manifold with boundary is a
complicated object indeed.
\begin{thm}
\label{th:smooth}
Let \(L^x \ensuremath {\mathbb{R}}\xspace\) be a class of maps satisfying the above
conditions. Let \(M\) be a smooth, orientable finite dimensional
manifold without boundary. Then \(L^x M\) can be defined and is
a smooth manifold in the sense of \cite{akpm}.
\end{thm}
We emphasise that the phrase ``in the sense of \cite{akpm}'' does not
refer to the calculus but to the definition of a smooth manifold once
one has decided on a calculus.
Having defined the smooth structure, the next question is to examine
the general properties of the manifold. In most cases, these descend
from the model space. The result which allows us to devolve these
properties is the following theorem on submanifolds.
\begin{thm}
\label{th:submfd}
Let \(L^x \ensuremath {\mathbb{R}}\xspace\) be a class of maps satisfying the above
conditions. Let \(M, N\) be smooth, orientable finite dimensional
manifolds without boundary and suppose that there is an embedding of
\(M\) as a submanifold of \(N\). Then \(L^x M\)
is an embedded submanifold of \(L^x N\). Moreover, if \(M\) is
closed, resp.\ open, in \(N\) then \(L^x M\) is closed, resp.\
open, in \(L^x N\).
\end{thm}
\begin{cor}
\label{cor:top}
In the statement of theorem~\ref{th:submfd} suppose that we can take
\(N = \ensuremath {\mathbb{R}}\xspace^n\) with \(M\) closed in \(N\). Then the following
properties are inherited by \(L^x M\) from \(L^x \ensuremath {\mathbb{R}}\xspace\):
separable, metrisable, Lindel\"of, paracompact, normal, smoothly
regular, smoothly paracompact, and smoothly normal.
That is, those properties which hold for \(L^x \ensuremath {\mathbb{R}}\xspace\) also hold
for \(L^x M\).
\end{cor}
The last property that we wish to examine is the natural circle
action, and more generally the natural action of the diffeomorphisms
of the circle.
\begin{thm}
\label{th:diffact}
Under the conditions of corollary~\ref{cor:top}, the actions of the
circle and of the diffeomorphisms of the circle are also inherited by
\(L^x M\) from \(L^x \ensuremath {\mathbb{R}}\xspace\).
\end{thm}
In light of this, we conclude this paper with a discussion as to the
various possible levels of continuity and smoothness of the circle
acting on an infinite dimensional locally convex topological vector
space (lctvs).
\medskip
This paper is organised as follows. In section~\ref{sec:prelim} we
prove some preliminary results, including the definition of \(L^x
M\). Section~\ref{sec:smooth} is concerned with the construction of
the charts and showing that the transition maps are diffeomorphisms;
in particular we prove theorem~\ref{th:smooth}.
In section~\ref{sec:topology} we transfer our attention to the
topology of the manifold and prove theorem~\ref{th:submfd} and its
corollaries. Finally, in section~\ref{sec:diff} we look at the action
of the circle and its diffeomorphisms.
\medskip
The standard construction of the smooth structure on the space of
smooth loops can be found in many places, for example in \cite{pm3}
and in \cite{akpm}. Some other articles and books which deal with the
infinite dimensional manifolds in varying levels of generality are:
\cite{jm}, \cite{ho}, \cite{je3}, \cite{jeke}, \cite{sl}, \cite{wk}.
Most of the work in this paper is firmly in the realms of differential
topology and should be comprehensible to anyone with a firm grasp of
the basics of the theory in finite dimensions. The exception to this
is the discussion of actions of the circle and diffeomorphism group
which uses some standard functional analysis. This may be unfamiliar
to differential topologists, at whom this article is aimed, in which
case we recommend \cite{hs} and \cite{hj} for the necessary
background.
\medskip
We regard the circle as the quotient \(\ensuremath {\mathbb{R}}\xspace / \ensuremath {\mathbb{Z}}\xspace\) and shall write it
additively. We shall often write a small neighbourhood of a point as
\((t - \epsilon, t + \epsilon)\) without worrying about
``wrap-around''; either the ``wrap-around'' will have no effect on
the subsequent discussion or we will be allowed to take \(\epsilon\)
small enough that there is no ``wrap-around''. We shall also employ
the language of \emph{intervals} for connected subsets of \(S^1\) --
including \(S^1\) itself.
\section{Preliminaries}
\label{sec:prelim}
In this section we shall set up the basic machinery that we need to
define and construct the smooth manifold of loops. From hereon, let
us assume that we have a class of loops, \(L^x \ensuremath {\mathbb{R}}\xspace\), satisfying
the conditions stated in the introduction. Let \(M\) be a smooth,
orientable, finite dimensional manifold without boundary. Our first
task is to define \(L^x M\). Our second is to define and
examine the space of \(C^x\)-\hspace{0pt}{}sections in a smooth vector bundle
over \(S^1\); these will prove crucial in the atlas for \(L^x
M\).
\subsection{Loops in a Manifold}
\label{sec:defloops}
To define \(L^x M\) we need to show that we can examine a loop
locally to see whether or not it is in \(L^x M\).
Condition~\ref{cond:local} is almost what we need but isn't quite
local enough.
\begin{defn}
Let \(I \subseteq S^1\) be an open interval. Define \(C^x(I, \ensuremath {\mathbb{R}}\xspace)\) to
be the space of maps \(\gamma \colon I \to \ensuremath {\mathbb{R}}\xspace\) which are locally of type
\(C^x\). That is, there is an open cover \(\m{U}\) of \(I\) and maps
\(\gamma_U \in L^x \ensuremath {\mathbb{R}}\xspace\) for \(U \in \m{U}\) such that \(\gamma\)
agrees with \(\gamma_U\) on \(U\).
\end{defn}
Note that we don't assume that the whole function extends, merely that it
locally extends. It follows from the definition that the restriction
map \(C^x(I, \ensuremath {\mathbb{R}}\xspace) \to C^x(J, \ensuremath {\mathbb{R}}\xspace)\) is defined for \(J \subseteq I\).
\begin{lemma}
\label{lem:local}
In the locality condition, it is enough to assume that the local
functions are only defined locally.
That is, a map \(\gamma \colon S^1 \to \ensuremath {\mathbb{R}}\xspace\) is a \(C^x\)-\hspace{0pt}{}map if there
is a cover \(\m{U}\) of \(S^1\) by open intervals and functions
\(\gamma_U \in C^x(U, \ensuremath {\mathbb{R}}\xspace)\) such that \(\gamma\) agrees with
\(\gamma_U\) on \(U\).
\end{lemma}
\begin{proof}
Let \(t \in S^1\). Then there is some \(U \in \m{U}\) with \(t \in
U\). As \(\gamma_U \in C^x(U,\ensuremath {\mathbb{R}}\xspace)\) there is an open cover \(\m{V}\)
of \(U\) and loops \(\beta_V \in L^x \ensuremath {\mathbb{R}}\xspace\) such that \(\gamma_U\)
agrees with \(\beta_V\) on \(V\). There is some \(V \in \m{V}\) such
that \(t \in V\). Then on \(V\), \(\beta_V\) agrees with \(\gamma_U\)
which agrees with \(\gamma\). Repeating this for all \(t \in S^1\)
gives the family required to apply condition~\ref{cond:local}.
\end{proof}
Another piece of preliminary work that we need to do, or rather just
to note as it is trivial, is to show that all of our conditions and
results are equally valid for \(\ensuremath {\mathbb{R}}\xspace^n\) as for \ensuremath {\mathbb{R}}\xspace.
Condition~\ref{cond:postcomp} is already in full generality.
\begin{lemma}
Let \(L^x \ensuremath {\mathbb{R}}\xspace\) be a class of maps satisfying the conditions of
section~\ref{sec:intro}. Then \(L^x \ensuremath {\mathbb{R}}\xspace^n\) satisfies analogous
conditions and the corresponding version of lemma~\ref{lem:local}.
\end{lemma}
\begin{proof}
This is trivial and follows from the fact that \(L^x \ensuremath {\mathbb{R}}\xspace^n\) is
canonically identified with \((L^x \ensuremath {\mathbb{R}}\xspace)^n\).
\end{proof}
One other result that we need, which is equally trivial, is the
following statement about open sets.
\begin{lemma}
Let \(U \subseteq \ensuremath {\mathbb{R}}\xspace^n\) be open. Then \(L^x U\) is open in
\(L^x \ensuremath {\mathbb{R}}\xspace^n\).
\end{lemma}
\begin{proof}
This holds for \(L^0 \ensuremath {\mathbb{R}}\xspace^n\) and so holds because \(L^x U =
L^0 U \cap L^x \ensuremath {\mathbb{R}}\xspace^n\).
\end{proof}
With this in place we can define our space of interest.
\begin{defn}
Let \(L^x \ensuremath {\mathbb{R}}\xspace\) be a class of maps satisfying the conditions of
section~\ref{sec:intro}. Let \(M\) be a smooth finite dimensional
manifold. Define \(L^x M\) to be the subset of \(\map(S^1, M)\)
consisting of those loops \(\gamma \colon S^1 \to M\) for which there
exists a covering \(\{I_\alpha \colon \alpha \in A\}\) of \(S^1\) by open
intervals and charts \(\{(\iota_\alpha, U_\alpha, V_\alpha) : \alpha
\in A\}\) for \(M\) (not necessarily making a full atlas) such that
for each \(\alpha \in A\), \(\gamma(I_\alpha) \subseteq V_\alpha\) and
the map:
\[
\gamma_\alpha \colon I_\alpha \to S^1 \xrightarrow{\gamma} V_\alpha
\xrightarrow{{\iota_{\alpha}}^{-1}} U_\alpha
\]
lies in \(C^x(I_\alpha, U_\alpha)\).
\end{defn}
Thus we have defined \(C^x\)--loops in \(M\) to be those that look
like \(C^x\)--loops whenever we look locally. Lemma~\ref{lem:local}
and condition~\ref{cond:postcomp} easily combine to show that this
definition does not depend on any of the choices made.
There is another way to make this definition; if \(M\) were a
submanifold of, say, \(\ensuremath {\mathbb{R}}\xspace^n\) then we already have a definition of
\(L^x M\): namely that subset of \(L^x \ensuremath {\mathbb{R}}\xspace^n\) of loops
which take values in \(M\). The next result shows that these two
definitions coincide. We prefer the above as the actual definition as
it treats the manifold without reference to any surrounding space.
\begin{proposition}
\label{prop:cxsub}
Let \(L^x \ensuremath {\mathbb{R}}\xspace\) be a class of maps satisfying the conditions in
section~\ref{sec:intro}. Let \(M,N\) be smooth, finite dimensional
manifolds with \(M\) an embedded submanifold of \(N\). Then
\[
L^x M = \{\gamma \in L^x N : \gamma(S^1) \subseteq M\}.
\]
\end{proposition}
\begin{proof}
Since this is true for arbitrary maps, what we need to show is that a
loop in \(M\) is a \(C^x\)-\hspace{0pt}{}loop when viewed in \(M\) if and only
if it is a \(C^x\)-\hspace{0pt}{}loop when viewed in \(N\). To do this, we
ensure that the charts in \(M\) in which we are looking are all
submanifold charts; that is, restrictions of charts on \(N\) which
take \(M\) to \(\ensuremath {\mathbb{R}}\xspace^k\) inside \(\ensuremath {\mathbb{R}}\xspace^n\). The desired result then
follows from the fact that a loop in \(\ensuremath {\mathbb{R}}\xspace^n\) is a \(C^x\)-\hspace{0pt}{}loop
if and only if its co-ordinates are \(C^x\)-\hspace{0pt}{}loops; whereupon the
\(C^x\)-\hspace{0pt}{}loops in \(\ensuremath {\mathbb{R}}\xspace^k\) are precisely the \(C^x\)-\hspace{0pt}{}loops in
\(\ensuremath {\mathbb{R}}\xspace^n\) which happen to lie in \(\ensuremath {\mathbb{R}}\xspace^k\).
\end{proof}
\subsection{Sections and Submanifolds}
\label{sec:cxsect}
In this section we define and examine the space of
\(C^x\)-\hspace{0pt}{}sections of a vector bundle over \(S^1\).
\begin{defn}
Let \(E \to S^1\) be a smooth fibre bundle. Define
\(\Gamma_{S^1}^x(E)\) as the space of sections of \(E\) which are
\(C^x\)-\hspace{0pt}{}loops when viewed as maps into the total space of \(E\).
\end{defn}
In the particular case that \(E\) is an orientable vector bundle, we
can identify this space of sections with \(L^x \ensuremath {\mathbb{R}}\xspace^n\).
\begin{lemma}
\label{lem:cxsec}
Let \(E \to S^1\) be a smooth orientable vector bundle of fibre
dimension \(n\). A smooth trivialisation of \(E\) defines a bijection
\(\Gamma^x_{S^1}(E) \to L^x \ensuremath {\mathbb{R}}\xspace^n\). The map \(L^x \ensuremath {\mathbb{R}}\xspace^n
\to L^x \ensuremath {\mathbb{R}}\xspace^n\) induced by two trivialisations of \(E\) is a
linear diffeomorphism.
\end{lemma}
\begin{proof}
As it is obvious that a smooth trivialisation of \(E\) defines a
bijection from the space of all sections of \(E\) to \(\map(S^1,
\ensuremath {\mathbb{R}}\xspace^n)\) all we need to check to show the first part is that a section
of \(E\) is a \(C^x\)-\hspace{0pt}{}section if and only if this bijection takes
it to a \(C^x\)-\hspace{0pt}{}loop in \(\ensuremath {\mathbb{R}}\xspace^n\).
Condition~\ref{cond:postcomp} assures us that a diffeomorphism on the
target space induces a bijection on the spaces of \(C^x\)-\hspace{0pt}{}loops.
Therefore we have a bijection from \(L^x E\) to \(L^x (S^1
\times \ensuremath {\mathbb{R}}\xspace^n)\). As the trivialisation of \(E\) intertwines the
projection maps, this bijection takes sections to sections and so
induces a bijection \(\Gamma_{S^1}^x(E) \to \Gamma_{S^1}^x(S^1 \times
\ensuremath {\mathbb{R}}\xspace^n)\). Thus the problem is reduced to the case of a trivial vector
bundle.
Now it is clear from the definition of \(C^x\)-\hspace{0pt}{}loops in a
manifold that a loop in a (finite) product is a \(C^x\)-\hspace{0pt}{}loop if
and only if each of the factors is a \(C^x\)-\hspace{0pt}{}loop. Therefore a
loop in \(S^1 \times \ensuremath {\mathbb{R}}\xspace^n\) is a \(C^x\)-\hspace{0pt}{}loop if and only if the
projections to \(S^1\) and to \(\ensuremath {\mathbb{R}}\xspace^n\) are \(C^x\)-\hspace{0pt}{}loops. Now a
loop is a section if and only if it projects to the identity on
\(S^1\) and the identity is smooth, whence a \(C^x\)-\hspace{0pt}{}loop.
Therefore a section of \(S^1 \times \ensuremath {\mathbb{R}}\xspace^n\) is a \(C^x\)-\hspace{0pt}{}section
if and only if the projection to \(\ensuremath {\mathbb{R}}\xspace^n\) produces a
\(C^x\)-\hspace{0pt}{}loop.
Tracing this through shows that the trivialisation does induce a
bijection \(\Gamma_{S^1}^x(E) \to L^x \ensuremath {\mathbb{R}}\xspace^n\).
Two such trivialisations of \(E\) define a diffeomorphism \(\phi
\colon S^1
\times \ensuremath {\mathbb{R}}\xspace^n \to S^1 \times \ensuremath {\mathbb{R}}\xspace^n\) covering the identity on \(S^1\).
We extend this to a smooth map \(\ensuremath {\mathbb{R}}\xspace^2 \times \ensuremath {\mathbb{R}}\xspace^n \to \ensuremath {\mathbb{R}}\xspace^2 \times
\ensuremath {\mathbb{R}}\xspace^n\), viewing \(S^1\) as a submanifold of \(\ensuremath {\mathbb{R}}\xspace^2\). Note that we do
not assume that this extension is a diffeomorphism (an extension to a
diffeomorphism may not exist). The induced map \(L^x \ensuremath {\mathbb{R}}\xspace^n \to L^x
\ensuremath {\mathbb{R}}\xspace^n\) factors as:
\begin{align*}
L^x\ensuremath {\mathbb{R}}\xspace^n &\to L^x(\ensuremath {\mathbb{R}}\xspace^2 \times \ensuremath {\mathbb{R}}\xspace^n) & \gamma &\mapsto
(0,\gamma) \\
L^x(\ensuremath {\mathbb{R}}\xspace^2 \times \ensuremath {\mathbb{R}}\xspace^n) &\to L^x(\ensuremath {\mathbb{R}}\xspace^2
\times \ensuremath {\mathbb{R}}\xspace^n) & (\alpha, \gamma) &\mapsto (\alpha + \iota, \gamma) \\
L^x(\ensuremath {\mathbb{R}}\xspace^2
\times \ensuremath {\mathbb{R}}\xspace^n) &\xrightarrow{\phi_*} L^x(\ensuremath {\mathbb{R}}\xspace^2 \times \ensuremath {\mathbb{R}}\xspace^n) &
(\alpha, \gamma) &\mapsto \phi \circ (\alpha, \gamma) \\
L^x( \ensuremath {\mathbb{R}}\xspace^2 \times \ensuremath {\mathbb{R}}\xspace^n) &\to
L^x \ensuremath {\mathbb{R}}\xspace^n & (\alpha, \gamma) &\mapsto \gamma,
\end{align*}
where \(\iota \colon S^1 \to \ensuremath {\mathbb{R}}\xspace^2\) is the inclusion. The first map is
continuous and linear, hence smooth. The second map is a translation,
hence smooth. The third map is smooth by
condition~\ref{cond:postcomp}. The final map is continuous and
linear, hence smooth. Thus \(\phi_* \colon L^x\ensuremath {\mathbb{R}}\xspace^n \to L^x \ensuremath {\mathbb{R}}\xspace^n\) is
smooth. Applying the same to \(\phi^{-1}\) shows that \(\phi_*\) is a
diffeomorphism, as required.
\end{proof}
Using this we transfer the smooth structure of \(L^x \ensuremath {\mathbb{R}}\xspace^n\) to
\(\Gamma_{S^1}^x(E)\). If we are using the calculus of \cite{akpm},
it is a curious fact that although \(\phi_*\) is a linear
diffeomorphism, it need not be a homeomorphism as it, or its inverse,
need not be continuous. They will, however, be \emph{bounded} maps.
If we assume that the topology on \(L^x \ensuremath {\mathbb{R}}\xspace^n\) is \emph{bornological}
-- a condition that we can readily impose by a standard alteration of
the topology -- then bounded maps are continuous and so we do have a
homeomorphism.
\begin{corollary}
Let \(E \to S^1\) be a finite dimensional orientable smooth vector
bundle. Then \(\Gamma^x_{S^1}(E)\) naturally has the structure of a
convenient vector space and is diffeomorphic to \(L^x \ensuremath {\mathbb{R}}\xspace^n\), where
\(n = \dim E\). \hspace*{\fill}\qedsymbol
\end{corollary}
We can adapt the proof of lemma~\ref{lem:cxsec} to demonstrate the
following properties of \(\Gamma^x_{S^1}(E)\).
\begin{lemma}
\label{lem:chartprelim}
Let \(E,F \to S^1\) be finite dimensional orientable smooth vector
bundles. Let \(U \subseteq E\) and \(V \subseteq F\) be open subsets
of the total space and \(\phi \colon U \to V\) a smooth map covering the
identity on \(S^1\). Let \(\Gamma_{S^1}^x(U) \coloneqq \{\gamma \in
\Gamma_{S^1}^x(E) : \gamma(S^1) \subseteq U\}\), and similarly for
\(V\). Then \(\Gamma_{S^1}^x(U)\) is open in \(\Gamma_{S^1}^x(E)\)
and the induced map \(\gamma \mapsto \phi \circ \gamma\) is a smooth
map \(\Gamma_{S^1}^x(U) \to \Gamma_{S^1}^x(V)\).
\end{lemma}
\begin{proof}
It is sufficient to examine the case where \(E\) and \(F\) are
trivial; say, \(E = S^1 \times \ensuremath {\mathbb{R}}\xspace^m\) and \(F = S^1 \times \ensuremath {\mathbb{R}}\xspace^n\). In
this case we have a topological embedding of \(L^x \ensuremath {\mathbb{R}}\xspace^m\) as an affine
subspace of \(L^x(\ensuremath {\mathbb{R}}\xspace^2 \times \ensuremath {\mathbb{R}}\xspace^m)\) via \(\gamma \mapsto (\iota,
\gamma)\). There is an open set \(W \subseteq \ensuremath {\mathbb{R}}\xspace^2 \times \ensuremath {\mathbb{R}}\xspace^m\) which
restricts to \(U\) on \(S^1 \times \ensuremath {\mathbb{R}}\xspace^m\) and, under the above
embedding, \(\Gamma_{S^1}^x(U)\) is the intersection of \(L^x \ensuremath {\mathbb{R}}\xspace^m\)
with \(L^x W\), hence is open in \(\Gamma_{S^1}^x(E)\).
Now smoothness is a local property, so we first assume that \(\phi \colon U
\to V\) extends to a smooth map \(\ensuremath {\mathbb{R}}\xspace^2 \times \ensuremath {\mathbb{R}}\xspace^m \to \ensuremath {\mathbb{R}}\xspace^2 \times
\ensuremath {\mathbb{R}}\xspace^n\); in this case the same method as in the proof of
lemma~\ref{lem:cxsec} shows that \(\phi_* \colon \Gamma_{S^1}^x(U) \to
\Gamma_{S^1}^x(V)\) is smooth. To deduce the general case we choose a
sequence of open sets \(U_n\) such that \(U = \bigcup U_n\) and
\(\overline{U}_n \subseteq U_{n+1}\). Using bump functions we can define
smooth maps \(\phi_n \colon \ensuremath {\mathbb{R}}\xspace^2 \times \ensuremath {\mathbb{R}}\xspace^m \to \ensuremath {\mathbb{R}}\xspace^2 \times \ensuremath {\mathbb{R}}\xspace^n\) such
that \(\phi_n = \phi\) on \(U_n\). Each \((\phi_n)_*\) is smooth by the
first case, whence \(\phi_*\) is locally smooth and hence smooth.
\end{proof}
\section{Building a Smooth Manifold}
\label{sec:smooth}
In this section we construct the charts for \(L^x M\) and show that
the transition maps are smooth.
\subsection{Charts}
\label{sec:charts}
The key tool for defining the charts for the loop space is the notion
of a \emph{local addition} on \(M\), cf~\cite[\S 42.4]{akpm}:
\begin{defn}
\label{def:locadd}
Let \(U \subseteq M\) be an open subset. A \emph{local
addition over \(U\)} for \(M\) is a smooth map \(\eta \colon T U \to U\)
such that
\begin{enumerate}
\item the composition of \(\eta\) with the zero section is the
identity on \(U\), and
\item there exists an open neighbourhood \(V\) of the diagonal in
\(U \times U\) such that the map \(\pi \times \eta \colon T U \to U \times U\) is a
diffeomorphism onto \(V\).
\end{enumerate}
For a subset \(A \subseteq M\), a \emph{local addition for \(A\)} is a
local addition defined over a neighbourhood of \(A\). If \(f \colon X \to
M\) is a map, a \emph{local addition for \(f\)} is a local addition
defined over a neighbourhood of the image of \(f\).
\end{defn}
This is closely related to what is called a local addition in \cite[\S
42.4]{akpm} but is not quite the same. One difference, that we use
the whole of the fibres, is for simplicity whilst the other
difference, that we do not assume it to be defined on the whole of
\(M\), is to make later analysis easier. The following result is
contained in the discussion following the definition of a local
addition in \cite[\S 42.4]{akpm}:
\begin{proposition}
Any finite dimensional manifold without boundary admits a local
addition defined over the whole of the manifold. \hspace*{\fill}\qedsymbol
\end{proposition}
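Although we make no use of it below, it may help to fix ideas to recall
one standard construction, via Riemannian geometry: if \(\exp \colon T M
\to M\) is the exponential map of a complete Riemannian metric on \(M\)
then \(\pi \times \exp\) restricts to a diffeomorphism from a
neighbourhood \(W\) of the zero section onto a neighbourhood of the
diagonal. Composing with a fibre-preserving diffeomorphism \(\sigma
\colon T M \to W\) which is the identity near the zero section yields a
local addition defined over the whole of \(M\):
\[
\eta \coloneqq \exp \circ\, \sigma \colon T M \to M.
\]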
We start by constructing our chart maps for \(L^x M\). They
will be anchored at \emph{smooth} loops rather than arbitrary
elements of \(L^x M\). This is to ensure that all the maps
between finite dimensional objects are smooth so we don't need to
consider \(C^x\)-\hspace{0pt}{}maps with arbitrary domains.
\begin{lemma}
Let \(\alpha \colon S^1 \to M\) be a \emph{smooth} loop. Let \(\eta
\colon T U \to U\) be a local addition for \(\alpha\) with
neighbourhood \(V\) of the diagonal. Define the set \(U_\alpha
\subseteq L^x M\) by:
\[
U_\alpha \coloneqq \{\beta \in L^x M \colon (\alpha, \beta) \in L^x V\}.
\]
Then \(\pi \times \eta \colon T U \to V\) induces a bijection from
\(\Gamma^x_{S^1}(\alpha^* T M)\) to \(U_\alpha\). Under this
bijection, the zero section of \(\alpha^* T M\) maps to
\(\alpha\).
\end{lemma}
\begin{proof}
By definition, the image of \(\alpha\) lies in \(U\). As \(U\) is an
open subset of \(M\), the bundles \(\alpha^* T M\) and \(\alpha^* T
U\) are naturally identified. We claim that there is a diagram:
\[
\xymatrix{
L^x T U \ar[r]^{(\pi \times \eta)_*} &
L^x V \\
\Gamma^x_{S^1}(\alpha^* T M) \ar[u] &
U_\alpha \ar[u]_{\beta \mapsto (\alpha, \beta)},
}
\]
such that the map at the top is a bijection and takes the image of the
left-hand vertical map to the image of the right-hand one. Both of
the vertical maps are injective -- the right-hand one obviously so, we
shall investigate the left-hand one in a moment -- and thus the
bijection \((\pi \times \eta)_*\) induces a bijection from the lower
left to the lower right.
As \(T U\) is an open subset of \(T M\) and \(V\) of \(M \times M\),
both are smooth manifolds. The map \(\pi \times \eta \colon T U \to V\) is
a diffeomorphism and hence induces a bijection on the sets of
\(C^x\)-\hspace{0pt}{}maps into each. This is the map we have labelled
\((\pi \times \eta)_*\).
The left-hand vertical map, \(\Gamma^x_{S^1}(\alpha^* T M) \to
L^x T U\), is defined as follows: the total space \(\alpha^* T
M = \alpha^* T U\) is:
\[
\{(t,v) \in S^1 \times T U \colon \alpha(t) = \pi(v)\}.
\]
It is an embedded submanifold of \(S^1 \times T U\). Therefore by
proposition~\ref{prop:cxsub}, a map into \(\alpha^* T M\) is a
\(C^x\)-\hspace{0pt}{}map if and only if the compositions with the projections
to \(S^1\) and to \(T U\) are both \(C^x\)-\hspace{0pt}{}maps. Now a map \(S^1
\to \alpha^* T U\) is a section if and only if it projects to the
identity on \(S^1\). Therefore there is a bijection (of sets):
\begin{align*}
\Gamma^x_{S^1}(\alpha^* T M) &\cong \{\beta \in L^x T U \colon (t,
\beta(t)) \in \alpha^* T M \text{ for all } t \in S^1\} \\
&= \{\beta \in L^x T U \colon \alpha(t) = \pi \beta(t) \text{ for all
} t \in S^1\} \\
&= \{\beta \in L^x T U \colon \pi_* \beta = \alpha\}.
\end{align*}
In particular, the map \(\Gamma^x_{S^1}(\alpha^* T M) \to L^x T U\) is
injective.
We apply \((\pi \times \eta)_*\) to the image of
\(\Gamma^x_{S^1}(\alpha^* T M)\) and see that it is the preimage under
this map of everything of the form \((\alpha, \gamma)\) in \(L^x
V\). By construction, \(\gamma \in L^x M\) is such that
\((\alpha, \gamma) \in L^x V\) if and only if \(\gamma \in
U_\alpha\). Hence \((\pi \times \eta)_*\) identifies the image of
\(\Gamma^x_{S^1}(\alpha^* T M)\) with \(\{\alpha\} \times U_\alpha\).
Finally, note that the zero section of \(\alpha^* T M\) maps to the
image of \(\alpha\) under the zero section of \(T U\). Since \(\eta\)
composed with the zero section of \(T U\) is the identity on \(U\),
the image of the zero section of \(\alpha^* T M\) in \(V\) is
\((\alpha,\alpha)\) as required which projects to \(\alpha\) in
\(U_\alpha\).
\end{proof}
\begin{defn}
Let \(\Psi_\alpha \colon \Gamma^x_{S^1}(\alpha^* T M) \to U_\alpha\) be
the resulting bijection.
\end{defn}
In detail, this map is as follows: let \(\beta \in
\Gamma^x_{S^1}(\alpha^* T M)\) and let \(\tilde{\beta}\) be the
corresponding loop in \(T U\), so \(\beta(t) = (t,
\tilde{\beta}(t))\). Then \((\pi \times \eta)_*(\tilde{\beta}) =
(\alpha, \eta_*(\tilde{\beta}))\) so \(\Psi_\alpha(\beta) =
\eta_*(\tilde{\beta})\).
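As a simple check on these formulae, consider the flat case \(M =
\ensuremath {\mathbb{R}}\xspace^n\) with the local addition \(\eta(v) = \pi(v) + v\), for which
\(V = \ensuremath {\mathbb{R}}\xspace^n \times \ensuremath {\mathbb{R}}\xspace^n\). Here \(\alpha^* T \ensuremath {\mathbb{R}}\xspace^n\) is trivial,
\(U_\alpha\) is the whole of \(L^x \ensuremath {\mathbb{R}}\xspace^n\), and the chart map is simply
translation by \(\alpha\):
\[
\Psi_\alpha(\beta)(t) = \alpha(t) + \tilde{\beta}(t).
\]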
The domains of these charts are naturally convenient vector spaces.
On the other end, we need to show that the codomains cover \(L^x M\).
\begin{lemma}
The codomains of the charts cover \(L^x M\).
\end{lemma}
\begin{proof}
This follows from the density of \(L M\) in \(L^0 M\).
We choose a local addition defined over the whole of \(M\), \(\eta \colon T
M \to M\), with corresponding neighbourhood \(V \subseteq M \times M\)
of the diagonal. By density, for any \(\beta \in L^0 M\) there is
some \(\alpha \in L M\) uniformly close enough to \(\beta\) that
\((\alpha, \beta) \in L^0 V\). In particular, if \(\beta \in L^x M\)
then \(\beta \in U_\alpha\). Hence the sets \(U_\alpha\) cover \(L^x M\).
\end{proof}
\subsection{Transitions}
\label{sec:trans}
Having defined the charts, we turn to the transition functions. Let
\(\alpha_1, \alpha_2\) be smooth loops in \(M\). Let \(\eta_1 \colon T U_1
\to U_1\) and \(\eta_2 \colon T U_2 \to U_2\) be local additions for
\(\alpha_1\) and \(\alpha_2\) respectively, with corresponding open
sets \(V_1 \subseteq U_1 \times U_1\) and \(V_2 \subseteq U_2 \times
U_2\). Let \(\Psi_1 \colon \Gamma^x_{S^1}({\alpha_1}^* T M) \to
U_{\alpha_1}\), \(\Psi_2 \colon \Gamma^x_{S^1}({\alpha_2}^* T M) \to
U_{\alpha_2}\) be the corresponding charts. Let \(U_{1 2} \coloneqq
U_{\alpha_1} \cap U_{\alpha_2}\).
\begin{lemma}
Let \(W_{1 2} \subseteq {\alpha_1}^* T M\) be the set:
\[
\{(t,v) \in {\alpha_1}^* T M \colon (\alpha_2(t), \eta_1(v)) \in V_2\}.
\]
Then \(W_{1 2}\) is open and \({\Psi_1}^{-1}(U_{1 2}) =
\Gamma^x_{S^1}(W_{1 2})\).
\end{lemma}
Here \(\Gamma^x_{S^1}(W_{1 2})\) is the set of sections of
\({\alpha_1}^* T M\) which take values in \(W_{1 2}\).
\begin{proof}
The set \(W_{1 2}\) is open as it is the preimage of an open set via
a continuous map. To show the second statement we need to prove
that \(\gamma \in \Gamma^x_{S^1}({\alpha_1}^* T M)\) takes values in
\(W_{1 2}\) if and only if \(\Psi_1(\gamma) \in U_{\alpha_2}\) (by
construction we already have \(\Psi_1(\gamma) \in U_{\alpha_1}\)).
So let \(\gamma \in \Gamma^x_{S^1}({\alpha_1}^* T M)\) and let
\(\tilde{\gamma}\) be the image of \(\gamma\) in \(L^x T U_1\).
Thus \(\gamma(t) = (t, \tilde{\gamma}(t))\). Now \(\gamma\) takes
values in \(W_{1 2}\) if and only if:
\[
\big(\alpha_2(t), \eta_1(\tilde{\gamma}(t))\big) \in V_2
\]
for all \(t \in S^1\). That is to say, if and only if \((\alpha_2,
{\eta_1}_*(\tilde{\gamma})) \in L^x V_2\). By definition, this
is equivalent to the statement that \({\eta_1}_*(\tilde{\gamma}) \in
U_{\alpha_2}\). Now \({\eta_1}_*(\tilde{\gamma}) = \Psi_1(\gamma)\)
so \(\gamma\) takes values in \(W_{1 2}\) if and only if
\(\Psi_1(\gamma) \in U_{\alpha_1} \cap U_{\alpha_2}\).
\end{proof}
\begin{proposition}
\label{prop:trans}
The transition function:
\[
\Phi_{1 2} \coloneqq {\Psi_2}^{-1} \Psi_1 \colon {\Psi_1}^{-1}(U_{1 2}) \to
{\Psi_2}^{-1}(U_{1 2})
\]
is a diffeomorphism.
\end{proposition}
\begin{proof}
We define \(W_{2 1} \subseteq {\alpha_2}^* T M\) as the set of
\((t,v) \in {\alpha_2}^* T M\) with \((\alpha_1(t), \eta_2(v)) \in
V_1\). As for \(W_{1 2}\), \({\Psi_2}^{-1}(U_{1 2}) =
\Gamma^x_{S^1}(W_{2 1})\).
The idea of the proof is to set up a diffeomorphism between \(W_{1
2}\) and \(W_{2 1}\). Our assumptions on \(C^x\)-\hspace{0pt}{}maps say
that the resulting map on sections is a diffeomorphism. Finally, we
show that this diffeomorphism is the transition function defined in
the statement of this proposition.
Let \(\theta_1 \colon W_{1 2} \to T M\) be the map:
\[
\theta_1(t,v) = (\pi \times \eta_2)^{-1}(\alpha_2(t), \eta_1(v)).
\]
The definition of \(W_{1 2}\) ensures that \((\alpha_2(t),
\eta_1(v)) \in V_2\) for \((t,v) \in W_{1 2}\) and this is the image
of \(\pi \times \eta_2\). Hence \(\theta_1\) is well-defined. Define
\(\theta_2 \colon W_{2 1} \to T M\) similarly. These are both smooth maps.
Notice that \(\pi(\pi \times \eta_i)^{-1} \colon V_i \subseteq U_i \times
U_i \to U_i\) is the projection onto the first factor and \(\eta_i(\pi
\times \eta_i)^{-1} \colon V_i \to U_i\) is the projection onto the second.
Thus \(\pi \theta_1(t,v) = \alpha_2(t)\). Hence \(\theta_1 \colon W_{1
2} \to T M\) is such that \((t, \theta_1(t,v)) \in {\alpha_2}^* T
M\) for all \((t,v) \in W_{1 2}\). Then:
\[
\big(\alpha_1(t), \eta_2(\theta_1(t,v))\big) = (\alpha_1(t), \eta_1(v))
\in V_1
\]
so \((t, \theta_1(t,v)) \in W_{2 1}\). Hence we have a map
\(\phi_{1 2} \colon W_{1 2} \to W_{2 1}\) given by:
\[
\phi_{1 2}(t,v) = (t, \theta_1(t,v)).
\]
Similarly we have a map \(\phi_{2 1} \colon W_{2 1} \to W_{1 2}\). These
are both smooth since the composition with the inclusion into \(S^1
\times T M\) is smooth.
Consider the composition \(\phi_{2 1}\phi_{1 2}(t,v)\). Expanding
this out yields:
\begin{align*}
\phi_{2 1}\phi_{1 2}(t,v) &= \phi_{2 1}(t, \theta_1(t,v)) \\
&= (t, \theta_2(t,\theta_1(t,v))) \\
&= \big(t, (\pi \times \eta_1)^{-1}(\alpha_1(t),
\eta_2(\theta_1(t,v))) \big) \\
&= \big(t, (\pi \times \eta_1)^{-1}(\alpha_1(t), \eta_1(v)) \big)
\\
&= \big(t, (\pi \times \eta_1)^{-1}(\pi(v), \eta_1(v)) \big) \\
&= (t, v).
\end{align*}
The penultimate line is because \((t,v) \in {\alpha_1}^* T M\) so
\(\pi(v) = \alpha_1(t)\).
Hence \(\phi_{2 1}\) is the inverse of \(\phi_{1 2}\) and so
\(\phi_{1 2} \colon W_{1 2} \to W_{2 1}\) is a diffeomorphism. Thus by
lemma~\ref{lem:chartprelim}, the map \({\phi_{1 2}}_*\) is a
diffeomorphism from \({\Psi_1}^{-1}(U_{1 2})\) to
\({\Psi_2}^{-1}(U_{1 2})\). We just need to show that this is
the transition function. It is sufficient to show that \(\Psi_2
{\phi_{1 2}}_* = \Psi_2 \Phi_{1 2}\). The right-hand side is, by
definition, \(\Psi_1\), which satisfies:
\[
\Psi_1(\gamma)(t) = \eta_1(\tilde{\gamma}(t))
\]
where \(\tilde{\gamma} \colon S^1 \to T M\) is such that \(\gamma(t)
= (t, \tilde{\gamma}(t))\). On the other side:
\begin{align*}
{\phi_{1 2}}_*(\gamma)(t) &= \phi_{1 2}(\gamma(t)) \\
&= \big(t, \theta_1(t, \tilde{\gamma}(t))\big) \\
&= \big(t, (\pi \times \eta_2)^{-1}(\alpha_2(t),
\eta_1(\tilde{\gamma}(t))) \big),
\intertext{hence:}
\Psi_2 {\phi_{1 2}}_*(\gamma)(t) &= \eta_2(\pi \times
\eta_2)^{-1}( \alpha_2(t), \eta_1(\tilde{\gamma}(t))) \\
&= \eta_1(\tilde{\gamma}(t)),
\end{align*}
as required. Thus \({\phi_{1 2}}_* = \Phi_{1 2}\) and so the
transition functions are diffeomorphisms.
\end{proof}
We therefore have a smooth atlas for \(L^x M\).
\section{Topology}
\label{sec:topology}
Following \cite[ch 27]{akpm} we proceed to topologise \(L^x M\) with
the inductive topology for the chart maps. Our concern now is to
determine some topological properties of \(L^x M\). The key is
theorem~\ref{th:submfd}. Once we have proved this then the passage
from \(L^x \ensuremath {\mathbb{R}}\xspace\) to \(L^x M\) is straightforward. We also show that
the inclusion \(L M \to L^x M\) is a homotopy equivalence.
\subsection{Submanifolds}
\begin{proposition}
Let \(M, N\) be finite dimensional smooth manifolds with \(M\) an
embedded submanifold of \(N\). Then \(L^x M\) is an embedded
submanifold of \(L^x N\).
\end{proposition}
\begin{proof}
By the tubular neighbourhood theorem there is an open neighbourhood
\(U\) of \(M\) in \(N\), a smooth vector bundle \(E \to M\), and a
diffeomorphism \(\phi \colon E \to U\) which maps the zero section to the
inclusion of \(M\) in \(U\). Let \(\eta \colon T M \to M\) be a local
addition over \(M\) with neighbourhood \(V \subseteq M \times M\).
Let \(E_V \subseteq E \times E\) be the restriction of \(E \times E\)
to \(V\). Choose a connection on \(E\). Let \((u,v) \in V\). Let
\(p = (\pi \times \eta)^{-1}(u,v)\). This lies in \(T_u M\) so the
path \(t \mapsto t p\) goes from the zero vector in \(T_u M\) to
\(p\). Applying \(\eta\) results in a path from \(\eta(0_u) = u\) to
\(\eta(p) = v\). Let \(P(u,v) \colon E_u \to E_v\) be the operator defined
by parallel transport along this path.
Using the connection a point in \(T E\) can be thought of as a
quadruple \((u,p,v,w)\) where \(u \in M\), \(p \in T_u M\), and \(v, w
\in E_u\) with the projection \(T E \to E\) being \((u,p,v,w) \to
(u,v)\) (we include \(u\) in the notation to emphasise the fibre).
Define \(\eta^E \colon T E \to E\) by \(\eta^E(u,p,v,w) =
(\eta(p),P(u,\eta(p))(v + w))\). Then \(\pi^E \times \eta^E \colon T E \to
E \times E\) is:
\[
(\pi^E \times \eta^E)(u,p,v,w) =
\big((u,v),(\eta(p),P(u,\eta(p))(v + w)) \big).
\]
The projection of this to \(M \times M\) is \((u,\eta(p))\); so the
image of \(\pi^E \times \eta^E\) is in \(E_V\). Since the map \((u,p)
\to (u,\eta(p))\) is onto \(V\), varying \(v\) and \(w\) shows that we
can get all of \(E_V\). The inverse map is:
\[
\big((u,v),(x,w)\big) \mapsto \big(u, (\pi \times \eta)^{-1}(u,x), v,
P(u,x)^{-1}(w) - v\big).
\]
This is smooth, so \(\pi^E \times \eta^E\) is a diffeomorphism onto
\(E_V\) and thus defines a local addition over the whole of \(E\).
Using the diffeomorphism \(E \cong U\) we transfer this to \(U\) and
so get a local addition for \(M \subseteq N\). The charts that this
defines make up part of the smooth atlas for \(L^x N\). Taking a chart
based at a smooth loop \(\alpha\) in \(M\), we see that the inclusion
of \(L^x M\) in \(L^x N\) looks like the inclusion of
\(\Gamma_{S^1}^x(\alpha^* T M)\) in \(\Gamma_{S^1}^x(\alpha^* T N)\).
This, in turn, looks like the inclusion of \(L^x \ensuremath {\mathbb{R}}\xspace^k\) in
\(L^x \ensuremath {\mathbb{R}}\xspace^n\). Hence \(L^x M\) is an embedded submanifold
of \(L^x N\).
\end{proof}
\begin{corollary}
Let \(M\) be a closed finite dimensional smooth manifold. Then the
following properties hold for \(L^x M\) if they hold for
\(L^x \ensuremath {\mathbb{R}}\xspace^n\): separable, metrisable, Lindel\"of, paracompact,
normal, smoothly regular, smoothly paracompact, and smoothly normal.
\end{corollary}
\begin{proof}
There is an embedding of \(M\) as a submanifold of \(\ensuremath {\mathbb{R}}\xspace^n\). As it is
compact, the image is closed in \(\ensuremath {\mathbb{R}}\xspace^n\). We therefore have an
embedding of \(L^x M\) in \(L^x \ensuremath {\mathbb{R}}\xspace^n\) which is also
closed. The properties listed are all inherited by closed subspaces.
\end{proof}
\subsection{Homotopy Equivalence}
\label{sec:htpy}
One remarkable fact about the spaces \(L M\) and \(L^0 M\) is that
they are homotopy equivalent. The standard way to prove this is to find
\emph{mollifiers} which ``smooth out'' continuous functions. This
approach does not work with an arbitrary family of maps. The paths
defined by the homotopy lie in the space of smooth loops at all points
except one end-point; therefore if smooth loops are not dense in the
given family of maps this homotopy cannot be continuous. However
using the fact that \(L^x M\) is a smooth manifold one can still show
that the inclusion \(L M \to L^x M\) is a homotopy
equivalence.
The first step is to define the reverse map. The basic idea is to use
a mollifier but we have to be a bit selective. Let \(M\) be a closed
smooth finite dimensional manifold. Via an embedding, regard \(M\) as
a submanifold of some Euclidean space, \(\ensuremath {\mathbb{R}}\xspace^n\). By the tubular
neighbourhood theorem, there is a neighbourhood of \(M\) in \(\ensuremath {\mathbb{R}}\xspace^n\)
which retracts onto \(M\). That is, there is some open neighbourhood
\(U \subseteq \ensuremath {\mathbb{R}}\xspace^n\) of \(M\) and a map \(p \colon U \to M\) which is
the identity on \(M\). Let \(\eta \colon T M \to M\) be a local
addition defined over the whole of \(M\) and let \(V\) be the image of
\(\pi \times \eta\).
As \(M\) is compact we can find \(\mu > 0\) such that if \(x,y
\in M\) are such that \(\norm[x - y] < \mu\) then \((x,y) \in
V\). We can also find \(\epsilon > 0\) such that if \(x \in M\) and
\(y \in \ensuremath {\mathbb{R}}\xspace^n\) are such that \(\norm[x - y] < \epsilon\) then \(y \in
U\) and \(\norm[x - p(y)] < \mu\).
\begin{lemma}
There is a continuous function \(\delta \colon L^0 \ensuremath {\mathbb{R}}\xspace^n \to \ensuremath {\mathbb{R}}\xspace\) such
that for each \(\gamma \in L^0 \ensuremath {\mathbb{R}}\xspace^n\), whenever \(\abs{s - t} <
\delta(\gamma)\) we have \(\norm[\gamma(s) - \gamma(t)] < \epsilon\).
\end{lemma}
\begin{proof}
Let \(\gamma \in L^0 \ensuremath {\mathbb{R}}\xspace^n\). Then there is some \(\delta_\gamma
> 0\) such that whenever \(\abs{s - t} < \delta_\gamma\),
\(\norm[\gamma(s) - \gamma(t)] < \epsilon/3\). Let \(\beta\) be such
that \(\norm[\beta - \gamma]_\infty < \epsilon/3\). Then whenever
\(\abs{s - t} < \delta_\gamma\),
\begin{align*}
\norm[\beta(s) - \beta(t)] &\le \norm[\beta(s) - \gamma(s)] +
\norm[\gamma(s) - \gamma(t)] + \norm[\beta(t) - \gamma(t)]\\
&\le
2 \norm[\beta - \gamma]_\infty + \norm[\gamma(s) - \gamma(t)] \\
&<
\epsilon.
\end{align*}
Now \(L^0 \ensuremath {\mathbb{R}}\xspace^n\) is metrisable, hence paracompact and Hausdorff.
It therefore admits partitions of unity. Choose a partition,
\(\{\tau_\lambda \colon \lambda \in \Lambda\}\), subordinate to the cover
of open balls of radius \(\epsilon/3\). For each \(\lambda \in
\Lambda\) choose \(\gamma_\lambda\) such that the support of
\(\tau_\lambda\) is within the \(\epsilon/3\)-\hspace{0pt}{}ball around
\(\gamma_\lambda\). Let \(\delta_\lambda \coloneqq
\delta_{\gamma_\lambda}\). Define \(\delta \colon L^0 \ensuremath {\mathbb{R}}\xspace^n \to \ensuremath {\mathbb{R}}\xspace\)
by:
\[
\delta(\gamma) = \sum_{\lambda \in \Lambda} \delta_\lambda
\tau_\lambda(\gamma).
\]
For \(\gamma \in L^0 \ensuremath {\mathbb{R}}\xspace^n\), consider the set \(\Lambda(\gamma)
\coloneqq \{\lambda : \tau_\lambda(\gamma) \ne 0\}\). This set is finite and
if \(\lambda \in \Lambda(\gamma)\) then \(\norm[\gamma -
\gamma_\lambda]_\infty < \epsilon/3\). As \(\delta(\gamma)\) is a convex
combination of \(\{\delta_\lambda : \lambda \in \Lambda(\gamma)\}\),
there is some \(\lambda \in \Lambda(\gamma)\) with \(\delta(\gamma)
\le \delta_\lambda\). Whereupon we have that if \(\abs{s - t} <
\delta(\gamma)\), \(\norm[\gamma(s) - \gamma(t)] < \epsilon\) as
required.
\end{proof}
Using this, we define a continuous map \(R \colon L^0 \ensuremath {\mathbb{R}}\xspace^n \to
L \ensuremath {\mathbb{R}}\xspace^n\) with the property that \(\norm[\gamma -
R(\gamma)]_\infty < \epsilon\) for all \(\gamma\). Let \(\phi \colon
\ensuremath {\mathbb{R}}\xspace \to \ensuremath {\mathbb{R}}\xspace\) be a smooth bump function with support in \([-1,1]\) and
\(\int_\ensuremath {\mathbb{R}}\xspace \phi = 1\). For \(r > 0\) let \(\phi_r \colon \ensuremath {\mathbb{R}}\xspace \to \ensuremath {\mathbb{R}}\xspace\) be the
function \(t \mapsto \phi(t/r)/r\). This has support in \([-r,r]\)
and satisfies \(\int_\ensuremath {\mathbb{R}}\xspace \phi_r = 1\). Define \(R \colon L^0 \ensuremath {\mathbb{R}}\xspace^n \to
L \ensuremath {\mathbb{R}}\xspace^n\) by:
\[
\gamma \mapsto \gamma * \phi_{\delta(\gamma)}.
\]
By this we mean that each component of \(\gamma\) is regarded as a map
with domain \ensuremath {\mathbb{R}}\xspace and is convoluted with the map
\(\phi_{\delta(\gamma)}\).
\begin{lemma}
The map \(R \colon L^0 \ensuremath {\mathbb{R}}\xspace^n \to L \ensuremath {\mathbb{R}}\xspace^n\) is continuous and
satisfies \(\norm[\gamma - R(\gamma)]_\infty < \epsilon\) for all
\(\gamma\).
\end{lemma}
\begin{proof}
Let \(\gamma \in L^0 \ensuremath {\mathbb{R}}\xspace\) and \(\beta \in \ensuremath {C^\infty}\xspace(\ensuremath {\mathbb{R}}\xspace,\ensuremath {\mathbb{R}}\xspace)\). Then:
\[
(\gamma * \beta)(t) = \int_\ensuremath {\mathbb{R}}\xspace \gamma(s) \beta(t - s) d s.
\]
Hence:
\begin{align*}
(\gamma * \beta)(t + 1) &= \int_\ensuremath {\mathbb{R}}\xspace \gamma(s) \beta(t + 1 - s) d s \\
&=
\int_\ensuremath {\mathbb{R}}\xspace \gamma(\tilde{s} + 1) \beta(t - \tilde{s}) d \tilde{s} \\
&=
\int_\ensuremath {\mathbb{R}}\xspace \gamma(\tilde{s}) \beta(t - \tilde{s}) d \tilde{s} \\
&= (\gamma
* \beta)(t),
\end{align*}
whence \(R(\gamma)\) is periodic so can be viewed as a map with domain
\(S^1\). The convolution of a continuous function by a smooth
function is again smooth so \(\gamma * \beta \in L \ensuremath {\mathbb{R}}\xspace\), with
derivative \(D (\gamma * \beta ) = \gamma * D\beta\). Hence the image
of \(R\) lies in \(L \ensuremath {\mathbb{R}}\xspace^n\) as required.
To show that it is continuous, it is sufficient to show that the map
\(L^0 \ensuremath {\mathbb{R}}\xspace \times (0,\infty) \to L \ensuremath {\mathbb{R}}\xspace\), \((\gamma, r)
\mapsto \gamma * \phi_r\) is continuous. Now for bounded maps on \ensuremath {\mathbb{R}}\xspace,
the map \((\alpha, \beta) \mapsto \alpha * \beta\) is bilinear and
satisfies:
\[
\norm[\alpha * \beta]_\infty \le \norm[\alpha]_\infty
\norm[\beta]_\infty
\]
where, by abuse of notation, we have used \(\norm_\infty\) for the
supremum norm for bounded functions on \ensuremath {\mathbb{R}}\xspace. Hence for \(\alpha, \beta
\in L^0 \ensuremath {\mathbb{R}}\xspace\), \(r,s \in (0,\infty)\), and \(k \in \ensuremath {\mathbb{N}}\xspace\),
\[
\norm[\alpha * (\phi_r)^{(k)} - \beta * (\phi_s)^{(k)}]_\infty \le
\norm[\alpha]_\infty \norm[(\phi_r)^{(k)} - (\phi_s)^{(k)}]_\infty +
\norm[\alpha - \beta]_\infty \norm[(\phi_s)^{(k)}]_\infty.
\]
This shows that provided \(\alpha\) and \(\beta\) are close and
provided \((\phi_r)^{(k)}\) and \((\phi_s)^{(k)}\) are close then
\((\alpha * \phi_r)^{(k)}\) and \((\beta * \phi_s)^{(k)}\) are close.
Thus to show that \(R\) is continuous it is sufficient to observe that
the map \(r \to \phi_r\) is continuous as a path into \(\ensuremath {C^\infty}\xspace(\ensuremath {\mathbb{R}}\xspace,\ensuremath {\mathbb{R}}\xspace)\)
and this is straightforward.
Finally, let \(\gamma \in L^0 \ensuremath {\mathbb{R}}\xspace^n\). For \(t \in S^1\), as
\(\int_\ensuremath {\mathbb{R}}\xspace \phi_r = 1\), \(\gamma(t) - R(\gamma)(t)\) is given by:
\[
\gamma(t) - \int_\ensuremath {\mathbb{R}}\xspace \gamma(s) \phi_{\delta(\gamma)}(t - s) d s = \int_\ensuremath {\mathbb{R}}\xspace (\gamma(t)
- \gamma(s)) \phi_{\delta(\gamma)}(t - s) d s.
\]
Now \(\phi_{\delta(\gamma)}(t - s)\) is zero outside \([t -
\delta(\gamma), t + \delta(\gamma)]\) and on this interval
\(\norm[\gamma(t) - \gamma(s)] < \epsilon\), by definition of
\(\delta(\gamma)\). Hence \(\norm[\gamma(t) - R(\gamma)(t)] <
\epsilon\) as required.
\end{proof}
\begin{corollary}
\label{cor:manmoll}
There is a continuous map \(R_M \colon L^0 M \to L M\) with
the property that for all \(\gamma \in L^0 M\), \((\gamma(t),
R_M(\gamma)(t)) \in V\) for all \(t \in S^1\).
\end{corollary}
\begin{proof}
We restrict the map \(R\) to the domain \(L^0 M\). By
construction, \(R(\gamma)\) takes values in \(U\), the neighbourhood
of \(M\). The map \(R_M\) is the composition of this with the
projection \(p \colon U \to M\). The required property holds because of
the choices made.
\end{proof}
\begin{theorem}
Let \(L^x \ensuremath {\mathbb{R}}\xspace\) be a class of maps satisfying the assumptions of
section~\ref{sec:intro}. Then the inclusion \(L M \to
L^x M\) is a homotopy equivalence.
\end{theorem}
\begin{proof}
The reverse map is the composition of the inclusion of \(L^x M\)
in \(L^0 M\) with the map \(R_M\). We will denote this again by
\(R_M\).
By construction, for \(\gamma \in L^x M\), \((R_M(\gamma),
\gamma)\) takes values in \(V\). Define \(H \colon L^x M \times
[0,1] \to L^x M\) by:
\[
H(\gamma,s) = \pi_2 \eta(s \eta^{-1}(R_M(\gamma), \gamma)),
\]
where in \(T M\) we have used the natural \(\ensuremath {\mathbb{R}}\xspace\)-\hspace{0pt}{}action on the
fibres and \(\pi_2\) is the projection onto the second factor. This
is continuous as it is the composition of continuous maps. For \(s =
1\) we have \(H(\gamma,1) = \pi_2(R_M(\gamma),\gamma) = \gamma\). For
\(s = 0\), \(H(\gamma,0) = \pi_2(R_M(\gamma), R_M(\gamma)) =
R_M(\gamma)\). Hence \(H\) is the required homotopy.
The same homotopy restricts to smooth loops, showing that the composition
\(L M \to L^x M \to L M\) is also homotopic to the identity.
\end{proof}
\subsection{Based Loops}
\label{sec:based}
All of our discussion so far holds for based loops as well as free
loops. For the main part, the only modification needed is the
insertion of the word ``based'' at appropriate points. The only place
where more is required is in proving the homotopy
equivalence. The problem there is that the result of applying a
mollifier to a based continuous loop may no longer be based. The fix
is simple, however, since at that point in the construction of the
homotopy equivalence we are dealing with loops in \(\ensuremath {\mathbb{R}}\xspace^n\). We can
therefore define the based mollifier \(R_0\) in terms of the map \(R\)
defined in section~\ref{sec:htpy} as
\[
R_0(\gamma) = R(\gamma) - R(\gamma)(0)
\]
(we are tacitly assuming that the basepoint of \(\ensuremath {\mathbb{R}}\xspace^n\) is the
origin). To ensure that the resulting map \(R_0(\gamma)\) has the
properties analogous to corollary~\ref{cor:manmoll} we need to replace
\(\epsilon\) by \(\epsilon/2\) in section~\ref{sec:htpy}.
The relationship between based loops and free loops is an important
one. In homotopy theory there is a fibration
\[
\Omega^0 M \to L^0 M \to M
\]
and so, via the homotopy equivalences, we can deduce that
\[
\Omega^x M \to L^x M \to M
\]
is also a fibration. This description, however, is not one in the
flavour of differential topology. A more suitable description would
be that it is a locally trivial fibre bundle. This will follow from
the following theorem. Let \(e_0 \colon L^x M \to M\) be the
evaluation map at \(0\), \(\gamma \mapsto \gamma(0)\).
\begin{theorem}
Let \(M, P\) be smooth finite dimensional orientable manifolds
without boundary. Suppose that \(P\) is an embedded submanifold of
\(M\) with tubular neighbourhood \(U \subseteq M\) and normal bundle
\(N \to P\). Define
\[
L^x_P M \coloneqq \{\gamma \in L^x M : \gamma(0) \in P\}
\]
and \(L^x_U M\) similarly. Then \(L^x_P M\) is an embedded
submanifold of \(L^x M\) with tubular neighbourhood \(L^x_U M\) and
normal bundle \({e_0}^* N\). Moreover, the evaluation map \(e_0
\colon L^x M \to M\) takes the quadruple \((L^x M, L^x_P M ,L^x_U M,
{e_0}^* N)\) to \((M, P, U, N)\) preserving all the structure.
\end{theorem}
\begin{proof}
We omit the proof that \(L^x_P M\) is a smooth submanifold of \(L^x
M\) as this is a simple modification of the work of previous sections;
the model space is \(L^x_{\ensuremath {\mathbb{R}}\xspace^k} \ensuremath {\mathbb{R}}\xspace^n\). The case of \(L^x_U M\) is
simpler as it is an open submanifold of \(L^x M\).
Let \(\pi \colon N \to P\) be the projection and let \(\phi \colon U
\to N\) be the diffeomorphism. Our strategy is to find a continuous
map \(\Psi \colon U \to \diff_c(U)\), where \(\diff_c(U)\) is
diffeomorphisms of \(U\) that are the identity outside a compact set,
such that \(\Psi_u(u) = \pi\phi(u)\). Using this, we define the
diffeomorphism \(L_U^x M \to {e_0}^* N\) by
\[
\alpha \mapsto \left(\Psi_{\alpha(0)}(\alpha), \phi(\alpha(0))\right)
\]
with inverse
\[
(\beta,v) \mapsto \Psi_{\phi^{-1}(v)}^{-1}(\beta).
\]
That these are smooth follows from the fact that they are defined
entirely in terms of smooth maps of the original manifolds and these
induce smooth maps on our loop spaces by assumption.
Thus we need to find the map \(\Psi \colon U \to \diff_c(U)\) with the
appropriate conditions. We actually define the map for \(N\) as there
we can use the vector bundle structure and then transfer it to \(U\)
via the diffeomorphism \(\phi\).
The first stage of defining \(\Psi\) is to define a map \(s \colon N
\to \Gamma_c(N)\), the space of sections of \(N\) with compact
support. Let \(\{(U_\lambda, \nu_\lambda, \rho_\lambda) : \lambda \in
\Lambda\}\) be a family of triples where
\begin{enumerate}
\item \(\{U_\lambda : \lambda \in \Lambda\}\) is a locally finite open
cover of \(P\) such that each \(U_\lambda\) has compact closure in
\(P\);
\item \(\nu_\lambda \colon \pi^{-1}(U_\lambda) \to U_\lambda \times
\ensuremath {\mathbb{R}}\xspace^n\) is a trivialisation of \(N\) over \(U_\lambda\);
\item \(\{\rho_\lambda : \lambda \in \Lambda\}\) squares to a
partition of unity subordinate to the cover \(\{U_\lambda\}\); that
is, \(\rho_\lambda \colon P \to \ensuremath {\mathbb{R}}\xspace\) is a bump function with support
in \(U_\lambda\) and \(\sum_\lambda \rho_\lambda(x)^2 = 1\) for
all \(x \in P\).
\end{enumerate}
Let \(\tilde{\nu}_\lambda \colon \pi^{-1}(U_\lambda) \to \ensuremath {\mathbb{R}}\xspace^n\) be the
composition of \(\nu_\lambda\) with the projection onto \(\ensuremath {\mathbb{R}}\xspace^n\).
Define \(s \colon N \to \Gamma_c(N)\) by
\[
s(v)(x) \coloneqq \sum_\lambda \rho_\lambda(\pi(v)) \rho_\lambda(x)
\nu_\lambda^{-1}(x,\tilde{\nu}_\lambda(v)).
\]
The sum is well-defined since \(\rho_\lambda(\pi(v))\rho_\lambda(x)\)
can only be non-zero if both \(\pi(v)\) and \(x\) are in the domain of
\(\nu_\lambda\). Each \(s(v)\) is clearly a section and its support
is contained in the finite union \(\bigcup \{U_\lambda : \pi(v) \in
U_\lambda\}\), and hence has compact support. Observe that
\[
s(v)(\pi(v)) = \sum_\lambda \rho_\lambda(\pi(v))\rho_\lambda(\pi(v))
\nu_\lambda^{-1}(\pi(v),\tilde{\nu}_\lambda(v)) = \sum_\lambda
\rho_\lambda(\pi(v))^2 v = v.
\]
There is a canonical embedding of \(N\) in \(T N\) as the vertical
tangent bundle and thus we can extend any section \(\sigma\) of \(N\)
to a vector field on \(N\) by defining \(\tilde{\sigma}(v) =
\sigma(\pi(v))\). If the original section had compact support then
the resulting vector field will have compact horizontal support. We
wish to apply this procedure to the sections that we have defined
above, but we also wish to ensure that the resulting vector fields
have genuine compact support. To do that, we choose an inner product
on the fibres of \(N\) which varies smoothly over \(P\) and a bump
function \(\tau \colon \ensuremath {\mathbb{R}}\xspace \to \ensuremath {\mathbb{R}}\xspace\) which takes the value \(1\) on \([
0, 1 ]\) and is zero above, say, \(2\). Define
\[
X_v(u) \coloneqq - \tau \left(\norm[u]^2/(1 + \norm[v]^2)\right)
s(v)(\pi(u)).
\]
This has compact support, is a vertical vector field, and for \(u \in
N_v\) with \(\norm[u] \le \norm[v]\) we have \(X_v(u) = - v\).
We therefore have a continuous map \(N \to \Xi_c(N)\). We compose
this with the exponential map \(\exp \colon \Xi_c(N) \to \diff_c(N)\)
to define \(\Psi \colon N \to \diff_c(N)\). The properties of \(X_v\)
translate into properties of \(\Psi_v\). As \(X_v\) is a vertical
vector field, \(\Psi_v\) preserves the fibres of \(N\). Most
importantly, \(\Psi_v(v) = 0_v\).
This is the required map and so establishes \(L^x_U M\) as the tubular
neighbourhood of \(L^x_P M\). It is clear from the setup that the
evaluation map has the properties stated in the theorem.
\end{proof}
\begin{corollary}
The evaluation map \(L^x M \to M\) is a locally trivial fibre bundle
with fibre \(\Omega^x M\).
\end{corollary}
\begin{proof}
Take \(P = \{x_0\}\) to be the basepoint and \(U\) the codomain of a
chart near \(x_0\) with domain \(\ensuremath {\mathbb{R}}\xspace^n\).
\end{proof}
\section{Circle Actions}
\label{sec:diff}
The diffeomorphism group of the circle acts on maps with domain
\(S^1\) by precomposition. It is usual to assume that \(L^x \ensuremath {\mathbb{R}}\xspace\) is
closed under this action, whence we get an action on \(L^x M\) for
\(M\) a smooth finite dimensional manifold. We would like to transfer
knowledge of that action from \(L^x \ensuremath {\mathbb{R}}\xspace\) to \(L^x M\).
\subsection{Transferring the Action}
We are going to prove an inheritance result which states that the
action on \(L^x M\) is the same as that on \(L^x \ensuremath {\mathbb{R}}\xspace^n\). This will be
an easy corollary of theorem~\ref{th:submfd}. The more important
point of this section is to consider what types of action there are.
\begin{defn}
Let \(M\) be a smooth finite dimensional manifold. Let \(G \subseteq
\diff(S^1)\) be a sub-\hspace{0pt}{}Lie group of the group of diffeomorphisms of
the circle. We define the following possible types of action of \(G\)
on \(L^x M\).
\begin{enumerate}
\item The action is by bijections.
\item The action is by homeomorphisms.
\item The action is by diffeomorphisms.
\item The action map, \(G \times L^x M \to L^x M\), is continuous.
\item The action map, \(G \times L^x M \to L^x M\), is smooth.
\item The representation map, \(G \to \homeo(L^x M)\), is
continuous.
\item The representation map, \(G \to \diff(L^x M)\), is
smooth.
\end{enumerate}
\end{defn}
The diffeomorphism group, \(\diff(S^1)\), is an open subset of \(L
S^1\) and thus inherits the structure of a smooth manifold. The
circle, acting by rigid rotations, is a subset of \(\diff(S^1)\) and
the inherited structure is the same as its usual one.
Although these levels have been written in a necessarily linear form,
the relationships between them are more complicated than this
suggests. For example, a continuous representation map does not
necessarily imply a continuous action map as the evaluation map
\(\homeo(X) \times X \to X\) is not necessarily jointly continuous.
\begin{proposition}
All the levels defined are inherited by \(L^x M\) from \(L^x \ensuremath {\mathbb{R}}\xspace\).
\end{proposition}
\begin{proof}
As diffeomorphisms of the circle act linearly on \(L^x \ensuremath {\mathbb{R}}\xspace\) this
proposition holds for \(M = \ensuremath {\mathbb{R}}\xspace^n\) simply by taking finite products.
The result for general \(M\) follows from theorem~\ref{th:submfd}.
Since \(L^x M\) is an embedded submanifold of \(L^x \ensuremath {\mathbb{R}}\xspace^n\), a map into
\(L^x M\) is continuous or smooth if and only if it is continuous or
smooth into \(L^x \ensuremath {\mathbb{R}}\xspace^n\); and the restriction to \(L^x M\) of a
continuous or smooth map from \(L^x \ensuremath {\mathbb{R}}\xspace^n\) is again continuous or
smooth.
For the representation maps, we are being deliberately vague about the
topologies on the homeomorphism and diffeomorphism groups. There is
considerable freedom in choosing this topology and we wish to allow
for this freedom, only assuming that the topologies are compatible for
\(M\) as for \(\ensuremath {\mathbb{R}}\xspace^n\). Then as \(L^x M\) is a smooth retract of an
open subset of \(L^x \ensuremath {\mathbb{R}}\xspace^n\), the restriction map from the subspace of
homeomorphisms, resp.\ diffeomorphisms, of \(L^x \ensuremath {\mathbb{R}}\xspace^n\) which preserve
\(L^x M\) as a set to the set of homeomorphisms, resp.\
diffeomorphisms, of \(L^x M\) is defined and continuous, resp.\
smooth. As the representation map of \(G\) factors through this map
its properties are inherited from those of the representation on \(L^x
\ensuremath {\mathbb{R}}\xspace\).
\end{proof}
\subsection{Continuity of Linear Circle Actions}
The most common action considered on loop spaces is that of the circle
itself. In light of the inheritance properties of circle actions, it
seems a good idea to consider the general case of the circle acting on
a locally convex topological vector space. As this is, by its very
nature, more in the realm of functional analysis than differential
topology, at each stage we shall consider how it applies to the
examples of smooth loops and continuous loops in order to ground the
discussion in terms familiar to the differential topologist.
We start with a more detailed discussion of what it might mean for a
circle action to be ``continuous''. There are several ``levels'' of
continuity that one could consider, more than those listed in the
previous section. The following definition contains the ones that we
think are interesting or useful.
\begin{defn}
\label{def:cts}
Let \(E\) be a lctvs. Suppose that the circle acts on \(E\) by linear
maps, not necessarily continuous. Let \(R_t\) be the linear map
corresponding to \(t \in S^1\). We define the following levels of
continuity for this action:
\begin{enumerate}
\item The representation is continuous; that is, the action induces a
continuous map \(S^1 \to \m{L}_b(E)\). Here, \(\m{L}_b(E)\) denotes
the space of continuous linear maps from \(E\) to itself equipped with
the \emph{strong} topology; that is, the topology of uniform
convergence on bounded sets.
\label{it:rep}
\item The action is continuous; that is, the action is continuous as a
map \(S^1 \times E \to E\).
\label{it:cts}
\item The action is separately continuous; that is, for each \(t \in
S^1\), \(x \to R_t x\) is a continuous map \(E \to E\), and
for each \(x \in E\), \(t \to R_t x\) is a continuous map \(S^1
\to E\).
\label{it:sep}
\item The action is by equicontinuous linear maps; that is to say, for
each \(0\)-\hspace{0pt}{}neighbourhood \(V\) in \(E\) there is a
\(0\)-\hspace{0pt}{}neighbourhood \(U\) such that \(R_t U \subseteq V\) for all \(t
\in S^1\).
\label{it:equi}
\item There is a \(0\)-\hspace{0pt}{}basis of \(S^1\)-\hspace{0pt}{}invariant sets.
\label{it:nbdin}
\item The action is by continuous linear maps; that is, each \(R_t\)
is continuous.
\label{it:clin}
\item The topology on \(E\) is \(S^1\)-\hspace{0pt}{}invariant.
\label{it:topin}
\end{enumerate}
\end{defn}
The strong topology is the finest topology that one would sanely use.
Thus positive results for the strong topology will propagate forwards
to any coarser topology. A \(0\)-\hspace{0pt}{}basis for this topology
consists of the sets:
\[
N(B,U) \coloneqq \{T \in \m{L}(E) : T(B) \subseteq U\}
\]
where \(B, U\) are subsets of \(E\) with \(B\) bounded and \(U\) a
\(0\)-\hspace{0pt}{}neighbourhood. If \(E\) is a Banach space then this is the
usual topology, which is normable with norm \(\norm[T] \coloneqq \sup\{\norm[T
x] : \norm[x] \le 1\}\).
We shall now show that the list in definition~\ref{def:cts} is
ordered, roughly, from the strictest to the weakest. We start with just the
results that apply to all lctvs.
\begin{proposition}
Let \(E\) be a lctvs with an action of the circle by linear maps. We
have the following links between the levels of continuity:
\begin{enumerate}
\renewcommand{\theenumi}{(\roman{enumi})}
\item \ref{it:equi} is equivalent to \ref{it:nbdin};
\label{it:eqnbd}
\item \ref{it:clin} is equivalent to \ref{it:topin};
\label{it:ctop}
\item \ref{it:cts} is equivalent to having both \ref{it:sep} and
\ref{it:equi};
\label{it:cteq}
\item \ref{it:sep} implies \ref{it:clin};
\label{it:secl}
\item \ref{it:rep} implies \ref{it:sep}.
\label{it:reep}
\end{enumerate}
\end{proposition}
Before proving this we remark that the reason why \ref{it:rep} does
not automatically imply \ref{it:cts} is because the evaluation map
\(\m{L}_b(E) \times E \to E\) is not, in general, continuous but only
separately continuous. Thus the action map is separately continuous
as it factors as:
\[
S^1 \times E \to \m{L}_b(E) \times E \to E
\]
but we cannot deduce from this that it is continuous.
\begin{proof}
The equivalences \ref{it:eqnbd} and \ref{it:ctop} are obvious, as
is the implication \ref{it:secl}. The deduction of \ref{it:sep}
from \ref{it:cts} is also obvious. We have already explained
\ref{it:reep}.
Thus only \ref{it:cteq} remains and of that we need to show that
\ref{it:cts} implies \ref{it:equi} and that together
\ref{it:sep} and \ref{it:equi} imply \ref{it:cts}.
To show that \ref{it:cts} implies \ref{it:equi} let \(V\) be an
open \(0\)-\hspace{0pt}{}neighbourhood in \(E\). By assumption, for each \(t \in
S^1\) there is some open \(0\)-\hspace{0pt}{}neighbourhood \(U_t\) and \(\delta_t >
0\) such that \((t - \delta_t, t + \delta_t) \times U_t\) maps into
\(V\). As \(S^1\) is compact there is some finite set \(\{t_1,
\dotsc, t_n\}\) such that the intervals \(\{(t_j - \delta_{t_j}, t_j
+ \delta_{t_j})\}\) cover \(S^1\). Let \(U = \bigcap_{j=1}^n
U_{t_j}\). Then \(U\) is a finite intersection of open
\(0\)-\hspace{0pt}{}neighbourhoods, hence is one itself. For \(t \in S^1\) there
is some \(j\) such that \(t \in (t_j - \delta_{t_j}, t_j +
\delta_{t_j})\) whence, as \(U \subseteq U_{t_j}\), \(R_t(U) \subseteq
V\). Thus the action is by equicontinuous linear maps.
For the converse we assume both \ref{it:sep} and \ref{it:equi}.
Let \(x \in E\) and \(t \in S^1\). Let \(V\) be a convex
\(0\)-\hspace{0pt}{}neighbourhood which, by \ref{it:equi}, we may assume to be
\(S^1\)-\hspace{0pt}{}invariant. Then \(\frac12V\) is also a convex,
\(S^1\)-\hspace{0pt}{}invariant \(0\)-\hspace{0pt}{}neighbourhood so as the map \(s \to R_s x\)
is continuous at \(t\) there is some \(\delta > 0\) such that if
\(\abs{s} < \delta\) then \(R_t x - R_{t+s} x \in \frac12 V\). Let
\(s \in S^1\) be such that \(\abs{s} < \delta\) and let \(y \in x +
\frac12 V\). We have:
\begin{align*}
R_t x - R_{t+s} y &= R_t x - R_{t+s} x + R_{t+s} x - R_{t+s} y \\
&= R_t x - R_{t+s} x + R_{t+s}(x - y) \\
&= \frac12 (2 R_t x - 2 R_{t+s} x) + \frac12 R_{t+s}(2 x - 2 y).
\end{align*}
Now \(2 R_t x - 2 R_{t+s} x\) and \(2 x - 2 y\) both lie in \(V\). As
\(V\) is \(S^1\)-\hspace{0pt}{}invariant, \(R_{t+s}(2 x - 2 y)\) also lies in
\(V\). Thus as \(V\) is convex, \(R_t x - R_{t+s} y\) is in \(V\).
Hence \((t - \delta, t + \delta) \times (x + \frac12 V)\) lies in the
preimage of \(R_t x + V\), and so the action is continuous.
\end{proof}
There are more connections between these conditions if the space \(E\)
has more structure.
As mentioned above, the failure of \ref{it:rep} to automatically
imply \ref{it:cts} is due to the possibility that the evaluation map is
not continuous. It is continuous if, and only if, \(E\) is normable.
Thus we deduce:
\begin{lemma}
\label{lem:normcts}
Let \(E\) be a normable lctvs with an action of the circle by linear
maps. Then \ref{it:rep} implies \ref{it:cts}. \hspace*{\fill}\qedsymbol
\end{lemma}
A more general class of spaces that allows us to strengthen the links
is the family of \emph{barrelled} lctvs. This is a technical property
of lctvs which we shall not describe here; we merely need one of its
well-known consequences. It follows from \cite[11.1.5]{hj} and
Baire's theorem that \(L^0 \ensuremath {\mathbb{R}}\xspace\) and \(L \ensuremath {\mathbb{R}}\xspace\) are barrelled.
\begin{proposition}
\label{prop:barrel}
Let \(E\) be a barrelled lctvs with an action of the circle by linear
maps. Then \ref{it:sep} implies \ref{it:equi}. Hence each of
\ref{it:rep} and \ref{it:sep} imply \ref{it:cts}.
\end{proposition}
\begin{proof}
The proof that \ref{it:sep} implies \ref{it:equi} is similar to
\cite[III\S5.3]{hs}. That the action is separately continuous means
that the map \(S^1 \to \m{L}(E)\) is well-defined and is continuous
for the topology of uniform convergence on all \emph{finite} sets.
Thus the image of \(S^1\) in \(\m{L}(E)\) is simply bounded and hence,
as \(E\) is barrelled, equicontinuous.
Since \ref{it:sep} and \ref{it:equi} together imply \ref{it:cts}
we therefore have that \ref{it:sep} alone implies \ref{it:cts}.
Also as \ref{it:rep} implies \ref{it:sep} we also have that
\ref{it:rep} implies \ref{it:cts}.
\end{proof}
A useful property of \(L \ensuremath {\mathbb{R}}\xspace\) is that closed bounded subsets are
compact; this follows from \cite[II\S7.2]{hs} as it is a complete
nuclear space.
\begin{proposition}
\label{prop:ctsrep}
Let \(E\) be a lctvs with an action of the circle by linear maps.
Suppose that every closed, bounded subset of \(E\) is compact. Then
\ref{it:cts} implies \ref{it:rep}.
\end{proposition}
\begin{proof}
We shall show that if the action is continuous then the map \(S^1 \to
\m{L}(E)\) is continuous for the topology of uniform convergence on
compact sets. The assumption on \(E\) then says that this is
precisely the topology of uniform convergence on bounded sets.
So assume that the circle action on \(E\) is continuous. Let \(C, V
\subseteq E\) be such that \(C\) is compact and \(V\) is a convex,
circled \(0\)-\hspace{0pt}{}neighbourhood. Let \(t_0 \in S^1\). As the circle
action is continuous then for each \(c \in C\) there is some
\(\delta_c > 0\) and \(U_c\) a neighbourhood of \(c\) in \(E\) such
that if \(x \in U_c\) and \(\abs{t} < \delta_c\) then \(R_{t_0 + t} x
- R_{t_0} c \in \frac12V\).
The neighbourhoods \(\{U_c\}\) cover \(C\) so there is some finite
subset which will do; say, \(U_1, \dotsc, U_n\) corresponding to
points \(c_1, \dotsc, c_n \in C\). Let \(\delta\) be the minimum of the
corresponding subfamily of \(\{\delta_c\}\); then \(\delta > 0\).
Let \(t\) be such that \(\abs{t} < \delta\). Let \(c \in C\); then
there is some \(j\) such that \(c \in U_j\). Thus \(R_{t_0 + t} c -
R_{t_0} c_j \in \frac12 V\). Now the choice of \(c_j\) depended only
on \(c\) and not on \(t\). Therefore, taking \(t = 0\), we also have
\(R_{t_0} c - R_{t_0} c_j \in \frac12 V\). Thus:
\[
R_{t_0 + t} c - R_{t_0} c = R_{t_0 + t} c - R_{t_0} c_j + R_{t_0}
c_j - R_{t_0} c
\]
which, for the usual convexity reasons, lies in \(V\). Hence for
\(\abs{t} < \delta\), \(R_{t_0 + t} - R_{t_0}\) maps \(C\) into
\(V\). Thus the map \(S^1 \to \m{L}(E)\) is continuous for the
topology of uniform convergence on compact subsets.
\end{proof}
\subsection{Circle Actions on Loop Spaces}
In this section we shall use the results of the previous one to
determine how continuous are the circle actions on our example
spaces. For convenience we list the technical properties of our
spaces so that we know which of the above results apply. We also, for
quick reference, list a \(0\)-\hspace{0pt}{}basis.
\begin{enumerate}
\item \(L^0 \ensuremath {\mathbb{R}}\xspace\) is barrelled. The topology is determined by
the sets:
\[
U(\epsilon) \coloneqq \{\gamma : \sup\{\abs{\gamma(t)} : t \in S^1\} <
\epsilon\}.
\]
\item \(L \ensuremath {\mathbb{R}}\xspace\) is barrelled and every closed bounded subset is
compact. The topology is determined by the sets:
\[
U(n,\epsilon) \coloneqq \{\gamma : \sup\{\abs{\gamma^{(k)}(t)} : t \in S^1,
0 \le k \le n\} < \epsilon \}.
\]
\end{enumerate}
We shall now determine how continuous is the action of rotation of
loops on each of these spaces.
\begin{proposition}
For both spaces the action is by continuous linear maps.
\end{proposition}
\begin{proof}
We just need to show that the topology is \(S^1\)-\hspace{0pt}{}invariant. It is
sufficient to show this for the \(0\)-\hspace{0pt}{}neighbourhoods listed above.
We have:
\begin{gather*}
R_s U(\epsilon) = U(\epsilon), \\
R_s U(n, \epsilon) = U(n, \epsilon).
\end{gather*}
Thus in each case the topology is \(S^1\)-\hspace{0pt}{}invariant and so the action
is by continuous linear maps.
\end{proof}
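As a sanity check (an illustration of ours, with loops parametrised by
\([0,1)\) and a sampled sup norm standing in for \(\norm_\infty\)),
rotation permutes the values of a loop and so cannot change its sup
norm; this is exactly why \(R_s U(\epsilon) = U(\epsilon)\):

```python
import math

def sup_norm(gamma, samples=1000):
    """Sampled supremum norm of a 1-periodic function."""
    return max(abs(gamma(k / samples)) for k in range(samples))

def rotate(gamma, s):
    """The rotation action: (R_s gamma)(t) = gamma(t + s)."""
    return lambda t: gamma((t + s) % 1.0)

gamma = lambda t: math.sin(2 * math.pi * t) + 0.5 * math.cos(6 * math.pi * t)
# Rotating by s permutes the values of gamma, so the sup norm is
# unchanged; rotating by multiples of 1/samples makes this exact here.
```

Sampling at multiples of \(1/\mathtt{samples}\) and rotating by such
multiples makes the agreement exact up to floating point.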
\begin{proposition}
For both spaces the action is by equicontinuous linear maps.
\end{proposition}
\begin{proof}
From the previous proof it is obvious that each of the given
\(0\)-\hspace{0pt}{}bases consists of \(S^1\)-\hspace{0pt}{}invariant sets. Hence for both
spaces the action is by equicontinuous linear maps.
\end{proof}
\begin{proposition}
The circle action on both of \(L^0 \ensuremath {\mathbb{R}}\xspace\) and \(L \ensuremath {\mathbb{R}}\xspace\) is separately
continuous.
\end{proposition}
\begin{proof}
We already know that the circle acts by continuous linear maps which
is half of separate continuity. Thus we need to show that for each
loop \(\gamma\) the map \(t \to R_t \gamma\) is continuous.
We shall give the proof in full for \(L \ensuremath {\mathbb{R}}\xspace\). The proof for
\(L^0 \ensuremath {\mathbb{R}}\xspace\) is a simplification of this. We need to show that for
\(\gamma \in L \ensuremath {\mathbb{R}}\xspace\), \(t_0 \in S^1\), and a
\(0\)-\hspace{0pt}{}neighbourhood \(V\) then there is some \(\delta > 0\) such that
if \(\abs{t} < \delta\) then \(R_{t_0 + t} \gamma - R_{t_0} \gamma \in
V\). It is sufficient to do this for \(V = U(n, \epsilon)\) whence
we need to show that \(\norm[(R_{t_0 + t} \gamma)^{(k)} - (R_{t_0}
\gamma)^{(k)}]_\infty < \epsilon\) for \(0 \le k \le n\).
Expanding out the definition of the norm and using
\((R_s \alpha)^{(k)}(t) = \alpha^{(k)}(t + s)\) we see that we want to
ensure that:
\[
\sup\{\abs{\gamma^{(k)}(s + t) - \gamma^{(k)}(s)} : s \in S^1, 0 \le
k \le n\} < \epsilon
\]
whenever \(\abs{t} < \delta\). That such a \(\delta > 0\) exists
comes from the fact that the loops \(\gamma, \gamma^{(1)}, \dotsc,
\gamma^{(n)}\) are all uniformly continuous on \(S^1\) and there is
only a finite number of them.
For \(L^0 \ensuremath {\mathbb{R}}\xspace\) the situation is slightly simplified in that we
only need to consider \(\gamma\) and not any of its derivatives (which
it may not have, of course).
\end{proof}
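The appeal to uniform continuity can be made concrete (again our own
numerical sketch, not part of the proof): for the loop \(\sin(2\pi
s)\), whose derivative is bounded by \(2\pi\), the sup-norm defect of a
small rotation is bounded by \(2\pi\abs{t}\):

```python
import math

def rotation_defect(gamma, t, samples=2000):
    """Sampled sup norm of R_t gamma - gamma for a 1-periodic gamma."""
    return max(abs(gamma((k / samples + t) % 1.0) - gamma(k / samples))
               for k in range(samples))

gamma = lambda s: math.sin(2 * math.pi * s)
# |gamma(s + t) - gamma(s)| <= 2*pi*|t| since |gamma'| <= 2*pi, so the
# defect tends to 0 with t: this is continuity of t -> R_t gamma at 0.
```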
From propositions~\ref{prop:barrel} and~\ref{prop:ctsrep} we deduce
the following.
\begin{corollary}
The circle actions on \(L^0 \ensuremath {\mathbb{R}}\xspace\) and \(L \ensuremath {\mathbb{R}}\xspace\) are continuous. The
representation for the action on \(L \ensuremath {\mathbb{R}}\xspace\) is also continuous. \hspace*{\fill}\qedsymbol
\end{corollary}
Thus the action on \(L \ensuremath {\mathbb{R}}\xspace\) is the best it can be. This is not true
of \(L^0 \ensuremath {\mathbb{R}}\xspace\). We deduce this from a more general result which says
that this is not the fault of the type of loop but rather of using a
normed vector space of loops. Recall that a trigonometric polynomial
is a finite linear combination of sines and cosines.
\begin{proposition}
Let \(E \subseteq \map(S^1, \ensuremath {\mathbb{R}}\xspace)\) be an \(S^1\)-\hspace{0pt}{}invariant vector
space of loops which contains the trigonometric polynomials. Let
\(p\) be an \(S^1\)-\hspace{0pt}{}invariant semi-norm on \(E\) which restricts to a
norm on the subspace of trigonometric polynomials. Let \((\tilde{E},
\norm)\) be the associated Banach space. Then the circle action is by
equicontinuous linear maps but the associated representation is not
continuous.
\end{proposition}
Thus in this general case the only question to answer is whether or not
the circle action itself is continuous.
\begin{proof}
As the set-up is \(S^1\)-\hspace{0pt}{}invariant, the unit ball in \(\tilde{E}\) is
\(S^1\)-\hspace{0pt}{}invariant and so the circle acts by equicontinuous linear
maps.
Let \(\delta > 0\). Choose \(n \in \ensuremath {\mathbb{N}}\xspace\) such that \(1/n < \delta\).
As \(E\) contains the trigonometric polynomials it contains the loop
\(\gamma(t) = \cos(2 \pi n t) v\) where \(v \in \ensuremath {\mathbb{R}}\xspace\) is non-zero.
By assumption on the semi-norm, \(\gamma\) represents a non-zero
element in \(\tilde{E}\). Let \(h = 1/(2n)\), then \(R_h \gamma = -
\gamma\). Hence \(\norm[(I - R_h)\gamma] = 2 \norm[\gamma]\) and so
\(\norm[I - R_h] \ge 2\). Thus the map \(t \to R_t\) is not
continuous into \(\m{L}_b(\tilde{E})\).
\end{proof}
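The proof can be checked numerically (our own sketch; the sampled sup
norm stands in for the abstract norm): with \(\gamma(t) = \cos(2\pi n
t)\) and \(h = 1/(2n)\) we have \(R_h \gamma = -\gamma\), so the
relative gap \(\norm[(I - R_h)\gamma]/\norm[\gamma]\) is \(2\) however
large \(n\), that is, however small \(h\):

```python
import math

def sup_norm(f, samples=1000):
    """Sampled supremum norm of a 1-periodic function."""
    return max(abs(f(k / samples)) for k in range(samples))

def gap(n):
    """||(I - R_h) gamma|| / ||gamma|| for gamma(t) = cos(2 pi n t), h = 1/(2n)."""
    gamma = lambda t: math.cos(2 * math.pi * n * t)
    h = 1.0 / (2 * n)
    diff = lambda t: gamma(t) - gamma(t + h)   # (I - R_h) gamma
    return sup_norm(diff) / sup_norm(gamma)

# gap(n) stays at 2 for every n, even though h = 1/(2n) -> 0, so the
# representation t -> R_t cannot be continuous into the bounded operators.
```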
In summary, the circle action on \(L \ensuremath {\mathbb{R}}\xspace\) is as good as it can be
whereas that on \(L^0 \ensuremath {\mathbb{R}}\xspace\) is almost that good, and is as good as it
can be given that it is a normed vector space.
\subsection{Smooth Actions}
We conclude with a comment on how smooth are the circle actions on \(L
\ensuremath {\mathbb{R}}\xspace\) and on \(L^0 \ensuremath {\mathbb{R}}\xspace\). As with continuity we can ask for different
levels of smoothness.
For the positive results in this section we have to decide on a type
of calculus. We choose the convenient calculus of \cite{akpm}. This
states that a map into a locally complete lctvs is smooth if and only
if its composition with each continuous linear functional is a smooth
map into \ensuremath {\mathbb{R}}\xspace. This provides us with test functions to determine
whether or not a map is smooth. For the negative results we do not
need to pick a calculus as for any calculus, continuous linear maps
are certainly smooth and so we can still use them as test functions to
determine if a map is not smooth.
We start with some positive results about \(L \ensuremath {\mathbb{R}}\xspace\).
\begin{proposition}
\label{prop:smoothissmooth}
The action map \(\rho \colon S^1 \times L \ensuremath {\mathbb{R}}\xspace \to L \ensuremath {\mathbb{R}}\xspace\) is
smooth.
\end{proposition}
\begin{proof}
As \(L \ensuremath {\mathbb{R}}\xspace\) is a closed subspace of \(\ensuremath {C^\infty}\xspace(\ensuremath {\mathbb{R}}\xspace, \ensuremath {\mathbb{R}}\xspace)\) and
\ensuremath {\mathbb{R}}\xspace is a covering space of \(S^1\), it is clearly sufficient to show
that the map \(\tilde{\rho} \colon \ensuremath {\mathbb{R}}\xspace \times \ensuremath {C^\infty}\xspace(\ensuremath {\mathbb{R}}\xspace, \ensuremath {\mathbb{R}}\xspace) \to \ensuremath {C^\infty}\xspace(\ensuremath {\mathbb{R}}\xspace, \ensuremath {\mathbb{R}}\xspace)\),
\((t, \zeta) \to (s \mapsto \zeta(s + t))\), is smooth. We need to
show that it takes smooth curves in \(\ensuremath {\mathbb{R}}\xspace \times \ensuremath {C^\infty}\xspace(\ensuremath {\mathbb{R}}\xspace, \ensuremath {\mathbb{R}}\xspace)\) to
smooth curves in \(\ensuremath {C^\infty}\xspace(\ensuremath {\mathbb{R}}\xspace, \ensuremath {\mathbb{R}}\xspace)\).
Let \(c \colon \ensuremath {\mathbb{R}}\xspace \to \ensuremath {\mathbb{R}}\xspace \times \ensuremath {C^\infty}\xspace(\ensuremath {\mathbb{R}}\xspace, \ensuremath {\mathbb{R}}\xspace)\) be smooth. Let \(\tilde{c} =
\tilde{\rho} \circ c\). We can write \(c = (c_1, c_2)\) for smooth
curves \(c_1 \colon \ensuremath {\mathbb{R}}\xspace \to \ensuremath {\mathbb{R}}\xspace\) and \(c_2 \colon \ensuremath {\mathbb{R}}\xspace \to \ensuremath {C^\infty}\xspace(\ensuremath {\mathbb{R}}\xspace, \ensuremath {\mathbb{R}}\xspace)\) since the
obvious projection maps are smooth. Then for \(t \in \ensuremath {\mathbb{R}}\xspace\)
\[
\tilde{c}(t) = (s \mapsto c_2(t)(s + c_1(t))).
\]
By the exponential law, \(\tilde{c}\) is smooth if and only if its
adjoint, \(\tilde{c}^\lor \colon \ensuremath {\mathbb{R}}\xspace^2 \to \ensuremath {\mathbb{R}}\xspace\) is smooth. This adjoint is
\[
(s,t) \mapsto c_2(t)(s + c_1(t)).
\]
Now as \(c_2 \colon \ensuremath {\mathbb{R}}\xspace \to \ensuremath {C^\infty}\xspace(\ensuremath {\mathbb{R}}\xspace, \ensuremath {\mathbb{R}}\xspace)\) is smooth its adjoint,
\(c_2^\lor\), is also smooth, again by the exponential law. This is
the map \((s,t) \mapsto c_2(t)(s)\). Thus \(\tilde{c}^\lor\) is
smooth as it factors as the composition
\[
(s,t) \mapsto (s + c_1(t),t) \xrightarrow{c_2^\lor} c_2(t)(s +
c_1(t)).
\]
Hence \(\tilde{\rho}\) maps smooth curves to smooth curves and is thus
smooth.
\end{proof}
\begin{corollary}
The representation map \(S^1 \to \m{L}_b(L \ensuremath {\mathbb{R}}\xspace)\) is smooth.
\end{corollary}
\begin{proof}
This follows from the uniform boundedness principle, see
\cite[I.5.18]{akpm}: a map into \(\m{L}_b(L \ensuremath {\mathbb{R}}\xspace)\) is smooth if and
only if all composites with evaluations at points in \(L \ensuremath {\mathbb{R}}\xspace\) are
smooth.
\end{proof}
Note that we cannot deduce from this that \(S^1 \to \m{L}_b(L
\ensuremath {\mathbb{R}}\xspace)\) is continuous since we have left the realm where the
\(c^\infty\)-\hspace{0pt}{}topology agrees with the locally convex topology.
Now we turn to the negative result and recall that here we do not
assume a particular calculus.
\begin{proposition}
\label{prop:mustbesmooth}
Let \(L^x \ensuremath {\mathbb{R}}\xspace\) be a class of loops satisfying the conditions
\ref{cond:vspace}, \ref{cond:lctvs}, and \ref{cond:smthcts} of the
introduction. Let \(\gamma \in L^x \ensuremath {\mathbb{R}}\xspace\) be such that the map \(S^1
\to L^x \ensuremath {\mathbb{R}}\xspace\), \(t \mapsto R_t \gamma\), is smooth. Then \(\gamma
\colon S^1 \to \ensuremath {\mathbb{R}}\xspace\) is smooth.
If, in addition, the maps \(S^1 \to L
\ensuremath {\mathbb{R}}\xspace\), \(t \mapsto R_t \gamma\), are smooth for all \(\gamma \in L \ensuremath {\mathbb{R}}\xspace\), then the above becomes an
if-\hspace{0pt}{}and-\hspace{0pt}{}only-\hspace{0pt}{}if.
\end{proposition}
\begin{proof}
Let \(e_0 \colon L^x \ensuremath {\mathbb{R}}\xspace \to \ensuremath {\mathbb{R}}\xspace\) be the evaluation map at \(0\). This
is continuous by the assumptions and hence is smooth. As \(S^1 \to
L^x \ensuremath {\mathbb{R}}\xspace\), \(t \mapsto R_t \gamma\), is smooth its composition with
\(e_0\) is a smooth map \(S^1 \to \ensuremath {\mathbb{R}}\xspace\). This composition is \(t
\mapsto e_0(R_t \gamma) = \gamma(0 + t) = \gamma(t)\). Thus
\(\gamma\) is smooth.
For the second part, let \(\gamma \in L^x \ensuremath {\mathbb{R}}\xspace\) be a smooth loop. The
associated map \(S^1 \to L^x \ensuremath {\mathbb{R}}\xspace\) factors as \(S^1 \to L \ensuremath {\mathbb{R}}\xspace \to L^x
\ensuremath {\mathbb{R}}\xspace\). The first factor is smooth by assumption whilst the second is a
continuous linear map and hence smooth.
\end{proof}
Thus although for continuity there is not much to choose between
\(L \ensuremath {\mathbb{R}}\xspace\) and \(L^0 \ensuremath {\mathbb{R}}\xspace\), once we get to smoothness we
easily see the difference.
\section{Introduction.}
Supersymmetric field theories in a variety of
space-time dimensions give rise to non-linear sigma models with a
restricted target-space geometry. There are many
examples in the literature where this phenomenon led to
surprising results, sometimes with interesting connections to
mathematics. Furthermore, the fact that some of these
supersymmetric theories in different space-time dimensions are
related by (supersymmetric) dimensional reduction offers a way of
connecting seemingly unrelated geometries.
In the context of this
paper $N=2$ supergravity is relevant. In five space-time
dimensions, one may consider the coupling of a certain number
(say $n-1$) of supersymmetric abelian vector multiplets. As was
shown some time ago \cite{GuSiTo}, these theories are
characterized by a cubic polynomial in $n$ (real) variables,
which gives rise to a non-linear sigma model corresponding to a
real ($n-1$)-dimensional space. Some of these polynomials
correspond to symmetric spaces and are
related to Jordan algebras. After dimensional reduction of this
theory, one finds $N=2$ supergravity in four space-time dimensions,
coupled to $n$ abelian vector multiplets. It is known that the
non-linear sigma models in the four-dimensional theory correspond
to K\"ahler spaces of complex dimension $n$, characterized by a
homogeneous holomorphic function of second degree, depending on
$n+1$ complex variables \cite{dWVP}. Such K\"ahler
manifolds are called {\em special} \cite{special}. Special
K\"ahler geometry is
relevant to string theory, where compactifications of type-II
superstrings on $(2,2)$
superconformal field theories with central charge $c=9$ lead to
$N=2$ supergravity coupled to vector multiplets. The massless
scalars of these vector multiplets play the role of coordinates of
the moduli space of the conformal theories, so
that the study of supergravity may thus lead to interesting
results for the moduli geometry of certain superconformal
theories \cite{Seiberg}. Because certain
(tree-level) results for compactifications of the heterotic string
on a $(2,2)$ superconformal system depend only on the choice of
the conformal theory, special geometry plays a role
for all string compactifications of this type, which include
those on
Calabi-Yau spaces. Indeed, this fact has been verified in several
studies where various aspects of this intriguing connection have
been explored [4-8].
After dimensional reduction of four-dimensional $N=2$
supergravity coupled to $n$ vector supermultiplets to three
space-time dimensions, one finds a non-linear sigma model
corresponding to a
quaternionic manifold of quaternionic dimension $n+1$. In this
way one thus obtains a class of quaternionic manifolds
whose structure is encoded in the homogeneous holomorphic function
of the special K\"ahler manifold. Hence there exists a map
between special K\"ahler manifolds of complex dimension $n$ and
certain quaternionic manifolds of quaternionic dimension
$n\!+\!1$, which in \cite{CecFerGir} was called the $\bf c$ map.
It was also shown that the $\bf c$ map plays an interesting
role in string theory. When compactifying IIA and IIB
strings on the same conformal theory, the resulting non-linear
sigma models consist of a product space of a K\"ahler manifold and
a quaternionic manifold. The latter is also special, in the sense
that it is characterized in terms of a homogeneous holomorphic
function. When comparing the result of the compactification of
the IIA to that of the IIB string, it turns out that the two
manifolds are interchanged according to the action of the $\bf c$
map \cite{Seiberg,CecFerGir}.
Likewise one can introduce the {$\bf r$ map}, which yields for every real
space that couples to $d=5$ supergravity the corresponding {K\"ahler}
space that one finds upon reduction to four space-time
dimensions. The {$\bf r$ map}\ thus assigns a {K\"ahler} space of complex
dimension $n$ to a real space of dimension $n\!-\!1$.
Supersymmetry combined with dimensional reduction, which
preserves supersymmetry, are the essential ingredients in these
two maps.
We shall use the term ``special
geometry'' for the real spaces originating from five-dimensional
supergravity, the K\"ahler spaces originating from
four-dimensional supergravity, and the quaternionic spaces
that are in the image of the $\bf c$ map\footnote{
In the literature, special K\"ahler spaces were sometimes called
K\"ahler spaces of restricted type; the special quaternionic
spaces were also called dual-quaternionic spaces.}.
It should be clear that the inverse $\bf r$ and $\bf c$ maps
are not always defined, as there are spaces that couple to
supergravity, but the corresponding supergravity theory does not
necessarily originate from a higher-dimensional theory. When the
coupling of a certain
space to supergravity is not unique, the result of
the maps will depend on the type of coupling as, for instance,
characterized by the way in which the
subgroup of the sigma model isometries that can be extended to a
symmetry of the full supergravity action, is realized. In four
space-time dimensions these invariances
usually act on the field strengths of the abelian vector fields,
and not on the fields themselves, so that they only leave the
equations of motion and not the action invariant. These
transformations, called duality transformations,
constitute a subgroup of $Sp(2n\!+\!2,\Rbar)$. The complex
nature of the {K\"ahler} manifolds is thus related to the complex nature
of the (anti-)selfdual Minkowskian field strengths
\cite{dual,dWVP}.
In five space-time dimensions, we are dealing with a real
manifold, so that the transformations are realized directly on
the vector fields, whereas in three dimensions the vector fields
are converted into scalar fields (the relation of all these
symmetries upon dimensional reduction will be discussed in
\cite{dWVPVan}; see also \cite{sbstring}).
Under these maps the dimensionality of the manifold increases.
For homogeneous spaces the isometries act transitively on
the manifold so that every two points are related by an element
of the isometry group. The orbit swept out by the action of the isometry
group $G$ from any given point is (locally) isomorphic to the
coset space $G/H$, where $H$ is the isotropy group of that point.
For non-compact homogeneous spaces where $H$ is the maximal compact
subgroup of $G$, there exists a solvable subgroup that acts
transitively, whose dimension is equal to the dimension of the
space. Such spaces are called {\em normal}. It implies that there
exists a
solvable algebra $s$ such that $\frac{G}{H}=e^s$. The
construction of this algebra follows from the
Iwasawa decomposition of the algebra $g=h+s$ (see e.g. \cite{Helg}). The
dimension of the Cartan subalgebra of $s$ equals the {\it rank} of the
homogeneous space. It will turn out that the rank of the symmetry algebra
and of its solvable subalgebra increase by one unit under the
$\bf c$ and {$\bf r$ map} s.
In the context of this paper the following considerations are important. If
the result of the $\bf c$ map is a homogeneous quaternionic
space, then the duality invariance (the symmetry of the
scalar-vector sector of the theory) of the original theory must
act transitively on the corresponding
manifold parametrized by the scalar fields. The proof of this
result, which applies also to the $\bf r$ map, is given in
\cite{dWVPVan}. Also the converse is true: if the
vector-scalar symmetries act transitively on the manifold
parametrized by the scalars, then one can show that the symmetry
group after dimensional reduction gives rise to additional
symmetries, which leave the original scalar fields invariant but
act transitively on the new scalar fields. In this respect it is
important that the process of dimensional reduction always entails new
symmetries whose number is larger than or equal to the
number of new coordinates.
The above results show that homogeneous quaternionic spaces that
are in the image of the $\bf c$ map correspond to special homogeneous
K\"ahler spaces. On the other hand, special
homogeneous K\"ahler spaces give rise to special homogeneous
quaternionic spaces, provided that the scalar-vector symmetry
transformations act transitively on the K\"ahler manifold.
Likewise, such
K\"ahler spaces that are themselves in the image of the $\bf r$ map
correspond to special homogeneous real spaces. Again, special
homogeneous real spaces give rise
to homogeneous K\"ahler spaces, provided that the vector-scalar
symmetries act transitively on the real manifold.
In this connection Alekseevski\v{\i}'s classification of homogeneous
quaternionic spaces \cite{Aleks} is relevant, as was first
pointed out in \cite{CecFerGir}.
In \cite{Aleks} it was conjectured that the homogeneous
quaternionic spaces consist of compact symmetric
quaternionic spaces and (non-compact) normal quaternionic spaces.
Normal quaternionic spaces are quaternionic spaces that admit a
transitive completely solvable group of motions.
According to Alekseevski\v{\i}\ there are two different
types of normal quaternionic spaces characterized by their
so-called canonical quaternionic subalgebra. The first type has as
canonical subalgebra $C_1^1$, the solvable algebra corresponding to
$Sp(1,1)/(Sp(1)\otimes Sp(1))$, and corresponds
to the quaternionic projective spaces $Sp(m,1)/(Sp(m)\otimes Sp(1))$.
These spaces are {\em not} in the image of the $\bf c$ map.
The second type has a canonical subalgebra $A_1^1$, the solvable subalgebra
of $SU(2,1)/(SU(2)\otimes U(1))$. Denoting the dimension of the normal
quaternionic algebra as $4(n\!+\!1)$, the
structure of the solvable algebra is such that it always contains a normal
K\"ahler algebra ${\cal W}^{\rm s}$ of dimension $2n$, whose action on
the remaining part of the algebra naturally defines a
$(2n\!+\!2)$-dimensional representation corresponding to a
solvable subgroup of $Sp(2n+2, \Rbar)$.\footnote{This
representation thus acts on $2n\!+\!2$ of the
generators. The two remaining generators
in the quaternionic algebra, $e_0$ and $e_+$, are inert under
${\cal W}^{\rm s}$. The sum of the Cartan
subalgebra of ${\cal W}^{\rm s}$ and $e_0$ constitutes the
Cartan subalgebra of the quaternionic algebra, whose rank is thus
1 higher than that of the K\"ahler
algebra. The weight of the K\"ahler algebra under $e_0$ is thus zero,
while the weight of the generators that constitute the
$(2n+2)$-dimensional representation is 1/2 times the weight of
$e_+$.}
Therefore each
normal quaternionic space of this type defines the basic ingredients of
a special normal K\"ahler space, encoded in its solvable
transitive group of duality transformations.
Alekseevski\v{\i}'s analysis thus strongly indicates that the corresponding
$N\!=\!2$ supergravity theory should exist, so that under
the $\bf c$ map one will recover the original normal quaternionic
space. To establish the existence of the supergravity theory,
one must prove that a corresponding holomorphic function
$F(X)$ exists that allows for these duality transformations. This
program was
carried out by Cecotti \cite{Cecotti}, who explicitly constructed
the function $F(X)$ corresponding to each of the normal quaternionic
spaces with canonical subalgebra $A^1_1$ that appears in the
classification of Alekseevski\v{\i}. With the
exception of the so-called minimal coupling, where $F(X)$ is a quadratic
polynomial, all the {K\"ahler} spaces are in the image of the {$\bf r$ map}. The
corresponding special K\"ahler manifolds were
denoted by $H(p,q)$ and $K(p,q)$. Under the $\bf c$ map, they
lead to the normal quaternionic manifolds $V(p,q)$ and $W(p,q)$
defined in \cite{Aleks}. If Alekseevski\v{\i}'s classification is complete,
there can be no other special K\"ahler
spaces with solvable transitive duality transformations.
In this paper we start at the other end and derive a
classification of all homogeneous
quaternionic spaces that are in the image of the {${\bf c}{\scriptstyle\circ}{\bf r}$ map}.
The analysis can be performed completely at the level of the
special real spaces, and amounts to classifying all the cubic polynomials
whose invariance group acts transitively
on the corresponding special real spaces. This invariance group
leaves the full $d=5$ supergravity Lagrangian invariant. The
corresponding real spaces are obviously homogeneous, but because
of the results quoted above, so are the corresponding
{K\"ahler} and quaternionic spaces that emerge under the action of the {$\bf r$ map}\
and the {${\bf c}{\scriptstyle\circ}{\bf r}$ map}. When comparing the result to the
classification of Alekseevski\v{\i}\ (and the corresponding one of Cecotti) we
find that their classification is incomplete!
The cubic functions that are classified in this paper are
parametrized by
\begin{equation}
C(h) = d_{ABC}\,h^A\,h^B\,h^C \ . \qquad (A,B,C = 1,
\ldots, n) \label{Cpoly}
\end{equation}
The corresponding sigma model, which is contained in the
five-dimensional supergravity Lagrangian \cite{GuSiTo}, is
defined by the Lagrangian
\begin{equation}
{\cal L} = -{\textstyle{3\over 2}} d_{ABC} \,h^A\,\partial_\mu
h^B\, \partial^\mu h^C \ , \label{sigma}
\end{equation}
where the scalar fields $h^A$ are restricted by $C(h)=1$, so that
the sigma model corresponds to an $(n\!-\!1)$-dimensional real space.
Linear redefinitions of the fields $h^A$ that leave $C(h)$
invariant constitute invariances of the full $N\!=\!2$
supergravity Lagrangian. However, it is not excluded
that the sigma model Lagrangian \eqn{sigma} has additional symmetries,
which cannot be extended to symmetries of the full
supersymmetric Lagrangian.
The polynomial $C(h)$ is left invariant by linear transformations
of the fields $h^A$, whose infinitesimal form is parametrized by
matrices $B^A_{\;B}$,
\begin{equation}
\delta h^A = B^A_{\;B}\,h^B \ , \label{Btrans}
\end{equation}
restricted by the condition
\begin{equation}
B^D_{\,(A}\,d_{BC)D} = 0 . \label{Binv}
\end{equation}
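Indeed, varying \eqn{Cpoly} under \eqn{Btrans} and using the total
symmetry of $d_{ABC}$ gives
\[
\delta C(h) = 3\, d_{DBC}\, B^D_{\;A}\, h^A\,h^B\,h^C
= 3\, B^D_{\;(A}\, d_{BC)D}\, h^A\,h^B\,h^C\ ,
\]
so that \eqn{Binv} is precisely the requirement $\delta C(h)=0$ for
arbitrary $h^A$.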
As explained above, our aim is to determine all tensors $d_{ABC}$
whose invariance group acts transitively on the manifold defined
by (\ref{sigma}). To analyze this question we first
redefine the scalar fields in some reference
point where the metric associated with the sigma model has
positive signature\footnote{Positive signature is required to ensure
a positive-definite Hilbert space of physical states. The
necessary and sufficient condition for this is that the variables
$h^A$ are restricted to a domain where
\[
\left(3d_{ACD}\,d_{BEF}-2d_{ABC}\,d_{DEF}\right) h^Ch^Dh^Eh^F
\]
is a positive definite matrix.}.
One may
choose this reference point equal to $h^A= (1, 0,\ldots,0)$. In
that case the coefficients $d_{ABC}$ can be redefined according
to the so-called canonical parametrization
\begin{equation}
d_{11a} =0,\quad d_{1ab} =-{\textstyle\frac{1}{2}} \,d_{111}\,\delta_{ab}\ . \qquad (a,b = 2,
\ldots, n) \label{canpar}
\end{equation}
with $d_{111}>0$. To preserve this parametrization only
orthogonal redefinitions of the fields $h^a$ are allowed.
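In the canonical parametrization the cubic polynomial thus takes the
explicit form
\[
C(h) = d_{111}\Big[\big(h^1\big)^3
- {\textstyle\frac{3}{2}}\, h^1\,\big(h^a\big)^2\Big]
+ d_{abc}\,h^a\,h^b\,h^c\ ,
\]
so that all remaining freedom resides in the coefficients $d_{abc}$.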
The condition \eqn{Binv} that $C(h)$ be invariant
is then analyzed in the canonical parametrization. Putting
$d_{111}=1$ for convenience, \eqn{Binv} implies
that $B^A{}_{\!B}$ takes the following form (see \cite{BEC} where
the corresponding K\"ahler spaces were analyzed)
\begin{equation}
B^1{}_1 = 0\ ,\quad B^a{}_1 =
B^1{}_a\ , \quad B^a{}_b = B^c{}_1 \,
d_{abc} + A_{ab}
\label{cBpar}
\end{equation}
where $A_{ab}$ is an antisymmetric matrix with $a,b,\ldots = 2,
\ldots,n$. This matrix is subject to the condition
\begin{equation}
\Gamma_{abcd}\, B^d{}_1 = d_{d(ab}\, A_{c)d} ,
\end{equation}
where
\begin{equation}
\Gamma _{abcd}\equiv d_{e(ab}\,d_{cd)e}-{\textstyle\frac{1}{2}} \,
\delta_{(ab}\,\delta_{cd)}\ .
\end{equation}
Now we observe that transformations associated with
the matrices $A_{ab}$ that are independent of the parameters
$B^a{}_1$, leave the canonical reference point invariant and
thus correspond to the isotropy group. Hence we are left with the
requirement that the symmetry group should contain $n-1$
independent parameters $B^a{}_1$. Writing $A_{ab} =
B^c{}_1 \,A_{ab;c}\,$, where $A_{ab;c}$ is antisymmetric in its
first two indices, this leads to the equation
\begin{equation}
\Gamma _{abcd}= D_{abc;d}\ , \label{GamD1}
\end{equation}
where $\Gamma _{abcd}$ was defined above and
\begin{equation}
D_{abc;d} = d_{e(ab}\,A_{c)e;d} \ .
\end{equation}
{}From the above results, it is clear that
the homogeneous real manifold corresponding to (\ref{sigma}) is
locally isomorphic to $G/H$, where $G$ is the invariance group of
the tensor $d_{ABC}$ and $H$ is the orthogonal invariance group
of the tensor $d_{abc}$.
{}From the arguments given earlier it follows that there is a
corresponding analysis for the special {K\"ahler} and quaternionic
spaces that follow from the real spaces that we introduced above.
One may thus consider the {K\"ahler} spaces and require that the
symmetry group of the $d=4$ supergravity Lagrangian acts
transitively on the space.
The cubic polynomial $C(h)$ is directly
related to the holomorphic function $F(X)$, which encodes the
information of the special K\"ahler manifolds that follow from
the real manifolds by the $\bf r$ map. It reads
\begin{equation}
F(X) = id_{ABC}{X^A\,X^B\,X^C\over X^0} \ , \label{Fd}
\end{equation}
where $X^0$ and $X^A$ are complex variables. The K\"ahler manifold is
only $n$-dimensional because two points $(X^0,X^A)$ that are
related by multiplication with an arbitrary complex number are
identified. The {$\bf r$ map}\ thus introduces $n\!+\!1$ new coordinates,
but at the same time it leads to at least $n\!+\!1$ additional
symmetries \cite{dWVPVan,sbstring} so that the analysis proceeds
along the same steps. Similarly, for quaternionic manifolds, the
requirement of transitivity rests upon the same analysis as
presented for the real manifolds.\footnote{The isometries for
the special quaternionic spaces were analyzed in \cite{dWVP2}.}
Therefore there is no need to go into further detail.
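Returning to \eqn{Fd}, we recall for completeness the standard
special-geometry expression for the K\"ahler potential associated with
a holomorphic function $F(X)$ (see \cite{dWVP}; we do not rederive it
here),
\[
K(X,\bar X) = -\ln \Big[\, i\big(\bar X^I F_I(X)
- X^I \bar F_I(\bar X)\big)\Big]\ , \qquad
F_I \equiv \frac{\partial F}{\partial X^I}\ ,
\]
with $I=0,1,\ldots,n$; the homogeneity of $F$ ensures that $K$ changes
only by a K\"ahler transformation under the projective identification
of the $X^I$.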
A special case of \eqn{GamD1} (namely $\Gamma_{abcd}=0$) was
analyzed in \cite{GuSiTo} in the context of Jordan algebras and
in \cite{BEC} for the special {K\"ahler} spaces. The connection with
Jordan algebras arose because \eqn{GamD1} is equivalent to
the condition that the torsion tensor associated with the special
real space is covariantly constant. In that case the real space
is symmetric (but this does not exhaust the special symmetric
spaces). Likewise the corresponding {K\"ahler} and quaternionic spaces
that one obtains by means of the {$\bf r$ map}\ and the {${\bf c}{\scriptstyle\circ}{\bf r}$ map}\ are
symmetric (and in this case there are no other (special)
symmetric spaces). Surprisingly, the equation $\Gamma_{abcd}=0$
emerges also in a different context, namely that of
$W_3$ algebras \cite{Zam}, where it corresponds to the condition that
ensures that the higher-spin invariances of a
two-dimensional conformal field theory can be
consistently truncated to
the energy-momentum tensor and a spin-3 charge \cite{Wred}.
In view of these
and possible other applications of \eqn{GamD1}, we shall keep the
analysis of \eqn{GamD1} self-contained without using the
connection with the special geometries. The central result of
this paper, namely the
classification of the tensors $d_{abc}$ that satisfy \eqn{GamD1},
is presented in section~\ref{proofd}. The reader who is
only interested
in the results can skip this section as well as section 3, where
we rewrite the results for $d$ in a simpler form and present the
solutions for the tensors $A$ in \eqn{GamD1}. The final result
for the cubic polynomial $C(h)$ is given in section 4. It can
be expressed as follows (not in the canonical
parametrization). First we decompose the coordinates $h^A$ into
$h^1$, $h^2$, $h^\mu$ and $h^m$, where the range of the indices
$\mu$ and $m$ is equal to $q+1$ and $r$, respectively. Hence we have
\begin{equation}
n= 3+q+r ,
\end{equation}
so that $n\geq 2$. Then $C(h)$ can be written as
\begin{equation}
C(h) = 3\Big\{ h^1\,
\big(h^2\big)^2 -h^1\,\big(h^\mu\big)^2 -h^2\,\big(h^m\big)^2
+\gamma_{\mu mn}\,h^\mu\, h^m\,h^n\Big\} \ ,\label{genC1}
\end{equation}
where the coefficients $\gamma_{\mu mn}$ are the generators of a
$(q\!+\!1)$-dimensional real Clifford algebra with positive
signature.
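To make the Clifford condition concrete: viewing the coefficients as $r\times r$ matrices $(\gamma_\mu)_{mn}$, positive signature means $\gamma_\mu\gamma_\nu+\gamma_\nu\gamma_\mu = 2\,\delta_{\mu\nu}\,\mathbb{1}$. The following small numerical check (our own illustration, not taken from the classification; it treats the case $q+1=2$, $r=2$, realized by the real Pauli matrices) verifies this relation.

```python
# Toy check (illustration only): for q+1 = 2 Clifford generators acting on
# r = 2 dimensions, the real symmetric Pauli matrices sigma_3 and sigma_1
# satisfy the positive-signature Clifford relation
#   gamma_mu gamma_nu + gamma_nu gamma_mu = 2 delta_{mu nu} * identity.

def matmul(a, b):
    """Multiply two square matrices given as nested lists."""
    n = len(a)
    return [[sum(a[i][k] * b[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

def madd(a, b):
    """Add two matrices entrywise."""
    return [[x + y for x, y in zip(ra, rb)] for ra, rb in zip(a, b)]

gammas = [
    [[1.0, 0.0], [0.0, -1.0]],  # sigma_3: symmetric, squares to identity
    [[0.0, 1.0], [1.0, 0.0]],   # sigma_1: symmetric, squares to identity
]

for mu, g in enumerate(gammas):
    for nu, h in enumerate(gammas):
        anti = madd(matmul(g, h), matmul(h, g))        # {gamma_mu, gamma_nu}
        target = [[2.0 * (mu == nu) * (i == j) for j in range(2)]
                  for i in range(2)]                    # 2 delta_{mu nu} * 1
        assert anti == target, (mu, nu, anti)

print("Clifford relation verified for q+1 = 2, r = 2")
```

Larger values of $q$ require larger $r$, as dictated by the representation theory of real Clifford algebras.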
Further results and implications are given in section~\ref{conclusions}.
\section{Classification.} \label{proofd}
\setcounter{equation}{0}
In this section we study
\begin{equation}
\Gamma _{abcd}= D_{abc;d}\ , \label{GamD}
\end{equation}
where the indices $a, b,\dots$ take $n-1$ values $2, \ldots, n$,
\begin{eqnarray}
\Gamma _{abcd}&=& d_{e(ab}\,d_{cd)e}
-{\textstyle\frac{1}{2}} \delta _{(ab}\,\delta_{cd)}\ , \\
D_{abc;d} &=& d_{e(ab}\,A_{c)e;d} \ ,
\end{eqnarray}
and $A_{ab;c}$ is a tensor that is antisymmetric in its first
two indices. Observe that $D_{abc;d}$ is only manifestly symmetric
in three indices; full symmetry is only obtained after imposing
\eqn{GamD}. The tensors $d_{abc}$ are symmetric and are concisely
summarized by the cubic polynomial,
\begin{equation}
{\cal Y}(x) = d_{abc}\,x_a\,x_b\,x_c\ . \label{cubic}
\end{equation}
We will now give a complete classification of
the tensors $d_{abc}$ that satisfy \eqn{GamD} up to orthogonal
redefinitions. Obviously, the tensors
$A_{ab;c}$ can only be determined up to the generators of
orthogonal transformations
that leave $d_{abc}$, and thus the function ${\cal Y}(x)$
invariant. The analysis is done in two
steps. First we show that after a suitable
$O(n-1)$ rotation, it is always possible to
bring the tensors $d_{abc}$ into a form such that
\begin{eqnarray}
d_{22a} &=& \frac{1}{\sqrt 2} \, \delta_{a2}\ ,
\nonumber \\
\Gamma_{222a} &=& A_{a2;2} = 0\ . \label{step1}
\end{eqnarray}
The second step is then to bring the $d_{abc}$ coefficients
in a form where $d_{2ab}$ is diagonal for general $a$
and $b$ and examine the consequences of \eqn{GamD}.
Let us start by using $O(n-1)$ transformations to define
a ``2'' direction (which will not necessarily coincide with the
``2'' direction chosen in \eqn{step1}) such that
\begin{equation}
d_{abb}=\lambda \,\delta_{a2}\; .\label{deflambda}
\end{equation}
A contraction of \eqn{GamD} over two indices then implies that the
following three tensors must be equal,
\begin{eqnarray}
3\Gamma _{abcc}&=&2 d_{acd}\,d_{bcd}
+\lambda\, d_{2ab} -{\textstyle\frac{1}{2}} (n+1)\,\delta _{ab} \;,\nonumber\\
3D_{cca;b}&=& \lambda\, A_{a2;b}\; , \nonumber\\
3D_{abc;c}&=& 2 d_{ec(a}\, A_{b)e;c}+ d_{abc}\,A_{dc;d}\;.
\label{cGamD}
\end{eqnarray}
Now we distinguish between three different cases, denoted by I, II
and III, which will play
a role throughout this analysis.
In case I we have
\begin{equation}
\lambda= 0 \Longleftrightarrow d_{abb} = 0\;.
\end{equation}
According to \eqn{cGamD} we then have
\begin{equation}
\Gamma_{ccab}= D_{cca;b}=D_{abc;c} = 0\;. \label{Deq}
\end{equation}
Using a notation where $d_a$ and $A_a$ are $(n-1)\times(n-1)$
matrices defined by
$(d_a)_{bc}\equiv d_{abc}$ and $(A_a)_{bc} \equiv A_{bc;a}$,
the first equation \eqn{Deq} reads
\begin{equation}
\langle d_a\, d_b\rangle = \textstyle{1\over 4}(n+1)\,
\delta _{ab} \;,
\end{equation}
where $\langle A \rangle$ denotes the trace of a matrix $A$.
Making use of this result we contract the tensors appearing in
\eqn{GamD} with $d_{cdf}$, leading to
\begin{eqnarray}
3\Gamma _{abcd}\,d_{cdf}&=&\textstyle{1\over 4}(n-3)\,d_{abf}
+2 \langle d_a\, d_f\,d_b\rangle \ , \nonumber\\
3D_{acd;b}\, d_{cdf}&=&\textstyle{1\over 4} (n+1)\,
A_{af;b}+2\langle d_a\,d_f\,A_b \rangle\ . \label{l0}
\end{eqnarray}
According to \eqn{GamD} these two tensors should be equal. However,
the first one is symmetric and the second one antisymmetric in $a$
and $f$. Therefore they should vanish separately.
Combining the above results, we derive
\begin{equation}
\Gamma _{abcd}\,\Gamma _{abce}=
\Gamma _{abcd} \,D_{abc;e}= -{\textstyle\frac{1}{2}} D_{aad;e} =0.
\end{equation}
For case I we therefore obtain
\begin{equation}
\Gamma _{abcd}=D_{abc;d} = d_{abb}=0 .
\end{equation}
These are the equations that were analyzed in
the appendix of \cite{BEC}. The first part of
this analysis coincides with the one
that we are about to present for case II and III in the limit
where the $A_{ab;c}$ tensors are put to zero or coincide with
generators of the
$O(n-1)$ subgroup that is left invariant by $d_{abc}$.
A minor complication
is that the "2" direction is not yet defined for case I,
in view of the fact that $d_{abb}=0$. However, the analysis only
requires that $d_{222}\not=0$.
Hence we proceed to case II and III where $\lambda\not=0$.
Therefore we know from \eqn{cGamD} that
$A_{2a;b}$ is symmetric in $a$ and $b$.
{}From this it follows that $A_{ab;c}=0$
whenever two of its indices are equal to 2, which leads to
$D_{222;2}=0$.
In case II we assume that $d_{22i}= 0$, where $i=3,\ldots ,n$.
Using $\Gamma_{2222}= D_{222;2}=0$ one finds that
\begin{equation}
(d_{222})^2 = {\textstyle\frac{1}{2}} .
\end{equation}
As we can choose the sign of $d_{222}$ at will, we thus find that
case II also leads to \eqn{step1}.
In case III we assume that not all $d_{22i}$ vanish. Diagonalizing
the symmetric matrix $d_{2ij}-A_{2i;j}$ gives
\begin{equation}
A_{2i;j}=d_{2ij}-\lambda _i\,\delta _{ij}. \label{diagdA}
\end{equation}
Then $\Gamma_{222i}= D_{222;i}$ yields
\begin{equation}
d_{22i}\left( \alpha +\lambda _i\right) =0 , \nonumber
\end{equation}
where we used the notation $\alpha \equiv d_{222}$.
For those values of $i$ for which $d_{22i}\neq 0$, we have
the same eigenvalue $\lambda _i=-\alpha$.
Hence, by means of a rotation, we can define a ``3''
direction such that
\begin{equation}
d_{22i}=\beta \, \delta_{i3}\ , \qquad \lambda_3 =-\alpha ,
\end{equation}
where $\beta \not=0$ (otherwise we would be dealing with case II).
Using $\Gamma_{222i} =D_{22i;2}$ gives
\begin{equation}
d_{233}=-\alpha \ ,\qquad
A_{\alpha 3;2}=3d_{23\alpha } \ ,
\end{equation}
with $\alpha =4,\ldots ,n$.
Hence we also have
\begin{equation}
A_{23;3} = d_{233}-\lambda_3 = 0\ .
\end{equation}
Then from $\Gamma_{2222} = D_{222;2} = 0$, one derives
\begin{equation}
\alpha ^2+\beta ^2={\textstyle\frac{1}{2}}\ .\label{2222}
\end{equation}
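Explicitly, using $d_{22i}=\beta\,\delta_{i3}$,
\[
\Gamma_{2222} = d_{22e}\,d_{22e} - {\textstyle\frac{1}{2}}
= \alpha^2 + \beta^2 - {\textstyle\frac{1}{2}}\ ,
\]
so that the vanishing of $\Gamma_{2222}$ is just \eqn{2222}.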
Now we analyze \eqn{GamD} with indices
$(2233)$. First $D_{332;2}=D_{223;3}$ takes the form
\begin{equation}
-2\left( d_{23\alpha }\right) ^2
={\textstyle\frac{2}{3}}\left( d_{23\alpha }\right) ^2 \ .
\end{equation}
Combining the above equations gives
\begin{equation}
d_{23\alpha }=A_{\alpha 3;2}=A_{23;\alpha }=A_{2\alpha ;3}=0\ .
\end{equation}
This implies that also $D_{223;\alpha }=0$ and thus
\begin{equation}
3D_{22\alpha ;3}=\beta\,A_{\alpha 3;3}=0.
\end{equation}
Hence the tensor $A_{ab;c}$ vanishes whenever two of its indices
are equal to 2 or 3. Moreover we have $A_{2\alpha;\beta} =
A_{2\beta;\alpha}$.
Subsequently we deduce from $D_{332;2}=D_{223;3}=0$ that
$\Gamma _{2233}=0$. Combining this with
\eqn{2222} shows that
\begin{equation}
d_{333}=-\beta \ .
\end{equation}
Then $\Gamma _{223\alpha }=0$ gives
\begin{equation}
d_{33\alpha }=0\ .
\end{equation}
Hence our results for the $d$ coefficients take the form
\begin{eqnarray}
&&d_{222}=\alpha \ ,\ d_{223}=\beta \ ,\ d_{233}=-\alpha \ ,\
d_{333}=-\beta \nonumber\\
&&\alpha ^2+\beta ^2={\textstyle\frac{1}{2}} \ ,\nonumber\\
&&d_{22\alpha }=d_{23\alpha }=d_{33\alpha }=0\nonumber\\
&&d_{2bb}= d_{2\beta\beta} =\lambda \not= 0\ ,\nonumber \\
&& d_{3bb} = d_{3\beta\beta} = 0\ , \ d_{\alpha bb}=
d_{\alpha \beta \beta }= 0\ .
\end{eqnarray}
Now we may perform an $O(2)$ transformation in the
$(2,3)$ space such that the new coefficient
$d_{223}$ vanishes. In terms of the cubic function ${\cal Y}(x)$
this transformation corresponds to an orthogonal redefinition of
$x_2$ and $x_3$,
\begin{eqnarray}
x'_2 &=& x_2\cos \phi \pm x_3\sin\phi\ , \nonumber\\
x'_3 &=& -x_2\sin \phi \pm x_3\cos\phi \ .
\end{eqnarray}
Using \eqn{2222} and defining $\alpha
=\frac{1}{\sqrt{2}}\cos \theta $ and $\beta
=\frac{1}{\sqrt{2}}\sin \theta $, we obtain a one-parameter family
of coefficients
\begin{equation}
\alpha '=\frac{1}{\sqrt{2}} \cos (\pm\theta -3\phi )\ ;\qquad
\beta '=\frac{1}{\sqrt{2}} \sin (\pm\theta -3\phi ). \label{phiab}
\end{equation}
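The appearance of the combination $\pm\theta-3\phi$ becomes transparent
upon introducing the complex variable $z=x_2+ix_3$: the part of the
cubic built from $x_2$ and $x_3$ alone is
\[
\alpha\,\big(x_2^{\,3}-3x_2\,x_3^{\,2}\big)
+\beta\,\big(3x_2^{\,2}x_3-x_3^{\,3}\big)
= {\rm Re}\,\big[(\alpha-i\beta)\,z^3\big]
= \frac{1}{\sqrt{2}}\,{\rm Re}\,\big[e^{-i\theta} z^3\big]\ ,
\]
and under the above redefinitions $e^{-i\theta}z^3 \to
e^{-i(\pm\theta-3\phi)}\,z'^{\,3}$ (the lower sign corresponding to the
reflection $z\to\bar z$), which is \eqn{phiab}.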
We can thus choose a parametrization such that
\begin{eqnarray}
&& d_{222}=\frac{1}{\sqrt{2}}\ , \qquad
d_{233}=-\frac{1}{\sqrt{2}}\ , \nonumber \\
&& d_{223}= d_{22\alpha} = d_{333} = d_{23\alpha} = d_{33\alpha}
= d_{\alpha\beta\beta}=0 \ ,
\end{eqnarray}
so that case III also allows the parametrization \eqn{step1}.
Observe that
after this redefinition $d_{abb}$ may only differ from zero for
$a=2$ or 3. Hence
\begin{equation}
d_{abb}= \lambda_2\,\delta_{a2} + \lambda_3\,\delta_{a3} \ .
\end{equation}
Case I is now characterized by $\lambda_2=\lambda_3=0$, case II
by $\lambda_2\not=0\ , \lambda_3=0$, and case III by $\lambda_3\not=
0$.
Note, however, that the angle $\phi$ in \eqn{phiab} is not uniquely
determined. There are 6 solutions. This means that there is still the
possibility of redefining $x_2$ and $x_3$, such that we remain
within the parametrization \eqn{step1}. Those redefinitions
consist of products of reflections,
\begin{equation}
x_3 \rightarrow -x_3 \label{ch3m} \ ,
\end{equation}
and $2\pi /3$ rotations,
\begin{eqnarray}
x_2 &\rightarrow &-\textstyle\frac{1}{2}\, x_2
+ \textstyle\frac{1}{2}\sqrt{3}\,x_3\ , \nonumber\\
x_3 &\rightarrow &-\textstyle{1\over2}\sqrt{3}\, x_2
- \textstyle\frac{1}{2} \,x_3\ . \label{ch23}
\end{eqnarray}
These replacements do not change the part of $\cal Y$ that is
quadratic or cubic in $x_2$ and $x_3$,
\begin{equation}
{\cal Y}(x) =\frac{1}{\sqrt{2}}
\left( x_2^{\,3}-3x_2\,x_3^{\,2}\right) +\ldots .
\end{equation}
Later we shall see that the
above redefinitions allow one to rewrite some of
the solutions belonging to case II
into those belonging to case III.
{\hspace*{\fill}\rule{2mm}{2mm}\linebreak}
This concludes the proof of \eqn{step1}. The second step in the
classification starts by diagonalizing
$d_{2ab}$ for all $a$ and $b$ (this
is consistent with \eqn{step1}). Hence we adjust the
frame of reference, such that
\begin{equation}
d_{2ij }= \mu _i \, \delta _{ij} \ ,
\end{equation}
where we recall that $i,j=3,\ldots, n$.
Now we consider \eqn{GamD} with indices $(22ij)$, according
to which the following three tensors should be equal,
\begin{eqnarray}
3\Gamma _{22ij }&=&\Big( \frac{1}{\sqrt{2}}
\,\mu _i +2\mu _i^2-{\textstyle\frac{1}{2}}\Big)
\delta _{ij} \ , \nonumber\\
3D_{22i;j}&=&\Big( -\frac{1}{\sqrt{2}}
+2\mu _i\Big) A_{2i;j}\ , \nonumber\\
3D_{ij2;2}&=&(\mu _j -\mu _i )\, A_{ij;2}\ .
\end{eqnarray}
As the last tensor vanishes for
$i=j$, while the first one takes its non-zero values in that case,
the three tensors should vanish separately. The vanishing of
the first one implies that
$\mu _i$ can only take two possible values, $-\frac{1}{\sqrt{2}}$ or
$\frac{1}{2\sqrt{2}}$. Therefore it is convenient to
split the indices $i$ according to
these values into indices $\mu, \nu, \ldots$ and $m, n, \ldots$ such
that
\begin{equation}
\mu _\mu =-\frac{1}{\sqrt{2}}\ ,\qquad \mu _m=\frac{1}{2\sqrt{2}}\ .
\end{equation}
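The two values quoted for $\mu_i$ are precisely the roots of the quadratic appearing in the diagonal part of $3\Gamma_{22ij}$; a quick numerical confirmation (illustrative only):

```python
import math

r2 = math.sqrt(2)

def gamma22(mu):
    # diagonal part of 3*Gamma_{22ij}: mu/sqrt(2) + 2 mu^2 - 1/2
    return mu / r2 + 2 * mu**2 - 0.5

# the two admissible eigenvalues of d_{2ij}
assert abs(gamma22(-1 / r2)) < 1e-12
assert abs(gamma22(1 / (2 * r2))) < 1e-12

# a quadratic has no further roots: check sum and product of the roots
# against -b/a and c/a for 2 mu^2 + mu/sqrt(2) - 1/2 = 0
assert abs((-1 / r2 + 1 / (2 * r2)) - (-1 / (2 * r2))) < 1e-12
assert abs((-1 / r2) * (1 / (2 * r2)) - (-0.25)) < 1e-12
```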
Furthermore we obtain
\begin{equation}
A_{\mu m;2}= A_{2\mu;\nu }=A_{2\mu ;m}=0.\label{A20}
\end{equation}
It is clear that the special
index value $i=3$ that occurred in the analysis of
case III, is contained in the index set labeled by
$\mu, \nu, \ldots$.
The next step is the analysis of \eqn{GamD} with indices
$(2ijk)$. The corresponding tensors are
\begin{eqnarray}
3\Gamma _{2ijk}&=&d_{ijk}\left( \mu _i+\mu _j+\mu _k\right)\ ,
\nonumber\\
3D_{ijk;2} &=&3d_{l(ij}A_{k)l;2}\ , \nonumber\\
3D_{2ij;k} &=&d_{lij}A_{2l;k} + (\mu _i-\mu _j)\,
A_{ji;k} \ .\label{2ijk}
\end{eqnarray}
Using \eqn{A20} it follows that $d_{\mu\nu\rho}\,D_{\mu\nu\rho;2}
= d_{\mu\nu m}\,D_{\mu\nu m;2} = d_{mnp}\,D_{mnp;2} = 0$ by
virtue of the antisymmetry of the coefficients $A_{ij;2}$.
Therefore the tensor $\Gamma_{2ijk}$ should vanish when contracted with
these $d$ coefficients. As $\Gamma_{2ijk}$ is itself
proportional to the
$d$ coefficients, it follows that certain components
should vanish, i.e.,
\begin{equation}
d_{\mu \nu \rho }=d_{\mu\nu m} = d_{mnp}=0\ .
\end{equation}
As $\Gamma_{2\mu mn}$ already vanishes by virtue of the
fact that $\mu_\mu+\mu_m+\mu_n =0$, we have thus established that
all components of the
$\Gamma$ tensor with one or more indices equal to 2 now vanish.
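The index-pattern count behind this conclusion can be made explicit: of the four possible patterns of $\mu_i+\mu_j+\mu_k$ entering $\Gamma_{2ijk}$, only the mixed pattern $(\mu,m,m)$ sums to zero, so only there does $\Gamma_{2ijk}$ vanish without forcing the corresponding $d$ component to zero. A short illustrative check:

```python
import math

mu_type = -1 / math.sqrt(2)      # value of mu_i for indices mu, nu, ...
m_type = 1 / (2 * math.sqrt(2))  # value of mu_i for indices m, n, ...

sums = {
    "(mu,mu,mu)": 3 * mu_type,
    "(mu,mu,m)": 2 * mu_type + m_type,
    "(mu,m,m)": mu_type + 2 * m_type,
    "(m,m,m)": 3 * m_type,
}
# Gamma_{2ijk} = d_{ijk}(mu_i + mu_j + mu_k) vanishes identically only
# for the mixed pattern; the other patterns force d_{ijk} = 0
vanishing = [k for k, v in sums.items() if abs(v) < 1e-12]
assert vanishing == ["(mu,m,m)"]
```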
Most of the components of $D_{ijk;2}$ and $D_{2ij;k}$ now
vanish identically. The equation $ D_{\mu mn;2} = 0$ implies
that the $d$ tensors should be left invariant by orthogonal
transformations characterized by
the $A_{ij;2}$. The latter can be put to zero and do
not restrict the $d$-coefficients. Furthermore, one has
\begin{equation}
D_{2\mu m;i} =0\ \Longleftrightarrow \
A_{m\mu ;i } = \textstyle{2\over 3}\sqrt{2} \,
d_{\mu mn}A_{2n;i}\ .\label{condmi}
\end{equation}
The only components of $\Gamma_{ijkl}$ that do not vanish
identically at this point, are
\begin{eqnarray}
\Gamma _{\mu \nu mn}&=&\textstyle{\frac{2}{3}}
d_{mp(\mu }\,d_{\nu) np}
-\textstyle{\frac{1}{4}}\delta_{\mu \nu }\,
\delta _{mn}\ ,\label{defGmu}\\
\Gamma _{mnpq}&=&d_{\mu (mn}\,d_{pq)\mu }
-\textstyle{\frac{3}{8}} \delta _{(mn}\, \delta_{pq)}\ .
\label{defGm}\end{eqnarray}
According to \eqn{GamD}, they should satisfy
\begin{eqnarray}
\Gamma _{\mu \nu mn}&=&D_{\mu \nu m;n}=- \Gamma _{\mu
\nu mp}\,H_{pn}\ ,\label{GmuH} \\
\Gamma _{\mu \nu mn}&=& D_{mn\mu ;\nu} =\textstyle{2\over3}\,
d_{\mu q(m}\,A_{n)q;\nu }+\textstyle{1\over3}\,d_{\rho mn}\,
A_{\mu \rho ;\nu }\ , \label{GmuA} \\
\Gamma _{mnpq}&=&D_{mnp;q} =\Gamma_{mnpr}\,H_{rq}\ ,
\label{GmH}
\end{eqnarray}
where we made use of \eqn{condmi} and
defined $H_{mn}\equiv {2\over 3}\sqrt 2 A_{2m;n}$.
Contractions of the above equations will give useful
information.
Denoting the range of the indices $\mu $ by $q+1$, and the range
of the indices $m$ by $r$, so that
\begin{equation}
n=3+q+r\ ,
\end{equation}
we have
\begin{eqnarray}
\Gamma _{\mu \nu mm}&=& \textstyle{2\over3}\,{\rm tr}\,(d_\mu \,d_\nu )
-\textstyle{\frac{1}{4}}r\,\delta _{\mu \nu}
=\textstyle{1\over3}\,d_{\rho mm}\,A_{\mu \rho ;\nu }\ ,\label{cGmuA}\\
\Gamma_{\mu \mu mn}&=& \textstyle{\frac{2}{3}} (d\,d)_{mn}
-\textstyle{\frac{1}{4}}(q+1)\,\delta_{mn}\ , \label{cGmut}\\
\Gamma _{ppmn}&=&\textstyle{\frac{2}{3}}(d\,d)_{mn}
-\textstyle{1\over 8}(r+2)\,\delta_{mn}
+\textstyle{\frac{1}{3}} d_{\mu pp}\,d_{\mu mn}\ , \label{cGmt}
\end{eqnarray}
where $(d\,d)_{mn}\equiv d_{\mu mp}\,d_{\mu np}$.
The remaining equations for which
the corresponding $\Gamma$ tensors vanish, $D_{m\mu\nu;\rho} =
D_{mnp;\mu} = D_{\mu mn;p}=0$, are solved by $A_{2m;\mu
}=A_{mn;p}=A_{\mu \nu ;p}=0$. Other solutions that
satisfy these equations correspond to non-trivial invariances of
the $d_{abc}$ tensor.
Let us now turn again to the three cases discussed previously.
The only non-vanishing components of $d_{abc}$
are $d_{\mu mn}$ and
\begin{equation}
d_{222}= {1\over \sqrt2}\ , \quad
d_{2\mu\nu} = -{1\over\sqrt 2}\,
\delta_{\mu\nu}\ , \quad
d_{2mn}= {1\over 2\sqrt 2}\,\delta_{mn}\ , \label{parad}
\end{equation}
corresponding to
\begin{equation}
{\cal Y}(x) = {1\over \sqrt 2}\left(x_2^{\,3} -3 x_2\,(x_\mu^{\,2}
-\textstyle{1\over 2}\,x_m^{\,2} )\right) + 3d_{\mu mn}\,
x_\mu\,x_m\,x_n \ . \label{paray}
\end{equation}
The three cases are characterized by
the possible non-vanishing values of $d_{abb}$, which are
\begin{equation}
d_{2bb} = \frac{1}{2\sqrt 2} (r-2q), \quad \mbox{and}\quad d_{\mu bb}
= d_{\mu mm} .
\end{equation}
In case I we have $r=2q$, so that $n=3(q+1)$, and $d_{\mu mm}=0$. As
we established already, one must have
\begin{equation}
\Gamma_{\mu\nu mn} = \Gamma_{mnpq} = 0\ .
\end{equation}
Hence the $d_{\mu mn}$ may be regarded as $r\times r$ matrices, which
generate a $(q+1)$-dimensional Clifford algebra. In view of the
second condition, the dimension of this algebra is severely
constrained. According to \cite{BEC}, only $q= 1$, 2, 4 and 8 are
possible, corresponding to $n=6$, 9, 15 and 27, respectively.
This conclusion follows from the possible dimension of the
reducible representations of the Clifford algebra.
This case is related to Jordan algebras and the magic square
\cite{GuSiTo}. In addition we have the trivial case with $q=0$
and $n=3$.
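The dimension-counting part of this restriction is easy to reproduce: case I requires $r=2q$, while $r$ must be a multiple of the real dimension ${\cal D}_{q+1}$ of an irreducible module of ${\cal C}(q\!+\!1,0)$ (listed later in table~\ref{Cliffp0}). The following sketch checks only this necessary condition; the full argument of \cite{BEC} also uses $\Gamma_{mnpq}=0$ and tracelessness:

```python
# Real dimensions D_{q+1} of an irreducible representation of
# C(q+1,0), taken from the standard table of real Clifford algebras
D = {2: 2, 3: 4, 4: 8, 5: 8, 6: 16, 7: 16, 8: 16, 9: 16}

# Case I requires r = 2q, while r must be a multiple of D_{q+1};
# this necessary condition already singles out q = 1, 2, 4, 8
allowed = [q for q in range(1, 9) if (2 * q) % D[q + 1] == 0]
assert allowed == [1, 2, 4, 8]
# the corresponding dimensions n = 3(q+1) = 6, 9, 15, 27
assert [3 * (q + 1) for q in allowed] == [6, 9, 15, 27]
```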
For case II we have $d_{\mu mm}=0$ and $r-2q\not=0$.
It turns out that it is sufficient to restrict our analysis to
the case $q=-1$.
Then there are no indices $\mu$, so that
the non-vanishing coefficients $d_{abc}$ are
\begin{equation}
d_{222}={1\over\sqrt2}\ , \quad
d_{2mn}= {1\over2\sqrt2}\, \delta_{mn} \ ,
\end{equation}
with $r= n-2$ arbitrary. Obviously, we have
\begin{equation}
\Gamma_{mnpq} = - \textstyle{3\over8} \,\delta_{(mn}\,\delta_{pq)}\ ,
\quad H_{mn} = \delta_{mn}\ ,
\end{equation}
while all other components of $\Gamma$ vanish.
The reason why we do not have to consider $q\geq 0$ is that, after
identifying one of the indices $\mu$ with 3, we can always perform
a redefinition \eqn{ch23}. After this redefinition
we no longer have $d_{\mu mm}=0$, so that we can perform the
same steps as before, but now for case III. Nevertheless for
clarity of the presentation we briefly derive the consequences for
case II with arbitrary $q$.
We first use \eqn{GmH} to obtain
\begin{equation}
\Gamma^{(2)}_{mn} = \Gamma^{(2)}_{mp}\,H_{pn} ,\quad
\mbox{with} \quad
\Gamma^{(2)}_{mn} \equiv \Gamma_{mpqr}\,\Gamma_{pqrn} .
\label{Gam2H}
\end{equation}
Let us now decompose the space associated
with the indices $m, n, \ldots$
into the null space of $\Gamma^{(2)}$ and its orthogonal
complement. The
indices $m,n, \ldots$ are split accordingly
into indices $A, B, \ldots$ and $M, N,\ldots$, so that
$\Gamma^{(2)}_{Am} =\Gamma^{(2)}_{mA} = 0$ and
$\det \big(\Gamma^{(2)}_{MN}\big) \not= 0$.
This implies that
\begin{equation}
\Gamma_{mnpA} = 0, \label{Gamvan}
\end{equation}
while \eqn{Gam2H} restricts the matrix $H$ according to
$H_{MA} = 0$ and $H_{MN} = \delta_{MN}$.
Combining $d_{\mu mm}=0$ and \eqn{cGmut}, \eqn{cGmt} and
\eqn{Gamvan}, we find
\begin{equation}
\Gamma_{\mu\mu AB} = \textstyle{1\over 8} (r-2q)\,\delta_{AB},
\qquad
\Gamma_{\mu\mu AM} = \Gamma_{\mu\mu MA} = 0.
\end{equation}
{}From \eqn{GmuH} it then follows that the non-vanishing matrix
elements of $H$ are given by
$H_{AB} = - \delta_{AB}$ and $H_{MN}= \delta_{MN}$,
while
\begin{equation}
\Gamma_{\mu\nu MN} = 0\ . \label{Gresult}
\end{equation}
Therefore $\Gamma_{\mu\mu mn}$ is now fully known and non-vanishing.
On the other hand, $\Gamma_{\mu\mu mm}$ is
restricted to vanish by \eqn{cGmuA}. This implies that the
null space of $\Gamma^{(2)}$ is in fact empty, so that there are no
indices $A, B, \ldots$. Hence we find that $H_{mn} =
\delta_{mn}$. From \eqn{GmuH} it then follows that
$\Gamma_{\mu\nu mn}$ vanishes,
\begin{equation}
\Gamma_{\mu\nu m n}= 0\ ,
\end{equation}
while $\Gamma_{mnpq}$ remains arbitrary. Hence the coefficients
$d_{\mu mn}$ may again be regarded as $r\times r$ matrices generating
a $(q+1)$-dimensional Clifford algebra. This puts restrictions
on $r$ and $q$, but those are considerably weaker than
in the previous case.
Now let us turn to case III, where $d_{\mu mm}\not=0$.
By a suitable rotation of the components labeled by $\mu, \nu, \ldots$,
we choose the direction in which $d_{\mu mm}$ does not vanish to
be equal to $\mu=3$. The remaining indices $\mu$ will be denoted
by $\hat\mu$. Subsequently we diagonalize $d_{3mn}$,
\begin{equation}
d_{3mn}=\sqrt{\textstyle\frac{3}{8}}\lambda _m\, \delta _{mn}\ .
\label{diag3}
\end{equation}
We then obtain from \eqn{cGmuA} that $A_{3\mu ;\nu }$ is symmetric in
$\mu$ and $\nu$ (this conclusion requires $d_{\mu mm}\not= 0$), which
implies that $A_{3\mu ;3}=0$. Substituting
this result into
\eqn{GmuA} for $\mu=\nu=3$, we obtain
\begin{equation}
\textstyle{1\over 4}(\lambda^2_m -1)
\delta_{mn}= (\lambda _m-\lambda_n)A_{nm;3}\ .\label{33}
\end{equation}
As $A_{nm;3}$ is antisymmetric in $n$ and $m$, it follows
that both sides of the equation should vanish separately,
so that
\begin{equation}
\lambda_m^2 = 1\ , \qquad \Gamma_{\mu\nu mn}
= 0 \quad\mbox{for} \quad \mu=\nu=3\ .
\end{equation}
Splitting the range of indices $m,n,\ldots$ into indices
$x, y, \ldots$ and $\dot x, \dot y, \ldots$ such that
\begin{equation}
\lambda _x=1\ ,\qquad \lambda _{\dot x}=-1\ ,
\end{equation}
it follows from \eqn{33} that $A_{x\dot y;3}=0$.
Subsequently, consider again
\eqn{GmuA} but now with $\mu=\hat\mu\not=3$,
$\nu =3$, $m=x$, $n=y$,
\begin{equation}
2d_{\hat\mu xy}=2d_{\hat \mu z(x}A_{y)z;3}+d_{\hat\rho xy}A_{\hat \mu
\hat\rho ;3}. \end{equation}
Multiplying the right-hand side with $d_{\hat\mu xy}$
gives zero by virtue of
the antisymmetry of the $A$ coefficients. This implies
$d_{\hat\mu xy}=0$. The same derivation
can be repeated for two dotted indices, so we are left with
the coefficients $d_{\hat\mu x\dot y}$ with mixed indices.
This then yields $\Gamma _{3\hat\mu mn}=0$.
In cases I and II we proved that
$\Gamma _{\mu \nu mn}=0$ in general. Therefore, in all cases with
$q\geq 0$, one can identify a suitable index $\mu =3$
and bring $d_{3mn}$ in diagonal form like in \eqn{diag3}, so that
one can employ the parametrization in terms of dotted and
undotted indices and
derive the restrictions for $d_{\hat\mu mn}$ as found above.
The present formulation is thus fully applicable to
all three cases with $q\geq0$ (for the moment we ignore
the results obtained above for the $A$ tensors, which apply only
to case III). Let us therefore proceed and present the relevant
equations in this formulation for the general case.
{}From $D_{3x\dot y;\hat\mu} = 0$ it follows that
\begin{equation}
A_{x\dot y;\hat\mu} = d_{\hat\nu x\dot y}\, H_{\hat\nu\hat\mu}\ ,
\end{equation}
where $H_{\hat\mu\hat\nu}\equiv \sqrt{2/3}\,
A_{3\hat\mu;\hat\nu}$. In addition we have $ D_{\hat\mu x\dot y;3}=0$,
which can be solved by $A_{\dot x\dot y;3}=A_{xy;3}=A_{\hat\mu
\hat\nu;3}=0$ and has no consequences for the $d$ tensor.
When non-zero values are possible for the $A$ tensors, they
define non-trivial invariances of the $d$ coefficients.
Let us give the non-vanishing components of the
$\Gamma$ tensor in this notation (cf. (\ref{defGmu}, \ref{defGm})),
\begin{eqnarray}
\Gamma _{\hat\mu \hat\nu xy}&=&\textstyle{\frac{2}{3}}\,
d_{\dot z x(\hat \mu}\,d_{\hat \nu )y\dot z}
-\textstyle{\frac{1}{4}}\,
\delta_{\hat\mu\hat\nu }\, \delta _{xy} \ ,\nonumber\\
\Gamma_{\hat\mu \hat\nu \dot x\dot y}&=&
\textstyle{\frac{2}{3}}\,d_{z \dot x(\hat \mu }\,
d_{\hat \nu )\dot y z}-\textstyle{\frac{1}{4}}\,
\delta_{\hat\mu \hat\nu }\,\delta_{\dot x\dot y} \ ,\nonumber\\
\Gamma_{x y\dot z\dot w} &=&
\textstyle{\frac{2}{3}}\,d_{\hat\mu x(\dot z} \,
d_{\dot w)y\hat\mu }-\textstyle{\frac{1}{4}} \,
\delta _{xy} \,\delta _{\dot z\dot w} \ .\label{defGamhat}
\end{eqnarray}
We denote the range of indices $\hat\mu$, $x$ and $\dot x$ by $q$,
$p$ and $\dot p$, respectively, so that $r=p+\dot p$ and $n= 3+
q+ p+\dot p$.
These equations have a remarkable symmetry under interchange of the
indices $\hat \mu $, $x$ and $\dot x$. This is not a coincidence
and is related to the redefinitions that were explained previously.
To see this, consider the cubic polynomial $\cal Y$, which has
acquired the following form (for $q\geq 0$),
\begin{eqnarray}
{\cal Y}(x) &=&\frac{1}{\sqrt{2}}\,x_2\,(x_2 +\sqrt3\, x_3)\,
(x_2 -\sqrt 3\, x_3)\nonumber\\
&&+ \frac{3}{\sqrt{2}}\left( -x_2\,x_{\hat \mu }^2+{\textstyle\frac{1}{2}}
(x_2+\sqrt{3}\,x_3)\,x_x^2
+{\textstyle\frac{1}{2}} (x_2-\sqrt{3}\,x_3)\,x_{\dot x}^2 \right)\nonumber\\
&& +6\,d_{\hat \mu x\dot x}\,x_{\hat \mu}\,x_x\,x_{\dot x}.
\label{paray2}
\end{eqnarray}
The replacement \eqn{ch23} induces an interchange of the quantities
$x_2$ and $-{\textstyle\frac{1}{2}}(x_2 \pm \sqrt 3 x_3)$, which leaves the form of
the function ${\cal Y}(x)$ unchanged, except that the labels
$\hat\mu$, $x$ and $\dot x$ are interchanged. Similarly, the
replacement \eqn{ch3m} corresponds to an interchange of labels
$x$ with $\dot x$ (of course, the range of the various indices
changes accordingly).
Case I is now characterized by $q=p=\dot p$, case II
by $p=\dot p\not= q$, and case III by $p\not=\dot p$. Contraction of
the above tensors leads to the following equations,
\begin{eqnarray}
&&\Gamma_{\hat\mu\hat\mu xy} = \Gamma_{\dot z\dot z xy} +
\textstyle{1\over4}(\dot p-q)\,\delta_{xy}\ , \nonumber\\
&&\Gamma_{\hat\mu\hat\mu \dot x\dot y} = \Gamma_{z z\dot x\dot y} +
\textstyle{1\over4}(p-q)\,\delta_{\dot x\dot y}\ , \nonumber\\
&&\Gamma_{\hat\mu\hat\nu xx} = \Gamma_{\hat\mu\hat\nu \dot x\dot x} +
\textstyle{1\over4}(\dot p-p)\,\delta_{\hat\mu\hat\nu}\ .
\label{Gamcontr}
\end{eqnarray}
In this notation, the equations (\ref{GmuH}-\ref{GmH}) take the form
\begin{eqnarray}
&&\Gamma_{\hat \mu\hat \nu x y} = \Gamma_{\hat \rho\hat \nu
xy}\,H_{\hat \rho\hat \mu} = -\Gamma_{\hat \mu\hat \nu zy}\, H_{zx}\ ,
\nonumber \\ &&\Gamma_{\hat \mu\hat \nu \dot x\dot y} =
-\Gamma_{\hat \rho\hat \nu \dot x\dot y}\, H_{\hat \rho\hat \mu}
= -\Gamma_{\hat \mu\hat \nu \dot z\dot y}\, H_{\dot z\dot x}\ , \nonumber
\\ &&\Gamma_{x y\dot x\dot y}= \Gamma_{zy\dot x\dot y}
\, H_{zx} = \Gamma_{xy\dot z\dot y }
\, H_{\dot z\dot x}\ , \label{GHHH}
\end{eqnarray}
where we suppressed the equations involving $H_{x\dot y}$ and
$H_{\dot x y}$, which have no consequences for the
$d$-coefficients.
The symmetry noted above should be taken into account when
identifying inequivalent $d$ tensors. However, its presence also
facilitates our work, as it allows us to apply the following
lemma in three possible situations: \\
\noindent
{\bf Lemma}: {\it
Consider one of the three matrices $H$, say, $H_{\hat
\mu\hat\nu}$. Then, either
the two other matrices $H$ are of equal dimension
($p=\dot p$), in which case $\Gamma_{\hat\mu\hat\nu xx} =
\Gamma_{\hat\mu\hat\nu \dot x\dot x}=0$, or they
are not of equal dimension ($p\not=\dot p$), in which case
$H_{\hat\mu\hat\nu}$ is equal to plus or minus
the identity matrix, with
$\Gamma_{\hat\mu\hat\nu xy}$ or
$\Gamma_{\hat\mu\hat\nu \dot x\dot y}$ vanishing, respectively.}\\
\noindent
To prove this lemma, multiply the third equation (\ref{Gamcontr})
with $H_{\hat\mu\hat\rho}$ and apply (\ref{GHHH}).
When $p=\dot p$
the corresponding equations lead to $\Gamma_{\hat\mu\hat\nu xx} =
\Gamma_{\hat\mu\hat\nu \dot x\dot x}=0$, as claimed above. On the other
hand, when $p\not=\dot p$,
it follows that $H_{\hat\mu\hat\nu}$ is a symmetric matrix which
can be diagonalized. Consider first
the case where $\hat\mu$ and $\hat\nu$
belong to an eigenspace of $H$ with
eigenvalue different from $\pm 1$. Then it follows from (\ref{GHHH})
that $\Gamma_{\hat\mu\hat\nu xx} =
\Gamma_{\hat\mu\hat\nu \dot x\dot x} =0$, which leads to $p=\dot p$
and thus to a contradiction. Hence $H_{\hat\mu\hat\nu}$ has only
eigenvalues equal to $\pm 1$. Assume now that both eigenvalues occur.
Consider then indices $\hat\mu$ and $\hat\nu$ corresponding to
the subspace with
eigenvalue $+1$, and an index $\hat\rho$ belonging to the subspace
with eigenvalue $-1$. Then (\ref{GHHH}) implies that (no sum over repeated
$\hat \rho $ index) \begin{equation}
\Gamma_{\hat\mu\hat\nu \dot x\dot y} =
\Gamma_{\hat\rho\hat\rho x y} =
\Gamma_{\hat\mu\hat\rho \dot x\dot y} =
\Gamma_{\hat\mu\hat\rho x y} =
\Gamma_{\hat\nu\hat\rho \dot x\dot y} =
\Gamma_{\hat\nu\hat\rho x y} = 0 \ .
\end{equation}
According to the last four equations
$d_{\hat\rho}$ anticommutes as
a matrix with $d_{\hat\mu}$ and $d_{\hat\nu}$. Thus we perform
the following calculation (no sum over repeated
$\hat \rho $ index),
\begin{eqnarray}
0&=&d_{\hat\rho x \dot x}\,\Gamma _{\hat \mu \hat\nu\dot x\dot y}\,
d_{\hat\rho\dot yy} \nonumber\\
&=&\textstyle{\frac{2}{3}}\big( d_{\hat\rho}\, d_{(\hat \mu }
\, d_{\hat \nu )}\, d_{\hat\rho} \big)_{xy}
-\textstyle{\frac{1}{4}}\left(d_{\hat\rho}\,d_{\hat\rho}\right)
_{x y}\,\delta _{\hat \mu \hat \nu }\nonumber\\
&=&\Gamma _{\hat \mu \hat \nu
x z}\, (d_{\hat\rho}\,d_{\hat\rho})_{z y}
=\textstyle{\frac{3}{8}}\,\Gamma _{\hat \mu
\hat \nu x y} \ .
\end{eqnarray}
Hence both $\Gamma_{\hat\mu\hat\nu xy}$ and
$\Gamma_{\hat\mu\hat\nu \dot x\dot y}$ vanish, which requires that
$p=\dot p$, thus leading to a contradiction. Hence the eigenvalues
of $H$ must all be equal, which completes the proof of the lemma.{\hspace*{\fill}\rule{2mm}{2mm}\linebreak}
With the help of this lemma it is straightforward to analyze the
various solutions of (\ref{defGamhat},\ref{GHHH}). First we
assume that $q$, $p$ and $\dot p$ are non-vanishing. Application of the
lemma then reveals that there are no solutions with different
values for $q$, $p$ and $\dot p$, simply because two $\Gamma$
tensors must then vanish, which, by (\ref{Gamcontr}) implies that
at least two of the parameters $q$, $p$ or $\dot p$ should be equal.
Because of the symmetry we can choose either two of the
parameters equal. Let us assume, for instance, \underline{$q\neq
p= \dot p\neq 0$}. Then, from the lemma applied to the three
matrices $H$ one finds four possibilities, two of which imply
that two uncontracted $\Gamma$ tensors vanish, which is
inconsistent with $q\not=p$ or $q\not=\dot p$. Then one has the
third possibility corresponding to
\begin{equation}
p=\dot p\ :\quad
\Gamma _{xy\dot x\dot y}=\Gamma _{\hat\mu\hat\nu xx}
=\Gamma _{\hat\mu\hat\nu \dot x\dot x}=0\ , \quad
H_{xy} =- \delta_{xy} \ , \quad H_{\dot x\dot y} = -\delta_{\dot
x\dot y}\ .
\end{equation}
On the other hand the first line of \eqn{Gamcontr} implies that
$\Gamma _{\hat \mu \hat \mu xx}=\textstyle{\frac{1}{4}}(p-q)p$,
which must vanish according to the above equation. Hence $p=0$;
this is one of the cases to be discussed below. The remaining
possibility, which leaves $q$ arbitrary, corresponds to
\begin{equation}
p=\dot p\ :\quad \Gamma_{\hat\mu\hat\nu xy} =
\Gamma_{\hat\mu\hat\nu \dot x\dot y} = 0\ ,\quad
H_{xy} = \delta_{xy} \ , \quad H_{\dot x\dot y} =
\delta_{\dot x\dot y}\ . \label{resp=dp}
\end{equation}
Clearly, this solution belongs to case II, while the equivalent
solution with $q=p\neq \dot p$ or $q=\dot p\neq p$ belongs to
case III.
The case \underline{$q=p=\dot p$} is case I, for which we showed already
before that all $\Gamma $ symbols are zero with traceless $d$ coefficients.
What remains is to investigate the situation where
at least one of the parameters $q$, $p$ or $\dot p$ vanishes (this may
occur in case I, II or III depending on the values of the other
two parameters). In that case only one of the tensors $\Gamma$
remains (unless one of the other parameters vanishes as well).
Let us choose \underline{$q=0$}. The only remaining possibility is
\begin{equation}
q=0\ :\quad \Gamma_{xy\dot x\dot y} = -\textstyle{1\over 4}
\,\delta_{xy}\,\delta_{\dot x\dot y}\ , \quad H_{xy} =
\delta_{xy}\ ,\quad H_{\dot x\dot y} = \delta_{\dot x \dot y}\ .
\end{equation}
This completes the classification of the coefficients $d_{abc}$ satisfying
the equation (\ref{GamD}).
\newpage
\section{Results of the classification.}
\setcounter{equation}{0}
\subsection{$d$-coefficients and Clifford algebras.}
In the previous section we obtained the possible tensors
$d_{abc}$ that are solutions to \eqn{GamD}, up to arbitrary
$O(n\!-\!1)$
rotations. The indices $a,b,\ldots$, are decomposed into indices
2, $\mu $ and $m$, where $\mu $ and $m$ take $q+1$ and
$r$ values, respectively. We thus have $n=3+q+r$.
The general results for the $d$ tensors are summarized in
\eqn{parad} and (\ref{paray}), where, as we shall see shortly,
the coefficients $d_{\mu mn}$ satisfy the defining
relation (up to a proportionality factor) of the generators of
a Clifford
algebra and can thus be expressed as (symmetric, real) gamma matrices
according to
\begin{equation}
d_{\mu mn}=\sqrt{\textstyle\frac{3}{8}} \left( \gamma _\mu \right)
_{mn}.\label{gamma}
\end{equation}
Therefore the function ${\cal Y}(x)$ acquires the generic form
\begin{equation}
{\cal Y}(x) = {1\over \sqrt 2}\left\{x_2^{\,3} -3 x_2\,(x_\mu^{\,2}
-\textstyle{1\over 2}\,x_m^{\,2} ) + \textstyle{3\over
2}\sqrt{3}\left(\gamma _\mu\right) _{mn}\, x_\mu\,x_m\,x_n
\right\}\ ,
\label{parayf} \end{equation}
where the gamma matrices generate a real representation of the Clifford
algebra ${\cal C}(q+1,0)$ with positive metric. Let us now
analyze the various solutions found in the previous section.
The first case is \underline{$q=-1$} (i.e., indices $\mu$ are
absent). As $d_{\mu mn}$ does not exist, the $d$ coefficients
are completely given by \eqn{parad}, and the corresponding
function $\cal Y$ reads
\begin{equation}
{\cal Y}(x) = {1\over \sqrt 2}\left(x_2^{\,3} +\textstyle{3\over
2}x_2\,x_m^{\,2} \right) \ .
\end{equation}
As shown previously, $\Gamma_{mnpq}= -{3\over 8}\delta_{(mn}\,
\delta_{pq)}$, which vanishes only for $r=0$. We denote
these solutions by $L(-1,r)$ with $r\geq 0$ and $n=2+r$.
For $q\geq 0$ there is one value of $\mu$, which we
denote by ``3'', and we split indices $m$ into $x$ or $\dot x$,
taking $p$ and $\dot p$ values, respectively. These indices are
distinguished by
\begin{equation}
d_{3xy}=\sqrt{\textstyle\frac{3}{8}}\, \delta _{xy}\ ,\qquad
d_{3\dot x\dot y}=-\sqrt{\textstyle\frac{3}{8}}\, \delta _{\dot
x\dot y}\ , \qquad d_{3x\dot y}=0.
\end{equation}
For \underline{$q=0$} there is no further restriction; $p$ and
$\dot p$ are arbitrary
and we denote the solution as $L(0,P,\dot P)\equiv L(0,\dot P,
P)$, where we replace $p$ and $\dot p$ by $P$ and $\dot P$ in
order to have a uniform
notation for the reducible representations of the Clifford
algebra (to be discussed below). Whenever $P$
or $\dot P$ are zero we write $L(0,P)\equiv L(0,P,0)$. The
diagonal matrix
\begin{equation}
\left( \gamma_3\right) _{mn}\equiv \sqrt{\textstyle\frac{8}{3}}\,
d_{3mn} \ ,
\end{equation}
can be viewed as a gamma matrix that generates a
one-dimensional
Clifford algebra ${\cal C}(1,0)$. This algebra has two inequivalent
irreducible representations corresponding to $+1$ and $-1$. The
numbers $P$ and $\dot P$ specify the multiplicities of these
representations in $\gamma_3$. The corresponding function $\cal
Y$ follows directly from \eqn{parayf},
\begin{eqnarray}
{\cal Y}(x) &=&\frac{1}{\sqrt{2}}\Big\{x_2\,(x_2 +\sqrt3\, x_3)\,
(x_2 -\sqrt 3\, x_3)\nonumber\\
&&\qquad + {\textstyle{3\over 2}} (x_2+\sqrt{3}\,x_3)\,x_x^2
+{\textstyle{3\over 2}} (x_2-\sqrt{3}\,x_3)\,x_{\dot x}^2
\Big\}\ .
\end{eqnarray}
The non-vanishing $\Gamma$ tensor is
\begin{equation}
\Gamma_{xy\dot x\dot y}= -{\textstyle{1\over 4}} \delta_{xy}\,
\delta_{\dot x\dot y}\ ,
\end{equation}
which vanishes whenever $p$ or $\dot p$ vanishes. Note that we
have $n=3+p+\dot p$.
For \underline{$q>0$} we may restrict ourselves to $r>0$, as the case $q>0,\
r=0$ is equivalent to $L(0,q,0)$ by a rotation \eqn{ch23}.
Denoting the values of $\mu\not=3$ by $\hat \mu$, the tensors
$d_{\hat\mu mn}$ satisfy
\eqn{resp=dp}, which implies that we can define the following
$r\times r$ gamma matrices,
\begin{equation}
\gamma _{\hat \mu } = \sqrt{\textstyle\frac{8}{3}}
\pmatrix {0& d_{\hat \mu x\dot v}\cr
d_{\hat\mu \dot y w}&0\cr } \ ,
\qquad
\gamma _{3} = \pmatrix {\unity&0 \cr
0&-\unity\cr}\ .
\end{equation}
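For the smallest case $q=1$, $p=\dot p=1$ this block structure can be realized explicitly; the following sketch (with an illustrative choice of matrices, not taken from the text) verifies that the resulting real symmetric matrices generate ${\cal C}(2,0)$:

```python
def mat_mul(A, B):
    n = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

def mat_add(A, B):
    return [[a + b for a, b in zip(ra, rb)] for ra, rb in zip(A, B)]

# smallest case q = 1, p = pdot = 1: gamma_3 is diagonal with entries
# +1 (undotted) and -1 (dotted), gamma_hat is purely off-diagonal
g3 = [[1, 0], [0, -1]]
ghat = [[0, 1], [1, 0]]

# both are real and symmetric and satisfy {gamma_a, gamma_b} = 2 delta_ab,
# i.e. they generate C(2,0); this corresponds to L(1,1) with n = 6
for a in (g3, ghat):
    for b in (g3, ghat):
        anti = mat_add(mat_mul(a, b), mat_mul(b, a))
        expected = [[2 if (i == j and a is b) else 0 for j in range(2)]
                    for i in range(2)]
        assert anti == expected
```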
We have thus established (\ref{parayf}). To classify all cases
with $q>0$ one must consider all possible gamma matrices that
generate a real Clifford algebra ${\cal C}(q+1,0)$. The
irreducible representations (with positive definite metric) are
listed in table~\ref{Cliffp0} \cite{CliffordR}.
\begin{table}[tb]
\begin{center}
\begin{tabular}{||c|c|c|c|c|c||}\hline
$q$ &$q+1$ &${\cal C}(q+1,0)$& ${\cal D}_{q+1}$&$\Rbar ({\cal D}_{q+1})$&
${\bf C}$ \\ \hline &&&&&\\[-3mm]
$-1$ & 0&$\Rbar$ &1 &$\Rbar $ &$\Rbar$ \\
0 & 1&$\Rbar\oplus \Rbar $&1&$\Rbar $&$\Rbar$ \\
1 & 2&$\Rbar(2)$ &2 &$\Rbar (2)$&$\Rbar$ \\
2 & 3&$\Cbar(2)$ &4 &${\cal C}(3,1)$& $\Cbar$ \\
3 & 4&$\Hbar(2)$ &8 &$\Hbar \otimes \Hbar (2)$& $\Hbar$
\\ 4 & 5&$\Hbar (2)\oplus \Hbar (2)$&8
&$\Hbar \otimes \Hbar (2)$& $\Hbar$\\
5 & 6&$\Hbar(4)$ &16&$\Hbar \otimes \Hbar (4)$& $\Hbar$ \\
6 & 7&$\Cbar(8)$ &16&${\cal C}(8,0)$ & $\Cbar$ \\
7 & 8&$\Rbar(16)$&16&$\Rbar (16)$ & $\Rbar$ \\
$n+7$ & $n+8$ & $\Rbar(16)\otimes{\cal C}(n,0)$&$16\,{\cal D}_n$ & $\Rbar
(16)\otimes \Rbar ({\cal D}_n)$& as for $n$\\[1mm]
\hline
\end{tabular}
\end{center}
\caption{Real Clifford algebras ${\cal C}(q\!+\!1,0)$. Here
${\bf F}(n)$ stands for $n\times n$
matrices with entries over the field
$\bf F$, while ${\cal D}_{q+1}$ denotes the
real dimension of an irreducible representation of
the Clifford algebra. We decompose the matrices $\Rbar ({\cal
D}_{q+1})$ acting on the real irreducible
representation space, either as a direct product
with the Clifford algebra {\em representation} as a factor, or in
the form of a higher-dimensional Clifford algebra. This
decomposition is used to determine the centralizer $\bf C$ of the
Clifford algebra in this representation.}
\label{Cliffp0}
\end{table}
They are unique
except when the Clifford module consists of a direct sum of two
factors. As shown in table~\ref{Cliffp0} this is the case for
$q=0$ mod 4, where there exist two inequivalent irreducible
representations\footnote{
These are the only dimensions for which the product of all
gamma matrices, $Q\equiv \gamma_1\cdots\gamma_{q+1}$, commutes
with every individual matrix and has square $\unity$; the two
inequivalent representations are related by an overall sign
change in the gamma matrices: $\gamma_\mu\to - \gamma_\mu$, so
that $Q$ changes from $+\unity $ to $-\unity $, or vice
versa.}. This implies
that for $q\neq 0$ mod 4, the gamma matrices are unique once we
specify the number of irreducible representations. The solution
for the $d$ coefficients is then denoted by $L(q,P)$, where $P$
denotes the number of irreducible representations. We have thus
$r=P\, {\cal D}_{q+1}$, or, equivalently, $n=3+q+P\,{\cal
D}_{q+1}$.
However, when $q$ is a multiple of 4 (i.e., $q=4m$ with $m$ integer),
then there exist two inequivalent irreducible representations and
the solutions are characterized by specifying the multiplicities
$P$ and $\dot P$ of each of the two representations. The
solutions are therefore denoted by $L(4m,P,\dot P)\equiv L(4m, \dot P,
P)$ and we have $n=3+4m+(P+\dot P){\cal D}_{4m+1}$. Whenever $P$
or $\dot P$ vanishes, we denote the solutions by $L(4m,P)\equiv
L(4m,P,0)$.
This concludes the classification of the various solutions.
The only components of $\Gamma _{abcd}$ that possibly differ from zero
are
\begin{equation}
\Gamma _{mnpq}={\textstyle\frac{3}{8}}\left[ \left( \gamma _\mu
\right) _{(mn} \left( \gamma _\mu \right) _{pq)} -
\delta_{(mn}\delta _{pq)}\right] .
\end{equation}
As one easily verifies, this tensor vanishes only for $L(-1,0)$,
$L(0,r)$, $L(1,1)$, $L(2,1)$, $L(4,1)$ and $L(8,1)$,
corresponding to $n= 2$, $3+r$, 6, 9, 15 and 27, respectively.
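For $L(1,1)$ the vanishing of $\Gamma_{mnpq}$ amounts to the identity $(\gamma_\mu)_{(mn}(\gamma_\mu)_{pq)}=\delta_{(mn}\delta_{pq)}$ for the two real symmetric $2\times2$ gamma matrices. A direct numerical verification (illustrative; unit-weight symmetrization is implemented as an average over all permutations):

```python
from itertools import permutations

# real symmetric gamma matrices generating C(2,0) on a 2-dimensional
# module: the L(1,1) solution with n = 6
gammas = [[[1, 0], [0, -1]], [[0, 1], [1, 0]]]
delta = [[1, 0], [0, 1]]

def sym4(T, m, n, p, q):
    # symmetrize the pair product T over all four indices
    return sum(T(*perm) for perm in permutations((m, n, p, q))) / 24.0

def gg(m, n, p, q):
    return sum(g[m][n] * g[p][q] for g in gammas)

def dd(m, n, p, q):
    return delta[m][n] * delta[p][q]

def Gamma(m, n, p, q):
    # (3/8) [ (gamma_mu)_(mn (gamma_mu)_pq) - delta_(mn delta_pq) ]
    return 0.375 * (sym4(gg, m, n, p, q) - sym4(dd, m, n, p, q))

assert all(abs(Gamma(m, n, p, q)) < 1e-12
           for m in range(2) for n in range(2)
           for p in range(2) for q in range(2))
```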
Note that the contracted tensor
\begin{eqnarray}
\Gamma_{mnpp}&=&\textstyle{\frac{1}{8}} (2q-r)\, \delta_{mn}\qquad
\mbox{for }q\neq 0 \\
&=& -\textstyle{\frac{1}{8}}P\left(\delta
_{mn} -\left(\gamma_3\right)
_{mn}\right) -\textstyle{\frac{1}{8}}\dot P\left(\delta
_{mn} +\left(\gamma_3\right) _{mn}\right)\qquad
\mbox{for }q=0 \nonumber
\end{eqnarray}
($q=0$ is special, because it represents the only case where a gamma
matrix can have a non-zero trace) has only
zero eigenvalues in those cases where we already know that
$\Gamma _{mnpq}=0$. This
implies that the equation $\Gamma _{mnpq} Z^q=0$ has
non-trivial solutions $Z^q$ only when $\Gamma_{mnpq}$ vanishes.
\subsection{$A$-coefficients and symmetry groups}
Now that we have found the non-vanishing components of $\Gamma
_{abcd}$ we consider the solutions for the corresponding tensors
$A_{ab;c}$ as defined by \eqn{GamD}. They are determined modulo
solutions of the homogeneous equation,
\begin{equation}
d_{d(ab}\,A_{c)d} = 0, \label{homeq}
\end{equation}
which define the invariances of the coefficients $d_{abc}$.
We recall that these symmetries must
preserve the metric $\delta _{ab}$, so that
the matrices $A_{ab}$ are antisymmetric.
There are only two types of invariances. First there is
\begin{equation}
A_{2m}=-A_{m2} =\sqrt{3}\,\zeta_m\ , \qquad A_{m\mu}= -A_{\mu m} =
\left( \gamma _\mu\right) _{mn}\, \zeta_n \ ,\label{solzetA}
\end{equation}
where $\zeta_m$ must satisfy
\begin{equation}
\Gamma_{mnpq}\,\zeta_q = 0\ .
\end{equation}
However, from the discussion at the end of the previous subsection it
follows that this equation has only non-trivial solutions for $\zeta _m$
when $\Gamma_{mnpq}$ vanishes.
The second type of solutions of (\ref{homeq}) corresponds to
the invariance group of the tensor $d_{\mu mn}\propto \gamma_{\mu mn}$
associated with the matrices $A_{\mu\nu}$ and $A_{mn}$,
\begin{equation}
A_{\mu\nu}\, \left( \gamma_\nu\right) _{mn} +
\gamma_{\mu p(m}\, A_{n)p} =0 .
\label{HOq+1}
\end{equation}
For any $A_{\mu \nu }$ there is the solution
\begin{equation}
A_{mn} = {\textstyle{1\over4}} A_{\mu\nu} \,
\big(\gamma_\mu\gamma_\nu\big)_{mn} .
\label{cover}
\end{equation}
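That \eqn{cover} generates the spinor action of $SO(q\!+\!1)$ can be verified in the simplest case $q+1=2$: the resulting matrix $A_{mn}$ is antisymmetric and rotates the gamma matrices as a vector. The following sketch is illustrative only; the relation $[A,\gamma_\mu]=-A_{\mu\nu}\gamma_\nu$ holds in the sign conventions chosen here:

```python
def mm(A, B):
    n = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

# q+1 = 2: real symmetric gammas generating C(2,0)
g = [[[1, 0], [0, -1]], [[0, 1], [1, 0]]]
w = 0.7                         # rotation parameter A_{12} = -A_{21} = w
Amunu = [[0.0, w], [-w, 0.0]]

# A_{mn} = (1/4) A_{mu nu} (gamma_mu gamma_nu)_{mn}
A = [[0.0, 0.0], [0.0, 0.0]]
for mu in range(2):
    for nu in range(2):
        prod = mm(g[mu], g[nu])
        A = [[A[i][j] + 0.25 * Amunu[mu][nu] * prod[i][j]
              for j in range(2)] for i in range(2)]

# A is antisymmetric and rotates the gammas as an SO(2) vector:
# [A, gamma_mu] = -A_{mu nu} gamma_nu (with these conventions)
assert all(abs(A[i][j] + A[j][i]) < 1e-12 for i in range(2) for j in range(2))
for mu in range(2):
    comm = [[mm(A, g[mu])[i][j] - mm(g[mu], A)[i][j] for j in range(2)]
            for i in range(2)]
    target = [[-sum(Amunu[mu][nu] * g[nu][i][j] for nu in range(2))
               for j in range(2)] for i in range(2)]
    assert all(abs(comm[i][j] - target[i][j]) < 1e-12
               for i in range(2) for j in range(2))
```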
Obviously, the group
associated with $A_{\mu\nu}$ is the rotation group $SO(q\!+\!1)$,
which acts on the spinor coordinates labeled by $m$ according to
its cover group. Besides there can be additional invariances
that act exclusively in spinor space and commute with the gamma
matrices and thus with the corresponding representation of the
Clifford algebra. Hence we are interested in the metric-preserving
elements of the centralizer of the Clifford algebra in the
$r$-dimensional real representation (i.e., the
antisymmetric matrices $A_{mn}$ belonging to $\Rbar(r)$ that
commute with $\gamma_\mu$). Let us first determine the
centralizers for the irreducible representations.
According to
Schur's lemma, matrices that commute with an {\em irreducible}
representation of the Clifford algebra must form a division
algebra. Table~\ref{Cliffp0} lists the centralizers of the
real irreducible representations, which are thus equal to
$\Rbar$, $\Cbar$ or $\Hbar$. We briefly present the arguments
leading to this result\footnote{Many results
on real irreducible representations of the Clifford algebras and their
centralizers have been explicitly worked out in \cite{Okubo}. Another
useful reference is \cite{Coq}.}.
First consider $q+1$ even. The only commuting element in the Clifford
algebra representation is $\unity $, while the centralizer is just the
factor in $\Rbar({\cal D}_{q+1})$ that
multiplies the Clifford algebra representation
(cf. table~\ref{Cliffp0}). In this way we find that the centralizer
is $\Rbar$ for $q+1=0$ or 2 mod 8, and $\Hbar$ for $q+1=4$ or 6
mod 8. Now take $q+1$ odd. For $q=4m$ the irreducible
representation of the Clifford algebra corresponds to only one of
the terms of the direct sum in ${\cal C}(4m+1)$.
Just as above, the only commuting element in this
representation is $\unity$, and the
centralizer is obtained as the factor that multiplies the
Clifford algebra representation in
$\Rbar ({\cal D}_{q+1})$, i.e., $\Rbar $
for $q+1=1$ mod 8 and $\Hbar $ for $q+1=5$ mod 8.
What remains is the case $q=2+4m$. Then, as indicated in
table~\ref{Cliffp0}, the representation space is isomorphic to a
higher-dimensional Clifford algebra,
which makes it easy to verify that only
$\unity $ and $Q\equiv \gamma _1\cdots\gamma _{q+1}$
span the centralizer.
(Note that for $q=2+4m$, $Q^2=-\unity $, while for $q=4m$, $Q$ itself is
represented by $\pm \unity $.) We thus conclude that the centralizer is
equal to $\Cbar$ for $q+1=3$ or 7 mod 8.
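To make the signs explicit: assuming the positive-definite normalization $\gamma_\mu\gamma_\nu+\gamma_\nu\gamma_\mu=2\,\delta_{\mu\nu}\,\unity$, as appropriate for the positive-definite Clifford algebras considered here, moving $\gamma_\mu$ through the $q+1$ factors of $Q$ and squaring $Q$ give

```latex
\gamma_\mu\, Q = (-1)^{q}\, Q\, \gamma_\mu\ ,\qquad
Q^2 = (-1)^{q(q+1)/2}\,\unity\ ,
```

so that $Q$ commutes with all $\gamma_\mu$ precisely when $q$ is even, while the exponent $q(q+1)/2$ is odd for $q=2+4m$ (hence $Q^2=-\unity$) and even for $q=4m$ (hence $Q^2=\unity$, so that $Q$ is represented by $\pm\unity$ in an irreducible representation).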
To analyze the reducible representations, we first rewrite $\Rbar
(r)$ as $\Rbar(p)\otimes \Rbar ({\cal D}_{q+1})$, where $p$ is
the number of irreducible representations, thus $p=P$ for
$q\neq 4m$ and $p=P+\dot P$ for $q=4m$.
Consider first $q\neq 4m$ such
that $\gamma _\mu =\unity _P\otimes \gamma_\mu ^{irr}$.
This shows that the centralizer is the direct product of $\Rbar
(P)$ with the centralizer of $\gamma_\mu ^{irr}$, leading to
$\Rbar (P)$ for
$q=1,7$ mod 8, to $\Cbar (P)$ for $q=2,6$ mod 8, and to $\Hbar
(P)$ for $q=3,5$ mod 8. What remains are the cases $q=0$ mod 4,
when we have $\gamma _\mu
=\eta \otimes \gamma _\mu ^{irr}$, where $\eta =\mbox{diag}(1,\ldots,1,
-1,\ldots,-1)$. Writing $A_{mn}$ as $A=H\otimes S$,
where $H\subset \Rbar(p)$ and $S\subset\Rbar({\cal D}_{q+1} )$,
we have the condition
\begin{equation}
[A,\gamma _\mu ]= H\eta \otimes S\,\gamma _\mu ^{irr}-\eta H\otimes \gamma _\mu
^{irr}\, S=0. \end{equation}
In the sector proportional to
$(\unity \pm\eta)H(\unity\mp\eta)$,
it follows that $S$ anticommutes with
$\gamma_\mu ^{irr}$; $S\,S^T$ is then a symmetric matrix
that commutes with
$\gamma_\mu ^{irr}$, so that it must be proportional to $\unity$.
Therefore $S$ is orthogonal and $S\,\gamma_\mu ^{irr}S^{-1}
=-\gamma_\mu ^{irr}$. This leads to a contradiction, as it
implies that $\gamma_\mu ^{irr}$ and $-\gamma_\mu ^{irr}$ are
equivalent representations. Consequently the matrices $H$ are
restricted to the $\Rbar (P)\oplus\Rbar (\dot P)$ matrices
commuting with $\eta $. For these matrices the same
considerations apply as for $q\neq 4m$. The result is then that
the centralizer is the direct product of $\Rbar (P)\oplus
\Rbar (\dot P)$ with the
centralizer of $\gamma_\mu ^{irr}$, which corresponds to
$\Rbar (P)\oplus \Rbar (\dot P)$ for $q=0$ mod 8, and $\Hbar
(P)\oplus \Hbar (\dot P)$ for $q=4$ mod 8.
Now we determine the antisymmetric matrices in these centralizers
corresponding to the generators of the metric-preserving
subgroups. In each case these
centralizers can be written as the direct product of real
matrices with a division algebra (in the real representation, so
that the imaginary units become antisymmetric matrices).
Therefore in
the complex or the quaternionic representation the antisymmetry
requirement takes the form of an antihermiticity requirement. The
metric-preserving groups are therefore
\begin{eqnarray}
\mbox{for }q=1,7 \mbox{ mod }8&:& SO(P)\nonumber\\
\mbox{for }q=0 \mbox{ mod }8&:& SO(P)\otimes SO(\dot P)\nonumber\\
\mbox{for }q=2,6 \mbox{ mod }8&:& U(P)\nonumber\\
\mbox{for }q=3,5 \mbox{ mod }8&:& U(P,\Hbar )\equiv U\!Sp(2P)\nonumber\\
\mbox{for }q=4 \mbox{ mod }8&:& U\!Sp(2P)\otimes U\!Sp(2\dot P)
\label{mpgc}
\end{eqnarray}
In conclusion, we summarize the symmetries of the tensors
$d_{abc}$. First there are the symmetries \eqn{solzetA} for the
cases $L(-1,0)$, $L(0,r)$, $L(1,1)$, $L(2,1)$, $L(4,1)$ and $L(8,1)$.
Secondly there is the group $SO(q+1)$ and the group mentioned in
\eqn{mpgc} represented by matrices $S_{mn}$. This gives
\begin{eqnarray}
A_{\mu \nu }&=& \mbox{arbitrary}\nonumber\\
A_{mn}&=& \textstyle{\frac{1}{4}}\left( \gamma_\mu \gamma _\nu \right)
_{mn}A_{\mu \nu }+S_{mn}\nonumber\\
A_{2m}&=&\sqrt{3}\,\zeta _{m}\nonumber\\
A_{m\mu }&=&\left( \gamma _\mu \right) _{mn}\zeta_n .
\label{gensolA}
\end{eqnarray}
Now that we have determined the solutions of the homogeneous equation
\eqn{homeq}, we turn to the inhomogeneous equation \eqn{GamD}. A
particular solution is
\begin{equation}
A_{2m;n}=-A_{m2;n} ={\textstyle{3\over 4}} \sqrt 2\, \delta_{mn}\ ,
\qquad A_{m\mu;n}= -A_{\mu m;n} = \textstyle{\frac{1}{4}}\sqrt{6}
\left( \gamma _\mu\right) _{mn} \ .\label{specsolA} \end{equation}
When $\Gamma_{mnpq}= 0$ these solutions correspond to an
invariance of the $d_{abc}$ coefficients and are already contained
in the previous transformations.
\section{Implications for homogeneous special spaces}
\label{conclusions}
\setcounter{equation}{0}
Now we return to special geometry and the cubic
polynomial $C(h)$, defined in (\ref{Cpoly}). Using the canonical
parametrization, we first introduce an extra coordinate $x_1$,
and add the corresponding terms $x_1^{\,3} - {1\over 2}x_1\,
x_a^{\,2}$ to the polynomial (\ref{cubic}). Giving up the canonical
parametrization, we no longer have to restrict ourselves to $O(n-1)$
redefinitions, and we can make arbitrary
linear redefinitions of the $x_1,\ldots,x_n$. Using
\begin{eqnarray}
&&h^1 = 3^{-1/3}\,\big(x_1+\sqrt 2 \,x_2\big)\ , \nonumber\\
&&h^2 = 3^{-1/3}\,\big(x_1-{\textstyle{1\over 2}}\sqrt 2\,
x_2\big)\ , \nonumber \\
&&h^\mu =2^{-1/2}\cdot 3^{1/6}\, x_\mu \ , \qquad h^m =2^{-1/2}\cdot
3^{1/6}\, x_m \ , \end{eqnarray}
the polynomial $C(h)$ acquires the generic form given in section
1 (cf. (\ref{genC1})),
\begin{equation}
C(h) = 3\Big\{ h^1\,
\big(h^2\big)^2 -h^1\,\big(h^\mu\big)^2 -h^2\,\big(h^m\big)^2
+\gamma_{\mu mn}\,h^\mu\, h^m\,h^n\Big\} \ . \label{genC}
\end{equation}
We stress that this parametrization no longer coincides
with the canonical one. The possible realizations for the gamma
matrices were discussed in the previous section. Note that we have
\begin{equation}
n= 3+q+r\ ,\qquad \mbox{with}\quad r=(P+\dot P)\,{\cal D}_{q+1}\ ,
\end{equation}
where the integers $P$ and $\dot P$ characterize the
representations for the gamma matrices, as discussed in the
previous section.
We now summarize the linear transformations of $h^A$ that leave
(\ref{genC}) invariant. They can either be determined directly from
\eqn{genC}, or can be evaluated from $\delta x^A = B^A{}_B x^B$,
using \eqn{cBpar} with $A_{ab}=B^c{}_1 \,A_{ab;c}\,$, where
$A_{ab;c}$ is taken from \eqn{specsolA}, plus a homogeneous
solution as in \eqn{gensolA},
\begin{eqnarray}
\delta h^1 &=& 2\xi_2\,h^1 + 2\xi_m h^m \ , \nonumber \\
\delta h^2 &=& -\xi_2\,h^2 -\zeta_m\, h^m+ 2\xi_\mu\,h^\mu \ ,
\nonumber \\
\delta h^\mu &=& -\xi_2\,h^\mu + 2\xi_\mu \,h^2 -
\zeta_n\,\gamma_{\mu mn}\,h^m + A_{\mu\nu}\,h^\nu \ , \label{htrans} \\
\delta h^m &=& {\textstyle{1\over 2}}\xi_2\,h^m + \xi_m \,h^2 -
\zeta_m\,h^1 + \xi_n \,\gamma_{\mu mn}\,h^\mu + \xi_\mu\,
\gamma_{\mu mn}\,h^n + A_{mn}\,h^n\ . \nonumber
\end{eqnarray}
The symmetries
corresponding to the parameters $\zeta_m$ only exist
when the tensor $\Gamma_{mnpq}$ vanishes. As before, $A_{mn}$ and
$A_{\mu\nu}$ are antisymmetric matrices that leave the gamma matrices
invariant (cf. (\ref{HOq+1})). As explained in the previous section,
$A_{\mu\nu}$ and $A_{mn}$ generate the product of $SO(q\!+\!1)$ and the
metric-preserving group in the centralizer of the corresponding Clifford
algebra representation given in \eqn{mpgc}. The parameters are
defined as
follows \begin{eqnarray}
&& B^1_2= \sqrt 2\,\xi_2 \ ,\qquad
B^1_m = \sqrt{\textstyle{2\over 3}}\,
\big(\xi_m-\zeta_m\big)\ ,\qquad
B^1_\mu = 2 \sqrt{\textstyle{2\over 3}}\, \xi_\mu \ , \\
&& A_{2m} =-A_{m2}= {\textstyle{1\over 2}}\sqrt 3\,\big(\xi_m +
\zeta_m\big)\ , \qquad A_{m\mu} =-A_{\mu m}={\textstyle{1\over 2}}
\gamma_{\mu mn}\, \big(\xi_m + \zeta_m\big)\ . \nonumber
\end{eqnarray}
It is illuminating to decompose the generators with respect to
the abelian generator $e_2$ associated with the parameter
$\xi_2$. The algebra then decomposes according to
\begin{equation}
{\cal X} = {\cal X}_{-3/2} + {\cal X}_0 + {\cal X}_{3/2} \ ,
\end{equation}
where ${\cal X}_{-3/2}$ contains the generators associated with
the parameters $\zeta_m$ (which is thus only present when
$\Gamma_{mnpq}=0$), ${\cal X}_0$ consists of the generators
associated with $\xi_2$, $\xi_\mu$, $A_{\mu\nu}$ and $A_{mn}$, and
${\cal X}_{3/2}$
contains the generators corresponding to the parameters $\xi_m$.
Obviously ${\cal X}_{3/2}$ constitutes a solvable algebra of
dimension $r$. Also
${\cal X}_0$ contains a solvable algebra (of dimension
$q\!+\!2$). This follows directly from the observation that the
subalgebra consisting of the generators associated with the
parameters $\xi_\mu$ and $A_{\mu\nu}$ constitutes $so(q\!+\!1,1)$,
which, by its Iwasawa decomposition, contains a solvable subalgebra of
dimension $q\!+\!1$
and rank 1 (for $q\geq 0$; for $q=-1$ the algebra is empty, so
that the rank is 0).\footnote{The
action of $SO(q\!+\!1,1)$ on the spinor coordinates follows from
the explicit terms in (\ref{htrans}) proportional to $\xi_\mu$ and
the generators (\ref{cover}) contained in $A_{mn}$ corresponding
to the cover of
$SO(q\!+\!1)$. The additional generators in $A_{mn}$
corresponding to \eqn{mpgc} are compact; they commute with
$SO(q\!+\!1,1)$ and have no bearing on the solvable
subalgebra of ${\cal X}_0$.}
Indeed, the subspace of the special real manifold corresponding
to $h^m=0$ and $h^1$ fixed and non-zero, corresponds precisely to
the coset space $SO(q\!+\!1,1)/SO(q\!+\!1)$.
The complete solvable transitive group of motions thus consists of the
transformations \eqn{htrans} corresponding to the parameters
$\xi _a$, combined with (for $q\geq 0$)
\begin{equation}
A_{\mu \nu }=4\delta _{3[\mu }\xi _{\nu ]}\ ;\qquad A_{mn}= \left( \gamma
_{[3}\gamma _{\mu ]}\right) _{mn}\xi _\mu \ , \end{equation}
where 3 denotes some arbitrary direction in the space of vectors
labeled by indices $\mu $.\vspace{.6cm}
Let us now discuss the implications of our results for the
homogeneous special real spaces with a transitive isometry group
that constitutes an invariance of the polynomial $C(h)$, and thus of
the corresponding $N\!=\!2$ supergravity theory in five
space-time dimensions. These spaces are classified in terms of
the polynomials $C(h)$, as given in (\ref{genC}). The rank of these spaces
is equal to 1 or 2, because the Cartan subalgebra of the solvable algebra
is spanned by the transformation associated with $\xi _2$ together with the
Cartan subalgebra of the solvable algebra corresponding to
$SO(q+1,1)/SO(q+1)$. The rank-1 spaces
have $q=-1$ and the corresponding expression for $C(h)$ is
\begin{equation}
L(-1,r):\qquad C(h) = 3\,h^2\,\big( h^1\, h^2
-\big(h^m\big)^2 \big) \ .
\end{equation}
Their solvable algebra is that of
\begin{equation}
L(-1,r):\qquad {SO(r\!+\!1,1)\over SO(r\!+\!1)}\ , \qquad (n=r+2)
\end{equation}
and we therefore identify them with these spaces. They are thus
symmetric and were exhibited in the context of
$d\!=\!5$ supergravity in \cite{GuSiTo2}.
A simple counting argument shows, however, that not all the ${1\over
2}(r+1)(r+2)$ symmetries of this space correspond to
invariances of the cubic polynomial $C(h)$, as there are only $r$
invariances associated with ${\cal X}_{3/2}$ and ${1\over2}r(r-1)
+ 1$ with ${\cal X}_0$ (corresponding to $A_{mn}\sim SO(r)$ and
$\xi_2$, respectively). Indeed, explicit calculations \cite{dWVPVan} show
that the missing $r$ isometries do {\em not} correspond to linear
transformations of the coordinates $h^A$. The case $r=0$ is an
exception in this respect, as all isometries of the real manifold
coincide with the invariances of $C(h)$. The non-linear transformations of
$h$ are not invariances of the full $d=5$ supergravity action (only
of the scalar part \eqn{sigma}), and the lower-dimensional actions
therefore do not exhibit these invariances. This is the reason why
the K\"ahler and quaternionic spaces resulting from the {$\bf c$ map}\
and the {${\bf c}{\scriptstyle\circ}{\bf r}$ map}\ applied to $L(-1,r)$ are in general {\em not}
symmetric, with
the exception of the spaces corresponding to $L(-1,0)$.\footnote{In
\cite{GuSiTo2} it was assumed that the space remains
symmetric after reduction; therefore the corresponding K\"ahler
spaces were incorrectly identified with
the minimal couplings of $d=4,\ N=2$ supergravity.}
Their quaternionic counterparts are missing in the classification
of homogeneous spaces in \cite{Aleks} and the corresponding
K\"ahler spaces are therefore also missing in \cite{Cecotti}.
The rank-2 spaces with $q=0$ are special, because
$C(h)$ factorizes in certain cases (corresponding to the
symmetric spaces where either $P$ or $\dot P$ vanishes),
\begin{eqnarray}
L(0,P,\dot P):\quad C(h) &=&- 3\Big\{ h^1\,
\big(h^2+h^3\big)\,\big(h^2-h^3\big)\\
&& \qquad +\big(h^2-h^3\big)\,
\big(h^x\big)^2 +\big(h^2+h^3\big)\,
\big(h^{\dot x}\big)^2\Big\} \ , \nonumber
\end{eqnarray}
where we have decomposed the indices $m$ into $P$ indices $x$ and
$\dot P$ indices $\dot x$, as explained in the preceding
sections, with $n=3+P+\dot P$. The quaternionic and K\"ahler
spaces corresponding to $L(0,P,\dot P)$ were called $W(P,\dot P)$
and $K(P,\dot P)$ in \cite{Aleks} and \cite{Cecotti},
respectively. We shall denote the real spaces by $Y(P,\dot P)$.
The rank-2 spaces corresponding to $L(q,P)$ with $q>0$ have a
rank-4 quaternionic extension and a rank-3
K\"ahler extension, which were denoted by $V(P,q)$ and $H(P,q)$ in
\cite{Aleks} and \cite{Cecotti}, respectively. We shall denote
the corresponding real spaces by $X(P,q)$.
According to the classification of \cite{Aleks} and
\cite{Cecotti}, for $q=4m\geq 4$ one has precisely one
quaternionic and one K\"ahler space of given (allowed) dimension.
However, the
existence of inequivalent real representations of the Clifford algebra for
$q=4m$ implies the existence of inequivalent real, K\"ahler and
quaternionic spaces corresponding to $L(4m, P,\dot P)$. We
already encountered an example of the same phenomenon for $q=0$.
As follows from the above arguments quaternionic spaces originating from
special real spaces via special K\"ahler spaces have rank 3 or 4. But as
mentioned before, these do not constitute all possible homogeneous
quaternionic spaces. In fact, we know rank-2 symmetric quaternionic
spaces, which originate from special K\"ahler spaces, but not from real
spaces, and rank-1 symmetric quaternionic spaces that also have no
K\"ahler origin. We summarize these in table~\ref{homnonsp}.
\begin{table}[tb]
\begin{center}
\begin{tabular}{||c|c|c||c|c||}
\hline
real & K\"ahler & quaternionic & $n\!+\!1$& $R$ \\
\hline &&&&\\[-3mm]
& & SG &$0$&0\\[2mm]
& & $\frac{USp(2n+2,2)}{USp(2n+2)\otimes SU(2)} $
&$n+1\geq 0$&1 \\[2mm]
& SG &$\frac{U(1,2)}{U(1)\otimes U(2)} $
& 1&1\\[2mm]
&$\frac{U(n,1)}{U(n)\otimes U(1)}$
&$\frac{U(n+1,2)}{U(n+1)\otimes U(2)} $ & $n+1\geq 2$
&2 \\[2mm]
SG & $\frac{SU(1,1)}{U(1)}$
&$\frac{G_{2(+2)}}{SU(2)\otimes SU(2)}$ &2 &2\\[2mm]
\hline
\end{tabular}
\end{center}
\caption{Normal quaternionic spaces with rank $R\leq 2$ and
quaternionic dimension $n\!+\!1$ and the corresponding special
real and K\"ahler spaces (whenever they exist).}
\label{homnonsp}
\end{table}
\begin{table}[tb]
\begin{center}
\begin{tabular}{||l||c|c|c||c|c||}\hline
$C(h)$&real & K\"ahler & quaternionic & $R$& \\
\hline&&&&&\\[-3mm]
$L(-1,m-1)$&$\frac{SO(m,1)}{SO(m)}$& $\star$ & $\star$ &3&$m\geq 2$\\[2mm]
$L(-1,0)$&$SO(1,1)$&$\left[\frac{SU(1,1)}{U(1)}\right]^2$&$\frac{SO(3,4)}{(SU(2))^3}$&3&\\[2mm]
\hline&&&&&\\[-3mm]
$L(0,P,\dot P)$&$Y(P,\dot P)$&$K(P,\dot P)$&$W(P,\dot P)$&4&$P,\dot P\geq
0$\\[2mm] $L(q,P)$&$X(P,q)$&$H(P,q)$&$V(P,q)$&4&$P,q\geq 1$\\[2mm]
$L(4m,P,\dot P)$& $\star$ & $\star$& $\star$& 4&$m,P,\dot P\geq 1$\\[2mm]
\hline \end{tabular}
\end{center}
\caption{Homogeneous special real spaces with corresponding
K\"ahler and quaternionic spaces. Those that were discussed for the
first time in this paper are indicated by a $\star$. $R$ is the rank of the
quaternionic space. } \label{homsp}
\end{table}
The corresponding K\"ahler and real spaces have real and complex
dimension $n-1$ and $n$, and their rank is equal to $R-2$ and
$R-1$, respectively. Because of the low rank, only a real space
with zero rank can occur (which necessarily has zero dimension).
This corresponds precisely to the pure $N\!=\!2$ supergravity
theory in five space-time dimensions. In the table, this case is
represented by ``SG''. A similar situation occurs for $R=1$ and
$R=0$, where
the only possibility for a special K\"ahler and quaternionic space
corresponds to pure supergravity in four and three dimensions,
respectively. Hence none of the spaces discussed in the table are
related to the spaces classified in this paper. Observe that all
spaces in table~\ref{homnonsp} are symmetric. Together with the
homogeneous spaces resulting from the analysis of this paper,
which are summarized in table~\ref{homsp},
they constitute all the homogeneous quaternionic and special
K\"ahler spaces that are known.
A proof that this list contains all the symmetric special K\"ahler spaces is
given in \cite{CremVP}. The symmetric rank-4 quaternionic
spaces and their related special real and K\"ahler spaces correspond
to $L(0,P),\ L(1,1),\ L(2,1),\ L(4,1)$, and $L(8,1)$.
These tables show a remarkable pattern. We have the pure $N=2$ supergravity
theory in 3 dimensions (``the empty quaternionic space'') and the minimal
couplings: the quaternionic projective spaces. Then the remaining rank-1
quaternionic symmetric space is the one originating from pure $d=4$
supergravity (``the empty special K\"ahler space''). The minimal couplings of
vector multiplets in $d=4,\ N=2$ supergravity, the complex projective
spaces, are the origin of an infinite series of rank-2 quaternionic
spaces. The remaining rank-2 quaternionic space originates from pure $d=5$
supergravity (``empty real space''), while the real projective
spaces are the origin of an infinite series of rank-3 homogeneous
quaternionic spaces (as discussed before, the reduction does not
preserve the property that the space be symmetric). Seeing the
ensuing pattern, it looks as if the remaining rank-3 quaternionic
space should arise from the reduction of pure
$d=6$ supergravity. The rank-4 quaternionic spaces would then find their
origin in matter coupled $d=6$ supergravity. This is then also the last
step, because $d=6$ is the largest space-time dimension in which a
supergravity theory can exist with 8 independent supersymmetries
(corresponding to a $d=6$ spinor). These $d=6$ couplings would then be
characterized by
the possible real realizations of positive-definite Clifford
algebras (while $L(-1,0)$ corresponds to the "empty Clifford algebra").
This is in accord with a conjecture in \cite{Romans}
(cf. eq.(5.6)) where
$d=6,\ N=2$ tensor and vector multiplets are
incorporated in the field strength
\begin{equation}
F_{abc }^\mu =3\partial _{[a}A_{bc]}^\mu +\left( A^m\right) _{[a}
\left(\gamma^\mu \right) _{mn}\left( F^n\right) _{bc]},
\end{equation}
which leads to a coupling of $q+1$ tensor multiplets (with tensor
field $A^\mu _{ab}$ and field strength $F^\mu _{abc}$) to $r$
vector multiplets (with vectors $A^m_a$ and field strengths
$F^m_{ab}$).
In this paper we
presented a complete classification of the special real homogeneous
spaces with a transitive group of motions that leaves the polynomial
$C(h)$, and thus the corresponding $d=5$ supergravity theory, invariant.
Therefore we also obtained the corresponding classification for
the homogeneous special K\"ahler and quaternionic spaces
that are in the image of the {$\bf c$ map}\ and
the {${\bf c}{\scriptstyle\circ}{\bf r}$ map}. However, we expect that the tables~\ref{homnonsp} and
\ref{homsp} in fact comprise
all possible homogeneous quaternionic spaces.
This result should still follow from the analysis of
\cite{Aleks}, and we believe that the absence in \cite{Aleks} of the spaces
indicated by a $\star$ in table~\ref{homsp} is merely due to a
calculational error. The nice pattern described above lends support to our
conjecture that the classification of homogeneous quaternionic spaces is
now complete, as the new spaces exhibited above are precisely needed for
completing the overall picture.
\vspace{0.6cm}
\noindent{\large \bf Acknowledgments}\vspace{0.3cm}
We thank S.~Cecotti, R.~Coquereaux, A.S.~Galperin, V.I.~Ogievetsky,
W.~Troost and F.~Vanderseypen for valuable discussions.
This work covers the process of using real microcontrollers to test a data collection strategy based on Bluetooth Mesh \cite{bm51}. That data collection strategy (MAM) was described and simulated in past work \cite{winsys21mam} \cite{jv2022exploring}. MAM differs significantly from Bluetooth Mesh: instead of exclusively relying on a controlled flooding approach, MAM tries to establish a route towards the data collector (i.e., the Mobile-Hub). Despite this significant difference, we designed MAM as an extension of the Bluetooth Mesh specification, and we were able to implement it in ESP-32 firmware without needing to rewrite major parts of its Bluetooth-certified code.
The real-world results we obtained differed significantly from our past simulations. In some experiments the Mobile-Hub collected less than a third of the data predicted by the simulations, as we describe in more detail in section \ref{sec:outdoor}. This significant gap between the simulations and the real-world experiments is expected, since our simulation model did not account for the radio interference that is inherent to an urban environment. There are many past reports and discussions about performance evaluation in wireless ad-hoc networks and the gap between simulations and physical environments, such as \cite{7378437} (which analyzed 674 papers and discussed the failure of simulations to reproduce physical-environment conditions and the low reproducibility of experimental testbeds), \cite{1129592} (which discussed simulation issues for sensor networks and future directions for improving simulated MAC fidelity) and \cite{1298203} (which evaluated OMNET++ simulations and testbed experiments in the wireless networks domain and discussed testbed limitations).
This paper is organized as follows. Section \ref{sec:datacollection} describes the MAM data collection strategy, how it works and how it was simulated in past work. Section \ref{sec:esp32} discusses the microcontroller we chose to run our experiments - ESP-32 - and its Bluetooth Mesh implementation, as well as how we extended its firmware to support our requirements. Section \ref{sec:metrics} describes the metrics we had to implement in the microcontroller, the command and control features of the experiments, and some of the challenges we had to overcome during the implementation. Section \ref{sec:indoor} describes initial indoor experiments we performed and which firmware parameters we had to change, and presents the indoor experiment results we obtained. Finally, section \ref{sec:outdoor} describes the outdoor experiments, which were the main objective of our research: we present the results, compare them with the simulated results from previous work, and discuss additional challenges and pitfalls of the experiments.
We believe that our main contributions with these experiments are: 1) evaluating the gap between past work simulations and the real-world experiments we conducted; 2) presenting the challenges we experienced when designing and running the real world experiments and how we addressed them; 3) presenting a command and control approach for running Bluetooth Mesh performance experiments using ESP-32 microcontrollers that could be useful for future research with ESP-32 testbeds and real-world experiments.
\section{Related Work}
There are only a few works that present real-world Bluetooth Mesh experiments and discuss their findings. To the best of our knowledge, there is no past work that extended Bluetooth Mesh routing and evaluated the extension's performance with real-world experiments.
In \cite{Hortelano2021}, the authors described real-world indoor Bluetooth Mesh experiments using Nordic devices, a different platform from the one we used. Nodes were separated more than 20 meters apart, and the paper described some challenges of the Bluetooth Mesh provisioning process, proposing a more lightweight provisioning process that, according to the authors, is up to 36.56\% faster than Bluetooth Mesh provisioning. That work focused on the provisioning process, so its results are orthogonal to the experiments that we did. They suggested a real application test as future work. Testing their approach on the data collection problem described in \cite{jv2022exploring}, with the implementation we describe in this paper, could be an interesting development of this research.
\cite{9217675} describes the implementation of a Bluetooth Low Energy (BLE) routing approach for IoT environments using ESP-32 SoCs. That approach, however, does not use Bluetooth Mesh, although it does use the underlying BLE technology and the same SoC devices (ESP-32) that we used and present in this paper. Their tests were conducted exclusively on a testbed composed of seven ESP-32 devices separated 50 centimeters apart, whereas in our experiments, as described in section 7 and in Figure 4, we used 10 ESP-32 devices separated up to 11 meters apart. Our experiments also involved using batteries and provisioning devices in the Bluetooth Mesh network, since we did not have USB access at all times during the experiments, as we would in a testbed. We cover those additional challenges in sections \ref{sec:indoor} and \ref{sec:outdoor}.
When considering the challenges of wireless experiments with microcontrollers, there are surveys such as \cite{TONNEAU2015115} and \cite{6624996} that describe some of the challenges we experienced when conducting our experiments, such as the lack of proper monitoring tools throughout the experiment execution, detecting faulty and unreachable devices, and making experiments repeatable, as well as other practical challenges researchers might face in testbeds and when running tests in the wild. In section \ref{sec:outdoor} we describe many of the challenges we experienced when performing outdoor experiments.
\section{MAM data collection}
\label{sec:datacollection}
In \cite{jv2022exploring}, our research group discussed some of the challenges of sensor data collection and presented a Bluetooth Mesh extension named MAM as an alternative to Bluetooth Mesh's original routing behavior (BTM-R). MAM aims to optimize sensor data collection by establishing a route towards the data collector (Mobile-Hub) instead of relying on Bluetooth Mesh's controlled flooding approach. We simulated MAM and BTM-R using the OMNET++\footnote{https://omnetpp.org} discrete event simulator, which yielded interesting results, but we also concluded that practical experiments would further enrich the discussion about MAM and the usage of Bluetooth Mesh for data collection.
MAM's basic idea is to dynamically create routes from ground nodes of a Mesh network towards a data collector (Mobile-Hub) that passes by to collect data. To form those routes, MAM uses a simple cache for the best direction (next best node) to send data towards the Mobile-Hub. The MAM algorithm requires a recursively broadcast heartbeat that is sent periodically by the Mobile-Hub, so that nodes can choose the best neighbor towards the Mobile-Hub; this decision is taken according to the message TTL (the number of hops from the node to the Mobile-Hub) and to an expiration parameter $\Delta$ (after a $\Delta$ amount of time has passed since the best neighbor was set, the node will ignore the best known number of hops and set the best neighbor as the next node that forwards it a Mobile-Hub message). The expiration is needed because if the Mobile-Hub is constantly moving, the best route towards the Mobile-Hub in the Mesh network is not static.
Algorithm \ref{alg:bm} describes in pseudocode the Bluetooth Mesh Relay strategy (BTM-R), based on controlled flooding, and Algorithm \ref{alg:mam_delta} describes the MAM algorithm. This same pseudocode, as well as a more detailed explanation of the MAM/BTM-R algorithms, was presented in \cite{jv2022exploring}.
\begin{algorithm}
\caption{BTMesh Relay (BTM-R)}
\label{alg:bm}
\begin{algorithmic}[1]
\renewcommand{\algorithmicrequire}{\textbf{Input:}}
\renewcommand{\algorithmicensure}{\textbf{Output:}}
\REQUIRE senderAddress, messageHops, messageBody
\STATE byte hash $\leftarrow$ hashMessage(messageBody)
\STATE bool recentlyRelayed $\leftarrow$ isInLRUCache(hash)
\IF {(recentlyRelayed == true \OR messageHops > 126)}
\RETURN
\ENDIF
\STATE hops $\leftarrow$ messageHops + 1
\STATE broadcastMessage(messageBody, hops)
\end{algorithmic}
\end{algorithm}
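The duplicate-suppression step of BTM-R (the hashMessage and isInLRUCache calls of Algorithm \ref{alg:bm}) can be illustrated with a small self-contained C sketch. The ring-buffer cache and the FNV-1a hash below are illustrative choices on our part; the Bluetooth Mesh specification defines its own network message cache, and the actual firmware implementation differs.

```c
#include <stdint.h>
#include <stddef.h>

/* Illustrative stand-ins for the hashMessage() and isInLRUCache() calls of
 * Algorithm 1. The Bluetooth Mesh specification defines its own network
 * message cache; the ring buffer and FNV-1a hash below are our choices for
 * a self-contained sketch, not the actual firmware implementation. */
#define CACHE_SLOTS 16

typedef struct {
    uint32_t hashes[CACHE_SLOTS];
    size_t   next;   /* oldest slot, overwritten first */
    size_t   count;  /* number of filled slots, at most CACHE_SLOTS */
} msg_cache_t;

/* 32-bit FNV-1a hash of the message body. */
uint32_t msg_hash(const uint8_t *body, size_t len)
{
    uint32_t h = 2166136261u;
    for (size_t i = 0; i < len; i++) {
        h ^= body[i];
        h *= 16777619u;
    }
    return h;
}

/* Returns 1 if the hash is already cached (message was recently relayed);
 * otherwise inserts it, evicting the oldest entry, and returns 0. */
int cache_check_and_insert(msg_cache_t *c, uint32_t h)
{
    for (size_t i = 0; i < c->count; i++)
        if (c->hashes[i] == h)
            return 1;
    c->hashes[c->next] = h;
    c->next = (c->next + 1) % CACHE_SLOTS;
    if (c->count < CACHE_SLOTS)
        c->count++;
    return 0;
}
```

A relay node would call cache_check_and_insert() on every received message and rebroadcast only when it returns 0, which is what keeps the flooding of Algorithm \ref{alg:bm} controlled.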
\begin{algorithm}
\caption{$MAM_{\Delta}$ - Reactive least-hop route}
\label{alg:mam_delta}
\begin{algorithmic}[1]
\renewcommand{\algorithmicrequire}{\textbf{Input:}}
\renewcommand{\algorithmicensure}{\textbf{Init:}}
\ENSURE bestNodeAddress $\leftarrow$ NULL, bestNodeHops $\leftarrow$ 0, expiry $\leftarrow$ 0
\REQUIRE senderAddress, messageHops, messageBody
\IF {(isDiscoveryMessage(messageBody) == false)}
\IF {(bestNodeAddress != NULL)}
\STATE hops $\leftarrow$ messageHops + 1
\STATE sendMessage(bestNodeAddress, messageBody, hops)
\ENDIF
\RETURN
\ENDIF
\STATE bool expired $\leftarrow$ NOW() > expiry
\IF {($expired$ == true \OR $messageHops < bestNodeHops$)}
\STATE bestNodeAddress $\leftarrow$ senderAddress
\STATE bestNodeHops $\leftarrow$ messageHops
\STATE $expiry \leftarrow NOW() + \Delta$
\ENDIF
\STATE bluetoothMeshRelay(senderAddress, messageHops, messageBody)
\end{algorithmic}
\end{algorithm}
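As a concrete illustration of the cache kept by $MAM_{\Delta}$, the state update performed on each discovery (heartbeat) message can be written as the following self-contained C sketch; the type and function names are ours, not the actual firmware symbols, and time is represented by a caller-supplied millisecond counter rather than a real clock.

```c
#include <stdint.h>

/* Sketch of the best-neighbor cache kept by MAM (Algorithm 2). Names are
 * illustrative and not the actual firmware symbols. */
typedef struct {
    uint16_t best_addr;  /* mesh unicast address of the best known neighbor */
    uint8_t  best_hops;  /* hops from that neighbor to the Mobile-Hub */
    uint32_t expiry_ms;  /* entry is stale once now_ms exceeds this value */
    int      has_best;   /* 0 until the first discovery message is seen */
} mam_cache_t;

/* Handle a Mobile-Hub discovery (heartbeat) message, following the expiry
 * test of Algorithm 2. Returns 1 if the best neighbor was (re)set. */
int mam_on_discovery(mam_cache_t *c, uint16_t sender, uint8_t hops,
                     uint32_t now_ms, uint32_t delta_ms)
{
    int expired = (c->has_best == 0) || (now_ms > c->expiry_ms);
    if (expired || hops < c->best_hops) {
        c->best_addr = sender;
        c->best_hops = hops;
        c->expiry_ms = now_ms + delta_ms;
        c->has_best  = 1;
        return 1;
    }
    return 0;
}
```

Non-discovery data messages are then simply forwarded to best_addr whenever has_best is set, as in the first branch of Algorithm \ref{alg:mam_delta}.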
The MAM simulations involved different sets of maps and topologies, and measured unique packets received, duplicate packets received, delivery rate, energy consumption (in Joules) and efficiency (in bytes per Joule).
The results of the MAM simulations indicated that MAM is in some cases a better routing approach than Bluetooth Mesh's own routing approach (BTM-R). BTM-R does not establish routes; it is based on a controlled flooding approach. The number of unique packets delivered by the MAM algorithm was significantly higher than with BTM-R in scenarios where relay node density was high. In those scenarios, we also observed significant increases in energy efficiency (more than 6.61\% savings with the Delta parameter value we chose for the ESP-32 experiments, $\Delta=100$; we chose this specific value for comparison because it showed the most promising simulation results in terms of unique data collection and energy consumption, as described in \cite{jv2022exploring}).
\section{ESP-32 and Bluetooth Mesh}
\label{sec:esp32}
As a continuation of the MAM's development, we decided to start looking into experiments with real mesh nodes that supported Bluetooth Mesh. We chose the Espressif's ESP-32 platform for the experiments we present and discuss on this paper, because it supports Bluetooth Mesh and our research group already had some experience with it. Figure \ref{fig:espBed} shows ESP-32 nodes powered on before one of the indoor experiments we conducted, with labels to help us identify them.
The ESP-32 platform is certified by the Bluetooth SIG\footnote{Bluetooth Special Interest Group, a non-profit organization that oversees Bluetooth's development.} for the Bluetooth Mesh specification, and its Bluetooth Mesh implementation is open source. However, at the time of this work, documentation on Bluetooth Mesh usage was scarce. We spent around three weeks experimenting with the platform, and had to request Espressif support to discover how to send one simple string from one node to another using Bluetooth Mesh. It took the authors close to two months of research and development to produce an extended version of the esp-idf firmware that includes the MAM algorithm.
Bluetooth Mesh incorporates security mechanisms on exchanged packets, and we decided to avoid dealing with this extra layer of complexity when extending it for the MAM algorithm. For certain MAM commands, we simply reused existing Bluetooth Mesh opcodes, so that we would not need to make profound code changes.
\begin{figure}
\centering
\includegraphics[width=0.7\columnwidth]{images/esp-1.PNG}
\caption{ESP-32 nodes powered on before an indoor test.}
\label{fig:espBed}
\end{figure}
\section{Microcontroller metrics}
\label{sec:metrics}
In order to establish a fair comparison between the simulated experiments and the real world tests, we implemented the same metrics we used to evaluate MAM in our past work. Implementing the metrics on the ESP-32 platform involved a great amount of work, since, to the best of our knowledge, the esp-idf platform did not provide any native support for collecting metrics. We also had to deal with issues such as unexpected restarts, firmware updates, and simulation commands. The main metrics we implemented and used for comparison were the duplicate packet count and unique packet count. We did not implement battery consumption metrics since this was not easily available through the device's API.
First, we needed a node identifier that was easy to inspect visually and to tag on each device. With that constraint, MAC addresses were discarded (it would not be simple to inspect them visually and write them on labels placed on the microcontrollers). Instead, we chose sequential ids for each node, which simplified labeling both physically, with the labels, and virtually, when drawing representations (``maps'') of where the nodes were placed. To assign node ids, we used esp-idf's non-volatile memory API, and flashed each node manually, setting its unique sequential id.
The total-packets-received metric was easy to implement with a counter (an 8-byte long value) at the Mobile-Hub node, incremented every time a sensor message was processed. However, differentiating between duplicate and unique packets could not be done with simple counters and required a more complex data structure.
Our approach for differentiating between unique and duplicate messages was to start with a simple, naïve and inefficient solution: a hash map. This structure mapped unique message ids to a counter. Every time a message was received, a map entry was either created or, if the message had been received before, updated. Of course, this has a significant memory footprint for a microcontroller with a limited amount of RAM, which later turned out to be a problem in outdoor experiments.
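The naïve approach can be sketched in a few lines. Python is used here for brevity; the firmware version was written in C against esp-idf, and the names below are ours.

```python
class MetricsTracker:
    """Naive unique/duplicate packet accounting via a hash map
    (message id -> number of times received)."""

    def __init__(self):
        self.seen = {}   # one entry per unique message id
        self.total = 0   # total packets processed

    def record(self, message_id):
        self.total += 1
        self.seen[message_id] = self.seen.get(message_id, 0) + 1

    @property
    def unique(self):
        return len(self.seen)

    @property
    def duplicates(self):
        return self.total - self.unique
```

Note that memory grows with one map entry per unique message id, which is exactly the footprint problem discussed above.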
Besides implementing the metrics, we had to develop an approach to control the experiments: send routing parameters, start the experiment, collect the metrics, and reset it, without having to re-flash each node, which would take a significant amount of time (re-flashing takes around one minute per node).
ESP-32 has Over-The-Air (OTA) update capabilities using Wi-Fi, however, due to network and memory constraints we decided to not use them. Instead, we implemented a CLI (command-line interface) for a commander node, which has simulation administration capabilities, with commands that are sent using Bluetooth Mesh. The commander node's role is to send command and control information, serving as a bridge between a computer with an interface to send commands and the experiment's nodes. Figure \ref{fig:experimentArch} illustrates the experiments' node types.
\begin{figure}
\centering
\includegraphics[width=0.8\columnwidth]{images/MESHEXPERIMENTS.PNG}
\caption{ESP-32 experiment nodes.}
\label{fig:experimentArch}
\end{figure}
The commander node CLI has the following commands that can be sent via telnet:
\begin{itemize}
\item {\verb|sim-reset|}: resets the simulation statistics on all nodes.
\item {\verb|set-mam|}: sets MAM as the configured relay algorithm on all nodes.
\item {\verb|set-btmr|}: sets BTM-R as the configured relay algorithm on all nodes.
\item {\verb|sim-stats|}: orders all nodes to send their statistics to the Mobile-Hub; the Mobile-Hub prints all received statistics as well as its own statistics.
\item {\verb|reboot|}: restarts the commander node.
\end{itemize}
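Conceptually, the commander-side dispatch is just a table lookup. The sketch below is illustrative only: the handler names and the return convention are ours, and in the firmware each handler broadcasts the corresponding command over Bluetooth Mesh instead of returning text.

```python
def make_commander(handlers):
    """Build a dispatcher for commander-node CLI commands.

    `handlers` maps a command name (e.g. 'sim-reset') to a
    zero-argument callable executed when that command arrives.
    """
    def dispatch(line):
        cmd = line.strip()
        if cmd not in handlers:
            return "unknown command: " + cmd
        return handlers[cmd]()
    return dispatch
```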
\section{Indoor experiments}
\label{sec:indoor}
During development, we kept ESP-32 nodes connected to the computer, and flashed them when we needed to verify behavior. Once we completed the MAM ESP-32 implementation, metrics and management features, we started indoor experiments.
For indoor experiments, we placed 10 nodes in an apartment building, which had a lot of interference on the 2.4GHz band, used by Bluetooth Mesh, due to Wi-Fi routers from the apartment complex and its surroundings.
The initial experiments indicated plenty of room for improvement in our tests. In the very first test with more than five nodes, the messages sent by the nodes quickly resulted in transmission errors, halting the entire test. The node logs indicated the need to increase the size of the transmission buffers (an esp-idf configuration parameter). We increased those buffers by roughly 3.33 times (from the default of 60 to 200), and used an option to separate relay buffers from regular buffers, with a size of 100 (see Table \ref{tab:espidf_params}, parameters $RELAY\_ADV\_BUF$, $ADV\_BUF\_COUNT$ and $RELAY\_ADV\_BUF\_COUNT$). Increasing buffers meant increasing memory usage (in total, allocating five times more memory for advertising buffers), but was required for our tests.
After increasing the transmission buffers, the experiments ran for one hour and thirty minutes, until the nodes ran out of battery, without any errors that would halt communication. The logs were quite polluted by warnings, which motivated us to reduce the log level to filter warnings out (making it easier to spot severe problems when inspecting node logs).
\begin{table}
\caption{ESP-IDF firmware configuration}
\centering
\begin{tabular}{lll}
\toprule
Parameter Name (prefix: $CONFIG\_BLE\_MESH\_$) & Default Value & Changed Value \\
\midrule
$RX\_SEG\_MSG\_COUNT$ & 10 & 20 \\
$WAIT\_FOR\_PROV\_MAX\_DEV\_NUM$ & 10 & 14 \\
$MAX\_PROV\_NODES$ & 10 & 14 \\
$MSG\_CACHE\_SIZE$ & 10 & 20 \\
$ADV\_BUF\_COUNT$ & 60 & 200 \\
$RELAY\_ADV\_BUF$ & n & y \\
$RELAY\_ADV\_BUF\_COUNT$ & 0 & 100 \\
\bottomrule
\end{tabular}
\label{tab:espidf_params}
\end{table}
Table \ref{tab:espidf_params} contains the full list of ESP-32 parameters we used for our tests. We chose those parameters based on trial and error, during the phase of implementation of MAM in the ESP-32.
Another challenge when deploying Bluetooth Mesh nodes is how to provision them (that is, add each node to a network so that the nodes can exchange public keys and communicate securely). Provisioning requires a node to be in an unprovisioned state, and another node (the provisioner) must be within reach. When provisioned, the node must store its provisioning data in memory. If the provisioned node reboots but starts up without the provisioning data, it will be in an unprovisioned state and won't be able to join the network again until re-provisioned (hence, until the provisioner is again within reach).
The esp-idf platform provides an option to persist the provisioning data in non-volatile memory; however, with this option enabled under our configuration, the nodes ran out of memory less than one minute after the simulation started and communication stopped. So, we had to run all our tests, both indoor and outdoor, without this capability.
\begin{table}
\caption{MAM Indoor Experiment Results using ESP-32s}
\centering
\begin{tabular}{llll}
\toprule
Algorithm & Total Time (minutes) & Unique Data Received (count) & Duplicate Data Received (count) \\
\midrule
BTM-R & 15 & 617 & 90 \\
MAM & 15 & 632 & 106 \\
BTM-R & 15 & 532 & 80 \\
MAM & 15 & 497 & 78 \\
BTM-R & 15 & 279 & 46 \\
MAM & 15 & 359 & 42 \\
BTM-R & 15 & 495 & 60 \\
MAM & 15 & 745 & 119 \\
BTM-R & 30 & 1301 & 391 \\
MAM & 30 & 1354 & 798 \\
BTM-R & 60 & 1532 & 142 \\
MAM & 60 & 2682 & 328 \\
\bottomrule
\end{tabular}
\label{tab:indoor_tests}
\end{table}
Table \ref{tab:indoor_tests} contains the results of the indoor experiments we performed. Figure \ref{fig:indoormap} illustrates the nodes' positions in a three-bedroom apartment. In 3 out of 4 of the 15-minute tests the Mobile-Hub collected more unique data messages with the MAM algorithm than with BTM-R. However, tests using MAM also collected duplicate packets, which was not expected for the MAM algorithm, and MAM collected more duplicate packets than BTM-R in 2 out of 4 of the 15-minute tests. In the longer tests (30 and 60 minutes), the Mobile-Hub collected more unique data packets with MAM, up to +75\% more in the 60-minute test; it also collected more duplicate packets than with BTM-R. We believe the duplicate packets under the MAM algorithm might be due to an implementation error in our extension of the esp-idf firmware, but since we were able to run the experiments without errors that halted communication, we decided to proceed with the outdoor experiments, which we describe in the next section.
\section{Outdoor experiments}
\label{sec:outdoor}
We also conducted initial outdoor experiments to measure the maximum distance at which two ESP-32 microcontrollers can communicate using Bluetooth Mesh. We placed the devices on the ground and kept moving one away from the other until the signal was lost (messages stopped being received). Then we moved them slightly closer, just enough to start receiving messages again, and wrote down the distance between them each time we ran the experiment (varying the nodes' antenna alignment). Those tests indicated that two nodes placed on the ground could not communicate at a distance greater than 6 meters. When we placed the nodes 11 centimeters above the ground, they could communicate up to 40 m apart with both antennas facing each other, and up to 32 m with the antennas in opposite directions.
After the initial tests, we placed 10 devices around an urban street (with only two-floor houses), positioned on small walls or on the ground, respecting the 6 m distance limit when placed on the ground and the 40 m distance when the antennas faced each other 11 cm above the ground. Figure \ref{fig:outdoormap} indicates how we placed the nodes in these tests.
\begin{table}
\caption{MAM Outdoor Experiment Results using ESP-32s - average of 3 runs}
\centering
\begin{tabular}{llll}
\toprule
Algorithm & Minutes & Unique Data Received (count) & Duplicate Data Received (count) \\
\midrule
BTM-R & 5 & 477.00 (stdev=9.54) & 175.67 (stdev=20.03) \\
MAM & 5 & 484.67 (stdev=60.29) & 180.33 (stdev=41.53) \\
BTM-R & 10 & 938.33 (stdev=17.62) & 296.33 (stdev=126.88) \\
MAM & 10 & 996.00 (stdev=137.98) & 370.33 (stdev=60.62) \\
BTM-R & 15 & 1395.33 (stdev=14.29) & 534.67 (stdev=13.61) \\
MAM & 15 & 1458.00 (stdev=144.21) & 536.67 (stdev=72.28) \\
\bottomrule
\end{tabular}
\label{tab:outdoor_tests_avg}
\end{table}
\begin{table}
\caption{MAM Simulated Experiment Results using OMNET++}
\centering
\begin{tabular}{llll}
\toprule
Algorithm & Minutes & Unique data received (count) & Energy Draw (Joules) \\
\midrule
BTM-R & 3.33 & \textbf{2992} & 104.30 \\
MAM & 3.33 & 1498 & 24.99 \\
\bottomrule
\end{tabular}
\label{tab:omnet_simulated_test_results}
\end{table}
\begin{table}
\caption{MAM Simulated and "Scaled" (rule of three) Outdoor Experiment Results}
\centering
\begin{tabular}{lllll}
\toprule
Algorithm & Type & Minutes & Unique data received (count) \\
\midrule
BTM-R & Simulated & 3.33 & \textbf{2992} \\
MAM & Simulated & 3.33 & 1498 \\
BTM-R & Outdoor 5min & 3.33 & 317.68 \\
MAM & Outdoor 5min & 3.33 & 322.79 \\
BTM-R & Outdoor 10min & 3.33 & 312.46 \\
MAM & Outdoor 10min & 3.33 & 331.68 \\
BTM-R & Outdoor 15min & 3.33 & 309.76 \\
MAM & Outdoor 15min & 3.33 & 323.67 \\
BTM-R & Outdoor (avg) & 3.33 & 313.30 \\
MAM & Outdoor (avg) & 3.33 & 326.05 \\
\bottomrule
\end{tabular}
\label{tab:all_scaled_results}
\end{table}
Table \ref{tab:outdoor_tests_avg} contains the results of those tests. We can see that, in some situations, nodes executing the MAM algorithm collected more unique messages than the corresponding runs using BTM-R.
\subsection{Simulation vs ESP-32 outdoor experiments comparison}
In Table \ref{tab:all_scaled_results} we show the count of unique packets collected in the simulated data (tested using OMNET++, from Table \ref{tab:omnet_simulated_test_results}) and in the outdoor experiments (tested using ESP-32 microcontrollers), scaled to the same duration as the OMNET++ simulations. The scaling uses a simple rule of three. We chose to scale those results to provide a fairer comparison, since in the ESP-32 tests we varied the time intervals (5/10/15 minutes) whereas in the OMNET++ simulations we used a fixed time interval of 3.33 minutes.
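The scaling itself is elementary; for concreteness (the function name is ours):

```python
def scale_count(count, measured_minutes, target_minutes=3.33):
    """Rule-of-three scaling: project a packet count measured over
    `measured_minutes` onto a run of `target_minutes`
    (3.33 minutes being the fixed OMNET++ interval)."""
    return count * target_minutes / measured_minutes
```

For example, the 5-minute BTM-R average of 477.00 unique packets scales to $477.00 \cdot 3.33 / 5 \approx 317.68$, as reported in Table \ref{tab:all_scaled_results}.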
It is possible to notice that the count of messages with unique data received is similar among the outdoor 5/10/15-minute ESP-32 tests (BTM-R averaging 313.30 messages and MAM averaging 326.05 messages). Those values are far lower than the simulation results: the BTM-R simulation yielded 2992 messages (roughly 9.5 times the outdoor average) and the MAM simulation 1498 messages (roughly 4.6 times).
Figure \ref{fig:all_scaled_results} shows the same results from Table \ref{tab:all_scaled_results}, scaling the simulation results to the different tested intervals. While in the simulation MAM collected fewer unique packets than BTM-R (1498 vs.\ 2992, about 50\% fewer), in the ESP-32 results MAM collected more unique packets than BTM-R (326.05 vs.\ 313.30, about 4\% more).
\begin{figure}
\centering
\subcaptionbox{Accumulated unique data (big is better)}{
\begin{tikzpicture}
\centering
\begin{axis}[
ybar,
enlarge x limits={abs=3*\pgfplotbarwidth},
axis on top,
title={Accumulated unique data},
ymin=30,
enlarge y limits={value=.1,upper},
axis x line*=bottom,
axis y line*=left,
tickwidth=0pt,
legend style={
at={(0.5,-0.1)},
anchor=north,
legend columns=2,
/tikz/every even column/.append style={column sep=0.5cm}
},
ylabel={Number of messages},
symbolic x coords={5min,10min,15min},
xtick=data,
nodes near coords={
\pgfmathprintnumber[precision=0]{\pgfplotspointmeta}
}
]
\addplot [draw = blue, semithick, pattern = dots,pattern color = blue] coordinates {
(5min,484)
(10min,996)
(15min,1458)};
\addlegendentry{MAM on ESP-32}
\addplot [draw = blue, pattern = north east lines, pattern color = blue, fill = blue!60] coordinates {
(5min,477)
(10min,938)
(15min,1395)};
\addlegendentry{BTM-R on ESP-32}
\end{axis}
\end{tikzpicture}
\label{fig:result_real_accumulated}
}
\subcaptionbox{Accumulated duplicate data (small is better)}{
\begin{tikzpicture}
\centering
\begin{axis}[
ybar,
enlarge x limits={abs=3*\pgfplotbarwidth},
axis on top,
title={Accumulated duplicate data},
ymin=30,
enlarge y limits={value=.1,upper},
axis x line*=bottom,
axis y line*=left,
tickwidth=0pt,
legend style={
at={(0.5,-0.1)},
anchor=north,
legend columns=2,
/tikz/every even column/.append style={column sep=0.5cm}
},
ylabel={Number of messages},
symbolic x coords={5min,10min,15min},
xtick=data,
nodes near coords={
\pgfmathprintnumber[precision=0]{\pgfplotspointmeta}
}
]
\addplot [draw = blue, semithick, pattern = dots,pattern color = blue] coordinates {
(5min,180)
(10min,370)
(15min,536)};
\addlegendentry{MAM on ESP-32}
\addplot [draw = blue, pattern = north east lines, pattern color = blue, fill = blue!60] coordinates {
(5min,175)
(10min,296)
(15min,534)};
\addlegendentry{BTM-R on ESP-32}
\end{axis}
\end{tikzpicture}
\label{fig:result_real_duplicates}
}
\caption{Accumulated messages per time interval.}
\end{figure}
\subsection{Additional challenges and Pitfalls}
Due to Bluetooth Mesh's provisioning requirement, we had to provision every ESP-32 node that would take part in each test run/experiment. Provisioning required us to move each node close to the provisioner node, which, in our case, had to be connected to a laptop for logging/debugging purposes.
As mentioned in Section \ref{sec:indoor}, we were not successful in enabling provisioning information persistence (an ESP-32/ESP-IDF firmware feature). Hence, we had to provision each device every time it had to be rebooted. This was a significant burden for indoor experiments and especially for outdoor experiments, in which node distances were greater and node positioning was more complex. Using a crayon, we marked the nodes' positions so that we could safely remove them for battery recharging, reflashing and other operations we had to perform on each day of implementation and experiments. Since we marked the nodes' positions, we could put them back in exactly the same positions as before, making the experiments easier to compare. Nodes also had to be removed due to weather (ESP-32s are not waterproof and we did not have proper waterproof casing). Battery recharging could take several hours, with limited availability of chargers and USB ports in the field; acquiring extra batteries and chargers was a simple way to reduce the time needed to reset simulations after nodes ran out of battery.
By default, an ESP-32 provisioner can only provision up to 10 nodes. This is configured by setting the parameter $MAX\_PROV\_NODES$ (shown in Table \ref{tab:espidf_params}). We had to increase that value to 14 in order to provision all the nodes we needed (1 Mobile-Hub + 1 commander node + 10 nodes + 2 extra slots in case we had to provision the Mobile-Hub or the commander node again).
Another hassle was to identify, during each test run, if the network was connected (if all nodes could reach each other) and if a node restarted during the experiment. Checking for restarts was important since a restart would invalidate the simulation (and potentially interrupt the node's communication with the network if the provisioning data was lost). We implemented a restart counter in ESP-32's non-volatile memory, and manually verified that counter during each simulation start/end, as well as other counters such as the number of messages received/sent by each node.
Verifying node reachability and positioning the nodes (and their antennas) so that the experiment ran with all nodes connected was very labor-intensive and proved to be a challenge. We believe that a tool that could analyze connectivity in a Bluetooth Mesh network using ESP-32s would have been very helpful.
A tool able to make those verifications automatically over the air would have reduced the time needed to validate simulations. A possible way to implement such a tool would be to send a heartbeat packet that gets recursively propagated and have each node (identified by its known unique node id) send an acknowledgment. The tool would wait a few seconds until all nodes acknowledged, and display the information to the user, confirming (or not) the simulation setup before it starts and after it ends.
Another challenge we experienced in outdoor experiments was due to the low amount of random access memory available on the ESP-32 (not a problem exclusive to the ESP-32, which actually has more RAM than many less powerful microcontrollers). Our implementation for tracking unique and duplicate packets consisted of a simple hash map storing message ids, and that worked well in indoor settings, where interference severely degraded the nodes' ability to send/receive data. In outdoor experiments, where message throughput was significantly higher (more than twice as many messages received), our metrics tracking implementation did not hold up: in a 30-minute test it already exceeded the maximum number of entries we could add to the hash map and ran out of memory. Our implementation did not warn us about the node running out of memory, and it took us a long time to identify the issue. We avoided it by reducing the experiment time to 15 minutes. As future work, we plan to implement a more efficient data structure for tracking unique/duplicate packets. Since message ids are sequential by node id and sequence number, such a structure could track the contiguous intervals in which messages were received or missing, and clean up entries as new contiguous intervals are formed; the literature also offers space-efficient and probabilistic data structures for this type of problem. However, this is out of the scope of the current work.
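To illustrate the interval idea (a sketch of the proposed direction, not the planned firmware implementation), duplicate detection over sequential ids can store merged intervals instead of one map entry per id, so that in-order traffic collapses to a single entry:

```python
import bisect

class IntervalTracker:
    """Duplicate detection for sequential message ids using
    sorted, disjoint (lo, hi) intervals of already-seen ids."""

    def __init__(self):
        self.intervals = []  # sorted, disjoint (lo, hi) tuples

    def record(self, i):
        """Return True if id `i` is new, False if it is a duplicate."""
        iv = self.intervals
        # index of the last interval whose lower bound is <= i
        k = bisect.bisect_right(iv, (i, float("inf"))) - 1
        if k >= 0 and iv[k][0] <= i <= iv[k][1]:
            return False  # already seen
        touches_left = k >= 0 and iv[k][1] == i - 1
        touches_right = k + 1 < len(iv) and iv[k + 1][0] == i + 1
        if touches_left and touches_right:
            iv[k] = (iv[k][0], iv[k + 1][1])  # bridge the gap
            del iv[k + 1]
        elif touches_left:
            iv[k] = (iv[k][0], i)
        elif touches_right:
            iv[k + 1] = (i, iv[k + 1][1])
        else:
            iv.insert(k + 1, (i, i))
        return True
```

Ids received mostly in order keep the interval list tiny regardless of how many messages arrive, which is the property the hash map lacked.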
\begin{figure}
\centering
\begin{tikzpicture}
\centering
\begin{axis}[
ybar,
enlarge x limits={abs=3*\pgfplotbarwidth},
axis on top,
title={Accumulated unique data},
ymin=30,
enlarge y limits={value=.1,upper},
axis x line*=bottom,
axis y line*=left,
tickwidth=0pt,
legend style={
at={(0.5,-0.1)},
anchor=north,
legend columns=2,
/tikz/every even column/.append style={column sep=0.5cm}
},
ylabel={Number of messages},
symbolic x coords={5min,10min,15min},
xtick=data,
nodes near coords={
\pgfmathprintnumber[precision=0]{\pgfplotspointmeta}
}
]
\addplot [draw = blue, semithick, pattern = dots,pattern color = blue] coordinates {
(5min,484)
(10min,996)
(15min,1458)};
\addlegendentry{MAM on ESP-32}
\addplot [draw = blue, pattern = north east lines, pattern color = blue, fill = blue!60] coordinates {
(5min,477)
(10min,938)
(15min,1395)};
\addlegendentry{BTM-R on ESP-32}
\addplot [draw = red, semithick, pattern = dots,pattern color = red] coordinates {
(5min,2249.24)
(10min,4498.49)
(15min,6747.74)};
\addlegendentry{MAM on OMNET++}
\addplot [draw = red, pattern = north east lines, pattern color = red, fill = red!60] coordinates {
(5min,4492.49)
(10min,8984.98)
(15min,13477.47)};
\addlegendentry{BTM-R on OMNET++}
\end{axis}
\end{tikzpicture}
\caption{Unique messages per time interval (big is better).}
\label{fig:all_scaled_results}
\end{figure}
\begin{figure}
\centering
\includegraphics[width=0.5\columnwidth]{images/indoor.drawio.png}
\caption{Indoor node placement.}
\label{fig:indoormap}
\end{figure}
\begin{figure}
\centering
\includegraphics[width=0.6\columnwidth]{images/outdoor.drawio.png}
\caption{Outdoor node placement.}
\label{fig:outdoormap}
\end{figure}
\section{Conclusion}
\label{sec:conclusion}
We implemented a Bluetooth Mesh data collection strategy and deployed it in indoor and outdoor settings, using ESP-32 microcontrollers. This data collection strategy also covered an alternative packet routing strategy - MAM - already discussed and simulated in previous work using the OMNET++ simulator. We compared the ESP-32 experiments with the past simulated data, and the results differed significantly: for MAM, the simulation collected roughly 4.6 times as many unique messages as we obtained with the ESP-32 deployment.
Based on those results, we also identified vast room for improvement in our ESP-32 implementation for future work, including solving an unexpected packet duplication in the MAM algorithm implementation and implementing a tool to verify node reachability and if any nodes restarted.
Even though our experiments have apparent flaws (such as unexpected duplicate packets in the MAM configuration), MAM performed better than Bluetooth Mesh's default relay strategy, with up to +4.06\% more (unique) data messages collected. Our research team learned important lessons about field testing, which was a new experience for us, mostly used to conducting software simulations and indoor experiments.
Summarizing, some of the lessons learned were: mark the position of each node (and where its antenna points) so that nodes can easily be put back after being removed for charging or reconfiguration; have extra batteries to minimize test disruption while batteries recharge; make as many settings as possible runtime parameters, to minimize having to reflash each node (which may take some time - in our experiments, one minute per node); and consider the memory footprint of the metrics implementation, testing the metrics with a high number of exchanged messages to avoid unexpected out-of-memory errors in outdoor experiments.
We hope the discussion of the challenges we experienced when implementing, deploying and running benchmarks using Bluetooth Mesh and the ESP-32 platform is helpful for future experiments with Bluetooth Mesh. Our research group plans to conduct further improvements to the ESP-32 implementation, as well as to build new tools to reduce testing time and make setup/verification easier. All of the firmware source code changes we made are open source, available in our esp-idf fork at \url{https://github.com/marcelopaulon/esp-idf}.
\bibliographystyle{unsrt}
\section{Elliptic knots and links of degree 6}
\newcommand{\Ch}{\operatorname{Ch}}
\begin{thm}\label{thm-g1t}
There are 16 topological isotopy types (homeomorphism classes
of pairs $({\mathbb R}{\mathbb P}^3,\mathbb R L)$)
of elliptic (genus 1) smooth real algebraic curves $K$ of
degree 6
in the projective space ${\mathbb R}{\mathbb P}^3$ in the case
when $\mathbb R K$ is a two-component link.
Namely, Figure \ref{deg6l} lists the links.
There are 4 such types in the case when $\mathbb R K$ is
connected. Namely, the types $K_1,K_3,K_5,K_6$
from Figure \ref{deg6c} are realizable by
smooth elliptic sextic curves in $\mathbb P^3$.
\end{thm}
\begin{thm}\label{thm-g1r}
There are 40 rigid isotopy types of elliptic (genus 1) real algebraic curves of degree 6
embedded in the projective space ${\mathbb R}{\mathbb P}^3$ in the case when $\mathbb R K$ is a two-component
link.
Namely, each curve depicted on Figure \ref{deg6l} enhanced with
a choice of the listed value for $w_\lambda$ gives rise to
one rigid isotopy class of $\mathbb R K$ in the depicted link type.
Furthermore, simultaneous reflection of $\mathbb R K$ in ${\mathbb R}{\mathbb P}^3$ and changing
the sign of $w_\lambda$ gives a new rigid isotopy type, except
when $\mathbb R K$ is a
topologically trivial link (the first case
in Figure \ref{deg6l}).
There are 12 such types in the case when $\mathbb R K$ is
connected. Namely, we may have $w_\lambda=\pm 3,\pm 1$
for the types $K_1$; $w_\lambda=3$ for
$K_3$; $w_\lambda=1,3$ for $K_5$; and $w_\lambda=1$ for $K_6$
(see Figure \ref{deg6c}).
\end{thm}
\begin{figure}[h]
\includegraphics[width=100mm]{d6g1.eps}
\caption{
Two-component real algebraic links of degree 6 and genus 1.
\label{deg6l}}
\end{figure}
\subsection{Binodal planar elliptic quartics}
To prove Theorems \ref{thm-g1t} and \ref{thm-g1r}
we use a technique similar to that developed
in the previous section with some modifications.
Lemma \ref{rkprime} and Propositions
\ref{pirkprime} and \ref{reconstruct-rkprime}
reduce the proof to the study of nodal diagrams
$(C,D)$, where $C$ is a nodal elliptic curve
of degree 4 while $D=D^+-D^-$ is linearly
equivalent to a hyperplane section of $C$.
Namely, the nodal diagram $(C,D)$ defines
a spatial curve $K'\subset \mathbb P^3$
with the node $p=(0:0:0:1)\in \mathbb P^3$
that can be resolved in two ways to a smooth
spatial elliptic sextic. Once again, we
can distinguish between positive and negative
resolution of $p$.
If $p$ is the self-intersection
of the same component of $\mathbb R K$
then we may use any of its orientations
to determine the sign.
If $p$ is the intersection of different
components then $K$ is necessarily of type I
and we may use a complex orientation of
$\mathbb R K$ to determine the sign.
The singular set $\Sigma\subset C$ consists
of two nodes. Thus the four points
of $\nu^{-1}(\Sigma)\subset K$
give a hyperplane section of $C$
through the normalization map $\nu:K\to C$.
Let $E$ be an abstract elliptic curve
(not embedded to any projective space)
enhanced with an antiholomorphic involution
$\operatorname{conj}$ with non-empty fixed locus $\mathbb R E$.
Let $\operatorname{Ch}_j\subset E$, $j=1,2$, be two disjoint pairs
of points with $\operatorname{conj}(\operatorname{Ch}_j)=\operatorname{Ch}_j$.
\begin{lem}\label{2chords}
If the divisors formed
by $\operatorname{Ch}_1$ and $\operatorname{Ch}_2$ are not linearly
equivalent in $E$ then there exists a
planar nodal
quartic curve $C\subset\mathbb P^2$
with two nodes $q_j\in C$, $j=1,2$,
such that $(K;\nu^{-1}(q_1),\nu^{-1}(q_2))$
is isomorphic to $(E;\operatorname{Ch}_1,\operatorname{Ch}_2)$.
Vice versa, for any nodal irreducible quartic
curve $C$ with two nodes $q_j\in C$, $j=1,2$,
the divisors $\nu^{-1}(q_1)$
and $\nu^{-1}(q_2)$ are not linearly equivalent.
\end{lem}
\begin{proof}
Consider the projective linear system
$|\operatorname{Ch}_1+\operatorname{Ch}_2|$ and a point
$r\in E\setminus (\operatorname{Ch}_1\cup\operatorname{Ch}_2)$.
As $\dim |\operatorname{Ch}_1+\operatorname{Ch}_2|=3$
there exists a unique point
$r_j$, $j=1,2$, such that
$r_j+r+\operatorname{Ch}_j$ is linearly equivalent to $\operatorname{Ch}_1+\operatorname{Ch}_2$.
Since $|\operatorname{Ch}_1|\neq|\operatorname{Ch}_2|$ we have $r_1\neq r_2$,
and $\operatorname{Ch}_1+\operatorname{Ch}_2, r_1+r+\operatorname{Ch}_1, r_2+r+\operatorname{Ch}_2$
generate a 2-dimensional subsystem in
$|\operatorname{Ch}_1+\operatorname{Ch}_2|$ and a map of
$E$ to $\mathbb P^2$. Note that
the image of $E$ has two nodes corresponding to
$\operatorname{Ch}_j$. Thus
this subsystem
contains $s_j+s+\operatorname{Ch}_j$ for some $s_j$
for every
$s\in E\setminus (\operatorname{Ch}_1\cup\operatorname{Ch}_2)$.
Therefore, the resulting linear system
is independent of the choice of $r$.
\ignore{
the 2-dimensional subsystem generated by
Choose a linear system of degree 3 to present $E$
as a planar cubic. Choose homogeneous
coordinates in $\mathbb P^2$
so that the pairs $\operatorname{Ch}_j$ are contained in the
coordinate lines $\{z_j=0\}$. Perturbing our choice
of linear system if needed we may assume that
the coordinate lines $\{z_j=0\}$ intersect
$E$ in three distinct points. Let
$r_j\in \{z_j=0\}\cap E$ be the remaining
intersection point.
Choose the third coordinate line to contain
$r_1$ and $r_2$ and make the quadratic transformation
according to this coordinate choice.
}
For the converse it suffices to take
a generic point in a nodal
elliptic quartic curve
$C\subset\mathbb P^2$ and connect it with the nodes $q_j$
of $C$ by lines. The lines will intersect $C$
at distinct fourth points. Distinct single points
are not linearly equivalent since our normalization $K$
is not rational.
\end{proof}
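For completeness, the dimension count $\dim|\operatorname{Ch}_1+\operatorname{Ch}_2|=3$ used in the proof above is the standard Riemann--Roch computation on a genus 1 curve: for a divisor $D$ on $E$ with $\deg D>0$ we have $h^0(D)-h^0(K_E-D)=\deg D+1-g=\deg D$, while $h^0(K_E-D)=0$ since $\deg(K_E-D)<0$ (the canonical class $K_E$ of an elliptic curve being trivial), so
\[
\dim|D|=h^0(D)-1=\deg D-1,
\qquad
\dim|\operatorname{Ch}_1+\operatorname{Ch}_2|=4-1=3.
\]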
This allows us to work with planar nodal elliptic quartics
almost as freely as with planar nodal rational quartics.
\begin{coro}\label{binod}
There are
10 distinct rigid isotopy classes of
real elliptic quartic curves in $\mathbb P^2$
in the case when the normalization of the real
locus consists of two components:
5 with both nodes
hyperbolic, 2 with one hyperbolic and one elliptic
node, 1 with two elliptic nodes and 2 with a
pair of complex conjugate nodes.
There are 5 distinct classes in the case when the
normalization is connected: 2 with both nodes
hyperbolic, 1 with one hyperbolic and one elliptic
node, 1 with two elliptic nodes and 1 with a
pair of complex conjugate nodes.
\end{coro}
Here by rigid isotopy we mean a deformation
in the class of (irreducible) real elliptic binodal
quartics in $\mathbb P^2$.
\begin{proof}
It is convenient to think of $\operatorname{Ch}_j$ as a {\em chord}
connecting two points of the real locus $\mathbb R E$.
We have three deformation classes if each chord
connects two points from the same component of $\mathbb R E$,
and
a single class if one or both chords connect
different components of $\mathbb R E$.
Each class is unique up to automorphism and deformation
of $(E;\operatorname{Ch}_1,\operatorname{Ch}_2)$ in the class of triples with
$|\operatorname{Ch}_1|\neq|\operatorname{Ch}_2|$.
If $\mathbb R E$ is disconnected then complex conjugate
nodes may correspond to intersections of the same
or different components; this gives us two classes.
\end{proof}
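As a quick arithmetic cross-check (not part of the argument), the counts listed in Corollary \ref{binod} indeed add up to the stated totals:
\[
5+2+1+2=10,\qquad 2+1+1+1=5.
\]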
\subsection{Nodal diagrams
in the
elliptic case}
Proposition \ref{prop-diagram} has the following
counterpart for the case of genus 1.
\begin{prop}\label{prop-diagram-g1}
Suppose that $(\mathbb R C;\mathbb R D^-,\tau,\sigma)$
is a virtual nodal diagram such that
all nodes of a nodal elliptic quartic curve $\mathbb R C$
are hyperbolic.
The diagram $(\mathbb R C;\mathbb R D^-,\tau,\sigma)$
is not realizable by a 3D-explicit nodal diagram $(C,D)$
if the six-point set $D^-\cup\nu^{-1}(\Sigma)$
is contained in a connected component of $\mathbb R K$.
It is also not realizable if
$D^-\cup\nu^{-1}(\Sigma)\subset\mathbb R K$ and
$\mathbb R C$ has a node corresponding to an intersection of
distinct components of $\mathbb R K$.
In all other cases
$(\mathbb R C;\mathbb R D^-,\tau,\sigma)$
is realizable by a 3D-explicit nodal diagram $(C,D)$.
Furthermore,
the virtual diagram $(\mathbb R C;\mathbb R D^-,\tau,\sigma)$
together with the sign $c$
of deformation of $K'$
determine the embedded real algebraic curve $K\subset\mathbb P^3$ up to rigid isotopy.
\end{prop}
\begin{proof}
Suppose $(C,D)$ is a 3D-explicit nodal diagram.
The number of arc-components of
$\mathbb R K\setminus(\nu^{-1}(\Sigma)\cup D^-)$
is 6 if $D^-\subset\mathbb R K$, otherwise it is 4.
Suppose that we have 6 odd arcs.
Then
all 6 points of $D^+$ are real, and each point
sits on its own arc of $\mathbb R K\setminus(\nu^{-1}(\Sigma)\cup D^-)$. If all these 6 points belong to
the same component of $\mathbb R K$ we have a
contradiction to the Abel theorem
as $\deg D^+=\deg(D^-+\nu^{-1}(\Sigma))$, but
a divisor cannot be principal if it
can be presented as the boundary
of a collection of disjoint coherently
oriented intervals contained
in the same component of $\mathbb R K$.
If the images of different components of $\mathbb R K$
under $\nu$ intersect then by Corollary \ref{binod}
there are just two possibilities for $\mathbb R C$
up to rigid isotopy. In this case $K$ is of type I.
Note that if $f:K\to\mathbb P^1$ is a real meromorphic
function with $f^{-1}({\mathbb R}{\mathbb P}^1)=\mathbb R K$ then $f$
has no real critical values, and a complex
orientation of $\mathbb R K$ can be obtained
as the pullback of an orientation of ${\mathbb R}{\mathbb P}^1$.
Applying this remark to the function defined
by $D^+-(D^-+\nu^{-1}(\Sigma))$
for these two possibilities we get a contradiction, see Figure \ref{lemdd}.
In the remaining cases existence and
uniqueness of a nodal diagram $(C,D)$
up to deformation follows from Lemma \ref{2chords}, where the black points represent $f^{-1}(\infty)$
(i.e. the divisor $D^- + \nu^{-1}(\Sigma)$) and the white points
represent $f^{-1}(\epsilon)$
with $\epsilon\gg 0$.
\end{proof}
\begin{figure}[h]
\includegraphics[width=80mm]{lemdd.eps}
\caption{
Six odd arcs and the Abel theorem.
\label{lemdd}}
\end{figure}
\begin{lem}\label{enode-g1}
Any real smooth rational sextic $K\subset\mathbb P^3$
is rigidly isotopic to a curve obtained
from a nodal diagram such that the underlying
real quartic rational curve does not have
elliptic nodes.
\end{lem}
\begin{proof}
Suppose that $(C,D)$ is a 3D-explicit
nodal diagram, and $C$ has an elliptic node.
Use Lemma \ref{2chords} to deform the corresponding
chord
to a tangent line, and then further to a real chord.
Then the elliptic node gets deformed to a cusp,
and further to a hyperbolic node.
Deform $D$ in its linear equivalence class to extend
the deformation of $C$ to a 3D-explicit deformation
$(C_t,D_t)$
of $(C,D)$. We can do that since we may assume
that $\deg (D^+\cap\mathbb R K)\le 4$,
as $\mathbb R K\setminus(\nu^{-1}(\Sigma)\cup D^-)$
contains at most four arcs.
If during the deformation the curve
$K'_t\setminus\{p\}\subset\mathbb P^3$ has a singular
point then it must be an elliptic node.
Thus exchanging the roles of $p$ and this node
we may assume that $p$ is elliptic, i.e.
$D^-\cap \mathbb R K=\emptyset$. In this case
we may assume that $\deg (D^+\cap\mathbb R K)\le 2$
and keep two of the points of $D^+_t$
at the inverse image of the elliptic node of $C_t$
under $\nu$.
If $C_t$ has two elliptic nodes then $D^+_t$
can be chosen without real points and we may
keep four points of $D_t^+$ at $\nu^{-1}(\Sigma)$
thus ensuring (after a perturbation)
that $K'_t\setminus\{p\}$ is smooth.
\end{proof}
\begin{coro}
\label{coro-2ptg1}
Any real elliptic curve $K$ of degree 6 in $\mathbb P^3$
is rigidly isotopic
to a curve obtained from one of the curves
whose virtual nodal diagrams
are listed on Figure \ref{g1-2pt}
by resolving them according to $c=\pm 1$
as in Lemma \ref{rkprime} if $K$ is of type I.
Each diagram $N^c_\epsilon$,
where $N$ is the number of the
diagram from Figure \ref{g1-2pt},
$c=\pm$ is the sign of deformation
of $K'$ into $K$,
and
$\epsilon$
is the sum of the signs at all solitary nodes of $\mathbb R C$
(located in the region specified by the diagram)
uniquely determines the rigid isotopy class of
a real algebraic curve.
Here the allowed values for $\epsilon$
are $\epsilon=\pm1$ in the cases 17 and 18;
$\epsilon=0,-2$ in the cases 22 and 23;
and $\epsilon=0,\pm2$ in the cases 24 and 25.
If $K$ is of type II then it
is rigidly isotopic to a curve obtained
from one of the curves
whose virtual nodal diagrams
are listed on Figure \ref{g0-2pt},
cases 20, 21, 22 (without further elliptic nodes),
26 with $\epsilon=-1$, 27 with $\epsilon=0,-2$,
or 29 with $\epsilon=\pm 2,0$,
by resolving them according to $c=\pm 1$.
\end{coro}
\begin{figure}[h]
\includegraphics[width=90mm]{dd-g1.eps}
\caption{
Equivalence classes of the diagrams of $\mathbb R K'$ with respect to the moves
of Proposition \ref{prop-moves}.
\label{g1-2pt}}
\end{figure}
\begin{proof}
The proof is similar to that of Corollary
\ref{coro-2ptg0}. Note that if the number of
odd arcs is not more than 4 then we have the moves
of Proposition \ref{prop-moves}
also for the genus 1 case.
\end{proof}
\subsection{Trinodal spatial elliptic sextics
and proof of Theorems \ref{thm-g1t} and \ref{thm-g1r}}
Additional isotopies among the curves corresponding
to positive and negative resolution of nodal
diagrams from Figure \ref{g1-2pt} are obtained
with the help of bi- and trinodal spatial elliptic
sextics. As in the case of quadrinodal rational
sextics we may resolve the nodes of such curves
independently according to our choice of signs.
\begin{lem}\label{34red}
Let $J\subset\mathbb P^3$ be a rational real quadrinodal sextic
and $p\in J$ be one of its real nodes (which can be elliptic
or hyperbolic).
We may choose to smooth $J$ at $p$
to a real elliptic sextic $I$ of type I or of type II,
keeping the three other nodes of $J$.
\end{lem}
\begin{proof}
We have seen that $J$ is given by 4 chords on a conic
$Q\subset\mathbb P^2$.
We can view the chords as lines in $\mathbb P^2$.
Perturbing the diagram if needed
we may assume that no three chords intersect
in a point.
The plane $\mathbb P^2$ can be linearly embedded into $\mathbb P^3$
so that the four chords are cut by the coordinate
planes. Then $J$ is the image of $Q$ under \eqref{cucr}.
Let the line $L_p\subset\mathbb P^2$ be the chord corresponding to $p$. Let $R\subset\mathbb P^2$ be the cubic curve obtained
by perturbation of $Q\cup L_p$ with the help of the three
other chords (either to a type I or type II
real curve). The image of $R$ under \eqref{cucr}
is $I$.
\end{proof}
\noindent{\em Proof of Theorems \ref{thm-g1t} and \ref{thm-g1r}.}
Suppose that $K$ is of type II.
We use the same quadrinodal
curves as in Figure \ref{4dp} and apply to them
Lemma \ref{34red} to complete the classification
in this case.
If $K$ is of type I we use the trinodal and binodal curves from
Figure \ref{3dp} to relate the curves from
Corollary \ref{coro-2ptg1} according to the graphs depicted
in Figure \ref{graphs-d6g1}.
\ignore{
All the trinodal curves in Figure \ref{3dp}
except for J
can be obtained from quadrinodal curves
of Figure \ref{4dp} with the help of Lemma \ref{34red}.
It is easy to construct J as well
as the binodal curves K and L
explicitly.
}
All the nodal curves in Figure \ref{3dp} except for J can be obtained from
rational nodal curves of Figures 9 and 10 with the help of Lemma \ref{34red}
(see the table below).
It is easy to construct J explicitly.
\begin{center}
\begin{tabular}{|c|c|c|c|c|c|c|c|c|c|c|c|}
\hline
Rational multinodal curve & A & B & C & H &
J & H & I & J & K & L & L \\
\hline
Perturbed node & 1 & 2 & 2 & 2 & 3 & 1 & 4 &
1 & 1 & 3 & 1\\
\hline
Elliptic multinodal curve &
A & B & C & D & E & F & G & H & I & K & L\\
\hline
\end{tabular}
\end{center}
\begin{figure}[h]
\includegraphics[width=100mm]{3dp.eps}
\caption{
Twelve trinodal and binodal elliptic curves
\label{3dp}}
\end{figure}
\begin{figure}[h]
\includegraphics[width=125mm]{graphs-d6g1.eps}
\caption{
Graphs of rigid isotopy equivalence.
\label{graphs-d6g1}}
\end{figure}
\qed
\section{Rational knots of degree 6}
\begin{thm}\label{thm-g0t}
There are 14 topological isotopy types (homeomorphism classes
of pairs $({\mathbb R}{\mathbb P}^3,\mathbb R K)$)
of rational real algebraic curves $\mathbb R K$ of
degree 6 embedded in the projective space ${\mathbb R}{\mathbb P}^3$.
Figure \ref{deg6c} lists the knots.
\end{thm}
\newcommand{\ew}{\operatorname{ew}}
\begin{thm}\label{thm-g0r}
There are 38 rigid isotopy types of rational real algebraic curves of degree 6
embedded in the projective space ${\mathbb R}{\mathbb P}^3$.
Namely, each curve depicted on Figure \ref{deg6c} enhanced with
a choice of the listed value for $w$ gives rise to
one rigid isotopy class of $\mathbb R K$ in the depicted knot type.
Furthermore, simultaneous reflection of $\mathbb R K$ in ${\mathbb R}{\mathbb P}^3$ and changing
the sign of $w$ gives a new rigid isotopy type with the exception
when the knot type of $\mathbb R K$ is amphichiral (the first two
knots of Figure \ref{deg6c}).
\end{thm}
\begin{figure}[h]
\includegraphics[width=100mm]{d6g0.eps}
\caption{
Real algebraic knots of degree 6 and genus 0.
\label{deg6c}}
\end{figure}
\subsection{Quartic nodal diagrams and odd arcs}
Lemma \ref{rkprime} and
Proposition \ref{reconstruct-rkprime}
reduce Theorems \ref{thm-g0t} and \ref{thm-g0r}
to classification
of the nodal diagrams
$(C,D)$ with respect to equivalences corresponding to
topological and rigid isotopies of the resulting
spatial curves.
Since $d=6$ the curve $ C$
is a nodal quartic. Thus it has at most three
real nodes.
By Lemma \ref{3pt-separate} we may assume that
$ D$ is 3D-explicit.
Thus we may apply Proposition \ref{prop-3Dliftcr}.
\ignore{
Namely, a choice of line ${\mathbb R}{\mathbb P}^1\subset{\mathbb R}{\mathbb P}^2$ (generic with respect to $\mathbb R C\subset{\mathbb R}{\mathbb P}^2$)
gives us a presentation of ${\mathbb R}{\mathbb P}^2$
as the closure of the affine plane $\mathbb R^2={\mathbb R}{\mathbb P}^2\setminus{\mathbb R}{\mathbb P}^1$.
Similarly, ${\mathbb R}{\mathbb P}^3$ becomes presented as the closure of
\begin{equation}\label{repR3}
\mathbb R^3={\mathbb R}{\mathbb P}^3\setminus\overline{\pi^{-1}_p({\mathbb R}{\mathbb P}^1)}.
\end{equation}
Once this choice is made, one of the two branches of the normalization
of $\mathbb R C$ near its double point becomes an {\em overcrossing} (it sits above the
other branch in $\mathbb R^3$) while the other branch becomes an {\em undercrossing}.
Indicating this information at all nodes turns the curve $\mathbb R C=\pi_p(\mathbb R K)$ into
a {\em knot diagram of $\mathbb R K$}.
Consider a small arc $I\subset\mathbb R C$ centered around $x''$ or $y''$.
The presentation \eqref{repR3} allows to identify one of the component of
$I\setminus D_-$ as an upper half-arc while the other one as a lower half-arc.
We include this information to the knot diagram of $\mathbb R K$ by drawing
the upper half-arc with a solid line, and the lower half-arc with a dashed line.
\begin{defn}
{\em A diagram of $\mathbb R K'$} is a generically immersed curve $\mathbb R C$
enhanced with two distinct points $x'',y''\in\mathbb R C\setminus\Sigma$ as well
as the indication of overcrossing at the nodes of $\mathbb R C$ and the upper half-arcs
at $x''$ and $y''$.
\end{defn}
}
Given a nodal diagram of $\mathbb R K'$
we may consider the
parity of the number of elements of $A\cap D^+$
(counted with multiplicities)
for any connected component
$A\subset\mathbb R C\setminus(\Sigma\cup D^-)$.
We refer to such components $A$ as {\em diagram arcs}.
A diagram arc $A$ is called {\em odd} if the parity of $A\cap D^+$
is odd, and {\em even} otherwise.
Clearly, the parity of a diagram arc depends only
on the underlying virtual knot diagram,
so that we may speak of odd arcs of virtual
nodal diagrams.
\ignore{
\begin{defn}
A diagram consisting of a generically immersed
real curve $\mathbb R C\subset {\mathbb R}{\mathbb P}^2$ with the nodal set $\Sigma$,
two distinct points $x'',y''\in\mathbb R C\setminus\Sigma$ as
well as
the upper/lower half-space information near $x'',y''$ and the nodal set $\Sigma\subset \mathbb R C$
is called an {\em enhanced diagram} of $\mathbb R C$.
\end{defn}
Clearly, the notion of an odd diagram component makes sense for any
enhanced diagram of $\mathbb R C$ (and not only for $\mathbb R C=\pi_p(\mathbb R K')$).
}
\begin{prop}\label{prop-diagram}
Suppose that $( C;D^-,\tau,\sigma)$
is a virtual nodal diagram such that
all nodes of a rational quartic curve $\mathbb R C$
are hyperbolic.
The diagram $( C;D^-,\tau,\sigma)$
is realizable by a $3D$-explicit
nodal diagram $(C,D)$
if and only if the number of its odd arcs
is at most 6.
Furthermore, in this case the virtual diagram $( C;D^-,\tau,\sigma)$ and the sign $c$
of deformation of $K'$
determine the embedded real algebraic curve $K\subset\mathbb P^3$ up to rigid isotopy.
\end{prop}
\begin{proof}
If the number of odd diagram arcs
is greater than 6 then so must be the degree
of the effective divisor $ D^+$.
Conversely, if this number is at most 6 then we
may construct $ D^+$ by selecting a point at each odd arc.
If needed we add pairs of conjugate points on $ C$ to ensure $\deg D^+=6$.
The space of these choices is connected as any
pair of points of $D^+$ on the same arc may be deformed
off $\mathbb R C$ into $\mathbb C C\setminus\mathbb R C$.
\end{proof}
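In other words, the bound in Proposition \ref{prop-diagram} is a degree count: each odd arc $A$ contributes at least one point to $A\cap D^+$, and distinct arcs contribute distinct points, so
\[
\#\{\text{odd arcs}\}\;\le\;\sum_{A\ \mathrm{odd}}\#(A\cap D^+)\;\le\;\deg D^+=6.
\]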
\subsection{Moves of the diagrams}
Distinct nodal diagrams $(C,D)$ may correspond
to the same rigid isotopy class of knots.
We formulate the following
straightforward proposition
for nodal diagrams of rational curves
of arbitrary
degree $d$.
\begin{prop}\label{prop-moves}
Suppose that nodal diagrams $(C_1,D_1)$
and $(C_2,D_2)$ of degree $d$
on rational curves $C_j$, $j=1,2$, of degree $d-2$
are 3D-explicit and
related by means of one of the moves listed below.
Then the embedded real algebraic curves
$K_1, K_2\subset\mathbb P^3$
obtained from the corresponding curves
$K'_1$ and $K'_2$ by resolving
their double points in coherent directions
are rigidly isotopic.
\noindent $\bullet$ {\bf (Moving a pole)} {\em
Suppose that $C_1=C_2$.
Let $u\in\Sigma$ be a hyperbolic node of
$\mathbb R C=\mathbb R C_1=\mathbb R C_2$
and $u^+,u^-\in\mathbb R K$ be two points corresponding
to $u$ under the normalization $\nu:K\to C$.
Let $u^{\pm}_1$ be the result of moving $u^\pm$
a little along $\mathbb R K$ in one (arbitrarily chosen)
direction, and $u^{\pm}_2$ be the result of
moving $u^{\pm}$ in the opposite direction.
Let $D^+$ be an arbitrary real
effective divisor on $K$ of degree $d-1$ disjoint
from $\nu^{-1}(\Sigma)$, and $D^-$ be
an arbitrary real effective
divisor on $K$ disjoint from
$\nu^{-1}(\Sigma)$ and $D^+$.
Define $D^{\pm}_j=D^{\pm}\cup\{u^{\pm}_j\}$, $j=1,2$.
(See Figure \ref{move-pole} for the change of
the corresponding virtual nodal diagram.)
}
\begin{figure}[h]
\includegraphics[height=12mm]{move-pole.eps}
\caption{Moving a pole. \label{move-pole}}
\end{figure}
\noindent $\bullet$ {\bf (Annihilation of two poles)}
{\em Suppose that $C_1=C_2$,
and $D^+$ is an arbitrary real effective divisor
of degree $d$ disjoint from $\nu^{-1}(\Sigma)$.
Let $x\in\mathbb R K\setminus(\nu^{-1}(\Sigma)\cup D^+)$
be a point.
Define $D^-_1$ to consist of two distinct points
in $\mathbb R K$ close to $x$ and $D^-_2$ to be a pair
of conjugate points in $\mathbb C K\setminus\mathbb R K$ close to $x$.
}
The remaining three moves are algebro-geometric
counterparts
of the Reidemeister moves from knot theory.
Here $C_1\neq C_2$, instead they are obtained
as different real perturbations
(within the class of real algebraic curves of degree $d$)
of a certain curve $C_0\subset\mathbb P^2$.
We denote the normalization of $C_0$ by
$\nu_0:K_0\to C_0$
and consider a certain divisor $D_0=D^+_0-D^-_0$
on $K_0$ obtained as the difference of
disjoint effective divisors $D^{\pm}_0$,
$\deg D^+_0=d$, $\deg D^-_0=2$, with $D^{\pm}_0$
disjoint from the inverse images of the nodes of $C_0$ under $\nu_0$.
In the following moves the divisor $D_j$, $j=1,2$,
on the normalization $K_j\to C_j$ is obtained
as an arbitrary small deformation of the divisor
$D_0$ on $K_0$.
\noindent $\bullet$ {\bf (Reidemeister 1)} {\em
Here $C_0$ is a curve with a single cusp $u$
and simple nodes as all other singularities,
while $D_0$ is such that $D^+_0\ni u$.
See Figure \ref{reidem1} for the corresponding virtual
nodal diagrams.
}
\noindent $\bullet$ {\bf (Reidemeister 2)} {\em
Here $C_0$ has a single tacnode $u$
and simple nodes as all other singularities,
while the divisors $D^{\pm}_0$ are disjoint from $u$.
}
\noindent $\bullet$ {\bf (Reidemeister 3)} {\em
Here $C_0$ is a curve with
a single ordinary triple point $u$
and simple nodes as all other singularities,
while $D^+_0$
contains a single point in the set $\nu_0^{-1}(u)$
(of cardinality 3).
}
\end{prop}
\ignore{
\begin{proof}
In all these cases the pairs $\deg D^+_0=d$,
$\deg D^-_0=2$$(C_1,D_1)$
and $(C_2,D_2)$ can be connected with a deformation.
Except for the annihilation of the poles move
the intermediate pairs $(C_t,D_t)$ give rise to curves
$K'_t\subset\mathbb P^3$ with a single self-crossing point $p$
with distinct tangent directions and non-singular
$K'_t\setminus\{p\}$. In these cases the rigid isotopy
is given by Lemma \ref{rkprime}.
In the remaining case $(C_0,D_0)$ corresponds to
a spatial $K'_0$ with a cusp at $p$.
The point of $\mathbb R\Xi\subset{\mathbb R}{\mathbb P}^4$ corresponding
to $K'_0$ (see the proof
of Lemma \ref{rkprime}) is smooth.
Thus the curves $K_1$ and $K_2$ corresponding (locally)
to points from the same side of $\mathbb R\Xi$
are rigidly isotopic.
\ignore{
We have topological invariance under these moves as the corresponding diagrams
are connected with a path of diagrams so that each intermediate diagram
can be lifted to a spatial curve with a single double point at $p$.
If $g=0$ then all divisors on $\mathbb R C_j$, $j=1,2$,
are linearly equivalent.
If $g=1$ and the number of odd arcs is at most 4 then we can deform
the divisor $ D_+$
to a divisor with a pair of complex conjugate points
within the same linear equivalence class.
Any deformation
of the real points of the divisor can now be extended to the conjugate pair
so that the linear equivalence class is preserved.
Thus under these conditions the intermediate spatial curve are algebraic
by Proposition \ref{cd-rkprime} and thus $\mathbb R C_1$
and $\mathbb R C_2$ are rigidly
isotopic (cf. also Proposition \ref{CD-ri} in the case of moving a pole).
}
\end{proof}
}
\begin{coro}\label{coro-2ptg0}
Any embedded real rational curve of degree 6 is rigidly isotopic
to a curve obtained from the curves
whose virtual nodal diagrams
are listed on Figure \ref{g0-2pt}
by resolving them according to $c=\pm 1$
as in Lemma \ref{rkprime}.
Each diagram $N^c_\epsilon$,
where $N$ is the number of the
diagram from Figure \ref{g0-2pt},
$c=\pm$ is the sign of deformation
of $K'$ into $K$,
and
$\epsilon$
is the sum of the signs at all solitary nodes of $\mathbb R C$
(located in the region specified by the diagram)
uniquely determines the rigid isotopy class of
a real algebraic curve.
Here the allowed values for $\epsilon$
are $\epsilon=\pm1$ in the cases 20--22, 28 and 30;
$\epsilon=0,-2$ in the case 26;
$\epsilon=\pm 1, -3$ in the case 27;
and $\epsilon=\pm 1,\pm 3$ in the case 29.
\end{coro}
We omit $\epsilon$ from $N^c_\epsilon$ in
the case when the diagram $N$ admits only
one value for $\epsilon$ (e.g. if there are
no solitary nodes at all).
\ignore{
The sign $\pm1$ on the diagrams of Figure \ref{g0-2pt}
indicates the sign of the solitary real point of $\mathbb R C$
from $w(\mathbb R C)$.
The symbol $\epsilon$ stands for the sum of
the signs at all solitary nodes of $\mathbb R C$.
In Figure \ref{g0-2pt}.20-22, 28 and 30
we have $\epsilon=\pm 1$ while the solitary node
is unique and placed according to the picture.
In Figure \ref{g0-2pt}.26
we have $\epsilon=0,-2$.
There
might be two solitary node in the same component
of ${\mathbb R}{\mathbb P}^2\setminus\mathbb R C$ or no solitary nodes
(if $\epsilon=0$).
In Figure \ref{g0-2pt}.27 we have $\epsilon=-3$
or $\epsilon=\pm 1$.
In Figures \ref{g0-2pt}.29
we have $\epsilon=\pm 3$ or $\epsilon=\pm 1$.
According to the value of $\epsilon$ there might be
one or three solitary nodes in the same component
of ${\mathbb R}{\mathbb P}^2\setminus\mathbb R C$.
In particular, we claim that the rigid isotopy
type of the spatial curve $\mathbb R K$ depends
only on the number of the corresponding curve
$\mathbb R C$ in Figure \ref{g0-2pt} and the value of $\epsilon$
even if the number of solitary nodes of $\mathbb R C$ may vary.
}
\begin{proof}
Once we ignore the signs of solitary nodes
(i.e. the $\sigma$-data in virtual diagrams
$(\mathbb R C;\mathbb R D^-,\tau,\sigma)$),
Figure \ref{g0-2pt} lists all triples
$(\mathbb R C;\mathbb R D^-,\tau)$ on
generically immersed quartics $C$
with not more than 6 odd arcs
up to the equivalence generated
by all moves from Proposition \ref{prop-moves}.
We refer e.g. to \cite{DeMello} for
classification of generic real quartics $C$.
Suppose that $C$ does not have elliptic nodes.
In this case any 3D-explicit divisor
defines a nodal diagram unless $C$ has
a pair of conjugate nodes and $D$
is chosen so that the corresponding
lift $K'$ also has a pair of conjugate nodes.
This means that the real meromorphic function
corresponding to $D-H_0$, where $H_0$ is
the divisor cut on $C\subset\mathbb P^2$
by the line at infinity of $\mathbb P^2$, has
real values at a fixed pair
of distinct and non-conjugate
points of $\mathbb C C\setminus\mathbb R C$.
It is easy to see that the divisors
with this property form a codimension 2
subspace in the connected
real 8-dimensional space of all 3D-explicit
divisors corresponding to the same virtual
diagram.
Our next claim is that if $(C,D)$
is a 3D-explicit nodal diagram such that $C$
has a pair of conjugate nodes and a
hyperbolic node then
there exists a path $(C_t,D_t)$, $t\in[0,1]$,
$(C_0,D_0)=(C,D)$ such that
$(C_t,D_t)$ are 3D-explicit nodal diagrams, $C_1$
has three hyperbolic nodes,
$C_{\frac12}$ has a tacnode with two hyperbolic
branches while the curve in $\mathbb P^3$ corresponding
to $(C_{\frac12},D_{\frac12})$
by Proposition \ref{reconstruct-rkprime}
is smooth outside
of the projection point $p$.
The family $(C_t,D_t)$ determines
a rigid isotopy between
the corresponding algebraic knots.
To prove the claim
we note that there are two types of $(C,D)$
with such properties, and each can be obtained
by perturbation of two ellipses in $\mathbb P^2$
intersecting transversally at two
real points.
It is sufficient to prove the claim for one
representative curve in each type.
The perturbations smooth one
of the real transversal intersection points
in two possible ways and keep the remaining
one real and two imaginary points of the intersection
of the ellipses.
Both perturbations can be included in a one-parametric
family of pairs of ellipses so that when $t$ increases
the ellipses become tangent and then intersect transversely in 4 distinct real points producing a family $C_t$
of rational nodal quartics. We define $D_t$ so that
$(C_t,D_t)$ is 3D-explicit,
and thus
the corresponding curve $K'_t\setminus\{p\}\subset\mathbb P^3$
is smooth.
Lemma \ref{enode}
allows us to reduce consideration
of nodal diagrams with elliptic nodes to those
without elliptic nodes and thus finishes the proof.
In particular, cases 26 with $\epsilon=+2$
as well as 27 with $\epsilon=+3$ cannot
appear as the corresponding diagrams with
hyperbolic nodes have more than 6 odd arcs.
\end{proof}
\begin{lem}\label{enode}
Any real smooth rational sextic $K\subset\mathbb P^3$
is rigidly isotopic to a curve obtained
from a nodal diagram such that the underlying
real quartic rational curve does not have
elliptic nodes.
\end{lem}
\begin{proof}
Let $q\in \mathbb R C$ be an elliptic node in
the nodal diagram $(C,D)$ of $K$.
The image of $C$ under the quadratic transformation
centered in the nodes of $C$ is a conic intersecting
the coordinate line corresponding to $q$ in
two imaginary points. A deformation of this conic
to a conic intersecting this line in two real
points gives a deformation $C_t$, $t\in[0,1]$,
of $C=C_0$ into a rational
nodal quartic $C_1$ that changes $q$ into a hyperbolic
node, and leaves the other nodes of $C$ unchanged.
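For the reader's convenience we recall the standard formula: after placing the three nodes of $C$ at the coordinate points of $\mathbb P^2$, the quadratic transformation centered in the nodes is the Cremona map
\[
(z_0:z_1:z_2)\mapsto(z_1z_2:z_0z_2:z_0z_1),
\]
which takes the trinodal quartic $C$ to a curve of degree $2\cdot 4-2-2-2=2$, i.e. a conic.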
This deformation can be extended to a deformation
$D_t$ of $D=D_0$ so that $D_t$ remains
disjoint from the nodes of $C_t$ and neither $D^+_t$
nor $D^-_t$ has multiple points.
This gives a deformation $K'_t\subset\mathbb P^3$
of sextic curves. If $K'_t\setminus\{p\}$
is nonsingular then the smooth curves obtained by
coherent deformations of $K'_0$ and $K'_1$
are rigidly isotopic while the number
of elliptic nodes of $C_1$ is less than that of $C_0$,
so that we may proceed inductively.
Suppose that $K'_t\setminus\{p\}$ is singular
for $t=\epsilon$ and nonsingular for $t\in [0,\epsilon)$.
Note that the singularity of $K'_t\setminus\{p\}$
must sit
over an elliptic node $s$ of $C_t$ since $D$ is
3D-explicit. Exchanging the roles of $s$ and $p$
if needed,
we may assume that $p$ was elliptic, i.e.
$D^-\cap\mathbb R K=\emptyset$.
In such case $\mathbb R K\setminus\nu^{-1}(\Sigma)$
consists of not more than 6 arcs,
and $D^+$ may be deformed in a family $D_t$
of divisors on the same curve $K$
so that $D^-_t=D^-$, $D^+_t\cap D^-_t=\emptyset$,
$D^+_t\cap\nu^{-1}(\Sigma)=\emptyset$, $t\in[0,1)$,
while $D^+_1=\nu^{-1}(\Sigma)$, so that
$K'_1\subset\mathbb P^3$ is a quadrinodal sextic curve,
i.e. a curve with 4 distinct nodes.
Note that
by our construction these 4 nodes
are not coplanar and the curve $K'_t$ is
not contained in any plane of $\mathbb P^3$.
If there are nodes forming a complex
conjugate pair then we can proceed
as in the proof of Corollary \ref{coro-2ptg0}
deforming this pair to a pair
of hyperbolic nodes.
If all nodes are real
(elliptic or hyperbolic) then
we choose the coordinates in $\mathbb P^3$
so that the intersections of the coordinate
hyperplanes correspond
to the nodes of $K'_1$. Then the cubic transformation
\begin{equation}\label{cucr}
(x_0:x_1:x_2:x_3)\mapsto (\frac 1{x_0}:\frac1{x_1}:\frac 1{x_2}:\frac 1{x_3})
\end{equation}
maps $K'_1$ to a conic in $\mathbb P^3$. Thus all
quadrinodal curves corresponding to the same
planar nodal quartic $C$ are isotopic.
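One can double-check the degree drop under \eqref{cucr} (assuming, as arranged above, that the nodes are the only intersections of $K'_1$ with the coordinate planes): the transformation is given by cubics vanishing to order 2 at each of the four coordinate points, while $K'_1$ has a double point at each of them, so the image has degree
\[
3\cdot 6-4\cdot(2\cdot 2)=2,
\]
i.e. it is a conic.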
But all elliptic nodes of $C$ can
be simultaneously deformed to hyperbolic nodes
as we can see
through consideration of conics in $\mathbb P^2$
tangent to the coordinate lines.
It remains to prove that
we can deform $(C,D)=(C_0,D_0)$ to $(C_1,D_1)$ so that all
$(C_t,D_t)$, $t\in[0,1)$ are nodal diagrams while $K'_1\subset\mathbb P^3$ is quadrinodal.
We start with a deformation of $D$ on the same curve $K$ as considered above.
If there exists $\epsilon<1$ with singular $K'_\epsilon\setminus\{p\}$ then its singularity
must be an elliptic node $e$. Consider a plane section
$H$ of $K'_\epsilon$ disjoint from $\{p\}$ such that
it passes through $e$ and such that the corresponding
divisor is 3D-explicit (we can do that since
there are not more than 2 hyperbolic nodes of $C$).
Deforming the plane section from $D^+_\epsilon$
to $H$ gives us a family of intermediate nodal diagrams
$(C_t,D_t)$, $t\in [\epsilon,\epsilon_H)$,
$\epsilon<\epsilon_H<1$ while $D_{\epsilon_H}^+$ contains
$\nu_{\epsilon_H}^{-1}(e)$. We continue the process
by deformation of the effective divisor
$D_{\epsilon_H}^+-\nu_{\epsilon_H}^{-1}(\Sigma_{\epsilon_H}\setminus\{e\})$ until we arrive at $D_1=\nu^{-1}_1(\Sigma_1)$. Finally we perturb the family $(C_t,D_t)$
slightly to ensure that it consists of nodal diagrams
for $t<1$.
\ignore{
\newcommand{\operatorname{Ch}}{\operatorname{Ch}}
Let us recall that a rational nodal quartic
$C\subset\mathbb P^2$ is determined (up to
a multiplicative translation) by
its {\em chord diagram} on $\mathbb P^1$ defines as three
disjoint pairs of points $\operatorname{Ch}_j\subset \mathbb P^1$, $j=0,1,2$.
Indeed, three pairwise unions of $\operatorname{Ch}_j$ define
three divisors of degree 4 on $\mathbb P^1$,
and thus a rational curve of degree 4
in $\mathbb P^2$ with nodes at the three points
of intersection of coordinate axes.
An elliptic node corresponds to a pair
of points interchanged by $\operatorname{conj}$.
The divisor $D^+$ is given by 6 $\operatorname{conj}$-invariant
points on $\mathbb P^1$. Clearly,
a pair of conjugate points of $D^+$
may be deformed to a pair of real points that
are close to each other and disjoint
from $D^-$ and $\operatorname{Ch}_j\cap{\mathbb R}{\mathbb P}^1$.
We claim that we may deform $D^+$ and $\operatorname{Ch}_0$
simultaneously so that ..
simultaneously
}
\end{proof}
\ignore{
Furthermore, we claim that each elliptic node
of the diagram $\mathbb R C$ may be deformed to a hyperbolic
node by the Reidemeister 1 move so that
the nodes of $ C$ are lifted to distinct points
of $\cp^3$.
The lift $ K'\subset\cp^3$
of $ C\subset\cp^2$
is defined by
the divisor $ D= D_+- D_-$.
The projection of $ K'$ to the vertical coordinate
gives a meromorphic function $h: \tilde C\to\cp^1$
on the normalization $ \tilde C$ of $ C$.
The divisor of $h$ is $ D- E$, where $ E$ is cut
on $ C$ by the infinite line in $\cp^2$.
We may assume that the divisor $ E$ as well
as $ D_-$ consists
only of real points, i.e. that
$ E\cup D_-\subset\mathbb R C$.
Note that a solitary node $s$ of the diagram
$\mathbb R C$ lifts to a node of $\mathbb R K'$ if and only if
$h(\tilde s)\in{\mathbb R}{\mathbb P}^1$ (where
$\tilde s\in \tilde C$
is any of the two points corresponding
to $s$ under the normalization).
We claim that reality of $ E$ implies that
every component of
$\tilde C\setminus h^{-1}({\mathbb R}{\mathbb P}^1)$
is adjacent to $\mathbb R\tilde C$. To see
this we note that the image of the
boundary of each such component under $h$
is a real monotone function and thus must
contain $\infty\subset{\mathbb R}{\mathbb P}^1$.
However, by our assumption
we have $h^{-1}(\infty)\subset\mathbb R\tilde C$.
Thus both solitary nodes can be made
connected to $\mathbb R \tilde C$ with a path
disjoint from $h^{-1}({\mathbb R}{\mathbb P}^1)\setminus\mathbb R \tilde C$.
We choose the deformation of $\mathbb R C$ in ${\mathbb R}{\mathbb P}^2$
so that our solitary node follows this path with
the help of the quadratic transformation
as in \cite{DeMello}.
Moving solitary nodes into the real domain
allows us to determine the sign of the solitary
node in Figures \ref{g0-2pt}.16--19,
and to show that $\epsilon\neq+2$ for
Figures \ref{g0-2pt}.26 and that
$\epsilon\neq+3$ for
Figures \ref{g0-2pt}.27.
Indeed, the excluded cases have more than 6 odd
arcs after turning all solitary nodes into real
self-crossings.
\end{proof}
}
\begin{figure}[h]
\includegraphics[width=125mm]{dd.eps}
\caption{
Equivalence classes of the diagrams of $\mathbb R K'$ with respect to the moves
of Proposition \ref{prop-moves}.
\label{g0-2pt}}
\end{figure}
\subsection{Quadrinodal spatial rational sextics
and isotopy}
The quadrinodal curves we considered in the previous
subsection are also useful for
finding isotopies among the curves
listed in Corollary \ref{coro-2ptg0}.
Let $J\subset\mathbb P^3$ be a quadrinodal rational
sextic with
the nodes in $(1:0:0:0)$, $(0:1:0:0)$, $(0:0:1:0)$
and $p=(0:0:0:1)$.
\ignore{
If we project $\mathbb R J$ to ${\mathbb R}{\mathbb P}^2$
from one of its double point the result is a quartic curve $\mathbb R C\subset{\mathbb R}{\mathbb P}^2$ enhanced
with a real divisor $ D= D_+- D_-$ on its complexification.
As before, $ D_+$ and $ D_-$ are effective divisors with $\deg D_+=6$
and $\deg D_-=2$. Here $ D_-$ is formed by the tangent lines
to the two branches of $ J$ at the double point chosen as the center $p$ of projection
while $ D_+$ is given by a choice of the reference ``coordinate plane" not
passing through $p$.
It is convenient to choose the four homogeneous coordinate planes in ${\mathbb R}{\mathbb P}^3$
to be the four planes passing through the four triples
formed from the four
double points of $\mathbb R J$.
By the Bezout theorem there are no
intersection points of $\mathbb R J$ with these coordinate planes
other than the double points.
We call such a coordinate system
{\em compatible with the nodes of $\mathbb R J$.}
Recall that for each of the four nodes of $\mathbb R J$
we may consider its resolution into a positive or negative
crossing as in Figure \ref{perturb}.
}
\begin{prop}
For every choice of signs
at some of the nodes of a
real rational quadrinodal sextic curve $J$
there exists a deformation of $\mathbb R J$ in the class of real rational
sextics
that resolves those nodes according to the chosen signs
and keeps all the other nodes unperturbed.
\end{prop}
\begin{proof}
The curve $J$ corresponds to a nodal rational planar
curve $C$ together with the divisor $D^+$
consisting of 6 points in the normalization $K$ of $C$
that correspond to 3 of the nodes of $C$,
so that each such node corresponds to a pair of points
in $D^+$.
Moving one of the points of such a pair in
an appropriate direction we obtain
a deformation of $\mathbb R J$ with the chosen sign.
(In the case when the pair consists of complex
conjugate points we move the second point
in a complex conjugate way.)
If our choice of signs keeps some of the nodes
unresolved then without loss of generality we
may assume that we do not resolve $p$.
If our choice resolves all the nodes
then we apply Lemma \ref{rkprime}
to resolve $p$.
\end{proof}
The normalization of $\mathbb R J$ is topologically a circle.
Hyperbolic nodes of $J$ may be encoded on this circle
by means of the so-called
{\em chord diagram}: we draw a chord
connecting
each pair of points that gets identified to a node of $\mathbb R J$.
Real solitary nodes of $ J$ are ignored
in the chord diagram $S$;
their number equals
4 minus the number of chords.
As the space of $n$ distinct
pairs of complex conjugate points in $\mathbb C\mathbb P^1$
is connected, we obtain the following statement.
\ignore{
\begin{lem}
Any chord diagram consisting of $a\le 4$ chords connecting $a$
disjoint pairs of points on the circle $S={\mathbb R}{\mathbb P}^1$ corresponds
to a real rational quadrinodal sextic curve $\mathbb R J\subset{\mathbb R}{\mathbb P}^3$
with $a$ hyperbolic nodes and $4-a$ elliptic nodes.
Up to projective linear transformations in ${\mathbb R}{\mathbb P}^3$ the corresponding
curve $\mathbb R J$ is unique in the case $a=4$ (recall that the chord data
includes the position of their endpoints) and, more generally,
is determined by a choice of $(4-a)$ points in the open
half-sphere $(\cp^1\setminus{\mathbb R}{\mathbb P}^1)/\operatorname{conj}$.
The birational
transformation of ${\mathbb R}{\mathbb P}^3$ defined by
\begin{equation}\label{cucr}
(x_0:x_1:x_2:x_3)\mapsto (\frac 1{x_0}:\frac1{x_1}:\frac 1{x_2}:\frac 1{x_3})
\end{equation}
in the coordinate system of ${\mathbb R}{\mathbb P}^3$ compatible with the nodes
of $\mathbb R J$
takes $\mathbb R J$ to a conic curve in ${\mathbb R}{\mathbb P}^3$ (which is necessarily planar)
in generic position with respect to the union of the coordinate planes.
Vice versa, any such conic gets transformed to a real rational quadrinodal curve.
\end{lem}
\begin{proof}
Let us apply $\eqref{cucr}$ to $\mathbb R J$. Note that by the Bezout
theorem the branches of $\mathbb R J$ at its nodes cannot be tangent to the
coordinate planes. We see that the image is a conic disjoint
from the coordinate axes. A tangency of a conic
to a coordinate plane would correspond to a cusp under $\eqref{cucr}$,
so the image must be in general position with respect to
the union of coordinate planes.
A real projective embedding of a conic is given by four two-point divisors on $\cp^1$
invariant with respect to $\operatorname{conj}$. It is well-defined up to the action
of $(\mathbb R^\times)^3$ once the coordinate system is chosen.
To reconstruct $\mathbb R J$
we choose $a$ two-point divisors to be the endpoints of the chords,
and $(4-a)$ divisors consisting of complex conjugate pairs of points on $\cp^1\setminus{\mathbb R}{\mathbb P}^1$.
\end{proof}
}
\begin{prop}
The space of real quadrinodal rational sextic curves in $\mathbb P^3$ (considered up to projective linear transformations) corresponding to the same combinatorial diagram of real chords
is connected.
\end{prop}
Here we consider only quadrinodal curves with all
nodes real, i.e. without complex conjugate pairs
of nodes.
\subsection{Completing the classification}
\begin{proof}[Proof of Theorems \ref{thm-g0t} and \ref{thm-g0r}]
By Corollary \ref{coro-2ptg0} to deduce the classification we need
to identify the curves obtained from the diagrams of Figure \ref{g0-2pt}
that are rigidly isotopic.
For this we use eleven real rational quadrinodal curves
given by the chord diagrams A through K depicted on
Figure \ref{4dp}
as well as the 3-nodal curve L from Figure \ref{4dp-l}.
\begin{figure}[h]
\includegraphics[width=78mm]{4dp-l.eps}
\caption{
A trinodal rational sextic curve.
\label{4dp-l}}
\end{figure}
\begin{figure}[h]
\includegraphics[width=90mm]{4dp-a-k.eps}
\caption{
Eleven quadrinodal sextic curves.
\label{4dp}}
\end{figure}
Each quadrinodal curve is depicted along with the 4 projections
from its nodes (numbered 1 through 4).
For convenience we indicate the diagram
of the one-nodal curve $\mathbb R K'$
obtained by {\em positive} resolution of all nodes except for the
projection point.
To identify different projections of the same quadrinodal curve we indicate
the same arc connecting two of the nodes.
Also we depict the lower and upper half-arcs near the points of $ D_-$
for the corresponding diagrams. Note that this choice is determined
(up to simultaneous reversal) by the following rule.
Let us connect the two points of $ D_-$ by an arc in
the curve $\mathbb R C\subset{\mathbb R}{\mathbb P}^2$ and compute the number of points of $ D_+$ contained
on this arc. If the arc crosses the infinite line of the projective plane of
the diagram an odd number of times
then we add one to this number. If the result is odd
then the arc connects a lower half-arc to an upper half-arc. Otherwise it connects
half-arcs of the same kind.
The trinodal curve in Figure \ref{4dp-l} (curve ``L'') is parameterized by
$$
t\mapsto(
p_1 p_3 p_5 p_6 p_7 p_8 :
p_1 p_2 p_3 p_4 p_5 p_7 :
p_2 p_4 p_6^3 p_8 :
p_1 p_2 p_4 p_5 p_6 p_8 ),
$$
where $p_i=t-t_i$ and
$t_1 < t_2 < \dots < t_8$.
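As a sanity check, this parameterization can be verified numerically. The sketch below (in Python; the sample values $t_i=i$ are our illustrative assumption) confirms that each homogeneous coordinate is a monic polynomial of degree exactly 6 in $t$, and that the four coordinates have no common zero, so the map is defined on all of $\cp^1$.

```python
from math import comb

# Illustrative choice t_i = i (any increasing sequence works for this check).
t_vals = [1, 2, 3, 4, 5, 6, 7, 8]

def p(i, t):
    return t - t_vals[i - 1]

def coords(t):
    # The four homogeneous coordinates of the parameterization of curve "L".
    return (p(1, t) * p(3, t) * p(5, t) * p(6, t) * p(7, t) * p(8, t),
            p(1, t) * p(2, t) * p(3, t) * p(4, t) * p(5, t) * p(7, t),
            p(2, t) * p(4, t) * p(6, t) ** 3 * p(8, t),
            p(1, t) * p(2, t) * p(4, t) * p(5, t) * p(6, t) * p(8, t))

def nth_difference(f, n, t0=0):
    # n-th finite difference: vanishes iff deg f < n, and equals
    # n! times the leading coefficient when deg f = n.
    return sum((-1) ** (n - k) * comb(n, k) * f(t0 + k) for k in range(n + 1))

for j in range(4):
    f = lambda t, j=j: coords(t)[j]
    assert nth_difference(f, 7) == 0      # degree at most 6
    assert nth_difference(f, 6) == 720    # monic of degree exactly 6 (6! = 720)

# The coordinates never vanish simultaneously (only the t_i are candidates):
assert all(any(c != 0 for c in coords(ti)) for ti in t_vals)
```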
\begin{figure}[h]
\includegraphics[width=125mm]{graphs-d6g0.eps}
\caption{
Graphs of rigid isotopy equivalence.
\label{graphs}}
\end{figure}
Figure \ref{graphs} indicates which of the
curves obtained from the diagrams of Figure \ref{4dp}
are rigidly isotopic.
Namely, Figure \ref{graphs} lists bipartite graphs with two types
of vertices: numeric and alphabetic.
The label of a numeric vertex refers to the diagram number
from Figure \ref{g0-2pt} together with the sign used for the perturbation of the node.
Each such diagram encodes the equivalence class with respect to the moves
of Proposition \ref{prop-moves}.
The letter of an alphabetic vertex refers to one of the multinodal curves from Figure \ref{4dp}.
Each edge is labeled by
the number of the node of the corresponding
multinodal curve that becomes the projection point in the diagram for the adjacent numeric
vertex.
The signs of the resolution used in the graph are determined by the signs
at the adjacent numerical vertices.
We see that each pair consisting of a curve from Figure \ref{deg6c} and
the non-negative value of its Viro invariant $w$ corresponds to a connected subgraph
and that each resolution of a curve from Figure \ref{g0-2pt} is contained
in one of the subgraphs. Different knots from Figure \ref{deg6c} are topologically
different as knots in ${\mathbb R}{\mathbb P}^3$, see \cite{JuViro}.
Figure \ref{ak0} provides topological identifications of the boxed resolved diagrams
in Figure \ref{graphs} with the corresponding knot types from Figure \ref{deg6c}.
\begin{figure}[h]
\includegraphics[width=115mm]{ak0.eps}
\caption{
Identifications between the knots from Figure \ref{deg6c}
and the boxed diagrams of Figure \ref{graphs}.
\label{ak0}}
\end{figure}
A reflection (an orientation-reversing automorphism) of ${\mathbb R}{\mathbb P}^3$ takes a knot
to a knot whose Viro invariant has the opposite sign.
Thus each curve with positive $w$ corresponds to two distinct rigid isotopy types
under reflection
while a curve with $w=0$ may correspond to one or two types.
We have four types of knots with $w=0$: $K_1$, $K_2$, $K_5$ and $K_6$.
The knots $K_5$ and $K_6$ are chiral: they are not topologically isotopic
to their reflection in ${\mathbb R}{\mathbb P}^3$ as the two components of their inverse images
under the universal covering ${\mathbb S}^3\to{\mathbb R}{\mathbb P}^3$ have non-zero linking number.
At the same time, the knot types $K_1$ and $K_2$ are amphichiral.
Furthermore, the corresponding real algebraic sextics are rigidly isotopic
as their reflections appear in the same components of the graphs from Figure \ref{graphs}.
Altogether we get 38 rigid isotopy types in Theorem \ref{thm-g0r}.
We get the 14 topological types of Theorem \ref{thm-g0t}
by discarding the Viro invariant information from the data.
\end{proof}
\section{Viro invariant through diagrams}
If $K\subset\mathbb P^3$ is a
perturbation
of the nodal curve $K'\subset\mathbb P^3$
as in Lemma \ref{rkprime} then
the Viro invariant $w(K)$
can be computed
in terms of the virtual diagram $(\mathbb R C;\mathbb R D,\tau,\sigma)$.
Define $c=0$ if the two branches at $p$
belong to different components of $\mathbb R\tilde K$.
If both branches come from the same component of $\mathbb R\tilde K$ then
we orient $\mathbb R\tilde K$ arbitrarily and
define $c$ as the sign of the double point
resulting from $p$
in the knot diagram of $\mathbb R K$ (when projected from
a point far from $p$).
Let $u\in\mathbb R C$ be a smooth point.
Choose a local orientation of ${\mathbb R}{\mathbb P}^2$ near $u$
and an orientation of a component $M\subset\mathbb R C$
containing $u$. Note that it amounts to
a choice of generator in $H_1({\mathbb R}{\mathbb P}^2\setminus\{u\})$.
Let $u_+,u_-\notin\mathbb R C$
be points obtained by small deformations of $u$ to the
positive and negative side of $M$ respectively.
We define $\operatorname{ind}_M(u_\pm)$ as the image
of $M$ in $H_1({\mathbb R}{\mathbb P}^2\setminus \{u_\pm\})=\mathbb Z$
and set $\operatorname{ind}_M(u)=\frac{\operatorname{ind}_M(u_+)
+\operatorname{ind}_M(u_-)}2$. Clearly, the sign of this number
changes if we change the local orientation of ${\mathbb R}{\mathbb P}^2$
or the orientation of $M$.
However, in the case when $u\in D^-$
the orientation of $M$ defines an orientation of
the corresponding tangent line at $p$ so that together with the
local orientation of ${\mathbb R}{\mathbb P}^2$ we can compare the
resulting orientation with the (standard)
orientation of
the ambient ${\mathbb R}{\mathbb P}^3\supset\mathbb R K'$.
We set $i_M(u)\in\frac 12\mathbb Z$ to be $\operatorname{ind}_M(u)$
if these orientations agree and $-\operatorname{ind}_M(u)$
otherwise.
In the case when $K$
is a curve of type I
(see \cite{Rokhlin})
we can similarly define the index
$\operatorname{ind}_{\mathbb R C}(u)\in\frac 12
H_1({\mathbb R}{\mathbb P}^2\setminus \{u\})$
of $u$ with respect to the entire curve $\mathbb R C$ as well as the corresponding
half-integer number $i_{\mathbb R C}(u)\in\frac 12\mathbb Z$
using any of the two complex orientations of $\mathbb R C$.
Also in this case we define the linking number
$$\lambda(\mathbb R K)=\sum\limits_{M,N}\operatorname{lk}(M,N),$$
where the sum is taken over all pairs of different connected
components $M,N\subset\mathbb R K$ and the number $\operatorname{lk}(M,N)\in\frac 12\mathbb Z$
is the linking number in ${\mathbb R}{\mathbb P}^3$ of the components $M$ and $N$
enhanced with orientations induced from a complex orientation of $\mathbb R K$.
As was noted in \cite{Vi}, for type I curves $K$
it is also useful to consider the invariant
$$w_\lambda(K)=w(K)+\lambda(\mathbb R K).$$
In this case we define
$c_\lambda=\pm1$ according
to the sign of resolution of $p\in\mathbb R K'$
with respect to the complex orientations of $\mathbb R K$,
so that we have $c_\lambda=\pm 1$ even if
the two branches of $\mathbb R K'$ at $p$ correspond
to different components of $\mathbb R K$.
Similarly, for a hyperbolic node $q\in\Sigma$
we define $\sigma_\lambda(q)$ to be the sign
of the corresponding crossing point with respect
to the complex orientation of $\mathbb R K$.
\ignore{
If $K$ is of type I and $q\in\Sigma$
then in addition to the sign $\sigma(q)$
already defined we define the sign
$\tau(q)=\pm 1$ to coincide with $\sigma(q)$
(cf. Figure \ref{writhe}) unless $q$ is a hyperbolic
node corresponding to intersection of different components of the normalization
of $\mathbb R C$. In the latter case we define $\tau(q)$
also according to Figure \ref{writhe}
using any of the two complex orientations of $\mathbb R C$
(recall that in this case $\sigma(q)=0$ according to our convention).
}
\begin{prop}\label{viro-compute}
We have $$w=\sum\limits_{q\in\Sigma}\sigma(q)+
2\sum\limits_{u\in D^-}i_{M}(u)
+c.$$
Similarly,
$$w_\lambda(\mathbb R K)=\sum\limits_{q\in\Sigma}\sigma_\lambda(q)+
2\sum\limits_{u\in D^-}i_{\mathbb R C}(u)
+c_\lambda$$
if $\mathbb R K$ is of type I.
\end{prop}
\begin{proof}
After scaling $\mathbb R^3={\mathbb R}{\mathbb P}^3\setminus\mathbb R H$ by
a very large number we may assume that $\mathbb R K$ is
obtained by a deformation of the union of $\mathbb R C$
with the two lines connecting the two points
of $D^-$ to $p$.
The points of intersections of these lines with $\mathbb R C$
get smoothed.
The remaining intersection points contribute
$2\sum\limits_{u\in D^-}i_{M}(u)$
to $w$
and
$2\sum\limits_{u\in D^-}i_{\mathbb R C}(u)$
to $w_\lambda(\mathbb R K)$.
\end{proof}
\section{Introduction}
The subject of this paper is the problem of topological classification of smooth algebraic curves in ${\mathbb R}{\mathbb P}^3$
when their genus and degree are fixed. The ${\mathbb R}{\mathbb P}^2$ counterpart of this problem
originated in the celebrated work \cite{Harnack} of Harnack,
was popularized by Hilbert in his famous list of problems \cite{Hilbert}, and
has consequently been well studied over the last century.
At the same time the corresponding topological classification in ${\mathbb R}{\mathbb P}^3$ remains relatively unstudied.
We note that such a classification is straightforward in the case when the degree is $4$ or less
as all relevant curves are contained either in a plane or in a quadric surface.
The classification in the case of degree 5 and genus 0 was obtained
by Johan Bjorklund \cite{Bj}.
Our paper continues this work by providing the classification in the case of degree 5 and genus 1 as well as
for degree 6 and genus $\le 1$.
\section{Planar knots}
Let $\mathbb R K\subset{\mathbb R}{\mathbb P}^3$ be a smooth irreducible real algebraic curve of degree $d$ and genus $g$
consisting of $l$ connected components.
This means that $\mathbb R K\subset{\mathbb R}{\mathbb P}^3$ is given by a system
of homogeneous real polynomial equations in four variables so that
the locus $ K\subset\cp^3$ of complex solutions of the same system of equations
is an irreducible complex curve of genus $g$ smoothly embedded in $\cp^3$
and homologous to $d[\cp^1]\in H_2(\cp^3)=\mathbb Z$.
Recall that by the Harnack inequality we have $l\le g+1$.
We assume that $\mathbb R K$ is non-empty, i.e. that $l\ge 1$.
\begin{prop}
\label{lRR}
We have $g\le\frac{(d-1)(d-2)}{2}$.
If $g=\frac{(d-1)(d-2)}{2}$ then
there exists a hyperplane $\mathbb R H\subset{\mathbb R}{\mathbb P}^3$
such that $\mathbb R K\subset\mathbb R H$.
If $2d-g+\iota(2d,g)< 9$ then
there exists a quadric surface $\mathbb R Q\subset{\mathbb R}{\mathbb P}^3$
such that $\mathbb R K\subset\mathbb R Q$.
Here $\iota(2d,g)$ is the maximal possible irregularity (the rank
of the first cohomology group) of a line bundle of degree $2d$
over a curve of genus $g$. In particular,
$0\le\iota(2d,g)\le \max\{0,2g-1-2d\}$.
\end{prop}
\begin{proof}
Note that if $\mathbb R K$ is contained in a plane then $g=\frac{(d-1)(d-2)}{2}$
by the adjunction formula.
If $\mathbb R K$ is not contained in a plane in ${\mathbb R}{\mathbb P}^3$ then
we may find a linear projection $\lambda:\mathbb R K\to{\mathbb R}{\mathbb P}^2$
such that $\lambda(K)\subset{\mathbb R}{\mathbb P}^2$ is a reduced singular planar curve of degree $d$.
To get such $\lambda$ we may use a projection from a point contained in
the line tangent to a generic point of $\mathbb R K\subset{\mathbb R}{\mathbb P}^3$.
Thus $g<\frac{(d-1)(d-2)}{2}$.
The vector space of homogeneous quadratic forms in ${\mathbb R}{\mathbb P}^3$ is 10-dimensional.
The restriction of such form to $\mathbb R K$ gives a section of the line bundle
of degree $d$ over $\mathbb R K$ associated to the projective embedding taken twice.
We get a linear map between two vector spaces.
By the Riemann-Roch formula the dimension of the target vector space
is not greater than $1+2d-g+\iota(2d,g)$ so the hypothesis of the proposition
ensures that the kernel is nontrivial.
The inequality
$\iota(d,g)\le \max\{0,2g-1-d\}$ follows from Serre's duality as
$2g-2-d$ is the degree of the inverse bundle twisted by the canonical class
of the curve.
\end{proof}
\begin{coro}\label{dle6}
If $d\le 6$ and $g> 2d-9$ then
there exists a quadric surface $\mathbb R Q\subset{\mathbb R}{\mathbb P}^3$
such that $\mathbb R K\subset\mathbb R Q$.
\end{coro}
\begin{proof}
By Proposition \ref{lRR} it suffices to check that $2d-g< 9$ and
$2d-g+2g-1-2d< 9$. The first inequality follows from our hypothesis
while the second one translates to $g<10$.
Since $d\le 6$ this inequality holds unless $\mathbb R K$ is a planar sextic
(of genus 10), but then $\mathbb R K$ is contained in a reducible quadric surface.
\end{proof}
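The dimension count behind this corollary is easy to check mechanically. The following sketch (in Python; the function and variable names are ours, not notation from the text) enumerates all pairs $(d,g)$ with $d\le 6$ and $g>2d-9$ and verifies that the bound $2d-g+\iota(2d,g)<9$ can only fail for the planar sextic of genus 10.

```python
def iota_bound(deg, g):
    # Serre-duality bound on the irregularity of a degree-`deg` line bundle
    # on a curve of genus g: iota <= max(0, 2g - 1 - deg).
    return max(0, 2 * g - 1 - deg)

exceptions = []
for d in range(1, 7):
    g_max = (d - 1) * (d - 2) // 2                   # adjunction bound on the genus
    for g in range(max(0, 2 * d - 8), g_max + 1):    # hypothesis g > 2d - 9
        if 2 * d - g + iota_bound(2 * d, g) >= 9:
            exceptions.append((d, g))

# Only the planar sextic (genus 10) escapes the count, and it lies on a
# (reducible) quadric anyway.
print(exceptions)  # [(6, 10)]
```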
\begin{defn}
We say that $\mathbb R K\subset{\mathbb R}{\mathbb P}^3$ is
a {\em planar} link, if it is isotopic
to a smooth (not necessarily algebraic) link $L\subset{\mathbb R}{\mathbb P}^3$ that is
contained in a hyperplane $\mathbb R H\subset{\mathbb R}{\mathbb P}^3$.
\end{defn}
Planar links are trivial from the topological viewpoint.
Namely, we have the following straightforward statement.
\begin{prop}
Any two planar links with the same number $l$ of components and
the same parity of degree $d$ are isotopic.
\end{prop}
\ignore{
We denote with $p^l_0$ the isotopy type of a planar link of even degree and with
$p^{l-1}_1$ the isotopy type of a planar link of odd degree (so that we have one non-trivial
and $l-1$ trivial components).
\begin{proof}
Indeed, if $d$ is even then
it is isotopic to the disjoint union of $l$ unknotted circles
as any pair of nested ovals of $L\subset{\mathbb R}{\mathbb P}^2$ can be turned into
a non-nested pair by an isotopy in ${\mathbb R}{\mathbb P}^3$.
Similarly, if $d$ is odd then $\mathbb R K$ is isotopic to a planar curve
in ${\mathbb R}{\mathbb P}^2\subset{\mathbb R}{\mathbb P}^3$ consisting
of a pseudoline and $l-1$ non-nested ovals.
\end{proof}
}
Suppose now that $\mathbb R K\subset\mathbb R Q$ for
a quadric surface $\mathbb R Q\subset{\mathbb R}{\mathbb P}^3$.
There are four topological types for a quadric $\mathbb R Q$.
\begin{itemize}
\item The case when $\mathbb R Q$ is reducible (or non-reduced) and thus contains a plane.
Then $\mathbb R K$ is planar, so that $g=\frac{(d-1)(d-2)}2$.
\item The case when $\mathbb R Q$ is an ellipsoid. Then $d$ is necessarily even
and $\mathbb R K$ is planar.
\item The case when $\mathbb R Q$ is a singular quadric surface.
If $\mathbb R K$ is disjoint from the singular point of $\mathbb R Q$ then
$d$ is even, $\mathbb R K$ is planar,
and $g=(\frac d2-1)^2$.
If $\mathbb R K$ passes through the singular point of $\mathbb R Q$ then
$d$ must be odd.
The component of $\mathbb R K$ containing the singular point of $\mathbb R Q$
must be isotopic to a line in ${\mathbb R}{\mathbb P}^3$, while all other components
must bound disks in the cone $\mathbb R Q$, so once again
the link $\mathbb R K$ must be planar.
Recall that $\mathbb R Q$ can be obtained from the (toric) Hirzebruch surface $F_2$
by contracting the $(-2)$-sphere there.
In this case the curve $\mathbb R K$ is the image of a smooth curve $\mathbb R \tilde K\subset \mathbb R F_2$
whose Newton polygon is the trapezoid with vertices $(0,0),(0,\frac{d-1}2),
(1,\frac{d-1}2),(d,0)$.
We have $g=\frac{(d-1)(d-3)}4$, just as in the case of a curve
of bidegree $(\frac{d+1}2,\frac{d-1}2)$ on a hyperboloid.
\item The case when $\mathbb R Q$ is a hyperboloid.
This is the most interesting case. We consider it in more detail in the following
section.
\end{itemize}
\section{Hyperboloidal links}
\begin{defn}
We say that $\mathbb R K\subset{\mathbb R}{\mathbb P}^3$ is
a {\em hyperboloidal} link, if it is isotopic
to a smooth (not necessarily algebraic) link $L\subset{\mathbb R}{\mathbb P}^3$ that is
contained in the hyperboloid $\mathbb R Q=\{(x:y:z:u)\in{\mathbb R}{\mathbb P}^3\ |\ x^2+y^2=z^2+u^2\}\subset{\mathbb R}{\mathbb P}^3$.
\end{defn}
Such $L$ consists of $k$ components that bound disks in $\mathbb R Q$
and $j$ homologically non-trivial components.
Note that all non-trivial components must be homologous
in $\mathbb R Q$ as otherwise they would intersect.
Recall that the hyperboloid $\mathbb R Q$ has a distinguished
basis (up to a sign and permutation) in its homology group $H_1(\mathbb R Q)=\mathbb Z\oplus\mathbb Z$ given
by the ruling $\mathbb R Q={\mathbb R}{\mathbb P}^1\times{\mathbb R}{\mathbb P}^1$.
To any hyperboloidal link $L$ we associate three integers: $jp,jq,k$.
Here $(p,q)$ is the only non-trivial homology class of a component of $L$ in
$H_1(\mathbb R Q)=H_1({\mathbb R}{\mathbb P}^1\times{\mathbb R}{\mathbb P}^1)=\mathbb Z\oplus\mathbb Z$ (so that $p$ and $q$ are coprime),
$j$ is the number of components in this
class, and $k$ is the number of homologically trivial components (ovals) of $L$ in $\mathbb R Q$.
Choosing the orientation of the generating lines of $\mathbb R Q={\mathbb R}{\mathbb P}^1\times{\mathbb R}{\mathbb P}^1$
as well as their order in the basis we may assume that $p\ge q\ge 0$.
If $jp=jq$ then the non-trivial components of $L$ on $\mathbb R Q$ bound disjoint disks in ${\mathbb R}{\mathbb P}^3$.
These disks can be obtained from planar sections of $\mathbb R Q$ and thus $L$ is planar in this case.
Consider the case $jp=jq+1$, so that $j=1$ and $p=q+1$.
The only non-trivial component of $L$ intersects a curve $S\subset\mathbb R Q$
of homology class $(1,-1)$ in a single point. In turn, $S$ can be cut on $\mathbb R Q$
by a plane in ${\mathbb R}{\mathbb P}^3$. We may contract $S$ to a point so that $\mathbb R Q$ becomes
a quadratic cone and $L$ becomes a link on this cone passing through its apex.
Thus $L$ is planar (of odd degree).
We have proved the following statement.
\begin{prop}\label{plan-hyp}
If $L$ is a non-planar hyperboloidal link then $p>q$.
Furthermore, if $p=q+1$ then $j>1$.
\end{prop}
\begin{defn}
We denote with $h_{a,b}\ \sqcup \langle k
\rangle$, $a>b+1$, $k\ge 0$,
the isotopy type of a hyperboloidal link $L$ with $j=\gcd(a,b)$
non-trivial components on $\mathbb R Q$.
Here $(\frac aj,\frac bj )\in H_1(\mathbb R Q)=H_1({\mathbb R}{\mathbb P}^1\times{\mathbb R}{\mathbb P}^1)=\mathbb Z\oplus\mathbb Z$
is the homology class of a component of $L$ (with the appropriate orientations and order
of the basis elements) and $k$ is the number of homologically trivial components (ovals).
We use the abbreviated notation $h_{a,b}$ for
$h_{a,b}\ \sqcup \langle k\rangle$ when $k=0$.
\end{defn}
\begin{prop}
The hyperboloidal links of type
$h_{a,b}\ \sqcup \langle k\rangle$, $a>b+1$, are non-isotopic for different
values of $a,b,k$.
\end{prop}
\begin{proof}
Consider the universal covering $\pi:{\mathbb S}^3\to{\mathbb R}{\mathbb P}^3$. Let $\pi_{\mathbb R Q}:\mathbb R \tilde Q\to\mathbb R Q$
be the restriction of this double covering to $\mathbb R Q$.
The covering $\pi_{\mathbb R Q}$ is given by the subgroup
$\{(\alpha,\beta)\in H_1(\mathbb R Q)\ |\ \alpha+\beta\equiv 0\pmod 2\}\subset H_1(\mathbb R Q)=\pi_1(\mathbb R Q)$,
thus the total space $\mathbb R \tilde Q$ is a torus.
Furthermore, this torus is a standard torus in ${\mathbb S}^3$ (the boundary of a tubular
neighborhood of an unknot) and the classes $(1,1)$ and $(1,-1)$ in $H_1(\mathbb R Q)$
are the images of its standard generators (bounding disks in ${\mathbb S}^3\setminus\mathbb R\tilde Q$).
Thus $\pi^{-1}(L)$ is the union of an $(a+b,a-b)$-torus link in ${\mathbb S}^3$ with $2k$ unknotted unlinked
circles. Since by our hypothesis we have $a-b>1$, all such links are non-isotopic.
\end{proof}
\ignore{
We denote with $h_{a,b}\ \sqcup\ J\ \sqcup
\langle k\rangle$
the isotopy type of the disjoint union of a
hyperboloidal link $L$ of type $h_{a,b}$ and
a planar link $L'$ of type
$ J\ \sqcup \langle k\rangle$.
Here we assume that the plane containing $L'$ and
the hyperboloid containing $L$ intersect along
an oval disjoint from the non-contractible
component of $L'$ as well as from the interiors
of all oval of $L'$.
}
\ignore{
\section{Multihyperboloidal links}
Consider a collection $\mathbb R Q_m\subset{\mathbb R}{\mathbb P}^3$, $m=1,\dots,n$,
$n\ge 2$,
of hyperboloids (real algebraic quadrics) contained
in small neighborhoods of disjoint lines $l_m\subset{\mathbb R}{\mathbb P}^3$
where $\{l_m\}_{m=1}^n$ is a Hopf configuration
of lines (i.e. so that $l_m$ are the fibers of the
standard Hopf map ${\mathbb R}{\mathbb P}^3\to {\mathbb S}^2$).
Clearly, each $\mathbb R Q_m$ can be obtained
from $\mathbb R Q$ by a projective linear transformation
so that any link $L\subset\mathbb R Q_m$ gets identified
with a hyperboloidal link $h^k_{a,b}$.
All hyperboloids $\mathbb R Q_m$ are disjoint in ${\mathbb R}{\mathbb P}^3$.
Furthermore, as $n\ge 2$ we may define
the {\em interiors} $U_m$ of $\mathbb R Q_m$ as
the component of ${\mathbb R}{\mathbb P}^3\setminus\mathbb R Q_m$
that is disjoint from $\mathbb R Q_{m'}$, $m'\neq m$.
\begin{defn}
A link $L\subset {\mathbb R}{\mathbb P}^3$ is called {\em multihyperboloidal}
if it is the disjoint union of some hyperboloidal links
$L_m\subset\mathbb R Q_m$, $m=1,\dots,n$, $n\ge 2$,
and a (planar) link of type $\langle k\rangle$
contained in
a ball disjoint from all $L_m$.
Here we assume that no component of $L_m$ is homologous
to zero in $U_m$, so that all components of $L_m$
are homologous in $\mathbb R Q_m$ once we choose their
orientation in a coherent way.
\end{defn}
An orientation of ${\mathbb R}{\mathbb P}^3$ induces the orientation
of the interiors $U_m$ and thus of $\mathbb R Q_m$.
Choose an orientation of two transverse
generator lines of $\mathbb R Q_m$ so that their intersection
is positive and they are homologous in $U_m$.
Let $(p_m,q_m)\in H_1(\mathbb R Q_m)$
be the homology class of a component of $L_m$
in the corresponding basis. As $L_m$ is non-oriented,
we may assume that $p_m\ge 0$.
We set $a_m=j_mp_m$, $b_m=j_mq_m$,
where $j_m$ is the number of components in $L_m$,
and denote the isotopy type of
the multihyperboloidal link $L$
with
\begin{equation}
h_{a_1,b_1}\sqcup \dots \sqcup h_{a_n,b_n}
\sqcup \langle k\rangle.
\end{equation}
Note that reversing of the orientation of ${\mathbb R}{\mathbb P}^3$
results in permutations of pairs $(a_m,b_m)$ for
all $m$.
}
\def\mathbb S{\mathbb S}
\def\mathbb T{\mathbb T}
\def\mathbb{RP}{\mathbb{RP}}
\def\varepsilon{\varepsilon}
\begin{rmk}
We may describe the relation between the hyperboloidal links
and toric links as follows.
Let $\mathbb S^3=\{|z|^2+|w|^2=1\}$ be the unit sphere in $\mathbb C^2$ and let
$\mathbb T$ be the torus $\{|z|=|w|=1\}\subset\mathbb S^3$. As usual, for $p,q\in\mathbb Z$,
we define the {\it$(p,q)$-torus link} as $T(p,q)=\{z^p=w^q\}\cap\mathbb S^3\subset\mathbb T$.
It is clear that $T(p,q)\sim -T(-p,q)\sim T(q,p)$ and it is well known that
$T(p,q)$ is determined by $(p,q)$ up to isotopy under the condition $p\ge|q|$.
The number of components of $T(p,q)$ is equal to $\gcd(p,q)$.
In the case when $p\equiv q\pmod 2$, the link $T(p,q)$ is invariant under the antipodal
involution $-1:\mathbb C^2\to\mathbb C^2$, $(z,w)\mapsto-(z,w)$. So, in this case we define the
{\it projective $(p,q)$-torus link} as the quotient $\bar T(p,q)=T(p,q)/(-1)$. It
sits in $\mathbb S^3/(-1)$ which we naturally identify with $\mathbb{RP}^3$.
It is clear that the isotopy type of $\bar T(p,q)$ is determined by $(p,q)$ up to the
above relations. Indeed, if two links in $\mathbb{RP}^3$ are isotopic, then their double covers
are isotopic as well.
As we have already seen, we have $h_{a,b}=\bar T(a+b,a-b)$.
\end{rmk}
To represent toric and projective toric links by diagrams, it is convenient to use the language
of braids. We define the closure of a braid in $\mathbb S^3$ in the usual way and we define
the {\it closure in $\mathbb{RP}^3$} of a braid as follows. A braid can be naturally identified with
a tangle in a (round) ball $B^3$ with all endpoints placed symmetrically on a great circle on $\partial B^3$.
So, the closure of the braid in $\mathbb{RP}^3$ is the image of the tangle under the identification of
antipodal points of $\partial B^3$.
In particular, the diagram of the closure of a braid in $\mathbb{RP}^3$ just coincides with the diagram of the
corresponding tangle.
Let $p$ and $q$ be positive integers of the same parity and $\varepsilon=\pm1$. Then $T(p,\varepsilon q)$ is the closure in $\mathbb S^3$
of the $p$-braid $(\alpha\beta)^q$ where
$\alpha=\sigma_1^\varepsilon\sigma_3^\varepsilon\dots$ and $\beta=\sigma_2^\varepsilon\sigma_4^\varepsilon\dots$.
Similarly, $\bar T(p,\varepsilon q)$ is the closure in $\mathbb{RP}^3$ of the braid represented by the first half of the
word $(\alpha\beta)^q$, i.~e., the braid $(\alpha\beta)^{q/2}$ if $p$ and $q$ are even and
$(\alpha\beta)^{(q-1)/2}\alpha$ if $p$ and $q$ are odd.
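These braid words are easy to generate and test. Below is a minimal sketch (in Python; the representation of a braid as a list of signed generator indices and all helper names are our assumptions, not notation from the text). It also checks that the closure of $(\alpha\beta)^q$ has $\gcd(p,q)$ components, as a $(p,q)$-torus link should, and that the projective word is exactly half as long.

```python
from math import gcd

def torus_braid_word(p, q, eps=1):
    # (alpha beta)^q on p strands, with alpha = s1 s3 ... and beta = s2 s4 ...
    alpha = [eps * i for i in range(1, p, 2)]
    beta = [eps * i for i in range(2, p, 2)]
    return (alpha + beta) * q

def projective_half_word(p, q, eps=1):
    # First half of (alpha beta)^q, whose closure in RP^3 is T-bar(p, eps*q).
    alpha = [eps * i for i in range(1, p, 2)]
    beta = [eps * i for i in range(2, p, 2)]
    if q % 2 == 0:
        return (alpha + beta) * (q // 2)
    return (alpha + beta) * ((q - 1) // 2) + alpha

def closure_components(p, word):
    # Components of the closure in S^3 = cycles of the braid's permutation.
    pos = list(range(p))
    for g in word:
        i = abs(g)
        pos[i - 1], pos[i] = pos[i], pos[i - 1]
    seen, cycles = set(), 0
    for i in range(p):
        if i not in seen:
            cycles += 1
            while i not in seen:
                seen.add(i)
                i = pos[i]
    return cycles

# T(p, q) has gcd(p, q) components; check on pairs of equal parity.
for (p, q) in [(5, 3), (4, 2), (6, 4), (8, 2)]:
    assert closure_components(p, torus_braid_word(p, q)) == gcd(p, q)
    # The projective word has exactly half as many letters:
    assert 2 * len(projective_half_word(p, q)) == len(torus_braid_word(p, q))
```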
Many of the knots appearing in the classification
results of this paper are hyperboloidal (as topological knots)
even if the corresponding spatial algebraic curves
are not necessarily contained in a quadric surface.
E.g. we have
$K_3=h_{4,1}=\bar T(3,5)$ in Figure \ref{d5g0},
while we have
$K_5=h_{3,1}=\bar T(2,4)$,
$K_8=h_{5,3}=\bar T(2,8)$,
$K_{11}=h_{7,5}=\bar T(2,12)$,
$K_{14}=h_{5,1}=\bar T(4,6)$ in Figure \ref{deg6c}.
Note that among these knots, only $h_{4,1}$ and $h_{5,1}$ are realizable by
rational algebraic curves of respective degree (5 and 6) sitting in a hyperboloid.
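These identifications can be double-checked against the rule $h_{a,b}=\bar T(a+b,a-b)$ from the preceding remark. A small sketch (the helper name is ours), recalling that $T(p,q)\sim T(q,p)$:

```python
from math import gcd

def lift_type(a, b):
    # h_{a,b} lifts to the (a+b, a-b)-torus link under S^3 -> RP^3.
    return (a + b, a - b)

# Examples from the text; T(p,q) ~ T(q,p), so (5,3) matches \bar T(3,5), etc.
assert lift_type(4, 1) == (5, 3)     # K_3  = \bar T(3,5)
assert lift_type(3, 1) == (4, 2)     # K_5  = \bar T(2,4)
assert lift_type(5, 3) == (8, 2)     # K_8  = \bar T(2,8)
assert lift_type(7, 5) == (12, 2)    # K_11 = \bar T(2,12)
assert lift_type(5, 1) == (6, 4)     # K_14 = \bar T(4,6)

# Component counts upstairs: T(p, q) has gcd(p, q) components.
print([gcd(*lift_type(a, b)) for (a, b) in [(4, 1), (3, 1), (5, 3), (7, 5), (5, 1)]])
# [1, 2, 2, 2, 2]
```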
\section{Viro's invariant}
\input{vironew.tex}
\section{Projection from a double point
and the resulting diagram}
\input dpoint.tex
\section{Knots and links of degree up to 5}
\input upto5.tex
\input deg6.tex
\input d6g1.tex
\subsection{Links of degree 4 and lower}
Let us apply Lemma \ref{lRR} and Corollary \ref{dle6}
for smooth irreducible algebraic curves $\mathbb R K\subset{\mathbb R}{\mathbb P}^3$ of small degree $d$.
If $d=1,2$ then $g=0$ and $\mathbb R K$ is (algebraically) planar. If $d=3$ and $g=1$ then
$\mathbb R K$ is also planar. If $d=3$ and $g=0$ then
$\mathbb R K$ is hyperboloidal of bidegree $(2,1)$, and thus
it is only topologically planar.
Consider the case of $d=4$.
By Corollary \ref{dle6} any such link sits on a quadric.
If $g=3$ then it is a planar link.
Otherwise $\mathbb R K$ corresponds
to a bidegree $(a,b)$ curve with $a+b=4$ and $(a-1)(b-1)=g$.
Therefore we never encounter $d=4$, $g=2$ curves.
If $g=1$ then $(a,b)=(2,2)$ and thus $\mathbb R K$ must be
topologically planar
by Proposition \ref{plan-hyp}.
Finally, if $g=0$ we have $(a,b)=(3,1)$ and thus $\mathbb R K$
is of hyperboloidal type $h_{3,1}$.
\subsection{Links of degree 5}
By Corollary \ref{dle6} if $d=5$ and $g>1$ then $\mathbb R K$ is hyperboloidal.
If $g=6$ it is planar. The bidegree of a smooth irreducible curve
$\mathbb R K\subset\mathbb R Q\subset{\mathbb R}{\mathbb P}^3$ is either $(3,2)$ or $(4,1)$.
In the first case we have $g=2$ with a topologically
planar
link by Proposition \ref{plan-hyp}.
In the second case we have $g=0$.
Therefore, the cases $d=5$ and $g=3,4$ never appear.
The following result was obtained by Bjorklund \cite{Bj}.
Recall that the topological isotopy type of $\mathbb R K$ is the
equivalence class of
the pair $({\mathbb R}{\mathbb P}^3,\mathbb R K)$ up to homeomorphism.
Note that an orientation-reversing homeomorphism
takes $w(K)$ to $-w(K)$.
Recall that we say that
two real algebraic curves embedded in $\mathbb P^3$
are rigidly isotopic if one can be deformed
into the other in the class of embedded smooth curves.
\begin{thm}[Bjorklund \cite{Bj}]
\label{thm-bj}
There are three distinct topological isotopy types of
$({\mathbb R}{\mathbb P}^3,\mathbb R K)$ for $d=5$, $g=0$
shown in Figure \ref{d5g0}.
\begin{itemize}
\item
The trivial knot $K_1$.
In this case $w=\pm 2$ or $w=0$.
\item
The long trefoil knot $K_2$
(a connected sum of a trefoil and a projective line).
In this case
$w=\pm 4$.
\item
The hyperboloidal knot $K_3$
of type $h_{4,1}=\bar T(6,4)$.
In this case $w=\pm 6$.
\end{itemize}
Furthermore, any two smooth curves of degree 5
and genus 0 in ${\mathbb R}{\mathbb P}^3$ are rigidly isotopic if
and only if they have the same invariant $w$.
\end{thm}
\begin{figure}[h]
\includegraphics[width=100mm]{d5g0.eps}
\caption{Rational quintic knots \label{d5g0}}
\end{figure}
Let us pass to the case when $\mathbb R K\subset{\mathbb R}{\mathbb P}^3$ is
a (non-empty) smooth degree $d=5$, genus $g=1$ curve.
As the number of components of $\mathbb R K$ is not greater
than $g+1$ by Harnack's inequality \cite{Harnack}
we have two possibilities:
either $\mathbb R K$ is connected,
or it contains two connected components.
In the latter case the quotient $ K/\operatorname{conj}$ of $ K$ by the involution
$\operatorname{conj}$ of complex conjugation is an annulus, so $ K\setminus\mathbb R K$
is disconnected, i.e., $\mathbb R K$ must be of type I.
Thus the invariant $w_\lambda$
is well-defined.
In the former case the quotient $ K/\operatorname{conj}$ is a M\"obius band,
so that $\mathbb C K\setminus\mathbb R K$ is connected, i.e. $\mathbb R K$ is of type II, so we are restricted to consideration of $w$.
\ignore{
Recall that a smooth real algebraic curve $\mathbb R K$
is said to be of type I if $ K\setminus\mathbb R K$
is disconnected and to be of type II otherwise.
Type I implies that the number
of components of $\mathbb R K$ has the same parity as $g+1$.
A curve with $g+1$ components (a so-called
{\em M-curve}) is always of type I.
Type I curves admit the so-called {\em complex orientations}.
These are the orientations of $\mathbb R K$ that can be obtained
as the boundary orientation of a component of $ K\setminus
\mathbb R K$ enhanced with the orientation of an open set of
a holomorphic curve. As there are two components of
$ K\setminus \mathbb R K$ there are two choices of complex orientation
of an irreducible curve of type I. The two orientations
can be obtained from each other by simultaneous reversal
of the orientation on all components of $\mathbb R K$.
We refer to \cite{Rokhlin} for details.
In particular, in the case $d=5$, $g=1$, a connected $\mathbb R K$
must be of type II while a disconnected $\mathbb R K$ must be
of type I.
We have the following statement.
}
\begin{thm}
\label{thm-d5g1}
There are three distinct topological isotopy types of $({\mathbb R}{\mathbb P}^3,\mathbb R K)$
for $d=5$, $g=1$, in the case when $\mathbb R K$ is a two-component link, see Figure \ref{d5g1}.
\begin{itemize}
\item
The trivial (planar) link $L_1$.
In this case
$w=\pm 1$, $w_\lambda=\pm 1$.
\item
The link $L_2$
consisting of a line ${\mathbb R}{\mathbb P}^1\subset{\mathbb R}{\mathbb P}^3$
and an unknotted circle around this line.
In this case
$w=\pm 1$, $w_\lambda=\pm 3$.\\
(Figure \ref{d5g1} shows a complex orientation
in the case $w=1$.)
\item
The link $L_3$ consisting of a hyperboloidal
knot of type $h_{3,1}$ and a line ${\mathbb R}{\mathbb P}^1\subset{\mathbb R}{\mathbb P}^3$
disjoint from the hyperboloid containing the other components.
In this case
$w=\pm 3$, $w_\lambda=\pm 5$.
\end{itemize}
If $d=5$, $g=1$ and $\mathbb R K$ is connected
then it is isotopic to
${\mathbb R}{\mathbb P}^1\subset{\mathbb R}{\mathbb P}^3$ (see $K_1$ from Figure \ref{d5g0}).
In this case we have
$w=\pm 1$.
Furthermore, all two-component real algebraic links of degree 5
and genus 1 in ${\mathbb R}{\mathbb P}^3$ with the same value of $w_\lambda$ are rigidly isotopic.
Also all connected real algebraic knots of degree 5
and genus 1 in ${\mathbb R}{\mathbb P}^3$ with the same value of $w$
are rigidly isotopic.
\end{thm}
\begin{figure}[h]
\includegraphics[width=100mm]{d5g1.eps}
\caption{Elliptic quintic two-component links. \label{d5g1}}
\end{figure}
\begin{proof}
The rank $r$ of the linear system defined by
the plane section of $K\subset\mathbb P^3$ is at least
$d-g=4$
by the Riemann-Roch theorem. By Lemma \ref{rkprime}
we may assume that $K$ is obtained by
deformation of a curve $\mathbb R K'$
with a self crossing point $p$.
By Proposition \ref{pirkprime}
the curve
$C=\tilde\pi_p(K')\subset{\mathbb R}{\mathbb P}^2$
in the corresponding nodal diagram is a cubic of genus 1.
Thus $C$ is smooth.
Suppose that $D^-\cap\mathbb R C=\emptyset$.
Note that this determines the equivalence class
of $D$ up to real deformations.
In this case $\mathbb R K'$ is topologically isotopic
to the union of the planar cubic curve
isotopic to $\mathbb R C$
and a solitary node at $p$.
After a deformation the solitary node $p$ disappears.
By Proposition \ref{viro-compute} we
have $w(K)=c=\pm 1$.
In other cases we have $D^-\subset\mathbb R C$.
Then $\mathbb R K'$ is obtained from $\mathbb R C\subset{\mathbb R}{\mathbb P}^2\subset
{\mathbb R}{\mathbb P}^3$
by attaching the two lines connecting $p$ with
the points of $D^-$ and
then perturbing the result with the help of $D^+$.
Note that up to equivalence $D^+$ is determined
by the parity of the number of points in each
connected component of $\mathbb R C\setminus D^-$.
Suppose that
$D^-$ is contained in the homologically
non-trivial component $J$ of $\mathbb R C\subset{\mathbb R}{\mathbb P}^2$.
Since $D$ is linearly equivalent to the hyperplane
section of $\mathbb R C$ we must have an even number of points in
$\mathbb R C\setminus J$ and different parities
in the two arcs of $J\setminus D^-$.
Thus all such choices of $D$ are equivalent.
Once again, $\mathbb R K\subset{\mathbb R}{\mathbb P}^3$ is topologically isotopic
to a curve sitting in a plane and isotopic to $\mathbb R C$.
We have $w(K)=c=\pm 1$.
If $\mathbb R K$ is of type II
(i.e. $\mathbb R C$ is connected) then $J=\mathbb R C$
and there are no other possibilities for $D$.
If $\mathbb R C$ is of type I (i.e. $\mathbb R K$ is a two-component
link) then $w_\lambda(K)$ in both cases considered above
coincides with $w(K)$.
If $\mathbb R K$ is a two-component link we also have
additional cases.
If $D^-$ has points in different components
of $\mathbb R K$ this again determines the class of equivalence
of $D$. Then $\mathbb R K$ is topologically isotopic
either to $L_1$ or $L_2$
depending on the resolution at $p$.
We have $w_\lambda(K)=\pm 2 + c_\lambda$ in these cases
by Proposition \ref{viro-compute}.
If $D^-\subset \mathbb R C\setminus J$
then there are two equivalence classes of $D$.
In one case we have an odd number of points of $D^+$
in all three components of $\mathbb R C\setminus D^-$.
Then $w_\lambda(K)=\pm 4 + c_\lambda$
and the topological type is $L_2$ or $L_3$
accordingly.
In the other case both arcs of $\mathbb R C\setminus (J\cup D^-)$
have an even number of points from $D^+$.
Then $w_\lambda(K)=c_\lambda$
and the topological type is $L_1$.
\ignore{
As $\mathbb R C\subset{\mathbb R}{\mathbb P}^2$ is embedded,
an arbitrary divisor
$ Z$ of degree 5 disjoint from $x''$ and $y''$
yields a nodal diagram, so all the cases above
are realizable.
This completes the topological classification of
the spatial quintic curves of genus 1.
}
To deduce the rigid isotopy classification
it remains to prove that the curves with the same $w_\lambda$
obtained from the different cases
considered above are rigidly isotopic.
\ignore{
Note that any pair of points in $D^+$
sitting on the same connected
component of $\mathbb R C\setminus\{x'',y''\}$
may be deformed to the purely imaginary domain $ C\setminus\mathbb R Z$. To show this we move
a pair of points of $\mathbb R Z$ towards each other.
Linearly equivalent purely imaginary divisors are
deformable to each other within real divisors
of the same equivalence class.
Thus the rigid isotopy type of $\mathbb R K$ is defined
by the distribution of $x''$, $y''$ among the components
of $\mathbb R C$ and the parity of the number of points of
$\mathbb R Z$ on each connected component
of $\mathbb R C\setminus\{x'',y''\}$.
This implies the rigid isotopy classification for
curves of type II as such a distribution.
Namely, for type I curves we
have several possible distributions
of the points of $\mathbb R Z$
with $w$
determined by Proposition \ref{viro-compute}.
As $\mathbb R C\subset{\mathbb R}{\mathbb P}^2$ is a smooth cubic curve
we have $i_{\mathbb R C}(x'')=\pm 1$
and $i_{M}(x'')=\pm \frac 12$
if and only if $x''$ is a point on the oval (homologically trivial component)
of $\mathbb R C$. Otherwise $i_{\mathbb R C}(x'')=i_{M}(x'')=0$.
If $w_\lambda=5$ then both $x''$ and $y''$
are on the oval $O\subset\mathbb R C$. Furthermore, $x''$ and $y''$
must be separated by the odd number of points of $\mathbb R Z$
as $i_{M}(x'')$ and $i_{M}(y'')$ are of the same sign.
Thus all corresponding pairs $( C, D)$ are connected.
}
If $w_\lambda=3$ then there are two options for the distribution
of $D^-$ between the components of $\mathbb R C$.
If $D^-\subset \mathbb R C\setminus J$ and $c=-1$
then
$p$ corresponds to a self-crossing
of the topologically trivial (even) component of $\mathbb R K$.
If $D^-\cap J\neq\emptyset$,
$D^-\cap (\mathbb R C\setminus J)=\emptyset$ and $c=+1$
then $p$ corresponds to the intersection point
between different components of $\mathbb R K$.
If $w_\lambda=1$ then there are three options for the distribution
of $D^-$ between the components of $\mathbb R C$.
They correspond to a self-intersection of an even
component of $\mathbb R K$,
a self-intersection
of an odd component of $\mathbb R K$,
and an intersection point of distinct components
of $\mathbb R K$.
We claim that in the case $|w_\lambda|\le 3$ there exists
a real deformation
of $K$ to an immersed curve $K'$ with
a single crossing point corresponding
to distinct components of $K$.
To see this we consider the projection of $K$
from a generic point $p$ on the even component of $\mathbb R K$.
The image $B\subset\mathbb P^2$ of the projection is a quartic curve
with two odd connected components of the normalization
$\mathbb R K$.
Being odd, these components must intersect at a point $r\in{\mathbb R}{\mathbb P}^2$.
Since the curve $B$
is elliptic, there is a second nodal point of $\mathbb R B$ which must be
either a self-intersection point $s\in{\mathbb R}{\mathbb P}^2$ of a component $P\subset\mathbb R K$
or an elliptic double point $s\in{\mathbb R}{\mathbb P}^2$ (the intersection of
two complex conjugate branches of $ B$).
If $s$ is elliptic then it corresponds to
a pair $P_s\subset\mathbb C K\setminus\mathbb R K$
of complex conjugate points in $\mathbb R K$
while $r$ corresponds to a pair $P_r\subset\mathbb R K$
of points
from different components of $\mathbb R B$.
Choose a plane $\mathbb R H$ passing through $P_s$
and a point $p^1_r\in P_r$
from the odd component of $\mathbb R K$.
The divisor $ H\cap K$ on $K$ is
equivalent to $D^+$ and
consists of $P_s$, $p^1_r$
as well as another pair $P_m$ of points on $K$.
If $P_m$
is contained in the odd component of $\mathbb R K$
then we can deform $P_m$ into $\mathbb C K\setminus\mathbb R K$
not changing the linear equivalence class.
If $P_m\cap\mathbb R K=\emptyset$ then we can further
deform $P_m$ to the even component of $\mathbb R K$
within the same linear equivalence class.
If $P_m$ is contained in the even component of $\mathbb R K$
then we can deform $P_m$ (moving the image
$p_B\in\mathbb R B$ of the projection point $p\in{\mathbb R}{\mathbb P}^3$
if needed) so that $D^+-\{p_B\}$
stays in the same linear equivalence class
while the result of deformation of $D^+$
contains $P_r$. In this case the spatial curve
corresponding to $(C,D)$ by Proposition
\ref{reconstruct-rkprime}
has a double point at $r$.
If $s$ is not elliptic then it corresponds to
a self-intersection of one of the components
of $\mathbb R K$. We choose a plane $\mathbb R H\subset{\mathbb R}{\mathbb P}^3$
passing through a point $p_r\in\mathbb R K$
and so that it separates
the pair $P_s$ in the sense of
Definition \ref{def-sep}.
Here we choose $p_r$ to be on the component
$A\subset\mathbb R K$ containing the pair $P_s$.
Recall that we assume that $|w_\lambda|\le 3$.
By Proposition \ref{viro-compute}
this implies that if $A\cap D^+$ consists of
more than 4 points then two of them bound
an open interval $I\subset A$ disjoint from
$D^+$ and $p_B$. Thus a pair of points of $D^+$
can be pushed to $\mathbb C K\setminus\mathbb R K$ and then
to the other component $A'\subset\mathbb R K$.
Note that $\tilde\pi_p|_{A'}$ is an embedding
since $P_s\subset A$.
Thus we ensure that $D^+\cap A'$
consists of at least two points.
As in the case when $s$ is elliptic we deform
these points (along with $p_B$ if needed) to
ensure a crossing point between two different
components of the spatial curve.
Thus any embedded curve with $d=5$, $g=1$
and $|w_\lambda|\le 3$ is obtained
by perturbing a nodal spatial curve with
a node corresponding to crossing of different
components. Therefore $w_\lambda$ determines the rigid
isotopy type of type I $d=5$, $g=1$ real algebraic
links.
\ignore{
Proposition \ref{cd-rkprime} yields the corresponding
deformation of $\mathbb R K\subset{\mathbb R}{\mathbb P}^3$.
If $s$ is a self-intersection point
then we consider the complement $Y$ of its inverse image $S$ in the normalization of $\mathbb R B$.
The complement $Y$ consists of two arcs and a circle. If no connected
component of $Y$ contains more than one point of $\mathbb R Z$ then $ Z\setminus \mathbb R Z$
is non-empty. In this case any deformation of $\mathbb R Z$ extends to a deformation
of $ Z$ within the same linear equivalence class by moving the points of
$ Z\setminus \mathbb R Z$. Thus we may move two points of $\mathbb R Z$ to $S\subset Y$
in the complement of $S$ and degenerate $\mathbb R K$ as needed.
If two point of $\mathbb R Z$ sit on the same component of $\mathbb R Z$ and then we may
move them towards each other within the same linear equivalence class of the divisor
and deform to the imaginary domain unless they are separated by $p$.
However by our assumption that $ Z$ is cut by a hyperplane determining
a 3D-lift the only arc of $Y$ with odd number of points from $\{p\}\cup\mathbb R Y$ is
the arc on $P\subset\mathbb R B$ corresponding to a null-homologous loop in ${\mathbb R}{\mathbb P}^2$.
If two points of $\mathbb R Z$ are separated by $p$ on this arc then
$w_\lambda=\pm 3$ by Proposition \ref{viro-compute} since
the sign of the self-crossing at $s$ is the same as that of $i_M(s)$.
But this is not possible
by our assumption $|w_\lambda|\le 3$.
If $s$ is elliptic then
any of two circles and we may deform
$ Z$ so that $ Z\setminus\mathbb R Z\neq\emptyset$ similarly.
\ignore{
Consider first the case when $\mathbb R B$ has an elliptic point.
Then we may deform $ Z$ so that $ Z\setminus \mathbb R Z\neq\emptyset$
consists of a single point.
To achieve this we move any pair of
and ensure that both points corresponding
to the normalization of $\mathbb R B$ are
In the former case there is a loop on $P$ (homologous to zero in ${\mathbb R}{\mathbb P}^2$)
cut by the self-intersection point.
All possible enhancements
on a given curve $\mathbb R C$ of type II
are connected by Lemma \ref{tIIdiv}.
As the space of smooth nonempty planar cubic curves of
a given type is connected this finishes the rigid
isotopy classification in the case of type II.
Consider the case when $\mathbb R C$ is of type I
and $x'',y''$ are on the same connected component of $\mathbb R C$.
Lemma \ref{tIdiv} implies that
in this case the space of real divisors $ Z$
in our linear equivalence class is connected.
If $x''$ and $y''$ are on different components of
$\mathbb R C$
}
}
\end{proof}
\section{Introduction}
\label{sec:intro} We study multichannel deconvolution with errors following independent fractional Brownian motions (fBms). More specifically, consider the problem of recovering $f(\cdot) \in L^2(T)$, $T=[0,1]$, on the basis of observing the following noisy convolutions, with known blurring functions $g_\ell(\cdot)$,
\begin{equation}
dY_{\ell}(t) = K_\ell f(t)dt + \frac{\sigma_\ell }{n^{\alpha_\ell/2}} dB_{H_\ell}
(t),\ \ \ t \in T, \quad \ell=1,2,\ldots,M, \label{eq:multchan-lm}
\end{equation}
where $\sigma_\ell$ are known positive constants and the convolution operators $K_\ell$ are defined as
\begin{equation}
K_\ell f(t):=f*g_\ell(t) = \int_T g_\ell(t-x)f(x) dx,\ \ \ t \in T, \quad
\ell=1,2,\ldots,M. \label{eq:conv2}
\end{equation}
Here, $B_{H_\ell}(\cdot)$ are independent standard fBms with {\em Hurst} parameters $H_\ell=1-\alpha_\ell/2 \in [1/2,1)$, $\ell=1,2,\ldots,M$; that is, for each $\ell=1,2,\ldots,M$; $B_{H_\ell}(\cdot)$ is a Gaussian process with zero mean and covariance function
\[
\mathbb{E} (B_{H_\ell}(s)B_{H_\ell}(t)) = \frac{1}{2} \big(|s|^{2H_\ell}+|t|^{2H_\ell}-|t-s|^{2H_\ell}\big), \quad
s,t \in T, \quad \ell=1,2,\ldots,M.
\]
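For intuition, the covariance above can be used to simulate fBm sample paths directly. The following is a minimal sketch (function names are ours, not from the paper), assuming a grid of strictly positive time points so the covariance matrix is nonsingular; a tiny jitter keeps the Cholesky factorization stable:

```python
import numpy as np

def fbm_cov(tgrid, H):
    # E[B_H(s) B_H(t)] = (|s|^{2H} + |t|^{2H} - |t - s|^{2H}) / 2
    s, t = tgrid[:, None], tgrid[None, :]
    return 0.5 * (np.abs(s) ** (2 * H) + np.abs(t) ** (2 * H)
                  - np.abs(t - s) ** (2 * H))

def sample_fbm(tgrid, H, rng):
    # Draw one fBm path on the grid via a Cholesky factor of its covariance.
    C = fbm_cov(tgrid, H)
    L = np.linalg.cholesky(C + 1e-12 * np.eye(len(tgrid)))
    return L @ rng.standard_normal(len(tgrid))
```

As a sanity check, $H=1/2$ recovers standard Brownian motion, whose covariance is $\min(s,t)$.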
The case where $M=1$ corresponds to the fractional Gaussian noise model that can also be viewed as an approximation to the nonparametric regression model with long-range dependence (LRD) (cf. \cite{Wang-1996,Wang-1997}). On the other hand, the case $H_\ell=1/2$, $\ell=1,\ldots,M$; becomes the {\it multichannel} deconvolution with independent standard Brownian motion errors. This model has received attention in studies by \cite{DeCanditiis-Pensky-2006,Pensky-Sapatinas-2009,Pensky-Sapatinas-2010} and \cite{Pensky-Sapatinas-2011}.
We consider the following scenarios for the convolution operators $K_\ell$, $\ell=1,2,\ldots,M$; given by \eqref{eq:conv2} in the Fourier domain where $\widetilde f(m) \coloneqq \int_\mathbb{R} e^{-2\pi i m x} f(x) \, dx$.
\begin{enumerate}
\item {\em Smooth} convolutions such that, in the Fourier domain,
\begin{equation}
\label{eq:K.smooth.M}
|\widetilde{K_\ell f}(m)| \asymp\,
|m|^{-\nu_\ell} \exp{ \left\{ - \theta_\ell |m|^{\beta_\ell}\right\} }\,|\widetilde{f}(m)|,
\end{equation}
where $m \in \mathbb{R}$, $\ell=1,2,\ldots,M;$ $\beta_\ell > 0$ and $\theta_\ell \ge 0$. In particular, $\nu_\ell \in \mathbb{R}$ if $\theta_\ell > 0$ and $\nu_\ell > 0$ if $\theta_\ell = 0$. The key parameter is $\theta_\ell$, controlling the severity of the decay. The so-called super-smooth deconvolution or exponential decay occurs when $\theta_\ell > 0$ and the regular-smooth or polynomial case occurs when $\theta_\ell = 0$. In the regular-smooth case, each $\nu_\ell >0$ corresponds to the so-called {\em degree of ill-posedness} (DIP) index with $\nu_\ell=0$ representing the {\em direct} (or {\em well-posed}) case.
\item {\em Box-car} convolutions such that, in the Fourier domain,
\begin{equation}
\label{eq:K.box.M} |\widetilde{K_\ell f}(m)|=\left|\frac{\sin (\pi m c_\ell)}{\pi m c_\ell}\right|\,|\widetilde{f}(m)|, \quad m \in \mathbb{R}, \quad
\ell=1,2,\ldots,M;
\end{equation}
where $c_\ell >0$ for each $\ell=1,2,\ldots,M$.
\end{enumerate}
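The two Fourier-domain behaviours above can be sketched as multipliers. A hedged illustration follows (function names are ours; note that NumPy's normalized \texttt{sinc} is exactly $\sin(\pi x)/(\pi x)$, matching the box-car case):

```python
import numpy as np

def smooth_multiplier(m, nu, theta, beta):
    # |m|^{-nu} * exp(-theta |m|^beta): theta > 0 is the super-smooth
    # (exponential-decay) case; theta = 0 is regular-smooth with DIP index nu.
    m = np.abs(np.asarray(m, dtype=float))
    return m ** (-nu) * np.exp(-theta * m ** beta)

def boxcar_multiplier(m, c):
    # sin(pi m c) / (pi m c), the Fourier multiplier of a box-car convolution.
    return np.sinc(np.asarray(m, dtype=float) * c)
```

The super-smooth multiplier decays much faster than the regular-smooth one at the same frequency, which is what makes that inverse problem severely ill-posed.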
Deconvolution is a common problem in many areas of signal and image processing which include, for instance, light detection and ranging (LIDAR) remote sensing and reconstruction of blurred images. LIDAR is a laser device which emits pulses, reflections of which are gathered by a telescope aligned with the laser. The return signal is used to determine the distance and the position of the reflecting material. However, if the system response function of the LIDAR is longer than the time resolution interval, then the measured LIDAR signal is blurred and the effective accuracy of the LIDAR decreases. This loss of precision can be corrected by deconvolution. In practice, measured LIDAR signals are corrupted by additional noise which renders direct deconvolution impossible. Moreover, if $M\geq2$ (finite) LIDAR devices are used to recover a signal, then we talk about a {\em multichannel} deconvolution problem. The case where $M \geq 2$ in \eqref{eq:multchan-lm}--\eqref{eq:conv2} and $H_\ell=1/2$, $\ell=1,\ldots,M$; i.e., the problem of considering systems of convolution equations with independent errors, was first considered by \cite{Casey-Walnut-1994} in order to evade the ill-posedness of the standard deconvolution model.
In the standard Brownian motion error case, a statistical use of the above idea was investigated by \cite{DeCanditiis-Pensky-2004:JRSSB-Dis,DeCanditiis-Pensky-2006} who proposed adaptive wavelet thresholding estimators. In particular, if $K_\ell$ are regular-smooth convolutions, they showed that an {\em adaptive} wavelet thresholding estimator based on the output from the $M$ channels ``picks'' the convergence rate according to ``the best'' operator $K_\ell$, i.e., the one with the smallest $\nu_\ell$, $\ell=1,2,\ldots,M$. Consequently, adding more channels does not improve the convergence rate of the suggested estimator. On the other hand, if $K_\ell$, $\ell=1,2,\ldots,M$; are box-car convolutions, they showed that adding new channels improves the convergence rate. To be more specific, \cite{DeCanditiis-Pensky-2006} showed, in particular, that the true signal $f(\cdot)$ can be recovered with accuracy (within a logarithmic factor),
\[
n^{-2s/(2s+2\nu+1)} \qquad \text{and} \qquad n^{-2s/(2s+(2M+1)/M+1)},
\]
in the regular-smooth and box-car convolutions, respectively. Here, $s>0$ is the smoothness of the underlying signal, $\nu=\min\{\nu_1,\ldots,\nu_M\}$ and the {\em accuracy of estimation} is measured with respect to an upper bound on the $L_2$-risk. In \cite{DeCanditiis-Pensky-2006} the authors did not consider the super-smooth convolutions.
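The two rate exponents above are easy to tabulate. The following sketch (function names are ours) also confirms numerically that, in the box-car case, the exponent improves as channels are added:

```python
def regular_smooth_exponent(s, nu):
    # Exponent in the rate n^{-2s/(2s + 2*nu + 1)}.
    return 2 * s / (2 * s + 2 * nu + 1)

def boxcar_exponent(s, M):
    # Exponent in the rate n^{-2s/(2s + (2M+1)/M + 1)}; increases with M.
    return 2 * s / (2 * s + (2 * M + 1) / M + 1)
```

For example, with $s=1$ the box-car exponent grows from $1/3$ at $M=1$ towards $2s/(2s+3)=2/5$ as $M\to\infty$.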
However, real data do not always meet the independence assumption, and scientists in diverse fields have observed empirically that correlations between observations that are far apart decay to zero at a slower rate than one would expect from independent data (or, more generally, from short-range dependent data). These fields include astronomy, agronomy, economics, chemistry, etc. (see, e.g., \cite{Beran-et-al-2013}).
Therefore, our aim is to study the multichannel deconvolution with errors following fBms. In fact, we show that the situation in this case is much more involved than in the case where the errors follow standard Brownian motions. In particular, we show that in multichannel deconvolution with errors following fBms, the true signal $f(\cdot)$ can be recovered with respect to an upper bound on the $L^p$-risk ($1 \le p < \infty$) with accuracy,
\[
n^{-s\alpha_{\ell_*}p/(2s+2\nu_{*}+1)}, \qquad (\log n)^{-ps^*/\beta_{\ell_*}} \qquad \text{and} \qquad n^{-s\alpha_*p/(2s+2\widetilde \nu_{*}+1)}
\]
for regular-smooth, super-smooth and box-car deconvolutions, respectively (the rates in the regular-smooth and box-car scenarios hold within a logarithmic factor). In the case of smooth (both regular-smooth and super-smooth) convolutions, the optimal channel is
\begin{equation}
\label{eq:optimal.smooth.channel}\ell_* \coloneqq \argmin_{1 \le \ell \le M} n^{-\alpha_\ell}2^{(\alpha_\ell + 2\nu_\ell)}e^{2\theta_\ell 2^{\beta_\ell}},
\end{equation}
and, for the regular-smooth case, $\nu_{*}$ is defined as
\begin{equation}
\label{eq:smooth.optimal.nu}
\nu_*\coloneqq \nu_{\ell_*}+\frac{\alpha_{\ell_*}}{2}-\frac{1}{2}.
\end{equation}
For the case of box-car convolutions the parameters are defined by
\begin{gather}
\label{eq:box.car.alpha} \alpha_*\coloneqq \min\{\alpha_1,\ldots,\alpha_M\},\quad \text{and}\quad \alpha^* \coloneqq \max\{\alpha_1,\ldots,\alpha_M\},\\
\label{eq:box.car.optimal.nu}\widetilde \nu_*\coloneqq \frac{2M+1}{2M}+\frac{\alpha^*}{2}-\frac{1}{2}.
\end{gather}
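Taking the displayed criterion for $\ell_*$ at face value, channel selection can be sketched as follows (function names are ours; we compare logarithms of the criterion for numerical stability):

```python
import numpy as np

def optimal_channel(n, alpha, nu, theta, beta):
    # argmin over channels of n^{-alpha_l} * 2^{alpha_l + 2 nu_l}
    # * exp(2 theta_l 2^{beta_l}); returns a 0-based channel index.
    alpha, nu, theta, beta = map(np.asarray, (alpha, nu, theta, beta))
    log_crit = (-alpha * np.log(n)
                + (alpha + 2 * nu) * np.log(2.0)
                + 2 * theta * 2.0 ** beta)
    return int(np.argmin(log_crit))
```

With equal $\alpha_\ell$ and $\theta_\ell=0$, this picks the channel with the smallest DIP index $\nu_\ell$, while a super-smooth channel (large $\theta_\ell$) is heavily penalized.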
Consequently, the conclusions of \cite{DeCanditiis-Pensky-2006} are
no longer valid here. Even in case of $M=2$, there are different
possibilities for the {\it best scenario}, depending on a
complicated relationship between $s$, $M$, $\nu_\ell$, $\alpha_\ell$, $\theta_\ell$ and $\beta_\ell$, as we illustrate in Section \ref{sec:4}.
\subsection{Modification of the {\tt WaveD} method}
Along with theoretical results, a comparison with the existing {\tt WaveD} method is presented to examine the effect of LRD and multiple channels. Let us compare our modification of the {\tt WaveD} to the standard {\tt R}-package {\tt WaveD} of \cite{Raimondo-Stewart-2007}. In particular, the four signals, {\tt LIDAR}, {\tt Doppler}, {\tt Bumps} and {\tt Blocks} are used as candidate signals in estimation.
For mild levels of LRD ($1/2 < \alpha < 1$) there is not much difference between the two approaches. However, an improvement is visible for stronger dependence ($0 < \alpha < 1/2$), as illustrated in \autoref{fig:1} and \autoref{fig:2}. For the parameters $\alpha=0.5$, $\nu=0.5$ and $M = 2$, a signal is reconstructed in the third row using the proposed multichannel method, while the fourth row shows the standard {\tt WaveD} approach using the best channel.
Clearly, the standard {\tt WaveD} approach does not remove the artificial noise due to LRD (cf. \autoref{fig:1}). We modify the {\tt WaveD} approach and achieve more reliable estimation by appropriately modifying the tuning parameters and by truncating the wavelet expansion at an appropriate lower scale level. This truncation is particularly important when there is severe LRD, but it does not universally yield better estimates (cf. \autoref{fig:2}); it is discussed in more depth in the numerical section later.
\begin{figure}
\centering
\begin{minipage}{\columnwidth}
\subfloat[Doppler signal]{\label{Figure1a}
\includegraphics[height=0.24\textwidth]{fig1a.pdf}
}
\subfloat[LIDAR signal]{\label{Figure1b}
\includegraphics[height=0.24\textwidth]{fig1b.pdf}
}
\end{minipage}
\begin{minipage}{\columnwidth}
\subfloat[Doppler blurred and noisy]{\label{Figure1c}
\includegraphics[height=0.24\textwidth]{fig1c.pdf}
}
\subfloat[LIDAR blurred and noisy]{\label{Figure1d}
\includegraphics[height=0.24\textwidth]{fig1d.pdf}
}
\end{minipage}
\begin{minipage}{\columnwidth}
\subfloat[Doppler reconstruction]{\label{Figure1e}
\includegraphics[height=0.24\textwidth]{fig1e.pdf}
}
\subfloat[LIDAR reconstruction]{\label{Figure1f}
\includegraphics[height=0.24\textwidth]{fig1f.pdf}
}
\end{minipage}
\begin{minipage}{\columnwidth}
\subfloat[Doppler {\tt WaveD} reconstruction]{\label{Figure1g}
\includegraphics[height=0.24\textwidth]{fig1g.pdf}
}
\subfloat[LIDAR {\tt WaveD} reconstruction]{\label{Figure1h}
\includegraphics[height=0.24\textwidth]{fig1h.pdf}
}
\end{minipage}
\caption{\small Top row: original Doppler and LIDAR signal; 2nd row: corresponding blurred and noisy signals, $\nu=0.5$, $\alpha=0.5$ (black line: first channel; grey line: second
channel); 3rd row: reconstructed signal using the proposed method with $M = 2$ channels; 4th row: reconstructed signal using the standard R-package {\tt WaveD} using the best channel
(see \eqref{eq:ell.star.estimator} for the notion of `best channel').}
\label{fig:1}
\end{figure}
\begin{figure}
\centering
\begin{minipage}{\columnwidth}
\subfloat[Bumps signal]{\label{Figure2a}
\includegraphics[height=0.24\textwidth]{fig2a.pdf}
}
\subfloat[Blocks signal]{\label{Figure2b}
\includegraphics[height=0.24\textwidth]{fig2b.pdf}
}
\end{minipage}
\begin{minipage}{\columnwidth}
\subfloat[Bumps blurred and noisy]{\label{Figure2c}
\includegraphics[height=0.24\textwidth]{fig2c.pdf}
}
\subfloat[Blocks blurred and noisy]{\label{Figure2d}
\includegraphics[height=0.24\textwidth]{fig2d.pdf}
}
\end{minipage}
\begin{minipage}{\columnwidth}
\subfloat[Bumps reconstruction]{\label{Figure2e}
\includegraphics[height=0.24\textwidth]{fig2e.pdf}
}
\subfloat[Blocks reconstruction]{\label{Figure2f}
\includegraphics[height=0.24\textwidth]{fig2f.pdf}
}
\end{minipage}
\begin{minipage}{\columnwidth}
\subfloat[Bumps {\tt WaveD} reconstruction]{\label{Figure2g}
\includegraphics[height=0.24\textwidth]{fig2g.pdf}
}
\subfloat[Blocks {\tt WaveD} reconstruction]{\label{Figure2h}
\includegraphics[height=0.24\textwidth]{fig2h.pdf}
}
\end{minipage}
\caption{\small Top row: original Bumps and Blocks signal; 2nd row: corresponding blurred and noisy signals, $\nu=0.5$, $\alpha=0.5$ (black line: first channel; grey line: second channel); 3rd row: reconstructed signal using the proposed method with $M = 2$ channels; 4th row: reconstructed signal using the standard R-package {\tt WaveD} using the best channel (see \eqref{eq:ell.star.estimator} for the notion of `best channel').}
\label{fig:2}
\end{figure}
\subsection{Related works}
The case where $M=1$ and $H_1=1/2$ in \eqref{eq:multchan-lm}--\eqref{eq:conv2}
refers to the so-called {\em standard deconvolution model} which attracted the attention of a number of researchers.
(Note that the standard deconvolution model is typically {\em ill-posed} in the sense of Hadamard: the inversion does not depend continuously on the observed data, i.e., small noise in the convolved signal leads to a significant error in the estimation procedure.) After a rather rapid progress in this problem in late eighties--early nineties, authors turned to adaptive wavelet solutions of the problem that are optimal (in the {\em minimax} or the {\em maxiset} sense), or near-optimal within a logarithmic factor, in a wide range of Besov balls and for a variety of loss functions defining the risk, and under mild conditions on the blurring function (see, e.g., \cite{Donoho-1995,Abramovich-Silverman-1998,Kalifa-Mallat-2003,Johnstone-et-al-2004,Donoho-Raimondo-2004, Johnstone-Raimondo-2004, Neelamani-et-al-2004,Kerkyacharian-et-al-2007}).
The case $M=1$ and $H_\ell>1/2$ (i.e., standard deconvolution with LRD errors) has been investigated
in \cite{Wang-1996,Wang-1997,Kulik-Raimondo-2009} and \cite{Wishart-2013}.
The case where $\alpha_\ell=1$ for each $\ell=1,2,\ldots,M$; (i.e., the case where in the multichannel deconvolution model \eqref{eq:multchan-lm} the errors follow independent standard Brownian motions) was first considered in \cite{DeCanditiis-Pensky-2006} (extending the results obtained in \cite{Johnstone-et-al-2004} for the case $M=1$).
The case of the multichannel deconvolution with errors following LRD sequences was investigated in \cite{benhaddou:etal} using the minimax approach, extending results obtained in \cite{Pensky-Sapatinas-2009,Pensky-Sapatinas-2010} and \cite{Pensky-Sapatinas-2011}.
The case of nonparametric density estimation for the errors-in-variables problem with LRD has been studied by \cite{Kulik-2008}. In particular, it was shown that LRD has no impact on the optimal convergence properties in the super-smooth scenario. We show similar results for the multichannel deconvolution model presented here.
Finally, for more information regarding the LIDAR device, the reader is referred to, e.g., \cite{Park-et-al-1997} and \cite{Harsdorf-Reuter-2000}.
\subsection{Structure of the paper}
The paper is organised as follows. \autoref{sec:2} contains some preliminaries on the periodised Meyer wavelets and Besov spaces on the unit interval $T$. \autoref{sec:3} provides the construction of the proposed adaptive wavelet thresholding estimators while \autoref{sec:4} contains the corresponding upper bound results over a wide range of Besov spaces and for a variety of loss functions defining the risk, for regular-smooth, super-smooth and box-car convolutions. An extensive simulation study to supplement the theoretical findings of \autoref{sec:4} is performed in \autoref{sec:5}. Conclusions and discussion are given in \autoref{sec:6}, and the proofs of the theoretical results and auxiliary results are given in \autoref{sec:7} and \hyperref[sec:appendix]{Appendix A}.
\section{Preliminaries}
\label{sec:2}
\subsection{Periodised Meyer wavelets and Besov spaces on the unit interval}
\label{sec:besov}
To avoid edge problems and unnecessary technicalities arising in defining a wavelet basis on the unit interval $T$, we will assume that $f(\cdot)$ and $g_\ell(\cdot)$, $\ell=1,2,\ldots,M$; are periodic on $T$. Moreover, not only for theoretical reasons but also for practical convenience (see, e.g., \cite{Johnstone-et-al-2004}, Sections 2.3, 3.1--3.2), we use a band-limited wavelet basis, in particular the periodised Meyer wavelet basis, for which fast algorithms exist (see, e.g., \cite{Kolaczyk-1994} and \cite{Donoho-Raimondo-2004}). Specifically, let $\phi(\cdot)$ and $\psi(\cdot)$ be the Meyer scaling and mother wavelet functions, respectively, on the real line $\mathbb{R}=(-\infty,\infty)$ (see, e.g., \cite{Meyer-1992} or \cite{Mallat-1999}). As usual,
\[
\phi_{j,k}(t) = 2^{j/2}\phi(2^jt-k), \quad \psi_{j,k}(t) =
2^{j/2}\psi(2^jt-k), \quad j\geq 0,\;\;k \in \mathbb{Z}, \quad t \in \mathbb{R},
\]
are, respectively, the dilated and translated Meyer scaling and wavelet functions at resolution level $j$ and scale position $k/2^j$. Similarly to Section 2.3 in \cite{Johnstone-et-al-2004}, we obtain a periodised version of Meyer wavelet basis by periodising the basis functions $\{\phi(\cdot),\psi(\cdot)\}$ on $\mathbb{R}$, i.e., for $j \geq 0$ and $k=0,1,\ldots,2^{j}-1$,
\[
\Phi_{j,k}(t) = \sum_{i \in \mathbb{Z}} 2^{j/2} \phi(2^j (t +i) - k),
\quad \Psi_{j,k}(t) = \sum_{i \in \mathbb{Z}} 2^{j/2} \psi(2^j (t +i) -
k), \quad t \in T.
\]
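The periodisation above can be sketched numerically. The following Python fragment is illustrative only: it truncates the sum over $i$ and, since the Meyer scaling function has no closed form in the time domain, substitutes a rapidly decaying Gaussian bump for $\phi(\cdot)$; the periodicity $\Phi_{j,k}(t+1)=\Phi_{j,k}(t)$ is then immediate.

```python
import numpy as np

def periodise(phi, j, k, t, n_wrap=20):
    """Truncated periodisation Phi_{j,k}(t) = sum_i 2^{j/2} phi(2^j (t + i) - k),
    with the sum over i restricted to |i| <= n_wrap."""
    total = np.zeros_like(t, dtype=float)
    for i in range(-n_wrap, n_wrap + 1):
        total += 2 ** (j / 2) * phi(2 ** j * (t + i) - k)
    return total

# Surrogate for the scaling function: the Meyer scaling function has no closed
# form in the time domain, so a rapidly decaying bump stands in for phi here.
phi = lambda x: np.exp(-x ** 2)

t = np.linspace(0.0, 1.0, 101)
P = periodise(phi, j=2, k=1, t=t)
P_shift = periodise(phi, j=2, k=1, t=t + 1.0)   # periodicity: Phi(t + 1) = Phi(t)
```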
In the periodic setting, we recall that Besov spaces are characterised by the behaviour of the wavelet coefficients (see, e.g., \cite{Johnstone-et-al-2004}, Section 2.4), i.e.,
\begin{definition}
\label{def:SBspace} For $f(\cdot) \in L^{\pi_0}(T)$, $1\leq \pi_0 < \infty$,
\begin{equation}
f(\cdot) \in \Besov{\pi_0}{r}{s}(T)\; \Longleftrightarrow
\sum_{j=0}^{\infty}2^{j(s+1/2-1/\pi_0)r}\bigg[\sum_{k=0}^{
2^j-1}|b_{j,k}|^{\pi_0}\bigg]^{r/\pi_0}<\infty, \label{eq:fBesov}
\end{equation}
with the usual modification if $\pi_0=\infty$ and/or $r=\infty$.
\end{definition}
As usual, the wavelet coefficients $b_{j,k}$ are obtained by $b_{j,k}=\int_T f(t)\Psi_{j,k}(t)dt$. The parameter $s >0$ can be thought of as related to the number of derivatives of $f(\cdot)$. With different values of $\pi_0$ ($1 \leq \pi_0 \leq \infty$) and $r$ ($1 \leq r \leq \infty$), the Besov spaces $\Besov{\pi_0}{r}{s}(T)$ capture a variety of smoothness features in a function including spatially inhomogeneous behaviour.
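The membership criterion \eqref{eq:fBesov} can be probed numerically: for toy coefficients decaying like $|b_{j,k}| \asymp 2^{-j(s_0+1/2)}$, the truncated sum stays bounded for $s<s_0$ and blows up for $s>s_0$. A minimal Python sketch (the decay profile and truncation level are illustrative choices, not part of the paper's framework):

```python
import numpy as np

def besov_seq_norm(b, s, pi0, r):
    """Truncated Besov sequence norm of the definition:
    sum_j 2^{j (s + 1/2 - 1/pi0) r} ( sum_k |b_{j,k}|^{pi0} )^{r/pi0},
    where b[j] holds the 2^j coefficients at level j."""
    total = 0.0
    for j, bj in enumerate(b):
        level = np.sum(np.abs(bj) ** pi0) ** (r / pi0)
        total += 2 ** (j * (s + 0.5 - 1.0 / pi0) * r) * level
    return total

# Coefficients of a toy function of smoothness s0 = 1: |b_{j,k}| ~ 2^{-j (s0 + 1/2)}.
s0 = 1.0
b = [np.full(2 ** j, 2 ** (-j * (s0 + 0.5))) for j in range(16)]

small = besov_seq_norm(b, s=0.5, pi0=2, r=2)   # s < s0: sum stays bounded
large = besov_seq_norm(b, s=1.5, pi0=2, r=2)   # s > s0: sum grows geometrically
```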
In the sequel, $\kappa$ will denote the multiple index $(j,k)$ and, adopting standard convention, $\Phi(\cdot) = \Psi_{-1}(\cdot)$, where $\Phi(\cdot)$ corresponds to the periodised scaling function associated with the Meyer wavelet basis mentioned above.
\section{Construction of the adaptive wavelet thresholding and linear estimators} \label{sec:3}
The estimation of $f$ is approached differently for the different deconvolution types. Namely, for regular-smooth and box-car convolutions a non-linear wavelet (hard thresholding) estimator is used, while for super-smooth convolutions a linear wavelet (projection) estimator is used.
To simplify the overall problem, the estimation procedure is considered in the Fourier domain, which reduces the convolution operator to a product of Fourier coefficients. Denote the Fourier basis functions by $ e_m(t) \coloneqq e^{2\pi i m t}$, $m \in \mathbb{Z}$, with the corresponding inner product $\innerp{f_1}{f_2} = \int f_1 (x) \overline{f_2}(x) \, dx$, where $\overline{f}$ denotes the complex conjugate of $f$. Let $h_\ell=f*g_{\ell}$. Denote the relevant Fourier coefficients,
\begin{align*}
\Phi_{m j_0 k} & = \innerp{\Phi_{j_0,k}}{e_m}, \quad \Psi_m^{\kappa} = \Psi_{m j k} = \innerp{\Psi_{j,k}}{e_m},
\end{align*}
\begin{align}
h_{m,\ell} &= \innerp{h_\ell}{e_m}, \quad y_{m,\ell} = \int_\mathbb{R} \overline{e_{m}}(t) dY_\ell(t),\quad z_{m,\ell} = \int_\mathbb{R} \overline{e_{m}}(t) dB_{H_\ell}(t), \label{eq:Fourier-defs}\\
f_m & = \innerp{f}{e_m}, \quad g_{m,\ell} = \innerp{g_\ell}{e_m}, \quad \ell = 1,2,\ldots, M.\nonumber
\end{align}
Applying the Fourier transform to \eqref{eq:multchan-lm}, we get the following sequence space model
\begin{align}
\label{eq:yl} y_{m,\ell} & = h_{m,\ell} + \frac{\sigma_\ell}{n^{\alpha_\ell/2}}\; z_{m,\ell},\quad m \in \mathbb{Z},\quad \ell=1,2,\ldots,M;\\
\label{eq:hml} h_{m,\ell} & = g_{m,\ell}f_m,\quad m \in \mathbb{Z},\quad \ell=1,2,\ldots,M;
\end{align}
where, for each $\ell$, $\sigma_\ell$ are known positive constants and the structure of the Fourier coefficients, $g_{m,\ell}f_m = \widetilde{K_\ell f}(m)$, is given by \eqref{eq:K.smooth.M} and \eqref{eq:K.box.M} for the smooth-type and box-car convolutions, respectively. Following a similar procedure to \cite{DeCanditiis-Pensky-2006}, the coefficients $h_{m,\ell}$ are multiplied by weights $\gamma_{m,\ell} \overline{g_{m,\ell}}$ (with $\gamma_{m,\ell}$ to be specified later) and summed over the channels. Thus \eqref{eq:hml} leads to the following expression for the target function coefficients,
\[
f_m=\frac{\sum_{\ell=1}^M \gamma_{m,\ell} \overline{g_{m,\ell}}h_{m,\ell}}{\sum_{\ell=1}^M \gamma_{m,\ell}|g_{m,\ell}|^2}, \quad m \in \mathbb{Z}.
\]
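In the noiseless case $h_{m,\ell}=g_{m,\ell}f_m$, the weights cancel exactly, so the displayed ratio recovers $f_m$ for {\em any} choice of positive weights $\gamma_{m,\ell}$; the role of the weights is only to balance the noise across channels. A quick numerical check of this identity (with toy random coefficients):

```python
import numpy as np

rng = np.random.default_rng(0)
M, n_freq = 3, 16                                    # channels, Fourier modes

f_m = rng.normal(size=n_freq) + 1j * rng.normal(size=n_freq)          # target f_m
g = rng.normal(size=(M, n_freq)) + 1j * rng.normal(size=(M, n_freq))  # kernels g_{m,l}
gamma = rng.uniform(0.5, 2.0, size=(M, n_freq))      # arbitrary positive weights

h = g * f_m                                          # noiseless h_{m,l} = g_{m,l} f_m

# Weighted channel combination: without noise the weights cancel and the
# ratio recovers f_m exactly, whatever positive gamma is chosen.
f_rec = np.sum(gamma * np.conj(g) * h, axis=0) / np.sum(gamma * np.abs(g) ** 2, axis=0)
```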
Furthermore using the Parseval identity one can obtain the wavelet coefficients,
\[
b_{\kappa}=\int_T f(t)\Psi_{\kappa}(t)\,dt=\sum_{m \in \mathbb{Z}}f_{m}\overline{\Psi_{m}^{\kappa}}=\sum_{m \in \mathbb{Z}}\frac{\sum_{\ell=1}^M \gamma_{m,\ell}\overline{g_{m,\ell}}h_{m,\ell}}{\sum_{\ell=1}^M \gamma_{m,\ell} |g_{m,\ell}|^2}\overline{\Psi_{m}^{\kappa}}
\]
which can be estimated using \eqref{eq:yl} with
\begin{equation}
\label{eq:wav.coeff.estim} \widehat b_{\kappa}=\sum_{m \in C_j}\frac{\sum_{\ell=1}^M \gamma_{m,\ell}\overline{g_{m,\ell}}y_{m,\ell}}{\sum_{\ell=1}^M \gamma_{m,\ell} |g_{m,\ell}|^2}\overline{\Psi_{m}^{\kappa}},
\end{equation}
where $C_j$ denotes the Fourier-domain support of the periodised Meyer wavelet at level $j$,
\begin{equation}
C_j = \left\{ a \in \mathbb{Z} : \pm a \in \left\{ \left\lceil \frac{2^j}{3} \right\rceil, \left\lceil \frac{2^j}{3} \right\rceil + 1, \ldots, \left\lfloor \frac{2^{j+2}}{3} \right\rfloor \right\} \right\},\label{eq:Cj}
\end{equation}
where $j \geq 0$. The scaling coefficients $a_{\kappa} = \int_T f(t)\Phi_\kappa(t)\, dt$ and their estimates $\widehat a_{\kappa}$ are defined in a similar manner.
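The set $C_j$ is straightforward to enumerate; a small Python helper (illustrative) shows, e.g., that level $j=3$ occupies the frequencies $3 \le |m| \le 10$, and that consecutive levels overlap, reflecting the overlapping Fourier supports of the Meyer wavelets.

```python
import math

def C_j(j):
    """Fourier support of the periodised Meyer wavelet at level j >= 0."""
    lo = math.ceil(2 ** j / 3)
    hi = (2 ** (j + 2)) // 3          # floor(2^{j+2} / 3)
    return sorted(set(range(lo, hi + 1)) | {-a for a in range(lo, hi + 1)})

# Level j = 3: frequencies |m| in {3, ..., 10} and their mirrors.
support = C_j(3)
```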
\noindent{\sl Estimators:} A non-linear estimator $\widehat f_n(\cdot)$ of $f(\cdot)$ based on hard thresholding of a wavelet expansion is as follows:
\begin{equation}
\label{eq:generic} \widehat f_n(t)= \sum_{k = 0}^{2^{j_0}-1}\widehat a_{j_0,k} \Phi_{j_0,k}(t) + \sum_{\kappa \in \Lambda}
\,\widehat b_{\kappa}\,
\1{\{|\widehat b_{\kappa}|\geq \lambda\}}\Psi_{\kappa}(t), \quad t \in T,
\end{equation}
where $\1{A}$ denotes the indicator function of the set $A$; the index range $\Lambda=\Lambda_n$, the coarse scale level $j_0$ and the threshold parameter $\lambda=\lambda_j$ are specified below.
A linear (projection) wavelet estimator $\widehat f_n(\cdot)$ of $f(\cdot)$ with coarse scale level $j_0$ is
\begin{equation}
\label{eq:super-smooth.estimator}\widehat f_n(t) = \sum_{k = 0}^{2^{j_0}-1} \widehat a_{j_0,k} \Phi_{j_0,k}(t).
\end{equation}
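The hard-thresholding rule in \eqref{eq:generic} acts coefficient-wise: a coefficient is kept in full if its magnitude reaches the threshold and discarded otherwise. A toy Python illustration (the coefficient and threshold values are arbitrary):

```python
import numpy as np

def hard_threshold(b_hat, lam):
    # Keep-or-kill rule: a coefficient survives only if |b_hat| >= lam.
    return b_hat * (np.abs(b_hat) >= lam)

b_hat = np.array([0.05, -0.8, 0.3, 1.2, -0.02])      # toy estimated coefficients
kept = hard_threshold(b_hat, lam=0.25)               # zeroes the first and last entries
```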
\noindent {\sl Resolution levels:} The range of resolution levels (frequencies) is given by
\[
\Lambda_n=\{(j,k),\;j_0 \le j\le j_1, \;\; 0 \le k \leq 2^j-1\}.
\]
The coarse scale $j_0$ is defined in the super-smooth case as,
\begin{equation}
\label{eq:j0.super-smooth} 2^{j_0}\asymp \left( \frac{(\alpha_{\ell_*}-\epsilon)\log n}{2\theta_{\ell_*}}\right)^{1/\beta_{\ell_*}}
\end{equation}
where $\epsilon > 0$ is small, $\theta_\ell$ is the super-smooth parameter defined in \eqref{eq:K.smooth.M} and $\ell_*$ is given by \eqref{eq:optimal.smooth.channel}. For the regular-smooth and box-car case the parameter $j_0$ is not important for the asymptotic convergence of the estimator and we set $j_0 = -1$. The fine scale level $j_1$ is important for the asymptotic convergence results in these cases and is set to be,\begin{equation}
\label{eq:j1} 2^{j_1}\asymp \left(\frac{n^{\alpha_{\ell_*}}}{\log n}\right)^{1/(2\nu_*+1)}
\end{equation}
for regular-smooth convolutions and
\begin{equation}
\label{eq:j1.boxcar} 2^{j_1}\asymp \left(\frac{n^{\alpha_*}}{\log n}\right)^{1/(2\widetilde \nu_* + 1)}
\end{equation}
for box-car convolutions, where $\alpha_*$, $\nu_*$, $\widetilde \nu_*$ and $\ell_*$ are defined in \eqref{eq:box.car.alpha}, \eqref{eq:smooth.optimal.nu}, \eqref{eq:box.car.optimal.nu} and \eqref{eq:optimal.smooth.channel} respectively. The fine resolution level $j_1$ in \eqref{eq:j1} coincides with the level given by \cite{Wishart-2013} for the case when $M = 1$, $\nu_1 = \nu$ and $\alpha_1 = \alpha$.
\medskip
\noindent{\sl Thresholds:} To ease the presentation and include both the regular-smooth and box-car cases, define
\[
\xi = \begin{cases}
\alpha_{\ell_*}, & \text{ in the case of regular-smooth deconvolutions};\\
\alpha_{*}, & \text{ in the case of box-car deconvolutions}.
\end{cases}
\]
Then the scale level threshold values $\lambda=\lambda_j$ are given by
\begin{equation}
\label{eq:lambda} \lambda_j= \zeta\,\tau_{j}\,c_n,
\end{equation}
where the three input parameters are specified as:
\begin{itemize}
\item $\zeta$: a smoothing parameter, $\zeta>2\sqrt{(p\vee 2)2 \xi}.$
\item $c_n$: a sample size-dependent scaling factor,
\begin{equation}
\label{eq:cn} c_n = \sqrt{\frac{\log n}{n^{\xi}} }.
\end{equation}
\item $\tau_j$: a level-dependent scaling factor,
\begin{align}
\label{eq:sigma.j} \tau_j^2 &= n^{\xi}\sum_{m \in C_j} |\Psi^\kappa_m |^2\left( \sum_{\ell=1}^M \sigma_\ell^{-2}n^{\alpha_\ell} |m|^{2H_{\ell}-1}|g_{m,\ell}|^2\right)^{-1}.
\end{align}
\end{itemize}
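The quantities $c_n$ and $\tau_j^2$ in \eqref{eq:cn} and \eqref{eq:sigma.j} can be computed directly once $|g_{m,\ell}|^2$ and $|\Psi_m^\kappa|^2$ are available over $C_j$. A Python sketch with toy inputs; as a sanity check, a single channel with $\sigma_1=1$, $H_1=1/2$, $|g_{m,1}|=1$ and $\alpha_1=\xi$ collapses $\tau_j^2$ to $\sum_m |\Psi_m^\kappa|^2$:

```python
import numpy as np

def c_n(n, xi):
    # Sample-size scaling factor: sqrt(log n / n^xi).
    return np.sqrt(np.log(n) / n ** xi)

def tau_j_sq(m, psi_sq, g_sq, sigma, alpha, H, n, xi):
    """Level-dependent factor tau_j^2.
    m      : frequencies in C_j (nonzero integers)
    psi_sq : |Psi_m^kappa|^2 over m
    g_sq   : |g_{m,l}|^2 per channel, shape (M, len(m))"""
    weight = sum(
        sigma[l] ** -2 * n ** alpha[l] * np.abs(m) ** (2 * H[l] - 1) * g_sq[l]
        for l in range(len(sigma))
    )
    return n ** xi * np.sum(psi_sq / weight)

# Sanity check: one channel, sigma = 1, H = 1/2, |g| = 1, alpha = xi.
m = np.array([1, 2, 3, 4])
psi_sq = np.array([0.25, 0.25, 0.25, 0.25])
t2 = tau_j_sq(m, psi_sq, np.ones((1, 4)), sigma=[1.0], alpha=[0.7], H=[0.5],
              n=4096, xi=0.7)
```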
In practical applications, the noise levels $\sigma_\ell$; $\ell=1,2,\ldots,M$; are usually unknown. In this case, estimate each $\sigma_\ell$ by $\widehat{\sigma}_\ell$ and define
\begin{equation}
\label{eq:sigma.j-hat} \widehat{\tau}_j^2=n^{\xi}\sum_{m \in C_j} |\Psi^\kappa_m |^2\left( \sum_{\ell=1}^M \widehat \sigma_\ell^{-2}n^{\alpha_\ell} |m|^{2H_{\ell}-1}|g_{m,\ell}|^2\right)^{-1}.
\end{equation}
This expression is used in the simulation study conducted in \autoref{sec:5}.
Note that the above thresholds $\lambda_j$ defined in \eqref{eq:lambda} coincide with the ones defined in \cite{DeCanditiis-Pensky-2006} ($M \geq 2$, $\alpha_*=1$).
\section{Upper bound results of the adaptive wavelet thresholding and linear estimators}
\label{sec:4}
Consider first the smooth convolutions scenario. In this case, the regular-smooth and super-smooth cases correspond to $\theta_\ell = 0$ and $\theta_\ell > 0$, respectively. The super-smooth case is similar to estimating analytic functions with a slow convergence rate. In this scenario linear estimators attain the optimal (in the minimax sense) convergence rates and hence a linear (projection) wavelet estimator with an appropriate primary resolution level $j_0$ suffices.
\begin{theorem}\label{thm:main-1}
Consider the model described by \eqref{eq:multchan-lm} with $f\in \Besov{\pi_0}{r}{s}(T)$ with $\pi_0 \geq 1$, $s\geq \frac{1}{\pi_0}$. If $\theta_\ell= 0$ for each $\ell = 1,2,\ldots,M$; (regular-smooth case) then consider
\begin{equation}
\label{eq:Besov-rho}0 < r \leq r_0=\min\Bigg\{
\frac{p(2\nu_*+1)}{2(\nu_*+s)+1},
\frac{(2\nu_*+1)p-2}{2(\nu_*+s)-2/\pi_0+1} \Bigg\}
\end{equation}
and the adaptive wavelet estimator $\widehat f_n$ defined in \eqref{eq:generic} with the index range $\Lambda=\Lambda_n$ defined by \eqref{eq:j1} and threshold value $\lambda=\lambda_j$ defined by \eqref{eq:lambda} for some $\zeta>2\sqrt{(p\vee 2)2\alpha_{\ell_*}}$ with $\tau_j$ and $c_n$ given, respectively, by \eqref{eq:sigma.j} and \eqref{eq:cn}. If $\theta_\ell > 0$ for each $\ell = 1,2,\ldots,M$; (super-smooth case) then consider $r >0$ and the linear projection wavelet estimator defined in \eqref{eq:super-smooth.estimator} with coarse scale level, $j_0$, given by \eqref{eq:j0.super-smooth}. Let $p> 1$ be an arbitrary finite real number. Then, there exists a constant $C>0$ such that for all $n \geq 1$,
\[
\mathbb{E} \|\widehat{f}_n-f\|_p^p \le C \left(\frac{\log n}{n^\delta}\right)^{\varrho},
\]
where in the regular-smooth case $\theta_{\ell_*} = 0$ and $\delta = 1$ with
\begin{align}\label{eq:rates-dense-smooth}
\varrho &={\frac{\alpha_{\ell_*} s p}{2(s+(2\nu_*+1)/2)}},& \hbox{ if }& s\geq \frac{(2\nu_*+1)}{2}\Big(\frac{p}{\pi_0}-1\Big);\\
\label{eq:rates-sparse-smooth}
\varrho &=\frac{\alpha_{\ell_*} p(s-1/\pi_0+1/p)}{2(s-1/\pi_0+(2\nu_*+1)/2)},& \hbox{ if }&\frac{1}{\pi_0} - \nu_{*}- \frac{1}{2} \leq s<
\frac{(2\nu_*+1)}{2}\Big(\frac{p}{\pi_0}-1\Big);
\end{align}
while in the super-smooth case, $\theta_{\ell_*} > 0$ and $\delta = 0$ with,
\begin{align}
\varrho &= -ps^*/\beta_{\ell_*} \qquad \text{where } \beta_{\ell_*} > 0\label{eq:rate-super-smooth}
\end{align}
and $s^* = s + 1/p - 1/\min (p,\pi_0)$ and $\ell_*$ is defined with \eqref{eq:optimal.smooth.channel}.
\end{theorem}
Now, consider the box-car convolutions scenario. Recall that $\alpha_*$ is defined by \eqref{eq:box.car.alpha} and that $\nu_*$ is now replaced with $\widetilde \nu_*$ defined by \eqref{eq:box.car.optimal.nu}. For the definitions of the `Badly Approximable' (BA) irrational number and the BA irrational tuple used in the following statement, see, e.g., p.22 and p.42 of \cite{Schmidt-1980}.
\begin{theorem}\label{thm:main-2}
Consider the model described by \eqref{eq:multchan-lm} and the wavelet estimator $\widehat{f}_n$ defined in \eqref{eq:generic} with the index range $\Lambda=\Lambda_n$ defined by \eqref{eq:j1.boxcar} and threshold value $\lambda=\lambda_j$ defined by \eqref{eq:lambda} for some
$\zeta>2\sqrt{(p\vee 2)2\alpha_*}$ with $\tau_j$ and $c_n$ given, respectively, by \eqref{eq:sigma.j} and \eqref{eq:cn}. Let $p> 1$ be an arbitrary finite real number and assume that one of the $c_1,c_2,\ldots,c_M$ is a BA irrational number and that $c_1,c_2,\ldots,c_M$ ($M \geq 2$) is a BA irrational tuple. If $f\in \Besov{\pi_0}{r}{s}(T)$ with $\pi_0 \geq 1$, $s\geq \frac{1}{\pi_0} - \widetilde \nu_* - 1/2$ and $r$ satisfying \eqref{eq:Besov-rho} with $\nu_*$ replaced with $\widetilde \nu_*$, then, in this case, the result of \autoref{thm:main-1} still holds with $\delta = 1$ and
\begin{align}\label{eq:rates-dense-boxcar}
\varrho&={\frac{\alpha_* s p}{2(s+(2\widetilde \nu_*+1)/2)}},& \hbox{ if }&
s\geq \frac{(2\widetilde \nu_*+1)}{2}\Big(\frac{p}{\pi_0}-1\Big);\\
\label{eq:rates-sparse-boxcar}
\varrho &=\frac{\alpha_* p(s-1/\pi_0+1/p)}{2(s-1/\pi_0+(2\widetilde \nu_*+1)/2)},&
\hbox{ if }& \frac{1}{\pi_0} - \widetilde \nu_{*}- \frac{1}{2}\leq s<
\frac{(2\widetilde \nu_*+1)}{2}\Big(\frac{p}{\pi_0}-1\Big);
\end{align}
where $\widetilde \nu_*$ is defined by \eqref{eq:box.car.optimal.nu} and $\alpha_*$ is defined by \eqref{eq:box.car.alpha}.
\end{theorem}
\begin{remark}There is an {\em elbow effect} or {\em phase transition} in the upper bound on the $L^p$-risk $(1 \leq p < \infty)$ in both the regular-smooth and box-car convolutions: in the regular-smooth case the rate switches from \eqref{eq:rates-dense-smooth} to \eqref{eq:rates-sparse-smooth} as the assumed smoothness decreases, and similarly from \eqref{eq:rates-dense-boxcar} to \eqref{eq:rates-sparse-boxcar} in the box-car case. The two regimes are usually referred to as the {\em dense} and {\em sparse} regions, respectively (see \cite{Johnstone-et-al-2004} and \cite{DeCanditiis-Pensky-2006} for the case with independent Brownian motion errors). The upper bound results obtained in \autoref{thm:main-1} for the regular-smooth case and in \autoref{thm:main-2} for the box-car case show that the boundary region of $s$ depends on the LRD indices $\alpha_\ell$, $\ell=1,2,\ldots,M$; and that the sparse region is smaller when the errors follow independent fBms than when they follow independent Brownian motions.
\end{remark}
\begin{remark}\label{rem:agree}Single Channel, $M = 1$:
For $\nu_*=\nu_1=0$, the upper bounds obtained on the $L^p$-risk $(1 \le p < \infty)$ in \autoref{thm:main-1} agree with existing optimal rate results (up to a logarithmic factor) for wavelet regression with long-memory errors obtained by \cite{Wang-1996} (minimax $L^2$-risk) and \cite{Kulik-Raimondo-2009} (upper bounds on the $L^p$-risk, $1 \le p < \infty$). Similarly, when $\nu_* = \nu_1 > 0$ the results also agree with \cite{Wang-1997} (minimax $L^2$-risk, $p = 2$) and \cite{Wishart-2013} (upper bounds on the $L^p$-risk, $1 \le p < \infty$). Multichannel, $M > 1$: Our results generalise the results in \cite{DeCanditiis-Pensky-2006} and include the results of their case when $\alpha_* = 1$ (upper bounds on the $L^p$-risk, $1 \le p < \infty$).
\end{remark}
\begin{remark}As expected, the upper bounds deteriorate in the regular-smooth and box-car cases when $\nu_{\ell_*}$ increases (larger DIP) or when $\alpha_{\ell_*}$ decreases (stronger LRD). The combined effect of $\nu_{\ell_*}$ and $\alpha_{\ell_*}$ on the location of the elbow is reversed, since the sparse region increases with both $\nu_{\ell_*}$ and $\alpha_{\ell_*}$. Consistent with the literature, the super-smooth case has a logarithmic convergence rate with indices that depend on the underlying smoothness through $s^*$ and on the severity of the super-smooth decay through $\beta_{\ell_*}$. The upper bounds on the $L^p$-risk $(1 \le p < \infty)$ in the super-smooth case do not depend on $\nu_{\ell_*}$ or $\alpha_{\ell_*}$.
\end{remark}
\begin{remark}Our upper bounds on the $L^p$-risk (for $p=2$) are not directly comparable to the maximal upper bounds obtained in \cite{benhaddou:etal}. In that paper the framework is different whereby the number of channels $M$ depends on the number of total observations, $n$, in each channel (i.e. $M=M_n$). However, in our case the number of channels is fixed and not dependent on $n$. Our results are comparable to the works of \cite{DeCanditiis-Pensky-2006,Wishart-2013} demonstrating both the effects of the number of channels and the LRD on the upper bounds on the $L^p$-risk $(1 \le p < \infty)$.
\end{remark}
\section{Simulation study}
\label{sec:5}
A simulation study for $p = 2$ is conducted for the regular-smooth scenario and is heavily based on the algorithm in the {\tt WaveD} {\tt R}-package of \cite{Raimondo-Stewart-2007}. In the regular-smooth scenario, it is crucial to know $\ell_*$, the `best channel', since it appears in both the smoothing parameter $\zeta$ and the fine scale level $j_1$. The fine scale parameter is particularly important since it truncates the wavelet expansion early enough to ensure an accurate and numerically stable algorithm. Methods have been established for choosing $j_1$ in practice for the single-channel regular Brownian motion case by \cite{Cavalier-Raimondo-2007} and extended to the single-channel LRD case by \cite{Wishart-2013}. The method is sketched below and the interested reader is referred to those papers for a more in-depth treatment.
The method assumes the practitioner can pass the Fourier basis functions $\left\{ e_u \right\}_{u \in \mathbb{Z}}$ through the channels in \eqref{eq:multchan-lm}, and denotes the resulting observations by
\[
d\breve Y_\ell (x) =g_\ell * e_u (x) + \sigma_\ell n^{-\alpha_\ell/2} dB_{H_\ell}(x).
\]
Due to the orthogonality of the Fourier basis, the Fourier domain representation of $\breve Y_\ell$ is
\[
\breve y_{m,\ell} = \int_\mathbb{R} \overline{e_m}(x)\, d\breve Y_\ell (x) = g_{m, \ell} + \sigma_\ell n^{-\alpha_\ell/2} w_{m,\ell}
\]
where $w_{m,\ell}$ is identically distributed to but independent of $z_{m,\ell}$ (recall \eqref{eq:Fourier-defs} for the definition of $z_{m,\ell}$). Then an estimate of $j_{\ell,1} \asymp \left( n^{\alpha_\ell}/\log n \right)^{1/(\alpha_\ell + 2\nu_\ell)}$ is constructed with,
\[
\widehat j_{\ell,1} = \lfloor \log_2 F_\ell \rfloor -1
\]
where the stopping time $F_\ell$ is determined in the Fourier domain with $F_\ell = \min\left\{ \omega > 0 : |\breve y_{\omega,\ell}| \le \omega^{\alpha_\ell/2} \varepsilon_\ell \log \varepsilon_\ell^{-2} \right\}$ and $\varepsilon_\ell \coloneqq \sigma_\ell n^{-\alpha_\ell/2}$. The estimate $\widehat j_{\ell,1}$ is close to $j_{\ell,1}$ with high probability by Lemma 1 in \cite{Wishart-2013}.
Then define the overall fine scale estimator with,
\begin{equation}
\widehat j_1 = \max_{1 \le \ell \le M} \widehat j_{\ell,1},\label{eq:j1.estimator}
\end{equation}
since the optimal channel, defined by
\[
\ell^* \coloneqq \argmax _{1 \le \ell \le M} \left\{ \left( \frac{n^{\alpha_\ell}}{\log n}\right)^{1/(\alpha_\ell + 2\nu_\ell)} \right\},
\]
is equivalent to the optimal channel $\ell_*$ defined in \eqref{eq:optimal.smooth.channel}. For the same reason, the best channel is estimated as the one with the largest stopping time,
\begin{equation}
\widehat \ell_* = \argmax_{1 \le \ell \le M } F_\ell. \label{eq:ell.star.estimator}
\end{equation}
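The stopping rule can be sketched as follows; the polynomially decaying kernel, the noise level and all numerical values below are illustrative stand-ins, not the actual LIDAR kernels.

```python
import math
import numpy as np

def stopping_time(y_breve, eps, alpha):
    # First frequency w at which |y_breve[w]| falls below the detection
    # level w^{alpha/2} * eps * log(eps^{-2}); y_breve[0] is unused.
    for w in range(1, len(y_breve)):
        if abs(y_breve[w]) <= w ** (alpha / 2) * eps * math.log(eps ** -2):
            return w
    return len(y_breve)

def j_hat(F):
    # \hat j_{l,1} = floor(log2 F) - 1
    return int(math.floor(math.log2(F))) - 1

# Toy channel whose kernel coefficients decay like |m|^{-nu} (nu = 1),
# observed at noise level eps = 1e-3.
nu, eps, alpha = 1.0, 1e-3, 0.8
m = np.arange(1, 2 ** 10)
y_breve = np.concatenate(([0.0], m ** -nu))
F = stopping_time(y_breve, eps, alpha)
```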
The theory suggests that the smoothing parameter should satisfy the bound $\zeta > 4 \sqrt{\alpha_{\ell_*}}$ when $p = 2$. However, as will become evident in the simulations, a smaller choice of $\zeta$ results in improved numerical performance. This smaller choice of smoothing parameter compared to the theory is consistent with other numerical results of \cite{Johnstone-et-al-2004} and \cite{Wishart-2013}. The signals used in the simulations are the standard {\tt LIDAR}, {\tt Doppler}, {\tt Bumps} and {\tt Blocks} functions that have been used consistently throughout the literature (cf. \cite{Donoho-et-al-1995,Cavalier-Raimondo-2007}).
The steps for a simulation study are then as follows:
\begin{enumerate}
\item We choose $f(\cdot)$ to be the {\tt Doppler}, {\tt LIDAR}, {\tt Bumps} or {\tt Blocks} functions.
\item Choose $M$, $\nu_{\ell}$ and the set of dependence parameters $\alpha_\ell$ for each $\ell = 1,2,\ldots, M$; and set $n = 2^J$ with $J = 12$.
\item Generate $M$ independent FARIMA sequences of length $n$. Each sequence is standardised to have the same signal-to-noise ratio,
\[
\text{SNR} = 10 \log_{10} \left( \norm{g_\ell * f}^2/\sigma_\ell^2 \right)
\]
for three scenarios where $\text{SNR}$ = 10 dB (high noise), 20 dB (medium noise) or 30 dB (low noise). This means that the level of noise compared to the blurred signal is standardised. To simulate the dependent sequence, we use the {\tt R}-package {\tt fracdiff} and the {\tt R}-function {\tt fracdiff.sim}.
\item Estimate the highest permissible scale level using the estimator $\widehat j_1$ defined in \eqref{eq:j1.estimator}.
\item Estimate the `best channel' from the noisy data using \eqref{eq:ell.star.estimator} with $\sigma_\ell$ replaced with $\widehat \sigma_\ell$. Then set the smoothing parameter $\zeta$.
\item Compute $\widehat b_{\kappa}$ using formula \eqref{eq:wav.coeff.estim} with level-dependent thresholds $\lambda_j=\zeta \widehat{\tau}_j c_n$ defined in \eqref{eq:lambda}, where $\widehat{\tau}_j$ and $c_n$ are given in \eqref{eq:sigma.j-hat} and \eqref{eq:cn}, respectively. The noise level in each channel is estimated using the MAD of the wavelet coefficients at the highest scale level ($J-1$).
\item Compute the above estimates repeatedly to obtain an empirical version of the RMSE with,
\[
\widehat{\mathrm{RMSE}}(\widehat f,f) = \widehat{\mathbb{E}} \norm{\widehat f - f}_2 = \frac{1}{m} \sum_{i = 1}^m \norm{\widehat f_i - f}_2
\]
\]
where $m = 1024.$
\end{enumerate}
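Step 3 fixes the noise level by inverting the SNR formula, $\sigma_\ell = \norm{g_\ell * f}\,10^{-\text{SNR}/20}$. A Python sketch with an illustrative blurred signal (any signal with the same norm behaves identically; the discrete $\ell^2$ norm stands in for $\norm{g_\ell * f}$):

```python
import numpy as np

def sigma_for_snr(blurred, snr_db):
    # Invert SNR = 10 log10(||g * f||^2 / sigma^2) for the noise level sigma.
    return np.linalg.norm(blurred) * 10 ** (-snr_db / 20)

# Illustrative blurred signal sampled on n = 4096 points.
blurred = np.sin(2 * np.pi * np.linspace(0, 1, 4096, endpoint=False))
sigmas = {snr: sigma_for_snr(blurred, snr) for snr in (10, 20, 30)}
```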
The results of the simulations are presented in Tables \ref{tab:1}--\ref{tab:4}.
\begin{table}
\centering
\resizebox{\textwidth}{!}{
\begin{tabular}{llllclllclll}
\toprule
\multicolumn{1}{l}{$\nu = 0.3$}&\multicolumn{3}{c}{$\alpha = 1 $}&\multicolumn{1}{c}{}&\multicolumn{3}{c}{$\alpha = 0.8 $}&\multicolumn{1}{c}{}&\multicolumn{3}{c}{$\alpha = 0.6 $}\tabularnewline
\cline{2-4} \cline{6-8} \cline{10-12}
\multicolumn{1}{l}{}&\multicolumn{1}{c}{$M = 1$}&\multicolumn{1}{c}{$M = 2$}&\multicolumn{1}{c}{$M=3$}&\multicolumn{1}{c}{}&\multicolumn{1}{c}{$M = 1$}&\multicolumn{1}{c}{$M = 2$}&\multicolumn{1}{c}{$M=3$}&\multicolumn{1}{c}{}&\multicolumn{1}{c}{$M = 1$}&\multicolumn{1}{c}{$M = 2$}&\multicolumn{1}{c}{$M=3$}\tabularnewline
\midrule
\multicolumn{11}{c}{{\tt LIDAR}: SNR 20 dB}\\ \cmidrule{2-12}
~~$\sqrt{\alpha_{\ell_*}}$&\textbf{0.054}\hfill (7)&\textbf{0.045}\hfill (7)&\textbf{0.041}\hfill (7)&&\textbf{0.064}\hfill (7)&\textbf{0.052}\hfill (7)&\textbf{0.046}\hfill (7)&&\textbf{0.081}\hfill (7)&\textbf{0.064}\hfill (7)&\textbf{0.056}\hfill (7)\tabularnewline
~~$4\sqrt{\alpha_{\ell_*}}$&0.073\hfill (7)&0.06\hfill (7)&0.052\hfill (7)&&0.08\hfill (7)&0.065\hfill (7)&0.057\hfill (7)&&0.093\hfill (7)&0.074\hfill (7)&0.065\hfill (7)\tabularnewline
~~{\small WaveD}&0.06\hfill (7)&0.059\hfill (7)&0.059\hfill (7)&&0.064\hfill (7.6)&0.064\hfill (7.8)&0.064\hfill (7.9)&&0.083\hfill (8)&0.084\hfill (8)&0.084\hfill (8)\tabularnewline
\midrule
\multicolumn{11}{c}{{\tt Doppler}: SNR 20 dB}\\ \cmidrule{2-12}
~~$\sqrt{\alpha_{\ell_*}}$&\textbf{0.039}\hfill (8)&\textbf{0.03}\hfill (8)&\textbf{0.026}\hfill (8)&&0.048\hfill (8)&\textbf{0.036}\hfill (8)&\textbf{0.031}\hfill (8)&&0.062\hfill (8)&\textbf{0.046}\hfill (8)&\textbf{0.039}\hfill (8)\tabularnewline
~~$4\sqrt{\alpha_{\ell_*}}$&0.056\hfill (8)&0.044\hfill (8)&0.036\hfill (8)&&0.059\hfill (8)&0.046\hfill (8)&0.038\hfill (8)&&0.064\hfill (8)&0.05\hfill (8)&0.041\hfill (8)\tabularnewline
~~{\small WaveD}&0.046\hfill (8)&0.045\hfill (8)&0.045\hfill (8)&&\textbf{0.047}\hfill (8)&0.047\hfill (8)&0.047\hfill (8)&&\textbf{0.057}\hfill (8)&0.058\hfill (8)&0.058\hfill (8)\tabularnewline
\midrule
\multicolumn{11}{c}{{\tt Bumps}: SNR 20 dB}\\ \cmidrule{2-12}
~~$\sqrt{\alpha_{\ell_*}}$&\textbf{0.275}\hfill (7)&\textbf{0.27}\hfill (7)&\textbf{0.268}\hfill (7)&&0.28\hfill (7)&0.273\hfill (7)&0.27\hfill (7)&&0.288\hfill (7)&0.278\hfill (7)&0.274\hfill (7)\tabularnewline
~~$4\sqrt{\alpha_{\ell_*}}$&0.279\hfill (7)&0.273\hfill (7)&0.271\hfill (7)&&0.282\hfill (7)&0.276\hfill (7)&0.273\hfill (7)&&0.289\hfill (7)&0.279\hfill (7)&0.276\hfill (7)\tabularnewline
~~{\small WaveD}&0.276\hfill (7)&0.276\hfill (7)&0.275\hfill (7)&&\textbf{0.253}\hfill (7.2)&\textbf{0.231}\hfill (7.4)&\textbf{0.215}\hfill (7.6)&&\textbf{0.189}\hfill (8)&\textbf{0.188}\hfill (8)&\textbf{0.188}\hfill (8)\tabularnewline
\midrule
\multicolumn{11}{c}{{\tt Blocks}: SNR 20 dB}\\ \cmidrule{2-12}
~~$\sqrt{\alpha_{\ell_*}}$&\textbf{0.373}\hfill (6)&\textbf{0.365}\hfill (6)&\textbf{0.363}\hfill (6)&&\textbf{0.384}\hfill (6)&\textbf{0.371}\hfill (6)&\textbf{0.367}\hfill (6)&&0.502\hfill (5)&0.492\hfill (5)&0.489\hfill (5)\tabularnewline
~~$4\sqrt{\alpha_{\ell_*}}$&0.397\hfill (6)&0.373\hfill (6)&0.366\hfill (6)&&0.414\hfill (6)&0.382\hfill (6)&0.372\hfill (6)&&0.508\hfill (5)&0.495\hfill (5)&0.49\hfill (5)\tabularnewline
~~{\small WaveD}&0.376\hfill (6)&0.376\hfill (6)&0.376\hfill (6)&&0.385\hfill (6)&0.385\hfill (6)&0.385\hfill (6)&&\textbf{0.408}\hfill (6)&\textbf{0.408}\hfill (6)&\textbf{0.409}\hfill (6)\tabularnewline
\bottomrule
\end{tabular}}
\caption{RMSE for estimates when $\nu = 0.3$ at mild levels of strong dependence when the number of channels ($M$) increases.}
\label{tab:1}
\end{table}
\begin{table}
\centering
\resizebox{\textwidth}{!}{
\begin{tabular}{llllclllclll}
\toprule
\multicolumn{1}{l}{$\nu = 0.3$}&\multicolumn{3}{c}{$\alpha = 0.5 $}&\multicolumn{1}{c}{}&\multicolumn{3}{c}{$\alpha = 0.3 $}&\multicolumn{1}{c}{}&\multicolumn{3}{c}{$\alpha = 0.1 $}\tabularnewline
\cline{2-4} \cline{6-8} \cline{10-12}
\multicolumn{1}{l}{}&\multicolumn{1}{c}{$M = 1$}&\multicolumn{1}{c}{$M = 2$}&\multicolumn{1}{c}{$M=3$}&\multicolumn{1}{c}{}&\multicolumn{1}{c}{$M = 1$}&\multicolumn{1}{c}{$M = 2$}&\multicolumn{1}{c}{$M=3$}&\multicolumn{1}{c}{}&\multicolumn{1}{c}{$M = 1$}&\multicolumn{1}{c}{$M = 2$}&\multicolumn{1}{c}{$M=3$}\tabularnewline
\midrule
\multicolumn{11}{c}{{\tt LIDAR}: SNR 20 dB}\\ \cmidrule{2-12}
~~$\sqrt{\alpha_{\ell_*}}$&\textbf{0.094}\hfill (7)&\textbf{0.073}\hfill (7)&\textbf{0.063}\hfill (7)&&\textbf{0.115}\hfill (6)&\textbf{0.089}\hfill (6)&\textbf{0.077}\hfill (6)&&0.192\hfill (6)&0.141\hfill (6)&0.118\hfill (6)\tabularnewline
~~$4\sqrt{\alpha_{\ell_*}}$&0.102\hfill (7)&0.081\hfill (7)&0.07\hfill (7)&&0.122\hfill (6)&0.098\hfill (6)&0.084\hfill (6)&&\textbf{0.164}\hfill (6)&\textbf{0.126}\hfill (6)&\textbf{0.107}\hfill (6)\tabularnewline
~~{\small WaveD}&0.103\hfill (8)&0.105\hfill (8)&0.105\hfill (8)&&0.168\hfill (8)&0.169\hfill (8)&0.171\hfill (8)&&0.271\hfill (8)&0.273\hfill (8)&0.273\hfill (8)\tabularnewline
\midrule
\multicolumn{11}{c}{{\tt Doppler}: SNR 20 dB}\\ \cmidrule{2-12}
~~$\sqrt{\alpha_{\ell_*}}$&0.072\hfill (7.7)&0.054\hfill (7.9)&0.045\hfill (8)&&0.091\hfill (7)&0.073\hfill (7)&0.066\hfill (7)&&0.153\hfill (7)&0.113\hfill (7)&0.097\hfill (7)\tabularnewline
~~$4\sqrt{\alpha_{\ell_*}}$&\textbf{0.068}\hfill (7.7)&\textbf{0.053}\hfill (7.9)&\textbf{0.044}\hfill (8)&&\textbf{0.08}\hfill (7)&\textbf{0.065}\hfill (7)&\textbf{0.059}\hfill (7)&&\textbf{0.111}\hfill (7)&\textbf{0.086}\hfill (7)&\textbf{0.076}\hfill (7)\tabularnewline
~~{\small WaveD}&0.069\hfill (8)&0.07\hfill (8)&0.07\hfill (8)&&0.107\hfill (8)&0.109\hfill (8)&0.109\hfill (8)&&0.17\hfill (8)&0.171\hfill (8)&0.172\hfill (8)\tabularnewline
\midrule
\multicolumn{11}{c}{{\tt Bumps}: SNR 20 dB}\\ \cmidrule{2-12}
~~$\sqrt{\alpha_{\ell_*}}$&0.294\hfill (7)&0.281\hfill (7)&0.276\hfill (7)&&0.469\hfill (6)&0.461\hfill (6)&0.458\hfill (6)&&0.496\hfill (6)&0.475\hfill (6)&0.467\hfill (6)\tabularnewline
~~$4\sqrt{\alpha_{\ell_*}}$&0.294\hfill (7)&0.282\hfill (7)&0.278\hfill (7)&&0.467\hfill (6)&0.461\hfill (6)&0.458\hfill (6)&&0.489\hfill (6)&0.472\hfill (6)&0.466\hfill (6)\tabularnewline
~~{\small WaveD}&\textbf{0.201}\hfill (8)&\textbf{0.201}\hfill (8)&\textbf{0.202}\hfill (8)&&\textbf{0.246}\hfill (8)&\textbf{0.247}\hfill (8)&\textbf{0.248}\hfill (8)&&\textbf{0.329}\hfill (8)&\textbf{0.331}\hfill (8)&\textbf{0.33}\hfill (8)\tabularnewline
\midrule
\multicolumn{11}{c}{{\tt Blocks}: SNR 20 dB}\\ \cmidrule{2-12}
~~$\sqrt{\alpha_{\ell_*}}$&0.511\hfill (5)&0.497\hfill (5)&0.492\hfill (5)&&0.82\hfill (4)&0.808\hfill (4)&0.804\hfill (4)&&1.116\hfill (3)&1.094\hfill (3)&1.087\hfill (3)\tabularnewline
~~$4\sqrt{\alpha_{\ell_*}}$&0.518\hfill (5)&0.5\hfill (5)&0.494\hfill (5)&&0.823\hfill (4)&0.809\hfill (4)&0.805\hfill (4)&&1.116\hfill (3)&1.094\hfill (3)&1.087\hfill (3)\tabularnewline
~~{\small WaveD}&\textbf{0.43}\hfill (6)&\textbf{0.43}\hfill (6)&\textbf{0.43}\hfill (6)&&\textbf{0.506}\hfill (6)&\textbf{0.507}\hfill (6)&\textbf{0.507}\hfill (6)&&\textbf{0.68}\hfill (6.1)&\textbf{0.691}\hfill (6.3)&\textbf{0.697}\hfill (6.3)\tabularnewline
\bottomrule
\end{tabular}}
\caption{RMSE for estimates when $\nu = 0.3$ at severe levels of strong dependence when the number of channels ($M$) increases.}
\label{tab:2}
\end{table}
\begin{table}
\centering
\resizebox{\textwidth}{!}{
\begin{tabular}{llllclllclll}
\toprule
\multicolumn{1}{l}{$\nu = 0.5$}&\multicolumn{3}{c}{$\alpha = 1 $}&\multicolumn{1}{c}{}&\multicolumn{3}{c}{$\alpha = 0.8 $}&\multicolumn{1}{c}{}&\multicolumn{3}{c}{$\alpha = 0.6 $}\tabularnewline
\cline{2-4} \cline{6-8} \cline{10-12}
\multicolumn{1}{l}{}&\multicolumn{1}{c}{$M = 1$}&\multicolumn{1}{c}{$M = 2$}&\multicolumn{1}{c}{$M=3$}&\multicolumn{1}{c}{}&\multicolumn{1}{c}{$M = 1$}&\multicolumn{1}{c}{$M = 2$}&\multicolumn{1}{c}{$M=3$}&\multicolumn{1}{c}{}&\multicolumn{1}{c}{$M = 1$}&\multicolumn{1}{c}{$M = 2$}&\multicolumn{1}{c}{$M=3$}\tabularnewline
\midrule
\multicolumn{11}{c}{{\tt LIDAR}: SNR 20 dB}\\ \cmidrule{2-12}
~~$\sqrt{\alpha_{\ell_*}}$&\textbf{0.073}\hfill (6)&\textbf{0.062}\hfill (6)&\textbf{0.056}\hfill (6)&&\textbf{0.085}\hfill (6)&\textbf{0.07}\hfill (6)&\textbf{0.063}\hfill (6)&&\textbf{0.104}\hfill (6)&\textbf{0.085}\hfill (6)&\textbf{0.075}\hfill (6)\tabularnewline
~~$4\sqrt{\alpha_{\ell_*}}$&0.094\hfill (6)&0.081\hfill (6)&0.073\hfill (6)&&0.103\hfill (6)&0.088\hfill (6)&0.08\hfill (6)&&0.122\hfill (6)&0.1\hfill (6)&0.09\hfill (6)\tabularnewline
~~{\small WaveD}&0.083\hfill (6)&0.082\hfill (6)&0.083\hfill (6)&&0.086\hfill (6)&0.086\hfill (6)&0.086\hfill (6)&&0.105\hfill (6.1)&0.106\hfill (6.2)&0.107\hfill (6.2)\tabularnewline
\midrule
\multicolumn{11}{c}{{\tt Doppler}: SNR 20 dB}\\ \cmidrule{2-12}
~~$\sqrt{\alpha_{\ell_*}}$&\textbf{0.059}\hfill (7)&\textbf{0.053}\hfill (7)&\textbf{0.051}\hfill (7)&&\textbf{0.067}\hfill (7)&\textbf{0.058}\hfill (7)&\textbf{0.054}\hfill (7)&&0.084\hfill (6.9)&\textbf{0.067}\hfill (7)&\textbf{0.061}\hfill (7)\tabularnewline
~~$4\sqrt{\alpha_{\ell_*}}$&0.076\hfill (7)&0.061\hfill (7)&0.056\hfill (7)&&0.082\hfill (7)&0.065\hfill (7)&0.059\hfill (7)&&0.092\hfill (6.9)&0.071\hfill (7)&0.063\hfill (7)\tabularnewline
~~{\small WaveD}&0.065\hfill (7)&0.064\hfill (7)&0.064\hfill (7)&&0.067\hfill (7)&0.067\hfill (7)&0.067\hfill (7)&&\textbf{0.078}\hfill (7)&0.078\hfill (7)&0.078\hfill (7)\tabularnewline
\midrule
\multicolumn{11}{c}{{\tt Bumps}: SNR 20 dB}\\ \cmidrule{2-12}
~~$\sqrt{\alpha_{\ell_*}}$&\textbf{0.457}\hfill (6)&\textbf{0.455}\hfill (6)&\textbf{0.453}\hfill (6)&&0.461\hfill (6)&0.457\hfill (6)&0.455\hfill (6)&&0.467\hfill (6)&0.46\hfill (6)&0.458\hfill (6)\tabularnewline
~~$4\sqrt{\alpha_{\ell_*}}$&0.457\hfill (6)&0.456\hfill (6)&0.455\hfill (6)&&0.461\hfill (6)&0.457\hfill (6)&0.456\hfill (6)&&0.467\hfill (6)&0.46\hfill (6)&0.458\hfill (6)\tabularnewline
~~{\small WaveD}&0.457\hfill (6)&0.457\hfill (6)&0.457\hfill (6)&&\textbf{0.441}\hfill (6.1)&\textbf{0.429}\hfill (6.2)&\textbf{0.418}\hfill (6.3)&&\textbf{0.332}\hfill (6.9)&\textbf{0.319}\hfill (7)&\textbf{0.318}\hfill (7)\tabularnewline
\midrule
\multicolumn{11}{c}{{\tt Blocks}: SNR 20 dB}\\ \cmidrule{2-12}
~~$\sqrt{\alpha_{\ell_*}}$&\textbf{0.494}\hfill (5)&\textbf{0.488}\hfill (5)&\textbf{0.486}\hfill (5)&&\textbf{0.505}\hfill (5)&\textbf{0.494}\hfill (5)&\textbf{0.49}\hfill (5)&&0.807\hfill (4)&0.801\hfill (4)&0.8\hfill (4)\tabularnewline
~~$4\sqrt{\alpha_{\ell_*}}$&0.506\hfill (5)&0.493\hfill (5)&0.489\hfill (5)&&0.519\hfill (5)&0.501\hfill (5)&0.494\hfill (5)&&0.811\hfill (4)&0.803\hfill (4)&0.801\hfill (4)\tabularnewline
~~{\small WaveD}&0.497\hfill (5)&0.497\hfill (5)&0.497\hfill (5)&&0.506\hfill (5)&0.506\hfill (5)&0.506\hfill (5)&&\textbf{0.527}\hfill (5)&\textbf{0.528}\hfill (5)&\textbf{0.528}\hfill (5)\tabularnewline
\bottomrule
\end{tabular}}
\caption{RMSE for estimates when $\nu = 0.5$ at mild levels of strong dependence when the number of channels ($M$) increases.}
\label{tab:3}
\end{table}
\begin{table}[htp]
\centering
\resizebox{\textwidth}{!}{
\begin{tabular}{llllclllclll}
\toprule
\multicolumn{1}{l}{$\nu = 0.5$}&\multicolumn{3}{c}{$\alpha = 0.5 $}&\multicolumn{1}{c}{}&\multicolumn{3}{c}{$\alpha = 0.3 $}&\multicolumn{1}{c}{}&\multicolumn{3}{c}{$\alpha = 0.1 $}\tabularnewline
\cline{2-4} \cline{6-8} \cline{10-12}
\multicolumn{1}{l}{}&\multicolumn{1}{c}{$M = 1$}&\multicolumn{1}{c}{$M = 2$}&\multicolumn{1}{c}{$M=3$}&\multicolumn{1}{c}{}&\multicolumn{1}{c}{$M = 1$}&\multicolumn{1}{c}{$M = 2$}&\multicolumn{1}{c}{$M=3$}&\multicolumn{1}{c}{}&\multicolumn{1}{c}{$M = 1$}&\multicolumn{1}{c}{$M = 2$}&\multicolumn{1}{c}{$M=3$}\tabularnewline
\midrule
\multicolumn{11}{c}{{\tt LIDAR}: SNR 20 dB}\\ \cmidrule{2-12}
~~$\sqrt{\alpha_{\ell_*}}$&\textbf{0.11}\hfill (5)&\textbf{0.094}\hfill (5)&\textbf{0.087}\hfill (5)&&\textbf{0.142}\hfill (5)&\textbf{0.115}\hfill (5)&\textbf{0.103}\hfill (5)&&0.215\hfill (4)&\textbf{0.184}\hfill (4)&\textbf{0.172}\hfill (4)\tabularnewline
~~$4\sqrt{\alpha_{\ell_*}}$&0.134\hfill (5)&0.109\hfill (5)&0.097\hfill (5)&&0.163\hfill (5)&0.128\hfill (5)&0.113\hfill (5)&&\textbf{0.214}\hfill (4)&0.185\hfill (4)&0.173\hfill (4)\tabularnewline
~~{\small WaveD}&0.134\hfill (6.5)&0.14\hfill (6.7)&0.142\hfill (6.9)&&0.243\hfill (7)&0.246\hfill (7)&0.246\hfill (7)&&0.399\hfill (7)&0.401\hfill (7)&0.4\hfill (7)\tabularnewline
\midrule
\multicolumn{11}{c}{{\tt Doppler}: SNR 20 dB}\\ \cmidrule{2-12}
~~$\sqrt{\alpha_{\ell_*}}$&0.104\hfill (6)&0.097\hfill (6)&0.094\hfill (6)&&0.122\hfill (6)&0.107\hfill (6)&0.101\hfill (6)&&0.183\hfill (5)&0.166\hfill (5)&0.16\hfill (5)\tabularnewline
~~$4\sqrt{\alpha_{\ell_*}}$&0.106\hfill (6)&0.098\hfill (6)&0.095\hfill (6)&&\textbf{0.115}\hfill (6)&\textbf{0.103}\hfill (6)&\textbf{0.099}\hfill (6)&&\textbf{0.172}\hfill (5)&\textbf{0.161}\hfill (5)&\textbf{0.156}\hfill (5)\tabularnewline
~~{\small WaveD}&\textbf{0.091}\hfill (7)&\textbf{0.092}\hfill (7)&\textbf{0.092}\hfill (7)&&0.14\hfill (7)&0.141\hfill (7)&0.141\hfill (7)&&0.216\hfill (7)&0.218\hfill (7)&0.218\hfill (7)\tabularnewline
\midrule
\multicolumn{11}{c}{{\tt Bumps}: SNR 20 dB}\\ \cmidrule{2-12}
~~$\sqrt{\alpha_{\ell_*}}$&0.688\hfill (5.2)&0.643\hfill (5.3)&0.611\hfill (5.4)&&0.742\hfill (5)&0.736\hfill (5)&0.734\hfill (5)&&0.88\hfill (4)&0.873\hfill (4)&0.871\hfill (4)\tabularnewline
~~$4\sqrt{\alpha_{\ell_*}}$&0.689\hfill (5.2)&0.643\hfill (5.3)&0.611\hfill (5.4)&&0.743\hfill (5)&0.736\hfill (5)&0.734\hfill (5)&&0.88\hfill (4)&0.873\hfill (4)&0.871\hfill (4)\tabularnewline
~~{\small WaveD}&\textbf{0.334}\hfill (7)&\textbf{0.334}\hfill (7)&\textbf{0.334}\hfill (7)&&\textbf{0.387}\hfill (7)&\textbf{0.389}\hfill (7)&\textbf{0.389}\hfill (7)&&\textbf{0.487}\hfill (7)&\textbf{0.488}\hfill (7)&\textbf{0.488}\hfill (7)\tabularnewline
\midrule
\multicolumn{11}{c}{{\tt Blocks}: SNR 20 dB}\\ \cmidrule{2-12}
~~$\sqrt{\alpha_{\ell_*}}$&0.813\hfill (4)&0.805\hfill (4)&0.802\hfill (4)&&1.085\hfill (3)&1.078\hfill (3)&1.075\hfill (3)&&1.126\hfill (3)&1.098\hfill (3)&1.089\hfill (3)\tabularnewline
~~$4\sqrt{\alpha_{\ell_*}}$&0.819\hfill (4)&0.807\hfill (4)&0.803\hfill (4)&&1.086\hfill (3)&1.078\hfill (3)&1.075\hfill (3)&&1.129\hfill (3)&1.099\hfill (3)&1.089\hfill (3)\tabularnewline
~~{\small WaveD}&\textbf{0.548}\hfill (5)&\textbf{0.549}\hfill (5)&\textbf{0.549}\hfill (5)&&\textbf{0.625}\hfill (5)&\textbf{0.626}\hfill (5)&\textbf{0.625}\hfill (5)&&\textbf{0.796}\hfill (5)&\textbf{0.798}\hfill (5)&\textbf{0.8}\hfill (5)\tabularnewline
\bottomrule
\end{tabular}}
\caption{RMSE for estimates when $\nu = 0.5$ at severe levels of strong dependence when the number of channels ($M$) increases.}
\label{tab:4}
\end{table}
\noindent{\bf Comments and analysis}
The numerical study considers three particular contexts: the effect of the severity of LRD, the effect of multiple channels, and the degree of ill-posedness. The method is also compared with the standard {\tt WaveD} estimator applied to the `best channel' in the sense of the algorithm posed at the start of this section. The results are contained in Tables \ref{tab:1} -- \ref{tab:4}. Simulations were conducted for a large range of noise levels (SNR = 10, 15, 20, 25 and 30 dB) but are omitted due to space constraints. The estimates at other noise levels were similar to those displayed here and did not add further to the concepts discussed below.
Performance of our method (and of the {\tt WaveD} method) relies on two key steps. The most important is choosing the fine scale level $j_1$ to truncate the expansion at the highest allowable level before performance deteriorates. A second, less critical but still important, step is to choose the smoothing parameter $\zeta$ appropriately (the smoothing parameter $\eta$ for the {\tt WaveD} algorithm is fixed at its default of $\sqrt{6}$).
To demonstrate the roles of both $j_1$ and $\zeta$, the RMSE of the estimators in all the forthcoming contexts is presented inside the cells of the tables, with the average fine scale level $\widehat j_1$ shown in parentheses. The values of $\zeta$ are given in the first column (with {\tt WaveD} denoting the standard {\tt WaveD} estimator in the best possible channel).
Theoretical arguments suggest that $\zeta$ should be at least $4 \sqrt{\alpha_{\ell_*}}$ for $p = 2$. Simulations were conducted for more liberal and conservative choices of $\zeta$ with $\zeta \in \left( \sqrt{\alpha_{\ell_*}}, 8\sqrt{\alpha_{\ell_*}} \right)$. In almost all cases, performance was optimal using the smaller choice $\zeta = \sqrt{\alpha_{\ell_*}}$, the exceptions generally occurring when the dependence was considerably strong ($\alpha < 0.3$) and $M = 1$.
As is consistent with \cite{Wishart-2013}, allowing higher scales can capture more transient features of a signal, but at the cost of including spurious effects of LRD noise. Early truncation can therefore be either beneficial or detrimental to performance, depending on the features of the signal. For example, estimation of the {\tt LIDAR} and {\tt Doppler} signals benefits from earlier truncation, whereas earlier truncation harms estimation of the {\tt Bumps} and {\tt Blocks} signals. For the latter signals, the transient features captured at higher scales outweigh the potential loss incurred from including spurious LRD noise effects. A potential reason that the {\tt LIDAR} signal is estimated well by the multichannel method, compared with the similar {\tt Blocks} signal, is the close proximity of its jumps combined with the early truncation (small $j_1$) of the expansion. The {\tt WaveD} estimator does not truncate early to avoid the LRD effects and hence captures the jumps better (cf. Figures \ref{fig:1} and \ref{fig:2}). Finally, for the {\tt Bumps} signal, the {\tt WaveD} method consistently outperformed the multichannel estimator (except with the liberal choice $\zeta = 1$ when $\alpha_{\ell_*} = 1$). This makes sense, since the high frequency local features of the {\tt Bumps} signal captured with a larger $j_1$ outweigh the loss incurred by spurious LRD effects. All of the aforementioned points are evident across Tables \ref{tab:1} -- \ref{tab:4} and shown visually for particular cases in Figures \ref{fig:1} and \ref{fig:2}.
Supporting the theory, and consistent with previous results in the literature, the performance of estimation deteriorates as the degree of ill-posedness increases ($\nu$ increases). This is demonstrated by comparing the results in Tables \ref{tab:1} -- \ref{tab:2} with those in Tables \ref{tab:3} -- \ref{tab:4}.
In the same vein, performance deteriorates as the level of dependence increases ($\alpha$ decreases). Studying Tables \ref{tab:1} -- \ref{tab:4} in more detail, consider the effect of $\alpha$ while keeping $M$ and $\nu$ fixed. As is consistent with the theoretical upper bound on the rates of convergence established in Section \ref{sec:4}, the convergence rate deteriorates as the level of dependence increases.
The theory also suggests that the convergence rate relies only on the best available channel. Numerically, however, this does not seem to be the case. Interestingly, when keeping the dependence and DIP levels fixed across channels, the inclusion of more channels (increasing $M$) generally improves the estimation performance of the multichannel estimator for all signals, while the {\tt WaveD} estimator has the same performance regardless of the number of channels. This should not be surprising, since the {\tt WaveD} estimator is only applied to the `best channel', meaning that only $n = 4096$ observations are used each time. The multichannel estimator, by contrast, uses a weighted average of all channels, based on 4096, 8192 and 12288 observations in the $M = 1$, $2$ and $3$ scenarios, respectively.
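The weighted averaging of channels can be made concrete with a small Python sketch: hypothetical Fourier-domain data from $M$ blurred channels are combined with positive channel weights, and in the noiseless case the signal's Fourier coefficients are recovered exactly, whatever the weights. All quantities below (kernels, weights, signal) are randomly generated stand-ins for the real Meyer-domain quantities.

```python
import numpy as np

def combine_channels(y, g, gamma):
    """Weighted combination of M blurred channels in the Fourier domain.

    y, g, gamma : arrays of shape (M, n_freq) holding, per channel, the
    observed Fourier coefficients, the convolution-kernel coefficients
    and (positive) weights.  Returns the combined signal estimate.
    """
    num = (gamma * np.conj(g) * y).sum(axis=0)
    den = (gamma * np.abs(g) ** 2).sum(axis=0)
    return num / den

rng = np.random.default_rng(0)
M, nf = 3, 64
f = rng.standard_normal(nf) + 1j * rng.standard_normal(nf)   # true coefficients
g = rng.standard_normal((M, nf)) + 1j * rng.standard_normal((M, nf))
gamma = rng.uniform(0.5, 2.0, size=(M, nf))                  # any positive weights

y = g * f                        # noiseless observations in each channel
fhat = combine_channels(y, g, gamma)
assert np.allclose(fhat, f)      # exact recovery without noise, for any weights
```

With noise, the weights matter; the role of the optimal (inverse-variance) weights is discussed in Section \ref{sec:varnoisecoef}.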
\section{Conclusion}
\label{sec:6}
In this paper we considered multichannel deconvolution with errors following fractional Brownian motions with different Hurst parameters. We established upper bounds on the $L^p$-risk $(1 \le p < \infty)$ for the non-linear wavelet estimators in the regular-smooth and box-car convolution cases, and for the linear wavelet estimator in the super-smooth convolution case. In particular, we extended the findings of \cite{DeCanditiis-Pensky-2006} and demonstrated that they are no longer valid in the LRD set-up. That is, in the box-car case, adding new channels is beneficial for the upper bound only if the additional channel is not outweighed by the dependence, in the sense of $\widetilde \nu_*$ defined in \eqref{eq:box.car.optimal.nu} and the upper bound in \autoref{thm:main-2}. In the regular-smooth case, adding new channels may improve the upper bound; an improvement arises if the $\alpha$ and DIP parameters in the new channel are better in the sense of \eqref{eq:optimal.smooth.channel}. In both the regular-smooth and box-car cases, however, LRD affects the upper bounds, which is consistent with the previous findings in \cite{Wang-1997,Kulik-Raimondo-2009} and \cite{Wishart-2013}. In the super-smooth case, adding new channels is also beneficial, but the upper bounds do not involve LRD.
We supported our theoretical findings by extensive simulation studies for the regular-smooth case using the $L^p$-risk with $p = 2$. We found that adding new channels improves performance, especially at severe levels of LRD. On the other hand, the optimal choice of threshold level was in some instances different from the one suggested by the theory; the optimal choice depends strongly on the underlying signal. One has to remember, though, that the established theory is \textit{asymptotic} in nature, whereas the simulation studies are based on finite sample properties. This explains the aforementioned discrepancy.
A possible direction for future research is to extend our upper bounds to minimax-type rates, in the direction of \cite{benhaddou:etal}, which were obtained for the $L^2$-risk in the discrete model when the number of channels, $M$, also depends on the total number of observations $n$, i.e., $M=M_n$.
\section{Proofs}
\label{sec:7}
We provide technical details for the proofs of Theorems \ref{thm:main-1} and \ref{thm:main-2}. In the regular-smooth and box-car cases, the proofs are based on the maxiset theorem (see Theorem 6.1 in \cite{Kerkyacharian-Picard-2000}). The steps are similar to those of \cite{Johnstone-et-al-2004} and \cite{DeCanditiis-Pensky-2006}, with necessary modifications. In the super-smooth case we do not need the maxiset theorem but proceed according to \cite{Petsa-Sapatinas-2009} and consider the $L^p$-risk ($1 \le p < \infty$) directly.
\subsection{Stochastic analysis of estimated wavelet coefficients}
\label{sec:varnoisecoef}
By definition, the estimated wavelet coefficients are unbiased. Consider now the covariance structure of the $z_{\cdot \ell}$ process, where $z_{m,\ell} = \int_\mathbb{R} \overline{e_m}(t) dB_{H_\ell}(t)$. It is assumed that $B_{H_\ell}$ is independent of $B_{H_{\ell'}}$ for $\ell \neq \ell'$, with the immediate consequence that $\Cov{z_{m\ell}}{z_{m'\ell'}} = 0$ for $\ell \neq \ell'$. Using the results of Section 5.2 of \cite{Wishart-2013}, the covariance of the fBm coefficients within each channel is,
\begin{equation}
\Cov{ z_{m\ell}}{ z_{m'\ell}} = |m m'|^{1/2 - {H_\ell}} \sum_{\kappa' } \psi^{\kappa'}_{m} \overline{\psi^{\kappa'}_{m'}},\label{eq:exactcov}
\end{equation}
where $\psi$ is the Meyer wavelet and $\kappa' = (j',k')$.
The result in \eqref{eq:exactcov} would seem to imply that the covariance matrix of $z_{m\ell}$ is non-trivial. However, applying \autoref{lem:dyadic}, the covariance matrix reduces to
\begin{equation}
\Cov{z_{m\ell}}{ z_{m'\ell}} = |mm'|^{1/2 -H_\ell}\sum_{j \in \mathbb{Z}}\1{\left\{ \frac{m-m'}{2^j} \in \mathbb{Z} \right\}}\psi_{m2^{-j}} \overline{\psi_{m'2^{-j}}}.\label{eq:covZHsum}
\end{equation}
Thus we are in a position to bound the variance of the estimated wavelet coefficients (recall $\gamma_{m,\ell}$ are weighting
constants),
\begin{align}
\mathbb{V}\text{ar} \left(\widehat b_{\kappa} \right) & = \mathbb{V}\text{ar} \left(b_{\kappa} +\sum_{m \in C_j} \sum_{\ell = 1}^M \frac{\gamma_{m,\ell} n^{-\alpha_\ell/2}\sigma_\ell \overline{g_{m,\ell}}z_{m\ell}}{ \sum_{\ell=1}^M \gamma_{m,\ell}
|g_{m,\ell}|^2}\overline{\Psi^\kappa_{m}} \right)\nonumber\\
& = \sum_{\ell = 1}^M \sum_{m,m' \in C_j}\frac{\gamma_{m,\ell}\gamma_{m',\ell}\sigma_\ell^2 n^{-\alpha_\ell} |mm'|^{1/2 -H_\ell}\overline{g_{m,\ell}}g_{m',\ell} \overline{\Psi^\kappa_{m}}{\Psi^\kappa_{m'}}}{\left( \sum_{\ell=1}^M \gamma_{m,\ell}|g_{m,\ell}|^2\right)\left( \sum_{\ell=1}^M \gamma_{m',\ell}
|g_{m',\ell}|^2\right)} \nonumber\\
&\qquad \qquad \times \sum_{j' \in \mathbb{Z}}\1{\left\{ \frac{m-m'}{2^{j'}} \in \mathbb{Z} \right\}}\psi_{m2^{-j'}} \overline{\psi_{m'2^{-j'}}},\label{eq:varBetaK}
\end{align}
where the second equality follows by \eqref{eq:covZHsum} and the independence of the fBms. Applying \autoref{lem:identity-covariance} to \eqref{eq:varBetaK} yields,
\begin{equation}
\mathbb{V}\text{ar} \left(\widehat b_{\kappa} \right) = \sum_{\ell = 1}^M \sum_{m \in C_j}\frac{\gamma_{m,\ell}^2 \sigma_\ell^2 n^{-\alpha_\ell} |m|^{1 -2H_\ell}|g_{m,\ell}|^2 |\Psi^\kappa_{m}|^2}{\left( \sum_{\ell=1}^M \gamma_{m,\ell}|g_{m,\ell}|^2\right)^2}. \label{eq:varBetaK-simple}
\end{equation}
Using the Cauchy--Schwarz inequality we have,
\begin{align*}
\left( \sum_{\ell=1}^M \gamma_{m,\ell} |g_{m,\ell}|^2 \right)^2 & \le \left( \sum_{\ell=1}^M \gamma_{m,\ell}^2 \sigma_\ell^2 n^{-\alpha_\ell}|m|^{1-2H_{\ell}}|g_{m,\ell}|^2\right) \left( \sum_{\ell=1}^M \sigma_\ell^{-2}n^{\alpha_\ell} |m|^{2H_{\ell}-1}|g_{m,\ell}|^2\right)
\end{align*}
with equality when $\gamma_{m,\ell} = \gamma_{m,\ell}^* \coloneqq n^{\alpha_\ell}\sigma_\ell^{-2}|m|^{2H_\ell-1}$. We use this choice of optimal weights, $\gamma_{m,\ell}^*$, starting with the case of regular-smooth convolution,
\begin{align*}
\mathbb{V}\text{ar} \left(\widehat b_{\kappa} \right) &= \sum_{m \in C_j}|\Psi^\kappa_{m}|^2\sum_{\ell = 1}^M\frac{\gamma_{m,\ell}^2\sigma_\ell^2 n^{-\alpha_\ell} |m|^{1-2H_\ell}|g_{m,\ell}|^2}{\left( \sum_{\ell=1}^M \gamma_{m,\ell}|g_{m,\ell}|^2\right)^2}\nonumber\\
&= \sum_{m \in C_j} |\Psi^\kappa_m |^2\left( \sum_{\ell=1}^M \sigma_\ell^{-2}n^{\alpha_\ell} |m|^{2H_{\ell}-1}|g_{m,\ell}|^2\right)^{-1}\nonumber\\
& \le C \int_\mathbb{R} |\Psi(x) |^2\, dx\, \left( \sum_{\ell = 1}^M n^{\alpha_\ell}\inf_{x \in C_j}|x|^{2H_{\ell}-1} \inf_{y \in C_j }|g_{y,\ell}|^2\right)^{-1} \nonumber\\
& = \mathcal O \left( \left( \sum_{\ell = 1}^M n^{\alpha_\ell}2^{j(1-\alpha_\ell-2\nu_\ell)}\right)^{-1}\right)\nonumber\\
& = \mathcal O \left( \min_{1\le \ell \le M}n^{-\alpha_\ell} 2^{-j(1-\alpha_\ell-2\nu_\ell)}\right).
\end{align*}
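The collapse of the variance bound under the optimal weights can be verified numerically: with $\gamma_{m,\ell}^*$ both Cauchy--Schwarz factors reduce to the same sum, so the inequality holds with equality. The Python sketch below uses randomly generated, hypothetical channel quantities ($\sigma_\ell$, $\alpha_\ell$, $|g_{m,\ell}|^2$) in place of the real ones.

```python
import numpy as np

rng = np.random.default_rng(1)
M, n = 4, 4096
alpha = rng.uniform(0.1, 1.0, M)      # memory parameters per channel
H = 1 - alpha / 2                     # Hurst parameters, H = 1 - alpha/2
sigma = rng.uniform(0.5, 2.0, M)      # channel noise scales
m = 37                                # a fixed Fourier index
g2 = rng.uniform(0.1, 1.0, M)         # |g_{m,l}|^2 (hypothetical)

# optimal weights gamma* = n^alpha * sigma^-2 * |m|^(2H-1)
gamma = n ** alpha * sigma ** -2 * abs(m) ** (2 * H - 1)

lhs = (gamma * g2).sum() ** 2
f1 = (gamma ** 2 * sigma ** 2 * n ** -alpha * abs(m) ** (1 - 2 * H) * g2).sum()
f2 = (sigma ** -2 * n ** alpha * abs(m) ** (2 * H - 1) * g2).sum()
assert np.isclose(lhs, f1 * f2)       # Cauchy-Schwarz holds with equality
```

Indeed, substituting $\gamma^*$ makes each summand of both right-hand factors equal to $\gamma^*_{m,\ell}|g_{m,\ell}|^2$, so each factor equals $\sum_\ell \gamma^*_{m,\ell}|g_{m,\ell}|^2$.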
Consider the case of box-car convolution. In particular, for $x \in \mathbb{R}$, define the distance to the nearest integer, $\norm{x} \coloneqq \inf \left\{ |x - r|: r \in \mathbb{Z} \right\}$. Then the box-car Fourier coefficients satisfy,
\[
\frac{2\norm{m c_\ell}}{\left|\pi mc_\ell \right|} \le |g_{m, \ell}| \le \frac{\norm{m c_\ell}}{\left| mc_\ell \right|},
\]
(see, for example, p.~298 of \cite{DeCanditiis-Pensky-2006}). Using this bound with the same optimal weights $\gamma_{m,\ell}^*$ and the bound $|\Psi_m^\kappa|^2 \le 2^{-j}$ in \eqref{eq:varBetaK-simple},
\begin{align*}
\mathbb{V}\text{ar} \left(\widehat b_{\kappa} \right) &= \sum_{m \in C_j} |\Psi^\kappa_m |^2\left( \sum_{\ell=1}^M \sigma_\ell^{-2}n^{\alpha_\ell} |m|^{2H_{\ell}-1}|g_{m,\ell}|^2\right)^{-1}\nonumber\\
& \le \tfrac{2}{\pi}2^{-j} n^{-\alpha_*} \sum_{m \in C_j} m^2 \left( \sum_{\ell=1}^M c_\ell^{-2}\sigma_\ell^{-2} |m|^{2H_{\ell}-1}\norm{mc_\ell}^2\right)^{-1}\nonumber\\
& = \mathcal O \left( 2^{j(\alpha^*-2)} n^{-\alpha_*} \sum_{m \in C_j} m^2 \left( \sum_{\ell=1}^M \norm{mc_\ell}^2\right)^{-1} \right)\nonumber\\
& = \mathcal O \left( n^{-\alpha_*} j2^{j(1+\alpha^*+1/M)}\right).
\end{align*}
The last bound follows from a result in the proof of Lemma 4 in \cite{DeCanditiis-Pensky-2006} where,
\[
\sum_{m \in C_j} m^2 \left( \sum_{\ell=1}^M \norm{mc_\ell}^2\right)^{-1} = \mathcal O \left( j2^{j(3+1/M)}\right).
\]
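The box-car coefficient bounds used above are the classical sandwich $2\norm{x} \le |\sin \pi x| \le \pi \norm{x}$. Assuming the usual sinc form $g_{m,\ell} = \sin(\pi m c_\ell)/(\pi m c_\ell)$ (an assumption made here purely for illustration; only the bounds are used in the proof), they can be checked numerically:

```python
import math

def dist_to_int(x):
    """Distance from x to the nearest integer, ||x||."""
    return abs(x - round(x))

c = 1 / math.sqrt(5)   # an (assumed) badly approximable box-car width
for m in range(1, 2000):
    g = math.sin(math.pi * m * c) / (math.pi * m * c)  # sinc coefficient
    d = dist_to_int(m * c)
    # 2||mc||/|pi m c| <= |g| <= ||mc||/|m c|, up to float tolerance
    assert 2 * d / (math.pi * m * c) - 1e-12 <= abs(g) <= d / (m * c) + 1e-12
```

The lower bound comes from the chord inequality $\sin \pi x \ge 2x$ on $[0, 1/2]$, the upper from $|\sin \pi x| \le \pi \norm{x}$.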
Consider the final case of super-smooth convolution. Using similar arguments, it can be shown that,
\begin{align}
\mathbb{V}\text{ar} \left(\widehat a_{\kappa} \right) & = \sum_{m \in C_j} |\Phi^\kappa_m |^2\left( \sum_{\ell=1}^M \sigma_\ell^{-2}n^{\alpha_\ell} |m|^{2H_{\ell}-1}|g_{m,\ell}|^2\right)^{-1}\nonumber\\
& \le C\sum_{m \in C_j} |\Phi^\kappa_m |^2\, \left( \sum_{\ell = 1}^M n^{\alpha_\ell}\inf_{x \in C_j}|x|^{2H_{\ell}-1} \inf_{y \in C_j }|g_{y,\ell}|^2\right)^{-1} \nonumber\\
& = \mathcal O \left( \left( \sum_{\ell = 1}^M n^{\alpha_\ell}2^{j(1-\alpha_\ell)} \inf_{y \in C_j }|y|^{-2\gamma_\ell }e^{-2\theta_\ell |y|^{\beta_\ell}}\right)^{-1}\right)\nonumber\\
& = \mathcal O \left( \min_{1\le \ell \le M}n^{-\alpha_\ell} 2^{-j(1-\alpha_\ell - 2 \nu_{\ell})} e^{2\theta_\ell 2^{j\beta_\ell}}\right).\label{eq:varbound-super-smooth}
\end{align}
\subsection{The maxiset theorem}
\label{sec:thm-maxi}
For completeness, we give the statement of the following theorem that is borrowed from Theorem 6.1 in \cite{Kerkyacharian-Picard-2000}. We also refer to Section \ref{sec:tem} below for the definition of the Temlyakov property. First, we introduce some notation: $\mu$ will denote the measure such that for $j\in \mathbb{N},\; k\in \mathbb{N}$ and $0 < q < p$,
\begin{equation*}
\mu\{(j,k)\}=\|\tau_j\psi_{j,k}\|_p^p=
\tau_j^p2^{j(\frac{p}{2}-1)}\|\psi\|_p^p,
\end{equation*}
\begin{equation*}
l_{q,\infty}(\mu)=\left\{ f \in L^p,\; \sup_{\lambda>
0}\lambda^q \mu\{(j,k) : \;
|b_{j,k}|>\tau_j\lambda\}<\infty\right\}.
\end{equation*}
\begin{theorem}
\label{maxlp} Let $ p>1$, $0<q<p$, $ \{\psi_{j,k}, j\ge -1,\; k=0,1,\ldots,2^j\} $ be a periodised wavelet basis of $L^2(T)$, $T=[0,1]$, and $\tau_{j}$ be a positive sequence such that the heteroscedastic basis $\tau_{j}\psi_{j,k}$ satisfies the Temlyakov property. Suppose that $\Lambda_n $ is a set of pairs $(j,k)$ and that $c_n$ is a deterministic sequence tending to zero with
\begin{equation}\label{eq:condlam}
\sup_n\,\mu\{\Lambda_n\}\,c_n^{p}<\infty.
\end{equation}
If, for any $n$ and any pair $\kappa=(j,k) \in \Lambda_n$, we have
\begin{eqnarray}
\label{eq:cond.1} \mathbb{E} |\widehat b_{\kappa} - b_{\kappa} |^{2p}
&\leq& C \,(\tau_{j}\,c_n)^{2p},
\\
\label{eq:cond.2} \ensuremath{{\mathbb P}} \Big( |\widehat{b_{\kappa}}- b_{\kappa} |
\geq \eta\, \tau_{j}\, c_n/2 \Big) &\leq& C \,( c_n^{2p}\wedge
c_n^{4}),
\end{eqnarray}
for some positive constants $\eta$ and $C$, then, the wavelet based estimator
\begin{equation}
\widehat{f}_n(t)=\sum_{\kappa\in \Lambda_n}
\,\widehat b_{\kappa}\1{\{|\widehat b_{\kappa}|\geq\,\eta\,\tau_{j}\,c_n\}}\psi_{\kappa}(t), \quad t \in T, \label{new}
\end{equation} is such that, for all positive integers $n$,
\[
\mathbb{E} \|\widehat{ f_n} -f \|_{p}^p \leq C\, c_n^{p-q},
\]
if and only if
\begin{equation}
f(\cdot) \in l_{q,\infty}(\mu), \label{eq:maxi1}
\end{equation}
and \begin{equation} \sup_nc_n^{q-p} \| f-\sum_{\kappa\in\Lambda_n}
{b_{\kappa}}\psi_{\kappa}\|_p^p<\infty. \label{eq:maxi2}
\end{equation}
\end{theorem}
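The estimator \eqref{new} is a level-dependent hard thresholding rule: a coefficient survives only if its magnitude is at least $\eta\,\tau_j\,c_n$. A minimal Python sketch, with hypothetical coefficient values in place of the real Meyer coefficients:

```python
import numpy as np

def threshold_estimate(bhat, tau, eta, c_n):
    """Level-dependent hard thresholding as in the maxiset theorem:
    keep bhat[j][k] only when |bhat[j][k]| >= eta * tau[j] * c_n.

    bhat : list over levels j of arrays of estimated coefficients
    tau  : per-level scaling sequence tau_j
    """
    return [b * (np.abs(b) >= eta * t * c_n) for b, t in zip(bhat, tau)]

# toy illustration with hypothetical numbers
bhat = [np.array([0.9, 0.01]), np.array([0.3, 0.02, 0.5])]
tau = [1.0, 2.0]
out = threshold_estimate(bhat, tau=tau, eta=1.0, c_n=0.1)
# level 0 threshold 0.1 keeps 0.9; level 1 threshold 0.2 keeps 0.3, 0.5
```

The surviving coefficients would then be recombined with the wavelets $\psi_\kappa$ to form $\widehat f_n$.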
This theorem identifies the `maxiset' of a general wavelet thresholding estimator of the form \eqref{new}. This is done by using conditions \eqref{eq:maxi1} and \eqref{eq:maxi2} for an appropriate choice of $q$. In the proofs of Theorems \ref{thm:main-1} and \ref{thm:main-2}, we will choose $q$ according to the {\em dense} or the {\em sparse} regions as follows
\begin{equation}
\label{eq:q.dense} q=q_d:=\frac{(2\nu_*+1)p}{2s+2\nu_*+1}, \quad
\hbox{if}\quad s\geq \frac{2\nu_*+1}{2}\Big(\frac{p}{\pi_0}-1\Big )
\end{equation}
\begin{equation}
\label{eq:q.sparse} q=q_s:=\frac{(2\nu_*+1)
p/2-1}{s-1/\pi_0+(2\nu_*+1)/2},\quad \quad \hbox{if}\quad s \leq
\frac{2\nu_*+1}{2}\Big(\frac{p}{\pi_0}-1\Big ).
\end{equation}
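As a sanity check on these two choices, at the phase boundary $s = \frac{2\nu_*+1}{2}\left(\frac{p}{\pi_0}-1\right)$ both expressions reduce to $q = \pi_0$; a quick numerical sketch (with arbitrary illustrative parameter values) confirms this.

```python
def q_dense(s, p, pi0, nu):
    # q_d = (2 nu + 1) p / (2 s + 2 nu + 1)
    return (2 * nu + 1) * p / (2 * s + 2 * nu + 1)

def q_sparse(s, p, pi0, nu):
    # q_s = ((2 nu + 1) p / 2 - 1) / (s - 1/pi0 + (2 nu + 1)/2)
    return ((2 * nu + 1) * p / 2 - 1) / (s - 1 / pi0 + (2 * nu + 1) / 2)

p, pi0, nu = 4.0, 1.5, 0.7            # illustrative values only
s_boundary = (2 * nu + 1) / 2 * (p / pi0 - 1)
qd = q_dense(s_boundary, p, pi0, nu)
qs = q_sparse(s_boundary, p, pi0, nu)
assert abs(qd - pi0) < 1e-12 and abs(qs - pi0) < 1e-12
```

This matches the embedding argument of Section \ref{sec:imbII}, where the dense/sparse split is governed by the sign of $\pi_0 - q$.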
We first verify \eqref{eq:condlam}, starting with the case of regular-smooth convolutions. Using \eqref{eq:j1}, simple algebra shows that
\begin{align*}
\mu(\{\Lambda_n\}) &= \sum_{j\le j_1}\sum_{k=0}^{2^j-1}\mu\{(j,k)\}=\sum_{j\le j_1}2^j\mu\{(j,k)\} \nonumber \\
&= \mathcal O (1)\sum_{j\le j_1}2^j2^{j(p/2-1)}\tau_j^p=\mathcal O (1)\sum_{j\le
j_1}2^{jp(1/2+\nu_*)} \nonumber \\
&=\mathcal O (2^{j_1p(1/2+\nu_*)})=\mathcal O (c_n^{-p}),
\end{align*}
where $c_n$ is given by \eqref{eq:cn}, since it is easily seen in this case that $\tau_j^2=\mathcal O (2^{2j\nu_*})$ with $\nu_*$ given by \eqref{eq:smooth.optimal.nu} (compare also with p. 306 of \cite{DeCanditiis-Pensky-2006}). A similar bound can be shown for the box-car case with $\nu_*$ replaced with $\widetilde \nu_*$ given by \eqref{eq:box.car.optimal.nu}.
We now verify \eqref{eq:cond.1} and \eqref{eq:cond.2}. Since the random variables $\widehat b_{\kappa}-b_{\kappa}$ are Gaussian, the higher moment bound
\eqref{eq:cond.1} follows from the variance inequality. Similarly, denoting by $Z$ a standard Gaussian random variable,
\begin{align*}
\ensuremath{{\mathbb P}}\left(|\widehat b_{\kappa}-b_{\kappa}|>\zeta\tau_j c_n/2 \right) &= 2\ensuremath{{\mathbb P}} \left( Z \ge \frac{\zeta\sqrt{\log n}}{2} \right)\nonumber\\
& \le \frac{4n^{-\zeta^2/8}}{\zeta \sqrt{\log n}}\nonumber \\
&= O\big( c_n^4\wedge c_n^{2p}\big),
\end{align*}
as long as $\zeta> 2\sqrt{(p\vee 2)2\xi}$. This proves \eqref{eq:cond.2}.
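The tail inequality in the display above follows from the Mills ratio bound $\ensuremath{{\mathbb P}}(Z \ge x) \le \phi(x)/x$, and can be double-checked numerically via the complementary error function for a few hypothetical $(n,\zeta)$ pairs:

```python
import math

def gauss_tail(x):
    """P(Z >= x) for standard normal Z, via the complementary error function."""
    return 0.5 * math.erfc(x / math.sqrt(2))

for n in (256, 4096, 65536):
    for zeta in (1.0, 2.0, 4.0):
        x = zeta * math.sqrt(math.log(n)) / 2
        lhs = 2 * gauss_tail(x)
        rhs = 4 * n ** (-zeta ** 2 / 8) / (zeta * math.sqrt(math.log(n)))
        assert lhs <= rhs   # 2 P(Z >= zeta sqrt(log n)/2) <= 4 n^{-zeta^2/8}/(zeta sqrt(log n))
```

Since $e^{-x^2/2} = n^{-\zeta^2/8}$ for $x = \zeta\sqrt{\log n}/2$, the Mills ratio bound gives the display with an extra factor $1/\sqrt{2\pi} < 1$ to spare.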
\subsection{Temlyakov property}
\label{sec:tem}
As seen in Appendix A in \cite{Johnstone-et-al-2004}, the basis $\{\tau_j \psi_{j,k}(\cdot)\}$ satisfies the Temlyakov property as soon as
\begin{equation}
\label{tem:1-fan} \sum_{j \in \Lambda_n} 2^j\, \tau_j^2\leq
C\sup_{j \in \Lambda_n} \big ( 2^j \tau_j^2\big )
\end{equation}
and
\begin{equation}
\label{tem:2-fan} \sum_{j \in \Lambda_n} 2^{jp/2}\, \tau_j^p\leq C\,
\sup_{j \in \Lambda_n} \big ( 2^{jp/2} \tau_j^p\big ),\quad 1\leq p<2.
\end{equation}
Recall that $\tau_j^2=\mathcal O (2^{2\nu_*j})$ (regular-smooth convolutions) and $\tau_j^2=O\big(j2^{2\widetilde \nu_*j}\big)$ (box-car convolutions) with $\nu_*$ and $\widetilde \nu_*$ given by \eqref{eq:smooth.optimal.nu} and \eqref{eq:box.car.optimal.nu}. Hence, \eqref{tem:1-fan} and \eqref{tem:2-fan} are verified by direct calculations.
\subsection{Besov embedding and maxiset conditions}
\label{sec:imbII}
We recall that
\begin{equation}
\label{eq:Inc1} \Besov{\pi_0}{r}{s} \subseteq \Besov{p}{r}{s''},\quad
\hbox{if}\quad \pi_0\geq p, \quad s\geq s'',
\end{equation}
and
\begin{equation}
\label{eq:Inc2} \Besov{\pi_0}{r}{s} \subseteq \Besov{p}{r}{s''},\quad
\hbox{if} \quad \pi_0 < p, \quad s-1/\pi_0 = s''-1/p.
\end{equation}
For both {\em dense} (\ref{eq:q.dense}) and {\em sparse}
(\ref{eq:q.sparse}) regions, we look for a Besov scale $\delta$ such
that
$\Besov{\pi_0}{r}{\delta} \subseteq l_{q,\infty}.$
As usual, we note that it is easier to work with
\[
l_q(\mu)=\left\{f(\cdot) \in L^p(T):~f=\sum_{j,k}b_{j,k}\psi_{j,k} \;\;\text{such that}\;\; \sum_{j,k\in
A_j}\frac{|\beta_{jk}|^q}{\tau_j^q}\left\|\tau_j\psi_{j,k}\right\|_p^p<\infty\right\},
\]
where $A_j$ is a set of cardinality proportional to $2^j$. Using
(\ref{eq:sigma.j}) and the fact that
\[
\left\|\tau_j\psi_{j,k}\right\|_p^p=\tau_j^p\,
2^{j(p/2-1)}=2^{j((2\nu_*+1) p/2-1)},
\]
we see that $f(\cdot)\in l_q(\mu)$ if,
\[
\sum_{j\geq 0} 2^{j ( (2\nu_*+1) p-2 -2\nu_*q)
/2}\sum_{k=0}^{2^j-1} |b_{j,k}|^q= \sum_{j\geq 0} 2^{jq \Big [
\frac{(2\nu_*+1)(p-q) }{2q}+\frac{1}{2}-\frac{1}{q}\Big ]
}\sum_{k=0}^{2^j-1} |b_{j,k}|^q <+\infty.
\]
From \eqref{eq:fBesov}, the latter condition holds when
\begin{equation}
\label{eq:delta.s} f(\cdot) \in \Besov{q}{q}{\delta}(T) \quad
\text{for} \quad \delta=\frac{(2\nu_*+1)}{2}\bigg
(\frac{p}{q}-1\bigg ).
\end{equation}
Now, depending on whether we are in the {\em dense} \eqref{eq:q.dense} or {\em sparse} \eqref{eq:q.sparse} regions, we
look for $s$ and $\pi$ such that
\begin{equation}
\label{eq:bsvEMB} \Besov{\pi_0}{r}{s} \subseteq
\Besov{q}{q}{\delta}.
\end{equation}
This embedding can be found by exploiting the known monotonicity of Besov balls, namely for $0 < r \leq q$, $\Besov{\pi_0}{r}{s} \subseteq \Besov{\pi_0}{q}{s}$, along with \eqref{eq:Inc1} or \eqref{eq:Inc2}.
\noindent{\bf The dense region.} By definition \eqref{eq:q.dense} of $q=q_d$, we have $s \ge (\nu_*+1/2)(p/\pi_0 - 1)$. Eliminate $p$ by substituting $p = q_d (2s + 2\nu_* +1)/(2\nu_*+1)$ yields $\pi_0\geq q_d$. Hence, \eqref{eq:bsvEMB} follows from \eqref{eq:Inc1} as long as $s\geq \delta=\frac{(2\nu_*+1)}{2}(\frac{p}{q}-1 )$, which is always true in this {\em dense} region since $\delta = s > 0$ when $q = q_d$.
\vskip2mm \noindent{\bf The sparse region.} Take $q=q_s$ and $\delta=\frac{(2\nu_*+1)}{2}\left(\frac{p}{q_s}-1\right)=(2\nu_*+1)\frac{sp-p/\pi_0+1}{(2\nu_*+1) p-2}$. We consider two cases. If $\pi_0>q_s$, we use the embedding \eqref{eq:Inc1}. We have to check that $s> \delta = (2\nu_*+1)\frac{sp-p/\pi_0+1}{(2\nu_*+1) p-2}$ which is equivalent to $s<\frac{(2\nu_*+1)}{2}\left(\frac{p}{\pi_0}-1\right)$, which is true in the {\em sparse} region. Note that we require $\delta>0$ which implies either $(i)$ $p > 2/(2\nu_*+1)$ and $s > 1/\pi_0 - 1/p$ or $(ii)$ $p < 2/(2\nu_*+1)$ and $s < 1/\pi_0 - 1/p$. The $(ii)$ scenario is impossible since $p< 2/(2\nu_*+1)$ and $s < 1/\pi_0 - 1/p$ is a contradiction of $s \ge 1/\pi_0 - \nu_* - 1/2$. On the other hand, by definition, when in the sparse phase, $1/\pi_0 - \nu_* - 1/2 < (\nu_*+1/2)(p/\pi_0 - 1)$ which implies $p > 2/(2\nu_*+1)$ and consequently verifies that $s > 1/\pi_0 - 1/p$ since $s > 1/\pi_0 - \nu_* - 1/2$. Thus we established \eqref{eq:bsvEMB} for $q_s<\pi_0<q_d$. By definition \eqref{eq:q.sparse} of $q=q_s$, if $\pi_0 \leq q_s$, the corresponding $\delta$ fulfils $s-1/\pi_0=s'-1/q$. In this case, \eqref{eq:Inc2} and \eqref{eq:delta.s} ensure that
\[
\Besov{\pi_0}{r}{s} \subseteq \Besov{q}{q}{s'}\equiv l_q(\mu),
\]
as had to be proved.
To apply \autoref{maxlp}, it remains to verify \eqref{eq:maxi2}; that is, we need to find a $\delta > 0$ such that, for any $f \in \Besov{p}{r}{\delta}$, \eqref{eq:maxi2} is satisfied. Indeed,
\begin{align*}
c_n^{q-p} \norm{f - \sum_{j \le j_1}\sum_{k}b_{j,k}\Psi_{j,k}}_p^p \le C c_n^{q-p}2^{-j_1 \delta p} \norm{f}^p_{\Besov{p}{r}{\delta}} = C c_n^{q-p+ 2\delta p/(2\nu_* + 1)} \norm{f}^p_{\Besov{p}{r}{\delta}}.
\end{align*}
The above is bounded uniformly in $n$ if we choose $\delta = \frac{(2\nu_* + 1)}{2}(1 - q/p)$. Now we need to find $s$ and $\pi_0$ such that $\Besov{\pi_0}{r}{s}\subseteq \Besov{p}{r}{\delta}$.
Consider the first case $\pi_0 \ge p$. This case cannot occur in the sparse phase, due to \eqref{eq:rates-sparse-smooth} and \eqref{eq:rates-sparse-boxcar} and the assumption that $s$ is positive. In the dense phase, use the embedding \eqref{eq:Inc1} with $s'' = \delta$ and $q = q_d$. Therefore, \eqref{eq:Inc1} holds if $s \ge \frac{(2\nu_* + 1)}{2}(1 - q_d/p)$. This requires,
\begin{align*}
s &\ge \frac{(2\nu_* + 1)}{2}(1 - q_d/p)\\
&= \frac{(2\nu_* + 1)s}{2s + 2\nu_* + 1},
\end{align*}
which always holds under the assumption that $s > 0$.
Now consider the dense case when $\pi_0 < p$. In this scenario, use embedding \eqref{eq:Inc2} by defining $s - 1/\pi_0 = s'' - 1/p$, which ensures $\Besov{\pi_0}{r}{s} \subseteq \Besov{p}{r}{s''}$. Then complete the embedding using \eqref{eq:Inc1} (namely, $\Besov{p}{r}{s''} \subseteq \Besov{p}{r}{\delta}$), which requires $s'' \ge \delta$ with $q = q_d$, or equivalently, after rearrangement, $2ss'' + (2\nu_* + 1)(1/p - 1/\pi_0) \ge 0.$ The left hand side is greater than $(s - 1/\pi_0)(p/\pi_0 - 1)(2\nu_* + 1) \ge 0$ when $s \ge (\nu _*+ 1/2)(p/\pi_0 -1)$ (which is true in the dense phase).
The last case to consider is the sparse case when $\pi_0 < p$. Again introduce a new Besov scale $s''$ defined by $s - 1/\pi_0 = s'' - 1/p$ and apply an argument similar to the one above, which requires $s'' \ge \delta$ with $q = q_s$. This is satisfied if $s > 1/\pi_0$, which always holds.
\subsection{Proofs of \autoref{thm:main-1} and \autoref{thm:main-2}}
The proofs of Theorems \ref{thm:main-1} and \ref{thm:main-2} are a direct application of \autoref{maxlp} with $j_1$, $\zeta$, $\tau_j$, and $c_n$ of Section \ref{sec:2}. Combining the results of Sections \ref{sec:tem} and \ref{sec:imbII}, we see that all conditions of \autoref{maxlp} are satisfied. Using the embedding results of Section \ref{sec:imbII}, for any $f(\cdot) \in \Besov{\pi_0}{r}{s}$ we derive the rate exponent $\gamma = \gamma_S$ or $\gamma = \gamma_B$ given by \eqref{eq:rates-dense-smooth} and \eqref{eq:rates-dense-boxcar}, respectively for smooth and boxcar convolutions, using \eqref{eq:q.dense} for $q$ when $s\geq \frac{(2\nu_*+1)}{2}(p/\pi_0-1)$; and the rate exponent $\gamma = \gamma_S$ or $\gamma = \gamma_B$ given by \eqref{eq:rates-sparse-smooth} and \eqref{eq:rates-sparse-boxcar}, respectively for smooth and boxcar convolutions, using \eqref{eq:q.sparse} for $q$ when $1/\pi_0 \leq s < \frac{(2\nu_*+1)}{2}(p/\pi_0-1)$. Here $\nu_*$ is given either by \eqref{eq:smooth.optimal.nu} (regular-smooth convolutions) or \eqref{eq:box.car.optimal.nu} (box-car convolutions).
For the super-smooth scenario in \autoref{thm:main-1} we appeal to the same arguments used in the proof of \cite[Theorem 4.2]{Petsa-Sapatinas-2009}. Consider the moment bound directly with the estimator \eqref{eq:super-smooth.estimator},
\begin{align}
\label{eq:main-3-1} \mathbb{E} \norm{\widehat f_n - f}^p_p &\le 2^{p-1} \mathbb{E} \norm{\sum_{k = 0}^{2^{j_0} - 1} \left(\widehat a_{j_0,k} - a_{j_0,k} \right)\Phi_{j_0,k}(t) }^p_p\\
&\qquad \qquad + 2^{p-1} \norm{\sum_{j = j_0}^{\infty} \sum_{k = 0}^{2^j-1} b_{j,k}\Psi_{j,k}(t) }_p^p.\nonumber
\end{align}
The two terms in \eqref{eq:main-3-1} can be bounded separately with \eqref{eq:varbound-super-smooth} and the scale level \eqref{eq:j0.super-smooth},
\begin{align}
\mathbb{E} \norm{\sum_{k = 0}^{2^{j_0} - 1} \left(\widehat a_{j_0,k} - a_{j_0,k} \right)\Phi_{j_0,k}(t) }^p_p &\le C 2^{j_0(p/2-1)} \sum_{k = 0}^{2^{j_0}-1} \, \mathbb{E} |\widehat a_{j_0,k} - a_{j_0,k}|^p\nonumber\\
&\le C n^{-\alpha_{\ell_*}p/2}2^{j_0p/2(\alpha_{\ell_*}+ 2\nu_{\ell_*})}e^{a_{\ell_*}p2^{j_0\beta_{\ell_*}}}\nonumber\\
&\le C n^{-\epsilon p/2}(\log n)^{p/2(\alpha_{\ell_*}+ 2\nu_{\ell_*})}\nonumber\\
&= o((\log n)^{-s^*p/\beta_{\ell_*}}).\label{eq:main-3-2}
\end{align}
For the second term, use the properties of Besov spaces,
\begin{align}
\norm{\sum_{j = j_0}^{\infty} \sum_{k = 0}^{2^j-1} b_{j,k}\Psi_{j,k}(t) }_p^p &\le \left( \sum_{j = j_0}^\infty C 2^{-j(s + 1/p - 1/\min (\pi_0,p))} \right)^p \nonumber\\
&= \mathcal O((\log n)^{-s^*p/\beta_{\ell_*}}).\label{eq:main-3-3}
\end{align}
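The series in the last display is geometric and dominated by its first term: writing $s^* = s + 1/p - 1/\min(\pi_0,p)$ for the exponent appearing in the sum (the identification of $s^*$ implicit in the display),
\[
\left( \sum_{j = j_0}^\infty C 2^{-j s^*} \right)^p \le C' 2^{-j_0 s^* p},
\]
and since the choice \eqref{eq:j0.super-smooth} of $j_0$ gives $2^{j_0 \beta_{\ell_*}} \asymp \log n$ (an identification we assume here), this is $\mathcal{O}((\log n)^{-s^* p/\beta_{\ell_*}})$.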
The result of \eqref{eq:rate-super-smooth} follows by combining \eqref{eq:main-3-1}, \eqref{eq:main-3-2} and \eqref{eq:main-3-3}.
\section*{Acknowledgements}
The authors would like to thank the Editor and two anonymous reviewers whose comments and suggestions led to an improved version and presentation of the paper.
\section{Introduction}
\subsection{Local-global principle for zero cycles}
Given a smooth projective variety defined over a global field, a natural and important problem is to find criteria for the existence of rational points and a description of the set of all rational points.
The Hasse principle and the weak approximation problem, together referred to as the local-global principle, give a characterization of this set in terms of the adelic points.
There are various obstructions to the local-global principle, notably the so-called Brauer-Manin obstruction. A conjecture due to Colliot-Th\'el\`ene states that for rationally connected varieties defined over a global field, this is the only obstruction.
The study of zero cycles, as natural generalizations of rational points, has also drawn much attention in recent years.
Motivated by the case of rational points,
Colliot-Th\'el\`ene has formulated the following local-global conjectures in \cite[Conjecture 2.2]{CTLocalGlobalChow} for zero cycles.
\begin{conj}\label{conj:CT1}
Let $X$ be a smooth projective variety defined over the function field $\mbb{F}_q(B)$ of a smooth curve $B$ defined over a finite field $\mbb{F}_q$.
For every place $\nu$ of $\mbb{F}_q(B)$, let $z_\nu \in CH_0(X_\nu)$. Suppose that for every element $A\in Br(X)\{\ell\}$, we have $\sum_{\nu} Inv(A(z_\nu))=0$. Then for all $n>0$, there is a cycle $z_n \in CH_0(X)$ such that for all $\nu$ we have
\[
cl(z_n) =cl(z_{\nu}) \in H^{2d}_{\text{\'et}}(X_{\nu}, \mu_{\ell^n}^{\otimes d}).
\]
\end{conj}
Here $Inv(A(z_\nu))$ means the value of $(A, z_\nu)$ under the pairing
\[
Br(X_\nu)\{\ell\} \times CH_0(X_\nu) \to \mbb{Q}/\mbb{Z}.
\]
A particular case of the above conjecture is the following.
\begin{conj}\label{conj:CT2}
Let $X$ be a smooth projective variety defined over the function field $\mbb{F}_q(B)$ of a smooth curve $B$ defined over a finite field $\mbb{F}_q$.
Suppose that for every place $\nu$ of $\mbb{F}_q(B)$, there is a zero cycle $$z_\nu \in CH_0(X_\nu)$$ of degree prime to $\ell$.
Suppose that for every element $A\in Br(X)\{\ell\}$, we have
\[\sum_{\nu} Inv(A(z_\nu))=0.
\]
Then there is a zero cycle $z \in CH_0(X)$ of degree prime to $\ell$.
\end{conj}
In this paper, for an abelian group $A$, we use $A \hat{\otimes} \mbb{Z}_\ell$ to denote the inverse limit $\lim \limits_{\longleftarrow} A/\ell^n A$.
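For instance, with this notation,
\[
\mbb{Z} \hat{\otimes} \mbb{Z}_\ell = \lim \limits_{\longleftarrow} \mbb{Z}/\ell^n \mbb{Z} = \mbb{Z}_\ell, \qquad \mbb{Q} \hat{\otimes} \mbb{Z}_\ell = \lim \limits_{\longleftarrow} \mbb{Q}/\ell^n \mbb{Q} = 0,
\]
since $\mbb{Q}/\ell^n\mbb{Q} = 0$ for every $n$; in particular $\hat{\otimes}\, \mbb{Z}_\ell$ kills divisible groups.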
The following stronger form of the above conjectures is also well-known.
\begin{conj}\label{conj:E}
Let $X$ be a smooth projective variety defined over a global field $K$. Let $\ell$ be a prime number invertible in $K$. There is an exact sequence:
\[
CH_0(X) \hat{\otimes} \mbb{Z}_\ell \to \Pi_{\nu \in \Omega(K)} CH_0(X_\nu) \hat{\otimes} \mbb{Z}_\ell \to Hom (Br(X)\{\ell\}, \mbb{Q}/\mbb{Z}).
\]
\end{conj}
Conjectures \ref{conj:CT1} and \ref{conj:CT2} are consequences of this via considering the commutative diagram of various cycle class maps.
On the other hand, if the cycle class map $CH_0(X_\nu) \hat{\otimes} \mbb{Z}_{\ell} \to H^{2d}_{\text{\'et}}(X_\nu, \mbb{Z}_{\ell}(d))$ is injective, Conjecture \ref{conj:CT1} and the stronger Conjecture \ref{conj:E} are equivalent.
In general the injectivity fails.
But we will see that in many (and conjecturally all) cases of interest to us, the injectivity holds.
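Schematically, the deduction can be organized by the commutative diagram (a sketch; the vertical arrows are the cycle class maps)
\[
\begin{array}{ccc}
CH_0(X) \hat{\otimes} \mbb{Z}_\ell & \longrightarrow & \prod_{\nu} CH_0(X_\nu) \hat{\otimes} \mbb{Z}_\ell \\
\downarrow & & \downarrow \\
H^{2d}_{\text{\'et}}(X, \mbb{Z}_\ell(d)) & \longrightarrow & \prod_{\nu} H^{2d}_{\text{\'et}}(X_\nu, \mbb{Z}_\ell(d))
\end{array}
\]
Given a family $(z_\nu)$ orthogonal to $Br(X)\{\ell\}$, the exact sequence of Conjecture \ref{conj:E} produces a global class in $CH_0(X)\hat{\otimes}\mbb{Z}_\ell$ with the same image in each local Chow group, and commutativity then matches the cycle classes in $H^{2d}_{\text{\'et}}(X_\nu, \mu_{\ell^n}^{\otimes d})$ for every $n$, which is the conclusion of Conjecture \ref{conj:CT1}.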
One of the main theorems of this article is the following.
\begin{thm}\label{thm:zerocycle}
Let $X$ be a smooth projective geometrically rational surface defined over the function field $\mbb{F}_q(B)$ of a smooth projective curve $B$. Then Conjecture \ref{conj:E}, and hence Conjectures \ref{conj:CT1} and \ref{conj:CT2}, hold true for $X$.
\end{thm}
By a happy coincidence, we deduce a corollary for rational points.
\begin{thm}\label{thm:delpezzo4}
Let $X$ be a del Pezzo surface of degree $4$ defined over a global function field of odd characteristic. Then the Brauer-Manin obstruction is the only obstruction to the Hasse principle for rational points on $X$.
\end{thm}
\begin{proof}
If there are rational points everywhere locally satisfying the Brauer-Manin constraint, then there is a zero cycle of degree $1$ over the function field by Theorem \ref{thm:zerocycle}.
A del Pezzo surface of degree $4$ is a complete intersection of $2$ quadrics in $\mbb{P}^4$.
Such a complete intersection has a rational point if and only if it has a zero cycle of odd degree.
Hence we have the result.
\end{proof}
\begin{rem}
One is also interested to study weak approximation for rational points on a del Pezzo surface of degree $4$.
For a del Pezzo surface of degree $4$ over a number field, assuming that there is a rational point, Salberger and Skorobogatov \cite{SS1991} prove that the Brauer-Manin obstruction is the only obstruction to weak approximation.
As the author has been informed by Colliot-Th\'el\`ene, essentially the same argument also proves that over a global function field of odd characteristic, Brauer-Manin obstruction is the only obstruction to weak approximation once there is a rational point.
In characteristic $2$, some partial results are contained in the joint work of the author with Letao Zhang \cite{WACubicGlobal}.
\end{rem}
We finish this section with some previously known results.
There is a vast literature on local-global principles for zero cycles and rational points on geometrically rational surfaces. Let us only mention a few relevant results and refer the reader to survey articles such as \cite{WittenbergRationalPointSurvey} for a more comprehensive list.
Colliot-Th\'el\`ene proved Conjecture \ref{conj:E} holds for ruled surfaces defined over number fields \cite{CT_zero_cycle_ruled}.
The global function field version for ruled surfaces is proved by Parimala-Suresh \cite{Parimala_Suresh_ruled}, whose proof depends on the computation of degree $3$ unramified cohomology and also establishes the integral Tate conjecture for conic bundles over surfaces defined over finite fields.
An interesting example of cubic surfaces of the form $(f+tg=0)\subset \mbb{P}^3 \times \mbb{A}^1_t$ is studied by Colliot-Th\'el\`ene-Swinnerton-Dyer \cite{CTSDPencil}. In addition to proving that the Hasse principle for zero cycles holds for cubic surfaces of this form, they also prove that the existence of rational points is equivalent to the existence of a zero cycle of degree $1$ for such surfaces.
The study of complete intersections of two quadrics has also drawn much attention. It starts with the work of Colliot-Th\'el\`ene, Sansuc, and Swinnerton-Dyer \cite{CTSD}. Heath-Brown proved that the Hasse principle for rational points holds for smooth complete intersections of two quadrics in $\mbb{P}^7$ over number fields \cite{Heath-Brown_2_quadrics}. Under the assumption of finiteness of Tate-Shafarevich groups of elliptic curves and the validity of Schinzel's hypothesis, Wittenberg proved that the Hasse principle holds for such complete intersections in $\mbb{P}^5$ and in some cases in $\mbb{P}^4$ over number fields \cite{Wittenberg_Thesis}.
The author has shown in a previous paper \cite{Hasse} that the Hasse principle for rational points holds for smooth complete intersections of two quadrics in $\mbb{P}^n, n\geq 5$, defined over a global function field of odd characteristic.
\subsection{Integral Tate conjecture}
Our approach to Theorem \ref{thm:zerocycle} is based on the close relation between an integral version of the Tate conjecture and Colliot-Th\'el\`ene's conjectures, first studied by Saito \cite{SaitoMotivicCoh_ArithmeticScheme} and Colliot-Th\'el\`ene \cite{CTLocalGlobalChow}.
Let $X$ be a smooth projective geometrically irreducible variety of dimension $d$ defined over a finite field $\mbb{F}$. We have the cycle class maps:
\begin{equation}\label{int_Tate}
CH_1(X) \otimes \mbb{Z}_\ell \to H^{2d-2}(X, \mbb{Z}_\ell(d-1)),
\end{equation}
\begin{equation}
CH_1(X) \otimes \mbb{Z}_\ell \to H^{2d-2}(X, \mbb{Z}_\ell(d-1)) \to H^{2d-2}(\bar{X}, \mbb{Z}_\ell(d-1))^G,
\end{equation}
\begin{equation}
CH_1(\bar{X}) \otimes \mbb{Z}_\ell \to \cup_{K/\mbb{F}} H^{2d-2}(\bar{X}, \mbb{Z}_\ell(d-1))^{G_K} \subset H^{2d-2}(\bar{X}, \mbb{Z}_\ell(d-1)).
\end{equation}
We also have the corresponding cycle class maps after tensoring with $\mbb{Q}_\ell$.
Tate conjecture predicts that the cycle class map on codimension $r$ cycles
\[
CH^r(X) \otimes \mathbb{Q}_\ell \to H^{2r}_{\text{\'et}}(X, \mathbb{Q}_\ell(r))
\]
is surjective for any smooth projective variety defined over a finite field.
While the cycle class map is in general not surjective with $\mbb{Z}_\ell$-coefficients, one is still interested in knowing in which cases surjectivity holds.
This is usually called the integral Tate conjecture (even though it is not true in general).
The results of this paper, together with some results proved by the author in \cite{zerocycleLaurent}, \cite{Coniveau}, strongly suggest that the following is true.
\begin{conj}\label{conj}
Let $X$ be a smooth projective variety defined over a finite field. Assume that $X$ is either separably rationally connected, or is a separably rationally connected fibration over a geometrically irreducible curve. Then the cycle class map
\[
CH_1(X) \otimes \mathbb{Z}_\ell \to H^{2d-2}_{\text{\'et}}(X, \mathbb{Z}_\ell(d-1))
\]
is surjective, where $d=\dim X$.
\end{conj}
We refer the reader to Theorem \ref{thm:integralTate} and Remark \ref{rem} for evidence supporting this conjecture.
The connection between the integral Tate conjecture and Conjectures \ref{conj:CT1} and \ref{conj:CT2} is the following.
\begin{thm}[\cite{CTLocalGlobalChow} Proposition 3.2, \cite{SaitoMotivicCoh_ArithmeticScheme} Corollary (8-6)]\label{thm:TateImpiesCT}
Let $\mbb{F}$ be a finite field, $C$ a smooth projective geometrically connected curve over $\mbb{F}$, and $K$ the function field of $C$.
Let $\mc{X}$ be a smooth projective geometrically connected variety of dimension $d+1$ defined over $\mbb{F}$, equipped with a morphism $p: \mathcal{X} \to C$, whose generic fiber is smooth and geometrically irreducible. Let $\ell$ be a prime different from the characteristic.
\begin{enumerate}
\item If the cycle class map
\[
CH^d(\mc{X}) \otimes \mbb{Z}_\ell \to H^{2d}_{\text{{\'e}t}}(\mc{X}, \mbb{Z}_\ell(d))
\]
is surjective, Conjectures \ref{conj:CT1} and \ref{conj:CT2} are true.
\item If the cycle class map
\[
CH^d(\mc{X}) \otimes \mbb{Z}_\ell \to H^{2d}_{\text{{\'e}t}}(\mc{X}, \mbb{Z}_\ell(d)) \to H^{2d}_{\text{{\'e}t}}(\overline{\mc{X}}, \mbb{Z}_\ell(d))^G
\]
is surjective, or if
\[
CH^d(\mc{X}) \otimes \mbb{Z}_\ell \to H^{2d}_{\text{{\'e}t}}(\mc{X}, \mbb{Z}_\ell(d))
\]
is surjective modulo torsion,
Conjecture \ref{conj:CT2} is true.
\end{enumerate}
\end{thm}
\begin{rem}
The cited references only contain a proof of the first statement, but the second statement follows from the same proof. The general result of Saito produces a cohomology class $\xi \in H^{2d}(\mc{X}, \mbb{Z}_\ell(d))$ whose restriction to each local place coincides with the class of $z_\nu$ (\cite[Proposition 3.1]{CTLocalGlobalChow}). The various forms of the integral Tate conjecture are simply used to find a global cycle whose class agrees with $\xi$ in the various cohomology groups. See also page 19 of the slides of Colliot-Th\'el\`ene's lecture at Cambridge in 2008 (available at \url{https://www.imo.universite-paris-saclay.fr/~jean-louis.colliot-thelene/expocambridge240809.pdf}).
\end{rem}
A result of Schoen \cite{SchoenIntegralTate} says that if the Tate conjecture is true for divisors on all smooth projective surfaces defined over finite fields, then for any smooth projective variety $V$ defined over a finite field $\mbb{F}$, the cycle class map
\[
CH_1(\bar{V})\otimes \mbb{Z}_\ell \to \cup_{K/\mbb{F}} H^{2d-2}(\bar{V}, \mbb{Z}_\ell(d-1))^{\text{Gal}(\bar{\mbb{F}}/K)}\subset H^{2d-2}(\bar{V}, \mbb{Z}_\ell(d-1))
\]
is surjective, where $\bar{V}$ is the base change of $V$ to an algebraic closure of $\mbb{F}$.
If furthermore $V$ is rationally connected or a rationally connected fibration over a curve, it is easy to see that every class in $H^{2d-2}(\bar{V}, \mbb{Q}_\ell(d-1))$ is algebraic. Thus every class in $H^{2d-2}(\bar{V}, \mbb{Z}_\ell(d-1))$ is fixed by some open subgroup of the Galois group.
So in this case, Schoen's theorem implies that we always have a surjection
\[
CH_1(V) \otimes \mbb{Z}_\ell \to H^{2d-2}(V, \mbb{Z}_\ell(d-1)),
\]
provided that the Tate conjecture holds for all surfaces.
The paper \cite{CTSzamuelyIntegralTate} discussed the implication of Schoen's result for varieties defined over $\bar{\mbb{F}}(C)$, the function field of a curve defined over $\bar{\mbb{F}}$.
Colliot-Th\'el\`ene and Kahn analyzed the surjectivity of the $\mbb{Z}_\ell$-coefficient cycle class map for codimension $2$ cycles and its relation with the degree $3$ unramified cohomology $H^3_{\text{nr}}(X, \mbb{Q}_\ell/\mbb{Z}_\ell(2))$ in \cite{CTKahnCycleCodim2}; over the complex numbers, such a relation is studied in \cite{CTVoisin}. For the sake of brevity, and since we will not need these notions in the other parts of this paper, we will not define this invariant; instead, we refer the interested reader to these papers and the references therein for definitions and properties of unramified cohomology. In particular, if the unramified cohomology $H^3_{\text{nr}}(X, \mbb{Q}_\ell/\mbb{Z}_\ell(2))$ vanishes, the cokernel of $CH^2(X) \otimes \mbb{Z}_\ell \to H^4(X, \mbb{Z}_\ell(2))$ is torsion free \cite[Th\'eor\`eme 2.2]{CTKahnCycleCodim2}.
Thus if, in addition, we know that the cokernel is torsion (for instance, if the Chow group of zero cycles with rational coefficients is universally supported in a surface \cite[Proposition 3.2]{CTKahnCycleCodim2}), we know the cycle class map is surjective.
One should also note that by the Tate conjecture, one expects the cokernel to be torsion.
In general, they deduced a short exact sequence relating various Chow groups of codimension $2$ cycles and degree $3$ unramified cohomology.
Their short exact sequence for varieties over finite fields reads as follows (\cite[Th\'eor\`eme 6.8]{CTKahnCycleCodim2}):
\begin{align}\label{eq:CTK}
\nonumber 0 \to &\text{Ker}(CH^2(X) \to CH^2(\bar{X})) \to H^1(\mbb{F}, \oplus_{\ell} H^3_\text{\'et}(\bar{X}, \mbb{Z}_\ell(2))_\text{tors})\\
\to &\text{Ker}(H^3_\text{nr}(X, \mbb{Q}/\mbb{Z}(2)) \to H^3_\text{nr}(\bar{X}, \mbb{Q}/\mbb{Z}(2)))\\
\nonumber \to &\text{Coker}(CH^2(X) \to CH^2(\bar{X})^G)\to 0.
\end{align}
Of course, one can deduce from this a similar exact sequence for $\ell$-primary torsion.
In particular, we can apply their results to $3$-folds, which relates the integral Tate conjecture to the vanishing of degree $3$ unramified cohomology.
Note that by the Lefschetz hyperplane theorem, if we can prove the integral Tate conjecture for one cycles on all $3$-folds, then we prove the integral Tate conjecture for one cycles on all smooth projective varieties.
Several groups of authors proved the vanishing of the degree $3$ unramified cohomology on certain threefolds and deduced the integral Tate conjecture for one cycles, thus proving Conjectures \ref{conj:CT1} and \ref{conj:CT2} for some surfaces defined over a global function field. See, for example, \cite{Parimala_Suresh_ruled} for the case of conic bundles over a surface, and \cite{CT_Scavia_ITC} and \cite{Scavia_ITC} for the case of a product of a curve with a $CH_0$-trivial surface.
We prove Theorem \ref{thm:zerocycle} as a consequence of the following case of the integral Tate conjecture for one cycles.
\begin{thm}\label{thm:integralTateSurface}
Let $\pi: \mc{X} \to B$ be a projective flat family of surfaces over a smooth projective curve $B$ defined over a finite field $\mbb{F}_q$. Assume that $\mc{X}$ is smooth and that the geometric generic fiber is a smooth rational surface. Then the integral Tate conjecture holds for one cycles; more concretely, the cycle class map
\[
CH_1(\mc{X})\otimes \mbb{Z}_\ell \to H^{4}_\text{\'et}(\mc{X}, \mbb{Z}_\ell(2))
\]
is surjective.
\end{thm}
In general, one can deduce the following geometric criterion for the validity of the integral Tate conjecture (Conjecture \ref{conj}) and of the local-global principles for separably rationally connected varieties defined over global function fields.
Given a variety $V$ defined over a field $k$, we denote by $A_1(V)$ the group of one cycles in $V$ modulo algebraic equivalence.
We also use $\overline{V}$ to denote the base change of $V$ to an algebraic closure of $k$.
\begin{thm}\label{thm:integralTate}
Let $\pi: \mc{X} \to B$ be a projective flat family of varieties over a smooth projective curve $B$ defined over a finite field $\mbb{F}_q$. Assume that $\mc{X}$ is smooth and that the generic fiber is smooth, separably rationally connected, and of dimension $d$. Consider the following hypotheses:
\begin{itemize}
\item [(A)] The cycle class map $A_1(\overline{\mc{X}})\otimes \mbb{Z}_\ell \to H^{2d}_\text{\'et}(\overline{\mc{X}}, \mbb{Z}_\ell(d))$ is surjective.
\item [(B)] The cycle class map $A_1(\overline{\mc{X}})\otimes \mbb{Z}_\ell \to H^{2d}_\text{\'et}(\overline{\mc{X}}, \mbb{Z}_\ell(d))$ is injective.
\item [(C)] The cycle class map from higher Chow groups
\[
\lim \limits_{\xleftarrow[n]{}}CH_1(\overline{\mc{X}}, 1, \mbb{Z}/\ell^n \mbb{Z}) \to H^{2d-1}_\text{\'et}(\overline{\mc{X}}, \mbb{Z}_\ell(d))
\]
is surjective.
\item [(D)] The coniveau filtration $N^1H^{2d-1}_\text{\'et}(\overline{\mc{X}}, \mbb{Z}_\ell)$ is the whole cohomology group $$H^{2d-1}_\text{\'et}(\overline{\mc{X}}, \mbb{Z}_\ell).$$
\end{itemize}
If $\overline{\mc{X}}$ satisfies hypotheses (A) and (B),
then the cycle class map
\[
CH_1(\mc{X})\otimes \mbb{Z}_\ell \to H^{2d}_\text{\'et}(\mc{X}, \mbb{Z}_\ell(d))\to H^{2d}_\text{\'et}(\overline{\mc{X}}, \mbb{Z}_\ell(d))^{Gal(\bar{\mbb{F}}_q/\mbb{F}_q)}
\]
is surjective, and Conjecture \ref{conj:CT2} holds for the generic fiber $X$ over $\mbb{F}_q(B)$.
If $\overline{\mc{X}}$ satisfies hypothesis (C) or (D),
then the cycle class map
\[
CH_1(\mc{X})_{\text{alg}}\otimes \mbb{Z}_\ell \to H^1(\mbb{F}_q, H^{2d-1}_\text{\'et}(\overline{\mc{X}}, \mbb{Z}_\ell(d)))
\]
is surjective.
Thus Conjecture \ref{conj:CT1} holds for the generic fiber $X$ over $\mbb{F}_q(B)$ if either hypotheses (A), (B), (C) or hypotheses (A), (B), (D) hold.
\end{thm}
\begin{rem}
The statements in Hypotheses (A), (B), (C), (D) only depend on the stable birational class of the generic fiber of $\overline{\mc{X}}\to \bar{B}$.
In particular, they only depend on the stable birational class of the generic fiber $X$ over the field $\mbb{F}_q(B)$ (assuming that there is a smooth projective model in every stable birational class of $X$).
Also note that Conjectures \ref{conj:CT1}, \ref{conj:CT2}, \ref{conj:E}, and \ref{conj} only depend on the stable birational class of the variety over $\mbb{F}_q(B)$ (or over $\mbb{F}$ for Conjecture \ref{conj}).
\end{rem}
\begin{rem}\label{rem}
We make a few simple remarks about the validity of the hypotheses above. First of all, it is a simple exercise to prove that all these hypotheses hold with $\mbb{Q}_\ell$-coefficients, and that they hold for all but finitely many $\ell$.
As discussed above, Hypothesis (A) follows from Tate's conjecture on surfaces.
The author has made conjectures on the Kato homology of rationally connected fibrations over an algebraically closed field of characteristic $0$ in \cite{zerocycleLaurent}. A special case of the conjecture predicts that for a rationally connected fibration over a curve defined over an algebraically closed field of characteristic $0$, hypotheses (B), (C), and (D) hold. It is quite reasonable to believe that the same is true for separably rationally connected fibrations in characteristic $p>0$. We discuss some examples in Section \ref{sec:example}.
\end{rem}
As a corollary of Theorem \ref{thm:integralTateSurface}, we confirm a conjecture of Colliot-Th\'el\`ene and Kahn (\cite[Conjecture 5.8]{CTKahnCycleCodim2}) up to $p$-torsion.
\begin{cor}\label{cor:CTK}
Let $X$ be a smooth projective threefold defined over a finite field $\mbb{F}$.
Assume that $X$ admits a fibration structure over a smooth projective curve with smooth projective geometrically rational generic fiber.
Then we have
\[
H^3_{\text{nr}}(X, \mbb{Q}_\ell/\mbb{Z}_\ell(2))=0,
\]for any $\ell$ invertible in $\mbb{F}$,
and a short exact sequence
\[
0 \to H^1(\mbb{F}, H^3(\bar{X}, \mbb{Z}_\ell(2))\{\ell\}) \to CH_1(X) \otimes \mbb{Z}_\ell \to CH_1(\bar{X})^G \otimes \mbb{Z}_\ell \to 0.
\]
\end{cor}
\begin{proof}
The vanishing of $H^3_{\text{nr}}(X, \mbb{Q}_\ell/\mbb{Z}_\ell(2))$ follows from Theorem \ref{thm:integralTateSurface}, \cite[Theorem 2.2, Proposition 3.2]{CTKahnCycleCodim2}, and the fact that $CH_0(\bar{X})$ is supported on a curve.
It then follows from the exact sequence (\ref{eq:CTK}) that we have the above description of the Chow groups of $X$ and $\bar{X}$.
\end{proof}
\begin{rem}
Theorem \ref{thm:integralTate} holds for smooth projective separably rationally connected varieties; we have made the proof work in both cases.
A cheaper way to get this result is to note that the validity of the integral Tate conjecture is a stable birational invariant and to apply the above theorems to the product $\mbb{P}^1 \times X$ (with $X$ separably rationally connected), viewed as a fibration over $\mbb{P}^1$. Unfortunately, for a separably rationally connected threefold $V$ defined over ${\mbb{F}}_q$, we do not know if the cycle class map
\[
CH_1(\bar{V}) \otimes \mbb{Z}_\ell \to H^4(\bar{V}, \mbb{Z}_\ell(2))
\]
is surjective.
Once we know this (e.g. if we are willing to assume the Tate conjecture for surfaces), the same argument as above shows that
\[
CH_1({V}) \otimes \mbb{Z}_\ell \to H^4({V}, \mbb{Z}_\ell(2))
\]
is surjective.
One can also deduce the vanishing of degree $3$ unramified cohomology and the short exact sequence of Chow groups as above.
\end{rem}
\subsection{Algebraic equivalence}
To study the integral Tate conjecture for one cycles, we prove a Galois descent type result for one cycles on a separably rationally connected fibration.
Recall that a \emph{pseudo algebraically closed field} (or a \emph{PAC field} for short) is a field where every geometrically integral variety has a rational point.
\begin{thm}\label{thm:G_inv_cycle}
Let $\pi: \mc{X} \to B$ be a projective flat family of varieties over a smooth projective curve $B$ defined over a finite field $\mbb{F}_q$ or a PAC field $\mbb{F}$. Assume that $\mc{X}$ is smooth and that the generic fiber is smooth, separably rationally connected, and of dimension $d$.
In the finite field case or if $\mbb{F}$ is perfect, we have an isomorphism
\[
A_1(\mc{X}) \cong A_1(\overline{\mc{X}})^G,
\]
where $G$ is the absolute Galois group of $\mbb{F}_q$ (or $\mbb{F}$).
If $\mbb{F}$ is not perfect, then we have an isomorphism after inverting the characteristic $p$.
\end{thm}
\begin{rem}
The injectivity part of the theorem says that two cycles are algebraically equivalent over $\mbb{F}_q$ or $\mbb{F}$ as soon as they are algebraically equivalent over $\bar{\mbb{F}}_q$ or $\bar{\mbb{F}}$. This statement is in accordance with the geometric Manin conjecture, or rather, with the philosophy behind the formulation of that conjecture.
In some vague sense, the conjecture predicts that the moduli space of curves in $V$ should have only one geometrically irreducible component for each curve class, at least when the curve class is sufficiently large and this statement is understood in a certain limiting sense.
If it does happen that there is only one geometrically irreducible component for each curve class and each genus, then one can show that algebraic equivalence over $\mbb{F}$ is the same as algebraic equivalence over $\bar{\mbb{F}}$.
\end{rem}
One can probably guess from the formulation of the theorem above that the proofs of these theorems involve a study of algebraically equivalent cycles.
Indeed, we develop a new method of representing deformations of one cycles in a separably rationally connected fibration by deformations of stable maps.
As is well-known, the Chow variety does not admit a deformation/obstruction theory.
Therefore it is almost impossible to get a good control of the deformation of cycles.
On the other hand, deformation/obstruction theory of stable maps is well-understood.
This method allows one to control the deformation of cycles in a precise way via deformation of stable maps.
Therefore one can understand the group of one cycles in a better way.
This method already has applications in the study of other geometric problems (\cite{Coniveau}).
Sections \ref{sec:deformation} and \ref{sec:alg_equiv} are devoted to developing this method.
Algebraic equivalence between two cycles usually means that one has to add complicated cycles to both of them to obtain a family of cycles over a curve.
Our first result shows that one can characterize algebraically equivalent one cycles in separably rationally connected fibrations in a much simpler way.
Recall that a stable map is a comb if the dual graph of the domain (a connected nodal curve) has vertices $v_0, v_1, \ldots, v_n$ and edges $e_i, 1 \leq i \leq n$, connecting $v_0$ and $v_i$. The irreducible component corresponding to $v_0$ is called the \emph{handle}. The other irreducible components are called \emph{teeth}. In some sense, one can think of a comb as the simplest way of adding curves to the handle.
\begin{thm}[See Theorem \ref{thm:algebraic_equivalence}]
Let $V$ be a smooth projective variety of dimension at least $3$ defined over an algebraically closed field. Assume that either $V$ is separably rationally connected or a separably rationally connected fibration over a curve.
Let $\Gamma_1 \to V$ and $\Gamma_2\to V$ be two morphisms from smooth connected projective curves of the same geometric genus to $V$ that are algebraically equivalent as cycles.
Then there are two combs such that
\begin{enumerate}
\item The handles are $\Gamma_1$ and $\Gamma_2$.
\item There is a one-to-one correspondence between the teeth in the two combs. Each pair of teeth corresponds to two smooth points in the same irreducible component of the moduli space of stable maps.
\item The two combs correspond to two smooth points in the same irreducible component of the moduli space of stable maps, whose general member is a smooth embedded curve.
\end{enumerate}
\end{thm}
Theorem \ref{thm:algebraic_equivalence} gives more precise information about the teeth.
One can prove similar results for general stable maps (Corollary \ref{cor:alg_equi_stable}) and for stable maps defined over non-algebraically closed fields (Proposition \ref{alg_equiv_field}).
This geometric re-interpretation of algebraic equivalence allows one to descend algebraic equivalence relations in some cases.
This is how Theorem \ref{thm:G_inv_cycle} is proved.
An application of Theorem \ref{thm:algebraic_equivalence} is the following deformation invariance result for one cycles modulo algebraic equivalence.
\begin{cor}[=Corollary \ref{cor:deformation_inv}]
Let $X\to \text{Spec } R$ be a smooth projective morphism over the spectrum of a DVR. Let $X_0$ and $X_1$ be the geometric special fiber and the geometric generic fiber. Assume that $X_0$ and $X_1$ are separably rationally connected or are separably rationally connected fibrations over a curve. Then the specialization map
\[
A_1(X_1) \to A_1(X_0)
\]
between one cycles modulo algebraic equivalence is an isomorphism.
\end{cor}
\textbf{Acknowledgment:} I would like to thank Jean-Louis Colliot-Th\'el\`ene and Olivier Wittenberg for many helpful and constructive comments. This work is partially supported by NSFC grants No. 11890660 and No. 11890662.
\section{Deformation of stable maps}\label{sec:deformation}
We discuss some deformation theoretic aspects of stable maps in this section.
\begin{defn}\label{def:bouquet}
A \emph{bouquet} is a stable map $f: C \to X$ such that $C=C_0 \cup C_1 \cup \ldots \cup C_k$ $(k\geq 2)$, where
\begin{enumerate}
\item $C_0 \cong \mbb{P}^1$; we say that this is the \emph{distinguished $\mbb{P}^1$ component};
\item each $C_i$ is a smooth curve and intersects $C_0$ at a single node;
\item for $1 \leq i<j \leq k$, $C_i$ and $C_j$ are disjoint;
\item $f(C_0)$ is a point in $X$.
\end{enumerate}
\end{defn}
\begin{constr}{\label{const:bouquet}}
Here is a construction of a bouquet that will be useful later.
Let $X$ be a smooth projective variety defined over a field $k$ and $x$ a $k$-rational point.
Fix a finite Galois field extension $k'/k$.
Let $C_1$ be a smooth projective geometrically irreducible curve defined over $k'$ with a $k'$-rational point $c_1$.
Let $f_1: C_1 \to X$ be a morphism defined over $k'$ such that $f_1(c_1)=x$.
Note that $c_1$ and $f_1$ may have a smaller field of definition (we only assume that they are defined over the Galois field extension $k'/k$). Denote by $k_1$ the field of definition of the pointed curve and the morphism.
Here we adopt different conventions for finite fields and infinite fields.
Finite field extensions between finite fields are always Galois. Thus if $k$ is a finite field, when we later refer to this construction of bouquets, we assume that $k'$ is the field of definition $k_1$.
But for infinite fields, we cannot make such an assumption.
Denote all the Galois conjugates of $f_1, C_1, c_1$ as $f_g, C_g, c_g, g \in Gal(k'/k)$ with $f_e, C_e, c_e$ ($e$ being the identity) the same as $f_1, C_1, c_1$. Note that when $k$ is infinite, it is possible that some of the Galois conjugates are isomorphic since they might be defined over the smaller field $k_1$.
If $k$ is a finite field, we do the following construction.
We assemble a curve $C$ defined over $k$ by taking the union of the conjugates $C_g$ and gluing the points $c_g$ to a Galois orbit of $k'$-points on $\mbb{P}^1$.
The morphism is the obvious one: it contracts this $\mbb{P}^1$ to the point $x$ and restricts to $f_g$ on each $C_g$.
If $k$ is an infinite field, we do the following construction.
There are $|Gal(k'/k_1)|$ copies of each isomorphism class of the Galois conjugates over $k_1$.
We take $|Gal(k'/k_1)|$ distinct orbits of rational points in $\mbb{P}^1_k$ whose field of definition is $k_1$. This is possible since $k$ is infinite.
For each orbit, we glue on a copy of the Galois conjugates.
In the end, we always have a bouquet defined over $k$ with $|Gal(k'/k)|$ teeth.
\end{constr}
\begin{defn}\label{defn:chandelier}
A \emph{chandelier} is a stable map $f: C \to X$ that consists of the following data
\begin{enumerate}
\item One irreducible smooth non-contracted curve $C_0 \subset C$, called the \emph{support} of the chandelier.
\item A possibly empty set of \emph{candles} of the chandelier, which are disjoint irreducible components $T_1, \ldots, T_k$ such that each $T_i$ is smooth and is connected to $C_0$ at a single node.
\item A possibly empty set of \emph{bouquets of candles}, which are bouquets $B_1, \ldots, B_n$ such that each bouquet is connected to the support at a single node in the distinguished $\mbb{P}^1$ component of the bouquet.
\item A possibly empty set of \emph{decorations}, which are irreducible smooth curves with two nodes connected to the support $C_0$.
\end{enumerate}
A \emph{comb} is a stable map $f:C \to X$ that is a chandelier with only candles and a support. In this case, the support is called the \emph{handle} and the candles are called \emph{teeth}.
\end{defn}
Let $f: C \to X$ be a stable map from a prestable curve to a smooth projective variety. Define the complex $\Omega_f$ as $f^*\Omega_X \to \Omega_C$, with $\Omega_C$ placed in degree $0$.
It is known that first order deformations of $f$ are classified by $\mathbb{H}^1(C, RHom(\Omega_f, \mc{O}_C))$ and that the obstruction space is contained in $\mathbb{H}^2(C, RHom(\Omega_f, \mc{O}_C))$.
Assume that $C=C_0 \cup C_1 \cup \ldots \cup C_n$ where $C_i$ are connected prestable curves and let $f_i: C_i \to X$ be the restriction of $f$ to $C_i$ for $i=0, \ldots, n$. For simplicity, in the following we write $\Omega_f^*$ for $RHom(\Omega_f, \mc{O}_C)$ and similarly for $\Omega_{f_i}^*, i=0, \ldots, n$.
We have exact triangles
\begin{equation}\label{eq:def1}
\Omega_{f_0}^*\to \Omega_{f}^*|_{C_0} \to \oplus Q_j[1] \to \Omega_{f_0}^*[1],
\end{equation}
where $Q_j$ are torsion sheaves supported at the connecting nodes of $C_0$ and other curves $C_i, i=1, \ldots, n$, and
\begin{equation}\label{eq:def2}
\Omega_f^*|_{C_1\cup \ldots \cup C_n}(-D) \to \Omega_f^* \to \Omega_f^*|_{C_0},
\end{equation}
where $D$ is the divisor on $C_1\cup \ldots \cup C_n$ corresponding to the connecting nodes with $C_0$.
From these and a diagram chase using the long exact sequences of hypercohomology, we arrive at the following:
\begin{lem}\label{lem:SmoothCriterion}
Let $f: C \to X$ be a stable map from a prestable curve to a smooth projective variety $X$. Assume that $C=C_0 \cup C_1 \cup \ldots \cup C_n$ where $C_i$ are connected prestable curves and let $f_i: C_i \to X$ be the restriction of $f$ to $C_i$ for $i=0, \ldots, n$. Assume that
\begin{enumerate}
\item The curves $C_i, i>0$ are disjoint from each other.
\item The curve $C_0$ is isomorphic to $ \mbb{P}^1$ and is contracted.
\item For all $i>0$, $\mathbb{H}^2(C_i, \Omega_{f_i}^*(-n_i))=0$, where $n_i$ is the divisor corresponding to the nodes connecting $C_i$ and $C_0$.
\end{enumerate}
Then the stable map $f:C \to X$ has $\mathbb{H}^2(C, \Omega_f^*)=0$ and $\mathbb{H}^2(C, \Omega_f^*(-c))=0$ for any point $c \in C_0$ that is also contained in the smooth locus of $C$. In particular, the stable map $f: C \to X$ (resp. the stable map with one marked point $f: (C, c) \to X$) corresponds to a smooth point in the moduli stack.
\end{lem}
\begin{proof}
The key point is that $\Omega_{f_0}$ is isomorphic to the following complex on $C_0\cong \mbb{P}^1$:
\[
\oplus^{\dim X} \mc{O} [1] \oplus \Omega_{C_0}.
\]
Thus $\mathbb{H}^2(C_0, \Omega_{f_0}^*\otimes \mc{O}(-1))=0$.
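Indeed, dualizing this complex gives $\Omega_{f_0}^* \cong \oplus^{\dim X} \mc{O}[-1] \oplus T_{C_0}$, so
\[
\mathbb{H}^2(C_0, \Omega_{f_0}^*\otimes \mc{O}(-1)) \cong \oplus^{\dim X} H^1(\mbb{P}^1, \mc{O}(-1)) \oplus H^2(\mbb{P}^1, T_{\mbb{P}^1}(-1)),
\]
and both summands vanish: the first because $\mc{O}(-1)$ has no cohomology on $\mbb{P}^1$, the second because $C_0$ is a curve.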
The rest is a routine calculation using long exact sequences of hypercohomologies of distinguished triangles (\ref{eq:def1}) and (\ref{eq:def2}).
\end{proof}
\begin{defn}\label{def:m-free}
Let $X$ be a smooth projective variety defined over a field $k$.
A curve $f: C \to X$ is {$m$-free}, if $f$ is an embedding in the case $\dim X \geq 3$, immersion in the case $\dim X=2$, and for any effective divisor $D$ of degree $m$ on $C$, $\mathbb{H}^2(C, \Omega_f^*(-D))=0$.
A family of smooth projective curves $F: \mathcal{C} \to X, p: \mathcal{C} \to S$ (defined over $k$) is \emph{$n$-connecting} (resp. \emph{generically $n$-connecting}) if for any algebraically closed over-field $K/k$ and any $n$ distinct $K$-points (resp. any $n$ general $K$-points) in $X$, the subfamily of curves in $\mathcal{C}_K\to S_K$ passing through these points is non-empty and geometrically irreducible. Note that $S$ is necessarily geometrically irreducible.
A curve is $n$-connecting if it belongs to a family of $n$-connecting curves.
\end{defn}
\begin{rem}
We can take the product of $X$ with some $\mbb{P}^N$ and choose a morphism $C \to \mbb{P}^N$. Deformations in the product give deformations in $X$. For almost all of the applications, we can replace $X$ by the product and assume $\dim X \geq 3$.
\end{rem}
It is well-known that in a separably rationally connected variety, there are \emph{genus $0$}, $m$-free curves for any positive integer $m$, and that one can kill the obstruction space of a stable map by adding sufficiently many $2$-free (usually called very free) genus $0$ curves at general positions and general normal directions.
The existence of genus $0$ $2$-free, generically $1$-connecting families of curves follows from work of Koll\'ar \cite[Theorem 3]{KollarFundamentalGroup}, applied to the generic point.
Higher connecting genus $0$ curves can also be constructed.
\begin{lem}\label{lem:1connecting}
Let $X$ be a smooth projective variety that is a separably rationally connected fibration over a curve $B$ defined over a field $k$.
Then there exists a family of generically $1$-connecting curves that are very free curves in general fibers.
\end{lem}
\begin{proof}
Apply \cite[Theorem 3]{KollarFundamentalGroup} to the generic point of $X_\eta/k(B)$.
This gives a geometrically irreducible family of very free curves of $X_\eta$ defined over $k(B)$ that is generically $1$-connecting.
One can spread this out into a family of very free curves in general fibers of $X \to B$.
\end{proof}
First, we show the existence of $m$-free $n$-connecting curves on a separably rationally connected fibration.
Before we state the result, we note a simple observation that will be used many times.
\begin{lem}\label{lem:irr}
Let $X \to S$ be a morphism between finite type $k$-schemes and $Y\subset X$ a locally closed $k$-subscheme. Assume that for each point $s$ of $S$, $Y_s$ is geometrically irreducible and intersects the smooth locus of $X \to S$. If $S$ is also geometrically irreducible, then there is a geometrically irreducible Zariski open subset $X^0$ of $X$, defined over $k$, such that for each point $s \in S$, $X^0_s$ is geometrically irreducible.
\end{lem}
\begin{proof}
First note that $Y$ is geometrically irreducible, so there is a unique irreducible component of $X$ containing $Y$: otherwise there would be two irreducible components of $X$ whose intersection contains a locus where $X \to S$ is smooth, which is impossible.
Take the unique irreducible component of $X$ that contains $Y$.
Let $X_1$ be the complement of all the other irreducible components.
Then $X_1$ is an open subset of $X$ and of the unique irreducible component that contains $Y$.
Moreover, it intersects $Y_s$ non-trivially for any $s \in S$.
The geometric generic fiber of $X_1 \to S$ is irreducible.
We may assume $Y$ is contained in the smooth locus of $X_1 \to S$.
There might be a closed subset $S_1$ of $S$ such that the fiber of $X_1$ over a point in $S_1$ is not geometrically irreducible. But the union of all the irreducible components that do not intersect $Y$ is closed in $X_1$. We define $X^0$ to be the complement of this closed subset in $X_1$.
\end{proof}
\begin{rem}
It is necessary to take a Zariski open subset to make sure that every fiber $X^0_s$ is geometrically irreducible. For example, take $X$ to be a conic bundle with singular fibers over a curve and $Y$ a section in the smooth locus.
\end{rem}
\begin{lem}\label{lem:connecting}
Let $X$ be a smooth projective variety that is a separably rationally connected fibration over a curve or a separably rationally connected variety defined over a field $k$.
Then there exists a family of $m$-free $n$-connecting curves defined over the same field $k$ for any positive integers $m, n$.
\end{lem}
\begin{proof}
We first construct a family of $n$-connecting curves as a family of smooth complete intersections of a linear system of very ample divisors containing the family of $n$ distinct points.
Note that the family of $n$ distinct points in $X$ is irreducible.
In particular, the family of complete intersections containing them is irreducible.
Clearly we may take the family of complete intersections to be defined over the same field $k$.
As long as we choose the ample divisors to be sufficiently positive, deformations of such families with $n$ points fixed cover $X$.
So we can glue on curves in the family of generically $1$-connecting curves in Lemma \ref{lem:1connecting} to the complete intersections at general points along general normal directions.
This gives a family of combs containing the $n$ points, with vanishing obstruction group $\mathbb{H}^2(\Omega_f^*(-D))$ for any degree $m$ effective divisor $D$ in the handle.
Moreover, this family is defined over the field $k$.
Since these very free curves in general fibers are generically $1$-connecting, for any set of $n$ geometric points, the family of combs constructed above is defined over $k$ and geometrically irreducible.
This gives a family of stable maps with reducible domains such that for any $n$ geometric points, the subfamily containing these points is geometrically irreducible.
We apply Lemma \ref{lem:irr}, with $X$ being the total non-stacky part of the moduli space of stable maps and $Y$ being the family of combs.
This gives the desired family of $m$-free $n$-connecting curves.
\end{proof}
\begin{lem}\label{lem:unobstructed}
Let $X$ be a smooth projective variety that is either a separably rationally connected fibration over a smooth projective curve $B$ or a separably rationally connected variety.
For any stable map $f: C \to X$ from a connected prestable curve,
and any effective divisor $D$ contained in the smooth locus of $C$,
one can attach $2$-free curves to $C$ at general points along general directions, and the resulting stable map $F: \bar{C} \to X$ satisfies the vanishing
\[\mathbb{H}^2(\bar{C}, \Omega_F^*\otimes \mc{O}_{\bar{C}}(-D))=0.
\]
A general deformation of $F$ has smooth irreducible domain, and is $\deg D$-free.
\end{lem}
\begin{proof}
To get the smoothing and vanishing result, we use a trick.
We take an embedding $i$ of the comb into $\mbb{P}^n$ such that the degree of each irreducible component is large compared to the genus of the normalization and $\deg D$ (in fact, greater than $2g-1+\deg D$ suffices).
Then we replace $X$ by $X\times \mbb{P}^n$ and $f$ etc. by the product morphism $f \times i$.
Comparing the cotangent complex, we have
\[
\Omega_{f\times i}=\Omega_f \oplus \Omega_{\mbb{P}^n}[1].
\]
Since we assume that the degree of each irreducible component $\Gamma$ is large compared to its genus and to $\deg D$, the Euler sequence for the tangent bundle of $\mbb{P}^n$ shows that the cohomology group $H^1(\Gamma, T_{\mbb{P}^n}|_\Gamma\otimes L)$ vanishes for any line bundle $L$ of degree $-\deg D$ on $\Gamma$.
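To spell out this vanishing (a routine check; for simplicity take $\Gamma$ smooth of genus $g$, and let $L$ be a line bundle of degree $-\deg D$ on $\Gamma$, the relevant twist being by $-D$): restricting the Euler sequence $0 \to \mc{O} \to \mc{O}(1)^{\oplus (n+1)} \to T_{\mbb{P}^n} \to 0$ to $\Gamma$ and twisting by $L$ gives the exact sequence
\[
H^1(\Gamma, \mc{O}_\Gamma(1)^{\oplus (n+1)}\otimes L) \to H^1(\Gamma, T_{\mbb{P}^n}|_\Gamma \otimes L) \to H^2(\Gamma, L)=0,
\]
where the last term vanishes because $\Gamma$ is a curve, and the first because $\deg(\mc{O}_\Gamma(1)\otimes L) = \deg \Gamma - \deg D > 2g-2$.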
It follows that the obstruction group for $f \times i$ is isomorphic to that of $f$.
Thus the first part of the statement follows from the case where $f: C \to X$ is an embedding, which is well-known.
A general deformation $g: \Gamma \to X$ is an embedding (resp. an immersion) if $\mathbb{H}^2(\Gamma, \Omega_g^*(-p-q))=0$ for any two (not necessarily distinct) points $p, q$ in $\Gamma$, and if $\dim X \geq 3$ (resp. $\dim X \geq 2$). One can prove this by a dimension count of the space of curves that are not embeddings (resp. immersions).
\end{proof}
It turns out that $2$-free $2$-connecting curves give higher connecting curves.
\begin{lem}\label{lem:higherconnecting}
Let $X$ be a smooth projective variety that is a separably rationally connected fibration over a curve or a separably rationally connected variety.
Assume that $X$ has dimension at least $3$.
Let $f: C \to X$ be a morphism from a smooth projective curve.
Fix positive integers $m, n$, and a family of $2$-free $2$-connecting curves $\mathcal{C}$.
Let $g: D \to X$ be the comb with handle $C$ and
with sufficiently many teeth in the family $\mathcal{C}$ (depending on $m, n$) attached at general points along general directions as in Lemma \ref{lem:unobstructed}.
Then the family of curves coming from general deformations of $g$ is $m$-free $n$-connecting.
\end{lem}
\begin{proof}
The $m$-free part is by Lemma \ref{lem:unobstructed}.
Moreover, by the same lemma, we know that as long as we attach sufficiently many teeth at general points along general directions, the comb satisfies the vanishing of the obstruction group. So we assume that there are more than $n$ teeth in the following.
We choose $n$ teeth and put one marked point on each of them.
We also assume that the comb with handle $C$ and with the remaining teeth is unobstructed.
Since our goal is to prove that general deformations of the comb form a family of $n$-connecting curves, we first deform the comb so as to smooth the nodes joining the remaining teeth to the handle while keeping the nodes joining the chosen $n$ teeth.
So we end up with a new comb with unobstructed handle and $n$ teeth that lie in a family of $2$-connecting curves.
In the following we assume that this is our comb $g:D \to X$ with handle $C \subset D$.
For any $n$ distinct points in $X$, there is a comb with handle $C$ (the morphism restricted to $C$ is the same as $g|_C$) and with $n$ teeth connecting the $n$ points to $C$.
Here we allow the points to lie in the image $g(C)$, but we take the $n$ teeth in the family $\mathcal{C}$ to connect $n$ general points of $C$ to the $n$ points in $X$.
Since the teeth lie in a $2$-connecting family, the family of all such combs is a smooth geometrically irreducible family.
As in Lemma \ref{lem:connecting}, one can show that general deformations of these combs give a family of $n$-connecting curves by using Lemma \ref{lem:irr}.
\end{proof}
\begin{cor}\label{cor:bouquet_connecting}
Let $X$ be a smooth projective variety that is a separably rationally connected fibration over a curve or a separably rationally connected variety.
Assume that $X$ has dimension at least $3$.
Let $f: C=\mbb{P}^1 \to X$ be a constant morphism.
Fix a family of $2$-free $2$-connecting curves $\mathcal{C}$ and let $g: D \to X$ be the comb with handle $C$ and
with teeth belonging to the family $\mathcal{C}$.
Then for any point $x \in X$, the family of bouquets with a marked point on $\mbb{P}^1$ and mapped to $x$ is geometrically irreducible and smooth.
Moreover, the family of curves coming from general deformations of $g$ is $2$-free $2$-connecting.
\end{cor}
\begin{proof}
The first statement is easy: the smoothness follows from Lemma \ref{lem:unobstructed}, and the moduli space is obviously geometrically irreducible.
For the second statement, simply note that the constant map $f: C \cong \mbb{P}^1 \to X$ has unobstructed deformations (i.e. $\mathbb{H}^2(C, \Omega_f^*)=0$). Thus the proof of Lemma \ref{lem:higherconnecting} works the same way if there are at least $2$ teeth.
If there is only one tooth, then the deformation of $g$ is simply the given family of $2$-free $2$-connecting curves.
\end{proof}
The next lemma explains the advantage of having $n$-connecting curves.
\begin{lem}\label{lem:samefamily}
Let $X$ be a smooth projective variety that is a separably rationally connected fibration over a curve or a separably rationally connected variety defined over an algebraically closed field.
Let $f: \Sigma \to X, p:\Sigma \to T$ be a family of pre-stable maps parameterized by a smooth curve $T$ with geometrically irreducible generic fiber.
Fix a positive integer $ n$.
Fix a family of $n$-free $n$-connecting curves $\mathcal{C} \to S, \mathcal{C} \to X$.
Let $t_1, t_2$ be two points in the curve $T$.
Choose any $n$-tuple of points in the smooth locus of $\Sigma_{t_1}$ (resp. $\Sigma_{t_2}$) whose image in $X$ consists of $n$ distinct points, and any curve $C_1$ (resp. $C_2$) in the family of $n$-free $n$-connecting curves containing the image of the $n$-tuple of points in $\Sigma_{t_1}$ (resp. $\Sigma_{t_2}$). One can construct two new stable maps by taking the unions $\Sigma_1=\Sigma_{t_1} \cup C_1$ (resp. $\Sigma_2=\Sigma_{t_2} \cup C_2$) along the $n$-tuples of points, and there is a deformation parameterized by an irreducible curve $T'$ from $\Sigma_1$ to $\Sigma_2$.
\end{lem}
\begin{proof}
Consider the subfamily of curves $\mathcal{C}$ that pass through $n$ points in the image of $(\Sigma_t)^{\text{sm}}, t \in T$.
This is a smooth family fibered over $T$ with geometrically irreducible generic fiber.
Thus up to shrinking $T$, we may find a multisection that contains the two points corresponding to the curves $C_1, C_2$.
After a suitable base change, we can assemble a family of pre-stable maps over an irreducible curve $T'$ by gluing the original family of stable maps with the family of curves in $\mathcal{C}$. This family gives a deformation from $\Sigma_1$ to $\Sigma_2$.
\end{proof}
Next we introduce some useful deformations.
Recall that the dual graph of a stable map is a graph whose vertices correspond to irreducible components and edges correspond to nodes.
\begin{lem}[The sliding lemma]\label{lem:slide}
Let $f: \Gamma \to V$ ($\dim V \geq 3$) be a stable map with dual graph $G$.
Let $B, C, D$ be three irreducible components of $\Gamma$ corresponding to a chain $V_B - V_C - V_D$ of adjacent vertices in $G$.
Assume that every irreducible component is $m$-free (in particular, every irreducible component is an embedded smooth curve by definition of $m$-freeness) for $m$ larger than the number of nodes on the component.
Then there is a deformation parameterized by an irreducible curve from $f$ to a stable map $f': \Gamma' \to V$, whose irreducible components are unions of deformations of $B, C, D$ and all the other irreducible components of $\Gamma$, and whose dual graph $G'$ is obtained from $G$ by changing the chain $V_B - V_C - V_D$ in $G$ to a chain $V_D - V_B - V_C $ and keeping the other edges unchanged (here we use the same symbol to denote the vertex corresponding to deformations of irreducible components).
Moreover, if $D$ belongs to a family of $(n+1)$-connecting curves (where $n$ is the number of nodes connecting $D$ to the other irreducible components of $\Gamma$), then we may choose the deformations with $B$ and $C$ fixed.
\end{lem}
\setlength{\unitlength}{0.8in}
\begin{figure}[h]
\centering
\includegraphics[width=4.4in]{Sliding.png}
\put(-3.55, 0.75){$\mbb{P}^1$}
\put(-4.75, 0){1.}
\put(-2.55, 0){2.}
\put(-0.5, 0){3.}
\put(-5.55,-0.3){1. Deform the curve $D$ along the curve $C$ to the node.}
\put(-5.55,-0.5){2. A new $\mbb{P}^1$ component with three nodes connecting $B, C, D$ is created.}
\put(-5.55, -0.7){3. Move $D$ away from the node along the curve $B$.}
\caption{Sliding the curve $D$ from $C$ to $B$}
\label{fig:Slide}
\end{figure}
\begin{proof}
We produce the deformation in two steps. For a schematic picture of the deformation process, see Figure \ref{fig:Slide}.
\textbf{Step 1.}
Since every irreducible component is $m$-free for $m$ larger than the number of nodes on the component, the stable map $f$ is unobstructed, and a general deformation of $f$ keeping the same dual graph is again unobstructed.
By taking a general such deformation, we may assume that the nodes between $B, C, D$ are in general position.
More precisely, denote by $d_0, \ldots, d_k$ all the nodes on the curve $D$ with $d_0$ the node connecting $D$ to $C$.
The image of the point $d_0$ under general deformations of $f|_D: (D, d_0, \ldots, d_k) \to V$ with $d_1, \ldots, d_k$ fixed contains a Zariski open set $U$ of $V$.
We need the condition that the node $n_{BC}$ between $B$ and $C$ that corresponds to the edge in the chain $V_B - V_C$ is in this Zariski open subset.
We may replace $f$ with this general deformation.
\textbf{If $n_{BC}$ is already contained in this Zariski open subset, we do not need to make this replacement. In particular, if $D$ belongs to a family of $(n+1)$-connecting curves, we do not need to take this deformation.}
Thus we have a family of stable maps over an irreducible curve $T$ with two points $t_1, t_2$ such that
\begin{enumerate}
\item The stable map over $t_1$ is just $f$.
\item The stable map over $t_2$ is a deformation of $f$ by deforming the irreducible component $D$ with $d_1, \ldots, d_k$ fixed such that the node $d_0$ passes through the node $n_{BC}$ and fixing all the other components.
\item The stable maps over $t_1, t_2$ are unobstructed by Lemma \ref{lem:unobstructed}.
\end{enumerate}
The stable map over $t_2$ has one more irreducible component that is isomorphic to $\mbb{P}^1$ with three nodes connecting this component to $B, C$ and the deformation of $D$.
For simplicity, let us use the same letters $D, d_0, \ldots, d_k$ to denote the deformation of $D$ and the nodes.
\textbf{Step 2.}
We further deform the stable map over $t_2$ by deforming the irreducible component $D$ with $d_1, \ldots, d_k$ fixed such that the node $d_0$ is deformed along the curve $B$ and fixing all the other components.
This is possible since the curve $B$ intersects the Zariski open set $U$ around the node $n_{BC}$.
Now take $f': \Gamma' \to V$ to be a general such deformation.
Since the stable map over $t_2$ is unobstructed by Lemma \ref{lem:unobstructed}, there is only one irreducible component of the moduli space of stable maps that contains the corresponding point.
So $f'$ and $f$ both lie in this component and we may find a deformation from $f$ to $f'$ within this irreducible component.
\end{proof}
\begin{figure}[h]
\centering
\includegraphics[width=4.4in]{unbuckle.png}
\put(-5.5, 1.7){$f_0$}
\put(-3, 1.7){$f_1$}
\put(-5.5, 0.5){$f_2$}
\put(-3, 0.5){$f_3$}
\put(-5.5, -0.3){$f_0$: start with a stable map that has two nodes between $C$ and $D$.}
\put(-5.5, -0.5){$f_1$: replace one node between $C$ and $D$ by a $\mbb{P}^1$ and add $2$-free $2$-connecting}
\put(-5.2, -0.7){curves on $\mbb{P}^1$.}
\put(-5.5, -0.9){$f_2$: a deformation of the comb turns the added curves into a single smooth curve}
\put(-5.25, -1.1){$B$ connecting $C$ and $D$.}
\put(-5.5, -1.3){$f_3$: use the sliding lemma to move the node between $B$ and $D$ to $C$.}
\caption{Unbuckle a node between $C$ and $D$.}
\label{fig:unbukle}
\end{figure}
\begin{constr}[Unbuckle a node]\label{lem:unbuckle}
Let $V$ be a smooth projective variety defined over an algebraically closed field and $f_0: \Gamma_0 \to V$ be a stable map with dual graph $G_0$.
Let $C, D$ be two irreducible components of $\Gamma_0$ such that there are two edges between the corresponding vertices $V_C$ and $V_D$.
We fix an irreducible family $\mathcal{C} \to M$ of $n$-connecting $m$-free curves for $n, m$ large.
We assume that each irreducible component of $\Gamma_0$ is a smooth embedded curve in $V$ and that it is $m$-free for $m$ larger than the number of nodes it has. In particular, the deformation of this curve with all the nodes fixed is unobstructed.
By Lemma \ref{lem:unobstructed}, this assumption can always be achieved after adding curves in the family $\mathcal{C}\to M$ and taking a general smoothing.
Now we describe a deformation called ``unbuckle a node". For a schematic picture of the process, see Figure \ref{fig:unbukle}.
Choose one of the nodes corresponding to the edges between $V_C$ and $V_D$.
We construct a prestable map by separating $C$ and $D$ at this node and adding a $\mbb{P}^1$ connecting them.
Then we add enough curves in the family $\mathcal{C} \to M$ to this $\mbb{P}^1$ along general directions.
Call this stable map $f_1: \Gamma_1 \to V$.
By Lemma \ref{lem:unobstructed}, this stable map is unobstructed.
Take a general deformation from this stable map to a new stable map $f_2: \Gamma_2 \to V$, which smooths all the nodes between the $\mbb{P}^1$ and the curves in the family $\mathcal{C}\to M$ while keeping all the other nodes.
The dual graph $G_2$ of $\Gamma_2$ is obtained from $G_0$ by replacing one of the edges between $V_C$ and $V_D$ with a new vertex $V_B$ and two edges connecting $V_B$ to $V_C$ and $V_D$.
Now we apply the sliding lemma \ref{lem:slide} to the chain $V_C - V_D - V_B$.
This gives a deformation of $f_2$ to a stable map $f_3: \Gamma_3 \to V$, which moves the node between $B$ and $D$ to $C$.
The dual graph $G_3$ of $\Gamma_3$ is obtained from $G_0$ by adding a new vertex $V_B$ with two edges connecting $V_B$ and $V_C$, and removing one of the edges connecting $V_C$ to $V_D$.
We call the construction from $f_0$ to $f_3: \Gamma_3 \to V$ \emph{unbuckle a node}.
By construction, $f_i, i=0, \ldots, 3$ are unobstructed stable maps.
Therefore they all lie in the same irreducible component of the space of stable maps.
\end{constr}
We finish this section with two particular types of deformations.
\begin{lem}[The break lemma]\label{lem:break}
Let $f: C \to D$ be a generically \'etale morphism of degree $d$ between smooth projective connected curves. There is a family of curves $W \to \mbb{P}^1, F: W \to D$ with the following properties.
\begin{enumerate}
\item $W_0$ is a connected nodal curve, which is the union of a copy of $C$ and $\mbb{P}^1$'s.
\item Each copy of $\mbb{P}^1$ meets $C$ at two points.
\item The morphism $F$ maps each copy of $\mbb{P}^1$ to a point in $D$, and $F$ restricted to the copy of $C$ is the same as $f$.
\item $W_\infty$ is the union of $d$ copies of $D$ (with identity morphism $D \to D$), and a number of $\mbb{P}^1$'s that map to points in $D$.
\end{enumerate}
\end{lem}
\begin{proof}
This is essentially Corollary 1.3 in \cite{deJongStarr_GHS} with a minor twist.
The statement about $W_\infty$ follows from the construction in the proof of Proposition 1.1 in \cite{deJongStarr_GHS}.
\end{proof}
\begin{lem}\label{lem:residual}
Let $C$ be a connected projective nodal curve and $D$ a smooth projective curve, and let $f: C \to D$ be a morphism. There is a family of curves $W \to \mbb{P}^1, F: W \to D$ with the following properties.
\begin{enumerate}
\item $W_0$ is a connected nodal curve, which is the union of a copy of $C$ and a smooth irreducible projective curve $C'$.
\item $W_\infty$ is a smooth irreducible curve.
\item $F|_{C'}: C'\to D$ and $F|_{W_\infty}: W_\infty \to D$ are generically \'etale.
\end{enumerate}
\end{lem}
\begin{proof}
Choose an embedding $i: C \to \mbb{P}^n$. Let $\Gamma \subset D \times \mbb{P}^n$ be the image of $f\times i$. Since $C$ is nodal, we may find a smooth surface $S \subset D\times \mbb{P}^n$ containing $\Gamma$. Choose a very ample divisor $H$ on $S$ such that $H-\Gamma$ is also very ample. Choose a general divisor $C'$ in the linear system $|H-\Gamma|$.
Take $W$ to be a pencil of divisors consisting of $C+C'$ and a general member of the linear system $|H|$.
\end{proof}
\section{Algebraic equivalence}\label{sec:alg_equiv}
In this section, we give a characterization of algebraically equivalent cycles on a separably rationally connected fibration and give some applications.
The main result in this section translates algebraic equivalence from the language of cycles into a condition on stable maps.
Given an irreducible cycle in $V$, there are many ways to write it as the image of a stable map. In the following, we essentially choose the initial object.
\begin{lem}\label{lem:stable}
Let $\Sigma \subset B \times V$ be a family of cycles over a curve $B$. Then after a base change $B' \to B$, there are families of stable maps over $B', \Sigma' \to B', \Sigma' \to V$ such that
\begin{enumerate}
\item The morphism $\Sigma' \to B'$ is generically smooth.
\item For each $b' \in B'$, the cycle class of the image of $\Sigma'_{b'}$ in $V$ is the same as that of the cycle $(\Sigma\times_B B')_{b'}\subset V$.
\end{enumerate}
\end{lem}
\begin{proof}
We may assume that $\Sigma$ is an irreducible surface.
We take an algebraic closure of the function field of $B$ and consider the geometric generic fiber $\Sigma_{\bar{B}}$ of $\Sigma$.
If this one-dimensional scheme $\Sigma_{\bar{B}}$ is not reduced, which only happens if the characteristic of $k$ is positive, we replace it with copies of its reduced subscheme; the number of copies is the length of $\Sigma_{\bar{B}}$ at its generic point.
Then we take the normalization of these schemes, which can all be defined over a finite field extension of the function field $k(B)$.
Let us use $B''$ to denote the unique curve over $k$ whose function field gives this extension, and $\Sigma''$ the surface over $B''$ whose geometric generic fiber is the normalization.
Now the generic fibers of $\Sigma'' \to B''$ are smooth projective curves.
We may apply stable reduction for these families to get families of stable maps $\Sigma' \to B', \Sigma' \to V$.
\end{proof}
We first show that it is possible to put algebraic equivalence between stable maps in a standard form.
\begin{lem}\label{lem:standard}
Let $V$ be a smooth projective variety of dimension at least $3$ defined over an algebraically closed field. Assume that either $V$ is separably rationally connected or a separably rationally connected fibration over a curve. Fix a family $\mathcal{C} \to M$ of $n$-connecting $m$-free curves ($m, n\geq 2$).
Let $\Sigma \to B, \Sigma \to V$ be a family of stable maps such that a general one has smooth connected domains.
Let $b, b'$ be two points in $B$.
Assume that
\begin{enumerate}
\item Every irreducible component of the fibers over $b$ and $b'$ is $m'$-free for some $m'$ larger than the number of nodes on it. In particular, every irreducible component is an embedded smooth curve.
\item There is a one-to-one correspondence between the irreducible components of the fibers over $b$ and $b'$.
\item There is one irreducible component $\Gamma$ over $b$ and the corresponding component $\Gamma'$ over $b'$ such that the two curves $\Gamma$ and $\Gamma'$ have the same genus.
\item Every other pair of irreducible components under this correspondence is parameterized by smooth points in the same irreducible component of the moduli space of stable maps.
\end{enumerate}
Then there is a pair of combs with the following properties.
\begin{enumerate}
\item The handles are general deformations of $\Gamma$ and $\Gamma'$.
\item There is a one-to-one correspondence between the teeth in the two combs. Each pair of teeth corresponds to two smooth points in the same irreducible component of the moduli space of stable maps.
\item The two combs correspond to two smooth points in the same irreducible component of the moduli space of stable maps, whose general member is a smooth embedded curve.
\end{enumerate}
Moreover, we may assume that each tooth is $m'$-free and belongs to an $n'$-connecting family of curves for any positive integers $m', n'$.
\end{lem}
\begin{proof}
We apply the sliding lemma \ref{lem:slide} to deform the fibers over $b$ and $b'$ so that every other irreducible component intersects only $\Gamma$ (resp. $\Gamma'$).
We first do this for the fiber over $b$.
Let us look at the dual graph $G$ of the fiber over $b$.
Let $V_0$ be the vertex corresponding to $\Gamma$.
For each vertex $V \in G$, define the distance from $V$ to $V_0$ as
\[
d(V)=\min\{\,n \mid \text{there is a chain of edges } V_0 - V_1 - \cdots - V_n=V\,\}.
\]
If a chain achieves the minimum distance for $V$, we say that this is a \emph{minimal path}.
Let $V$ be a vertex with maximal distance to $V_0$.
The minimal path of any vertex different from $V$ cannot pass through the vertex $V$.
We choose a minimal path $V_0\to V_1\to \ldots \to V_n=V$.
We apply the sliding lemma \ref{lem:slide} to the chain $V_{n-2} - V_{n-1} -V_n$, and move the edge between $V_{n-1}$ and $V_n$ to an edge between $V_{n-2}$ and $V_n$.
The resulting stable map lies in the same irreducible component of the moduli space as the fiber over $b$, as guaranteed by the sliding lemma.
This operation changes neither the distance nor the minimal path of any other vertex.
But it reduces the distance of $V$ to $V_0$ by $1$.
We continue this process and each time move one edge of a vertex that has the maximal distance to $V_0$ among the remaining vertices.
Eventually we end up with a dual graph such that any vertex other than $V_0$ has at least one edge connecting to $V_0$.
There might be edges connecting two vertices $V_1, V_2$ with $V_i \neq V_0$ ($i=1, 2$).
Then apply the sliding lemma \ref{lem:slide} to move the edge between $V_1, V_2$ to an edge between $V_0, V_1$.
In the end, all the edges are between $V_0$ and other vertices.
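The reduction just performed is geometric at every step (each move is an application of the sliding lemma \ref{lem:slide}), but its bookkeeping on the dual graph is purely combinatorial. As an illustrative sketch only (the function name and conventions are ours, not part of the proof), the two phases can be written as:

```python
from collections import deque

def star_reduce(n, edges, root=0):
    """Reduce the dual graph of a connected nodal fiber to a star
    centred at `root` (the vertex V_0 of the handle), following the
    two phases of the proof:
      1. while some vertex has distance > 1 from the root, reroute
         the last edge of a minimal path to it, bringing it one step
         closer (geometrically, one sliding-lemma move);
      2. move every remaining edge between two non-root vertices to
         an edge meeting the root.
    `edges` is a list of pairs on {0, ..., n-1}; multi-edges allowed.
    Returns the edge list of the resulting star-shaped graph."""
    edges = [tuple(e) for e in edges]

    def bfs_tree():
        # parents and distances from the root in the current graph
        adj = {v: set() for v in range(n)}
        for a, b in edges:
            adj[a].add(b)
            adj[b].add(a)
        par, dist = {root: None}, {root: 0}
        queue = deque([root])
        while queue:
            v = queue.popleft()
            for w in adj[v]:
                if w not in dist:
                    dist[w], par[w] = dist[v] + 1, v
                    queue.append(w)
        return par, dist

    # Phase 1: shorten maximal-distance vertices one edge at a time.
    while True:
        par, dist = bfs_tree()
        far = max(dist, key=dist.get)
        if dist[far] <= 1:
            break
        p, gp = par[far], par[par[far]]   # V_{n-1} and V_{n-2}
        edges.remove((far, p) if (far, p) in edges else (p, far))
        edges.append((gp, far))           # the edge now meets V_{n-2}

    # Phase 2: edges between two non-root vertices move to the root.
    return [(root, a) if root not in (a, b) else (a, b)
            for a, b in edges]
```

Note that the number of edges is preserved by both phases, mirroring the fact that each sliding-lemma move preserves the number of nodes of the fiber.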
Similarly we can do this for the fiber over $b'$.
By the sliding lemma \ref{lem:slide}, the new fibers over $b$ and $b'$ lie in the same irreducible component.
Then we apply the unbuckle a node construction \ref{lem:unbuckle} to reduce the number of nodes between $\Gamma$ (resp. $\Gamma'$) and the other irreducible components.
Note that at this point, the dual graph $G$ of the fiber over $b$ consists of $V_0, V_1, \ldots, V_n$, with $e_i$ edges connecting $V_i$ and $V_0$.
Similarly, the dual graph $G'$ of the fiber over $b'$ consists of $V_0', V_1', \ldots, V_n'$, with $e_i'$ edges connecting $V_i'$ and $V_0'$.
We arrange them in such a way that $V_i$ and $V_i'$ are vertices of the irreducible components under the correspondence.
Let $g_j$ (resp. $g_j'$) be the genus of the irreducible component corresponding to the vertex $V_j$ (resp. $V_j'$) for $j=0, \ldots, n$.
By assumption, $g_j=g_j'$ since the corresponding irreducible components lie in the same component of the moduli space.
Then we have the equality of arithmetic genus on both sides:
\[
\sum_{i=1}^n (e_i-1)+\sum_{j=0}^n g_j=\sum_{i=1}^n (e_i'-1)+\sum_{j=0}^n g_j'.
\]
Thus we have $\sum_{i=1}^n (e_i-1)=\sum_{i=1}^n (e_i'-1)$.
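The displayed equality can be traced back to the standard formula for the arithmetic genus of a connected nodal curve in terms of its dual graph $G$:

```latex
\[
p_a \;=\; \sum_{j} g_j \;+\; \#E(G) \;-\; \#V(G) \;+\; 1 .
\]
```

For the star-shaped graphs above, $\#E(G)=\sum_{i=1}^n e_i$ and $\#V(G)=n+1$, which gives $p_a=\sum_{j=0}^n g_j+\sum_{i=1}^n (e_i-1)$, and likewise for $G'$; since the arithmetic genus is constant in the family $\Sigma \to B$, the two expressions agree.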
For any vertex $V_i$, $i\neq 0$, with more than one edge, choose one of the $e_i>1$ edges; it corresponds to a node between $\Gamma$ and the irreducible component corresponding to $V_i$.
Up to making a base change of the family of stable maps over $B$, we may insert a $\mbb{P}^1$ at each of the other nodes.
We may use Lemma \ref{lem:samefamily} to glue on curves in the family $\mathcal{C}$ to these $\mbb{P}^1$ components.
With these new $\mbb{P}^1$'s, for any other edge, we can apply the unbuckle a node construction \ref{lem:unbuckle} to reduce the number of edges between $V_i$ and $V_0$ to $1$.
This introduces new vertices with two edges connecting to $V_0$.
Put another way, we introduce new irreducible components, each of which has two nodes connecting it to $\Gamma$.
By the equality above, we add the same number of new vertices for both graphs.
Moreover, in the unbuckle a node construction \ref{lem:unbuckle}, we can assume that the added irreducible components are smoothings of some bouquets of $n$-connecting $m$-free curves ($m, n \geq 2$), and that they come in pairs with each pair deformation equivalent.
For later use, we also note that by Lemma \ref{lem:higherconnecting}, as long as we have added enough teeth in the bouquets, smoothings of the bouquets belong to a family of $m'$-free $n'$-connecting curves for some $m', n'\geq 2$.
At this moment, we already have two chandeliers deforming to each other.
The supports are $\Gamma, \Gamma'$.
The candles are deformations of all the original irreducible components over $b/b'$.
The decorations are smoothings of the bouquets we introduced when applying the unbuckle a node construction.
All the decorations belong to the same family of $m'$-free $n'$-connecting curves for some $m', n'\geq 2$ by Lemma \ref{lem:higherconnecting} and Corollary \ref{cor:bouquet_connecting}.
In the last step, we use the sliding lemma \ref{lem:slide} again to move the decorations.
For the chandelier over $b$, we choose one candle $C$, one decoration $D$, one node between the decoration and the support, corresponding respectively to vertices $V_C, V_D$, an edge $E$, and a vertex $V_S$ in the dual graph.
Apply the sliding lemma \ref{lem:slide} to the chain $V_C \to V_S \to V_D$ with connecting edge $E$.
Then we use the sliding lemma again to move the other node to connect $C$ and $D$.
In this way we move the original loop between $V_D$ and $V_S$ to a loop between $V_D$ and $V_C$.
We fix this candle and move all the decorations to this candle.
Finally we take a general deformation that smooths all the nodes created in this operation.
For the other chandelier, we choose the corresponding candle and perform the same operation.
In the end, we get the desired pair of combs.
There is still one more point to check.
Namely the corresponding teeth given by smoothing all the decorations and the chosen candle on the two combs are deformation equivalent.
We have already noted that the decorations belong to a family of $m'$-free $n'$-connecting curves ($m', n' \geq 2$). Thus this follows from Lemma \ref{lem:samefamily}.
As for the last statement, we can add more curves in the $m$-free $n$-connecting family of curves to the teeth and take a general deformation. Then the statement follows from Lemma \ref{lem:unobstructed} and Lemma \ref{lem:higherconnecting}.
\end{proof}
Now we can describe algebraically equivalent cycles in a separably rationally connected fibration.
\begin{thm}\label{thm:algebraic_equivalence}
Let $V$ be a smooth projective variety of dimension at least $3$ defined over an algebraically closed field. Assume that either $V$ is separably rationally connected or a separably rationally connected fibration over a curve.
Fix a family $\mathcal{C} \to M$ of $n$-connecting $m$-free curves ($m, n\geq 2$).
Let $\Gamma_1 \to V$ and $\Gamma_2\to V$ be two morphisms from smooth connected projective curves of the same geometric genus to $V$ that are algebraically equivalent as cycles.
Then there are two combs such that
\begin{enumerate}
\item The handles are $\Gamma_1$ and $\Gamma_2$.
\item There is a one-to-one correspondence between the teeth in the two combs. Each corresponding pair of teeth corresponds to two smooth points in the same irreducible component of the moduli space of stable maps.
\item The two combs correspond to two smooth points in the same irreducible component of the moduli space of stable maps, whose general member is a smooth embedded curve.
\end{enumerate}
Moreover, we may assume that each tooth is $m'$-free and belongs to an $n'$-connecting family of curves for any positive integers $m', n'$.
\end{thm}
Before we give the proof, we give an example to explain the difficulty in the proof and how to overcome it.
\begin{ex}
Let us consider the simplest case. The morphisms $f_1: \Gamma_1 \to V, f_2:\Gamma_2 \to V$ factor through a smooth curve $D$: $\Gamma_1 \to D \to V$, $\Gamma_2 \to D \to V$.
So the cycle classes are the same, and hence trivially algebraically equivalent.
But the morphisms $f_1, f_2$ do not deform to each other in general.
For example, \'etale morphisms and purely inseparable morphisms (between higher genus curves) are rigid and cannot be deformed.
So the question is how to add suitable teeth to make the deformation possible.
What we do is the following. Assume that $f_1, f_2$ are of degree $d$.
We add $f_2: \Gamma_2 \to V$ and $d$ copies of $D \to V$ as teeth to $f_1$.
Then we add $f_1: \Gamma_1 \to V$ and $d$ copies of $D \to V$ as teeth to $f_2$.
Now we have two morphisms, both of which consist of a copy of $f_1$, a copy of $f_2$, and $d$ copies of $D \to V$.
How these copies are connected to each other is not important.
We can always deform them from one to the other as in Lemma \ref{lem:standard}.
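As a quick check on the cycle bookkeeping (recall that a degree-$d$ morphism onto $D$ pushes the fundamental class forward to $d\,[D]$), both assembled configurations have the same underlying cycle:

```latex
\[
(f_1)_*[\Gamma_1] \;+\; (f_2)_*[\Gamma_2] \;+\; d\,[D]
\;=\; d\,[D] + d\,[D] + d\,[D] \;=\; 3d\,[D].
\]
```

In particular they are equal, hence trivially algebraically equivalent, as cycles in $V$.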
In general, algebraic equivalence gives us families of cycles. We can always assemble two stable maps from this data such that there is a one-to-one correspondence between irreducible components.
The corresponding components deform to each other.
This operation requires some care since we need to assemble stable maps over a family.
The actual operation is certainly much more complicated.
\end{ex}
Now we begin the proof.
\begin{proof}[Proof of Theorem \ref{thm:algebraic_equivalence}]
We will reduce the theorem to the situation of Lemma \ref{lem:standard} step by step.
\textbf{Step I:} Preparation.
First of all, we claim that it suffices to prove the theorem under the extra assumption that $\Gamma_1$ and $\Gamma_2$ are smooth embedded $m$-free curves.
Indeed, we may add enough curves in the family $\mathcal{C}$ to both $\Gamma_1$ and $\Gamma_2$ and replace them with general deformation of both combs.
As long as we add the same number of curves in the family $\mathcal{C}$ to both $\Gamma_1$ and $\Gamma_2$, general deformations are smooth embedded $m$-free curves that are algebraically equivalent as cycles.
Suppose that we have proved the theorem for these general deformations. Since the teeth added can be taken to be $m'$-free $n'$-connecting, we may use Lemma \ref{lem:samefamily} to deform these teeth to the original $\Gamma_1$ and $\Gamma_2$.
Together with the teeth in the family $\mathcal{C}$ we added, we assemble the desired combs.
Lemmas \ref{lem:higherconnecting} and \ref{lem:samefamily} ensure that once we have the combs,
we may always make the teeth $m'$-free and $n'$-connecting for arbitrarily large $m', n'$.
\textbf{Step II:} Replacement by pre-stable maps.
By the definition of algebraic equivalence, there are irreducible surfaces $\Sigma_i \subset B\times V, i=1, \ldots, n$ such that
\begin{enumerate}
\item There are two points $b_1, b_2$ of $B$ and $\Gamma_1$ (resp. $\Gamma_2$) appears as the reduced subscheme of one of the irreducible components of the fibers of some $\Sigma_i \to B$ over $b_1$ (resp. $b_2$).
\item As a cycle, the fiber over $b_1$ (resp. $b_2$) is $\Gamma_1+\gamma$ (resp. $\Gamma_2+\gamma$) for some effective cycle $\gamma$ of $V$.
\end{enumerate}
Note that we have no control over the cycle $\gamma$.
In particular, as a cycle, it may contain a multiple of $\Gamma_1$ (resp. $\Gamma_2$), and the irreducible component $\Gamma_1$ (resp. $\Gamma_2$) can indeed appear with multiplicity in the fiber.
There exists a sequence of blowing-ups $V_k \to V_{k-1} \to \ldots \to V_0=V$ at points in the support of $\gamma$ and its strict transforms such that the strict transform of $\gamma$, and hence also the strict transforms of ${\Sigma_i}_{b_1}, {\Sigma_i}_{b_2}$, in $V_k$ are smooth \cite[Theorem A.1]{SaitoSato_0_cycle}.
We replace $\Sigma_i$ by the strict transforms of $\Sigma_i$ in $B \times V_k$.
From this point on, we no longer have embedded surfaces, only $B$-morphisms $\Sigma_i \to B \times V$.
Thus in the following we have the condition that the cycles in ${\Sigma_i}_{b_1}, {\Sigma_i}_{b_2}$ either have smooth support in $V_k$ or lie in the exceptional divisors of $V_k \to V$.
Next we replace the surfaces by a family of stable maps to $V_k$ using Lemma \ref{lem:stable}.
Again by abuse of notation, let us denote these families by $\Sigma_i \to B$, together with $B$-morphisms $\Sigma_i \to B \times V_k \to B \times V$.
\textbf{Step III:} Adding new fibered surfaces.
We divide the irreducible components of fibers $\Sigma_i \to B$ over $b_1$ and $b_2$ into different types:
\begin{enumerate}
\item Type I: The stable map to $V$ restricted to this component is birational.
\item Type II: The stable map to $V$ restricted to this component is finite but not generically \'etale.
\item Type III: The stable map to $V$ restricted to this component is generically \'etale of degree larger than $1$.
\item Type IV: The stable map to $V$ restricted to this component is constant, and this component is not isomorphic to $\mbb{P}^1$.
\item Type V: The stable map to $V$ restricted to this component is constant, and this component is isomorphic to $\mbb{P}^1$.
\end{enumerate}
By our construction, the strict transform of the image curve is smooth in $V_k$ in the first three cases.
Thus in the type I case, this irreducible component has to be smooth and the stable map restricted to this component is an isomorphism onto its image in $V_k$.
Moreover, in the first three cases, the stable map restricted to the irreducible component factors through a morphism to a smooth curve.
It is possible that none of the irreducible components maps birationally to $\Gamma_1, \Gamma_2$, since stable reduction may produce multiple covers if the original fiber has multiplicity greater than one along an irreducible component.
But we will see that after the following steps, which add more fibered surfaces, there will be irreducible components mapping birationally onto $\Gamma_1, \Gamma_2$.
For each irreducible component of type II to IV, we will construct a fibered surface (and modify the base of the surfaces $\Sigma_i$ if necessary) in the following way.
Let us denote this irreducible component by $C$ and its image in $V_n$ by $D$.
\textbf{Type II}: There is a finite but non-generically-\'etale morphism $C \to D$.
Lemma \ref{lem:residual} gives a fibration $W \to \mbb{P}^1, F: W \to D$ such that
\begin{enumerate}
\item $W_0$ is a connected nodal curve, which is the union of a copy of $C$ and a smooth irreducible projective curve $C'$.
\item $W_\infty$ is a smooth irreducible curve.
\item $F|_{C'}: C'\to D$ and $F|_{W_\infty}: W_\infty \to D$ are generically \'etale.
\end{enumerate}
We can construct a fibration $\Sigma_C \to B$ by choosing a suitable morphism $B \to \mbb{P}^1$ and pulling back the family $W \to \mbb{P}^1$.
Moreover, we may choose the fibration in such a way that if the irreducible component $C$ is over $b_1$ (resp. $b_2$), the fiber of $\Sigma_C$ over $b_2$ (resp. $b_1$) is $C+C'$ and the fiber of $\Sigma_C$ over $b_1$ (resp. $b_2$) is the smooth irreducible curve $W_\infty$.
After this, we only introduce new type III irreducible components over $b_1, b_2$.
\textbf{Type III}: We apply Lemma \ref{lem:break}. This gives a family $W \to \mbb{P}^1, F: W \to D$. Again by base change, we may find a family of stable maps over $B$. Moreover, we may choose the fibration in such a way that if the irreducible component $C$ is over $b_1$ (resp. $b_2$), the fiber of $W \times_{\mbb{P}^1} B$ over $b_2$ (resp. $b_1$) is $C$ union a collection of rational curves that map to a point in $D$. Composing this family with $D \to V_k \to V$ gives a family of stable maps.
After this, we only introduce new type I and type V irreducible components.
\textbf{Type IV}: The map $C \to V$ factors through a constant map to a point $x$.
As an abstract curve, there is a deformation from the pre-stable curve $C$ to a pre-stable curve whose irreducible components are all isomorphic to $\mbb{P}^1$.
We take the fibered surface given by this family. Up to making a base change $B' \to B$ for all the surfaces $\Sigma_i \to B$, we may assume that this fibered surface is over the same base as all the other surfaces. By abuse of notation, we still call this base curve $B$.
Moreover, we may choose the fibration in such a way that if the original irreducible component $C$ is over $b_1$ (resp. $b_2$), the new constructed surface has fiber $C$ over $b_2$ (resp. $b_1$).
This new family of curves gives a constant pre-stable map to $V$ by mapping everything to $x$.
After this, we only introduce new type V irreducible components over $b_1, b_2$.
Finally, we may add additional type V components (by blowing up at smooth points in the fibers over $b_1, b_2$) and produce a family of pre-stable maps to $V_k$ and $V$ so that type V components appear the same number of times over $b_1, b_2$.
So at this step, we have a family of pre-stable maps $\Sigma_i \to B, \Sigma_i \to V$ such that
\begin{enumerate}
\item All irreducible components of type II, III, IV (over $b_1, b_2$) appear in pairs. That is, if $C$ is an irreducible component of type II, III, or IV in some $\Sigma_i$ over $b_1$ (resp. $b_2$), there is a corresponding irreducible component of some $\Sigma_j$ over $b_2$ (resp. $b_1$) such that the morphisms restricted to these two irreducible components coincide as stable maps to $V$.
\item Type V components appear the same number of times over $b_1, b_2$. We choose a one-to-one correspondence between them.
\item Type I components over $b_1, b_2$ with the same image in $V$ appear the same number of times. This is true for the following reason. By our construction, the cycles of the fibers of each new fibered surface over $b_1$ and $b_2$ are the same as cycles in $V$. Therefore the cycle of the fibers over $b_1$ (resp. $b_2$) is $\Gamma_1+\gamma+\delta$ (resp. $\Gamma_2+\gamma+\delta$). Since type II, III components appear in pairs over $b_1, b_2$ and type IV, V components do not contribute to the cycle in $V$, type I components that map to the same curve in $V$ must appear the same number of times.
\item For each curve appearing in the support of the cycle $\Gamma_1+ \gamma+\delta$ (resp. $\Gamma_2+\gamma+\delta$), there is at least one irreducible component of the stable maps over the fiber $b_1$ (resp. $b_2$) that maps birationally to this curve. In particular, there is at least one irreducible component of the stable maps over the fiber $b_1$ (resp. $b_2$) that maps birationally to $\Gamma_1$ (resp. $\Gamma_2$).
\end{enumerate}
\textbf{Step IV:} Reduction to Lemma \ref{lem:standard}.
For each irreducible component of the fibers of $\Sigma_i \to B$, we may add enough curves in the family $\mathcal{C}$ and take a general deformation smoothing all these added nodes.
Note that up to a base change we can do this in families.
That is, for each surface $\Sigma_i$, we may add a large number of curves in $\mathcal{C}$ to each fiber along general directions so that for every irreducible component over $b_1, b_2$, the resulting comb is sufficiently free.
So we may assume that each irreducible component of the fibers of $\Sigma_i \to B$ is $m'$-free for $m'$ larger than the number of nodes on this irreducible component.
Choose an integer $N$ larger than the number of curves in the family $\mathcal{C}$ we have added to any irreducible component.
This number $N$ will be the eventual number of curves added to each irreducible component.
Even though a corresponding pair of irreducible components of the same type over $b_1, b_2$ lie in the same irreducible component of the moduli space of stable maps by assumption, the resulting pair of curves after smoothing may not even have the same curve class.
This is because the corresponding two irreducible components may appear in different surfaces among $\Sigma_i \to B$.
Therefore in the process of attaching curves in the family $\mathcal{C}$, we may have added different numbers of curves in the family $\mathcal{C}$ to the corresponding pair, and thus the smoothings may not lie in the same irreducible component.
It is possible, but complicated to arrange, to have the same number of curves added to each corresponding pair of irreducible components at the moment.
So we will just remember the one-to-one correspondence and add more curves to make the total number of curves added to each irreducible component equal to the chosen large number $N$ later.
We say that an irreducible component of a fiber of these new families $\Sigma_i \to B$ is of type I to V if it comes from the smoothing of a comb whose handle is of type I to V.
Finally, we may glue all the fibered surfaces together by adding a family of $m'$-free curves over $B$ (up to a base change) that intersects all the fibers of $\Sigma_i \to B$ transversely at smooth points (with $m'$ greater than the total number of irreducible components in the fibers over $b_1, b_2$).
So we get a family of \emph{connected} stable curves over $B$, and there is a one-to-one correspondence between irreducible components of the same type over $b_1, b_2$.
The fibers of this newly constructed surface over $b_1$ and $b_2$ can be smoothed to a non-singular curve.
In other words, the fibers over $b_1, b_2$ lie in an irreducible component of the moduli space of stable maps whose general point parameterizes a stable curve with smooth irreducible domain.
Thus we can construct a new family $\Sigma \to B$ (up to making a base change) such that the fibers over $b_1$ and $b_2$ are the same as the fibers of the previous family, but a general fiber is irreducible.
For each irreducible component of fibers of $\Sigma \to B$ over $b_1, b_2$, we would like to add more curves in the family $\mathcal{C}$ along general directions such that the total number of curves added this time and the time before is the chosen number $N$.
The first time, the numbers of curves added to the fibers of each surface $\Sigma_i$ over $b_1$ and $b_2$ were the same by construction.
Since we have the same number of irreducible components of all the surfaces $\Sigma_i$ over $b_1, b_2$, the total numbers of curves we would like to add to the fibers over $b_1, b_2$ this time are also the same.
Therefore this time, we can also add these curves for the whole family $\Sigma$ over $B$ by Lemma \ref{lem:samefamily}.
\textbf{Step V:} Finishing the proof.
At this point, we have arrived at the goal that the corresponding pair of irreducible components lies in the same irreducible component of the moduli space of stable maps.
Moreover, the irreducible components are unobstructed.
Now apply Lemma \ref{lem:standard} to finish the first part of the proof.
For the last part, we may add curves in the family $\mathcal{C}$ to teeth and take general deformations smoothing all the added nodes. Lemma \ref{lem:samefamily} guarantees that each pair of the resulting new teeth (as well as the new combs) still lie in the same irreducible component of the moduli space of stable maps.
By Lemma \ref{lem:unobstructed} and Lemma \ref{lem:higherconnecting}, every new tooth lies in a family of $m'$-free $n'$-connecting curves (but the families might be different for different teeth).
\end{proof}
We state some variants of this theorem.
\begin{cor}\label{cor:alg_equi_stable}
Theorem \ref{thm:algebraic_equivalence} remains true when $f_1: \Gamma_1 \to V, f_2: \Gamma_2 \to V$ are stable maps with connected but possibly reducible domains.
\end{cor}
\begin{proof}
We add curves in the $m$-free $n$-connecting families to make the stable map unobstructed.
Call the stable maps $g_1, g_2$.
We may assume that the corresponding cycles of $g_1, g_2$ are still algebraically equivalent and that the domains have the same arithmetic genus.
Take a general deformation of $g_1, g_2$ that smooths all the nodes.
Theorem \ref{thm:algebraic_equivalence} holds for the general smoothing.
Since we may assume that the attached teeth are $n$-connecting $m$-free for $m, n$ larger than $1$, we can add deformations of these teeth to the stable maps $g_1, g_2$ by Lemma \ref{lem:samefamily}.
Since all the stable maps considered are unobstructed, and since there is a chain of irreducible curves that deforms from one to the other, they all lie in the same irreducible component.
This proves the corollary.
\end{proof}
\begin{cor}\label{cor:multi_alg_equiv}
Let $V$ be a smooth projective variety defined over an algebraically closed field. Assume that either $V$ is separably rationally connected or a separably rationally connected fibration over a curve. Fix a family $\mathcal{C} \to M$ of $n$-connecting $m$-free curves ($m, n\geq 2$). Let $f_1:\Gamma_1 \to V, \ldots, f_k: \Gamma_k \to V$ be stable maps from domains of the same arithmetic genus whose images are algebraically equivalent as cycles. Then the analogue of Corollary \ref{cor:alg_equi_stable} holds.
\end{cor}
\begin{proof}
We apply Corollary \ref{cor:alg_equi_stable} to the pairs $\Gamma_1, \Gamma_i$ for $i=2, \ldots, k$.
For each $i=2, \ldots, k$, let $T_i'$ and $T_i$ be the teeth on $\Gamma_1$ and $\Gamma_i$ given by Theorem \ref{thm:algebraic_equivalence}.
Note that $T_i', T_i$ are disjoint unions of curves.
When we speak of deformations of $T_i'$ or $T_i$, we mean a deformation of each curve.
For each $j=2, \ldots, k, j\neq i$, we may attach general deformations of $T_j'$ to $\Gamma_i$.
For $\Gamma_1$ we attach all the teeth $T_i', i=2, \ldots, k$ together.
This proves the first part.
Theorem \ref{thm:algebraic_equivalence} already gives the second part of the statement for each tooth.
\end{proof}
A final application of Theorem \ref{thm:algebraic_equivalence} is the following deformation invariance result for one cycles modulo algebraic equivalence.
\begin{cor}\label{cor:deformation_inv}
Let $X\to \text{Spec } R$ be a smooth projective morphism over the spectrum of a DVR. Let $X_0$ and $X_1$ be the geometric special fiber and the geometric generic fiber. Assume that $X_0$ and $X_1$ are separably rationally connected or are separably rationally connected fibrations over a curve. Then the specialization map
\[
sp: A_1(X_1) \to A_1(X_0)
\]
between one cycles modulo algebraic equivalence is an isomorphism.
\end{cor}
\begin{proof}
We fix a family of $2$-free $2$-connecting curves in $X_0$.
Such a family lifts to a family of $2$-free $2$-connecting curves in $X_1$.
Since we may add enough $2$-free curves to any curve in $X_0$ and make the reducible curve unobstructed, there is a lifting of the whole reducible curve to $X_1$.
Therefore, the specialization map is surjective.
To see the injectivity, note that if a cycle $\Gamma_1=\Gamma_1^+-\Gamma_1^-$ in $X_1$ specializes to a cycle $\Gamma_0=\Gamma_0^+-\Gamma_0^-$ in $X_0$ and becomes algebraically equivalent to $0$ in $X_0$, we may find possibly disconnected $2$-free curves $T_0^+, T_0^-$, such that $T_0^+, T_0^-$ lie in the same irreducible component and such that the resulting combs $\Gamma_0^+\cup T_0^+$ and $\Gamma_0^- \cup T_0^-$ are unobstructed and lie in the same irreducible component of the space of stable maps by Theorem \ref{thm:algebraic_equivalence}.
Since we may choose the curves $T_0^+, T_0^-$ to be liftable to $T^+_1, T_1^-$ in $X_1$, and the combs $\Gamma_0^+\cup T_0^+$ and $\Gamma_0^- \cup T_0^-$ to be liftable to combs $\Gamma_1^+\cup T_1^+$ and $\Gamma_1^- \cup T_1^-$, we see that $T^+_1, T_1^-$ lie in the same irreducible component of the space of stable maps.
The lifted combs $\Gamma_1^+\cup T_1^+$ and $\Gamma_1^- \cup T_1^-$ lie in the same irreducible component of the space of stable maps,
since their reductions lie in the same irreducible component.
Therefore $\Gamma_1$ is algebraically equivalent to $0$ in $X_1$, which proves the injectivity.
\end{proof}
\section{Proof of Theorem \ref{thm:G_inv_cycle}}
Recall that we use $A_1(X)$ to denote the Chow group of one cycles modulo algebraic equivalence on a variety $X$.
We need to prove that for a separably rationally connected fibration over a curve defined over a finite field $\mbb{F}_q$ or a PAC field $\mbb{F}$, we have an isomorphism
\[
A_1(\mc{X}) \to A_1(\overline{\mc{X}})^G,
\]
where $G$ is the absolute Galois group of $\mbb{F}_q$ (resp. $\mbb{F}$).
We start with a technical proposition that applies to a more general situation.
\begin{prop}\label{alg_equiv_field}
Let $V$ be a smooth projective variety defined over a field $F_0$. Assume that either $V$ is separably rationally connected or a separably rationally connected fibration over a curve. Fix a family $\mathcal{C} \to M$ of $n$-connecting $m$-free curves ($m, n\geq 2$) defined over $F_0$. Let $\Gamma, \Gamma'$ be smooth projective geometrically irreducible curves over $F_0$ of the same genus with $F_0$-morphisms to $V$. Assume that the morphisms are unobstructed as stable maps, and that over the algebraic closure $\bar{F}_0$, the images are algebraically equivalent as cycles. Then there are two chandeliers defined over a finite purely inseparable field extension $F_1$ of $F_0$ such that
\begin{enumerate}
\item The chandeliers consist of supports and bouquets of candles. The supports are $\Gamma, \Gamma'$.
\item There is a one-to-one correspondence between the candles. Each pair of candles belongs to a $2$-connecting $2$-free family over $\bar{F}_0$.
\item The chandeliers are unobstructed, and lie in the same geometrically irreducible component of the moduli space of stable maps over $F_1$, whose general member is a smooth embedded curve.
\item If the field $F_0$ is a finite field, then there is a one-to-one correspondence between the bouquets on $\Gamma, \Gamma'$. Moreover, each pair of bouquets under this correspondence is uniquely determined by a label $(E_1, g)$, where $E_1$ is the field of definition of the bouquet and $g \in Gal(E_1/F_1)$. The two bouquets with the same label correspond to two smooth $E_1$-points in the same geometrically irreducible component of the moduli space of stable maps over $E_1$.
\item If the field $F_0$ is a PAC field, then we may take the chandelier to be a comb. Namely, there are only candles but no bouquets of candles.
\end{enumerate}
In particular the cycles $[\Gamma], [\Gamma']$ are algebraically equivalent over $F_1$.
\end{prop}
\begin{proof}
We first deal with the case where $F_0$ is an infinite field, and then explain the finite field and PAC field cases.
So from now on assume that $F_0$ is infinite.
Theorem \ref{thm:algebraic_equivalence} gives two combs defined over a finite field extension $F/F_0$ satisfying the following conditions:
\begin{enumerate}
\item The handles of the combs are $\Gamma, \Gamma'$.
\item Each tooth belongs to a $2$-connecting, $2$-free family.
\item For any tooth $T$ in the comb with handle $\Gamma$, there is a tooth $T'$ in the comb with handle $\Gamma'$, such that $T$ and $T'$ lie in the same irreducible component of the moduli space of stable maps.
\end{enumerate}
Let $F_1$ be the inseparable closure of $F_0$ in $F$.
Essentially we want to take Galois conjugates of these teeth under the Galois group and assemble new chandeliers. But to make the construction work, we need to carefully keep track of the deformation conditions.
For a tooth $T$ and a general point in $\Gamma$ defined over a separable field extension $E_1/F_1$, we can find a deformation of the tooth $T$ (over $\bar{F}_0$) that passes through this point.
Thus the space of stable maps that are deformations of the tooth $T$ and that pass through a general closed point defined over a finite separable field extension $E_1$ of $F_1$ is non-empty.
This moduli space is certainly defined over the same field as the closed point, and it is generically smooth.
After a further separable field extension $E_2/E_1$, we may find a smooth point in this moduli space.
We may replace the original tooth $T$ by its deformation represented by this smooth point in the moduli space, and the whole situation remains the same (except possibly replacing $F$ by a larger separable field extension over $F_1$, which does not matter).
Let $T'$ be the corresponding tooth in $\Gamma'$.
Thus $T$ and $T'$ lie in the same irreducible component of the moduli space of stable maps over $\bar{F}_0$.
As in the preceding paragraph, we may replace the tooth $T'$ by a deformation which is defined over a separable field extension $E_2'$ and connects to the handle $\Gamma'$ at a general point defined over a separable field extension $E_1'$.
Choose a finite Galois extension $E/F_1$ that contains $E_1, E_2, E_1', E_2'$.
Both teeth lie in the same geometrically irreducible component in the moduli space of stable maps,
which is defined over the field $E$.
Thus there is a deformation between the two teeth parameterized by an irreducible curve $S$ defined over the field $E$.
Let $g$ be an element of the Galois group of the finite Galois field extension $E$ over $F_1$.
Taking the Galois conjugate of the curve $S$ under $g$ gives a deformation parameterized by an irreducible curve between $g(T)$ and $g(T')$.
In particular, these two teeth also lie in the same geometrically irreducible component of the moduli space of stable maps (but $g(T)$ and $T$ might lie in different irreducible components).
Thus we may construct bouquets defined over the field $E_1$ (resp. $E_1'$) from Galois conjugates of $T$ (resp. $T'$) under Galois groups $Gal(E/E_1)$ (resp. $Gal(E/E_1')$) as in Construction \ref{const:bouquet}.
Then taking Galois conjugates under $Gal(E_1/F_1)$ (resp. $Gal(E_1'/F_1)$) gives a number of bouquets of candles defined over $F_1$.
For later reference, we note that as a cycle, we simply replace $T, T'$ by all the Galois conjugates (under the Galois group $Gal(E/F_1)$) of certain deformations of $T, T'$.
We perform this construction for all the teeth of the original combs with handles $\Gamma$ and $\Gamma'$. More precisely, for each tooth $T$ on $\Gamma$ and the corresponding tooth $T'$ on $\Gamma'$ that lies in the same geometrically irreducible component, we choose a Galois field extension $E(T)/F_1$ so that
\begin{enumerate}
\item There are intermediate field extensions $E_1(T), E_2(T), E_1'(T), E_2'(T)$.
\item The curve $\Gamma$ (resp. $\Gamma'$) has a point whose field of definition is $E_1(T)$ (resp. $E_1'(T)$).
\item There is a point, whose field of definition is $E_2(T)$ (resp. $E_2'(T)$), in the $2$-free $2$-connecting family containing the tooth $T$ (resp. $T'$), which gives a curve connecting to $\Gamma$ (resp. $\Gamma'$) at the $E_1(T)$ (resp. $E_1'(T)$)-rational point.
\end{enumerate}
We assemble bouquets of candles attached to $\Gamma$ and $\Gamma'$ using Galois conjugates under $Gal(E(T)/E_1(T))$ (resp. $Gal(E(T)/E_1'(T))$) as in Construction \ref{const:bouquet}.
We have constructed chandeliers with support $\Gamma, \Gamma'$ and bouquets of candles coming from the teeth.
By Lemma \ref{lem:unobstructed}, the chandeliers are unobstructed stable maps.
Note that this is the place where we use the assumption that $\Gamma, \Gamma'$ are unobstructed.
Over an algebraically closed field, one can certainly assemble a comb with unobstructed deformation by adding $2$-free curves to kill the obstruction.
But the author does not see how to make such a comb defined over $F_1$.
We claim that one can still deform the resulting chandeliers to each other by an irreducible curve (over $\bar{F}_1$).
This claim implies that there is a unique geometrically irreducible component of moduli space of stable maps that contains the points representing the two chandeliers.
This irreducible component is necessarily defined over the field $F_1$.
To see the claim, we start with the deformation of the two combs over an irreducible curve.
For the fibers corresponding to these two combs, up to making a further base change, we add an extra irreducible component isomorphic to $\mbb{P}^1$ at each node between the teeth and the handle, and extra $\mbb{P}^1$ components (with constant morphism) at every node between the added bouquets and the support.
We then apply Lemma \ref{lem:samefamily} repeatedly to the chandeliers, each time adding one more candle, which is $2$-connecting by assumption.
This proves parts (1), (2), (3) of the theorem for an infinite field.
The case of finite fields has two advantages over the general case. First, every finite field extension is Galois. Second, by the Lang-Weil estimate, for any geometrically irreducible variety defined over a finite field, and for any sufficiently large degree field extension, there is a rational point defined over this field extension but not defined over any smaller fields.
Therefore, in the above construction, for each tooth $T$, we can always make sure that the rational points in $\Gamma, \Gamma'$ used to assemble the bouquets have the same field of definition. That is, $E_1(T)=E_1'(T)$. Then since the family of teeth containing this point is geometrically irreducible, we can take the deformation of $T$ and $T'$ to have the same field of definition. That is, $E_2(T)=E_2'(T)$.
Thus in the above construction, we can make sure that each bouquet has the same number of candles and that there is a one-to-one correspondence between the bouquets.
We can take larger and larger field extensions over a finite field. So if we take an arbitrary well-ordering of the teeth $T_0 < T_1 < \ldots$, we may choose the fields to satisfy the inclusions:
\[
E_1(T_0) \subset E_2(T_0) \subset E_1(T_1) \subset E_2(T_1) \subset \ldots.
\]
Thus we may label all the bouquets by pairs $(E_1(T), g)$ with $g \in Gal(E_1(T)/F_1)$.
Moreover, if $T$ and $T'$ are teeth on the combs with handles $\Gamma$ and $\Gamma'$ that lie in the same irreducible component, then the bouquets constructed from $T$ and $T'$ lie in the smooth locus of the same irreducible component of the moduli space of stable maps. This follows from the same type of argument as above. Namely, we think of the deformation from $T$ to $T'$ as a deformation from $T \cup \mbb{P}^1$ to $T' \cup \mbb{P}^1$ by blowing-up suitable points in the family. Then we may apply Lemma \ref{lem:samefamily} and add the Galois conjugates of the teeth.
Thus a bouquet on $\Gamma$ and the corresponding bouquet on $\Gamma'$, both defined over the same field $E(T)$, are deformation equivalent to each other via an irreducible curve over $E(T)$.
It follows that the Galois conjugates (under $Gal(E(T)/F_1)$) of the bouquets also deform to each other via an irreducible curve.
Now we discuss the case of PAC fields. Let us denote by $C_1, \ldots, C_n$ (resp. $C_1', \ldots, C_n'$) the candles in the chandelier with support $\Gamma$ (resp. $\Gamma'$) constructed above.
Consider the following moduli space of combs with $n$ teeth:
\begin{align*}
\{f: \Gamma \cup \bigcup_{i=1}^n T_i \to V \mid {} &\text{each tooth } T_i \text{ in the comb is a deformation of the candle } C_i \\
&\text{in the } 2\text{-free }2\text{-connecting family}\}.
\end{align*}
This moduli space is geometrically irreducible, since all the teeth belong to a $2$-connecting family. Its closure in the moduli space of stable maps contains the chandelier with support $\Gamma$, since we can move the tooth $T_i$ to the position of the candle $C_i$ in the chandelier.
Therefore, this moduli space has to be defined over the same field as the chandelier, namely $F_1$.
Since $F_1$ is a PAC field, we have a comb as desired. The same argument applies to the chandelier with support $\Gamma'$ and produces a comb with handle $\Gamma'$ over $F_1$.
Finally we prove that the cycles are algebraically equivalent over $F_1$.
Clearly the two chandeliers are algebraically equivalent, since there is a deformation between them over $F_1$.
So it suffices to show that the cycle class of all the candles added to $\Gamma, \Gamma'$ are the same modulo algebraic equivalence over $F_1$.
Since every pair of corresponding candles $T, T'$ on $\Gamma, \Gamma'$ belongs to a geometrically irreducible family of $2$-connecting $2$-free curves defined over some finite separable field extension of $F_1$, we know that their Galois conjugates lie in the Galois orbits of this family.
Denote the union of all these families by $U$, which is irreducible over $F_1$.
Therefore summing up all the Galois conjugates of $T, T'$ under $Gal(E(T)/F_1)$ amounts to taking an effective zero cycle of degree $|Gal(E(T)/F_1)|$ defined over $F_1$ in $U$ and then pushing forward the cycle class of the fibers over this zero cycle.
Since any two such zero cycles are algebraically equivalent over $F_1$, the resulting one-cycles are algebraically equivalent.
\end{proof}
Now we can compare algebraic equivalence of one-cycles over $F$ and $\bar{F}$.
\begin{cor}\label{cor:alg_equi}
Let $V$ be a smooth projective variety defined over a field $F$ that is either a finite field or a PAC field. Assume that $V$ is either separably rationally connected or a separably rationally connected fibration over a curve.
Let $\Gamma_1, \Gamma_2$ be two cycles in $V$ defined over $F$.
Assume that they are algebraically equivalent over the algebraic closure $\bar{F}$.
If $F$ is perfect, they are algebraically equivalent over $F$.
If $F$ is not perfect, then $\Gamma_1-\Gamma_2$ is algebraically equivalent to $0$ in $CH_1(V) \otimes \mbb{Z} [\frac{1}{p}]$.
\end{cor}
\begin{proof}
The goal is to reduce to Corollary \ref{alg_equiv_field}. We first reduce to stable maps with equal arithmetic genus and then reduce to stable maps with smooth geometrically irreducible domains.
Up to making a finite purely inseparable field extension, we may assume that the normalizations of the irreducible components of $\Gamma_1, \Gamma_2$ are smooth over $F$.
We replace $V$ by $V \times \mbb{P}^n$ for some large $n$. In doing so, we may assume that $\Gamma_1, \Gamma_2$ are cycle classes of disjoint embedded curves. To achieve this condition, we simply take some embedding of the normalization of the support of $\Gamma_1, \Gamma_2$ in $\mbb{P}^n$. If an irreducible component of the support appears with multiplicity $m$, we use elements of $PGL_{n+1}(F)$ to give $m$ different embeddings.
Fix a sufficiently ample line bundle $H$.
If $F$ is a PAC field, then the set of rational points in $V \times \mbb{P}^n$ is Zariski dense.
If $F$ is a finite field $\mbb{F}_q$, then using the Lang-Weil estimate we can get many rational points of $V \times \mbb{P}^n$ over $\mbb{F}_{q^k}, \mbb{F}_{q^{k+1}}$ as long as $k$ is large.
So in any case, given any positive integer $M$, we can find two effective zero cycles $z^+, z^-$ such that $\deg z^+-\deg z^-=M$.
Moreover, for any subvariety $Y\subset V \times \mbb{P}^n$, we can arrange the zero cycles to be away from $Y$.
We first consider the case where $F$ is a PAC field.
For simplicity, we still write the liftings of $\Gamma_1, \Gamma_2$ in $V \times \mbb{P}^n$ as $\Gamma_1, \Gamma_2$.
We choose a general complete intersection curve $C$ that intersects both $\Gamma_1$ and $\Gamma_2$ transversely.
If we fix a sufficiently positive very ample line bundle, the locus of complete intersection curves coming from sections of this line bundle that intersect $\Gamma_1, \Gamma_2$ transversely is an open subset in a product of projective spaces.
Since $F$ is a PAC field, we can always find a rational point in this open subset.
Thus we may construct two connected stable maps $\Gamma_1 \cup C$ and $\Gamma_2 \cup C$.
Denote by $g_1$ and $g_2$ the arithmetic genus of the two domains.
If $g_1 \neq g_2$, we construct a new stable map with the same arithmetic genus.
Without loss of generality, assume $g_1 >g_2$.
We may find two effective zero cycles $v^+, v^-$ such that $\deg v^+-\deg v^-=g_1-g_2$ and such that they are disjoint from $\Gamma_1, \Gamma_2, C$.
Furthermore, we may assume that every irreducible cycle in $v^{+/-}$ has multiplicity $1$.
Then we may find a smooth complete intersection curve $D$ that contains the support of $v^+, v^-$ and such that it intersects $C$ transversely.
Finally we may find a complete intersection curve $E$ that also contains the support of $v^+, v^-$ but otherwise does not intersect $\Gamma_1, \Gamma_2, C$.
We have a pair of stable maps $\Gamma_1 \cup C \cup D \cup E_{v^+}$, $\Gamma_2 \cup C \cup D \cup E_{v^-}$. Here $\Gamma_1 \cup C \cup D \cup E_{v^+}$ (resp. $\Gamma_2 \cup C \cup D \cup E_{v^-}$) means that we take the normalization of the union at all the nodes between $E$ and $D$ except those supported at the zero cycle $v^+$ (resp. $v^-$).
In both cases, we have constructed two new stable maps, $\gamma_1, \gamma_2$, defined over $F$, which are algebraically equivalent over $\bar{F}$, have the same arithmetic genus, and it suffices to prove the statements for these two cycles.
Again by taking an embedding of the stable maps in some $\mbb{P}^n$ and replacing $V$ with $V\times \mbb{P}^n$, we may assume the stable maps $\gamma_1, \gamma_2$ are embedded nodal curves.
We may take general divisors $H_1, \ldots, H_{\dim V-2}$ defined over $F$ in the linear system $|H|$ that contain both curves.
The intersection is a smooth projective surface $S$ defined over $F$.
We note that $K_S=(K_V+(\dim V-2)H) |_S$. In particular, $K_S \cdot \gamma_1= K_S \cdot \gamma_2$. Since $\gamma_1, \gamma_2$ have the same arithmetic genus, we know that $\gamma_1 \cdot \gamma_1=\gamma_2 \cdot \gamma_2$, where the intersection is taken as divisors in $S$.
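Here the last implication follows from the adjunction formula on the surface $S$:
\[
2p_a(\gamma_i)-2=(K_S+\gamma_i)\cdot \gamma_i, \qquad i=1,2,
\]
so the equality of the arithmetic genera together with $K_S \cdot \gamma_1= K_S \cdot \gamma_2$ forces $\gamma_1 \cdot \gamma_1=\gamma_2 \cdot \gamma_2$.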
Take a general divisor $H$ in the linear system $|H|$ defined over $F$ that contains $\gamma_1$ (resp. $\gamma_2$), and consider the residual curve $\gamma_1'$ (resp. $\gamma_2'$).
Then $\gamma_1'$ and $\gamma_2'$ are smooth projective geometrically irreducible curves defined over $F$.
Moreover, $\gamma_1$ and $\gamma_2$ are algebraically equivalent over any field $E/F$ if and only if $\gamma_1'$ and $\gamma_2'$ are algebraically equivalent over $E$.
Finally, note that by adjunction,
\[
\deg K_{\gamma_1'}=(K_S+H-\gamma_1)\cdot (H-\gamma_1)=(K_S+H-\gamma_2)\cdot (H-\gamma_2)=\deg K_{\gamma_2'}.
\]
That is, $\gamma_1'$ and $\gamma_2'$ have the same genus.
So now we have reduced to the statements about stable maps with smooth geometrically irreducible domains.
Let us still call these stable maps $\Gamma_1, \Gamma_2$.
We take a family of $2$-free $2$-connecting curves defined over $F$.
Consider the family of combs that have $\Gamma_1$ as the handle and have $n$ teeth in the family of $2$-free $2$-connecting curves.
Similarly, we can construct a family of combs that have $\Gamma_2$ as the handle and have $n$ teeth in the family of $2$-free $2$-connecting curves.
With $n$ fixed, these two families are both geometrically irreducible, and are both defined over $F$.
For any $n$ large enough, a general point of both of these two families parameterizes a comb with unobstructed deformation, which has a deformation smoothing all the nodes.
Since $F$ is a PAC field, we may replace $\Gamma_1$ and $\Gamma_2$ by a general deformation, which is a stable map with unobstructed deformation and has geometrically irreducible domains.
Now the corollary follows from Corollary \ref{alg_equiv_field}.
Finally, consider the case where $F$ is a finite field. We note that the only place in the above argument that needs $F$ to be a PAC field is to find a point in some open subsets of some geometrically irreducible varieties.
When $F$ is a finite field, with all the relevant parameter spaces fixed, one can find a rational point over any field extension $F'/F$ of sufficiently large degree.
Thus we may prove the statement after passing to any sufficiently large field extension of $F$.
Now choose two field extensions of relatively prime degrees over $F$.
Since $\Gamma_1, \Gamma_2$ are algebraically equivalent over these two field extensions, they are algebraically equivalent over $F$.
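The last step is the standard restriction-corestriction argument: if $\Gamma_1-\Gamma_2$ becomes algebraically equivalent to $0$ over an extension of degree $d_i$, then pushing the algebraic equivalence forward to $F$ gives $d_i(\Gamma_1-\Gamma_2)\sim_{\mathrm{alg}} 0$ over $F$. Choosing $a, b \in \mbb{Z}$ with $ad_1+bd_2=1$, we get
\[
\Gamma_1-\Gamma_2=a\cdot d_1(\Gamma_1-\Gamma_2)+b\cdot d_2(\Gamma_1-\Gamma_2)\sim_{\mathrm{alg}} 0 \quad \text{over } F.
\]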
\end{proof}
It remains to prove surjectivity.
We first start with an easy but important observation.
\begin{lem}\label{lem:algebraicity_criterion}
Let $V$ be a projective variety defined over a finite field $\mbb{F}_q$ or a PAC field $\mbb{F}$, and let $\alpha \in A_1(\bar{V})^G$ be a Galois invariant cycle class, where $G$ is the Galois group of $\bar{\mbb{F}}_q/\mbb{F}_q$ or $\bar{\mbb{F}}/\mbb{F}$.
If there is a geometrically irreducible variety $U$ defined over $\mbb{F}_q$ or $\mbb{F}$ parameterizing a family of cycles of $V$ whose class is $\alpha$, then there is an algebraic cycle $\mathcal{Z}$ of $V$ defined over $\mbb{F}_q$ (or $\mbb{F}$) whose class is $\alpha$.
\end{lem}
\begin{proof}
The PAC field case follows from the definition.
By the Lang-Weil estimate, there is an $\mbb{F}_{q^n}$-rational point of $U$ for every $n$ large enough. We may choose $n$ and $n+1$ as such.
This gives two rational points $z_n$ and $z_{n+1}$ of $U$ over $\mbb{F}_{q^n}$ and $\mbb{F}_{q^{n+1}}$.
Equivalently, we get a cycle $Z_n$ (resp. $Z_{n+1}$) defined over $\mbb{F}_{q^n}$ (resp. $\mbb{F}_{q^{n+1}}$).
Let $\mathcal{Z}_n$ (resp. $\mathcal{Z}_{n+1}$) be the sum of the Galois conjugate cycles of $Z_n$ (resp. $Z_{n+1}$).
Then $\mathcal{Z}_{n+1}-\mathcal{Z}_{n}$ is the desired cycle.
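Indeed, if $z_n$ is chosen, as the Lang-Weil estimate allows, not to be defined over any proper subfield of $\mbb{F}_{q^n}$, then $\mathcal{Z}_n$ has $n$ summands, each of class $\alpha$ since $\alpha$ is Galois invariant. Hence
\[
[\mathcal{Z}_{n+1}]-[\mathcal{Z}_{n}]=(n+1)\alpha-n\alpha=\alpha.
\]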
\end{proof}
\begin{rem}
If $V$ is a separably rationally connected fibration over a curve, by the standard argument of adding $m$-free curves, one can show that a class in $A_1(\bar{V})^G$ comes from $A_1(V)$ if and only if it is the difference of two Galois invariant classes satisfying the conditions in Lemma \ref{lem:algebraicity_criterion}.
\end{rem}
Thus to prove the surjectivity part in Theorem \ref{thm:G_inv_cycle}, we only need to find the family of cycles as above.
To find such a family, we only need to find a smooth point of the Hilbert scheme/space of stable maps, whose Galois conjugate points are also contained in the same irreducible component.
If the finitely many Galois conjugates of this smooth point lie in the same component, then they represent cycles algebraically equivalent to each other.
Hence the consideration of algebraic equivalence naturally appears.
Here we do not want to use the Chow variety because there is no deformation theory for Chow varieties.
In particular, given a point in the Chow variety, it is difficult to tell if it lies in a unique irreducible component.
\begin{proof} [Proof of the surjectivity in Theorem \ref{thm:G_inv_cycle}]
We only prove the finite field case; the PAC field case is simpler.
When the PAC field is not perfect, we replace it with its perfect closure in some algebraic closure.
This is why we have to invert $p$ in the statement.
Let $C\subset \overline{\mc{X}}$ be a curve representing a curve class in $A_1(\overline{\mc{X}})^G$.
We choose a family of $n$-connecting $m$-free curves ($m, n\geq 2$) that is parameterized by a geometrically irreducible variety defined over $\mbb{F}_q$ (Lemma \ref{lem:connecting}).
By the argument in Lemma \ref{lem:algebraicity_criterion}, the curve class of this family lies in the image of $CH_1(\mc{X})$.
Thus to show that the class of $C$ is in the image of $CH_1(\mc{X})$, we may replace $C$ by a general smoothing of the comb consisting of $C$ and members of this family of $n$-connecting $m$-free curves.
Note that such a general smoothing has unobstructed deformation and is an embedding.
The curve $C$ is defined over a finite field extension $\mbb{F}'/\mbb{F}_q$.
Let $g \in Gal(\bar{\mbb{F}}_q/\mbb{F}_q)$ be the Frobenius generator $x\mapsto x^q$.
For any finite field extension $E/\mbb{F}_q$, we will still use $g$ to denote the image of $g$ in $Gal(E/\mbb{F}_q)$.
Let $g^k(C)$ be the Galois conjugate of $C$ under $g^k$.
Since the curve class of $C$ in $A_1(\overline{\mc{X}})$ is Galois invariant, $C$ and $g^k(C)$ are algebraically equivalent as cycles.
The starting point is Corollary \ref{cor:multi_alg_equiv}, which gives combs defined over $\bar{\mbb{F}}_q$ with handles $g^i(C)$.
These combs all lie in the same irreducible component.
The same argument as in the proof of Corollary \ref{alg_equiv_field} proves that there are chandeliers defined over $\mbb{F}'$ satisfying the following conditions:
\begin{enumerate}
\item The support of each of the chandeliers is a Galois conjugate of $C$.
\item Bouquets in each chandelier are uniquely determined by a label $(E_1, \phi)$, where $E_1$ is the field of definition of the bouquet and $\phi \in Gal(E_1/\mbb{F}')$. The bouquets labeled by $(E_1, \phi)$ correspond to smooth $E_1$-points in the same geometrically irreducible component of the moduli space of stable maps over $E_1$. Moreover, the bouquet labeled $(E_1, \phi)$ in a chandelier is the Galois conjugate of the one labeled $(E_1, \mathrm{id})$ by $\phi$.
\item There is a one-to-one correspondence between the bouquets in each chandelier. Corresponding bouquets have the same label $(E_1, \phi)$.
In particular, they are all defined over the field $E_1$.
They also lie in the same geometrically irreducible component of the moduli space of stable maps defined over the same field as the bouquet.
\item Each candle belongs to a $2$-connecting, $2$-free family.
\item There is a one-to-one correspondence between the candles attached to $C$ and $g^i(C)$. For any candle $T_0$ in the chandelier with support $C$ and for each $i$, there is a corresponding candle $T_i$ in the chandelier with support $g^i(C)$. The two candles $T_0$ and $T_i$ are defined over the same field $E_2(T_0)$, and lie in the same irreducible component of the moduli space of stable maps.
\item The chandeliers are unobstructed, and lie in a geometrically irreducible component defined over $\mbb{F}'$.
\end{enumerate}
We briefly explain how to construct the chandeliers. The verification of the above properties is the same as the verification in Corollary \ref{alg_equiv_field}. So we omit the details.
We start with one set of corresponding teeth $T_0, T_1, \ldots$ in the combs with handles $C, g(C), \ldots$ given by Corollary \ref{cor:multi_alg_equiv}. Then we choose a point of $C, g(C), \ldots$ defined over $E_1(T)$, and a deformation of $T_0, T_1, \ldots$ (all defined over $E_2(T)$) passing through this point.
Then we use the Galois group $Gal(E_2(T)/E_1(T))$ to assemble a bouquet defined over $E_1(T)$ as in Construction \ref{const:bouquet}.
We choose the points to be defined over different fields for different sets of corresponding teeth.
We may also choose the points in such a way that the Galois orbits of these points under $Gal(\bar{\mbb{F}}_q/\mbb{F}_q)$ are disjoint in $\cup_i g^i(C)$. This is possible since the number of points defined over $\mbb{F}_{q^n}$ in $C, g(C), \ldots$ grows on the order of $q^n$, while the Galois orbit of such a point has order at most $n$.
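Quantitatively, by the Lang-Weil estimate each conjugate $g^i(C)$ satisfies
\[
\# g^i(C)(\mbb{F}_{q^n})=q^n+O(q^{n/2}),
\]
while a point defined over $\mbb{F}_{q^n}$ has a Galois orbit of at most $n$ elements, so for $n$ large there are far more rational points than can be covered by any fixed number of orbits.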
The chandeliers are not necessarily Galois conjugate to each other.
Essentially we want to take Galois conjugates of the bouquets constructed above and assemble a new set of chandeliers so that the bouquets attached to each Galois conjugate of $C$ are also Galois conjugate to each other. But to make the construction work, we need to carefully keep track of the deformation conditions using Lemma \ref{lem:samefamily}.
Let us denote by $B(g^k(C))$ the set of bouquets constructed for the curve $g^k(C)$.
We take the Galois conjugates (under $Gal(\mbb{F}'/\mbb{F}_q)$) of all the bouquets of candles. To the curve $g^k(C)$, we attach the union of bouquets
$\cup_i g^i(B(g^{k-i}(C)))$. Here $g$ is the generator of $Gal(\mbb{F}'/\mbb{F}_q)$.
The union is finite. Since we have arranged that the attaching points of the bouquets do not overlap under $Gal(\mbb{F}'/\mbb{F}_q)$, we can assemble a new chandelier.
Recall that by construction, for each bouquet in $B(g^{k-i}(C))$, defined over some field extension $E_1(T)$, and its corresponding bouquet in $B(g^{k+1-i}(C))$ (also defined over $E_1(T)$), there is a deformation parameterized by an irreducible curve defined over $E_1(T)$.
So the same is true for their Galois conjugates under $g^i$.
It follows that the bouquets attached to $g^k(C)$, i.e. $\cup_i g^i(B(g^{k-i}(C)))$, are deformation equivalent to the corresponding ones attached to $g^{k+1}(C)$, i.e. $\cup_i g^i(B(g^{k+1-i}(C)))$.
To summarize, we have constructed stable maps with disconnected domains $\mathcal{B} \to \mc{X}$ (bouquets of candles), and $C \to \mc{X}$, all defined over $\mbb{F}'$, such that
\begin{enumerate}
\item There is a stable map with connected domains $\mathcal{B} \cup C \to \mc{X}$ defined over $\mbb{F}'$.
\item There is a deformation from $\mathcal{B} \cup C \to \mc{X}$ to each of its Galois conjugates parameterized by an irreducible curve. To see this, first note that the chandeliers with bouquets of candles assembled from Galois conjugates of the original teeth deform to each other over an irreducible curve. Adding a bouquet is the same as adding a $\mbb{P}^1$ and then adding a tooth and all of its Galois conjugates. We have seen that for two teeth $T, T'$ that lie in the same irreducible family, their Galois conjugates also lie in an irreducible family (but possibly a different one). Since the teeth are $2$-connecting, Lemma \ref{lem:samefamily} proves the statement.
\item There is a deformation from $\mathcal{B}$ to each of its Galois conjugates, parameterized by an irreducible curve.
\item All the stable maps are unobstructed.
\end{enumerate}
Hence there is an irreducible variety $U$ defined over $\mbb{F}_q$ that parameterizes bouquets.
Moreover, the bouquet $\mathcal{B}$ and all its Galois conjugates are parameterized by points in $U$.
There is also an irreducible variety $V$ defined over $\mbb{F}_q$ that parameterizes the stable maps such that $C \cup \mathcal{B}$ and all its Galois conjugates are parameterized by points in $V$.
By Lemma \ref{lem:algebraicity_criterion}, the cycle classes of the stable maps $C \cup \mathcal{B}$ and $\mathcal{B}$ are in $CH_1(\mc{X})$. So is the class of $C$.
\end{proof}
\section{Integral Tate conjecture and local-global principle for zero cycles}\label{sec:ITateHasse}
Let $X$ be a smooth projective geometrically irreducible variety of dimension $d$ defined over a finite field $\mbb{F}$. We have the cycle class maps:
\begin{equation}\label{int_Tate_2}
CH_1(X) \otimes \mbb{Z}_\ell \to H^{2d-2}(X, \mbb{Z}_\ell(d-1)).
\end{equation}
Recall that the integral Tate conjecture asks the following question.
\begin{ques}\label{q:SRCTate}
For which smooth projective varieties $X$ defined over $\mbb{F}$ is the cycle class map (\ref{int_Tate_2}) surjective?
\end{ques}
We mention another closely related question.
\begin{ques}\label{q:CH0}
Let $X$ be a smooth projective variety defined over a henselian local field with finite residue field. Is the cycle class map
\[
CH_0(X)\hat{\otimes} \mbb{Z}_\ell \to H^{2d}(X, \mbb{Z}_\ell(d))
\]
injective? Here $\ell$ is invertible in the residue field.
\end{ques}
\begin{rem}\label{rem:WE}
Question \ref{q:CH0} has a positive answer if $X$ is a geometrically rational surface, and has a regular model with SNC central fiber (\cite[Theorem 3.1]{WittenbergEsnault_0_cycle} in general and \cite[Theorem A]{Saito_torsion_codim_2} for the case of $p$-adic fields).
In this case, the proof in \cite{WittenbergEsnault_0_cycle} also shows that the closed fiber satisfies a version of the integral Tate conjecture.
For $X$ defined over a Laurent series field $\mbb{F}_q \Semr{t}$, this last condition is satisfied since we have resolution of singularities for $3$-folds.
\end{rem}
If Question \ref{q:CH0} has a positive answer for the generic fiber $X$, then Conjectures \ref{conj:CT1} and \ref{conj:E} are equivalent for $X$.
\begin{rem}
We also note that the results in \cite{zerocycleLaurent} and \cite{Coniveau} suggest that Question \ref{q:CH0} should have a positive answer for separably rationally connected varieties, provided that the characteristic $p$ analogues of the conjecture $\textbf{R}(n, 3)$ about Kato homology in loc. cit. are true, and that the minimal model program is established in positive and mixed characteristic.
\end{rem}
As discussed in Theorem \ref{thm:TateImpiesCT} and the remark that follows this theorem in the introduction, various types of integral Tate conjectures would imply various versions of Colliot-Th\'el\`ene's conjectures \ref{conj:CT1}, \ref{conj:CT2}.
We can deduce Theorem \ref{thm:integralTate} from Theorem \ref{thm:G_inv_cycle}.
\begin{proof}[Proof of Theorem \ref{thm:integralTate}]
Recall that $G=\text{Gal}(\bar{\mbb{F}}_q/\mbb{F}_q)$ is the absolute Galois group.
By Theorem \ref{thm:G_inv_cycle}, we have an isomorphism
\[
A_1(\mc{X}) \cong A_1(\overline{\mc{X}})^G.
\]
Under the assumptions (A), (B) of Theorem \ref{thm:integralTate}, we know that there is an isomorphism of $\text{Gal}(\bar{\mbb{F}}_q/\mbb{F}_q)$-modules:
\[
A_1(\overline{\mc{X}}) \otimes \mbb{Z}_\ell \cong H^{2d}(\overline{\mc{X}}, \mbb{Z}_\ell(d)).
\]
Note that $G$ is topologically generated by the Frobenius $F$ and $A_1(\overline{\mc{X}})^G$ is the kernel of $F^*-\text{id}$. Since $\mbb{Z}_\ell$ is a flat $\mbb{Z}$-module, $A_1(\overline{\mc{X}})^G\otimes \mbb{Z}_\ell$ is the kernel of $$(F^*-\text{id})\otimes \text{id}_{\mbb{Z}_\ell}: A_1(\overline{\mc{X}})\otimes \mbb{Z}_\ell \to A_1(\overline{\mc{X}})\otimes \mbb{Z}_\ell.$$ That is,
\[
A_1(\overline{\mc{X}})^G \otimes \mbb{Z}_\ell \cong (A_1(\overline{\mc{X}})\otimes \mbb{Z}_\ell)^G \cong H^{2d}(\overline{\mc{X}}, \mbb{Z}_\ell(d))^G,
\]
where $\mbb{Z}_\ell$ is equipped with the trivial action of $G$.
This proves the first part of the theorem. The second part is \cite[Corollary 1.11]{Coniveau}.
\end{proof}
\begin{proof}[Proof of Theorem \ref{thm:integralTateSurface}]
First note that the surjectivity of the cycle class map is a birational invariant.
So using resolution of singularities for $3$-folds \cite{Resolution1, Resolution2, AbhyankarResolution}, we may assume that the singular fibers are SNC divisors.
The result of Bloch-Srinivas shows that the Griffiths group of $1$-cycles on $\overline{\mc{X}}$ is $p$-torsion. Thus hypothesis (B) in Theorem \ref{thm:integralTate} is satisfied.
As for hypothesis (A), we have a commutative diagram of localization exact sequences:
\[
\begin{CD}
\oplus CH_1(\overline{\mc{X}}_i)\otimes \mbb{Z}_\ell @>>> CH_1(\overline{\mc{X}})\otimes \mbb{Z}_\ell @>>>CH_1 (\overline{\mc{X}}^0)\otimes \mbb{Z}_\ell @>>> 0 \\
@VVV @VVV @VVV\\
\oplus H^{4}_{\overline{\mc{X}}_i}(\overline{\mc{X}}, \mbb{Z}_\ell(2)) @>>>H^4(\overline{\mc{X}}, \mbb{Z}_\ell(2))@>>>H^4(\overline{\mc{X}}^0, \mbb{Z}_\ell(2))\\
\end{CD}
\]
Here $\mc{X}_i$ is a singular fiber of the fibration $\mc{X} \to B$ and $\mc{X}^0$ is the complement of all the singular fibers $\mc{X}_i$.
By Section 4.3 in \cite{WittenbergEsnault_0_cycle}, the first vertical map is surjective.
We may assume that $\overline{\mc{X}}^0$ is over an affine curve (i.e. the direct sum on the left is non-trivial).
A simple calculation then shows that $H^4(\overline{\mc{X}}^0, \mbb{Z}_\ell(2))$ is one-dimensional and spanned by the class of a section. Thus the third vertical map is also surjective by \cite{deJongStarr_GHS}.
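The calculation can be sketched via the Leray spectral sequence for $\pi: \overline{\mc{X}}^0 \to B^0$, where $B^0$ denotes the affine open curve: since $B^0$ is affine of dimension $1$, we have $H^p(B^0, -)=0$ for $p \geq 2$, and the fibers of $\pi$ are smooth separably rationally connected surfaces, so $R^3\pi_* \mbb{Z}_\ell=0$ and
\[
H^4(\overline{\mc{X}}^0, \mbb{Z}_\ell(2)) \cong H^0(B^0, R^4\pi_* \mbb{Z}_\ell(2)) \cong \mbb{Z}_\ell,
\]
generated by the class of a section.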
So the middle one is surjective.
Hypotheses (C) and (D) are also satisfied in this case.
For simplicity, we only explain how to prove hypothesis (D).
On the one hand, the cokernel is torsion for separably rationally connected fibrations by a decomposition of the diagonal argument.
On the other hand, the Bloch-Kato conjecture proved by Voevodsky (in dimension $3$, we can also use the Merkurjev-Suslin theorem) implies that it is torsion free for all smooth projective $3$-folds.
Hence the cokernel has to vanish.
Thus Theorem \ref{thm:integralTate} implies that $CH_1(\mc{X}) \otimes \mbb{Z}_\ell \to H^4(\mc{X}, \mbb{Z}_\ell(2))$ is surjective.
\end{proof}
\begin{proof}[Proof of Theorem \ref{thm:zerocycle}]
This follows from combining Theorems \ref{thm:TateImpiesCT} and \ref{thm:integralTateSurface} with Remark \ref{rem:WE}.
\end{proof}
\section{Examples}\label{sec:example}
We conclude this article with some examples where one can check the conditions in Theorem \ref{thm:integralTate}.
\begin{prop}
Let $\mc{X} \subset \mbb{P}_B(E)$ be a family of complete intersections of degrees $d_1, \ldots, d_c$ in $\mbb{P}^n$ over a smooth projective curve $B$ over $\mbb{F}_q$. Assume that the generic fiber $X$ is smooth separably rationally connected of dimension $d \geq 5$ and that $\sum d_i^2\leq n$.
Also assume that $\mc{X}$ is smooth.
Then the cycle class map
\[
CH^d(\mc{X}) \otimes \mbb{Z}_\ell \to H^{2d}_{\text{{\'e}t}}(\mc{X}, \mbb{Z}_\ell(d))
\]
is surjective and Conjectures \ref{conj:CT1} and \ref{conj:CT2} hold for the generic fiber $X$ over $\mbb{F}_q(B)$.
\end{prop}
\begin{proof}
By the dimension assumption and the affine vanishing for \'etale cohomology,
there is an isomorphism
\[
H^{2d}_{\text{{\'e}t}}(\mc{X}, \mbb{Z}_\ell(d)) \cong H^{2d-2}_{\text{{\'e}t}}(\overline{\mc{X}}, \mbb{Z}_\ell(d-1))^G,
\]
which is spanned by the class of a section and a line in a fiber.
So it suffices to show that over the algebraic closure $\overline{\mbb{F}}_q$, every multisection is rationally equivalent to a multiple of any fixed section modulo lines in fibers, and that every line in a fiber is algebraically equivalent to any line in any smooth fiber.
Both statements follow from well-known arguments.
More precisely, the space of chains of two lines passing through two points in a complete intersection of degree $d_1, \ldots, d_c$ is defined by equations of degree $1,1,2,2, \ldots, d_1-1, d_1-1, d_1, 1, 1, \ldots, d_2-1, d_2-1, d_2, \ldots, 1, 1, \ldots, d_c-1, d_c-1, d_c$ in $\mbb{P}^n$ (see, for example, Lemma 3.4 in \cite{Pan_conic}).
Thus by the classical Tsen-Lang theorem, for any family of complete intersections of degree $(d_1, \ldots, d_c)$ over a smooth curve $T/\bar{\mbb{F}}_q$ and for any two sections of this family, there is a family of chains of two lines in the complete intersections over $T$ such that the two sections lie in this family.
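As a quick sanity check (ours, not taken from the cited sources), the degrees listed above sum, for each $i$, to $d_i + 2\sum_{j=1}^{d_i-1} j = d_i + d_i(d_i-1) = d_i^2$, so the total degree of the defining equations is
\[
\sum_{i=1}^{c} d_i^2 \leq n,
\]
which is exactly the bound under which the Tsen--Lang theorem (applied to the $C_1$ field $\bar{\mbb{F}}_q(T)$) guarantees a rational point of the space of chains.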
Any two sections in a $\mbb{P}^1$-bundle over a curve are rationally equivalent modulo general fibers.
Thus any two sections are rationally equivalent up to lines in general fibers.
This in turn implies that any two multi-sections of the same degree are rationally equivalent up to lines in general fibers.
Since any curve in a fiber is rationally equivalent to the difference of two multi-sections of the same degree, it is also rationally equivalent to lines in general fibers.
Finally, since the Fano scheme of lines of a complete intersection is connected as long as it has positive dimension \cite[Th\'eor\`eme 2.1]{DebarreManivelFanoScheme}, all lines in a fiber are algebraically equivalent.
\end{proof}
\begin{rem}
The dimension assumption is not restrictive.
The only low dimensional examples satisfying the numerical conditions are quadrics and linear spaces.
One can check by hand that the integral Tate conjecture holds for them.
\end{rem}
\begin{rem}
In general it is still an open question if a smooth Fano complete intersection is separably rationally connected. However, one can show that if the characteristic $p$ is larger than all the $d_i$, then every smooth Fano complete intersection of degree $d_1, \ldots, d_c$ is separably rationally connected \cite{WASTZ2018}.
\end{rem}
\begin{prop}
Let $X$ be a smooth proper variety that is also a homogeneous variety under an integral linear algebraic group $G$ over $\mbb{F}_q(B)$.
Assume that $X$ admits a regular projective model $\mc{X} \to B$. Then the cycle class map
\[
CH^d(\mc{X}) \otimes \mbb{Z}_\ell \to H^{2d}_{\text{{\'e}t}}(\mc{X}, \mbb{Z}_\ell(d))
\]
is surjective and Conjecture \ref{conj:CT2} holds for $X$.
\end{prop}
\begin{proof}
It is well-known that $\bar{G}$ over $\bar{\mbb{F}}_q(B)$ is rational.
Thus $\overline{\mc{X}} \to \bar{B}$ is birational to $\bar{B} \times_{\bar{\mbb{F}}_q} \mbb{P}^n \to \bar{B}$.
So the conditions in Theorem \ref{thm:integralTate} are satisfied.
\end{proof}
\begin{rem}
Liang \cite{LiangYQ} proved that if the Brauer-Manin obstruction is the only obstruction to weak approximation of rational points on a rationally connected variety over a number field $K$ and over all of its finite field extensions, then the number field analogue of Conjecture \ref{conj:E} is true. As a corollary, he proved Conjecture \ref{conj:E} for all smooth proper varieties birational to a homogeneous space under a linear algebraic group with connected stabilizers. Harpaz and Wittenberg \cite{Harpaz_Wittenberg_zero_cycle} proved Conjecture \ref{conj:E} for all smooth proper varieties birational to a homogeneous space under a linear algebraic group.
One could expect that this also holds in the global function field case by essentially the same proof (modulo some characteristic~$p$ issues).
\end{rem}
\bibliographystyle{alpha}
\section{Introduction}
Black holes and Gravitational Waves (GWs) are two of the most fascinating
predictions of General Relativity (GR). Black holes are among the most
mysterious objects in our universe that have attracted the attention of
scientific community for many decades. It is widely believed that when a
sufficiently massive star runs out of fuel, the inward pull of gravity becomes
dominant and there is a rapid collapse of matter towards the centre, which
ultimately results in the formation of a black hole. A black hole interacts
with its surrounding matter and is generally in a perturbed state. Such
perturbations cause the black hole to undergo oscillation and that leads to
the emission of GWs \cite{ligo_1,konoplya_1, konoplya_2, michele_1,
hughes}. GWs are ripples in the spacetime fabric, generated by accelerating
massive objects, which propagate at the speed of light. The LIGO-Virgo
collaboration announced the first-ever detection of GWs on 14 September
2015 \cite{abbott2016}, almost 100 years after their prediction by the prominent physicist A.\
Einstein in 1916. Since then, a number of detections of GWs have been reported
by the collaboration \cite{abbott2016_2,abbott2017, abbott2020,abbott2021}. This
has ushered in a new era of gravitational astronomy.
Quasinormal Modes (QNMs) are complex frequencies associated with the GWs
that represent the response of a black hole after some perturbation acts on it
\cite{mashhoon_1, chandrasekhar_1, konoplya_1, konoplya_2, konoplya_3,
konoplya_4, djgogoi_1, djgogoi_2, djgogoi_3}. There are different methods of
calculating the QNMs of black holes \cite{konoplya_1}. One of the
simplest and most elegant methods of finding the QNMs was devised by B.\
Mashhoon, and is commonly called the Mashhoon method \cite{konoplya_1,
mashhoon_1,mashhoon_2,mashhoon_3, mashhoon_4}. It is an analytical method
which is easy to handle for a simple system. The most frequently used and
trusted method of QNM analysis is the Wentzel-Kramers-Brillouin (WKB) method
\cite{konoplya_1,konoplya_2, iyer}, which was initially utilized by Schutz
and Will \cite{schutz}. Improvements were made in this method by Konoplya who
introduced corrections in the WKB calculations up to 6th order \cite{konoplya_4}.
Recently, even higher-order corrections have been introduced in this method
(\cite{konoplya_5} and references therein). In this work we consider both of these methods of analyzing
the QNMs for the sake of accuracy in the calculation. Apart from these,
numerous works have been done on the analytical and numerical techniques to
calculate the QNMs associated with the black hole perturbations in recent
times \cite{leaver_1, leaver_2, cardoso, karlos,zhidenko, zhidenko2,ali1,ali2}.
Other works related to black hole physics may be found in \cite{nandan,kala}
and in the references therein.
Recently, black hole physics and the related thermodynamics have also
attracted researchers' attention, and many such works can be found in
the recent literature \cite{marco,p,ted,zhang,michael,marco_1,M,ch,yuan,
sharif_1, mohsen}. The ground-breaking works of Bekenstein and
Hawking cemented the idea of interpreting a black hole as a
thermodynamic system, showing corresponding properties like temperature
and entropy. Hawking proposed that a black hole emits radiation from its event
horizon \cite{hawking_1,hawking_2, hawking_3} and Bekenstein showed
that as the black hole engulfs matter, the information associated with it is
not lost but is incorporated in the horizon area of the black hole
\cite{bekenstein_1,bekenstein_2, bekenstein_3}. These ideas led to a
revolution in black hole thermodynamics.
The discovery by Riess and Perlmutter that the universe is
expanding with an acceleration \cite{riess,perlmutter} led to a flood of
theoretical models put forward to explain this unexpected observation.
In the majority of such models the idea of dark energy was invoked to interpret
this colossal expansion (see \cite{dark} for a review). One of the convenient
ideas to deal with it is the $\Lambda$CDM model, in which Einstein's
cosmological constant was reintroduced as a homogeneous and isotropic fluid
with negative pressure, considered to be the cause of the present
state of expansion of the universe \cite{dark_2}. The cosmological constant
was thought to originate out of quantum fluctuations of vacuum, but its
theoretically predicted value could not match the observational value. To
alleviate this problem, dynamical scalar field models were proposed, and
the most common among them is the quintessence model of dark energy. In this
model a scalar field minimally coupled to gravity is used to describe this
late time accelerated expansion. Details about and the current status of
the quintessence model can be found in Refs.~\cite{dark,dark_2,q1,q2,q3,q4,q5}.
Many works can be found in literature where the black hole thermodynamics
is studied with a surrounding field. In 2003, a Schwarzschild black hole
surrounded by a quintessence was studied and corresponding thermodynamic
properties were examined by Kiselev \cite{kiselev}. He showed that the presence
of the surrounding field has a major impact on the properties of a black hole.
Subsequently, Chen \textit{et al.}~\cite{chen_1} considered a $d$-dimensional
black hole with a quintessence matter surrounding it and examined
thermodynamic properties of the black hole. Reissner-Nordstr\"om black holes
were examined with a quintessential surrounding by Wei and Chu \cite{wei_1}.
Thermodynamics of Nariai-type black holes was considered by Fernando with
a quintessential surrounding \cite{fernando}. Recently, many
research works have considered the effects of quantum corrections via the
Generalized Uncertainty Principle (GUP) on the thermodynamics of black hole
\cite{shahjalal,anacleto_1,bcl}. One such work was carried out by Shahjalal in 2019
\cite{shahjalal}, where he compared the effects of the quantum deformations
with and without the presence of quintessential surroundings. The case of
rotating non-linear magnetically charged black holes was taken up by
Ndongmo \textit{et al.}~\cite{ndongmo}, in which thermodynamics of the black
hole was studied. Moreover, Anacleto \textit{et al.} \cite{anacleto_1} studied
the quantum-corrected Schwarzschild black holes and analyzed the absorption
and scattering processes. Gonz\'ales \textit{et al.} \cite{gonzales} studied
a 3-dimensional G\"odel black hole and calculated the QNMs and Hawking radiation.
Further, L\"utf\"uoglu \textit{et al.}~\cite{bcl} studied the thermodynamics of
Schwarzschild black holes with the quintessential surrounding and GUP. They
showed that the upper and lower bounds on various functions like temperature
and entropy depend on the deformation parameters as well as on the quintessence
coefficient, and also presented plots of P-V isotherms.
The study of black hole shadows has also been carried out in recent times as
they provide useful insights into the black hole event horizon as well as into
the optical properties of a black hole \cite{konoplya_shadow,vagnozzi}. The
initial work in this direction was done by Synge \cite{synge} and Luminet
\cite{luminet} in the 1970s and for rotating Kerr black holes by Bardeen
\cite{bardeen1}. Lately, an interesting work on QNMs and shadows of Schwarzschild
black hole was performed by Anacleto \textit{et al.}~\cite{anacleto}, where
they considered the GUP-modified Schwarzschild black hole solutions and
calculated the QNMs and shadows of the black holes, and showed the dependency
of these properties on the deformation parameters. In recent times, many works
have been done in this field \cite{j1,s1,s2,s3,s4,s5,s7}.
Thus, inspired from the ongoing endeavours to explore these novel ideas, in
this work we intend to study the various properties of a GUP-corrected
Schwarzschild black hole surrounded by a quintessence field, such as the QNMs
and thermodynamic properties like Hawking temperature, entropy, heat capacity
and surface gravity. To the best of our knowledge, GUP with both linear and
quadratic terms has not been incorporated into a Schwarzschild black hole
surrounded by quintessence. The novelty of GUP is that it introduces a minimum
length scale, namely the Planck length, which might play an important role in the
properties of black holes \cite{bcl,anacleto}. Also, we plan to analyse the
possibility of association of the parameters of our model with the shadow
radius of the black hole.
The rest of the paper is organized as follows. In section II, we compute the
QNMs of the GUP-corrected Schwarzschild black hole surrounded by a quintessence
field using the Mashhoon method and the semi-analytical WKB method, and analyze
the results. In section III, we study the thermodynamic properties of the
black hole and present a graphical analysis of dependency of the thermodynamic
properties on the deformation parameters as well as on the quintessence
coefficient. Then we analyze the black hole shadow with respect to variations
in the model parameters. We summarize our results and present some concluding
remarks in section IV.
\section{Quasinormal modes of a GUP-corrected Schwarzschild black hole}
The general form of the black hole metric as initially derived by Kiselev
\cite{kiselev}, in which he considered a Schwarzschild black hole surrounded by
a quintessence dark energy with a particular energy density, can be expressed
by
\begin{equation}
ds^2 = -g(r)\, dt^2 + \frac{1}{g(r)}\, dr^2 + r^2\, d\Omega^2,
\label{eq1}
\end{equation}
where $d\Omega^2 = d\theta^2+\sin^2\theta\, d\phi^2$ and the metric function
$g(r)$ has the form:
\begin{equation}
g(r) = 1 - \frac{2M}{r}-\frac{e}{r^{3\omega +1}}.
\label{eq2}
\end{equation}
In this function $M$ is the mass of the black hole, $\omega$ is the equation
of state parameter of the quintessence field, and $e$ is the positive
normalization coefficient that is dependent on the quintessence density.
In the recent past, a quantum correction to various black hole solutions,
including the Schwarzschild one, has been introduced via GUP in order to avoid
singularities in such solutions by introducing a nonzero minimum length \cite{kaz}.
Under this correction, the normal metric of a black hole is modified, which gives a corresponding
new horizon of the black hole. Thus, for example, in the case of
Schwarzschild black hole, the original Schwarzschild horizon radius $r_{h}$ of the black hole
has to be replaced with the GUP-corrected radius $r_{hGUP}$ for this
purpose \cite{anacleto}. The basic steps of incorporation of GUP correction
into the black hole metric \eqref{eq1} are the following.
Considering the modified Heisenberg algebra, we may write
\begin{equation}
\Delta x \Delta p \geq \frac{\hbar}{2}\bigg(1-\frac{\alpha l_p}{\hbar} \Delta p + \frac{\beta l^2_p}{\hbar^2} (\Delta p)^2 \bigg),
\label{eq3}
\end{equation}
where $\alpha$ and $\beta$ are two dimensionless deformation parameters, and
$l_p$ is the Planck's length. Taking the unit system with $G=c=\hbar=l_p =1$,
equation \eqref{eq3} can be solved for $\Delta p$, which gives
\begin{equation}
\Delta p \approx \frac{4r_h+\alpha}{2\beta}\left[1-\sqrt{1-\frac{4\beta}{(4r_h+\alpha)^2}}\right].
\label{eq4}
\end{equation}
Here the uncertainty in position is taken as the horizon diameter $2r_h$ and we
end up with the following expression \cite{anacleto_1}:
\begin{equation}
E_{GUP} \geq E\left[1-\frac{4\alpha}{r_h}+\frac{16 \beta}{r_h^2}+\dots \right],
\label{eq5}
\end{equation}
where $E_{GUP}$ is the GUP-corrected energy of the black hole. Now considering
the assumption that $E\sim M$, $E_{GUP}\sim M_{GUP}$ and calculating
$r_h =\frac{2M}{1-e}$ from the function \eqref{eq2}, we obtain the relation
\begin{equation}
M_{GUP}\geq M\left[1-\frac{4\alpha}{r_h}+\frac{16 \beta}{r_h^2}\right]=M\left(1-\frac{2\alpha(1-e)}{M}+\frac{4 \beta(1-e)^2}{M^2}\right).
\label{eq6}
\end{equation}
Thus, finally the line element of a GUP-corrected Schwarzschild black hole
surrounded by a quintessence field ($\omega = -1/3$) takes the form:
\begin{equation}
ds^2 = -f(r)dt^2 + \frac{1}{f(r)}dr^2 + r^2 d\Omega^2
\label{eq7}
\end{equation}
with the modified metric function
\begin{equation}
f(r)=1-\frac{2M}{r}\left(1-\frac{2\alpha(1-e)}{M}+\frac{4 \beta(1-e)^2}{M^2}\right)-e \equiv 1 -\frac{2M_{GUP}}{r} - e.
\label{eq8}
\end{equation}
This metric function gives the GUP-corrected horizon radius as $r_{hGUP} =
2M_{GUP}/(1-e)$. Fig.~\ref{fig1} shows the behaviours of the original
metric and the GUP-corrected metric as functions of $r$ for various values of
the related parameters. From the figure it is seen that the black hole has
only one event horizon for all the considered values of the parameters.
The left plot shows
that with increasing values of $\alpha$ the horizon radius becomes
smaller, whereas the middle and right plots show that with increasing
$\beta$ and $e$ values, respectively, the horizon radius increases. It is also
seen that the effect of the parameter $\beta$ is more dominant than that of the
parameters $\alpha$ and $e$.
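To illustrate these trends concretely, the following minimal numerical sketch (ours, for illustration only) evaluates Eq.~\eqref{eq6} together with the horizon relation $r_{hGUP} = 2M_{GUP}/(1-e)$ for a few parameter choices:

```python
# Sketch (ours, not from the paper): GUP-corrected mass and horizon radius,
# following Eq. (6) and r_hGUP = 2 M_GUP / (1 - e) of the text.

def m_gup(M, alpha, beta, e):
    """Quantum-corrected mass, Eq. (6)."""
    return M * (1 - 2 * alpha * (1 - e) / M + 4 * beta * (1 - e) ** 2 / M ** 2)

def r_horizon(M, alpha, beta, e):
    """GUP-corrected horizon radius r_hGUP = 2 M_GUP / (1 - e)."""
    return 2 * m_gup(M, alpha, beta, e) / (1 - e)

M = 1.0
r_base = r_horizon(M, 0.0, 0.0, 0.05)   # no GUP correction
r_a    = r_horizon(M, 0.1, 0.0, 0.05)   # linear (alpha) correction only
r_b    = r_horizon(M, 0.0, 0.1, 0.05)   # quadratic (beta) correction only
r_e    = r_horizon(M, 0.0, 0.0, 0.10)   # larger quintessence parameter

print(r_a < r_base)   # True: alpha shrinks the horizon
print(r_b > r_base)   # True: beta enlarges it
print(r_e > r_base)   # True: a larger e enlarges it
```

The comparisons confirm the behaviour read off from Fig.~\ref{fig1}: the linear GUP parameter $\alpha$ shrinks the horizon, while $\beta$ and $e$ enlarge it.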
\begin{figure}[!h]
\includegraphics[scale=0.26]{metric_plot.pdf}\hspace{0.3cm}
\includegraphics[scale=0.26]{beta.pdf}\hspace{0.3cm}
\includegraphics[scale=0.26]{c_vs_metric.pdf}
\caption{Behaviours of the original metric and the GUP-modified metric as a
function of $r$ for different values of the associated parameters as shown.
In the middle plot $\alpha=0.09$ is used, whereas in the right plot
$\alpha = \beta = 0.02$ is used. In all the three plots $M=1$ is used. This
value of $M$ is used for all plots and analysis of this work, if otherwise not
stated.}
\label{fig1}
\end{figure}
At this stage it is necessary to mention that small values of the GUP
parameters are an obvious choice, because any correction term that we introduce
into our theory cannot be larger than the base term, as such
corrections are generally very minute in nature. In the literature, we find
that the values of these parameters have been taken to be less than unity
\cite{bcl,anacleto,shahjalal,gogoi4}. This choice is well motivated, as we are considering only
small corrections to the original uncertainty relation and it is demanded for
the derivation of the metric expression. Similarly, the quintessence
parameter ($e$) has been constrained for the case of a Schwarzschild black
hole surrounded by a quintessence field in Ref.~\cite{A6}, where the authors found a
bound on the quintessential parameter as $10^{-21}\leq eM\leq 10^{-11}$. On
the other hand in Refs.\ \cite{chen_1,toledo}, it can be seen that the
quintessential parameter is taken of the order $\sim 0.1$. So, in our study
we consider reasonably small values of these parameters.
\subsection{QNMs by Mashhoon Method}
The Mashhoon method is an analytical way of calculating the QNMs of a black
hole by comparing its effective potential with a standard potential, such as
the P\"oschl-Teller potential \cite{mashhoon_2} or the Eckart potential \cite{mashhoon_1}. Starting
from the metric function $f(r)$, the effective black hole potential can be
obtained from the formula \cite{djgogoi_2,konoplya_5}:
\begin{equation}
V_l(r)=f(r) \Big[\frac{f'(r)}{r}+\frac{l(l+1)}{r^2}\Big].
\label{eq9}
\end{equation}
After obtaining the effective potential of the black hole, we compare it with
the standard P\"oschl-Teller potential at their maxima. The P\"oschl-Teller
potential has the form \cite{mashhoon_2}:
\begin{equation}
V_{PT}=\frac{V_{0}}{\cosh^2 a(x-x_{0})},
\label{eq10}
\end{equation}
where the quantity $V_{0}$ denotes the height and $a$ denotes the curvature
of the potential at its maximum. On comparing the two potentials, we get the
analytical forms of the parameters $V_0$ and $a$. Then we utilize the formula
for calculating the QNMs, which is given by Mashhoon as \cite{mashhoon_2}
\begin{equation}
\omega=\omega_0 + i \Gamma=\pm \Big(V_0 - \frac{a^2}{4}\Big)^{\frac{1}{2}}+ i a\Big(n+\frac{1}{2}\Big).
\label{eq11}
\end{equation}
Now, using this formula we have calculated the QNMs of the GUP-corrected
black hole surrounded by a quintessential dark energy field, as shown in
Table \ref{table01}. In this calculation of QNMs, we have considered a small
positive value of the quintessence parameter, $e =0.05$, lying well within the
accepted range, and the mass of the black hole is taken as $M=1$. The
quantum deformation parameters $\alpha$ and $\beta$ are also assigned small
positive values within their well accepted ranges, as shown in the table.
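The steps above can be sketched numerically as follows (a minimal implementation of ours, not the authors' code; the extracted damping can depend mildly on the convention adopted for the curvature parameter $a$, so the digits need not coincide exactly with Table \ref{table01}):

```python
# Sketch (ours): Mashhoon / Poschl-Teller estimate of the fundamental QNM,
# following Eqs. (8), (9) and (11) for M = 1, alpha = beta = 0, e = 0.05, l = 1, n = 0.
M, alpha, beta, e = 1.0, 0.0, 0.0, 0.05
l, n = 1, 0

M_gup = M * (1 - 2 * alpha * (1 - e) / M + 4 * beta * (1 - e) ** 2 / M ** 2)

def f(r):                                   # metric function, Eq. (8)
    return 1 - 2 * M_gup / r - e

def V(r):                                   # effective potential, Eq. (9)
    fprime = 2 * M_gup / r ** 2             # analytic derivative of f
    return f(r) * (fprime / r + l * (l + 1) / r ** 2)

# locate the peak of V outside the horizon by a ternary search
lo, hi = 2 * M_gup / (1 - e) + 0.1, 10.0
for _ in range(200):
    m1, m2 = lo + (hi - lo) / 3, hi - (hi - lo) / 3
    if V(m1) < V(m2):
        lo = m1
    else:
        hi = m2
r0 = 0.5 * (lo + hi)
V0 = V(r0)

# curvature in the tortoise coordinate: d^2V/dx^2 = f^2 d^2V/dr^2 at the peak
h = 1e-4
Vpp = (V(r0 + h) - 2 * V(r0) + V(r0 - h)) / h ** 2
a = (-f(r0) ** 2 * Vpp / (2 * V0)) ** 0.5

omega_re = (V0 - a ** 2 / 4) ** 0.5         # Eq. (11), oscillation frequency
omega_im = a * (n + 0.5)                    # Eq. (11), damping
print(round(omega_re, 4), round(omega_im, 4))   # ~ 0.2753 0.0907
```

With this particular curvature convention the sketch gives $\approx 0.2753 + 0.0907i$, of the same order as the corresponding entry $0.269699 + 0.106170i$ of Table \ref{table01}.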
\begin{table}[h!]
\caption{QNMs of the GUP-corrected black hole surrounded by a
quintessence field for $n=0$, $n=1$, $n=2$ modes, for multipole number
$l=1$, quintessence parameter $e=0.05$ and various values of the deformation parameter $\alpha$ and $\beta$ obtained by using the Mashhoon method.}
\vspace{2mm}
\centering
\begin{tabular}{c@{\hskip 5pt}c@{\hskip 10pt}c@{\hskip 10pt}c@{\hskip 10pt}c@{\hskip 10pt}c@{\hskip 10pt}c@{\hskip 10pt}c@{\hskip 5pt}c}
\hline\hline
&$\alpha$ & $\beta$ & $e$ & QNMs for $n=0$ & QNMs for $n=1$ & QNMs for $n=2$ & QNMs for $n=3$ & \\
\hline\hline
&0.00 & 0.00 & 0.05 & 0.269699 + 0.106170i & 0.269699 + 0.318511i & 0.269699 + 0.530852i & 0.269699 + 0.743192i& \\
&0.00 & 0.01 & 0.05 & 0.257500 + 0.109324i & 0.257500 + 0.327971i & 0.257500 + 0.546619i & 0.257500 + 0.765267i& \\
&0.00 & 0.02 & 0.05 & 0.246116 + 0.111819i & 0.246116 + 0.335457i & 0.246116 + 0.559095i & 0.246116 + 0.782733i& \\
&0.00 & 0.03 & 0.05 & 0.235482 + 0.113762i & 0.235482 + 0.341287i & 0.235482 + 0.568812i & 0.235482 + 0.796337i& \\
\hline
&0.02 & 0.00 & 0.05 & 0.283504 + 0.101997i & 0.283504 + 0.305990i & 0.283504 + 0.509984i & 0.283504 + 0.713977i& \\
&0.02 & 0.01 & 0.05 & 0.270366 + 0.105984i & 0.270366 + 0.317951i & 0.270366 + 0.529919i & 0.270366 + 0.741886i& \\
&0.02 & 0.02 & 0.05 & 0.258121 + 0.109175i & 0.258121 + 0.327526i & 0.258121 + 0.545876i & 0.258121 + 0.764227i& \\
&0.02 & 0.03 & 0.05 & 0.246696 + 0.111702i & 0.246696 + 0.335107i & 0.246696 + 0.558512i & 0.246696 + 0.781916i& \\
\hline
&0.04 & 0.00 & 0.05 & 0.298387 + 0.096763i & 0.298387 + 0.290290i & 0.298387 + 0.483817i & 0.298387 + 0.677343i& \\
&0.04 & 0.01 & 0.05 & 0.284222 + 0.101762i & 0.284222 + 0.305286i & 0.284222 + 0.508810i & 0.284222 + 0.712334i& \\
&0.04 & 0.02 & 0.05 & 0.271034 + 0.105795i & 0.271034 + 0.317385i & 0.271034 + 0.528975i & 0.271034 + 0.740565i& \\
&0.04 & 0.03 & 0.05 & 0.258744 + 0.109025i & 0.258744 + 0.327075i & 0.258744 + 0.545124i & 0.258744 + 0.763174i& \\
\hline
&0.06 & 0.00 & 0.05 & 0.314447 + 0.090238i & 0.314447 + 0.270713i & 0.314447 + 0.451188i & 0.314447 + 0.631663i& \\
&0.06 & 0.01 & 0.05 & 0.299161 + 0.096470i & 0.299161 + 0.289410i & 0.299161 + 0.482350i & 0.299161 + 0.675289i& \\
&0.06 & 0.02 & 0.05 & 0.284942 + 0.101525i & 0.284942 + 0.304574i & 0.284942 + 0.507623i & 0.284260 + 0.712247i&\\
&0.06 & 0.03 & 0.05 & 0.271705 + 0.105604i & 0.271705 + 0.316812i & 0.271705 + 0.528020i & 0.271705 + 0.739228i&\\
\hline
\end{tabular}
\label{table01}
\end{table}
It is interesting to note that there is a striking dependence of the QNMs on
the deformation parameters. As seen from Table \ref{table01}, when $\alpha$
is kept constant and $\beta$ is increased, there is a noticeable decrease in
the magnitude of the real part of the QNMs. That is, the amplitude of the QNMs
decreases with increasing $\beta$. Whereas, for a particular value of
$\beta$, it is seen that the amplitude increases with an increase in $\alpha$.
On the other hand in the case of the imaginary part of the QNM representing the
damping of the wave, for a fixed value of $\alpha$, the damping increases with
increase in $\beta$. While, for a fixed $\beta$, the damping decreases with an
increase in $\alpha$. We have also calculated the QNMs of the black hole for
$l=2$, as shown in Table \ref{table02}. A similar pattern is observed in
this case also. One notable feature is that with an increase
in $l$, the corresponding amplitudes of the QNMs increase noticeably, while
the damping factor is not much affected, although it decreases with increasing
$l$. Another feature that is apparent from
all the calculated modes is that with increasing $n$ the amplitude
of the QNMs remains the same but the damping increases. This is in fact already
clear from equation \eqref{eq11}.
\begin{table}[h!]
\caption{QNMs of GUP-corrected black hole surrounded by a
quintessence field for $n=0$, $n=1$, $n=2$ and $n=3$ modes, for multipole
number $l=2$, quintessence parameter $e=0.05$ and various values of the
deformation parameter $\alpha$ and $\beta$ obtained by using the Mashhoon
method.}
\vspace{2mm}
\centering
\begin{tabular}{c@{\hskip 5pt}c@{\hskip 10pt}c@{\hskip 10pt}c@{\hskip 10pt}c@{\hskip 10pt}c@{\hskip 10pt}c@{\hskip 10pt}c@{\hskip 5pt}c}
\hline \hline
&$\alpha$ & $\beta$ & $e$ & QNMs for $n=0$ & QNMs for $n=1$ & QNMs for $n=2$ & QNMs for $n=3$& \\
\hline \hline
&0.00 & 0.00 & 0.05 & 0.447498 + 0.102677i & 0.447498 + 0.308031i & 0.447498 + 0.513384i & 0.447498 + 0.718738i & \\
&0.00 & 0.01 & 0.05 & 0.430431 + 0.105325i & 0.430431 + 0.315974i & 0.428658 + 0.527870i & 0.430431 + 0.737273i &\\
&0.00 & 0.02 & 0.05 & 0.414525 + 0.107389i & 0.414525 + 0.322167i & 0.414525 + 0.536946i & 0.414525 + 0.751724i &\\
&0.00 & 0.03 & 0.05 & 0.399674 + 0.108966i & 0.399674 + 0.326897i & 0.399674 + 0.544828i & 0.399674 + 0.762759i &\\
\hline
&0.02 & 0.00 & 0.05 & 0.466854 + 0.099131i & 0.466854 + 0.297394i & 0.466854 + 0.495657i & 0.466854 + 0.693919i &\\
&0.02 & 0.01 & 0.05 & 0.448431 + 0.102519i & 0.448431 + 0.307558i & 0.448431 + 0.512596i & 0.448431 + 0.717635i &\\
&0.02 & 0.02 & 0.05 & 0.431299 + 0.105201i & 0.431299 + 0.315602i & 0.431299 + 0.526004i & 0.431299 + 0.736406i &\\
&0.02 & 0.03 & 0.05 & 0.415335 + 0.107293i & 0.415335 + 0.321880i & 0.415335 + 0.536467i & 0.415335 + 0.751054i &\\
\hline
&0.04 & 0.00 & 0.05 & 0.487794 + 0.094643i & 0.487794 + 0.283930i & 0.487794 + 0.473216i & 0.487794 + 0.662503i &\\
&0.04 & 0.01 & 0.05 & 0.467862 + 0.098931i & 0.467862 + 0.296793i & 0.467862 + 0.494654i & 0.467862 + 0.692516i &\\
&0.04 & 0.02 & 0.05 & 0.449367 + 0.102360i & 0.449367 + 0.307079i & 0.449367 + 0.511798i & 0.449367 + 0.716518i &\\
&0.04 & 0.03 & 0.05 & 0.432170 + 0.105075i & 0.432170 + 0.315226i & 0.432170 + 0.525376i & 0.432170 + 0.735527i &\\
\hline
&0.06 & 0.00 & 0.05 & 0.510501 + 0.089004i & 0.510501 + 0.267012i & 0.510501 + 0.445020i & 0.510501 + 0.623028i &\\
&0.06 & 0.01 & 0.05 & 0.488885 + 0.094391i & 0.488885 + 0.283172i & 0.488885 + 0.471953i & 0.488885 + 0.660734i &\\
&0.06 & 0.02 & 0.05 & 0.468874 + 0.098728i & 0.468874 + 0.296184i & 0.468874 + 0.493640i & 0.468874 + 0.691096i &\\
&0.06 & 0.03 & 0.05 & 0.450307 + 0.102198i & 0.450307 + 0.306594i & 0.450307 + 0.510991i & 0.450307 + 0.715387i &\\
\hline
\end{tabular}
\label{table02}
\end{table}
It will also be interesting to observe any dependence of the QNMs on the
quintessence parameter $e$, which was kept at a constant value in the
previous analysis. For this purpose, as shown in Table \ref{table03}, we
have computed the QNMs for different values of the parameter $e$ with
fixed values of the deformation parameters $\alpha$ and $\beta$. It is seen
from the table that for a fixed value of $l$, with increasing $e$, the
amplitude decreases while the damping increases.
\begin{table}[h!]
\caption{QNMs of the GUP-corrected black hole surrounded by a
quintessence field for $n=0$, $n=1$, $n=2$ and $n=3$ modes, for multipole
number $l=1$ and $l=2$, the deformation parameter $\alpha=0.02$ and
$\beta=0.02$ and various values of the quintessence parameter $e$ obtained by
using the Mashhoon method.}
\vspace{2mm}
\centering
\begin{tabular}{c@{\hskip 5pt}c@{\hskip 10pt}c@{\hskip 10pt}c@{\hskip 10pt}c@{\hskip 10pt}c@{\hskip 10pt}c@{\hskip 5pt}c}
\hline \hline
&$l$ & $e$ & QNMs for $n=0$ & QNMs for $n=1$ & QNMs for $n=2$ & QNMs for $n=3$
& \\
\hline \hline
&1 & 0.01 & 0.278884 + 0.106046i & 0.278884 + 0.318139i & 0.278884 + 0.530231i & 0.278884 + 0.742323i &\\
&1 & 0.03 & 0.268485 + 0.107679i & 0.268485 + 0.323037i & 0.268485 + 0.538395i & 0.268485 + 0.753753i &\\
&1 & 0.05 & 0.258121 + 0.109175i & 0.258121 + 0.327526i & 0.258121 + 0.545876i & 0.258121 + 0.764227i &\\
&1 & 0.07 & 0.247793 + 0.110532i & 0.247793 + 0.331597i & 0.247793 + 0.552662i & 0.247793 + 0.773727i &\\
\hline
&2 & 0.01 & 0.459854 + 0.102496i & 0.459854 + 0.307487i & 0.459854 + 0.512479i & 0.459854 + 0.717470i &\\
&2 & 0.03 & 0.445542 + 0.103907i & 0.445542 + 0.311721i & 0.445542 + 0.519535i & 0.445542 + 0.727349i &\\
&2 & 0.05 & 0.431299 + 0.105201i & 0.431299 + 0.315602i & 0.431299 + 0.526004i & 0.431299 + 0.736406i &\\
&2 & 0.07 & 0.417129 + 0.106375i & 0.417129 + 0.319124i & 0.417129 + 0.531873i & 0.417129 + 0.744623i & \\
\hline
\end{tabular}
\label{table03}
\end{table}
Figs.~\ref{fig2} -- \ref{fig4} give the visual representation of all these
behaviours of QNMs of the black hole as discussed and presented in Tables
\ref{table01} -- \ref{table03}. Fig.~\ref{fig2} shows the variation of the
amplitude and damping part of the QNMs with respect to the GUP parameter
$\alpha$ for a fixed value of the parameter $\beta = 0.05$ and for three
different $l$ values. It is seen that amplitude of QNMs increases slowly,
whereas the damping decreases rapidly with the increasing values of
$\alpha$. In Fig.~\ref{fig3} the variations of amplitude and damping of QNMs
with respect to $\beta$ for a fixed value of $\alpha = 0.05$ and for three
different $l$ values are shown. This figure shows that the effect of $\beta$
is totally opposite to that of $\alpha$ on the QNMs, although the trend of the
effect of these two parameters on the QNMs is otherwise similar. The first two plots
of Fig.~\ref{fig4} show the behaviours of the amplitude and damping, respectively,
of the QNMs of the black hole with respect to the quintessence field parameter $e$ for
constant $\alpha = \beta = 0.05$ and for two values of $l$. Similar to
the case of the parameter $\beta$, in this case also the amplitude decreases
slowly, but the damping increases at a relatively faster rate. The third plot
of this figure shows that for a fixed $l$ the damping remains almost
constant with increasing quintessence parameter $e$, and higher values of
$n$ give a much higher damping.
\begin{figure}[h!]
\includegraphics[scale=0.31]{amplitude_vs_a.pdf}\hspace{0.5cm}
\includegraphics[scale=0.31]{damping_vs_a.pdf}
\caption{Behaviours of QNMs with respect to the GUP parameter $\alpha$ for
three different values of $l$ with $n=0$ and $\beta=0.05$. The amplitude
of QNMs increases with an increase in $\alpha$, while the damping decreases
with increasing $\alpha$ (Mashhoon Method).}
\label{fig2}
\end{figure}
\begin{figure}[h!]
\includegraphics[scale=0.32]{amplitude_vs_b.pdf}\hspace{0.5cm}
\includegraphics[scale=0.32]{damping_vs_b.pdf}
\caption{Behaviours of QNMs with respect to the GUP parameter $\beta$ for
three different values of $l$ with $n=0$ and $\alpha=0.05$. The
amplitude decreases with an increase in $\beta$, while the damping increases
with increasing $\beta$ (Mashhoon Method).}
\label{fig3}
\end{figure}
\begin{figure}[h!]
\includegraphics[scale=0.28]{amplitude_c_variation.pdf}\hspace{0.3cm}
\includegraphics[scale=0.28]{damping_c_vary_l.pdf}\hspace{0.3cm}
\includegraphics[scale=0.28]{damping_vs_c_vary_n_l=1.pdf}
\caption{Behaviours of QNMs with respect to the quintessence parameter $e$
for two different values of $l$ (first two plots with $n=0$) and two different
values $n$ (right plot with $l=1$) with $\alpha=\beta=0.05$. The
amplitude decreases with an increase in $e$, while the damping increases with
increasing $e$ (Mashhoon Method).}
\label{fig4}
\end{figure}
\subsection{QNMs by WKB method}
The QNMs can also be reliably calculated using the WKB method, a
semi-analytical approximation technique. The basics of the WKB
method can be found extensively in the literature (\cite{konoplya_1,konoplya_2,djgogoi_2} and references therein). Here our basic intention is to make a
comparative analysis of the QNM frequencies obtained by the Mashhoon method
with those given by the WKB method.
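As a quick illustration of the lowest-order WKB prescription (the Schutz--Will formula), the sketch below estimates the fundamental scalar QNM of an ordinary Schwarzschild black hole with $M=1$, i.e.\ with the GUP and quintessence parameters switched off; the choice of potential and the numerical tolerances are illustrative assumptions, not the 6th-order scheme used for the tables below.

```python
import cmath

M = 1.0          # black hole mass
l, n = 2, 0      # multipole and overtone numbers

f = lambda r: 1.0 - 2.0 * M / r                               # Schwarzschild lapse
V = lambda r: f(r) * (l * (l + 1) / r**2 + 2.0 * M / r**3)    # scalar potential

# Locate the peak of the potential by a simple grid search.
r0 = max((2.1 + 1e-4 * i for i in range(40000)), key=V)

# Second derivative with respect to the tortoise coordinate x (dx = dr/f):
# d^2V/dx^2 = f * (f' V' + f V''); at the peak V'(r0) is nearly zero.
h = 1e-4
Vp  = (V(r0 + h) - V(r0 - h)) / (2 * h)
Vpp = (V(r0 + h) - 2 * V(r0) + V(r0 - h)) / h**2
fp  = (f(r0 + h) - f(r0 - h)) / (2 * h)
Vxx = f(r0) * (fp * Vp + f(r0) * Vpp)

# Lowest-order WKB formula: omega^2 = V0 - i (n + 1/2) sqrt(-2 V_xx)
omega = cmath.sqrt(V(r0) - 1j * (n + 0.5) * cmath.sqrt(-2 * Vxx))
print(f"omega ~ {omega.real:.4f} {omega.imag:+.4f}i")
```

With the $e^{-i\omega t}$ time convention used here the damping rate is $|\mathrm{Im}\,\omega|$ (the tables above quote the imaginary part with the opposite sign convention); higher-order WKB corrections shift these lowest-order estimates by a few percent.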
\begin{figure}[h!]
\includegraphics[scale=0.28]{alpha_vary.pdf}\hspace{0.3cm}
\includegraphics[scale=0.28]{beta_vary.pdf}\hspace{0.3cm}
\includegraphics[scale=0.28]{c_vary.pdf}
\caption{Variation of amplitude of QNMs with various parameters
of the model obtained by using the WKB approximation. The left plot is for
$\beta=0.01$ and $e=0.05$, the middle plot is for $\alpha=0.01$ and $e=0.05$,
and the right plot is for $\alpha=\beta=0.01$.}
\label{fig5}
\end{figure}
\begin{figure}[h!]
\includegraphics[scale=0.28]{alpha_vary_d.pdf}\hspace{0.3cm}
\includegraphics[scale=0.28]{beta_vary_d.pdf}\hspace{0.3cm}
\includegraphics[scale=0.28]{c_vary_d.pdf}
\caption{Variation of damping of QNMs with various parameters of the model
obtained by using the WKB approximation. The left plot is for $\beta=0.01$
and $e=0.05$, the middle plot is for $\alpha=0.01$ and $e=0.05$, and the
right plot is for $\alpha=\beta=0.01$.}
\label{fig5ad}
\end{figure}
The perturbation equation of a black hole probed by a minimally coupled
scalar field can be conveniently transformed into a Schr\"odinger-like
wave equation given by
\begin{equation}
\frac{d^2 \psi}{dx^2}+\big(\omega^2-V_l(x)\big)\psi=0.
\label{eq12}
\end{equation}
Here, the tortoise coordinate $x$ is defined by $dx=dr/f(r)$, $\omega$ gives
the QNM frequencies of the black hole, and the effective potential $V_l(x)$ is
the same as the potential \eqref{eq9}, expressed in terms of the tortoise
coordinate $x$ instead of $r$.
For consistency, equation \eqref{eq12} should satisfy some boundary conditions
at the horizon and at infinity. Asymptotically flat spacetimes lead to the
following quasinormal criteria:
\begin{align}
\psi(x) \rightarrow \Bigg \{ \begin{array}{lr}
P e^{+i\omega x} & \text{if }x \rightarrow -\infty \\
Q e^{-i\omega x} & \;\text{if }x \rightarrow +\infty, \end{array}
\end{align}
where $P$ and $Q$ denote constants of integration. Using these conditions we
have calculated the QNM frequencies for our considered black hole.
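To make the tortoise coordinate appearing in equation \eqref{eq12} concrete, the short sketch below integrates $dx = dr/f(r)$ numerically for the plain Schwarzschild lapse $f(r)=1-2M/r$ (a simplifying assumption; the metric function of the present paper additionally carries GUP and quintessence corrections) and checks the result against the closed form $x = r + 2M\ln(r/2M - 1)$.

```python
import math

M = 1.0
f = lambda r: 1.0 - 2.0 * M / r   # Schwarzschild lapse (illustrative choice)

def tortoise(r_a, r_b, steps=200000):
    """Numerically integrate dx = dr / f(r) from r_a to r_b (midpoint rule)."""
    dr = (r_b - r_a) / steps
    return sum(dr / f(r_a + (i + 0.5) * dr) for i in range(steps))

# Closed-form Schwarzschild tortoise coordinate: x = r + 2M ln(r/2M - 1)
exact = lambda r: r + 2.0 * M * math.log(r / (2.0 * M) - 1.0)

x_num = tortoise(3.0, 10.0)
print(x_num, exact(10.0) - exact(3.0))
```

As $r \to 2M$ the integrand diverges, so $x \to -\infty$ at the horizon and $x \to +\infty$ at spatial infinity, which is what makes the boundary conditions above well posed.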
The results, i.e.\ the amplitude and damping of the QNMs, or the real and
imaginary parts of the QNM frequencies, have been plotted against the GUP and
quintessence parameters in Figs.~\ref{fig5} and \ref{fig5ad}. Fig.~\ref{fig5}
shows the trends of variation of the real parts of the QNMs with respect to
variations in $\alpha$, $\beta$ and $e$. These trends are the same as those of
the amplitude obtained by the Mashhoon method, but in this case the value of
the amplitude is slightly greater than that obtained by the Mashhoon method,
whereas the damping of the QNMs shows the opposite behaviour to that of the
Mashhoon method. However, it is to be noted that in the WKB method, the
variation of the damping term is insignificant with respect to variations in
all the model parameters.
Tables \ref{table04} and \ref{table05} show a clear comparison between the
Mashhoon method and the WKB method. Table \ref{table04} is for the $n=0$ case
and Table \ref{table05} for the $n=1$ case. For $n=0$ we found that
the real parts of the QNMs calculated by the two methods agree to a good
extent, while the imaginary parts vary to some extent. There is a
far better agreement between the two methods for higher $l$ values. From Table
\ref{table04} one can observe the following additional points. For given
values of $l$, $\beta$ and $e$, increasing $\alpha$ leads to an
increasing mismatch between the two methods. On the other hand, for given
values of $l$, $\alpha$ and $e$, increasing $\beta$ leads to a
better match between the methods. Moreover, for given values of $l$,
$\alpha$ and $\beta$, increasing $e$ leads to a decrease in the mismatch
between the methods. However, from Table \ref{table05} it is seen that some
of the above observations do not apply to the $n=1$ case, and the
difference between the Mashhoon method and the WKB method in this case is
almost ten times larger than for $n=0$. This may be due to the
fact that the usual WKB method can only be reliably used to calculate QNMs
when $n < l$ \cite{konoplya_1,djgogoi_2}.
\begin{table}[h!]
\caption{Comparison of QNMs of GUP-corrected black hole
surrounded by a quintessence field obtained by using the Mashhoon method
and the 6th order WKB method for $n=0$, $l=1,2,3,4$, and for various values
of the parameters $\alpha$, $\beta$ and $e$.
Here $\Delta |\omega_M-\omega_{WKB}|$ represents the absolute difference
between the QNM frequencies calculated by using the Mashhoon method and the WKB
method.}
\vspace{2mm}
\centering
\begin{tabular}{c@{\hskip 5pt}c@{\hskip 10pt}c@{\hskip 10pt}c@{\hskip 10pt}c@{\hskip 10pt}c@{\hskip 10pt}c@{\hskip 10pt}c@{\hskip 5pt}c}
\hline \hline
&Multipole & $\alpha$ & $\beta$ & $e$ & Mashhoon Method & WKB method & $\Delta |\omega_M-\omega_{WKB}|$ & \\
\hline
&\multirow{4}{4em}{$l=1$} & $0.01$ & $0.01$ & $0.03$ & 0.274715 + 0.105993i & 0.274250 + 0.090298i & $5.7204 \times 10^{-3}$ &\\
&& $0.01$ & $0.01$ & $0.05$ & 0.263814 + 0.107754i & 0.265742 + 0.086685i & $5.4485 \times 10^{-3}$ &\\
&& $0.01$ & $0.03$ & $0.05$ & 0.240990 + 0.112803i & 0.248128 + 0.080939i & $5.0885 \times 10^{-3}$ &\\
&& $0.05$ & $0.01$ & $0.05$ & 0.291550 + 0.099263i & 0.287202 + 0.093685i & $5.8888 \times 10^{-3}$ &\\
\hline
&\multirow{4}{4em}{$l=2$} & $0.01$ & $0.01$ & $0.03$ & 0.454274 + 0.102489i & 0.453426 + 0.089402i & $3.5361 \times 10^{-3}$ & \\
&& $0.01$ & $0.01$ & $0.05$ & 0.439261 + 0.104011i & 0.439746 + 0.085840i & $3.3614 \times 10^{-3}$ &\\
&& $0.01$ & $0.03$ & $0.05$ & 0.407365 + 0.108192i & 0.410599 + 0.080150i & $3.1389 \times 10^{-3}$ &\\
&& $0.05$ & $0.01$ & $0.05$ & 0.478163 + 0.096792i & 0.475258 + 0.092772i & $3.6331 \times 10^{-3}$ & \\
\hline
&\multirow{4}{4em}{$l=3$} & $0.01$ & $0.01$ & $0.03$ & 0.634124 + 0.101454i & 0.633405 + 0.089164i & $2.5386 \times 10^{-3}$ &\\
&& $0.01$ & $0.01$ & $0.05$ & 0.614233 + 0.102908i & 0.614444 + 0.085616i & $2.4137 \times 10^{-3}$ &\\
&& $0.01$ & $0.03$ & $0.05$ & 0.571617 + 0.106832i & 0.573718 + 0.079941i & $2.2538 \times 10^{-3}$ &\\
&& $0.05$ & $0.01$ & $0.05$ & 0.666197 + 0.096067i & 0.664065 + 0.092530i & $2.6074 \times 10^{-3}$ & \\
\hline
&\multirow{4}{4em}{$l=4$} & $0.01$ & $0.01$ & $0.03$ & 0.814241 + 0.101021i & 0.813646 + 0.089066i & $1.9775 \times 10^{-3}$ & \\
&& $0.01$ & $0.01$ & $0.05$ & 0.789248 + 0.102448i & 0.789370 + 0.085524i & $1.8798 \times 10^{-3}$ &\\
&& $0.01$ & $0.03$ & $0.05$ & 0.735481 + 0.106263i & 0.737049 + 0.079855i & $1.7553 \times 10^{-3}$ &\\
&& $0.05$ & $0.01$ & $0.05$ & 0.854793 + 0.095764i & 0.853116 + 0.092430i & $2.0321 \times 10^{-3}$ &\\
\hline
\end{tabular}
\label{table04}
\end{table}
\begin{table}[h!]
\caption{Comparison of QNMs of GUP-corrected black hole
surrounded by a quintessence field obtained by using the Mashhoon method
and the 6th order WKB method for $n=1$, $l=1,2,3,4$, and for various values
of the parameters $\alpha$, $\beta$ and $e$.
Here $\Delta |\omega_M-\omega_{WKB}|$ represents the absolute difference
between the QNM frequencies calculated by using the Mashhoon method and the
WKB method.}
\vspace{2mm}
\centering
\begin{tabular}{c@{\hskip 5pt}c@{\hskip 10pt}c@{\hskip 10pt}c@{\hskip 10pt}c@{\hskip 10pt}c@{\hskip 10pt}c@{\hskip 10pt}c@{\hskip 5pt}c}
\hline \hline
&Multipole & $\alpha$ & $\beta$ & $e$ & Mashhoon Method & WKB method & $\Delta |\omega_M-\omega_{WKB}|$ & \\
\hline
& \multirow{4}{4em}{$l=1$} & $0.01$ & $0.01$ & $0.03$ & 0.274715 + 0.317980i & 0.248160 + 0.282859i & $4.3926 \times 10^{-2}$ & \\
&& $0.01$ & $0.01$ & $0.05$ & 0.263814 + 0.323261i & 0.240814 + 0.271373i & $5.4432 \times 10^{-2}$ & \\
&& $0.01$ & $0.03$ & $0.05$ & 0.240990 + 0.338408i & 0.224852 + 0.253387i & $7.6679 \times 10^{-2}$ & \\
&& $0.05$ & $0.01$ & $0.05$ & 0.291550 + 0.297790i & 0.260262 + 0.293288i & $2.4635 \times 10^{-2}$ & \\
\hline
& \multirow{4}{4em}{$l=2$} & $0.01$ & $0.01$ & $0.03$ & 0.454274 + 0.307466i & 0.435359 + 0.272995i & $3.4672 \times 10^{-2}$ & \\
&& $0.01$ & $0.01$ & $0.05$ & 0.439261 + 0.312032i & 0.422543 + 0.262032i & $4.1613 \times 10^{-2}$ & \\
&& $0.01$ & $0.03$ & $0.05$ & 0.407365 + 0.324575i & 0.394536 + 0.244664i & $5.6619 \times 10^{-2}$ & \\
&& $0.05$ & $0.01$ & $0.05$ & 0.478163 + 0.290375i & 0.456666 + 0.283193i & $2.2079 \times 10^{-2}$ & \\
\hline
&\multirow{4}{4em}{$l=3$} & $0.01$ & $0.01$ & $0.03$ & 0.634124 + 0.304361i & 0.620011 + 0.269992i & $2.7137 \times 10^{-2}$ & \\
&& $0.01$ & $0.01$ & $0.05$ & 0.614233 + 0.308725i & 0.601704 + 0.259200i & $3.2296 \times 10^{-2}$ & \\
&& $0.01$ & $0.03$ & $0.05$ & 0.571617 + 0.320496i & 0.561822 + 0.242020i & $4.3601 \times 10^{-2}$ & \\
&& $0.05$ & $0.01$ & $0.05$ & 0.666197 + 0.288200i & 0.650296 + 0.280132i & $1.7796 \times 10^{-2}$ & \\
\hline
&\multirow{4}{4em}{$l=4$} & $0.01$ & $0.01$ & $0.03$ & 0.814241 + 0.303063i & 0.803072 + 0.268725i & $2.1973 \times 10^{-2}$ & \\
&& $0.01$ & $0.01$ & $0.05$ & 0.789248 + 0.307343i & 0.779316 + 0.258007i & $2.6063 \times 10^{-2}$ & \\
&& $0.01$ & $0.03$ & $0.05$ & 0.735481 + 0.318789i & 0.727662 + 0.240906i & $3.5094 \times 10^{-2}$ & \\
&& $0.05$ & $0.01$ & $0.05$ & 0.854793 + 0.287293i & 0.842251 + 0.278843i & $1.4571 \times 10^{-2}$ & \\
\hline
\end{tabular}
\label{table05}
\end{table}
\section{Thermodynamic Properties of the Black hole}
The notion of a minimal length scale does not exist in the usual
Heisenberg algebra, but it becomes necessary at Planck energy scales,
where the effects of gravity must be taken into account. The
introduction of the GUP is thus naturally motivated and leads to
interesting results. The temperature of the Schwarzschild black hole
is usually expressed in the form \cite{liang}:
\begin{equation}
T=\frac{\kappa}{8\pi}\frac{dA}{dS},
\label{eq13}
\end{equation}
where $\kappa$ is the surface gravity of the black hole, $A$ is the
surface area and $S$ is the entropy of the black hole. Calculations yield the
expression for the surface gravity at the horizon in our case as
\begin{equation}
\kappa=-\lim_{r\to r_H}\sqrt{-\frac{g^{11}}{g^{00}}}\,\frac{(g^{00})^{'}}{g^{00}}=\frac{1}{r_H}\Big(1+\frac{3e \omega}{r_H^{3\omega+1}}\Big),
\label{eq14}
\end{equation}
where the GUP-corrected horizon radius $r_{hGUP}$ of the black hole is denoted
as $r_H$. Liang \cite{liang} showed that the area of a black hole
increases proportionately when it absorbs a particle of a particular mass and
size. A minimal change in area implies a minimal change in entropy,
which, according to information theory, has the smallest possible value of
$\ln2$. So, we can express the ratio $\frac{dA}{dS}$ as
\begin{equation}
\frac{dA}{dS}=\frac{(\Delta A)_{min}}{(\Delta S)_{min}}=\frac{\epsilon \Delta x \Delta p}{\ln2}= \frac{\epsilon (4r_H +\alpha)r_H}{\beta \ln2}\left[1-\sqrt{1-\frac{4\beta}{(4r_H+\alpha)^2}}\right].
\label{eq15}
\end{equation}
Here, the uncertainties in momentum and position are connected with the mass
and size, respectively, of the particle falling into the black hole, and
$\epsilon$ is a calibration factor \cite{bcl}. Substituting equations
\eqref{eq14} and \eqref{eq15} into \eqref{eq13}, we obtain the following
expression for the GUP-corrected temperature,
\begin{equation}
T_{GUP}=\frac{1}{8\pi}\left(1+\frac{3e \omega}{r_H^{3\omega+1}}\right)\frac{\epsilon (4r_H +\alpha)}{\beta \ln2}\left[1-\sqrt{1-\frac{4\beta}{(4r_H+\alpha)^2}}\right].
\label{eq16}
\end{equation}
In the absence of the quintessence and deformation parameters, the above
expression reduces to a simpler form, which must equal the Hawking
temperature $\frac{1}{4\pi r_H}$. Thus, the factor $\epsilon$ is determined
to be $4 \ln2$, and we obtain the final form of the GUP-modified black hole
temperature as
\begin{equation}
T_{GUP}=\frac{(4r_H +\alpha)}{2\pi\beta }\Big(1+\frac{3e \omega}{r_H^{3\omega+1}}\Big)\left[1-\sqrt{1-\frac{4\beta}{(4r_H+\alpha)^2}}\right].
\label{eq17}
\end{equation}
For $\omega=-\frac{1}{3}$, the expression for the GUP-corrected temperature
can be simplified as
\begin{equation}
T_{GUP}= \frac{(4r_H +\alpha)}{2\pi\beta}(1-e)\left[1-\sqrt{1-\frac{4\beta}{(4r_H+\alpha)^2}}\right].
\label{eq18}
\end{equation}
It is interesting to note that the introduction of GUP corrections leads to
a dependence of the temperature on the deformation parameters $\alpha$
and $\beta$, apart from the quintessence parameter $e$. In the absence of the
deformation parameters, the HUP-corrected temperature of the Schwarzschild
black hole surrounded by quintessence is given by
\begin{equation}
T_{HUP}=\frac{1}{4\pi r_H}\Big(1+\frac{3e \omega}{r_H^{3\omega+1}}\Big).
\label{eq19}
\end{equation}
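As a numerical cross-check of equations \eqref{eq18} and \eqref{eq19}, the sketch below (with illustrative parameter values) confirms that for $\omega=-1/3$ the GUP-corrected temperature reduces to the HUP expression when the deformation parameters are switched off ($\alpha\to 0$, $\beta\to 0$).

```python
import math

def T_GUP(r, alpha, beta, e):
    """GUP-corrected temperature, eq. (18) (the w = -1/3 case)."""
    return ((4 * r + alpha) / (2 * math.pi * beta) * (1 - e)
            * (1 - math.sqrt(1 - 4 * beta / (4 * r + alpha) ** 2)))

def T_HUP(r, e):
    """HUP temperature, eq. (19) with w = -1/3, where 3*e*w*r**0 = -e."""
    return (1 - e) / (4 * math.pi * r)

r, e = 5.0, 0.05
print(T_GUP(r, alpha=0.0, beta=1e-8, e=e), T_HUP(r, e))
```

Expanding the square root for small $\beta$ gives $T_{GUP} \approx (1-e)/\pi(4r_H+\alpha)$, which reproduces $T_{HUP}$ exactly when $\alpha=0$.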
Considering real-valued temperatures, it can be seen from equation
\eqref{eq18} that for $\omega=-\frac{1}{3}$ there exist bounds
on the horizon radius of the black hole depending on the values of $\alpha$
and $\beta$ \cite{bcl}, since the term inside the square root cannot be
negative. This is illustrated by the temperature versus horizon radius plots
shown in Fig.~\ref{fig6}, which clearly exhibit the dependence mentioned.
\begin{figure}
\includegraphics[scale=0.28]{t_vs_r_varying_c.pdf}\hspace{0.3cm}
\includegraphics[scale=0.28]{t_vs_r_varying_a.pdf}\hspace{0.3cm}
\includegraphics[scale=0.28]{t_vs_r_varying_b.pdf}
\caption{Variation of the GUP-corrected black hole temperature with
quintessence parameter $e$ for $\alpha=\beta=0.05$ (left plot), with $\alpha$
considering $\beta= e = 0.05$ (middle plot) and with $\beta$ keeping
$\alpha = e = 0.05$ (right plot). In all three plots the solid red line
represents the Schwarzschild black hole temperature.}
\label{fig6}
\end{figure}
From this figure it is clear that the introduction of quantum corrections due
to the generalized uncertainty principle and the surrounding quintessence
field has some influence on the black hole temperature. In some cases the
influence is significant for black holes with small event horizon radii,
whereas for black holes with large event horizon radii the influence is
insignificant. In all the cases, the temperature decreases with increasing
horizon radius. Thus, black holes with very large event horizon radii might
have sub-zero temperatures associated with them. The temperature profiles
obtained here are in good agreement with the results presented in
\cite{bcl,adler}.
Another interesting aspect of black hole physics is the study of remnant
formation; a remnant is considered to be a stable state of the black hole
that does not emit any heat and whose mass has been reduced by evaporation.
In this context the dependence of the heat capacity of the black hole on the
GUP parameters becomes an important feature that we want to analyze. For this
purpose, we make use of the thermodynamic relation connecting the heat
capacity $C$ of the black hole, its mass $M$ and temperature $T$ as given
below:
\begin{equation*}
C=\frac{dM}{dT}.
\end{equation*}
From this relation, we derive the expression for the GUP-corrected heat
capacity of the black hole as
\begin{equation}
C_{GUP}=-\frac{\pi \beta \left(4 \alpha (1-e)+r_H \left(\sqrt{\frac{(e-1)^2 \left((4 \alpha +r_H)^2-64 \beta \right)}{r_H^2}}-e+1\right)\right)\left((\alpha +4\, r_H)^2-4 \beta \right)}{8\, r_H \sqrt{\frac{(e-1)^2 \left((4 \alpha +r_H)^2-64 \beta \right)}{r_H^2}}\left((g-1) (\alpha +4\, r_H)^2+4 \beta \right)},
\label{eq20}
\end{equation}
where we have introduced a term $g$ defined as $g=\sqrt{1-\frac{4\beta}{(4r_H+\alpha)^2}}$ and considered $\omega=-\frac{1}{3}$. Fig.~\ref{fig7} shows the
variation of the heat capacity function \eqref{eq20} with the horizon radius
for different values of $e$, $\alpha$ and $\beta$, considering
$\omega=-\frac{1}{3}$ and $M=1$. It is seen that the heat capacity is
independent of the quintessence field, but depends heavily on the GUP
parameters $\alpha$ and $\beta$. This dependence is more pronounced for the
parameter $\beta$, especially for black holes with small horizon radii. In
almost all the cases the heat capacity differs significantly from that of the
Schwarzschild black hole. The heat capacity is negative throughout, and its
magnitude is very large for black holes with small horizon radii as well as
for those with large radii. Thus GUP-corrected black holes should lose more
energy in the form of radiation than the Schwarzschild black hole,
particularly those with small and large horizon radii.
\begin{figure}
\includegraphics[scale=0.28]{C_vs_r_varying_c.pdf}\hspace{0.3cm}
\includegraphics[scale=0.28]{C_vs_r_varying_a.pdf}\hspace{0.3cm}
\includegraphics[scale=0.28]{C_vs_r_varying_b.pdf}
\caption{Variation of the GUP-corrected heat capacity of the black hole
$C_{BH}$ in terms of horizon radius $r_H$ for varying $e$ values with
$\alpha=0.01$ and $\beta = 0.09$ (left plot), for varying $\alpha$ values with
$\beta = e = 0.07$ (middle plot) and for varying values of $\beta$ with
$\alpha=0.07$ and $e=0.05$ (right plot). In all three plots the solid red line
represents the heat capacity of Schwarzschild black hole.}
\label{fig7}
\end{figure}
As already stated, when a black hole no longer exchanges heat with its
surroundings, we call this stable state the remnant of the black hole.
In this case, the heat capacity becomes zero. From the above expression,
it can be shown that the horizon radius of the remnant comes out as
\begin{equation}
r_{rem}=\frac{1}{4} \left(2 \sqrt{\beta }-\alpha \right).
\label{eq21}
\end{equation}
Thus, the remnant horizon radius depends on the deformation parameters $\alpha$
and $\beta$, and is independent of the behaviour of the surrounding field. It
is interesting to note that in Ref.~\cite{bcl}, this dependency was
established with one parameter only. The expression for the remnant
temperature is calculated as
\begin{equation}
T_{rem}= \frac{3 e \omega \left(\frac{\sqrt{\beta }}{2}-\frac{\alpha }{4}\right)^{-(3 \omega +1)}+1}{\pi \sqrt{\beta }}.
\label{eq22}
\end{equation}
For instance, the remnant temperature for the particular combination
$\alpha=0.05$, $\beta=0.05$, $e=0.05$ and $\omega=-\frac{1}{3}$ comes out to
be $1.352$, which is above the upper limit $T_{GUP} \le 1.210$ for this
case as obtained from equation \eqref{eq17}. This upper limit of $T_{GUP}$
is calculated from the condition that
$$\sqrt{1-\frac{4\beta}{(4r_H+\alpha)^2}}\;\;\ge 0,$$ which gives the minimum
allowed horizon radius for this case as $\sim 0.1$. This implies that the
GUP-corrected Schwarzschild black holes in our study cannot reach the
remnant stage, which is also clear from the heat capacity analysis above.
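The remnant numbers quoted above follow directly from equations \eqref{eq21} and \eqref{eq22}; the sketch below evaluates both for $\alpha=\beta=e=0.05$ and $\omega=-1/3$ (for $\omega=-1/3$ the exponent in \eqref{eq22} vanishes, so the quintessence term reduces to $-e$).

```python
import math

alpha, beta, e, w = 0.05, 0.05, 0.05, -1.0 / 3.0

# Remnant horizon radius, eq. (21): r_rem = (2 sqrt(beta) - alpha) / 4,
# positive only when 2 sqrt(beta) > alpha.
r_rem = (2.0 * math.sqrt(beta) - alpha) / 4.0

# Remnant temperature, eq. (22)
T_rem = (3 * e * w * (math.sqrt(beta) / 2 - alpha / 4) ** (-(3 * w + 1)) + 1) \
        / (math.pi * math.sqrt(beta))

print(f"r_rem ~ {r_rem:.4f}, T_rem ~ {T_rem:.3f}")
```

This reproduces the quoted value $T_{rem} \approx 1.352$, and since $r_{rem} \approx 0.099$ lies below the minimum allowed horizon radius $\sim 0.1$, the remnant stage is indeed unreachable here.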
The entropy function of the black hole can be estimated from the thermodynamic
relation,
\begin{equation}
S=\int\frac{dM}{T}
\label{eq23}
\end{equation}
which, with the help of equations \eqref{eq8} and \eqref{eq17}, can be
expressed for the GUP-corrected black hole as
\begin{equation}
S_{GUP}= \frac{\pi M^2 \left((a+1) (\alpha +4 r_H)^2-4 \beta \log ((a+1) (\alpha +4 r_H))\right)}{32 \left(M^2-4 \beta (e-1)^2\right)}.
\label{eq24}
\end{equation}
\begin{figure}[h!]
\includegraphics[scale=0.28]{entropy_e.pdf}\hspace{0.3cm}
\includegraphics[scale=0.28]{entropy_alpha.pdf}\hspace{0.3cm}
\includegraphics[scale=0.28]{entropy_beta.pdf}
\caption{Variation of entropy of GUP-corrected black holes with respect to
horizon radius for different values of $e$ with $\alpha = \beta = 0.05$
(left plot), for different values of $\alpha$ with $e=0.1$ and $\beta =0.05$
(middle plot), and for different values of $\beta$ with $e=0.1$ and
$\alpha =0.05$ (right plot). In all three plots the solid red line
represents the entropy of Schwarzschild black holes.}
\label{fig7ad}
\end{figure}
Fig.~\ref{fig7ad} shows the variation of the entropy function \eqref{eq24}
with respect to the horizon radius of the GUP-corrected black holes for
different values of the model parameters, together with the same for the
Schwarzschild black hole. It is observed that the parameters $e$ and
$\alpha$ have a negligible impact on the entropy of the black holes,
whereas the parameter $\beta$ has a significant impact on the entropy of
black holes with sufficiently large horizon radii \cite{bcl}. Moreover, the
entropy of the GUP-corrected black holes is found to be higher than that of
the Schwarzschild black hole in almost all cases for sufficiently large
horizon radii. This difference is substantial for black holes with larger
horizon radii, depending on the value of the model parameter $\beta$.
The black hole shadow is a dark area surrounding the black hole, caused by
gravitational lensing and the capture of light rays. Generally, for a static
and spherically symmetric case, the shadow is circular, but for rotating black
holes there is an elongation in the direction of rotation. In our case, the
black holes are spherically symmetric, and hence the condition for finding
the radius of the photon sphere is \cite{s1}
\begin{equation}
2-\frac{r f'(r)}{f(r)}=0.
\label{25}
\end{equation}
The solution of the above equation gives us the radius of the photon sphere
as
\begin{equation}
r_{ps}=\frac{12 \beta (1-e)}{M}+\frac{3 M}{1-e}- 6 \alpha.
\label{26}
\end{equation}
The shadow radius can be found from the following expression \cite{s1}:
\begin{equation}
R_{s}=\frac{r_{ps}}{\sqrt{f(r_{ps})}}.
\label{27}
\end{equation}
The final expression for the shadow radius of our black holes as a function
of $\alpha$, $\beta$, $e$ and $M$ can be found as
\begin{equation}
R_s=\frac{3 \sqrt{3} \Big(4 \beta (1-e)^2 - 2 \alpha (1-e) M+M^2\Big)}{(1-e)^{3/2} M}.
\label{28}
\end{equation}
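Equations \eqref{26} and \eqref{28} are easy to sanity-check numerically: the sketch below (with illustrative parameter values) verifies that switching off $\alpha$, $\beta$ and $e$ recovers the Schwarzschild photon sphere $r_{ps}=3M$ and shadow radius $R_s=3\sqrt{3}\,M$, and then evaluates both for small nonzero parameters.

```python
import math

def r_ps(alpha, beta, e, M=1.0):
    """Photon sphere radius, eq. (26)."""
    return 12 * beta * (1 - e) / M + 3 * M / (1 - e) - 6 * alpha

def R_s(alpha, beta, e, M=1.0):
    """Shadow radius, eq. (28)."""
    return (3 * math.sqrt(3) * (4 * beta * (1 - e) ** 2
            - 2 * alpha * (1 - e) * M + M ** 2)) / ((1 - e) ** 1.5 * M)

# Schwarzschild limit: r_ps = 3M and R_s = 3*sqrt(3)*M ~ 5.196 M
print(r_ps(0, 0, 0), R_s(0, 0, 0))
print(r_ps(0.05, 0.02, 0.05), R_s(0.05, 0.02, 0.05))
```

Differentiating \eqref{28} also shows $R_s$ decreasing in $\alpha$ and increasing in $\beta$ (for $e<1$), consistent with the linear trends seen in Fig.~\ref{fig8}.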
\begin{figure}[h!]
\includegraphics[scale=0.28]{shadow_vs_a.pdf}\hspace{0.3cm}
\includegraphics[scale=0.28]{shadow_vs_b.pdf}\hspace{0.3cm}
\includegraphics[scale=0.28]{shadow_vs_c.pdf}
\caption{Shadow radius of GUP-corrected black holes versus each model parameter, for different values of the non-varying parameters in each case.}
\label{fig8}
\end{figure}
We have plotted the shadow radius versus the deformation parameters
$\alpha$ and $\beta$, and the quintessence parameter $e$, in Fig.~\ref{fig8}.
All the plots show a linear relationship between the radius and the
parameters: the shadow radius decreases linearly with $\alpha$, whereas it
increases linearly with $\beta$ and $e$.
\begin{figure}[!h]
\includegraphics[scale=0.28]{Shadow_vs_a1.pdf}\hspace{0.3cm}
\includegraphics[scale=0.28]{Shadow_vs_b1.pdf}\hspace{0.3cm}
\includegraphics[scale=0.28]{Shadow_vs_c1.pdf}
\caption{Variation of shadow radius in the celestial coordinate plane \cite{anacleto,s1}
with parameters $\alpha$, $\beta$ and $e$. Here, $\beta=0.02$ and $e=0.05$
for the first plot, $\alpha=0.05$ and $e=0.05$ for the second plot and
$\alpha=0.02$ and $\beta=0.02$ for the third plot.}
\label{fig9}
\end{figure}
Moreover, Fig.~\ref{fig9} shows the shadow radius variations in the celestial
coordinates X and Y \cite{anacleto,s1}. This figure shows a similar trend of
the shadows of GUP-corrected black holes with respect to the parameters
$\alpha$, $\beta$ and $e$ as stated above.
\section{Conclusion} \label{conclusion}
The primary objective of this work is to study the effects of the
deformation parameters introduced by the GUP on the QNMs of the
Schwarzschild black holes, together with a brief review of the
thermodynamic properties of such GUP-corrected black holes surrounded by
a quintessence dark energy field. It has been observed that both the
deformation parameters and the quintessence parameter play an important
role in the behaviour of the QNMs of the black holes. We employed two methods
for obtaining the QNMs, namely the Mashhoon method and the WKB method. Further,
we derived a GUP-modified temperature expression of the black holes and showed
its dependence on the deformation parameters as well as on the quintessence
parameter. It is seen that there exists an upper bound on the temperature,
which is impacted by the deformation parameters. Further, the heat capacity
along with the entropy have been evaluated for the GUP-corrected black holes,
and the existence of black hole remnants has been studied. The remnant
radius and remnant temperature are certainly impacted by the deformation
parameters. We observed that the GUP-corrected black holes cannot reach the
state of a remnant. It is also seen that the quintessence field and the first
deformation parameter have no effective role in the entropy of the black
holes, which depends only on the second deformation parameter. The shadow
radius of the GUP-corrected black hole has been calculated and its dependence
on various parameters has been shown. It is quite remarkable that the
introduction of small quantum corrections to the black hole metric can
have a notable influence on various properties of black holes. This avenue
has been investigated in the literature for many years, but there is further
scope in this direction beyond the present study. It will be interesting
to analyze the impact of these deformations on black hole properties in
various Modified Theories of Gravity (MTGs).
It is to be noted that once perturbed, a black hole responds by radiating
GWs, which evolve in time. This evolution is divided into three
phases: an initial outburst of radiation, a longer period of damped
oscillations (QNMs) and, at very late times, a suppressed power-law
tail \cite{konoplya_1}. Since only the first phase of GWs has been detected
so far, it remains a challenge for physicists and engineers to develop and
improve the sensitivity of modern-day detectors so that the second phase of
GWs can be detected. Steps in this direction have already been undertaken in
the form of ambitious future projects such as LISA \cite{A8} and the Einstein
Telescope \cite{A9}, which are believed to have far better sensitivity than
the present detectors. The prospect of detection of QNMs by LISA has been
analysed in Ref.~\cite{gogoi4}. The detection of the QNMs of black holes can
have many useful implications: it can be used to constrain the GUP parameters
and as a testing ground for various MTGs as well. These upcoming advanced GW
detectors can shed more light on this field and provide data for validating
the various models available at present, which is one of the primary
objectives of this field of study.
\section{Acknowledgments}
The author thanks Melanie Beck, Andrew Reed, Chris Wallace, Grant Custer, Danielle Thorpe and other members of the Cloudera Fast Forward team for their valuable feedback.
\subsection{Related Work}
QA systems that integrate deep learning models remain an active area of research and practice. For example, AllenNLP Interpret \cite{Wallace2019AllenNLP} provides a demonstration interface and sample code for interpreting a set of AllenNLP models across multiple tasks. Similarly, \citet{chakravarti-etal-2019-cfo} provide a gRPC-based orchestration flow for QA. However, while these projects provide a graphical user interface (GUI), their installation process is complex and requires specialized code to adapt them to existing infrastructure, such as retriever instances. Several open source projects also offer a programmatic interface for inference (e.g., \href{https://huggingface.co/transformers/main_classes/pipelines.html}{Hugging Face Pipelines}), as well as joint retrieval paired with reading (e.g., \href{https://github.com/deepset-ai/haystack/}{Deepset Haystack}).
NeuralQA makes progress along these lines, by providing an extensible code base, a low-code declarative configuration interface, tools for query expansion and a visual interface for sensemaking of results. It supports a local research/development workflow (via the \href{https://pypi.org/project/neuralqa/}{pip package manager}) and scaled deployment via containerization (we provide a Dockerfile). We believe this ease of use can serve to remove barriers to experimentation for researchers, and accelerate the deployment of QA interfaces for experienced teams.
\section{Conclusion}
In this paper, we presented NeuralQA - a usable library for question answering on large datasets.
NeuralQA is useful for developers interested in qualitatively exploring QA models for their custom datasets, as well as for enterprise teams seeking a flexible QA interface/API for their customers.
NeuralQA is under active development, and roadmap features include support for a \href{https://lucene.apache.org/solr/}{Solr} retriever, additional model explanation methods and additional query expansion methods such as RM3 \cite{lavrenko2017relevance}. Future work will also explore empirical evaluation of our CQE and \(RelSnip\) implementation to better understand their strengths and limitations.
\section{Experiments}
\input{experimentgraph}
We conducted several experiments and ablation studies that inform the design choices implemented in NeuralQA. Specifically, we explore the impact of passage fragmentation strategies, query expansion methods and retrieved passage concatenation (\(RelSnip\)) on QA performance. We define QA performance loosely as the \colorbox{yellow}{Andrew ..} ...
Our experiments are conducted using a subset of the Google Natural Questions dataset \colorbox{yellow}{ref}.
\input{experimenttable}
\subsection{Query Expansion}
\label{sec:queryexpansion}
We explore the following strategies for QE prior to document reading:
i) BM25 baseline; ii) relevance model \cite{lin2019neural}; iii) fine-tuned Word2Vec embeddings; iv) fine-tuned MLM embeddings; v)
\subsection{Passage Fragmentation with \(RelSnip\)}
\label{sec:relsnip}
Findings from \cite{yang2019end} show that segmenting long documents into paragraphs prior to indexing provides better overall QA performance (almost 2x). We explore this option and also introduce a \textit{relevant snippet} (\(RelSnip\)) approach that does not involve any custom indexing steps. A \(RelSnip\) is constructed as follows. For each long passage (e.g., 5,000 tokens) returned by BM25:
i) get highlight sections - a neighbourhood (e.g., 150 characters) of the top $k$ text fragments where each keyword match occurs (Elasticsearch supports this out of the box);
ii) concatenate the snippets to form a new ``relevant snippet'' passage of, say, 400 tokens;
iii) feed these 400 tokens to the BERT-based document reader instead of the original 5,000 tokens.
In many practical cases, it may be challenging, redundant or undesirable to recreate indexes at the paragraph level. \(RelSnip\) avoids that while also reducing the runtime cost required for document reading.
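The steps above can be sketched in a few lines; the sketch below assumes the retriever has already returned keyword-highlighted fragments (as Elasticsearch highlights do) and uses naive whitespace tokenization, so the function name and token budget are illustrative rather than NeuralQA's actual implementation.

```python
def relsnip(highlights, max_tokens=400):
    """Concatenate retriever highlight fragments into one short passage,
    truncated to a token budget, for the document reader."""
    tokens = []
    for fragment in highlights:
        tokens.extend(fragment.split())   # naive whitespace tokenization
        if len(tokens) >= max_tokens:
            break
    return " ".join(tokens[:max_tokens])

# Toy example: three highlight windows drawn from one long document.
highlights = [
    "... steve jobs founded apple computer in 1976 ...",
    "... apple later acquired next, returning jobs as ceo ...",
    "... the macintosh line was revitalized shortly after ...",
]
passage = relsnip(highlights, max_tokens=400)
print(len(passage.split()), passage[:60])
```

The resulting passage is bounded by the token budget regardless of the original document length, which is what reduces the document reader's runtime cost.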
\section{Introduction}
\begin{figure}[t!]
\centering
\includegraphics[width=\columnwidth]{mlm}
\caption{Examples of qualitative results from applying query expansion: (a) Query expansion using SpaCy \cite{spacy2} word embeddings to identify the most similar words for each expansion candidate token. This approach yields terms with low relevance (e.g., terms related to work (jobs, hiring) and fruits (apple, blackberry, pears) are not relevant to the current query context) (b) Query expansion using a masked language model (BERT). This approach yields terms that are not contained within the original query (e.g., mac, macintosh, personal) but are, \textit{in general}, relevant to the current query.
\label{fig:mlm}}
\end{figure}
\begin{figure*}[ht]
\centering
\includegraphics[width=\textwidth]{architecture}
\caption{The NeuralQA architecture comprises four primary modules. (a) User interface: enables user queries and visualizes results from the retriever and reader (b) Contextual Query Expander: offers options for generating query expansion terms using an MLM (c) Retriever: leverages the BM25 scoring algorithm in retrieving a list of candidate passages; it also optionally condenses lengthy documents to smaller passages via \(RelSnip\) (Section \ref{sec:relsnip}). (d) Document Reader: identifies answer spans within documents (where available) and provides explanations for each prediction.
\label{fig:architecture}}
\end{figure*}
The capability of providing \textit{exact} answers to queries framed as natural language questions can significantly improve the user experience in many real world applications. Rather than sifting through lists of retrieved documents, automatic QA (also known as reading comprehension) systems can surface an exact answer to a query, thus reducing the cognitive burden associated with the standard search task. This capability is applicable in extending conventional information retrieval systems (search engines) and also for emergent use cases, such as open domain conversational AI systems \cite{gao2018neural, qu2019bert}. For enterprises, QA systems that are both fast and precise can help unlock knowledge value in large unstructured document collections.
Conventional methods for open domain QA \cite{yang2015wikiqa, yang2019end} follow a two-stage implementation - (i) a \textbf{retriever} that returns a subset of relevant documents. Retrieval is typically based on sparse vector space models such as BM25 \cite{robertson2009probabilistic} and TF-IDF \cite{chen2017reading}; (ii) a machine reading comprehension model (\textbf{reader}) that identifies spans from each document which contain the answer. While sparse representations are fast to compute, they rely on exact keyword match, and suffer from the \textit{vocabulary mismatch} problem - scenarios where the vocabulary used to express a query is different from the vocabulary used to express the same concepts within the documents.
To address these issues, recent studies have proposed neural ranking \cite{lee-etal-2018-ranking, kratzwald-etal-2019-rankqa} and retrieval methods \cite{karpukhin2020dense, lee2019latent, guu2020realm}, which rely on dense representations.
However, while dense representations show significantly improved results, they introduce additional complexity and latency, which limits their practical application. For example, \citet{guu2020realm} require a specialized MLM pretraining regime, as well as a supervised fine-tuning step, to obtain representations used in a retriever. Similarly, \citet{karpukhin2020dense} use dual encoders in learning a dense representation for queries and all documents in the target corpus. Each of these methods requires additional infrastructure to compute dense representation vectors for all documents in the target corpus, as well as to implement efficient similarity search at run time. In addition, transformer-based architectures \cite{vaswani2017attention} used for dense representations are unable to process long sequences due to their self-attention operations, which scale quadratically with sequence length. As a result, these models require that documents are indexed/stored in small paragraphs. For many use cases, meeting these requirements (rebuilding retriever indexes, training models to learn corpus-specific representations, precomputing representations for all indexed documents) can be cost-intensive. These costs are hard to justify, given that simpler methods can yield comparable results \cite{lin2019neural}. Furthermore, as reader models are applied to domain-specific documents, they fail in counter-intuitive ways. It is thus valuable to offer visual interfaces that support debugging or sensemaking of results (e.g., explanations for \textit{why} a set of documents were retrieved or \textit{why} an answer span was selected from a document). While several libraries exist to explain NLP models, they do not integrate interfaces that help users make sense of both the retriever and the reader tasks. Collectively, these limitations can hamper experimentation with QA systems and the integration of QA models into practitioner workflows.
In this work, we introduce NeuralQA to help address these limitations. Our contributions are summarized as follows:
\begin{itemize}
\item An easy-to-use, end-to-end library for implementing QA systems. It integrates methods for query expansion, document retrieval (ElasticSearch\footnote{ElasticSearch {https://www.elastic.co}}), and document reading (QA models trained using the HuggingFace Transformers API \cite{Wolf2019HuggingFacesTS}). It also offers an interactive user interface for sensemaking of results (retriever + reader). NeuralQA is \href{https://github.com/victordibia/neuralqa}{open source and released under the MIT License}.
\item To address the vocabulary mismatch problem, NeuralQA introduces and implements a method for contextual query expansion (CQE), using a masked language model (MLM) fine-tuned on the target document corpus. Early qualitative results show CQE can surface relevant additional query terms that help improve recall and require minimal changes for integration with existing retrieval infrastructure.
\item In addition, we also implement \(RelSnip\), a simple method for extracting relevant snippets from retrieved passages before feeding them into a document reader. This, in turn, reduces the latency required to chunk and read lengthy documents. Importantly, these options offer the opportunity to improve latency and recall, with no changes to existing retriever infrastructure.
\end{itemize}
Overall, NeuralQA complements a line of end-to-end applications that improve QA system deployment \cite{akkalyoncu-yilmaz-etal-2019-applying,yang2019end} and provide visual interfaces for understanding machine learning models \cite{Wallace2019AllenNLP, seq2seqvisv1,madsen2019visualizing, Dibia2020Anomagram, Dibia2020Convnetplayground}.
\section{NeuralQA System Architecture}
In this section, we review the architecture for NeuralQA, as well as design decisions and supported workflows. The core modules for NeuralQA (Fig. \ref{fig:architecture}) include a user interface, retriever, expander, and reader. Each of these modules is implemented as an extensible python class (to facilitate code reuse and incremental development) and is exposed as a REST API endpoint that can be either consumed by third-party applications or interacted with via the NeuralQA user interface.
\subsection{Retriever}
The retriever supports the execution of queries on an existing ElasticSearch instance, using the industry standard BM25 scoring algorithm.
\subsubsection{Condensing Passages with \(RelSnip\)}
\label{sec:relsnip}
In practice, open corpus documents can be of arbitrary length (sometimes including thousands of tokens) and are frequently indexed for retrieval \textit{as is}. On the other hand, document reader models have limits on the maximum number of tokens they can process in a single pass (e.g., BERT-based models can process a maximum of 512 tokens). Thus, retrieving large documents can incur latency costs, as a reader will have to first split the document into manageable \textit{chunks}, and then process each \textit{chunk} individually. To address this issue, NeuralQA introduces \(RelSnip\), a method for constructing smaller documents from lengthy documents. \(RelSnip\) is implemented as follows: For each retrieved document, we apply a highlighter (\href{https://lucene.apache.org/core/7_3_1/highlighter/org/apache/lucene/search/uhighlight/UnifiedHighlighter.html}{Lucene Unified Highlighter}), which breaks the document into fragments of size \(k_{frag}\) and uses the BM25 algorithm to score each fragment as if it were an individual document in the corpus. Next, we concatenate the top \(n\) fragments as a new document, which is then processed by the reader. \(RelSnip\) relies on the simplifying assumption that fragments with higher match scores contain more relevant information. As an illustrative example, \(RelSnip\) can yield a document of 400 tokens (depending on \(k_{frag}\) and \(n\)) from a document containing 10,000 tokens. In practice, this can translate to roughly a 25x increase in speed.
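The fragment-and-rescore step can be sketched in a few lines of plain Python. This is a simplified illustration rather than NeuralQA's implementation: the Lucene Unified Highlighter is replaced by a toy BM25 scorer computed over the fragments themselves, and all function names are our own.

```python
import math
from collections import Counter

def bm25_score(query_terms, fragment, avg_len, df, n_docs, k1=1.2, b=0.75):
    """Score one fragment against the query with a standard BM25 formula."""
    tf = Counter(fragment)
    score = 0.0
    for term in query_terms:
        if tf[term] == 0:
            continue
        idf = math.log(1 + (n_docs - df[term] + 0.5) / (df[term] + 0.5))
        norm = tf[term] * (k1 + 1) / (
            tf[term] + k1 * (1 - b + b * len(fragment) / avg_len))
        score += idf * norm
    return score

def relsnip(document_tokens, query_terms, k_frag=25, n_top=3):
    """Split a long document into fragments of size k_frag, score each as if
    it were its own document, and concatenate the n_top best fragments."""
    fragments = [document_tokens[i:i + k_frag]
                 for i in range(0, len(document_tokens), k_frag)]
    avg_len = sum(len(f) for f in fragments) / len(fragments)
    df = Counter()                      # per-term fragment frequencies
    for f in fragments:
        df.update(set(f))
    scored = [(bm25_score(query_terms, f, avg_len, df, len(fragments)), i, f)
              for i, f in enumerate(fragments)]
    # keep the n_top highest-scoring fragments, in original document order
    top = sorted(sorted(scored, reverse=True)[:n_top], key=lambda t: t[1])
    return [tok for _, _, f in top for tok in f]
```

With `k_frag=10` and `n_top=2`, a 10,000-token document would be condensed to at most 20 tokens for the reader, which is the source of the latency savings described above.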
\subsection{Expander}
\subsubsection{Contextual Query Expansion}
Contextual Query Expansion (CQE) relies on the assumption that a masked language model (MLM) that has been fine-tuned on the target document corpus contains implicit information on the target corpus that can be exploited in identifying relevant query expansion terms. Ideally, we want to expand tokens, such that additional tokens serve to increase recall, while adding minimal noise and without significantly altering the semantics of the original query. We implement CQE as follows:
First, we identify a set of expansion candidate tokens. For each token \(t_i\) in the query \(t_{query}\), we use the SpaCy \cite{spacy2} library to infer its part of speech tag \(t_{i_{pos}}\) and apply a filter \(f_{rule}\) to select a subset as a candidate for expansion \(t_{candidates}\). Next, we construct intermediate versions of the original query, in which each token in \(t_{candidates}\) is masked, and an MLM (BERT) predicts the top \(n\) tokens that are contextually most likely to complete the query. These predicted tokens \(t_{expansion}\) are then added to the original query as expansion terms.
To minimize the chance of introducing spurious terms that are unrelated to the original query, we find that two quality control measures are useful. First, we leverage confidence scores returned by the MLM and only accept expansion tokens above a certain threshold (e.g., \(k_{thresh} = 0.5\)), where \(k_{thresh}\) is a hyperparameter. Second, we find that a conservative filter in selecting token expansion candidates can mitigate the introduction of spurious terms. Our filter rule \(f_{rule}\) currently only expands tokens that are either nouns or adjectives \(t_{i_{pos}} \in \{noun, adj\}\); tokens for other parts of speech are not expanded. Finally, the list of expansion terms is further cleaned by the removal of duplicate terms, punctuation, and stop words. Fig. \ref{fig:mlm} shows a qualitative comparison of query expansion terms suggested by a static word embedding and a masked language model for a given query.
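The masking-and-filtering loop described above can be sketched as follows. This is an illustrative reimplementation, not NeuralQA's code: the MLM is injected as a plain callable (in the real system, a fine-tuned BERT fill-mask model), the part-of-speech tags are taken as given rather than inferred with SpaCy, and all names are our own.

```python
def contextual_query_expansion(query_tokens, pos_tags, predict_masked,
                               k_thresh=0.5, stopwords=frozenset()):
    """Expand a query by masking each noun/adjective in turn and asking a
    masked language model for likely completions above a confidence threshold.
    `predict_masked(tokens)` must return a list of (token, probability) pairs
    for the single "[MASK]" position in `tokens`."""
    expansions = []
    for i, tag in enumerate(pos_tags):
        if tag not in ("NOUN", "ADJ"):        # conservative f_rule filter
            continue
        masked = query_tokens[:i] + ["[MASK]"] + query_tokens[i + 1:]
        for token, prob in predict_masked(masked):
            if prob >= k_thresh:              # confidence threshold k_thresh
                expansions.append(token.lower())
    # de-duplicate, drop punctuation/stopwords and terms already in the query
    seen, cleaned = set(query_tokens), []
    for t in expansions:
        if t.isalpha() and t not in seen and t not in stopwords:
            cleaned.append(t)
            seen.add(t)
    return query_tokens + cleaned
```

Injecting the predictor keeps the expansion logic independent of any particular MLM backend, which also makes the quality-control thresholds easy to test in isolation.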
\subsection{Reader}
The reader module implements an interface for predicting answer spans, given a query and context documents. Under the hood, it loads any QA model trained using the HuggingFace Transformers API \cite{Wolf2019HuggingFacesTS}. Documents that exceed the maximum token size for the reader are automatically split into chunks with a configurable stride, and answer spans are predicted for each chunk. All answers are then sorted, based on an associated score (start and end token softmax probabilities). Finally, each reader model provides a method that generates gradient-based explanations (Vanilla Gradients \cite{simonyan2013deep,erhan2009visualizing,baehrens2010explain}).
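The chunking behaviour can be illustrated with a minimal sketch (our own simplification; the actual reader additionally tracks token offsets and per-chunk answer scores). Overlapping windows ensure that an answer span straddling a chunk boundary still appears whole in some window.

```python
def chunk_with_stride(tokens, max_len=512, stride=128):
    """Split a token sequence into overlapping windows of at most max_len
    tokens, where consecutive windows overlap by `stride` tokens."""
    if len(tokens) <= max_len:
        return [tokens]
    chunks, start = [], 0
    while start < len(tokens):
        chunks.append(tokens[start:start + max_len])
        if start + max_len >= len(tokens):
            break                      # last window reaches the end
        start += max_len - stride      # advance, keeping `stride` overlap
    return chunks
```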
\begin{figure*}[ht]
\includegraphics[width=\textwidth]{figures/manual.png}
\caption{
The NeuralQA user interface is optimized for web and mobile. a.) Basic view (mobile) for closed domain QA, i.e., the user provides a question \textit{and} passage. b.) Advanced options view (desktop mode) for open domain QA. The user can set retriever parameters (e.g., \# of returned documents, toggle \(RelSnip\), fragment size \(k_{frag}\)) as well as expander and reader parameters (BERT reader model, token stride size). The view also shows a list of returned documents (D0-D4) with highlights that match query terms, and a list of answers (A0) with a gradient-based explanation of which tokens impact the selected answer span.
\label{fig:uiscreen}}
\end{figure*}
\subsubsection{User Interface }
The NeuralQA user interface (Fig. \ref{fig:uiscreen}) seeks to aid the user in performing queries and in sensemaking of underlying model behaviour. As a first step, we provide a visualization of retrieved document highlights that indicate what portions of the retrieved document contributed to their relevance ranking. Next, following work done in AllenNLP Interpret \cite{Wallace2019AllenNLP}, we implement gradient-based explanations that help the user understand what sections of the input (question and passage) were most relevant to the choice of answer span. We do not use attention weights, as they have been shown to be unfaithful explanations of model behaviour \cite{jain2019attention, serrano-smith-2019-attention} and not intuitive for end user sensemaking. We also implement a document and answer tagging scheme that indicates the source document from which an answer span is derived.
NeuralQA is scalable, as it is built on industry standard OSS tools that are designed for scale (ElasticSearch, HuggingFace Transformers API). We have tested deployments of NeuralQA on docker containers running on CPU machine clusters which rely on ElasticSearch clusters. The UI is responsive and optimized to work on desktop, as well as on mobile devices.
\subsection{Configuration and Workflow}
NeuralQA implements a command line interface for instantiating the library, and a declarative approach for specifying the parameters for each module. At run time, users can provide a command line argument specifying the location of a configuration YAML file\footnote{ A sample configuration file can be found on \href{https://github.com/victordibia/neuralqa/blob/master/neuralqa/config_default.yaml}{Github.}}. If no configuration file is found in the provided location or in the current folder, NeuralQA will create a default configuration file that can be subsequently modified. As an illustrative example, users can configure properties of the user interface (views to show or hide, title and description of the page, etc.), retriever properties (a list of supported retriever indices), and reader properties (a list of supported models that are loaded into memory on application startup).
\subsubsection{User Personas}
NeuralQA is designed to support use cases and personas at various levels of complexity. We discuss two specific personas briefly below.
\\\textbf{Data Scientists}: Janice, a data scientist, has extensive experience applying a collection of machine learning models to financial data. Recently, she has started a new project, in which the deliverable includes a QA model that is skillful at answering factoid questions on financial data. As part of this work, Janice has successfully fine-tuned a set of transformer models on the QA task, but would like to better understand how the model behaves. More importantly, she would like to enable visual interaction with the model for her broader team. To achieve this, Janice hosts NeuralQA on an internal server accessible to her team. Via a configuration file, she can specify a set of trained models, as well as enable user selection of reader/retriever parameters. This workflow also extends to other user types (such as hobbyists, entry level data scientists, or researchers) who want an interface for qualitative inspection of custom reader models on custom document indices.
\\\textbf{Engineering Teams}: Candice manages the internal knowledge base service for her startup. They have an internal ElasticSearch instance for search, but would like to provide additional value via QA capabilities. To achieve this, Candice provisions a set of docker containers running instances for NeuralQA and then modifies the frontend of their current search application to make requests to the NeuralQA REST API end point and serve answer spans.
\section{The Question Answering Pipeline}
\label{relatedwork}
There are several subtasks that frequently comprise the QA pipeline and are implemented in NeuralQA.
\subsection{Document Retrieval}
The first stage in the QA process focuses on retrieving a list of candidate passages, which are subsequently processed by a reader. Conventional approaches to QA apply representations from sparse vector space models (e.g., BM25, TF-IDF) in identifying the most relevant document candidates. For example, \citet{chen2017reading} introduce an end-to-end system combining TF-IDF retrieval with a multi-layer RNN for document reading. This is further improved upon by \citet{yang2019end}, who utilize BM25 for retrieval with a modern BERT transformer reader. However, sparse representations are keyword dependent, and suffer from the \textit{vocabulary mismatch} problem in information retrieval (IR); given a query \(Q\) and a relevant document \(D\), a sparse retrieval method may fail to retrieve \(D\) if \(D\) uses a different vocabulary to refer to the same content in \(Q\). Furthermore, given that QA queries are under-specified by definition (users are searching for unknown information), sparse representations may lack the contextual information needed to retrieve the most relevant documents. To address these issues, a set of related work has focused on methods for re-ranking retrieved documents to improve recall \cite{kratzwald-etal-2019-rankqa, wang2018r}. More recently, there have been efforts to learn representations of queries and documents useful for retrieval. \citet{lee2019latent} introduce an inverse cloze task for pretraining encoders used to create static embeddings that are indexed and used for similarity retrieval during inference. Their work is further expanded by \citet{guu2020realm}, who introduce non-static representations that are learned simultaneously with reader fine-tuning. Finally, \citet{karpukhin2020dense} use dual encoders for retrieval: one encoder that learns to map queries to a fixed dimension vector, and another that learns to map documents to a similar fixed-dimension vector (such that representations for similar query and documents are close).
\subsection{Query Expansion}
In addition to re-ranking and dense representation retrieval, query expansion methods have also been proposed to help address the vocabulary mismatch problem. They serve to identify additional relevant query terms, using a variety of sources - such as the target corpus, external dictionaries (e.g., WordNet), or historical queries.
Existing research has explored how implicit information contained in queries can be leveraged in query expansion. For example, \citet{lavrenko2017relevance, lv2010positional} show how a relevance model (RM3) can be applied for query expansion and improve retrieval performance. More recently, \citet{lin2019neural} also show that the use of a well-tuned relevance model such as RM3 \cite{lavrenko2017relevance, abdul2004umass} results in performance at par with complex neural retrieval methods.
Word embeddings have been explored as a potential method for query expansion, as well. In their work, \citet{kuzi2016query} train a word2vec \cite{mikolov2013distributed} CBOW model on their search corpora and use embeddings to identify expansion terms that are either semantically related to the query as a whole or to its terms. Their results suggest that a combination of word2vec embeddings and a relevance model (RM3) provide good results. However, while word embeddings trained on a target corpus are useful, they are static and do not take into consideration the context of the words in a specific query. In this work, we propose an extension to this direction of thought and explore how contextual embeddings produced by an MLM, such as BERT \cite{devlin2018bert}, can be applied in generating query expansion terms.
\subsection{Document Reading}
Recent advances in pretrained neural language models, like BERT \cite{devlin2018bert} and GPT \cite{radford2019language}, have enabled robust contextualized representation of natural language, which, in turn, has enabled significant performance increases on the QA task. Each QA model (reader) consists of a base representation and an output feedforward layer which produces two sets of scores: (i) scores for each input token that indicate the likelihood of an answer span starting at the token offset, and (ii) scores for each input token that indicate the likelihood of an answer span ending at the token offset.
\section{User Study}
\section{Introduction}
We say that a boolean function $f : \bit^n \rightarrow \bit$ is a \emph{Polynomial Threshold Function of degree $d$} if it can be expressed as the sign of a polynomial $p \in \mathbb R [ x_1, \ldots, x_n ]$ of degree at most $d$ evaluated on the boolean hypercube.
For brevity, we will use the term $(n,d)$-PTF (or simply PTF, when $n$ and $d$ are either implicit or irrelevant) to refer to a polynomial threshold function of degree $d$ on $n$ variables.
We say that the coefficients of $p$ are the \emph{realizing weights} of $f$.
Note that these realizing weights are not unique, as any sufficiently small perturbation of $p$ will not affect its sign on the discrete set $\bit^n$.
This definition alone is not terribly exciting without restrictions on $d$, as every boolean function on $n$ variables can be written as the sign of (and in fact can be written exactly as) a multilinear polynomial of degree $n$.
We are interested particularly in the case where $d$ is small.
In an influential paper, Craig Gotsman and Nathan Linial \cite{GL94} applied Fourier analytic techniques to the study of PTFs.
They were mainly interested in connecting different measures of the complexity of boolean functions, and of low-degree PTFs in particular.
One such measure was the Average Sensitivity of a boolean function, defined in Fourier analytic terms.
For simplicity, in this paper we use the following (equivalent) combinatorial definition:
\begin{definition}
For a function $f: \bit^n \rightarrow \bit$, we define its \emph{Dichromatic Count} $\mathbf{D}[f]$ to be the number of (unordered) pairs of Hamming neighbors $\{x,y\}$ such that $f(x) \ne f(y)$.
\end{definition}
We say that such a pair of Hamming neighbors is a \emph{dichromatic edge} of $f$.
\begin{definition}
The \emph{Average Sensitivity} of a boolean function $f$ is $\mathbf{AS}[f] := 2^{1-n} \mathbf{D}[f]$.
\end{definition}
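For small $n$ these quantities are easy to check by brute force. The following Python sketch (our own illustration, using the $\{-1,+1\}$ encoding of the hypercube) computes $\mathbf{D}[f]$ and $\mathbf{AS}[f]$ directly from the definitions:

```python
from itertools import product

def dichromatic_count(f, n):
    """D[f]: number of unordered Hamming-neighbor pairs {x, y} with f(x) != f(y)."""
    count = 0
    for x in product([-1, 1], repeat=n):
        for i in range(n):
            y = x[:i] + (-x[i],) + x[i + 1:]   # flip coordinate i
            if f(x) != f(y):
                count += 1
    return count // 2  # each dichromatic edge is seen from both endpoints

def average_sensitivity(f, n):
    """AS[f] = 2^(1-n) * D[f]."""
    return 2 ** (1 - n) * dichromatic_count(f, n)
```

For example, PARITY on $3$ variables makes every edge of the cube dichromatic ($\mathbf{D} = 3 \cdot 2^2 = 12$, $\mathbf{AS} = 3$), while MAJORITY on $3$ variables has $\mathbf{AS} = 3/2$.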
Among other things, Gotsman and Linial proved a tight upper bound on the average sensitivity of $(n,1)$-PTFs, achieved by the MAJORITY function on $n$ variables.
They conjectured that this bound generalizes to higher degree PTFs, in that the $(n,d)$-PTF of maximal average sensitivity is the obvious symmetric candidate, which alternates signs on the $d+1$ values of $\displaystyle \sum_{i \in [n]} x_i$ closest to $0$.
\begin{conjecture}[Gotsman-Linial]\label{GL}
Let $p^*_{n,d}$ be the monic univariate polynomial of degree $d$ with (non-repeated) roots at the $d$ integers closest to $0$ of opposite parity from $n$.
Let $\displaystyle f^*_{n,d} ( x_1, \ldots, x_n ) = \text{sgn} \left( p^*_{n,d} \left( \sum_{i \in [n]} x_i \right) \right)$.
Then for every $(n,d)$-PTF $f$, $\mathbf{AS}[f] \le \mathbf{AS}[f^*_{n,d}]$.
\end{conjecture}
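The extremal candidate $f^*_{n,d}$ is easy to construct explicitly. A Python sketch follows (our own illustration; when the $d$ closest opposite-parity integers are not uniquely determined, the tie between $\pm r$ is broken arbitrarily, which only changes $f^*_{n,d}$ up to an isomorphism of the cube):

```python
from itertools import product
from math import prod

def f_star(n, d):
    """The conjectured extremal (n,d)-PTF: the sign of the monic degree-d
    polynomial whose roots are the d integers of opposite parity from n
    closest to 0, evaluated at x_1 + ... + x_n."""
    candidates = sorted(range(-n, n + 1), key=abs)      # integers by distance to 0
    roots = [r for r in candidates if (r - n) % 2 == 1][:d]
    def f(x):
        s = sum(x)
        return 1 if prod(s - r for r in roots) > 0 else -1
    return f
```

As sanity checks: for odd $n$ and $d = 1$ the single root is $0$, so $f^*_{n,1}$ is MAJORITY; for $d = n$ the sign alternates on every layer, recovering $\pm$PARITY.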
This conjecture was listed as a prominent open problem in \cite{OD14} and \cite{alphabetsoup}.
If true, it would have many applications in complexity and learning (see for example \cite{HKM, GS, K12, KW, CSS}), although most of the applications would already be implied by an asymptotic version of the conjecture, stated below.
Gotsman and Linial proved their conjecture for the case where $d=1$, and it is also known to be true in the case where $d=0$.
However, it was left open whether the conjecture holds for any $d \ge 2$.
Two weaker versions of this conjecture have since been formulated and studied.
\begin{conjecture}[Gotsman-Linial - Asymptotic]\label{GLweak}
Let $f : \bit^n \rightarrow \bit$ be an $(n,d)$-PTF.
Then the average sensitivity $\mathbf{AS}[f] \in O( d \sqrt{n} )$.
\end{conjecture}
\begin{conjecture}[Gotsman-Linial - Weak]\label{GLsuperweak}
Let $f : \bit^n \rightarrow \bit$ be an $(n,d)$-PTF.
Then the average sensitivity $\mathbf{AS}[f] \in O( \sqrt{n} \log^{g(d)} n )$ for some function $g$ depending only on $d$.
\end{conjecture}
Conjecture \ref{GLsuperweak} was resolved by Daniel Kane \cite{K13}.
\subsection{Result}
In this paper, we resolve the Gotsman-Linial Conjecture (Conjecture \ref{GL}) for all pairs $(n, d)$ except the case when $n>7$ is even and $d=2$.
The main result of this paper is the following.
\begin{theorem}\label{mainrefute}
For all pairs of natural numbers $(n,d)$ satisfying one of the following criteria, there exists an $(n,d)$-PTF $f_{n,d}$ witnessing a counterexample to the Gotsman-Linial Conjecture (Conjecture \ref{GL}):
\begin{compactitem}
\item $n \ge 5$ is odd, and $d=2$.
\item $n \ge 7$, and $3 \le d \le n-3$.
\end{compactitem}
Moreover, $\mathbf{AS}[f_{n,d}] \in (1 + \Omega(n^{-1} e^{-d^2/n})) \mathbf{AS}[f^*_{n,d}]$.
\end{theorem}
In addition, the conjecture holds in many of the remaining cases.
\begin{theorem}\label{mainconfirm}
For all pairs of natural numbers $(n,d)$ satisfying one of the following criteria, $f^*_{n,d}$ has the greatest average sensitivity among $(n,d)$-PTFs.
\begin{compactitem}
\item $d \le 1$.
\item $d \ge n-2$.
\item $n=6$.
\end{compactitem}
\end{theorem}
Our results (and the remaining open cases) are summarized in Figure \ref{resultstable}.
Although we refute the Gotsman-Linial Conjecture for most cases that are of interest for applications, the asymptotic conjecture (Conjecture \ref{GLweak}), which would suffice for most known applications, remains open.
\begin{figure}[h]
\begin{center}
\begin{tabular}{cr|cccccccccccccccc}
& \multicolumn{1}{r}{} & \multicolumn{16}{c}{$n$} \\
&& $1$ & $2$ & $3$ & $4$ & $5$ & $6$ & $7$ & $8$ & $9$ & $10$ & $11$ & $12$ & $\cdots$ & $2k$ & $2k+1$ & $\cdots$ \\
\cline{2-18}
\multirow{9}{*}{$d$} & $0$ & $\color{cyan}\checkmark$ & $\color{cyan}\checkmark$ & $\color{cyan}\checkmark$ & $\color{cyan}\checkmark$ & $\color{cyan}\checkmark$ & $\color{cyan}\checkmark$ & $\color{cyan}\checkmark$ & $\color{cyan}\checkmark$ & $\color{cyan}\checkmark$ & $\color{cyan}\checkmark$ & $\color{cyan}\checkmark$ & $\color{cyan}\checkmark$ & $\color{cyan} \cdots$ & $\color{cyan}\checkmark$ & $\color{cyan}\checkmark$ & $\color{cyan} \cdots$ \\
& $1$ & $\color{cyan}\checkmark$ & $\color{cyan}\checkmark$ & $\color{cyan}\checkmark$ & $\color{cyan}\checkmark$ & $\color{cyan}\checkmark$ & $\color{cyan}\checkmark$ & $\color{cyan}\checkmark$ & $\color{cyan}\checkmark$ & $\color{cyan}\checkmark$ & $\color{cyan}\checkmark$ & $\color{cyan}\checkmark$ & $\color{cyan}\checkmark$ & $\color{cyan} \cdots$ & $\color{cyan}\checkmark$ & $\color{cyan}\checkmark$ & $\color{cyan} \cdots$ \\
& $2$ && $\color{cyan}\checkmark$ & $\color{cyan}\checkmark$ & $\color{cyan}\checkmark$ & $\color{red}\times$ & $\color{cyan}\checkmark$ & $\color{red}\times$ & ? & $\color{red}\times$ & ? & $\color{red}\times$ & ? & $\cdots$ & ? & $\color{red}\times$ & $\cdots$ \\
& $3$ &&& $\color{cyan}\checkmark$ & $\color{cyan}\checkmark$ & $\color{cyan}\checkmark$ & $\color{cyan}\checkmark$ & $\color{red}\times$ & $\color{red}\times$ & $\color{red}\times$ & $\color{red}\times$ & $\color{red}\times$ & $\color{red}\times$ & $\color{red} \cdots$ & $\color{red}\times$ & $\color{red}\times$ & $\color{red} \cdots$ \\
& $4$ &&&& $\color{cyan}\checkmark$ & $\color{cyan}\checkmark$ & $\color{cyan}\checkmark$ & $\color{red}\times$ & $\color{red}\times$ & $\color{red}\times$ & $\color{red}\times$ & $\color{red}\times$ & $\color{red}\times$ & $\color{red} \cdots$ & $\color{red}\times$ & $\color{red}\times$ & $\color{red} \cdots$ \\
& $5$ &&&&& $\color{cyan}\checkmark$ & $\color{cyan}\checkmark$ & $\color{cyan}\checkmark$ & $\color{red}\times$ & $\color{red}\times$ & $\color{red}\times$ & $\color{red}\times$ & $\color{red}\times$ & $\color{red} \cdots$ & $\color{red}\times$ & $\color{red}\times$ & $\color{red} \cdots$ \\
& $6$ &&&&&& $\color{cyan}\checkmark$ & $\color{cyan}\checkmark$ & $\color{cyan}\checkmark$ & $\color{red}\times$ & $\color{red}\times$ & $\color{red}\times$ & $\color{red}\times$ & $\color{red} \cdots$ & $\color{red}\times$ & $\color{red}\times$ & $\color{red} \cdots$ \\
& $7$ &&&&&&& $\color{cyan}\checkmark$ & $\color{cyan}\checkmark$ & $\color{cyan}\checkmark$ & $\color{red}\times$ & $\color{red}\times$ & $\color{red}\times$ & $\color{red} \cdots$ & $\color{red}\times$ & $\color{red}\times$ & $\color{red} \cdots$ \\
& $\vdots$ &&&&&&&& $\color{cyan} \ddots$ & $\color{cyan} \ddots$ & $\color{cyan} \ddots$ & $\color{red} \ddots$ & $\color{red} \ddots$ & $\color{red} \ddots$ & $\color{red} \ddots$ & $\color{red} \ddots$ & $\color{red} \ddots$
\end{tabular}
\end{center}
\caption{Results are summarized in the above table.
A cyan tick mark indicates a case in which the conjecture holds (for all $(n,d)$-PTFs $f$, $\mathbf{AS}[f] \le \mathbf{AS}[f^*_{n,d}]$).
A red cross indicates a refutation (there exists an $(n,d)$-PTF $f$ such that $\mathbf{AS}[f] \in (1 + \Omega(n^{-1} e^{-d^2/n})) \mathbf{AS}[f^*_{n,d}]$).
A black question mark indicates an open case.
Note: the cases $(n,d) = (6,2)$ and $(n,d) = (6,3)$ were verified with the help of a computer search and a linear program solver (see Appendix~\ref{appendix-computer}).}
\label{resultstable}
\end{figure}
The remainder of this paper is structured as follows.
We first present some high level intuition relating to the Gotsman-Linial Conjecture.
Section 2 contains background information.
Section 3 contains constructions of the refutations indicated in Figure \ref{resultstable}.
Section 4 concludes the paper and presents a revised conjecture.
\subsection{Intuition}\label{intuition}
We start with some very high level intuition as to why the Gotsman-Linial Conjecture might be (approximately) true.
The conjecture holds in the case of symmetric PTFs (boolean functions which can be expressed as the sign of a univariate polynomial in the sum of the input bits).
This follows from the Fundamental Theorem of Algebra and a simple counting argument.
In the more general case, we might expect that a degree-$d$ PTF can be expressed (at least approximately) in terms of $d$ unate functions.
This generalizes the observation that every linear threshold function is unate.
For a sufficiently close approximation, this would prove the Asymptotic Gotsman-Linial Conjecture.
Intuition may also be drawn from Kane's proof of Conjecture \ref{GLsuperweak}.
If inputs are chosen from a Gaussian distribution instead of a Bernoulli distribution, a polynomial $p$ is expected to be too large in magnitude for a small change in its input to change its sign.
Under certain conditions, a similar result can be extended to polynomial threshold functions on the boolean hypercube.
As for why the Gotsman-Linial Conjecture is not (exactly) true, we observe that the PTF of conjectured maximal average sensitivity is the product of $d$ linear threshold functions, with parallel separating hyperplanes between two of the middle $d+1$ layers (sets of vertices of equal Hamming weight) in the hypercube.
For some $d$, one might expect to be able to find a PTF of greater average sensitivity approximated by turning one of these separating hyperplanes `sideways', i.e. replacing a hyperplane that cuts the fewest edges with a hyperplane orthogonal to the rest.
Intuitively, this would require that $d$ be sufficiently large that some of the hyperplanes cut many more edges than others, but also sufficiently small that not too many edges are cut by two hyperplanes.
As it turns out, this intuition can be formalized for many $n$ and $d$, refuting the Gotsman-Linial Conjecture.
\section{Preliminaries}
\subsection{Background}
Low-degree PTFs, in particular linear threshold functions (degree-$1$ PTFs) with integral and polynomially bounded realizing weights, are of interest in the study of complexity classes such as ${\sf TC}$ (i.e. circuits composed of AND, OR, NOT, and MAJORITY gates of unbounded fan-in) and of neural networks.
More generally, we say that a circuit (with unbounded fan-in) is a \emph{degree-$d$ polynomial threshold circuit} if each of its constituent gates computes a degree-$d$ PTF of its inputs.
Note that since AND, OR, MAJORITY, and NOT are all linear threshold functions, ${\sf AC}$ and ${\sf TC}$ circuits are degree-$1$ polynomial threshold circuits.
Despite much research, the power of polynomial threshold circuits is poorly understood.
For instance, it is currently an open question, and a rather embarrassing one at that, whether ${\sf NE}$ (the class of functions computable in nondeterministic $2^{O(n)}$ time) is contained in ${\sf TC}^0_3$ (the class of functions computable by families of depth-$3$, polynomial size linear threshold circuits with polynomially bounded realizing weights).
Recent work by Daniel Kane and Ryan Williams \cite{KW} gave a partial answer to this question.
They studied the sensitivity of PTFs to random restrictions, proving (among other things) that ${\sf NE}$ (and in fact, ${\sf P}$-uniform ${\sf TC}^0$) does not have depth-3 ${\sf TC}$ circuits of $n^{1.499}$ gates or $n^{2.499}$ wires.
\subsection{Progress}
Conjecture \ref{GL} is trivially true in the cases $d=0$ and $d=n$ (the only $(n,0)$-PTFs are the constant functions, and $f^*_{n,n}$ is the parity function, which has the maximum possible average sensitivity).
Gotsman and Linial originally noted that Conjecture \ref{GL} had already been proven in the case where $d=1$ by Patrick O'Neil in 1971 \cite{ON}.
\begin{theorem}[O'Neil]\label{O'Neil}
The maximal number $k$ of edges of $H := \bit^n$ which may be cut by a hyperplane $P$ is given by $\displaystyle k = \left( n - \left\lfloor \frac{1}{2} n \right\rfloor \right) {n \choose \left\lfloor \frac{1}{2} n \right\rfloor}$.
\end{theorem}
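As an informal sanity check of Theorem \ref{O'Neil} (attainment only, not maximality), a majority-style hyperplane placed between the two middle layers of the hypercube cuts exactly $\left( n - \lfloor n/2 \rfloor \right) {n \choose \lfloor n/2 \rfloor}$ edges. The following brute-force script, which is illustrative and not part of the formal argument, confirms this for small $n$:

```python
from itertools import product
from math import comb

def cut_edges(f, n):
    """Count edges {x, y} of the hypercube {-1,1}^n on which f changes value."""
    count = 0
    for x in product((-1, 1), repeat=n):
        for i in range(n):
            if x[i] == -1:  # count each edge once, from its -1 endpoint
                y = x[:i] + (1,) + x[i + 1:]
                if f(x) != f(y):
                    count += 1
    return count

for n in range(2, 9):
    # Hyperplane sum(x) = c placed between the two middle layers;
    # c = 0 for odd n, c = 1 for even n, so sum(x) - c is never zero.
    c = 0 if n % 2 == 1 else 1
    maj = lambda x: 1 if sum(x) - c > 0 else -1
    assert cut_edges(maj, n) == (n - n // 2) * comb(n, n // 2)
```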
Very little additional progress was made towards resolving the above conjectures until recently.
The first non-trivial bounds on the average sensitivity of PTFs of arbitrary degree were found independently by two groups \cite{HKM, DRST} and published jointly \cite{souplite}.
Daniel Kane in 2012 obtained the first bound which was truly sublinear in $n$ \cite{K12}, and in 2013, he proved the weak version of the Gotsman-Linial Conjecture (Conjecture \ref{GLsuperweak}) \cite{K13}.
\section{Resolution of Gotsman-Linial Conjecture}
For simplicity, we start by introducing some notation.
\begin{definition}
Let $f, g : \bit^n \rightarrow \bit$.
We say $f \sim g$ iff there exist $\sigma \in S_n$ and $\alpha \in \bit^n$ such that the function $x \mapsto f \left( x_1, \ldots, x_n \right) g \left( \alpha_1 x_{\sigma(1)}, \ldots, \alpha_n x_{\sigma(n)} \right)$ is a constant.
\end{definition}
Note that $\sim$ defines an equivalence relation on boolean functions.
Two functions are equivalent iff one can be turned into the other through a combination of permuting the inputs and negating the inputs/output.
\begin{definition}
An $(n,d)$-Hypersensitive Function, or {\em $(n,d)$-HSF{}} is an $(n,d)$-PTF $f$ such that $\mathbf{D}[f] > \mathbf{D}[f^*_{n,d}]$.
\end{definition}
More generally, we say that a PTF $f$ is an HSF{} if $n$ and $d$ are either implicit or irrelevant.
We may now restate the original Gotsman-Linial Conjecture (Conjecture \ref{GL}) as follows:
\begin{conjecture}\label{GLcm}
For all $n, d \in \mathbb N$, $(n,d)$-HSFs{} do not exist.
\end{conjecture}
We first prove some simple cases of Conjecture \ref{GLcm}.
The following corollary of O'Neil's theorem (Theorem \ref{O'Neil}) uses our notation.
\begin{corollary}\label{GLd=1}
For every $n \in \mathbb N$, $(n,1)$-HSFs{} do not exist.
\end{corollary}
\begin{proof}
Every $(n,1)$-PTF $f$ is defined by a separating hyperplane $P$ which cuts all of the dichromatic edges of $f$.
From O'Neil, $\displaystyle \mathbf{D}[f] \le ( n - \lfloor n/2 \rfloor ) {n \choose \lfloor n/2 \rfloor} = \mathbf{D}[f^*_{n,1}]$, so $f$ is not an HSF{}.
\end{proof}
The case $d=n-1$ is a simple consequence of a result first proven in 1968 by Marvin Minsky and Seymour Papert \cite{MP} and since re-proven several times.
We present here a variation on the proof by Aspnes et al. \cite{ABFR}.
\begin{theorem}[Minsky-Papert]
Any PTF which computes parity on $n$ variables must have degree at least $n$.
\end{theorem}
\begin{proof}
Let $p \in \mathbb R[x_1, \ldots, x_n]$ be a multilinear polynomial of degree $n-1$ which is never zero on $\bit^n$.
The set of monomials of degree at most $n$ is an orthogonal basis for the vector space of degree-$n$ multilinear polynomials on the boolean hypercube.
Hence $p$ is orthogonal to the parity function $\phi_n$, i.e. $\displaystyle \langle p, \phi_n \rangle = \sum_{x \in \bit^n} p(x) \phi_n(x) = 0$.
By assumption, every term in the sum on the RHS is non-zero, so at least one of them is negative, i.e. $\text{sgn} \circ p \ne \phi_n$.
\end{proof}
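The orthogonality step in the proof above can be checked monomial by monomial: every multilinear monomial of degree less than $n$ misses some variable $x_i$, so its inner product with parity splits over $x_i = \pm 1$ and cancels. An illustrative check (not part of the proof):

```python
from itertools import combinations, product

def inner_with_parity(monomial, n):
    """<m, parity> = sum over x in {-1,1}^n of m(x) * prod(x_i)."""
    total = 0
    for x in product((-1, 1), repeat=n):
        m_val = 1
        for i in monomial:
            m_val *= x[i]
        parity = 1
        for xi in x:
            parity *= xi
        total += m_val * parity
    return total

n = 5
for deg in range(n):  # every multilinear monomial of degree < n
    for mono in combinations(range(n), deg):
        assert inner_with_parity(mono, n) == 0
```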
\begin{corollary}\label{GLd=n-1}
For every $n \in \mathbb N$, $(n,n-1)$-HSFs{} do not exist.
\end{corollary}
\begin{proof}
Let $f$ be an $(n,n-1)$-PTF.
Then $f \ne \phi_n$, and $f \ne -\phi_n$.
Let $X = \lbrace x : f(x) = \phi_n(x) \rbrace$ and $Y = \lbrace y : f(y) \ne \phi_n(y) \rbrace$.
Take $x \in X$ and $y \in Y$.
There are $n$ edge-disjoint paths between $x$ and $y$ in the boolean hypercube, and each must contain at least one edge crossing the cut between $X$ and $Y$ (i.e. a monochromatic edge).
Hence $\mathbf{D}[f] \le n ( 2^{n-1} - 1 ) = \mathbf{D}[f^*_{n,n-1}]$, so $f$ is not an HSF{}.
\end{proof}
\begin{lemma}\label{GLbound}
Let $n, d \in \mathbb N$, and let $g$ have maximal $\mathbf{D}[g]$ over all $(n-1,d)$-PTFs.
Then for every $(n,d)$-PTF $f$, $\displaystyle \mathbf{D}[f] \le \frac{2n}{n-1} \mathbf{D}[g]$.
\end{lemma}
\begin{proof}
Let $n, d \in \mathbb N$, and let $g$ be an $(n-1,d)$-PTF with $\mathbf{D}[g]$ maximal.
Let $f$ be an $(n,d)$-PTF.
Any restriction $f'$ of $f$ to a function on $n-1$ variables is also a degree-$d$ PTF, so $\mathbf{D}[f'] \le \mathbf{D}[g]$.
There are $2n$ such restrictions $f'$, and each dichromatic edge of $f$ appears in exactly $n-1$ of them.
Hence $(n-1) \mathbf{D}[f] \le 2n \mathbf{D}[g]$, from which the desired result follows immediately.
\end{proof}
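The double-counting step in the proof of Lemma \ref{GLbound} can be checked numerically: summing $\mathbf{D}$ over all $2n$ restrictions counts each dichromatic edge exactly $n-1$ times, for any boolean function. A small randomized check, illustrative only:

```python
import random
from itertools import product

def dichromatic_count(f, n):
    count = 0
    for x in product((-1, 1), repeat=n):
        for i in range(n):
            if x[i] == -1:  # count each edge once
                y = x[:i] + (1,) + x[i + 1:]
                if f(x) != f(y):
                    count += 1
    return count

random.seed(0)
n = 4
table = {x: random.choice((-1, 1)) for x in product((-1, 1), repeat=n)}
f = lambda x: table[x]

# Sum D[f'] over the 2n restrictions f|_{x_i = b}.
restricted_sum = 0
for i in range(n):
    for b in (-1, 1):
        g = lambda z, i=i, b=b: f(z[:i] + (b,) + z[i:])
        restricted_sum += dichromatic_count(g, n - 1)

assert restricted_sum == (n - 1) * dichromatic_count(f, n)
```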
\begin{lemma}\label{GLparity}
Let $n, d \in \mathbb N$ with $d < n$.
If $n$ and $d$ have the same parity, and $(n-1,d)$-HSFs{} do not exist, then $(n,d)$-HSFs{} do not exist.
\end{lemma}
\begin{proof}
Assume that no $(n-1,d)$-PTF is an HSF{}.
If $n$ and $d$ have the same parity, then every restriction of $f^*_{n,d}$ is equivalent (with respect to $\sim$) to $f^*_{n-1,d}$.
There are $2n$ such restrictions, and each dichromatic edge of $f^*_{n,d}$ appears in exactly $n-1$ of them, so $\mathbf{D}[f^*_{n,d}] = \frac{2n}{n-1} \mathbf{D}[f^*_{n-1,d}]$.
Hence by Lemma{} \ref{GLbound}, $(n,d)$-HSFs{} do not exist.
\end{proof}
\begin{corollary}\label{GLd=n-2}
For every $n \in \mathbb N$, $(n,n-2)$-HSFs{} do not exist.
\end{corollary}
\begin{proof}
This follows from Corollary \ref{GLd=n-1} and Lemma{} \ref{GLparity}.
\end{proof}
\begin{corollary}\label{GLn<6}
Let $n, d \in \mathbb N$.
If $d \le n \le 5$ and $(n,d) \ne (5,2)$, then $(n,d)$-HSFs{} do not exist.
\end{corollary}
\begin{proof}
This follows from Corollaries \ref{GLd=1}, \ref{GLd=n-1} and \ref{GLd=n-2}, and the fact that $(n,d)$-HSFs{} trivially do not exist when $d \in \lbrace 0, n \rbrace$.
\end{proof}
\subsection{A Simple Counterexample}
In the statement of Corollary \ref{GLn<6}, the caveat $(n,d) \ne (5,2)$ cannot be removed.
\begin{lemma}
There exists a unique $(5,2)$-HSF{} $f_{5,2}$, modulo $\sim$.
\end{lemma}
\begin{proof}
In the case where $n=5$ and $d=2$, $p^*_{5,2}(x) = x(x - 2)$, and $\mathbf{D}[f^*_{5,2}] = 50$.
Let $q \in \mathbb R [ x, y ]$ be defined by $q ( x, y ) := 3y^2 - x^2 + 2xy + y - x - 3$, let $q' \in \mathbb R [ x_1, \ldots, x_5 ]$ such that $q' ( x_1, \ldots, x_5 ) := q ( x_1 + x_2, x_3 + x_4 + x_5 )$, and let $f_{5,2} := \text{sgn} \circ q'$.
Since $q$ is quadratic, $f_{5,2}$ is a $(5,2)$-PTF.
It is not difficult to verify that $\mathbf{D}[f_{5,2}] = 51 > 50 = \mathbf{D}[f^*_{5,2}]$, so $f_{5,2}$ is a $(5,2)$-HSF{}.
For uniqueness, see Appendix \ref{appendix-5,2}.
\end{proof}
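The counts $\mathbf{D}[f_{5,2}] = 51$ and $\mathbf{D}[f^*_{5,2}] = 50$ are small enough to confirm by direct enumeration of $\bit^5$. The following script is an illustrative check, not part of the formal argument:

```python
from itertools import product

def sgn(v):
    return 1 if v > 0 else -1

def dichromatic_count(f):
    """Count dichromatic edges of f on the 5-dimensional hypercube."""
    count = 0
    for x in product((-1, 1), repeat=5):
        for i in range(5):
            if x[i] == -1:  # count each edge once
                y = x[:i] + (1,) + x[i + 1:]
                if f(x) != f(y):
                    count += 1
    return count

q = lambda a, b: 3 * b * b - a * a + 2 * a * b + b - a - 3
f_52 = lambda x: sgn(q(x[0] + x[1], x[2] + x[3] + x[4]))  # the (5,2)-HSF
f_star = lambda x: sgn(sum(x) * (sum(x) - 2))             # f*_{5,2}

assert dichromatic_count(f_52) == 51
assert dichromatic_count(f_star) == 50
```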
The existence of a $(5,2)$-HSF{} precludes the use of Lemma{} \ref{GLparity} to prove that $(6,2)$-HSFs{} do not exist.
However, the uniqueness of $f_{5,2}$, along with the fact that it only has one additional dichromatic edge, allows for a proof using Lemma{} \ref{GLbound}.
\begin{lemma}\label{GL6,2}
For every $d$, $(6,d)$-HSFs{} do not exist.
\end{lemma}
\begin{proof}
The cases $d \in \lbrace 0, 1, 4, 5, 6 \rbrace$ have already been covered.
For $d=3$, see Appendix \ref{appendix-6,3}.
The case $d=2$ remains.
Assume for the sake of contradiction that $f$ is a $(6,2)$-HSF{}.
The dichromatic count of every boolean function on an even number of variables is an even integer.
Since for every $(5,2)$-PTF $g$, $\mathbf{D}[g] \le 51$, Lemma{} \ref{GLbound} implies that $120 < \mathbf{D}[f] \le 122.4$, and hence that $\mathbf{D}[f] = 122$.
There are $12$ restrictions of $f$ to a function $g$ on $5$ variables, all of which satisfy $\mathbf{D}[g] \le 51$.
Every dichromatic edge in $f$ appears in exactly five such $g$, so the expectation over a uniformly random restriction $g$ of $\mathbf{D}[g]$ is $\displaystyle \frac{5}{12} \cdot 122 > 50.5$.
Since $\mathbf{D}[g]$ is always an integer, $\mathbf{D}[g] = 51$ with probability strictly greater than $1/2$.
In particular, there exists $i$ such that $f|_{x_i = -1} \sim f|_{x_i = 1} \sim f_{5,2}$ (*).
However, it is easily verified (see Appendix \ref{appendix-6,2}) that no function $f$ satisfying both (*) and $\mathbf{D}[f] = 122$ is a $(6,2)$-PTF.
This contradicts the initial choice of $f$.
Hence no $(6,2)$-HSFs{} exist.
\end{proof}
This also completes the proof of Theorem \ref{mainconfirm}. \hfill$\Box$
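The evenness fact used in the proof of Lemma \ref{GL6,2} follows from the identity $\mathbf{D}[f] = n \lvert S \rvert - 2 e(S)$, where $S = f^{-1}(1)$ and $e(S)$ counts edges inside $S$; when $n$ is even the right-hand side is even. A small randomized check, illustrative only:

```python
import random
from itertools import product

def dichromatic_count(f, n):
    return sum(
        1
        for x in product((-1, 1), repeat=n)
        for i in range(n)
        if x[i] == -1 and f(x) != f(x[:i] + (1,) + x[i + 1:])
    )

random.seed(1)
for n in (2, 4, 6):
    for _ in range(20):
        table = {x: random.choice((-1, 1)) for x in product((-1, 1), repeat=n)}
        # For even n, the dichromatic count is always even.
        assert dichromatic_count(lambda x: table[x], n) % 2 == 0
```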
\subsection{Extension to Odd $n$}
We may extend $f_{5,2}$ to an $(n,2)$-HSF{} for any odd $n \ge 5$.
\begin{theorem}\label{extension}
For every odd $n \in \mathbb N$ with $n \ge 5$, there exists an $(n,2)$-HSF{} $f_{n,2}$ with\\$\displaystyle \mathbf{D}[f_{n,2}] \in \left( 1 + \Omega \left( n^{-1} \right) \right) \mathbf{D}[f^*_{n,2}]$.
\end{theorem}
Intuitively, $f_{n,2}$ behaves exactly as $f_{5,2}$, with the additional variables contributing to the second argument of $q$.
\begin{proof}
Let $n \ge 5$ be an odd integer.
Let $A := \{-2, 0, 2\}$ and $B := 2\mathbb Z + 1$.
Let $H$ be the $n$-dimensional boolean hypercube, and let $G$ be the graph with vertex set $A \times B$ and an edge between $u$ and $v$ exactly when $\lVert u-v \rVert_1 = 2$.
Let $\phi : H \rightarrow G$ be the graph homomorphism defined by $\phi ( x_1, \ldots, x_n ) := \left( x_1 + x_2, x_3 + \ldots + x_n \right)$.
Let $q ( x, y ) := 3y^2 - x^2 + 2xy + y - x - 3$ as above, let $f := \text{sgn} \circ q$, and take $f_{n,2} := f \circ \phi$.
Note that because $\phi$ is a graph homomorphism, we may compute $\mathbf{D}[f_{n,2}]$ by counting the dichromatic edges $e$ induced by $f$ on $G$, weighted by $\lvert \phi^{-1}(e) \rvert$.
To this end, we observe that an edge $e$ between $(2i-2, 2j+2-n)$ and $(2i-2, 2j-n)$ has a preimage under $\phi$ of cardinality
\begin{align*}
\lvert \phi^{-1}(e) \rvert = {2 \choose i} {n-2 \choose j} \left( n - 2 - j \right).
\end{align*}
Similarly, for an edge $e$ between $(2i-2, 2j+2-n)$ and $(2i, 2j+2-n)$,
\begin{align*}
\lvert \phi^{-1}(e) \rvert = {2 \choose i} {n-2 \choose j} \left( 2 - i \right).
\end{align*}
We observe that $q$ is positive on $A \times B$ except at the four points $\{ (-2, 1), (0, -1), (2, -1), (2, 1) \}$.
Hence $f$ gives nine dichromatic edges, as indicated by the black lines below.
\begin{center}
\begin{tabular}{cr|cccccccc}
&\multicolumn{1}{r}{} & \multicolumn{8}{c}{$j$} \\
&& $2-n$ & $\cdots$ & $-3$ & $-1$ & $+1$ & $+3$ & $\cdots$ & $n-2$ \\
\cline{2-10}
& $-2$ & $\color{cyan} +$ & $\cdots$ & $\color{cyan} +$ & $\color{cyan} +$ & \multicolumn{1}{|c|}{$\color{red} -$} & $\color{cyan} +$ & $\cdots$ & $\color{cyan} +$ \\
\cline{6-7}
$i$ & $0$ & $\color{cyan} +$ & $\cdots$ & $\color{cyan} +$ & \multicolumn{1}{|c|}{$\color{red} -$} & $\color{cyan} +$ & $\color{cyan} +$ & $\cdots$ & $\color{cyan} +$ \\
\cline{7-7}
& $+2$ & $\color{cyan} +$ & $\cdots$ & $\color{cyan} +$ & \multicolumn{1}{|c}{$\color{red} -$} & \multicolumn{1}{c|}{$\color{red} -$} & $\color{cyan} +$ & $\cdots$ & $\color{cyan} +$
\end{tabular}
\end{center}
Summing the above expressions over these nine edges, we have
\begin{align*}
\mathbf{D}[f_{n,2}] &= {n-2 \choose \frac{n-1}{2}} \left( {2 \choose 0} n + {2 \choose 1} \left( n-1 \right) + {2 \choose 2} \left( n-1 \right) \right) \\
&= {n-2 \choose \frac{n-1}{2}} \left( 4n-3 \right) \\
&= {n-2 \choose \frac{n-1}{2}} \left( n-3 + 3n \right) \\
&= {n-2 \choose \frac{n+1}{2}} \left( n+1 \right) + {n-2 \choose \frac{n-1}{2}} 3n \\
&= \left( {n-1 \choose \frac{n+1}{2}} + {n-1 \choose \frac{n-1}{2}} \right) n + {n-2 \choose \frac{n+1}{2}} \\
&= {n \choose \frac{n+1}{2}} n + {n-2 \choose \frac{n+1}{2}} \\
&\in \left( 1 + \Theta(n^{-1}) \right) \mathbf{D}[f^*_{n,2}].
\end{align*}
Hence $f_{n,2}$ is an $(n,2)$-HSF{}, as desired.
\end{proof}
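The closed form in Theorem \ref{extension} can be verified by brute force for small odd $n$. The script below is illustrative only; it uses $\mathbf{D}[f^*_{n,2}] = n {n \choose (n+1)/2}$ for odd $n$, which follows since $f^*_{n,2}$ is negative exactly on the layer of Hamming weight $(n+1)/2$:

```python
from itertools import product
from math import comb

def sgn(v):
    return 1 if v > 0 else -1

def dichromatic_count(f, n):
    count = 0
    for x in product((-1, 1), repeat=n):
        for i in range(n):
            if x[i] == -1:  # count each edge once
                y = x[:i] + (1,) + x[i + 1:]
                if f(x) != f(y):
                    count += 1
    return count

q = lambda a, b: 3 * b * b - a * a + 2 * a * b + b - a - 3

for n in (5, 7, 9):
    f = lambda x: sgn(q(x[0] + x[1], sum(x[2:])))
    d_star = n * comb(n, (n + 1) // 2)  # D[f*_{n,2}] for odd n
    assert dichromatic_count(f, n) == d_star + comb(n - 2, (n + 1) // 2)
```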
\subsection{The General Case}
Using a similar construction, we now prove the existence of HSFs{} of arbitrary degree.
\begin{theorem}\label{general}
For every $n, d \in \mathbb N$ with $n \ge 7$ and $3 \le d \le n-3$, there exists an $(n, d)$-HSF{} $f_{n,d}$ with $\mathbf{D}[f_{n,d}] \in \left( 1 + \Omega \left( n^{-1} e^{-d^2/n} \right) \right) \mathbf{D}[f^*_{n,d}]$.
\end{theorem}
We first consider the case where $n$ and $d$ have the same parity.
The case where $n$ and $d$ have opposite parity is similar but handled later.
\begin{theorem}
For every $n, d \in \mathbb N$ with $3 \le d \le n-4$ and $n-d$ even, there exists an $(n, d)$-HSF{} $f_{n,d}$ with $\mathbf{D}[f_{n,d}] \in \left( 1 + \Omega \left( n^{-1} e^{-d^2/n} \right) \right) \mathbf{D}[f^*_{n,d}]$.
\end{theorem}
\begin{proof}
Let $n, d$ be integers of the same parity with $3 \le d \le n-4$.
Let $A := \{-3, -1, 1, 3\}$ and let $B := 2\mathbb Z + d + 1$.
Let $H$ be the $n$-dimensional boolean hypercube, and let $G$ be the graph with vertex set $A \times B$ and an edge between $u$ and $v$ exactly when $\lVert u-v \rVert_1 = 2$.
Let $\phi$ be the graph homomorphism defined by $\phi(x_1, \ldots x_n) := (x_1 + x_2 + x_3, x_4 + \ldots + x_n)$.
We now define four polynomials $p_1, p_2, p_3, p_4$ on $A \times B$ as follows:
\begin{align*}
p_1(x,y) &:= (y-1+d)(y+1-d) \\
p_2(x,y) &:= 1 - 2 \left( x (d-1) + y \right)^2 \\
p_3(x,y) &:= (y-3+d)(y-5+d) \cdots (y+5-d)(y+3-d) \\
p_4(x,y) &:= x(x+2)(x-2)(y-4+d)(y-6+d) \cdots (y+6-d)(y+4-d)
\end{align*}
Since $p_1 \in \Omega(p_2)$, there exists $\varepsilon' > 0$ such that for every $v \in G$ with $p_1(v) \ne 0$, $|p_1(v)| > |2 \varepsilon' p_2(v)|$.
Similarly, $p_3 \in \Omega(p_4)$, so there exists $\varepsilon \in (0, \varepsilon']$ such that for every $v \in G$ with $p_3(v) \ne 0$, $|p_3(v)| > |\varepsilon p_4(v)|$.
For instance, we may take $\varepsilon = \varepsilon' = \left( 4d \right)^{-d}$.
Take $p := \left( p_1 + \varepsilon p_2 \right) \cdot p_3 - \varepsilon^2 p_4$, take $g := \text{sgn} \circ p$, and take $f_{n,d} := g \circ \phi$.
Since $p_1$ and $p_2$ have degree $2$, $p_3$ has degree $d-2$, and $p_4$ has degree $d$, $f_{n,d}$ is a polynomial threshold function of degree $d$.
Towards computing $\mathbf{D}[f_{n,d}]$, we first consider the relevant behaviors of $p_1, p_2, p_3, p_4$ separately.
All four are integer-valued (evaluations of) polynomials on the domain $A \times B$.
Both $p_2$ and $p_4$ are always odd, so in particular, are non-zero everywhere.
Firstly, $p_3$ is positive when $y > d-3$, is zero when $|y| \le d-3$, and has the same sign as $(-1)^d$ when $y < 3-d$.
Clearly, $p_1$ is positive when $|y| > d-1$ and zero when $|y| = d-1$.
By choice of $\varepsilon$, $p_1 + \varepsilon p_2$ is never in the interval $(-\varepsilon, \varepsilon)$ and always has the same sign as $p_1$ when $p_1$ is non-zero.
Similarly, $p$ is always non-zero and always has the same sign as $\left( p_1 + \varepsilon p_2 \right) \cdot p_3$ when $p_3$ is non-zero.
Hence we may rewrite $g$ as the following piecewise function:
\begin{align*}
g(x,y) &= \left\lbrace
\begin{array}{ll}
(-1)^d & y < 1-d \\
\text{sgn} ((-1)^d p_2(x,y)) & y = 1-d \\
\text{sgn} (-p_4(x,y)) & |y| < d-1 \\
\text{sgn} (p_2(x,y)) & y = d-1 \\
1 & y > d-1
\end{array}
\right.
\end{align*}
Since $p_2$ is positive only at the two points $(-1, d-1)$ and $(1, 1-d)$ when $|y| = d-1$, the above piecewise representation shows that when $|y| \le d-1$, $g(x,y)$ computes the parity function except at the two points $(3, d-1)$ and $(-3, 1-d)$ (illustrated in Figures \ref{evenfigure} and \ref{oddfigure}).
\begin{figure}[h]
\begin{center}
\begin{tabular}{cr|ccccccccc}
& \multicolumn{1}{r}{} & \multicolumn{9}{c}{$y$} \\
&& $\cdots$ & $-1-d$ & $1-d$ & $3-d$ & $\cdots$ & $d-3$ & $d-1$ & $d+1$ & $\cdots$ \\
\cline{2-11}
\multirow{4}{*}{$x$} & $-3$ & $\cdots$ & $\color{cyan} +$ & \multicolumn{1}{|c}{$\color{red} -$} & $\color{red} -$ & \multicolumn{1}{|c|}{$\cdots$} & $\color{cyan} +$ & \multicolumn{1}{|c|}{$\color{red} -$} & $\color{cyan} +$ & $\cdots$ \\
\cline{6-9}
& $-1$ & $\cdots$ & $\color{cyan} +$ & \multicolumn{1}{|c|}{$\color{red} -$} & $\color{cyan} +$ & \multicolumn{1}{|c|}{$\cdots$} & $\color{red} -$ & \multicolumn{1}{|c}{$\color{cyan} +$} & $\color{cyan} +$ & $\cdots$ \\
\cline{5-9}
& $+1$ & $\cdots$ & $\color{cyan} +$ & \multicolumn{1}{c|}{$\color{cyan} +$} & $\color{red} -$ & \multicolumn{1}{|c|}{$\cdots$} & $\color{cyan} +$ & \multicolumn{1}{|c|}{$\color{red} -$} & $\color{cyan} +$ & $\cdots$ \\
\cline{5-8}
& $+3$ & $\cdots$ & $\color{cyan} +$ & \multicolumn{1}{|c|}{$\color{red} -$} & $\color{cyan} +$ & \multicolumn{1}{|c|}{$\cdots$} & $\color{red} -$ & \multicolumn{1}{c|}{$\color{red} -$} & $\color{cyan} +$ & $\cdots$
\end{tabular}
\end{center}
\caption{Illustration of $g$ in the case where $n$ and $d$ are both even}
\label{evenfigure}
\end{figure}
\begin{figure}[h]
\begin{center}
\begin{tabular}{cr|ccccccccc}
& \multicolumn{1}{r}{} & \multicolumn{9}{c}{$y$} \\
&& $\cdots$ & $-1-d$ & $1-d$ & $3-d$ & $\cdots$ & $d-3$ & $d-1$ & $d+1$ & $\cdots$ \\
\cline{2-11}
\multirow{4}{*}{$x$} & $-3$ & $\cdots$ & $\color{red} -$ & \multicolumn{1}{|c}{$\color{cyan} +$} & $\color{cyan} +$ & \multicolumn{1}{|c|}{$\cdots$} & $\color{cyan} +$ & \multicolumn{1}{|c|}{$\color{red} -$} & $\color{cyan} +$ & $\cdots$ \\
\cline{6-9}
& $-1$ & $\cdots$ & $\color{red} -$ & \multicolumn{1}{|c|}{$\color{cyan} +$} & $\color{red} -$ & \multicolumn{1}{|c|}{$\cdots$} & $\color{red} -$ & \multicolumn{1}{|c}{$\color{cyan} +$} & $\color{cyan} +$ & $\cdots$ \\
\cline{5-9}
& $+1$ & $\cdots$ & $\color{red} -$ & \multicolumn{1}{c|}{$\color{red} -$} & $\color{cyan} +$ & \multicolumn{1}{|c|}{$\cdots$} & $\color{cyan} +$ & \multicolumn{1}{|c|}{$\color{red} -$} & $\color{cyan} +$ & $\cdots$ \\
\cline{5-8}
& $+3$ & $\cdots$ & $\color{red} -$ & \multicolumn{1}{|c|}{$\color{cyan} +$} & $\color{red} -$ & \multicolumn{1}{|c|}{$\cdots$} & $\color{red} -$ & \multicolumn{1}{c|}{$\color{red} -$} & $\color{cyan} +$ & $\cdots$
\end{tabular}
\end{center}
\caption{Illustration of $g$ in the case where $n$ and $d$ are both odd}
\label{oddfigure}
\end{figure}
We now define $g' : A \times B \rightarrow \{ -1, 1 \}$ by $g'(\phi(x)) := -f^*_{n,d}(x)$.
Note that because $f^*_{n,d}$ is symmetric, this gives a well-defined function $g'$.
It is easily verified that for all $(x,y) \in A \times B$ such that $|y| \le d-1$ and at the two points $(3, -1-d)$ and $(-3, d+1)$, $g'(x,y) = g(x,y)$, and that for all other $(x,y) \in A \times B$, $g'(x,y) = -g(x,y)$.
Hence there are ten edges $\{ u, v \}$ in $G$ for which $g(u)g(v) \ne g'(u)g'(v)$.
This allows us to compute $\mathbf{D}[f_{n,d}]$ as follows:
\begin{align*}
\mathbf{D}[f_{n,d}] =& \ \mathbf{D}[f^*_{n,d}] \\
&+ (n+d-2) {n-3 \choose \frac{n-d-4}{2}} + 3(n+d-2) {n-3 \choose \frac{n-d-4}{2}} \\
&- 3(n+d-2) {n-3 \choose \frac{n-d-4}{2}} - 6 {n-3 \choose \frac{n-d-4}{2}} - (n-d-4) {n-3 \choose \frac{n-d-4}{2}} \\
=& \ \mathbf{D}[f^*_{n,d}] + (2d-4) {n-3 \choose \frac{n-d-4}{2}} \\
\in& \ \left( 1 + \Omega \left( \frac{{n \choose \frac{n-d}{2}}}{n {n \choose \frac{n}{2}}} \right) \right) \mathbf{D}[f^*_{n,d}] \\
\subseteq& \ \left( 1 + \Omega \left( n^{-1} e^{-d^2/n} \right) \right) \mathbf{D}[f^*_{n,d}]. &&\text{(Stirling's Inequality)}
\end{align*}
Hence $f_{n,d}$ is an $(n,d)$-HSF{}, as desired.
\end{proof}
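For the smallest admissible instance, $n = 7$ and $d = 3$, the construction can be checked by brute force. In the sketch below, the products in $p_3$ and $p_4$ are read as $\prod_k (y + k)$ over offsets $\lvert k \rvert \le d-3$ and $\lvert k \rvert \le d-4$ respectively (so the $y$-part of $p_4$ is an empty product when $d = 3$); this reading is our assumption, and exact rational arithmetic is used so that the sign of $p$ is evaluated without floating-point error. The script is illustrative only:

```python
from fractions import Fraction
from itertools import product
from math import comb

def sgn(v):
    return 1 if v > 0 else -1

def dichromatic_count(f, n):
    count = 0
    for x in product((-1, 1), repeat=n):
        for i in range(n):
            if x[i] == -1:  # count each edge once
                y = x[:i] + (1,) + x[i + 1:]
                if f(x) != f(y):
                    count += 1
    return count

n, d = 7, 3
eps = Fraction(1, (4 * d) ** d)  # the choice epsilon = (4d)^{-d} from the proof

def p(x, y):
    p1 = (y - 1 + d) * (y + 1 - d)
    p2 = 1 - 2 * (x * (d - 1) + y) ** 2
    p3 = 1
    for k in range(-(d - 3), d - 2, 2):  # roots at y = 3-d, 5-d, ..., d-3
        p3 *= y + k
    p4 = x * (x + 2) * (x - 2)
    for k in range(-(d - 4), d - 3, 2):  # empty product when d = 3
        p4 *= y + k
    return (p1 + eps * p2) * p3 - eps ** 2 * p4

f = lambda x: sgn(p(x[0] + x[1] + x[2], sum(x[3:])))

# D[f*_{7,3}]: edges in the three middle layer gaps, 105 + 140 + 105 = 350.
d_star = comb(7, 2) * 5 + comb(7, 3) * 4 + comb(7, 4) * 3
assert dichromatic_count(f, n) == d_star + (2 * d - 4) * comb(n - 3, (n - d - 4) // 2)
```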
\begin{theorem}
For every $n,d \in \mathbb N$ with $n \ge 7$, $3 \le d \le n-3$ and $n-d$ odd, there exists an $(n,d)$-HSF{} $f_{n,d}$ with $\displaystyle \mathbf{D}[f_{n,d}] \in \left( 1 + \Omega \left( n^{-1} e^{-d^2/n} \right) \right) \mathbf{D}[f^*_{n,d}]$.
\end{theorem}
\begin{proof}
The proof proceeds similarly to the previous case.
We define $A$, $B$, $H$, $G$, $p_1$, $p_2$, $p_3$, $p_4$, $p$, $g$, and $g'$ as above, and we define $\psi ( x_1, \ldots, x_n ) := \left( x_1 + x_2 + x_3, 1 + x_4 + \ldots + x_n \right)$.
We now define $f_{n,d} := g \circ \psi$ analogously to above.
The computation of $\mathbf{D}[f_{n,d}]$ now proceeds as follows:
\begin{align*}
\mathbf{D}[f_{n,d}] =& \ \mathbf{D}[f^*_{n,d}] \\
&+ \frac{n+d-3}{2} {n-3 \choose \frac{n-d-3}{2}} + 3 \frac{n+d-3}{2} {n-3 \choose \frac{n-d-3}{2}} \\
&- 3 \frac{n+d-3}{2} {n-3 \choose \frac{n-d-3}{2}} - 3 {n-3 \choose \frac{n-d-3}{2}} - \frac{n-d-3}{2} {n-3 \choose \frac{n-d-3}{2}} \\
&+ \frac{n+d-1}{2} {n-3 \choose \frac{n-d-5}{2}} + 3 \frac{n+d-1}{2} {n-3 \choose \frac{n-d-5}{2}} \\
&- 3 \frac{n+d-1}{2} {n-3 \choose \frac{n-d-5}{2}} - 3 {n-3 \choose \frac{n-d-5}{2}} - \frac{n-d-5}{2} {n-3 \choose \frac{n-d-5}{2}} \\
=& \ \mathbf{D}[f^*_{n,d}] + (d-3) {n-3 \choose \frac{n-d-3}{2}} + (d-1) {n-3 \choose \frac{n-d-5}{2}} \\
\in& \ \left( 1 + \Omega \left( \frac{{n \choose \frac{n-d}{2}}}{n {n \choose \frac{n}{2}}} \right) \right) \mathbf{D}[f^*_{n,d}] \\
\subseteq& \ \left( 1 + \Omega \left( n^{-1} e^{-d^2/n} \right) \right) \mathbf{D}[f^*_{n,d}]. &&\text{(Stirling's Inequality)}
\end{align*}
Hence $f_{n,d}$ is an $(n,d)$-HSF{}, as desired.
\end{proof}
This also completes the proofs of Theorems \ref{general} and \ref{mainrefute}. \hfill$\Box$
\section{Conclusion}
For almost all $d$ and almost all $n$, we refute the Gotsman-Linial Conjecture (Conjecture \ref{GL}) with a multiplicative separation of $1 + \Theta_d \left( n^{-1} \right)$.
This separation is too weak to refute most known applications of the conjecture.
We would need to improve $1 + \Theta_d \left( n^{-1} \right)$ to $\omega(1)$ to refute the Asymptotic Gotsman-Linial Conjecture (Conjecture \ref{GLweak}), on which the applications depend.
Although for every $(n,d)$-HSF{} $f$ given in this paper, $\mathbf{D}[f] > \mathbf{D}[f^*_{n,d}]$, it should be noted that the RHS is still an upper bound in a \emph{limiting} sense.
This, along with the intuition presented in Section \ref{intuition}, invites the following revised conjecture.
\begin{conjecture}[Gotsman-Linial - Limit]\label{GLlimit}
Let $f : \bit^n \rightarrow \bit$ be an $(n,d)$-PTF.
Then the average sensitivity of $f$ satisfies $\mathbf{AS}[f] \le d \, \mathbf{AS}[f^*_{n,1}]$.
\end{conjecture}
Conjecture \ref{GLlimit} would resolve the remaining cases of Conjectures \ref{GLcm} and \ref{GL}, i.e.
\begin{conjecture}
For every even $n$, $(n,2)$-HSFs{} do not exist.
\end{conjecture}
Furthermore, our revised conjecture would imply the Asymptotic Gotsman-Linial Conjecture (Conjecture \ref{GLweak}) and its consequent applications.
\section*{Acknowledgments}
The author would like to thank Ryan Williams especially for inspiration, advice, feedback, and an admirable tolerance of cheesemonkeys; the Williams family, the Chap-people, Henry Qin, and Carolyn Kim for moral support and a good work environment; and Not Luke the goldfish for surviving.
\section{Introduction}
\label{sec:intro}
Incremental speech recognition (ISR) allows a speech-based interaction system to react quickly while the utterance is being spoken. Unlike offline sentence-wise automatic speech recognition (ASR), where the decoding result is available after a user finishes speaking, ISR returns $N$-best decoding results with small latency during speech. These $N$-best results, or hypotheses, gradually improve as the system receives more speech data. Since ISR is usually employed for immediate reaction to speech, word stability \cite{selfridge2011stability, mcgraw2011estimating} and incremental lattice generation \cite{sagerer1996incremental} have been important topics.
In this paper, we introduce an end-to-end character-level ISR system with two unidirectional recurrent neural networks (RNNs). An acoustic RNN roughly dictates the input speech and an RNN-based language model is employed to augment the dictation result through decoding. Compared to a conventional word-level speech recognition backend, the character-level ASR is capable of dictating out-of-vocabulary (OOV) words based on their pronunciation. Also, our model is trained directly from speech and text corpora and does not require an external word dictionary or senone modeling.
There have been efforts to deal with OOV words in conventional HMM based ASR systems. In \cite{killer2003grapheme}, graphemes are employed as basic units instead of phonemes. Also, a sub-lexical language model is proposed in \cite{bisani2005open} for detecting previously unseen words.
RNN-based character-level end-to-end ASR systems were studied in \cite{graves2014towards, hannun2014deepspeech, hannun2014first, miao2015eesen, bahdanau2015end}. However, they lack the capability of dictating OOV words since the decoding is performed with word-level LMs. Recently, a lexicon-free end-to-end ASR system was introduced in \cite{maas2015lexicon}, where a character-level RNN LM is employed. We further improve this approach by employing prefix-tree-based online beam search with additional depth-pruning for ISR.
The character-level ISR system proposed in this paper is composed of an acoustic RNN and an RNN LM. The acoustic RNN is end-to-end trained with connectionist temporal classification (CTC) \cite{graves2006connectionist} using the Wall Street Journal (WSJ) speech corpus \cite{paul1992design}. The output of the acoustic RNN is the probability of characters, which are decoded with character-level beam search to generate $N$-best hypotheses. To improve the performance, a character-level RNN LM is employed to augment the beam search. Also, we propose depth-pruning for efficient tree-based beam search. The RNN LM is separately trained with a large text corpus that is also included in the WSJ corpus. Unlike for word-level language modeling, conventional statistical LMs such as $n$-gram back-off models cannot be used because a much longer history window is required for character-level prediction. Both the acoustic RNN and the RNN LM have deep unidirectional long short-term memory (LSTM) network structures \cite{hochreiter1997long, graves2013hybrid}. For continuous ISR on infinitely long input speech, they are trained with virtually infinite training data streams that are generated by randomly concatenating training sequences.
The proposed model is evaluated on a single test sequence that is generated by concatenating all test utterances in WSJ \texttt{eval92} (Nov'92 20k evaluation set) without any external reset of RNN states at the utterance boundaries. The ISR performance is examined by varying the beam width and depth. Generally, a wider beam increases the accuracy. Under the same beam width, there is a trade-off between the accuracy and stability (or latency), where the balance between them can be adjusted by the beam depth.
\section{Models}
\label{sec:model}
\subsection{Acoustic model}
\label{ssec:speech}
The acoustic model is a deep RNN trained with CTC \cite{graves2006connectionist}.
The network consists of two LSTM layers with 768 cells each, with a total of 12.2 M trainable parameters. The model is similar to the one in the previous work on end-to-end speech recognition with RNNs \cite{graves2014towards} except for a few major differences. In our case, the RNN is trained by online CTC \cite{hwang2015online} with very long training sequences that are generated by randomly concatenating several utterances. There is no need to reset the RNN states at the utterance boundary. This is necessary for ISR systems that run continuously with an infinite input audio stream. Also, our model has a unidirectional structure since bidirectional networks, which are usually employed for end-to-end speech recognition, are not suitable for low-latency speech recognition. This is because the backward layers in the bidirectional networks cannot be computed before the input utterance is finished.
The input of the network is a 40-dimensional log mel-frequency filterbank feature vector augmented with energy and their delta and double-delta values, resulting in a 123-dimensional vector. The feature vectors are extracted every 10 ms with a 25 ms Hamming window. The input vectors are standardized element-wise based on the statistics obtained from the training set. The output is a 31-dimensional vector that consists of the probabilities of 26 uppercase letters, 3 special characters, the end-of-sentence (EOS) symbol, and the CTC blank label.
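The delta and double-delta values above are standard regression-based differences of the static features. The following sketch illustrates how 41 static features (40 log-mel coefficients plus energy) expand to 123 dimensions; the window size $K = 2$ and boundary-frame padding are our assumptions, as the exact settings are not specified in the text:

```python
def deltas(frames, K=2):
    """Regression-based delta features. `frames` is a list of equal-length
    feature vectors; edges are handled by repeating the boundary frame."""
    T, dim = len(frames), len(frames[0])
    denom = 2 * sum(k * k for k in range(1, K + 1))
    out = []
    for t in range(T):
        d = [0.0] * dim
        for k in range(1, K + 1):
            lo = frames[max(t - k, 0)]
            hi = frames[min(t + k, T - 1)]
            for j in range(dim):
                d[j] += k * (hi[j] - lo[j]) / denom
        out.append(d)
    return out

# 41 static features (40 log-mel + energy) -> 123 after delta and double-delta.
static = [[float(t + j) for j in range(41)] for t in range(100)]
d1 = deltas(static)
d2 = deltas(d1)
full = [s + a + b for s, a, b in zip(static, d1, d2)]
assert len(full) == 100 and len(full[0]) == 123
```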
The networks are trained with stochastic gradient descent (SGD) with 8 parallel input streams on a GPU \cite{hwang2015single}. The networks are unrolled 2048 times and weight updates are performed every 1024 forward steps. The network performance is evaluated every 10 M training frames. The evaluation is performed on a total of 2 M frames from the development set. The learning rate starts from $1 \times 10^{-5}$ and is reduced by a factor of $10$ whenever the WER on the development set is not improved for 6 consecutive evaluations. The training ends when the learning rate drops below $1 \times 10^{-7}$.
We trained the networks on two training sets. The first one is the standard WSJ \texttt{SI-284} set and the second one, \texttt{SI-ALL}, is the set of all speaker-independent training utterances in the WSJ corpus. Note that the utterances with verbalized punctuation are removed from both training sets. Also, odd transcriptions are filtered out, which makes the final \texttt{SI-284} and \texttt{SI-ALL} sets contain roughly 71 and 167 hours of speech, respectively. WSJ \texttt{dev93} (Nov'93 20k development set) and \texttt{eval92} (Nov'92 20k evaluation set) are used as the development set and the evaluation set, respectively.
\subsection{Language model}
\label{ssec:lm}
\begin{figure}[!t]
\begin{framed}
\texttt{\footnotesize THREE ISSUES ADVANCED MICRO OF AMERICA THE ONLY WAY TO DIVERSIFY INTO TREATING MODERN ARMIES
\vspace{5pt}\\LOOKING AHEAD TO MR. LEYSEN WITH AN INTOLERABLE POP CUT WHEN AN ALL POWERFUL STUDENT SEEKS ITS CORE DRIVING UPJOHN STOVES
\vspace{5pt}\\AMERICAN EXPRESS HASN'T YET SWORED PARTICULARLY WITH THE RESTRUCTURING IS A COMMITMENT TO BUY POTENTIAL BUYERS IN THE OPEN MARKET}
\end{framed}
\caption{Example of character-level random text generation with the RNN LM.}
\label{fig:lmgen}
\end{figure}
An RNN language model (LM) \cite{mikolov2011extensions} is employed for the proposed ISR system since conventional statistical LMs such as $n$-gram back-off models are not suitable for character-level prediction, as they cannot make use of very long history windows. Specifically, the RNN LM has a deep LSTM network structure with two LSTM layers where each of them has 512 memory cells, resulting in total 3.2 M parameters.
The input of the RNN LM is a 30-dimensional vector, where the current label (character) is one-hot encoded. The output is also a 30-dimensional vector, which represents the probabilities of the next labels. Although the RNN LM is trained to predict the next character given only the current character, the past character history is internally stored inside the RNN and used for the prediction. It is well known that an RNN LM can remember contexts over very long time spans.
As with the acoustic RNNs, the RNN LM is trained on a very long text stream generated by concatenating randomly picked sentences with EOS labels inserted between them. The RNN LM is trained with the AdaDelta~\cite{zeiler2012adadelta} based SGD method for accelerated training and better annealing. The WSJ LM training text with non-verbalized punctuation, which contains about 215~M characters, is used for training the RNN LM. A randomly selected 1\% of the corpus is reserved for evaluation, on which the final bits-per-character (BPC) of the RNN LM is 1.167 (a character-level perplexity of 2.245).
Random sentences can be generated following the method described in \cite{sutskever2011generating}. Briefly, the next label is randomly sampled from the probability distribution at the current output of the RNN LM and fed back to the RNN in the next step. By iterating these steps, text can be generated sequentially as shown in \figurename~\ref{fig:lmgen}. The example shows that the RNN LM has learned linguistic structure as well as the spellings of frequently occurring words.
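The sampling loop can be sketched as follows; `lm_step`, which stands in for one forward step of the RNN LM, is a hypothetical interface assumed for illustration, not from the paper.

```python
import random

def generate_text(lm_step, init_state, length, labels, seed=0):
    """Generate text by repeatedly sampling the next label from the LM's
    output distribution and feeding it back as the next input."""
    rng = random.Random(seed)
    state, label, out = init_state, "<EOS>", []
    for _ in range(length):
        probs, state = lm_step(label, state)         # one RNN step
        label = rng.choices(labels, weights=probs, k=1)[0]
        out.append(label)
    return "".join(out)
```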
\section{Character-level beam search}
\label{sec:beam}
\subsection{Tree-based CTC beam search}
\begin{figure}[!t]
\centerline
{%
\includegraphics[width=2.3in]{tree}%
}
\caption{Beam search tree consisting of label nodes. The CTC blank label is not included.}
\label{fig:tree}
\end{figure}
\begin{figure}[!t]
\centerline
{%
\includegraphics[width=2.4in]{state}%
}
\caption{CTC state transition between two label nodes. If the two nodes have the same label, then a transition between the same CTC state is not allowed.}
\label{fig:state}
\end{figure}
\begin{figure}[!t]
\centerline
{%
\includegraphics[width=3.4in]{pruning}%
}
\caption{Example of depth-pruning with the beam depth of 2. The pruning is performed by selecting a new root node so that the new depth of the best hypothesis node becomes the beam depth. The shaded nodes indicate the original active nodes. Also, the path of the best hypothesis is drawn with thick strokes.}
\label{fig:pruning}
\end{figure}
Let $L$ be the set of labels without the CTC blank label. The label sequence $\mathbf{z}$ is a sequence of labels in $L$. The length of the label sequence $\mathbf{z}$ is less than or equal to the number of input frames. The objective of the beam search decoding is to find the label sequence that has the maximum posterior probability given the input features from time 1 to $t$ generated by the acoustic RNNs, that is,
\begin{align}
\mathbf{z}_{\max}=\arg\max_{\mathbf{z}} P(\mathbf{z}|x_{1:t}), \label{eq:argmax_pi}
\end{align}
where $x_{1:t}$ is the input features from time 1 to $t$.
However, the output of a CTC-trained RNN includes one additional blank label. Let $L'$ be the set of labels (or CTC states) including the CTC blank label, and let the path ${\pi}_{t}^{(i)}$ be a sequence of labels in $L'$ from time 1 to $t$; the length of ${\pi}_{t}^{(i)}$ is exactly $t$. By the definition of CTC, every ${\pi}$ can be reduced to a corresponding ${\mathbf{z}}$. For example, the path ``aab-c--a'' corresponds to ${\mathbf{z}}$ being ``abca'', where ``-'' is the blank label.
There can be many paths, ${\pi}_{t}^{(i)}$, that can be reduced into the same ${\mathbf{z}}$. Let $\mathcal{F}(\cdot)$ be a function that maps a path to the corresponding label sequence, that is, $\mathcal{F}({\pi}_{t}^{(i)}) = {\mathbf{z}}$, then the posterior probability in (\ref{eq:argmax_pi}) becomes,
\begin{align}
P(\mathbf{z}|x_{1:t})=\sum_{\{\forall i | \mathcal{F}({\pi}_{t}^{(i)}) = {\mathbf{z}}\}}P({\pi}_{t}^{(i)}|x_{1:t}).
\label{eq:pathagg}
\end{align}
Therefore, if two different paths ${\pi}_{t}^{(j)}$ and ${\pi}_{t}^{(k)}$ in the decoding network map to the same ${\mathbf{z}}$, they can be merged by summing their probabilities.
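As a sketch, the mapping $\mathcal{F}(\cdot)$ can be implemented by merging repeated labels and then removing blanks:

```python
def ctc_collapse(path, blank="-"):
    """Map a CTC path to its label sequence: drop consecutive repeats of
    the same label, then drop blank labels (the function F above)."""
    out, prev = [], None
    for label in path:
        if label != prev and label != blank:
            out.append(label)
        prev = label
    return "".join(out)
```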
For the beam search, we first represent the lattice with a tree-based structure in which each node carries one of the labels in $L$, as depicted in \figurename~\ref{fig:tree}. Backtracking from any node then generates a unique label sequence ${\mathbf{z}}$. To deal with CTC state transitions, we need a state-based network represented with the CTC states $L'$. As shown in \figurename~\ref{fig:state}, this is easily done by expanding each tree node, whose label is in $L$, into two CTC states: the corresponding label in $L'$ followed by the blank CTC label. Since the label-level ($L$) search network is a tree, two state-level ($L'$) paths with different label sequences never meet. This simplifies the problem, since there is no interaction between different sequence labelings (hypotheses) and (\ref{eq:pathagg}) is the only equation we need to consider.
As proposed in \cite{hannun2014first, maas2015lexicon}, external language models can be integrated by modifying the posterior probability term in (\ref{eq:argmax_pi}) into:
\begin{align}
\mathrm{log}(P(\mathbf{z}|x_{1:t})) &= \mathrm{log}(P_{\mathrm{CTC}}(\mathbf{z}|x_{1:t})) \\
&\hphantom{= } + \alpha \mathrm{log}(P_{\mathrm{LM}}(\mathbf{z})) + \beta |\mathbf{z}|, \nonumber
\end{align}
where $\alpha$ is the LM weight and $\beta$ is the insertion bonus. This modification is applied by adding the $\alpha$ and $\beta$ terms to the log probability of the destination state whenever a state transition between two different label nodes occurs.
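The combined hypothesis score can be sketched in the log domain as:

```python
def hypothesis_score(log_p_ctc, log_p_lm, seq_len, alpha, beta):
    """Combined log-score per the equation above:
    log P_CTC + alpha * log P_LM + beta * |z|."""
    return log_p_ctc + alpha * log_p_lm + beta * seq_len
```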
The probability of the next label is computed using the RNN LM when a new active label node is added to the beam search tree. For this, the RNN LM context (hidden activations) is copied from the parent node to the child node and the RNN LM processes the new label of the child node with the copied context. Therefore, each active node has its own RNN LM context.
\subsection{Pruning}
Pruning of the search tree follows the standard beam search approach: at each frame, only the active nodes with the top $N$ hypotheses and their ancestor nodes remain alive after pruning with a beam width of $N$. However, this standard pruning, or \emph{width-pruning}, cannot prevent the tree from growing indefinitely, especially when the input speech is very long. This gradually degrades the efficiency of the beam search on recent nodes, since more and more hypotheses are wasted maintaining the old part of the lattice that is already outside the context range of the RNN LMs.
To remedy this issue, we propose an additional pruning method called \emph{depth-pruning}. The procedure is as follows. First, find the $M$-th ancestor of the node with the best hypothesis, where $M$ is the beam depth. Then, the ancestor node becomes a new root node. The pruning is performed by removing the nodes that are not descendants of the new root node. In this way, a beam can be better utilized for recent hypotheses rather than older ones. \figurename~\ref{fig:pruning} shows an example of depth-pruning with the beam depth of 2. Note that the depth of some nodes can be larger than the beam depth. In the following experiments, depth-pruning is performed every 20 frames.
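The depth-pruning procedure can be sketched with parent-pointer tree nodes; this is an illustrative reimplementation under assumed data structures, not the authors' code.

```python
class Node:
    """Minimal search-tree node with a parent pointer (illustrative)."""
    def __init__(self, label, parent=None):
        self.label, self.parent = label, parent

def depth_prune(best_node, beam_depth, active_nodes):
    """Depth-pruning as described above: the beam_depth-th ancestor of the
    best hypothesis becomes the new root, and active nodes that are not
    its descendants are dropped."""
    root = best_node
    for _ in range(beam_depth):
        if root.parent is not None:
            root = root.parent

    def descends_from(node, ancestor):
        while node is not None:
            if node is ancestor:
                return True
            node = node.parent
        return False

    return root, [n for n in active_nodes if descends_from(n, root)]
```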
\section{Experiments}
\label{sec:evaluation}
\begin{figure}[t]
\centering
\centerline{%
\begin{tikzpicture}
\begin{axis}
[
width=\columnwidth,
height=0.6\columnwidth,
compat=1.3,
xmin=0.0,
ymin=8,
xmax=100,
ymax=12,
label style={font=\footnotesize},
xlabel={Beam depth (characters)},
ylabel={WER (\%)},
xlabel shift=-2pt,
ylabel shift=-2pt,
legend columns=2,
legend style={
font=\scriptsize,at={(0.97,0.95)},anchor=north east,
/tikz/column 2/.style={
column sep=5pt}
},
tick label style={font=\scriptsize},
ymajorgrids,
minor x tick num=4,
minor y tick num=4,
log basis x={10},
xtick pos=both,
xtick align=inside,
major tick style={line width=0.010cm, black},
major tick length=0.10cm
]%
\legend{{SI-284, BW=128},{SI-284, BW=512},{SI-ALL, BW=128},{SI-ALL, BW=512}}
\addplot[color=black, solid, mark=square, mark size=1.5, mark repeat=1,mark options=solid]
file{data/online_SI284_BW128.txt};
\addplot[color=orange, solid, mark=triangle, mark size=2, mark repeat=1,mark options=solid]
file{data/online_SI284_BW512.txt};
\addplot[color=cyan, solid, mark=o, mark size=1.5, mark repeat=1,mark options=solid]
file{data/online_SIALL_BW128.txt};
\addplot[color=magenta, solid, mark=x, mark size=2, mark repeat=1,mark options=solid]
file{data/online_SIALL_BW512.txt};
\end{axis}%
\end{tikzpicture}%
}%
\caption{WER of the proposed online decoding on the evaluation set with respect to the beam depth. Experiments are conducted with two acoustic RNNs trained on \texttt{SI-284} and \texttt{SI-ALL} and beam search is performed with the beam width (BW) of 128 and 512.}
\label{fig:online}
\end{figure}
\begin{table}[!t]
\renewcommand{\arraystretch}{1.0}
\caption{CER / WER in percent on the evaluation set with online depth-pruning and offline sentence-wise decoding. The error rates are reported with two acoustic RNNs trained on \texttt{SI-284} (71~hrs) and \texttt{SI-ALL} (167~hrs).}
\label{tbl:method}
\centering
\vspace{2 mm}
\begin{tabular}{l c c c}
\hline
Method & Beam width & \texttt{SI-284} & \texttt{SI-ALL} \\
\hline\hline
Online (no LM) & 512 & 10.96 / 38.37 & \hphantom{0}9.66 / 35.44\\
Online & 128 & 4.25 / 9.87 & 3.56 / 8.56\\
Online & 512 & 3.80 / \bf{8.90} & 3.39 / \bf{8.06}\\
Sentence-wise & 128 & \hphantom{0}4.46 / 10.30 & 3.63 / 8.84\\
Sentence-wise & 512 & 4.04 / 9.45 & 3.38 / 8.28\\
\hline
\end{tabular}
\end{table}
\begin{table}[!t]
\renewcommand{\arraystretch}{1.0}
\caption{Comparison of WERs with other end-to-end speech recognizers in the literature. For reference, WERs of phoneme based GMM/DNN-HMM systems are also reported. All systems are trained with \texttt{SI-284} and evaluated on \texttt{eval92}.}
\label{tbl:baseline}
\centering
\vspace{2 mm}
\begin{tabular}{l l l}
\hline
System & Model & WER \\
\hline\hline
Proposed ISR & Uni. CTC + Char. RNN LM & 8.90\% \\
Graves and Jaitly \cite{graves2014towards} & CTC + Trigram (extended) & \hphantom{0}8.7\% \\
Miao \textit{et al}. \cite{miao2015eesen} & CTC + Trigram (extended) & 7.34\% \\
Miao \textit{et al}. \cite{miao2015eesen} & CTC + Trigram & 9.07\% \\
Hannun \textit{et al}. \cite{hannun2014first} & CTC + Bigram & 14.1\% \\
Bahdanau \textit{et al}. \cite{bahdanau2015end} & Encoder-decoder + Trigram & 11.3\% \\
\hline
Woodland \textit{et al}. \cite{woodland1994large}& GMM-HMM + Trigram & 9.46\% \\
Miao \textit{et al}. \cite{miao2015eesen} & DNN-HMM + Trigram & 7.14\% \\
\hline
\end{tabular}
\end{table}
\begin{figure*}[!t]
\begin{framed}
\footnotesize
100: \texttt{HE'S\_THE\_}\\
150: \texttt{HE'S\_THE\_ONLY\_GU}\\
200: \texttt{HE'S\_THE\_ONLY\_GUY\_WHO\_COULD\_S}\\
250: \texttt{HE'S\_THE\_ONLY\_GUY\_WHO\_COULD\_SHOW\_UP\_IN\_THE\_}\\
300: \texttt{...IN\_THE\_PLAZA\_I}\\
350: \texttt{...IN\_THE\_PLAZA\_IN\_ROCK\_R}\\
400: \texttt{...IN\_THE\_PLAZA\_IN\_\textbf{DRAW}\_\textbf{RATE}\_OF\_SEVE}\\
450: \texttt{...IN\_THE\_PLAZA\_IN\_DRAW\_RATE\_OF\_SEVENTY\_FIVE\_THO}\\
500: \texttt{...IN\_THE\_PLAZA\_\textbf{AND}\_DRAW\_\textbf{CROWD}\_OF\_SEVENTY\_FIVE\_THOUSAND\_\textbf{PEO}}\\
550: \texttt{...IN\_THE\_PLAZA\_AND\_DRAW\_CROWD\_OF\_SEVENTY\_FIVE\_THOUSAND\_PEOPLE\_S}\\
600: \texttt{...IN\_THE\_PLAZA\_AND\_DRAW\_CROWD\_OF\_SEVENTY\_FIVE\_THOUSAND\_PEOPLE\_SAYS\_ONE\_LA}\\
650: \texttt{...IN\_THE\_PLAZA\_AND\_DRAW\_CROWD\_OF\_SEVENTY\_FIVE\_THOUSAND\_PEOPLE\_SAYS\_ONE\_LATIN\_DIPLOM}\\
700: \texttt{...IN\_THE\_PLAZA\_AND\_DRAW\_CROWD\_OF\_SEVENTY\_FIVE\_THOUSAND\_PEOPLE\_SAYS\_ONE\_LATIN\_DIPLOMAT} \vspace{3pt} \\
Ground truth: \texttt{HE'S\_THE\_ONLY\_GUY\_WHO\_COULD\_SHOW\_UP\_IN\_THE\_PLAZA\_AND\_DRAW\_}\\
\hphantom{Ground truth: }\texttt{\textbf{A\_}CROWD\_OF\_SEVENTY\_FIVE\_THOUSAND\_PEOPLE\_SAYS\_ONE\_LATIN\_DIPLOMAT}
\end{framed}
\caption{Example of ISR partial results. The best hypothesis is shown at every 50 frames (500 ms). The word ``ROCK'' is corrected to ``DRAW'' after hearing ``RATE'', and ``IN DRAW RATE'' to ``AND DRAW CROWD'' while hearing ``PEOPLE''.}
\label{fig:incremental}
\end{figure*}
The proposed ISR system is evaluated on a single 42-minute speech stream that is formed by concatenating all 333 utterances in the evaluation set, \texttt{eval92} (WSJ Nov'92 20k evaluation set). We use $\alpha=2.0$ and $\beta=1.5$ for the system trained with \texttt{SI-284}, and $\alpha=1.5$ and $\beta=2.0$ for the other one trained with \texttt{SI-ALL}.
The effects of the beam depth and width on the final WER are examined in \figurename~\ref{fig:online}. The gap between beam widths of 128 and 512 is roughly 0.5\% to 1\% WER. However, there was little further improvement when the beam width was increased from 512 to 2048 in our preliminary experiments.
The best performing beam depths are 50 and 30 for the \texttt{SI-284} and \texttt{SI-ALL} systems, respectively. This means the \texttt{SI-ALL} system can recognize speech with lower latency than the \texttt{SI-284} system. We believe this is because the acoustic model of the \texttt{SI-ALL} system embeds a stronger implicit language model thanks to the increased amount of training data, and can therefore make decisions more precisely without relying heavily on the external language model. The character error rate (CER) and WER with the optimal beam depths are reported in \tablename~\ref{tbl:method}. For comparison, we also report sentence-wise offline decoding results without depth-pruning.
The proposed ISR system is compared with other end-to-end word-level speech recognition systems in \tablename~\ref{tbl:baseline}. The other systems perform sentence-wise offline decoding with bidirectional RNNs. The best result is achieved by Miao \textit{et al}. \cite{miao2015eesen} with a CTC-trained deep bidirectional LSTM network and a retrained trigram LM with an extended vocabulary. The systems using the original trigram model provided with the WSJ corpus perform worse than our ISR system with a character-level RNN LM, whereas our system is outperformed by the systems with extended trigram models. However, a more precise comparison of the decoding stages would require employing the same CTC model.
\figurename~\ref{fig:incremental} shows an incremental speech recognition result with the proposed ISR system, where the best hypothesis is reported every 50 frames (500 ms). The past best result can be corrected by making use of the additional speech input. For example, the word ``ROCK'' is changed to ``DRAW'' at frame 450 after hearing the word ``RATE''. Moreover, the correction of ``IN DRAW RATE'' to ``AND DRAW CROWD'' while hearing the word ``PEOPLE'' at frame 500 is good evidence that long-term context is also considered.
\section{Concluding remarks}
\label{sec:conclusion}
A character-level incremental speech recognizer is proposed and analyzed throughout the paper. The proposed system combines a CTC-trained RNN with a character-level RNN LM through tree-based beam search decoding. For online decoding of very long input speech, depth-pruning is proposed to prevent indefinite growth of the search tree. When the proposed model is trained on WSJ \texttt{SI-284}, a WER of 8.90\% is achieved on the very long speech stream formed by concatenating all utterances in the WSJ \texttt{eval92} evaluation set. The incremental recognition results show that the character-level RNN LM can learn dependencies between two words even when they are five words apart, which is hard to capture with conventional $n$-gram back-off language models.
Note that the proposed system only requires speech and text corpora for training; no external lexicon or senone modeling is needed, which is a huge advantage. Moreover, OOV words and infrequent words such as names of places or people are expected to be transcribed as they are pronounced.
\pagebreak
\bibliographystyle{IEEEbib}
\section{Introduction}%
In recent years, deep reinforcement learning (RL) has been applied very successfully to hard control tasks like playing video games \cite{Vinyals2019, Berner2019, Mnih2015, Silver2017}, acquiring locomotion skills \cite{Schulman2015a, Haarnoja2018a, Barth-Maron2018} and, robotic manipulation tasks \cite{Levine2016, Andrychowicz2017, Akkaya2019}.
However, learning these tasks often requires enormous amounts of environment interactions, which makes it impractical for many applications.
For example, to learn to manipulate a Rubik's cube with a robotic hand, OpenAI reported a cumulative experience of 14,000 years of simulated interactions \cite{Akkaya2019}.
In contrast, humans can manipulate the cube almost immediately, as they have already learned how to manipulate objects in general.
Meta-reinforcement learning (meta-RL) aims to design an efficient reinforcement learning algorithm to mimic the human learning ability that learns new tasks quickly \cite{Duan2016, Wang2016, Botvinick2019}.
Meta-RL algorithms achieve this by conditioning the policy on past experience and inferring the task information based on the received rewards \cite{Rakelly2019}.
Unfortunately, meta-RL algorithms perform poorly on diverse sets of tasks \cite{Yu2019a}, since they solely rely on rewards to communicate the task to the agent, which is especially problematic when the rewards are sparse or indistinguishable among similar tasks.
Therefore, providing additional information about the task to the agent offers a promising way to help the learning of new tasks.
Natural language provides a rich and intuitive way for humans and robots to interact with each other, due to the possibility of referring to abstract concepts.
When a human worker is given a new task, they are usually told what to do by language, which specifies the task goal or the required skill.
Therefore, the worker will not have to try every possible action sequence to figure out the goal, but purposefully aim at solving the specified task.
Although language is the most intuitive way for humans to understand tasks, the topic of controlling a robot using language instructions is rather new and poorly understood.
\begin{figure}[t!]
\centering
\begin{tikzpicture}[scale=1]
\node[inner sep=0pt,opacity=1] (base) at (0,0)
{\includegraphics[width=250pt]{metaworld.pdf}};
\begin{scope}[xshift=0cm, yshift=0cm]
\node[text=red!60, font=\footnotesize] at (-3.5, 0.7) {Basketball};
\node[text=red!60, font=\footnotesize] at (-1.8, 0.7) {Button Press};
\node[text=red!60, font=\footnotesize] at (-0.1, 0.7) {Dial Turn};
\node[text=red!60, font=\footnotesize] at (1.7, 0.7) {Drawer Close};
\node[text=red!60, font=\footnotesize] at (3.5, 0.7) {Peg Insert};
\node[text=red!60, font=\footnotesize] at (-3.5, -0.7) {Pick Place};
\node[text=red!60, font=\footnotesize] at (-1.8, -0.7) {Push};
\node[text=red!60, font=\footnotesize] at (-0.1, -0.7) {Reach};
\node[text=red!60, font=\footnotesize] at (1.7, -0.7) {Sweep Into};
\node[text=red!60, font=\footnotesize] at (3.5, -0.7) {Window Open};
\node[text=blue!60, font=\footnotesize] at (-3.5, -2.2) {Door Close};
\node[text=blue!60, font=\footnotesize] at (-1.8, -2.2) {Drawer Open};
\node[text=blue!60, font=\footnotesize] at (-0.1, -2.2) {Lever Pull};
\node[text=blue!60, font=\footnotesize] at (1.7, -2.2) {Shelf Place};
\node[text=blue!60, font=\footnotesize] at (3.5, -2.2) {Sweep};
\end{scope}
\end{tikzpicture}
\vspace{-0.5cm}
\caption{A visualization of the ML10 benchmark from Meta-World.
The first two rows show the training tasks and the last row shows the testing tasks.
The figure is adapted from \cite{Yu2019a}.}
\label{fig_metaworld}
\end{figure}
\begin{figure*}[!t]
\drawoverview
\caption{Overview of our algorithm. Actions $a_t$ are sampled from the distribution $\pi(a_t|o_{0:t})$. The dotted lines indicate how the memory segment in the GTrXL influences the hidden states. The memory segment before the first observation of the episode is initialized as a sequence of zero vectors. States with a double circle are terminal states. In our experiments we use a larger size of the GTrXL such that the agent can still use the observations from the first instruction phase to compute the last few actions of an episode.
}
\label{fig:overview}
\end{figure*}
With the fast development of algorithms in natural language processing, more and more studies that attempt to control robots via language instructions are beginning to emerge.
Shao et al. proposed the imitation learning algorithm Concept2Robot \cite{Shao2020}, which enables a robot to learn manipulation skills from language instructions and the visual appearance of the task in two stages.
In the first stage, Concept2Robot uses a video-based action classifier to generate a prediction score for the corresponding target task, which serves as a proxy reward to train a single-task policy.
In the second stage, a multi-task policy is trained through imitation learning to imitate all the single-task policies.
Stepputtis et al. \cite{NEURIPS2020_9909794d} introduced an imitation learning model that directly maps labeled language instructions and visual observations to manipulation skills.
Brucker et al. \cite{bucker2022reshaping} proposed a flexible language based interface for human-robot collaboration, which allows a user to reshape existing trajectories for an autonomous agent.
On the basis of imitating a large number of existing trajectories, the agent can generalize and adapt to new trajectories guided by the language.
Lynch et al. \cite{pmlr-v100-lynch20a} proposed another algorithm that learns from existing expert demonstrations and adapts to new tasks by specifying goals through multi-modal information such as language or images.
Clearly, most existing algorithms learn language-conditioned skills via imitation learning, which requires large numbers of expert trajectories.
This again relies heavily on hand-crafted or engineered data and forgoes the advantage of the trial-and-error learning paradigm, in which the agent explores and learns the task by itself.
To this end, we establish a meta-RL algorithm that addresses the challenge of learning skills with language instructions in multiple manipulation tasks.
We introduce the \textbf{M}eta re\textbf{I}nforcement \textbf{L}earning algorithm using \textbf{L}anguage \textbf{I}nstructi\textbf{ON} (\textbf{MILLION}), which mimics the human-like learning manner and greatly improves the asymptotic performance in the challenging benchmark Meta-World.
We base our method on three concepts.
\begin{itemize}
\item First, we propose a meta-RL learning paradigm that contains an instruction phase and a trial phase.
In the instruction phase, the language description of the task is given to the agent, so that it can understand the goal of the task.
In the trial phase, with the stored task information, the agent can explore and attempt to solve the task as standard reinforcement learning.
\item Second, we build the architecture of our algorithm via three functional modules.
The language instruction is encoded by a pre-trained language module and then taken as an input for a transformer module, where the information is stored and processed.
The on-policy RL algorithm V-MPO is used to update the policy network and the value network.
\item Third, experimental results demonstrate that MILLION significantly outperforms state-of-the-art algorithms on the challenging robotic manipulation benchmark Meta-World \cite{Yu2019a} (Figure \ref{fig_metaworld}) in terms of training and testing success rates.
Previous works only achieve less than $50\%$ success rate on the training tasks and less than $40\%$ on the testing tasks, while MILLION achieves almost perfect performance on the training tasks and can solve about half of the testing tasks.
\end{itemize}
\section{Methodology}
In this work, our goal is to propose a method that can provide the task information to the agent via instructions and learns to solve the task using trial-and-error RL algorithms.
First, our policy network should be able to accept free-form language instructions of tasks as the input.
Second, our method should use such instructions to communicate to the agent what the task entails, instead of relying on extensive numbers of expert trajectories as in other imitation-learning-based methods.
Third, our method should enable the agent to successfully master diverse skills across broad tasks during training and adapt to unseen tasks during testing.
\subsection{Overview}
The architecture of MILLION is shown in Figure \ref{fig:overview} and briefly explained as follows.
\begin{itemize}
\item First, an episode starts with an instruction phase, during which the language instructions are encoded as the observation using the pre-trained language model GloVe \cite{pennington2014glove} and fed into the transformer module.
The action generated by the policy network and the reward collected from the environment are simply ignored, since there is no interaction during the instruction phase.
\item Second, after the instruction phase, a trial phase is started, during which the agent interacts with the environment by following the task's Markov decision process (MDP).
If the agent solves the task successfully, the environment will be reset and another trial phase starts.
In the case of an unsuccessful trial, another instruction phase will start right after the trial phase, which resembles a real world scenario where a human operator might try a slightly different instruction to communicate the task to the agent.
The whole episode will be terminated after a fixed number of trial phases.
\item Finally, a new task is sampled and the same procedure will be executed.
\end{itemize}
\subsection{Language Instruction Phase}
\textcolor{black}{We consider the problem of learning an instruction-conditioned control policy $\pi(a|s, \mathcal{I}(\tau))$, where $\mathcal{I}$ represents language instructions about task $\tau$.
$a$ is the selected action conditioned on the observation $s$.
The instructions are encoded into a sequence of vectors $[w_1, w_2, \ldots , w_n] = \mathcal{I}(\tau)$, where each $w_i \in \mathbb{R}^{d}$ is a word embedding of dimension $d$ and $n \in \mathbb{N}$ is the number of words.
We assume two phases in one training episode, namely, the \textbf{instruction phase} during which the task information is provided to the agent and the \textbf{trial phase} during which the agent interacts with the environment.
The additional task information $\mathcal{I}(\tau)$ is only given to the agent in instruction phases, which can be expressed as
\begin{equation}
\begin{cases}
\mathcal{I}(\tau) = [w_1, w_2, \ldots , w_n]_{1 \times n}& \text{, if } \textbf{instruction phase} \\
\mathcal{I}(\tau) = [0, \ldots , 0]_{1 \times n} & \text{, if } \textbf{trial phase}
\end{cases}
\end{equation}
During the instruction phase, the agent receives the encoded vectors in sequence and does not interact with the environment; thus, the actions generated by the policy and the environment rewards are ignored. During the trial phase, the agent does not receive new instructions but interacts with the environment via actions and rewards; therefore, $\mathcal{I}(\tau)$ is set to zero.
}
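The case equation above reduces to a simple gating function, sketched here for illustration (the interface is assumed, not from the paper):

```python
def instruction_signal(word_vecs, t, phase):
    """Return I(tau) at step t: the t-th word embedding during the
    instruction phase, and a zero vector during the trial phase."""
    if phase == "instruction":
        return word_vecs[t]
    return [0.0] * len(word_vecs[0])
```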
\begin{figure}[!t]
\centering
\drawlanginstruction
\caption{Example of a language instruction during the instruction phase.
}
\label{fig:language_example}
\end{figure}
We provide free-form language instructions as the task specification, e.g., ``open the drawer'' for the drawer opening task and ``press the button'' for the button pressing task.
For every task, we create a set of language instructions $\mathit{l}$ with similar key words.
Some examples of the language instructions that we use for the ML10 benchmark are listed in Table \ref{tab:language_configuration}.
At the beginning of the instruction phase, a new language instruction will be sampled for the current task $\tau$.
To capture the information represented in the natural-language command, we first use the GloVe algorithm \cite{pennington2014glove} to convert the language instruction $\mathit{l}$ into a sequence of fixed-size vectors $W = [w_1, \ldots, w_T] = \mathcal{I}(l)$ with $w_i \in \mathbb{R}^{50}$, encoding up to $T$ words with their respective 50-dimensional word embeddings.
This means that, at time step $t$ of the instruction phase, the observation will be $w_t$. After $T$ time steps the trial phase will start.
An example of the instruction phase is visualized in Figure \ref{fig:language_example}.
It should be noted that, to keep the observation length the same between the instruction phase and the trial phase, a zero vector is concatenated to the joint positions.
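A minimal sketch of this fixed-length observation layout is given below; the exact ordering of the embedding slot and the joint-position slot is an assumption for illustration.

```python
def build_observation(phase, word_vec, joint_pos, embed_dim=50):
    """Assemble a fixed-length observation. During the instruction phase
    the embedding slot carries the GloVe vector and the joint-position
    slot is zero-padded; during the trial phase it is the other way
    around. (Layout assumed for illustration.)"""
    if phase == "instruction":
        return list(word_vec) + [0.0] * len(joint_pos)
    return [0.0] * embed_dim + list(joint_pos)
```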
\begin{figure}[!t]
\drawtrialphase
\caption{The visualization of the phase sequence. Each episode starts with the instruction phase and follows with the trial phase.
In case a trial is not able to solve the task, a new instruction phase will be added to enhance the understanding of the task.
}
\label{fig:trial_phase}
\end{figure}
\begin{table}[!b]
\centering
\caption{Examples of Language instructions for ML10}
\label{tab:language_configuration}
\resizebox{0.95\columnwidth}{!}{
\begin{tabular}{ll}
\toprule
\textbf{Task} & \textbf{Language Instructions} \\
\hline
reach & reach to goal\_pos, reach goal\_pos \\
\hline
\multirow{2}{*}{\makecell{push}} & {push goal\_pos, push to goal\_pos} \\
& {push object to goal\_pos} \\
\hline
\multirow{2}{*}{\makecell{pick-place}} & {pick and place at goal\_pos} \\
& {pick object and place at goal\_pos} \\
\hline
door-open & pull goal\_pos, open door, pull to goal\_pos \\
\hline
\multirow{2}{*}{\makecell{drawer-open}} & {pull goal\_pos, pull to goal\_pos} \\
& {pull back to goal\_pos} \\
\hline
\multirow{2}{*}{\makecell{drawer-close}} & {push goal\_pos, push to goal\_pos} \\
& {push forward to goal\_pos} \\
\hline
\multirow{2}{*}{\makecell{button-press-topdown}} & {push object down to goal\_pos, press button} \\
& {press down, press button down} \\
\bottomrule
\end{tabular}}
\centering
\end{table}
\subsection{Trial Phase}
The trial phase is defined as the environment interaction steps between two resets of the environment, following the task's MDP.
The action policy $\Pi$ and the value function $V$ are updated by maximizing the accumulated reward in the trial phase.
A reset of the environment is triggered by one of two conditions: reaching a terminal state or reaching the maximum number of time steps.
As illustrated in Figure \ref{fig:trial_phase}, we start each episode with an instruction phase and end the episode after three trial phases.
In the event of a successful trial in which the agent solves the task, we continue the training with a new trial phase.
In the event of an unsuccessful trial phase, we continue the training with a new instruction phase in which a similar language instruction is given to the agent.
Following the same procedure, one trial phase will be initiated after each instruction phase.
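The phase sequencing described above reduces to a simple transition rule, sketched here:

```python
def next_phase(current_phase, trial_succeeded):
    """Phase transition rule: every instruction phase is followed by a
    trial; a successful trial is followed by another trial, while a
    failed trial triggers a new instruction phase."""
    if current_phase == "instruction":
        return "trial"
    return "trial" if trial_succeeded else "instruction"
```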
\subsection{Reward Normalization}
In multi-task RL or meta-RL, one policy is trained to solve multiple tasks whose rewards typically have very different magnitudes; for instance, in Meta-World (version 1), the task \textit{press-button-v1} has a reward varying from $0$ to $10,000$ while \textit{put-on-shelf-v1} has a reward varying from $0$ to $10$.
This makes learning extremely difficult and inefficient.
A widely used solution is to clip the reward to a specified range. Alternatively, Preserving Outputs Precisely while Adaptively Rescaling Targets (Pop-Art) \cite{VanHasselt2016} can be used to normalize the learning targets of the value function for every task individually.
Inspired by Pop-Art, we also update the value function of our network as follows.
The value function is used to predict the reward return $G_t$ and is approximated as
\begin{equation}
f_{\theta, \sigma, \mu, W, b}(x) = \sigma \left( W h_{\theta}(x) + b \right) + \mu,
\end{equation}
where $h_{\theta}$ is the neural network with weights $\theta$, which do not include $W$ and $b$.
$W$ and $b$ are parameters that normalize the prediction of the network.
$\mu$ and $\sigma$ are used to track the mean and standard deviation of the returns $G_t$.
Then, $\mu$ and $\sigma$ are updated as
\begin{equation}
\begin{cases}
\mu_t = (1 - \beta) \mu_{t-1} + \beta G_t &\\
\nu_t = (1 - \beta) \nu_{t-1} + \beta (G_t)^2 &\\
\sigma_t = \sqrt{\nu_t - \mu_t^2} &
\end{cases}
\end{equation}
where $\beta$ is a training hyper-parameter.
To keep the learning stationary, we update $W$ and $b$ as
\begin{equation}
\begin{cases}
W_t = \frac{\sigma_{t-1}}{\sigma_t} W_{t-1} &\\
b_t = \frac{\sigma_{t-1} b_{t-1} + \mu_{t-1} - \mu_t}{\sigma_t} &
\end{cases}.
\end{equation}
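As an illustration, the Pop-Art statistics and the compensating rescaling of $W$ and $b$ can be sketched as follows (a minimal sketch; the class and variable names are ours, not those of our implementation). The key property is that the unnormalized prediction $\sigma_t (W_t h + b_t) + \mu_t$ is unchanged by the update:

```python
# Minimal Pop-Art sketch (illustrative; names are ours, not from the paper's code).
class PopArt:
    def __init__(self, beta=1e-4):
        self.beta = beta
        self.mu, self.nu = 0.0, 1.0   # running first and second moments of returns
        self.W, self.b = 1.0, 0.0     # scale/shift of the value head

    @property
    def sigma(self):
        return max((self.nu - self.mu ** 2) ** 0.5, 1e-6)

    def update(self, G):
        """Track return statistics, then rescale W and b so that the
        unnormalized prediction sigma*(W*h + b) + mu is preserved."""
        old_mu, old_sigma = self.mu, self.sigma
        self.mu = (1 - self.beta) * self.mu + self.beta * G
        self.nu = (1 - self.beta) * self.nu + self.beta * G ** 2
        new_sigma = self.sigma
        self.W = old_sigma / new_sigma * self.W
        self.b = (old_sigma * self.b + old_mu - self.mu) / new_sigma

    def unnormalized(self, h):
        # Denormalized value prediction from the network torso output h.
        return self.sigma * (self.W * h + self.b) + self.mu
```

One can verify directly that $\sigma_t(W_t h + b_t) + \mu_t = \sigma_{t-1}(W_{t-1} h + b_{t-1}) + \mu_{t-1}$ for any $h$, i.e., the outputs are preserved precisely while the targets are rescaled.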
\begin{algorithm}[!t]
\caption{MILLION}
\label{algo_million}
\begin{algorithmic}[1]
\State Initialize policy $\pi_\theta(a | s)$ and state-value function $V_\phi^\pi(s)$
\State Initialize FIFO buffer $\tilde{B}$ with capacity $b \cdot T_{target}$
\While{not converged}
\State Update $\pi_{\theta_{old}} \leftarrow \pi_\theta$
\For {learning step $l=1,\dots,T_{target}$}
\For {trajectory number $i=1,\dots,b$}
\State Select instruction $I(\tau)$ for random task $\tau$
\State Encode $I(\tau)$ in the instruction phase
\State Do MDPs in trial phase with $\pi_{\theta_{old}}(a|s,I(\tau))$ to generate trajectory $\Omega_{\tau}$, and add $\Omega_{\tau}$ to $\tilde{B}$
\EndFor
\State $B_{batch}$ = Sample $b$ trajectories from $\tilde{B}$
\State Apply reward normalization to $B_{batch}$
\State Compute loss $\mathcal{L}(\phi, \theta, \eta, \alpha_\mu,\alpha_\Sigma)$ from $B_{batch}$
\State Update $\phi, \theta, \eta, \alpha_\mu, \alpha_\Sigma$ with gradient step
\EndFor
\EndWhile
\end{algorithmic}
\end{algorithm}
\subsection{V-MPO with Improved Sample Efficiency}
The policy is trained using V-MPO, an on-policy maximum a posteriori policy optimization algorithm.
V-MPO is sample inefficient: it requires many environment interactions during training.
We improve the sample efficiency by slightly modifying the V-MPO algorithm to reuse sampled environment interactions more often.
The original V-MPO algorithm uses every environment trajectory only for one gradient update. We change this by keeping a small FIFO buffer with the last $T_{target} \times b$ trajectories, where $b$ is the batch size for the gradient updates.
Then we randomly sample batches from this buffer for gradient updates.
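The buffer logic can be sketched as follows (an illustrative sketch; the class and variable names are ours):

```python
# Sketch of the small FIFO trajectory buffer used to reuse V-MPO samples.
import random
from collections import deque

class TrajectoryBuffer:
    def __init__(self, batch_size, t_target):
        # Capacity b * T_target: the oldest trajectories drop out automatically.
        self.batch_size = batch_size
        self.buf = deque(maxlen=batch_size * t_target)

    def add(self, trajectory):
        self.buf.append(trajectory)

    def sample_batch(self):
        # Each gradient step draws a random batch, so a trajectory can
        # contribute to up to T_target updates instead of exactly one.
        return random.sample(list(self.buf), self.batch_size)
```

Because the buffer is small and first-in-first-out, the sampled batches stay close to the current policy, which keeps the on-policy assumption of V-MPO approximately valid.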
The overall MILLION algorithm is given in Algorithm \ref{algo_million}.
\section{Experiments}
\label{chapter:experiments}
In this section, we evaluate the performance of our method on the well-known Meta-World benchmark that consists of $50$ complex manipulation tasks. First, we apply MILLION to the ML10 benchmark to compare the performance against state-of-the-art meta-RL algorithms in terms of training and testing success rate.
Second, we provide an ablation study on ML10 to validate the proposed concepts.
Last, we conduct experiments on the most challenging benchmark ML45 to show its broad effectiveness and generalization capability.
\begin{figure*}[!t]
\drawMLTen
\caption{Maximum per-task success rates on ML10 V1. MILLION shows the highest performance
on the training tasks (98.8\%) and the test tasks (50\%).
}
\label{fig:ml10res}
\end{figure*}
\begin{figure*}[!t]
\drawMLTenV
\caption{Maximum per-task success rates on ML10 V2. MILLION shows the highest performance
on the training tasks (98.3\%) and the test tasks (55.4\%).
}
\label{fig:ml10v2res}
\end{figure*}
\subsection{Meta-World Benchmark}
Meta-World \cite{Yu2019a} is a collection of $50$ diverse robotic manipulation tasks built on the MuJoCo physics simulator \cite{Todorov2012}.
It contains two widely-used benchmarks, namely, ML10 and ML45.
ML10 contains a subset of the ML45 tasks, split into $10$ training tasks and $5$ test tasks, while ML45 consists of $45$ training tasks and $5$ test tasks.
Most tasks contain some kind of object that should be manipulated with the robot arm and adopt the following control interface:
\begin{itemize}
\item The action space $\mathcal{A}$ contains the desired 3D Cartesian positions of the end-effector and a normalized control command for the gripper.
\item The state space $\mathcal{S}$ contains the 3D Cartesian positions of the end-effector, the positions of the manipulable objects, and the goal position.
The state space is always nine-dimensional.
\item A success metric function is provided for each task, which defines the completion condition of the corresponding task.
\item For each task, a well-shaped reward function is provided with a similar structure across all tasks, which makes the tasks individually solvable for recent RL algorithms.
\end{itemize}
We make two additional changes to the Meta-World benchmark to reduce the training time.
First, inspired by \cite{Bellemare2012}, we repeat actions twice during the trial phase to reduce the trial length across all the tasks, which enables a shorter sequence length for the transformer model, and therefore reduces the computation requirements significantly.
It should be noted that the reported number of environment steps in our results corresponds to the number of observations the agent has seen.
Second, we add a scalar to the observations during the trial phase, which indicates the remaining time in the trial.
This helps the agent to learn a better value function, because the Meta-World environments have a time dependent termination condition \cite{Pardo2018}.
The time observation is computed as $\frac{\text{steps in one trial}}{\text{maximum steps per trial}}$.
During the instruction phase, a zero value is concatenated to the observation instead.
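This time feature can be sketched as follows (our illustration; the benchmark code differs in detail):

```python
# Sketch of the time feature appended to the observation
# (function and argument names are ours, for illustration only).
def time_feature(step_in_trial, max_steps_per_trial, in_instruction_phase):
    # Zero during the instruction phase; otherwise the fraction of the
    # trial budget consumed, steps_in_trial / max_steps_per_trial.
    if in_instruction_phase:
        return 0.0
    return step_in_trial / max_steps_per_trial
```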
There are two versions of Meta-World.
In the first version, three tasks had to be removed from the benchmark because the scripted policies provided by Meta-World did not solve them reliably: peg-insert-side-v1, lever-pull-v1, and bin-picking-v1.
A fourth task, sweep-v1, had to be removed because its reward function did not encourage the agent to solve the task.
For the second version of Meta-World, however, we keep all tasks accessible for training and testing.
\subsection{ML10 Benchmark \label{ml10_experiments}}
We first tested our method on the ML10 benchmark to show its performance when the agent receives a language instruction instead of only observing the reward signal.
The language instructions are short sentences that describe the goal of the task.
For each task in ML10, we designed multiple simple language instructions.
\begin{table}[!t]
\centering
\caption{Average success rates over all tasks for ML10 and ML45.}
\begin{tabular}{lllll}
\toprule
\multirow{2}{*}{Methods} &
\multicolumn{2}{c}{ML10} &
\multicolumn{2}{c}{ML45} \\
& Training & Testing & Training & Testing \\
\hline
MAML & 25\% & 36\% & 21\% & 24\% \\
RL$^2$ & 50\% & 10\% & 43\% & 20\% \\
PEARL & 43\% & 0\% & 11\% & 30\% \\
MILLION & \textbf{99\%} & \textbf{50\%} & \textbf{95\%} & \textbf{48\%} \\
\bottomrule
\end{tabular}
\label{tab:ml_results}
\end{table}
\begin{table}[t!]\normalsize
\centering
\caption{Comparison among MILLION variants in ML10 V1.}
\small{
\begin{tabular}{l|c|c} \hline
Variants & Meta-training & Meta-test \\ \hline
MILLION & \textbf{0.99} & \textbf{0.50} \\
without Pop-Art & 0.41 & 0.30 \\
without instructions & 0.71 & 0.29 \\
with Full Time Obs & 0.83 & 0.40 \\ \hline
\end{tabular}
}
\label{tab:ml10_ablation}
\vspace{-10pt}
\end{table}
Based on the results reported in \cite{Yu2019a}, Table \ref{tab:ml_results} lists the average success rates of state-of-the-art meta-RL algorithms, namely MAML \cite{finn2017model}, RL$^2$ \cite{duan2017rl}, PEARL \cite{rakelly2019efficient}, and our method MILLION.
The detailed performance for each task in ML10 is visualized in Figures \ref{fig:ml10res} and \ref{fig:ml10v2res}.
It can be observed that, in both versions of Meta-World, MILLION achieves success rates of almost 100\% on the training tasks, which significantly outperforms state-of-the-art methods.
It demonstrates the advantage of providing the agent with the task instructions instead of only rewards.
For meta-testing, MILLION has a success rate of around $50\%$, which again outperforms the other methods (see Figure \ref{fig:metav1}).
\begin{figure}[!t]
\centering
\includegraphics[width=0.475\textwidth]{ml10_v1_language.pdf}
\vspace{-0.25cm}
\caption{Meta-World ML10 V1 training progress with language instructions. The shaded regions indicate one standard deviation over three training runs. The result of ML10 V2 benchmark can be found on the website\protect\footnotemark[1].}
\label{fig:metav1}
\end{figure}
\footnotetext[1]{\url{https://tumi6robot.wixsite.com/million}}
\begin{figure}[!t]
\centering
\begin{tikzpicture}
\centering
\node[above right, inner sep=0] (image) at (0,0){\includegraphics[width=0.45\textwidth, trim={0.5cm 0 2cm 0},clip]
{reach_task.jpg}};
\node(goal1) at (5.2,1.9)[red, fill=white] {\footnotesize {goal1}};
\draw[arrow1, red, line width=0.4mm] (5.15,2.15) -- (5.0,2.55);
\node(goal2) at (1.77,3.2)[red, fill=white] {\footnotesize {goal2}};
\draw[arrow1, red, line width=0.4mm] (2.2,3.2) -- (2.7,3.4);
\node(start1) at (3.2,1.1)[cyan, fill=white] {\footnotesize {start}};
\draw[arrow1, cyan, line width=0.4mm] (3.2,1.3) -- (3.65,1.6);
\node(micro) at (1.4,2.4)[blue,fill=white!30] {\footnotesize {microphone}};
\draw[arrow1, blue, line width=0.4mm] (1.4,2.15) -- (1.7,1.5);
\end{tikzpicture}
\vspace{-0.25cm}
\caption{We take the reaching task as an example to show that MILLION can be successfully used in the real world.
The task is specified by a language instruction through the microphone.
More demonstrations can be found on the project webpage\protect\footnotemark[1].}
\label{fig:realwold_results}
\vspace{-5pt}
\end{figure}
To emphasize the importance of the proposed concepts, we provide ablation studies on the ML10 benchmark, shown in Table~\ref{tab:ml10_ablation}.
First, we demonstrate the importance of the Pop-Art reward normalization.
This normalizes the rewards for every task individually.
The results show that Pop-Art is crucial for our algorithm:
without it, the agent learns to solve fewer than half of the training tasks.
Second, we also examine the performance of a variant that only observes rewards without instructions, which means an episode consists only of three trials and no instruction phase.
The rewards are simply concatenated to the observations to serve as a potential information source of the task. This is similar to many other recent context-based meta-RL algorithms \cite{duan2017rl, wang2016learning}.
This variant can learn $70\%$ of the training tasks but adapts to the testing tasks poorly.
Another ablation uses a different time observation. Our algorithm observes the remaining time in the current trial; here we evaluate a variant that observes the remaining time in the full episode, as originally proposed by \cite{Pardo2018}. The results show that this variant learns the training tasks slightly worse than MILLION. Our hypothesis is that, due to the discount factor, the value function within a trial is relatively independent of the rewards of the next trial, so the remaining time in the trial is more informative for the value function than the remaining time in the episode.
\begin{figure}[!t]
\centering
\includegraphics[width=0.475\textwidth]
{ml45_v1_language.pdf}
\vspace{-0.25cm}
\caption[Meta-World ML45 training progress]{Average trial success rate on ML45 training and test tasks. The policy is evaluated every 5 million environment steps and averaged over 5 consecutive evaluations.}
\label{fig:ml45_results}
\vspace{-5pt}
\end{figure}
We also successfully transfer the learned policy from simulation to the real world.
Due to the page limit, we only show a snapshot of MILLION solving the reach-v1 task in the real world (see Figure \ref{fig:realwold_results}).
More demonstrations of manipulation tasks from ML10 can be found on the webpage\protect\footnotemark[1].
\subsection{ML45 Benchmark}
To test ML45, we use the same hyperparameters and the same number of trials as for the ML10 benchmark.
However, we train the agent for over 1 billion time steps instead of just 400 million because the benchmark contains more diverse tasks.
The results (see Figure \ref{fig:ml45_results}) show that MILLION is able to learn almost all training tasks and about 48\% of the test tasks, which indicates that our method performs stably on complex manipulation scenarios.
A detailed comparison between MILLION and state-of-the-art algorithms on ML45 is also listed in Table \ref{tab:ml_results}.
It demonstrates that our algorithm MILLION greatly outperforms other baselines in terms of success rate in both training and testing stages.
\section{Conclusion}
In this paper, we showed that meta-reinforcement learning can be greatly improved by providing the agent with additional task information, such as language instructions, which are often much easier to provide than dense rewards.
By encoding the language instructions into the observations, we designed a very simple and general algorithm.
This eases the application of RL algorithms to real-world robotic tasks.
Furthermore, we demonstrated that our algorithm is able to solve a set of very diverse robotic manipulation tasks.
In the future, we plan to incorporate language information as a feedback signal to further calibrate the behavior of the meta-RL agent, which can potentially advance our understanding of interactive intelligent robots.
\bibliographystyle{IEEEtran}
\section{Introduction}
The Nekrasov partition function \cite{r:Nekrasov} is an exact formula for the partition function of four-dimensional $\mathcal{N}=2$
supersymmetric gauge theory, including non-perturbative instanton effects.
It is calculated in a deformed four-dimensional Euclidean space, called $\Omega$-background,
which is parameterized by two parameters $\epsilon_1,\epsilon_2$.
At the same time, it was recognized that
the partition function can be identified with
the correlation function of two-dimensional
conformal field theory. In a recent paper \cite{r:AGT},
the explicit form of such correspondence was proposed
between $N=2$ gauge theories and Liouville (Toda) conformal blocks
(AGT conjecture).
In AGT proposal, the instanton part of Nekrasov partition function
is identified with the conformal block of the $W$-algebra
\cite{Wyllard:2009hg, Mironov:2009by}.
This article is in the line of this development.
The instanton partition function for linear quiver gauge theories
is decomposed into matrix like product with a factor $Z_{\vec Y, \vec W}$
which depends on two sets of Young diagrams (eq.\ref{e:Nek}).
Here the Young diagrams $\vec Y=(Y_1,\cdots, Y_N)$
represent the fixed points of
$U(N)$ instanton moduli space under localization.
$Z_{\vec Y,\vec W}$ consists of contributions from one bifundamental
hypermultiplet and vectormultiplets.
We find that the building block $Z_{\vec Y, \vec W}$
satisfies an infinite series of recursion relations,
\ba \label{e:sketch}
\delta_{\pm 1,n} Z_{\vec Y,\vec W} -U_{\pm 1,n} Z_{\vec Y,\vec W}=0\,,
\ea
where $\delta_{\pm 1,n}Z_{\vec Y,\vec W}$
represents a sum of Nekrasov partition functions whose
instanton number is larger or smaller than that of $Z_{\vec Y,\vec W}$ by one,
with appropriate coefficients, and $U_{\pm 1,n}$ are polynomials
of parameters such as the mass of the bifundamental matter or the VEV of the gauge multiplets.
The subscript $n$ takes any non-negative integer values.
The detailed form of the recursion formula and its derivation
are given in the first half of this paper.
The recursion formula is derived by a complicated but
straightforward calculation from the definition of the factor $Z_{\vec Y, \vec W}$.
We note that a classical limit of such relations was
recently explored in \cite{Nekrasov:2012xe}.
In the latter half of this article, we give an interpretation of (\ref{e:sketch}).
We show that the variation in (\ref{e:sketch}) can be
understood as an action of an infinite-dimensional extended conformal algebra.
It is defined in \cite{r:SV} and called SH$^c$ algebra.\footnote{This name of
the algebra appears only in \cite{r:SV}. Degenerate
double affine Hecke algebra, or DDAHA for short, may be more appropriate.
We thank Y. Tachikawa for informing us of the relevance of \cite{r:SV}.}
For this purpose, we construct an explicit representation
where the basis of the Hilbert space is labeled by
sets of $N$ Young diagrams.
Physically, it can be understood that these states correspond to instantons characterized by the same set of Young diagrams.
In our previous paper \cite{Kanno:2012hk}, we showed a similar form of recursion formula under self-dual $\Omega$-background $(\e_1+\e_2=0)$
and discussed that it can be interpreted in terms of $\cW_{1+\infty}$ algebra.
The analysis here is a natural generalization to arbitrary $\Omega$-deformation.
SH$^{c}$ algebra contains a parameter $\beta$, which is related to
$\Omega$-deformation parameters by $\beta=-\e_1/\e_2$.
When we take $\beta=1$, (\ref{e:sketch}) reduces to that in \cite{Kanno:2012hk} and
the action of SH$^c$ algebra can be identified with the $\cW_{1+\infty}$ algebra.
We will also see that the SH$^{c}$ algebra contains a Heisenberg$\times$Virasoro subalgebra and that
its central charge is the same as that of the Heisenberg$\times W_N$ algebra with background charge $Q=\sqrt{\beta}-1/\sqrt{\beta}$.
The combination of Heisenberg algebra with $W_N$ appears in \cite{Alba:2010qc,Fateev:2011hq,Belavin:2011js}, where the authors formally construct a basis of Hilbert space of Heisenberg$\times W_N$ algebra which reproduces the
factorized form of Nekrasov partition function.
Such observation implies that one may regard the formula (\ref{e:sketch})
as the conformal Ward identities which characterize the conformal block function.
We mention that there is another one parameter deformation
of $\cW_{1+\infty}$ algebra \cite{Gaberdiel:2012ku}, $W_{\infty}[\mu]$
in the context of higher spin supergravity.
SH$^c$ and $W_{\infty}[\mu]$ share a property that they are generated by
infinite higher spin generators and contain
$W_N$ algebra with general $\beta$ as their reduction.
Here we use SH$^c$ since its action on a basis parametrized by
sets of Young diagrams is already known. It is natural to expect that
these two algebras are identical although they appear to be very different.
It should also be noted that the introduction of a further deformation parameter is
possible \cite{r:DAHA, r:DI, r:Miki}
and was applied to a generalization of AGT conjecture \cite{DAHA-AGT}.
As we will see later, we expect that the recursion relation from SH$^{c}$ algebra
should be regarded as the extended conformal Ward identities and
fully reproduce the conformal block function.
Because of a technical difficulty to characterize the vertex operator
in SH$^c$, explicit demonstration of the relation is limited to
the Heisenberg and Virasoro subalgebra.
For these cases, the recursion relations
for $n=0,1$ can be indeed interpreted as Ward identities.
The algebra SH$^{c}$ was introduced in \cite{r:SV} to prove the AGT conjecture
for pure super Yang-Mills theory. Our analysis shows that it may be
applied to linear quiver gauge theories as well. For the recent development
toward such direction, see also \cite{Maulik:2012wi}.
The rest of this article is organized as follows.
In section 2, we describe Nekrasov partition function for linear quiver gauge theories.
In section 3, we derive the recursion formula for Nekrasov partition function.
In section 4, we give the definition of SH$^c$ algebra and a representation of it.
The relation to $\cW_{1+\infty}$ algebra at $\beta=1$ is also discussed.
In section 5, we show SH$^c$ algebra contains
Heisenberg$\times$Virasoro subalgebra and its central charge is equal to
that of Heisenberg$\times W_N$ algebra.
In section 6, we discuss that the Nekrasov partition function can be interpreted as a correlator of the SH$^{c}$ algebra.
In particular, we explain that the recursion formulae for $n=0,1$
represent the $U(1)$ and Virasoro constraints for the Nekrasov partition function, respectively.
The vertex operators in the correlator should be chosen to be special ones which have the maximal number
of null states at level 1.
Since the calculations in this article are lengthy but straightforward,
most of the details are not presented.
We nevertheless keep an outline of the computation
in the Appendix for readers who are interested in the details.
\section{Nekrasov partition function}
In this article, we consider four-dimensional $\mathcal{N}=2$ superconformal linear quiver gauge theory with
$U(N) \times U(N) \times \cdots \times U(N)$ gauge group.
The instanton partition function of $\mathcal{N}=2$ gauge theories has been
developed in \cite{r:Nekrasov,r:NY,r:MNS,inst1}.
In this case, it can be written in the following form
\ba\label{e:Nek}
Z^\mathrm{Nek}= \sum_{\vec Y^{(1)},\cdots,\vec Y^{(n)}}
q_i^{|\vec Y^{(i)}|}
\bar{V}_{\vec Y^{(1)}} \cdot Z_{\vec Y^{(1)} \vec Y^{(2)}}\cdots
Z_{\vec Y^{(n-1)}\vec Y^{(n)}} \cdot V_{\vec Y^{(n)}}\,.
\ea
\ba
Z_{\vec Y^{(i)} \vec Y^{(i+1)}}&=& Z(\vec a^{(i)},\vY^{(i)}; \vec a^{(i+1)}, \vY^{(i+1)};\mu^{(i)}),\\
\bar V_{\vY^{(1)}}&=& Z(\vec\lambda, \vec\emptyset; \vec a^{(1)}, \vY^{(1)};\mu^{(0)}),\\
V_{\vec Y^{(n)}}&=& Z(\vec a^{(n)},\vY^{(n)}; \vec \lambda', \vec\emptyset;\mu^{(n)}),
\ea
where $q_i=\exp{(2\pi i \t_i)}$ represents the complexified coupling constant $\t_i$ of $i$-th $U(N)$ gauge group,
and $\vec Y^{(i)}$ is a set of $N$ Young diagrams characterizing
fixed points of localization in the instanton moduli space of the $i^\mathrm{th}$ $U(N)$.
$\vec a^{(i)}$ is the VEV for an adjoint scalar field in the vector multiplet of $i^\mathrm{th}$ $U(N)$
and $\mu^{(i)}$ is the mass parameter for the bifundamental matter
field which interpolates $i^\mathrm{th}$ and $i+1^\mathrm{th}$ gauge groups.
We write $\vec\emptyset$ to represent
a set of null Young diagrams $(\emptyset,\cdots,\emptyset)$.
The building block reads,
\ba
Z(\vec a, \vec Y; \vec b, \vec W;\mu) =\frac{z_\mathrm{bf}}{z_\mathrm{vect}}= \frac{\prod_{p,q=1}^N g_{Y_p W_q}(a_p-b_q-\mu)}{\left(
\prod_{p,q} g_{Y_p Y_q}(a_p-a_q) g_{W_pW_q}(b_p-b_q)
\right)^{1/2}}
\label{e:Z}
\ea
where the numerator ($z_\mathrm{bf}$)
comes from the contribution of the bifundamental multiplet
and the denominator ($z_\mathrm{vect}$) is the contribution from
the vector multiplets to which the bifundamental multiplet couples. The function $g_{Y,W}$ is
\ba
g_{Y,W}(x)&=&\prod_{(i,j)\in Y}(x+\beta(Y^\prime_j-i+1)+W_i-j)
\prod_{(i,j)\in W}(-x+\beta(W^\prime_j-i)+Y_i-j+1)\,.
\ea
The decomposition of the form (\ref{e:Nek}) is natural if we recall the pants decomposition of a multi-point function on the sphere
and the dictionary of the AGT relation:
a bifundamental multiplet corresponds to a vertex operator insertion and a vector multiplet to an internal line
(see fig.~\ref{f:block}).
\begin{figure}[bpt]
\begin{center}
\includegraphics[scale=0.6]{block.eps}
\end{center}
\caption{Decomposition of Nekrasov function}
\label{f:block}
\end{figure}
\section{Recursion formula for Nekrasov partition function}
In this section, we present the accurate form of the formula (\ref{e:sketch})
and then derive it from the definition (\ref{e:Z}).
For this purpose,
we need to introduce some notations.
We decompose $Y,W$ into rectangles $Y=(r_1, \cdots, r_f; s_1,\cdots, s_f)$
(with $0<r_1<\cdots <r_f$, $s_1>\cdots>s_f>0$, see Figure \ref{f:Young}
for the parametrization).
We use $f_p$ (resp.\ $\tilde f_p$) to represent the number of rectangles
of $Y_p$ (resp.\ $W_p$).
\begin{figure}[bpt]
\begin{center}
\includegraphics[scale=0.6]{Young.eps}
\end{center}
\caption{Decomposition of Young diagram by rectangles}
\label{f:Young}
\end{figure}
Furthermore, we write (with $r_0=s_{f+1}=0$):
\begin{eqnarray}
A_k(Y)& =& \beta r_{k-1}-s_k-\xi,\quad (k=1,\cdots, f+1)\label{e:Ak}\,,\\
B_k(Y)&= & \beta r_{k}-s_k,\quad (k=1,\cdots, f)\label{e:Bk}\,,
\end{eqnarray}
where $\xi:=1-\beta$.
$A_k(Y)$ (resp.\ $B_k(Y)$) represents the $k^\mathrm{th}$ location
where a box may be added to (resp.\ deleted from) the Young diagram $Y$ (Figure \ref{f:Young+-}),
composed with the map from box locations to $\mathbf{C}$.
\\\\
We denote by $Y^{(k,+)}$ (resp.\ $Y^{(k,-)}$) the Young diagram obtained from
$Y$ by adding (resp.\ deleting) a box at $(r_{k-1}+1,s_k+1)$ (resp.\ $(r_k,s_k)$).
Similarly, we use the notation $\vec Y^{(k\pm),p}=(Y_1,\cdots, Y_p^{(k,\pm)}, \cdots, Y_N)$
to represent the variation of one Young diagram in a set of Young diagrams $\vec Y$.
\begin{figure}[bpt]
\begin{center}
\includegraphics[scale=0.6]{Young+-.eps}
\end{center}
\caption{Locations of boxes}
\label{f:Young+-}
\end{figure}
One can write the schematic relation (\ref{e:sketch}) more explicitly.
We define,
\ba
\delta_{-1,n} Z(\vec a, \vec Y; \vec b, \vec W;\mu)&=&
\sum_{p=1}^N\left(
\sum_{k=1}^{f_p+1} (a_p+\nu+A_k(Y_p))^n \Lambda^{(k,+)}_p (\vec a,\vec Y) Z(\vec a,\vec Y^{(k,+),p};
\vec b, \vec W;\mu)\right.\nn\\
&& \left. -\sum_{k=1}^{\tilde f_p} (b_p+\mu+\nu+B_k(W_p))^n
\Lambda^{(k,-)}_p(\vec b, \vec W)
Z(\vec a,\vec Y;\vec b, \vec W^{(k,-),p};\mu)
\right)\label{delta1}\,,\\
\delta_{1,n} Z(\vec a, \vec Y; \vec b, \vec W;\mu)&=&
\sum_{p=1}^N\left(
-\sum_{k=1}^{f_p} (a_p+\nu+B_k(Y_p))^n \Lambda^{(k,-)}_p (\vec a,\vec Y) Z(\vec a,\vec Y^{(k,-),p};
\vec b, \vec W;\mu)\right.\nn\\
&& \left. +\sum_{k=1}^{\tilde f_p} (b_p+\nu+\mu+A_k(W_p)+\xi)^n \Lambda^{(k,+)}_p(\vec b, \vec W)
Z(\vec a,\vec Y;\vec b, \vec W^{(k,+),p};\mu)
\right),\label{delta2}
\ea
where we introduced coefficients $\Lambda$:
\ba
\Lambda^{(k,+)}_p(\vec a,\vec Y) &=& \left(
\prod_{q=1}^N \left(\prod_{\ell=1}^{f_q} \frac{
a_p-a_q+A_k(Y_p)-B_\ell(Y_q)+\xi
}{
a_p-a_q+A_k(Y_p)- B_\ell(Y_q)
}{\prod}_{\ell=1}^{\prime f_q +1}\frac{a_p-a_q+A_k(Y_p)- A_\ell(Y_q) -\xi}{
a_p-a_q+A_k(Y_p)-A_\ell(Y_q)
}
\right)\right)^{1/2}\,,\\
\Lambda^{(k,-)}_p(\vec a,\vec Y) &=& \left(
\prod_{q=1}^N \left(\prod_{\ell=1}^{ f_q+1} \frac{ a_p-a_q+B_k(Y_p)-A_\ell(Y_q)-\xi
}{a_p-a_q+B_k(Y_p)- A_\ell(Y_q)
}{\prod}_{\ell=1}^{\prime f_q}\frac{a_p-a_q+B_k(Y_p)-B_\ell(Y_q)+\xi}{
a_p-a_q+B_k(Y_p)-B_\ell(Y_q)
}
\right)\right)^{1/2}\,.
\ea
The prime in the product symbol ($\prod'$) indicates that $(\ell,q)=(k,p)$ is excluded
from the product. The parameter $\nu$ is arbitrary.
\\\\
In order to define the polynomial $U_{\pm 1, n}$, we introduce
a generating function for multi-variables, $x_1,\cdots, x_\mathcal{N},
y_1,\cdots, y_\mathcal{N}$, (the expansion around $\zeta=\infty$),
\ba\label{genfun}
\prod_{I=1}^\mathcal{N} \frac{\zeta-y_I}{\zeta-x_I}=1+\sum_{n=1}^\infty q_{n}(x, y) \zeta^{-n}\,.
\ea
which gives the order $n$ polynomial $q_{n}$ in variables $x_I$ and $y_I$.
$U_{\pm 1,n}$ is written in terms of $q_n$ as
\ba
U_{\pm1,n}=\beta^{-1/2}q_{n+1}(x,y),
\ea
where we need to make replacements of variables:
\ba
x_I &\rightarrow& \{ \nu+A_k(Y_p), \nu+\mu+B_k(W_p)\},\,\,
y_I \rightarrow \{\nu+\mu+A_k(W_p)+\xi , \nu+B_k(Y_p)-\xi\} \quad \mbox{for}\quad U_{-1,n}\,,\label{rep1}\\
x_I &\rightarrow& \{\nu+\mu+A_k(W_p)+\xi , \nu+B_k(Y_p)\},\,\,
y_I\rightarrow \{\nu+A_k(Y_p)+\xi, \nu+\mu+B_k(W_p)\} \quad \mbox{for}\quad U_{1,n}\,.\label{rep2}
\ea
Here $k,p$ run over all possible values and the number of variables
is $\mathcal{N}=N+\sum_{p=1}^N (f_p+\tilde f_p)$.
We note that the right hand side of (\ref{genfun}) is written as
\ba
\exp\left(\sum_{n=1}^\infty \frac{\zeta^{-n}}{n} p_n(x,y)\right)\,,\quad
p_n(x,y):= \sum_{I=1}^\mathcal{N} ({x_I}^n-{y_I}^n)\,.
\ea
In terms of $p_n$, the function $q_n$ is written as,
\ba
q_1=p_1,\quad q_2=\frac12 (p_2+p_1^2),\cdots
\ea
and so on. In general, it takes the form of the Schur polynomial for the single-row Young diagram $(n)$
written in terms of power-sum polynomials.
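As a quick consistency check (illustrative, not part of the proof), one can expand the product in (\ref{genfun}) numerically with exact rational arithmetic and verify $q_1=p_1$ and $q_2=\frac12(p_2+p_1^2)$:

```python
# Expand prod_I (z - y_I)/(z - x_I) in powers of 1/z and compare the
# coefficients q_1, q_2 with the power sums p_1, p_2 (illustrative check).
from fractions import Fraction

def series_coeffs(xs, ys, order):
    """Coefficients q_0, ..., q_order of prod_I (z - y_I)/(z - x_I),
    expanded around z = infinity in powers of 1/z."""
    coeffs = [Fraction(1)] + [Fraction(0)] * order
    for x, y in zip(xs, ys):
        # (z - y)/(z - x) = (1 - y/z) * sum_k (x/z)^k, truncated at 'order'
        geom = [Fraction(x) ** k for k in range(order + 1)]
        factor = [geom[k] - (Fraction(y) * geom[k - 1] if k else Fraction(0))
                  for k in range(order + 1)]
        coeffs = [sum(coeffs[j] * factor[k - j] for j in range(k + 1))
                  for k in range(order + 1)]
    return coeffs

xs, ys = [2, 5, -1], [3, 1, 4]
q = series_coeffs(xs, ys, 2)
p1 = sum(xs) - sum(ys)
p2 = sum(x * x for x in xs) - sum(y * y for y in ys)
assert q[1] == p1                  # q_1 = p_1
assert 2 * q[2] == p2 + p1 ** 2    # q_2 = (p_2 + p_1^2)/2
```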
Let us give a proof of the recursion relation (\ref{e:sketch}).
It is based on a direct evaluation of the variations of
Nekrasov partition function which is given in the appendix \ref{Evaluation of variations of Nekrasov formula}.
By the formulae (\ref{bf1}--\ref{vec4}),
the left hand sides of (\ref{delta1},\ref{delta2}) are written in the form,
\ba
\label{summulti}
\beta^{-1/2}\sum_{I=1}^\mathcal{N} (x_I)^n \frac{\prod_{J=1}^\mathcal{N} (x_I-y_J)}{\prod'_J (x_I-x_J)}
\ea
with the replacements (\ref{rep1},\ref{rep2}).
We rewrite this expression in the form of the generating functional,
\ba
\sum_{I=1}^\mathcal{N} \left(\sum_{n=0}^\infty
\frac{x_I^n}{\zeta^{n+1}} \right)\frac{\prod_{J=1}^\mathcal{N} (x_I-y_J)}{\prod'_J (x_I-x_J)}
=
\sum_{I=1}^\mathcal{N} \frac{1}{\zeta-x_I} \frac{\prod_{J=1}^\mathcal{N} (x_I-y_J)}{\prod'_J (x_I-x_J)}
= \prod_{I=1}^\mathcal{N} \frac{\zeta-y_I}{\zeta-x_I} -1\,.
\ea
In passing from the second to the third expression, we use a
nontrivial identity \cite{Kanno:2012hk}, which can be proved
by comparing the locations of the poles and the residues on both sides.
The third term takes the form of
the left hand side of (\ref{genfun}).
Comparing the coefficients of $\zeta^{-(n+1)}$, we arrive at
the recursion formula (\ref{e:sketch}).
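The identity used in this step is the standard partial-fraction decomposition of a rational function with equal-degree numerator and denominator. The following sketch (with generic numerical points and exact rational arithmetic; the function names are ours) checks $\sum_I \frac{1}{\zeta-x_I}\frac{\prod_J(x_I-y_J)}{\prod'_J(x_I-x_J)} = \prod_I\frac{\zeta-y_I}{\zeta-x_I}-1$ for distinct $x_I$:

```python
# Numerical check of the partial-fraction identity (illustrative only).
from fractions import Fraction as F

def lhs(z, xs, ys):
    # sum_I 1/(z - x_I) * prod_J (x_I - y_J) / prod'_J (x_I - x_J)
    total = F(0)
    for i, xi in enumerate(xs):
        num = F(1)
        for yj in ys:
            num *= xi - yj
        den = F(1)
        for j, xj in enumerate(xs):
            if j != i:
                den *= xi - xj
        total += num / den / (z - xi)
    return total

def rhs(z, xs, ys):
    # prod_I (z - y_I)/(z - x_I) - 1
    prod = F(1)
    for xi, yi in zip(xs, ys):
        prod *= (z - yi) / (z - xi)
    return prod - 1

xs, ys = [F(1), F(3), F(-2)], [F(2), F(5), F(7)]
for z in (F(10), F(-4), F(1, 2)):
    assert lhs(z, xs, ys) == rhs(z, xs, ys)
```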
\section{Symmetry algebra {\bf SH}$^c$}
In this section, we show that the structure of the one-box variations in (\ref{e:sketch})
is governed by a nonlinear algebra, denoted SH$^c$ in
\cite{r:SV}.
It has generators $D_{r,s}$ with $r\in \mathbf{Z}$ and $s\in \mathbf{Z}_{\geq 0}$.
We call the first index $r$ the degree and the second index $s$ the order
of a generator.
The commutation relations for degree $\pm 1, 0$ generators are defined by,
\ba
\left[D_{0,l} , D_{1,k} \right] & =& D_{1,l+k-1}, \;\;\; l \geq 1 \,,\label{SH1}\\
\left[D_{0,l},D_{-1,k}\right]&=&-D_{-1,l+k-1}, \;\;\; l \geq 1 \,,\\
\left[D_{-1,k},D_{1,l}\right]&=&E_{k+l} \;\;\; l,k \geq 1\label{eDDE}\,,\\
\left[D_{0,l} , D_{0,k} \right] & =& 0 \,,\,\, k,l\geq 0\,,\label{SH4}
\ea
where
$E_k$ is a nonlinear combination of $D_{0,k}$ determined in the form of a generating function,
\ba
1+(1-\beta)\sum_{l\geq 0}E_l s^{l+1}= \exp(\sum_{l\geq 0}(-1)^{l+1}c_l \pi_l(s))\exp(\sum_{l\geq 0}D_{0,l+1} \omega_l(s)) \,,\label{com0}
\ea
with
\ba
&&\pi_l(s)=s^l G_l(1+(1-\beta)s) \,,\\
&&\omega_l(s)=\sum_{q=1,-\beta,\beta-1}s^l(G_l(1-qs)-G_l(1+qs)) \,,\\
&&G_0(s)=-\log(s), \;\;\; G_l(s)=(s^{-l}-1)/l \;\;\; l \geq 1\,.
\ea
The parameters $c_l$ ($l\geq 0$) are central charges.
The first few $E_l$ can be computed more explicitly as,
\ba
E_0&=&c_0,\\
E_1&=&-c_1 +c_0(c_0-1)\xi /2,\label{E_1} \\
E_2&=&c_2+ c_1(1-c_0)\xi +c_0(c_0-1)(c_0-2)\xi^2 /6 +2\beta D_{0,1}, \\
E_3& = &6\beta D_{0,2}+ 2c_0 \beta \xi D_{0,1} + \cdots, \\
E_4
& =& 12\beta D_{0,3}+ 6c_0 \beta \xi D_{0,2} +
(-c_0 \beta \xi^2 +c_0^2 \beta \xi^2- 2c_1 \beta \xi +2 -4\xi +4\xi^2 -2\xi^3)D_{0,1} + \cdots\,.
\ea
where $\cdots$ denotes terms which do not contain $D_{0,l}$.
Other generators are defined recursively by,
\ba
D_{l+1,0} = \frac1l \left[D_{1,1} , D_{l,0} \right] ,&\qquad&
D_{-l-1,0} = \frac1l \left[D_{-l,0},D_{-1,1}\right] \,, \\
D_{r,l} = \left[D_{0,l+1} , D_{r,0} \right] \;\;\; &\qquad&
D_{-r,l}= \left[D_{-r,0} , D_{0,l+1} \right]\,.
\ea
for $l\geq 0, r>0$\,.
Some of the basic properties of SH$^c$ \cite{r:SV} are listed as follows:
\begin{itemize}
\item The algebra has a natural action on the fixed points of localization in the moduli space of
$SU(N)$ instantons.
\item It can be derived as a singular limit of double affine Hecke algebra (DAHA) \cite{r:DAHA}.
\item When $\beta\rightarrow 1$, the algebra reduces to the much simpler algebra $\cW_{1+\infty}$.
\item For general $\beta$, the algebra contains $W_N$ algebra when the representation is constructed out of $N$ Young diagrams.
\item It is closely related to the recursion relations among Jack polynomials.
\end{itemize}
To see the relation with (\ref{e:sketch}), we introduce a Hilbert space $\mathcal{H}_{\vec a}$
spanned by a basis $|\vec a, \vec Y\rangle$, where $\vec a\in \mathbf{C}^N$
and $\vec Y=(Y_1,\cdots, Y_N)$ is a set of $N$ Young diagrams.
The dual basis $\langle \vec a, \vec Y|$ is defined such that
\ba
\langle \vec a, \vec Y|\vec b, \vec W\rangle =\delta_{\vec Y, \vec W}\delta(\vec a-\vec b)\,.
\ea
We define the actions of $D_{\pm 1, l}, D_{0,l}$ on the ket and bra basis as,
\ba
D_{-1,l}|\vec b,\vec W\rangle&=&(-1)^{l} \sum_{q=1}^{N} \sum_{t=1}^{\tilde{f_q}}(b_q+B_t(W_q))^l \Lambda^{(t,-)}_q(\vec W)|\vec b,\vec W^{(t,-),q}\rangle\label{DW1} \,,\\
D_{1,l}|\vec b,\vec W\rangle&=&(-1)^{l}\sum_{q=1}^{N}\sum_{t=1}^{\tilde{f_q}+1}(b_q+A_t(W_q))^l
\Lambda^{(t,+)}_q(\vec W)|\vec b,\vec W^{(t,+),q}\rangle\label{DW2}\,,\\
D_{0,l+1}|\vec b,\vec W\rangle&=&(-1)^l \sum_{q=1}^{N}\sum_{\mu \in W_q}(b_q+c(\mu))^l |\vec b,\vec W\rangle\,,
\label{eD0W}\\
\langle \vec a, \vec Y| D_{-1,l}&=&(-1)^l\sum_{p=1}^{N}\sum_{t=1}^{f_p+1}(a_p+A_t(Y_p))^l \Lambda_p^{(t,+)}(\vec Y)
\langle \vec a, \vec Y^{(t,+),p}| \,,\label{eDmY}\\
\langle \vec a, \vec Y| D_{1,l}&=& (-1)^l\sum_{p=1}^{N}\sum_{t=1}^{f_p}
(a_p+B_t(Y_p))^l {\Lambda}_p^{(t,-)}(\vec Y)\langle \vec a, \vec Y^{(t,-),p}|
\label{eD1Y}\,,\\
\label{eD1Y}\,,\\
\langle \vec a, \vec Y| D_{0,l+1}&=&(-1)^l\sum_{p=1}^{N}\sum_{\mu \in Y_p}(a_p+c(\mu))^l \langle \vec a, \vec Y|\,,\label{eD0Y}
\ea
where
$
c(\mu)=\beta i-j \mbox{ for }\mu=(i,j).
$
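The eigenvalue of $D_{0,l+1}$ in (\ref{eD0W}) is simply a power sum of shifted box contents. The following is a minimal Python sketch for $N=1$, assuming $0$-based box coordinates $\mu=(i,j)$ ($i$ the row, $j$ the column) and the content $c(\mu)=\beta i-j$ quoted above; conventions with extra offsets exist in the literature, so the normalization here is illustrative. For $l=0$ the eigenvalue reduces to the box count $|Y|$, consistent with $D_{0,1}$ appearing as the level-counting part of $L_0$ below.

```python
def boxes(Y):
    """Boxes (i, j) of a Young diagram Y given as a list of row lengths, 0-based."""
    return [(i, j) for i, row in enumerate(Y) for j in range(row)]

def content(mu, beta):
    """Content c(mu) = beta*i - j for a box mu = (i, j), following the text."""
    i, j = mu
    return beta * i - j

def d0_eigenvalue(l, a, Y, beta):
    """Eigenvalue of D_{0,l+1} on |a, Y> for N = 1: (-1)^l sum_mu (a + c(mu))^l."""
    return (-1) ** l * sum((a + content(mu, beta)) ** l for mu in boxes(Y))
```

For example, `d0_eigenvalue(0, a, Y, beta)` returns the number of boxes of `Y` for any `a` and `beta`.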
With such definitions, we claim that the actions of $D_{a,l}$
on the ket and bra bases satisfy the SH${}^c$ algebra with central charges
\ba\label{e:cl}
c_l=\left\{
\begin{array}{ll}
\sum_{q=1}^{N}(b_q-\xi)^l \quad &\mbox{(for ket)}\\
\sum_{p=1}^{N}(a_p -\xi)^l\quad &\mbox{(for bra)}
\end{array}
\right. \,.
\ea
We note that the ``central charges" in general depend on the labels $\vec a, \vec b$
of the bra and ket states, except for $c_0 =N$. Of course, when the inner product between them is
nonvanishing ($\vec a=\vec b$), they coincide.
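The central charges $c_l$ are elementary power sums and can be tabulated directly. The sketch below assumes $\xi=1-\beta$, as implied by $Q=-\beta^{-1/2}\xi$ with $Q=\sqrt\beta-1/\sqrt\beta$ in (\ref{cVir}); the point to check is that $c_0=N$ independently of the labels while the higher $c_l$ do depend on them.

```python
def central_charges(a, beta, lmax):
    """c_l = sum_p (a_p - xi)^l for l = 0..lmax, with xi = 1 - beta
    (so that Q = -xi/sqrt(beta), as in the text)."""
    xi = 1.0 - beta
    return [sum((ap - xi) ** l for ap in a) for l in range(lmax + 1)]
```

Here `central_charges(a, beta, 0)[0]` always equals `len(a)`, i.e. $c_0=N$.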
Up to overall signs and shifts of the parameters $a_p\rightarrow a_p+\nu$
and $b_p\rightarrow b_p+\mu+\nu+\xi$, the coefficients
which define $D_{\pm 1,l}$ are identical to the variations $\delta_{\pm 1, l}$
in (\ref{delta1},\ref{delta2}). This observation suggests that
the partition function may be written as an inner product
of the basis elements $\langle \vec a+\nu\vec e,\vec Y|$ and
$|\vec b+(\nu+\mu+\xi)\vec e,\vec W\rangle$
($\vec e:=(1,\cdots,1)$) with some operator insertions,
and that the recursion formula
should be regarded as the Ward identity for the symmetry algebra SH$^c$.
We pursue this idea in the following.
Actually there is a small mismatch in the above observation.
The coefficient appearing in (\ref{delta2}) is shifted from the coefficient
in (\ref{DW1}) by $\xi$. As we will see later, this factor is canceled
by slightly modifying the vertex operator inserted between the two bases.
With this change, the vertex operator is no longer a primary field for the $U(1)$ factor.
We need to perform a lengthy computation to confirm
that the action of $D_{\pm 1, l}$ indeed gives a
representation of SH$^c$; see appendix \ref{s:shc} for the details.
\subsection{Comparison with $\cW_{1+\infty}$}
For generic values of $\beta$, SH$^c$ is a complicated nonlinear algebra.
A simplification occurs when we choose $\beta=1$:
the nonlinear algebra reduces to the linear algebra
$\cW_{1+\infty}$, the algebra of higher-order differential operators $z^n D^m$
($n\in \mathbf{Z}$, $m=0,1,2,\cdots$, $D=z\partial_z$). A quantum generator
$\cW(z^n D^m)$ is assigned to each differential operator
and satisfies the algebra with a central extension,
\ba\label{Winf}
[\cW(z^ne^{xD}) , \cW(z^me^{yD})]=(e^{mx}-e^{ny})\cW(z^{n+m} e^{(x+y)D})-C \frac{e^{mx}-e^{ny}}{e^{x+y}-1}\delta_{n+m,0}
\,.
\ea
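The structure of (\ref{Winf}) without the central term can be seen classically: on a monomial, $z^n e^{xD}\, z^k = e^{kx} z^{n+k}$, and the commutator of two such operators reproduces the prefactor $(e^{mx}-e^{ny})$. The central term ($C$) is invisible in this classical action, so the sketch below checks only the $C=0$ part numerically, at hypothetical sample values of $n,m,x,y,k$.

```python
import math

def op(n, x):
    """Classical action of z^n e^{x D} on a monomial: z^n e^{xD} z^k = e^{kx} z^{n+k}."""
    def act(coef, k):
        return coef * math.exp(k * x), k + n
    return act

def commutator(A, B, k):
    """[A, B] applied to z^k with unit coefficient; returns (coefficient, power)."""
    c1, k1 = B(1.0, k)
    c1, k1 = A(c1, k1)          # A B z^k
    c2, k2 = A(1.0, k)
    c2, k2 = B(c2, k2)          # B A z^k
    assert k1 == k2             # both orderings shift the power by n + m
    return c1 - c2, k1

n, m, x, y, k = 2, -1, 0.3, -0.7, 5
lhs, kk = commutator(op(n, x), op(m, y), k)
# RHS of (Winf) with C = 0: (e^{mx} - e^{ny}) z^{n+m} e^{(x+y)D} acting on z^k
rhs_coef, rhs_k = op(n + m, x + y)(math.exp(m * x) - math.exp(n * y), k)
```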
The connection between SH$^c$ and $\cW_{1+\infty}$
was already explained in appendix F of \cite{r:SV}.
In our previous paper \cite{Kanno:2012hk},
we used the explicit action of the $\cW_{1+\infty}$ generators
on the free fermion Fock space and showed that
the Nekrasov partition function satisfies a recursion formula
associated with the symmetry.
Here we make a direct comparison of the action of the $\cW_{1+\infty}$
algebra on the free fermion Fock space in \cite{Kanno:2012hk}
with the corresponding action of SH$^c$ (\ref{DW1}--\ref{eD0W}).
For simplicity, we consider the $N=1$ case:
\ba
\cW(zD^l)|a,Y\rangle &=& (-1)^l \sum_{i=1}^f (a+B_i(Y)-1)^l |a,Y^{(i,-)}\rangle,\\
\cW(z^{-1}D^l)|a,Y\rangle &=& (-1)^l \sum_{i=1}^{f+1} (a+A_i(Y))^l |a,Y^{(i,+)}\rangle\,.
\ea
We need to rewrite $\lambda$ in \cite{Kanno:2012hk} as $-a$ here.
This implies the correspondence in the $\beta\rightarrow 1$ limit:
\ba
D_{-1,l}&\leftrightarrow& \cW(z (D+1)^l)= \cW(D^l z),\\
D_{1,l}&\leftrightarrow &\cW(z^{-1} D^l).
\ea
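The operator identity $z(D+1)^l = D^l z$ underlying the first correspondence can be checked directly on monomials, since $(D+1)^l z^k=(k+1)^l z^k$ and $D^l z^{k+1}=(k+1)^l z^{k+1}$. A small sketch, representing a monomial as a `(coefficient, power)` pair:

```python
def D(term):
    """D = z d/dz on a monomial, represented as (coefficient, power)."""
    coef, k = term
    return coef * k, k

def Dp1(term):
    """(D + 1) on a monomial."""
    coef, k = term
    return coef * (k + 1), k

def mul_z(term):
    """Multiplication by z."""
    coef, k = term
    return coef, k + 1

def apply_n_times(f, n, term):
    for _ in range(n):
        term = f(term)
    return term

def z_Dp1_l(l, k):
    """z (D+1)^l acting on z^k."""
    return mul_z(apply_n_times(Dp1, l, (1, k)))

def Dl_z(l, k):
    """D^l z acting on z^k."""
    return apply_n_times(D, l, mul_z((1, k)))
```

Both functions return $((k+1)^l,\, k+1)$, confirming the identity on every monomial.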
One may proceed to see the correspondence between the other generators
in $\cW_{1+\infty}$ and those in SH$^c$.
The recursion formulae and the Ward identity obtained
in \cite{Kanno:2012hk} can be derived from the corresponding formulae
in this paper by taking the limit $\beta\rightarrow 1$.
\section{Heisenberg and Virasoro algebra in {\bf SH}$^c$}
In the following, we focus on an important subalgebra of SH$^c$,
namely the Heisenberg (or $U(1)$ current) and Virasoro algebras.
They are important because we can evaluate their
Ward identities explicitly, while the higher generators in general
have nonlinear commutation relations with the vertex operator.
The generators of the Heisenberg ($J_l$) and Virasoro ($L_l$) algebras
are embedded in SH$^c$ as \cite{r:SV},
\ba
&& J_{l}=(-\sqrt{\beta})^{-l} D_{-l,0},\quad J_{-l}=(-\sqrt{\beta})^{-l} D_{l,0}, \quad J_0=E_1/\beta,
\label{defJ}\\
&& L_l=(-\sqrt{\beta})^{-l} D_{-l,1}/l +(1-l) c_0 \xi J_l/2\,,\quad\nn\\
&& L_{-l}= (-\sqrt{\beta})^{-l} D_{l,1}/l +(1-l)c_0 \xi J_{-l}/2\,,\nn\\
&& L_0=[L_1,L_{-1}]/2=D_{0,1} +\frac{1}{2\beta}\left(c_2+c_1(1-c_0) \xi +\frac{\xi^2}{6} c_0(c_0-1)(c_0-2)\right)\,.\label{defV}
\ea
The commutation relations among these generators are the standard ones,
\ba
\left[ J_n, J_m\right] &=& \frac{n N}{\beta} \delta_{n+m,0},\\
\left[L_n, J_m\right] &=& -m J_{n+m},\\
\left[ L_n, L_m\right]&=& (n-m) L_{n+m}+\frac{c}{12}(n^3-n) \delta_{n+m,0}\,.
\ea
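The first commutator above is $N$ copies of the free-boson oscillator algebra: with $[\alpha^{(i)}_n,\alpha^{(j)}_{-n}]=n\delta_{ij}$ and $J_n=\beta^{-1/2}\sum_i\alpha^{(i)}_n$ (as in the free field representation below), one gets $[J_n,J_{-n}]=nN/\beta$. A minimal single-mode sketch, assuming the standard Fock-space action $a|n\rangle=\sqrt n|n-1\rangle$, $a^\dagger|n\rangle=\sqrt{n+1}|n+1\rangle$ with a finite occupation cutoff (below which the truncation is exact):

```python
import math

def apply_a(state):
    """Annihilation: a |n> = sqrt(n) |n-1>, on a state stored as {n: coefficient}."""
    return {n - 1: c * math.sqrt(n) for n, c in state.items() if n > 0}

def apply_adag(state, n_max):
    """Creation: a^dag |n> = sqrt(n+1) |n+1>, truncated at occupation n_max."""
    return {n + 1: c * math.sqrt(n + 1) for n, c in state.items() if n < n_max}

def commutator_on(n, state, n_max):
    """[alpha_n, alpha_{-n}] |state>, with alpha_n = sqrt(n) a, alpha_{-n} = sqrt(n) a^dag."""
    x = apply_a(apply_adag(state, n_max))
    y = apply_adag(apply_a(state), n_max)
    keys = set(x) | set(y)
    return {k: n * (x.get(k, 0.0) - y.get(k, 0.0)) for k in keys}
```

On any state well below the cutoff, `commutator_on(n, state, n_max)` returns `n` times the state, i.e. $[\alpha_n,\alpha_{-n}]=n$.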
The derivations of these
simple formulae from the SH$^c$ commutators are nontrivial:
the defining commutation relations of SH$^c$ involve generators of degree $\pm 1,0$,
while $J_n, L_n$ have degree $n$.
A proof of the first line is given in \cite{r:SV};
the remaining commutation relations have to be derived recursively.
The confirmation of the Virasoro algebra is much more tedious, but we give
the explicit computation of $\left[L_2, L_{-2}\right]$ in appendix \ref{derivVir}.
This particular commutation relation is important,
since it implies that the central charge of the Virasoro algebra
is related to the central charges of SH$^c$ as,
\ba\label{cVir}
c= \frac{1}{\beta} \left(
- c_0^3 \xi^2 +c_0 -c_0 \xi + c_0 \xi^2
\right) =1+(N-1) (1-Q^2(N^2+N))\,,\quad
Q:=\sqrt\beta-\sqrt\beta^{-1}=-\beta^{-1/2}\xi\,.
\ea
This is the central charge for a combined system of the $W_N$ algebra
and a free scalar field. It motivates us to propose a free field representation,
\ba
J(z)&=& \sum_{n} J_n z^{-n-1} = \beta^{-1/2}\sum_{i=1}^N \partial_z \varphi^{(i)}(z)\,,\\
T(z) &=& \sum_{n} L_n z^{-n-2} =\sum_{i=1}^N\left(\frac12(\partial\varphi^{(i)}(z))^2 -Q\rho_i \partial^2 \varphi^{(i)}(z)\right)\,, \label{vir}
\ea
with
\ba
&&\varphi^{(i)} (z)=q^{(i)}+\alpha_0^{(i)} \log z -\sum_{n\neq0} \frac{\alpha^{(i)}_n}{n} z^{-n}\,,\\
&& [\alpha^{(i)}_n, \alpha^{(j)}_m]=n\delta_{n+m,0}\delta_{ij} \,,\quad
[\alpha^{(i)}_m,q^{(j)} ]=\delta_{m,0}\delta_{ij} \,,\\
&& \rho_i=\frac{N+1}{2}-i,\qquad i,j=1,\cdots, N\,.
\ea
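The two expressions for $c$ in (\ref{cVir}) and the free-field representation above can be cross-checked numerically. The sketch below assumes $\xi=1-\beta$ (consistent with $Q=-\beta^{-1/2}\xi$) and uses the standard free-boson result that a boson with background charge $\hat Q_i = Q\rho_i$ contributes $1-12\hat Q_i^2$ to the central charge (sign conventions for the background-charge term vary, but this choice matches (\ref{cVir})).

```python
import math

def c_from_shc(N, beta):
    """(1/beta)(-c0^3 xi^2 + c0 - c0 xi + c0 xi^2) with c0 = N, xi = 1 - beta."""
    xi = 1.0 - beta
    return (-(N ** 3) * xi ** 2 + N - N * xi + N * xi ** 2) / beta

def c_closed_form(N, beta):
    """1 + (N-1)(1 - Q^2 (N^2 + N)) with Q = sqrt(beta) - 1/sqrt(beta)."""
    Q = math.sqrt(beta) - 1.0 / math.sqrt(beta)
    return 1.0 + (N - 1) * (1.0 - Q ** 2 * (N ** 2 + N))

def c_free_field(N, beta):
    """Sum over i of 1 - 12 (Q rho_i)^2, rho_i = (N+1)/2 - i, i = 1..N."""
    Q = math.sqrt(beta) - 1.0 / math.sqrt(beta)
    return sum(1.0 - 12.0 * (Q * ((N + 1) / 2.0 - i)) ** 2 for i in range(1, N + 1))
```

All three agree, e.g. for $N=2$, $\beta=2$ each evaluates to $-1$; at $\beta=1$ ($Q=0$) each reduces to $N$.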
Eqs.(\ref{defJ}, \ref{defV}) imply
\ba
J_0|\vec a, \vec Y\rangle&=& \frac{1}{\beta}\left(
-\sum_i (a_i-\xi) +\frac{\xi N(N-1)}{2} \right)|\vec a, \vec Y\rangle,\label{eigenJ}\\
L_0|\vec a, \vec Y\rangle &=&\left( |\vec Y|+\frac{1}{2\beta}
\left(
\sum_i (a_i-\xi)^2 +(1-N)\xi\sum_i (a_i-\xi) + \frac{\xi^2}{6} N(N-1)(N-2)
\right)\right)|\vec a, \vec Y\rangle\,.\label{eigenL}
\ea
We assign the eigenvalue of $\alpha_0^{(i)}$ on the state $|\vec a, \vec Y\rangle$
as
\ba
\alpha_0^{(i)} |\vec a, \vec Y\rangle = p_i|\vec a, \vec Y\rangle\,,
\quad p_i:= -\frac{a_i}{\sqrt{\beta}}-Qi\,,\quad i=1,\cdots, N\,.
\ea
With these assignments, we can rewrite
(\ref{eigenJ}, \ref{eigenL}) in the more familiar form,
\ba
J_0 |\vec a, \vec Y\rangle=
\frac{1}{\sqrt{\beta}}\left(\vec p\cdot \vec e\right)|\vec a, \vec Y\rangle,\quad
L_0|\vec a, \vec Y\rangle &=& \left(
|\vec Y| +\Delta(\vec p)\right)|\vec a, \vec Y\rangle,\qquad
\Delta(\vec p) :=\frac{\vec p\cdot(\vec p-2Q\vec \rho )}{2}\,.
\ea
$\Delta(\vec p)$ is
the conformal dimension of the vertex operator $:e^{\vec p\cdot \vec\varphi}:$
for (\ref{vir}).
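The equality of (\ref{eigenL}) with the form $|\vec Y|+\Delta(\vec p)$ is an algebraic identity in $\vec a$ and $\beta$. A numerical sketch (assuming $\xi=1-\beta$, with $p_i$ and $\rho_i$ as defined above):

```python
import math

def L0_shc(a, Y_size, beta):
    """L0 eigenvalue from (eigenL): |Y| plus the quadratic polynomial in (a_i - xi)."""
    N = len(a)
    xi = 1.0 - beta
    s1 = sum((ai - xi) ** 2 for ai in a)
    s2 = sum(ai - xi for ai in a)
    return Y_size + (s1 + (1 - N) * xi * s2
                     + xi ** 2 * N * (N - 1) * (N - 2) / 6.0) / (2.0 * beta)

def L0_free(a, Y_size, beta):
    """|Y| + Delta(p) with p_i = -a_i/sqrt(beta) - Q i and Delta(p) = p.(p - 2 Q rho)/2."""
    N = len(a)
    Q = math.sqrt(beta) - 1.0 / math.sqrt(beta)
    p = [-ai / math.sqrt(beta) - Q * (i + 1) for i, ai in enumerate(a)]
    rho = [(N + 1) / 2.0 - (i + 1) for i in range(N)]
    delta = sum(pi * (pi - 2.0 * Q * ri) for pi, ri in zip(p, rho)) / 2.0
    return Y_size + delta
```

Both evaluations agree for any choice of $\vec a$, $|\vec Y|$ and $\beta>0$.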
\section{Nekrasov partition function as a correlator and Heisenberg-Virasoro constraints}
In the previous sections, we have seen that the recursion formulae
for the Nekrasov partition function take the form of a representation of
the SH$^c$ algebra in terms of the orthonormal basis.
We have also seen that the SH$^c$ algebra contains the Heisenberg and
Virasoro algebras as subalgebras.
We observe that the AGT conjecture can be proved once we establish the relation
\ba\label{conj}
Z(\vec a, \vec Y; \vec b, \vec W;\mu)=\langle \vec a + \nu \vec e, \vec Y|V(1) |\vec b + (\xi + \nu+\mu) \vec e, \vec W\rangle,
\ea
with the orthonormal basis $|\vec a, \vec Y\rangle$ defined in the previous sections
and a vertex operator $V$. The existence of such a basis was formally proved in
\cite{Fateev:2011hq}.
The vertex operator factorizes as $V=\tilde V^H V^W$, where $V^W$ is the
vertex operator for the $W_N$ algebra and $\tilde V^H$ describes
the contribution of the $U(1)$ factor. Furthermore, it is known that
the correlator of Toda theory is calculable only for special momenta,
\ba\label{msing}
\vec{p}= -\kappa \vec e_1 \quad \mbox{or}\quad \vec p=-\kappa \vec e_N,
\quad \vec e_1=(1,0,\cdots,0), \quad \vec e_N=(0,\cdots,0,1)\,.
\ea
The new parameter $\kappa$ will be determined later.
For convenience of the computation, we take the latter choice.
$\tilde V^H$ and $V^W$ in the decomposition are then written as,
\ba
\tilde V_\kappa^H=e^{-\frac{\kappa}{N}\vec e\cdot \vec \varphi},\quad
V_\kappa^W=e^{-\kappa(\vec e_N-\frac{\vec e}{N})\vec\varphi}\,,
\label{vmom}
\ea
for $\vec p$ taking the second value in (\ref{msing}).
This form of the $W_N$ vertex operator is also important in the context of the AGT conjecture:
$V_{\kappa}^W$ is the vertex operator corresponding to the so-called simple puncture.
As we will see, we need to modify $\tilde V^H$ to match the
behavior of the $U(1)$ factor in the AGT conjecture.
The relation (\ref{conj}) can be established once one proves that
the partition function $Z$ satisfies the recursion relation which
defines the right hand side \cite{Kanno:2012hk}. Namely,
\ba\label{ccond}
0=&&(\langle \vec a + \nu \vec e, \vec Y|D_{n,m}) V(1) |\vec b + (\xi + \nu+\mu) \vec e, \vec W\rangle\nonumber \\
&&-\langle \vec a + \nu \vec e, \vec Y|\left[ D_{n,m}, V(1)\right] |\vec b + (\xi + \nu+\mu) \vec e, \vec W\rangle
-\langle \vec a + \nu \vec e, \vec Y|V(1) (D_{n,m}|\vec b + (\xi + \nu+\mu) \vec e, \vec W\rangle)\,.
\ea
The right hand side gives the Ward identity
for the conformal block.
One may translate such a relation into a recursion relation
which $Z$ should satisfy,
if we use the relation (\ref{conj}).
It may sound strange to use the relation that is to be proved;
here we use it as the hypothesis of an inductive argument.
It is obvious that the relation (\ref{conj}) holds for the trivial case
$\vec Y=\vec W=\vec\emptyset$
with a proper definition of the inner product.
The general relation (\ref{conj}) is then obtained from the Ward identities
by induction.
As we have seen, the recursion relation for $Z$ exists for
$n=\pm 1$ and arbitrary $m\geq 0$; other relations should be derived from these.
On the right hand side of \eqref{ccond}, we have already defined
the action of $D_{n,m}$ on the basis. A problem is that
the commutation relation with the vertex operator cannot be written in
closed form except for the Heisenberg
and Virasoro generators. Thus we focus on these cases in the following,
though this is not sufficient to complete the inductive proof.
\subsection{Modified vertex operator for $U(1)$ factor}
While the definition of the vertex operator for the $W_N$ algebra is
well-known, that for the $U(1)$ factor $V^H$ is somewhat tricky
\cite{r:AGT, Fateev:2011hq, r:CO}.\footnote{We thank V. Pasquier for pointing out this important fact.}
We give a brief account of the construction.
The free boson field which describes the $U(1)$ part is built
from the operators $J_n$ defined in the previous section.
With
\ba
\alpha_n=\sqrt{\beta/N} J_n,
\ea
we define a free boson field as,
\ba
\phi(z)=q+\alpha_0 \log z -\sum_{n\neq0} \frac{\alpha_n}{n} z^{-n}
=\frac{\vec e \cdot \vec \varphi }{\sqrt{N}}\,.
\ea
We modify the vertex operator $\tilde V^H$ for the $U(1)$ factor as,
\ba
&& V^H_\kappa(z) =e^{\frac{1}{\sqrt N}(NQ-\kappa) \phi_-}
e^{\frac{-1}{\sqrt N}\kappa \phi_+}\,,\\
&& \phi_+=\alpha_0 \log z -\sum_{n=1}^\infty \frac{\alpha_n}{n} z^{-n}\,,\quad
\phi_-=q +\sum_{n=1}^\infty \frac{\alpha_{-n}}{n} z^{n}\,.
\ea
Such a definition of the modified vertex operator is needed to reproduce
the contribution of the $U(1)$ factor in the correlator,\footnote{
Compared with the reference \cite{Fateev:2011hq},
we include the zero mode to modify the commutator with the Virasoro generator.
}
\ba
\langle V^H_{\kappa_1}(z_1)\cdots V^H_{\kappa_n}(z_n)\rangle
=\prod_{i<j}(z_i-z_j)^{\frac{-\kappa_i (NQ-\kappa_j)}{N}}\,.
\ea
Due to the modification, the commutation relations with the $U(1)$ current
(Heisenberg generators) become asymmetric,
\ba
\label{u1 commute}
[\alpha_m, V^H_\kappa(z)]=\frac{1}{\sqrt N}(NQ-\kappa) z^m V^H_\kappa(z),\quad
[\alpha_{-n}, V^H_\kappa(z)]=\frac{-1}{\sqrt N}\kappa z^{-n} V^H_\kappa(z)\,,
\ea
for $m\geq0$, $n >0$.
Unlike for the standard definition of the
vertex operator $V=:e^{\kappa \phi}:$, the conformal
property of the modified vertex becomes rather complicated.
It is, however, helpful for understanding the recursion relations (\ref{e:sketch}),
which have some anomalous terms as well.
We define the Virasoro generator for the $U(1)$ factor as,
\ba
L^H_n=\frac12 \sum_m :\alpha_{n-m} \alpha_m: \,,
\ea
which has $c=1$.
The commutator of the total Virasoro generators $L_n=L_n^H+L_n^W$
with the vertex $V_\kappa(z)=V^H_\kappa(z) V^W_\kappa(z)$ becomes,
{\small
\ba
\left[L_n, V_\kappa(z)\right]&=&z^{n+1}\partial_z V_\kappa(z)
+\frac{(NQ-\kappa)^2}{2N}(n+1)z^{n} V_\kappa(z)
+\sqrt{N} Q\sum_{ m =0}^n z^{n-m} V_\kappa(z) \alpha_m
+(n+1)z^{n}\Delta_W V_\kappa(z) ,\,\,\, n\geq 0 \label{LnVp}\,,\\
\left[L_n, V_\kappa(z)\right]&=&z^{n+1}\partial_z V_\kappa(z)
+\frac{\kappa^2}{2N}(n+1)z^{n} V_\kappa(z)
-\sqrt{N} Q\sum_{ m =1}^{|n|} z^{n+m}\alpha_{-m} V_\kappa(z)
+(n+1)z^{n}\Delta_W V_\kappa(z),\,\,\, n<0 \,,\label{LnVm}
\ea
}
where
$
\Delta_W=\frac{\kappa(\kappa-Q(N-1))}{2}-\frac{\kappa^2}{2N}
$ is the conformal dimension of the $W_N$ vertex operator $V^W_\kappa$ with Toda momentum
$\vec p=-\kappa(\vec e_N-\frac{\vec e}{N})$ as in (\ref{vmom}).
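One can check numerically that $\Delta_W$ quoted here equals $\Delta(\vec p)$ of the previous section evaluated at the momentum $\vec p=-\kappa(\vec e_N-\vec e/N)$. A small sketch, assuming $Q=\sqrt\beta-1/\sqrt\beta$ and $\rho_i=(N+1)/2-i$ as above:

```python
import math

def delta_from_momentum(kappa, N, beta):
    """Delta(p) = p.(p - 2 Q rho)/2 for p = -kappa (e_N - e/N)."""
    Q = math.sqrt(beta) - 1.0 / math.sqrt(beta)
    p = [-kappa * ((1.0 if i == N else 0.0) - 1.0 / N) for i in range(1, N + 1)]
    rho = [(N + 1) / 2.0 - i for i in range(1, N + 1)]
    return sum(pi * (pi - 2.0 * Q * ri) for pi, ri in zip(p, rho)) / 2.0

def delta_W(kappa, N, beta):
    """Delta_W = kappa(kappa - Q(N-1))/2 - kappa^2/(2N), as quoted in the text."""
    Q = math.sqrt(beta) - 1.0 / math.sqrt(beta)
    return kappa * (kappa - Q * (N - 1)) / 2.0 - kappa ** 2 / (2.0 * N)
```

The agreement follows from $\sum_i p_i^2=\kappa^2(1-1/N)$ and $\sum_i p_i\rho_i=\kappa(N-1)/2$ for this momentum.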
The anomaly due to the modification of the $U(1)$ vertex manifests itself through the third term
on the right hand side.
For convenience in later calculations, we write out the commutators for the special cases $n=\pm1, 0$.
\ba
\left[L_1, V_\kappa(z)\right]&=&z^{2}\partial_z V_\kappa(z)
+\frac{(NQ-\kappa)^2}{N}z V_\kappa(z)
+\sqrt{N} Q z V_\kappa(z) \alpha_0
+\sqrt{N} Q V_\kappa(z) \alpha_1
+2z\Delta_W V_\kappa(z)\,,
\\
\label{L0 commute}
\left[L_0, V_\kappa(z)\right]&=&z\partial_z V_\kappa(z)
+\frac{(NQ-\kappa)^2}{2N} V_\kappa(z)
+\sqrt{N} Q V_\kappa(z) \alpha_0
+\Delta_W V_\kappa(z)\,,
\\
\label{L1 commute}
\left[L_{-1}, V_\kappa(z)\right]&=&\partial_z V_\kappa(z) \,.
\ea
In the following,
we examine the relation (\ref{ccond}) with $D_{n,m}$ taken to be the Heisenberg
($U(1)$) and Virasoro generators.
\subsection{Ward identities for $U(1)$ currents}
We start by examining the case $n=0$, which can be interpreted as
the Ward identity for $J_{\pm 1}$,
\ba
&&(\langle \vec a + \nu \vec e, \vec Y|J_{\pm 1}) V(1) |\vec b + (\xi + \nu+\mu) \vec e, \vec W\rangle
-\langle \vec a + \nu \vec e, \vec Y|V(1) (J_{\pm 1}|\vec b + (\xi + \nu+\mu) \vec e, \vec W\rangle) \nn \\
&&=\langle \vec a + \nu \vec e, \vec Y|\left[ J_{\pm 1}, V(1)\right] |\vec b + (\xi + \nu+\mu) \vec e, \vec W\rangle \,.\label{eWJpm}
\ea
By the definition of the representation of the SH$^c$ algebra (\ref{defJ}, \ref{eDmY}, \ref{DW1})
and of the vertex operator (\ref{u1 commute}),
the action of $J_1$ on the bra and ket bases and its commutator with the vertex operator are given as,
\ba
\langle \vec a + \nu \vec e, \vec Y| J_1&=&(-\sqrt{\beta})^{-1}\sum_{p=1}^N\sum_{k=1}^{f_p+1}
\langle \vec a + \nu \vec e,\vec Y^{(k,+),p}| \Lambda^{(k,+)}_p (\vec Y),\\
J_1 |\vec b + (\xi + \nu+\mu) \vec e,\vec W\rangle &=&
(-\sqrt{\beta})^{-1}\sum_{q=1}^N\sum_{\ell=1}^{\tilde f_q} \Lambda^{(\ell,-)}_q (\vec W)|\vec b + (\xi + \nu+\mu) \vec e, \vec W^{(\ell,-),q}\rangle\,, \\
\left[J_1,V_\k(1)\right]&=&\frac{1}{\sqrt \b}(NQ-\kappa) V_\kappa(1)\,.
\ea
Plugging them into (\ref{eWJpm}) gives,
\ba \label{id.J1}
(-\sqrt{\beta})^{-1}\sum_{p=1}^N\sum_{k=1}^{f_p+1} \Lambda^{(k,+)}_p (\vec Y)
\langle \vec a + \nu \vec e,\vec Y^{(k,+),p}| V(1) |\vec b + (\xi + \nu+\mu) \vec e, \vec W\rangle\nonumber \\
-(-\sqrt{\beta})^{-1}\sum_{q=1}^N\sum_{\ell=1}^{\tilde f_q} \Lambda^{(\ell,-)}_q (\vec W)
\langle \vec a + \nu \vec e, \vec Y|V(1)|\vec b + (\xi + \nu+\mu) \vec e, \vec W^{(\ell,-),q}\rangle \\
=\frac{1}{\sqrt \b}(NQ-\kappa)\langle \vec a + \nu \vec e, \vec Y|V(1) |\vec b + (\xi + \nu+\mu) \vec e, \vec W\rangle\, \nn.
\ea
Using the assumption (\ref{conj}), the left hand side of (\ref{id.J1}) becomes
\ba
\sqrt{\beta}^{-1}\delta_{-1,0} Z(\vec a, \vec Y; \vec b, \vec W;\mu)\,.
\ea
On the other hand,
taking into account the $U(1)$ charge conservation condition, which is derived from the action of $J_0$,
\ba
\label{k requirement}
\kappa=-{\beta}^{-1/2} \sum_{p=1}^N (a_p-b_p-\mu)\,,
\ea
the right hand side of (\ref{id.J1}) becomes
\ba
\frac{1}{\sqrt \b}(NQ-\kappa)Z(\vec a, \vec Y; \vec b, \vec W;\mu)=\beta^{-1}\sum_{p=1}^{N}(a_p-b_p-\mu-\xi)Z(\vec a, \vec Y; \vec b, \vec W;\mu)
=\sqrt{\beta}^{-1}U_{-1,0}Z(\vec a, \vec Y; \vec b, \vec W;\mu) \,.
\ea
Thus the Ward identity for $J_1$
is proved, since it is identified with the recursion formula
$\delta_{-1,0} Z_{\vec Y,\vec W} -U_{-1,0} Z_{\vec Y,\vec W}=0$.
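The step from $(NQ-\kappa)/\sqrt\beta$ to $\beta^{-1}\sum_p(a_p-b_p-\mu-\xi)$ uses only (\ref{k requirement}) and $NQ=-N\xi/\sqrt\beta$. A numerical sketch (assuming $\xi=1-\beta$, with hypothetical sample values of $\vec a$, $\vec b$, $\mu$) confirms the identity:

```python
import math

def kappa_from_charge(a, b, mu, beta):
    """U(1) charge conservation (k requirement): kappa = -beta^{-1/2} sum_p (a_p - b_p - mu)."""
    return -sum(ap - bp - mu for ap, bp in zip(a, b)) / math.sqrt(beta)

def j1_commutator_coef(a, b, mu, beta):
    """(NQ - kappa)/sqrt(beta): the coefficient produced by [J_1, V_kappa(1)]."""
    N = len(a)
    Q = math.sqrt(beta) - 1.0 / math.sqrt(beta)
    return (N * Q - kappa_from_charge(a, b, mu, beta)) / math.sqrt(beta)

def u_minus10_coef(a, b, mu, beta):
    """beta^{-1} sum_p (a_p - b_p - mu - xi): the claimed U_{-1,0} coefficient."""
    xi = 1.0 - beta
    return sum(ap - bp - mu - xi for ap, bp in zip(a, b)) / beta
```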
The derivation of the identity for $J_{-1}$
can be performed similarly.
The actions of $J_{-1}$ are given by
\ba
\langle \vec a + \nu \vec e, \vec Y| J_{-1}&=&(-\sqrt{\beta})^{-1}\sum_{p=1}^N\sum_{k=1}^{f_p}
\langle \vec a + \nu \vec e,\vec Y^{(k,-),p}| \Lambda^{(k,-)}_p (\vec Y),\\
J_{-1} |\vec b + (\xi + \nu+\mu) \vec e,\vec W\rangle &=&
(-\sqrt{\beta})^{-1}\sum_{q=1}^N\sum_{\ell=1}^{\tilde f_q+1} \Lambda^{(\ell,+)}_q (\vec W)|\vec b + (\xi + \nu+\mu) \vec e, \vec W^{(\ell,+),q}\rangle\,, \\
\left[J_{-1},V_\k(1)\right]&=&-\frac{1}{\sqrt \b}\kappa V_\kappa(1)\,.
\ea
By the assumption (\ref{conj}), we have
\ba
\label{J-1 difference}
\langle \vec a + \nu \vec e, \vec Y|J_{-1} V_\kappa(1) |\vec b + (\xi + \nu+\mu) \vec e, \vec W\rangle&-&
\langle \vec a + \nu \vec e, \vec Y| V_\kappa(1) J_{-1} |\vec b + (\xi + \nu+\mu) \vec e,\vec W\rangle \nn \\
&=&-\sqrt{\beta}^{-1}\d_{1,0}Z(\vec a, \vec Y; \vec b, \vec W;\mu) \,,
\ea
\ba \label{J-1 com}
\langle \vec a + \nu \vec e, \vec Y|[J_{-1}, V_\kappa(1)] |\vec b + (\xi + \nu+\mu) \vec e,\vec W\rangle=-{\beta}^{-1/2} \kappa
Z(\vec a, \vec Y; \vec b, \vec W;\mu)\,.
\ea
In the last equality in (\ref{J-1 com}), we used the $U(1)$ charge conservation (\ref{k requirement}).
This shows the equivalence between
the recursion formula $\delta_{1,0} Z_{\vec Y,\vec W} -U_{1,0} Z_{\vec Y,\vec W}=0$
and the Ward identity for $J_{-1}$.
We note that the modification of the vertex operator is necessary
to produce the Ward identities for $U(1)$ currents.
\subsection{Ward identities for Virasoro generators}
We proceed to examine the equivalence between
the Ward identities for the Virasoro generators
and the recursion formulae.
The actions of $L_1$ on the basis and on the vertex operator are
evaluated from (\ref{defV}, \ref{DW1}--\ref{eD0Y}, \ref{LnVp}),
\ba
&&\langle \vec a + \nu \vec e, \vec Y| L_1=\sqrt{\beta}^{-1}\sum_{p=1}^N\sum_{k=1}^{f_p+1}
\langle \vec a + \nu \vec e, \vec Y^{(k,+),p}|( a_p+ \nu+A_{k}(Y_p))\Lambda^{(k,+),p} (\vec Y), \nn \\
&&L_1 |\vec b + (\xi + \nu+\mu) \vec e
,\vec W\rangle =
\sqrt{\beta}^{-1}\sum_{q=1}^N\sum_{\ell=1}^{\tilde f_q} \Lambda^{(\ell,-),q} (\vec W) (b_q+ \nu+\mu+B_{\ell}(W_q)+\xi)|\vec b + (\xi + \nu+\mu) \vec e
,\vec W^{(\ell,-),q}\rangle\,, \nn \\
&&\left[L_1, V_\kappa(1)\right]=\partial V_\kappa(1)
+\frac{(NQ-\kappa)^2}{N} V_\kappa(1)
+\sqrt{N} Q V_\kappa(1) \alpha_0
+\sqrt{N} Q V_\kappa(1) \alpha_1
+2\Delta_W V_\kappa(1) \nn
\,.
\ea
As we see from the derivative term in the commutator,
in order to evaluate the Virasoro Ward identities we need to evaluate
$\langle\vec a+\nu \vec e,\vec Y|\partial V(1) |\vec b+(\nu+\mu+\xi) \vec e,\vec W\rangle$.
Since the modified vertex operator is not a primary operator,
the correlator does not have the standard dependence
on the position of the vertex operator.
We can, however, derive this quantity through the Ward identity for $L_0$.
According to the actions of $L_0$ on the basis (\ref{eigenL}), we have
\ba
&&\frac{\langle \vec a + \nu \vec e, \vec Y|L_0 V_\kappa(z) |\vec b + (\xi + \nu+\mu) \vec e
,\vec W\rangle-
\langle \vec a + \nu \vec e, \vec Y| V_\kappa(z) L_0 |\vec b + (\xi + \nu+\mu) \vec e
,\vec W\rangle}{\langle \vec a + \nu \vec e, \vec Y|V_\kappa(z)|\vec b + (\xi + \nu+\mu) \vec e
,\vec W\rangle}\nn\\
&&~~~~~~~~=
\D \left(-\frac{\vec a + \nu \vec e}{\sqrt \b}-Q\vec \rho+Q\frac{N+1}{2}\vec e \right)+|\vec Y|
-\D\left(-\frac{\vec b + (\nu+\mu) \vec e}{\sqrt \b}-Q\vec \rho+Q\frac{N+1}{2}\vec e\right)-|\vec W|\,.
\label{direct L0}
\ea
On the other hand, from the commutator between $L_0$ and vertex operator (\ref{L0 commute}), we obtain
\ba
&&\left.
\frac{\langle \vec a + \nu \vec e, \vec Y|[L_0, V_\kappa(z)]|\vec b + (\xi + \nu+\mu) \vec e
,\vec W\rangle}
{\langle \vec a + \nu \vec e, \vec Y|V_\kappa(z)|\vec b + (\xi + \nu+\mu) \vec e
,\vec W\rangle}\right|_{z=1}
=\left.
\frac{\langle \vec a + \nu \vec e, \vec Y|z\partial_z V_\kappa(1) |\vec b + (\xi + \nu+\mu) \vec e
,\vec W\rangle}{\langle \vec a + \nu \vec e, \vec Y|V_\kappa(1)|\vec b + (\xi + \nu+\mu) \vec e
,\vec W\rangle}\right|_{z=1} \nn \\
&&-\left.\frac{\langle \vec a + \nu \vec e, \vec Y|\sqrt{N} Q V_\kappa(z) \alpha_0|\vec b + (\xi + \nu+\mu) \vec e
,\vec W\rangle}
{\langle \vec a + \nu \vec e, \vec Y|V_\kappa(z)|\vec b + (\xi + \nu+\mu) \vec e
,\vec W\rangle}\right|_{z=1}
-\frac{(NQ-\kappa)^2}{2N}
-\Delta_W \label{commu L0} \,.
\ea
Since (\ref{direct L0}) is identical to (\ref{commu L0}) by the Ward identity for $L_0$,
the derivative term can be evaluated as follows,
\begin{equation}
\begin{split}
\label{partial VZ}
&\frac{\langle \vec a + \nu \vec e, \vec Y|\partial_z V_\kappa(1) |\vec b + (\xi + \nu+\mu) \vec e
,\vec W\rangle}{\langle \vec a + \nu \vec e, \vec Y|V_\kappa(1)|\vec b + (\xi + \nu+\mu) \vec e
,\vec W\rangle}\\
=
&\D \left(-\frac{\vec a + \nu \vec e}{\sqrt \b}-Q\vec \rho+Q\frac{N+1}{2}\vec e \right)+|\vec Y|
-\D\left(-\frac{\vec b + (\nu+\mu) \vec e}{\sqrt \b}-Q\vec \rho+Q\frac{N+1}{2}\vec e\right)-|\vec W| \\
&-\frac{\xi}{\beta}\bigg (-\sum_{p=1}^N (b_p +\nu +\mu )
+N(N-1)\xi/2\bigg )
-\frac{(NQ-\kappa)^2}{2N}
-\frac{\kappa(\kappa-Q(N-1))}{2}+\frac{\kappa^2}{2N} \,.
\end{split}
\end{equation}
Now we are ready to check the recursion relation for Virasoro generators.
Applying (\ref{conj}), we obtain
\ba \label{L1-1}
&&\langle \vec a + \nu \vec e, \vec Y|L_1 V_\kappa(1) |\vec b + (\xi + \nu+\mu) \vec e
,\vec W\rangle-
\langle \vec a + \nu \vec e, \vec Y| V_\kappa(1) L_1 |\vec b + (\xi + \nu+\mu) \vec e
,\vec W\rangle \nn \\
&&= \sqrt\beta^{-1}\delta_{-1,1} Z(\vec a, \vec Y; \vec b, \vec W;\mu)
-Q\sum_{q=1}^N\sum_{\ell=1}^{\tilde f_q} \Lambda^{(\ell,-),q} (\vec W) Z(\vec a, \vec Y; \vec b, \vec W^{(\ell,-),q};\mu) \,.
\ea
Unlike in the $J_1$ case, an additional term appears because the action of the SH$^c$ algebra on the ket space is
slightly different from the action of $\d_{-1,1}$ on $Z(\vec a, \vec Y; \vec b, \vec W;\mu)$,
as we explained previously. The commutator part becomes
\ba \label{L1-2}
&&\langle \vec a + \nu \vec e, \vec Y|[L_1, V_\kappa(1)] |\vec b + (\xi + \nu+\mu) \vec e
,\vec W\rangle \nn \\
&&=\bigg\{\D \left(-\frac{\vec a + \nu \vec e}{\sqrt \b}-Q\vec \rho+Q\frac{N+1}{2}\vec e \right)+|\vec Y|
-\D\left(-\frac{\vec b + (\nu+\mu) \vec e}{\sqrt \b}-Q\vec \rho+Q\frac{N+1}{2}\vec e\right)-|\vec W| \nn \\
&&
+\frac{(NQ-\kappa)^2}{2N}
+\frac{\kappa(\kappa-Q(N-1))}{2}-\frac{\kappa^2}{2N} \bigg\} Z(\vec a, \vec Y; \vec b, \vec W;\mu)
-Q\sum_{q=1}^N\sum_{\ell=1}^{\tilde f_q} \Lambda^{(\ell,-),q} (\vec W) Z(\vec a, \vec Y; \vec b, \vec W^{(\ell,-),q};\mu) \nn \\
&&=\sqrt{\beta}^{-1} U_{-1,1} Z(\vec a, \vec Y; \vec b, \vec W;\mu)
-Q\sum_{q=1}^N\sum_{\ell=1}^{\tilde f_q} \Lambda^{(\ell,-),q} (\vec W) Z(\vec a, \vec Y; \vec b, \vec W^{(\ell,-),q};\mu) \,.
\ea
In the last equality we used (\ref{k requirement}). This also has an anomalous term, since the modified vertex is not a primary operator
and its commutator with $L_{1}$ contains the $V_\k J_1$ term.
However, the anomalies in (\ref{L1-1}) and (\ref{L1-2}) are identical and
the Ward identity for $L_1$ is reduced to
the recursion relation $\delta_{-1,1} Z_{\vec Y,\vec W} -U_{-1,1} Z_{\vec Y,\vec W}=0$
which is already proved.
We note that the identity holds only when
the vertex momentum takes the special value (\ref{msing}).
In the same way, for $L_{-1}$, we have
\ba
&&\langle \vec a + \nu \vec e, \vec Y|L_{-1} V_\kappa(1) |\vec b + (\xi + \nu+\mu) \vec e ,\vec W\rangle-
\langle \vec a + \nu \vec e, \vec Y| V_\kappa(1) L_{-1} |\vec b + (\xi + \nu+\mu) \vec e ,\vec W\rangle \nn \\
&&~~~=\sqrt {\beta}^{-1}\d_{1,1} Z(\vec a, \vec Y; \vec b, \vec W;\mu)\,, \label{L-1-1}\\
\nn \\
&&\langle \vec a + \nu \vec e, \vec Y|[L_{-1} ,V_\kappa(1)] |\vec b + (\xi + \nu+\mu) \vec e ,\vec W\rangle \nn \\
&&~~~=
\left\{\D \left(-\frac{\vec a + \nu \vec e}{\sqrt \b}-Q\vec \rho+Q\frac{N+1}{2}\vec e \right)+|\vec Y|
-\D\left(-\frac{\vec b + (\nu+\mu) \vec e}{\sqrt \b}-Q\vec \rho+Q\frac{N+1}{2}\vec e\right)-|\vec W| \right.\nn \\
&&~~~~~-\frac{\xi}{\beta}\bigg (-\sum_{p=1}^N (b_p +\nu +\mu )
+N(N-1)\xi/2\bigg )
-\frac{(NQ-\kappa)^2}{2N}
-\frac{\kappa(\kappa-Q(N-1))}{2}+\frac{\kappa^2}{2N} \biggr\} Z(\vec a, \vec Y; \vec b, \vec W;\mu) \nn \\
&&~~=\sqrt{\beta}^{-1} U_{1,1} Z(\vec a, \vec Y; \vec b, \vec W;\mu)\,. \label{L-1-2}
\ea
Again, we use (\ref{k requirement}) to derive the last equality in (\ref{L-1-2}).
Thus the recursion formula $\delta_{1,1} Z_{\vec Y,\vec W} -U_{1,1} Z_{\vec Y,\vec W}=0$ can be identified with the Ward identity.
These two consistency conditions are highly nontrivial
and strongly suggest that
the identities (\ref{e:sketch}) are a part of
the Ward identities for the extended conformal symmetry.
\section{Conclusion}
As a generalization of our previous work on the $\beta =1$ case
\cite{Kanno:2012hk}, we have established the
recursion relations for arbitrary $\beta$, which characterize the
Nekrasov partition function and give a partial proof of the AGT conjecture.
This project is considerably more involved than before,
and we had to introduce several new ideas to resolve the issues caused
by generic $\beta$. For example, we had to modify the
vertex operator to cancel the anomalous terms
in the recursion formulae.
We also needed the SH$^c$ algebra to define the basis.
We have now derived the conformal Ward identities for $J_{\pm 1}$ and $L_{\pm 1}$.
The derivation of similar formulae for the general Heisenberg and Virasoro
generators ($J_n$, $L_n$) should be straightforward
along the lines of \cite{Kanno:2012hk}, although the computation
may be tedious and lengthy. What remains is to confirm the Ward identities for
$L_{\pm 2}$; the identities for the other generators can be derived from them.
This would complete a proof of the
AGT conjecture for $SU(2)$ linear quiver gauge theories.
For the further generalization to $SU(N)$,
we expect that the existence of the recursion formulae for arbitrary $n$
in eq.(\ref{e:sketch}) implies
that the Ward identities which completely characterize the conformal block
may be reduced to eq.(\ref{e:sketch}) in the end, after a proper definition
of the vertex operator in SH$^c$.
We also note
that there has been important progress on the AGT relation \cite{DAHA-AGT}
for the two-parameter extension of $\cW_{1+\infty}$ \cite{r:DAHA}.
It is, however, nontrivial to derive the AGT relation from the results of DAHA
since the degeneration limit is singular.
We hope to come back to this issue in future work.
We would like to mention some recent papers relevant to this work.
In \cite{r:NSlimit},
the large $N$ limit ($N$ being the size of the Young tableaux) is taken
to relate the AGT conjecture to a matrix model. There should be a similar limit
in our recursion formula where the computation becomes much simpler
and the relation with the Nekrasov-Shatashvili limit
\cite{Nekrasov:2009rc} becomes clearer.
In \cite{Estienne:2011qk}, the correlator of primary fields is defined in terms of the null state condition of the $W_N$ algebra,
which in turn relates to the Calogero-Sutherland system.
Since the symmetry of the Jack polynomials is identified with SH$^c$, there should be an interesting connection with the current work.
In \cite{Tan:2013tq}, an M-theoretic approach to the AGT relation was
explored.
Furthermore, SH$^c$ seems to have interesting applications to
the quantum Hall effect and higher spin
theories \cite{r:Winf}. These may also be interesting directions.
\subsection*{Acknowledgments}
We would like to thank Yuji Tachikawa, Vincent Pasquier, and Junichi Shiraishi for their critical comments
at various stages. We also thank
Didina Serban, Sylvain Ribault, and Jean-Emile Bourgine for their interest in and comments
on some of the material in this paper.
YM is supported in part by KAKENHI (\#25400246). HZ is supported by the Global COE Program, the
Physical Sciences Frontier, MEXT, Japan.
SK is supported by JSPS Research Fellowships for Young Scientists.
SK and YM benefited from the Japan-France exchange
program SAKURA, which enabled a valuable stay at Saclay where this project started.
\section{INTRODUCTION}
Understanding the cosmic evolution of supermassive black holes (SMBHs) in galactic
centers and their connections with the evolution of their host galaxies is one of the
main goals in modern astronomy.
Active galactic nuclei (AGN) are the fundamental laboratories in those studies because
they are in the stage where the surrounding gas is accreting onto the SMBHs by releasing
their gravitational energy into radiation.
It is known that the central engines of AGN are surrounded by a dusty ``torus'' \citep{kro86}.
Since optical and ultraviolet emission is easily absorbed by the torus,
a complete survey of AGN including obscured populations is crucial to
elucidate the growth history of SMBHs.
The ultra-hard X-ray ($E > 10$~keV) band is extremely useful
for detecting the whole population of AGN because they have
1) stronger penetrating power than optical/UV and even hard
($E < 10$~keV) X-ray radiation and
2) very little contamination from the starburst emission.
Ultra-hard X-ray detectors such as \textit{Swift}/Burst Alert
Telescope \citep[BAT,][]{bar05},
IBIS/ISGRI on board \textit{INTEGRAL} \citep{win03}, FPMA/FPMB
on board \textit{NuSTAR}~\citep{har13} are therefore well suited
for those studies.
Among them, \textit{Swift}/BAT provides the most sensitive ultra-hard
X-ray survey of the whole sky in the 14--195 keV range.
Since most of the \textit{Swift}/BAT sources are local objects, they
have been observed by a large number of
multi-wavelength facilities, which allow us to study their properties.
Follow-up studies below 10~keV have
shown that the fraction of obscured ($N_{\rm H} \ge 10^{22}$~cm$^{-2}$) AGN
highly depends on the intrinsic X-ray luminosities \citep[e.g.,][]{bec09,bur11,
ric14, kaw16a}, and also proved to be an effective tool to identify previously
missed classes of AGN with small opening-angle tori \citep[e.g.,][]{ued07,
win09, egu09,egu11, ric11} and Compton-thick AGN \citep{gan15, ric15, tan16}.
Studies carried out by optical spectroscopy enable us to investigate
the properties of extended ($>100$~pc) narrow line regions
\citep[NLR; e.g., ][]{hai13, hai14a}
through analysis of the [OIII]$\lambda5007$ emission line
\citep{win09, ued15} and also offer the opportunity to estimate the black
hole masses through the broad line regions or velocity dispersion measurements.
\textit{Swift}/BAT AGN Spectroscopic Survey (BASS) is in progress to complete
the first large ($>$500) sample of BAT detected AGN with optical spectroscopy,
which enables us to constrain the nature of the NLR \citep{kos16, ber15, oh16}.
Cross-matching the \textit{Swift}/BAT AGN with all-sky mid-infrared
(MIR\footnote{Here we define near-IR (NIR) as
$\lambda<5$~$\mu$m and MIR as $5$~$\mu$m~$ < \lambda \le 25$~$\mu$m
since all of the all-sky IR surveys used here cover IR bands in
$5$~$\mu$m~$ < \lambda \le 25$~$\mu$m, whereas only the \textit{WISE} survey
covers IR bands at $\lambda < 5$~$\mu$m.}) catalogs can provide information
on the dust surrounding the central engine.
While the MIR sometimes suffers contamination from star formation,
for luminous AGN the MIR emission is dominated by re-emission from the torus dust
with $T\sim200$--$300$~K.
This fact has been used for new diagnostics identifying various AGN populations \citep{mat12b},
and it has been shown that clumpy torus models \citep[e.g.,][]{nen02, nen08a, nen08b,
hon06, hon10, kaw10,kaw11, sch08,sta12,sie15}
are favored over smooth torus models \citep{pie92,pie93,efs95}
to explain the fact that the MIR emission of AGN is almost isotropic
\citep{mul11,ich12, asm15, gar16b}.
Near-IR (NIR) observations ($\lambda<5$~$\mu$m) are useful for identifying
luminous obscured AGN because the NIR colors trace well the hot dust
emission which cannot be reproduced by starburst galaxies \citep{lac04,ste05,hic07,ima10,mat12a,don12,ste12,ass13,ich14}.
However, the color-color plots often miss the known X-ray selected
obscured/Compton-thick AGN due to the strong contamination from the
host galaxies in the NIR bands \citep[e.g.,][]{gan14,gan15}, especially
at the low-luminosity end \citep{kaw16b}.
Thus we are motivated to evaluate the NIR two-color selection efficiency
as a function of AGN luminosity, using a complete sample including
Compton-thick and low-luminosity AGN.
On the other hand, far-IR (FIR; $\lambda \ge 60$~$\mu$m) data shed light
on the starburst emission in the host galaxies of AGN.
Using IR Astronomical Satellite (\textit{IRAS}) FIR bands, \cite{rod87} found
that the FIR 60~$\mu$m to 100~$\mu$m colors of nearby AGN and starburst
galaxies are indistinguishable,
suggesting that most of the FIR emission of nearby AGN must originate from
star formation processes \citep[see also; ][]{net07,mul11}.
Using the clumpy torus model, \cite{ich15} demonstrated that torus model
emission is one order of magnitude smaller than the observed
\textit{Herschel} 70~$\mu$m data points, suggesting starburst emission is necessary
in order to reproduce them.
Utilizing \textit{Herschel}/PACS 70/160~$\mu$m bands, \cite{mel14} and
\cite{mus14} found that the FIR emission of most AGN is dominated
by the nuclear starburst within the $\sim2$~kpc scale, while there are
exceptions in which the emission is dominated by the AGN torus \citep[e.g.,][]{mat15a,gar16}.
\cite{hat10} also found that the \textit{Spitzer}/MIPS and \textit{Herschel}/SPIRE
two-color plot ($f_{250}/f_{70}$ and $f_{70}/f_{24}$) can separate AGN and
starburst galaxies because the 24~$\mu$m flux is dominated by the torus emission.
However, the SPIRE colors alone do not differ from those of non-AGN galaxies.
Thus, combining the MIR and FIR as well as the hard X-ray band enables
us to investigate the properties of torus, host galaxies, and
accretion processes in AGN,
all of which are the key components to understand SMBH/host galaxy connection.
We report here the NIR to FIR (3--500~$\mu$m) properties of
ultra-hard X-ray selected AGN from the \textit{Swift}/BAT 70-month catalog
\citep{bau13},
by cross-matching the AGN positions with the \textit{WISE}, \textit{AKARI},
\textit{IRAS} all-sky surveys as well as the \textit{Herschel} archived data.
The main advantage of the BAT 70 month survey compared to previous
\textit{Swift}/BAT surveys includes better sensitivity resulting
from a complete reprocessing of the data with an improved data reduction
pipeline and more exposure time.
Throughout the paper, we adopt $H_{0}=70.0$~km~s$^{-1}$~Mpc$^{-1}$,
$\Omega_{\rm M}=0.3$, and $\Omega_{\Lambda}=0.7$.
\begin{figure*}
\begin{center}
\includegraphics[width=4.6cm]{fig01a.pdf}~
\includegraphics[width=4.6cm]{fig01b.pdf}~
\includegraphics[width=4.6cm]{fig01c.pdf}~
\includegraphics[width=4.6cm]{fig01d.pdf}\\
\includegraphics[width=4.6cm]{fig01e.pdf}~
\includegraphics[width=4.6cm]{fig01f.pdf}~
\includegraphics[width=4.6cm]{fig01g.pdf}~
\includegraphics[width=4.6cm]{fig01h.pdf}\\
\includegraphics[width=4.6cm]{fig01i.pdf}~
\includegraphics[width=4.6cm]{fig01j.pdf}~
\includegraphics[width=4.6cm]{fig01k.pdf}~
\includegraphics[width=4.6cm]{fig01l.pdf}
\caption{
Redshift distribution of AGN in the \textit{Swift}/
BAT 70-month catalog (black solid line: 606 objects) and of those with IR
counterparts (red color area) at each wavelength. ``FIR detection'' represents
the counterparts detected in any of the FIR bands.
The number of detected sources at each IR band is compiled in Table~2.}\label{fig:zdist}
\end{center}
\end{figure*}
\section{Sample}
\subsection{{\it Swift}/BAT Hard X-ray Catalog}
Our initial sample contains the 834 AGN reported in the 70-month
\textit{Swift}/BAT catalog \citep{bau13,ric16}, of which
105 are blazars. Blazars were identified based on the Rome
BZCAT \citep{mas15} and on recent literature \citep{ric16}.
Of the remaining 729 sources, 697 sources have secure redshift information
as presented in \cite{ric16}.
Next, we removed galaxy pairs or interacting galaxies not resolved
in the BAT survey, because the BAT catalog of \cite{bau13} only
provides the counterpart name of the galaxy pair, not of the individual galaxy,
which makes it very difficult to obtain the IR counterparts for those sources.
Out of the 697 sources, 684 fulfilled this criterion.
Further, the 606 sources located at higher galactic latitudes ($|b|>10^{\circ}$)
were selected to reduce contamination in crowded regions during the
IR catalog matching.
In the following we refer only to these 606 non-blazar AGN as the parent sample.
The sample is local, with an average redshift of $\left<z\right>=0.055$ as shown
in Figure~\ref{fig:zdist} (black solid lines)\footnote{
M~81 is not shown in the Figures due to its low redshift ($z<10^{-3}$).}.
\cite{ric16} collected the X-ray spectra below 10~keV,
including the $\sim60$~unknown objects in the \textit{Swift}/BAT 70-month catalog,
then derived the best estimated line of sight column density ($N_{\rm H}$) and
absorption corrected BAT 14--195~keV luminosity ($L_{14-195}$).
Even in the energy band of the \textit{Swift}/BAT survey, the observed flux is
affected by obscuring material if the column density of the target exceeds
$N_{\rm H}> 10^{24}$~cm$^{-2}$ \citep[e.g., see Figure~1 of ][]{ric15}.
Thus, we use the absorption-corrected 14--195~keV luminosity ($L_{14-195}$)
in this study; all the values of $L_{14-195}$ and $N_{\rm H}$ will be
tabulated in \cite{ric16}.
\subsection{IR Catalogs}
The available NIR to FIR data were obtained as follows.
\subsubsection{ALLWISE Catalog}\label{sec:allwise_catalog}
The \textit{WISE} mission mapped the entire sky in the 3.4 (W1), 4.6 (W2),
12 (W3), and 22~$\mu$m (W4) bands.
In this study, we obtained the data from the latest \textsc{Allwise} catalog \citep{cut13}
that achieved better sensitivity than the \textit{WISE} all-sky data release \citep{wri10}
thanks to an improved data processing pipeline.
The catalog tabulates the pipeline-measured magnitudes based on profile fitting
on a $\sim6$~arcsec scale.
In this study, we use this profile-fit photometry magnitude.
The 5$\sigma$ sensitivities achieved by \textsc{Allwise} at 3.4, 4.6, 12, and 22~$\mu$m
are 0.054, 0.071, 1, and 6~mJy, respectively.
The positional accuracy based on cross-matching with the 2MASS catalog
is $\sim2$~arcsec at $3\sigma$ level.
We only use sources with the flux quality flag \verb|ph_qual=A|, corresponding to
a signal-to-noise ratio larger than 10.
We also check for contaminated and/or biased fluxes due to proximity
to an image artifact (e.g., diffraction spikes, scattered-light halos, and/or optical
ghosts) using the flag \verb|ccflag|.
A source unaffected by known artifacts is flagged as \verb|ccflag=0|.
We thus only use sources with \verb|ccflag=0| in each band.
\subsubsection{\textit{AKARI} Point Source Catalogs}
To further obtain the IR properties of the \textit{Swift}/BAT AGN,
we use the \textit{AKARI} All-Sky Survey Point Source Catalogs
(AKARI-PSC). \textit{AKARI} carries two instruments, the infrared camera
\citep[IRC;][]{ona07} operating in the 2--26 $\mu$m band (centered at 9 $\mu$m
and 18 $\mu$m) and the Far-Infrared Surveyor \citep[FIS;][]{kaw07} operating in
the 50--200 $\mu$m band (centered at 65, 90, 140, and 160 $\mu$m).
The \textit{AKARI} catalogs cover the brightest sources ($> 1$~Jy in the $12$~$\mu$m band)
whose fluxes \textsc{Allwise} could not measure properly due to saturation.
The AKARI-PSC achieved the flux
sensitivities of 0.05, 0.09, 2.4, 0.55, 1.4, and 6.3 Jy with position accuracies of
6 arcsec at the 9, 18, 65, 90, 140, and 160 $\mu$m bands,
respectively. In our study, we only utilize sources with the quality
flag of \verb|fqual=3|, whose flux measurements are reliable\footnote{
See the release note of the AKARI/FIS catalog for the details of \texttt{fqual}.
It is recommended not to use the flux data when \texttt{fqual <= 2} for a reliable scientific analysis.\\
\url{http://irsa.ipac.caltech.edu/data/AKARI/documentation/AKARI-FIS\_BSC\_V1\_RN.pdf}}.
\subsubsection{\textit{IRAS} Catalogs}
The \textit{IRAS} mission performed an unbiased all-sky survey in the 12, 25,
60, and 100 $\mu$m bands. The typical position accuracy at 12
and 25 $\mu$m is 7 arcsec and 35 arcsec in the scan and cross-scan
directions, respectively \citep{bei88}. In this paper we use the two largest catalogs,
the \textit{IRAS} Point Source Catalog (\textit{IRAS}-PSC) and the \textit{IRAS} Faint Source
Catalog (\textit{IRAS}-FSC). \textit{IRAS} achieved 10$\sigma$ point source
sensitivities better than 0.7 Jy over the whole sky. The
\textit{IRAS}-FSC contains even fainter sources with fluxes of $>$0.2
Jy in the 12 and 25~$\mu$m bands.
We use only \textit{IRAS} sources with \texttt{fqual=3} (the highest quality)
\footnote{
see \cite{bei88} for the definition of \texttt{fqual} in the \textit{IRAS} catalogs.
False detections may be included when \texttt{fqual <= 2}. }.
\subsubsection{\textit{Herschel} BAT AGN Catalog}
The \textit{Swift}/BAT AGN were also observed
with \textit{Herschel}/Photodetector Array Camera and
Spectrometer \citep[PACS;][]{pog10} and
Spectral and Photometric Imaging Receiver \citep[SPIRE;][]{gri10}.
\cite{mel14} compiled a catalog of 313 nearby ($z<0.05$) sources
observed with \textit{Herschel}/PACS.
The PACS covers the two bands at the center wavelength of
70~$\mu$m (60--85~$\mu$m) and 160~$\mu$m (130--210~$\mu$m) simultaneously.
The PSF is 1.4 and 2.85~arcsec at 70~$\mu$m and 160~$\mu$m, respectively.
Considering the median redshift ($z \sim 0.025$) of the catalog, PACS 70~$\mu$m
PSF covers $\sim$2.8~kpc, which contains most of the host galaxy component.
\cite{shi16} reported that 293 nearby ($z<0.05$) sources were observed
with \textit{Herschel}/SPIRE as part of a cycle-1 open time program.
In addition, 20 other sources were included from separate
programs to complete the sample.
The PSF is 18, 24, and 36~arcsec for 250, 350, and 500~$\mu$m, respectively.
\subsection{Cross Matching of BAT AGN with the IR Catalogs}
We first compile the IR counterparts by cross matching the BAT AGN
positions with IR catalogs. In this study, the IR luminosity $L_{X~\mu{\rm m}}$
represents the observed frame luminosity $\lambda L_{\lambda} (X\mu{\rm m})$ (erg~s$^{-1}$),
where $3.4 \le X \le 500$.
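As an illustration of how such observed-frame luminosities follow from a catalog flux density and a redshift under the adopted cosmology ($H_0=70$~km~s$^{-1}$~Mpc$^{-1}$, $\Omega_{\rm M}=0.3$, $\Omega_\Lambda=0.7$), the following is a minimal Python sketch; the function names are ours and not part of any pipeline used in this paper.

```python
import math

H0 = 70.0                # km/s/Mpc, adopted in this paper
OMEGA_M, OMEGA_L = 0.3, 0.7
C_KMS = 2.99792458e5     # speed of light [km/s]
MPC_CM = 3.0857e24       # 1 Mpc in cm

def luminosity_distance_cm(z, steps=10000):
    """Flat LCDM luminosity distance via trapezoidal integration of 1/E(z)."""
    if z <= 0.0:
        return 0.0
    dz = z / steps
    integral = 0.0
    for i in range(steps + 1):
        zi = i * dz
        e = math.sqrt(OMEGA_M * (1.0 + zi) ** 3 + OMEGA_L)
        w = 0.5 if i in (0, steps) else 1.0
        integral += w / e
    integral *= dz
    d_c = (C_KMS / H0) * integral        # comoving distance [Mpc]
    return (1.0 + z) * d_c * MPC_CM      # luminosity distance [cm]

def lambda_L_lambda(flux_jy, wavelength_um, z):
    """Observed-frame lambda*L_lambda [erg/s] from a flux density in Jy."""
    nu = 2.99792458e14 / wavelength_um   # observed frequency [Hz]
    f_nu = flux_jy * 1.0e-23             # Jy -> erg/s/cm^2/Hz
    d_l = luminosity_distance_cm(z)
    return 4.0 * math.pi * d_l ** 2 * nu * f_nu   # nu*L_nu = lambda*L_lambda
```

For the median redshift of the sample ($z\sim0.055$), this gives a luminosity distance of $\sim245$~Mpc; a 1~Jy source at 12~$\mu$m then corresponds to $L_{12~\mu{\rm m}}\sim2\times10^{45}$~erg~s$^{-1}$.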
\subsubsection{NIR bands}\label{sect:NIRselection}
We determine the NIR (3.4 and 4.6~$\mu$m) counterparts of the
\textit{Swift}/BAT AGN through the positional matching with the \textsc{Allwise}.
We applied a cross-matching radius of 2~arcsec, informed by the cross-matches
with the 2MASS catalog as described in Section~\ref{sec:allwise_catalog}.
Using \textsc{Allwise}, we found 591 NIR counterparts out of
606 sources within the 2~arcsec radius.
Considering the far superior sensitivity of \textsc{Allwise} compared with that of the BAT
survey (see Appendix A), essentially all of the sources should be detected.
Therefore, we rechecked the \textsc{Allwise} counterparts of the
remaining 15 undetected sources by expanding the matching radius.
As a result, 13 sources were found within a 5~arcsec radius,
and we confirmed that these detections are real based on visual
inspection of the DSS optical and \textsc{Allwise} images.
For one of the two remaining undetected sources, NGC~3516,
the counterpart was classified as an \textsc{Allwise} reject table source%
\footnote{Sources not included in the \textsc{Allwise} catalog
because of low signal-to-noise ratios or because they are spurious detections
of image artifacts.}.
The other source (3C~59) was not detected even when expanding the
search radius up to 15~arcsec.
After visual inspection of the DSS optical
and XMM/PN X-ray images, we found that the coordinates of 3C~59
in the BAT catalog trace the jet lobe component, not the central
object. We therefore used the coordinates of the
central object obtained from Simbad, (RA, Dec)=(31.7592, 29.512775),
for this target, and found the \textit{WISE} counterpart successfully.
In total, 605 counterparts are identified in the \textsc{Allwise} catalog.
Out of the 605 sources, 602 and 603 sources fulfill \texttt{ph\_qual=A}
at 3.4~$\mu$m and 4.6~$\mu$m, respectively.
After selecting the sources which fulfill \texttt{ccflag = 0},
the number of IR counterparts at 3.4~$\mu$m and 4.6~$\mu$m
turns out to be 549 ($\sim90.6$\%) and 548 ($\sim90.4$\%)
sources, respectively.
The number of IR counterparts in the NIR band (either 3.4 or 4.6~$\mu$m)
is 560 ($\sim92.4$\%) sources.
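The two-pass positional matching used above (a 2~arcsec radius, widened to 5~arcsec for the few remaining sources) can be sketched as follows. This is an illustrative pure-Python implementation with function names of our own choosing, not the actual matching pipeline; in practice a library routine (e.g. astropy's \texttt{SkyCoord.match\_to\_catalog\_sky}) would be used.

```python
import math

def angular_sep_arcsec(ra1, dec1, ra2, dec2):
    """Great-circle separation in arcsec between two (RA, Dec) pairs in degrees."""
    ra1, dec1, ra2, dec2 = map(math.radians, (ra1, dec1, ra2, dec2))
    # Haversine formula, numerically stable for small separations
    sd = math.sin((dec2 - dec1) / 2.0) ** 2
    sr = math.cos(dec1) * math.cos(dec2) * math.sin((ra2 - ra1) / 2.0) ** 2
    return math.degrees(2.0 * math.asin(math.sqrt(sd + sr))) * 3600.0

def nearest_within(src, catalog, radius_arcsec):
    """Return (index, sep) of the nearest catalog entry within radius, else None."""
    best = None
    for i, (ra, dec) in enumerate(catalog):
        sep = angular_sep_arcsec(src[0], src[1], ra, dec)
        if sep <= radius_arcsec and (best is None or sep < best[1]):
            best = (i, sep)
    return best

def crossmatch(sources, catalog, radius_arcsec=2.0, fallback_arcsec=5.0):
    """Two-pass match: primary radius first, then a wider fallback radius."""
    matches = {}
    for j, src in enumerate(sources):
        hit = nearest_within(src, catalog, radius_arcsec)
        if hit is None:
            hit = nearest_within(src, catalog, fallback_arcsec)
        if hit is not None:
            matches[j] = hit
    return matches
```

A source offset from its counterpart by 3~arcsec, for example, fails the 2~arcsec pass but is recovered in the 5~arcsec fallback pass.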
\begin{figure*}
\begin{center}
\includegraphics[width=7.4cm]{fig02a.pdf}~
\includegraphics[width=7.4cm]{fig02b.pdf}~\\
\includegraphics[width=7.4cm]{fig02c.pdf}~
\includegraphics[width=7.4cm]{fig02d.pdf}\\
\includegraphics[width=7.4cm]{fig02e.pdf}~
\includegraphics[width=7.4cm]{fig02f.pdf}~\\
\includegraphics[width=7.4cm]{fig02g.pdf}~
\includegraphics[width=7.4cm]{fig02h.pdf}
\caption{
Flux-flux relations of AGN between two IR bands.
The red filled circles represent sources detected in both bands.
The size of each circle is proportional to the redshift of the source.
The solid line represents the best-fit relation and the red shaded area represents
the 1$\sigma$ dispersion of each linear scaling relation.
The number of sources used for the fitting and the 1$\sigma$ error are given at
the bottom right of each panel.
(Left) From top to bottom,
\textit{AKARI} 9~$\mu$m vs. \textit{WISE}~12~$\mu$m,
\textit{AKARI} 18~$\mu$m vs. \textit{WISE}~22~$\mu$m,
\textit{AKARI} 65~$\mu$m vs. \textit{Herschel}/PACS 70~$\mu$m,
\textit{IRAS} 100~$\mu$m vs. \textit{AKARI} 90~$\mu$m,
(Right) from top to bottom,
\textit{IRAS} 12~$\mu$m vs. \textit{WISE}~12~$\mu$m,
\textit{IRAS} 25~$\mu$m vs. \textit{WISE}~22~$\mu$m,
\textit{IRAS} 60~$\mu$m vs. \textit{Herschel}/PACS 70~$\mu$m,
\textit{AKARI} 160~$\mu$m vs. \textit{Herschel}/PACS 160~$\mu$m.}\label{fig:LIRvsLIR}
\end{center}
\end{figure*}
\subsubsection{MIR bands}
We determine the MIR (9--25~$\mu$m) counterparts of the \textit{Swift}/BAT AGN
by cross matching the \textsc{Allwise}, \textit{AKARI}, and \textit{IRAS}
catalogs in this order.
Our primary goal is to obtain photometric data in the IR band
as completely as possible for the \textit{Swift/}BAT selected AGN.
We give the highest priority to the \textsc{Allwise} catalog because of
its 50~times better sensitivity than \textit{AKARI}, which allows us to
search for fainter sources in the MIR all-sky view.
Then we cross matched the sources undetected by \textsc{Allwise} with \textit{AKARI}.
\textit{AKARI} covers the brighter sources which are saturated in \textsc{Allwise}
due to its high sensitivity, and has the advantage of a 2--4 times
higher sensitivity than the \textit{IRAS} survey. While all the \textit{IRAS}
sources should be detected with \textit{AKARI}, the flux quality flags of \textit{AKARI} for
very nearby ($z < 0.005$) objects turn out to be bad due to their
extended morphology when fitted with a single Gaussian.
In such cases, we rather refer to the \textit{IRAS} data with good flux quality,
which have $\sim11$ times worse angular resolution than \textit{AKARI},
since we aim to measure the total MIR flux from both nucleus and host galaxy
in a uniform way for the whole AGN sample.
The positional matching of the optical counterparts of the \textit{Swift}/BAT AGN
with IR survey catalogs was already discussed in Section~\ref{sect:NIRselection}
for \textsc{Allwise} and in \cite{ich12} for \textit{AKARI} and \textit{IRAS},
and we follow here the same approach.
For the MIR bands, the number of detections is compiled in the second column
of Table~\ref{tab:LIRvsLx}.
Here the detection at 12~$\mu$m represents the detection either at
\textit{AKARI}~9~$\mu$m, \textit{WISE}~12~$\mu$m, or \textit{IRAS}~12~$\mu$m;
22~$\mu$m represents either at \textit{AKARI}~18~$\mu$m, \textit{WISE}~22~$\mu$m,
or \textit{IRAS}~25~$\mu$m;
the MIR band represents either at 12~$\mu$m or 22~$\mu$m band defined above.
Finally, we obtained 601 ($\sim99.2$\%) counterparts in at least one MIR band.
Thus, the identification in the MIR bands is almost as complete as in the NIR bands.
The redshift distribution of the IR counterparts at each wavelength is shown in Figure~\ref{fig:zdist}.
\subsubsection{FIR bands}
The FIR counterparts of the \textit{Swift}/BAT AGN at $60 \le \lambda \le 160$~$\mu$m
were gathered by cross-matching the \textit{AKARI}, \textit{IRAS}, and \textit{Herschel}
in this order.
Our goal is to obtain photometric data for the full host galaxy emission in the FIR band.
We gave \textit{AKARI} counterparts the highest priority because of their
better sensitivity with respect to the \textit{IRAS} surveys.
Then we matched the positions of the sources undetected by \textit{AKARI} with \textit{IRAS}.
Considering the better sensitivity of \textit{AKARI}/FIS,
one might expect that \textit{IRAS} would not add many sources.
However, \textit{AKARI} often misses emission from sources with extended morphology
due to its better angular resolution.
In such cases, \textit{IRAS} gives the best-quality flux estimate by measuring the
whole FIR flux from the host galaxies.
Finally, the remaining distant or faint sources detected by neither \textit{AKARI} nor
\textit{IRAS} were cross matched with the \textit{Herschel}/PACS catalog of \cite{mel14}.
We cross matched the sources by referring to the counterpart source names reported by
\cite{mel14} and the \textit{Swift}/BAT catalog.
For the FIR counterparts at $250 \le \lambda \le 500$~$\mu$m, only the \textit{Herschel}/SPIRE
catalog covers those wavelengths.
We again cross matched the sources by referring to the counterpart source names given
in \cite{shi16} and the \textit{Swift}/BAT catalog.
For the FIR bands, 388 ($\sim64.2$\%), 241 ($\sim 39.9$\%), 89 ($\sim14.7$\%),
229 ($\sim 37.9$\%), 213 ($\sim 35.3$\%), 170 ($\sim28.1$\%), and 107 ($\sim17.7$\%)
sources are compiled at
70 (either at \textit{IRAS}~60~$\mu$m, \textit{AKARI}~65~$\mu$m, or \textit{Herschel}~70~$\mu$m),
90 (either at \textit{AKARI}~90~$\mu$m or \textit{IRAS}~100~$\mu$m),
140~$\mu$m (at \textit{AKARI}~140~$\mu$m),
160~$\mu$m (at \textit{Herschel}~160~$\mu$m or \textit{AKARI}~160~$\mu$m),
250~$\mu$m (at \textit{Herschel}/SPIRE~250~$\mu$m),
350~$\mu$m (at \textit{Herschel}/SPIRE~350~$\mu$m), and
500~$\mu$m (at \textit{Herschel}/SPIRE~500~$\mu$m),
respectively. Those numbers are also compiled in the second column of Table~\ref{tab:LIRvsLx}.
Finally, 402 ($\sim 66.3$\%) IR counterparts are obtained in at least one FIR band.
Thus, the identification in the FIR bands is not yet complete, but a statistically significant
sample has been compiled for this analysis.
\subsection{Luminosity Correlation among IR catalogs}
Since the four IR catalogs have slightly different central wavelengths and
aperture sizes, we investigate the correlation between the
\textit{AKARI/IRAS/WISE/Herschel} luminosities, using only the sources
detected in two separate observations.
For the MIR bands, we choose \textit{AKARI} 9~$\mu$m and \textit{IRAS}~12~$\mu$m
for \textit{WISE} 12~$\mu$m, and \textit{AKARI} 18~$\mu$m and \textit{IRAS}
25~$\mu$m for \textit{WISE} 22~$\mu$m, respectively, because of the proximity
of the central wavelengths.
For the FIR bands, \textit{AKARI} 65~$\mu$m and \textit{IRAS} 60~$\mu$m for
\textit{Herschel}/PACS 70~$\mu$m, \textit{IRAS} 100~$\mu$m for \textit{AKARI}
90~$\mu$m, \textit{AKARI} 160~$\mu$m for \textit{Herschel}/PACS 160~$\mu$m.
Figure~\ref{fig:LIRvsLIR} displays the flux correlations between the two bands,
showing that the correlations in flux between different IR catalogs are tight and significant.
The standard deviation of the flux-ratio distribution between each pair of bands
is given in the caption of Figure~\ref{fig:LIRvsLIR}.
Figure~\ref{fig:LIRvsLIR} also shows that the flux relations are largely independent
of redshift.
Although within the scatter, the flux relations between \textit{WISE} 12,
22~$\mu$m and \textit{IRAS} 12, 25~$\mu$m show a systematic $z$ dependence:
the flux ratio $f_{\rm IRAS}/f_{\rm WISE}$ is anti-correlated with $z$.
This could be due to the much larger aperture of \textit{IRAS} compared with that of \textit{WISE},
whereby MIR emission from the host galaxy slightly contaminates the \textit{IRAS}
fluxes of low-$z$ sources.
Based on the flux correlation, we derive the empirical formula to convert
the flux of each band into \textit{WISE}~12~$\mu$m, 22~$\mu$m,
Herschel/PACS 70~$\mu$m, 160~$\mu$m, and \textit{AKARI} 90~$\mu$m as follows:
{\small
\begin{align*}
\input{Eq01.tex}
\end{align*}
}
\begin{figure*}
\begin{center}
\includegraphics[width=9.0cm]{fig03a.pdf}~
\includegraphics[width=9.0cm]{fig03b.pdf}~
\caption{
Luminosity correlations between the luminosities at 12 (left) and 22~$\mu$m (right)
($L_{12~\mu{\rm m}}, L_{22~\mu{\rm m}}$) and 14--195~keV ($L_{14-195}$).
Blue/red crosses represent type-1/type-2 AGN, respectively.
The black solid line represents the slope obtained in this study, given by Equation (1) for
the left panel and Equation (2) for the right panel.
The black dashed line represents the slope obtained in this study using only high-luminosity
sources with $\log L_{14-195}>43$.
The other dashed lines represent the studies of \cite{fio09} (orange),
\cite{ste15} (gray), \cite{gan09} (purple), \cite{mat15b} (green), and \cite{asm15} (cyan),
respectively.
The studies with local samples (mostly $z<0.1$ and a main luminosity range of
$41<\log L_{\rm X}<46$) are this work, \cite{gan09}, and \cite{asm15}.
The studies with high-$z$ samples (mostly $0.1 <z < 5$ and $42<\log L_{\rm X}<46$)
are \cite{fio09}, \cite{mat15b}, and \cite{ste15}.
The studies with type-1 AGN are \cite{fio09} and \cite{ste15}, and those with both
type-1 and type-2 AGN are \cite{gan09}, \cite{mat15b}, \cite{asm15}, and this work.
See Table~\ref{tab:LIRvsLx_wlit} for more details.
}\label{fig:LIRvsLx}
\end{center}
\end{figure*}
Assuming that AGN that are not detected in the highest-priority band
(\textit{WISE}~12~$\mu$m, 22~$\mu$m, \textit{Herschel}/PACS 70~$\mu$m, 160~$\mu$m,
and \textit{AKARI} 90~$\mu$m) but are detected in the second- or third-priority bands
follow the same correlations as examined here, we
apply the conversion factors reported above to derive
their 12, 22, 70, 90, and 160~$\mu$m luminosities. In this way
we can discuss the luminosity correlation with the 14--195~keV band in
a uniform way regardless of the matched catalogs.
All the IR properties of the parent sample AGN are summarized in Table~\ref{tab:IRcatalog}.
\subsection{AGN type}
To examine the IR properties of different AGN populations,
we divide the sample into two types based on the column density ($N_{\rm H}$)
obtained from the X-ray spectral fitting by \cite{ric16}.
The AGN with $N_{\rm H} < 10^{22}$~cm$^{-2}$ are called
``X-ray type-1'' (hereafter type-1), and the AGN with $N_{\rm H} \ge 10^{22}$~cm$^{-2}$
are called ``X-ray type-2'' (hereafter type-2).
The sample is divided into 311 type-1 AGN and 293 type-2 AGN.
The AGN type of each source will be tabulated in \cite{ric16}.
\section{Results and Discussion}
\subsection{Correlation between the MIR and Ultra-Hard X-Ray Luminosities}\label{sect:LxvsLIR}
Figure~\ref{fig:LIRvsLx} shows the luminosity correlations between the
MIR (12 and 22~$\mu$m) luminosities ($L_{12\mu{\rm m}}, L_{22\mu{\rm m}}$)
and $L_{14-195}$ in the luminosity range of $10^{40} < L_{14-195}<10^{47}$~erg~s$^{-1}$
\footnote{M~81 and NGC~4395 are not shown in Figure~\ref{fig:LIRvsLx} and
~\ref{fig:LFIRvsLbol} due to their low luminosities
of ($\log L_{12~\mu{\rm m}}, \log L_{22 \mu{\rm m}}, \log L_{90~\mu{\rm m}},
\log L_{14-195}) = (39.20, 39.22, 39.76,38.50)$ for M~81 and
$(\log L_{12~\mu{\rm m}}, \log L_{22 \mu{\rm m}}, \log L_{90~\mu{\rm m}}, \log L_{14-195})
= (39.88, 40.27, 41.48, 40.77)$ for NGC~4395.}.
Blue and red crosses represent type-1 and type-2 AGN, respectively.
The error bars are not shown in Figure~\ref{fig:LIRvsLx} and Figure~\ref{fig:LFIRvsLbol}
since the uncertainties of both the infrared and the 14--195~keV luminosities are
vanishingly small ($<10$\%) in the log-log plot.
Since our motivation is to determine the slope of the luminosity relation between
$L_{\rm MIR}$ and $L_{14-195}$ as two independent variables,
we apply ordinary least-squares (OLS) bisector fits, which minimize the perpendicular
distance from the fitted line to the data points \citep{iso90}.
The OLS bisector fits (of the form
$\log (L_{\rm MIR}/10^{43}~{\rm erg}~{\rm s}^{-1}) =
(a \pm \Delta a) + (b \pm \Delta b) \log (L_{14-195}/10^{43}~{\rm erg}~{\rm s}^{-1})$,
where $\Delta a$ and $\Delta b$ are the standard deviations of $a$ and $b$, respectively)
give the correlations of
{\small
\begin{align}\label{Eq:LbatvsLIR_all}
\input{Eq02.tex }
\end{align}
}
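For reference, the OLS bisector of \citet{iso90} treats both variables symmetrically by bisecting the OLS($Y|X$) and OLS($X|Y$) lines. The following is our own minimal sketch of the slope and intercept formulae, not the fitting script actually used for Equations (1)--(2).

```python
import math

def ols_bisector(x, y):
    """OLS bisector slope and intercept (Isobe et al. 1990).

    Bisects the OLS(Y|X) and OLS(X|Y) regression lines, treating
    x and y symmetrically as two independent variables.
    """
    n = len(x)
    mx = sum(x) / n
    my = sum(y) / n
    sxx = sum((xi - mx) ** 2 for xi in x)
    syy = sum((yi - my) ** 2 for yi in y)
    sxy = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
    b1 = sxy / sxx                     # OLS(Y|X) slope
    b2 = syy / sxy                     # OLS(X|Y) slope
    b = (b1 * b2 - 1.0
         + math.sqrt((1.0 + b1 ** 2) * (1.0 + b2 ** 2))) / (b1 + b2)
    return b, my - b * mx              # slope, intercept
```

By construction the result is symmetric: swapping $x$ and $y$ yields the reciprocal slope, which is the property that motivates using the bisector here rather than a plain OLS($Y|X$) fit.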
The significance of the correlations between the luminosities (and fluxes) in the two bands
can be assessed by performing Spearman's rank tests. The results are summarized in Table~\ref{tab:LIRvsLx}.
We find that both the luminosity--luminosity and flux--flux correlations
between the NIR and MIR bands and the 14--195~keV band are highly significant.
In the Seyfert galaxy class with $L_{14-195}<10^{44}$~erg~s$^{-1}$,
the correlation between the MIR and X-ray bands was first reported using
ground-based telescopes with low spatial resolution \citep{elv78, kra01}, and then
by several authors thanks to the new windows opened by the \textit{ISO}
satellite \citep{lut04, ram07} and by \textit{Spitzer} \citep{saz12}.
Studies based on ground-based high-spatial-resolution MIR photometry
were first compiled by \cite{hor06}, then expanded independently by \cite{lev09}
and \cite{gan09}, and finally by \cite{asm15}.
The correlation parameters of \cite{gan09} and \cite{asm15}
are the most widely used because they include Compton-thick AGN.
\cite{gan09} report a steeper slope than ours, with $b=1.11 \pm 0.04$, whereas
\cite{asm15} report a slope consistent with ours within the uncertainties,
with $b=0.97 \pm 0.03$.
Both slopes are overplotted in Figure~\ref{fig:LIRvsLx} and also compiled
in Table~\ref{tab:LIRvsLx_wlit}.
Since both studies used the 2--10~keV luminosity as the X-ray luminosity, we apply
the conversion factor $L_{14-195}/L_{2-10} = 2.1$, under the assumption
of $\Gamma = 1.9$, for the overplotted relations in Figure~\ref{fig:LIRvsLx}.
Hereafter, we always apply this conversion factor when estimating $L_{14-195}$ from $L_{2-10}$.
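The conversion factor of 2.1 follows from integrating an unabsorbed power-law spectrum over the two bands; a quick check, under the stated assumption of $\Gamma = 1.9$:

```python
def band_ratio(e1_lo, e1_hi, e2_lo, e2_hi, gamma):
    """Energy-flux ratio of two bands for a power-law photon spectrum
    N(E) ~ E^-gamma, whose energy-flux integrand is E^(1-gamma)."""
    p = 2.0 - gamma  # exponent of the integrated energy flux
    return (e1_hi**p - e1_lo**p) / (e2_hi**p - e2_lo**p)

# L(14-195 keV) / L(2-10 keV) for Gamma = 1.9
ratio = band_ratio(14.0, 195.0, 2.0, 10.0, 1.9)  # ~2.10
```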
Host-galaxy contamination of the MIR emission, especially at the low-luminosity
end, could affect the slope values of $b=0.96$--$0.98$ found in our study.
If we use only the sources with $L_{14-195}>10^{43}$~erg~s$^{-1}$,
the luminosity relations become
{\small
\begin{align}
\input{Eq03.tex}\label{Eq:LMIRvsLbat_high},
\end{align}
}
which is slightly steeper, but within the 2$\sigma$ uncertainty of \cite{asm15}.
The slope obtained by \cite{gan09} depends on the choice of algorithm; the
value becomes $b=1.00 \pm 0.08$ when using the same method as \cite{asm15}.
Therefore, our results are fully consistent with the high-spatial-resolution
results at the high-luminosity end with $L_{14-195}>10^{43}$~erg~s$^{-1}$.
While our results, based on poorer spatial resolution, suffer from contamination
from the host galaxies at the lower-luminosity end,
our study has the advantage of high completeness ($\sim98$\%) in the
MIR bands of the ultra-hard-X-ray flux-limited \textit{Swift}/BAT 70 month catalog,
which is the least biased against absorption up to $N_{\rm H} \simeq 10^{24}$~cm$^{-2}$.
Comparison with the higher-luminosity (and also higher-$z$) studies in the literature,
with $L_{14-195} \ge 10^{44}$~erg~s$^{-1}$, can also provide important information.
We compile the luminosity correlations of those studies in Table~\ref{tab:IRXcatalog}.
\cite{fio09} derived the observed rest-frame 6~$\mu$m and 2--10~keV luminosities of
$\sim80$ X-ray-selected type-1 AGN in the COSMOS and CDF-S fields obtained from
\textit{Chandra} and \textit{Spitzer} satellites.
The slope is quite steep, with $b=1.39$ for $\log L_{6~\mu {\rm m}} \ge 43$.
Although the detailed fitting algorithm was not mentioned in their study,
there is a trend of an increasing MIR--X-ray ratio at the high-luminosity end with
$\log (L_{6~\mu {\rm m}} / {\rm erg}~{\rm s}^{-1} ) > 44$.
Further evidence of this trend was obtained by \cite{ste15} (plotted with the gray dotted
line in Figure~\ref{fig:LIRvsLx}) using SDSS DR5, tracing high-$z$ QSOs mainly
at $2<z<4$.
They used a quadratic function to reproduce the X-ray--MIR luminosity relation.
If the trends above are real, the steeper slope suggests that X-ray emission is
relatively inefficient at the high-luminosity end.
This is consistent with several observations showing that the SED shape of AGN
changes with luminosity and Eddington ratio \citep[][]{vas07}.
The existence of X-ray weak sources at high bolometric luminosities has been
recently confirmed by \cite{ric16a}, who found that Hot Dust Obscured Galaxies
\citep[hot DOGs; ][]{wu12} seem to have X-ray luminosities one or two orders
of magnitude below the value expected from the local X-ray--MIR correlation.
Since our sample is ultra-hard X-ray selected, our study might miss such X-ray
weak sources at the high-luminosity end.
These sources would lie at the faint end in 14--195~keV flux, but not in MIR flux.
Since the BAT 14--195~keV flux limit is over one order of magnitude shallower than the MIR flux limits,
a deeper survey is necessary to assess the relation between the MIR and 14--195~keV
luminosities at the high-luminosity end (see also Figure~\ref{fig:fIRvsfx_supp}).
Another implication of this trend is that AGN might have a larger obscured fraction
in the high-luminosity regime and/or in the high-$z$ universe \citep{buc15}.
This might be true considering X-ray studies showing that the fraction of Compton-thick
AGN increases with redshift from $z=0$ to $z=2$ \citep{bri12}.
We do not attempt to solve this question at high luminosities here,
but it is worth mentioning another possible origin of the slope differences
among the high-luminosity studies.
One effect that could steepen the slope is the use of the
6~$\mu$m band instead of 12~$\mu$m.
\cite{asm15} pointed out that a hot dust component could dominate
around 6~$\mu$m \citep{mor12}, rather than the typical warm torus
component, which peaks around 20--30~$\mu$m \citep[e.g.,][]{mul11}.
Contamination from the host galaxies at 6~$\mu$m
is also a possible concern. \cite{mat15b} revised the
$L_{6 \mu {\rm m}}$--$L_{2-10}$ luminosity relations
by using a complete, flux-limited sample of $>200$ AGN
from the Bright Ultra-hard \textit{XMM-Newton} Survey and \textit{WISE}.
They obtained absorption-corrected X-ray luminosities and also
derived the 6~$\mu$m AGN luminosity by spectral decomposition
of the torus and host-galaxy components.
They applied a Bayesian linear-regression approach with errors
on both axes, using the IDL routine \verb|linmix_err|
with the X-ray luminosity as the independent variable, the same
method used by \cite{asm15}.
They report a slope of $b=0.99 \pm 0.03$ (overplotted as the green
dotted line in Figure~\ref{fig:LIRvsLx}) up to luminosities of
$L_{2-10} \sim 10^{46}$~erg~s$^{-1}$.
This agrees well with the results of \cite{asm15} and ours ($b=0.96\pm0.02$).
Note that the study of \cite{mat15b} might also miss very luminous sources
due to its limited X-ray survey volume, possibly making the slope shallower.
Equations~(1) and (2) also show that the intercept $a$ of $L_{22\ \mu{\rm m}}$
is higher than that of $L_{12 \mu{\rm m}}$.
This tendency can be explained by two possibilities:
1) the torus emission peaks (in $\nu F_{\nu}$ units) at 20--40~$\mu$m
rather than at $\sim10$~$\mu$m, as suggested both by observations
\citep{wee05,buc06,mul11,asm11,asm14, ich15, ful16} and by clumpy torus
models \citep{nen08a,hon10, sch08};
2) the star formation component contaminates more at longer
wavelengths \citep[e.g.,][]{net07,mul11}.
The former effect contributes more strongly at the high-luminosity end
($L_{14-195} > 10^{44}$~erg~s$^{-1}$), where the relative star formation
contamination should be smaller, considering that the slope of the
$L_{\rm FIR}$--$L_{14-195}$ relation is shallower ($b<0.94$).
On the other hand, the latter contributes more strongly for lower-luminosity sources
($L_{14-195} < 10^{44}$~erg~s$^{-1}$).
This will be discussed again in Section~\ref{Sect:WISEcolor}.
\begin{figure*}
\begin{center}
\includegraphics[width=8.5cm]{fig04a.pdf}~
\includegraphics[width=8.5cm]{fig04b.pdf}\\
\caption{
Luminosity correlations between the luminosities at 70 and
90~$\mu$m ($ L_{70~\mu{\rm m}}, L_{90~\mu{\rm m}}$) and the
bolometric luminosity ($L_{\rm bol}$) estimated from Equation~(5).
Blue and red colors represent type-1 and type-2 AGN, respectively.
The black solid lines represent the slopes of Equations (6)
and (7), respectively.
The black dashed lines represent the slopes of Equations (8) and (9),
respectively.
The navy dot-dashed line represents the slope obtained by \cite{net09}.
The gray solid lines represent the pure-AGN sequences reported in Equations (11) and (12),
respectively.
}\label{fig:LFIRvsLbol}
\end{center}
\end{figure*}
\begin{figure*}
\begin{center}
\includegraphics[width=8.5cm]{fig05a.pdf}~
\includegraphics[width=8.5cm]{fig05b.pdf}\\
\caption{
Mean luminosity correlations between the luminosities at 70 and
90~$\mu$m ($L_{70~\mu{\rm m}}, L_{90~\mu{\rm m}}$) and
bolometric luminosity ($L_{\rm bol}$) estimated from Equation (5).
Green color bin represents the mean measurements of 70 and
90~$\mu$m luminosity as a function of bolometric luminosity.
Black solid bin represents the mean measurements of bolometric
luminosity as a function of 70 and 90~$\mu$m luminosity, respectively.
The gray points represent AGN detected in both bands and are the
same data points shown in Figure~\ref{fig:LFIRvsLbol}.
The green/black dashed lines represent the slopes obtained by the least-squares bisector fit for
the green/black binned samples, respectively.
The cyan dashed line represents the fitted line of local X-ray selected AGN
obtained from \cite{ros12}.
The solid green line represents a fit to the relationship using the same function
as \cite{ros12}, but applied to our binned data shown by the green points.
}\label{fig:LFIRvsLbol_bin}
\end{center}
\end{figure*}
\subsection{Correlation between the FIR and AGN bolometric luminosities}\label{Sect:LFIRvsLbol}
The correlation between the FIR and AGN bolometric luminosity ($L_{\rm bol}$) could shed light on the link
between the star formation activity of the AGN host galaxies and the accretion rate of AGN.
Since the accretion disk emission cannot be directly measured for all the sources in our sample,
a bolometric correction must be applied to $L_{14-195}$ to estimate the bolometric luminosity.
\cite{mar04} account for variations in AGN SEDs by using the well-known anti-correlation
between the optical-to-X-ray spectral index ($\alpha_{\rm OX}$) and the UV luminosity.
They then renormalize the template SED to a particular $\alpha_{\rm OX}$ to obtain
the bolometric correction as a function of AGN luminosity.
Therefore, they assume a varying relation between the optical/UV and X-ray luminosities,
rather than a constant value \citep[e.g.,][]{elv94}.
A similar approach is followed by \cite{hop07}, who, however, used a template SED
generated from averages of real SEDs in different wavebands.
There is a systematic difference: the bolometric correction of \cite{hop07} is roughly
a factor of $\sim1.5$ larger than that of \cite{mar04}.
This is because \cite{hop07} define $L_{\rm bol}$ as
the integral of the observed template SED, including the
emission from the accretion disk reprocessed into the MIR,
whereas \cite{mar04} only integrate the optical-UV and X-ray emission radiated
by the accretion disk itself and the hot corona, respectively.
Since the accretion rate is better related to the total luminosity directly
produced by the accretion process, $L_{\rm bol}$ defined
by \cite{mar04} is better suited for our study. Hence,
we apply the bolometric correction of \cite{mar04} with
\begin{align}\label{Eq:LbolvsL14_195}
\log L_{\rm bol} = 0.0378 (\log L_{\rm 14-195})^2 - 2.03 \log L_{\rm 14-195} + 61.6.
\end{align}
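The correction above can be evaluated as in the following minimal sketch (the function name is ours):

```python
def log_lbol(log_l14_195):
    """Bolometric luminosity from the 14-195 keV luminosity
    (both as log10 of erg/s), following the Marconi et al. (2004)
    correction as written above."""
    return 0.0378 * log_l14_195**2 - 2.03 * log_l14_195 + 61.6

# e.g. an AGN with log L_14-195 = 44 (erg/s)
lb = log_lbol(44.0)  # -> 45.4608
```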
Figure~\ref{fig:LFIRvsLbol} shows the FIR luminosities
($L_{70~\mu{\rm m}},L_{90~\mu{\rm m}}$) plotted against $L_{\rm bol}$.
The least-squares bisector fits to the FIR versus bolometric AGN luminosity
with a power law give the correlations of
{\small
\begin{align}
\input{Eq04.tex}\label{Eq:LbolvsLFIR_eq}.
\end{align}
}
Since our relations are based on the FIR-detected sample, which is not complete
(65\% for the 70~$\mu$m band and 45\% for the 90~$\mu$m band), as shown in Figure~\ref{fig:zdist},
we check the dependence on completeness by restricting
the redshift to $z<0.076$ for the 70~$\mu$m band and $z<0.022$ for the 90~$\mu$m band,
achieving 80\% completeness of the IR counterparts in each case.
The relation at each band is given as
{\small
\begin{align}
\input{Eq05.tex}.
\end{align}
}
The slopes here are slightly steeper than those reported in Equations (6) and (7),
but consistent within the 1$\sigma$ uncertainties (see also Figure~\ref{fig:LFIRvsLbol}).
Therefore, we conclude that the dependence on completeness is weak.
We also estimate the effective luminosity range imposed by
the limited volume of the \textit{Swift}/BAT AGN sample.
This is because AGN with the highest luminosities are rare and
so might not be found within the limited \textit{Swift}/BAT survey volume.
Likewise, faint AGN will be missing from the sample because of the
\textit{Swift}/BAT ultra-hard X-ray flux limit.
First, based on the flux limit of the \textit{Swift}/BAT survey, $f_{14-195} = 1.34
\times 10^{-11}$~erg~s$^{-1}$~cm$^{-2}$ \citep{bau13},
and the conversion factor $f_{2-10} = f_{14-195}/2.1$,
we estimate the survey volume as a function of the luminosity at the flux limit,
$V(L_{2-10}) = (4/3) \pi D_{\rm max}(L_{2-10})^3$~Mpc$^3$, where $D_{\rm max}(L_{2-10})$
is the maximum distance at which a source of that luminosity exceeds the flux limit.
Next, we calculate the expected number of AGN detections as a function of
$L_{2-10}$ using the 2--10~keV luminosity function of \cite{ued14} with a
$z$-dependence of $\propto (1+z)^4$.
Then, we define the effective luminosity range in which
the expected number of detected AGN per dex in $L_{2-10}$ is greater than 10,
which is sufficient to measure the luminosity relation.
The result is $40.8 < \log L_{2-10} < 45.5$
which is equivalent to $41.1 < \log L_{14-195} < 45.8$ and $41.8 < \log L_{\rm bol} < 47.7$.
Therefore, a deeper and/or wider survey is needed to measure the relations between
FIR and AGN luminosity both at $\log L_{\rm bol} < 41.8$ and $\log L_{\rm bol}>47.7$.
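The flux-limit part of the volume argument above can be sketched as follows. This is a deliberately simplified version: it assumes Euclidean geometry and omits the luminosity-function step (the full calculation uses the \cite{ued14} luminosity function), so it only illustrates how the survey volume scales with luminosity.

```python
import numpy as np

F_LIM = 1.34e-11       # Swift/BAT 14-195 keV flux limit (erg s^-1 cm^-2)
CM_PER_MPC = 3.086e24  # cm per Mpc

def survey_volume_mpc3(l14_195):
    """Euclidean volume within which a source of 14-195 keV
    luminosity l14_195 (erg/s) stays above the BAT flux limit
    (no cosmological corrections)."""
    d_max = np.sqrt(l14_195 / (4.0 * np.pi * F_LIM)) / CM_PER_MPC  # Mpc
    return (4.0 / 3.0) * np.pi * d_max**3

v = survey_volume_mpc3(1e44)  # ~6.5e7 Mpc^3 for log L = 44
```

Multiplying such volumes by the space density per luminosity bin from a luminosity function yields the expected detection counts used to define the effective luminosity range.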
In Figure~\ref{fig:LFIRvsLbol}, we also show the relation of \cite{net09} (navy dashed line).
Our local sample reproduces well the relation of \cite{net09}, derived using local optical type-2 AGN ($z \le 0.2$).
The difference between our study and that of \cite{net09} is that while we used
$L_{\rm FIR}$ as a proxy for star formation, \cite{net09} used the break at 4000~\AA~(D4000)
to estimate the star formation luminosity.
\cite{mat15a} reported that, while the D4000-based SFR is not well determined at lower
SFRs since the calibration was based on starburst galaxies, the systematic difference
between the D4000-based SF luminosity and $L_{\rm FIR}$ is small compared to
the broad distribution between the SF luminosity and $L_{\rm AGN}$.
Even using $L_{\rm FIR}$, we find a result consistent with that of \cite{mat15a},
who used a sample of SDSS DR7 local AGN at $z<0.2$.
Recent studies have reported that ``mean'' or ``binned'' $L_{\rm FIR}$--$L_{\rm bol}$
relations show a flattened (or even horizontal) pattern in each redshift bin \citep[e.g.,][]{sha10,ros12,sta15}
for $0<z<2.5$.
However, such a flattened pattern is not seen in our sample when we use individual
luminosity measurements instead of the mean luminosities.
To check this, we also apply a binned analysis to our 70~$\mu$m and 90~$\mu$m
detected sources; the results are shown in Figure~\ref{fig:LFIRvsLbol_bin}.
Each plotted bin is the median value in the corresponding luminosity bin, with error bars showing
the interpercentile range containing 80\% of the sample.
Green points represent the mean measurements of the 70 and 90~$\mu$m
luminosities averaged in bins of bolometric luminosity, while black points represent
the mean bolometric luminosity averaged in bins of 70 and 90~$\mu$m luminosity,
respectively.
The dashed lines represent the relations estimated from the least-squares bisector fits.
As shown in the Figure, the slope ($b=0.54 \pm 0.08$ for 70~$\mu$m
and $b=0.56 \pm 0.07$ for 90~$\mu$m) of green dashed line (binned with $L_{\rm bol}$)
is significantly shallower than that of the black dashed line (binned with $L_{\rm FIR}$;
$b=0.88 \pm 0.05$ for 70~$\mu$m and $b=0.82 \pm 0.06$ for 90~$\mu$m).
To further model the trend of the green points, we apply the curve fit used in \cite{ros12}.
The function is written as
\begin{align}
\log L_{\rm FIR} = \log \left( 10^{b \log L_{\rm bol} + \log L_{\rm b} - b \log L_{\rm c}} + 10^{\log L_{\rm b}} \right)
\end{align}
with three free parameters ($b$, $\log L_{\rm b}$, $\log L_{\rm c}$),
but we fix the slope to $b=0.78$ following \cite{ros12}.
$L_{\rm b}$ is a constant mainly determined by the flat $L_{\rm FIR}$
level where $L_{\rm bol}$ is small, and
$L_{\rm c}$ represents the value of $L_{\rm bol}$ where the rising term becomes equal to $L_{\rm b}$.
We fit the green points using a non-linear least squares fitting procedure (curve\_fit in Python).
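As a sketch of this procedure, the function above can be fit with scipy's curve_fit; the binned points here are synthetic and the parameter values are illustrative only, not our measurements.

```python
import numpy as np
from scipy.optimize import curve_fit

B_FIXED = 0.78  # slope fixed following Rosario et al. (2012)

def rosario(log_lbol, log_lb, log_lc):
    """The function above: log L_FIR flattens to log_lb at low
    L_bol and rises with slope B_FIXED above L_c."""
    return np.log10(10.0 ** (B_FIXED * log_lbol + log_lb - B_FIXED * log_lc)
                    + 10.0 ** log_lb)

# synthetic binned data drawn from the model itself
rng = np.random.default_rng(1)
x = np.linspace(42.0, 47.0, 12)
y = rosario(x, 42.8, 45.0) + rng.normal(0.0, 0.02, x.size)

# non-linear least squares with two free parameters
(log_lb_fit, log_lc_fit), _ = curve_fit(rosario, x, y, p0=[43.0, 45.5])
```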
The result is shown with the green solid line in Figure~\ref{fig:LFIRvsLbol_bin}.
The model nicely reproduces the flattened relation, but with a systematically smaller value
($\log L_{\rm b}=42.79 \pm 0.02$ for 70~$\mu$m and $\log L_{\rm b}=42.64 \pm 0.03$
for 90~$\mu$m) than that of the local AGN line ($\log L_{\rm b}= 43.57 \pm 0.08$) in \cite{ros12}.
This would be because our sample contains deeper data from \textit{Herschel},
whereas \cite{ros12} used shallower \textit{IRAS}-FSC data
for their local AGN and did not apply a stacking analysis of
non-detected sources for that sample.
Overall, whereas averaging the IR luminosity (in bins of bolometric luminosity, shown by the green
points) nicely reproduces the flattened trend
reported in the literature \citep{sha10, ros12},
the black binned points still show a rising trend, almost the same as the relation
obtained from the individual objects.
This could originate from the different timescales of SF and AGN activity.
\cite{hic14} calculated the mean $L_{\rm FIR}$--$L_{\rm bol}$ relation in two ways.
They constructed a simple model population of SF galaxies in
which SF and BH growth are correlated in galaxies
across a range of $0.25<z<1.25$, with a $z$-dependent distribution
in SFR from the FIR luminosity function derived by \cite{gru13}.
They assigned an observed average SFR to BH accretion rate ratio of
3000 \citep[e.g.,][]{raf11, mul12, che13}, and also assumed that the
instantaneous accretion rate relative to the average follows
a given fiducial luminosity distribution.
They first derived the average $L_{\rm bol}$ for galaxies in each
$L_{\rm FIR}$ bin and compared with the results obtained by
\cite{sym11} and \cite{che13} for a range of $0.25<z<1.25$.
This reproduces well the rising relation as shown in our study.
They next computed the average $L_{\rm FIR}$ as a function of $L_{\rm bol}$.
This then reproduces well the flattened relation.
This result strongly suggests a picture in which SF
and BH accretion are closely connected over long timescales,
but this correlation is sometimes hidden at low to moderate $L_{\rm bol}$
due to the short-term AGN variability.
Note that there is a clear difference between the samples used in the aforementioned
studies and ours: they included all FIR-detected galaxies, whereas we focused
only on AGN host galaxies detected in both the FIR and X-rays.
Further studies using the spectral decomposition of IR SEDs will be
discussed in a forthcoming paper (K. Ichikawa et al. in prep).
\begin{figure}
\begin{center}
\includegraphics[width=7cm]{fig06a.pdf}\\
\includegraphics[width=7cm]{fig06b.pdf}
\caption{
Histograms of $r_{12,22} = \log (L_{12~\mu{\rm m}, 22~\mu{\rm m}}/
L_{12~\mu{\rm m}, 22~\mu{\rm m}}^{\rm (slope)})$
(top and bottom panel, respectively).
The black solid, blue shaded, and red shaded histograms represent the
total, type-1, and type-2 samples, respectively.}\label{fig:LMIRovLx}
\end{center}
\end{figure}
\subsection{FIR Pure-AGN candidates}
If there are luminous AGN hosted by low-SF galaxies, we may find
such ``FIR pure-AGN'' candidates at the bottom right of Figure~\ref{fig:LFIRvsLbol}.
\cite{mat15a} investigated the FIR pure-AGN sequence in the
$L_{\rm FIR}$--$L_{\rm bol}$ plane by adopting the typical AGN SED template of \cite{mul11}.
The FIR pure-AGN sequences are given by
\begin{align}
\log \frac{L_{70~\mu{\rm m}}}{{\rm erg~s}{^{-1}}} = (7.45 \pm 0.26) + 0.80\log \frac{L_{\rm bol}}{{\rm erg~s}{^{-1}}}\\
\log \frac{L_{90~\mu{\rm m}}}{{\rm erg~s}{^{-1}}} = (7.17 \pm 0.26) + 0.80\log \frac{L_{\rm bol}}{{\rm erg~s}{^{-1}}}.
\end{align}
In Figure~\ref{fig:LFIRvsLbol}, the estimated FIR pure-AGN sequences
are also overplotted with gray lines.
In this study, we define a source as a FIR pure-AGN candidate if it
is located below the FIR pure-AGN sequence in Figure~\ref{fig:LFIRvsLbol}.
As a result, 50 and 4 sources fulfill the criterion at 70 and 90~$\mu$m, respectively.
There is a clear number difference between 70 and 90~$\mu$m, even
when we consider the ratio of FIR pure-AGN to the sample
($50/388 \sim 13$\% for the 70~$\mu$m criterion, while $4/241 \sim 2$\% for the 90~$\mu$m criterion).
One reason could be the sensitivity difference:
the sensitivity at 70~$\mu$m is better than at 90~$\mu$m
because of the inclusion of the \textit{Herschel}/PACS detected
sources at 70~$\mu$m.
Of the 50 objects selected at 70~$\mu$m, 42 are not detected at 90~$\mu$m.
It is also likely that, at shorter wavelengths, the typical AGN contribution becomes
stronger while the SF contribution becomes weaker \citep[e.g., ][]{mul11}.
Supporting this, all four pure-AGN candidates at 90~$\mu$m also fulfill
the criterion at 70~$\mu$m, while only 50\% (4/8) of the 70~$\mu$m selected
pure-AGN candidates with 90~$\mu$m detections fulfill the criterion at 90~$\mu$m.
The four 90~$\mu$m selected sources are NGC~1194,
ESO~506-G027, NGC~5252, and CGCG~164-019.
All are type-2 AGN, and their average luminosity is high, with
$\langle \log (L_{\rm 14-195}/{\rm erg}~{\rm s}^{-1}) \rangle=44.1$.
While this pure-AGN population is quite small at $\sim2$\%
(4 out of 274), they are a good sample for constructing pure-AGN IR
SEDs including the FIR end, and for examining whether the extrapolation of
the intrinsic AGN SED to FIR luminosities is correct.
They could also be in a stage in which SF is suppressed because
AGN feedback is in action \citep[e.g.,][]{woo16}.
The future X-ray satellite e-ROSITA \citep{mer12} will discover
over 3 million AGN, and cross-matching them with FIR catalogs
would reveal pure AGN in large numbers.
\begin{figure*}
\begin{center}
\includegraphics[width=8.5cm]{fig07a.pdf}~
\includegraphics[width=8.5cm]{fig07b.pdf}
\caption{MIR luminosity to bolometric luminosity ratio as a function
of bolometric luminosity.
$L_{\rm bol}$ is estimated from Equation~\ref{Eq:LbolvsL14_195}
for our sample, while from Equation~\ref{Eq:LbolvsL2_10} for the other
studies from the literature.
(Left) $\log L_{12 \mu{\rm m}}/L_{\rm bol}$ versus $\log L_{\rm bol}$.
(Right) $\log L_{22 \mu{\rm m}}/L_{\rm bol}$ versus $\log L_{\rm bol}$.
Blue crosses/red X-shapes show type-1/type-2 AGN, respectively.
The red symbols are shifted to the right by 0.1 dex for clarity.
The main AGN type used in \cite{fio09} and \cite{ste15} is type-1, while
\cite{gan09}, \cite{mat15b}, \cite{asm15}, and our study include both
type-1 and type-2 AGN.
}\label{fig:LIRovLbolvsLbol}
\end{center}
\end{figure*}
\begin{figure*}
\begin{center}
\includegraphics[width=8.5cm]{fig08a.pdf}~
\includegraphics[width=8.5cm]{fig08b.pdf}
\caption{ Covering factor as a function of bolometric luminosity obtained
from 12~$\mu$m (left) and 22~$\mu$m band (right).
The correction from $L_{\rm MIR}/L_{\rm bol}$ is made using the correction
functions of \cite{sta16}.
Each filled area represents the range of possible covering factors at
each $L_{\rm bol}$. Since the correction function for type-1 AGN always
gives a higher value than that for type-2 AGN, we assign an upper limit
under the assumption that all sources in this study are type-2 AGN, and a lower limit
assuming that all sources are type-1 AGN.
The main AGN type used in \cite{fio09} and \cite{ste15} is type-1, while
\cite{gan09}, \cite{mat15b}, \cite{asm15}, and our study include both
type-1 and type-2 AGN.
}\label{fig:CFvsLbol}
\end{center}
\end{figure*}
\subsection{Distribution of $r_{12,22}$}
Figure~\ref{fig:LMIRovLx} shows the histograms of the ratio
defined as $r_{12,22} = \log (L_{12~\mu{\rm m}, 22~\mu{\rm m}}/
L_{12~\mu{\rm m}, 22~\mu{\rm m}}^{\rm (slope)})$,
where $L_{12~\mu{\rm m}, 22~\mu{\rm m}}^{\rm (slope)}$ represents
the expected MIR luminosities obtained from $L_{14-195}$ using the
slopes between $L_{\rm MIR}$ and $L_{14-195}$.
The standard deviation for each sample is compiled in Table~\ref{tab:LIRovLx}.
The standard deviation for the full sample is $\sigma=0.39$ at 12~$\mu$m and
$\sigma=0.42$ at 22~$\mu$m.
This value is slightly larger than that of \cite{asm15}, $\sigma=0.32$.
In the 12~$\mu$m band, we obtain $\sigma = 0.40 \pm 0.03$ for type-1
and $\sigma = 0.43 \pm 0.04$ for type-2 AGN.
The scatter is consistent between the type-1 and type-2 AGN in our sample
within the statistical uncertainties, so we find no evidence for a difference in the
scatter of the MIR to 14--195~keV X-ray ratio between AGN types.
This result might support recent observations that
most of the MIR emission comes from the extended polar region
on scales of $\le 10$~pc \citep[e.g., ][]{hon12, hon13, lop16} or even
larger, $\simeq 100$~pc, scales \citep{asm16}, since an extended dust geometry
more easily produces isotropic MIR emission than traditional torus models.
\subsection{Luminosity Dependence of Covering Factor}
We investigate the relation between the AGN and its surrounding dusty torus.
Since the MIR emission originates from re-radiation by the dusty torus,
one can naturally expect that the ratio of the MIR to AGN luminosity corresponds
to the solid angle of the sky covered by the dust
(i.e., the covering factor $C_{\rm T}$, with $L_{\rm MIR} \propto C_{\rm T}L_{\rm bol}$).
Figure~\ref{fig:LIRovLbolvsLbol} shows the luminosity dependence
of $L_{\rm MIR}/L_{\rm bol}$.
The black solid and dashed lines in Figure~\ref{fig:LIRovLbolvsLbol} represent
the lines converted from the
$L_{\rm MIR}$--$L_{14-195}$ luminosity relations
of Equations (1) and (2) for the full sample and Equations (3) and (4)
for the high-luminosity sample, respectively.
We apply Equation~\ref{Eq:LbolvsL14_195} for the bolometric correction.
Figure~\ref{fig:LIRovLbolvsLbol} shows that $L_{\rm MIR}/L_{\rm bol}$ declines as $L_{\rm bol}$
increases, consistent with the trend expected in so-called ``luminosity-dependent unified models''.
These models describe the decrease of the covering factor through the recession of
the sublimation radius with increasing AGN luminosity \citep{law82}.
However, \cite{lus13} found that corrections for the anisotropy of the dust
emission are necessary when using $L_{\rm MIR}/L_{\rm bol}$ as a proxy for
the covering factor.
In addition, using a 3D Monte Carlo radiation code, \cite{sta16} reported
that for the tori of type-1 (viewed face-on) AGN,
$L_{\rm MIR}/L_{\rm bol}$ underestimates low covering factors
and overestimates high covering factors, whereas for type-2
(viewed edge-on) AGN it always underestimates the covering factor.
They also provide the correction functions to account for anisotropy and
obtain corrected covering factors.
Thus, we derive the corrected covering factor using the
ratios $L_{12~\mu{\rm m}} / L_{\rm bol}$ and
$L_{22~\mu{\rm m}} / L_{\rm bol}$ together with the correction functions of \cite{sta16}.
We use the correction function of
{\small
\begin{equation}
C_{\rm T}=
\begin{cases}
-0.178R^4+0.875R^3-1.487R^2+1.408R+0.192\ {\rm (type1)}\\
2.039R^3-3.976R^2+2.765R+0.205\ {\rm (type2)}
\end{cases}
\end{equation}
}
where $R=L_{\rm MIR}/L_{\rm bol}$ and
the assumed optical depth of the torus at 9.7~$\mu$m is $\tau_{9.7}=3.0$
(see Table~1 of \cite{sta16} for more details).
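The correction above can be applied as in the following sketch; the function name is ours, and the polynomials are those quoted for $\tau_{9.7}=3.0$.

```python
def covering_factor(r, agn_type):
    """Corrected torus covering factor C_T from R = L_MIR / L_bol,
    using the tau_9.7 = 3.0 polynomials quoted above
    (Stalevski et al. 2016)."""
    if agn_type == 1:    # viewed face-on
        return (-0.178 * r**4 + 0.875 * r**3 - 1.487 * r**2
                + 1.408 * r + 0.192)
    if agn_type == 2:    # viewed edge-on
        return 2.039 * r**3 - 3.976 * r**2 + 2.765 * r + 0.205
    raise ValueError("agn_type must be 1 or 2")

ct1 = covering_factor(0.3, 1)  # ~0.50
ct2 = covering_factor(0.3, 2)  # ~0.73
```

For a given $R$, the type-1 polynomial returns a lower value than the type-2 one here, which is why the two corrections bracket the possible covering factor in Figure~\ref{fig:CFvsLbol}.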
Figure~\ref{fig:CFvsLbol} shows corrected $C_{\rm T}$ derived
from the slopes tabulated in Table~\ref{tab:LIRvsLx_wlit} as a
function of $L_{\rm bol}$ (black solid area).
It still holds that $C_{\rm T}$ is a declining function of $L_{\rm bol}$,
confirming the trend of ``luminosity-dependent unified models''.
It is in principle possible that the luminosity-dependent trend is due
largely to host-galaxy contamination, since the emission from
the host galaxy contributes significantly to the MIR emission at the
low-luminosity end, as discussed in Section~\ref{sect:LxvsLIR}.
To check this effect, in Figures~\ref{fig:LIRovLbolvsLbol} and
\ref{fig:CFvsLbol} we also show the $L_{\rm MIR}/L_{\rm bol}$
and the corrected $C_{\rm T}$ using the slopes of the high-luminosity
sample with $\log L_{14-195}>43$, extrapolated to
the lower-luminosity end.
The luminosity dependence of $C_{\rm T}$ is mitigated, but the relation still holds.
This picture has been gaining observational support from radio
\citep{gri04}, IR \citep{mai07, tre08, mor09, alo11, ich12b, tob13, tob14},
optical \citep{sim05}, and X-ray \citep{ued03,bec09, ued11, ric13, lus13,
ued14}
studies of AGN.
On the other hand, in the high-$z$ universe at $z=2$--$3.5$, \cite{net16}
inferred covering factors in their sample that are consistent with
no evolution with AGN luminosity, within the uncertainties of the bolometric
correction factor.
\begin{figure}
\begin{center}
\includegraphics[width=8.0cm]{fig09a.pdf}\\
\caption{W1--W2 versus W2--W3 two-color diagram in
units of Vega magnitude. Each AGN type is
highlighted with blue (type-1) and red (type-2).
}\label{fig:IR2color}
\end{center}
\end{figure}
\begin{figure*}
\begin{center}
\includegraphics[width=8.0cm]{fig10a.pdf}~
\includegraphics[width=8.0cm]{fig10b.pdf}\\
\includegraphics[width=8.0cm]{fig10c.pdf}~
\includegraphics[width=8.0cm]{fig10d.pdf}\\
\caption{W1--W2 versus W2--W3 two-color diagram in units of Vega magnitude for different BAT luminosity populations,
highlighted with blue (type-1) and red (type-2), overplotted on the total sample in gray.
}\label{fig:IR2color_type}
\end{center}
\end{figure*}
Again, the previous studies from the literature are also over
plotted in Figure~\ref{fig:LIRovLbolvsLbol} and \ref{fig:CFvsLbol}
using the bolometric correction of \cite{mar04}:
\begin{align}\label{Eq:LbolvsL2_10}
\log L_{\rm bol} = 0.0378 (\log L_{2-10})^2 - 2.00 \log L_{2-10} +60.5
\end{align}
for 2--10~keV luminosity ($L_{2-10}$).
As shown in Figure~\ref{fig:LIRovLbolvsLbol} and \ref{fig:CFvsLbol}, studies in the local universe \citep{gan09,asm15} found a decrease of the covering factor with the AGN luminosity.
Even the high luminosity sample of \cite{mat15b} in the high-$z$ universe, same trend can be observed.
On the other hand, the studies carried out using high-$L$ (and high-$z$) sources by \cite{fio09} and \cite{ste15} strongly contradict the luminosity dependent unified models.
Considering their rare population of high luminosity AGN in the local universe
as discussed in Section~\ref{Sect:LFIRvsLbol},
further investigation with deep survey is necessary for solving this controversy at the high luminosity end.
\subsection{WISE color-color distribution of Hard X-ray Selected AGN}\label{Sect:WISEcolor}
\begin{figure}
\begin{center}
\includegraphics[width=7.0cm]{fig11a.pdf}\\
\includegraphics[width=7.0cm]{fig11b.pdf}
\caption{
Fraction of AGN that meet the IR color selections as a function of BAT luminosity (in logarithmic units),
shown with black filled circles (all), blue crosses (type-1), and red crosses (type-2).
(Top) Color cut with $W1-W2>0.8$ by \cite{ste12}. (Bottom) Color cut by \cite{mat12a}.}\label{fig:fracvsLx}
\end{center}
\end{figure}
IR color-color selection is useful for identifying obscured AGN candidates
and is also efficient compared with more time-consuming methods such
as spectroscopy.
Figure~\ref{fig:IR2color} shows the distribution of AGN in the \textit{WISE}
color-color plane.
Increasing levels of AGN contribution to the MIR emission
move sources upwards in this plane, toward the color cut $W1 - W2 = 0.8$ \citep{ste12}
and the AGN wedge \citep{mat12a}, as shown in Figure~\ref{fig:IR2color_type}.
It is clear that our objects do not always fall within the selection criteria above.
As discussed in Section~\ref{sect:LxvsLIR}, lower-luminosity sources can have
a non-negligible level of contamination from the host galaxies in the
NIR and MIR bands.
To check this quantitatively, we divide the sample into luminosity subgroups
and calculate the detection rate in each.
Figure~\ref{fig:fracvsLx} shows the detection rate of AGN using
the thresholds of \cite{ste12} (top) and \cite{mat12a} (bottom), respectively.
The detection rate increases drastically at $L_{14-195}>10^{43}$~erg~s$^{-1}$
and most ($>80$\%) sources can be selected using the IR color-color methods
at $L_{14-195}>10^{44}$~erg~s$^{-1}$.
Thus, while the IR color-color methods are highly effective at high
luminosities ($L_{14-195}>10^{44}$~erg~s$^{-1}$),
searches for faint AGN with $L_{14-195}<10^{44}$~erg~s$^{-1}$
using IR color-color methods should be complemented with
other AGN identification methods such as hard ($E>2$~keV)
X-rays \citep[e.g.,][]{lam15}.
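As an illustration of this binned detection-rate calculation, the sketch below (not the paper's code; the input arrays and mock sample are hypothetical) computes the fraction of sources satisfying the $W1-W2>0.8$ cut of \cite{ste12} in bins of $\log L_{14-195}$:

```python
# Illustrative sketch only (not the paper's code): detection rate of the
# W1 - W2 > 0.8 color cut (Stern et al. 2012) in bins of BAT luminosity.
# The input arrays and the mock data below are hypothetical.
import numpy as np

def color_cut_fraction(log_lx, w1_w2, bins, threshold=0.8):
    """Fraction of sources with W1 - W2 > threshold in each luminosity bin."""
    selected = w1_w2 > threshold
    fractions = []
    for lo, hi in zip(bins[:-1], bins[1:]):
        in_bin = (log_lx >= lo) & (log_lx < hi)
        n = in_bin.sum()
        fractions.append(selected[in_bin].sum() / n if n else np.nan)
    return np.array(fractions)

# Mock sample in which brighter AGN have redder W1 - W2 colors, so the
# detection rate should rise toward high luminosity:
rng = np.random.default_rng(0)
log_lx = rng.uniform(41, 46, 1000)
w1_w2 = 0.3 * (log_lx - 41) + rng.normal(0.0, 0.3, 1000)
frac = color_cut_fraction(log_lx, w1_w2, bins=np.arange(41, 47))
```

With the mock colors above, the fraction meeting the cut rises steeply with luminosity, qualitatively reproducing the behavior in Figure~\ref{fig:fracvsLx}.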
Figure~\ref{fig:fracvsLx} shows no significant difference between
type-1 and type-2 AGN.
This indicates that the luminosity dependence of the detection rate
does not originate from differences between the two AGN populations,
such as the suppression of the NIR SEDs of type-2 AGN
due to heavier obscuration by torus clumps \citep{ram11},
but more likely from dilution by the direct stellar emission of the
host galaxy, which produces blue $W1-W2$
colors \citep[e.g.,][]{ste05, ris06, san08, ima10, ich14}.
The same trend was also reported by \cite{tob14}, who used
\textit{WISE}-matched SDSS AGN selected with the BPT diagram \citep{bal81}
and showed that the efficiency of the \textit{WISE} color method
increases with $L_{22~\mu{\rm m}}$.
\cite{kaw16b} also reported that hard X-ray selected
low-luminosity AGN cannot be found using the IR
color selections above.
These results are consistent with what is shown in
Figure~\ref{fig:fracvsLx} by considering that most
sources are at $L_{\rm 14-195}< 10^{44}$~erg~s$^{-1}$.
The same trend is also reported from X-ray
studies of Compton-thick AGN \citep{gan15, tan16},
which found that secure Compton-thick AGN in the local
universe with $L_{\rm 14-195}< 10^{43}$~erg~s$^{-1}$
are not preferentially located within the AGN cut or wedge.
However, even luminous AGN, if heavily obscured (e.g., buried AGN),
may fall outside the AGN cut or wedge because NIR and even MIR
absorption can play a role \citep[e.g.,][]{hai14, ima16}.
In addition, some authors \citep[e.g.,][]{sat14, sec15} find that the fraction of AGN
selected by \textit{WISE} color is highest at low stellar masses and drops
dramatically in higher-mass galaxies,
suggesting that stellar mass (or Eddington ratio) is another key parameter affecting
the success rate of the IR color selections, in addition to the X-ray luminosity discussed above.
\section{CONCLUSIONS}
We have compiled the IR (3--500~$\mu$m) counterparts of a complete
flux-limited sample of 604 nearby AGN detected in the 70-month
integration of the \textit{Swift}/BAT all-sky survey in the 14--195 keV band.
Utilizing the IR catalogs obtained from \textit{WISE}, \textit{AKARI},
\textit{IRAS}, and \textit{Herschel}, we identified 604, 560, 601, and
402 counterparts in any IR, NIR, MIR, and FIR band, respectively.
For our discussion, the detected sources are divided into two AGN
types based on $N_{\rm H}$, with a boundary at
$N_{\rm H} = 10^{22}$~cm$^{-2}$.
Our results are summarized as follows:
\begin{enumerate}
\item We find a good luminosity correlation between the MIR
and ultra hard X-ray band over 5 orders of magnitude
($41< \log (L_{14-195}/ {\rm erg}~{\rm s}^{-1})<46$).
Using the linear relation of $\log (L_{\rm MIR}/10^{43}
~{\rm erg}~{\rm s}^{-1}) = a + b \log (L_{14-195}/10^{43}~
{\rm erg}~{\rm s}^{-1})$, the slope $b=0.96-0.98$ is obtained
for the whole sample and $b=1.05-1.07$ for the high luminosity
sample ($L_{14-195}>10^{43}$~erg~s$^{-1}$).
This value is consistent with those obtained from high-spatial-resolution
MIR imaging of X-ray selected samples,
whereas the slope is shallower than that obtained from
samples of high-$z$, optically selected luminous AGN.
This indicates that the X-ray emission could saturate relative to the
MIR emission at the high-luminosity end.
\item We find a rising trend between the bolometric AGN power
and the FIR luminosity over 5 orders of magnitude in the individual plots.
The slope is consistent with those obtained by \cite{net09} and
\cite{mat15a}. The binned analysis also shows that the
mean $L_{\rm bol}$ as a function of $L_{\rm FIR}$ rises,
consistent with the individual-plot
analysis. However, the mean $L_{\rm FIR}$ as a function
of $L_{\rm bol}$ shows a flattened trend. This seemingly
contradictory result could originate from the different
timescales that dominate SF and AGN activity:
SF and BH accretion are closely connected over
long timescales, but this relation can be hidden at lower
$L_{\rm bol}$ by short-term AGN variability
\citep[e.g., ][]{mul12, che13, hic14}.
\item We find a small number of FIR pure-AGN candidates,
which show strong AGN luminosity with very weak SF
contribution from their host galaxies.
These objects represent a good sample for
constructing the pure-AGN IR
SED including the FIR end. They could also be good
candidates for studying AGN feedback, since they might
be in a stage where SF activity is suppressed by the
energy output from the AGN.
\item Applying the conversion of \cite{sta16} from the
MIR-to-bolometric luminosity ratio to the covering factor,
we find that the covering factor decreases with bolometric luminosity,
consistent with the luminosity-dependent unified model.
\item We find that the efficiency of the \textit{WISE}
color-color cuts proposed by \cite{ste12} and \cite{mat12a}
is highly dependent on AGN luminosity. These methods cannot
completely select local X-ray selected low-luminosity AGN
with $L_{14-195}<10^{44}$~erg~s$^{-1}$, while they
efficiently select most AGN with $L_{14-195}>10^{44}$~erg~s$^{-1}$.
\end{enumerate}
\acknowledgments
We thank the anonymous referee for a very careful reading of the manuscript and
numerous helpful suggestions that greatly strengthened the paper.
This research has made use of the NASA/IPAC Extragalactic Database (NED) which
is operated by the Jet Propulsion Laboratory, California Institute of Technology,
under contract with the National Aeronautics and Space Administration.
This research has made use of ``Aladin sky atlas'' developed at CDS, Strasbourg Observatory, France
\citep{bon00, boc14}.
K.I. thanks the Department of Astronomy at Kyoto university, where a part of the research was conducted.
K.I. and Y.U. acknowledge support from JSPS Grant-in-Aid for Scientific Research (grant number 40756293: KI, 26400228: YU).
C.R. acknowledges financial support from the CONICYT-Chile grants ``EMBIGGEN'' Anillo ACT1101,
FONDECYT 1141218, and Basal-CATA PFB--06/2007.
Part of this work was financially supported by the Grant-in-Aid for JSPS fellows for young researchers (PD: K.I., K.M.; DC1: T.K.).
Richardson first introduced the concept of {\em scale invariance}, i.e., a power-law scaling between dependent and independent variables, to the study of conflict by examining the frequency of large and small conflicts, as a function of their severity~\citep{Richardson48}. His work demonstrated that for both wars and small-scale homicides, the frequency of an event scales as an inverse power of the event's severity (in this case, the number of casualties). Richardson, and subsequent researchers such as~\cite{Cederman03}, have found that the frequency of wars of size $x$ scales as $P(x)\propto x^{-\alpha}$, where $\alpha\approx 2$ and is called the scaling exponent. Recently, similar power-law statistics have been found to characterize a wide variety of natural phenomena including disasters such as earthquakes, floods and forest fires~\citep{Bak89, Turcotte98, Newman05}, social behavior or organization such as the distribution of city sizes, the number of citations for scientific papers, the number of participants in strikes, and the frequency of words in language~\citep{Zipf49, Simon55, Newman05, Biggs05}, among others. As a reflection of their apparent ubiquity, but somewhat pejoratively, it has even been said that such power-law statistics seem ``more normal than normal''~\citep{Willinger05}.
In this paper, we extend Richardson's program of study to the most topical kind of conflict: terrorism. Specifically, we empirically study the distributional nature of the frequency and severity of terrorist events worldwide since 1968. Although terrorism as a political tool has a long history~\citep{Congleton02, Enders06}, it is only in the modern era that small groups of so-motivated individuals have had access to extremely destructive weapons~\citep{Shubik97, FBI99}. Access to such weapons has resulted in severe terrorist events such as the \mbox{7 August 1998} car bombing in Nairobi, Kenya which injured or killed over $5200$, and the more well known attack on \mbox{11 September 2001} in New York City which killed $2749$. Conventional wisdom holds that these rare-but-severe events are {\em outliers}, i.e., they are qualitatively different from the more common terrorist attacks that kill or injure only a few people. Although that impression may be true from an operational standpoint, it is false from a statistical standpoint. The frequency-severity statistics of terrorist events are scale invariant and, consequently, there is no fundamental difference between small and large events; both are consistent with a single underlying distribution. This fact indicates that there is no reason to expect that ``major'' or more severe terrorist attacks should require qualitatively different explanations than less salient forms of terrorism.
The results of our study are significant for several reasons. First, severe events have a well documented disproportional effect on the targeted society. Terrorists typically seek publicity, and the media tend to devote significantly more attention to dramatic events that cause a large number of casualties and directly affect the target audience~\citep{Wilk97, Gartner04}. When governments are uncertain about the strength of their opponents, more severe terrorist attacks can help terrorist groups signal greater resources and resolve and thereby influence a government's response to their actions~\citep{Overgaard94}. Research on the consequences of terrorism, such as its economic impact, likewise tends to find that more severe events exert a much greater impact than less severe incidents~\citep[Ch. 9]{Enders06}. For instance,~\cite{Navarro01} report dramatic declines in share prices on the New York Stock Exchange, Nasdaq, and Amex after the devastating 11 September attacks in the United States. In contrast, although financial markets fell immediately following the 7 July 2005 bombings in London, share prices quickly recovered the next day as it became clear that the bombings had not been as severe as many initially had feared.\footnote{See figures for the FTSE 100 index of the 100 largest companies listed on the London Stock Exchange at {\tt http://www.econstats.com/eqty/eq\_d\_mi\_5.htm}.} Recent examples of this non-linear relationship abound, although the tremendous reorganization of the national security apparatus in the United States following the 11 September 2001 attacks is perhaps the most notable in Western society. Second, although researchers have made efforts to develop models that predict the {\em incidence} of terrorist attacks, without also predicting the {\em severity}, these predictions provide an insufficient guide for policy, risk analysis, and recovery management. 
In the absence of an accurate understanding of the severity statistics of terrorism, a short-sighted but rational policy would be to assume that every attack will be severe. Later, we will show that when we adapt current models of terrorism to predict event severity, they misleadingly predict a thin tailed distribution, which would cause us to dramatically underestimate the future casualties and consequences of terrorist attacks. Clearly, we need to better understand how our models can be adapted to more accurately produce the observed patterns in the frequency-severity statistics. That is, an adequate model of terrorism should not only give us indications of where or when events are likely to occur, but also tell us how severe they are likely to be. Toward this end, we describe a toy model that can at least produce the correct severity distribution.
Past research on conflict has tended to focus on large-scale events like wars, and to characterize them dichotomously according to their incidence or absence, rather than according to their scale or severity. This tendency was recently highlighted by~\cite{Cederman03} for modeling wars and state formation, and by~\cite{Lacina06} for civil wars. Additionally accounting for an event's severity can provide significantly greater guidance to policy makers; for instance, \cite{Cioffi-Revilla91} accurately predicted the magnitude (the base ten logarithm of total combatant fatalities) of the Persian Gulf War in 1991, which could have helped in estimating the political consequences of the war.
As mentioned above, research on terrorism has also tended to focus on incidence, rather than severity. Recently, however, two of the authors of this study demonstrated for the first time that the relationship between the frequency and severity of terrorist events exhibits the surprising and robust feature of scale invariance~\citep{Clauset05}, just as Richardson showed for wars. In a subsequent study,~\cite{Johnson05} considered data for fatal attacks or clashes in the guerilla conflicts of Colombia and Iraq, suggesting that these too exhibit scale invariance. Additionally, they claim that the time-varying behavior of these two distributions are trending toward a common power law with parameter $\alpha = 2.5$ -- a value they note as being similar to the one reported by~\cite{Clauset05} for terrorist events in economically underdeveloped nations. Johnson et al. then adapted a dynamic equilibrium model of herding behavior on the stock market to explain the patterns they observed for these guerilla conflicts. From this model, they conjecture that the conflicts of Iraq, Colombia, Afghanistan, Casamance (Senegal), Indonesia, Israel, Northern Ireland and global terrorism are all converging to a universal distribution with exactly this value of $\alpha$~\citep{Johnson06}. We will briefly revisit this idea in a later section. Finally, the recent work of~\cite{Bogen06} also considers the severity of terrorist attacks primarily via aggregate figures to assess whether there has been an increase in the severity of terrorism over time, and to forecast mortality due to terrorism.
This article makes three main contributions. First, we make explicit the utility of using a power-law model of the severity statistics of terrorist attacks, and demonstrate the robust empirical fact that these frequency-severity statistics are scale invariant. Second, we demonstrate that distributional analyses of terrorism data can shed considerable light on the subject by revealing new relationships and patterns. And third, we show that, when adapted to predict event severity, existing models of terrorism incidence fail to produce the observed heavy tail in the severity statistics of terrorism, and that new models are needed in order to connect our existing knowledge about what factors promote or discourage terrorism with our new results on the severity statistics.
\section{Power laws: a brief primer}
Before plunging into our analysis, and for the benefit of readers who may be unfamiliar with the topic, we briefly consider heavy-tailed statistics and power-law distributions. What distinguishes a power-law distribution from the more familiar normal distribution is its {\em heavy tail}, i.e., in a power law, a non-trivial amount of weight lies far from the distribution's center. This feature, in turn, implies that events orders of magnitude larger (or smaller) than the mean are relatively common. The latter point is particularly true when compared to a normal distribution, where essentially no weight is far from the mean. Although many distributions exhibit heavy tails, the power law is a particularly special case, being identifiable as a straight line with slope $-\alpha$ on doubly-logarithmic axes\footnote{A straight line on doubly-logarithmic axes is a necessary, but not sufficient, condition for a distribution to be a power law; for example, when we have only a small number of observations from an exponentially distributed variable, it can appear roughly straight on doubly-logarithmic axes.}, and it appears widely in physics. The power law has the particular form in which multiplication of the argument, e.g., by a factor of $2$, results in a proportional division of the frequency, e.g., by a factor of $4$, and the relationship between these factors is given by the ``scaling parameter'' $\alpha$. Because this relationship holds for all values of the power law, the distribution is said to ``scale'', which implies that there is no qualitative difference between large and small events.
Power-law distributed quantities are actually quite common, although we often do not think of them as being that way. Consider, for instance, the populations of the 600 largest cities in the United States (from the 2000 Census). With the average population being only $\langle x \rangle =165~719$, metropolises like New York City and Los Angeles would seem to be clear ``outliers'' relative to this value. The first clue that this distribution is poorly explained by a truncated normal distribution is that the sample standard deviation $\sigma = 410~730$ is significantly larger than the sample mean. Indeed, if we model the data in this way, we would expect to see $1.8$ times fewer cities at least as large as Albuquerque, at $448~607$, than we actually do. Further, because it is more than a dozen standard deviations from the mean, we would never expect to see a city as large as New York City, with a population of $8~008~278$; for a sample this size, the largest city we would expect to see is Indianapolis, at $781~870$. Figure~\ref{fig:cities} shows the actual distribution, plotted on doubly-logarithmic axes, as its complementary cumulative distribution function (ccdf) $P(X\geq x)$, which is the standard way of visualizing this kind of data.\footnote{The ccdf is preferable to the probability distribution function (pdf) as the latter is significantly noisier in the upper tail, exactly where subtle variations in behavior can be concealed. If a distribution scales, it will continue to do so on the ccdf.} The scaling behavior of this distribution is quite clear, and a power-law model (black line) of its shape is in strong agreement with the data. In contrast, the truncated normal model is a terrible fit.
\begin{figure}[t]
\begin{center}
\includegraphics[scale=0.55]{v2_fig0b_cities_b.eps}
\end{center}
\caption{The complementary cumulative distribution function (ccdf) $P(X\geq x)$ of the population $x$ of the 600 largest cities in the United States, i.e., those with $x\geq50~000$, based on data from the 2000 Census. The solid black line shows the power-law behavior that the distribution closely follows, with scaling exponent $\alpha=2.36(6)$, while the dashed black line shows a truncated normal distribution with the same sample mean.}
\label{fig:cities}
\end{figure}
As a more whimsical second example, consider a world where the heights of Americans were distributed as a power law, with approximately the same average as the true distribution (which is convincingly normal when certain exogenous factors are controlled). In this case, we would expect nearly $60~000$ individuals to be as tall as the tallest adult male on record, at $2.72$ meters. Further, we would expect ridiculous facts such as $10~000$ individuals being as tall as an adult male giraffe, one individual as tall as the Empire State Building ($381$ meters), and $180$ million diminutive individuals standing a mere $17$ cm tall. In fact, this same analogy was recently used to describe the counter-intuitive nature of the extreme inequality in the wealth distribution in the United States~\citep{Crook06}, whose upper tail is also distributed according to a power law.
Although much more could be said about power laws, we hope that the curious reader takes away a few basic facts from this diversion. First, heavy-tailed distributions do not conform to our expectations of a linear, or normally distributed, world. As such, the average value of a power law is not representative of the entire distribution, and events orders of magnitude larger than the mean are, in fact, relatively common. Second, the scaling property of power laws implies that, at least statistically, there is no qualitative difference between small, medium and extremely large events, as they are all succinctly described by a very simple statistical relationship. Readers who would like more information about power laws should refer to the extensive review by~\cite{Newman05}. With these ideas in hand, we can begin our analysis of the severity statistics of terrorism.
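To make the construction of such ccdf plots concrete, the following sketch (ours, not the original analysis code) builds an empirical ccdf and checks that, for a synthetic power-law sample generated by inverse-CDF sampling, the log-log slope of the ccdf recovers $1-\alpha$:

```python
# Sketch (not the original analysis code): empirical ccdf P(X >= x) and a
# synthetic power-law sample generated by inverse-CDF sampling.
import numpy as np

def empirical_ccdf(data):
    """Return sorted values x and the empirical P(X >= x)."""
    x = np.sort(np.asarray(data, dtype=float))
    p = 1.0 - np.arange(len(x)) / len(x)   # P(X >= x[i]) = (n - i) / n
    return x, p

# Draw 600 values from p(x) ~ x^(-alpha) for x >= xmin, with alpha = 2.36,
# mimicking the city-size example:
rng = np.random.default_rng(1)
alpha, xmin = 2.36, 5.0e4
x = xmin * (1.0 - rng.random(600)) ** (-1.0 / (alpha - 1.0))
xs, ps = empirical_ccdf(x)
# On doubly-logarithmic axes the ccdf is straight with slope 1 - alpha:
slope = np.polyfit(np.log(xs), np.log(ps), 1)[0]
```

Note that the ccdf of a power law with exponent $\alpha$ scales with exponent $\alpha - 1$, which is why the fitted log-log slope here is close to $1 - \alpha \approx -1.36$ rather than $-\alpha$.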
\section{Data sources for terrorist events}
Many organizations track terrorist events worldwide, but few provide their data in a form amenable to scientific analysis. The most popular source of information on terrorist events in the political science literature is the ITERATE data set~\citep{Mickolus04}, which focuses exclusively on transnational terrorist events involving actors from at least two countries. In principle, however, and from the standpoint of frequency and severity statistics, we see no reason to restrict our analysis to transnational events. Instead, we use the data contained in the~\citet[MIPT]{MIPT} database, which largely overlaps with the ITERATE data, but also includes fully domestic terrorist events since at least 1998. We note, however, that our analyses can easily be applied to the portion of the ITERATE data that reports event severity, and indeed, doing so yields evidence similar to that which we present here. Thus, without loss of generality and except where noted, we will focus exclusively on the MIPT data for the remainder of this article. The MIPT database is itself the compilation of the RAND Terrorism Chronology 1968-1997, the RAND-MIPT Terrorism Incident database (1998-Present), the Terrorism Indictment database (University of Arkansas \& University of Oklahoma), and DFI International's research on terrorist organizations.
By 18 June 2006, the MIPT database contained records for over $28~445$ terrorist events in more than $5000$ cities across $187$ countries worldwide since 1968. Although alternative definitions for terrorism exist, the MIPT database uses a relatively standard one that may be summarized as any violent act intended to create fear for political purposes. Each entry in the database is quite narrow: it is an attack on a single target in a single location (city) on a single day. For example, the Al Qaeda attacks in the United States on \mbox{11 September 2001} appear as three events in the database, one for each of the locations: New York City, Washington D.C. and Shanksville, Pennsylvania. Each record includes the date, target, city (if applicable), country, type of weapon used, terrorist group responsible (if known), number of deaths (if known), number of injuries (if known), a brief description of the attack and the source of the information.
Of the nearly thirty thousand recorded events, $10~878$ of them resulted in at least one person being injured or killed, and we restrict our analyses to these events as they appear to be the least susceptible to any reporting bias. Further, it is a reasonable assumption that the largest events, due to their severity both in terms of casualties and political repercussions, will have the most accurate casualty estimates. Finally, if there is a systemic bias in the form of a proportionally small under- or over-estimate of event severity, it will have only a small effect on the results of our statistical analysis and will not change the core result of scale invariance -- as with Richardson's study of the severity of wars, simply obtaining the correct order of magnitude of an event reveals much of the basic scaling behavior. Throughout the remainder of the paper, we take the {\em severity} of an event to be either the number of injuries, the number of deaths, or their sum (total casualties), where the severity is always at least one. Unless otherwise noted, we focus exclusively on the statistics of these values.
\begin{figure}[t]
\begin{center}
\includegraphics[scale=0.55]{v2_fig1_allEvents.eps}
\end{center}
\caption{The frequency-severity distributions $P(X\geq x)$ of
attacks worldwide since 1968 by injuries, deaths and
their sum. The solid line indicates the power-law scaling found by
the maximum likelihood method. Details of fits for these
distributions are given in Table 1. }
\label{fig:terrorism}
\end{figure}
\section{Frequency-severity distributions for attacks since 1968}
\label{sec:fulldistribution}
Collecting all events since 1968 as a histogram of severities, we show their complementary cumulative distribution functions (ccdfs) $P(X\geq x)$ in Figure~\ref{fig:terrorism}. The regular scaling in the upper tails of these distributions immediately demonstrates that events orders of magnitude larger than the average event size are not outliers, but are instead in concordance with a global pattern in the frequency statistics of terrorist attacks. Significantly, the scaling exists despite large structural and political changes in the international system such as the fall of communism, variations in the type of weapon used, recent developments in technology, the demise of individual terrorist organizations, and the geographic distribution of events themselves. In subsequent sections, we will examine the robustness of the scale invariance property to both categorical and temporal analysis.
If we make the idealization that events are independent and identically distributed (iid), we may model the distribution as a power law with some exponent $\alpha$, where the scaling behavior holds only for values at least some lower-bound $x_{\min}$. Obviously, significant correlations exist between many terrorist events, and such an idealization is made only for the purpose of doing a distributional analysis. Using the method of maximum likelihood, we estimate two parameters of the power-law model from the data (details of our statistical methodology are discussed in the Appendix). Models found in this way for the full distributions described above are summarized in Table 1. Using the Kolmogorov-Smirnov goodness-of-fit test, we find that these simple iid models are a surprisingly good representation of the death and total severity distributions (both $p_{\rm KS}>0.9$), although a more marginal representation of the injuries distribution ($p_{\rm KS}>0.4$).\footnote{The Kolmogorov Smirnov test evaluates whether observed data seem a plausible random sample from a given probability distribution by comparing the maximum difference between the observed and the expected cumulative distributions.}
In Section~\ref{sec:comp}, we will see that we can further decompose these distributions into their components, each of which are strongly scale invariant but with different scaling and limit parameters. As mentioned earlier, the power law is not the only distribution with a heavy tail, and although testing all such alternatives is beyond the scope of this paper, we considered another common distribution, the log-normal~\cite[see, for instance,][]{Serfling02}, and found in all cases that we may convincingly reject this model ($p_{\rm KS}<0.05$).
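A minimal sketch of such a fit, using the standard continuous-approximation formulas rather than the exact procedure detailed in the Appendix, looks like the following (synthetic data; not the authors' code):

```python
# Sketch of power-law fitting under the continuous approximation (standard
# formulas; not the authors' code). Given xmin, the MLE for alpha is
# alpha_hat = 1 + n / sum(ln(x_i / xmin)) over the tail x_i >= xmin.
import numpy as np

def fit_alpha(x, xmin):
    """Continuous MLE for the scaling exponent of the tail x >= xmin."""
    tail = np.asarray(x, dtype=float)
    tail = tail[tail >= xmin]
    return 1.0 + len(tail) / np.sum(np.log(tail / xmin)), len(tail)

def ks_distance(x, xmin, alpha):
    """Kolmogorov-Smirnov distance between the tail data and the model."""
    tail = np.sort(np.asarray(x, dtype=float))
    tail = tail[tail >= xmin]
    n = len(tail)
    model_cdf = 1.0 - (tail / xmin) ** (1.0 - alpha)
    below = np.arange(n) / n          # empirical CDF just below each point
    above = np.arange(1, n + 1) / n   # ... and just above
    return max(np.abs(model_cdf - below).max(),
               np.abs(model_cdf - above).max())

# Synthetic tail mimicking the deaths distribution (alpha = 2.38, xmin = 12):
rng = np.random.default_rng(2)
x = 12.0 * (1.0 - rng.random(547)) ** (-1.0 / 1.38)
alpha_hat, n_tail = fit_alpha(x, 12.0)
D = ks_distance(x, 12.0, alpha_hat)
```

For well-fit data, the KS distance $D$ is small, and a significance level $p_{\rm KS}$ can then be attached to it, e.g., by comparing against distances obtained from synthetic samples drawn from the fitted model.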
\begin{table}
\caption{A summary of the distributions shown in Figure~\ref{fig:terrorism}, with power-law fits from the maximum likelihood method. $N$ ($N_{\rm tail}$) depicts the number of events in the full (tail) distribution. The parenthetical value depicts the standard error of the last digit of the estimated
scaling exponent. \newline } \label{table:summary} \centering
\fbox{%
\begin{tabular}{l|ccccc|cccc}
Distribution & $N$ & $\langle x \rangle$ & $\sigma_{\rm std}$ & $x_{\rm max}$ & & $N_{\rm tail}$ & $\alpha$ & $x_{\rm min}$ & $p_{\rm KS}\geq$ \\
\hline
Injuries & 7456 & 12.77 & 94.45 & 5000 & & 259 & 2.46(9) & 55 & 0.41 \\
Deaths & 9101 & 4.35 & 31.58 & 2749 & & 547 & 2.38(6) & 12 & 0.94 \\
Total & 10878 & 11.80 & 93.46 & 5213 & & 478 & 2.48(7) & 47 & 0.99
\end{tabular}}
\end{table}
\section{Evolution of terrorism over time}
Because events in the database are annotated with their incidence date, we may write them down as a time-series and investigate the severity distribution's behavior as a function of time.\footnote{In 1998, the management of the database was transferred from the RAND Corp. to the MIPT, which resulted in several observable differences in the database records. For instance, although some purely domestic events appear prior to 1998, such as the 1995 Oklahoma City bombing, domestic events make up a significant fraction of the events entered subsequent to 1998, suggesting that the true number of events for some period directly prior to 1998 is greater than we observe in the database. Although this effect could create problems for analyses that count incidents in a simple way, it does not affect the scale invariant shape of the frequency-severity distribution, primarily, we believe, because the large events that comprise the tail of the distribution were the least susceptible to any under-reporting bias. We shall explore this point more in the next section.} Although we are ultimately interested in the property of scale invariance over time, we first consider a simple, model-agnostic measure of the distribution's shape: the average log-severity. Sliding a window of $24$ months over the 38.5 years of event data, we compute the average log-severity (deaths) of events within each window.
\begin{figure} [t]
\begin{center}
\includegraphics[scale=0.55]{v2_fig2_logsev24.eps}
\caption{(upper) The average log-severity (deaths) of events within a sliding window of $24$ months, for the entire 38.5 years of data. The upper dashed line indicates one standard deviation, while the other shows the average log-severity for the entire span of time. (lower) The autocorrelation function of the average log-severity, illustrating a strong periodicity in the breadth of the distribution at roughly $\tau \approx 13$ years. Similar results apply when we analyze total or injury severity, but with slight changes to the magnitude or location of the anomalous peak
in the autocorrelation function.} \label{fig:logsev}
\end{center}
\end{figure}
For highly skewed distributions, such as those we show in Figure~\ref{fig:terrorism}, the average log-severity measures the position on the independent axis of the distribution's center. The average log-severity is significantly less sensitive to variations in the length of the upper tail, which may arise from the occasional presence of rare-but-severe events, than is the average severity. The resulting time series of this measure is shown in the upper-pane of Figure~\ref{fig:logsev}, along with one standard deviation. Notably, this function is largely stable over the nearly forty years of data in the MIPT database, illustrating that the center of the distribution has not varied substantially over that time.
A closer examination of the fluctuations, however, suggests the presence of potential periodic variation. We investigate this possibility by taking the autocorrelation function (ACF) of the time series, which we show in the lower-pane. The noticeable sinusoidal shape in the ACF shows that the fluctuations do exhibit a strong degree of periodicity on the order of $\tau \approx 13$ years. If we vary the size of the window, e.g., windows between $12$ and $60$ or more months (data not shown), the location and magnitude of the peak are, in fact, quite stable. But, these features do vary slightly if we instead either examine the total or injury distributions, or truncate the time-series. As such, we conjecture that some periodicity is a natural feature of global terrorism, although we have no explanation for its origin. It has been suggested that the $\tau\approx13$ value may be related to the modal life-expectancy of the average terrorist group. However, we caution against such conclusions for now, as these aforementioned variations on our analysis can shift the peak by several years.
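The windowed measure and its autocorrelation can be sketched as follows (a toy reconstruction with synthetic event data, not the real MIPT time series; the 13-year modulation is injected by hand to show how such a period appears in the ACF):

```python
# Toy sketch (synthetic data, not the MIPT records): mean log-severity in a
# sliding 24-month window, and the sample autocorrelation of that series.
import numpy as np

def mean_log_severity(times, severities, window=2.0, step=1.0 / 12.0):
    """Mean log10 severity of events inside each sliding window (years)."""
    starts = np.arange(times.min(), times.max() - window, step)
    return np.array([np.log10(severities[(times >= t0) &
                                         (times < t0 + window)]).mean()
                     for t0 in starts])

def acf(series, max_lag):
    """Sample autocorrelation function of a time series."""
    z = series - series.mean()
    var = np.dot(z, z)
    return np.array([1.0 if k == 0 else np.dot(z[:-k], z[k:]) / var
                     for k in range(max_lag)])

# Inject a 13-year periodic modulation into mock severities:
rng = np.random.default_rng(3)
t = np.sort(rng.uniform(1968.0, 2006.5, 5000))
sev = 10.0 ** (0.5 + 0.2 * np.sin(2.0 * np.pi * t / 13.0)
               + rng.normal(0.0, 0.4, 5000))
signal = mean_log_severity(t, sev)
rho = acf(signal, max_lag=len(signal) // 2)
# With monthly steps, a 13-year period produces a positive ACF peak near
# lag 156 and a trough near lag 78 (6.5 years).
```

In the real analysis, of course, the period is inferred from the data rather than injected, and the peak location shifts somewhat with the window size and the choice of severity measure, as noted above.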
\section{Scale invariance over time}
Turning now to the question of scale invariance over time, we again use a sliding window of two years, but now shifted forward by one year at a time. To remain parsimonious, we make the idealization that events within each window were drawn iid from a two-parameter power-law model. After fitting such a model to each window's frequency-severity distribution, we calculate its statistical significance as a way to check the model's plausibility for that time period.
Obviously, this assumption of no temporal correlations is quite strong, and, where appropriate, we discuss what light our analysis sheds on its accuracy. \cite{Johnson05} used a similar approach to study the time-varying distributions for the conflicts in Colombia and Iraq, but did not consider the accuracy of their models' fits or give any measure of their statistical significance.
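The per-window fit can be sketched in outline. The following Python fragment is our own illustration, with function names that are not from the original analysis: it computes the continuous maximum likelihood (Hill) estimate of $\alpha$ for a given lower cutoff, together with the Kolmogorov-Smirnov distance between the empirical tail and the fitted power law. In one common approach, $x_{\rm min}$ is chosen to make this distance small, and the significance $p_{\rm KS}$ is then estimated by comparing the observed distance with distances for synthetic power-law samples.

```python
import numpy as np

def fit_power_law_tail(x, x_min):
    """Hill / maximum likelihood estimate of alpha for the tail x >= x_min
    (a continuous approximation to the discrete severity data)."""
    tail = np.asarray(x, dtype=float)
    tail = tail[tail >= x_min]
    alpha = 1.0 + tail.size / np.sum(np.log(tail / x_min))
    return alpha, tail

def ks_distance(tail, alpha, x_min):
    """Kolmogorov-Smirnov distance between the empirical tail CDF and the
    fitted power-law CDF P(X <= x) = 1 - (x/x_min)^(1 - alpha)."""
    xs = np.sort(np.asarray(tail, dtype=float))
    n = xs.size
    model = 1.0 - (xs / x_min) ** (1.0 - alpha)
    above = np.max(np.abs(np.arange(1, n + 1) / n - model))
    below = np.max(np.abs(np.arange(0, n) / n - model))
    return max(above, below)
```

Since severities are discrete counts, this continuous form is only an approximation; a discrete maximum likelihood estimator is preferable when $x_{\rm min}$ is small.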
In Figure~\ref{fig:timeseries}a we show the estimated scaling parameters $\alpha$ for each time period. For the first 30 years of data, the scaling parameter appears to fluctuate around $\alpha\approx 2$, which suggests that the scaling behavior was relatively stable over this period. Subsequent to 1998, when a larger number of domestic events were incorporated into the database, the scaling parameter shifts upward to $\alpha\approx 2.5$, but again shows no consistent trend in any direction. This shift, taken with the apparent stability of the scaling behavior over time, suggests that the absence of domestic events before 1998 may have biased those distributions toward more shallow scaling, i.e., before 1998, larger events appear to be more common.
\begin{figure} [t]
\begin{center}
\begin{tabular}{cc}
\includegraphics[scale=0.36]{v2_fig3a_means.eps} &
\includegraphics[scale=0.36]{v2_fig3b_3panes.eps} \\
(a) & (b)
\end{tabular}
\caption{Results of fitting a simple power-law model to the tail of the severity (deaths) distribution of terrorist events within discrete periods of time since 1968. We divide time into two-year periods, sliding the window forward one year at a time; similar results apply for larger windows. (a) The average scaling exponent $\alpha$ for each two-year period, with circle size proportional to the statistical significance value $p_{\rm KS}$; solid circles indicate $p_{\rm KS}>0.9$. We omit the data point for 1997 as it spans the transition in database management. (b) Three panes showing aspects of the models: (top) significance values computed from a one-sided KS test showing that most models do not achieve high significance, (middle) the estimated $x_{\rm min}$ values, and (bottom) the average inter-event interval for events in the tail, i.e., those with severity greater than $x_{\min}$.}
\label{fig:timeseries}
\end{center}
\end{figure}
Although many ($41\%$) of these iid power-law models appear to match the distribution of severities quite well ($p_{\rm KS}>0.9$), nearly half ($49\%$) achieve only a middling level of statistical significance ($0.5< p_{\rm KS} \leq 0.9$; Figure~\ref{fig:timeseries}b, upper pane). That is, either there are significant temporal correlations within the time series, or there are strong but temporally localized deviations from the long-term structure of the power-law distribution, that cause our simple model to yield a poor fit at these times. Either case is unsurprising for this kind of real-world data. An interesting line of future inquiry would be a close study of the tail events' political context, which may reveal the origin of their correlations and explain when temporally local deviations from the long-term behavior occurred.
Further, we observe that the frequency of the most severe events, i.e., events in the upper tail of the distribution, has not changed much over the past 30 years. In Figure~\ref{fig:timeseries}b (lower pane), we plot the reciprocal of those frequencies, the mean inter-event intervals, for each two-year period. Notably, from 1977--1997, the inter-event interval for extreme events averaged $6.9\pm3.7$ days, while from 1998--2006, it averaged $5.3\pm4.0$ days.
Although this result may appear to contradict recent official
reports that the frequency of terrorist attacks worldwide has
increased dramatically in the past few decades~\citep{State04}, or that the frequency of ``major'' events has decreased, it
does not. Instead, the situation is slightly more complicated: our
analysis suggests that the changes in event frequencies have not
been evenly distributed with respect to their severity, but rather
that less severe attacks are now relatively more frequent, while
the frequency of ``major'' or tail-events has remained unchanged.
This behavior is directly observable as the upward movement of the lower bound on the scaling region in recent years, precisely when attacks overall are thought to be more frequent (Figure~\ref{fig:timeseries}b, middle pane).
Taking the above results together with those of the average log-severity time-series (Figure~\ref{fig:logsev}) in the previous section, we can reasonably conclude that the dominant features of the frequency-severity statistics of terrorism have not changed substantially over the past 38.5 years. That is, had some fundamental characteristic of terrorism changed in the recent past, as we might imagine given recent political events, the frequency-severity distribution would not display the degree of stability we observe in these statistical experiments.
\section{Variation in scale invariance by target-country industrialization}
Returning to the full distributions, we now consider the impact of industrialization on the frequency-severity statistics -- given that each attack is executed within a specific country, we may ask whether there is a significant difference in the scaling behaviors of events within industrialized and non-industrialized countries. Toward this end, we divide the events since 1968 into those that occurred within the 30 Organization for Economic Co-operation and Development (OECD) nations ($1244$ events, or $11\%$), and those that occurred throughout the rest of the world ($9634$ events, or $89\%$). We plot the corresponding total severity distributions in Figure~\ref{fig:weapons}a, and give their summary statistics in Table 2.
Most notably, we find substantial differences in the scaling of the two distributions: industrialized-nation events scale as $\alpha_{\rm OECD}=2.02(9)$, while non-industrialized-nation events scale more steeply, as $\alpha_{\rm non-OECD}=2.51(7)$. That is, while events have been, to date, less likely to occur within the major industrialized nations, when they do occur, they tend to be more severe than in non-industrialized nations. Although this distinction is plausibly the result of technological differences, i.e., industrialization itself makes possible more severe events, it may also arise because industrialized nations are targeted by more severe attacks for political reasons. For instance, the OECD events are not uniformly distributed over the 30 OECD nations, but are disproportionately located in eight states: Turkey (335 events), France (201), Spain (109), Germany (98), the United States of America (93), Greece (76), Italy (73) and the United Kingdom (62). These eight states account for $84.2\%$ (1047) of all such events, for $141$ tail events, i.e., those with total severity of at least $x_{\min}=13$, and for $89.2\%$ of the most severe events, suggesting that industrialization alone is a weak explanation of the location of severe attacks, and that political factors must be important.
\begin{figure}[t]
\begin{center}
\begin{tabular*}{16.5cm}{cc}
\includegraphics[scale=0.36]{v2_fig4a_oecd.eps} &
\includegraphics[scale=0.36]{v2_fig4b_weaps.eps} \\
(a) & (b)
\end{tabular*}
\end{center}
\caption{(a) The frequency-severity (total) distributions $P(X\geq x)$ of attacks worldwide between February 1968 and June 2006, divided among nations inside and outside of the OECD. (b) Total-severity distributions for six weapon types: chemical or biological agents ($0.2\%$ of events), explosives ($44.8\%$, includes remotely detonated devices), fire ($1.2\%$), firearms ($42.3\%$), knives ($2.3\%$) and other ($9.2\%$; includes unconventional and unknown weapon types). For both figures, solid lines indicate the fits, described in Table 2, found by the maximum likelihood method. }
\label{fig:weapons}
\end{figure}
\section{Variation in scale invariance by weapon type}
\label{sec:comp} As our final characterization of the frequency-severity distribution's scale-invariance, we consider the connection between technology, represented by the type of weapon used in an attack, and the severity of the event. Figure~\ref{fig:weapons}b shows the total severity distributions for chemical or biological weapons, explosives (including remotely detonated devices), fire, firearms, knives and a catch-all category ``other'' (which also includes unconventional\footnote{$~$The attacks of \mbox{11 September 2001} are considered unconventional.} and unknown weapons). We find that these component distributions themselves exhibit scale invariance, each with a unique exponent $\alpha$ and lower limit of the power-law scaling $x_{\rm min}$. However, for the chemical or biological weapons, and the explosives distributions, we must make a few caveats. In the former case, the sparsity of the data reduces the statistical power of our fit, and, as discussed by~\cite{Bogen06}, the severity of the largest such event, the 1995 sarin gas attack in Tokyo, is erroneously high. For the latter distribution, another phenomenon must govern the shape of the lower tail, and we investigate its causes below. Table 2 summarizes the distributions and their power-law models.
By partitioning events by weapon type, we now see that the bending in the lower tail of the injury and total severity distributions (Figure~\ref{fig:terrorism}a) is primarily due to explosive attacks, i.e., there is something about attacks utilizing explosives that makes them significantly more likely to injure a moderate or large number of people than other kinds of weapons. However, this property fails for larger events, and the regular scaling resumes in the upper tail. In contrast, we see no such change in the scaling behavior in the lower tail for other weapons -- this demonstrates that the property of scale invariance is largely independent of the choice of weapon. Further, by partitioning events according to their weapon type, we retain high estimates of statistical significance ($p_{\rm KS}>0.9$).
\begin{table
\caption{A summary of the distributions shown in Figure~\ref{fig:weapons}, with power-law fits from the maximum likelihood method. $N$ ($N_{\rm tail}$) gives the number of events in the full (tail) distribution. The parenthetical value gives the standard error of the last digit of the estimated scaling exponent. As described in the text, the statistical significance of the explosives distribution model increases to $p_{\rm KS}\geq0.82$ when we control for suicide explosive
attacks. \newline } \label{table:wsummary} \centering
\fbox{%
\begin{tabular}{l|ccccc|cccc}
Distribution & $N$ & $\langle x \rangle$ & $\sigma_{\rm std}$ & $x_{\rm max}$ & & $N_{\rm tail}$ & $\alpha$ & $x_{\rm min}$ & $p_{\rm KS}\geq$ \\
\hline
OECD & 1244 & 17.65 & 206.28 & 5012 & & 158 & 2.02(9) & 13 & 0.61 \\
Non-OECD & 9634 & 11.04 & 66.09 & 5213 & & 438 & 2.51(7) & 47 & 0.84 \\
\hline
Chem/Bio & 19 & 274.11 & 1147.48 & 5012 & & 19 & 1.5(2) & 1 & 0.89 \\
Explosives & 4869 & 18.93 & 90.61 & 5213 & & 412 & 2.52(7) & 49 & 0.60 \\
Fire & 133 & 16.79 & 107.14 & 1200 & & 85 & 1.9(1) & 2 & 0.99 \\
Firearms & 4603 & 4.09 & 24.52 & 1058 & & 744 & 2.37(5) & 5 & 0.92 \\
Knives & 254 & 2.43 & 7.01 & 107 & & 52 & 2.6(2) & 3 & 0.99 \\
Other & 1000 & 9.30 & 158.79 & 5010 & & 189 & 2.17(9) & 5 & 0.99
\end{tabular}}
\end{table}%
What property of explosives attacks can explain the large displacement of the upper tail in that distribution? \cite{Pape03} demonstrated, through a careful analysis of all suicide attacks between 1980 and 2001, that suicide attacks cause significantly more deaths on average than non-suicide attacks, averaging $13$ and $1$ deaths, respectively. Similarly, for our data set, the average total severity for suicide attacks using explosives is $41.11$, while non-suicide attacks have an average total severity of $14.41$. Controlling for these attacks (692 events, or $12.9\%$) does not significantly change the curvature of the lower tail in the explosives distribution. It does, however, improve the statistical significance of our best-fit model to the upper tail ($\alpha=2.55(9)$, $x_{\min}=47$, $p_{\rm KS}\geq 0.82$), suggesting that the severity of suicide explosives attacks deviates strongly from the general scaling behavior, and further that such attacks are not the source of the lower tail's curvature. Conditioning on additional factors, either singly or jointly, such as the target, tactic or geographic region, can reduce the curvature in the lower tail to varying degrees, but can never eliminate it (results not shown).
By analyzing the sequence of events, however, we find evidence that the curvature is at least partially a temporal phenomenon. When we divide events into the four decades beginning with 1968, 1978, 1988 and 1998, we see that the displacement of the upper tail $x_{\min}$ increases over time, ranging from 2--20 for the first three decades to 49 for the most recent decade. Indeed, because most of the explosives events in the database occurred recently (3034 non-suicide events, or $72.6\%$), the scaling behavior of this decade dominates the corresponding distribution in Figure~\ref{fig:weapons}b. Separating the data by time, however, yields more statistically significant models, with $p_{\rm KS}\geq0.8$ for the latter three decades, and progressively more curvature in the lower tail over time. Thus, we cannot wholly attribute the curvature to the inclusion of domestic events in more recent years, although it is certainly largest then. Rather, its behavior may be a function of changes in the explosives technology used in terrorist attacks over the past 40 years. The validation of this hypothesis, however, is beyond the scope of the current study, and we leave it for future work.
\section{A regression model for the severity of terrorist events}
There is an extensive literature on what factors promote terrorism and make governments more likely to become targets of terrorism. We refer to~\cite{Reich90},~\cite{Pape03}, and~\cite{Sandler05} for overviews of existing studies of terrorism. Notably, however, existing studies say nothing about the frequency-severity distribution of events, and empirical research on terrorism has tended to focus on predicting terrorist incidence. In this section, we consider to what extent models proposed to predict the incidence of terrorism can account for the severity of terrorist events, and to what extent they can reproduce the observed frequency-severity distribution.
\begin{table}
\caption{Coefficients for a negative binomial regression model on
terrorist event incidence, after~\cite{Li05}, and its ability to
predict observed severity statistics; parenthetical entries give
robust standard errors. \newline } \label{table:results}
\centering
\fbox{%
\begin{tabular}{lcccc}
\hline
& \mbox{\rule[-0.2cm]{0cm}{0.7cm} (1)} & (2) & (3) & (4) \\
{\textbf{Variable}} & \shortstack{No. attacks \\ (ITERATE)}
& \shortstack{No. attacks \\ (MIPT)}
& \shortstack{Deaths by \\ event}
& \shortstack{Deaths by \\ country-year} \\
& & & & \\ \hline
Govt constraint & \mbox{\rule[-0.2cm]{0cm}{0.7cm} 0.061} & 0.102 & -0.013 & 0.046 \\
\vspace{4pt} & \begin{footnotesize}(0.023)\end{footnotesize}
& \begin{footnotesize}(0.030)\end{footnotesize}
& \begin{footnotesize}(0.013)\end{footnotesize}
& \begin{footnotesize}(0.038)\end{footnotesize}\\
Democratic participation & -0.009 & -0.007 & -0.001 & -0.011 \\
\vspace{4pt} & \begin{footnotesize}(0.004)\end{footnotesize}
& \begin{footnotesize}(0.006)\end{footnotesize}
& \begin{footnotesize}(0.003)\end{footnotesize}
& \begin{footnotesize}(0.007)\end{footnotesize} \\
Income inequality & 0.001 & -0.001 & 0.003 & -0.002 \\
\vspace{4pt} & \begin{footnotesize}(0.014)\end{footnotesize} & \begin{footnotesize}(0.016)\end{footnotesize} & \begin{footnotesize}(0.007)\end{footnotesize} & \begin{footnotesize}(0.021)\end{footnotesize} \\
Per capita income & -0.177 & -0.161 & 0.008 & -0.222 \\
\vspace{4pt} & \begin{footnotesize}(0.11)\end{footnotesize} & \begin{footnotesize}(0.14)\end{footnotesize} & \begin{footnotesize}(0.047)\end{footnotesize} & \begin{footnotesize}(0.15)\end{footnotesize} \\
Regime durability & -0.076 & -0.109 & 0.039 & 0.010 \\
\vspace{4pt} & \begin{footnotesize}(0.047)\end{footnotesize} & \begin{footnotesize}(0.060)\end{footnotesize} & \begin{footnotesize}(0.024)\end{footnotesize} & \begin{footnotesize}(0.067)\end{footnotesize} \\
Size & 0.118 & 0.0494 & -0.014 & -0.001 \\
\vspace{4pt} & \begin{footnotesize}(0.044)\end{footnotesize} & \begin{footnotesize}(0.054)\end{footnotesize} & \begin{footnotesize}(0.015)\end{footnotesize} & \begin{footnotesize}(0.079)\end{footnotesize} \\
Govt capability & 0.275 & 0.189 & -0.018 & 0.072 \\
\vspace{4pt} & \begin{footnotesize}(0.14)\end{footnotesize} & \begin{footnotesize}(0.18)\end{footnotesize} & \begin{footnotesize}(0.061)\end{footnotesize} & \begin{footnotesize}(0.21)\end{footnotesize} \\
Past incident & 0.547 & 0.717 & -0.009 & 0.789 \\
\vspace{4pt} & \begin{footnotesize}(0.045)\end{footnotesize} & \begin{footnotesize}(0.052)\end{footnotesize} & \begin{footnotesize}(0.024)\end{footnotesize} & \begin{footnotesize}(0.081)\end{footnotesize} \\
Post-cold war & -0.578 & -0.253 & 0.104 & -0.036 \\
\vspace{4pt} & \begin{footnotesize}(0.097)\end{footnotesize} & \begin{footnotesize}(0.11)\end{footnotesize} & \begin{footnotesize}(0.061)\end{footnotesize} & \begin{footnotesize}(0.16)\end{footnotesize} \\
Conflict & -0.170 & -0.046 & 0.294 & 0.072 \\
\vspace{4pt} & \begin{footnotesize}(0.11)\end{footnotesize} & \begin{footnotesize}(0.13)\end{footnotesize} & \begin{footnotesize}(0.13)\end{footnotesize} & \begin{footnotesize}(0.39)\end{footnotesize} \\
Europe & 0.221 & -0.263 & -0.133 & -0.589 \\
\vspace{4pt} & \begin{footnotesize}(0.20)\end{footnotesize} & \begin{footnotesize}(0.34)\end{footnotesize} & \begin{footnotesize}(0.075)\end{footnotesize} & \begin{footnotesize}(0.49)\end{footnotesize} \\
Asia & -0.494 & -0.684 & 0.239 & -0.542 \\
\vspace{4pt} & \begin{footnotesize}(0.25)\end{footnotesize} & \begin{footnotesize}(0.28)\end{footnotesize} & \begin{footnotesize}(0.13)\end{footnotesize} & \begin{footnotesize}(0.36)\end{footnotesize} \\
America & -0.349 & -0.681 & -0.098 & -1.125 \\
\vspace{4pt} & \begin{footnotesize}(0.15)\end{footnotesize} & \begin{footnotesize}(0.23)\end{footnotesize} & \begin{footnotesize}(0.073)\end{footnotesize} & \begin{footnotesize}(0.30)\end{footnotesize} \\
Africa & -0.423 & -0.462 & 0.022 & -0.538 \\
\vspace{4pt} & \begin{footnotesize}(0.18)\end{footnotesize} & \begin{footnotesize}(0.21)\end{footnotesize} & \begin{footnotesize}(0.12)\end{footnotesize} & \begin{footnotesize}(0.31)\end{footnotesize} \\
Constant & -0.443 & 0.805 & 1.591 & 2.548 \\
\vspace{4pt} & \begin{footnotesize}(1.54)\end{footnotesize} &
\begin{footnotesize}(1.89)\end{footnotesize} &
\begin{footnotesize}(0.65)\end{footnotesize} &
\begin{footnotesize}(2.63)\end{footnotesize} \\
\multicolumn{5}{c}{} \\ \hline
N & \mbox{\rule[-0.2cm]{0cm}{0.7cm} 2232} & 2232 & 1109 & 2232 \\
Log-likelihood & -3805.791 & -3300.011 & -2897.375 & -2268.129 \\
LR-$\chi^{2}$ & 1151.842 & 507.427 & 151.709 & 373.293 \\ \hline
\end{tabular}
}
\end{table}
As a recent example of empirical studies on the frequency of terrorist attacks, we use that of~\cite{Li05}. Although different studies have suggested different features to predict variation in terrorist incidents, the Li study is both careful and generally representative of the structure of cross-country comparative studies. Li empirically explores the impact of a large number of political and economic factors that have been hypothesized to make transnational terrorist incidents more or less likely, and argues that while some features of democratic institutions, such as greater executive constraints, tend to make terrorist incidents more likely, other features, such as democratic participation, are associated with fewer incidents. Model (1) in Table 3 displays the coefficient estimates for Li's original results from a negative binomial regression of the number of transnational terrorist events, with each country-year as the unit of observation. We refer to the original~\cite{Li05} article for all details on variable construction, etc.
Since our data are based on terrorist incidents that are not limited to transnational events,
we first replicate the Li model for incidents in the MIPT data to ensure that our results are not an artifact of systematic differences between transnational-only and transnational-plus-domestic terrorist events. The coefficient estimates for the Li model applied to the number of incidents in the MIPT data, shown as Model (2) in Table 3, are reasonably similar to the results for the original Model (1), suggesting that the model behaves similarly when applied to the two sources of data on terrorism.
Next, we examine to what extent the right-hand side covariates in the Li model allow us to predict differences in the severity of terrorism. Model (3) in Table 3
displays the results for a negative binomial regression of the number of deaths among the lethal events in the MIPT data. Comparing the size of the coefficient estimates to their standard errors suggests that none of these coefficients is distinguishable from 0, with the possible exception of the estimates for Europe and the post-Cold War period. In other words, none of the factors proposed by Li seems to be a good predictor of the severity of terrorist events. Moreover, the proposed Li model fails to generate predictions that in any way resemble the observed variation in the number of deaths: the largest predicted number of deaths for any observation in the observed sample is less than 10, far below the actual observed maximum of 2749 (i.e., the 11 September 2001 attack on the World Trade Center).
The original Li model examines the number of incidents by country-year, and it may therefore be argued that looking only at events with casualties could understate the possible success of the model in identifying countries that are unlikely to become targets of terrorist incidents. The results for the Li model applied to the total events for all country-years, Model (4) in Table 3, however, do not lend much support to this idea. Very few of the features emphasized by Li have coefficient estimates distinguishable from 0 by conventional significance criteria, and the highest predicted number of deaths for any one country-year in the sample is still less than 16. As such, this model is clearly not able to generate the upper tail of the observed frequency-severity distribution.
\section{A toy model for scale invariance through competitive forces}
Having shown that a representative model of terrorism incidence is a poor predictor of event severity, we now consider an alternative mechanism by which we can explain the robust statistical feature of scale invariance. As it turns out, power law distributions can arise from a wide variety of processes~\citep{Kleiber03, Mitzenmacher04, Newman05, Farmer06}. In the case of disasters such as earthquakes, floods, forest fires, strikes and wars, the model of self-organized criticality (SOC)~\citep{Bak87}, a physics model for non-equilibrium critical phenomena\footnote{Critical phenomena characterize a phase transition such as the evaporation of water, while a self-organized critical phenomenon is one in which the critical state is a global attractor of system dynamics.} in spatially extended systems, appears to be the most reasonable explanation~\citep{Bak89,Turcotte98,Cederman03,Biggs05} as events themselves are inherently spatial. However, such models seem ill-suited for terrorism, where the severity of an event is not merely a function of the size of the explosion or fire. That is, the number of casualties from a terrorist attack is also a function of the density of people at the time and location of the attack, and of the particular application of its destructive power, e.g., a small explosion on an airplane can be more deadly than a large explosion on solid ground.\footnote{A trivial ``spatial'' model for the frequency-severity scale invariance would be a tight connection between the size of the targeted city and the number of casualties. That is, as we saw earlier, large city populations are distributed as a power law, and we might suppose that an event's severity is proportional to the size of the target city. If target cities are chosen roughly uniformly at random, an obviously unrealistic idealization, then a power law in the frequency-severity statistics follows naturally. 
Tabulating population estimates for cities in our database from publicly available census data, we find that the correlation between an event's severity and the target city population is very weak, $r = 0.2(2)$ for deaths and $r = 0.2(1)$ for total severity, where the number in parentheses is the standard error from a bootstrap resampling of the correlation calculation.}
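The bootstrap standard error quoted in the footnote can be computed along the following lines. This is a generic Python sketch with hypothetical array names, not the original code: it resamples the (severity, population) pairs with replacement and reports the spread of the resampled correlations.

```python
import numpy as np

def bootstrap_correlation(x, y, n_boot=2000, seed=0):
    """Pearson correlation of paired data, with a bootstrap standard error
    obtained by resampling the (x, y) pairs with replacement."""
    rng = np.random.default_rng(seed)
    x = np.asarray(x, dtype=float)
    y = np.asarray(y, dtype=float)
    r = np.corrcoef(x, y)[0, 1]
    idx = rng.integers(0, x.size, size=(n_boot, x.size))
    reps = np.array([np.corrcoef(x[i], y[i])[0, 1] for i in idx])
    return r, reps.std(ddof=1)
```

Resampling pairs, rather than each variable independently, preserves the joint structure of the data and is the standard nonparametric bootstrap for a correlation coefficient.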
In the context of guerilla conflicts,~\citeauthor{Johnson05}~\citeyearpar{Johnson05,Johnson06} have adapted a dynamic equilibrium model of herding behavior on the stock market to produce frequency-severity distributions with exponents in the range of $1.5$ to $3.5$, depending on a parameter that is related to the rates of fragmentation and coalescence of the insurgent groups; they conjecture that the value $2.5$ is universal for all asymmetric conflict, including terrorism. Given the variation in the scaling behaviors that we measure for different aspects of terrorism (Figures~\ref{fig:terrorism},~\ref{fig:timeseries}a and~\ref{fig:weapons}a,b), this kind of universalism may be unwarranted. As an alternative explanation for the origin of the scale invariance for terrorism, we propose and analyze a simple, non-spatially extended toy model of a stochastic, competitive process between states and non-state actors~\citep{Clauset05}. The model itself is a variation of one described by~\cite{Reed02}, and can produce exponents that vary depending on the choice of model parameters -- a feature necessary to explain the different scaling behaviors for industrialization and weapon types.
Central to our model are two idealizations: that the potential severity of an event is a certain function of the amount of planning required to execute it, and that the competition between states and non-state actors is best modeled by a selection mechanism in which the probability that an event is actually executed is inversely related to the amount of planning required to execute it.
Consider a non-state actor (i.e., a terrorist) who is planning an attack. Although the severity of the event is likely to be roughly determined before planning begins, we make the
idealization that the potential severity of the event grows with time, up to some finite limit imposed perhaps by the choice of weapon (as suggested by Figure~\ref{fig:weapons}), the choice of target, or the availability of resources. If we further assume that the payoff rate on additional planning is proportional to the amount of time already invested, i.e., increasing the severity of a well-planned event is easier than for a more ad hoc event, then the potential severity of the event can be expressed as $p(t)\propto {\rm e}^{\kappa t}$, where $\kappa>0$ is a constant.
However, planned events are often prevented, aborted or executed prematurely, possibly as a result of intervention by a state. This process by which some events are carried out, while others are not, can be modeled as a selection mechanism. Assuming that the probability of a successful execution is exponentially related to the amount of time invested in its planning, perhaps because there is a small chance at each major step of the planning process that the actors will be incarcerated or killed by the state, or will abandon their efforts, we can relate the severity of a real event to the planning time of a potential event by $x \propto {\rm e}^{\lambda t}$, where $\lambda<0$ is a constant. Thus, to derive the distribution of real event severities, after the selection mechanism has filtered out those events that never become real, we must solve the following identity from probability theory\footnote{Note that this operation is isomorphic to randomly sampling the potential severity distribution.}
\[ \int p(x)\, {\rm d}x = \int p(t)\, {\rm d}t \enspace . \]
Doing so yields $p(x) \propto x^{-\alpha}$ where $\alpha = 1 - \kappa/\lambda$. Again considering the competitive nature of this process, it may be plausible that states and actors will, through interactions much like the co-evolution of parasites and hosts, develop roughly equal capabilities, on average, but perhaps with a slight advantage toward the state by virtue of its longevity relative to terrorist organizations, such that $|\kappa| \gtrsim |\lambda|$. In this case, we have a power law with exponent $\alpha \gtrsim 2$, in approximate agreement with much of our empirical data.
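The predicted exponent can be checked numerically. The following Monte Carlo sketch is our own illustration, under the idealizations stated above: it samples planning times on a bounded interval with density proportional to ${\rm e}^{\kappa t}$, maps them to severities via $x \propto {\rm e}^{\lambda t}$, and recovers $\alpha = 1 - \kappa/\lambda$ with a maximum likelihood (Hill) estimate.

```python
import numpy as np

rng = np.random.default_rng(42)
kappa, lam, t_max = 1.0, -1.0, 10.0   # growth rate, selection rate, planning cap
n = 200_000

# Planning times on [0, t_max] with density proportional to e^{kappa t},
# drawn by inverting the CDF F(t) = (e^{kappa t} - 1) / (e^{kappa t_max} - 1).
u = rng.random(n)
t = np.log1p(u * np.expm1(kappa * t_max)) / kappa

# Severity of executed events: x proportional to e^{lambda t}, with lambda < 0.
x = np.exp(lam * t)

# Hill / maximum likelihood estimate of the exponent of p(x) ~ x^{-alpha}.
x_min = x.min()
alpha_hat = 1.0 + x.size / np.sum(np.log(x / x_min))

print(alpha_hat)   # close to the predicted alpha = 1 - kappa/lam = 2
```

With $\kappa = 1$ and $\lambda = -1$ the change of variables predicts $\alpha = 2$, and the estimate agrees to within sampling error; other choices of $\kappa$ and $\lambda$ shift the recovered exponent accordingly.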
Although our toy model makes several unrealistic idealizations, its foundational assumptions fit well with the modern understanding of terrorism, and also with examples of recent attacks and foiled attempts. Whereas the plans for the 11 September 2001 attacks in the United States are believed to have been underway since 1996,\footnote{On the planning for the 11 September 2001 attacks, see the summary of a documentary aired by Al-Jazeera at {\tt archives.cnn.com/2002/WORLD/meast/09/12/alqaeda.911.claim/index.html}.} subsequent attacks and attempts in the United Kingdom carried out by less organized groups and with less advance planning have failed to create a similar impact. For example, the 21 July 2005 attacks on the London Underground are now believed to have been a direct copycat effort initiated after the prior 7 July bombings. The attack was spectacularly unsuccessful: none of the four bombs' main explosive charges actually detonated, and the only reported casualty at the time was later found to have died of an asthma attack. Even though the suspects initially managed to flee, all were later apprehended.
The competitive relationship of states and non-state actors has been explored in a variety of other contexts. \cite{Hoffman99} suggests that the state's counter-terrorism efforts serve as a selective measure, capturing or killing those actors who fail to learn from their peers' or predecessors' mistakes, leaving at-large the most successful actors to execute future attacks. \cite{Overgaard94},~\cite{Sandler03},~\cite{Sandler04}, and~\cite{Arce05}
give a similar view, arguing that the actions of states and actors are highly interdependent -- that actors typically make decisions on who, where, when or what to attack based on a careful assessment of the likelihood and impact of success, with these factors being intimately related to the decisions states make to discourage certain forms of attacks or responses. Governments perform a similar calculus, although theirs is primarily reactive rather than proactive~\citep{Arce05}. Looking forward, a game theoretic approach, such as the one used by~\cite{Sandler03} to produce practical counter-terrorism policy suggestions, will likely be necessary to capture this interdependence, although presumably it will be roughly similar to the selective process we describe above.
Obviously, the practical, geopolitical and cultural factors relevant to a specific terrorist attack are extremely complex. Although our toy model intentionally omits them, they presumably influence the values assumed by the model parameters and are essential for explaining the variety of scaling exponents we observe in the data, e.g., the different scaling exponents for OECD and non-OECD nations and for attacks perpetrated using different weapons. It may be possible to incorporate these factors by using a regression approach to instead estimate the parameter values of our toy model, rather than to directly estimate the event severity.
\section{Discussion and conclusions}
Many of the traditional analyses of trends in terrorism are comparative, descriptive, historical or institutional, and those that are statistical rely on assumptions of normality and thus treat rare-but-severe events as qualitatively different from less severe but common events~\citep{Reich90, FBI99, State04, Sandler05}. By demonstrating that Richardson's discovery of scale invariance in the frequency-severity statistics of wars extends to the severity statistics of terrorism, we show that these assumptions are fundamentally false. Our estimate of the scaling behavior for terrorism, however, differs substantially from that for the severity of wars; in the latter case, the frequency-severity distribution scales quite slowly, with $\alpha_{\rm war} = 1.80(9)$, while the distribution scales much more steeply for terrorism, $\alpha_{\rm deaths} = 2.38(6)$, indicating that severe events are relatively less common in global terrorism than in interstate warfare.
Taking Richardson's program of study on the statistics of deadly human conflicts together with the extensive results we discuss here, our previous, preliminary study of terrorism~\citep{Clauset05}, and the study by~\citeauthor{Johnson05}~\citeyearpar{Johnson05,Johnson06} of insurgent conflicts, we conjecture first that scale invariance is a {\em generic feature} of the severity distribution of all deadly human conflicts, and second that it is {\em the differences in the type of conflict} that determine the particular scaling behavior, i.e., the values of the scaling exponent $\alpha$ and the lower limit of the scaling, $x_{\min}$. Indeed, this variation is precisely what we observe when we control for attributes like the degree of economic development, and the type of weapon used in the attack. In honor of Richardson and his pioneering interest in the statistics of deadly conflict, we call our conjecture Richardson's Law. A significant open question for future work remains to determine how and why the distinguishing attributes of a conflict, such as the degree of asymmetry, the length of the campaign, and the political agenda, affect the observed scaling behavior.
With regard to counter-terrorism policy, the results we describe here have several important implications. First, the robustness of the scale invariant relationship between the frequency and severity of attacks demonstrates that severe events are not fundamentally different from less severe ones. As such, policies for risk analysis and contingency planning should reflect this empirical fact. Second, although severe events do occur with much greater frequency than we would expect from our traditional thin-tailed models of event severity, their incidence has also been surprisingly stable over the past 30 years (Figure~\ref{fig:timeseries}b, lower pane). This point suggests that, from an operational standpoint, and with respect to their frequency and severity, there is nothing fundamentally new about recent terrorist activities, worldwide. Third, limiting access to certain kinds of weapons and targets is clearly important, particularly for those that are inherently more likely to produce a severe event, such as high explosives, or targets like airplanes and other mass transit systems. But severe events themselves are not associated with only one or a few weapon types (or targets). Restricting access to some weapons and targets will likely induce the substitution of less easily restricted ones~\citep{Enders06} -- a contingency for which we should plan. Fourth, the trend we identify for explosives, i.e., that such attacks have produced progressively more casualties over time, is particularly worrying given the sheer number of explosives attacks in the recent past. Both their severity and their popularity suggest that current international regulation of explosives technology is failing to keep these weapons out of the hands of terrorists, and that current diplomacy is failing to keep terrorists from resorting to their use.
And finally, although it may be tempting to draw an analogy between terrorism and natural disasters, many of which also follow power-law statistics, we caution against such an interpretation. Rather, a clear understanding of the political and socioeconomic factors that encourage terrorist activities, and an appropriate set of policies that directly target these factors, may fundamentally change the frequency-severity statistics in the future, and break the statistical robustness of the patterns we have observed to date.
In closing, the discovery that the frequency of severe terrorist attacks follows a robust empirical law opens many new questions, and points to important gaps in our current understanding of both the causes and consequences of terrorism. Although we have begun to address a few of those, such as showing that the severity of suicide attacks using explosives does not follow the same frequency-severity statistics as other forms of terrorism, many more remain. We hope to see the community of conflict researchers making greater use of these new ideas in future research on terrorism.
\section*{Acknowledgments}
\noindent A.C. and M.Y. thank Cosma Shalizi, Cristopher Moore and Raissa D'Souza for helpful conversations. K.S.G. thanks Lindsay Heger, David A. Meyer, and Quan Li. We are also grateful to the editor at JCR and two anonymous reviewers for valuable comments on a previous version of the manuscript. This work was supported in part by the National Science Foundation under grants PHY-0200909 and ITR-0324845 (A.C.), CCR-0313160 (M.Y.), and SES-0351670 (K.S.G.), and by the Santa Fe Institute (A.C.).
\begin{appendix}
\section*{Appendix: Statistical methodology, and the use of power laws in empirical studies}
\label{appendix:A} Because the use of power laws and other heavy-tailed distributions in the social sciences is a relatively new phenomenon, the statistical tools and their relevant characteristics may not be familiar to some readers. This appendix thus serves to both explain our statistical methodology and to give the interested reader a brief tutorial on the subject. We hope that this material illuminates a few of the subtleties involved in using power laws in real-world situations. Readers interested in still more information should additionally refer to~\cite{Newman05} and~\cite{Goldstein04}.
To begin, we note that there are two distinct kinds of power laws, a real-valued or continuous kind and a discrete kind. Although both forms have many characteristics in common, the numerical methods one employs in empirical studies can be quite different depending on whether the data are best treated as continuous or discrete. Examples of the former might be voltages on power lines, the intensity of solar flares or the magnitude of earthquakes. In cases where discrete data take values that are quite large, they can often be safely treated as if they were continuous variables, such as for the population of US cities, book sales in the US or the net worth of Americans. In the social sciences, however, data more frequently assume integer values where the maximum value is only a few orders of magnitude larger than the minimum, i.e., the tail is heavy but rather short. Examples of this kind of data might be the number of connections per person in a social network, casualty statistics for terrorist attacks, and word frequencies in a text. If such data are treated as a continuous variable, estimates of the scaling behavior or other statistical analyses can be significantly biased.
Instead, these heavy but relatively short tails should be modeled explicitly as a discrete power law,
\begin{align*}
P(x) = x^{-\alpha} / \zeta(\alpha) \enspace,
\end{align*}
with $x$ assuming only integer values greater than zero, and $\zeta(\alpha)$ being the Riemann zeta function, the normalization constant. In what follows, we will first consider the necessity of generating a random deviate with a power-law distribution, and then consider methods for estimating power law parameters from data itself. Both sections describe the statistical methods employed in this study, and provide a brief comparison with alternative methods.
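As a concrete illustration (a minimal sketch of our own, not code from the study), the discrete power-law pmf above can be evaluated by truncating the zeta-function sum, which converges rapidly for $\alpha>2$:

```python
def zeta_trunc(alpha, nmax=200000):
    """Truncated Riemann zeta sum; the neglected tail is O(nmax**(1 - alpha)),
    which is negligible here for alpha > 2."""
    return sum(k ** (-alpha) for k in range(1, nmax + 1))

def discrete_powerlaw_pmf(x, alpha, Z=None):
    """P(x) = x**(-alpha) / zeta(alpha) for integer x >= 1."""
    if Z is None:
        Z = zeta_trunc(alpha)
    return x ** (-alpha) / Z
```

For $\alpha=2.5$, `zeta_trunc` reproduces $\zeta(2.5)\approx 1.3415$, and the pmf sums to unity over the positive integers, confirming that $\zeta(\alpha)$ is the correct normalization constant.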
\subsection*{Generating power-law distributed data}
Statistical modeling often necessitates the generation of random deviates with a specified distribution, e.g., in simple null-models or statistical hypothesis tests. \cite{Newman05} gives a simple analytic formula, derived using the transformation method~\citep{Press92}, for converting a uniform deviate into a continuous power law deviate:
\begin{align*}
x = x_{\min}(1-r)^{-1/(\alpha-1)} \enspace,
\end{align*}
where $x$ is distributed as a real number over the interval $[x_{\min},\infty)$, and $r$ is a uniform deviate. Although it may be tempting to simply take the integer portion of each deviate $x$ in order to obtain a discrete power law, the resulting distribution will actually differ quite strongly from what is desired: such a procedure shifts a significant amount of probability mass from smaller to larger values, relative to the corresponding theoretical discrete power-law distributed deviate.
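In code, the transformation-method generator for the continuous case is a one-liner (a sketch using Python's standard library; the function name is ours):

```python
import random

def continuous_powerlaw_deviate(alpha, xmin=1.0, rng=random):
    """Transform a uniform deviate r into a continuous power-law deviate
    via x = xmin * (1 - r)**(-1 / (alpha - 1))."""
    r = rng.random()
    return xmin * (1.0 - r) ** (-1.0 / (alpha - 1.0))
```

A quick sanity check is the complementary CDF: deviates generated this way satisfy $\Pr(X > x) = (x/x_{\min})^{1-\alpha}$, which can be verified against a large sample.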
A more satisfying approach is to use a deviate generator specifically designed for a discrete power law. Because the discrete form does not admit a closed-form analytical solution via the transformation method like the continuous form, the generator must instead take an algorithmic approach to convert uniform deviates via the inverse cumulative distribution function of the discrete power law. Such an approach is a standard practice, and fast algorithms exist for doing so~\citep{Press92}. To illustrate the differences between these two power-law deviate generators, we show in Figure~\ref{fig:deviates}a that the latter approach produces distributions that are significantly closer to the desired theoretical one than does the former method, and it is the latter which we use for our statistical studies in the main text.
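A minimal sketch of such a discrete generator (our own implementation: it tabulates the CDF up to a large cutoff `xmax` and inverts it by bisection, which is adequate for the steep tails considered here):

```python
import random
from bisect import bisect_left
from itertools import accumulate

def make_discrete_powerlaw_sampler(alpha, xmax=10**5):
    """Inverse-CDF sampler for a discrete power law on {1, ..., xmax}.
    The truncation discards O(xmax**(1 - alpha)) probability mass."""
    weights = [k ** (-alpha) for k in range(1, xmax + 1)]
    cdf = list(accumulate(weights))
    total = cdf[-1]
    def sample(rng=random):
        # Smallest x whose cumulative mass exceeds a uniform draw
        return bisect_left(cdf, rng.random() * total) + 1
    return sample
```

With $\alpha=2.5$, about $1/\zeta(2.5)\approx 74.5\%$ of draws equal $1$, matching the theoretical pmf exactly rather than the shifted mass produced by truncating continuous deviates.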
\begin{figure}[t]
\begin{center}
\begin{tabular}{cc}
\includegraphics[scale=0.36]{deviate_deviation_n10k_b20.eps} &
\includegraphics[scale=0.36]{estimator_bias.eps} \\
(a) & (b)
\end{tabular}
\end{center}
\caption{(a) The closeness, in the sense of the Kolmogorov-Smirnov goodness-of-fit measure, of power-law distributed deviates, generated using the two methods described in the text, to the target distribution, a discrete power law with scaling parameter $\alpha=2.5$. Results are for $x_{\min}=1$ and $n=10~000$, with similar results holding for other values, although the difference decreases as $x_{\min}\rightarrow\infty$. Quite dramatically, the discrete deviate generator does a significantly better job at matching the theoretical distribution than does the continuous method discussed in the text. (b) The results of using the three methods discussed in the text for estimating the scaling parameter of discrete power-law distributed data, with parameters $x_{\min}=1$ and $n=10~000$; similar results hold for other values, although the estimates get increasingly noisy as the number of observations shrinks, and the two maximum likelihood estimators increasingly agree as $x_{\min}\rightarrow\infty$. Error bars are omitted when they are less than the size of the series symbol.} \label{fig:deviates}
\end{figure}
\subsection*{Estimating scaling parameters from data}
Since Richardson first considered the scale invariance in the frequency and severity of wars, statistical methods for characterizing power laws have advanced significantly. The signature feature of a tail distribution that decays as a power law is a straight line with slope $-\alpha$ on doubly logarithmic axes. As such, a popular method of measuring the scaling exponent $\alpha$ has been a least-squares regression on log-transformed data, i.e., one takes the log of both the dependent and independent variables (or bins the data into decades) and then measures the slope using a least-squares linear fit. Unfortunately, this procedure yields a biased estimate of the scaling exponent~\citep{Goldstein04}. For continuous power-law data, \cite{Newman05} gives an unbiased estimator based on the method of maximum likelihood; however, it too yields a biased estimate when applied to discrete data like ours. \cite{Goldstein04} study the bias of several estimators for power-law distributed data, and, also using the method of maximum likelihood, give a transcendental equation whose solution is an unbiased estimator for discrete data. In our main study, we use a generalization of this equation as our discrete maximum likelihood estimator.
To give the reader a sense of the performance of these methods, we show in Figure~\ref{fig:deviates}b the results of applying them to simulated data derived from the discrete generator described above. Quite clearly, the discrete maximum likelihood estimator yields highly accurate results, with the other techniques either over- or under-estimating the true scaling parameter, sometimes dramatically so. \cite{Johnson06} have also studied the accuracy of these estimators, but apparently only for data derived from the continuous deviate generator described above.
The discrete maximum likelihood estimator of Goldstein et al. assumes that the tail encompasses the entire distribution. A generalization of their formula to distributions where the tail begins at some minimum value $x_{\min}\geq1$ follows, and the value of $\alpha_{\rm ML}$ that satisfies this equation is the discrete maximum likelihood estimator:
\begin{align*}
\frac{\zeta'(\alpha,x_{\min})}{\zeta(\alpha,x_{\min})} = -\frac{1}{n}\sum_{i=1}^{n}\log x_{i} \enspace ,
\end{align*}
where the $x_{i}$ are the data in the tail, $n$ is the number of such observations, and $\zeta(\alpha,x_{\min})$ is the Hurwitz (incomplete) zeta function. If desired, the latter can be rewritten as $\zeta(\alpha) - {\rm H}_{x_{\min}-1}^{\alpha}$, the difference between the Riemann zeta function and the $(x_{\min}-1)$th generalized harmonic number of order $\alpha$. When $x_{\min}=1$, the left-hand side reduces to $\zeta'(\alpha)/\zeta(\alpha)$, the values of which can be calculated using most standard mathematical software. Alternatively, one can numerically maximize the log-likelihood function itself,
\begin{align*}
\mathcal{L}(\alpha~|~x) = -n\log\zeta(\alpha,x_{\min}) - \alpha\sum_{i=1}^{n} \log x_{i} \enspace ,
\end{align*}
which may be significantly more convenient than dealing with the derivative of the incomplete zeta function. This approach is what was used in both the present study, and in our preliminary study of this terrorism data~\citep{Clauset05}.
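A direct numerical maximization of this log-likelihood can be sketched as follows (pure Python, with the Hurwitz zeta truncated and $\alpha$ restricted to a coarse grid; a production implementation would use a proper one-dimensional optimizer):

```python
import math

def hurwitz_zeta_trunc(alpha, xmin, nmax=10000):
    """Truncated zeta(alpha, xmin) = sum over k >= xmin of k**(-alpha)."""
    return sum(k ** (-alpha) for k in range(xmin, xmin + nmax))

def discrete_mle_alpha(data, xmin=1, grid=None):
    """Maximize L(alpha | x) = -n log zeta(alpha, xmin) - alpha sum_i log x_i
    over a grid of candidate alpha values."""
    grid = grid or [1.1 + 0.02 * i for i in range(151)]  # alpha in [1.1, 4.1]
    tail = [x for x in data if x >= xmin]
    n = len(tail)
    slog = sum(math.log(x) for x in tail)
    def loglik(a):
        return -n * math.log(hurwitz_zeta_trunc(a, xmin)) - a * slog
    return max(grid, key=loglik)
```

Applied to synthetic discrete power-law data, this grid search recovers the generating exponent to within its sampling error, in line with the estimator comparison of Figure~\ref{fig:deviates}b.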
These equations assume that the range of the scaling behavior, i.e., the lower bound $x_{\min}$, is known. In real-world situations, this value is often estimated visually, and a conservative choice can be sufficient when the data span a half-dozen or so orders of magnitude. However, the data for many social or complex systems only span a few orders of magnitude at most, and an underpopulated tail would provide our tools with little statistical power. Thus, we use a numerical method for selecting the $x_{\min}$ that yields the best power-law model for the data. Specifically, for each $x_{\min}$ over some reasonable range, we first estimate the scaling parameter $\alpha_{\rm ML}$ over the data $x \geq x_{\min}$, and then compute the Kolmogorov-Smirnov (KS) goodness-of-fit statistic between the data being fit and a theoretical power-law distribution with parameters $\alpha_{\rm ML}$ and $x_{\rm min}$. We then select the $x_{\rm min}$ that yields the best such fit to our data. For simulated data with similar characteristics to the MIPT data, we find that this method correctly estimates both the lower bound on the scaling and the scaling exponent. Mathematically, we take
\begin{align*}
x_{\rm min} = \min_{y} \left[~\max_{x} \Big| F(x; \alpha_{\rm ML}, y) - \hat{F}(x; y) \Big|~\right] \enspace ,
\end{align*}
where $F(x; \alpha_{\rm ML}, y)$ is the theoretical cumulative distribution function (cdf) for a power law with parameters $\alpha_{\rm ML}$ and $x_{\rm min}=y$, and $\hat{F}(x; y)$ is the empirical distribution function (edf) over the data points with value at least $y$. In cases where two values of $y$ yield roughly equally good fits to the data, we report the one with greater statistical significance.
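The $x_{\min}$-selection procedure can be sketched as follows (our own minimal implementation: for each candidate lower bound, refit $\alpha$ by a coarse grid MLE, then keep the candidate whose fitted model is closest to the data in the KS sense; the tail mass beyond the largest observation is approximated analytically):

```python
import math
from collections import Counter
from itertools import accumulate

def _fit_alpha(tail, y, grid=None):
    # Coarse grid MLE for the discrete power law on x >= y (sketch)
    grid = grid or [1.2 + 0.05 * i for i in range(59)]
    n, slog = len(tail), sum(math.log(x) for x in tail)
    def ll(a):
        return -n * math.log(sum(k ** (-a) for k in range(y, y + 10000))) - a * slog
    return max(grid, key=ll)

def select_xmin(data, candidates):
    """Return (xmin, alpha) minimizing the KS distance between the empirical
    distribution of x >= xmin and the fitted discrete power law."""
    best = (float("inf"), None, None)
    for y in candidates:
        tail = [x for x in data if x >= y]
        a = _fit_alpha(tail, y)
        xmax = max(tail)
        w = [k ** (-a) for k in range(y, xmax + 1)]
        Z = sum(w) + xmax ** (1.0 - a) / (a - 1.0)  # analytic tail estimate
        cum = list(accumulate(w))
        counts, n = Counter(tail), len(tail)
        ecdf, ks = 0.0, 0.0
        for x in sorted(counts):
            ecdf += counts[x] / n
            ks = max(ks, abs(cum[x - y] / Z - ecdf))
        best = min(best, (ks, y, a))
    return best[1], best[2]
```

On data drawn from a pure discrete power law, the refitted exponent is stable across candidate lower bounds, reflecting the self-similarity of the tail above any cutoff.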
Once these parameters have been estimated, we first calculate the standard error in $\alpha$ via bootstrap resampling. The errors reported in Tables 1 and 2, for instance, are derived in this manner. Finally, we calculate the statistical significance of this fit by a Monte Carlo simulation of $n$ data points drawn a large number of times (e.g., at least $1000$ draws) from $F(x;\alpha_{\rm ML}, x_{\rm min})$, where $\alpha_{\rm ML}$ and $x_{\rm min}$ have been estimated as above, under the one-sided KS test. Tabulating the results of the simulation yields an appropriate table of $p$-values for the fit, from which the relative rank of the observed KS statistic can be interpreted in the standard way.
As mentioned in the text, there are many heavy-tailed distributions, e.g., the \mbox{q-exponential} $e_{q}^{-\alpha x}$, the stretched exponential $e^{-\alpha x^{\beta}}$, the log-normal, and even a different two-parameter power law $(c + x)^{-\alpha}$. For data that span only a few orders of magnitude, the behavior of these functions can be statistically indistinguishable, i.e., it can be hard to show that data generated from an alternative distribution would not yield just as good a fit to the power-law model. As such, we cannot rule out all Type II statistical errors for our power law models. On the other hand, we note that for the distributions described in Section~\ref{sec:fulldistribution}, a statistical comparison against a log-normal model indicates that the power law better represents the empirical data. In some sense, the particular kind of asymptotic scaling in the data is less significant than the robustness of the heavy tail under a variety of forms of analysis. The very fact that the patterns in the real-world severity data deviate so strongly from our expectations under traditional models of terrorism illustrates that there is much left to understand about this phenomenon, and our models need to be extended to account for the robust empirical patterns we observe in our study.
\end{appendix}
\singlespace
\fontsize{10}{10}
\selectfont
\section{Introduction}
Recent years have witnessed new progress in the study of particle physics during the cosmic inflation. It was realized that current and upcoming cosmological observations have great potential in probing the structures of primordial scalar and tensor perturbations, and thereby could provide us a wealth of information about the high-energy physics at the inflation scale. Many studies in recent years have explored the possibility of probing high-scale new physics with future data, a program sometimes called the ``cosmological collider (CC) physics''\cite{Chen:2009we,Chen:2009zp,Baumann:2011nk,Chen:2012ge,Pi:2012gf,Noumi:2012vr,Gong:2013sma,Arkani-Hamed:2015bza,Chen:2015lza,Chen:2016nrs,Chen:2016uwp,Chen:2016hrz,Lee:2016vti,An:2017hlx,An:2017rwo,Iyer:2017qzw,Kumar:2017ecc,Chen:2017ryl,Tong:2018tqf,Chen:2018sce,Chen:2018xck,Chen:2018cgg,Chua:2018dqh,Wu:2018lmx,Saito:2018omt,Li:2019ves,Lu:2019tjj,Liu:2019fag,Hook:2019zxa,Hook:2019vcn,Kumar:2018jxz,Kumar:2019ebj,Alexander:2019vtb,Wang:2019gbi,Wang:2019gok,Wang:2020uic,Li:2020xwr,Wang:2020ioa,Fan:2020xgh,Aoki:2020zbj,Bodas:2020yho,Maru:2021ezc,Lu:2021gso,Sou:2021juh,Lu:2021wxu,Pinol:2021aun,Cui:2021iie,Tong:2022cdz,Reece:2022soh,Qin:2022lva,Chen:2022vzh,Cabass:2022rhr,Cabass:2022oap,Niu:2022quw,Niu:2022fki}.
The main observables of CC physics are the $n$-point $(n\geq 2)$ correlation functions of the primordial scalar or tensor fluctuations. We collectively call them \emph{inflation correlators}. They can be viewed as correlation functions of quantum fields living in the bulk of the inflationary spacetime, which is approximately de Sitter, but with their external legs pinned to the future boundary of the spacetime, namely, the end of inflation. As key quantities connecting QFT predictions with the cosmological observations, the inflation correlators play crucial roles in CC physics, much like the Minkowskian scattering amplitudes to the collider physics. It is thus of central importance to have a good theoretical understanding of inflation correlators. Although our current knowledge about inflation correlators is still very preliminary compared to the much-developed scattering amplitudes in Minkowski spacetime, a lot of new results have emerged in recent years in both analytical and numerical frontiers \cite{Arkani-Hamed:2018kmz,Baumann:2019oyu,Baumann:2020dch,Pajer:2020wnj,Pajer:2020wxk,Cabass:2021fnw,Pimentel:2022fsc,Jazayeri:2022kjy,Qin:2022fbv,Xianyu:2022jwk,Wang:2022eop,Baumann:2022jpr,Sleight:2019mgd,Sleight:2019hfp,Sleight:2020obc,Sleight:2021iix,Sleight:2021plv,Wang:2021qez,Goodhew:2020hob,Melville:2021lst,Goodhew:2021oqg,DiPietro:2021sjt,Tong:2021wai,Bonifacio:2021azc,Hogervorst:2021uvp,Meltzer:2021zin,Heckelbacher:2022hbq,Gomez:2021qfd,Gomez:2021ujt,Baumann:2021fxj}.
A particular aspect of studying inflation correlators is to find precise and explicit results, either numerical or analytical, for a range of key processes that were identified in the recent studies of CC phenomenologies. Until recently, many phenomenological studies relied on often-unjustified approximations. In general, the CC processes involve inflation correlators mediated by massive fields at both the tree and the loop levels, and the on-shell production of these massive particles during inflation can leave oscillatory signatures in the inflation correlators. While this is phenomenologically quite appealing, the computation of these massive processes is relatively difficult. In the diagrammatic approach in the Schwinger-Keldysh (SK) formalism \cite{Schwinger:1960qe,Keldysh:1964ud,Feynman:1963fq,Jordan:1986ug,Weinberg:2005vy,Chen:2017ryl}, the computation involves multi-layered and nested time integrals over products of special functions.
Several methods have been developed in recent years that benefit analytical computations of inflation correlators. For example, as was pointed out in \cite{Arkani-Hamed:2015bza}, one can derive differential equations satisfied by inflation correlators, so that one can find explicit results by solving these differential equations with proper boundary conditions, instead of computing the SK integrals directly. This method has been further developed in subsequent studies and was called the cosmological bootstrap \cite{Arkani-Hamed:2018kmz,Baumann:2019oyu,Baumann:2020dch,Pajer:2020wnj,Pajer:2020wxk,Cabass:2021fnw,Pimentel:2022fsc,Jazayeri:2022kjy,Qin:2022fbv,Xianyu:2022jwk,Wang:2022eop,Baumann:2022jpr}. Borrowing this terminology and following our previous work \cite{Qin:2022fbv}, we shall call the differential equations satisfied by inflation correlators the \emph{bootstrap equations}.
Another recently explored method is the Mellin transform \cite{Sleight:2020obc,Sleight:2021iix,Sleight:2021plv}, which exploits the dilatation symmetry of dS. The Mellin variable is essentially the weight of dilatation eigenmodes. By using Mellin variables in place of time variables, we can trivialize the SK time integrals. It was further shown in \cite{Qin:2022lva,Qin:2022fbv} that a more practical approach is to use Mellin variables only for the internal propagators, while the external legs still retain their time dependence. This approach, called partial Mellin-Barnes representation, is convenient for computing inflation correlators, since these correlators are defined to be equal-time correlators at the future boundary. It is thus better to leave the time variables in external modes untransformed.
With either the bootstrap techniques or the Mellin transform, many explicit results have been found for tree-level correlators with single massive exchanges, including the dS covariant cases and boost-breaking cases. See, e.g., \cite{Arkani-Hamed:2018kmz,Baumann:2019oyu,Sleight:2019hfp,Pimentel:2022fsc,Jazayeri:2022kjy,Qin:2022fbv}. There are also new results at the 1-loop level obtained with partial Mellin-Barnes representation \cite{Qin:2022lva} or dS spectral decomposition \cite{Xianyu:2022jwk}. In all these examples, when the internal massive fields are heavy enough, the correlator contains an oscillatory/nonanalytic piece which we call the \emph{signal}, and a smooth/analytic piece which we call the \emph{background}. The signal part typically consists of imaginary powers of momentum ratios multiplied by a hypergeometric function, while the background part is typically written as a double Taylor series in momentum ratios. Most of the previous results were first obtained at the four-point level, and the three-point correlators were obtained by taking a soft limit, although taking this soft limit could be subtle in practice, as will be commented below.\footnote{There are also studies that work directly at the three-point level; See, e.g., \cite{Pimentel:2022fsc}.} The results for the two-point functions are even rarer.
In this work, we present new analytical results for a wide range of three-point and two-point inflation correlators with a single massive exchange at the tree level, shown in Fig.\ \ref{fd_3pt2pt}. The external modes can be either massless scalar mode such as the inflaton fluctuation, or the closely related conformal scalar, or the massless tensor mode. The intermediate massive particle can have arbitrary mass, spin, and chemical potential within the physically allowed parameter space. We allow very general couplings between the external mode and the intermediate massive particle, including both the nonderivative and derivative couplings, and the coupling coefficient is allowed to have very general complex power dependence on the conformal time $\tau$. Thus our result covers models with time-dependent couplings, especially the oscillatory couplings that are present in models with oscillating background fields, e.g., \cite{Chen:2022vzh}. The couplings with arbitrary complex power dependence on the conformal time are also useful when we treat our correlators as subgraphs of more complicated correlators.
Our results feature exact and closed-form formulae. In contrast to previously obtained results, our expressions for the three-point and two-point correlators do not contain any Taylor series. Instead, all the momentum dependences are fully captured by familiar special functions whose analytical properties are well understood. This makes our result particularly suitable for understanding the analytic properties of inflation correlators. Also, our compact results can also be used as handy building blocks when constructing more complicated inflation correlators, a topic we shall explore in a separate work \cite{qin_box}.
The strategy we adopt in this paper is similar in spirit to many previous works on cosmological bootstraps. That is, we begin with a tree-level four-point correlator $\langle}\def\ra{\rangle\varphi_{\mathbf k_1}\varphi_{\mathbf k_2}\varphi_{\mathbf k_3}\varphi_{\mathbf k_4}\ra'$ with a single massive exchange in the $s$-channel, with momentum $\mathbf k_s\equiv \mathbf k_1+\mathbf k_2$. The result for the three-point and two-point correlators can be obtained by taking soft limits $\mathbf k_4\rightarrow 0$ and $\mathbf k_2,\mathbf k_4\rightarrow 0$, which we call the single folded limit and the double folded limit, respectively. (See Fig.\ \ref{fd_4pt}.) The regularity of folded limits of four-point functions follows from the Bunch-Davies initial condition for all the fields involved \cite{Arkani-Hamed:2018kmz}. So, there is no conceptual difficulty in taking the folded limits. In practice, however, the four-point functions usually contain terms singular in folded limits, and these singularities must cancel out in the full expression. The matter is further complicated by the fact that the double folded limit is usually at the boundary of convergent regions of the Taylor series for the background. All these complications make the folded limit less trivial than it seems.
In this work, we circumvent these complications by making a proper change of variables. The advantage of adopting a new set of variables for the bootstrap equations has already been observed in \cite{Qin:2022fbv}. Here we shall make fuller use of it. The key idea is simple: after stripping off trivial external momentum factors, an $s$-channel four-point correlator depends on various momenta only through two independent momentum ratios. In previous works, the two independent momentum ratios are normally taken as $r_1\equiv k_{s}/(k_1+k_2)$ and $r_2\equiv k_s/(k_3+k_4)$, where $k_i\equiv |\mathbf k_i|$. Our new observation is that it is more advantageous to use $u_i\equiv 2r_i/(1+r_{i})$ $(i=1,2)$ instead of $r_{1,2}$. For physical parameters $0\leq r_{1,2}\leq 1$, the new variable $u_{i}$ is a monotonic function of $r_i$ and also takes its value from $[0,1]$. The key advantage of $u$ variables is that the inhomogeneous bootstrap equation can be solved by an ansatz of single-layer Taylor series in the single folded limit $u_2\rightarrow 1$, and it turns out that this Taylor series can be easily summed to give a closed-form special function, which in our case is always a generalized hypergeometric function ${}_3\text{F}_2$. With the closed-form expression for the three-point functions, the double folded limit $u_{1,2}\rightarrow 1$ is also easily taken, and the result is again a closed-form expression.
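Explicitly, the change of variables just introduced and its inverse read
\begin{align*}
u_i = \frac{2r_i}{1+r_i}, \qquad r_i = \frac{u_i}{2-u_i}, \qquad \frac{{\mathrm{d}} u_i}{{\mathrm{d}} r_i} = \frac{2}{(1+r_i)^2} > 0 \enspace ,
\end{align*}
so that $u_i$ increases monotonically from $u_i=0$ at $r_i=0$ to $u_i=1$ at $r_i=1$; in particular, the single and double folded limits correspond to $u_2\rightarrow 1$ and $u_{1,2}\rightarrow 1$, respectively.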
Now we give an outline for the rest of this work. In Sec.\ \ref{sec_reduce}, we introduce the correlators to be computed in this work, and explain that all these correlators can be easily reduced to simple linear combinations of several \emph{seed integrals}, and thereby reduce the computation of the correlators to that of the seed integrals. Depending on whether there is a nonzero chemical potential for the intermediate massive particle, we need two types of seed integrals: For massive fields of arbitrary mass and spin but without chemical potential, the corresponding internal propagator can always be expressed in terms of Hankel functions. For such correlators, we define a \emph{Hankel seed integral}. On the other hand, for an intermediate field with a chemical potential, the transversely polarized internal propagator is expressed in terms of Whittaker W functions. For such cases, we define a \emph{Whittaker seed integral}. We then give various explicit examples, showing how to reduce the correlators to seed integrals. Then, we present all the details of solving the bootstrap equations for the Hankel seed integral in Sec.\ \ref{sec_hankel}, and for the Whittaker seed integral in Sec.\ \ref{sec_whittaker}. We conclude in Sec.\ \ref{sec_concl} with further discussions. Some useful formulae are summarized in App.\ \ref{app_formulae}, and a computation of the seed integrals in the squeezed limit is presented in App.\ \ref{app_squeezed}.
Readers uninterested in the technical details of bootstrapping seed integrals can skip most of Sec.\ \ref{sec_hankel} and Sec.\ \ref{sec_whittaker}. For readers who only want to use the results, the shortest route is to glance through Sec.\ \ref{sec_reduce} to get an idea of how to reduce a correlator to the seed integral, and then jump directly to the results. A quick summary of our results is as follows. The Hankel seed integral $\wt{\mathcal{I}}_{\mathsf{a}\mathsf{b}}^{p_1p_2}(u_1,u_2)$ is defined in (\ref{eq_SeedIntH}). The closed-form expression for $\wt{\mathcal{I}}_{\mathsf{a}\mathsf{b}}^{p_1p_2}(u_1,1)$ is given in (\ref{eq_IPPresult}) and (\ref{eq_IPMresult}), while the closed-form expression for $\wt{\mathcal{I}}_{\mathsf{a}\mathsf{b}}^{p_1p_2}(1,1)$ is given in (\ref{eq_H2ptResultPP}) and (\ref{eq_H2ptResultPM}). The Whittaker seed integral $\wt{\mathcal{I}}^{(h)p_1p_2}_{\mathsf{a}\mathsf{b}}(u_1,u_2)$ is defined in (\ref{eq_SeedIntW}). The closed-form expression for its single folded limit $\wt{\mathcal I}_{\mathsf{a}\mathsf{b}}^{(h)p_1p_2}(u,1)$ is given in (\ref{eq_IhPMresult}) and (\ref{eq_IhPPresult}), and the closed-form expression for the double folded limit $\wt{\mathcal I}_{\mathsf{a}\mathsf{b}}^{(h)p_1p_2}(1,1)$ is given in (\ref{eq_Ih2ptResultPM}) and (\ref{eq_Ih2ptResultPP}).
Explicit analytical results already exist in the literature for many special cases considered in this work, but almost always expressed in terms of Taylor series.\footnote{In the appendix of \cite{Arkani-Hamed:2018kmz} it was shown that such Taylor series in $r$ variables may be written in terms of the Kampé de Fériet function, which is a very general form of hypergeometric function of two variables. However, it seems that the numerical implementation of this function is not readily available in Mathematica.} For these known cases, our results are new in that they are expressed in compact and closed form in terms of familiar special functions, instead of Taylor series. Our results also include some completely new cases that were not treated in the literature, including most of the two-point functions, the three-point function with (complex) time-dependent couplings, and the chemical-potential-boosted three-point function with arbitrary couplings.
\paragraph{Notations and conventions.} We use mostly plus signature for the spacetime metric and we fix the background geometry to be the inflationary patch of dS. With the conformal time $\tau\in(-\infty,0)$ and comoving spatial coordinates $\mathbf x\in\mathbb R^3$, the metric reads ${\mathrm{d}} s^2 =a^2(\tau)(-{\mathrm{d}}\tau^2+{\mathrm{d}}\mathbf x^2)$ and $a(\tau)=-1/(H\tau)$, where $H$ is the inflationary Hubble parameter. Throughout this work, we take $H=1$ for simplicity. We follow the diagrammatic notations and conventions in \cite{Chen:2017ryl}. Frequently used shorthand notations include $k_{ij}\equiv k_i+k_j$ $(i,j=1,2,3,4)$, $k_{123}\equiv k_1+k_2+k_3$, $p_{12}\equiv p_1+p_2$, and $\bar p_{12}\equiv p_1-p_2$. Sans-serif indices such as $\mathsf{a},\mathsf{b}$ are often but not always labels for SK branches; in all cases they take values in $\pm 1$. In most places, we use the mass parameter $\wt\nu$ instead of the mass $m$ for the intermediate massive field. The two are related by $\wt\nu=\sqrt{m^2-9/4}$ for scalars and $\wt\nu=\sqrt{m^2-(s-1/2)^2}$ for fields of spin $s\neq 0$. In all intermediate steps, we treat $\wt\nu$ as a positive real parameter. The results for fields in the complementary series ($\wt\nu$ purely imaginary) can be obtained from the final results by analytic continuation. For spinning fields, the chemical potential is denoted by $\wt\mu$. We use $\langle\varphi_{\mathbf k_1}\cdots\varphi_{\mathbf k_n}\rangle'$ to denote correlators of 3-momentum modes $\varphi_{\mathbf k}$, where the prime in $\langle\cdots\rangle'$ means that the momentum-conserving $\delta$-function is removed. Other variables and parameters will be defined in the main text when they appear.
\section{Reducing Correlators to Seed Integrals}
\label{sec_reduce}
\begin{figure}
\centering
\parbox{0.38\textwidth}{\includegraphics[width=0.38\textwidth]{fd_scalar_tree_3pt}}~~~~
\parbox{0.38\textwidth}{\includegraphics[width=0.38\textwidth]{fd_scalar_tree_2pt}}
\caption{Three-point correlator $\langle\varphi_{\mathbf k_1}\varphi_{\mathbf k_2}\varphi_{\mathbf k_3}\rangle'$ and two-point correlator $\langle\varphi_{\mathbf k}\varphi_{-\mathbf k}\rangle'$ mediated by a single massive field $\sigma$ at the tree level. Here $\varphi$ can be either the inflaton, which is a massless scalar, or the graviton, which is a massless spin-2 particle. Our diagrammatic notation follows \cite{Chen:2017ryl}. In particular, the shaded circles represent vertices of either SK branch $\mathsf{a}=\pm$, which is summed over in the final result, and the white squares denote the boundary points at $\tau=0$. }
\label{fd_3pt2pt}
\end{figure}
In this section, we first introduce the objects to be computed in this work, including the three-point and two-point correlators mediated by a single massive field at the tree level. Next, we introduce the two types of seed integrals, the Hankel seed integrals and the Whittaker seed integrals, which are built from the SK integrals in a standard diagrammatic calculation in the bulk. Then, we provide a number of examples showing how to reduce SK integrals to a simple linear combination of seed integrals, including both scalar and spinning exchanges with several types of couplings.
\subsection{Seed integrals: the definition and the result}
\paragraph{Ingredients: propagators and vertices.}
The inflation correlators considered in this work have two types of topologies shown in Fig.\ \ref{fd_3pt2pt}. That is, we consider correlators of the external field $\varphi$ mediated by a single massive field $\sigma$ at the three-point and two-point levels. The external field $\varphi$ can be a massless inflaton, a conformal scalar ($m^2=2$), or even a massless spin-2 graviton, since all these cases share very similar bulk-to-boundary propagators. Therefore, for the clarity of presentation, we will often take $\varphi$ as the massless inflaton field below, with a notable exception when we discuss the example of a massive spinning exchange with nonzero chemical potential.
In the standard SK formalism, the bulk-to-boundary propagator of the inflaton field $\varphi$, represented by all the black external lines in Fig.\ \ref{fd_3pt2pt}, is given by \cite{Chen:2017ryl}:
\begin{equation}
\label{eq_Gbtob}
G_\mathsf{a}(k;\tau)=\FR{1}{2k^3}(1-\mathrm{i}\mathsf{a} k\tau)e^{\mathrm{i}\mathsf{a} k\tau}.
\end{equation}
Here $k$ is the magnitude of the momentum flowing in the propagator. The conformal time $\tau$ and the SK index $\mathsf{a}=\pm$ both refer to the bulk endpoint of the propagator. See \cite{Chen:2017ryl} for more detailed discussions.
On the other hand, the blue internal lines in Fig.\ \ref{fd_3pt2pt} represent the bulk propagator of the massive field $\sigma$. In this work, the massive field is allowed to have either a dS-covariant or a dS boost-breaking dispersion relation. The dS-covariant case is technically simpler thanks to the presence of the full dS isometries, and is thus interesting in its own right. The dS boost-breaking case is more relevant to CC physics, since the rolling of the inflaton background necessarily breaks the dS boosts. More importantly, boost-breaking processes typically lead to larger observable signals, and are thus more interesting for phenomenological model building \cite{Chen:2018xck,Hook:2019zxa,Liu:2019fag,Wang:2019gbi,Wang:2020ioa,Tong:2022cdz,Pimentel:2022fsc,Jazayeri:2022kjy,Qin:2022fbv,Niu:2022fki,Niu:2022quw}.
As in Minkowskian QFT, free massive particles with a dS-invariant dispersion relation are classified by their mass $m$ and spin $s$. The spin can take any nonnegative integer or half-integer value, although we shall only consider integer spin in this work, since only integer-spin fields can contribute to tree-level inflaton correlators. For the mass $m$, as mentioned at the end of the introduction, we shall always use the \emph{mass parameter} $\wt\nu$ in place of the mass $m$. The two are related by $\wt\nu=\sqrt{m^2-9/4}$ for a scalar field and $\wt\nu=\sqrt{m^2-(s-1/2)^2}$ for a field of spin $s\neq 0$. For simplicity, in the calculations performed in this work, we always assume $\wt\nu$ to be positive real. This is the situation most relevant to CC physics. However, our final analytic results are also applicable to lighter particles by analytic continuation, in which case the mass parameter $\wt\nu$ takes purely imaginary values.
It is useful to present some basic results about a massive scalar particle. Given a massive scalar field $\sigma(\tau,\mathbf x)$ with mass parameter $\wt\nu>0$, we can canonically quantize it in the usual way, by writing it as a linear combination of the creation operator $a_{-\mathbf k}^\dag$ and annihilation operator $a_{\mathbf k}$ in the 3-momentum space:
\begin{align}
\sigma(\tau,\mathbf x)=\int\FR{{\mathrm{d}}^3\mathbf k}{(2\pi)^3}\Big[\sigma(k,\tau)a_{\mathbf k}+\sigma^*(k,\tau)a^\dag_{-\mathbf k}\Big]e^{\mathrm{i}\mathbf k\cdot\mathbf x},
\end{align}
where the time-dependent coefficient $\sigma(k,\tau)$ is called the mode function, and is determined by the Klein-Gordon equation in dS together with the Bunch-Davies initial condition. Explicitly:
\begin{equation}
\sigma(k,\tau)=\FR{\sqrt\pi}{2}e^{-\pi\wt\nu/2}(-\tau)^{3/2}\text{H}_{\mathrm{i}\wt\nu}^{(1)}(-k\tau),
\end{equation}
where $\mathrm{H}_{\nu}^{(1)}(z)$ is the Hankel function of the first kind. Using the mode function, one can construct the bulk propagator $D_{\mathsf{a}\mathsf{b}}(k;\tau_1,\tau_2)$ for the $\sigma$ field, where $\mathsf{a},\mathsf{b}=\pm$ are the SK indices and $\tau_{1,2}$ the conformal times at the two bulk endpoints of the propagator. According to the standard procedure in the SK formalism, as reviewed in \cite{Chen:2017ryl}, one first constructs the two Wightman functions $D_>(k;\tau_1,\tau_2)$ and $D_<(k;\tau_1,\tau_2)$. The ``greater'' Wightman function $D_>$ is given, in terms of the mode function $\sigma(k,\tau)$, by:
\begin{equation}
\label{eq_DScalarGreater}
D_>(k;\tau_1,\tau_2)=\sigma(k,\tau_1)\sigma^*(k,\tau_2)=\FR{\pi e^{-\pi\wt\nu}}{4}(\tau_1\tau_2)^{3/2}\mathrm{H}_{\mathrm{i}\wt\nu}^{(1)}(-k\tau_1)\mathrm{H}_{-\mathrm{i}\wt\nu}^{(2)}(-k\tau_2),
\end{equation}
while the ``less'' Wightman function $D_<(k;\tau_1,\tau_2)=D_>^*(k;\tau_1,\tau_2)$. Then, the four SK propagators $D_{\mathsf{a}\mathsf{b}}$ with $\mathsf{a},\mathsf{b}=\pm$ are given as:
\begin{align}
\label{eq_DScalarSame}
&D_{\pm\pm}(k;\tau_1,\tau_2)=D_{\gtrless}(k;\tau_1,\tau_2)\theta(\tau_1-\tau_2)+D_\lessgtr(k;\tau_1,\tau_2)\theta(\tau_2-\tau_1),\\
\label{eq_DScalarOpp}
&D_{\pm\mp}(k;\tau_1,\tau_2)=D_\lessgtr(k;\tau_1,\tau_2).
\end{align}
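As a consistency check on the normalization, the mode function above satisfies the canonical Wronskian condition $\sigma\,\partial_\tau\sigma^*-\sigma^*\,\partial_\tau\sigma=\mathrm{i}\tau^2=\mathrm{i}/a^2(\tau)$ (in $H=1$ units), which follows from the standard Wronskian of the Hankel functions. A minimal numerical sketch of this check (assuming the \texttt{mpmath} library; variable names are ours, not part of the original computation):

```python
import mpmath as mp

mp.mp.dps = 30  # high working precision for the numerical derivative

def sigma(nu, k, tau):
    # massive-scalar mode function in dS (H = 1), valid for tau < 0
    return (mp.sqrt(mp.pi)/2 * mp.e**(-mp.pi*nu/2)
            * (-tau)**mp.mpf('1.5') * mp.hankel1(1j*nu, -k*tau))

nu, k, tau = mp.mpf(2), mp.mpf(1), mp.mpf('-0.7')
f  = lambda t: sigma(nu, k, t)
fp = mp.diff(f, tau)               # numerical d(sigma)/d(tau)
wronskian = f(tau)*mp.conj(fp) - mp.conj(f(tau))*fp
assert abs(wronskian - 1j*tau**2) < mp.mpf('1e-10')  # equals i*tau^2
```

The same check works for any $\wt\nu>0$ and $k>0$, since the relation is an exact identity rather than an approximation.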
On the other hand, it is also important to consider boost-breaking dispersion relations for the massive particle, as mentioned above. In this case, one needs more parameters to characterize the particle. If we break the three dS boost symmetries but retain the dS dilatation, then there are two additional parameters one can include in the quadratic Lagrangian of a massive field. First, one can have a non-unit sound speed $c_s$, which characterizes the relative size of the kinetic energy $\sigma'^2$ and the gradient energy $(\partial_i\sigma)^2$. A non-unit sound speed can be introduced in our computation by replacing every momentum $k$ of the massive field by $c_s k$. The inclusion of a non-unit sound speed has been treated in detail in \cite{Pimentel:2022fsc,Jazayeri:2022kjy}, and we do not consider it further.
Second, when the dS boosts are broken by, say, a rolling inflaton background, the helicity of a massive spinning particle $h\equiv\mathbf s\cdot\mathbf k/|\mathbf k|$ becomes unambiguously defined, since we are no longer allowed to boost particles into different helicity states. Therefore, one can include a helicity-weighted particle number operator in the Hamiltonian, whose coefficient is a helicity-dependent chemical potential $\wt\mu$ \cite{Wang:2019gbi}, which we shall simply call the chemical potential for short.\footnote{The chemical potential $\mu$ is a dimension-1 parameter, and it is usually convenient to define a dimensionless chemical potential $\wt\mu=\mu/H$. Since we take $H=1$ in this work, the two coincide. } Technically, the chemical potential modifies the mode function of a massive spinning particle from the original Hankel-type functions to the Whittaker W functions. For example, a massive spin-1 particle of mass parameter $\wt\nu$ and chemical potential $\wt\mu$ has the following mode function $B^{(h)}(k,\tau)$ for its two transverse polarizations $h=\pm 1$:
\begin{equation}
\label{eq_Bhmode}
B^{(h)}(k,\tau)=\FR{e^{-h\pi\wt\mu/2}}{\sqrt{2k}}\mathrm{W}_{\mathrm{i} h\wt\mu,\mathrm{i}\wt\nu}(2\mathrm{i} k\tau),
\end{equation}
where $\text{W}_{\kappa,\nu}(z)$ is the Whittaker W function. The bulk propagators for particles with nonzero chemical potential can then be constructed from their mode functions in the same way as shown above for a massive scalar. It is then clear that the cases with nonzero chemical potential require a separate treatment. Therefore, we classify the correlators considered in this work into two categories: the Hankel-type correlators, which include all massive fields with dS-covariant dispersion, and the Whittaker-type correlators, which cover massive spinning fields with nonzero chemical potential.
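Indeed, substituting (\ref{eq_Bhmode}) into the Whittaker equation shows that the transverse mode function satisfies $\partial_\tau^2B^{(h)}+\big(k^2-2h\wt\mu k/\tau+m^2/\tau^2\big)B^{(h)}=0$ with $m^2=\wt\nu^2+1/4$ for spin 1, which makes the helicity-dependent chemical-potential term explicit. A minimal numerical sketch of this check (assuming the \texttt{mpmath} library; variable names are ours):

```python
import mpmath as mp

mp.mp.dps = 40  # high precision for the second numerical derivative

nu, mu, h, k = mp.mpf(2), mp.mpf('1.5'), 1, mp.mpf(1)
m2 = nu**2 + mp.mpf('0.25')   # m^2 = nu~^2 + 1/4 for spin 1

def B(tau):
    # transverse mode function with chemical potential, eq. (eq_Bhmode), tau < 0
    return mp.e**(-h*mp.pi*mu/2)/mp.sqrt(2*k) * mp.whitw(1j*h*mu, 1j*nu, 2j*k*tau)

tau = mp.mpf('-0.8')
Bpp = mp.diff(B, tau, 2)      # numerical second derivative in tau
residual = Bpp + (k**2 - 2*h*mu*k/tau + m2/tau**2)*B(tau)
assert abs(residual) < mp.mpf('1e-8')
```

Setting $\wt\mu=0$ removes the $1/\tau$ term and the equation reduces to the usual massive-mode equation, consistent with the Hankel-type case above.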
With all bulk-to-boundary propagators and bulk propagators specified in Fig.\ \ref{fd_3pt2pt}, it remains to include the appropriate couplings for all the internal vertices. In this work, we allow the massive field $\sigma$ to have very general couplings to the external inflaton lines. The couplings can be either nonderivative or derivative, and they can contain an arbitrary number of uncontracted time derivatives or contracted spatial derivatives. Normally, if one imposes scale invariance, the time dependence in the coupling coefficient is fully determined by the number of fields and derivatives. However, we do not have to make the assumption of scale invariance, which is broken at least by the inflaton potential. Indeed, it is even possible to include complicated time dependences in the coupling coefficients, such as a component oscillatory in physical time, $c(\tau)\supset (-\tau)^{p}\cos\omega t\sim (-\tau)^{p\pm\mathrm{i}\omega}$, which may appear in some resonant models.
\paragraph{Seed integrals: definitions and results.} With all ingredients introduced above, we are now in a position to construct correlators and to compute them. The general strategy of this work is that we compute two types of \emph{seed integrals}, one for Hankel-type correlators, and one for Whittaker-type correlators. These seed integrals are essentially the SK integrals for the four-point correlators with $s$-channel exchange of a massive field, shown in Fig.\ \ref{fd_4pt}. As four-point functions with $s$-channel mediation, the seed integrals depend on the magnitudes of the four external momenta, $k_i\equiv |\mathbf k_i|$ $(i=1,2,3,4)$, as well as the magnitude of the $s$-channel momentum $k_s\equiv |\mathbf k_s|$, with $\mathbf k_s\equiv \mathbf k_1+\mathbf k_2$. We also use shorthand notations such as $k_{12}\equiv k_1+k_2$, $k_{34}\equiv k_3+k_4$, and $k_{123}\equiv k_1+k_2+k_3$.
\begin{figure}
\centering
\parbox{0.26\textwidth}{\includegraphics[width=0.26\textwidth]{fd_scalar_tree_4pt}}
$\xrightarrow[(u_2\rightarrow 1)]{\mathbf k_4\rightarrow 0}$
\parbox{0.26\textwidth}{\includegraphics[width=0.26\textwidth]{fd_scalar_tree_3pt}}
$\xrightarrow[(u_1\rightarrow 1)]{\mathbf k_2\rightarrow 0}$
\parbox{0.26\textwidth}{\includegraphics[width=0.26\textwidth]{fd_scalar_tree_2pt}}
\caption{Three-point and two-point correlators as successive folded limits of a four-point correlator.}
\label{fd_4pt}
\end{figure}
We define the seed integrals such that they depend on these momenta only through two independent ratios. A widely used pair of independent momentum ratios are the following:
\begin{align}
&r_1\equiv\FR{k_s}{k_{12}},
&&r_2\equiv\FR{k_s}{k_{34}}.
\end{align}
However, a key point in this work is that it is more advantageous to use the following two variables $u_{1,2}$ instead of $r_{1,2}$, as was observed in \cite{Qin:2022fbv}:
\begin{align}
&u_1\equiv\FR{2k_s}{k_{12}+k_s},
&&u_2\equiv\FR{2k_s}{k_{34}+k_s}.
\end{align}
As we shall see in later sections, it is the use of the $u$-variables that enables us to conveniently take the folded limits of the four-point function, and to get the closed-form analytical formulae for the three-point and two-point functions.
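Explicitly, the two sets of variables are related by
\begin{align}
&u_i=\FR{2r_i}{1+r_i},
&&r_i=\FR{u_i}{2-u_i},\qquad (i=1,2)
\end{align}
so the physical region $r_i\in(0,1]$, dictated by the triangle inequalities $k_{12}\geq k_s$ and $k_{34}\geq k_s$, is mapped onto $u_i\in(0,1]$, with the folded limits $k_{12}\rightarrow k_s$ and $k_{34}\rightarrow k_s$ corresponding to $u_1\rightarrow 1$ and $u_2\rightarrow 1$, respectively.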
Now we are ready to introduce the seed integrals, to which a large class of correlators can be reduced, as will be shown later in this section. As explained above, we introduce two types of seed integrals. One is the \emph{Hankel seed integral} $\wt{\mathcal{I}}_{\mathsf{a}\mathsf{b}}^{p_1p_2}(u_1,u_2)$, which is defined by:
\begin{keyeqn}
\begin{align}
\label{eq_SeedIntH}
\wt{\mathcal{I}}_{\mathsf{a}\mathsf{b}}^{p_1p_2}(u_1,u_2)\equiv (-\mathsf{a}\mathsf{b})k_s^{5+p_{12}}\int_{-\infty}^0{\mathrm{d}}\tau_1{\mathrm{d}}\tau_2(-\tau_1)^{p_1}(-\tau_2)^{p_2}e^{\mathrm{i}\mathsf{a} k_{12}\tau_1+\mathrm{i}\mathsf{b} k_{34}\tau_2}D_{\mathsf{a}\mathsf{b}}(k_s;\tau_1,\tau_2).
\end{align}
\end{keyeqn}
Here $\mathsf{a},\mathsf{b}=\pm1$ are SK indices as required by the diagrammatic rules in the SK formalism, and $(p_1,p_2)$ are a pair of parameters that can take generic \emph{complex} values, except that we require $\text{Re}~p_{1,2}>-5/2$ for the seed integral to be well defined. (Since the bulk propagator behaves as $(-\tau_i)^{3/2\pm\mathrm{i}\wt\nu}$ in the late-time limit $\tau_i\rightarrow 0$, the time integrals converge at late times precisely when $\text{Re}~p_{1,2}>-5/2$.) Furthermore, $D_{\mathsf{a}\mathsf{b}}(k;\tau_1,\tau_2)$ is the bulk SK propagator for a massive scalar field with mass parameter $\wt\nu$, whose explicit form has been given in (\ref{eq_DScalarGreater}), (\ref{eq_DScalarSame}), and (\ref{eq_DScalarOpp}). With these explicit expressions, one can see that the seed integral defined above depends on the various momenta only through two independent ratios, which we choose to be $u_1$ and $u_2$.
The Hankel seed integral will be computed explicitly in Sec.\ \ref{sec_hankel} with one or both of $u_1$ and $u_2$ taken to be 1, so that one can directly read the three-point and two-point correlators from them. The result for the three-point limit $\wt{\mathcal{I}}_{\mathsf{a}\mathsf{b}}^{p_1p_2}(u_1,1)$ is summarized in (\ref{eq_IPPresult}) and (\ref{eq_IPMresult}), and the result for the two-point limit $\wt{\mathcal{I}}_{\mathsf{a}\mathsf{b}}^{p_1p_2}(1,1)$ is given in (\ref{eq_H2ptResultPP}) and (\ref{eq_H2ptResultPM}).
Likewise, we introduce the following \emph{Whittaker seed integral} for the Whittaker-type correlators:
\begin{keyeqn}
\begin{align}
\label{eq_SeedIntW}
\wt{\mathcal{I}}^{(h)p_1p_2}_{\mathsf{a}\mathsf{b}}(u_1,u_2)\equiv (-\mathsf{a}\mathsf{b})k_s^{3+p_{12}}\int_{-\infty}^0{\mathrm{d}}\tau_1{\mathrm{d}}\tau_2(-\tau_1)^{p_1}(-\tau_2)^{p_2}e^{\mathrm{i}\mathsf{a} k_{12}\tau_1+\mathrm{i}\mathsf{b} k_{34}\tau_2}D_{\mathsf{a}\mathsf{b}}^{(h)}(k_s;\tau_1,\tau_2).
\end{align}
\end{keyeqn}
Everything here parallels the Hankel seed integral, except that we now use the bulk SK propagator $D_{\mathsf{a}\mathsf{b}}^{(h)}(k_s;\tau_1,\tau_2)$ for the helicity-$h$ $(h=\pm 1)$ component of a massive spin-1 field, constructed from the mode function (\ref{eq_Bhmode}). Accordingly, we add a superscript $(h)$ to the seed integral to distinguish it from the Hankel seed integral. The computation of the Whittaker seed integral will be given in Sec.\ \ref{sec_whittaker}. The result for the three-point limit $\wt{\mathcal I}_{\mathsf{a}\mathsf{b}}^{(h)p_1p_2}(u,1)$ is given in (\ref{eq_IhPMresult}) and (\ref{eq_IhPPresult}), and the result for the two-point limit $\wt{\mathcal I}_{\mathsf{a}\mathsf{b}}^{(h)p_1p_2}(1,1)$ is given in (\ref{eq_Ih2ptResultPM}) and (\ref{eq_Ih2ptResultPP}).
One notable difference between our definition of the seed integrals and previously defined seed integrals (including that in our earlier work \cite{Qin:2022fbv}) is that the SK indices $\mathsf{a},\mathsf{b}$ are left unsummed. As we shall see later, when writing correlators as linear combinations of seed integrals, the coefficients of the linear combination usually involve additional SK indices, and this fact necessitates knowing the seed integrals on each individual SK branch. In previous works, this complication was usually avoided by acting with appropriate differential operators on a fully summed seed integral. Acting with such differential operators has the effect of raising the powers $p_{1,2}$ by integer units. However, when we want to vary $p_{1,2}$ continuously over their complex planes, the method of acting with differential operators no longer works. Also, for the three-point and two-point functions, there is simply not enough momentum dependence for the differential operators to act on. For example, to raise the power $p_2$ in a four-point seed integral, one needs to apply a differential operator that contains $\partial_{u_2}$. However, for the three-point function, $u_2$ has been set to a constant, so this differential operator is no longer applicable. It is for these two reasons that we choose to keep the SK indices explicit in the definition of the seed integrals. We shall explain this point more clearly with an example below.
\subsection{Examples}
In this subsection, we provide various examples, showing how to reduce the correlators to seed integrals. We choose a range of examples that often appear in models of CC physics. We also provide sufficient details and mid-steps to illustrate the procedure.
\subsubsection{Scalar exchange}
The first group of examples all involve the tree-level exchange of a massive scalar field $\sigma$, but the massive scalar can couple to the external inflaton $\varphi$ in many different ways. The first three examples below consider three special choices of couplings, all arising in CC model building. The fourth example considers the most general coupling one can write down between $\sigma$ and $\varphi$.
\paragraph{Time-derivative coupling.} The presence of a rolling inflaton background breaks the time diffeomorphism, and thus allows us to introduce uncontracted temporal indices when writing the effective Lagrangian for the fluctuating fields. In particular, one can consider couplings between $\sigma$ and the conformal time derivative of the inflaton, $\varphi'\equiv {\mathrm{d}}\varphi/{\mathrm{d}}\tau$. Therefore, for the trilinear and bilinear couplings shown in Fig.\ \ref{fd_3pt2pt}, we can choose:
\begin{align}
&\mathcal{O}_{\varphi\varphi\sigma}= \FR{1}{2}(-\tau)^{-2}(\varphi')^2\sigma,
&&\mathcal{O}_{\varphi\sigma}= (-\tau)^{-3}\varphi'\sigma.
\end{align}
The explicit $\tau$ dependence in the couplings is dictated by scale invariance, and we do not include the constant coupling coefficients, for clarity. It is trivial to put them back in the final expression. The above couplings are perhaps the simplest choice from a purely technical point of view, owing to the fact that the bulk-to-boundary propagator $G_{\mathsf{a}}(k,\tau)$ in (\ref{eq_Gbtob}) simplifies considerably upon taking a conformal time derivative: $G_{\mathsf{a}}'(k,\tau)=\tau e^{+\mathrm{i}\mathsf{a} k\tau}/(2k)$. The SK integral thus takes the following form (see \cite{Chen:2017ryl} for the diagrammatic rules in the SK formalism):
\begin{align}
\langle\varphi_{\mathbf k_1}\varphi_{\mathbf k_2}\varphi_{\mathbf k_3}\rangle_{\sigma_{\mathbf k_3}}'=&-\sum_{\mathsf{a},\mathsf{b}=\pm}\mathsf{a}\mathsf{b}\int\FR{{\mathrm{d}}\tau_1}{(-\tau_1)^2}\FR{{\mathrm{d}}\tau_2}{(-\tau_2)^3} G_\mathsf{a}'(k_1;\tau_1)G_\mathsf{a}'(k_2;\tau_1) G_\mathsf{b}'(k_3;\tau_2) D_{\mathsf{a}\mathsf{b}}(k_3;\tau_1,\tau_2)\nonumber\\
=&~\FR{1}{8 k_1k_2k_3}\sum_{\mathsf{a},\mathsf{b}=\pm}\mathsf{a}\mathsf{b}\int {\mathrm{d}}\tau_1 \FR{{\mathrm{d}}\tau_2}{(-\tau_2)^2} e^{\mathrm{i}\mathsf{a} k_{12}\tau_1+\mathrm{i}\mathsf{b} k_3\tau_2}D_{\mathsf{a}\mathsf{b}}(k_3;\tau_1,\tau_2) .
\end{align}
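The derivative identity $G_{\mathsf{a}}'(k,\tau)=\tau e^{+\mathrm{i}\mathsf{a}k\tau}/(2k)$ used in the second line is elementary to verify by hand; a minimal numerical sketch (plain Python; function names are ours):

```python
import cmath

def G(a, k, tau):
    # bulk-to-boundary propagator of the massless inflaton, eq. (eq_Gbtob)
    return (1 - 1j*a*k*tau) * cmath.exp(1j*a*k*tau) / (2*k**3)

def Gp(a, k, tau):
    # claimed closed form of dG_a/dtau
    return tau * cmath.exp(1j*a*k*tau) / (2*k)

a, k, tau, h = 1, 1.3, -0.9, 1e-6
numeric = (G(a, k, tau + h) - G(a, k, tau - h)) / (2*h)  # central difference
assert abs(numeric - Gp(a, k, tau)) < 1e-8
```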
Here the notation $\langle\varphi_{\mathbf k_1}\varphi_{\mathbf k_2}\varphi_{\mathbf k_3}\rangle_{\sigma_{\mathbf k_3}}'$ means that we have only included the graph with the massive propagator carrying the momentum $\mathbf k_3$. There are two other permutations to be included in the full result $\langle\varphi_{\mathbf k_1}\varphi_{\mathbf k_2}\varphi_{\mathbf k_3}\rangle_\sigma'$. Now, comparing the above expression with the Hankel seed integral defined in (\ref{eq_SeedIntH}), we immediately find:
\begin{align}
\langle\varphi_{\mathbf k_1}\varphi_{\mathbf k_2}\varphi_{\mathbf k_3}\rangle_\sigma'
= \FR{-1}{8k_1k_2k_3^4}\sum_{\mathsf{a},\mathsf{b}=\pm}\wt{\mathcal I}_{\mathsf{a}\mathsf{b}}^{0,-2}(u,1)+(\text{2 perms}),
\end{align}
where $u=2k_3/k_{123}$. In a similar way, one can compute the correction to the two-point function $\delta\langle\varphi_{\mathbf k}\varphi_{-\mathbf k}\rangle_\sigma'$ with two insertions of the bilinear coupling $\mathcal{O}_{\varphi\sigma}$ introduced above:
\begin{align}
\delta\langle\varphi_{\mathbf k}\varphi_{-\mathbf k}\rangle_\sigma'=&-\sum_{\mathsf{a},\mathsf{b}=\pm}\mathsf{a}\mathsf{b}\int\FR{{\mathrm{d}}\tau_1}{(-\tau_1)^3}\FR{{\mathrm{d}}\tau_2}{(-\tau_2)^3} G_\mathsf{a}'(k;\tau_1)G_\mathsf{b}'(k;\tau_2) D_{\mathsf{a}\mathsf{b}}(k;\tau_1,\tau_2)\nonumber\\
=&~\FR{1}{4k^3}\sum_{\mathsf{a},\mathsf{b}=\pm}\wt{\mathcal I}_{\mathsf{a}\mathsf{b}}^{-2,-2}(1,1).
\end{align}
In this simple example, we have only one seed integral with its SK indices directly summed, thanks to the simple form of the bulk-to-boundary propagator with one time derivative. We will see more complicated combinations in the following examples where the combination coefficients themselves contain SK indices.
\paragraph{dS-invariant coupling.}
Our second example is a slight modification of the first example. Instead of the time-derivative form for the trilinear coupling, we make the following dS-invariant choice:
\begin{align}
&\mathcal{O}_{\varphi\varphi\sigma}= \FR{1}{2}(-\tau)^{-2}\eta^{\mu\nu}(\partial_\mu\varphi)(\partial_\nu\varphi)\sigma,
&&\mathcal{O}_{\varphi\sigma}= (-\tau)^{-3}\varphi'\sigma.
\end{align}
At the same time, we still keep the time derivative coupling for the bilinear vertex. We note in passing that dS invariant couplings for the bilinear vertex, such as $\partial_\mu\sigma\partial^\mu\varphi$ and $\sigma\varphi$, are in a sense trivial, since such couplings can always be rotated away by a simple linear field redefinition. Consequently, dS invariant two-point mixings normally do not lead to physically interesting effects in CC physics.
It is straightforward to find the following SK integral for the three-point correlator constructed with the above two couplings:
\begin{align}
\langle\varphi_{\mathbf k_1}\varphi_{\mathbf k_2}\varphi_{\mathbf k_3}\rangle_{\sigma_{\mathbf k_3}}'
=&~\sum_{\mathsf{a},\mathsf{b}=\pm}\mathsf{a}\mathsf{b}\int\FR{{\mathrm{d}}\tau_1}{(-\tau_1)^2}\FR{{\mathrm{d}}\tau_2}{(-\tau_2)^3} G_\mathsf{b}'(k_3;\tau_2) D_{\mathsf{a}\mathsf{b}}(k_3;\tau_1,\tau_2)\nonumber\\
&~\times \Big[G_\mathsf{a}'(k_1;\tau_1)G_\mathsf{a}'(k_2;\tau_1)+\mathbf k_1\cdot\mathbf k_2G_\mathsf{a}(k_1;\tau_1)G_\mathsf{a}(k_2;\tau_1)\Big].
\end{align}
Using the explicit expression for the bulk-to-boundary propagator $G_\mathsf{a}(k;\tau)$ in (\ref{eq_Gbtob}) and then comparing the result with the definition of the seed integral (\ref{eq_SeedIntH}), we have:
\begin{align}
\label{eq_3ptLorCovCoup}
\langle\varphi_{\mathbf k_1}\varphi_{\mathbf k_2}\varphi_{\mathbf k_3}\rangle_\sigma'
=&~\FR{1}{16(k_1k_2k_3)^2}\sum_{\mathsf{a},\mathsf{b}=\pm} \Big[(\varrho_{12}^2-1)\wt{\mathcal I}_{\mathsf{a}\mathsf{b}}^{0,-2}(u,1)+\mathrm{i}\mathsf{a} \FR{\varrho_{12}(1-\varrho_1^2-\varrho_2^2)}{\varrho_1\varrho_2}\wt{\mathcal I}_{\mathsf{a}\mathsf{b}}^{-1,-2}(u,1)\nonumber\\
&~+\FR{1-\varrho_1^2-\varrho_2^2}{\varrho_1\varrho_2}\wt{\mathcal I}_{\mathsf{a}\mathsf{b}}^{-2,-2}(u,1)\Big]+(\text{2 perms}),
\end{align}
where $u=2k_3/k_{123}$ as before, and we have defined $\varrho_1\equiv k_1/k_3$, $\varrho_2\equiv k_2/k_3$, $\varrho_{12}\equiv k_{12}/k_3$ for simplicity. Also, we have used momentum conservation to write $\mathbf k_1\cdot\mathbf k_2=\frac{1}{2}(k_3^2-k_1^2-k_2^2)$. As mentioned before, the coefficients in front of some seed integrals depend explicitly on the SK indices. In this particular example, one can get rid of the SK-index-dependent coefficients by acting with appropriate differential operators. One can directly check that the following equation holds:
\begin{align}
\langle\varphi_{\mathbf k_1}\varphi_{\mathbf k_2}\varphi_{\mathbf k_3}\rangle_\sigma'
=&~\FR{1}{8(k_1k_2k_3)^2}\Big[-k_1k_2\partial_{k_{12}}^2+(\wh{\mathbf k}_1\cdot\wh{\mathbf k}_2)(1-k_1\partial_{k_{12}})(1-k_2\partial_{k_{12}})\Big]\sum_{\mathsf{a},\mathsf{b}=\pm} \wt{\mathcal I}_{\mathsf{a}\mathsf{b}}^{-2,-2}(u,1)\nonumber\\
&~+(\text{2 perms}).
\end{align}
When applying the differential operator $\partial_{k_{12}}$, the $u$ parameter in $\wt{\mathcal I}_{\mathsf{a}\mathsf{b}}^{-2,-2}(u,1)$ should be understood as $u=2k_3/(k_{12}+k_3)$. Thus we see that it is sometimes possible to avoid SK-index-dependent coefficients by applying appropriate differential operators to the fully summed seed integrals. But this is not always possible, as we shall see in a later example with the most general couplings.
\paragraph{Oscillating coupling.}
Now we consider an example with explicitly time-dependent couplings. In particular, we consider a trilinear coupling that is oscillatory in \emph{physical} time $t$:
\begin{align}
&\mathcal{O}_{\varphi\varphi\sigma}= \FR{1}{2}(-\tau)^{-2}\big[1+\lambda(-\tau)^p\cos(\omega t+\delta)\big](\varphi')^2\sigma,
&&\mathcal{O}_{\varphi\sigma}= (-\tau)^{-3}\varphi'\sigma.
\end{align}
Here $\lambda$ is a small dimensionless parameter, $\delta$ is a phase, and the real exponent $p$ describes how the oscillation amplitude changes with time. Such a trilinear coupling can appear when there is a background field oscillating with a physical frequency $\omega$.
Now let us focus on the three-point correlator contributed by the oscillatory term in $\mathcal{O}_{\varphi\varphi\sigma}$, namely, the term proportional to $\lambda$. With the same procedure, we write down the SK integral and compare it with the Hankel seed integral, which shows that the $\order{\lambda}$ part of the correlator can be written in the following way:
\begin{align}
\langle\varphi_{\mathbf k_1}\varphi_{\mathbf k_2}\varphi_{\mathbf k_3}\rangle_\lambda' =&~\FR{-\lambda}{16k_1k_2k_3^4}\sum_{\mathsf{a},\mathsf{b}=\pm}\Big[e^{-\mathrm{i}\delta}\wt{\mathcal I}_{\mathsf{a}\mathsf{b}}^{\,p+\mathrm{i}\omega,-2}(u,1)+e^{+\mathrm{i}\delta}\wt{\mathcal I}_{\mathsf{a}\mathsf{b}}^{\,p-\mathrm{i}\omega,-2}(u,1)\Big]+(\text{2 perms}).
\end{align}
Here we have used the fact that the physical time is related to the conformal time via $\tau=-e^{-t}$.
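More explicitly, since $t=-\ln(-\tau)$, the oscillatory factor decomposes into a pair of complex powers of the conformal time,
\begin{equation}
(-\tau)^p\cos(\omega t+\delta)=\FR{1}{2}\Big[e^{+\mathrm{i}\delta}(-\tau)^{p-\mathrm{i}\omega}+e^{-\mathrm{i}\delta}(-\tau)^{p+\mathrm{i}\omega}\Big],
\end{equation}
which is why the two seed integrals above carry the complex powers $p\pm\mathrm{i}\omega$, each weighted by the corresponding phase factor, together with an extra overall factor of $1/2$.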
\paragraph{General couplings.}
Finally, we consider the most general coupling between the massive scalar field $\sigma$ and the external inflaton mode $\varphi$ that respects spatial translations and rotations. Using integration by parts, we can move all the derivatives acting on $\sigma$ to $\varphi$. After doing this, the most general trilinear and bilinear couplings take the following form:
\begin{align}
\mathcal{O}_{\varphi\varphi\sigma}
=&~\FR{1}{2}(-\tau)^R \Big[\partial_{\tau}^{J} \partial_{i_1}\cdots\partial_{i_{M}}(\partial_j\partial^j)^{N}\varphi\Big] \Big[\partial_{\tau}^{K}\partial^{i_1}\cdots\partial^{i_{M}}(\partial_k\partial^k)^{L}\varphi\Big] \sigma,\nonumber\\
\mathcal{O}_{\varphi\sigma}
=&~ (-\tau)^S \Big[\partial_{\tau}^{P}(\partial_i\partial^i)^{Q}\varphi\Big] \sigma.
\end{align}
Here $R$ and $S$ can be any complex numbers with $\text{Re}\,R,\,\text{Re}\,S>-5/2$, while $J,K,M,N,L,P,Q$ can take any nonnegative integer values. If we further assume scale symmetry, then the powers of time $R$ and $S$ are fixed to be $R=-4+J+K+2(M+N+L)$ and $S=-4+P+2Q$, but we do not have to make this assumption. Then, applying the diagrammatic rules in the Schwinger-Keldysh formalism, one can write down the SK integral for the three-point correlator:
\begin{align}
&\langle\varphi_{\mathbf k_1}\varphi_{\mathbf k_2}\varphi_{\mathbf k_3}\rangle_{\sigma_{\mathbf k_3}}'
=-\FR{1}{2}(-\mathbf k_1\cdot\mathbf k_2)^M (-k_1^2)^{N}(-k_2^2)^{L}(-k_3^2)^Q\sum_{\mathsf{a},\mathsf{b}=\pm}\mathsf{a}\mathsf{b}\int {\mathrm{d}}\tau_1{\mathrm{d}}\tau_2(-\tau_1)^R(-\tau_2)^S\nonumber\\
&~\times\Big[\partial_{\tau_1}^{J}G_\mathsf{a}(k_1;\tau_1)\Big]\Big[\partial_{\tau_1}^{K}G_\mathsf{a}(k_2;\tau_1)\Big]\Big[\partial_{\tau_2}^{P}G_\mathsf{b}(k_3;\tau_2)\Big]D_{\mathsf{a}\mathsf{b}}(k_3;\tau_1,\tau_2)+(\mathbf k_1\leftrightarrow \mathbf k_2).
\end{align}
To compare this with the Hankel seed integral (\ref{eq_SeedIntH}), it is useful to note the following expression for the bulk-to-boundary propagator with $J$ time derivatives:
\begin{equation}
\partial_{\tau}^{J}G_\mathsf{a}(k;\tau)=\FR{1}{2k^3}(\mathrm{i}\mathsf{a} k)^J(1-J-\mathrm{i}\mathsf{a} k\tau)e^{+\mathrm{i} \mathsf{a} k\tau}.
\end{equation}
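As a quick consistency check, this closed form can be verified symbolically. The sketch below (in Python with SymPy) assumes the standard massless bulk-to-boundary propagator $G_\mathsf{a}(k;\tau)=(1-\mathrm{i}\mathsf{a} k\tau)e^{\mathrm{i}\mathsf{a} k\tau}/(2k^3)$, i.e., the $J=0$ case of the formula above:

```python
import sympy as sp

tau, k = sp.symbols('tau k')

for a in (1, -1):  # SK index a = +/- 1
    # bulk-to-boundary propagator G_a(k; tau), the J = 0 case of the formula
    G = (1 - sp.I*a*k*tau)*sp.exp(sp.I*a*k*tau)/(2*k**3)
    for J in range(6):
        # claimed closed form for the J-th time derivative
        claimed = (sp.I*a*k)**J*(1 - J - sp.I*a*k*tau)*sp.exp(sp.I*a*k*tau)/(2*k**3)
        assert sp.simplify(sp.diff(G, tau, J) - claimed) == 0
```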
Then it is straightforward to find the following expression:
\begin{align}
&~\langle\varphi_{\mathbf k_1}\varphi_{\mathbf k_2}\varphi_{\mathbf k_3}\rangle_{\sigma}'
=\FR{(-1)^{M+N+L+Q}(\mathbf k_1\cdot\mathbf k_2)^M k_1^{2N+J-1} k_2^{2L+K-1}k_3^{2Q+P-R-S-6}}{16k_1^2k_2^2k_3^2}\sum_{\mathsf{a},\mathsf{b}=\pm}(\mathrm{i}\mathsf{a})^J(\mathrm{i}\mathsf{a})^K(\mathrm{i}\mathsf{b})^P \nonumber\\
&~\times\bigg[(1-J)(1-K)(1-P)\wt{\mathcal I}_{\mathsf{a}\mathsf{b}}^{R,S}(u,1)-\mathrm{i}\mathsf{a}\big((1-K)\varrho_1+ (1-J)\varrho_2\big)(1-P)\wt{\mathcal I}_{\mathsf{a}\mathsf{b}}^{R+1,S}(u,1)\nonumber\\
&~-(1-P)\varrho_1\varrho_2\wt{\mathcal I}_{\mathsf{a}\mathsf{b}}^{R+2,S}(u,1)-\mathrm{i}\mathsf{b} (1-J)(1-K)\wt{\mathcal I}_{\mathsf{a}\mathsf{b}}^{R,S+1}(u,1)\nonumber\\
&~-\mathsf{a}\mathsf{b}\big((1-K)\varrho_1+(1-J)\varrho_2\big)\wt{\mathcal I}_{\mathsf{a}\mathsf{b}}^{R+1,S+1}(u,1)-\mathrm{i}\mathsf{b} \varrho_1\varrho_2\wt{\mathcal I}_{\mathsf{a}\mathsf{b}}^{R+2,S+1}(u,1)\bigg]+(\text{5 perms}).
\end{align}
Again, this expression contains coefficients that depend on the SK indices. In this example, it is no longer possible to eliminate this dependence by acting with differential operators on a summed seed integral, in particular because the $u_2$ variable has been set to 1. Therefore, it seems most convenient to leave the SK indices unsummed in the definition of the seed integral, in order to accommodate the most general couplings presented here.
\subsubsection{Spinning exchange}
Next we consider the exchange of massive particles with nonzero spin $s$. The angular momentum conservation at each of the two-point mixing vertices in Fig.\ \ref{fd_3pt2pt} implies that only the longitudinal component of the spinning field can mix with the external scalar mode, and that only the helicity-2 component of the spinning field can mix with the external graviton. Therefore, in the following examples, we will only consider longitudinal polarizations for the three-point and two-point scalar correlators with spinning exchange. In the last example, we shall also consider an example of graviton correlators, in which we will have a helicity-2 exchange, possibly with a nonzero chemical potential.
The main purpose of the following examples is to show how to reduce the SK integrals with spinning fields into seed integrals defined with spin-0 or spin-1 massive propagators. Therefore, we will no longer consider the most general couplings. Instead, we will focus on issues such as how to relate the longitudinal mode function of a spinning particle to the mode function of a massive scalar field. We shall begin with the simplest example, namely the spin-1 case, and then consider the general spin-$s$ field.
\paragraph{Spin-1.}
The three-point function with longitudinal spin-1 exchange has been extensively studied in \cite{Qin:2022fbv}, and here we show the main results together with some important intermediate steps for illustration. A massive spin-1 particle can be obtained after canonically quantizing a vector field $A_\mu$ with mass parameter $\wt\nu$. For this field, we choose the following simple couplings to the external inflaton modes:
\begin{align}
&\mathcal{O}_{\varphi\varphi A}=\FR{1}{2}(-\tau)^{-1}\varphi'(\partial_i\varphi)A_i,
&&\mathcal{O}_{\varphi A}=(-\tau)^{-1}(\partial_i\varphi')A_i.
\end{align}
Then, the SK integral for the three-point function mediated by the longitudinal mode of $A$ reads:
\begin{align}
\label{eq_SKintA1}
\langle\varphi_{\mathbf k_1}\varphi_{\mathbf k_2}\varphi_{\mathbf k_3}\rangle_{A}'
=&~\bigg\{ \FR{1}{2}\Big[\mathbf k_2\cdot\bm{\epsilon}^{(L)}(\wh k_3)\Big]\Big[\mathbf k_3\cdot\bm{\epsilon}^{(L)*}(\wh k_3)\Big]\sum_{\mathsf{a},\mathsf{b}=\pm}\mathsf{a}\mathsf{b}\int\FR{{\mathrm{d}}\tau_1}{-\tau_1}\FR{{\mathrm{d}}\tau_2}{-\tau_2}\nonumber\\
&~\times G_{\mathsf{a}}'(k_1,\tau_1)G_{\mathsf{a}}(k_2,\tau_1)G_{\mathsf{b}}'(k_3,\tau_2) D^{(L)}_{\mathsf{a}\mathsf{b}}(k;\tau_1,\tau_2)\bigg\} +(\text{5 perms}).
\end{align}
Here we have introduced the longitudinal polarization vector $\bm{\epsilon}^{(L)}(\wh k)=\wh{\mathbf k}\equiv \mathbf{k}/k$, as well as the longitudinal bulk propagator $D_{\mathsf{a}\mathsf{b}}^{(L)}(k;\tau_1,\tau_2)$. This bulk propagator is built from the longitudinal mode function $B^{(L)}(k,\tau)$ in (\ref{eq_Bhmode}) following the standard procedure. Now, by a careful examination of the equation of motion of $A_\mu$ together with the constraint $\nabla^\mu A_\mu=0$, one can show that the longitudinal mode function $B^{(L)}(k,\tau)$ can be built from a massive scalar mode function of Hankel type \cite{Lee:2016vti,Qin:2022fbv}. Here we directly quote the result. It is convenient to define the following differential operator $\mathcal{L}_\tau$:
\begin{align}
\mathcal{L}_\tau= \partial_\tau-\FR{2}{\tau} .
\end{align}
Then, one can show that the longitudinal component of $A_i$ is related to the temporal component $B^{(L)}_0$ of $A_0$ via the action of the $\mathcal{L}_\tau$ operator. In addition, the mode function for the temporal component $B^{(L)}_0$ is identical to that of the massive scalar $\sigma(k,\tau)$ up to a constant factor:
\begin{align}
&B^{(L)}(k,\tau)=\FR{1}{\mathrm{i} k}\mathcal{L}_\tau B_0^{(L)}(k,\tau),\\
&B^{(L)}_0(k,\tau)=\FR{\sqrt\pi k}{2m}e^{-\pi\wt\nu/2}(-\tau)^{3/2}\mathrm{H}^{(1)}_{\mathrm{i}\wt\nu}(-k\tau)=\FR{k}{m}\sigma(k,\tau).
\end{align}
From the above relations, we can immediately find a relation between the longitudinal propagator $D_{\mathsf{a}\mathsf{b}}^{(L)}(k;\tau_1,\tau_2)$ and the massive scalar propagator $D_{\mathsf{a}\mathsf{b}}(k;\tau_1,\tau_2)$:
\begin{align}
\label{eq_DLtoD}
D_{\mathsf{a}\mathsf{b}}^{(L)}(k;\tau_1,\tau_2)=\FR{1}{m^2}\Big[\mathcal{L}_{\tau_1}\mathcal{L}_{\tau_2}D_{\mathsf{a}\mathsf{b}}(k;\tau_1,\tau_2)-\mathrm{i}\mathsf{a}\tau_1\tau_2\delta_{\mathsf{a}\mathsf{b}} \delta(\tau_1-\tau_2)\Big].
\end{align}
There is a notable contact term proportional to $\delta_{\mathsf{a}\mathsf{b}}\delta(\tau_1-\tau_2)$ in this expression. It is nonzero only for the two same-sign propagators $D_{\pm\pm}^{(L)}$. This term arises when we commute the $\mathcal{L}_\tau$ operator, originally acting on the mode function, with the step functions in $D_{\pm\pm}^{(L)}$; see (\ref{eq_DScalarSame}).
Now we substitute (\ref{eq_DLtoD}) in (\ref{eq_SKintA1}), which then leads to the following SK integral involving the massive scalar propagator $D_{\mathsf{a}\mathsf{b}}(k;\tau_1,\tau_2)$:
\begin{align}
\langle\varphi_{\mathbf k_1}\varphi_{\mathbf k_2}\varphi_{\mathbf k_3}\rangle_{A}'
=&~\FR{\mathbf k_2\cdot\mathbf k_3}{2m^2}\bigg\{\sum_{\mathsf{a},\mathsf{b}=\pm}\mathsf{a}\mathsf{b}\int\FR{{\mathrm{d}}\tau_1}{-\tau_1}\FR{{\mathrm{d}}\tau_2}{-\tau_2}G_{\mathsf{a}}'(k_1;\tau_1)G_{\mathsf{a}}(k_2;\tau_1)G_{\mathsf{b}}'(k_3;\tau_2)\mathcal{L}_{\tau_1}\mathcal{L}_{\tau_2}D_{\mathsf{a}\mathsf{b}}(k;\tau_1,\tau_2)\nonumber\\
&-\sum_{\mathsf{a}=\pm}\mathrm{i} \mathsf{a}\int {\mathrm{d}}\tau_1\,G_{\mathsf{a}}'(k_1;\tau_1)G_{\mathsf{a}}(k_2;\tau_1)G_{\mathsf{a}}'(k_3;\tau_1)\bigg\}+(\text{5 perms}).
\end{align}
The first term in the second line comes from the contact term in (\ref{eq_DLtoD}). Then, we perform integration by parts to move $\mathcal{L}_{\tau_1}$ and $\mathcal{L}_{\tau_2}$ to the external modes. There are no boundary terms, since the integrand vanishes both at $\tau=-\infty$ (by the $\mathrm{i}\epsilon$ prescription enforced by the Bunch-Davies initial condition) and at $\tau=0$ (by explicitly counting the powers of $\tau_{1,2}$ in the integrand). After this is done, we are ready to write the above SK integral as a linear combination of Hankel seed integrals:
\begin{align}
\langle\varphi_{\mathbf k_1}\varphi_{\mathbf k_2}\varphi_{\mathbf k_3}\rangle_{A}'
=&-\FR{1}{m^2}\FR{\wh{\mathbf k}_2\cdot\wh{\mathbf k}_3}{16(k_1k_2k_3)^2}\bigg\{\sum_{\mathsf{a},\mathsf{b}=\pm} \bigg[4\varrho_1\wt{\mathcal I}_{\mathsf{a}\mathsf{b}}^{-1,-1}(u,1)-2\mathrm{i}\mathsf{b}\varrho_1\wt{\mathcal I}_{\mathsf{a}\mathsf{b}}^{-1,0}(u,1)\nonumber\\
&-2\mathrm{i}\mathsf{a}\varrho_1(\varrho_1-2\varrho_2)\wt{\mathcal I}_{\mathsf{a}\mathsf{b}}^{0,-1}(u,1)-\mathsf{a}\mathsf{b}\varrho_1(\varrho_1-2\varrho_2)\wt{\mathcal I}_{\mathsf{a}\mathsf{b}}^{0,0}(u,1)+2\varrho_1\varrho_2\varrho_{12}\wt{\mathcal I}_{\mathsf{a}\mathsf{b}}^{1,-1}(u,1)\nonumber\\
&-\mathrm{i}\mathsf{b} \varrho_1\varrho_2\varrho_{12}\wt{\mathcal I}_{\mathsf{a}\mathsf{b}}^{1,0}(u,1)\bigg]+\FR{4\varrho_1\varrho_3^2(\varrho_{123}+3\varrho_2)}{\varrho_{123}^4}\bigg\}+(\text{5 perms}).
\end{align}
Here again $u=2k_3/k_{123}$, and we define the same momentum ratios $\varrho_i$ as in (\ref{eq_3ptLorCovCoup}). Therefore, we see that the computation of three-point correlators with spin-1 exchange can also be reduced to that of the Hankel seed integral.
\paragraph{General integer spin.}
The procedure described above for the spin-1 field can be readily applied to fields with general integer spin $s$, although the algebra can be quite tedious. A spin-$s$ particle can be described by a transverse, traceless, and totally symmetric rank-$s$ tensor $\Psi_{\mu_1\cdots\mu_s}$.
There are again many possible choices for the trilinear and bilinear vertices in Fig.\ \ref{fd_3pt2pt}. We do not pursue the most general possibilities, but only illustrate the method with the following simple example:
\begin{align}
&\mathcal{O}_{\varphi\varphi\Psi}=(-\tau)^{-3+2s}\varphi'(\partial_{i_1}\cdots \partial_{i_s}\varphi)\Psi_{i_1\cdots i_s},
&&\mathcal{O}_{\varphi\Psi}=(-\tau)^{-3+2s}(\partial_{i_1}\cdots\partial_{i_s}\varphi')\Psi_{i_1\cdots i_s},
\end{align}
where the explicit time dependence is again fixed by the scale symmetry. With these couplings, the SK integral for the three-point correlator with $\Psi$-exchange reads:
\begin{align}
\langle\varphi_{\mathbf k_1}\varphi_{\mathbf k_2}\varphi_{\mathbf k_3}\rangle_{\Psi}'
=&~\bigg\{\FR{(-1)^{s+1}}{2}\Big[k_2^{i_1}\cdots k_2^{i_s}{\epsilon}_{i_1\cdots i_s}^{(L)}({\wh k}_3)\Big]\Big[ k_3^{j_1}\cdots k_3^{j_s}{\epsilon}_{j_1\cdots j_s}^{(L)*}({\wh k}_3)\Big]\sum_{\mathsf{a},\mathsf{b}=\pm}\mathsf{a}\mathsf{b}\int\FR{{\mathrm{d}}\tau_1}{(-\tau_1)^{3-2s}}\FR{{\mathrm{d}}\tau_2}{(-\tau_2)^{3-2s}}\nonumber\\
&~\times G_{\mathsf{a}}'(k_1,\tau_1)G_{\mathsf{a}}(k_2,\tau_1)G_{\mathsf{b}}'(k_3,\tau_2) D^{(s,L)}_{\mathsf{a}\mathsf{b}}(k;\tau_1,\tau_2)\bigg\} +(\text{5 perms}).
\end{align}
Here $\epsilon_{i_1\cdots i_s}^{(L)}(\wh{k})$ is the longitudinal polarization tensor. Up to a normalization factor, $\epsilon_{i_1\cdots i_s}^{(L)}(\wh{k})$ is the unique combination of $\wh{k}_i$ and $\delta_{ij}$ that is traceless and totally symmetric in all indices. Therefore, in the above expression, the factor $k_3^{j_1}\cdots k_3^{j_s}{\epsilon}_{j_1\cdots j_s}^{(L)*}({\wh k}_3)$ contributes a constant factor, while $k_2^{i_1}\cdots k_2^{i_s}{\epsilon}_{i_1\cdots i_s}^{(L)}({\wh k}_3)$ contributes an angle-dependent factor $\propto \mathrm{P}_{s}(\cos\theta_{23})$, where $\mathrm{P}_n(z)$ is the Legendre polynomial and $\theta_{23}$ is the angle between $\mathbf k_2$ and $\mathbf k_3$. The factor $\mathrm{P}_s(\cos\theta_{23})$ is well known in CC physics as the signal of a spin-$s$ exchange. However, let us emphasize that this factor also depends on the form of the couplings: had we coupled the spin-$s$ field differently to the external modes, this factor would change accordingly.
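For instance, for $s=2$ this can be checked numerically. The sketch below takes the normalization $\epsilon^{(L)}_{ij}(\wh k)=\wh k_i\wh k_j-\delta_{ij}/3$ (an illustrative choice; the overall constant is convention dependent), and verifies that the contraction with $k_2^ik_2^j$ is proportional to $\mathrm{P}_2(\cos\theta_{23})$:

```python
import numpy as np

rng = np.random.default_rng(0)
k2 = rng.normal(size=3)                   # random external momentum k_2
k3hat = rng.normal(size=3)
k3hat /= np.linalg.norm(k3hat)            # unit vector along k_3

# s = 2 longitudinal polarization: traceless symmetric tensor from k3hat and delta_ij
eps_L = np.outer(k3hat, k3hat) - np.eye(3)/3.0

lhs = k2 @ eps_L @ k2
cos23 = (k2 @ k3hat)/np.linalg.norm(k2)
P2 = 0.5*(3*cos23**2 - 1)                 # Legendre polynomial P_2
# contraction equals (2/3) |k_2|^2 P_2(cos(theta_23))
assert np.isclose(lhs, (2.0/3.0)*np.linalg.norm(k2)**2*P2)
```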
The propagator $D_{\mathsf{a}\mathsf{b}}^{(s,L)}$ corresponding to the longitudinal polarization tensor $\epsilon_{i_1\cdots i_s}^{(L)}$ with $s$ spatial indices is again built from the mode function for the helicity-0 component of the spin-$s$ field with all spatial indices, $\Psi_{i_1\cdots i_s}$. Similar to the previous spin-1 case, the mode function $\Psi^{(s,L)}_s(k,\tau)$ can be constructed from the scalar mode function $\sigma(k,\tau)$ with the same mass parameter $\wt\nu$ by the action of a differential operator. For spin $s$, this differential operator is defined recursively. Here we quote the result; the details can be found in, e.g., App.\ A of \cite{Lee:2016vti}. First, similar to the spin-1 case, the temporal mode function $\Psi^{(s,L)}_0(k,\tau)$ is identical to the scalar mode function up to a constant factor:
\begin{align}
\Psi^{(s,L)}_0(k,\tau)= \FR{\pi^{3/4}\text{sech}^{1/2}(\pi\wt\nu)}{2^{s/2}}\Gamma^{1/2}\begin{bmatrix} 1+s \\ \fr12+s,\fr12+s+\mathrm{i}\wt\nu,\fr12+s-\mathrm{i}\wt\nu\end{bmatrix} k^s\sigma(k,\tau),
\end{align}
where we have used the compact notation for the product of Euler $\Gamma$ functions defined in (\ref{eq_GammaProd2}). Then, the longitudinal mode function $\Psi^{(s,L)}_s(k,\tau)$, corresponding to the longitudinal polarization with $s$ spatial indices, is constructed recursively from the following relation:
\begin{align}
\Psi^{(s,L)}_{n}(k,\tau)=\FR{1}{\mathrm{i} k}\mathcal{L}_\tau \Psi^{(s,L)}_{n-1}(k,\tau)-\sqrt\pi\sum_{m=0}^{n-1}\Gamma\begin{bmatrix} 1+n,\fr{1+m+n}2 \\ 1+m,1-m+n,\fr12+n,\fr{1+m-n}2\end{bmatrix} \Psi^{(s,L)}_{m}(k,\tau).
\end{align}
For example, the all-spatial longitudinal mode functions, $\Psi_2^{(2,L)}(k,\tau)$ for a spin-2 particle and $\Psi_3^{(3,L)}(k,\tau)$ for a spin-3 particle, are respectively given by:
\begin{align}
\Psi_2^{(2,L)}(k,\tau)
=&~\sqrt{\FR{2}{3(\wt\nu^2+\fr14)(\wt\nu^2+\fr94)}}\bigg(\mathcal{L}_\tau^2-\FR{k^2}{3}\bigg)\sigma(k,\tau),\\
\Psi_3^{(3,L)}(k,\tau)
=&~\sqrt{\FR{2}{5(\wt\nu^2+\fr14)(\wt\nu^2+\fr94)(\wt\nu^2+\fr{25}4)}}\bigg(\mathcal{L}_\tau^3-\FR{14 k^2}{15}\mathcal{L}_\tau\bigg)\sigma(k,\tau),
\end{align}
where we have dropped unimportant overall phases. Then, in parallel with the previous \mbox{spin-1} example, we can move the differential operators acting on $\sigma(k,\tau)$ to the external modes by integration by parts, and thereby reduce the correlator with spin-$s$ exchange to a linear combination of Hankel seed integrals.
\paragraph{Tensor and mixed correlators.} In all examples considered above, the correlators are reduced to linear combinations of Hankel seed integrals. The Whittaker seed integral has not been used so far. The reason is that the Whittaker mode function appears only for spinning fields with helicity-dependent chemical potentials. Such chemical potentials usually vanish for the longitudinal polarization, and therefore they do not play a role in the scalar two-point and three-point correlators at the tree level. (However, the chemical-potential-enhanced mode does play a role in the scalar three-point function at the one-loop level, and in the scalar four-point function at the tree level.) To see the effect of the chemical potential in tree-level three-point functions, we should consider spinning external states, among which graviton correlators are most relevant to realistic CC physics. Indeed, it was shown in \cite{Tong:2022cdz} that a helical chemical potential can be introduced for a massive spin-2 particle $\Sigma$. When $\Sigma$ linearly mixes with the massless graviton mode $\gamma_{\mathbf k}^{(\pm 2)}$, it can mediate a mixed three-point function $\langle\varphi_{\mathbf k_1}\varphi_{\mathbf k_2}\gamma_{\mathbf k_3}^{(\pm 2)}\rangle'$, described again by the left diagram of Fig.\ \ref{fd_3pt2pt}, but with the right external mode replaced by $\gamma_{\mathbf k_3}$. There is also a two-point function of graviton modes $\langle\gamma_{\mathbf k}^{(\pm 2)}\gamma_{-\mathbf k}^{(\pm 2)}\rangle'$ receiving a contribution from the massive spin-2 mode, similar to the right diagram of Fig.\ \ref{fd_3pt2pt}. The corresponding couplings can be chosen as:
\begin{align}
&\mathcal{O}_{\varphi\varphi\Sigma}=\FR{1}{2}(\partial_i\varphi)(\partial_j\varphi)\Sigma_{ij},
&&\mathcal{O}_{\gamma\Sigma}=(-\tau)^{-1}\gamma_{ij}'\Sigma_{ij}.
\end{align}
To write down the SK integral, we also need the bulk-to-boundary propagator $T_{\mathsf{a}}(k;\tau)$ for the external graviton, as well as the mode function $\Sigma^{(\pm 2)}(k,\tau)$ for the helicity-2 component of the massive spin-2 field $\Sigma$. They are respectively given by:
\begin{align}
&T_\mathsf{a}(k,\tau)=\FR{1}{4k^3}(1-\mathrm{i}\mathsf{a} k\tau)e^{\mathrm{i}\mathsf{a} k\tau},\\
&\Sigma^{(\pm 2)}(k,\tau) = -\FR{e^{\mp \pi\wt\mu/2}}{2\sqrt{k}\tau}\mathrm{W}_{\pm \mathrm{i}\wt\mu,\mathrm{i}\wt\nu}(2\mathrm{i} k\tau)=-\FR{1}{\sqrt2 \tau}B^{(\pm1)}(k,\tau),
\end{align}
where $B^{(\pm1)}(k,\tau)$ is the mode function for the massive spin-1 field with helicity $h=\pm1$, given in (\ref{eq_Bhmode}). From this one can build the SK integral, and then rewrite it as a linear combination of Whittaker seed integrals. The details have been spelled out in \cite{Qin:2022fbv}. Here we show the final result:
\begin{align}
&\langle \varphi(\mathbf k_1)\varphi(\mathbf k_2)\gamma^{(\pm 2)}(\mathbf k_3)\rangle'\nonumber\\
=&~\FR{\wh{k}_{1i}\wh{k}_{2j} \epsilon_{ij}^{(\pm 2)}(\wh k_3) }{32(k_1k_2k_3)^2}\sum_{\mathsf{a},\mathsf{b}=\pm}\Big[\wt{\mathcal I}_{\mathsf{a}\mathsf{b}}^{(\pm)-1,-1}(u,1)
+\mathrm{i}\mathsf{a}\varrho_{12}\wt{\mathcal I}_{\mathsf{a}\mathsf{b}}^{(\pm)0,-1}(u,1)-\varrho_1\varrho_2\wt{\mathcal I}_{\mathsf{a}\mathsf{b}}^{(\pm)1,-1}(u,1)\Big],
\end{align}
where, again, $u=2k_3/k_{123}$, and the various momentum ratios $\varrho_i$ are defined in the same way as in (\ref{eq_3ptLorCovCoup}). It turns out that this correlator can also be obtained by acting with an appropriate differential operator on a fully summed SK integral. Interested readers can find more details in \cite{Qin:2022fbv}.
It seems that the Whittaker seed integral is of limited use in representing inflation correlators. As we shall show in a separate work \cite{qin_box}, this seed integral will be more useful when we treat the two-point or three-point functions considered here as subgraphs of more complicated diagrams. In that case, the Whittaker seed integral will be a very convenient building block for constructing more complicated processes at the loop level.
\section{Bootstrapping Hankel Seed Integrals}
\label{sec_hankel}
In this section, we provide the details of computing the three-point and two-point functions with Hankel-type tree-level mediation. We derive the bootstrap equations satisfied by the seed integrals, first in the $r$-variables and then in the $u$-variables. We then solve the bootstrap equations, obtaining both a particular solution to the inhomogeneous equation and the general solution to the homogeneous equation. The final answer is determined by proper boundary conditions, which we choose to impose in the squeezed limit. Finally, we take the single and double folded limits to obtain the desired results for the seed integrals with one or both of $u_1$ and $u_2$ taken to 1. Readers uninterested in the technical details can go directly to Sec.\ \ref{sec_hankel_summary} for the final results.
\subsection{Hankel seed integral and its bootstrap equation}
As mentioned above, our starting point is the Hankel seed integral $\mathcal{I}_{\mathsf{a}\mathsf{b}}^{p_1p_2}(u_1,u_2)$ defined in (\ref{eq_SeedIntH}). Here we shall first write the seed integrals as functions of $r_1=k_s/k_{12}$ and $r_2=k_s/k_{34}$, and transform to $(u_1,u_2)$ later:
\begin{align}
\label{eq_seedHr}
\mathcal I^{p_1p_2}_{\mathsf{a}\mathsf{b}}(r_1,r_2)
\equiv - \mathsf{a}\mathsf{b}\, k_s^{5+p_{12}} \int_{-\infty}^{0} {\mathrm{d}}\tau_1{\mathrm{d}}\tau_2\,(-\tau_1)^{p_1}(-\tau_2)^{p_2}e^{\mathrm{i}\mathsf{a} k_{12}\tau_1+\mathrm{i} \mathsf{b} k_{34}\tau_2}D_{\mathsf{a}\mathsf{b}}(k_s;\tau_1,\tau_2),
\end{align}
where $D_{\mathsf{a}\mathsf{b}}(k;\tau_1,\tau_2)$ is the bulk propagator for a massive scalar field, given in (\ref{eq_DScalarSame}) and (\ref{eq_DScalarOpp}). An important and well-known property of the Schwinger-Keldysh propagators is that they satisfy the equation of motion for the massive scalar field with or without a $\delta$-source:
\begin{align}
\label{eq_DEoM1}
&(\tau_1^2 \partial_{\tau_1}^2 - 2\tau_1 \partial_{\tau_1} + k_s^2\tau_1^2 + m^2)D_{\pm\mp}(k_s;\tau_1,\tau_2)=0,\\
\label{eq_DEoM2}
&(\tau_1^2 \partial_{\tau_1}^2 - 2\tau_1 \partial_{\tau_1} + k_s^2\tau_1^2 + m^2)D_{\pm\pm}(k_s;\tau_1,\tau_2)=\mp\mathrm{i} \tau_1^2\tau_2^2\delta(\tau_1-\tau_2).
\end{align}
This will be the key property we shall make use of when deriving the bootstrap equation for the Hankel seed integral.
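The homogeneous part of these equations can be checked directly on the mode functions: the positive-frequency factor $(-\tau)^{3/2}\mathrm{H}^{(1)}_{\mathrm{i}\wt\nu}(-k\tau)$ entering $D_{\pm\mp}$ solves the source-free equation with $m^2=\wt\nu^2+9/4$. A minimal numerical sketch (parameter values are illustrative):

```python
from mpmath import mp, mpf, hankel1, diff, fabs

mp.dps = 30
nu = mpf('1.7')            # sample mass parameter nu-tilde
k = mpf('1')               # sample momentum
m2 = nu**2 + mpf('9')/4    # m^2 = nu^2 + 9/4 (principal series)

def mode(tau):
    # positive-frequency factor of the propagator: (-tau)^{3/2} H^(1)_{i nu}(-k tau)
    return (-tau)**mpf('1.5')*hankel1(1j*nu, -k*tau)

tau0 = mpf('-0.7')
# apply the operator tau^2 d^2/dtau^2 - 2 tau d/dtau + k^2 tau^2 + m^2
res = (tau0**2*diff(mode, tau0, 2) - 2*tau0*diff(mode, tau0, 1)
       + (k**2*tau0**2 + m2)*mode(tau0))
# residual should vanish up to numerical-differentiation error
assert fabs(res) < mpf('1e-10')*(1 + fabs(mode(tau0)))
```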
Now, we start to derive the bootstrap equation. It turns out to be convenient to redefine several variables. First, we shall use $z_1\equiv -k_{12}\tau_1$ and $z_2\equiv -k_{34}\tau_2$ instead of the conformal times $\tau_1$ and $\tau_2$. Notice that the negative sign in the definition of $z_{1,2}$ makes them real and positive in the physical regime: $z_{1,2}\in(0,+\infty)$. Second, we shall define a ``hatted'' massive propagator $\wh D_{\mathsf{a}\mathsf{b}}(z_1,z_2)$:
\begin{equation}
\wh D_{\mathsf{a}\mathsf{b}}(z_1,z_2) = k^{3}D_{\mathsf{a}\mathsf{b}}(k;\tau_1,\tau_2)=\FR{\pi e^{-\pi\wt\nu}}{4}(z_1z_2)^{3/2}\mathrm{H}_{\mathrm{i}\wt\nu}^{(1)}(z_1)\mathrm{H}_{-\mathrm{i}\wt\nu}^{(2)}(z_2).
\end{equation}
The nice feature of the hatted propagator is that it depends on the times and the momentum only through the combinations $k\tau_1$ and $k\tau_2$, and therefore we can rewrite it as a function of the two independent variables $z_1$ and $z_2$. With these redefinitions, we can rewrite the Hankel seed integral (\ref{eq_seedHr}) as
\begin{align}
\label{eq_seedHhatted}
\mathcal I_{\mathsf{a}\mathsf{b}}^{p_1p_2}(r_1,r_2)
=&~(-\mathsf{a}\mathsf{b}) r_1^{1+p_1}r_2^{1+p_2} \int_0^\infty {\mathrm{d}} z_1{\mathrm{d}} z_2\,z_1^{p_1}z_2^{p_2}e^{-\mathrm{i}\mathsf{a} z_1-\mathrm{i} \mathsf{b} z_2}\wh D_{\mathsf{a}\mathsf{b}}(r_1z_1,r_2z_2).
\end{align}
It now becomes manifest that the Hankel seed integral depends on external momenta only through $r_1$ and $r_2$.
Recall that the massive propagator $D_{\mathsf{a}\mathsf{b}}$ satisfies the equations of motion (\ref{eq_DEoM1}) and (\ref{eq_DEoM2}). This pair of equations can be rewritten in terms of the hatted propagator as
\begin{align}
\label{eq_DhatEoM1}
&(z_1^2 \partial_{z_1}^2 -2 z_1\partial_{z_1}+z_1^2+m^2)\wh D_{\pm\mp}(z_1,z_2) = 0,\\
\label{eq_DhatEoM2}
&(z_1^2 \partial_{z_1}^2 -2 z_1\partial_{z_1}+z_1^2+m^2)\wh D_{\pm\pm}(z_1,z_2) = \mp\mathrm{i} z_1^2z_2^2\delta(z_1-z_2).
\end{align}
For the hatted propagator $\wh D_{\mathsf{a}\mathsf{b}}(r_1z_1,r_2z_2)$ in (\ref{eq_seedHhatted}), the two variables $z_1$ and $z_2$ are multiplied by the momentum ratios $r_1$ and $r_2$, respectively. Therefore, for this particular propagator, we can change the variables in the equations (\ref{eq_DhatEoM1}) and (\ref{eq_DhatEoM2}), and rewrite them as differential equations with respect to $r_1$ and $r_2$:
\begin{align}
&(r_1^2 \partial_{r_1}^2 -2 r_1\partial_{r_1}+r_1^2z_1^2+m^2)\wh D_{\pm\mp}(r_1z_1,r_2z_2) = 0,\\
&(r_1^2 \partial_{r_1}^2 -2 r_1\partial_{r_1}+r_1^2z_1^2+m^2)\wh D_{\pm\pm}(r_1z_1,r_2z_2) = \mp\mathrm{i} r_1^2z_1^2r_2^2z_2^2\delta(r_1z_1-r_2z_2).
\end{align}
Now, we can insert the differential operator $(r_1^2 \partial_{r_1}^2 -2 r_1\partial_{r_1}+r_1^2z_1^2+m^2)$ in front of the hatted propagator in (\ref{eq_seedHhatted}), which reduces the massive propagator to zero or to a term proportional to a $\delta$-function. Either way, the integral is trivialized, and this will become the right-hand side of the final bootstrap equation.
To derive the left-hand side of the bootstrap equation, we commute the differential operator $(r_1^2 \partial_{r_1}^2 -2 r_1\partial_{r_1}+r_1^2z_1^2+m^2)$ with the $z$-integrals. The only obstacle to pulling the differential operator to the left of the integral sign is the $r_1^2z_1^2$ term, which depends on the integration variable $z_1$. This obstacle can be removed by integration by parts. That is, for any well-behaved function $f(rz)$, we have:
\begin{equation}
0=\int_0^\infty {\mathrm{d}} z\, \partial_z\big[z^{p+1} e^{-\mathrm{i}\mathsf{a} z} f(rz) \big]= \int_0^\infty {\mathrm{d}} z\, z^p e^{-\mathrm{i}\mathsf{a} z}( p+1-\mathrm{i}\mathsf{a} z+ r\partial_r) f(rz),
\end{equation}
which then gives
\begin{equation}
\label{eq_zIBP}
\int_0^\infty {\mathrm{d}} z\, z^p e^{-\mathrm{i}\mathsf{a} z} zf(rz) = -\mathrm{i}\mathsf{a} (r\partial_r+p+1) \int_0^\infty {\mathrm{d}} z\, z^p e^{-\mathrm{i}\mathsf{a} z}f(rz).
\end{equation}
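This identity can be checked on a concrete test function. The sketch below (in SymPy) uses $f(rz)=e^{-rz}$, $\mathsf{a}=+1$, and $p=2$, all illustrative choices:

```python
import sympy as sp

z, r = sp.symbols('z r', positive=True)
a, p = 1, 2                      # sample SK index and power
f = sp.exp(-r*z)                 # test function f(rz) with f(x) = e^{-x}

# the basic integral and the same integral with an extra factor of z
base = sp.integrate(z**p*sp.exp(-sp.I*a*z)*f, (z, 0, sp.oo), conds='none')
lhs = sp.integrate(z**p*sp.exp(-sp.I*a*z)*z*f, (z, 0, sp.oo), conds='none')
# right-hand side of the integration-by-parts identity
rhs = -sp.I*a*(r*sp.diff(base, r) + (p + 1)*base)
assert sp.simplify(lhs - rhs) == 0
```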
A repetition of the same procedure then gives:
\begin{align}
\label{eq_z2IBP}
\int_0^\infty {\mathrm{d}} z\, z^p e^{-\mathrm{i}\mathsf{a} z} z^2f(rz) = - (r\partial_r+p+2) (r\partial_r+p+1) \int_0^\infty {\mathrm{d}} z\, z^p e^{-\mathrm{i}\mathsf{a} z}f(rz),
\end{align}
which shows that the $z^2$ term can likewise be transformed into a differential operator with respect to $r$. It is then straightforward to obtain the following differential equations:
\begin{align}
&\Big[(r_1^2-r_1^4)\partial_{r_1}^2-\big(2r_1+(4+2p_1)r_1^3\big)\partial_{r_1}
+\big((\wt\nu^2+\fr94)-(p_1+1)(p_1+2)r_1^2\big)
\Big]\nonumber\\
&\times \big[r_1^{-1-p_1}r_2^{-1-p_2}\mathcal I_{\pm\mp}^{p_1p_2}(r_1,r_2)\big]=0,\\
&\Big[(r_1^2-r_1^4)\partial_{r_1}^2-\big(2r_1+(4+2p_1)r_1^3\big)\partial_{r_1}
+\big((\wt\nu^2+\fr94)-(p_1+1)(p_1+2)r_1^2\big)
\Big]\nonumber\\
&\times \big[r_1^{-1-p_1}r_2^{-1-p_2}\mathcal I_{\pm\pm}^{p_1p_2}(r_1,r_2)\big]=
e^{\mp\mathrm{i} p_{12}\pi/2}\FR{r_1^{4+p_2}r_2^{4+p_1}}{(r_1+r_2)^{5+p_{12}}}\Gamma(5+p_{12}).
\end{align}
After slight simplifications, we arrive at the bootstrap equations for the Hankel seed integral in the $r$-variables:
\begin{align}
&\mathcal{D}_{r_1}^{p_1}\mathcal I_{\pm\mp}^{p_1p_2}(r_1,r_2)=0,\\
&\mathcal{D}_{r_1}^{p_1}
\mathcal I_{\pm\pm}^{p_1p_2}(r_1,r_2) =
e^{\mp\mathrm{i} p_{12}\pi/2}\Gamma(5+p_{12})\Big(\FR{r_1r_2}{r_1+r_2}\Big)^{5+p_{12}};\\
&\mathcal{D}_r^p\equiv (r^2-r^4)\partial_{r}^2-\big[(4+2p)r+2r^3\big]\partial_{r}
+ \wt\nu^2+\FR{(5+2p)^2}4.
\end{align}
One can well work with this set of equations and solve for the Hankel seed integrals $\mathcal{I}_{\mathsf{a}\mathsf{b}}$, as in most previous works on this topic. However, a crucial observation made in \cite{Qin:2022fbv} is that it is much easier to take the three-point and two-point limits if we instead choose to work with the following new variables:
\begin{equation}
u_i \equiv \FR{2r_i}{1+r_i},\quad i=1,2.
\end{equation}
Correspondingly, we rewrite the Hankel seed integral as a function of $u_{1,2}$; namely, we define:
\begin{equation}
\wt{\mathcal I}_{\mathsf{a}\mathsf{b}}^{p_1p_2}(u_1,u_2)\equiv \mathcal I_{\mathsf{a}\mathsf{b}}^{p_1p_2}\big(r_1(u_1),r_2(u_2)\big).
\end{equation}
Then, the bootstrap equations for $\wt{\mathcal I}_{\mathsf{a}\mathsf{b}}^{p_1p_2}(u_1,u_2)$ are:
\begin{align}
\label{eq_BTEs1}
&\mathcal D_{u_1}^{p_1}\wt{\mathcal I}_{\pm\mp}^{p_1p_2}(u_1,u_2)=0,\\
\label{eq_BTEs2}
&\mathcal D_{u_1}^{p_1}\wt{\mathcal I}_{\pm\pm}^{p_1p_2}(u_1,u_2)=
e^{\mp \mathrm{i} p_{12}\pi/2}\Gamma(5+p_{12})\Big(\FR{u_1u_2}{2(u_1+u_2-u_1u_2)}\Big)^{5+p_{12}};\\
&\mathcal D_u^p \equiv (u^2-u^3)\partial_u^2 - \Big[(4+2p)u-(1+p)u^2\Big]\partial_u + \Big[\wt\nu^2+\big(p+\fr52\big)^2\Big].
\end{align}
This completes the derivation of the bootstrap equations for the Hankel seed integral in $u$-variables. Below we are going to solve these equations to get the four-point function and especially its folded limit.
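One can check directly that $\mathcal D_u^p$ is nothing but $\mathcal D_r^p$ rewritten in the variable $u=2r/(1+r)$, i.e.\ $r=u/(2-u)$, with no extra overall factor. A symbolic sketch, testing the identity on a few concrete rational functions:

```python
import sympy as sp

r, u, p, nu = sp.symbols('r u p nu')

for Fu in (u**3, u**4, 1/(1 - u)):
    # the same function viewed in the r-variable, via u = 2r/(1+r)
    g = Fu.subs(u, 2*r/(1 + r))
    # D_r^p acting on g(r)
    Dr_g = ((r**2 - r**4)*sp.diff(g, r, 2)
            - ((4 + 2*p)*r + 2*r**3)*sp.diff(g, r)
            + (nu**2 + sp.Rational(1, 4)*(5 + 2*p)**2)*g)
    # D_u^p acting on F(u)
    Du_F = ((u**2 - u**3)*sp.diff(Fu, u, 2)
            - ((4 + 2*p)*u - (1 + p)*u**2)*sp.diff(Fu, u)
            + (nu**2 + (p + sp.Rational(5, 2))**2)*Fu)
    # substituting r = u/(2-u) into D_r^p g reproduces D_u^p F exactly
    assert sp.simplify(Dr_g.subs(r, u/(2 - u)) - Du_F) == 0
```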
\subsection{Solving the bootstrap equation}
Now we start to solve the bootstrap equations (\ref{eq_BTEs1}) and (\ref{eq_BTEs2}) for the Hankel seed integrals.
For the homogeneous equation (\ref{eq_BTEs1}), the solution is a suitable linear combination of products of homogeneous solutions in the two variables:
\begin{equation}
\label{eq_IpmmpPar}
\wt{\mathcal{I}}_{\pm\mp}^{p_1p_2}(u_1,u_2)=\sum_{\mathsf{a},\mathsf{b}=\pm}\alpha_{\pm\mp|\mathsf{a}\mathsf{b}}\mathcal{Y}_{\mathsf{a}}^{p_1}(u_1)\mathcal{Y}_{\mathsf{b}}^{p_2}(u_2).
\end{equation}
Here $\mathcal{Y}_\pm^p(u)$ are a pair of linearly independent solutions to the homogeneous equation (\ref{eq_BTEs1}), which we choose to be:
\begin{equation}
\label{eq_calYpm}
\mathcal Y_\pm^{p_1}(u_1) = 2^{\mp\mathrm{i}\wt\nu}\Big(\FR{u_1}2\Big)^{5/2+p_1\pm\mathrm{i}\wt\nu}
\Gamma\Big[\FR52+p_1\pm\mathrm{i}\wt\nu,\mp\mathrm{i}\wt\nu\Big]
{}_2\mathrm F_1\left[\begin{matrix}\fr52+p_1\pm\mathrm{i}\wt\nu,\fr12\pm\mathrm{i}\wt\nu\\1\pm2\mathrm{i}\wt\nu\end{matrix}\middle|u_1\right].
\end{equation}
Here we have included some numerical factors in $\mathcal Y_\pm^{p_1}(u_1)$ for later convenience. Note that the seed integral $\wt{\mathcal{I}}_{\pm\mp}^{p_1p_2}(u_1,u_2)$ is symmetric under the exchange $u_1\leftrightarrow u_2$ (together with the corresponding indices), and that it satisfies an identical set of bootstrap equations with respect to $u_2$. Therefore, we expect the final result for $\wt{\mathcal{I}}_{\pm\mp}^{p_1p_2}(u_1,u_2)$ to be bilinear in $\mathcal{Y}_{\mathsf{a}}^{p_1}(u_1)$ and $\mathcal{Y}_{\mathsf{b}}^{p_2}(u_2)$. The coefficients $\alpha_{\pm\mp|\mathsf{a}\mathsf{b}}$ are determined by imposing proper boundary conditions. In this work, we impose the boundary condition by matching the result to an explicit calculation in the squeezed limit $u_1\ll u_2\ll 1$, as will be elaborated below.
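One can verify numerically that each $\mathcal Y_\pm^{p}(u)$ is indeed annihilated by $\mathcal D_u^{p}$; the constant prefactor is irrelevant for this check. A sketch with illustrative parameter values:

```python
from mpmath import mp, mpf, hyp2f1, diff, fabs

mp.dps = 40
nu = mpf('1.3')                  # sample nu-tilde
p = mpf('0')                     # sample power p
s = mpf('5')/2 + p + 1j*nu       # exponent 5/2 + p + i*nu (the "+" branch)

def Y(u):
    # Y_+^p(u) up to its constant prefactor: u^s 2F1(s, 1/2 + i nu; 1 + 2 i nu; u)
    return u**s*hyp2f1(s, mpf('1')/2 + 1j*nu, 1 + 2j*nu, u)

def DY(u):
    # bootstrap operator D_u^p applied to Y
    return ((u**2 - u**3)*diff(Y, u, 2)
            - ((4 + 2*p)*u - (1 + p)*u**2)*diff(Y, u, 1)
            + (nu**2 + (p + mpf('5')/2)**2)*Y(u))

u0 = mpf('0.4')
assert fabs(DY(u0)) < mpf('1e-10')*(1 + fabs(Y(u0)))
```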
For the inhomogeneous equation (\ref{eq_BTEs2}), the solution is the sum of two parts: a particular solution $\wt{\mathcal{I}}_{\pm\pm,\text{Inh}}^{p_1p_2}(u_1,u_2)$ sourced by the inhomogeneous term, and a homogeneous solution which is again a linear combination of the two independent solutions to the homogeneous equation:
\begin{align}
\label{eq_IpmpmPar}
\wt{\mathcal{I}}_{\pm\pm}^{p_1p_2}(u_1,u_2)=\wt{\mathcal{I}}_{\pm\pm,\text{Inh}}^{p_1p_2}(u_1,u_2)+\sum_{\mathsf{a},\mathsf{b}=\pm}\alpha_{\pm\pm|\mathsf{a}\mathsf{b}}\mathcal{Y}_{\mathsf{a}}^{p_1}(u_1)\mathcal{Y}_{\mathsf{b}}^{p_2}(u_2).
\end{align}
The coefficients $\alpha_{\pm\pm|\mathsf{a}\mathsf{b}}$ are again determined by matching the result to an explicit calculation in the squeezed limit.
Below, we shall first work out the particular solution $\wt{\mathcal{I}}_{\pm\pm,\text{Inh}}^{p_1p_2}(u_1,u_2)$, and then construct the homogeneous solution. To find the particular solution, we do not work with an arbitrary momentum configuration; instead, we directly take the three-point limit $u_2\rightarrow 1$, since we are concerned only with the three-point and two-point functions in this work, and the single folded limit $u_2\rightarrow 1$ is regular for the particular solution. The general four-point correlator for arbitrary momentum configurations can be calculated following the same procedure as in our previous work \cite{Qin:2022fbv}.
On the other hand, to find the homogeneous solution, we still work with the four-point function, and impose the boundary conditions in the squeezed limit $u_{1,2}\rightarrow 0$. It turns out that this limit is enough to fix all the coefficients in the homogeneous solution. Only after this is done do we take the single folded limit $u_2\rightarrow 1$ to get the result for the three-point function. The reason we work with the four-point function for the homogeneous solution is that the single folded limit is singular for each individual term in the homogeneous solution, and a regular three-point limit is achieved only for the correct linear combination of all these terms.
\paragraph{Particular solution.} As stated above, we solve the inhomogeneous equation (\ref{eq_BTEs2}) in the single folded limit $u_2=1$. Writing also $u_1=u$, the equation becomes:
\begin{equation}
\label{eq_DupqI}
\mathcal D_{u}^{p_1} \wt{\mathcal I}_{\pm\pm,\text{Inh}}^{p_1p_2}(u,1) = e^{\mp \mathrm{i} p_{12}\pi/2}\Gamma(5+p_{12})\Big(\FR{u}{2}\Big)^{5+p_{12}}.
\end{equation}
Note that $\wt{\mathcal{I}}_{++,\text{Inh}}^{p_1p_2}$ and $\wt{\mathcal{I}}_{--,\text{Inh}}^{p_1p_2}$ satisfy equations with the same differential operator on the left-hand side, with source terms differing only by a phase.
We use the following Taylor series as an ansatz for the solution:
\begin{equation}
\label{eq_ScalarAnsatz}
\wt{\mathcal I}_{\pm\pm,\text{Inh}}^{p_1p_2}(u,1) = 2^{-5-p_{12}}e^{\mp \mathrm{i} p_{12}\pi/2}\Gamma(5+p_{12}) \sum_{n=0}^\infty \mathcal{X}_{n} u^{5+p_{12}+n}.
\end{equation}
Then, the equation (\ref{eq_DupqI}) gives rise to the following recursion relation for the Taylor coefficients $\mathcal{X}_n$:
\begin{equation}
\mathcal{X}_0 = \FR{1}{(\fr52+p_2)^2+\wt\nu^2},
\qquad
\mathcal{X}_{n+1} = \FR{(n+3+p_2)(n+5+p_{12})}{(n+\fr72+p_2)^2+\wt\nu^2} \mathcal{X}_n.
\end{equation}
This recursion relation is directly solved by
\begin{equation}
\mathcal{X}_n = \FR{(3+p_2)_n(5+p_{12})_n}{(p_2+\fr52-\mathrm{i}\wt\nu)_{n+1}(p_2+\fr52+\mathrm{i}\wt\nu)_{n+1}},
\end{equation}
where $(a)_n\equiv\Gamma(a+n)/\Gamma(a)$ is the Pochhammer symbol. Substituting these coefficients back into the ansatz (\ref{eq_ScalarAnsatz}), the summation becomes a standard (generalized) hypergeometric series, which can be summed directly, yielding the particular solution:
\begin{align}
\label{eq_IBGs}
\wt{\mathcal I}_{\pm\pm,\text{Inh}}^{p_1p_2}(u,1)=
\FR{e^{\mp \mathrm{i} p_{12}\pi/2}\Gamma(5+p_{12})u^{5+p_{12}} }{2^{5+p_{12}}[(\fr52+p_2)^2+\wt\nu^2]}
{}_3\mathrm F_2
\left[\begin{matrix} 1,3+p_2,5+p_{12}\\\fr72+p_2-\mathrm{i}\wt\nu,\fr72+p_2+\mathrm{i}\wt\nu\end{matrix}\middle|u\right].
\end{align}
This completes our derivation for the particular solution to the bootstrap equation.
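As a quick numerical sanity check, one can verify that the Pochhammer-form coefficients $\mathcal X_n$ satisfy the recursion relation above and that their resummation reproduces the ${}_3\mathrm F_2$ form in (\ref{eq_IBGs}). A minimal sketch with Python's \texttt{mpmath}; the parameter values $p_1=1$, $p_2=0$, $\wt\nu=2$, $u=0.4$ are arbitrary sample choices:

```python
from mpmath import mp, mpf, mpc, rf, hyper, fsum

mp.dps = 30

p1, p2, nu = 1, 0, mpf(2)      # arbitrary sample values (p_{1,2} > -5/2, real nu)
p12 = p1 + p2
inu = mpc(0, nu)

def X(n):
    # closed-form Taylor coefficient in Pochhammer form
    return (rf(3 + p2, n) * rf(5 + p12, n)
            / (rf(p2 + mpf(5)/2 - inu, n + 1) * rf(p2 + mpf(5)/2 + inu, n + 1)))

# 1) the closed form satisfies the recursion relation
recursion_ok = all(
    abs(X(n + 1) - (n + 3 + p2)*(n + 5 + p12)/((n + mpf(7)/2 + p2)**2 + nu**2)*X(n))
    < mpf('1e-25')
    for n in range(10))

# 2) the resummed series agrees with the 3F2 representation
u = mpf('0.4')
series = fsum(X(n) * u**n for n in range(200))
closed = hyper([1, 3 + p2, 5 + p12],
               [mpf(7)/2 + p2 - inu, mpf(7)/2 + p2 + inu],
               u) / ((mpf(5)/2 + p2)**2 + nu**2)
resum_ok = abs(series - closed) < mpf('1e-20')
```

Both checks are insensitive to the overall phase and $\Gamma(5+p_{12})\,u^{5+p_{12}}/2^{5+p_{12}}$ prefactor, which factor out of the bootstrap equation.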
\paragraph{Homogeneous solutions.} As mentioned above, the problem of finding the homogeneous solution boils down to determining the coefficients $\alpha_{\pm\mp|\mathsf{a}\mathsf{b}}$ and $\alpha_{\pm\pm|\mathsf{a}\mathsf{b}}$ in (\ref{eq_IpmmpPar}) and (\ref{eq_IpmpmPar}).
We impose the boundary condition in the hierarchical squeezed limit $u_1\ll u_2\ll 1$, where the seed integral can be evaluated directly. We relegate this calculation to App.\ \ref{app_hankel} and quote the result here:
\begin{align}
\label{eq_ItildePMSq}
\lim_{u_1\ll u_2\ll 1}\wt{\mathcal I}^{p_1p_2}_{\pm\mp}(u_1,u_2)
=&~\FR{e^{\mp\mathrm{i}\bar{p}_{12}\pi/2}}{4\pi}\Big[\wt{\mathcal Y}_+^{p_1}(u_1)+\wt{\mathcal Y}_-^{p_1}(u_1)\Big]\Big[\wt{\mathcal Y}_+^{p_2}(u_2)+\wt{\mathcal Y}_-^{p_2}(u_2)\Big],\\
\label{eq_ItildePPSq}
\lim_{u_1\ll u_2\ll 1}\wt{\mathcal I}^{p_1p_2}_{\pm\pm}(u_1,u_2)
=&~\FR{\pm\mathrm{i} e^{\mp\mathrm{i} p_{12}\pi/2}}{4\pi}\Big[e^{\pi\wt\nu}\wt{\mathcal Y}_\pm^{p_1}(u_1)+e^{-\pi\wt\nu}\wt{\mathcal Y}_{\mp}^{p_1}(u_1)\Big]\Big[\wt{\mathcal Y}_+^{p_2}(u_2)+\wt{\mathcal Y}_-^{p_2}(u_2)\Big],
\end{align}
where $\wt{\mathcal Y}_\pm^p(u)$ are the two independent solutions $\mathcal Y_\pm^p(u)$ given in (\ref{eq_calYpm}), evaluated in the squeezed limit $u\rightarrow 0$:
\begin{align}
\wt{\mathcal Y}_\pm^p(u)= 2^{\mp \mathrm{i}\wt\nu} \Big(\FR{u}{2}\Big)^{5/2+p\pm\mathrm{i}\wt\nu}\Gamma\Big[\FR52+p\pm\mathrm{i}\wt\nu,\mp\mathrm{i}\wt\nu\Big].
\end{align}
Given the form of the squeezed-limit solution, we do not have to write down all the coefficients $\alpha_{\mathsf{cd}|\mathsf{a}\mathsf{b}}$ explicitly. Rather, we only need to make the replacement $\wt{\mathcal Y}_\pm^p(u)\rightarrow \mathcal Y_\pm^p(u)$ in (\ref{eq_ItildePMSq}) and (\ref{eq_ItildePPSq}), and the results are guaranteed to be the correct homogeneous solutions:
\begin{align}
\label{eq_ItildePM}
\wt{\mathcal I}^{p_1p_2}_{\pm\mp}(u_1,u_2)
=&~\FR{e^{\mp\mathrm{i}\bar{p}_{12}\pi/2}}{4\pi}\Big[{\mathcal Y}_+^{p_1}(u_1)+{\mathcal Y}_-^{p_1}(u_1)\Big]\Big[{\mathcal Y}_+^{p_2}(u_2)+{\mathcal Y}_-^{p_2}(u_2)\Big],\\
\label{eq_ItildePP}
\wt{\mathcal I}^{p_1p_2}_{\pm\pm}(u_1,u_2)
=&~\FR{\pm\mathrm{i} e^{\mp\mathrm{i} p_{12}\pi/2}}{4\pi}\Big[e^{\pi\wt\nu}{\mathcal Y}_\pm^{p_1}(u_1)+e^{-\pi\wt\nu}{\mathcal Y}_{\mp}^{p_1}(u_1)\Big]\Big[{\mathcal Y}_+^{p_2}(u_2)+{\mathcal Y}_-^{p_2}(u_2)\Big].
\end{align}
At this point, an important observation is that the squeezed-limit results (\ref{eq_ItildePMSq}) and (\ref{eq_ItildePPSq}) scale with the momentum ratio $u_1$ as $u_{1}^{5/2+p_1\pm \mathrm{i}\wt\nu}$, which is identical to the scaling of the general homogeneous solution (\ref{eq_calYpm}). By contrast, the particular solution (\ref{eq_IBGs}) scales as $u_1^{5+p_{12}}$ in the squeezed limit $u_1\ll 1$. [Note that $u_1=u$ in (\ref{eq_IBGs}).] Therefore, for positive real $\wt\nu$, as long as $p_2>-5/2$, the particular solution (\ref{eq_IBGs}) is always subdominant, and the general solutions in (\ref{eq_IpmmpPar}) and (\ref{eq_IpmpmPar}) can be matched to the squeezed-limit results (\ref{eq_ItildePMSq}) and (\ref{eq_ItildePPSq}), respectively. On the other hand, when $p_2\leq -5/2$, the particular solution dominates the squeezed limit, and there is no way to match the squeezed-limit result. In fact, the Hankel seed integral $\mathcal{I}^{p_1p_2}$ is divergent when $p_{1,2}\leq -5/2$, and the divergence does not cancel among different SK branches. Physically, the powers $p_{1,2}\leq -5/2$ arise from interactions that are too nonlocal, for which perturbation theory actually breaks down. Therefore, we shall always assume $p_{1,2}>-5/2$ in this work when $\wt\nu$ is positive real.
\subsection{Folded limit}
\paragraph{Single folded limit.} Now we consider the single folded limit $u_2\to1$. The general solution (\ref{eq_calYpm}) can become singular in this limit through the hypergeometric factor ${}_2\mathrm F_1$, and the singular terms must cancel among themselves in (\ref{eq_ItildePM}) and (\ref{eq_ItildePP}), so that only the finite part of the hypergeometric function contributes to the folded limit. Therefore, we keep only the finite part of the ${}_2\mathrm F_1$ function. In the folded limit $u\rightarrow 1$,
\begin{equation}
\label{eq_Fin2F1}
\text{Fin}\bigg\{\lim_{u\rightarrow 1}{}_2\mathrm F_1 \left[\begin{matrix} a,b\\c\end{matrix} \middle| u\right]\bigg\} =
\Gamma\left[\begin{matrix} c,s\\ a+s,b+s\end{matrix}\right],~~~~~~(s\equiv c-a-b\notin\mathbb{Z})
\end{equation}
Here $\text{Fin}\{\cdots\}$ denotes the finite part of the limit. As indicated, this formula holds only when the balance $s=c-a-b$ is not an integer; the case $s\in\mathbb{Z}$ can be treated by taking the limit. The finite part of the general solution $\mathcal Y_\pm^p(u)$ in (\ref{eq_calYpm}) as $u \rightarrow 1$ is then:
\begin{equation}
\text{Fin}\Big\{\lim_{u\to1}\mathcal Y_{\pm}^p(u)\Big\} = \pm\mathrm{i} 2^{-5/2-p}\pi^{1/2}
\text{csch}(\pi\wt\nu)\Gamma\left[\begin{matrix}-2-p,\fr52+p\pm\mathrm{i}\wt\nu\\-\fr32-p\pm\mathrm{i}\wt\nu\end{matrix}\right].
\end{equation}
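The finite-part extraction (\ref{eq_Fin2F1}) can be checked numerically by evaluating the ${}_2\mathrm F_1$ slightly below $u=1$ and subtracting the known singular piece $(1-u)^s\,\Gamma[c,-s]/\Gamma[a,b]$ from the standard connection formula. A minimal sketch with \texttt{mpmath}; the parameter values below are arbitrary, chosen only so that the balance $s$ is a non-integer:

```python
from mpmath import mp, mpf, gamma, hyp2f1

mp.dps = 40

# arbitrary sample parameters with non-integer balance s = c - a - b
a, b, c = mpf('0.7'), mpf('1.1'), mpf('1.3')
s = c - a - b            # = -0.5, non-integer

u = 1 - mpf('1e-10')

# subtract the known singular piece (1-u)^s * Gamma[c,-s]/Gamma[a,b]
singular = (1 - u)**s * gamma(c) * gamma(-s) / (gamma(a) * gamma(b))
fin_numeric = hyp2f1(a, b, c, u) - singular

# compare with the quoted finite part Gamma[c,s]/Gamma[a+s,b+s]
fin_formula = gamma(c) * gamma(s) / (gamma(a + s) * gamma(b + s))
fin_ok = abs(fin_numeric - fin_formula) < mpf('1e-4')
```

The residual mismatch is of order $(1-u)^{1+s}$ from subleading terms in the singular channel, which is far below the tolerance used here.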
Now, all the general solutions $\mathcal{Y}_\pm^{p_2}(u_2)$ in (\ref{eq_ItildePM}) and (\ref{eq_ItildePP}) appear in the combination $\mathcal Y_+^{p_2}(u_2) + \mathcal Y_-^{p_2}(u_2)$. It can be shown that the divergences in the limit $u_2\rightarrow 1$ cancel in this combination, leaving the following regular limit:
\begin{equation}
\label{eq_uPanduM}
\lim_{u\to1}\Big[\mathcal Y_+^{p}(u) + \mathcal Y_-^{p}(u)\Big] =2^{-3/2-p}\pi^{1/2}\Gamma\left[\begin{matrix} \fr52+p-\mathrm{i}\wt\nu,\fr52+p+\mathrm{i}\wt\nu\\
3+p\end{matrix}\right].
\end{equation}
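This cancellation can also be confirmed numerically: evaluating $\mathcal Y_+^{p}(u)+\mathcal Y_-^{p}(u)$ from the explicit form (\ref{eq_calYpm}) at $u$ slightly below $1$, the individually divergent pieces cancel and the sum approaches the right-hand side of (\ref{eq_uPanduM}). A sketch with \texttt{mpmath}, using arbitrary sample values (a non-integer $p$ is chosen to avoid the logarithmic degenerate case of the ${}_2\mathrm F_1$):

```python
from mpmath import mp, mpf, mpc, gamma, hyp2f1, sqrt, pi

mp.dps = 50

p, nu = mpf('0.3'), mpf('1.5')   # arbitrary sample values
inu = mpc(0, nu)

def Y(sign, u):
    # homogeneous solution Y_pm^p(u); sign = +1 or -1
    s = sign * inu
    return (2**(-s) * (u/2)**(mpf(5)/2 + p + s) * gamma(mpf(5)/2 + p + s)
            * gamma(-s) * hyp2f1(mpf(5)/2 + p + s, mpf(1)/2 + s, 1 + 2*s, u))

u = 1 - mpf('1e-8')
combo = Y(+1, u) + Y(-1, u)      # divergent pieces cancel between the two signs
closed = (2**(-mpf(3)/2 - p) * sqrt(pi)
          * gamma(mpf(5)/2 + p - inu) * gamma(mpf(5)/2 + p + inu) / gamma(3 + p))
limit_ok = abs(combo - closed) < mpf('1e-6')
```

Since both solutions solve the same second-order equation, cancellation of the leading divergence removes the entire singular Frobenius channel, so the residual is of order $1-u$.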
Substituting this result back into (\ref{eq_ItildePM}) and (\ref{eq_ItildePP}), we find the correct homogeneous solution in the single folded limit. The explicit results are summarized in (\ref{eq_IPPresult}) and (\ref{eq_IPMresult}) at the end of this section.
\paragraph{Double-folded limit.}
Now we consider the double folded limit, in order to obtain an expression for the two-point correlator. The double folded limit is obtained by further taking $u\rightarrow 1$ in $\wt{\mathcal{I}}_{\mathsf{a}\mathsf{b}}^{p_1p_2}(u,1)$. Remarkably, all four seed integrals $\wt{\mathcal I}_{\mathsf{a}\mathsf{b}}^{p_1p_2}(u,1)$ are individually regular in the limit $u\rightarrow 1$. First, the regularity of $\wt{\mathcal I}_{\pm\mp}^{p_1p_2}(u,1)$ is again due to the combination ${\mathcal Y}_+^{p_1}(u)+{\mathcal Y}_-^{p_1}(u)$, which we have shown above to be regular as $u\rightarrow 1$. In fact, by direct substitution of (\ref{eq_uPanduM}) into $\wt{\mathcal I}_{\pm\mp}^{p_1p_2}(u,1)$ in (\ref{eq_IPMresult}), we immediately get:
\begin{align}
\wt{\mathcal I}_{\pm\mp}^{p_1p_2}(1,1)
=&~\FR{e^{\mp\mathrm{i}\bar{p}_{12}\pi/2}}{2^{5+p_{12}} }\Gamma\left[\begin{matrix} \fr52+p_1-\mathrm{i}\wt\nu,\fr52+p_1+\mathrm{i}\wt\nu,\fr52+p_2-\mathrm{i}\wt\nu,\fr52+p_2+\mathrm{i}\wt\nu\\
3+p_1,3+p_2\end{matrix}\right].
\end{align}
On the other hand, in $\wt{\mathcal I}_{\pm\pm}^{p_1p_2}(u,1)$ in (\ref{eq_IPPresult}), the combination $e^{\pi\wt\nu}{\mathcal Y}_+^{p_1}(u)+e^{-\pi\wt\nu}{\mathcal Y}_-^{p_1}(u)$ is divergent as $u\rightarrow 1$, and this divergence is canceled by another divergent piece from the ${}_3\mathrm F_2$ function in (\ref{eq_IPPresult}), as a consequence of the Bunch-Davies initial condition for all the mode functions. Therefore, we retain only the finite part of each of these two pieces and add them together to obtain the result in the double folded limit.
The finite part of the combination $e^{\pi\wt\nu}{\mathcal Y}_+^{p_1}(u)+e^{-\pi\wt\nu}{\mathcal Y}_-^{p_1}(u)$ can again be obtained by using the formula in (\ref{eq_Fin2F1}). To obtain the finite part of the ${}_3\text{F}_2$ function, we use the following formula \cite{doi:10.1137/0518089}:
\begin{equation}
\label{eq_3F2finite}
\text{Fin}\bigg\{\lim_{u\rightarrow 1}{}_3\mathrm F_2 \left[\begin{matrix} a,b,c\\d,e\end{matrix} \middle| u\right]\bigg\} =
\Gamma\left[\begin{matrix} d,e,s\\ c,a+s,b+s\end{matrix}\right]
{}_3\mathrm F_2 \left[\begin{matrix} d-c,e-c,s\\a+s,b+s\end{matrix} \middle| 1\right] ,
\end{equation}
where $s\equiv d+e-a-b-c$ is the balance of the hypergeometric function. Once again, this formula holds when $s$ is not an integer, and the integer case can be treated by taking the limit. For the ${}_3\mathrm F_2$ function in the inhomogeneous solution (\ref{eq_IBGs}), we then make the following assignment of parameters:
\begin{equation}
a=5+p_{12},\quad b=3+p_2,\quad c=1,
\quad d=\FR72+p_2-\mathrm{i}\wt\nu,\quad e=\FR72+p_2+\mathrm{i}\wt\nu,\quad
s=-2-p_1.
\end{equation}
Combined with the finite part of the homogeneous solution as shown in (\ref{eq_IpmpmPar}), we obtain the double folded limit of $\wt{\mathcal I}_{\pm\pm}^{p_1p_2}$ as follows:
\begin{align}
\label{eq_I2ptPP2ptSimp}
&\wt{\mathcal I}_{\pm\pm}^{p_1p_2}(1,1)\nonumber\\
=&~\FR{\pm\mathrm{i} e^{\mp\mathrm{i} {p}_{12}\pi/2}[1\mp\mathrm{i}\cot(\pi p_1)]\cosh\pi\wt\nu}{2^{5+p_{12}}}\Gamma\left[\begin{matrix}\fr52+p_1-\mathrm{i}\wt\nu,\fr52+p_1+\mathrm{i}\wt\nu, \fr52+p_2-\mathrm{i}\wt\nu,\fr52+p_2+\mathrm{i}\wt\nu\\
3+p_1,3+p_2\end{matrix}\right] \nonumber\\
&+\FR{e^{\mp \mathrm{i} p_{12}\pi/2}\Gamma(5+p_{12})}{2^{5+p_{12}}}
{}_3\mathcal{F}_2
\left[\begin{matrix} \fr52+p_2-\mathrm{i}\wt\nu,\fr52+p_2+\mathrm{i}\wt\nu,-2-p_1\\ 3+p_2,1-\bar{p}_{12}\end{matrix}\middle|1\right].
\end{align}
There are many relations among hypergeometric functions of unit argument, with which we can recast the result into a somewhat simpler form. For example, we can use the following relation (Eq.\ 4.3.4.4 of \cite{Slater:1966}):
\begin{align}
\label{eq_3F2transf}
&~{}_3\mathrm F_2\left[\begin{matrix} a,b,d+e-a-b-1\\ d,e \end{matrix}\middle|1\right]
=\Gamma\left[\begin{matrix}
d,e,d-a-b,e-a-b\\d-a,d-b,e-a,e-b
\end{matrix}\right]\nonumber\\
&+\FR{1}{a+b-d}\Gamma\left[\begin{matrix}
d,e\\a,b,d+e-a-b\end{matrix}\right]
{}_3\mathrm F_2\left[\begin{matrix} d-a,d-b,1\\1+d-a-b,d+e-a-b\end{matrix}\middle|1\right].
\end{align}
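As a sanity check on the transcription of this identity, one can evaluate both sides numerically with \texttt{mpmath} for arbitrary parameter values such that both ${}_3\mathrm F_2$'s converge at unit argument (the balances, $1$ for the left-hand side and $e-a-b$ for the right-hand side, must have positive real part):

```python
from mpmath import mp, mpf, gamma, hyper

mp.dps = 30

# arbitrary sample parameters; e - a - b = 0.9 > 0 ensures convergence on both sides
a, b, d, e = mpf('0.4'), mpf('0.3'), mpf('1.9'), mpf('1.6')

lhs = hyper([a, b, d + e - a - b - 1], [d, e], 1)

rhs = (gamma(d) * gamma(e) * gamma(d - a - b) * gamma(e - a - b)
       / (gamma(d - a) * gamma(d - b) * gamma(e - a) * gamma(e - b))
       + gamma(d) * gamma(e)
       / ((a + b - d) * gamma(a) * gamma(b) * gamma(d + e - a - b))
       * hyper([d - a, d - b, 1], [1 + d - a - b, d + e - a - b], 1))

slater_ok = abs(lhs - rhs) < mpf('1e-15')
```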
Then, with the following assignment of parameters:
\begin{equation}
a=\FR52+p_2-\mathrm{i}\wt\nu,\quad
b=-2-p_1,\quad
d=3+p_2,\quad
e=1-p_1+p_2,
\end{equation}
we obtain:
\begin{align}
&\wt{\mathcal I}_{\pm\pm}^{p_1p_2}(1,1)
=\FR{\pm\mathrm{i} e^{\mp\mathrm{i} {p}_{12}\pi/2}e^{-\pi\wt\nu}}{2^{5+p_{12}}}\Gamma\left[\begin{matrix}\fr52+p_1-\mathrm{i}\wt\nu,\fr52+p_1+\mathrm{i}\wt\nu, \fr52+p_2-\mathrm{i}\wt\nu,\fr52+p_2+\mathrm{i}\wt\nu\\
3+p_1,3+p_2\end{matrix}\right] \nonumber\\
&-\FR{e^{\mp \mathrm{i} p_{12}\pi/2}}{2^{5+p_{12}}}\Gamma\Big[5+p_{12},\fr52+p_1\pm\mathrm{i}\wt\nu,\fr52+p_2\pm\mathrm{i}\wt\nu\Big]
{}_3\wt{\mathrm{F}}_2
\left[\begin{matrix} 5+p_{12},\fr12\pm\mathrm{i}\wt\nu,1\\ \fr72+p_1\pm\mathrm{i}\wt\nu,\fr72+p_2\pm\mathrm{i}\wt\nu\end{matrix}\middle|1\right].
\end{align}
This result has the advantage that it is manifestly symmetric in $p_1$ and $p_2$, and that it remains regular when $p_1$ and $p_2$ take generic integer values.
\subsection{Summary}
\label{sec_hankel_summary}
At this point, we have finished the computation of the Hankel seed integral $\wt{\mathcal I}_{\mathsf{a}\mathsf{b}}^{p_1p_2}(u,1)$ in the single folded limit. We present the full result below.
\begin{keyeqn}
\begin{align}
\label{eq_IPPresult}
\wt{\mathcal I}_{\pm\pm}^{p_1p_2}(u,1)
=&~\FR{\pm\mathrm{i} e^{\mp\mathrm{i} {p}_{12}\pi/2}}{2^{7/2+p_2}\pi^{1/2}}\Gamma\left[\begin{matrix} \fr52+p_2-\mathrm{i}\wt\nu,\fr52+p_2+\mathrm{i}\wt\nu\\
3+p_2\end{matrix}\right]\Big[e^{\pi\wt\nu}{\mathcal Y}_\pm^{p_1}(u)+e^{-\pi\wt\nu}{\mathcal Y}_\mp^{p_1}(u)\Big]\nonumber\\
&+\FR{e^{\mp \mathrm{i} p_{12}\pi/2}\Gamma(5+p_{12})u^{5+p_{12}} }{2^{5+p_{12}}[(\fr52+p_2)^2+\wt\nu^2]}
{}_3\mathrm F_2
\left[\begin{matrix} 1,3+p_2,5+p_{12}\\\fr72+p_2-\mathrm{i}\wt\nu,\fr72+p_2+\mathrm{i}\wt\nu\end{matrix}\middle|u\right],\\
\label{eq_IPMresult}
\wt{\mathcal I}_{\pm\mp}^{p_1p_2}(u,1)
=&~\FR{e^{\mp\mathrm{i}\bar{p}_{12}\pi/2}}{2^{7/2+p_2}\pi^{1/2}}\Gamma\left[\begin{matrix} \fr52+p_2-\mathrm{i}\wt\nu,\fr52+p_2+\mathrm{i}\wt\nu\\
3+p_2\end{matrix}\right]\Big[{\mathcal Y}_+^{p_1}(u)+{\mathcal Y}_-^{p_1}(u)\Big],
\end{align}
\end{keyeqn}
where $\mathcal Y_\pm^{p}(u)$ are a pair of independent solutions to the homogeneous bootstrap equation:
\begin{equation}
\mathcal Y_\pm^{p}(u) = 2^{\mp\mathrm{i}\wt\nu}\Big(\FR{u}2\Big)^{5/2+p\pm\mathrm{i}\wt\nu}
\Gamma\Big[\FR52+p\pm\mathrm{i}\wt\nu,\mp\mathrm{i}\wt\nu\Big]
{}_2\mathrm F_1\left[\begin{matrix}\fr52+p\pm\mathrm{i}\wt\nu,\fr12\pm\mathrm{i}\wt\nu\\1\pm2\mathrm{i}\wt\nu\end{matrix}\middle|u\right].
\end{equation}
With these results, one can easily generate explicit analytical expressions for a large class of three-point correlators, as discussed in Sec.\ \ref{sec_reduce}.
The result for the two-point function can be read off from the Hankel seed integral in the double folded limit. The results are:
\begin{keyeqn}
\begin{align}
\label{eq_H2ptResultPP}
&\wt{\mathcal I}_{\pm\pm}^{p_1p_2}(1,1)
=\FR{\pm\mathrm{i} e^{\mp\mathrm{i} {p}_{12}\pi/2}e^{-\pi\wt\nu}}{2^{5+p_{12}}}\Gamma\left[\begin{matrix}\fr52+p_1-\mathrm{i}\wt\nu,\fr52+p_1+\mathrm{i}\wt\nu, \fr52+p_2-\mathrm{i}\wt\nu,\fr52+p_2+\mathrm{i}\wt\nu\\
3+p_1,3+p_2\end{matrix}\right] \nonumber\\
&-\FR{e^{\mp \mathrm{i} p_{12}\pi/2}}{2^{5+p_{12}}}\Gamma\Big[5+p_{12},\fr52+p_1\pm\mathrm{i}\wt\nu,\fr52+p_2\pm\mathrm{i}\wt\nu\Big]
{}_3\wt{\mathrm{F}}_2
\left[\begin{matrix} 5+p_{12},\fr12\pm\mathrm{i}\wt\nu,1\\ \fr72+p_1\pm\mathrm{i}\wt\nu,\fr72+p_2\pm\mathrm{i}\wt\nu\end{matrix}\middle|1\right],\\
\label{eq_H2ptResultPM}
& \wt{\mathcal I}_{\pm\mp}^{p_1p_2}(1,1)
=\FR{e^{\mp\mathrm{i}\bar{p}_{12}\pi/2}}{2^{5+p_{12}} }\Gamma\left[\begin{matrix} \fr52+p_1-\mathrm{i}\wt\nu,\fr52+p_1+\mathrm{i}\wt\nu,\fr52+p_2-\mathrm{i}\wt\nu,\fr52+p_2+\mathrm{i}\wt\nu\\
3+p_1,3+p_2\end{matrix}\right].
\end{align}
\end{keyeqn}
\section{Bootstrapping Whittaker Seed Integrals}
\label{sec_whittaker}
Now we consider the case where the intermediate massive particle has a boost-breaking chemical potential. This scenario is particularly interesting for CC phenomenology due to its exponentially enhanced signal and its parity-violating nature. Mathematically, the mode functions for such fields involve the Whittaker W function, which requires a treatment separate from the previously considered Hankel type. In this section, we consider the Whittaker case and go through the same procedure as in the last section. We shall be brief, since most of the intermediate steps parallel those of the previous section.
\subsection{Whittaker seed integral and its bootstrap equation}
As introduced in Sec.\ \ref{sec_reduce}, the Whittaker seed integral (\ref{eq_SeedIntW}) is defined based on the SK integral for the tree-level four-point correlator with the exchange of a spin-1 particle of mass parameter $\wt\nu$ and helicity-dependent chemical potential $\wt\mu$. In parallel with the previous section, we first derive the bootstrap equations for this seed integral in the $r$-variables, and then change to the $u$-variables. Thus we rewrite the Whittaker seed integral (\ref{eq_SeedIntW}) in the following way:
\begin{align}
\label{eq_seedIntWr}
\mathcal{I}^{(h)p_1p_2}_{\mathsf{a}\mathsf{b}}(r_1,r_2)
= -\mathsf{a}\mathsf{b}\,r_1^{1+p_1}r_2^{1+p_2}\int_0^\infty {\mathrm{d}} z_1{\mathrm{d}} z_2\,
z_1^{p_1}z_2^{p_2} e^{-\mathrm{i}\mathsf{a} z_1-\mathrm{i}\mathsf{b} z_2}\wh D_{\mathsf{a}\mathsf{b}}^{(h)}(r_1z_1,r_2z_2).
\end{align}
In the seed integral above, $\wh D_{\mathsf{a}\mathsf{b}}^{(h)}(z_1,z_2)$ is again a hatted propagator, this time built from the helicity-$h$ component ($h=\pm 1$) of the massive spin-1 propagator $D_{\mathsf{a}\mathsf{b}}^{(h)}(k;\tau_1,\tau_2)$:
\begin{equation}
\wh D_{\mathsf{a}\mathsf{b}}^{(h)}(z_1,z_2)=k D_{\mathsf{a}\mathsf{b}}^{(h)}(k;\tau_1,\tau_2).
\end{equation}
Again, we use the dimensionless and positive variables $z_i=-k\tau_i$ $(i=1,2)$. Here the four Schwinger-Keldysh propagators $D_{\mathsf{a}\mathsf{b}}^{(h)}(k;\tau_1,\tau_2)$ are again related to the Wightman functions $D_>^{(h)}$ and $D_<^{(h)}=D_>^{(h)*}$, and the ``greater'' Wightman function is given by
\begin{align}
D^{(h)}_>(k;\tau_1,\tau_2)=&~ \FR{e^{-\pi h\wt\mu}}{2k}\mathrm{W}_{\mathrm{i} h\wt\mu,\mathrm{i}\wt\nu}(2\mathrm{i} k\tau_1)\mathrm{W}_{-\mathrm{i} h\wt\mu,\mathrm{i}\wt\nu}(-2\mathrm{i} k\tau_2).
\end{align}
Let us emphasize that, although we are building the Whittaker seed integral from the spin-1 propagators, this seed integral can be used to compute correlators involving higher-spin states. Very often, the boost-breaking chemical potential changes the dispersion of a spin-$s$ field in such a way that its highest or lowest helicity component ($h=\pm s$) is the most enhanced. For such states, the mode function is identical to that of the $h=\pm 1$ states of the spin-1 field up to a prefactor, and thus the above seed integral is perfectly applicable to these cases. One only needs to keep in mind that the mass parameter $\wt\nu$ is related to the mass $m$ of a spin-$s$ state via $\wt\nu=\sqrt{m^2-(s-1/2)^2}$.
The equations of motion satisfied by the propagator $D_{\mathsf{a}\mathsf{b}}^{(h)}(k;\tau_1,\tau_2)$ then lead to the following set of equations satisfied by the hatted propagators:
\begin{align}
&(r_1^2\partial_{r_1}^2+r_1^2z_1^2 +2h\wt\mu r_1z_1+m^2)\wh D_{\pm\mp}^{(h)}(r_1z_1,r_2z_2) = 0,\\
&(r_1^2\partial_{r_1}^2+r_1^2z_1^2 +2h\wt\mu r_1z_1+m^2)\wh D_{\pm\pm}^{(h)}(r_1z_1,r_2z_2) = \mp \mathrm{i} r_1r_2z_1z_2\delta(r_1z_1-r_2z_2).
\end{align}
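The mode functions indeed solve these equations: with $z=-k\tau$, the mode entering $D_>^{(h)}$ at the first vertex is $\mathrm W_{\mathrm{i} h\wt\mu,\mathrm{i}\wt\nu}(-2\mathrm{i} z)$, and one can check numerically that it is annihilated by $z^2\partial_z^2+z^2+2h\wt\mu z+m^2$ with $m^2=\wt\nu^2+1/4$ (the $s=1$ case of the mass relation quoted above). A sketch with \texttt{mpmath}; the values of $h$, $\wt\mu$, $\wt\nu$, and $z$ are arbitrary samples:

```python
from mpmath import mp, mpf, mpc, whitw, diff

mp.dps = 30

h, mu, nu = 1, mpf('1.2'), mpf('0.8')   # arbitrary sample values
m2 = nu**2 + mpf(1)/4                   # m^2 = nu^2 + (s - 1/2)^2 for s = 1

def f(z):
    # mode function W_{i h mu, i nu}(-2 i z), with z = -k tau > 0
    return whitw(mpc(0, h*mu), mpc(0, nu), mpc(0, -2)*z)

z0 = mpf('1.7')
residual = z0**2 * diff(f, z0, 2) + (z0**2 + 2*h*mu*z0 + m2) * f(z0)
eom_ok = abs(residual) < mpf('1e-10')
```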
Therefore, we insert the differential operator $r_1^2\partial_{r_1}^2+r_1^2z_1^2 +2h\wt\mu r_1z_1+m^2$ in front of the hatted propagator in the Whittaker seed integral (\ref{eq_seedIntWr}) to derive the bootstrap equation. On the one hand, the differential operator reduces the integral to either zero or a local term, which becomes the right-hand side of the bootstrap equation. On the other hand, we commute the differential operator through the integrals over $z_{1,2}$ with the help of (\ref{eq_zIBP}) and (\ref{eq_z2IBP}), ending up with a new differential operator acting on the whole seed integral, which forms the left-hand side of the bootstrap equation. The result is:
\begin{align}
&\mathcal{D}_{\pm,r_1}^{p_1} \mathcal I_{\pm\mp}^{(h)}(r_1,r_2)= 0,\\
&\mathcal{D}_{\pm,r_1}^{p_1} \mathcal I_{\pm\pm}^{(h)}(r_1,r_2)
=-e^{\mp\mathrm{i} p_{12}\pi/2}\Gamma(3+p_{12})\Big(\FR{r_1r_2}{r_1+r_2}\Big)^{3+p_{12}};\\
&\mathcal{D}_{\pm,r}^p\equiv (r^2-r^4)\partial_{r}^2-\Big[2(1+p)r\pm 2\mathrm{i} h\wt\mu r^2+2r^3\Big]\partial_{r}
+ \Big[\wt\nu^2+\fr{(3+2p)^2}4\Big].
\end{align}
Next we use the new variables $u_i = (2r_i)/(1+r_i)$ in place of $r_i$ $(i=1,2)$, in terms of which the seed integral is rewritten as
\begin{equation}
\wt{\mathcal I}_{\mathsf{a}\mathsf{b}}^{(h)p_1p_2}(u_1,u_2)\equiv \mathcal I_{\mathsf{a}\mathsf{b}}^{(h)p_1p_2}\big(r_1(u_1),r_2(u_2)\big)
\end{equation}
Then we can derive the following set of equations with respect to the $u$-variable:
\begin{align}
\label{eq_BTEV1}
&\mathcal D_{\pm,u_1}^{p_1}\wt{\mathcal I}_{\pm\mp}^{(h)}(u_1,u_2)= 0,\\
\label{eq_BTEV2}
&\mathcal D_{\pm,u_1}^{p_1}\wt{\mathcal I}_{\pm\pm}^{(h)}(u_1,u_2)
=-e^{\mp\mathrm{i} p_{12}\pi/2}\Gamma(3+p_{12})\Big(\FR{u_1u_2}{2(u_1+u_2-u_1u_2)}\Big)^{3+p_{12}};\\
\label{eq_Dpmu}
&\mathcal D_{\pm,u}^p = (u^2-u^3)\partial_u^2 - \Big[(2+2p)u+(\pm \mathrm{i} h\wt\mu-p)u^2\Big]\partial_u + \Big[\wt\nu^2+\big(p+\fr32\big)^2\Big].
\end{align}
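The change of variables can be verified symbolically: substituting $u=2r/(1+r)$ into $\mathcal D_{\pm,r}^p$ and collecting derivatives reproduces the coefficient functions in (\ref{eq_Dpmu}) exactly, while the constant term is untouched by the substitution. A sketch with \texttt{sympy}, written for the upper-sign operator:

```python
import sympy as sp

r, p, h, mu = sp.symbols('r p h mu')

u = 2*r/(1 + r)
up = sp.diff(u, r)       # du/dr
upp = sp.diff(u, r, 2)   # d^2u/dr^2

# coefficients of d_r^2 and d_r in D_{+,r}^p
A = r**2 - r**4
B = -(2*(1 + p)*r + 2*sp.I*h*mu*r**2 + 2*r**3)

# after the change of variables: A d_r^2 + B d_r = A u'^2 d_u^2 + (A u'' + B u') d_u;
# both differences below should simplify to zero
c2_diff = sp.simplify(A*up**2 - (u**2 - u**3))
c1_diff = sp.simplify(A*upp + B*up + ((2 + 2*p)*u + (sp.I*h*mu - p)*u**2))
```

The lower-sign operator follows by sending $h\wt\mu\rightarrow-h\wt\mu$.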
These are the bootstrap equations for the Whittaker seed integral, from which we shall bootstrap the three-point and two-point functions in the following subsections.
\subsection{Solving the bootstrap equation}
In parallel with the procedure in the last section, we now solve the bootstrap equations (\ref{eq_BTEV1}) and (\ref{eq_BTEV2}). Again, the solution to the homogeneous equation (\ref{eq_BTEV1}) can be written as a linear combination of products of independent solutions:
\begin{align}
\label{eq_IhPMpar}
\wt{\mathcal I}_{\pm\mp}^{(h)p_1p_2}(u_1,u_2)=\sum_{\mathsf{a},\mathsf{b}=\pm}\beta_{\pm\mp|\mathsf{a}\mathsf{b}}\,\mathcal{U}_{\pm|\mathsf{a}}^{p_1}(u_1)\,\mathcal{U}_{\mp|\mathsf{b}}^{p_2}(u_2),
\end{align}
with $\mathcal{U}^{p}_{\mathsf{a}|\mathsf{b}}(u)$ ($\mathsf{b}=\pm$) the two independent solutions to the following equations:
\begin{align}
\mathcal{D}_{\mathsf{a},u}^p\mathcal{U}^p_{\mathsf{a}|\mathsf{b}}(u)=0.
\end{align}
Here $\mathcal{D}_{\mathsf{a},u}^p$ are the differential operators in (\ref{eq_Dpmu}). Note that we have two distinct operators $\mathcal{D}_{\mathsf{a},u}^p$, labeled by $\mathsf{a}=\pm$. For each fixed choice of $\mathsf{a}=\pm$, there is a pair of independent solutions, labeled by $\mathsf{b}=\pm$, hence the notation $\mathcal{U}^p_{\mathsf{a}|\mathsf{b}}$ for the solutions. Explicitly, we can choose $\mathcal{U}^p_{\mathsf{a}|\mathsf{b}}$ to be:
\begin{equation}
\mathcal U_{\mathsf{a}|\mathsf{b}}^p(u) = \mathrm{i}\,\mathsf{a}\mathsf{b}\, 2^{\mathrm{i}\mathsf{a}\mathsf{b}\wt\nu} \pi \text{csch}(2\pi\wt\nu) \Big(\FR u2\Big)^{3/2+p+ \mathrm{i}\mathsf{a}\mathsf{b}\wt\nu} {}_2\mathcal F_1\left[\begin{matrix} \fr32+p+\mathrm{i}\mathsf{a}\mathsf{b}\wt\nu,\fr12+\mathrm{i}\mathsf{a} h\wt\mu+\mathrm{i}\mathsf{a}\mathsf{b}\wt\nu\\1+2\mathrm{i}\mathsf{a}\mathsf{b}\wt\nu \end{matrix}\middle| u \right].
\end{equation}
Here, again, we have included some numerical factors for later convenience. The coefficients $\beta_{\pm\mp|\mathsf{a}\mathsf{b}}$ in (\ref{eq_IhPMpar}) should be determined by the boundary conditions in the squeezed limit, as will be detailed below.
For the inhomogeneous equation (\ref{eq_BTEV2}), the solution is a sum of a particular solution $\wt{\mathcal I}_{\pm\pm,\text{Inh}}^{(h)p_1p_2}(u_1,u_2)$ and a homogeneous solution which is again a linear combination of independent solutions:
\begin{align}
\wt{\mathcal I}_{\pm\pm}^{(h)p_1p_2}(u_1,u_2)=\wt{\mathcal I}_{\pm\pm,\text{Inh}}^{(h)p_1p_2}(u_1,u_2)+\sum_{\mathsf{a},\mathsf{b}=\pm}\beta_{\pm\pm|\mathsf{a}\mathsf{b}}\,\mathcal{U}^{p_1}_{\pm|\mathsf{a}}(u_1)\,\mathcal{U}^{p_2}_{\pm|\mathsf{b}}(u_2).
\end{align}
Below we will first find the particular solution $\wt{\mathcal I}_{\pm\pm,\text{Inh}}^{(h)p_1p_2}(u_1,u_2)$, and then determine the homogeneous solutions by matching the results in the squeezed limit.
\paragraph{Particular solution.}
Similar to the previous section, we do not pursue the most general form of the particular solution. Rather, we set $u_2=1$ and write $u_1=u$ in the inhomogeneous equation (\ref{eq_BTEV2}):
\begin{align}
&\mathcal D_{\pm,u}^{p_1} \wt{\mathcal I}^{(h)p_1p_2}_{\pm\pm,\text{Inh}}(u,1) = -e^{\mp\mathrm{i} p_{12}\pi/2}\Gamma(3+p_{12})\Big(\FR u2\Big)^{3+p_{12}}.
\end{align}
This equation can again be solved by trying the following series ansatz:
\begin{equation}
\wt{\mathcal I}^{(h)p_1p_2}_{\pm\pm,\text{Inh}}(u,1) = -2^{-3-p_{12}}e^{\mp\mathrm{i} p_{12}\pi/2}\Gamma(3+p_{12})
\sum_{n=0}^\infty \mathcal{V}_{n,\pm} u^{3+p_{12}+n}.
\end{equation}
The differential equation then leads to the following recursion relations for the series coefficients:
\begin{equation}
\mathcal{V}_{0,\pm} = \FR{1}{(\fr32+p_2)^2+\wt\nu^2},
\qquad
\mathcal{V}_{n+1,\pm} = \FR{(n+2+p_2\pm\mathrm{i} h\wt\mu)(n+3+p_{12})}{(n+\fr52+p_2)^2+\wt\nu^2} \mathcal{V}_{n,\pm},
\end{equation}
and the recursion relation can be solved directly, yielding the closed-form coefficients:
\begin{equation}
\mathcal{V}_{n,\pm} = \FR{(2+p_2\pm\mathrm{i} h\wt\mu)_n(3+p_{12})_n}{(p_2+\fr32-\mathrm{i}\wt\nu)_{n+1}(p_2+\fr32+\mathrm{i}\wt\nu)_{n+1}}.
\end{equation}
This shows that the series ansatz is again a standard hypergeometric series. Carrying out the summation, we get:
\begin{equation}
\wt{\mathcal I}^{(h)p_1p_2}_{\pm\pm,\text{Inh}}(u,1)=
-
\FR{e^{\mp\mathrm{i} p_{12}\pi/2}\Gamma(3+p_{12})u^{3+p_{12}}}{2^{3+p_{12}}[(\fr32+p_2)^2+\wt\nu^2]}
{}_3\mathrm F_2
\left[\begin{matrix} 1,2+p_2\pm\mathrm{i} h\wt\mu,3+p_{12}\\\fr52+p_2-\mathrm{i}\wt\nu,\fr52+p_2+\mathrm{i}\wt\nu\end{matrix}\middle|u\right].
\end{equation}
This completes our computation of the particular solution.
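As an independent check, one can apply the operator $\mathcal D_{+,u}^{p_1}$ of (\ref{eq_Dpmu}) numerically to the ${}_3\mathrm F_2$ above, with the phase and $\Gamma$ prefactor stripped, and recover the source term $u^{3+p_{12}}$. A sketch with \texttt{mpmath}, using arbitrary sample values:

```python
from mpmath import mp, mpf, mpc, hyper, diff

mp.dps = 30

p1, p2, h = 0, 0, 1                     # arbitrary sample values
mu, nu = mpf('1.2'), mpf('0.8')
p12 = p1 + p2
ihmu, inu = mpc(0, h*mu), mpc(0, nu)

def g(u):
    # particular solution with the overall factor
    # -e^{-i p12 pi/2} Gamma(3+p12) / 2^{3+p12} stripped (upper-sign branch)
    return (u**(3 + p12) / ((mpf(3)/2 + p2)**2 + nu**2)
            * hyper([1, 2 + p2 + ihmu, 3 + p12],
                    [mpf(5)/2 + p2 - inu, mpf(5)/2 + p2 + inu], u))

u0 = mpf('0.3')
# apply D_{+,u}^{p1} via numerical differentiation
lhs = ((u0**2 - u0**3) * diff(g, u0, 2)
       - ((2 + 2*p1)*u0 + (ihmu - p1)*u0**2) * diff(g, u0, 1)
       + (nu**2 + (p1 + mpf(3)/2)**2) * g(u0))
ode_ok = abs(lhs - u0**(3 + p12)) < mpf('1e-12')
```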
\paragraph{Homogeneous solutions.} To determine the homogeneous solutions in both $\wt{\mathcal{I}}_{\pm\mp}^{(h)p_1p_2}(u_1,u_2)$ and $\wt{\mathcal{I}}_{\pm\pm}^{(h)p_1p_2}(u_1,u_2)$, we again compute these integrals in the squeezed limit $u_1\ll u_2\ll 1$. Some details are given in App.\ \ref{app_whittaker}, and the results are shown in (\ref{eq_IhSqLimit}). By matching the results in the squeezed limit, we can determine the coefficients $\beta_{\mathsf{a}\mathsf{b}|\mathsf{c}\mathsf{d}}$:
\begin{align}
\label{eq_betaCoef1}
&\beta_{\pm\mp|++}=\beta_{\pm\mp|--}=\beta_{\pm\mp|+-}=\beta_{\pm\mp|-+}=\FR{e^{\mp\mathrm{i} \bar p_{12}\pi/2}e^{-\pi h\wt\mu}}{2\pi^2}(\cosh 2\pi\wt\mu + \cosh 2\pi\wt\nu),\\
\label{eq_betaCoef2}
&\beta_{\pm\pm|++}=\beta_{\pm\pm|+-}=\FR{\mp\mathrm{i} e^{\mp\mathrm{i} p_{12}\pi/2}e^{-\pi (h\wt\mu-\wt\nu)}\cosh[\pi(h\wt\mu+\wt\nu)]}{\pi \Gamma\big[\fr12\pm\mathrm{i} h\wt\mu-\mathrm{i}\wt\nu,\fr12\pm\mathrm{i} h\wt\mu+\mathrm{i}\wt\nu\big]},\\
\label{eq_betaCoef3}
&\beta_{\pm\pm|-+}=\beta_{\pm\pm|--}=\FR{\mp\mathrm{i} e^{\mp\mathrm{i} p_{12}\pi/2}e^{-\pi (h\wt\mu+\wt\nu)}\cosh[\pi(h\wt\mu-\wt\nu)]}{\pi \Gamma\big[\fr12\pm\mathrm{i} h\wt\mu-\mathrm{i}\wt\nu,\fr12\pm\mathrm{i} h\wt\mu+\mathrm{i}\wt\nu\big]}.
\end{align}
\subsection{Folded limit}
\paragraph{Single-folded limit.} Now we consider the folded limits of the Whittaker seed integrals. First, to obtain the three-point function, we take the single-folded limit $u_2\rightarrow 1$. As in the previous section, each of the solutions $\mathcal U_{\mathsf{a}|\mathsf{b}}^p(u)$ is singular when $u\rightarrow 1$. However, the singular terms must cancel out in the final expression, as a result of choosing the Bunch-Davies initial condition. Therefore, we only need to retain the finite parts of the solutions $\mathcal U_{\mathsf{a}|\mathsf{b}}^p(u)$ in the folded limit $u\rightarrow 1$. For notational simplicity, we define the finite part of $\mathcal U_{\mathsf{a}|\mathsf{b}}^p(1)$ as:
\begin{equation}
U_{\mathsf{a}|\mathsf{b}}^p \equiv\text{Fin}\Big\{\mathcal U_{\mathsf{a}|\mathsf{b}}^p(1)\Big\} =\FR{\mathrm{i}\mathsf{a}\mathsf{b}\pi}{2^{3/2+p}\sinh(2\pi\wt\nu)}
\Gamma\left[\begin{matrix}
\fr32+p+\mathrm{i}\mathsf{a}\mathsf{b}\wt\nu,\fr12+\mathrm{i}\mathsf{a} h\wt\mu+\mathrm{i}\mathsf{a}\mathsf{b}\wt\nu,-1-p-\mathrm{i}\mathsf{a} h\wt\mu\\-\fr12-p+\mathrm{i}\mathsf{a}\mathsf{b}\wt\nu,\fr12-\mathrm{i}\mathsf{a} h\wt\mu+\mathrm{i}\mathsf{a}\mathsf{b}\wt\nu
\end{matrix}\right].
\end{equation}
Looking at the structure of the coefficients $\beta_{\mathsf{a}\mathsf{b}|\mathsf{c}\mathsf{d}}$ in (\ref{eq_betaCoef1})--(\ref{eq_betaCoef3}), we see that the $u_2$-dependence in the full homogeneous solution always appears in the combination ${\mathcal U}_{\pm|+}^{p_2}(u_2) + {\mathcal U}_{\pm|-}^{p_2}(u_2)$. It can be shown that the singular terms in ${\mathcal U}_{\mathsf{a}|\mathsf{b}}^{p_2}(u_2)$ always cancel in this combination, so we only need the following finite result:
\begin{align}
& U_{\pm|+}^p + U_{\pm|-}^p
= \FR{\pm\mathrm{i}\pi\Gamma(-1-p\mp\mathrm{i} h\wt\mu)}{2^{3/2+p}\sinh(2\pi\wt\nu)}
\Gamma\left[\begin{matrix}
\fr32+p\pm\mathrm{i}\wt\nu,\fr12\pm\mathrm{i} h\wt\mu\pm\mathrm{i}\wt\nu\\
-\fr12-p\pm\mathrm{i}\wt\nu,\fr12\mp\mathrm{i} h\wt\mu\pm\mathrm{i}\wt\nu
\end{matrix}\right]
+(\wt\nu\rightarrow-\wt\nu) .
\end{align}
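The claimed cancellation can be tested numerically along the same lines as in the Hankel case. Writing the dressed hypergeometric function as ${}_2\mathcal F_1[a,b;c|u]=\Gamma[a,b;c]\,{}_2\mathrm F_1(a,b;c;u)$ (the convention consistent with the finite parts quoted above), the combination $\mathcal U_{+|+}^p(u)+\mathcal U_{+|-}^p(u)$ evaluated slightly below $u=1$ approaches $U_{+|+}^p+U_{+|-}^p$. A sketch with \texttt{mpmath} and arbitrary sample values:

```python
from mpmath import mp, mpf, mpc, gamma, hyp2f1, pi, sinh

mp.dps = 50

p, h = 0, 1
mu, nu = mpf('1.2'), mpf('0.8')          # arbitrary sample values

def U(a, b, u):
    # U_{a|b}^p(u), with the dressed 2F1 written out via explicit Gamma factors
    s, t = mpc(0, a*b*nu), mpc(0, a*h*mu)
    aa, bb, cc = mpf(3)/2 + p + s, mpf(1)/2 + t + s, 1 + 2*s
    return (mpc(0, 1)*a*b * 2**s * pi / sinh(2*pi*nu) * (u/2)**(mpf(3)/2 + p + s)
            * gamma(aa) * gamma(bb) / gamma(cc) * hyp2f1(aa, bb, cc, u))

def U_fin(a, b):
    # finite part of U_{a|b}^p at u = 1, as quoted in the text
    s, t = mpc(0, a*b*nu), mpc(0, a*h*mu)
    return (mpc(0, 1)*a*b * pi / (2**(mpf(3)/2 + p) * sinh(2*pi*nu))
            * gamma(mpf(3)/2 + p + s) * gamma(mpf(1)/2 + t + s) * gamma(-1 - p - t)
            / (gamma(-mpf(1)/2 - p + s) * gamma(mpf(1)/2 - t + s)))

u = 1 - mpf('1e-10')
combo = U(1, 1, u) + U(1, -1, u)         # divergent channels cancel between b = +1, -1
folded_ok = abs(combo - (U_fin(1, 1) + U_fin(1, -1))) < mpf('1e-6')
```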
Then, the single-folded limit of the Whittaker seed integral can be written in the following way:
\begin{align}
\label{eq_Ih3ptU1}
\wt{\mathcal I}_{\pm\mp}^{(h)p_1p_2}(u,1)=&~\beta_{\pm\mp|++}\Big(U_{\mp|+}^{p_2}+U_{\mp|-}^{p_2}\Big)\Big[\,\mathcal{U}_{\pm|+}^{p_1}(u)+\mathcal{U}_{\pm|-}^{p_1}(u)\Big],\\
\label{eq_Ih3ptU2}
\wt{\mathcal I}_{\pm\pm}^{(h)p_1p_2}(u,1)=&~\Big(U_{\mp|+}^{p_2}+U_{\mp|-}^{p_2}\Big)\Big[\beta_{\pm\pm|++}\,\mathcal{U}_{\pm|+}^{p_1}(u)+\beta_{\pm\pm|-+}\mathcal{U}_{\pm|-}^{p_1}(u)\Big]+\wt{\mathcal I}^{(h)p_1p_2}_{\pm\pm,\text{Inh}}(u,1).
\end{align}
All quantities in these expressions have already been calculated; a simple substitution then gives the explicit expressions, which we present at the end of this section.
\paragraph{Double-folded limit.}
Next we consider the double folded limit, which gives an expression for the two-point function. Formally, this limit is reached by taking $u\rightarrow 1$ in (\ref{eq_Ih3ptU1}) and (\ref{eq_Ih3ptU2}). That is:
\begin{align}
\label{eq_Ih2ptU1}
\wt{\mathcal I}_{\pm\mp}^{(h)p_1p_2}(1,1)=&~\beta_{\pm\mp|++}\Big(U_{\pm|+}^{p_1}+U_{\pm|-}^{p_1}\Big)\Big(U_{\mp|+}^{p_2}+U_{\mp|-}^{p_2}\Big),\\
\label{eq_Ih2ptU2}
\wt{\mathcal I}_{\pm\pm}^{(h)p_1p_2}(1,1)=&~\Big(\beta_{\pm\pm|++}U_{\pm|+}^{p_1}+\beta_{\pm\pm|-+}U_{\pm|-}^{p_1}\Big)\Big(U_{\mp|+}^{p_2}+U_{\mp|-}^{p_2}\Big)+\text{Fin}\Big\{\wt{\mathcal I}^{(h)p_1p_2}_{\pm\pm,\text{Inh}}(1,1)\Big\}.
\end{align}
To get the finite part of the particular solution in the last line, we again use the formula (\ref{eq_3F2finite}) with the following assignment of parameters:
\begin{align}
a=3+p_{12},\quad b=2+p_2+\mathrm{i} h\wt\mu,\quad c=1, \quad
d=\FR52+p_2-\mathrm{i}\wt\nu,\quad e=\FR52+p_2+\mathrm{i}\wt\nu.
\end{align}
Therefore, the finite part of the background is:
\begin{align}
\text{Fin}\Big\{\wt{\mathcal I}^{(h)p_1p_2}_{\pm\pm,\text{Inh}}(1,1)\Big\}
=&-\FR{e^{\mp\mathrm{i} p_{12}\pi/2}\Gamma(3+p_{12})}{2^{3+p_{12}}}
{}_3\mathcal F_2
\left[\begin{matrix} \fr32+p_2-\mathrm{i}\wt\nu,\fr32+p_2+\mathrm{i}\wt\nu,-1-p_1-\mathrm{i} h\wt\mu\\2+p_2-\mathrm{i} h\wt\mu,1-\bar p_{12}\end{matrix}\middle|1\right].
\end{align}
This expression can again be put into a better form by using (\ref{eq_3F2transf}) with the following assignment of parameters:
\begin{equation}
a=\FR32+p_2-\mathrm{i}\wt\nu,\quad
b=-1-p_1-\mathrm{i} h\wt\mu,\quad
d=2+p_2-\mathrm{i} h\wt\mu,\quad
e=1-p_1+p_2.
\end{equation}
The resulting expression will be summarized in the next subsection.
\subsection{Summary}
Now we summarize the results of this section by presenting the explicit expressions for the Whittaker seed integral in the folded limits. These include the three-point function $\wt{\mathcal I}_{\mathsf{a}\mathsf{b}}^{(h)p_1p_2}(u,1)$ and the two-point function $\wt{\mathcal I}_{\mathsf{a}\mathsf{b}}^{(h)p_1p_2}(1,1)$. For the three-point function, we have:
\begin{keyeqn}
\begin{align}
\label{eq_IhPMresult}
\wt{\mathcal I}_{\pm\mp}^{(h)p_1p_2}(u,1)
=&~\bigg\{\FR{\pm\mathrm{i} e^{\mp\mathrm{i} \bar p_{12}\pi/2}e^{-\pi h\wt\mu}}{2^{3/2+p_2}\sinh(2\pi\wt\nu)}
\Gamma\left[\begin{matrix} -1-p_2\pm\mathrm{i} h\wt\mu,\fr32+p_2\pm\mathrm{i}\wt\nu\\
\fr12\pm\mathrm{i} h\wt\mu-\mathrm{i}\wt\nu,\fr12\pm\mathrm{i} h\wt\mu+\mathrm{i}\wt\nu,-\fr12-p_2\pm\mathrm{i}\wt\nu\end{matrix}\right]\nonumber\\
&~\times\cosh\big[\pi(h\wt\mu+\wt\nu)\big]+(\wt\nu\rightarrow-\wt\nu)
\bigg\} \Big[\,\mathcal U_{\pm|+}^{p_1}(u)+\mathcal U_{\pm|-}^{p_1}(u)\Big],\\
\label{eq_IhPPresult}
\wt{\mathcal I}_{\pm\pm}^{(h)p_1p_2}(u,1)
=&~\bigg\{\FR{e^{\mp\mathrm{i} p_{12}\pi/2}e^{-\pi h\wt\mu}\cosh[\pi(h\wt\mu-\wt\nu)]}{2^{3/2+p_2}\pi\sinh(2\pi\wt\nu)}
\Gamma\left[\begin{matrix}
-1-p_2\mp\mathrm{i} h\wt\mu,\fr32+p_2\pm\mathrm{i}\wt\nu\\
-\fr12-p_2\pm\mathrm{i}\wt\nu
\end{matrix}\right]
+(\wt\nu\rightarrow-\wt\nu)
\bigg\}\nonumber\\
&\times \Big\{e^{\pi\wt\nu}\cosh\big[\pi(h\wt\mu+\wt\nu)\big]\mathcal U_{\pm|+}^{p_1}(u)+(\wt\nu\rightarrow-\wt\nu)\Big\}\nonumber\\
&~-\FR{e^{\mp\mathrm{i} p_{12}\pi/2}\Gamma(3+p_{12})u^{3+p_{12}}}{2^{3+p_{12}}[(\fr32+p_2)^2+\wt\nu^2]}
{}_3\mathrm F_2
\left[\begin{matrix} 1,2+p_2\pm\mathrm{i} h\wt\mu,3+p_{12}\\\fr52+p_2-\mathrm{i}\wt\nu,\fr52+p_2+\mathrm{i}\wt\nu\end{matrix}\middle|u\right].
\end{align}
\end{keyeqn}
Here again the momentum ratio $u=2k_3/k_{123}$, and $\mathcal{U}_{\mathsf{a}|\mathsf{b}}^p(u)$ are the independent solutions to the homogeneous bootstrap equations, whose explicit expressions are:
\begin{equation}
\mathcal U_{\mathsf{a}|\mathsf{b}}^p(u) = \mathrm{i}\,\mathsf{a}\mathsf{b}\, 2^{\mathrm{i}\mathsf{a}\mathsf{b}\wt\nu} \pi \text{csch}(2\pi\wt\nu) \Big(\FR u2\Big)^{3/2+p+ \mathrm{i}\mathsf{a}\mathsf{b}\wt\nu} {}_2\mathcal F_1\left[\begin{matrix} \fr32+p+\mathrm{i}\mathsf{a}\mathsf{b}\wt\nu,\fr12+\mathrm{i}\mathsf{a} h\wt\mu+\mathrm{i}\mathsf{a}\mathsf{b}\wt\nu\\1+2\mathrm{i}\mathsf{a}\mathsf{b}\wt\nu \end{matrix}\middle| u \right].
\end{equation}
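As a numerical sanity check of this expression (our own sketch with mpmath, not part of the text; the helper names `dressed_2f1` and `U` are ours), one can verify the conjugation property $\overline{\mathcal U_{\mathsf{a}|\mathsf{b}}^p(u)}=\mathcal U_{-\mathsf{a}|\mathsf{b}}^p(u)$, which holds for real $u$, $p$, $\wt\nu$ and $h\wt\mu$ since complex conjugation flips every explicit $\mathrm{i}$:

```python
from mpmath import mp, mpc, mpf, pi, csch, gamma, hyp2f1, conj

mp.dps = 30

def dressed_2f1(a, b, c, z):
    # dressed Gauss function: Gamma[a, b / c] * 2F1(a, b; c; z)
    return gamma(a)*gamma(b)/gamma(c) * hyp2f1(a, b, c, z)

def U(a, b, p, u, nu, hmu):
    # homogeneous solution U^p_{a|b}(u); a, b take the values +1/-1,
    # and hmu stands for the product h * mu-tilde
    i = mpc(0, 1)
    pre = i*a*b * mpf(2)**(i*a*b*nu) * pi * csch(2*pi*nu)
    return pre * (u/2)**(mpf(3)/2 + p + i*a*b*nu) * dressed_2f1(
        mpf(3)/2 + p + i*a*b*nu,
        mpf(1)/2 + i*a*hmu + i*a*b*nu,
        1 + 2*i*a*b*nu, u)

# sample point with real parameters (illustrative values only)
p, u, nu, hmu = mpf("-2"), mpf("0.3"), mpf("1.7"), mpf("0.9")
val = U(1, 1, p, u, nu, hmu)
```

This reality structure is what guarantees that the opposite-sign combinations of seed integrals assemble into real correlators.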
For the two-point function, we have
\begin{keyeqn}
\begin{align}
\label{eq_Ih2ptResultPM}
\mathcal I_{\pm\mp}^{(h)p_1p_2}(1,1) =&~\FR{e^{\mp\mathrm{i} \bar p_{12}\pi/2}e^{-\pi h\wt\mu}}{2^{3+p_{12}}}
\Gamma\left[\begin{matrix}\fr32+p_1-\mathrm{i}\wt\nu,\fr32+p_1+\mathrm{i}\wt\nu,\fr32+p_2-\mathrm{i}\wt\nu,\fr32+p_2+\mathrm{i}\wt\nu\\
2+p_1\pm\mathrm{i} h\wt\mu,2+p_2\mp\mathrm{i} h\wt\mu\end{matrix}\right],\\
\label{eq_Ih2ptResultPP}
\mathcal I_{\pm\pm}^{(h)p_1p_2}(1,1) =&~\FR{\mp\mathrm{i} e^{\mp\mathrm{i} p_{12}\pi/2}
}{2^{3+p_{12}}}\Gamma\Big[\FR32+p_1\pm\mathrm{i}\wt\nu,\FR32+p_2\pm\mathrm{i}\wt\nu\Big] \nonumber\\
&\times \bigg\{\FR{e^{-2\pi h\wt\mu}+e^{-2\pi\wt\nu}}{2\pi}\Gamma\left[\begin{matrix}\fr32+p_1\mp\mathrm{i}\wt\nu,\fr32+p_2\pm\mathrm{i}\wt\nu,\fr12\pm\mathrm{i} h\wt\mu-\mathrm{i}\wt\nu,\fr12\pm\mathrm{i} h\wt\mu+\mathrm{i}\wt\nu\\
2+p_1\pm\mathrm{i} h\wt\mu,2+p_2\pm\mathrm{i} h\wt\mu\end{matrix}\right]\nonumber\\
&\pm\mathrm{i}\Gamma\Big[3+p_{12}\Big]
{}_3\wt {\mathrm F}_2\left[\begin{matrix} 3+p_{12},\fr12\mp\mathrm{i} h\wt\mu\pm\mathrm{i}\wt\nu,1\\ \fr52+p_1\pm\mathrm{i}\wt\nu,\fr52+p_2\pm\mathrm{i}\wt\nu \end{matrix}\middle| 1 \right]\bigg\}.
\end{align}
\end{keyeqn}
\section{Conclusions and Outlook}
\label{sec_concl}
Inflation correlators play a central role in the study of Cosmological Collider physics, and they are also of great importance for a better understanding of quantum field theories in dS. Exact and analytical results for inflation correlators are useful for understanding the analytic structure of dS correlation functions in general, for phenomenological studies of CC physics, and also for efficient numerical implementation.
In this work, we have found exact and closed-form formulae for a wide range of two-point and three-point correlation functions of massless modes in an inflationary spacetime mediated by a single massive field at the tree level. As we have shown, tree-level correlators can be easily reduced to (combinations of) seed integrals, either Hankel-type or Whittaker-type. In particular, three/two-point functions correspond to the single/double-folded limit of seed integrals. We calculated the seed integrals using an improved bootstrap method. First, our starting point for deriving the bootstrap equations is the equation of motion for the massive propagator, rather than the bulk symmetry, so our method does not depend on full dS symmetry and also applies to dS-boost-breaking models. Second, we found it more convenient to express the bootstrap equations with variables $u_{1,2}$, since the inhomogeneous solution in the single-folded limit can then be summed to a generalized hypergeometric function. With the coefficients of homogeneous solutions appropriately determined, closed-form expressions of seed integrals in the single-folded limit are derived. It is then straightforward to go to the double-folded limit, where all the spurious divergences should cancel due to the Bunch-Davies initial condition.
Apart from building phenomenological models and constructing efficient templates for practical data analysis, our results also find wide applications for theoretical studies of inflation correlators.
On one hand, low-point tree-level correlators can be subgraphs of more complicated processes, and our expressions can be used as building blocks. This topic will be explored in detail in \cite{qin_box}, where the two-point correlators found in this work act as effective vertices in a loop diagram.
On the other hand, closed-form expressions explicitly show the analytic structure of the correlators and thus make the analytic continuation easier to perform. The analytic structure of inflation correlators encodes rich physical information, and can provide us with useful insights into the physical processes happening in the inflationary universe. They may also pave the way for efficient computation methods for more complicated correlation functions. We leave these topics for future studies.
\paragraph{Acknowledgments.} This work is supported by the National Key R\&D Program of China (2021YFC2203100), NSFC under Grant No.\ 12275146, an Open Research Fund of the Key Laboratory of Particle Astrophysics and Cosmology, Ministry of Education of China, and a Tsinghua University Initiative Scientific Research Program.
\newpage
\begin{appendix}
\section*{Appendix}
\section{Useful Formulae}
\label{app_formulae}
In this appendix, we list some of the definitions and formulae frequently used in the main text.
First, we
use the following shorthand notations for the products and fractions of the Euler $\Gamma$ function:
\begin{align}
\label{eq_GammaProd}
\Gamma\left[ z_1,\cdots,z_m \right]
\equiv&~ \Gamma(z_1)\cdots \Gamma(z_m) ,\\
\label{eq_GammaProd2}
\Gamma\left[\begin{matrix} z_1,\cdots,z_m \\w_1,\cdots, w_n\end{matrix}\right]
\equiv&~\FR{\Gamma(z_1)\cdots \Gamma(z_m)}{\Gamma(w_1)\cdots \Gamma(w_n)}.
\end{align}
We also use the Pochhammer symbol $(a)_n\equiv \Gamma(a+n)/\Gamma(a)$ in some expressions.
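These shorthands are straightforward to mirror numerically. The following sketch (ours, not from the text; `gamma_prod` is a hypothetical helper name) reproduces the ratio notation \eqref{eq_GammaProd2} and checks the Pochhammer identity $(a)_n=\Gamma(a+n)/\Gamma(a)$ with mpmath:

```python
from mpmath import mp, mpf, gamma, rf, fprod

mp.dps = 30

def gamma_prod(zs, ws=()):
    # the shorthand Gamma[z1,...,zm / w1,...,wn]: product of Gammas
    # in the numerator divided by product of Gammas in the denominator
    return fprod([gamma(z) for z in zs]) / fprod([gamma(w) for w in ws])

# Pochhammer symbol (a)_n = Gamma(a+n)/Gamma(a); rf is mpmath's rising factorial
a, n = mpf("0.75"), 6
poch = gamma_prod([a + n], [a])
```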
The closed-form expressions of two and three-point correlators often involve various types of (generalized) hypergeometric functions. The original (generalized) hypergeometric function is defined as:
\begin{align}
\label{eq_HGF}
{}_p\mathrm{F}_q\left[\begin{matrix} a_1,\cdots,a_p \\ b_1,\cdots,b_q \end{matrix} \middle| z \right]=\sum_{n=0}^\infty\FR{(a_1)_n\cdots (a_p)_n}{(b_1)_n\cdots (b_q)_n}\FR{z^n}{n!}.
\end{align}
We only encounter the case of $p=q+1$, where the above series converges within the disk $|z| < 1$.
At $z=1$, \eqref{eq_HGF} converges if and only if $\mathrm{Re}\, s > 0$, where the balance $s$ is defined by:
\begin{equation}
s = (b_1 + \cdots + b_q) - (a_1 + \cdots + a_p).
\end{equation}
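To make the convergence criterion concrete, here is a small numerical illustration (ours, using mpmath; the parameter values are arbitrary): for the choice below the balance is $s=1.8>0$, the series of ${}_3\mathrm F_2$ at $z=1$ converges, and its terms fall off like $n^{-1-s}$:

```python
from mpmath import mp, mpf, rf, factorial, nsum, inf

mp.dps = 25
a, b, c = mpf("0.5"), mpf("0.7"), mpf("1.1")
d, e = mpf("1.9"), mpf("2.2")
s = d + e - (a + b + c)          # balance s = 1.8 > 0, so z = 1 is allowed

def term(n):
    # n-th coefficient of the 3F2 series evaluated at z = 1
    return rf(a, n)*rf(b, n)*rf(c, n)/(rf(d, n)*rf(e, n))/factorial(n)

total = nsum(term, [0, inf])     # convergent sum since Re(s) > 0
ratio = term(4000)/term(2000)    # large-n terms scale like n^{-1-s}
```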
When considering the folded limit of the seed integrals, we shall usually take $z\rightarrow 1$ for several (generalized) hypergeometric functions. Typically, each of these functions diverges, but the divergent parts must cancel each other. After the cancellation, only the ``finite part'' of each (generalized) hypergeometric function remains. Below we list the finite part of the (generalized) hypergeometric functions involved as the argument $z\rightarrow 1$ \cite{doi:10.1137/0518089}:
\begin{align}
&\text{Fin}\bigg\{\lim_{z\rightarrow 1}{}_2\mathrm F_1 \left[\begin{matrix} a,b\\c\end{matrix} \middle| z\right]\bigg\} =
\Gamma\left[\begin{matrix} c,s\\ a+s,b+s\end{matrix}\right],\\
&\text{Fin}\bigg\{\lim_{z\rightarrow 1}{}_3\mathrm F_2 \left[\begin{matrix} a,b,c\\d,e\end{matrix} \middle| z\right]\bigg\} =
\Gamma\left[\begin{matrix} d,e,s\\ c,a+s,b+s\end{matrix}\right]
{}_3\mathrm F_2 \left[\begin{matrix} d-c,e-c,s\\a+s,b+s\end{matrix} \middle| 1\right],
\end{align}
both of which hold when the balance $s \notin\mathbb Z$ because of the factor $\Gamma(s)$. When $s$ is an integer, similar formulae can be derived by an appropriate limiting process. After some simplifications the expressions in the folded limit will apply for any value of $s$.
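The first of these relations can be checked numerically (a sketch of ours with mpmath, not from the text): for $-1<s<0$ the divergence of ${}_2\mathrm F_1$ as $z\to1$ is carried by the single term $\Gamma[c,-s/a,b]\,(1-z)^{s}$, and subtracting it leaves precisely the quoted finite part:

```python
from mpmath import mp, mpf, hyp2f1, gamma

mp.dps = 40
a, b, c = mpf("0.3"), mpf("0.7"), mpf("0.5")
s = c - a - b                    # balance s = -1/2: divergent case at z = 1

# predicted finite part Gamma[c, s / a+s, b+s]
fin = gamma(c)*gamma(s)/(gamma(a + s)*gamma(b + s))
# coefficient of the divergent piece (1-z)^s: Gamma[c, -s / a, b]
div = gamma(c)*gamma(-s)/(gamma(a)*gamma(b))

z = 1 - mpf("1e-12")
remainder = hyp2f1(a, b, c, z) - div*(1 - z)**s   # tends to fin as z -> 1
```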
We shall also use the regularized and dressed (generalized) hypergeometric functions to simplify expressions, which are respectively defined as the following:
\begin{equation}
\label{eq_RegF}
{}_p\wt{\mathrm{F}}_q\left[\begin{matrix}
a_1, \cdots, a_p \\
b_1, \cdots, b_q
\end{matrix}\middle|z\right]=\frac1{\Gamma\left[b_1, \cdots, b_q\right]}{}_p\mathrm{F}_q\left[\begin{matrix}
a_1, \cdots, a_p \\
b_1, \cdots, b_q
\end{matrix}\middle|z\right].
\end{equation}
\begin{equation}
\label{eq_DressedF}
{}_p\mathcal{F}_q\left[\begin{matrix}
a_1, \cdots, a_p \\
b_1, \cdots, b_q
\end{matrix}\middle|z\right]=\Gamma\left[\begin{matrix}
a_1, \cdots, a_p \\
b_1, \cdots, b_q
\end{matrix}\right]{}_p\mathrm{F}_q\left[\begin{matrix}
a_1, \cdots, a_p \\
b_1, \cdots, b_q
\end{matrix}\middle|z\right].
\end{equation}
\section{Squeezed limit results}
\label{app_squeezed}
In this appendix, we calculate the seed integrals of both Hankel-type \eqref{eq_SeedIntH} and Whittaker-type \eqref{eq_SeedIntW} in a particular squeezed limit, namely $u_1\ll u_2\ll 1$. The results will help us determine the coefficients of each possible combination of homogeneous solutions to the bootstrap equations.
The full results for seed integrals \eqref{eq_SeedIntH} and \eqref{eq_SeedIntW} have been calculated in \cite{Qin:2022fbv}, using the method of partial Mellin-Barnes representation. We can directly take the squeezed limits of these known expressions to obtain the leading order results. For concreteness, below we repeat a similar calculation, but in a rather brief way, and compute the squeezed limit results. Readers can refer to \cite{Qin:2022fbv} for more details. For convenience, we first calculate the seed integrals defined as \eqref{eq_seedHr} and \eqref{eq_seedIntWr} which depend on variables $r_{1,2}$, and then change to variables $u_{1,2}$. The hierarchical squeezed limit $u_1\ll u_2\ll 1$ is equivalent to $r_1 \ll r_2 \ll 1$. Also notice that we can identify $u_i\simeq 2r_i$ $(i=1,2)$ in the squeezed limit up to $\order{r_{1,2}}$ corrections.
\subsection{Hankel seed integral in squeezed limit}
\label{app_hankel}
The Mellin-Barnes representations for the ``less''/``greater'' Wightman functions are the following \cite{Qin:2022fbv}:
\begin{align}
D_\lessgtr (k;\tau_1,\tau_2) =&~ \FR{1}{4\pi} \int_{-\mathrm{i}\infty}^{\mathrm{i}\infty} \FR{{\mathrm{d}} s_1}{2\pi\mathrm{i}} \FR{{\mathrm{d}} s_2}{2\pi\mathrm{i}}\, e^{\mp\mathrm{i}\pi(s_1-s_2)}\Big(\FR k2\Big)^{-2s_{12}}(-\tau_1)^{-2s_1+3/2}(-\tau_2)^{-2s_2+3/2}\nonumber\\
&\times \Gamma\Big[s_1-\FR{\mathrm{i}\wt\nu}2,s_1+\FR{\mathrm{i}\wt\nu}2,s_2-\FR{\mathrm{i}\wt\nu}2,s_2+\FR{\mathrm{i}\wt\nu}2\Big].
\end{align}
Here and below we use shorthand $s_{12}=s_1+s_2$. The four SK propagators $D_{\mathsf{a}\mathsf{b}}$ are related by \eqref{eq_DScalarSame} and \eqref{eq_DScalarOpp}.
Furthermore, we split the same-sign propagators into two parts:
\begin{equation}
\label{eq_cuttingrule}
D_{\pm\pm}(k;\tau_1,\tau_2)
=D_\gtrless(k;\tau_1,\tau_2)+\Big[D_\lessgtr(k;\tau_1,\tau_2)-D_\gtrless(k;\tau_1,\tau_2)\Big]\theta(\tau_2-\tau_1)
\end{equation}
in the region $r_1<r_2$ (equivalently, $u_1<u_2$, and $k_{12}>k_{34}$).
The two parts will give rise to the factorized (F) part ${\mathcal I}_{\pm\pm,\text{F},>}^{p_1p_2}$ and time-ordered (TO) part ${\mathcal I}_{\pm\pm,\text{TO},>}^{p_1p_2}$ of the same-sign seed integrals $\mathcal I_{\pm\pm}(r_1,r_2)$, respectively. The subscript $>$ serves as a reminder that we take $r_1<r_2$ and hence $k_{12}>k_{34}$. Naively, we can split $D_{\pm\pm}$ in a different way, with the factorized part being $D_\lessgtr$. This choice will lead to an exchange of $r_1 \leftrightarrow r_2$ in the expressions \eqref{eq_ScaSqueezedOpp}, \eqref{eq_ScaSqueezedSameF} and \eqref{eq_ScaSqueezedSameTO}, and the time-ordered part will be non-analytic in the limit $r_1 \ll r_2$ and contribute even at the leading order. In fact, our choice \eqref{eq_cuttingrule} is the appropriate ``cutting rule'' in the region $r_1<r_2$, and it can be easily proved by the Mellin-Barnes representation. See \cite{Qin:2022fbv} for more details, and also \cite{Tong:2021wai} for an intuitive explanation in the bulk.
Now we insert the Mellin-Barnes representation for the four SK propagators into the definition of seed integral \eqref{eq_seedHr}, complete the trivialized time integrals, and then obtain:
\begin{align}
\label{eq_ScaSqueezedOpp}
\mathcal{I}_{\pm\mp}^{p_1p_2} =&~ \FR{1}{4\pi}e^{\mp\mathrm{i}\bar p_{12}\pi/2}r_1^{5/2+p_1}r_2^{5/2+p_2}\int_{-\mathrm{i}\infty}^{\mathrm{i}\infty}\FR{{\mathrm{d}} s_1}{2\pi\mathrm{i}}\FR{{\mathrm{d}} s_2}{2\pi\mathrm{i}}\,
\Big(\FR{r_1}2\Big)^{-2s_1}\Big(\FR{r_2}2\Big)^{-2s_2}\nonumber\\
&\times \Gamma\Big[\fr52+p_1-2s_1,\fr52+p_2-2s_2,s_1-\fr{\mathrm{i}\wt\nu}2,s_1+\fr{\mathrm{i}\wt\nu}2,s_2-\fr{\mathrm{i}\wt\nu}2,s_2+\fr{\mathrm{i}\wt\nu}2\Big],\\
\label{eq_ScaSqueezedSameF}
{\mathcal I}_{\pm\pm,\text{F},>}^{p_1p_2}=&~\FR{1}{4\pi}e^{\mp\mathrm{i} p_{12}\pi/2}r_1^{5/2+p_1}r_2^{5/2+p_2} \int_{-\mathrm{i}\infty}^{\mathrm{i}\infty}\FR{{\mathrm{d}} s_1}{2\pi\mathrm{i}}\FR{{\mathrm{d}} s_2}{2\pi\mathrm{i}}\,(\pm \mathrm{i} e^{\pm 2\mathrm{i}\pi s_1})
\Big(\FR{r_1}2\Big)^{-2s_1}\Big(\FR{r_2}2\Big)^{-2s_2}\nonumber\\
&\times \Gamma\Big[\fr52+p_1-2s_1,\fr52+p_2-2s_2,s_1-\fr{\mathrm{i}\wt\nu}2,s_1+\fr{\mathrm{i}\wt\nu}2,s_2-\fr{\mathrm{i}\wt\nu}2,s_2+\fr{\mathrm{i}\wt\nu}2\Big],\\
\label{eq_ScaSqueezedSameTO}
{\mathcal I}_{\pm\pm,\text{TO},>}^{p_1p_2}
=&~ \FR{1}{4\pi }e^{\mp\mathrm{i}\pi(p_1+p_2)/2}r_1^{5+p_{12}}\int_{-\mathrm{i}\infty}^{\mathrm{i}\infty}\FR{{\mathrm{d}} s_1}{2\pi\mathrm{i}}\FR{{\mathrm{d}} s_2}{2\pi\mathrm{i}}\,(\mp \mathrm{i} e^{\mp 2\mathrm{i}\pi s_1} \pm \mathrm{i} e^{\pm 2\mathrm{i}\pi s_2})
\Big(\FR{r_1}2\Big)^{-2s_{12}}\nonumber\\
&\times \Gamma\Big[\fr52+p_2-2s_2,5+p_{12}-2s_{12},s_1-\fr{\mathrm{i}\wt\nu}2,s_1+\fr{\mathrm{i}\wt\nu}2,s_2-\fr{\mathrm{i}\wt\nu}2,s_2+\fr{\mathrm{i}\wt\nu}2\Big]\nonumber\\
&\times {}_2\wt{\mathrm{F}}_1\left[\begin{matrix} \fr52+p_2-2s_2,5+p_{12}-2s_{12}\\\fr72+p_2-2s_2\end{matrix}\middle|\,-\FR{r_1}{r_2}\right].
\end{align}
The last step is to complete the above Barnes-type integrals using the residue theorem. Since we focus on the region $r_1<r_2<1$, we should close the contour from the left, and sum over residues at the left poles for both $s_1$ and $s_2$:
\begin{equation}
s_i = -n_i \mp \mathrm{i}\FR{\wt\nu}2, \qquad n_i \in \mathbb N,\quad i=1,2,
\end{equation}
then we can obtain the full result.
However, we only need the leading order result in the squeezed limit $r_1 \ll r_2 \ll 1$, which obviously corresponds to the poles with $n_{1,2}=0$. Furthermore, we find both $\mathcal I_{\pm\mp}^{p_1p_2}$ and $\mathcal I_{\pm\pm,\text{F},>}^{p_1p_2}$ are of order $\mathcal O(r_1^{5/2+p_1}r_2^{5/2+p_2})$, but the time-ordered integral $\mathcal I_{\pm\pm,\text{TO},>}^{p_1p_2}$ is of order $\mathcal O(r_1^{5+p_{12}})$ and thus can be neglected since we are working in the region $r_1 \ll r_2$ and we assume $\text{Re}\,p_{2}>-5/2$ as explained below (\ref{eq_SeedIntH}). So finally we can obtain the leading order results by summing the residues at four poles $s_i = \mp \mathrm{i} \wt\nu/2$:
\begin{align}
\lim_{r_1\ll r_2\ll 1} \mathcal I_{\pm\mp}^{p_1p_2}(r_1,r_2) =&~
\FR{e^{\mp\mathrm{i} \bar p_{12}\pi/2}}{4\pi} \Big[\wh{\mathcal Y}_+^{p_1}(r_1)+\wh{\mathcal Y}_-^{p_1}(r_1)\Big]\Big[\wh{\mathcal Y}_+^{p_2}(r_2)+\wh{\mathcal Y}_-^{p_2}(r_2)\Big],\\
\lim_{r_1\ll r_2\ll 1} \mathcal I_{\pm\pm}^{p_1p_2}(r_1,r_2) =&~\FR{\pm\mathrm{i} e^{\mp\mathrm{i} p_{12}\pi/2}}{4\pi}\Big[e^{\pi\wt\nu}\wh{\mathcal Y}_\pm^{p_1}(r_1)+e^{-\pi\wt\nu}\wh{\mathcal Y}_{\mp}^{p_1}(r_1)\Big]\Big[\wh{\mathcal Y}_+^{p_2}(r_2)+\wh{\mathcal Y}_-^{p_2}(r_2)\Big],
\end{align}
where
\begin{equation}
\wh{\mathcal Y}_\pm^{p}(r) = 2^{\mp\mathrm{i}\wt\nu} r^{5/2+p\pm\mathrm{i}\wt\nu}\Gamma\Big[\FR52+p\pm\mathrm{i}\wt\nu,\mp\mathrm{i}\wt\nu\Big].
\end{equation}
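As a quick check (ours, with mpmath; the helper name `Yhat` is an assumption), $\wh{\mathcal Y}_-^p(r)$ is the complex conjugate of $\wh{\mathcal Y}_+^p(r)$ for real $r$, $p$ and $\wt\nu$, so the combination $\wh{\mathcal Y}_+^p+\wh{\mathcal Y}_-^p$ entering the opposite-sign result is real:

```python
from mpmath import mp, mpc, mpf, gamma, im

mp.dps = 30

def Yhat(sign, p, r, nu):
    # \hat{Y}^p_{sign}(r) with sign = +1 or -1
    i = mpc(0, 1)
    return mpf(2)**(-i*sign*nu) * r**(mpf(5)/2 + p + i*sign*nu) \
        * gamma(mpf(5)/2 + p + i*sign*nu) * gamma(-i*sign*nu)

# illustrative real parameters
p, r, nu = mpf("-2"), mpf("0.01"), mpf("1.3")
total = Yhat(1, p, r, nu) + Yhat(-1, p, r, nu)   # combination entering I_{+-}
```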
One can straightforwardly write down the above results in terms of variables $u_{1,2}$:
\begin{align}
\lim_{u_1\ll u_2\ll 1}\wt{\mathcal I}^{p_1p_2}_{\pm\mp}(u_1,u_2)
=&~\FR{e^{\mp\mathrm{i}\bar{p}_{12}\pi/2}}{4\pi}\Big[\wt{\mathcal Y}_+^{p_1}(u_1)+\wt{\mathcal Y}_-^{p_1}(u_1)\Big]\Big[\wt{\mathcal Y}_+^{p_2}(u_2)+\wt{\mathcal Y}_-^{p_2}(u_2)\Big],\\
\lim_{u_1\ll u_2\ll 1}\wt{\mathcal I}^{p_1p_2}_{\pm\pm}(u_1,u_2)
=&~\FR{\pm\mathrm{i} e^{\mp\mathrm{i} p_{12}\pi/2}}{4\pi}\Big[e^{\pi\wt\nu}\wt{\mathcal Y}_\pm^{p_1}(u_1)+e^{-\pi\wt\nu}\wt{\mathcal Y}_{\mp}^{p_1}(u_1)\Big]\Big[\wt{\mathcal Y}_+^{p_2}(u_2)+\wt{\mathcal Y}_-^{p_2}(u_2)\Big],
\end{align}
where
\begin{align}
\wt{\mathcal Y}_\pm^p(u)= 2^{\mp \mathrm{i}\wt\nu} \Big(\FR{u}{2}\Big)^{5/2+p\pm\mathrm{i}\wt\nu}\Gamma\Big[\FR52+p\pm\mathrm{i}\wt\nu,\mp\mathrm{i}\wt\nu\Big].
\end{align}
\subsection{Whittaker seed integral in squeezed limit}
\label{app_whittaker}
The calculation for the Whittaker seed integral is basically the same as in the previous case, except that there are different choices of Mellin-Barnes representations for the propagators. We find that the simplest choices are the following \cite{Qin:2022fbv}:
For the opposite-sign propagators $D_{\pm\mp}^{(h)p_1p_2} = D_{\lessgtr}^{(h)p_1p_2}$, we use:
\begin{align}
D_{\lessgtr}^{(h)}(k;\tau_1,\tau_2)=&~
\FR{e^{-h\pi\wt\mu}}{2\pi^2}(\cosh2\pi\wt\mu+\cosh2\pi\wt\nu)e^{\pm\mathrm{i} k(\tau_1-\tau_2)}\nonumber\\
&\times\int_{-\mathrm{i}\infty}^{\mathrm{i}\infty} \FR{{\mathrm{d}} s_1}{2\pi\mathrm{i}}\FR{{\mathrm{d}} s_2}{2\pi\mathrm{i}}\,
e^{\mp\mathrm{i}\pi(s_1-s_2)/2}(2k_s)^{-s_{12}}(-\tau_1)^{-s_1+1/2}(-\tau_2)^{-s_2+1/2}\nonumber\\
&\times \Gamma\Big[-s_1+\fr12\pm \mathrm{i} h\wt\mu,-s_2+\fr12\mp \mathrm{i} h\wt\mu,s_1-\mathrm{i}\wt\nu,s_1+\mathrm{i}\wt\nu,s_2-\mathrm{i}\wt\nu,s_2+\mathrm{i}\wt\nu\Big],
\end{align}
and for the same-sign propagators $D_{\pm\pm}^{(h)p_1p_2}$, we again do the split:
\begin{equation}
D_{\pm\pm}^{(h)}(k;\tau_1,\tau_2)
=D_\gtrless(k;\tau_1,\tau_2)+\Big[D_\lessgtr(k;\tau_1,\tau_2)-D_\gtrless(k;\tau_1,\tau_2)\Big]\theta(\tau_2-\tau_1),
\end{equation}
and use another representation:
\begin{align}
D_{\lessgtr}^{(h)}(k;\tau_1,\tau_2)=&~
\FR{e^{-h\pi\wt\mu}}{\pi\Gamma[\fr12-\mathrm{i}\wt\nu\mp\mathrm{i} h\wt\mu,\fr12+\mathrm{i}\wt\nu\mp\mathrm{i} h\wt\mu]}e^{\mp\mathrm{i} k(\tau_1+\tau_2)}\nonumber\\
&\times\int_{-\mathrm{i}\infty}^{\mathrm{i}\infty} \FR{{\mathrm{d}} s_1}{2\pi\mathrm{i}}\FR{{\mathrm{d}} s_2}{2\pi\mathrm{i}}\,
e^{\mp\mathrm{i}\pi(s_1-s_2)/2}\cos\pi(s_1\pm\mathrm{i} h\wt\mu)(2k_s)^{-s_{12}}(-\tau_1)^{-s_1+1/2}(-\tau_2)^{-s_2+1/2}\nonumber\\
&\times \Gamma\Big[-s_1+\FR12\mp \mathrm{i} h\wt\mu,-s_2+\FR12\mp \mathrm{i} h\wt\mu,s_1-\mathrm{i}\wt\nu,s_1+\mathrm{i}\wt\nu,s_2-\mathrm{i}\wt\nu,s_2+\mathrm{i}\wt\nu\Big].
\end{align}
Similar to the Hankel case, we insert the above Mellin-Barnes representation into the Whittaker seed integral \eqref{eq_seedIntWr}. After integrating out $\tau_{1,2}$, we make use of the residue theorem to compute the Barnes-type integral over $s_{1,2}$, with the left poles:
\begin{equation}
s_i = -n_i \mp \mathrm{i}\wt\nu,\qquad n_i\in\mathbb N, \quad i=1,2.
\end{equation}
Again, the leading order result in the squeezed limit comes from the residues at $s_i = \mp \mathrm{i}\wt\nu$, and the time-ordered integral is negligible since $r_1\ll r_2$. The results can be summarized collectively in the following form:
\begin{equation}
\lim_{r_1\ll r_2\ll 1} \mathcal I_{\mathsf{a}\mathsf{b}}^{(h)p_1p_2}(r_1,r_2) = \sum_{\mathsf{c},\mathsf{d} = \pm} \beta_{\mathsf{a}\mathsf{b}|\mathsf{c}\mathsf{d}}\, \wh {\mathcal U}_{\mathsf{a}|\mathsf{c}}^{p_1}(r_1)\wh {\mathcal U}_{\mathsf{b}|\mathsf{d}}^{p_2}(r_2),
\end{equation}
where $\beta_{\mathsf{a}\mathsf{b}|\mathsf{c}\mathsf{d}}$ are defined in \eqref{eq_betaCoef1}, \eqref{eq_betaCoef2}, \eqref{eq_betaCoef3}, and
\begin{equation}
\wh {\mathcal U}_{\mathsf{a}|\mathsf{b}}^p(r) = \mathrm{i}\,\mathsf{a}\mathsf{b}\, 2^{\mathrm{i}\mathsf{a}\mathsf{b}\wt\nu} \pi \text{csch}(2\pi\wt\nu) \,r^{3/2+p+ \mathrm{i}\mathsf{a}\mathsf{b}\wt\nu}\Gamma\Big[\begin{matrix} \fr32+p+\mathrm{i}\mathsf{a}\mathsf{b}\wt\nu,\fr12+\mathrm{i}\mathsf{a} h\wt\mu+\mathrm{i}\mathsf{a}\mathsf{b}\wt\nu\\1+2\mathrm{i}\mathsf{a}\mathsf{b}\wt\nu \end{matrix}\Big].
\end{equation}
One can then turn to the expression in $u$-variables:
\begin{equation}
\label{eq_IhSqLimit}
\lim_{u_1\ll u_2\ll 1} \mathcal I_{\mathsf{a}\mathsf{b}}^{(h)p_1p_2}(u_1,u_2) = \sum_{\mathsf{c},\mathsf{d} = \pm} \beta_{\mathsf{a}\mathsf{b}|\mathsf{c}\mathsf{d}}\, \wt {\mathcal U}_{\mathsf{a}|\mathsf{c}}^{p_1}(u_1)\wt {\mathcal U}_{\mathsf{b}|\mathsf{d}}^{p_2}(u_2),
\end{equation}
where
\begin{equation}
\wt {\mathcal U}_{\mathsf{a}|\mathsf{b}}^p(u) = \mathrm{i}\,\mathsf{a}\mathsf{b}\, 2^{\mathrm{i}\mathsf{a}\mathsf{b}\wt\nu} \pi \text{csch}(2\pi\wt\nu) \Big(\FR u2\Big)^{3/2+p+ \mathrm{i}\mathsf{a}\mathsf{b}\wt\nu}\Gamma\Big[\begin{matrix} \fr32+p+\mathrm{i}\mathsf{a}\mathsf{b}\wt\nu,\fr12+\mathrm{i}\mathsf{a} h\wt\mu+\mathrm{i}\mathsf{a}\mathsf{b}\wt\nu\\1+2\mathrm{i}\mathsf{a}\mathsf{b}\wt\nu \end{matrix}\Big].
\end{equation}
\end{appendix}
\newpage
\section{\textbf{Introduction}}
In Ref. \cite{epjc2014}, a five-dimensional Einstein--Chern--Simons gravity was studied, whose action $S=S_{g}+S_{M}$ is composed of a gravitational sector and a matter sector, where the gravitational sector is given by a particular Chern--Simons gravity action \cite{PLB2009} instead of the Einstein--Hilbert action, and where the matter sector is given by the so-called perfect fluid.
The corresponding Chern--Simons Lagrangian of Ref. \cite{PLB2009} is a Lagrangian for the so-called $\mathfrak{B}$ algebra, whose generators $\left\{ J_{ab},P_{a},Z_{ab},Z_{a}\right\} $ satisfy the commutation relations given in the first equation of Ref. \cite{epjc2014}. This Lagrangian can be constructed from the one-form gauge connection
\begin{equation}
\boldsymbol{A}=\frac{1}{2}\omega ^{ab}\boldsymbol{J}_{ab}+\frac{1}{l}e^{a}\boldsymbol{P}_{a}+\frac{1}{2}k^{ab}\boldsymbol{Z}_{ab}+\frac{1}{l}h^{a}\boldsymbol{Z}_{a},
\end{equation}
and the two-form curvature
\begin{equation}
\boldsymbol{F}=\frac{1}{2}R^{ab}\boldsymbol{J}_{ab}+\frac{1}{l}T^{a}\boldsymbol{P}_{a}+\frac{1}{2}K^{ab}\boldsymbol{Z}_{ab}+\frac{1}{l}H^{a}\boldsymbol{Z}_{a},
\end{equation}
where $T^{a}=\mathrm{D}_{\omega }e^{a}$, $R^{ab}=\mathrm{d}\omega ^{ab}+\omega _{\text{ \ }c}^{a}\omega ^{cb}$, $H^{a}=\mathrm{D}_{\omega }h^{a}+k_{\text{ \ }b}^{a}e^{b}$, $K^{ab}=\mathrm{D}_{\omega }k^{ab}+\frac{1}{l^{2}}e^{a}e^{b}$ are the corresponding ``curvatures''. In fact, using the extended Cartan homotopy formula \cite{bzumino,salg3}, and integrating by parts, we find that the five-dimensional Chern--Simons Lagrangian for the $\mathfrak{B}$ algebra is given by \cite{epjc2014}
\begin{eqnarray}
L_{\mathrm{ChS}}^{\left( 5\right) } &=&\alpha _{1}l^{2}\epsilon _{abcde}R^{ab}R^{cd}e^{e}+\alpha _{3}\epsilon _{abcde}\left( \frac{2}{3}R^{ab}e^{c}e^{d}e^{e}+2l^{2}k^{ab}R^{cd}T^{e}+l^{2}R^{ab}R^{cd}h^{e}\right)  \notag \\
&&+\mathrm{d}B_{\text{EChS}}^{(4)}, \label{1a}
\end{eqnarray}
where the surface term $B_{\text{EChS}}^{(4)}$ is given by
\begin{align}
B_{\text{EChS}}^{(4)}& =\alpha _{1}l^{2}\epsilon _{abcde}e^{a}\omega ^{bc}\left( \frac{2}{3}\mathrm{d}\omega ^{de}+\frac{1}{2}\omega _{\text{ \ }f}^{d}\omega ^{fe}\right)  \notag \\
& \quad +\alpha _{3}\epsilon _{abcde}\left[ l^{2}\left( h^{a}\omega ^{bc}+k^{ab}e^{c}\right) \left( \frac{2}{3}\mathrm{d}\omega ^{de}+\frac{1}{2}\omega _{\text{ \ }f}^{d}\omega ^{fe}\right) \right.  \notag \\
& \qquad \qquad \qquad \left. +l^{2}k^{ab}\omega ^{cd}\left( \frac{2}{3}\mathrm{d}e^{e}+\frac{1}{2}\omega _{\text{ \ }f}^{d}e^{e}\right) +\frac{1}{6}e^{a}e^{b}e^{c}\omega ^{de}\right] .
\end{align}
In the above-mentioned reference \cite{epjc2014}, and also in Ref. \cite{gomez}, it was shown that:
$(i)$ The field equations can be obtained from the Lagrangian $L=L_{\mathrm{ChS}}^{(5)}+\kappa L_{M}$, where $L_{M}=L_{M}(e^{a},h^{a},\omega ^{ab})$ is the matter Lagrangian and $\kappa $ is a coupling constant related to the effective Newton's constant. In fact, the variation of the Lagrangian (\ref{1a}) with respect to the dynamical fields, namely the vielbein $e^{a}$, the spin connection $\omega ^{ab}$, $h^{a}$ and $k^{ab}$, leads to the following field equations
\begin{align}
\varepsilon _{abcde}R^{ab}e^{c}e^{d}& =4\kappa _{5}\left( \frac{\delta L_{M}}{\delta e^{e}}+\alpha \frac{\delta L_{M}}{\delta h^{e}}\right) , \label{9} \\
\frac{\delta L_{M}}{\delta h^{e}}& =\frac{l^{2}}{8\kappa _{5}}\varepsilon _{abcde}R^{ab}R^{cd}, \label{10} \\
\varepsilon _{abcde}R^{cd}D_{\omega }h^{e}& =0, \label{11}
\end{align}
where we have imposed the conditions $T^{a}=0$, $k^{ab}=0$ and $\delta L_{M}/\delta \omega ^{ab}=0$, and where $\kappa _{5}=\kappa /8\alpha _{3}$ and $\alpha =-\alpha _{1}/\alpha _{3}$. Note that equation (\ref{9}) is analogous to Einstein's equation, where $\delta L_{M}/\delta h^{a}$ corresponds to the energy-momentum tensor for the field $h^{a}$.
In the case where equations (\ref{9})--(\ref{11}) satisfy the cosmological principle and the ordinary matter is negligible compared to the dark energy, we find that the corresponding FLRW equations take the form
\begin{align}
6\left( \frac{{\dot{a}}^{2}+k}{a^{2}}\right) & =\kappa _{5}\alpha \rho ^{(h)}, \label{eqz06} \\
3\left[ \frac{\ddot{a}}{a}+\left( \frac{{\dot{a}}^{2}+k}{a^{2}}\right) \right] & =-\kappa _{5}\alpha p^{(h)}, \label{eqz07} \\
\frac{3l^{2}}{\kappa _{5}}\left( \frac{{\dot{a}}^{2}+k}{a^{2}}\right) ^{2}& =\rho ^{(h)}, \label{eqz08} \\
\frac{3l^{2}}{\kappa _{5}}\frac{\ddot{a}}{a}\left( \frac{{\dot{a}}^{2}+k}{a^{2}}\right) & =-p^{(h)}, \label{eqz09} \\
\left( \frac{{\dot{a}}^{2}+k}{a^{2}}\right) \left[ (h-h(0))\frac{\dot{a}}{a}+\dot{h}\right] & =0. \label{eqz10}
\end{align}
These field equations were completely solved in Ref. \cite{epjc2014} for the era of dark energy, where it was found that the field $h^{a}$ behaves in a way similar to a cosmological constant.
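It is worth noting, as a quick consistency check of ours (not drawn from \cite{epjc2014}), that combining (\ref{eqz06}) with (\ref{eqz08}) eliminates $({\dot{a}}^{2}+k)/a^{2}$ and already fixes the density to a constant:
\begin{equation}
\rho ^{(h)}=\frac{3l^{2}}{\kappa _{5}}\left( \frac{\kappa _{5}\alpha }{6}\,\rho ^{(h)}\right) ^{2}\quad \Longrightarrow \quad \rho ^{(h)}=\frac{12}{\kappa _{5}\alpha ^{2}l^{2}}=\text{const}.
\end{equation}
Moreover, for solutions with $({\dot{a}}^{2}+k)/a^{2}=H_{0}^{2}$ constant one also has $\ddot{a}/a=H_{0}^{2}$, so (\ref{eqz08}) and (\ref{eqz09}) give $p^{(h)}=-\rho ^{(h)}$, i.e., a cosmological-constant equation of state.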
$\left( ii\right) $ Equations (\ref{eqz06})--(\ref{eqz10}) have solutions that describe an accelerated expansion for the three possible cosmological models of the universe, namely spherical expansion $\left( k=1\right) $, flat expansion $\left( k=0\right) $ and hyperbolic expansion $\left( k=-1\right) $, when the constant $\alpha $ is greater than zero. This means that the FRW--Einstein--Chern--Simons field equations have, as one of their solutions, a universe in accelerated expansion. This result allows us to conjecture that these solutions are compatible with the era of dark energy and that the energy-momentum tensor for the field $h^{a}$ corresponds to a form of positive cosmological constant.
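The accelerated solutions referred to above can be checked symbolically. The following sketch (ours, not from the paper) assumes the standard de Sitter-like scale factors $a\propto \cosh $, $\exp $, $\sinh $ for $k=1,0,-1$, which solve $({\dot{a}}^{2}+k)/a^{2}=H_{0}^{2}$ with ${\ddot{a}}>0$:

```python
import sympy as sp

t = sp.symbols("t", real=True)
H0 = sp.symbols("H_0", positive=True)

# candidate scale factors for k = +1, 0, -1 (standard de Sitter slicings)
scale = {1: sp.cosh(H0*t)/H0, 0: sp.exp(H0*t), -1: sp.sinh(H0*t)/H0}

for k, a in scale.items():
    # constant-curvature Friedmann equation: (a'^2 + k)/a^2 = H0^2
    assert sp.simplify(sp.diff(a, t)**2 + k - H0**2*a**2) == 0
    # accelerated expansion: a'' = H0^2 * a > 0
    assert sp.simplify(sp.diff(a, t, 2) - H0**2*a) == 0
```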
In summary, in Refs. \cite{epjc2014,gomez} the implications of replacing the Einstein--Hilbert action by the Einstein--Chern--Simons action in $S=S_{g}+S_{M}$ were studied for the cosmological evolution of a Friedmann--Lema\^{\i}tre--Robertson--Walker (FLRW) metric. In the case where the matter action $S_{M}$ is the action for a perfect fluid, it was found that the FRW--Einstein--Chern--Simons field equations have solutions that describe an accelerated expansion for the three possible cosmological models of the universe.
On the other hand, in Ref. \cite{salg1} it was found that the non-relativistic limit of the Einstein--Chern--Simons gravity action is given by the so-called Newton--Chern--Simons gravity action. This action is invariant under the so-called non-relativistic algebra $\mathcal{G}\mathfrak{B}_{_{5}}$, which can be obtained as the non-relativistic limit of the generalized Poincar\'{e} algebra $\mathfrak{B}_{_{5}}$.
One of the purposes of this article is to find the non-relativistic limit of the results found in Refs. \cite{epjc2014,gomez}, i.e., some cosmological solutions for the field equations that can be obtained from the Newton--Chern--Simons action studied in Ref. \cite{salg1}.
This paper is organized as follows: In Section $2$ we obtain the field equations for the Lagrangian $L=L_{\mathrm{ChS}}^{(5)}+\kappa L_{M}$, where $L_{\mathrm{ChS}}^{(5)}$ is the Newton--Chern--Simons Lagrangian and $L_{M}$ is the corresponding matter Lagrangian. These field equations correspond to the non-relativistic limit of the field equations studied in Refs. \cite{epjc2014,gomez}. In Section $3$ we find the field equations for a Newton--Chern--Simons cosmology. In Section $4$ it is shown that the Newton--Chern--Simons cosmology is a sort of analogue of the projectable version of the Ho\v{r}ava--Lifshitz theory in $(3+1)$ dimensions, although one of the terms is not present. Solutions and their asymptotic limits are found, which show interesting properties. In particular, a phantom solution with a future singularity reminiscent of a Little Big Rip future singularity is obtained. Finally, a brief review of the adiabaticity of the cosmic evolution is made.
\section{\textbf{Newton--Chern--Simons gravity}}
In this section we will give a brief review of the so-called Newton--Chern--Simons gravity. The non-relativistic algebra $\mathcal{G}\mathfrak{B}_{_{5}}$ has the following commutation relations \cite{salg1} (see also \cite{salg2}):
\begin{align}
\lbrack J_{ij},J_{kl}]& =\eta _{kj}J_{il}+\eta _{lj}J_{ki}-\eta _{ki}J_{jl}-\eta _{li}J_{kj}, \notag \\
\lbrack J_{ij},K_{k}]& =\eta _{jk}K_{i}-\eta _{ik}K_{j},\text{ \ \ }[K_{i},P_{j}]=-\delta _{ij}M, \notag \\
\lbrack J_{ij},P_{k}]& =\eta _{jk}P_{i}-\eta _{ik}P_{j},\text{ \ }[K_{i},H]=-P_{i}, \notag \\
\lbrack J_{ij},Z_{kl}]& =\eta _{kj}Z_{il}+\eta _{lj}Z_{ki}-\eta _{ki}Z_{jl}-\eta _{li}Z_{kj}, \notag \\
\lbrack J_{ij},Z_{k0}]& =\eta _{jk}Z_{i0}-\eta _{ik}Z_{j0},\text{ \ \ }[K_{i},Z_{j}]=-\delta _{ij}N, \notag \\
\lbrack Z_{ij},K_{k}]& =\eta _{jk}Z_{i0}-\eta _{ik}Z_{j0},\text{ \ }[K_{i},Z_{0}]=-Z_{i}, \notag \\
\lbrack J_{ij},Z_{k}]& =\eta _{jk}Z_{i}-\eta _{ik}Z_{j},\text{ \ \ }[Z_{i0},P_{j}]=-\delta _{ij}N, \notag \\
\lbrack Z_{ij},P_{k}]& =\eta _{jk}Z_{i}-\eta _{ik}Z_{j},\text{ \ }[Z_{i0},H]=-Z_{i}, \notag \\
\lbrack P_{i},H]& =\text{ }Z_{i0}. \label{gb5}
\end{align}
The one-form gauge connection $A$ valued in the $\mathcal{G}\mathfrak{B}_{_{5}}$ algebra is given by
\begin{align}
A& =\frac{c}{l}\tau H+\frac{1}{l}e^{i}P_{i}+\frac{c}{l}\hat{\tau}Z_{0}+\frac{1}{l}h^{i}Z_{i}+\frac{1}{cl}mM+\frac{1}{cl}nN \notag \\
& +\frac{1}{c}\omega ^{i}K_{i}+\frac{1}{c}k^{i}Z_{i0}+\frac{1}{2}\omega ^{ij}J_{ij}+\frac{1}{2}k^{ij}Z_{ij},
\end{align}
where $l$ and $c$ are parameters with dimensions of length and velocity, respectively. The corresponding two-form curvature $F=dA+AA$ is then given by \cite{salg1}
\begin{align}
F& =\frac{c}{l}R(H)H+\frac{1}{l}R^{i}(P_{i})P_{i}+\frac{c}{l}R(Z_{0})Z_{0}+\frac{1}{l}R^{i}(Z_{i})Z_{i}+\frac{1}{cl}R(M)M \notag \\
& +\frac{1}{cl}R(N)N+\frac{1}{c}R^{i}\left( K_{i}\right) K_{i}+\frac{1}{c}R^{i}\left( Z_{i0}\right) Z_{i0}+\frac{1}{2}R^{ij}\left( J_{ij}\right) J_{ij}+\frac{1}{2}R^{ij}\left( Z_{ij}\right) Z_{ij},
\end{align}
\newline
where
\begin{align}
R(H)& =d\tau ,\text{ }R^{i}(P_{i})=T^{i}-\omega ^{i}\tau , \notag \\
R(Z_{0})& =d\hat{\tau},\text{ \ }R(M)=dm-\omega ^{i}e_{i}, \notag \\
R^{i}(Z_{i})& =Dh^{i}-\omega ^{i}\hat{\tau}-k^{i}\tau +k_{\,\,j}^{i}e^{j}, \notag \\
R(N)& =dn-\omega ^{i}h_{i}-k^{i}e_{i}, \notag \\
R^{i}(Z_{i0})& =Dk^{i}+\nu ^{2}e^{i}\tau +k_{\,\,j}^{i}\omega ^{j},\text{ \ }R^{i}(K_{i})=D\omega ^{i}, \notag \\
R^{ij}(J_{ij})& =R^{ij},\text{ \ \ }R^{ij}(Z_{ij})=Dk^{ij},
\end{align}
with $\nu =c/l$, $T^{i}=de^{i}+\omega ^{ij}e_{j}$ and $R^{ij}=d\omega
^{ij}+\omega _{\,\,k}^{i}\omega ^{kj}$.\newline
From the gauge connection transformation for $A$, $\delta A=d\Lambda +\left[
A,\Lambda \right] $, with
\begin{align}
\Lambda & =\frac{v}{l}\zeta ^{0}H+\frac{1}{l}\zeta ^{i}P_{i}+\frac{v}{l}\rho
^{0}Z_{0}+\frac{1}{l}\rho ^{i}Z_{i}+\frac{1}{vl}\sigma M+\frac{1}{vl}\gamma N
\notag \\
& +\frac{1}{v}\lambda ^{i}K_{i}+\frac{1}{v}\chi ^{i}Z_{i0}+\frac{1}{2}\lambda ^{ij}J_{ij}+\frac{1}{2}\chi ^{ij}Z_{ij},
\end{align}
it is straightforward to find the variations of the different gauge fields \cite{salg1}
\begin{align}
\delta \tau & =d\zeta ^{0}\text{, \ }\delta e^{i}=D\zeta ^{i}-\omega
^{i}\zeta ^{0}-\lambda ^{ij}e_{j}+\tau \lambda ^{i}, \notag \\
\delta h^{i}& =D\rho ^{i}-\omega ^{i}\rho ^{0}-\lambda
^{ij}h_{j}+h^{0}\lambda ^{i}+k^{ij}\zeta _{j}-k^{i}\zeta ^{0}-\chi
^{ij}e_{j}+\tau \chi ^{i}, \notag \\
\delta m& =d\sigma -\omega ^{i}\zeta _{i}+e^{i}\lambda _{i}\text{, \ }\delta
\omega ^{i}=D\lambda ^{i}-\lambda ^{ij}\omega _{j}, \notag \\
\delta n& =d\gamma -k^{i}\zeta _{i}+h^{i}\lambda _{i}-\omega ^{i}\rho
_{i}+e^{i}\chi _{i}\text{,\ }\delta h^{0}=d\rho ^{0}, \notag \\
\delta k^{i}& =D\chi ^{i}-\lambda ^{ij}k_{j}-\chi ^{ij}\omega
_{j}+k^{ij}\lambda _{j}+e^{i}\zeta ^{0}-\zeta ^{i}\tau ,\text{ \ }\delta
\omega ^{ij}=D\lambda ^{ij}, \notag \\
\delta k^{ij}& =D\chi ^{ij}+k_{\,\,k}^{i}\lambda ^{kj}+k_{\,\,k}^{j}\lambda
^{ik}, \label{campos}
\end{align}
where the derivative $D$ is covariant with respect to the $J$-transformations.
From (\ref{campos}) we can see that only the gauge fields $e_{\mu }^{\,\,i}$, $\tau _{\mu }$, $m_{\mu }$, $h_{\mu }^{\,\,i}$, $h_{\mu }^{0}$ and $n_{\mu}$ transform under $P$ and $H$ transformations. These are the fields that
should remain independent, while the remaining fields will depend upon
the aforementioned fields. This can be achieved with the following
constraints
\begin{align}
R(H)& =d\tau =0,\text{ \ }R^{i}(P_{i})=T^{i}-\omega ^{i}\tau =0, \notag \\
R(M)& =dm-\omega ^{i}e_{i}=0,\text{ \ }R(Z_{0})=dh^{0}=0, \notag \\
R^{i}(Z_{i})& =Dh^{i}-\omega ^{i}h^{0}-k^{i}\tau +k_{j}^{i}e^{j}=0, \notag
\\
R(N)& =dn-\omega ^{i}h_{i}-k^{i}e_{i}=0. \label{constraints}
\end{align}
Using the subspaces separation method introduced in Ref. \cite{salg3}, it was
found that, except for surface terms, the so-called Newton--Chern--Simons
Lagrangian is given by
\begin{align}
L_{\mathrm{NRChS}}& =\alpha _{1}\varepsilon _{ijkl}\left(
-2R^{ij}T^{k}\omega ^{l}-\frac{4}{3}R^{ij}\omega ^{k}\omega ^{l}\tau
+2R^{ij}D\omega ^{k}e^{l}-R^{ij}R^{kl}m\right) \notag \\
& \,\,\,\,+\alpha _{3}\varepsilon _{ijkl}\left( \frac{4}{3}\nu
^{2}R^{ij}e^{k}e^{l}\tau -2R^{ij}Dh^{k}\omega ^{l}-\frac{4}{3}R^{ij}k^{k}\omega ^{l}\tau -\frac{4}{3}R^{ij}\omega ^{k}\omega ^{l}\hat{\tau}\right. \notag \\
& \left. +2R^{ij}D\omega ^{k}h^{l}-\frac{4}{3}Dk^{ij}T^{k}\omega
^{l}-Dk^{ij}\omega ^{k}\omega ^{l}\tau -R^{ij}k^{kl}dm-\frac{2}{3}R^{ij}k^{kl}e^{m}\omega _{m}\right. \notag \\
& \,\,\,\,\left. -\frac{2}{3}R^{ij}\omega _{\,m}^{k}k^{ml}m-\frac{4}{3}k^{ij}T^{k}D\omega ^{l}-k^{ij}D\omega ^{k}\omega ^{l}\tau -2R^{ij}T^{k}k^{l}-\frac{4}{3}R^{ij}\omega ^{k}k^{l}\tau \right. \notag \\
& \,\,\,\,\left. +\frac{2}{3}R^{ij}k^{km}\omega _{m}e^{l}+\frac{2}{3}\omega
_{\,m}^{i}k^{jm}D\omega ^{k}e^{l}-R^{ij}R^{kl}n-2R^{ij}\omega
^{km}k_{m}e^{l}\right) , \label{lagrangeanob}
\end{align}
where $\nu ,\alpha _{1},\alpha _{3}$ are parameters of the theory and
$\kappa $ is a constant (for details see \cite{epjc2014,gomez,salg1}).
In the next section we derive the equations of motion
associated with the action whose Lagrangian is given by Eq. (\ref{lagrangeanob}).
\section{\textbf{Newton--Chern--Simons field equations}}
In the presence of matter, the complete Lagrangian of the theory is
\begin{equation}
L=\kappa L_{M}+L_{\mathrm{NRChS}}, \label{lagran1}
\end{equation}
where $L_{\mathrm{NRChS}}$ is the Newton--Chern--Simons Lagrangian given in
(\ref{lagrangeanob}) and $L_{M}$ is the corresponding matter Lagrangian.
The field equations obtained from the action (\ref{lagran1}) are given by
\begin{eqnarray}
\varepsilon _{ijkl}\left( -\frac{4}{3}\alpha _{1}R^{ij}\omega ^{k}\omega
^{l}+\frac{4}{3}\alpha _{3}\nu ^{2}R^{ij}e^{k}e^{l}\right) &=&\kappa \frac{\delta L_{M}}{\delta \tau }, \label{1} \\
\frac{4}{3}\alpha _{3}\varepsilon _{ijkl}R^{ij}\omega ^{k}\omega ^{l}
&=&-\kappa \frac{\delta L_{M}}{\delta \hat{\tau}}, \label{2} \\
2\varepsilon _{ijkl}\left( \alpha _{1}R^{ij}D\omega ^{k}-\frac{4}{3}\alpha
_{3}\nu ^{2}R^{ij}e^{k}\tau \right) &=&\kappa \frac{\delta L_{M}}{\delta
e^{l}}, \label{3} \\
2\alpha _{3}\varepsilon _{ijkl}R^{ij}D\omega ^{k} &=&\kappa \frac{\delta
L_{M}}{\delta h^{l}}, \label{4} \\
\alpha _{1}\varepsilon _{ijkl}R^{ij}R^{kl} &=&-\kappa \frac{\delta L_{M}}{\delta m}, \label{5} \\
\alpha _{3}\varepsilon _{ijkl}R^{ij}R^{kl} &=&-\kappa \frac{\delta L_{M}}{\delta n}, \label{6}
\end{eqnarray}
\begin{equation}
4\varepsilon _{ijkl}\left( \frac{2}{3}\alpha _{1}R^{ij}\omega ^{k}\tau
-\alpha _{1}R^{ij}T^{k}+\frac{2}{3}\alpha _{3}R^{ij}\omega ^{k}\hat{\tau}-\alpha _{3}R^{ij}Dh^{k}\right) =\kappa \frac{\delta L_{M}}{\delta \omega
^{l}}, \label{7}
\end{equation}
\begin{eqnarray}
&&\varepsilon _{ijkl}\left( -2\alpha _{1}R^{km}e_{m}\omega ^{l}-4\alpha
_{1}T^{k}D\omega ^{l}-\frac{4}{3}\alpha _{1}\omega ^{k}\omega ^{l}d\tau -\frac{8}{3}\alpha _{1}D\omega ^{k}\omega ^{l}\tau +2\alpha _{1}R^{km}\omega
_{m}e^{l}\right. \notag \\
&&-2\alpha _{1}R^{kl}dm+\frac{8}{3}\nu ^{2}\alpha _{3}T^{k}e^{l}\tau +\frac{4}{3}\alpha _{3}e^{k}e^{l}d\tau -2\alpha _{3}R^{km}h_{m}\omega ^{l}-4\alpha
_{3}Dh^{k}D\omega ^{l} \notag \\
&&\left. -\frac{4}{3}\alpha _{3}\omega ^{k}\omega ^{l}d\hat{\tau}-\frac{8}{3}\alpha _{3}D\omega ^{k}\omega ^{l}\hat{\tau}+2\alpha _{3}R^{km}\omega
_{m}h^{l}-\alpha _{3}R^{kl}dn\right) =\kappa \frac{\delta L_{M}}{\delta
\omega ^{ij}}, \label{8}
\end{eqnarray}
where we have imposed the conditions $k^{ij}=k^{i}=0$ and used
\begin{equation*}
T_{i}=\ast \left( \frac{\delta L_{M}}{\delta e^{i}}\right) ,\qquad
T_{0}=\ast \left( \frac{\delta L_{M}}{\delta \tau }\right) .
\end{equation*}
From (\ref{constraints}) and the Bianchi identities we find
\begin{equation}
DT^{i}=D\omega ^{i}\tau \,,\qquad R^{ij}e_{j}=D\omega ^{i}\tau \,,\qquad
R_{[\lambda \mu }^{\,\,\,\,\,\,\,\,ij}e_{\nu ]j}^{\,}=(D_{[\lambda
}\omega _{\mu })^{i}\tau _{\nu ]}, \label{bianchi}
\end{equation}
\begin{equation}
e_{[\lambda i}(D_{\mu }\omega _{\nu ]})^{i}=0. \label{bianchi2}
\end{equation}
For simplicity we will assume that the torsion vanishes. In this case
$\delta L_{M}/\delta \omega ^{ij}=0$. Using the constraints (\ref{constraints}) we find that (\ref{7}) takes the form
\begin{align}
4\varepsilon _{ijkl}\left( \frac{\alpha _{1}}{3}R^{ij}\omega ^{k}\tau +\frac{\alpha _{3}}{3}R^{ij}\omega ^{k}\hat{\tau}\right) & =0, \notag \\
\hat{\tau}& =-\frac{\alpha }{3}\tau . \label{tao}
\end{align}
Introducing (\ref{bianchi}) and (\ref{bianchi2}) into (\ref{8}) we have
\begin{eqnarray}
&&\varepsilon _{ijkl}\left( -\frac{10\alpha _{1}}{3}D\omega ^{k}\omega
^{l}\tau +2\alpha _{1}R^{km}\omega _{m}e^{l}-\frac{8}{3}\nu ^{2}\alpha
_{3}\omega ^{k}\tau e^{l}\tau -\alpha _{1}R^{kl}dm\right. \notag \\
&&\left. +2\alpha _{3}R^{km}\omega _{m}h^{l}-\frac{10}{3}\alpha _{3}D\omega
^{k}\omega ^{l}\hat{\tau}-\alpha _{3}R^{kl}dn\right) \notag \\
&=&0.
\end{eqnarray}
Since $\tau ^{2}=0$ we can write
\begin{equation}
\varepsilon _{ijkl}\left( -\frac{13\alpha _{1}}{3}D\omega ^{k}\omega
^{l}\tau -\frac{13}{3}\alpha _{3}D\omega ^{k}\omega ^{l}\hat{\tau}\right) =0,
\end{equation}
which means that this equation is satisfied identically and therefore the
space is a flat manifold, as can be seen from equations (\ref{5}) and (\ref{6}).
Introducing eqs. (\ref{2}) in (\ref{1}) and (\ref{4}) in (\ref{3}), we
obtain
\begin{eqnarray}
\varepsilon _{ijkl}R^{ij}e^{k}e^{l} &=&\frac{6}{\nu ^{2}}\left( k_{1}\frac{\delta L_{M}}{\delta \tau }-\alpha k_{2}\frac{\delta L_{M}}{\delta \hat{\tau}}\right) , \notag \\
\varepsilon _{ijkl}R^{ij}e^{k}\tau &=&\frac{3}{\nu ^{2}}\left( k_{1}\frac{\delta L_{M}}{\delta e^{l}}-\alpha k_{2}\frac{\delta L_{M}}{\delta h^{l}}\right) .
\end{eqnarray}
Taking into account that
\begin{eqnarray}
\ast (T_{0})\delta \tau &=&\det (g)\delta _{\delta }^{\sigma }T_{\;0}^{0}\,\delta \tau _{\;\sigma }^{\delta }dx^{5},
\label{coord1} \\
\varepsilon _{ijkl0}R^{ij}e^{k}e^{l}\delta \tau &=&2\det (g)\left( \delta
_{\delta }^{\sigma }R-2R_{\;\delta }^{\sigma }\right) \delta \tau _{\;\sigma }^{\delta }dx^{5}, \label{coord2} \\
\varepsilon _{ijkl0}R^{ij}e^{k}\tau \delta e^{l} &=&-2\det (g)\left( \delta
_{\delta }^{\sigma }R-2R_{\;\delta }^{\sigma }\right) \delta e_{\;\sigma }^{\delta }dx^{5}, \label{coord3}
\end{eqnarray}
and using $T_{\;00}^{(h)}=\rho ^{(h)}$, $T_{\;ii}^{(h)}=4p^{(h)}/c^{2}$, we find (with $R=0$)
\begin{equation}
R_{00}=\frac{3}{2\nu ^{2}}\left( k_{1}\rho ^{(e)}-\alpha k_{2}\rho
^{(h)}\right) , \label{r00}
\end{equation}
\begin{equation}
R_{00}=\frac{3}{c^{2}\nu ^{2}}\left( k_{1}p^{(e)}-\alpha k_{2}p^{(h)}\right)
, \label{r002}
\end{equation}
where (\ref{r00}) coincides with the result found in \cite{salg1},
\begin{equation}
\nabla ^{2}\phi =\frac{3}{2\nu ^{2}}(k_{1}\rho ^{(e)}-\alpha k_{2}\rho
^{(h)}),
\end{equation}
with $\nu =c/l$, $\beta _{1}=\beta _{2}=\kappa $, $k_{1}=\kappa /8\alpha
_{3}=8\pi G_{5}$, $k_{2}=\kappa /24\alpha _{3}$, $\alpha =3\alpha
_{1}/\alpha _{3}$, $k_{1}=3k_{2}$. From (\ref{r00}) and (\ref{r002}) we have
\begin{eqnarray}
2k_{1}p^{(e)}-2\alpha k_{2}p^{(h)} &=&\left( k_{1}\rho ^{(e)}-\alpha
k_{2}\rho ^{(h)}\right) c^{2}, \notag \\
2p^{(e)}-\frac{2\alpha }{3}p^{(h)} &=&\left( \rho ^{(e)}-\frac{\alpha }{3}\rho ^{(h)}\right) c^{2}.
\end{eqnarray}
Defining an effective density and an effective pressure as
\begin{eqnarray}
p &=&\frac{p^{(e)}}{2}-\frac{\alpha }{6}p^{(h)}, \notag \\
\rho &=&\frac{\rho ^{(e)}}{2}-\frac{\alpha }{6}\rho ^{(h)},
\end{eqnarray}
we find
\begin{equation}
p=\frac{\rho c^{2}}{2}, \label{estad}
\end{equation}
and from (\ref{tao}), we have
\begin{equation}
\rho ^{(h)}=-\frac{3}{\alpha }\rho ^{(e)}. \label{densidad}
\end{equation}
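The algebra above can be double-checked numerically. The following sketch (ours, not part of the paper) uses exact rationals to verify that, for any partial pressures and densities obeying the constraint $2p^{(e)}-\frac{2\alpha }{3}p^{(h)}=\left( \rho ^{(e)}-\frac{\alpha }{3}\rho ^{(h)}\right) c^{2}$, the effective quantities defined above satisfy (\ref{estad}):

```python
# Sketch (not from the paper): verify p = rho*c^2/2 with exact rationals.
# Given the constraint 2 p_e - (2 alpha/3) p_h = (rho_e - (alpha/3) rho_h) c^2,
# the effective quantities p = p_e/2 - (alpha/6) p_h and
# rho = rho_e/2 - (alpha/6) rho_h obey p = rho c^2 / 2.
from fractions import Fraction as F

def check(rho_e, rho_h, p_h, alpha, c2):
    # solve the constraint for p_e
    p_e = F(1, 2) * (rho_e - alpha / 3 * rho_h) * c2 + alpha / 3 * p_h
    p = p_e / 2 - alpha / 6 * p_h
    rho = rho_e / 2 - alpha / 6 * rho_h
    return p == rho * c2 / 2

print(check(F(3), F(5), F(7), F(2), F(9)))   # True for any rational inputs
```

The check passes identically, independent of the values chosen, confirming that (\ref{estad}) is an algebraic consequence of the constraint rather than an extra assumption.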
From (\ref{r00}) and (\ref{r002}) we can see
\begin{eqnarray}
R_{00} &=&\frac{3}{4\nu ^{2}}\left( k_{1}\rho ^{(e)}-\alpha k_{2}\rho
^{(h)}+2k_{1}\frac{p^{(e)}}{c^{2}}-2\alpha k_{2}\frac{p^{(h)}}{c^{2}}\right)
, \notag \\
R_{00} &=&\nabla ^{2}\phi =\frac{3k_{1}}{2\nu ^{2}}\left( \rho +\frac{2p}{c^{2}}\right) . \label{poisson2}
\end{eqnarray}
On the other hand, the interaction between the fluids is described by the
following equations of state
\begin{eqnarray}
p^{(e)} &=&\omega ^{(e)}\rho ^{(e)}c^{2}, \notag \\
p^{(h)} &=&\omega ^{(h)}\rho ^{(h)}c^{2}=-\frac{3}{\alpha }\omega ^{(h)}\rho
^{(e)}c^{2},
\end{eqnarray}
\begin{eqnarray}
2k_{1}\omega ^{(e)}\rho ^{(e)}-2\alpha k_{2}\omega ^{(h)}\rho ^{(h)}
&=&k_{1}\rho ^{(e)}-\alpha k_{2}\rho ^{(h)}, \notag \\
2\left( k_{1}\omega ^{(e)}+3k_{2}\omega ^{(h)}\right) \rho ^{(e)} &=&\left(
k_{1}+3k_{2}\right) \rho ^{(e)},
\end{eqnarray}
\begin{eqnarray}
\omega ^{(h)} &=&\frac{\left( k_{1}+3k_{2}\right) }{6k_{2}}-\frac{k_{1}}{3k_{2}}\omega ^{(e)}, \notag \\
&=&1-\omega ^{(e)}. \label{rel}
\end{eqnarray}
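The last step uses $k_{1}=3k_{2}$. A one-line numerical sketch (ours, not part of the paper) confirms that the general expression for $\omega ^{(h)}$ then reduces to $1-\omega ^{(e)}$:

```python
# Sketch: with k1 = 3 k2, (k1+3k2)/(6 k2) - (k1/(3 k2)) w_e == 1 - w_e.
from fractions import Fraction as F

k2 = F(1)
k1 = 3 * k2

def omega_h(w_e):
    # general relation between the two barotropic indices
    return (k1 + 3 * k2) / (6 * k2) - k1 / (3 * k2) * w_e

for w_e in (F(0), F(1, 3), F(-1, 2), F(1)):
    assert omega_h(w_e) == 1 - w_e
print("omega_h = 1 - omega_e verified")
```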
In the next section we will study a possible non-relativistic version of the
results obtained in Ref. \cite{epjc2014}.
\section{\textbf{Newton--Chern--Simons cosmology}}
Following the formalism used in \cite{2}, we denote by $(t,x^{i})$ the
local coordinates, where $i=1,2,3,4$, and $\tau =dx^{0}$, $h=\delta
^{ij}\partial _{i}\otimes \partial _{j}$ are the temporal and spatial metrics,
respectively. The matter is modeled as an ideal fluid with velocity $u$,
which is a timelike unit vector. The vorticity $\Omega ^{\alpha \beta }$
and the (rate of) strain $\Theta ^{\alpha \beta }$ relative to a timelike
unit vector field $V$, where $\tau (V)=1$, i.e., $\tau _{\alpha }=g_{\alpha \beta
}V^{\beta }$, are given by
\begin{eqnarray}
\Omega ^{\alpha \beta } &=&\frac{1}{2}(u_{;\lambda }^{\alpha }h^{\lambda
\beta }-u_{;\lambda }^{\beta }h^{\lambda \alpha }), \notag \\
\Theta ^{\alpha \beta } &=&\frac{1}{2}(u_{;\lambda }^{\alpha }h^{\lambda
\beta }+u_{;\lambda }^{\beta }h^{\lambda \alpha }).
\end{eqnarray}
The expansion rate and the (rate of) shear, which is the trace-free part of the
strain, are given by
\begin{equation}
\theta =h^{\alpha \beta }\Theta _{\alpha \beta },\qquad \sigma =\Theta -\frac{1}{4}\theta h,
\end{equation}
respectively.
It is possible to show that $\theta =u_{;\sigma }^{\sigma }$ and that the
covariant derivative of the velocity can be decomposed as \cite{2}
\begin{equation}
h_{\alpha \lambda }u_{;\beta }^{\lambda }=\Theta _{\alpha \beta }+\Omega
_{\alpha \beta }+h_{\alpha \rho }V^{\lambda }u_{;\lambda }^{\rho }g_{\beta
\sigma }V^{\sigma },
\end{equation}
and with the help of this last equation we can obtain the so-called
Raychaudhuri equation in Newton--Chern--Simons gravity. Following Ref.
\cite{3}, we start from the known identity (see also \cite{11})
\begin{eqnarray}
u_{;\beta ;\gamma }^{\alpha }-u_{;\gamma ;\beta }^{\alpha } &=&R_{\sigma
\gamma \beta }^{\alpha }u^{\sigma }, \notag \\
u^{\beta }u_{;\alpha ;\beta }^{\alpha } &=&\left( u^{\beta }u_{;\beta
}^{\alpha }\right) _{;\alpha }-u_{;\alpha }^{\beta }u_{;\beta }^{\alpha
}-R_{\alpha \beta }u^{\alpha }u^{\beta },
\end{eqnarray}
where the first two terms on the right are given by \cite{2}
\begin{eqnarray}
\left( u^{\beta }u_{;\beta }^{\alpha }\right) _{;\alpha } &=&\func{div}(\nabla _{u}u), \notag \\
u_{;\alpha }^{\beta }u_{;\beta }^{\alpha } &=&h^{\rho \alpha }h^{\sigma
\beta }(\Theta _{\rho \beta }\Theta _{\sigma \alpha }+\Omega _{\rho \beta
}\Omega _{\sigma \alpha }),
\end{eqnarray}
and the last term on the right is given by
\begin{equation}
R_{\alpha \beta }u^{\alpha }u^{\beta }=\frac{3k_{1}}{2\nu ^{2}}\left( \rho +\frac{2p}{c^{2}}\right) ,
\end{equation}
where we have used (\ref{poisson2}) together with the equations
\begin{eqnarray}
R_{\alpha \beta } &=&\frac{3k_{1}}{2\nu ^{2}}\left( \rho +\frac{2p}{c^{2}}\right) \tau _{\alpha }\tau _{\beta }, \notag \\
\tau _{\alpha }\tau _{\beta } &=&g_{\alpha \sigma }g_{\beta \rho }u^{\sigma
}u^{\rho }. \label{r1}
\end{eqnarray}
These results allow us to find the five-dimensional Raychaudhuri equation
for Newton--Chern--Simons gravity,
\begin{equation}
\func{div}(\nabla _{u}u)=\nabla _{u}\theta +\frac{1}{4}\theta ^{2}+\sigma
^{\alpha \beta }\sigma _{\alpha \beta }-\Omega ^{\alpha \beta }\Omega
_{\alpha \beta }+\frac{3k_{1}}{2\nu ^{2}}\left( \rho +\frac{2p}{c^{2}}\right) . \label{ray1}
\end{equation}
\subsection{\textbf{FLRW background}}
In this section we study the non-relativistic FLRW equations in the
context of Newton--Chern--Simons gravity.
The calculation of the Ricci tensor from its definition leads to the
following result
\begin{eqnarray}
R_{00} &=&-\frac{1}{2}(h^{ij}\dot{h}_{ij})_{,0}-\frac{1}{4}h^{ij}\dot{h}_{jk}h^{kl}\dot{h}_{li}+2h^{ij}\kappa _{0j,i}+\kappa ^{ij}\kappa _{ij},
\notag \\
&=&\frac{3k_{1}}{2\nu ^{2}}\left( \rho +\frac{2p}{c^{2}}\right) , \notag \\
R_{0i} &=&h^{jk}\kappa _{ik,j}=0, \notag \\
R_{ij} &=&0. \label{ray2}
\end{eqnarray}
The first equation is equivalent to the Raychaudhuri equation (\ref{ray1})
for $u=V$, while the second equation is equivalent to $\Omega _{\;\;,j}^{ij}=0$ for $u=V$, since for any $u$
\begin{equation}
\Omega _{\alpha \beta }=\frac{1}{2}\left(
h_{ac}u_{,b}^{c}-h_{bc}u_{,a}^{c}-2\kappa _{ab}\right) .
\end{equation}
For the other kinematical quantities we find
\begin{eqnarray}
\Theta _{ab} &=&\frac{1}{2}\left( h_{ac}u_{,b}^{c}+h_{bc}u_{,a}^{c}+\dot{h}_{ab}\right) , \label{cant1} \\
\theta &=&u_{,a}^{a}+\frac{1}{2}h^{ab}\dot{h}_{ab}, \label{cant2} \\
\func{div}(\nabla _{u}u) &=&\dot{u}_{,a}^{a}+2h^{ab}\kappa
_{0a,b}+2h^{bc}\kappa _{ac}u_{,b}^{a} \notag \\
&&+h^{bc}u_{,b}^{a}\dot{h}_{ac}+u^{a}u_{,ab}^{b}+u_{,b}^{a}u_{,a}^{b}.
\label{cant3}
\end{eqnarray}
For an ideal fluid with pressure, the continuity equation and the Euler
equation are respectively given by \cite{10},
\begin{equation}
\dot{\rho}+\left[ \left( \rho +\frac{p}{c^{2}}\right) u^{i}\right] _{,i}+\frac{1}{2}h^{ij}\dot{h}_{ij}\left( \rho +\frac{p}{c^{2}}\right) =0,
\label{cont}
\end{equation}
and
\begin{equation}
\dot{u}^{i}+u^{j}u_{,j}^{i}+2h^{ij}\kappa _{0j}+2u^{j}\left( \frac{1}{2}h^{ik}\dot{h}_{kj}+h^{ik}\kappa _{jk}\right) +\left( \rho +\frac{p}{c^{2}}\right) ^{-1}h^{ij}p_{,j}=0. \label{euler}
\end{equation}
When $\kappa _{ij}=0$, we find that the equations (\ref{cont}), (\ref{euler}) and (\ref{ray2}) take the form
\begin{eqnarray}
\dot{\rho}+\left[ \left( \rho +\frac{p}{c^{2}}\right) u^{i}\right] _{,i}
&=&0, \notag \\
\dot{u}^{i}+u^{j}u_{,j}^{i} &=&-\left( \rho +\frac{p}{c^{2}}\right)
^{-1}p_{,i}+g^{i}, \notag \\
-g_{,i}^{i} &=&\frac{3k_{1}}{2\nu ^{2}}\left( \rho +\frac{2p}{c^{2}}\right) ,
\end{eqnarray}
where $g^{i}=-2\kappa _{0i}$. Thus we have arrived at the equations of
Newton--Chern--Simons gravity coupled to an ideal fluid.
If we now assume that $\rho $ and $p$ are functions of $t$ only
(homogeneity), then the Euler equation implies
\begin{equation}
\nabla _{u}u=-\left( \rho +\frac{p}{c^{2}}\right) ^{-1}\func{div}(ph)=0,
\end{equation}
and the continuity equation (\ref{cont}) shows that $u_{,i}^{i}$ depends
only on the time $t$. These results lead to the following simplification
of equation (\ref{ray1}):
\begin{equation}
\dot{\theta}+\frac{1}{4}\theta ^{2}+\sigma ^{ab}\sigma _{ab}-\Omega
^{ab}\Omega _{ab}+\frac{3k_{1}}{2\nu ^{2}}\left( \rho +\frac{2p}{c^{2}}\right) =0. \label{theta}
\end{equation}
Since $\theta $ is a function that depends only on time, (\ref{cant2}) and (\ref{cont}) imply
\begin{equation}
\dot{\rho}+\theta \left( \rho +\frac{p}{c^{2}}\right) =0. \label{ro}
\end{equation}
Let us now consider a homogeneous and isotropic flat-FLRW background in the
context of Newton--Chern--Simons gravity. This model is found using the
following Ansatz
\begin{equation}
V=u,\text{ \ }h_{ij}=a^{2}(t)\delta _{ij},\text{ }\Omega =0,
\end{equation}
which leads to
\begin{equation}
\theta =4\frac{\dot{a}}{a},\qquad \sigma _{ab}=0,\qquad \dot{\theta}=4\left( \frac{\ddot{a}a-\dot{a}^{2}}{a^{2}}\right) .
\end{equation}
Here, $a$ is the cosmic scale factor. Introducing these results in the
equations (\ref{ro}) and (\ref{theta}) we obtain
\begin{eqnarray}
\dot{\theta}+\frac{1}{4}\theta ^{2} &=&-\frac{3k_{1}}{2\nu ^{2}}\left( \rho +\frac{2p}{c^{2}}\right) , \notag \\
\frac{\ddot{a}}{a} &=&-\frac{3k_{1}}{8\nu ^{2}}\left( \rho +\frac{2p}{c^{2}}\right) . \label{theta2}
\end{eqnarray}
In the following section we will use the equations (\ref{theta2}) to
visualize the cosmologies that can be derived from the present
five-dimensional scheme.
\subsection{\textbf{Cosmological solutions}}
From now on we use units $k_{1}=8\pi G_{5}=1=c$ and $\nu =1/l$. The
equations (\ref{theta2}) are the conservation equation and the equation for
the acceleration, respectively, where $p$ is the pressure, $\rho $ the
energy density, $\theta =4H$, $H=\dot{a}/a$ is the Hubble parameter and $a$
is the cosmic scale factor. We immediately visualize the absence of the
Friedmann constraint. This situation is analogous to what happens in the
projectable version of the Ho\v{r}ava--Lifshitz theory in (3+1)-dimensions
\cite{[1]}. From equations (\ref{theta2}) it is possible to obtain the first
integral
\begin{equation}
\frac{8}{3}\nu ^{2}H^{2}=\rho +\frac{C_{0}}{a^{2}}, \label{3'}
\end{equation}
where $C_{0}$ is an integration constant. The term $C_{0}/a^{2}$ is not dark
matter in the usual sense, but gravitationally behaves like a fluid whose
pressure is $p=-\left( 1/2\right) \rho $ which, as we shall see, corresponds
to an evolutionary scheme with zero acceleration, which is a Milne universe.
In General Relativity in (3+1)-dimensions, dark matter corresponds to $\rho
\left( a\right) =\rho \left( a_{0}\right) \left( a_{0}/a\right) ^{3}$. A
term of this form is present in the Ho\v{r}ava--Lifshitz theory in
(3+1)-dimensions through $C\left( t\right) /a^{3}$. In Ref. \cite{[2]}, a
realization of the Ho\v{r}ava--Lifshitz gravity as the dynamical
Newton--Cartan geometry was discussed.
The scheme of equations in the projectable version of Ho\v{r}ava--Lifshitz
theory in (3+1)-dimensions is given by the equations
\begin{eqnarray}
\dot{\rho}+3H\left( \rho +p\right) &=&-Q, \notag \\
3\eta \left( 2\dot{H}+3H^{2}\right) &=&p.
\end{eqnarray}
where $\eta $ is a dimensionless constant parameter associated with
invariance under diffeomorphisms and $Q$ represents the amount of energy
non-conservation \cite{[3]}. Here there is no Friedmann constraint. From
these equations, it is straightforward to find the first integral
\begin{equation}
3\eta H^{2}=\rho +\frac{C\left( t\right) }{a^{3}}\qquad \text{with}\qquad C\left( t\right) =C_{0}+\int_{t_{0}}^{t}dt\,a^{3}Q,
\end{equation}
and $C\left( t\right) /a^{3}$ is not real dark matter, but gravitationally
it behaves like a fluid with $p=0$.
Now, we return to the equations given in (\ref{theta2}), which can be
written in the form
\begin{eqnarray}
\dot{\rho}+4H\left( \rho +p\right) &=&0, \label{4'} \\
\frac{\ddot{a}}{a} &=&\dot{H}+H^{2}=-\frac{3}{8\nu ^{2}}\left( \rho
+2p\right) . \label{5'}
\end{eqnarray}
Considering the barotropic relation $p=\omega \rho $, we can write (\ref{4'}) and (\ref{5'}) in the form
\begin{equation}
\dot{\rho}+4H\left( 1+\omega \right) \rho =0, \label{6'}
\end{equation}
from which it is direct to obtain
\begin{equation}
\rho \left( a\right) =\rho \left( a_{0}\right) \left( \frac{a_{0}}{a}\right)
^{4\left( 1+\omega \right) },
\end{equation}
and
\begin{equation}
\dot{H}+H^{2}=-\frac{3}{4\nu ^{2}}\left( \omega +\frac{1}{2}\right) \rho ,
\label{7'}
\end{equation}
from which we can see that $\ddot{a}\gtreqless 0$ for $\omega
\lesseqqgtr -1/2$.
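Before specializing to particular cases, the consistency of this system can be checked numerically. The following sketch (ours, not part of the paper; units $k_{1}=c=1$ and illustrative initial data) integrates the acceleration equation (\ref{5'}) with $\rho (a)$ given by the solution of (\ref{6'}), and verifies that the first integral (\ref{3'}) is preserved along the flow, with $C_{0}$ fixed by the initial data:

```python
# Sketch: RK4 integration of a''/a = -(3/(8 nu^2))(rho + 2p), p = omega*rho,
# rho ~ a^{-4(1+omega)}, checking conservation of the first integral
# (8/3) nu^2 H^2 - rho - C0/a^2 = 0.

def rho_of_a(a, rho0, a0, omega):
    return rho0 * (a0 / a) ** (4.0 * (1.0 + omega))

def derivs(a, H, rho0, a0, omega, nu):
    rho = rho_of_a(a, rho0, a0, omega)
    p = omega * rho                                        # barotropic relation
    da = a * H
    dH = -3.0 / (8.0 * nu ** 2) * (rho + 2.0 * p) - H ** 2  # Hdot = a''/a - H^2
    return da, dH

def integrate(a0, H0, rho0, omega, nu, dt=1e-4, steps=20000):
    a, H = a0, H0
    for _ in range(steps):
        k1 = derivs(a, H, rho0, a0, omega, nu)
        k2 = derivs(a + 0.5*dt*k1[0], H + 0.5*dt*k1[1], rho0, a0, omega, nu)
        k3 = derivs(a + 0.5*dt*k2[0], H + 0.5*dt*k2[1], rho0, a0, omega, nu)
        k4 = derivs(a + dt*k3[0], H + dt*k3[1], rho0, a0, omega, nu)
        a += dt/6.0 * (k1[0] + 2*k2[0] + 2*k3[0] + k4[0])
        H += dt/6.0 * (k1[1] + 2*k2[1] + 2*k3[1] + k4[1])
    return a, H

nu, omega = 1.0, 0.0
a0, H0, rho0 = 1.0, 1.0, 0.5
C0 = a0**2 * (8.0/3.0 * nu**2 * H0**2 - rho0)   # fixed by the initial data
a, H = integrate(a0, H0, rho0, omega, nu)
residual = 8.0/3.0 * nu**2 * H**2 - rho_of_a(a, rho0, a0, omega) - C0 / a**2
print(abs(residual))   # stays at the level of the integration error
```

The residual stays at the level of the integrator's truncation error, confirming that (\ref{3'}) is indeed a first integral of (\ref{6'}) and (\ref{5'}).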
The equations (\ref{3'}) and (\ref{6'}), with $1+z=a_{0}/a$, where $z$ is
the redshift parameter, imply that the Hubble parameter can be written as
\begin{equation}
H\left( z\right) =\sqrt{\frac{3\rho \left( 0\right) }{8\nu ^{2}}\left(
1+z\right) ^{4\left( 1+\omega \right) }+\left( H^{2}\left( 0\right)
-\frac{3\rho \left( 0\right) }{8\nu ^{2}}\right) \left( 1+z\right) ^{2}}.
\label{8'}
\end{equation}
We now consider some particular cases:
\begin{enumerate}
\item[(1)] \textbf{Case when }$\mathbf{\omega =0}$. In this case $\rho
\left( z\right) =\rho \left( 0\right) \left( 1+z\right) ^{4}$ and
\end{enumerate}
\begin{equation}
H\left( z\right) =\sqrt{\frac{3\rho \left( 0\right) }{8\nu ^{2}}\left(
1+z\right) ^{2}+\left( H^{2}\left( 0\right) -\frac{3\rho \left( 0\right) }{8\nu ^{2}}\right) }\left( 1+z\right) , \label{9'}
\end{equation}
where we can see that
\begin{equation}
H\left( z\rightarrow \infty \right) \rightarrow \infty \qquad \text{and}\qquad H\left( z\rightarrow -1\right) \rightarrow 0. \label{10'}
\end{equation}
\begin{enumerate}
\item[(2)] \textbf{Case when }$\mathbf{\omega =-1/2}$. In this case $\rho
\left( z\right) =\rho \left( 0\right) \left( 1+z\right) ^{2}$ and
\end{enumerate}
\begin{equation}
H\left( z\right) =H\left( 0\right) \left( 1+z\right) \Longrightarrow \ddot{a}=0, \label{11'}
\end{equation}
which corresponds to a Milne universe.
\begin{enumerate}
\item[(3)] \textbf{Case when }$\omega =-1$. In this case $\rho \left(
z\right) =\rho \left( 0\right) =const.$, but according to (\ref{8'})
\end{enumerate}
\begin{equation}
H\left( z\right) =\sqrt{\frac{3\rho \left( 0\right) }{8\nu ^{2}}+\left(
H^{2}\left( 0\right) -\frac{3\rho \left( 0\right) }{8\nu ^{2}}\right) \left(
1+z\right) ^{2}}, \label{12'}
\end{equation}
from which we see
\begin{equation}
H\left( z\rightarrow \infty \right) \rightarrow \infty \qquad \text{and}\qquad H\left( z\rightarrow -1\right) \rightarrow \sqrt{\frac{3\rho
\left( 0\right) }{8\nu ^{2}}}, \label{13'}
\end{equation}
and, unlike General Relativity in (3+1)-dimensions, we have $H\left(
z\right) \neq \mathrm{const}$ for $\rho \left( z\right) =\mathrm{const}$.
\begin{enumerate}
\item[(4)] \textbf{Case when }$\omega <-1$. In this case $\rho \left(
z\right) =\rho \left( 0\right) \left( 1+z\right) ^{-4\left( \left\vert
\mathbf{\omega }\right\vert -1\right) }$ and
\end{enumerate}
\begin{equation}
H\left( z\rightarrow \infty \right) \rightarrow \sqrt{H^{2}\left( 0\right) -\frac{3\rho \left( 0\right) }{8\nu ^{2}}}\left( 1+z\right) \rightarrow
\infty , \label{14'}
\end{equation}
\begin{equation}
H\left( z\rightarrow -1\right) \rightarrow \sqrt{\frac{3\rho \left( 0\right)
}{8\nu ^{2}}}\left( 1+z\right) ^{-2\left( \left\vert \omega \right\vert -1\right) }\rightarrow \infty . \label{15'}
\end{equation}
We note that $H\left( z\rightarrow -1\right) $ diverges, as does $\rho \left(
z\rightarrow -1\right) $. However, this does not happen in a finite time, as it
does in General Relativity in $(3+1)$-dimensions when we consider, for
instance, a Little Big Rip future singularity \cite{[4]}.
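The asymptotic behaviors quoted in cases (1)--(3) follow directly from (\ref{8'}). The following sketch (ours, not part of the paper; units $k_{1}=c=1$ and illustrative parameter values) evaluates those limits numerically:

```python
# Sketch: check the limits of Eq. (8') for omega = 0, -1/2, -1.
import math

def H_of_z(z, H0, rho0, nu, omega):
    # Eq. (8') in units k1 = c = 1
    A = 3.0 * rho0 / (8.0 * nu ** 2)
    return math.sqrt(A * (1.0 + z) ** (4.0 * (1.0 + omega))
                     + (H0 ** 2 - A) * (1.0 + z) ** 2)

H0, rho0, nu = 1.0, 0.5, 1.0
A = 3.0 * rho0 / (8.0 * nu ** 2)

# omega = -1/2: Milne-like evolution, H(z) = H(0)(1+z), zero acceleration
assert abs(H_of_z(1.0, H0, rho0, nu, -0.5) - 2.0 * H0) < 1e-12
# omega = 0: H -> 0 as z -> -1
assert H_of_z(-1.0 + 1e-9, H0, rho0, nu, 0.0) < 1e-8
# omega = -1: H(z -> -1) -> sqrt(3 rho(0) / (8 nu^2)), not a constant H(z)
assert abs(H_of_z(-1.0, H0, rho0, nu, -1.0) - math.sqrt(A)) < 1e-12
print("limits of Eq. (8') verified")
```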
Now, the first and second law of thermodynamics tell us, respectively,
\begin{eqnarray}
TdS &=&d\left( \rho V\right) +pdV=\left( \rho +p\right) dV+Vd\rho ,
\label{16'} \\
T\frac{dS}{dt} &=&V\left[ \dot{\rho}+4H\left( \rho +p\right) \right] .
\label{17'}
\end{eqnarray}
Since, according to (\ref{4'}), $\dot{\rho}+4H\left( \rho +p\right) =0$, we
have an adiabatic evolution. This means that in Newton--Chern--Simons
cosmology there is no Friedmann constraint and the evolution is adiabatic.
On the other hand, in Ho\v{r}ava--Lifshitz theory in (3+1)-dimensions there
is also no Friedmann constraint but, unlike in Newton--Chern--Simons cosmology,
the evolution is non-adiabatic since $\dot{\rho}+3H\left( \rho +p\right) =-Q\neq 0$ and therefore $dS/dt\neq 0$.
In summary, we can say that we have presented cosmological schemes derived from the
five-dimensional Einstein--Chern--Simons gravity theory. It could be
interesting to use some process of compactification to project these results
to (3+1)-dimensions and then compare them with the results obtained in
the context of general relativity.
\section{\textbf{Final Remarks}}
We have considered a five-dimensional action $S=\int L_{\mathrm{ChS}}^{(5)}+\kappa L_{M}$, which is composed of a gravitational sector and a
matter sector, where the gravitational sector is given by a
Newton--Chern--Simons gravity action instead of the Einstein--Hilbert action
and the matter sector is described by a perfect fluid. We have studied the
implications of replacing the Einstein--Hilbert action by the
Newton--Chern--Simons action on the cosmological evolution for a FLRW metric.
We have shown that the Newton--Chern--Simons cosmology is a sort of
analogue of the projectable version of the Ho\v{r}ava--Lifshitz theory in
(3+1)-dimensions, although a term that contains $Q$ is not present. We have
found solutions and their asymptotic limits, which show interesting
properties. In addition, a phantom solution with a future singularity
reminiscent of a Little Big Rip future singularity has been obtained.
Finally, a brief revision of the adiabaticity of the cosmic evolution was
made.
As we said at the end of the previous section, an interesting next step would be
to carry out a compactification from five to four dimensions in order to obtain
generalized non-relativistic cosmologies to be compared with the respective
schemes studied in the context of general relativity (work in progress).
\textbf{Acknowledgement.} This work was supported in part by FONDECYT Grant
No. 1180681 from the Government of Chile. One of the authors (GR) was
supported by a grant from Comisi\'{o}n Nacional de Investigaci\'{o}n
Cient\'{\i}fica y Tecnol\'{o}gica CONICYT No. 21140971 and from Universidad
de Concepci\'{o}n, Chile.
\section{Supplemental Material}
\begin{centering}
{\large \textbf{Evidence for even parity unconventional superconductivity in Sr$_2$RuO$_4$}}
A. Chronister$^{1 \dagger\star}$, A. Pustogow$^{1 \dagger\star}$, N. Kikugawa$^{2}$, D. A. Sokolov$^{3}$, F. Jerzembeck$^3$, C. W. Hicks$^{3}$, A. P. Mackenzie$^{3,4}$, E. D. Bauer$^{5}$, S. E. Brown$^{1 \dagger}$
$^1$Department of Physics $\&$ Astronomy, UCLA, Los Angeles, CA 90095, USA;
$^2$National Institute for Materials Science, Tsukuba 305-0003, Japan;
$^3$Max Planck Institute for Chemical Physics of Solids, Dresden 01187, Germany;
$^4$Scottish Universities Physics Alliance, School of Physics and Astronomy, University of St Andrews, North Haugh, St Andrews KY16 9SS, UK;
$^5$Los Alamos National Laboratory, Los Alamos, New Mexico 87545, USA.
\end{centering}
\section{The discontinuous transition at $B_{c2}(T\to0)$}
While at low temperature there is good evidence that the thermal and magnetic responses both originate from field-induced quasiparticles, the thermodynamic discontinuities at the first-order transition at $B_{c2}(T)$ were previously explored in some detail~\cite{Yonezawa2013,Yonezawa2014,Amano2015}. These are thermodynamically constrained, expressed as the appropriate Clausius-Clapeyron relation, \textit{e.g.},
\begin{equation*}
\frac{dB_{c2}}{dT_c}=-\frac{\Delta S}{\Delta M}.
\end{equation*}
While $\Delta M$ is the total magnetization and includes diamagnetic shielding, it is dominated by the hyperfine part in Sr$_2$RuO$_4$\ \cite{Amano2015,Murakawa2007}. The phase transition was found to be discontinuous for $T\lesssim0.8$ K, with specific heat~\cite{Yonezawa2014} and magnetocaloric effect~\cite{Yonezawa2013} measurements extending to temperatures as low as 90 mK. At $T=200$ mK, the entropy jump is quoted as 10\% of the normal state value, $\Delta S=(0.1)\gamma_NT_c$, where $\gamma_N$= 37.5 mJ/mol-K$^2$. Also reported at 200 mK, $dB_{c2}/dT_c$= $-0.2$ T/K. These combined results lead to the expectation $\Delta M(200~\textrm{mK})\simeq 0.6-0.7\,\chi_{n}B_{c2}(T=200~\textrm{mK})$.
Since in these cases the Zeeman energy is much larger than the thermal energy scale, the fractional entropy of the transition is not expected to be strongly temperature-dependent. In that case, the magnetization discontinuity inferred from the measurements reported here compares favorably. That is, the experiments indicate a slightly smaller discontinuity, $\Delta M_s\simeq0.4\chi_nB_{c2}$ with $\chi_{sc}\simeq0.9\times10^{-3}$ emu/mole.
\section{Comment on the orbital shift for $^{17}$O}
The $^{17}$O orbital shifts $K_o$ are of considerable importance here, since we use them to reference the Knight shifts $K_s(1_{\parallel},1_{\perp},2)$ relative to ``0''. Generally, $K_o$ can extend to greater than +1500 ppm in oxygen-containing molecules; such shifts are dominated by the paramagnetic term and are linked to an increased $2p$-bonding order~\cite{Figgis1962}. In this case, the $2p$-$\sigma$ non-bonding orbital is $\sim3/4$-filled, and the bonding $2p$-$\pi$ orbitals of most interest here are $\sim7/8$ filled~\cite{Luo2019}. In Ref.~\cite{Ishida1998}, the orbital shifts $K_o(1_{\perp},2)$ were empirically found to be negligibly small by way of a so-called $K-\chi$ plot, where temperature is an implicit parameter. In a similar fashion, $K_o(1_{\parallel})\simeq0.18\%$. These values were applied to the analysis leading to Fig. 3, although we used $K_o(2)=0.02\%$. More specifically, if $K_o(1_{\parallel})$ is taken as vanishingly small, the Knight shift $K_s(1_{\parallel})$, originating dominantly with the $p$-$\pi$ orbitals, acquires the wrong (unphysical) sign. See Fig. 3(a), where the total shift is seen to change sign as the field-induced quasiparticle density is monotonically increased with the field strength.
The paramagnetic orbital shift for O($1_{\parallel}$) implies that it originates in the specific unquenching of the angular momentum in the $p$-$\pi$ states. These states, in hybridizing with the Ru $t_{2g}$ orbitals, form the bands $\alpha$, $\beta$, $\gamma$ crossing the Fermi surface. Thus, the perturbative methods applied somewhat successfully to the cuprate superconductors~\cite{Pennington1989} may be less useful here.
Note also that if we adopt $K_s(1_{\perp})>0$, a similarly unphysical field-dependent sign change is imposed on the corresponding hyperfine part, and the diamagnetic part is expected to be less than 0.02\%. \textit{This is the clearest constraint on setting the stated upper bound to the condensate fraction of the shift} to $<10\%$ of $K_{normal}$.
Finally, more relative uncertainty is associated with the O(2) site. $K_o=0.0\%$ was assumed in Ref.~\cite{Imai1998}. Here, we took it as +0.02\%, which is also small on the scale of oxygen paramagnetic orbital shifts, and just 25\% of the inferred normal-state hyperfine (Knight) shift. Using this value, the results for O(2) match those for O($1_{\parallel},1_{\perp}$).
\section{Nuclear Hamiltonian Parameterization}
For the $^{17}$O nucleus with spin $I=\frac{5}{2}$, the nuclear spin Hamiltonian consists of two parts:
\begin{align}
H=H_Q+H_z \\
H_z=\gamma \vec{I} \cdot (1+K) \cdot \vec{B} \\
H_Q = \frac{eQ}{2I(2I-1)\hbar}\vec{I}\cdot V \cdot \vec{I}
\end{align}
where $H_z$ is the Zeeman interaction and $H_Q $ is the nuclear quadrupole interaction. $K = K_s + K_{orb}$ is the total shift tensor, including both orbital and hyperfine contributions. $Q$ is the electric quadrupole moment of the nucleus and $V$ is the electric field gradient (EFG).
The quadrupolar term has the general effect of splitting the degenerate Zeeman transitions, resulting in $2I$ resonance frequencies. Thus, in the case of Sr$_2$RuO$_4$, which has three distinct oxygen sites under the application of in-plane field ($B || a$), we expect a full $^{17}$O spectrum of 15 lines.
By measuring these NMR lines, one can probe the electronic spin susceptibility $\chi$ of the material via the strength of the hyperfine interaction ($K_s$). However, as Equations (1-3) imply, this effect must be differentiated from the other interactions contributing to the total Hamiltonian. If the parameters defining the nuclear quadrupole interaction are known, this can be readily done. Fortunately, the hyperfine shift, orbital shift, and EFG tensors are all independent of field in the normal state, making it possible to determine these parameters experimentally. So, by measuring all fifteen $^{17}$O resonances at various fields in the normal state ($B>B_{c2}$), one can overdetermine the normal state shift tensor $K_{norm}$ and the EFG tensor $V$. Then, since the quadrupolar and orbital shifts are independent of the superconducting phase transition, any discrepancy between the expected and measured resonance frequencies below $B_{c2}$ can be directly attributed to changes in $K_s$, hence the spin susceptibility.
The nuclear spin Hamiltonian is canonically expressed in the principal axis system (PAS) of $V$, using the convention for the diagonal entries $V_{zz}\geq V_{xx}\geq V_{yy}$. By doing this, the quadrupolar part can be written compactly as:
\begin{align}
H_Q= \frac{\nu_Q}{6}[3I_z^2-\vec{I}^2+\eta(I_x^2-I_y^2)]
\end{align}
where $\nu_Q$ is the principal-axis NQR frequency, proportional to $V_{zz}$, and $\eta$ is the asymmetry parameter given by $(V_{xx}-V_{yy})/V_{zz}$. Since the shift tensor is also diagonal in this frame for all three oxygen sites in Sr$_2$RuO$_4$, the Zeeman term can be expressed as
\begin{align}
H_z=\gamma [ B_x (1+K_{xx})I_x +B_y (1+K_{yy})I_y + B_z (1+K_{zz})I_z]
\end{align}
where $\vec{B}$ is also written in the EFG basis.
With the Hamiltonian written in this form, there are 7 parameters to be determined for each oxygen site: $\nu_Q$, $\eta$, the three PAS components of $\textbf{K}$, and the two angles relating $\hat{B}$ to the EFG frame, $\theta$ and $\phi$. However, as explained below, both angles determining $\hat{B}$ can be measured independently, leaving only 5 parameters to be determined via normal-state measurements.
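As a concrete illustration (this is a sketch, not the analysis code used in this work), the level scheme follows from exact numerical diagonalization of $H_z+H_Q$ for $I=5/2$. In the snippet below, the value of the $^{17}$O gyromagnetic ratio ($\gamma/2\pi\approx 5.772$~MHz/T) and the convention of passing the field direction as a unit vector in the EFG frame are illustrative assumptions:

```python
import numpy as np

def spin_ops(I):
    """Matrix representations of Ix, Iy, Iz for spin quantum number I."""
    m = np.arange(I, -I - 1, -1)            # m = I, I-1, ..., -I
    # <m+1| I+ |m> = sqrt(I(I+1) - m(m+1)), on the first superdiagonal
    ip = np.diag(np.sqrt(I * (I + 1) - m[1:] * (m[1:] + 1)), k=1)
    return (ip + ip.T) / 2, (ip - ip.T) / 2j, np.diag(m)

def transition_freqs(B, bhat, K, nu_q, eta, gamma=5.772):
    """The 2I = 5 resonance frequencies (MHz) of one 17O site, from exact
    diagonalization of H_z + H_Q written in the EFG principal-axis frame."""
    Ix, Iy, Iz = spin_ops(2.5)
    I2 = Ix @ Ix + Iy @ Iy + Iz @ Iz
    Kx, Ky, Kz = K                          # diagonal shift tensor (PAS)
    Hz = gamma * B * ((1 + Kx) * bhat[0] * Ix
                      + (1 + Ky) * bhat[1] * Iy
                      + (1 + Kz) * bhat[2] * Iz)
    HQ = (nu_q / 6) * (3 * Iz @ Iz - I2 + eta * (Ix @ Ix - Iy @ Iy))
    E = np.sort(np.linalg.eigvalsh(Hz + HQ))
    return np.abs(np.diff(E))               # adjacent-level differences
```

Note that identifying the adjacent eigenvalue differences with the $\Delta m=\pm1$ lines is valid in the Zeeman-dominated regime relevant here ($\gamma B \gg \nu_Q$).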
\section{Sample Alignment With Respect to Magnetic Field Direction}
\begin{figure}[h]
\includegraphics[width=0.5\columnwidth]{B-Bc2_vs_angle_compare-Yonezawa2013-3}
\caption{The angle dependence of the upper critical field $B_{c2}$, determined from the field-dependence of the coil inductance~\cite{Pustogow2019}, was measured with a piezo-electric rotator (blue symbols). $B_{c2}(\theta)$ is plotted in units of the maximum value $B_{c2,max}= 1.42$~T. The data agree well with specific heat results from Ref.~\cite{Yonezawa2013} where $B_{c2,max}$ ranges from 1.41--1.45~T for different samples and field-sweep conditions. The in-plane condition is satisfied to $\pm 0.2^{\circ}$ for our sample.
}
\label{fig:Hc2_vs_angle}
\end{figure}
First, the out-of-plane angle can be determined independently of the NMR spectrum by utilizing the extreme anisotropy of the upper critical field between $B || ab$ and $B || c$. $B_{c2}$ reaches a maximum of around 1.45~T with the field aligned directly in plane \cite{Yonezawa2013}. As mentioned in the main text, the NMR coil containing the sample is mounted on a piezoelectric step rotator with its rotation axis perpendicular to the applied field. By stepping the piezo until $B_{c2}$ reaches a maximum, the field can be aligned to the in-plane condition to within $\pm 0.2 ^{\circ}$. The angle dependence of $B_{c2}$ is shown in Fig.~\ref{fig:Hc2_vs_angle}.
The in-plane angle is then checked \textit{a posteriori} by visual inspection of the sample orientation under a microscope. While the in-plane condition was verified by the anisotropy of $B_{c2}$ in Fig.~\ref{fig:Hc2_vs_angle}, a $3^{\circ}$ angle is found between the long axis of the single crystal and the magnetic field direction.
\section{Normal-State Measurement of Hamiltonian Parameters}
\begin{figure}[]
\includegraphics[width=1\columnwidth]{central-satellite_field-fits3}
\caption{Numerically calculated transition frequencies (dashed lines) compared to measured resonance frequencies (symbols) at different fields for (a) central transitions of the three oxygen sites and (b) central and satellite transitions of O(1$_{\perp}$).
}
\label{calc_freqsweep_compare}
\end{figure}
\begin{table}[]
\begin{tabular}{ l | l | l | l l}
& Shift (\%) & NQR frequency (MHz) & Asymmetry &\\
\hline
O(1) & & & & \\
& $K_{1||} = -0.12 $ & $\nu_Q= 0.765$ & $\eta = 0.174$ & \\
& $K_{1\perp} = +0.509 $& & & \\
& & & & \\
\hline
O(2) & & & & \\
& $K_{2ab} = +0.082 $ & $\nu_Q= 0.6065$ & $\eta = 0$ & \\
& & & &\\
\hline
\end{tabular}
\caption{List of best fit Hamiltonian parameters for the different oxygen sites with $\theta = 0^{\circ}$ and $\phi=3^{\circ}$. The two planar oxygen sites O(1) and O(1') are identical without applied field and are labeled O(1). With the field aligned in the Ru-O plane ($\theta = 0^{\circ}$), just two components of $K$ are relevant for the O(1) site while only one is relevant for the O(2) site. }
\label{table1}
\end{table}
The remaining parameters are determined by fitting the output of a numerical diagonalization of the exact Hamiltonian to the experimentally measured $^{17}$O NMR transitions at three fields greater than $B_{c2}$ ($B=1.6$, 4.6, and 8~T). The quadrupolar parameters for Sr$_2$RuO$_4$\ have been investigated on a different crystal in a previous study \cite{Luo2019} and were used as a starting point for the fit. A comparison of the best-fit calculation to the experimental normal-state line positions is shown in Fig.~\ref{calc_freqsweep_compare}. The fit reproduces the measured resonance frequencies extremely well, with an average error of less than 1~kHz across the three fields. The parameters extracted from the best fit are given in Table \ref{table1}. These values are consistent with previously published results \cite{Mukuda1998}.
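A minimal sketch of such a multi-field fit is shown below. It is hypothetical: a toy first-order expression (a shifted Larmor line with satellites split by $\nu_Q$, omitting $\eta$ and the field angles) stands in for the exact diagonalization, whereas the actual analysis fits the full five-parameter Hamiltonian per site:

```python
import numpy as np
from scipy.optimize import least_squares

GAMMA = 5.772  # MHz/T, approximate 17O gyromagnetic ratio / 2pi

def model_freqs(params, B):
    """Toy first-order stand-in for the exact-diagonalization frequencies:
    Larmor line shifted by K, with satellites split by nu_q."""
    K, nu_q = params
    f0 = GAMMA * B * (1 + K)
    return f0 + nu_q * np.arange(-2, 3)

def residuals(params, fields, measured):
    # stack the frequency mismatches over all normal-state fields
    return np.concatenate([model_freqs(params, B) - f
                           for B, f in zip(fields, measured)])

# synthetic "normal-state" data at the three fields used in the experiment
fields = [1.6, 4.6, 8.0]
truth = (0.00509, 0.765)                       # (shift, nu_q) to recover
measured = [model_freqs(truth, B) for B in fields]

fit = least_squares(residuals, x0=(0.0, 0.5), args=(fields, measured))
```

Because the shift scales with $B$ while the quadrupolar splitting does not, fitting several fields simultaneously overdetermines and separates the two contributions, as described above.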
\begin{figure}[]
\includegraphics[width=0.8\columnwidth]{fit_error2}
\caption{Difference between calculated and measured resonance frequencies, $\Delta f=f_{exp}-f_{calc}(\phi)$ is shown for (a) $B=1.6$, (b) 4.6 and (c) 8.1~T. The experimental data ($\Delta f=0$) are shown at the respective frequency for O(1$_{\parallel}$), O(2), O(1$_{\perp}$) in blue, green and red colors, respectively. The calculated results at $\phi=0^{\circ}$, $3^{\circ}$ and $5^{\circ}$ in-plane angle with respect to $\mathbf{B}\parallel [100]$ are indicated by crosses, minus and plus signs in dark grey, black and light grey color, respectively.
On the right we illustrate the maximum deviations from the experimentally determined peak positions. The angle dependence becomes most pronounced at low fields, where $\phi=3^{\circ}$ provides the best fit at $B=1.6$~T. The accumulated rms error is smallest for $\phi=3^{\circ}$ at all fields.
}
\label{calc_angle_compare}
\end{figure}
\section{Discussion of in-plane angle uncertainty}
Due to the weak dependence of the quadrupolar term on in-plane angle near $0^{\circ}$ at high field, it is possible to accurately fit the normal-state spectra for a range of in-plane angles ($\approx 0^{\circ}- 5^{\circ}$). This is illustrated in Fig.~\ref{calc_angle_compare}, which shows the deviation between the predicted and measured frequencies for all $^{17}$O transitions using $0^{\circ}$, $3^{\circ}$, $5^{\circ}$ in-plane angle fits; the predicted normal-state frequencies differ only by $\pm 1$~kHz between the fits for $B=1.6$--8~T. While the overall deviations are smallest for $\phi=3^{\circ}$ (which was used for NMR shift analysis), any systematic error introduced by uncertainty in the in-plane angle should be examined.
While for fields $B>B_{c2}$ the effect of in-plane angle is small, it can have a strong impact on the expected normal-state position at lower fields. As such, this affects the ability to extract $K/K_{normal}$ for $B\rightarrow 0$. To illustrate this, the resulting $K/K_{normal}$ are shown for best fits using the three in-plane angles $\phi=0^{\circ}$, $3^{\circ}$ and $5^{\circ}$ in Fig.~\ref{Kplot_angle_compare}. The O(1$_{\parallel}$) site shows a particularly strong dependence on $\phi$: the $0^{\circ}$ and $5^{\circ}$ fits produce unphysical behavior, with $K_s$ exceeding the normal-state value for $\phi=0^{\circ}$ and changing sign for $\phi=5^{\circ}$. The O(1$_{\perp}$) site has a much weaker dependence, but still shows unphysical behavior for angles deviating from $3^{\circ}$. This gives further confidence in the visually determined $3^{\circ}$ angle, but also shows that the O(1$_{\perp}$) site is more robust to a small angle systematic error in evaluating the Knight shifts. Additionally, it should be noted that the apical O(2) site, although having much weaker hyperfine coupling, is completely independent of the in-plane angle due to its axial symmetry, avoiding this issue altogether.
\begin{figure}[h]
\includegraphics[width=1\columnwidth]{in-plane-angle3}
\caption{Difference between extracted $K_s/K_{normal}$ for in-plane angles $\phi=0^{\circ}$, $3^{\circ}$ and 5$^{\circ}$ with respect to $\mathbf{B}\parallel [100]$. (a) $\phi=0^{\circ}$ and 5$^{\circ}$ yield strong deviations for $K_{1\parallel}$, with non-physical behavior $K_s<0$ and $K_s>K_{normal}$. (b) Due to the generally larger Knight shift, the variations of $K_{1\perp}$ are less pronounced, yielding more robust values. Still, the non-monotonic behavior upon lowering $B$ for 5$^{\circ}$ is not meaningful, and susceptibility values ($0^{\circ}$) smaller than the quasiparticle contribution from specific heat, $K_s/K_{normal}<C/C_{normal}$~\cite{NishiZaki2000}, are also unphysical. Altogether, we settle on an in-plane angle $\phi=3^{\circ}$ from [100], used consistently in this work.
}
\label{Kplot_angle_compare}
\end{figure}
\end{document}
\section{Introduction}
Let $k$ be a base field. Given a smooth proper $k$-scheme $X$, let us denote by $\mathcal{Z}^*(X)_\mathbb{Q}$ the (graded) $\mathbb{Q}$-vector space of algebraic cycles on $X$. It is well-known that one can define many different equivalence relations on $\mathcal{Z}^*(X)_\mathbb{Q}$, such as the rational equivalence relation $\sim_\mathrm{rat}$, the algebraic equivalence relation $\sim_\mathrm{alg}$, the nilpotence equivalence relation $\sim_\mathrm{nil}$, the numerical equivalence relation $\sim_\mathrm{num}$, etc; consult \cite{Fulton2nd_ed}.
Voevodsky conjectured in \cite{Voevodsky95} that the nilpotence and numerical equivalence relations agree. This conjecture was proved\footnote{Thanks to the work of Kahn-Sebastian, Matsusaka, Voevodsky, and Voisin (see \cite{KS09,Mat57,Voevodsky95, Voisin96}), Voevodsky's conjecture is known in the case of curves, surfaces, and abelian 3-folds (when $k$ is of characteristic zero).} by Bernardara-Marcolli-Tabuada \cite{BMT} for certain quadric fibrations, intersections of quadrics, linear sections of Grassmannians, linear sections of determinantal varieties, and Moishezon manifolds, and later by Ornaghi-Pertusi \cite{OP} for certain cubic fourfolds and Gushel-Mukai fourfolds.
As shown by Voevodsky in \cite{Voevodsky95}, every algebraic cycle which is algebraically trivial is also nilpotently trivial. Consequently, it is natural to ask whether the aforementioned results also hold when the nilpotence equivalence relation is replaced by the algebraic equivalence relation. Our main result is an affirmative answer to this question:
\begin{theorem}[Refinement]
\label{theorem: main theorem}
The algebraic and numerical equivalence relations agree for certain quadric fibrations, intersections of quadrics, linear sections of Grassmannians, linear sections of determinantal varieties, Moishezon manifolds, cubic fourfolds and Gushel-Mukai fourfolds. In addition, they also agree for certain K{\"u}chle fourfolds and families of Sextic del Pezzo surfaces.\label{main theo}
\end{theorem}
Theorem \ref{theorem: main theorem} follows from the combination of Corollaries \ref{corollary: certain quadrics}, \ref{corollary: certain intersections}, \ref{corollary: sextic del Pezzo}, \ref{corollary: grassmannians}, \ref{corollary: determinantal} with Theorems \ref{theorem: certain moishezon}, \ref{theorem (A)}, \ref{theorem (C)}, \ref{theorem (D)}, \ref{theorem: certain kuchel fourfolds}; consult \S \ref{subsection: quadric}-\S \ref{subsection: HPD} below.
\begin{remark}
\label{remark: not always true}
Theorem \ref{main theo} does not hold for every smooth proper $k$-scheme! For example, it follows from the pioneering work of Griffiths \cite{Grif69} that if $X$ is a general quintic $3$-fold, then the algebraic and numerical equivalence relations on $\mathcal{Z}^*(X)_\mathbb{Q}$ do not agree; consult also the later works of Clemens \cite{Clem83} and Voisin \cite{Voisin00} for further examples.
\end{remark}
In addition to Theorem \ref{main theo}, we also prove that the quotient between any two equivalence relations is invariant under Homological Projective Duality in the sense of Kuznetsov \cite{Kuzn07}; consult Theorem \ref{Theorem: HPD} below. This allows us to explicitly compute the quotient between the rational and the algebraic equivalence relations for certain 5-folds; consult Corollaries \ref{corollary: Grassmannian curve genus 1}-\ref{corollary: Grassmannian curve genus 43} below.
\section{Preliminaries}
Throughout this paper, $k$ denotes a base field.
\subsection{Dg categories}
\label{subsection: dg}
We recall here some basic concepts on differential graded categories; for a survey, consult \cite{Keller06}.
Let $(C(k),\otimes, k )$ be the category of (cochain) complexes of $k$-vector spaces. A \textit{differential graded (dg) category} $\mathcal{A}$ is a category enriched over $C(k)$ and a dg functor $F: \mathcal{A} \rightarrow \mathcal{B}$ is a functor enriched over $C(k)$. Note that every $k$-algebra $A$ gives rise to a dg category with a single object.
Recall also that, given any quasi-compact quasi-separated $k$-scheme $X$, the category of perfect complexes $\perf (X)$ admits a canonical\footnote{When $X$ is quasi-projective, this dg enhancement is moreover unique, see \cite[Theorem 2.12]{LunOrl}} dg enhancement $\perfdg (X)$; see \cite[\S 4.6]{Keller06}.
Let $\mathcal{A}$ be a dg category. The opposite dg category $\mathcal{A}^{\mathrm{op}}$ has the same objects as $\mathcal{A}$, with $\mathcal{A}^{\mathrm{op}}(x,y):=\mathcal{A}(y,x)$, and the category $\mathrm{H}^0(\mathcal{A})$ has the same objects as $\mathcal{A}$, with $(\mathrm{H}^0(\mathcal{A}))(x,y):=\mathrm{H}^0(\mathcal{A}(x,y))$; consult \cite[\S 2.2]{Keller06}.
A \textit{right dg $\mathcal{A}$-module} is a dg functor $M: \mathcal{A}^{\mathrm{op}} \rightarrow \mathcal{C}_{\mathrm{dg}}(k)$ with values in the dg category of complexes of $k$-vector spaces.
The category of dg modules $\mathcal{C}(\mathcal{A})$ has the right dg $\mathcal{A}$-modules as objects and the morphisms of dg functors as morphisms. The localization of $\mathcal{C}(\mathcal{A})$ with respect to the class of quasi-isomorphisms is called the \emph{derived category} $\mathcal{D}(\mathcal{A})$ of $\mathcal{A}$. We will write $\mathcal{D}_c (\mathcal{A})$ for the full subcategory of compact objects of $\mathcal{D}(\mathcal{A})$; consult \cite[\S 3.2]{Keller06}.
A dg functor $F: \mathcal{A} \rightarrow \mathcal{B}$ is a \textit{Morita equivalence} if it induces an equivalence on the derived categories $\mathcal{D}(\mathcal{A}) \simeq \mathcal{D}(\mathcal{B})$; see \cite[\S 4.6]{Keller06}.
Given dg categories $\mathcal{A}$ and $\mathcal{B}$, their \textit{tensor product} $\mathcal{A} \otimes \mathcal{B}$ is the dg category whose set of objects is $\obj(\mathcal{A}) \times \obj(\mathcal{B})$ and $(\mathcal{A} \otimes \mathcal{B})((x,w),(y,z)):= \mathcal{A}(x,y) \otimes \mathcal{B}(w,z)$. Following \cite[\S 2.3]{Keller06}, this construction gives a symmetric monoidal structure on the category $\dgcat(k)$ of (essentially small) dg categories and dg functors over the base field $k$.
A \emph{dg $\mathcal{A}$-$\mathcal{B}$-bimodule} is a dg functor $\mathrm{B}: \mathcal{A} \otimes \mathcal{B}^{\mathrm{op}} \rightarrow \mathcal{C}_{\mathrm{dg}}(k)$, i.e., a right dg $\mathcal{A}^{\mathrm{op}} \otimes \mathcal{B}$-module. Given a dg functor $F: \mathcal{A} \rightarrow \mathcal{B}$ there exists an induced dg $\mathcal{A}$-$\mathcal{B}$-bimodule associated to $F$ defined as $\prescript{}{F}{\mathcal{B}} : \mathcal{A} \otimes \mathcal{B}^{\mathrm{op}} \longrightarrow \mathcal{C}_{\mathrm{dg}}(k),$ $(x,z) \longmapsto \mathcal{B}(z,F(x))$.
Following Kontsevich \cite{kont05Video, Kont09Chp,kont10Notes}, a dg category $\mathcal{A}$ is called \textit{smooth} if the dg $\mathcal{A}$-$\mathcal{A}$-bimodule $\prescript{}{id}{\mathcal{A}}$ belongs to the subcategory $\mathcal{D}_c(\mathcal{A}^{\mathrm{op}} \otimes \mathcal{A})$ and it is called \textit{proper} if $\sum_i \dim \mathrm{H}^i\mathcal{A}(x,y)<\infty$ for every pair of objects $x,y \in \mathcal{A}$.
The dg category of the perfect complexes $\perfdg (X)$ associated to a smooth proper $k$-scheme $X$ is an example of a smooth proper dg category; see \cite[Example 1.42(ii)]{Tab15a}. We shall write $\dgcat_{\mathrm{sp}}(k) \subseteq \dgcat(k)$ for the full subcategory of smooth proper dg categories.
Given dg categories $\mathcal{A}$ and $\mathcal{B}$ and a dg $\mathcal{B}$-$\mathcal{A}$-bimodule $\mathrm{B}$, consider the dg category $T(\mathcal{A}, \mathcal{B}; \mathrm{B})$ whose set of objects is the disjoint union of the sets of objects of $\mathcal{A}$ and $\mathcal{B}$, and whose complexes of morphisms are defined as follows:
\begin{align*}
T(\mathcal{A}, \mathcal{B}; \mathrm{B}) (x,y):=\left\lbrace\begin{matrix}
\mathcal{A}(x,y) & \text{if} & x,y \in \mathcal{A}\\
\mathcal{B}(x,y) & \text{if} & x,y \in \mathcal{B}\\
\mathrm{B}(y,x) & \text{if} & x \in \mathcal{A}, y \in \mathcal{B}\\
0 & \text{if} & y \in \mathcal{A}, x \in \mathcal{B}.\\
\end{matrix}\right.
\end{align*}
By construction, we have two canonical inclusions $i_\mathcal{A} : \mathcal{A} \rightarrow T(\mathcal{A}, \mathcal{B}; \mathrm{B})$ and $i_\mathcal{B} : \mathcal{B} \rightarrow T(\mathcal{A}, \mathcal{B}; \mathrm{B})$.
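For instance, taking $\mathcal{A}=\mathcal{B}=k$ and $\mathrm{B}:=k$ (the trivial $k$-$k$-bimodule concentrated in degree zero), the dg category $T(k,k;k)$ has two objects $x$ and $y$ with
\begin{align*}
T(k,k;k)(x,x)=T(k,k;k)(y,y)=T(k,k;k)(x,y)=k \qquad T(k,k;k)(y,x)=0,
\end{align*}
so its morphism complexes assemble into the $k$-algebra of upper triangular $2\times 2$ matrices $\left(\begin{smallmatrix} k & k \\ 0 & k \end{smallmatrix}\right)$.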
\begin{lemma}
\label{Lemma Triangular-system}
Let $\mathcal{A}$ and $\mathcal{B}$ be two dg categories and $\mathrm{B}$ a dg $\mathcal{B}$-$\mathcal{A}$-bimodule. Given a dg category $\mathcal{C}$ we have a canonical identification between the dg categories $T(\mathcal{A}, \mathcal{B}; \mathrm{B}) \otimes \mathcal{C}$ and $T(\mathcal{A} \otimes \mathcal{C}, \mathcal{B} \otimes \mathcal{C}; \mathrm{B} \otimes \mathcal{C})$.
\end{lemma}
\begin{proof}
The proof is simple and we leave it to the reader.
\end{proof}
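\begin{remark}
Concretely, on objects the identification of Lemma \ref{Lemma Triangular-system} is the canonical bijection $(\obj(\mathcal{A}) \amalg \obj(\mathcal{B})) \times \obj(\mathcal{C}) \simeq (\obj(\mathcal{A}) \times \obj(\mathcal{C})) \amalg (\obj(\mathcal{B}) \times \obj(\mathcal{C}))$, while on complexes of morphisms it reduces, in each of the four cases of the above definition, to the equality $T(\mathcal{A}, \mathcal{B}; \mathrm{B})(x,y) \otimes \mathcal{C}(w,z) = T(\mathcal{A} \otimes \mathcal{C}, \mathcal{B} \otimes \mathcal{C}; \mathrm{B} \otimes \mathcal{C})((x,w),(y,z))$.
\end{remark}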
\subsection{Noncommutative motives}
\label{subsection: nc motives}
We recall here some basic concepts on noncommutative motives. For a book, resp. survey, consult \cite{Tab15a}, resp. \cite{Tab18a}. Recall from \cite[\S 4.1]{Tab15a} the construction of the category of \textit{noncommutative Chow motives} with $\mathbb{Q}$-coefficients $\NChow(k)_\mathbb{Q}$.
This category is $\mathbb{Q}$-linear, additive, idempotent complete, rigid symmetric monoidal (i.e., all its objects are dualizable) and is equipped with a symmetric monoidal functor $U(-)_\mathbb{Q}: \dgcat_{\mathrm{sp}}(k) \rightarrow \NChow(k)_\mathbb{Q}$. Moreover, given any two smooth proper dg categories $\mathcal{A}$ and $\mathcal{B}$, we have the following computation:
\begin{align}
\label{Hom Nchow}
\Hom_{\NChow(k)_\mathbb{Q}}(U(\mathcal{A})_\mathbb{Q}, U(\mathcal{B})_\mathbb{Q}):=K_0(\mathcal{D}_c(\mathcal{A}^{\mathrm{op}} \otimes \mathcal{B}))_\mathbb{Q}=K_0(\mathcal{A}^{\mathrm{op}} \otimes \mathcal{B})_\mathbb{Q} .
\end{align}
Furthermore, the composition law in $\NChow(k)_\mathbb{Q}$ is induced by the (derived) tensor product of bimodules.
Finally, recall from \cite[\S 4.4 \& \S4.6]{Tab15a} the construction of the categories of noncommutative Voevodsky motives NVoev$(k)_\mathbb{Q}$ and noncommutative numerical motives NNum$(k)_\mathbb{Q}$. These categories are also $\mathbb{Q}$-linear, additive, idempotent complete, and rigid symmetric monoidal.
\section{Equivalence relations}\label{equiv rel}
Let $\mathcal{A}$ be a smooth proper dg category. In this section, we introduce three different equivalence relations on the rational Grothendieck group $K_0(\mathcal{A})_\mathbb{Q}:=K_0( \mathcal{D}_c (\mathcal{A}))_\mathbb{Q}$ and compare them with their classical commutative counterparts.
\subsection{Algebraic equivalence relation}
\label{subsection alg}
Given a smooth connected $k$-scheme $T$ and a $k$-rational point $p: \Spec (k) \rightarrow T$, consider the associated pull-back dg functor $p^*: \perfdg (T) \rightarrow \perfdg (\Spec (k))$. Note that $\perfdg (\Spec (k))$ and $k$ are Morita equivalent.
\begin{definition}
\label{def: algebrico}
An element $\alpha \in K_0(\mathcal{A})_\mathbb{Q}$ is called \emph{algebraically trivial} if it belongs to the image of the homomorphism
\begin{align}
\bigoplus_{(T,p,q)} K_0( \mathcal{A} \otimes \perfdg(T))_\mathbb{Q} \xrightarrow[]{\bigoplus (K_0(id \otimes p^*)_\mathbb{Q} - K_0(id \otimes q^*)_\mathbb{Q})} K_0(\mathcal{A})_\mathbb{Q},
\label{alg rel}
\end{align}
where the direct sum ranges over all smooth connected $k$-schemes $T$ equipped with two $k$-rational points $p$ and $q$.
\end{definition}
\begin{remark}
\label{remark: k alg closed}
In the particular case where $k$ is algebraically closed, any two $k$-rational points of $T$ can be joined by a smooth projective $k$-curve. Hence, in this particular case, it suffices to consider the triples $(C,p,q)$ where $C$ is a smooth projective $k$-curve equipped with two $k$-rational points $p$ and $q$.
\end{remark}
The above homomorphism (\ref{alg rel}) gives rise to the \textit{algebraic equivalence relation} $\sim_\mathrm{alg}$ on $K_0(\mathcal{A})_\mathbb{Q}$. In what follows, we will write $K_0(\mathcal{A})_\mathbb{Q} /_{\!\sim_\mathrm{alg}}$ for the associated quotient, i.e., for the cokernel of (\ref{alg rel}).
\begin{lemma}
\label{lemma: k algebrico sends morita to iso}
The functor $K_0(-)_\mathbb{Q}/_{\!\sim_\mathrm{alg}} : \dgcat_{\mathrm{sp}}(k) \rightarrow \Vect (\mathbb{Q})$ sends Morita equivalences to isomorphisms.
\end{lemma}
\begin{proof}
Let $F: \mathcal{A} \rightarrow \mathcal{B}$ be a Morita equivalence in $\dgcat_{\mathrm{sp}}(k)$. Since the functor $K_0(-)_\mathbb{Q}$ is an additive invariant (consult \cite[\S 2]{Tab15a}), $K_0(F)_\mathbb{Q}$ is an isomorphism.
Moreover, since Morita equivalences are preserved by tensor products over a field (consult \cite[\S 1.6.4]{Tab15a}), we have a Morita equivalence $F \otimes id_{\perfdg(T)}$ and, consequently, an isomorphism $K_0(F \otimes id_{\perfdg(T)})_\mathbb{Q}$. We then obtain the following commutative diagram:
\begin{align}
\label{diagram: k alg sends morita to iso}
\begin{split}
\xymatrix@C=5pc @R=4pc{
\bigoplus_{(T,p,q)}K_0(\mathcal{A} \otimes \perfdg(T))_\mathbb{Q} \ar[d]_-{\bigoplus K_0(F \otimes id_{\perfdg(T)})_\mathbb{Q}}^-{\simeq} \ar[r]^-{(\ref{alg rel})} & K_0(\mathcal{A})_\mathbb{Q} \ar[d]^-{K_0(F)_\mathbb{Q}}_-{\simeq} \ar@{->>}[r] & K_0(\mathcal{A})_\mathbb{Q}/_{\!\sim_\mathrm{alg}} \ar[d]^{K_0(F)_\mathbb{Q}/_{\!\sim_\mathrm{alg}}} \\
\bigoplus_{(T,p,q)}K_0(\mathcal{B} \otimes \perfdg(T))_\mathbb{Q} \ar[r]_-{(\ref{alg rel})} & K_0(\mathcal{B})_\mathbb{Q} \ar@{->>}[r] & K_0(\mathcal{B})_\mathbb{Q}/_{\!\sim_\mathrm{alg}}.}
\end{split}
\end{align}
Thanks to Definition \ref{def: algebrico}, we hence conclude from (\ref{diagram: k alg sends morita to iso}) that $K_0(F)_\mathbb{Q}/_{\!\sim_\mathrm{alg}}$ is an isomorphism.\end{proof}
\begin{lemma}
\label{lemma: K algebrico 2nd prop additive}
Let $\mathcal{A}$ be a smooth proper dg category such that $\mathrm{H}^0(\mathcal{A})=\langle \mathfrak{b}, \mathfrak{c} \rangle$ admits a semi-orthogonal decomposition in the sense of Bondal-Orlov \cite{BondalOrlov02}. Let $\mathfrak{b}^{\mathrm{dg}}$ and $\mathfrak{c}^{\mathrm{dg}}$ denote, respectively, the dg enhancement of $\mathfrak{b}$ and $\mathfrak{c}$ induced from $\mathcal{A}$.
Under these assumptions, the inclusions of $\mathfrak{b}^{\mathrm{dg}}$ and $\mathfrak{c}^{\mathrm{dg}}$ into $\mathcal{A}$ induce an isomorphism:
\begin{align*}
K_0(\mathfrak{b}^{\mathrm{dg}})_\mathbb{Q}/_{\!\sim_\mathrm{alg}} \oplus K_0(\mathfrak{c}^{\mathrm{dg}})_\mathbb{Q}/_{\!\sim_\mathrm{alg}} \stackrel{\simeq}{\longrightarrow} K_0(\mathcal{A})_\mathbb{Q}/_{\!\sim_\mathrm{alg}}.
\end{align*}
\end{lemma}
\begin{proof}
Consider the dg category $T(\mathfrak{b}^{\mathrm{dg}},\mathfrak{c}^{\mathrm{dg}}; \prescript{}{id}{\mathcal{A}})$, which is known to be smooth and proper; see \cite[Proposition 4.9]{KuznLunts15}.
Following the proof of \cite[Proposition 2.2]{Tab15a}, the inclusion of $T(\mathfrak{b}^{\mathrm{dg}},\mathfrak{c}^{\mathrm{dg}}; \prescript{}{id}{\mathcal{A}})$ into $\mathcal{A}$ is a Morita equivalence.
Therefore, since the functor $K_0(-)_\mathbb{Q}$ is an additive invariant, we obtain an induced isomorphism between $K_0(T(\mathfrak{b}^{\mathrm{dg}},\mathfrak{c}^{\mathrm{dg}};\prescript{}{id}{\mathcal{A}}))_\mathbb{Q}$ and $K_0(\mathcal{A})_\mathbb{Q}$.
Consequently, in order to finish the proof, it suffices to show that the inclusions of $\mathfrak{b}^{\mathrm{dg}}$ and $\mathfrak{c}^{\mathrm{dg}}$ into $T(\mathfrak{b}^{\mathrm{dg}},\mathfrak{c}^{\mathrm{dg}}; \prescript{}{id}{\mathcal{A}})$ induce an isomorphism
\begin{align}
\label{useful iso}
K_0(\mathfrak{b}^{\mathrm{dg}})_\mathbb{Q}/_{\!\sim_\mathrm{alg}} \oplus K_0(\mathfrak{c}^{\mathrm{dg}})_\mathbb{Q}/_{\!\sim_\mathrm{alg}} \longrightarrow K_0(T(\mathfrak{b}^{\mathrm{dg}},\mathfrak{c}^{\mathrm{dg}}; \prescript{}{id}{\mathcal{A}}))_\mathbb{Q}/_{\!\sim_\mathrm{alg}}.
\end{align}
Consider the following commutative diagram
\begin{align*}
\resizebox{19.024cm}{!}{
\xymatrix@C=2.5pc @R=4pc{
(\bigoplus_{(T,p,q)} K_0(\mathfrak{b}^{\mathrm{dg}} \otimes \perfdg T)_\mathbb{Q}) \oplus (\bigoplus_{(T,p,q)} K_0(\mathfrak{c}^{\mathrm{dg}} \otimes \perfdg T)_\mathbb{Q}) \ar[r]^-{(\ref{alg rel}) \oplus (\ref{alg rel})} \ar[d] & K_0(\mathfrak{b}^{\mathrm{dg}})_\mathbb{Q} \oplus K_0(\mathfrak{c}^{\mathrm{dg}})_\mathbb{Q} \ar@{->>}[r] \ar[d]_-\simeq & K_0(\mathfrak{b}^{\mathrm{dg}})_\mathbb{Q}/_{\!\sim_\mathrm{alg}} \oplus K_0(\mathfrak{c}^{\mathrm{dg}})_\mathbb{Q}/_{\!\sim_\mathrm{alg}} \ar[d]^{(\ref{useful iso})}\\
\bigoplus_{(T,p,q)} K_0(T(\mathfrak{b}^{\mathrm{dg}},\mathfrak{c}^{\mathrm{dg}}; \prescript{}{id}{\mathcal{A}}) \otimes \perfdg T)_\mathbb{Q} \ar[r]_-{(\ref{alg rel})} & K_0(T(\mathfrak{b}^{\mathrm{dg}},\mathfrak{c}^{\mathrm{dg}}; \prescript{}{id}{\mathcal{A}}))_\mathbb{Q} \ar@{->>}[r] & K_0(T(\mathfrak{b}^{\mathrm{dg}},\mathfrak{c}^{\mathrm{dg}}; \prescript{}{id}{\mathcal{A}}))_\mathbb{Q}/_{\!\sim_\mathrm{alg}}}}
\end{align*}
\noindent with exact rows and whose vertical arrows are induced by the inclusions of $\mathfrak{b}^{\mathrm{dg}}$ and $\mathfrak{c}^{\mathrm{dg}}$ into $T(\mathfrak{b}^{\mathrm{dg}},\mathfrak{c}^{\mathrm{dg}}; \prescript{}{id}{\mathcal{A}})$. We need to prove that (\ref{useful iso}) is an isomorphism.
By diagram chasing, we observe that (\ref{useful iso}) is surjective. Since the middle vertical map of the above diagram is an isomorphism, Lemma \ref{Lemma Triangular-system} implies that, for a fixed $(T,p,q)$, the corresponding map
\begin{align*}
K_0(\mathfrak{b}^{\mathrm{dg}} \otimes \perfdg T)_\mathbb{Q} \oplus K_0(\mathfrak{c}^{\mathrm{dg}} \otimes \perfdg T)_\mathbb{Q} \longrightarrow K_0(T(\mathfrak{b}^{\mathrm{dg}},\mathfrak{c}^{\mathrm{dg}}; \prescript{}{id}{\mathcal{A}}) \otimes \perfdg T)_\mathbb{Q}
\end{align*}
is an isomorphism as well.
This implies that the left-hand-side vertical map of the above diagram is surjective. Consequently, we conclude (once again by diagram chasing) that (\ref{useful iso}) is moreover injective and hence an isomorphism.
\end{proof}
\begin{lemma}
\label{lemma: azumaya algebras}
Let $X$ be a smooth proper $k$-scheme and $\mathbb{B}_0$ an Azumaya algebra over $X$. The canonical dg functor
\noindent $i_{X,\mathbb{B}_0}: \perfdg(X) \rightarrow \perfdg(X, \mathbb{B}_0)$ induces an isomorphism:
\begin{align*}
K_0(i_{X,\mathbb{B}_0})_\mathbb{Q}/_{\!\sim_\mathrm{alg}}: K_0(\perfdg(X))_\mathbb{Q}/_{\!\sim_\mathrm{alg}} \xrightarrow[]{\simeq}K_0(\perfdg(X, \mathbb{B}_0))_\mathbb{Q}/_{\!\sim_\mathrm{alg}}.
\end{align*}
\end{lemma}
\begin{proof}
Thanks to \cite[Propositions 8.3 and 8.17]{TabVB15}, the canonical dg functor $i_{X,\mathbb{B}_0}$ induces an isomorphism $U(i_{X,\mathbb{B}_0})_\mathbb{Q}$ in the category of noncommutative Chow motives $\NChow(k)_\mathbb{Q}$.
As a consequence, since the functor $U(-)_\mathbb{Q}$ is symmetric monoidal, the morphism $U(i_{X,\mathbb{B}_0} \otimes id_{\perfdg(T)})_\mathbb{Q}$ is also an isomorphism.
Thanks to the computation (\ref{Hom Nchow}), this implies that the following two maps are invertible:
\begin{align*}
K_0(i_{X,\mathbb{B}_0})_\mathbb{Q}: K_0(\perfdg(X))_\mathbb{Q} &\stackrel{\simeq}{\longrightarrow} K_0(\perfdg(X, \mathbb{B}_0))_\mathbb{Q};\\
K_0(i_{X,\mathbb{B}_0} \otimes id_{\perfdg(T)})_\mathbb{Q}: K_0(\perfdg(X) \otimes \perfdg (T))_\mathbb{Q} &\stackrel{\simeq}{\longrightarrow} K_0(\perfdg(X, \mathbb{B}_0) \otimes \perfdg (T))_\mathbb{Q}.
\end{align*}
Therefore, we obtain the following commutative diagram:
\begin{align}
\label{diagram: azumaya}
\begin{split}
\xymatrix@C=3pc @R=3.5pc{
\bigoplus_{(T,p,q)}K_0(\perfdg(X) \otimes \perfdg(T))_\mathbb{Q} \ar[d]_-{\bigoplus K_0(i_{X,\mathbb{B}_0} \otimes id_{\perfdg(T)})_\mathbb{Q}}^{\simeq} \ar[r]^-{(\ref{alg rel})} & K_0(\perfdg(X))_\mathbb{Q} \ar[d]^-{K_0(i_{X,\mathbb{B}_0})_\mathbb{Q}}_-{\simeq} \ar@{->>}[r] & K_0(\perfdg(X))_\mathbb{Q}/_{\!\sim_\mathrm{alg}} \ar[d]^{K_0(i_{X,\mathbb{B}_0})_\mathbb{Q}/_{\!\sim_\mathrm{alg}}} \\
\bigoplus_{(T,p,q)}K_0(\perfdg(X, \mathbb{B}_0) \otimes \perfdg(T))_\mathbb{Q} \ar[r]_-{(\ref{alg rel})} & K_0(\perfdg(X, \mathbb{B}_0))_\mathbb{Q} \ar@{->>}[r] & K_0(\perfdg(X, \mathbb{B}_0))_\mathbb{Q}/_{\!\sim_\mathrm{alg}}.}
\end{split}
\end{align}
Thanks to Definition \ref{def: algebrico}, we hence conclude from (\ref{diagram: azumaya}) that $K_0(i_{X,\mathbb{B}_0})_\mathbb{Q}/_{\!\sim_\mathrm{alg}}$ is an isomorphism.
\end{proof}
\subsection{Nilpotence equivalence relation}
\label{subsection nil}
Given an integer $m\geq 1$ consider the following functor:
\begin{align}
\prod _{i=1}^m\mathcal{D}_c(\mathcal{A}) \longrightarrow \mathcal{D}_c(\mathcal{A}^{\otimes m}) \qquad
\{M_i\}_{1 \leq i \leq m} \longmapsto \otimes_{i=1}^m M_i.
\label{step to nil rel}
\end{align}
The functor (\ref{step to nil rel}) is triangulated in each one of its variables. Hence, it gives rise to the following multilinear pairing:
\begin{align}
\prod _{i=1}^m K_0(\mathcal{A})_\mathbb{Q} \longrightarrow K_0(\mathcal{A}^{\otimes m})_\mathbb{Q} \qquad
\{[M_i]\}_{1 \leq i \leq m} \longmapsto [\otimes_{i=1}^m M_i].
\label{nil rel}
\end{align}
\begin{definition}
An element $\alpha \in K_0(\mathcal{A})_\mathbb{Q}$ is called \emph{nilpotently trivial} if there is an integer $m \geq 1$ such that the image of $\underbrace{(\alpha, \ldots, \alpha)}_{m\text{-copies}} $ under the multilinear pairing (\ref{nil rel}) is equal to $0$.
\end{definition}
The above multilinear pairing (\ref{nil rel}) gives rise to the \textit{nilpotence equivalence relation} $\sim_\mathrm{nil}$ on $K_0(\mathcal{A})_\mathbb{Q}$. In what follows, we will write $K_0(\mathcal{A})_\mathbb{Q}/_{\!\sim_\mathrm{nil}}$ for the associated quotient.
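As a minimal illustration of this definition, consider the case $\mathcal{A}=k$:

```latex
% For A = k we have K_0(k)_Q = Q and k^{(x)m} = k, so the pairing
% (\ref{nil rel}) reduces to m-fold multiplication in Q:
\[
\prod_{i=1}^{m} K_0(k)_\mathbb{Q} \longrightarrow K_0(k)_\mathbb{Q}
\qquad (\alpha,\ldots,\alpha) \longmapsto \alpha^{m}.
\]
% Since Q has no nonzero nilpotent elements, the only nilpotently trivial
% element of K_0(k)_Q is 0; hence ~nil is the trivial relation in this case.
```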
\begin{remark}
\label{remark: nil nchow}
Recall from \S \ref{subsection: nc motives} the following computation in the category of noncommutative Chow motives:
\begin{align*}
\Hom_{\NChow(k)_\mathbb{Q}} ( U(k)_\mathbb{Q}, U(\mathcal{A})_\mathbb{Q})=K_{0}(\mathcal{D}_c(k^{\mathrm{op}} \otimes \mathcal{A}))_\mathbb{Q}=K_{0}(\mathcal{A})_\mathbb{Q}.
\end{align*}
This enables the following rephrasing of the nilpotence equivalence relation: an element $\alpha \in K_0(\mathcal{A})_\mathbb{Q}$ is nilpotently trivial if and only if the associated morphism $\alpha: U(k)_\mathbb{Q} \rightarrow U(\mathcal{A})_\mathbb{Q}$ in $\NChow (k)_\mathbb{Q}$ is $\otimes $-nilpotent or, equivalently, if and only if the associated morphism $\alpha: U(\mathcal{A}^{\mathrm{op}})_\mathbb{Q} \rightarrow U(k)_\mathbb{Q}$ in $\NChow (k)_\mathbb{Q}$ is $\otimes $-nilpotent.
\end{remark}
\subsection{Numerical equivalence relation}
\label{subsection num}
Consider the following Euler bilinear pairing:
\begin{align}
\chi: K_0(\mathcal{A})_\mathbb{Q} \times K_0(\mathcal{A})_\mathbb{Q} \longrightarrow \mathbb{Q} \qquad
( [M],[N]) \longmapsto \sum_{n} (-1)^n \dim_k \Hom_{\mathcal{D}_c(\mathcal{A})}(M, N[n]).
\label{euler pairing}
\end{align}
Although the bilinear pairing (\ref{euler pairing}) is neither symmetric nor skew-symmetric, its left and right kernels agree; consult \cite[\S 4]{Tab15a}. Let us then write $\ker(\chi)$ for this (unique) kernel.
\begin{definition}
An element $\alpha \in K_0(\mathcal{A})_\mathbb{Q}$ is called \emph{numerically trivial} if it belongs to $\ker(\chi)$. In other words, we have $\chi (\alpha, \beta)=0$ (or, equivalently, $\chi (\beta, \alpha)=0$) for every $\beta \in K_0(\mathcal{A})_\mathbb{Q}$.
\label{num rel}
\end{definition}
Definition \ref{num rel} gives rise to the \textit{numerical equivalence relation} $\sim_\mathrm{num}$ on $K_0(\mathcal{A})_\mathbb{Q}$. In what follows we will write $K_0(\mathcal{A})_\mathbb{Q}/_{\!\sim_\mathrm{num}}$ for the associated quotient.
\subsection{Comparison between the different equivalence relations}
In this subsection we compare the different equivalence relations introduced in \S\ref{subsection alg}--\S\ref{subsection num}.
\begin{theorem}
\label{ordering}
Given an element $\alpha \in K_0(\mathcal{A})_\mathbb{Q}$, we have the implications $\alpha \sim_\mathrm{alg} 0 \Rightarrow \alpha \sim_\mathrm{nil} 0 \Rightarrow \alpha \sim_\mathrm{num} 0$.
\end{theorem}
\begin{proof}
We start by proving the implication $\alpha \sim_\mathrm{alg} 0 \Rightarrow \alpha \sim_\mathrm{nil} 0$. Thanks to Proposition \ref{prop: algebraic closure} below, we can assume without loss of generality that $k$ is algebraically closed. Let $\alpha \in K_0(\mathcal{A})_\mathbb{Q}$ be an algebraically trivial element.
Recall from Remark \ref{remark: nil nchow} that it is enough to show that the associated morphism $\alpha : U(\mathcal{A}^{\mathrm{op}})_\mathbb{Q} \rightarrow U(k)_\mathbb{Q}$ in the category $\NChow(k)_\mathbb{Q}$ is $\otimes$-nilpotent.
Since $k$ is algebraically closed and the nilpotently trivial elements form a $\mathbb{Q}$-linear subspace of $K_0(\mathcal{A})_\mathbb{Q}$, we can assume that there exists a single smooth projective curve $C$, equipped with two $k$-rational points $p$ and $q$, and an element $\beta \in K_0(\mathcal{A} \otimes \perfdg (C))_\mathbb{Q}$ such that $\alpha= K_0(id_\mathcal{A} \otimes p^*)_\mathbb{Q}(\beta)-K_0(id_\mathcal{A} \otimes q^*)_\mathbb{Q}(\beta)$; see Remark \ref{remark: k alg closed}.
Making use of the following computations
\begin{align*}
\Hom_{\NChow(k)_\mathbb{Q}}(U(\mathcal{A}^{\mathrm{op}})_\mathbb{Q},U(\perfdg (C))_\mathbb{Q})\simeq K_0(\mathcal{A} \otimes_k \perfdg (C))_\mathbb{Q} \qquad \Hom_{\NChow(k)_\mathbb{Q}}(U(\mathcal{A}^{\mathrm{op}})_\mathbb{Q},U(k)_\mathbb{Q})\simeq K_0(\mathcal{A})_\mathbb{Q},
\end{align*}
we observe that the associated morphism $\alpha : U(\mathcal{A}^{\mathrm{op}})_\mathbb{Q} \rightarrow U(k)_\mathbb{Q}$ can be written as the following composition
\begin{align*}
\alpha : U(\mathcal{A}^{\mathrm{op}})_\mathbb{Q} \xrightarrow{\quad \beta \quad} U(\perfdg (C))_\mathbb{Q} \xrightarrow{U(p^*)_\mathbb{Q} - U(q^*)_\mathbb{Q}} U(k)_\mathbb{Q},
\end{align*}
where $\beta$ is the morphism associated to the element $\beta \in K_0(\mathcal{A} \otimes \perfdg (C))_\mathbb{Q}$.
Consequently, once we show that $U(p^*)_\mathbb{Q} - U(q^*)_\mathbb{Q}$ is $\otimes$-nilpotent, we conclude that $\alpha$ is $\otimes$-nilpotent too.
Let $\Chow(k)_\mathbb{Q}$ be the classical category of Chow motives; see Manin \cite{Manin}. This category is $\mathbb{Q}$-linear, additive, idempotent complete, and rigid symmetric monoidal.
In addition, it comes equipped with a symmetric monoidal functor $\mathfrak{h}(-)_\mathbb{Q}: \text{SmProp}(k)^{\mathrm{op}} \rightarrow \Chow(k)_\mathbb{Q}$ defined on smooth proper $k$-schemes. Following \cite[Theorem 4.3]{Tab15a}, there exists a $\mathbb{Q}$-linear, fully faithful, symmetric monoidal functor $\Phi$ making the following diagram commutative:
\begin{align}
\label{diagram: chow and nchow}
\begin{split}
\xymatrix@C=4pc @R=2pc{
\text{SmProp}(k)^{\mathrm{op}} \ar[d]_{\mathfrak{h}(-)_\mathbb{Q}} \ar[r]^{X \longmapsto \perfdg (X)} & \dgcat_{\mathrm{sp}}(k) \ar[dd]^{U(-)_\mathbb{Q}}\\
\Chow(k)_\mathbb{Q} \ar[d]_{\tau} & \\
\Chow(k)_\mathbb{Q}/(-\otimes \mathbb{Q}(1)) \ar[r]_-{\Phi} & \NChow(k)_\mathbb{Q},}
\end{split}
\end{align}
\noindent where we denote by $\Chow(k)_\mathbb{Q}/(-\otimes \mathbb{Q}(1))$ the orbit category with respect to the Tate motive $\mathbb{Q}(1)$; consult \cite[\S 4.2]{Tab15a} for the definition of the orbit category. In \cite[Proposition 3.1]{Voevodsky95} it is proven that the morphism $\mathfrak{h}(p)_\mathbb{Q}-\mathfrak{h}(q)_\mathbb{Q}: \mathfrak{h}(C)_\mathbb{Q} \rightarrow \mathfrak{h}( \Spec (k))_\mathbb{Q}$, corresponding to the degree zero cycle $p-q$ on $C$, is $\otimes$-nilpotent.
Since both functors $\tau$ and $\Phi$ are symmetric monoidal, the commutativity of diagram (\ref{diagram: chow and nchow}) hence implies that $U(p^*)_\mathbb{Q}-U(q^*)_\mathbb{Q}$ is $\otimes$-nilpotent. This concludes the proof.
We now prove the implication $\alpha \sim_\mathrm{nil} 0 \Rightarrow \alpha \sim_\mathrm{num} 0$. Recall first from \cite[\S 5 and Proposition 6.2]{MTab12} that an element $\alpha \in K_0(\mathcal{A})_\mathbb{Q}$ is numerically trivial if and only if for every $\beta \in K_0(\mathcal{A}^{\mathrm{op}})_\mathbb{Q}$ the associated composition
\begin{align*}
U(k)_\mathbb{Q} \xrightarrow{\quad \alpha \quad } U(\mathcal{A})_\mathbb{Q} \xrightarrow{\quad \beta \quad } U(k)_\mathbb{Q}
\end{align*}
is equal to zero.
Let $\alpha\in K_0(\mathcal{A})_\mathbb{Q}$ be a nilpotently trivial element. Bearing in mind Remark \ref{remark: nil nchow}, the associated morphism $\alpha :U(k)_\mathbb{Q} \rightarrow U(\mathcal{A})_\mathbb{Q}$ is $\otimes$-nilpotent.
Suppose, for the sake of contradiction, that $\alpha$ is not numerically trivial.
In this case, there would exist an element $\beta \in K_0(\mathcal{A}^{\mathrm{op}})_\mathbb{Q}$ such that the associated morphism $\beta : U(\mathcal{A})_\mathbb{Q} \rightarrow U(k)_\mathbb{Q}$ composed with $\alpha$ is different from zero.
Using the fact that $\beta \circ \alpha \in \Hom_{\NChow(k)_\mathbb{Q}}(U(k)_\mathbb{Q},U(k)_\mathbb{Q})\simeq \mathbb{Q}$, we can further assume without loss of generality that $\beta \circ \alpha$ is the identity. But the identity of $U(k)_\mathbb{Q}$ is not $\otimes$-nilpotent, whereas the composition of the $\otimes$-nilpotent morphism $\alpha$ with $\beta$ is $\otimes$-nilpotent; this is a contradiction. In conclusion, $\alpha$ is also numerically trivial.
\end{proof}
\begin{proposition}\label{prop: algebraic closure}
Let $\overline{k}/k$ be a fixed algebraic closure of $k$. Given an element $\alpha \in K_0(\mathcal{A})_\mathbb{Q}$, the following holds:
\begin{align*}
\alpha \sim_\mathrm{alg} 0 \Rightarrow (\alpha \otimes_k \overline{k}) \sim_\mathrm{alg} 0 \qquad \text{ and } \qquad \alpha \sim_\mathrm{nil} 0 \Leftrightarrow (\alpha \otimes_k \overline{k}) \sim_\mathrm{nil} 0.
\end{align*}
\end{proposition}
\begin{proof}
It follows from \cite[Proposition 7.2]{MTab14} that if $\mathcal{A} \in \dgcat_{\mathrm{sp}}(k)$, then $\mathcal{A} \otimes_k \overline{k} \in \dgcat_{\mathrm{sp}}(\overline{k})$.
Note also that given a smooth connected $k$-scheme $T$, $\perfdg(T) \otimes_k \overline{k}$ is a smooth $\overline{k}$-linear dg category which is canonically Morita equivalent to $\perfdg (\overline{T})$, where $\overline{T}:= T \times_{\Spec (k)} \Spec (\overline{k})$.
Hence, the implication $\alpha \sim_\mathrm{alg} 0 \Rightarrow \alpha \otimes_k \overline{k} \sim_\mathrm{alg} 0$ follows from diagram (\ref{diagram: algebraic closure}) below (where $T'$ is a smooth connected $\overline{k}$-scheme equipped with two $\overline{k}$-rational points $p'$ and $q'$):
\begin{align}
\label{diagram: algebraic closure}
\begin{split}
\xymatrix@C=4pc @R=4pc{\bigoplus_{(T',p',q')} K_0((\mathcal{A} \otimes_k \overline{k}) \otimes_{\overline{k}} \perfdg (T') )_\mathbb{Q} \ar[r]^-{(\ref{alg rel})}& K_0(\mathcal{A} \otimes_k \overline{k})_\mathbb{Q}\\
\bigoplus_{(T,p,q)} K_0(\mathcal{A} \otimes_k \perfdg (T) )_\mathbb{Q} \ar[r]_-{(\ref{alg rel})} \ar[u]^{K_0(-\otimes_k \overline{k})_\mathbb{Q}} & K_0(\mathcal{A})_\mathbb{Q} \ar[u]_{K_0(-\otimes_k \overline{k})_\mathbb{Q}}.}
\end{split}
\end{align}
We now prove the equivalence $\alpha \sim_\mathrm{nil} 0 \Leftrightarrow (\alpha \otimes_k \overline{k}) \sim_\mathrm{nil} 0$. For any integer $m\geq 1$, we have the commutative diagram:
\begin{align}
\label{diagram: algebraic closure equivalence nil}
\begin{split}
\xymatrix@C=6.5pc @R=4pc{
\prod_{i=1}^m K_0(\mathcal{A} \otimes_k \overline{k})_\mathbb{Q} \ar[r]^-{(\ref{nil rel})} & K_0((\mathcal{A}\otimes_k \overline{k})^{\otimes_{\overline{k}} m})_\mathbb{Q}\\
\prod_{i=1}^m K_0(\mathcal{A})_\mathbb{Q} \ar[r]_-{(\ref{nil rel})} \ar[u]^{\prod_{i=1}^m K_0(-\otimes_k \overline{k})_\mathbb{Q}} & K_0(\mathcal{A}^{\otimes_k m})_\mathbb{Q} \ar[u]_{K_0(-\otimes_k \overline{k})_\mathbb{Q}}.}
\end{split}
\end{align}
Moreover, we have a canonical identification $\mathcal{A}^{\otimes_{k}m} \otimes_{k} \overline{k} \simeq (\mathcal{A} \otimes_{k} \overline{k})^{\otimes_{\overline{k}}m}$. Therefore, by applying Lemma \ref{lemma: alg closure induces injection} below to the dg category $\mathcal{A}^{\otimes_{k}m}$, we conclude that the right-hand vertical homomorphism $K_0(-\otimes_k \overline{k})_\mathbb{Q}$ in (\ref{diagram: algebraic closure equivalence nil}) is injective. Consequently, the claimed equivalence follows from the commutativity of (\ref{diagram: algebraic closure equivalence nil}).
\end{proof}
\begin{lemma}
\label{lemma: alg closure induces injection}
Given an algebraic closure $\overline{k}/k$, the induced homomorphism $K_0(-\otimes_k \overline{k})_\mathbb{Q}: K_0(\mathcal{A})_\mathbb{Q} \rightarrow K_0(\mathcal{A} \otimes_k \overline{k} )_\mathbb{Q}$ is injective.
\end{lemma}
\begin{proof}
Assume first that $\overline{k}/k$ is a finite field extension of degree $d$. In this case, the restriction along the field extension $k \rightarrow \overline{k}$ yields a homomorphism $\text{Res}_{\overline{k}/k}: K_0(\mathcal{A} \otimes_k \overline{k})_\mathbb{Q} \rightarrow K_0(\mathcal{A})_\mathbb{Q}$ such that
$\text{Res}_{\overline{k}/k} \circ K_0(- \otimes_k \overline{k})_\mathbb{Q}$ is equal to multiplication by $d$.
Therefore, since $K_0(\mathcal{A})_\mathbb{Q}$ is a $\mathbb{Q}$-vector space, the induced homomorphism $K_0(-\otimes_k \overline{k})_\mathbb{Q}$ is injective.
In the case where the field extension $\overline{k}/k$ is infinite, $\overline{k}$ identifies with the colimit of the filtered diagram $\{k_i\}_{i \in I}$ of all the intermediate field extensions $\overline{k}/k_i/k$ which are finite over $k$. Since both functors $-\otimes_k \overline{k}$ and $K_0(-)_\mathbb{Q}$ preserve filtered (homotopy) colimits, we have $K_0(\mathcal{A} \otimes _k \overline{k})_\mathbb{Q} \simeq \colim_{i \in I} K_0(\mathcal{A} \otimes_k k_i)_\mathbb{Q}$, and the injectivity of $K_0(-\otimes_k \overline{k})_\mathbb{Q}$ now follows from the finite field extension case.
\end{proof}
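As a concrete instance of the above transfer argument, take $k=\mathbb{R}$, $\overline{k}=\mathbb{C}$ (so $d=2$) and $\mathcal{A}=\mathbb{R}$, viewed as a dg category concentrated in degree zero:

```latex
% Base change sends [R] to [C]; restricting C back to an R-vector space gives
% [C] = [R^2] = 2[R].  Hence the composite
% Res_{C/R} o K_0(- (x)_R C)_Q is multiplication by 2,
% which is invertible on the Q-vector space K_0(R)_Q:
\[
K_0(\mathbb{R})_\mathbb{Q}
\xrightarrow{\;K_0(-\otimes_\mathbb{R}\mathbb{C})_\mathbb{Q}\;}
K_0(\mathbb{C})_\mathbb{Q}
\xrightarrow{\;\text{Res}_{\mathbb{C}/\mathbb{R}}\;}
K_0(\mathbb{R})_\mathbb{Q}, \qquad
[\mathbb{R}] \longmapsto [\mathbb{C}] \longmapsto [\mathbb{R}^{2}] = 2\,[\mathbb{R}].
\]
```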
\subsection{Classical equivalence relations on algebraic cycles}
Given a smooth proper $k$-scheme $X$, recall from \cite[Corollary 18.3.2]{Fulton2nd_ed} that the following map is invertible
\begin{align}
\label{iso from nc to c}
K_0(\perfdg (X) )_\mathbb{Q} \stackrel{\simeq}{\longrightarrow} \mathcal{Z}^*(X)_\mathbb{Q}/_{\!\sim_\mathrm{rat}} \qquad
[\mathcal{F}] \longmapsto \ch(\mathcal{F}) \cdot \sqrt{\td_X},
\end{align}
where $\ch(\mathcal{F})$ stands for the Chern character of $\mathcal{F}$ and $\td_X$ for the Todd class of $X$.
\begin{theorem}
\label{theo: non commutative to classical rel}
The above map (\ref{iso from nc to c}) induces the following isomorphisms:
\begin{align}
\label{iso from nc to c in theorem}
K_0(\perfdg(X))_\mathbb{Q}/_{\!\sim_\mathrm{alg}} \xrightarrow{\simeq} \mathcal{Z}^*(X)_\mathbb{Q}/_{\!\sim_\mathrm{alg}} \quad K_0(\perfdg(X))_\mathbb{Q}/_{\!\sim_\mathrm{nil}} \xrightarrow{\simeq} \mathcal{Z}^*(X)_\mathbb{Q}/_{\!\sim_\mathrm{nil}}
\quad K_0(\perfdg(X))_\mathbb{Q}/_{\!\sim_\mathrm{num}} \xrightarrow{\simeq} \mathcal{Z}^*(X)_\mathbb{Q}/_{\!\sim_\mathrm{num}}.
\end{align}
\end{theorem}
\begin{proof}
The left-hand-side and middle isomorphisms in (\ref{iso from nc to c in theorem}) follow from Lemmas \ref{lemma: chern nc to c}-\ref{lemma: todd nc to c} below.
In order to prove the right-hand-side in (\ref{iso from nc to c in theorem}), note that, given any two perfect complexes $\mathcal{F}$, $\mathcal{G} \in \perf(X)$, we have the following equalities
\begin{align}
\label{equation: rephrase euler pairing}
\chi([\mathcal{F}],[\mathcal{G}])= K_0(\pi_*)_\mathbb{Q}([\textbf{R}\underline{\Hom}(\mathcal{F},\mathcal{G})])=K_0(\pi_*)_\mathbb{Q}([\mathcal{F}^* \otimes_X^{\textbf{L}} \mathcal{G}] ),
\end{align}
where $\pi : X \rightarrow \Spec (k)$ stands for the structure map of $X$ and $\textbf{R}\underline{\Hom}(-,-)$, resp. $(-)^*$, stands for the internal $\Hom$, resp. (contravariant) duality, of the category $\perfdg (X)$. Hence, we obtain the following commutative diagram:
\begin{align}
\label{diagram: num nc to c}
\begin{split}
\xymatrix@C=2.5pc @R=1pc{
K_0(\perfdg (X))_\mathbb{Q} \times K_0(\perfdg (X))_\mathbb{Q} \ar@{=}[dd] \ar[rrrr]^-{\chi} & & & & \mathbb{Q} \ar@{=}[dd] \\
& & \circled{1} & & \\
K_0(\perfdg (X))_\mathbb{Q} \times K_0(\perfdg (X))_\mathbb{Q} \ar[rr]_-{K_0((-)^*\otimes^{\textbf{L}}_X -)_\mathbb{Q}} \ar[dd]_{K_0((-)^*)_\mathbb{Q} \times id}^{\simeq}& & K_0(\perfdg (X))_\mathbb{Q} \ar[rr]_-{K_0(\pi_*)_\mathbb{Q}} \ar@{=}[dd] & & \mathbb{Q} \ar@{=}[dd]\\
& \circled{2} & & \circled{3} &\\
K_0(\perfdg (X))_\mathbb{Q} \times K_0(\perfdg (X))_\mathbb{Q} \ar[rr]^-{-\cdot -} \ar[dd]_{( \ch(-)\cdot \sqrt{\td_X}) \times( \ch(-) \cdot \sqrt{\td_X})}^{\simeq} & & K_0(\perfdg (X))_\mathbb{Q} \ar[rr]^-{K_0(\pi_*)_\mathbb{Q}} \ar[dd]_{\simeq}^{ \ch(-)\cdot \td_X}& & \mathbb{Q} \ar@{=}[dd]\\
& \circled{4} & & \circled{5} &\\
\mathcal{Z}^*(X)_\mathbb{Q}/_{\!\sim_\mathrm{rat}} \times \mathcal{Z}^*(X)_\mathbb{Q}/_{\!\sim_\mathrm{rat}} \ar[rr]_-{- \cdot -} & & \mathcal{Z}^*(X)_\mathbb{Q}/_{\!\sim_\mathrm{rat}} \ar[rr]_-{\pi_{*}} & & \mathbb{Q}.}
\end{split}
\end{align}
The square $\circled{1}$ is commutative, thanks to (\ref{equation: rephrase euler pairing}). The commutativity of the squares $\circled{2}$ and $\circled{3}$ is obvious; note that $\mathbb{Q}=K_0(\Spec(k))_\mathbb{Q}$.
The square $\circled{4}$ is commutative since $\ch(-)$ is multiplicative.
Finally, the square $\circled{5}$ is commutative because of the Grothendieck-Riemann-Roch formula; see \cite[Theorem 5.26]{Huyb06}. Note that the right vertical map of the square $\circled{5}$, i.e., $\ch(-)\cdot \td_{\Spec (k)}$, is the identity map.
Indeed, since $K_0(\Spec(k))_\mathbb{Q}=\mathbb{Q}$ is a graded ring concentrated in degree zero, we have $\td_{\Spec (k)}=1$ and the ring map $\ch(-)$ from $K_0(\Spec(k))_\mathbb{Q}$ to $\mathcal{Z}^*(\Spec(k))_\mathbb{Q}/_{\!\sim_\mathrm{rat}}=\mathbb{Q}$ is the identity.
We already know that $\ch(-)\cdot \sqrt{\td_X}$ is an isomorphism, and $K_0( (-)^*)_\mathbb{Q} \times id$ is an isomorphism because both $(-)^*$ and $id$ are equivalences. This establishes the commutativity of diagram (\ref{diagram: num nc to c}).
Finally, recall that an algebraic cycle $\mu \in \mathcal{Z}^*(X)_\mathbb{Q}/_{\!\sim_\mathrm{rat}}$ is called numerically trivial if $\pi_*(\nu \cdot \mu)=0$ for all $\nu \in \mathcal{Z}^*(X)_\mathbb{Q}/_{\!\sim_\mathrm{rat}}$. Therefore, given an element $[\mathcal{F}] \in K_0(\perfdg(X))_\mathbb{Q}$, we conclude from the outer commutative square of diagram (\ref{diagram: num nc to c}) that $[\mathcal{F}]$ is numerically trivial if and only if $\ch(\mathcal{F}) \cdot \sqrt{\td_X}$ is numerically trivial. This concludes the proof of Theorem \ref{theo: non commutative to classical rel}.
\end{proof}
\begin{lemma}\label{lemma: chern nc to c}
The Chern character $\ch(-): K_0(\perfdg (X) )_\mathbb{Q} \stackrel{\simeq}{\longrightarrow} \mathcal{Z}^*(X)_\mathbb{Q}/_{ \!\sim_\mathrm{rat}}$ induces the following isomorphisms:
\begin{align}
\label{iso: chern nc to c}
K_0(\perfdg(X))_\mathbb{Q}/_{\!\sim_\mathrm{alg}} \stackrel{\simeq}{\longrightarrow} \mathcal{Z}^*(X)_\mathbb{Q}/_{\!\sim_\mathrm{alg}} \qquad \qquad K_0(\perfdg(X))_\mathbb{Q}/_{\!\sim_\mathrm{nil}} \stackrel{\simeq}{\longrightarrow}\mathcal{Z}^*(X)_\mathbb{Q}/_{\!\sim_\mathrm{nil}}.
\end{align}
\end{lemma}
\begin{proof}
From \cite[Lemma 4.26]{TabVB18} we have that, for $T$ a smooth connected $k$-scheme, the following dg functor is a Morita equivalence:
\begin{align*}
- \boxtimes- : \perfdg (X) \otimes \perfdg (T) \longrightarrow \perfdg(X \times T).
\end{align*}
Moreover, by the properties of Chern character (see \cite[page 282, (ii)]{Fulton2nd_ed}), one obtains the commutative diagram:
\begin{align}
\label{diagram: chern I}
\begin{split}
\xymatrix@C=10pc @R=2.5pc{
\bigoplus_{(T,p,q)}K_0(\perfdg (X) \otimes \perfdg(T))_\mathbb{Q} \ar[d]_{\bigoplus K_0(-\boxtimes-)_\mathbb{Q}}^{\simeq} \ar[r]^-{\bigoplus (K_0(id \otimes p^*)_\mathbb{Q} - K_0(id \otimes q^*)_\mathbb{Q})} & K_0(\perfdg(X))_\mathbb{Q} \ar[dd]_\simeq^{\ch(-)}\\
\bigoplus_{(T,p,q)}K_0(\perfdg(X \times T))_\mathbb{Q} \ar[d]_{\bigoplus\ch(-)}^{\simeq} & \\
\bigoplus_{(T,p,q)}\mathcal{Z}^*(X \times T)_\mathbb{Q}/_{\!\sim_\mathrm{rat}} \ar[r]_-{\bigoplus((id \times p)^*-(id \times q)^*)} & \mathcal{Z}^*(X)_\mathbb{Q}/_{\!\sim_\mathrm{rat}},}
\end{split}
\end{align}
\noindent where $p$ and $q$ stand for $k$-rational points of a smooth connected $k$-scheme $T$. Recall that an algebraic cycle $\mu \in \mathcal{Z}^*(X)_\mathbb{Q}/_{\!\sim_\mathrm{rat}}$ is called algebraically trivial if it belongs to the image of $\bigoplus_{(T,p,q)} ((id \times p)^* - (id \times q)^*)$. Therefore, the left-hand-side isomorphism in (\ref{iso: chern nc to c}) follows from the above commutative diagram (\ref{diagram: chern I}).
In order to prove the right-hand-side isomorphism in (\ref{iso: chern nc to c}), note that a repeated use of \cite[Lemma 4.26]{TabVB18} shows that the following dg functor
\begin{align*}
- \boxtimes - \cdots - \boxtimes - : \perfdg(X)^{\otimes m} \longrightarrow \perfdg(X^{\times m})
\end{align*}
is a Morita equivalence. Therefore, for any integer $m\geq 1$ we obtain the following commutative diagram:
\begin{align}
\label{diagram: chern II}
\begin{split}
\xymatrix@C=11pc @R=2.25pc{
\prod_{i=1}^m K_0(\perfdg(X))_\mathbb{Q} \ar[r]^-{([\mathcal{F}_1],\ldots, [\mathcal{F}_m]) \longmapsto [\mathcal{F}_1 \otimes \ldots \otimes \mathcal{F}_m]} \ar[dd]_{\prod \ch(-)}^{\simeq}& K_0(\perfdg (X)^{\otimes m})_\mathbb{Q} \ar[d]_{\simeq}^{K_0(- \boxtimes - \ldots - \boxtimes -)_\mathbb{Q}}\\
& K_0(\perfdg(X^{\times m}))_\mathbb{Q} \ar[d]_{\simeq}^{\ch(-)}\\
\prod_{i=1}^m \mathcal{Z}^*(X)_\mathbb{Q}/_{\!\sim_\mathrm{rat}} \ar[r]_-{(\mu_1,\ldots, \mu_m) \longmapsto \mu_1 \times \ldots \times \mu_m} & \mathcal{Z}^*(X^{\times m})_\mathbb{Q}/_{\!\sim_\mathrm{rat}}.
}
\end{split}
\end{align}
Recall that an algebraic cycle $\mu \in \mathcal{Z}^*(X)_\mathbb{Q}/_{\!\sim_\mathrm{rat}}$ is called nilpotently trivial if there is an integer $m \geq 1$ such that the $m$-fold external product $\mu^{\times m}:=\mu \times \ldots \times \mu$ is equal to $0$ in $\mathcal{Z}^*(X^{\times m})_\mathbb{Q}/_{\!\sim_\mathrm{rat}}$. Therefore, the right-hand-side isomorphism in (\ref{iso: chern nc to c}) follows from the above commutative diagram (\ref{diagram: chern II}).
\end{proof}
\begin{lemma}
\label{lemma: todd nc to c}
The homomorphism $- \cdot \sqrt{\td_{X}}: \mathcal{Z}^*(X)_\mathbb{Q}/_{ \!\sim_\mathrm{rat}} \stackrel{\simeq}{\longrightarrow} \mathcal{Z}^*(X)_\mathbb{Q}/_{ \!\sim_\mathrm{rat}}$ induces the following isomorphisms:
\begin{align}
\label{iso: todd nc to c}
\mathcal{Z}^*(X)_\mathbb{Q}/_{\!\sim_\mathrm{alg}} \stackrel{\simeq}{\longrightarrow} \mathcal{Z}^*(X)_\mathbb{Q}/_{\!\sim_\mathrm{alg}} \qquad \qquad \mathcal{Z}^*(X)_\mathbb{Q}/_{\!\sim_\mathrm{nil}} \stackrel{\simeq}{\longrightarrow} \mathcal{Z}^*(X)_\mathbb{Q}/_{\!\sim_\mathrm{nil}}.
\end{align}
\end{lemma}
\begin{proof}
Note that in order to prove the left-hand-side of (\ref{iso: todd nc to c}) it suffices to show that the following square is commutative:
\begin{align}
\label{diagram: mult todd alg}
\begin{split}
\xymatrix@C=6.5pc @R=4pc{
\bigoplus_{(T,p,q)}\mathcal{Z}^*(X \times T)_\mathbb{Q}/_{\!\sim_\mathrm{rat}} \ar[d]_-{\bigoplus (-\cdot \sqrt{\td_{X \times T}})}^-{\simeq} \ar[r]^-{\bigoplus((id \times p)^*-(id \times q)^*)} & \mathcal{Z}^*(X)_\mathbb{Q}/_{\!\sim_\mathrm{rat}} \ar[d]^-{-\cdot \sqrt{\td_X}}_-{\simeq}\\
\bigoplus_{(T,p,q)}\mathcal{Z}^*(X \times T)_\mathbb{Q}/_{\!\sim_\mathrm{rat}} \ar[r]_-{\bigoplus((id \times p)^*-(id \times q)^*)} & \mathcal{Z}^*(X)_\mathbb{Q}/_{\!\sim_\mathrm{rat}},}
\end{split}
\end{align}
\noindent where $p$ and $q$ stand for $k$-rational points of a smooth connected $k$-scheme $T$. Let $\pi_X$ and $\pi_T$ be, respectively, the projections from $X \times T$ to $X$ and $T$. Consider the pull-backs $\pi_X^*: \mathcal{Z}^*(X)_\mathbb{Q}/_{\!\sim_\mathrm{rat}} \rightarrow \mathcal{Z}^*(X \times T)_\mathbb{Q}/_{\!\sim_\mathrm{rat}}$ and $\pi_T^*: \mathcal{Z}^*(T)_\mathbb{Q}/_{\!\sim_\mathrm{rat}} \rightarrow \mathcal{Z}^*(X \times T)_\mathbb{Q}/_{\!\sim_\mathrm{rat}}$.
It is known, by the proof of \cite[Lemma 10.6]{Huyb06}, that $\sqrt{\td_{X \times T}}=\pi_X^*(\sqrt{\td_X})\cdot \pi_T^*(\sqrt{\td_T})$. Consequently, we conclude that
\begin{align*}
(id \times p)^*(\sqrt{\td_{X \times T}})&=(id \times p)^*(\pi_X^*(\sqrt{\td_X})\cdot \pi_T^*(\sqrt{\td_T}))\\
&=(id \times p)^*(\pi_X^*(\sqrt{\td_X})) \cdot (id \times p)^*(\pi_T^*(\sqrt{\td_T}))\\
&= (\pi_X \circ (id \times p))^*(\sqrt{\td_X}) \cdot (\pi_T \circ (id \times p))^*(\sqrt{\td_T})\\
&= id^*(\sqrt{\td_X}) \cdot (\pi_T \circ (id \times p))^*(\sqrt{\td_T})\\
&=\sqrt{\td_X} \cdot (\pi_T \circ (id \times p))^*(\sqrt{\td_T}).
\end{align*}
Recall from \cite[\S 5.2]{Huyb06} that the degree zero term of $\td_T$, in $\mathrm{H}^0(T;\mathbb{Q})$, is $1$. We may therefore choose for $\sqrt{\td_T}$ the cohomology class whose degree zero term is $1$. Since $\mathcal{Z}^*(\Spec (k))_\mathbb{Q}/_{\!\sim_\mathrm{rat}}=\mathbb{Q}$ and $p^*$ is a graded ring morphism, the following equalities hold for every $\nu \in \mathcal{Z}^*(X \times T)_\mathbb{Q}/_{\!\sim_\mathrm{rat}}$:
\begin{align*}
(id \times p)^*(\nu \cdot \sqrt{\td_{X \times T}})&= (id \times p)^*(\nu) \cdot (id \times p)^*(\sqrt{\td_{X \times T}})\\
&=(id \times p)^*(\nu) \cdot \sqrt{\td_X} \cdot (\pi_T \circ (id \times p))^*(\sqrt{\td_T})\\
&= (id \times p)^*(\nu) \cdot \sqrt{\td_X} \cdot (p \circ \pi_{\Spec(k)})^*(\sqrt{\td_T}) \\
&= (id \times p)^*(\nu) \cdot \sqrt{\td_X} \cdot (\pi_{\Spec(k)}^* \circ p^*)(\sqrt{\td_T})\\
&= (id \times p)^*(\nu) \cdot \sqrt{\td_X} \cdot \pi_{\Spec(k)}^*(1) \\
&= (id \times p)^*(\nu) \cdot \sqrt{\td_X}.
\end{align*}
\noindent This implies that the above diagram (\ref{diagram: mult todd alg}) is commutative.
Now, in order to prove the right-hand-side of (\ref{iso: todd nc to c}), note that the homomorphism $\{\mu_i\}_{1\leq i \leq m} \longmapsto \mu_1 \times \ldots \times \mu_m$ can be written as the following composition:
\begin{align*}
\prod_{i=1}^m \mathcal{Z}^*(X)_\mathbb{Q} /_{\!\sim_\mathrm{rat}} \xrightarrow{\prod \pi_i^*} \prod_{i=1}^m \mathcal{Z}^*(X^{\times m})_\mathbb{Q}/_{\!\sim_\mathrm{rat}} \xrightarrow{\text{mult}} \mathcal{Z}^*(X^{\times m})_\mathbb{Q}/_{\!\sim_\mathrm{rat}},
\end{align*}
where $\pi_i$ stands for the $i^{\mathrm{th}}$ projection map from $X^{\times m}$ to $X$. Consequently, since the pull-back homomorphism $\pi_i^*$ is multiplicative, if an algebraic cycle $\mu \in \mathcal{Z}^*(X)_\mathbb{Q}/_{ \!\sim_\mathrm{rat}}$ is nilpotently trivial, then so is $\mu \cdot \sqrt{\td_{X}}$. Conversely, since the homomorphism $\pi_i^*$ is multiplicative and $\sqrt{\td_{X}}$ is invertible, if $\mu \cdot \sqrt{\td_{X}}$ is nilpotently trivial, then $\mu$ is also necessarily nilpotently trivial.
\end{proof}
\section{Kernels}
In this section, we consider the kernels of the quotient maps relating the different equivalence relations introduced in \S \ref{equiv rel}. We start by fixing some notation that will be used in the remainder of the paper.
\begin{notation}
\renewcommand{\labelenumi}{(\roman{enumi})}
Let $X$ be a smooth proper $k$-scheme and $\mathcal{A}$ a smooth proper dg category (over $k$).
\begin{enumerate}
\item Given equivalence relations $\sim_1$ and $\sim_2$ on a set $S$, we will write $\sim_1 \succeq \sim_2$ if, for all $a, b \in S$, $a \sim_1 b$ implies $a \sim_2 b$.
\item Given an equivalence relation $\sim$ on $\mathcal{Z}^*(X)_\mathbb{Q}$, resp. on $K_0(\mathcal{A})_\mathbb{Q}$, let us write
\begin{align*}
\mathcal{Z}^*_{\sim}(X)_\mathbb{Q}:=\{\alpha \in \mathcal{Z}^*(X)_\mathbb{Q} | \alpha \sim 0 \} \text{ resp. } K_{0,\sim}(\mathcal{A})_\mathbb{Q}:=\{\alpha \in K_0(\mathcal{A})_\mathbb{Q} | \alpha \sim 0 \}
\end{align*}
for the $\mathbb{Q}$-subspace of $\sim$-trivial elements.
\item Given equivalence relations $\sim_1 \, \succeq \, \sim_2$ on $\mathcal{Z}^*(X)_\mathbb{Q}$, resp. on $K_0(\mathcal{A})_\mathbb{Q}$, let us write
\begin{align*}
\mathcal{Z}^*_{\sim_2/\sim_1}(X)_\mathbb{Q}:=\dfrac{\mathcal{Z}^*_{\sim_2}(X)_\mathbb{Q}}{\mathcal{Z}^*_{\sim_1}(X)_\mathbb{Q}} \text{ resp. } K_{0,\sim_2/\sim_1}(\mathcal{A})_\mathbb{Q}:=\dfrac{K_{0,\sim_2}(\mathcal{A})_\mathbb{Q}}{K_{0,\sim_1}(\mathcal{A})_\mathbb{Q}}
\end{align*} for the associated quotient.
\item Given equivalence relations $\sim_1 \, \succeq \, \sim_2$ on $\mathcal{Z}^*(X)_\mathbb{Q}$, resp. on $K_0(\mathcal{A})_\mathbb{Q}$, let us write
\begin{align*}
q^{X}_{\sim_2/\sim_1}: \mathcal{Z}^*(X)_\mathbb{Q}/_{\!\sim_1} \twoheadrightarrow \mathcal{Z}^*(X)_\mathbb{Q}/_{\!\sim_2} \text{ resp. } q^{\mathcal{A},\nc}_{\sim_2/\sim_1}: K_0(\mathcal{A})_\mathbb{Q}/_{\!\sim_1} \twoheadrightarrow K_0(\mathcal{A})_\mathbb{Q}/_{\!\sim_2}
\end{align*}
for the quotient map.
\item Given equivalence relations $\sim_1 \, \succeq \, \sim_2$ on $K_0(\perfdg(X))_\mathbb{Q}$, let $q^{X,\nc}_{\sim_2/\sim_1}: K_0(\perfdg(X))_\mathbb{Q}/_{\!\sim_1} \twoheadrightarrow K_0(\perfdg(X))_\mathbb{Q}/_{\!\sim_2}$ be the quotient map.
\end{enumerate}
\end{notation}
\begin{remark}
\label{remark: trivial leq 2}
\renewcommand{\labelenumi}{(\roman{enumi})}
\begin{enumerate}
\item The equivalence relations $\sim_\mathrm{alg}$, $\sim_\mathrm{nil}$ and $\sim_\mathrm{num}$ agree for smooth proper $k$-schemes $X$ of $\dim \leq 2$; see \cite[\S 3.2.7]{YAnd04} and \cite[\S 19.3.5]{Fulton2nd_ed}. Moreover, $\sim_\mathrm{rat}$, $\sim_\mathrm{alg}$, $\sim_\mathrm{nil}$ and $\sim_\mathrm{num}$ agree for $0$-dimensional $k$-schemes.
\item In general, we have $\ker(q^{X}_{\sim_2/\sim_1}) \neq 0$; consult Remark \ref{remark: not always true}.
\item For every smooth projective complex curve $X$ of positive genus $g$, we have that $\mathcal{Z}^*_{\sim_\mathrm{alg}/\sim_\mathrm{rat}}(X)_\mathbb{Q} \simeq (\mathbb{Q}/\mathbb{Z})^{\oplus 2 g}$. Indeed, as $X$ is a curve, we have $\mathcal{Z}^*_{\sim_\mathrm{alg}/\sim_\mathrm{rat}}(X)_\mathbb{Q} = \mathcal{Z}^1_{\sim_\mathrm{alg}/\sim_\mathrm{rat}}(X)_\mathbb{Q}$ which, thanks to \cite[19.3.5]{Fulton2nd_ed}, is isomorphic to the $\mathbb{Q}$-vector space of torsion points of the Albanese variety $\text{Alb}(X)$.
Since $X$ is a curve, we have $\text{Alb}(X) \simeq \text{Jac}(X)$ (the Jacobian variety of $X$), and it is well-known that the $\mathbb{Q}$-vector space of torsion points of $\text{Jac}(X)$ is isomorphic to $(\mathbb{Q}/\mathbb{Z})^{\oplus 2 g}$.
\item Voevodsky's nilpotence conjecture asserts that $\ker(q^{X}_{\sim_\mathrm{num}/\sim_\mathrm{nil}})=0$.
\end{enumerate}
\end{remark}
\begin{remark}
\label{remark: kernel=quocient}
Let $\sim_1, \sim_2 \in \{\sim_\mathrm{rat}, \sim_\mathrm{alg}, \sim_\mathrm{nil}, \sim_\mathrm{num}\}$.
\renewcommand{\labelenumi}{(\roman{enumi})}
\begin{enumerate}
\item Given a smooth proper $k$-scheme $X$, we have $\mathcal{Z}^*_{\sim_2/\sim_1}(X)_\mathbb{Q}\simeq\ker(q^X_{\sim_2/\sim_1})$.
\item Given a smooth proper dg category $\mathcal{A}$ (over $k$), we have $K_{0,\sim_2/\sim_1}(\mathcal{A})_\mathbb{Q} \simeq \ker(q^{\mathcal{A},\nc}_{\sim_2/\sim_1})$.
\end{enumerate}
\end{remark}
\begin{corollary}[of Theorem \ref{theo: non commutative to classical rel}]
\label{corollary: nc and c are =}
Given a smooth proper $k$-scheme $X$ and $\sim_1, \sim_2 \in \{\sim_\mathrm{rat}, \sim_\mathrm{alg}, \sim_\mathrm{nil}, \sim_\mathrm{num}\}$, we have an isomorphism $\ker(q^{X}_{\sim_2/\sim_1}) \simeq \ker(q^{X,\nc}_{\sim_2/\sim_1})$. Equivalently, we have an isomorphism $\mathcal{Z}^*_{\sim_2/\sim_1}(X)_\mathbb{Q} \simeq K_{0,\sim_2/\sim_1}(\perfdg(X))_\mathbb{Q}$.
\end{corollary}
\begin{proof}
Recall from Theorem \ref{theo: non commutative to classical rel} that the map (\ref{iso from nc to c}) induces an isomorphism $K_0(\perfdg(X))_\mathbb{Q}/_{\!\sim} \xrightarrow{\simeq} \mathcal{Z}^*(X)_{\mathbb{Q}}/_{\!\sim}$ for every $\sim \, \in \{\sim_\mathrm{rat}, \sim_\mathrm{alg}, \sim_\mathrm{nil}, \sim_\mathrm{num}\}$.
Consequently, we have the following commutative diagram:
\begin{align}
\label{diagram: kernels}
\begin{split}\xymatrix@C=3pc @R=3pc{
\ker(q^{X,\nc}_{\sim_2/\sim_1}) \ar[r] \ar[d]& K_0(\perfdg(X))_\mathbb{Q}/_{\!\sim_1} \ar@{->>}[r] \ar[d]^-{(\ref{iso from nc to c})}_-{\simeq}& K_0(\perfdg(X))_\mathbb{Q}/_{\!\sim_2} \ar[d]^-{(\ref{iso from nc to c})}_-{\simeq}\\
\ker(q^{X}_{\sim_2/\sim_1}) \ar[r] & \mathcal{Z}^*(X)_{\mathbb{Q}}/_{\!\sim_1} \ar@{->>}[r] & \mathcal{Z}^*(X)_{\mathbb{Q}}/_{\!\sim_2}.}
\end{split}
\end{align}
\noindent Since both rows in (\ref{diagram: kernels}) are exact, we conclude that the left-hand-side vertical homomorphism in (\ref{diagram: kernels}) is also invertible.
\end{proof}
\begin{proposition}
\label{proposition: kernel nc factors}
Let $\mathcal{A}$ be a smooth proper dg category such that $\mathrm{H}^0(\mathcal{A})$ admits a semi-orthogonal decomposition $\mathrm{H}^0(\mathcal{A})=\langle \mathfrak{b}, \mathfrak{c} \rangle$ in the sense of Bondal-Orlov \cite{BondalOrlov02}. Let $\mathfrak{b}^{\mathrm{dg}}$ and $\mathfrak{c}^{\mathrm{dg}}$ denote, respectively, the dg enhancements of $\mathfrak{b}$ and $\mathfrak{c}$ induced from $\mathcal{A}$.
Under these assumptions, the inclusions of $\mathfrak{b}^{\mathrm{dg}}$ and $\mathfrak{c}^{\mathrm{dg}}$ into $\mathcal{A}$ induce an isomorphism
\begin{align*}
\ker(q^{\mathfrak{b}^{\mathrm{dg}},\nc}_{\sim_2/\sim_1}) \oplus \ker(q^{\mathfrak{c}^{\mathrm{dg}},\nc}_{\sim_2/\sim_1}) \stackrel{\simeq}{\longrightarrow} \ker(q^{\mathcal{A},\nc}_{\sim_2/\sim_1}\!) \text{ with } \sim_1,\sim_2 \in \{\sim_\mathrm{rat}, \sim_\mathrm{alg}, \sim_\mathrm{nil}, \sim_\mathrm{num} \}.
\end{align*}
Equivalently, we have an induced isomorphism $K_{0,\sim_2/\sim_1}(\mathfrak{b}^{\mathrm{dg}})_\mathbb{Q} \oplus K_{0,\sim_2/\sim_1}(\mathfrak{c}^{\mathrm{dg}})_\mathbb{Q} \xrightarrow{\simeq} K_{0,\sim_2/\sim_1}(\mathcal{A})_\mathbb{Q}$.
\end{proposition}
\begin{proof}
Since the dg category $\mathcal{A}$ is smooth and proper, it follows from the proof of \cite[Lemma 2.1]{BMT} that the dg categories $\mathfrak{b}^{\mathrm{dg}}$ and $\mathfrak{c}^{\mathrm{dg}}$ are also smooth and proper.
Moreover, as explained in \cite[\S 3.2]{Tab18a}, we have the following computations:
\begin{align}
\label{Hom Nvoev Nnum}
\Hom_{\text{NVoev}(k)_\mathbb{Q}}(U(k)_\mathbb{Q}, U(\mathcal{A})_\mathbb{Q}) \simeq K_0(\mathcal{A})_\mathbb{Q}/_{\!\sim_\mathrm{nil}} \qquad \Hom_{\text{NNum}(k)_\mathbb{Q}}(U(k)_\mathbb{Q}, U(\mathcal{A})_\mathbb{Q}) \simeq K_0(\mathcal{A})_\mathbb{Q}/_{\!\sim_\mathrm{num}}.
\end{align}
\noindent As $\mathrm{H}^0(\mathcal{A})= \langle \mathfrak{b}, \mathfrak{c} \rangle$, we deduce from \cite[Proposition 2.2 and Theorem 2.9]{Tab15a} that the induced morphism
$U(\mathfrak{b}^{\mathrm{dg}})_\mathbb{Q} \oplus U(\mathfrak{c}^{\mathrm{dg}})_\mathbb{Q} \rightarrow U(\mathcal{A})_\mathbb{Q}$ is invertible.
Therefore, making use of the computations (\ref{Hom Nvoev Nnum}), we obtain induced isomorphisms:
\begin{align*}
K_0(\mathfrak{b}^{\mathrm{dg}})_\mathbb{Q}/_{\!\sim_\mathrm{nil}} \oplus K_0(\mathfrak{c}^{\mathrm{dg}})_\mathbb{Q}/_{\!\sim_\mathrm{nil}} \stackrel{\simeq}{\longrightarrow} K_0(\mathcal{A})_\mathbb{Q}/_{\!\sim_\mathrm{nil}} \qquad
K_0(\mathfrak{b}^{\mathrm{dg}})_\mathbb{Q}/_{\!\sim_\mathrm{num}} \oplus K_0(\mathfrak{c}^{\mathrm{dg}})_\mathbb{Q}/_{\!\sim_\mathrm{num}} \stackrel{\simeq}{\longrightarrow} K_0(\mathcal{A})_\mathbb{Q}/_{\!\sim_\mathrm{num}}.
\end{align*}
Thanks to the computation (\ref{Hom Nchow}), we have an induced isomorphism $K_0(\mathfrak{b}^{\mathrm{dg}})_\mathbb{Q} \oplus K_0(\mathfrak{c}^{\mathrm{dg}})_\mathbb{Q} \xrightarrow[]{\simeq} K_0(\mathcal{A})_\mathbb{Q}$.
Note that thanks to Lemma \ref{lemma: K algebrico 2nd prop additive}, the inclusions of $\mathfrak{b}^{\mathrm{dg}}$ and $\mathfrak{c}^{\mathrm{dg}}$ into $\mathcal{A}$ also induce an isomorphism $K_0(\mathfrak{b}^{\mathrm{dg}})_\mathbb{Q}/_{\!\sim_\mathrm{alg}} \oplus K_0(\mathfrak{c}^{\mathrm{dg}})_\mathbb{Q}/_{\!\sim_\mathrm{alg}} \xrightarrow[]{\simeq} K_0(\mathcal{A})_\mathbb{Q}/_{\!\sim_\mathrm{alg}}$. Consequently, we have the following commutative diagram:
\begin{align}
\label{useful diagram decomposition}
\begin{split}
\xymatrix@C=3pc @R=4pc{
\ker(q^{\mathfrak{b}^{\mathrm{dg}},\nc}_{\sim_2/\sim_1}) \oplus \ker(q^{\mathfrak{c}^{\mathrm{dg}},\nc}_{\sim_2/\sim_1}) \ar[r] \ar[d] & K_0(\mathfrak{b}^{\mathrm{dg}})_\mathbb{Q}/_{\!\sim_1} \oplus K_0(\mathfrak{c}^{\mathrm{dg}})_\mathbb{Q}/_{\!\sim_1} \ar@{->>}[r] \ar[d]^{\simeq} & K_0(\mathfrak{b}^{\mathrm{dg}})_\mathbb{Q}/_{\!\sim_2} \oplus K_0(\mathfrak{c}^{\mathrm{dg}})_\mathbb{Q}/_{\!\sim_2} \ar[d]^{\simeq}\\
\ker(q^{\mathcal{A},\nc}_{\sim_2/\sim_1}) \ar[r] & K_0(\mathcal{A})_\mathbb{Q}/_{\!\sim_1} \ar@{->>}[r] & K_0(\mathcal{A})_\mathbb{Q}/_{\!\sim_2}.}
\end{split}
\end{align}
Since both rows in (\ref{useful diagram decomposition}) are exact, we conclude that the left-hand-side vertical homomorphism in (\ref{useful diagram decomposition}) is also invertible.
\end{proof}
\begin{proposition}
\label{proposition: morita implies equivalent conj}
Let $\mathcal{A}$ and $\mathcal{B}$ be two smooth proper dg categories.
\renewcommand{\labelenumi}{(\roman{enumi})}
\begin{enumerate}
\item Given $F: \mathcal{A} \rightarrow \mathcal{B}$ a Morita equivalence, the homomorphism $K_0(F)_\mathbb{Q}/_{\!\sim}$ is invertible when $\sim \in \{\sim_\mathrm{rat},\sim_\mathrm{alg}, \sim_\mathrm{nil}, \sim_\mathrm{num}\}$.
\item For $\sim_1, \sim_2 \in \{\sim_\mathrm{rat}, \sim_\mathrm{alg}, \sim_\mathrm{nil}, \sim_\mathrm{num}\}$ and $\mathcal{A}$ Morita equivalent to $\mathcal{B}$, we have $\ker(q^{\mathcal{A},\nc}_{\sim_2/\sim_1}) \simeq \ker(q^{\mathcal{B},\nc}_{\sim_2/\sim_1})$, or, equivalently, $K_{0,\sim_2/\sim_1}(\mathcal{A})_\mathbb{Q} \simeq K_{0,\sim_2/\sim_1}(\mathcal{B})_\mathbb{Q}$.
\end{enumerate}
\end{proposition}
\begin{proof}
We start by proving item (i). Since $K_0(-)_\mathbb{Q}$ is an additive invariant, we have that $K_0(F)_\mathbb{Q}$ is an isomorphism.
Thanks to Lemma \ref{lemma: k algebrico sends morita to iso}, $K_0(F)_\mathbb{Q}/_{\!\sim_\mathrm{alg}}$ is an isomorphism.
Following \cite[Proposition 2.2 and Theorem 2.9]{Tab15a}, the functor $U(-)$ sends Morita equivalences to isomorphisms. This, together with the following computations (see \cite[\S 3.2]{Tab18a}),
\begin{align*}
\Hom_{\text{NVoev}(k)_\mathbb{Q}}(U(k)_\mathbb{Q}, U(-)_\mathbb{Q}) \simeq K_0(-)_\mathbb{Q}/_{\!\sim_\mathrm{nil}} \qquad \Hom_{\text{NNum}(k)_\mathbb{Q}}(U(k)_\mathbb{Q}, U(-)_\mathbb{Q}) \simeq K_0(-)_\mathbb{Q}/_{\!\sim_\mathrm{num}},
\end{align*}
suffices to conclude that both $K_0(F)_\mathbb{Q}/_{\!\sim_\mathrm{nil}}$ and $K_0(F)_\mathbb{Q}/_{\!\sim_\mathrm{num}}$ are isomorphisms.
Now, note that item (ii) follows from item (i). \end{proof}
\begin{proposition}
\label{proposition: azumaya imples equivalent conj}
Let $X$ be a smooth proper $k$-scheme and $\mathbb{B}_0$ an Azumaya algebra over $X$.
\renewcommand{\labelenumi}{(\roman{enumi})}
\begin{enumerate}
\item The canonical dg functor $i_{X,\mathbb{B}_0}:\perfdg(X) \rightarrow \perfdg(X,\mathbb{B}_0)$ induces an isomorphism
\begin{align*}
K_0(i_{X,\mathbb{B}_0})_\mathbb{Q}/_{\!\sim}: K_0(\perfdg(X))_\mathbb{Q}/_{\!\sim} \longrightarrow K_0(\perfdg(X,\mathbb{B}_0))_\mathbb{Q}/_{\!\sim} \text{ with } \sim \in \{\sim_\mathrm{rat},\sim_\mathrm{alg}, \sim_\mathrm{nil}, \sim_\mathrm{num}\}.
\end{align*}
\item For $\sim_1, \sim_2 \in \{\sim_\mathrm{rat}, \sim_\mathrm{alg}, \sim_\mathrm{nil}, \sim_\mathrm{num}\}$, we have $\ker(q^{X,\nc}_{\sim_2/\sim_1}) \simeq \ker(q^{\perfdg(X,\mathbb{B}_0),\nc}_{\sim_2/\sim_1})$, or, equivalently, an isomorphism
\begin{align*}
K_{0,\sim_2/\sim_1}(\perfdg(X))_\mathbb{Q} \simeq K_{0,\sim_2/\sim_1}(\perfdg(X,\mathbb{B}_0))_\mathbb{Q}.
\end{align*}
\end{enumerate}
\end{proposition}
\begin{proof}
To prove item (i), we start by noting that, following \cite[Corollary 3.1]{TabVB15}, we have $K_0(\perfdg(X))_\mathbb{Q} \simeq K_0(\perfdg(X,\mathbb{B}_0))_\mathbb{Q}$.
Moreover, thanks to Lemma \ref{lemma: azumaya algebras}, we have that $ K_0(\perfdg(X))_\mathbb{Q}/_{\!\sim_\mathrm{alg}} \simeq K_0(\perfdg(X,\mathbb{B}_0))_\mathbb{Q}/_{\!\sim_\mathrm{alg}}$.
Following \cite[Propositions 8.3 and 8.7]{TabVB15}, the canonical dg functor $i_{X,\mathbb{B}_0}$ induces an isomorphism $U(\perfdg(X))_\mathbb{Q} \simeq U(\perfdg(X,\mathbb{B}_0))_\mathbb{Q}$. Hence, the same reasoning as in the proof of Proposition \ref{proposition: morita implies equivalent conj} allows us to conclude that
\begin{align*}
K_0(\perfdg(X))_\mathbb{Q}/_{\!\sim_\mathrm{nil}} \simeq K_0(\perfdg(X,\mathbb{B}_0))_\mathbb{Q}/_{\!\sim_\mathrm{nil}} \qquad K_0(\perfdg(X))_\mathbb{Q}/_{\!\sim_\mathrm{num}} \simeq K_0(\perfdg(X,\mathbb{B}_0))_\mathbb{Q}/_{\!\sim_\mathrm{num}}.
\end{align*}
This finishes the proof of item (i).
To prove item (ii), note that item (i) yields the following commutative diagram:
\begin{align}
\label{useful diagram azumaya}
\begin{split}
\xymatrix@C=3pc @R=4pc{
\ker(q^{X,\nc}_{\sim_2/\sim_1}) \ar[r] \ar[d] & K_0(\perfdg(X))_\mathbb{Q}/_{\!\sim_1} \ar@{->>}[r] \ar[d]^{\simeq} & K_0(\perfdg(X))_\mathbb{Q}/_{\!\sim_2} \ar[d]^{\simeq}\\
\ker(q^{\perfdg(X,\mathbb{B}_0),\nc}_{\sim_2/\sim_1}) \ar[r] & K_0(\perfdg(X,\mathbb{B}_0))_\mathbb{Q}/_{\!\sim_1} \ar@{->>}[r] & K_0(\perfdg(X,\mathbb{B}_0))_\mathbb{Q}/_{\!\sim_2}.}
\end{split}
\end{align}
\noindent Since both rows in (\ref{useful diagram azumaya}) are exact, the left-hand-side vertical homomorphism in (\ref{useful diagram azumaya}) is also invertible.
\end{proof}
\section{Applications}
In this section, making use of the mathematical tools developed in \S 3-\S 4, we prove Theorem \ref{main theo} (as well as Theorem \ref{Theorem: HPD}). The proofs are, in most cases, adaptations of the ones given in \cite{BMT} and \cite{OP}.
\subsection{Quadric fibrations}
\label{subsection: quadric}
Take $S$ a smooth projective $k$-scheme and $q: Q \rightarrow S$ a flat quadric fibration of relative dimension $n$ with $Q$ smooth.
Let $C_0$ be the even part of the sheaf of Clifford algebras associated to $q$; see \cite[\S 1]{ABB14} and \cite[\S 3]{Kuzn08}.
Recall from \cite[\S 3.5-\S 3.6]{Kuzn08} that when the discriminant divisor of $q$ is smooth and $n$ is even (resp., odd) we have a discriminant double cover $\widetilde{S} \rightarrow S$ (resp., a square root stack $\widehat{S}$) equipped with an Azumaya algebra $\mathbb{B}_0$.
\begin{theorem}
Under the above assumptions, with $\sim_1, \sim_2 \in \{\sim_\mathrm{rat}, \sim_\mathrm{alg}, \sim_\mathrm{nil}, \sim_\mathrm{num}\}$, the following holds:
\renewcommand{\labelenumi}{(\roman{enumi})}
\begin{enumerate}
\item We have $\mathcal{Z}^*_{\sim_2/\sim_1}(Q)_\mathbb{Q}\simeq K_{0,\sim_2/\sim_1}(\perfdg(S, C_0))_\mathbb{Q} \bigoplus \mathcal{Z}^*_{\sim_2/\sim_1}(S)^{\oplus n}_\mathbb{Q}$.
\item If the discriminant divisor of $q$ is smooth and $n$ is even, then $\mathcal{Z}^*_{\sim_2/\sim_1}(Q)_\mathbb{Q} \simeq \mathcal{Z}^*_{\sim_2/\sim_1}(\widetilde{S})_\mathbb{Q} \bigoplus \mathcal{Z}^*_{\sim_2/\sim_1}(S)^{\oplus n}_\mathbb{Q}$.
\item If the discriminant divisor of $q$ is smooth and $n$ is odd, then $\mathcal{Z}^*_{\sim_2/\sim_1}(Q)_\mathbb{Q} \simeq K_{0,\sim_2/\sim_1}(\perfdg(\widehat{S}, \mathbb{B}_0))_\mathbb{Q} \bigoplus \mathcal{Z}^*_{\sim_2/\sim_1}(S)^{\oplus n}_\mathbb{Q}$.
\end{enumerate}
\label{theorem: quadric fibrations}
\end{theorem}
\begin{proof}
Follow the reasoning of the proof of \cite[Theorem 1.2]{BMT} with the following changes:
to prove item (i), use Corollary \ref{corollary: nc and c are =} instead of \cite[Theorem 1.1]{BMT} and Proposition \ref{proposition: kernel nc factors} instead of \cite[equation (5.2)]{BMT} (in order to conclude that $\ker(q^{Q,\nc}_{\sim_2/\sim_1})$ is isomorphic to $\ker(q^{\perfdg(S, C_0),\nc}_{\sim_2/\sim_1}) \bigoplus \ker(q^{S, \nc}_{\sim_2/\sim_1})^{\oplus n}$);
to prove item (ii), consider Proposition \ref{proposition: azumaya imples equivalent conj} and use Corollary \ref{corollary: nc and c are =} instead of \cite[Theorem 1.1]{BMT};
finally, in order to prove (iii), it is enough to replace the reference \cite[Theorem 1.1]{BMT} by Corollary \ref{corollary: nc and c are =}.
\end{proof}
\begin{corollary}
\label{corollary: certain quadrics}
When $\dim(S)\leq 2$, the following holds:
\renewcommand{\labelenumi}{(\roman{enumi})}
\begin{enumerate}
\item If the discriminant divisor of $q$ is smooth and $n$ is even, then the equivalence relations $\sim_\mathrm{alg}$ and $\sim_\mathrm{num}$ on $\mathcal{Z}^*(Q)_\mathbb{Q}$ agree.
\item If the discriminant divisor of $q$ is smooth and $n$ is odd, then $\mathcal{Z}^*_{\sim_\mathrm{num}/\sim_\mathrm{alg}}(Q)_\mathbb{Q} \simeq K_{0,\sim_\mathrm{num}/\sim_\mathrm{alg}}(\perfdg(\widehat{S}, \mathbb{B}_0))_\mathbb{Q}$.
\end{enumerate}
\end{corollary}
\begin{proof}
Since $\dim(\widetilde{S})=\dim (S)$, the proof follows from Remark \ref{remark: trivial leq 2}(i).
\end{proof}
\subsection{Intersection of quadrics}
Let $X$ be a smooth complete intersection of $r$ quadric hypersurfaces in $\mathbb{P}^m$. The linear span of these $r$ quadrics gives rise to a hypersurface
$Q \subseteq \mathbb{P}^{r-1} \times \mathbb{P}^m$,
and the projection into the first factor to a flat quadric fibration $q:Q \rightarrow \mathbb{P}^{r-1}$ of relative dimension $m-1$.
\begin{theorem}
\label{theorem: intersection of quadrics}
Under the above assumptions, with $\sim_1, \sim_2 \in \{\sim_\mathrm{rat}, \sim_\mathrm{alg}, \sim_\mathrm{nil}, \sim_\mathrm{num}\}$, the following holds:
\renewcommand{\labelenumi}{(\roman{enumi})}
\begin{enumerate}
\item We have $\mathcal{Z}^*_{\sim_2/\sim_1}(X)_\mathbb{Q} \simeq K_{0,\sim_2/\sim_1}(\perfdg(\mathbb{P}^{r-1}, C_0))_\mathbb{Q}$.
\item If the discriminant divisor of $q$ is smooth and $m$ is odd, then $\mathcal{Z}^*_{\sim_2/\sim_1}(X)_\mathbb{Q} \simeq \mathcal{Z}^*_{\sim_2/\sim_1}(\widetilde{\mathbb{P}^{r-1}})_\mathbb{Q}$.
\item If the discriminant divisor of $q$ is smooth and $m$ is even, then $\mathcal{Z}^*_{\sim_2/\sim_1}(X)_\mathbb{Q} \simeq K_{0,\sim_2/\sim_1}(\perfdg(\widehat{\mathbb{P}^{r-1}}, \mathbb{B}_0))_\mathbb{Q}$.
\end{enumerate}
\end{theorem}
\begin{proof}
The proof is similar to the one given in \cite[\S 7]{BMT} with the following adaptations: to prove item (i), replace the reference to the proof of \cite[Theorem 1.2(i)]{BMT} by the actual proof of Theorem \ref{theorem: quadric fibrations}(i) and consider Corollary \ref{corollary: nc and c are =} instead of \cite[Theorem 1.1]{BMT};
the proofs of items (ii)-(iii) follow a similar reasoning to the proofs of Theorem \ref{theorem: quadric fibrations}(ii)-(iii).
\end{proof}
\begin{corollary}
\label{corollary: certain intersections}
When $r\leq 3$, the following holds:
\renewcommand{\labelenumi}{(\roman{enumi})}
\begin{enumerate}
\item If the discriminant divisor of $q$ is smooth and $m$ is odd, then the equivalence relations $\sim_\mathrm{alg}$ and $\sim_\mathrm{num}$ on $\mathcal{Z}^*(X)_\mathbb{Q}$ agree.
\item If the discriminant divisor of $q$ is smooth, $m$ is even and $k$ is algebraically closed, then the equivalence relations $\sim_\mathrm{alg}$ and $\sim_\mathrm{num}$ on $\mathcal{Z}^*(X)_\mathbb{Q}$ agree.
\end{enumerate}
\end{corollary}
\subsection{Moishezon manifolds}
A \textit{Moishezon manifold} $X$ is a compact complex manifold such that the field of meromorphic functions on each component of $X$ has transcendence degree equal to the dimension of the component.
As proved in \cite{Mois66}, $X$ is a smooth projective $\mathbb{C}$-scheme if and only if it admits a K\"{a}hler metric. In the remaining cases, it is shown in \cite{MArtin70} that $X$ is a proper algebraic space over $\mathbb{C}$.
Let $Y \rightarrow \mathbb{P}^2$ be one of the non-rational conic bundles described by Artin and Mumford in \cite{AM72}, and $X \rightarrow Y$ a small resolution. In this case, $X$ is a smooth (not
necessarily projective) Moishezon manifold.
\begin{theorem}
\label{theorem: certain moishezon}
The equivalence relations $\sim_\mathrm{alg}$ and $\sim_\mathrm{num}$ on $K_0(\perfdg(X))_\mathbb{Q}$ agree.
\end{theorem}
\begin{proof}
The proof of \cite[Theorem 1.14]{BMT} can be adapted as follows:
replace the reference to the proof of \cite[Theorem 1.2(i)]{BMT} by the proof of Theorem \ref{theorem: quadric fibrations}(i), consider Remark \ref{remark: trivial leq 2}(i), and use Proposition \ref{proposition: kernel nc factors} instead of \cite[Theorem 1.2]{BMT}.
\end{proof}
\subsection{Cubic fourfolds and Gushel-Mukai fourfolds}
Recall that a \textit{cubic fourfold} is a smooth complex hypersurface of degree $3$ in $\mathbb{P}^5$ and consult \cite[\S 2.2]{OP} for the definition of an (ordinary/special) Gushel-Mukai fourfold.
In what follows, we adapt \cite[Theorems (A)-(D)]{OP} to our context.
\begin{theorem}
\label{theorem: certain cubic fourfolds}
The equivalence relations $\sim_\mathrm{alg}$ and $\sim_\mathrm{num}$ on $\mathcal{Z}^*(X)_\mathbb{Q}$ agree when $X$ is a cubic fourfold or an ordinary generic Gushel-Mukai fourfold.
\label{theorem (A)}
\end{theorem}
\begin{proof}
Apply the same reasoning of the proof of \cite[Theorem (A)]{OP}, use Remark \ref{remark: trivial leq 2}(i), and replace the equivalence relation $\sim_\mathrm{nil}$ by the equivalence relation $\sim_\mathrm{alg}$.
\end{proof}
\begin{remark}
\renewcommand{\labelenumi}{(\roman{enumi})}
\begin{enumerate}
\item Let $X$ be a cubic fourfold. Recall from \cite{kuzn16} that the Kuznetsov category $\mathcal{A}_X$ of $X$ is defined as a certain semi-orthogonal component of $\perf(X)$. Therefore, thanks to Theorem \ref{theorem (A)}, Corollary \ref{corollary: nc and c are =}, and Proposition \ref{proposition: kernel nc factors}, we have $\ker(q^{\mathcal{A}_X^{\mathrm{dg}},\nc}_{\sim_\mathrm{num}/\sim_\mathrm{alg}})= 0$, i.e., the equivalence relations $\sim_\mathrm{alg}$ and $\sim_\mathrm{num}$ on $\mathcal{A}_X^{\mathrm{dg}}$ agree, where $\mathcal{A}_X^{\mathrm{dg}}$ denotes the dg enhancement of $\mathcal{A}_X$ induced from $\perfdg(X)$.
\item Let $X$ be a Gushel-Mukai $n$-fold. Recall from \cite{KuznPer18} that the Gushel-Mukai category $\mathcal{A}_X$ of $X$ is defined as a certain semi-orthogonal component of $\perf(X)$. Therefore, for $X$ a Gushel-Mukai fourfold, Theorem \ref{theorem (A)}, Corollary \ref{corollary: nc and c are =}, and Proposition \ref{proposition: kernel nc factors}, imply that $\ker(q^{\mathcal{A}_X^{\mathrm{dg}},\nc}_{\sim_\mathrm{num}/\sim_\mathrm{alg}})=0$, i.e., the equivalence relations $\sim_\mathrm{alg}$ and $\sim_\mathrm{num}$ on $\mathcal{A}_X^{\mathrm{dg}}$ agree.
\end{enumerate}
\end{remark}
Under the above notations, we obtain the analogue of \cite[Theorem (B)]{OP}:
\begin{theorem}
The equivalence relations $\sim_\mathrm{alg}$ and $\sim_\mathrm{num}$ on $K_0(\mathcal{A}_X^{\mathrm{dg}})_\mathbb{Q}$ agree when $X$ is a cubic fourfold or an ordinary generic Gushel-Mukai fourfold.
\label{theorem (B)}
\end{theorem}
We now adapt \cite[Theorems (C) and (D)]{OP} to our context:
\begin{theorem}
The equivalence relations $\sim_\mathrm{alg}$ and $\sim_\mathrm{num}$ on $\mathcal{Z}^*(X)_\mathbb{Q}$ agree when $X$ is a generic Gushel-Mukai fourfold containing a plane $P$ of type $\Gr(2,3)$.
\label{theorem (C)}
\end{theorem}
\begin{proof}
The proof is similar to the one given in \cite[\S 5]{OP}. Just bear in mind that one needs to use Corollary \ref{corollary: nc and c are =}, Propositions \ref{proposition: kernel nc factors} and \ref{proposition: morita implies equivalent conj}, and also to consider Theorem \ref{theorem (B)} instead of \cite[Theorem (B)]{OP}.
\end{proof}
\begin{theorem}
\label{theorem (D)}
The equivalence relations $\sim_\mathrm{alg}$ and $\sim_\mathrm{num}$ on $\mathcal{Z}^*(X)_\mathbb{Q}$ agree when $X$ is an ordinary Gushel-Mukai fourfold containing a quintic del Pezzo surface.
\end{theorem}
\begin{proof}
Note first that the semi-orthogonal decomposition of $\perf (X)$ consists only of $\mathcal{A}_X$ and of exceptional objects; see \cite[Proposition 2.3]{KuznPer18}. Therefore, Corollary \ref{corollary: nc and c are =} and a repeated use of Proposition \ref{proposition: kernel nc factors} imply that $\ker(q^X_{\sim_\mathrm{num}/\sim_\mathrm{alg}})$ and $\ker(q^{\mathcal{A}_X^{\mathrm{dg}},\nc}_{\sim_\mathrm{num}/\sim_\mathrm{alg}})$ are isomorphic.
To finish the proof, just consider Corollary \ref{corollary: nc and c are =}, Proposition \ref{proposition: morita implies equivalent conj}, and follow the proof of \cite[Theorem (D)]{OP}.
\end{proof}
\subsection{K{\"u}chle fourfolds}
A K{\"u}chle fourfold is the zero locus of a global section of a certain vector bundle on a specific Grassmannian. In particular, a \textit{K{\"u}chle fourfold of type} $c_7$, denoted by $X_{c_7}$, is the zero locus of a global section of the vector bundle $\Lambda^2\mathcal{U}^\perp(1) \oplus \mathcal{O}(1)$ on $\Gr(3,8)$, where $\mathcal{U}^\perp$
is the tautological vector subbundle of rank $5$ on the Grassmannian $\Gr(3, 8)$ and $\mathcal{O}(1)$ stands for the ample generator of its Picard group; consult \cite{Kuchle95, Kuzn15} for further details.
\begin{theorem}
\label{theorem: certain kuchel fourfolds}
The equivalence relations $\sim_\mathrm{alg}$ and $\sim_\mathrm{num}$ on $\mathcal{Z}^*(X_{c_7})_\mathbb{Q}$ agree.
\end{theorem}
\begin{proof}
Thanks to \cite[Corollary 4.12]{Kuzn15}, the category $\perf(X_{c_7})$ admits a semi-orthogonal decomposition with 6 exceptional line bundles and a noncommutative K$3$ category $\mathcal{A}_X$ which is (Fourier-Mukai) equivalent to the non-trivial part of the derived category of a cubic fourfold $Z$.
Consequently, Proposition \ref{proposition: kernel nc factors}, Remark \ref{remark: trivial leq 2}(i), and Theorem \ref{theorem (B)}, imply that $\ker(q^{X_{c_7},\nc}_{\sim_\mathrm{num}/\sim_\mathrm{alg}}) \simeq \ker(q^{\mathcal{A}_X^{\mathrm{dg}},\nc}_{\sim_\mathrm{num}/\sim_\mathrm{alg}})$ is trivial. Thanks to Corollary \ref{corollary: nc and c are =}, this implies that $\mathcal{Z}^*_{\sim_\mathrm{num}/\sim_\mathrm{alg}}(X_{c_7})_\mathbb{Q}$ is trivial.
\end{proof}
\begin{remark}
A consequence of Theorem \ref{theorem: certain kuchel fourfolds} is that Voevodsky's nilpotence conjecture holds for $X_{c_7}$.
To the best of the author's knowledge, this proves Voevodsky's nilpotence conjecture in new cases.
Note that $X_{c_7}$ is not a Gushel-Mukai fourfold because it has Picard number greater than 1; see \cite[\S 1]{DebarKuzn18}.
\end{remark}
\subsection{Family of sextic del Pezzo surfaces}
A \textit{sextic du Val del Pezzo surface} is a normal integral projective surface $X$ with at worst du Val singularities and ample anticanonical class such that $K^2_X=6$.
Take $S$ and $T$ smooth projective $k$-schemes and $f:T \rightarrow S$ a \textit{du Val family of sextic del Pezzo surfaces}, i.e., $f$ is a flat morphism such that for every geometric point $s \in S$ the fiber $T_s$ of $T$ over $s$ is a sextic du Val del Pezzo surface.
Following \cite[\S 5]{kuzn18}, with $d=2,3$, let $\mathcal{M}_d$ denote the relative moduli stack of semi-stable sheaves on fibers of $T$ over $S$ with Hilbert polynomial $h_d(t):=(3t+d)(t+1)$ and $Z_d$ the coarse moduli space of $\mathcal{M}_d$. Consequently, there are finite flat morphisms $Z_2 \rightarrow S$ and $Z_3 \rightarrow S$ with degree $2$ and $3$, respectively.
\begin{theorem}
\label{theorem: certain sextic del Pezzo}
Let $f:T \rightarrow S$ be a du Val family of sextic del Pezzo surfaces, and assume that the characteristic of $k$ is neither $2$ nor $3$. Under these conditions, with $\sim_1, \sim_2 \in \{\sim_\mathrm{rat}, \sim_\mathrm{alg}, \sim_\mathrm{nil}, \sim_\mathrm{num}\}$, we have an isomorphism:
\begin{align*}
\mathcal{Z}^*_{\sim_2/\sim_1}(T)_\mathbb{Q} \simeq \mathcal{Z}^*_{\sim_2/\sim_1}(S)_\mathbb{Q} \oplus \mathcal{Z}^*_{\sim_2/\sim_1}(Z_2)_\mathbb{Q} \oplus \mathcal{Z}^*_{\sim_2/\sim_1}(Z_3)_\mathbb{Q}.
\end{align*}
\end{theorem}
\begin{proof}
Following \cite[Theorem 5.2 and Proposition 5.11]{kuzn18}, the category $\perf (T)$ admits a semi-orthogonal decomposition
\begin{align*}
\perf(T)=\langle \perf(S), \perf(Z_2,\mathbb{B}_2), \perf(Z_3,\mathbb{B}_3)\rangle,
\end{align*}
where $\mathbb{B}_2$ and $\mathbb{B}_3$ are certain sheaves of Azumaya algebras over $Z_2$ and $Z_3$, of order $2$ and $3$, respectively. By considering Propositions \ref{proposition: kernel nc factors} and \ref{proposition: azumaya imples equivalent conj}, we conclude that
\begin{align*}
K_{0,\sim_2/\sim_1}(\perfdg(T))_\mathbb{Q} \simeq K_{0,\sim_2/\sim_1}(\perfdg(S))_\mathbb{Q} \oplus K_{0,\sim_2/\sim_1}(\perfdg(Z_2))_\mathbb{Q} \oplus K_{0,\sim_2/\sim_1}(\perfdg(Z_3))_\mathbb{Q}.
\end{align*}
\noindent Now, an application of Corollary \ref{corollary: nc and c are =} finishes the proof.
\end{proof}
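Spelled out, the proof above combines the two propositions into the following chain of isomorphisms:
\begin{align*}
\ker(q^{T,\nc}_{\sim_2/\sim_1}) \simeq \ker(q^{S,\nc}_{\sim_2/\sim_1}) \oplus \ker(q^{\perfdg(Z_2,\mathbb{B}_2),\nc}_{\sim_2/\sim_1}) \oplus \ker(q^{\perfdg(Z_3,\mathbb{B}_3),\nc}_{\sim_2/\sim_1}) \simeq \ker(q^{S,\nc}_{\sim_2/\sim_1}) \oplus \ker(q^{Z_2,\nc}_{\sim_2/\sim_1}) \oplus \ker(q^{Z_3,\nc}_{\sim_2/\sim_1}),
\end{align*}
where the first isomorphism follows from (two applications of) Proposition \ref{proposition: kernel nc factors} and the second from Proposition \ref{proposition: azumaya imples equivalent conj}(ii).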
\begin{corollary}
\label{corollary: sextic del Pezzo}
Let $f:T \rightarrow S$ be a du Val family of sextic del Pezzo surfaces and assume that the characteristic of $k$ is neither $2$ nor $3$. If $\dim (S) \leq 2$, then the equivalence relations $\sim_\mathrm{num}$ and $\sim_\mathrm{alg}$ on $\mathcal{Z}^*(T)_\mathbb{Q}$ agree.
\end{corollary}
\begin{remark}
Note that, in the conditions of Corollary \ref{corollary: sextic del Pezzo}, Voevodsky's conjecture holds for $T$.
\end{remark}
\subsection{Homological Projective Duality}
\label{subsection: HPD}
Let $X$ be a smooth projective $k$-scheme equipped with a line bundle $\mathcal{L}_X(1)$ and let us write $X\rightarrow \mathbb{P}(V)$ for the associated morphism where $V:=\mathrm{H}^0(X, \mathcal{L}_X(1))^*$.
Assume that the triangulated category $\perf (X)$ admits a Lefschetz decomposition $\langle \mathbb{A}_0, \mathbb{A}_1(1), \ldots, \mathbb{A}_{i-1}(i-1) \rangle$ with respect to $\mathcal{L}_X(1)$, where $\mathbb{A}_r(r):=\mathbb{A}_r \otimes \mathcal{L}_X(r)$, see \cite[Definition 4.1]{Kuzn07}. Note that $\mathbb{A}_r(r) \simeq \mathbb{A}_r$.
Bearing in mind \cite[Definition 6.1]{Kuzn07}, let $Y$ be the homological projective dual (HP-dual) of $X$, $\mathcal{L}_Y(1)$ the HP-dual line bundle, and $Y\rightarrow \mathbb{P}(V^*)$ the morphism associated to $\mathcal{L}_Y(1)$. Given a linear subspace $L\subseteq V^* $, we consider the linear sections $X_L:= X \times_{\mathbb{P}(V)}\mathbb{P}(L^\perp)$ and $Y_L:= Y \times_{\mathbb{P}(V^*)}\mathbb{P}(L)$. For a survey on HP duality we invite the reader to consult \cite{kuzn14}.
\begin{theorem}[HPD invariance]
Let $X$ and $Y$ be as above and assume that $X_L$ and $Y_L$ are smooth and that $\dim (X_L)=\dim (X)-\dim(L)$ and $\dim (Y_L)=\dim (Y)-\dim(L^\perp)$. Consider the equivalence relations $\sim_1, \sim_2 \in \{\sim_\mathrm{rat}, \sim_\mathrm{alg}, \sim_\mathrm{nil}, \sim_\mathrm{num}\}$ and assume moreover that $\ker(q^{\mathbb{A}_0^{\mathrm{dg}},\nc}_{\sim_2/\sim_1})=0$ (or, equivalently, that $K_{0,\sim_2/\sim_1}(\mathbb{A}_0^{\mathrm{dg}})_\mathbb{Q}=0$), where $\mathbb{A}_0^{\mathrm{dg}}$ stands for the dg enhancement of $\mathbb{A}_0$ induced from $\perfdg(X)$.
Under these assumptions, we have an isomorphism $\mathcal{Z}^*_{\sim_2/\sim_1}(X_L)_\mathbb{Q} \simeq \mathcal{Z}^*_{\sim_2/\sim_1}(Y_L)_\mathbb{Q}$.
\label{Theorem: HPD}
\end{theorem}
\begin{remark}
Given a \textit{generic} subspace $L\subseteq V^*$, the sections $X_L$ and $Y_L$ are smooth, and we have $\dim(X_L)=\dim(X)-\dim(L)$ and $\dim(Y_L)=\dim(Y)-\dim(L^{\perp})$. Moreover, by inductive use of Proposition \ref{proposition: kernel nc factors}, we have $\ker(q^{\mathbb{A}^{\mathrm{dg}},\nc}_{\sim_2/\sim_1})=0$ whenever $\mathbb{A}$ admits a full exceptional collection; see \cite[\S 1.1]{kuzn14} and Remark \ref{remark: trivial leq 2}(i).
This shows that the assumptions of Theorem \ref{Theorem: HPD} are quite mild.
\label{Remark: HPD}
\end{remark}
\begin{proof}
The proof is very similar to the one given in \cite[\S 9]{BMT}. Follow the same reasoning with the following three differences:
firstly, where they write that a certain conjecture holds we write that the respective quotient $K_{0,\sim_2/\sim_1}(-)_\mathbb{Q}$ is trivial;
secondly, apply Proposition \ref{proposition: kernel nc factors} to conclude that $K_{0,\sim_2/\sim_1}(\mathbb{A}_j^{\mathrm{dg}})_\mathbb{Q}$, $K_{0,\sim_2/\sim_1}(\mathfrak{a}_j^{\mathrm{dg}})_\mathbb{Q}$, and $K_{0,\sim_2/\sim_1}(\mathbb{B}_j^{\mathrm{dg}})_\mathbb{Q}$ are trivial for every $j$
(the $\mathfrak{a}_j$'s are the orthogonal complements of $\mathbb{A}_{j+1}$ in $\mathbb{A}_j$ and the $\mathbb{B}_j$'s are the components of the semi-orthogonal decomposition of $\perf(Y)$ obtained by \cite[Theorem 6.3]{Kuzn07});
finally, apply Corollary \ref{corollary: nc and c are =} instead of \cite[Theorem 1.1]{BMT}.
\end{proof}
\begin{example}[Linear sections of Grassmannians]
Let us apply Theorem \ref{Theorem: HPD} to the case of linear sections of Grassmannians:
\renewcommand{\labelenumi}{(\roman{enumi})}
\begin{enumerate}
\item For $W=k^{\oplus 6}$, let $X_L$ be a generic linear section of codimension $r$ of the Grassmannian $\Gr(2,W)$ under the Pl\"{u}cker embedding, and $Y_L$ the corresponding dual linear section of the cubic Pfaffian $\Pf(4,W^*)$ in $\mathbb{P}(\Lambda^2 W^*)$. Note that $X_L$ and $Y_L$ are smooth and that $\dim(X_L)=8-r$ and $\dim(Y_L)=r-2$ when $r \leq 6$.
\item For $W=k^{\oplus 7}$, let $X_L$ be a generic linear section of codimension $r$ of the Grassmannian $\Gr(2,W)$ under the Pl\"{u}cker embedding, and $Y_L$ the corresponding dual linear section of the Pfaffian $\Pf(4,W^*)$ in $\mathbb{P}(\Lambda^2 W^*)$. Note that $X_L$ and $Y_L$ are smooth and that $\dim(X_L)=10-r$ and $\dim(Y_L)=r-4$ when $r \leq 10$.
\end{enumerate}
Note also that \cite[(11) and (12)]{Kuzn06}, Proposition \ref{proposition: kernel nc factors}, and Remark \ref{remark: trivial leq 2}, imply that for both classes (i)-(ii) there is a Lefschetz decomposition of $\perf(\Gr(2,W))$ and that
$\ker(q^{\mathbb{A}_0^{\mathrm{dg}},\nc}_{\sim_2/\sim_1})=0$, where $\mathbb{A}_0$ is the first component of the Lefschetz decomposition of $\perf(\Gr(2,W))$.
\begin{corollary}
\label{corollary: grassmannians}
Let $X_L$ and $Y_L$ be as in the above classes (i)-(ii) and $\sim_1, \sim_2 \in \{\sim_\mathrm{rat}, \sim_\mathrm{alg}, \sim_\mathrm{nil}, \sim_\mathrm{num}\}$. Under the assumption that $X_L$ and $Y_L$ are smooth, we have $\mathcal{Z}^*_{\sim_2/\sim_1}(X_L)_\mathbb{Q} \simeq \mathcal{Z}^*_{\sim_2/\sim_1}(Y_L)_\mathbb{Q}$. Moreover, the equivalence relations $\sim_\mathrm{num}$ and $\sim_\mathrm{alg}$ on $\mathcal{Z}^*(X_L)$ agree when $r \leq 6$ (class (i)), and when $r\leq 6$ or $8 \leq r \leq 10$ (class (ii)).
\end{corollary}
\begin{proof}
The first statement is immediate from Theorem \ref{Theorem: HPD}. The second statement follows from Remark \ref{remark: trivial leq 2}(i), except the case where $X_L$ and $Y_L$ are as in class (i) and $r=5$. In this latter case, just follow the proof in \cite[\S 8]{BMT} and consider Remark \ref{remark: trivial leq 2}(i).
\end{proof}
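Note the dimension counts behind the second statement: in class (i) we have $\dim(Y_L)=r-2\leq 2$ exactly when $r\leq 4$ and $\dim(X_L)=8-r\leq 2$ exactly when $r\geq 6$, while in class (ii) we have
\begin{align*}
\dim(Y_L)=r-4\leq 2 \Leftrightarrow r \leq 6 \qquad \qquad \dim(X_L)=10-r\leq 2 \Leftrightarrow r \geq 8.
\end{align*}
\noindent Hence, in the stated ranges, Remark \ref{remark: trivial leq 2}(i) applies to (at least) one side of the isomorphism $\mathcal{Z}^*_{\sim_\mathrm{num}/\sim_\mathrm{alg}}(X_L)_\mathbb{Q}\simeq \mathcal{Z}^*_{\sim_\mathrm{num}/\sim_\mathrm{alg}}(Y_L)_\mathbb{Q}$; the case $r=5$ in class (i) is the one handled separately.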
\begin{corollary}
\label{corollary: Grassmannian curve genus 1}
For $k=\mathbb{C}$, let $X_L$ and $Y_L$ be as above in class (i) and let $\dim L=r=3$. Then $\mathcal{Z}^*_{\sim_\mathrm{alg}/\sim_\mathrm{rat}}(X_L)_\mathbb{Q} \simeq (\mathbb{Q}/\mathbb{Z})^{\oplus 2}$.
\end{corollary}
\begin{proof}
In this case, $X_L$ is a 5-fold and $Y_L$ is an elliptic curve; see \cite[\S 10]{Kuzn06}. Therefore, since $Y_L$ is of genus $1$, $\mathcal{Z}^*_{\sim_\mathrm{alg}/\sim_\mathrm{rat}}(Y_L)_\mathbb{Q}$ is isomorphic to $(\mathbb{Q}/\mathbb{Z})^{\oplus 2 \times 1}$; see Remark \ref{remark: trivial leq 2}(iii). Consequently, Theorem \ref{Theorem: HPD} implies that $\mathcal{Z}^*_{\sim_\mathrm{alg}/\sim_\mathrm{rat}}(X_L)_\mathbb{Q}$ is isomorphic to $(\mathbb{Q}/\mathbb{Z})^{\oplus 2}$.
\end{proof}
\begin{corollary}
\label{corollary: Grassmannian curve genus 43}
For $k=\mathbb{C}$, let $X_L$ and $Y_L$ be as in the above class (ii) and let $\dim L=r$.
\renewcommand{\labelenumi}{(\roman{enumi})}
\begin{enumerate}
\item If $r=5$, then $\mathcal{Z}^*_{\sim_\mathrm{alg}/\sim_\mathrm{rat}}(X_L)_\mathbb{Q} \simeq (\mathbb{Q}/\mathbb{Z})^{\oplus 86}$.
\item If $r=9$, then $\mathcal{Z}^*_{\sim_\mathrm{alg}/\sim_\mathrm{rat}}(Y_L)_\mathbb{Q} \simeq (\mathbb{Q}/\mathbb{Z})^{\oplus 30}$.
\end{enumerate}
\end{corollary}
\begin{proof}
When $r=5$, $X_L$ is a Fano 5-fold of index 2 and $Y_L$ is a curve of genus 43; see \cite[\S 11]{Kuzn06}.
Therefore, we deduce that $\mathcal{Z}^*_{\sim_\mathrm{alg}/\sim_\mathrm{rat}}(Y_L)_\mathbb{Q}$ is isomorphic to $(\mathbb{Q}/\mathbb{Z})^{\oplus 2 \times 43}$; see Remark \ref{remark: trivial leq 2}(iii). Consequently, Theorem \ref{Theorem: HPD} implies that $\mathcal{Z}^*_{\sim_\mathrm{alg}/\sim_\mathrm{rat}}(X_L)_\mathbb{Q}$ and $(\mathbb{Q}/\mathbb{Z})^{\oplus 86}$ are isomorphic.
When $r=9$, $X_L$ is a curve of genus 15 and $Y_L$ is a Fano 5-fold of index 2; see \cite[\S 11]{Kuzn06}. Hence, by combining Theorem \ref{Theorem: HPD} with Remark \ref{remark: trivial leq 2}(iii), we conclude that $\mathcal{Z}^*_{\sim_\mathrm{alg}/\sim_\mathrm{rat}}(Y_L)_\mathbb{Q}$, $\mathcal{Z}^*_{\sim_\mathrm{alg}/\sim_\mathrm{rat}}(X_L)_\mathbb{Q}$ and $(\mathbb{Q}/\mathbb{Z})^{\oplus 2 \times 15}$ are all isomorphic to each other.
\end{proof}
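Both computations are instances of the formula from Remark \ref{remark: trivial leq 2}(iii): for a smooth complex curve $C$ of genus $\mathrm{g}$ one has
\begin{align*}
\mathcal{Z}^*_{\sim_\mathrm{alg}/\sim_\mathrm{rat}}(C)_\mathbb{Q} \simeq (\mathbb{Q}/\mathbb{Z})^{\oplus 2\mathrm{g}},
\end{align*}
\noindent which, combined with the isomorphism of Theorem \ref{Theorem: HPD}, yields $(\mathbb{Q}/\mathbb{Z})^{\oplus 2 \times 43}=(\mathbb{Q}/\mathbb{Z})^{\oplus 86}$ for the genus $43$ curve and $(\mathbb{Q}/\mathbb{Z})^{\oplus 2 \times 15}=(\mathbb{Q}/\mathbb{Z})^{\oplus 30}$ for the genus $15$ curve.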
\end{example}
\begin{example}[Linear sections of determinantal varieties]\label{example: determinantal}
Let $U$ and $V$ be two $k$-vector spaces of dimensions $m$ and $n$, respectively, with $m \leq n$ and $r$ an integer such that $0<r<m$.
As in \cite{BBF06}, consider the determinantal variety $Z^r_{m,n} \subseteq \mathbb{P}(U \otimes V)$ defined as the locus of those matrices $M:V \rightarrow U^*$ with rank $\leq r$.
It is known that $Z^r_{m,n}$ admits a canonical Springer resolution of singularities $\mathcal{X}^r_{m,n}:=\mathbb{P}(\mathcal{Q} \otimes U) \rightarrow \Gr(r,U)$, where $\mathcal{Q}$ stands for the tautological quotient on $\Gr(r,U)$.
Under these notations, let $X_L$ be a generic linear section of codimension $c$ of $\mathcal{X}^r_{m,n}$ under the map $\mathcal{X}^r_{m,n}\rightarrow \mathbb{P}(U \otimes V)$, and $Y_L$ the corresponding dual linear section of $\mathcal{X}^{m-r}_{m,n}$ under the map $\mathcal{X}^{m-r}_{m,n} \rightarrow \mathbb{P}(U^* \otimes V^*)$. From \cite[\S 3]{BBF06} we have that $X_L$ and $Y_L$ are both smooth and, from \cite[\S 3]{Tab20}, we have $\dim (X_L)=r(m+n-r)-1-\dim(L)$ and $\dim (Y_L)=r(m-n-r)-1+\dim(L)$.
Moreover, by \cite[\S 3]{BBF06}, Proposition \ref{proposition: kernel nc factors}, and Remark \ref{remark: trivial leq 2}(i), there is a Lefschetz decomposition of $\perf(\mathcal{X}^r_{m,n})$ and $\ker(q^{\mathbb{A}_0^{\mathrm{dg}},\nc}_{\sim_2/\sim_1})=0$.
\begin{corollary}
\label{corollary: determinantal}
Let $X_L$ and $Y_L$ be as in Example \ref{example: determinantal} and $\sim_1, \sim_2 \in \{\sim_\mathrm{rat}, \sim_\mathrm{alg}, \sim_\mathrm{nil}, \sim_\mathrm{num}\}$. Under these assumptions, we have $\mathcal{Z}^*_{\sim_2/\sim_1}(X_L)_\mathbb{Q} \simeq \mathcal{Z}^*_{\sim_2/\sim_1}(Y_L)_\mathbb{Q}$. Moreover, the equivalence relations $\sim_\mathrm{num}$ and $\sim_\mathrm{alg}$ on $\mathcal{Z}^*(X_L)$ (and on $\mathcal{Z}^*(Y_L)$) agree whenever $\dim (L)\geq r(m+n-r)-3$ or $\dim (L) \leq 3+r(n-m+r)$.
\end{corollary}
\end{example}
\subsection{Prime Fano threefolds and del Pezzo threefolds}
A \textit{Fano variety} is a smooth proper connected algebraic variety whose anticanonical class is ample. Following \cite[\S 5.4]{kuzn21}, a \textit{prime Fano threefold} is a Fano threefold $X$ with $\text{Pic}(X)=\mathbb{Z} K_X$, whose genus $\text{g}(X)$ is defined by $(-K_X)^3=2\text{g}(X)-2$; it is known that $1 \leq \text{g}(X)\leq 12$ and $\text{g}(X)\neq 11$. Moreover, for prime Fano threefolds of even genus there is a semi-orthogonal decomposition of $\perf(X)$ with a nontrivial component $\mathcal{A}_X$.
In the same way, following \cite[\S 5.4]{kuzn21}, a \textit{del Pezzo threefold} is a Fano threefold $Y$ with $-K_Y=2H$ for a primitive Cartier divisor class $H$; its degree is defined as $\text{d}(Y)=H^3$. It is known that $1\leq \text{d}(Y)\leq 5$ for del Pezzo threefolds of Picard rank 1. In addition, a del Pezzo threefold admits a semi-orthogonal decomposition with a non-trivial component $\mathcal{B}_Y$; consult \cite[\S3]{kuzn09} for further details.
Let $X$ be a prime Fano threefold with $\text{g}(X)\in \{8,10,12\}$. Following \cite[Theorem 3.8]{kuzn09}, there exists a unique del Pezzo threefold $Y$ of degree $\text{d}(Y)=\frac{\text{g}(X)}{2}-1\in \{3,4,5\}$ such that $\mathcal{A}_X \simeq \mathcal{B}_Y$.
\begin{proposition}
\label{proposition: Prime Fano and Del Pezzo}
Let $X$ and $Y$ be as above. Then, we have an isomorphism $\mathcal{Z}^*_{\sim_\mathrm{alg}/\sim_\mathrm{rat}}(X)_\mathbb{Q} \simeq \mathcal{Z}^*_{\sim_\mathrm{alg}/\sim_\mathrm{rat}}(Y)_\mathbb{Q}$.
\end{proposition}
\begin{proof}
Proposition \ref{proposition: kernel nc factors} and Remark \ref{remark: trivial leq 2}(i) imply that $K_{0,\sim_\mathrm{alg}/\sim_\mathrm{rat}}(X)_\mathbb{Q}$ is isomorphic to $K_{0,\sim_\mathrm{alg}/\sim_\mathrm{rat}}(\mathcal{A}_X^{\mathrm{dg}})_\mathbb{Q}$.
Similarly, $K_{0,\sim_\mathrm{alg}/\sim_\mathrm{rat}}(Y)_\mathbb{Q}$ is isomorphic to $K_{0,\sim_\mathrm{alg}/\sim_\mathrm{rat}}(\mathcal{B}_Y^{\mathrm{dg}})_\mathbb{Q}$.
Since the categories $\mathcal{A}_X$ and $\mathcal{B}_Y$ are (Fourier-Mukai) equivalent, we have, moreover, an isomorphism between $K_{0,\sim_\mathrm{alg}/\sim_\mathrm{rat}}(\mathcal{A}_X^{\mathrm{dg}})_\mathbb{Q}$ and $K_{0,\sim_\mathrm{alg}/\sim_\mathrm{rat}}(\mathcal{B}_Y^{\mathrm{dg}})_\mathbb{Q}$. This implies that $K_{0,\sim_\mathrm{alg}/\sim_\mathrm{rat}}(X)_\mathbb{Q}$ and $K_{0,\sim_\mathrm{alg}/\sim_\mathrm{rat}}(Y)_\mathbb{Q}$ are isomorphic. Consequently, Corollary \ref{corollary: nc and c are =} allows us to conclude that $\mathcal{Z}^*_{\sim_\mathrm{alg}/\sim_\mathrm{rat}}(X)_\mathbb{Q}$ is isomorphic to $\mathcal{Z}^*_{\sim_\mathrm{alg}/\sim_\mathrm{rat}}(Y)_\mathbb{Q}$.
\end{proof}
\noindent \textbf{Acknowledgments}. I thank Gonçalo Tabuada for discussions regarding \cite{BMT}. This work is funded by national funds through the FCT - Fundação para a Ciência e a Tecnologia, I.P., under the scope of the projects UIDB/00297/2020 and UIDP/00297/2020 (Center for Mathematics and Applications) and the PhD scholarship SFRH/BD/144305/2019.
\bibliographystyle{siam}
\section{Introduction}
Vector fields were suggested in cosmology as alternative to scalar
fields as inflation and dark energy drivers
\cite{Ford,Arm,Kiselev:2004py,Wei:2006tn}. Apart from being
physically appealing as a well-understood and common component of all
existing theories, they may be useful, in particular, in generating
appropriate curvature perturbations
\cite{Dimopoulos:2009am,Dimopoulos:2011ws,Karciauskas:2011fp}.
Especially attractive seem to be non-Abelian models, which admit
magnetic type field configurations compatible with isotropy and
homogeneity of space-time
\cite{Galtsov:1991un,VoGa,Moniz:1990hf,Darian:1996mb} and may
therefore be used in the standard Friedmann-Robertson-Walker setting
without averaging needed in the case of vector singlet to avoid
anisotropy. If their dynamics is ruled by the standard Yang-Mills
(YM) conformally invariant action, the corresponding equation of
state is that of the hot universe \cite{Galtsov:1991un}, but
violation of conformal symmetry may lead to different equations of
state. In particular, the Born-Infeld lagrangian generates an
equation of state which interpolates between the hot regime at low
densities and a zero acceleration regime at high density
\cite{BI,elizalde,Fuzfa:2005qn,Fuzfa:2006}. Stronger violation of
conformal symmetry in the purely YM sector may produce accelerating
expansion of inflationary dark energy type \cite{1} exhibiting
typically a finite acceleration period. Various realizations of this
scenario were suggested \cite{Zhao:2005bu}--\cite{Shchigolev:2011kr}
with different motivation. Cosmological perturbations in the YM
cosmology were studied in \cite{pert,zraa2009,kmm}. It is also worth
noting that if one treats inflation as associated with the Higgs
sector of some gauge theory, the excitation of the Yang-Mills
component becomes inevitable and modifies inflationary regime in a
non-trivial way
\cite{Gal'tsov:2010dd,Galtsov:2011aa,Davydov:2011aa}.
Here we would like to investigate the cosmological dynamics of a
Yang-Mills field non-minimally coupled to gravity via a
curvature-dependent mass term. Earlier proposals of non-minimally
coupled Yang-Mills fields in cosmology include
\cite{Balakin:2007mp,Bamba:2008xa,Banijamali:2011ep,Elizalde:2010xq},
but our approach here is different. To introduce non-minimal
coupling one usually probes various combinations of the Riemann tensor
with the field strength and the vector potential. In the latter case
the gauge invariance of the theory is destroyed at the classical level,
but one may regard this simply as a simplified, for cosmological
purposes, description of the spontaneous symmetry breaking mechanism
of the gauge theory which gives mass to the $W$-boson.
\section{The model}
We introduce the non-minimal coupling of the YM potential
$A^{a}_\mu$ to the Ricci tensor generically depending on two
constants, together with an ordinary mass term. This form is
motivated by the absence of higher derivatives in the resulting
equations, so the theory seems to be ghost-free. With this in mind
we choose as the lagrangian the sum of the Einstein-Hilbert term,
the standard YM term and the non-minimal term $L=L_{EH}+L_{YM}+\Lc$,
namely:
\begin{equation}\label{dL}
L=-\frac{ {M}_{Pl}^2}{2}R-\frac{1}{4}
F^{a}_{\mu\nu}F^{a\mu\nu}+\left(\nu R/4-m^2/2\right) A^{a}_\mu
A^{a\mu}+(\lambda-\nu) S_{\mu\nu}A^{a\mu} A^{a\nu}\,.
\end{equation}
Here $ {M}_{Pl}=1/\sqrt{8\pi G}$ is the modified Planck mass, and
$\nu,\lambda$ are new \emph{dimensionless} parameters and $m^2$ is
the mass term which looks natural within such a model. The Ricci
coupling is split into the scalar curvature coupling and a coupling to
the Schouten tensor, defined in $D$ dimensions as
\begin{equation}\label{Schouten}
S_{\mu\nu}=\frac{1}{D-2}\left(R_{\mu\nu}-\frac{R}{2(D-1)}g_{\mu
\nu}\right)\,,
\end{equation}
its advantage will become clear later.
It is convenient to choose the length scale $l=(1/e {M}_{Pl})$ and
replace the mass by the dimensionless parameter $\mu=(m/e
{M}_{Pl})^2$. The interval in dimensionless coordinates reads
\begin{equation}\label{metric}
ds^2=l^2\left\{-N^2 dt^2+a^2\left[d\chi^2+\Sigma^{2}_{k}(\chi)
(d\theta^2+\sin^2\theta d\varphi^2)\right]\right\}\,,
\end{equation}
where $\Sigma_{k}(\chi)=\{\sin\chi,\chi,\sinh\chi\}$ for the
closed, flat and open universe, labeled by $k=1,0,-1$,
correspondingly.
The scalar curvature and the Schouten tensor contain the second
order derivatives
\begin{eqnarray}\label{Ricci}
&&R = 6\left[(\da/aN)\dot{}/N+2\da^2/a^2
N^2+k/a^2\right];\\
&& S^{0}_0 = (\da/aN)\dot{}/N+\da^2/2a^2 N^2-k/2a^2,
\quad S^{i}_{i}=\da^2/2a^2N^2+k/2a^2\;.
\end{eqnarray}
In General Relativity it is common to integrate by parts separating
the total derivative
\begin{equation}
\frac12 R\sqrt{-g}=3\left[a^3(\da/aN)\dot{}+2a\da^2/N+
ka N\right]=3\left[ka N-a\da^2/N\right]+\mbox{div}\,.
\end{equation}
The ansatz for the YM potential preserving the isotropy and
homogeneity was constructed in \cite{Galtsov:1991un} for all $k$. In
the temporal gauge $A^{a}_0=0$ we have
\begin{equation}\label{AA}
A^{a}_i A^{a j}=\delta^{j}_{i}\frac{(k-f)^2}{a^2}\,,\quad -\frac14
F_{\mu\nu}^a F^{\mu\nu}_a=\frac{3\df^2}{2a^2N^2}-
\frac{3(k-f^2)^2}{2a^4}\,.
\end{equation}
As for the curvature term, one can shift the second
derivative in the scalar curvature coupling term onto the gauge field:
\begin{equation}\label{RAA}
\frac{1}{4} R A^{a}_\mu A^{a\mu}\sqrt{-g}=3\df(k-f)\da/N+
\frac32 (k-f)^2\left[kN/a+\da^2/aN\right]+\mbox{div}\,.
\end{equation}
Note that the term $3\df(k-f)\da/N$ looks quite similar to the
topological term
$$\varepsilon^{\mu\nu
\alpha\beta}F^{a}_{\mu\nu}F^{a}_{\alpha\beta}\sqrt{-g}
=3\df(k-f^2)\,.$$ It has the form of an interaction with an
``axion'' $\da/N$ --- a common trick to make the
topological term contribute to the dynamics.
The coupling of the YM field amplitude to the Schouten tensor does
not contain the second derivative terms:
\begin{equation}\label{SAA}
S_{\mu\nu}A^{a\mu}
A^{a\nu}\sqrt{-g}=\frac{3}{2}(k-f)^2\left[kN/a+\da^2/aN\right]\,.
\end{equation}
Collecting all the terms together and omitting the factor $3$, one
obtains the following effective one-dimensional Lagrangian:
\begin{equation}\label{L1}
\begin{split}
&L_{\mathrm{eff}}=ka
N-\frac{a\da^2}{N}+\frac{a\df^2}{2N}-(k-f^2)^2
\frac{N}{2a}+\Lc,\quad\mbox{where}\\
&\Lc=-\frac{\mu
}{2}(k-f)^2 aN+\nu\df(k-f)\frac{\da}{N}+\frac{\lambda}{2}(k-f)^2\left[\frac{kN}{a}+\frac{\da^2}{aN}\right].
\end{split}
\end{equation}
Thus three different dynamical terms arise in $\Lc$ due to
non-minimal coupling; each can be switched off by setting the
corresponding coupling constant to zero.
\section{Slow-roll} Our goal is to describe slow-roll inflation within the
above model, so we first make some preparations to simplify the
subsequent analysis. We omit the contribution of the
curvature term during the phase of fast expansion, though it
contains a very interesting mode, the cosmological sphaleron, in
the case $k=1$ due to the YM self-interaction potential
\cite{Gist,Volkov:1993gp}; this will be discussed elsewhere. So, in
what follows, $k=0$.
Next, for the YM mode the conformal field amplitude $\psi=f/a$ is
quite natural, while the metric variable can be chosen in the
exponential form $a=\exp\alpha$. This allows us to simplify the
metric contribution to the kinetic term, so the full
Lagrangian now takes the form:
\begin{equation}\label{L2}
\begin{split}
&L=\frac{e^{3\alpha}}{2N}\left[-A(\psi)\dal^2+2B(\psi)\dal\dpsi
+\dpsi^2\right]-N e^{3\alpha}V(\psi), \quad\mbox{where}\\
&A(\psi)=1+2\nu\psi-(\lambda+1)\psi^2,\quad
B(\psi)=(1-\nu)\psi,\quad V(\psi)=\frac{1}{2}(\mu\psi^2+\psi^4)\,.
\end{split}
\end{equation}
The above dynamical system has a constraint, obtained by
variation with respect to $N$, which in the form of the Friedmann
equation reads:
\begin{equation}\label{constr1}
\frac{\dal^2}{2}A(\psi)=\dal\dpsi
B(\psi)+\frac{\dpsi^2}{2}+N^2 V(\psi)\,.
\end{equation}
The variation with respect to $\alpha$ and $\psi$ gives the
equations of motion:
\begin{eqnarray}
&&[N^{-1}e^{3\alpha}(B\dpsi-A\dal)]\,\dot{}
= \frac{3 e^{3\alpha}}{2N}\left[-A\dal^2+2B\dal\dpsi
+\dpsi^2-2N^2 V\right], \label{eq_t1}\\
&&[N^{-1}e^{3\alpha}(\dpsi+B\dal)]\,\dot{} =
\frac{ e^{3\alpha}}{2N}\left[-A_\psi\dal^2+2B_\psi\dal\dpsi-2N^2V_\psi\right]\,. \label{eq_t2}
\end{eqnarray}
Here $A_\psi,B_\psi,V_\psi$ denote the partial derivative with
respect to $\psi$. Instead of fixing the time gauge we may proceed
with an invariant description, choosing the $\alpha$ as an
independent variable. Introducing the Hubble parameter, $H\equiv
d\alpha/N dt$, we may rewrite the system as
\begin{eqnarray}
&&H[H e^{3\alpha}(B\psi'-A)]'
= -\frac32 H^2 e^{3\alpha}\left[A-2B\psi'
-\psi'^2+2H^{-2}V\right], \label{eq_a1}\\
&&H\left[H e^{3\alpha}(\psi'+B)\right]' =
-\frac12 H^2 e^{3\alpha}\left[A_\psi-2B_\psi\psi'+2H^{-2}V_\psi\right]\,,
\label{eq_a2}
\end{eqnarray}
where prime denotes the derivative with respect to $\alpha$. Using
the constraint, the potential term can be expressed as
\begin{equation}\label{constraintV}
2H^{-2} V=A-2B\psi'
-\psi'^2,
\end{equation}
and used in the above equations.
Now let us introduce the slow-roll parameters which are usually
used to detect the inflationary stage in the dynamics of the system:
\begin{equation}\label{slowroll}
\ep=-\frac{\dot{H}}{H^2N}=-\frac{H'}{H},\quad \delta=-\frac{\dpsi}{H\psi
N}=-\frac{\psi'}{\psi}\,.
\end{equation}
Of course, they are also independent of the choice of gauge. We may
rewrite the system of equations~(\ref{eq_a1}--\ref{eq_a2}),
replacing $\psi',\psi'',H'$ by $\delta,\delta',\ep$ and using the
constraint to avoid the appearance of the $H^{-2}$ term:
\begin{eqnarray}
&& (\ep-3)(B\psi\delta+A)+(B_\psi\psi+B)\psi\delta^2-B\psi\delta'=
-3(A+2B\psi\delta -\psi^2\delta^2)\,, \\
&&(\ep-3)(\psi\delta-B)+\psi(\delta^2-\delta')-B_\psi\psi\delta
=\nonumber\\&& -\frac{1}{2}(A_\psi+2B_\psi\psi\delta)-\frac{1}{2}[A+2B\psi\delta
-\psi^2\delta^2](\ln V)_\psi\,.
\end{eqnarray}
Finally, we collect terms with different powers of $\ep$ and $\delta$:
\begin{eqnarray}
&&A\ep+3B\psi\delta=B\psi\delta'-(B_\psi\psi+B-3\psi)\psi\delta^2-B\psi\delta\ep, \\
&& 3B+\frac12 \left[A_\psi+A(\ln V)_\psi\right]+\left[B(\ln V)_\psi-3\right]\psi\delta-B\ep
= \psi^2\delta'+\left[\frac{\psi}{2}(\ln V)_\psi-1\right]\psi\delta^2-\psi\delta\ep.
\end{eqnarray}
Since the constraint was already incorporated in these two
equations, they provide the independent conditions on the slow-roll
parameters. The first equation gives:
\begin{equation}\label{epdelta}
\ep=-(3B\psi/A)\delta+O(\delta^2).
\end{equation}
Then the second equation implies the relation on the initial state
of the system:
\begin{equation}\label{istate}
\delta+\frac{A[A(\ln V)_\psi+A_\psi+6B]}{2\psi[AB(\ln
V)_\psi-3A+3B^2]}=O(\delta^2,\delta').
\end{equation}
If the initial conditions $\{\psi_i,\psi_i'\}$ ensure both that
$\delta_i=-\psi_i'/\psi_i\ll 1$ and that the l.h.s. of
relation~(\ref{istate}) vanishes, then one has $\delta'\sim
O(\delta^2),\,\ep\sim\delta$, which signals a slow-roll regime. Now
assume for simplicity that neither the quantity $3B\psi/A$ itself,
nor its derivative with respect to $\psi$ is singular in the
corresponding region of the phase space, to ensure that $\ep'\sim
O(\delta^2)$ as well. Also one has to ensure that the value $H^2$
from the constraint~(\ref{constraintV}) is positive for the chosen
initial conditions: this is the additional restriction, which was
not taken into account before.
To estimate the number of $e$-folds gained by the scale factor
during slow-roll inflation, one can simply treat the
expression~(\ref{istate}) as defining a function $\delta(\psi)$.
Then by definition $d\alpha=-[\psi\delta(\psi)]^{-1}d\psi$ and
\begin{equation}
\Ne=\alpha_e-\alpha_i=-\int_{\psi_i}^{\psi_e}\frac{d\psi}{\psi\delta(\psi)},
\end{equation}
where the exit from the slow-roll may occur when
$|\delta(\psi_e)|=1$. The full expression is rather complicated,
yet one can find that in the trans-Planckian region, $|\psi_i|\gg
1$, $\psi^2\gg|\mu|$,
\begin{equation}\label{delta}
\delta(\psi)\approx
-\frac{[2\nu-\psi(\lambda+1)][5\nu-3\psi(\lambda+\nu)]}{(3\nu^2+4\lambda\nu-\lambda-2\nu+2)\psi^2}.
\end{equation}
In addition, the constraint~(\ref{constraintV}) will provide
$H^2>0$ if $A+2B\psi\delta-\psi^2\delta^2>0$.
For a convenient choice of the parameters, say $\lambda=-1$ or
$\lambda=-\nu$, one can easily get the answer. Indeed, one now has
$\delta\sim 1/\psi$ and $\Ne\sim\psi_i$. Of course, the value
$\Ne$ is determined only up to a factor of order unity, since the exit
condition, $\delta_e\sim 1$, has just the same level of accuracy.
In the general case the analytic analysis is quite complicated, so
we proceed with brief numerical calculations.
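As a quick numerical illustration of this estimate (an aside with arbitrary parameter values, not a computation from the paper), one can evaluate the approximate $\delta(\psi)$ of (\ref{delta}) for $\lambda=-\nu$, locate the exit point $|\delta(\psi_e)|=1$, and accumulate $\Ne$ by a simple Riemann sum; the result indeed grows roughly linearly with $\psi_i$:

```python
def delta_approx(psi, nu, lam):
    # Trans-Planckian slow-roll parameter delta(psi), eq. (delta) of the text.
    num = (2.0*nu - psi*(lam + 1.0)) * (5.0*nu - 3.0*psi*(lam + nu))
    den = (3.0*nu*nu + 4.0*lam*nu - lam - 2.0*nu + 2.0) * psi * psi
    return -num / den

def efolds(psi_i, nu=5.0, lam=-5.0, dpsi=1e-2):
    # N_e = -int dpsi / (psi * delta), integrated down to |delta| = 1
    # by a left Riemann sum with (arbitrary) step dpsi.
    n, psi = 0.0, psi_i
    while psi > 0.0 and abs(delta_approx(psi, nu, lam)) < 1.0:
        n += dpsi / (psi * delta_approx(psi, nu, lam))
        psi -= dpsi
    return n
```

For $\nu=-\lambda=5$ this gives $\Ne(2\psi_i)\approx 2\,\Ne(\psi_i)$ at large $\psi_i$, consistent with the linear scaling $\Ne\sim\psi_i$.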
\section{Numeric analysis and Outlook}
It is convenient to solve the
system~(\ref{eq_t1}--\ref{eq_t2}) numerically in the gauge $N=1$. Note that
a system of equations of the form $M_{ij}(q)\ddot{q}^j=\Phi_{i}(q,\dot{q})$
has a singularity when $\Delta=\det(M)$ vanishes. In our case the
corresponding determinant is $\Delta\sim B^2(\psi)+A(\psi)$. But the
slow-roll initial state automatically ensures that the regime is
non-singular (the dynamics cannot be `slow' in the vicinity of a
singularity).
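As a concrete sketch of such an integration (an illustrative aside, not the computation of the paper: the parameter values, step size and stopping rule are arbitrary choices), the following pure-Python snippet integrates the Euler-Lagrange equations of the Lagrangian~(\ref{L2}) for $k=0$, $\mu=0$, $\nu=-\lambda=5$ in the gauge $N=1$ by a fixed-step Runge-Kutta scheme, with slow-roll initial data taken from relation~(\ref{istate}) and the Hubble rate fixed by the constraint~(\ref{constr1}):

```python
import math

def make_model(nu, lam, mu):
    # A, B, V of eq. (L2) and their psi-derivatives.
    A  = lambda p: 1.0 + 2.0*nu*p - (lam + 1.0)*p*p
    Ap = lambda p: 2.0*nu - 2.0*(lam + 1.0)*p
    B  = lambda p: (1.0 - nu)*p
    Bp = lambda p: 1.0 - nu
    V  = lambda p: 0.5*(mu*p*p + p**4)
    Vp = lambda p: mu*p + 2.0*p**3
    return A, Ap, B, Bp, V, Vp

def integrate(nu=5.0, lam=-5.0, mu=0.0, psi0=50.0, dt=2e-4, max_steps=60000):
    A, Ap, B, Bp, V, Vp = make_model(nu, lam, mu)

    # Slow-roll initial data: delta from relation (istate), H from the constraint.
    a, b, v, vp = A(psi0), B(psi0), V(psi0), Vp(psi0)
    dlnV = vp / v
    delta0 = -a*(a*dlnV + Ap(psi0) + 6.0*b) / (2.0*psi0*(a*b*dlnV - 3.0*a + 3.0*b*b))
    dpsi_dalpha = -psi0*delta0
    H0 = math.sqrt(2.0*v / (a - 2.0*b*dpsi_dalpha - dpsi_dalpha**2))
    y = [0.0, psi0, H0, dpsi_dalpha*H0]       # [alpha, psi, alpha_dot, psi_dot]

    def rhs(y):
        # Euler-Lagrange equations of (L2) with N=1, written as the linear
        # system  [[-A, B], [B, 1]] (alpha'', psi'')^T = (R1, R2)^T.
        _, p, ad, pd = y
        a, ap, b, bp = A(p), Ap(p), B(p), Bp(p)
        R1 = 1.5*a*ad*ad + 1.5*pd*pd - 3.0*V(p) + ap*ad*pd - bp*pd*pd
        R2 = -0.5*ap*ad*ad - 3.0*b*ad*ad - 3.0*ad*pd - Vp(p)
        det = -a - b*b                        # ~ -(A + B^2), the Delta of the text
        add = (R1 - b*R2) / det
        pdd = (-b*R1 - a*R2) / det
        return [ad, pd, add, pdd]

    for _ in range(max_steps):
        if y[1] <= 2.0:                       # stop well before the exit psi ~ 1
            break
        k1 = rhs(y)
        k2 = rhs([y[i] + 0.5*dt*k1[i] for i in range(4)])
        k3 = rhs([y[i] + 0.5*dt*k2[i] for i in range(4)])
        k4 = rhs([y[i] + dt*k3[i] for i in range(4)])
        y = [y[i] + dt*(k1[i] + 2*k2[i] + 2*k3[i] + k4[i])/6.0 for i in range(4)]

    # Hamiltonian constraint (exactly zero along the true trajectory).
    _, p, ad, pd = y
    c = -0.5*A(p)*ad*ad + B(p)*ad*pd + 0.5*pd*pd + V(p)
    scale = 0.5*A(p)*ad*ad + V(p)
    return {"n_efolds": y[0], "psi_end": y[1], "constraint": abs(c)/scale}
```

For these illustrative parameters the amplitude rolls down from $\psi_i=50$ while $\alpha$ gains of order ten $e$-folds, and the vanishing of the Hamiltonian constraint is preserved to numerical accuracy; the precise normalization of $\Ne$ depends, of course, on the chosen exit point.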
For simplicity we restrict the numerical experiments here to the
two-dimensional parameter domain $(\nu,\lambda)$ of non-minimal
couplings to gravity, setting the mass parameter $\mu$ to zero. The
corresponding behavior of $\delta(\psi)$ is shown in Fig.
\ref{Fig_istates}. It appears that the sub-domains of the slow-roll
initial states are nearly one-dimensional in a wide range of
$\psi_i$. This is true in the trans-Planckian region but changes
substantially when the field amplitude goes below the Planck scale.
Since there is little practical sense in investigating such a
complicated domain of initial states, we focus on the trans-Planckian
regime. The numerical solutions confirm that the line $\lambda=-\nu$
generates stable slow-roll inflation in the region $\nu>1$. The
number of $e$-folds does not actually depend on the value of $\nu$,
and is proportional to $\psi_i$, as can be seen in Fig.
\ref{Fig_solutions}. But it is very sensitive to the condition
$\lambda=-\nu$: even small deviations at the percent level may increase
or decrease $\Ne$ by several times. Finally, at the exit from the
inflationary stage, when $\psi_e\sim 1$, the solutions shown in
Fig. \ref{Fig_solutions} meet a square-root singularity due to
the vanishing of the system determinant $\Delta$. In fact, the
singularity can be avoided for other choices of initial data, but the
numerical experiments performed here have shown that these usually do
not provide good inflation.
It would be interesting to ask whether there is any physics behind
the condition $\lambda=-\nu$. In this case the non-minimal coupling
term in four dimensions becomes $$\Lc=-\nu
[R_{\mu\nu}-(5/12)R\,g_{\mu\nu}]A^{a\mu} A^{a\nu}.$$ The geometrical
tensor appearing here is close to, but not exactly coinciding with,
the Einstein tensor. In such a model a stable inflationary regime
exists, starting in the trans-Planckian region and lasting until the
YM field amplitude rolls down to the Planck scale, while the number of
$e$-folds is proportional to the initial value of the field
amplitude, $\Ne\approx 0.6\psi_i$.
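For completeness, the coefficient $5/12$ can be traced directly from the definitions: setting $\lambda=-\nu$ (and $m^2=0$) in the lagrangian (\ref{dL}) and using the $D=4$ Schouten tensor (\ref{Schouten}), $S_{\mu\nu}=\frac12\left(R_{\mu\nu}-\frac{R}{6}g_{\mu\nu}\right)$, one finds
$$\Lc=\left[\frac{\nu R}{4}g_{\mu\nu}-2\nu S_{\mu\nu}\right]A^{a\mu}A^{a\nu}
=-\nu\left[R_{\mu\nu}-\left(\frac14+\frac16\right)R\, g_{\mu\nu}\right]A^{a\mu}A^{a\nu}
=-\nu\left[R_{\mu\nu}-\frac{5}{12}R\, g_{\mu\nu}\right]A^{a\mu}A^{a\nu}.$$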
Another special property of the proposed non-minimally coupled YM
model is that typically there is no smooth exit from the
inflationary stage into the radiation-dominated universe. The
solutions end with a square-root singularity, where the derivatives
$\dot\alpha$ and $\dot\psi$ diverge. In this region the system
demonstrates chaotic behavior due to the essentially non-linear
coupling of the YM field to the metric, evolving from one
singularity to another. Solutions corresponding to a radiation-dominated
universe can be obtained when the initial energy of the YM
field is small, so that the non-linear terms of the non-minimal
coupling can be neglected. Possibly, the singular region between the
inflationary stage and the hot universe may correspond to a phase
transition in the universe, once other matter fields are included.
\section*{Acknowledgments}
This work was supported by RFBR grants 11-02-01371-a and
11-02-01335-a.
\begin{figure}[p]
\hbox to\linewidth{\hs
\psfrag{a}{\LARGE{$\psi_i=100$}}\psfrag{b}{\LARGE{$\psi_i=10$}}
\psfrag{c}{\LARGE{$\psi_i=1$}} \psfrag{d}{\LARGE{$\psi_i=0.1$}}
\psfrag{x}{\huge{$\mbox{Abscissae}:\:-5<\nu<5$}}
\psfrag{y}{\huge{$\mbox{Ordinates}:\:-5<\lambda<5$}}
\resizebox{7cm}{7cm}{\includegraphics{istates.eps}}
\hss} \caption{Some sets in the parameter space (filled with grey
color) which satisfy the energy constraint and provide the slow-roll
dynamics: $|\delta(\psi_i,\nu,\lambda)|<0.1$ and
$H^2(\psi_i,\nu,\lambda)>0$.} \label{Fig_istates}
\end{figure}
\begin{figure}[p]
\hbox to\linewidth{\hs
\psfrag{1}{\LARGE{$\psi_i=200$}}\psfrag{2}{\LARGE{$\psi_i=150$}}
\psfrag{3}{\LARGE{$\psi_i=100$}} \psfrag{4}{\LARGE{$\psi_i=50$}}
\psfrag{5}{\LARGE{$\nu=-\lambda=1$}}\psfrag{6}{\LARGE{$\nu=-\lambda=5$}}
\psfrag{7}{\LARGE{$\nu=-\lambda=10$}}\psfrag{8}{\LARGE{$\nu=-\lambda=15$}}
\psfrag{t}{\huge{$t$}} \psfrag{y}{\huge{$\psi,\,\alpha$}}
\resizebox{10cm}{8cm}{\includegraphics{solutions.eps}}
\hss} \caption{A set of 16 solutions for $\psi_i=50,100,150,200$
and $\nu=-\lambda=1,5,10,15$. Solid lines for the metric exponent,
$\alpha(t)$, dot lines for the YM field amplitude, $\psi(t)$.
Their linear similarity verifies that the trans-Planckian region
and the parameter line $\nu=-\lambda>1$ is a stable domain for the
slow-roll inflationary scenario.} \label{Fig_solutions}
\end{figure}
\section{Introduction}\label{Introduction}
Let $S_{g, b}$ be a compact connected orientable surface of genus $g\geq 3$ with $b\geq 0$ boundary components.
The {\it mapping class group} $\mathcal{M}_{g, b}$ of $S_{g, b}$ is the group consisting of the isotopy classes of orientation-preserving homeomorphisms of $S_{g, b}$, fixing the boundary components pointwise.
Let $i\colon S_{g, b}\rightarrow S_{g, 0}$ be a natural inclusion map defined by capping the boundary components of $S_{g, b}$ by $b$ disks.
Then there is a surjective homomorphism $\varphi\colon \mathcal{M}_{g, b}\rightarrow\mathcal{M}_{g, 0}$ induced by $i$, namely, $\varphi(f)=[\bar{F}]$ ($f=[F]\in\mathcal{M}_{g, b}$), where $\bar{F}(x)=F(x)$ if $x\in S_{g, b}$ and $\bar{F}(x)=x$ if $x\not\in S_{g, b}$.
For $b=0,1$, the {\it Torelli group} ${\mathcal I}_{g, b}$ of $S_{g, b}$ is the kernel of the natural homomorphism $\Phi \colon {\mathcal M}_{g, b}\rightarrow {\rm Aut}(H_{1}(S_{g, b};{\mathbb Z}))$.
We note that for $b=0,1$ the {\it Torelli group} is a normal subgroup of the mapping class group.
For $b\geq 2$, we define the {\it Torelli group} ${\mathcal I}_{g, b}$ of $S_{g, b}$ by $\varphi^{-1}(\mathcal{I}_{g, 0})$.
By definition, $\mathcal{I}_{g,b}$ is also a normal subgroup of $\mathcal{M}_{g,b}$ for $g\geq 2$.
There are several definitions of Torelli groups with $b\geq 2$ boundary components (see~\cite{Putman11}).
In case of our definition, the Torelli group is finitely generated (it is a consequence of \cite[Theorem 4.1]{Putman11}), and hence it is equipped with a word norm.
The geometry of the mapping class groups of orientable surfaces is well understood.
Therefore, understanding the geometry of the inclusion homomorphism ${\mathcal I}_{g, b}\hookrightarrow\mathcal{M}_{g, b}$ may allow one to deduce geometric
properties of the Torelli group from geometric properties of the mapping class group.
Let $G$ be a finitely generated group and $K$ a finitely generated subgroup of $G$.
In this paper, we denote by $\parallel\cdot\parallel_{G}$ a word metric of a finitely generated group $G$.
Then, there exists $C>0$ such that $\parallel h\parallel_{G}\leq C\parallel h\parallel_{K}$ for any $h\in K$.
A natural question is to ask for the smallest function $\delta\colon{\mathbb N}\rightarrow{\mathbb R}$ which satisfies $\parallel h\parallel_{K}\leq\delta(\parallel h\parallel_{G})$ for every $h\in K$.
The distortion of $K$ in $G$ is at most $\delta$ if there exist $C$, $C'>0$ such that for each $h\in K$, it follows that $\parallel h\parallel_{K}\leq C\delta(\parallel h\parallel_{G})+C'$.
The distortion of $K$ in $G$ is at least $\delta$ if there exists a sequence $\{h_{i}\}$ ($h_{i}\in K$) such that the word norm of $h_{i}$ in $G$ grows linearly, while the word norm in $K$ grows like $\delta$.
If $K$ is at most and at least $\delta$ distorted in $G$, then we say the {\it distortion} of $K$ in $G$ is (exactly) $\delta$.
If the distortion of $K$ in $G$ is linear, we say that $K$ is {\it undistorted} in $G$.
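As a classical illustration of these notions (an aside, not specific to mapping class groups), consider the Baumslag-Solitar group $G=\langle a, t\mid tat^{-1}=a^{2}\rangle$ with the cyclic subgroup $K=\langle a\rangle$. Iterating the defining relation gives $t^{n}at^{-n}=a^{2^{n}}$, so that $\parallel a^{2^{n}}\parallel_{G}\leq 2n+1$, while $\parallel a^{2^{n}}\parallel_{K}=2^{n}$; hence $K$ is exponentially distorted in $G$.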
The distortions of various subgroups of the mapping class groups have been investigated.
For example, the mapping class groups of subsurfaces are undistorted in the mapping class group by Masur-Minsky~\cite{Masur-Minsky00} and Hamenst\"{a}dt~\cite{Hamenstad09a}, while the handlebody group is exponentially distorted in the corresponding mapping class group by Hamenst\"{a}dt-Hensel~\cite{Hamenstad-Hensel12}.
Moreover, Broaddus-Farb-Putman~\cite{Broaddus-Farb-Putman11} proved that $\mathcal{I}_{g, 0}$ (resp. $\mathcal{I}_{g, 1}$) is at least exponentially and at most double exponentially distorted in $\mathcal{M}_{g, 0}$ (resp. $\mathcal{M}_{g, 1}$).
From the result of Cohen~\cite{Cohen14}, it follows that the distortion of $\mathcal{I}_{g, 0}$ (resp. $\mathcal{I}_{g, 1}$) in $\mathcal{M}_{g, 0}$ (resp. $\mathcal{M}_{g, 1}$) is exponential.
We find a lower bound of the distortion of the Torelli groups in the mapping class groups for the orientable surfaces with $b\geq2$ boundary components by applying Broaddus-Farb-Putman's arguments:
\begin{theorem}\label{first_thm}
For $g\geq 3$ and $b\geq 2$, ${\mathcal I}_{g, b}$ is at least exponentially distorted in ${\mathcal M}_{g, b}$.
\end{theorem}
For $d\geq 2$, the {\it level $d$ mapping class group} ${\mathcal M}_{g, b}[d]$ of $S_{g, b}$ ($b=0, 1$) is the kernel of the natural homomorphism $\Phi_{d}\colon {\mathcal M}_{g, b}\rightarrow {\rm Aut}(H_{1}(S_{g, b};{\mathbb Z}/d{\mathbb Z}))$.
We show that the distortion of the Torelli groups in the level $d$ mapping class group is the same as that of in the mapping class groups, and so we obtain the following.
\begin{theorem}\label{second_thm}
For $g\geq 3$, $b\leq 1$ and $d\geq 2$, ${\mathcal I}_{g, b}$ is exponentially distorted in ${\mathcal M}_{g, b}[d]$.
\end{theorem}
We note that we cannot apply Broaddus-Farb-Putman's arguments to the upper bound of the distortion of ${\mathcal I}_{g, b}$ in ${\mathcal M}_{g, b}$ for $g\geq 3$ and $b\geq 2$.
The reason is as follows.
For $b\leq 1$, the image $\Phi ({\mathcal M}_{g, b})$ is isomorphic to the symplectic group $\mathrm{Sp}(2g;\mathbb{Z})$, and they use a property of $\mathrm{Sp}(2g;\mathbb{Z})$ for the upper bound of the distortion of ${\mathcal I}_{g, b}$ in ${\mathcal M}_{g, b}$.
However, for $b\geq 2$ we do not know whether the image $\Phi ({\mathcal M}_{g, b})$ is isomorphic to some symplectic group, and so we cannot use the same approach as that of Broaddus-Farb-Putman.
\section{Proof of Theorem~\ref{first_thm}}\label{Proof of First theorem}
In this section, we prove Theorem~\ref{first_thm}. Firstly, we give the definition of the partially hyperbolic matrices.
\begin{definition}
Let $V$ be a free abelian group.
We say that an element of the automorphism group $\mathrm{GL}(V)\cong \mathrm{Aut}(V)$ is {\it partially hyperbolic} if the corresponding linear transformation of $V\otimes\mathbb{C}$ has some eigenvalue $\lambda$ with $|\lambda|>1$.
We call such a matrix a {\it partially hyperbolic matrix}.
\end{definition}
We use the following proposition by Broaddus-Farb-Putman~\cite{Broaddus-Farb-Putman11} to prove Theorem~\ref{first_thm}.
\begin{proposition}{\rm (}\cite[Proposition 2.3]{Broaddus-Farb-Putman11}{\rm )}\label{criterion for exponential distortion}
Let $G$ be a finitely generated group and $K$ be a finitely generated subgroup of $G$.
Suppose that $V$ is a free abelian subgroup equipped with a $G$-action $\rho\colon G\rightarrow {\rm GL}(V)$ and that $\psi\colon K\rightarrow V$ is a surjective homomorphism which is $G$-equivariant, where $G$ acts on $K$ by conjugation.
If $\rho(G)$ contains a partially hyperbolic matrix, then the distortion of $K$ in $G$ is at least exponential.
\end{proposition}
We assume that $g\geq 3$.
We set $H=H_{1}(S_{g, 0};{\mathbb Z})$.
Broaddus-Farb-Putman~\cite{Broaddus-Farb-Putman11} proved that $G=\mathcal{M}_{g, 0}$, $K=\mathcal{I}_{g, 0}$, $V=\wedge^{3}H/H$, and $\psi\colon K\rightarrow V$ satisfy the assumptions in Proposition~\ref{criterion for exponential distortion}, where $\psi$ is the Johnson homomorphism.
Hence $\mathcal{I}_{g, 0}$ is at least exponentially distorted in $\mathcal{M}_{g, 0}$.
We use this fact in the proof.
\begin{proof}[Proof of Theorem~\ref{first_thm}]
We set $G=\mathcal{M}_{g, b}$ and $K=\mathcal{I}_{g, b}$ ($g\geq 3$, $b\geq 2$).
By the definition of $\mathcal{I}_{g, b}$, $\mathcal{M}_{g, b}$ acts on $\mathcal{I}_{g, b}$ by conjugation.
Further, we put $V=\wedge^{3}H/H$.
We regard $S_{g,0}$ as the surface obtained by capping $S_{g,b}$ by $b$ disks.
Let $\varphi\colon\mathcal{M}_{g, b}\rightarrow\mathcal{M}_{g, 0}$ be the homomorphism defined in Section~\ref{Introduction}.
Let $\tau\colon\mathcal{I}_{g, 0}\rightarrow \wedge^{3}H/H$ be the Johnson homomorphism, and $\psi$ the composition of $\varphi|_{K}$ with $\tau$.
The homomorphism $\psi$ is surjective since $\varphi$ induces the surjective homomorphism $\varphi|_{K}\colon K\rightarrow \mathcal{I}_{g, 0}$.
We denote by $\rho$ a homomorphism defined by the composition of $\varphi$ with the homomorphism $\varphi'\colon\mathcal{M}_{g, 0}\rightarrow\mathrm{GL}(\wedge^{3}H/H)$.
Since $\varphi$ is surjective and Broaddus-Farb-Putman~\cite{Broaddus-Farb-Putman11} showed that ${\rm Im}(\varphi')$ contains a partially hyperbolic matrix, ${\rm Im}(\rho)$ also contains a partially hyperbolic matrix.
We show that they satisfy the assumptions of Proposition~\ref{criterion for exponential distortion}.
It is sufficient to prove the surjective homomorphism $\psi\colon\mathcal{I}_{g, b}\rightarrow\wedge^{3}H/H$ is $\mathcal{M}_{g, b}$-equivariant, namely, $f\cdot\psi(g)=\psi(f\cdot g)$ for any $f\in\mathcal{M}_{g,b}$ and $g\in\mathcal{I}_{g, b}$.
Since $f\cdot\psi(g)=(\varphi(f))(\psi(g))$ and $\psi(f\cdot g)=\psi(fgf^{-1})$, we show $(\varphi(f))(\psi(g))=\psi(fgf^{-1})$.
Since $\tau$ is $\mathcal{M}_{g, b}$-equivariant, we have $(\varphi(f))(\psi(g))=\tau(\varphi(f)\cdot\varphi(g))=\tau(\varphi(fgf^{-1}))=\psi(fgf^{-1})$.
Hence, $\psi$ is $\mathcal{M}_{g, b}$-equivariant, and we have finished the proof.
\end{proof}
\section{Proof of Theorem~\ref{second_thm}}\label{Proof of Second theorem}
In this section we prove Theorem~\ref{second_thm}.
\begin{proof}[Proof of Theorem~\ref{second_thm}]
We can show that ${\mathcal M}_{g, b}[d]$ has finite index in ${\mathcal M}_{g, b}$.
Then, ${\mathcal M}_{g, b}[d]$ is quasi-isometrically embedded in ${\mathcal M}_{g, b}$, that is, there exists $\lambda>0$ such that $\parallel f\parallel_{{\mathcal M}_{g, b}}\leq\lambda\parallel f\parallel_{{\mathcal M}_{g, b}[d]}+\lambda$ for any $f\in{\mathcal M}_{g, b}[d]$.
From the fact that the distortion of ${\mathcal I}_{g, b}$ in ${\mathcal M}_{g, b}$ is exponential, there exist $C_{1}, C_{2}>0$ such that
$\parallel f\parallel_{{\mathcal I}_{g, b}}\leq C_{1}e^{\parallel f\parallel_{{\mathcal M}_{g, b}}}+C_{2}$ for any $f\in{\mathcal I}_{g, b}$.
Hence, it follows that $\parallel f\parallel_{{\mathcal I}_{g, b}}\leq C_{1}e^{\lambda\parallel f\parallel_{{\mathcal M}_{g, b}[d]}+\lambda}+C_{2}$.
Then we see the distortion of ${\mathcal I}_{g, b}$ in ${\mathcal M}_{g, b}[d]$ is at most exponential.
If $L\leq K\leq G$, then the distortion of $L$ in $G$ is at most the composition of the distortion of $L$ in $K$ with the distortion of $K$ in $G$.
We know ${\mathcal I}_{g, b}\leq {\mathcal M}_{g, b}[d]\leq {\mathcal M}_{g, b}$.
Now ${\mathcal M}_{g, b}[d]$ has finite index in ${\mathcal M}_{g, b}$.
Thus ${\mathcal M}_{g, b}[d]$ is undistorted in ${\mathcal M}_{g, b}$.
Since the distortion of ${\mathcal I}_{g, b}$ in ${\mathcal M}_{g, b}$ is exponential and ${\mathcal M}_{g, b}[d]$ is undistorted in ${\mathcal M}_{g, b}$, the distortion of ${\mathcal I}_{g, b}$ in ${\mathcal M}_{g, b}[d]$ is at least exponential, and we are done.
\end{proof}
\par
{\bf Acknowledgements:} The authors are deeply grateful to Hisaaki Endo for his warm encouragement and helpful advice.
The first author was supported by JSPS KAKENHI, the grant number 16J00397 and the second author was supported by JSPS KAKENHI, the grant number 15J10066 of Research Fellowship for Young Scientists.
\section{Coding Theory}
Error correcting codes are broadly divided into two classes: linear codes and nonlinear codes. Due to their elegant mathematical structure, linear codes have received the major interest of the coding community. Traditionally, linear codes are categorized into two types: block linear codes and convolutional codes. As every McEliece cryptosystem is based on block linear codes, we look specifically at block linear codes. Our standard reference for coding theory will be Blahut ~\cite[Chapter 3]{Blahut}. Most of our discussion concerns codes over fields $\mathbb{F}_{2^{l}}$ of characteristic 2 for some integer $l\geq 1$, but the same theory can be applied over any finite field.
A $q$-ary linear code $\mathcal{C}$ of length $n$ and rank $k$ is a $k$-dimensional linear subspace of $\mathbb{F}_{q}^{n}$.
\begin{definition} Hamming weight: Hamming weight or simply weight $w$ of a vector $v$, denoted by $w(v)$, is defined as the number of non-zero entries in $v$.
\end{definition}
The Hamming weight induces a canonical distance function on $\mathbb{F}_{q}^{n}$, called the Hamming distance.
\begin{definition} Hamming distance $d_{H}(x,y) := w(x-y)$.
\noindent The distance of code $\mathcal{C}$ is defined as the minimum of the distance between any two distinct codewords in $\mathcal{C}$. We denote this by $d(\mathcal{C})$ or simply $d$ if code $\mathcal{C}$ is clear from the context.
\[d(\mathcal{C})= \min_{x,y \in \mathcal{C},\, x\neq y} d_{H}(x,y).\]
\end{definition}
Traditionally, such a code is denoted as an $\left[ n,k,d\right]$ code.
The following is an easy lemma, a proof of which can be found in any standard text such as ~\cite{Roth,Huffman-Pless}.
\begin{lemma} Let $\mathcal{C}$ be a linear code then \[d(\mathcal{C}) = \min_{x \in \mathcal{C} , x \neq 0} w(x)\]
\end{lemma}
One of the important parameters of a code is its error correction capacity.
\begin{definition} Error correction capacity $t$ : Let $\mathcal{C}$ be an $\left[n,k,d \right]$ code. Then the error correction capacity is the largest integer $t$ such that every $y\in \mathbb{F}_{q}^{n}$ is within Hamming distance $t$ of at most one codeword $c \in \mathcal{C}$. The ratio $\frac{t}{n}$ is known as the error correction rate of the code $\mathcal{C}$.
\end{definition}
Error correction capacity and the minimum distance of a code are related to each other by an obvious relation $t= \left\lfloor \dfrac{d(\mathcal{C})-1}{2} \right\rfloor$.
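These definitions are easy to check by brute force on a small code. The sketch below is purely illustrative; the generator matrix used is one standard choice for the $[7,4]$ Hamming code, for which $d=3$ and hence $t=1$.

```python
from itertools import product

def hamming_weight(v):
    """w(v): number of nonzero entries of v."""
    return sum(1 for x in v if x != 0)

def hamming_distance(x, y):
    """d_H(x, y) = w(x - y); over F_2 this is the number of differing positions."""
    return sum(1 for a, b in zip(x, y) if a != b)

def minimum_distance(G):
    """d(C) = minimum weight of a nonzero codeword (by the lemma above)."""
    k, n = len(G), len(G[0])
    weights = []
    for x in product([0, 1], repeat=k):
        if any(x):
            c = [sum(x[i] * G[i][j] for i in range(k)) % 2 for j in range(n)]
            weights.append(hamming_weight(c))
    return min(weights)

# Example: the [7,4] Hamming code has d = 3, hence t = floor((3-1)/2) = 1.
G = [[1,0,0,0,0,1,1],
     [0,1,0,0,1,0,1],
     [0,0,1,0,1,1,0],
     [0,0,0,1,1,1,1]]
d = minimum_distance(G)
print(d, (d - 1) // 2)   # 3 1
```

The exhaustive loop over all $2^{k}$ messages is, of course, only feasible for toy parameters.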
\vspace{\baselineskip}
A natural way to construct this $k$-dimensional space $\mathcal{C}$ is to view it as the row space of some $k \times n$ matrix $G$, known as a generator matrix of the code $\mathcal{C}$; we say that $G$ generates the code $\mathcal{C}$. Alternatively, $\mathcal{C}$ can be described as the kernel of some matrix $H$ of size $(n-k) \times n$. In that case $H$ is known as a parity check matrix. Of course, for a given code $\mathcal{C}$ there are many generator matrices and many parity check matrices. The code generated by $H$ is known as the dual code of $\mathcal{C}$ and is denoted $\mathcal{C}^{\perp}$. Note that both $G$ and $H$ are full rank matrices.
It is easy to check that $GH^{T} = 0$. Also, the dual of the dual is the same code, that is, $\left( \mathcal{C}^{\perp} \right) ^{\perp} = \mathcal{C}$. So another way to define the dual code is as follows:
\begin{definition} Dual code $\mathcal{C}^{\perp}$ : Let $\mathcal{C}$ be a linear $\left[ n,k \right]$ code over $\mathbb{F}_{q}$ then its $\left[ n, n-k \right]$ dual $\mathcal{C}^{\perp}$ is defined as
$\mathcal{C}^{\perp} = \lbrace y \in \mathbb{F}_{q}^{n} \text{ such that for all } x\in \mathcal{C}\; \langle x, y \rangle = 0 \rbrace$ where $\langle x,y\rangle$ denotes inner product of $x$ and $y$ over $\mathbb{F}_{q}$.
\end{definition}
Now we give a few examples of codes.
\begin{enumerate}
\item[$\left(I \right)$] Hadamard Code $\left[ n=2^{r}, k=r, d=2^{r-1} \right]$ :
\noindent The Hadamard code is a linear code generated by a generator matrix whose $i^{th}$ column is the number $i$ written in binary using $r$ bits. Thus, there are $2^{r}$ columns corresponding to all binary numbers with $r$ bits. The code has minimum distance $d=2^{r-1}$ and hence can correct about $2^{r-2}$ errors.
\vspace{\baselineskip}
For example, for $r=3$, $G=\left[ \begin{array}{cccccccc}
0 &0 &0 &0 &1 &1 &1 &1\\
0 &0 &1 &1 &0 &0 &1 &1\\
0 &1 &0 &1 &0 &1 &0 &1 \end{array} \right] $
\item[$\left(II \right)$] Binary Goppa Codes $\left[n,k,2t+1 \right]$ :
\noindent A binary Goppa code is constructed from a polynomial $g(x)$ of degree $t$ over a finite field $\mathbb{F}_{2^{m}}$ of characteristic 2, where $g(x)$ has no multiple zeros, in the following way.
\noindent Pick a set of $n$ points $\lbrace p_{1},p_{2},\ldots ,p_{n}: p_{i} \in \mathbb{F}_{2^{m}} , g(p_{i}) \neq 0 \rbrace$, i.e., none of the $p_{i}$ is a root of the polynomial $g(x)$. Then the parity check matrix of the corresponding code is the product of two matrices: a Vandermonde matrix and a diagonal matrix. Construct the following matrices $V$ and $D$.
\vspace{\baselineskip}
$V=\left[ \begin{array}{cccc}
1&1&\cdots& 1\\
p_{1}&p_{2}&\cdots&p_{n}\\
p_{1}^{2}&p_{2}^{2}&\cdots&p_{n}^{2}\\
\vdots& \vdots&\ddots& \vdots\\
p_{1}^{t-1}&p_{2}^{t-1}&\cdots&p_{n}^{t-1}
\end{array}\right]$
$D=\left[ \begin{array}{cccc}
\frac{1}{g(p_{1})}&0&\cdots& 0\\
0& \frac{1}{g(p_{2})}&\cdots&0\\
\vdots& \vdots&\ddots& \vdots\\
0&0&\cdots&\frac{1}{g(p_{n})}
\end{array}\right]$
$H = V D $.
\vspace{\baselineskip}
Binary Goppa codes are the codes used in the original McEliece cryptosystems.
\item[$\left(III \right)$] Cyclic Codes:
\noindent Let $a=(a_{1},a_{2},\ldots ,a_{n})$ be an element of $\mathbb{F}_{q}^n$; we define the right shift of $a$ as $(a_{n},a_{1},\ldots ,a_{n-1})$. A linear code is called a cyclic code if it is closed under right shifts.
\noindent Cyclic codes have a nice algebraic structure. Such a code can be viewed as an ideal in the polynomial ring $R=\mathbb{F}_{q}[x]/(x^{n}-1)$, which is a principal ideal ring. In this ring, multiplication of a codeword $c$ by $x$ results in a right shift of $c$. Being an ideal, the code space can be generated by a single polynomial $g(x)$, known as the generator polynomial of the code.
\item[$\left(IV \right)$] Quasi Cyclic Codes (QCCs) $\left[ n=m_{1}p, k=m_{2}p, d \right]$:
A quasi-cyclic code is a simple generalization of a cyclic code. A linear code is quasi-cyclic if there exists an integer $m$ such that the code is closed under $m$ right shifts. That is, if we denote by $R$ the right shift operator, then for all $c \in QCC$ we have $R^{m}(c)\in QCC$. The smallest $m$ for which the code is closed under $m$ right shifts is known as the index of the code. For cyclic codes, we have $m=1$. QCCs can be constructed from generator matrices whose every block is a circulant matrix of size $p$, and the number of such blocks in a particular row indicates the index. Similar to cyclic codes, QCCs also have a nice algebraic structure. We will come back to QCCs later in more detail, as both of our variants are built over QCCs.
\end{enumerate}
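The $H=VD$ construction of example $(II)$ can be carried out mechanically. The sketch below works over the prime field $\mathbb{F}_{11}$ rather than $\mathbb{F}_{2^{m}}$ purely to keep the field arithmetic elementary (extension-field arithmetic needs more machinery); the prime, the polynomial $g$, and the support points are illustrative choices only, not standard parameters.

```python
# Toy H = V * D over the prime field F_11 (illustrative parameters only;
# real binary Goppa codes work over F_{2^m}).
p = 11
t = 2
g = lambda x: (x * x + 1) % p          # degree-t polynomial with no root in F_11
points = [0, 1, 2, 4, 5, 6]            # support points: g(p_i) != 0 for each
assert all(g(pt) != 0 for pt in points)

inv = lambda a: pow(a, p - 2, p)       # field inverse via Fermat's little theorem
n = len(points)

V = [[pow(pt, row, p) for pt in points] for row in range(t)]      # Vandermonde
D = [[inv(g(points[j])) if i == j else 0 for j in range(n)]
     for i in range(n)]                                           # diagonal

H = [[sum(V[i][k] * D[k][j] for k in range(n)) % p for j in range(n)]
     for i in range(t)]                                           # H = V D
print(H)
```

Row $i$ of the resulting $t \times n$ matrix is $\bigl(p_{j}^{i}/g(p_{j})\bigr)_{j}$, exactly the structure displayed above.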
Consider a communication where we want to send a $k$-bit message $m=(b_{1},b_{2},\ldots,b_{k})$. When we send this over a channel, there is a possibility that some $b_{i}$ gets corrupted to $b'_{i}$. Notice that such a corrupted vector is also a possible message that one could intend to send, so the receiver receives a wrong message. To overcome this difficulty, we add redundancy to the message, e.g., by repeating the message bits thrice: for a $k$-bit message $m=(b_{1},b_{2},\ldots,b_{k})$ we instead send the $3k$-bit message $m'=(b_{1},b_{1},b_{1},b_{2},b_{2},b_{2},\ldots,b_{k},b_{k},b_{k})$. Such codes are known as repetition codes. Even if some bit $b_{i}$ gets corrupted to $b'_{i}$ at one particular position, we have two other copies to recover the original message $m$.\footnote{Note that the corrupted message is not in the possible sample set for the receiver, since the receiver expects vectors with a particular pattern, namely that each partition of 3 components has the same element throughout the partition.} In practice, much larger numbers than 3 are used; it is highly unlikely that the exact same corruption happens at multiple places.
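The 3-fold repetition code just described fits in a few lines; this is an illustrative sketch, with majority voting serving as the decoder.

```python
def rep3_encode(bits):
    """Send three copies of every message bit."""
    return [b for b in bits for _ in range(3)]

def rep3_decode(received):
    """Majority vote inside each block of three: recovers each bit as long
    as at most one of its three copies was flipped."""
    return [1 if sum(received[i:i + 3]) >= 2 else 0
            for i in range(0, len(received), 3)]

m = [1, 0, 1, 1]
c = rep3_encode(m)        # [1,1,1, 0,0,0, 1,1,1, 1,1,1]
c[4] ^= 1                 # flip one copy of the second bit
print(rep3_decode(c))     # [1, 0, 1, 1]
```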
This process of adding redundancies before sending a message is known as the encoding of an error correcting code. The process for a general linear error correcting code can be implemented as follows: Suppose we want to send messages of $k$-bits. So we have a message space of dimension $k$ over $\mathbb{F}_{q}$. We embed this space into much higher dimensional space, say of dimension $n$, over $\mathbb{F}_{q}$. One of the ways this can be done is by using a compatible linear map $G$ of full rank.
\begin{center}
$\begin{array}{rccl}
\varphi_{G}: & \mathbb{F}_{q}^{k} &\longrightarrow& \mathbb{F}_{q}^{n}\\
& x & \mapsto & xG
\end{array}$
\end{center}
The map has a $k$-dimensional range with trivial kernel. As one might guess, our generator matrix $G$ can be used as a linear map. The domain of the map $\varphi_{G}$ is known as the message space while the range is known as the code space. This process is known as encoding and the map $\varphi_{G}$ is known as the encoding function.
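The encoding map $\varphi_{G}(x)=xG$ is a single matrix-vector product over $\mathbb{F}_{q}$. Below is a minimal sketch for $q=2$; the generator matrix shown is one standard choice for the $[7,4]$ Hamming code, used only as an example.

```python
def encode(x, G, q=2):
    """phi_G(x) = xG over F_q: embeds the k-symbol message x into F_q^n."""
    k, n = len(G), len(G[0])
    assert len(x) == k
    return [sum(x[i] * G[i][j] for i in range(k)) % q for j in range(n)]

# One standard generator matrix of the [7,4] Hamming code, as an example.
G = [[1,0,0,0,0,1,1],
     [0,1,0,0,1,0,1],
     [0,0,1,0,1,1,0],
     [0,0,0,1,1,1,1]]
print(encode([1, 0, 1, 1], G))   # [1, 0, 1, 1, 0, 1, 0]
```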
Once received, the vector from $\mathbb{F}_{q}^{n}$ has to be converted back to the original message vector in $\mathbb{F}_{q}^{k}$. This is done by the process known as decoding. A decoding process is a two step process. Usually, the first step consists of finding the nearest codeword to the received vector. And the second step is obtaining the message from this codeword. The second step can be done using common routines from linear algebra such as Gaussian elimination and generally is not a difficult task.
\[\begin{array}{rcccl}
\psi: & \mathbb{F}_{q}^{n} & \stackrel{\psi_{1}}{\longrightarrow} & \mathcal{C} \stackrel{\psi_{2}}{\longrightarrow} & \mathbb{F}_{q}^{k}
\end{array}\]
\begin{center}
where $\psi_{1}(x) \mapsto c$ such that for all $c^{\prime}\in \mathcal{C},\; d_{H}(x,c^\prime) \geqslant d_{H}(x,c)$
and $\psi_{2} $ is an inverse of $\varphi_{G}$ when restricted on $\mathcal{C}$.
\end{center}
Here, $e=x-c$ is known as the error, and the process of computing $c$ from $x$, denoted by $\psi_{1}$ above, is called error correction. As this is the hardest computation in $\psi$, sometimes $\psi_{1}$ is also referred to as the decoding of a linear code. This process of decoding is not easy for a random linear code. The problem of decoding a random linear code is known as the general decoding problem and is NP-complete ~\cite{brl}. But there are some codes for which this can be done easily, e.g., RS codes, binary Goppa codes, cyclic codes, and some subclasses of QCCs. Such codes are called decodable codes, and we say that they have a decoder.
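For intuition, $\psi_{1}$ and $\psi_{2}$ can be combined into one exhaustive search over all $2^{k}$ messages. This illustrative sketch is feasible only for toy codes, which is precisely the point: for a random linear code with realistic parameters no efficient shortcut is known.

```python
from itertools import product

def nearest_codeword_decode(y, G):
    """psi_1 and psi_2 by exhaustive search: find the message whose codeword
    is closest to the received vector y (brute force, exponential in k)."""
    k, n = len(G), len(G[0])
    best_msg, best_dist = None, n + 1
    for x in product([0, 1], repeat=k):
        c = [sum(x[i] * G[i][j] for i in range(k)) % 2 for j in range(n)]
        d = sum(a != b for a, b in zip(y, c))
        if d < best_dist:
            best_msg, best_dist = list(x), d
    return best_msg, best_dist

# [7,4] Hamming code (d = 3, t = 1), used as an example.
G = [[1,0,0,0,0,1,1],
     [0,1,0,0,1,0,1],
     [0,0,1,0,1,1,0],
     [0,0,0,1,1,1,1]]
y = [1, 0, 1, 1, 0, 1, 0]        # the codeword of the message [1,0,1,1]
y[2] ^= 1                        # introduce a single error (within t = 1)
print(nearest_codeword_decode(y, G))   # ([1, 0, 1, 1], 1)
```

Since the error weight is within $t$, the nearest codeword is unique and the original message is recovered.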
\section{Public Key cryptosystems based on coding theory}
Using this hardness of the general decoding problem, Robert McEliece ~\cite{McEliece} proposed a cryptosystem based on algebraic coding theory. Later Niederreiter proposed his knapsack like variant based on the same principle. The rough idea behind both the systems can be explained as below :
Take a code $\mathcal{C}$ which has a good decoding algorithm. Transform this code into a new code $\mathcal{C^\prime}$ with no apparent structure. A sender sends messages using the code $\mathcal{C^\prime}$. As the code $\mathcal{C^\prime}$ has no visible structure, it is as good as a random linear code and no one can decode it. But the party who generated the system knows the transformation from $\mathcal{C}$ to $\mathcal{C^\prime}$, so they can revert the transformation, work in the frame of $\mathcal{C}$, and then decode successfully, as $\mathcal{C}$ has a good decoder. An eavesdropper cannot decode, since he has no way to transform the system from $\mathcal{C^\prime}$ back to $\mathcal{C}$.
Now we move on to the McEliece and Niederreiter cryptosystems. We first explain their encryption and decryption algorithms and classical security. We address the question of quantum security in Chapter 2.
\subsection{Description of the McEliece cryptosystem}
Let $M$ be a generator matrix for an $\left[n,k\right]$ linear code $\mathcal{C}$ for which a fast decoding algorithm exists. Let $\mathcal{E}$ be the number of errors that $\mathcal{C}$ can correct.
\textbf{Private Key}: ($S$,$M$,$P$) where $S \in GL_{k}(\mathbb{F}_{2})$ and $P$ is an $n\times n$ permutation matrix.
\textbf{Public Key}: $M^\prime=SMP$.
\noindent\textbf{Encryption}:
\begin{description}
\item Let $p \in \mathbb{F}_{q}^{k}$ be a $k$-bit plain-text. Corresponding cipher-text $c \in \mathbb{F}_{q}^{n}$ is obtained by calculating $c=pM^\prime+e$ where $e$ is a random error vector such that $wt(e) \leq \mathcal{E}$.
\end{description}
\textbf{Decryption}:
\begin{description}
\item Received cipher-text $c$ is decrypted in the following way:
\item Multiplying $c$ by $P^{-1}$ we obtain $c P^{-1}=(pM^\prime+e)P^{-1}=(pSMP+e)P^{-1}=pSM+e_{2}$. Note that $e_{2}$ has the same weight as $e$.
\item Now use the decoding algorithm for $M$ on the vector $pSM+e_{2}$ to obtain $pS$.
\item Multiply by $S^{-1}$ to recover plain-text $p$.
\end{description}
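The decryption steps can be traced end to end on a toy instance. The sketch below is illustrative only and far too small to be secure: it uses the $[7,4]$ Hamming code ($t=1$) as the secret code, a fixed involutory scrambler ($S^{-1}=S$) to avoid a matrix-inversion routine, and a brute-force decoder.

```python
from itertools import product

def vec_mat(x, A):
    """Row vector times matrix over F_2."""
    return [sum(xi * a for xi, a in zip(x, col)) % 2 for col in zip(*A)]

def mat_mul(A, B):
    return [vec_mat(row, B) for row in A]

# Secret code M: a generator of the [7,4] Hamming code, correcting t = 1 error.
M = [[1,0,0,0,0,1,1],
     [0,1,0,0,1,0,1],
     [0,0,1,0,1,1,0],
     [0,0,0,1,1,1,1]]
k, n = 4, 7

# Toy scrambler: S = I + E_{01} is invertible over F_2 with S^{-1} = S.
S = [[1,1,0,0],
     [0,1,0,0],
     [0,0,1,0],
     [0,0,0,1]]
perm = [3, 0, 6, 1, 5, 2, 4]                 # row i of P has its 1 in column perm[i]
P = [[1 if perm[i] == j else 0 for j in range(n)] for i in range(n)]
Pinv = [[1 if perm[j] == i else 0 for j in range(n)] for i in range(n)]  # P^T

Mpub = mat_mul(mat_mul(S, M), P)             # public key M' = SMP

def decode(y):
    """Brute-force nearest-codeword decoder for the secret code M."""
    return min(product([0, 1], repeat=k),
               key=lambda x: sum(a != b for a, b in zip(vec_mat(x, M), y)))

def encrypt(p_msg, e):
    assert sum(e) <= 1                       # wt(e) <= t = 1
    return [(a + b) % 2 for a, b in zip(vec_mat(p_msg, Mpub), e)]

def decrypt(c):
    c2 = vec_mat(c, Pinv)                    # cP^{-1} = pSM + e_2
    pS = list(decode(c2))                    # decoding recovers pS
    return vec_mat(pS, S)                    # multiply by S^{-1} (= S here)

msg = [1, 0, 1, 1]
print(decrypt(encrypt(msg, [0, 0, 0, 0, 1, 0, 0])))   # [1, 0, 1, 1]
```

Note that $eP^{-1}$ has the same weight as $e$, so the brute-force decoder always sees an error within the correction capacity.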
\textit{A brief explanation for security}:
Consider a communication where Alice wants to send a message to Bob. As in any public key cryptosystem, Bob generates his private key and computes its public counterpart. The private key consists of three matrices $S, M, P$, with $M$ having a good decoder. The private key has a corresponding public key $M^\prime$. Since $S$ and $P$ are chosen randomly, the resulting matrix $M^\prime$ is also a random matrix with no structure, and hence no efficient decoding with $M^\prime$ is possible. Clearly, every step in decryption is easy for Bob, so he can decrypt. An eavesdropper, however, cannot do this, because he has no good decoding algorithm for the publicly known matrix $M^\prime$ and he lacks the essential knowledge of $S$ and $P$.
Another interesting question regarding the security of the system is, \textsl{'what happens if $M$ is known?'} The problem is then the same as finding the transformation $\left( S, P \right)$ between two codes, and it is believed that this problem is not easy to solve. The graph isomorphism problem can be reduced to its decision version, that is, deciding whether two codes are equivalent; so anyone who solves the code equivalence problem efficiently also solves a problem that has received great attention from the theoretical computer science community over the last few decades ~\cite{Petrank}. This problem of finding a transformation between generator matrices of equivalent codes remains intractable. But it presents a possible window of attack for a quantum computer: this transformation problem, known as the scrambler-permutation problem, can be modelled as a hidden shift problem and further reduced to a hidden subgroup problem, where quantum Fourier sampling can come into play. Thus, it becomes essential to make sure that McEliece and its variants resist this quantum Fourier sampling attack.
\subsection{Niederreiter Cryptosystem}
Let $H$ be an $(n-k) \times n$ parity check matrix for an $[n,k]$ linear code $\mathcal{C}$ for which a fast decoding algorithm exists. Let $\mathcal{E}$ be the number of errors that $\mathcal{C}$ can correct.
\textbf{Private Key}: ($S$,$H$,$P$) where $S \in GL_{n-k}(\mathbb{F}_{2})$ and $P$ is a permutation matrix of size $n$.
\textbf{Public Key}: $H^{\prime}=SHP$.
\noindent\textbf{Encryption}:
\begin{description}
\item Let $p$ be an $n$-bit plain-text with weight at most $\mathcal{E}$. The corresponding cipher-text $c$ of $n-k$ bits is obtained by calculating $c=H^\prime p^T$.
\end{description}
\textbf{Decryption}:
\begin{description}
\item Compute $y=S^{-1} c$. Thus $y = H P {{p}^T}$.
\item By linear algebra find a $z$ such that $Hz^T = y$. As $y = H P {{p}^T}$ we have $z-pP^T \in \mathcal{C}$.
\item Now use fast decoding on $z$ with $H$ to get $pP^T$ and thus recover $p$.\end{description}
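Again these steps can be traced on a toy instance. The following illustrative sketch uses the $[7,4]$ Hamming code with $t=1$; because $t=1$, the nearest-codeword step collapses to a single syndrome lookup (the single 1 of $Pp^{T}$ sits at the column of $H$ equal to the syndrome), so the intermediate vector $z$ of the description above is not needed explicitly. The scrambler, permutation, and plaintext are illustrative choices.

```python
# Toy Niederreiter over the [7,4] Hamming code (t = 1).
n, t = 7, 1

H = [[0,1,1,1,1,0,0],        # a parity check matrix of the [7,4] Hamming code
     [1,0,1,1,0,1,0],
     [1,1,0,1,0,0,1]]

def mat_mul(A, B):
    return [[sum(a * b for a, b in zip(row, col)) % 2 for col in zip(*B)]
            for row in A]

def mat_vec(A, v):
    return [sum(a * x for a, x in zip(row, v)) % 2 for row in A]

S = [[1,1,0],                # S in GL_{n-k}(F_2); this choice satisfies S^{-1} = S
     [0,1,0],
     [0,0,1]]
perm = [2, 5, 0, 6, 1, 4, 3]
P = [[1 if perm[i] == j else 0 for j in range(n)] for i in range(n)]

Hpub = mat_mul(mat_mul(S, H), P)         # public key H' = SHP

def encrypt(p_msg):
    assert sum(p_msg) <= t               # plaintexts are vectors of weight <= t
    return mat_vec(Hpub, p_msg)          # c = H' p^T, an (n-k)-bit syndrome

def decrypt(c):
    y = mat_vec(S, c)                    # y = S^{-1} c = H P p^T
    if not any(y):
        e = [0] * n                      # zero syndrome: zero plaintext
    else:                                # t = 1: Pp^T has a single 1, at the
        cols = [[H[r][j] for r in range(3)] for j in range(n)]
        e = [1 if cols[j] == y else 0 for j in range(n)]   # column equal to y
    return mat_vec(list(zip(*P)), e)     # p^T = P^{-1}(Pp^T), with P^{-1} = P^T

p_msg = [0, 0, 0, 0, 0, 1, 0]
print(decrypt(encrypt(p_msg)))           # recovers p_msg
```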
\subsection{Signature scheme}
Initially it was thought that the McEliece cryptosystem could not accommodate a signature scheme, as it is not a commutative cryptosystem in the sense that the order and roles of the encryption and decryption algorithms cannot be interchanged. In other cryptosystems such as RSA, where encryption and decryption do commute, ready-made signature schemes are available. Later, in 2001, Courtois, Finiasz and Sendrier ~\cite{Courtois} came up with a signature scheme for the Niederreiter cryptosystem.
\section{Classical attacks}
In this section we briefly go over the generic classical attacks. Most of the attacks are local attacks in the sense that they try to decrypt a given ciphertext. Complete breaks that recover the private key are extremely demanding and not known to be feasible.
Attacks trying to recover the plaintext from the knowledge of the ciphertext and public key alone amount to solving the general decoding problem and are out of the discussion. Most of the classical attacks that stand a chance come under the category called Information Set Decoding (ISD). There are two popular ISD attacks: one by Stern ~\cite{Stern} and the other by Lee and Brickell ~\cite{Lee}. As mentioned in ~\cite{Baldi}, ISD attacks are the best known classical attacks and hence are considered the security level of the system.
One of the basic attacks was suggested by McEliece~\cite{McEliece}. Lee and Brickell improved his attack and added an important verification step by which the attacker can confirm whether the recovered message is the correct one. The strategy is based on repeatedly selecting $k$ bits at random from an $n$-bit cipher-text in the hope that none of the selected bits is part of the error. Similar attacks can also be mounted against Niederreiter cryptosystems. Lee and Brickell also provided a closed-form expression for the complexity of the attack. The work factor of the attack is
\[ W_{j} = T_{j} \left( \alpha k^{3} + N_{j} \beta k \right), \qquad\text{where}\quad
T_{j} = \dfrac{1}{\sum\limits_{i=0}^{j} \dfrac{{t \choose i} {n-t \choose k-i}}{{n \choose k}}}
\quad\text{and}\quad
N_{j} = \sum\limits_{i=0}^{j}{k \choose i}. \]
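The work factor is easy to evaluate numerically. In the sketch below the machine-dependent cost constants $\alpha$ and $\beta$ are set to 1, so the output is only a rough order-of-magnitude estimate; exact rationals are used to avoid floating-point overflow in the huge binomial ratios.

```python
from fractions import Fraction
from math import comb, log2

def lee_brickell_work_factor(n, k, t, j, alpha=1, beta=1):
    """W_j = T_j (alpha k^3 + N_j beta k); alpha, beta are machine-dependent
    cost constants, set to 1 here for a rough estimate."""
    success = Fraction(sum(comb(t, i) * comb(n - t, k - i) for i in range(j + 1)),
                       comb(n, k))      # probability one random selection works
    T_j = 1 / success                   # expected number of selections
    N_j = sum(comb(k, i) for i in range(j + 1))
    return float(T_j * (alpha * k ** 3 + N_j * beta * k))

# McEliece's original parameters n = 1024, k = 524, t = 50:
for j in range(3):
    print(j, round(log2(lee_brickell_work_factor(1024, 524, 50, j)), 1))
```

For these parameters the base-2 logarithm of $W_{j}$ decreases as $j$ grows from 0 to 2, matching Lee and Brickell's observation that a small positive $j$ is optimal.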
The other attack is due to Stern ~\cite{Stern}. The basic idea behind this attack is to recover the intentionally added error vector by embedding the public code space into a higher dimensional code space. Let $C^\prime$ be the code generated by the public key $G^\prime$ and let $x$ be the cipher-text; Stern then constructs the code given by the generator matrix $G^{{\prime}{\prime}}= \left[ \begin{array}{c}
G^{\prime} \\
x
\end{array} \right]$. Bernstein, Lange and Peters ~\cite{Bernstein} improved the attack using Markov chain modelling. This modified version made the attack faster by a factor of $2^{12}$, and the parameters of the system had to be readjusted.
For a given code, the McEliece and Niederreiter cryptosystems have the same security ~\cite{Li} at the equation level. That means recovering the plain-text from the cipher-text of a McEliece cryptosystem is as hard as doing the same for the Niederreiter cryptosystem, and hence ISD attacks have the same strength when applied to McEliece or Niederreiter over a code with the same parameters. Li, Deng and Wang ~\cite{Li} also analyzed both systems under the attack by Lee and Brickell.
\subsection{Parameters}
Robert McEliece in his original work suggested the parameters $n = 1024,\; k=524,\; t=50$. As mentioned before, after ~\cite{Bernstein} these parameters were no longer secure. Bernstein, Lange and Peters suggested new sets of parameters. We mention a few of them here.
\begin{description}
\item[(a)] For Goppa codes and 80 bit security $n=1632$, $k=1269$, $t=33$. The public key size in this case is $460647\; bits$.
\item[(b)] Without list decoding the suggested set is $n=2048,\; k=1751,\; t=27$. The public key size in this case is $520047\; bits$.
\item[(c)] For 128-bit security $n=2960$, $k=2288$, $t=56$. This parameter selection leads to public keys of size $1537536$ $bits$.
\end{description}
From these parameter choices it is very clear that McEliece cryptosystem or its Niederreiter variant suffers two major drawbacks:
\begin{description}
\item[$(I)$] Large public key sizes.
\item[$(II)$] Low transmission rate or encryption rate.
\end{description}
Various attempts have been made to overcome the above problems, but most of them turned out to be insecure. The McEliece variant over quasi-cyclic circulant codes ~\cite{Baldi} is one of the notable attempts in that direction. In this particular version, the authors looked at QC-LDPC codes and put forth a cryptosystem similar to the McEliece cryptosystem with the matrices $S$ and $P$ replaced by block circulant matrices. Though in this variant a shorter key structure is possible and higher encryption rates can be achieved, one of the most important pieces of the puzzle, the quantum security of the variant, is still missing. If McEliece or any of its variants were to replace one of the currently popular systems such as RSA or ElGamal, the first priority should be its quantum security rather than key sizes and encryption rate.
In the next few chapters we will look into the quantum security of McEliece-type cryptosystems and then provide a Niederreiter variant that is quantum secure and has a high transmission rate. We also present a brief comparative analysis of key sizes and encryption rates with respect to the original McEliece cryptosystem.
\section{The Hidden Subgroup Problem (HSP)}
\begin{definition} {Hidden Subgroup Problem}: Let $\mathrm{G}$ be a group and $f$ a function from $\mathrm{G}$ to a set $\mathrm{X}$. We know that $f(g_{0})=f(g_{1})$ if and only if $g_{0}\mathrm{H}=g_{1}\mathrm{H}$ for some subgroup $\mathrm{H}$. The problem is, given $f$, to find a generating set for the unknown subgroup\footnote{The function $f$ in the hidden subgroup problem is said to separate cosets of $\mathrm{H}$, as $f$ is constant on each coset and takes different values on different cosets.} $\mathrm{H}$.
\end{definition}
In particular, Shor's algorithm is a hidden subgroup problem over $\mathbb{Z}/N\mathbb{Z}$ with function $f(x)=a^{x}$. In this case hidden subgroup is $\mathrm{H}= \langle r\rangle$ where $r$ is the order of $a$ in $\left( \mathbb{Z}/N\mathbb{Z} \right) ^{\times}$.
Quantum Fourier sampling is an algorithm which uses the quantum Fourier transform as a building block. Before going into QFS we recall some facts from representation theory. Given a group $\mathrm{G}$, a matrix representation is a homomorphism $\rho\colon \mathrm{G} \longrightarrow GL_{d_{\rho}}\left(\mathbb{C}\right)$, where $GL_{d}\left(\mathbb{C}\right)$ is the group of invertible $d \times d$ matrices over the complex numbers. We denote the set of all irreducible representations of the group $\mathrm{G}$ by $\widehat{\mathrm{G}}$. So for every $\rho \in \widehat{\mathrm{G}}$ and every $g \in \mathrm{G}$, $\rho(g)$ gives us a $d_{\rho} \times d_{\rho}$ matrix, and $\rho_{(i,j)}(g)$ will denote the entry in the $i^{th}$ row and $j^{th}$ column of $\rho(g)$. We stick to finite groups and their complex irreducible representations. One very well known result from the representation theory of finite groups states that $\sum_{\rho} {d_{\rho}}^{2}= \vert \mathrm{G} \vert $.
Let $\vert \mathrm{G} \vert = N$. Fix an ordering $\left( g_{1},{g}_{2},\ldots,g_{N}\right)$ on the elements of $\mathrm{G}$. For a vector in $\mathbb{C}^{N}$, a general normalized state in the basis $\mathcal{B}_{1} = \lbrace g_{i};\; 1\leqslant i \leqslant N\rbrace$ can be described as \[\vert \psi \rangle = \sum_{i=1}^{N} \alpha_{i} \vert g_{i}\rangle,\; \text{where the } \alpha_{i}\; \text{are complex numbers such that } \sum_{i=1}^{N}\vert \alpha_{i}\vert^{2} = 1.\]
Now consider another basis for $\mathbb{C}^{N}$ given by $\mathcal{B}_{2}= \lbrace (\rho,i,j):\; \rho\in \widehat{\mathrm{G}},\;1 \leqslant i,j \leqslant d_{\rho} \rbrace$. Clearly, $\vert \mathcal{B}_{2}\vert = N$ as $\sum_{\rho} d_{\rho}^{2} = N$. A general normalized state in the basis $\mathcal{B}_{2}$ is denoted as
\[\vert \psi \rangle = \sum_{\rho,i,j} \beta_{\rho,i,j} \vert \rho, i,j \rangle, \text{ where the }\beta_{\rho,i,j}\; \text{are complex numbers such that } \sum_{\rho,i,j} \vert \beta_{\rho,i,j}\vert^{2} = 1.\]
The quantum Fourier transform is a map that takes a normalized state in the basis $\mathcal{B}_{1}$ to a normalized state in $\mathcal{B}_{2}$. Under this map the vector associated with $g$ is $\sum_{\rho,i,j}\dfrac{\sqrt{d_{\rho}}}{\sqrt{\vert \mathrm{G} \vert}} \rho_{(i,j)}(g) \vert \rho,i,j \rangle$. The quantum Fourier transform can be viewed as the $\mathbb{C}$-linear extension of this association. Precisely, the quantum Fourier transform of a general normalized state $\psi = \sum_{l}\alpha_{l} \vert g_{l} \rangle$ in the basis $\mathcal{B}_{1}$ is
\[QFT \vert \psi \rangle = \sum_{l} \alpha_{l}\; QFT \vert g_{l}\rangle = \sum_{l} \alpha_{l} \sum_{\rho,i,j}\dfrac{\sqrt{d_{\rho}}}{\sqrt{\vert \mathrm{G} \vert}} \rho_{(i,j)}(g_{l}) \vert \rho,i,j \rangle.\]
Measurement is a nonlinear operator used in almost every quantum algorithm. A measurement operator is defined with respect to a basis. Here we describe measurement with respect to an orthonormal basis only, as both $\mathcal{B}_{1}$ and $\mathcal{B}_{2}$ described above are orthonormal bases. For measurement in a general basis, or for more general measurement operators, see ~\cite{NDavidMermin,Kaye}. Consider an orthonormal basis $\mathcal{B}= \lbrace b_1, b_2,\ldots b_n\rbrace$ for an $n$-dimensional space. A general normalized state in this basis can be represented as $\vert \psi \rangle = \sum_{i} \beta_{i} \vert b_{i} \rangle$. Measurement of the state $\vert \psi \rangle$ gives $b_{i}$ as its output with probability $\vert \beta_{i}\vert ^{2}$. In other words, after measurement the state collapses to one of the basis states; the probability that it collapses to a particular state is the squared modulus of the coefficient of that basis element in the state before measurement.
After this short background on quantum computing, we are ready to describe quantum Fourier sampling. For further reading the reader may refer to ~\cite{Kaye,Lomont}.
\begin{algorithm}
\caption{Quantum Fourier Sampling (QFS)}\label{alg:QFS}
\begin{algorithmic}[1]
\Procedure{QFS}{$\mathrm{G}, f$}\Comment{QFS over group $\mathrm{G}$ with function $f$}
\State $\vert \psi \rangle = \sum_{g}{\vert g, 0\rangle}$
\State Apply $U_{f}$ to $\vert \psi \rangle$ and let $\vert \psi_{2}\rangle = U_{f} \vert \psi \rangle$
\Comment{$U_{f}$ is a two state unitary operator such that $U_{f}\vert x,y\rangle = \vert x, y \oplus f(x)\rangle$}
\State Measure in the second component to get $\vert \psi_{3}\rangle$
\Comment{Measurement of vector $\vert \phi \rangle = \sum_{i} \alpha_{i} \vert e_{i} \rangle$ gives output $e_{i}$ with probability $\vert \alpha_{i} \vert ^{2}$}
\State Apply QFT on first component of $\vert \psi_{3} \rangle$
\State Measure $\vert \rho,i,j\rangle$ for strong Fourier sampling \emph{OR}
Measure $\vert \rho \rangle$ component for weak Fourier sampling
\EndProcedure
\end{algorithmic}
\end{algorithm}
Strong quantum Fourier sampling outputs $\vert \rho,i,j \rangle$ for the hidden subgroup $\mathrm{H}$ with probability given by
\[ \mathsf{P}_{\mathrm{H}} \left( \vert \rho,i,j \rangle\right) = \dfrac{1}{\vert \mathrm{G} \vert} \sum_{g\in \mathrm{G}} \mathsf{P}_{g\mathrm{H}} \left( \vert \rho,i,j\rangle \right) \quad\text{where}\quad \mathsf{P}_{g\mathrm{H}}\left( \vert \rho,i,j\rangle \right) = \dfrac{d_{\rho}}{\vert \mathrm{G}\vert\, \vert\mathrm{H}\vert}\left\vert \sum_{h\in \mathrm{H}} \rho_{i,j} \left(g h \right)\right\vert ^{2}.\]
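For the nonabelian groups relevant to McEliece this distribution is hard to analyze, but it is instructive to evaluate $\mathsf{P}_{\mathrm{H}}$ directly in the simplest abelian case $\mathrm{G}=\mathbb{Z}_{N}$, where every irreducible representation is the one-dimensional character $\rho(g)=e^{2\pi i \rho g/N}$ and the $(i,j)$ indices drop out. This is an illustrative sanity check, not part of any attack.

```python
import cmath

def qfs_probabilities(N, r):
    """P_H(rho) for the hidden subgroup H = <r> of the cyclic group Z_N
    (r divides N).  Every irrep of an abelian group is 1-dimensional:
    rho(g) = exp(2*pi*i*rho*g/N), so d_rho = 1."""
    H = list(range(0, N, r))                 # the subgroup {0, r, 2r, ...}
    probs = []
    for rho in range(N):
        total = 0.0
        for g in range(N):                   # average P_{gH} over all g
            amp = sum(cmath.exp(2j * cmath.pi * rho * (g + h) / N) for h in H)
            total += abs(amp) ** 2 / (N * len(H))
        probs.append(total / N)
    return probs

# N = 12 with hidden subgroup <3> = {0,3,6,9}: only rho with rho*r = 0 (mod N),
# i.e. rho in {0, 4, 8}, has nonzero probability -- sampling reveals H.
probs = qfs_probabilities(12, 3)
print([round(x, 3) for x in probs])
```

The probability concentrates uniformly on the characters that annihilate $\mathrm{H}$, which is precisely why sampling reconstructs the hidden subgroup in the abelian case.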
Details about quantum Fourier sampling, and the status of the non-abelian hidden subgroup problem in general, can be found in ~\cite{Vazirani}. For the basics of quantum computation, and quantum Fourier sampling specifically from the hidden subgroup point of view, we refer to ~\cite{Lomont}. For a broader view of quantum computation the reader can consult standard texts such as ~\cite{NDavidMermin,Kaye}. The rough idea behind using QFS to solve a hidden subgroup problem is to try to reconstruct $\mathrm{H}$ from $\mathsf{P}_{\mathrm{H}}$. We make this idea precise in the next few sections. But before going there, let us briefly look at how the hidden subgroup problem can be used to break the McEliece cryptosystem or its variants. We first define the scrambler-permutation problem.
\section{McEliece-type Cryptosystems and HSP}
\begin{problem} {Scrambler - Permutation Problem: } Consider two $k \times n$ matrices $M$ and $M'$ over $\mathbb{F}_{{q}^{l}}$. It is known that they are related by equation $M^{\prime} = SMP$ for some unknown $S \in GL_{k}(\mathbb{F}_{q})$ and some unknown permutation matrix $P$ of size $n$. The problem is to find $S$ and $P$.
\end{problem}
Clearly, one way to attack the McEliece or Niederreiter cryptosystem is by solving the scrambler-permutation problem. For this attack we assume the attacker knows both $M$ and $M'$ and is trying to recover the remaining part of the private key. This attack is known as the scrambler-permutation attack. As far as quantum attacks go, this is the only known way of attacking a McEliece cryptosystem. The structural attack is exactly the same for a McEliece cryptosystem and a Niederreiter cryptosystem, except that instead of finding a scrambler-permutation pair from the generator matrix $G$ to $G^\prime$ one has to find a scrambler-permutation pair from the parity check matrix $H$ to $H^\prime$; the algebraic structure of the problem remains the same. So we present it in the general form of finding a scrambler-permutation pair from a $k \times n$ matrix $M$ to another $k \times n$ matrix $M^{\prime}$, keeping in mind that $M=G$ and $M^{\prime}= G^{\prime}$ in the case of McEliece, while $M=H$ and $M^{\prime}=H^{\prime}$ in the case of Niederreiter. To mean either a McEliece or a Niederreiter cryptosystem we use the broad term McEliece-type cryptosystems. In this attack, with $M$ and $M^\prime$ known, the task is to find $A$ and $B$ such that $AMB=M^{\prime}$, with $A$ and $B$ coming from the groups defined before. Notice that finding any $A^\prime$ and $B^\prime$ such that $A^\prime MB^\prime=M^\prime$ will also make the attack successful.
\begin{problem}[Hidden Shift Problem] Let $\mathrm{G}$ be a group. Let $f_{0}$ and $f_{1}$ be two functions from group $\mathrm{G}$ to a set $\mathrm{X}$. Given $f_{0}(g)=f_{1}(g_{0}g)$ for some unknown constant $g_{0}$ the task is to find a constant $g_{0} \in \mathrm{G}$. Note that there can be many $g_{0}$ that satisfy the above condition. Hidden shift problem asks us to find any one of those constants.
\end{problem}
Let $M'=A_{0}MP_{0}$. A McEliece-type cryptosystem will be broken if we find one possible pair $(A_{0},P_{0})$ from $M$ and $M'$. Consider two functions on the group $\mathrm{G}=GL_{k}(\mathbb{F}_{2}) \times S_{n}$ given by
\begin{equation}
f_{0}(A,P)=A^{-1}MP
\end{equation}
\begin{equation}
f_{1}(A,P)=A^{-1}M'P
\end{equation}
Then one can check that $f_{1}(A,P)=f_{0}((A_{0}^{-1},P_{0}).(A,P))$, that is, $(A_{0}^{-1},P_{0})$ is the shift between $f_{0}$ and $f_{1}$. Hence, anyone who can solve the hidden shift problem over $\mathrm{G}=GL_{k}(\mathbb{F}_{2}) \times S_{n}$ can break the McEliece-type cryptosystem.
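The shift relation can be checked numerically. The sketch below is our own illustration (not part of~\cite{Dinh}): it draws a random instance over $\mathbb{F}_{2}$, assumes \texttt{numpy} for the linear algebra, and verifies $f_{1}(A,P)=f_{0}(A_{0}^{-1}A,\, P_{0}P)$ for a random group element $(A,P)$.

```python
import numpy as np

rng = np.random.default_rng(1)

def gf2_inv(A):
    """Invert a square 0/1 matrix over F_2 by Gauss-Jordan elimination."""
    n = A.shape[0]
    aug = np.concatenate([A % 2, np.eye(n, dtype=int)], axis=1)
    for col in range(n):
        piv = next((r for r in range(col, n) if aug[r, col]), None)
        if piv is None:
            raise ValueError("matrix is singular over F_2")
        aug[[col, piv]] = aug[[piv, col]]
        for r in range(n):
            if r != col and aug[r, col]:
                aug[r] ^= aug[col]  # row reduction over F_2 is XOR
    return aug[:, n:]

def random_gl(k):
    """Rejection-sample an invertible k x k matrix over F_2."""
    while True:
        A = rng.integers(0, 2, (k, k))
        try:
            gf2_inv(A)
            return A
        except ValueError:
            pass

def perm_matrix(perm):
    return np.eye(len(perm), dtype=int)[perm]

k, n = 4, 8
M = rng.integers(0, 2, (k, n))
A0 = random_gl(k)                      # secret scrambler
P0 = perm_matrix(rng.permutation(n))   # secret permutation
Mp = (A0 @ M @ P0) % 2                 # the public matrix M'

f0 = lambda A, P: (gf2_inv(A) @ M  @ P) % 2
f1 = lambda A, P: (gf2_inv(A) @ Mp @ P) % 2

# f_1(A, P) = f_0((A_0^{-1}, P_0) . (A, P)) = f_0(A_0^{-1} A, P_0 P)
A, P = random_gl(k), perm_matrix(rng.permutation(n))
shifted = f0((gf2_inv(A0) @ A) % 2, (P0 @ P) % 2)
assert np.array_equal(f1(A, P), shifted)
```

The check is exact integer arithmetic modulo 2, so it passes for every choice of instance, which is precisely the statement that $(A_{0}^{-1},P_{0})$ shifts $f_{0}$ onto $f_{1}$.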
The general procedure to solve this hidden shift problem is to reduce it to a hidden subgroup problem. We can reduce the hidden shift problem with the functions $f_{0}$ and $f_{1}$ defined above on the group $\mathrm{G} =GL_{k} (\mathbb{F}_{2}) \times S_{n}$ to the hidden subgroup problem over $(\mathrm{G}\times \mathrm{G})\rtimes \mathbb{Z}_{2}$~\cite[Section 2.2]{Dinh}. The hidden subgroup in this case is
\begin{equation} \label{eqnK}
\mathrm{K}=(((\mathrm{H}_{0},s^{-1}\mathrm{H}_{0}s),0) \cup ((\mathrm{H}_{0}s,s^{-1}\mathrm{H}_{0}),1))
\end{equation} where $\mathrm{H}_{0}=\lbrace (A,P)\in GL_{k} (\mathbb{F}_{2}) \times S_{n} : A^{-1}MP=M \rbrace$ and $s$ is a shift from $f_{0}$ to $f_{1}$.
In short, the scrambler-permutation problem is one of the key ways to attack McEliece-type cryptosystems. This problem can be formulated as a hidden shift problem, which in turn can be reduced to a hidden subgroup problem. So we can attack McEliece-type cryptosystems by trying to solve a hidden subgroup problem over $(\mathrm{G}\times \mathrm{G})\rtimes \mathbb{Z}_{2}$ with
$\mathrm{G} =GL_{k} (\mathbb{F}_{2}) \times S_{n}$.
\section{Successful Quantum Fourier Sampling}
In the previous section we saw that solving the hidden subgroup problem is a standard way to attack a McEliece-type cryptosystem. An interesting question is: when is the hidden subgroup problem hard to solve? Answering it lets us ensure the security of a McEliece-type cryptosystem against known quantum attacks.
We briefly sketch the thinking behind the effectiveness of quantum Fourier sampling (QFS). The QFS algorithm in a general scenario, and its use for solving a hidden subgroup problem, are explained very well in~\cite{Vazirani}. Arguments particular to McEliece-type cryptosystems and the corresponding hidden subgroup problem are in~\cite{Dinh}. The standard model of QFS yields a probability distribution as a function of the hidden subgroup. The basic idea is that if two subgroups $\mathrm{H}_{1}$ and $\mathrm{H}_{2}$ yield probability distributions $\mathsf{P}_{\mathrm{H}_{1}}$ and $\mathsf{P}_{\mathrm{H}_{2}}$ that are \emph{very close} to each other, then QFS does not give us enough information to solve the hidden subgroup problem. The closeness of two probability distributions can be captured by a norm on the space of probability functions. To our knowledge, J. Kempe and A. Shalev~\cite{Kempe} were the first to introduce this beautiful idea; they used the total variation norm. If two probability distributions have total variation distance at most $\log^{-\omega(1)}\vert \mathrm{G} \vert$, we say that the two distributions are indistinguishable. Using this definition, they provided a necessary condition for distinguishing a subgroup of $S_{n}$ from the trivial subgroup $\langle e \rangle$.
Later Dinh, Moore and Russell~\cite{Dinh} extended this result, keeping the McEliece-type group structure under consideration. Their result can be viewed as an analysis of a hidden subgroup problem over the group $\mathrm{G}=(GL_{k}(\mathbb{F}_{2})\times S_{n})^{2} \rtimes \mathbb{Z}_{2}$, the group structure for McEliece-type cryptosystems. Instead of the total variation distance they use the $L_{1}$ distance. Another key difference between the two definitions is that Kempe and Shalev work with weak Fourier sampling while Dinh, Moore and Russell work with strong Fourier sampling. To account for all the conjugate subgroups, Dinh, Moore and Russell also take an expectation over the conjugates $g\mathrm{H}g^{-1}$, along with the expectation over irreducible complex representations $\rho$ of the group $\mathrm{G}$. They demonstrate a case in which the hidden subgroup $\mathrm{H}$ cannot be distinguished from either its conjugate subgroups $g\mathrm{H}g^{-1}$ or the trivial subgroup $\langle e \rangle$.
First note that weak Fourier sampling gives the same distribution for all conjugate subgroups, that is, $\mathsf{P}_{\mathrm{H}}$ is the same as $\mathsf{P}_{g\mathrm{H}g^{-1}}$. Hence weak Fourier sampling cannot differentiate a subgroup from its conjugates, and it suffices to look at strong Fourier sampling.
Dinh, Moore and Russell~\cite{Dinh}, inspired by J. Kempe and A. Shalev~\cite{Kempe}, define distinguishability of a subgroup $\mathrm{H}$ by strong Fourier sampling.
\begin{definition}[Distinguishability of a subgroup under strong QFS]
\[ \mathcal{D}_{\mathrm{H}} := \mathbf{E}_{\rho ,g} \Vert \mathsf{P}_{g\mathrm{H}g^{-1}} (\cdot\vert \rho ) - \mathsf{P}_{\langle e \rangle} (\cdot\vert \rho )\Vert_{1} \]
A subgroup $\mathrm{H}$ is called \textit{indistinguishable} by strong Fourier sampling if $\mathcal{D}_{\mathrm{H}} \leq \log^{- \omega(1)} \vert \mathrm{G} \vert$.
\end{definition}
The quantity $\mathcal{D}_{\mathrm{H}}$ is nothing but the expected $L_{1}$ distance between the probability distribution of a random conjugate subgroup and that of the trivial subgroup.
Note that if a subgroup $\mathrm{H}$ is indistinguishable according to this definition then, by Markov's inequality, for every constant $c$ we have $\Vert \mathsf{P}_{g\mathrm{H}g^{-1}} (\cdot\vert \rho ) - \mathsf{P}_{\langle e \rangle} (\cdot\vert \rho )\Vert _{t.v.} \leq \log^{-c} \vert \mathrm{G} \vert$ with high probability over $\rho$ and $g$; this is analogous to the definition given by J. Kempe and A. Shalev~\cite{Kempe} for indistinguishability of a subgroup by weak Fourier sampling.
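The two norms in play differ only by a factor of two: for probability distributions, $\Vert\cdot\Vert_{t.v.} = \tfrac{1}{2}\Vert\cdot\Vert_{1}$. A toy computation (our own, with made-up distributions on 8 outcomes) makes the relation concrete:

```python
import numpy as np

# Two example distributions on a set of 8 outcomes (made up for illustration).
P = np.array([0.25, 0.25, 0.125, 0.125, 0.0625, 0.0625, 0.0625, 0.0625])
Q = np.full(8, 1 / 8)

l1 = np.abs(P - Q).sum()  # the L_1 distance used by Dinh, Moore and Russell
tv = 0.5 * l1             # total variation distance, as used by Kempe and Shalev
print(l1, tv)             # here: 0.5 and 0.25
```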
Now we state a few definitions which will be used to establish the quantum security of McEliece-type cryptosystems.
\begin{definition} $Aut(M)=\lbrace P\in S_{n} : \text{there exists } A \in GL_{k}(\mathbb{F}_{q}) \text{ with } AMP=M \rbrace$.
\end{definition}
\begin{definition} The minimal degree of a group $G \leqslant S_{n}$ acting on a set of $n$ symbols is the minimum number of symbols moved by a non-identity element of $G$.
\end{definition}
\begin{definition} Consider a $k\times n$ matrix $M= \left[I_{k} | M^{*} \right]$. We define
$T_M=\lbrace \mathcal{P}_{1} \in S_{k} : \text{there exists } \mathcal{P}_{2}\in S_{n-k} \text{ with } \mathcal{P}_{1} M^{*} \mathcal{P}_{2} =M^{*} \rbrace$.
\end{definition}
\begin{theorem}[{\cite[Theorem 4]{Dinh}}]
Assume $q^{k^2} \leqslant n^{an}$ for some constant $0 < a < 1/4$. Let $m$ be the minimal degree of the automorphism group $Aut(M)$. Then, for sufficiently large $n$, the hidden subgroup $\mathrm{K}$ satisfies $\mathcal{D}_{\mathrm{K}} \leqslant O(\vert \mathrm{K} \vert ^ {2} e^{-\delta m} )$, where $\delta > 0$ is a constant.
\end{theorem}
In the above theorem, the subgroup $\mathrm{K}$ is the hidden subgroup for McEliece-type cryptosystems that we stated earlier in this chapter. Details of the proof can be found in~\cite{Dinh}. For a matrix $M$ of full column rank, $\vert \mathrm{K}\vert = 2 \vert {Aut(M)} \vert ^ {2}$~\cite{Dinh}. Hence if $\vert Aut(M) \vert ^{4} e^{-\delta m} \leq \log^{- \omega(1)} \vert \mathrm{G} \vert$ then $\mathrm{K}$ is indistinguishable, making the scrambler-permutation attack using QFS infeasible. Thus if a $k \times n$ matrix $M$ with minimal degree $m$ satisfies $\vert Aut(M) \vert ^{4} e^{-\delta m} \leq \log^{- \omega(1)} \vert \mathrm{G} \vert$, then one cannot find a scrambler-permutation pair, and the system remains secure against quantum Fourier sampling. Later we use this result for the parity check matrix $H$ to show that our Niederreiter cryptosystem is secure against this hidden subgroup attack.
\section{Quasi-Cyclic Codes}
\begin{definition}[Cyclic code] A code $\mathcal{C}$ is called a cyclic code if it is closed under cyclic right shifts, i.e., for all $c = \left(c_{0},c_{1},c_{2},\ldots,c_{n-1} \right) \in \mathcal{C}$ we have $c^\prime = \left( c_{n-1},c_{0},c_{1},\ldots,c_{n-2} \right) \in \mathcal{C}$.
\end{definition}
A quasi-cyclic code (QCC) is a simple generalization of a cyclic code in which any cyclic shift of a codeword by $m$ symbols gives another codeword. If $m=1$ the code is cyclic.
We are particularly interested in rate $\frac{m-1}{m}$ codes. More specifically, our system is based on rate $\frac{m-1}{m}$ codes over $\mathbb{F}_{2}$ in this chapter and over $\mathbb{F}_{{2}^{l}}$ in the next chapter. Such codes, along with quasi-cyclic codes of rate $\frac{1}{m}$, are studied in great detail in~\cite{Gulliver_thesis}.
\begin{definition}[Circulant matrix] A $p \times p$ matrix $C$ is called circulant if every row, except the first, is a circular right shift of the row above it.
\end{definition}
A typical example of a circulant matrix is
\begin{center}
$\left[ \begin{array}{cccc}
c_{0} & c_{1} & \cdots & c_{p-1} \\
c_{p-1} & c_{0} & \cdots & c_{p-2}\\
\vdots & \vdots & \ddots & \vdots\\
c_{1} & c_{2} & \cdots & c_{0}
\end{array} \right]$ \end{center}
Now we state a couple of relevant results about circulant matrices and cyclotomic polynomials. Each lemma is easy to check using basic ring theory and the Chinese remainder theorem for rings, so the proofs are skipped.
\begin{lemma} \label{cir1} Let $\mathcal{C}_{n}$ denote the class of all circulant matrices of size $n$ over $\mathbb{F}_{q}$. The class $\mathcal{C}_{n}$ forms a commutative ring under the usual matrix addition and multiplication.
\end{lemma}
\begin{lemma} \label{cir2} The ring $\mathcal{C}_{n}$ is isomorphic to the ring $\frac{\mathbb{F}_{q} \left[ x\right]}{\left(x^{n}-1\right)}.$ Furthermore, if $n=p$ is prime and $p \neq q$ then this isomorphism decomposes as \[ \mathcal{C}_{p} \xrightarrow[]{\sim} \dfrac{\mathbb{F}_{q} \left[ x\right]}{\left(x-1\right)} \times \dfrac{\mathbb{F}_{q} \left[ x\right]}{\Phi_{p}(x)}\] where $\Phi_{p} (x)$ is the $p^{th}$ cyclotomic polynomial. \end{lemma}
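The lemmas can be illustrated numerically: identifying a circulant with the polynomial whose coefficients form its first row, matrix multiplication becomes polynomial multiplication modulo $x^{n}-1$. The sketch below is our own (the indexing convention matches the displayed matrix, and we work over $\mathbb{F}_{2}$ with arbitrarily chosen first rows):

```python
import numpy as np

def circulant(first_row):
    """C[i, j] = first_row[(j - i) mod n]: each row is a right shift of the one above."""
    n = len(first_row)
    return np.array([[first_row[(j - i) % n] for j in range(n)] for i in range(n)])

def polymul_mod(a, b, n):
    """Coefficients of a(x) b(x) mod (x^n - 1) over F_2, i.e., cyclic convolution mod 2."""
    out = [0] * n
    for i, ai in enumerate(a):
        for j, bj in enumerate(b):
            out[(i + j) % n] ^= ai & bj
    return out

p = 7
a = [1, 1, 0, 1, 0, 0, 0]
b = [0, 1, 1, 0, 0, 1, 0]
Ca, Cb = circulant(a), circulant(b)

# Matrix product of circulants matches the polynomial product mod x^p - 1:
assert np.array_equal((Ca @ Cb) % 2, circulant(polymul_mod(a, b, p)))
# The ring is commutative:
assert np.array_equal((Ca @ Cb) % 2, (Cb @ Ca) % 2)
```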
A rate $\frac{m-1}{m}$ systematic quasi-cyclic code has a $p \times mp$ parity check matrix of the form
$H= \left[I_{p}|C_{1}|C_{2} |\cdots | C_{m-1}\right]$, where each $C_{i}$ is a circulant matrix of size $p$ and $I_{p}$ is the identity matrix of size $p$. For compactness we denote this as $H= \left[I| C\right]$, keeping in mind that $C$ is an array of circulants, and we write $C =$\textsc{Array}$\left[C_{1},C_{2},\ldots,C_{m-1}\right]$. Alternatively, these codes can be defined using generator matrices~\cite{Aylaj}. In this case the generator matrix takes the following form:
\[ G = \left[ \begin{array}{c|c}
\multirow{4}{*}{I} & {C^{\prime}}_{1}\\
& {C^{\prime}}_{2}\\
& \vdots \\
& {C^{\prime}}_{m-1}
\end{array} \right]\] Again for compactness we denote this as $G=\left[I|C\right]$, with the understanding that $C$ is now a stack of circulants, written $C=$\textsc{Stack}$\left[{C^{\prime}}_{1},{C^{\prime}}_{2},\ldots,{C^{\prime}}_{m-1}\right]$.
A recent work~\cite{Aylaj} presents a way to generate generator matrices for such codes over $\mathbb{F}_{2}$. Since these generator matrices are in systematic form, one can easily construct the parity check matrix from the generator matrix. Regarding codes over extension fields, \cite[chapter 6]{Gulliver_thesis} shows that quasi-cyclic codes over extension fields can be MDS (maximum distance separable) codes. As the name suggests, MDS codes achieve the largest possible minimum distance and hence have no low-weight codewords. This plays an important role in the security of the system against classical attacks such as Stern's attack and the Lee-Brickell attack. Though~\cite{Gulliver_thesis} presents examples of MDS codes with rate $\frac{1}{m}$, it does make a case for studying quasi-cyclic codes of rate $\frac{m-1}{m}$ with large minimum distance. For more details about quasi-cyclic codes the reader can refer to~\cite{Gulliver_thesis,Aylaj}.
\subsection{Decoding} Quasi-cyclic codes are well studied and well established, and depending on how one constructs them various decoders are available. We briefly mention some of them here. \cite[Appendix B]{Gulliver_thesis} presents some majority-logic (ML) decodable QCCs. Another new and interesting way of decoding quasi-cyclic codes, using a Gr\"{o}bner basis formulation, can be found in~\cite{Ling}.
\section{Our McEliece variant}
Now we are ready to describe our McEliece variant over quasi-cyclic codes of rate $\frac{m-1}{m}$. Our McEliece variant over $\mathbb{F}_{2}$ has a generator matrix $M = \left[I|C \right]$ with $C=$\textsc{Stack}$\left[C_{1},C_{2},\ldots,C_{m-1}\right]$ satisfying the following conditions:
\begin{description}
\item[$(I)$] Size of each circulant is a prime $p$, i.e., each circulant is a $p \times p$ matrix for some prime $p$.
\item[$(II)$] At least one of the $C_{i}$'s is invertible, i.e., there exists $i<m$ such that $C_{i} \in GL_{p}(\mathbb{F}_{2})$.
\item[$(III)$] Given any two columns $c_{i_{0}}$, $c_{i_{1}}$ of $C$, there is at most one index $j$ with $c_{i_{0}}[j]=c_{i_{1}}[j]=1$; that is, the two columns may simultaneously have a non-zero entry in at most one position. Some authors refer to this condition as having no more than one overlapping 1.
\item[$(IV)$] Let $t$ be the weight of a column of $C$ and $t_{r}$ the weight of a row of $C$; then $t \cdot t_{r} \leqslant p-1$.
\end{description}
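All four conditions are mechanically checkable. The helper below is our own illustration (the circulant is built from a first-column support set of our choosing, here $\lbrace 0,1,3\rbrace$ modulo $p=11$; it is not part of the scheme definition), with condition $(II)$ tested via full $\mathbb{F}_{2}$-rank of an individual circulant:

```python
import numpy as np
from math import isqrt

def gf2_rank(A):
    """Rank over F_2 by Gaussian elimination."""
    A = (A % 2).astype(int)
    rows, cols = A.shape
    rank = 0
    for c in range(cols):
        piv = next((r for r in range(rank, rows) if A[r, c]), None)
        if piv is None:
            continue
        A[[rank, piv]] = A[[piv, rank]]
        for r in range(rows):
            if r != rank and A[r, c]:
                A[r] ^= A[rank]
        rank += 1
    return rank

def check_conditions(circulants):
    """circulants: list of p x p 0/1 arrays C_1, ..., C_{m-1}."""
    p = circulants[0].shape[0]
    C = np.vstack(circulants)            # the Stack[...] matrix
    t = int(C.sum(axis=0).max())         # largest column weight
    tr = int(C.sum(axis=1).max())        # largest row weight
    overlaps = C.T @ C                   # entry (i, j): rows where columns i, j are both 1
    np.fill_diagonal(overlaps, 0)
    return {
        "I": p > 1 and all(p % d for d in range(2, isqrt(p) + 1)),  # p prime
        "II": any(gf2_rank(Ci) == p for Ci in circulants),          # some C_i invertible
        "III": int(overlaps.max()) <= 1,                            # <= 1 overlapping 1
        "IV": t * tr <= p - 1,
    }

p, support = 11, [0, 1, 3]
first_col = np.zeros(p, dtype=int)
first_col[support] = 1
C1 = np.array([[first_col[(i - j) % p] for j in range(p)] for i in range(p)])
print(check_conditions([C1]))
```

For this support every non-zero difference modulo 11 occurs at most once, so $(III)$ holds, and $t \cdot t_{r} = 9 \leqslant 10$ gives $(IV)$.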
Now we prove bounds on $Aut(M)$ and the minimal degree using a sequence of lemmas. First we point out a relation between the columns of the matrices involved.
Let $P\in Aut(M)$. Then for some $A$ we have $A \left[ I|C\right]P = \left[A|AC \right]P = \left[ I|C\right]$. Hence $\left[ A|AC\right]$ has the same set of columns as $\left[I|C \right]$, possibly in a different order.
\begin{remark} \label{r5.1} Every column of $A$ and of $AC$ is either a column of $C$ or a column of $I$. Also, no column of $A$ equals a column of $AC$; in fact, no two columns of $\left[A|AC\right]$ are identical. \end{remark}
Assume for the whole discussion that every column of $C$ has weight $t$.
\begin{lemma}\label{l5.1} Let $\lbrace v_{1},v_{2},\ldots , v_{t}\rbrace$ be a set of $t$ distinct columns, each coming either from $I$ or from $C$, such that at least two of them come from $C$. Then $\Sigma_{i=1}^{t} v_{i}$ has weight at least $2$.
\end{lemma}
\begin{proof}
Suppose $v_{1},v_{2}$ are from $C$. Then $v_{1}+v_{2}$ has weight at least $2t-2$ by condition (III) on $C$ (as at most one entry from each column can be cancelled to 0). Each of the remaining $t-2$ columns $v_{3},v_{4},\ldots,v_{t}$ can reduce the weight by at most 2, since it can reduce the weight contributed by $v_{1}$ by at most 1 and that of $v_{2}$ by at most 1. Thus the weight of $\Sigma_{i=1}^{t} v_{i}$ is at least $2t-2-2(t-2)=2$.
\end{proof}
\begin{lemma} \label{l5.2} Let $\lbrace v_{1},v_{2},\ldots , v_{t}\rbrace$ be a set of $t$ distinct columns, each coming from $I$ or from $C$, such that $\Sigma_{i=1}^{t} v_{i}$ has weight 1. Then the only possible combination is one column of weight $t$ and $t-1$ columns of weight 1. Moreover, each column of weight 1, as well as the resultant $\Sigma_{i=1}^{t} v_{i}$, must have its 1 in a position where the weight-$t$ column also has a 1.
\end{lemma}
\begin{proof}
Clearly, if all the $v_{i}$'s have weight 1 then $\Sigma_{i=1}^{t} v_{i}$ has weight $t$. If at least two of the $v_{i}$'s have weight $t$ then $\Sigma_{i=1}^{t} v_{i}$ cannot have weight 1, by Lemma~\ref{l5.1}. The condition on the position of the 1's is easy to check, so we skip it.
\end{proof}
\begin{remark} \label{r5.2} Since the columns of $C$ have weight $t$, any column of $AC$ is a sum of $t$ columns of $A$. Moreover, every column of $A$ contributes to $t_{r}$ such sums, where $t_{r}$ is the weight of a row of $C$.
\end{remark}
\begin{theorem}\label{t5.1} If $P\in Aut(M)$, with $C$ satisfying conditions (II) and (III), then for any corresponding $A$ the matrix $AC$ cannot have a column of weight 1.
\end{theorem}
\begin{proof}
Suppose there is a column, say $ac_{1}$, of $AC$ with weight 1. As the columns of $AC$ are obtained by adding $t$ columns of $A$, there exists a set $ \lbrace a_{1},a_{2},\ldots,a_{t} \rbrace$ of $t$ distinct columns of $A$ such that $\Sigma a_{i}=ac_{1}$.
By Lemma~\ref{l5.2}, one of these columns must have weight $t$ and the rest must have weight $1$. Let $a_{1}$ be the weight-$t$ column and $a_{2},a_{3},\ldots,a_{t}$ the weight-1 columns, each having its 1 in a position where $a_{1}$ has a 1.
Since $a_{1}$ must be involved in $t_{r}$ such sums (Remark~\ref{r5.2}), there exists another set $\lbrace a_{1},a'_{2},a'_{3},\ldots, a'_{t} \rbrace$ such that $a_{1}+a'_{2}+a'_{3}+\cdots+a'_{t}=ac_{2}$ is another column of $AC$. As $ac_{2}$ is a column of $AC$, it must have weight 1 or weight $t$.
Observe that $ac_{2}$ cannot have weight 1: if it did, then by Lemma~\ref{l5.2} its 1 would have to match a 1 of $a_{1}$. The only columns satisfying this condition are $a_{2},a_{3},\ldots,a_{t},ac_{1}$. But $ac_{2}$ cannot equal any of those, since by Remark~\ref{r5.1} a column of $AC$ cannot equal a column of $A$ or any other column of $AC$.
The only remaining possibility is that $ac_{2}$ has weight $t$. We analyze this possibility in two cases.
\textit{Case 1:} All of $a'_{2},a'_{3},\ldots,a'_{t}$ have weight $t$. This leads to a contradiction: every column in the sum has weight $t$, so all of them come from $C$, and we obtain one column of $C$ as a sum of other columns of $C$, contradicting condition (II).
\textit{Case 2:} There is a column of weight 1 in the sum, say $a'_{2}$. Then from
$a_{1}+a'_{2}+a'_{3}+\cdots+a'_{t}=ac_{2}$
we get $a_{1}+ac_{2}+a'_{3}+a'_{4}+\cdots+a'_{t}=a'_{2}$, contradicting Lemma~\ref{l5.1}, as $a_{1}$ and $ac_{2}$ have weight $t$ while $a'_{2}$ has weight 1.
Thus every possibility for $ac_{2}$ is ruled out, so the assumption that $AC$ has a column of weight 1 is wrong. Hence every column of $AC$ has weight $t$.
\end{proof}
\begin{corollary} \label{c5.1} If $C$ satisfies conditions (II) and (III) and $M=\left[I|C \right]$, then every $P\in Aut(M)$ is a direct sum of a permutation matrix of size $(m-1)p$ and a permutation matrix of size $p$; that is, $P$ can be expressed as $P=P_{1} \oplus P_{2}$ where $P_{1}$ is a $(m-1)p \times (m-1)p$ permutation matrix and $P_{2}$ is a $p \times p$ permutation matrix.
\end{corollary}
\begin{proof}
Since $AC$ has no column of weight 1, $A$ must be permuted to the identity matrix and $AC$ must be permuted to $C$. Thus $P$ permutes the first $(m-1)p$ columns among themselves and the last $p$ columns among themselves. Therefore $P=P_{1} \oplus P_{2}$ with the corresponding sizes. Moreover, $A={P_{1}}^{-1}$ and $ACP_{2}=C$.
\end{proof}
\begin{lemma} \label{l5.3} If $M$ satisfies conditions (II) and (III) then $\vert Aut(M) \vert$ equals the number of pairs $(\mathcal{P}_{1},\mathcal{P}_{2})$ satisfying $\mathcal{P}_{1}C\mathcal{P}_{2}=C$.
\end{lemma}
\begin{proof}
From Corollary~\ref{c5.1}, $A=P_{1}^{-1}$ and $ACP_{2}=C$. Therefore $P_{1}^{-1}CP_{2}=C$, which satisfies the required equation with $\mathcal{P}_{1}=P_{1}^{-1}$ and $\mathcal{P}_{2}=P_{2}$. The converse direction is shown similarly. Thus $Aut(M)$ is in one-to-one correspondence with the pairs ($\mathcal{P}_{1}, \mathcal{P}_{2}$) solving $\mathcal{P}_{1}C\mathcal{P}_{2}=C$.
\end{proof}
\begin{corollary} \label{c5.2} If $M$ satisfies conditions (II) and (III) then $\vert Aut(M) \vert$ equals the number of $\mathcal{P}_{2}$ for which some $\mathcal{P}_{1}$ satisfies $\mathcal{P}_{1}C\mathcal{P}_{2}=C$.
\end{corollary}
\begin{proof}
Notice that since no two rows of $C$ are identical, no two rows of $C\mathcal{P}_{2}$ are identical either, as $\mathcal{P}_{2}$ just permutes columns. Hence there is at most one way to permute the rows of $C\mathcal{P}_{2}$ to get back $C$, i.e., for every $\mathcal{P}_{2}$ there is at most one $\mathcal{P}_{1}$. Thus whenever $\mathcal{P}_{2}$ admits a solution, the corresponding $\mathcal{P}_{1}$ is unique.
\end{proof}
So by Corollary~\ref{c5.2} the problem of finding the size of the automorphism group reduces to counting the $\mathcal{P}_{2}$ solutions to \begin{equation} \label{eq1}
\mathcal{P}_{1}C\mathcal{P}_{2}=C
\end{equation}
Here we state a theorem of Burnside~\cite[Theorem 3.5B]{Dixon}.
\begin{theorem}[Burnside] \label{burni}
Let $\mathrm{G}$ be a subgroup of $Sym(\mathbb{F}_{p})$ containing the $p$-cycle $\mu : \xi \mapsto \xi+1$. Then $\mathrm{G}$ is either 2-transitive or $\mathrm{G} \leq AGL_{1}(\mathbb{F}_{p})$, where $AGL_{1}(\mathbb{F}_{p})$ is the one-dimensional affine group over $\mathbb{F}_{p}$.
\end{theorem}
We use this theorem to prove our main result.
\begin{theorem}
\label{main_thm}
If $M$ satisfies conditions (I), (II), (III) and (IV) then $\vert Aut(M)\vert \leq p (p-1)$.
\end{theorem}
\begin{proof}Let $\mathrm{G}$ be the set of $\mathcal{P}_{2}$ that satisfy equation \eqref{eq1}. It is easy to check that $\mathrm{G}$ forms a group. We can also check that $\mathcal{P}_{2}=\mu \in \mathrm{G}$, as it satisfies equation \eqref{eq1} with $\mathcal{P}_{1}$ block diagonal with $\mu^{-1}$ in every block. So by Burnside's theorem, if $\mathrm{G}$ is not doubly transitive then $\mathrm{G}$ is a subgroup of $AGL_{1}(\mathbb{F}_{p})$ and its size is less than or equal to $p(p-1)$. That $\mathrm{G}$ is not doubly transitive follows from condition $(IV)$, as the next lemma shows.
\end{proof}
\begin{lemma} If $t \cdot t_{r} \leqslant p-1$ then $\mathrm{G}$ is not doubly transitive.
\end{lemma}
\begin{proof}
Let $\mathcal{S}$ be the set of positions $x$ such that some row $r$ of $C$ has non-zero entries at positions $0$ and $x$, that is, $r[0]=r[x]=1$. Clearly $\vert \mathcal{S}\vert \leqslant t \cdot t_{r}$. Hence $\vert \mathcal{S}\vert < p$ and there exists $y\in \lbrace 0,1,2,\ldots, p-1 \rbrace$ with $y\notin \mathcal{S}$. Denote the first row of $C$ by $r_{0}$, and let $x_{0},y_{0}$ be distinct positions with $r_{0}[x_{0}] = r_{0} [y_{0}] = 1$. Now no element $\mathcal{P}_{2} \in \mathrm{G}$ can send $(x_{0},y_{0})$ to $(0,y)$, because $C\mathcal{P}_{2}$ must have the same set of rows as $C$: after the action of such a $\mathcal{P}_{2}$ the first row would have 1's at positions $0$ and $y$, but no row of $C$ can have 1's at positions $0$ and $y$, the reason being $y \notin \mathcal{S}$. Thus $\mathrm{G}$ is not 2-transitive.
\end{proof}
\begin{lemma} The minimal degree of $Aut(M)$ is at least $p-1$.
\end{lemma}
\begin{proof}
As $P = P_{1} \oplus P_{2}$, the number of points moved by $P$ is at least the number of points moved by $P_{2}$. Now assume $P_{2}$ fixes at least two points. Then $P_{2} = I_{p}$, as $P_{2} \in AGL_{1}(\mathbb{F}_{p})$ and a non-identity affine map fixes at most one point. And since for every $P_{2}$ there is at most one corresponding $P_{1}$, we get $P_{1} = I_{(m-1)p}$, making $P= I_{mp}$. Thus for a non-identity $P$ the corresponding $P_{2}$ fixes at most one point, and hence every non-identity $P$ moves at least $p-1$ points.
\end{proof}
\begin{algorithm}
\caption{An algorithm that generates the required matrix $C$}
\begin{algorithmic}[1]
\Procedure{$Generate\_C$}{$p,m,t_{r}$}
\State $Available\_set=\left[ 0,1,\ldots, p-1 \right]$
\State $\mathcal{A}= \left[ \; \right]$
\State \textbf{repeat $m$ times}
$\Big{\lbrace}$ \State $Current\_set = \left[ \; \right]$
\While{size of $Current\_set$ $< t_{r}$}
\State randomly choose $\alpha_{1} \in Available\_set$
\State $Current\_set$.append[$\alpha_{1}$]
\State remove $\alpha_{1}$ from $Available\_set$
\For { $\left( \alpha_{2}, \alpha_{3}\right) \in Current\_set \cup \mathcal{A}$ }
\State remove $\alpha_{2}+\alpha_{3}-\alpha_{1}\; modulo\; p$ from $Available\_set$
\EndFor
\EndWhile
\State $\mathcal{A}.append\left(Current\_set \right) \Big{\rbrace}$
\State construct circulant $C_{i}$ having $\mathcal{A}[i]$ as its first column
\State \textbf{return} $C = $ \textsc{Stack} $\left[ C_{0},C_{1},\ldots,C_{m-1}\right]$
\EndProcedure
\end{algorithmic}
\end{algorithm}
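A Python transcription of the procedure is sketched below (ours; the seed, parameter names, and example values are our own choices). One detail: besides $\alpha_{2}+\alpha_{3}-\alpha_{1}$ we also discard the symmetric combination $\alpha_{1}+\alpha_{2}-\alpha_{3}$ at each step, so that the freshly chosen position can never later appear with a plus sign in a forbidden quadruple; the assertion at the end checks condition $(III)$ on the result.

```python
import random
from collections import Counter

def generate_supports(p, num_circulants, t_r, seed=0):
    """Choose first-column supports so that no relation
    alpha_4 = alpha_1 + alpha_3 - alpha_2 (mod p) holds among chosen positions."""
    rng = random.Random(seed)
    available = set(range(p))
    supports = []
    for _ in range(num_circulants):
        current = []
        while len(current) < t_r:
            if not available:
                raise ValueError("ran out of positions; decrease t_r or increase p")
            a1 = rng.choice(sorted(available))
            current.append(a1)
            available.discard(a1)
            chosen = current + [a for s in supports for a in s]
            for a2 in chosen:
                for a3 in chosen:
                    available.discard((a2 + a3 - a1) % p)  # new position in the minus slot
                    available.discard((a1 + a2 - a3) % p)  # new position in a plus slot
        supports.append(current)
    return supports

# Example run: p = 499, three circulants of row weight t_r = 3.
p = 499
supports = generate_supports(p, num_circulants=3, t_r=3)

# With the circulant-shift convention, condition (III) for the stacked C is
# equivalent to: every non-zero difference occurs at most once among ordered
# pairs drawn from the same support.
diffs = Counter((a - b) % p for s in supports for a in s for b in s if a != b)
assert all(v <= 1 for v in diffs.values())
print(supports)
```

Since at most $9$ positions are ever chosen here, the forbidden values number at most $45 \cdot 9 = 405 < 499$, so the sampler never runs out of positions for these parameters.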
We now prove the correctness of the algorithm: we claim that the output of the algorithm is a $C$ satisfying condition $(III)$. Let $\mathcal{A} = \lbrace \alpha_{j}\rbrace$ denote the set of positions such that $c_{0} [\alpha_{j}]=1$, where $c_{0}$ is the first column of $C$.
\begin{lemma} Let $\mathcal{A}^{\prime}= \lbrace \alpha^{\prime}_{j}\rbrace$ denote the set of positions such that $c_{N}[\alpha^{\prime}_{j}] = 1$, where $c_{N}$ denotes the $(N+1)$-th column of $C$. Then $\lbrace \alpha_{j} + N \bmod p \;|\; \alpha_{j} \in \mathcal{A} \rbrace = \lbrace \alpha^{\prime}_{j} \;|\; \alpha^{\prime}_{j} \in \mathcal{A}^{\prime} \rbrace$.
\end{lemma}
\begin{lemma} If condition $(III)$ is not satisfied by $C$ then there exist $\alpha_{1}, \alpha_{2}, \alpha_{3}, \alpha_{4}$ with $\alpha_{4} = \alpha_{1}+\alpha_{3}- \alpha_{2} \bmod p$ such that $\alpha_{1}, \alpha_{2}$ belong to the same circulant $C_{i}$ and $\alpha_{3}, \alpha_{4}$ belong to the same circulant $C_{j}$\footnote{$C_{i}$ can be the same as $C_{j}$, i.e., it is possible that $i = j$, in which case both overlaps happen in the same circulant}.
\end{lemma}
\begin{proof}
Without loss of generality we can assume that column $c_{0}$ and column $c_{N}$ have more than one overlap. So there exists $\alpha_{1}$ with $c_{0}\left[ \alpha_{1} \right] = c_{N} \left[ \alpha_{1} \right] = 1$. From the above lemma, $\alpha_{1} = \alpha_{2} + N \bmod p$ for some $\alpha_{2} \in \mathcal{A}$. Similarly, the second overlap gives $\alpha_{3} = \alpha_{4} + N \bmod p$. Subtracting the two equations gives $\alpha_{4} = \alpha_{2}+\alpha_{3}- \alpha_{1} \bmod p$, which after interchanging the labels $\alpha_{1}$ and $\alpha_{2}$ (both lie in the same circulant) is the claimed relation.
\end{proof}
Our algorithm iteratively constructs the circulants $C_{0},C_{1},\ldots,C_{m-1}$ so that $\alpha_{4} \neq \alpha_{1}+\alpha_{3}- \alpha_{2} \bmod p$ for any such choice of positions. In this way we ensure that the $C$ generated as the output of the algorithm satisfies condition $(III)$.
We end this chapter by showing how to achieve the other conditions. Condition $(I)$ is trivial: we just need to choose a prime $p$. We now move to condition $(II)$. By Lemma~\ref{cir2} we know that $\mathcal{C}_{p}$ is isomorphic to a direct product of two rings. The first factor $\frac{\mathbb{F}_{q}\left[ x
\right]}{(x-1)}$ is a field, as $(x-1)$ is irreducible. So to ensure that the image in this ring is non-zero we just have to ensure that the weight of the circulant is odd; thus our first requirement is that $t_{r}$ be odd. For the second factor of the product we use the following lemma.
\begin{lemma} If $2$ is a primitive root modulo $p$ then $\Phi_{p}(x)$ is irreducible in $\mathbb{F}_{2} \left[x \right]$.
\end{lemma}
A proof can be found in any standard number theory text.
So to make the second factor a field we need to choose $p$ such that $2$ is a primitive root modulo $p$. One way to do this is to choose $p=4q+1$ with both $p$ and $q$ prime. For condition $(IV)$, notice that $t = m\cdot t_{r}$, which gives $t \cdot t_{r} = {t_{r}}^2 \cdot m$. So we can satisfy condition $(IV)$ by choosing $t_{r} \leqslant \sqrt{\frac{p}{m}}$.
In conclusion, we can generate a matrix $C$ satisfying the required conditions by running the algorithm for a prime $p$ such that $2$ is primitive mod $p$ and choosing an odd $t_{r}$ less than $\sqrt{\frac{p}{m}}$; thus we can construct rate $\frac{m-1}{m}$ codes with small automorphism group and minimal degree at least $p-1$.
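These parameter choices are easy to automate. The helper below (our own sketch; the starting point 50 and $m=4$ are arbitrary) finds the smallest prime of the form $p = 4q+1$ with $q$ prime above a starting point, confirms that $2$ generates $(\mathbb{Z}/p\mathbb{Z})^{\times}$, and derives the largest admissible odd $t_{r}$:

```python
from math import isqrt

def is_prime(n):
    return n > 1 and all(n % d for d in range(2, isqrt(n) + 1))

def order_of_2_mod(p):
    """Multiplicative order of 2 modulo an odd prime p."""
    k, x = 1, 2 % p
    while x != 1:
        x = (2 * x) % p
        k += 1
    return k

def pick_p(start):
    """Smallest p = 4q + 1 >= start with p and q both prime."""
    q = max(2, (start - 1) // 4)
    while True:
        p = 4 * q + 1
        if p >= start and is_prime(q) and is_prime(p):
            return p
        q += 1

p = pick_p(50)                     # -> 53 (q = 13)
assert order_of_2_mod(p) == p - 1  # 2 is a primitive root mod p

m = 4
t_r = isqrt(p // m)                # largest t_r <= sqrt(p / m)
if t_r % 2 == 0:
    t_r -= 1                       # t_r must be odd
print(p, t_r)
```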
\section{Introduction}
In chapter 2 we mentioned a way to ensure the security of a cryptosystem against quantum Fourier sampling. But before going there, we describe our variant and briefly go over its classical security. Our variant is based on quasi-cyclic codes over $\mathbb{F}_{2^{l}}$ of rate $\frac{m-1}{m}$. The relevant definitions about quasi-cyclic codes are in section 3.1, and more details are in~\cite{Gulliver_thesis}.
\subsection{Description of our Niederreiter cryptosystem} \paragraph{Description of the parity check matrix used for the proposed Niederreiter cryptosystem}\label{dd}
Recall that we are dealing with rate $\frac{m-1}{m}$ quasi-cyclic codes over $\mathbb{F}_{2^l}$. For the cryptosystem to be quantum-secure, the parity check matrix $\mathcal{H}$ for the $[n=mp,\,k=(m-1)p,\,d]$, rate $\frac{m-1}{m}$ quasi-cyclic code should satisfy the following conditions:
\begin{itemize}
\item[I] The integers $m,p$ are such that $p$ is a prime and $m$ is bounded above by a polynomial in $p$.
\item[II] The matrix $\mathcal{H}$ is of size $p\times mp$ over $\mathbb{F}_{2^l}$.
\item[III] The matrix $\mathcal{H}$ is of the form $\left[\,C_0=I\,|\,C_1\,|\,C_2\,|\ldots\,|\,C_{m-1}\,\right]$, where each $C_i$ is a circulant matrix of size $p$. Each $C_i$ for $i>0$ should contain an element from a proper extension of $\mathbb{F}_2$.
Furthermore, we denote the matrix $\mathcal{H}$ as $\left[\,I\,|\,C\,\right]$ where $C$ is the concatenation of the circulant matrices $C_i$, $i>0$.
\item[IV] We define $T_{\mathcal{H}}=\left\{P_1\in \mathrm{S}_p \;|\; \exists P_2\in \mathrm{S}_{p(m-1)} \;\text{such that}\; P_1CP_2=C\right\}$, where $\mathrm{S}_n$ is the symmetric group acting on $n$ letters. It is easy to see that $T_\mathcal{H}$ is a permutation group acting on $p$ letters. The condition we impose on $\mathcal{H}$ is that $T_\mathcal{H}$ is not 2-transitive.
\item[V] No two columns of $C$ are identical.
\end{itemize}
\section{Classical Attacks}
In this section we briefly go over the generic classical attacks against McEliece and Niederreiter cryptosystems. We also mention some attacks exploiting the circulant structures in the keys. Interestingly, Li \etal~\cite[Section III]{Li} proved that the McEliece and Niederreiter cryptosystems are equivalent in terms of classical security. The proof follows from the fact that the encryption equation of one can be reduced to that of the other. This implies the equivalence of the security of both cryptosystems against attacks that try to extract the plaintext from a ciphertext.
The most generic attacks on algebraic-code-based cryptosystems are \emph{information set decoding} (ISD) attacks. The two most popular implementations of ISD attacks are by Lee and Brickell~\cite{Lee} and Stern~\cite{Stern}. As mentioned by Baldi \etal~\cite{Baldi}, ISD attacks are the best known attacks with the least work factor as far as classical cryptanalysis is concerned. Hence these work factors are taken as the security levels of McEliece and Niederreiter cryptosystems.
The basic idea behind one of the attacks was suggested by McEliece himself. Lee and Brickell~\cite{Lee} improved the attack and added an important verification step in which the attacker confirms that the recovered message is the correct one. In this case we are dealing with a McEliece cryptosystem over an $[n,k]$ linear code. The strategy is based on repeatedly selecting $k$ bits at random from an $n$-bit ciphertext in the hope that none of the selected bits is part of the error. Similar attacks can also be implemented against Niederreiter cryptosystems. Lee and Brickell also provided a closed-form expression for the complexity of the attack. As our system is based on an $\left( n=mp,\, k=(m-1)p,\, d_{min} = 2\mathcal{E}+1\right)$ code, the expression for the minimal work factor (with $\alpha = \beta = 1$ as taken by Lee and Brickell) takes the following form $$
W_{min}= W_{2}= T_{2} \left( (m-1)^{3} p ^{3} + (m-1) p N_{2} \right)$$ where $T_{2}= \dfrac{1}{Q_{0} + Q_{1} + Q_{2}}$, $Q_{i}= \dbinom{\mathcal{E}}{i} \dbinom{n-\mathcal{E}}{k-i} \Big/ \dbinom{n}{k}$ and $N_{2}= 1 + k + \dbinom{k}{2}$.
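The closed form is easy to evaluate numerically. The snippet below is ours, and the parameter values ($m=4$, $p=499$, $\mathcal{E}=30$) are illustrative placeholders, not the proposed system parameters:

```python
from math import comb, log2

def lee_brickell_w2(m, p, E):
    """W_2 = T_2 ((m-1)^3 p^3 + (m-1) p N_2) for an [n = mp, k = (m-1)p] code
    correcting E errors, with T_2 = 1 / (Q_0 + Q_1 + Q_2)."""
    n, k = m * p, (m - 1) * p
    # Q_0 + Q_1 + Q_2, evaluated as one exact-integer ratio before converting to float
    Q = sum(comb(E, i) * comb(n - E, k - i) for i in range(3)) / comb(n, k)
    N2 = 1 + k + comb(k, 2)
    return ((m - 1) ** 3 * p ** 3 + (m - 1) * p * N2) / Q

w2 = lee_brickell_w2(4, 499, 30)
print(f"log2 work factor: {log2(w2):.1f} bits")
```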
In Table~\ref{table} we present numerical values of the work factor for different parameter choices. Recently, Aylaj \etal~\cite{Aylaj} developed an algorithm to construct stack-circulant codes with high error correcting capacity, which makes the proposed Niederreiter cryptosystem much more promising.
Other ISD attacks are based on a strategy given by Stern. To recover the intentional error vector $e$ in a McEliece cryptosystem, such strategies use an extension code $C^{\prime \prime}$ generated by the generator matrix $M^{\prime \prime}=\left[ \begin{array}{c}
M^\prime \\
x
\end{array} \right]$. Bernstein \etal~\cite{Bernstein} later improved this attack. The probability of success and the work factor for Stern's attack are described in~\cite{Hirotomo}. In Table~\ref{table} we also provide the probability of success for the parameters $l=16$ and $A_{w} \approx n-k$. Both parameters can be optimized further to obtain the least work factor, but not much variation is seen as we change either of them. With such low probabilities, it is clear that the work factor for Stern's attack is worse than that of the Lee--Brickell attack. Even considering the improvements suggested by Bernstein \etal~\cite{Bernstein}, which speed up Stern's attack by a factor of up to 12, the Lee--Brickell attack~\cite{Lee} still outperforms it. Hence the security of the system against the Lee--Brickell attack should be taken as the security of the system, and key sizes should be devised accordingly.
Another attack worth mentioning for quasi-cyclic codes is the attack on the dual code. This attack works only if the dual code has very low-weight codewords, which is typically encountered only when sparse parity check matrices are involved, for example in McEliece with QC-LDPC codes~\cite{Baldi}. Such attacks can easily be stopped by choosing codes that do not have low-weight codewords, which can be achieved using the constructions of Aylaj \etal~\cite{Aylaj}.
\section{Quantum security}
After this discussion on classical security, we now move towards the quantum security of the proposed McEliece and Niederreiter cryptosystems, which is one of the major goals of this chapter.
Before moving towards quantum security, we recall definitions and a theorem by Dinh, Moore and Russell.
\begin{definition} $\mathrm{Aut}(M)=\lbrace P\in S_{n}$ such that there exists $A \in \mathrm{GL}_{k}(\mathbb{F}_{q})$ with $AMP=M \rbrace$
\end{definition}
\begin{definition} The minimal degree of a group $G \leqslant S_{n}$ acting on a set of $n$ symbols is defined to be the minimum number of elements moved by a non-identity element of the group $G$.
\end{definition}
\begin{definition} Given a $k\times n$ matrix $M= \left[\,I_{k} \,|\, M^{*}\,\right]$, we define $T_M$ as
$T_M=\lbrace \mathcal{P}_{1} \in S_{k}$ such that there exists $\mathcal{P}_{2}\in S_{n-k}$ with $\mathcal{P}_{1} M^{*} \mathcal{P}_{2} =M^{*} \rbrace$
\end{definition}
\begin{theorem} \label{thm1}
~\cite[Theorem 4]{Dinh}: Assume $q^{k^2} \leqslant n^{an}$ for some constant $0 < a < 1/4$. Let $m$ be the minimal degree of the automorphism group $\mathrm{Aut}(M)$. Then, for sufficiently large $n$, the distinguishability of the subgroup $\mathrm{K}$ satisfies $D_{K} \leqslant O(\vert K \vert ^ 2 e^{-\delta m})$, where $\delta > 0$ is a constant.
\end{theorem}
The idea behind the security of the system is that when the distinguishability $D_{K}$ of the hidden subgroup $\mathrm{K}$ becomes less than $\log^{-\omega(1)} \vert \mathrm{G}\vert$, quantum Fourier sampling cannot reveal the hidden subgroup, and hence an attacker cannot find a scrambler--permutation pair.
\subsection{Proof of indistinguishability of the hidden subgroup}
We prove indistinguishability in a sequence of lemmas.
\begin{lemma} \label{ov1} Let $P \in \mathrm{Aut}(\mathcal{H})$. Then $P= P_{1} \oplus P_{2}$, where $P_1\oplus P_2$ denotes the block diagonal matrix of size $mp\times mp$ with top block $P_{1}$ of size $p$ and bottom block $P_{2}$ of size $(m-1)p$.
\end{lemma}
\begin{proof} Let $P \in \mathrm{Aut}(\mathcal{H})$. From the definition of the automorphism group, there is an $A$ such that $A\mathcal{H}P=\mathcal{H}$.
This implies that
\[A \left[\,I\,|\,C\,\right] P = \left[\,A\,|\,AC \,\right]P= \left[\,I\,|\,C\, \right].\]
As right multiplication by a permutation matrix permutes columns, the above equality shows that $\left[\,A\,|\,AC\,\right]$ has the same columns as $\left[\,I\,|\,C \,\right]$, possibly in a different order. Now, since every column of $C$ contains an entry from a proper extension of $\mathbb{F}_q$, no column of $A$ can be a column of $C$. This forces $A$ to have the same columns as $I$ and $AC$ to have the same columns as $C$. Hence $P$ permutes the first $p$ columns among themselves and the last $(m-1)p$ columns among themselves. Therefore, every $P \in \mathrm{Aut}(\mathcal{H})$ can be written as $P_{1} \oplus P_{2}$, where $P_{1}$ acts on the columns of $I$ and $P_{2}$ acts on the columns of $C$.
\end{proof}
\noindent An obvious corollary follows:
\begin{corollary}\label{ov2} If $P \in \mathrm{Aut}(\mathcal{H})$ then the corresponding $A \in S_{k}$.
\end{corollary}
\noindent The next lemma is central to quantum security. It gives us a way to move from $\mathcal{H}$ to $C$ by noting that, for $P=P_1\oplus P_2 \in \mathrm{Aut}(\mathcal{H})$, the block $P_1^{-1}$ is actually a member of $T_{\mathcal{H}}$.
\begin{lemma} \label{ov3} The cardinality of $\mathrm{Aut}(\mathcal{H})$ is the cardinality of the set $\left\{(P_{1},P_{2}) : P_{1}C P_{2}=C\right\}$, where $\mathcal{H}=\left[\,I\,|\,C\,\right]$ as defined earlier.
\end{lemma}
\begin{proof}
The proof follows from the fact that if $P$ belongs to $\mathrm{Aut}(\mathcal{H})$, then $P=P_1\oplus P_2$. Then $A\left[\,I\,|\,C\,\right]P=\left[\,I\,|\,C\,\right]$ translates into $A\left[\,I\,|\,C\,\right](P_1\oplus P_2)=\left[\,I\,|\,C\,\right]$. Keeping in mind the block diagonal nature of $P$, it follows that $\left[\,AP_1\,|\,ACP_2\,\right]=\left[\,I\,|\,C\,\right]$. Then $A=P_1^{-1}$ and $P_1^{-1}CP_2=C$. This proves the lemma.
\end{proof}
\noindent The next lemma proves that for each $P_1$ there is at most one $P_2$.
\begin{lemma}\label{ov4} The cardinality of the set $\lbrace\left(P_{1},P_{2} \right) : P_{1}C {P}_{2}=C\rbrace$ equals $\vert T_{\mathcal{H}} \vert$.
\end{lemma}
\begin{proof} Recall that $T_{\mathcal{H}} = \lbrace P_{1} : P_{1}CP_{2}=C \text{ for some } P_{2}\rbrace$. So it suffices to show that for every $P_{1}$ there is at most one $P_{2}$. Since no two columns of $C$ are identical, no two columns of $P_{1}C$ are identical. Hence, there is at most one way to re-order them to get back $C$. Thus for every $P_{1}$ there is at most one $P_{2}$.
\end{proof}
\begin{theorem}[Burnside~{\cite[Theorem 3.5B]{Dixon}}] \label{burni}
Let $G$ be a subgroup of $\mathrm{Sym}(\mathbb{F}_{p})$ containing the $p$-cycle $\mu : \xi \mapsto \xi+1$. Then $G$ is either 2-transitive or $G \leq \mathrm{AGL}_{1}(\mathbb{F}_{p})$, where $\mathrm{AGL}_{1}(\mathbb{F}_{p})$ is the affine group over $\mathbb{F}_{p}$.
\end{theorem}
\noindent We prove a theorem on the size of the automorphism group of $\mathcal{H}$.
\begin{theorem}
If $\mathcal{H}$ satisfies conditions I, II and III then $\vert \mathrm{Aut}(\mathcal{H}) \vert \leqslant p(p-1)$.
\end{theorem}
\begin{proof}
From Lemma~\ref{ov3} and Lemma~\ref{ov4}, the group $\mathrm{Aut}(\mathcal{H})$ has the same size as $T_{\mathcal{H}}$. It is now easy to check that the circulant matrix $\mu$ of size $p$ with first row $[0,1,0,\ldots,0]$ belongs to $T_{\mathcal{H}}$; the corresponding $P_{2}$ is a block diagonal matrix of size $(m-1)p$ with blocks of size $p$, each equal to $\mu^{-1}$. Now notice that the circulant matrix $\mu$ corresponds to the $p$-cycle $\xi\mapsto\xi+1$.
By our condition III, $T_{\mathcal{H}}$ is not 2-transitive. Hence, by Burnside's theorem, $T_{\mathcal{H}} \leq \mathrm{AGL}_{1}(\mathbb{F}_{p})$. Thus $\vert \mathrm{Aut}(\mathcal{H}) \vert \leqslant \vert \mathrm{AGL}_{1}(\mathbb{F}_{p}) \vert = p (p-1)$.
\end{proof}
\noindent After this bound on the size of the automorphism group, we move towards the minimal degree of the automorphism group.
\begin{lemma} The minimal degree of $\mathrm{Aut}(\mathcal{H})$ is bounded below by $p-1$.
\end{lemma}
\begin{proof}
Notice that any $P\in\mathrm{Aut}(\mathcal{H})$ decomposes as $P=P_1\oplus P_2$. By the twist from $P\in\mathrm{Aut}(\mathcal{H})$ to $P_1^{-1}\in T_{\mathcal{H}}$, it is easy to see that $P_{1} \in \mathrm{AGL}_{1}(\mathbb{F}_{p})$. Then $P_{1}(x)=ax+b \pmod{p}$ for some $a,b\in\mathbb{F}_p$ with $a\neq 0$. If $P_{1}$ fixes two distinct points, then $a=1$ and $b=0$ is the only possible solution. This corresponds to the identity element, and thus a non-identity element cannot fix more than one point. So the minimal degree of $\mathrm{Aut}(\mathcal{H})$ is bounded below by $p-1$.
\end{proof}
\noindent We now prove the main theorem of this chapter.
\begin{theorem}\label{mainthm}
Let $p$ be a prime and $m$ a positive integer bounded above by a polynomial in $p$ such that $p\leq\frac{1}{4}m\left(\log{m}+\log{p}\right)$. Then the subgroup $K$ (Equation~\ref{eqnK}) defined above is indistinguishable.
\end{theorem}
\begin{proof}
We will use Theorem~\ref{thm1} in this proof. First note that the minimal degree is bounded below by $p-1$. Now, it is well known that $|K|=2|H_0|^2$ and $|H_0|=|\mathrm{Aut}(\mathcal{H})|\times|\mathrm{Fix}(\mathcal{H})|$. We have shown that $|\mathrm{Aut}(\mathcal{H})|\leq p(p-1)$, and it is easy to see that $|\mathrm{Fix}(\mathcal{H})|=1$. Putting all these together, we see that $|K|^2e^{-\delta p}\leq 4p^8e^{-\delta p}$ for some positive constant $\delta$. However, from the bound on the size of $m$, it is true that $4p^8e^{-\delta p}\leq \left(mp\log{(mp)}\right)^{-\omega(1)}$ for large enough $p$.
Now, if $p\leq am\left(\log{m}+\log{p}\right)$, then $p^2\leq amp\left(\log{m}+\log{p}\right)$, which gives $2^{p^2}\leq(mp)^{amp}$ for $0<a<\frac{1}{4}$. This satisfies the premise of Theorem~\ref{thm1}, and hence $K$ is indistinguishable.
\end{proof}
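As a numerical sanity check (our own, not from the thesis), the premise $p\leq\frac{1}{4}m\left(\log m+\log p\right)$ of the theorem above determines, for a given prime $p$, the smallest $m$ for which the indistinguishability argument applies. The hypothetical helper below computes it using base-2 logarithms, matching the $2^{p^2}\leq(mp)^{amp}$ form used in the proof; the values it returns for $p=101$ and $p=211$ appear consistent with the $m_Q$ column of Table~\ref{table}.

```python
from math import log2

def min_m_for_indistinguishability(p):
    """Smallest m satisfying p <= (1/4) * m * (log2(m) + log2(p)),
    the premise of the main indistinguishability theorem."""
    m = 2
    while p > 0.25 * m * (log2(m) + log2(p)):
        m += 1
    return m

print(min_m_for_indistinguishability(101))  # compare with the m_Q column
print(min_m_for_indistinguishability(211))
```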
\section{Construction of the required parity check matrix}
Now we address the last question about the proposed Niederreiter cryptosystem: how to construct a matrix $\mathcal{H}$ satisfying conditions I, II, III and IV? Clearly, conditions I, II and III are trivial to set up and deserve no special attention. We suggest a particular way to construct the parity check matrix $\mathcal{H}$ so that condition IV is satisfied. It should be noted that there may be other ways to meet condition IV as well.
Choose a pair of distinct elements $a,b\in \mathbb{F}_{q^l}$. Now construct $\mathcal{H}$ such that $C_{1}$ contains both $a$ and $b$ exactly once in each column and no other $C_{i}$ contains both $a$ and $b$. We restate this condition as our condition $\mathrm{IV}^\prime$. We could have replaced $C_{1}$ by any other $C_{i}$, $i>1$, and the proof would remain the same. For the sake of simplicity we stick with $C_{1}$.
\begin{description}
\item[$\mathrm{IV}^\prime$] Two distinct elements $a,b\in \mathbb{F}_{q^l}$ occur as entries of $C_{1}$ exactly once in each column, and no other $C_i$ contains both $a$ and $b$.
\end{description}
\begin{lemma} If the matrix $\mathcal{H}$ satisfies $\mathrm{IV}^\prime$, it also satisfies $\mathrm{IV}$.
\end{lemma}
\begin{proof} Let $\mathcal{P}_{1} \in T_{\mathcal{H}}$. From $\mathcal{P}_1 C\mathcal{P}_2=C$ it follows
that $\mathcal{P}_{1}C$ has the same set of columns as $C$, possibly in a different order. Let
$\alpha$ denote the row of $a$ in the first column of $C_{1}$ and $\beta$ denote the row of $b$ in the
same column. Now notice that every column in $C$ that contains both $a$ and $b$ contains them in rows
whose difference is $\alpha - \beta \bmod p$, where $p$ is the size of each circulant block. Suppose
now that $T_{\mathcal{H}}$ were 2-transitive; then there would exist a $\sigma\in T_{\mathcal{H}}$
sending $\beta$ to $\alpha$ and $\alpha$ to $\beta$. The image column would then contain $a$ and $b$
in rows with difference $\beta-\alpha \bmod p$, so $\alpha-\beta\equiv\beta-\alpha \pmod p$, that is,
$2(\alpha-\beta)\equiv 0 \pmod p$. Since $p$ is an odd prime, this forces $\alpha=\beta$, which
contradicts $a\neq b$. Hence, $T_{\mathcal{H}}$ is not 2-transitive.
\end{proof}
Condition V can easily be satisfied using brute force or other means, and this completes the construction of a parity check matrix $\mathcal{H}$ satisfying I, II, III, IV and V. Hence, a \textbf{Niederreiter cryptosystem that resists quantum Fourier sampling} is found.
\begin{algorithm}
\caption{An algorithm that generates the required parity check matrix}
\begin{algorithmic}[1]
\Procedure{$Generate\_H$}{$p,m$}
\State choose distinct $a,b \in \mathbb{F}_{2^{l}}^{\times}$
\State generate an array $v$ of length $p-2$ with entries from $\mathbb{F}_{2^{l}}\setminus \lbrace a,b \rbrace$
\State construct an array $c_{1}$ by concatenating the array $[a,b]$ with the array $v$
\State randomly permute the entries of $c_{1}$
\State let $c_{0}$ denote the array of length $p$; $c_{0} = \left[1, 0, 0,\ldots, 0 \right]$
\State $vector\ list \gets \left[ c_{0}, c_{1}\right]$
\State let $R$ denote the cyclic right shift operator
\While{$vector\ list$ has length less than $m$}
\State construct a fresh array $x$ of length $p$ over $\mathbb{F}_{{2}^{l}}$ not containing both $a$ and $b$
\State $collision \gets \textbf{false}$
\For{$y \in vector\ list$}
\For{$0\leqslant i \leqslant p-1$}
\If {$R^{i}(x) = y$}
\State $collision \gets \textbf{true}$
\EndIf
\EndFor
\EndFor
\If {\textbf{not} $collision$}
\State add $x$ to the $vector\ list$
\EndIf
\EndWhile
\For{$c_{i} \in vector\ list$; $0\leqslant i \leqslant m-1$}
\State construct a circulant matrix $C_{i}$ with $c_{i}$ as its first column
\EndFor
\State \textbf{return} parity check matrix $\mathcal{H}=\left[ C_{0},C_{1},\ldots,C_{m-1}\right]$
\EndProcedure
\end{algorithmic}
\end{algorithm}
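A runnable sketch of the procedure above (hypothetical helper names; field elements are modelled as plain integers, and later columns simply avoid $a$ and $b$ altogether so that condition $\mathrm{IV}^\prime$ holds):

```python
import random

def generate_first_columns(p, m, field_size, a, b, rng=None):
    """Sketch of Generate_H: produce m first columns (length p) for the
    circulant blocks C_0, ..., C_{m-1} so that no chosen column is a
    cyclic shift of another.  c_1 contains the marked elements a, b
    exactly once each; later columns avoid a and b altogether."""
    rng = rng or random.Random(0)
    others = [e for e in range(field_size) if e not in (a, b)]

    def shifts(x):
        # all cyclic right shifts of x, as tuples
        return {tuple(x[-i:] + x[:-i]) for i in range(p)}

    c0 = [1] + [0] * (p - 1)                 # first column of the identity block
    c1 = [a, b] + [rng.choice(others) for _ in range(p - 2)]
    rng.shuffle(c1)                          # a, b still occur exactly once each
    cols, seen = [c0, c1], shifts(c0) | shifts(c1)
    while len(cols) < m:
        x = [rng.choice(others) for _ in range(p)]
        if tuple(x) not in seen:             # x is not a shift of any earlier column
            cols.append(x)
            seen |= shifts(x)
    return cols
```

Each returned column is then expanded into a $p\times p$ circulant block; the block built from $c_0$ is the identity.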
\section{Advantages of the proposed cryptosystem}
One of the prime advantages of our proposed cryptosystem is quantum security. Apart from that, it has a high transmission rate, which translates into a high encryption rate. The classical McEliece cryptosystem built on Goppa codes has a transmission rate of about $0.52$. For a McEliece cryptosystem, the rate is the same as the transmission rate $\frac{k}{n}$ of the underlying code. Niederreiter cryptosystems have slightly different rates due to the difference in their encryption algorithm. For a general cryptosystem, the encryption rate or information rate can be defined as follows~\cite{NiedBook}:
Let $\mathcal{S}(C)$ denote the number of possible plaintexts and $\mathcal{T}(C)$ the number of possible ciphertexts; then the information rate of the system is defined by
\[\mathcal{R}(C) = \dfrac{\log\left(\mathcal{S}(C)\right)}{\log\left(\mathcal{T}(C)\right)}.\]
This information rate can be viewed as the amount of information contained in one bit of ciphertext.
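One plausible instantiation of this definition for the proposed Niederreiter variant (our assumption, not the thesis' own derivation) takes the plaintexts to be the weight-$t$ error vectors of length $n=mp$ over $\mathbb{F}_{2^l}$ and the ciphertexts to be the syndromes in $(\mathbb{F}_{2^l})^{p}$. For $p=101$, $l=3$ and $t=15$ or $20$ this model gives values close to the Rate column of Table~\ref{table}.

```python
from math import comb, log2

def niederreiter_rate(p, m, t, l):
    """R = log S / log T with S = C(n, t) * (2^l - 1)^t weight-t plaintexts
    (n = m*p) and T = (2^l)^p possible syndromes.  Assumed model."""
    n = m * p
    log_S = log2(comb(n, t)) + t * log2(2 ** l - 1)
    log_T = l * p
    return log_S / log_T

print(niederreiter_rate(101, 35, 15, 3))
print(niederreiter_rate(101, 35, 20, 3))
```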
Our proposed Niederreiter cryptosystem has an encryption rate on the higher side. This gives our variant an edge over those constructed on classical Goppa codes or on GRS codes (generalized Reed--Solomon codes).
As discussed before, another problem with McEliece and Niederreiter cryptosystems is the large key size. Circulant matrices seem like a good choice when it comes to key sizes. Matrices are 2-dimensional objects, but a circulant matrix behaves like a 1-dimensional object, as it can be described by its first row. Though this circulant structure is lost in the public key due to the scrambler--permutation pair, the size of the key still remains smaller than in the conventional Niederreiter cryptosystem. Our system is slightly better than the original Niederreiter cryptosystem because of the smaller number of rows in the public key matrices; with $p=101$, this number is less than one-tenth of that of the original Niederreiter cryptosystem. There are two factors that increase the size of the matrices in our variant compared to the original McEliece: one, our matrices have a large number of columns; and two, our system is based on the extension field $\mathbb{F}_{q^{l}}$, which makes the effective size of the matrix $l$ times that of McEliece, which is based on $\mathbb{F}_{2}$. However, in most cases, due to the very small number of rows, the net result is that our system requires shorter keys than the original McEliece. For instance, at 80-bit security with $p=101$ and $l=3$, our keys are almost half the size of the keys for the original McEliece at the same security level, while at the 256-bit security level with $p=211$, $t=40$ and $l=3$, our key size is about one-fourth that of the original McEliece.
\begin{table}[ht]
\caption{Parameters for the proposed Niederreiter cryptosystem}
\centering
\label{table}
\begin{tabular}{|l|c|c|c|c|c|c|c|c|c|}
\hline\hline
Security & $p$ & $t$ & $m_c$ & $m_Q$ & $m$ & Probability & \multicolumn{2}{|l|}{Public Key Size}& Rate\\
\cline{8-9}
in bits & & & & & & of success & No.~rows & No.~cols & \\
\hline
\multirow{4}{*}{80-bits} & \multirow{2}{*}{101} & 15 & 17 & 35 & 35 & $2^{-132}$ & 101 & 3535 & 0.60 \Tstrut\\
& & 20 & 9 & 35 & 35& $2^{-190}$ & 101 & 3535 & 0.77 \Tstrut\\
& \multirow{2}{*}{211} & 35 & 4 & 62 & 62 & $2^{-398}$ & 211 & 13082 & 0.71 \Tstrut\\
& & 40 & 3& 62 & 62 & $2^ {-465}$ & 211 & 13082 & 0.80 \TBstrut\\
\hline
\multirow{4}{*}{100-bits} & \multirow{2}{*}{101} & 15 & 40 & 35 & 40 & $2^ {-136}$ & 101 & 4040 & 0.61\Tstrut\\
& & 20 & 17 & 35 & 35&$2^{-190}$ & 101 & 3535 & 0.77\Tstrut\\
& \multirow{2}{*}{211} & 35 & 5 & 62 & 62 & $2^{-398}$ & 211 & 13082 & 0.71 \Tstrut\\
& & 40 & 5 & 62 & 62 & $2^{-465}$ & 211 & 13082 & 0.80 \TBstrut\\
\hline
\multirow{4}{*}{120-bits} & \multirow{2}{*}{101} & 15 & 95 & 35 & 95 & $2^{-171}$ & 101 & 9595 & 0.67\Tstrut\\
& & 20 & 32 & 35 & 35& $2^{-190}$ & 101 & 3535 & 0.77 \Tstrut\\
& \multirow{2}{*}{211} & 35 & 8& 62 & 62 & $2^{-398}$ & 211 & 13082 & 0.71 \Tstrut\\
& & 40 & 6& 62 & 62 & $2^{-465}$ & 211 & 13082 & 0.80 \TBstrut\\
\hline
\multirow{4}{*}{128-bits} & \multirow{2}{*}{101} & 15 & 134 & 35 & 134 & $2^{-184}$ & 101 & 13534 & 0.70\Tstrut\\
& & 20 & 42 & 35 & 42& $2^{-199}$ & 101 & 4242 & 0.79 \Tstrut\\
& \multirow{2}{*}{211} & 35 & 9& 62 & 62 & $2^{-398}$ & 211 & 13082 & 0.71 \Tstrut\\
& & 40 & 7& 62 & 62 & $2^{-465}$ & 211 & 13082 & 0.80 \TBstrut\\
\hline
\multirow{2}{*}{256-bits} & \multirow{2}{*}{211} & 35 & 98 & 62 & 98 & $2^{-443}$ & 211 & 20678 & 0.75\Tstrut\\
& & 40 & 55 & 62 & 62& $2^{-465}$ & 211 & 13082 & 0.80 \Tstrut\\ \hline
\end{tabular}
\end{table}
\section{Introduction}
For all terms and definitions not specifically defined in this paper, we refer to \cite{JAG,FH,DBW}; for the terminology and results in the theory of signed graphs, see \cite{AACE,GSH,FHS,TZ1,TZ2}. Unless mentioned otherwise, all graphs considered here are simple, finite and have no isolated vertices.
A set-labeling of a graph $G$ can generally be considered as an assignment of the vertices of the graph to subsets of a non-empty set $X$ in an injective manner, such that the set-labels of the edges of $G$ are obtained by taking the symmetric difference of the set-labels of their end vertices. Some studies on the set-labeling of signed graphs have been carried out in \cite{AGS1} and on the set-labeling of signed digraphs in \cite{BDAS}.
The {\em sumset} (see \cite{MBN}) of two sets $A$ and $B$, denoted by $A+B$, is defined as $A+B=\{a+b:a\in A, b\in B\}$. Let $\mathbb{N}_0$ be the set of all non-negative integers and let $X$ be a non-empty subset of $\mathbb{N}_0$. Using the concept of sumsets, we have the following notions, as defined in \cite{GS1,GS0}.
\begin{defn}{\rm
\cite{GS1,GS0} An {\em integer additive set-labeling} (IASL, in short) is an injective function $f:V(G)\to \mathcal{P}(X)-\{\emptyset\}$ such that the induced function $f^+:E(G)\to \mathcal{P}(X)-\{\emptyset\}$ is defined by $f^+{(uv)}=f(u)+f(v)~ \forall uv\in E(G)$. A graph $G$ which admits an IASL is called an {\em integer additive set-labeled graph} (IASL-graph).}
\end{defn}
\begin{defn}{\rm
\cite{GS1,GS0} An {\em integer additive set-indexer} (IASI) is an injective function $f:V(G)\to \mathcal{P}(X)-\{\emptyset\}$ such that the induced function $f^+:E(G) \to \mathcal{P}(X)-\{\emptyset\}$ is also injective. A graph $G$ which admits an IASI is called an {\em integer additive set-indexed graph} (IASI-graph).}
\end{defn}
The cardinality of the set-label of an element (vertex or edge) of a graph $G$ is called the {\em set-indexing number} of that element.
\subsection{Signed Graphs and Their IASLs}
A \textit{signed graph} (see \cite{TZ1,TZ2}), denoted by $\Sigma(G,\sigma)$, is a graph $G(V,E)$ together with a function $\sigma:E(G)\to \{+,-\}$ that assigns a sign, either $+$ or $-$, to each ordinary edge in $G$. The function $\sigma$ is called the {\em signature} or {\em sign function} of $\Sigma$, which is defined on all edges except half edges and is required to be positive on free loops.
An edge $e$ of a signed graph $\Sigma$ is said to be a \textit{positive edge} if $\sigma(e)=+$ and a \textit{negative edge} if $\sigma(e)=-$.
A simple cycle (or path) of a signed graph $\Sigma$ is said to be {\em balanced} (see \cite{AACE,FHS}) if the product of signs of its edges is $+$. A signed graph is said to be a {\em balanced signed graph} if it contains no half edges and all of its simple cycles are balanced.
It is to be noted that an all-negative signed graph is balanced if and only if its underlying graph is bipartite. The notion of a signed graph associated to a given IASL-graph has been introduced as follows.
\begin{defn}{\rm
\cite{GS5} Let $\Sigma$ be a signed graph, with underlying graph $G$ and signature $\sigma$. An injective function $f:V(\Sigma)\to \mathcal{P}(X)-\{\emptyset\}$ is said to be an \textit{integer additive set-labeling} (IASL) of $\Sigma$ if $f$ is an integer additive set-labeling of the underlying graph $G$ and the signature of $\Sigma$ is defined by $\sigma(uv)=(-1)^{|f^+(uv)|}$. }
\end{defn}
A signed graph which admits an integer additive set-labeling is called an \textit{integer additive set-labeled signed graph} (IASL-signed graph) and is denoted by $\Sigma_f$. If the context is clear, we can represent an IASL-signed graph by $\Sigma$ itself.
Some interesting studies on signed graphs which admit different types of integer additive set-labelings have been carried out in \cite{GS5}. Motivated by these studies, in this paper we further investigate the characteristics of signed graphs which admit certain other IASLs.
\section{Arithmetic IASL-Signed Graphs}
By saying that a set is an arithmetic progression, we mean that the elements of that set are in arithmetic progression. The notion of an arithmetic integer additive set-labeling (AIASL) of a given graph was introduced in \cite{GS3} as given below.
\begin{defn}{\rm
\cite{GS3} An \textit{arithmetic integer additive set-labeling} of a graph $G$ is an integer additive set-indexer $f$ of $G$ with respect to which the set-labels of all vertices and edges of $G$ are arithmetic progressions. A graph that admits an AIASL is called an {\em arithmetic integer additive set-labeled graph} (AIASL-graph).}
\end{defn}
If the context is clear, the common difference of the set-label of an element of $G$ can be called the common difference of that element. The \textit{deterministic ratio} of an edge of $G$ is the ratio $k\ge 1$ between the common differences of its end vertices. The following theorem gives a necessary and sufficient condition for a graph $G$ to admit an arithmetic integer additive set-labeling.
\begin{thm}\label{Thm-AIASI-1}
{\rm \cite{GS3}} A graph $G$ admits an arithmetic integer additive set-labeling $f$ if and only if for every edge of $G$, the set-labels of its end vertices are arithmetic progressions with common differences $d_u$ and $d_v$ such that $d_u\le d_v$ and its deterministic ratio $\frac{d_v}{d_u}$ is a positive integer less than or equal to $|f(u)|$.
\end{thm}
The following theorem discusses the cardinality of the set-labels of the edges of an arithmetic integer additive set-labeled graph.
\begin{thm}{\rm
{\rm \cite{GS3}} Let $f$ be an arithmetic integer additive set-labeling of a graph $G$. Then, the set-indexing number of any edge $uv$ of $G$, with $d_u\le d_v$, is given by $|f(u)|+\frac{d_v}{d_u}\left(|f(v)|-1\right)$. }
\end{thm}
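The cardinality formula above can be checked directly on small arithmetic-progression set-labels (illustrative values; the condition $k=\frac{d_v}{d_u}\le|f(u)|$ from Theorem~\ref{Thm-AIASI-1} is respected):

```python
def ap(start, diff, size):
    """Arithmetic progression {start, start+diff, ...} with `size` terms."""
    return {start + i * diff for i in range(size)}

def sumset(A, B):
    return {a + b for a in A for b in B}

# |f(u)+f(v)| = |f(u)| + (d_v/d_u) * (|f(v)| - 1) when d_v/d_u <= |f(u)|
fu = ap(1, 2, 4)          # |f(u)| = 4, d_u = 2
fv = ap(5, 6, 3)          # |f(v)| = 3, d_v = 6, so k = 3 <= |f(u)|
k = 6 // 2
assert len(sumset(fu, fv)) == len(fu) + k * (len(fv) - 1)   # 10 = 4 + 3*2
```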
All set-labels mentioned in this section are arithmetic progressions, so that the given signed graph $\Sigma$ admits an arithmetic integer additive set-labeling. Invoking the above results, we establish the following theorem for an edge of an AIASL-signed graph to be a positive edge.
\begin{thm}\label{Thm-2.1}
Let $\Sigma$ be a signed graph which admits an AIASL $f$. Then, an edge $uv$ is a positive edge of $\Sigma$ if and only if
\begin{enumerate}\itemsep0mm
\item[(i)] the set-labels $f(u)$ and $f(v)$ are of different parity, provided the deterministic ratio of the edge $uv$ is odd.
\item[(ii)] the set-label of the end vertex, with minimum common difference, is of even parity, provided the deterministic ratio of the edge $uv$ is even.
\end{enumerate}
\end{thm}
\begin{proof}
Recall that the set-indexing number of $uv$ is given by $|f(u)|+k(|f(v)|-1)$, where $k=\frac{d_v}{d_u}$. Note that the edge $uv$ is a positive edge of $\Sigma$ if and only if $|f(u)|+k(|f(v)|-1)$ is an even integer.
\textit{Case-1}: Assume that the deterministic ratio of the edge $uv$, $k=\frac{d_v}{d_u}$, is an odd positive integer.

Let $f(u)$ and $f(v)$ be of different parity. If $f(u)$ is of even parity and $f(v)$ is of odd parity, then $|f(v)|-1$ is even and hence $k(|f(v)|-1)$ is an even integer. Therefore, $|f(u)|+k(|f(v)|-1)$ is even and the signature of the edge $uv$ is positive. If $f(u)$ is of odd parity and $f(v)$ is of even parity, then $|f(v)|-1$, and hence $k(|f(v)|-1)$, is an odd integer, so $|f(u)|+k(|f(v)|-1)$ is again even and $\sigma(uv)$ is positive.

Next, assume that $f(u)$ and $f(v)$ are of the same parity. If both are of odd parity, then $|f(v)|-1$ is even and hence $k(|f(v)|-1)$ is also even, so $|f(u)|+k(|f(v)|-1)$ is odd and $\sigma(uv)$ is negative. If both are of even parity, then $|f(v)|-1$, and hence $k(|f(v)|-1)$, is odd, so $|f(u)|+k(|f(v)|-1)$ is odd and $uv$ is a negative edge of $\Sigma$.

Hence, when $k$ is odd, an edge of an AIASL-signed graph $\Sigma$ is a positive edge if and only if the set-labels of its end vertices are of different parity.

\textit{Case-2}: Let the deterministic ratio $k$ of the edge $uv$ be even. Then, $k(|f(v)|-1)$ is always even, irrespective of the parity of the set-label $f(v)$. Hence, $|f(u)|+k(|f(v)|-1)$ is even only when $|f(u)|$ is even. Therefore, $\sigma(uv)$ is positive if and only if $|f(u)|$ is even.
\end{proof}
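Theorem~\ref{Thm-2.1} can likewise be verified computationally on small examples (illustrative base points; the sign is computed directly from $\sigma(uv)=(-1)^{|f^+(uv)|}$):

```python
def ap(start, diff, size):
    """Arithmetic progression {start, start+diff, ...} with `size` terms."""
    return {start + i * diff for i in range(size)}

def edge_sign(fu_size, fv_size, du, dv):
    """Sign of edge uv under sigma(uv) = (-1)^{|f^+(uv)|} for AP set-labels
    f(u), f(v) with common differences du <= dv (base points set to 0)."""
    fu, fv = ap(0, du, fu_size), ap(0, dv, fv_size)
    s = {a + b for a in fu for b in fv}
    return 1 if len(s) % 2 == 0 else -1

# k = dv/du odd: positive sign iff |f(u)|, |f(v)| have different parity
assert edge_sign(4, 3, 2, 2) == 1    # even, odd  -> positive
assert edge_sign(3, 3, 2, 2) == -1   # odd, odd   -> negative
# k even: positive sign iff |f(u)| is even
assert edge_sign(4, 3, 1, 2) == 1
assert edge_sign(3, 3, 1, 2) == -1
```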
The following theorem establishes a necessary and sufficient condition for an AIASL-signed graph to be a balanced signed graph.
\begin{thm}\label{Thm-2.2}
An AIASL-signed graph $\Sigma$ is balanced if and only if its underlying graph $G$ is bipartite.
\end{thm}
\begin{proof}
First assume that the underlying graph $G$ of the given signed graph is bipartite. Let $(V_1,V_2)$ be a bipartition of $G$. Then, label the vertices of $G$ in such a way that all vertices in the same part have set-labels (arithmetic progressions) of the same parity. Note that all edges in $\Sigma$ then simultaneously have either positive or negative signature. Since all cycles in $\Sigma$ are even cycles, in all possible cases the number of negative edges in each cycle is even. Hence, $\Sigma$ is balanced.
Conversely, assume that the AIASL-signed graph is balanced. If possible, let the underlying graph $G$ be non-bipartite. Then, $G$ contains some odd cycles. Let $C: v_1v_2v_3\ldots v_nv_1$ be an odd cycle in $G$. Label the vertices of $G$ by arithmetic progressions in such a way that there is a minimum number of different parity set-labels. This can be done by labeling any two adjacent vertices by sets of different parity. In this way, we can assume without loss of generality that the vertices $v_1, v_3, v_5, \ldots, v_{n-2}$ have set-labels of the same parity, say odd parity. Hence, the vertices $v_2, v_4, v_6, \ldots, v_{n-1}$ have even parity set-labels. If $v_n$ has an odd parity set-label, then the edge $v_nv_1$ is the only negative edge of $C$, and if $v_n$ has an even parity set-label, then the edge $v_{n-1}v_n$ is the only negative edge of $C$. Moreover, if the parity of any one vertex, say $v_i$, is changed, then the set-labels of the vertices $v_{i-1}, v_i, v_{i+1}$ become sets of the same parity, and hence the signatures of the two edges $v_{i-1}v_i$ and $v_iv_{i+1}$ become negative, keeping the number of negative edges in $C$ odd. This is a contradiction to the hypothesis that $\Sigma$ is balanced. Hence, $G$ must be bipartite.
\end{proof}
\section{AIASL of Certain Associated Signed Graphs}
An IASL $f$ of a signed graph $\Sigma$ is said to be \textit{induced} on its associated signed graphs if the following general conditions hold.
\begin{enumerate}\itemsep0mm
\item[(i)] corresponding elements of the signed graph and its associated signed graph have the same set-labels,
\item[(ii)] if a new edge (which is not in $\Sigma$) is introduced in the associated signed graph, then the set-label and signature of that edge are determined by the corresponding rules,
\item[(iii)] if an edge of $\Sigma$ is replaced by a new vertex (not in $\Sigma$) in the associated signed graph, then the new vertex is given the set-label of the removed edge.
\end{enumerate}
In this section, we discuss the induced characteristics of certain signed graphs associated with the AIASL-signed graphs.
\begin{rem}{\rm
Let $\Sigma'=\Sigma-v$ be a signed subgraph of a balanced AIASL-signed graph $\Sigma$, where $v$ is an arbitrary vertex of $\Sigma$. If $v$ is not in any cycle of $\Sigma$, then the removal of $v$ does not affect the number of negative edges in the cycles of $\Sigma$. If $v$ is in a cycle $C$ of $\Sigma$, then the removal of $v$ makes $C-v$ acyclic. In this case also, the removal of $v$ does not affect the number of negative edges in the cycles of $\Sigma-v$. Hence, $\Sigma'$ is balanced whenever $\Sigma$ is balanced.
}\end{rem}
A \textit{spanned signed subgraph} $\Sigma'$ of a signed graph $\Sigma$ is a signed graph which preserves the signature and whose underlying graph $H$ is a spanning subgraph of the underlying graph $G$ of $\Sigma$. The following result is obvious and immediate from the corresponding definition of balanced signed graphs.
\begin{rem}{\rm
Let $\Sigma'$ be a spanned signed subgraph of a balanced AIASL-signed graph $\Sigma$. Then, $\Sigma'$ is balanced with respect to the induced labeling and signature if and only if the following conditions hold.
\begin{enumerate}
\item[(i)] the set $E(\Sigma\setminus \Sigma')$ contains an even number of negative edges of $\Sigma$, if the cycles of $\Sigma$ are edge disjoint;
\item[(ii)] the set $E(\Sigma\setminus \Sigma')$ contains an odd number of negative edges of $\Sigma$, if some of the negative edges are common to two or more cycles in $\Sigma$.
\end{enumerate}
}\end{rem}
A \textit{subdivision signed graph} of a signed graph $\Sigma$ is the signed graph obtained by introducing a vertex into some or all edges of $\Sigma$. It is to be noted that the set-label of a newly introduced vertex is the same as that of the edge of $\Sigma$ into which it is introduced. In view of this fact, the following theorem checks when a signed graph obtained by subdividing an edge of a balanced AIASL-signed graph $\Sigma$ is balanced.
\begin{thm}
A signed graph $\Sigma'$ obtained by subdividing an edge $e$ of a balanced AIASL-signed graph $\Sigma$ is balanced under the induced set-labeling if and only if $e$ is a cut edge of $\Sigma$.
\end{thm}
\begin{proof}
Let $\Sigma$ be a balanced AIASL-signed graph and let $e=uv$ be an arbitrary edge of $\Sigma$. If $e$ is a cut edge of $\Sigma$, then it is not contained any cycle of $\Sigma$. Hence, the cycles in $\Sigma'$ correspond to the same cycles in $\Sigma$. Therefore, subdividing the edge $e$ will not affect the number of negative edges in any cycle of $\Sigma'$. Therefore, $\Sigma'$ is balanced.
Now assume that a signed graph $\Sigma'$ obtained by subdividing the edge of $\Sigma$ is balanced under induced set-labeling. If possible, let $e$ be not a cut-edge of $G$, Then it is contained in a cycle $C$ of $\Sigma$. Now, introduce a new vertex $w$ into the edge $uv$. Then, the edge $uv$ will be removed and two edges $uw$ and $vw$ are introduced to $\Sigma$ and the vertex $w$ has the same set-label of the edge $uv$. Here, we need to consider the following cases.
\textit{Case-1}: Let $u$ and $v$ have the same parity set-labels. Then, the edge $uv$ has an odd parity set-label in $\Sigma$ and hence the new vertex $w$ in $\Sigma'$ has an odd parity set-label and negative signature. Hence, we have the following subcases.
\textit{Subcase-1.1}: If both $u$ and $v$ have odd parity set-labels, the edges $uw$ and $vw$ are negative edges in the corresponding cycle $C'$ in $\Sigma'$. Therefore, $C'$ contains an odd number of negative edges, a contradiction to our hypothesis that $\Sigma'$ is balanced.
\textit{Subcase-1.2}: If both $u$ and $v$ have even parity set-labels, the edges $uw$ and $vw$ are positive edges in the corresponding cycle $C'$ in $\Sigma'$. Therefore, the number of negative edges in $C'$ is one less than that of the corresponding cycle $C$ in $\Sigma$, a contradiction to our hypothesis that $\Sigma'$ is balanced.
\textit{Case-2}: Let $u$ and $v$ have different parity set-labels. Then, the edge $uv$ has an even parity set-label in $\Sigma$ and hence the new vertex $w$ in $\Sigma'$ has an even parity set-label and positive signature. Without loss of generality, let $u$ have an even parity set-label and $v$ an odd parity set-label in $\Sigma$. Then, the edge $uw$ is a negative edge and the edge $vw$ is a positive edge in $\Sigma'$. It can be noted that the cycle $C'$ contains one negative edge more than the corresponding cycle $C$ in $\Sigma$, which is also a contradiction to the hypothesis. In all possible cases, we arrive at a contradiction, and hence the edge $e$ must be a cut edge of $\Sigma$. This completes the proof.
\end{proof}
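The cycle-counting arguments in the proof above can be verified mechanically. The following Python sketch (illustrative only; the function names are ours, and the edge signs are taken as given rather than derived from set-labels) tests Harary's classical balance criterion: a signed graph is balanced exactly when its vertices can be marked with signs so that every edge sign is the product of its endpoint marks, which holds if and only if every cycle carries an even number of negative edges.

```python
from collections import deque

def is_balanced(vertices, signed_edges):
    """Harary's criterion: a signed graph is balanced iff there exist
    vertex signs s with s(u) * s(v) = sign(uv) for every edge, i.e.
    iff every cycle has an even number of negative edges."""
    adj = {v: [] for v in vertices}
    for u, v, sign in signed_edges:
        adj[u].append((v, sign))
        adj[v].append((u, sign))
    state = {}
    for root in vertices:
        if root in state:
            continue
        state[root] = 1
        queue = deque([root])
        while queue:
            u = queue.popleft()
            for v, sign in adj[u]:
                if v not in state:
                    state[v] = state[u] * sign
                    queue.append(v)
                elif state[v] != state[u] * sign:
                    # a cycle with an odd number of negative edges
                    return False
    return True
```

Subdividing an edge (replacing one signed edge by two) and re-running `is_balanced` reproduces the theorem's conclusion on small examples.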
A signed graph $\Sigma'$ is said to be homeomorphic to another signed graph $\Sigma$ if $\Sigma'$ is obtained by removing a vertex $v$ with $d(v)=2$ which is not a vertex of any triangle in $\Sigma$, and joining the two pendant vertices thus formed by a new edge. This operation is said to be an elementary transformation on $\Sigma$. The following theorem discusses the balance of a signed graph that is homeomorphic to a given balanced AIASL-signed graph $\Sigma$.
\begin{thm}
A signed graph obtained from a balanced AIASL-signed graph $\Sigma$ by applying an elementary transformation on a suitable vertex $v$ of $\Sigma$ is a balanced signed graph with respect to induced set-labeling if and only if the vertex $v$ is not in any cycle in $\Sigma$.
\end{thm}
\begin{proof}
Let $\Sigma$ be a balanced AIASL-signed graph and let $v$ be any vertex of $\Sigma$ with degree $2$ which is not in any triangle in $\Sigma$. Also, let $\Sigma'$ be the signed graph obtained from $\Sigma$ by applying an elementary transformation on $v$.
Since $d(v)=2$, it is adjacent to two vertices, say $u$ and $w$ in $\Sigma$. If $v$ is not in any cycle in $\Sigma$, then the number of negative edges in any cycle of $\Sigma$ and hence the number of negative edges in the corresponding cycles in $\Sigma'$ will not be affected by the elementary transformation on $v$. Therefore, $\Sigma'$ is balanced.
Conversely, assume that $v$ is a vertex of a cycle $C$ in $\Sigma$. Then we need to consider the following cases.
\textit{Case-1}: Let $u$ and $w$ have the same parity set-labels. Here, we have the following subcases.
\textit{Subcase-1.1}: If the set-label of $v$ and the set-labels of $u$ and $w$ are of the same parity, then the edges $uv$ and $vw$ are negative edges in the cycle $C$ of $\Sigma$ and the edge $uw$ is a negative edge in the corresponding cycle $C'$ in $\Sigma'$. Therefore, $C'$ contains one negative edge less than $C$ in $\Sigma$. Hence, in this case, $\Sigma'$ is not balanced.
\textit{Subcase-1.2}: If the parity of the set-label of $v$ is different from that of the set-labels of $u$ and $w$, then the edges $uv$ and $vw$ are positive edges in the cycle $C$ of $\Sigma$. But, the edge $uw$ is a negative edge in the corresponding cycle $C'$ of $\Sigma'$. Hence, the cycle $C'$ contains one negative edge more than the corresponding cycle $C$ in $\Sigma$. Therefore, in this case also, $\Sigma'$ is not balanced.
\textit{Case-2}: Let $u$ and $w$ have different parity set-labels. Then, the set-label of $v$ and the set-label of either $u$ or $w$ are of the same parity. Without loss of generality, let the set-labels of $u$ and $v$ be of the same parity. Then, the edge $uv$ is a negative edge and $vw$ is a positive edge in the cycle $C$ of $\Sigma$. But, since $u$ and $w$ have different parity set-labels, the edge $uw$ is a positive edge in the corresponding cycle $C'$ of $\Sigma'$. Hence, the cycle $C'$ contains one negative edge less than the corresponding cycle $C$ in $\Sigma$. Therefore, $\Sigma'$ is not balanced. This completes the proof.
\end{proof}
\section{Conclusion}
In this paper, we discussed the characteristics and properties of the signed graphs which admit arithmetic integer additive set-labelings, with a prime focus on the balance of these signed graphs. There are several open problems in this area. Some of the open problems that seem promising for further investigation are the following.
\begin{prob}{\rm
Discuss the $k$-clusterability of different types of IASL-signed graphs for $k>2$.}
\end{prob}
\begin{prob}{\rm
Discuss the balance and $2$-clusterability and general $k$-clusterability of other types of signed graphs which admit different types of arithmetic IASLs.}
\end{prob}
\begin{prob}{\rm
Discuss the balance and $2$-clusterability and general $k$-clusterability of graceful, sequential and topological IASL-signed graphs.}
\end{prob}
\begin{prob}{\rm
Discuss the admissibility of AIASLs by the signed graphs obtained from the AIASL-signed graphs by finite number of edge contractions.}
\end{prob}
\begin{prob}{\rm
Discuss the admissibility of AIASLs by the signed graphs whose underlying graphs are the line graphs and total graphs of the underlying graphs of certain AIASL-signed graphs.}
\end{prob}
Further studies on other characteristics of signed graphs corresponding to different IASL-graphs are also interesting and challenging. All these facts highlight the scope for further studies in this area.
\section{Introduction}\label{sec:introduction}
\IEEEPARstart{A}{rtificial} Intelligence (AI) is one of the most prevalent topics of research today across almost every scientific field.
For example, multi-agent systems can be applied to distributed control systems~\cite{GE20181684}, while distributed machine learning
has been adopted by Google for mobile users~\cite{Geyer17}.
However, as AI becomes more and more reliant on data,
several new problems have emerged, such as privacy violations, security issues, model instability, model fairness and communication overheads.
As just a few of the tactics used to derail AI,
adversarial samples can fool machine learning models,
leading to incorrect results.
Multi-agent systems may receive false information from malicious agents.
As a result, many researchers have been exploring new and existing security and privacy tools to tackle these new emerging problems.
Differential privacy is one of these tools.
Differential privacy is a prevalent privacy preservation model which guarantees that whether or not an individual's information is included in a dataset has little impact on the aggregate output.
Fig.~\ref{fig-dp} illustrates a basic differential privacy framework using the following example.
Consider two datasets that are almost identical but differ in only one record, and suppose that
access to the datasets is provided via a query function $f$.
If we can find a mechanism that can query both datasets and obtain the same outputs,
we can claim that differential privacy is satisfied.
In that scenario,
an adversary cannot associate the query outputs with either of the two neighbouring datasets,
so the one different record is safe.
Hence,
differential privacy guarantees that, even if
an adversary knows all the records in a dataset except for one unknown individual,
they still cannot infer the information of that record.
\begin{figure}[ht]
\centering
\includegraphics[scale=0.7]{dp.eps}
\caption{Differential privacy}
\label{fig-dp}
\end{figure}
Interest in differential privacy mechanisms not only ranges from
the privacy community
to the AI community,
it has also attracted the attention of many private companies, such as Apple\footnote{https://www.apple.com/au/privacy/approach-to-privacy/
}, Uber\footnote{https://www.usenix.org/node/208168} and Google~\cite{RAPPOR14Google}.
The key idea of differential privacy is to introduce calibrated randomization to the aggregate output.
When Dwork et al.~\cite{Dwork14Science} showed that applying differential privacy mechanisms
to test data in machine learning could prevent over-fitting of learning algorithms,
it launched a new direction beyond simple privacy preservation to one that solves emerging problems in AI~\cite{Zhu20Philip}.
We use two examples to illustrate how those new properties can be applied.
\subsection{Examples}
The first example pertains to machine learning.
As shown in Fig.~\ref{Fig-examplelearning},
machine learning suffers from several problems, including privacy violations, over-fitting
and unfair models.
Recent research has shown that differential privacy mechanisms have the potential to tackle those problems. First, to maintain fairness in a model, the training data can be re-sampled from the data universe using a differential privacy mechanism~\cite{Alexandra18}.
Second, to preserve privacy, noise derived from a differential privacy mechanism can be added to the learning model~\cite{Chaudhuri20111069}.
Finally,
calibrated noise can be applied to generate fresh testing data to increase stability and avoid over-fitting of the learning algorithm~\cite{Dwork14Science}.
These successful applications of differential privacy show that learning problems can be solved by taking advantage of several properties of differential privacy,
such as randomization,
privacy preservation capability, and algorithm stability.
\begin{figure}[htpb]
\centering
\includegraphics[scale=0.7]{learningexample.eps}
\caption{Learning example}
\label{Fig-examplelearning}
\end{figure}
The second example comes from the realm of multi-agent systems, one of the traditional disciplines in AI.
A multi-agent system is a computerized system composed of multiple interacting intelligent agents,
such as sweeping robots as shown in Fig.~\ref{Fig-multiagent}.
The faces are agents and the grid denotes the moving environment of all agents.
An agent can make decisions over its direction of movement and can share that knowledge with other agents to help them make their decisions. The goal is for the robots to sweep all grids. Several problems exist in this multi-agent system. First, as each agent observes a different environment, it is difficult to share their knowledge. The randomizing mechanism in differential privacy can help to transfer the knowledge between agents. Second, communications between agents should be restricted to limit power consumption. Here, the privacy budget in differential privacy can help the system control the overall communications~\cite{Ye19}.
Third, when a malicious agent is present, like the agent in the red face, they may provide false knowledge. Differential privacy mechanisms can help improve the security level of communications by diminishing the impact of that agent.
\begin{figure}[htpb]
\centering
\includegraphics[scale=0.7]{agentexample.eps}
\caption{Multi-agent example}
\label{Fig-multiagent}
\end{figure}
Both these examples show how current research is applying differential privacy mechanisms to AI and how randomization can bring several new properties to AI.
\subsection{AI areas}
\begin{figure}[ht]
\centering
\caption{AI areas in the view of acting humanly}
\label{fig-aiarea1}
\includegraphics[scale=0.38]{AIareas.pdf}
\end{figure}
In AI, there are no strict area disciplines. Researchers and industries have built a bird's-eye view of AI in all its diversity. Take the perspective of the Turing Test, for example. When programming a computer that needs to act like a human, the computer must have the following capabilities~\cite{russell2002}: natural language processing so it can successfully communicate with a human; knowledge representation to store what it knows or hears; automated reasoning to use the stored information to answer questions and to draw new conclusions; machine learning to adapt to new circumstances and to detect and extrapolate patterns; computer vision to perceive objects; and robotics to manipulate objects.
Based on this bird's-eye view, we roughly categorize three major technical fields in AI: machine learning, deep learning and multi-agent systems. Knowledge learning and automated reasoning can be processed by a multi-agent system; the other functions can be accomplished through machine learning and/or deep learning. From the application viewpoint, the AI area includes robotics, computer vision, natural language processing (NLP), etc.--see Fig.~\ref{fig-aiarea1}.
Here, we note that although deep learning was originally a series of machine learning algorithms implemented in a neural network architecture, it has rapidly developed into a field of study in its own right with a huge number of novel perspectives and technologies, such as GANs~\cite{Goodfellow14}, ResNets~\cite{15RseNet}, etc. Therefore, we place deep learning in its own category.
The purpose of this paper is to document how the differential privacy mechanism can solve those new emerging problems in the technical fields of machine learning, deep learning and multi-agent systems. Applications such as robotics, NLP and computer vision take advantage of technologies such as machine learning, deep learning and multi-agent systems, so we have not reviewed these applications in dedicated sections.
\subsection{Differential privacy in AI areas}
Calibrated randomization benefits some AI algorithms. What follows is a summary of several properties derived from randomization.
\begin{itemize}
\item \textbf{Preserving privacy}.
This is the original purpose of differential privacy.
By hiding an individual in the aggregate information,
differential privacy can preserve the privacy of participants in a dataset.
\item \textbf{Stability}.
Differential privacy mechanisms ensure that the probability
of any outcome from a learning algorithm is
unchanged by modifying any individual record in the training data.
This property
establishes connections between a
learning algorithm and its ability to be generalized.
\item \textbf{Security}.
Security relates to malicious participants in a system. Differential privacy mechanisms can reduce the impact of malicious participants in AI tasks. This property can guarantee security in AI systems.
\item \textbf{Fairness}.
In machine learning, a given algorithm is said to be fair, or to have fairness, if its results are independent of sensitive attributes, like race and gender. Differential privacy can help to maintain fairness in a learning model by re-sampling the training data from the universe.
\item \textbf{Composition}.
Differential privacy mechanisms can guarantee that any step that satisfies differential privacy can construct a new algorithm that also satisfies differential privacy. This property is referred to as composition and is controlled by the privacy budget. In AI, composition can be used to control the number of steps, communication loads, etc.
\end{itemize}
\begin{table*}[]
\caption{Properties of differential privacy in artificial intelligence}\label{table-property}
\begin{tabular}{|l|l|c|c|c|c|c|c|}
\hline
\multicolumn{2}{|c|}{Selected AI areas} & Privacy & Stability & Fairness & Security & Composition & Utility \\ \hline
\multirow{3}{10mm}{Machine learning} & Private learning & Yes & & & & Yes & Decrease \\ \cline{2-8}
& Stability in learning & & Yes & & & & Increase \\ \cline{2-8}
& Fairness in learning & & & Yes & & & Increase \\ \hline
\multirow{3}{10mm}{Deep learning} & Deep Learning & Yes & & & & & Decrease \\ \cline{2-8}
& Distributed deep learning & Yes & & & & Yes & Decrease \\ \cline{2-8}
& Federated learning & Yes & & Yes & & Yes & Decrease or Increase \\ \hline
\multirow{3}{10mm}{Multi-agent system} & Reinforcement learning & Yes & & & Yes & Yes & Increase \\ \cline{2-8}
& Auction & Yes & & & & Yes & Decrease \\ \cline{2-8}
& Game theory & & & & & Yes & Decrease \\ \hline
\end{tabular}
\end{table*}
Table~\ref{table-property} shows the properties that have been explored to date for each of our three disciplines. In machine learning, differential privacy has been applied to private learning, stability and fairness. In deep learning, privacy is the major concern, but distributed deep learning and federated learning have also been investigated. In multi-agent systems, differential privacy has been used to guarantee privacy, provide security, and ensure composition. Utility shows the ultimate performance of the technology after adding differential privacy. Normally, privacy preservation comes with a utility cost. However, if differential privacy can also contribute to stability, security or fairness, the utility may increase, as in federated learning.
Note, however, that blank cells do not mean we cannot apply differential privacy mechanisms to those areas. As differential privacy has been proved to work well in many AI areas, more problems might be solved in the future by taking advantage of differential privacy.
The purpose of this paper is to highlight several possible avenues to integrate AI with differential privacy mechanisms, showing that differential privacy mechanisms have several attractive properties that make it quite valuable as a tool to AI beyond merely preserving privacy. The contributions of the paper are listed as follows:
\begin{itemize}
\item We have summarized several properties of differential privacy mechanisms.
\item We have shown that these properties can improve diverse aspects of AI areas, including machine learning, deep learning and multi-agent systems.
\item We explored new possibilities for taking advantage of differential privacy to bring new opportunities.
\end{itemize}
\section{Preliminary}\label{sec:preliminary}
\subsection{Differential privacy}
Consider a finite data universe $\mathcal{X}$.
Let the variable $r$ represent a record
with $d$ attributes sampled from the universe $\mathcal{X}$,
a dataset $D$ is
an unordered set of $n$ records from domain $\mathcal{X}$.
Two datasets $D$ and $D'$ are neighbouring datasets if
they differ in only one record.
A query $f$ is a function that
maps a dataset $D$ to an abstract range $\mathbb{R}$:
$f: D\rightarrow\mathbb{R}$.
The target of differential privacy is to mask the difference in the results of query $f$
between the neighbouring datasets to preserve privacy.
The maximal difference
is defined as the sensitivity $\Delta f$,
which determines how much perturbation is required for a privacy-preserving answer.
To achieve this goal,
differential privacy provides
a mechanism $\mathcal{M}$,
which is a randomization algorithm that
accesses the database and implements some functionalities.
A formal definition of differential privacy follows.
\begin{definition}[$\epsilon, \delta$-Differential Privacy~\cite{Dwork2006}]\label{Def-DP}
A randomized algorithm $\mathcal{M}$ gives $(\epsilon, \delta)$-differential privacy if, for any pair of
\emph{neighbouring datasets} $D$ and $D'$
and every set of outcomes $\Omega$,
$\mathcal{M}$ satisfies:
\begin{equation}
Pr[\mathcal{M}(D) \in \Omega] \leq \exp(\epsilon) \cdot Pr[\mathcal{M}(D') \in \Omega]+\delta
\end{equation}
where $\Omega$ denotes the output range of the algorithm $\mathcal{M}$.
\end{definition}
In Definition~\ref{Def-DP},
the parameter $\epsilon$ is defined as the privacy budget,
which controls the privacy guarantee level of mechanism $\mathcal{M}$.
A smaller $\epsilon$ represents stronger privacy.
If $\delta=0$, the randomized mechanism $\mathcal{M}$
gives $\epsilon$-differential privacy
by its strictest definition.
$(\epsilon,\delta)$-differential privacy provides freedom to
violate strict $\epsilon$-differential privacy
for some low probability events.
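As a concrete illustration of Definition~\ref{Def-DP}, randomized response is a classical mechanism that satisfies $\epsilon$-differential privacy: a respondent reports their true bit with probability $e^{\epsilon}/(1+e^{\epsilon})$, so the ratio of output probabilities under any two inputs is exactly $e^{\epsilon}$. A minimal Python sketch (function names are ours, not from the literature):

```python
import math
import random

def randomized_response(truth: bool, eps: float) -> bool:
    """Report the true bit with probability e^eps / (1 + e^eps) and flip
    it otherwise; the likelihood ratio between the two possible inputs
    is e^eps, so the mechanism is eps-differentially private."""
    p_true = math.exp(eps) / (1.0 + math.exp(eps))
    return truth if random.random() < p_true else (not truth)

def unbiased_estimate(responses, eps):
    """Debias the observed 'yes' proportion from the noisy responses."""
    p = math.exp(eps) / (1.0 + math.exp(eps))
    observed = sum(responses) / len(responses)
    return (observed - (1 - p)) / (2 * p - 1)
```

With $\epsilon=\ln 3$ the truthful-report probability is $0.75$, and the debiased estimator recovers the population proportion from the noisy responses while no individual answer can be trusted on its own.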
Sensitivity is a parameter used in these mechanisms to determine how much randomization is required:
\begin{definition}[Sensitivity]\label{Def-GS}
For a query $f:D\rightarrow\mathbb{R}$,
the sensitivity of $f$ is defined as
\begin{equation}
\Delta f=\max_{D,D'} ||f(D)-f(D')||_{1}
\end{equation}
\end{definition}
Two prevalent randomization mechanisms, Laplace and exponential, are used to satisfy the definition of differential privacy, but there are others, such as the Gaussian mechanism. Each is explained next.
\subsection{Randomization: Laplace mechanism}
The Laplace mechanism is applied to numeric outputs~\cite{DworkCalibrated}.
The mechanism adds independent noise to the original answer, as shown in Definition~\ref{Def-LA}.
\begin{definition}[Laplace mechanism]\label{Def-LA}
For a function $f: D \rightarrow \mathcal{R}$ over a dataset $D$,
the mechanism $\mathcal{M}$ in Eq.~\ref{eq-lap} provides $\epsilon$-differential privacy.
\begin{equation}
\mathcal{M}(D)=f(D)+Lap(\frac{\Delta f}{\epsilon})
\end{equation}\label{eq-lap}
\end{definition}
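The mechanism of Definition~\ref{Def-LA} can be sketched in a few lines of Python. Here the Laplace noise is drawn as the difference of two i.i.d. exponential variates (a standard sampling identity); the function names are our own:

```python
import random

def sample_laplace(scale: float) -> float:
    """Laplace(0, scale) sampled as the difference of two i.i.d.
    exponential variates with mean `scale`."""
    return scale * (random.expovariate(1.0) - random.expovariate(1.0))

def laplace_mechanism(true_answer: float, sensitivity: float, eps: float) -> float:
    """M(D) = f(D) + Lap(sensitivity / eps), as in the definition above."""
    return true_answer + sample_laplace(sensitivity / eps)
```

For a counting query ($\Delta f = 1$), `laplace_mechanism(count, 1.0, eps)` releases a noisy count whose expected absolute error is $\Delta f/\epsilon$, making the privacy--utility trade-off explicit.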
\subsection{Gaussian mechanism}
Compared to the Laplace mechanism,
the Gaussian mechanism adds noise sampled from a zero-mean isotropic Gaussian distribution.
The noise $Z\sim\mathcal{N}(0,\sigma^2)$ is calibrated to the $L_{2}$ sensitivity
$\Delta f=\max_{D,D'} ||f(D)-f(D')||_{2}$ as follows:
\begin{definition}[Gaussian mechanism]\label{Def-Gau}
For a function $f: D \rightarrow \mathcal{R}$ over a dataset $D$,
the mechanism $\mathcal{M}$ in Eq.~\ref{eq-gau} provides $(\epsilon, \delta)$-differential privacy.
\begin{equation}\label{eq-gau}
\mathcal{M}(D)=f(D)+Z, \quad Z\sim\mathcal{N}(0,\sigma^2),
\end{equation}
where $\sigma=\Delta f \sqrt{2\ln(1.25/\delta)}/\epsilon$.
\end{definition}
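A corresponding sketch of the Gaussian mechanism follows; note that this classical calibration of $\sigma$ is only valid for $\epsilon < 1$, and the function name is ours:

```python
import math
import random

def gaussian_mechanism(true_answer: float, l2_sensitivity: float,
                       eps: float, delta: float) -> float:
    """M(D) = f(D) + N(0, sigma^2) with
    sigma = l2_sensitivity * sqrt(2 * ln(1.25 / delta)) / eps,
    the classical (eps, delta)-DP calibration (valid for eps < 1)."""
    sigma = l2_sensitivity * math.sqrt(2.0 * math.log(1.25 / delta)) / eps
    return true_answer + random.gauss(0.0, sigma)
```

Because $\sigma$ grows only with $\sqrt{\ln(1/\delta)}$, relaxing $\delta$ slightly buys substantially less noise than tightening $\epsilon$.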
\subsection{Exponential mechanism}
Exponential mechanisms are used to randomize the results for non-numeric queries. They are paired with a score function $q(D,\phi)$
that evaluates the quality of an output $\phi$.
Defining a score function is application-dependent, so
different applications lead to various score functions~\cite{McSherry07STOC}.
\begin{definition}[Exponential mechanism]\label{Def-EX}
Let $q(D,\phi)$ be a score function of dataset $D$
that measures the quality of output $\phi\in\Phi$,
$\Delta q$ represents the sensitivity of $\phi$.
The exponential mechanism $\mathcal{M}$ satisfies $\epsilon$-differential privacy if
\begin{equation}
\mathcal{M}(D) = \left( \text{return $\phi$ }\propto
\exp (\frac{\epsilon q(D,\phi)}{2\Delta q})\right).
\end{equation}
\end{definition}
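For a finite output range, Definition~\ref{Def-EX} reduces to weighted sampling, which the following sketch implements (an illustration with names of our own choosing, not a reference implementation):

```python
import math
import random

def exponential_mechanism(candidates, score, eps, sensitivity):
    """Return a candidate phi with probability proportional to
    exp(eps * q(D, phi) / (2 * Delta q)), as in the definition above.
    (For large scores, subtract max(score) before exponentiating
    to avoid overflow.)"""
    weights = [math.exp(eps * score(c) / (2.0 * sensitivity)) for c in candidates]
    total = sum(weights)
    r = random.random() * total
    acc = 0.0
    for c, w in zip(candidates, weights):
        acc += w
        if r <= acc:
            return c
    return candidates[-1]  # guard against floating-point round-off
```

With candidates $\{a, b\}$, scores $10$ and $0$, and $\epsilon = 1$, the mechanism returns $a$ with probability $e^{5}/(e^{5}+1) \approx 0.993$: high-quality outputs dominate, yet every output retains nonzero probability.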
\subsection{Composition}
Two privacy budget composition theorems are widely used
in the design of differential privacy mechanisms:
sequential composition~\cite{McSherry07STOC} and parallel composition~\cite{McSherry201089}.
\begin{thm}\label{def-comp1}
\emph{Parallel Composition}:
Suppose we have a set of privacy steps
$\mathcal{M}=\{\mathcal{M}_1,\ldots,\mathcal{M}_m\}$,
if each $\mathcal{M}_i$ provides an $\epsilon_{i}$ privacy guarantee on
a disjoint subset of the entire dataset,
the parallel of $\mathcal{M}$ will provide
$\max\{\epsilon_{1},...,\epsilon_{m}\}$-\emph{differential privacy}.
\end{thm}
Parallel composition corresponds to cases where
each $\mathcal{M}_i$ is applied to a disjoint subset of the dataset.
The ultimate privacy guarantee only depends on the largest privacy budget.
\begin{thm}\label{def-comp2}
\emph{Sequential Composition}:
Suppose a set of privacy steps
$\mathcal{M}=\{\mathcal{M}_1,\ldots,\mathcal{M}_m\}$
are sequentially performed on a dataset,
and each $\mathcal{M}_i$ provides an $\epsilon$ privacy guarantee,
$\mathcal{M}$ will provide $(m\cdot\epsilon)$-\emph{differential privacy}.
\end{thm}
Sequential composition
offers a privacy guarantee for a sequence of differentially private computations.
When a series of randomized mechanisms are performed sequentially on a dataset,
the privacy budgets are added up for each step.
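The two composition theorems amount to a simple budget-accounting rule, which can be stated as a two-line sketch (a deliberate simplification that ignores advanced composition bounds):

```python
def sequential_budget(epsilons):
    """Sequential composition on the same dataset: budgets add up."""
    return sum(epsilons)

def parallel_budget(epsilons):
    """Parallel composition on disjoint subsets: only the largest
    budget is paid."""
    return max(epsilons)
```

For example, three queries with budgets $0.1$, $0.2$ and $0.3$ cost $0.6$ when run sequentially on one dataset, but only $0.3$ when run on disjoint partitions. This accounting is what later sections use to bound the number of analysis steps or communication rounds.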
\section{Differential Privacy in Machine Learning}\label{sec:DP-ML}
\subsection{Private machine learning}
Private machine learning aims to protect the individual’s privacy in training data or learning models.
Differential privacy has been considered to be one of the most important tools
in private machine learning and has been heavily investigated in the past decade.
The essential mechanisms in differential privacy all work to extend
current non-private machine learning algorithms into differentially private algorithms.
These extensions can be realized by incorporating Laplace or exponential mechanisms
into non-private learning algorithms directly~\cite{Zhu17survey},
or by adding Laplace noise into the objective functions~\cite{Chaudhuri20111069}.
Starting with Kasiviswanathan et al.'s work~\cite{PrivateLearning08},
a line of research has presented the details of the private learning process, ranging from empirical risk minimization \cite{Chaudhuri11,Wang17} to prediction \cite{Dwork18,Dagan19},
Bayesian inference \cite{Foulds16,Bernstein18,Bernstein19} and the multi-armed bandit~\cite{Tossou16,Tossou17}.
Private machine learning is one of the most powerful models accepted in this field. To avoid redundancy, this paper will not dive into details of private machine learning.
A number of survey papers have discussed this field~\cite{Zhu17survey,Moha17,Rubaie19,Jay19} thoroughly.
\subsection{Differential privacy in learning stability}
\subsubsection{The overview of stability of learning}
A stable learning algorithm is one in which the prediction does not change much when the training data is modified slightly.
Bousquet et al.~\cite{bousquet2002stability} have proved that stability is linked to the
generalization error bound of the learning algorithm,
indicating that a highly stable algorithm leads to a less over-fit result.
However, increasing the stability of the algorithm is challenging when the size of the testing data is limited.
This is because the validation data are sometimes reused, which leads to an incorrect learning model.
To preserve statistical learning validity, analysts
should collect new data for a fresh testing set.
Differential privacy can be naturally linked to learning stability.
The concept of differential privacy ensures that the probability
of observing any outcome from an analysis is
essentially unchanged by modifying any single record.
Dwork et al.~\cite{Dwork2015STOC, Dwork14Science} showed that differential
privacy mechanisms can be used to develop adaptive data analysis
algorithms with provable bounds for over-fitting, noting that
certain stability notions
are necessary and sufficient for generalization.
Therefore,
differential privacy is stronger
than previous notions of stability
and, in particular, possesses strong adaptive composition
guarantees~\cite{Hardt2014FOCS}.
\subsubsection{Differential privacy in learning stability}
Dwork et al.~\cite{Dwork14Science} show that, by adding noise to generate fresh testing data, differential privacy mechanisms can achieve highly stable learning.
For a dataset $D$, an analyst learns about the data by running a series of analyses $f_{i}$ on the dataset.
The choice of which analysis to run depends on
the results from the earlier analyses.
Specifically,
the analyst first selects a statistic $f_{0}$ to query on $D$ and observes a query result $y_{1}=f_{0}(D)$.
For the $k^{th}$ analysis, the analyst selects a function $f_{k}$ based on the query results $y_1,\ldots,y_{k-1}$.
To improve the generalization capability of the adaptive scenario,
noise is added in each analysis iteration.
For example, $y_{k}=Lap(\frac{\Delta f}{\epsilon})+f_{k-1}(D)$~\cite{Dwork2015STOC}.
This type of adaptive analysis can be linked to machine learning.
The dataset $D$ can be randomly partitioned into
a training set $D_{t}$ and a testing (holdout) set $D_{h}$.
The analyst can access
the training set $D_{t}$ without restrictions but
may only access $D_{h}$ through a differentially private interface.
The interface takes the testing and training
sets as inputs and, for all functions given by the
analyst, provides statistically valid estimates of
each function’s results.
For a sufficiently large testing set, the differential privacy interface
guarantees that for function $f:D\rightarrow[0,1]$,
the mechanism will return a randomized value $v_{f}$.
When $v_{f}$ is compared to the query result $y$,
we have $|v_{f}-y|\leq\tau$ with a probability of at least
$1-\beta$, where $\tau$ is the analyst’s choice of error and $\beta$ is the confidence parameter.
The probability space is over the data elements in $D_{h}$ and $D_{t}$ and the randomness
introduced by the interface.
A multiplicative weight updating mechanism~\cite{NIPS2012} could also be included in the
interface to conserve the privacy budget.
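The holdout interface described above is, in essence, Dwork et al.'s Thresholdout algorithm. The following is a simplified, non-authoritative sketch: it answers from the training set unless the training and holdout estimates disagree by more than a noisy threshold, and only then touches (and noises) the holdout value. We use Gaussian noise for brevity, whereas the original construction uses Laplace noise, and the parameter choices are purely illustrative:

```python
import random

def thresholdout(f_train, f_holdout, threshold, sigma):
    """Simplified Thresholdout-style holdout interface (sketch only).
    Return the training-set estimate unless it deviates from the
    holdout estimate by more than a noisy threshold; in that case,
    return a noised holdout value. Gaussian noise replaces the
    Laplace noise of the original construction."""
    eta = random.gauss(0.0, 4.0 * sigma)  # noise on the comparison
    if abs(f_train - f_holdout) > threshold + eta:
        return f_holdout + random.gauss(0.0, sigma)
    return f_train
```

When the training and holdout estimates agree, the analyst sees only the training value and no budget is spent on the holdout; only genuinely over-fit queries consume the holdout's privacy budget, which is why the holdout set can be reused across many adaptive queries.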
\subsubsection{Summary of stability of learning}
The idea of adding randomization during data analysis to increase stability has been widely accepted.
MacCoun et al.~\cite{MacCoun15Nature}
believed that, when deciding which results to report, the analyst should interact
with a dataset that has been obfuscated
by adding noise to observations, removing some data points, or switching data labels.
The raw, uncorrupted, dataset is only used in computing the final reported values.
Differential privacy mechanisms can follow the above rules to significantly improve learning stability.
\subsection{Differential privacy in fairness}
\subsubsection{An overview of the fairness in learning}
Fairness issues are prevalent in every facet of our lives
including education, job application, the parole of prisoners and so on \cite{Binns18,Holstein19,Zhang20}.
Instead of resolving fairness issues, modern AI techniques, however, can amplify social inequities and unfairness.
For example, an automated hiring system may be more likely to recommend candidates from specific racial, gender or age groups \cite{Boettcher17,Giang18}.
A search engine may amplify negative stereotypes by showing arrest-record ads
in response to queries for names predominantly given to African-American babies but not for other names \cite{BBC13,Nobel18}.
Moreover, some software systems that are used to measure the risk of a person recommitting crime demonstrate a bias against African-Americans over Caucasians with the same profile~\cite{Angwin16,Mehrabi19}.
To address these fairness issues in machine learning, great effort has been placed on developing definitions of fairness \cite{Zemel13,Hardt16,Dwork19}
and algorithmic methods for assessing and mitigating undesirable bias in relation to these definitions~\cite{Kusner17,Agarwal18b}.
A typical idea is to make algorithms insensitive to one or multiple attributes of datasets, such as gender and race.
\subsubsection{Applying differential privacy to improve fairness}
Dwork et al.~\cite{Dwork12} classified individuals with the goal of preventing discrimination against a certain group while maintaining utility for the classifier. The key idea is to treat similar individuals similarly. To implement this idea, these researchers adopted the Lipschitz property, which requires that any two individuals,
$x$ and $y$, with a distance of $d(x,y)\in[0,1]$
must map to the distributions $M(x)$ and $M(y)$, respectively, such that the statistical distance
between $M(x)$ and $M(y)$ is at most $d(x,y)$~\cite{dixit2013testing}.
In other words, if the difference between $x$ and $y$ is $d(x,y)$,
the difference of the classification outcomes of $x$ and $y$ is at most $d(x,y)$.
A connection between differential privacy and the Lipschitz property has been theoretically established, in that a mapping satisfies differential privacy if, and only if, this mapping satisfies the Lipschitz property~\cite{Dwork12}.
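The Lipschitz requirement can be made concrete with a small numeric check: a randomized classifier built from an exponential mechanism maps similar score vectors to statistically close output distributions. The sketch below is illustrative only and is not the construction of Dwork et al.; the score values and function names are our assumptions:

```python
import math

def exp_mech_distribution(scores, eps, sensitivity=1.0):
    """Output distribution of an exponential-mechanism classifier:
    P(outcome i) proportional to exp(eps * scores[i] / (2 * sensitivity))."""
    w = [math.exp(eps * s / (2.0 * sensitivity)) for s in scores]
    z = sum(w)
    return [x / z for x in w]

def tv_distance(p, q):
    """Statistical (total-variation) distance between two distributions."""
    return 0.5 * sum(abs(a - b) for a, b in zip(p, q))
```

Evaluating two individuals whose score vectors differ slightly yields output distributions whose statistical distance is correspondingly small, which is the "similar individuals are treated similarly" property that the Lipschitz condition formalizes.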
Zemel et al.~\cite{zemel13pmrl} extended Dwork et al.'s~\cite{Dwork12} preliminary work by defining the metrics between individuals. They learned a restricted form of a distance function and formulated fairness as an optimization problem of finding the intermediate representation that best encodes the data. During the process, they preserved as much information about the individual's attributes as possible, while removing any information about membership in a protected subgroup. The goal was two-fold: first, the intermediate representation should preserve the data's original features as much as possible. Second, the encoded representation is randomized to hide whether or not the individual is from the protected group.
Both ideas take advantage of randomization in differential privacy. Using an exponential mechanism with a carefully designed score function, the framework can sample fresh data from the universe to represent the original data with the same statistical properties. However, the most challenging part of this framework is designing the score function, because differential privacy in fairness assumes that the similarity between individuals is given, whereas estimating similarity between individuals over an entire feature universe is a difficult problem. In other words, evaluating similarity between individuals is the key obstacle to model fairness, which makes score function design a stubborn problem. Therefore, differential privacy in model fairness needs further exploration.
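To make the role of the score function concrete, the following sketch implements the standard exponential mechanism over a toy candidate universe. The universe, score function, and parameters are illustrative assumptions; the (hard-to-design) similarity score discussed above would replace the toy distance-based score.

```python
import math
import random

def exponential_mechanism(universe, score, eps, sensitivity=1.0, rng=random):
    """Sample one candidate from `universe` with probability proportional to
    exp(eps * score(c) / (2 * sensitivity)) -- the standard exponential
    mechanism."""
    weights = [math.exp(eps * score(c) / (2.0 * sensitivity)) for c in universe]
    total = sum(weights)
    r = rng.uniform(0.0, total)
    acc = 0.0
    for c, w in zip(universe, weights):
        acc += w
        if r <= acc:
            return c
    return universe[-1]

# Toy task: replace a record with a "fresh" value close to the original.
original = 5.0
universe = [3.0, 4.0, 5.0, 6.0, 7.0]
score = lambda c: -abs(c - original)        # higher score = more similar
sample = exponential_mechanism(universe, score, eps=2.0)
```

Candidates closer to the original are exponentially more likely to be sampled, so the released value preserves the statistics of the data while the randomness provides the privacy guarantee.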
Recently, researchers have attempted to adopt differential privacy to simultaneously achieve both fairness and privacy preservation~\cite{Xu19WWW,Ding20}. This research is motivated by settings where models are required to be non-discriminatory in terms of certain attributes, but these attributes may be sensitive and so must be protected while training the model~\cite{Jagielski19}. Addressing fairness and privacy preservation simultaneously is challenging because they have different aims~\cite{Cum19,Ding20}. Fairness focuses on the group level and seeks to guarantee that the model’s predictions for a protected group (such as female) are the same as the predictions made for an unprotected group. In comparison, privacy preservation focuses on the individual level. Privacy preservation guarantees that the output of a classification model is independent of whether any individual record is in or out of the training dataset. A typical solution to achieve fairness and privacy preservation simultaneously was proposed by Ding et al.~\cite{Ding20}.
Their solution is to add a different amount of differentially private noise based on different polynomial coefficients of the constrained objective function, where the coefficients relate to attributes in the training dataset. Therefore, privacy is preserved by adding noise to the objective function, and fairness is achieved by adjusting the amount of noise added to the different coefficients.
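A minimal sketch of this per-coefficient noise idea follows. The toy coefficients and noise scales are hypothetical; in Ding et al.'s actual construction the scales are derived from the fairness constraint rather than chosen by hand.

```python
import random

def perturb_coefficients(coeffs, scales, rng=random):
    """Add Laplace noise of a *different* scale to each polynomial
    coefficient of the (approximated) objective function.  Giving the
    coefficients tied to protected attributes a different scale is the
    lever used to trade privacy noise against fairness."""
    # The difference of two i.i.d. exponentials with rate 1/b is Laplace(0, b).
    laplace = lambda b: rng.expovariate(1.0 / b) - rng.expovariate(1.0 / b)
    return [c + laplace(b) for c, b in zip(coeffs, scales)]

coeffs = [0.8, -1.2, 0.3]       # toy quadratic-objective coefficients
scales = [0.1, 0.1, 1.0]        # more noise on the protected-attribute term
noisy_coeffs = perturb_coefficients(coeffs, scales)
```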
\subsubsection{Summary of differential privacy in fairness}
How best to measure similarity and how to compose fair components remain open problems in fairness models. Further, differential privacy in fairness models has so far been directed toward classification problems. There are also some works on fairness in online settings such as online learning, bandit learning and reinforcement learning; however, how differential privacy mechanisms can benefit fairness in these online settings needs further investigation.
Fairness under composition is also a major challenge. Here, composition means that if each component in the algorithm satisfies the notion of fairness, the entire algorithm will satisfy the same~\cite{Alexandra18}. This composition property is essential for machine learning, especially for online learning. Dwork et al.~\cite{Dwork18Composition} explored this direction, finding that current methods seldom achieve this goal because classification decisions cannot be made independently, even by a fair classifier. Moreover, classifiers that satisfy group fairness properties may not compose well with other fair classifiers. Their results show that the signal provided by group fairness definitions under composition is not always reliable. Hence, further study is needed to determine how to take advantage of differential privacy to ensure fairness under composition.
\subsection{Summary of differential privacy in machine learning}
\begin{table*}[htp]\scriptsize
\newcommand{\tabincell}[2]{\begin{tabular}{@{}#1@{}}#2\end{tabular}}
\centering
\caption{Summary of differential privacy in machine learning}
\scalebox{0.9}{
\begin{tabular}{|c|c|c|c|c|c|} \hline
\textbf{Papers}&\textbf{Research areas}&\textbf{Techniques used}&\textbf{Research aims}&\textbf{Advantages}&\textbf{Disadvantages}\\ \hline
Dwork et al.~\cite{Dwork14Science,Dwork2015STOC} & \tabincell{c}{Stable learning} & \tabincell{c}{Laplace \\mechanism} & Improve stability & \tabincell{c}{Improve stability \\with little overhead} & \tabincell{c}{Limited access \\to testing dataset}\\ \hline
Dwork et al. \cite{Dwork12} & \tabincell{c}{Fairness in learning} & \tabincell{c}{Concept of \\differential privacy} & Improve fairness & \tabincell{c}{Not only enforce\\ fairness but also\\ detect unfairness} & \tabincell{c}{An available similarity \\metric is assumed \\a prerequisite}\\ \hline
Zemel et al. \cite{zemel13pmrl} & \tabincell{c}{Fairness in learning} & \tabincell{c}{Concept of \\differential privacy} & Improve fairness & \tabincell{c}{Simultaneously encode \\and obfuscate data} & \tabincell{c}{Representation dependent} \\ \hline
Xu et al. \cite{Xu19WWW} & \tabincell{c}{Fairness in learning} & \tabincell{c}{Laplace \\mechanism} & \tabincell{c}{Improve fairness and \\preserve privacy} & \tabincell{c}{Achieve both fairness \\and privacy} & \tabincell{c}{For logistic \\regression only}\\ \hline
Jagielski et al. \cite{Jagielski19} & \tabincell{c}{Fairness in learning} & \tabincell{c}{Laplace \\mechanism} & \tabincell{c}{Improve fairness and \\preserve privacy} & \tabincell{c}{Achieve both \\fairness and privacy} & \tabincell{c}{Need large dataset}\\ \hline
Ding et al. \cite{Ding20} & \tabincell{c}{Fairness in learning} & \tabincell{c}{Functional \\mechanism} & \tabincell{c}{Improve fairness and \\preserve privacy} & \tabincell{c}{Achieve both fairness \\and privacy} & \tabincell{c}{For logistic \\regression only}\\ \hline
\end{tabular}}
\label{tab:summaryML}
\end{table*}
Table \ref{tab:summaryML} summarizes the papers that apply differential privacy to learning stability and fairness. From this summary, we can see that differential privacy can not only preserve privacy but also improve stability and fairness in machine learning.
Stability is achieved by allowing an analyst to access the testing set only in a differentially private manner.
Likewise, fairness is achieved by re-sampling fresh data from the data universe in a differentially private manner.
These two examples show that sampling from the data universe can improve machine learning performance to some extent.
Even though differential privacy has been proven to
guarantee privacy, stability and fairness in machine learning,
there are still some open research issues.
First, to preserve privacy, the utility of learning models is sacrificed to some extent.
Thus, how to obtain an optimal trade-off between the privacy and the utility still needs further exploration.
Second, current differentially private stable learning is suitable only for learning models whose loss functions have no regularization.
Differential privacy can provide additional generalization capability to learning models that have limited regularization.
Hence, improving the generalization capability of regularized loss functions would be helpful.
Third, the re-sampling in current fair learning is typically based on the exponential mechanism, which requires knowledge of the utility of each sample.
This knowledge, however, may be unavailable or hard to define in some situations.
Thus, new mechanisms are needed for today's fair learning.
Research on differential privacy in machine learning can be broadened to address other non-privacy issues. For example, differential privacy mechanisms may be able to generate new data samples based on existing ones by properly adding noise to the values of attributes in existing samples. These newly-generated samples may not be suitable for training data, but they can be used as testing data. Another example is that differential privacy mechanisms can be used for sampling. Sampling is an important step in deep reinforcement learning and batch learning. The small database mechanism may be a good tool for sampling in machine learning, as it can guarantee the desired accuracy while sampling only a small set of samples.
\section{Differential privacy in deep learning}
Deep learning originated from regular neural networks,
but thanks to the availability of large volumes of data and advancements in computer hardware,
implementing many-layered neural network models has become feasible,
and these models significantly outperform their predecessors.
The latest deep learning algorithms have been successfully applied to many applications
such as natural language processing, image processing, and speech and audio processing \cite{Dargan19}.
Differential privacy has been broadly used in deep learning to preserve data and model privacy.
Thus, in this section, we mainly focus on analyzing differential privacy in general deep neural networks, distributed deep learning \cite{Pouy18} and federated learning \cite{Yang19}.
\subsection{Deep neural networks: attacks and defences}
\subsubsection{Privacy attacks in the deep neural networks}
One of the most common privacy attacks is an inference attack
where the adversary maliciously infers sensitive features and background information
about a target individual from a trained model \cite{Gong20}.
Typically, there are two types of inference attacks in deep learning.
The first type is a membership inference attack.
The aim here is to infer whether or not a given record is present in the training dataset \cite{Nasr19}.
Membership inference attacks can be either black-box or white-box.
Black-box means that an attacker can query a target model
but does not know any other details about that model, such as its structure \cite{Shokri17}.
In comparison, white-box means that an attacker has full access to a target model along with some auxiliary information \cite{Yeom18}.
The attack model is based on the observation that
machine learning models often behave differently on training data versus
the data they ``see'' for the first time.
The second type of attack is an attribute inference attack.
The aim of an attribute inference attack is to learn hidden sensitive attributes of a test input
given access to the model and information about the non-sensitive attributes \cite{Fredrikson14}.
Fredrikson et al.~\cite{Fredrikson14} describe a typical attribute inference attack method,
which attempts to maximize the posterior probability estimate of the sensitive attributes
based on the values of the non-sensitive attributes.
\subsubsection{Differential privacy in deep neural networks}
Some of the properties of differential privacy are naturally resistant to membership and attribute inference attacks.
An intuitive way to resist inference attacks is to properly add differentially private noise to the values of the sensitive attributes before using the dataset to train a model.
In a typical deep learning algorithm, there are several places to add noise,
as shown in Figure~\ref{fig-dpattackdefend}.
The first place is the training dataset, where the noise is derived from an input perturbation mechanism.
This operation occurs before the training starts
and is usually done to resist attribute inference attacks.
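Such input perturbation can be sketched as follows, using a hypothetical three-column schema in which only the last column (income) is sensitive; noise is added only to that column before training begins.

```python
import random

def perturb_sensitive_columns(rows, sensitive, scale, rng=random):
    """Input perturbation: add Laplace noise only to the sensitive
    attributes of each record before training starts.  `sensitive` holds
    the indices of the columns to protect (toy schema, for illustration)."""
    # The difference of two i.i.d. exponentials with rate 1/b is Laplace(0, b).
    laplace = lambda b: rng.expovariate(1.0 / b) - rng.expovariate(1.0 / b)
    return [[v + laplace(scale) if i in sensitive else v
             for i, v in enumerate(row)]
            for row in rows]

data = [[35.0, 1.0, 52000.0],      # [age, gender, income]; income is sensitive
        [29.0, 0.0, 48000.0]]
private_data = perturb_sensitive_columns(data, sensitive={2}, scale=1000.0)
```

The non-sensitive columns pass through unchanged, which is why this defence targets attribute inference specifically.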
\begin{figure}[ht]
\centering
\includegraphics[scale=0.65]{DPattack-defend.pdf}
\caption{Differential privacy in deep learning}
\label{fig-dpattackdefend}
\end{figure}
The second place is the loss function, which yields an objective perturbation mechanism.
This operation occurs during training
and is usually done to resist membership inference attacks.
The third place is the gradients at each iteration, i.e., a gradient perturbation mechanism.
Gradients are computed using the loss function to do partial-derivative against the weights of the deep neural network.
Likewise, this operation occurs during training
and is usually done to resist membership inference attacks.
The fourth place is the weights of the deep neural network, which constitute the learned model; this is called an output perturbation mechanism.
This operation happens once training is complete.
It is easy to implement
and can resist both membership and attribute inference attacks.
However, directly adding noise to the model may significantly harm its utility,
even if the parameter values in differential privacy have been carefully adjusted.
A fifth place, used by teacher-student approaches such as PATE (reviewed below), is the output classes: noise is added to the predicted labels before they are released, which also resists both types of attacks.
Of these places, adding noise to the gradients is the most common method.
However, because the gradient norm may be unbounded in deep learning,
a bound must be imposed before applying gradient perturbation.
A typical way is to manually clip the gradient at each iteration \cite{Abadi16}.
Such a clipping can also provide a sensitivity bound with differential privacy.
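The clip-then-perturb step can be sketched as follows. This is a simplified, plain-Python version of the gradient perturbation in \cite{Abadi16}; the parameter names and toy gradients are illustrative.

```python
import math
import random

def dp_gradient_step(per_example_grads, clip_norm, noise_multiplier, rng=random):
    """One differentially private gradient step: clip each per-example
    gradient to L2 norm `clip_norm` (bounding the sensitivity), sum, add
    Gaussian noise with std `noise_multiplier * clip_norm`, then average."""
    n = len(per_example_grads)
    dim = len(per_example_grads[0])
    summed = [0.0] * dim
    for g in per_example_grads:
        norm = math.sqrt(sum(v * v for v in g))
        factor = min(1.0, clip_norm / norm) if norm > 0 else 1.0
        for i in range(dim):
            summed[i] += g[i] * factor
    sigma = noise_multiplier * clip_norm
    return [(summed[i] + rng.gauss(0.0, sigma)) / n for i in range(dim)]

grads = [[3.0, 4.0], [0.3, -0.4]]          # two toy per-example gradients
noisy_grad = dp_gradient_step(grads, clip_norm=1.0, noise_multiplier=1.1)
```

Clipping guarantees that no single example can shift the summed gradient by more than `clip_norm`, which is exactly the sensitivity bound the Gaussian noise is calibrated to.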
Table~\ref{tab-dpnoise} summarizes the properties of the mentioned attacks and their defence strategies.
\begin{table*} \centering
\caption{Attacks and defense in deep learning}\label{tab-dpnoise}
\begin{tabular}{c|c|c|c|c}
\toprule
Noise & Membership inference attack & Attribute inference attack & Privacy guarantee & Performance impact \\
\midrule
Dataset \cite{Heikkila17} & & $\checkmark$ & very strong & high \\
Loss function \cite{Zhao19} & $\checkmark$ & & strong & low \\
Gradient \cite{Shokri15,Abadi16,Cheng18} & $\checkmark$ & & strong & low \\
Weights \cite{Jay18,Phan19} & $\checkmark$ & $\checkmark$ & very strong & very high \\
Classes~\cite{Papernot17, Papernot18,Zhao18} & $\checkmark$ & $\checkmark$ & very strong & low \\
\bottomrule
\end{tabular}
\end{table*}
From Table \ref{tab-dpnoise}, we can see that adding noise to a dataset can defend against attribute inference attacks.
Since the aim of attribute inference attacks is to infer the values of sensitive attributes,
directly adding noise to these values is the most straightforward and efficient method of protecting them.
However, this method may significantly affect the utility of the learned model,
because the model depends heavily on the attribute values in the training dataset,
and using a dataset with modified attribute values to learn a model is akin to using a ``wrong'' dataset.
By comparison, adding noise to the loss function or gradient only slightly affects the utility of the learned model.
This is because the noise is added during the training process
and the model can be corrected by taking the noise into account.
Adding noise to the loss function or gradient can resist membership inference attacks,
which can be guaranteed by the properties of differential privacy.
However, adding noise to the loss function or gradient does not offer much resistance to attribute inference attacks.
As mentioned before, a typical attribute inference attack needs two pieces of information:
1) the underlying distribution of the training dataset; and 2) the values of non-sensitive attributes.
These two pieces of information are not modified when adding noise to the loss function or gradient.
Finally, adding noise to the weights or classes of a neural network can resist both membership and attribute inference attacks.
This is because adding noise to the weights will modify the learned model and both of these types of attacks need to access the learned model to launch an attack.
The downside is that adding noise to a learned model after training may drastically affect its utility,
and retraining the model will not correct the problem as noise simply needs to be added again.
Alternatively, noise could be added to the weights in each iteration of training.
However, this method might affect convergence,
since the output of the algorithm is computed from the weights:
if noise is added to each weight at every iteration,
the total amount of noise might grow large enough that the loss never converges.
Adding noise to the classes has a similar disadvantage for the same reason.
\subsection{Differential privacy in distributed deep learning}
\subsubsection{Overview of distributed deep learning}
Conventional deep learning is limited to a single-machine system,
where the system has all the data and carries out the learning independently.
Distributed deep learning techniques, however, accelerate the learning process.
Two main approaches are applied in distributed deep learning:
data parallelism and model parallelism \cite{Pouy18}.
In data parallelism, a central model is replicated by a server and distributed to all the clients.
Each client then trains the model based on her own data.
After a certain period of time, each client summarizes an update on top of the model
and shares the update to the server.
The server then synchronizes the updates from all the clients and improves the central model.
In model parallelism, all the data are processed with one model.
The training of the model is split between multiple computing nodes,
with each node computing only a subset of the model.
As data parallelism can intrinsically protect the data privacy of clients,
most research on distributed deep learning focuses on data parallelism.
\subsubsection{Differential privacy in distributed deep learning}
As mentioned in the previous subsection, differentially private noise can be added to five places in a deep neural network.
The following review is organized by where the noise is added.
\emph{Adding noise to input datasets.}
Heikkila et al. \cite{Heikkila17} proposed a general approach for privacy-preserving learning in distributed settings.
Their approach combines secure multi-party communication with differentially private Bayesian learning methods
so as to achieve distributed differentially private Bayesian learning.
In their approach, each client $i$ adds Gaussian noise to her data
and splits the noisy result into shares.
Each share is then sent to a server.
In this way, the sum of the shares discloses the real (noisy) value,
but each share on its own is just random noise.
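The share-splitting step can be sketched as below. This is a simplified additive secret-sharing illustration of the idea, with hypothetical parameter values; Heikkila et al.'s full protocol also covers the secure multi-party communication around it.

```python
import random

def share_noisy_value(value, noise_std, n_shares, rng=random):
    """A client adds Gaussian noise to its value and splits the result into
    additive shares.  Each share alone looks like random noise; only the sum
    of all shares reveals the noisy value."""
    noisy = value + rng.gauss(0.0, noise_std)
    shares = [rng.uniform(-1e6, 1e6) for _ in range(n_shares - 1)]
    shares.append(noisy - sum(shares))      # last share makes the sum correct
    return shares

shares = share_noisy_value(42.0, noise_std=1.0, n_shares=3)
reconstructed = sum(shares)    # equals the noisy value, not 42.0 exactly
```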
\emph{Adding noise to loss functions.}
Zhao et al. \cite{Zhao19} proposed a privacy-preserving collaborative deep learning system
that allows users to collaboratively build a collective learning model
while only sharing the model parameters, not the data.
To preserve the private information embodied in the parameters,
a functional mechanism, which is an extended version of the Laplace mechanism,
was developed to perturb the objective function of the neural network.
\emph{Adding noise to gradients.}
Shokri and Shmatikov \cite{Shokri15} designed a system
that allows participants to independently train on their own datasets
and share small subsets of their models' key parameters during training.
Thus, participants can jointly learn an accurate neural network model
without sharing their datasets,
and can also benefit from the models of others to improve their learning accuracy
while still maintaining privacy.
Abadi et al. \cite{Abadi16} developed a differentially private stochastic gradient descent algorithm for distributed deep learning.
At each iteration during the learning, Gaussian noise is added to the clipped gradient to preserve privacy in the model.
In addition, their algorithm also involves a privacy accountant and a moment accountant.
The privacy accountant computes the overall privacy cost during the training,
while the moment accountant keeps track of a bound on the moments of the privacy loss random variable.
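The bookkeeping role of such an accountant can be sketched with basic sequential composition, in which the per-step privacy costs simply add up. This simplified stand-in only illustrates the interface; the moments accountant of Abadi et al. gives far tighter bounds than this naive summation.

```python
class BasicPrivacyAccountant:
    """Track the overall privacy cost of a training run via basic sequential
    composition: epsilons and deltas of the individual steps add up."""

    def __init__(self):
        self.eps = 0.0
        self.delta = 0.0

    def spend(self, eps, delta=0.0):
        """Record one differentially private step."""
        self.eps += eps
        self.delta += delta

    def total(self):
        """Return the accumulated (eps, delta) guarantee."""
        return self.eps, self.delta

acct = BasicPrivacyAccountant()
for _ in range(100):               # e.g., 100 noisy gradient iterations
    acct.spend(0.01, 1e-7)
eps_total, delta_total = acct.total()
```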
Cheng et al. \cite{Cheng18} developed a privacy-preserving algorithm for distributed learning
based on a leader-follower framework,
where the leaders guide the followers in the right direction to improve their learning speed.
For efficiency, communication is limited to leader-follower pairs.
To preserve the privacy of leaders, Gaussian noise is added to the gradients of the leaders' learning models.
\emph{Adding noise to weights.}
Jayaraman et al. \cite{Jay18} applied differential privacy with both output perturbation and gradient perturbation in a distributed learning setting.
With the output perturbation, each data owner combines their local model with a secure computation
and adds Laplace noise to the aggregated model estimator before revealing the model.
With the gradient perturbation, the data owners collaboratively train a global model using an iterative learning algorithm,
where, at each iteration, each data owner aggregates their local gradient within a secure computation
and adds Gaussian noise to the aggregated gradient before revealing the gradient update.
Phan et al. \cite{Phan19} proposed a heterogeneous Gaussian mechanism to preserve privacy in deep neural networks.
Unlike a regular Gaussian mechanism, this heterogeneous Gaussian mechanism can arbitrarily redistribute noise
from the first hidden layer and the gradient of the model to achieve an ideal trade-off between model utility and privacy loss.
To obtain the property of arbitrary redistribution, a noise redistribution vector is introduced
to change the variance of the Gaussian distribution.
Further, it can be guaranteed that, by adapting the values of the elements in the noise redistribution vector,
more noise can be added to the more vulnerable components of the model to improve robustness and flexibility.
\emph{Adding noise to output classes.}
Papernot et al. \cite{Papernot17} developed a model called Private Aggregation of Teacher Ensembles (PATE)
which has been successfully applied to generative adversarial nets (GAN) for privacy guarantees \cite{Jordon19}.
PATE consists of: 1) an ensemble of $n$ teacher models;
2) an aggregation mechanism; and 3) a student model.
Each teacher model is trained independently on a subset of private data.
To protect the privacy of data labels, Laplace noise is added to the output classes, i.e., the teacher votes.
Last, the student model is trained through knowledge transfer from the teacher ensemble with the public data and privacy-preserving labels.
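The noisy vote aggregation at the heart of PATE can be sketched as follows; the class labels and parameters are toy values for illustration.

```python
import random
from collections import Counter

def noisy_vote(teacher_predictions, eps, rng=random):
    """PATE-style aggregation: count the teachers' votes for each class,
    add Laplace noise with scale 1/eps to every count, and release only
    the winning label."""
    # The difference of two i.i.d. exponentials with rate 1/b is Laplace(0, b).
    laplace = lambda b: rng.expovariate(1.0 / b) - rng.expovariate(1.0 / b)
    counts = Counter(teacher_predictions)
    noisy = {label: c + laplace(1.0 / eps) for label, c in counts.items()}
    return max(noisy, key=noisy.get)

votes = ["cat"] * 8 + ["dog"] * 2           # 10 teachers, strong consensus
label = noisy_vote(votes, eps=2.0)
```

Because only the noisy winner is released (never the raw counts), the student can be trained on these labels without directly exposing the teachers' private training data.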
Later, Papernot et al. \cite{Papernot18} improved the PATE model to make it applicable to large-scale tasks and real-world datasets.
Zhao \cite{Zhao18} also improved the PATE model by extending it to the distributed deep learning paradigm.
Each distributed entity uses deep learning to train a teacher network on private and labeled data.
The teachers then transfer the knowledge to the student network at the aggregator level in a differentially-private manner
by adding Gaussian noise to the predicted output classes of the teacher networks.
This transfer uses non-sensitive and unlabeled data for training.
\subsubsection{Summary of differential privacy in distributed deep learning}
Although a number of privacy-preserving methods have been proposed for distributed deep learning,
there are still some challenging issues that have not yet been properly addressed.
The first issue is synchronization.
If data parallelism has too many training modules,
it has to decrease the learning rate to ensure a smooth training procedure.
Similarly, if model parallelism has too many segmentations,
the output from the nodes will reduce training efficiency \cite{Pouy18}.
Differential privacy offers potential for solving this issue.
Technically, the challenge is a coordination problem,
where modules or nodes collaboratively perform a task,
but each has a privacy constraint.
This coordination problem can be modeled as a multi-player cooperative game,
and differential privacy has been proven effective for achieving equilibria in this game \cite{Kearns14}.
The second issue is collusion.
Most of the existing methods assume non-collusion between multiple computation parties.
This assumption, however, may fail in some situations.
For example, multiple service providers may collude to obtain a user's private information.
Joint differential privacy may be able to address this issue,
as it has been proven to successfully protect any individual user's privacy
even if all the other users collude against that user \cite{Zhang16}.
The third issue is privacy policies.
Most existing methods rely on privacy policies
that specify which data can be used by which users according to what rules.
However, there is no guarantee that all the users will strictly follow the privacy policies.
Differential privacy may help deal with this issue,
as it can guarantee that users truthfully report their types
and faithfully follow the recommendations given by the privacy policies.
\subsection{Differential privacy in federated learning}
\subsubsection{Overview of federated learning}
Federated learning enables individual users to collaboratively learn a shared prediction model
while keeping all the training data on the users' local devices.
Federated learning was first proposed by Google in 2017 as an additional approach to the standard centralised machine learning approaches~\cite{Google17},
and has been applied to several real-world applications~\cite{FLsurvey2019}.
Fig.~\ref{fig-federated} shows the structure of a simple federated learning framework.
First, the training centre distributes a general learning model,
trained on general data, to all the smart devices in the network. This model is used
for general purposes, such as image processing.
In a learning iteration, each user downloads a shared model from the training centre to their local device.
Then, they improve the downloaded model by learning from their own local data.
Once complete, the changes to each user's local model are summarized as a small update,
which is sent to the training centre through a secure communication channel.
Once the cloud server receives all the updates,
the shared model is improved using the aggregated updates.
During the above learning process, each user's data remains on their own device
and is never shared with either the training centre or another user.
However, although private user data cannot be directly obtained by others,
it may be indirectly leaked through the updates.
For example, the updates of the parameters in an optimization algorithm, such as stochastic gradient descent \cite{McMahan17},
may leak important data information when exposed together with data structures \cite{Phong18}.
Differential privacy can, however, resolve this problem, as explained next.
\begin{figure}
\centering
\includegraphics[scale=0.7]{FederatedFramework.eps}
\caption{Federated Learning Framework}
\label{fig-federated}
\end{figure}
\subsubsection{Applying differential privacy in federated learning}
Although no training data is transferred from mobile devices to the cloud centre in federated learning,
simply keeping data locally does not provide a strong enough privacy guarantee
when conventional learning methods are used. For example, adversaries can use differential attacks to discover what data was used during training through the parameters of the learning model~\cite{Geyer17}.
To protect against these types of attacks, several algorithms that incorporate differential privacy have been proposed
that ensure a learned model does not reveal whether the data from a mobile device was used during training~\cite{Federated17}.
Adversaries can also interfere with the messages exchanged between communicating parties,
or they can collude among communicating parties during training to attack the accuracy of the learning outcomes.
To ensure the resulting federated learning model maintains acceptable prediction accuracy, approaches using both differential privacy mechanisms and secure multiparty computation frameworks have been created, providing formal data privacy guarantees~\cite{Fedarated18}.
Geyer et al. \cite{Geyer17} incorporated differential privacy mechanisms into federated learning to ensure that
whether an individual client participates in the training cannot be identified.
This approach protects the entire data of an individual client.
To achieve this aim, in each communication round, a subset of the total clients is randomly selected.
Then, the difference between the central model and each of the selected client's local model is calculated,
and Gaussian noise is added to the difference.
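One round of this procedure can be sketched as follows. This is a simplified plain-Python illustration of the Geyer-style round just described (random client selection, clipped model differences, Gaussian noise); the clipping step and all parameter values are illustrative assumptions.

```python
import math
import random

def dp_federated_round(central, client_models, sample_size, clip, sigma,
                       rng=random):
    """One round: randomly select `sample_size` clients, clip each client's
    model difference (local - central), sum, add Gaussian noise, average,
    and apply the result to the central model."""
    selected = rng.sample(client_models, sample_size)
    dim = len(central)
    agg = [0.0] * dim
    for local in selected:
        diff = [l - c for l, c in zip(local, central)]
        norm = math.sqrt(sum(d * d for d in diff))
        factor = min(1.0, clip / norm) if norm > 0 else 1.0
        for i in range(dim):
            agg[i] += diff[i] * factor
    return [central[i] + (agg[i] + rng.gauss(0.0, sigma * clip)) / sample_size
            for i in range(dim)]

central = [0.0, 0.0]
clients = [[0.2, -0.1], [0.1, 0.3], [0.4, 0.0], [-0.2, 0.1]]
updated = dp_federated_round(central, clients, sample_size=2, clip=1.0,
                             sigma=0.5)
```

Random client selection both subsamples the population (amplifying privacy) and hides which clients contributed to any given round.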
Shi et al. \cite{Shi17} investigated a distributed private data analysis setting,
where multiple mutually distrustful users co-exist and each of them has private data.
There is also an untrusted data aggregator in the setting
who wishes to compute aggregate statistics over these users.
The authors adopted computational differential privacy to develop a protocol,
which can output meaningful statistics with a small total error
even when some of the users fail to respond.
Also, the protocol can guarantee the privacy of honest users,
even when a portion of the users are compromised and colluding.
Agarwal et al. \cite{Agarwal18} combined two aspects of distributed optimization in federated learning:
1) quantization to reduce the total communication cost; and
2) adding Gaussian noise to the gradient
before sending the result to the central aggregator to preserve the privacy of each client.
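The two aspects combine as in the sketch below: each gradient coordinate is first uniformly quantized to a small set of levels (reducing the bits sent), then perturbed with Gaussian noise before transmission. The quantization grid, range, and noise level are hypothetical toy values, not parameters from the cited paper.

```python
import random

def quantize_and_privatize(grad, levels, lo, hi, sigma, rng=random):
    """Uniformly quantize each gradient coordinate to one of `levels` values
    in [lo, hi] (to cut communication cost), then add Gaussian noise before
    sending the result to the central aggregator."""
    step = (hi - lo) / (levels - 1)
    out = []
    for v in grad:
        clipped = min(max(v, lo), hi)            # keep v inside the grid range
        q = lo + round((clipped - lo) / step) * step
        out.append(q + rng.gauss(0.0, sigma))
    return out

sent = quantize_and_privatize([0.23, -0.71, 0.98], levels=5, lo=-1.0, hi=1.0,
                              sigma=0.1)
```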
\subsubsection{Summary of differential privacy in federated learning}
The main advantage of the federated learning model is that
none of the training data needs to be transferred to the cloud centre,
which satisfies the basic privacy concerns of mobile device users. However,
federated learning has some unique challenges, mainly in the following three respects:
\begin{itemize}
\item Issues related to attacks on various vulnerabilities of the federated learning model and the countermeasures to defend against these attacks. For example, adversaries can use differential attacks to determine which mobile users have been included in the learning process~\cite{smith2017federated}; messages can be tampered with; and adversaries can use model poisoning attacks to cause the model to misclassify a set of chosen inputs with high confidence~\cite{bhagoji2018analyzing, pmlrv97}.
\item Issues related to the learning algorithms, such as the requirements of accuracy, scalability, efficiency, fault-tolerance, etc.~\cite{Google17, KonecnyMYRSB16}.
\item Issues related to the structure of the federated learning system, including its communication efficiency, the computational and power limitation of the mobile devices, the reliability of the mobile devices and communication system, etc.~\cite{smith2017federated}. This issue can potentially be tackled through the composition property of differential privacy by fixing the privacy budget and forcing all communications to consume that budget.
\end{itemize}
To effectively use federated learning in various applications,
we first need to overcome the challenges related to attacks, the system structures, and the learning algorithms.
Therefore, intensive research addressing these challenges will be required in the near future.
The second future development will be to explore the power and benefits of federated learning for both new and existing applications,
especially now that mobile devices are ubiquitous.
The third future development will be the automation of tools
that use federated learning and the emergence of companies providing such services to meet the various needs of business and individual customers.
\subsection{Summary of differential privacy in deep learning}
\begin{table*}[!ht]\scriptsize
\newcommand{\tabincell}[2]{\begin{tabular}{@{}#1@{}}#2\end{tabular}}
\centering
\caption{Summary of differential privacy in deep learning}
\scalebox{0.9}{
\begin{tabular}{|c|c|c|c|c|c|} \hline
\textbf{Papers}&\textbf{Research areas}&\textbf{Techniques used}&\textbf{Research aims}&\textbf{Advantages}&\textbf{Disadvantages}\\ \hline
Papernot et al. \cite{Papernot17} & \tabincell{c}{Deep learning} & \tabincell{c}{Laplace\\ mechanism} & Preserve privacy & \tabincell{c}{Independent of \\learning algorithms} & \tabincell{c}{Suitable only for\\ small-scale tasks}\\ \hline
Papernot et al. \cite{Papernot18} & \tabincell{c}{Deep learning} & \tabincell{c}{Gaussian\\ mechanism} & Preserve privacy & \tabincell{c}{Suitable for \\large-scale tasks} & \tabincell{c}{Need two \\aggregators}\\ \hline
Shokri et al. \cite{Shokri15} & \tabincell{c}{Distributed \\deep learning} & \tabincell{c}{Sparse \\vector technique} & Preserve privacy & \tabincell{c}{Preserve the privacy of \\participants without \\sacrificing the accuracy of \\resulting models} & \tabincell{c}{Vulnerable to Generative \\Adversarial Network\\-based attacks \cite{Hitaj17}}\\ \hline
Abadi et al. \cite{Abadi16} & \tabincell{c}{Distributed \\deep learning} & \tabincell{c}{Gaussian \\mechanism} & Preserve privacy & \tabincell{c}{Preserve the privacy of \\deep neural networks \\with non-convex \\objectives} & \tabincell{c}{Effective in \\a limited number of \\deep neural networks}\\ \hline
Heikkila et al. \cite{Heikkila17} & \tabincell{c}{Distributed \\deep learning} & \tabincell{c}{Gaussian \\mechanism} & Preserve privacy & \tabincell{c}{Achieve DP in \\distributed settings} & \tabincell{c}{Sacrifice \\learning performance} \\ \hline
Cheng et al. \cite{Cheng18} & \tabincell{c}{Distributed \\deep learning} & \tabincell{c}{Gaussian \\mechanism} & Preserve privacy & \tabincell{c}{Low differential \\privacy budget and \\high learning accuracy} & \tabincell{c}{High communication \\overhead}\\ \hline
Zhao \cite{Zhao18} & \tabincell{c}{Distributed \\deep learning} & \tabincell{c}{Gaussian \\mechanism} & Preserve privacy & \tabincell{c}{Use the teacher-student \\paradigm to improve \\learning performance} & \tabincell{c}{High communication \\overhead}\\ \hline
Jayaraman et al. \cite{Jay18} & \tabincell{c}{Distributed \\deep learning} & \tabincell{c}{Zero-concentrated \\DP mechanism} & Preserve privacy & \tabincell{c}{Both output and gradient \\are protected with \\reduced noise} & \tabincell{c}{Data owners' utility \\cannot be maximized}\\ \hline
Zhao et al. \cite{Zhao19} & \tabincell{c}{Distributed \\deep learning} & \tabincell{c}{Exponential and \\Laplace mechanism} & \tabincell{c}{Preserve privacy and \\improve stability} & \tabincell{c}{Preserve the privacy \\of collective deep \\learning systems with \\the existence of \\unreliable participants} & \tabincell{c}{Accuracy is less than \\the centralized methods}\\ \hline
Phan et al. \cite{Phan19} & \tabincell{c}{Distributed \\deep learning} & \tabincell{c}{Gaussian \\mechanism} & \tabincell{c}{Preserve privacy} & \tabincell{c}{Achieve tight \\robustness bound} & \tabincell{c}{Model accuracy \\is sacrificed}\\ \hline
Geyer et al. \cite{Geyer17} & \tabincell{c}{Federated learning} & \tabincell{c}{Gaussian \\mechanism} & \tabincell{c}{Preserve privacy} & \tabincell{c}{Balance tradeoff \\between privacy loss \\and model performance} & \tabincell{c}{Model performance \\depends on the \\number of clients}\\ \hline
Shi et al. \cite{Shi17} & \tabincell{c}{Federated learning} & \tabincell{c}{Concept of \\differential privacy} & Preserve privacy & \tabincell{c}{No P2P communication \\and fault tolerance} & \tabincell{c}{Focus only on \\multi-input functions}\\ \hline
Agarwal et al. \cite{Agarwal18} & \tabincell{c}{Federated learning} & \tabincell{c}{Gaussian and \\binomial mechanisms} & Preserve privacy & \tabincell{c}{Achieve both \\communication efficiency \\and differential privacy} & \tabincell{c}{The analysis of \\binomial mechanism \\may not be tight} \\ \hline
\end{tabular}}
\label{tab:summaryDL}
\end{table*}
Table \ref{tab:summaryDL} summarizes the papers that apply differential privacy to distributed deep learning and federated learning.
In this summary, we can see that most of these papers make use of the Gaussian mechanism.
This is because the probability density function of the Gaussian distribution is differentiable,
and this property is necessary for calculating the gradient of a learning model.
The Laplace mechanism does not have this property, so it is seldom applied in deep learning.
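To illustrate how Gaussian noise enters gradient-based training, the following is a minimal sketch of gradient perturbation in the style of DP-SGD \cite{Abadi16}; the clipping norm, noise multiplier, and toy gradients are illustrative choices:

```python
import numpy as np

def private_gradient(per_example_grads, clip_norm=1.0, noise_multiplier=1.1,
                     rng=np.random.default_rng(0)):
    """Clip each per-example gradient to bound the L2 sensitivity, then add
    Gaussian noise to the sum before averaging (the core DP-SGD step)."""
    clipped = [g / max(1.0, np.linalg.norm(g) / clip_norm)
               for g in per_example_grads]
    total = np.sum(clipped, axis=0)
    # Noise scale is calibrated to the clipping norm, i.e. the sensitivity.
    noise = rng.normal(0.0, noise_multiplier * clip_norm, size=total.shape)
    return (total + noise) / len(per_example_grads)

grads = [np.array([3.0, 4.0]), np.array([0.06, 0.08])]
noisy_avg = private_gradient(grads)
print(noisy_avg.shape)  # (2,)
```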
It is worth pointing out that simply using differential privacy mechanisms during a learning process may not
provide enough security to protect the privacy of a learning model or the training data.
This is because an adversary, who is pretending to be an honest participant,
can use a GAN \cite{Goodfellow14} to generate prototypical samples
of a victim's private training dataset,
and because the generated samples have the same distribution as the victim's training dataset,
the adversary can bypass the protection of differential privacy \cite{Hitaj17}.
A potential solution against this security issue
is to use local differential privacy.
Unlike regular differential privacy, which takes into account
all users' data in a dataset,
local differential privacy adds randomization to each individual user's data \cite{Erlingsson14}.
Thus, local differential privacy offers finer granularity and a stronger privacy guarantee.
Even if an adversary has access to the personal responses of
an individual user in the dataset,
the adversary is still unable to learn accurate information about the user's personal data.
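A minimal sketch of local differential privacy is binary randomized response, the idea underlying \cite{Erlingsson14}; the privacy budget and the all-True population below are illustrative:

```python
import math
import random

def randomized_response(truth, epsilon, rng):
    """Report the true bit with probability e^eps / (e^eps + 1), else flip
    it.  Each user runs this locally before sending anything out, so the
    aggregator never sees the raw value."""
    p_truth = math.exp(epsilon) / (math.exp(epsilon) + 1.0)
    return truth if rng.random() < p_truth else not truth

rng = random.Random(0)
# 1000 users who all hold the bit True; each individual report stays noisy.
reports = [randomized_response(True, 1.0, rng) for _ in range(1000)]
print(sum(reports) / len(reports))  # close to e/(e+1), about 0.73
```

The aggregate fraction is still informative, but any single user's report is deniable, which is the property that frustrates the GAN-based attack described above.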
Moreover, there are two urgent research problems
which need further investigation.
The first direction is model inversion attack and its defense \cite{Fred15,Yang19ccs}.
Model inversion attacks aim to infer the training data from a target model's predictions.
To implement model inversion attacks,
a popular method is to train a second model called attack model \cite{Yang19ccs}.
The attack model takes the target model's predictions as input
and outputs reconstructed data
which are expected to be very similar to the training data of the target model.
Most existing defense methods focus only on membership inference attacks,
so their effectiveness against model inversion attacks is still unclear.
A potential defense against model inversion attacks
is to adopt differential privacy to modify the target model's predictions.
The success of model inversion attacks is mainly owing to
the redundant information contained in the target model's predictions.
Therefore, if this redundant information can be destroyed,
the attacks can be effectively defended against.
The second direction is the client accuracy in federated learning.
In regular federated learning, the server takes the updates from all clients equally, aiming to minimize an aggregate loss function in general.
However, minimizing an aggregate loss function cannot
guarantee the accuracy of individual clients in the federated network \cite{Mohri19,Li20b},
which is unfair to the clients.
To improve the learning accuracy of the clients, Li et al. \cite{Li20b} introduce an aggregate re-weighted loss function in federated learning,
where different clients are allocated different weights.
A limitation of this type of method is that
the server still needs to send the same model update to all clients.
This limitation could be overcome by enabling the server
to send different model updates to different clients
according to each client's requirements.
The server could use joint differential privacy \cite{Kearns14}
to differentiate the model updates in federated learning.
\vspace{-6mm}
\section{Differential Privacy in Multi-Agent Systems}\label{sec:DP-MAS}
A multi-agent system is a loosely coupled group of agents, such as sensor networks \cite{Ye18},
power systems \cite{Ye11} and cloud computing \cite{Ye19TSC},
interacting with one another to solve complex domain problems \cite{Wooldridge95, Ye17}.
An agent is an intelligent entity that can perceive its environment
and act upon the environment through actuators.
Currently, multi-agent systems face challenges with privacy violation, security issues and communication overhead.
In McSherry et al.'s early work~\cite{McSherry07STOC},
differential privacy mechanisms were applied to auctions to diminish the impact of untrusted participants.
In multi-agent systems, differential privacy mechanisms can also avoid malicious agents through a similar mechanism.
There is an increasing trend to apply differential privacy techniques to multi-agent systems
so as to preserve the agents' privacy \cite{Fioretto19} or improve the agents' performance \cite{Pai16}.
This section focuses on some key sub-areas of multi-agent systems, including
multi-agent learning, auctions, and game theory.
\subsection{Differential privacy in multi-agent reinforcement learning}
Multi-agent learning is generally based on reinforcement learning \cite{Tuyls12}.
Normally, an agent learns proper behavior through interactions with its environment and other agents in a trial-and-error manner.
Every time an agent performs an action,
it receives a reward which tells it how good that action was for accomplishing the given goal.
Importantly, agents can and do change their strategies to obtain better rewards.
Therefore, the aim of each agent is to maximize its long-term expected reward by taking sequential actions.
For example, Figure~\ref{fig:multiagent} shows a set of sweeper robots (the agents with smiling or crying faces)
who are collecting rubbish from a grid (the red diamonds).
When a robot plans to move to the corner of the grid, it may try to move to the right first.
However, if it bumps into the wall, it receives a very low reward;
thus, the robot learns that moving to the right from its current location is not a good idea.
Standard multi-agent learning approaches may need a large number of interactions between agents and an environment to learn proper behaviors \cite{Bus08}.
Therefore, to improve agent learning performance, agent advising was proposed,
where agents are allowed to ask for advice from each other \cite{Silva18}.
For example, the robot in position $(1,1)$ in Fig. \ref{fig:multiagent} can ask its neighbor in $(2,2)$ for advice
and may obtain the knowledge that it cannot move to the left from its current location.
Existing agent advising approaches, however, suffer from malicious agents,
who may provide false advice to hinder the performance of the system,
and from heavy communication overheads,
because agents are allowed to broadcast to all their neighboring agents for advice \cite{Silva17,Silva19}.
For example, in Figure~\ref{fig:multiagent}, the malicious robot (the crying face) may provide false information to the other robots
so that the rubbish is not collected in time.
\begin{figure}[htpb]
\centering
\includegraphics[scale=0.7]{sweeping.pdf}
\caption{A multi-agent learning example}
\label{fig:multiagent}
\end{figure}
\subsubsection{Differential privacy to improve the security of the reinforcement learning}
Differential privacy mechanisms can provide a security guarantee that
a malicious agent being in or out of a multi-agent system has little impact on the utility of other agents.
As the probability of selecting neighbors to ask for advice is based on the reward history provided by neighbors,
exponential or Laplace mechanisms can be applied to this step to diminish the impact of malicious agents for security purposes.
Moreover, the composition of the privacy budget can naturally control the communication overhead,
namely by limiting the amount of advice allowed throughout the whole system.
Ye et al. \cite{Ye19} proposed a differentially private framework to deal with malicious agents and communication overhead problems.
Using Fig.~\ref{fig:multiagent} as an example,
suppose each agent in the grid environment wants to move to the corner
and has the moving knowledge that can be shared with others.
\begin{figure}[ht]
\centering
\includegraphics[scale=0.8]{advising.eps}
\caption{Differentially private multi-agent system}
\label{fig:advising}
\end{figure}
Fig.~\ref{fig:advising} illustrates the process of agent interaction in this example.
Agent $2$ sends out an advice request to its neighbors.
Agent $1$ and the malicious agent would give advice to agent $2$.
Agent $1$'s advice will include the best action in agent $2$'s state according to agent $1$'s knowledge.
However, the malicious agent will always give false advice.
After receiving advice from both neighbors, agent $2$
performs a differentially private adviser selection algorithm, which
applies an exponential mechanism to adjust the chosen probabilities.
Because the malicious agent always provides false advice,
the exponential mechanism may filter out its advice with a high probability,
so that the impact of the malicious agent is diminished.
The system will stop communicating when the privacy budget is used up,
so that the privacy budget can be used to control the communication overhead.
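The adviser-selection step can be sketched as a generic exponential mechanism over neighbors weighted by their reward history; this illustrates the idea rather than Ye et al.'s exact algorithm, and the reward values and privacy budget are hypothetical:

```python
import math
import random

def exponential_mechanism(candidates, utility, epsilon, sensitivity=1.0,
                          rng=random.Random(0)):
    """Pick a candidate with probability proportional to
    exp(epsilon * utility / (2 * sensitivity))."""
    weights = [math.exp(epsilon * utility(c) / (2.0 * sensitivity))
               for c in candidates]
    r = rng.random() * sum(weights)
    for c, w in zip(candidates, weights):
        r -= w
        if r <= 0:
            return c
    return candidates[-1]

# Hypothetical reward history: the malicious neighbour's past advice earned
# low rewards, so it is chosen as the adviser with low probability.
history = {"agent_1": 0.9, "malicious": 0.1}
picks = [exponential_mechanism(list(history), history.get, epsilon=4.0)
         for _ in range(1000)]
print(picks.count("agent_1") / 1000)  # roughly 0.83 for these values
```

The randomization means the malicious agent is never identified outright, only selected rarely, which matches the avoidance (rather than detection) behavior described above.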
\subsubsection{Summary of differential privacy in reinforcement learning}
Differential privacy technology has been proven
to improve the performance of agent learning in addition to its use as a privacy-preservation tool.
Compared to the benchmark broadcast-based approach,
the differential privacy approach achieves a better performance (normally evaluated by the total reward of the system and the convergence rate) with less communication overhead
when malicious agents are present.
However, further exploration of differential privacy techniques in new environments, such as dynamic or uncertain environments, would be worthwhile.
\subsection{Differential privacy in auction}
Auction-based approaches are effective in multi-agent systems for task and resource allocation \cite{Parsons11}.
Normally, there are three parties in an auction,
including a seller, an auctioneer and a group of buyers.
The auctioneer presents the item and
receives bids from the buyers following a pre-established bidding convention.
This convention determines the sequence of bids as well as
the way to decide the winner and the price.
The current privacy-preserving mechanisms are mainly based on cryptography and multi-party secure computation.
However, there may still be privacy leaks
if an agent's sensitive information can be inferred from the auction's outcomes.
Differential privacy techniques can help to combat this issue \cite{Chen19}.
The current differentially private auction mechanisms are mostly designed for spectrum allocation in wireless communication.
Radio spectrum is a scarce resource,
and thus its allocation needs to be managed carefully.
Wireless communication also carries strong security requirements,
which has motivated several private spectrum auction mechanisms.
Zhu et al. \cite{Zhu14} proposed a differentially private spectrum auction mechanism with approximate revenue maximization.
Their mechanism consists of three steps.
First, the auctioneer partitions the bidders into groups and subgroups.
Second, the auctioneer initializes the set of prices
and calculates the probability distribution over these prices using the exponential mechanism.
Finally, based on the probability distribution, the auctioneer randomly selects a price as the auction payment,
and the corresponding bidders are pronounced the winners.
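The second and third steps can be sketched as follows, assuming revenue (price times the number of bidders willing to pay it) as the utility; the normalization, bids, and candidate prices are illustrative rather than the paper's exact construction:

```python
import math

def price_distribution(prices, bids, epsilon):
    """Probability of selecting each candidate price, proportional to
    exp(epsilon * revenue / (2 * max_revenue)); the revenue at price p is
    p times the number of bidders willing to pay p."""
    revenue = [p * sum(b >= p for b in bids) for p in prices]
    scale = max(revenue)  # crude normalisation standing in for sensitivity
    weights = [math.exp(epsilon * r / (2.0 * scale)) for r in revenue]
    total = sum(weights)
    return [w / total for w in weights]

bids = [3.0, 5.0, 5.0, 8.0]
prices = [2.0, 4.0, 6.0]
probs = price_distribution(prices, bids, epsilon=2.0)
print(probs)  # the revenue-maximising price (4.0) gets the largest mass
```

Sampling the payment from this distribution, instead of always taking the maximizer, is what yields approximate rather than exact revenue maximization.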
Zhu and Shin's \cite{Zhu15}
alternative mechanism achieves strategy-proofness and is polynomially tractable.
The mechanism performs an auction in four steps.
The auctioneer first partitions bidders into groups.
The auctioneer then creates virtual channels for bidders based on their geographical locations and a conflict graph.
Next, the auctioneer computes the probability of selecting each bidder as the winner.
Finally, the auctioneer selects the winner based on an exponential mechanism and determines the payment for the winner.
Wu et al. \cite{Wu16} developed a differentially private auction mechanism
which guarantees bid privacy and achieves approximate revenue maximization.
In their mechanism, the auctioneer first groups bidders based on their conflict graph,
and next determines the price for winners in each group using an exponential mechanism.
Finally, the auctioneer selects the winner based on sorted group revenues.
Chen et al. \cite{Chen19} designed a differentially private double spectrum auction mechanism.
Their mechanism is a uniform price auction mechanism,
where all sellers are paid with a selling clearing price
and all buyers groups are charged with a buying clearing price.
They apply an exponential mechanism twice,
once to select a selling clearance price and again to select a buying clearance price.
In addition to spectrum allocation,
differentially private auction mechanisms have also been developed for resource allocation in cloud computing.
Xu et al. \cite{Xu17} proposed a differentially private auction mechanism for trading cloud resources,
which preserves the privacy of individual bidding information and achieves strategy-proofness.
Their mechanism iteratively progresses through a set of rounds,
where each round consists of four steps.
The auctioneer first uses the exponential mechanism to compute the probability distribution over the set of current bids.
Next, the auctioneer randomly selects a bid from the set as the winner in the current round.
The auctioneer then creates a payment scheme for the winner.
Finally, the winner is removed from the set.
\subsubsection{Summary of differential privacy in auctions}
Although differential privacy in auctions has been widely accepted,
these approaches typically assume a seller or an auctioneer can directly interact with all potential buyers.
This assumption, however, may not be applicable to some real situations,
where sellers and buyers are organized in a network, e.g., social networks \cite{Li17,Zhao18b}.
Auctions in social networks introduce new challenging privacy issues.
The first issue is the bidding propagation.
In a social network, a bid from a buyer cannot be sent directly to the seller
but has to be propagated by other agents to the seller.
These intermediate agents may be potential buyers
and are thus in competition with that buyer.
Therefore, the bid value of that buyer is private and cannot be disclosed to others.
An intuitive way to protect the privacy of that buyer is to add Laplace noise to the bid value.
However, the seller then receives a distorted bid and may make a wrong decision.
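That intuitive Laplace perturbation can be sketched as follows; the bid range used as the sensitivity, the clipping to a plausible range, and the numeric values are assumptions for illustration:

```python
import numpy as np

def privatize_bid(bid, epsilon, max_bid=100.0, rng=np.random.default_rng(0)):
    """Perturb a bid with Laplace noise calibrated to the bid range
    (sensitivity = max_bid) before it is propagated through other agents."""
    noisy = bid + rng.laplace(0.0, max_bid / epsilon)
    # Clipping is post-processing, so it does not weaken the DP guarantee.
    return float(np.clip(noisy, 0.0, max_bid))

print(privatize_bid(42.0, epsilon=1.0))
```

Because the noise scale grows with the bid range and shrinks with the privacy budget, a strongly protected bid can deviate far from its true value, which is exactly the distorted-bid problem just described.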
The second issue is social relationships.
During bid propagation, a trajectory forms
which indicates who is engaged in the propagation.
By investigating this trajectory, the seller may learn things about the buyer's social relationships.
\subsection{Differential privacy in game theory}
Game theory is a mathematical model used to study the strategic interaction among multiple agents or players \cite{Bona18}.
Game theory has been broadly studied and applied in various domains,
e.g., economic science \cite{Acquisti16}, social science \cite{Bona18} and computer science \cite{Durlauf10}.
Most of these studies, however, overlook the privacy of agents, malicious agents, or the stability of the game-playing process.
To tackle these issues, differential privacy techniques have been introduced into game theory \cite{Roth13,Pai13,Wanger18}.
Differential privacy-based game theory research can be roughly classified into two categories:
1) using differential privacy to improve the performance of game theory, e.g., stability and equilibrium,
and 2) applying differential privacy to preserve the privacy of agents in games.
\subsubsection{Differential privacy to improve the performance}
Kearns et al. \cite{Kearns14} developed a differentially private recommender mechanism for incomplete information games.
Thanks to differential privacy techniques, their mechanism achieves the equilibria of complete information games
even when there is a large number of players and any single player's action only slightly affects the payoffs of the other players.
Their mechanism offers a proxy that can recommend actions to players.
Players are free to decide whether to opt in to the proxy,
but if players do opt in, they must truthfully report their types.
In addition to satisfying the game-theoretic properties, the mechanism also guarantees player privacy,
namely that no group of players can learn much about the type of any player outside the group.
Rogers and Roth \cite{Rogers14} improved the performance and the defense against malicious users by expanding on Kearns et al.'s work \cite{Kearns14}
to allow players to falsely report their types
even if they opt in to the proxy.
They theoretically show that by using differential privacy
to form an approximate Bayes-Nash equilibrium,
players have to truthfully report their types and faithfully follow the recommendations.
Pai et al. \cite{Pai16} improved game performance as well.
They studied infinitely repeating games of imperfect monitoring with a large number of players,
where players observe noisy results, generated by differential privacy mechanisms, about the play in the last round.
The authors find that, theoretically, folk theorem equilibria, which concern all the Nash equilibria of an infinitely repeated game, may not exist in such settings.
Based on this finding, they derive anti-folk theorems~\cite{Antifolk},
where restrictions are imposed on the information pattern of repeated games
such that individual deviators cannot be identified \cite{Masso89}.
Lykouris et al. \cite{Lyk16} increased game stability by analyzing the efficiency of repeated games in dynamically changing environments and population sizes.
They draw a strong connection between differential privacy and the high efficiency of learning outcomes in repeated games with frequent change.
Here, differential privacy is used as a tool to find solutions
that are close to optimal and robust to environmental changes.
Han et al. \cite{Han15} developed an approximately truthful mechanism to defend against malicious users in an application to manage the charging schedules of electric vehicles.
To ensure that users truthfully report their specifications,
their mechanism takes advantage of joint differential privacy
which can limit the sensitivity of the scheduling process to changes in user specifications.
Therefore, no individual user can benefit from misreporting his specification,
which results in truthful reports.
\subsubsection{Applying differential privacy to preserve the privacy}
Hsu et al. \cite{Hsu13} modelled the private query-release problem in differential privacy as a two-player zero-sum game between a data player and a query player.
Each element of the data universe for the data player is interpreted as an action.
The data player's mixed strategy is a distribution over her databases.
The query player has two actions for each query.
The two actions are used to penalize the data player,
when the approximate answer to a query is too high or too low.
An offline mechanism, based on the Laplace mechanism, is then developed to achieve the private equilibrium of the game.
Zhang et al. \cite{Zhang16} developed a general mobile traffic offloading system.
They used the Gale-Shapley algorithm~\cite{Galetheory} to optimize the offloading station allocation plan for mobile phone users.
In this algorithm, to protect users' location privacy, they proposed two differentially private mechanisms based on a binary mechanism \cite{Chan11}.
The first mechanism is able to protect the location privacy of any individual user
when all the other users collude against this user, provided the administrator is trusted.
The second mechanism is stronger than the first mechanism
because it assumes that even the administrator is untrusted.
Zhou et al. \cite{Zhou17} adopted an aggregation game to model spectrum sharing in large-scale and dynamic networks,
where a set of users compete for a set of channels.
They then applied differential privacy techniques to guarantee the truthfulness and privacy of the users.
Specifically, they use a Laplace mechanism to add noise to the cost threshold and users' costs to protect this information.
Moreover, they use an exponential mechanism to decide the mixed strategy aggregative contention probability distribution for each user
so as to preserve the privacy of users' utility functions.
\subsubsection{Summary of differential privacy in game theory}
The current research on differential privacy in game theory has mainly focused on static environments.
These same issues in dynamic environments are generally still open.
Of the little research that does consider dynamic environments \cite{Lyk16},
only changes in population are considered
while overlooking changes in other areas,
such as the strategies available to each agent
or the utility of each of those strategies.
Studying game theory with changing available strategies is a challenging issue,
as these types of changes may result in no equilibria between agents.
This is because no matter which strategy is taken by an agent,
other agents may always have strategies to defeat that agent.
In other words, other agents are incentivized to unilaterally change their strategies.
However, since differential privacy can be used to force agents to report truthfully,
it may also be used to force agents to reach equilibria.
\iffalse
\subsection{Differential privacy in multi-agent data sharing}
Data sharing typically happens between a set of agents with common tasks.
Data belonging to each agent may contain sensitive or private information
which should not be revealed to other agents \cite{Fioretto19}.
For example, in the real world, medical centers are interpreted as agents.
These agents have a common prediction task to diagnose a rare disease.
Each agent has a private dataset
but the number of samples is insufficient for properly training an algorithm.
These agents thus have a motivation to share their datasets to make the prediction
while preserve their privacy.
Privacy preserving data sharing has been heavily investigated in the normal data sharing scenarios,
however, in multi-agent systems, the problem is more complicated
as agents share data in a federated manner for prediction.
Each agent releases a privacy-preserving dataset.
Any individual agent, then, can use these datasets, together with its own dataset,
to train a variety of predictors.
Fioretto et al. \cite{Fioretto18} present how to use differential privacy to release mobility data between agents
which could be transportation systems or other mobile systems.
Their method consists of two steps.
First, they apply the Laplace mechanism to each feature query of a dataset.
Second, they develop an optimization algorithm to post-process the results obtained in the first step.
Specifically, the optimization algorithm re-distributes the noise introduced by the Laplace mechanism
so as to ensure the consistency of the dataset
and keep the noise as close as possible to the original one.
Fioretto et al. \cite{Fioretto19b} study how to use differential privacy to release data from a critical infrastructure network
without revealing sensitive information.
A critical infrastructure network could be a power grid network or a transportation network.
Their mechanism consists of three phases:
the location obfuscation phase, the value obfuscation phase and the fidelity restoration phase.
Location obfuscation is to shuffle the node locations using an instance of the exponential mechanism.
Value obfuscation is to modify the node values by adding the Laplace noise.
Fidelity restoration is to re-distribute the noise introduced in the above two phases
to guarantee the usability of the obfuscated network.
Fioretto and Van Hentenryck \cite{Fioretto19} propose a privacy-preserving federated data sharing protocol
which enables each agent to locally produce a privacy-preserving version of its dataset.
Their protocol includes two steps.
First, an agent locally trains a privacy-preserving predictor shared with other agents.
Second, the agent synthesizes an unlabeled privacy-preserving version of its data
and uses the predictors collected from other agents to generate the labels of its data.
The agent finally shares its synthesized, privacy-preserving and labeled data to others for their analytic tasks.
Huai et al. \cite{Huai19} propose a privacy-aware synthesizing method for crowdsourced data.
The method includes two phases.
First, they use the kernel density estimation to estimate the distribution of the raw data.
Second, they sample a set of candidate synthetic claims from the learned densities in the first phase,
and do a privacy test on each of the candidate claims.
If a claim passes the test, it will be released,
otherwise it will be discarded.
\subsubsection{Summary of differential privacy in data sharing}
The above-mentioned studies on differential privacy in data sharing has a common assumption
that agents are honest but curious to others' privacy.
This means that agents obey the rules
but they are still interested in others' privacy.
This assumption, however, may not be standing in the real world,
as agents may be malicious and disobey the rules.
For example, in \cite{Fioretto19}, each agent locally trains a privacy-preserving predictor
and shares it with other agents.
However, if an agent is malicious,
it may modify the predictor to harm the performance of other agents.
Existing solutions regarding malicious agents can avoid them but cannot identify them \cite{Ye19}.
It is an interesting research to adopt differential privacy to identify malicious agents or force them not to behave maliciously.
\fi
\subsection{Summary of multi-agent systems}
\begin{table*}[!ht]\scriptsize
\newcommand{\tabincell}[2]{\begin{tabular}{@{}#1@{}}#2\end{tabular}}
\centering
\caption{Summary of differential privacy in multi-agent systems}
\scalebox{0.9}{
\begin{tabular}{|c|c|c|c|c|c|} \hline
\textbf{Papers}&\textbf{Research areas}&\textbf{Techniques used}&\textbf{Research aims}&\textbf{Advantages}&\textbf{Disadvantages}\\ \hline
Ye et al. \cite{Ye19} & Multi-agent learning & \tabincell{c}{Laplace \\mechanism} & \tabincell{c}{Avoid \\malicious agents} & \tabincell{c}{Avoid malicious agents \\with low communication \\and computation overhead} & \tabincell{c}{Malicious agents \\cannot be identified}\\ \hline
Zhu et al. \cite{Zhu14} & Auction in spectrum & \tabincell{c}{Exponential \\mechanism} & Preserve privacy & \tabincell{c}{Guarantee both the \\truthfulness of bidders' \\valuations and \\their privacy} & \tabincell{c}{Only approximate \\revenue maximization}\\ \hline
Zhu and Shin \cite{Zhu15} & Auction in spectrum & \tabincell{c}{Exponential \\mechanism} & Preserve privacy & \tabincell{c}{Preserve the privacy of \\ both bidders and \\the auctioneer together} & \tabincell{c}{Only near optimal \\revenue achieved} \\ \hline
Wu et al. \cite{Wu16} & Auction & \tabincell{c}{Exponential \\mechanism} & Preserve privacy & \tabincell{c}{Guarantee both \\bid privacy and fairness} & \tabincell{c}{Only approximate \\revenue
maximization}\\ \hline
Chen et al. \cite{Chen19} & Auction in spectrum & \tabincell{c}{Exponential \\mechanism} & Preserve privacy & \tabincell{c}{Preserve the privacy \\of bidders in \\double spectrum auctions} & \tabincell{c}{Only approximate social \\welfare maximization}\\ \hline
Xu et al. \cite{Xu17} & \tabincell{c}{Auction in \\cloud computing} & \tabincell{c}{Exponential \\mechanism} & Preserve privacy & \tabincell{c}{Preserve the privacy of \\consumers in \\cloud environments} & \tabincell{c}{Only approximate \\truthfulness and revenue \\maximization guarantees}\\ \hline
Hsu et al. \cite{Hsu13} & \tabincell{c}{Game theory in \\databases} & \tabincell{c}{Laplace \\mechanism} & Preserve privacy & \tabincell{c}{Preserve the privacy of \\both individuals and \\analysts of \\database systems} & \tabincell{c}{Achieve only nearly \\optimal error rates}\\ \hline
Kearns et al. \cite{Kearns14} & Game theory & \tabincell{c}{Concept of \\differential privacy} & \tabincell{c}{Preserve privacy and \\Improve performance} & \tabincell{c}{Implement equilibria of \\complete information \\games in settings of \\incomplete information} & \tabincell{c}{The type of a player \\is still possible \\to be revealed}\\ \hline
Rogers and Roth \cite{Rogers14} & Game theory & \tabincell{c}{Concept of \\differential privacy} & \tabincell{c}{Preserve privacy and \\avoid malicious agents} & \tabincell{c}{Implement equilibria of \\complete information \\games in settings of \\incomplete information \\even if players \\are lying} & \tabincell{c}{The type of a player \\is still possible \\to be revealed}\\ \hline
Zhang et al. \cite{Zhang16} & \tabincell{c}{Game theory in \\mobile communication} & \tabincell{c}{Binary \\mechanism} & Preserve privacy & \tabincell{c}{Preserve each user's \\location privacy even if \\other users collude} & \tabincell{c}{The system administrator \\is required to be \\honest or semi-honest}\\ \hline
Zhou et al. \cite{Zhou17} & \tabincell{c}{Game theory in \\spectrum sharing} & \tabincell{c}{Laplace and \\exponential \\mechanisms} & Preserve privacy & \tabincell{c}{Guarantee both \\truthfulness and privacy \\of users} & \tabincell{c}{Achieve only \\approximate Nash \\equilibrium} \\ \hline
Pai et al. \cite{Pai16} & Game theory & \tabincell{c}{Concept of \\differential privacy} & Improve performance & \tabincell{c}{Quantify limit results \\for repeated games} & \tabincell{c}{Achieve only \\approximate equilibria}\\ \hline
Lykouris et al. \cite{Lyk16} & Game theory & \tabincell{c}{Concept of \\differential privacy} & Improve stability & \tabincell{c}{Connect differential \\privacy with learning \\efficiency in \\dynamic games} & \tabincell{c}{The solution is \\approximate optimal}\\ \hline
Han et al. \cite{Han15} & \tabincell{c}{Game theory in \\electric vehicles} & \tabincell{c}{Laplace \\mechanism} & Avoid malicious agents & \tabincell{c}{Reduce the incentive \\of user misreporting} & \tabincell{c}{Achieve only \\approximate truthfulness} \\ \hline
\end{tabular}}
\label{tab:summaryMAS}
\end{table*}
Table \ref{tab:summaryMAS} summarizes the papers that apply differential privacy to multi-agent systems.
In this summary, three important observations stand out.
First, some of these papers use differential privacy not to preserve the privacy of agents
but for other aims, e.g., avoiding malicious agents and improving agents' performance.
This implies that the differential privacy technique is able to achieve research aims other than privacy preservation.
In keeping with this spirit, more potential applications of differential privacy are worthy of research.
Second, a common disadvantage of using differential privacy is that
only approximately optimal results can be achieved.
Thus, more efficient differential privacy mechanisms need to be developed.
Third, most of these papers involve agent interaction; and differential privacy is adopted to guarantee the privacy of interaction information.
Therefore, other multi-agent research, which involves agent interaction,
may also enjoy the benefits of differential privacy
and deserves further investigation.
For example, multi-agent negotiation enables multiple agents to alternately provide offers
to reach agreements on given events or goods \cite{Dimo19}.
However, offers may explicitly or implicitly contain agents' sensitive information,
e.g., commercial secrets, which should be protected.
Another example is multi-agent resource allocation.
To allocate resources fairly, agents have to reveal their preference
over different types of resources to others \cite{Beynier19}.
These preferences, however, may be exactly what the agents are inclined to hide.
In summary, differential privacy has great potential to solve diverse problems in multi-agent systems.
\section{Future Research Directions}
\subsection{Private transfer learning}
In addition to introducing differential privacy into standalone machine learning,
differentially private transfer learning has also been investigated \cite{Xie17,Wang18}.
Transfer learning aims to transfer knowledge from source domains to improve learning performance in target domains \cite{Wang18}.
It is typically used to handle situations in which
data are not stored in one place but are distributed over a set of collaborative data centers \cite{LeTien19,Yao19}.
For example, transfer learning can be used in speech recognition
to transfer the knowledge of connectionist temporal classification model
to the target attention-based model to overcome the problem of limited speech resource in the target domain \cite{Qin18}.
Transfer learning can also be used in recommendation systems
to address the data-sparsity issue by enabling knowledge to
be transferred among recommendation systems \cite{Zhao13}.
Instead of transferring raw data, the intermediate computation results are transferred
from source domains to target domains.
However, even the intermediate results are potentially vulnerable to privacy breaches \cite{Wang09},
which motivates privacy-preserving transfer learning.
\subsection{Deep reinforcement learning}
Deep reinforcement learning is a combination of reinforcement learning and deep learning \cite{Francois18},
and could be used to solve a wide range of complex decision-making problems that were previously beyond the capability of regular reinforcement learning. The learning process of deep reinforcement learning is similar to regular reinforcement learning in that both are based on trial and error. However, unlike regular reinforcement learning, which may use a reward value table (Q-table) to store learned knowledge, deep reinforcement learning uses a deep Q-network instead. One of the advantages of using a deep Q-network is that deep reinforcement learning can take high-dimensional and continuous states as inputs, which is close to infeasible with regular reinforcement learning.
Differential privacy in deep reinforcement learning has not been researched thoroughly. Wang and Hegde~\cite{Wang19} applied differential privacy to deep reinforcement learning to protect the value function approximator by adding Gaussian noise to the objective function, but their work still focuses on the “deep learning” aspects of the approach rather than the “reinforcement learning” parts. Compared to standard reinforcement learning and deep learning, deep reinforcement learning has some unique features. First, the training samples are collected during learning rather than pre-assembled before learning. Second, the training samples may not be independent but rather highly correlated. Third, the training samples are not usually labelled. Thus, to avoid overfitting with deep reinforcement learning, experience replay is required, which means randomly selecting a set of samples for training in each iteration. As discussed in the previous sections, differential privacy can improve the stability of learning. Therefore, it may be interesting to research whether introducing differential privacy into deep reinforcement learning can help to avoid overfitting.
\subsection{Meta-learning}
Meta-learning, also known as ‘learning to learn’, is a learning methodology that systematically observes how different machine learning approaches perform on a wide range of learning tasks and then learns from these observations~\cite{Vilalta02, Lemke15,Van19}. In meta-learning, the goal of the trained model is to quickly learn a new task from a small amount of new data. Also, the trained model should be able to learn on a number of different tasks~\cite{Finn17,Nichol18}, but this opens the risk of breaching the privacy of the different task owners~\cite{Li20}.
Recently, Li et al.~\cite{Li20} introduced differential privacy into meta-learning to preserve the privacy of task owners. Specifically, they use a certified $(\epsilon,\delta)$-differentially private stochastic gradient descent~\cite{Bassily19} with each task, which guarantees that the contribution of each task owner carries global differential privacy guarantees with respect to the meta-learner. However, to guarantee global differential privacy, the number of tasks has to be known beforehand. This is hard to know in some situations, such as online meta-learning where tasks are revealed one after the other in a dynamic manner~\cite{Finn19}. Therefore, it would be worthwhile developing a new differential privacy-based algorithm to preserve the privacy of task owners in online meta-learning.
\subsection{Generative adversarial networks}
Generative adversarial networks (GANs) \cite{Goodfellow14} are a framework for producing a generative model by way of a two-player minimax game.
One player is the generator
who attempts to generate realistic data samples by transforming noisy samples
drawn from a distribution
using a transformation function with learned weights.
The other player is the discriminator,
who attempts to distinguish the synthetic data samples created by the generator from real data samples.
The GAN framework is one of the most successful learning models and has been applied to applications such as imitating expert policies \cite{Ho16} and domain transfer \cite{Yoo16}. More recently, GANs have been extended to accommodate multiple generators and discriminators so as to address more complex problems.
Like other learning models, GAN frameworks also suffer from the risk of information leaks. More specifically, the generator model estimates the underlying distribution of a dataset and randomly generates realistic samples, which means the generator, through the power of deep neural networks, remembers training samples. Now, when the GAN model is applied to a private or sensitive dataset, the privacy of the dataset may be leaked. To deal with this problem, Xu et al. proposed a GAN-obfuscator \cite{Xu19}, i.e., a differentially private GAN framework, where carefully designed Gaussian noise is added to the gradients of learning models during the learning procedure. By using the GAN-obfuscator, an unlimited amount of synthetic data can be generated for arbitrary tasks without disclosing the privacy of training data. However, although the framework can guarantee the privacy of training data, there is only one generator and one discriminator in this framework. Therefore, a useful direction of future research might be to extend these principles to multiple generators and discriminators to address more complex problems.
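The clip-and-noise recipe underlying such gradient perturbation can be illustrated with a minimal NumPy sketch. The function below, its parameter names, and the noise calibration are illustrative assumptions for exposition only, not the GAN-obfuscator's actual implementation: each per-example gradient is clipped to a fixed norm, the clipped gradients are averaged, and Gaussian noise scaled to the clipping norm is added.

```python
import numpy as np

def noisy_gradient(per_example_grads, clip_norm=1.0, noise_sigma=1.0, rng=None):
    """Clip each per-example gradient to clip_norm, average the clipped
    gradients, and add Gaussian noise scaled to the clipping norm
    (a generic clip-and-noise sketch; parameters are illustrative)."""
    rng = np.random.default_rng(rng)
    clipped = []
    for g in per_example_grads:
        norm = np.linalg.norm(g)
        # Scale down any gradient whose norm exceeds clip_norm.
        clipped.append(g * min(1.0, clip_norm / max(norm, 1e-12)))
    mean_grad = np.mean(clipped, axis=0)
    # Noise standard deviation proportional to the clipping norm,
    # divided by the batch size since the mean (not the sum) is released.
    noise = rng.normal(0.0, noise_sigma * clip_norm / len(per_example_grads),
                       size=mean_grad.shape)
    return mean_grad + noise
```

Choosing `noise_sigma` via a privacy accountant would be needed to claim a concrete $(\epsilon,\delta)$ guarantee; the sketch only shows the mechanics of the perturbation step.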
\subsection{Multi-agent systems}
\subsubsection{Multi-agent advising learning}
When an agent is in an unfamiliar state during a multi-agent learning process, it may ask for advice from another agent \cite{Silva17}. These two agents then form a teacher-student relationship. The teacher agent offers advice to the student agent about which action should be taken. Existing research is based on the common assumption that the teacher agent can offer advice only if it has visited the same state as the student agent’s current state. But this assumption might be relaxed by using differential privacy techniques.
The property of differential privacy can be borrowed to address the advice problem. Two similar states are interpreted as two neighbouring datasets. The advice generated from the states is interpreted as the query result yielded from datasets. Since two results from neighbouring datasets can be considered approximately identical, two pieces of advice generated from two similar states can also be considered approximately identical. This property can thus guarantee that advice created in a state can still be used in another similar state. Hence, this may be an interesting way to improve agent learning performance.
\subsubsection{Multi-agent transfer learning}
When agents transfer knowledge between each other to improve learning performance,
a key problem is that their privacy needs to be preserved \cite{Silva19}. Existing methods are typically based on homomorphic cryptosystems \cite{Sakuma08,Wu18,Liu19}. However, homomorphic cryptosystems have a high computation overhead and, therefore, may not be very efficient in resource-constrained systems, e.g., wireless sensor networks. Differential privacy, with its light computation overhead, could therefore be a good alternative in these situations.
\subsubsection{Multi-agent reasoning}
Reasoning is an ability that enables an agent to use known facts to deduce new knowledge.
It has been widely employed to address various real-world problems.
For example, knowledge graph-based reasoning can be used in speech recognition to parse speech contents into logical propositions \cite{Zhou20},
and case-based reasoning can be adopted to address the data-sparsity issue in recommendation systems
by filling in the vacant ratings of the user-item matrix \cite{Tawfik17}.
A typical reasoning method is based on the Belief, Desire and Intention (BDI) model \cite{Inverno04}.
An agent's beliefs correspond to information the agent has about the world.
Reasoning is a powerful tool in AI especially
when it is combined with deep neural networks.
For example, Mao et al. \cite{Mao19} has recently proposed a neuro-symbolic concept learner
which combines symbolic reasoning with deep learning.
Their model can learn visual concepts, words and semantic parsing of sentences
without explicit supervision on any of them.
As reasoning requires querying known facts,
which may contain private information,
privacy preservation becomes an issue in the reasoning process.
Tao et al. \cite{Tao14} propose a privacy-preserving reasoning framework.
Their idea is to hide the truthful answer from a querying agent
by providing the answer ``Unknown'' to a query.
Then, the querying agent cannot distinguish between the case
that the query is being protected and the case that
the query cannot be inferred from the known facts.
However, simply hiding truthful answers may seriously hinder the utility of querying results.
Differential privacy, with its theoretical guarantee of utility of querying results,
may be a promising technique for privacy-preserving reasoning.
Recently, there have been great efforts to combine differential privacy with reasoning \cite{Barthe13,Barthe16,Zhang19}.
These works, however, aim to take advantage of reasoning to prove
differential privacy guarantees of programs,
instead of using differential privacy to guarantee privacy-preserving reasoning.
Therefore, a potential direction of future research may be to introduce
differential privacy into the reasoning process to guarantee
the privacy of known facts.
\subsection{Combination of machine learning, deep learning and multi-agent systems}
A novel research area combining
machine learning, deep learning and multi-agent systems
is multi-agent deep reinforcement learning (MADRL) \cite{Leal19}.
In MADRL, multi-agent system techniques are used to coordinate the behaviors of agents;
machine learning techniques guide the learning process of agents;
and deep learning is employed by agents to learn efficient strategies.
One current research direction in MADRL is action advising \cite{Ilhan19,Omidshafiei19,Silva20}.
Action advising in regular multi-agent reinforcement learning allows a teacher agent
to offer only an action as advice to a student agent in a given state.
By comparison, action advising in MADRL usually allows a student agent
to query a teacher agent's knowledge base to receive action suggestions \cite{Silva20}.
However, as the number of states in MADRL is very large,
an agent's knowledge base may contain a great deal of the agent's
private information, which should be protected.
Privacy-preservation in MADRL is still an open research problem
which may be addressed by using differential privacy.
\section{Conclusion}
In this paper, we investigated the use of differential privacy in selected areas of AI. We described the critical issues facing AI and the basic concepts of differential privacy, highlighting how differential privacy can be applied to solving some of these problems. We discussed the strengths and limitations of the current studies in each of these areas and also pointed out the potential research areas of AI where the benefits of differential privacy remain untapped. In addition to the three areas of focus in this article – machine learning, deep learning and multi-agent learning – there are many other interesting areas of research in AI that have also leveraged differential privacy, such as natural language processing, computer vision, robotics, etc. Surveying differential privacy in these areas is something we intend to do in future work.
\bibliographystyle{IEEEtran}
{\small
\section{Introduction}
\label{SECTION: INTRODUCTION}
The takeoff point for this paper is the classical Kelly trading problem~\cite{Kelly_1956, Cover_Thomas_2012, Thorp_2006, Rotando_Thorp_1992,Algoet_Cover_1988}, which calls for maximizing the Expected Logarithmic Growth~(ELG) of a trader's account.
To be more specific, the problem is often formulated in terms of a sequence of trades with independent and identically distributed (i.i.d.) returns drawn from a known probability distribution.
The trader's objective is to specify a fraction~$K$ of the account value at each stage so as to maximize the ELG at the terminal stage.
While many existing papers have contributed to Kelly's problem and its application to stock trading, e.g., see~\mbox{\cite{Cover_Thomas_2012,Lo_Orr_Zhang,Luenberger_2011,Algoet_Cover_1988,Maclean_Thorp_Ziemba_2010,Thorp_2006,Rotando_Thorp_1992}}, the effect of \textit{rebalancing frequency} has still \textit{not} been heavily considered in the existing~literature.
Some initial results along these lines regarding rebalancing frequency effects can be found in~\cite{Kuhn_Luenberger_2010, Das_Kaznachey_Goyal_2014, Das_Kaznachey_Goyal_2015} and our most recent work in \cite{Hsieh_Barmish_Gubner_2018_ACC,Hsieh_Gubner_Barmish_2018_CDC, Hsieh_Dissertation}.
Indeed, in \cite{Kuhn_Luenberger_2010}, portfolio optimization with returns following a continuous-time geometric Brownian motion was considered; however, only two extreme cases, high-frequency trading and buy and hold, were emphasized in the results.
On the other hand, in~\cite{Das_Kaznachey_Goyal_2014} and \cite{Das_Kaznachey_Goyal_2015}, portfolio optimization was considered with the constant gain~$K$ selected without regard for the frequency with which portfolio rebalancing is done.
Subsequently, when this same gain~$K$ is used to find an optimal rebalancing period, the resulting levels of ELG are arguably suboptimal.
In contrast to~\cite{Kuhn_Luenberger_2010} and \cite{Das_Kaznachey_Goyal_2014}, our formulation, which builds on our previous work in~\cite{Hsieh_Barmish_Gubner_2018_ACC} and \cite{Hsieh_Gubner_Barmish_2018_CDC}, considers the full range of rebalancing frequencies, and both the probability distribution of the returns and the time interval between rebalances are arbitrary.
That is, we deal with what we view to be a more appropriate \textit{frequency-based Kelly trading formulation} and seek an optimal portfolio which depends on the rebalancing frequency.
\subsection{Idea of Frequency-Based Formulation}
Specifically, within this frequency-based trading context, we let~$\Delta t$ be the time between trade updates and~\mbox{$n \geq 1$} be the number of steps between rebalancings.
Then the frequency is~\mbox{$f:=1/ (n \Delta t)$}.
In the sequel, we call the quantity~$n$ the \textit{rebalancing period}.
Now,
letting~$V(k)$ denote the trader's account value at stage $k$, the trader invests~$KV(0)$ with~\mbox{$K \geq 0$} at stage~$k = 0$ and waits~$n \geq 1$ steps before updating the trade~size.
After each trade, the broker takes its share and the balance of the money is left to ``ride'' with resulting profits or losses viewed as ``unrealized'' until stage $n$ is reached.
When~$n$ is small, this is viewed as the high-frequency case, and when~$n$ is large, one uses the term ``buy and hold.''
\subsection{Plan for the Remainder of the Paper}
In Section~\ref{SECTION: Problem Formulation}, we first recall the frequency-based formulation considered in \cite{Hsieh_Barmish_Gubner_2018_ACC} and \cite{Hsieh_Gubner_Barmish_2018_CDC}.
Then, in Section~\ref{SECTION: Main Results}, based on this formulation, we offer our main result, which gives necessary and sufficient conditions for the frequency-based optimal Kelly portfolio.
In addition, several technical results regarding various optimality conditions are provided; e.g., an extended dominant asset theorem, expected ratio optimality, and asymptotic relative optimality are proved.
In Section~\ref{SECTION: Dominant Ratio Trading Algorithm}, we propose a simple trading algorithm which uses the extended dominant asset theorem to determine when one should trigger a trade on an underlying asset.
Several back-testing simulations using historical prices are provided to support the trading performance of the algorithm.
In Section~\ref{SECTION: Conclusion}, concluding remarks are provided. Finally, in the Appendix, we address an important issue regarding survivability~\mbox{(no-bankruptcy)}.
\section{Problem Formulation}
\label{SECTION: Problem Formulation}
To study the effect of rebalancing frequency in portfolio optimization problems, as seen in Section~\ref{SECTION: INTRODUCTION}, let $n \geq 1$ be the number of steps between rebalancings.
For $k=0,1,\dots,$ we consider a trader who is forming a portfolio consisting of~$m \geq 2$ assets and assume that at least one of them is riskless with nonnegative rate of return $r \geq 0$. That is, if an asset is riskless, its return is deterministic and is treated as a degenerate random variable taking the value $r$ for all $k$ with probability one.
Alternatively, if Asset~$i$ is a stock whose price at time~$k$ is~$S_i(k)>0$, then its return is
\[
X_i(k) = \frac{S_i(k+1) - S_i(k)}{S_i(k)}.
\]
In the sequel, for stocks, we assume that the return vectors~$
X(k):=\left[X_1(k) \, X_2(k)\, \cdots \,X_m(k)\right]^T
$
have a known distribution and have components $X_i(\cdot)$ which can be arbitrarily correlated.\footnote{Again, if the $i$th asset is riskless, then we put $X_i(k) = r \geq 0$ with probability one. If a trader maintains \textit{cash} in its portfolio, then this corresponds to the case~\mbox{$r=0.$}}
We also assume that these vectors are i.i.d. with components satisfying
$
X_{\min,i} \leq X_i(k) \leq X_{\max,i}
$
with known bounds, where $X_{\max,i}$ is finite and~\mbox{$X_{\min,i} > -1$}.
The latter constraint on $X_{\min,i}$ means that the loss per time step is limited to less than~$100\%$ and that the price of a stock cannot drop to zero.
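As a small illustrative sketch (not part of the formal development), the per-period returns $X_i(k)$ can be computed from a price path and checked against the bound $X_i(k) > -1$ as follows:

```python
import numpy as np

def per_period_returns(prices):
    """Per-period returns X(k) = (S(k+1) - S(k)) / S(k) from a price path."""
    s = np.asarray(prices, dtype=float)
    return s[1:] / s[:-1] - 1.0
```

Since prices are strictly positive, every computed return automatically satisfies the constraint $X(k) > -1$ assumed above.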
\subsection{Feedback Control Perspectives}
Consistent with the literature~\cite{Barmish_Primbs_TAC_2016,Hsieh_Barmish_Gubner_2019_TAC,Hsieh_Barmish_Gubner_2018_ACC,Hsieh_Gubner_Barmish_2018_CDC, Zhang_2001, Primbs_ACC_2007, Hsieh_Dissertation}, we bring the control-theoretic point of view into our problem formulation.
That is, the system output at stage~$k$ is taken to be the trader's account value~$V(k)$ and the $i$th feedback gain
$
0 \leq K_i \leq 1
$
represents the fraction of the account allocated to the~\mbox{$i$th} asset for~$i=1,\dots,m$.
Said another way, the~$i$th controller is a linear feedback of the form
$
I_i(k) = K_iV(k).
$
Since~$K_i \geq 0$, the trader is going {\it long}.\footnote{In finance, a \textit{long} trade means that the trader purchases shares from the broker in the hope of making a profit from a subsequent rise in the price of the underlying stock.}
In view of the above and recalling that there is at least one riskless asset available, without loss of generality,
we consider the unit simplex constraint
$$
K \in {\mathcal K} := \left\{K \in \mathbb{R}^{m}: K_i \geq 0 \text{ for all $i$}, \; \sum_{i=1}^m K_i = 1 \right\}
$$
which is classical in finance; e.g., see \cite{Luenberger_2011, Cover_Thomas_2012, Hsieh_Gubner_Barmish_2018_CDC}.
That is, with~\mbox{$K \in \mathcal K$}, we have a guarantee that~100\% of the account is invested. Moreover, the constraint set $\mathcal{K}$ assures the trader's survivability; i.e., no bankruptcy occurs; see the Appendix for a proof of this important property.
\subsection[Frequency-Dependent Dynamics and Feedback Configuration]{Frequency-Dependent Dynamics and Feedback Setting}
Letting $n \geq 1$ be the number of steps between rebalancings, at time~\mbox{$k=0$}, the trader begins with initial investment control
$$
u(0) = \sum_{i=1}^m K_i V(0)
$$
and waits~$n$ steps in the spirit of buy and hold. Then, when~\mbox{$k=n$}, the investment control is updated to be~\mbox{$
u(n) = \sum_{i=1}^m K_i V(n).
$}
Now, to study the performance which is dependent on rebalancing frequency, for \mbox{$i=1,2,\ldots,m$}, we use the \textit{compound~returns}
\[
\mathcal{X}_{n,i} := \prod_{k=0}^{n-1} (1+X_i(k)) -1
\]
which are readily seen to satisfy~\mbox{$\mathcal{X}_{n,i} > -1$} for all $n \geq 1$ and we work with the random vector~$\mathcal{X}_n$ having $i$th
component~$\mathcal{X}_{n,i}$.
Then, for an initial account value~$V(0)>0$ and rebalancing period $n \geq 1$, the corresponding account value at stage $n$ is described by the stochastic recursion
$$
V(n) = (1 + K^T \mathcal{X}_n )V(0).
$$
In the sequel, we may sometimes write $V(n,K)$ to emphasize the dependence on the feedback gain $K$.
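The compound returns $\mathcal{X}_{n,i}$ and the account recursion $V(n) = (1 + K^T\mathcal{X}_n)V(0)$ can be sketched numerically as follows; this is an illustrative NumPy translation of the definitions above, not code from the paper:

```python
import numpy as np

def compound_return(X):
    """X_{n,i} = prod_{k=0}^{n-1} (1 + X_i(k)) - 1 for each asset.

    X has shape (n, m): n time steps (rows) and m assets (columns)."""
    X = np.asarray(X, dtype=float)
    return np.prod(1.0 + X, axis=0) - 1.0

def account_value(K, X, V0=1.0):
    """V(n) = (1 + K^T X_n) V(0), with rebalancing period n = X.shape[0]."""
    Xn = compound_return(X)
    return (1.0 + np.dot(K, Xn)) * V0
```

For instance, a stock gaining $10\%$ then losing $10\%$ has compound return $1.1 \times 0.9 - 1 = -0.01$, so a 50/50 split with a riskless asset at $r = 0$ turns $V(0) = 100$ into $V(2) = 99.5$.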
\subsection[Frequency-Dependent Optimization Problem]{Frequency-Dependent Optimization Problem}
Consistent with our prior work in \cite{Hsieh_Barmish_Gubner_2018_ACC} and \cite{Hsieh_Gubner_Barmish_2018_CDC}, for any rebalancing period~$n \geq 1$, we
study the problem of maximizing the expected logarithmic~growth
\begin{align*}
{g_n}(K)
&:= \frac{1}{n}\mathbb{E}\left[ \log \frac{V(n,K)}{V(0)} \right] \\[1ex]
&= \frac{1}{n}\mathbb{E}\left[ {\log (1 + {K^T}{\mathcal{X}_n})} \right]
\end{align*}
and we use $g_n^*$ to denote the associated optimal expected logarithmic~growth.
It is readily verified that $g_n(K)$ is concave in~$K$.
Furthermore, any vector~\mbox{$K^* \in \mathcal{K} \subset \mathbb{R}^m$} satisfying~$g_n(K^*) = g_n^*$ is called a \textit{Kelly optimal feedback gain}.
The portfolio which uses the Kelly optimal feedback gain is called \textit{frequency-based Kelly optimal portfolio}.
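As an illustration, $g_n(K)$ can be estimated by Monte Carlo from samples of $\mathcal{X}_n$ and, for $m = 2$, maximized by a simple grid search over the unit simplex; the grid search below is a hypothetical stand-in for whatever concave-optimization routine one prefers:

```python
import numpy as np

def elg(K, Xn_samples, n):
    """Monte Carlo estimate of g_n(K) = (1/n) E[log(1 + K^T X_n)].

    Xn_samples has shape (N, m): N samples of the compound return vector."""
    return np.mean(np.log1p(Xn_samples @ np.asarray(K))) / n

def best_gain_two_assets(Xn_samples, n, grid=1001):
    """Grid search for K* = (K1, 1 - K1) on the unit simplex when m = 2."""
    best = max(((elg([k, 1.0 - k], Xn_samples, n), k)
                for k in np.linspace(0.0, 1.0, grid)))
    return np.array([best[1], 1.0 - best[1]]), best[0]
```

For the two-point example with $\mathcal{X}_{n,1} \in \{1, -0.5\}$ equally likely and a riskless second asset with $r = 0$, the search recovers the classical Kelly fraction $K_1^* = 0.5$.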
\section{Results On Optimality}
\label{SECTION: Main Results}
\vspace{0mm}
In this section, we provide necessary and sufficient conditions which characterize the frequency-based Kelly optimal~portfolio.
\begin{theorem}[Necessity and Sufficiency]\label{thm: KKT optimality}
The feedback gain $K^*$ is optimal to the frequency-dependent optimization problem described in Section~\ref{SECTION: Problem Formulation} if and only if for $i=1,2,\dots,m$,
\begin{align*}
\mathbb{E} \left[\frac{1 +\mathcal{X}_{n,i}}{ 1 + {K^*}^T \mathcal{X}_{n}}\right] = 1, \;\;\; \text{ if } K_i^* >0
\end{align*}
\begin{align*}
\mathbb{E} \left[\frac{1+\mathcal{X}_{n,i}}{1+ {K^*}^T \mathcal{X}_{n}}\right] \leq 1,\;\;\; \text{ if } K_i^*=0 .
\end{align*}
\end{theorem}
\vspace{0mm}
\begin{proof} To prove necessity,
define $\mathcal{R}_n := \mathcal{X}_n + \bf{1}$ representing the total return with~$i$th component \mbox{$\mathcal{R}_{n,i} = \mathcal{X}_{n,i}+1 $} and
\mbox{${\bf 1} := [1 \; 1\; \cdots \; 1]^T \in \mathbb{R}^m$}.
We now consider the frequency-dependent optimization problem as an equivalent constrained convex minimization problem as follows:
\begin{align*}
&\min_K\; -\mathbb{E}[\log K^T \mathcal{R}_n]\\
&\text{subject to}\\
&\hspace{5mm} K^T {\bf 1}-1 = 0 ;
\\
& \hspace{5mm} -K^T e_i \leq 0, \;\;\; i=1,2,...,m
\end{align*}
where $e_i$ is the unit vector having $1$ as its $i$th component.
Then the Karush-Kuhn-Tucker conditions, see e.g.,~\cite{Boyd_Vandenberghe_2004}, tell us that if~$K^*$ solves this problem, then there exist a scalar $\lambda \in \mathbb{R}$ and a vector~\mbox{$\mu \in \mathbb{R}^m$} with components~$\mu_j \geq 0$ such that
\begin{align*}
\nabla (-\mathbb{E}[\log {K^*}^T \mathcal{R}_n]) + \lambda {\bf 1} - \sum_{i=1}^m \mu_i e_i = \textbf{0}
\end{align*}
with $\textbf{0} \in \mathbb{R}^m$ being zero vector and
$
\mu_j {K^*}^T e_j = 0
$
for \mbox{$ j = 1,2,\dots,m$}.
This implies that for $j = 1,\dots,m$, we have~$\mu_j{K_j^*} = 0$ and
\begin{align} \label{eq:01}
-\mathbb{E} \left[\frac{\mathcal{R}_{n,j}}{ {K^*}^T \mathcal{R}_{n}}\right] + \lambda- \mu_j = 0.
\end{align}
We note here that the interchanging of differentiation and expectation is justifiable since $\mathcal{X}_{n,i}$ is bounded.
Now we take a weighted sum of equation~(\ref{eq:01}); i.e.,
\begin{align*}
\sum_{j=1}^m K_j^* \left( -\mathbb{E} \left[\frac{\mathcal{R}_{n,j}}{ {K^*}^T \mathcal{R}_{n}}\right] + \lambda - \mu_j \right) = 0
\end{align*}
which leads to
\begin{align*}
-\sum_{j=1}^m K_j^* \mathbb{E} \left[\frac{\mathcal{R}_{n,j}}{ {K^*}^T \mathcal{R}_{n}}\right] + \sum_{j=1}^m K_j^* \lambda - \sum_{j=1}^m K_j^*\mu_j = 0.
\end{align*}
Using the facts that $\mu_j K_j^* = 0$ for all $j$ and $\sum_{j=1}^m K_j^* = 1$, we have
\begin{align}\label{eq:02}
-\sum_{j=1}^m K_j^* \mathbb{E} \left[\frac{\mathcal{R}_{n,j}}{ {K^*}^T \mathcal{R}_{n}}\right] + 1 \cdot \lambda + 0 = 0.
\end{align}
Note that
\begin{align*}
\sum_{j=1}^m K_j^* \mathbb{E} \left[\frac{\mathcal{R}_{n,j}}{ {K^*}^T \mathcal{R}_{n}}\right] &= \mathbb{E} \left[\frac{{K^*}^T\mathcal{R}_{n}}{ {K^*}^T \mathcal{R}_{n}}\right] = 1.
\end{align*}
Thus, substituting the result above back into equation~(\ref{eq:02}), we obtain $\lambda = 1$. This tells us that for
$j = 1,\dots,m$,
\begin{align*}
-\mathbb{E} \left[\frac{\mathcal{R}_{n,j}}{{K^*}^T \mathcal{R}_{n}}\right] + 1- \mu_j = 0
\end{align*}
and
$
\mu_j{K_j^*} = 0.
$
Thus, to sum up, if $K_j^* >0$, then $\mu_j =0$ and
\begin{align*}
\mathbb{E} \left[\frac{\mathcal{R}_{n,j}}{{K^*}^T \mathcal{R}_{n}}\right] = 1.
\end{align*}
If $K_j^*=0$, then $\mu_j \geq 0$, which implies that
\begin{align*}
\mathbb{E} \left[\frac{\mathcal{R}_{n,j}}{{K^*}^T \mathcal{R}_{n}}\right] \leq 1.
\end{align*}
Now, transforming $\mathcal{R}_n$ back to $\mathcal{X}_n$ via $\mathcal{R}_n = \mathcal{X}_n + \bf{1}$ and using the fact that $\sum_{i=1}^m K_i^* = 1$ again, we obtain the desired conditions.
Finally, by concavity of $\mathbb{E}[\log K^T\mathcal{R}_n]$, the conditions above are also~sufficient. \qedhere
\end{proof}
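As a numerical illustration of these optimality conditions (an informal Monte Carlo check, separate from the proof), one can estimate the residuals $\mathbb{E}\big[(1+\mathcal{X}_{n,i})/(1+K^T\mathcal{X}_n)\big]$ for each asset from samples of $\mathcal{X}_n$:

```python
import numpy as np

def kkt_residuals(K, Xn_samples):
    """Monte Carlo estimate of E[(1 + X_{n,i}) / (1 + K^T X_n)] for each i.

    Xn_samples has shape (N, m): N samples of the compound return vector."""
    X = np.asarray(Xn_samples, dtype=float)
    denom = 1.0 + X @ np.asarray(K, dtype=float)
    return np.mean((1.0 + X) / denom[:, None], axis=0)
```

For the two-point example with $\mathcal{X}_{n,1} \in \{1, -0.5\}$ equally likely and a riskless second asset, both residuals equal one at $K^* = (0.5, 0.5)$, consistent with $K_i^* > 0$ for both assets; at $K = (1, 0)$, the residual for the zero-weight asset exceeds one, flagging non-optimality.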
\textbf{Remarks:} It is interesting to note that if $n=1$, then Theorem~\ref{thm: KKT optimality} reduces to the classical result in Kelly theory; see \cite[Theorem 16.2.1]{Cover_Thomas_2012}.
Additionally, Theorem~\ref{thm: KKT optimality} is also closely related to the Dominant Asset Theorem given in our prior work \cite{Hsieh_Gubner_Barmish_2018_CDC}.
For the sake of completeness, we recall the statement of the theorem as follows: Given a collection of $m \geq 2$ assets, if Asset~$j$ is \textit{dominant}; i.e., Asset~$j$ satisfies
\[
\mathbb{E}\left[ \frac{1+X_i(0)}{1+X_j(0)}\right]\leq 1
\] for every other asset $ i \neq j$, then $K^* = e_j.$ Thus, $K_i^* = 0$ for $i \neq j.$\footnote{Intuitively speaking, the Dominant Asset Theorem tells us that when the condition is right, one should ``bet the farm.''} In fact, this result can be viewed as a special case of Theorem~\ref{thm: KKT optimality}.
It should also be noted that the Dominant Asset Theorem provides sufficiency for the optimal $K^*$, not necessity.
Fortunately, with the aid of Theorem~\ref{thm: KKT optimality}, we are now able to prove the missing necessity part of the Dominant Asset Theorem. This is summarized in the theorem to follow.
\begin{theorem}[Extended Dominant Asset Theorem]\label{thm: Extended Dominant Asset Theorem}
The optimal Kelly feedback gain $K^*=e_j$ if and only if $$\mathbb{E}\bigg[\frac{1+X_i(0)}{1+X_j(0)}\bigg]\leq 1$$ for every other asset $i \neq j$.
\end{theorem}
\begin{proof}
The sufficiency is proved in our prior work in~\mbox{\cite[Dominant Asset Theorem]{Hsieh_Gubner_Barmish_2018_CDC}}. Hence, for the sake of brevity, we only provide a proof of necessity here. Assuming that $K^*=e_j$, we must show the desired inequality holds. Applying Theorem~\ref{thm: KKT optimality}, it follows that
for $i \neq j,$ $K_i^*=0$ and
\[
\mathbb{E} \left[\frac{1+\mathcal{X}_{n,i}}{1+ {K^*}^T\mathcal{X}_{n}}\right] = \mathbb{E} \left[\frac{1+\mathcal{X}_{n,i}}{1+ \mathcal{X}_{n,j}}\right] \leq 1.
\]
Using the definition $\mathcal{X}_{n,i} = \prod_{k=0}^{n-1}(1+X_i(k))-1$, the inequality above implies that
\begin{align*}
\mathbb{E} \left[ \prod_{k=0}^{n-1}\frac{1+X_i(k)}{1+ X_j(k)}\right] \leq 1.
\end{align*}
Since the $X_i(k)$ are i.i.d. in $k$, we have
\begin{align}\label{eq:03}
\left( \mathbb{E} \left[\frac{1+X_i(0)}{1+ X_j(0)}\right] \right)^n \leq 1.
\end{align}
Since $X_i(0)>-1$ for all $i=1,2,\dots,m$, the ratio
$$\frac{1+X_i(0)}{1+ X_j(0)} > 0$$ with probability one; hence its expected value is also strictly positive. Thus, in combination with inequality~(\ref{eq:03}), we conclude
\[
\mathbb{E} \left[\frac{1+X_i(0)}{1+ X_j(0)}\right] \leq 1. \qedhere
\]
\end{proof}
\textbf{Remark:} When the condition
\begin{align}\label{eq: dominant ratio}
\mathbb{E}\bigg[\frac{1+X_i(0)}{1+X_j(0)}\bigg]\leq 1
\end{align}
holds for every other asset $i \neq j$, the Extended Dominant Asset Theorem~\ref{thm: Extended Dominant Asset Theorem} tells us to invest all available funds in the $j$th asset. In the sequel, inequality~(\ref{eq: dominant ratio}) is called the \textit{dominant asset condition}.
As seen later in Section~\ref{SECTION: Dominant Ratio Trading Algorithm}, this condition allows us to construct a simple algorithm which may be useful for practical stock trading.
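As a quick numerical illustration of how one might check the dominant asset condition~(\ref{eq: dominant ratio}), consider the following sketch. The Gaussian return distributions, means, and variances below are purely hypothetical and not part of the paper; the expectation is estimated by a sample mean.

```python
import random

def expected_ratio(samples_i, samples_j):
    """Sample-mean estimate of E[(1 + X_i(0)) / (1 + X_j(0))]."""
    return sum((1 + xi) / (1 + xj)
               for xi, xj in zip(samples_i, samples_j)) / len(samples_i)

random.seed(0)
n = 100_000
# Hypothetical Gaussian returns: asset j has a higher mean than asset i.
x_j = [random.gauss(0.02, 0.01) for _ in range(n)]
x_i = [random.gauss(0.00, 0.01) for _ in range(n)]
r = expected_ratio(x_i, x_j)
print(r < 1)  # dominant asset condition holds: put all funds in asset j
```

In practice one would replace the synthetic samples with realized returns, as done in Section~\ref{SECTION: Dominant Ratio Trading Algorithm}.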
In the rest of this section, some other new optimality results are provided.
\begin{lemma}[Expected Ratio Optimality]\label{lemma: Expected Ratio Optimality}
Let $K^*$ be the frequency-based optimal Kelly feedback gain. Then
\[
\mathbb{E}\bigg[\frac{ 1+{K}^T\mathcal{X}_n}{1+{K^*}^T\mathcal{X}_n} \bigg] \leq 1
\]
for any $K$.
In addition, we have
\[
\mathbb{E}\bigg[\log \frac{1+K^T\mathcal{X}_n}{1+{K^*}^T\mathcal{X}_n}\bigg] \leq 0
\]for any $K$.
\end{lemma}
\begin{proof} Let $K$ be given.
From Theorem~\ref{thm: KKT optimality}, it follows that for a Kelly optimal feedback gain $K^*$, we have
\[
\mathbb{E}\bigg[ \frac{1+\mathcal{X}_{n,i}}{1+{K^*}^T\mathcal{X}_n} \bigg] \leq 1
\] for all $i=1,\dots, m$.
Multiplying this inequality by $K_i$ and summing over~$i$, we obtain
\[
\sum_{i=1}^m K_i \mathbb{E}\bigg[ \frac{1+\mathcal{X}_{n,i}}{1+{K^*}^T\mathcal{X}_n} \bigg] \leq \sum_{i=1}^m K_i = 1
\]
which is equivalent to
\[
\mathbb{E}\bigg[\frac{ 1+{K}^T\mathcal{X}_n}{1+{K^*}^T\mathcal{X}_n} \bigg] \leq 1.
\]
To complete the proof, we invoke Jensen's inequality on the quantity $\mathbb{E}\left[\log \frac{1+K^T\mathcal{X}_n}{1+{K^*}^T\mathcal{X}_n}\right]$ and observe that
\begin{align*}
\mathbb{E}\bigg[\log \frac{1+K^T\mathcal{X}_n}{1+{K^*}^T\mathcal{X}_n}\bigg] &\leq \log \mathbb{E}\bigg[ \frac{1+K^T\mathcal{X}_n}{1+{K^*}^T\mathcal{X}_n}\bigg] \\[.5ex]
& \leq \log 1 = 0.
\end{align*}
Hence, the proof is complete.
\end{proof}
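Lemma~\ref{lemma: Expected Ratio Optimality} is easy to verify numerically. The sketch below uses a hypothetical two-asset coin-flip model (our own toy example, not from the paper) with $n=1$: it finds the Kelly gain by grid search and checks that the expected wealth ratio never exceeds one.

```python
import math

# Hypothetical model: asset 1 returns +0.5 or -0.4 with probability 1/2;
# asset 2 is cash (zero return).  K = (k, 1 - k) with k in [0, 1].
outcomes = [0.5, -0.4]
probs = [0.5, 0.5]

def log_growth(k):
    """Expected log-growth E[log(1 + K^T X)] for a single step (n = 1)."""
    return sum(p * math.log(1 + k * x) for p, x in zip(probs, outcomes))

grid = [i / 1000 for i in range(1001)]
k_star = max(grid, key=log_growth)       # closed form gives k* = 0.25

def expected_ratio(k):
    """E[(1 + K^T X)/(1 + K*^T X)]; the lemma asserts this is <= 1."""
    return sum(p * (1 + k * x) / (1 + k_star * x)
               for p, x in zip(probs, outcomes))

worst = max(expected_ratio(k) for k in grid)
print(k_star, worst <= 1 + 1e-12)
```

In this particular model the KKT conditions hold with equality at the interior optimum, so the expected ratio equals one for every $k$; in general one only obtains the inequality.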
\textbf{Remark:} Lemma~\ref{lemma: Expected Ratio Optimality} above tells us that no admissible feedback gain $K$ can beat the frequency-based Kelly optimal portfolio in the sense of the expected wealth ratio $\mathbb{E}[\frac{1+K^T\mathcal{X}_n}{1+{K^*}^T\mathcal{X}_n}]$. In addition, we note that for any $K$,
\begin{align*}
1+K^T\mathcal{X}_n &= 1+\sum_{i=1}^{m}K_i\mathcal{X}_{n,i} \\
& \geq 1+\min_j\mathcal{X}_{j,\min}\sum_{i=1}^{m}K_i\\
& >0,
\end{align*}
where $\mathcal{X}_{j,\min} := (1+X_{\min,j})^n-1 > -1$ (see the Appendix). Hence, the ratio $\frac{1+K^T\mathcal{X}_n}{1+{K^*}^T\mathcal{X}_n} > 0$. Now using Markov's inequality, the condition \[
\mathbb{E}\bigg[\frac{ 1+K^T\mathcal{X}_n }{1+{K^*}^T\mathcal{X}_n}\bigg] \leq 1
\]for any $K$ implies that
\[
P \left(\frac{ 1+K^T\mathcal{X}_n }{1+{K^*}^T\mathcal{X}_n} > c\right) \leq \frac{1}{c}
\] for any $c >0.$ The following lemma gives a stronger result on the asymptotic relative optimality of $K^*$.
\begin{lemma}[Asymptotic Relative Optimality]\label{lemma: Asymptotic Relative Optimality}
The optimal feedback vector $K^*$ is such that
\[
\limsup_{n \to \infty} \frac{1}{n}\log \frac{1+K^T\mathcal{X}_n}{1+{K^*}^T\mathcal{X}_n}\leq 0
\] with probability one.
\end{lemma}
\begin{proof}
The idea of the proof is very similar to the one presented in \cite[Theorem~16.3.1]{Cover_Thomas_2012}. However, for the sake of completeness, we provide our own proof here. Recalling Lemma~\ref{lemma: Expected Ratio Optimality}, we have
\[
\mathbb{E}\bigg[\frac{1+K^T\mathcal{X}_n}{1+{K^*}^T\mathcal{X}_n}\bigg] \leq 1
\]
and Markov's inequality tells us that
\[
P \left(\frac{1+K^T\mathcal{X}_n}{1+{K^*}^T\mathcal{X}_n} > c_n\right) \leq \frac{1}{c_n}
\] for any $c_n >0.$ Hence,
\[
P\left( \frac{1}{n} \log \frac{1+K^T\mathcal{X}_n }{1+{K^*}^T\mathcal{X}_n }>\frac{1}{n} \log c_n \right) \leq \frac{1}{c_n}.
\]
Taking $c_n := n^2$ and summing over all $n$, we have
\[
\sum_{n=1}^\infty P \left( \frac{1}{n} \log \frac{1+K^T\mathcal{X}_n }{1+{K^*}^T\mathcal{X}_n }>\frac{2\log n}{n} \right) \leq \sum_{n=1}^\infty \frac{1}{n^2} < \infty.
\]
Therefore, applying the Borel--Cantelli Lemma (e.g., see \cite{Rosenthal_probability}), we obtain
\[
P \left(\frac{1}{n} \log \frac{1+K^T\mathcal{X}_n }{1+{K^*}^T\mathcal{X}_n }>\frac{2\log n}{n} \,\, \text{infinitely often} \right) = 0.
\]
Thus, with probability one, there exists $N>0$ such that for all $n\geq N$, we have
\[
\frac{1}{n} \log \frac{1+K^T\mathcal{X}_n }{1+{K^*}^T\mathcal{X}_n }\leq \frac{2\log n}{n}.
\]
It follows that
\[
\limsup_{n \to \infty} \frac{1}{n}\log \frac{1+K^T\mathcal{X}_n }{1+{K^*}^T\mathcal{X}_n }\leq 0
\]
with probability one.
\end{proof}
\vspace{3mm}
\textbf{Remark:} Note that for $n\geq 1$, $V(n) = (1+K^T \mathcal{X}_n)V(0)$; thus, Lemma~\ref{lemma: Asymptotic Relative Optimality} implies that
\[
\limsup_{n \to \infty} \frac{1}{n} \log \frac{V(n)}{V^*(n)} \leq 0
\]
with probability one, where $V^*(n) = (1+{K^*}^T \mathcal{X}_n)V(0)$.
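A simulation makes the asymptotic statement concrete. In the hypothetical coin-flip asset model below (our own illustration, not from the paper), an over-leveraged gain $K$ falls behind the Kelly gain $K^*$ at an exponential rate, so the normalized log-ratio settles below zero.

```python
import math
import random

random.seed(1)

def log_ratio_rate(k, k_star, n):
    """One sample path of (1/n) * log(V(n)/V*(n)) for a single risky
    asset returning +0.5 or -0.4 with probability 1/2, using
    log V(n) = sum_k log(1 + K * X(k))."""
    s = 0.0
    for _ in range(n):
        x = 0.5 if random.random() < 0.5 else -0.4
        s += math.log(1 + k * x) - math.log(1 + k_star * x)
    return s / n

rate = log_ratio_rate(k=1.0, k_star=0.25, n=10_000)
print(rate)  # negative: the over-bet portfolio loses ground exponentially
```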
\section{Dominant Ratio Trading Algorithm}\label{SECTION: Dominant Ratio Trading Algorithm}
Besides their theoretical interest, as mentioned in Section~\ref{SECTION: Main Results}, Theorem~\ref{thm: KKT optimality} and the Extended Dominant Asset Theorem~\ref{thm: Extended Dominant Asset Theorem} may be useful for designing an algorithm for practical stock trading.
The main idea is to take advantage of the Dominant Asset Condition stated in Theorem~\ref{thm: Extended Dominant Asset Theorem}; i.e.,
\[
\mathbb{E}\left[ \frac{1+X_i(k)}{1+X_j(k)}\right] \leq 1,
\]
for all $i \neq j$. If this condition holds, we set $K_j^* = 1$;
otherwise, $K_j^* = 0.$
\subsection{Bridging Theory and Practice}
To implement the idea described above, we proceed as follows: Using $s_i(k)$ to denote the $k$th daily \textit{realized price} for the $i$th stock, we calculate the associated \textit{realized return}, call it $x_i(k)$, where
\[
x_i(k) := \frac{s_i(k+1)-s_i(k)}{s_i(k)}
\] for $i=1,2,\dots,m$.
It should be noted that, in practice, the realized returns $x_i(k)$ are often nonstationary. Hence, when testing the dominant asset condition, we work with a sliding window consisting of the most recent~$M$ trading steps.\footnote{Again, we note that the unit of ``steps'' can be any time scale such as milliseconds, minutes, days, or months.}
That is, we estimate the expected ratio in the Dominant Asset Condition by
\[
R_{ij}(k):= \frac{1}{M} \sum_{\ell=0}^{M-1}\frac{1+x_i(k-\ell)}{1+x_{j}(k-\ell)}.
\]
Then, if $R_{ij}(k) \leq 1$ for all $i \neq j$, we set $K_j^*(k)=1$; otherwise, we set $K_j^*(k)=0.$ We call the procedure above the Dominant Ratio Trading Algorithm.
An illustrative example using historical price data is provided in the next subsection.
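A minimal sketch of the procedure above follows. The toy data, the window length, and the convention of holding all-zero gains (staying in cash) when no asset dominates are our own illustrative assumptions.

```python
def dominant_ratio_gains(returns, M):
    """At each step k with at least M past returns, estimate R_ij(k) over
    a sliding window of length M and set K_j(k) = 1 iff R_ij(k) <= 1 for
    all i != j.  Here returns[k][i] is the realized return x_i(k)."""
    m = len(returns[0])
    gains = []
    for k in range(M - 1, len(returns)):
        window = returns[k - M + 1:k + 1]
        K = [0.0] * m            # all zeros = stay in cash (our convention)
        for j in range(m):
            R = [sum((1 + row[i]) / (1 + row[j]) for row in window) / M
                 for i in range(m)]
            if all(R[i] <= 1 for i in range(m) if i != j):
                K[j] = 1.0
                break
        gains.append(K)
    return gains

# Toy data: asset 1 steadily outgrows asset 0, so it dominates.
data = [[0.00, 0.01]] * 5
print(dominant_ratio_gains(data, M=3))  # [[0.0, 1.0], [0.0, 1.0], [0.0, 1.0]]
```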
\subsection{Illustrative Example Via Back-Testing}
Consider a one-year portfolio consisting of three assets, held from February~14, 2019 to February~14,~2020: Vanguard Total World Stock Index Fund ETF Shares \mbox{(Ticker: VT)}, Vanguard Total Bond Market Index Fund ETF Shares (Ticker: BND), and Vanguard Total World Bond ETF \mbox{(Ticker: BNDX)}, whose price trajectories are shown in Figure~\ref{fig:stockprices}.\footnote{The data are provided by Wharton Research Data Services.
}
\begin{figure}
\centering
\includegraphics[width=.9\linewidth]{figs/S1_S2_S3.eps}
\caption{Daily Closing Stock Prices $S_i(k), i=1,2,3$ for VT, BND, and BNDX, respectively.}
\label{fig:stockprices}
\end{figure}
Beginning with initial account value $V(0)=\$1$, we implement the algorithm described above using a window size of $M=20$ days; that is, the initial trade is triggered after receiving the first twenty daily prices.
We ran a MATLAB script and plotted a typical trading performance in terms of the trajectory of the account value~$V(k)$, which is shown in Figure~\ref{fig:goldenratiotest3assets}.
There we see that the account value obtained by the Dominant Ratio Trading Algorithm increases from $V(0)=1$ to $V(252) \approx 1.23$, a return of about $23\%$, which is clearly higher than the account value obtained by the standard buy-and-hold strategy.
We also report the corresponding trading signals~$K_i(k)$ for $i=1,2,3$ in Figure~\ref{fig:k1k2k3}, where a flavor of bang-bang control is seen.
To close this section, we also tested various sliding window sizes, equally spaced with increment $5$ between elements up to $M=60$, and observed that the algorithm produces trading performance similar to that seen in Figure~\ref{fig:goldenratiotest3assets}.
This example demonstrates the potential for bridging theory and practice in stock trading. Further developments along this line might be fruitful to pursue as a direction of future research; for example, an initial computational complexity analysis and trading with various stocks may be of interest to pursue next.
\begin{figure}
\centering
\includegraphics[width=0.9\linewidth]{figs/Golden_Ratio_test_VT_BND_BNDX_ver02.eps}
\caption{Trading Performance Comparison: Dominant Ratio Trading Algorithm with $M=20$ versus the Buy-and-Hold Strategy.}
\label{fig:goldenratiotest3assets}
\end{figure}
\begin{figure}
\centering
\includegraphics[width=0.9\linewidth]{figs/K1_K2_K3.eps}
\caption{Feedback Gains $K_i(k)$ with $i=1,2,3$ for VT, BND, and BNDX, respectively. One sees bang-bang-flavored control signals.}
\label{fig:k1k2k3}
\end{figure}
\section{Conclusion and Future Work} \label{SECTION: Conclusion}
In this paper, we studied necessary and sufficient conditions for the frequency-based optimal Kelly portfolio.
With the aid of these conditions, we derived various different optimality characterizations such as expected ratio optimality, asymptotic relative optimality, and Extended Dominant Asset Theorem.
Moreover, to bridge theory and practice, we used the notion of a dominant asset to construct a trading algorithm which tells the trader when to invest all available funds in the dominant asset.
Regarding further research, one obvious continuation would be to study the case when $K_i<0$ is allowed; i.e., short selling should be considered as a next-level extension of the formulation.
In this situation, we envision similar results along the lines of those given here.
In addition, it would be of interest to relax some of the assumptions in the formulation from i.i.d. return sequences to time-dependent sequences.
Finally,
for cases when the distribution model for the returns~$X_i(k)$ is either partially known or completely unknown, it would be of interest to study the extent to which the theory in this paper can be extended. For example, an approach along the lines of the data-driven algorithm described in Section~\ref{SECTION: Dominant Ratio Trading Algorithm} might be~helpful.
\section{ACKNOWLEDGMENTS}
The author thanks Professor B. Ross Barmish and Professor John A. Gubner for leading the author into this field and for recognizing the potential and applicability of control theory.
\appendices
\section{Survival Considerations}
\label{SECTION: Surivial}
\vspace{0mm}
In the context of stock trading, the very first goal of a trader is to ensure that bankruptcy never occurs during the entire trading period; i.e., one must assure $V(k)>0$ for all~$k$.
If this is the case, we say the trades are \textit{survivable}.\footnote{
As stability is to the classical control system, so is survivability to the financial system.
In fact, in our prior work \cite{Hsieh_Barmish_Gubner_2019_TAC}, the survivability problem is regarded as a state-positivity problem.} Below, we provide a result which indicates that any feedback gain $K$ in the constraint set $\mathcal{ K}$ considered in Section~\ref{SECTION: Problem Formulation} assures survival.
\vspace{5mm}\noindent
\textbf{Lemma:}\label{lemma: Survival for V(k+1)}
\textit{If $K \in \mathcal{ K}$, then
$V(n)>0$ for all $n\geq 1$.}
\begin{proof}
We first note that for $n=1$, the account value is
\begin{align*}
V(1) = (1+K^T\mathcal{X}_1)V(0)
= (1+K^TX(0))V(0)>0.
\end{align*}
Now, to show \mbox{$V(n) > 0$} for $n>1$, we observe that
\begin{align*}
V(n) &= (1+K^T\mathcal{X}_n)V(0)\\[.5ex]
&= \left(1+ \sum_{i=1}^m K_i \left(\prod_{k=0}^{n-1}(1+X_i(k))-1\right) \right)V(0) \\[.5ex]
& \geq \left(1+ \sum_{i=1}^m K_i \mathcal{X}_{i,\min} \right)V(0)
\end{align*}
where $\mathcal{X}_{i,\min} := (1+X_{\min,i})^n-1 > -1$ for all $i$. Hence,
\begin{align*}
V(n) & \geq \left(1+\min_{i=1,\dots,m}\mathcal{X}_{i,\min} \sum_{j=1}^m K_j \right)V(0)\\[.5ex]
& = \left(1+\min_{i=1,\dots,m}\mathcal{X}_{i,\min} \right)V(0)\\[.5ex]
&>0
\end{align*}
where the last inequality holds since $\mathcal{X}_{i,\min}>-1$ for all $i$ implies $\min_i \mathcal{X}_{i,\min}>-1$
and the proof is complete.
\end{proof}
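The lemma's bound is easy to confirm numerically. In the sketch below (the return range, horizon, and random simplex point are our own illustrative assumptions), any $K$ on the unit simplex keeps the account value above the worst-case floor $(1+X_{\min})^n V(0) > 0$ when all assets share the same lower return bound $X_{\min}$.

```python
import random

random.seed(2)

def account_value(K, paths, V0=1.0):
    """V(n) = (1 + K^T Xcal_n) V(0), where Xcal_{n,i} = prod_k(1+X_i(k)) - 1."""
    prod = [1.0] * len(K)
    for step in paths:                      # paths[k][i] = X_i(k)
        prod = [c * (1 + x) for c, x in zip(prod, step)]
    return (1 + sum(k * (c - 1) for k, c in zip(K, prod))) * V0

m, n, x_min = 3, 20, -0.5                   # returns drawn from (-0.5, 0.5)
paths = [[random.uniform(x_min, 0.5) for _ in range(m)] for _ in range(n)]
w = [random.random() for _ in range(m)]
K = [wi / sum(w) for wi in w]               # random point on the unit simplex
V = account_value(K, paths)
floor = (1 + x_min) ** n                    # worst-case bound from the lemma
print(V > 0 and V >= floor)                 # True for every K in the simplex
```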
Let $\mathcal{K}$ be a non-archimedean locally compact field with finite
residue field of order $q$. Let $G$ be an almost simple linear
algebraic group defined over $\mathcal{K}$ of $\mathcal{K}$-rank $\ell+1$. Let $\mathfrak T$
be the Bruhat-Tits building associated with $G(\mathcal{K})$ \cite{BT}. This
is an infinite, locally finite, contractible simplicial complex of
dimension $\ell+1$. Let $X$ be the link of a vertex of $\mathfrak T$. $X$ is
a finite simplicial complex of dimension $\ell$, which is a building
in the sense of Tits \cite{Brown}. In \cite{Garland}, Garland
defined a certain combinatorial Laplace operator $\Delta$ acting on
the $i$-cochains $C^i(X)$, $0\leq i\leq \ell-1$; see Definition
\ref{defn1.7}. $\mathfrak T$ can be realized as the skeleton of a
non-archimedean symmetric space \cite[Ch. 5]{Berkovich}, and from
this point of view the operators $\Delta$ are the non-archimedean
analogues of curvature transformations of Riemannian symmetric
spaces. Denote by $m^i(X)$ the minimal non-zero eigenvalue of
$\Delta$ acting on $C^i(X)$. By a rather ingenious argument, Garland
proved that for any $\varepsilon>0$ there is a constant $q(\varepsilon, \ell)$
depending only on $\varepsilon$ and $\ell$ such that if $q>q(\varepsilon, \ell)$
then $m^i(X)\geq \ell-i-\varepsilon$. Denote by $M^i(X)$ the maximal
eigenvalue of $\Delta$. The main result of this paper is the
following (see Theorems \ref{prop_ny} and \ref{thm-last}):
\begin{thm}
$M^i(X)=\ell+1$ and $m^i(X)\leq \ell-i$.
\end{thm}
In fact we prove this result for an arbitrary finite building $X$.
Note that our estimate on $m^i(X)$ is the best possible estimate
which does not depend on $q$. Based on some explicit calculations,
we also propose a conjectural description of the behavior of all the
eigenvalues of $\Delta$ as $q\to \infty$; see Conjecture \ref{conj}.
The method of our proof is based on a modification of Garland's
original arguments. The results in \cite{Garland} are stated for
buildings. On the other hand, as is nicely explained in
\cite{Borel}, part of the argument in \cite{Garland} works for quite
general simplicial complexes. In $\S$\ref{SecGM} we follow
\cite{Borel}.
The main application of Garland's estimate on $m^i(X)$ is a
vanishing result for the cohomology groups of discrete cocompact
subgroups of $G(\mathcal{K})$; see $\S$\ref{ssGV}. This vanishing theorem
plays an important role in many problems arising in representation
theory and arithmetic geometry. Incidentally, our explicit
calculations of the eigenvalues of Laplace operators indicate that,
despite the hope expressed in \cite{Garland}, Garland's method is
not powerful enough to prove the vanishing theorem unconditionally,
i.e., without a restriction on $q$ being sufficiently large.
\section{Proofs}\label{SecGM}
\subsection{Simplicial complexes} We start by fixing the terminology
and notation related to simplicial complexes.
A \textit{simplicial complex} is a collection $X$ of finite nonempty
sets, such that if $s$ is an element of $X$, so is every nonempty
subset of $s$. The element $s$ of $X$ is called a \textit{simplex}
of $X$; its \textit{dimension} is $|s|-1$. Each nonempty subset of
$s$ is called a \textit{face} of $s$. A simplex of dimension $i$
will usually be referred to as $i$-simplex. The \textit{dimension}
$\dim(X)$ of $X$ is the largest dimension of one of its simplices
(or is infinite if there is no such largest dimension). A
subcollection of $X$ that is itself a complex is called a
\textit{subcomplex} of $X$. The \textit{vertices} of the simplex $s$
are the one-point elements of the set $s$.
Let $s$ be a simplex of $X$. The \textit{star} of $s$ in $X$,
denoted $\mathrm{St}(s)$, is the subcomplex of $X$ consisting of the union
of all simplices of $X$ having $s$ as a face. The \textit{link} of
$s$, denoted $\mathrm{Lk}(s)$, is the subcomplex of $\mathrm{St}(s)$ consisting of
the simplices which are disjoint from $s$. If one thinks of $\mathrm{St}(s)$
as the ``unit ball'' around $s$ in $X$, then $\mathrm{Lk}(s)$ is the ``unit
sphere'' around $s$.
A specific ordering of the vertices of $s$ up to an even permutation
is called an \textit{orientation} of $s$. An \textit{oriented}
simplex is a simplex $s$ together with an orientation of $s$. Denote
the set of $i$-simplices by $\widehat{S}_i(X)$, and the set of
oriented $i$-simplices by $S_i(X)$. We will denote the vertices
$\widehat{S}_0(X)=S_0(X)$ of $X$ also by $\mathrm{Ver}(X)$. For $s\in
S_i(X)$, $\bar{s}\in S_i(X)$ denotes the same simplex but with
opposite orientation. An $\mathbb{R}$-valued \textit{$i$-cochain} on $X$ is
a function $f$ from the set of oriented $i$-simplices of $X$ to
$\mathbb{R}$, such that $f(s)=-f(\bar{s})$. Such functions are also called
\textit{alternating}. The $i$-cochains naturally form an $\mathbb{R}$-vector
space which is denoted $C^i(X)$. If $i<0$ or $i>\dim(X)$, we let
$C^i(X)=0$.
\subsection{Laplace operators} From now on we assume that $X$ is a finite $n$-dimensional
complex such that
\begin{enumerate}
\item[($1_X$)] Each simplex of $X$ is a face of some $n$-simplex.
\end{enumerate}
For $s\in S_i(X)$, let $w(s)$ be the number of (non-oriented)
$n$-simplices containing $s$. In view of ($1_X$), $w(s)\neq 0$ for
any $s$.
\begin{lem}\label{lem-w} Let $\sigma\in S_i(X)$ be fixed. Then
$$
\sum_{\substack{s\in \widehat{S}_{i+1}(X)\\ \sigma\subset
s}}w(s)=(n-i)\cdot w(\sigma).
$$
\end{lem}
\begin{proof}
Given an $n$-simplex $t$ such that $\sigma\subset t$ there are
exactly $(n-i)$ simplices $s$ of dimension $(i+1)$ such that
$\sigma\subset s\subset t$. Hence in the sum of the lemma we count
every $n$-simplex containing $\sigma$ exactly $(n-i)$ times.
\end{proof}
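The counting identity of the lemma can be checked mechanically on a small complex. In the sketch below, the particular $2$-dimensional complex (given by its top simplices, so condition $(1_X)$ holds by construction) is our own toy example.

```python
from itertools import combinations

# Hypothetical 2-dimensional complex given by its top (n = 2) simplices.
top = [frozenset(t) for t in [(0, 1, 2), (0, 1, 3), (1, 2, 4)]]
n = 2

def w(s):
    """Number of n-simplices containing the simplex s."""
    return sum(1 for t in top if s <= t)

def faces(dim):
    """All dim-simplices of the complex (faces of some top simplex)."""
    out = set()
    for t in top:
        out |= {frozenset(c) for c in combinations(sorted(t), dim + 1)}
    return out

# Verify, for every i-simplex sigma:
#   sum over (i+1)-simplices s containing sigma of w(s) = (n - i) * w(sigma).
for i in (0, 1):
    for sigma in faces(i):
        lhs = sum(w(s) for s in faces(i + 1) if sigma < s)
        assert lhs == (n - i) * w(sigma)
print("identity verified")
```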
Define a positive-definite pairing on $C^i(X)$ by
\begin{equation}\label{eq-pairing}
(f,g):=\sum_{{s} \in \widehat{S}_i(X)} w(s)\cdot f(s)\cdot g(s),
\end{equation}
where $f,g\in C^i(X)$ and in $w(s)\cdot f(s)\cdot g(s)$ we choose
some orientation of $s$. (This is well-defined since both $f$ and
$g$ are alternating.)
Define the \textit{coboundary}, a linear transformation $d:
C^i(X)\to C^{i+1}(X)$, by
\begin{equation}\label{eq-d}
(df)([v_0,\dots,
v_{i+1}])=\sum_{j=0}^{i+1}(-1)^jf([v_0,\dots,\hat{v}_j,\dots,
v_{i+1}]),
\end{equation}
where $[v_0,\dots, v_{i+1}]\in S_{i+1}(X)$ and the symbol
$\hat{v}_j$ means that the vertex $v_j$ is to be deleted from the
array.
Let $s=[v_0,\dots, v_i]\in S_i(X)$ and $v\in \mathrm{Ver}(X)$. If the set
$\{v,v_0,\dots, v_i\}$ is an $(i+1)$-simplex of $X$, then we denote
by $[v, s]\in S_{i+1}(X)$ the oriented simplex $[v,v_0,\dots, v_i]$.
Define a linear transformation $\delta: C^i(X)\to C^{i-1}(X)$ by
\begin{equation}\label{eq-delta}
(\delta f)(s)=\sum_{\substack{v\in \mathrm{Ver}(X)\\ [v,s]\in S_i(X)}}
\frac{w([v,s])}{w(s)}f([v,s]).
\end{equation}
In (\ref{eq-d}) and (\ref{eq-delta}), by convention, an empty sum is
assumed to be $0$. One easily checks that $\delta$ is the adjoint of
$d$ with respect to (\ref{eq-pairing}):
\begin{lem}\label{v-prop1.12}
If $f\in C^i(X)$ and $g\in C^{i+1}(X)$, then $(df,g)=(f,\delta g)$.
\end{lem}
\begin{defn}\label{defn1.7} The \textit{Laplace operator} on $C^i(X)$ is the linear
operator $\Delta=\delta d$.
\end{defn}
Since $\Delta$ is self-adjoint with respect to the pairing
(\ref{eq-pairing}), and for any $f\in C^i(X)$, $(\Delta f,
f)=(df,df)\geq 0$, $\Delta$ is diagonalizable and its eigenvalues
are non-negative real numbers.
\begin{rem}
The Laplace operator in \cite[Def. 3.15]{Garland} is defined as
$\delta d+ d\delta$. What we denote by $\Delta$ in this paper is
denoted by $\Delta^+$ in \textit{loc. cit.} When $X$ is the link of
a vertex in a Bruhat-Tits building, Garland calls $\Delta^+$ the
\textit{$p$-adic curvature}; see \cite[p.~400]{Garland}.
\end{rem}
\subsection{Garland's method}\label{ssGM} For $v\in \mathrm{Ver}(X)$ let $\rho_v$ be
the linear transformation on $C^i(X)$ defined by:
$$
(\rho_vf)(s)=\left\{
\begin{array}{ll}
f(s) & \hbox{if } v\in s; \\
0 & \hbox{otherwise.}
\end{array}
\right.
$$
Since any $i$-simplex has $(i+1)$ vertices, for $f\in C^i(X)$ we
have the obvious equality
\begin{equation}\label{obv}
\sum_{v\in \mathrm{Ver}(X)}\rho_vf=(i+1)f.
\end{equation}
We also have the following obvious lemma:
\begin{lem}\label{v-prop1.13} \hfill
\begin{enumerate}
\item $\rho_v\rho_v=\rho_v$;
\item For $f, g \in C^i(X)$, $(\rho_vf, g)=(f,
\rho_vg)$.
\end{enumerate}
\end{lem}
Let $d_v$ and $\delta_v$ be the linear operators $d$ and $\delta$
acting on the cochains of the finite simplicial complex $\mathrm{Lk}(v)$,
and let $\Delta_v:=\delta_vd_v$. Note that $\mathrm{Lk}(v)$ is an
$(n-1)$-dimensional complex satisfying condition $(1_X)$. For
$f,g\in C^i(X)$ define their inner product on $\mathrm{Lk}(v)$ by
\begin{equation}\label{eq-locp}
(f,g)_v:=\sum_{s\in \widehat{S}_i(\mathrm{Lk}(v))}w_v(s)\cdot f(s)\cdot
g(s),
\end{equation}
where $w_v(s)$ is the number of $(n-1)$-simplices in $\mathrm{Lk}(v)$
containing $s$. This is simply the pairing (\ref{eq-pairing}) of the
restrictions of $f$ and $g$ to $\mathrm{Lk}(v)$.
\begin{lem}\label{lem-borel} If $f\in C^i(X)$, then
$$
i\cdot (\Delta f, f)+(n-i)(f,f)=\sum_{v\in \mathrm{Ver}(X)}(\Delta \rho_v f,
\rho_v f).
$$
\end{lem}
\begin{proof} See \cite[Lem. 1.3]{Borel}. In the proof it is crucial that the
inner product $(\cdot, \cdot)$ on $C^i(X)$ is defined using the
weights $w(s)$.
\end{proof}
\begin{cor}\label{cor1.14} Let $f\in C^i(X)$.
If there is a positive real number $\Lambda$ such that $$(\Delta \rho_v
f, \rho_v f)\leq \Lambda\cdot (\rho_v f, f)$$ for all $v\in \mathrm{Ver}(X)$,
then
$$i\cdot (\Delta f, f)\leq \left(\Lambda\cdot (i+1)-(n-i)\right)(f,f).$$
\end{cor}
\begin{proof}
This follows from Lemma \ref{v-prop1.13}, Lemma \ref{lem-borel} and
(\ref{obv}).
\end{proof}
From now on we assume that $i\geq 1$. Define a linear transformation
$\tau_v:C^{i}(X)\to C^{i-1}(X)$ by
$$
(\tau_vf)(s)=\left\{
\begin{array}{ll}
f([v,s]) & \hbox{if $s\in S_{i-1}(\mathrm{Lk}(v))$;}\\
0 & \hbox{otherwise.}
\end{array}
\right.
$$
\begin{lem}\label{prop7.12}
For $f, g\in C^i(X)$, we have
$(\tau_vf,\tau_vg)_v=(\rho_vf,\rho_vg)$.
\end{lem}
\begin{proof} We have
$$
(\tau_vf,\tau_vg)_v =\sum_{\sigma\in
\widehat{S}_{i-1}(\mathrm{Lk}(v))}w_v(\sigma)\cdot \tau_vf(\sigma)\cdot
\tau_vg(\sigma).
$$
It is easy to see that there is a one-to-one correspondence between
the $n$-simplices of $X$ containing $[v,\sigma]$ and the
$(n-1)$-simplices of $\mathrm{Lk}(v)$ containing $\sigma$, so
$w_v(\sigma)=w([v,\sigma])$. Hence the above sum can be rewritten as
$$
\sum_{s\in \widehat{S}_{i}(\mathrm{St}(v))}w(s)\cdot (\rho_vf)(s)\cdot
(\rho_vg)(s).
$$
Since $\rho_v f$ is zero away from $\mathrm{St}(v)$, the sum can be extended
to the whole $\widehat{S}_{i}(X)$, so the lemma follows.
\end{proof}
\begin{lem}\label{prop7.14} For $f\in C^i(X)$, we have
$(\Delta \rho_v f,\rho_v f)=(\Delta_v\tau_vf,\tau_vf)_v$.
\end{lem}
\begin{proof}
By Lemma \ref{v-prop1.13} and Lemma \ref{prop7.12},
$$
(\Delta \rho_v f,\rho_vf)=(\rho_v\Delta \rho_v
f,\rho_vf)=(\tau_v\Delta \rho_vf,\tau_vf)_v.
$$
Expanding $\tau_v\Delta\rho_v f(s)$ for $s\in S_{i-1}(\mathrm{Lk}(v))$, one
easily checks that $\tau_v\Delta\rho_v f=\Delta_v\tau_vf$, which
implies the claim.
\end{proof}
\begin{lem}\label{lem2.17}
If $c$ is an eigenvalue of $\Delta_v$ acting on $C^{i-1}(\mathrm{Lk}(v))$
for some $v\in \mathrm{Ver}(X)$, then $c$ is also an eigenvalue of $\Delta$
acting on $C^i(X)$.
\end{lem}
\begin{proof}
Let $f\in C^{i-1}(\mathrm{Lk}(v))$ be such that $\Delta_vf=c\cdot f$. Define
$g\in C^i(X)$ as follows. If $s\in S_i(X)$ does not contain $v$ then
$g(s)=0$. If $s=[v,\sigma]$ then $g(s)=f(\sigma)$. In particular,
$\tau_v g=f$. We know that $\tau_v \Delta \rho_v g=\Delta_v\tau_v
g$. Obviously $\rho_v g=g$, so $\tau_v \Delta g=\Delta_v f=c\cdot
f=\tau_v(c\cdot g)$. This implies that $\Delta g=c\cdot g$.
\end{proof}
\begin{notn} Given a finite simplicial complex $Y$ satisfying $(1_X)$,
let $M^i(Y)$ and $m^i(Y)$ be the maximal and minimal non-zero
eigenvalues of $\Delta$ acting on $C^i(Y)$, respectively. Denote
$$
\lambda^i_{\max}(Y):= \max_{\substack{v\in \mathrm{Ver}(Y)}}M^i(\mathrm{Lk}(v)) \quad
\text{and}\quad \lambda^i_{\min}(Y):= \min_{\substack{v\in
\mathrm{Ver}(Y)}}m^i(\mathrm{Lk}(v)).
$$
\end{notn}
\begin{cor}\label{cor2.19} $M^i(X)\geq \lambda^{i-1}_{\max}(X)$ and
$m^i(X)\leq \lambda^{i-1}_{\min}(X)$.
\end{cor}
\begin{prop}\label{cor7.15} For $f\in C^i(X)$, we have
$$
(\Delta \rho_vf,\rho_vf)\leq \lambda^{i-1}_{\max}(X)\cdot(\rho_vf,f).
$$
\end{prop}
\begin{proof}
By Lemma \ref{prop7.14}, $(\Delta
\rho_vf,\rho_vf)=(\Delta_v\tau_vf,\tau_vf)_v$. Let $\{e_1, \dots,
e_h\}$ be an orthogonal basis of $C^{i-1}(\mathrm{Lk}(v))$ with respect to
$(\cdot , \cdot)_v$ which consists of $\Delta_v$-eigenvectors. Write
$\tau_vf=\sum_j a_j e_j$. Then
$$ (\Delta_v \tau_v f, \tau_v f)_v \leq M^{i-1}(\mathrm{Lk}(v)) \sum_{j=1}^h
a_j^2 (e_j, e_j)_v \leq \lambda_{\max}^{i-1}(X)\cdot (\tau_v f, \tau_v
f)_v.$$
On the other hand, by Lemma \ref{v-prop1.13} and Lemma
\ref{prop7.12}, $ (\tau_vf,\tau_vf)_v =(\rho_vf,\rho_vf)= (\rho_vf,
f)$.
\end{proof}
Denote by $\tilde{H}^i(\mathrm{Lk}(v), \mathbb{R})$ the $i$th reduced simplicial
cohomology group of $\mathrm{Lk}(v)$.
\begin{thm}[Fundamental Inequality]\label{thmFI}
For $1\leq i\leq n-1$, we have
$$
i\cdot M^i(X)\leq (i+1)\cdot \lambda^{i-1}_{\max}(X)-(n-i).
$$
If $\tilde{H}^{i-1}(\mathrm{Lk}(v), \mathbb{R})=0$ for every $v\in \mathrm{Ver}(X)$, then
$$
i\cdot m^i(X)\geq (i+1)\cdot \lambda^{i-1}_{\min}(X)-(n-i).
$$
\end{thm}
\begin{proof} Let $f\in C^i(X)$ be such that $\Delta f=M^i(X)\cdot f$.
Proposition \ref{cor7.15} implies that the assumption of Corollary
\ref{cor1.14} is satisfied with $\Lambda=\lambda^{i-1}_{\max}(X)$. This
proves the first part. The second part is Garland's original
fundamental estimate \cite[$\S$5]{Garland}. For a proof see Theorem
1.5 in \cite{Borel}.
\end{proof}
\begin{notn} For $m\geq 1$, let $I_m$ denote the $m\times m$ identity matrix and let $J_m$
denote the $m\times m$ matrix whose entries are all equal to $1$.
The minimal polynomial of $J_m$ is $x(x-m)$.
\end{notn}
\begin{example}\label{Ex-P} Let $X$ be an $n$-simplex. We claim that the
eigenvalues of $\Delta$ acting on $C^i(X)$ are $0$ and $(n+1)$ for
any $0\leq i \leq n-1$. It is easy to see that $0$ is an eigenvalue,
so we need to show that the only non-zero eigenvalue of $\Delta$ is
$(n+1)$, or equivalently, $m^i(X)=M^i(X)=n+1$. First, suppose $i=0$.
Since for any simplex of $X$ there is a unique $n$-simplex
containing it, one easily checks that $\Delta$ acts on $C^0(X)$ as
the matrix $(n+1)I_{n+1}-J_{n+1}$. The only eigenvalues of this
matrix are $0$ and $(n+1)$. Now let $i\geq 1$. The link of any
vertex is an $(n-1)$-simplex, so by induction
$\lambda_{\min}^{i-1}(X)=\lambda^{i-1}_{\max}(X)=n$. Since the reduced
cohomology groups of a simplex vanish, the Fundamental Inequality
implies
$$
i\cdot M^i(X)\leq (i+1)n-(n-i)=i(n+1)
$$
and
$$
i\cdot m^i(X)\geq (i+1)n-(n-i)=i(n+1).
$$
Hence $(n+1)\leq m^i(X)\leq M^i(X)\leq (n+1)$, which implies the
claim.
\end{example}
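The $C^0$ case of the example can be verified mechanically. The sketch below checks that on the $n$-simplex the operator $\Delta=(n+1)I_{n+1}-J_{n+1}$ satisfies $\Delta^2=(n+1)\Delta$, so its only eigenvalues are $0$ and $n+1$.

```python
# Verify the example: on the full n-simplex every edge lies in exactly one
# n-simplex, so all weights w(s) equal 1 and on C^0 we have
#   (Delta f)(v) = sum_{u != v} (f(v) - f(u)),  i.e.  Delta = (n+1) I - J.
n = 3
V = list(range(n + 1))

def laplacian(f):
    """Apply Delta = (n+1) I - J to a 0-cochain f on the n-simplex."""
    return [sum(f[v] - f[u] for u in V if u != v) for v in V]

for b in range(n + 1):                 # check on each basis vector of C^0
    e = [1.0 if v == b else 0.0 for v in V]
    d1 = laplacian(e)
    d2 = laplacian(d1)
    assert d2 == [(n + 1) * x for x in d1]   # Delta^2 = (n+1) Delta
print("minimal polynomial x(x - (n+1)) confirmed for n =", n)
```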
\subsection{Buildings}\label{sec3}
Let $G$ be a group equipped with a Tits system $(G,B,N,S)$ of rank
$\ell+1$. To every Tits system, there is an associated simplicial
complex $\mathfrak B$ of dimension $\ell$, called the \textit{building} of
$(G,B,N,S)$. For the definitions and basic properties of buildings
we refer to Chapters IV and V in \cite{Brown}. The simplices of
$\mathfrak B$ are in one-to-one correspondence with proper parabolic
subgroups of $G$. Assume from now on that $G$ is finite. Then $\mathfrak B$
is a finite simplicial complex satisfying $(1_X)$. Given a simplex
$s$ of $\mathfrak B$, it is known that $\mathrm{Lk}(s)$ is again a building
corresponding to a Tits system of rank $\ell-\dim(s)$.
We would like to estimate $M^i(\mathfrak B)$ and $m^i(\mathfrak B)$ for $0\leq i\leq
\ell-1$. This will be done inductively, using induction on $i$ and
$\ell$. The base of induction is the following lemma:
\begin{lem}\label{lem-n2.2}
If $\ell=1$ then $M^0(\mathfrak B)=2$, and $m^0(\mathfrak B)\leq 1$.
\end{lem}
\begin{proof} When $\ell=1$,
the eigenvalues of $\Delta$ acting on $C^0(\mathfrak B)$ were calculated by
Feit and Higman in \cite{FH}. The claim follows from these
calculations. See also Proposition 7.10 in \cite{Garland} when $\mathfrak B$
is of Lie type.
\end{proof}
Let $K$ be the fundamental chamber of $\mathfrak B$, i.e., the
$\ell$-simplex of $\mathfrak B$ corresponding to the Borel subgroup $B$ of
the given Tits system. Every simplex $s$ of $\mathfrak B$ can be transformed
to a unique face $s'$ of $K$ under the action of $G$. Label the
vertices of $K$ by the elements of $I_\ell:=\{0,1,\dots,\ell\}$, and
define $\mathrm{Type}(s)$ to be the subset of $I_\ell$ corresponding to the
vertices of $s'$. $G$ naturally acts on $\mathfrak B$ and this action is
type-preserving and strongly transitive; see \cite[$\S$V.3]{Brown}.
From this perspective one can think of $K$ as the quotient $\mathfrak B/G$.
\begin{lem}\label{lem-extend}
$\ell+1$ is an eigenvalue of $\Delta$ acting on $C^i(\mathfrak B)$. In
particular, $M^i(\mathfrak B)\geq \ell+1$.
\end{lem}
\begin{proof} Given a function $f\in C^i(K)$, we can
lift it (uniquely) to a $G$-invariant function $\tilde{f}\in
C^i(\mathfrak B)$ defined by $\tilde{f}(\tilde{s}):=f(s)$, where $\tilde{s}$
is any preimage of $s$ in $\mathfrak B$. As is explained in
\cite[$\S$4.2]{Borel}, we have $\widetilde{\Delta f}=\Delta
\tilde{f}$. Hence the claim follows from Example \ref{Ex-P}.
\end{proof}
\begin{rem}\label{rem3.12}
Note that Lemma \ref{lem2.17}, coupled with Lemma \ref{lem-extend},
implies that the integers $\ell+1, \ell,\dots, \ell-i+1$ are always
present among the eigenvalues of $\Delta$ acting on $C^i(\mathfrak B)$. In
contrast, $\ell-i$ is not necessarily an eigenvalue.
\end{rem}
Let $f\in C^0(\mathfrak B)$, and let $R$ be a fixed constant. For each
$\alpha\in I_\ell$, define a function $f_\alpha$ on the vertices of
$\mathfrak B$ by $f_\alpha(v)=f(v)$ if $\mathrm{Type}(v)\neq \alpha$ and
$f_\alpha(v)=R\cdot f(v)$ if $\mathrm{Type}(v)=\alpha$. Also, for a fixed
$\alpha\in I_\ell$ define a linear transformation $\rho_\alpha$ on
$C^0(\mathfrak B)$ by
$$
\rho_\alpha=\sum_{\mathrm{Type}(v)=\alpha}\rho_v.
$$
For $f\in C^0(\mathfrak B)$ and any $\alpha$, we have
\begin{equation}\label{eq-jan1}
(\rho_\alpha df_\alpha, df_\alpha)=(df_\alpha,
df_\alpha)-((1-\rho_\alpha)df, df),
\end{equation}
and
\begin{equation}\label{eq-jan2}
(\Delta\rho_\alpha df_\alpha, \rho_\alpha
df_\alpha)=((1-\rho_\alpha)df,df).
\end{equation}
The equations (\ref{eq-jan1}) and (\ref{eq-jan2}) are the equations
(3) and (6) in \cite[$\S$4.5]{Borel}, respectively.
\begin{lem}\label{lemd31}
Let $f\in C^0(\mathfrak B)$ and suppose $\Delta f=c\cdot f$. Then
$$
\sum_{\alpha\in I_\ell}(\Delta f_\alpha,
f_\alpha)=\left[(\ell-c)(R-1)^2+c(R^2+\ell)\right]\cdot (f,f).
$$
\end{lem}
\begin{proof} Fix some type $\alpha$ and
let $g\in C^0(\mathfrak B)$ be a function such that $g(v)=0$ if
$\mathrm{Type}(v)\neq \alpha$. Then $(\Delta g, g)=\ell\cdot (g,g)$. Indeed,
$$
(\Delta g, g)=(dg,dg)=\sum_{[x,v]\in
\widehat{S}_1(\mathfrak B)}w([x,v])(g(v)-g(x))^2
$$
$$
=\sum_{\mathrm{Type}(v)=\alpha}g(v)^2\sum_{x\in
\mathrm{Ver}(\mathrm{Lk}(v))}w([x,v])=\ell\sum_{\mathrm{Type}(v)=\alpha}w(v)\cdot
g(v)^2=\ell\cdot (g,g).
$$
(The middle equality on the previous line follows from Lemma
\ref{lem-w}.) If we apply this to $g=f_\alpha-f$, then we get
\begin{equation}\label{eq-d31}
(\Delta f_\alpha, f_\alpha)=\ell\cdot (f_\alpha,
f_\alpha)-2(\ell-c)(f_\alpha, f)+(\ell-c)(f,f).
\end{equation}
We clearly have
$$
\sum_{\alpha\in I_\ell}f_\alpha= (\ell+R)\cdot f\quad \text{and}
\quad \sum_{\alpha\in I_\ell}(f_\alpha, f_\alpha)=(\ell+R^2)\cdot
(f,f).
$$
Summing (\ref{eq-d31}) over all types and using the previous two
equalities, we get the claim.
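Explicitly, since there are $\ell+1$ types,
\begin{align*}
\sum_{\alpha\in I_\ell}(\Delta f_\alpha, f_\alpha)
&=\left[\ell(\ell+R^2)-2(\ell-c)(\ell+R)+(\ell+1)(\ell-c)\right]\cdot (f,f)\\
&=\left[\ell R^2-2(\ell-c)R+(\ell-c)+c\ell\right]\cdot (f,f)\\
&=\left[(\ell-c)(R-1)^2+c(R^2+\ell)\right]\cdot (f,f).
\end{align*}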
\end{proof}
\begin{thm}\label{prop_ny}
$M^i(\mathfrak B)=\ell+1$.
\end{thm}
\begin{proof}
By Lemma \ref{lem-extend}, it is enough to show that $M^i(\mathfrak B)\leq
\ell+1$. We start with $M^0(\mathfrak B)$. Let $f\in C^0(\mathfrak B)$. Since the
vertices of any simplex in $\mathfrak B$ have distinct types, one easily
checks that
$$
\sum_{\mathrm{Type}(v)=\alpha}(\Delta\rho_v df_\alpha,
\rho_vdf_\alpha)=(\Delta \rho_\alpha df_\alpha, \rho_\alpha
df_\alpha),
$$
so by Proposition \ref{cor7.15}
\begin{equation}\label{eq-ny}
(\Delta \rho_\alpha df_\alpha, \rho_\alpha df_\alpha)\leq
\lambda^0_{\max}(\mathfrak B)\cdot (\rho_\alpha df_\alpha, \rho_\alpha
df_\alpha).
\end{equation}
Since for any $v\in \mathrm{Ver}(\mathfrak B)$, $\mathrm{Lk}(v)$ is a building of dimension
$\ell-1$, the induction on $\ell$ gives $\lambda^0_{\max}(\mathfrak B)=\ell$.
Combining this with (\ref{eq-ny}), (\ref{eq-jan1}) and
(\ref{eq-jan2}), we get
\begin{equation}\label{eq-dec5}
(1+\ell)\cdot ((1-\rho_\alpha)df,df)\leq \ell\cdot (df_\alpha,
df_\alpha).
\end{equation}
Now assume $\Delta f=c\cdot f$. Note that
\begin{align}\label{eq-ny3}
\sum_{\alpha\in I_\ell}(1-\rho_\alpha)df&=(\ell+1)df-\sum_{v\in
\mathrm{Ver}(\mathfrak B)}\rho_v df\\ \nonumber &=(\ell+1)df-2df=(\ell-1)df,
\end{align}
so summing the inequalities (\ref{eq-dec5}) over all types and using
Lemma \ref{lemd31}, we get
\begin{equation}\label{eq-last}
(\ell+1)(\ell-1)c\cdot (f,f)\leq \ell\cdot
\left[(\ell-c)(R-1)^2+c(R^2+\ell)\right]\cdot (f,f).
\end{equation}
If we put $R=(\ell-c)/\ell$, then the bracket on the right-hand side of
(\ref{eq-last}) equals $c(\ell-c)/\ell+c\ell$, and (\ref{eq-last})
reduces to $0\leq c(\ell+1-c)\cdot (f,f)$, which forces $c\leq
\ell+1$. In particular, $M^0(\mathfrak B)\leq \ell+1$.
Now let $i\geq 1$. The induction on $i$ and $\ell$ implies that
$\lambda^{i-1}_{\max}(\mathfrak B)=\ell$. From the Fundamental Inequality
\ref{thmFI} we get
$$
i\cdot M^i(\mathfrak B)\leq (i+1)\cdot \ell-(\ell-i),
$$
which implies $M^i(\mathfrak B)\leq \ell+1$.
\end{proof}
\begin{thm}\label{thm-last}
$m^i(\mathfrak B)\leq \ell-i$.
\end{thm}
\begin{proof} We start with $i=0$.
Denote $c:=m^0(\mathfrak B)$ and let $f$ be a $\Delta$-eigenfunction with
eigenvalue $c$. First we claim that $c\neq \ell+1$. Indeed, $\Delta$
is a semi-simple operator and if $c=\ell+1$ then by Theorem
\ref{prop_ny} it has only two distinct eigenvalues, namely $0$ and
$\ell+1$. This implies that $\Delta^2=(\ell+1)\Delta$. But it is
easy to check that this equality is false. Next, $c\neq \ell+1$
implies $(\Delta f_\alpha, f_\alpha)\geq c\cdot (f_\alpha,
f_\alpha)$; see equation (1) in \cite[$\S$4.6]{Borel}. Summing over
all types,
$$
\sum_{\alpha\in I_\ell} (\Delta f_\alpha, f_\alpha)\geq
c(\ell+R^2)\cdot (f, f).
$$
Comparing this inequality with the expression in Lemma \ref{lemd31},
we conclude that $$(\ell-c)(R-1)^2\geq 0.$$ Since $R$ is arbitrary,
we must have $c\leq \ell$.
Now assume $i\geq 1$. By Corollary \ref{cor2.19} and induction on
$i$ and $\ell$, we have $m^i(\mathfrak B)\leq \lambda^{i-1}_{\min}(\mathfrak B)\leq
(\ell-1)-(i-1)=\ell-i$.
\end{proof}
\begin{thm}[Garland]\label{thm-G} Assume that $G$ is the group of $\mathbb{F}_q$-valued
points of a simple, simply connected Chevalley group. For any
$\varepsilon>0$ there is a constant $q(\varepsilon, \ell)$ depending only on
$\varepsilon$ and $\ell$, such that if $q>q(\varepsilon, \ell)$ then $m^i(\mathfrak B)\geq
\ell-i-\varepsilon$.
\end{thm}
\begin{proof} For the proof see Sections 6, 7, 8 in \cite{Garland},
or Proposition 5.4 in \cite{Borel}.
\end{proof}
\section{Examples}\label{SecEg}
In this section we compute explicitly in some cases the eigenvalues
of $\Delta$ acting on $C^i(\mathfrak B)$. We concentrate on
$G=\mathrm{SL}_{\ell+2}(\mathbb{F}_q)$ for small $\ell$, with $B\subset G$
being the upper triangular group and $N$ being the monomial group,
cf. \cite[$\S$V.5]{Brown}. Denote the corresponding building by
$\mathfrak B_{\ell,q}$. The dimension of $\mathfrak B_{\ell,q}$ is $\ell$. Denote by
$\mathfrak m^i_\ell(q;x)$ the minimal polynomial of $\Delta$ acting on
$C^i(\mathfrak B_{\ell,q})$, $0\leq i\leq \ell-1$.
First, we recall an elementary description of $\mathfrak B_{\ell,q}$ which
is convenient for actual calculations. Let $V$ be a linear space
over $\mathbb{F}_q$ of dimension $\ell+2$. A \textit{flag} in $V$ is a
nested sequence $\mathcal{F}: F_0\subset F_1\subset \cdots \subset F_i$ of
distinct linear subspaces $F_0,\dots, F_i$ of $V$ such that $F_0\neq
0$ and $F_i\neq V$. $\mathfrak B_{\ell,q}$ is isomorphic to the simplicial
complex whose vertices correspond to the non-zero linear subspaces
of $V$ distinct from $V$; the vertices $v_0,\dots, v_i$ form an
$i$-simplex if the corresponding subspaces form a flag.
\vspace{0.1in}
Now assume $\ell=1$. In this case $\mathfrak B_{\ell,q}$ is isomorphic to
the $1$-dimensional complex whose vertices correspond to $1$- and
$2$-dimensional subspaces of a $3$-dimensional vector space $V$ over
$\mathbb{F}_q$, two vertices being adjacent if one of the corresponding
subspaces is contained in the other. With a slight abuse of
terminology, we will call $1$- and $2$-dimensional subspaces lines
and planes, respectively. The number of lines and planes in $V$ is
$m=q^2+q+1$ each. Let $A=(a_{ij})$ be the $m\times m$ matrix whose
rows are enumerated by the lines in $V$ and columns by the planes,
and $a_{ij}=-1$ if the $i$th line lies in the $j$th plane, and is
$0$ otherwise. We can choose a basis of $C^0(\mathfrak B_{\ell,q})$ so that
$(q+1)\Delta$ acts as the matrix
$$
(q+1)I_{2m}+\begin{pmatrix} 0 & A \\ A^t & 0\end{pmatrix}.
$$
Let $M=\begin{pmatrix} 0 & A \\ A^t & 0\end{pmatrix}$. Since any two
distinct lines lie in a unique plane and any line lies in $(q+1)$
planes, $AA^t=qI_m+J_m$. By a similar argument, $A^tA=qI_m+J_m$.
Hence
$$
M^2= qI_{2m}+\begin{pmatrix} J_m & 0 \\ 0 & J_m\end{pmatrix}.
$$
This implies that $(M^2-qI_{2m})(M^2-(q+1)^2I_{2m})=0$. Since
$(q+1)\Delta - (q+1)I_{2m}=M$, we conclude that $(q+1)\Delta$
satisfies the polynomial equation
$$
x(x-(2q+2))(x^2-(2q+2)x+(q^2+q+1))=0.
$$
It is not hard to see that this is in fact the minimal polynomial of
$(q+1)\Delta$. Hence
$$
\mathfrak m^0_1(q;x)=x(x-2)\left(x^2-2x+\frac{q^2+q+1}{q^2+2q+1}\right).
$$
The minimal non-zero root is $1-\sqrt{q}/(q+1)$. The smallest
possible value of this expression is approximately $0.53$, which
occurs at $q=2$; the value tends to $1$ as $q\to \infty$.
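These computations are easy to check numerically. The following sketch (an illustrative check, not part of the argument; we take $q=2$, so $V=\mathbb{F}_2^3$ and $m=7$) builds the incidence matrix $A$ and verifies both $AA^t=qI_m+J_m$ and the minimal non-zero eigenvalue $1-\sqrt{q}/(q+1)\approx 0.53$.

```python
import itertools
import numpy as np

q = 2  # smallest case: V = F_2^3
vecs = [v for v in itertools.product(range(q), repeat=3) if any(v)]
m = len(vecs)  # q^2 + q + 1 = 7 lines (and 7 planes)
# Over F_2 every line contains exactly one non-zero vector, and every plane
# is the kernel of exactly one non-zero linear functional, so both lines and
# planes are indexed by `vecs`.  Line v lies in plane f iff f.v = 0 (mod q).
A = np.array([[-1 if sum(a * b for a, b in zip(v, f)) % q == 0 else 0
               for f in vecs] for v in vecs])
I_m, J_m = np.eye(m, dtype=int), np.ones((m, m), dtype=int)
assert (A @ A.T == q * I_m + J_m).all() and (A.T @ A == q * I_m + J_m).all()
# (q+1)*Delta acts as (q+1)I_{2m} + M with M the bipartite block matrix.
Z = np.zeros((m, m), dtype=int)
M = np.block([[Z, A], [A.T, Z]])
Delta = (np.eye(2 * m) * (q + 1) + M) / (q + 1)
eigs = np.linalg.eigvalsh(Delta)
min_nonzero = min(e for e in eigs if e > 1e-9)
assert abs(min_nonzero - (1 - np.sqrt(q) / (q + 1))) < 1e-9  # ~0.5286
```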
The very next case $\ell=2$ is already considerably harder to
compute by hand. With the help of a computer, we deduced that
\begin{align*}
\mathfrak m^0_2(q;x)=&x(x-2)\left(x-3\right)\left(x-\frac{2q^2+3q+2}{q^2+q+1}\right)\\
&\times\left(x^2-\frac{4q^2+3q+4}{q^2+q+1}x+\frac{4q^2+4}{q^2+q+1}\right).
\end{align*}
The minimal non-zero root is
$$
\frac{1}{2(q^2+q+1)}\left(4q^2+3q+4-\sqrt{8q^3+9q^2+8q}\right),
$$
which is at least $1.08$ and tends to $2$ from below as $q\to
\infty$. The whole polynomial tends coefficientwise to the
polynomial $x(x-3)(x-2)^4$ as $q\to \infty$. Next
\begin{align*}
\mathfrak m^1_2(q;x)=&x(x-1)(x-2)(x-3)\\
&\times\left(x^2-2x+\frac{q^2+1}{q^2+2q+1}\right)
\left(x^2-3x+\frac{2q^2+2q+2}{q^2+2q+1}\right)\\
&\times\left(x^2-4x+\frac{4q^2+6q+4}{q^2+2q+1}\right).
\end{align*}
The minimal non-zero root is $1-\sqrt{2q}/(q+1)$. This is always in
the interval $[1/3,1)$. Moreover, this eigenvalue is strictly larger
than $1/3$ for $q>2$ and tends to $1$ as $q\to \infty$; the whole
polynomial tends to $x(x-3)(x-1)^4(x-2)^4$.
The formulae for $\mathfrak m^0_2(q;x)$ and $\mathfrak m^1_2(q;x)$ are partly
conjectural, although almost certainly correct. We computed these
polynomials for $q=2, 3, 4, 5, 7$ using computer calculations with
concrete finite fields, and then came up with a formula which
recovers all the previous polynomials when we specialize $q$.
The complexity of calculations grows exponentially with $i$, $\ell$
and $q$, so for $\ell=3$ my computer was able to handle only $i=0$
for $q=2$ and $3$:
\begin{align*}
\mathfrak m^0_3(2;x)=& x(x - 4)\left(x - \frac{23}{7}\right)\left(x - \frac{19}{7}\right)\\
&\times \left(x^4 - 12x^3 + \frac{581528}{11025}x^2 -
\frac{220232}{2205}x + \frac{6734719}{99225}\right),
\end{align*}
\begin{align*}
\mathfrak m^0_3(3;x)=& x(x- 4)\left(x - \frac{42}{13}\right)\left(x - \frac{36}{13}\right)\\
&\times \left(x^4 - 12x^3 + \frac{14350977}{270400}x^2 -
\frac{2760633}{27040}x + \frac{309843369}{4326400}\right).
\end{align*}
The minimal non-zero roots of these polynomials are approximately
$1.68$ and $1.89$, respectively. To have a reasonable guess for the
coefficients of $\mathfrak m^0_3(q;x)$, one needs to compute these
polynomials for at least the next few values of $q$. Nevertheless,
note that the coefficients of the above polynomials are close to the
coefficients of $x(x-4)(x-3)^6$.
The final example we have is
\begin{align*}
\mathfrak m^0_4(2;x)= & x(x - 4)(x - 5)\left(x - \frac{144}{35}\right)
\left(x^2 - \frac{1322}{155}x + \frac{2798}{155}\right)\\ &\times
\left(x^2 - \frac{276}{35}x + \frac{536}{35}\right) \left(x^3 -
\frac{1778}{155}x^2 + \frac{1306}{31}x - \frac{7512}{155}\right).
\end{align*}
The minimal non-zero root is approximately $2.32$, and the
coefficients of $\mathfrak m^0_4(2;x)$ are close to the coefficients of
$x(x-5)(x-4)^9$.
\vspace{0.1in}
The previous calculations, combined with Theorems
\ref{prop_ny}-\ref{thm-G} and Remark \ref{rem3.12}, suggest the
following possibility:
\begin{conj}\label{conj} In the situation of Theorem \ref{thm-G},
for any $\varepsilon>0$ there is a constant $q(\varepsilon, \ell)$ depending only
on $\ell$ and $\varepsilon$ such that if $q>q(\varepsilon, \ell)$ then any
non-zero eigenvalue of $\Delta$ acting on $C^i(\mathfrak B)$ is at a
distance less than $\varepsilon$ from one of the integers $$\ell-i,\
\ell-i+1,\ \dots,\ \ell+1.$$
\end{conj}
\subsection{Garland's vanishing theorem}\label{ssGV}
Let $\mathcal{K}$ be a field complete with respect to a non-trivial discrete
valuation and which is locally compact. Let $\mathbb{F}_q$ be the residue
field of $\mathcal{K}$. Let $\mathcal{G}$ be an almost simple linear algebraic group
over $\mathcal{K}$. Suppose $\mathcal{G}$ has $\mathcal{K}$-rank $\ell+1$. Let $\mathfrak T$ be the
Bruhat-Tits building associated with $\mathcal{G}(\mathcal{K})$. The link of a
simplex $s$ in $\mathfrak T$ is a finite building of dimension
$\ell-\dim(s)$. Using a discrete analogue of Hodge decomposition and
the Fundamental Inequality one proves the following theorem (see
\cite[Thm. 3.3]{Borel}):
\begin{thm}\label{thm4.2}
If $\lambda^{i-1}_{\min}(\mathfrak T)>\frac{\ell+1-i}{i+1}$, then $H^i(\Gamma,
\mathbb{R})=0$ for any discrete cocompact subgroup $\Gamma$ of $\mathcal{G}(\mathcal{K})$.
\end{thm}
Combining this with Theorem \ref{thm-G}, one concludes that there is
a constant $q(\ell)$ depending only on $\ell$ such that if
$q>q(\ell)$ then $H^i(\Gamma, \mathbb{R})=0$ for $1\leq i\leq \ell$. This is the
main result of \cite{Garland}. It is natural to ask whether the
restriction on $q$ being sufficiently large is redundant. This is
indeed the case, as was shown by Casselman \cite{Casselman}, who
proved the vanishing of the middle cohomology groups by an entirely
different argument.
Now let $\mathcal{G}=\mathrm{SL}_{\ell+2}$. Then
$\lambda^{i-1}_{\min}(\mathfrak T)=m^{i-1}(\mathfrak B_{\ell, q})$. In all examples
discussed above $m^0(\mathfrak B_{\ell, q})>\ell/2$, so in these cases
Garland's method proves the vanishing of $H^1(\Gamma,\mathbb{R})$ without any
assumptions on $q$. On the other hand, $m^1(\mathfrak B_{2, 2})=1/3$. But to
apply Theorem \ref{thm4.2} to show that $H^2(\Gamma,\mathbb{R})=0$ we need
$\lambda_{\min}^1(\mathfrak T)>1/3$. Hence when $\ell=2$ we need to assume $q>2$
to conclude $H^2(\Gamma,\mathbb{R})=0$ from Garland's method.
\section{Introduction}
The costs to trading desks of holding positions typically include funding, and at least five banks\footnote{Barclays, Goldman Sachs, Lloyds Banking Group, Royal Bank of Scotland, JP Morgan (for the last one, see Financial Times 15 Jan 2014).} \cite{Cameron2013a} report funding value adjustments (FVA). Current FVA-related literature treats funding costs as an input \cite{Burgard2011a,Burgard2012a,Morini2011a,Crepey2012a,Crepey2012b,Pallavicini2011b,Pallavicini2012a} that is usually constant (\cite{Pallavicini2013a} is an exception), and always risk-neutral. Books specifically on liquidity \cite{Castagna2013a} or from a Treasury point of view \cite{Choudhry2012a} do not treat funding costs as constant, but do not cover funding optimization mathematically. We take the Treasury point of view for whom the funding curve is an output based on a funding strategy. Treasury must consider Regulatory requirements \cite{FSA-P0916} (liquidity buffers) and both risk-neutral (\ensuremath{\mathbb{Q}}) and physical measures (\ensuremath{\mathbb{P}}). Precisely because Regulatory-required funding buffers exist, trading desks will always see funding costs that are not equal to the riskless rate when they have funding requirements. This is true even if the bank can fund itself, unsecured, at the riskless rate.
The objective of this paper is to describe the Treasury funding problem with respect to Regulatory-required liquidity buffers and provide an optimal solution taking into account both the risk-neutral measure (\ensuremath{\mathbb{Q}}) and the physical measure (\ensuremath{\mathbb{P}}). Our contributions are: firstly, the description of the Regulatory-Optimal Funding problem; secondly, the provision of theoretically optimal funding strategies for \ensuremath{\mathbb{Q}}\ and \ensuremath{\mathbb{P}}\ solving the Regulatory-Optimal Funding problem; thirdly, the development of practical optimization strategies involving both \ensuremath{\mathbb{Q}}\ and \ensuremath{\mathbb{P}}; fourthly, the calibration of the practical strategies and the demonstration of their effectiveness in four markets (USD, JPY, EUR, GBP), from 1995\footnote{From 2000 for EUR.} to 2012. Since we deal with physical measures, a further contribution is the development of appropriate measures, and statistical tests, for testing strategy performance on out-of-sample data. We demonstrate improvements on hedged funding with a combined approach achieving up to 71\%\ of a perfect information criterion (perfect information is when the future is known).
We detail our assumptions in the modeling section, however we highlight the most significant here.
\begin{itemize}
\item Practically, we focus on short-term funding out to one year. This is generally the area for money-market activity, rather than bond issuance and asset-liability management.
\item Theoretically our approach applies to any maturity but funding buffers are relatively less important for longer funding. Despite this, the attractiveness of shorter --- and usually cheaper --- funding remains when yield curves are upwards-sloping. Funding buffers are required by Regulators exactly because of this attractiveness to provide a degree of systematic resilience that might otherwise not be present. Regulatory details are in the next section.
\item We assume that a risk-neutral measure (\ensuremath{\mathbb{Q}}) exists. The significance of this assumption is that in the risk-neutral measure future expected yield curves can be derived from the present yield curve. In contrast we define physical measures (\ensuremath{\mathbb{P}}), and physical filtrations, as any other measures or filtrations that are not the risk neutral one. The risk neutral measure (and filtration) is the one that can be derived from current market prices (see standard texts for details, e.g. \cite{Shreve2004a}). A physical measure, for example involving the prediction of future expected yield curves, can be derived from any source but has the distinction that prices derived from it cannot be hedged. A trader's view of the market is one example of a physical measure. This view may lead the trader to place open positions that will create a profit if the view turns out to be correct. Section \ref{s:Pid} specifies the identification of the physical measure in this paper.
\item We take Xibor as a proxy for funding costs simply as a place to start and because funding costs are often quoted as a spread over Xibor (the X stands for the relevant currency). Xibor is an average of interbank lending rates, typically available up to 12 months or 18 months. Because it is an average, individual contributing banks will actually experience a spread over or under Xibor for their lending with these horizons.
Xibor is an unsecured borrowing rate. As such it already includes the possibility that the borrower may default, i.e. the credit risk. We optimize the bank's funding strategy assuming that it is a going concern. We do not consider the possibility that it may default on its liabilities as part of this strategy. If we did, then --- conditional on survival --- the bank would leak PnL.
\item We assume no volume effects, i.e. no market impact of funding requests on funding costs. Clearly there are volume effects, and these can be integrated into our framework, but we leave them for future research.
\item Our optimization involves expectation-based utility functions. More complex utility functions could be used, for example involving VAR or Expected Shortfall \cite{BCBS-219}, we leave these extensions for future work.
\end{itemize}
Bank funding overall is complex with multiple sources (deposits, debt markets, money markets, etc.) and governed by a liquidity funding policy. Any liquidity funding policy must be executed with respect to different sources. Inasmuch as markets are involved, all liquidity funding policies are embodied by trading strategies. The overall liquidity funding policy will involve asset-liability planning as well. We focus on one area, funding optimization of the regulatory-required liquidity buffer. All banks must solve this problem (provision and cost of the buffer), even if they do so sub-optimally. Note that banks are not passive price-takers of funding costs over multi-year horizons. Executives choose a risk-appetite which they must then execute on. The success of this risk-appetite strategy will influence bank funding costs from the market. Given our timescale (up to one year) we do not include these effects. Furthermore, bankwide funding cannot be considered in isolation from capital and credit, as the emergence of xVA desks \cite{Carver2013a,Carver2013b} demonstrates.
Banks direct internal funding using Funds Transfer Pricing (FTP) which can have local and global features. For example a local feature could be relatively cheap/expensive funding at a given maturity to correct an observed Asset-Liability mismatch. A global feature could involve relatively cheap/expensive funding for a particular business line. However the explicit and detailed control for business objectives is poorly developed \cite{fsa:dt}. Despite this, differential funding for different desks is typically present, e.g. repo desks versus OTC derivative desks.
We start by recalling the discussion in \cite{Hull2011a} (rather than \cite{Hull2012a}) on hedging input costs which we apply to funding. Let us assume that different banks have different (funding) costs and that Bank A hedges its (funding) costs. Now suppose that the general market climate improves and the systematic part of funding costs decreases for all banks. The Treasurer of Bank A must now explain to the CEO and the Board why Bank A is losing money relative to its competitors when offering similar prices for similar products. Thus (funding) hedging decisions should always be at Board level. Our approach avoids such issues by combining \ensuremath{\mathbb{Q}}\ and \ensuremath{\mathbb{P}}.
\subsection{Regulation}
The FSA scenario \cite{FSA-P0916} (now owned by the PRA) includes two weeks of market closure followed by an individually agreed level of market access for the remainder of the three months (the Individual Liquidity Guidance, ILG). Effective funding stress can be much larger than reductions in wholesale funding when deposit leakage is included, depending on the type of bank (retail, commercial, or investment).
Basel III \cite{BCBS-188,BCBS-189} by contrast bases its Liquidity Coverage Ratio (LCR) on total net cash outflows over 30 calendar days. This is defined (paragraph 50) as outflows minus the minimum of inflows and 75\%\ of outflows. The document specifies the runoff factor to be applied to each category of funding, e.g. ``Unsecured wholesale funding provided by non-financial corporates and sovereigns, central banks and public sector entities: 75\%'' and ``Unsecured wholesale funding provided by other legal entity customers: 100\%'' which includes non-central banks and financial firms. Thus wholesale funding will vanish under the Basel III LCR calculation.
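For illustration, the paragraph 50 definition can be written as a one-line function; this is a sketch of our reading of the definition, not normative regulatory guidance, and the function name is ours:

```python
def lcr_net_outflows(outflows: float, inflows: float) -> float:
    """Total net cash outflows over 30 days (Basel III LCR, paragraph 50):
    outflows minus the minimum of inflows and 75% of outflows."""
    return outflows - min(inflows, 0.75 * outflows)
```

Note the cap: even if inflows exceed outflows, at most 75\%\ of outflows can be offset, so the net figure never falls below 25\%\ of outflows.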
Prospective regulations on Prudent Valuation \cite{EBA-CP-2013-28}, which is clarified in \cite{EBA-CP-2013-28-FAQ}, propose that for capital purposes funding should be calculated maturity matched with trade payments:
\begin{quote}
Art 10 ``... what adjustment must we make for the FVA'' reply: ``FVA should be assessed using the wholesale funding costs of the firm, maturity matched with the contractual trade payments.''
\end{quote}
We note that this does not mean the funding must be available --- only that for capital purposes pricing should include funding costs as though funding were available. Whilst maturity matching payments may be straightforward for zero coupon bonds its direct application to derivatives is problematic. An uncollateralized vanilla interest rate swap traded back-to-back with the street may require funding in the future or it may provide funding depending on market movements. Hence we see that the precise description of future funding for derivatives can be characterized by four dimensions as a minimum: start date; tenor; rate; and volume. We might say that this data prices volume-dependent funding swaptions. Most funding literature, except \cite{Pallavicini2013a}, is based on a one-dimensional curve where start date is the present and only the tenor is present. In any case pricing using this funding data hypercube is not practical because it cannot be calibrated. This paper offers a pragmatic alternative approach.
\subsection{Previous Work}
The key papers on funding consider either replication \cite{Burgard2011a,Burgard2012a} and / or (effectively) project finance \cite{Morini2011a,Hull2012a}. In \cite{Burgard2011a,Burgard2012a} the authors assume that trading in own-bonds is possible and has no effect on the funding costs of the overall firm. \cite{Hull2012a} assumes that firm funding costs exactly reflect the riskiness of the set of projects that the firm undertakes. Thus marginal funding is the same as stand-alone funding. In addition they assume that DVA can be divided into firm-default benefit and funding-default benefit, the latter of which exactly equals funding costs. This contrasts with the hedging setup by the same author described in the introduction. \cite{Morini2011a} shows that this depends on not assuming that the firm survives. \cite{Hull2013a} includes this point of view and comes to similar conclusions. \cite{Castagna2012a} also describe the conditions under which DVA can be replicated as well as interactions between DVA and FVA. If the analysis is done conditioning on firm survival then funding-default benefit no longer equals funding costs because only avoided-funding cancels with funding costs. Net funding at any point pays full funding costs. It is this net funding that we address.
Books specifically on liquidity or from a Treasury point of view \cite{Castagna2013a,Choudhry2012a} do not treat funding costs as constant. However, they do not cover funding optimization mathematically as here.
Prediction of future market conditions \cite{Asness2013a,Geczy2013a}, and by yield curves \cite{Estrella2006a}, has a long history and a large literature based on time-series analysis \cite{Hamilton1994a}. Our aim is not to contribute to this literature but to use the simplest possible approach (momentum, aka exponentially-weighted moving average or EWMA) to provide a \ensuremath{\mathbb{P}}-linked strategy that is demonstrably better than hedging alone.
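As a concrete illustration, here is a minimal sketch of such a momentum (EWMA) signal; the decay parameter \texttt{lam} and the seeding with the first observation are our own illustrative choices, not values calibrated in this paper:

```python
def ewma(series, lam=0.97):
    """Exponentially-weighted moving average of a rate series.
    lam close to 1 gives long memory; (1 - lam) weights the newest point.
    The default lam is illustrative only."""
    s = series[0]          # seed with the first observation
    out = []
    for x in series:
        s = lam * s + (1.0 - lam) * x
        out.append(s)
    return out
```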
\section{Modeling}
We aim to minimize funding costs by choosing an optimal funding roll ($\alpha$). The bank must always have a buffer of at least $\Delta$ days of funding. Although our setup is general our focus is medium-term funding, i.e. out to one year, in four markets (USD, JPY, EUR, GBP). We first analyze the characteristics of optimal \ensuremath{\mathbb{Q}}\ and optimal \ensuremath{\mathbb{P}}\ strategies under simple assumptions, and then turn to the problem of \ensuremath{\mathbb{P}}-measure discovery. We then apply standard techniques to calibration and out-of-sample model analysis.
We assume that the bank can fund spot according to a known curve that we will call the yield curve. In our examples we will use the Libor curve as a proxy for a typical bank as it contains credit and liquidity elements. As soon as funding is shorter than the regulatory buffer, $\Delta$, it is worthless to the bank. Thus we include a bid-ask spread $\varphi$ on the funding curve. $\varphi$ gives the fraction of the funding cost recoverable by selling funding. We will show that the bid-ask spread is an important driver of optimal funding roll characteristics.
\be
\varphi := L_{\rm bid}(*) / L_{\rm ask}(*)
\ee
where $*$ indicates that the bid-ask spread is not tenor-dependent in our model, and $L(*)$ is the input funding curve. We do not write ``bid'' and ``ask'' explicitly but use the combination of the sign of the flow and $\varphi$ to express the effects. The model only uses the bid-ask spread at maturity $\Delta$, which is when held funding no longer counts towards the liquidity buffer.
Our base funding cost setup is \ensuremath{\mathbb{Q}}-funding, i.e. the assumption that the bank can fund in the future at rates predicted from the current yield curve. However, we also assume that only spot funding is practical. Thus in historical testing subsequent funding rolls must be carried out with the previously defined roll up to the horizon. For \ensuremath{\mathbb{P}}-funding spot funding also comes from the spot yield curve but future funding with predicted yield curves is included in the optimization. In practice, all funding strategies will re-optimize at every funding roll; we leave this for future research using multi-stage stochastic optimization \cite{Birge2011a}. Here we explore the limits of myopic strategies, i.e. decide the funding roll at the start and then apply it up to the horizon.
The aim of the current work is to demonstrate that an assumption of hedged funding is sub-optimal in practice, and that using predicted funding curves is possible. By possible we mean that statistically significant, and practically meaningful, out-of-sample improvements are observed. We compare our optimal strategies using predictions with what we would do with perfect information. This is possible from historical analysis. Thus we report results in terms of average bps improvement over an attempted hedging strategy, and as a percentage achievement compared to perfect information.
\subsection{Setup}
Our basic setup is as follows (Figure \ref{f:figure_roll}).
\begin{itemize}
\item Fixed quantity of funding up to horizon $h$.
\item Regulatory funding buffer requirement as a constraint that the bank must always have sufficient funding for $\Delta$ days.
\item Input funding curve, i.e. where primary issuance can be sold for different maturities, $L(*)$. (This may well be different from secondary trading prices.)
\end{itemize}
The existence of a funding buffer means that funding rolls must overlap. Otherwise as we get towards the end of a funding roll, i.e. less than $\Delta$ days away, the regulatory constraint is broken.
\begin{figure}[htbp]
\centering
\includegraphics[width=0.90\textwidth,clip,trim=5 25 5 25]{figure_roll.pdf}
\caption{Basic funding setup showing different funding rolls $\alpha$ obeying the minimum funding buffer time $\Delta$, up to the funding horizon $h$. Squares show start of each funding roll.}
\label{f:figure_roll}
\end{figure}
The first point in the setup deals with the question of how to define the average funding cost. We compute the average over a fixed horizon. There are many possible definitions addressing different combinations of getting an average cost over a period versus the repeating nature of the problem. We have picked this one as corresponding to a planning horizon. The actual horizon is important because we have an input funding curve which may be upward sloping, flat, or possibly downward sloping, with variable curvature.
The maturity of the funding need sets a limit on the proportion of the time we must have a funding overlap. For a very long maturity with term funding the buffer overlap will be a very small part of the total funding time. This also illustrates that term funding is sub-optimal for short instruments assuming that the relevant desk has a stable business model. If the desk has a highly volatile business model then it will suffer term funding with its attendant buffer costs. Whilst the Treasury sees the funding requirements of the bank, not just each desk, there can still be volatility in funding needs in as much as desks (and business lines) have systematic dependencies.
We consider a fixed funding horizon requirement $h$ and fixed roll length $\alpha$. Any desk (or bank) will have a variety of funding maturities required at any time but we start from separate requirements. Regulatory-optimal funding in the face of uncertain requirements is outside the scope of this paper and is an area for future research.
The overlap of roll periods highlights the significance of the bid-ask spread for funding. Once we start the next roll, the previous funding with length $\le\Delta$ is of no use and can be sold. The difference with the input funding curve may well be significant and we capture this with a parameter $\varphi$. Generally $\varphi$ will depend on $\alpha$ to incorporate the market impact of selling but we use a constant for simplicity. Note that whatever the funding roll the quantity of funding will always be constant. This is because we assume that unnecessary funding will be sold immediately. Any funding with a maturity less than the buffer period ($\Delta$) does not count towards the buffer (by definition). It does not matter what the bank does with these assets from the point of view of the Regulator, since they do not count for the liquidity buffer. However, for the bank, it does matter in terms of value. Since we assume that the interbank market pays Xibor, we take Xibor as the rate at which these assets can be invested.
If the bid-ask spread is zero then it may appear that the optimal strategy, with an upwards-sloping yield curve, is to set $\alpha=\Delta+1$ day. Thus the bank would buy and sell funding every day. Although this strategy appears to enable the bank to obtain funding at the overnight rate, it is not optimal. Consider the case where interest rates are going up faster than the yield curve. Under these circumstances (lagging yield curve) it is optimal to get funding for longer (we are back to our hedging-or-not discussion). Of course this assumes that the bank can detect such situations. Generally the bid-ask spread will not be zero and we include it in our calculations.
\FloatBarrier
Working in units of $\Delta$ the number of times funding must be rolled to horizon $h_n$ with roll length $\alpha_n$, $n_\ensuremath{{\rm {rolls}}}(h_n,\alpha_n)$ is:
\ben
{\ensuremath{n_\rolls}}(h_n,\alpha_n) =
\begin{cases}
0 & \alpha_n \ge h_n \\
\left\lceil \frac{h_n-\alpha_n}{\alpha_n-1} \right\rceil & {\rm otherwise}\\
\end{cases}
\label{e:nroll}
\een
where: $\alpha_n=\alpha/\Delta$; $h_n = h/\Delta$. Thus the gross excess funding $G_{e\!f}$ will be, as a percentage:
\ben
G_{e\!f} = 100\times\frac{{\ensuremath{n_\rolls}}}{h_n}
\label{e:gross}
\een
Gross excess funding, $G_{e\!f}$, is funding with maturity less than $\Delta$. This funding does not count for the Regulatory buffer, hence it is excess funding. We call it gross because it is simply a quantity, not the cost of that quantity.
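Equations \ref{e:nroll} and \ref{e:gross} can be sketched directly in code; the function names below are illustrative only.

```python
import math

def n_rolls(h_n, alpha_n):
    """Number of rolls to horizon h_n with roll length alpha_n, Eq. (e:nroll).
    Both arguments are in units of the buffer period Delta."""
    if alpha_n >= h_n:
        return 0
    # each roll advances the funding end-point by alpha_n - 1 buffer periods
    return math.ceil((h_n - alpha_n) / (alpha_n - 1))

def gross_excess_funding(h_n, alpha_n):
    """Gross excess funding as a percentage, Eq. (e:gross)."""
    return 100.0 * n_rolls(h_n, alpha_n) / h_n
```

For example, rolling $2\Delta$ funding to a $12\Delta$ horizon requires 10 rolls, so gross excess funding is $10/12\approx 83\%$, whereas term funding ($\alpha_n\ge h_n$) has none.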
Figure \ref{f:figure_HR} shows Equations \ref{e:nroll}, \ref{e:gross} from the point of view of constant roll lengths (left panels) and from the point of view of constant horizons and choice of roll length (right panels). All lengths are in units of $\Delta$. The advantage of moving to longer roll lengths is clear in terms of gross excess funding. However, longer roll lengths mean more expensive funding when the input funding curve is upwards sloping, in the absence of other factors.
\begin{figure}[htbp]
\centering
\includegraphics[width=0.90\textwidth,clip,trim=0 0 0 0]{figure_HR.pdf}
\caption{Equations \ref{e:nroll}, \ref{e:gross} from the point of view of constant roll lengths (left panels) and from the point of view of constant horizons and choice of roll length (right panels). All lengths are in units of $\Delta$.}
\label{f:figure_HR}
\end{figure}
\FloatBarrier
\subsection{Projected Costs and Optimization}
We use the expected undiscounted funding cost $\ensuremath{{C^{\rm av}}}$ as our primary metric because this is often used in practice. It is often quoted internally in banks in terms of the number of basis points above some reference curve (possibly an overnight curve, e.g. Fed Funds, Eonia, Sonia, etc.). This may be calculated under a physical (\ensuremath{\mathbb{P}}) or risk-neutral (\ensuremath{\mathbb{Q}}) measure depending on the circumstances, which mostly refers to whether funding costs are hedged or not. As discussed above, we use Xibor as a proxy rate for unsecured borrowing.
Considering Figure \ref{f:figure_roll} we may write $\ensuremath{{C^{\rm av}}}$ as:
\begin{equation}
\begin{aligned}
\ensuremath{{C^{\rm av}}}_t^{\ddag}&(\alpha,h)=\\
&\frac{1}{h}\ensuremath{\mathbb{E}}_t^\ddag\left[
\min(\alpha,h) F_t(t,t+\min(\alpha,h) ) \vphantom{\sum_{i=1}^{{\ensuremath{n_\rolls}}} }\right. \\
&\ \ \qquad{} +
\sum_{i=1}^{{\ensuremath{n_\rolls}}}
\left( \vphantom{\sum}
-\varphi\Delta F_t(t+i(\alpha-\Delta),t+\Delta+i(\alpha-\Delta)) \right. \label{e:cav}\\
&\ \ \quad\quad\qquad\qquad {}+ \min(\alpha,h-i(\alpha-\Delta)) F_t(t+i(\alpha-\Delta),t+\min(i(\alpha-\Delta)+\alpha,h)
)
\left.\left. \vphantom{\sum}\right)\vphantom{\sum_{i=1}^{{\ensuremath{n_\rolls}}}}\right]
\end{aligned}
\phantom{\hspace{3cm}}
\end{equation}
where:
$F_t(t_1,t_2)$ is the forward rate as seen from $t$ between $t_1$ and $t_2$;
if ${\ensuremath{n_\rolls}}<1$ then there are no terms in the summation; and
$\ddag$ is the measure used in the expectation (\ensuremath{\mathbb{P}}, \ensuremath{\mathbb{Q}}, or some combination).
Funding costs depend on the interpretation of $F_t(t_1,t_2)$, i.e. what is its actual realizable value, or what value can be locked in. We work with continuously compounded rates for simplicity.
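With continuously compounded zero rates, the forward rate appearing in Equation \ref{e:cav} follows directly from the zero curve. A minimal sketch (taking $t=0$ for simplicity):

```python
def forward_rate(y, t1, t2):
    """Continuously compounded forward rate between t1 and t2, seen from t=0,
    given a zero-rate curve y(T): F(t1, t2) = (y(t2)*t2 - y(t1)*t1) / (t2 - t1)."""
    return (y(t2) * t2 - y(t1) * t1) / (t2 - t1)
```

For a linear curve $y(T)=a+bT$ this gives $F(t_1,t_2)=a+b(t_1+t_2)$, which is the substitution used in the linear-curve cost formula below.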
The myopic version of our optimization problem based on Equation \ref{e:cav} is:
\ben
\ensuremath{{C^{\rm av}}}_{t,{\rm opt}}^{\ddag}(h) = \min_\alpha{\ensuremath{{C^{\rm av}}}_t^{\ddag}(\alpha,h)} \label{e:opt}
\een
We call this problem myopic as we are only permitted to choose $\alpha$ once, at the start. This is not as great a limitation as it may appear when we consider that the funding we are optimizing is primarily short term, i.e. within O(10)$\Delta$. In addition, practically, to avoid market impact, banks typically stagger their funding so as to be in the market every day with some fraction.
In general Equation \ref{e:opt} is a non-linear, non-convex optimization problem because of the minimum terms (otherwise, for polynomial yield curves, it would be an equivalent polynomial problem). We assume that the yield curve is linear over the first year; this assumption is made to enable a basic theoretical understanding of the nature of the optimal funding problem. For a linear model, the mean r-squared values for the four currencies over the period considered (weekly samples, Xibor maturity points, continuous compounding) are \{BP=0.66, EU=0.87, JP=0.76, US=0.80\}, and the geometric means of the p-values\footnote{A geometric mean for p-values is appropriate because they cover a wide range, essentially log-scaling.} are \{BP=4.5e-6, EU=2.8e-7, JP=5.6e-8, US=1.1e-7\}; thus a linear model is supported by the data. We are not making a formal hypothesis test here, because we know that reality is more detailed. If we were making a hypothesis test then we would include Bonferroni (or similar) corrections for multiple tests and would phrase our observation as ``not rejected at a chosen p-level''.
If we assume a linear (continuously compounding) yield curve $y(T)$ with constants $a,b$:
\be
y(T) = a + b T
\ee
then, WLOG\footnote{WLOG = without loss of generality.} setting $t=0$ for simplicity, Equation \ref{e:cav} becomes:
\begin{equation}
\begin{aligned}
\ensuremath{{C^{\rm av}}}_0&(\alpha,h)=\\
&\ \ \frac{1}{h}\left[
(a+b\min(\alpha,h))\min(\alpha,h) \vphantom{\sum_{i=1}^{{\ensuremath{n_\rolls}}} }\right. \\
&\ \ \qquad{} +
\sum_{i=1}^{{\ensuremath{n_\rolls}}}
\left( \vphantom{\sum}
-\varphi \Delta[a + 2b i (\alpha-\Delta)+\Delta b ] \right. \label{e:cavL}\\
&\ \ \quad\quad\qquad\qquad {}+ q_i[a + 2b i (\alpha-\Delta)+q_i b ]
\left.\left. \vphantom{\sum}\right)\vphantom{\sum_{i=1}^{{\ensuremath{n_\rolls}}}}\right]
\end{aligned}
\phantom{\hspace{3cm}}
\end{equation}
where $q_i$ is $\alpha$, except for $i={\ensuremath{n_\rolls}}$ when it equals $\min(\alpha, h - i (\alpha-\Delta))$.
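Equations \ref{e:cavL} and \ref{e:opt} can be sketched as follows (illustrative code only; $\alpha$ and $h$ are assumed to be multiples of $\Delta$, and the helper names are ours):

```python
import math

def n_rolls(h_n, alpha_n):
    """Eq. (e:nroll): rolls needed to reach h_n with roll length alpha_n (units of Delta)."""
    return 0 if alpha_n >= h_n else math.ceil((h_n - alpha_n) / (alpha_n - 1))

def cav_linear(a, b, alpha, h, delta, phi):
    """Average undiscounted funding cost, Eq. (e:cavL), for a linear curve y(T) = a + bT."""
    first = min(alpha, h)
    cost = (a + b * first) * first
    # round() guards against floating-point noise: alpha and h are multiples of delta
    for i in range(1, n_rolls(round(h / delta), round(alpha / delta)) + 1):
        start = i * (alpha - delta)
        cost -= phi * delta * (a + 2 * b * start + b * delta)  # sell surplus tail
        q = min(alpha, h - start)
        cost += q * (a + 2 * b * start + b * q)                # buy next tranche
    return cost / h

def optimal_roll(a, b, h, delta, phi):
    """Myopic optimization, Eq. (e:opt): grid search over feasible roll lengths."""
    rolls = [k * delta for k in range(2, round(h / delta) + 1)]
    return min(rolls, key=lambda alpha: cav_linear(a, b, alpha, h, delta, phi))
```

With $\varphi=1$ the cost telescopes to $a+bh$ for every $\alpha$, reproducing the first case of the \ensuremath{\mathbb{Q}}-funding proposition below; with $\varphi<1$ and a positive curve the grid search returns term funding, $\alpha=h$.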
\subsubsection{Optimal \ensuremath{\mathbb{Q}}\ Funding}
\ensuremath{\mathbb{Q}}\ Funding is where we solve the optimization problem under the \ensuremath{\mathbb{Q}}\ measure. This measure is determined by the observed prices of tradeable instruments at $t=0$ (see any text for details, e.g. \cite{Shreve2004a}). For example, in the \ensuremath{\mathbb{Q}}\ measure the current yield curve perfectly predicts the expected future yield curves.
It may be very difficult to lock in forward funding according to an observed input yield curve. However, for completeness we include this case.
\begin{myProp}
Given a linear input yield curve $(a>0,b)$ the regulatory-optimal \ensuremath{\mathbb{Q}}-funding strategies with horizon $h$ are:
\begin{itemize}
\item $a>0, \varphi=1$: all funding strategies are equivalent;
\item $a>0,\ a+bT>0, \varphi<1$: Term funding;
\item $a>0,\ a+bT<0, \varphi<1$: Shortest possible.
\end{itemize}
\end{myProp}
\begin{proof}{\it Sketch.}
The first result is immediate by no-arbitrage: as there is no bid-ask spread, no strategy can dominate.
When there is a bid-ask spread and rolls are costly, term funding dominates.
When there is a bid-ask spread and observed zero rates are negative ($a+bT<0$), rolls become beneficial and the optimal strategy is to have as many rolls as possible.
\end{proof}
\begin{figure}[htbp]
\centering
\includegraphics[width=0.90\textwidth,clip,trim=0 0 0 0]{figure_optQ.pdf}
\caption{Left panels show funding costs versus roll lengths for: $a>0, \varphi=1$; $a>0,\ a+bT>0, \varphi<1$; $a>0,\ a+bT<0, \varphi<1$. The right panels show the corresponding continuously compounded yield curves. Parameters were: $h=1$; $\Delta=1/12$; with $\varphi=1$ for the top panel, and $\varphi=0.75$ for the lower two panels. Axes for all graphs are the same: horizontal axes are roll length; vertical axes are funding costs.}
\label{f:optQ}
\end{figure}
Figure \ref{f:optQ} illustrates the three cases. Note that although this funding is optimal in \ensuremath{\mathbb{Q}}\ terms there is no guarantee that following this strategy will result in the lowest costs in \ensuremath{\mathbb{P}}\ terms. This is precisely the hedging caveat of \cite{Hull2011a}.
\FloatBarrier
\subsubsection{Optimal \ensuremath{\mathbb{P}}\ Funding}
\ensuremath{\mathbb{P}}\ Funding is where we solve the optimization under \ensuremath{\mathbb{P}}\ measures. Potentially there will be a different solution for each \ensuremath{\mathbb{P}}\ measure: although the problem is unchanged, some of the inputs are different (i.e. measures and filtrations).
The \ensuremath{\mathbb{P}}\ measure has no unique definition, unlike the \ensuremath{\mathbb{Q}}\ measure. We will investigate several, comparing the resulting strategies, and compare the \ensuremath{\mathbb{Q}}-optimal strategy with the \ensuremath{\mathbb{P}^{\rm PI}}-optimal strategy. \ensuremath{\mathcal{F}^{\rm PI}}\ is the Perfect Information filtration and \ensuremath{\mathbb{P}^{\rm PI}}\ the corresponding (set of) measure(s) where we know the future (a rather simple set of measures in fact). Thus we can calculate the value of perfect information (VPI) and state that the optimal strategy is the one with the lowest remaining VPI.
The \ensuremath{\mathbb{P}}\ situation is more complex than in the \ensuremath{\mathbb{Q}}\ case but there are some regularities that can be observed. We start with the assumption of a {{\it{Constant}}}\ yield curve, i.e. future yield curves have exactly the same shape as today.
\begin{myProp}
Given a linear input yield curve $(a>0,b)$ the regulatory-optimal \ensuremath{\mathbb{P}}-funding strategies where \ensuremath{\mathbb{P}}={{\it{Constant}}}\ yield curve, with horizon $h$ are:
\begin{itemize}
\item $a>0, b>0, \varphi=1$: Shortest possible;
\item $a>0, b>0, \varphi<1$: Neither the shortest nor Term funding are always optimal;
\item $a>0,\ b<0$: Term funding;
\end{itemize}
\end{myProp}
\begin{proof}
{\it Sketch.}
Since rolls do not matter when the bid-ask spread is zero, we can choose any roll length. As the future funding cost is the same as today, with an upward-sloping yield curve the lowest cost comes from the shortest roll.
When rolls are costly there is a trade-off between moving up the yield curve for longer funding and fewer rolls, against the cost of doing so. Clearly the optimal strategy can move.
When the yield curve is downwards-sloping, since $a>0$ and the curve remains positive, rolls are always costly; hence the cheapest strategy is to have no rolls. That is, term funding is optimal.
\end{proof}
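The proof sketch can be checked numerically. Under the {\it{Constant}} assumption each tranche is bought, and surplus sold, at today's rate for the traded tenor rather than at the forward rate. A minimal sketch (the helper is repeated so the block is self-contained; names are illustrative):

```python
import math

def n_rolls(h_n, alpha_n):
    """Eq. (e:nroll); arguments in units of Delta."""
    return 0 if alpha_n >= h_n else math.ceil((h_n - alpha_n) / (alpha_n - 1))

def cav_constant(a, b, alpha, h, delta, phi):
    """Average cost when future yield curves equal today's (P = Constant):
    every purchase and sale happens at today's rate for the traded tenor."""
    first = min(alpha, h)
    cost = (a + b * first) * first
    for i in range(1, n_rolls(round(h / delta), round(alpha / delta)) + 1):
        cost -= phi * delta * (a + b * delta)   # resale at today's Delta-tenor rate
        q = min(alpha, h - i * (alpha - delta))
        cost += q * (a + b * q)                 # next tranche at today's q-tenor rate
    return cost / h
```

For $a>0$, $b>0$, $\varphi=1$ the shortest roll ($\alpha=2\Delta$) beats term funding, while for $b<0$ term funding wins, matching the first and third cases of the proposition.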
\begin{figure}[htbp]
\centering
\includegraphics[width=0.90\textwidth,clip,trim=0 0 0 0]{figure_optPconstant.pdf}
\caption{\ensuremath{\mathbb{P}}\ Funding costs versus roll lengths for three of the same cases as Figure \ref{f:optQ}, with the addition of the upper right which has: $a>0, b>0, \varphi=0.75$. Equivalence of panels is clockwise from top right. Axes for all graphs are the same: horizontal axes are roll length; vertical axes are funding costs.}
\label{f:optPconstant}
\end{figure}
Figure \ref{f:optPconstant} illustrates four cases, three are the same as Figure \ref{f:optQ} with the addition of the $a>0, b>0, \varphi<1$ case. In this extra case the optimal strategy is intermediate between shortest possible and Term funding.
Strikingly, the \ensuremath{\mathbb{Q}}-optimal and \ensuremath{\mathbb{P}}-{{\it{Constant}}}-optimal strategies are almost opposite. The most realistic case is perhaps where there is a bid-ask spread and then the \ensuremath{\mathbb{P}}-{{\it{Constant}}}-optimal strategy is intermediate, which is not even present within the \ensuremath{\mathbb{Q}}-optimal strategies.
The {{\it{Constant}}}\ case is not as restrictive as it may appear. What we are effectively doing is comparing the movement of the baseline with the differential of the yield curve. In the second case the yield curve is steeper than the baseline movement, whilst in the third the baseline movement is steeper than the yield curve. Thus the key to practical strategies is short rate momentum versus yield curve steepness.
However, we have not yet examined how these optimal strategies would hold up practically. For this we turn to the Value of Perfect Information.
\FloatBarrier
\subsection{\ensuremath{\mathbb{P}}\ Identification and Perfect Information Benchmark\label{s:Pid}}
We can only identify \ensuremath{\mathbb{P}}\ on a statistical basis. We want to obtain funding costs that are materially lower than \ensuremath{\mathbb{Q}}-funding on average, with the improvement statistically significant out-of-sample. The best we can do is to know the future, and this is our perfect information benchmark, often referred to as the Expected Value of Perfect Information (EVPI). EVPI is a standard metric from the stochastic programming literature \cite{Birge2011a}, originating in \cite{Raiffa1961a}. EVPI is the payment which makes a decision maker indifferent between receiving the payment and being given complete and accurate knowledge of the future (with respect to the decision).
\begin{figure}[htbp]
\centering
\includegraphics[width=1.00\textwidth]{figure_4whisk.pdf}
\caption{Libor curves (short blue lines; continuously compounding) for four major currencies compared to 1M Libor rate (continuous line; continuously compounding).}
\label{f:4whisk}
\end{figure}
Figure \ref{f:4whisk} shows examples of funding (i.e. Libor) curves out to 1Y used in this example for the four major currencies considered (GBP, EUR, JPY, USD).
In our \ensuremath{\mathbb{P}}\ identification we assume that the shape of the yield curve does not change but that it undergoes parallel shifts. We aim to predict these parallel shifts using a momentum strategy, specifically an exponentially weighted moving average (EWMA) filter (i.e. a trivial Kalman filter). We add some refinements:
\begin{itemize}
\item Limit gradient of short end movement so that projected base rates do not go negative
\item Apply a threshold $\theta$ to the calculated gradient
\item Weight the gradient by $\omega$, to between 0\%\ and 100\%\ of its calibrated value
\item The projected short end of the curve cannot go negative
\end{itemize}
Thus we have three parameters to calibrate: $\{\lambda, \theta, \omega\}$. $\lambda$ is the decay constant for the EWMA filter.
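A minimal sketch of the predictor follows, assuming regularly sampled short rates; the function name and signature are illustrative, not the exact implementation used here.

```python
import math

def trend_prediction(short_rates, dt, lam, theta, omega):
    """EWMA (trivial Kalman) estimate of the short-rate gradient, with the
    refinements above: a threshold theta, a scaling omega, and a zero floor
    on the projected short rate."""
    w = math.exp(-dt / lam)              # per-step decay from time constant lam
    grad = 0.0
    for prev, cur in zip(short_rates, short_rates[1:]):
        grad = w * grad + (1.0 - w) * (cur - prev) / dt
    if abs(grad) < theta:                # ignore weak signals
        grad = 0.0
    grad *= omega                        # use only a fraction of the raw gradient
    def project(horizon):
        # parallel-shift projection; the short end is floored at zero
        return max(0.0, short_rates[-1] + grad * horizon)
    return grad, project
```

A flat history yields a zero gradient, while a steady rise produces a positive one; the latter is the lagging-curve situation in which shorter rolls become attractive.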
\subsection{Results}
Calibration solves the problem below:
\ben
\max_{\Lambda\in \Pi} \sum_{c\in\{\text{BP,EU,JP,US}\}}\left( \ensuremath{{C^{\rm av}}}_{t,{\rm opt}}^{\ensuremath{\mathbb{Q}}}(h;c) -\ensuremath{{C^{\rm av}}}_{t,{\rm opt}}^{\ensuremath{\mathbb{P}}}(h;c,\Lambda) \right) /4\label{e:cal}
\een
where the ``/4'' is because we average over four currencies $c$, which are now visible as parameters of the costs $C$. Also:
$\Lambda = \{\lambda,\theta,\omega\}$ is the set of parameters. $\lambda \in[0,10\ \text{years}]$ is the EWMA decay constant (units of time); $\theta\in[0,1]$ is the gradient threshold (no units); and $\omega\in[0,1]$ is the gradient scaling (no units). Parameter ranges over which to optimize are reasonable choices in the authors' opinions.
$c$ is a currency in the set of currencies \{\text{BP,EU,JP,US}\}.
$\Pi$ is the multi-dimensional space of feasible parameter values.
$h$ is the horizon, here one year.
Equation \ref{e:cal} is a maximization over a nested optimization, see Equations \ref{e:opt}, \ref{e:cav} for details of $\ensuremath{{C^{\rm av}}}$. In the inner optimization $\alpha$ is chosen. In the outer optimization (Equation \ref{e:cal}) the parameters are chosen. Note that \ensuremath{\mathbb{Q}}\ and \ensuremath{\mathbb{P}}\ replace \ddag\ in Equations \ref{e:opt} and \ref{e:cav} as we are solving for the parameters which improve the \ensuremath{\mathbb{P}}\ solution the most relative to the \ensuremath{\mathbb{Q}}\ solution. We have also made the parameters in the \ensuremath{\mathbb{P}}\ optimization explicit, ``$;\Lambda$''. It is also clear that to understand whether there is a real improvement, or not, we must test the resulting calibration out of sample.
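The outer optimization of Equation \ref{e:cal} can then be sketched as a brute-force search over $\Pi$ wrapped around the inner $\alpha$-optimization. The cost callables below are placeholders for Equations \ref{e:opt}/\ref{e:cav} under each measure (a schematic sketch, not the calibration code used here):

```python
from itertools import product

def calibrate(cost_q, cost_p, lambdas, thetas, omegas, currencies):
    """Outer optimization of Eq. (e:cal): choose (lambda, theta, omega)
    maximizing the average improvement of P-optimal over Q-optimal cost.
    cost_q(c) and cost_p(c, params) are assumed to already contain the
    inner minimization over the roll length alpha."""
    best_gain, best_params = float("-inf"), None
    for params in product(lambdas, thetas, omegas):
        gain = sum(cost_q(c) - cost_p(c, params) for c in currencies) / len(currencies)
        if gain > best_gain:
            best_gain, best_params = gain, params
    return best_params, best_gain
```
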
We calibrated model parameters using the first five years of data by maximizing the average funding cost improvement relative to a \ensuremath{\mathbb{Q}}-optimal strategy (i.e. Equation \ref{e:cal}), see Table \ref{t:calib}. The calibration was done jointly over all the currencies because the main effects are due to economic cycles. The model was then tested out-of-sample on the remaining data by comparing with the \ensuremath{\mathbb{Q}}-optimal strategy and the EVPI strategy.
\begin{table}[htbp]
\centering
\begin{tabular}{lcc}
& Parameter & Value \\ \hline
\multirow{3}{*}{Setup} & Horizon & 1Y \\
&Minimum regulatory buffer & 1M \\
&Bid-ask ($\varphi$) & 0.75 \\ \hline
\multirow{4}{*}{Calibration}
& Calibration length & 5Y \\
& EWMA decay ($\lambda$) & 90D \\
& Gradient threshold ($\theta$) & 0.005 per day \\
& Gradient scaling ($\omega$) & 0.3 \\ \hline
\end{tabular}
\caption{Setup, and calibration for EWMA and additional parameters. Calibration was combined over all the currencies.}
\label{t:calib}
\end{table}
\begin{figure}[htbp]
\centering
\includegraphics[width=0.90\textwidth]{figure_QonEWMAcost.pdf}
\caption{\ensuremath{\mathbb{Q}}-funding average undiscounted cost relative to EWMA-based funding cost (0.01 on vertical scale = 100bps).}
\label{f:QvsEWMA}
\end{figure}
\begin{figure}[htbp]
\centering
\includegraphics[width=0.90\textwidth]{figure_EMWAonEVPIcost.pdf}
\caption{EWMA-based average undiscounted cost relative to perfect information funding cost (0.01 on vertical scale = 100bps).}
\label{f:EWMAvsVPI}
\end{figure}
\begin{figure}[htbp]
\centering
\includegraphics[width=0.90\textwidth]{figure_EWMAalpha.pdf}
\caption{Optimal EWMA-based roll choice (years, on vertical scale). \ensuremath{\mathbb{Q}}-based roll choice is always term funding, i.e. here 1Y.}
\label{f:EWMAroll}
\end{figure}
\begin{table}[htbp]
\centering
\begin{tabular}{ccccc}
& BP & EU & JP & US \\ \hline
\ensuremath{\mathbb{Q}}\ vs EWMA (average, bps) & 13 & 22 & 10 & 19 \\
T-Test p-level& E-25 & E-27 & E-45 & E-37 \\
EWMA vs EVPI (average, bps) & 16 & 15 & 4 & 15 \\ \hline
\%-efficient & 44\% & 59\% & 71\% & 56\%
\end{tabular}
\caption{Out-of-sample results for different funding strategies and their statistical significance (p-value). Efficiency is defined as (\ensuremath{\mathbb{Q}}\ - EWMA) / (\ensuremath{\mathbb{Q}}\ - EVPI).}
\label{t:res}
\end{table}
Figure \ref{f:QvsEWMA} shows optimal \ensuremath{\mathbb{Q}}\ funding costs versus our EWMA model (\ensuremath{\mathbb{P}}\ optimal funding). We see that following an optimal \ensuremath{\mathbb{Q}}\ funding strategy has been consistently more expensive (positive cost differences) since 2008 for all currencies. Prior to 2008 the picture is more mixed, but there is a preponderance of positive costs for EU, JP, and US, whereas for BP the picture pre-2008 seems balanced.
Figure \ref{f:EWMAvsVPI} shows our EWMA model funding (\ensuremath{\mathbb{P}}\ optimal funding) costs versus funding when the future is known (i.e. the Value of Perfect Information). For BP, EU, and US our strategy is usually bounded by 0.005 (i.e. 50bps), and for JP by 0.002 (20bps). That is, we are within 50bps (or 20bps for JP) of perfect performance. Figure \ref{f:QvsEWMA} by contrast shows that \ensuremath{\mathbb{Q}}\ optimal funding can be multiples of this bound worse than our strategy. There are a few short spikes where a change of market conditions has not been picked up by our strategy quickly enough. One or two spikes per currency over 20 years is an encouraging performance.
Figure \ref{f:EWMAroll} shows the EWMA model (\ensuremath{\mathbb{P}}\ optimal funding) roll choices. Generally the strategy is bang-bang\footnote{``bang-bang'' is a standard term in optimal control theory \cite{Craven1998a}.}, with discontinuous changes in the control value (here the roll length). It is noticeable that roll choices are even shorter when the model detects a significant mismatch between market-observed yield curves and market behavior. This occurs when rates are low and the yield curve is upwards sloping or flat. Recall that \ensuremath{\mathbb{Q}}\ always chooses term funding when there is a bid-ask spread, as here.
In all Figures \ref{f:QvsEWMA}, \ref{f:EWMAvsVPI}, \ref{f:EWMAroll}, the first five years are calibration and the rest are performance.
The relative funding cost figures show that the major missing information is the exact timing of the start of the late-2008 drop in rates for GBP, EUR and USD. For JPY the drop started gradually, so the EWMA-based setup adapted smoothly. Notice that the funding cost increase is a relatively sharp peak, so the EWMA adapted quickly to the new regime. An additional point is that there is no equivalent cost peak relative to perfect information when rates approached zero. This is easy to explain, as we built in the knowledge that the short end of the funding (yield) curve cannot go below zero.
Table \ref{t:res} summarizes the out-of-sample results. We see that EWMA-based optimal funding is between 10bps and 22bps better on average than \ensuremath{\mathbb{Q}}-optimal funding. However, EWMA-based funding still lags perfect information by 4bps to 16bps. EWMA-based funding, for the example setup, achieves 44\%\ to 71\%\ efficiency, where efficiency is defined as the achieved average improvement over \ensuremath{\mathbb{Q}}-funding as a fraction of the improvement available under perfect information. Statistically these results are highly significant, with p-values smaller than $10^{-20}$.
\FloatBarrier
\section{Discussion}
On a derivatives desk the question of funding costs is normally answered by ``ask Treasury''. Here we take the view from Treasury and integrate regulatory constraints into an optimal funding problem. We have shown theoretically that \ensuremath{\mathbb{Q}}-optimal and \ensuremath{\mathbb{P}}-optimal strategies are radically different. Practical funding optimization relies on identification of an appropriate \ensuremath{\mathbb{P}}-measure on statistical grounds. We have shown that this is possible and that quite simple strategies based on EWMA (i.e. momentum) are statistical, and practical, improvements on hedged funding. These strategies achieve 44\%\ to 71\%\ efficiency when compared to perfect information.
Prospective Prudent Valuation regulations \cite{EBA-CP-2013-28,EBA-CP-2013-28-FAQ} may see the funding strategies proposed here as business models. Thus they would not be applicable for use in pricing trades for capital purposes.
We have limited ourselves to a one-year funding horizon because that is how far out deposits are available. We used these as proxy funding costs for a typical bank. Future work could go out further using bond curves or CDS. There are issues with both data sources. Bond prices are generally only visible in the secondary market, which is not the one that the issuing bank has to optimize against. CDS are unfunded instruments, have only been available for ten to fifteen years, and can have significant bond-CDS bases. These issues are outside the scope of this paper.
We consider funding costs in terms of expectations, i.e. very simple utility functions. More complex utility functions could be introduced, considering VaR or Expected Shortfall \cite{BCBS-219}. However, these may not be appropriate because the buffer itself acts against downside risk, so introducing further downside risk measures appears superfluous. Other, more complex, utility functions would require justification. Utility functions are equivalent to strategy choices; they cannot be used to compare different choices. At executive level the utility function should be compared with the risk appetite of the bank.
Our yield curve prediction based on EWMA and our myopic optimization can be improved in many ways. A Kalman filter, or machine learning approach could be applied to prediction. The optimization could be moved from myopic to multi-stage stochastic optimization. However, even at this early stage we achieve significant practical improvements, and these are at low computational cost. We leave these developments for future research.
It may appear that our \ensuremath{\mathbb{Q}}-funding setup is unfair in that hedged funding is assumed to be possible in the optimization, but must then play out in the physical measure for its actual cost. This is not a difficulty because (myopic) optimal \ensuremath{\mathbb{Q}}-funding is always term funding (when the yield curve is positive, as it is in practice). Thus the anticipated funding cost is achieved.
Practically one may argue that taking downwards-sloping yield curves for funding curves is infeasible because funding at longer tenors may be volume-limited. We leave consideration of market impact for future research but note that downwards-sloping curves are rare so we do not expect them to change our statistical conclusions.
This paper highlights the importance of regulatory liquidity constraints on funding, specifically the minimum liquidity buffer. It also sets up the funding cost problem from the Treasury point of view and shows how it can be optimized using \ensuremath{\mathbb{Q}} and \ensuremath{\mathbb{P}}\ points of view. Demonstration results are significant both statistically and practically.
\bibliographystyle{alpha}
\section{Details of Evaluation on Downstream Tasks}
\label{sec:appendix:eval-details}
\InputWithSpace{tables/table-comment-update-full-models-results.tex}
\section{Background}
We first give a high-level overview of the building blocks that are necessary to understand our approach.
\subsection{Generation with Transformer-Based Models}
\paragraph{Conditional Sequence Generation}
Conditional sequence generation entails generating an output sequence given an input sequence. Many tasks are framed in this manner, including machine translation (e.g., translating a sentence from French to English), text summarization (e.g., generating a brief summary for a given news article), and code generation (e.g., generating a code snippet for a given natural language specification).
\paragraph{Encoder-Decoder Framework}
In recent years, conditional sequence generation tasks are being addressed with encoder-decoder models. An encoder-decoder model consists of two neural components: an encoder and a decoder. The input sequence is fed into the encoder, which produces learned representations of the tokens in that sequence. These learned representations are then passed into the decoder, which generates the output sequence one token at a time.
\paragraph{Transformers}
Transformers~\cite{VaswaniETAL17Attention} are powerful neural models that are commonly adopted as the encoder and decoder in the encoder-decoder framework. These models rely on an \textit{attention} mechanism to learn representations for tokens by relating them to other tokens in the sequence. Namely, a transformer-based encoder will learn representations for each token in the input sequence by ``attending'' to other input tokens. For the decoder, when generating a token at timestep $t$, it will ``attend'' to the representations of the output tokens generated from timestep 1 to $t-1$ as well as the representations of tokens from the input sequence. Transformer models can become very large, with huge numbers of attention heads and of encoder and decoder layers.
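The mechanism can be illustrated with a minimal single-query scaled dot-product attention in plain Python (a toy sketch: real transformers vectorize this across heads and positions and add learned projections):

```python
import math

def attend(query, keys, values):
    """Single-query scaled dot-product attention.
    query: a vector; keys/values: one vector per position."""
    d = len(query)
    scores = [sum(q * k for q, k in zip(query, key)) / math.sqrt(d) for key in keys]
    m = max(scores)                      # subtract max for a stable softmax
    exps = [math.exp(s - m) for s in scores]
    z = sum(exps)
    weights = [e / z for e in exps]
    # output is the attention-weighted mix of the value vectors
    return [sum(w * v[j] for w, v in zip(weights, values))
            for j in range(len(values[0]))]
```

A query that is close to one key receives almost all of the attention weight, so the output is dominated by the corresponding value vector.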
\subsection{Large Pretrained\xspace Language Models}
Large pretrained\xspace language models generally refer to the class of large transformer-based models that are trained on large amounts of unlabeled web data with unsupervised training objectives. This includes a vast number of models like GPT~\cite{radford2019language,brown2020language}, BART~\cite{lewis-etal-2020-bart}, and T5~\cite{raffel2020exploring}.
\paragraph{Pretrained\xspace with Denoising Autoencoding}
BART and T5 models are pretrained\xspace using denoising autoencoding unsupervised training objectives. Namely, a noising function is first applied to a given input sequence \textit{inp}\xspace to form \textit{inp$'$}\xspace. Common noising functions include \textit{Token Masking}: tokens in the input sequence are randomly masked;
\textit{Token Deletion}: random tokens are deleted from the input sequence;
\textit{Token Infilling}: a span of tokens are sampled and replaced with a mask token;
\textit{Sentence Permutation}: sentences in the document are shuffled in a random order.
Then, \textit{inp$'$}\xspace is fed into a model's encoder, and the encoder's learned representation is passed into the decoder, which generates an output sequence, \textit{out}\xspace, that is expected to resemble the original input sequence (\textit{inp}\xspace). In other words, the model is trained to ``denoise'' \textit{inp$'$}\xspace, using a training objective that minimizes the error between \textit{out}\xspace and the original input, \textit{inp}\xspace.
Through this, the model learns to extract meaning from the input sequence and also generate fluent and coherent output. Therefore, by pretraining\xspace on massive amounts of data, the model develops an understanding of how things in the world relate to one another as a strong language modeling capability.
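As an illustration, a toy version of the \textit{Token Masking} noising function is shown below; the actual BART/T5 corruption schemes are span-based with sentinel tokens and further refinements.

```python
import random

def token_masking(tokens, mask_prob=0.15, mask_token="<mask>", seed=0):
    """Toy Token Masking: independently replace each token with a mask symbol.
    The model is then trained to reconstruct the original sequence from the
    corrupted one."""
    rng = random.Random(seed)
    return [mask_token if rng.random() < mask_prob else tok for tok in tokens]
```
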
\paragraph{Finetuning for Downstream Tasks}
Since large pretrained\xspace language models are trained using unsupervised training objectives on huge web data, they cannot generally be directly applied to downstream tasks (e.g., translation, summarization).
Fine-tuning\xspace is a common technique to transfer the knowledge learned during pretraining\xspace to target downstream tasks.
Specifically, the pretrained\xspace model is further trained for the downstream task on some amount of supervised data.
\subsection{Pretrained\xspace Language Models for Software Engineering}
\label{sec:CodeT5}
Inspired by the success of large pretrained\xspace models in Natural Language Processing (NLP), a number of machine learning models pretrained\xspace on source code and technical text have been proposed
for solving various software-related problems.
For instance, inspired by BART, \citet{ahmad-etal-2021-unified} have developed PLBART, which is a large pretrained\xspace language model that has been fine-tuned\xspace for a number of code understanding (e.g., code summarization) and generation (e.g., code translation) tasks. Similarly, inspired by T5, \citet{wang2021codet5} built a larger model CodeT5\xspace, which is pretrained\xspace on eight programming languages together
with their natural language comments collected from open-source
repositories.
Specifically, it is pretrained to incorporate information from identifiers in the code.
CodeT5\xspace has shown promising results in code-related generation
tasks such as code summarization and code generation, and in code-related
understanding tasks such as clone detection and defect detection.
\section{For every submission}
\subsection{Did you discuss the \textit{limitations} of your work?}
If you answer {\bf Yes}, provide the section number; if you answer {\bf No}, provide a justification. \\[0.3cm]
\begin{Form}
\begin{tabular}{l}
\cm{mainClaims}{Yes,No,N/A}\\[0.2cm]
\tf[0.85]{mainClaimsJustification}
\end{tabular}
\end{Form} \\[0.3cm]
\subsection{Did you discuss any potential \textit{risks} of your work?}
If you answer {\bf Yes}, provide the section number; if you answer {\bf No}, provide a justification. \\[0.3cm]
\begin{Form}
\begin{tabular}{l}
\cm{risks}{Yes,No,N/A}\\[0.2cm]
\tf[0.85]{risksJustification}
\end{tabular}
\end{Form}
\subsection{Do the abstract and introduction summarize the paper’s main claims?}
If you answer {\bf Yes}, provide the section number; if you answer {\bf No}, provide a justification. \\[0.3cm]
\begin{Form}
\begin{tabular}{l}
\cm{abstractIntro}{Yes,No,N/A}\\[0.2cm]
\tf[0.85]{abstractIntroJustification}
\end{tabular}
\end{Form}
\section{Did you use or create \textit{scientific artifacts}?}
If you answer {\bf Yes}, provide the section number; if you answer {\bf No}, you can skip the rest of this section. \\[0.3cm]
\begin{Form}
\begin{tabular}{l}
\cm{createArtifacts}{Yes,No}\\[0.2cm]
\end{tabular}
\end{Form}
If yes:
\subsection{Did you cite the creators of artifacts you used?}
If you answer {\bf Yes}, provide the section number; if you answer {\bf No}, provide a justification. \\[0.3cm]
\begin{Form}
\begin{tabular}{l}
\cm{citeCreators}{Yes,No,N/A}\\[0.2cm]
\tf{citeCreatorsJustification}
\end{tabular}
\end{Form} \\[0.3cm]
\subsection{Did you discuss the \textit{license or terms} for use and/or distribution of any artifacts?}
If you answer {\bf Yes}, provide the section number; if you answer {\bf No}, provide a justification. \\[0.3cm]
\begin{Form}
\begin{tabular}{l}
\cm{legalGrounds}{Yes,No,N/A}\\[0.2cm]
\tf{legalGroundsJustification}
\end{tabular}
\end{Form} \\[0.3cm]
\subsection{Did you discuss if your use of existing artifact(s) was consistent with their \textit{intended use}, provided that it was specified? For the artifacts you create, do you specify intended use and whether that is compatible with the original access conditions (in particular, derivatives of data accessed for research purposes should not be used outside of research contexts)?}
If you answer {\bf Yes}, provide the section number; if you answer {\bf No}, provide a justification. \\[0.3cm]
\begin{Form}
\begin{tabular}{l}
\cm{intendedUse}{Yes,No,N/A}\\[0.2cm]
\tf{intendedUseJustification}
\end{tabular}
\end{Form} \\[0.3cm]
\subsection{Did you discuss the steps taken to check whether the data that was collected/used contains any \textit{information that names or uniquely identifies individual people} or \textit{offensive content}, and the steps taken to protect / anonymize it?}
If you answer {\bf Yes}, provide the section number; if you answer {\bf No}, provide a justification. \\[0.3cm]
\begin{Form}
\begin{tabular}{l}
\cm{personallyIdentifiableInformationOrOffensiveContent}{Yes,No,N/A}\\[0.2cm]
\tf{personallyIdentifiableInformationOrOffensiveContentJustification}
\end{tabular}
\end{Form} \\[0.3cm]
\subsection{Did you provide documentation of the artifacts, e.g., coverage of domains, languages, and linguistic phenomena, demographic groups represented, etc.?}
If you answer {\bf Yes}, provide the section number; if you answer {\bf No}, provide a justification. \\[0.3cm]
\begin{Form}
\begin{tabular}{l}
\cm{documentation}{Yes,No,N/A}\\[0.2cm]
\tf{documentationJustification}
\end{tabular}
\end{Form} \\[0.3cm]
\subsection{Did you report relevant statistics like the number of examples, details of train/test/dev splits, etc. for the data that you used/created?}
If you answer {\bf Yes}, provide the section number; if you answer {\bf No}, provide a justification. \\[0.3cm]
\begin{Form}
\begin{tabular}{l}
\cm{relevantStatistics}{Yes,No,N/A}\\[0.2cm]
\tf{relevantStatisticsJustification}
\end{tabular}
\end{Form} \\[0.3cm]
\section{Did you run \textit{computational experiments}?}
If you answer {\bf Yes}, provide the section number; if you answer {\bf No}, you can skip the rest of this section. \\[0.3cm]
\begin{Form}
\begin{tabular}{l}
\cm{computationalExperiments}{Yes,No}
\end{tabular}
\end{Form}
If yes:
\subsection{Did you report the \textit{number of parameters} in the models used, the \textit{total computational budget} (e.g., GPU hours), and \textit{computing infrastructure} used?}
If you answer {\bf Yes}, provide the section number; if you answer {\bf No}, provide a justification. \\[0.3cm]
\begin{Form}
\begin{tabular}{l}
\cm{reportReproducibility}{Yes,No,N/A}\\[0.2cm]
\tf{reportReproducibilityJustification}
\end{tabular}
\end{Form} \\[0.3cm]
\subsection{Did you discuss the experimental setup, including \textit{hyperparameter search} and \textit{best-found hyperparameter} values?}
If you answer {\bf Yes}, provide the section number; if you answer {\bf No}, provide a justification. \\[0.3cm]
\begin{Form}
\begin{tabular}{l}
\cm{bestFoundHyperparameter}{Yes,No,N/A}\\[0.2cm]
\tf{bestFoundHyperparameterJustification}
\end{tabular}
\end{Form} \\[0.3cm]
\subsection{Did you report \textit{descriptive statistics} about your results (e.g., error bars around results, summary statistics from sets of experiments), and is it transparent whether you are reporting the max, mean, etc. or just a single run?}
If you answer {\bf Yes}, provide the section number; if you answer {\bf No}, provide a justification. \\[0.3cm]
\begin{Form}
\begin{tabular}{l}
\cm{descriptiveStatistics}{Yes,No,N/A}\\[0.2cm]
\tf{descriptiveStatisticsJustification}
\end{tabular}
\end{Form} \\[0.3cm]
\subsection{If you used existing packages (e.g., for preprocessing, for normalization, or for evaluation), did you report the implementation, model, and parameter settings used (e.g., NLTK, Spacy, ROUGE, etc.)?}
If you answer {\bf Yes}, provide the section number; if you answer {\bf No}, provide a justification. \\[0.3cm]
\begin{Form}
\begin{tabular}{l}
\cm{existingPackages}{Yes,No,N/A}\\[0.2cm]
\tf{existingPackagesJustification}
\end{tabular}
\end{Form} \\[0.3cm]
\section{Did you use \textit{human annotators} (e.g., crowdworkers) or \textit{research with human subjects}?} If you answer {\bf Yes}, provide the section number; if you answer {\bf No}, you can skip the rest of this section. \\[0.3cm]
\begin{Form}
\begin{tabular}{l}
\cm{hummanAnnotators}{Yes,No}\\
\end{tabular}
\end{Form}
If yes:
\subsection{Did you report the full text of instructions given to participants, including e.g., screenshots, disclaimers of any risks to participants or annotators, etc.?}
If you answer {\bf Yes}, provide the section number; if you answer {\bf No}, provide a justification. \\[0.3cm]
\begin{Form}
\begin{tabular}{l}
\cm{fullTextInstructions}{Yes,No,N/A}\\[0.2cm]
\tf{fullTextInstructionsJustification}
\end{tabular}
\end{Form} \\[0.3cm]
\subsection{Did you report information about how you recruited (e.g., crowdsourcing platform, students) and paid participants, and discuss if such \textit{payment is adequate} given the participants’ demographic (e.g., country of residence)?}
If you answer {\bf Yes}, provide the section number; if you answer {\bf No}, provide a justification. \\[0.3cm]
\begin{Form}
\begin{tabular}{l}
\cm{payment}{Yes,No,N/A}\\[0.2cm]
\tf{paymentJustification}
\end{tabular}
\end{Form} \\[0.3cm]
\subsection{Did you discuss whether and how \textit{consent} was obtained from people whose data you're using/curating (e.g., did your instructions explain how the data would be used)?}
If you answer {\bf Yes}, provide the section number; if you answer {\bf No}, provide a justification. \\[0.3cm]
\begin{Form}
\begin{tabular}{l}
\cm{consent}{Yes,No,N/A}\\[0.2cm]
\tf{consentJustification}
\end{tabular}
\end{Form} \\[0.3cm]
\subsection{Was the data collection protocol \textit{approved (or determined exempt)} by an ethics review board?}
If you answer {\bf Yes}, provide the section number; if you answer {\bf No}, provide a justification. \\[0.3cm]
\begin{Form}
\begin{tabular}{l}
\cm{ethicsAmountSpent}{Yes,No,N/A}\\[0.2cm]
\tf{ethicsAmountSpentJustification}
\end{tabular}
\end{Form} \\[0.3cm]
\subsection{Did you report the basic demographic and geographic characteristics of the \textit{annotator} population that is the source of the data?}
If you answer {\bf Yes}, provide the section number; if you answer {\bf No}, provide a justification. \\[0.3cm]
\begin{Form}
\begin{tabular}{l}
\cm{annotator}{Yes,No,N/A}\\[0.2cm]
\tf{annotatorJustification}
\end{tabular}
\end{Form} \\[0.3cm]
\end{document}
\section{Conclusion}
\label{sec:conclusion}
In this paper, we present a novel edit-driven pretraining\xspace objective and use it to develop \textsc{CoditT5}\xspace, a pretrained\xspace language model for software-related editing tasks. \textsc{CoditT5}\xspace is pretrained\xspace on large amounts of source code and natural language comments to perform edits, and we evaluate this model by
fine-tuning\xspace it on three distinct downstream tasks:
comment updating\xspace, bug fixing\xspace and automated code review\xspace.
By outperforming task-specific baselines and pure generation baselines across tasks, we demonstrate the suitability of \textsc{CoditT5}\xspace (and our pretraining\xspace objective) for editing tasks and its generalizability. We additionally find that a pure generation-based model and \textsc{CoditT5}\xspace can complement one another through simple reranking strategies, which outperform each of the models individually and also achieve new state-of-the-art performance for the three downstream editing tasks that we consider.
\section{Evaluation}
We organize our evaluation around three main research questions:
\vspace{3pt}
\RQ{1}{How does our edit-based model, \textsc{CoditT5}\xspace, compare to generation
and task-specific baselines for edit-related tasks?}
\vspace{3pt}
\RQ{2}{Does our proposed pretraining\xspace objective help a model in better reasoning about and performing edits?}
\vspace{3pt}
\RQ{3}{Can a pure generation model complement \textsc{CoditT5}\xspace by integrating the two models?}
\subsection{{Comparing \textsc{CoditT5}\xspace to Baselines}}
\InputWithSpace{figures/figure-coditT5-example.tex}
\InputWithSpace{tables/table-comment-update-models-results.tex}
\InputWithSpace{tables/table-bf-models-results.tex}
\InputWithSpace{tables/table-code-review-models-results.tex}
\InputWithSpace{figures/figure-new-objective-code-review-example.tex}
We present results in Tables~\ref{tab:comment-update-results}-\ref{tab:code-review-results}. Note that the results shown in the last two rows in each of the tables are explained later in Section~\ref{sub:rerank}. We perform statistical significance testing using bootstrap
tests~\cite{Berg-KirkpatrickETAL12Empirical} with confidence level
95\%.
Results on the full comment updating\xspace test set are provided in the Appendix.
\\
\noindent\fbox{%
\parbox{\columnwidth}{%
\textbf{RQ1:} How does our edit-based model, \textsc{CoditT5}\xspace, compare to generation and task-specific baselines for edit-related tasks?
}%
}\\
We find that \textsc{CoditT5}\xspace (and most of the pretrained\xspace models) drastically outperforms
\citet{PanthaplackelETAL20Learning} (a non-pretrained model) across metrics for comment updating\xspace. This demonstrates the value of large language models pretrained\xspace on large amounts of data using unsupervised pretraining\xspace objectives.
Next, across all three tasks, \textsc{CoditT5}\xspace achieves higher performance than the two pure generation-based pretrained\xspace models, outperforming PLBART\xspace across all metrics and CodeT5\xspace for all metrics except for BLEU-4 (\UseMacro{coditT5-better-than-codeT5} out of \UseMacro{total-metrics-number} metrics), highlighting the benefit of explicitly modeling edits for these editing tasks.
In fact, CodeT5 (w/ edit-based output)\xspace, which explicitly models edits only during fine-tuning\xspace rather than pretraining\xspace, outperforms CodeT5\xspace on more than half of the metrics (\UseMacro{codeT5-edit-better-than-codeT5} out of \UseMacro{total-metrics-number} metrics). This further underlines the utility of the edit-based output sequence representation that we developed.
Finally, \textsc{CoditT5}\xspace also outperforms CodeT5 (w/ edit-based output)\xspace, which is not pretrained\xspace using our pretraining\xspace objective but uses the same edit-based output sequence representation during fine-tuning\xspace, across most metrics (\UseMacro{coditT5-better-than-codeT5-edit} out of \UseMacro{total-metrics-number} metrics). This demonstrates the importance of actually pretraining\xspace with this representation rather than relying on fine-tuning\xspace alone.
\subsection{{Evaluating our Pretraining\xspace Objective}}
\InputWithSpace{tables/table-CoditT5-plan-edit-consistent.tex}
While we observe that \textsc{CoditT5}\xspace tends to achieve slightly lower performance than CodeT5\xspace on BLEU-4 (a generation-based metric) for two of the tasks, we find that it significantly outperforms CodeT5\xspace on the metrics which capture whether the correct edits are generated, such as xMatch, and GLEU and SARI for comment updating\xspace.
This suggests that \textsc{CoditT5}\xspace is indeed better at \textit{editing}.
By inspecting the outputs of the two models, we find that CodeT5\xspace tends to make drastic and unnecessary edits while \textsc{CoditT5}\xspace appears to be better at making more fine-grained edits.
For example, in Figure~\ref{tab:qual-example}, CodeT5\xspace generates output that completely discards critical statements in the code, whereas \textsc{CoditT5}\xspace is able to correctly localize the part of the input code that needs to be changed and perform the edits properly.
We attribute this to the fact that CodeT5\xspace is not designed to reason about edits while \textsc{CoditT5}\xspace is.
We further evaluate the influence of our proposed pretraining\xspace objective on this editing capability.\\
\noindent\fbox{%
\parbox{\columnwidth}{%
\textbf{RQ2:} Does our proposed pretraining\xspace objective help a model in better reasoning about and performing edits?
}%
}\\
First, we compare how often \textsc{CoditT5}\xspace naively copies the input content without actually performing any edits, to two pretrained\xspace models which use generation-based pretraining\xspace objectives.
We report the percentages in Table~\ref{tab:copy-pct-results}.
Since \textsc{CoditT5}\xspace copies substantially less often than PLBART\xspace and CodeT5\xspace, we find that our proposed edit-based pretraining\xspace objective leads the model to more frequently perform edits, indicating that it is well suited for editing tasks.
\textsc{CoditT5}\xspace's decoder is encouraged to generate a target sequence that follows the outlined edit plan; however, we do not constrain the decoder in any way to do this.\footnote{We do not want potential errors in the edit plan to propagate to the target sequence.}
Nonetheless, we find that in the majority of cases (\UseMacro{coditT5-consistent-rate-low}-\UseMacro{coditT5-consistent-rate-high}), the target sequence is consistent with the edit plan, as shown in Table~\ref{tab:plan-edit-consistent}.
More concretely, the target sequence generally resembles what would be produced if the edit operations in the edit plan were applied to the original content.
This suggests that the pretraining\xspace objective does in fact guide the model in reasoning about edits.
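The consistency between a generated edit plan and the generated target sequence can be checked mechanically by applying the plan's operations to the input and comparing the result against the target. The sketch below is a simplification of our own (it handles only \texttt{Replace} operations, replaces only the first occurrence of each old span, and assumes whitespace-delimited tokens):

```python
import re

def apply_plan(source: str, plan: str) -> str:
    """Naively apply the Replace operations of an edit plan to the source.

    Simplified consistency check: only the first occurrence of each old span
    is replaced; Insert/Delete handling is omitted for brevity.
    """
    for old, new in re.findall(
            r"<ReplaceOld> (.*?) <ReplaceNew> (.*?) <ReplaceEnd>", plan):
        source = source.replace(old, new, 1)
    return source

src = "@param [MASK] List of user objects"
plan = "<ReplaceOld> [MASK] <ReplaceNew> users <ReplaceEnd>"
target = "@param users List of user objects"
# The target is consistent with the plan if applying the plan reproduces it.
print(apply_plan(src, plan) == target)  # True
```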
For cases in which there is ambiguity or errors in the edit plan, we find that \textsc{CoditT5}\xspace still often manages to generate the correct target sequence by disregarding unreasonable or ambiguous edits. We show two examples in automated code review\xspace in
Figure~\ref{fig:new-objective-code-review-example} with the Java method
before review, the generated edit plan, and the generated target sequence.
In Example 1, the edit plan is ambiguous since there are multiple instances of ``('' and it does not specify which one(s) should be deleted. However, the generated target sequence is correct, as the model was able to correctly reason about the most appropriate edit locations.
In Example 2, the edit plan is imprecise and blindly following this plan would result in syntactically incorrect code, but the model still managed to perform the correct edits and produced valid output by ignoring the fallacious edit.
Overall, we find that both components of the edit-based output sequence representation used in the pretraining\xspace objective (edit plan and target sequence) are critical.
\section{Experimental Design}
\InputWithSpace{tables/table-dataset.tex}
To assess \textsc{CoditT5}\xspace and our proposed pretraining\xspace objective, we fine-tune\xspace the model on three software-related downstream tasks.
Note that during fine-tuning\xspace, the model is still trained to generate the edit-based output sequence.
However, at test time, we discard the edit plan and take the generated target sequence as the final model output.
Namely, we use the generated sequence after the separation token \texttt{<s>} as the model's prediction.
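This post-processing step can be sketched as follows (the helper name is ours; we assume the decoded output is a plain string containing the \texttt{<s>} token):

```python
SEP = "<s>"

def extract_prediction(generated: str) -> str:
    """Return the target sequence after the first separation token.

    Falls back to the full sequence if no separator was generated.
    """
    sep_idx = generated.find(SEP)
    if sep_idx == -1:
        return generated.strip()
    return generated[sep_idx + len(SEP):].strip()

out = ("<ReplaceOld> [MASK] <ReplaceNew> users <ReplaceEnd> "
       "<s> @param users List of user objects")
print(extract_prediction(out))  # @param users List of user objects
```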
\subsection{Downstream Tasks}
\label{sec:downstream-task}
\paragraph{Comment Updating\xspace}
The task of comment updating\xspace entails automatically updating a natural language
comment to reflect changes in the corresponding body of
code~\cite{PanthaplackelETAL20Learning}. For instance, in Example
2 in Figure~\ref{tab:cup-example}, the old \texttt{@return} comment needs to
be revised given how the Java method changes between the two commits: instead of
directly returning the yaw Euler angle measured in radians, the new version
returns the equivalent angle measured in degrees by calling
\texttt{Math.toDegrees()}.
\paragraph{Bug fixing}
Given a \textit{buggy} code snippet, the task of bug fixing\xspace entails
generating a \textit{fixed} code snippet, which no longer contains the
bug~\cite{tufano2019learning}.
\paragraph{Automated Code Review\xspace}
Given a code snippet under review and a brief natural language
sentence prescribing code edits, automated code review\xspace requires automatically
generating the revised code snippet, which captures the recommended
changes~\cite{Tufano21Towards}. For example, in Figure~\ref{fig:intro-example:1},
\texttt{emptyList()} should be changed to \\
\texttt{Collections.emptyList()} because the reviewer suggests \textit{not} using
static import.
\subsection{Data for Downstream Tasks}
\label{sec:downstream-ds}
We use datasets that have been established and previously used for
each of the three tasks.
The statistics of the datasets are shown in Table~\ref{tab:dataset}.
\paragraph{Comment Updating\xspace}
For this task, \citet{PanthaplackelETAL20Deep} released a corpus
of Java method changes paired with changes in the corresponding
comments (spanning \texttt{@return}, \texttt{@param}, and summary
comments). This dataset also comes with a \textit{clean} subset of the
test set which was manually curated.
The input sequence used for fine-tuning\xspace is formed by concatenating the old
comment and code edits. The code edits follow
the representation described in Section~\ref{sec:edit-action}, except that an additional \texttt{Keep} operation is included to denote spans that are left unchanged.
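This code-edit input can be sketched as follows; the token spellings, the use of difflib for alignment, and the exact concatenation format are assumptions of this illustration:

```python
import difflib

def code_edit_sequence(old, new):
    """Edit representation of a code change; unchanged spans are kept explicitly."""
    out = []
    for tag, i1, i2, j1, j2 in difflib.SequenceMatcher(a=old, b=new).get_opcodes():
        if tag == "equal":
            out += ["<Keep>", *old[i1:i2], "<KeepEnd>"]
        elif tag == "replace":
            out += ["<ReplaceOld>", *old[i1:i2],
                    "<ReplaceNew>", *new[j1:j2], "<ReplaceEnd>"]
        elif tag == "delete":
            out += ["<Delete>", *old[i1:i2], "<DeleteEnd>"]
        else:  # insert
            out += ["<Insert>", *new[j1:j2], "<InsertEnd>"]
    return " ".join(out)

old_comment = "@return the yaw angle in radians"
old_code = "return yaw ;".split()
new_code = "return Math.toDegrees ( yaw ) ;".split()
# Hypothetical concatenation of the old comment and the code edits.
model_input = old_comment + " </s> " + code_edit_sequence(old_code, new_code)
```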
\paragraph{Bug Fixing}
We consider the Java Bug Fix Pairs-Small ($B2F_{s}$\xspace) and Bug Fix Pairs-Medium ($B2F_{m}$\xspace)
datasets, originally released by~\citet{tufano2019learning}.
\citet{ChakrabortyAndRay21Multi} supplemented
these datasets with additional context, namely natural language
guidance from the developer, and the method where the patch
should be applied. $B2F_{s}$\xspace contains shorter methods
with a maximum token length 50, and $B2F_{m}$\xspace contains longer methods with up
to 100 tokens in length.
The input sequence used for fine-tuning\xspace is formed with the buggy code, natural language guidance, and code
context.
\paragraph{Automated Code Review}
We use the automated code review\xspace dataset released
by~\citet{Tufano21Towards}, which consists of Java methods (before and
after review) paired with pull request comments, derived from pull
request reviews on GitHub and Gerrit. To reduce the vocabulary size,
they further abstracted Java methods by replacing identifiers and
literals with special tokens.
In this work, we use the data with concrete tokens.
The input sequence used for fine-tuning\xspace is formed using the
code snippet before review and the pull request comment from reviewers.
\subsection{Baselines}
\subsubsection{Generation Baselines}
\label{sub:generation_baselines}
We consider two large generation language models trained with
denoising autoencoding pretraining\xspace objectives which are not
edit-based: \textbf{PLBART\xspace} and \textbf{CodeT5\xspace}. Both of these are fine-tuned\xspace to generate the target output sequence. Furthermore, to better assess the value of actually pretraining\xspace using the proposed objective instead of simply fine-tuning\xspace a model to generate an edit-based output sequence, we also consider fine-tuning\xspace CodeT5\xspace to generate the specialized edit-based output sequence representation. We refer to this as \textbf{CodeT5 (w/ edit-based output)\xspace}. We fine-tune\xspace each of these models
using the same input context as \textsc{CoditT5}\xspace.
\subsubsection{Task-Specific Baselines}
We additionally compare against the state-of-the-art models for each of the downstream tasks. For comment updating\xspace, the state-of-the-art model is \citet{PanthaplackelETAL20Learning}\xspace, which uses Recurrent Neural Network (RNN)-based encoders for representing the old comment and code edits, and an
RNN-based decoder for decoding edits.
These edits are parsed at test time and reranked based on similarity to the old
comment and likelihood based on a comment generation model.
For bug fixing\xspace, the state-of-the-art model is essentially PLBART\xspace
fine-tuned\xspace on the $B2F_{s}$\xspace and $B2F_{m}$\xspace datasets to generate the fixed code~\cite{ChakrabortyAndRay21Multi}.
For automated code review\xspace, no baselines are available for the specific version of the dataset we used with concrete identifiers and literals (rather than the one with abstracted identifiers and literals).
Therefore, we rely on those described in
Section~\ref{sub:generation_baselines} and establish new baselines for this version of the dataset.
\subsection{Evaluation Metrics}
For comment updating\xspace, we report performance on the same metrics that have been
used previously to benchmark models for this
task~\cite{PanthaplackelETAL20Learning}. This includes: xMatch (does the model prediction \textit{exactly match} the ground truth), common metrics
that measure lexical overlap for evaluating text generation (BLEU-4~\cite{papineni2002bleu} and METEOR~\cite{banerjee2005meteor}), and common metrics for measuring text editing (GLEU~\cite{napoles2015ground} and SARI~\cite{xu2016optimizing}).
For bug fixing\xspace, we use
xMatch, as done in prior work~\cite{ChakrabortyAndRay21Multi}. For automated code review\xspace, we
report performance on xMatch and BLEU-4, which have been used
previously to benchmark models for this task~\cite{Tufano21Towards}.
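As a reference point, xMatch reduces to an exact-string comparison over normalized tokens. A minimal sketch (our own helper with whitespace normalization as an assumption, not the benchmark scripts) is:

```python
def xmatch(predictions, references):
    """Percentage of predictions that exactly match the reference."""
    def norm(s):
        # Collapse whitespace so tokenization differences do not matter.
        return " ".join(s.split())
    hits = sum(norm(p) == norm(r) for p, r in zip(predictions, references))
    return 100.0 * hits / len(predictions)

preds = ["return x ;", "return x;"]
refs = ["return  x ;", "return y ;"]
print(xmatch(preds, refs))  # 50.0
```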
\section{Introduction}
Large language models pretrained\xspace on massive amounts of data have led
to remarkable progress in recent years, with models like
BART~\cite{lewis-etal-2020-bart}, GPT~\cite{radford2019language,brown2020language}, and T5~\cite{raffel2020exploring} yielding
huge improvements for a vast number of text generation tasks. Inspired
by this, a new research initiative has emerged around building large
models that are pretrained\xspace on source code and technical text to
address software-related tasks. This includes models like
PLBART~\cite{ahmad-etal-2021-unified},
CodeGPT-2~\cite{lu2021codexglue}, and
CodeT5~\cite{wang2021codet5}. While these models demonstrate
impressive performance on generation tasks like code summarization,
code generation, and code translation,
it is unclear if they are well-suited for the \textit{editing} nature of many software-related tasks.
For instance, bug fixing\xspace~\cite{TufanoETAL19Empirical} entails editing source
code to resolve bugs, automated code review\xspace~\cite{Tufano21Towards} requires editing source
code to incorporate feedback from review comments, and
comment updating\xspace~\cite{PanthaplackelETAL20Learning} pertains to updating
outdated natural language comments to reflect code changes.
In principle, such editing tasks can be framed as standard generation
tasks in which an input sequence (e.g., \emph{buggy} code snippet) is
completely re-written to form the output sequence (e.g., \emph{fixed}
code snippet).
In this way, existing pretrained\xspace conditional generation models can be fine-tuned to autoregressively generate a sequence from scratch. However, this can be problematic in practice~\cite{PanthaplackelETAL20Learning}.
When applying large generation models like
PLBART and CodeT5 to these tasks, we find that they can
generate
output which merely copies the input without performing any edits (up to 29.28\%\xspace) or
even deviates substantially from the input, introducing irrelevant
changes.
We provide an example of automated code review\xspace in
Figure~\ref{fig:intro-example:1}, where a reviewer prescribes edits that need to be made
to a given code snippet: ``Generally better to qualify than making static import\xspace''. Using the code snippet and this comment, PLBART
generates an output sequence which copies the original code, without
applying any edits. While the output is valid and a likely sequence
according to PLBART's language model, it makes no edits based on
the reviewer's comments.
We attribute these weaknesses to the fact that such models rely on pretraining\xspace objectives designed for generating code (or software-related natural language) in sequence by exploiting patterns with respect to preceding tokens. Therefore, a model can only
learn to
\textit{implicitly} perform edits, generating tokens one by one according to the probability it has learned for which tokens belong alongside one another, rather than being aware of where information should be retained or modified.
Intuitively, edit-based generation involves a different mentality that
more frequently refers back to the input sequence, and can often be
characterized by localized operations (e.g., insertion, deletion,
substitution). This paper's \emph{first} contribution is to formulate a novel pretraining\xspace objective that
\textit{explicitly} models edits. It guides a model in learning to discern edit locations in the input
sequence and reason about the necessary edit
operations. Our approach is inspired by content planning in natural language generation, where a skeleton of key elements is first generated and used to guide more accurate and precise generation of the full text~\cite{reiter1997building,pichotta2016learning,martin2018event,fan2019strategies}.
Specifically, we first generate an \textit{edit plan} that explicitly details the edit
operations before generating the target sequence as a result of the edits.
This effectively allows the decoder to condition on the edit plan during decoding.
Using this objective, we develop \textsc{CoditT5}\xspace, a large language model for software-related edit tasks that is pretrained\xspace
on more than \UseMacro{pl-pretrain-number} million open-source programming language functions and \UseMacro{nl-pretrain-number} million natural language comments from the CodeSearchNet~\cite{HusainETAL19Codesearchnet} training data.
For evaluation, we fine-tune\xspace \textsc{CoditT5}\xspace on three downstream tasks (comment updating\xspace, bug fixing\xspace, and automated code review\xspace) and show that \textsc{CoditT5}\xspace outperforms state-of-the-art models as well as large pretrained\xspace generation models for each of these tasks. Through this, we demonstrate that our model and the proposed edit-based pretraining\xspace objective generalize across tasks and are better suited for editing tasks in the software domain.
\InputWithSpace{figures/figure-intro-example.tex}
Furthermore, in our evaluation, we find that our edit-based model, \textsc{CoditT5}\xspace,
can be further improved if informed by a pure generation-based model. A generation model can
provide insights that are complementary to an edit model, especially in terms of contextual coherence of the generated target sequence. Similarly, the edit-based model provides better explicit modeling of concrete edits, complementing the generation model.
To exploit this complementary nature of the models,
we combine the two models through reranking strategies which require no additional training. Our results show that the combined approaches outperform the two models individually by up to \UseMacro{rerank-best-improve}\%.
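As an illustration of one such training-free strategy (a sketch of our own, not necessarily the exact strategy used in our experiments): one model's candidate outputs can be reranked by the other model's likelihood, so that the final output is plausible under both models.

```python
def rerank(candidates, likelihood_other):
    """Pick the candidate that the complementary model scores highest.

    `likelihood_other` is a placeholder for scoring a sequence with the
    other model (e.g., its length-normalized log-likelihood).
    """
    return max(candidates, key=likelihood_other)

# Toy scorer standing in for a language model's log-likelihood.
score = {"copy of input": -5.0, "properly edited output": -1.2}
best = rerank(list(score), score.get)
print(best)  # properly edited output
```

No parameters are updated here, which is why the combination requires no additional training.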
\vspace{5pt}
\noindent
We summarize our main contributions as follows:
\begin{itemize}[topsep=3pt,itemsep=1ex,partopsep=0ex,parsep=0ex,leftmargin=*]
\item We formulate a novel pretraining\xspace objective that entails first generating a plan consisting of edit operations to be applied to the input sequence followed by the resulting output sequence.
\item We build and release\footnote{The model will be available upon publication.} \textsc{CoditT5}\xspace, a large language model for software-related editing tasks that is pretrained\xspace on large amounts of source code and natural language with the new pretraining\xspace objective.
\item Upon task-specific fine-tuning, we show that \textsc{CoditT5}\xspace achieves improved performance over existing models for three distinct downstream editing tasks (comment updating\xspace, bug fixing\xspace and automated code review\xspace), demonstrating its effectiveness and generalizability.
\item We show that by combining our edit-based \textsc{CoditT5}\xspace model with a standard generation model through simple reranking strategies, we can beat each of the individual models, demonstrating the complementary nature of edit-based and pure generation models.
\end{itemize}
\begin{figure*}[t]
\centering
\input{figures/figure-pretrain}
\caption{\UseMacro{FCap-pretrain-model} \label{fig:pretrain}}
\label{fig:pretrain-model}
\end{figure*}
\section*{Acknowledgments}
We thank Nader Al Awar, Yu Liu, Aditya Thimmaiah, Zhiqiang Zang, and
the anonymous reviewers for their comments and feedback. This work is
partially supported by the US National Science Foundation under Grant
Nos. CCF-1652517, CCF-2107291, IIS-2107524 and IIS-2145479.
\balance
\section{CoditT5}
In this section, we first explain our proposed pretraining\xspace objective (Section~\ref{sub:pretraining}). We then
discuss how we build \textsc{CoditT5}\xspace by pretraining\xspace on this objective, including the data used for pretraining\xspace (Section~\ref{sub:pretrain_data}), and additional details of the pretraining\xspace setup (Section~\ref{sub:pretrain_setup}).
\subsection{Pretraining Objective}
\label{sub:pretraining}
We formulate a new pretraining\xspace objective that is designed to encourage
a model to explicitly reason about edits. At a high-level, this
objective falls under the realm of denoising autoencoding in which an
input sequence is first corrupted with noising functions and the model
is trained to \textit{denoise} the corrupted sequence by generating an
output sequence that matches the original input sequence. While existing models like PLBART and CodeT5\xspace pretrained\xspace using this setup perform very well on various generation tasks (e.g., code summarization/generation), we find that they do not generalize well when fine-tuned\xspace on editing tasks. Namely, they are susceptible to learning to copy the original input sequence instead of actually performing edits, up to 29.28\%\xspace of the time (Table~\ref{tab:copy-pct-results}).
In this work, we propose an \textit{edit-based output sequence}
representation (shown in Figure~\ref{fig:pretrain-model}): [Edit Plan] \texttt{<s>} [Target Sequence], where the
model is trained to generate an \textit{edit plan} (\circled{1}) consisting of
explicit edit operations that reconstruct the input sequence, followed by a separation token
(\texttt{<s>}), and finally the \textit{target sequence} (\circled{2}) that matches
the original input sequence. This is inspired by the concept of
\textit{content planning}, originating from natural language
generation~\cite{reiter1997building}. In content planning, a
high-level plan is first outlined, specifying the discourse structure
of the content to be generated, and then lexical realization is
performed to generate the text.
\subsubsection{Edit Plan}
\label{sec:edit-action}
The edit plan entails the specific edit operations that are needed to recover the original input sequence. For example, in
Figure~\ref{fig:pretrain-model}, the input sequence: ``\texttt{@param}
users List of user objects'' is corrupted by masking ``users'':
``\texttt{@param} \texttt{[MASK]} List of user objects''.
With this, a model must first reason about the fact that \texttt{[MASK]} in the
corrupted input sequence needs to be replaced with ``users'' when producing the target sequence.
To construct the sequence of edit operations, we closely follow the format proposed by \citet{PanthaplackelETAL20Learning}:
\\{\texttt{<Operation> [span of tokens] <OperationEnd>.}}\\
Here, \texttt{<Operation>} is either \texttt{Insert}\xspace or \texttt{Delete}\xspace.
We also include the \texttt{Replace} operation, with a slightly different structure (since both the old content to be replaced as well as the new content to replace it with must be specified):
{\texttt{<ReplaceOld> [span of old tokens] <ReplaceNew> [span of new
tokens] <ReplaceEnd>}}.
To determine the specific edit operations for a given example, we use difflib\footnote{\url{https://docs.python.org/3/library/difflib.html}} to compute the optimal set of edits needed to transform the corrupted input sequence into the original input sequence.
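This construction can be sketched with difflib's \texttt{SequenceMatcher}. The helper name \texttt{build\_edit\_plan} and the whitespace tokenization below are ours for illustration; the operation tags follow the format quoted above.

```python
import difflib

def build_edit_plan(corrupted, original):
    """Build an edit plan that transforms the corrupted token
    sequence back into the original one, using the tag format
    <Operation> [span] <OperationEnd> / <ReplaceOld> ... <ReplaceEnd>."""
    plan = []
    matcher = difflib.SequenceMatcher(a=corrupted, b=original, autojunk=False)
    for op, i1, i2, j1, j2 in matcher.get_opcodes():
        if op == "equal":
            continue
        if op == "insert":      # tokens missing from the corrupted input
            plan += ["<Insert>"] + original[j1:j2] + ["<InsertEnd>"]
        elif op == "delete":    # spurious tokens in the corrupted input
            plan += ["<Delete>"] + corrupted[i1:i2] + ["<DeleteEnd>"]
        elif op == "replace":
            plan += (["<ReplaceOld>"] + corrupted[i1:i2]
                     + ["<ReplaceNew>"] + original[j1:j2] + ["<ReplaceEnd>"])
    return plan

corrupted = "@param [MASK] List of user objects".split()
original = "@param users List of user objects".split()
print(" ".join(build_edit_plan(corrupted, original)))
# -> <ReplaceOld> [MASK] <ReplaceNew> users <ReplaceEnd>
```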
\subsubsection{Target Sequence}
One might ask whether we could simply apply the sequence of edit
operations in the generated edit plan to the corrupted input sequence
directly to recover the original input sequence
\textit{heuristically}. For example, if we align ``\texttt{<ReplaceOld>
\texttt{[MASK]} <ReplaceNew> users <ReplaceEnd>}'' with the corrupted input
sequence ``\texttt{@param} \texttt{[MASK]} List of user objects'', it is very
clear that all we need to do is replace \texttt{[MASK]} with ``users'' and no
additional generation is needed. However,
there are two main issues with this. First, not all operations will be
specified in a deterministic manner (e.g., if there were two \texttt{[MASK]}
tokens in the corrupted input sequence, which one(s) should be
replaced?). Next, the generated sequence of edit operations does not
correspond to contiguous output tokens, and so it may not capture
important properties of language such as fluency and
coherency~\cite{PanthaplackelETAL20Learning}.
Therefore, we need an additional step for \textit{learning} to apply
edits while simultaneously maintaining fluency and coherency.
For this reason, once
the edit plan is outlined as a sequence of edit operations, the target sequence (which is expected to recover the original input sequence) must also be generated: ``\texttt{@param} users List of user objects''. The decoder generates
tokens in a left-to-right manner, meaning that when generating a token
at a given timestep, it is aware of all tokens generated in previous
timesteps. So, when generating the target sequence, the decoder
can exploit the sequence of edits that was generated in the edit plan
earlier.
In this way, we hope the model can reason about the edits and the generation simultaneously.
\InputWithSpace{tables/table-span-stats.tex}
\subsubsection{Noising Functions}
To support learning across a diverse set of edit actions during pretraining\xspace, we consider multiple noising functions for corrupting the input sequence:
1) randomly masking spans with the special \texttt{[MASK]}
token, which requires the model to replace them with the correct spans (e.g., Figure~\ref{fig:pretrain-model});
2) inserting \texttt{[MASK]} tokens at random positions, which requires the
model to identify the spurious spans and delete them; and 3) deleting
spans of tokens in the input sequence, which requires the model to
pinpoint the positions and add back the missing pieces.
\subsection{Pretraining Data}
\label{sub:pretrain_data}
\subsubsection{Data Collection}
Following prior work, we pretrain\xspace \textsc{CoditT5}\xspace on large amounts of source code and natural language comments from the CodeSearchNet~\cite{HusainETAL19Codesearchnet} dataset, which consists of functions in six programming languages (Java, Python, Ruby, PHP, Go and JavaScript) together with their human-written comments.
The CodeSearchNet dataset is widely used to pretrain\xspace large language models, such as CodeT5\xspace~\cite{wang2021codet5} and UniXcoder~\cite{GuoETAL22Unixcoder}.
We use the processed CodeSearchNet dataset
provided by~\citet{GuoETAL22Unixcoder}, which contains 6.1 million programming language examples and 1.9 million natural language examples.
\InputWithSpace{tables/table-pretrain-dataset.tex}
\subsubsection{Data Preparation}
To enable \textsc{CoditT5}\xspace to capture common edit patterns, we want the
pretraining\xspace dataset to reflect the common activities conducted by
software developers.
Specifically, in the pretraining\xspace dataset,
the probability of each edit operation applied to the spans in the input sequence and
the length (number of tokens) of the corrupted spans should be consistent with the downstream editing tasks.
To this end, we collect the following statistics from the downstream
tasks training dataset: 1) the probability of each edit operation
(insert, delete and replace) to be performed on a span;
2) the average number of tokens in each span that is edited (insert, delete or replace);
and 3) the average number of spans that are edited (replaced, deleted and inserted) in
each input sequence.
Note that the three statistics are collected for programming
language edits and natural language edits respectively from
the training data for the downstream tasks as shown in Table~\ref{tab:span-stats}.
For each example in the pretraining\xspace dataset, we then uniformly sample the spans and the edit operations that should be applied in accordance with the statistics collected from the downstream datasets.
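The sampling step can be sketched as follows, assuming the downstream statistics are provided as an operation-probability table plus average span counts and lengths. All function names and numbers below are illustrative, not the values in Table~\ref{tab:span-stats}.

```python
import random

def sample_edit_spec(n_tokens, op_probs, avg_span_len, avg_n_spans, rng):
    """Sample which spans of an input sequence to corrupt and with which
    edit operation, roughly matching statistics collected from the
    downstream tasks. Returns (operation, start, span_length) triples."""
    ops, weights = zip(*op_probs.items())
    spec = []
    for _ in range(max(1, round(avg_n_spans))):
        op = rng.choices(ops, weights=weights)[0]   # weighted by task stats
        start = rng.randrange(n_tokens)             # uniform span position
        spec.append((op, start, max(1, round(avg_span_len))))
    return spec

rng = random.Random(42)
spec = sample_edit_spec(
    n_tokens=32,
    op_probs={"replace": 0.5, "delete": 0.3, "insert": 0.2},
    avg_span_len=2.0, avg_n_spans=1.5, rng=rng)
print(spec)
```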
Similar to CodeT5\xspace~\cite{wang2021codet5}, we use the RoBERTa~\cite{liu2019roberta} tokenizer to tokenize all sequences (input, edit plan, target). More concretely, the tokenizer splits words in the sequence into \textit{subwords}, constituting the \textit{tokens} that are used by the model.
Moreover, we remove input sequences that are shorter than 3 tokens and longer than 512 tokens after tokenization.
Statistics of the pretraining\xspace dataset are presented in Table~\ref{tab:pretrain-dataset}.
\subsection{Pretraining Setup}
\label{sub:pretrain_setup}
\paragraph{Model Architecture}
\textsc{CoditT5}\xspace consists of 12 encoder and decoder layers, 12 attention heads, and a hidden dimension size of 768. The total
number of parameters is 223M. Model parameters are initialized from the CodeT5\xspace-base model, and we further pretrain\xspace it on the CodeSearchNet pretraining\xspace dataset (Section~\ref{sub:pretrain_data}) using our proposed objective (Section~\ref{sub:pretraining}).
\paragraph{Training}
We implement \textsc{CoditT5}\xspace using PyTorch 1.9.0 and run the experiments on 4
NVidia 1080-TI GPUs, Intel(R) Xeon(R) CPU E5-2620 v4 @ 2.10GHz with
the same hyperparameters as \CodeTf for fine-tuning. For
pretraining\xspace, we use 16 NVidia 1080-TI GPUs, Intel(R) Xeon(R) CPU
E5-2620 v4 @ 2.10GHz for 48K steps. The time to pretrain\xspace \textsc{CoditT5}\xspace is 4 days.
We use
difflib\footnote{\url{https://docs.python.org/3/library/difflib.html}} to
construct sequences of edit actions\xspace.
\InputWithSpace{tables/table-models-copy-results.tex}
\section{Related Work}
\label{sec:related}
In this section, we consider the most closely related work on learning
edits, large pretrained\xspace models for code, pretrained\xspace models for code edits and combining complementary
models.
\MyPara{Learning Edits} Prior work has studied learning edits in both
natural language and programming language. We followed the approach
of explicitly representing edits as sequences with edit actions. Our
edit representation is inspired by
\citet{PanthaplackelETAL20Learning,PanthaplackelETAL20Deep}, who
studied learning comment edits based on code edits.
\citet{BrodyETAL20CodeChanges,TarlowETAL20Learning,ChenETAL21PLUR,
YaoETAL21Learning} represented code as ASTs (abstract syntax trees) and code edits as edit actions over the ASTs rather than tokens. We do not focus on editing
structured data (ASTs) as it cannot be generalized to natural
language, and it cannot be easily combined with large pretrained\xspace
models, which are primarily based on sequences of tokens.
Alternatively, edits can be encoded into vector representations (or
embeddings). \citet{GuuETAL18EditingPrototypes} studied learning edit
embeddings for natural language generation in a prototype-then-edit
style. \citet{YinETAL18RepresentEdits} studied learning code edits as
embeddings and then applying them to natural language insertion and
code bug fixing. \citet{HashimotoETAL18RetrieveAndEdit} developed a
retrieve-and-edit framework for text-to-code generation, where the
edits are learned as parameters of a seq2seq model.
Similarly, \citet{LiETAL21Editsum} proposed a retrieve-and-edit framework for code summarization task where the model first learns an edit vector and then generate the revised summary conditioned on it.
Although learning edits as embeddings can be
effective for individual tasks, it is not well suited to the
pretraining\xspace fine-tuning\xspace paradigm, because there is a large domain gap
between the edit embeddings learned on different tasks. Moreover,
edit embeddings are less explainable compared to the explicit edit
representations we use.
Another line of work that carries out the idea of learning edits is
the copying mechanism, including copying individual
tokens~\cite{VinyalsETAL15PointerNetworks,GuETAL16Copying} and
spans~\cite{ZhouETAL18Copying,PanthaplackelETAL21CopyThat}, which
helps the model to ``keep'' unchanged tokens and focus on generating
the edited part.
\citet{Logan21Fruit} built a T5-based model to update the existing articles based on the given new evidence. The model is trained to output a \textit{copy} token instead of the copied sentence and a special \textit{reference} token before the updated text which identifies the evidence to support the update.
\citet{DingETAL20Patching} trained the model to emit
pointers that indicate the positions for edits and the new tokens
to be inserted at the same time.
Similarly, \citet{TarlowETAL20Learning, ChenETAL21PLUR} augmented the transformer-based decoder with pointers to the input graph representation of the code which specify the input locations to edit.
Although related, it is orthogonal to our work of
learning edits with pretraining\xspace.
\MyPara{Large Pretrained\xspace Models for Code}
Motivated by the success of large pretrained\xspace models for many NLP
tasks, domain-specific models that are pretrained\xspace on source code and
technical text have emerged, including
CodeBERT~\cite{feng2020codebert},
GraphCodeBERT~\cite{guo2020graphcodebert},
CodeGPT-2~\cite{lu2021codexglue}, CodeT5~\cite{wang2021codet5},
PLBART~\cite{ahmad-etal-2021-unified}, PyMT5~\cite{clement2020pymt5},
SYNCOBERT~\cite{wang2021syncobert}, SPT-Code~\cite{niu2022spt},
Codex~\cite{ChenETAL21Codex} and UniXcoder~\cite{GuoETAL22Unixcoder}.
Similar to our approach, GraphCodeBERT, CodeT5\xspace and UniXcoder also designed specialized pretraining\xspace objectives driven by their targeted tasks.
While they demonstrate impressive performance on various tasks, none of them are fundamentally well-suited for edit tasks.
In this work, we develop \textsc{CoditT5}\xspace with a novel pretraining\xspace objective for generating edit sequences, which can complement the generation model such as CodeT5\xspace for edit tasks.
\MyPara{Pretrained\xspace Models for Code Edits}
Prior work has already explored applying pretrained\xspace models, despite them not being
well-suited, to editing tasks. \citet{ChakrabortyAndRay21Multi} used
PLBART for code bug fixing, which we compared to in
our work. \citet{drain2021generating} pretrained\xspace a model on 67K
Java repositories mined from GitHub and fine-tuned\xspace it for the bug fixing task.
Codex~\cite{ChenETAL21Codex} showed promising performance on editing tasks by specifying the existing code as a prompt and providing an edit instruction to the model.
\citet{tufano2022using} and \citet{Li22codereviewer} both proposed
a transformer-based encoder-decoder model pretrained\xspace on large code
review specific data for code review related tasks, including
code change quality estimation, review comment generation and
code refinement.
As we showed in this work, the combination of an edit-based language
model and a standard language model can achieve better performance
than using the standard language model alone.
\MyPara{Combining Complementary Models} We used
reranking~\cite{NeubigETAL15Reranking,KrizETAL19Reranking} to combine
complementary models in this work.
Ensembling~\cite{LeClairETAL21Ensemble} is another approach for
combining complementary models for generation tasks, but requires
additional training. Co-training~\cite{BlumAndMitchell98CoTraining}
and tri-training~\cite{ZhouAndLi05TriTraining} approaches, although
shown to be very effective in combining complementary models, are
designed for classification models rather than generation models.
\subsection{{Integrating \textsc{CoditT5}\xspace and \CodeTf}}
\label{sub:rerank}
\textsc{CoditT5}\xspace is designed to complement a generation model by providing more
explicit guidance for edits. However, a model that is trained to
generate edits can struggle with coherence and fluency since it is not
actually trained to generate consecutive
text~\cite{PanthaplackelETAL20Learning}. By also including the generation
of the target sequence in the pretraining\xspace objective, we do mitigate this to some extent, even when there are ambiguities or errors in the edit plan.
However, there appears to be a trade-off between being able to perform the correct edits while maintaining performance with respect to generation metrics.
More specifically, in Tables~\ref{tab:comment-update-results}-\ref{tab:code-review-results}, \textsc{CoditT5}\xspace outperforms CodeT5\xspace with respect to xMatch (and GLEU and SARI for comment updating\xspace), but underperforms with respect to BLEU-4. To exploit the slight superiority of CodeT5\xspace in this respect, we consider incorporating CodeT5\xspace into our approach.\\
\noindent\fbox{%
\parbox{\columnwidth}{%
\textbf{RQ3}: Can a pure generation model complement CoditT5 by integrating the two models?
}%
}\\
\input{figures/figure-rerank-examples.tex}
\subsubsection{Experimental Setup}
We combine the two models using simple likelihood-based reranking
strategies at test time (with no additional training). Namely, at test time, \textsc{CoditT5}\xspace and CodeT5\xspace each generate 20 candidates using beam search.
While we have been only looking at the top one prediction for all previous experiments, we will consider all 20 candidates for reranking.
We compute a reranking score for each of these candidates. The candidate with the highest reranking score becomes the final model prediction. We investigate two different reranking strategies:
\paragraph{\MOEDIT (reranked with CodeT5)\xspace:}
To exploit the language-specific norms learned by CodeT5\xspace, we rerank the candidates generated by \textsc{CoditT5}\xspace based on the probability score CodeT5\xspace's language model assigns to the corresponding target sequences (namely after \texttt{<s>}).
We compute the length-normalized conditional log
probability score of \CodeTf generating the target sequence, conditioned on the same
input:
\[ \mathit{score} = \frac{1}{N}\log P(T \mid I) \]
where $T$ is the target sequence, $I$ is the model's input, $N$ is
the length of $T$. We also length-normalize the log probability of the candidate, as scored by \textsc{CoditT5}\xspace, and then add the two probability scores together to obtain the reranking score.
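A sketch of this score combination, assuming each model exposes token-level log-probabilities for a candidate's target sequence; the token values below are made up for illustration and the helper names are ours.

```python
def length_normalized_logp(token_logps):
    """(1/N) * log P(T|I), i.e. the mean token log-probability."""
    return sum(token_logps) / len(token_logps)

def rerank_score(edit_model_logps, gen_model_logps):
    """Combined reranking score: the length-normalized log-probabilities
    assigned by the edit-based model and the generation model are added."""
    return (length_normalized_logp(edit_model_logps)
            + length_normalized_logp(gen_model_logps))

# two hypothetical beam candidates, each scored by both models
cand_a = rerank_score([-0.2, -0.1, -0.3], [-0.4, -0.2])
cand_b = rerank_score([-0.1, -0.1, -0.1], [-1.5, -1.2])
best = "a" if cand_a > cand_b else "b"
print(best)  # -> a
```

Candidate b is preferred by the edit-based model alone, but its low generation-model likelihood pulls its combined score down, illustrating how the two signals complement each other.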
\paragraph{CodeT5 (reranked with \MOEDIT)\xspace:} Conversely, we also rerank the output of \CodeTf based on the likelihood of \textsc{CoditT5}\xspace, such that the generated sequence can be assessed in terms of explicit edits.
We first parse the output of \CodeTf into the edit-based output sequence representation (as described in Section~\ref{sec:edit-action}) and then concatenate it with the model's output using \texttt{<s>}.
Then we compute the likelihood of \textsc{CoditT5}\xspace generating this sequence, conditioned on the same input. We then add the length-normalized log probability score of \textsc{CoditT5}\xspace with the score originally assigned by \CodeTf (after length-normalizing and applying log).
\subsubsection{Results}
We provide results in the bottom two rows of Tables~\ref{tab:comment-update-results}-\ref{tab:code-review-results}. By reranking the output of \textsc{CoditT5}\xspace using CodeT5\xspace, we are able to achieve improved performance on all the metrics including BLEU-4 across tasks (and the other generation-based metric, METEOR, for comment updating\xspace). To illustrate this, consider Example 1 in Figure~\ref{tab:cup-example},
with a buggy code snippet and outputs corresponding to \textsc{CoditT5}\xspace before
and after reranking.
We observe that \textsc{CoditT5}\xspace correctly localizes the
bug and correctly identifies that the edit entails initializing an \CodeIn{ArrayList} in the return statement.
However, the generated target sequence is a defective code snippet which does not properly initialize an \CodeIn{ArrayList} with the correct type \CodeIn{TagVFilter}.
By leveraging CodeT5\xspace's likelihood score, we are able to effectively filter out the defective prediction and obtain the correct output.
By reranking the output of CodeT5\xspace using \textsc{CoditT5}\xspace, we see significant improvements with respect to CodeT5\xspace on metrics that more directly evaluate whether the correct edits were performed, including xMatch as well as GLEU and SARI for comment updating\xspace. This suggests that the edit-based and generation-based models are indeed complementary to one another. As a case
study, consider Example 2 in Figure~\ref{tab:cup-example}. CodeT5\xspace
produces a sequence which simply copies the old comment,
without capturing the code changes. While this may be a likely comment
sequence, according to \CodeTf's language model, copying without
applying any edits is not a likely edit plan to be generated for \textsc{CoditT5}\xspace.
By combining \textsc{CoditT5}\xspace and CodeT5\xspace through reranking, we can further boost performance substantially across most metrics for all three tasks, outperforming the two models individually, and achieving new state-of-the-art.
\section{Introduction}\label{sec:introd}
Computational Fluid Dynamics (CFD) tools for simulating reacting flows are crucial in developing less-polluting and highly-efficient energy and propulsion technologies \cite{poinsot2005theoretical}. While the turbulent flow and chemical reactions are strongly coupled in practical devices, modelling the multi-scale, multi-phase and multi-species physico-chemical processes under engine-relevant conditions remains a scientific challenge. In addition, simulating reacting flows in strong turbulence with the scales and species fully resolved (known as direct numerical simulation, DNS) requires quite demanding computational resources. Simplified modelling methods such as the Reynolds-averaged Navier-Stokes (RANS) approach and large eddy simulation (LES) mostly rely on statistical or topological models to impose physical assumptions to accelerate simulations. However, the generalisation abilities of these models have long been the major issue limiting the practical application of reacting flow simulations \cite{peters2000turbulent}.
To resolve the above dilemma of accuracy versus efficiency, the recent rapid growth in Artificial Intelligence (AI) for Science, particularly in machine learning, has brought new perspectives for accelerating simulations of reactive flows with accurate models and detailed chemistry. As a pioneering work, Christo et al. \cite{christo1996artificial} adopted an artificial neural network (ANN) in the joint PDF/Monte Carlo simulation of H$_2$/CO$_2$ turbulent jet diffusion flames to predict chemical kinetics. Blasco et al. \cite{blasco1998modelling} trained multiple ANNs based on a typical combustion simulation to capture the changes of species composition at various time steps, so that the reaction rates can be directly obtained from ANNs instead of solving ordinary differential equations (ODEs) or accessing look-up tables. Sen et al. \cite{sen2009turbulent} successfully adopted ANNs as a chemical kinetic integrator for LES of turbulent flames. They found that ANNs exhibit satisfactory behaviour in both memory and time efficiency. Wan et al. \cite{wan2020chemistry} trained a deep neural network (DNN) based on turbulent micro-mixing data to predict the reaction rates. The DNN was used in simulating a turbulent non-premixed syngas oxy-flame and obtained a considerable speed-up. Yao et al. \cite{yao2022gradient} adopted gradient boosted decision trees (GBDT) as a machine learning approach to directly solve the chemistry ODEs and gained a speed-up of one order of magnitude.
More recently, to improve the generalisation ability of DNNs, Zhang et al. \cite{zhang2022multi} proposed a new sampling method for collecting multi-scale combustion data. The neural network trained from such a data set has been confirmed to be accurate and efficient in predicting reaction rates under various conditions. Besides, DNNs were also extended to the high-dimensional tabulation of flamelets and effectively reduced the memory requirement \cite{chen2021application,chi2022efficient,perry2022co,zhang2020large}.
It is widely acknowledged that the growing success of deep learning is built upon open-source and data sharing culture in the scientific and industrial communities.
However, to the best of our knowledge, most of the studies mentioned above were conducted using in-house codes~\footnote{In some papers the training and testing codes were released in supplementary material or uploaded onto GitHub for reproducibility. However, the CFD codes in which the pre-trained model deployment and the following \textit{a posteriori} assessment were carried out remain difficult to access.}. Despite the promising potential of machine learning in this field, the related works are still limited because of the lack of powerful tools and platforms leveraging the growing assets in the fluid dynamics, machine learning and chemical kinetics communities. Nowadays, the advantages of the open-source community in developing new technologies have been widely proven. A number of open-source frameworks for machine learning (e.g. TensorFlow\cite{abadi2016tensorflow} and Torch\cite{collobert2011torch7}), CFD (e.g. OpenFOAM\cite{opencfd2009open}), and chemical kinetics computation (e.g. Cantera\cite{goodwin2002cantera}) are available for users and developers from both the scientific and industrial communities. However, it is necessary to build a reacting flow simulation platform that brings together the individual strengths of the CFD, machine learning and chemical kinetics open-source communities, so that cross-disciplinary research interaction and code development can be facilitated.
With this motivation, the main objective of the present work is to develop an open-source CFD platform named {\em DeepFlame} for simulating reacting flows with capabilities of utilising state-of-the-art machine learning algorithms and libraries. Briefly, {\em DeepFlame} integrates existing libraries and organises the computational tasks in the following manner. Tools and functions for solving general continuum fluid flow problems are called or derived from OpenFOAM, including flow field data-structure, numerical discretisation, iteration for linear solvers, MPI-based parallel computing, and pre-/post-processing; chemical mechanism related I/O and multi-species thermochemistry data-structure and property calculation are handled by Cantera; the deep learning framework libTorch (Torch C++ API) is coupled in {\em DeepFlame} for the manipulation of input/output tensor-format data and inference of deep neural network models.
In addition, methods aiming to improve simulation efficiency such as dynamic load balance (DLB) and adaptive mesh refinement (AMR) have also been implemented. A preliminary heterogeneous computing approach is available in this version, where AI acceleration infrastructure (i.e. GPU) can be used to further magnify the simulation speed-up when machine learning models are activated.
The structure of this paper is as follows. In Section \ref{sec:Govern}, we discuss the governing equations solved by the different flow solvers implemented in {\em DeepFlame}, where the approaches for obtaining chemistry source terms are emphasised. In Section \ref{sec:Inplem}, we introduce the implementation details including the code structure and algorithms. In Section \ref{sec:Res}, we conduct a broad range of canonical cases to validate the solvers for different flow conditions. In Section \ref{sec:Perf}, the computational performance of {\em DeepFlame} is evaluated. Finally, the conclusions and further works are summarised in Section \ref{sec:Conclusion}.
\section{Theoretical Background}\label{sec:Govern}
\subsection{Governing Equations}\label{subs:GorvEq}
To directly solve the compressible reacting flows with $N$ number of species, the conservation equations of mass, momentum, species and energy used in this work, in common tensorial notations, are given by
\begin{equation}
\frac{\partial \rho}{\partial t} + \frac{\partial \rho u_i}{\partial x_i} =0 \:,
\end{equation}
\begin{equation}
\frac{\partial \rho u_j}{\partial t} + \frac{\partial \rho u_iu_j}{\partial x_i}=-\frac{\partial p}{\partial x_j}+ \frac{\partial \tau _{ij}}{\partial x_i} \:,
\label{eq:momenteq}
\end{equation}
\begin{equation}
\frac{\partial \rho Y_\alpha}{\partial t} +\frac{\partial \rho u_i Y_\alpha}{\partial x_i} =- \frac{\partial \rho Y_\alpha V_{\alpha,i}}{\partial x_i} + \dot{\omega}_\alpha \:,
\label{eq:specieeq}
\end{equation}
\begin{subequations}
\begin{align}
\frac{\partial (\rho H)}{\partial t} +\frac{\partial (\rho u_i H)}{\partial x_i} & =\frac{\partial p}{\partial t}-\frac{\partial q_i}{\partial x_i} + \frac{\partial}{\partial x_j}(\tau _{ij} u_i) \:,\\
{\rm and}~~~~\frac{\partial (\rho E)}{\partial t} +\frac{\partial (\rho u_i E)}{\partial x_i} & =-\frac{\partial q_i}{\partial x_i} + \frac{\partial}{\partial x_j}\left[(\tau _{ij}-p\delta_{ij}) u_i\right] \:,
\end{align}
\label{eq:energyeq}
\end{subequations}
where $t$ is time, $u_j$ and $x_j$ are the velocity component and Cartesian spatial coordinate in the $j$ direction respectively, $\rho$ is mixture mass density, and $p$ is pressure. In Eq.~\eqref{eq:specieeq}, $Y_\alpha$ and $\dot{\omega}_\alpha$ are mass fraction and net reaction rate of the $\alpha$-th species, respectively. In Eq.~\eqref{eq:energyeq}, $H = h + \frac{1}{2}u_i u_i$ and $E = e + \frac{1}{2}u_i u_i$ are the total enthalpy (absolute enthalpy + kinetic energy) and total energy (internal energy + kinetic energy) respectively, and $q_i$ is energy flux in direction $i$.
Depending on the flow velocity with respect to the speed of sound, different forms of energy are solved and this will be described in detail later in §\ref{subs:Solvers}.
The viscous tensor $\tau_{ij}$ in Eq.~\eqref{eq:momenteq}, by applying Stokes' hypothesis, is written as
\begin{equation}
\tau _{ij}=-\frac{2}{3}\mu \frac{\partial u_k}{\partial x_k} \delta _{ij}+\mu \left(\frac{\partial u_i}{\partial x_j}+\frac{\partial u_j}{\partial x_i}\right)\:.
\end{equation}
The dynamic viscosity of mixture, $\mu$, is calculated via the Wilke mixture formulation:
\begin{equation}
\mu = \sum_{\alpha}\frac{\mu _\alpha X_\alpha}{ {\textstyle \sum_{\beta}}\Phi _{\alpha,\beta}X_\beta }
\end{equation}
with $\mu_\alpha$ and $X_\alpha$ being the dynamic viscosity and mole fraction of the $\alpha$-th species respectively, and $\Phi_{\alpha,\beta}$ is computed using
\begin{equation}
\Phi _{\alpha,\beta}=\frac{\left [ 1+\sqrt{\left ( \frac{\mu _\alpha}{\mu _\beta}\sqrt{\frac{M_\beta}{M_\alpha} } \right ) } \right ]^2 }{\sqrt{8}\sqrt{1+M_\alpha/M_\beta}} \:,
\label{eq:Phi}
\end{equation}
where $M_\alpha$ is the mole mass of the $\alpha$-th species.
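The Wilke mixture rule above can be sketched in plain Python as follows. This is an illustration of the formula only, not {\em DeepFlame}'s actual implementation, which works with Cantera/OpenFOAM data structures; the function name and sample values are ours.

```python
def wilke_viscosity(X, mu, M):
    """Mixture dynamic viscosity via the Wilke formula.
    X: mole fractions, mu: pure-species viscosities [Pa s],
    M: molar masses [kg/mol]."""
    n = len(X)
    mix = 0.0
    for a in range(n):
        denom = 0.0
        for b in range(n):
            # Phi_{a,b} = [1 + (mu_a/mu_b)^(1/2) (M_b/M_a)^(1/4)]^2
            #             / (sqrt(8) * sqrt(1 + M_a/M_b))
            phi = (1.0 + (mu[a] / mu[b])**0.5 * (M[b] / M[a])**0.25)**2 \
                  / (8.0**0.5 * (1.0 + M[a] / M[b])**0.5)
            denom += phi * X[b]
        mix += mu[a] * X[a] / denom
    return mix

# sanity check: a single-species "mixture" returns the pure viscosity
print(wilke_viscosity([1.0], [1.8e-5], [0.029]))  # -> 1.8e-05
```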
In the species transport equations (see Eq.~(\ref{eq:specieeq})), the reaction rates, $\dot{\omega}_\alpha$, can be obtained from the Arrhenius Law and on-the-fly integration or a pre-trained deep neural network, which will be introduced in §~\ref{subs:ChemEq}.
The diffusion velocity of the $\alpha$-th species in $i$-th direction, $V_{\alpha,i}$, is given by Fick's Law and corrected with a correction velocity $V_i^c$ to ensure mass conservation:
\begin{equation}
V_{\alpha,i}=-\frac{D_\alpha}{Y_\alpha}\frac{\partial Y_\alpha}{\partial x_i}+\underset{V_i^c}{ \underbrace{\sum_{\alpha=1}^{N} (D_\alpha\frac{\partial Y_\alpha}{\partial x_i}) } } \:,
\label{eq:V_ki}
\end{equation}
where $D_\alpha$ is the diffusion coefficient between $\alpha$-th species and the rest of the mixture. By default, {\em DeepFlame} employs (via a Cantera interface function) the Hirschfelder and Curtiss mixture-averaged transport model \cite{curtiss1949transport}:
\begin{equation}
D_\alpha = \frac{1-Y_\alpha}{ {\textstyle \sum_{\beta\ne \alpha}}X_\beta/\mathcal{D}_{\beta\alpha}} \:,
\end{equation}
where $\mathcal{D}_{\beta\alpha}$ is the binary diffusion coefficient between the $\alpha$-th and $\beta$-th species, which can be obtained according to the Takahashi correlation \cite{kee2005chemically}. The simpler model implemented in OpenFOAM assuming unity Lewis number ($Le$) for all species is also retained in {\em DeepFlame}, and the mass diffusivity is modelled as
\begin{equation}
D_\alpha \equiv \frac{a}{Le} = \frac{\lambda}{\rho C_p}\:,
\label{eq:D_UL}
\end{equation}
where $a$, $\lambda$ and $C_p$ are the thermal diffusivity, thermal conductivity and constant-pressure heat capacity of the mixture, respectively.
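The Hirschfelder and Curtiss mixture-averaged coefficient can be sketched as below. This is purely illustrative; in {\em DeepFlame} the binary coefficients $\mathcal{D}_{\beta\alpha}$ come from Cantera, and the helper name is ours.

```python
def mixture_averaged_D(alpha, Y, X, D_bin):
    """Hirschfelder-Curtiss mixture-averaged diffusion coefficient of
    species `alpha`: D_a = (1 - Y_a) / sum_{b != a} X_b / D_ba.
    Y, X: mass and mole fractions; D_bin[b][a]: binary coefficients."""
    denom = sum(X[b] / D_bin[b][alpha] for b in range(len(X)) if b != alpha)
    return (1.0 - Y[alpha]) / denom

# for a binary mixture the mixture-averaged value reduces to the
# binary coefficient itself, a useful sanity check
D_bin = [[0.0, 2.0e-5],
         [2.0e-5, 0.0]]
print(mixture_averaged_D(0, [0.3, 0.7], [0.3, 0.7], D_bin))  # -> 2e-05
```

Note the reduction to the binary coefficient only holds exactly when mass and mole fractions coincide, as in this constructed example.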
The mixture overall value of $\lambda$ is obtained using the conductivity of each pure species ($\lambda_\alpha$):
\begin{equation}
\lambda = \frac{1}{2}\left(\sum_{\alpha}X_\alpha\lambda_\alpha+\frac{1}{ {\textstyle \sum_{\alpha}X_\alpha/ \lambda_\alpha} }\right) \:,
\label{eq:lambda}
\end{equation}
and a similar procedure is used for $C_p$ based on the JANAF database.
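The combination-averaging rule for $\lambda$ (the mean of the mole-fraction-weighted arithmetic and harmonic averages) can be sketched as follows; this is an illustration of the formula, with species values of our own choosing.

```python
def mixture_conductivity(X, lam):
    """Mixture thermal conductivity as the mean of the arithmetic and
    harmonic mole-fraction averages of the pure-species values:
    0.5 * (sum(X*lam) + 1/sum(X/lam))."""
    arith = sum(x * l for x, l in zip(X, lam))
    harm = 1.0 / sum(x / l for x, l in zip(X, lam))
    return 0.5 * (arith + harm)

# e.g. a rough O2/N2 "air" mixture with plausible conductivities [W/(m K)]
print(mixture_conductivity([0.21, 0.79], [0.0263, 0.0260]))
```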
The more detailed multi-component transport model is also supported via the Cantera interface function and the theoretical details are available in \cite{ern1994lecture}.
The energy equations for total enthalpy and total energy are given by Eq.~(\ref{eq:energyeq}a) and (\ref{eq:energyeq}b), respectively; the energy flux $q_i$ is
\begin{equation}
q_i = -\lambda \frac{\partial T}{\partial x_i} + \rho \sum_{\alpha=1}^{N} h_\alpha Y_\alpha V_{\alpha,i}
\label{eq:qi} \:.
\end{equation}
The RHS of the above equation includes a heat conduction term (expressed by Fourier's Law, $\lambda {\partial T}/{\partial x_i}$) and a species diffusion term, where $h_\alpha$ is the enthalpy of the $\alpha$-th species.
Furthermore, temperature in Eq.~(\ref{eq:qi}) is usually cast to enthalpy to facilitate the numerical solving procedure of the energy equation. The heat conduction term is rewritten as
\begin{equation}
\begin{aligned}
\lambda \frac{\partial T}{\partial x_i} &= \rho a \sum_{\alpha=1}^{N}Y_\alpha C_{p\alpha}\frac{\partial T}{\partial x_i} \\
&= \sum_{\alpha=1}^{N}\rho a Y_\alpha\frac{\partial h_\alpha}{\partial x_i}\\
&= \rho a \frac{\partial h}{\partial x_i} - \sum_{\alpha=1}^{N}\rho a h_\alpha\frac{\partial Y_\alpha}{\partial x_i} \:.
\end{aligned}
\label{eq:gradT}
\end{equation}
Finally, combining Eq.~(\ref{eq:V_ki}), Eq.~(\ref{eq:qi}) and Eq.~(\ref{eq:gradT}), the energy equations (Eq.~(\ref{eq:energyeq}a) and (\ref{eq:energyeq}b)) can be expressed in more detail as:
\begin{subequations}
\begin{align}
&\frac{\partial (\rho H)}{\partial t} +\frac{\partial (\rho u_i H)}{\partial x_i}=\frac{\partial p}{\partial t}+ \frac{\partial}{\partial x_j}(\tau _{ij} u_i)+\frac{\partial }{\partial x_i} (\rho a \frac{\partial h}{\partial x_i} ) \nonumber \\ &-\sum_{\alpha=1}^{N}\frac{\partial}{\partial x_i} (\rho a h_\alpha\frac{\partial Y_\alpha}{\partial x_i} )-\frac{\partial}{\partial x_i}\left [ \rho\sum_{\alpha=1}^{N}h_\alpha Y_\alpha(-\frac{D_\alpha}{Y_\alpha}\frac{\partial Y_\alpha}{\partial x_i} +\sum_{\alpha=1}^{N}D_\alpha\frac{\partial Y_\alpha}{\partial x_i}) \right ] \\
&\frac{\partial (\rho E)}{\partial t} +\frac{\partial (\rho u_i E)}{\partial x_i}=\frac{\partial}{\partial x_i}\left [(\tau _{ij}-p\delta_{ij}) u_i\right ]+\frac{\partial }{\partial x_i} (\rho a \frac{\partial h}{\partial x_i} ) \nonumber \\ &-\sum_{\alpha=1}^{N}\frac{\partial}{\partial x_i} (\rho a h_\alpha\frac{\partial Y_\alpha}{\partial x_i} )-\frac{\partial}{\partial x_i}\left [ \rho\sum_{\alpha=1}^{N}h_\alpha Y_\alpha(-\frac{D_\alpha}{Y_\alpha}\frac{\partial Y_\alpha}{\partial x_i} +\sum_{\alpha=1}^{N}D_\alpha\frac{\partial Y_\alpha}{\partial x_i}) \right ]
\end{align}
\label{eq:energy2}
\end{subequations}
\subsection{DeepFlame Solvers}\label{subs:Solvers}
Based on the above governing equations, {\em DeepFlame} provides three solvers to simulate reacting flow under different conditions:
\begin{itemize}
\item {\em df0DFoam}-solver: developed to solve zero-dimensional auto-ignition problems, covering both constant-pressure and constant-volume cases. The convection and diffusion terms are discarded here, and thus only the species equations are solved to estimate the time evolution of the thermochemical state of the zero-dimensional reactor.
\item {\em dfLowMachFoam}-solver: developed based on {\em rhoPimpleFoam} (the original pressure-based compressible solver in OpenFOAM) for the simulation of low-Mach-number reacting flows. The equation for total enthalpy (Eq.~(\ref{eq:energy2}a)) is solved here to describe the conservation of energy, in which the contribution of viscous heating (the second term on the RHS) is neglected. Additionally, the Strang splitting scheme \cite{strang1968construction} is adopted to improve the solution accuracy.
\item {\em dfHighSpeedFoam}-solver: developed based on the solution algorithm of {\em rhoCentralFoam} (the original density-based compressible solver in OpenFOAM) to simulate high-speed reacting flows. The equation for total energy (Eq.~(\ref{eq:energy2}b)) is applied to describe the conservation of energy. Viscous heating and the diffusion velocity correction are taken into account when the flow is considered to be viscous.
\end{itemize}
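The Strang splitting mentioned for {\em dfLowMachFoam} can be illustrated with a minimal sketch. The Python toy below advances a single scalar with two commuting linear ``transport'' and ``reaction'' operators, which is far simpler than the actual solver, but it shows the characteristic half-step/full-step/half-step sequence:

```python
import math

def transport_substep(y, a, dt):
    # Exact update for the toy "transport" operator dy/dt = -a*y over dt.
    return y * math.exp(-a * dt)

def reaction_substep(y, b, dt):
    # Exact update for the toy "reaction" operator dy/dt = -b*y over dt.
    return y * math.exp(-b * dt)

def strang_step(y, a, b, dt):
    # Strang splitting: half reaction, full transport, half reaction.
    y = reaction_substep(y, b, 0.5 * dt)
    y = transport_substep(y, a, dt)
    y = reaction_substep(y, b, 0.5 * dt)
    return y

def integrate(y0, a, b, dt, nsteps):
    y = y0
    for _ in range(nsteps):
        y = strang_step(y, a, b, dt)
    return y
```

For non-commuting operators (the realistic case), the splitting error is second order in the time step, which is why the scheme improves accuracy over first-order operator splitting.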
\subsection{Chemistry Integration}\label{subs:ChemEq}
Typically, in the simulation of reacting flows, the computational cost of evaluating the chemical source terms is dominant and can exceed the cost of the fluid dynamics by a factor of 100 \cite{peters2000turbulent}. Therefore, improving the efficiency of the chemistry solver is crucial in the development of advanced reacting flow simulation platforms \cite{lu2009toward}. In this section, the methods used to evaluate the chemistry source terms in {\em DeepFlame} are introduced. We start with a chemical system of $N$ species and $M$ reactions \cite{poinsot2005theoretical}:
\begin{equation}
\sum_{\alpha=1}^{N} {\nu}'_{\alpha j} \mathcal{M}_\alpha\rightleftharpoons \sum_{\alpha=1}^{N} {\nu}''_{\alpha j} \mathcal{M}_\alpha \ \ \ \mathrm{for} \ \ \ j=1,\dots,M\:,
\end{equation}
where $\mathcal{M}_\alpha$ denotes species $\alpha$, and ${\nu}'_{\alpha j}$ and ${\nu}''_{\alpha j}$ represent the molar stoichiometric coefficients of species $\alpha$ in reaction $j$. The reaction rate of species $\alpha$ in such a system is calculated from
\begin{equation}
\frac{\mathrm{d} Y_\alpha}{\mathrm{d} t} = W_\alpha\sum_{j=1}^{M} (\nu''_{\alpha j}-\nu'_{\alpha j})\left \{ K_{fj} {\textstyle \prod_{\alpha=1}^{N}}(\frac{\rho Y_\alpha}{W_\alpha})^{\nu '_{\alpha j}} -K_{rj} {\textstyle \prod_{\alpha=1}^{N}} (\frac{\rho Y_\alpha}{W_\alpha})^{\nu ''_{\alpha j}} \right \} \:,
\label{eq:sourceeq}
\end{equation}
where $K_{fj}$ and $K_{rj}$ are the forward and reverse rate constants of reaction $j$, which are usually modelled using the well-known Arrhenius law, and $W_\alpha$ is the molecular weight of the $\alpha$-th species, so that $\rho Y_\alpha/W_\alpha$ is its molar concentration.
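As an illustration of Eq.~(\ref{eq:sourceeq}), the following Python sketch evaluates the net production rates for a hypothetical one-step reaction 2H$_2$ + O$_2$ $\rightarrow$ 2H$_2$O with made-up Arrhenius parameters (the reverse rate is omitted). Since the reaction is mass balanced, the production rates must sum to zero:

```python
import math

# Hypothetical one-step mechanism: 2 H2 + O2 -> 2 H2O (illustrative only).
W = {"H2": 2.016e-3, "O2": 31.998e-3, "H2O": 18.015e-3}  # kg/mol
nu_f = {"H2": 2.0, "O2": 1.0, "H2O": 0.0}  # forward (reactant) coefficients
nu_b = {"H2": 0.0, "O2": 0.0, "H2O": 2.0}  # backward (product) coefficients

def arrhenius(A, beta, Ta, T):
    # K = A * T^beta * exp(-Ta / T); A, beta, Ta are made-up here.
    return A * T**beta * math.exp(-Ta / T)

def production_rates(rho, Y, T, A=1.0e8, beta=0.0, Ta=1.0e4):
    # dY_alpha/dt = W_alpha (nu'' - nu') * Kf * prod_beta (rho Y_beta / W_beta)^nu'
    Kf = arrhenius(A, beta, Ta, T)
    rate = Kf
    for s in W:
        rate *= (rho * Y[s] / W[s]) ** nu_f[s]
    return {s: W[s] * (nu_b[s] - nu_f[s]) * rate for s in W}
```

In the real mechanism-driven computation these rates are evaluated by libCantera for all $M$ reactions; the sketch only shows the structure of a single term of the sum.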
Typically, the chemical source term $\dot{\omega}_\alpha$ in Eq.~(\ref{eq:specieeq}) is obtained by integrating the large ODE system formed by Eq.~(\ref{eq:sourceeq}). It has been shown in \cite{RN12,RN5,RN13,RN14} that the default OpenFOAM ODE solver suffers from several drawbacks (e.g. poor accuracy and stability, slow integration). Following these previous studies, {\em DeepFlame} provides an interface function to call the well-established implicit ODE solver CVODE (from the SUNDIALS package \cite{hindmarsh2005sundials}, shipped with Cantera) to improve the accuracy and efficiency of the reaction rate computation.
\begin{figure}[!h]
\centering
\includegraphics[scale=0.6]{figures/DNN.png}
\caption{Schematic of the DNN configuration.}
\label{fig:DNN}
\end{figure}
However, since the time scales in a reacting system span a broad range (from $\mathcal{O}$(ns) to $\mathcal{O}$(s)), the resulting ODE system is so stiff that its solution still dominates the computational cost of reacting flow simulations. Therefore, {\em DeepFlame} also adopts a machine learning method to accelerate the solution of chemistry. Following the approach proposed by Zhang et al. \cite{zhang2022multi}, a fully-connected deep neural network with three hidden layers is trained. The architecture of the DNN model is schematically shown in Fig.~\ref{fig:DNN}. The input vector $\boldsymbol{x}(t)=\{T(t),P(t),\mathcal{F}(Y_\alpha(t))_{\alpha=1, \dots, n}\}$ represents the temperature, pressure and mass fractions of each species at time $t$. The operator $\mathcal{F}()$ denotes the Box-Cox transformation (BCT) \cite{box1964analysis}, which converts the mass fractions from low-order quantities to $\mathcal{O}$(1). The output of the DNN (labelled $\boldsymbol{u}(\boldsymbol{x})$) is the change of the input $\boldsymbol{x}$ over a large time step (typically $\Delta t=1\ \mu s$). The chemistry source term $\dot{\omega}_\alpha$ can then be explicitly obtained via $\boldsymbol{u}(Y_\alpha)$, and the DNN can thus be regarded as an ODE integrator. Later, in Section \ref{sec:Res} and Section \ref{sec:Perf}, the accuracy and computational efficiency of this deep learning method are demonstrated in various cases.
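The Box-Cox transformation used for the DNN inputs can be sketched as follows; the exponent $\lambda=0.1$ here is an illustrative choice, not necessarily the value used in {\em DeepFlame}:

```python
def bct(x, lam=0.1):
    # Box-Cox transform: maps small positive mass fractions to O(1) values.
    # In the limit lam -> 0 it recovers log(x).
    return (x**lam - 1.0) / lam

def inverse_bct(y, lam=0.1):
    # Exact inverse of the transform above.
    return (lam * y + 1.0) ** (1.0 / lam)
```

For example, a trace mass fraction of $10^{-6}$ is mapped to roughly $-7.5$, the same order of magnitude as the transformed major species, which makes the network's loss landscape far better conditioned.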
\section{Implementation Details}\label{sec:Inplem}
\subsection{DeepFlame Code Structure}
\begin{figure}[!h]
\centering
\includegraphics[width=\textwidth]{figures/classdiagram.png}
\caption{The class diagrams of the thermo-physio-chemistry library in (a) OpenFOAM and (b) {\em DeepFlame}. Lines ending with hollow triangle, hollow diamond and filled diamond shapes denote inheritance, aggregation and composition relationships, respectively. A line with a sharp arrow denotes that one class is adopted as a template in another. The corresponding case setup files are shown on the right.}
\label{fig:UML}
\end{figure}
The first distinct feature of {\em DeepFlame} compared to existing codes with an OpenFOAM-Cantera interface is that our implementation of the two-way coupling is considerably more compact and clean. All the complex thermo-physio-chemistry operations are handled using Cantera functions via an interface class. The lengthy and redundant OpenFOAM native species, mixture, reaction and derived thermophysical classes have been removed, and only the base class {\em fluidThermo} is kept to couple with the flow solver (as in {\em rhoPimpleFoam} for non-reacting flow). Specifically, the coupling of OpenFOAM and libCantera (the Cantera C++ API) is achieved as follows: OpenFOAM solves the basic conservation equations and outputs the state parameters ($p$, $T$, $Y_\alpha$); libCantera is responsible for handling the chemical mechanism as well as the calculation of the thermophysical coefficients ($\mu$, $\lambda$, $D_\alpha$) and reaction rates ($\dot{\omega}_\alpha$). In {\em DeepFlame}, this coupling routine is implemented in a new thermo-physio-chemistry library, which strictly follows the standard classes in OpenFOAM. In the following, a comparison between the newly developed and the original libraries is presented, from which the advantageous features of {\em DeepFlame} can be readily observed.
First, the classes related to the thermo-physio-chemistry library of OpenFOAM are summarized. The class diagram as well as the corresponding runtime files (the interface for user settings) are shown in Fig.~\ref{fig:UML}a. As listed in the class diagram, OpenFOAM employs a large number of classes with complicated relationships to describe the thermal and chemical properties of the mixture. Abstract base classes are constructed to declare public interfaces. For example, the functions returning the field thermal properties are virtually defined in {\em basicThermo}, while the mixture-related functions, which output the thermal properties of the mixture in a single cell, are declared in {\em basicSpecieMixture}. These virtual functions are implemented in the derived classes with the models specified by users in runtime files. In general, the thermophysical models are determined in the file ``{\em constant/thermophysicalProperties}" and the chemistry models are set in the file ``{\em constant/chemistryProperties}". Here we give a brief introduction to these models via the setting example illustrated in the right part of Fig.~\ref{fig:UML}a. In the {\em thermoType}-dict, the keyword {\em type} specifies whether the underlying thermophysical model is {\em rho}-based or {\em psi}-based, reacting or nonreacting; the keyword {\em mixture} specifies whether the mixture composition is fixed or variable; the keywords {\em transport} and {\em thermo} determine the transport and thermodynamic models, which are used to evaluate the transport parameters ($\mu$, $\lambda$ and $a$) and the specific heat $C_p$; the last three keywords represent the equation of state model, the species model (which calculates the composition of each constituent) and the energy form (internal energy or enthalpy), respectively. The {\em chemistryType}-dict specifies the solver for the chemical kinetics ODEs, whose coefficients are set in the {\em odeCoeffs}-dict.
Figure~\ref{fig:UML}b illustrates the simplified structure of thermo-physio-chemistry library in {\em DeepFlame}. From the class diagram, an obvious simplification can be noted:
only several {\em thermo} classes derived from {\em basicThermo} are kept; the {\em mixture} classes and the {\em chemistry} classes are all replaced by {\em CanteraMixture} and {\em dfChemistryModel}. {\em CanteraMixture} is a class built to return the thermophysical properties calculated via libCantera, while {\em dfChemistryModel} is a chemistry model which enables CVODE and a deep neural network for the solution of the chemical kinetics ODEs. Therefore, libCantera and libTorch are also included in this library. As shown in Fig.~\ref{fig:UML}b, the settings of the two classes are all contained in a new file ``{\em constant/CanteraTorchProperties}'': the keyword {\em CanteraMechanismFile} specifies the chemistry mechanism to read; the keyword {\em transportModel} sets the transport model, for which {\em UnityLewis}, {\em Mix}, and {\em Multi} are provided; the {\em zeroDReactor}-dict specifies the conditions for the zero-dimensional reactor; the switch {\em torch} controls the use of the deep neural network, whose name and normalisation parameters are set in {\em torchModel} and {\em torchParameters}, respectively; the device for DNN inference can also be specified by the user via the switch {\em GPU}; the keywords in the {\em loadbalancing}-dict control the load balancing switch for solving chemistry and the log file output. Next, in §~\ref{subs:models}, the detailed structure and the underlying algorithms of the two classes are further introduced.
\subsection{Description of the new classes}\label{subs:models}
Figure~\ref{fig:classes} shows the detailed class diagram of the new library in {\em DeepFlame}. Four critical classes, {\em CanteraMixture}, {\em dfChemistryModel}, {\em DNNInferencer} and {\em LoadBalancer}, are described in this section.
\begin{figure}[!h]
\centering
\includegraphics[width=\textwidth]{figures/UML_chemistry_mixture.jpg}
\caption{The main classes of the thermo-physio-chemistry library in {\em DeepFlame}. The notations ``$-$" and ``$+$" specify the visibility of the class member as public and private.}
\label{fig:classes}
\end{figure}
{\em CanteraMixture} is developed to evaluate the thermal properties of the mixture in a single cell with libCantera functions. Note that this class is also adopted in the construction of {\em heThermo} and the chemistry model (see Fig.~\ref{fig:UML}b), so its operations are kept almost identical to the original mixture classes in OpenFOAM. Most of the operations in {\em CanteraMixture} are implemented based on the attributes {\em CanteraSolution\_}, {\em CanteraGas\_} and {\em CanteraTransport\_}, which are constructed from the original libCantera classes in Fig.~\ref{fig:classes}. For example, the function {\em mu()} is realised by calling {\em CanteraTransport\_}-$>${\em viscosity()} and the function {\em Cp()} is implemented by calling {\em CanteraGas\_}-$>${\em cp\_mass()}. Additionally, to read the chemistry mechanism and user settings, a private attribute {\em CanteraTorchProperties\_} is constructed from the original OpenFOAM I/O controlling class {\em IOdictionary}.
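The delegation pattern of {\em CanteraMixture} can be illustrated with a small Python analogue (the actual class is C++; the stand-in objects below only mimic the libCantera calls named above and return fixed dummy values):

```python
class FakeTransport:
    # Stand-in for Cantera's Transport object (illustrative only).
    def viscosity(self):
        return 1.8e-5  # dummy dynamic viscosity [Pa s]

class FakeGas:
    # Stand-in for Cantera's ThermoPhase object (illustrative only).
    def cp_mass(self):
        return 1005.0  # dummy specific heat [J/(kg K)]

class Mixture:
    # Mirrors how CanteraMixture forwards OpenFOAM-style calls to libCantera:
    # the class owns the Cantera objects and each method simply delegates.
    def __init__(self, gas, transport):
        self._gas = gas
        self._transport = transport

    def mu(self):
        # mu() delegates to Transport::viscosity().
        return self._transport.viscosity()

    def Cp(self):
        # Cp() delegates to ThermoPhase::cp_mass().
        return self._gas.cp_mass()
```

The design keeps the OpenFOAM-facing interface unchanged while all property evaluation lives in the wrapped library objects.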
\begin{figure}[!h]
\centering
\includegraphics[width=\textwidth]{figures/flowchart.jpg}
\caption{A schematic demonstrating the main operations of the {\em dfLowMachFoam}-solver in a single iteration step.}
\label{fig:flowchart}
\end{figure}
{\em DNNInferencer} is developed to calculate chemical reaction rates via inference of the deep neural network. It is constructed with the network, the normalisation parameters and a switch selecting the GPU as the DNN inference device. The operation {\em inference} is built to calculate the chemistry source terms with the DNN. It mainly consists of three stages: 1) pre-process the input tensor ($p$, $T$, $Y$) by normalisation and Box-Cox transformation; 2) run inference on the processed tensor with the network; 3) evaluate the mass fractions via the reverse transformation of stage 1 and finally output the reaction rates. {\em LoadBalancer} is built to balance the chemistry load when using the CVODE solver; the detailed implementation is adopted from DLBFoam \cite{RN17}.
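The three inference stages can be sketched in Python as follows. This is a toy version of the C++/libtorch implementation: normalisation with the training statistics is omitted for brevity, and `net` stands for any callable returning the predicted change of the (transformed) input over $\Delta t$:

```python
def bct(x, lam=0.1):
    # Box-Cox transform (lam = 0.1 is an illustrative choice).
    return (x**lam - 1.0) / lam

def inverse_bct(y, lam=0.1):
    return (lam * y + 1.0) ** (1.0 / lam)

def infer_rates(T, p, Y, net, dt, lam=0.1):
    # Stage 1: pre-process -- Box-Cox-transform the mass fractions.
    x = [T, p] + [bct(y, lam) for y in Y]
    # Stage 2: network inference; the model predicts the change of x over dt.
    u = net(x)
    # Stage 3: invert the transform and return the mean reaction rates.
    Y_new = [inverse_bct(xi + ui, lam) for xi, ui in zip(x[2:], u[2:])]
    return [(yn - y) / dt for yn, y in zip(Y_new, Y)]
```

With a network that predicts zero change, the returned rates are (numerically) zero, which is a convenient sanity check of the transform round trip.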
Based on the above three classes, the core of {\em DeepFlame}, {\em dfChemistryModel}, can finally be constructed. The thermal and chemistry fields are all contained in this class as attributes, and most of them can be accessed via public operations such as {\em RR()}, {\em Qdot()}, {\em Y()}, etc. The two public functions {\em solve()} and {\em correctThermo()} are built for solving the chemical sources and updating the thermal properties. The execution procedure of the two operations is described in Fig.~\ref{fig:flowchart}, which presents the computational algorithm of the {\em dfLowMachFoam}-solver. It can be seen that the conservation equations for mass, momentum, species and energy, and a Poisson equation for pressure, are solved successively in an iterative manner. The two separate steps of solving the chemical sources and updating the thermal properties are carried out by calling {\em solve()} and {\em correctThermo()}, respectively. The implementation of {\em correctThermo()} can be divided into the following steps:
construct the attribute {\em CanteraGas\_} with $h$, $p$ and $Y$ obtained from the previous solutions; then update the thermal properties and transport coefficients from the new {\em CanteraGas\_}. Note that the diffusion coefficient ($D_\alpha$) of each species is determined according to the user-defined transport model.
\subsection{Adaptive mesh refinement}\label{subs:AMR}
Adaptive mesh refinement (AMR) is readily available in OpenFOAM as an effective way to reduce the computational cost of spatially stiff computations, such as those involving shock and detonation waves. However, the original AMR algorithm in OpenFOAM, named {\em hexRef8}, uses octree refinement and splits each cell into eight child cells (homogeneously halved in three directions). This brings extra computational cost for one- and two-dimensional cases since it refines cells in invariant directions. Therefore, the original AMR was extended from {\em hexRef8} to {\em hexRef4} in~\cite{amr2015Baniabedalruhman,amr2019load2d} to implement quadtree refinement for two-dimensional problems. Additionally, multiple refinement criteria were also included for higher flexibility.
In this work, we further extend the AMR capability to one-dimensional cases via a new mesh cutter named {\em hexRef2}.
The {\em empty} boundary type (for invariant directions) can be automatically detected, and the corresponding boundary faces will be marked as divisible.
During an AMR process, each divisible face is split into two new faces and a new internal face is added to each cell to be refined.
To keep consistency, we strictly follow the code structure of~\cite{amr2019load2d} and {\em hexRef2} is added as a new derived class to the base class {\em hexRef}. Upon simulation startup, {\em dfDynamicRefineFvMesh} selects the suitable mesh cutter according to the dimension of the case. Additionally, the temporally evolving grid file related I/O is improved to facilitate AMR restart and post-processing.
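The splitting rule of {\em hexRef2} can be illustrated with a minimal 1D sketch: every flagged cell below the maximum refinement level is halved, mimicking the insertion of a new internal face. This is only an illustration of the refinement logic, not the OpenFOAM implementation:

```python
def refine_1d(cells, flag, max_level=3):
    """One refinement pass over 1D cells given as (x0, x1, level) tuples.

    Each cell for which `flag` is true and whose level is below `max_level`
    is split at its midpoint into two child cells of level + 1.
    """
    out = []
    for (x0, x1, level) in cells:
        if flag(x0, x1) and level < max_level:
            xm = 0.5 * (x0 + x1)  # new internal face location
            out.append((x0, xm, level + 1))
            out.append((xm, x1, level + 1))
        else:
            out.append((x0, x1, level))
    return out
```

Repeating the pass up to `max_level` times reproduces the 0.8 mm $\rightarrow$ 0.1 mm cascade used in the detonation case of §~\ref{subs:dfHighSpeedFoam} (three halvings).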
\section{Validation and Results}\label{sec:Res}
In this section, the above implementation of {\em DeepFlame} is validated. The solvers described in §~\ref{subs:Solvers} are systematically tested using a broad range of canonical test cases covering different dimensionalities and flow speeds.
\subsection{df0DFoam-solver}\label{subs:0D}
As introduced in §~\ref{subs:GorvEq}, {\em df0DFoam} is developed to solve zero-dimensional problems, which can be further subdivided into constant-pressure and constant-volume conditions. Figure~\ref{fig:0Dpressure} compares the hydrogen autoignition results obtained by Cantera and {\em df0DFoam} at constant pressure. The chemical mechanism adopted here was developed by Evans et al. \cite{evans1980influence} and contains 8 species and 16 reversible reactions. The neural network is trained according to the method proposed in \cite{zhang2022multi}. In the later hydrogen flame cases, the above mechanism and network are also adopted unless otherwise specified. In this case, the initial condition (temperature, pressure and equivalence ratio) of the H$_2$/air mixture is set as $T$ = 1400 K, $p$ = 1 atm and $\phi$ = 1. From Fig.~\ref{fig:0Dpressure}, it can be seen that {\em df0DFoam} accurately captures the evolution of $T$, $p$ and $Y$, for both the DNN and CVODE integrators. Figure~\ref{fig:0Dvolume} shows the constant-volume autoignition results of H$_2$ given by the {\em df0DFoam}-solver at $T$ = 1000 K, $p$ = 0.5 atm and $\phi$ = 1. It reveals that the DNN integrator performs satisfactorily even under conditions with large pressure variations. In summary, the cases conducted in this part confirm the validity of the implementation of the chemical reaction source terms in {\em DeepFlame}.
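The structure of such a constant-pressure autoignition computation can be conveyed with a minimal sketch. The toy below uses a hypothetical one-step mechanism with made-up parameters (not the Evans et al. mechanism) and explicit Euler integration; real solvers use stiff implicit integrators, but the thermal-runaway behaviour is the same:

```python
import math

def ignite_constant_pressure(T0=1400.0, q=2000.0, A=1.0e9, Ta=15000.0,
                             dt=1e-7, tmax=5e-4):
    # One-step toy model: fuel mass fraction Y with dY/dt = -A exp(-Ta/T) Y
    # and an algebraic energy balance T = T0 + q * (1 - Y) at constant p.
    # All parameter values are illustrative.
    Y, t, history = 1.0, 0.0, []
    while t < tmax:
        T = T0 + q * (1.0 - Y)
        Y = max(Y + dt * (-A * math.exp(-Ta / T) * Y), 0.0)  # clip at zero
        t += dt
        history.append((t, T))
    return history
```

The temperature rises monotonically from the initial 1400 K to the adiabatic value $T_0 + q$ as the fuel is consumed, qualitatively mirroring the profiles in Fig.~\ref{fig:0Dpressure}.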
\begin{figure}[!h]
\centering
\includegraphics[width=\textwidth]{figures/1_1400_threelines.png}
\caption{Zero-dimensional constant-pressure autoignition results comparison between Cantera and {\em df0DFoam} (with CVODE and DNN integrators).}
\label{fig:0Dpressure}
\end{figure}
\begin{figure}[!h]
\centering
\includegraphics[width=0.6\textwidth]{figures/fig_5_0Dvolume.png}
\caption{Zero-dimensional constant-volume autoignition results given by {\em df0DFoam}.}
\label{fig:0Dvolume}
\end{figure}
Figure~\ref{fig:ignitionDelay} shows the ignition delay times (IDTs, defined using $[\mathrm{d}T/\mathrm{d}t]_{max}$) versus initial temperature for both hydrogen and n-Heptane fuels. For the hydrogen/air mixture at stoichiometric and atmospheric conditions, both the DNN and CVODE chemistry integrators give results very close to the IDTs calculated using Cantera over a broad range of temperatures. For the more complex fuel n-Heptane (n-C$_7$H$_{16}$), which exhibits negative temperature coefficient (NTC) behaviour, {\em df0DFoam} with CVODE also shows very good agreement with the Cantera numerical results and the experimental measurements~\cite{heufer2010determination}.
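The IDT definition above translates directly into a post-processing routine: given discrete $t$ and $T$ arrays from a simulation, the delay is the time at which the centred-difference $\mathrm{d}T/\mathrm{d}t$ peaks. A minimal sketch:

```python
def ignition_delay(t, T):
    # IDT defined at the instant of maximum temperature rise rate, [dT/dt]_max,
    # using second-order central differences on the interior points.
    dTdt = [(T[i + 1] - T[i - 1]) / (t[i + 1] - t[i - 1])
            for i in range(1, len(T) - 1)]
    i_max = max(range(len(dTdt)), key=dTdt.__getitem__)
    return t[i_max + 1]  # +1 maps back to the full-array index
```

The same routine works for any monotone-rise definition; alternative IDT markers (e.g. peak OH concentration) would only change the input array.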
\begin{figure}[!h]
\centering
\includegraphics[width=\textwidth]{figures/ignitionDelay_vs_rT.png}
\caption{Ignition delay time versus temperature for stoichiometric (a) hydrogen/air mixture at atmospheric pressure and (b) n-Heptane/air mixture at 12.5~atm pressure.}
\label{fig:ignitionDelay}
\end{figure}
\subsection{dfLowMachFoam-solver}\label{subs:dfLowMachFoam}
To assess the implementation of the transport processes, a laminar freely-propagating 1D planar premixed flame is first examined. Then, a 2D non-premixed lifted jet flame and a reactive Taylor-Green vortex (TGV) are simulated to validate the capability of the solver for multi-dimensional applications. For all three cases, hydrogen is adopted as the fuel and the mixture-averaged model is chosen to calculate the diffusion coefficients. Time derivatives are discretised with the implicit Euler scheme. Discretisation of the convective and diffusion terms is based on the second-order central differencing scheme.
\subsubsection{One-Dimensional Planar Flame}
The computational setup of the 1D case is schematically shown in Fig.~\ref{fig:flamefront}a. Except for the inlet and outlet, all the side boundaries are defined using the {\em empty} condition (i.e. transverse spatial terms are not calculated), ensuring no fluxes in the lateral direction. The domain is initialised using the steady-state 1D freely-propagating flame solution from Cantera. The velocity of the inlet mixture is set to be identical to the flame speed given by Cantera so that the flame front is stabilised inside the domain. The length of the computational domain is also chosen according to the Cantera solution, and the cell number is set to ensure the flame front is discretised by at least 30 elements.
\begin{figure}[!h]
\centering
\includegraphics[width=0.7\textwidth]{figures/fig_6_flameStructure.png}
\caption{(a): Numerical setup of one-dimensional premixed flame. (b): Profiles of mass fractions of main species compared between Cantera and {\em dfLowMachFoam}.}
\label{fig:flamefront}
\end{figure}
\begin{figure}[!h]
\centering
\includegraphics[width=0.8\textwidth]{figures/fig_7_ft_sl.png}
\caption{Comparison of laminar flame speed (left) and flame thickness (right) of hydrogen/air mixture at different equivalence ratios.}
\label{fig:sl}
\end{figure}
Figure~\ref{fig:flamefront}b shows that the distributions of the species mass fractions given by the {\em dfLowMachFoam}-solver and Cantera exhibit an excellent agreement. This observation extends to different equivalence ratios, as presented in Fig.~\ref{fig:sl}. The laminar flame speed ($S_l$) and flame thickness ($\delta_l$) given by {\em dfLowMachFoam} compare well with the Cantera solution over the range $\phi = $ 0.5 to 1.8. Here $S_l$ is evaluated by subtracting the propagation velocity of the flame front (position of $\nabla T_{max}$) from the inlet velocity, and $\delta_l$ is calculated as $(T_b-T_a)/\nabla T_{max}$. The minor deviations in the above comparison can be attributed to the different meshing methods adopted in Cantera and {\em dfLowMachFoam}: the former uses an adaptive grid with high refinement near the flame front, whereas the latter adopts a static grid. Nevertheless, the cases conducted in this part demonstrate that the convection-diffusion-reaction algorithms implemented in {\em DeepFlame} are stable and accurate.
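The flame thickness definition used above can be evaluated from a discrete temperature profile as sketched below (taking $\max(T)-\min(T)$ as the burnt-to-unburnt temperature jump):

```python
def flame_thickness(x, T):
    # Thermal flame thickness: (T_b - T_u) / max|dT/dx| along a 1D profile,
    # with the gradient approximated by central differences.
    dTdx = [(T[i + 1] - T[i - 1]) / (x[i + 1] - x[i - 1])
            for i in range(1, len(T) - 1)]
    return (max(T) - min(T)) / max(abs(g) for g in dTdx)
```

For a profile $T(x) = T_u + \frac{T_b - T_u}{2}\,(1 + \tanh(x/d))$, the maximum gradient is $(T_b - T_u)/(2d)$, so the routine should return $2d$ up to discretisation error.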
\subsubsection{Two-Dimensional Jet Flame}
In this subsection, a 2D non-premixed planar jet flame is simulated using {\em dfLowMachFoam} with the DNN and CVODE integrators. As shown in Fig.~\ref{fig:Triple}a, a H$_2$ fuel jet with $T$ = 300 K and $p$ = 1 atm enters from the left end of the rectangular domain, surrounded by an air co-flow. The fuel jet has a diameter of 8~mm and a composition of 25$\%$ H$_2$/75$\%$ N$_2$ by volume. The velocities of the jet and the co-flow are 5 and 1 m/s, respectively. The size of the computational domain is 3 $\times$ 5 cm, discretised by a structured mesh with 300 $\times$ 500 resolution. The initial fields are specified as follows. An ignition region with a temperature of 1400 K is set at the centre of the domain (shaded area in Fig.~\ref{fig:Triple}a); the velocity and mass fractions of the H$_2$ jet decay along the streamwise direction and a 1/7 power law is applied for the velocity profile in the transverse direction. The right figure in Fig.~\ref{fig:Triple}a depicts the profiles of $u$ and $Y$ along the centreline.
Figure~\ref{fig:Triple}b shows the evolution of the jet flame using heat release rate (HRR) contours. It can be seen that the {\em dfLowMachFoam} gives almost identical results for the DNN and CVODE cases. After ignition, two flame branches develop towards upstream and form the {\em triple\ flame} structures at 4 ms. Here we make a further quantitative comparison between the DNN and CVODE solvers: transverse profiles of species mass fractions in the two triple flames are extracted at the point with maximum heat release rate. As shown in Fig.~\ref{fig:Triple}c, the results obtained by DNN and CVODE show excellent agreement for all major species.
\begin{figure}[!h]
\centering
\includegraphics[width=0.7\textwidth]{figures/triple.png}
\caption{Simulation results of the two-dimensional jet flame using {\em dfLowMachFoam}. (a) numerical setup and the initial profiles of velocity and species mass fractions along centreline; (b) Contours of HRR calculated by two ODE integrators at $t$ = 0.5 and 4 ms; (c) Transverse profiles of species mass fractions at the maximum HRR point for the 2D triple flame.}
\label{fig:Triple}
\end{figure}
\subsubsection{Three-Dimensional reactive Taylor-Green Vortex}\label{subs:TGV}
Finally, the performance of {\em dfLowMachFoam} is assessed using a recently established benchmark case for reacting flow DNS codes: a 3D Taylor-Green vortex (TGV) interacting with a non-premixed flame \cite{RN4}. Figure~\ref{fig:TGV1}a shows the cubic computational domain with edge length 2$\pi L$ and the initial fields of vorticity magnitude and temperature. All boundaries are set to be {\em periodic}. The initial condition for the velocity field is given by
\begin{equation}
u_x=u_0\sin(\frac{2\pi x}{L})\cos(\frac{2\pi y}{L})\cos(\frac{2\pi z}{L})\:,
\end{equation}
\begin{equation}
u_y=-u_0\cos(\frac{2\pi x}{L})\sin(\frac{2\pi y}{L})\cos(\frac{2\pi z}{L})\:,
\end{equation}
\begin{equation}
u_z=0\:,
\end{equation}
where the reference velocity magnitude $u_0$ is set to 1 m/s and the reference length $L$ is set to 1 mm. The chemical mechanism by Boivin et al. \cite{boivin2011explicit}, used in \cite{RN4} and involving 9 species and 12 reversible reactions, is also adopted here to enable a direct comparison. The $x$-direction profiles of the temperature and species are specified following \cite{RN4}, as depicted in Fig.~\ref{fig:TGV1}b. The flame is initialised using the equilibrium values of the local thermodynamic state. Further details of the initial setup of this reactive TGV benchmark are given in \cite{RN4}.
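The initial velocity field above can be written down and checked in a few lines; a finite-difference divergence confirms that the field is solenoidal, as required for the incompressible initial state:

```python
import math

def tgv_velocity(x, y, z, u0=1.0, L=1.0e-3):
    # Initial Taylor-Green vortex velocity field (u0 = 1 m/s, L = 1 mm).
    k = 2.0 * math.pi / L
    ux = u0 * math.sin(k * x) * math.cos(k * y) * math.cos(k * z)
    uy = -u0 * math.cos(k * x) * math.sin(k * y) * math.cos(k * z)
    return ux, uy, 0.0  # u_z = 0 by construction

def divergence(x, y, z, h=1e-8):
    # Central-difference divergence; should vanish for this solenoidal field
    # (the u_z contribution is identically zero).
    dudx = (tgv_velocity(x + h, y, z)[0] - tgv_velocity(x - h, y, z)[0]) / (2 * h)
    dvdy = (tgv_velocity(x, y + h, z)[1] - tgv_velocity(x, y - h, z)[1]) / (2 * h)
    return dudx + dvdy
```

Analytically, $\partial u_x/\partial x$ and $\partial u_y/\partial y$ cancel exactly, so any residual returned by `divergence` is purely truncation and round-off error.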
\begin{figure}[!h]
\centering
\includegraphics[width=0.7\textwidth]{figures/TGV_1.png}
\caption{(a) Initial contours of vorticity magnitude and temperature for the reactive TGV; (b) Initial profiles of temperature, hydrogen mass fraction and oxygen mass fraction at $y$ = 0.5$L$ and $z$ = 0.5$L$.}
\label{fig:TGV1}
\end{figure}
To fully resolve the flame front, this case is computed on an equidistant grid with 256$^3$ cells. The simulation is carried out for a physical time of $t$ = 0.5 ms (i.e. 2 vortex turnover reference times). Figure~\ref{fig:TGV2}a shows the contours of $Y_{\rm H_2}$ and $T$ calculated using the DNN chemistry solver. The accuracy of {\em dfLowMachFoam} in solving complex flow dynamics is demonstrated via the comparison presented in Fig.~\ref{fig:TGV2}b, where the profiles of $Y_{\rm H_2}$ and $T$ at $t$ = 0.5 ms along the $y$-centreline show quantitatively good agreement between the present case and the reference (conducted using the code DINO \cite{RN4}).
\begin{figure}[!h]
\centering
\includegraphics[width=0.8\textwidth]{figures/TGV_2.png}
\caption{(a) Contours of hydrogen mass fractions and temperature calculated with DNN integrators at t = 0.5 ms; (b) Comparison of $Y_{\rm H_2}$ and $T$ at $x$ = 0.5$L$, $z$ = 0.5$L$ and time $t$ = 0.5 ms.}
\label{fig:TGV2}
\end{figure}
\subsection{dfHighSpeedFoam-solver}\label{subs:dfHighSpeedFoam}
In this subsection, we assess the implementation of {\em dfHighSpeedFoam}, in which the original OpenFOAM density-based solver {\em rhoCentralFoam} is extended to consider multi-species transport and reaction also via the {\em dfChemistryModel} class shown earlier in Fig.~\ref{fig:classes}.
For high-speed supersonic reactive flows, it is necessary to capture the discontinuous structures such as shock and detonation waves, where adaptive meshing becomes essential to ensure both numerical accuracy and efficiency. Thus, the implementation of AMR described in §~\ref{subs:AMR} is also validated here using the following test cases.
\subsubsection{One-Dimensional Reactive Shock Tube}
First, the multi-component reactive shock tube \cite{shocktube1982Oran} is simulated. The computational domain is a 0.12 m long tube filled with a H$_2$/O$_2$/Ar mixture in a 2/1/7 molar proportion. The domain is uniformly divided into 2400 cells. The initial condition is set as
\begin{equation}
(T, p, u)=\left\{\begin{array}{lccc}
378.656 \mathrm{~K}, & 7173 \mathrm{~Pa}, & 0 \mathrm{~m} / \mathrm{s}, & x<0.06 \mathrm{~m} \\
748.472 \mathrm{~K}, & 35594 \mathrm{~Pa}, & -487.34 \mathrm{~m} / \mathrm{s}, & x>0.06 \mathrm{~m}
\end{array}\right.
\end{equation}
A supersonic inlet condition is applied at the right boundary and a solid wall at the left boundary. The mechanism of \cite{mechanism1985H2}, with 9 species and 18 reactions, is used in this case. Figure~\ref{fig:reactiveShockTube} shows the distributions of temperature, velocity and mass fraction of H at different times. It can be seen that the results computed using {\em dfHighSpeedFoam} are in good agreement with those of Mart\'inez Ferrer et al. \cite{shocktube2014Martinez}. At around $t=190~\mu$s, the reactive front merges with the shock wave, forming a detonation front, which is well captured in the present simulation.
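The piecewise initial condition above translates directly into code; the sketch below fills the 2400 uniform cells with the left and right states (the function and variable names are illustrative, not taken from the solver):

```python
def reactive_shock_tube_ic(x):
    # Initial (T, p, u) for the reactive shock tube, per the equation above:
    # driven (left) state for x < 0.06 m, driver (right) state otherwise.
    if x < 0.06:
        return 378.656, 7173.0, 0.0
    return 748.472, 35594.0, -487.34

def initialise(n=2400, length=0.12):
    # Uniform grid of cell centres, matching the 2400-cell setup.
    dx = length / n
    centres = [(i + 0.5) * dx for i in range(n)]
    return [reactive_shock_tube_ic(xc) for xc in centres]
```

With 2400 cells over 0.12 m, the discontinuity at $x = 0.06$ m falls exactly on a face, so each state occupies 1200 cells.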
\begin{figure}[!h]
\centering
\includegraphics[width=\textwidth]{figures/fig_9_reactiveShockTube.png}
\caption{Comparisons of simulation results for multi-component reactive shock tube problem between {\em dfHighSpeedFoam} and Ref. \cite{shocktube2014Martinez}: (a) temperature, (b) velocity, (c) mass fraction of H at $t$ = 170 $\mu$s, 190 $\mu$s, and 230 $\mu$s.}
\label{fig:reactiveShockTube}
\end{figure}
\subsubsection{Detonation Propagation Speed}
Detonation propagation involves a tight coupling between the leading shock wave and the auto-igniting reaction zone behind it. This case aims to validate the accuracy of {\em dfHighSpeedFoam} in capturing this process and the resulting propagation speed. The computational domain is 0.5 m long. Since adaptive mesh refinement is applied, a coarse base cell size of 0.8 mm is used; with the maximum refinement level set to 3, the minimum cell size after refinement at the discontinuity is 0.1 mm. The domain is filled with a homogeneous stoichiometric H$_2$/O$_2$/N$_2$ mixture, and the detailed mechanism of 9 species and 21 reactions \cite{mechanism2004Li} is used for the combustion chemistry. The detonation is ignited by a 2 mm hot spot at 2000 K and 90 atm, while the rest of the domain is initialised at 300 K and 1 atm.
By recording the position of the wave front, the detonation propagation speed can be calculated for different equivalence ratios. Figure~\ref{fig:detonation1} shows that our results are close to the theoretical values calculated by SDToolbox \cite{SDToolBox}, confirming the accuracy of the solver.
\begin{figure}[!h]
\centering
\includegraphics[scale=0.5]{figures/fig_10_detonation1.png}
\caption{Detonation propagation speed versus equivalence ratio in a homogeneous hydrogen/air mixture.}
\label{fig:detonation1}
\end{figure}
\section{Computational Performance}\label{sec:Perf}
As introduced above, machine learning, adaptive mesh refinement and dynamic load balancing are adopted in {\em DeepFlame} to speed up reacting flow simulations. In this section, the efficiency improvements given by these approaches are quantitatively evaluated. First, we present the speed-up obtained in solving the zero-dimensional ignition case when adopting the DNN as the chemistry solver. Then the effects of AMR and DLB in accelerating the simulation of a one-dimensional detonation are evaluated. Finally, a strong scaling test is conducted for the reactive TGV case to demonstrate the parallel computing efficiency.
\subsection{Acceleration from Machine Learning}
In this subsection, the performance of the DNN chemistry solver is demonstrated by comparison with the CVODE solver. The computational efficiency of the chemistry ODE integrator is evaluated via the execution time for the constant-pressure 0D ignition case (§~\ref{subs:0D}). Figure~\ref{fig:GPUeff} shows the computational time required for different ODE integrators and computing architectures (CPU or GPU). The CPU used here is an Intel i7-12700KF and the GPU is an NVIDIA RTX3070TI. Note that besides the GPU, chips specifically made for AI computations (e.g. Sugon Deep Computing Unit) were also tested but are not reported here for conciseness. In general, speed-up behaviour similar to that on the GPU was observed – several orders of magnitude acceleration was achieved.
\begin{figure}[!h]
\centering
\includegraphics[width=0.5\textwidth]{figures/GPUtime.png}
\caption{Computational time cost in simulating the zero-dimensional ignition problems when adopting different integrators and processing units.}
\label{fig:GPUeff}
\end{figure}
It can be seen in Fig.~\ref{fig:GPUeff} that, when using the DNN as the integrator, the computational cost for the given test case is nearly halved on the CPU (the same hardware as used for CVODE) and reduced by a further two orders of magnitude on the GPU. Additionally, the DNN advantage grows with the grid number, whereas the CVODE cost scales linearly with it. This speed-up given by the DNN is essentially attributed to the parallelisation of operators and data, i.e. the chemistry is solved via matrix additions and multiplications in DNN inference instead of logical iterations. Furthermore, considering the relatively small size of the hydrogen/air mechanism, even greater acceleration from the DNN can be expected for more complex hydrocarbon fuels.
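The operator/data parallelisation argument can be illustrated with a toy example: a single-layer network with random placeholder weights (not a trained chemistry DNN) evaluated cell-by-cell versus in one batched matrix product. Both give identical results, but the batched form maps onto a single dense kernel.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy one-layer "network": y = relu(x @ W + b). Sizes and weights are
# hypothetical placeholders (e.g. T, p + 9 species in, 10 source terms out).
n_cells, n_in, n_out = 10000, 11, 10
W = rng.standard_normal((n_in, n_out))
b = rng.standard_normal(n_out)
X = rng.standard_normal((n_cells, n_in))  # one row of inputs per grid cell

def infer_looped(X):
    # One inference per cell, mimicking a cell-by-cell ODE-style loop
    return np.stack([np.maximum(x @ W + b, 0.0) for x in X])

def infer_batched(X):
    # Whole field in a single matrix multiply (operator/data parallelism)
    return np.maximum(X @ W + b, 0.0)

print(np.allclose(infer_looped(X), infer_batched(X)))  # True
```

On a GPU the batched form additionally exposes all cells to the many-core hardware at once, which is the source of the orders-of-magnitude gain reported above.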
\subsection{Acceleration from DLB and AMR}
As mentioned earlier, AMR can effectively reduce the simulation cost for high-speed reacting flows. When combined with DLB, the acceleration effect becomes even more significant. As AMR refines the cells in chemically intense areas (near flame, shock or detonation fronts), these spatially clustered cells are likely to be allocated on a small number of processors. This leads to a computational load imbalance, which also varies temporally as the refined region moves within the domain. Therefore, the dynamic balancing technique is almost essential for large-scale multi-dimensional AMR simulations.
\begin{figure}[!h]
\centering
\includegraphics[scale=1.0]{figures/fig_11_AMR.png}
\caption{Comparison of computational time cost of the different calculation parts in simulating the one-dimensional detonation problem with AMR and DLB disabled, with AMR only, and with both AMR and DLB enabled.}
\label{fig:AMR_DLB}
\end{figure}
To demonstrate this effect, Fig.~\ref{fig:AMR_DLB} compares the computational times required for a typical 1D detonation simulation on 5 processors in three configurations: the original run, AMR only, and AMR+DLB. The total time is broken down into three parts: chemistry integration, solving the species transport equations, and solving the flow equations. It can be observed that chemical source term evaluation accounts for over 95\% of the computation and that AMR alone speeds up the simulation by a factor of about 6.5. When both AMR and DLB are activated, the computation is 10 times faster than the original.
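The interplay between the chemistry fraction and the achievable overall acceleration can be estimated with Amdahl's law; the per-part speed-up factors below are hypothetical, chosen only to illustrate how a $\sim$95\% chemistry share governs the overall gain.

```python
def overall_speedup(fractions, speedups):
    """Amdahl-style combined speed-up when each cost fraction f_i of the
    original run time is accelerated by its own factor s_i."""
    assert abs(sum(fractions) - 1.0) < 1e-9
    return 1.0 / sum(f / s for f, s in zip(fractions, speedups))

# Chemistry takes ~95% of the original cost (from the breakdown above).
# The per-part factors are hypothetical, picked to show how the chemistry
# share dominates the overall ~10x gain reported for AMR+DLB.
print(round(overall_speedup([0.95, 0.05], [13.0, 2.0]), 1))  # ~10
```

The same formula shows why accelerating only the transport part would barely move the total: with 95\% of the cost in chemistry, the overall speed-up is capped near 1/0.95 unless chemistry itself is accelerated.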
\subsection{Parallel efficiency}
The parallel scalability of {\em DeepFlame} is estimated via a strong scaling test, performed by varying the number of processors used to solve the same problem. The reactive TGV case introduced in §~\ref{subs:TGV} is adopted as the scaling benchmark, using up to 8192 CPU cores. The machine used in this section is the UK National Supercomputing facility ARCHER2. It is composed of 5860 compute nodes, each with dual AMD EPYC\texttrademark{} 7742 64-core 2.25 GHz processors.
\begin{table}[h]\footnotesize
\begin{center}
\caption{Comparison of code set-up and performance in simulating reactive TGV. The meanings of the acronyms are given in the text.}
\begin{tabular}{c c c c c c c c c}
\hline
\makecell[c]{Code\\$[-]$} & \makecell[c]{$N$p\\$[-]$} & \makecell[c]{$N$cores\\$[-]$} & \makecell[c]{$N$it\\$[-]$} & \makecell[c]{$T_{\rm sim}$\\$[\rm{ms_{sim}}]$} & \makecell[c]{$\overline{\Delta t}$\\$[\frac{\rm{\mu s_{sim}}}{\rm{iter}}]$} & \makecell[c]{TCPU\\$[\rm{h_{CPU}}]$} & \makecell[c]{RCT\\$[\frac{\rm{\mu s_{CPU}}}{\rm{iter\cdot point}}]$} & \makecell[c]{RTTS\\$[\frac{\rm{\mu s_{CPU}}}{\rm{\mu s_{sim}\cdot point}}]$}\\
\hline
YALES2 & 256$^3$ & 384 & 1484 & 2.5 & 1.685 & 923 & 594 & 354 \\
DINO & 256$^3$ & 1024 & 13417 & 2.5 & 0.186 & 2414 & 38 & 660\\
Nek5000 & 256$^3$ & 576 & 3627 & 2.5 & 0.689 & 486 & 201 & 283 \\
DeepFlame & 256$^3$ & 512 & 2500 & 2.5 & 1 & 1617 & 134 & 134 \\
\hline
\end{tabular}
\label{table:perf}
\end{center}
\end{table}
\begin{figure}[!h]
\centering
\includegraphics[width=0.7\textwidth]{figures/TGV-scalling.png}
\caption{Strong scaling for the reactive Taylor-Green vortex setup. Numbers within the top two panels indicate the parallel efficiency.}
\label{fig:TGVscal}
\end{figure}
First, a typical performance comparison between {\em DeepFlame} and the reference codes \cite{RN4} is given in Table~\ref{table:perf}. Here $N$p is the number of grid points, $N$cores is the number of CPU cores, and $N$it is the number of iteration steps. The subscripts $_{\rm {sim}}$ and $_{\rm {CPU}}$ denote that the variables and units are related to the simulated physical time or the CPU time, respectively. The variable TCPU represents the total CPU time for the simulation; it is computed by multiplying $N$cores by the elapsed wall-clock time. The Reduced Computational Time (RCT) is calculated with the expression RCT = TCPU/($N$it $\times$ $N$p). It is introduced to evaluate the CPU time needed to advance one time-step for a single degree of freedom on one processor. The CPU time needed to obtain the solution for a single degree of freedom after a specified simulation time $T_{\rm {sim}}$ is represented by the Reduced Time To Solution (RTTS), calculated as RTTS = TCPU/($T_{\rm{sim}}$ $\times$ $N$p). From the results listed in Table~\ref{table:perf}, {\em DeepFlame} shows the best computational efficiency by the criterion of RTTS. This indicates that, for a given duration of flow time, the least CPU time is required for this specific case. This satisfying performance can be mainly attributed to the efficiency of the CVODE and DNN chemistry solvers (allowing for a large time-step size) and also to the optimised load balancing algorithm. In addition, even for the computation of a single time-step, {\em DeepFlame} gives the second best RCT performance, only slower than the fully structured code DINO with its much faster explicit time integration. However, it must be noted that all other codes listed in Table~\ref{table:perf} are built with high-order time and space discretisation, whereas {\em DeepFlame} is limited to second order.
Nevertheless, this comparison is not intended to claim superiority but to demonstrate the relative computational speed with respect to the state-of-the-art reacting flow codes.
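The two reduced metrics can be recomputed directly from the table entries; since the tabulated inputs are rounded, the recomputed values differ slightly from the quoted ones.

```python
def rct(tcpu_hours, n_it, n_points):
    """Reduced Computational Time: CPU-microseconds per iteration per grid point."""
    return tcpu_hours * 3600.0e6 / (n_it * n_points)

def rtts(tcpu_hours, t_sim_us, n_points):
    """Reduced Time To Solution: CPU-microseconds per simulated microsecond per point."""
    return tcpu_hours * 3600.0e6 / (t_sim_us * n_points)

# DeepFlame row of the table: 256^3 points, 2500 iterations, 2.5 ms simulated,
# 1617 CPU-hours (rounded table inputs, so the result is approximate).
n_p = 256**3
print(round(rct(1617, 2500, n_p)))     # 139 (table quotes 134; inputs are rounded)
print(round(rtts(1617, 2500.0, n_p)))  # 139; equals RCT here since dt = 1 us/iter
```

For DeepFlame the two metrics coincide because $N$it $\times$ $\overline{\Delta t}$ equals $T_{\rm sim}$ exactly (2500 iterations at 1 $\mu$s each).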
The strong scaling performance is shown in Fig.~\ref{fig:TGVscal}. The sum time ($\tau_{sum}$) represents the total elapsed time for simulating the reactive TGV from $t$ = 0.5 to 0.6 ms on the given number of CPU cores. The main components of $\tau_{sum}$, namely $\tau_{chem}$, $\tau_{Y}$ and $\tau_{flow}$, denote the wall-time cost of solving the chemical kinetics equations, the species equations and the flow equations, respectively. Moreover, both the conditions with and without chemistry dynamic load balancing are considered in this section to estimate the performance of the current load balancing method. As seen in Fig.~\ref{fig:TGVscal}, $\tau_{chem}$ accounts for more than 70$\%$ of the total time. With the aid of DLB, $\tau_{chem}$ first gains a speed-up of about 30$\%$ (for the same MPI rank) but then slows down as the number of CPU cores exceeds about 1000 (i.e. $<16000$ cells per core). This is mainly attributed to the communication overhead of DLB becoming relatively high at these core counts. The scaling of $\tau_{Y}$ and $\tau_{flow}$ is super-linear up to about 1000 CPU cores. However, note that the efficiency in solving the flow equations is clearly reduced when the number of CPUs exceeds 5000 ($<3500$ cells per core). This is possibly due to the poor performance of the {\em GAMG} solver used for the pressure Poisson equation. For this simulation setup, there is an optimum in terms of efficiency at about 16000 cells per core. Further reduction in the cell number per core (i.e. increasing the CPU number) may reduce the simulation time, but at the cost of a nonlinear reduction in efficiency.
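The parallel efficiency quoted in the figure follows the usual strong-scaling definition; the wall times below are hypothetical, chosen to show ideal scaling up to 1024 cores and degradation beyond.

```python
def parallel_efficiency(n_ref, t_ref, n, t):
    """Strong-scaling efficiency relative to a reference run:
    E = (t_ref * n_ref) / (t * n); E = 1 means ideal linear scaling."""
    return (t_ref * n_ref) / (t * n)

# Hypothetical wall times: ideal halving per doubling up to 1024 cores,
# then a sub-linear step from 1024 to 2048 cores.
runs = {128: 800.0, 256: 400.0, 512: 200.0, 1024: 100.0, 2048: 60.0}
for n, t in runs.items():
    print(n, round(parallel_efficiency(128, 800.0, n, t), 2))
# efficiency stays at 1.0 up to 1024 cores, then drops to 0.83 at 2048
```

Values above 1.0 would correspond to the super-linear regime observed for $\tau_{Y}$ and $\tau_{flow}$, typically caused by improved cache residency as the per-core working set shrinks.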
\section{Conclusion and Future Work}\label{sec:Conclusion}
This study introduced a machine learning empowered CFD platform, {\em DeepFlame}, for simulating reacting flows. The implementation of this platform is achieved by interfacing the CFD toolbox OpenFOAM, the chemical kinetics software Cantera and the deep learning framework Torch. Differing from all previous OpenFOAM-Cantera interface codes, we adopt the simplest nonreactive pure-fluid base model, and the thermo-physical and chemical properties are entirely handled by the {\em DeepFlame} mixture and chemistry objects and functions interfaced with Cantera. Three solvers are developed and thoroughly validated to cover cases with various flow speeds and physical dimensions. Methods including dynamic load balancing (DLB), adaptive mesh refinement (AMR), and a deep neural network (DNN) accelerated chemistry solver are available to improve the simulation efficiency.
We also presented a detailed computational performance analysis of {\em DeepFlame}. The validity of the solving algorithms implemented in {\em DeepFlame} is confirmed via a broad range of canonical cases, including 0D ignition, 1D planar flame and reactive shock wave, 2D jet flames, and the 3D reactive Taylor-Green vortex. In addition, we reported the computational efficiency of the code on different computing chips to showcase the acceleration provided by machine learning. Specifically, the DNN chemistry solver shows a 30–50\% speed-up on a CPU due to data and operation vectorisation, and a speed-up of two orders of magnitude on a GPU or DCU due to many-core parallelisation. The simulation of 1D detonation is 10 times faster when adopting AMR and DLB. Finally, the parallel scalability on many CPUs is also reported, showing good scaling behaviour compared to state-of-the-art reactive flow codes on up to 8192 CPU cores.
Despite the advantages introduced in this study, there are still many aspects of {\em DeepFlame} to be improved beyond the current version:
\begin{itemize}
\item The solvers developed in the current platform focus only on gaseous fuels. Lagrangian/Eulerian solvers for multi-phase spray reacting flows will be implemented.
\item The current DNN chemistry solver can only be used on a single GPU/DCU card. In the future we will provide support for parallel DNN computation via multiple cards on single and multiple nodes.
\item The parallel scalability needs to be further improved to achieve linear scaling up to more than ten thousand CPU cores.
\end{itemize}
These features are under active development in our public repository and will be released in future {\em DeepFlame} versions.
\section*{Acknowledgements}
The work of Z.X.C. is supported by the National Science Foundation (Grant No. 52276096) and the Fundamental Research Funds for the Central Universities of China.
Part of the numerical simulations was performed on the High Performance Computing Platform of CAPT of Peking University.
\nocite{*}
\bibliographystyle{elsarticle-num}
\section{Introduction}
Compactifications of type II string theory on Calabi-Yau (CY) threefolds represent a fruitful laboratory to
generate, test, and exemplify various ideas on string dynamics, dualities and non-perturbative physics.
They are very rich from both physical and mathematical points of view and have numerous relations with other
subjects such as BPS black holes, supersymmetric gauge theories, integrable systems, etc.
Moreover, in contrast to compactifications with fewer preserved supersymmetries,
CY vacua seem to be amenable to an exact description. Although such a description, which is supposed to provide
the complete {\it non-perturbative} low energy effective action for the compactification on arbitrary CY $\CY$,
has not been achieved yet, this goal appears now within our reach.
Let us summarise what is known about this problem up to now (see \cite{Alexandrov:2011va,Alexandrov:2013yva} for reviews).
At two derivative level, the low energy action is completely determined by the metrics on the moduli spaces of vector multiplets (VM)
and hypermultiplets (HM), $\cM_V$ and $\cM_H$ \cite{deWit:1984px}. The former is a special \kahler manifold whose geometry is determined
by a holomorphic prepotential $F(X)$, a homogeneous function of degree 2, which is in principle known for arbitrary CY
in terms of its topological data \cite{Candelas:1990rm,Hosono:1993qy}:
triple intersection numbers $\kappa_{abc}$, Euler characteristic $\chi_\CY$,
and genus zero Gopakumar--Vafa invariants $\gfinv_{q_a}$.
On the other hand, the latter is a \qk (QK) manifold \cite{Bagger:1983tt},
receiving stringy quantum corrections, whose exact geometry is not known yet and represents the main challenge.
The quantum $g_s$-corrections to the classical metric on $\cM_H$ can be split into perturbative and non-perturbative ones,
and the latter come either from (Euclidean) D-branes wrapping non-trivial cycles of the CY,
or from NS5-branes wrapped on the whole compactification manifold \cite{Becker:1995kb}.
Remarkably, only the very last set of corrections, namely those given by NS5-brane instantons, remain unknown so far.
More precisely, the perturbative corrections are restricted to one-loop and have been incorporated in
\cite{Antoniadis:1997eg,Gunther:1998sc,Antoniadis:2003sw,Robles-Llana:2006ez,Alexandrov:2007ec}.
All D-instantons have been described in \cite{Alexandrov:2008gh,Alexandrov:2009zh} within the type IIA formulation.
Finally, in \cite{Alexandrov:2010ca} an attempt to include NS5-instantons in the one-instanton approximation has been made
using the mirror type IIB framework.
As a result, what remains is to find NS5-brane corrections beyond the one-instanton approximation.
This is precisely the goal of the present paper. In fact, we have already announced our main results in a short note \cite{Alexandrov:2014mfa}.
Here we provide their detailed derivation and extend them by including the effects of D1-D(-1)-instantons.
More precisely, we concentrate on the type IIB formulation where all quantum corrections to the metric on $\cM_H$
can be arranged into sectors invariant under the action of the S-duality group $SL(2,\IZ)$.
This can be represented by the following table:
\be
\mbox{
\begin{tabular}{l|c|c|c|c|c|c|cc|}
\cline{2-2} \cline{4-4}
$\alpha'$-corrections: \hspace{0.1cm} & perturbative & \hspace{0.1cm} & w.s. instantons & \multicolumn{4}{c}{} \rule{0pt}{12pt}
\\
\cline{6-6} \cline{8-9}
$g_s$-corrections: & 1-loop \ \ D(-1) & & D1 & \hspace{0.1cm} & \,D3\, & \hspace{0.1cm} & \,D5 & NS5 \rule{0pt}{13pt}
\\
\cline{2-2} \cline{4-4} \cline{6-6} \cline{8-9}
\end{tabular}}
\label{quantcor}
\ee
and makes it possible to study each sector independently of the others. Moreover, one can use S-duality
to find all quantum corrections inside some sector if one already knows at least a part of them.
It is sufficient just to apply the method of images. For instance,
this was precisely the idea used in \cite{RoblesLlana:2006is}
to find D1 and D(-1)-instantons from the knowledge of $\alpha'$-corrections
encoded in the holomorphic prepotential $F(X)$.
Looking at the pattern \eqref{quantcor}, it is tempting to apply the same idea to the last sector to obtain NS5-instanton corrections
from D5-instantons, which follow from the results of \cite{Alexandrov:2008gh,Alexandrov:2009zh} and mirror symmetry.
This was realized in \cite{Alexandrov:2010ca}, but only in the one-instanton approximation due to
several complications arising on the way.
The first difficulty is related to the action of S-duality.
As we will review below, instanton corrections to the HM moduli space have the simplest incarnation
in the twistor space $\cZ$ of $\cM_H$, and are encoded in a set of holomorphic functions, known as {\it transition functions}.
Therefore, to derive NS5-instantons from D5 ones, it is important to know how S-duality acts on the transition functions.
This was understood only recently in \cite{Alexandrov:2013mha} and, unfortunately, the resulting action
turned out to be highly non-linear which makes its application very non-trivial.
The second complication is that the sectors in \eqref{quantcor} are not actually completely independent. As we will see,
when translating the results on D-instanton corrections from type IIA to the manifestly S-duality invariant framework,
adapted to the symmetries of the type IIB formulation, the first three sectors affect the last one.
Thus, this effect should be taken into account in the complete picture including all quantum corrections.
In this paper we show how both these difficulties can be overcome.
A way to avoid the first one was in fact already proposed in \cite{Alexandrov:2014mfa}, and is based on an alternative
parametrization of the twistor space which uses, instead of the usual transition functions, certain {\it contact Hamiltonians}.
This allows one to linearize the action of S-duality so that the derivation of fivebrane instantons becomes straightforward.
Here we also include into this description the effects of D1-D(-1)-instantons coming from the first two sectors in \eqref{quantcor}.
Thus, we provide the twistorial formulation of the non-perturbative geometry of $\cM_H$ where only D3-instantons
are missing. Although they are known on the mirror type IIA side, where they appear as a subset of D2-brane instantons,
their manifestly S-duality invariant formulation, which is what we really need here, has not been
found yet.\footnote{The work in this direction was initiated in \cite{Alexandrov:2012au} where it was shown that
the type IIA construction of these instanton corrections is consistent with S-duality at least in the one-instanton approximation.
However, the corresponding twistorial formulation adapted to this symmetry is still lacking.}
This is related to the fact, distinguishing them from other instanton corrections and clearly seen from \eqref{quantcor},
that they are {\it selfdual} under $SL(2,\IZ)$. Thus, a better understanding of these instanton corrections is required before
including them into our picture.
Another important result, which we present here, is an improved understanding of the discrete isometry group of $\cM_H$.
Already in \cite{Alexandrov:2010ca} it was observed that the fivebrane corrections obtained by applying S-duality as described above
appear to be incompatible with other discrete symmetries such as large gauge transformations of the RR-fields and
monodromy transformations of the complexified \kahler moduli. We trace this incompatibility back to
the failure of the generators of these discrete isometries to form a group representation.
At the same time, we show how this situation can be cured by adjusting the action of monodromies on the RR-scalars
and demonstrate that our results on fivebrane instantons are consistent with the resulting duality group.
The organisation of the paper is as follows. In the next section we present the basic information about
the HM moduli space concentrating on the type IIB formulation. Here we also discuss the isometries of $\cM_H$, the subtleties
related to their action at quantum level, and provide the corrected form of the discrete symmetry transformations.
In section \ref{sec-twistor} we review the twistorial construction of QK manifolds, improved parametrization introduced
in \cite{Alexandrov:2014mfa}, and constraints imposed by the presence of the $SL(2,\IZ)$ isometry group.
In section \ref{sec-Dinst} this twistor framework is used to describe D-instanton corrections, after which it is shown
how D1-D(-1)-instantons can be reformulated in a manifestly S-duality invariant way and how this reformulation affects
other D-instanton contributions. Then in section \ref{sec-fivebrane} we derive the fivebrane instantons at all orders
in the instanton expansion. Section \ref{sec-concl} presents our conclusions.
In addition, in appendix \ref{ap-Udual} we provide details on the isometry group of $\cM_H$.
In appendix \ref{ap-contactbr} we give a proof of a crucial transformation property of our twistorial construction.
Appendix \ref{ap_Sdualconstrap} verifies that the non-linear S-duality constraint of \cite{Alexandrov:2013mha}
is indeed satisfied by the transition functions of fivebrane instantons which we compute in this paper.
In appendix \ref{ap-MHinv} we check that the twistorial construction of fivebrane instantons is compatible
with all isometries expected to survive quantum corrections.
And finally, in the last appendix we provide explicit expressions for derivatives of fivebrane transition functions.
They are to be used in the integral equations determining the metric on $\cM_H$ which includes all quantum corrections except D3-instantons.
\section{Hypermultiplet moduli space in CY compactifications}
\label{sec-HM}
\subsection{Classical moduli space}
In this section we review the main facts about the hypermultiplet moduli space $\cM_H$ of CY string vacua,
with emphasis on its symmetries at classical and quantum level.
This moduli space appears in two versions corresponding to the type IIA and type IIB formulations of string theory,
but mirror symmetry, or more precisely its non-perturbative extension \cite{Ferrara:1995yx},
requires them to coincide if the compactification manifolds in the two formulations are chosen to be mirror to each other.
Here we will mostly work with the type IIB version since it is better suited to the application of S-duality.
In type IIB string theory compactified on a CY threefold $\CYm$, $\cM_H$ is a QK manifold of real
dimension $4 (h_{1,1} (\CYm) + 1)$. It comes with a set of natural coordinates which correspond
to scalar fields in four dimensions and comprise
\begin{itemize}
\item the ten-dimensional dilaton equal to the inverse string coupling $\tau_2=1/g_s$;
\item the K\"ahler moduli $b^a + \I t^a\equiv \int_{\gamma^a} \mathcal{J}$
($a=1,\dots, h_{1,1}$)
where $\mathcal{J} \equiv B+\I\, J$ is the complexified \kahler form on $\CYm$
and $\gamma^a$ is a basis of $H_{2}(\CYm,\IZ)$;
\item the Ramond-Ramond (RR) scalars $c^0,c^a,\cla,\cl0$, corresponding to (suitable combinations of) periods of the RR 0-form,
2-form, 4-form and 6-form potentials;
\item the NS axion $\psi$, dual to the 2-form $B$ in four dimensions.
\end{itemize}
It is useful also to combine the string coupling and the RR scalar $\tau_1 = c^0$ into an
axio-dilaton field $\tau = \tau_1 + \I \tau_2$.
At tree level the metric on $\cM_H$ is given by the so-called {\it local c-map} \cite{Ferrara:1989ik}.
We do not need its explicit expression in this paper. What is important for us is that it is completely determined
by the holomorphic prepotential on the \kahler structure moduli space $\SK$ of $\CYm$.
The prepotential is known to have the following form \cite{Candelas:1990rm,Hosono:1993qy}
\be
\label{lve}
F(X)=-\kappa_{abc}\frac{X^a X^b X^c}{6 X^0}
+ \chi_{\CYm}\frac{\zeta(3)(X^0)^2}{2(2\pi\I)^3}
-\frac{(X^0)^2}{(2\pi\I)^3}{\sum_{q_a\gamma^a\in H_2^+(\CYm)}} \gfinv_{q_a}\,
\Li_3\[ \expe{q_a\, \frac{X^a}{X^0}}\],
\ee
where $X^\Lambda$ ($\Lambda=0,\dots,h_{1,1}$) are homogeneous coordinates related to the \kahler moduli by
$X^a/X^0 = b^a + \I t^a$ and we introduced the convenient notation $\expe{x}=e^{2\pi\I x}$.
In \eqref{lve} the first term describes the classical part of the prepotential, whereas the second and third terms
correspond to a perturbative $\alpha'$-correction and contributions of worldsheet instantons, respectively.
The instantons are labeled by effective homology classes $q_a\gamma^a\in H_2^+(\CYm)$, which means that $q_a\ge 0$ for all $a$,
not all of them vanishing simultaneously, and enter via the trilogarithm function $\Li_3(x)=\sum_{n=1}^\infty x^n/n^3$.
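Expanding the trilogarithm according to its definition makes the multi-covering structure of the worldsheet instanton series explicit,
\be
\Li_3\[\expe{q_a z^a}\] = \sum_{n=1}^{\infty} \frac{1}{n^3}\, \expe{n q_a z^a}
= \sum_{n=1}^{\infty} \frac{1}{n^3}\, e^{2\pi\I n q_a b^a}\, e^{-2\pi n q_a t^a},
\ee
so that the $n$-th term describes the $n$-fold cover of the instanton of charge $q_a$, exponentially suppressed by its area $n\, q_a t^a$ and weighted by the universal factor $n^{-3}$.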
It is useful also to introduce another set of coordinates which appears to be more convenient in the mirror type IIA formulation.
The relation between the two coordinate sets is known as the {\it classical mirror map}
\cite{Bohm:1999uk}
\be
\label{symptobd}
\begin{split}
z^a & =b^a+\I t^a\, ,
\qquad\ \
\zeta^0=\tau_1\, ,
\qquad\
\zeta^a = - (c^a - \tau_1 b^a)\, ,
\\
\tzeta_a &= \cla+ \frac{1}{2}\, \kappa_{abc} \,b^b (c^c - \tau_1 b^c)\, ,
\qquad
\tzeta_0 = \cl0-\frac{1}{6}\, \kappa_{abc} \,b^a b^b (c^c-\tau_1 b^c)\, ,
\\
\sigma &= -2 (\psi+\frac12 \tau_1 \cl0) + \cla (c^a - \tau_1 b^a)
-\frac{1}{6}\,\kappa_{abc} \, b^a c^b (c^c - \tau_1 b^c)\, .
\end{split}
\ee
Using the type IIA coordinates, we can easily write down the continuous transformations leaving the tree level metric on $\cM_H$
invariant. These are the so-called Peccei-Quinn symmetries arising due to the fact that the RR-scalars and the NS-axion
originate from gauge fields.
They act by shifting the corresponding scalars and form the Heisenberg group
\be
\label{heis0}
\opTH_{\eta^\Lambda,\tleta_\Lambda,\kappa}\ :\quad
\bigl(\zeta^\Lambda,\tzeta_\Lambda,\sigma\bigr)\ \mapsto\
\bigl(\zeta^\Lambda + \eta^\Lambda ,\
\tzeta_\Lambda+ \tleta_\Lambda,\
\sigma + 2 \kappa- \tleta_\Lambda \zeta^\Lambda
+ \eta^\Lambda \tzeta_\Lambda \bigr).
\ee
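Note that, as can be checked directly from \eqref{heis0}, these transformations indeed close into the Heisenberg group law
\be
\opTH_{\eta_1,\tleta_1,\kappa_1}\circ\, \opTH_{\eta_2,\tleta_2,\kappa_2}
=\opTH_{\eta_1+\eta_2,\,\tleta_1+\tleta_2,\,\kappa_1+\kappa_2+\frac12\left(\eta_1^\Lambda\tleta_{2,\Lambda}-\tleta_{1,\Lambda}\eta_2^\Lambda\right)},
\ee
with the symplectic pairing of the two shift vectors appearing in the central element.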
Furthermore, in the large volume limit, where one can drop the last two terms in the prepotential \eqref{lve},
there are additional symmetries. One of them is another Peccei-Quinn symmetry shifting the scalars $b^a$
coming from the 2-form gauge field $B$. This shift, however, should be accompanied by certain transformations
of the RR-scalars so that the full transformation is given by
\be
\label{bjacr}
M_{\epsilon^a}\ :\quad \begin{array}{c}
\displaystyle{b^a\mapsto b^a+\epsilon^a\, ,
\qquad
\zeta^a\mapsto \zeta^a + \epsilon^a \zeta^0\, ,
\qquad
\tzeta_a\mapsto \tzeta_a -\kappa_{abc}\zeta^b \epsilon^c
-\frac12\,\kappa_{abc} \epsilon^b \epsilon^c \zeta^0\, ,}
\\
\displaystyle{\tzeta_0\mapsto \tzeta_0 -\tzeta_a \epsilon^a+\frac12\, \kappa_{abc}\zeta^a \epsilon^b \epsilon^c
+\frac16\,\kappa_{abc} \epsilon^a \epsilon^b \epsilon^c \zeta^0\, .}
\end{array}
\ee
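A short computation, using that $\zeta^0$ is inert under \eqref{bjacr}, shows that these transformations commute and compose additively,
\be
M_{\epsilon_1}\circ M_{\epsilon_2}=M_{\epsilon_1+\epsilon_2},
\ee
so that at the classical level the monodromies form an abelian group.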
And finally the classical metric in the large volume limit is invariant under transformations which
form the $SL(2,\IR)$ group and, in contrast to the previous ones, are most easily written in the type IIB field basis
\be\label{SL2Z}
SL(2,\IR)\ni\gl{}\ :\quad
\begin{array}{c}
\displaystyle{
\tau \mapsto \frac{a \tau +b}{c \tau + d} \, ,
\qquad
t^a \mapsto t^a |c\tau+d| \, ,
\qquad
\cla\mapsto \cla \, ,}
\\
\displaystyle{
\begin{pmatrix} c^a \\ b^a \end{pmatrix} \mapsto
\begin{pmatrix} a & b \\ c & d \end{pmatrix}
\begin{pmatrix} c^a \\ b^a \end{pmatrix}\, ,
\qquad
\begin{pmatrix} \cl0 \\ \psi \end{pmatrix} \mapsto
\begin{pmatrix} d & -c \\ -b & a \end{pmatrix}
\begin{pmatrix} \cl0 \\ \psi \end{pmatrix},}
\end{array}
\ee
with $ad-bc=1$. As we review below, all these continuous isometries are lifted by quantum corrections, but
at the same time each of them leaves an unbroken discrete subgroup.
\subsection{Quantum corrections}
Besides the $\alpha'$-corrections completely captured by the prepotential \eqref{lve}, the HM moduli space receives
$g_s$-corrections. At perturbative level, there is only a one-loop correction controlled by the Euler characteristic $\chi_{\CYm}$.
The resulting metric is a one-parameter deformation of the c-map metric whose explicit form can be found in \cite{Alexandrov:2007ec}.
The situation is more interesting at the non-perturbative level where one finds two types of instanton contributions.
The first type comes from D-branes wrapping non-trivial cycles of the CY compactification manifold and has
the following generic form
\be
\label{d2quali}
\delta \de s^2\vert_{\text{D-inst}} \sim \qfD{\gamma}\,\Omega(\gamma;z)\,
e^{ -2\pi|Z_\gamma|/g_s
- 2\pi\I (q_\Lambda \zeta^\Lambda-p^\Lambda\tzeta_\Lambda)} .
\ee
Here $\gamma=(p^\Lambda,q_\Lambda)$ is the D-brane charge, the function $Z_\gamma(z)$ is the central charge of the supersymmetry
subalgebra preserved by the instanton, which is given by ($z^0\equiv 1$)
\be
\label{defZ}
Z_\gamma(z) = q_\Lambda z^\Lambda- p^\Lambda F_\Lambda(z),
\ee
$\Omega(\gamma;z)$ are generalized Donaldson-Thomas invariants (BPS indices)
depending on the moduli $z^a$ in a piecewise constant way,
and finally $\qfD{\gamma}$ is the so-called quadratic refinement factor whose defining property is
\be
\qfD{\gamma}\qfD{\gamma'}=(-1)^{\langle\gamma,\gamma'\rangle}\qfD{\gamma+\gamma'},
\label{defqr}
\ee
where $\langle\gamma,\gamma'\rangle=q_\Lambda p'^\Lambda-q'_\Lambda p^\Lambda$ is the Dirac-Schwinger product.
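For integer charges, a convenient solution of \eqref{defqr} can be written in terms of arbitrary constant characteristics $(\theta^\Lambda,\phi_\Lambda)$,
\be
\qfD{\gamma}=\expe{-\tfrac12\, q_\Lambda p^\Lambda + q_\Lambda\theta^\Lambda - p^\Lambda\phi_\Lambda}.
\ee
Indeed, the terms linear in the charges cancel in the combination $\qfD{\gamma}\qfD{\gamma'}/\qfD{\gamma+\gamma'}$, which reduces to $\expe{\tfrac12\(q_\Lambda p'^\Lambda+q'_\Lambda p^\Lambda\)}=(-1)^{\langle\gamma,\gamma'\rangle}$, since $q_\Lambda p'^\Lambda+q'_\Lambda p^\Lambda$ and $q_\Lambda p'^\Lambda-q'_\Lambda p^\Lambda$ have the same parity.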
On the type IIB side, a mathematically rigorous way to think about D-instantons
is as objects in the derived category of coherent sheaves ${\rm D^{b}Coh}(\CYm)$ \cite{Sharpe:1999qz,Douglas:2000ah}.
Then the charge is given by the generalized Mukai vector
\be
\gamma= \ch (\mathscr{E}) \, \sqrt{\Td \CYm}
= p^0 + p^a \omega_a - q_a \omega^a + q_0\, \omega_{\CYm}\, ,
\label{chMu}
\ee
where $\mathscr{E}$ is a coherent sheaf, and $\{\omega_a\}$, $\{\omega^a\}$, $\omega_{\CYm}$ are
respectively a basis of 2-forms, 4-forms and the volume form of $\CYm$.
For non-vanishing $p^0$ the sheaf describes a bound state of D5, D3, D1 and D(-1)-branes
with charges given by the components of $\gamma=(p^0,p^a,q_a,q_0)$.
If $p^0=0$ but $p^a$ is non-vanishing, the coherent sheaf is supported on a divisor and describes a D3-instanton, etc.
What is important is that the expression \eqref{chMu} leads to {\it non-integer} D1-D(-1)-charges $q_\Lambda$ which
satisfy the following quantization conditions
\be
\label{fractionalshiftsD5}
q_a \in \IZ - \frac{p^0}{24}\, c_{2,a} - \frac12\, \kappa_{abc} p^b p^c ,
\qquad
q_0\in \IZ-\frac{1}{24}\, p^a c_{2,a} ,
\ee
where $c_{2,a}$ are the components of the second Chern class of $\CYm$ in the basis $\omega^a$.
In other words, the charge vector is an element of $H^{\text{even}}(\CYm,\mathbb{Q})$.
On the other hand, on the type IIA side all D-brane charges are integer.
To reconcile these two facts with mirror symmetry, one should note that the holomorphic prepotential, which one obtains by applying
this symmetry, is not exactly the same as in \eqref{lve}, but differs from it by a quadratic contribution
\cite{Candelas:1990rm,Hosono:1993qy}
\be
F_{\rm m.s.}(X)=F(X)+ \frac12 \,A_{\Lambda\Sigma} X^\Lambda X^\Sigma.
\label{fullprep}
\ee
The additional term is characterized by a real symmetric matrix $A_{\Lambda\Sigma}$.
Although, as can be easily checked, it does not affect the \kahler potential of the special \kahler manifold $\SK$,
it is this term that ensures the consistency of charge quantization with mirror symmetry
and, as will be shown below, plays an important role in the correct implementation of discrete symmetries of $\cM_H$ at full quantum level.
The idea is that the type IIA and type IIB charge vectors are related by a symplectic transformation generated by $A_{\Lambda\Sigma}$.
It affects both charges and fields,\footnote{In \cite{Alexandrov:2010ca,Alexandrov:2011va} the charges $q_\Lambda$
and the RR-fields $\tzeta_\Lambda$ were denoted by $q'_\Lambda$ and $\tzeta'_\Lambda$, respectively, whereas the unprimed notations
were reserved for the charges and fields in the type IIA frame.
However, since in this paper we work mostly in the type IIB basis, we omit the prime.
\label{foot-prime}}
\be
\tzeta_\Lambda\ \mapsto\ \tzeta_\Lambda+A_{\Lambda\Sigma}\zeta^\Lambda,
\qquad
q_\Lambda\ \mapsto\ q_\Lambda+A_{\Lambda\Sigma} p^\Sigma,
\label{symA}
\ee
and also restores the quadratic term in the prepotential \eqref{fullprep}.
It turns out that the properties satisfied by this matrix (see \eqref{propA})
are sufficient to ensure the integrality of the transformed charges \cite{Alexandrov:2010ca}.
Note that the central charge \eqref{defZ} and the whole D-instanton correction \eqref{d2quali} are symplectically invariant
and are not affected by the transformation \eqref{symA}.
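The integrality of the transformed charges is easy to test numerically. The sketch below uses toy one-modulus data: $\kappa_{111}=5$ and $c_{2,1}=50$ are the quintic values, while $A_{11}=5/2$ is merely an illustrative half-integer entry consistent with \eqref{propA}, not a claim about the actual quintic matrix.

```python
from fractions import Fraction as F

# Toy one-modulus data: kappa_111 = 5 and c_2,1 = 50 as for the quintic;
# A_11 = 5/2 is an illustrative half-integer choice consistent with (propA);
# A_00 = 0 and A_01 = c_2,1/24 as in (valA).
kappa, c2 = 5, 50
A01, A11 = F(c2, 24), F(5, 2)

for p0 in range(-3, 4):
    for p1 in range(-3, 4):
        # fractional charges allowed by the quantization conditions
        # (the free integer parts are set to zero for simplicity)
        q1 = -p0*F(c2, 24) - F(kappa*p1*p1, 2)
        q0 = -p1*F(c2, 24)
        # symplectic shift q_Lambda -> q_Lambda + A_{Lambda Sigma} p^Sigma
        q1_IIA = q1 + A01*p0 + A11*p1
        q0_IIA = q0 + A01*p1          # A_00 = 0
        assert q1_IIA.denominator == 1 and q0_IIA.denominator == 1
```

The fractional parts cancel for every sampled charge vector, as guaranteed by the conditions \eqref{propA}.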
The second type of non-perturbative corrections is provided by NS5-brane instantons wrapping the whole CY.
Their general form is
\be
\delta \de s^2\vert_{\text{NS5-inst}} \sim
e^{-2 \pi |k| \cV /g_s^2+\I\pi k \sigma},
\label{couplNS5}
\ee
where $\cV$ is the Calabi-Yau volume. In the small string coupling limit they are exponentially suppressed compared
to the D-instantons \eqref{d2quali}. However, at finite coupling they cannot be neglected and represent an important
non-perturbative contribution.
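The hierarchy between the two types of corrections can be illustrated with a toy numerical comparison; the overall coefficients in the exponents are set to $2\pi$ for both, which is an arbitrary normalization chosen purely for illustration.

```python
import math

# Compare the suppression factors exp(-c/g_s) (D-instantons) and
# exp(-c'/g_s^2) (NS5-instantons); set c = c' = 2*pi for illustration.
for g in (0.5, 0.2, 0.1):
    d_inst = math.exp(-2*math.pi/g)
    ns5_inst = math.exp(-2*math.pi/g**2)
    # NS5 contributions are parametrically weaker as g_s -> 0 ...
    assert ns5_inst < d_inst
    # ... but remain non-zero at any finite coupling
    assert ns5_inst > 0
```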
\subsection{The duality group}
\label{subsec-Udual}
\subsubsection{Discrete isometries}
\label{subsubsec-isom}
An immediate consequence of the presence of the instanton corrections \eqref{d2quali} and \eqref{couplNS5} is that they
break the Heisenberg group of continuous transformations \eqref{heis0}. Furthermore, already the $\alpha'$-corrections
to the holomorphic prepotential break the other two continuous symmetries, \eqref{bjacr} and \eqref{SL2Z}.
Thus, the non-perturbative metric on the HM moduli space does not have {\it any} continuous isometries.
Nevertheless, each of the broken continuous groups leaves an unbroken discrete subgroup.
Before we discuss these discrete isometries, we need to provide more detailed information on the two objects
appearing in the discussion of D-instanton corrections: the matrix $A_{\Lambda\Sigma}$ and the quadratic refinement $\qfD{\gamma}$.
The matrix $A_{\Lambda\Sigma}$ is known to satisfy the following conditions \cite{Hosono:1994av,Alexandrov:2010ca}
\be
A_{00}\in \IZ ,
\qquad
A_{0a} = \frac{c_{2,a}}{24}+ \IZ ,
\qquad
\frac12\, \kappa_{abc} \eps^b \eps^c-A_{ab}\eps^b\in \IZ \quad \forall\, \eps^a\in\IZ.
\label{propA}
\ee
Without loss of generality, we can drop the possible integer contributions to $A_{0\Lambda}$ since they can always be removed by
an integer valued symplectic transformation. Thus, we set
\be
A_{00}=0,
\qquad
A_{0a} = \frac{c_{2,a}}{24}.
\label{valA}
\ee
An explicit expression for the components $A_{ab}$, restricted by \eqref{propA} to be half-integer,
has been found in the one modulus case in \cite{Huang:2006hq} and reads
\be
A_{11}=\hf\int_{\CYm}\iota_\star c_1(D)\wedge J,
\ee
where $D$ is the divisor dual to $J$. Although this formula begs for a generalization, it is not clear to us
how to ensure that the resulting matrix is symmetric. For most purposes, the properties listed in \eqref{propA}
turn out to be sufficient, provided they are supplemented by another property\footnote{The property \eqref{c2aprop}
follows from the fact that the expression on the l.h.s. is the holomorphic Euler characteristic of the divisor $\gamma_a$
Poincar\'e dual to the 2-form $\omega_a$. Besides, the third condition in \eqref{propA} implies another
restriction on the intersection numbers, $\frac12\(\kappa_{aab}-\kappa_{abb}\)\in\IZ$, which in turn can be derived
from an index theorem \cite{1966InMat...1..355W}.
We thank R. Valandro for clarifying the origin of these relations.}
satisfied by the second Chern class coefficients \cite{Maldacena:1997de,Denef:2007vg}
\be
\frac16\, \kappa_{abc}\eps^a\eps^b\eps^c+\frac{1}{12}\, c_{2,a}\eps^a\in \IZ \quad \forall\, \eps^a\in\IZ.
\label{c2aprop}
\ee
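Both divisibility conditions can be verified numerically on sample data. In the sketch below, $\kappa_{111}=5$ and $c_{2,1}=50$ are the quintic values, whereas $A_{11}=5/2$ is only a sample half-integer entry, not the actual value of the matrix.

```python
from fractions import Fraction as F

# One-modulus illustration: kappa_111 = 5, c_2,1 = 50 (quintic values);
# A_11 = 5/2 is just a sample half-integer entry.
kappa, c2, A11 = 5, 50, F(5, 2)

for eps in range(-20, 21):
    cond_A = F(kappa*eps*eps, 2) - A11*eps        # third condition in (propA)
    cond_c2 = F(kappa*eps**3, 6) + F(c2*eps, 12)  # condition (c2aprop)
    assert cond_A.denominator == 1
    assert cond_c2.denominator == 1
```

Both combinations come out integer for all sampled $\eps$, consistently with the quoted properties.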
The quadratic refinement factor $\qfD{\gamma}$ typically appears in chiral boson partition functions
\cite{AlvarezGaume:1986mi,AlvarezGaume:1987vm,Witten:1996hc,Freed:2000ta}. Here it is required by consistency with wall-crossing:
it ensures the smoothness of the metric across lines of marginal stability where the BPS indices $\Omega(\gamma)$ may jump \cite{Alexandrov:2011ac}.
A general solution to its defining relation \eqref{defqr} is provided by \cite{Belov:2006jd}
\be
\qfD{\gamma} = \expe{-\frac12\,p^\Lambda \(q_\Lambda+A_{\Lambda\Sigma}p^\Sigma\)
+ \(q_\Lambda+A_{\Lambda\Sigma}p^\Sigma\) \theta_{\text{D}}^\Lambda
- p^\Lambda \phi_{{\text{D}},\Lambda}},
\label{quadraticrefinementpq}
\ee
where $\theta_{\text{D}}^\Lambda,\phi_{{\text{D}},\Lambda}$ are the so-called characteristics or generalized spin structure on $\CYm$,
defined modulo integers, and the terms proportional to the matrix $A_{\Lambda\Sigma}$ arise due to the change of the basis \eqref{symA}
and the non-integrality of charge $\gamma$.
Although one might think that the characteristics are just (half-integer) numbers, the symplectic invariance of
the D-instantons requires them to transform under symplectic rotations in order to keep $\qfD{\gamma}$ invariant,
\be
\label{sympchar}
Sp(2h_{1,1}+2,\IZ)\ni\rho={\scriptsize \begin{pmatrix} \cD & \cC \\ \cB & \cA \end{pmatrix}}\ :\quad
\begin{pmatrix} \theta_{\text{D}}^\Lambda \\ \phi_{{\text{D}},\Lambda} \end{pmatrix}
\ \mapsto\
\rho
\cdot \[
\begin{pmatrix} \theta_{\text{D}}^\Lambda \\ \phi_{{\text{D}},\Lambda} \end{pmatrix}
-\frac12
\begin{pmatrix} (\cA^T\cC)_d \\ (\cD^T\cB)_d \end{pmatrix}
\],
\ee
where $(A)_d$ denotes the diagonal of a matrix $A$.
Now we are ready to present the discrete actions expected to form the duality group of $\cM_H$.
Roughly, the idea is that one should take the parameters in the transformations \eqref{heis0}, \eqref{bjacr} and \eqref{SL2Z} to be integer.
Then they would correspond to large gauge transformations of the RR-gauge potentials and the B-field,
to monodromies around the large volume point, and to S-duality group of type IIB string theory, which are all expected to be symmetries
of the low-energy theory at the full quantum level.
However, this naive idea requires some adjustments:
\begin{itemize}
\item
First, the correct form of the large gauge transformations is given by \cite{Bao:2010cc,Alexandrov:2010np}
\be
\opTH_{\eta^\Lambda,\tleta_\Lambda,\kappa}\ :\
\begin{array}{c}
\displaystyle{\zeta^\Lambda\ \mapsto\ \zeta^\Lambda+\eta^\Lambda,
\qquad
\tzeta_\Lambda\ \mapsto\ \tzeta_\Lambda+\tleta_\Lambda-A_{\Lambda\Sigma}\eta^\Sigma}
\\
\displaystyle{\sigma\ \mapsto\
\sigma + 2 \kappa- \tleta_\Lambda \(\zeta^\Lambda -2 \theta^\Lambda\)
+ \eta^\Lambda \(\tzeta_\Lambda +A_{\Lambda\Sigma}\zeta^\Sigma-2 \phi_\Lambda\)
- \eta^\Lambda \tleta_\Lambda. \rule{0pt}{17pt}}
\end{array}
\label{heisq}
\ee
Here $(\eta^\Lambda,\tleta_\Lambda,\kappa)\in \IZ^{2h_{1,1}+3}$, the $A$-dependent terms appear again as a consequence of \eqref{symA},
and $\theta^\Lambda,\phi_\Lambda$ are the characteristics, similar to the ones appearing in \eqref{quadraticrefinementpq},
which characterize the fibration of the line bundle of the NS-axion over the torus of RR-scalars.
\item
Second, in \cite{Alexandrov:2010np} it was shown that the monodromies, given by the transformation \eqref{bjacr} with $\eps^a\in \IZ$,
should be accompanied by a shift of the NS-axion
\be
\sigma\ \mapsto\ \sigma + 2 \kappa (M_{\eps^a}),
\label{shiftM}
\ee
where $\kappa(M)$ is a character of the symplectic group. Since the monodromy subgroup is abelian, it can be represented as
$\kappa(M_{\eps^a}) = \kappa_a \epsilon^a $. The additional shift \eqref{shiftM} originates in the one-loop $g_s$-correction
which modifies the topology of the NS-axion line bundle over $\SK$.
\item
Finally, the S-duality group is represented by the transformations \eqref{SL2Z}
with $\gl{}\in SL(2,\IZ)$, which should be supplemented by a shift of the RR-scalar $\cla$ \cite{Alexandrov:2010ca}
\be
\cla\ \mapsto\ \cla \, - c_{2,a}\, \varepsilon(\gl{})\, ,
\label{cla}
\ee
where $\varepsilon(\gl{})$ is the logarithm of the multiplier system of the Dedekind eta function defined in appendix \ref{subap-Dedekind}.
This shift is closely related to the quantization conditions \eqref{fractionalshiftsD5} and is required to ensure that
the Heisenberg transformation with parameter $\eta^0$ coincides with the $SL(2,\IZ)$ transformation $\tau \mapsto \tau + \eta^0$.
\end{itemize}
\subsubsection{Corrected transformations and group law}
\label{subsec-correcttr}
It turns out that, even after taking into account all the non-trivial adjustments described above,
the resulting set of discrete transformations is not satisfactory. As we show in appendix \ref{ap-Udual}, the generators
of these transformations do not really form a group (see \eqref{twistheisen})!\footnote{Of course, one could just {\it generate} a group
by taking products of all generators. But this would lead to a half-integer periodicity of RR-scalars
(in other words, one would have to allow $\tleta_\Lambda\in \hf\IZ$ in \eqref{heisq}),
which does not have any physical justification.}
The origin of this problem can be traced back to the characteristics appearing in the Heisenberg transformations \eqref{heisq}.
To see this, let us note that the monodromies \eqref{bjacr}, once we pass to the type IIA frame using \eqref{symA}, are
represented by the integer valued symplectic matrix
\be
\rho(M_{\eps^a})
=\(\begin{array}{cccc}
1\ & 0 & 0 & 0
\\
\epsilon^a & {\delta^a}_b & 0 & 0
\\
\ L_0(\epsilon) & L_b(\epsilon)+2A_{bc}\epsilon^c & 1 & \ -\epsilon^b\
\\
-L_a(\epsilon)\ & -\kappa_{abc}\epsilon^c & 0 & {\delta_a}^b
\end{array}\) ,
\label{monmat}
\ee
where we introduced two functions
\be
L_a(\eps)\equiv \frac12\, \kappa_{abc} \eps^b \eps^c-A_{ab}\eps^b,
\qquad
L_0(\eps)\equiv\frac16\, \kappa_{abc}\eps^a\eps^b\eps^c+\frac{1}{12}\, c_{2,a}\eps^a,
\ee
which are integer valued due to \eqref{propA} and \eqref{c2aprop}.
Since the characteristics $\theta^\Lambda,\phi_\Lambda$ should transform under symplectic rotations
as the D-instanton characteristics, they undergo a monodromy transformation which can be obtained
by plugging \eqref{monmat} into \eqref{sympchar}. Setting $\theta^\Lambda=0$, one eliminates some of the terms, but
even in this case one gets a non-trivial result
\be
\begin{split}
\phi_a \ \mapsto\ &\,\phi_a + \frac12\, \kappa_{aac} \epsilon^c,
\\
\phi_0 \ \mapsto\ &\,\phi_0 - \epsilon^a\phi_a
- \hf \(L_0(\epsilon) - \epsilon^a L_a(\epsilon) + \kappa_{aac} \epsilon^a \epsilon^c\).
\end{split}
\label{transchar}
\ee
On the other hand, this is in contradiction with the fact that the monodromies can be obtained by commuting
$\eta^a$-Heisenberg shift with S-duality (see \eqref{maincom}) and that the characteristics are not expected
to transform under other isometries.
To resolve these inconsistencies, we note that the D-instanton characteristics can be absorbed into a redefinition
of the RR-fields and the NS-axion
\be
\begin{split}
\zeta^\Lambda-\thetaD^\Lambda\qquad\qquad \mapsto\ &\, \zeta^\Lambda,
\\
\tzeta_\Lambda-\phi_{{\text{D}},\Lambda}+A_{\Lambda\Sigma}\thetaD^\Sigma\qquad \mapsto\ &\, \tzeta_\Lambda,
\\
\sigma +\phi_{{\text{D}},\Lambda}\zeta^\Lambda-\thetaD^\Lambda\(\tzeta_\Lambda+A_{\Lambda\Sigma}\zeta^\Sigma\)\ \mapsto\ &\, \sigma.
\end{split}
\ee
This redefinition requires modifying the properties of these fields under symplectic transformations to take into account
the inhomogeneous terms in the corresponding transformations of characteristics \eqref{sympchar}.
In particular, this changes the monodromy transformations of $\tzeta_\Lambda$ and $\sigma$.
Instead of \eqref{bjacr} and \eqref{shiftM}, we can now take
\be
\label{bjacr-mod}
M_{\epsilon^a}\ :\quad \begin{array}{l}
\displaystyle{b^a\ \mapsto\ b^a+\epsilon^a,
\qquad
\zeta^a\ \mapsto\ \zeta^a + \epsilon^a \zeta^0,}
\\
\displaystyle{\tzeta_a\ \mapsto\ \tzeta_a -\kappa_{abc}\zeta^b \epsilon^c
-\frac12\,\kappa_{abc} \epsilon^b \epsilon^c \zeta^0+A_{ab}\eps^b ,}
\\
\displaystyle{\tzeta_0\ \mapsto\ \tzeta_0 -\tzeta_a \epsilon^a+\frac12\, \kappa_{abc}\zeta^a \epsilon^b \epsilon^c
+\frac16\,\kappa_{abc} \epsilon^a \epsilon^b \epsilon^c \zeta^0
-\hf\, A_{ab}\epsilon^a\epsilon^b+\frac{c_{2,a}}{8}\, \epsilon^a,}
\\
\displaystyle{\,\sigma\ \mapsto\ \sigma -A_{ab}\eps^a\zeta^b-\hf\( A_{ab}\eps^a\eps^b+\frac14\, c_{2,a}\eps^a\)\zeta^0 +2\kappa_a \epsilon^a.}
\end{array}
\ee
The new input, which improves the representation of the duality group, is the requirement that
the {\it new} redefined fields be related to the type IIB coordinates, which transform under S-duality according to
\eqref{SL2Z} and \eqref{cla}, by the {\it standard} classical mirror map \eqref{symptobd}.
Thus, we change the monodromy transformations of some fields ($\cla$ and $\cl0$) and leave the other transformations
unmodified. All characteristics can now be set to zero.\footnote{More precisely, one can still have non-vanishing characteristics
$\theta^\Lambda,\phi_\Lambda$ which transform now homogeneously under monodromies. However, one can check that
the group law fixes them to zero. Non-vanishing values can appear only if one relaxes \eqref{valA}. For instance, one has $\phi_0=\hf\, A_{00}$.}
\begin{table}
\vspace{0.cm}\hspace{-1.cm}
\begin{tabular}{|c|c|c|c|c|c|}
\hline $\vphantom{\frac{A^{A^A}}{A_{A_A}}}$
& $b^a$ & $ c^a$ & $\cla$ &$\cl0$ & $\psi$
\\
\hline $\vphantom{\frac{A^{A^A}}{A_{A_A}}}$
$S$ & $c^a$ & $-b^a$ & $\cla+\frac{c_{2,a}}{8}$
& $-\psi$ & $\cl0$
\\
\hline $\vphantom{\frac{A^{A^A}}{A_{A_A}}}$
$T$ & $b^a$ & $c^a+b^a$ & $\cla-\frac{c_{2,a}}{24}$
& $\cl0$ & $\psi-\cl0$
\\
\hline $\vphantom{\frac{A^{A^A}}{A_{A_A}}}$
$\opT^{(1)}_{\epsilon^a,0}$ & $b^a+\epsilon^a$ & $c^a$ & $\cla+\frac12\, \kappa_{abc} \epsilon^b c^c+ A_{ab} \epsilon^b$
& $\begin{array}{c}\cl0-\epsilon^a \cla \\-\frac16\, \kappa_{abc} \epsilon^a (b^b+2\epsilon^b)c^c \\
- \frac12 A_{ab}\epsilon^a \epsilon^b +\frac{c_{2,a}}{8}\, \epsilon^a\end{array}$
& $\psi+\frac16\, \kappa_{abc}\epsilon^a c^b c^c- \kappa_a \epsilon^a$
\\
\hline $\vphantom{\frac{A^{A^A}}{A_{A_A}}}$
$\opT^{(1)}_{0,\eta^a}$& $b^a$ & $c^a+\eta^a$ & $\cla-\frac12\, \kappa_{abc} \eta^b b^c+A_{ab} \eta^b$
& $\cl0+\frac16\, \kappa_{abc} \eta^a b^b b^c + \frac{c_{2,a}}{24}\, \eta^a $
& $\begin{array}{c}\psi+\eta^a \cla+\frac12\, A_{ab} \eta^a \eta^b\\
-\frac16\, \kappa_{abc}\eta^a b^b (c^c+2\eta^c)\end{array}$
\\
\hline $\vphantom{\frac{A^{A^A}}{A_{A_A}}}$
$\opT^{(2)}_{\tilde\eta_a}$
& $b^a$ & $c^a$ & $\cla+\tilde\eta_a$
& $\cl0$ & $ \psi$
\\
\hline $\vphantom{\frac{A^{A^A}}{A_{A_A}}}$
$\opT^{(3)}_{\tilde\eta_0,\kappa}$ & $b^a$ & $c^a$ & $\cla$
& $\cl0+\tilde\eta_0$ & $\psi+\kappa$
\\
\hline
\end{tabular}
\caption{The action of generators of the discrete symmetry transformations in the type IIB coordinate basis.}
\label{tab-U}
\end{table}
\vspace{0.cm}
We summarize the resulting action of all generators of the duality group in the type IIB coordinate basis in Table \ref{tab-U}.
This table should be supplemented by the standard $SL(2,\IZ)$ action \eqref{SL2Z} on the variables $\tau$ and $t^a$
which are not affected by other transformations.
It uses the following notations for generators:
$\opT^{(0)}_{\eta^0}$, $\opT^{(1)}_{0,-\eta^a}$, $\opT^{(2)}_{\tilde\eta_a}$ and $\opT^{(3)}_{\tilde\eta_0,-\kappa}$
correspond to the generators of the Heisenberg subgroup $\opTH_{\eta^\Lambda,\tleta_\Lambda,\kappa}$,
$\opT^{(1)}_{\epsilon^a,0}=M_{\eps^a}$ is the monodromy generator, and $SL(2,\IZ)$ is generated by
\be
S=\(\begin{array}{cc}
0 & -1
\\
1 & 0
\end{array}\),
\qquad
T=\(\begin{array}{cc}
1 & 1
\\
0 & 1
\end{array}\).
\label{genST}
\ee
However, the action of the Heisenberg shift $\opT^{(0)}_{1}$ is identical to $T$ and therefore it is not presented in the table.
These notations indicate that the generators $\opT^{(n)}$ with $n>0$ form a graded nilpotent subgroup where each $n$th level
forms a representation of $SL(2,\IZ)$. This fact has a direct relation to the split of non-perturbative corrections
into S-duality invariant sectors presented in \eqref{quantcor}.
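The $SL(2,\IZ)$ structure underlying the table can be checked at the matrix level; the following snippet verifies the standard presentation relations $S^2=(ST)^3=-\mathbb{1}$ satisfied by the generators \eqref{genST}.

```python
import numpy as np

S = np.array([[0, -1], [1, 0]])
T = np.array([[1, 1], [0, 1]])
I = np.eye(2, dtype=int)

# standard SL(2,Z) presentation: S^2 = (ST)^3 = -1
assert (S @ S == -I).all()
ST = S @ T
assert (ST @ ST @ ST == -I).all()
# hence S generates a Z_4 subgroup, while T has infinite order
assert (np.linalg.matrix_power(S, 4) == I).all()
```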
In appendix \ref{ap-Udual} we demonstrate that the transformations given in the above table
satisfy the group law provided one fixes the character of the monodromy group as
\be
\kappa_a=-\frac{c_{2,a}}{24}\, .
\label{indentkappa}
\ee
One of the most important group relations is given by
\be
S^{-1}\, \opT^{(1)}_{0,\eta^a}\, S=\opT^{(1)}_{\eta^a,0}.
\label{maincom}
\ee
This relation ensures that the fivebrane instantons generated via S-duality are compatible with the other isometries.
And indeed, in appendix \ref{ap-MHinv} we will prove that the transformations found in this section, unlike
the previous ones, are consistent with our results for fivebrane instanton corrections.
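The relation \eqref{maincom} can also be verified symbolically on the field transformations of Table \ref{tab-U}. The sketch below uses toy one-modulus data ($\kappa_{111}=5$, $c_{2,1}=50$, and an illustrative $A_{11}=5/2$), the character \eqref{indentkappa}, realizes the composition $S^{-1}\,\opT^{(1)}_{0,\eta}\,S$ as $x\mapsto S(\opT^{(1)}_{0,\eta}(S^{-1}(x)))$, and suppresses the $SL(2,\IZ)$ action on $\tau,t^a$.

```python
import sympy as sp

b, c, ct, ct0, psi, eta = sp.symbols('b c ct ct0 psi eta')
kappa, c2, A = 5, sp.Integer(50), sp.Rational(5, 2)  # toy one-modulus data
kap1 = -c2/24                                        # character (indentkappa)

def S(x):
    # S-generator row of Table tab-U: fields (b, c, ct, ct0, psi)
    b, c, ct, ct0, psi = x
    return (c, -b, ct + c2/8, -psi, ct0)

def Sinv(x):
    b, c, ct, ct0, psi = x
    return (-c, b, ct - c2/8, psi, -ct0)

def T1_eps(x, eps):
    # monodromy generator T^{(1)}_{eps,0} from Table tab-U
    b, c, ct, ct0, psi = x
    return (b + eps, c,
            ct + sp.Rational(1, 2)*kappa*eps*c + A*eps,
            ct0 - eps*ct - sp.Rational(1, 6)*kappa*eps*(b + 2*eps)*c
                - sp.Rational(1, 2)*A*eps**2 + c2*eps/8,
            psi + sp.Rational(1, 6)*kappa*eps*c**2 - kap1*eps)

def T1_eta(x, eta):
    # Heisenberg generator T^{(1)}_{0,eta} from Table tab-U
    b, c, ct, ct0, psi = x
    return (b, c + eta,
            ct - sp.Rational(1, 2)*kappa*eta*b + A*eta,
            ct0 + sp.Rational(1, 6)*kappa*eta*b**2 + c2*eta/24,
            psi + eta*ct + sp.Rational(1, 2)*A*eta**2
                - sp.Rational(1, 6)*kappa*eta*b*(c + 2*eta))

x = (b, c, ct, ct0, psi)
lhs = S(T1_eta(Sinv(x), eta))   # realizes S^{-1} T^{(1)}_{0,eta} S
rhs = T1_eps(x, eta)
assert all(sp.simplify(l - r) == 0 for l, r in zip(lhs, rhs))
```

The two sides agree identically in $\eta$ and the fields, and the check only closes with the value of $\kappa_a$ given in \eqref{indentkappa}.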
\section{QK manifolds in the twistor approach}
\label{sec-twistor}
\subsection{Twistorial construction of QK manifolds}
To incorporate instanton corrections to the geometry of the HM moduli space consistently with its QK property,
it is instrumental to use the twistorial construction of such manifolds \cite{MR664330,MR1327157,Alexandrov:2008nk}.
As we review below, it allows one to encode any QK metric in a set of holomorphic data on the twistor space $\cZ$,
which is constructed as a canonical $\CP$ bundle over the original manifold $\cM$.
Whereas $\cM$ carries a triplet of non-integrable almost complex structures, $\cZ$ is a K\"ahler manifold.
Furthermore, it has a {\it complex contact structure}
defined globally by the kernel of the following $(1,0)$ form
\bea
D t = \text{d} t + p_+ - \I p_3 t + p_- t^2 ,
\eea
where $t$ is the fiber coordinate on $\IC\IP^1$ and $(p_{\pm},p_3)$
are the $SU(2)$ part of Levi-Civita connection on $\cM$.
It is more convenient, however, to use a local description of this structure, in which case it can be represented by
a holomorphic one-form $\cX^{[i]}$ having the same kernel as $Dt$.
Here the upper index shows that this one-form is defined only in a patch $\cU_i$ of
an atlas covering the twistor space, $\cZ=\cup\cU_i$.
The contact form $\cX^{[i]}$ allows one to define a set of local Darboux coordinates such that
\be
\cX^{[i]} = \text{d} \ai{i} + \xii{i}^\Lambda \text{d} \txii{i}_\Lambda .
\ee
Then the contact structure, and the full geometry of $\cM$, is completely determined
by the {\it contact transformations} relating the Darboux coordinate systems
on the overlaps of two patches $\cU_i\cap\cU_j$ and preserving the contact one-form up to a non-vanishing holomorphic factor.
One way to parametrize such transformations is to use holomorphic functions $\Hij{ij}(\xii{i},\txii{j},\ai{j})$
which depend on $\xi^\Lambda$ in patch $\cU_i$ and $\txi_\Lambda,\alpha$ in patch $\cU_j$.
Then the gluing conditions between Darboux coordinates read as follows \cite{Alexandrov:2009zh}
\be
\begin{split}
\xii{j}^\Lambda = &\, \xii{i}^\Lambda -\p_{\txii{j}_\Lambda }\Hij{ij}
+\xii{j}^\Lambda \, \p_{\ai{j} }\Hij{ij} ,
\\
\txii{j}_\Lambda =&\, \txii{i}_\Lambda
+ \p_{\xii{i}^\Lambda } \Hij{ij} ,
\\
\ai{j} = &\, \ai{i}
+ \Hij{ij}- \xii{i}^\Lambda \p_{\xii{i}^\Lambda}\Hij{ij} ,
\end{split}
\label{QKgluing}
\ee
and result in the following transformation of the contact one-form
\be
\label{glue2}
\CX\ui{j} = \(1-\p_{\ai{j} }\Hij{ij}\)^{-1} \CX\ui{i}.
\ee
Supplementing \eqref{QKgluing} by appropriate reality and regularity conditions,
these discrete equations can be rewritten as a system of integral equations which relate the Darboux coordinates
to the integrals along contours on $\CP$ of the discontinuities from \eqref{QKgluing} multiplied by a certain $t$-dependent kernel.
Their solution provides the Darboux coordinates as functions of the fiber coordinate $t$ and coordinates
on the base $\cM$ of the twistor fibration.
Then a straightforward but tedious procedure leads to the QK metric on $\cM$ \cite{Alexandrov:2008nk}.
Thus, the QK geometry turns out to be encoded in a set of holomorphic functions $\Hij{ij}$, which we call {\it transition functions},
and the associated set of contours on $\CP$. Typically, the contours separate the two patches whose Darboux coordinates are related by the contact
transformation generated by $\Hij{ij}$.
It is important to note that in this construction both closed and open contours may appear, as is the case, for instance,
in the twistorial description of the HM moduli space.
\subsection{Contact bracket}
\label{subsec-contact}
The twistorial construction presented above relies on the parametrization of contact transformations in terms of transition functions $\Hij{ij}$.
Although such parametrization is very explicit, the main obstacle in dealing with it comes from the fact that
the arguments of $\Hij{ij}$ belong to different patches. As a result, even simple-looking gluing conditions may be generated by
complicated transition functions. This issue becomes particularly problematic when one tries to describe the action
of some symmetries on the twistor data.
Typically such an action is most naturally formulated in terms of Darboux coordinates in one patch,
and it can become highly non-linear when written as a symmetry transformation of $\Hij{ij}$.
Below we will see several examples of this situation.
This complication can be avoided if one uses an alternative parametrization which we proposed in \cite{Alexandrov:2014mfa}.
It is based on the so-called {\it contact bracket} which is an extension of the Poisson bracket construction to the domain of contact geometry.
The contact bracket maps two local sections $\muh_1\in\cO(2m)$ and $\muh_2\in\cO(2n)$ to a local
section of the $\cO(2(m+n-1))$ line bundle, given in terms of Darboux coordinates by \cite{Alexandrov:2008gh}
\be
\label{poisson}
\begin{split}
\{ \muh_1, \muh_2 \}_{m,n}= &\,
\pa_{\xi^\Lambda} \muh_1 \pa_{\txi_\Lambda} \muh_2 +
\(m \muh_1 -\xi^\Lambda \pa_{\xi^\Lambda} \muh_1\)\pa_\alpha\muh_2
\\
&\, - \pa_{\xi^\Lambda} \muh_2 \pa_{\txi_\Lambda} \muh_1
-\(n \muh_2-\xi^\Lambda \pa_{\xi^\Lambda} \muh_2\) \pa_\alpha \muh_1 .
\end{split}
\ee
It is easy to check that this bracket satisfies the standard Jacobi identity, skew-symmetry and Leibniz rule provided one keeps track
of the geometric nature of all objects. For instance, the Leibniz rule for $\mu_1,\mu_2$ defined as above and $\mu_3\in \cO(2k)$ reads as
\be
\{\mu_1 \mu_2 , \mu_3 \}_{m+n,k}
= \mu_1 \{\mu_2,\mu_3\}_{n,k} + \mu_2 \{\mu_1,\mu_3 \}_{m,k} .
\ee
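The Leibniz rule can be verified symbolically; the following sketch implements the one-modulus version of \eqref{poisson} in sympy and checks the rule on sample sections (the choice of $\mu_i$ and of the weights is arbitrary).

```python
import sympy as sp

xi, txi, alpha = sp.symbols('xi txi alpha')

def cbr(h1, h2, m, n):
    # one-modulus contact bracket {h1,h2}_{m,n} of eq. (poisson)
    return (sp.diff(h1, xi)*sp.diff(h2, txi)
            + (m*h1 - xi*sp.diff(h1, xi))*sp.diff(h2, alpha)
            - sp.diff(h2, xi)*sp.diff(h1, txi)
            - (n*h2 - xi*sp.diff(h2, xi))*sp.diff(h1, alpha))

# sample sections and weights (arbitrary for the check)
m, n, k = 1, 0, 2
mu1, mu2, mu3 = xi**2*txi, alpha + txi**3, xi*alpha + txi

# Leibniz rule: {mu1 mu2, mu3}_{m+n,k} = mu1 {mu2,mu3}_{n,k} + mu2 {mu1,mu3}_{m,k}
lhs = cbr(mu1*mu2, mu3, m + n, k)
rhs = mu1*cbr(mu2, mu3, n, k) + mu2*cbr(mu1, mu3, m, k)
assert sp.simplify(lhs - rhs) == 0
```

The identity in fact holds term by term for arbitrary functions, as one sees by expanding the derivatives of the product $\mu_1\mu_2$.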
We mostly need the specialization of \eqref{poisson} to the case $(m,n)=(1,0)$ which provides the action of
a vector field $\Xf{\mu_1}$ with the (generalized) moment map $\mu_1$ on a local complex function $\mu_2$ \cite{MR872143}.
Setting $\mu_1=\hH$ and $\mu_2$ to be one of the Darboux coordinates, one explicitly finds\footnote{If it is not indicated explicitly,
in the following the bracket $\{\, \cdot\, ,\, \cdot \}$ will always mean the contact bracket between $\cO(2)$
and $\cO(0)$ sections, i.e. of type (1,0).}
\be
\begin{split}
\{\hH,\xi^\Lambda\}=&\, -\p_{\txi_\Lambda} \hH+\xi^\Lambda\p_\alpha \hH,
\qquad
\{\hH,\txi_\Lambda\}=\p_{\xi^\Lambda} \hH,
\\
&\qquad
\{\hH,\alpha\}=\hH-\xi^\Lambda\p_{\xi^\Lambda} \hH.
\end{split}
\label{contbr}
\ee
Note that in the case where this bracket is evaluated on sections (of different bundles) represented by the same function,
despite the skew-symmetry property, the result is non-vanishing and is given by
\be
\{h,h\} =h\p_\alpha h.
\ee
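Both \eqref{contbr} and the relation $\{h,h\}=h\p_\alpha h$ follow directly from \eqref{poisson} with $(m,n)=(1,0)$ and hold for an arbitrary function; a one-modulus sympy sketch:

```python
import sympy as sp

xi, txi, alpha = sp.symbols('xi txi alpha')
h = sp.Function('h')(xi, txi, alpha)   # arbitrary contact Hamiltonian

def cbr10(h1, h2):
    # one-modulus contact bracket of type (1,0): eq. (poisson) with m=1, n=0
    return (sp.diff(h1, xi)*sp.diff(h2, txi)
            + (h1 - xi*sp.diff(h1, xi))*sp.diff(h2, alpha)
            - sp.diff(h2, xi)*sp.diff(h1, txi)
            + xi*sp.diff(h2, xi)*sp.diff(h1, alpha))

# action on the Darboux coordinates, eq. (contbr)
assert sp.simplify(cbr10(h, xi) + sp.diff(h, txi) - xi*sp.diff(h, alpha)) == 0
assert sp.simplify(cbr10(h, txi) - sp.diff(h, xi)) == 0
assert sp.simplify(cbr10(h, alpha) - h + xi*sp.diff(h, xi)) == 0
# despite skew-symmetry, the (1,0) bracket of h with itself does not vanish
assert sp.simplify(cbr10(h, h) - h*sp.diff(h, alpha)) == 0
```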
Another important property, which plays a crucial role in our construction, is the behavior of \eqref{contbr}
under contact transformations. If $\vrh$ is such a transformation, mapping $\cX\mapsto \lambda\cX$, then
\be
\vrh\cdot \{h,f\}=\{ \lambda^{-1}\vrh\cdot h, \vrh\cdot f\}.
\label{trans-contact}
\ee
This property generalizes the familiar invariance of the Poisson bracket under canonical transformations
to the realm of contact geometry.
We provide its proof in appendix \ref{ap-contactbr} in a coordinate independent way.
The importance of the contact bracket becomes clear if one considers the action
of the vector field $\Xf{\hH}=\{\hH,\,\cdot\,\}$ on the contact one-form,
which is found to be
\be
\cL_{\Xf{\hH}} \cX
= (\p_\alpha \hH) \cX.
\label{tr-cb-X}
\ee
This means that it generates an infinitesimal contact transformation.
Furthermore, identifying $h$ with vanishingly small transition functions $\Hij{ij}$, one observes that
\eqref{contbr} and \eqref{tr-cb-X} represent a linearized version of \eqref{QKgluing} and \eqref{glue2}, respectively.
Therefore, {\it any} infinitesimal contact transformation can be generated in this way
and a finite transformation can be obtained by exponentiation.
Thus, we can rewrite the gluing conditions \eqref{QKgluing}
as
\be
\Xi^{[j]} = \exp\({\Xf{\hHij{ij}}}\) \cdot \Xi^{[i]} ,
\label{newglue}
\ee
where $\Xi^{[i]}$ denotes the set of Darboux coordinates in patch $\cU_i$.
This formula provides a parametrization of contact transformations in terms of functions
$\hHij{ij}$, which we call {\it contact Hamiltonians}\footnote{Note that we have slightly changed the terminology: in \cite{Alexandrov:2014mfa}
we called $\hHij{ij}$ ``improved transition functions".}
and which, in contrast to the ordinary transition functions, are considered as functions of coordinates in one patch only.
As we will see below, this parametrization crucially simplifies
various properties and results.
A relation between $\hHij{ij}$ and $\Hij{ij}$ can be found by comparing the gluing conditions \eqref{newglue} and \eqref{QKgluing}.
Recombining some of these equations, one can get an explicit formula for transition functions in terms of the action
generated by contact Hamiltonians on the Darboux coordinates
\be
\Hij{ij}=\(e^{\Xf{\hHij{ij}}}-1\)\ai{i}+\xii{i}^\Lambda\( e^{\Xf{\hHij{ij}}}-1\)\txii{i}_\Lambda.
\label{relHH}
\ee
Note however that this expression computes $\Hij{ij}$ as a function of Darboux coordinates in patch $\cU_i$, whereas we need
to transfer $\txi_\Lambda$ and $\alpha$ to patch $\cU_j$ to be able to compute the derivatives entering the gluing conditions \eqref{QKgluing}.
Therefore, it is indispensable to compute the full contact transformation and not only the combination \eqref{relHH}.
In the particular case of $\hHij{ij}$ independent of $\txi_\Lambda$ and $\alpha$, the two objects coincide, $\Hij{ij}=\hHij{ij}(\xi)$,
and this problem does not arise.
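This last statement is easy to check explicitly. For $\hHij{ij}=h(\xi)$ the flow of $\Xf{h}$ is exact: by \eqref{contbr}, $\xi$ is invariant while $\txi$ and $\alpha$ shift by $\xi$-dependent quantities, so \eqref{relHH} can be evaluated in closed form. A one-modulus sympy sketch with an arbitrary sample $h$:

```python
import sympy as sp

xi, txi, alpha = sp.symbols('xi txi alpha')
h = xi**3   # sample contact Hamiltonian depending on xi only

# flow of X_h from eq. (contbr): with d_txi h = d_alpha h = 0 the flow is
# exact; xi is invariant while txi and alpha shift
txi_j = txi + sp.diff(h, xi)
alpha_j = alpha + h - xi*sp.diff(h, xi)

# eq. (relHH): H^{[ij]} = (e^X - 1) alpha + xi (e^X - 1) txi
H = (alpha_j - alpha) + xi*(txi_j - txi)
assert sp.simplify(H - h) == 0   # H^{[ij]} = hH^{[ij]}(xi), as stated
```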
\subsection{Gauge transformations}
\label{subsec-gauge}
A fact which will play an important role below is that the contact structure does not fix the Darboux coordinates uniquely,
but leaves the freedom to perform {\it local} contact transformations.
Such a ``gauge" transformation affects not only the Darboux coordinates, but also the transition functions
and the corresponding contact Hamiltonians. Here we want to display this action.
As any contactomorphism, in each patch the gauge transformation can be parametrized by a holomorphic function
in one of the two ways we described above:
either as in \eqref{QKgluing} or via the contact bracket as in \eqref{newglue}.
Let us choose the second way and denote the corresponding holomorphic functions by $\gi{i}$.
A crucial difference from the contact Hamiltonians is that $\gi{i}$ must be regular in $\cU_i$ in order
to preserve the regularity of the Darboux coordinates.
The contact Hamiltonian in the gauge transformed picture, $\hHij{ij}_g$, satisfies
\be
\exp\(\Xf{\hHij{ij}_g}\)
= e^{-\Xf{\gi{i}} } \, \exp\Bigl(\Xf{\hHij{ij}} \Bigr)\,e^{\Xf{\gi{j}} } .
\label{gaugetr}
\ee
Although it can in principle be extracted using the Baker-Campbell-Hausdorff formula, the result is not explicit in general.
In fact, in this paper we will need only a particular case of \eqref{gaugetr} where the gauge transformation functions
are the same in all patches, $\gi{i}=g$.
Then applying
\be
\[\Xf{g},\Xf{h}\]=\Xf{\{g,h\}_{1,1}^{}},
\label{commXXh}
\ee
which is nothing else but the Jacobi identity for the contact bracket,
the contact Hamiltonian $\hHij{ij}_g$ can be computed explicitly and is given by
\be
\hHij{ij}_{g}=e^{-\{ g,\,\, \cdot\,\, \}_{1,1}^{}}\cdot \hHij{ij}.
\ee
Furthermore, if $g$ depends only on $\xi^\Lambda$, the effect of the gauge transformation is just the shift of the arguments
of the contact Hamiltonian
\be
\hHij{ij}_g=\hHij{ij}\(\xi^\Lambda\, ,\, \txi_\Lambda - \p_{\xi^\Lambda} g \, ,\, \alpha-g+\xi^\Lambda\p_{\xi^\Lambda} g \).
\label{gaugehHxi}
\ee
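The shift formula can be tested against the exponential series on a polynomial example; here $g=\xi^3$ and the sample Hamiltonian are arbitrary choices for which the series $e^{-\{g,\,\cdot\,\}_{1,1}}$ terminates. A one-modulus sympy sketch:

```python
import sympy as sp

xi, txi, alpha = sp.symbols('xi txi alpha')

def br11(g, h):
    # one-modulus contact bracket {g,h}_{1,1} from eq. (poisson)
    return (sp.diff(g, xi)*sp.diff(h, txi)
            + (g - xi*sp.diff(g, xi))*sp.diff(h, alpha)
            - sp.diff(h, xi)*sp.diff(g, txi)
            - (h - xi*sp.diff(h, xi))*sp.diff(g, alpha))

g = xi**3                        # gauge function depending only on xi
H = txi**2 + alpha*xi + txi      # sample contact Hamiltonian

# exponential series e^{-{g, . }_{1,1}}: terminates on polynomials in (txi, alpha)
Hg, term = sp.S(0), H
for k in range(6):
    Hg += term
    term = -br11(g, term)/(k + 1)

# shift of arguments, eq. (gaugehHxi)
shifted = H.subs([(txi, txi - sp.diff(g, xi)),
                  (alpha, alpha - g + xi*sp.diff(g, xi))], simultaneous=True)
assert sp.simplify(Hg - shifted) == 0
```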
The corresponding formula for the gauge transformed transition function $\Hij{ij}_g$ can be obtained either via \eqref{relHH}
or directly by applying the gauge transformation to the gluing conditions \eqref{QKgluing}. Both ways lead to the same result,
but since it is a bit complicated and not needed for our purposes, we refrain from giving it here.
\subsection{S-duality in twistor space}
\label{subsec-Sdual}
Finally, we discuss the constraints on the twistor data imposed by the presence of
the $SL(2,\IZ)$ isometry group on the QK manifold $\cM$.
We assume that there are coordinates in which the $SL(2,\IZ)$ action is given as in \eqref{SL2Z} and \eqref{cla}.
It is known that any isometry on $\cM$ can be lifted to a {\it holomorphic} action on the twistor space.
The lift of $SL(2,\IZ)$, without assuming that $\cM$ has any additional continuous isometries,
has been obtained in \cite{Alexandrov:2013mha}
and is provided by the following transformation of the fiber coordinate
\be
\label{SL2varpi}
\varpi\ \mapsto\ \gl{}\[\htm^{-c,a}\]\frac{t-\htp^{c,d}}{t-\htm^{c,d}}\, ,
\ee
where $\htpm^{c,d}$ are the two roots of the equation $c\xi^0 (t) +d=0$.
Then the resulting $SL(2,\IZ)$ action is isometric if the Darboux coordinates transform as follows \cite{Alexandrov:2008gh}
\be
\label{SL2Zxi}
\begin{split}
&\xi^0\mapsto \frac{a \xi^0 +b}{c \xi^0 + d} \, ,
\qquad
\xi^a \mapsto \frac{\xi^a}{c\xi^0+d} \, ,
\qquad
\txi_a \mapsto \txi_a + \frac{c}{2(c \xi^0+d)} \kappa_{abc} \xi^b \xi^c- c_{2,a}\, \varepsilon(\gl{})\, ,
\\
&\begin{pmatrix} \txi_0 \\ \alpha \end{pmatrix}\mapsto
\begin{pmatrix} d & -c \\ -b & a \end{pmatrix}
\begin{pmatrix} \txi_0 \\ \alpha \end{pmatrix}
+ \frac{1}{6}\, \kappa_{abc} \xi^a\xi^b\xi^c
\begin{pmatrix}
c^2/(c \xi^0+d)\\
-[ c^2 (a\xi^0 + b)+2 c] / (c \xi^0+d)^2
\end{pmatrix} .
\end{split}
\ee
Indeed, such a transformation ensures that the contact one-form is only rescaled by a holomorphic factor
\be
\cX\ \mapsto\ \frac{\cX}{c \xi^0 +d}.
\label{Strans-X}
\ee
Thus, it represents an example of a holomorphic contact transformation and,
since it preserves the contact structure, it also preserves the metric.
The question we are interested in is: which twistor data, namely the contours and transition functions,
ensure the transformations \eqref{SL2Zxi}?
In \cite{Alexandrov:2012bu,Alexandrov:2013mha} it was shown that \eqref{SL2Zxi} holds if the twistor data
can be split into two parts. The first part gives a ``classical" space which is in fact identical to
$\cM_H$ in the classical, large volume limit. It is defined by the two transition functions
\be
\Hij{+0} =\Fcl(\xii{+}),
\qquad
\Hij{-0}=\bFcl(\xii{-}),
\label{treeHij}
\ee
where $\Fcl(X)=-\kappa_{abc}\,\frac{X^aX^bX^c}{6X^0}$ is the classical part of the holomorphic prepotential \eqref{lve},
associated with the contours around the north ($t=0$) and south ($t=\infty$) poles of $\CP$, respectively.
The second part can be viewed as ``quantum corrections" to $\cM_H$ and consists of
the contours $C_{m,n;i}$ and the corresponding transition functions $\Hij{i}_{m,n}$,
labeled by a pair of integers $(m,n)$ and an additional index $i$. To preserve $SL(2,\IZ)$,
they should be such that $C_{m,n;i}$ are mapped into each other as
\be
C_{m,n;i} \ \mapsto\ C_{\hat m,\hat n;i},
\qquad
\( \hat m\atop \hat n\) =
\(
\begin{array}{cc}
d & -c
\\
-b & a
\end{array}
\)
\( m \atop n \) ,
\label{mappatches}
\ee
whereas $\Hij{i}_{m,n}$ satisfy a {\it non-linear} transformation property given
explicitly in appendix \ref{ap_Sdualconstrap} (see \eqref{Sdualconstr}).
However, the same constraint considerably simplifies once it is rewritten in terms of the contact Hamiltonians $\hHij{i}_{m,n}$,
consistently with the expectations of section \ref{subsec-contact}.
Indeed, since \eqref{SL2Zxi} is a contact transformation, one can apply the property \eqref{trans-contact}
of the contact bracket where $\lambda=(c\xi^0+d)^{-1}$ due to \eqref{Strans-X}.
As a result, it turns out that, to generate the Darboux coordinates satisfying \eqref{SL2Zxi},
the contact Hamiltonians should follow a simple {\it linear} transformation
\cite{Alexandrov:2014mfa}\footnote{If $C_{m,n;i}$ are closed contours, it is also possible that the result of the transformation
contains additional regular contributions; these can be absorbed,
via the gauge transformation described in section \ref{subsec-gauge},
into a redefinition of the Darboux coordinates which does not affect the contact structure.\label{foot-regular}}
\be
\hHij{i}_{m,n}\ \mapsto\ \frac{\hHij{i}_{m',n'}}{c\xi^0+d},
\qquad
\begin{pmatrix} m'\\ n' \end{pmatrix} =
\begin{pmatrix}
a & c
\\
b & d
\end{pmatrix}
\begin{pmatrix} m \\ n \end{pmatrix}.
\label{transhH}
\ee
This provides an explicit example of how the contact bracket formalism simplifies various aspects of the twistorial description
of QK manifolds.
Furthermore, since any isometry is realized on the twistor space as a contact transformation, the property \eqref{trans-contact}
ensures that the passage to the contact Hamiltonians linearizes any symmetry action.
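As an elementary consistency check of the two transformation rules, note that the matrix acting on the contour labels in \eqref{mappatches} should be the inverse of the matrix acting on the contact Hamiltonian labels in \eqref{transhH}, so that contours and Hamiltonians are relabeled compatibly. The following sketch (illustrative only, not part of the derivation) verifies this numerically for a few $SL(2,\IZ)$ elements:

```python
# Check: for g = [[a, b], [c, d]] in SL(2,Z), the matrix acting on the
# contour labels in eq. (mappatches), A = [[d, -c], [-b, a]], is the inverse
# of the matrix acting on the Hamiltonian labels in eq. (transhH),
# B = [[a, c], [b, d]].

def matmul(X, Y):
    return [[sum(X[i][k] * Y[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def check(a, b, c, d):
    assert a * d - b * c == 1, "not an SL(2,Z) element"
    A = [[d, -c], [-b, a]]   # contour labels, eq. (mappatches)
    B = [[a, c], [b, d]]     # Hamiltonian labels, eq. (transhH)
    return matmul(A, B) == [[1, 0], [0, 1]]

print(check(2, 1, 1, 1), check(1, 0, 5, 1))
```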
\section{D-instantons in twistor space}
\label{sec-Dinst}
\subsection{D-instantons in type IIA picture}
The D-instanton corrections to the HM metric can be incorporated using the twistor framework
presented in the previous section. They acquire their most elegant formulation
in the type IIA picture \cite{Alexandrov:2008gh,Alexandrov:2009zh},
where they are induced by D2-branes wrapping special Lagrangian submanifolds of $\CYm$.
First, we note that the twistor description of the tree level metric on $\cM_H$ can be obtained starting
from the transition functions \eqref{treeHij} where $\Fcl$ should be replaced by the full prepotential \eqref{lve}.
To incorporate contributions from the D-instantons, we introduce the contours on $\CP$ known as {\it BPS rays},
which extend from the north to the south pole along the direction determined by the central charge $Z_\gamma$ \eqref{defZ}
\be
\ell_\gamma=\{\varpi:\ Z_\gamma(z)/t\in\I\IR^-\}.
\label{BPSray}
\ee
With these contours we associate the contact Hamiltonians
\be
\hHij{\gamma}(\xi,\txi)=H_{\gamma}(\Xi_\gamma),
\qquad
H_{\gamma}(\Xi_{\gamma})
= \frac{\bar \Omega(\gamma)}{4 \pi^2}\, \sigma_D (\gamma) \expe {-\Xi_{\gamma}},
\label{Hgam}
\ee
where $\Xi_\gamma=q_\Lambda \xi^\Lambda-p^\Lambda\txi_\Lambda$,
the coefficients $\bar\Omega(\gamma)$ are the so-called rational Donaldson-Thomas invariants \cite{Manschot:2010xp,Manschot:2010qz}
\be
\bar\Omega(\gamma) = \sum_{d\vert\gamma} \frac{1}{d^2} \, \Omega(\gamma/d),
\label{rationalinvariants}
\ee
and $\sigma_D(\gamma)$ is the quadratic refinement \eqref{quadraticrefinementpq} with all characteristics set to zero
(see section \ref{subsec-correcttr}).
These contact Hamiltonians, via \eqref{newglue}, generate contact transformations between Darboux coordinates on the two sides of the BPS rays,
thereby changing the contact structure and deforming the metric so that the leading corrections take the expected form \eqref{d2quali}.
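The relation \eqref{rationalinvariants} is straightforward to evaluate. As a small illustration (with hypothetical values of the integer DT invariants, chosen only for the example), one may compute $\bar\Omega$ for multiples of a primitive charge $\gamma_0$:

```python
from fractions import Fraction

def rational_dt(n, omega):
    """Rational DT invariant for the charge n*gamma_0 (gamma_0 primitive),
    following eq. (rationalinvariants); omega maps k -> Omega(k*gamma_0)."""
    return sum(Fraction(omega.get(n // d, 0), d * d)
               for d in range(1, n + 1) if n % d == 0)

omega = {1: 1, 2: -2, 3: 3}   # hypothetical integer DT invariants
print(rational_dt(2, omega))  # Omega(2) + Omega(1)/4 = -2 + 1/4 = -7/4
```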
Note that the operators $e^{\Xf{\hHij{\gamma}}}$ generating the contact transformations induced by \eqref{Hgam}
are nothing else but a lift to the contact geometry
of the Kontsevich-Soibelman (KS) operators\footnote{More precisely, the usual KS operators are obtained if in \eqref{Hgam}
the rational DT invariants are replaced by the usual ones and the exponential is replaced by the dilogarithm.
However, the product over all (collinear) charges, which enters the wall-crossing formula, is the same in the two versions.}
$U_\gamma^{\bar\Omega(\gamma)}$
satisfying the wall crossing formula \cite{ks}. It dictates how the DT invariants
change after crossing a wall of marginal stability in the special \kahler moduli space of $z^a$
and ensures the smoothness of the moduli space metric across the walls \cite{Gaiotto:2008cd}.
Provided $\Gamma(z)$ is the set of charges for which the central charges $Z_\gamma(z)$ become aligned at the point $z^a$ and
$\bar\Omega^\pm(\gamma)$ are the rational DT invariants on the two sides of the wall,
the KS formula states that
\be
\label{ewall}
\prod^\ccwarrow_{\gamma\in \Gamma(z)}
U_{\gamma}^{\bar\Omega^-(\gamma)}
=
\prod^\cwarrow_{\gamma\in \Gamma(z)}
U_{\gamma}^{\bar\Omega^+(\gamma)},
\ee
where the two products are taken in the opposite order. (In both cases the order corresponds to decreasing the phase of $Z_\gamma$
at a given point in the moduli space.)
The fact that this formula extends from the operators generating symplectomorphisms to the level of contact transformations
was proven in \cite{Alexandrov:2011ac} using dilogarithm identities, which in turn follow from the classical limit of the motivic
version of \eqref{ewall}.
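To illustrate the symplectomorphism property underlying the KS operators, the following sketch checks numerically that one common form of the KS transformation preserves the holomorphic torus form $\rmd\log x\wedge \rmd\log y$. (Sign and quadratic-refinement conventions vary in the literature and are not tracked here; we take $x_\mu\mapsto x_\mu(1-x_\gamma)^{\langle\mu,\gamma\rangle}$ on a rank-2 lattice with $\langle\gamma_2,\gamma_1\rangle=-1$.)

```python
import math

# KS transformation for gamma = gamma_1 acting on torus coordinates
# x = x_{gamma_1}, y = x_{gamma_2}: (x, y) -> (x, y / (1 - x)).
def ks_map(x, y):
    return x, y / (1.0 - x)

def log_jacobian_det(f, x, y, h=1e-6):
    """det of the Jacobian of (log x', log y') w.r.t. (log x, log y),
    by central finite differences."""
    def F(u, v):
        xp, yp = f(math.exp(u), math.exp(v))
        return math.log(xp), math.log(yp)
    u, v = math.log(x), math.log(y)
    a = (F(u + h, v)[0] - F(u - h, v)[0]) / (2 * h)
    b = (F(u, v + h)[0] - F(u, v - h)[0]) / (2 * h)
    c = (F(u + h, v)[1] - F(u - h, v)[1]) / (2 * h)
    d = (F(u, v + h)[1] - F(u, v - h)[1]) / (2 * h)
    return a * d - b * c

# The determinant equals 1, i.e. the map preserves dlog x ^ dlog y.
print(log_jacobian_det(ks_map, 0.3, 0.7))
```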
Another comment is that one can easily compute the transition function corresponding to the contact Hamiltonian \eqref{Hgam}.
Using \eqref{relHH} and the properties of the contact bracket \eqref{contbr}, one finds
\be
\Hij{\gamma} = H_{\gamma} - \frac12\, q_{\Lambda}p^\Lambda \(H'_{\gamma}\)^2,
\label{trHij}
\ee
where the prime denotes the derivative.
This is the form in which the D-instanton corrections were first formulated to all orders of the instanton expansion
in \cite{Alexandrov:2009zh}. Note again the simplicity and symplectic invariance of the contact Hamiltonians,
in contrast to the absence of these properties for the transition functions.
\subsection{D1-D(-1)-instantons and S-duality}
Although the formulation presented in the previous subsection is very simple and incorporates all D-instanton effects, it is not
suitable for our purposes. In the next section we are going to apply S-duality to derive fivebrane instantons. Therefore, we need
a formulation which respects this symmetry, whereas the above type IIA picture is rather adapted to symplectic invariance.
\lfig{Example of the BPS rays of D5 (red), D1 (green) and D(-1) (brown) branes and the effect of the gauge transformation
which rotates the latter two types of rays to the real line. $\cU_\gamma$ denotes the patch lying in the counterclockwise direction from
the BPS ray $\ell_\gamma$.}{Dinst-gauge3}{12cm}{fig-Dinst-gauge}{-0.6cm}
For the two sectors corresponding to D1 and D(-1)-instantons (see \eqref{quantcor}), the passage to a manifestly S-duality invariant formulation
was understood in \cite{Alexandrov:2009qq,Alexandrov:2012bu}. The idea is to perform a gauge transformation on the twistor space
such that the gauge transformed twistor data satisfy the constraints spelled out in section \ref{subsec-Sdual}, which ensure the presence
of the $SL(2,\IZ)$ isometry.
To display the corresponding gauge transformation, we need to introduce some definitions.
First, let us define an ordering on the charge lattice according to the phase of the central charge function
saying that $\gamma>\gamma'$ if $\pi>\arg\(Z_\gamma Z_{\gamma'}^{-1}\)>0$.
Then for each charge $\gamma$ we define an associated set of D(-1)-brane charges
whose BPS rays lie in the same half-plane as $\ell_\gamma$
\be
\GamD{-1}_\gamma=\left\{\gamD{-1}=(0,0,0,\tilde q_0)\ :\
\tilde q_0\Re Z_\gamma>0
\right\},
\label{latDm1}
\ee
and another set of D1-brane charges
for which the BPS rays are between $\ell_\gamma$ and the imaginary axis
\be
\GamD{1}_\gamma=\left\{\gamD{1}=(0,0,\tilde q_a,\tilde q_0)\in H_2^+\cup H_2^-\ :\
\Nq(\gamD{1})=\Nq(\gamma) \ {\rm and}\
\begin{array}{c}
\gamD{1}> \gamma \quad \mbox{for}\ \Nq(\gamma)\ {\rm odd}
\\
\gamD{1}\le \gamma \quad \mbox{for}\ \Nq(\gamma)\ {\rm even}
\end{array}
\right\},
\label{latD1}
\ee
where $H_2^+$ is the set of charges corresponding to effective homology classes on $\CYm$,
$H_2^-$ is the set of opposite charges, and $\Nq(\gamma)$ denotes the quadrant which
$\ell_\gamma$ belongs to.\footnote{One can write
$\Nq(\gamma)=\left\lfloor\frac{2}{\pi}\, \arg \(\I Z_\gamma\)\right\rfloor$.}
Note that both the ordering and the two charge sets $\GamD{\pm 1}_\gamma$ may change after crossing a wall of marginal stability.
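The ordering and the quadrant assignment above are elementary to implement. The following sketch illustrates them on sample central charges, assuming the branch convention $\arg\in[0,2\pi)$ for the quadrant function (the footnote formula is convention-dependent):

```python
import cmath
import math

def quadrant(Z):
    """N(gamma): quadrant of the BPS ray, i.e. of the direction i*Z,
    with the argument taken in [0, 2*pi)."""
    ph = cmath.phase(1j * Z) % (2 * math.pi)
    return math.floor(2 * ph / math.pi)

def later(Z1, Z2):
    """gamma > gamma' iff pi > arg(Z_gamma / Z_gamma') > 0."""
    return 0.0 < cmath.phase(Z1 / Z2) < math.pi

print(quadrant(1 + 1j), later(1 + 1j, 1))
```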
Given these definitions, we define a holomorphic function which generates the gauge transformation in the patch
$\cU_\gamma$ taken to lie in the counterclockwise direction from the BPS ray $\ell_\gamma$ (see Fig. \ref{fig-Dinst-gauge})
\be
\gi{\gamma}=(-1)^{\Nq(\gamma)}\[\frac{1}{2}\sum_{\gamD{-1}\in\GamD{-1}_\gamma} \hHij{\gamD{-1}}+
\sum_{\gamD{1}\in\GamD{1}_\gamma}\hHij{\gamD{1}}\].
\label{fungengauge}
\ee
This gauge transformation has a very simple geometric meaning: it simply
rotates the BPS rays corresponding to D1-instantons either to the positive or negative real axis
depending on which one is the closest to the given ray. On the other hand, the D(-1) BPS rays,
which all go along the imaginary axis, are split into two ``halves" which are also rotated to the two real half-axes.
As a result, the contours associated with all D1 and D(-1)-branes coincide with either the positive or the negative real axis
and the corresponding contact Hamiltonians or transition functions (which are the same in this case since they depend only on $\xi^\Lambda$)
can be summed up. Furthermore, a Poisson resummation of this series over $\tilde q_0$ provides an alternative twistor description
fitting the constraints of S-duality \cite{Alexandrov:2009qq,Alexandrov:2012bu}. Instead of BPS rays, we can now consider
the contours $C_{m,n}$ centered around the points $\htp^{m,n}$ defined below \eqref{SL2varpi},
whereas the corresponding contact Hamiltonians are given by
\be
\hH_{m,n}^{\rm D1}(\xi)
= -\frac{\I}{(2\pi)^3}\!\!\sum_{q_a\in H_2^+\cup\{0\}}\!\!\!\! n_{q_a}^{(0)}\,
\begin{cases}
\displaystyle\frac{e^{-2\pi \I m q_a\xi^a}}{m^2(m\xi^0+n)}, &
\quad m\ne 0,
\\
\displaystyle (\xi^0)^2 \,\frac{e^{2\pi \I n q_a\xi^a/\xi^0}}{n^3}, &\quad m=0.
\end{cases}
\label{defG1}
\ee
Here we set $\gfinv_{0}=-\chi_\CYm/2$ and used that
\be\label{instnr}
\begin{split}
\Omega(\gamD{1}) =&\, \gfinv_{q_a} \ \quad\mbox{\rm for}\quad \gamD{1}=(0,0,\pm q_a,q_0), \quad \{ q_a \} \ne 0,
\\
\Omega(\gamD{1})=&\, 2\gfinv_{0}\quad\mbox{\rm for}\quad \gamD{1}=(0,0,0,q_0).
\end{split}
\ee
It is useful to note also that the contributions to \eqref{defG1} with $m=0$ are nothing
but the $\alpha'$-corrected part of the prepotential, $\sum_{n>0}\hH_{0,n}^{\rm D1}=F^{\alpha'\text{-loop}}+F^{\ws}$.
It is easy to check that both the new contours and contact Hamiltonians satisfy \eqref{mappatches} and \eqref{transhH}, respectively,
where in the last relation it is important to take into account the possibility to drop regular terms (see footnote \ref{foot-regular}).
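The Poisson resummation over $\tilde q_0$ invoked above is the standard identity $\sum_n f(n)=\sum_k \hat f(k)$. As a generic numerical illustration (a Gaussian toy example, not the actual D1 sum), one can check the Jacobi theta identity $\sum_n e^{-\pi n^2 t}=t^{-1/2}\sum_k e^{-\pi k^2/t}$:

```python
import math

def theta_direct(t, N=50):
    # sum over n of exp(-pi n^2 t)
    return sum(math.exp(-math.pi * n * n * t) for n in range(-N, N + 1))

def theta_dual(t, N=50):
    # Poisson-resummed form: t^(-1/2) * sum over k of exp(-pi k^2 / t)
    return sum(math.exp(-math.pi * k * k / t) for k in range(-N, N + 1)) / math.sqrt(t)

print(theta_direct(0.7), theta_dual(0.7))
```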
\subsection{D5-instantons after gauge transformation}
\label{subsec-D5trans}
Performing the gauge transformation which puts D1-D(-1)-instantons into an S-duality invariant formulation, we rotated their
BPS rays to the real axis. On the way they necessarily cross the BPS rays of D3 and D5-branes.
Since the charges of the crossing rays are generically mutually non-commuting, i.e. $\langle\gamma,\tilde\gamma\rangle\ne 0$,
the gauge transformation should have a non-trivial effect on the transition functions of the other branes.
\lfig{Schematic representation of the twistor data generating D(-1)-D1 and
D5-instantons in the type IIB picture.}{D1inst}{12cm}{fig-D1inst}{-0.6cm}
Indeed, the general action of the gauge transformation is shown in \eqref{gaugetr}.
Assuming that we are at the point in the moduli space which does not belong to any line of marginal stability,
for $\gamma=(p^\Lambda, q_\Lambda)$ with non-vanishing $p^\Lambda$ the holomorphic functions \eqref{fungengauge}
generating the gauge transformation on the two sides of the BPS ray $\ell_\gamma$ will be the same.
Therefore, we find ourselves in the situation where formula \eqref{gaugehHxi} can be applied.
As a result, the gauge transformed D5-brane contact Hamiltonians read
\be
\hHij{\gamma}_g(\xi,\txi)=H_{\gamma}(\Xi^{(g)}_\gamma),
\qquad
\Xi^{(g)}_\gamma= \Xi_\gamma+p^\Lambda\p_{\xi^\Lambda}\gi{\gamma}(\xi).
\label{Hgam-tr}
\ee
The corresponding gauge transformed transition functions are more complicated and can be found in \eqref{trHTijall}.
All twistor data after the gauge transformation and the resummation are shown in Fig. \ref{fig-D1inst}.
The modification of the contact Hamiltonians \eqref{Hgam-tr} is crucial for maintaining the consistency of the twistor construction
with wall-crossing. Indeed, let us consider, for instance, the wall of marginal stability corresponding to the alignment of central charges
of a D5-brane of charge $\gamma$ and D(-1)-branes, so that after crossing this wall all BPS indices
$\bar\Omega(n\gamma+\gamD{-1})$, with $\gamD{-1}\in\GamD{-1}_\gamma$, change.
Since the central charges of D(-1)-branes are real, at the wall
the D5-brane BPS rays $\ell_{n\gamma+\gamD{-1}}$ become aligned with the imaginary axis.
Before the gauge transformation, the D(-1) BPS rays $\ell_{\gamD{-1}}$ belonged to this axis and therefore, after crossing the wall,
the relative positions of $\ell_{n\gamma+\gamD{-1}}$ and $\ell_{\gamD{-1}}$ were exchanged.
This exchange compensated the change in the BPS indices
and ensured the smoothness of the contact structure and the metric on the moduli space across the wall.
But after the gauge transformation the contours associated with D(-1)-branes are rotated to the real axis.
Hence there is nothing whose relative position with $\ell_{n\gamma+\gamD{-1}}$ could be exchanged to compensate for the change of the BPS indices!
So how can the metric be still smooth in this gauge transformed picture?
It turns out that the smoothness is ensured precisely by the shift of $\Xi_\gamma$ in \eqref{Hgam-tr} induced by the gauge transformation.
The point is that the functions $\gi{\gamma}$ determining this shift are different on the different sides of the wall.
(In the considered example they differ by an overall sign due to the prefactor in \eqref{fungengauge}.)
As a result, upon crossing the wall one also changes the form of the gauge transformed contact Hamiltonians, and
it is done in such a way that the combined effect of all changes is the smoothness of the moduli space.
A rigorous proof of this fact can be obtained by representing the gauge transformed KS operators as in \eqref{gaugetr}
and using the original KS wall-crossing formula \eqref{ewall}.
\section{Fivebrane instantons from S-duality}
\label{sec-fivebrane}
Now we have all ingredients to reach our main goal --- the twistorial description of fivebrane instantons in the presence of
D1-D(-1)-instanton corrections.\footnote{We recall that our construction ignores the effect of D3-instantons.
Although such an approximation is physically unjustified, at a formal level it can be achieved by setting to zero all DT-invariants
$\Omega(\gamma)$ for charges with $p^0=0,\ p^a\ne 0$. Note however that we do include the effect of D3-branes bound to D5-branes,
as required by invariance under monodromies.}
To this end, we simply apply the modular constraint \eqref{transhH}
to the gauge transformed contact Hamiltonians \eqref{Hgam-tr} which are identified with the elements of an $SL(2,\IZ)$ multiplet with $m=0$.
More precisely, we set $\hHij{\hgam}_{0,p^0}=\hHij{\gamma}_g$, where we split the charge $\gamma$ of a D5-D3-D1-D(-1)-bound state
into the D5-component $p^0$ and the reduced charge vector $\hgam=(p^a,q_a,q_0)$ identified with the index $i$ in \eqref{transhH}.
On this function we act by an $SL(2,\IZ)$ transformation parametrized as
\be
\label{Sdualde}
\gl{}= \begin{pmatrix} a & b \\ k/p^0 & p/p^0 \end{pmatrix} \in SL(2,\IZ)\, ,
\ee
where the two integers $(p,k)\neq (0,0)$ have $p^0$ as their greatest common divisor, whereas $a$ and $b$ must satisfy $a p - b k = p^0$.
The integer $k$ will appear as the NS5-brane charge. As for the other charges, it is convenient to pack them into
rational charges $n^a=p^a/k$, $n^0=p/k$ and the so-called invariant charges \cite{Alexandrov:2010ca}
\be
\begin{split}
\hat q_a = &\, {q_a + \frac12 \,\kappa_{abc} \frac{p^b p^c}{p^0},}
\\
\hat q_0 =&\, { q_0 + \frac{p^a q_a}{p^0} + \frac13\, \kappa_{abc}\frac{p^a p^b p^c}{(p^0)^2}},
\end{split}
\ee
which are invariant under the spectral flow transformation, whose action on the charge vector $\gamma$ is identical to
the action \eqref{bjacr} on the symplectic vector $(\zeta^\Lambda, \tzeta_\Lambda)$.
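Given $(p,k)$ with $\gcd(p,k)=p^0$, a matrix of the form \eqref{Sdualde} can be constructed explicitly with the extended Euclidean algorithm, since $ap-bk=p^0$ is a B\'ezout relation. A minimal sketch (the function names are ours; the solution $(a,b)$ is of course not unique):

```python
from math import gcd

def egcd(x, y):
    # returns (g, s, t) with s*x + t*y = g = gcd(x, y)
    if y == 0:
        return (abs(x), 1 if x >= 0 else -1, 0)
    g, s, t = egcd(y, x % y)
    return (g, t, s - (x // y) * t)

def sl2z_element(p, k):
    """One matrix g = [[a, b], [k/p0, p/p0]] as in eq. (Sdualde),
    with a*p - b*k = p0 = gcd(p, k), so that det g = 1."""
    p0 = gcd(p, k)
    g, s, t = egcd(p, k)      # s*p + t*k = p0
    a, b = s, -t              # hence a*p - b*k = p0
    return [[a, b], [k // p0, p // p0]]

print(sl2z_element(6, 4))
```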
The $SL(2,\IZ)$ action on the contact Hamiltonian is easily computed using \eqref{SL2Zxi}.
Then the S-duality constraint implies that
\be
\begin{split}
\hkp=&\,(p^0)^{-1}(k\xi^0+p)\,\, \gl{}\cdot \hHij{\gamma}_g(\xi,\txi)
\\
=&\, \frac{\bar\Omega_{k,p}(\hgam)}{4\pi^2} \frac{k}{p^0} (\xi^0+n^0)\sigma_D(\gamma)\, \expe{ S_{k,p;\hgam}},
\end{split}
\label{fivebraneh}
\ee
where the result is written using the following notations:
\begin{itemize}
\item
fivebrane twistorial action
\be
\begin{split}
S_{k,p;\hgam}= &\, -k S_{n^\Lambda}+ \frac{p^0(p^0 \hat q_0-k \hat q_a (\xi^a + n^a))}{k^2(\xi^0 +n^0)}
- \frac{a}{k}\,p^0 q_0- c_{2,a} p^a \varepsilon(\gl{})
\\
&\, -\frac{(-1)^{\Nq_{k,p}(\hgam)}}{2\pi\I} \sum_{\gamD{1}\in\Gamma_{k,p;\hgam}}\bn_{\tilde q}\, p^\Lambda \tilde q_\Lambda \,
\expe{\tS_{k,p;\gamD{1}}}
\end{split}
\label{fivebraneaction}
\ee
with $S_{n^\Lambda} =\alpha - n^\Lambda \txi_\Lambda + \Fcl (\xi + n)$;
\item
S-duality transformed D1-brane twistorial action\footnote{Note that both actions \eqref{fivebraneaction} and \eqref{Stildegam}
are regular at $k=0$ and reduce in this limit to the (gauge transformed) D-instanton twistorial actions $-\Xi^{(g)}_\gamma$ and $-\Xi_{\tilde\gamma}$,
respectively.}
\be
\tS_{k,p;\gamD{1}}= \frac{\tilde q_0 (p^0)^2}{k^2 (\xi^0 + n^0)}-\frac{p^0\tilde q_a \xi^a}{k(\xi^0+n^0)}-\frac{a}{k}\,p^0 \tilde q_0;
\label{Stildegam}
\ee
\item
rational Gopakumar-Vafa invariants $\bn_{q}$ constructed from $\gfinv_{q_a}$ as in \eqref{rationalinvariants};
\item
transformed BPS indices
$\bar\Omega_{k,p}(\hgam)= \bar\Omega(\gamma;\gl{}\cdot z)$ which take into account the fact that DT invariants
are only piecewise constant;
\item
transformed set of charges
\be
\Gamma_{k,p;\hgam}=
\GamD{1}_\gamma(\gl{}\cdot z)\cup \GamD{-1}_\gamma(\gl{}\cdot z),
\label{transGam}
\ee
where the dependence on $z^a$ comes from the dependence of \eqref{latDm1} and \eqref{latD1} on the central charge function,
or, more precisely, on the chamber in the moduli space, and is analogous to the dependence of the BPS indices;
\item
target quadrant in the complex plane
$\Nq_{k,p}(\hgam)= \left\lfloor\frac{2}{\pi}\, \arg \(\I \gl{}\cdot Z_\gamma\)\right\rfloor$.
\end{itemize}
The associated contours on $\CP$ are also just the images of $\ell_\gamma$ under \eqref{Sdualde} and thus can be written as
\be
\ell_{k,p;\hgam}=\{\varpi:\ Z_\gamma(\gl{}\cdot z)/(\gl{}\cdot t)\in\I\IR^-\}.
\label{BPSray-five}
\ee
From \eqref{SL2varpi} it follows that they are rays joining the points $\htpm^{k,p}$ (see Fig. \ref{fig-fiveinst}).
Together $\hkp$ and $\ell_{k,p;\hgam}$ determine the twistorial data sufficient to incorporate all fivebrane
instanton corrections to the metric on the HM moduli space.
\lfig{Schematic representation of the twistor data generating D(-1)-D1 and
all fivebrane instantons. BPS rays joining different points $\htpm^{k,p}$ correspond to different fivebrane charges $k$ and $p$.
Different BPS rays joining the same points correspond to different reduced charges $\hgam$.}{fiveinst2}{12cm}{fig-fiveinst}{-0.6cm}
The function \eqref{fivebraneh} is almost identical to the result for the fivebrane transition function found in \cite[Eq.(5.30)]{Alexandrov:2010ca}
in the one-instanton approximation. It differs only by a prefactor ensuring the correct modular weight and by
the last term in \eqref{fivebraneaction} appearing as a result of the gauge
transformation \eqref{fungengauge}.\footnote{We also flipped the sign of the NS5-brane charge $k$.}
In particular, in \cite{Alexandrov:2010ca} it was shown that the saddle point evaluation of the Penrose transform
of this function yields the exponential of the NS5-brane instanton action found previously from
the analysis of classical supergravity solutions \cite{deVroome:2006xu}.
It is important however that, in contrast to \cite{Alexandrov:2010ca}, our result provides fivebrane
instanton corrections to the HM metric to {\it all orders} of the instanton expansion.
This expansion can be seen explicitly when one computes the contact transformation \eqref{newglue} generated by
\eqref{fivebraneh}. Equivalently, this calculation provides expressions for the corresponding transition function $\Hij{\hgam}_{k,p}$
and its derivatives. The former is given by
\be
\begin{split}
\Hij{\hgam}_{k,p}
=&\, \hkp+2\pi^2(\hkp)^2\(\frac{\hat q_0 (p^0)^2}{k(\xi^0+n^0)}+\frac{2k^2\Fcl(\xi+n)}{(1+2\pi\I k\hkp)^2} \)
\\
&\,
-(-1)^{\Nq_{k,p}(\hgam)}\,\frac{k(\xi^0+n^0)}{4\pi^2 p^0}\sum_{\gamD{1}\in\Gamma_{k,p;\hgam}}
\bn_{\tilde q}\,\expe{\tS_{k,p;\gamD{1}}} \cE\(\frac{4\pi^2 p^0 (\tilde{q}_\Lambda p^\Lambda)}{k(\xi^0+n^0)}\,\hkp\),
\end{split}
\label{tranNS5all}
\ee
where we introduced the function
\be
\cE(x)=1-(1+x)\, e^{-x},
\label{fun}
\ee
whereas the results for derivatives are reported in appendix \ref{ap-derfivebrane}.
They can be used to write down explicitly a system of integral equations which will provide a manifestly S-duality invariant
twistorial formulation of the HM moduli space including all D(-1), D1 and fivebrane instanton corrections.
Of course, this system cannot be solved analytically, but it should allow a perturbative solution generating the instanton expansion
around the classical metric.
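The function \eqref{fun} is what makes the instanton expansion manifest: its Taylor series $\cE(x)=\sum_{n\ge 2}(-1)^n\frac{n-1}{n!}\,x^n$ starts at order $x^2$, so the last term of \eqref{tranNS5all} is suppressed by two extra powers of $\hkp$. A quick numerical check of this expansion (illustrative sketch):

```python
import math

def E(x):
    # the function of eq. (fun): E(x) = 1 - (1 + x) exp(-x)
    return 1.0 - (1.0 + x) * math.exp(-x)

def E_series(x, nmax=20):
    # Taylor series: sum_{n>=2} (-1)^n (n-1)/n! x^n, starting at x^2/2
    return sum((-1) ** n * (n - 1) / math.factorial(n) * x ** n
               for n in range(2, nmax + 1))

print(E(0.3), E_series(0.3))
```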
A very non-trivial consistency check of our computation is that, as shown in appendix \ref{ap_Sdualconstrap},
the transition functions \eqref{tranNS5all} satisfy the non-linear S-duality constraint derived in \cite{Alexandrov:2013mha}.
It is amazing to see how all the non-linearities fit together, but it is even more remarkable that all of them
disappear once one starts working in terms of contact Hamiltonians.
Another consistency check is to verify that our results for fivebrane corrections are compatible with the action of all discrete isometries on $\cM_H$
which we presented in section \ref{subsec-Udual}. This is particularly important as in \cite{Alexandrov:2010ca}
it was found that there is a clash between the one-instanton approximation to fivebrane corrections, which is essentially identical
to our results, and the Heisenberg and monodromy symmetries. But as we argued, the monodromy transformations need to be modified
to ensure the correct group representation and it is natural to expect that this should resolve the above issue as well.
Indeed, due to the invariance of D-instanton corrections, the invariance of the contact structure affected by fivebrane instantons
is {\it guaranteed} by the closure of the group action.
Nevertheless, we demonstrate this invariance explicitly in appendix \ref{ap-MHinv}.
Finally, it is worth noting that S-duality generates a new family of walls in the moduli space $\cM_H$ which do not belong
to the \kahler moduli subspace $\SK$. These are the images of the original walls of marginal stability under S-duality transformation.
Since $z^a$ is mapped into $c c^a+d b^a+\I|c\tau+d|t^a$,
the position of the new walls depends on the RR-fields $c^a$ and the complexified string coupling $\tau$.
Crossing such a wall, one changes the values of the transformed BPS indices $\bar\Omega_{k,p}(\hgam)$ which gives rise
to a potential discontinuity in the contact structure and the moduli space metric. However,
both remain continuous because the new twistorial data are just the image of data already shown to be smooth.
The mechanism ensuring the smoothness is the same as in the end of section \ref{subsec-D5trans}.
Alternatively, this can be seen as a result of the change of the set $\Gamma_{k,p;\hgam}$ \eqref{transGam} which determines
the effect of D1-D(-1)-branes on the fivebrane instantons. Its change together with a rearrangement of fivebrane BPS rays
guarantees the smoothness.
\section{Discussion}
\label{sec-concl}
The main result of this paper is the twistorial construction of the HM moduli space $\cM_H$ of CY string vacua
in the type IIB picture which includes effects from fivebrane and D1-D(-1)-instantons.
In particular, the constructed fivebrane instantons generically have non-vanishing NS5-brane charge.
All non-perturbative corrections are encoded in the two sets of holomorphic functions, \eqref{defG1} found in \cite{Alexandrov:2009qq}
and \eqref{fivebraneh} derived here. These functions generate a system of integral equations which determine
Darboux coordinates on the twistor space and thereby the metric on $\cM_H$.
The key element of this construction was the use of the contact bracket formalism which provides
a new parametrization of contact transformations. The contact bracket was shown to satisfy the crucial property \eqref{trans-contact},
analogous to a similar property of the Poisson bracket, which ensures that the contact Hamiltonians $\hHij{ij}$,
encoding the geometry of a QK manifold in this twistor approach, transform {\it linearly} under all isometries.
In particular, this implies their linear transformation under S-duality \eqref{transhH}, which was used to derive
the contact Hamiltonians corresponding to fivebrane instantons.
Another important step was to improve the action of discrete isometries on $\cM_H$ at quantum level.
Namely, we found that the closure of the duality group requires a modification of certain symmetry transformations.
This adjustment had a double effect: not only did it provide a consistent implementation of all symmetries,
but it also resolved a tension between fivebrane instantons
and monodromy and Heisenberg symmetries observed in \cite{Alexandrov:2010ca}.
However, the proposed modification of the monodromy action on the RR-fields raises the following problem.
Before the modification, it was given in \eqref{bjacr} and this seemingly complicated transformation
in fact follows from the definition of the RR-scalars in terms of the B-field and
the RR-potential $A^{\rm even}\in H^{\rm even}(\CY,\IR)$
\be
\label{ABze}
A^{\rm even}\, e^{-B}
= \zeta^0 - \zeta^a \omega_a - \tzeta_a \omega^a-\tzeta_0 \omega_{\CYm}
\ee
just by applying the shift of the B-field and keeping the potential fixed. Therefore, it is natural to ask whether
the modified transformation \eqref{bjacr-mod} can be generated in the same way. This would imply that
either the l.h.s. of \eqref{ABze} should be modified so that it acquires additional non-homogeneous (in $A^{\rm even}$) terms,
or the RR-potential itself transforms.
Since the new terms in \eqref{bjacr-mod} have their origin in the quadratic refinement, one might expect that
in both cases the corrections appear from some subtleties in the definition of the one-loop determinant
around the D-instanton background similar to the issues discussed, for instance, in \cite{Freed:1999vc}.
Returning to the fivebrane instantons, we note that the construction presented in this paper calls for two natural
extensions. First, it clearly misses the D3-brane contributions. As was indicated in the Introduction, the actual problem
is to find how the corresponding subset of D2-instantons on the type IIA side can be rewritten
in an S-duality invariant way. Unfortunately, this was not understood even in the linear (one-instanton)
approximation. Hopefully, once this problem is resolved at one-instanton level,
the contact bracket formalism will provide a fully non-linear solution.
The second extension is, in contrast, to map the fivebrane instantons found here in the type IIB picture
into the mirror type IIA formulation. What is non-trivial is that the resulting NS5-brane instanton corrections
should be automatically symplectic invariant, a symmetry which is not seen on the type IIB side.
An interesting related question is whether these corrections will exhibit some form of integrability as
there are strong indications that the inclusion of NS5-instantons may be equivalent to quantization
of a certain integrable structure \cite{Alexandrov:2011ac,Alexandrov:2012np,Alexandrov:2013yva}.
The knowledge of fivebrane instantons also allows one to approach two problems which are expected to be related to this type of
non-perturbative corrections. The first one is the existence of a singularity in the one-loop corrected metric on $\cM_H$.
This singularity should be resolved by non-perturbative effects, but D-instantons seem to be incapable of doing so \cite{Alexandrov:2009qq}.
Thus, it is the NS5-brane corrections that should be responsible for the smoothness of the metric.
It will be a very non-trivial check on our construction to see whether the fivebrane instantons found in this paper
indeed resolve the singularity.
Another issue whose resolution was attributed to NS5-branes is the divergence of the sum over D-brane charges,
which appears due to the exponential growth of the DT invariants \cite{Pioline:2009ia}.
Somehow NS5-brane effects should regulate this sum to make the non-perturbative metric on $\cM_H$ well defined.
It is likely, however, that the solution to this problem requires the passage to the mirror type IIA picture,
which makes such a reformulation even more pressing.
Our final comment concerns the isometry group of $\cM_H$.
In this work it appears as a semidirect product of $SL(2,\IZ)$ with the nilpotent group
generated by the Heisenberg transformations and monodromies around the large volume point.
On the other hand, one might expect that the true U-duality group of the low energy theory
should be semisimple and obtained by adding some new symmetry generators. Such extensions have been proposed
in \cite{Pioline:2009qt,Bao:2009fg,Persson:2011xi}, but it is not clear so far what can be such a group for generic CY.
It is interesting to see whether the contact bracket formalism can help solve this problem, given that it is particularly
well suited for dealing with symmetries.
\section*{Acknowledgments}
We are grateful to Davide Gaiotto, Albrecht Klemm, Jan Louis, Jan Manschot, Daniel Persson, Boris Pioline and Roberto Valandro
for valuable discussions and correspondence. We also thank Daniel Persson and Boris Pioline for careful reading of the manuscript.
Random matrix theory (RMT) is a versatile tool for analyzing spectral statistics of operators like Hamiltonians in quantum chaotic and disordered systems~\cite{GMW,Haake,Sieber,Beenakker}, the density operator in quantum information theory~\cite{Bengtsson,ForresterKieburg}, and the Dirac operator in Quantum chromodynamics (QCD)~\cite{Verbaarschot:2000dy,Verbaarschot}. It even allows one to compare spectra of completely different systems ranging over many orders of scale. Applications of RMT can also be found beyond physics, like telecommunications, time series analysis in finance, ecology, sociology and medicine, and mathematical topics like algebraic geometry, number theory, combinatorics and graph theory. For more examples, see~\cite{handbook:2010}. In QCD the applicability is two-fold. First, it allows one to derive analytical relations between low energy constants of the chiral effective theory (non-linear $\sigma$-models) and spectral observables of the Dirac operator. This enables the determination of the low energy constants by lattice simulations; see~\cite{Verbaarschot:2000dy,Verbaarschot}. Second, RMT can be applied to situations where the notorious sign problem impedes lattice simulations, like at finite baryon chemical potential~\cite{Osborn:2004,Splittorff:2006vj,Akemann:2007rf,Kanazawa:2012zzr} or at finite $\theta$-angle~\cite{Damgaard:1999ij,Janik:1999,Kanazawa:2011tt,Kieburg:2017}.
The random matrix model we consider here is inspired by a certain type of Dirac operators. Hence, it will be of interest in QCD, although we may expect applications in other areas as well. In particular, Hamiltonians in condensed matter theory sometimes share similar or even the same global symmetries as those of Dirac operators in QCD-like theories. Our model is a Gaussian distributed, chiral, two-matrix model exhibiting statistics corresponding to the Dyson index $\beta=2$ in the bulk of the spectrum. The random matrix is explicitly of the form
\begin{equation}\label{RMT-model}
\eqalign{
\fl\mathcal{D} =\Bigg(\begin{array}{cc} 0 & iW \\ iW^\dagger & 0 \end{array}\Bigg),~~~
W= H_1+i\mu H_2,~~~H_1,H_2\in{\rm Herm}(N)\ {\rm and}\ \mu\in[0,1]
}
\end{equation}
distributed as
\begin{equation}\label{distribution}
P(\mathcal{D})= \frac{1}{2^N \pi^{N^2}}\exp\left[-\frac{1}{2}\Tr(H_1^2+H_2^2)\right]
\end{equation}
with ${\rm Herm}(N)$ denoting the set of Hermitian $N\times N$ matrices. The coupling parameter $\mu$ can in general be chosen real. However, due to the symmetries $(\lambda,H_1,H_2,\mu)\to(\lambda,H_1,-H_2,-\mu)$ and $(\lambda,H_1,H_2,\mu)\to(\lambda/\mu,H_2,H_1,1/\mu)$ with $\lambda$ an arbitrary eigenvalue of $\mathcal{D}$, we can reduce its parameter range to $[0,1]$. For $\mu=1$ the model is exactly the chGUE, while for $\mu=0$ we have the spectral statistics of the singular values of the GUE. Hence the exact chiral pairs of eigenvalues $(\lambda_j,-\lambda_j)$ are present for all values of $\mu$.
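These structural properties are straightforward to check numerically. The following sketch (with illustrative values of $N$ and $\mu$) draws one realization of~\eref{RMT-model} and verifies that the spectrum of $\mathcal{D}$ consists of exact chiral pairs $\pm i\lambda_j$ given by the singular values of $W$, as well as the per-realization identity $\mu\,{\rm sv}(H_2+iH_1/\mu)={\rm sv}(H_1+i\mu H_2)$ underlying the second symmetry above.

```python
import numpy as np

rng = np.random.default_rng(0)
N, mu = 8, 0.4  # illustrative values

def herm(n):
    """Hermitian matrix drawn from exp(-Tr H^2/2), the weight of the distribution above."""
    d = rng.normal(size=n)
    u = np.triu((rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n))) / np.sqrt(2), 1)
    return np.diag(d) + u + u.conj().T

H1, H2 = herm(N), herm(N)
W = H1 + 1j * mu * H2

# D = [[0, iW], [iW^dag, 0]] is anti-Hermitian; -iD is the Hermitian matrix below,
# so the eigenvalues of D are i*lam with lam real.
Z = np.zeros((N, N))
miD = np.block([[Z, W], [W.conj().T, Z]])
lam = np.sort(np.linalg.eigvalsh(miD))

# exact chiral pairs (lam_j, -lam_j) for every mu
assert np.allclose(lam, -lam[::-1])
# the positive eigenvalues are the singular values of W
assert np.allclose(lam[N:], np.sort(np.linalg.svd(W, compute_uv=False)))
# the second symmetry holds realization by realization:
# mu * sv(H2 + i H1/mu) = sv(H1 + i mu H2)
sv = lambda A: np.sort(np.linalg.svd(A, compute_uv=False))
assert np.allclose(mu * sv(H2 + 1j * H1 / mu), sv(W))
```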
The model~\eref{RMT-model} is related to the elliptic complex Ginibre ensemble, for which the primary focus has been on the complex eigenvalue spectrum of the matrix $W$~\cite{Sommers,Fyodorov:1997,Akemann:2001bf,Akemann:2007rf}. Its physical applications include scattering in disordered and chaotic systems~\cite{Fyodorov:1997b}, as well as 3d QCD at finite baryon chemical potential~\cite{Akemann:2001bf,Akemann:2007rf}. In contrast to these works, we are interested in the singular value statistics of $W$.
There are three applications of \eref{RMT-model} in QCD. The first application is 4d QCD at high temperature. Since the early 1980s it has been understood that at high temperature QCD-like gauge theories undergo dimensional reduction~\cite{Ginsparg:1980ef,Appelquist:1981vg,Nadkarni:1982kb,Nadkarni:1988fh,Landsman:1989be,Kajantie:1995dw}. In this regime the chiral condensate evaporates and RMT loses its validity for the infrared Dirac spectrum \cite{Kovacs:2009zj}. However, by judiciously choosing the boundary condition of quarks along the time-like circle $S^1$ it is possible to avoid chiral restoration up to an arbitrarily high temperature \cite{Stephanov:1996he,Bilgici:2009tx}. Then the dimensional crossover should manifest itself particularly strongly in the smallest eigenvalues of the Dirac operator because they encode the type of spontaneous symmetry breaking. The Dirac operator of 4d QCD with more than two colors ($N_{\rm c}>2$) and quarks in the fundamental representation shares the global symmetries of the chGUE~\cite{Shuryak:1992pi,Verbaarschot:1993pm,Verbaarschot}. In three dimensions the symmetries are those of the GUE~\cite{Verbaarschot:1994ip,Akemann:2001bf,Akemann:2001}. Since chiral symmetry always has to be present for the Dirac operator in the 4d continuum theory, we expect the spectral statistics of~\eref{RMT-model}.
The second application can be found in 3d QCD at finite isospin chemical potential $\mu_{\rm I}$ \cite{Akemann:2001bf,Akemann:2007}. When analyzing the pion condensate $\langle\bar{u}d+\bar{d}u\rangle$ ($u$ and $d$ are the up and down quarks), one needs to introduce a source variable $j$. Arranging the two quark fields as $\bar{\psi}=(\bar{u},\bar{d})$, the fermionic part of the Lagrangian reads%
\footnote{The authors of~\cite{Akemann:2001bf,Akemann:2007} did not study the pion condensate $\langle\bar{u}d+\bar{d}u\rangle$. Hence, they had no source term $j$ and the two quarks decoupled.}
\begin{equation}\label{Lag-phys}
\mathcal{L} = \bar{\psi}[\mathcal{D}(m,\mu_{\rm I})+j\tau_1]\psi =\bar{\psi}\bigl[\mathcal{D}_{\rm 3d}
+ \mu_{\rm I} \sigma_3\tau_3+{\rm diag}(m_{\rm u},m_{\rm d})+j\tau_1\bigl]\psi,
\end{equation}
where $\mathcal{D}_{\rm 3d}$ is the Euclidean anti-Hermitian 3d Dirac operator, $m_{\rm u/d}$ are the quark masses, and $\sigma_j$ and $\tau_j$ are the Pauli matrices in spinor and flavor space, respectively. Let us take the chiral limit for simplicity. The resonances (zeros of the characteristic polynomials of $\mathcal{D}+j\tau_1$) in $j$ are the eigenvalues of the operator $-\mathcal{D}(m,\mu_{\rm I})\tau_1$. A nonzero density of the latter at the origin is a necessary condition for the formation of the pion condensate \cite{Kanazawa:2011tt}. By replacing the operator $\mathcal{D}_{\rm 3d}$ by an anti-Hermitian random matrix ($i\,\cdot\,$GUE) we arrive at the random matrix model
\begin{equation}\label{Step.model}
\mathcal{D} = \left(\begin{array}{cc} 0 & iH - {\mu}_{\rm I}\leavevmode\hbox{\small1\kern-3.8pt\normalsize1} \\
iH+ {\mu}_{\rm I}\leavevmode\hbox{\small1\kern-3.8pt\normalsize1} & 0 \end{array}\right),\qquad H=H^\dagger.
\end{equation}
The relation of this model to~\eref{RMT-model} is similar to the relation between the Stephanov model~\cite{Stephanov:1996ki} and the Osborn model~\cite{Osborn:2004} for 4d QCD at finite baryon chemical potential. This means that for large ${\mu}_{\rm I}$ the model~\eref{Step.model} undergoes a phase transition to a phase where $\mathcal{D}$ develops a spectral gap about the origin. Such a phase does not exist in the model~\eref{RMT-model}. However, in the other phase, where the spectral gap is closed, we will show in~\cite{Kanazawa:2018b} that the hard edge statistics at the origin is the same for both models.
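The gap in this reduced model can be made explicit: since $H$ commutes with the identity, $(-i\mathcal{D})^2$ in~\eref{Step.model} has the doubly degenerate eigenvalues $h_j^2+\mu_{\rm I}^2$, with $h_j$ the eigenvalues of $H$, so every realization obeys $|\lambda|\geq\mu_{\rm I}$; on the local scale of the smallest eigenvalues this bound becomes visible only for sufficiently large $\mu_{\rm I}$. A short numerical sketch (illustrative sizes):

```python
import numpy as np

rng = np.random.default_rng(1)
N, muI = 10, 0.7  # illustrative values

a = rng.normal(size=(N, N)) + 1j * rng.normal(size=(N, N))
H = (a + a.conj().T) / 2  # Hermitian

# -iD = [[0, H + i muI], [H - i muI, 0]] is Hermitian; its eigenvalues are
# plus/minus the singular values of H + i*muI*1.
s = np.linalg.svd(H + 1j * muI * np.eye(N), compute_uv=False)
h = np.linalg.eigvalsh(H)

# s_j^2 = h_j^2 + muI^2, hence |lambda| >= muI for every realization
assert np.allclose(np.sort(s**2), np.sort(h**2 + muI**2))
assert s.min() >= muI - 1e-12
```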
The third application is 3d lattice QCD for staggered fermions. It is known~\cite{Damgaard:1998,Damgaard:2002,Bruckmann:2008,Bialas:2010hb,Kieburg:2014,Kieburg:2017rrk} that the symmetries of the staggered lattice Dirac operator do not necessarily agree with those of the continuum theory. Recently a complete classification of such symmetry shifts
was given for all dimensions \cite{Kieburg:2017rrk}. Towards the continuum limit the Dirac operator has to undergo a change of symmetries to reach the correct continuum theory. In~\cite{Bialas:2010hb} the model~\eref{RMT-model} was proposed as a description of the symmetry crossover of 3d staggered fermions. The comparison of lattice simulations and Monte Carlo simulations of~\eref{RMT-model} in~\cite{Bialas:2010hb} supports this idea.
Let us mention another model which interpolates between GUE and chGUE, namely of the form
\begin{equation}\label{Wilson}
\eqalign{
\mathcal{D}_5 =\left(\begin{array}{cc} 0 & W \\ W^\dagger & 0 \end{array}\right)+\mu H,
\quad W\in\mathbb{C}^{N\times N}~~{\rm and}~~H\in{\rm Herm}(2N)\,.
}
\end{equation}
This model was considered in~\cite{Akemann:2011} for the Hermitian Wilson Dirac operator (see also \cite{Damgaard:2010cz,Akemann:2010em} for a related model). In particular, we want to compare the results of our model with those of the Hermitian Wilson Dirac operator in its chiral limit (only then is the spectral gap of $\mathcal{D}_5$ closed). The main difference of~\eref{Wilson} to~\eref{RMT-model} is the loss of chirality. Whenever $\mu\neq0$ there are no exact chiral pairs of eigenvalues like $(\lambda_j,-\lambda_j)$. We will see in sections~\ref{sec:kernel} and~\ref{sec:lim.mu} that this difference has an immediate impact on the behavior of the eigenvalues closest to the origin.
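The loss of the chiral pairing for $\mu\neq0$ can be seen directly in a random realization of~\eref{Wilson} (a minimal sketch with illustrative sizes; at $\mu=0$ the pairing is restored):

```python
import numpy as np

rng = np.random.default_rng(2)
N, mu = 6, 0.3  # illustrative values

def herm(n):
    a = rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n))
    return (a + a.conj().T) / 2

W = rng.normal(size=(N, N)) + 1j * rng.normal(size=(N, N))
Z = np.zeros((N, N))
D5_chiral = np.block([[Z, W], [W.conj().T, Z]])
H = herm(2 * N)

lam0 = np.sort(np.linalg.eigvalsh(D5_chiral))          # mu = 0
lam = np.sort(np.linalg.eigvalsh(D5_chiral + mu * H))  # mu != 0

assert np.allclose(lam0, -lam0[::-1])    # exact chiral pairs at mu = 0
assert not np.allclose(lam, -lam[::-1])  # generically broken for mu != 0
```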
In the present work we will analyze the model~\eref{RMT-model} at finite matrix dimension. For this purpose we first derive the joint probability density of the eigenvalues of $\mathcal{D}$ (equivalent to the singular values of $W$ modulo sign) in section~\ref{sec:jpdf}. To achieve this, we evaluate a unitary group integral of a new kind in~\ref{app:group} which generalizes the Leutwyler-Smilga integral~\cite{Leutwyler:1992yt}. The joint probability density turns out to have a Pfaffian form. This is true also for the model \eref{Wilson} in \cite{Akemann:2011} and several other two-matrix models~\cite{Nagao:1998ysi,Nagao:1999nci,Adler2000,Nagao:2001db,Nagao2007JSP,Forrester2008,Akemann:2009fc,Akemann:2010tv,Kieburg:2012mix,Kieburg-Wilson}, where this list is by no means exhaustive. This Pfaffian form allows us to exploit general results on the method of mixing bi-orthogonal and skew-orthogonal polynomials~\cite{Kieburg:2012mix}. Those results are summarized in~\ref{app:Pfaff} and are used to find the Pfaffian point process (section~\ref{sec:Pfaff}), the kernels (section~\ref{sec:kernel}) and the skew-orthogonal polynomials (section~\ref{sec:poly}). Explicit expressions for the skew-orthogonal polynomials are computed via the supersymmetry method~\cite{Zirnbauer,Guhr}. In section~\ref{sec:lim.mu}, we study the limits to GUE ($\mu=0$) and to chGUE ($\mu=1$) in more detail. The qualitative and quantitative difference between the two models~\eref{RMT-model} and~\eref{Wilson} becomes clearer in the limit $\mu\to0$. We summarize our results in section~\ref{sec:conclusion}.
\section{Joint Probability Distribution}\label{sec:jpdf}
To obtain the eigenvalues of $\mathcal{D}$, see Eq.~\eref{RMT-model},
we perform the diagonalization $\mathcal{D}=i O(\Lambda\otimes\tau_3)O^\dagger$ with $\Lambda={\rm diag\,}(\lambda_1,\ldots,\lambda_N)>0$ the singular values of $W$ and $O\in{\rm U}(2N)$. Thus the singular values of $W$ completely determine the eigenvalue spectrum of $\mathcal{D}$. We are interested in the joint probability distribution of $\Lambda$ at finite $N$. For this purpose we express the distribution of $\mathcal{D}$ in terms of $W$ as
\begin{equation}
\fl P(W)=\frac{1}{(2\pi\mu)^{N^2}}\exp\left[-\frac{1+\mu^2}{4\mu^2}\Tr WW^\dagger+\frac{1-\mu^2}{8\mu^2}\Tr(W^2+(W^\dagger)^2)\right].
\end{equation}
To shorten the notation we define
\begin{equation}\label{eq:etadefin}
\eta_{\pm}=\frac{1\pm\mu^2}{4\mu^2}.
\end{equation}
Upon the singular value decomposition $W=U\Lambda V$ the measure transforms as
\cite{Edelman_Rao_2005}
\begin{equation}
\eqalign{
\label{eq:wmeas}
d W = \frac{2^N \pi^{N^2}}{N!\left(\prod_{j=0}^{N-1}j!\right)^2}
d \mu(U) d \mu(V) \Delta_N^2(\Lambda^2)
\prod_{i=1}^{N}\lambda_i d \lambda_i ,
}
\end{equation}
where $d\mu$ is the Haar measure of ${\rm U}(N)$ and
the differential $dW$ is the product of all independent real differentials of the matrix entries of $W$.
Hence we have for the joint probability distribution of $\Lambda$
\begin{equation}
\fl\eqalign{
p(\Lambda)&=\frac{1}{N!\left(\prod_{j=0}^{N-1}2^jj!\right)^2|\mu|^{N^2}}
\Delta_N^2(\Lambda^2)\det\Lambda \exp\left(-\eta_+\Tr \Lambda^2\right)\\
&\quad \times\int_{{\rm U}(N)} d\mu(U)\int_{{\rm U}(N)}d\mu(V)\exp\left[\frac{\eta_-}{2}\Tr ((\Lambda VU)^2+((VU)^\dagger\Lambda)^2)\right]\,.
}
\end{equation}
The integral over $V$ can be absorbed in the integration over $U$. The remaining group integral can be performed with the formula derived in~\ref{app:group}. It is a particular case of the more general group integral
\begin{equation}
\mathcal{I} \equiv \int\limits_{{\rm U}(N)} d \mu(U)\;
\exp \left[\xi \Tr(AU+U^\dagger B)
+ \frac{1}{2}\Tr [(AU)^2 + (U^\dagger B)^2] \right],
\end{equation}
with $A$ and $B$ two arbitrary complex matrices and $\xi$ an arbitrary parameter; in our case we have $A=B=\Lambda$ and $\xi=0$. Assuming that the matrix $a={\rm diag\,}(a_1,\ldots,a_N)\geq0$, defined such that the $a_k^2$ are the singular values of $AB$, is non-degenerate, the integral is
\begin{equation}
\fl\mathcal{I} = \left(\prod_{j=0}^{N-1}\frac{j!}{\sqrt{4\pi}}\right)
\frac{e^{-\Tr a^2}}{\Delta_N(a^2)}\left\{\begin{array}{cl} {{\rm Pf\,}}\left[\bm{B}_{\xi}(a_k,a_l)\right]_{k,l=1,\ldots, N}, & N\ {\rm is\ even}, \\ {\rm Pf\,} \left[\begin{array}{c|c} \bm{B}_{\xi}(a_k,a_l) & \bm{C}_\xi(a_k) \\ \hline - \bm{C}_\xi(a_l) & 0 \end{array}\right]_{k,l=1,\ldots,N}, & N\ {\rm is\ odd} \end{array}\right.
\end{equation}
with the functions
\begin{eqnarray}
\eqalign{
\fl \bm{B}_{\xi}(a_k,a_l) &=
\int_{\mathbb{R}^2} d x\,d y~
\frac{x-y}{x+y}
\Big[ I_0(2a_k x)I_0(2a_l y)-I_0(2a_l x)I_0(2a_k y) \Big]
e^{-[(x-\xi)^2+(y-\xi)^2]/2},
\\
\fl \bm{C}_\xi(a_k) &= \sqrt{2}\int_{-\infty}^{\infty}d x\;
e^{- (x-\xi)^2/2} I_0(2a_k x).
}
\end{eqnarray}
The function $I_0$ is the modified Bessel function of the first kind. These results are also derived in~\ref{app:group}.
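The $N=1$ case provides a direct check of these formulas: for scalars $A=a$, $B=b$ and $\xi=0$, the group integral is a single integral over ${\rm U}(1)$, while the right-hand side collapses to $e^{-a_1^2}\,\bm{C}_0(a_1)/\sqrt{4\pi}$ with $a_1=\sqrt{ab}$. Both sides can be evaluated numerically (a sketch with hypothetical values; scipy assumed):

```python
import numpy as np
from scipy.integrate import quad
from scipy.special import i0

a, b = 0.7, 1.1          # hypothetical scalar matrices A, B
a1 = np.sqrt(a * b)

# left-hand side: the U(1) integral at xi = 0 (the imaginary part integrates to zero)
lhs, _ = quad(lambda t: np.real(np.exp(0.5 * (a**2 * np.exp(2j * t)
                                              + b**2 * np.exp(-2j * t)))) / (2 * np.pi),
              0, 2 * np.pi)

# right-hand side: prefactor 0!/sqrt(4 pi) times e^{-a1^2} C_0(a1)
C0, _ = quad(lambda x: np.sqrt(2) * np.exp(-x**2 / 2) * i0(2 * a1 * x), -40, 40)
rhs = np.exp(-a1**2) * C0 / np.sqrt(4 * np.pi)

assert abs(lhs - rhs) < 1e-8
```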
The structure of the group integral above carries over to the joint probability density $p(\Lambda)$ which is cast into the following form
\begin{equation}\label{jpdf.Neven}
\eqalign{
p(\Lambda)=&\frac{C_N}{N!}\Delta_N(\Lambda^2){\rm Pf\,}[G(\lambda_a,\lambda_b)]_{a,b=1,\ldots,N}
}
\end{equation}
for even $N$ and
\begin{equation}\label{jpdf.Nodd}
\eqalign{
p(\Lambda)=&\frac{C_N}{N!}\Delta_N(\Lambda^2){\rm Pf\,} \!
\left[\begin{array}{c|c} \widetilde{G}(\lambda_a,\lambda_b) & g(\lambda_a) \\ \hline -g(\lambda_b) & 0 \end{array}\right]_{a,b=1,\ldots,N}
}
\end{equation}
for odd $N$. The normalization constant is
\begin{equation}\label{norm.const}
C_N=\prod_{j=0}^{N-1}\frac{1}{\sqrt{4\pi\mu^2}(1-\mu^2)^jj!}
\end{equation}
and the weight functions are
\begin{eqnarray}
\fl G(\lambda_1,\lambda_2) &=&4\lambda_1\lambda_2e^{-\eta_+(\lambda_1^2+\lambda_2^2)}
\nonumber\\
\fl &&\times\int_0^{\pi} d \vartheta\tan \vartheta \sinh\big[ \eta_-(\lambda_1^2-\lambda_2^2)\sin (2\vartheta) \big]
\; I_0(2\eta_-\lambda_1\lambda_2\cos (2\vartheta)),
\label{def.G}\\
\fl g(\lambda) &=& 2\sqrt{\pi}\lambda e^{-\eta_+\lambda^2} I_0(\eta_-\lambda^2),\label{def.g}\\
\fl \widetilde{G}(\lambda_1,\lambda_2)&=&G(\lambda_1,\lambda_2)-\frac{g(\lambda_1)}{\bar{g}}H(\lambda_2)+H(\lambda_1)\frac{g(\lambda_2)}{\bar{g}}.\label{def.tildG}
\end{eqnarray}
For the definition of $\widetilde{G}$ we need the integrals
\begin{equation}\label{def.barg}
\bar{g}=\int_0^\infty d\lambda\, g(\lambda) =2\sqrt{\pi}|\mu|=\frac{1}{C_1}
\end{equation}
and
\begin{equation}\label{def.H}
\fl\eqalign{
H(\lambda)&=\int_0^\infty d\lambda' \, G(\lambda',\lambda)
\\
&=\lambda\; e^{-\eta_+\lambda^2}
\int_0^{\pi} d \vartheta\tan \vartheta \Biggl(\frac{1}{\eta_+-\eta_-\sin(2\vartheta)}\exp\left[\frac{\eta_--\eta_+\sin(2\vartheta)}{\eta_+-\eta_-\sin(2\vartheta)}\eta_-\lambda^2\right]\\
&\quad -\frac{1}{\eta_++\eta_-\sin(2\vartheta)}\exp\left[\frac{\eta_-+\eta_+\sin(2\vartheta)}{\eta_++\eta_-\sin(2\vartheta)}\eta_-\lambda^2\right]\Biggr).
}
\end{equation}
The result above resembles the one in~\cite{Akemann:2011} for the Hermitian Wilson Dirac random matrix~\eref{Wilson}.
Let us underline that the joint probability density~\eref{jpdf.Nodd} for odd $N$ can also be written with $\widetilde{G}$ replaced by $G$. Indeed, this would be more natural from the perspective of deriving the joint probability density, see~\ref{app:group}. The difference between the two representations is that we have subtracted the last row and column from the first $N$ rows and columns, which does not change the Pfaffian. In this way the two-point weight $\widetilde{G}$ is orthogonal to the constant function, i.e. $\int_0^\infty d\lambda' \,\widetilde{G}(\lambda',\lambda)=0$. We do this because we want to pursue the ideas of~\cite{Kieburg:2012mix} regarding the construction of the finite $N$ results via skew-orthogonal polynomials, especially those for odd $N$. This construction differs from Mehta's~\cite[Chapter~5.5]{Mehta_book} only for odd $N$. It has the advantage that all pairs of skew-orthogonal polynomials can be derived in the same way regardless of the parity of $N$, while in Mehta's construction all polynomials are modified by the one of the highest order and are thus of the same order~\cite[Eq.~(5.5.16)]{Mehta_book}; see more in section~\ref{sec:poly} and in~\ref{app:Pfaff}.
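As a cross-check of these weight functions, at $N=1$ the joint probability density reduces to $p(\lambda)=C_1 g(\lambda)$. Its normalization, $\int_0^\infty d\lambda\, g(\lambda)=2\sqrt{\pi}|\mu|=1/C_1$, and its second moment, $\langle\lambda^2\rangle=1+\mu^2$ (which equals $\langle|W|^2\rangle$ of the $1\times1$ model $W=h_1+i\mu h_2$), can both be verified numerically. A sketch with an illustrative $\mu$ and modest statistics, using $\eta_+-\eta_-=1/2$ so that the exponentially scaled Bessel function can be employed (scipy assumed):

```python
import numpy as np
from scipy.integrate import quad
from scipy.special import i0e

rng = np.random.default_rng(3)
mu = 0.5                              # illustrative coupling
eta_m = (1 - mu**2) / (4 * mu**2)     # eta_-, and eta_+ = eta_- + 1/2

# g(lambda) = 2 sqrt(pi) lambda e^{-eta_+ lambda^2} I_0(eta_- lambda^2)
g = lambda lam: 2 * np.sqrt(np.pi) * lam * np.exp(-lam**2 / 2) * i0e(eta_m * lam**2)

# normalization: integral of g equals 2 sqrt(pi) |mu| = 1/C_1
gbar, _ = quad(g, 0, np.inf)
assert abs(gbar - 2 * np.sqrt(np.pi) * mu) < 1e-7

# second moment of p(lambda) = C_1 g(lambda) versus Monte Carlo for lambda = |W|
m2, _ = quad(lambda lam: lam**2 * g(lam) / gbar, 0, np.inf)
h1, h2 = rng.normal(size=200_000), rng.normal(size=200_000)
assert abs(m2 - (1 + mu**2)) < 1e-6
assert abs(np.mean(h1**2 + mu**2 * h2**2) - (1 + mu**2)) < 0.02
```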
\section{Pfaffian Point Process}\label{sec:Pfaff}
The particular Pfaffian form, see Eqs.~\eref{jpdf.Neven} and~\eref{jpdf.Nodd}, of the joint probability density $p(\Lambda)$ implies already quite a lot. For example the partition function $Z_N^{(k_{\rm b},k_{\rm f})}$
with $k_{\rm f}$ fermionic quarks and $k_{\rm b}(\leq k_{\rm f})$ bosonic quarks,
\begin{equation}\label{eq:Zval}
\eqalign{
Z_N^{(k_{\rm b},k_{\rm f})}=& \int d H_1 dH_2 \;P(\mathcal{D})\;\frac{\displaystyle \prod_{j=1}^{k_{\rm f}}\det\Bigg(
\begin{array}{cc} \kappa_{{\rm f}, j} \leavevmode\hbox{\small1\kern-3.8pt\normalsize1}_{N} & iH_1-\mu H_2 \\ iH_1+\mu H_2 & \kappa_{{\rm f}, j}\leavevmode\hbox{\small1\kern-3.8pt\normalsize1}_{N} \end{array}\Bigg)}
{\displaystyle \prod_{j=1}^{k_{\rm b}}\det\Bigg(
\begin{array}{cc} \kappa_{{\rm b}, j} \leavevmode\hbox{\small1\kern-3.8pt\normalsize1}_{N} & iH_1-\mu H_2 \\ iH_1+\mu H_2 & \kappa_{{\rm b}, j}\leavevmode\hbox{\small1\kern-3.8pt\normalsize1}_{N} \end{array}\Bigg)}\,,
}
\end{equation}
simplifies drastically. The masses of the bosonic valence quarks must have a non-vanishing real part, ${\rm Re}\,\kappa_{{\rm b},j}\neq0$, to guarantee integrability. Usually one sets $k_{\rm b}=k_{\rm f}-N_{\rm f}=k$ and chooses the first $N_{\rm f}$ masses $\kappa_{{\rm f}, j}$ equal to the masses of the dynamical quarks, the remaining $\kappa_{{\rm f}, j}$ and the $\kappa_{{\rm b}, j}$ being valence quark masses, which might be complex, as is the case when calculating the $k$-point correlation function.
The partition function~\eref{eq:Zval} can be reduced to a Pfaffian~\cite{Kieburg:2012mix}, see also~\ref{app:Pfaff}.
When $k_{\rm f}-k_{\rm b}=N_{\rm f}$ is even, we have
\begin{equation}\label{part.finite.even}
\fl\eqalign{
&Z_N^{(k_{\rm b},k_{\rm f})}(\kappa)=\frac{C_N}{C_{N+N_{\rm f}}}\frac{\prod_{a=1}^{k_{\rm b}}\prod_{b=1}^{k_{\rm f}}(\kappa_{{\rm f},b}^2-\kappa_{{\rm b},a}^2)}{\Delta_{k_{\rm b}}(\kappa_{\rm b}^2)\Delta_{k_{\rm f}}(\kappa_{\rm f}^2)}\\
\times&{\rm Pf\,}\left[\begin{array}{c|c} \underset{\ }{\frac{C_{N+N_{\rm f}}}{C_{N+N_{\rm f}+2}}(\kappa_{{\rm b},a}^2-\kappa_{{\rm b},b}^2)Z_{N+N_{\rm f}+2}^{(2,0)}(\kappa_{{\rm b},a},\kappa_{{\rm b},b})} & Z_{N+N_{\rm f}}^{(1,1)}(\kappa_{{\rm b},a},\kappa_{{\rm f},d})/(\kappa_{{\rm f},d}^2-\kappa_{{\rm b},a}^2) \\ \hline -Z_{N+N_{\rm f}}^{(1,1)}(\kappa_{{\rm b},b},\kappa_{{\rm f},c})/(\kappa_{{\rm f},c}^2-\kappa_{{\rm b},b}^2) & \overset{\ }{\frac{C_{N+N_{\rm f}}}{C_{N+N_{\rm f}-2}}(\kappa_{{\rm f},c}^2-\kappa_{{\rm f},d}^2)Z_{N+N_{\rm f}-2}^{(0,2)}(\kappa_{{\rm f},c},\kappa_{{\rm f},d})} \end{array}\right],
}
\end{equation}
where the indices take the values $a,b=1,\ldots,k_{\rm b}$ and $c,d=1,\ldots,k_{\rm f}$. It is worth noting that this representation is valid both for even and odd $N$. The first $N_{\rm f}$ fermionic quark masses can be identified with those of the dynamical quarks, $m_1,\ldots,m_{N_{\rm f}}$. The remaining $k_{\rm b}$ fermionic quark masses,
as well as those of the bosonic quarks, are from valence quarks.
To get the corresponding result for odd $N_{\rm f}$, one of the bosonic quark masses has to be taken to infinity, yielding a row and a column in the Pfaffian which comprise only the partition functions $Z_{N+N_{\rm f}-2}^{(0,1)}(\kappa_{{\rm f},j})$ and $Z_{N+N_{\rm f}+2}^{(1,0)}(\kappa_{{\rm b},j})$. The Pfaffian structure~\eref{part.finite.even} carries over to the limit $N\to\infty$; the expressions of the partition functions in the hard edge limit are given in~\cite{Kanazawa:2018b}.
Another consequence of the Pfaffian form of $p(\Lambda)$ is that the singular values $\Lambda$ form a Pfaffian point process~\cite{Mehta_book}. This means that each $k$-point correlation function,
\begin{equation}
R_{N}^{(k)}(\lambda_1,\ldots,\lambda_k)=\frac{N!}{(N-k)!}\int d\lambda_{k+1}\cdots d\lambda_N \; p(\Lambda),
\end{equation}
can be represented as a $(2k)\times(2k)$ Pfaffian, see~\ref{app:Pfaff},
\begin{equation}
\fl\eqalign{
R_{N}^{(k)} (\lambda_1,\dots,\lambda_k) =(-1)^{k(k-1)/2} {\rm Pf\,}\left[\begin{array}{c|c}
W_{N}(\lambda_a,\lambda_b) & G_{N}(\lambda_a,\lambda_c)
\\ \hline
- G_{N}(\lambda_d,\lambda_b) & K_{N}(\lambda_d,\lambda_c) \end{array}\right]_{a,b,c,d=1,\ldots,k}.\label{eq:Rkpointev}
}
\end{equation}
The minus sign results from the arrangement of the blocks, namely that the upper left corner comprises only the matrix $W_N$. We could also arrange the columns and rows such that each entry consists of a $2\times2$ block containing all three kernels, which would absorb the overall sign. Let us underline that the three kernels have a different form for even and odd $N$, as shown in section~\ref{sec:kernel}, while the structure~\eref{eq:Rkpointev} itself does not change.
The normalized level density is given by
\begin{equation}
\rho_N(\lambda)=\frac{1}{N}R_N^{(1)}(\lambda)=\frac{1}{N}G_{N}(\lambda,\lambda).
\end{equation}
We will make use of this relation in section~\ref{sec:kernel}.
The kernels can be given in terms of the three partition functions with two quarks. We have the following formulas~\cite{Kieburg:2012mix}, see also~\ref{app:Pfaff},
\begin{equation}\label{def.kernel}
\fl\eqalign{
K_N(\lambda_1,\lambda_2)=&\frac{C_N}{C_{N-2}}(\lambda_1^2-\lambda_2^2)Z_{N-2}^{(0,2)}(i\lambda_1,i\lambda_2),\\
G_N(\lambda_1,\lambda_2)=&\frac{1}{2\pi}\lim_{\epsilon\to0}\sum_{s=\pm1}s\frac{Z_{N}^{(1,1)}(i\lambda_1+s\epsilon,i\lambda_2)-1}{(\lambda_1-i s\epsilon)^2-\lambda_2^2}, \\
W_N(\lambda_1,\lambda_2)=&\frac{1}{(2\pi)^2}\lim_{\epsilon\to0}\sum_{s_1,s_2=\pm1}s_1s_2\frac{C_N}{C_{N+2}}((\lambda_1-i s_1\epsilon)^2-(\lambda_2-i s_2\epsilon)^2)\\
&\times Z_{N+2}^{(2,0)}(i\lambda_1+ s_1\epsilon,i\lambda_2+ s_2\epsilon).
}
\end{equation}
Again, this holds due to the general form of the joint probability density and does not require any details of the considered model, as can be readily shown by the algebraic rearrangement method proposed in~\cite{KieburgGuhr:2010a,KieburgGuhr:2010b,Kieburg:2012mix}.
We can also include $N_{\rm f}$ dynamical quarks with masses $m_1,\ldots,m_{N_{\rm f}}$ in the $k$-point correlation function~\eref{eq:Rkpointev}. This would yield a shift $N\to N+N_{\rm f}$ in the subscripts of the kernels and, additionally, we would get $N_{\rm f}$ rows and columns comprising $G_{N+N_{\rm f}}(\lambda_a,im_b)$, $K_{N+N_{\rm f}}(\lambda_a,im_b)$ and $K_{N+N_{\rm f}}(im_a,im_b)$. For odd $N_{\rm f}$ we can introduce an additional mass and send it afterwards to infinity. This would give us a further row and column with $\lim_{\epsilon\to0}\sum_{s=\pm1}s Z_{N+N_{\rm f}}^{(1,0)}(i\lambda_a+s\epsilon)/(2\pi)$ and $(C_{N+N_{\rm f}}/C_{N+N_{\rm f}-2})Z_{N+N_{\rm f}}^{(0,1)}(m_a)$.
Concluding this subsection: due to the very particular structure of the joint probability density of the eigenvalues of the Dirac operator $\mathcal{D}$, most quantities can be reduced to the knowledge of only a few functions. How the quantities depend on them is independent of the parity of $N$; only the explicit form of these few functions strongly depends on it.
We would like to point out that similar Pfaffian structures have been derived for several other two matrix models~\cite{Nagao:1998ysi,Nagao:1999nci,Adler2000,Nagao:2001db,Nagao2007JSP,Forrester2008,Akemann:2009fc,Akemann:2010tv,Akemann:2011,Kieburg:2012mix,Kieburg-Wilson}. Considering the fact that determinantal point processes can also be rewritten as Pfaffian ones \cite{Kieburg:2011ct}, Pfaffian point processes seem to be more natural than determinantal point processes.
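For concreteness, the defining property that makes these Pfaffian structures computable, ${\rm Pf}^2(A)=\det(A)$ for an antisymmetric matrix $A$, can be illustrated with a minimal (non-optimized) recursive implementation; this is only a didactic sketch, not the method used for the analytical results:

```python
import numpy as np

def pfaffian(A):
    """Pfaffian of an even-dimensional antisymmetric matrix via expansion
    along the first row: Pf(A) = sum_j (-1)^(j-1) A[0,j] Pf(A with rows/cols 0,j removed)."""
    n = A.shape[0]
    if n == 0:
        return 1.0
    total = 0.0
    for j in range(1, n):
        idx = [k for k in range(n) if k not in (0, j)]
        total += (-1) ** (j - 1) * A[0, j] * pfaffian(A[np.ix_(idx, idx)])
    return total

rng = np.random.default_rng(4)
M = rng.normal(size=(6, 6))
A = M - M.T  # antisymmetric

assert np.isclose(pfaffian(A) ** 2, np.linalg.det(A))
```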
\section{Skew-Orthogonal Polynomials}\label{sec:poly}
Random matrix ensembles having a probability weight of the form~\eref{jpdf.Neven} and~\eref{jpdf.Nodd} can generally be solved with the method of skew-orthogonal polynomials~\cite{Mehta_book} or, for odd $N$, with a mixed version of bi-orthogonal and skew-orthogonal polynomials~\cite{Kieburg:2012mix}, see~\ref{app:Pfaff}.
As we have seen, it is enough to consider the \emph{quenched limit} ($N_{\rm f}=0$)~\cite{Verbaarschot:2000dy,Akemann:2007rf}, since the theory with dynamical quarks can easily be constructed from it. Therefore we construct only the polynomials corresponding to the quenched weight.
Let us denote by
\begin{equation}
\fl \langle f(WW^\dagger)\rangle_j^{(\alpha,\beta)}=\left(\frac{\alpha^2-\beta^2}{\pi^{2}}\right)^{j^2/2}\int_{\mathbb{C}^{j\times j}}dW\; f(WW^\dagger)\;e^{-\alpha\Tr WW^\dagger+\beta\Tr[W^2+(W^\dagger)^2]/2}
\end{equation}
the average of a function $f$ over a $j\times j$ complex matrix $W$. In this definition the two parameters $\alpha$ and $\beta$ are independent, which is advantageous at a particular step of the calculation below. The random matrix model~\eref{RMT-model} corresponds to $(\alpha,\beta)=(\eta_+,\eta_-)$.
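The normalization of this average can be checked for $j=1$: writing $W=x+iy$, one has $\Tr WW^\dagger=x^2+y^2$ and $\Tr[W^2+(W^\dagger)^2]/2=x^2-y^2$, so the weight factorizes into two Gaussians whose integral $\pi/\sqrt{\alpha^2-\beta^2}$ cancels the prefactor. A numerical sketch (illustrative values; scipy assumed):

```python
import numpy as np
from scipy.integrate import quad

alpha, beta = 1.2, 0.5   # illustrative values with alpha > beta > 0

# the j = 1 weight factorizes into Gaussians in x = Re W and y = Im W
ix, _ = quad(lambda x: np.exp(-(alpha - beta) * x**2), -np.inf, np.inf)
iy, _ = quad(lambda y: np.exp(-(alpha + beta) * y**2), -np.inf, np.inf)

# <1>_1 = sqrt((alpha^2 - beta^2)/pi^2) * (pi / sqrt(alpha^2 - beta^2)) = 1
norm = np.sqrt((alpha**2 - beta**2) / np.pi**2) * ix * iy
assert abs(norm - 1) < 1e-8
```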
Following the approach in~\cite{Akemann:2010mt}, see also~\ref{app:Pfaff}, we define two kinds of polynomials via Heine-like formulas,
\begin{equation}\label{def:q}
q_j^{(\alpha,\beta)}(x^2)=\langle \det(x^2\leavevmode\hbox{\small1\kern-3.8pt\normalsize1}_j-WW^\dagger)\rangle_j^{(\alpha,\beta)}
\end{equation}
and
\begin{equation}\label{def:tildq}
\widetilde{q}_j^{(\alpha,\beta)}(x^2)=\langle
(x^2+\Tr WW^\dagger+c_j) \det(x^2\leavevmode\hbox{\small1\kern-3.8pt\normalsize1}_j-WW^\dagger) \rangle_j^{(\alpha,\beta)}
\end{equation}
with $c_j$ being arbitrary constants which can be adjusted appropriately at the end. The polynomials $q_j^{(\alpha,\beta)}(x^2)$ are of order $j$ in $x^2$ and the polynomials $\widetilde{q}_j^{(\alpha,\beta)}(x^2)$ are of order $j+1$. When we set $(\alpha,\beta)=(\eta_+,\eta_-)$ we use the shorthand notation $q_j=q_j^{(\eta_+,\eta_-)}$ and $\widetilde{q}_j=\widetilde{q}_j^{(\eta_+,\eta_-)}$. Moreover we define the skew-symmetric products
\begin{equation}
\eqalign{
\langle f_1|f_2\rangle_{\rm ev}=&-\langle f_2|f_1\rangle_{\rm ev}=\int_{\mathbb{R}_+^2}d\lambda_1d\lambda_2\; G(\lambda_1,\lambda_2)f_1(\lambda_1)f_2(\lambda_2),\\
\langle f_1|f_2\rangle_{\rm odd}=&-\langle f_2|f_1\rangle_{\rm odd}=\int_{\mathbb{R}_+^2}d\lambda_1d\lambda_2\; \widetilde{G}(\lambda_1,\lambda_2)f_1(\lambda_1)f_2(\lambda_2)\\
}
\end{equation}
for any integrable functions $f_1,f_2$.
The subscripts refer to even and odd $N$.
When using the algebraic rearrangement method of~\cite{KieburgGuhr:2010b}, see also~\ref{app:Pfaff}, we notice that the polynomials are proportional to Pfaffians, cf. Eq.~\eref{pol-construct}. Due to this Pfaffian structure, the polynomials satisfy the following orthogonality relations by construction (for any $b\in\mathbb{N}_0$):
\begin{equation}
\eqalign{
\langle \lambda^{2a}|q_{2b}\rangle_{\rm ev}=\langle \lambda^{2a}|\widetilde{q}_{2b}\rangle_{\rm ev}=0,\quad\forall a=0,\ldots,b-1,\\
\langle \lambda^{2a}|q_{2b+1}\rangle_{\rm odd}=\langle \lambda^{2a}|\widetilde{q}_{2b+1}\rangle_{\rm odd}=0,\quad\forall a=0,\ldots,b,\\
\int_0^\infty d\lambda\; g(\lambda) q_{2b+1}(\lambda^2)=\int_0^\infty d\lambda \;g(\lambda) \widetilde{q}_{2b+1}(\lambda^2)=0.
}
\end{equation}
This is the foundation of our choice for the skew-orthogonal polynomials in section~\ref{sec:kernel}.
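The simplest instance of the last relation can be made concrete: from the contour formula~\eref{pol.q} derived below one finds $q_1(x^2)=x^2-(1+\mu^2)$, and the integral $\int_0^\infty d\lambda\,g(\lambda)\,q_1(\lambda^2)$ indeed vanishes. A numerical sketch (illustrative $\mu$; scipy assumed):

```python
import numpy as np
from scipy.integrate import quad
from scipy.special import i0e

mu = 0.6                             # illustrative coupling
eta_m = (1 - mu**2) / (4 * mu**2)    # eta_-, with eta_+ = eta_- + 1/2

# one-point weight g(lambda), written with the scaled Bessel function i0e
g = lambda lam: 2 * np.sqrt(np.pi) * lam * np.exp(-lam**2 / 2) * i0e(eta_m * lam**2)
q1 = lambda x2: x2 - (1 + mu**2)     # q_1 from the contour formula

val, _ = quad(lambda lam: g(lam) * q1(lam**2), 0, np.inf)
assert abs(val) < 1e-6               # skew-orthogonality to the weight g
```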
Before proceeding let us find explicit representations for the two kinds of polynomials $q_j^{(\alpha,\beta)}$ and $\widetilde{q}_j^{(\alpha,\beta)}$. We first consider $q_j^{(\alpha,\beta)}$ and follow the ideas of the supersymmetry method~\cite{Zirnbauer,Guhr}. We refer to~\cite{Berezin} for a mathematical introduction to supersymmetry. In the first step we rewrite the determinant as a Gaussian integral over a $j$-dimensional complex Grassmann-valued vector $\psi$,
\begin{equation}
\det(x^2\leavevmode\hbox{\small1\kern-3.8pt\normalsize1}_j-WW^\dagger)\propto\int d\psi \; \exp(x^2\psi^\dagger\psi+\Tr WW^\dagger\psi\psi^\dagger).
\end{equation}
We omit the overall constants at the moment since we know that the polynomials are given in monic normalization, $q_j^{(\alpha,\beta)}(x^2)=x^{2j}+\ldots$. In the next step we employ the Hubbard-Stratonovich transformation with a Hermitian matrix $H$,
\begin{equation}
\fl e^{\beta\Tr[W^2+(W^\dagger)^2]/2}\propto\int_{{\rm Herm}(j)} d H \exp\left[-\beta\Tr WW^\dagger-\frac{1}{2\beta}\Tr H^2+\Tr H(W+W^\dagger)\right].
\end{equation}
After integration over $W$ we obtain
\begin{equation}
\fl\eqalign{
q_j^{(\alpha,\beta)}(x^2)\propto&\int_{{\rm Herm}(j)} d H\int d\psi\;\det[(\alpha+\beta)\leavevmode\hbox{\small1\kern-3.8pt\normalsize1}_j-\psi\psi^\dagger]^{-j}\\
&\times\exp\left[x^2\psi^\dagger\psi-\frac{1}{2\beta}\Tr H^2\left(\leavevmode\hbox{\small1\kern-3.8pt\normalsize1}_j-2\beta[(\alpha+\beta)\leavevmode\hbox{\small1\kern-3.8pt\normalsize1}_j-\psi\psi^\dagger]^{-1}\right)\right].
}
\end{equation}
The Gaussian integral over $H$ can be performed via the identity
\begin{equation}
\int_{{\rm Herm}(j)} d H \exp(-\Tr H^2 K)\propto\frac{1}{\sqrt{\det(K\otimes\leavevmode\hbox{\small1\kern-3.8pt\normalsize1}_j+\leavevmode\hbox{\small1\kern-3.8pt\normalsize1}_j\otimes K)}}\;,
\end{equation}
which is valid for any positive definite Hermitian matrix $K$ and can be proven by spectrally decomposing $K$ and then integrating over each matrix entry of $H$ separately. It remains to simplify this expression when we set $K=\leavevmode\hbox{\small1\kern-3.8pt\normalsize1}_j-2\beta[(\alpha+\beta)\leavevmode\hbox{\small1\kern-3.8pt\normalsize1}_j-\psi\psi^\dagger]^{-1}$. For this simplification we make use of the identity
\begin{equation}
\det(A\otimes\leavevmode\hbox{\small1\kern-3.8pt\normalsize1}_j+B\otimes\psi\psi^\dagger)=\frac{\det A^{j+1}}{\det(A+B\psi^\dagger\psi)}
\end{equation}
with arbitrary matrices $A$ and $B$, several times. In the end we arrive at
\begin{equation}
\eqalign{
q_j^{(\alpha,\beta)}(x^2)\propto&\int d\psi\;\frac{(\alpha^2-\beta^2-\alpha \psi^\dagger\psi)^{j+1}}{\sqrt{\alpha^2-\beta^2-2\alpha\psi^\dagger\psi+(\psi^\dagger\psi)^2}}\,e^{x^2\psi^\dagger\psi}.
}
\end{equation}
Note that everything depends only on $\psi^\dagger\psi$. Hence, we can employ the superbosonization formula~\cite{Sommers:2007,Zirnbauer:2008,Kieburg:2008} and replace the integration over $\psi$ by an integration over a phase, $\psi^\dagger\psi\to z$. After proper normalization this yields
\begin{equation}\label{pol.q.gen}
\eqalign{
q_j^{(\alpha,\beta)}(x^2)=&\frac{j!}{(\alpha^2-\beta^2)^{j+1/2}}\oint \frac{d z}{2\pi i\,z^{j+1}}\frac{(\alpha^2-\beta^2-\alpha z)^{j+1}}{\sqrt{\alpha^2-\beta^2-2\alpha z+z^2}}\,e^{x^2z}.
}
\end{equation}
The contour only encircles the origin counter-clockwise. Changing $z\to(\alpha^2-\beta^2) z/\alpha$ we can rewrite the polynomial as
\begin{equation}
\fl\eqalign{
q_j^{(\alpha,\beta)}(x^2)=&j!\left(\frac{\alpha}{\alpha^2-\beta^2}\right)^j\oint \frac{d z}{2\pi i\,z^{j+1}}\frac{(1- z)^{j}}{\sqrt{1-(\beta/\alpha)^2z^2/(1-z)^2}}\exp\left(\frac{\alpha^2-\beta^2}{\alpha}x^2z\right).
}
\end{equation}
When expanding the square root in $(\beta/\alpha)^2$ we can identify the Laguerre polynomials
\begin{equation}\label{Laguerre}
\eqalign{
L_k\left(\frac{\alpha^2-\beta^2}{\alpha}x^2\right) &=(-1)^k\oint \frac{d z}{2\pi i\,z^{k+1}}(1- z)^{k}\exp\left(\frac{\alpha^2-\beta^2}{\alpha}x^2z\right)\\
&=\frac{(-1)^k}{k!}\left(\frac{\alpha^2-\beta^2}{\alpha}x^2\right)^k+\ldots
}
\end{equation}
This yields a more explicit expression in terms of a finite sum,
\begin{equation}
\eqalign{
q_{j}^{(\alpha,\beta)}(x^2) = & (-1)^j j! \left(\frac{\alpha}{\alpha^2-\beta^2}\right)^j
\sum_{l=0}^{\lfloor j/2 \rfloor}
\Bigg(\begin{array}{c} 2l \\ l \end{array}\Bigg)
\Big(\frac{\beta}{2\alpha}\Big)^{2l}L_{j-2l}
\left(\frac{\alpha^2-\beta^2}{\alpha}x^2\right).
\label{eq:L3r}
}
\end{equation}
Here we used the floor function $\lfloor j/2 \rfloor$, which yields the largest integer smaller than or equal to $j/2$.
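For even $j$ the equivalence of the contour representation~\eref{pol.q.gen} and the Laguerre sum~\eref{eq:L3r} is easy to verify numerically: the Taylor coefficient can be extracted by averaging over a circle lying inside the branch points $z=\alpha\mp\beta$ of the square root (a sketch with illustrative values; scipy assumed):

```python
import numpy as np
from math import comb, factorial
from scipy.special import eval_laguerre

alpha, beta, deg, x = 1.3, 0.4, 4, 0.9   # illustrative values; deg even
c = (alpha**2 - beta**2) / alpha

# Laguerre sum
q_lag = factorial(deg) * (alpha / (alpha**2 - beta**2))**deg * sum(
    comb(2 * l, l) * (beta / (2 * alpha))**(2 * l)
    * eval_laguerre(deg - 2 * l, c * x**2)
    for l in range(deg // 2 + 1)
)

# contour form: extract [z^deg] on a circle of radius r < alpha - beta,
# where the argument of the square root stays in the right half-plane
M, r = 512, 0.45
z = r * np.exp(2j * np.pi * np.arange(M) / M)
f = (alpha**2 - beta**2 - alpha * z)**(deg + 1) \
    / np.sqrt(alpha**2 - beta**2 - 2 * alpha * z + z**2) * np.exp(x**2 * z)
q_cont = factorial(deg) / (alpha**2 - beta**2)**(deg + 0.5) \
    * np.real(np.mean(f * z**(-deg)))

assert abs(q_lag - q_cont) < 1e-8 * max(1.0, abs(q_lag))
```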
An expression analogous to~\eref{pol.q.gen} can be derived for $\widetilde{q}_j^{(\alpha,\beta)}$. In fact it can be obtained entirely from $q_j^{(\alpha,\beta)}$. Recalling the definition~\eref{def:tildq}, we notice that the term $x^2+c_j$ can be pulled out of the integral, such that these terms are proportional to $q_j^{(\alpha,\beta)}$. The term with $\Tr WW^\dagger$ can be generated by a derivative in $\alpha$. However, we then also have to differentiate the normalization constant, but this only yields a shift in the arbitrary constants $c_j$. With a slight abuse of notation, we denote the new constants also by $c_j$. We have
\begin{equation}
\widetilde{q}_j^{(\alpha,\beta)}(x^2)=\left(-\partial_\alpha+x^2+c_j\right)q_j^{(\alpha,\beta)}(x^2).
\end{equation}
When applying this relation to the result~\eref{pol.q.gen}, we find (after shifting $c_j$ again)
\begin{equation}\label{pol.tildq.gen}
\fl\eqalign{
\widetilde{q}_j^{(\alpha,\beta)}(x^2)=&\frac{j!}{(\alpha^2-\beta^2)^{j+1/2}}\oint \frac{d z}{2\pi i\,z^{j+1}}\frac{(\alpha^2-\beta^2-\alpha z)^{j}}{(\alpha^2-\beta^2-2\alpha z+z^2)^{3/2}}\,e^{x^2z}\\
&\times\bigl[(\alpha^2-\beta^2-\alpha z)(\alpha-z)+(\alpha^2-\beta^2-2\alpha z+z^2)\\
&\times(-(j+1)(2\alpha-z)+(x^2+c_j)(\alpha^2-\beta^2-\alpha z))\bigl].
}
\end{equation}
In terms of the Laguerre polynomials this expression reads
\begin{equation}
\fl\eqalign{
&\widetilde{q}_j^{(\alpha,\beta)}(x^2)=\tilde{c}_j q_j^{(\alpha,\beta)}(x^2)+ j! \left(\frac{\alpha}{\alpha^2-\beta^2}\right)^{j+1}
\sum_{l=0}^{\lfloor j/2 \rfloor}
\Bigg(\begin{array}{c} 2l \\ l \end{array}\Bigg)
\Big(\frac{\beta}{2\alpha}\Big)^{2l}
\\
&\qquad \times\Bigg[
\frac{\beta^2}{\alpha^2}(j-2l) L_{j-2l-1}\left(\frac{\alpha^2-\beta^2}{\alpha}x^2\right) - (j-2l+1)
L_{j-2l+1}\left(\frac{\alpha^2-\beta^2}{\alpha}x^2\right)
\Bigg]
\label{eq:L3r2}
}
\end{equation}
after an additional shift of the constant from $c_j$ to $\tilde{c}_j$.
Here we used the identities $\partial L_n(x)=n[L_n(x)-L_{n-1}(x)]/x$ and $xL_n(x)=(2n+1)L_n(x)-nL_{n-1}(x)-(n+1)L_{n+1}(x)$.
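Both Laguerre identities used here can be verified numerically; in the sketch below (ours) the derivative is evaluated via the standard relation $L_n'(x) = -L_{n-1}^{(1)}(x)$:

```python
import numpy as np
from scipy.special import eval_laguerre as L, eval_genlaguerre

x = np.linspace(0.1, 8.0, 40)


def deriv_identity_holds(n):
    # x L_n'(x) = n [L_n(x) - L_{n-1}(x)], using L_n'(x) = -L_{n-1}^{(1)}(x)
    return np.allclose(-x * eval_genlaguerre(n - 1, 1, x),
                       n * (L(n, x) - L(n - 1, x)))


def recurrence_identity_holds(n):
    # x L_n(x) = (2n+1) L_n(x) - n L_{n-1}(x) - (n+1) L_{n+1}(x)
    return np.allclose(x * L(n, x),
                       (2 * n + 1) * L(n, x) - n * L(n - 1, x) - (n + 1) * L(n + 1, x))
```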
Both results, Eqs.~\eref{pol.q.gen} and~\eref{pol.tildq.gen}, simplify when setting $(\alpha,\beta)=(\eta_+,\eta_-)$. Thus, we arrive at the main results of this subsection,
\begin{equation}\label{pol.q}
q_j(x^2)=j!\oint \frac{d z}{2\pi i\,z^{j+1}}\frac{(1-(1+\mu^2)z)^{j+1}}{\sqrt{1-2(1+\mu^2) z+4\mu^2z^2}}e^{x^2z}
\end{equation}
and
\begin{equation}\label{pol.tildq}
\fl\eqalign{
\widetilde{q}_j(x^2)=\;&j!\oint \frac{d z}{2\pi i\,z^{j+1}}\frac{(1-(1+\mu^2)z)^{j}}{(1-2(1+\mu^2) z+4\mu^2z^2)^{3/2}}e^{x^2z}\\
&\times\bigl[(1-(1+\mu^2)z)(1+\mu^2-4\mu^2z)+(1-2(1+\mu^2) z+4\mu^2z^2)\\
&\times(-(j+1)(2(1+\mu^2)-4\mu^2 z)+(x^2+c_j)(1-(1+\mu^2)z))\bigl].
}
\end{equation}
When we define the quotient
\begin{equation}
h_j=\frac{C_{j}}{C_{j+2}}=4\pi\mu^2(1-\mu^2)^{2j+1}j!(j+1)!
\end{equation}
with $C_{-n}=1$ for $n\in\mathbb{N}_0$, each pair $(q_j,\widetilde{q}_j)$ satisfies the normalization
\begin{equation}
\fl\eqalign{
\langle q_{2j}|\widetilde{q}_{2j}\rangle_{\rm ev}=\langle \lambda^{4j}|\widetilde{q}_{2j}\rangle_{\rm ev}=h_{2j}\quad{\rm and}\quad\langle q_{2j+1}|\widetilde{q}_{2j+1}\rangle_{\rm odd}=\langle \lambda^{4j+2}|\widetilde{q}_{2j+1}\rangle_{\rm odd}=h_{2j+1}.
}
\end{equation}
This can be readily checked by the Pfaffian representation~\eref{pol-construct} of the polynomials.
We are now well prepared to give explicit representations of the kernels~\eref{def.kernel}, since the partition functions for two flavors are directly given in terms of the skew-orthogonal polynomials, see~\cite{Kieburg:2012mix} and~\ref{app:Pfaff}.
\section{Kernels}\label{sec:kernel}
The skew-orthogonal polynomials are different for even and odd $N$ because the two-point weight changes, cf. Eqs.~\eref{def.G} and~\eref{def.tildG}. Therefore, the explicit form of the kernels~\eref{def.kernel} will also differ. We collect the results for even $N$ in subsection~\ref{sec:kernel.even} and for odd $N$ in subsection~\ref{sec:kernel.odd}.
\subsection{Even $N$}\label{sec:kernel.even}
For even $N$ the skew-orthogonal polynomials and their normalization constants are given by the triple $\{q_{2j},\widetilde{q}_{2j},h_{2j}\}_{j=0,\ldots,N/2-1}$, see~\ref{app:Pfaff}. Thus the kernels for the $k$-point correlation function~\eref{eq:Rkpointev} are given by
\begin{equation}\label{kernel.even}
\eqalign{
W_{N}(\lambda_1,\lambda_2)=&-G(\lambda_1,\lambda_2)-\int_{\mathbb{R}_+^2}dx_1dx_2\;G(x_1,\lambda_1)G(x_2,\lambda_2)K_{N}(x_1,x_2),\\
G_{N}(\lambda_1,\lambda_2)=&\int_0^\infty dx\; G(x,\lambda_1)K_{N}(x,\lambda_2),\\
K_{N}(\lambda_1,\lambda_2)=&\sum_{j=0}^{N/2-1}\frac{q_{2j}(\lambda_2^2)\widetilde{q}_{2j}(\lambda_1^2)-q_{2j}(\lambda_1^2)\widetilde{q}_{2j}(\lambda_2^2)}{4\pi\mu^2(1-\mu^2)^{4j+1}(2j)!(2j+1)!}.
}
\end{equation}
The level density at finite $N$ is then
\begin{equation}\label{level.density.even}
\rho_N(\lambda)=\int_0^\infty dx\;G(x,\lambda)\sum_{j=0}^{N/2-1}\frac{q_{2j}(\lambda^2)\widetilde{q}_{2j}(x^2)- q_{2j}(x^2)\widetilde{q}_{2j}(\lambda^2)}{4\pi\mu^2(1-\mu^2)^{4j+1}(2j)!(2j+1)!}.
\end{equation}
We show its behavior in Fig.~\ref{fig:even} and compare it with Monte Carlo simulations of the model~\eref{RMT-model} for small $N$.
\begin{figure}[t]
\centering
\includegraphics[width=\textwidth]{level-density-even.pdf}
\caption{\label{fig:even} The normalized level density
for the quenched ensemble at even, small matrix size ($N=4$, left plot, and $N=6$, right plot). The analytical result~\eref{level.density.even} (solid curves) is compared with Monte Carlo simulations (symbols) for the coupling parameters $\mu=0.1$ (red squares) and $\mu=0.9$ (green triangles). Each Monte Carlo ensemble consists of $10^5$ matrices drawn from the random matrix model~\eref{RMT-model}.}
\end{figure}
It is notable that for decreasing $\mu$ a discontinuity of the level density builds up at the origin. The reason is that the limit $\mu\to0$ is not uniform, see also~\cite{Bialas:2010hb}, where this was observed for the level density of the staggered Dirac operator in three dimensions. It can be understood from the level densities of the GUE and the chGUE: while the level density of the GUE is non-zero at the origin, it vanishes linearly for the chGUE, see~\cite{Mehta_book}.
Another important point is how the limit $\mu\to0$ is approached, compared to the GUE--chGUE interpolation in~\cite{Akemann:2011,Damgaard:2010cz,Akemann:2010em}, where chirality is broken. In~\cite{Akemann:2011} the authors considered the random matrix model~\eref{Wilson}. The particular form of this model implies that, regardless of how small $\mu$ is, chirality is broken and the density at the origin is finite. Our model~\eref{RMT-model} preserves chirality, which implies that the density always drops off linearly at the origin. The level repulsion reflected in this behavior results from the exact chiral pairs $(\lambda_j,-\lambda_j)$ of eigenvalues of $\mathcal{D}$, which repel each other and which are missing in the model~\eref{Wilson}. The regime where the interaction of the chiral pairs $(\lambda_j,-\lambda_j)$ takes place is of order $\mu$ for small $\mu$ and shows up in the level density about the origin, see Fig.~\ref{fig:even}.
The third point we want to emphasize is the merging of the eigenvalue peaks of $\mathcal{D}$ on the positive and negative half-axis for $\mu\to0$, cf. Fig.~\ref{fig:even}. The reason is that for the GUE we have on average only $N/2$ eigenvalues on the positive and on the negative axis, separately, which are represented by $N/2$ peaks in the level density. For the chGUE we have $N$ peaks, thus twice as many. This is also the reason why the width of the level density for $\mu\approx1$ is noticeably larger than the one for $\mu\approx0$; we note that the level density is always normalized to unity. One can also interpret this behavior differently. Since we plot in Fig.~\ref{fig:even} the singular values of $\mathcal{D}$, one has to compare with the level density of the eigenvalues of the GUE, while the singular values of the GUE are equivalent to a direct sum of two independent random matrices, see~\cite{Akemann:2001,Forrester:2006,Edelman:2014,Bornemann:2016}.
In comparison to the limit $\mu\to0$, the limit $\mu\to1$ seems to be less dramatic. The level density approaches this limit uniformly without any surprising features.
\subsection{Odd $N$}\label{sec:kernel.odd}
\begin{figure}[t]
\centering
\includegraphics[width=\textwidth]{level-density-odd.pdf}
\caption{\label{fig:odd} The normalized level density
for the quenched ensemble at odd matrix dimension $N=3$ (left plot) and $N=5$ (right plot). Again we compare the analytical result~\eref{level.density.odd} (solid curves) with Monte Carlo simulations (symbols). The coupling parameters are, as before, $\mu=0.1$ (red squares) and $\mu=0.9$ (green triangles). For the Monte Carlo simulations we have drawn $10^5$ matrices from the random matrix model~\eref{RMT-model}, such that the statistical error is about one percent.}
\end{figure}
Let us now consider the case of odd $N$. Then the skew-orthogonal polynomials and their normalizations are $\{q_{2j+1},\widetilde{q}_{2j+1},h_{2j+1}\}_{j=0,\ldots,(N-3)/2}$, see~\ref{app:Pfaff}. We underline that the polynomial of order zero, which is $1$, is not missing; it corresponds to the one-point weight $g(\lambda)$, see Eq.~\eref{def.g}. Therefore the kernels have the form
\begin{equation}\label{kernel.odd}
\fl\eqalign{
W_{N}(\lambda_1,\lambda_2)=&-\widetilde{G}(\lambda_1,\lambda_2)-\int_{\mathbb{R}_+^2}dx_1dx_2\; \widetilde{G}(x_1,\lambda_1)\widetilde{G}(x_2,\lambda_2)K_{N}(x_1,x_2),\\
G_{N}(\lambda_1,\lambda_2)=&\frac{\lambda_1 }{|\mu|}e^{-\eta_+\lambda_1^2} I_0(\eta_-\lambda_1^2)+\int_0^\infty dx \; \widetilde{G}(x,\lambda_1)K_{N}(x,\lambda_2),\\
K_{N}(\lambda_1,\lambda_2)=&\sum_{j=0}^{(N-3)/2}\frac{q_{2j+1}(\lambda_2^2)\widetilde{q}_{2j+1}(\lambda_1^2)-q_{2j+1}(\lambda_1^2)\widetilde{q}_{2j+1}(\lambda_2^2)}{4\pi\mu^2(1-\mu^2)^{4j+3}(2j+1)!(2j+2)!}.
}
\end{equation}
We want to point out the additional term in $G_N$ in comparison to~\eref{kernel.even}, which results from $g(\lambda)$ and essentially describes the eigenvalue closest to the origin.
The formulas~\eref{kernel.odd} imply the level density
\begin{equation}\label{level.density.odd}
\fl\eqalign{
\rho_N(\lambda)=\;&\frac{\lambda }{|\mu|}e^{-\eta_+\lambda^2} I_0(\eta_-\lambda^2)\\
&+\int_0^\infty dx \; \widetilde{G}(x,\lambda)\sum_{j=0}^{(N-3)/2}\frac{q_{2j+1}(\lambda^2)\widetilde{q}_{2j+1}(x^2)- {q_{2j+1}(x^2)}\widetilde{q}_{2j+1}(\lambda^2)}{4\pi\mu^2(1-\mu^2)^{4j+3}(2j+1)!(2j+2)!}.
}
\end{equation}
Its behavior and the comparison with Monte Carlo simulations are displayed in Fig.~\ref{fig:odd}.
The behavior of the level density~\eref{level.density.odd} in the limits $\mu\to0,1$ is more or less the same as for even $N$. The only difference is the number of peaks: while for even $N$ the density converges to a distribution with $N/2$ peaks in the limit $\mu\to0$, the number is $(N+1)/2$ for odd $N$. At the origin an unpaired peak merges with the one on the negative axis. This merging is non-uniform, as we can see in Fig.~\ref{fig:odd}. The reason is the same as in the even case and has its origin in the preserved chirality, in complete contrast to the results in~\cite{Akemann:2011}, cf. Eq.~\eref{Wilson}, where chirality is broken.
\section{\boldmath Limits $\mu\to0$ and $\mu\to1$ at finite $N$}\label{sec:lim.mu}
As we have already seen in section~\ref{sec:kernel}, the limits $\mu\to0$ and $\mu\to1$ are approached differently. In this section we want to understand analytically how they are approached. For this purpose we consider two quantities: the first is the polynomials $q_j$, see~\eref{def:q}, and the second is the joint probability density of the singular values $\Lambda$, see Eqs.~\eref{jpdf.Neven} and~\eref{jpdf.Nodd}. The limit $\mu\to0$ is analyzed in subsection~\ref{sec:lim.mu0} and the limit $\mu \to 1$ in subsection~\ref{sec:lim.mu1}.
\subsection{Limit $\mu\to0$}\label{sec:lim.mu0}
We want to consider the limit $\mu\to0$ for the polynomial $q_j$, see Eq.~\eref{pol.q}, which is the average of a characteristic polynomial. When setting $\mu=0$ we have the average
\begin{equation}
q_j(x^2,\mu=0)=\langle\det(x^2\leavevmode\hbox{\small1\kern-3.8pt\normalsize1}_j-H^2)\rangle_{\rm GUE},
\end{equation}
which is an average over a $j\times j$ dimensional GUE matrix $H$.
Since the GUE yields a determinantal point process, we already know the answer~\cite{Mehta_book}:
\begin{equation}\label{pol.q.mu0}
\eqalign{
q_j(x^2,\mu=0)&=(-1)^j\frac{H_{j+1}(x)H_{j}(-x)-H_{j+1}(-x)H_{j}(x)}{2x}\\
&=(-2)^j\left\lceil\frac{j}{2}\right\rceil!\left\lfloor\frac{j}{2}\right\rfloor!\, L_{\lceil j/2\rceil}^{(-1/2)}\left(\frac{x^2}{2}\right)L_{\lfloor j/2\rfloor}^{(+1/2)}\left(\frac{x^2}{2}\right)
}
\end{equation}
with $H_j$ the monic Hermite polynomials with respect to the weight $e^{-x^2/2}$, $L_j^{(\nu)}$ the generalized Laguerre polynomials, and the floor ($\lfloor.\rfloor$) and ceiling ($\lceil.\rceil$) functions as above. This result can be checked with the help of Eq.~\eref{eq:L3r} with $(\alpha,\beta)=(\eta_+,\eta_-)$ at $\mu=0$.
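The check can also be carried out numerically (a sketch of ours): the contour integral~\eref{pol.q} at $\mu=0$ is evaluated by the trapezoid rule on a small circle (the radius $r=0.25$ stays inside the branch point at $z=1/2$) and compared with the Hermite/Laguerre product form:

```python
from math import ceil, factorial, floor

import numpy as np
from scipy.special import eval_genlaguerre


def q_mu0_contour(j, x2, r=0.25, M=2048):
    """Trapezoid-rule evaluation of the contour form of q_j at mu = 0."""
    th = 2 * np.pi * np.arange(M) / M
    z = r * np.exp(1j * th)
    f = (1 - z) ** (j + 1) / np.sqrt(1 - 2 * z) * np.exp(x2 * z)
    return factorial(j) * np.mean(f / z ** j).real  # picks the z^j coefficient


def q_mu0_product(j, x2):
    """Product of generalized Laguerre polynomials with ceil/floor indices."""
    c, fl = ceil(j / 2), floor(j / 2)
    return ((-2) ** j * factorial(c) * factorial(fl)
            * eval_genlaguerre(c, -0.5, x2 / 2) * eval_genlaguerre(fl, 0.5, x2 / 2))
```

For instance, $j=1$ gives $x^2-1$, the averaged characteristic polynomial of a $1\times1$ GUE matrix.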
From the result~\eref{pol.q.mu0} one can guess that the singular value statistics of the ensemble factorize for $\mu\to0$, as is indeed known for the singular values of the GUE, see~\cite{Akemann:2001,Forrester:2006,Edelman:2014,Bornemann:2016}. The joint probability density of the eigenvalues $E$ of the GUE is given by~\cite{Mehta_book}
\begin{equation}\label{jpdf.ev.GUE}
p_{\rm GUE}^{\rm (ev)}(E)=\frac{1}{N!}\left(\prod_{j=0}^{N-1}\frac{1}{\sqrt{2\pi}j!}\right)\Delta_N^2(E)\exp\left(-\frac{1}{2}\Tr E^2\right).
\end{equation}
The singular values $\Lambda$ are the modulus of the eigenvalues, i.e. $\lambda_j=|E_j|$. Hence we have to sum over the signs of the eigenvalues. Since the Gaussian is even, the monomials of the two Vandermonde determinants can only combine as even order with even order and odd with odd. Therefore the joint probability density of the singular values $\Lambda$ of a matrix $H$ drawn from a GUE is~\cite{Akemann:2001,Forrester:2006,Edelman:2014,Bornemann:2016}
\begin{equation}\label{jpdf.sv.GUE}
\fl p_{\rm GUE}^{\rm (sv)}(\Lambda)=\left(\prod_{j=0}^{N-1}\frac{2}{\sqrt{2\pi}j!}\right)\frac{\Delta_{\lceil N/2\rceil}^2(\Lambda_{\rm ev}^2)}{(\lceil N/2\rceil)!}\frac{\Delta_{\lfloor N/2\rfloor}^2(\Lambda_{\rm odd}^2)}{(\lfloor N/2\rfloor)!}\det\Lambda_{\rm odd}^2\exp\left(-\frac{1}{2}\Tr \Lambda^2\right),
\end{equation}
where $\Lambda={\rm diag\,}(\Lambda_{\rm ev},\Lambda_{\rm odd})$, $\Lambda_{\rm ev}={\rm diag\,}(\lambda_1,\ldots,\lambda_{\lceil N/2\rceil})$, and $\Lambda_{\rm odd}={\rm diag\,}(\lambda_{\lceil N/2\rceil+1},\ldots,\lambda_{N})$. Hence it is a sum of two complex Laguerre ensembles, one with index $-1/2$ and dimension $\lceil N/2\rceil$ and the other with index $+1/2$ and dimension $\lfloor N/2\rfloor$. These two Laguerre ensembles correspond to the Gaussian antisymmetric unitary ensemble of even and odd dimension (GAOE, see~\cite{Kieburg:2017rrk} for the notation), meaning Gaussian distributed imaginary anti-symmetric matrices. This factorization was also observed in section~\ref{sec:kernel}.
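As a consistency check of the prefactors (our own, for the smallest cases $N=2,3$), the joint density~\eref{jpdf.sv.GUE} can be integrated numerically and should normalize to unity:

```python
import numpy as np
from scipy.integrate import dblquad, tplquad

# N = 2: one even-group and one odd-group singular value
p2 = lambda l2, l1: (2 / np.pi) * l2 ** 2 * np.exp(-(l1 ** 2 + l2 ** 2) / 2)
norm2, _ = dblquad(p2, 0, np.inf, 0, np.inf)

# N = 3: even group {l1, l2} (index -1/2), odd group {l3} (index +1/2)
C3 = 2 / (2 * np.pi) ** 1.5
p3 = lambda l3, l2, l1: (C3 * (l2 ** 2 - l1 ** 2) ** 2 * l3 ** 2
                         * np.exp(-(l1 ** 2 + l2 ** 2 + l3 ** 2) / 2))
norm3, _ = tplquad(p3, 0, np.inf, 0, np.inf, 0, np.inf)
```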
From this picture it becomes clear what the level density of $\Lambda$ is: it is the sum of the level densities of $\Lambda_{\rm ev}$ and $\Lambda_{\rm odd}$. This is in agreement with the level density of the GUE, see~\cite{Mehta_book,Akemann:2001,Forrester:2006,Edelman:2014,Bornemann:2016}, because Hermite polynomials of even order, $H_{2j}$, can be expressed in terms of the generalized Laguerre polynomials $L_{j}^{(-1/2)}$, and those of odd order, $H_{2j+1}$, in terms of the Laguerre polynomials $L_{j}^{(+1/2)}$.
We can derive the result above from the joint probability distributions~\eref{jpdf.Neven} and~\eref{jpdf.Nodd} by considering the asymptotics of the two-point weight
\begin{equation}
G(\lambda_1,\lambda_2)\overset{|\mu|\ll1}{\propto}\sum_{s_1,s_2=\pm1}\frac{(s_1\lambda_1-s_2\lambda_2)^2}{\lambda_1^2-\lambda_2^2}\exp\left(-\frac{\lambda_1^2+\lambda_2^2}{2}\right)
\end{equation}
and the one-point weight
\begin{equation}
g(\lambda)\overset{|\mu|\ll1}{\propto}\exp\left(-\frac{\lambda^2}{2}\right).
\end{equation}
The asymptotics of the two-point weight can be found by noticing that the saddle points of the integral in Eq.~\eref{def.G} are given by $(\cos[\vartheta+\pi/4],\sin[\vartheta+\pi/4])=(\pm\lambda_2,\pm\lambda_1)/\sqrt{\lambda_1^2+\lambda_2^2}$ for one term of the {\em hyperbolic sine} and $(\cos[\vartheta+\pi/4],\sin[\vartheta+\pi/4])=(\pm\lambda_1,\pm\lambda_2)/\sqrt{\lambda_1^2+\lambda_2^2}$ for the other term. The Gaussian terms can be pulled out of the Pfaffian, and for the remaining term we use
\begin{equation}
\fl{\rm Pf\,}\left[\sum_{s_a,s_b=\pm1}\frac{(s_a\lambda_a-s_b\lambda_b)^2}{\lambda_a^2-\lambda_b^2}\right]_{a,b=1,\ldots,N}=\pm \sum_{s_1,\ldots,s_N=\pm1}\frac{\Delta_N^2(s_1\lambda_1,\ldots,s_N\lambda_N)}{\Delta_N(\Lambda^2)}
\end{equation}
for even $N$, and similarly for odd $N$. The Vandermonde determinant in the denominator cancels against those in Eqs.~\eref{jpdf.Neven} and~\eref{jpdf.Nodd}. The sum over the signs and the regrouping of the singular values into $\Lambda_{\rm ev}$ and $\Lambda_{\rm odd}$ yield the expected result~\eref{jpdf.sv.GUE}.
This kind of limit is in the spirit of~\cite{ForresterKieburg}, where the Pfaffian structure results from the Schur Pfaffian identity~\cite{Schur}
\begin{equation}\label{Pfaff-ident}
\frac{\Delta_{N}^2(\Lambda)}{\Delta_{N}(\Lambda^2)}=\left\{\begin{array}{cl} \displaystyle {\rm Pf\,}\left[\frac{\lambda_b-\lambda_a}{\lambda_b+\lambda_a}\right]_{a,b=1,\ldots,N}, & {\rm for}\ N\ {\rm even}, \\ {\rm Pf\,}\left[\begin{array}{c|c} 0 & 1\ \cdots\ 1 \\ \hline \begin{array}{c} -1 \\ \vdots \\ -1 \end{array} & \displaystyle\frac{\lambda_b-\lambda_a}{\lambda_b+\lambda_a} \end{array}\right]_{a,b=1,\ldots,N}, & {\rm for}\ N\ {\rm odd}. \end{array}\right.
\end{equation}
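The Schur Pfaffian identity is easily verified numerically for the smallest even case $N=4$ (a sketch of ours, using the explicit three-term expansion of a $4\times4$ Pfaffian):

```python
import numpy as np


def pf4(A):
    """Pfaffian of a 4x4 antisymmetric matrix."""
    return A[0, 1] * A[2, 3] - A[0, 2] * A[1, 3] + A[0, 3] * A[1, 2]


def vandermonde(v):
    n = len(v)
    return np.prod([v[b] - v[a] for a in range(n) for b in range(a + 1, n)])


lam = np.array([0.7, 1.3, 2.1, 3.4])
A = (lam[None, :] - lam[:, None]) / (lam[None, :] + lam[:, None])  # (l_b - l_a)/(l_b + l_a)
lhs = vandermonde(lam) ** 2 / vandermonde(lam ** 2)
```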
\subsection{Limit $\mu\to1$}\label{sec:lim.mu1}
As before, we first consider the polynomial $q_j$, see Eq.~\eref{pol.q}, because its interpretation is simpler. The limit $\mu\to1$ corresponds to the chGUE. Thus the averaged characteristic polynomial is proportional to the Laguerre polynomial $L_{j}(x^2/2)$, due to the scaling of the distribution~\eref{distribution}. Indeed, when setting $\mu=1$ in Eq.~\eref{pol.q} we have
\begin{equation}\label{pol.q.mu1}
q_j(x^2,\mu=1)=j!\oint \frac{d z}{2\pi i\,z^{j+1}}(1-2z)^{j}e^{x^2z}=(-2)^j j!L_{j}\left(\frac{x^2}{2}\right)
\end{equation}
confirming our expectations. We used Eq.~\eref{Laguerre} in the second equality.
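A quick numerical confirmation (ours): extracting the coefficient of $z^j$ in $(1-2z)^{j}e^{x^2 z}$ explicitly and comparing with the Laguerre polynomial,

```python
from math import comb, factorial

import numpy as np
from scipy.special import eval_laguerre


def q_mu1(j, x2):
    """j! times the coefficient of z^j in (1 - 2z)^j exp(x^2 z)."""
    return factorial(j) * sum(comb(j, l) * (-2) ** l * x2 ** (j - l) / factorial(j - l)
                              for l in range(j + 1))
```

For $j=1$ this reduces to $x^2-2=-2L_1(x^2/2)$.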
We can also derive the limit $\mu\to1$ to the chGUE on the level of the joint probability densities~\eref{jpdf.Neven} and~\eref{jpdf.Nodd}. This time we expand the two-point weight as
\begin{equation}
G(\lambda_1,\lambda_2)\overset{|\mu|\approx1}{\propto}\exp\left(-\frac{\lambda_1^2+\lambda_2^2}{2}\right)\sum_{a,b=0}^\infty G_{ab}(1-\mu^2)^{a+b}\lambda_1^{2a+1}\lambda_2^{2b+1}
\end{equation}
with $G_{ab}$ antisymmetric while the one-point weight is
\begin{equation}
g(\lambda)\overset{|\mu|=1}{\propto}\lambda\exp\left(-\frac{\lambda^2}{2}\right).
\end{equation}
Due to the skew-symmetry of the Pfaffian one can start the series of $G(\lambda_1,\lambda_2)$ with $a$ and $b$ at $1$ when $N$ is odd. The skew-symmetry is also the reason why we cannot just take the leading order term in $(1-\mu^2)$ but need the series, which can be minimally cut off at $a,b=N-1$. Pulling the factors $\lambda_j\exp\left[-\lambda_j^2/2\right]$ out of the Pfaffian and defining
\begin{equation}
\mathcal{G}=\{G_{ab}\}\quad{\rm and}\quad \mathcal{V}=\{\lambda_b^{2a}\},
\end{equation}
the double sum is equal to $\mathcal{V}^T\mathcal{G}\mathcal{V}$. To evaluate the Pfaffian we can employ
\begin{equation}\label{Pfaff-Vand}
{\rm Pf\,}[\mathcal{V}^T\mathcal{G}\mathcal{V}]=\det \mathcal{V}\ {\rm Pf\,} \mathcal{G}
\end{equation}
for even $N$ (and similarly for odd $N$). The Pfaffian ${\rm Pf\,} \mathcal{G}$ is a constant, while the determinant of the Vandermonde matrix $\mathcal{V}$ is equal to the Vandermonde determinant $\det\mathcal{V}=\Delta_N(\Lambda^2)$. This term, together with the other Vandermonde determinant in Eqs.~\eref{jpdf.Neven} and~\eref{jpdf.Nodd} and the product $\prod_{j=1}^N\lambda_j\exp\left[-\lambda_j^2/2\right]$, yields the joint probability density of the eigenvalues of the Laguerre ensemble of index $0$, see~\cite{Mehta_book}
\begin{equation}\label{jpdf.ev.LUE}
p_{\rm Lag}(\Lambda)=\frac{1}{N!}\left(\prod_{j=0}^{N-1}\frac{1}{2^{2j}(j!)^2}\right)\Delta_N^2(\Lambda^2)\det\Lambda\exp\left(-\frac{1}{2}\Tr \Lambda^2\right).
\end{equation}
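The factorization~\eref{Pfaff-Vand}, valid for any square matrix sandwiching an antisymmetric one, can be checked numerically for $N=4$; note that $\det\mathcal{V}$ is then indeed the Vandermonde determinant $\Delta_4(\Lambda^2)$ (a sketch of ours):

```python
import numpy as np

rng = np.random.default_rng(0)


def pf4(A):
    """Pfaffian of a 4x4 antisymmetric matrix."""
    return A[0, 1] * A[2, 3] - A[0, 2] * A[1, 3] + A[0, 3] * A[1, 2]


M = rng.standard_normal((4, 4))
G = M - M.T                                          # random antisymmetric G
lam = np.array([0.5, 1.1, 1.9, 2.6])
V = lam[None, :] ** (2 * np.arange(4)[:, None])      # V_{ab} = lam_b^{2a}
```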
This limit arises in the way proposed in~\cite{Kieburg:2011ct}, where a non-trivial Pfaffian is created by rephrasing the Vandermonde determinant as in~\eref{Pfaff-Vand}. This has to be contrasted with subsection~\ref{sec:lim.mu0} for $\mu\to0$, where the Pfaffian arises in an entirely different way, from a determinantal point process.
\section{Conclusions}\label{sec:conclusion}
We computed the joint probability density of the chiral random matrix model~\eref{RMT-model} and studied its eigenvalue statistics at finite matrix dimension $N$. The statistics are governed by a Pfaffian point process, meaning that all observables depending only on the eigenvalues of the chiral random matrix $\mathcal{D}$ can be expressed in terms of a small number of functions, the kernels. This also carries over to the limit of large matrix dimension, see~\cite{Kanazawa:2018,Kanazawa:2018b}. We derived explicit formulas for these kernels by the method of skew-orthogonal polynomials. The analysis at large matrix dimension will be carried out in~\cite{Kanazawa:2018b}, since the calculations are very technical; a summary of these results has been reported in~\cite{Kanazawa:2018}. Inspired by physics, we particularly study the hard edge in those two works and derive the corresponding non-linear $\sigma$-model, which is the chiral perturbation theory of QCD.
The considered random matrix interpolates between the GUE ($\mu=0$) and the chGUE ($\mu=1$), as does the model for the Hermitian Wilson Dirac operator in~\cite{Akemann:2011,Damgaard:2010cz,Akemann:2010em}. However, our model preserves chirality throughout, while it is broken in~\cite{Akemann:2011,Damgaard:2010cz,Akemann:2010em}. This leads to a non-uniform convergence about the origin in the limit $\mu\to0$, while the convergence is uniform for the Hermitian Wilson Dirac operator (when the quark mass vanishes), cf.~\cite{Akemann:2011,Damgaard:2010cz,Akemann:2010em}. The reason is the exact chiral pairs of eigenvalues $(\lambda_j,-\lambda_j)$, which repel each other most strongly at the origin. This repulsion is absent for the Hermitian Wilson Dirac operator~\cite{Akemann:2011,Damgaard:2010cz,Akemann:2010em}. The implications of this behavior for applications in QCD will be studied in more detail in~\cite{Kanazawa:2018b}.
When considering our results in the limit $\mu\to0$ one has to be careful with the interpretation. The limit does not exactly yield the eigenvalue statistics of the GUE but its singular value statistics. Thus one has to compare the results rather with those in~\cite{Akemann:2001,Forrester:2006,Edelman:2014,Bornemann:2016}. The difference between the statistics is the sign of the eigenvalues, over which one has to average. This yields two independent eigenvalue spectra, equivalent to those of two GAOEs (Gaussian distributed imaginary antisymmetric matrices, see~\cite{Kieburg:2017rrk} for the notation), one of even and one of odd dimension. The extension of the present model to orthogonal and symplectic ensembles might be an interesting direction of future research.
\section*{Acknowledgements}
We thank Gernot Akemann for giving us comments on the first draft. Moreover, we acknowledge support by the RIKEN iTHES project (TK) and the German research council (DFG) via the CRC 1283: ``Taming uncertainty and profiting from randomness and low regularity in analysis, stochastics and their applications" (MK).
In a UWB relay channel, two diversity techniques, namely cooperative communication and UWB radios, are combined, and the system performance is improved considerably \cite{Sendonaris2003}.
The relay channel was first introduced in \cite{vandermeulen-aap-1971}. In \cite{covergamal1979}, the relay channel was studied in detail; e.g., special capacity results and the best achievable rates via two coding schemes, namely the decode-and-forward (DF) and estimate-and-forward (EF) strategies, were obtained. Further studies of the relay channel can be found in \cite{gamalaref1982, gamalzahedi2005, aleksicrazaghi2009, cover2007, hodtani2008, hodtani2009}.
Limited research has been done on capacity bounds for frequency-selective block-fading relay channels. Achievable rates using amplify-and-forward (AF) with network training, in which the source and destination nodes broadcast training symbols and each relay node carries out channel estimation, are analyzed in \cite{wang2006} for narrowband and wideband relaying over frequency-selective fading channels. In \cite{zolfa2009}, an upper bound and a DF lower bound were derived for the UWB relay channel under the assumption of independent noises at the relay and the destination. In \cite{goldsmith2011}, Gaussian relay channels with correlated noises were studied.
\textbf{Our work} includes, first, the investigation of a more general lower bound, achieved by the partial decode-and-forward (PDF) scheme, and its determination for the frequency-selective block-fading UWB relay channel; our result encompasses the DF lower bound in \cite{zolfa2009}. Second, we obtain the UWB version of the max-flow min-cut upper bound under the assumption of correlated noises at the relay and the destination. Third, we show that the upper bound coincides with the lower bound in the two special cases of degraded and reversely degraded relay channels when the corresponding correlation coefficients are applied, and the capacity is thus determined.
\textbf{Notation:} Throughout the paper $\Re{(.)}$, $\mathbb{E}$, $var(.)$ and $cov(.)$ denote real part, expectation, variance and covariance operations, respectively. $\lfloor x\rfloor$ returns the largest integer $\leq x$. $diag(.)$ builds a diagonal matrix and $C(x)\triangleq\log(1+x)$.
The paper is organized as follows: In Sec. II we define the UWB relay channel model. The lower bound on the capacity of the relay channel, obtained via the PDF strategy, and the max-flow min-cut upper bound for the defined channel model are derived in Sec. III. Two capacity-achieving cases corresponding to the degraded and reversely degraded relay channels are discussed in Sec. IV and Sec. V, respectively. Numerical results are illustrated in Sec. VI and, finally, we provide the conclusion in Sec. VII.
\setlength{\arraycolsep}{0.0em}
\section{System Model}
We assume that data is sent in blocks of size $K$ as a train of impulse-based UWB signals to the destination via link 1 and to the relay via link 2, as depicted in Fig.\ref{figure1}. Based on the received signal, the relay builds a secondary message and forwards it as a UWB signal to the destination via link 3. We assume that all nodes are perfectly synchronized and that channel state information (CSI) is available at the receiving terminals only. The fading coefficients between different nodes are assumed to be mutually independent and identically distributed. We assume arbitrarily correlated noises at the relay and the destination. The complex baseband impulse response of each UWB link can be considered based on the Saleh-Valenzuela (S-V) model \cite{ieee802.15.4a}
\begin{equation}
h(t){}={}\overset{\sim}{\beta}\sum_{l=0}^{L-1} \sum_{i=0}^{M-1}a_{i,l}e^{j\phi_{i,l}}\delta(t-T_{l}-\tau_{i,l}),
\end{equation}
where $L$ is the number of clusters and $M$ is the number of rays in each cluster. $T_{l}$ and $\tau_{i,l}$ represent the cluster and ray arrival times. The factor $\overset{\sim}{\beta}$ jointly models the pathloss, shadowing, and antenna insertion loss. $a_{i,l}$ is the gain of the $i$th path in the $l$th cluster and finally $\phi_{i,l}$ is the complex baseband phase of each multipath component.\\
We know that if the transmitter sends a block of $K$ symbols $(x_{0},\cdots,x_{K-1})^{T}$ through the above UWB channel, the received signal $(y_{0},\cdots,y_{K-1})^{T}$ is in the form \cite{arikan2004}:\\
\begin{equation}
\begin{array}{@{}lc@{\quad}@{\quad}@{\quad}r@{}}
y_{i}=\sum_{k=0}^{K^{'}-1}g_{k}x_{i-k}+z_{i}&&(i=0,\cdots,K-1)
\end{array} \label{eq.2}
\end{equation}
where $z_{i}$s are complex zero mean additive white Gaussian noises, $K^{'}$ is the ISI length due to the multipath fading, and $g_{k}$'s are related to the channel impulse response as\\
\begin{equation}
g_{k}{}={}\sum_{i,l:\lfloor(T_{l}{}+{}\tau_{i,l})/T_{s}\rfloor {}={}k} \overset{\sim}{\beta} a_{i,l}e^{j\phi_{i,l}}.
\end{equation}
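For illustration, a toy realization of the discrete taps $g_{k}$ can be generated along these lines (a sketch with made-up decay and arrival-rate parameters, not the calibrated IEEE 802.15.4a values; we set $\overset{\sim}{\beta}=1$):

```python
import numpy as np

rng = np.random.default_rng(1)

L_cl, M_ray = 4, 10           # clusters and rays per cluster (toy values)
W = 0.5e9                     # bandpass bandwidth [Hz], so Ts = 1/W
Ts = 1.0 / W
Gamma, gamma_r = 25e-9, 8e-9  # cluster / ray power-decay constants [s] (assumed)
K_isi = 64                    # number of taps kept

T = np.insert(np.cumsum(rng.exponential(15e-9, L_cl - 1)), 0, 0.0)  # cluster arrivals
g = np.zeros(K_isi, dtype=complex)
for l in range(L_cl):
    tau = np.insert(np.cumsum(rng.exponential(2e-9, M_ray - 1)), 0, 0.0)  # ray arrivals
    a = np.exp(-T[l] / (2 * Gamma) - tau / (2 * gamma_r)) * rng.standard_normal(M_ray)
    phi = rng.uniform(0, 2 * np.pi, M_ray)
    k = np.floor((T[l] + tau) / Ts).astype(int)      # tap index of each ray, Eq. (3)
    for i in range(M_ray):
        if k[i] < K_isi:
            g[k[i]] += a[i] * np.exp(1j * phi[i])    # rays in the same tap add up
```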
\begin{figure}[!t]
\centering
\includegraphics[width=3in]{figure1.pdf}
\caption{Illustration of UWB relay channel.}
\label{figure1}
\end{figure}
In this equation, $T_{s}{}={}\frac{1}{W}$, where $W$ is the bandpass channel bandwidth. We assume that the channel coefficients stay constant within each block of data transmission and change in an independent and identically distributed fashion from one block to another, i.e. a block-fading channel is considered, because the UWB channel is underspread \cite{ieee802.15.4a}. The size of each block, $K$, is constrained by the channel coherence time and can be at most equal to $\frac{T_{c}}{T_{s}}$. Taking the DFT of the two sides of (\ref{eq.2}), we obtain the frequency-domain UWB channel model \cite{arikan2004}. The vectors $\textbf{G}^{(n)}(n=1,2,3)$, $\textbf{X}_{\textbf{1}}$, $\textbf{X}_{\textbf{2}}$, $\textbf{Y}_{\textbf{1}}$ and $\textbf{Y}$ correspond to the $K$-point DFTs of the vectors of complex baseband channel coefficients of each link, the transmitted signals from the source and the relay, and the received signals at the relay and destination, respectively. Now, we can formulate the input-output relation for the UWB relay channel in the frequency domain as
\setlength{\arraycolsep}{0.0em}
\begin{eqnarray}
Y_{1i}&{}={}&G_{i}^{(2)}X_{1i}+Z_{1i}\quad\quad\quad\quad\quad\quad\quad(i=0,...,K-1) \nonumber\\
Y_{i}&{}={}&G_{i}^{(1)}X_{1i}+G_{i}^{(3)}X_{2i}+Z_{i}\label{eq.4}
\end{eqnarray}
\setlength{\arraycolsep}{5pt}
where $Z_{i}\thicksim \mathcal{C}\mathcal{N}(0,N)$ and $Z_{1i}\thicksim \mathcal{C}\mathcal{N}(0,N_{1})$ are complex circularly symmetric zero mean additive white Gaussian noises with correlation coefficient $\rho_{z_{i}}{}={}\mathbb{E}\left\lbrace Z_{1i}Z_{i}^{*}\right\rbrace /\sqrt{NN_{1}}$.
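The passage from the time-domain model (\ref{eq.2}) to the per-bin model (\ref{eq.4}) rests on the DFT diagonalizing circular convolution; the small numerical sketch below (ours) illustrates this, which is exact for a cyclic prefix and a good approximation for $K\gg K^{'}$:

```python
import numpy as np

rng = np.random.default_rng(2)

K, Kp = 16, 4                                            # block length, ISI length
g = (rng.standard_normal(Kp) + 1j * rng.standard_normal(Kp)) / np.sqrt(2 * Kp)
x = rng.standard_normal(K) + 1j * rng.standard_normal(K)

# time domain: circular convolution y_i = sum_k g_k x_{(i-k) mod K}
y = np.array([sum(g[k] * x[(i - k) % K] for k in range(Kp)) for i in range(K)])

G = np.fft.fft(np.r_[g, np.zeros(K - Kp)])               # K-point DFT of the taps
Y, X = np.fft.fft(y), np.fft.fft(x)
```

By the DFT convolution theorem the received spectrum satisfies the per-bin relation $Y_i = G_i X_i$ (before adding noise).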
\section{Upper and Lower Bounds on Capacity}
In this section, we provide the PDF achievable rate and the max-flow min-cut upper bound for the capacity of the UWB relay channel.
\subsection{UWB Lower Bound}
When the channel between the source and the relay is better than the channel between the relay and the receiver, the DF strategy gives the best achievable rate. To date, the best rate achieved by the DF strategy is obtained in \cite[Theorem 7]{covergamal1979} by substituting $\widehat{Y}_{1}{}={}\phi, V{}={} X_{2},U{}={}(X_{2}, U)$, which we call PDF. A delay-constrained form of this lower bound can be expressed as
\begin{small}
\setlength{\arraycolsep}{0.0em}
\begin{eqnarray}
C{}&\quad \geq& {}\sup _{(\textbf{X}_{\textbf{1}},\textbf{X}_{\textbf{2}},\textbf{U})}\min\bigg\lbrace \dfrac{1}{K} \sum_{i=0}^{K-1} I(X_{1i},X_{2i}; Y_{i}),\nonumber\\
&&\dfrac {1}{K} \sum_{i=0}^{K-1} I(U_{i};Y_{1i}\mid X_{2i}){}+{}I(X_{1i};Y_{i}\mid X_{2i}, U_{i})\bigg\rbrace \label{eq.7}
\end{eqnarray}
\setlength{\arraycolsep}{5pt}
\end{small}
where the supremum is taken over all joint probability mass functions of the form
\begin{equation}
\label{eq6.prob.distr} p(\textbf{X}_{\textbf{1}},\textbf{X}_{\textbf{2}},\textbf{U}){}={}\prod _{i=1}^{K}p(X_{2i})p(U_{i}\mid X_{2i})p(X_{1i}\mid U_{i}X_{2i})
\end{equation}\\
The resulted UWB version is expressed in the following theorem.
\paragraph*{Theorem 1}
A delay-constrained achievable rate with PDF strategy for frequency-selective block fading UWB relay channel is given by
\begin{equation}
R=\max _{
\setlength{\arraycolsep}{0.0em}
\begin{small} \begin{matrix} \overline{\alpha}_{0},&\dots ,&\overline{\alpha}_{k-1}\\
\overline{\beta}_{0},&\dots ,&\overline{\beta}_{k-1}\end{matrix}\end{small}
\setlength{\arraycolsep}{5pt}
}\min \left\lbrace \dfrac{1}{K}\sum_{i=0}^{K-1}C(\gamma_{1i}), \dfrac{1}{K}\sum_{i=0}^{K-1}C(\gamma_{2i})\right\rbrace ,\label{achievable}
\end{equation}
\begin{small}
\setlength{\arraycolsep}{0.0em}
\begin{eqnarray}
&\gamma_{1i}&{}={}\label{achievable1}\\
&&\dfrac{\vert G_{i}^{(1)}\vert^{2}P_{1}{}+{}\vert G_{i}^{(3)}\vert^{2} P_{2}{}+{}2\sqrt{P_{1}P_{2}}\Re \left\lbrace \sqrt{\overline{\alpha}_{i}\overline{\beta}_{i}}G_{i}^{(1)}G_{i}^{(3)^{*}}\right\rbrace}
{N}\nonumber
\end{eqnarray}
\setlength{\arraycolsep}{5pt}
\end{small}
\begin{small}
\setlength{\arraycolsep}{0.0em}
\begin{eqnarray}
&\gamma_{2i}&{}={}\label{achievable2}\\
&&\left( 1+\dfrac{\big\vert G_{i}^{(2)}\big\vert^{2}\vert \alpha_{i}\vert \vert\overline{\beta}_{i}\vert P_{1}}{\big\vert G_{i}^{(2)}\big\vert^{2}\vert \beta_{i}\vert P_{1}+N_{1}}\right) \left( 1+\dfrac{\big\vert G_{i}^{(1)}\big\vert^{2}\vert \beta_{i}\vert P_{1}}{N}\right) {}-{}1\nonumber
\end{eqnarray}
\setlength{\arraycolsep}{5pt}
\end{small}
\paragraph*{Proof}
We now construct the codebook and derive the achievable rate from it. We assume that the source and relay nodes transmit with average powers $P_{1}$ and $P_{2}$ per complex baseband sample, respectively. For every $i\in\lbrace0, 1, \ldots, K-1\rbrace$, let $\overline{\alpha}_{i}$ and $\overline{\beta}_{i}$ be complex variables with $0\leq\vert\overline{\alpha}_{i}\vert, \vert\overline{\beta}_{i}\vert\leq1$, and set $\vert\alpha_{i}\vert{}={}1{}-{}\vert\overline{\alpha}_{i}\vert$ and $\vert\beta_{i}\vert{}={}1{}-{}\vert\overline{\beta}_{i}\vert$. Let $X_{2i}\thicksim \mathcal{C}\mathcal{N}(0,P_{2})$, $N_{1i}\thicksim \mathcal{C}\mathcal{N}(0,\vert\alpha_{i}\vert P_{0})$, $U_{i}\thicksim \mathcal{C}\mathcal{N}(0,P_{0})$, $M_{1i}\thicksim \mathcal{C}\mathcal{N}(0,\vert\beta_{i}\vert P_{1})$ and $X_{1i}\thicksim \mathcal{C}\mathcal{N}(0,P_{1})$, where $X_{2i}$, $N_{1i}$ and $M_{1i}$ are mutually independent. First, generate the normally distributed random variables $X_{2i}$, $N_{1i}$ and $M_{1i}$. Then, define $U_{i}$ and $X_{1i}$ as suggested by (\ref{eq6.prob.distr}):
\begin{eqnarray}
U_{i}&{}={}&\sqrt{\overline{\alpha}_{i}\dfrac{P_{0}}{P_{2}}}X_{2i}{}+{}N_{1i}\\
X_{1i}&{}={}&\sqrt{\overline{\beta}_{i}\dfrac{P_{1}}{P_{0}}}U_{i}+M_{1i}\\
&{}={}&\sqrt{\overline{\alpha}_{i}\overline{\beta}_{i}\dfrac{P_{1}}{P_{2}}}X_{2i}{}+{}\sqrt{\overline{\beta}_{i}\dfrac{P_{1}}{P_{0}}}N_{1i}{}+{}M_{1i}\nonumber
\end{eqnarray}
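As a sanity check on the second-moment bookkeeping, with $\vert\alpha_{i}\vert=1-\vert\overline{\alpha}_{i}\vert$ and $\vert\beta_{i}\vert=1-\vert\overline{\beta}_{i}\vert$ the constructed $U_{i}$ and $X_{1i}$ have variances exactly $P_{0}$ and $P_{1}$. This can be verified directly; the following Python sketch (the sample numerical values are illustrative only) computes the variances from the coefficients of the construction:

```python
def constructed_variances(ab, bb, P0, P1, P2):
    """Variances of U_i and X_{1i} built from independent X_{2i}, N_{1i}, M_{1i}
    with Var X_{2i} = P2, Var N_{1i} = (1-|ab|) P0, Var M_{1i} = (1-|bb|) P1.
    ab, bb are the complex parameters alpha_bar_i, beta_bar_i with |.| <= 1."""
    var_N1 = (1 - abs(ab)) * P0
    var_M1 = (1 - abs(bb)) * P1
    # U_i = sqrt(ab * P0/P2) X_{2i} + N_{1i}
    var_U = abs(ab) * (P0 / P2) * P2 + var_N1
    # X_{1i} = sqrt(ab*bb * P1/P2) X_{2i} + sqrt(bb * P1/P0) N_{1i} + M_{1i}
    var_X1 = (abs(ab * bb) * (P1 / P2) * P2
              + abs(bb) * (P1 / P0) * var_N1 + var_M1)
    return var_U, var_X1
```

For any admissible complex $\overline{\alpha}_{i},\overline{\beta}_{i}$, the two returned values equal $P_{0}$ and $P_{1}$, confirming that the codebook respects the power constraints.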
The random code associated with this distribution is then given by
\setlength{\arraycolsep}{0.0em}
\begin{equation}
\begin{array}{@{}rcl@{\quad}rcl@{}}
\textbf{X}_{2}(s)\ \mathrm{i.i.d.}&\thicksim & \mathcal{C}\mathcal{N}_{K}(0,P_{2}\textbf{I}) & s&\in &[1,2^{nR_{0}}]\\
\textbf{N}_{1}(w_{1})\ \mathrm{i.i.d.}&\thicksim & \mathcal{C}\mathcal{N}_{K}(0,\textbf{C}_{\textbf{N}_{1}}) & w_{1}&\in &[1,2^{nR_{1}}]\\
\textbf{M}_{1}(w_{2})\ \mathrm{i.i.d.}&\thicksim & \mathcal{C}\mathcal{N}_{K}(0,\textbf{C}_{\textbf{M}_{1}}) & w_{2}&\in &[1,2^{nR_{2}}]
\end{array}
\end{equation}
\setlength{\arraycolsep}{5pt}
where the covariance matrices of $K$-variate complex normal distribution of $\textbf{N}_{\textbf{1}}$ and $\textbf{M}_{\textbf{1}}$ are
\begin{eqnarray}
&\textbf{C}_{\textbf{N}_{1}}{}={} diag(P_{0} \vert\alpha_{0}\vert,P_{0} \vert\alpha_{1}\vert, \cdots, P_{0} \vert \alpha_{K-1}\vert )&\\
&\textbf{C}_{\textbf{M}_{1}}{}={}diag(P_{1} \vert\beta_{0}\vert, P_{1} \vert\beta_{1}\vert, \cdots, P_{1}\vert \beta_{K-1}\vert ),&
\end{eqnarray}
and $\textbf{U}$ and $\textbf{X}_{1}$ are constructed as
\begin{small}
\setlength{\arraycolsep}{0.0em}
\begin{eqnarray}
\textbf{U}(w_{1}\mid s)&=&\sqrt{\dfrac{P_{0}}{P_{2}}}[\sqrt{\overline{\alpha}_{0}} \; \sqrt{\overline{\alpha}_{1}} \; \cdots \; \sqrt{\overline{\alpha}_{K-1}}]\times \textbf{X}_{2}(s)\\
&&{}+{}\textbf{N}_{1}(w_{1})\nonumber\\
\textbf{X}_{1}(w_{2}\mid w_{1},s)&{}={}&\sqrt{\dfrac{P_{1}}{P_{2}}}[\sqrt{\overline{\alpha}_{0} \overline{\beta}_{0}}\; \sqrt{\overline{\alpha}_{1}\overline{\beta}_{1}}\; \cdots \;\sqrt{\overline{\alpha}_{K-1}\overline{\beta}_{K-1}}]\times \textbf{X}_{2}(s)\nonumber\\
&&{}+{}\sqrt{\dfrac{P_{1}}{P_{0}}}[\sqrt{\overline{\beta}_{0}}\; \sqrt{\overline{\beta}_{1}}\; \cdots \; \sqrt{\overline{\beta}_{K-1}}]\times\textbf{N}_{1}(w_{1})\nonumber\\
&&{}+{}\textbf{M}_{1}(w_{2})
\end{eqnarray}
\setlength{\arraycolsep}{5pt}
\end{small}
where $\times$ denotes element-by-element (Hadamard) multiplication.
Then if
\begin{small}
\setlength{\arraycolsep}{0.0em}
\begin{eqnarray}
R_{0}&<&\frac{1}{K}\sum_{i=0}^{K-1} I(X_{2i};Y_{i})\\
R_{1}&<&\frac{1}{K}\sum_{i=0}^{K-1}\min\left\lbrace I\left( U_{i};Y_{1i}|X_{2i}\right),R_{0}+I\left(U_{i};Y_{i}|X_{2i}\right) \right\rbrace \\
R_{2}&<&\frac{1}{K}\sum_{i=0}^{K-1}I\left(X_{1i};Y_{i}|U_{i},X_{2i}\right)
\end{eqnarray}
\setlength{\arraycolsep}{5pt}
\end{small}
where
\begin{small}
\setlength{\arraycolsep}{0.0em}
\begin{eqnarray}
\label{eq.16}I(&X_{2i}&;Y_{i}){}={}h(Y_{i})-h(Y_{i}\mid X_{2i})\nonumber\\
&{}={}&\log \big(\pi e\; var(Y_{i})\big){}-{}\log \big(\pi e\; \mathbb{E}\;var(Y_{i}\mid X_{2i})\big) \nonumber\\
&{}={}&\log\left( 1+\frac{\left|G_{i}^{(1)}\sqrt{\overline{\alpha}_{i}\overline{\beta}_{i}P_{1}}{}+{}G_{i}^{(3)}\sqrt{P_{2}}\right|^{2}}{\left|G_{i}^{(1)}\right|^{2}P_{1}\left(\left|\alpha_{i}\right| \left|\overline{\beta}_{i}\right|+\left|\beta_{i}\right|\right)+N}\right),\\
I(&U_{i}&;Y_{1i}\mid X_{2i}){}={}h(Y_{1i}\mid X_{2i})-h(Y_{1i}\mid X_{2i},U_{i})\nonumber\\
&{}={}&\log \bigg(\pi e\; \mathbb{E}\;var(Y_{1i}\mid X_{2i})\bigg)\nonumber\\
&&{}-{}\log \bigg(\pi e\;\mathbb{E}\; var(Y_{1i}\mid X_{2i},U_{i})\bigg)\nonumber\\
&{}={}&\log \bigg(1+\dfrac{\big\vert G_{i}^{(2)}\big\vert^{2}\vert\alpha_{i}\vert \vert\overline{\beta}_{i}\vert P_{1}}{\big\vert G_{i}^{(2)}\big\vert^{2}\vert\beta_{i}\vert P_{1}{}+{}N_{1}}\bigg),\\
I(&U_{i}&;Y_{i}\mid X_{2i}){}={}h(Y_{i}\mid X_{2i})-h(Y_{i}\mid X_{2i},U_{i})\nonumber\\
&{}={}&\log \bigg(\pi e\mathbb{E}var(Y_{i}\mid X_{2i})\bigg){}-{}\log \bigg(\pi e\mathbb{E} var(Y_{i}\mid X_{2i},U_{i})\bigg)\nonumber\\
&{}={}&\log \bigg(1+\dfrac{\big\vert G_{i}^{(1)}\big\vert^{2}\vert\alpha_{i}\vert \vert\overline{\beta}_{i}\vert P_{1}}{\big\vert G_{i}^{(1)}\big\vert^{2}\vert\beta_{i}\vert P_{1}{}+{}N}\bigg),\\
I&(X_{1i}&;Y_{i}\mid X_{2i},U_{i}){}={}h(Y_{i}\mid X_{2i},U_{i})-h(Y_{i}\mid X_{1i},X_{2i},U_{i})\nonumber\\
&{}={}&\log \bigg(\pi e\; \mathbb{E}\;var(Y_{i}\mid X_{2i},U_{i})\bigg)\nonumber\\
&&{}-{}\log \bigg(\pi e\; \mathbb{E}\;var(Y_{i}\mid X_{1i},X_{2i},U_{i})\bigg)\nonumber\\
&{}={}&\log \bigg(1+\dfrac{\big\vert G_{i}^{(1)}\big\vert^{2}\vert\beta_{i}\vert P_{1}}{N}\bigg),\label{eq.17}
\end{eqnarray}
\setlength{\arraycolsep}{5pt}
\end{small}
the rate $R=R_{1}+R_{2}$ can be achieved with arbitrarily small probability of error and the proof of Theorem 1 is completed.
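For concreteness, the bound of Theorem 1 can be evaluated numerically. The sketch below performs a coarse grid search over real $\overline{\alpha}_{i},\overline{\beta}_{i}\in[0,1]$; the base-2 logarithm in $C(\gamma)$, the grid resolution and the sample gains are assumptions of this illustration, not part of the proof:

```python
import math
from itertools import product

def cap(snr):
    # C(gamma) = log2(1 + gamma); the base of the logarithm is an assumption
    return math.log2(1.0 + snr)

def pdf_rate(G1, G2, G3, P1, P2, N, N1, grid=11):
    """Coarse grid search over real |alpha_bar|, |beta_bar| in [0,1] for the
    PDF lower bound of Theorem 1.  G1, G2, G3 list the per-subchannel complex
    gains of the three links (illustrative naming)."""
    K = len(G1)
    ts = [i / (grid - 1) for i in range(grid)]
    best = 0.0
    for ab, bb in product(ts, ts):
        a, b = 1.0 - ab, 1.0 - bb          # |alpha_i|, |beta_i|
        r1 = r2 = 0.0
        for g1, g2, g3 in zip(G1, G2, G3):
            g1a, g2a = abs(g1) ** 2, abs(g2) ** 2
            cross = 2 * math.sqrt(P1 * P2 * ab * bb) * (g1 * g3.conjugate()).real
            gam1 = (g1a * P1 + abs(g3) ** 2 * P2 + cross) / N
            gam2 = (1 + g2a * a * bb * P1 / (g2a * b * P1 + N1)) \
                   * (1 + g1a * b * P1 / N) - 1
            r1 += cap(gam1)
            r2 += cap(gam2)
        best = max(best, min(r1, r2) / K)
    return best
```

Restricting the grid to $\overline{\beta}_{i}=0$ recovers the DF rate as a special case, so the returned PDF rate always dominates it.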
\subsection{UWB Upper Bound}
A $K$-block delay-constrained form of the max-flow min-cut upper bound on the capacity of the general relay channel, established in \cite[theorem 7]{covergamal1979}, can be expressed as
\setlength{\arraycolsep}{0.0em}
\begin{eqnarray}
\label{upperbound}C\leq \sup_{p(X_{1},X_{2})}\min\bigg\lbrace\dfrac{1}{K}&\sum_{i=0}^{K-1}&I(X_{1i},X_{2i};Y_{i}),\\
\dfrac{1}{K}&\sum_{i=0}^{K-1}&I(X_{1i};Y_{i},Y_{1i}\mid X_{2i})\bigg\rbrace\nonumber
\setlength{\arraycolsep}{5pt}\end{eqnarray}
The UWB version of this upper bound is expressed in the following theorem.
\paragraph*{Theorem 2}
The delay-constrained max-flow min-cut upper bound on the capacity of a frequency-selective block fading relay channel is given by
\begin{small}
\begin{eqnarray}
C{}\leq{}\max_{\setlength{\arraycolsep}{0.0em}\begin{small}\begin{matrix} \overline{\alpha}_{0},&\dots,&\overline{\alpha}_{K-1}\\
\overline{\beta}_{0},&\dots, &\overline{\beta}_{K-1}\end{matrix}\end{small}}\min\left\lbrace \frac{1}{K}\sum_{i=0}^{K-1}C(\gamma_{1i}), \frac{1}{K}\sum_{i=0}^{K-1}C(\gamma_{3i})\right\rbrace, \label{eq.uwb upper bound}
\end{eqnarray}
\end{small}
where
\setlength{\arraycolsep}{0.0em}
\begin{eqnarray}
\gamma_{3i}{}={}&&P_{1}\dfrac{(1-\vert\overline{\alpha}_{i}\vert\vert\overline{\beta}_{i}\vert)}{1-\vert\rho_{zi}\vert^{2}}\\
&\bigg(&\dfrac{\vert G_{i}^{(1)}\vert^{2}}{N}+\dfrac{\vert G_{i}^{(2)}\vert^{2}}{N_{1}}-2\dfrac{\Re\big\lbrace G_{i}^{(1)}G_{i}^{(2)^{*}}\rho_{zi}\big\rbrace}{\sqrt{NN_{1}}}\bigg)\nonumber
\end{eqnarray}
\setlength{\arraycolsep}{5pt}
and $\gamma_{1i}$ has been defined in (\ref{achievable1}).
\paragraph*{Proof}
We start from (\ref{upperbound}). Since Gaussian random variables maximize differential entropy, by letting $X_{1i}$ and $X_{2i}$ each have the maximum allowed powers $P_{1}$ and $P_{2}$ and choosing $E(X_{1i}X_{2i}^{*})=\sqrt{\overline{\alpha}_{i}\overline{\beta}_{i}P_{1}P_{2}}$, the first term $I(X_{1i},X_{2i};Y_{i})$ is upper bounded by $C(\gamma_{1i})$, the same expression as in the lower bound. The proof is a straightforward extension of that in \cite[theorem 4]{covergamal1979} and is omitted here. For the second term, under the same assumptions we have
\setlength{\arraycolsep}{0.0em}
\begin{eqnarray}
I(&X_{1i}&;Y_{i},Y_{1i}\mid X_{2i}){}={}h(Y_{i},Y_{1i}\mid X_{2i})-h(Z_{i},Z_{1i})\nonumber\\
&\leq &\log \big( (\pi e)^{2}\;\mathbb{E}\;\det cov(Y_{i},Y_{1i}\mid X_{2i})\big)\nonumber\\
&&{}-{}\log\big((\pi e)^{2}\;\mathbb{E}\;\det cov(Z_{i},Z_{1i})\big)=C(\gamma_{3i})
\end{eqnarray}
where
\begin{eqnarray}
\mathbb{E}\; &\det\; &cov(Y_{i},Y_{1i}\mid X_{2i})\nonumber\\
&=&NN_{1}\big(1-\vert\rho_{z_{i}}\vert^{2}\big)+P_{1}NN_{1}\left(1-\vert\overline{\alpha}_{i}\vert \vert\overline{\beta}_{i}\vert\right)\nonumber\\
&&\bigg(\frac{\big\vert G_{i}^{(1)}\big\vert^{2}}{N}+\frac{\big\vert G_{i}^{(2)}\big\vert ^{2}}{N_{1}}-2\frac{\;\Re\left\lbrace G_{i}^{(1)}G_{i}^{(2)^{*}}\rho_{z_{i}}\right\rbrace}{\sqrt{NN_{1}}}\bigg),\nonumber\\
\mathbb{E}\; &\det\; & cov(Z_{i},Z_{1i}){}={}NN_{1}\left( 1-\vert\rho_{z_{i}}\vert ^{2}\right)
\end{eqnarray}\setlength{\arraycolsep}{5pt}
Due to space constraints, the details are omitted.
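We note that the bracketed term in $\gamma_{3i}$ is nonnegative for any admissible correlation: since $\vert\rho_{zi}\vert\leq 1$ implies $\Re\{G_{i}^{(1)}G_{i}^{(2)^{*}}\rho_{zi}\}\leq \vert G_{i}^{(1)}\vert\vert G_{i}^{(2)}\vert$, the term is bounded below by $(\vert G_{i}^{(1)}\vert/\sqrt{N}-\vert G_{i}^{(2)}\vert/\sqrt{N_{1}})^{2}\geq0$, so $\gamma_{3i}\geq0$. A quick numerical sketch (the sample gains and parameter names are illustrative assumptions):

```python
import cmath, math

def gamma3(G1, G2, P1, N, N1, ab, bb, rho):
    """gamma_{3i} of Theorem 2 for one subchannel; ab and bb stand for
    |alpha_bar_i| and |beta_bar_i|, and rho is the complex noise
    correlation coefficient rho_{zi}."""
    quad = (abs(G1) ** 2 / N + abs(G2) ** 2 / N1
            - 2 * (G1 * G2.conjugate() * rho).real / math.sqrt(N * N1))
    return P1 * (1 - ab * bb) / (1 - abs(rho) ** 2) * quad
```

Sweeping the phase of $\rho_{zi}$ over the unit disc leaves $\gamma_{3i}$ nonnegative, as the quadratic-form bound predicts.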
\setlength{\arraycolsep}{0.0em}
\section{Capacity of Degraded UWB Relay Channel}
\paragraph*{Theorem 3}
The delay-constrained capacity of the degraded frequency-selective block fading relay channel is
\setlength{\arraycolsep}{0.0em}
\begin{eqnarray}
C{}={}\max_{\overline{\alpha}_{0},\ldots, \overline{\alpha}_{k-1}}\min\bigg\lbrace\dfrac{1}{K}\sum_{i=0}^{K-1}\log\bigg(1+\frac{1}{N}\big(\vert G_{i}^{(1)}\vert^{2}&P_{1}&\nonumber\\
+\vert G_{i}^{(3)}\vert^{2}P_{2}+2\sqrt{P_{1}P_{2}}\;\Re \bigg \lbrace\sqrt{\overline{\alpha}_{i}}G_{i}^{(1)}G_{i}^{(3)^{*}}\bigg \rbrace\big)&\bigg)&\nonumber\\
,\dfrac{1}{K}\sum_{i=0}^{K-1}\log\bigg(1+\dfrac{\vert G_{i}^{(2)}\vert^{2}\vert\alpha_{i}\vert P_{1}}{N_{1}}\bigg)&\bigg\rbrace &
\end{eqnarray}
\setlength{\arraycolsep}{5pt}
\paragraph*{Proof}
We evaluate the upper and lower bounds on the capacity of the degraded UWB relay channel and show that they coincide with each other.
As stated in \cite{covergamal1979}, a relay channel is called degraded if the following relationship holds
\begin{equation}
p(y\mid y_{1},x_{1},x_{2}){}={}p(y\mid y_{1},x_{2})\label{eq.degraded}
\end{equation}
To adapt the general input-output relation (\ref{eq.4}) to this condition, we rewrite (\ref{eq.4}) as
\setlength{\arraycolsep}{0.0em}
\begin{eqnarray}
Y_{1i}&{}={}&G_{i}^{(2)}X_{1i}+Z_{1i}\nonumber\\
Y_{i}&{}={}&\dfrac{G_{i}^{(1)}}{G_{i}^{(2)}}Y_{1i}{}+{}G_{i}^{(3)}X_{2i}{}+{}Z_{2i}\end{eqnarray}
\setlength{\arraycolsep}{0pt}
where
\begin{equation}
Z_{2i}{}={}Z_{i}-\dfrac{G_{i}^{(1)}}{G_{i}^{(2)}}Z_{1i}
\end{equation}
In order for (\ref{eq.degraded}) to hold, $Z_{1i}$ and $Z_{2i}$ should be independent for each $i$. For jointly Gaussian random variables uncorrelatedness is equivalent to independence, so we find $\rho_{zi}$ such that $Z_{1i}$ and $Z_{2i}$ become uncorrelated. This is achieved when
\begin{equation}
\rho_{zi}{}={}\left( \frac{G_{i}^{(1)}}{G_{i}^{(2)}}\right) ^{*}\sqrt{\frac{N_{1}}{N}}\label{eq.rodeg}
\end{equation}
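Under the convention $\mathbb{E}[Z_{i}Z_{1i}^{*}]=\rho_{zi}^{*}\sqrt{NN_{1}}$ (an explicit assumption on the correlation convention used in this sketch), the cross-covariance of $Z_{2i}$ and $Z_{1i}$ can be checked to vanish exactly at the value (\ref{eq.rodeg}):

```python
import math

def cross_cov(G1, G2, N, N1, rho):
    """E[Z_{2i} Z_{1i}^*] for Z_{2i} = Z_i - (G1/G2) Z_{1i}, assuming
    E[Z_i Z_{1i}^*] = conj(rho) sqrt(N N1) and E[|Z_{1i}|^2] = N1."""
    return rho.conjugate() * math.sqrt(N * N1) - (G1 / G2) * N1
```

Plugging in $\rho_{zi}=(G_{i}^{(1)}/G_{i}^{(2)})^{*}\sqrt{N_{1}/N}$ makes the cross-covariance zero, so $Z_{1i}$ and $Z_{2i}$ are uncorrelated and hence independent.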
\paragraph*{Achievability}
The achievable rate results from substituting $U_{i}{}={}\sqrt{\frac{P_{0}}{P_{1}}}X_{1i}$ in the codebook of Theorem 1, i.e., from setting $\overline{\beta}_{i}{}={}1$ in (\ref{achievable}), (\ref{achievable1}) and (\ref{achievable2}).
\paragraph*{Converse}
We have computed the max-flow min-cut upper bound in the previous section for the general UWB relay channel. The upper bound on the capacity of the degraded UWB relay channel can be obtained by applying the condition (\ref{eq.rodeg}) to the established upper bound (\ref{eq.uwb upper bound}) and, to make the notation consistent with the achievability part, choosing $E(X_{1i}X_{2i}^{*})=\sqrt{\overline{\alpha}_{i}P_{1}P_{2}}$ in the proof of the upper bound. With these substitutions the upper bound coincides with the lower bound, and the proof is completed.
\section{Capacity of Reversely Degraded UWB Relay Channel}
\paragraph*{Theorem 4}
The delay-constrained capacity of the reversely degraded frequency-selective block fading relay channel is
\setlength{\arraycolsep}{0.0em}
\begin{eqnarray}
C{}={}\dfrac{1}{K}\sum_{i=0}^{K-1}\log\bigg(1+\dfrac{\vert G_{i}^{(1)}\vert^{2} P_{1}}{N}\bigg)
\end{eqnarray}
\setlength{\arraycolsep}{5pt}
\paragraph*{Proof}
As stated in \cite{covergamal1979}, a relay channel is called reversely degraded if the following relationship holds
\begin{equation}
p(y_{1}\mid x_{1},x_{2},y){}={}p(y_{1}\mid y,x_{2})
\label{revdegraded} \end{equation}
To adapt the general input-output relation (\ref{eq.4}) to this constraint, we rewrite (\ref{eq.4}) as
\begin{eqnarray}
Y_{1i}&{}={}&\dfrac{G_{i}^{(2)}}{G_{i}^{(1)}}Y_{i}{}-{}\dfrac{G_{i}^{(2)}G_{i}^{(3)}}{G_{i}^{(1)}}X_{2i}{}+{}Z_{3i}\\
Y_{i}&{}={}&G_{i}^{(1)}X_{1i}{}+{}G_{i}^{(3)}X_{2i}+Z_{i},\quad \nonumber
\end{eqnarray}
where
\begin{equation}
Z_{3i}{}={}Z_{1i}-\dfrac{G_{i}^{(2)}}{G_{i}^{(1)}}Z_{i}
\end{equation}
In order for (\ref{revdegraded}) to hold, $Z_{i}$ and $Z_{3i}$ should be independent for each $i$. This is achieved if
\begin{equation}
\rho_{zi}{}={}\frac{G_{i}^{(2)}}{G_{i}^{(1)}}\sqrt{\frac{N}{N_{1}}} \label{eq.rorevdeg}
\end{equation}
\paragraph*{Achievability}
The achievable rate results from substituting $U_{i}{}={}\sqrt{\frac{P_{0}}{P_{2}}}X_{2i}$ in the codebook of Theorem 1, i.e., from setting $\overline{\alpha}_{i}{}={}1$ in (\ref{achievable}), (\ref{achievable1}) and (\ref{achievable2}).
\setlength{\arraycolsep}{5pt}
\paragraph*{Converse}
The upper bound on the capacity of the reversely degraded UWB relay channel can be obtained by applying the condition (\ref{eq.rorevdeg}) to the established upper bound (\ref{eq.uwb upper bound}) and, to make the notation consistent, choosing $E(X_{1i}X_{2i}^{*})=\sqrt{\overline{\beta}_{i}P_{1}P_{2}}$. With these substitutions the upper bound coincides with the lower bound as follows
\setlength{\arraycolsep}{0.0em}
\begin{small}
\begin{eqnarray}
\label{eq.revdeg previous} C{}={}\sup_{\overline{\beta}_{0},\ldots, \overline{\beta}_{k-1}}\min\bigg\lbrace\dfrac{1}{K}\sum_{i=0}^{K-1}\log\bigg(1+\frac{1}{N}\big(\vert G_{i}^{(1)}\vert^{2}&P_{1}&\nonumber\\
+\vert G_{i}^{(3)}\vert^{2}P_{2}+2\sqrt{P_{1}P_{2}}\;\Re \bigg \lbrace\sqrt{\overline{\beta}_{i}}G_{i}^{(1)}G_{i}^{(3)^{*}}\bigg \rbrace \big)&\bigg)&\nonumber\\
,\dfrac{1}{K}\sum_{i=0}^{K-1}\log\bigg(1+\dfrac{\vert G_{i}^{(1)}\vert^{2}\vert\beta_{i}\vert P_{1}}{N}\bigg)&\bigg\rbrace &
\end{eqnarray}
\end{small}
Now, we show that the first term in (\ref{eq.revdeg previous}) is never smaller than the second one. Let $S$ denote the result of subtracting the second term of (\ref{eq.revdeg previous}) from the first, given by
\begin{equation}
S=\dfrac{1}{K}\sum_{i=0}^{K-1}C(\zeta_{i})
\end{equation}
where
\begin{small}
\setlength{\arraycolsep}{0.0em}
\begin{eqnarray}
\zeta_{i}=&&\\
&&\dfrac{\big\vert G_{i}^{(1)}\big\vert^{2}\vert \overline{\beta}_{i}\vert P_{1}+\big\vert G_{i}^{(3)}\big\vert^{2}P_{2}+2\sqrt{P_{1}P_{2}}\;\Re\left\lbrace \sqrt{\overline{\beta}_{i}}G_{i}^{(1)}G_{i}^{(3)^{*}}\right\rbrace}{N+\big\vert G_{i}^{(1)}\big\vert^{2}\vert\beta_{i}\vert P_{1}}\nonumber
\end{eqnarray}
\setlength{\arraycolsep}{5pt}
\end{small}
We now show that $\zeta_{i}\geq 0$. The minimum value of $\zeta_{i}$ occurs when $\sqrt{\overline{\beta}_{i}}G_{i}^{(1)}G_{i}^{(3)^{*}}$ is real and negative, so, considering this worst case, we choose
\begin{equation}
\sqrt{\overline{\beta}_{i}}=-\big\vert\sqrt{\overline{\beta}_{i}}\big\vert\dfrac{G_{i}^{(1)^{*}}G_{i}^{(3)}}{\vert G_{i}^{(1)^{*}}G_{i}^{(3)}\vert}.
\end{equation}
Now, the numerator of $\zeta_{i}$ can be written as
\begin{equation}
\left(\big\vert G_{i}^{(1)}\big\vert\big\vert\sqrt{\overline{\beta}_{i}}\big\vert\sqrt{P_{1}}-\big\vert G_{i}^{(3)}\big\vert\sqrt{P_{2}} \right)^{2}\geq0
\end{equation}
and the denominator of $\zeta_{i}$ is always positive. Therefore, $\zeta_{i}\geq0$ and $S\geq0$. Hence the minimum of the two terms of (\ref{eq.revdeg previous}) is the second one, and the capacity is obtained by maximizing the second term with respect to $\overline{\beta}_{i}$, which yields $\overline{\beta}_{i}=0$. This completes the proof of Theorem 4.
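The sign argument above can also be checked numerically: the gap between the two log terms stays nonnegative for every complex $\overline{\beta}_{i}$ with $\vert\overline{\beta}_{i}\vert\leq1$. A short sketch (the base-2 logarithm and the sample gains are illustrative assumptions):

```python
import cmath, math

def term_gap(G1, G3, P1, P2, N, bb):
    """Difference (first term - second term) of the min in the reversely
    degraded bound, for one subchannel and complex beta_bar = bb."""
    sb = cmath.sqrt(bb)                     # a fixed square root of beta_bar
    A = (abs(G1) ** 2 * P1 + abs(G3) ** 2 * P2
         + 2 * math.sqrt(P1 * P2) * (sb * G1 * G3.conjugate()).real)
    B = abs(G1) ** 2 * (1 - abs(bb)) * P1
    return math.log2(1 + A / N) - math.log2(1 + B / N)
```

Since $A-B\geq(\vert G_{i}^{(1)}\vert\vert\sqrt{\overline{\beta}_{i}}\vert\sqrt{P_{1}}-\vert G_{i}^{(3)}\vert\sqrt{P_{2}})^{2}\geq0$, the gap is nonnegative for every phase of $\overline{\beta}_{i}$.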
\section{Numerical Results}
\begin{figure}[!t]
\centering
\includegraphics[width=3.2in]{PDFfig.pdf}
\caption{Comparisons of the bounds on the UWB relay channel capacity}
\label{figure2}
\end{figure}
\begin{figure}[!t]
\centering
\includegraphics[width=3.2in]{PDFfig3.pdf}
\caption{PDF and Upper bound for different values of $\rho_{zi}$}
\label{figure3}
\end{figure}
In this section, we present figures illustrating the obtained general achievable rate (PDF) and the max-flow min-cut upper bound, and compare them with the DF achievable rate, which is the special case of PDF obtained by setting $\overline{\beta}_{i}=0$ and was investigated in \cite{zolfa2009}. The simulations are based on the channel model for residential NLOS environments in \cite{ieee802.15.4a}. The powers transmitted by the source and the relay nodes are equal to the maximum allowed power for UWB systems defined by the FCC ($-$41.3 dBm/MHz). We assume equal noise power spectral densities at the relay and destination ($-$114 dBm/MHz). The distance between the source and destination is fixed at $d_{1}=3\,$m and the bounds are plotted versus the distance between the source and the relay. In Fig. \ref{figure2}, we set $\rho_{zi}=0$ in the upper bound. As can be seen, when the multiple-access channel is the bottleneck, the three bounds reach the same rate. The difference occurs when the broadcast channel is the bottleneck, in which case PDF performs better than DF, and this improvement increases as the relay moves toward the destination. The capacity of direct transmission is also plotted, where the power of the source is assumed to be twice its power in the relay channel scenario for a fairer comparison. In Fig. \ref{figure3} the upper bound is plotted for three different values $\rho_{zi}=0, 0.6, 0.9$. It can be observed that the upper bound increases for higher values of the correlation coefficient, and also that the location of the peak value of the upper bound moves toward the destination.
\section*{Conclusion}
In this paper, we first computed an achievable rate obtained with the PDF coding scheme, together with the max-flow min-cut upper bound with correlated noises at the relay and destination, for a relay channel with UWB links, in terms of the channel coefficients, transmitted powers and noise correlation coefficients, under the assumption of CSI known at the receiving terminals only. Then, by finding the corresponding noise correlation coefficients, we established the capacity of the degraded and reversely degraded UWB relay channels. The UWB lower and upper bounds obtained here can be used for further investigation of UWB relay channels, especially those with known capacities.
\bibliographystyle{IEEEtran}
Let $X\subset\PP^N$ be an $n$-dimensional irreducible, nondegenerate projective variety defined over an algebraically closed field $\kk$ of characteristic $0$. The (classical) Gauss map is the rational morphism
$\gamma :X\dashrightarrow \Gr(n,N)$ that assigns to a smooth point $x$ the projective tangent space of $X$ at $x$, $\gamma(x)={\mathbb T}_{X,x}\cong\PP^n.$
It is known that the general fiber of $\gamma$ is a linear subspace of $\PP^N$, and that the morphism is finite and birational if $X$ is smooth unless $X$ is all of $\PP^N$, \cite{Zak93,KP91, GH79}.
In \cite{Zak93}, Zak defines a generalization of the above definition as follows. For $n\leq m\leq N-1$, let $\Gr(m,N)$ be the Grassmannian variety of $m$-dimensional linear subspaces in $\PP^N$, and define $\sP_m=\overline{\{(x,\alpha)\in X_{sm}\times \Gr(m,N) | {\mathbb T}_{X,x}\subseteq L_\alpha\}},$ where $L_\alpha$ is the linear subspace corresponding to $\alpha \in \Gr(m,N)$ and the bar denotes the Zariski closure in $X \times \Gr(m,N)$. The {\bf $m$-th Gauss map} is the projection $\gamma_m: \sP_m\to \Gr(m,N)$. When $m=n$ we recover the classical Gauss map, $\gamma_n=\gamma.$ These generalized Gauss maps still enjoy the property that a general fiber is a linear subspace, \cite[2.3 (c)]{Zak93}. Moreover a general fiber is always finite if $X$ is smooth and $n\leq m\leq N-n+1,$ \cite[2.3 (b)]{Zak93}.
In this paper we consider a different generalization of the Gauss map where, instead of higher dimensional linear spaces tangent at a point,
we use linear spaces tangent to higher order, namely the osculating spaces. The osculating space of order $k$ of $X$ at a smooth point $x\in X_{sm}, \Osc_x^k,$ is a linear subspace of $\PP^N$ of dimension $t,$ where $n\leq t\leq {n+k\choose n},$ see Definition \ref{def:osc}. We can then define a rational map
$\gamma^k:X\dashrightarrow \Gr(d_k-1,N)$ that assigns to a point $x$ the $k$-th osculating space of $X$ at $x$, $\gamma^k(x)=\Osc^k_x,$ where $d_k$ is the general $k$-th osculating dimension, see Definition \ref{def:gauss}. Notice that when $k=1$, we recover the classical Gauss map, $\gamma^1=\gamma_n=\gamma.$ We call $\gamma^k$ the {\bf Gauss map of order k}.
This definition was originally introduced in \cite{CA22} and later studied in
\cite{Pohl} under the name of associated maps. Higher order Gauss maps have subsequently been studied in connection to higher fundamental forms in \cite{L94} and \cite{DI15}, while Gauss maps of order $2$ have been investigated in \cite{FI01}.
For the classical Gauss map $\gamma$ the linearity of a general fiber is a consequence of the reflexivity property of the projective dual variety of $X.$ Higher order tangency has also been used to generalize the notion of duality and define the higher order dual varieties, $X^k,$ see \cite[Ch 2]{P81}. Unfortunately $X^k$ does not always enjoy reflexivity properties, even if $X$ is nonsingular, as pointed out in \cite[Prop. 1]{P81}. It is therefore reasonable not to expect linearity of the general fiber of $\gamma^k.$ We concentrate instead on establishing a generalization of the finiteness of the Gauss maps of order $k$ when the variety is nonsingular (a property that, as remarked above, does not always hold for Zak's Gauss maps). First we generalize the classical picture by requiring the Gauss maps to be regular when the variety $X$ is nonsingular, and thus we consider $k$-jet spanned embeddings, see Definition \ref{def:kjet}, for which $d_k= {n+k\choose n}$ at all points. The use of certain techniques from projective geometry imposes the assumption $\kk=\CC.$ We firmly believe, however, that the results in this paper should extend to any field of characteristic zero. Theorem \ref{finite} shows:
\begin{theorem} Let $i: X\hookrightarrow \PP^N$ be a $k$-jet spanned embedding of a nonsingular complex variety. Then
the Gauss maps $\gamma^s$ are finite for all $s\leq k$, unless $X=\PP^n$ is embedded by the Veronese embedding of order $k.$ \end{theorem}
Section \ref{sec:toric} is dedicated to giving a combinatorial description of the maps $\gamma^k$ and the images $\gamma^k(X),$ called the $k$-th osculating variety, in the case when $X$ is a toric variety.
In \cite[Theorem 1.1]{FI14} it is shown that if $X_A$ is a toric variety (not necessarily smooth) given by a finite lattice set $A$, then the tangential variety $\overline{\gamma(X)}$ is projectively equivalent to a toric variety $X_B$ where $B$ is obtained by taking appropriate sums of elements in $A$. Theorem~\ref{thm:Bklattice} is a direct generalization of this result and the ideas in the proof.
\begin{theorem} If $X_A$ is a toric variety given by a set of lattice points $A$ and the embedding is generically $k$-jet spanned, then
there exist a finite set of lattice points $B_k$ and a lattice projection $\pi$ such that $\overline{\gamma^k(X)}$ is projectively equivalent to $X_{B_k}$ and the closures of the irreducible components of the fibers of $\gamma^k$ are projectively equivalent to $X_{\pi(A)}.$
\end{theorem}
This description allows us to reprove our finiteness result in the toric setting using combinatorial methods. A simple consequence of the combinatorial proof of finiteness is that the Gauss map of order $k\geq 1$ is birational for smooth $k$-jet spanned toric embeddings, when finite.
This is false outside the toric category: the Gauss maps of order $k\geq 2$ need not in general be birational when finite, see \cite{FI01}.
We remark that the assumption of $k$-jet spannedness cannot be relaxed in general. In particular, Example~\ref{genkspanned} exhibits a generically $2$-jet spanned smooth surface with positive dimensional fibers under the Gauss map of order $2$.
Theorem~\ref{thm:Bklattice} also makes it possible to compute the image and the general fiber of the Gauss map of order $k$ in the toric setting. This is implemented in the package \textsf{LatticePolytopes}, \cite{LP}, for \textsf{Macaulay2}, \cite{M2}.
\subsection*{Future directions and applications}
The Gauss map is used to define the (normalized) Nash blow-up of a variety. The Gauss map of higher order could lead to a definition
of ``higher order Nash blow-ups.'' For toric varieties interesting results have been proved, for example, in \cite{A11}. It is reasonable to expect that a generalization would lead to new resolution properties, at least in the toric category.
Moreover, it is worth mentioning that Gauss maps play an important role in characterizing the boundaries of amoebas and in connection with real $A$-discriminants, see for example \cite{K91}. A generalization, using higher order Gauss maps, could yield interesting applications within real algebraic geometry. We plan to investigate both directions mentioned above in future work.
\subsection*{Conventions}
We work over the field of complex numbers $\CC.$
Throughout the paper, $X$ denotes a smooth, complete complex algebraic variety of dimension $n.$ We use additive notation for the group operation in ${\rm Pic}(X).$
\subsection*{Acknowledgements}
The first and third author were partially supported by the VR grants [NT:2010-5563, NT:2014-4763].
The second author was partially supported by the G\"oran Gustafsson foundation.
\section{Definitions and background}
\subsection{Restrictions imposed by ample divisors.} In this section we collect the necessary background on the invariants of ample Cartier divisors used in the proof of the main result.
Let $L$ be an ample line bundle on $X.$ The {\it nef-value} of $L$ is defined as
$$\tau(L)=\min\{t\in\RR \,|\, K_X+tL \text{ is nef }\}.$$
Kawamata's Rationality Theorem shows that $\tau(L)$ is in fact a rational number. Let $X\to \PP^M$ be the morphism defined by the global sections of an appropriate multiple of $K_X+\tau L$ and let
$\psi\circ\phi_\tau$ be its Remmert-Stein factorization. The map $\phi_\tau:X\to Y$ has connected fibers and is called the {\it nef-value morphism}. See \cite[1.5]{BS95} for more details.
An easy way to compute $\tau(L)$ is provided by the following lemma.
\begin{lemma}\label{nefnotample}\cite[1.5.5]{BS95}
Let $L$ be an ample line bundle on $X$ and $\tau\in\RR.$ Then $\tau=\tau(L)$ if and only if
$K_X+\tau L$ is nef but not ample.
\end{lemma}
The nef-value morphism always contracts curves on the variety $X$ (since the defining line bundle is not ample). To state this more precisely, let $\overline{NE(X)}$ be the closure of the cone generated by the effective $1$-cycles on $X.$
\begin{lemma}\label{face}\cite[4.2.13 (1)]{BS95}
Let $L$ be an ample line bundle on $X.$ Then the nef-value morphism $\phi_\tau$ is the contraction of an extremal face $F_H$ of $\overline{NE(X)},$ where $H=K_X+\tau L$ and $F_H=H^\perp\cap(\overline{NE(X)}\setminus \{0\}).$
\end{lemma}
Finally we recall a useful characterization of Fano varieties,
based on the length of extremal rays, which is the key observation for our classification.
Let $R\in\overline{NE(X)}$ be an extremal ray; its length is defined as $l(R)=\min\{-K_X\cdot C\,|\, C\text{ is a rational curve and }[C]\in R\}.$ The cone theorem implies that $0< l(R)\leq n+1.$
\begin{proposition}\cite[6.3.12]{BS95},\cite{CM02}\label{characterization}
Let $C$ be an extremal rational curve on $X.$ If $-K_X\cdot C=n+1$, then $-K_X$ is ample and $\Pic(X)\cong\ZZ.$
\end{proposition}
\subsection{Osculating spaces.}
Let $L$ be a line bundle on $X$ and let $V=H^0(X,L).$ The coherent sheaf $J_k(L)={p_1}_*(p_2^*(L)\otimes{\mathcal O}_{X\times X}/\sI_{\Delta}^{k+1}
),$ where $\Delta\subset X\times X$ is the diagonal and $p_i$ are the projection maps $p_i: X\times X\to X,$ is locally free of rank ${n+k\choose n}$, and is called the \emph{$k$-jet bundle of $L$}. The fiber at a point $x\in X$ can be identified with $J_k(L)_x\cong H^0(X, L\otimes \sO_X/\mathfrak m_x^{k+1}),$ where $\mathfrak m_x$ is the maximal ideal at $x.$ The quotient map $$j_{k,x}: V\to H^0(X, L\otimes \sO_X/\mathfrak m_x^{k+1})$$ evaluating a global section and its derivatives of order at most $k$ at the point $x:$ $$j_k(s)=(s(x),\ldots, \frac{\partial^t s }{\partial {\mathbf x}^t}(x),\ldots)_{1\leq t \leq k}$$ for a coordinate system ${\mathbf x}=(x_1,\ldots,x_n),$ extends to a vector bundle map: $j_k: V\otimes\sO_X\to J_k(L).$
We denote by $U_k\subset X$ the open locus where the vector bundle map $j_k$ obtains its maximal rank $d_k\leq {n+k\choose k}.$ Moreover if $s_0,\dots,s_m$ is a basis for $V$ then the rank of the map $j_k$ at a point $x\in X$ equals the rank of \emph{the matrix of $k$-jets} which is defined as $[J_{k,x}]=[j_k(s_0)|\ldots | j_k(s_m)]$.
\begin{definition}\label{def:osc}
The projectivization of the image $\PP(j_{k,x}(V))=\Osc_x^k\subseteq\PP(V)$ is called the {\bf $k$-th osculating space} at $x.$
The integer $d_k$ is the {\bf general osculating dimension }of $L$ on $X.$ If $U_k=X$ the integer $d_k$ is called the {\bf $k$-th osculating dimension }of $L$ on $X.$
\end{definition}
The line bundles for which the $k$-th osculating dimension is maximal define embeddings with high geometrical constraints. These are the embeddings that we will consider in the remaining of the article.
\begin{definition}\label{def:kjet}
Let $L$ be a line bundle on $X$ and let $d_k$ be its general $k$-th osculating dimension. If $d_k={n+k\choose k}$ then the (rational) map defined by the global sections of $L$ is said to be {\bf generically $k$-jet spanned}. If $d_k={n+k\choose k}$ is the $k$-th osculating dimension then the map is said to be {\bf $k$-jet spanned}.
\end{definition}
\begin{remark}
Observe that $0$-jet spanned is equivalent to being globally generated.
Moreover, if a line bundle $L$ is $k$-jet spanned then it is $s$-jet spanned for all $s\leq k$.
\end{remark}
\begin{example}
As a first example, we see that $(X,L)=(\PP^n,\sO_{\PP^n}(k))$ is $k$-jet spanned. Indeed, a basis of global sections of $\sO_{\PP^n}(k)$ is given by all the degree $k$ monomials in $x_0, \ldots, x_n$. Thus, the maximal rank of $j_{k,x}$, at any $x \in \PP^n$, is $d_k ={n+ k\choose k}$. Note that $\sO_{\PP^n}(k)$ is not $l$-jet spanned for $l>k$.
\end{example}
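The rank computation behind this example can be carried out explicitly: on the affine chart $x_0=1$, the global sections of $\sO_{\PP^n}(k)$ restrict to all monomials of degree at most $k$, and the matrix of $k$-jets has full rank ${n+k\choose k}$ at every point. The following Python sketch performs a floating-point rank computation (the chart, the evaluation point and the tolerance are choices of this illustration):

```python
import math
from itertools import product

def exponents(n, k):
    """All exponent vectors of total degree <= k in n variables."""
    return [e for e in product(range(k + 1), repeat=n) if sum(e) <= k]

def deriv_at(mono, order, pt):
    """Value of the partial derivative d^order of the monomial x^mono at pt."""
    val = 1.0
    for m, d, x in zip(mono, order, pt):
        if d > m:
            return 0.0
        val *= math.perm(m, d) * x ** (m - d)
    return val

def rank(M, tol=1e-9):
    """Rank via Gaussian elimination with partial pivoting."""
    M = [row[:] for row in M]
    r, rows, cols = 0, len(M), len(M[0])
    for c in range(cols):
        if r == rows:
            break
        piv = max(range(r, rows), key=lambda i: abs(M[i][c]))
        if abs(M[piv][c]) < tol:
            continue
        M[r], M[piv] = M[piv], M[r]
        for i in range(r + 1, rows):
            f = M[i][c] / M[r][c]
            M[i] = [a - f * b for a, b in zip(M[i], M[r])]
        r += 1
    return r

def jet_rank(n, k, pt):
    """Rank of the matrix of k-jets [J_{k,x}] of O_{P^n}(k) on the chart x0=1:
    columns index monomials of degree <= k, rows index derivatives of order <= k."""
    monos = exponents(n, k)
    return rank([[deriv_at(m, d, pt) for m in monos] for d in exponents(n, k)])
```

For instance, `jet_rank(2, 2, (0.3, 0.7))` returns $6={2+2\choose 2}$, matching $d_2$ for the quadratic Veronese surface.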
In the next example we distinguish between $k$-jet spanned and generically $k$-jet spanned.
\begin{example}
Let $p:X\to\PP^2$ be the blow up of $\PP^2$ at three non-collinear points $p_1, p_2$ and $p_3$ and let $L=-K_X=p^*(\sO_{\PP^2}(3))-E_1-E_2-E_3,$ where the $E_i$ are the exceptional divisors. Let $l_{ij}$ be the lines in $\PP^2$ connecting $p_i$ and $p_j$ for $1 \leq i < j \leq 3$, and denote by $\tilde{l}_{ij}$ the proper transform. If $x \in X$ is a point that is not in any exceptional divisor nor in any $\tilde{l}_{ij}$, then the rank of $j_{2,x}$ is $6$; if $x$ lies on the intersection of an exceptional divisor and $\tilde{l}_{ij}$, then the rank of $j_{2,x}$ is $4$; and for any other $x \in X$ the rank of $j_{2,x}$ is $5$ \cite[Theorem 2.1]{LM01}. Thus for $x$ not in any exceptional divisor nor in any $\tilde{l}_{ij}$, $L$ is $2$-jet spanned at $x$, and hence the embedding defined by $L$ is generically $2$-jet spanned. However, if $x \in E_i$ or $x \in \tilde{l}_{ij}$, then $L$ is not $2$-jet spanned at $x$, and thus the embedding defined by $L$ is not $2$-jet spanned.
\end{example}
The generation of $k$-jets imposes strong conditions on intersections with irreducible curves on $X.$
\begin{lemma}\label{restr}
Let $L$ be a $k$-jet spanned line bundle on $X$ and let $C\subset X$ be an irreducible curve. Then
\begin{enumerate}
\item $L\cdot C\geq k;$
\item $L\cdot C=k$ if and only if $(C,L|_C)\cong(\PP^1,\sO_{\PP^1}(k)).$
\end{enumerate}
\end{lemma}
\begin{proof} Since $L$ is $k$-jet spanned, its restriction to $C$, $L|_C$, is a $k$-jet spanned line bundle on $C.$ Assume now that $L\cdot C\leq k.$ Then for any $x \in C$ it holds that $H^0(C, L|_C\otimes \mathfrak{m}_x^{k+1})=0,$ since a non-zero section vanishing to order at least $k+1$ at $x$ would contradict $\deg(L|_C)\leq k.$
Because the map $j_{k,x}:H^0(C,L|_C) \to H^0(C, L|_C \otimes \sO/\mathfrak{m}_x^{k+1})$ is surjective for all points $x \in C$, we have that $\dim(H^0(C,L|_C))=k+1+\dim(H^0(C, L|_C\otimes \mathfrak{m}_x^{k+1}))=k+1.$ This in turn implies that the evaluation map $H^0(C,L|_C)\otimes\sO_C\to J_k(L|_C)$ is a surjective map of vector bundles of rank $k+1$, hence an isomorphism, and thus $ J_k(L|_C)=\sO_{C}^{\oplus k+1}.$ But the only smooth curve, $C$, having a line bundle, $H$, with trivial $k$-jet bundle is $(C,H) = (\PP^1,\sO_{\PP^1}(k))$, see \cite{FKPT85, DRS01}. Hence $L\cdot C\leq k$ forces $(C,L|_C)\cong(\PP^1,\sO_{\PP^1}(k))$, which proves both claims. \end{proof}
\subsection{Toric Geometry.}\label{sec:toricbackground}
In this section we provide a short background on relevant parts of toric geometry. References are \cite{Fulton} and \cite{CLS}.
Let $M$ be a lattice of rank $n$. The maximal spectrum of the group algebra $\CC[M]=\bigoplus_{\mathbf{u}\in M} \CC \mathbf{x}^\mathbf{u}$ is an algebraic torus $T_M=\operatorname{Spec}(\CC[M])\cong(\CC^*)^n$. Moreover, a finite subset $A=\{\mathbf{u_0},\ldots,\mathbf{u_N}\}\subseteq M$ induces the following map
\begin{align}
\phi_A:T_M\cong (\CC^*)^n&\to \PP^{N}\label{torusemb}\\
\mathbf{x}&\mapsto(\mathbf{x}^\mathbf{u_0},\ldots ,\mathbf{x}^\mathbf{u_N})\notag
\end{align}
where $\mathbf{x}=(x_1,\ldots,x_n)$, $\mathbf{u}_i=(u_i^1,\ldots,u_i^n)$ and $\mathbf{x}^{\mathbf{u}_i}=x_1^{u_i^1}\cdots x_n^{u_i^n}$. It is a standard fact that the image, $\im(\phi_A)$, is an algebraic torus $T_{\langle A-A \rangle}$, where $\langle A-A \rangle$ denotes the sublattice of $M$ generated by $\{u-u'\mid u,u'\in A\}$. The closure of the image is a toric variety, $X_A=\overline{\im(\phi_A)}$, which has $T_{\langle A-A \rangle}$ as an open dense subset.
Recall that if $L$ is a line bundle on a normal toric variety $X$ and $P_L\subset M_\RR$ is the associated polytope then
\begin{equation}\label{eqn:gsectpoly}
H^0(X,L)\cong\bigoplus_{m\in M\cap P_L}\CC \langle x^m\rangle
\end{equation}
where $m=(m_1,m_2,\dots,m_n)$, $x=(x_1,\dots,x_n)$ and $x^m=x_1^{m_1}x_2^{m_2}\cdots x_n^{m_n},$ after a choice of basis vectors for $M$.
It follows that the space of global sections of a line bundle on a normal toric variety has a monomial basis. As a consequence the matrix of $k$-jets, $[J_{k,x}]$, has a particularly simple form, which makes the toric setting appealing from a computational perspective. In particular, it can be shown that a line bundle $L$ is $k$-jet spanned at the general point of a toric variety $X$ if and only if $L$ is $k$-jet spanned at the image of the point $(1,\ldots,1)$ under the map \eqref{torusemb}, see \cite[p. 3]{Perkinson}.
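To make this matrix concrete, here is a small self-contained Python sketch (our own illustration; the function names are not from any standard package). The entry of the matrix of $k$-jets at $(1,\ldots,1)$ in the row indexed by a derivative multi-index $\alpha$ and the column indexed by a lattice point $\mathbf{u}$ is $\partial^\alpha\mathbf{x}^{\mathbf{u}}$ evaluated at $(1,\ldots,1)$, i.e. the product of falling factorials $\prod_i u_i(u_i-1)\cdots(u_i-\alpha_i+1)$.

```python
from fractions import Fraction
from itertools import product

def jet_matrix(A, k):
    """Matrix of k-jets at (1,...,1) of the torus embedding x -> (x^u)_{u in A}:
    one row per derivative multi-index alpha with |alpha| <= k,
    one column per lattice point u in A."""
    n = len(A[0])
    alphas = [a for a in product(range(k + 1), repeat=n) if sum(a) <= k]
    def entry(u, a):
        r = 1
        for ui, ai in zip(u, a):
            for t in range(ai):
                r *= ui - t          # falling factorial u_i(u_i-1)...(u_i-a_i+1)
        return r
    return [[entry(u, a) for u in A] for a in alphas]

def rank(M):
    """Rank over Q by Gaussian elimination with exact arithmetic."""
    M = [[Fraction(x) for x in row] for row in M]
    r = 0
    for c in range(len(M[0])):
        piv = next((i for i in range(r, len(M)) if M[i][c] != 0), None)
        if piv is None:
            continue
        M[r], M[piv] = M[piv], M[r]
        pv = M[r][c]
        M[r] = [x / pv for x in M[r]]
        for i in range(len(M)):
            if i != r and M[i][c] != 0:
                f = M[i][c]
                M[i] = [a - f * b for a, b in zip(M[i], M[r])]
        r += 1
    return r

# Lattice points of 2*Delta_2, i.e. (P^2, O(2)) restricted to the torus:
A = [(0, 0), (1, 0), (0, 1), (2, 0), (1, 1), (0, 2)]
print(rank(jet_matrix(A, 2)))   # 6 = binom(2+2, 2)
```

Since the rank equals $\binom{n+k}{k}=6$, this confirms that $\sO_{\PP^2}(2)$ is generically $2$-jet spanned, as expected from the Veronese example above.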
\section{Higher order Gauss maps}
Let $\dim(H^0(X,L))=\dim(V)=N+1$ and let $\Gr(t,N)$ denote the Grassmannian of linear subspaces $\PP^t\subset \PP(V).$ Assume that $L$ is very ample, so that $X\subset\PP(V);$ in particular the general $k$-th osculating dimension satisfies $d_k\geq n+1$ for $k\geq 1.$
\begin{definition}\label{def:gauss}
The Gauss map of order $k$ is the (rational) map:
$$\gamma^k: X\dashrightarrow \Gr(d_k-1,N)$$
assigning to $x\in U_k\subseteq X$ the $k$-th osculating space $\gamma^k(x)=\Osc_{k,x}\cong\PP^{d_k-1}.$
We call the image variety, $\gamma^k(X)$, the {\bf osculating variety of order $k$}.
\end{definition}
\begin{remark} If $k=1$ then $\Osc_{1,x}={\mathbb T}_{X,x}\cong\PP^n.$
It follows that $\gamma^1=\gamma$ is the classical Gauss map.
\end{remark}
\begin{example} On $X=\PP^n$, the line bundle $L=\sO_{\PP^n}(k)$ can be considered an extreme case. It is $k$-jet spanned, and thus $d_k={n+k\choose k}$ is the osculating dimension at every point. The line bundle defines the $k$-th Veronese embedding $\PP^n\hookrightarrow \PP^{{n+k\choose k}-1}=\PP(V)$ and the
osculating space at every $x$ is the whole $\PP(V).$ The Gauss map of order $k$ is a regular map contracting the whole $\PP^n$ to a point.
\end{example}
In order to generalize the classical result on the finiteness of the fibers of the Gauss map, we will now assume that the very ample line bundle $L$ is $k$-jet spanned.
Then the Gauss map of order $k$ is a regular map $\gamma^k:X\to \Gr({n+k\choose k}-1,N).$
Consider the so-called $k$-jet sequence:
$$0\to \Sym^k(\Omega^1_X) \otimes L \to J_k(L)\to J_{k-1}(L)\to 0$$
An induction argument shows that
\begin{equation}\label{jetdet}\det(J_k(L))=\frac{1}{n+1}{n+k\choose k}(kK_X+(n+1)L).\end{equation}
In particular, $\det(J_k(L))$ will be ample, nef or globally generated if $kK_X+(n+1)L$ is ample, nef or globally generated, respectively.
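As a quick consistency check of formula (\ref{jetdet}), consider $(X,L)=(\PP^n,\sO_{\PP^n}(k))$: writing $H$ for the hyperplane class, we have $K_X=-(n+1)H$ and $L=kH$, so
\[
kK_X+(n+1)L=-k(n+1)H+(n+1)kH=0,
\]
and formula (\ref{jetdet}) yields $\det(J_k(L))=\sO_X$, in agreement with the triviality of the jet bundle $J_k(\sO_{\PP^n}(k))$ \cite{DRS01}.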
The following two lemmas are the key observations for the proof of our main result.
\begin{lemma}\label{det}
Assume $L$ is a $k$-jet spanned line bundle on $X$ such that $\det(J_k(L))$ is ample. Then the regular map $\gamma^k$ is finite.
\end{lemma}
\begin{proof} Because the line bundle $L$ is $k$-jet spanned on the whole variety $X$, the $k$-th osculating dimension is $d_k={n+k\choose k}$ and the map
$\gamma^k$ is regular. Consider the composition of the Gauss map of order $k$ with the Pl\"ucker embedding, $pl$:
\[
pl\circ\gamma^k:
\xymatrix{
X\ar@{->}[r]
&\Gr(d_k-1,N) \ar @{^{(}->}[r]
& \PP\left(\bigwedge^{d_k}V\right)}
\]
Recall that the vector bundle map $j_k: V\otimes\sO_X\to J_k(L)$ is onto and thus $J_k(L)$ is generated by the global sections of $L.$
Because $\gamma^k(x)=\PP(J_k(L)_x)$ for every point $x\in X$ the composition $pl\circ\gamma^k$
is the map defined by the global sections of the line bundle $\wedge^{{n+k\choose k}} J_k(L)=\det(J_k(L)).$
If this composition has a fiber $F$ of positive dimension $s\geq 1$, then the restriction of $\det(J_k(L))$ to $F$ is trivial, so $\det(J_k(L))^{s}\cdot F=0.$ This cannot happen if $\det(J_k(L))$
is ample.
\end{proof}
\begin{lemma}\label{trivaldet} Let $\sE$ be a globally generated rank $r$ vector bundle on $X$ such that $\det \sE = \sO_X$. Then $\sE \cong\sO_X^{\oplus r}$. \end{lemma}
\begin{proof}
We prove this by induction on the rank of $\sE$, the case $r=1$ being clear. Suppose the statement holds for rank $r-1$. If $\sE$ is globally generated of rank $r$, then there is a surjection $\sO_X^{\oplus N} \to \sE$, where $N:= h^0(X,\sE)$. Since $\det \sE = \sO_X$, the dual $\sE^*$ of $\sE$ is isomorphic to $\wedge^{r-1} \sE$, and so $\sE^*$ is also globally generated. In particular, a section of $\sE^*$ gives a map $\sE \to \sO_X$. This in turn gives a non-zero map $\sO_X^{\oplus N} \to \sO_X$, which is surjective and admits a splitting. Thus $\sE \cong \sE' \oplus \sO_X$ for some vector bundle $\sE'$ of rank $r-1$. Moreover, $\sE'$ is globally generated with trivial determinant, hence by the induction hypothesis $\sE' \cong \sO_X^{\oplus r-1}$, and $\sE \cong \sO_X^{\oplus r}$.
\end{proof}
\begin{theorem}\label{finite}
Let $L$ be a $k$-jet spanned line bundle on $X,$ with $k\geq1.$ Then the Gauss map of order $k,$ $\gamma^k:X\to \Gr\left({n+k\choose k}-1,N\right),$ is finite unless $(X,L)=(\PP^n,\sO_{\PP^n}(k)).$
\end{theorem}
\begin{proof} By Lemma \ref{det} it suffices to prove that $\det(J_k(L))$ is ample unless $(X,L)=(\PP^n,\sO_{\PP^n}(k)).$ In view of formula (\ref{jetdet}) it is in turn sufficient to show that the line bundle $kK_X+(n+1)L$ is ample unless $(X,L)=(\PP^n,\sO_{\PP^n}(k)).$
Assume that $kK_X+(n+1)L$ is not ample. As previously observed the vector bundle map $j_k: V\otimes\sO_X\to J_k(L)$ is onto and thus $J_k(L)$ is generated by the global sections of $L$, implying that $\det(J_k(L))$ is also globally generated and thus nef.
Again formula (\ref{jetdet}) gives that the line bundle
$kK_X+(n+1)L$ is also nef. By Lemma \ref{nefnotample} we can conclude that the nef-value of $L$ is $\tau(L)=\frac{n+1}{k}.$
Let $R$ be an extremal rational curve in the face contracted by the nef-value morphism, as in Lemma~\ref{face}. Then $(kK_X+(n+1)L)\cdot R=0$ and $-R\cdot K_X \leq n+1$.
But because $L\cdot R\geq k$ by Lemma \ref{restr}(1), we must have $K_X\cdot R=-n-1$ and $L\cdot R=k.$ Proposition \ref{characterization} then implies that $X$ is a Fano variety, i.e. $-K_X$ is ample, and $\Pic(X)=\ZZ.$
Thus $kK_X+(n+1)L \cong \sO_X$, and so $\det (J_k(L))$ is also trivial. Since $J_k(L)$ is a globally generated vector bundle with trivial determinant, we can apply Lemma \ref{trivaldet} and conclude that $J_k(L) \cong \sO_X^{\oplus{n+k\choose k}}$.
By \cite{DRS01}, if $J_k(L)$ is trivial then either $(X,L) = (\PP^n, \sO_{\PP^n}(k))$ or $X$ is an abelian variety and $L$ is trivial. The second case cannot occur in our situation because $L$ is ample. Thus we conclude that $(X,L)=(\PP^n, \sO_{\PP^n}(k))$.
\end{proof}
The following example shows that, as in the classical case, finiteness cannot be expected if we drop the smoothness assumption.
\begin{example}\label{non smooth}
Consider the toric variety $X$ together with a very ample line bundle $L$ given as the closure in $\PP^5$ of the following torus embedding:
\begin{align*}
\phi:(\CC^*)^2&\to \PP^5\\
(x,y)&\mapsto (1:x:y:xy:x^2:xy^2)
\end{align*}
It is readily checked that $(X,L)$ is generically $2$-jet spanned by computing the rank of the matrix of $2$-jets at a general point. However, $\dim(H^0(X,L))=6$, so the Gauss map of order $2$ is a map from $X$ to the one point space $\Gr(5,5)$. Thus $\gamma^2$ must be the contraction of $X$ to a point and has $X$ as its fiber; in particular $\gamma^2$ is not generically finite. We remark that a direct computation shows that $(X,L)$ is not $2$-jet spanned at the point $(1:0:0:0:0:0)$; in particular $(X,L)$ is not $2$-jet spanned.
\end{example}
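The two rank computations behind this example can be reproduced with a short, self-contained Python sketch (our illustration, not code from the paper): at a general point the entries of the matrix of $2$-jets are the falling-factorial values $\partial^\alpha\mathbf{x}^\mathbf{u}(1,1)$, while in the affine chart centered at the torus-fixed point over the vertex $(0,0)$ of the polytope the derivative $\partial^\alpha\mathbf{x}^\mathbf{u}$ vanishes at the origin unless $\mathbf{u}=\alpha$.

```python
from fractions import Fraction
from itertools import product
from math import factorial

A = [(0, 0), (1, 0), (0, 1), (1, 1), (2, 0), (1, 2)]  # exponents of 1, x, y, xy, x^2, xy^2
alphas = [a for a in product(range(3), repeat=2) if sum(a) <= 2]

def entry(u, a, at_general_point):
    if at_general_point:              # d^a x^u at (1, 1): falling factorials
        r = 1
        for ui, ai in zip(u, a):
            for t in range(ai):
                r *= ui - t
        return r
    # in the chart at the torus-fixed point, (x, y) = (0, 0):
    # d^a x^u vanishes at the origin unless u == a, where it equals a!
    return factorial(a[0]) * factorial(a[1]) if u == a else 0

def rank(M):
    """Rank over Q by Gaussian elimination with exact arithmetic."""
    M = [[Fraction(x) for x in row] for row in M]
    r = 0
    for c in range(len(M[0])):
        piv = next((i for i in range(r, len(M)) if M[i][c] != 0), None)
        if piv is None:
            continue
        M[r], M[piv] = M[piv], M[r]
        pv = M[r][c]
        M[r] = [x / pv for x in M[r]]
        for i in range(len(M)):
            if i != r and M[i][c] != 0:
                f = M[i][c]
                M[i] = [x - f * y for x, y in zip(M[i], M[r])]
        r += 1
    return r

general = rank([[entry(u, a, True) for u in A] for a in alphas])
special = rank([[entry(u, a, False) for u in A] for a in alphas])
print(general, special)   # 6 5
```

The ranks $6$ and $5$ confirm that $(X,L)$ is generically $2$-jet spanned but fails to be $2$-jet spanned at the torus-fixed point.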
In Section~\ref{sec:toric} we will give a further family of examples: for every pair of integers $n\ge 2$ and $N\ge 2$, we will construct a singular toric variety of dimension $n$ in $\PP^{\binom{n+2}{2}+N-2}$ which is generically $2$-jet spanned, but which has a Gauss map of order $2$ with positive dimensional fibers. See Example~\ref{ex:infinitefibers}.
Furthermore, as the following example shows, smoothness together with only generic $k$-jet spannedness does not in general imply that the general fiber of $\gamma^k$ is finite.
\begin{example}\label{genkspanned}
Let $X$ be the del Pezzo surface of degree $5$, given by the blow up of $\PP^2$ in $4$ points in general position, embedded by the anticanonical bundle $-K_X.$ In \cite[Theorem 2.1]{LM01} it is shown that $-K_X$ is $2$-jet spanned at all points outside the $4$ exceptional divisors $E_1,\ldots, E_4.$ It follows that the Gauss map of second order is the rational map
$\gamma^2:X\dashrightarrow \Gr(5,5)=\text{pt},$ contracting $X\setminus (E_1\cup\ldots\cup E_4)$ to a point;
in particular it is not generically finite.
\end{example}
\section{Toric Gauss maps}\label{sec:toric}
In \cite{FI14} Furukawa and Ito gave combinatorial descriptions of the image and fiber of the classical Gauss map in the toric setting. In this section we will use the techniques introduced in \cite{FI14} to extend their results to Gauss maps of higher order. Let $M$ be a lattice and let $A=\{u_0,\dots,u_N\}\subset M$. Then as explained in Section~\ref{sec:toricbackground}, $A$ determines a map $\phi_A:T_M\mono\PP^N$ and a toric variety $X_A=\overline{\im(\phi_A)}$. We make the following definitions.
\begin{definition}
Let $A\subset M$ be a finite set of lattice points. $A$ is called \emph{generically $k$-jet spanned} if the associated line bundle $\phi_A^*({\mathcal O}_{\PP^{|A|-1}}(1))$ determines an embedding that is generically $k$-jet spanned.
\end{definition}
\begin{definition}
Assume that $A=\{u_0,\ldots,u_N\}$ is generically $k$-jet spanned and let $q=\binom{n+k}{k}$. For every subset $\{u_{i_1},\ldots,u_{i_q}\}$ of $q$ lattice points in $A$ we denote by $\left[J_{k,(1,\ldots,1)}^{\{u_{i_1},\ldots,u_{i_q}\}}\right]$ the matrix of $k$-jets of the torus embedding given by $\phi_{\{u_{i_1},\ldots,u_{i_q}\}}$ evaluated at the point $(1,\ldots,1)$. We define the following subset of the lattice $M$:
\[
B_k=\{u_{i_1}+u_{i_2}+\dots+ u_{i_q}\mid u_{i_1},\ldots, u_{i_q}\in A\text{ and } \det\left[J_{k,(1,\ldots,1)}^{\{u_{i_1},\ldots, u_{i_q}\}}\right]\ne 0\}.
\]
\end{definition}
Observe that the assumption $\det\left[J_{k,(1,\ldots,1)}^{\{u_{i_1},\ldots, u_{i_q}\}}\right]\ne 0$ is equivalent to saying that the set of lattice points $\{u_{i_1},\ldots,u_{i_q}\}$ is generically $k$-jet spanned. The set $B_1$ defined above is denoted by $B$ in \cite{FI14}. Going through the proof of \cite[Theorem 1.1]{FI14} and replacing $B$ with $B_k$ yields the following result.
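The set $B_k$ can be computed directly from the definition by brute force. The following self-contained Python sketch (an illustration of ours, not the package \cite{LP}) enumerates all $q$-element subsets of $A$, keeps those whose matrix of $k$-jets at $(1,\ldots,1)$ has non-zero determinant, and records the sums of their elements.

```python
from fractions import Fraction
from itertools import combinations, product
from math import comb

def jet_entry(u, a):
    """d^a x^u at (1,...,1): a product of falling factorials."""
    r = 1
    for ui, ai in zip(u, a):
        for t in range(ai):
            r *= ui - t
    return r

def det(M):
    """Determinant over Q by Gaussian elimination."""
    M = [[Fraction(x) for x in row] for row in M]
    n, d = len(M), Fraction(1)
    for c in range(n):
        piv = next((i for i in range(c, n) if M[i][c] != 0), None)
        if piv is None:
            return Fraction(0)
        if piv != c:
            M[c], M[piv] = M[piv], M[c]
            d = -d
        d *= M[c][c]
        for i in range(c + 1, n):
            f = M[i][c] / M[c][c]
            M[i] = [a - f * b for a, b in zip(M[i], M[c])]
    return d

def B(A, k):
    """The set B_k of the definition above."""
    n = len(A[0])
    q = comb(n + k, k)
    alphas = [a for a in product(range(k + 1), repeat=n) if sum(a) <= k]
    out = set()
    for S in combinations(A, q):
        if det([[jet_entry(u, a) for u in S] for a in alphas]) != 0:
            out.add(tuple(map(sum, zip(*S))))
    return out

# (P^1 x P^1, O(1,1)): the unit square, k = 1
print(sorted(B([(0, 0), (1, 0), (0, 1), (1, 1)], 1)))
# [(1, 1), (1, 2), (2, 1), (2, 2)]
```

Here all four $3$-element subsets of the square are generically $1$-jet spanned, so $B_1=\{(1,1),(2,1),(1,2),(2,2)\}$; the differences span the full lattice, in agreement with the finiteness of the classical Gauss map of the smooth quadric $\PP^1\times\PP^1\subset\PP^3$.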
\begin{theorem}\label{thm:Bklattice}
Let $\pi_k:M\to M'=M/(\langle B_k-B_k\rangle_\RR\cap M)$ be the natural projection and assume that $(X_A,L_A)$ is generically $k$-jet spanned. The following holds:
\begin{enumerate}[(i)]
\item{The closure $\overline{\gamma^k(X_A)}$ of the image of the Gauss map of order $k$ is projectively equivalent to $X_{B_k}$.}\label{thma}
\item{The restriction of $X_A \rat \overline{\gamma^k(X_A)}$ to $T_M$ is the morphism $T_M\twoheadrightarrow T_{\langle B_k-B_k\rangle}$ induced by the inclusion $\langle B_k-B_k \rangle\hookrightarrow M$.}\label{thmb}
\item{Let $F$ be an irreducible component of a (general) fiber of $\gamma^k|_{T_M}$ with the reduced structure. Then $F$ is a translation of $T_{M'}$ by an element of $T_M$. Moreover the closure $\overline{F}$ is projectively equivalent to $X_{\pi_k(A)}$. In particular the dimension of the general fiber is \[
\delta_{\gamma^k}(X_A)=\rk M'=n-\rk\langle B_k-B_k\rangle.\]}\label{thmc}
\end{enumerate}
\end{theorem}
Recall that two varieties $X_1\subseteq \PP^{N_1}$ and $X_2\subseteq \PP^{N_2}$ are said to be \emph{projectively equivalent} if there exist embeddings $j_i: \PP^{N_i}\mono \PP^N$ such that $j_1(X_1)=j_2(X_2)$ and $j_i^*(\sO_{\PP^N}(1))=\sO_{\PP^{N_i}}(1)$.
\begin{proof}
Following the proof of Theorem 1.1 in \cite{FI14} one shows there is a commutative diagram of the following form
\[
\xymatrix{
T_M \ar@{^{(}->}[r] ^{\phi_A} \ar@{_{(}->}@/_0.75pc/[drrr]_{\phi_{B_k}}
&X_A \ar@{-->}[r]^{\gamma^k}
&\Gr(q,N) \ar@{^{(}->}[r]^{pl}
&\PP\left(\bigwedge^{q}V\right)\\
&
&
&\PP^{|B_k|-1} \ar@{_{(}->}[u]_j}
\]
where $pl$ is the Pl\"{u}cker embedding and $j$ is a linear embedding. By the above diagram
\[
\overline{\gamma^k(X_A)}=\overline{pl\circ \gamma^k\circ\phi_A(T_M)}=\overline{j\circ \phi_{B_k}(T_M)}=j(X_{B_k})
\]
This proves part \emph{(\ref{thma})}.
Restricting the morphism $X_A \rat \overline{\gamma^k(X_A)}$ to $T_M$ corresponds to considering the composition $pl\circ \gamma^k \circ \phi_A: T_M\to \PP\left(\bigwedge^{q}V\right)$. As $T_{\langle B_k-B_k\rangle}$ is the dense open torus in $X_{B_k}$ part \emph{(\ref{thmb})} follows from the commutativity of the above diagram.
The proof of part \emph{(\ref{thmc})} relies on a series of well-known facts about algebraic tori (see \cite{FI14}). Namely, since $\langle B_k-B_k\rangle_\RR\cap M$ is a sublattice of $M$, one has the following short exact sequence of lattices
\[
\xymatrix{
0 \ar@{->}[r]
&\langle B_k-B_k \rangle_\RR \cap M \ar@{->}[r]
&M \ar@{->}[r]
&M/\langle B_k-B_k\rangle_\RR \cap M \ar@{->}[r]
&0}
\]
The above sequence induces the following short exact sequence on algebraic tori
\[
\xymatrix{
1 \ar@{->}[r]
&T_{M/\langle B_k-B_k\rangle_\RR \cap M} \ar@{->}[r]
&T_M \ar@{->}[r]^-{g}
&T_{\langle B_k-B_k\rangle_\RR \cap M} \ar@{->}[r]
&1}
\]
Hence $g^{-1}(1_{T_{\langle B_k-B_k\rangle_\RR \cap M}})=T_{M/\langle B_k-B_k\rangle_\RR \cap M}$, so by \cite[Lemma 2.1]{FI14} the closure $\overline{g^{-1}(1_{T_{\langle B_k-B_k \rangle_\RR \cap M}})}$ is projectively equivalent to $X_{\pi_k(A)}$. If $F$ is an irreducible component of a fiber of $\gamma^k|_{T_M}$, then by \cite[Lemma 2.2]{FI14} $F$ is also a fiber of $g$, i.e. $F$ is a translation of $g^{-1}(1_{T_{\langle B_k-B_k\rangle_\RR\cap M}})$ by an element of $T_M$. It now follows from \cite[Lemma 2.1]{FI14} that the closure $\overline{F}$ is projectively equivalent to $X_{\pi_k(A)}$, proving part \emph{(\ref{thmc})}.
\end{proof}
We now reprove Theorem~\ref{finite} in the toric setting using a combinatorial approach based on Theorem~\ref{thm:Bklattice}.
\begin{proposition}\label{prop:toricfinite}
Let $X$ be a smooth and projective toric variety and let $L$ be a $k$-jet spanned line bundle on $X$. Then the Gauss map of order $k$, $\gamma^k$, is generically finite and birational unless $(X,L)=(\PP^n,\sO_{\PP^n}(k))$.
\end{proposition}
\begin{proof}
The pair $(X,L)$ corresponds to a convex lattice polytope $P\subset M_\RR$. Combinatorially, the assumption that $X$ is smooth means that the primitive directions of the edges through every vertex of $P$ form a basis for the underlying lattice $M$. Thus we may assume that $P$ is contained in the first orthant and that it has a vertex at the origin, and an edge along each coordinate axis. Moreover, as shown in \cite{DR99}, the assumption that $L$ is $k$-jet spanned corresponds to the fact that every edge of $P$ contains at least $k+1$ lattice points. It follows that $P$ contains the simplex $k\Delta_n=\conv(0,k\hat{e}_1,\ldots,k\hat{e}_n)$, where $\hat{e}_i$ is the unit vector along the $x_i$-axis. There are now two possibilities. The first possibility is that $P=k\Delta_n$, in which case $(X,L)=(\PP^n,\sO_{\PP^n}(k))$.
If instead $P\supsetneq k\Delta_n$, we consider for every $i\in \{1,\ldots,n\}$ the vertex $v_i$ which lies along the $x_i$-axis and is not the origin. By convexity, and because the edges through $v_i$ form a basis for $M$, there is, for every $j\ne i$, an edge through $v_i$ that passes through a point of the form $a\hat{e}_i+\hat{e}_j$ for some $a\in \ZZ$. Because every edge of $P$ contains at least $k+1$ lattice points and $P$ is contained in the first orthant, convexity yields $k-1\le a$. Note that if $a=k-1$, then convexity implies that $v_i=k\hat{e}_i$ and that the edge through $(k-1)\hat{e}_i+\hat{e}_j$ has its vertices at $v_i=k\hat{e}_i$ and $k\hat{e}_j$. Thus for every $i$ there must be a lattice point in $P$ of the form $k\hat{e}_i+\hat{e}_j$ for some $j$, since the only other possibility is that $v_i=k\hat{e}_i$ and that the edges through $v_i$ in the $x_ix_j$-plane end in the point $k\hat{e}_j$ for all $j\ne i$. However, with these assumptions, convexity implies that $P=k\Delta_n$, which is a contradiction. Thus for all $i$ there exists some $j$ such that $k\hat{e}_i+\hat{e}_j\in P$. Note that $S=k\Delta_n\cap M$ is $k$-jet spanned, and thus $S_i=\big((k\Delta_n\cap M)\setminus \{(k-1)\hat{e}_i+\hat{e}_j\}\big)\cup\{k\hat{e}_i+\hat{e}_j\}$ is $k$-jet spanned as well, since $(k-1)\hat{e}_i+\hat{e}_j$ is the only lattice point in $k\Delta_n$ giving a monomial $\mathbf{x}^\mathbf{m}$ such that $\frac{\partial^k}{\partial x_i^{k-1}\partial x_j}(\mathbf{x}^\mathbf{m})$ evaluated at $(1,\ldots,1)$ is non-zero, while also $\frac{\partial^k}{\partial x_i^{k-1}\partial x_j}(\mathbf{x}^{k\hat{e}_i+\hat{e}_j})(1,\ldots,1)\ne 0$. Set $s=\sum_{u\in S} u$ and $s_i=\sum_{u\in S_i} u$. Then for all $i$ the difference $s_i-s=\hat{e}_i$ lies in $\langle B_k-B_k\rangle$, which implies that $\langle B_k-B_k\rangle_\RR \cap M$ has maximal rank, i.e. the general fiber of $\gamma^k$ is finite by Theorem~\ref{thm:Bklattice}.
Moreover, observe that the above argument proves that the inclusion $\langle B_k-B_k\rangle\subseteq M$ is an equality, and hence the induced map of tori is the identity morphism. This shows that $\gamma^k$ is also birational.
\end{proof}
\begin{figure}
\begin{minipage}[t]{0.43\linewidth}
\centering
\begin{tikzpicture}[scale=0.7]
\draw[thick,->] (-1,0)--(5,0);
\draw[thick,->] (0,-1)--(0,5);
\draw[ultra thick,fill=gray,fill opacity=0.3] (0,0)--(2,0)--(4,2)--(4,4)--(2,4)--(0,2)--(0,0);
\draw[ultra thick, dashed] (2,0)--(0,2);
\foreach \x in {0,1,2,3,4}{
\foreach \y in {0,1,2,3,4}{
\node[fill=black, shape=circle, scale=0.5] at (\x,\y) {};
}
}
\node at (2,-0.5) {$v_1$};
\node at (-0.5,2) {$v_2$};
\node at (3.9,0.5) {$a\hat{e}_1+\hat{e}_2$};
\node at (0.7,0.5) {$k\Delta_2$};
\end{tikzpicture}
\captionsetup{width=0.72\textwidth}
\caption{Illustration for the proof of Proposition~\ref{prop:toricfinite}.}
\end{minipage}
\begin{minipage}[t]{0.56\linewidth}
\centering
\begin{tikzpicture}
\draw (0.866025,1,0.5)--(0,1,0)--(0.5,3,-0.866025)--(1,0,-1.73205)--(1.73205,0, 1)--(0.866025,1,0.5);
\draw (0.5, 3, -0.866025)--(1.73205, 0, 1);
\draw[dashed] (0,0,0)--(1, 0, -1.73205);
\draw (0,0,0)--(1.73205, 0, 1);
\draw (0,0,0)--(0,1,0);
\draw (0.866025,1,0.5)--(0.5,3,-0.866025);
\foreach \pt in {(0,0,0),(0,1,0),(0.866025,0,0.5),(0.866025,1,0.5),(1,0,-1.73205),(1.73205,0, 1),(1.36603, 0., -0.366025)}{
\node [fill=blue,shape=circle, scale=0.5] at \pt {};
}
\foreach \n in {0,1}{
\foreach \pt in {(0.5, \n., -0.866025)}{
\node [fill=green,shape=circle, scale=0.5] at \pt {};
}}
\foreach \n in {2,3}{
\foreach \pt in {(0.5, \n., -0.866025)}{
\node [fill=red,shape=circle, scale=0.5] at \pt {};
}}
\end{tikzpicture}
\captionsetup{width=0.87\textwidth}
\caption{Illustration for Example~\ref{ex:infinitefibers}. The set $S_3$ consists of the blue and green points, while the set $L_3$ consists of the green and red points.}
\end{minipage}
\end{figure}
Observe that if $L$ is a generically $2$-jet spanned line bundle on an $n$-dimensional toric variety $X$, then $\dim(H^0(X,L))\ge \binom{n+2}{2}$. Below we give an example of a generically $2$-jet spanned pair $(X,L)$, with $X$ singular, such that the Gauss map of order $2$ has infinite fibers, $\dim(X)=n$ and $\dim(H^0(X,L))=\binom{n+2}{2}+N-2$ for all $n,N\ge 2$. These examples were found using the package \cite{LP} for \textsf{Macaulay2} \cite{M2}, which uses Theorem~\ref{thm:Bklattice} to compute the image and fiber of the Gauss map of order $k$ in the toric setting.
\begin{example}\label{ex:infinitefibers}
For every pair of integers $n\ge 2$ and $N\ge 2$ we define the convex lattice polytope $P_n^N=\conv(A_1\cup A_2\cup\ldots\cup A_n)\subset M_\RR=M\otimes \RR$ where
\[
A_1=\{0,\hat{e}_1+N\hat{e}_2,2\hat{e}_1\}, A_2=\{\hat{e}_2\}, A_j=\{\hat{e}_1+\hat{e}_j,2\hat{e}_j\} \text{ for }2<j\le n
\]
and $\hat{e}_1,\ldots,\hat{e}_n$ is a basis for $M$. We claim that the Gauss map of order $2$ for the projective and normal variety $X_{P_n^N\cap M}$ and generically $2$-jet spanned line bundle $L$ associated to $P_n^N$ has positive dimensional fibers. For every $n$ let $S_n$ be the set of lattice points corresponding to monomials of degree at most $2$ in the variables $x_1,\ldots, x_n$, but without the lattice point $2\hat{e}_2$. Moreover for every $N\ge 2$ let $L_N$ be the set of lattice points of the form $\hat{e}_1+m\hat{e}_2$ for $0\le m\le N$. Then by considering the fibers under the projection onto the $x_1x_2$-plane one readily checks that the lattice points of $P_n^N$ decompose as $P_n^N\cap M=S_n \cup L_N$.
By the above we have that $P_n^2\cap M\subseteq P_n^N\cap M$ for all $n\ge 2$ and $N\ge 2$. By direct computation one checks that $P_n^2$ is generically $2$-jet spanned. Thus $P_n^N$ is generically $2$-jet spanned for all $N\ge 2$. Now for any $P_n^N$ consider the column-span $C$ of the columns in the matrix of $2$-jets corresponding to all lattice points in $L_N$. Using linear algebra techniques one readily checks that $\dim(C)=3$. Thus every generically $2$-jet spanned subset $A$ of $P_n^N\cap M$ such that $|A|=\binom{n+2}{2}$ can contain at most $3$ lattice points in $L_N$. However, as there are exactly $\binom{n+2}{2}-1-2$ lattice points in $S_n\setminus L_N$, it then follows that every such subset $A$ of $P_n^N\cap M=S_n\cup L_N$ is determined by the choice of three lattice points in $L_N$. Now, by definition, every element of $B_2$ is the sum of the elements of a generically $2$-jet spanned subset $A\subseteq P_n^N\cap M$ such that $|A|=\binom{n+2}{2}$. As the only difference between two such subsets lies in the choice of $3$ lattice points in $L_N$, the only difference between two elements of $B_2$ is their $x_2$-coordinate. Thus $\langle B_2-B_2\rangle_\RR$ has dimension $0$ if $|P_n^N\cap M|=\binom{n+2}{2}$ and dimension $1$ otherwise. As a consequence, by Theorem~\ref{thm:Bklattice}, the fibers of the Gauss map of order $2$ for the projective, normal and singular toric $n$-fold $X_{P_n^N\cap M}$ have dimension $n$ if $|P_n^N\cap M|=\binom{n+2}{2}$ and dimension $n-1$ if $|P_n^N\cap M|>\binom{n+2}{2}$.
\end{example}
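For the smallest case with positive-dimensional fibers, $n=2$ and $N=3$, the set $B_2$ can be checked with a self-contained Python sketch (our illustration, independent of the package \cite{LP}): the lattice points are $P_2^3\cap M=\{(0,0),(1,0),(0,1),(2,0),(1,1),(1,2),(1,3)\}$, and the computation finds $B_2=\{(5,4),(5,5),(5,6),(5,7)\}$, so $\rk\langle B_2-B_2\rangle=1$ and the general fiber of $\gamma^2$ is one-dimensional, as predicted by Example~\ref{ex:infinitefibers}.

```python
from fractions import Fraction
from itertools import combinations, product
from math import comb

A = [(0, 0), (1, 0), (0, 1), (2, 0), (1, 1), (1, 2), (1, 3)]  # P_2^3 (n = 2, N = 3)
k, n = 2, 2
q = comb(n + k, k)                                            # = 6
alphas = [a for a in product(range(k + 1), repeat=n) if sum(a) <= k]

def entry(u, a):                       # d^a x^u at (1,...,1)
    r = 1
    for ui, ai in zip(u, a):
        for t in range(ai):
            r *= ui - t
    return r

def rank(M):
    """Rank over Q by Gaussian elimination with exact arithmetic."""
    M = [[Fraction(x) for x in row] for row in M]
    r = 0
    for c in range(len(M[0]) if M else 0):
        piv = next((i for i in range(r, len(M)) if M[i][c] != 0), None)
        if piv is None:
            continue
        M[r], M[piv] = M[piv], M[r]
        pv = M[r][c]
        M[r] = [x / pv for x in M[r]]
        for i in range(len(M)):
            if i != r and M[i][c] != 0:
                f = M[i][c]
                M[i] = [x - f * y for x, y in zip(M[i], M[r])]
        r += 1
    return r

# B_2: sums over the q-element subsets whose matrix of 2-jets is non-singular
B2 = set()
for S in combinations(A, q):
    if rank([[entry(u, a) for u in S] for a in alphas]) == q:
        B2.add(tuple(map(sum, zip(*S))))

b0 = next(iter(B2))
diffs = [[x - y for x, y in zip(b, b0)] for b in B2]
print(sorted(B2), rank(diffs))   # [(5, 4), (5, 5), (5, 6), (5, 7)] 1
```

The rank of the difference lattice is $1$, so by Theorem~\ref{thm:Bklattice} the general fiber of $\gamma^2$ has dimension $n-1=1$, as claimed.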
\begin{remark}
In \cite[Corollary 1.3]{FI14} the authors show that the sets of lattice points $A$ giving degenerate Gauss maps are so called Cayley sums. Example~\ref{ex:infinitefibers} shows that this characterization does not directly generalize to higher order since the sets of lattice points appearing there are not Cayley sums. We leave it as an open problem to characterize sets of lattice points yielding degenerate Gauss maps of order $k$.
\end{remark}
\begin{bibdiv}
\begin{biblist}
\bib{A11}{article}{
label={A11},
AUTHOR = {A. Atanasov, C. Lopez, A. Perry, N. Proudfoot, M. Thaddeus},
TITLE = {Resolving toric varieties with {N}ash blowups},
JOURNAL = {Exp. Math.},
FJOURNAL = {Experimental Mathematics},
VOLUME = {20},
YEAR = {2011},
NUMBER = {3},
PAGES = {288--303},
ISSN = {1058-6458},
MRCLASS = {14M25 (14E15 52B20)},
MRNUMBER = {2836254 (2012h:14119)},
MRREVIEWER = {Nathan Owen Ilten},
DOI = {10.1080/10586458.2011.565238},
URL = {http://dx.doi.org/10.1080/10586458.2011.565238},
}
\bib{BS95}{book}{
label={BS95},
AUTHOR = {M.C.Beltrametti},
author={ A.J. Sommese},
TITLE = {The adjunction theory of complex projective varieties},
SERIES = {de Gruyter Expositions in Mathematics},
VOLUME = {16},
PUBLISHER = {Walter de Gruyter \& Co., Berlin},
YEAR = {1995},
PAGES = {xxii+398},
ISBN = {3-11-014355-0},
MRCLASS = {14C20 (14-02 14E35 14N05)},
MRNUMBER = {1318687 (96f:14004)},
MRREVIEWER = {Jaros{\l}aw A. Wi{\'s}niewski},
DOI = {10.1515/9783110871746},
URL = {http://dx.doi.org/10.1515/9783110871746},
}
\bib{CA22}{article}{
label={C22},
author={M. Castellani},
title={Sulle superficie i cui spazi osculatori sono biosculatori},
journal={Rom. Acc. I. Rend.},
volume={5},
date={1922},
number={31},
pages={347-350}
}
\bib{CM02}{incollection}{
label={CM02},
AUTHOR = {K. Cho},
author={ Y. Miyaoka},
author={N.I. Shepherd-Barron},
TITLE = {Characterizations of projective space and applications to
complex symplectic manifolds},
BOOKTITLE = {Higher dimensional birational geometry ({K}yoto, 1997)},
SERIES = {Adv. Stud. Pure Math.},
VOLUME = {35},
PAGES = {1--88},
PUBLISHER = {Math. Soc. Japan, Tokyo},
YEAR = {2002},
MRCLASS = {14M20 (14E05)},
MRNUMBER = {1929792 (2003m:14080)},
MRREVIEWER = {S{\'a}ndor J. Kov{\'a}cs},
}
\bib{CLS}{book}{
label={CLS11},
author={D.A. Cox},
author={J.B. Little},
author={H.K. Schenck},
title={Toric varieties},
series={Graduate Studies in Mathematics},
volume={124},
publisher={American Mathematical Society, Providence, RI},
date={2011},
pages={xxiv+841},
isbn={978-0-8218-4819-7},
review={\MR{2810322 (2012g:14094)}},
}
\bib{DI15}{article}{
label={DI15},
Author = {P. De Poi and G. Ilardi},
Issn = {0022-4049},
Journal = {Journal of Pure and Applied Algebra},
Number = {11},
Pages = {5137 - 5148},
Title = {On higher Gauss maps},
Url = {http://www.sciencedirect.com/science/article/pii/S0022404915001243},
Volume = {219},
Year = {2015},
}
\bib{DR99}{article}{
label={DR99},
author={S. Di Rocco},
title={Generation of $k$-jets on toric varieties},
journal={Math. Z.},
volume={231},
date={1999},
number={1},
pages={169--188}
}
\bib{DRS01}{article}{
label={DRS01},
AUTHOR = {S. Di Rocco},
author={A.J. Sommese},
TITLE = {Line bundles for which a projectivized jet bundle is a
product},
JOURNAL = {Proc. Amer. Math. Soc.},
FJOURNAL = {Proceedings of the American Mathematical Society},
VOLUME = {129},
YEAR = {2001},
NUMBER = {6},
PAGES = {1659--1663},
ISSN = {0002-9939},
CODEN = {PAMYAR},
MRCLASS = {14C20 (14J40)},
MRNUMBER = {1814094 (2002c:14012)},
MRREVIEWER = {Maria Luisa Spreafico},
DOI = {10.1090/S0002-9939-00-05875-5},
URL = {http://dx.doi.org/10.1090/S0002-9939-00-05875-5},
}
\bib{FI01}{article}{
label={FI01},
author={D. Franco},
author={G. Ilardi},
title={On Multiosculating Spaces},
Journal={Communications in Algebra},
Volume={29},
number={7},
year={2001},
pages={2961-2976}
}
\bib{FKPT85}{article}{
label={FKPT85},
AUTHOR = {W. Fulton},
author={S. Kleiman},
author={R. Piene},
author={H. Tai},
TITLE = {Some intrinsic and extrinsic characterizations of the
projective space},
JOURNAL = {Bull. Soc. Math. France},
FJOURNAL = {Bulletin de la Soci\'et\'e Math\'ematique de France},
VOLUME = {113},
YEAR = {1985},
NUMBER = {2},
PAGES = {205--210},
ISSN = {0037-9484},
CODEN = {BSMFAA},
MRCLASS = {14E25 (14N05)},
MRNUMBER = {820319 (87a:14012)},
MRREVIEWER = {Takao Fujita},
URL = {http://www.numdam.org/item?id=BSMF_1985__113__205_0},
}
\bib{Fulton}{book}{
label={F93},
author={W. Fulton},
title={Introduction to toric varieties},
series={Annals of Mathematics Studies},
volume={131},
publisher={Princeton University Press},
place={Princeton, NJ},
date={1993},
}
\bib{FI14}{article}{
label={FI14},
author={K. Furukawa},
author={A. Ito},
title={Gauss maps of toric varieties},
journal={arXiv:1403.0793},
year={2014}}
\bib{GH79}{article}{
label={GH79},
author={P. Griffiths},
AUTHOR = {J. Harris},
TITLE = {Algebraic geometry and local differential geometry},
JOURNAL = {Ann. Sci. \'Ecole Norm. Sup. (4)},
FJOURNAL = {Annales Scientifiques de l'\'Ecole Normale Sup\'erieure.
Quatri\`eme S\'erie},
VOLUME = {12},
YEAR = {1979},
NUMBER = {3},
PAGES = {355--452},
ISSN = {0012-9593},
CODEN = {ENAQAF},
MRCLASS = {53A20 (14C21 53A60)},
MRNUMBER = {559347 (81k:53004)},
MRREVIEWER = {M. A. Akivis},
URL = {http://www.numdam.org/item?id=ASENS_1979_4_12_3_355_0},
}
\bib{K91}{article}{
label={K91},
AUTHOR = {M.M. Kapranov},
TITLE = {A characterization of {$A$}-discriminantal hypersurfaces in
terms of the logarithmic {G}auss map},
JOURNAL = {Math. Ann.},
FJOURNAL = {Mathematische Annalen},
VOLUME = {290},
YEAR = {1991},
NUMBER = {2},
PAGES = {277--285},
ISSN = {0025-5831},
CODEN = {MAANA},
MRCLASS = {14M25 (33C70)},
MRNUMBER = {1109634 (92j:14066)},
MRREVIEWER = {Aleksandar Lipkovski},
DOI = {10.1007/BF01459245},
URL = {http://dx.doi.org/10.1007/BF01459245},
}
\bib{KP91}{incollection}{
label={KP91},
AUTHOR = {S. Kleiman},
author={R. Piene},
TITLE = {On the inseparability of the {G}auss map},
BOOKTITLE = {Enumerative algebraic geometry ({C}openhagen, 1989)},
SERIES = {Contemp. Math.},
VOLUME = {123},
PAGES = {107--129},
PUBLISHER = {Amer. Math. Soc., Providence, RI},
YEAR = {1991},
MRCLASS = {14N05 (14M10 14N10)},
MRNUMBER = {1143550 (93b:14082)},
MRREVIEWER = {Susan J. Colley},
DOI = {10.1090/conm/123/1143550},
URL = {http://dx.doi.org/10.1090/conm/123/1143550},
}
\bib{L94}{article}{
label={L94},
year={1994},
issn={0020-9910},
journal={Inventiones mathematicae},
volume={117},
number={1},
doi={10.1007/BF01232243},
title={On second fundamental forms of projective varieties},
url={http://dx.doi.org/10.1007/BF01232243},
publisher={Springer-Verlag},
author={Landsberg, J.M.},
pages={303-315},
language={English}
}
\bib{LM01}{article}{
label={LM01},
AUTHOR = {A. Lanteri},
author = {R. Mallavibarrena},
TITLE = {Osculatory behavior and second dual varieties of del {P}ezzo
surfaces},
JOURNAL = {Adv. Geom.},
FJOURNAL = {Advances in Geometry},
VOLUME = {1},
YEAR = {2001},
NUMBER = {4},
PAGES = {345--363},
}
\bib{LP}{misc}{
label={LP},
author={A. Lundman},
author={G. S\ae d\'{e}n St\aa hl},
title={LatticePolytopes, a package for computations with Lattice Polytopes},
publisher={Available at
\href{http://www.math.illinois.edu/Macaulay2/}
{http://www.math.illinois.edu/Macaulay2/}}
}
\bib{M2}{misc}{
label={M2},
author = {D. R. Grayson},
author={M. Stillman},
title = {Macaulay2, a software system for research
in algebraic geometry},
publisher = {Available at
\href{http://www.math.illinois.edu/Macaulay2/}%
{http://www.math.illinois.edu/Macaulay2/}}
}
\bib{Perkinson}{article}{
label={P00},
AUTHOR = {D. Perkinson},
TITLE = {Inflections of toric varieties},
JOURNAL = {Michigan Math. J.},
FJOURNAL = {The Michigan Mathematical Journal},
VOLUME = {48},
YEAR = {2000},
PAGES = {483--515},
ISSN = {0026-2285},
MRCLASS = {14M25},
MRNUMBER = {1786502 (2001h:14066)},
MRREVIEWER = {Dag E. Sommervoll},
DOI = {10.1307/mmj/1030132730},
URL = {http://dx.doi.org/10.1307/mmj/1030132730},
}
\bib{P81}{incollection}{
label={P81},
AUTHOR = {R. Piene},
TITLE = {A note on higher order dual varieties, with an application to
scrolls},
BOOKTITLE = {Singularities, {P}art 2 ({A}rcata, {C}alif., 1981)},
SERIES = {Proc. Sympos. Pure Math.},
VOLUME = {40},
PAGES = {335--342},
PUBLISHER = {Amer. Math. Soc., Providence, RI},
YEAR = {1983},
MRCLASS = {14J40 (14D25 14N05)},
MRNUMBER = {713259 (85d:14056)},
MRREVIEWER = {A. A. Iarrobino, Jr.},
}
\bib{Pohl}{article}{
label={P62},
AUTHOR = {W. Pohl},
TITLE = {Differential geometry of higher order},
JOURNAL = {Topology},
FJOURNAL = {Topology},
VOLUME = {1},
YEAR = {1962},
PAGES = {169--211},
}
\bib{Zak93}{book}{
label={Z93},
AUTHOR = {F.L. Zak},
TITLE = {Tangents and secants of algebraic varieties},
SERIES = {Translations of Mathematical Monographs},
VOLUME = {127},
NOTE = {Translated from the Russian manuscript by the author},
PUBLISHER = {American Mathematical Society, Providence, RI},
YEAR = {1993},
PAGES = {viii+164},
ISBN = {0-8218-4585-3},
MRCLASS = {14M07 (14L30 14M17 14N05)},
MRNUMBER = {1234494 (94i:14053)},
MRREVIEWER = {Andrew J. Sommese},
}
\end{biblist}
\end{bibdiv}
\end{document}
\section{Introduction}
Existing object detection methods~\cite{FasterRCNN-NIPS} require huge amounts of annotated training data. However,
in the real world, samples of some categories are difficult to acquire and the cost to label high-quality samples can be very high.
In contrast, a child can recognize and locate elephants or horses in a picture s/he has never seen before, given only a few examples.
Thus, few-shot object detection (FSOD)~\cite{TFA,FSCE,Attention-RPN,DeFRCN,MPSR,Relation_Reasoning,Meta-DETR,MetaRCNN,liutianying,Han_2022_CVPR} is gaining increasing research interests, which tries to detect novel
objects with only a few labeled examples.
However, many objects in real life fall into hierarchical fine-grained category structures. For example, elephants comprise different families and species, e.g., African elephants and Asian elephants; African elephants in turn have two subspecies, the African savannah elephant and the African forest elephant, and likewise for Asian elephants. Obviously,
it is difficult for an ordinary person (let alone a child) to distinguish an African savannah elephant from an African forest elephant given only a few photos.
Moreover, existing FSOD methods do not consider such hierarchical fine-grained category structures, which exist ubiquitously in real life, so they cannot cope with such scenarios well.
In this paper, we propose a new problem of \textbf{hi}erarchical \textbf{f}ew-\textbf{s}hot \textbf{o}bject \textbf{d}etection, \textbf{Hi-FSOD} in short, which aims to perform few-shot object detection under a hierarchical taxonomy. Obviously, the FSOD task is a special case of Hi-FSOD where the hierarchical taxonomy degenerates to a flat category structure. Compared with FSOD, Hi-FSOD is thus more challenging and has wider applications, especially in scenarios where the number of object categories is huge and existing FSOD methods are neither efficient nor effective.
To address the Hi-FSOD problem, we tackle two major subproblems:
On the one hand, we construct the first
high-quality and large-scale Hi-FSOD benchmark dataset of wild birds, called \textbf{HiFSOD-Bird}.
Although there are already some datasets of wildlife for computer vision (CV) tasks~\cite{CUB,AwA,AP-10k,FishDataset}, most of them are for classification tasks and a few of them are dedicated to object detection tasks.
Nevertheless, few of them have a strictly hierarchical organization of categories.
Existing FSOD methods perform training and testing on the modified COCO~\cite{COCO} and VOC~\cite{VOC} datasets whose label structures are flat and contain only 80 and 20 categories, respectively, which thus are unsuitable for the Hi-FSOD task.
Our HiFSOD-Bird dataset contains totally 1,432 categories and 176,350 bird images with high-quality annotated bounding boxes. All categories are organized into a 4-level hierarchical taxonomy: from top to bottom, order, family, genus and species,
as shown in Fig.~\ref{fig:intro}(a).
It consists of 32 orders, 132 families, 572 genera and 1,432 species, covering more than 90\% of the world's water birds and part of forest birds.
The bounding boxes and class labels of each image are manually annotated and carefully double-checked.
Moreover, each category of birds comes with a textual description, so the dataset can be further used for the zero-shot object detection task.
The HiFSOD-Bird dataset is also of great significance to the monitoring and protection of endangered birds, since the samples of endangered birds are difficult to acquire and the domain knowledge is mainly from expert annotations.
On the other hand, we develop the first Hi-FSOD method \textbf{HiCLPL}, which is a two-stage method with \textbf{hi}erarchical \textbf{c}ontrastive \textbf{l}earning and \textbf{p}robabilistic \textbf{l}oss. Here, hierarchical contrastive learning (HiCL) is used to constrain the feature space so that the feature distribution of objects is consistent with the hierarchical category structure, and the probabilistic loss is designed to enable the child nodes to correct the classification errors of their parent nodes.
Fig.~\ref{fig:intro}(b) illustrates the HiCL mechanism. We use memories to hold the prototypes of classes in the hierarchical tree. Then, a hierarchical contrastive loss is designed to control the distance between box features and memories at different levels. Finally, we utilize exponential moving average to update the parameters of memories. HiCL can boost the generalization power of the model.
Meanwhile, we observe that in the top-down process of hierarchical classification, if a non-leaf node wrongly classifies an instance, the classifications of the instance at the descendant nodes are useless. Therefore, we design a probabilistic loss such that the child nodes can learn to identify and correct the misclassified samples of their parent nodes.
In summary, contributions of this paper are as follows:
1) We propose a new problem of hierarchical few-shot object detection (Hi-FSOD), which is an extension to the existing FSOD problem, so it is more challenging and has wider applications.
2) We establish the first large-scale and high-quality benchmark dataset HiFSOD-Bird, specifically for the Hi-FSOD problem.
3) We develop the first Hi-FSOD method HiCLPL, which uses hierarchical contrastive learning to constrain the feature space and a probabilistic loss to correct the classification errors of parent nodes.
4) We conduct extensive experiments on the benchmark dataset HiFSOD-Bird to evaluate the proposed method HiCLPL. Experimental results show that our method HiCLPL outperforms the existing FSOD methods.
\section{Related Work}\label{sec:related_work}
\subsection{Few-shot Object Detection}
Existing few-shot object detection (FSOD) methods roughly fall into two types: meta-learning based and fine-tuning based.
Meta-learning based methods~\cite{Feature-Reweighting,MetaRCNN,Attention-RPN,Meta-DETR, SQGuidance} learn meta knowledge from base classes to facilitate model training for novel classes. Among them,
FSRW~\cite{Feature-Reweighting} utilizes
a feature re-weighting strategy to construct a one-stage object detector.
Attention-RPN~\cite{Attention-RPN} integrates the information of supports into RPN, in order to pay more attention to the foreground objects relevant to support classes. Meta-DETR~\cite{Meta-DETR} exploits the inter-class correlation to apply the detection transformer~\cite{deformableDETR} to the FSOD task. We proposed a support-query mutual guidance strategy that can generate more support-relevant candidate regions, together with a hybrid loss to enhance the metric space~\cite{SQGuidance}.
Fine-tuning based methods~\cite{TFA,MPSR,Relation_Reasoning,DeFRCN,FSCE} formulate the FSOD problem in a transfer learning setting. TFA~\cite{TFA} is the first work that proposes a two-stage fine-tuning strategy. It first trains the entire model on the base classes, and then fine-tunes the final classifier on a balanced dataset containing base and novel data. Experiments show that such fine-tuning method is simple yet very effective. Following TFA, a number of methods are developed. DeFRCN~\cite{DeFRCN} adopts multi-stage and multi-task decoupling to improve performance. FSCE~\cite{FSCE} uses a contrastive learning strategy to constrain the intra-class similarity and enhance the inter-class similarity of box features.
Nevertheless, existing methods do not consider the scenarios where object classes form a hierarchical taxonomy, thus they cannot be directly used to effectively handle the problem proposed in this paper.
Different from these works above, here we address a new problem --- hierarchical few-shot object detection (Hi-FSOD). To this end, we build a large-scale and high-quality benchmark dataset and develop an effective method.
\subsection{Fine-grained Image Recognition}
Fine-grained image recognition (FGIR) is to recognize numerous visually similar subcategories under the same basic category.
Existing FGIR methods can be divided into three types: (1) discriminative region based methods~\cite{Part-Stacked_CNN,Object-Part-Attention,Mask-CNN,Look-Closer,Multi-Attention-Convolutional,Multi-Class-Constraint} adopt a two-stage paradigm that first locates the key object parts and then classifies based on the discriminative regions; (2) attention based methods~\cite{Look-Closer,Multi-Attention-Convolutional,Multi-Class-Constraint} leverage attention mechanisms to automatically localize discriminative regions of fine-grained objects; and (3) loss function based methods~\cite{Maximum-Entropy-Fine,Channel-Interaction,Subtle-Differences} focus on constructing effective loss functions to directly regularize the learnt representations.
Like the FSOD works, existing FGIR methods also perform classification under a flat category structure, but the visual differences between categories are too slight to easily distinguish objects of different categories.
\subsection{Hierarchical Classification}
Hierarchical classification (HC) is a challenging problem where the classes are organized into a predefined hierarchy.
HC methods in traditional machine learning mostly follow the top-down strategy that decomposes the hierarchical classification task into a series of subtasks and then trains an independent classifier for each subtask, i.e., transforming a coarse-grained category into several fine-grained subcategories~\cite{taxonomies-visual,Embedding-Trees,Hierarchical-Category-Structure,FastandBalanced}.
For deep hierarchical classification, some works try to directly embed the prior semantic knowledge contained in the class structure to visual features to guide the classification. Among them, \cite{DeViSE} develops a visual-semantic embedding model, which transfers the learnt semantic information to visual object recognition. \cite{Hierarchy-Retrieval} proposes to map images to class embeddings to learn semantically discriminative features. Besides, some approaches introduce the multi-task framework to solve deep hierarchical classification efficiently~\cite{MakingBetterMistakes,Semantics-aware}.
However, hierarchical classification for object detection has received little attention, though hierarchical categories exist ubiquitously in real life.
\section{Problem Definition}\label{sec:problem}
Hierarchical few-shot object detection~(Hi-FSOD) is an extension to few-shot object detection, where the categories are organized hierarchically.
Formally, the class space $C$ is divided into the base classes $C_b$ and novel classes $C_n$, where $C = C_b \cup C_n$ and $C_b \cap C_n = \varnothing$.
In the base classes $C_b$, each class has many instances, while only $K$ (usually no more than 10) instances are available per category in the novel classes $C_n$.
In our hierarchical setting, each class $c_i \in C$ can be mapped to a leaf node in the hierarchical tree (taxonomy) $T$, and there is a path from the root to each leaf in $T$, as shown in Fig.~\ref{fig:dataset_hierarchy}.
The classification of an instance belonging to $c_i \in C$ can be decomposed into classifications by multiple classifiers from the root to the leaf node corresponding to $c_i$ of the tree $T$, i.e., from coarse to fine.
An instance is correctly classified if and only if it is classified correctly at each level.
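As a concrete illustration, the top-down decomposition above can be sketched in a few lines of Python. The toy taxonomy and the always-pick-first-child classifier below are hypothetical stand-ins (not HiFSOD-Bird data): each internal node has a classifier that selects one child, and a prediction counts as correct only if every level on the root-to-leaf path matches the ground truth.

```python
TAXONOMY = {  # parent -> children; leaf nodes (species) have no entry
    "root": ["Anseriformes", "Passeriformes"],
    "Anseriformes": ["Anatidae"],
    "Anatidae": ["Anas"],
    "Anas": ["Mallard", "Teal"],
    "Passeriformes": ["Corvidae"],
    "Corvidae": ["Corvus"],
    "Corvus": ["Raven"],
}

def predict_path(classify, node="root"):
    """Walk from the root, letting a per-node classifier pick one child at
    each internal node; return the predicted root-to-leaf path."""
    path = [node]
    while node in TAXONOMY:
        node = classify(node, TAXONOMY[node])
        path.append(node)
    return path

def is_correct(pred_path, true_path):
    # Correct iff the prediction matches the ground truth at *every* level.
    return pred_path == true_path
```

For instance, a (toy) classifier that always picks the first child yields the path root, Anseriformes, Anatidae, Anas, Mallard, which is counted wrong as soon as the true path diverges at any level.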
\section{Benchmark}\label{sec:benchmark}
Data are the basis of any machine learning or deep learning based task. To advance the research of Hi-FSOD, we construct the first Hi-FSOD benchmark dataset HiFSOD-Bird.
\subsection{Data Construction}
First, we collect publicly available bird images from various non-copyright websites.
Then, we perform a standard cleaning procedure to remove all low-quality images (including extremely blurry ones) and duplicate images.
To acquire high-quality annotations, we recruit several well-trained annotators and ask them to annotate all the objects with bounding boxes and labels.
All the annotators have undergone professional training before annotating.
To make the annotation accurate, each annotator is only responsible for the data under a specific order during a given period of time.
Following that, we carry out cross-checking to guarantee the annotation quality. The images that are not in our pre-defined category space are discarded.
Finally, since we are to perform 10-shot experiments, if there are less than 11 images in a category, we remove this category and all its images.
The final annotated dataset contains 176,350 high-quality images. All the bounding boxes belong to 1,432 classes of birds, falling into 32 orders, 132 families, 572 genera and 1,432 species.
Moreover, in order to facilitate the zero-shot object detection task in the future, we also acquire the textual description of each class of birds from Wikipedia, which will be released with our dataset. Fig.~\ref{fig:dataset_hierarchy} shows the taxonomy of our dataset, and Tab.~\ref{tab:dataset_summary} presents the major statistics of our dataset.
\subsection{Base/Novel Split}
To perform FSOD, we need to divide all the classes into base classes and novel classes.
Since ``order'' is the top-1 level of the bird taxonomy, and birds of different orders are quite different, we divide the bird images into the base and novel classes at the order level, as shown in Fig.~\ref{fig:dataset_hierarchy}.
Moreover, the numbers of images in orders and species all exhibit a long-tail distribution as shown in Fig.~\ref{fig:dataset_dist}, which reflects the true distribution of birds in the wild.
Therefore, we sort the orders according to the number of images contained in each order and take the tail part as novel classes.
Specifically, we take the species in the top-7 orders with the largest number of images as base classes, and the species in the other 25 orders as novel classes.
Although the base classes contain birds from fewer orders,
their images account for about 80\% of the total images.
Our base/novel division has the following advantages:
(1) The difference between base and novel classes is relatively large, since the birds of different orders are obviously different in appearance.
(2) The novel classes contain birds from a variety of different orders,
which means that on average the difference between two novel classes is also relatively large.
This is beneficial to FSOD.
(3) The species located at the distribution tail are mostly rare species that are difficult to acquire samples, which implies the potential application of our method to the monitoring and protection of endangered birds.
With the split above, we sample the test images in each category (species). Since we are to perform 10-shot experiments, we sample $\min(6, k_{c_i}-10)$ images per category as test images, where $k_{c_i}$ is the number of images in category $c_i$. In total, 8,211 images form the test set.
All annotation files are in the COCO format for the convenience of usage.
\begin{figure}[t]
\centering
\includegraphics[width=0.9\linewidth]{figures/dataset_hierarchy}
\caption{The hierarchical taxonomy and base/novel split in the HiFSOD-Bird dataset. We divide the base and novel classes at the order level.
}
\label{fig:dataset_hierarchy}
\end{figure}
\begin{figure}[t]
\centering
\includegraphics[width=0.9\linewidth]{figures/dataset_distribution}
\caption{Distributions of images in different orders and species~(classes). The number of images within an order or class exhibits a long-tailed distribution. The classes in orders located at the distribution tail are used as novel classes.
}
\label{fig:dataset_dist}
\end{figure}
\begin{table}[t]
\caption{Statistics of the HiFSOD-Bird dataset. }
\label{tab:dataset_summary}
\resizebox{0.9\linewidth}{!}{
\begin{tabular}{llll}
\toprule
& Total & Base & Novel \\
\midrule
\#classes (\#cls) & 1432 & 1145 & 287 \\
\#orders & 32 & 7 & 25 \\
\#families & 132 & 94 & 38 \\
\#genera & 572 & 436 & 136 \\
\#species & 1432 & 1145 & 287 \\
\#species/\#orders & 44.75 & 163.57 & 11.48 \\
\#images~(\#img) & 176350 & 141239 & 35111 \\
Range of \#img/\#cls & [11, 1112] & [13, 1112] & [11, 1000] \\
Avg of \#img/\#cls & 123.08 & 123.35 & 122.01 \\
\#boxes & 179042 & 143608 & 35434 \\
box-size range & [4, 9455160] & [4, 9455160] & [1636, 4474202] \\
box W/H range & [0.274, 7] & [0.274, 7] & [0.292, 5.27] \\
\bottomrule
\end{tabular}
}
\end{table}
\subsection{Statistics and Characteristics}
\textbf{Statistics.}
As presented in Tab.~\ref{tab:dataset_summary}, data in HiFSOD-Bird are organized in a hierarchical structure, with a total of four levels.
HiFSOD-Bird contains 1,432 species of birds, of which 1,145 species fall into base classes and the remaining 287 species belong to novel classes. It covers more than 90\% of the world's water birds and part of the forest birds.
The ratio of images (and of species) in the base classes to those in the novel classes is roughly 4:1.
Meanwhile, the size of object bounding boxes varies greatly, which makes detecting the birds challenging.
\textbf{Characteristics.}
(a) \emph{Hierarchical and fine-grained class space.} All categories in HiFSOD-Bird are organized according to the hierarchical taxonomy.
Meanwhile, as the level becomes deeper, the difference between categories gradually decreases and the classification difficulty increases.
(b) \emph{Long-tailed distribution}. As shown in Fig.~\ref{fig:dataset_dist}, the number of images per species exhibits a long-tailed distribution, which is consistent with nature: some kinds of birds are common and their samples are easy to obtain, while most birds are less common and their samples are harder to obtain, and some rare and endangered birds have samples that are very difficult to acquire.
\section{Method}\label{sec:method}
\begin{figure*}[t]
\centering
\includegraphics[width=0.8\linewidth]{figures/framework}
\caption{The framework of our proposed hierarchical few-shot object detection method HiCLPL. To make the distribution of object features consistent with the hierarchical taxonomy, we propose hierarchical contrastive learning. Meanwhile, to correct the errors of internal classifiers not at the lowest level in the hierarchical class tree, we design a probabilistic loss function.
}
\label{fig:framework}
\end{figure*}
\subsection{Overview}
Fig.~\ref{fig:framework} illustrates the framework of our method HiCLPL, where
we adopt a modified Faster R-CNN~\cite{FasterRCNN-NIPS} detector: the original classification head is replaced with a hierarchical head whose classifier structure mirrors the hierarchical taxonomy $T$, with levels indexed from $0$~(root) to $L$~(leaf).
Each classifier $\mathcal{F}_{c_i}^j$ corresponds to a non-leaf node $\widehat{c}_i^j$ in $T$, where $j \in [0,L-1]$ is the level index; no classifiers are needed for leaf nodes.
$\mathcal{F}_{c_i}^j$ only needs to discriminate among the fine-grained child classes of $\widehat{c}_i^j$.
It is composed of two fully connected layers, and its output dimension equals the number of children of $\widehat{c}_i^j$.
Meanwhile,
we separate the regression head from the classification head, and set the regression parameters for each child class of the root node.
HiCLPL employs a simple two-stage training process.
In the first stage, we train the hierarchical Faster R-CNN with abundant base-class data (i.e., $D_{train} = D_{base}$) and the hierarchical head is built according to $T^{base}$, which is the hierarchical taxonomy of the base classes.
Then, the base detector is transferred to the novel classes by fine-tuning on a balanced dataset~\cite{TFA}, where the novel instances and sampled base instances form the training set (i.e., $D_{train} = D_{base} \cup D_{novel}$) and the hierarchical head is built with $T^{all} = T^{base} \cup T^{novel}$.
In the second stage, we freeze the backbone and RPN, while unfreezing the RoI feature extractor to allow the subsequent feature distribution transformations.
In order to learn a better feature space and enhance the generalization power of the model, we propose hierarchical contrastive learning to constrain the feature space such that the feature distribution of objects is consistent with the hierarchical taxonomy.
Meanwhile, note that in the process of hierarchical classification, once an instance is classified incorrectly at a class (or node), the subsequent classifications of the instance at the descendant classes (or nodes) are meaningless. Therefore, we design a probabilistic loss to handle this problem.
Both hierarchical contrastive learning and the probabilistic loss are applied in the $1^{st}$ and $2^{nd}$ stages.
Finally, we jointly optimize the hierarchical contrastive loss, the probabilistic loss and the original RPN classification and regression losses in a multi-task fashion.
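The joint objective can be sketched as a weighted sum of the component losses. The weighting scheme and the value of `lam_hicl` below are hypothetical, since the loss weights are not specified in the text above:

```python
def total_loss(l_rpn_cls, l_rpn_reg, l_hicl, l_prob, lam_hicl=0.5):
    """Multi-task objective (a sketch): RPN classification and regression
    losses, plus the probabilistic loss of the hierarchical head and the
    hierarchical contrastive loss weighted by a hypothetical lam_hicl."""
    return l_rpn_cls + l_rpn_reg + l_prob + lam_hicl * l_hicl
```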
\subsection{Hierarchical Contrastive Learning}
As mentioned above, in order to enhance the model's generalization power, we use hierarchical contrastive learning to constrain the feature space such that the distribution of object features is aligned with the hierarchical taxonomy $T$.
As shown in Fig.~\ref{fig:intro}(b), if an internal category $a$ has three children $a.1$, $a.2$ and $a.3$, the features of the three child categories should be close to each other in the feature space as much as possible.
We utilize a hierarchical contrastive loss to control the distribution of features, memories to hold class representations at all levels, and exponential moving average (EMA) to update memory parameters.
\subsubsection{Hierarchical Contrastive Loss}
For a box feature $x_i$ with label $c_i$,
we denote the corresponding ground-truth path of $c_i$ in $T$ as $P_i$
and nodes in $P_i$ as $\widehat{c}_i^j$, where $j$ is the level index.
For each node $\widehat{c}_i^j \in P_i$, we use a memory $\mathcal{M}_{c_i}^j$ to hold its prototype, where $j \in [0,L]$, indicating that each node on the path from root to leaf has a prototype.
Then, our goal is to maximize the agreement between box feature $x_i \in c_i$ and $\{\mathcal{M}_{c_i}^j | j \in [0,L]\}$,
and promote the deviation of $x_i$ from all the other memories at all levels.
In this way, we can not only cluster together the fine-grained classes under the same parent node,
but also make the internal categories at multiple levels be clustered according to the category hierarchy.
Inspired by supervised contrastive loss~\cite{Contrastive_Learning}, our hierarchical contrastive loss (HiCL) is defined to adapt to the hierarchical taxonomy.
Specifically, for a mini-batch containing $N$ foreground proposals with labels $\{x_i, c_i\}_{i=1}^{N}$, where $x_i \in \mathbb{R}^{2048}$ is the $i^{th}$ box feature from the RoI feature extractor without the last ReLU activation
and $c_i$ is the ground-truth class of $x_i$,
HiCL is formulated as
\begin{equation}
\mathcal{L}_{HiCL} = \frac{1}{N} \sum^{N}_{i=1} \mathcal{L}_{H_i}(x_i,c_i)
\end{equation}
\begin{small}
\begin{equation}
\label{Eq:item_HiCL}
\mathcal{L}_{H_i}(x_i,c_i) = \frac{-1}{ {\textstyle \sum_{j=0}^{L}}\mathcal{G}(j) }
\sum^{L}_{j=0}\mathcal{G}(j) \log \frac{\exp ( \overline{x_i} \cdot \mathcal{M}_{c_i}^{j} / \tau ) }
{{\textstyle \sum_{c_{\hat{i}} \in C, \hat{j} \in [0,L] } } \exp( \overline{x_{i}} \cdot \mathcal{M}_{c_{\hat{i}}}^{\hat{j}} / \tau ) }
\end{equation}
\end{small}
where $\tau$ is the hyper-parameter that controls temperature,
$L$ is the height of $T$ and $\overline{x_i}=\frac{x_i}{\left \| x_i \right \|}$ is the normalized box feature.
$\mathcal{G}(\cdot)$ is a function that regularizes the aggregation strength at different levels. We found it better to increase the aggregation strength with the level $j$, and we set $\mathcal{G}(j) = j$ in our experiments. We present experimental results on how this function impacts performance in Sec.~\ref{Sec:AB_G}.
\begin{figure}[t]
\centering
\includegraphics[width=0.9\linewidth]{figures/hierarchical_contrastive}
\caption{Illustration of hierarchical contrastive learning. We constrain the feature distribution of objects in feature space to be consistent with the hierarchical taxonomy.
}
\label{fig:Hierarchical_Contrastive}
\end{figure}
In Eq.~(\ref{Eq:item_HiCL}), we try to narrow the distance between $x_i$ and the memories it belongs to from the leaf memory $\mathcal{M}_{c_i}^{L}$ to the root memory $\mathcal{M}_{c_i}^{0}$ with different aggregation strength $\mathcal{G}(\cdot)$, and enlarge the distances to the other memories. In this way, the object features will be aggregated according to the hierarchical taxonomy $T$ and the distribution of the feature space is consistent with the hierarchy of the categories. Fig.~\ref{fig:tSNE} shows the effectiveness by t-SNE visualization.
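Under these definitions, the per-sample loss of Eq.~(\ref{Eq:item_HiCL}) can be sketched in plain Python for a single box feature. The vector dimensionality, temperature value, and prototype vectors used in the usage example are illustrative assumptions, not the paper's settings:

```python
import math

def normalize(v):
    n = math.sqrt(sum(x * x for x in v)) or 1.0
    return [x / n for x in v]

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

def hicl_loss(x, path_memories, all_memories, tau=0.2, G=lambda j: j):
    """Hierarchical contrastive loss for one box feature (sketch of Eq. (2)).

    x             -- raw box feature (list of floats)
    path_memories -- normalized memories M^0..M^L on the ground-truth path
    all_memories  -- every (normalized) memory at every level
    G             -- level-wise aggregation strength; the text sets G(j) = j
    """
    xb = normalize(x)
    denom = sum(math.exp(dot(xb, m) / tau) for m in all_memories)
    L = len(path_memories) - 1
    weight_sum = sum(G(j) for j in range(L + 1)) or 1.0
    loss = 0.0
    for j, mem in enumerate(path_memories):
        # log-probability of agreeing with the level-j memory on the path
        loss += G(j) * math.log(math.exp(dot(xb, mem) / tau) / denom)
    return -loss / weight_sum
```

As a sanity check, a feature close to its leaf prototype should incur a smaller loss than one close to a foreign prototype, since the positive terms in the numerator dominate the shared denominator.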
\subsubsection{Update of Memory Parameters}
The memories described above store the latest representation of each node in $T$. Hence, the memories need to be updated during training.
Here, we use exponential moving average (EMA)~\cite{EMA,Episodic_Memory,Memory_REID} to update the parameters of memories.
Concretely, for a box feature $x_i$ with ground-truth class $c_i$ and ground-truth path $P_i$ in $T$, each memory $\mathcal{M}_{c_i}^j, j \in [0,L]$ on path $P_i$ is updated as follows:
\begin{equation}
\label{Eq:EMA_update}
\mathcal{M}_{c_i}^j \gets f(j) \mathcal{M}_{c_i}^j + [1-f(j)] \overline{x_i}, \quad j \in [0,L]
\end{equation}
\begin{equation}
\label{Eq:update_rate}
f(j) = 1 - {\epsilon}^{L - j + 1}
\end{equation}
where $\overline{x_i}$ is the normalized $x_i$ as that in Eq.~(\ref{Eq:item_HiCL}).
$f(j)$ is the momentum coefficient for updating memories and
$\epsilon$ is a hyper-parameter to control the value of $f(j)$, which empirically takes a small value ($<1.0$) such as 0.1.
Eq.~(\ref{Eq:EMA_update}) indicates that for a box feature $x_i$, we update all the memories in a bottom up way along its ground-truth path in $T$.
In other words, the memory of a node aggregates the box features of all its descendant nodes.
Meanwhile, the momentum coefficients $f(j)$ differ across levels, since memories at different levels are updated with different numbers of recent box features: an upper-level memory covers more descendant categories, so it aggregates many more recent features than any of its descendant memories and should therefore change more slowly per update.
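The update rule of Eqs.~(\ref{Eq:EMA_update})--(\ref{Eq:update_rate}) can be sketched as follows, taking $\epsilon = 0.1$ as suggested in the text and representing memories as plain Python lists:

```python
EPSILON = 0.1  # epsilon in Eq. (4); 0.1 is the small value suggested in the text

def momentum(j, L):
    """f(j) = 1 - eps^(L - j + 1): upper levels (small j) get a coefficient
    closer to 1, so their memories change more slowly per feature."""
    return 1.0 - EPSILON ** (L - j + 1)

def ema_update(path_memories, x_norm):
    """Update every memory on the ground-truth path (Eq. (3)).
    path_memories[j] is the prototype at level j; x_norm is the normalized
    box feature. Memories are updated in place."""
    L = len(path_memories) - 1
    for j, mem in enumerate(path_memories):
        f = momentum(j, L)
        for d in range(len(mem)):
            mem[d] = f * mem[d] + (1.0 - f) * x_norm[d]
```

With $L = 1$ and $\epsilon = 0.1$, the leaf memory uses $f = 0.9$ while the root memory uses $f = 0.99$, so a single feature moves the leaf prototype ten times as far as the root prototype.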
\subsection{Probabilistic Loss}
A common problem in hierarchical classification is that if a node classifies incorrectly, the classifications of its descendant nodes will be meaningless.
Therefore, we design a mechanism to enable a node to judge and correct the classifications of its parent node.
For a box feature $x_i$ with ground-truth path $P_i$ in $T$, $\{\widehat{c}_i^j | \widehat{c}_i^j \in P_i, j \in [0,L-1] \}$ are internal nodes in $P_i$.
As mentioned before, each internal node corresponds to a classifier. For an arbitrary classifier $\mathcal{F}_{c_k}^j$,
our aim is to reduce the outputs for all predefined classes of $\mathcal{F}_{c_k}^j$ if the ground-truth path of $x_i$ does not pass through $\mathcal{F}_{c_k}^j$. Then the product of probabilities along any wrong path is suppressed below the product along the correct path, so the classification error of the parent node can be corrected.
To achieve this, we add an ``others'' virtual class for each classifier.
The structure of classifiers is shown in Fig.~\ref{fig:classifier_structure}.
If a sample does not belong to any predefined class of the classifier, it will be classified to the ``others'', such as the classifier $a.1.2$ does in Fig.~\ref{fig:classifier_structure}.
During model inference, instead of only following the highest-scoring path from root to leaf, we apply beam search~\cite{Beam-Search} and keep multiple candidate paths, so that a potentially correct path can be discovered and nodes with classification errors can later be abandoned because of their children's low scores.
\begin{figure}[t]
\centering
\includegraphics[width=1\linewidth]{figures/classifier_structure}
\caption{A 3-level structure of classifiers. We add a virtual class ``others'' for each classifier to make the classifier classify a sample not belonging to its predefined classes into the ``others'' class, so as to correct the errors of upper nodes. The blue arrow is the ground-truth path. The yellow arrow is an error of node $a.1$, which will be corrected by node $a.1.2$ via classifying the sample to its ``others'' class.
}
\label{fig:classifier_structure}
\end{figure}
To train the classifier $\mathcal{F}_{c_i}^j$ with the virtual class ``others'', we need to sample instances that do not belong to $\mathcal{F}_{c_i}^j$'s predefined classes (labeled from 0 to $d_i^j - 1$) for the ``others'' class and assign them the label $d_i^j$, where $d_i^j$ is the number of children of node $\widehat{c}_i^j$.
For $\mathcal{F}_{c_i}^j$, if all instances not belonging to its original classes are assigned to ``others'', then extreme class imbalance will happen. To solve this problem, we introduce the \emph{probabilistic loss} (PL).
Formally,
for a mini-batch containing $N$ foreground proposals with labels $\{x_i,c_i,P_i\}_{i=1}^N$, where $x_i$ is the box feature, $c_i$ is the ground truth label and $P_i$ is the ground-truth path, the probabilistic loss is formulated as
\begin{equation}
\label{Eq:loss_prob}
\begin{split}
\mathcal{L}_{prob} = \frac{1}{MN} \sum_{c_k \in C, j \in [0,L-1]}
\sum_{i=1}^{N}
[
\mathbb{I}(\widehat{c}_k^j \in P_i) \cdot \mathcal{L}_{CE}(c_k, j, i) \\
+ \mathbb{I}(\widehat{c}_k^j \notin P_i) \cdot \mathbb{I}(\mathcal{P}_k^j) \cdot \mathcal{L}_{oth}(c_k, j, i)
]
\end{split}
\end{equation}
\begin{equation}
\label{Eq:CE_loss}
\mathcal{L}_{CE}(c_k, j, i) = - y_i^{k,j} \cdot \log(\mathcal{F}_{c_k}^j(x_i))
\end{equation}
\begin{equation}
\mathcal{L}_{oth}(c_k, j, i) = - d_k^j \cdot \log(\mathcal{F}_{c_k}^j(x_i))
\end{equation}
Eq.~(\ref{Eq:loss_prob}) indicates that we train all classifiers sequentially. If $x_i$ belongs to the internal node $\widehat{c}_k^j$, i.e., $\widehat{c}_k^j \in P_i$, we train $\mathcal{F}_{c_k}^j$ with cross entropy loss as in Eq.~(\ref{Eq:CE_loss}), where $y_i^{k,j}$ is the corresponding label of $x_i$ for $\mathcal{F}_{c_k}^j$. Otherwise, we train $\mathcal{F}_{c_k}^j$ to classify $x_i$ into ``others'' with a probability $\mathcal{P}_k^j$.
Here, $\mathbb{I}(\mathcal{P}_k^j)$ is a random indicator that equals 1 with probability $\mathcal{P}_k^j$. $d_k^j$ is the number of children of node $\widehat{c}_k^j$, which is also the label (or index) of the ``others'' class. $M$ is the number of classifiers, each of which is trained once in an iteration.
$\mathcal{P}_k^j$ reflects the distribution of categories.
Let the total number of instances in the dataset be $\mathcal{N}$ and recall that the ground-truth path of instance $x_i$ is $P_i$.
$\mathcal{P}_k^j$ is calculated as follows:
\begin{equation}
\label{Eq:prob_cal}
\mathcal{P}_k^j = \frac{ \sum_{i=1}^{\mathcal{N} } \mathbb{I}( \widehat{c}_k^j \in P_i ) }{\mathcal{N} } \cdot \beta
\end{equation}
In Eq.~(\ref{Eq:prob_cal}), we calculate the proportion of instances contained in each internal node, and then multiply the proportion by an adjustment factor $\beta$, which is used as the probability to train the ``others'' class. In this way, the probability that the classifier trains the ``others'' class is related to the number of samples it contains, which can effectively alleviate the class imbalance problem.
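The computation of Eq.~(\ref{Eq:prob_cal}) amounts to counting, for every internal node, the fraction of training instances whose ground-truth path passes through it, scaled by $\beta$. A minimal Python sketch (names are illustrative):

```python
from collections import Counter

def sampling_probs(paths, beta=0.5):
    """Compute P_k^j of Eq. (prob_cal): for every internal node, the
    fraction of instances whose ground-truth path contains it, times beta.

    paths: list of ground-truth paths, each a list of node ids
           from root to leaf.
    """
    counts = Counter()
    for path in paths:
        for node in path[:-1]:          # internal nodes only; leaf excluded
            counts[node] += 1
    n = len(paths)
    return {node: beta * c / n for node, c in counts.items()}
```

A node covering many instances thus trains its parent-level ``others'' class more often, which is the intended balance against class imbalance.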
In order to make our correction mechanism work better, we need to consider multiple child nodes of a parent node during model inference.
The prediction result of box feature $x_i$ is
\begin{equation}
\widehat{y}_i = \operatorname{argmax} \Big\{ \prod_{j=1}^{L} p_i^{j,k} \,\Big|\, p_i^{j,k} \in P_k, P_k \in \mathbf{P} \Big\}
\end{equation}
where $\mathbf{P}$ is the set of all paths in $T$, $P_k$ is the $k^{th}$ path in $\mathbf{P}$ and $p_i^{j,k}$ is the prediction score at the $j^{th}$ level of path $P_k$ for $x_i$ after \textit{Softmax}.
However, it is time-consuming to enumerate all paths exhaustively.
Here, we utilize beam search~\cite{Beam-Search} to balance time consumption and performance.
Finally, standard threshold screening and non-maximum suppression~(NMS)
are applied to obtain the final outputs.
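The path scoring with beam search described above can be sketched as follows; the tree encoding and classifier callback are illustrative assumptions, not the paper's interface:

```python
def beam_search_paths(x, children, classify, beam_width=3, L=4):
    """Score root-to-leaf paths by the product of per-level softmax
    scores, keeping only `beam_width` partial paths per level instead
    of enumerating every path in T.

    children: dict mapping an internal node to its child nodes
    classify: function (node, x) -> dict child -> softmax score
              (the "others" class is simply absent from the dict)
    """
    beams = [(1.0, ['root'])]                    # (score, path)
    for _ in range(L):
        candidates = []
        for score, path in beams:
            node = path[-1]
            if node not in children:             # already a leaf
                candidates.append((score, path))
                continue
            for child, p in classify(node, x).items():
                candidates.append((score * p, path + [child]))
        beams = sorted(candidates, reverse=True)[:beam_width]
    return beams[0]                              # best (score, path)
```

In the toy tree below, the root prefers the wrong branch, but the product of scores along the other branch wins, illustrating how a child's low score lets a parent's error be abandoned.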
\begin{table*}[t]
\caption{Performance of novel classes on HiFSOD-Bird dataset.}
\label{tab:results_comp}
\resizebox{1\linewidth}{!}{
\begin{tabular}{c|ccc|ccc|ccc|ccc|ccc}
\toprule
\multirow{2}{*}{Method / Shot} &
\multicolumn{3}{c|}{1-shot} &
\multicolumn{3}{c|}{2-shot} &
\multicolumn{3}{c|}{3-shot} &
\multicolumn{3}{c|}{5-shot} &
\multicolumn{3}{c}{10-shot}
\\
& $AP$ & $AP_{50}$ & $AP_{75}$ & $AP$ & $AP_{50}$ & $AP_{75}$ & $AP$ & $AP_{50}$ & $AP_{75}$ & $AP$ & $AP_{50}$ & $AP_{75}$ & $AP$ & $AP_{50}$ & $AP_{75}$ \\
\midrule
Attention-RPN~\cite{Attention-RPN} &12.81&18.55&14.89& 19.10& 26.31&23.24& 15.28&21.22&18.54&25.20& 34.40& 30.71&28.74&39.25& 35.08 \\
TFA~\cite{TFA} & 5.16 & 7.41 & 5.83 & 24.47&32.70&29.51& 29.84&39.76&35.91&37.16&49.61&44.31& 42.74&56.62&50.30 \\
GFSOD~\cite{GFSOD} & 10.58 &22.88&7.98&13.61&29.94&9.32&14.70&33.95&10.36&15.15&37.20&9.32&20.66&47.95&13.42\\
FSCE~\cite{FSCE} & 11.83&15.78&12.35& 26.89&35.31&32.13&30.47&41.10&36.15& 38.20&51.28&45.94&42.90&56.79&50.88 \\
HiCLPL~(Ours) & \textbf{18.95} &\textbf{26.09} &\textbf{22.31} &
\textbf{28.50} &\textbf{37.83}&\textbf{34.17}&
\textbf{31.77} & \textbf{42.95} &\textbf{37.82} &
\textbf{39.37} & \textbf{52.93} & \textbf{47.31} &
\textbf{43.54} & \textbf{57.74} & \textbf{51.57}
\\
\bottomrule
\end{tabular}
}
\end{table*}
\begin{table}[t]
\caption{Performance for base and novel classes.
}
\label{tab:results_comp_base_novel}
\resizebox{0.8\linewidth}{!}{
\begin{tabular}{cc|cc}
\toprule
Shot & Method & Base AP50 & Novel AP50
\\
\midrule
\multirow{4}{*}{1} & TFA~\cite{TFA} & 72.79 & 7.41 \\
& GFSOD~\cite{GFSOD} & 71.47 & 22.88 \\
& FSCE~\cite{FSCE} & 72.72 & 15.78 \\
& HiCLPL~(Ours) & \textbf{75.72} & \textbf{26.09} \\
\bottomrule
\midrule
\multirow{4}{*}{5} & TFA~\cite{TFA} & 74.54 & 49.61 \\
& GFSOD~\cite{GFSOD} & 71.93 & 37.20 \\
& FSCE~\cite{FSCE} & 74.16 & 51.28 \\
& HiCLPL~(Ours) & \textbf{75.97} & \textbf{52.93} \\
\bottomrule
\end{tabular}
}
\end{table}
\subsection{Training Loss}
The total loss of our model used for training is as follows:
\begin{equation}
\mathcal{L} = \mathcal{L}_{RPN_{Cls}} + \mathcal{L}_{RPN_{Reg}} + \mathcal{L}_{Reg}
+ \lambda_1 \mathcal{L}_{HiCL} + \lambda_2 \mathcal{L}_{prob}
\end{equation}
where the first two terms are the cross entropy loss and regression loss of RPN respectively. $\mathcal{L}_{Reg}$ is the loss for box regression in Fig.~\ref{fig:framework}. $\mathcal{L}_{HiCL}$ is our hierarchical contrastive loss. $\mathcal{L}_{prob}$ is the probabilistic loss. We set $\lambda_1 = 0.5$ and $\lambda_2 = 1$ in our experiments to balance different loss functions.
\section{Experiments}\label{sec:experiments}
\subsection{Implementation Details}
We use Faster-RCNN~\cite{FasterRCNN-NIPS} with ResNet-101~\cite{Resnet} and Feature Pyramid Network~\cite{FPN} as backbone. All models are trained on 4 NVIDIA RTX 3090 in parallel with batch size 16. We employ SGD with momentum 0.9 as the optimizer.
For different shot settings, we use different numbers of training iterations.
The height of taxonomy $L$ is set to 4. The hyper-parameter $\tau$ in Eq.~(\ref{Eq:item_HiCL}) is set to 0.2, $\epsilon$ in Eq.~(\ref{Eq:update_rate}) is set to 0.1 and $\beta$ in Eq.~(\ref{Eq:prob_cal}) is set to 0.5.
Code and dataset are available at \href{https://github.com/zhanglu-cst/HIFSOD}{https://github.com/zhanglu-cst/HIFSOD}.
\subsection{Comparison with Existing Methods}
We select several typical FSOD algorithms as baselines for comparison and make minor modifications to them for fitting our HiFSOD-Bird dataset, including Attention-RPN~\cite{Attention-RPN}, TFA~\cite{TFA}, GFSOD~\cite{GFSOD} and FSCE~\cite{FSCE}.
Results of novel classes on HiFSOD-Bird are shown in Tab.~\ref{tab:results_comp}.
We strictly follow a consistent evaluation protocol~\cite{TFA} of COCO to ensure fair comparison.
Metrics of $AP$, $AP_{50}$ and $AP_{75}$ are used.
As we can see, our method HiCLPL outperforms all the compared existing methods in any shot and on all metrics, demonstrating the effectiveness and superiority of our method in
hierarchical few-shot object detection.
Especially, when the number of shots is extremely low, our method outperforms the baselines by a large margin. Concretely,
for 1-shot, our method surpasses Attention-RPN~\cite{Attention-RPN} by 6.14, 7.54 and 7.42 points in terms of $AP$, $AP_{50}$ and $AP_{75}$ respectively.
As the shot number grows, our method still keeps its advantage.
For 5-shot, our method outperforms FSCE~\cite{FSCE} by 1.17, 1.65 and 1.37 points in $AP$, $AP_{50}$ and $AP_{75}$ respectively.
Following previous works~\cite{TFA,FSCE}, we also report the performance on base classes for all methods. Results are shown in Tab.~\ref{tab:results_comp_base_novel}.
We can see that our method also exceeds all the baselines on the base classes, proving that our method is also effective in detecting the base classes.
Due to the large number of base classes in HiFSOD-Bird, Attention-RPN~\cite{Attention-RPN} runs out of memory, so its base-class results are not reported here.
\subsection{Ablation Study}
We check the effects of various components.
Unless otherwise specified, the ablation studies are carried out in 2-shot setting.
\begin{table}[t]
\caption{Effects of different components. HiHead: Hierarchical Head. HiCL: Hierarchical Contrastive Learning. Prob Loss: Probabilistic Loss. }
\label{tab:ab_all}
\resizebox{0.85\linewidth}{!}{
\begin{tabular}{ccc|cc}
\toprule
HiHead & HiCL & Prob Loss & Base AP50 & Novel AP50 \\
\midrule
& & & 73.52 & 32.70 \\
$\checkmark$ & & & 72.05 & 30.81 \\
$\checkmark$ & & $\checkmark$ & 73.24 & 32.53 \\
$\checkmark$ & $\checkmark$ & & 75.79 & 36.16 \\
$\checkmark$ & $\checkmark$ & $\checkmark$ & \textbf{76.12} & \textbf{37.83} \\
\bottomrule
\end{tabular}
}
\end{table}
\subsubsection{\textbf{Effects of different components}}
We investigate the effects of different modules and summarize the results in Tab.~\ref{tab:ab_all}.
The implementation details are:
(1) The baseline without hierarchical head (the first line) is TFA~\cite{TFA}.
(2) For models without hierarchical contrastive learning, we remove the supervision from RoI feature extractor. The RoI feature extractor is supervised by subsequent classifiers and regressors.
(3) For models without probabilistic loss, we use standard cross-entropy loss to train the classifiers.
From Tab.~\ref{tab:ab_all}, we can see that
after adding the hierarchical head alone, the performance worsens, due to overfitting caused by the large number of parameters in the hierarchical head. However,
after HiCL is introduced,
the performance is greatly improved.
Compared with the baseline with only the hierarchical head, the model with HiCL boosts the performance on base classes from 72.05 to 75.79 and that on novel classes from 30.81 to 36.16, demonstrating the effectiveness of
hierarchical contrastive learning.
The probabilistic loss can also boost the performance to a certain extent, concretely, bringing about one point improvement on both base classes and novel classes.
\subsubsection{\textbf{Ablation study on function $\mathcal{G}(\cdot)$}}
\label{Sec:AB_G}
Function $\mathcal{G}(\cdot)$ is used to adjust the aggregation strength of different levels.
We conduct experiments with different forms of $\mathcal{G}(\cdot)$,
the results are presented in Tab.~\ref{tab:ab_func_g}.
We can see that when we apply the same aggregation strength to each level ($\mathcal{G}(x) = 1$), the performance is not good.
This is because similar strengths may cause multiple subclasses under the same parent node to be too close, making it difficult to distinguish the subclasses.
When we use $\mathcal{G}(x) = x^2$, a large aggregation strength is applied at the lower levels while the strength toward the parent nodes is relatively reduced, which also slightly degrades performance.
\begin{table}[t]
\caption{Ablation study on the function $\mathcal{G}(\cdot)$. }
\label{tab:ab_func_g}
\resizebox{0.8\linewidth}{!}{
\begin{tabular}{c|cc}
\toprule
Adjust Function $\mathcal{G}(\cdot)$ & Base AP50 & Novel AP50 \\
\midrule
$\mathcal{G}(x) = 1$ & 75.47 & 37.28 \\
$\mathcal{G}(x) = x$ & 76.12 & 37.83 \\
$\mathcal{G}(x) = x^2$ & 75.87 & 37.60 \\
\bottomrule
\end{tabular}
}
\end{table}
\subsubsection{\textbf{Ablation study on hyper-parameters in HiCL}}
\label{Sec:AB_hyper_HiCL}
Here, we check the effects of two hyper-parameters in hierarchical contrastive learning, which are the temperature $\tau$ for contrastive learning in Eq.~(\ref{Eq:item_HiCL}) and $\epsilon$ for
controlling the value of momentum coefficient $f(j)$ in Eq.~(\ref{Eq:update_rate}).
Results are given in Tab.~\ref{tab:ab_hyper_hicl}, where we can see that $\tau = 0.2$ performs better than the other settings.
As for $\epsilon$, when $\epsilon = 0.1$, the performance is best.
A large $\epsilon$ means a large memory update amplitude, which may cause instability in training, while a much smaller $\epsilon$ may make the memory update too slow.
\begin{table}[h]
\caption{Ablation on hyper-parameters in HiCL.}
\label{tab:ab_hyper_hicl}
\resizebox{0.75\linewidth}{!}{
\begin{tabular}{c|cc}
\toprule
hyper-parameter & Base AP50 & Novel AP50 \\
\midrule
$\tau = 0.05, \epsilon = 0.1$ & 75.69 & 37.65\\
$\tau = 0.2, \epsilon = 0.1$ & 76.12 & 37.83 \\
$\tau = 0.5, \epsilon = 0.1$ & 76.05 & 37.78\\
\hline
$\tau = 0.2, \epsilon = 0.01$ & 75.97 & 37.50 \\
$\tau = 0.2, \epsilon = 0.5$ & 74.71 & 36.62\\
\bottomrule
\end{tabular}
}
\end{table}
\subsubsection{\textbf{Visualization effect of HiCL}}
Fig.~\ref{fig:tSNE} shows the visual results of (a) cross entropy loss and (b) our hierarchical contrastive learning.
Here, we select the species under ``Pycnonotidae'' family for visualization, which are
``ashy bulbul'' and ``chestnut bulbul'' under the ``hemixos'' genus,
``white-throated bulbul'' and ``puff-throated bulbul'' under the ``alophoixus'' genus,
``himalayan black bulbul'' and ``brown-eared bulbul'' under the ``hypsipetes'' genus,
``crested finchbill'' and ``collared finchbill'' under the ``spizixos'' genus.
In the feature space learnt by the naive cross entropy loss, the species of different genera are closely located, resulting in poor generalization.
With HiCL, the features of species are organized according to the genera they belong to.
Moreover, the feature space forms a more compact structure with more distinctive boundary, which greatly enhances the generalization power of the model.
\begin{figure}[t]
\centering
\includegraphics[width=0.9\linewidth]{figures/tSNE}
\caption{t-SNE visualization of box features. We use arrows to indicate different species under the same genus. (a) The features learnt by CE loss are messy. (b) Our HiCL can effectively constrain the feature space.
}
\label{fig:tSNE}
\end{figure}
\subsubsection{\textbf{Ablation study on probabilistic loss}}
First, we perform ablation study on the hyper-parameter $\beta$ in Eq.~(\ref{Eq:prob_cal}) of the probabilistic loss.
$\beta$ is an adjustment factor that controls the probability of training the virtual ``others'' class.
Results of different $\beta$ values are shown in Tab.~\ref{tab:ab_hyper_beta}.
We can see that the model performs best when $\beta = 0.5$.
When $\beta$ is close to 0 (e.g., $\beta = 0.1$), the probabilistic loss degenerates to a cross-entropy loss, and the performance is degraded accordingly. However,
when $\beta$ is too large (e.g. $\beta = 1.0$), the number of samples used for training the ``others'' class for each classifier will be too large, resulting in a certain degree of class imbalance.
Then, we show some cases of wrong classification corrected by our probabilistic loss in Fig.~\ref{fig:correct}, which verify that the probabilistic loss can recognize and correct the errors of parent nodes. Essentially, this is an embodiment of ensemble learning.
\begin{figure}[t]
\centering
\includegraphics[width=1.0\linewidth]{figures/correct}
\caption{Some cases of misclassification corrected by our probabilistic loss. The cross indicates where the original CE loss classification goes wrong.
}
\label{fig:correct}
\end{figure}
\begin{table}[t]
\caption{Ablation on $\beta$ in the probabilistic loss.}
\label{tab:ab_hyper_beta}
\setlength{\tabcolsep}{1.5mm}{
\begin{tabular}{c|cc}
\toprule
$\beta$ & Base AP50 & Novel AP50 \\
\midrule
0.1 & 75.83 & 36.20 \\
0.5 & 76.12 & 37.83 \\
1 & 76.07 & 37.79 \\
\bottomrule
\end{tabular}
}
\end{table}
\section{Conclusion}
This paper proposes a new problem called hierarchical few-shot object detection~(Hi-FSOD), which is a nontrivial extension to the existing FSOD task.
To solve Hi-FSOD, on the one hand, we establish the first large-scale and high-quality Hi-FSOD benchmark dataset HiFSOD-Bird, which contains 176,350 wild-bird images falling into 1,432 categories that are organized into a 4-level taxonomy.
On the other hand, we develop the first Hi-FSOD method HiCLPL, where hierarchical contrastive learning is proposed to constrain the feature space and a probabilistic loss is designed to correct the errors of parent nodes.
Extensive experiments on the benchmark dataset demonstrate the effectiveness and advantage of the proposed method.
\textbf{Acknowledgement.} Yang Wang and Jihong Guan were partially supported by NSFC (No.~U1936205) and Key R\&D Projects of the Ministry of Science and Technology of China~(No.~2021YFC3300300).
\bibliographystyle{ACM-Reference-Format}
We consider the Cauchy problem of the
wave equation with a time-dependent damping
and a power-type nonlinearity
\begin{align}
\begin{cases}
\Box u + b(t) u_t = |u|^{p},
&\qquad t \in \lbrack 0,T), \quad x \in \mathbb R^n,\\
u(0) = u_0,
\quad u_t(0) = u_1,
&\qquad x \in \mathbb R^n.
\end{cases}
\label{eq:1}
\end{align}
Here
$u= u(t,x)$ is a real-valued unknown function,
$b(t)$
is a smooth positive function,
$\Box$ denotes $\partial_t^2 - \Delta_x$,
and
$u_0=u_0(x), u_1=u_1(x)$ are given initial data.
Damped wave equations are known as models
describing the voltage and the current on an electrical transmission line
with a resistance.
It is also derived as a modified heat conduction equation
from the heat balance law and the so-called
Cattaneo--Vernotte law
instead of the usual Fourier law (see \cite{Ca58, Li97, Ve58}).
The term $b(t)u_t$ is called the damping term,
which prevents the motion of the wave and reduces its energy,
and the coefficient $b(t)$ represents
the strength of the damping.
From a mathematical point of view,
it is an interesting problem to study
how the damping term affects the properties of the solution.
In particular, in this paper
we investigate
the relation between the damping term and
the blow-up behavior of the solution of the Cauchy problem \eqref{eq:1}.
To this end, as a typical case, we assume that
$b(t)$ satisfies
\begin{align}%
\label{b}
b_1 (1+t)^{-\beta} \le b(t) \le b_2 (1+t)^{-\beta},\quad
|b^{\prime}(t)| \le b_3 (1+t)^{-1}b(t)
\end{align}%
for $t \ge 0$
with some
$\beta \in \mathbb{R}$
and some positive constants
$b_1, b_2$ and $b_3$.
Since the nonlinearity $|u|^p$ of \eqref{eq:1}
is a source term,
in general the solution may blow up in finite time
even if the initial data is sufficiently small.
Indeed, for the semilinear heat equation
$v_t - \Delta v = v^p$
with a nonnegative initial data
$v(0) = v_0 \ge 0$,
Fujita \cite{Fu66} found
that there is the critical exponent
$p_F = 1+2/n$,
that is,
if $p>p_F$, then the global solution uniquely exists
for initial data suitably small compared with the Gaussian;
if $1<p< p_F$, then all positive solutions blow up in finite time.
Later on, it was shown that the critical case $p=p_F$
also belongs to the blow-up region
(see Hayakawa \cite{Ha73} and Kobayashi, Sirao and Tanaka \cite{KoSiTa77}).
The blow-up of solutions of semilinear damped wave equations
was firstly studied by
Li and Zhou \cite{LiZh95}.
They treated the so-called classical damping case,
that is, \eqref{eq:1} with $b(t) \equiv 1$,
and proved that when $n=1$ or $n=2$ and
$p\le p_F$,
if the initial data satisfy
$u_0,u_1\in C_0^{\infty}(\mathbb{R}^n)$
and
$\int_{\mathbb{R}^n} (u_0+u_1)(x)\,dx > 0$,
then the local solution blows up within a finite time.
Moreover, they obtained the sharp upper bound of the lifespan
in terms of the size of the initial data.
Namely, denoting
$u_0 = \varepsilon a_0, u_1 = \varepsilon a_1$
with
$\varepsilon > 0$ and
$a_0, a_1\in C_0^{\infty}(\mathbb{R}^n)$ having positive average,
they proved that the lifespan
(maximal existence time of the local solution)
$T_0$
satisfies
\begin{align}
\label{ls}
T_0
\le \begin{cases}
C\varepsilon^{-\frac{1}{\frac{1}{p-1}-\frac{n}{2}}}&(1<p<p_F),\\
\exp\left( C \varepsilon^{-(p-1)} \right)&(p=p_F).
\end{cases}
\end{align}
Furthermore, by their method, we can prove
the estimate
\begin{align*}%
I(t) \ge C \left( C I(0) -t \right)^{-\frac{2}{p-1}}
\quad \mbox{with}\quad
I(t) = \int_{\mathbb{R}^n} u(t,x) dx.
\end{align*}%
This shows the blow-up rate of the average of the solution.
However, their argument depends on the positivity of the
fundamental solution of the damped wave equation,
which is valid only in $n \le 3$,
and it cannot be applied to higher dimensional cases
or variable coefficient cases.
They also proved the global existence of solutions
for small initial data when $p>p_F$.
Therefore, they determined the critical exponent
for the classical damped wave equation for $n \le 2$
for smooth and compactly supported initial data.
Here, we say that
a number $p_c >1$ is the critical exponent for the
semilinear damped wave equation \eqref{eq:1}
if $p>p_c$, then the global solution uniquely exists for
sufficiently small data;
if $p\le p_c$, then the local solution blows up in finite time,
provided that the data has certain positive average determined from $b(t)$.
Later on,
for $n=3$,
Nishihara \cite{Ni03, Ni03Ib}
discovered a decomposition of the linear solution
\[
S_n(t) u_1(x) = J_n(t) u_1(x) + e^{-\frac{t}{2}} W_n(t) u_1(x),
\]
where
$S_n(t), W_n(t)$ are the fundamental solution of the linear damped wave equation
$\Box u + u_t = 0$
and the linear wave equation
$\Box u = 0$, respectively,
and $J_n(t)u_1$ behaves as the solution of the linear heat equation
$v_t - \Delta v = 0$.
Then, he proved
the small data global existence when $p>p_F$
and
the sharp upper bound of the lifespan \eqref{ls}
when $p \le p_F$.
For $n=1,2$ and $n \ge 4$,
the same type decomposition was obtained by
Marcati and Nishihara \cite{MaNi03}, Hosono and Ogawa \cite{HoOg04}
and Narazaki \cite{Na04}
(see also Sakata and the third author \cite{SaWaMZ} for the exact decomposition for
$n \ge 4$).
For higher dimensional cases
$n \ge 4$,
Todorova and Yordanov \cite{ToYo01}
and Zhang \cite{Zh01}
determined the critical exponent as
$p = p_F$.
Concerning the estimate of the lifespan for $n \ge 4$,
for the subcritical case
$p<p_F$,
the second and the third author \cite{IkeWa15}
showed an almost sharp estimate of the lifespan
\begin{align*}
C_1 \varepsilon^{-\frac{1}{\frac{1}{p-1}-\frac{n}{2}}+\delta}
\le T_0
\le C_2 \varepsilon^{-\frac{1}{\frac{1}{p-1}-\frac{n}{2}}}
\end{align*}
for small $\varepsilon > 0$
with arbitrary small $\delta>0$ and
some constants $C_1, C_2 >0$.
For the critical case
$p=p_F$,
the second author and Ogawa \cite{IkeOg16} obtained
\begin{align*}
\exp\left( C_1 \varepsilon^{-(p-1)} \right)
\le T_0
\le \exp \left( C_2 \varepsilon^{-p} \right)
\end{align*}
with some constant $C_1, C_2 >0$
(see Proposition \ref{prop_IkOg} below).
As in the case $n \le 3$,
we expect that the sharp upper estimate of the lifespan
is given by
$T_0 \le \exp ( C \varepsilon^{-(p-1)} )$
for higher dimensional cases $n\ge 4$.
However, this problem is still open.
In regard to the lifespan estimate for the
semilinear wave equation with time-dependent damping
\eqref{eq:1},
much less is known.
When $b(t) = (1+t)^{-\beta}$
with $\beta \in (-1, 1)$
and $(u_0, u_1) \in H^1(\mathbb{R}^n) \times L^2(\mathbb{R}^n)$
is compactly supported,
Nishihara \cite{Ni11TJM}
and Lin, Nishihara and Zhai \cite{LiNiZh12}
proved that the critical exponent
is given by $p=p_F$
(see also D'Abbicco, Lucente and Reissig \cite{DaLuRe13}
for more general damping and initial data).
After that,
for subcritical cases
$p<p_F$,
the second author and the third author \cite{IkeWa15}
obtained an almost sharp estimate of the lifespan
\begin{align*}
C_1 \varepsilon^{-\frac{1}{(\frac{1}{p-1}-\frac{n}{2})(1+\beta)}+\delta}
\le T_0
\le C_2 \varepsilon^{-\frac{1}{(\frac{1}{p-1}-\frac{n}{2})(1+\beta)}}
\end{align*}
with arbitrary small $\delta>0$ and
some constants $C_1, C_2 >0$.
For the case where $b(t) = b_0/(1+t)$ with $b_0 >0$,
the linearized problem of \eqref{eq:1}
\begin{align*}
\Box u + \frac{b_0}{1+t} u_t = 0
\end{align*}
has scaling invariance, and
it is known that the asymptotic behavior
of the solution depends on the value of the constant $b_0 > 0$
(see Wirth \cite{Wi04}).
For the semilinear problem
\begin{align}
\label{mu}
\Box u + \frac{b_0}{1+t} u_t = |u|^p,
\end{align}
D'Abbicco and Lucente \cite{DaLu13} and
D'Abbicco \cite{Da15}
determined the critical exponent
as $p=p_F$ when
$b_0 \ge 5/3 \ (n=1)$, $b_0 \ge 3 \ (n=2)$ and $b_0 \ge n+2 \ (n\ge 3)$.
Moreover, in the special case $b_0 = 2$,
by setting $u = (1+t)w$,
the equation \eqref{mu} is transformed into
the semilinear wave equation
$\Box w = (1+t)^{-(p-1)}|w|^p$.
In view of this,
D'Abbicco, Lucente and Reissig \cite{DaLuRe15}
showed that the critical exponent is given by
$p_2(n) = \max\{ p_F, p_0(n+2) \}$ for $n\le 3$,
where
$p_0(m)$ is the positive root of
$(m-1)p^2 - (m+1)p -2 = 0$.
These results were recently extended to
scale-invariant mass and dissipation
by Nascimento, Palmieri and Reissig \cite{NasPaRe16}.
Wakasa \cite{Wak16} obtained the optimal
estimate of the lifespan of solutions to \eqref{mu} with $b_0 =2$ for $n=1$:
\begin{align*}
T_0 \sim \begin{cases}
C \varepsilon^{-\frac{p-1}{3-p}} & (p<3),\\
\exp \left( C \varepsilon^{-(p-1)} \right) &(p=3).
\end{cases}
\end{align*}
However, in the general case $b_0 \neq 2$,
the optimal lifespan estimate is not known,
while partial results were given in \cite{Wa_thesis}.
When $\beta = -1$,
the third author \cite{Wa17JMAA} recently studied the global existence
and asymptotic behavior for $p>p_F$.
However, there are no results about
blow-up and estimates of the lifespan
for $p \le p_F$.
Finally, for $\beta < -1$,
we expect that
for any $p>1$,
the global solution uniquely exists
for sufficiently small initial data.
This problem will be discussed elsewhere.
When $\beta >1$,
we expect that the critical exponent
is given by $p_0(n)$,
that is, the critical exponent coincides with that of
the semilinear wave equation without damping.
However, this is still an open problem.
In this paper, we give the sharp upper estimate of lifespan
for subcritical nonlinearities
$p<p_F$
and the effective damping
$\beta \in [-1, 1)$.
The case $\beta = -1$ is completely new.
We also prove the sharp lower estimate of the lifespan
when $p=p_F$ and $\beta \in [-1,1)$.
For the case $\beta = 1$,
some upper estimates of the lifespan will be given,
although they seem not to be optimal in general.
To state our main results,
we first introduce the definition of strong solutions:
\begin{Definition}\label{def_sol}
Let
$(u_0, u_1) \in H^1(\mathbb{R}^n) \times L^2(\mathbb{R}^n)$
and let
$T \in (0,\infty]$.
A function
\[
u \in C^2([0,T); H^{-1}(\mathbb R^n))
\cap C^1([0,T); L^2 (\mathbb R^n)) \cap C([0,T); H^1 (\mathbb R^n))
\]
is called a strong solution of the Cauchy problem \eqref{eq:1}
on $[0, T)$
if $u$ satisfies the initial conditions
$u(0) = u_0, u_t(0) = u_1$
and the first equation of \eqref{eq:1} in $C^2([0,T) ; H^{-1}(\mathbb{R}^n))$.
\end{Definition}
When $1<p <\infty\ (n=1,2)$, $1<p \le n/(n-2)\ (n\ge 3)$,
for any $(u_0, u_1)\in H^1(\mathbb{R}^n) \times L^2(\mathbb{R}^n)$,
it is well-known that
there exist $T>0$ and
a unique strong solution $u$ of the Cauchy problem
\eqref{eq:1} on $[0,T)$
(see \cite{Str89} or \cite{NakOn93}).
We will show the existence of the strong solution
for some $T>0$ in the appendix for the reader's convenience.
\begin{Definition}\label{def_ls}
The lifespan $T_0$ of a solution $u$ for the Cauchy problem \eqref{eq:1}
is defined by
\[
T_0 = \sup \{T > 0 \mid u \mbox{ is a strong solution for \eqref{eq:1} on } [0,T) \}.
\]
\end{Definition}
To state our main results,
we introduce assumptions and notations.
We recall $p_F = 1 + 2/n$.
In what follows, we assume that
the coefficient of the damping term $b(t)$ is a smooth function
satisfying \eqref{b} with some $\beta \in [-1,1]$.
We put
\[
B(t) = \int_0^{t} b(\tau)^{-1}\,d\tau.
\]
Then, by the assumption \eqref{b}, $B(t)$ satisfies
\begin{align}%
\label{b_est}
\begin{cases}
B_1 (1+t)^{1+\beta} \le B(t) \le B_2 (1+t)^{1+\beta}&(\beta \in (-1,1]),\\
B_1 \log (2+t) \le B(t) \le B_2 \log(2+t) &(\beta =-1)
\end{cases}
\end{align}%
for $t \ge 1$ with some constants $B_1, B_2 >0$
(the second inequalities of each case are still valid for any $t \ge 0$).
Note that the function $B(t)$ is strictly increasing
due to the positivity of $b(t)$,
and hence, $B(t)$ has the inverse function $B^{-1}(\tau)$ satisfying
\begin{align}%
\label{b_inv}
\begin{cases}
B_3 (1+\tau)^{\frac{1}{1+\beta}}
\le B^{-1}(\tau)
\le B_4 (1+\tau)^{\frac{1}{1+\beta}} &( \beta \in (-1,1]),\\
\exp \left( B_3 (1+\tau) \right)
\le B^{-1}(\tau)
\le \exp \left( B_4 (1+\tau) \right) &(\beta =-1)
\end{cases}
\end{align}%
for $\tau \ge 1$ with some constants $B_3, B_4>0$
(the second inequalities of each case are still valid for any $\tau \ge 0$).
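For the model coefficient $b(t) = (1+t)^{-\beta}$ (the pure power case allowed by \eqref{b}), both $B$ and $B^{-1}$ have closed forms, which makes the growth rates in \eqref{b_est} and \eqref{b_inv} easy to check numerically; the following Python sketch is purely illustrative:

```python
import math

def B(t, beta):
    """B(t) = \\int_0^t b(s)^{-1} ds for b(s) = (1+s)^{-beta},
    so the integrand is (1+s)^{beta}."""
    if beta == -1:
        return math.log(1 + t)
    return ((1 + t) ** (1 + beta) - 1) / (1 + beta)

def B_inv(tau, beta):
    """Inverse of B, solving B(t) = tau in closed form."""
    if beta == -1:
        return math.exp(tau) - 1
    return (1 + (1 + beta) * tau) ** (1 / (1 + beta)) - 1
```

For $\beta \in (-1,1]$ this reproduces the power growth $B(t) \asymp (1+t)^{1+\beta}$ and $B^{-1}(\tau) \asymp (1+\tau)^{1/(1+\beta)}$, and for $\beta = -1$ the logarithmic and exponential behavior.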
We also remark that the changing variable $s = B(t)$
transforms the corresponding parabolic problem
$b(t)v_t - \Delta v = |v|^p$
into
$\tilde{v}_s - \Delta \tilde{v} = |\tilde{v}|^p$
with $v(t,x) = \tilde{v}(B(t),x)$.
Therefore, the function $B(t)$ acts as a scaling function
for the time variable.
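Indeed, the transformation can be verified in one line: since $B'(t) = b(t)^{-1}$, writing $v(t,x) = \tilde{v}(B(t),x)$ and applying the chain rule yields
\begin{align*}
b(t) v_t(t,x) = b(t) B'(t) \tilde{v}_s(B(t),x) = \tilde{v}_s(B(t),x),
\end{align*}
while $\Delta v(t,x) = \Delta \tilde{v}(B(t),x)$, so the equation $b(t)v_t - \Delta v = |v|^p$ becomes $\tilde{v}_s - \Delta \tilde{v} = |\tilde{v}|^p$ in the variable $s = B(t)$.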
Next, we define $\widetilde \psi \in C^\infty([0,\infty);[0,1])$ as
\[
\widetilde \psi (r) =
\begin{cases}
1 &\mbox{if}\quad r \leq 1,\\
\searrow &\mbox{if}\quad 1 < r < 2,\\
0 &\mbox{if}\quad r \geq 2.
\end{cases}
\]
Let $\psi : \mathbb R^n \ni x \mapsto \widetilde \psi (|x|)$
and let $\psi _R(x) = \psi (x/R)$.
For $p>1$, $\beta \in [-1,1]$ and $A>0$,
we define $\mu (p,b,\beta ,A)$ by
\begin{align*}
&\mu (p,b,\beta ,A) \\
&= \min \left\{ 1, \frac{p-1}{2} b(0)A,
\left[
\frac{2(p+1)}{(p-1)^2}
b_1^{-2} \left\{ 2^{\frac{1}{1+\beta}} (1+B_4) \right\}^{\max(0, 2 \beta )}
+ \frac{2 (b_1^{-1}b_3+1)}{p-1} \right]^{-1}
\right\}.
\end{align*}
Here, we interpret the term
$\{ 2^{\frac{1}{1+\beta}} (1+B_4) \}^{\max(0, 2 \beta )} = 1$
if $\beta = -1$.
We also note that $A_1 \le A_2$ implies
$\mu(p,b,\beta,A_1) \le \mu(p,b,\beta,A_2)$.
The constant
$\mu(p,b,\beta,A)$
appears as the coefficient of the initial data
in the estimate of the lifespan
and the lower estimate of the average of the solution
(see Proposition \ref{Theorem:1.3}).
For $n \in \mathbb{N}$, $p>1$, $\ell \in \mathbb{N}$ satisfying $\ell > 2p'$,
and $\phi \in C_0^{\infty}(\mathbb{R}^n)$ with $\phi \ge 0$,
we also define $A(n,p,\ell,\phi )$ as
\[
A(n,p,\ell,\phi )
= 2^{p'-1} p'^{-\frac{1}{p}} p^{\frac{1-p'}{p}}
\| \Phi^{p'} \phi ^{\ell-2p'} \|_{L^{1}(\mathbb R^n)}^{\frac{1}{p}}
\| \phi ^{\ell} \|_{L^1(\mathbb R^n)}^{\frac{1}{p'}},
\]
where
$p' = p/(p-1)$ and
\[
\Phi
= \phi ^{2-\ell} \Delta(\phi^\ell)
= \ell(\ell-1) \nabla \phi \cdot \nabla \phi + \ell \phi \Delta \phi .
\]
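Indeed, the second expression for $\Phi$ follows from the direct computation
\[
\Delta (\phi^\ell)
= \nabla \cdot \left( \ell \phi^{\ell-1} \nabla \phi \right)
= \ell (\ell-1) \phi^{\ell-2} \nabla \phi \cdot \nabla \phi
+ \ell \phi^{\ell-1} \Delta \phi
\]
after multiplying both sides by $\phi^{2-\ell}$.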
We will derive an ordinary differential inequality
for the weighted average of the solution shifted by the constant $A(n,p,\ell,\phi)$.
Now we are in a position to give our main results.
The first one is the upper estimate of the lifespan of solutions to \eqref{eq:1}
in a general setting.
At this point we do not need any condition on $p$ such as
$p \le p_F$, but we impose a certain condition
on the test function $\phi$.
\begin{Proposition}
\label{Theorem:1.3}
Let $\beta \in [-1,1]$, $p \in (1, \infty)$
and $(u_0,u_1) \in H^1(\mathbb R^n) \times L^2(\mathbb R^n)$,
and let $u$ be the associated strong solution on
$[0,T_0)$ with the lifespan $T_0$.
Assume that there exists $\phi \in \mathcal S (\mathbb R^n;[0,\infty))$
such that
\begin{align}
0
< I_\phi (0) - A(n,p,\ell,\phi )
< 2^{\frac{1}{p-1}} \|\phi ^\ell\|_{L^1(\mathbb R^n)},
\quad
I_\phi '(0)
> 0,
\label{eq:2}
\end{align}
where
\[
I_\phi (t)
= \int_{\mathbb R^n} u (t,x) \phi ^\ell(x) dx.
\]
Let
\begin{align*}
J_\phi (t)
&= I_\phi (t) - A(n,p,\ell,\phi ),\\
\widetilde J_\phi (0)
&= 2^{-\frac{1}{p-1}} \|\phi ^\ell\|_{L^1(\mathbb R^n)}^{-1} J_\phi(0),\\
A_1
&= \frac{J_\phi '(0)}{J_\phi (0)}
= \frac{I_\phi '(0)}{I_\phi (0) - A(n,p,\ell,\phi )}.
\end{align*}
Then, we have
\begin{align}
J_\phi (t)
\geq
J_\phi (0)
\bigg( 1 - \mu (p,b,\beta ,A_1)
\widetilde J_\phi (0)^{p-1}
B(t)
\bigg)^{- \frac{2}{p-1}}
\label{eq:rate}
\end{align}
for $t \in [0,T_0)$.
Moreover,
the lifespan $T_0$ of the solution $u$ is estimated as
\[
T_0 \leq
B^{-1} \left( \mu (p,b,\beta, A_1)^{-1} \widetilde J_\phi(0)^{1-p} \right).
\]
\end{Proposition}
Proposition \ref{Theorem:1.3} implies that
\eqref{eq:2} is a sufficient condition for the blow-up of the solution.
The condition \eqref{eq:2} is related to the scaling of the equation.
Indeed, letting $p \in (1,p_F)$ and taking the test function
$\phi = \psi_{R(\varepsilon)}$
with an appropriate scaling parameter $R(\varepsilon)$,
we can ensure the condition \eqref{eq:2} and
obtain the sharp estimate of the lifespan of solutions to \eqref{eq:1}.
\begin{Corollary}
\label{Theorem:1.4}
Let $\beta \in [-1,1]$, $p \in (1,p_F)$
and $(u_0, u_1) = \varepsilon (a_0, a_1)$
with $\varepsilon > 0$.
We assume that
$(a_0, a_1)
\in (H^1(\mathbb R^n)\cap L^1(\mathbb{R}^n))
\times (L^2(\mathbb R^n) \cap L^1(\mathbb{R}^n))$
satisfy
\[
I_0 = \int_{\mathbb{R}^n} a_0(x)\, dx > 0,
\quad
I_1 = \int_{\mathbb{R}^n} a_1(x)\, dx > 0.
\]
Let
\begin{align}
\label{r}
R(\varepsilon)
= A(n,p,\ell,\psi )^{\frac{p-1}{n(p_F-p)}}
\bigg(\frac{\varepsilon}{4} I_0 \bigg)^{ - \frac{p-1}{n(p_F-p)}}
\end{align}
and let $\varepsilon_0 >0$ satisfy
\begin{align}
&\int_{\mathbb R^n} \psi _{R(\varepsilon_0)}^{\ell}(x) a_0 (x)\, dx
\ge \frac{1}{2} I_0,
\label{eq:3}\\
&\int_{\mathbb R^n} \psi _{R(\varepsilon_0)}^\ell(x) a_1(x) dx
\geq \frac{1}{2} I_1,
\label{eq:4}\\
&\varepsilon_0 I_0
\le 2^{\frac{1}{p-1}} \| \psi^{\ell} \|_{L^1(\mathbb{R}^n)} R(\varepsilon_0)^n.
\label{eq:41}
\end{align}
Then, for any $\varepsilon \in (0,\varepsilon_0]$,
the associated strong solution $u$ of \eqref{eq:1} satisfies
\begin{align}%
\label{bl-rt}
\int_{\mathbb{R}^n} u(t,x) \psi_{R(\varepsilon)}^{\ell}(x) dx
\ge \frac{\varepsilon}{4}I_0
\left( 1- \mu_0
\varepsilon^{\frac{1}{\frac{1}{p-1}-\frac{n}{2}}} B(t)
\right)^{-\frac{2}{p-1}}
\end{align}%
with some constant $\mu_0 = \mu_0(n,p,b,\beta,\ell,\psi,I_0,I_1) > 0$
and the lifespan $T_0 = T_0(\varepsilon)$
is estimated as
\begin{align}
\label{lf_upp2}
T_0 \le
B^{-1} \left( \mu_0^{-1} \varepsilon^{-\frac{1}{\frac{1}{p-1}-\frac{n}{2}}} \right).
\end{align}
\end{Corollary}
\begin{Remark}
The constant $\mu_0$ is given by \eqref{mu0}.
Also, under the assumptions in Corollary \ref{Theorem:1.4},
combining \eqref{lf_upp2} and \eqref{b_inv}, we see that
\begin{align*}%
T_0 \le
\begin{cases}
B_4 \left(
1+ \mu_0^{-1} \varepsilon^{-\frac{1}{\frac{1}{p-1}-\frac{n}{2}}}
\right)^{\frac{1}{1+\beta}}
&(\beta \in (-1,1]),\\
\exp \left( B_4
\left( 1+ \mu_0^{-1} \varepsilon^{-\frac{1}{\frac{1}{p-1}-\frac{n}{2}}} \right)
\right)
&(\beta = -1).
\end{cases}
\end{align*}%
\end{Remark}
Proposition \ref{Theorem:1.3} and Corollary \ref{Theorem:1.4}
show the blow-up behavior for the solutions of \eqref{eq:1}
and how it depends on the parameter $\beta$.
They are summarized in the following way:
\begin{itemize}
\item
The blow-up rate of the solution near the blow-up time
is similar to
that of the nonlinear wave equation,
though the time variable is scaled by $B(t)$.
\item
On the other hand,
the estimate of the lifespan of the solution
is similar to
that of the nonlinear heat equation.
\end{itemize}
Concerning the upper estimate of the lifespan
$T_0$
in the critical case $p=p_F$,
we refer the reader to a recent result of
the second author and Ogawa \cite[Theorem 2.5]{IkeOg16}:
\begin{Proposition}[\cite{IkeOg16}]\label{prop_IkOg}
Let $b(t) = (1+t)^{-\beta}$, $\beta \in (-1,1)$, $p=p_F$ and
let $(u_0, u_1) = \varepsilon (a_0, a_1)$
with $\varepsilon > 0$.
We assume that
$(a_0, a_1) \in H^1(\mathbb{R}^n) \times L^2(\mathbb{R}^n)$
satisfies
\[
B_0 a_0 + a_1 \in L^1(\mathbb{R}^n)
\quad \mbox{and} \quad
\int_{\mathbb{R}^n} ( B_0 a_0 + a_1 )(x)\, dx > 0,
\]
where
\[
B_0 =
\left( \int_0^{\infty} \exp \left( -\int_0^t (1+s)^{-\beta}\,ds \right)\,dt \right)^{-1}.
\]
Then, there exists a constant
$C = C(n, \beta, a_0, a_1) >0$
such that the lifespan $T_0 = T_0(\varepsilon)$
of the associated strong solution is estimated as
\[
T_0 \le \exp \left( C \varepsilon^{-p} \right)
\]
for any $\varepsilon \in (0,1]$.
\end{Proposition}
Next, we discuss the optimality of the estimate \eqref{lf_upp2}
with respect to the power of $\varepsilon$,
that is, the estimate of the lifespan from below.
Following the third author's recent work \cite{Wa17JMAA},
we obtain the following lower estimate of the lifespan.
\begin{Proposition}\label{prop_low}
Let
$\beta \in [-1, 1)$, $p \in (1, p_F)$ and let
$(u_0, u_1) = \varepsilon (a_0, a_1)$
with $\varepsilon > 0$.
We assume that
$(a_0, a_1) \in H^{1,m}(\mathbb{R}^n) \times H^{0,m}(\mathbb{R}^n)$
with
$m = 1\ (n=1)$, $m > n/2 +1 \ (n\ge 2)$.
Then, there exist constants
$\varepsilon_1 = \varepsilon_1(n,\beta, p, m, \| a_0 \|_{H^{1,m}}, \| a_1 \|_{H^{0,m}} )> 0$
and
$C_{\ast} = C_{\ast} (n,\beta, p, m, \| a_0 \|_{H^{1,m}}, \| a_1 \|_{H^{0,m}} ) > 0$
such that
for any $\varepsilon \in (0,\varepsilon_1]$,
the lifespan $T_0 = T_0(\varepsilon)$
of the associated strong solution
is estimated by
\begin{align*}%
T_0 \ge B^{-1} \left( C_{\ast} \varepsilon^{-\frac{1}{\frac{1}{p-1}-\frac{n}{2}} } \right).
\end{align*}%
\end{Proposition}
\begin{Remark}
Under the assumptions in Proposition \ref{prop_low},
combining the above estimate and \eqref{b_inv}, we see that
\[
T_0 \ge
\begin{cases}
\displaystyle C_{\ast} \varepsilon^{-\frac{1}{(\frac{1}{p-1}-\frac{n}{2})(1+\beta)} }
& (\beta \in (-1,1)),\\
\displaystyle \exp \left( C_{\ast} \varepsilon^{-\frac{1}{\frac{1}{p-1}-\frac{n}{2}} } \right)
& (\beta = -1).
\end{cases}
\]
\end{Remark}
In the case $\beta \in (-1,1)$,
the power of $\varepsilon$ coincides with that in Corollary \ref{Theorem:1.4}.
Namely, we have the sharp estimate of the lifespan of the solution.
In the case $\beta = -1$, we have an exponential lower bound,
which corresponds to the so-called almost global existence of the solution.
This is quite reasonable because in this case the damping
is very strong and helps the solution exist for a longer time.
On the other hand, in the critical case $p=p_F$, we have the following:
\begin{Proposition}\label{prop_lowcr}
Let $\beta \in [-1, 1)$, $p=p_F$
and let
$(u_0, u_1) = \varepsilon (a_0, a_1)$
with $\varepsilon >0$.
We assume that
$(a_0, a_1) \in H^{1,m}(\mathbb{R}^n) \times H^{0,m}(\mathbb{R}^n)$
with
$m = 1\ (n=1)$, $m>n/2+1\ (n \ge 2)$.
Then, there exist constants
$\varepsilon_2 = \varepsilon_2(n,\beta, p, m, \| a_0 \|_{H^{1,m}}, \| a_1 \|_{H^{0,m}} )> 0$
and
$C_{\ast} = C_{\ast} (n,\beta, p, m, \| a_0 \|_{H^{1,m}}, \| a_1 \|_{H^{0,m}} ) > 0$
such that
for any $\varepsilon \in (0,\varepsilon_2]$,
the lifespan $T_0 = T_0(\varepsilon)$
of the associated strong solution is estimated by
\begin{align*}%
T_0 \ge B^{-1}\left( \exp \left( C_{\ast} \varepsilon^{-(p-1)} \right) \right).
\end{align*}%
\end{Proposition}
\begin{Remark}
Under the assumptions in Proposition \ref{prop_lowcr},
combining the above estimate and \eqref{b_inv}, we see that
\[
T_0 \ge
\begin{cases}
\displaystyle
\exp \left( C_{\ast}\varepsilon^{-(p-1)} \right)
& ( \beta \in (-1,1)),\\
\displaystyle
\exp \left(
\exp \left( C_{\ast} \varepsilon^{-(p-1)} \right)
\right)
& (\beta = -1).
\end{cases}
\]
\end{Remark}
Proposition \ref{prop_lowcr} shows that
in the critical case, we have
exponential and double-exponential lower bounds
in the cases
$\beta \in (-1,1)$ and $\beta = -1$, respectively.
In comparison with Proposition \ref{prop_low},
this is also quite reasonable.
We also remark that Propositions \ref{prop_low} and \ref{prop_lowcr}
are still true if we replace the nonlinearity
$|u|^p$
by
$\pm |u|^{p-1}u$ or $-|u|^p$.
Our results with the previous ones
are summarized in Table 1,
where we consider the damping
$b_0 (1+t)^{-\beta}$
with $b_0 > 0$ and $\beta \in [-1,1]$.
\begin{table}[!h]
\begingroup
\renewcommand{\arraystretch}{1.8}
\begin{tabular}{|c|c|c|} \hline
$\beta \backslash p$
& $\displaystyle 1< p < p_F$
& $\displaystyle p= p_F$ \\[4pt]
\hline
$\beta = -1$
&$T_0 \sim
\exp \left( C \varepsilon^{-\frac{1}{\frac{1}{p-1}-\frac{n}{2}} } \right)$
&$\exp \left(
\exp \left( C \varepsilon^{-(p-1)} \right)
\right)
\le T_0$\\
\hline
$-1<\beta <1$
&$T_0 \sim C \varepsilon^{-\frac{1}{(\frac{1}{p-1}-\frac{n}{2})(1+\beta)} }$
&$\exp \left( C \varepsilon^{-(p-1)} \right)
\le T_0 \le \exp \left( C \varepsilon^{-p} \right)$ \ \\
\ &\ &(upper bound is by \cite{IkeOg16})\\
\hline
$\beta = 1$
&$T_0 \le C \varepsilon^{-\frac{1}{2(\frac{1}{p-1}-\frac{n}{2})} }$,
&open (in general), \\
\
&$T_0 \sim \varepsilon^{-\frac{p-1}{3-p}}$
&$T_0 \sim \exp \left( C \varepsilon^{-(p-1)} \right)$\\
\ &for $n=1, b_0=2$ (\cite{Wak16})
&for $n=1, b_0=2$ (\cite{Wak16})\\
\hline
\end{tabular}
\caption{Estimates of lifespan}
\endgroup
\end{table}
When $1<p<p_F$ and $\beta =1$,
we have an upper bound of $T_0$,
although it does not seem to be optimal in general.
In this case it is known that
the critical exponent may change
(see D'Abbicco, Lucente and Reissig \cite{DaLuRe15}
and Wakasa \cite{Wak16}).
We now describe the strategy of the proof.
Our method is a hybrid version of
the method by Li and Zhou \cite{LiZh95}
and by Zhang \cite{Zh01}.
The method by Li and Zhou \cite{LiZh95} is based on
an ordinary differential inequality.
However, in order to derive an ordinary differential inequality
from the damped wave equation,
their argument requires the positivity of the fundamental solution,
which fails in higher dimensional cases.
The method by Zhang \cite{Zh01} is the so-called
test function method.
He considered an average of the nonlinearity of the solution,
$\int_0^{\infty}\int_{\mathbb{R}^n} |u(t,x)|^p \psi_R(t,x)\, dxdt$,
with a suitable family of cut-off functions
$\{ \psi_R \}_{R>0}$,
and derived a contradiction
by integration by parts and a scaling argument.
However, this method is based on a contradiction argument,
and the mechanism of the blow-up remains unclear.
Moreover, by this approach, we cannot obtain
blow-up rates of solutions.
To overcome these difficulties,
we employ the method developed by the first author and Ozawa \cite{FuOz1, FuOz2},
in which the lifespan of the solution of
a nonlinear Schr\"{o}dinger equation is studied.
They considered a localized average of the solution,
$\int_{\mathbb{R}^n} u(t,x) \phi(x) dx$,
with a cut-off function $\phi (x)$,
and derived an ordinary differential inequality for it.
Then, they estimated the lifespan of
solutions of the ordinary differential inequality.
In this paper, we adapt their method to the damped wave equations.
First, we establish the blow-up and estimate of the lifespan
of the solution to the ordinary differential inequality
\begin{align}
\label{eq:6}
\begin{cases}
f''(t) + b(t) f'(t) \geq \gamma f(t)^p,\\
f(0) \geq \varepsilon _0,\\
f'(0) \geq A_0 \varepsilon _0,
\end{cases}
\end{align}
with $A_0, \gamma, \varepsilon_0 >0$.
We remark that Li and Zhou \cite{LiZh95} also obtained the
finite time blow-up for \eqref{eq:6} and
the lifespan estimate.
However, as far as the authors know,
explicit subsolutions for \eqref{eq:6}
have not been known, even though
they are well known
for the first order ordinary differential inequality
$f' \geq f^p$.
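For the reader's convenience, we recall the well-known first order case: if $f'(t) \geq f(t)^p$ with $f(0) = \varepsilon_0 > 0$, then the explicit solution of the corresponding equality,
\[
g(t) = \varepsilon_0 \left( 1 - (p-1) \varepsilon_0^{p-1} t \right)^{-\frac{1}{p-1}},
\]
is a subsolution, and a comparison argument yields $f(t) \ge g(t)$ and hence the lifespan bound $T_0 \le (p-1)^{-1} \varepsilon_0^{1-p}$.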
We construct explicit subsolutions
by using a comparison lemma given by Li and Zhou \cite{LiZh95},
and the blow-up rate \eqref{eq:rate} follows from these explicit subsolutions.
For details, see Proposition \ref{Theorem:2.3}.
Next, to prove Proposition \ref{Theorem:1.3},
we follow \cite{FuOz1, FuOz2} and consider the localized average
$I_{\phi}(t) = \int_{\mathbb{R}^n} u(t,x) \phi(x) dx$,
and derive the ordinary differential inequality \eqref{eq:6}
from the equation \eqref{eq:1}.
Finally, for Corollary \ref{Theorem:1.4},
we choose a special family of cut-off functions
and apply a scaling argument
to reduce its proof to Proposition \ref{Theorem:1.3}.
For Propositions \ref{prop_low} and \ref{prop_lowcr},
we employ the method of scaling variables,
which was originally introduced by
Gallay and Raugel \cite{GaRa98}.
Coulaud \cite{Co14} refined it and applied it
to the second grade fluid equations in three space dimensions.
Recently, the third author \cite{Wa17JMAA} applied the method to
obtain the asymptotic profile for the
semilinear wave equation with time-dependent damping.
This paper is organized as follows.
In section 2,
we study the blow-up of solutions to
the ordinary differential inequality \eqref{eq:6}.
In section 3,
applying the theory of ordinary differential inequalities
prepared in Section 2, we give a proof of Proposition \ref{Theorem:1.3}
and Corollary \ref{Theorem:1.4}.
Section 4 is devoted to the proof of Propositions \ref{prop_low} and \ref{prop_lowcr}.
Finally, in the appendix, we give a proof of local existence
of solutions in the energy space.
We conclude this section with the notation used throughout this paper.
We denote by $C$ a generic constant, which may change
from line to line.
In particular, we sometimes use the symbol
$C(\ast,\ldots,\ast)$
for constants depending on the quantities appearing in parentheses.
We next give the notation for function spaces.
Let $L^p(\mathbb{R}^n)$ be the usual Lebesgue space
equipped with the norm
\begin{align*}%
\| f \|_{L^p} = \left( \int_{\mathbb{R}^n} |f(x)|^p dx \right)^{1/p}
\ (1\le p < \infty),\quad
\| f \|_{L^{\infty}} = {\rm ess\,sup\,}_{x\in \mathbb{R}^n} |f(x)|.
\end{align*}%
For $s \in \mathbb{Z}_{\ge 0}, m \ge 0$,
we define the weighted Sobolev space $H^{s,m}(\mathbb{R}^n)$ by
\begin{align*}
H^{s,m}(\mathbb{R}^n)
&= \{ f \in L^2(\mathbb{R}^n) ; \| f \|_{H^{s,m}} < \infty \}, \\
\| f \|_{H^{s,m}}
&= \left( \sum_{|\alpha| \le s }
\int_{\mathbb{R}^n} (1+|x|^2)^m |\partial_x^{\alpha} f (x) |^2\,dx \right)^{1/2}.
\end{align*}
In particular, when $m=0$, we simply write $H^{s}(\mathbb{R}^n)$ for $H^{s,0}(\mathbb{R}^n)$.
For an interval $I$
and a Banach space $X$,
we define $C^r(I; X)$ as the space of $r$-times
continuously differentiable mappings from
$I$ to $X$ with respect to the topology of $X$.
\section{Estimates of the lifespan of solutions to ordinary differential inequalities}
In this section,
we study the estimates of lifespan of solutions to
the ordinary differential inequality \eqref{eq:6}.
To this end, the following comparison theorem plays a critical role:
\begin{Lemma}{\cite[Lemma 3.1]{LiZh95}}
\label{Theorem:2.1}
Let $T>0$.
We assume that functions
$k, h \in C^2([0,T))$
satisfy
\[
\begin{cases}
a(t) k''(t) + k'(t) \geq c(t) k(t)^p,\\
a(t) h''(t) + h'(t) \leq c(t) h(t)^p
\end{cases}
\]
for $t \in [0,T)$,
where $p \geq 1$ and
$a(t), c(t)$
are nonnegative continuous functions on $[0,T)$.
We further assume that
\[
\begin{cases}
k(0) > h(0),\\
k'(0) \geq h'(0).
\end{cases}
\]
Then, we have $k'(t) > h'(t)$ for any $t \in [0,T)$.
\end{Lemma}
Thanks to Lemma \ref{Theorem:2.1},
we can analyze the behavior of solutions of \eqref{eq:6}
by comparing them with subsolutions.
In the next lemma, we introduce our subsolution.
\begin{Lemma}
\label{Theorem:2.2}
Let $A_0 > 0$, $\beta \in [-1,1]$, $p>1$ and
let $\varepsilon _0 \in (0, 1]$.
We put
\begin{align}
\label{t1}
T_1 =
B^{-1} \left( \mu (p,b,\beta, A_0)^{-1} \varepsilon_0^{1-p} \right).
\end{align}
Moreover, for $t \in \lbrack 0,T_1)$, we define
\begin{align*}
g(t)
= \varepsilon_0
\bigg( 1 - \mu (p,b,\beta ,A_0) \varepsilon _0^{p-1}
B(t) \bigg)^{- \frac{2}{p-1}}.
\end{align*}
Then $g$ satisfies
\[
\begin{cases}
g''(t) + b(t) g'(t) \leq g(t)^p,
&\quad \mbox{for}\quad t \in \lbrack 0,T_1),\\
g(0) = \varepsilon_0,\\
g'(0) \leq A_0 \varepsilon_0.
\end{cases}
\]
\end{Lemma}
\begin{proof}
For simplicity, we write $\mu$ for $\mu(p,b,\beta,A_0)$.
Since $\mu \leq \frac{p-1}{2} b(0) A_0$,
by a direct calculation, we have
\begin{align*}
g'(t)
&= \frac{2 \mu }{p-1} \varepsilon _0^p
\bigg( 1 - \mu \varepsilon _0^{p-1} B(t) \bigg)^{- \frac{p+1}{p-1}}
b(t)^{-1},\\
g'(0)
&= \frac{2 \mu }{p-1} b(0)^{-1}\varepsilon_0^p
\leq A_0 \varepsilon_0,\\
g''(t)
&= -\frac{2 \mu }{p-1} \varepsilon _0^p
\bigg( 1 - \mu \varepsilon _0^{p-1}
B(t) \bigg)^{- \frac{p+1}{p-1}}
b'(t) b(t)^{-2}\\
&\quad + \frac{2 (p+1)}{(p-1)^2} \mu^2 \varepsilon_0^{2p-1}
\bigg( 1 - \mu \varepsilon _0^{p-1} B(t)
\bigg)^{- \frac{2p}{p-1}} b(t)^{-2}.
\end{align*}
Then, for $t < T_1$, we obtain
\begin{align*}
&g''(t) + b(t) g'(t)\\
&\leq g(t)^p
\bigg(
\frac{2(p+1)}{(p-1)^2} \varepsilon_0^{p-1} b(t)^{-2} \mu^2
+ \frac{2 }{p-1} b'(t)b(t)^{-2}\mu + \frac{2}{p-1} \mu \bigg)\\
&\leq g(t)^p
\bigg(
\frac{2(p+1)}{(p-1)^2}
2^{\frac{2\beta}{1+\beta}} b_1^{-2} (1+B_4)^{\max(0,2 \beta )}
+ \frac{2 ( b_1^{-1}b_3 +1)}{p-1} \bigg) \mu\\
&\leq g(t)^p.
\end{align*}
Here, for the second inequality, when
$\beta \in (0,1]$,
we have used that
\begin{align*}
\varepsilon_0^{p-1} b(t)^{-2} \mu
&\le \varepsilon_0^{p-1}\mu b_1^{-2} (1 + T_1)^{2\beta} \\
&\le \varepsilon _0^{p-1} \mu b_1^{-2}
\left[
1+ B^{-1} \left( \mu^{-1} \varepsilon_0^{1-p} \right) \right]^{2\beta} \\
&\le \varepsilon _0^{p-1} \mu b_1^{-2}
\left[ 1 +
B_4 \left( 1+\mu^{-1} \varepsilon_0^{1-p} \right)^{\frac{1}{1+\beta}}
\right]^{2\beta} \\
&\le \varepsilon _0^{p-1} \mu b_1^{-2} (1+B_4)^{2\beta}
\left( 1+ \mu^{-1} \varepsilon_0^{1-p} \right)^{\frac{2\beta}{1+\beta}} \\
&\le 2^{\frac{2\beta}{1+\beta}}
\left( \varepsilon _0^{p-1} \mu \right)^{\frac{1-\beta}{1+\beta}} b_1^{-2}
(1+B_4)^{2\beta} \\
&\leq 2^{\frac{2\beta}{1+\beta}} b_1^{-2}(1+B_4)^{2\beta},
\end{align*}
and for the third inequality we have used the definition of
$\mu(p,b,\beta,A_0)$.
\end{proof}
\begin{Proposition}
\label{Theorem:2.3}
Let $T_0>0$,
$A_0 > 0$,
$\beta \in [-1,1]$,
$p>1$,
$\gamma > 0$,
and let
$\varepsilon _0 \in (0,\gamma^{-\frac{1}{p-1}}]$.
Assume that $f \in C^2([0,T_0))$ satisfies
$f(t) > 0$ for $t\in [0,T_0)$ and
\[
\begin{cases}
f''(t) + b(t) f'(t) \geq \gamma f(t)^p \quad \mbox{for}\quad t \in [0,T_0),\\
f(0) > \varepsilon _0,\\
f'(0) \geq A_0 \varepsilon _0.
\end{cases}
\]
Then,
with $\delta_0 = \gamma^{\frac{1}{p-1}} \varepsilon _0$,
we have
\begin{align*}
f (t)
\geq
\varepsilon _0 \bigg( 1
- \mu (p,b,\beta,A_0) \delta_0^{p-1}
B(t) \bigg)^{- \frac{2}{p-1}}
\end{align*}
for $t \in [0,T_0)$,
and $T_0$ is estimated as
\begin{align}
\label{t0}
T_0 \le
B^{-1} \left( \mu (p,b,\beta, A_0)^{-1} \delta_0^{1-p} \right) .
\end{align}
\end{Proposition}
\begin{proof}
Let $\widetilde f = \gamma^{\frac{1}{p-1}} f$
and $\delta_0 = \gamma^{\frac{1}{p-1}} \varepsilon _0$.
Then, $\widetilde f$ satisfies
\[
\begin{cases}
\widetilde f''(t) + b(t) \widetilde f'(t) \geq \widetilde f(t)^p
\quad \mbox{for}\quad t\in [0,T_0),\\
\widetilde f(0) > \delta_0,\\
\widetilde f'(0) \geq A_0 \delta_0.
\end{cases}
\]
Let $T_1$ be defined in \eqref{t1}
with $\varepsilon_0 = \delta_0$,
that is,
$T_1$
is the right-hand side of \eqref{t0}.
For $\rho \in [0,1)$,
we put $\delta_\rho = (1-\rho ) \delta_0$
and define
\[
\widetilde g_\rho (t)
= \delta_\rho
\bigg( 1 - \mu (p,b,\beta,A_0) \delta_\rho ^{p-1} B(t)
\bigg)^{- \frac{2}{p-1}}
\]
for $t \in \lbrack 0,T_1)$.
Noting
$\delta_0 \in (0, \mu(p,b,\beta,A_0)^{-\frac{1}{p-1}}]$
and applying Lemma \ref{Theorem:2.2},
we see that
$\widetilde g_\rho$
satisfies
\[
\begin{cases}
\widetilde g_\rho ''(t) + b(t) \widetilde g'_\rho (t)
\leq \widetilde g_\rho (t)^p
&\ \mbox{for}\quad t \in \lbrack 0,T_1),\\
\widetilde g_\rho (0) = \delta_\rho,\\
\widetilde g_\rho '(0) \leq A_0 \delta_\rho.
\end{cases}
\]
We put $T_2 = \min (T_0, T_1)$.
Then, by Lemma \ref{Theorem:2.1},
for any $\rho \in (0,1)$,
we have $\widetilde f(t) \geq \widetilde g_\rho (t)$ for
$t \in \lbrack 0, T_2 )$.
Noting the continuity of
$\widetilde g_{\rho}$
with respect to $\rho \in [0,1)$
and letting $\rho \to 0$, we see that
$\widetilde f(t) \geq \widetilde g_0(t)$ holds for any $t \in \lbrack 0,T_2)$.
Next, we show that
$T_2 = T_0$.
Indeed, if $T_0 > T_1$, namely $T_2 = T_1$, then
$\widetilde f(t)$ is defined as a $C^2$ function on the interval $[0,T_1]$.
However, by the definition of
$\widetilde g_{0}$,
we immediately obtain
$\lim_{t \to T_1-0} \widetilde g_{0}(t) = \infty$.
This and the fact that
$\widetilde f(t) \geq \widetilde g_0(t)$ for $t \in \lbrack 0,T_1)$
imply $\lim_{t\to T_1-0} \widetilde f(t) = \infty$,
which contradicts $\widetilde f \in C^2 ([0,T_1])$.
Consequently, we have $T_2 = T_0$, namely $T_0 \le T_1$,
which completes the proof.
\end{proof}
\section{Proof of Proposition \ref{Theorem:1.3} and Corollary \ref{Theorem:1.4}}
\begin{proof}[Proof of Proposition \ref{Theorem:1.3}]
Let $u$ be a strong solution of \eqref{eq:1} on $[0,T_0)$
with the lifespan $T_0$.
Let $\phi \in \mathcal{S}(\mathbb{R}^n; [0,\infty))$
satisfy the inequality \eqref{eq:2}.
Recall that
$I_\phi (t) = \int_{\mathbb R^n} u(t,x) \phi^\ell (x) dx$
and
$\Phi(x) = \ell(\ell-1) \nabla \phi (x) \cdot \nabla \phi (x) + \ell \phi (x) \Delta \phi (x)$.
Then,
by the continuity of $I_{\phi}(t)$ with respect to $t$,
there exists $t_0> 0$ such that
$I_\phi (t) - A(n,p,\ell,\phi ) > 0$ holds for $t \in [0,t_0)$.
By a direct calculation, we have for $t \in [0,t_0)$,
\begin{align*}
\frac{d^2}{dt^2} I_\phi (t) + b(t) \frac{d}{dt} I_\phi (t)
&= \int_{\mathbb R^n}
(\partial_t^2 + b(t) \partial_t) u(t,x) \phi ^\ell(x) dx\\
&= \int_{\mathbb R^n} u(t,x) \Delta (\phi ^\ell(x)) dx
+ \| u(t) \phi ^{\frac{\ell}{p}} \|_{L^p(\mathbb R^n)}^p\\
&= \int_{\mathbb R^n} u(t,x) \Phi(x) \phi ^{\ell-2}(x) dx
+ \| u(t) \phi ^{\frac{\ell}{p}} \|_{L^p(\mathbb R^n)}^p\\
&\geq - \| \Phi \phi ^{\frac{\ell}{p'}-2} \|_{L^{p'}(\mathbb R^n)}
\| u(t) \phi ^{\frac{\ell}{p}} \|_{L^p(\mathbb R^n)}
+ \| u(t) \phi ^{\frac{\ell}{p}} \|_{L^p(\mathbb R^n)}^p\\
&\geq - 2^{\frac{p'}{p}} p'^{-1} p^{-\frac{p'}{p}}
\| \Phi \phi ^{\frac{\ell}{p'}-2} \|_{L^{p'}(\mathbb R^n)}^{p'}
+ 2^{-1} \| u(t) \phi ^{\frac{\ell}{p}} \|_{L^p(\mathbb R^n)}^p\\
&\geq - 2^{p'-1} p'^{-1} p^{1-p'}
\| \Phi^{p'} \phi ^{\ell-2p'} \|_{L^{1}(\mathbb R^n)}
+ 2^{-1} \| \phi ^{\ell} \|_{L^1(\mathbb R^n)}^{1-p} I_\phi (t)^p\\
&= 2^{-1} \| \phi ^{\ell} \|_{L^1(\mathbb R^n)}^{1-p}
\big( I_\phi (t)^p - A(n,p,\ell,\phi )^p \big)\\
&\geq 2^{-1} \| \phi ^{\ell} \|_{L^1(\mathbb R^n)}^{1-p}
\big( I_\phi (t) - A(n,p,\ell,\phi ) \big)^p.
\end{align*}
Here we note that the above inequality holds
as long as
$I_\phi (t) - A(n,p,\ell,\phi ) > 0$.
The above inequality implies that
$J_\phi (t) = I_\phi (t) - A(n,p,\ell,\phi )$
satisfies
\begin{align}
\label{Jphi_ode}
\begin{cases}
J_\phi ''(t) + b(t) J'_\phi (t)
\geq 2^{-1} \| \phi ^{\ell} \|_{L^1(\mathbb R^n)}^{1-p} J_{\phi}(t)^p
&\mbox{for}\quad t \in [0,t_0),\\
J_\phi (0) = I_\phi (0) - A(n,p,\ell,\phi ),\\
J_\phi '(0) = A_1 J_\phi (0),\\
\end{cases}
\end{align}
where
$A_1 = I'_{\phi}(0)/( I_{\phi}(0) - A(n,p,\ell,\phi))$,
which is a positive constant thanks to the assumption \eqref{eq:2}.
Moreover, by the assumption \eqref{eq:2},
we have
$I_\phi (0) - A(n,p,\ell,\phi ) \leq 2^{\frac{1}{p-1}} \| \phi ^{\ell} \|_{L^1(\mathbb R^n)}$.
Thus, we apply Proposition \ref{Theorem:2.3}
with
$\varepsilon_0 = I_\phi (0) - A(n,p,\ell,\phi )$,
$\gamma = 2^{-1} \| \phi ^{\ell} \|_{L^1(\mathbb R^n)}^{1-p}$
and
$f(t) = J_{\phi}(t)$
to obtain
\begin{align}%
\label{Jphi_est}
J_{\phi}(t)
\ge J_{\phi}(0)
\left( 1
- \mu (p,b,\beta,A_1)
\tilde{J}_{\phi}(0)^{p-1}
B(t) \right)^{- \frac{2}{p-1}}
\end{align}%
for $t \in [0,t_0)$,
where
$\tilde{J}_{\phi}(0) = 2^{-\frac{1}{p-1}} \| \phi^{\ell} \|_{L^1(\mathbb{R}^n)}^{-1} J_{\phi}(0)$.
Next, we show that
$J_{\phi}(t) > 0$
holds for any $t\in [0,T_0)$.
Indeed, if $J_{\phi}(t_{\ast}) = 0$ holds for some $t_{\ast} \in (0, T_0)$
and $J_{\phi}(t) > 0$ holds for $t \in [0,t_{\ast})$,
then, applying the same argument above,
we can prove the estimate \eqref{Jphi_est} for $t \in [0, t_{\ast})$.
However, the right-hand side of \eqref{Jphi_est} remains positive
for $t = t_{\ast}$, which contradicts $J_{\phi}(t_{\ast}) = 0$.
Thus, we have $J_{\phi}(t) > 0$ for any $t\in [0,T_0)$,
and $J_{\phi}(t)$ also satisfies the estimate \eqref{Jphi_est} for $t\in [0,T_0)$.
Hence, Proposition \ref{Theorem:2.3}
with $\delta_0 = \widetilde{J}_{\phi}(0)$ gives the desired estimates for
$J_{\phi}(t)$ and $T_0$.
\end{proof}
\begin{proof}[Proof of Corollary \ref{Theorem:1.4}]
Let $R(\varepsilon)$ be given by \eqref{r}.
Since $n - 2 \frac{p'}{p} = \frac{n(p - p_F)}{p-1}$,
we calculate
\[
A(n,p,\ell,\psi _{R(\varepsilon)})
= A(n,p,\ell,\psi ) R(\varepsilon)^{\frac{n(p - p_F)}{p-1}}
= \frac{\varepsilon}{4} I_0.
\]
From this and the assumption \eqref{eq:3}, we obtain
\begin{align*}%
I_{\psi_{R(\varepsilon)}}(0) - A(n,p,\ell,\psi _{R(\varepsilon)})
\ge \frac{\varepsilon}{4} I_0.
\end{align*}%
Also, the assumption \eqref{eq:4} immediately implies
$I'_{\psi_{R(\varepsilon)}}(0) \ge \frac{\varepsilon}{2}I_1$.
Finally, the assumption \eqref{eq:41} leads to
\begin{align*}%
I_{\psi_{R(\varepsilon)}}(0) - A(n,p,\ell,\psi _{R(\varepsilon)})
&\le \varepsilon I_0 \\
&\le 2^{\frac{1}{p-1}} \| \psi^{\ell} \|_{L^1(\mathbb{R}^n)} R(\varepsilon)^n \\
&= 2^{\frac{1}{p-1}} \|\psi_{R(\varepsilon)}^{\ell} \|_{L^1(\mathbb R^n)}.
\end{align*}%
Therefore, the condition \eqref{eq:2} is fulfilled and
Proposition \ref{Theorem:1.3} with $\phi = \psi_{R(\varepsilon)}$ implies that
\begin{align*}%
J_{\psi_{R(\varepsilon)}}(t)
\ge J_{\psi_{R(\varepsilon)}}(0)
\left( 1 - \mu(p,b,\beta, A_1(\varepsilon)) \widetilde{J}_{\psi_{R(\varepsilon)}}(0)^{p-1}
B(t) \right)^{-\frac{2}{p-1}}
\end{align*}%
and the lifespan $T_0$ is estimated as
\begin{align*}%
T_0 \le B^{-1}\left( \mu(p,b,\beta,A_1(\varepsilon))^{-1}
\widetilde{J}_{\psi_{R(\varepsilon)}}(0)^{1-p} \right),
\end{align*}%
where
\begin{align*}%
A_1(\varepsilon)
= \frac{I'_{\psi_{R(\varepsilon)}}(0)}{I_{\psi_{R(\varepsilon)}}(0)-A(n,p,\ell,\psi_{R(\varepsilon)})}.
\end{align*}%
Now, we again use the assumptions \eqref{eq:3} and \eqref{eq:4} to obtain
\begin{align*}%
A_1(\varepsilon)
\ge \frac{\frac{\varepsilon}{2}I_1}{\varepsilon I_0 - \frac{\varepsilon}{4}I_0}
= \frac{2I_1}{3I_0}.
\end{align*}%
Moreover, we calculate
\begin{align*}%
\widetilde{J}_{\psi_{R(\varepsilon)}}(0)^{p-1}
&= 2^{-1} \| \psi_{R(\varepsilon)}^{\ell} \|_{L^1(\mathbb{R}^n)}^{-(p-1)}
J_{\psi_{R(\varepsilon)}}(0)^{p-1} \\
&\ge 2^{-1} \| \psi^{\ell} \|_{L^1(\mathbb{R}^n)}^{-(p-1)} R(\varepsilon)^{-n(p-1)}
\left( \frac{\varepsilon}{4} I_0 \right)^{p-1}\\
&= 2^{-1} \| \psi^{\ell} \|_{L^1(\mathbb{R}^n)}^{-(p-1)}
A(n,p,\ell, \psi)^{-\frac{(p-1)^2}{p_F-p}}
\left( \frac{\varepsilon}{4} I_0 \right)^{\frac{1}{\frac{1}{p-1}-\frac{n}{2}}}.
\end{align*}%
Consequently, letting
\begin{align}%
\nonumber
&\mu_0(n,p,b,\beta,\ell,\psi,I_0,I_1)\\
\label{mu0}
&= \mu \left(p,b,\beta,\frac{2I_1}{3I_0} \right)2^{-1}
\| \psi^{\ell} \|_{L^1(\mathbb{R}^n)}^{-(p-1)}
A(n,p,\ell,\psi)^{-\frac{(p-1)^2}{p_F-p}}
\left( \frac{1}{4}I_0 \right)^{\frac{1}{\frac{1}{p-1}-\frac{n}{2}}},
\end{align}%
we have the desired estimates \eqref{bl-rt} and \eqref{lf_upp2}.
\end{proof}
\section{Proofs of Propositions \ref{prop_low} and \ref{prop_lowcr}}
\subsection{Scaling variables, local existence and spectral decomposition}
We give proofs of Propositions \ref{prop_low} and \ref{prop_lowcr}.
Sections 4.1--4.3 are almost the same as in \cite{Wa17JMAA},
and we present only an outline.
Following \cite{Wa17JMAA}, we introduce the scaling variables
\begin{align}
\label{sc}
s = \log (B(t) + 1 ),\quad y = (B(t) + 1 )^{-1/2}x.
\end{align}
Also, we use the notation
$t(s) = B^{-1}(e^s-1)$.
We change the coordinate and the unknown function as
\begin{align}
\label{uvw}
\begin{array}{l}
\displaystyle u(t,x) = (B(t)+1)^{-n/2}v(\log(B(t)+1), (B(t)+1)^{-1/2}x),\\[5pt]
\displaystyle u_t(t,x) = b(t)^{-1}(B(t)+1)^{-n/2-1}w(\log(B(t)+1), (B(t)+1)^{-1/2}x).
\end{array}
\end{align}
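For instance, for the model damping $b(t) = (1+t)^{-\beta}$ with $\beta \in (-1,1)$, the estimate \eqref{b_est} shows that
\[
s = \log (B(t)+1) = (1+\beta) \log (1+t) + O(1),
\qquad
|y| \sim (1+t)^{-\frac{1+\beta}{2}} |x|,
\]
so that $(s,y)$ play the role of the parabolic self-similar variables associated with the corresponding parabolic problem discussed in Section 1.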
Then, the equation \eqref{eq:1} is transformed into the first order system
\begin{align}
\label{eq_vw}
\left\{\begin{array}{ll}
\displaystyle v_s-\frac{y}{2}\cdot \nabla_yv - \frac{n}{2}v = w,&s>0, y\in \mathbb{R}^n,\\[7pt]
\displaystyle
\frac{e^{-s}}{b(t(s))^2}\left( w_s-\frac{y}{2}\cdot \nabla_yw -\left(\frac{n}{2}+1\right)w \right)+w
= \Delta_yv+r(s,y),&s>0,y\in \mathbb{R}^n,\\[7pt]
\displaystyle v(0,y) = v_0(y) = \varepsilon a_0(y),\
w(0,y) = w_0(y) = \varepsilon a_1(y),
&y\in \mathbb{R}^n,
\end{array}\right.
\end{align}
where
\begin{align}
\label{r_sy}
r(s,y) &= \frac{b^{\prime}(t(s))}{b(t(s))^2}w
+ e^{\frac{n}{2}(p_F-p) s} |v|^p.
\end{align}
The local well-posedness for the system \eqref{eq_vw}
was obtained by \cite[Proposition 3.6]{Wa17JMAA}.
In this paper, the solution satisfying certain integral equation
is constructed (mild solution).
Such solution also satisfies the condition of
our strong solution (see Definition \ref{def_sol}).
\begin{Proposition}{\cite[Proposition 3.6]{Wa17JMAA}}\label{prop_loc2}
There exists $S>0$ depending only on
$\| (v_0, w_0) \|_{H^{1,m}\times H^{0,m}}$
(the size of the initial data) such that
the Cauchy problem \eqref{eq_vw} admits a unique strong solution
$(v,w)$ satisfying
\[
(v,w) \in C([0,S);H^{1,m}(\mathbb{R}^n)\times H^{0,m}(\mathbb{R}^n)).
\]
Also, if
$(u_0, u_1) \in H^{2,m}(\mathbb{R}^n) \times H^{1,m}(\mathbb{R}^n)$,
then the solution $(v,w) $ satisfies
\begin{align}
\label{vwcls2}
(v,w) \in C([0,S);H^{2,m}(\mathbb{R}^n)\times H^{1,m}(\mathbb{R}^n))
\cap C^1([0,S); H^{1,m}(\mathbb{R}^n) \times H^{0,m}(\mathbb{R}^n)).
\end{align}
Moreover, for an arbitrarily fixed time $S^{\prime}>0$,
we can extend the solution to the interval $[0,S^{\prime}]$
by taking $\varepsilon$ sufficiently small.
Furthermore, if the lifespan
\[
S_0 = S_0(\varepsilon) = \sup\{
S\in (0,\infty) ;
\mbox{there exists a unique strong solution}\ (v,w)\ \mbox{to \eqref{eq_vw}} \}
\]
is finite, then $(v,w)$ satisfies
$\lim_{s \to S_0} \| (v,w)(s) \|_{H^{1,m}\times H^{0,m}} = \infty$.
\end{Proposition}
Next, to obtain an a priori estimate for
$(v,w)$, we decompose
$(v,w)$ into the leading terms and the remainder terms.
Let $\alpha(s)$ be
\begin{align}
\label{alpha}
\alpha(s) = \int_{\mathbb{R}^n}v(s,y)dy,
\end{align}
which is well-defined due to $v(s) \in H^{1, m}$ with $m>n/2$.
We also put
\begin{align*}
\varphi_0(y) = (4\pi)^{-n/2} \exp \left( -\frac{|y|^2}{4} \right).
\end{align*}
Then, it is easy to see that
\begin{align}
\label{varphi0_int}
\int_{\mathbb{R}^n}\varphi_0(y) dy = 1
\end{align}
and
\begin{align}
\label{phi_eq}
\Delta_y \varphi_0 = -\frac{y}{2}\cdot \nabla_y\varphi_0-\frac{n}{2}\varphi_0.
\end{align}
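The identities \eqref{varphi0_int} and \eqref{phi_eq} can be checked symbolically. The following Python sketch (an illustrative sanity check with SymPy, not part of the proof) verifies the one-dimensional case:

```python
import sympy as sp

y = sp.symbols('y', real=True)
n = 1  # the one-dimensional case; the computation is analogous for general n
phi0 = (4*sp.pi)**sp.Rational(-n, 2) * sp.exp(-y**2/4)

# (varphi0_int): the Gaussian profile integrates to one
total = sp.integrate(phi0, (y, -sp.oo, sp.oo))
assert sp.simplify(total - 1) == 0

# (phi_eq): Delta phi0 = -(y/2) . grad phi0 - (n/2) phi0
lhs = sp.diff(phi0, y, 2)
rhs = -(y/sp.Integer(2))*sp.diff(phi0, y) - sp.Rational(n, 2)*phi0
assert sp.simplify(lhs - rhs) == 0
```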
We also put $\psi_0(y) = \Delta_y \varphi_0(y)$
and decompose $v, w$ as
\begin{align}
\label{sp_de_vw}
\begin{array}{l}
\displaystyle v(s,y) = \alpha(s) \varphi_0(y) + f(s,y),\\[5pt]
\displaystyle w(s,y) = \frac{d\alpha}{ds}(s) \varphi_0(y) + \alpha(s)\psi_0(y) + g(s,y),
\end{array}
\end{align}
where we expect that $(f,g)$ can be regarded as remainder terms.
In order to derive the system that $(f,g)$ satisfies,
we first note the following lemma.
\begin{Lemma}{\cite[Lemma 3.8]{Wa17JMAA}}\label{lem_alpha}
We have
\begin{align}
\label{alpha_dt}
\frac{d\alpha}{ds}(s) &= \int_{\mathbb{R}^n}w(s,y)dy,\\
\label{alpha_ddt}
\frac{e^{-s}}{b(t(s))^2}\frac{d^2\alpha}{ds^2}(s)
&= \frac{e^{-s}}{b(t(s))^2} \frac{d\alpha}{ds}(s)
- \frac{d\alpha}{ds}(s) + \int_{\mathbb{R}^n}r(s,y)dy,
\end{align}
where $r$ is defined by \eqref{r}.
\end{Lemma}
From the system \eqref{eq_vw}, Lemma \ref{lem_alpha}
and the equation \eqref{phi_eq},
we see that $f$ and $g$ satisfy the following system:
\begin{align}
\label{eq_fg}
\left\{\begin{array}{ll}
\displaystyle
f_s - \frac{y}{2}\cdot \nabla_yf-\frac{n}{2}f = g,&s>0, y\in\mathbb{R}^n,\\[5pt]
\displaystyle
\frac{e^{-s}}{b(t(s))^2}\left( g_s - \frac{y}{2}\cdot\nabla_y g -\left(\frac{n}{2}+1\right) g \right)
+ g = \Delta_y f + h,&s>0,y\in\mathbb{R}^n,\\[5pt]
f(0,y) = v_0(y)-\alpha(0)\varphi_0(y),&y\in\mathbb{R}^n,\\[5pt]
g(0,y) = w_0(y)-\frac{d\alpha}{ds}(0)\varphi_0(y)-\alpha(0)\psi_0(y),
&y\in\mathbb{R}^n,
\end{array}\right.
\end{align}
where $h$ is given by
\begin{align}
\nonumber
h(s,y) &= \frac{e^{-s}}{b(t(s))^2}
\left( -2 \frac{d\alpha}{ds}(s) \psi_0(y)
+\alpha(s)
\left(\frac{y}{2}\cdot\nabla_y\psi_0(y)
+\left(\frac{n}{2}+1\right)\psi_0(y) \right) \right)\\
\label{h}
&\quad + r(s,y) - \left(\int_{\mathbb{R}^n} r(s,y) dy \right) \varphi_0(y).
\end{align}
Moreover, from \eqref{alpha}, \eqref{varphi0_int} and \eqref{alpha_dt}, it follows that
\begin{align}
\label{fg_int}
\int_{\mathbb{R}^n}f(s,y)dy = \int_{\mathbb{R}^n}g(s,y)dy = 0.
\end{align}
We also notice that the condition \eqref{fg_int} implies
\begin{align}
\label{h_int}
\int_{\mathbb{R}^n} h(s,y) dy = 0.
\end{align}
\subsection{Energy estimates for $n=1$}\
To obtain the decay estimates for $f,g$, we introduce
\begin{align}
\label{1_FG}
F(s,y) = \int_{-\infty}^y f(s,z)dz,\quad
G(s,y) = \int_{-\infty}^y g(s,z)dz.
\end{align}
From the following lemma and the condition \eqref{fg_int},
we see that $F,G\in C([0,S); L^2(\mathbb{R}))$.
\begin{Lemma}[Hardy-type inequality]{\cite[Lemma 3.9]{Wa17JMAA}}\label{lem_hardy}
Let $f=f(y)$ belong to $H^{0,1}(\mathbb{R})$ and satisfy
$\int_{\mathbb{R}}f(y)dy = 0$,
and let $F(y) = \int_{-\infty}^y f(z)dz$.
Then, we have
\begin{align}
\label{hardy}
\int_{\mathbb{R}}F(y)^2 dy \le 4 \int_{\mathbb{R}}y^2 f(y)^2 dy.
\end{align}
\end{Lemma}
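The inequality \eqref{hardy} can be illustrated numerically on a sample mean-zero profile; the profile $f(y) = y\,e^{-y^2}$ below is an arbitrary choice for this sanity check, not part of the proof:

```python
import numpy as np

# numerical spot-check of the Hardy-type inequality for one mean-zero profile
y = np.linspace(-15.0, 15.0, 600001)
dy = y[1] - y[0]
f = y * np.exp(-y**2)          # odd, so its integral over R vanishes
F = np.cumsum(f) * dy          # F(y) = int_{-inf}^{y} f(z) dz (Riemann sum)

lhs = np.sum(F**2) * dy        # int F^2 dy  (~ (1/4) sqrt(pi/2))
rhs = 4.0 * np.sum(y**2 * f**2) * dy   # 4 int y^2 f^2 dy
assert abs(np.sum(f) * dy) < 1e-10     # mean-zero condition
assert lhs <= rhs
```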
Since $f$ and $g$ satisfy the equation \eqref{eq_fg}, we can show that
$F$ and $G$ satisfy the following system:
\begin{align}
\label{eq_FG}
\left\{\begin{array}{ll}
\displaystyle F_s-\frac{y}{2}F_y = G,&s>0,y\in\mathbb{R},\\[5pt]
\displaystyle
\frac{e^{-s}}{b(t(s))^2}\left( G_s - \frac{y}{2}G_y -G \right) + G
= F_{yy} + H,&s>0, y\in \mathbb{R},\\[5pt]
\displaystyle F(0,y) = \int_{-\infty}^{y}f(0,z)dz,\
G(0,y) = \int_{-\infty}^{y}g(0,z)dz, &y\in \mathbb{R},
\end{array}\right.
\end{align}
where
\begin{align}
\label{H}
H(s,y) = \int_{-\infty}^yh(s,z)dz.
\end{align}
We define the following energies:
\begin{align*}
E_0(s) &= \int_{\mathbb{R}} \left( \frac{1}{2}\left( F_y^2 + \frac{e^{-s}}{b(t(s))^2}G^2 \right)
+ \frac{1}{2}F^2 + \frac{e^{-s}}{b(t(s))^2} FG \right) dy,\\
E_1(s) &= \int_{\mathbb{R}} \left( \frac{1}{2} \left( f_y^2 + \frac{e^{-s}}{b(t(s))^2}g^2 \right)
+ f^2 + 2\frac{e^{-s}}{b(t(s))^2}fg \right)dy,\\
E_2(s) & = \int_{\mathbb{R}} y^2 \left[ \frac{1}{2} \left( f_y^2 + \frac{e^{-s}}{b(t(s))^2}g^2 \right)
+ \frac{1}{2} f^2 + \frac{e^{-s}}{b(t(s))^2}fg \right] dy,\\
E_3(s) &= \frac{1}{2}\frac{e^{-s}}{b(t(s))^2}\left( \frac{d\alpha}{ds}(s) \right)^2
+ e^{-2\lambda s}\alpha(s)^2,\\
E_4(s) &= \frac{1}{2}\alpha(s)^2
+ \frac{e^{-s}}{b(t(s))^2}\alpha(s) \frac{d\alpha}{ds}(s)
\end{align*}
and
\begin{align*}
E_5(s) = \sum_{j=0}^4 C_j E_j(s),
\end{align*}
where $\lambda$ is a parameter such that
$0 < \lambda \le 1/4$
and $C_j\ (j=0,\ldots, 4)$ are constants such that
$C_2 = C_3 = C_4 =1$ and
$1 \ll C_1 \ll C_0$.
Then, we have the following energy estimates.
\begin{Lemma}{\cite[Lemmas 3.10--3.17]{Wa17JMAA}}\label{lem_en0}
We have
\begin{align*}
\frac{d}{ds}E_j(s)
+ \delta_j E_j(s) + L_j(s)
= R_j(s),
\end{align*}
for $j = 0, \ldots, 4$,
where
$\delta_j= \frac12\ (j=0,1,2)$,
$\delta_3 = 2\lambda$,
$\delta_4 = 0$,
and
\begin{align*}
L_0(s) &= \int_{\mathbb{R}}\left( \frac{1}{2}F_y^2 + G^2 \right)dy,\\
L_1(s) &= \int_{\mathbb{R}}\left( f_y^2 + g^2 \right) dy
- \int_{\mathbb{R}}f^2 dy,\\
L_2(s) &= \int_{\mathbb{R}}y^2 \left( \frac{1}{2}f_y^2 + g^2 \right)dy
+ 2\int_{\mathbb{R}}y f_y \left( f+ g \right) dy,\\
L_3(s) &= \left( \frac{d\alpha}{ds}(s) \right)^2,\\
L_4(s) &= 0
\end{align*}
and
\begin{align*}
R_0(s) &= \frac{3}{2}\frac{e^{-s}}{b(t(s))^2}\int_{\mathbb{R}}G^2 dy
- \frac{b^{\prime}(t(s))}{b(t(s))^2}\int_{\mathbb{R}}\left( G^2 + 2FG \right)dy
+ \int_{\mathbb{R}} (F+G)H dy,\\
R_1(s) &= 3\frac{e^{-s}}{b(t(s))^2}\int_{\mathbb{R}}g^2 dy
+ 2\frac{e^{-s}}{b(t(s))^2}\int_{\mathbb{R}}fg dy
- \frac{b^{\prime}(t(s))}{b(t(s))^2}\int_{\mathbb{R}}(g^2+4fg)dy \\
&\quad + \int_{\mathbb{R}} \left(2f+g\right) h dy,\\
R_2(s) &= \frac{3}{2}\frac{e^{-s}}{b(t(s))^2}\int_{\mathbb{R}}y^2 g^2 dy
-\frac{b^{\prime}(t(s))}{b(t(s))^2}\int_{\mathbb{R}}y^2 (2f+g)g dy
+\int_{\mathbb{R}}y^2 (f+g)hdy,\\
R_3(s) &= \frac{1}{2}(2\lambda +1 ) \frac{e^{-s}}{b(t(s))^2}\left( \frac{d\alpha}{ds}(s) \right)^2
- \frac{b^{\prime}(t(s))}{b(t(s))^2}\left( \frac{d\alpha}{ds}(s) \right)^2 \\
&\quad + \frac{d\alpha}{ds}(s) \left( \int_{\mathbb{R}^n} r(s,y) dy \right)
+ 2 e^{-2 \lambda s} \alpha(s) \frac{d\alpha}{ds}(s),\\
R_4(s) &= \frac{e^{-s}}{b(t(s))^2}\left( \frac{d\alpha}{ds}(s) \right)^2
- 2 \frac{b^{\prime}(t(s))}{b(t(s))^2}\alpha(s) \frac{d\alpha}{ds}(s)
+\alpha(s) \left( \int_{\mathbb{R}^n} r(s,y)dy \right).
\end{align*}
Moreover, we have
\begin{align*}
\frac{d}{ds}E_5(s) + 2\lambda \sum_{j=0}^3 C_j E_j(s) + L_5(s) = R_5(s),
\end{align*}
where
\begin{align*}
L_5(s) &= \sum_{j=0}^2 \left[ \left( \frac{1}{2} - 2 \lambda \right) C_j E_j(s)
+ C_j L_j(s) \right]
+ C_3 L_3(s)
\end{align*}
and
\begin{align*}
R_5(s) = \sum_{j=0}^4 C_j R_j(s).
\end{align*}
Furthermore, there exist
$C_0 > C_1 > 1$ and $s_0 > 0$ such that
\begin{align*}
&\| f(s) \|_{H^{1,1}}^2 + \| g(s) \|_{H^{0,1}}^2 + \left( \frac{d\alpha}{ds}(s) \right)^2
\le C L_5(s),\\
&\| f(s) \|_{H^{1,1}}^2 + \frac{e^{-s}}{b(t(s))^2} \| g(s) \|_{H^{0,1}}^2
+ \alpha(s)^2 + \frac{e^{-s}}{b(t(s))^2} \left( \frac{d\alpha}{ds}(s) \right)^2
\le C E_5(s)
\end{align*}
and
\begin{align*}
|R_5(s)| &\le \frac12 L_5(s)
+ C e^{-\frac{1-\beta}{1+\beta} s} E_5(s)
+ C e^{n(p_F-p)s} E_5(s)^p
+ C e^{\frac{n}{2}(p_F-p) s} E_5(s)^{\frac{p+1}{2}}
\end{align*}
are valid for $s \ge s_0$.
\end{Lemma}%
\subsection{Energy estimates for $n \ge 2$}\
When $n\ge 2$, we can no longer work with primitives.
Instead, we define
\begin{align*}
\hat{F}(s,\xi) = |\xi|^{-n/2-\delta}\hat{f}(s,\xi),\quad
\hat{G}(s,\xi) = |\xi|^{-n/2-\delta}\hat{g}(s,\xi),\quad
\hat{H}(s,\xi) = |\xi|^{-n/2-\delta}\hat{h}(s,\xi),
\end{align*}
where
$0<\delta<1$,
and
$\hat{f}(s,\xi)$ denotes the Fourier transform of $f(s,y)$ with respect to
the space variable.
By virtue of the cancellation conditions \eqref{fg_int}, \eqref{h_int},
$\hat{F}, \hat{G}, \hat{H}$ make sense as $L^2$-functions:
\begin{Lemma}{\cite[Lemma 3.11]{Wa17JMAA}}\label{lem_hardy2}
Let $m>n/2+1$ and $f(y) \in H^{0,m}(\mathbb{R}^n)$ be a function satisfying
$\hat{f}(0) = (2\pi)^{-n/2} \int_{\mathbb{R}^n}f(y)dy = 0$.
Let
$\hat{F}(\xi) = |\xi|^{-n/2-\delta}\hat{f}(\xi)$
with some $0<\delta<1$.
Then,
there exists a constant $C(n,m,\delta)>0$ such that
\begin{align}
\label{hardy2}
\| F \|_{L^2} \le C(n,m,\delta) \| f \|_{H^{0,m}}
\end{align}
holds.
\end{Lemma}
We also notice that
$\| f \|_{L^2}$
can be controlled by the terms
$\| \nabla f \|_{L^2}$ and $\| \nabla F \|_{L^2}$,
which come from the diffusion.
\begin{Lemma}{\cite[(3.39)]{Wa17JMAA}}\label{lem_f}
In addition to the assumptions in Lemma \ref{lem_hardy2},
we further assume $f \in H^{1}(\mathbb{R}^n)$.
Then, for any small $\eta > 0$, there exists a constant $C>0$ such that
\begin{align*}
\| f \|_{L^2}^2 \le \eta \| \nabla f \|_{L^2}^2 + C \| \nabla F \|_{L^2}^2
\end{align*}
holds.
\end{Lemma}
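On the Fourier side, this lemma reduces (in our reading, via Plancherel's theorem) to the pointwise bound $1 \le \eta |\xi|^2 + C|\xi|^{2-n-2\delta}$, which holds for $C$ large because the second exponent is negative when $n \ge 2$. A numerical sketch for sample values (the choices of $n$, $\delta$, $\eta$, $C$ below are assumptions for illustration):

```python
import numpy as np

n_dim, delta, eta = 2, 0.5, 0.1
expo = 2 - n_dim - 2*delta       # = -1.0 < 0, so small frequencies are controlled
x = np.logspace(-6, 6, 200001)   # sample |xi| values over many decades

def g(C):
    # the pointwise minorant eta*|xi|^2 + C*|xi|^(2-n-2*delta)
    return eta * x**2 + C * x**expo

C = 10.0                         # large enough that min g >= 1 for these parameters
assert expo < 0
assert np.min(g(C)) >= 1.0
```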
In this case,
$\hat{F}$ and $\hat{G}$ satisfy the following system:
\begin{align*}
\left\{ \begin{array}{ll}
\displaystyle \hat{F}_s + \frac{\xi}{2}\cdot \nabla_{\xi}\hat{F}
+\frac{1}{2}\left( \frac{n}{2} + \delta \right) \hat{F} = \hat{G},
&s>0, \xi \in \mathbb{R}^n,\\
\displaystyle \frac{e^{-s}}{b(t(s))^2}\left( \hat{G}_s + \frac{\xi}{2}\cdot \nabla_{\xi} \hat{G}
+ \frac{1}{2} \left( \frac{n}{2}+\delta-2 \right) \hat{G} \right) + \hat{G}
= -|\xi|^2 \hat{F} + \hat{H},
&s>0, \xi\in\mathbb{R}^n.
\end{array} \right.
\end{align*}
We define the following energies:
\begin{align*}
E_0(s) &= {\rm Re} \int_{\mathbb{R}^n}
\left( \frac{1}{2}\left( |\xi|^2 |\hat{F}|^2 + \frac{e^{-s}}{b(t(s))^2} |\hat{G}|^2 \right)
+ \frac{1}{2} |\hat{F}|^2 + \frac{e^{-s}}{b(t(s))^2}\hat{F} \bar{\hat{G}} \right) d\xi,\\
E_1(s) &= \int_{\mathbb{R}^n}
\left( \frac{1}{2}\left( |\nabla_y f |^2 + \frac{e^{-s}}{b(t(s))^2}g^2 \right)
+ \left( \frac{n}{4} + 1 \right)
\left( \frac{1}{2} f^2 + \frac{e^{-s}}{b(t(s))^2}fg \right) \right) dy,\\
E_2(s) &= \int_{\mathbb{R}^n}
|y|^{2m} \left[
\frac{1}{2}\left( |\nabla_y f|^2 + \frac{e^{-s}}{b(t(s))^2}g^2 \right)
+\frac{1}{2}f^2 + \frac{e^{-s}}{b(t(s))^2}fg \right] dy,\\
E_3(s) &= \frac{1}{2}\frac{e^{-s}}{b(t(s))^2}\left( \frac{d\alpha}{ds}(s) \right)^2
+ e^{-2\lambda s}\alpha(s)^2,\\
E_4(s) &= \frac{1}{2}\alpha(s)^2
+ \frac{e^{-s}}{b(t(s))^2}\alpha(s) \frac{d\alpha}{ds}(s)
\end{align*}
and
\begin{align*}
E_5(s) = \sum_{j=0}^4 C_j E_j(s),
\end{align*}
where $\lambda$ is a parameter such that
$0 < \lambda < \min\{ \frac12, \frac{m}{2}-\frac{n}{4} \}$
and $C_j\ (j=0,\ldots, 4)$ are constants such that
$C_2 = C_3 = C_4 =1$ and
$1 \ll C_1 \ll C_0$.
Then, we have the following energy estimates.
\begin{Lemma}{\cite[Lemmas 3.12--3.17]{Wa17JMAA}}\label{lem_en2}
We have
\begin{align*}
\frac{d}{ds}E_j(s)
+\delta_j E_j(s) + L_j(s)
= R_j(s),
\end{align*}
for $j = 0, \ldots, 4$,
where
$\delta_0 = \delta_1 = \delta$,
$\delta_2 = m -\frac{n}{2} -\eta$,
$\delta_3 = 2 \lambda$,
$\delta_4 = 0$,
and
$\eta$ is a small parameter such that
$0<\eta < m-\frac{n}{2}$,
and
\begin{align*}
L_0(s) &= \frac{1}{2} \int_{\mathbb{R}^n} |\xi|^2 |\hat{F}|^2 d\xi
+\int_{\mathbb{R}^n} |\hat{G}|^2 d\xi,\\
L_1(s) &= \frac{1}{2}(1-\delta) \int_{\mathbb{R}^n}|\nabla_y f|^2 dy
+ \int_{\mathbb{R}^n} g^2 dy
- \left( \frac{n}{4}+\frac{\delta}{2} \right)
\left(\frac{n}{4}+1\right) \int_{\mathbb{R}^n}f^2 dy,\\
L_2(s) &= \frac{\eta}{2}\int_{\mathbb{R}^n}|y|^{2m} f^2 dy
+ \frac{1}{2} ( \eta + 1) \int_{\mathbb{R}^n} |y|^{2m} |\nabla_y f|^2 dy
+ \int_{\mathbb{R}^n} |y|^{2m} g^2 dy\\
&\quad + 2m \int_{\mathbb{R}^n}|y|^{2m-2} (y\cdot \nabla_y f)(f+g) dy,\\
L_3(s) &= \left( \frac{d\alpha}{ds}(s) \right)^2,\\
L_4(s) &= 0
\end{align*}
and
\begin{align*}
R_0(s) &= \frac{3}{2} \frac{e^{-s}}{b(t(s))^2}\int_{\mathbb{R}^n} |\hat{G}|^2d\xi
- \frac{b^{\prime}(t(s))}{b(t(s))^2}
{\rm Re}\int_{\mathbb{R}^n} \left( 2\hat{F} +\hat{G} \right) \bar{\hat{G}} d\xi \\
&\quad + {\rm Re} \int_{\mathbb{R}^n} \left( \hat{F} + \hat{G} \right) \bar{\hat{H}} d\xi,\\
R_1(s) &= \left( \frac{n}{2}+\delta \right)\left( \frac{n}{4}+1 \right)
\frac{e^{-s}}{b(t(s))^2}\int_{\mathbb{R}^n} fg dy
+ \frac{1}{2}(n+3+\delta) \frac{e^{-s}}{b(t(s))^2} \int_{\mathbb{R}^n} g^2 dy \\
&\quad
- \frac{b^{\prime}(t(s))}{b(t(s))^2}
\int_{\mathbb{R}^n} \left( 2 \left(\frac{n}{4}+1\right) f + g \right) g dy
+ \int_{\mathbb{R}^n} \left( \left( \frac{n}{4} + 1 \right) f + g \right) h dy,\\
R_2(s) &= -\eta \frac{e^{-s}}{b(t(s))^2} \int_{\mathbb{R}^n} |y|^{2m} f g dy
- \frac{1}{2}(\eta - 3) \frac{e^{-s}}{b(t(s))^2} \int_{\mathbb{R}^n} |y|^{2m} g^2 dy\\
&\quad - \frac{b^{\prime}(t(s))}{b(t(s))^2} \int_{\mathbb{R}^n}|y|^{2m} (2f +g) g dy
+ \int_{\mathbb{R}^n} |y|^{2m} (f+g) h dy,\\
R_3(s) &= \frac{1}{2}(2\lambda +1 ) \frac{e^{-s}}{b(t(s))^2}\left( \frac{d\alpha}{ds}(s) \right)^2
- \frac{b^{\prime}(t(s))}{b(t(s))^2}\left( \frac{d\alpha}{ds}(s) \right)^2 \\
&\quad + \frac{d\alpha}{ds}(s) \left( \int_{\mathbb{R}^n} r(s,y) dy \right)
+ 2 e^{-2 \lambda s} \alpha(s) \frac{d\alpha}{ds}(s),\\
R_4(s) &= \frac{e^{-s}}{b(t(s))^2}\left( \frac{d\alpha}{ds}(s) \right)^2
- 2 \frac{b^{\prime}(t(s))}{b(t(s))^2}\alpha(s) \frac{d\alpha}{ds}(s)
+\alpha(s) \left( \int_{\mathbb{R}^n} r(s,y)dy \right).
\end{align*}
Moreover, we have
\begin{align*}
\frac{d}{ds}E_5(s) + 2\lambda \sum_{j=0}^3 C_j E_j(s) + L_5(s) = R_5(s),
\end{align*}
where
\begin{align*}
L_5(s) &= C_0 (\delta-2\lambda) E_0(s)
+ C_1 (\delta-2\lambda) E_1(s) + \left( m-\frac{n}{2}-\eta-2\lambda \right) E_2(s) \\
&\quad + \sum_{j=0}^4 C_j L_j(s)
\end{align*}
and
\begin{align*}
R_5(s) = \sum_{j=0}^4 C_j R_j(s).
\end{align*}
Furthermore, there exist
$C_0 > C_1 > 1$ and $s_0 > 0$ such that
\begin{align*}
&\| f(s) \|_{H^{1,m}}^2 + \| g(s) \|_{H^{0,m}}^2 + \left( \frac{d\alpha}{ds}(s) \right)^2
\le C L_5(s),\\
&\| f(s) \|_{H^{1,m}}^2 + \frac{e^{-s}}{b(t(s))^2} \| g(s) \|_{H^{0,m}}^2
+ \alpha(s)^2 + \frac{e^{-s}}{b(t(s))^2} \left( \frac{d\alpha}{ds}(s) \right)^2
\le C E_5(s)
\end{align*}
and
\begin{align*}
|R_5(s)| &\le \frac12 L_5(s)
+ C e^{-\frac{1-\beta}{1+\beta} s} E_5(s)
+ C e^{n(p_F-p)s} E_5(s)^p
+ C e^{\frac{n}{2}(p_F-p) s} E_5(s)^{\frac{p+1}{2}}
\end{align*}
are valid for $s \ge s_0$.
\end{Lemma}%
\subsection{A priori estimate and the proof of Proposition \ref{prop_low}}\
By Lemmas \ref{lem_en0} and \ref{lem_en2}
with taking $0< \lambda < \min\{ \frac12, \frac{\delta}{2}, \frac{m}{2}-\frac{n}{4}\}$
and $\eta$ sufficiently small if $n \ge 2$,
we can see that
$(f,g)$ satisfies the following a priori estimate for
$s \ge s_0$.
Here we note that, by Proposition \ref{prop_loc2},
the local solution exists for $s > s_0$,
provided that $\varepsilon$ is sufficiently small.
\begin{Lemma}{\cite[(3.53)]{Wa17JMAA}}\label{lem_e5}
There exists $s_0 > 0$ such that for $s \ge s_0$, we have
\begin{align}
\label{e5est}
\frac{d}{ds}E_5(s)
\le C e^{-\frac{1-\beta}{1+\beta} s} E_5(s)
+ C e^{n(p_F-p)s} E_5(s)^p
+ C e^{\frac{n}{2}(p_F-p) s} E_5(s)^{\frac{p+1}{2}}
\end{align}
(where we interpret $(1-\beta)/(1+\beta)$ as an arbitrarily large number
when $\beta = -1$).
\end{Lemma}
Now we are in a position to prove Proposition \ref{prop_low}.
\begin{proof}[Proof of Proposition \ref{prop_low}]
Let $\varepsilon_1 > 0$ be sufficiently small so that
the local solution $(v,w)$ of \eqref{eq_vw} exists for
$s > s_0$
(see Proposition \ref{prop_loc2}).
Therefore, by Lemma \ref{lem_e5}, we see that
$(f,g)$ satisfies the a priori estimate \eqref{e5est}.
We put
\[
\Lambda (s) := \exp \left( -C\int_{s_0}^s e^{- \frac{1-\beta}{1+\beta} \tau}\, d\tau \right)
\]
(where we interpret $(1-\beta)/(1+\beta)$ as an arbitrarily large number
when $\beta = -1$).
We note that
$c_0 \le \Lambda(s) \le 1$ holds for some $c_0 > 0$,
and $\Lambda(s_0) = 1$.
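These bounds follow because the integrand is positive and integrable on $[s_0, \infty)$. The following numerical sketch illustrates them for sample values (the choices of $\beta$, $C$, and $s_0$ below are assumptions; any $\beta \in (-1,1)$ behaves the same way):

```python
import numpy as np

beta, C, s0 = 0.5, 2.0, 1.0          # sample values, for illustration only
kappa = (1.0 - beta) / (1.0 + beta)  # decay rate (1-beta)/(1+beta)

s = np.linspace(s0, 100.0, 10001)
# closed form of int_{s0}^{s} exp(-kappa*tau) d(tau)
integral = (np.exp(-kappa*s0) - np.exp(-kappa*s)) / kappa
Lam = np.exp(-C * integral)

# lower bound c0 from the full integral over [s0, infinity)
c0 = np.exp(-C * np.exp(-kappa*s0) / kappa)
assert Lam[0] == 1.0          # Lambda(s0) = 1
assert np.all(Lam <= 1.0)
assert np.all(Lam >= c0)      # c0 <= Lambda(s) <= 1
```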
Multiplying \eqref{e5est} by $\Lambda(s)$
and integrating it over $[s_0, s]$, we see that
\[
\Lambda(s) E_5(s) \le E_5(s_0)
+ C \int_{s_0}^s
\left[ \Lambda(\tau) e^{n(p_F-p) \tau} E_5(\tau)^p
+ \Lambda(\tau) e^{\frac{n}{2}(p_F-p) \tau} E_5(\tau)^{\frac{p+1}{2}}
\right] \, d\tau
\]
holds for $s_0 \le s < S_0(\varepsilon)$.
Putting
\[
M(s) := \sup_{s_0 \le \tau \le s} E_5(\tau)
\]
and noting
\[
M(s_0) \le C(s_0) \varepsilon^2 \| (a_0, a_1) \|_{H^{1,m}\times H^{0,m}}^2,
\]
which can be easily proved by local existence result
(see the proof of \cite[Proposition 3.5]{Wa17JMAA}),
we have
\begin{align}
\label{estm}
M(s) \le
C_0^{\prime} \varepsilon^2 I_0
+ C_0^{\prime}
\left( e^{n(p_F-p)s} M(s)^p + e^{\frac{n}{2}(p_F-p)s } M(s)^{\frac{p+1}{2}} \right)
\end{align}
for $s_0 \le s < S_0(\varepsilon)$ and some $C_0^{\prime} > 0$,
where
$I_0 = \| (a_0, a_1) \|_{H^{1,m}\times H^{0,m}}^2$.
Let
$S_1 = S_1(\varepsilon) \ge s_0$ be the first time
at which $M$ attains the value
\[
M(S_1) = 2 C_0^{\prime} \varepsilon^2 I_0.
\]
We note that if $S_0(\varepsilon) = \infty$,
then Proposition \ref{prop_low} obviously holds,
and if $S_0(\varepsilon) < \infty$,
then such $S_1$ actually exists because
$\lim_{s \to S_0(\varepsilon)} M(s) = \infty$.
Thus, in what follows we assume $S_0(\varepsilon) < \infty$.
Then, substituting $s= S_1$ in \eqref{estm}, we see that
\begin{align*}
C_0^{\prime} \varepsilon^2 I_0
&\le 2C_0^{\prime}
\max\left\{ e^{n(p_F-p)S_1} (2C_0^{\prime} \varepsilon^2 I_0)^p,
e^{\frac{n}{2}(p_F-p)S_1 } (2C_0^{\prime} \varepsilon^2 I_0)^{\frac{p+1}{2}}
\right\}.
\end{align*}
No matter which quantity attains the maximum,
we obtain
\[
\varepsilon^{-\frac{2(p-1)}{n(p_F-p)}} \le C e^{S_1}.
\]
Thus, we conclude
\[
\varepsilon^{-\frac{1}{\frac{1}{p-1} - \frac{n}{2}}}
\le C \left( B(T_0(\varepsilon)) + 1 \right).
\]
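The step between the two displays above is pure exponent algebra. Assuming that $p_F$ denotes the Fujita exponent $p_F = 1 + 2/n$, the equality of the two exponents of $\varepsilon$ can be confirmed symbolically (a sanity check, not part of the argument):

```python
import sympy as sp

n, p = sp.symbols('n p', positive=True)
pF = 1 + 2/n   # assumed: p_F is the Fujita exponent

# exponent from eps^{-2(p-1)/(n(p_F-p))} <= C e^{S_1}
expo1 = 2*(p - 1) / (n*(pF - p))
# exponent appearing in the final estimate
expo2 = 1 / (1/(p - 1) - n/2)

assert sp.simplify(expo1 - expo2) == 0
```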
This and the definition of $B(t)$ lead to the
desired estimate, and we finish the proof.
\end{proof}
\begin{proof}[Proof of Proposition \ref{prop_lowcr}]
In the same way as in the derivation of \eqref{estm},
noting $p=p_F$, we have
\begin{align}
\label{estmcr}
M(s) \le
C_0^{\prime} \varepsilon^2 I_0
+ C_0^{\prime}
(s-s_0) \left( M(s)^p + M(s)^{\frac{p+1}{2}} \right)
\end{align}
for $s_0 \le s < S_0(\varepsilon)$ and some $C_0^{\prime} > 0$.
Let
$S_1 = S_1(\varepsilon) \ge s_0$ be the first time
at which $M$ attains the value
\[
M(S_1) = 2 C_0^{\prime} \varepsilon^2 I_0.
\]
Moreover,
we take
$\varepsilon_2 \le \varepsilon_1$ smaller if necessary so that
$2 C_0^{\prime} \varepsilon^2 I_0 \le 1$
holds for $\varepsilon \in (0,\varepsilon_2]$.
Then, it is obvious that
$M(S_1)^p \le M(S_1)^{\frac{p+1}{2}}$ for $\varepsilon \in (0,\varepsilon_2]$
and hence, we eventually obtain
\[
2 C_0^{\prime} \varepsilon^2 I_0
\le C_0^{\prime} \varepsilon^2 I_0
+ 2C_0^{\prime} (S_1 - s_0) \left( 2 C_0^{\prime} \varepsilon^2 I_0 \right)^{\frac{p+1}{2}}.
\]
This implies
\[
\exp \left( C \varepsilon^{-(p-1)} + s_0 \right) \le B(T_0) + 1.
\]
Therefore, by the definition of $B(t)$,
we have the desired estimate.
\end{proof}
\section*{Appendix}
Here, we prove existence of a unique strong solution
in the sense of Definition \ref{def_sol}
for the Cauchy problem \eqref{eq:1}.
\begin{Proposition}\label{prop_le}
Let
$p$
satisfy
$1<p <\infty \ (n=1,2)$, $1<p \le n/(n-2) \ (n\ge 3)$.
We assume that
$b(t)$ is a smooth nonnegative function.
Let
$(u_0, u_1) \in H^1(\mathbb{R}^n) \times L^2(\mathbb{R}^n)$.
Then, there exist a constant $T>0$ and a unique strong solution $u$
of the Cauchy problem \eqref{eq:1} on $[0,T)$.
Moreover, we have the blow-up alternative, that is,
for the lifespan $T_0$ defined in Definition \ref{def_ls},
if $T_0 < \infty$, then
\begin{align*}%
\lim_{t \to T_0-} \| (u, u_t)(t) \|_{H^1\times L^2} = \infty.
\end{align*}%
\end{Proposition}
\begin{proof}
Let $(u_0, u_1) \in H^1(\mathbb{R}^n) \times L^2(\mathbb{R}^n)$
and let
$F \in L^1_{{\rm loc}} ([0,\infty); L^2(\mathbb{R}^n))$.
First, we recall that the strong solution of the linear problem
\begin{align}%
\label{lin_dw}
\left\{ \begin{array}{ll}
\square u + b(t) u_t = F,&t>0, x \in \mathbb{R}^n,\\
u(0) = u_0, \ u_t(0) = u_1,& x\in \mathbb{R}^n
\end{array}\right.
\end{align}%
can be easily obtained via the Fourier transform and the Duhamel principle,
and the corresponding solution belongs to
$C^1([0,T) ; L^2(\mathbb{R}^n)) \cap C ([0,T) ; H^1(\mathbb{R}^n))$.
Moreover, by an approximation argument, we see that the solution $u$
satisfies the energy identity
\begin{align*}%
&\frac12 \| (\nabla_xu, u_t)(t) \|_{L^2}^2
+ \int_0^t \int_{\mathbb{R}^n} b(\tau) u_t(\tau, x)^2\,dxd\tau \\
&\quad = \frac12 \| (\nabla_x u_0, u_1) \|_{L^2}^2
+ \int_0^t \int_{\mathbb{R}^n} F(\tau, x) u_t(\tau,x) \,dxd\tau
\end{align*}%
for $t > 0$.
Combining this with a Gronwall-type inequality (see \cite[Lemma 9.12]{Wa_thesis})
and the assumption $b(t) \ge 0$,
we further obtain
\begin{align}%
\label{en_es}
\| (\nabla_xu, u_t)(t) \|_{L^2}
\le \| (\nabla_x u_0, u_1) \|_{L^2}
+ \int_0^t \| F(\tau) \|_{L^2} \,d\tau
\end{align}%
for $t > 0$.
To construct a solution of the Cauchy problem \eqref{eq:1},
we employ the contraction mapping principle.
To this end, we take a constant $R>0$ such that
$ \| (u_0, u_1) \|_{H^1 \times L^2} \le R$
and define
\begin{align*}%
&K(T,R) \\
&:=
\left\{ v \in C^1([0,T) ; L^2(\mathbb{R}^n)) \cap C ([0,T) ; H^1(\mathbb{R}^n))
\,;\, \sup_{t\in [0,T)}\| (v, v_t )(t) \|_{H^1 \times L^2} \le 3R \right\}.
\end{align*}%
We define the metric
\begin{align*}%
d(u,v) := \sup_{t\in [0,T)} \| (u-v, u_t - v_t)(t) \|_{H^1\times L^2}
\end{align*}%
for $u, v \in K(T,R)$.
Then, $K(T,R)$ is a complete metric space with the metric $d(\cdot, \cdot)$.
Let $1<p <\infty\ (n=1,2)$, $1<p \le n/(n-2)\ (n\ge 3)$.
Let
$u^{(0)}$
be the solution of the linear problem \eqref{lin_dw} with $F=0$.
Then, for $j =1, 2, \ldots$, we successively define
$u^{(j)}$
as the solution of the linear problem \eqref{lin_dw} with $F = |u^{(j-1)}|^p$.
By \eqref{en_es}, the first approximation
$u^{(0)}$ clearly belongs to $K(T,R)$.
Furthermore, by the Sobolev embedding theorem,
we see that if $u^{(j-1)} \in K(T,R)$, then
\begin{align*}%
\| (\nabla u^{(j)}, u^{(j)}_t)(t) \|_{L^2}
&\le R + \int_0^t \| u^{(j-1)}(\tau) \|_{L^{2p}}^p\,d\tau \\
&\le R + C \int_0^t \| u^{(j-1)}(\tau) \|_{H^1}^p\, d\tau \\
&\le R + C T R^p.
\end{align*}%
We also estimate
\begin{align*}%
\| u^{(j)} (t) \|_{L^2} &\le \| u_0 \|_{L^2} + \int_0^t \| u^{(j)}_t (\tau) \|_{L^2} \,d\tau \\
&\le R + T(R + C T R^p).
\end{align*}%
Therefore, taking $T>0$ sufficiently small so that
$C T R^p + T(R + C T R^p) \le R$ holds, we obtain
$u^{(j)} \in K(T,R)$.
Thus, it follows by mathematical induction that
$u^{(j)} \in K(T,R)$ for all $j \ge 0$.
Next, making use of
\begin{align*}%
| |u|^p - |v|^p | \le C (|u| + |v|)^{p-1} |u-v|,
\end{align*}%
we estimate
\begin{align*}%
&\| (\nabla u^{(j)}(t) - \nabla u^{(j-1)}(t), u^{(j)}_t(t) - u^{(j-1)}_t(t)) \|_{L^2} \\
&\quad \le C \int_0^t
\| (|u^{(j-1)}(\tau)| + |u^{(j-2)}(\tau)| )^{p-1}
|u^{(j-1)}(\tau) - u^{(j-2)}(\tau)| \|_{L^2}\, d\tau\\
&\quad \le C \int_0^t \| (|u^{(j-1)}(\tau)| + |u^{(j-2)}(\tau)| ) \|_{L^{2p}}^{p-1}
\| u^{(j-1)}(\tau) - u^{(j-2)}(\tau) \|_{L^{2p}}\, d\tau \\
&\quad \le C \int_0^t
( \| u^{(j-1)}(\tau) \|_{H^1} + \| u^{(j-2)}(\tau) \|_{H^1} )^{p-1}
\| u^{(j-1)}(\tau) - u^{(j-2)}(\tau) \|_{H^1}\,d\tau \\
&\quad \le C T R^{p-1} d(u^{(j-1)}, u^{(j-2)}).
\end{align*}%
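The elementary inequality used in the estimate above holds, for instance, with $C = p$, by the mean value theorem applied to $x \mapsto x^p$ on $[0, |u|+|v|]$. A randomized numerical spot-check (the exponent and the sample distribution below are assumptions for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)
p = 2.5                                   # a sample exponent p > 1
u = 3.0 * rng.standard_normal(100000)
v = 3.0 * rng.standard_normal(100000)

lhs = np.abs(np.abs(u)**p - np.abs(v)**p)
rhs = p * (np.abs(u) + np.abs(v))**(p - 1) * np.abs(u - v)
# | |u|^p - |v|^p | <= p (|u|+|v|)^{p-1} |u-v|, up to floating-point slack
assert np.all(lhs <= rhs * (1 + 1e-12) + 1e-12)
```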
We also have
\begin{align*}%
\| u^{(j)}(t) - u^{(j-1)}(t) \|_{L^2}
&\le \int_0^t \| u^{(j)}_t (\tau) - u^{(j-1)}_t(\tau) \|_{L^2}\,d\tau \\
&\le CT^2 R^{p-1} d(u^{(j-1)}, u^{(j-2)}).
\end{align*}%
Therefore, taking $T >0$ further small so that
$CTR^{p-1} + CT^2 R^{p-1} \le \kappa$
with some $\kappa \in (0,1)$,
we conclude
\begin{align*}%
d(u^{(j)}, u^{(j-1)}) \le \kappa d(u^{(j-1)}, u^{(j-2)}).
\end{align*}%
Hence, $\{ u^{(j)} \}_{j \ge 0}$ is a Cauchy sequence in $K(T,R)$
and we find the limit $u \in K(T,R)$.
Since each $u^{(j)}$ satisfies the initial conditions
$u^{(j)}(0) = u_0$ and $u^{(j)}_t (0) = u_1$, so does $u$.
Moreover, taking the limit $j \to \infty$ in the equation
\begin{align*}%
u^{(j)}_{tt} = \Delta u^{(j)} - b(t) u^{(j)}_t + |u^{(j)}|^p,
\end{align*}%
which is valid in $C([0,T) ; H^{-1}(\mathbb{R}^n))$,
we see that
$u \in C^2([0,T) ; H^{-1}(\mathbb{R}^n))$
holds and
$u$ is a strong solution of \eqref{eq:1}.
Indeed, let $t_0 \in [0,T)$ and take a small neighborhood $\omega$ of $t_0$ in $[0,T)$.
Then, we have
\begin{align*}%
\Delta u^{(j)} \to \Delta u,\quad
b(t) u^{(j)}_t \to b(t) u_t,\quad
|u^{(j)}|^p \to |u|^p
\end{align*}%
in $C(\omega ; H^{-1}(\mathbb{R}^n))$ as $j \to \infty$,
and this convergence is uniform in $\omega$.
Thus, there exists $w \in C(\omega ; H^{-1}(\mathbb{R}^n))$ such that
$u^{(j)}_{tt} \to w$ in $C(\omega ; H^{-1}(\mathbb{R}^n))$ as $j \to \infty$,
and this convergence is also uniform in $\omega$.
Thus, we compute
\begin{align*}%
&\lim_{h\to 0} \left\| \frac{1}{h}( u_t(t_0 + h) - u_t(t_0)) - w(t_0) \right\|_{H^{-1}} \\
&\quad = \lim_{h\to 0} \lim_{j \to \infty}
\left\| \frac{1}{h}( u^{(j)}_t(t_0 + h) - u^{(j)}_t(t_0)) - u^{(j)}_{tt} (t_0) \right\|_{H^{-1}} \\
&\quad = \lim_{j \to \infty} \lim_{h\to 0}
\left\| \frac{1}{h}( u^{(j)}_t(t_0 + h) - u^{(j)}_t(t_0)) - u^{(j)}_{tt} (t_0) \right\|_{H^{-1}} \\
&\quad = 0,
\end{align*}%
which leads to $u_{tt} = w$ and hence,
$u \in C^2([0,T) ; H^{-1}(\mathbb{R}^n))$.
In order to show the uniqueness, let
$u, v$ be strong solutions of \eqref{eq:1} on $[0,T)$ with the initial data $(u_0, u_1)$.
Let
$M = \sup_{t\in [0,T)}
( \| (u(t), u_t(t)) \|_{H^1\times L^2} + \| (v(t), v_t(t)) \|_{H^1\times L^2} )$.
Then, an argument similar to the above implies
\begin{align*}%
&\| (u(t) - v(t), u_t(t) - v_t(t)) \|_{H^1\times L^2} \\
&\quad \le (1+C M^{p-1})
\int_0^t \| (u(\tau ) - v(\tau), u_t(\tau) - v_t(\tau)) \|_{H^1\times L^2} \, d\tau.
\end{align*}%
This and the Gronwall inequality give $u = v$.
Finally, we prove the blow-up alternative.
Suppose that the lifespan $T_0$ is finite and that
\begin{align*}%
\lim_{t \to T_0-} \| (u, u_t)(t) \|_{H^1\times L^2} < \infty.
\end{align*}%
Then, there exists a constant $R'>0$ such that
$\| (u, u_t)(t) \|_{H^1\times L^2} \le R'$
for any $t \in [0,T_0)$.
Let $t_0 \in [0,T_0)$ be arbitrarily fixed.
In the same way as before, we see that
there exists $T'>0$ depending only on $R'$ such that
we can construct a unique strong solution on $[t_0, t_0+T')$.
However, if $t_0 \in (T_0 - T', T_0)$, this contradicts
the definition of the lifespan.
\end{proof}
\section*{Acknowledgments}
The authors are deeply grateful to Professor Mitsuru Sugimoto
for his helpful comments.
The first, second and third authors were
partly supported by the Japan Society for the Promotion of Science,
Grant-in-Aid for JSPS Fellows No.
16J30008,
14J01884
and
15J01600,
respectively.
The delay time distribution (DTD) of binary neutron stars (BNS) is currently poorly constrained, but as we have recently shown it can be determined using both the mass distribution of BNS merger host galaxies in the local universe (\citealt{sb19}; hereafter, Paper I) and the redshift distribution of BNS mergers as probed by third-generation gravitational wave (GW) detectors (\citealt{sb+19}; hereafter, Paper II). The former approach takes advantage of galaxy scaling relations that map halo/stellar mass into star formation history (SFH), which when convolved with the DTD lead to a predicted BNS merger host galaxy mass function (this approach was previously proposed and used in the context of short GRBs: \citealt{Zheng:2007hl,Kelley2010,lb10,fbc+13,Behroozi:2014bp}). The host galaxies of BNS mergers can be identified through the detection of electromagnetic (EM) counterparts, but this is likely only achievable within a few hundred Mpc. We found that for a power law DTD characterized by index $\Gamma$ and minimum delay $t_{\rm min}$, $\mathcal{O}(10^3)$ host galaxies are required to reasonably constrain the DTD.
The latter approach instead relies on a redshift mapping of the BNS merger rate, which requires GW detections to $z\sim{\rm few}$, achievable with the next-generation Einstein Telescope (ET) and Cosmic Explorer (CE). In this approach, it is unlikely that EM counterparts can be detected, but the individual redshift uncertainties from the GW data ($\delta z/z\approx 0.1z$) can be overcome through a large number of anticipated detections, $\sim 10^5$ yr$^{-1}$. We found that with about a year of CE+ET data the DTD parameters, as well as the mass efficiency of BNS production, can be determined to about 10\%.
Here we continue our investigation of the DTD, with an alternative approach to the use of BNS merger host galaxies at $z\approx 0$. Namely, unlike in Paper I, which used scaling relations between mass and SFH, we explore the use of detailed reconstructed SFHs for the individual host galaxies. In \S\ref{sec:gama} we present the galaxy sample used for this study (the Galaxy and Mass Assembly survey) and its SFH reconstruction. In \S\ref{sec:method} we present the method for extracting the DTD from the galaxy SFHs, as well as our approach to evaluating the number of host galaxies required. In \S\ref{sec:results} we discuss our findings in terms of the sample size needed, and we conclude in \S\ref{sec:conc}. We adopt the Planck 2015 cosmological parameters \citep{Collaboration:2016bk}: $\Omega_M=0.308$, $\Omega_\Lambda=0.692$, $\Omega_b=0.048$, and $H_0=67.8$ km s$^{-1}$ Mpc$^{-1}$.
\section{Galaxy Data}
\label{sec:gama}
\begin{figure*}
\centering
\includegraphics[width=0.32\linewidth]{SFR_history_cont_ind_530_logm_ind_522.pdf}
\includegraphics[width=0.32\linewidth]{continuity_ndot_galaxy_index_530_trapz.pdf}
\includegraphics[width=0.32\linewidth]{logm_ndot_galaxy_index_522_trapz.pdf}
\includegraphics[width=0.32\linewidth]{SFR_history_cont_ind_585_logm_ind_577.pdf}
\includegraphics[width=0.32\linewidth]{continuity_ndot_galaxy_index_585_trapz.pdf}
\includegraphics[width=0.32\linewidth]{LogM_ndot_galaxy_index_577_trapz.pdf}
\caption{The BNS merger rate for two galaxies from the GAMA survey with distinct SFHs ({\it Left}) that peak at early ({\it Top}) and later ({\it Bottom}) cosmic time (based on Equation~\ref{eqn:ndot}). We show the SFH based on the two types of priors (yellow: logM; grey: continuity), with the solid line indicating the median SFH and the shaded regions marking the range of 16th to 84th percentile. {\it Middle} and {\it Right} are the merger rate PDFs from convolution of the 9 DTDs with the posterior distribution of SFH of the galaxy modeled based on continuity and LogM priors respectively. The vertical bars indicate the median values of the merger rate PDFs for each DTD, shown with the same line style and color.}
\label{f:priors}
\end{figure*}
We use SFHs inferred from galaxy photometry in \citet{leja19}. The photometry is measured with the LAMBDAR code \citep{wright16} from DR3 of the Galaxy and Mass Assembly (GAMA) survey \citep{driver11,baldry18}, and includes 21 bands ranging from the far-UV ({\it GALEX}) to the far-IR ({\it Herschel}). The galaxies have spectroscopic redshifts. All galaxies at $0.05<z<0.08$ with stellar masses $M_*>10^9$ M$_{\odot}$ as determined in \citet{taylor11} are modeled, resulting in a mass-complete sample of 6134 galaxies.
The photometry was fit using the Prospector-$\alpha$ model within the \texttt{Prospector} inference machine \citep{leja17,prospector17}. This model includes a seven-parameter non-parametric star formation history, as well as flexible dust attenuation model, far-infrared dust re-emission via energy balance, and nebular emission self-consistently powered by the stellar ionizing continuum.
A key source of uncertainty in SFH recovery is the choice of prior \citep{carnall19,leja19}. This sensitivity occurs because the SEDs of stellar populations change slowly as a function of time; specifically, SEDs evolve roughly evenly in each logarithmic time step \citep{ocvirk06}. As a result, the SFR inferred in adjacent time bins is typically highly degenerate. Fortunately, key outputs of galaxy SED-fitting such as the mass-to-light ratio and, to a lesser extent, the recent SFR, are calculated using moments of the SFH, which are largely insensitive to this degeneracy \citep{bell03,leja19}. However, the DTD does not interact with these conserved quantities but instead couples directly to the SFH, and is thus sensitive to this degeneracy. Accordingly, we perform the analysis here using two different priors that assume opposite behaviors in this degeneracy: one that favors bursty SFHs (hereafter, logM) and one that favors smooth SFHs (hereafter, continuity) \citep{leja19}. Two representative examples for galaxies with opposite SFHs are shown in Figure~\ref{f:priors}, for both the logM and continuity priors. These examples highlight both the range of behavior and the associated uncertainties (statistical and systematic) in the reconstructed SFH.
The two SFH priors adopted in this work encapsulate the plausible range of choices \citep{leja19}.
High-resolution optical/near-IR spectroscopy can yield more precise SFHs than photometry alone,
although in most cases the improvement is expected to be modest \citep{2012MNRAS.421.2002P}.
\section{DTD Determination Method}
\label{sec:method}
The expected BNS merger rate at $z=0$ for galaxy $i$ with star formation history $\psi_i(z)$ is given by:
\begin{align}
\dot{n}_i=&\int_{z_b=10}^{z_b=0}
\lambda\,\frac{dP_m}{dt}(t-t_b-t_{\rm min})\,\psi_i(z_b)\,\frac{dt}{dz}(z_b)\,dz_b,
\label{eqn:ndot}
\end{align}
where $dt/dz = -[(1+z) E(z) H_0]^{-1}$ and $E(z)=\sqrt{{\Omega}_{m,0}(1+z)^3+{\Omega}_{k,0}(1+z)^2+{\Omega}_{\Lambda}(z)}$; $t$ is the cosmic time at $z=0$ and $t_b$ is the cosmic time corresponding to $z_b$; $\lambda$ is the BNS production mass efficiency, assumed to be a fixed value of $10^{-5}$ M$_{\odot}^{-1}$, independent of redshift or environment; and $dP_m/dt$ is the DTD, likewise assumed to be independent of environment.
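As a concrete illustration, the cosmological factors in Equation~\ref{eqn:ndot} can be evaluated numerically. The sketch below assumes flat-$\Lambda$CDM parameters ($h=0.7$, $\Omega_{m,0}=0.3$), which are our choice since the text does not state the adopted cosmology; the function names are also ours:

```python
import numpy as np

# Assumed flat-LambdaCDM parameters (not specified in the text):
# h = 0.7, Omega_m0 = 0.3, so Omega_k0 = 0 and Omega_L0 = 0.7.
OM0, OK0, OL0 = 0.3, 0.0, 0.7
HUBBLE_TIME_GYR = 9.78 / 0.7  # 1/H0 in Gyr for h = 0.7

def E(z):
    """Dimensionless Hubble parameter E(z) as defined in the text."""
    return np.sqrt(OM0 * (1 + z) ** 2 * (1 + z) + OK0 * (1 + z) ** 2 + OL0)

def abs_dt_dz(z):
    """|dt/dz| in Gyr, from dt/dz = -[(1+z) E(z) H0]^{-1}."""
    return HUBBLE_TIME_GYR / ((1 + z) * E(z))

def lookback_time(z, n=4096):
    """Lookback time to redshift z in Gyr (trapezoidal quadrature)."""
    zs = np.linspace(0.0, z, n)
    y = abs_dt_dz(zs)
    return float(np.sum((y[1:] + y[:-1]) * np.diff(zs)) / 2.0)
```

For these parameters, the lookback time to $z=0.5$ evaluates to roughly 5 Gyr, consistent with standard cosmology calculators.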
As in Papers I and II, we parameterize the DTD to follow a power law\footnote{In this work (as well as in Papers I and II) we have focused on a power law DTD, which is well motivated.
However, it is possible that the true DTD might deviate from this power law form.
This could potentially be investigated by comparing the results of the analysis proposed here
(using host galaxies at $z\sim 0$) and the analysis in Paper II (using the redshift distribution of BNS mergers from third-generation detectors).}
with index $\Gamma$, minimum delay time $t_{\rm min}$, and a fixed maximum delay time $t_{\rm max}=10$ Gyr; our results are not sensitive to larger values of $t_{\rm max}$. This formulation of Equation~\ref{eqn:ndot} is identical to that used in Paper I, except that there we took $\psi$ to be a direct function of halo mass, $M_h$.
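A minimal numerical sketch of the power-law DTD and the discretized rate integral follows. The unit normalization of the DTD between $t_{\rm min}$ and $t_{\rm max}$, the function names, and the constant-SFR test input are our own assumptions (times in Gyr):

```python
import numpy as np

LAMBDA = 1e-5  # BNS production efficiency per solar mass, as in the text
T_MAX = 10.0   # fixed maximum delay time, Gyr

def dtd(tau, gamma, t_min):
    """Power-law DTD with index gamma, normalized (by assumption) to unit
    integral between t_min and T_MAX; zero outside that range."""
    tau = np.asarray(tau, dtype=float)
    if gamma == -1.0:
        norm = np.log(T_MAX / t_min)
    else:
        norm = (T_MAX ** (gamma + 1) - t_min ** (gamma + 1)) / (gamma + 1)
    out = np.zeros_like(tau)
    m = (tau >= t_min) & (tau <= T_MAX)
    out[m] = tau[m] ** gamma / norm
    return out

def merger_rate(t_lb, sfr, gamma, t_min):
    """Discretized Eq. (1): z = 0 merger rate [1/yr] for a galaxy whose SFR
    [Msun/yr] is tabulated on a grid of lookback times t_lb [Gyr]; at z = 0
    the delay time equals the lookback time of formation."""
    integrand = LAMBDA * sfr * dtd(t_lb, gamma, t_min)
    return float(np.sum((integrand[1:] + integrand[:-1]) * np.diff(t_lb)) / 2)
```

With a constant SFR of 1 M$_{\odot}$ yr$^{-1}$, the rate reduces to $\lambda$ times the DTD normalization, a useful sanity check on the discretization.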
To generate the simulated data, we assume a fixed number of $N_{\rm gal} = 1000$ galaxies that can serve as possible hosts for BNS merger events. These are selected as a representative subset of the GAMA sample, preserving the mass distribution of the full sample. For a given DTD ($\Gamma$ and $t_{\rm min}$) and SFH ($\psi_i(z)$) we can then estimate the mean merger rate, $\dot{n}_i$, for each galaxy using Equation~\ref{eqn:ndot}. This gives each galaxy a different probability of hosting a BNS merger event under each DTD (see Figure~\ref{f:priors}). As in Papers I and II, we use a set of 9 representative DTDs, with $\Gamma=[-1.5,-1,-0.5]$ and $t_{\rm min}=[10,100,1000]$ Myr.
The number of BNS merger events, $N_i=\{0, 1, \dots\}$, {\it observed} for any given galaxy over a given period of time, $\Delta t$, follows a Poisson distribution based on the merger rate:
\begin{equation}
P(N_i|\Gamma, t_{\rm min}, \psi_i(z)) = {\rm Poisson}(\dot{n}_i \Delta t)
= \frac{(\dot{n}_i \Delta t)^{N_i} e^{-\dot{n}_i \Delta t} }{N_i!}
\label{eqn:poisson}
\end{equation}
We can then simulate $N_i$ directly by drawing it from the Poisson distribution based on the associated rate:
\begin{equation}
N_i \sim {\rm Poisson}(\dot{n}_i \Delta t)
\end{equation}
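This sampling step can be sketched in a few lines (the function names are ours). The rejection loop follows the text directly; as a design note, independent Poisson counts conditioned on their sum are exactly multinomial with probabilities proportional to the rates, which avoids a potentially slow rejection loop when the target $N_{\rm BNS}$ is far from the expected total:

```python
import numpy as np

rng = np.random.default_rng(42)

def draw_counts_rejection(rates_per_yr, dt_yr, n_bns):
    """Draw N_i ~ Poisson(n_i * dt) for every galaxy and repeat until the
    total equals n_bns, as described in the text."""
    mu = np.asarray(rates_per_yr) * dt_yr
    while True:
        counts = rng.poisson(mu)
        if counts.sum() == n_bns:
            return counts

def draw_counts_multinomial(rates_per_yr, n_bns):
    """Equivalent shortcut: independent Poisson counts conditioned on
    their sum are multinomial with p_i proportional to the rates."""
    p = np.asarray(rates_per_yr, dtype=float)
    return rng.multinomial(n_bns, p / p.sum())
```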
Once we have simulated a set of BNS merger events, we determine the constraining power they have on the underlying DTD. Assuming the BNS merger events in each galaxy are independent of each other and the SFHs are known, the corresponding likelihood is
\begin{equation}
P(\{N_i\}|\Gamma, t_{\rm min}, \{ \psi_i(z) \})
= \prod_{i=1}^{N_{\rm gal}} P(N_i|\Gamma, t_{\rm min}, \psi_i(z)).
\end{equation}
We further need to marginalize over the uncertainty on the SFH of each galaxy:
\begin{equation}
P(N_i|\Gamma, t_{\rm min}) = \int P(N_i|\Gamma, t_{\rm min}, \psi_i(z)) P(\psi_i(z)) d[\psi_i(z)].
\end{equation}
We can approximate this integral by averaging over $N_{\rm samp}$ of the samples from the SFH posteriors for each galaxy:
\begin{equation}
P(N_i|\Gamma, t_{\rm min}) \approx \frac{1}{N_{\rm samp}} \sum_{j=1}^{N_{\rm samp}} \frac{(\dot{n}_{i,j} \Delta t)^{N_i} e^{-\dot{n}_{i,j} \Delta t}}{N_i!}
\end{equation}
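A possible numpy implementation of this sample-averaged likelihood, working in log space to avoid underflow (the function and variable names are ours, and strictly positive rate samples are assumed):

```python
import numpy as np
from math import lgamma

_lgamma = np.vectorize(lgamma)

def log_poisson(n, mu):
    """Elementwise log of the Poisson pmf (assumes mu > 0)."""
    return n * np.log(mu) - mu - _lgamma(n + 1.0)

def marginal_loglike(counts, rate_samples, dt_yr):
    """log P({N_i} | Gamma, t_min): average the Poisson likelihood of each
    galaxy over its N_samp SFH posterior rate samples, then sum the logs.
    rate_samples has shape (N_gal, N_samp), in events per year."""
    mu = np.asarray(rate_samples, dtype=float) * dt_yr
    n = np.asarray(counts, dtype=float)[:, None]
    lp = log_poisson(n, mu)                           # (N_gal, N_samp)
    # mean over SFH samples, done in log space for numerical stability
    per_gal = np.logaddexp.reduce(lp, axis=1) - np.log(lp.shape[1])
    return float(per_gal.sum())
```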
The resulting SFH-marginalized posterior of the DTD is therefore given by:
\begin{align}
P(\Gamma, t_{\rm min} | \{ N_i \}) &\propto P(\{ N_i \} | \Gamma, t_{\rm min}) P(\Gamma, t_{\rm min}) \\
&\propto \prod_{i=1}^{N_{\rm gal}} \sum_{j=1}^{N_{\rm samp}} \frac{(\dot{n}_{i,j} \Delta t)^{N_i} e^{-\dot{n}_{i,j} \Delta t}}{N_i!}
\end{align}
where we have assumed that the prior over $\Gamma$ and $t_{\rm min}$ is uniform such that $P(\Gamma, t_{\rm min})$ is a constant.
We are interested in marginalizing over any particular set of events $\{ N_i \}$ associated with the $N_{\rm gal}$ possible host galaxies to forecast possible constraints on the DTD as a function of the \textit{total number} of BNS merger events, $N_{\rm BNS}$. This gives:
\begin{align}
&P(\Gamma, t_{\rm min} | N_{\rm BNS}) \nonumber \\
&= \int P(\Gamma, t_{\rm min} | \{ N_i \}, N_{\rm BNS}) P(\{ N_i \} | N_{\rm BNS}) d(\{ N_i\})
\end{align}
We can approximate this integral using $N_{\rm repeat}$ realizations of the observed BNS merger event counts, conditioned on the total number of events $\sum_i N_i$ being equal to $N_{\rm BNS}$. Combining this with our previous expression then gives:
\begin{align}
P(\Gamma, t_{\rm min} | N_{\rm BNS}) &\approx \frac{1}{N_{\rm repeat}} \sum_{k=1}^{N_{\rm repeat}} P(\Gamma, t_{\rm min}|\{N_i\}_k) \\
&\propto \sum_{k=1}^{N_{\rm repeat}} \prod_{i=1}^{N_{\rm gal}} \sum_{j=1}^{N_{\rm samp}} \frac{(\dot{n}_{i,j}\Delta t)^{N_{i,k}} e^{-\dot{n}_{i,j} \Delta t}}{N_{i,k}!}
\end{align}
where again $\sum_{i=1}^{N_{\rm gal}} N_{i,k} = N_{\rm BNS}$ for each realization.
Since the rate per galaxy is generally very small, $\dot{n}_i \ll 1$, we expect the uncertainty to be dominated by variation in the observed counts over the potential $N_{\rm gal}$ host galaxies rather than the uncertainties in each galaxy's SFH. As such, we opt to use the same SFHs used to generate the data to evaluate the posterior rather than trying to marginalize over them directly:
\begin{equation}
P(\Gamma, t_{\rm min} | N_{\rm BNS}) \sim \sum_{j=1}^{N_{\rm repeat}} \prod_{i=1}^{N_{\rm gal}} \frac{(\dot{n}_{i,j}\Delta t)^{N_{i,j}} e^{-\dot{n}_{i,j} \Delta t}}{N_{i,j}!}
\label{eqn:posterior1}
\end{equation}
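In practice, with $N_{\rm gal}=1000$ galaxies the product of Poisson terms underflows in linear arithmetic, so each realization is best evaluated as a log-likelihood and the average over realizations taken with a log-sum-exp; a minimal sketch (hypothetical function name):

```python
import numpy as np

def average_over_realizations(loglikes):
    """Log of the mean likelihood over N_repeat realizations, shifted by
    the maximum before exponentiating for numerical stability."""
    ll = np.asarray(loglikes, dtype=float)
    m = ll.max()
    return float(m + np.log(np.mean(np.exp(ll - m))))
```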
\begin{figure*}
\centering
\includegraphics[width=0.66\columnwidth]{Contourf_Poisson_N_obs30_tmin1_gamma_-15_Nsample_20_correct_90_percentile.pdf}
\includegraphics[width=0.66\columnwidth]{Contourf_Poisson_N_obs100_tmin1_gamma_-15_Nsample_20_correct_90_percentile.pdf}
\includegraphics[width=0.66\columnwidth]{Contourf_Poisson_N_obs300_tmin1_gamma_-15_Nsample_20_correct_90_percentile.pdf}
\includegraphics[width=0.66\columnwidth]{Contourf_Poisson_N_obs30_tmin2_gamma_-1_Nsample_20_correct_90_percentile.pdf}
\includegraphics[width=0.66\columnwidth]{Contourf_Poisson_N_obs100_tmin2_gamma_-1_Nsample_20_correct_90_percentile.pdf}
\includegraphics[width=0.66\columnwidth]{Contourf_Poisson_N_obs300_tmin2_gamma_-1_Nsample_20_correct_90_percentile.pdf}
\includegraphics[width=0.66\columnwidth]{Contourf_Poisson_N_obs30_tmin3_gamma_-05_Nsample_20_correct_90_percentile.pdf}
\includegraphics[width=0.66\columnwidth]{Contourf_Poisson_N_obs100_tmin3_gamma_-05_Nsample_20_correct_90_percentile.pdf}
\includegraphics[width=0.66\columnwidth]{Contourf_Poisson_N_obs300_tmin3_gamma_-05_Nsample_20_correct_90_percentile.pdf}
\caption{The constraint achieved on the parameters of the DTD as a function of the number of host galaxies of BNS merger events ({\it Left}: 30; {\it Middle}: 100; {\it Right}: 300) and for the two choices of SFH priors (red: logM; blue: continuity). In each row the input model is marked with a yellow circle. {\it Top:} An input DTD with $\Gamma=-3/2$ and $t_{\rm min}=10$ Myr. {\it Middle:} An input DTD with $\Gamma=-1$ and $t_{\rm min}=100$ Myr. {\it Bottom:} An input DTD with $\Gamma=-1/2$ and $t_{\rm min}=1000$ Myr. The contours show the 90\% confidence regions.}
\label{f:figure_2}
\end{figure*}
It is critical to note that the posterior above is taken over the observed counts of \textit{all} potential galaxy hosts, including those with $N_i = 0$ for which no BNS merger events have been detected. That is because these ``non-detections'' in aggregate contain a non-negligible amount of information in a regime where $\dot{n}_i\ll 1$ and merger events are rare. To illustrate this, we also compare the ``complete'' posterior distribution $P(\Gamma, t_{\rm min} | N_{\rm BNS})$ derived above with the biased posterior distribution, $\tilde{P}(\Gamma, t_{\rm min} | N_{\rm BNS})$, ignoring the non-detections:
\begin{align}
\tilde{P}(\Gamma, t_{\rm min} | N_{\rm BNS}) &\sim \sum_{j=1}^{N_{\rm repeat}} \prod_{i=1}^{N_{\rm gal}} \mathcal{I}(N_{i,j} > 0) \frac{(\dot{n}_{i,j} \Delta t)^{N_{i,j}} e^{-\dot{n}_{i,j} \Delta t}}{N_{i,j}!},
\label{eqn:posterior2}
\end{align}
where $\mathcal{I}(N_{i,j} > 0)$ is the indicator function that evaluates to $1$ if the condition $N_{i,j} > 0$ is true and $0$ otherwise. In general, we expect that ignoring non-detections will bias the inferred DTD, which we discuss in the next section.
We compute the above posterior using $N_{\rm gal} = 1000$ galaxies and $N_{\rm repeat} = 100$ realizations for the 9 different assumed DTDs, and interpolate between their associated $\Gamma$ and $t_{\rm min}$ parameters to obtain the probability at intermediate values of $\Gamma$ and $t_{\rm min}$.
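The interpolation step can be done with simple bilinear interpolation on the $3\times3$ grid in $(\Gamma, \log_{10} t_{\rm min})$; a self-contained sketch (the grid values in the test are placeholders, and a library routine such as scipy's RegularGridInterpolator would serve equally well):

```python
import numpy as np

GAMMAS = np.array([-1.5, -1.0, -0.5])
LOG_TMINS = np.array([1.0, 2.0, 3.0])  # log10(t_min / Myr)

def interp_posterior(grid, gamma, log_tmin):
    """Bilinear interpolation of a 3x3 table of posterior values, with
    grid[i, j] evaluated at (GAMMAS[i], LOG_TMINS[j])."""
    i = int(np.clip(np.searchsorted(GAMMAS, gamma) - 1, 0, 1))
    j = int(np.clip(np.searchsorted(LOG_TMINS, log_tmin) - 1, 0, 1))
    x = (gamma - GAMMAS[i]) / (GAMMAS[i + 1] - GAMMAS[i])
    y = (log_tmin - LOG_TMINS[j]) / (LOG_TMINS[j + 1] - LOG_TMINS[j])
    return ((1 - x) * (1 - y) * grid[i, j] + x * (1 - y) * grid[i + 1, j]
            + (1 - x) * y * grid[i, j + 1] + x * y * grid[i + 1, j + 1])
```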
To summarize, our inference procedure is as follows:
\begin{enumerate}
\item We select a star formation history, $\psi_i(z)$, for every galaxy $i$ from its SFH posterior distribution and compute the corresponding BNS merger rate, $\dot{n}_i$, using Equation~\ref{eqn:ndot}.
\item We then sample the number of BNS merger events, $N_i$, from the corresponding Poisson distribution based on Equation~\ref{eqn:poisson} for a given timescale $\Delta t$. We repeat this process until the total number of events is $N_{\rm BNS} = \sum_i N_i$.
\item We repeat this procedure $N_{\rm repeat}$ times to generate many realizations of the BNS merger events, $\{ N_{i} \}$, for varying SFHs.
\item We then use the simulated BNS merger events and SFHs from these realizations to compute the DTD posteriors including and excluding galaxies that did not host detected BNS merger events using Equations~\ref{eqn:posterior1} and \ref{eqn:posterior2}, respectively.
\end{enumerate}
\section{Results}
\label{sec:results}
In Figure~\ref{f:priors} we show the BNS merger rate probability distribution function (PDF) of two galaxies from the GAMA survey, with SFHs chosen to peak at early and late cosmic time (right panels), for the 9 different DTD models; the vertical lines in each panel indicate the value of $\dot{n}$ for the mean SFH. We show the SFHs and merger rate PDFs for both sets of priors (middle panel: continuity; right panel: logM). The SFHs from different assumed priors often do not overlap, illustrating the challenge of inferring SFHs from photometry. In particular, the galaxy in the lower panel illustrates a specific degeneracy in which the photometry of star forming galaxies can often be fit equally well with a continuous SFH or with a large burst at $\lesssim 300$ Myr followed by a sharp drop in the star formation rate \citep{leja19}. The figure also clearly shows how the convolution of the DTDs with different SFHs leads to distinct merger rate PDFs, and how the choice of SFH prior affects the resulting merger rate PDFs.
\begin{figure}[!t]
\includegraphics[width=\columnwidth]{Contourf_Poisson_N_obs300_tmin2_gamma_-1_Nsample_20_correct_90_percentile_bad.pdf}
\caption{Same as Figure~\ref{f:figure_2}, but for just a single injected DTD (yellow circle) and a sample size of 300 host galaxies. Here we use the SFHs of only the host galaxies and ignore the galaxies that did not host BNS mergers (Equation~\ref{eqn:posterior2}). We find that the resulting DTD parameters are biased with respect to the input model, but not severely.}
\label{f:figure_3}
\end{figure}
In Figure~\ref{f:figure_2} we show the constraints achieved on the parameters of the DTD model as the number of observed host galaxies increases from 30 to 100 to 300. We show the results for three different injected DTD models and for both the logM and continuity SFH priors. The contours mark the 90\% confidence region. There are several key takeaway points from Figure~\ref{f:figure_2}. First, if the true DTD favors short merger timescales, i.e., a steep power-law slope and a short $t_{\rm min}$ (upper row of Figure \ref{f:figure_2}), the DTD parameters can be reasonably constrained with fewer than $\mathcal{O}(100)$ host galaxies. Second, for other combinations of the DTD parameters, $\mathcal{O}(300)$ host galaxies may be required to constrain the DTD, and even then a degeneracy between $\Gamma$ and $t_{\rm min}$ lingers. Third, the logM prior leads to tighter constraints on the DTD parameters than the continuity prior because it allows for bursty SFHs that pick out better-defined timescales when convolved with the DTD; the continuity prior smooths the SFH and hence systematically reduces its constraining power.
We note that the results in Figure~\ref{f:figure_2} assume knowledge of the SFHs of all potential host galaxies in the cosmic volume that contains the BNS merger events. In the case of Advanced LIGO/Virgo at design sensitivity, the luminosity distance range for BNS merger detection is about 200 Mpc, while for the planned A+ and Voyager upgrades this distance is expected to be at least twice as large. Thus, even at Advanced LIGO/Virgo sensitivity, this requires knowledge of the SFHs of $\sim 10^6$ galaxies, while for A+/Voyager this number increases to $\gtrsim 10^7$ galaxies. In Figure \ref{f:figure_3} we demonstrate the effect of neglecting the galaxies that did not host BNS mergers, using instead the SFHs of only the actual host galaxies. As expected, the resulting reconstructed DTD parameter distribution is biased with respect to the input model. However, this bias is not severe, and the resulting degenerate range of $\Gamma$ and $t_{\rm min}$ contains the ``true'' answer. This bias may be acceptable given the need to model the SFHs of only a few hundred galaxies as opposed to $\gtrsim 10^6$ galaxies.
\section{Summary and Conclusions}
\label{sec:conc}
As we have argued in Papers I and II, the DTD of BNS systems can be constrained in two primary ways using GW events: (i) using the properties of BNS merger host galaxies in the local universe, identified via an associated electromagnetic counterpart (Paper I and here); and (ii) using the BNS merger rate as a function of redshift, which requires third-generation GW detectors (Paper II). Here we expand on the method of Paper I, in which we used galaxy scaling relations to relate the mass function of BNS merger host galaxies to the parameters of the DTD. In particular, we explore the use of the actual SFHs of individual BNS merger host galaxies.
We find that the SFH reconstruction method improves on the use of scaling relations, reducing the required sample size by a factor of $\sim 3-10$, to $\mathcal{O}(100)-\mathcal{O}(300)$. The exact level of improvement depends on the choice of SFH prior, as well as on the location of the true DTD in the $\Gamma-t_{\rm min}$ parameter space. We further note that accurate reconstruction of the DTD requires knowledge of the SFHs of not only the actual host galaxies but also of the general galaxy population within the relevant cosmic volume ($\sim 3\times 10^7$ Mpc$^3$ for Advanced LIGO and an order of magnitude larger for A+). However, while using the SFHs of only the host galaxies results in a bias, we find that this bias is not severe.
With the currently allowed range of the BNS merger rate, $110-3840$ Gpc$^{-3}$ yr$^{-1}$ \citep{gwtc1}, it may take a decade or longer to collect a sample of $\mathcal{O}(100)-\mathcal{O}(300)$ host galaxies. However, there is a reasonable chance that such a sample will be available prior to the advent of third-generation GW detectors. This will allow for independent determinations of the DTD from the properties of host galaxies in the local universe (Papers I and III) and from the full merger rate redshift distribution (Paper II).
\acknowledgements
We are thankful to Enrico Ramirez-Ruiz, Evan Scannapieco, and Doug Finkbeiner for helpful discussions. This work was supported by the National Science Foundation under grant AST14-07835 and by NASA under theory grant NNX15AK82G. The Berger Time-Domain Group at Harvard is supported in part by NSF under grant AST-1714498 and by NASA under grant NNX15AE50G. J.L. is supported by an NSF Astronomy and Astrophysics Postdoctoral Fellowship under award AST-1701487. MTS is thankful to the Center for Astrophysics | Harvard \& Smithsonian for hospitality, which made this work possible.